3D cameras that can capture range information, in addition to color information, are increasingly prevalent in the consumer marketplace and available in many consumer mobile imaging platforms. An interesting and important application enabled by 3D cameras is photogrammetry, where the
physical distance between points can be computed from captured imagery. However, for consumer photogrammetry to succeed in the marketplace, it must meet users' real-world expectations for accuracy and consistency and perform well under challenging lighting conditions, varying object-to-camera
distances, etc. These requirements are exceedingly difficult to meet because range data are inherently noisy, especially when passive stereo or multi-camera systems are used for range estimation. In this paper, we present a novel and robust algorithm for point-to-point 3D measurement
using range camera systems. Our algorithm exploits the intuition that users often specify the end points of an object of interest for measurement and that the line connecting the two points belongs to the same object. We analyze the 3D structure of the points along this line
using robust PCA and improve measurement accuracy by fitting the endpoints to this model before computing the measurement. We also handle situations that violate this assumption, where users attempt to measure a gap, such as the span between the arms of a sofa or the width of a doorway. Finally, we test the
performance of our proposed algorithm on a dataset of over 1800 measurements collected by humans on the Dell Venue 8 tablet with Intel RealSense Snapshot technology. Our results show significant improvements in both accuracy and consistency of measurement, which is critical in making consumer
photogrammetry a reality in the marketplace.
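The core idea sketched in the abstract, snapping noisy user-selected endpoints onto a robustly fitted 3D line before measuring, can be illustrated with a minimal numpy example. This is an assumption-laden sketch, not the authors' implementation: it stands in "robust PCA" with a simple trimmed-PCA line fit (iteratively drop the points farthest from the fitted line), and the function names and parameters (`robust_line_fit`, `keep_frac`) are invented for illustration. The gap-detection case (sofa arms, doorways) is not handled here.

```python
import numpy as np

def robust_line_fit(points, n_iter=3, keep_frac=0.8):
    """Fit a 3D line to noisy samples by trimmed PCA (a simple
    robust-PCA stand-in): fit, drop the farthest points, refit.
    Returns (centroid, unit direction) of the fitted line."""
    pts = np.asarray(points, dtype=float)
    kept = pts
    for _ in range(n_iter):
        centroid = kept.mean(axis=0)
        # principal direction = top right-singular vector of centered points
        _, _, vt = np.linalg.svd(kept - centroid, full_matrices=False)
        direction = vt[0]
        # perpendicular distance of every original point to the current line
        diff = pts - centroid
        proj = diff @ direction
        resid = np.linalg.norm(diff - np.outer(proj, direction), axis=1)
        # keep the closest keep_frac fraction for the next fit
        idx = np.argsort(resid)[: max(3, int(keep_frac * len(pts)))]
        kept = pts[idx]
    return centroid, direction

def measure(p1, p2, line_points):
    """Project noisy endpoints onto the robustly fitted line,
    then return the distance between the projections."""
    c, d = robust_line_fit(line_points)
    q1 = c + ((np.asarray(p1, float) - c) @ d) * d
    q2 = c + ((np.asarray(p2, float) - c) @ d) * d
    return float(np.linalg.norm(q1 - q2))
```

For example, with 3D points sampled along a 1 m edge (plus noise and a few range outliers), endpoints tapped slightly off the edge are pulled back onto the fitted line, so the reported length stays close to 1 m despite the perturbations.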
Document Type: Research Article
Publication Date: January 29, 2017