
Multiscale Matched Filter for Structured Light Decoding Using Sequential MAP Estimation (Open Access)

Download Article: (PDF 891.8 kb)
Structured light depth sensors work by projecting a codeword pattern, usually composed of near-infrared (NIR) light, onto a scene and measuring distortions in the light received at an NIR camera to estimate camera-projector disparities. A well-known challenge in using structured light technology for depth estimation is its sensitivity to NIR components in the ambient illumination spectrum. Although various measures are employed to increase the codeword-to-ambient-light ratio, for instance using narrow-band NIR filters and selecting a spectral band for the NIR laser where interference from ambient light is expected to be low, structured light setups usually do not work well outdoors under direct sunlight. The standard deviation of shot noise increases as the square root of the ambient-light intensity, reducing the SNR of the received codeword pattern and making the decoding process challenging. One way to improve the SNR of the received structured light pattern is to use codewords of larger spatial support. While large codewords do improve the SNR of the received pattern, the disadvantage is decreased spatial resolution of the estimated disparity field. In this paper, we model the codeword labels with a multiscale random field (MSRF) and use a Bayesian framework known as sequential MAP (SMAP) estimation, originally developed for image segmentation, to derive a novel multiscale matched filter for structured light decoding. The proposed algorithm decodes codewords at different scales and merges coarse-to-fine disparity estimates using the SMAP framework. We present experimental results demonstrating that our multiscale filter provides noise-robust decoding of the codeword patterns while preserving the spatial resolution of the decoded disparity maps.
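The SNR trade-off described in the abstract can be illustrated with a toy simulation. Under shot-noise-limited conditions the noise standard deviation grows as the square root of the ambient intensity, while a matched filter that correlates over a codeword of N pixels improves output SNR roughly as the square root of N. The sketch below is a hypothetical illustration of that scaling only, not the paper's decoder; the flat codeword, intensity values, and function name are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_snr(support, ambient, signal=1.0, trials=2000):
    """Monte-Carlo estimate of the matched-filter output SNR for a toy
    codeword of `support` pixels under shot-noise-like Gaussian noise."""
    # Shot noise: standard deviation grows as sqrt(ambient intensity).
    noise_std = np.sqrt(ambient)
    code = np.full(support, signal)         # toy flat codeword (assumption)
    template = code / np.linalg.norm(code)  # unit-norm matched filter
    outputs = []
    for _ in range(trials):
        received = code + rng.normal(0.0, noise_std, support)
        outputs.append(template @ received)
    outputs = np.asarray(outputs)
    return outputs.mean() / outputs.std()

# Larger spatial support improves decoding SNR (roughly as sqrt(support)),
# which is why large codewords help under strong ambient light -- at the
# cost of spatial resolution, motivating the multiscale approach.
snr_small = decode_snr(support=4, ambient=4.0)
snr_large = decode_snr(support=64, ambient=4.0)
```

Here `snr_large` comes out roughly four times `snr_small`, consistent with the sqrt(64/4) matched-filter gain; the multiscale SMAP decoder in the paper aims to recover this robustness without surrendering resolution.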


Document Type: Research Article

Publication date: January 1, 2018

More about this publication?
  • For more than 30 years, the Electronic Imaging Symposium has served the broad community, from academia and industry, that works on imaging science and digital technologies. The breadth of the Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color and appearance) to how we and our surrogate machines see and interpret images. Applications covered include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.

    Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.
