
Towards Perceptually Coherent Depth Maps in 2D-to-3D Conversion (Open Access)

We propose a semi-automatic 2D-to-3D conversion algorithm that is embedded in an efficient optimization framework, i.e., cost volume filtering, which assigns pixels to depth values initialized by user-given scribbles. The proposed algorithm is capable of capturing depth changes of objects that move towards or away from the camera. We achieve this by determining a rough depth order between objects in each frame, according to the motion observed in the video, and incorporating this depth order into the depth interpolation process. In contrast to previous publications, our algorithm focuses on avoiding conflicts between the generated depth maps and monocular depth cues that are present in the video, i.e., motion-caused occlusions, and thus takes a step towards the generation of perceptually coherent depth maps. We demonstrate the capabilities of the proposed algorithm on synthetic and recorded video data and by comparison with depth ground truth. Experimental evaluations show that we obtain temporally and perceptually coherent 2D-to-3D conversions in which temporal and spatial edges coincide with edges in the corresponding input video, and that our results are competitive with existing 2D-to-3D conversion methods. Compared to the commonly used naive depth interpolation techniques, the proposed depth interpolation clearly improves the conversion results for videos containing objects that exhibit motion in depth.
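The following is a minimal sketch of the general cost-volume-filtering idea described above: scribble labels seed per-label cost slices, each slice is filtered in an edge-aware manner guided by the video frame, and depth labels are chosen per pixel by winner-take-all. The grayscale guidance, the fixed discrete label set, the plain guided filter, and all function names are assumptions for illustration; the authors' pipeline additionally enforces a motion-derived depth order, which is not modeled here.

```python
# Illustrative sketch of scribble-based depth assignment via cost volume
# filtering (not the authors' exact method).
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src`, guided by the grayscale `guide`."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_i, mean_p = box(guide), box(src)
    var_i = box(guide * guide) - mean_i * mean_i
    cov_ip = box(guide * src) - mean_i * mean_p
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return box(a) * guide + box(b)


def depth_from_scribbles(frame_gray, scribbles, num_labels=8):
    """frame_gray: HxW float in [0,1]; scribbles: HxW int, -1 = unlabeled,
    otherwise a depth label in [0, num_labels). Returns an HxW label map."""
    h, w = frame_gray.shape
    costs = np.full((num_labels, h, w), 0.5, dtype=np.float32)  # neutral cost
    for d in range(num_labels):
        costs[d][scribbles == d] = 0.0                        # scribble supports label d
        costs[d][(scribbles >= 0) & (scribbles != d)] = 1.0   # scribble contradicts it
        # Edge-aware filtering spreads scribble evidence within image edges.
        costs[d] = guided_filter(frame_gray, costs[d])
    return np.argmin(costs, axis=0)  # winner-take-all label per pixel


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((120, 160)).astype(np.float32)
    scribbles = np.full((120, 160), -1, dtype=np.int32)
    scribbles[30:40, 20:60] = 2     # hypothetical foreground scribble
    scribbles[90:100, 80:140] = 6   # hypothetical background scribble
    depth_labels = depth_from_scribbles(frame, scribbles)
    print(depth_labels.shape, depth_labels.min(), depth_labels.max())
```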

Document Type: Research Article

Publication date: 2016-02-14

More about this publication?
  • For more than 30 years, the Electronic Imaging Symposium has served the broad community, from academia and industry, that works on imaging science and digital technologies. The breadth of the Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color and appearance) to how we and our surrogate machines see and interpret images. Applications covered include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.

    Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.
