
Depth from stacked light field images using generative adversarial network (Open Access)

The estimated depth map provides valuable information for many computer vision applications such as autonomous driving, semantic segmentation, and 3D object reconstruction. Since a light field camera captures both the spatial and angular components of light rays, a depth map can be estimated from these properties of the light field image. However, depth estimation from light field images is limited by the short baseline and low resolution of the captured views. Although many approaches have been developed, they still suffer from high computational cost and limited depth accuracy. In this paper, we propose a network-based light field depth estimation technique using epipolar plane images (EPIs). Since the light field image consists of many sub-aperture images arranged on a 2D angular plane, we can stack the sub-aperture images in different directions to handle the occlusion problem. However, the commonly used light field sub-aperture images are not numerous enough to construct a large dataset. To increase the number of sub-aperture images available for stacking, we train the network with augmented light field datasets. To demonstrate the effectiveness of our approach, we perform an extensive experimental evaluation on synthetic and real light field scenes. The experimental results show that our approach outperforms other depth estimation techniques.
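To illustrate the idea of stacking sub-aperture images along different angular directions, the following is a minimal Python/NumPy sketch. The array shapes, the chosen directions, and the `light_field` placeholder are illustrative assumptions, not the authors' implementation or data.

```python
import numpy as np

# Illustrative sketch (not the authors' code): a 4D light field with
# angular resolution (U, V) and spatial resolution (H, W), single channel.
U, V, H, W = 9, 9, 64, 64
light_field = np.random.rand(U, V, H, W).astype(np.float32)  # placeholder data

def stack_sub_apertures(lf, direction):
    """Stack sub-aperture images along one angular direction through the
    central view. Returns an array of shape (num_views, H, W)."""
    U, V, _, _ = lf.shape
    cu, cv = U // 2, V // 2
    if direction == "horizontal":      # fix the central angular row, vary v
        views = [lf[cu, v] for v in range(V)]
    elif direction == "vertical":      # fix the central angular column, vary u
        views = [lf[u, cv] for u in range(U)]
    elif direction == "diagonal":      # vary u and v together
        views = [lf[i, i] for i in range(min(U, V))]
    else:
        raise ValueError(f"unknown direction: {direction}")
    return np.stack(views, axis=0)

# Stacks taken in different directions expose occlusion boundaries
# differently, so each can be fed to the network as a separate input volume.
stacks = {d: stack_sub_apertures(light_field, d)
          for d in ("horizontal", "vertical", "diagonal")}
for name, s in stacks.items():
    print(name, s.shape)   # e.g. horizontal (9, 64, 64)

# Slicing a horizontal stack at a fixed image row yields an epipolar plane
# image (EPI), whose line slopes encode disparity and hence depth.
epi = stacks["horizontal"][:, H // 2, :]   # shape (V, W)
```

In practice the stacked volumes (or EPIs extracted from them) would be used as network input; the random placeholder data above only serves to show the indexing and the resulting shapes.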

Keywords: Adversarial network; Depth estimation; Light field; Occlusion area

Document Type: Research Article

Publication date: January 13, 2019

This article was made available online on January 13, 2019 as a Fast Track article with the title "Depth from stacked light field images using generative adversarial network".

More about this publication?
  • For more than 30 years, the Electronic Imaging Symposium has been serving the broad community, from academia and industry, that works on imaging science and digital technologies. The breadth of the Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color and appearance) to how we and our surrogate machines see and interpret images. Applications covered include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.

    Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.
