
Open Access: Depth-map estimation using combination of global deep network and local deep random forest

This study proposes a robust 3D depth-map generation algorithm that operates on a single image. Unlike previous related works that estimate a global depth map using deep neural networks, this study uses the global and local features of the image together, so that local changes are reflected in the depth map rather than relying on global features alone. A coarse-scale network is designed to predict the global, coarse depth-map structure from a global view of the scene, and a finer-scale random forest (RF) is designed to refine the depth map based on a combination of the original image and the coarse depth map. As the first step, we use a partial structure of the multi-scale deep network (MSDN) to predict the depth of the scene at a global level. As the second step, we propose a local patch-based deep RF to estimate local depth and smooth the noise of the local depth map by combining it with the MSDN global-coarse network. The proposed algorithm was successfully applied to various single images and yielded more accurate depth-map estimation than other existing methods.
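The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the coarse global predictor is replaced by a hypothetical downsample-and-upsample placeholder standing in for the MSDN coarse network, and the refinement stage uses scikit-learn's `RandomForestRegressor` on per-pixel features built from a local image patch concatenated with the corresponding coarse-depth patch.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def coarse_global_depth(image, scale=4):
    # Placeholder for the MSDN global-coarse network (hypothetical):
    # downsample the image, then upsample back with nearest-neighbor
    # replication to mimic a low-resolution global depth prediction.
    h, w = image.shape
    small = image[::scale, ::scale]
    return np.kron(small, np.ones((scale, scale)))[:h, :w]

def extract_patch_features(image, coarse, size=5):
    # Per-pixel feature vector: a local image patch concatenated with
    # the matching coarse-depth patch, as in the paper's combination
    # of original image and coarse depth map.
    pad = size // 2
    img_p = np.pad(image, pad, mode="edge")
    crs_p = np.pad(coarse, pad, mode="edge")
    h, w = image.shape
    feats = []
    for i in range(h):
        for j in range(w):
            feats.append(np.concatenate([
                img_p[i:i + size, j:j + size].ravel(),
                crs_p[i:i + size, j:j + size].ravel(),
            ]))
    return np.array(feats)

# Synthetic training pair (stand-in for real image/ground-truth depth data).
rng = np.random.default_rng(0)
image = rng.random((16, 16))
true_depth = np.cumsum(np.ones_like(image), axis=0) / 16.0  # depth gradient

coarse = coarse_global_depth(image)                 # step 1: global coarse map
X = extract_patch_features(image, coarse)           # step 2: local patch features
y = true_depth.ravel()

rf = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)
refined = rf.predict(X).reshape(image.shape)        # refined local depth map
print(refined.shape)  # (16, 16)
```

In the actual method the coarse map would come from the trained MSDN sub-network and the RF would be trained on real image/depth pairs; the sketch only shows how the coarse prediction and local patches are combined into the refinement features.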

Keywords: 3D depth-map; deep neural networks; finer-scale network; global depth-map; multi-scale deep network

Document Type: Research Article

Publication date: January 13, 2019

This article was made available online on January 13, 2019 as a Fast Track article with title: "Depth-map estimation using combination of global deep network and local deep random forest".

More about this publication?
  • For more than 30 years, the Electronic Imaging Symposium has been serving those in the broad community, from academia and industry, who work on imaging science and digital technologies. The breadth of the Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color and appearance) to how we and our surrogate machines see and interpret images. Applications covered include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.

    Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.
