
Dynamic Multi-View Autostereoscopy

With the advantages of motion parallax and viewing convenience, multi-view autostereoscopic displays have attracted increasing attention in recent years. Increasing the number of views improves the quality of 3D images/videos and yields better motion parallax. However, generating a large number of view images in real time requires a huge amount of computing resources. In principle, objects appearing near the screen plane have very small absolute disparity, so fewer views can be used to present them while achieving the same level of motion parallax. The concept of dynamic multi-view autostereoscopy is to dynamically control the number of views generated for points in 3D space based on their disparity: points with larger absolute disparity use more views, while points with smaller absolute disparity use fewer views. As a result, fewer computing resources are required for real-time generation of view images. Subjective assessments show only slight degradation in 3D experience when the method is realized on a 2D-plus-depth based multi-view autostereoscopic display, while the amount of computation for generating view images is reduced by about 44.3% when 3D scenes are divided into three spaces.
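As a rough sketch of the idea only (the thresholds, per-band view counts, and function names below are illustrative assumptions, not values or code from the paper), the following Python snippet partitions a disparity map into three bands by absolute disparity, assigns fewer synthesized views to the low-disparity band near the screen plane, and estimates the per-pixel view-synthesis workload relative to rendering the full view count everywhere.

```python
import numpy as np

def assign_view_counts(disparity, thresholds=(2.0, 8.0), view_counts=(9, 17, 33)):
    """Split pixels into three disparity bands ("spaces") and assign a view
    count per band. Pixels near the screen plane (small |disparity|) get
    fewer views; far-from-screen pixels get more. Threshold and view-count
    values are placeholders, not the paper's settings."""
    abs_d = np.abs(disparity)
    bands = np.digitize(abs_d, thresholds)      # 0, 1, or 2 -> three spaces
    views = np.take(view_counts, bands)         # views to synthesize per pixel
    return bands, views

def relative_rendering_cost(views, full_views=33):
    """Fraction of view-synthesis work compared with always generating full_views."""
    return views.sum() / (full_views * views.size)

# Toy example with a random disparity map; a real system would derive disparity
# from the depth channel of 2D-plus-depth content.
disparity = np.random.uniform(-12.0, 12.0, size=(1080, 1920))
bands, views = assign_view_counts(disparity)
print(f"relative rendering cost: {relative_rendering_cost(views):.1%}")
```

Under this kind of banding, the overall saving depends on how much of the scene falls into the low-disparity bands, which is consistent with the reported reduction of roughly 44.3% for a three-space division.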

Keywords: autostereoscopy; dynamic multi-view; image-based rendering

Document Type: Research Article

Publication date: January 13, 2019

This article was made available online on January 13, 2019 as a Fast Track article with title: "Dynamic multi-view autostereoscopy".
