With the provision of motion parallax and viewing convenience, multi-view autostereoscopic displays have become popular in recent years. Increasing the number of views improves the quality of 3D images/videos and produces better motion parallax. The tradeoff is that a larger number of view images must be generated in real time, which demands a huge amount of computing resources. In practice, viewers tend to focus on the distinctive objects in a scene. To achieve the same level of motion parallax, more views can be used to present distinctive objects and fewer views for the rest. As a result, fewer computing resources are required for rendering multi-view images. Exploiting this principle, a new multi-view rendering scheme based on visual saliency is proposed for autostereoscopic displays. The new method uses saliency maps to extract distinctive regions with different saliency levels in a scene and dynamically controls the number of views generated for them. Points in regions with high saliency use more views, while points in regions with low saliency use fewer views. By controlling the number of views in use for different salient regions, the proposed scheme maintains low computation complexity without causing significant degradation of the 3D experience. In this paper, a 2D+Z format based multi-view rendering system using saliency maps is presented to illustrate the feasibility of the new scheme. Subjective assessment results demonstrate that the saliency map based multi-view system shows only slight degradation in 3D performance compared with a true 28-view system while achieving a 55% reduction in computation complexity.
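The core idea of the scheme, that regions with higher saliency receive more rendered views, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the quantization into saliency bands, the minimum view count, and the toy Gaussian saliency map are all assumptions made for demonstration.

```python
import numpy as np

def views_per_region(saliency, max_views=28, min_views=4, levels=4):
    """Map normalized saliency values in [0, 1] to per-point view counts.

    Saliency is quantized into `levels` bands; the most salient band
    renders all `max_views` views, the least salient only `min_views`.
    All names and thresholds here are illustrative, not from the paper.
    """
    band = np.minimum((saliency * levels).astype(int), levels - 1)
    step = (max_views - min_views) / (levels - 1)
    return (min_views + band * step).astype(int)

# Toy saliency map: one salient blob at the centre of a flat background.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
saliency = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * 10.0 ** 2))

views = views_per_region(saliency)
full_cost = h * w * 28                 # renderings done by a true 28-view system
saved = 1.0 - views.sum() / full_cost  # fraction of view renderings avoided
print(f"view renderings saved: {saved:.0%}")
```

Because most of the toy scene is non-salient, the bulk of its points fall into the lowest band and are rendered with only a few views, which is where the computational savings reported in the abstract come from.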
Document Type: Research Article
This article was made available online on January 13, 2019 as a Fast Track article with title: "Saliency map based multi-view rendering for autostereoscopic displays".
For more than 30 years, the Electronic Imaging Symposium has been serving those in the broad community, from academia and industry, who work on imaging science and digital technologies. The breadth of the Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color and appearance) to how we and our surrogate machines see and interpret images. Applications covered include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.
Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.