
Light-Field Appearance Editing based on Intrinsic Decomposition
Abstract
The authors present a framework for image-based surface appearance editing of light-field data. The framework improves over the state of the art without requiring a full "inverse rendering," so neither complete geometric data nor the presence of highly specular or reflective surfaces is needed. It is robust to noisy or missing data and handles many types of camera-array setups, ranging from a dense light field to a wide-baseline stereo-image pair. The method first extracts intrinsic layers from the light-field image set while maintaining consistency between views. Each layer is then decomposed into frequency bands, to which a wide range of "band-sifting" operations is applied. This approach enables a rich variety of perceptually plausible surface finishes and materials, achieving novel effects such as translucency. A GPU-based implementation allows interactive editing of an arbitrary light-field view, and the edits can then be consistently propagated to the remaining views. The authors provide an extensive evaluation of the framework on various datasets and against state-of-the-art solutions.
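The abstract outlines a two-stage pipeline: an intrinsic decomposition that splits each view I into an albedo (reflectance) layer A and a shading layer S, following the standard model I = A · S, and then a frequency-band decomposition of each layer followed by "band-sifting" edits. The paper's exact operators are not reproduced here; the sketch below only illustrates the general band-sifting idea on a single shading layer, assuming a difference-of-Gaussians band stack in the log domain with sign-selective scaling. The function name, parameters, and sigma schedule are illustrative assumptions, not the authors' interface.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_sift(shading, sigmas=(1, 2, 4, 8, 16),
              select_sign="pos", gain=2.0, eps=1e-6):
    """Illustrative band-sifting sketch (not the paper's code).

    Splits the log-shading layer into difference-of-Gaussians
    frequency bands, scales coefficients of the chosen sign in
    every band, and reconstructs by summing the sifted bands
    plus the low-pass residual.
    """
    log_s = np.log(np.clip(shading, eps, None))           # work in log domain
    levels = [log_s] + [gaussian_filter(log_s, s) for s in sigmas]
    bands = [levels[i] - levels[i + 1] for i in range(len(sigmas))]
    residual = levels[-1]                                 # low-pass remainder

    out = residual.copy()
    for band in bands:
        mask = band > 0 if select_sign == "pos" else band < 0
        out += np.where(mask, gain * band, band)          # sift and scale
    return np.exp(out)
```

Under this scheme, boosting positive high-frequency shading coefficients (gain > 1) tends to emphasize specular-looking detail for a glossier finish, while attenuating them (gain < 1) flattens and smooths the apparent surface; per the abstract, such per-layer operations are applied to one view and then propagated consistently to the rest of the light field.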
Document Type: Research Article
Publication date: November 21, 2018
This article was made available online on September 28, 2018, as a Fast Track article with the title: "Light-Field Appearance Editing based on Intrinsic Decomposition".
Journal of Perceptual Imaging (JPI) is an open access, peer-reviewed publication of the Society for Imaging Science and Technology (IS&T). JPI publishes research in perception and cognition that supports or is inspired by imaging and visualization technologies and applications. Papers cover imaging science, technology, and art that are influenced by research in human perception, cognition, and neuroscience, including algorithms, evaluation methods, and innovations, as well as art and psychology that address sensory representation, processing, and understanding. Experimental, theoretical, and survey papers are welcome. Please visit the publication website for a list of topics.
Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.