Volumetric video is becoming easier to capture and display thanks to recent developments in acquisition and display technologies. Point clouds are a popular representation for volumetric video in augmented and virtual reality applications. This representation, however, requires a large number of points to achieve a high quality of experience, and it must be compressed before storage and transmission. In this paper, we study subjective and objective quality assessment for volumetric video compressed with a state-of-the-art algorithm, the MPEG Point Cloud Compression Test Model Category 2 (TMC2). We conduct subjective experiments to measure the perceptual impact of different quantization parameters and input point counts on compressed volumetric video. Additionally, we examine how well state-of-the-art objective quality metrics correlate with the collected subjective scores. To the best of our knowledge, this study is the first to consider TMC2 compression for volumetric video represented as coloured point clouds and to study its effect on perceived quality. The results show that the input point count has no significant effect on TMC2 compression quality, and that some geometry distortion metrics disagree with the perceived quality. The developed database is publicly available to promote the study of volumetric video compression.
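To make the notion of an objective geometry distortion metric concrete, the sketch below computes a symmetric point-to-point (often called D1) PSNR between a reference and a degraded point cloud. This is an illustrative brute-force implementation, not the MPEG evaluation software; the `peak` parameter and the use of the maximum of the two directional errors are common conventions but are assumptions here, and real tools use spatial indexing rather than exhaustive search.

```python
import math

def nearest_sq_dist(p, cloud):
    # Brute-force squared distance from point p to its nearest neighbour in cloud.
    return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2
               for q in cloud)

def d1_psnr(ref, deg, peak):
    # Mean squared point-to-point error in each direction.
    mse_ref_to_deg = sum(nearest_sq_dist(p, deg) for p in ref) / len(ref)
    mse_deg_to_ref = sum(nearest_sq_dist(q, ref) for q in deg) / len(deg)
    # Symmetrize by taking the worse (larger) of the two directional errors.
    mse = max(mse_ref_to_deg, mse_deg_to_ref)
    if mse == 0:
        return float("inf")  # identical clouds
    return 10 * math.log10(peak ** 2 / mse)
```

For example, two identical clouds yield an infinite PSNR, while shifting every point of a cloud by one unit along one axis with `peak = 10` gives a PSNR of 20 dB. Colour metrics work analogously, comparing attribute values of nearest-neighbour pairs instead of coordinates.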
Keywords: 3D visual representation; Coloured point cloud; Objective quality metrics; Subjective quality assessment; Volumetric video compression
Document Type: Research Article
January 13, 2019
This article was made available online on January 13, 2019, as a Fast Track article with the title "Subjective and objective quality assessment for volumetric video compression".