This work presents the results of a psychophysical experiment in which forty human participants rated the overall quality of a set of 40 high-definition audio-visual sequences. The sequences were impaired with audio and video distortions commonly encountered in Internet-based transmission scenarios: packet-loss and frame-freezing distortions were added to the video component, while background-noise, chop, clipping, and echo distortions were added to the audio component. Our goal was to study how audio and visual degradations interact with each other and with the content to produce the overall audio-visual quality. An immersive experimental methodology was used to obtain more accurate observer scores. Preliminary results show that the audio and video degradations interact with each other to produce the overall audio-visual quality. Among the audio degradations, clipping received slightly lower quality scores; among the video degradations, frame freezing was rated higher. Also, audio degradations had a stronger impact on the audio-visual quality when combined with packet loss.
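The analysis described above presumably aggregates raw observer ratings into per-condition mean opinion scores (MOS) for each audio/video degradation pair. The sketch below illustrates that aggregation step with purely hypothetical ratings and condition names; the actual scale, conditions, and data are assumptions, not taken from the study.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical observer ratings: (audio_degradation, video_degradation, score).
# Scores here use an assumed 1-5 quality scale.
ratings = [
    ("clipping", "packet-loss", 2.0),
    ("clipping", "packet-loss", 2.5),
    ("echo", "frame-freezing", 3.5),
    ("echo", "frame-freezing", 4.0),
]

def mos_per_condition(ratings):
    """Group raw scores by (audio, video) degradation pair and average them."""
    groups = defaultdict(list)
    for audio, video, score in ratings:
        groups[(audio, video)].append(score)
    return {condition: mean(scores) for condition, scores in groups.items()}

print(mos_per_condition(ratings))
```

Comparing the resulting MOS values across conditions is one simple way to surface the cross-modal interactions the abstract reports, e.g. whether a given audio degradation scores lower when paired with packet loss than with frame freezing.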
Keywords: Immersive experimental methodologies; Quality of Experience (QoE)
Document Type: Research Article
Publication date: January 13, 2019
This article was made available online on January 13, 2019 as a Fast Track article with title: "Analyzing the influence of cross-modal IP-based degradations on the perceived audio-visual quality".
For more than 30 years, the Electronic Imaging Symposium has been serving those in the broad community, from academia and industry, who work on imaging science and digital technologies. The breadth of the Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color and appearance) to how we and our surrogate machines see and interpret images. Applications covered include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.
Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.