
Open Access: Constructing glossiness perception model of computer graphics with sounds

In this paper, we construct a model of cross-modal glossiness perception by investigating the interaction between sounds and graphics. First, we conduct evaluation experiments on cross-modal glossiness perception using sound and graphics stimuli. The experiments use three types of stimuli: visual (22 stimuli), auditory (15 stimuli), and audiovisual (330 stimuli), and consist of three sections: a visual experiment, an audiovisual experiment, and an auditory experiment. Glossiness is rated using the magnitude estimation method. Second, we analyze the influence of sounds on glossiness perception from the experimental results. The results suggest that cross-modal glossiness perception can be represented as a combination of visual-only and auditory-only perception. Based on these results, we construct a model as a linear sum of computer graphics and sound parameters. Finally, we confirm the feasibility of the cross-modal glossiness perception model through a validation experiment.
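The abstract describes the model as a linear sum of visual and auditory terms. As a minimal illustrative sketch (not the paper's actual data, weights, or parameterization), the idea of fitting such a linear combination from magnitude-estimation ratings could look like this, here using ordinary least squares on synthetic stand-in ratings:

```python
# Illustrative sketch of a linear-sum cross-modal model: the audiovisual
# glossiness rating is modeled as a weighted sum of visual-only and
# auditory-only ratings. All data, weights, and variable names here are
# hypothetical, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for mean magnitude-estimation ratings per stimulus.
g_visual = rng.uniform(1.0, 10.0, size=50)   # visual-only ratings
g_audio = rng.uniform(1.0, 10.0, size=50)    # auditory-only ratings
# Assume the audiovisual rating is roughly a linear combination plus noise.
g_audiovisual = 0.8 * g_visual + 0.3 * g_audio + rng.normal(0.0, 0.2, 50)

# Fit weights (w_v, w_a) and intercept b by ordinary least squares.
X = np.column_stack([g_visual, g_audio, np.ones_like(g_visual)])
(w_v, w_a, b), *_ = np.linalg.lstsq(X, g_audiovisual, rcond=None)

prediction = w_v * g_visual + w_a * g_audio + b
rmse = np.sqrt(np.mean((prediction - g_audiovisual) ** 2))
print(f"w_v={w_v:.2f}, w_a={w_a:.2f}, b={b:.2f}, rmse={rmse:.2f}")
```

The fitted weights would indicate the relative contribution of each modality to perceived glossiness; the paper's own model is constructed from its experimental results rather than synthetic data.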

Keywords: Audiovisual; Computer graphics; Cross-modal; Glossiness; Modeling; Shitsukan perception

Document Type: Research Article

Publication date: January 13, 2019

This article was made available online on January 13, 2019 as a Fast Track article with title: "Constructing glossiness perception model of computer graphics with sounds".

More about this publication?
  • For more than 30 years, the Electronic Imaging Symposium has served the broad community, from academia and industry, working on imaging science and digital technologies. The Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color, and appearance) to how we and our surrogate machines see and interpret images. Applications covered include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.

    Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.
