
Open Access: A blind mesh visual quality assessment method based on convolutional neural network

This paper presents a new method for no-reference mesh visual quality assessment using a convolutional neural network. We first render 2D images of the 3D mesh from multiple views. Each image is then split into small patches that are fed to a convolutional neural network. The network consists of two convolutional layers with two max-pooling layers, followed by a multilayer perceptron (MLP) with two fully connected layers that summarizes the learned representation into a single output node. With this structure, feature learning and regression jointly predict the quality score of a given distorted mesh without access to the reference mesh. Experiments conducted on the LIRIS/EPFL general-purpose database show that the proposed method achieves good correlation and competitive scores compared with influential and effective full-reference and reduced-reference methods.
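The abstract only states the high-level structure of the network. The sketch below, written in PyTorch, illustrates one way such a patch-level quality regressor could look: two convolutional layers with two max-pooling layers, then a two-layer MLP ending in a single output node. The patch size (32x32 grayscale), filter counts, kernel sizes, and hidden width are illustrative assumptions, not values taken from the paper.

# Hypothetical sketch of a patch-based quality-regression CNN matching the
# structure described in the abstract. Architecture details (patch size,
# channel widths, kernel sizes) are assumptions for illustration only.
import torch
import torch.nn as nn

class PatchQualityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # convolutional layer 1
            nn.ReLU(),
            nn.MaxPool2d(2),                              # max-pooling layer 1
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # convolutional layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),                              # max-pooling layer 2
        )
        self.regressor = nn.Sequential(                   # MLP with two fully connected layers
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128),
            nn.ReLU(),
            nn.Linear(128, 1),                            # single quality-score output node
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# Usage: score a batch of 32x32 grayscale patches cut from the rendered views;
# a mesh-level score would then be an aggregate (e.g., the mean) over its patches.
patches = torch.randn(16, 1, 32, 32)
scores = PatchQualityCNN()(patches)   # shape: (16, 1)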

Keywords: 3D mesh; Convolutional neural network; Image quality

Document Type: Research Article

Publication date: January 28, 2018

This article was made available online on January 13, 2018 as a Fast Track article with title: "A blind mesh visual quality assessment method based on convolutional neural network".

More about this publication?
  • For more than 30 years, the Electronic Imaging Symposium has served the broad community - from academia and industry - that works on imaging science and digital technologies. The Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color and appearance) to how we and our surrogate machines see and interpret images. Applications include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.

    Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.
