
Joint and Discriminative Dictionary Learning for Facial Expression Recognition

Dictionary learning and sparse coding methods have been widely used in computer vision, with applications to face and object recognition. A common challenge in expression recognition is that face similarities may confound the recognition process. One approach to this problem is to learn expression-specific dictionaries, so that each atom is associated with a single expression class. However, even with expression-specific dictionaries, two atoms from different sub-dictionaries are likely to share common characteristics due to facial similarities. In this paper, we consider a joint dictionary that captures common facial attributes, together with class-specific dictionaries that are used to classify different expressions. We investigate three dictionary learning methods for sparse representation classification: one that learns a global dictionary based on K-SVD, one that learns expression-specific dictionaries based on Fisher Discrimination Dictionary Learning (FDDL), and one that learns a shared dictionary as well as expression-specific dictionaries based on Dictionary Learning Separating Commonality and Particularity (DL-COPAR). We demonstrate the effectiveness of the shared dictionary learning approach on the extended Cohn-Kanade database, where DL-COPAR outperforms FDDL and K-SVD by a significant margin.
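To make the classification step concrete, the following is a minimal sketch of classification by per-class reconstruction residual, the general mechanism underlying the sparse-representation classifiers compared above. It is not the authors' implementation: scikit-learn's DictionaryLearning stands in for K-SVD/FDDL/DL-COPAR (none of which ship with scikit-learn), no shared dictionary is learned, and the feature matrix X_train, label vector y_train, and parameter values are hypothetical placeholders.

import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import orthogonal_mp


def learn_class_dictionaries(X_train, y_train, n_atoms=40, sparsity=5):
    """Learn one sub-dictionary per expression class from that class's samples."""
    dictionaries = {}
    for label in np.unique(y_train):
        X_c = X_train[y_train == label]          # facial features of one expression class
        dl = DictionaryLearning(n_components=n_atoms,
                                transform_algorithm='omp',
                                transform_n_nonzero_coefs=sparsity,
                                random_state=0)
        dl.fit(X_c)
        dictionaries[label] = dl.components_.T   # shape (n_features, n_atoms)
    return dictionaries


def classify(x, dictionaries, sparsity=5):
    """Assign x to the class whose sub-dictionary reconstructs it with the smallest residual."""
    best_label, best_residual = None, np.inf
    for label, D in dictionaries.items():
        code = orthogonal_mp(D, x, n_nonzero_coefs=sparsity)  # sparse code of x over D
        residual = np.linalg.norm(x - D @ code)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label

In the DL-COPAR setting described above, a shared dictionary of common facial attributes would additionally be learned, so that identity-related structure is explained by the shared atoms and only the expression-specific residuals drive the class decision.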

Document Type: Research Article

Publication date: February 14, 2016

More about this publication?
  • For more than 30 years, the Electronic Imaging Symposium has served those in the broad community, from academia and industry, who work on imaging science and digital technologies. The breadth of the Symposium covers the entire imaging science ecosystem, from capture (sensors, cameras) through image processing (image quality, color and appearance) to how we and our surrogate machines see and interpret images. Applications covered include augmented reality, autonomous vehicles, machine vision, data analysis, digital and mobile photography, security, virtual reality, and human vision. IS&T began sole sponsorship of the meeting in 2016. All papers presented at EI's 20+ conferences are open access.

    Please note: For purposes of its Digital Library content, IS&T defines Open Access as papers that will be downloadable in their entirety for free in perpetuity. Copyright restrictions on papers vary; see individual paper for details.
