Dictionary learning and sparse coding methods have been widely used in computer vision, with applications to face and object recognition. A common challenge in expression recognition is that similarities between faces may confound the recognition process. One approach to this problem is to learn expression-specific dictionaries, so that each atom corresponds to one expression class. However, even with expression-specific dictionaries, two atoms from two sub-dictionaries are likely to share common characteristics due to facial similarities.
In this paper, we consider a joint dictionary that captures common facial attributes, together with class-specific dictionaries that are used to classify different expressions. We investigate three dictionary learning methods for sparse representation classification: one that learns a global dictionary based on K-SVD, one that learns expression-specific dictionaries based on Fisher Discrimination Dictionary Learning (FDDL), and one that learns a shared dictionary as well as expression-specific dictionaries based on Dictionary Learning Separating Commonality and Particularity (DL-COPAR). We demonstrate the effectiveness of the shared dictionary learning approach on the Extended Cohn-Kanade (CK+) database, where DL-COPAR outperforms FDDL and K-SVD by a significant margin.
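The classification principle underlying all three methods can be illustrated with a minimal sketch: code a test sample against each class's sub-dictionary and assign the class with the smallest sparse reconstruction error. The sketch below uses synthetic data (not CK+) and, for simplicity, takes the training samples themselves as unit-norm atoms rather than learning them with K-SVD, FDDL, or DL-COPAR; the class count, dimensions, and sparsity level are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)

# Synthetic stand-in for vectorized face images: each "expression" class
# lies in its own low-dimensional subspace (hypothetical data, not CK+).
d, k = 30, 4                                   # ambient dim, subspace dim
bases = [rng.standard_normal((d, k)) for _ in range(2)]
train = [B @ rng.standard_normal((k, 20)) for B in bases]

# Per-class sub-dictionaries with unit-norm atoms. Here the atoms are just
# the training samples; a learned method would replace this with a compact
# set of discriminative (and, for DL-COPAR, shared + particular) atoms.
dicts = [X / np.linalg.norm(X, axis=0) for X in train]

def classify(y, dicts, n_nonzero=4):
    """Sparse-representation classification: code y against each
    sub-dictionary via OMP and pick the class whose sparse
    reconstruction error is smallest."""
    errs = []
    for D in dicts:
        a = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)
        errs.append(np.linalg.norm(y - D @ a))
    return int(np.argmin(errs))

# A fresh sample drawn from class 1's subspace is reconstructed almost
# perfectly by class 1's atoms, so it is assigned to class 1.
y = bases[1] @ rng.standard_normal(k)
print(classify(y, dicts))
```

The per-class residual comparison is what FDDL and DL-COPAR refine: FDDL makes the sub-dictionaries discriminative during learning, while DL-COPAR additionally factors out a shared dictionary so that the class-specific residuals are not dominated by attributes common to all faces.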
Document Type: Research Article
February 14, 2016