Biased Manifold Embedding for Person-Independent Head Pose Estimation

Publication Type:

Conference Proceedings

Authors:

V.N. Balasubramanian, S. Panchanathan

Source:

International Conference on Computer Vision Theory and Applications (VISAPP), Barcelona, Spain, p.76-84 (2007)

Abstract:

Head pose estimation is an integral component of face recognition systems and human-computer interfaces. To determine the head pose, face images with varying pose angles can be considered to lie on a smooth low-dimensional manifold in a high-dimensional feature space. In this paper, we propose a novel supervised approach to manifold-based non-linear dimensionality reduction for head pose estimation. The Biased Manifold Embedding method is built on the idea of using the pose angle information of the face images to compute a biased geodesic distance matrix before determining the low-dimensional embedding. A Generalized Regression Neural Network (GRNN) is used to learn the non-linear mapping, and linear multivariate regression is finally applied in the low-dimensional space to obtain the pose angle. We tested this approach on face images of 24 individuals with pose angles varying from −90° to +90° at a granularity of 2°. The results showed a significant reduction in pose angle estimation error, and robustness to variations in feature spaces, embedding dimensionality and other parameters.
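The core step of the abstract's pipeline — biasing an Isomap-style geodesic distance matrix with pose labels before embedding — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the multiplicative pose-difference bias, the parameter `alpha`, and the function name `biased_geodesic_embedding` are assumptions for illustration, and the GRNN mapping and regression stages that follow in the paper are omitted.

```python
import numpy as np

def biased_geodesic_embedding(X, poses, n_neighbors=5, n_components=2, alpha=1.0):
    """Isomap-style embedding with a pose-biased geodesic distance matrix.

    Geodesic distances on a kNN graph are scaled by a normalized
    pose-angle difference before classical MDS, pulling samples with
    similar pose angles together in the embedding. The bias function
    here is a simple hypothetical choice, not the paper's definition.
    """
    X = np.asarray(X, dtype=float)
    poses = np.asarray(poses, dtype=float)
    n = len(X)

    # Pairwise Euclidean distances in feature space.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # kNN graph: keep only edges to the k nearest neighbors (symmetrized).
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[nbrs, i]

    # Geodesic distances via Floyd-Warshall shortest paths.
    for k in range(n):
        G = np.minimum(G, G[:, [k]] + G[[k], :])

    # Bias: scale geodesics by normalized pose-angle difference, so pairs
    # with very different poses are pushed apart (hypothetical bias form).
    pd = np.abs(poses[:, None] - poses[None, :])
    B = G * (1.0 + alpha * pd / pd.max())

    # Classical MDS on the biased distance matrix.
    B2 = B ** 2
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K = -0.5 * J @ B2 @ J
    w, V = np.linalg.eigh(K)
    idx = np.argsort(w)[::-1][:n_components]  # top eigenvectors
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

In the paper's full pipeline, the resulting low-dimensional coordinates would then serve as training targets for the GRNN, with linear multivariate regression recovering the pose angle from the embedded space.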

Authors

Vineeth N Balasubramanian

Assistant Research Professor

Dr. Sethuraman "Panch" Panchanathan

Executive Vice President, ASU Knowledge Enterprise; Chief Research and Innovation Officer; Director, Center for Cognitive Ubiquitous Computing (CUbiC); Foundation Chair in Computing and Informatics