Multiple Cue Integration in Transductive Confidence Machines for Head Pose Classification

Publication Type: Conference Proceedings


V. Balasubramanian, S. Chakraborty, S. Panchanathan


IEEE CVPR 2008 Workshop on Online Learning for Classification, Anchorage, Alaska (2008)


An important facet of learning in an online setting is the confidence associated with a prediction on a given test data point. In an online learning scenario, the system is expected to grow more confident in its predictions as training data accumulates. In this work, we present a statistical approach to associating a confidence value with a predicted class label in an online learning scenario. Our work builds on Transductive Confidence Machines (TCM), which provide a methodology for defining a heuristic confidence measure. We apply this approach to the problem of head pose classification from face images, and extend the framework to compute a confidence value when multiple cues are extracted from the images to perform classification. Our approach combines the results of multiple hypotheses to obtain an integrated p-value for validating a single test hypothesis. Experiments on the widely used FERET database corroborate the significance of confidence measures, particularly in online learning approaches. Our results with transductive learning indicate that using confidence measures in online learning can yield significant gains in prediction accuracy, which would be very useful in critical pattern recognition applications.
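The abstract's core mechanism can be illustrated with a small sketch. The paper's exact nonconformity measure and cue-integration rule are not specified here, so the code below is only a minimal illustration under stated assumptions: `nonconformity` uses a standard 1-nearest-neighbor score (ratio of the distance to the nearest same-class example to the distance to the nearest other-class example), `tcm_p_value` computes the transductive p-value of a tentatively labeled test point, and `combine_cue_p_values` uses twice-the-mean, a simple conservative combiner, as a hypothetical stand-in for the paper's integrated p-value over multiple cues.

```python
import numpy as np

def nonconformity(X, y, i):
    # 1-NN nonconformity score (an assumed choice, not necessarily
    # the paper's): distance to nearest same-class example divided
    # by distance to nearest different-class example.
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf  # exclude the point itself
    same = d[y == y[i]].min()
    other = d[y != y[i]].min()
    return same / (other + 1e-12)

def tcm_p_value(X_train, y_train, x_test, label):
    # Transductive step: tentatively assign `label` to the test
    # point, then measure how extreme its nonconformity score is
    # relative to all examples in the augmented set.
    X = np.vstack([X_train, x_test])
    y = np.append(y_train, label)
    scores = np.array([nonconformity(X, y, i) for i in range(len(y))])
    return float(np.mean(scores >= scores[-1]))

def combine_cue_p_values(pvals):
    # Illustrative integration of per-cue p-values: twice the
    # arithmetic mean is a valid (if conservative) combined
    # p-value; the paper's actual integration rule may differ.
    return min(1.0, 2.0 * float(np.mean(pvals)))
```

In use, one would compute `tcm_p_value` once per candidate label and per cue, integrate the per-cue p-values for each label, and predict the label with the highest integrated p-value; the gap between the top two p-values then serves as a confidence measure.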


Vineeth N Balasubramanian

Assistant Research Professor

Dr. Shayok Chakraborty

Assistant Research Professor, School of Computing, Informatics, and Decision Systems Engineering; Associate Director, Center for Cognitive Ubiquitous Computing (CUbiC)

Dr. Sethuraman "Panch" Panchanathan

Director, National Science Foundation

