Human-Centered Machine Learning in a Social Interaction Assistant for Individuals with Visual Impairments

Publication Type:

Conference Paper

Authors:

V. Balasubramanian, S. Chakraborty, S. Krishna, S. Panchanathan

Source:

Symposium on Assistive Machine Learning for People with Disabilities at Neural Information Processing Systems (NIPS) (2009)

Abstract:

Over the last couple of decades, the increasing focus on accessibility has resulted in the design and development of several assistive technologies to aid people with visual impairments in their daily activities. Most of these devices have been centered on enhancing the interaction of a user who is blind or visually impaired with objects and environments, such as a computer monitor, personal digital assistant, cellphone, road traffic, or a grocery store. Although these efforts are essential to the quality of life of these individuals, there is also a need, which has so far not been seriously considered, to enrich the interactions of individuals who are blind with other individuals. Non-verbal cues (including prosody, elements of the physical environment, the appearance of communicators, and physical movements) account for as much as 65% of the information communicated during social interactions [1]. However, more than 1.1 million individuals in the US who are legally blind (and 37 million worldwide) have only limited access to this fundamental aspect of social interaction, and they continue to face fundamental challenges in coping with everyday interactions in their social lives. The work described in this paper is based on the design and development of a Social Interaction Assistant intended to enrich the experience of social interactions for individuals who are blind by providing real-time access to information about individuals and their surroundings. Realizing such a device involves solving several challenging problems in pattern analysis and machine intelligence, such as person recognition/tracking, head/body pose estimation, gesture recognition, and expression recognition, on a wearable real-time platform. A list of eight significant daily challenges faced by these individuals was identified in our initial focus group studies conducted with 27 individuals who are blind or visually impaired [1]; each of these problems raises unique machine learning challenges that need to be addressed. While the problems discussed above are typically encountered in many other fields, including robotics, this application presents a unique perspective on the design of machine learning (ML) algorithms for recognition and learning: the presence of the 'human in the loop'. In addition to the challenges of implementing such systems on wearable real-time platforms, we note that the end users in this context have their cognitive capabilities intact (as we have often been reminded by our target user population). The intelligence of the human user can be judiciously used to design systems with improved reliability and performance. More importantly, these users often want to use their cognitive and decision-making capabilities at every possible opportunity to compensate for the sensory deficit. To illustrate with an example, if a system were built to recognize individuals standing in front of a user who is blind, the user may not want to receive a single answer naming the individual; rather, the user would prefer to receive a set of possible identities with their confidence levels and make the decision himself or herself. This necessitates ML algorithms and systems that are user-centric by design, leverage the cognitive capabilities of the user to solve the problem at hand, and actively support the user in decision-making rather than passively providing decisions.
We term such algorithms 'human-centered' ML algorithms. In this paper, we present a few of our efforts in machine learning that address the challenges of the Social Interaction Assistant. We first present a brief introduction to our current prototype, followed by two examples that illustrate our human-centered approach, and then briefly review other ML contributions made as part of this effort.
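To make the abstract's "set of possible identities with confidence levels" idea concrete, the following is a minimal sketch, not drawn from the paper itself: it assumes hypothetical identity names, raw recognizer scores, and a confidence threshold, and simply shows how a classifier's scores could be turned into a ranked candidate set handed to the user instead of a single decision.

```python
# Minimal sketch (not the authors' implementation): rather than reporting one
# identity, return every candidate whose softmax confidence clears a threshold,
# leaving the final decision to the user. Names, scores, and the threshold
# below are purely illustrative assumptions.
import numpy as np

def candidate_identities(scores, labels, min_confidence=0.10):
    """Convert raw classifier scores into a ranked list of (identity, confidence) pairs."""
    scores = np.asarray(scores, dtype=float)
    probs = np.exp(scores - scores.max())   # numerically stable softmax
    probs /= probs.sum()
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return [(name, round(float(p), 3)) for name, p in ranked if p >= min_confidence]

# Example: a face recognizer scores three enrolled identities.
print(candidate_identities([2.1, 1.8, -0.5], ["Alice", "Bob", "Carol"]))
# -> [('Alice', 0.551), ('Bob', 0.408)]   (Carol falls below the confidence threshold)
```

The design choice this illustrates is the one argued for in the abstract: the system surfaces its uncertainty (here, a ranked candidate set with confidence values) so that the user's own cognitive and decision-making capabilities complete the recognition task.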

Authors

Vineeth N Balasubramanian

Assistant Research Professor

Dr. Shayok Chakraborty

Assistant Research Professor, School of Computing, Informatics, and Decision Systems Engineering; Associate Director, Center for Cognitive Ubiquitous Computing (CUbiC)

Sreekar Krishna

Assistant Research Technologist

Dr. Sethuraman "Panch" Panchanathan

Director, National Science Foundation