Source: International Conference on Systemics, Cybernetics and Informatics, Hyderabad, India (2005)
Gesture recognition has been an active research area for several decades. Pose-driven Hidden Markov Models (HMMs), in which each state corresponds to a single pose, have been the most popular technique for gesture recognition. While this approach gives reasonably good results for small sets of gestures, the human body has many degrees of freedom, allowing it to assume an essentially unlimited number of distinct poses. Since the complexity of an HMM grows with its number of states, this approach cannot be scaled to reliably recognize a large set of full-body gestures. However, all motion sequences performed by humans are constrained by the anatomy of the body. In this paper we exploit this constraint to construct a novel algorithm that models gestures as sequences of events occurring within the segments and joints of the human body. Each gesture is then represented in an event-driven HMM as a sequence of such events. The inherent advantage of an event-driven HMM over a pose-driven HMM is that no states need to be added to represent more complex gestures. The proposed model was tested on a 3D motion gesture library of 58 gestures. When the model was trained on 3 instances of each gesture and then tested on 3 other instances, it achieved an average recognition rate of 91.4%. These results indicate that the proposed method is useful for recognizing the gestures in our library and, given the inherent advantage of an event-driven model with its fixed number of states, suggest that this approach may provide a better solution to the problem of whole-body gesture recognition than traditional pose-driven methods.
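To make the event-driven idea concrete, the sketch below scores a sequence of discrete body-part events against a small HMM using the standard forward algorithm. This is not the paper's implementation: the state count, event vocabulary, and all probabilities are invented for illustration; the point is only that the observations are symbolic events (e.g. joint flexion/extension onsets) rather than full-body poses, so the state space stays fixed as gestures grow more complex.

```python
# Illustrative sketch (not the paper's implementation): a discrete HMM
# scored with the forward algorithm, where the observation alphabet is a
# small set of body-part "events" rather than whole-body poses.
# All model sizes and probabilities below are invented toy values.

import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Log-likelihood of an event-index sequence under a discrete HMM.

    pi : (S,)   initial state probabilities
    A  : (S, S) state transition probabilities
    B  : (S, V) per-state event emission probabilities
    obs: list of event indices in [0, V)
    """
    alpha = pi * B[:, obs[0]]            # initialise with the first event
    log_p = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate states, emit next event
        s = alpha.sum()                  # rescale to avoid underflow
        log_p += np.log(s)
        alpha /= s
    return log_p + np.log(alpha.sum())

# Toy 2-state model over a vocabulary of 3 events
# (e.g. 0 = "elbow_flex", 1 = "elbow_extend", 2 = "knee_flex").
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.2, 0.8]])
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])

score = forward_log_likelihood(pi, A, B, [0, 1, 2, 2])
```

In a recognizer, one such model would be trained per gesture and an observed event sequence assigned to the model with the highest log-likelihood; the rescaling inside the loop keeps long sequences numerically stable.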