10:00   Oral Session 1-GZ – Gesture & Emotion Recognition
Chair: Louis-Philippe Morency
10:00
25 mins
Gesture and Emotion: Can basic gestural form features discriminate emotions?
Michael Kipp, Jean-Claude Martin
Abstract: The question of how gesture and emotion are interrelated is not well covered in research. We investigate how basic gestural form features (handedness, hand shape, palm orientation, and motion direction) are related to components of emotion. We argue that material produced by actors in movies or theater stagings is particularly well suited for such analyses. Our results indicate that there may be a general association of gesture handedness with the emotional dimensions of pleasure and arousal. We discuss this and more specific findings, and conclude with possible implications and applications of this study.
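For illustration only, the kind of association analysis this abstract describes could be sketched as a chi-square test of independence between a categorical gesture feature (handedness) and a binarized emotion dimension (arousal); the data file and column names below are hypothetical, not the authors' corpus.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical annotation table: one row per annotated gesture.
annotations = pd.read_csv("gesture_annotations.csv")

# Cross-tabulate handedness (e.g., left / right / both hands) against low vs. high arousal.
high_arousal = annotations["arousal"] > annotations["arousal"].median()
table = pd.crosstab(annotations["handedness"], high_arousal)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```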
10:25
25 mins
Learning Models of Speaker Head Nods with Affective Information
Jina Lee, Alena Neviarouskaya, Helmut Prendinger, Stacy Marsella
Abstract: During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions and may also be influenced by our emotions. The goal of this work is to build a domain-independent model of speakers' head movements and to investigate the effect of using affective information during the learning process. Once the model is learned, it can later be used to generate head movements for virtual agents. In this paper, we describe our machine-learning approach to predicting speakers' head nods using an annotated corpus of face-to-face human interaction and emotion labels generated by an affect recognition model. We describe the feature selection process, the training process, and a comparison of the results of the models learned under varying conditions. The results show that using affective information can help predict head nods better than when no affective information is used.
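As an illustration of the comparison the abstract describes, the minimal sketch below trains a nod predictor with and without affective features and compares cross-validated accuracy; the data file, feature names, and classifier choice are assumptions, not the authors' actual setup.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-frame feature table with a binary "nod" label.
frames = pd.read_csv("speaker_frames.csv")
base_features = ["word_class", "phrase_boundary", "pause"]   # hypothetical lexical/prosodic features
affect_features = base_features + ["valence", "arousal"]     # hypothetical affect labels
y = frames["nod"]

for name, cols in [("without affect", base_features), ("with affect", affect_features)]:
    X = pd.get_dummies(frames[cols])   # one-hot encode categorical features
    scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```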
10:50
25 mins
Automated Classification of Gaze Direction Using Spectral Regression and Support Vector Machine
Mohammad H Mahoor, Steven Cadavid, Daniel S. Messinger, Jeffrey Cohn
Abstract: This paper presents a framework to automatically estimate the gaze direction of an infant during infant-parent face-to-face interaction. Commercial devices are sometimes used to produce automated measurements of subjects' gaze direction. This approach is intrusive, requires cooperation from the participants, and cannot be employed in interactive face-to-face communication between a parent and an infant. Alternatively, infant gazes at and away from the parent's face may be manually coded from captured video by a human expert; however, this approach is labor intensive. A preferred alternative is to automatically estimate the gaze direction of participants from captured video. The realization of such a system will help psychological scientists to readily study and understand the early attention of infants. One of the problems in eye region image analysis is the high dimensionality of the visual data. We address this problem by employing the spectral regression technique to project high-dimensional eye region images into a low-dimensional subspace. Eye region images represented in this low-dimensional subspace are used to train a Support Vector Machine (SVM) classifier to predict the gaze direction (i.e., looking at or away from the parent's face). The analysis of more than 39,000 video frames of naturalistic gaze shifts of multiple infants demonstrates significant agreement between a human coder and our approach. These results indicate that the proposed system provides an efficient approach to automating the estimation of gaze direction for naturalistic gaze shifts.
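The two-stage pipeline the abstract describes (dimensionality reduction followed by SVM classification) might be sketched as follows; PCA stands in here for the paper's spectral regression step, which has no off-the-shelf scikit-learn implementation, and the data file, image preprocessing, and parameter values are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical preprocessed data: flattened eye-region patches and binary labels
# (1 = looking at the parent's face, 0 = looking away).
data = np.load("eye_regions.npz")
X = data["patches"].reshape(len(data["labels"]), -1)
y = data["labels"]

# Reduce dimensionality, then classify with an RBF-kernel SVM.
model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```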