16:01   Poster Session 1 – Models of Emotion & Emotion Recognition – 1st Balcony
Chairs: Bart van Straalen and Anton Nijholt
EEG-Based Emotion Recognition Using Hybrid Filtering and Higher Order Crossings
Panagiotis Petrantonakis, Leontios Hadjileontiadis
Abstract: EEG-based emotion recognition is a relatively new research field in the Human-Computer Interaction area, aiming at the implementation of new algorithms that identify and recognize emotions from electroencephalogram (EEG) signals. Towards this end, a novel method is presented in this paper that employs an optimized hybrid filter, using Empirical Mode Decomposition (EMD) and Genetic Algorithms (GA), in order to isolate the Intrinsic Mode Functions (IMFs) that carry the bulk of the energy content of the initial signal for classification. The filtered signal is reconstructed from the selected IMFs and subjected to Higher Order Crossings (HOC) analysis for feature extraction. The final feature vector is classified into six emotion classes, i.e., happiness, anger, fear, disgust, sadness, and surprise, using Quadratic Discriminant Analysis. The high classification performance (84.72% maximum mean classification rate) demonstrates the efficiency of the proposed EEG-based emotion recognition approach.
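The following minimal sketch (an editorial illustration, not the authors' implementation) shows the core HOC idea: count zero crossings of successively differenced (high-pass filtered) versions of a signal and feed the resulting feature vector to a quadratic discriminant classifier. The EMD/GA hybrid filtering step is omitted, and scikit-learn plus synthetic data are assumed.

    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    def hoc_features(x, order=10):
        """Higher Order Crossings: zero-crossing counts of successively
        differenced (high-pass filtered) versions of a zero-mean signal."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        feats = []
        for _ in range(order):
            signs = np.signbit(x).astype(np.int8)
            feats.append(int(np.count_nonzero(np.diff(signs))))  # zero crossings
            x = np.diff(x)  # apply the next difference (high-pass) filter
        return np.array(feats, dtype=float)

    # Synthetic stand-ins: `epochs` would hold band-filtered EEG trials,
    # `labels` the six emotion classes.
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((120, 512))
    labels = rng.integers(0, 6, size=120)

    X = np.vstack([hoc_features(e) for e in epochs])
    clf = QuadraticDiscriminantAnalysis().fit(X, labels)
    print(clf.score(X, labels))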
Exploring Relations between Cognitive Style and Multimodal Expression of Emotion in a TV Series Corpus
Céline Clavel, Jean-Claude Martin
Abstract: For virtual characters to be believable when expressing emotion, researchers are trying to endow them with a personality, focusing mostly on lexical approaches from psychology. Whereas multimodal corpora are being developed to inform the definition of models relating emotions and their expression in different modalities, they seldom make it possible to study the impact of personality on the way individuals appraise various emotional situations. In this paper we explain how we collected a TV series corpus that is relevant for the study of cognitive styles. We describe how subjects perceive multimodal expressions of emotion and personality, and whether there are links between personality and emotional expressions.
PAD-based Multimodal Affective Fusion
Stephen William Gilroy, Marc Cavazza, Markus Niranen, Elisabeth André, Thurid Vogt, Jérôme Urbain, Maurice Benayoun, Hartmut Seichter, Mark Billinghurst
Abstract: The study of multimodality is less developed for affective interfaces than for their traditional counterparts. However, one condition for the successful development of affective interface technologies is the development of frameworks for real-time multimodal fusion. In this paper, we describe an approach to multimodal affective fusion which relies on a PAD dimensional model to support the fusion of affective modalities, each input modality being represented as a PAD vector. We describe how this model supports both affective content fusion and temporal fusion within a unified approach. We report results from early user studies which confirm the existence of a correlation between measured affective input and user temperament scores.
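As an editorial illustration of the dimensional-fusion idea (not the authors' framework), the sketch below combines per-modality Pleasure-Arousal-Dominance vectors by confidence weighting, with an exponential decay standing in for temporal fusion; all weights, timestamps and the half-life constant are assumptions.

    import numpy as np

    def fuse_pad(observations, now, half_life=5.0):
        """Fuse per-modality PAD vectors into a single estimate.

        observations: list of (timestamp, pad_vector, confidence) tuples,
        one per modality reading; older readings are down-weighted by an
        exponential decay as a crude form of temporal fusion."""
        num, den = np.zeros(3), 0.0
        for t, pad, conf in observations:
            w = conf * 0.5 ** ((now - t) / half_life)
            num += w * np.asarray(pad, dtype=float)
            den += w
        return num / den if den > 0 else np.zeros(3)

    # Hypothetical readings from speech, facial expression and posture.
    obs = [
        (10.0, [0.4, 0.7, 0.1], 0.8),   # speech affect
        (11.5, [0.6, 0.5, 0.2], 0.6),   # facial expression
        (12.0, [0.2, 0.8, -0.1], 0.4),  # body posture / interest
    ]
    print(fuse_pad(obs, now=12.5))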
Rapport and Facial Expressions
Ning Wang, Jonathan Gratch
Abstract: How can we build virtual agents that establish rapport with humans? According to Tickle-Degnen and Rosenthal [4], the three essential components of rapport are mutual attentiveness, positivity and coordination. In our previous work, we designed an embodied virtual agent that establishes rapport with a human speaker by providing rapid and contingent nonverbal feedback [13][22]. How do we know that a human speaker is feeling a sense of rapport? In this paper, we focus on the positivity component of rapport by investigating the influence of human speakers’ facial expressions on the establishment of rapport. We used an automatic facial expression coding tool called CERT to analyze the human dyad interactions and human-virtual human interactions. Results show that recognizing positive facial displays alone may be insufficient and that recognized negative facial displays were more diagnostic in assessing the level of rapport between participants.
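A small, purely illustrative sketch of this kind of analysis (the paper uses CERT output; here random frame-level smile and frown probabilities are stand-ins): summarize positive and negative display rates per session and correlate them with rapport ratings.

    import numpy as np
    from scipy.stats import pearsonr

    def display_rate(frame_probs, threshold=0.5):
        """Fraction of frames in which a facial display exceeds the threshold."""
        return float(np.mean(np.asarray(frame_probs) > threshold))

    # Hypothetical per-session data: frame-level probabilities of a positive
    # display (smile) and a negative display (brow lowering), plus a rapport rating.
    rng = np.random.default_rng(1)
    sessions = [{"smile": rng.random(300), "frown": rng.random(300),
                 "rapport": rng.uniform(1, 7)} for _ in range(30)]

    pos = [display_rate(s["smile"]) for s in sessions]
    neg = [display_rate(s["frown"]) for s in sessions]
    rapport = [s["rapport"] for s in sessions]

    print("positive displays vs rapport:", pearsonr(pos, rapport))
    print("negative displays vs rapport:", pearsonr(neg, rapport))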
Towards Emotion Recognition from Electroencephalographic Signals
Kristina Schaaff, Tanja Schultz
Abstract: Over the last decades, information about the emotional state of users has become more and more important in human-computer interaction. Automatic emotion recognition enables a computer to recognize a user's emotional state and react appropriately, which may pave the way for computers to act emotionally in the future. In the current study, we investigate different feature sets for building an emotion recognition system from electroencephalographic signals. We used pictures from the International Affective Picture System to induce three emotional states: pleasant, neutral, and unpleasant. We designed a headband with four built-in electrodes at the forehead, which was used to record data from five subjects. Compared to standard EEG caps, the headband is comfortable to wear and easy to attach, which makes it more suitable for everyday-life conditions. To solve the recognition task we developed a system based on support vector machines. With this system we achieved an average recognition rate of up to 66.7% for subject-dependent recognition, based solely on EEG signals.
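The sketch below illustrates a subject-dependent pipeline of the kind described, under the assumption of band-power features and a scikit-learn SVM; it does not reproduce the paper's actual feature sets, and the data are synthetic.

    import numpy as np
    from scipy.signal import welch
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # illustrative

    def band_powers(epoch, fs=128):
        """Mean spectral power per band and channel (epoch: channels x samples)."""
        f, pxx = welch(epoch, fs=fs, nperseg=fs)
        return np.concatenate([pxx[:, (f >= lo) & (f < hi)].mean(axis=1)
                               for lo, hi in BANDS.values()])

    # Synthetic stand-ins: 90 epochs, 4 forehead channels, 5 s at 128 Hz,
    # three induced states (pleasant / neutral / unpleasant).
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((90, 4, 640))
    labels = rng.integers(0, 3, size=90)

    X = np.array([band_powers(e) for e in epochs])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    print(cross_val_score(clf, X, labels, cv=5).mean())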
Affective Haptics in Emotional Communication
Dzmitry Tsetserukou, Alena Neviarouskaya, Helmut Prendinger, Naoki Kawakami, Mitsuru Ishizuka, Susumu Tachi
Abstract: In this paper we propose a conceptually novel approach to reinforcing (intensifying) one's own feelings and reproducing (simulating) the emotions felt by a partner during online communication through a specially designed system, iFeel_IM!. The core component, the Affect Analysis Model, automatically recognizes nine emotions from text. The detected emotion is then stimulated by innovative haptic devices integrated into iFeel_IM!. The implemented system can considerably enhance the emotionally immersive experience of real-time messaging.
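A highly simplified, hypothetical sketch of the routing step implied by this architecture: an emotion label (standing in for Affect Analysis Model output) is mapped to a command for a haptic device. Device names, patterns and intensities are invented for illustration and are not those of iFeel_IM!.

    from dataclasses import dataclass
    from typing import Optional

    # Invented mapping from a detected emotion to a haptic actuation pattern.
    HAPTIC_MAP = {
        "joy":     ("hug_device", "gentle_squeeze", 0.6),
        "sadness": ("heartbeat_device", "slow_pulse", 0.3),
        "anger":   ("heartbeat_device", "fast_pulse", 0.9),
        "fear":    ("temperature_device", "cold_burst", 0.7),
    }

    @dataclass
    class HapticCommand:
        device: str
        pattern: str
        intensity: float

    def route_emotion(emotion: str) -> Optional[HapticCommand]:
        """Translate a recognized emotion label into a device command."""
        if emotion not in HAPTIC_MAP:
            return None
        device, pattern, intensity = HAPTIC_MAP[emotion]
        return HapticCommand(device, pattern, intensity)

    print(route_emotion("joy"))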
Using Ensemble Classifier Systems for Handling Missing Data in Emotion Recognition from Physiology: One Step Towards a Practical System
Cornelia Setz, Johannes Schumm, Claudia Lorenz, Bert Arnrich, Gerhard Troester
Abstract: Previous work on emotion recognition from physiology has rarely addressed the problem of missing data. However, data loss due to artifacts is a frequent phenomenon in practical applications, and discarding a whole data instance if only a part of it is corrupted results in a substantial loss of data. To address this problem, two methods for handling missing data (imputation and reduced-feature models) in combination with two classifier fusion approaches (majority and confidence voting) are investigated in this work. Five emotional states (amusement, anger, contentment, neutral and sadness) were elicited in 20 subjects by films while six physiological signals (ECG, EMG, EOG, EDA, respiration and finger temperature) were recorded. Results show that classifier fusion significantly increases the recognition accuracy in comparison to single classifiers, by up to 16.3%. Regarding the methods for handling missing data, reduced-feature models are competitive with or even slightly better than models which employ imputation. This is beneficial for practical applications where computational complexity is critical.
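An illustrative sketch of the reduced-feature ensemble idea (scikit-learn assumed, synthetic data): one classifier is trained per physiological channel, and at test time a majority vote is taken only over the channels whose data survived artifact removal.

    import numpy as np
    from collections import Counter
    from sklearn.linear_model import LogisticRegression

    CHANNELS = ["ecg", "emg", "eda", "resp"]  # illustrative subset of signals

    def train_per_channel(features, labels):
        """Fit one 'reduced-feature' classifier per physiological channel."""
        return {ch: LogisticRegression(max_iter=1000).fit(features[ch], labels)
                for ch in CHANNELS}

    def majority_vote(models, sample):
        """Vote over the channels whose data are present in `sample`."""
        votes = [int(models[ch].predict(sample[ch].reshape(1, -1))[0])
                 for ch in CHANNELS if ch in sample]
        return Counter(votes).most_common(1)[0][0] if votes else None

    # Synthetic training data: 100 instances, 5 features per channel, 5 classes.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 5, size=100)
    features = {ch: rng.standard_normal((100, 5)) for ch in CHANNELS}
    models = train_per_channel(features, labels)

    # Test instance in which the EMG channel was lost to artifacts.
    test = {ch: rng.standard_normal(5) for ch in CHANNELS if ch != "emg"}
    print(majority_vote(models, test))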
A Comparison of PCA, KPCA and LDA for Feature Extraction to Recognize Affect in Gait Kinematics
Michelle Elisabeth Karg, Robert Jenke, Wolfgang Seiberl, Kolja Kühnlenz, Ansgar Schwirtz, Martin Buss
Abstract: This study investigates the recognition of affect in human walking, a daily motion, in order to provide a means for affect recognition at a distance. For this purpose, a database of affective gait patterns from non-professional actors was recorded with optical motion tracking. Principal Component Analysis (PCA), Kernel PCA (KPCA) and Linear Discriminant Analysis (LDA) are applied to kinematic parameters and compared for feature extraction. LDA in combination with Naive Bayes leads to an accuracy of 91% for person-dependent recognition of four discrete affective states based on the observation of barely a single stride, roughly twice the accuracy achieved for inter-individual recognition. Furthermore, affective states that differ in arousal or dominance are more readily recognized from walking. Although the primary task of gait is locomotion, cues about a walker’s affective state can be recognized with techniques from machine learning.
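A sketch of the comparison pipeline under the assumption of scikit-learn and simulated kinematic features: PCA, Kernel PCA and LDA are applied as feature extractors, each followed by Gaussian Naive Bayes.

    import numpy as np
    from sklearn.decomposition import PCA, KernelPCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-ins: 120 strides, 20 kinematic parameters, 4 affective states.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 20))
    y = rng.integers(0, 4, size=120)

    extractors = {
        "PCA":  PCA(n_components=3),
        "KPCA": KernelPCA(n_components=3, kernel="rbf"),
        "LDA":  LinearDiscriminantAnalysis(n_components=3),  # at most n_classes - 1
    }

    for name, extractor in extractors.items():
        pipeline = make_pipeline(extractor, GaussianNB())
        score = cross_val_score(pipeline, X, y, cv=5).mean()
        print(f"{name}: {score:.2f}")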
Estimation of User Interest Using Time Delay Features between Proactive Content Presentation and Eye Movements
Jean-Baptiste Dodane, Takatsugu Hirayama, Hiroaki Kawashima, Takashi Matsuyama
Abstract: Human-machine interaction still lacks smoothness and naturalness despite the widespread use of intelligent systems and emotive agents. In order to improve the interaction, this work proposes an approach to estimating a user's interest based on the relationships between the dynamics of the user's eye movements, more precisely the endogenous control mode of saccades, and the machine's proactive visual content presentation. Using a specially designed presentation phase that makes the user express endogenous saccades, we analyzed delays between the saccades and the presentation events. As a result, we confirmed that the delay during which the user's gaze remains on the previously presented content regardless of the next event, which we call resistance, is a good indicator for interest estimation (70% success over 20 experiments). It showed higher accuracy than conventional interest estimation based on gaze duration.
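The delay feature can be illustrated with a few lines of code (the timestamps below are invented): for each presentation event, measure the latency until the first following saccade onset; long latencies, during which gaze stays on the previous content, correspond to the "resistance" cue.

    import numpy as np

    def event_saccade_delays(event_times, saccade_times):
        """For each presentation event, latency to the first following saccade onset."""
        saccade_times = np.sort(np.asarray(saccade_times, dtype=float))
        delays = []
        for t in event_times:
            i = np.searchsorted(saccade_times, t)
            delays.append(saccade_times[i] - t if i < len(saccade_times) else np.nan)
        return np.array(delays)

    # Invented timeline (seconds): content-change events and saccade onsets.
    events = [2.0, 6.0, 10.0]
    saccades = [2.3, 4.1, 7.8, 10.2, 11.0]
    print(event_saccade_delays(events, saccades))  # long delay = gaze "resists" the new content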
Smart Sensor Integration: A Framework for Multimodal Emotion Recognition in Real-Time
Johannes Wagner, Elisabeth André, Frank Jung
Abstract: Affect sensing by machines has been argued to be an essential part of next-generation human-computer interaction (HCI). To this end, a large number of studies have been conducted in recent years, which report automatic recognition of emotion as a difficult but feasible task. However, most effort has been put into offline analysis, whereas to date only few applications exist that are able to react to a user's emotion in real-time. In response to this deficit we introduce a framework we call Smart Sensor Integration (SSI), which considerably jump-starts the development of multimodal online emotion recognition (OER) systems. In particular, SSI supports the pattern recognition pipeline by offering tailored tools for data segmentation, feature extraction, and pattern recognition, as well as tools to apply them offline (training phase) and online (real-time recognition). Furthermore, it has been designed to handle input from various modalities and to support the fusion of multimodal information.
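A skeletal sketch of the kind of pipeline SSI organizes (segmentation, feature extraction, classification); the class and component names here are illustrative assumptions, not the SSI API.

    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class Pipeline:
        """Toy online-recognition pipeline: segment -> extract features -> classify."""
        segment: Callable[[Sequence[float]], list]   # cut the stream into windows
        extract: Callable[[Sequence[float]], list]   # window -> feature vector
        classify: Callable[[list], str]              # feature vector -> label

        def process(self, stream: Sequence[float]) -> list:
            return [self.classify(self.extract(w)) for w in self.segment(stream)]

    # Illustrative components.
    fixed_windows = lambda s, n=4: [s[i:i + n] for i in range(0, len(s), n)]
    mean_energy = lambda w: [sum(x * x for x in w) / len(w)]
    threshold_label = lambda f: "aroused" if f[0] > 1.0 else "calm"

    pipe = Pipeline(segment=fixed_windows, extract=mean_energy, classify=threshold_label)
    print(pipe.process([0.1, 0.2, 0.1, 0.3, 1.5, 1.7, 1.2, 1.4]))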
Emotion Measurement Platform for Daily Life Situations
Joyce H.D.M. Westerink, Gert-Jan de Vries, Stijn de Waele, Jack van den Eerenbeemd, Marco van Boven, Martin Ouwerkerk
Abstract: The growing interest in affective computing is expected to have a beneficial impact on consumer lifestyle products. Especially empathic applications – applications that make you feel they really understand you – will serve the current consumer interest in enhanced and meaningful experiences. To do so, such applications will have to measure the user’s emotional experience. Well-established psychophysiological techniques hold promise, but so far have mainly been validated in laboratory situations. To also apply them in real-life situations, we built an emotion measurement platform. This platform shows that emotional experiences can be measured in a relatively unobtrusive way, while at the same time it enables us to gather knowledge on emotional experiences in everyday life and offers the opportunity to prototype empathic application concepts and test them in relevant situations.
An Approach for Automatically Measuring Facial Activity in Depressed Subjects
Gordon James McIntyre, Roland Goecke, Matthew Hyett, Melissa Green, Michael Breakspear
Abstract: This paper is motivated by Ellgring’s work on non-verbal communication in depression, which measured and compared the levels of facial activity of endogenous and neurotic depressives before and after treatment. Similar to that work, we loosely associate the measurements with Action Unit (AU) groups from the Facial Action Coding System (FACS). However, we use the neologism Region Units (RU) to describe regions of the face that encapsulate AUs. In contrast to Ellgring’s approach, we generate the measurements automatically and provide both prototypical expression recognition and RU-specific activity measurements. Latency between expressions is also measured, and the system lends itself to comparisons across groups and individual subjects. By using Active Appearance Models (AAM) to locate the fiducial facial points, and MultiBoost to classify prototypical expressions and the RUs, we can provide a simple, objective, flexible and cost-effective means of automatically measuring facial activity.
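An illustrative sketch of the Region Unit bookkeeping (the AU-to-RU grouping, threshold and per-frame scores below are assumptions; the paper derives them from AAM tracking and MultiBoost classifiers): aggregate AU activity into face regions and report the fraction of active frames per region.

    import numpy as np

    # Illustrative grouping of FACS Action Units into face Region Units.
    REGION_UNITS = {
        "upper_face": ["AU1", "AU2", "AU4"],    # brows / forehead
        "mid_face":   ["AU6", "AU9"],           # cheeks / nose
        "lower_face": ["AU12", "AU15", "AU20"], # mouth region
    }

    def ru_activity(au_scores, threshold=0.5):
        """Fraction of frames in which any AU of a region exceeds the threshold.

        au_scores: dict mapping an AU name to an array of per-frame scores."""
        n_frames = len(next(iter(au_scores.values())))
        activity = {}
        for ru, aus in REGION_UNITS.items():
            active = np.zeros(n_frames, dtype=bool)
            for au in aus:
                active |= np.asarray(au_scores.get(au, np.zeros(n_frames))) > threshold
            activity[ru] = float(active.mean())
        return activity

    # Invented per-frame classifier outputs for a 200-frame clip.
    rng = np.random.default_rng(0)
    scores = {au: rng.random(200) for aus in REGION_UNITS.values() for au in aus}
    print(ru_activity(scores))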
A Model for Emotional Contagion Based on the Emotional Contagion Scale
Joao Carlos Viegas Martins Bispo, Ana Paiva
Abstract: Emotional contagion is a process by which a person or group induces emotions, consciously or unconsciously, in another person. In the work presented here we introduce and describe an agent-based model of emotional contagion. The Emotional Contagion Scale (ECS) is used to model the behavior of the agents. The model focuses on the unconscious aspect of emotional contagion and is implemented in a scenario built with a set of autonomous agents. The results show that the model is adequate for simulating the process of emotional contagion.
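A toy sketch of an agent-based contagion update of the kind described; the susceptibility parameter stands in for an ECS-derived score, and the update rule and constants are illustrative assumptions rather than the authors' model.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        emotion: float         # current emotional intensity in [0, 1]
        susceptibility: float  # ECS-like score in [0, 1]: how easily emotion is "caught"

    def contagion_step(agents, rate=0.3):
        """One synchronous update: each agent drifts toward the group's mean
        emotion, scaled by its own susceptibility."""
        mean = sum(a.emotion for a in agents) / len(agents)
        for a in agents:
            a.emotion += rate * a.susceptibility * (mean - a.emotion)

    # Invented group: one highly emotional agent among calmer, more susceptible ones.
    group = [Agent(0.9, 0.2), Agent(0.1, 0.8), Agent(0.2, 0.5), Agent(0.3, 0.6)]
    for _ in range(10):
        contagion_step(group)
    print([round(a.emotion, 2) for a in group])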
Does the Mood Matter?
Irene Lopatovska
Abstract: We report the results of an experiment that examined the effects of mood on search performance. Participants were asked to use the Google search engine to find answers to two questions. Searchers’ mood was measured using the Positive Affect and Negative Affect Scale (PANAS). Search performance was measured by the number of websites visited, the time spent reading search results, the quality of the answers, and other similar measures. Analysis of the relationship between mood and search performance indicated that positive mood prior to the search affected certain search behaviors, but neither positive nor negative moods had a significant effect on the quality of search results.
Is There a Dominant Channel in the Perception of Emotions?
Noam Amir, Adva Weiss, Rachel Hadad
Abstract: The objective of this study was to determine whether one of speech, textual content and facial expression is the perceptually dominant channel in carrying emotional cues. To this end a Wizard-of-Oz scenario was used to elicit a corpus of emotional speech and facial expressions from five female speakers. Excerpts from this corpus were then presented to 48 listeners in various modalities: audio only, video only, text only and video+audio. Listeners judged emotional content on two scales: Activation and Valence. Most listeners rated the combined modality as easiest to judge and video alone as most difficult. Statistical analysis of the judgments revealed that Activation was more difficult to judge than Valence. Furthermore, the best agreement on Valence was obtained among judgments based on audio alone, text alone, and the combined channel, indicating that textual content had a major and indeed dominant influence on the judgments.
Resolution of Focus of Attention Using Gaze Direction Estimation and Saliency Computation
Zeynep Yucel, Albert Ali Salah
Abstract: Modeling the user's attention is useful for responsive and interactive systems. This paper proposes a method for establishing joint visual attention between an experimenter and an intelligent agent. A rapid procedure is described to track the 3D head pose of the experimenter, which is used to approximate the gaze direction. The head is modeled with a sparse grid of points sampled from the surface of a cylinder. We then propose to employ a bottom-up saliency model to single out interesting objects in the neighborhood of the estimated focus of attention. We report results on a series of experiments, where a human experimenter looks at objects placed at different locations in the visual field, and the proposed algorithm is used to locate target objects automatically. Our results indicate that the proposed approach achieves high localization accuracy and thus constitutes a useful tool for the construction of natural human-computer interfaces.
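A schematic 2D sketch of combining an estimated gaze point with saliency to resolve the attended object (the head-pose tracker and the bottom-up saliency model themselves are not reproduced; coordinates and scores are invented): each candidate object's saliency is weighted by its proximity to the gaze estimate.

    import numpy as np

    def attended_object(gaze_point, objects, sigma=50.0):
        """Pick the object maximizing saliency weighted by proximity to the
        estimated gaze point (pixel coordinates); sigma models gaze uncertainty."""
        best, best_score = None, -np.inf
        for name, (x, y), saliency in objects:
            d2 = (x - gaze_point[0]) ** 2 + (y - gaze_point[1]) ** 2
            score = saliency * np.exp(-d2 / (2 * sigma ** 2))
            if score > best_score:
                best, best_score = name, score
        return best, best_score

    # Invented scene: gaze estimate derived from head pose and three candidate
    # objects with bottom-up saliency scores.
    gaze = (320, 240)
    objects = [("cup", (300, 250), 0.4),
               ("lamp", (500, 100), 0.9),
               ("book", (330, 260), 0.5)]
    print(attended_object(gaze, objects))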