16:01   Demo Session 1 - 1st/2nd Balconies
Chair: Jonathan Gratch, Ioannis Patras, Marc Schröder
Multimodal real-time conversation analysis using a novel process engine
Stefan Scherer, Volker Fritzsch, Friedhelm Schwenker
Abstract: This contribution introduces a software framework that enables researchers to develop real-time pattern recognition and sensor fusion applications at an abstraction level above that of common programming languages, in order to minimize programming errors and technical obstacles. Furthermore, a proof of concept using two separate instances of the process engine on different computers for audiovisual data processing is described. The scenario demonstrates the engine's capability to process data in real time and synchronously on multiple machines, features that are necessary in large-scale projects.
Recording emotions with “MyInnerLife”
Elisabeth Eichhorn, Reto Wettach, Boris Müller
Abstract: Our project is a system for expressing emotions and recording them on a long-term basis. In contrast to much of the research in the field of affective computing, our project is not dedicated to enabling machines to detect human emotions but to providing new input methods. This demo presents “MyInnerLife”, a physical input device for expressing and recording emotions non-verbally.
EmoText: Applying Differentiated Semantic Analysis in Lexical Affect Sensing
Alexander Osherenko
Abstract: Recently, there has been considerable interest in the recognition of affect from written and spoken language. We developed a computer system that implements a semantic approach to lexical affect sensing. This system analyses English sentences utilizing grammatical interdependencies between emotion words and intensifiers of emotional meaning.
EMBR: A Realtime Animation Engine for Interactive Embodied Agents
Alexis Heloir, Michael Kipp
Abstract: Embodied agents are a powerful paradigm for current and future multimodal interfaces, yet require high effort and expertise for their creation, assembly and animation control. Therefore, open animation engines and high-level control languages are required to make embodied agents accessible to researchers and developers. In this demo paper, we present a new realizer called EMBR (Embodied Agents Behavior Realizer) and its control language EMBRScript.
A More Effective Way to Label Affective Expressions
Micah Rye Eckhardt, Rosalind Picard
Abstract: Labeling videos for affect content such as facial expression is tedious and time consuming. Researchers often spend significant amounts of time annotating experimental data, or simply lack the time required to label their data. For these reasons we have developed VidL, an open-source video labeling system that is able to harness the distributed people-power of the internet. Through centralized management, VidL can be used to manage data, apply custom labels to videos, manage workers, visualize labels, and review coders' work. As an example, we recently labeled 700 short videos, approximately 60 hours of work, in 2 days using 20 labelers working from their own computers.
A Demonstration of Audiovisual Sensitive Artificial Listeners
Marc Schröder, Elisabetta Bevacqua, Florian Eyben, Hatice Gunes, Dirk Heylen, Mark ter Maat, Sathish Pammi, Maja Pantic, Catherine Pelachaud, Björn Schuller, Etienne de Sevin, Michel Valstar, Martin Wöllmer
Abstract: Sensitive Artificial Listeners (SAL) are virtual dialogue partners who, despite their very limited verbal understanding, intend to engage the user in a conversation by paying attention to the user's emotions and non-verbal expressions. The SAL characters have their own emotionally defined personality, and attempt to drag the user towards their dominant emotion, through a combination of verbal and non-verbal expression. The demonstrator shows an early version of the fully autonomous SAL system based on audiovisual analysis and synthesis.
Demonstration: Interactive Social-Emotional Toolkit (iSET)
Mish Anna Rimm Madsen, Rana el Kaliouby, Micah Eckhardt, Matthew Goodwin, Mohammed Ehsan Hoque, Rosalind Picard
Abstract: Social communication in autism is significantly hindered by difficulties processing affective cues in real-time face-to-face interaction. The interactive Social-Emotional Toolkit (iSET) allows its users to record and annotate video with emotion labels in real time, then review and edit the labels later to bolster understanding of the affective information present in interpersonal interactions. The iSET demo will let the ACII audience experience how the system augments interpersonal interactions.