Computationally Modeling the Cognitive Antecedents and Consequences of Emotion
Institute for Creative Technologies,
Marina del Rey
Biographical Note: Dr. Jonathan Gratch is an Associate Director for Virtual Humans Research at the University of Southern California’s (USC) Institute for Creative Technologies, Research Associate Professor in the Department of Computer Science and co-director of USC’s Computational Emotion Group. He completed his Ph.D. in Computer Science at the University of Illinois at Urbana-Champaign in 1995. Dr. Gratch’s research focuses on virtual humans (artificially intelligent agents embodied in a human-like graphical body), and computational models of emotion. He studies the relationship between cognition and emotion, the cognitive processes underlying emotional responses, and the influence of emotion on decision making and physical behavior. A recent emphasis of this work is on social emotions, emphasizing the role of contingent nonverbal behavior in the co-construction of emotional trajectories between interaction partners. His research has been supported by the National Science Foundation, DARPA, AFOSR and RDECOM. He is on the editorial board of the journal Emotion Review and the President-Elect of the HUMAINE Association for Research on Emotions and Human-Machine Interaction. He is a sitting member of the organizing committee for the International Conference on Intelligent Virtual Agents (IVA) and a frequent organizer of conferences and workshops on emotion and virtual humans. He belongs to the American Association for Artificial Intelligence (AAAI) and the International Society for Research on Emotion. Dr. Gratch is the author of over 100 technical articles.
Abstract: Contemporary research emphasizes emotion's functional role in how organisms sense events, relate them to internal needs, characterize appropriate responses and recruit the cognitive, physical and social resources needed to adaptively respond. Recognizing, modeling and exploiting such influences can have broad impact across a variety of scientific disciplines and applications. In this talk, I will summarize a decade of research, in collaboration with Stacy Marsella, to computationally model the cognitive antecedents and consequences of emotion. I will describe alternative motives for building such models and how these different motivations necessarily lead to different evaluation criteria and potentially different designs. I will focus on
EMA, a computational model that simulates both cognitive and social emotional processes and can engage in meaningful social exchanges with human users. I will describe a series of empirical studies on the fidelity of these simulations and discuss their potential, both as practical tools to advance human-computer interaction and as methodological tools for the study of human social and cognitive behavior.
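The appraisal-theoretic core of a model like EMA can be illustrated with a toy sketch: an event is appraised along variables such as desirability, likelihood, and causal attribution, and the resulting appraisal frame determines a coarse emotion label. The class names, variables, and mapping rules below are illustrative simplifications, not Gratch and Marsella's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AppraisalFrame:
    """Appraisal of one event relative to an agent's goals.

    Variables loosely follow appraisal theory; names and ranges
    are hypothetical, chosen only for this illustration.
    """
    desirability: float              # -1.0 (harmful) .. +1.0 (beneficial)
    likelihood: float                # 0.0 (unlikely) .. 1.0 (certain/occurred)
    other_blameworthy: bool = False  # another agent caused an undesirable event
    self_blameworthy: bool = False   # the agent itself caused it

def appraise(frame: AppraisalFrame) -> str:
    """Map an appraisal frame to a coarse emotion label."""
    if frame.desirability > 0:
        # Desirable events: certainty separates joy from hope.
        return "joy" if frame.likelihood >= 1.0 else "hope"
    # Undesirable events: causal attribution sharpens the emotion.
    if frame.other_blameworthy:
        return "anger"
    if frame.self_blameworthy:
        return "guilt"
    return "distress" if frame.likelihood >= 1.0 else "fear"

# An uncertain threat vs. a certain loss yield different emotions:
print(appraise(AppraisalFrame(desirability=-0.8, likelihood=0.4)))  # fear
print(appraise(AppraisalFrame(desirability=-0.8, likelihood=1.0)))  # distress
```

The full model additionally simulates coping, i.e. how these appraisals feed back to change the agent's beliefs, intentions, and plans; that dynamic is omitted here.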
Social Practice: Sociocultural Approaches to Identity in Embodied Conversational Agents?
Center for Technology & Social Behavior
Biographical Note: Justine Cassell holds the AT&T Research Chair and is a full professor in the departments of Electrical Engineering and Computer Science, and Communication Studies at Northwestern University, with courtesy appointments in Linguistics, Psychology, and Learning Sciences. She is also the director of the Northwestern Center for Technology and Social Behavior, and the director of the new doctoral program in Technology and Social Behavior. Before coming to Northwestern, Cassell was a tenured professor at the MIT Media Lab where she directed the Gesture and Narrative Language Research Group. In 2001, Cassell was awarded the prestigious Edgerton Faculty Award at MIT; in 2008 she was awarded the Anita Borg Institute Women of Vision Leadership Award; in 2009 Cassell was made an ACM Distinguished Lecturer. She spent 2008-2009 on sabbatical at the Stanford Center for Advanced Study in the Behavioral Sciences (CASBS). Cassell's research builds on her multidisciplinary background: she holds undergraduate degrees in Comparative Literature from Dartmouth and in Lettres Modernes from the Université de Besançon (France). She holds an M.Phil. in Linguistics from the University of Edinburgh (Scotland) and a double Ph.D. from the University of Chicago in Linguistics and Psychology. After having spent ten years studying verbal and non-verbal aspects of human communication through microanalysis of videotaped data, she began to bring her knowledge of human conversation to the design of computational systems. Her current research investigates the relationship between cultural, linguistic and social phenomena, and how this intersection plays out in the display and deployment of identity, both in real and virtual humans. Cassell has authored more than 100 journal articles, conference proceedings and book chapters on these topics, and has given more than 50 keynote addresses at international conferences.
Abstract: In this talk I propose a particular sociocultural approach to the study of the so-called "social emotions" - intrinsically dyadic states such as rapport, friendship, intimacy, affection. I propose in particular to parameterize these social emotions, which I think of as inferable underlying shared states of being, into separable components, including similarity (being like the other person), familiarity (knowing the other person well), positive affect (liking the other person), and commonality (sharing common experience). I use this parameterization to describe the surface-level observable verbal and nonverbal behaviors that function to evoke, deepen, reveal, and destroy these dyadic social emotions. I show how the social emotions do not always play a positive role in people's interactions. And then I describe how the relationship between observable behaviors and inferable underlying social states functions against a background of culture and history, and therefore plays a role in deploying, displaying, and demonstrating aspects of identity. Each step of this talk is demonstrated by experiments that involve human-human and human-agent interaction. I include novel approaches to modeling and generating behaviors for human-agent interaction on the basis of the human-human corpora. And finally, lessons are drawn both for the study of human behavior and for the improved design of technologies capable of engaging in human-computer interaction.
User Affect Modeling, Recognition, and Assistance
Rensselaer Polytechnic Institute
Biographical Note: Qiang Ji received his Ph.D. degree in Electrical Engineering from the University of Washington. He is currently a Professor with the Department of Electrical, Computer, and Systems Engineering at Rensselaer Polytechnic Institute (RPI). He is also a program director at the National Science Foundation, managing NSF’s computer vision and machine learning programs. He has also held teaching and research positions with the Beckman Institute at the University of Illinois at Urbana-Champaign, the Robotics Institute at Carnegie Mellon University, the Department of Computer Science at the University of Nevada, Reno, and the US Air Force Research Laboratory. Prof. Ji currently serves as the director of the Intelligent Systems Laboratory (ISL) at RPI.
Prof. Ji's research interests are in computer vision, pattern recognition, and probabilistic graphical models. He has published over 150 papers in peer-reviewed journals and conferences. His research has been supported by major governmental agencies including NSF, NIH, DARPA, ONR, ARO, and AFOSR as well as by major companies including Honda and Boeing. He is an editor of several computer vision and pattern recognition related journals, and he has served as program chair, technical area chair, and program committee member for numerous international conferences and workshops.
Prof. Ji has received several awards including the research excellence award from RPI’s School of Engineering, 2006; the best paper award from the IEEE Transactions on Vehicular Technology, 2004; the best paper award from the IEEE Computer Vision and Pattern Recognition Workshop on Face Recognition Grand Challenge Experiments, 2005; and the Honda Initiation Award, 1998. He is a senior member of the IEEE.
Abstract: User emotional states can seriously affect a user's psychomotor and decision-making capabilities. The goal of this research is to develop a system that recognizes task-specific negative user affective states (e.g. fatigue and stress) and provides the appropriate intervention to compensate for the performance decrement resulting from these negative states. The proposed system consists of two major components: multi-modality user state sensing, and user affect and assistance modeling.
For user state sensing, we develop a real-time, non-invasive system that provides user state measurements from sensors of different modalities. The sensory measurements include physical appearance (facial expression, eye movements, and head movements) extracted from remote video cameras, physiological measurements collected from an emotional mouse we developed, behavioral data from user interaction with the computer, and performance measures. For user affect and assistance modeling, we introduce a general unified decision-theoretic framework based on Dynamic Influence Diagrams for simultaneously modeling user affect recognition and assistance. Using the framework, affective state recognition is achieved through active probabilistic inference from the available sensory data. Specifically, we introduce an active sensing strategy that allows purposive and sufficing information integration in order to infer the user's affective state in a timely and efficient manner. User assistance is automatically accomplished through a decision-making process that balances the benefits of keeping the user in productive affective states against the costs of performing user assistance. An information-theoretic approach is introduced to probabilistically determine the most appropriate user augmentation and its application timing in order to maximize the chance of returning the user to a productive affective state while minimizing the associated costs. Validation of the proposed framework via a simulation study demonstrates its capability in efficient user affect recognition as well as timely and appropriate user assistance. The affect recognition component of the prototype system is subsequently validated through a real-world study involving human subjects.
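The decision layer of such a framework can be sketched in a few lines: given a belief (posterior distribution) over the user's affective state, choose the assistance action with the highest expected utility, where each action's utility trades the benefit of restoring a productive state against its intrusion cost. The state names, actions, and numbers below are invented for illustration; in the actual system the belief would come from probabilistic inference in a dynamic influence diagram, not be hand-set.

```python
# Hypothetical affective states and a belief over them, standing in
# for the posterior produced by inference over the sensory data.
belief = {"productive": 0.2, "fatigued": 0.6, "stressed": 0.2}

# U(state, action): how well each action serves the user in each
# state, with intrusion costs already folded in (illustrative values).
ACTIONS = {
    "none":        {"productive": 1.0, "fatigued": 0.0, "stressed": 0.0},
    "short_break": {"productive": 0.8, "fatigued": 0.7, "stressed": 0.4},
    "alert_tone":  {"productive": 0.9, "fatigued": 0.5, "stressed": -0.2},
}

def expected_utility(action: str, belief: dict) -> float:
    """Expected utility of an action under the current belief."""
    return sum(p * ACTIONS[action][state] for state, p in belief.items())

# Pick the assistance action maximizing expected utility.
best = max(ACTIONS, key=lambda a: expected_utility(a, belief))
print(best)  # short_break: the fatigue-heavy belief favors intervening
```

Note how the same machinery decides both whether to intervene at all (doing nothing is just another action) and which intervention to apply, which is the appeal of a unified decision-theoretic formulation.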