Tuesday, December 04, 2018
10:00 AM
Wegmans Hall 2506
Professor Yelin Kim
University at Albany
Multimodal Emotion Recognition: Quantifying Dynamics and Structure in Audio-Visual Expressive Speech
The rise of AI assistant systems, including Google Home, Apple Siri, and Amazon Echo, brings an urgent need for a deeper understanding of users. In this talk, I will present algorithmic and statistical methods for analyzing audio-visual human behavior, with a particular focus on emotional and social signals inferred from speech and facial expressions. These methods can provide emotional intelligence to AI systems. However, developing automatic emotion recognition systems is challenging because emotional expressions are complex, dynamic, inherently multimodal, and entangled with other factors of modulation (e.g., speech generation and emphasis). I will present several algorithms that address these fundamental challenges in emotion recognition: (i) cross-modal modeling methods that capture and control for interactions between individual facial regions and speech, using segmentation based on the Minimum Description Length (MDL) principle; (ii) localization and prediction of events with salient emotional behaviors, using max-margin optimization and dynamic programming; and (iii) temporal modeling methods that learn co-occurrence patterns between emotional behaviors and emotion label noise. These algorithms have enabled advancements in the modeling of audio-visual emotion recognition systems and increased our understanding of the underlying dynamic and multimodal structure of affective communication (e.g., cross-modal interaction, temporal structure, and inherent perceptual ambiguity).
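
To give a concrete, simplified feel for the dynamic-programming flavor of item (ii), the sketch below localizes the contiguous span of frames with the highest cumulative emotional-salience score using a Kadane-style maximum-subarray recurrence. The per-frame salience scores and the function name are hypothetical illustrations; this is a minimal toy of event localization by dynamic programming, not the speaker's actual max-margin formulation.

# Illustrative sketch: localize the contiguous window of frames whose summed
# "emotional salience" is maximal, via a Kadane-style dynamic-programming pass.
# Salience scores are assumed to come from some upstream audio-visual model.

from typing import List, Tuple

def most_salient_window(salience: List[float]) -> Tuple[int, int, float]:
    """Return (start, end_exclusive, score) of the window maximizing summed salience."""
    best_start, best_end, best_score = 0, 0, float("-inf")
    cur_start, cur_score = 0, 0.0
    for i, s in enumerate(salience):
        if cur_score <= 0.0:          # restarting the window here is at least as good
            cur_start, cur_score = i, s
        else:                          # extending the current window is better
            cur_score += s
        if cur_score > best_score:     # record the best window seen so far
            best_start, best_end, best_score = cur_start, i + 1, cur_score
    return best_start, best_end, best_score

if __name__ == "__main__":
    # Hypothetical per-frame salience scores (positive = emotionally salient).
    scores = [-0.2, 0.5, 0.7, -0.1, 0.9, -1.5, 0.3]
    start, end, score = most_salient_window(scores)
    print(f"Most salient span: frames {start}..{end - 1}, score {score:.2f}")

On the example scores above, the recurrence selects frames 1 through 4 (score 2.0), i.e., the span where salient behavior accumulates before a sharp drop; a max-margin objective would instead learn the scoring function so that such spans separate emotional from neutral segments.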