SeNSE

Sense: Socio Emotional Signals

Seminars

 


HATICE GUNES, from Queen Mary University of London,

Télécom-ParisTech, the 10/09/2014

Automatic Recognition of Affective and Social Signals

Computing that is sensitive to affective and social phenomena aims to equip devices and interfaces with the means to interpret, understand, and respond to human traits, communicative and social states, emotions, moods, and, possibly, intentions in a naturalistic way - similar to the way humans rely on their senses to assess each other’s affective and social behaviour.

This talk will give a brief summary of automatic recognition of affective and social signals and will focus on answering three main questions: 1) why is automatic affective and social signal recognition needed? 2) what is automatic affective and social signal recognition about? 3) how is automatic affective and social signal recognition achieved?

The talk will also provide an overview of the work I have conducted in the field, ranging from automatic analysis of emotions to attractiveness and personality traits.
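
As a rough illustration of the "how" question above (and not of the speaker's own systems), most automatic affective and social signal recognition follows the same supervised recipe: extract behavioural features from face, voice, or body cues, then train a classifier on human-annotated examples. The feature names and data in the sketch below are synthetic placeholders.

```python
# Minimal sketch of the standard supervised recipe for affect recognition.
# Features and labels are synthetic placeholders, not real behavioural data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-sample features: e.g. smile intensity, pitch variation, gesture speed.
X = rng.normal(size=(200, 3))
# Hypothetical binary labels (0 = neutral, 1 = happy), as annotated by human observers.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```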

Short bio: Hatice Gunes is a Senior Lecturer in the School of Electronic Engineering and Computer Science, Queen Mary University of London, UK. Her research interests lie in the multidisciplinary areas of affective computing and social signal processing, focusing on the automatic analysis of emotional and social behaviour and human aesthetic canons, multimodal information processing, machine learning, and human-human, human-virtual agent, and human-robot interactions. She has published more than 70 technical papers in these areas and has received a number of awards, including Outstanding Paper (IEEE FG'11), Quality Reviewer (IEEE ICME'11), Best Demo (IEEE ACII'09) and Best Student Paper (VisHCI'06). Dr Gunes serves on the Management Board of the Association for the Advancement of Affective Computing (AAAC) and on the Steering Committee of IEEE Transactions on Affective Computing. She has also served as a Guest Editor of Special Issues in Int'l J. of Synthetic Emotions, Image and Vision Computing, and ACM Transactions on Interactive Intelligent Systems, as a member of the Editorial Advisory Board for the Affective Computing and Interaction Book (2011), as co-founder and main organizer of the EmoSPACE Workshops at IEEE FG'13 and FG'11, as workshop chair of MAPTRAITS'14, HBU'13 and AC4MobHCI'12, and as area chair for ACM Multimedia’14, IEEE ICME'13, ACM ICMI'13 and ACII'13. She is currently involved as PI and Co-I in several projects funded by the Engineering and Physical Sciences Research Council UK (EPSRC) and the British Council.


 

DANIEL GATICA-PEREZ, from IDIAP, EPFL

ISIR, UPMC, the 09/10/2013

Understanding conversational social video

The variety and volume of online conversational video are together creating new possibilities for communication and interaction. Research in social media has made great progress in understanding text content. However, communication is more than the words we say: the nonverbal channel - prosody, gaze, facial expressions, gestures, and postures - enriches the online communication experience and plays a key role in the formation and evolution of a number of fundamental social constructs. In this talk, I will present an overview of our work on characterizing and mining conversational social video, more specifically conversational vlogs. I will first discuss methods to characterize communicative behavior from audio and video data. I will then discuss work that has examined connections among nonverbal and verbal cues, personality traits, mood, and attention. Finally, I will discuss the role that video crowdsourcing techniques play in interpersonal perception research online.
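
By way of illustration only (the actual feature set used in this line of work is not detailed here), one of the simplest nonverbal cues that can be computed from vlog audio is speaking activity, estimated from short-time frame energy. The sketch below uses a synthetic signal and a crude energy threshold as a stand-in for voice activity detection.

```python
# Toy sketch: short-time log energy and an estimated speaking-time ratio.
import numpy as np

def speaking_activity(signal: np.ndarray, sr: int, frame_ms: float = 25.0):
    """Return per-frame log energy and a rough speaking-time ratio."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.log10(np.mean(frames ** 2, axis=1) + 1e-10)
    threshold = energy.mean()                 # crude voice-activity threshold
    speaking_ratio = float(np.mean(energy > threshold))
    return energy, speaking_ratio

# Synthetic example: 5 s of "speech bursts" over low-level noise at 16 kHz.
sr = 16000
t = np.arange(5 * sr) / sr
signal = 0.02 * np.random.randn(len(t)) \
    + 0.5 * np.sin(2 * np.pi * 150 * t) * (np.sin(2 * np.pi * 0.5 * t) > 0)
energy, ratio = speaking_activity(signal, sr)
print(f"estimated speaking-time ratio: {ratio:.2f}")
```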


GUILLAUME DUMAS, from Institut du cerveau et de la moelle épinière

ISIR, UPMC, the 02/10/2013

Spontaneous interaction in humans: neuroimaging and computational models

Understanding social cognition requires studying spontaneous interactions. Yet, even at the dyadic scale, the methodological and theoretical challenges are numerous. In particular, the dynamic and reciprocal components of human interaction remain poorly explored in neuroscience because of the difficulty of recording the brain activity of several individuals simultaneously. This is precisely the goal of the hyperscanning method. The first part of the talk will show how combining situated social paradigms with EEG hyperscanning made it possible to demonstrate that states of interactional synchrony at the behavioural level correlate with the emergence of inter-individual synchronization at the neural level. This demonstrated, for the first time, anatomo-functional similarities between two human brains at the millisecond scale, without any shared external driving signal. This inter-brain synchronization, associated with different frequency bands, reflects several aspects of social interaction such as interactional synchrony, anticipation of the other, and the co-regulation of turn-taking. In a second part, we will see how these phenomena can be simulated numerically with neurocomputational models that integrate structural anatomical data (Dumas et al., PLoS ONE 2012). These simulations highlight how the observable inter-brain synchronies reflect several distinct phenomena, and show how the anatomical structure of the human brain, the connectome, tends to facilitate inter-individual synchronization at the biological scale. This last result may thus partly explain our propensity to enter into coupling with others. Finally, a new paradigm called Virtual Partner Interaction (VPI) will be presented (Kelso et al., PLoS ONE 2009). It consists of coupling, in real time, a human with a "virtual partner" whose behavioural dynamics are governed by empirically validated dynamical models. Experimentally, this makes it possible to establish a spontaneous interaction while retaining control over one half of the dyad. This new approach also enables a direct dialogue between empirical and theoretical accounts of human social interaction. Studying spontaneous "human-human" and "human-machine" interactions therefore not only allows a better investigation of the neurobiological mechanisms underlying social cognition, but also supports the development of new theoretical models that integrate the neural, behavioural, and social levels.
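
As an illustrative aside (not a reproduction of the analyses in the talk), a common way to quantify inter-brain synchronization in a given EEG frequency band is the phase-locking value (PLV) between one channel from each participant, computed from band-pass filtered signals and their instantaneous Hilbert phases. The two "brains" below are synthetic signals sharing a 10 Hz rhythm.

```python
# Illustrative sketch: phase-locking value (PLV) between two EEG channels.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x: np.ndarray, y: np.ndarray, sr: float,
                        band=(8.0, 12.0)) -> float:
    """PLV between two signals in a frequency band (1 = perfect phase locking)."""
    b, a = butter(4, [band[0] / (sr / 2), band[1] / (sr / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

# Synthetic "two-brain" example: a shared 10 Hz rhythm plus independent noise.
sr, duration = 250.0, 10.0
t = np.arange(int(sr * duration)) / sr
shared = np.sin(2 * np.pi * 10 * t)
eeg_a = shared + 0.8 * np.random.randn(len(t))
eeg_b = shared + 0.8 * np.random.randn(len(t))
print(f"alpha-band PLV: {phase_locking_value(eeg_a, eeg_b, sr):.2f}")
```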


ANGELICA LIM, from Okuno Speech Media Processing Lab

ISIR, UPMC, the 27/09/2013

The MEI Robot: Towards Using Motherese to Develop Multimodal Emotional Intelligence

We introduce the first steps toward a developmental robot called MEI (multimodal emotional intelligence), a robot that can understand and express emotions in voice, gesture and gait using a controller trained only on voice. While it is known that humans can perceive affect in voice, movement, and even music, it is not clear how humans develop this skill. Is it innate? If not, how does this emotional intelligence develop in infants? The MEI robot develops these skills through vocal input and perceptual mapping of vocal features to other modalities. We base MEI’s development on the idea that motherese is used as a way to associate dynamic vocal contours with facial emotion from an early age. MEI uses these dynamic contours as a scaffold to both understand and express multimodal emotions using a unified model called SIRE (Speed, Intensity, irRegularity and Extent). Experiments with MEI show that this voice-trained model can recognize happiness and sadness in human gait and drive the expression of a robot's emotions in speaking, gesturing and walking. Finally, we will present our current scheme for grounding these emotions in low-level robot needs, such as energy levels and temperature.
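
To make the SIRE idea concrete, the sketch below shows a small, modality-independent parameter vector estimated from voice and reused to drive expression in another modality such as gait. The specific vocal features and the gait mapping are hypothetical illustrations, not the mappings used in the MEI work.

```python
# Minimal sketch of a SIRE-style representation (Speed, Intensity,
# irRegularity, Extent). Feature choices and the gait mapping are hypothetical.
from dataclasses import dataclass

@dataclass
class SIRE:
    speed: float         # e.g. speech rate, normalized to [0, 1]
    intensity: float     # e.g. loudness
    irregularity: float  # e.g. jitter / contour roughness
    extent: float        # e.g. pitch range

def sire_from_voice(speech_rate: float, loudness: float,
                    jitter: float, pitch_range: float) -> SIRE:
    """Map (already normalized) vocal features onto the SIRE dimensions."""
    return SIRE(speech_rate, loudness, jitter, pitch_range)

def gait_from_sire(s: SIRE) -> dict:
    """Hypothetical mapping from SIRE to gait parameters of a walking robot."""
    return {
        "step_frequency_hz": 0.5 + 1.5 * s.speed,
        "step_force": 0.2 + 0.8 * s.intensity,
        "timing_jitter": 0.3 * s.irregularity,
        "stride_length_m": 0.1 + 0.4 * s.extent,
    }

# Example: a "happy" voice (fast, loud, fairly regular, wide pitch range).
happy_gait = gait_from_sire(sire_from_voice(0.8, 0.7, 0.2, 0.9))
print(happy_gait)
```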