Dear Colleagues,

The 2nd International Multimodal Sentiment Analysis in Real-life Media 
Challenge and Workshop (MuSe 2021) 
@ ACM Multimedia, October 2021, Chengdu, China

is now open: 
https://www.muse-challenge.org



Description

The Multimodal Sentiment Analysis Challenge and Workshop (MuSe) focuses on 
multimodal sentiment recognition of data sourced from user-generated content 
and stress-induced situations. The competition aims to compare multimedia 
processing and deep learning methods for automatic audiovisual, biological, and 
text-based sentiment and emotion sensing under a common set of experimental 
conditions.

The goal of the challenge is to provide a common, benchmarkable test set for 
multimodal information processing and to bring together the Affective 
Computing, Sentiment Analysis, and Health Informatics communities to compare 
the merits of multimodal fusion across a large number of modalities under 
well-defined conditions. A further motivation is the need to advance sentiment 
and emotion recognition systems so that they can handle previously unexplored 
naturalistic behaviour in large volumes of in-the-wild data. The raw video 
recordings, transcriptions, pre-processed features, and baseline models are 
available on our website.

We are calling for teams to participate in four Sub-Challenges:

Multimodal Continuous Emotions in-the-Wild Sub-challenge (MuSe-Wilder)
Predicting the level of emotional dimensions (valence, arousal) in a 
time-continuous manner from audio-visual recordings.

Multimodal Sentiment Classification Sub-challenge (MuSe-Sent)
Predicting five intensity classes of emotion, based on valence and 
arousal, for segments of audio-visual recordings.

Multimodal Emotional Stress Sub-challenge (MuSe-Stress)
Predicting the level of emotion (dimensions of arousal, valence) in a 
time-continuous manner from biological signals and audio-visual recordings.

Multimodal Biosignal Affect Sub-challenge (MuSe-Physio)
Predicting the combined signal of human-annotated arousal and electrodermal 
activity (i.e., physiological arousal) in a time-continuous manner from 
audio-visual-text data and biological signals.



Important Dates 


Challenge opening: 01 April 2021

Paper submission: Late July 2021

Notification of acceptance: Late August 2021

Camera-ready paper: Early September 2021

Workshop: 20-24 October 2021




Organisers 


Björn W. Schuller, Imperial College London, UK, schul...@ieee.org

Erik Cambria, NTU/SenticNet, SG, camb...@ntu.edu.sg

Eva-Maria Meßner, Ulm University, DE, eva-maria.mess...@uni-ulm.de

Guoying Zhao, University of Oulu, FI, guoying.z...@oulu.fi

Lukas Stappen, University of Augsburg, DE, stap...@ieee.org




Welcome to the Challenge!


Best wishes,

Björn Schuller 
On behalf of the organisers







___________________________________________

Univ.-Prof. mult. Dr. habil. 
Björn W. Schuller, 
FBCS, Fellow ISCA, FIEEE

Professor and Chair of Embedded Intelligence for Health Care and Wellbeing
University of Augsburg / Germany

Professor of Artificial Intelligence
Head GLAM - Group on Language, Audio & Music
Imperial College London / UK

CSO/MD audEERING GmbH
Germany

Field Chief Editor Frontiers in Digital Health

schul...@ieee.org
www.schuller.one


_______________________________________________
uai mailing list
uai@engr.orst.edu
https://it.engineering.oregonstate.edu/mailman/listinfo/uai
