Multimodal Communication Symposium 2023

MMSYM 23
Type Symposium
Categories (tags) Uncategorized
Dates 2022/08/08 - 2022/09/30
Link http://mmsym.org/
Address
Geolocation 41° 22' 44", 2° 10' 47"
Abstract due 2022/09/30
Submission deadline
Final version due
Notification date 2022/12/15
Tweet The Multimodal Communication Symposium @mmsym2023 is accepting abstracts until 30 Sept 22! If multimodal interaction is of interest, visit http://mmsym.org/ for more info! #EMCA #LSI

Details:

The 1st International Multimodal Communication Symposium, MMSYM 2023, aims to provide a multidisciplinary forum for researchers across disciplines who study multimodality in human communication as well as in human-computer interaction. The 2023 edition of the symposium is organised by the GrEP Research Group (Prosodic Studies Group) of the Department of Translation and Language Sciences at Universitat Pompeu Fabra, Barcelona, Catalonia, in conjunction with the GEHM research network (GEstures and Head Movements in Language, https://cst.ku.dk/english/projects/gestures-and-head-movements-in-language-gehm/).

The symposium follows up on a tradition established by the Swedish Symposia on Multimodal Communication held from 1997 until 2000, and continued by the Nordic Symposia on Multimodal Communication held from 2003 to 2012. Since 2013 the symposium has acquired a broader European dimension, with editions held in Malta, Estonia, Ireland, Denmark, Germany and Belgium. This year the symposium will be held in Spain for the first time and has a truly international ambition, hence the new name.

This year we focus on three research themes of particular interest to the GEHM network. The first is language-specific characteristics of gesture-speech interaction, which seeks to account for how speakers' ability to process and produce gesture and speech is shaped by their language profile. The second is multimodal prominence, which investigates the theoretical question of how linguistic prominence is expressed through combinations of kinematic and prosodic features. The third is conceptual and statistical modelling of multimodal contributions, with particular regard to head movements and the use of gaze.