Call: Multimodal Corpora – LREC 2016 Workshop

Call for Papers

LREC 2016 Workshop

Multimodal Corpora:
Computer vision and language processing

24 May 2016, Grand Hotel Bernardin Conference Center, Portorož, Slovenia

http://www.multimodal-corpora.org/

Submission deadline: 12 February 2016

The creation of a multimodal corpus involves the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression, body posture, gaze, etc. An increasing number of research areas have moved, or are in the process of moving, from focused single-modality research to full-fledged multimodality research, and multimodal corpora are becoming a core research asset and an opportunity for interdisciplinary exchange of ideas, concepts and data.

We are pleased to announce that in 2016, the 11th Workshop on Multimodal Corpora will once again be collocated with LREC (the Language Resources and Evaluation Conference).

THEME

As always, we aim for a wide cross-section of the field, with contributions ranging from collection efforts, coding, validation, and analysis methods to tools and applications of multimodal corpora. Success stories of corpora that have provided insights into both applied and basic research are welcome, as are presentations of design discussions, methods and tools. This year, we would also like to pay special attention to the integration of computer vision and language processing techniques – a combination that is becoming increasingly important as the amount of accessible video and speech data grows. The special theme for this instalment of Multimodal Corpora is how processing techniques for vision and language can be combined to manage, search, and process digital content.

This workshop follows previous events held at LREC 2000, 2002, 2004, 2006, 2008 and 2010, ICMI 2011, LREC 2012, IVA 2013, and LREC 2014. All workshops are documented at www.multimodal-corpora.org and are complemented by a special issue of the Journal of Language Resources and Evaluation published in 2008, a state-of-the-art book published by Springer in 2009, and a special issue of the Journal of Multimodal User Interfaces currently in press. This year, selected contributors, together with selected contributors from 2009 onwards, will be invited to submit extended versions of their papers for a new publication gathering recent research in the area.

The LREC 2016 workshop on Multimodal Corpora will feature a special session on the combination of processing techniques for vision and language.

Other topics to be addressed include, but are not limited to:

  • Multimodal corpus collection activities (e.g. direction-giving dialogues, emotional behaviour, human-avatar and human-robot interaction, etc.) and descriptions of existing multimodal resources
  • Relations between modalities in human-human interaction and in human-computer interaction
  • Multimodal interaction in specific scenarios, e.g. group interaction in meetings or games
  • Coding schemes for the annotation of multimodal corpora
  • Evaluation and validation of multimodal annotations
  • Methods, tools, and best practices for the acquisition, creation, management, access, distribution, and use of multimedia and multimodal corpora
  • Interoperability between multimodal annotation tools (exchange formats, conversion tools, standardization)
  • Collaborative coding
  • Metadata descriptions of multimodal corpora
  • Automatic annotation, based e.g. on motion capture or image processing, and its integration with manual annotations
  • Corpus-based design of multimodal and multimedia systems, in particular systems that involve human-like modalities either as input (virtual reality, motion capture, etc.) or as output (virtual characters)
  • Automated multimodal fusion and/or generation (e.g., coordinated speech, gaze, gesture, facial expressions)
  • Machine learning applied to multimodal data
  • Multimodal dialogue modelling

IMPORTANT DATES

  • Deadline for paper submission (complete paper): 12 February
  • Notification of acceptance: 20 March
  • Final version of accepted paper: 1 April
  • Final program and proceedings: 2 May
  • Workshop: 24 May

SUBMISSIONS

The workshop will consist primarily of paper presentations and discussion/working sessions. Submissions should be 4 pages long, must be in English, and must follow the LREC submission guidelines.

Demonstrations of multimodal corpora and related tools are also encouraged (a 2-page demonstration outline may be submitted).

Submissions are made at https://www.softconf.com/lrec2016/MMC2016/.

TIME SCHEDULE AND REGISTRATION FEE

The workshop will consist of a morning session and an afternoon session. There will be time for collective discussions.

Registration and fees are managed by LREC – see the LREC 2016 website (http://lrec2016.lrec-conf.org/).

ORGANIZING COMMITTEE

Jens Edlund, KTH Royal Institute of Technology, Sweden
Dirk Heylen, University of Twente, The Netherlands
Patrizia Paggio, University of Copenhagen, Denmark/University of Malta, Malta
