Call for Papers

MULTIMODAL CORPORA 2018:
Multimodal Data in the Online World

LREC 2018 Workshop
12 May 2018, Phoenix Seagaia Conference Center, Miyazaki, Japan
http://lrec2018.lrec-conf.org/

Deadline for paper submission: 12 January 2018

INTRODUCTION

The creation of a multimodal corpus involves the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression, body posture, gaze, etc. An increasing number of research areas have transitioned, or are in the process of transitioning, from focused single-modality research to full-fledged multimodality research, and multimodal corpora are becoming a core research asset and an opportunity for interdisciplinary exchange of ideas, concepts and data.

We are pleased to announce that in 2018, the 12th Workshop on Multimodal Corpora will once again be co-located with LREC.

This workshop follows similar events held at LREC 2000, 2002, 2004, 2006, 2008, 2010, ICMI 2011, LREC 2012, IVA 2013, LREC 2014, and LREC 2016. The workshop series has established itself as one of the main events for researchers working with multimodal corpora, i.e. corpora involving the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression, body posture, gaze, etc.

SPECIAL THEME AND TOPICS

As always, we aim for a wide cross-section of the field of multimodal corpora, with contributions ranging from collection efforts, coding, validation, and analysis methods to tools and applications of multimodal corpora. Success stories of corpora that have provided insights into both applied and basic research are welcome, as are presentations of design discussions, methods and tools. This year, in line with one of the hot topics of the main conference, we would also like to pay special attention to multimodal corpora collected and adapted from data occurring online, rather than created especially for specific research purposes.

In addition to this year’s special theme, other topics to be addressed include, but are not limited to:

  • Multimodal corpus collection activities (e.g. direction-giving dialogues, emotional behaviour, human-avatar and human-robot interaction, etc.) and descriptions of existing multimodal resources
  • Relations between modalities in human-human interaction and in human-computer or human-robot interaction
  • Multimodal interaction in specific scenarios, e.g. group interaction in meetings or games
  • Coding schemes for the annotation of multimodal corpora
  • Evaluation and validation of multimodal annotations
  • Methods, tools, and best practices for the acquisition, creation, management, access, distribution, and use of multimedia and multimodal corpora
  • Interoperability between multimodal annotation tools (exchange formats, conversion tools, standardization)
  • Collaborative coding
  • Metadata descriptions of multimodal corpora
  • Automatic annotation, based e.g. on motion capture or image processing, and its integration with manual annotations
  • Corpus-based design of multimodal and multimedia systems, in particular systems that involve human-like modalities either in input (virtual reality, motion capture, etc.) or in output (virtual characters)
  • Automated multimodal fusion and/or generation (e.g., coordinated speech, gaze, gesture, facial expressions)
  • Machine learning applied to multimodal data
  • Multimodal dialogue modelling

PROGRAMME

The workshop will consist primarily of paper and poster presentations. In addition, we want to start discussing a shared task involving multimodal corpus development and/or use for predicting communication behaviour. Therefore, prior to the workshop, participants will be asked to submit ideas for such a shared task. The goal is for the task to be launched next time the workshop is held.

There will also be one or two keynote speakers.

IMPORTANT DATES

Deadline for paper submission: 12 January 2018
Notification of acceptance: 9 February 2018
Final version of accepted paper: 23 February 2018
Final program and proceedings: 9 March 2018
Workshop: 12 May 2018

SUBMISSIONS

Submissions should be 4 pages long, must be in English, and must follow LREC's submission guidelines.

Demonstrations of multimodal corpora and related tools are also encouraged (a demonstration outline of 2 pages can be submitted).

Submissions should be made at the following address:
https://www.softconf.com/lrec2018/MMC2018/

TIME SCHEDULE AND REGISTRATION FEE

The workshop will consist of a morning session and an afternoon session.

Registration and fees are managed by LREC – see the LREC 2018 website (http://lrec2018.lrec-conf.org/).

IDENTIFY, DESCRIBE AND SHARE YOUR LANGUAGE RESOURCES (LRs)!

Describing your LRs in the LRE Map is now a normal practice in the submission procedure of LREC (introduced in 2010 and adopted by other conferences). To continue the efforts initiated at LREC 2014 about “Sharing LRs” (data, tools, web-services, etc.), authors will have the possibility, when submitting a paper, to upload LRs in a special LREC repository. This effort of sharing LRs, linked to the LRE Map for their description, may become a new “regular” feature for conferences in our field, thus contributing to creating a common repository where everyone can deposit and share data.

Scientific work requires accurate citation of referenced work, so that the community can understand the whole context and replicate the experiments conducted by other researchers. LREC 2018 therefore endorses the need to uniquely identify LRs through the International Standard Language Resource Number (ISLRN, www.islrn.org), a persistent unique identifier assigned to each Language Resource. The assignment of ISLRNs to LRs cited in LREC papers will be offered at submission time.

ORGANIZING COMMITTEE

Patrizia Paggio
Centre for Language Technology, Univ. of Copenhagen, Denmark
Institute of Linguistics and Language Technology, Univ. of Malta, Msida, Malta

Kirsten Bergmann
Cluster of Excellence in Cognitive Interaction Technology, Univ. Bielefeld, Germany
Institute of Cognitive Science, Univ. Osnabrück, Germany

Jens Edlund
KTH Speech, Music and Hearing, Stockholm, Sweden

Dirk Heylen
Univ. Twente, Human Media Interaction, Enschede, The Netherlands

CONTACT

Patrizia Paggio

Senior Researcher
University of Copenhagen
Centre for Language Technology
paggio@hum.ku.dk

Associate Professor
University of Malta
Institute of Linguistics and Language Technology
patrizia.paggio@um.edu.mt
