3rd International Workshop on
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction (MA3HMI 2016)
November 16th, 2016 in Tokyo, Japan.
A satellite workshop of the 18th ACM International Conference on Multimodal Interaction (ICMI 2016).
Submission Deadline: August 28th, 2016
Scope:
One of the aims in building multimodal user interfaces and combining them with technical devices is to make the interaction between user and system as natural as possible. The most natural form of interaction may be the way we interact with other humans. Current technology is still far from human-like, and systems reflect a wide range of technical solutions. Transferring insights from the analysis of human-human communication to human-machine interaction remains challenging: the user's multimodal inputs (e.g., speech, gaze, facial expressions) must be recorded and interpreted, and this interpretation has to occur at both the semantic and affective levels, covering aspects such as the personality, mood, or intentions of the user. These processes have to be performed in real time so that the system can respond without delay and the interaction remains smooth.

The MA3HMI workshop brings together researchers working on the analysis of multimodal data as a means to develop technical devices that can interact with humans. Artificial agents are understood in their broadest sense, including virtual chat agents, empathic speech interfaces and life-style coaches on a smartphone. More generally, multimodal analyses support any technical system in the research area of human-machine interaction.

The workshop focuses on the real-time aspects of human-machine interaction and addresses the development and evaluation of multimodal, real-time systems. We solicit papers concerning all phases of the development of such interfaces. Tools and systems that address real-time conversations with artificial agents and technical systems are also within the scope of the workshop.
Topics of interest include (but are not limited to):
- Multimodal Annotation
  - Representation formats for merged multimodal annotations
  - Best practices for multimodal annotation procedures
  - Innovative multimodal annotation schemes
  - Annotation and processing of multimodal data sets
  - Real-time or on-the-fly annotation approaches
- Multimodal Analyses
  - Multimodal understanding of user behavior and affective state
  - Dialogue management using multimodal output
  - Evaluation and benchmarking of human-machine conversations
  - Novel strategies of human-machine interaction
  - Using multimodal data sets for human-machine interaction
- Applications, Tools, and Systems
  - Novel application domains and embodied interaction
  - Prototype development and uptake of technology
  - User studies with (partially) functional systems
  - Tools for the recording, annotation and analysis of conversations
Important Dates:
Submission Deadline: August 28th, 2016
Notification of Acceptance: October 2nd, 2016
Camera-ready Deadline: October 9th, 2016 (fixed date)
Workshop Date: November 16th, 2016
Submissions:
Prospective authors are invited to submit full papers (8 pages) and short papers (5 pages) in ACM format as specified by ICMI 2016. Accepted papers will be published as post-proceedings in the ACM Digital Library. All submissions should be anonymous.
Organisers:
Ronald Böck, Otto von Guericke University Magdeburg, Germany
Francesca Bonin, IBM Research, Ireland
Nick Campbell, Trinity College Dublin, Ireland
Ronald Poppe, Utrecht University, The Netherlands