Call: CHI 2017 Workshop on Designing Speech, Acoustic, and Multimodal Interactions (DSLI 2017)

Sunday, May 7, 2017
Colorado Convention Center, Denver, CO (USA)

Call for position papers

  • January 25th, 2017: Submission of position papers
  • February 8th, 2017: Notification of acceptance
  • February 22nd, 2017: Camera-ready submissions

Traditional interfaces are increasingly being replaced by mobile, wearable, and pervasive interfaces. Yet when it comes to the input and output modalities enabling our interactions, we have yet to fully embrace some of the most natural forms of communication and information processing that humans possess: speech, language, gestures, and thought. Relatively little HCI attention has been dedicated to designing and developing spoken-language, acoustic-based, or multimodal interaction techniques, especially for mobile and wearable devices. Beyond the enormous recent engineering progress in processing such modalities, there is now sufficient evidence that many real-life applications do not require 100% accuracy in processing multimodal input to be useful, particularly when such modalities complement each other. This multidisciplinary, one-day workshop will bring together interaction designers, usability researchers, and general HCI practitioners to analyze the opportunities and directions for designing more natural interactions, especially with mobile and wearable devices, and to examine how we can leverage recent advances in speech, acoustic, and multimodal processing.

Our goal is to create, through an interdisciplinary dialogue, momentum for increased research and collaboration in:

  • Formally framing the challenges to the widespread adoption of speech, acoustic, and natural language interaction,
  • Taking concrete steps toward developing a framework of user-centric design guidelines for speech-, acoustic-, and language-based interactive systems, grounded in good usability practices,
  • Establishing directions to take and identifying further research opportunities in designing more natural interactions that make use of speech and natural language, and
  • Identifying key challenges and opportunities for enabling and designing multi-input modalities for a wide range of emerging devices such as wearables, smart home personal assistants, or social robots.

We invite the submission of position papers demonstrating research, design, practice, or interest in areas related to speech, acoustic, and multimodal interaction that address one or more of the workshop goals, with an emphasis on, but not limited to, applications such as mobile, wearable, smart home, social robots, or pervasive computing.

Position papers should be 4-6 pages long, in the ACM SIGCHI extended abstract format, and include a brief statement justifying the fit with the workshop's topic. Summaries of previous research are welcome if they contribute to the workshop's multidisciplinary goals (e.g., speech processing research in clear need of HCI expertise). Submissions will be reviewed according to:

  • Fit with the workshop topic
  • Potential to contribute to the workshop goals
  • A demonstrated track record of research in the workshop area (HCI and/or speech, acoustic, or multimodal processing)

Submissions should be sent to:


For information see


Honda Research Institute Japan


Workshop enquiries:


Cosmin Munteanu
University of Toronto Mississauga

Pourang Irani
University of Manitoba

Sharon Oviatt
Incaa Designs

Matthew Aylett

Gerald Penn
University of Toronto

Shimei Pan
University of Maryland, Baltimore County

Nikhil Sharma
Google, Inc.

Frank Rudzicz
Toronto Rehabilitation Institute, University of Toronto

Randy Gomez
Honda Research Institute

Benjamin Cowan
University College Dublin

Keisuke Nakamura
Honda Research Institute


