Call: Designing Speech Synthesis for Intelligent Virtual Agents – IVA 2019 Workshop

Call for Papers

Workshop: Designing Speech Synthesis for Intelligent Virtual Agents
At ACM IVA 2019 (https://iva2019.sciencesconf.org)
2nd July 2019
Paris, France
http://homepages.inf.ed.ac.uk/matthewa/speechIVA2019wshop

Extended submission deadline: April 26, 2019

OVERVIEW

In this workshop we will look at the elements of an artificial voice that support embodied (either digital or tangible) and disembodied forms of an intelligent virtual agent (IVA). In this context the agent can be seen to perform, or play a role, where naturalness of the voice may be subservient to appropriateness, and communicating the character of the agent can be as important as the information it presents. We will introduce the current ways a voice can be given character, how this can be designed to fit a specific agent, and how such a voice can be effectively deployed.

The intended audience is academics and industry researchers who are tired of seeing their carefully created conversational agents spoilt by inappropriate speech synthesis voices, and who want to lead the way in making speech synthesis take into account the design requirements of IVAs. The workshop is not primarily for speech technologists (although they are of course welcome) but rather for the engineers and scientists who are exploring the use of IVAs and are curious to see how modern speech synthesis can dramatically alter the way such agents are perceived and used.

ATTENDING

Attendees are encouraged to also attend the main conference, but this is not a requirement. We ask those interested to email matthewa@cereproc.com with the following information by April 26th, 2019.

  1. A 100-200 word biography and your main motivation for attending.
  2. A sketch of an actual or imagined embodied agent that is intended to use speech to converse or to present dynamic content. The sketch could include pictures, example content and design motivations, and could vary from well specified to speculative, and from ground-breaking to well-understood domains. It could range from a paragraph to a couple of pages. These pre-designs will be used to help the organizers select and formulate a design challenge of interest to the attendees.

Five of the submissions will be chosen for a short presentation at the workshop (7 minutes, plus 3 minutes for questions). Please indicate if you would like to be considered for this. The chosen submissions will aim to give a varied and provocative view of potential use cases and speech interface designs.

ORGANISERS

Matthew Aylett is Chief Science Officer at CereProc Ltd. He has over 20 years’ experience in commercial speech synthesis and speech synthesis research. He is a founder of CereProc, which offers unique emotional and characterful synthesis solutions, and he has been awarded a Royal Society Industrial Fellowship to explore the role of speech synthesis in the perception of character in artificial agents. His alt.chi position paper in 2014 on the relationship between speech technology and HCI won a best presentation award.

Cosmin Munteanu is an Assistant Professor at the Institute for Communication, Culture, Information, and Technology at the University of Toronto Mississauga. His research includes speech and natural language interaction for mobile devices, mixed reality systems, learning technologies for marginalized users, usable cyber-safety, assistive technologies for older adults, and ethics in human-computer interaction research.

Leigh Clark is a Postdoctoral Research Fellow in the School of Information & Communication Studies at University College Dublin. His research examines the communicative and user experience aspects of people’s interactions with speech interfaces, how context impacts perceptions of computer speech, and how linguistic theories can be implemented and redefined in speech-based HCI.

Benjamin R Cowan is an Assistant Professor at University College Dublin’s School of Information & Communication Studies. His research lies at the juncture of psychology, human-computer interaction and computer science, investigating how theoretical perspectives in human communication can be applied to understand phenomena in speech-based human-machine communication.

Ari Shapiro is a Research Assistant Professor in the Department of Computer Science at the University of Southern California. He heads the Character Animation and Simulation research group at the USC Institute for Creative Technologies, where his focus is on synthesizing realistic animation for virtual characters. Shapiro has published many academic articles on computer graphics and animation for virtual characters, and is a nine-time SIGGRAPH speaker.

Blaise Potard is a Senior Researcher at CereProc Ltd. His research revolves around the application of machine learning methodologies to speech synthesis, in particular for the transfer of speech style and speaker characteristics. He is one of the main contributors to Idlak, a free speech synthesis framework.
