Call: Virtual Agents for Social Skills Training (VASST) – Journal on Multimodal User Interfaces special issue

CALL FOR PAPERS

Journal on Multimodal User Interfaces
Special Issue: Virtual Agents for Social Skills Training (VASST)

Guest Editors:
Merijn Bruijnes, University of Twente
m.bruijnes@utwente.nl
Jeroen Linssen, University of Twente
j.m.linssen@utwente.nl
Dirk Heylen, University of Twente
d.k.j.heylen@utwente.nl

Paper submission deadline: 30th October 2017

Interactive technology such as virtual agents can improve social skills training curricula. For example, police officers can train interviewing suspects or detecting deception with a virtual agent. Other application areas include (but are not limited to) social work (training to deal with broken homes), psychiatry (training to interview people with various difficulties, vulnerabilities, or personalities), general social skills such as job interviews, and social stress management.

We invite all researchers who investigate the design, implementation, and evaluation of this technology to submit their work to this special issue on Virtual Agents for Social Skills Training (VASST). By this technology we mean virtual agents for social skills training and any supporting technology. The aim of this special issue is to give an overview of recent developments in interactive virtual agent applications aimed at improving social skills. Research on VASST spans multiple research domains: intelligent virtual agents, (serious) game mechanics, human factors, (social) signal processing, user-specific feedback mechanisms, automated education, and artificial intelligence.

SCOPE

We welcome (literature) studies describing the state of the art in sensing user behaviour, reasoning about this behaviour, and generating virtual agent behaviour in training scenarios. Topics related to VASST include, but are not limited to:

  • Recognition and interpretation of (non)verbal social user behaviours;
  • Training and fusion of the user’s signs detected in different modalities;
  • User/student profiling, such as skill level or training style preference;
  • Anonymous processing of user data;
  • Dialogue and turn-taking management;
  • Social-emotional and cognitive models;
  • Automatic improvement of knowledge representations;
  • Coordination of signs to be displayed by the virtual agent in several modalities;
  • Mechanics to support learning, for example:
    • Feedback or after-action review;
    • Personalised scenarios and dialogues;
  • Big data approaches to enrich social interactions;
  • Other topics dealing with innovations for VASST.

TIMELINE

Paper submission deadline: 30th October 2017
Acceptance notifications: 15th January 2018
Final papers: 15th March 2018

SUBMISSION INSTRUCTIONS

Submissions should be around 8-16 pages and must not have been previously published.

Authors are requested to follow the instructions for manuscript submission to the Journal on Multimodal User Interfaces (http://www.springer.com/computer/hci/journal/12193) and to submit manuscripts at the following link: http://www.editorialmanager.com/jmui/. The article type to be selected is “Special Issue S.I. : VASST”.

Editor-in-Chief: Jean-Claude Martin, LIMSI-CNRS, Université Paris-Sud
2015 Impact Factor = 1.017
