Call: Appropriate Trust in Human-AI Interactions – ECSCW 2022 Workshop

Call for Papers

Appropriate Trust in Human-AI Interactions
Workshop held in conjunction with ECSCW 2022, The 20th European Conference on Computer-Supported Cooperative Work
Coimbra, Portugal
June 27, 2022
Workshop: https://websites.fraunhofer.de/trust-in-human-ai-interaction/
ECSCW 2022: https://ecscw.eusset.eu/2022/

Submission deadline: April 22, 2022

AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. An appropriate level of trust is therefore essential: users must know when to rely on the system and when to override it. While research has extensively addressed fostering trust in human-AI interactions, the lack of standardized procedures for studying human-AI trust hinders the interpretation of results and cross-study comparisons. As a result, the fundamental understanding of human-AI trust remains fragmented.

This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust to answer the open questions:

  1. What does trust mean between humans and AI in different contexts?
  2. How can we create and convey a calibrated level of trust in interactions with AI?
  3. How can we develop a standardized framework to address new challenges?

CALL FOR PARTICIPATION

This one-day workshop aims to provide a forum for researchers, practitioners, and activists to discuss challenges in building trust and to start working on solutions that are more practical and viable to adopt across different AI interaction contexts. Topics include, but are not limited to:

  • Definitions of trust and reliance.
  • Interpersonal trust and lessons from social sciences.
  • Qualitative and quantitative methods for building and evaluating trust.
  • Challenges of designing appropriate trust and tradeoffs with other objectives.
  • Solutions (and their limitations) for promoting appropriate trust (e.g., XAI, control mechanisms, human agency, communicating uncertainty, etc.).
  • Safety mechanisms for when trust is broken.

We invite anyone interested in participating to submit a paper of up to 4 pages (not including references). The template can be found at https://www.acm.org/publications/taps/word-template-workflow. Papers should critically reflect upon the authors’ experiences from a field or research area related to the challenges they face when building trust in AI interactions. Authors’ prior experience does not have to be specifically concerned with these challenges, but position papers are expected to demonstrate how that experience is relevant to the workshop’s topic and can be applied within the workshop’s context.

Submissions should be sent to fatemeh.alizadeh@uni-siegen.de in PDF format. Position papers will be reviewed based on relevance and potential contribution to the workshop. At least one co-author of each accepted paper must register for the ECSCW 2022 conference to attend the workshop.

IMPORTANT DATES

  • April 22: Submission Deadline.
  • May 6: Notification of acceptance.
  • June 27: Workshop at ECSCW 2022.

ORGANIZERS

Fatemeh Alizadeh, University of Siegen
Oleksandra Vereschak, Sorbonne Université
Dominik Pins, Fraunhofer Institute for Applied Information Technology (FIT)
Gunnar Stevens, University of Siegen
Gilles Bailly, Sorbonne Université
Baptiste Caramiaux, Sorbonne Université
