Call for Papers for a Theme Section on Trust and AI
ACM Transactions on Internet Technology (TOIT)

Submission deadline: November 1, 2018

Trust is critical in building effective AI systems. It characterizes the elements essential to social reliability, whether in human–agent interaction or in how autonomous agents select partners and coordinate with them. Many computational and theoretical trust models and reputation approaches have been developed using AI techniques over the past twenty years. However, several principal issues remain open, including bootstrapping; the causes and consequences of trust; trust propagation in heterogeneous systems where agents may use different assessment procedures; group trust modelling and assessment; trust enforcement; and trust and risk analysis.

Increasingly, there is also a need to understand how human users trust AI systems that have been designed to act on their behalf. This trust can be engendered through effective transparency and lack of bias, as well as through successful attention to user needs.

The aim of this theme section is to bring together world-leading research on issues related to trust and artificial intelligence. We invite the submission of novel research on multiagent trust modelling, assessment, and enforcement, as well as on how to engender trust in, and transparency of, AI systems from a human perspective. The scope of the theme includes:

  • Trust in Multi-Agent Systems: socio-technical systems and organizations; service-oriented architectures; social networks; and adversarial environments
  • Trustworthy AI Systems: detecting and addressing bias and improving fairness; trusting automation for competence; understanding and modelling user requirements; improving transparency and explainability; and accountability and norms
  • AI for combating misinformation: detecting and preventing deception and fraud; intrusion resilience in trusted computing; online fact checking and critical thinking; and detecting and preventing collusion
  • Modelling and Reasoning: game-theoretic models of trust; socio-cognitive models of trust; logical representations of trust; norms and accountability; reputation mechanisms; and risk-aware decision making
  • Real-world Applications: e-commerce; security; IoT; health; advertising; and government.


Submissions: November 1, 2018
Preliminary decisions: January 15, 2019
Revisions: April 1, 2019
Final decisions: May 15, 2019
Final versions: June 15, 2019
Publication date: Fall 2019


To submit a paper, please follow the standard instructions:

Please select “Theme Section: Trust and AI” in the Manuscript Central website

Contact Email Address:


Munindar P. Singh
Department of Computer Science, North Carolina State University


Jie Zhang
Nanyang Technological University

Jamal Bentahar
Concordia University

Rino Falcone

Timothy J. Norman
University of Southampton

Murat Şensoy
Ozyegin University

