Call: “Trust in AI vs Human Experts: Cognitive, Social, and Emotional Perspectives” for Discover Psychology

Call for Papers:

Trust in Artificial Intelligence vs Human Experts: Cognitive, Social, and Emotional Perspectives
A collection for the journal
Discover Psychology
https://link.springer.com/collections/iigdgbbdbg

Editors:

  • Dr. Dimitri Volchenkov, Department of Mathematics and Statistics, Texas Tech University, United States
  • Dr. Ori Swed, Department of Sociology, Anthropology, and Social Work, Texas Tech University, United States

Deadline for submissions: March 1, 2026

As AI systems become increasingly embedded in high-stakes decision-making, from medical diagnoses to financial advice, understanding how people trust AI relative to human experts is crucial. Psychological research reveals mixed patterns. On one hand, many individuals exhibit algorithm aversion, showing reluctance to rely on algorithmic decisions and preferring human judgment even when an algorithm consistently outperforms human experts. People tend to lose confidence in an AI system quickly after it makes a mistake, becoming less likely to choose it over an arguably inferior human advisor. On the other hand, under certain conditions, studies document algorithm appreciation, where people put more weight on advice if they believe it came from an AI rather than from a person. For instance, lay participants adhered more strongly to advice labeled as coming from an algorithm in tasks requiring objective analysis. In more subjective or personal domains (such as healthcare or hiring decisions), however, trust often shifts back toward human experts: people fear an AI might overlook personal nuances, and they value the empathy and individualized understanding of a human advisor. These contrasts suggest that trust in AI vs. humans is highly context-dependent, with cognitive and emotional factors moderating people’s openness to algorithmic guidance.

This collection in Discover Psychology, titled “Trust in Artificial Intelligence vs Human Experts: Cognitive, Social, and Emotional Perspectives,” invites submissions that delve into the psychological mechanisms and boundary conditions of trust in AI-based versus human expert decision-making. Key themes and questions include:

TRANSPARENCY AND EXPLAINABILITY:

How do transparency and explainable AI affect user trust? For example, does providing clearer insight into an algorithm’s workings or rationale foster greater trust? Initial evidence suggests that transparency is crucial for calibrating appropriate trust in AI, helping counteract excessive skepticism or aversion. We seek studies on when and how explanation interfaces (e.g. interpretable models, reason codes) bolster confidence in AI recommendations.

PERCEIVED EXPERTISE AND CREDIBILITY:

What is the impact of perceived expertise on whether people trust an AI system or a human expert? If an AI is seen as an objective, data-driven analyst versus a human who is deemed more knowledgeable or experienced in a domain, how do these perceptions shift trust? Prior work indicates that people acknowledge algorithms’ superior objectivity in certain tasks, yet may still favor human judgment in domains where subjective experience and contextual knowledge are valued. We welcome research on credibility heuristics—for instance, do users trust AI more in analytical domains while preferring human experts for decisions requiring intuition or ethical reasoning?

AWARENESS OF BIASES:

How does awareness of bias (either in algorithmic models or in human decision-makers) influence trust? People might appreciate that a well-designed algorithm can avoid certain human biases, which could increase trust in objective scenarios. Conversely, knowledge of potential algorithmic biases or fairness issues could undermine trust in AI. Submissions might explore whether informing users about an AI’s bias mitigation (or a human expert’s biases) alters their reliance on recommendations.

COGNITIVE INTERPRETATIONS OF ERRORS AND UNCERTAINTY:

What cognitive processes underlie how people interpret AI errors, uncertainty estimates, or failures? For example, do individuals overweight a single AI mistake as indicative of the technology’s overall reliability? Research shows that a visible error can disproportionately erode confidence in an AI, even if the AI is statistically more accurate over time. We encourage studies on how providing uncertainty information or performance metrics (e.g. confidence scores, error rates) might help users rationally adjust their trust—neither overreacting to minor glitches nor exhibiting blind faith.

SOCIAL AND EMOTIONAL FACTORS:

How do social and emotional dynamics shape trust in AI vs. human experts? Emotional responses like fear of dehumanization, lack of empathy, or discomfort with machine-like interactions can reduce trust in AI advisors in personal contexts. Social factors are also critical: for instance, anthropomorphism of AI (perceiving it as more human-like) might increase trust in some cases, while clear non-human framing might increase acceptance in others. Likewise, endorsements by authorities or experts can significantly sway trust—studies find that patients were more willing to accept AI-driven medical advice when their human doctor supported the AI’s use. We welcome research into how empathy, rapport, and social presence (or their absence) in AI systems influence user trust, as well as how cultural or individual differences play a role in these emotional perceptions.

HUMAN–AI COLLABORATION AND OVERSIGHT:

Under what circumstances do people trust outcomes most when algorithms and human experts work in tandem? Emerging evidence suggests that the highest levels of trust may be achieved when AI recommendations are vetted or complemented by human judgment, marrying computational efficiency with human intuition. For example, IT professionals in one study trusted decisions most when an AI’s analysis was combined with human expert oversight, relative to AI alone. We seek insights into designing effective human–AI collaboration frameworks and how shared decision authority impacts user confidence. Who acts as the “check and balance” (AI or human), and how does that dynamic influence trust and acceptance of the final decision?

Given the growing societal role of AI across industries and domains, this topic is extremely timely. We encourage submissions of both rigorous empirical studies and insightful theoretical or review papers that advance our understanding of trust dynamics in human–AI interactions. By bringing together diverse perspectives on when people trust AI systems versus human experts—and why—this collection aims to deepen psychological theory on trust and provide practical guidance for developing AI systems that are transparent, fair, and trustworthy. Ultimately, a richer understanding of how trust is earned, calibrated, or lost in artificial vs. human decision-makers will help society leverage AI in responsible, human-centered ways.

KEYWORDS: Trust in AI, Human experts, Algorithm aversion, Algorithm appreciation, Explainable AI, Transparency, Uncertainty communication, Perceived expertise, Empathy, Affect, Anthropomorphism, Human-AI collaboration, Trust calibration and repair, Bias and fairness.

Submit your manuscript to this collection through the participating journal at https://link.springer.com/journal/44202

