Call for Participation:
“‘I’d Rather Talk to an AI’: Examining the Moral Risks of Outsourcing Belief Revision to ChatGPT”
2025 Interdisciplinary Speaker Series on the Ethics of Argumentation
Argumentation Network of the Americas
A free online talk on Friday, August 1, 2025, at 1 PM ET [also available later via recording]
https://www.argnet.org/ethics-of-arg
We are excited to announce the eighth talk in the 2025 Interdisciplinary Speaker Series on the Ethics of Argumentation! The series takes place on the first Friday of every month at 1 PM ET. It features distinguished speakers from many fields, including philosophy, political science, communication studies, and jurisprudence, as well as public intellectuals.
The list of abstracts for this year's talks can be found at:
https://www.argnet.org/ethics-of-arg
There you will also find information about our upcoming speakers, and you can watch past talks on our YouTube channel:
https://www.youtube.com/channel/UCF50_BXQYXwcqFfLdXav5rg
IF YOU WOULD LIKE TO SEE A TALK, PLEASE EMAIL Katharina.stevens@uleth.ca. You will be added to a mailing list that receives the Zoom links for the talks each month. We will assume that those who subscribed in previous years would like to remain subscribed, but please email us if you would like to be removed from the list.
We look forward to the eighth talk of our 2025 series, on August 1st at 1 PM ET, by Martina Orlandi:
“I’d Rather Talk to an AI”: Examining the Moral Risks of Outsourcing Belief Revision to ChatGPT
Convincing people that their beliefs are unwarranted is a notoriously challenging task. Granted, nobody enjoys being lectured, but the real culprit is that individuals often have non-epistemic interests that motivate them to hold onto certain beliefs. This is the case with conspiracy theorists. The standard view in both psychology and philosophy holds that conspiracy theorists are drawn to conspiracy theories for non-epistemic reasons, and that reiterating the evidence is not only insufficient for belief revision but can also have a boomerang effect (Douglas et al. 2017; Horne et al. 2015).
However, in April 2024 a comprehensive study by a group of psychologists at MIT challenged this received wisdom with a surprising result: while individuals struggle to persuade conspiracy theorists that they are wrong, ChatGPT can successfully change their minds by engaging them in evidence-based dialogue (Costello et al. 2024). What's more, this change appears to be durable, lasting for months.
What should we philosophers make of these results? In this talk, I examine the philosophical import of outsourcing belief revision to AI. Insofar as abandoning false beliefs is epistemically rational, ChatGPT seems to bring about positive consequences by leading conspiracy theorists to revise their beliefs in light of factual evidence. However, I argue that outsourcing belief revision to AI also carries ethical risks by undermining moral growth. For example, when it comes to morally loaded conspiracy theories that target particular segments of the population (such as those that drive distrust in scientists or target vulnerable minorities), reconciling with factual beliefs can also restore trust in those targets, thereby bringing about positive collective consequences by strengthening the social fabric. But this can occur only when true beliefs are delivered by other persons. Outsourcing belief revision to ChatGPT necessarily eradicates such positive returns because it undermines this relational benefit.
ABOUT THE SPEAKER:
Martina Orlandi is Assistant Professor in Applied Artificial Intelligence at Trent University Durham and an affiliate member of the Rock Ethics Institute at the Pennsylvania State University.
Her research focuses on philosophy of mind and action, philosophical implications of AI, epistemology, and moral psychology.
She is currently writing a book for Bloomsbury's Why Philosophy Matters series on how resilience can lead to irrational and maladaptive choices, expected to be published in 2026. She is also a public philosopher, and her recent article on how fiction gives us special insight into our flaws was published in Aeon Psyche.
See here for her thoughts on self-deception as a normative violation.
See here for an argument about what it means when people say they “knew something all along”, which has been discussed in this blog entry.