Call: The Philosophy of Generative AI: Perspectives from East and West – Special issue of Synthese

[See also the Call for another special issue of Synthese about “Epistemic Agency in the Age of AI.” -Matthew]

Call for Papers:

The Philosophy of Generative AI: Perspectives from East and West
Special issue of Synthese, an International Journal for Epistemology, Methodology and Philosophy of Science
Hybrid publishing model (Traditional and optional Open Access with Article Processing Charge)
https://link.springer.com/collections/hbhigbabhd

Deadline for submissions: June 1, 2026

Generative AI, a rapidly advancing field of artificial intelligence, has the capacity to create original and often remarkably convincing content across a wide range of domains. Built on sophisticated machine learning models, generative AI systems simulate aspects of human learning and decision-making to detect patterns and relationships in vast datasets. They then leverage this knowledge to respond to users’ natural language queries in ways that increasingly resemble human reasoning and creativity. At the same time, important challenges remain, particularly concerning the reliability of their outputs, the accuracy of their reasoning, and the explainability of their underlying processes, as well as the broader philosophical implications these issues raise.

This special issue invites contributions that explore the philosophical implications of generative AI and examine what philosophy can contribute to its development and understanding. We particularly encourage submissions that bring together Eastern and Western perspectives, fostering dialogue across traditions and intellectual borders.

THEMES AND AREAS OF INTEREST

Contributions may address, but are not limited to, the following areas or domains:

  • Logic: How can we evaluate the reasoning capacity of generative AI? What can we learn from the existing benchmarks established for LLMs? In what ways might generative AI enrich traditional accounts of logical reasoning? How does symbolic logic interface with LLM reasoning, and with probabilistic reasoning more generally? Consistency has been a major challenge for LLMs; what are the best ways to handle inconsistency? As the current face of AI, how can generative models be combined with traditional symbolic reasoning? And how is logical reasoning connected to explainability and causality in the context of generative AI?
  • Epistemology: What are the epistemic implications of relying on generative AI for knowledge production, justification, and understanding? How can generative AI, with its capacity to generate content and act as an “epistemic broker”, influence human sense-making? How could it lead to an ‘illusion of understanding’ and/or undermine epistemic agency? In addition, how could this technology perpetuate existing societal prejudices, leading to biased outputs?
  • Psychology: How does generative AI interact with, model, or illuminate human cognitive and psychological processes? What tools could it offer for simulating complex social interactions and analyzing behavioral data? Could the predictive capacities of generative AI (in terms of memory and cause-effect relationships, for example) rival or even surpass human cognition? And given its capacity to navigate vast amounts of information, could it provide culturally sensitive solutions to complex problems?
  • Neuroscience: How can generative AI help with large-scale data analysis and thereby support the building of foundation models of neural activity? Could generative AI models serve as powerful tools for building mechanistic understandings of the brain? Could they help neuroscientists identify trends in understudied problems and inspire future research directions, possibly leading to the formulation of new hypotheses?
  • Philosophy of Mind and Cognition: Given our proclivity to build hybrid thinking systems that fluidly combine biological and non-biological resources, could generative AI help us transcend our human boundaries and cognitive abilities? Could it favor the emergence of extended minds, and what challenges might this process pose to our human nature?

Exceptional contributions addressing the ethics of generative AI and responsible AI may be considered—particularly when framed through practical issues arising from one of the five key areas listed above.

SUBMISSION INFORMATION

Important dates:

  • Paper submission deadline: June 1, 2026
  • Notification of acceptance: September 1, 2026
  • Expected publication: November 1, 2026

EDITORIAL PROCESS

  • When submitting your paper online, please select “SI: The Philosophy of Generative AI: Perspectives from East and West” in the drop-down menu “Article Type”.
  • Papers do not ordinarily exceed 10,000 words.
  • All papers will undergo the journal’s standard review procedure (double-blind peer-review), according to the journal’s Peer Review Policy, Process and Guidance.
  • Reviewers will be selected according to the Peer-Reviewer Selection policies.
