Call: “AI, Misinformation, and the Future of Algorithmic Fact-Checking” special issue of JOBEM

Call for Papers:

AI, Misinformation, and the Future of Algorithmic Fact-Checking
A special issue of the Journal of Broadcasting & Electronic Media (JOBEM)
https://beaweb.org/wp/jobem-ai-misinformation-and-the-future-of-the-algorithmic-fact-checking/

Special Issue Editor:
Don Shin, College of Media and Communication, Texas Tech University – don.h.shin@ttu.edu

Deadline for submissions: December 30, 2025
Publication Date: September 2026

In early 2024, an AI fact-checking system flagged a viral image of a protest scene as “misleading content,” drastically reducing its reach on social media. The image, showing police officers confronting demonstrators, was not altered or generated by AI—the caption attached to it falsely claimed it depicted a protest in a different country. While the AI model accurately detected inconsistencies in metadata and flagged potential context manipulation, human fact-checkers later found that similar authentic images were left unchecked, creating a perception of selective enforcement. The backlash was swift—some accused AI of suppressing dissent, while others criticized its failure to catch misinformation in real time.

As AI takes on a more prominent role in fact-checking, it creates new problems of its own—bias, blind spots, and unintended consequences that shape what we see and believe online (Chae & Tewksbury, 2024). According to DeVerna et al. (2024), false or misleading information can erode trust in institutions, deepen political polarization, and influence public health and democratic processes. Jia and Lee (2024) highlight that, with the proliferation of electronic media and social platforms, misinformation spreads rapidly across global networks, necessitating robust, scalable, and adaptive fact-checking solutions. The controversy raises urgent questions about fairness, accountability, and trust in AI technologies meant to protect public discourse (Westlund et al., 2024).

As AI fact-checkers improve, misinformation tactics will evolve to evade detection, leveraging deepfakes, narrative manipulation, and hyper-personalized disinformation to subtly shape public perception. This AI-misinformation feedback loop threatens to degrade information quality, reinforce biases, and homogenize discourse (Chung et al., 2023). To address these challenges, AI fact-checking must become more transparent, explainable, and proactive, incorporating digital provenance tools and cross-platform verification networks. The challenge extends beyond combating misinformation; it is about preserving trust in information itself. As AI plays a greater role in knowledge production, a fundamental question arises: who controls the truth? This shift concentrates epistemic authority, raising critical concerns about who determines what is accurate, credible, and valuable in an AI-driven information landscape.

Against this backdrop, this Special Issue seeks contributions that explore the role of AI in combating misinformation and enhancing fact-checking in electronic media. We invite theoretical, empirical, and interdisciplinary studies examining AI’s capabilities, limitations, and societal implications in this domain. Our goal is to foster discourse among academics, media professionals, policymakers, and fact-checking organizations, advancing a comprehensive understanding of AI’s transformative impact on fact verification.

We welcome international perspectives and diverse methodologies addressing AI fact-checking across different cultural, political, and technological contexts. Submissions should investigate technological advancements, ethical concerns, policy implications, and user engagement strategies to promote effective and trustworthy fact-checking.

TOPICS OF INTEREST

Submissions may cover, but are not limited to:

  • AI-driven fact-checking and AI hallucinations
  • Large Language Models (LLMs) in journalistic fact-checking
  • Human cognition and responses to AI-generated fact-checks
  • Case studies on AI-powered fact-checking in electronic media
  • Media-AI collaborations in misinformation detection
  • Ethical concerns in AI-driven fact-checking
  • Comparative effectiveness of human vs. AI fact-checking
  • Algorithmic transparency and accountability in media platforms
  • Real-time misinformation detection and verification models
  • Policy frameworks for AI-assisted fact-checking
  • User trust and engagement with AI-verified content
  • AI-based credibility scoring systems for news verification
  • Psychological and social impacts of AI-driven fact-checking
  • Strategies for inoculating audiences against misinformation
  • Biases in AI fact-checking models and training data
  • Evaluation metrics for AI-driven fact-checking accuracy
  • AI’s role in preventing information silos and echo chambers
  • Cross-platform misinformation tracking and countermeasures
  • Fact-checking across cultural and linguistic contexts
  • Interactive and visual tools for AI-verified information

SUBMISSION GUIDELINES

We invite original research articles, theoretical essays, case studies, and review papers that contribute to the discourse on AI fact-checking in electronic media. Submissions should follow the journal’s formatting and submission guidelines.

Manuscripts can be submitted via the JOBEM online submission system. Please select “Automating Trust: AI, Misinformation, and the Future of Algorithmic Fact-Checking” when submitting.

For inquiries, please contact Don Shin at don.h.shin@ttu.edu.

We look forward to your contributions in advancing responsible AI-driven fact-checking in journalism and electronic media.

REFERENCES

Chae, J. H., & Tewksbury, D. (2024). Perceiving AI intervention does not compromise the persuasive effect of fact-checking. New Media & Society. https://doi.org/10.1177/14614448241286881

Chung, M., Moon, W. K., & Jones-Jang, S. M. (2023). AI as an apolitical referee: Using alternative sources to decrease partisan biases in the processing of fact-checking messages. Digital Journalism, 12(10), 1548–1569. https://doi.org/10.1080/21670811.2023.2254820

DeVerna, M. R., Yan, H. Y., Yang, K.-C., & Menczer, F. (2024). Fact-checking information from large language models can decrease headline discernment. Proceedings of the National Academy of Sciences, 121(50), Article e2322823121. https://doi.org/10.1073/pnas.2322823121

Jia, C., & Lee, T. (2024). Journalistic interventions matter: Understanding how Americans perceive fact-checking labels. Harvard Kennedy School Misinformation Review, 5(2). https://doi.org/10.37016/mr-2020-138

Westlund, O., Belair-Gagnon, V., Graves, L., Larsen, R., & Steensen, S. (2024). What is the problem with misinformation? Fact-checking as a sociotechnical and problem-solving practice. Journalism Studies, 25(8), 898–918. https://doi.org/10.1080/1461670X.2024.2357316

