Call for Papers:
‘The Age of AI Agents: Data Power, Social, Political and Ethical Challenges’
A special issue of the journal International Journal of Human–Computer Interaction
https://think.taylorandfrancis.com/special_issues/the-age-of-ai-agents-data-power-social-political-and-ethical-challenges/
Special Issue Editor(s):
- Mirko Farina, Huaqiao University, China and Institute for Digital Economy and Artificial Systems [Xiamen University and Moscow State University], farinamirko@gmail.com
- Andrea Lavazza, Pegaso University, Italy, lavazza67@gmail.com
Deadline for submissions: March 15, 2026 (likely to be extended)
BACKGROUND, RATIONALE AND OBJECTIVES
2024 witnessed the rapid rise of AI agents and widespread investment by the tech industry. Big tech companies and numerous well-regarded LLM startups have collectively invested hundreds of billions of dollars in AI agents. Deloitte predicts that by the end of 2025, 25% of companies developing AI will launch pilot projects or proofs of concept for AI agents, and that this figure will increase to 50% by 2027. In this increasingly fierce competition, big tech companies are racing towards a common goal: to upgrade chatbots (like ChatGPT) so that they not only provide answers for humans but also control computers and take actions on behalf of human users. These tech companies unanimously promise that AI agents will be the next leap in the forthcoming AI revolution, fundamentally transforming human-computer interaction. Their promotional efforts at conferences and in keynote speeches of various kinds place significant emphasis on “personalization”, “customization”, and “user-centricity”, closely embedding these ideas into the term “agent” while gradually phasing out the older term “assistant”.
Unlike existing generative AI tools, which rely on a certain amount of user hand-holding to prompt their output, AI agents are engineered to autonomously analyze data, reason, make decisions, and execute operations within user-defined parameters, while engaging in continuous self-learning and self-adaptation over time. This necessitates several critical capabilities: a deep understanding of users to ensure goal alignment, the ability to adapt to changing circumstances in order to provide proactive support, and the capacity to independently plan and execute cross-platform tasks to achieve the user’s ultimate goals. Realizing these advanced capabilities demands the integration of sophisticated solutions such as adaptive training, planning, tool invocation, external knowledge retrieval, and memory retention.
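To make the capability stack described above more concrete, the short Python sketch below illustrates, purely for exposition, how an agent loop of this kind is commonly structured: a planner decomposes a user goal into steps, tools are invoked autonomously, external knowledge is retrieved, and a memory retains what happened along the way. All class, function, and tool names here are hypothetical placeholders, not any vendor’s actual implementation.

```python
# A minimal, illustrative sketch of an agent loop (hypothetical names throughout):
# plan -> invoke tools -> retrieve external knowledge -> retain memory.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AgentMemory:
    """Retains observations across steps so later decisions can draw on them."""
    events: List[str] = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.events.append(event)


def retrieve_knowledge(query: str) -> str:
    """Stand-in for external knowledge retrieval (e.g., web search or a database)."""
    return f"retrieved notes about '{query}'"


def plan(goal: str) -> List[str]:
    """Stand-in for a planner that decomposes a user goal into tool calls."""
    return ["lookup", "summarize"]


def run_agent(goal: str, tools: Dict[str, Callable[[str], str]]) -> str:
    """Plan, call tools autonomously, and keep a memory of what happened."""
    memory = AgentMemory()
    context = retrieve_knowledge(goal)
    memory.remember(context)
    for step in plan(goal):
        result = tools[step](context)   # autonomous tool invocation
        memory.remember(f"{step}: {result}")
        context = result                # each step builds on the previous one
    return context


if __name__ == "__main__":
    tools = {
        "lookup": lambda ctx: f"facts based on [{ctx}]",
        "summarize": lambda ctx: f"summary of [{ctx}]",
    }
    print(run_agent("compare two rental listings", tools))
```

The point of the sketch is simply that each element named in the paragraph above (planning, tool invocation, retrieval, memory) corresponds to a concrete software component, which is precisely why agentic systems require such extensive access to user data and infrastructure.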
At the time of writing, a number of big AI companies have been actively unveiling their respective initiatives for AI agent development. In December 2024, Google officially announced its entry into the Agentic AI era by launching Gemini 2.0, its core multimodal model, and developing new experimental prototypes based on it, such as Project Mariner. Mariner was developed as an extension for Google’s widely used web browser, Chrome, allowing users to input requests directly into their Chrome browser and have Mariner execute tasks on their behalf. Microsoft’s strategic agenda focuses more on developing AI agents for enterprises and organizations, with the aim of making employees’ business activities more efficient. Its “M365 Copilot” platform introduces a comprehensive suite of AI agents covering key business areas such as sales, services, finance, supply chain management, enterprise resource planning, and customer relationship management.
Chinese technology companies have also demonstrated significant interest and long-term strategic planning in the development of AI agents. An AI system named Manus represents the latest advancement in China’s research and development within this domain. Manus is designed as a fully autonomous AI agent with the ability to independently think, plan, and execute tasks without human intervention. In demonstration videos, Manus has showcased its proficiency in handling three distinct tasks: screening resumes, analyzing stock correlations, and searching for real estate information in New York. Manus operates within a cloud-based virtual computing environment, enabling users to set objectives and then disconnect, while a “console” window allows them to monitor the agent’s operations in real time and intervene when necessary.
In the face of this technological revolution, it is becoming increasingly clear that rendering AI agents fully potent and personalized will require big tech companies to have extensive access to data. However, the structural integration of AI agent agendas with those of Big Tech corporations raises a central concern: whether AI agents will further consolidate the monopolistic data power of these corporations, thereby exacerbating social issues such as capitalist exploitation and corporate dominance.
Scholars have already critiqued the increasingly data-intensive practices of tech giants. Crawford (2021), for example, highlighted that despite the rapid expansion of global AI, only a few corporations control the predominant infrastructure platforms, which significantly influence the accessibility and viability of AI development and deployment (Farina et al., 2025). More recently, other scholars have introduced the concepts of “Big AI” (Vlist et al., 2024), “AI Empire” (Tacheva and Ramasubramanian, 2023), and “Technofeudalism” (Varoufakis, 2023) to illustrate how AI is evolving into a networked entity comprising several monopolistic corporate axes, agendas, and powers, characterized by the deep interdependence between AI and the infrastructure, resources, and investments of existing tech giants.
Moreover, the monopolistic tendencies of AI present substantial risks associated with increased datafication and the corporate centralization of data within society. Numerous researchers (e.g., Dencik, 2025; Zuboff, 2019) have already highlighted the connections between this phenomenon and concerns such as bias, discrimination, mass surveillance, and privacy infringements (Stahl, 2018; Sartori and Theodorou, 2022; Sadowski, 2020). As AI continues to incorporate advanced machine learning capabilities for data processing, it may exacerbate the potential misuse of data (García, 2024; Gabriel, 2020). Failing to prioritize transparency and fairness in data-intensive practices could lead to significant adverse effects on the economy, individual privacy, and democratic institutions. Yet the data-intensive nature of future AI agents and their association with the concentration of data power among tech giants have not garnered adequate academic attention (Winkel, 2024).
This Special Issue aims to close this gap by bringing together interdisciplinary researchers from East and West to analyse the problematic aspects of datafication inherent to the development of future AI agents across 10 crucial dimensions: algorithms and big data; decentralized technologies; privacy and security; bias and fairness; explainability and accountability; social power asymmetry; AI governance; business ethics; AI public literacy; and environmental impact. The analysis conducted across these 10 dimensions will be instrumental in formulating mitigation strategies and solutions for the development of future AI agents and of more equitable and sustainable AI ecosystems (Taddeo and Floridi, 2018).
TOPICS RELEVANT TO THIS SPECIAL ISSUE INCLUDE, BUT ARE NOT LIMITED TO:
Philosophy and Ethics of AI Agency
- Philosophical and ethical foundations of AI agents and autonomous systems
- The concept of “agency” in artificial intelligence: philosophical, psychological, and computational perspectives
- From assistants to agents: conceptual, historical, and sociotechnical transitions
- Epistemology of delegation: trust, responsibility, and epistemic authority in AI agents
- Explainability, interpretability, and accountability in AI-driven actions
- The role of memory, context-awareness, and personalization in AI agents: cognitive and ethical concerns
- The future of human autonomy in an agent-saturated digital world
- Responsible innovation and design ethics for next-generation AI agents
Power, Capital, and Data Governance
- The political economy of AI agents: platform capitalism, corporate dominance, and digital monopolies
- AI agents and the concentration of data power: risks for democracy and informational justice
- Datafication and surveillance: ethical and legal implications of agentic AI
- Business ethics of agentic ecosystems: customer manipulation, data ownership, and market asymmetries
- Philosophical critiques of technofeudalism, “Big AI,” and infrastructural dependency
- AI agents and digital sovereignty: national and global policy perspectives
Bias, Fairness, and Human Rights
- Bias, discrimination, and fairness in autonomous AI decision-making
- Privacy, consent, and user autonomy in agent-mediated digital environments
- Public understanding and literacy of AI agents: risks of anthropomorphism and techno-solutionism
- Transparency, accountability, and democratic oversight in agentic systems
Applications and Sectoral Implications
- AI agents in the public sector: education, healthcare, legal systems
- AI agents and the future of work: labour displacement, productivity, and worker surveillance
- Agentic AI in the enterprise: decision-making, automation, and organizational control
- Environmental and sustainability challenges of data-intensive agentic infrastructures
Comparative and Interdisciplinary Approaches
- Comparative perspectives on AI agents: Western and Eastern approaches to ethics, governance, and design
- Interdisciplinary methodologies for studying AI agents (philosophy, law, computer science, STS, sociology)
- Cross-cultural imaginaries and narratives of artificial agency
- AI governance models: from technical standards to socio-political frameworks
Big Data in Computer Science
- Big data analytics, machine learning, data mining, and cloud computing
- Data management, data structures, and architectures for big data analytics, as well as applications of big data in various fields such as social media
ABOUT THE EDITORS
MIRKO FARINA (LEADING EDITOR):
Mirko Farina is Full Professor of Philosophy of Technology and AI in the School of Philosophy and Social Development at Huaqiao University, Talent C Level of the Fujian Province, and Head of the Human-Machine Interaction Lab at the Institute for Digital Economy and Artificial Systems [IDEAS], established under the framework of the ‘BRICS Partnership on New Industrial Revolution’ in Xiamen (People’s Republic of China) by Xiamen University [XMU], Lomonosov Moscow State University [MSU], and the Xiamen Municipal People’s Government. During his career, Prof Farina has published 4 books (Oxford University Press; Routledge (2x); Elsevier) and more than 110 academic papers, most of them in Q1 journals in cognitive science, philosophy, and computer science (such as Synthese, The British Journal for the Philosophy of Science, IEEE Transactions on Systems, Man and Cybernetics: Systems, Technology and Society, Behavioral and Brain Sciences, Philosophy & Technology, Neuroethics, American Journal of Bioethics, Biology and Philosophy, Computer Science Review, Neurocomputing, etc.). He has also delivered more than 100 talks across 4 continents at prestigious venues (such as the University of Oxford, King’s College London, the University of Edinburgh, Moscow State University, the Moscow Institute for International Relations, ITMO, the Royal Society of Edinburgh, Xi’an Jiaotong University, Peking University, Delft University of Technology, Saint Petersburg State University, the Institute of Philosophy in London, the Australian National University, Monash University, the University of Sydney, Macquarie University, etc.), and has participated in and contributed to prestigious events (such as WAIC 2025, the BRICS Forum on Development of Industrial Internet and Digital Manufacturing 2023 and 2024, the Baltic Platform, the International Science Symposium ‘Inventing the Future’ in Moscow 2024, the Astana Club 2002, and the G20 [2021]), while receiving approximately 3 million USD in funding (from the Fujian Province, the Jiangsu Province, the Ministry of Industry and Information Technology, the British Academy, Huawei Technologies, the Russian Science Foundation, the Australian Government, etc.).
In addition, Prof Farina is a) the founding editor and co-Editor-in-Chief, together with Prof Chen Jin (Tsinghua University) and Prof Rongrong Ji (Xiamen University), of ‘AI & Innovation’ (Wiley), an international, multi-disciplinary journal sponsored by the Institute for Digital Economy and Artificial Systems of Xiamen City and developed in strategic partnership with both the Research Center of Technological Innovation at Tsinghua University and the Institute of Artificial Intelligence of Xiamen University; b) the Co-Editor-in-Chief, with Prof Chen Jin (Tsinghua University), of a permanent collection titled ‘Sustainable Digital Development: Business, Values, and Governance’, published by Springer; and c) the Editor-in-Chief of a book series titled ‘Anthem Advances in AI and Innovation’ (Anthem Press, part of Cambridge Core). Prior to taking up his current position, Prof Farina held a number of positions (including a prestigious British Academy Postdoctoral Fellowship) at universities in Russia (Saint Petersburg State University, Innopolis University), Kazakhstan (Nazarbayev University), and the UK (King’s College London). He has also held visiting positions at Tsinghua University, Aarhus University, Ruhr University Bochum, HSE Moscow, Saint Petersburg State University, and Western Caspian University, among others. Personal Website: https://mirkofarina.weebly.com/
ANDREA LAVAZZA:
Andrea Lavazza is Associate Professor of Moral Philosophy and coordinator of the Observatory for the Ethics of New Technologies at Pegaso University. He received his academic training at the University of Milan and has taught Neuroethics at the University of Milan and the University of Pavia, as well as Philosophy of Mind at the University of Pavia. His background in philosophy and the humanities has progressively been complemented by scientific expertise in the biomedical field, leading him to focus on the emerging discipline of neuroethics, which explores the moral, social, and legal implications of neuroscience. Lavazza’s research in this area addresses issues such as free will, human enhancement, memory manipulation, brain organoids, and neurotechnologies.
More recently, he has turned his attention to the ethics of artificial intelligence, another key area (along with bioethics in a broad sense) in which he has published internationally. His research interests also include the epistemology of expertise and the philosophy of mind. Lavazza has published over 180 contributions in peer-reviewed international journals and in volumes by prestigious academic publishers. He has authored or edited 15 books in English and Italian, including the recent Expertise: Philosophical Perspectives (Oxford University Press) and Philosophy, Expertise, and the Myth of Neutrality (Routledge). He is regularly invited to universities and research centers worldwide for his ethical reflections in the field of neuroethics. In 2024, he was a featured speaker at TED Vienna as a leading figure in AI ethics, and he was also listed in the Stanford/Elsevier World’s Top 2% Scientists ranking. He is a regular contributor to the Italian press, both as an editorialist and a science communicator.
SUBMISSION INSTRUCTIONS
Instructions for Authors:
List of Important Dates:
Manuscript Submission Deadline: 15th March, 2026
Author Notification Date: 5th June, 2026
Revised Papers Due Date: 31st August, 2026
Final Notification Date: 28th November, 2026