Call: Large Language Models: A Philosophical Reckoning, special issue of Ethics and Information Technology

Call for Papers

Large Language Models: A Philosophical Reckoning
Special issue of Ethics and Information Technology
https://link.springer.com/collections/fajiigfiih

Submission deadline: November 1, 2023

The journal Ethics and Information Technology is hosting a Special Issue on the normative dimensions of Large Language Models.

Large Language Models (LLMs), such as LaMDA and GPT-3, are often presented as breakthroughs in AI because of their ability to generate natural language text in a matter of seconds. Some suggest that the use of LLMs can improve the way we search for information, compose creative writing, and process and comprehend text. The rapid commercial rollout of dialogue-style interfaces underpinned by LLMs, most notably ChatGPT and Bard, has captured the public imagination and engendered accompanying concerns. The emergent concerns about LLMs mirror their complex sociotechnical nature and span multiple dimensions, including the technical (e.g., the quality and bias of the output produced by ChatGPT-like technologies, the algorithmic limitations of LLMs, their opaqueness), the social (e.g., the power relations between users and the big tech companies producing LLMs, the environmental impact of their development and use), and the cultural (e.g., the displacement of opportunities for critical reasoning, educational challenges, changes to social norms and values).

This Special Issue aims to understand what is ethically at stake in the use of LLMs and how to develop and introduce LLM-based applications in a responsible way. We invite the submission of papers focusing on, but not restricted to, the following areas:

  • critical examination of LLM-related case studies
  • transformative effects of LLMs on individuals and societies
  • the sociotechnical systems approach to LLMs
  • responsible design of LLMs
  • informed use and contestation of LLMs
  • governance and institutional embedding of LLMs
  • LLMs and value change
  • power dimensions of LLMs
  • sustainability and LLMs
  • cultural diversity and LLMs
  • LLMs and intellectual property rights
  • working conditions of LLM content moderators and other microworkers
  • LLMs and privacy
  • LLMs and the hallucination of misinformation
  • dual use potential of LLMs for persuasion, propaganda, and epistemic disorientation
  • LLMs as assistive tools
  • expressive authenticity and LLM-generated content
  • parasocial relationships with LLM-based chatbots

Deadline: November 1, 2023

Guest Editors: Olya Kudina and Mark Alfano

More information:
https://link.springer.com/collections/fajiigfiih
