OpenAI and MIT Media Lab release research on affective engagement with ChatGPT and emotional well-being

[An ambitious collaborative research project by OpenAI, the company that developed and operates the general-purpose ChatGPT AI service, and the MIT Media Lab has produced some interesting presence-related results. The summary below from MIT Technology Review is followed by a more pointed assessment from Futurism, which concludes that “the longer you use the chatbot, the more likely you are to become emotionally dependent upon it.” For a more detailed summary of the project, see the blog post from OpenAI and the Media Lab. –Matthew]

[Image: Credit: Photo Illustration by Sarah Rogers/MITTR | Photo Getty]

OpenAI has released its first research into how using ChatGPT affects people’s emotional well-being

We’re starting to get a better sense of how chatbots are affecting us—but there’s still a lot we don’t know.

By Rhiannon Williams
March 21, 2025

OpenAI says over 400 million people use ChatGPT every week. But how does interacting with it affect us? Does it make us more or less lonely? These are some of the questions OpenAI set out to investigate, in partnership with the MIT Media Lab, in a pair of new studies.

They found that only a small subset of users engage emotionally with ChatGPT. This isn’t surprising given that ChatGPT isn’t marketed as an AI companion app like Replika or Character.AI, says Kate Devlin, a professor of AI and society at King’s College London, who did not work on the project. “ChatGPT has been set up as a productivity tool,” she says. “But we know that people are using it like a companion app anyway.” In fact, the people who do use it that way are likely to interact with it for extended periods of time, some of them averaging about half an hour a day.

“The authors are very clear about what the limitations of these studies are, but it’s exciting to see they’ve done this,” Devlin says. “To have access to this level of data is incredible.”

The researchers found some intriguing differences between how men and women respond to using ChatGPT. After using the chatbot for four weeks, female study participants were slightly less likely to socialize with people than their male counterparts who did the same. Meanwhile, participants who set ChatGPT’s voice mode to a gender that was not their own reported significantly higher levels of loneliness and more emotional dependency on the chatbot at the end of the experiment. OpenAI plans to submit both studies to peer-reviewed journals.

Chatbots powered by large language models are still a nascent technology, and it’s difficult to study how they affect us emotionally. A lot of existing research in the area—including some of the new work by OpenAI and MIT—relies upon self-reported data, which may not always be accurate or reliable. That said, this latest research does chime with what scientists so far have discovered about how emotionally compelling chatbot conversations can be. For example, in 2023 MIT Media Lab researchers found that chatbots tend to mirror the emotional sentiment of a user’s messages, suggesting a kind of feedback loop where the happier you act, the happier the AI seems, or if you act sadder, so does the AI. 

OpenAI and the MIT Media Lab used a two-pronged method. First they collected and analyzed real-world data from close to 40 million interactions with ChatGPT. Then they asked the 4,076 users who’d had those interactions how they made them feel. Next, the Media Lab recruited almost 1,000 people to take part in a four-week trial. This was more in-depth, examining how participants interacted with ChatGPT for a minimum of five minutes each day. At the end of the experiment, participants completed a questionnaire to measure their perceptions of the chatbot, their subjective feelings of loneliness, their levels of social engagement, their emotional dependence on the bot, and their sense of whether their use of the bot was problematic. They found that participants who trusted and “bonded” with ChatGPT more were likelier than others to be lonely, and to rely on it more.

This work is an important first step toward greater insight into ChatGPT’s impact on us, which could help AI platforms enable safer and healthier interactions, says Jason Phang, an OpenAI safety researcher who worked on the project.

“A lot of what we’re doing here is preliminary, but we’re trying to start the conversation with the field about the kinds of things that we can start to measure, and to start thinking about what the long-term impact on users is,” he says.

Although the research is welcome, it’s still difficult to identify when a human is—and isn’t—engaging with technology on an emotional level, says Devlin. She says the study participants may have been experiencing emotions that weren’t recorded by the researchers.

“In terms of what the teams set out to measure, people might not necessarily have been using ChatGPT in an emotional way, but you can’t divorce being a human from your interactions [with technology],” she says. “We use these emotion classifiers that we have created to look for certain things—but what that actually means to someone’s life is really hard to extrapolate.”

[From Futurism’s The_Byte]

Something Bizarre Is Happening to People Who Use ChatGPT a Lot

Well, that’s not good.

By Noor Al-Sibai
March 24, 2025

Power Bot ‘Em

Researchers have found that ChatGPT “power users,” those who use it the most often and for the longest stretches, are becoming dependent upon, or even addicted to, the chatbot.

In a new joint study, researchers with OpenAI and the MIT Media Lab found that this small subset of ChatGPT users engaged in more “problematic use,” defined in the paper as “indicators of addiction… including preoccupation, withdrawal symptoms, loss of control, and mood modification.”

To get there, the MIT and OpenAI team surveyed thousands of ChatGPT users to glean not only how they felt about the chatbot, but also what kinds of “affective cues” (defined in a joint summary of the research as “aspects of interactions that indicate empathy, affection, or support”) they used when chatting with it.

Though the vast majority of people surveyed didn’t engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a “friend.” The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model’s behavior, too.

Chat Lackeys

Add it all up, and it’s not good. In this study as in other cases we’ve seen, people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationships with AI — and where that leads could end up being sad, scary, or somewhere entirely unpredictable.

This new research also highlighted unexpected contradictions based on how ChatGPT was used.

For instance, people tended to use more emotional language with text-based ChatGPT than with Advanced Voice Mode, and “voice modes were associated with better well-being when used briefly,” the summary explained.

And those who used ChatGPT for “personal” reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for “non-personal” reasons, like brainstorming or asking for advice.

Perhaps the biggest takeaway, however, was that prolonged usage seemed to exacerbate problematic use across the board. Whether you’re using ChatGPT text or voice, asking it personal questions, or just brainstorming for work, it seems that the longer you use the chatbot, the more likely you are to become emotionally dependent upon it.
