[This Daily Dot story reports on a viral video by astrophysicist Neil deGrasse Tyson that demonstrates and draws attention to the dangers of presence-evoking deepfake videos. I’ve added additional details and some viewer reactions from coverage by NDTV following the Daily Dot story below (both stories include more images and the video). –Matthew]

Neil deGrasse Tyson shows how AI can be used to mislead—even in science
“That’s not me. It was never me.”
By Anna Good
October 31, 2025
The introduction to astrophysicist Neil deGrasse Tyson’s latest StarTalk video started off looking like a conspiracy video. An AI-generated Tyson claimed, “Lately, I’ve been doing calculations as well as looking back at old NASA footage and raw data from satellites hovering above Earth. And I just can’t escape the conclusion that the Earth might actually be flat.”
Moments later, the real Tyson pulled his phone back and revealed the bit as a fake. “That’s not me. It was never me,” he said, calling the clip an example of synthetic media now invading public trust.
Although Tyson has often joked about playful parody, like a clip that “babyified” him during a podcast, he stressed that convincing AI impersonations carry real consequences.
Moreover, he said even friends fell for false science videos using his likeness, including one that impressed actor Terry Crews. When Crews texted him about it, Tyson remembered thinking, “I don’t remember this. I never did this.”
What Tyson and Cosoi said about deepfake tech
During the 18-minute episode, he spoke with Alex Cosoi, Chief Security Strategist at Bitdefender, who defined the tech plainly, saying, “So a deep fake […] is synthetic or manipulated media. And by media, I mean video, audio, …, or images which is generated with AI, artificial intelligence, to make people appear to say or do things that never actually happened in reality.”
Then, Tyson cued another fake moment featuring an “Andromedan.” A digital alien chirped, “My apologies. This form takes some getting used to,” before Cosoi explained how deep learning mimicked the brain.
Additionally, Tyson noted that subtle deepfakes confused even highly observant friends. He added that clips made of him typically misrepresented his science messaging by about 15%. He emphasized that most creators probably did not intend harm, yet he worried about deception when viewers did not realize they were watching a parody.
Meanwhile, Cosoi pointed to past war misinformation, including crude videos of Ukrainian President Volodymyr Zelenskyy and Russian President Vladimir Putin. “They fooled some people,” Cosoi said, even if many noticed issues like an oddly sized head.
Scams, politics, and what comes next
Tyson turned toward the broader threats, warning that deepfakes have already been used to defraud people. Cosoi agreed, citing romance schemes, fake investment chats, and a Hong Kong case where scammers impersonated executives in a virtual meeting and directed $25 million in transfers. “One thing leads to the other, and then you’re bankrupt,” he said.
Furthermore, Cosoi described political deepfakes deployed right before elections, when candidates have no time to respond, as a way of influencing voters. He also discussed AI honeypots, a tool called Scamo, and upcoming detection systems that identify manipulated areas in images or AI-generated audio.
Tyson then asked whether there might come a time when detection tools fail. Cosoi admitted, “I believe that there may be a day when a deepfake is going to be more appealing to a person, even though a protection tool will tell him that’s fake.” Tyson suggested platform-wide permissions like the newly announced Sora 2 rules, though Cosoi warned competitors would not always cooperate.
[snip to end]
—
[From NDTV]
Neil deGrasse Tyson Spooks Internet By Sharing AI Deepfake Video: ‘Earth Is Flat’
The video opened with a deepfake version of Neil deGrasse Tyson, as the astrophysicist attempted to demonstrate the dangers of the new technology.
Edited by Abhinav Singh
November 1, 2025
[snip]
“I didn’t think much about deepfakes until I got deep-faked. The early stuff is fine if it’s parody, and if it is obvious it’s parody,” said Tyson [in the video], adding: “But when you do this and the viewer does not know it’s a parody, then you are crossing a line.”
“Obviously, I’m not alone in this landscape as a victim of deep fakes. There are many, many celebrities out there. They are public celebrities that have been deep faked in ways that are also affecting the integrity of their actual message that they would post on their authentic platforms.”
As the video went viral, social media users expressed alarm at how easily the technology had managed to mislead them.
“Deepfakes are becoming too good. It’s getting harder to tell. I don’t know if the video after is a deepfake,” said one user, while another added: “Damn. Honestly convinced the second segment was also a deepfake.”
A third commented: “How will we ever know what’s real anymore? We are going to have to disconnect from everything and live in the forest to have a stable life with meaning. This is completely out of control.”
A fourth said: “Important to know what comes from where. To have a data source that cannot be changed and that can be verified. This is where blockchain has a place for culture. This is where content chains matter for the storage of knowledge.”
[snip to end]