“What is even real anymore?”: AI used to deceive public about L.A. wildfires

[This Newsweek story describes the latest example of the use of technology to purposely blur “the line between what is real and what is faked,” i.e., evoke a presence misperception (the more abrupt question in the title of this post comes from a 404 Media email newsletter promoting a related story from that organization). In addition to this story, see fact-checks from Snopes about the Hollywood sign and from NewsMeter about that and other misrepresentations of the Los Angeles fires. –Matthew]

[Image: Source: NewsMeter]

How AI Convinced World Hollywood Sign Was Burning Down in LA Fire

By Melissa Fleur Afshar, Life and Trends Reporter
January 10, 2025

As wildfires continue to ravage Los Angeles, California, causing loss of life and extensive property damage, speculation that the Hollywood sign also succumbed to the flames has taken the internet by storm.

The viral spread of AI-generated imagery showing the landmark ablaze has shed light on the potential for misinformation in emergencies, raising concerns about the use of the technology.

The wildfires in Los Angeles have, at the time of writing, claimed ten lives and destroyed 10,000 structures, according to officials. Huge swathes of the Pacific Palisades, a neighborhood known for its lavish celebrity pads, have been burned to the ground, sparking widespread concern. To date, more than 130,000 residents have been placed under evacuation orders.

Amid this chaos, images circulated online suggesting the Hollywood sign—a cultural symbol—was engulfed by flames. Later, these images, which gained traction on social media platforms like X (formerly Twitter), were determined to be AI-generated.

Fire incident maps from the California Department of Forestry and Fire Protection (CAL FIRE) provided a clearer picture: the Hollywood sign was not within the blaze-affected area at the time of writing.

Live cams of the landmark provided further confirmation that it had remained untouched by the neighboring fires. Several social media users pointed out the AI-generated nature of the images, sparking conversation around the use of the technology to spread false news quickly and convincingly.

Gleb Tkatchouk, product director at AI image generator ARTA, told Newsweek: “AI generator tools based on machine learning models can allow users to illustrate any idea that comes to mind and generate highly realistic visuals in seconds.

“The difficulty increases when aiming for highly specific or complex outputs as you need to refine prompts, but with a combination approach, a clear understanding of what you expect, and some practical skills, creating lifelike images of any subject or event is no longer challenging.

“And making a fake now costs nothing.”

Tkatchouk said the AI-generated images showing the Hollywood sign ablaze have the potential to spread panic and fear.

“They also show disrespect for the tremendous efforts shown by firefighters,” he said.

With the rapid advancement of AI technology and the increasingly realistic quality of its output, the line between what is real and what is fake has for many become somewhat blurred.

Tkatchouk said that it is our social responsibility to use AI technology thoughtfully, especially since it is now widely accessible and creations can be shared online.

For those concerned about the digital spread of misinformation, deepfakes—videos of a person in which their face or body has been digitally altered—have become particularly troublesome over the past few years.

Komninos Chatzipapas, the founder of HeraHaven AI, told Newsweek: “The barrier of entry to create a deepfake has lowered substantially over the past two years.

“Now, by using freely available software, anyone can create fake images without needing specialized AI knowledge.

“The worst part is that, not only can they create a new fake image from scratch, but also edit existing images.”

The Los Angeles County Fires

The wildfires engulfing Los Angeles County in Southern California appear to have created a fertile ground for the spread of misinformation.

A suspected arsonist, allegedly armed with a flamethrower, was arrested on Thursday. The Los Angeles Police Department indicated that the Kenneth fire, which broke out north of the Palisades fire, was likely started intentionally.

This development has added another layer of complexity to the already fraught situation.

Chatzipapas and Tkatchouk say that it is in times like these, when people are already on high alert, that verified and trustworthy information is needed most. For Tkatchouk, social media platforms need to adopt further measures to combat disinformation.

“Platforms apply several approaches to fight against disinformation, such as digital watermarking invisible to the human eye yet accessible for computer fact-checking or metadata tagging,” he said. “However, while these approaches are good for final filtering of generated content, platforms and newsfeeds still have to decide whether to serve this information at all.”
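The “digital watermarking invisible to the human eye yet accessible for computer fact-checking” that Tkatchouk describes can be illustrated with a toy least-significant-bit (LSB) scheme. This is purely an illustration of the principle, not how any platform actually does it; production watermarks (and provenance metadata standards such as C2PA) are far more robust and survive compression and cropping, which this sketch does not:

```python
# Toy LSB watermark: hide a machine-readable tag in pixel values while
# changing each value by at most 1, which a human viewer cannot see.
# Illustrative only -- real platform watermarks are much more robust.

def embed_watermark(pixels, tag):
    """Hide `tag` (a string) in the least significant bits of `pixels`."""
    bits = []
    for byte in tag.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("image too small for this tag")
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite only the lowest bit
    return marked

def extract_watermark(pixels, tag_len):
    """Recover a `tag_len`-byte tag from the pixel LSBs."""
    out = bytearray()
    for b in range(tag_len):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        out.append(byte)
    return out.decode("utf-8")

# Tag a 64-pixel grayscale buffer as AI-generated, then verify it.
image = [128] * 64
tagged = embed_watermark(image, "AI-GEN")
print(extract_watermark(tagged, 6))                    # -> AI-GEN
print(max(abs(a - b) for a, b in zip(image, tagged)))  # -> 1 (imperceptible)
```

The point of the sketch is the asymmetry in the quote: a fact-checking system can read the tag mechanically, while a person looking at the image sees nothing changed.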

Chatzipapas said that as AI-generated content grows more sophisticated and harder to detect, the attention paid to verifying content must grow in parallel.

“Deepfake detection software just does not work as it should and it is not at all accurate,” he said. “The reason is that these AI tools do not leave any traces.

“The only way to protect ourselves from misinformation is to evaluate the trustworthiness of whoever sent or uploaded a picture.”

The AI startup founder said that, at this point, video content is easier to trust than still imagery, as it is “much harder” to create realistic or believable video in the form of deepfakes.

As residents and emergency services work to manage the wildfires, the circulation of AI-generated content about the disaster serves as a stark reminder of the potential pitfalls of this technology.


ISPR Presence News

Search ISPR Presence News:



Archives