[Despite the headline of this New York Times story, the depressing confusion about the role of technology in images from the Ukraine-Russia and Israel-Hamas wars is predictable, given how convincing and accessible generative AI has become. Two particularly distressing comments come at the end of the story: an executive at the cybersecurity firm Sophos says, “Proving what’s fake is going to be a pointless endeavor,” and the director of the Poynter media literacy program MediaWise says, “People will believe anything that confirms their beliefs or makes them emotional… It doesn’t matter how good it is, or how novel it looks, or anything like that.” See the original version of the story for examples of deepfake images and videos. For more on this topic, see a story in The Daily Beast that describes
“a paper published [October 22] in the journal PLOS ONE looking at deepfakes related to the Russo-Ukrainian War and their impact on public perception of conflict. The study found that not only did the AI-generated videos create confusion and concern among the public and news media, they also contributed to the erosion of trust that users had in videos they saw coming from the war—regardless of whether or not they were true. The paper is the first of its kind to look at the impact that such AI is having on war.”
–Matthew]
[Image: A New York Times photograph shows thick smoke from Israeli bombs rising in Gaza on Thursday. Other authentic footage from the war with Hamas has faced skepticism stemming from the rise of A.I. technology. Credit: Samar Abu Elouf for The New York Times]
A.I. Muddies Israel-Hamas War in Unexpected Way
Fakes related to the conflict have been limited and largely unconvincing, but their presence has people doubting real evidence.
By Tiffany Hsu and Stuart A. Thompson, who monitor the spread of false and misleading information online
Published October 28, 2023; Updated October 30, 2023
It was a gruesome image that shot rapidly around the internet: a charred body, described as a deceased child, that was apparently photographed in the opening days of the conflict between Israel and Hamas.
Some observers on social media quickly dismissed it as an “A.I.-generated fake” — created using artificial intelligence tools that can produce photorealistic images with a few clicks.
Several A.I. specialists have since concluded that the technology was probably not involved. By then, however, the doubts about its veracity were already widespread.
Since Hamas’s terror attack on Oct. 7, disinformation watchdogs have feared that fakes created by A.I. tools, including the realistic renderings known as deepfakes, would confuse the public and bolster propaganda efforts.
So far, they have been correct in their prediction that the technology would loom large over the war — but not exactly for the reason they thought.
Disinformation researchers have found relatively few A.I. fakes, and even fewer that are convincing. Yet the mere possibility that A.I. content could be circulating is leading people to dismiss genuine images, video and audio as inauthentic.
On forums and social media platforms like X, Truth Social, Telegram and Reddit, people have accused political figures, media outlets and other users of brazenly trying to manipulate public opinion by creating A.I. content, even when the content is almost certainly genuine.
“Even by the fog of war standards that we are used to, this conflict is particularly messy,” said Hany Farid, a computer science professor at the University of California, Berkeley, and an expert in digital forensics, A.I. and misinformation. “The specter of deepfakes is much, much more significant now — it doesn’t take tens of thousands, it just takes a few, and then you poison the well and everything becomes suspect.”
Artificial intelligence has improved greatly over the past year, allowing nearly anyone to create a persuasive fake by entering text into popular A.I. generators that produce images, video or audio — or by using more sophisticated tools. When a deepfake video of President Volodymyr Zelensky of Ukraine was released in the spring of 2022, it was widely derided as too crude to be real; a similar faked video of President Vladimir V. Putin of Russia was convincing enough for several Russian radio and television networks to air it this June.
“What happens when literally everything you see that’s digital could be synthetic?” Bill Marcellino, a senior behavioral scientist and A.I. expert at the RAND Corporation research group, said in a news conference last week. “That certainly sounds like a watershed change in how we trust or don’t trust information.”
Amid highly emotional discussions about Gaza, many happening on social media platforms that have struggled to shield users against graphic and inaccurate content, trust continues to fray. And now, experts say that malicious agents are taking advantage of A.I.’s availability to dismiss authentic content as fake — a concept known as the liar’s dividend.
Their misdirection during the war has been bolstered partly by the presence of some content that was created artificially.
A post on X with 1.8 million views claimed to show soccer fans in a stadium in Madrid holding an enormous Palestinian flag; users noted that the distorted bodies in the image were a telltale sign of A.I. generation. A Hamas-linked account on X shared an image that was meant to show a tent encampment for displaced Israelis but pictured a flag with two blue stars instead of the single star featured on the actual Israeli flag. The post was later removed. Users on Truth Social and a Hamas-linked Telegram channel shared pictures of Prime Minister Benjamin Netanyahu of Israel synthetically rendered to appear covered in blood.
Far more attention was paid to suspect footage that bore no signs of A.I. tampering, such as video of the director of a bombed hospital in Gaza giving a news conference, which some called “A.I. generated” even though it was filmed from different vantage points by multiple sources.
Other examples have been harder to categorize: The Israeli military released a recording of what it described as a wiretapped conversation between two Hamas members, but which some listeners said was spoofed audio. (The New York Times, the BBC and CNN have reported that they have yet to verify the conversation.)
In an attempt to discern truth from A.I., some social media users turned to detection tools, which claim to spot digital manipulation but have proved to be far from reliable. A test by The Times found that image detectors had a spotty track record, sometimes misdiagnosing pictures that were obvious A.I. creations, or labeling real photos as inauthentic.
In the first few days of the war, Mr. Netanyahu shared a series of images on X, claiming they were “horrifying photos of babies murdered and burned” by Hamas. When the conservative commentator Ben Shapiro amplified one of the images on X, he was repeatedly accused of spreading A.I.-generated content.
One post, which garnered more than 21 million views before it was taken down, claimed to have proof that the image of the baby was fake: a screenshot of AI or Not, a detection tool, identifying the image as “generated by AI.” The company later corrected that finding on X, saying that its result was “inconclusive” because the image was compressed and altered to obscure identifying details; the company also said it refined its detector.
“We realized every technology that’s been built has, at one point, been used for evil,” said Anatoly Kvitnitsky, the chief executive of AI or Not, which is based in the San Francisco Bay Area and has six employees. “We came to the conclusion that we are trying to do good, we’re going to keep the service active and do our best to make sure that we are purveyors of the truth. But we did think about that — are we causing more confusion, more chaos?”
AI or Not is working to show users which parts of an image are suspected of being A.I.-generated, Mr. Kvitnitsky said.
A.I. detection services available to the public could be helpful as part of a larger suite of tools, but they are dangerous when treated as the final word on content authenticity, said Henry Ajder, an expert on manipulated and synthetic media.
Deepfake detection tools, he said, “provide a false solution to a much more complex and difficult-to-solve problem.”
Rather than relying on detection services, initiatives like the Coalition for Content Provenance and Authenticity and companies like Google are exploring tactics that would identify the source and history of media files. The solutions are far from perfect — two groups of researchers recently found that existing watermarking technology is easy to remove or evade — but proponents say they could help restore some confidence in the quality of content.
“Proving what’s fake is going to be a pointless endeavor and we’re just going to boil the ocean trying to do it,” said Chester Wisniewski, an executive at the cybersecurity firm Sophos. “It’s never going to work, and we need to just double down on how we can start validating what’s real.”
For now, social media users looking to deceive the public are relying far less on photorealistic A.I. images than on old footage from previous conflicts or disasters, which they falsely portray as the current situation in Gaza, according to Alex Mahadevan, the director of the Poynter media literacy program MediaWise.
“People will believe anything that confirms their beliefs or makes them emotional,” he said. “It doesn’t matter how good it is, or how novel it looks, or anything like that.”