AI-generated images of celebrities at the Met Gala fool fans and Katy Perry’s mother

[The story below is from NPR and reports on a series of AI-generated images distributed on social media that appeared to show certain celebrities attending and modeling dramatic fashions at the recent Met Gala, even though those individuals were not at the event. In one case an AI image deceived even singer Katy Perry’s mother. Both this story and the excerpt below from The Independent’s coverage consider some of the broader implications of the proliferation of AI-generated images. –Matthew]

Katy Perry’s own mom fell for her Met Gala AI photo. Do you know what to look for?

By Rachel Treisman
May 7, 2024

Some of the biggest names in music, entertainment and fashion assembled in New York City for Monday’s “Garden of Time”-themed Met Gala, decked out in flowers, sparkles and extravagant timepieces.

Contrary to the images circulating on social media, Katy Perry was not one of them.

“Couldn’t make it to the MET, had to work,” the singer posted on Instagram, alongside a video of herself singing in the studio — as well as two photos seemingly showing her at the gala.

They were actually made with AI.

In the first, Perry appears to be standing on a hedge-lined red carpet at the Met, wearing an elaborate ball gown covered in flowers and butterflies, with her dark hair styled in long waves. In the second, a close-up shot, Perry is wearing a metallic corset top with a large key handle down the middle and a short skirt of flowers and leaves, her hair straight and tousled.

The photos — whose exact origin is unclear — made a splash on X (formerly known as Twitter) earlier in the night, as viewers at home refreshed their feeds and weighed in on their favorite celebrity fits.

One X post of the ball gown photo had over 300,000 likes and nearly 70,000 reposts as of Tuesday morning, along with a note at the bottom clarifying that it was created with AI. Another post, of the corset outfit, garnered over 100,000 likes and was eventually labeled “digitally created.”

Perry is a regular Met Gala attendee (and famous for dressing on-theme, including as a chandelier and hamburger in recent years), so internet observers would be forgiven for assuming she was there on Monday.

Even Perry’s mom, Mary Hudson, thought so. One of the posts in the singer’s Instagram carousel was a screenshot of a text conversation between the two.

“Didn’t know you went to the Met,” her mom wrote. “What a gorgeous gown, you look like the Rose Parade, you are your own float lol.”

Perry was quick to clear things up.

“Lol mom the AI got you too,” she replied. “BEWARE!”

AI-generated images are increasingly easy to make, and celebrity deepfakes are increasingly prevalent, from sexually explicit deepfakes of Taylor Swift circulating earlier this year to robocalls imitating President Biden ahead of the New Hampshire primary.

In fact, Perry wasn’t the only star to be basically photoshopped onto the Met Gala red carpet: A viral X post claimed to show an elaborately dressed Rihanna in attendance, when she was actually home sick with the flu. Another purported to show Lady Gaga, who hasn’t been there since 2019.

And before the gala, photos circulated of Dua Lipa wearing bangs and a corset, only for her to show up on the carpet with crimson hair and an all-black ensemble — and for an X user to point out the early photos were from a 2021 Vogue shoot.

While some social media users dismissed the Met Gala fakes as part of the fun, others see them as a worrisome sign of what could lie ahead.

Experts warn that AI-generated deepfakes pose a threat to everything from election security to everyday scams. And just last month, more than 200 artists, including Perry herself, signed a letter urging tech platforms and digital music services to stop using AI to “infringe upon and devalue the rights of human artists.”

How to tell if an image is fake (or at least suspicious)

So what clues should Mary Hudson — and other discerning viewers — be looking for to spot potential fakes?

Sam Gregory of the nonprofit Witness, which helps people use video and technology to protect human rights, encourages viewers to rely on context and intuition in situations like this one.

“My starting point with all images like [Perry’s] is to not trust the online detectors as there are too many variables around whether they give an accurate result,” he explained over email.

He said that when he ran both Perry images through a widely used detector, the flower dress came back as “likely human” and the corset as “likely AI generated.” He also discourages people from looking for visual clues in these kinds of images, saying that can “lead down a rabbit hole of unproductive forensic skepticism.”

With a high-profile event like the Met Gala, Gregory says, it’s best to use “classic media literacy and verification approaches.” In this case, that could mean looking for more proof of Perry’s attendance, from a variety of sources.

“Although some media literacy strategies like checking the source might lead us astray — perhaps we do trust Katy Perry to share real images of herself — if we use another strategy and look for other images from the same event from reliable sources, we’d quickly see this isn’t real,” he explained.

He adds that it reminds him of the AI-created images of a fire at the Pentagon that went viral last year. In both cases, he says, the first question people should ask themselves is not whether they can spot the AI glitches in the photo, but “Why aren’t there other photos and videos of this event in a highly populated area?”

But the bigger question, as Gregory sees it, is whether the public should be expected to do this at all.

“Why wouldn’t Katy Perry be at the Met Gala and why would we second-guess that, particularly if she’s part of the deception?” he says.

More help may be coming from social media platforms, amid growing concerns about the potential for AI to mislead users. Meta said earlier this year that it would start labeling AI-generated images on Facebook, Instagram and Threads, beginning in May.

But for now — and as always — keeping your guard up is key. If you need some more pointers, check out these expert tips on how to spot AI-generated images and avoid getting tricked online.

[From The Independent]

Katy Perry and Rihanna didn’t attend the Met Gala. But AI-generated images still fooled fans

No, Katy Perry and Rihanna didn’t attend the Met Gala this year

By Wyatte Grantham-Philips
May 6, 2024

[snip]

This is far from the first time we’ve seen generative AI, a branch of AI that can create something new, used to create phony content. Image, video and audio deepfakes of prominent figures, from Pope Francis to Taylor Swift, have gained loads of traction online before.

Experts note that each instance underlines growing concerns around the misuse of this technology — particularly regarding disinformation and the potential to carry out scams, identity theft or propaganda, and even election manipulation.

“It used to be that seeing is believing, and now seeing is not believing,” said Cayce Myers, a professor and director of graduate studies at Virginia Tech’s School of Communication — pointing to the impact of Monday’s AI-generated Perry image. “(If) even a mother can be fooled into thinking that the image is real, that shows you the level of sophistication that this technology now has.”

While using AI to generate images of celebs in make-believe luxury gowns (that are easily proven to be fake at a highly publicized event like the Met Gala) may seem relatively harmless, Myers and others note that there’s a well-documented history of more serious or detrimental uses of this kind of technology.

Earlier this year, sexually explicit and abusive fake images of Swift, for example, began circulating online — causing X, formerly Twitter, to temporarily block some searches. Victims of nonconsensual deepfakes go well beyond celebrities, of course, and advocates stress particular concern for victims who have few protections. Research shows that explicit AI-generated material overwhelmingly harms women and children — including disturbing cases of AI-generated nudes circulating through high schools.

And in an election year for several countries around the world, experts also continue to point to potential geopolitical consequences that deceptive, AI-generated material could have.

“The implications here go far beyond the safety of the individual — and really [do] touch on things like the safety of the nation, the safety of whole society,” said David Broniatowski, an associate professor at George Washington University and lead principal investigator of the Institute for Trustworthy AI in Law & Society at the school.

Utilizing what generative AI has to offer while building an infrastructure that protects consumers is a tall order — especially as the technology’s commercialization continues to grow at such a rapid rate. Experts point to needs for corporate accountability, universal industry standards and effective government regulation.

Tech companies are largely calling the shots when it comes to governing AI and its risks, as governments around the world work to catch up. Still, notable progress has been made over the last year. In December, the European Union reached a deal on the world’s first comprehensive AI rules, but the act won’t take effect until two years after final approval.

