Alexa has a new voice – your dead relative’s

[The headline of this Washington Post story is premature since Amazon is still developing the feature, but the ability to have Alexa speak in the voice of someone who has died based on only a minute’s worth of voice recordings represents another example of ways technology can be used to evoke presence after death. The excerpt from coverage in The Verge that follows below notes that Amazon’s senior VP and head scientist of Alexa artificial intelligence “said the feature could enable customers to have ‘lasting personal relationships’ with the deceased.” Of course, as the Washington Post sub-headline notes, this raises a variety of practical and ethical concerns. In a Microsoft blog post a few days ago, that company’s Chief Responsible AI Officer announced that the company is restricting its similar Custom Neural Voice feature: “This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners.” A more vivid and humorous critique is made in “As your dead relative, I don’t want Amazon’s Alexa to mimic me” in the Tampa Bay Times. For technical details about Amazon’s technology see a post in Amazon Science; the demonstration from the Las Vegas Amazon event is available on YouTube. –Matthew]

Experts call the device’s new feature a slippery slope, comparing it to an episode of ‘Black Mirror’

By María Luisa Paúl
June 23, 2022

Propped atop a bedside table during this week’s Amazon tech summit, an Echo Dot was asked to complete a task: “Alexa, can Grandma finish reading me ‘The Wizard of Oz’?”

Alexa’s typically cheery voice boomed from the kids-themed smart speaker with a panda design: “Okay!” Then, as the device began narrating a scene of the Cowardly Lion begging for courage, Alexa’s robotic twang was replaced by a more human-sounding narrator.

“Instead of Alexa’s voice reading the book, it’s the kid’s grandma’s voice,” Rohit Prasad, senior vice president and head scientist of Alexa artificial intelligence, excitedly explained Wednesday during a keynote speech in Las Vegas. (Amazon founder Jeff Bezos owns The Washington Post.)

The demo was the first glimpse into Alexa’s newest feature, which — though still in development — would allow the voice assistant to replicate people’s voices from short audio clips. The goal, Prasad said, is to build greater trust with users by infusing artificial intelligence with the “human attributes of empathy and affect.”

The new feature could “make [loved ones’] memories last,” Prasad said. But while the prospect of hearing a dead relative’s voice may tug at heartstrings, it also raises myriad security and ethical concerns, experts said.

“I don’t feel our world is ready for user-friendly voice-cloning technology,” Rachel Tobac, chief executive of the San Francisco-based SocialProof Security, told The Washington Post. Such technology, she added, could be used to manipulate the public through fake audio or video clips.

“If a cybercriminal can easily and credibly replicate another person’s voice with a small voice sample, they can use that voice sample to impersonate other individuals,” added Tobac, a cybersecurity expert. “That bad actor can then trick others into believing they are the person they are impersonating, which can lead to fraud, data loss, account takeover and more.”

Then there’s the risk of blurring the lines between what is human and what is mechanical, said Tama Leaver, a professor of internet studies at Curtin University in Australia.

“You’re not going to remember that you’re talking to the depths of Amazon … and its data-harvesting services if it’s speaking with your grandmother or your grandfather’s voice or that of a lost loved one.”

“In some ways, it’s like an episode of ‘Black Mirror,’ ” Leaver said, referring to the sci-fi series envisioning a tech-themed future.

The new Alexa feature also raises questions about consent, Leaver added — particularly for people who never imagined their voice would be belted out by a robotic personal assistant after they die.

“There’s a real slippery slope there of using deceased people’s data in a way that is both just creepy on one hand, but deeply unethical on another because they’ve never considered those traces being used in that way,” Leaver said.

Having recently lost his grandfather, Leaver said he empathized with the “temptation” of wanting to hear a loved one’s voice. But the possibility opens a floodgate of implications that society might not be prepared to take on, he said — for instance, who has the rights to the little snippets people leave to the ethers of the World Wide Web?

“If my grandfather had sent me 100 messages, should I have the right to feed that into the system? And if I do, who owns it? Does Amazon then own that recording?” he asked. “Have I given up the rights to my grandfather’s voice?”

Prasad didn’t address such details during Wednesday’s address. He did posit, however, that the ability to mimic voices was a product of “unquestionably living in the golden era of AI, where our dreams and science fiction are becoming a reality.”

Should Amazon’s demo become a real feature, Leaver said people might need to start thinking about how their voices and likeness could be used when they die.

“Do I have to think about in my will that I need to say, ‘My voice and my pictorial history on social media is the property of my children, and they can decide whether they want to reanimate that in chat with me or not?’ ” Leaver wondered.

“That’s a weird thing to say now. But it’s probably a question that we should have an answer to before Alexa starts talking like me tomorrow,” he added.

[From The Verge]

Amazon shows off Alexa feature that mimics the voices of your dead relatives

Hey Alexa, that’s weird as hell

By James Vincent
June 23, 2022

[…]

Although this specific application is already controversial, with users on social media calling the feature “creepy” and a “monstrosity,” such AI voice mimicry has become increasingly common in recent years. These imitations are often known as “audio deepfakes” and are already regularly used in industries like podcasting, film and TV, and video games.

Many audio recording suites, for example, offer users the option to clone individual voices from their recordings. That way, if a podcast host flubs a line, a sound engineer can edit what was said simply by typing in a new script. Replicating lines of seamless speech requires a lot of work, but very small edits can be made with a few clicks.

The same technology has been used in film, too. Last year, it was revealed that a documentary about the life of chef Anthony Bourdain, who died in 2018, used AI to clone his voice in order to read quotes from emails he sent. Many fans were disgusted by the application of the technology, calling it “ghoulish” and “deceptive.” Others defended the use of the technology as similar to other reconstructions used in documentaries.

Amazon’s Prasad said the feature could enable customers to have “lasting personal relationships” with the deceased, and it’s certainly true that many people around the world are already using AI for this purpose. People have already created chatbots that imitate dead loved ones, for example, training AI based on stored conversations. Adding accurate voices to these systems — or even video avatars — is entirely possible using today’s AI technology, and is likely to become more widespread.

However, whether or not customers will want their dead loved ones to become digital AI puppets is another matter entirely.

This entry was posted in Presence in the News.