Presence after death: Deepak Chopra made a digital clone of himself, and other celebs could soon follow

[The prospect of creating a digital replica of ourselves that people can interact with even in the distant future, so that we are remembered and even continue to learn and evolve after we die, epitomizes much of what I find fascinating about presence, including the many complex ethical issues it raises. This CNET story describes some of the latest efforts and issues related to presence after death; see the original for a 1:07 video (also available via Vimeo). –Matthew]

[Image: This isn’t Deepak Chopra. It’s Digital Baby Deepak. Credit: Deepak Chopra]

Deepak Chopra made a digital clone of himself, and other celebs could soon follow

I spoke with Digital Deepak and then talked to the real one: Here’s a preview of how celebrities could AI themselves.

By Scott Stein
December 5, 2019

I’m sitting down on a sofa, talking to what looks like a FaceTime call with Deepak Chopra on a phone. It’s not him, though. It’s an animated, sometimes realistic, talking head. He asks me how I feel. I end up discussing work stress. He suggests a meditation.

For a few minutes, I’m having a little session with a Deepak that doesn’t exist.

A few weeks later, I’m talking to the actual Deepak Chopra on the phone about what I experienced. The successful author and personal transformation guru is launching a free AI version of himself, which will tap into his collected books and build a personal relationship with whoever’s talking to it. The software was created by a company called The AI Foundation, which wants to use this technology to make virtual AI archives of anyone who wants to be immortalized or remembered. Chopra is the first on deck.

“Well, here’s the reason I did it: I’m now 73 years old, and I’m in perfect health, I feel like I’m 35, but you know, the next chapter is physical death. This digital Deepak can actually consume my work after I’m gone, and probably improve on it because it learns with every interaction,” Chopra tells me. He calls his AI “Digital Baby Deepak,” because he sees it as a part of himself that will grow. “It can become a teenager soon and an adult, and ultimately a wise person, and it can do that very fast because it doesn’t have the time limitations that humans have. It can learn simultaneously from all these interactions.”

Deepak Chopra has explored AI frontiers before. Last year, he released an Alexa skill that played daily recorded reflections. This new AI initiative adds visuals and far deeper interactive ambitions.

This Digital Deepak will personalize itself based on the details we give it, which could be a lot. “If you want, you can share anything you want with it privately, it’ll be very ethical about that information, including your medical records if you wish to share with them,” says Chopra, “And it will be able to consult with experts and give you advice.”

Chopra also sees possibilities of using his digital self to learn more about his physical self. “I will learn from it, and it will learn from me. I mean, we’re twins, now.”

My mind races to consultations with holograms of historical figures, like in Watchmen’s alternate-2019 universe, and VR experiences I’ve actually had, virtually meeting Holocaust survivors. I also think about talking to my dad, who’s been gone for seven years.

Building a virtual archive

The app, created by The AI Foundation, sounds exactly like the weird Black Mirror future I expected would arrive eventually, but the company’s goal for AI is to record interactive archives to allow people to be remembered forever. For now, apps like Chopra’s are free because they’re aimed at training the AI to get better. Co-founder and CTO Rob Meadows explained what’s going on in a little more detail: “Our biggest interest right now is seeing how this technology works in the world and seeing how it can create impact. We’re well funded, and we don’t have a critical must-make-money-right-now initiative. We want to do things the right way first.”

The app will use what Meadows calls a “natural, conversational interface,” and it really did just feel like I was FaceTiming with Deepak. The idea, down the road, is to make this app a way to record and build an AI interface for your own database, which will include your history, your Masterclass-like classes, whatever. The app will continue to collect requests and feed those back to someone who’s created an AI so they can address and record new parts, almost like a living AMA on Reddit. “We don’t actually learn the master source of truth from users,” Meadows tells me. “We only learn that from Deepak,” he explains, or whoever is creating their AI. “What we learn from users is what users want us to learn.”

The AI Foundation plans to release a tool for anyone to record and build their own AI, like a living memory archive straight out of Black Mirror’s “Be Right Back.” A phone could capture 3D face detail from someone and log audio over time, like a lifelogging or social media app, but for building an AI archive.

A deep well of deepfakes

I’m curious where the AI could go wrong: What about users leading it astray with false information, like hacking a chatbot? “We actually hope people will try to say things that throw it off,” Meadows says of the initial beta. The app will push suggestions to AI creators to answer user requests, like an AMA or Quora. It almost sounds like a future social media coach. In a way, that’s exactly the idea.

But what about deepfakes? Today they take the form of prerecorded, digitally manipulated videos, but realistic, interactive AI could eventually be used to create imitations of real people that aren’t officially endorsed and can’t be trusted. The AI Foundation promises it has safeguards on how its AI identities are verified, but it seems like only a matter of time before the lines get really blurry… as blurry as they’re getting now with photos and video.

“We’re just at the tip of the iceberg of humanity having to deal with a new source of truth,” Meadows admits. “One, how do we make it very clear that this isn’t the real Deepak? Today, most people will be able to tell the difference: The mouth isn’t exactly perfect, and it’s not quite out of the uncanny valley. But we’re very, very close. I can confidently say that in a year, we will be out of that spot and it’ll be indiscernible for most people. That’s already possible with deepfakes that aren’t generated in real time, but we’re on the verge of it being real time.”

Meadows says the first burden of trust lies with the creator, but he then wants to build tools that can detect fake media, flag incorrect statements by an AI and learn on the fly. “It’s not an easy problem to solve, and whether we like it or not, the world is heading there,” he says. “That was really important to us from the very beginning, setting up a nonprofit to hold us accountable. We’ve taken a couple of hard lines: Only you can train your AI, or people that you delegate to train your AI if you’re no longer here, or don’t have time. And also, you own your own data. At any time, you can have your AI unlearn things,” he says. It sounds noble. It’s unclear how it will play out.
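
Taken at face value, those commitments outline a simple data-ownership model: a master knowledge base that only the creator (or their delegates) can train, per-user memories stored separately and owned by the user, and the ability to unlearn anything on request. Here is a minimal, purely illustrative sketch of that separation in Python; the DigitalPersona class and its method names are hypothetical and are not taken from The AI Foundation’s actual software.

# Purely illustrative sketch; class and method names are hypothetical,
# not The AI Foundation's API.
from dataclasses import dataclass, field

@dataclass
class DigitalPersona:
    creator: str
    delegates: set = field(default_factory=set)
    master_knowledge: list = field(default_factory=list)  # trained only by the creator
    user_memories: dict = field(default_factory=dict)     # per-user, user-owned data

    def train(self, who: str, fact: str) -> None:
        # "Only you can train your AI, or people that you delegate ..."
        if who != self.creator and who not in self.delegates:
            raise PermissionError(f"{who} may not train {self.creator}'s AI")
        self.master_knowledge.append(fact)

    def remember(self, user: str, detail: str) -> None:
        # What the AI learns from a user stays in that user's own store.
        self.user_memories.setdefault(user, []).append(detail)

    def unlearn(self, user: str) -> None:
        # "At any time, you can have your AI unlearn things."
        self.user_memories.pop(user, None)

# Example: the creator trains the master knowledge; a user's session data
# is kept separately and can be erased on request.
deepak = DigitalPersona(creator="Deepak Chopra")
deepak.train("Deepak Chopra", "Meditation can help with work stress.")
deepak.remember("scott", "Discussed work stress in a first session.")
deepak.unlearn("scott")

The point of the separation is the one Meadows describes: what users share shapes only their own relationship with the AI, while the “master source of truth” stays under the creator’s control and any user can have their data forgotten.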

Digital life after death

Deepak Chopra’s recent book, Metahuman, explores the transformation of human consciousness, and he sees his digital baby as part of that vision.

“Five generations from now, my descendants will be able to talk to me,” Chopra says. “I see three-year-olds hacking into Netflix or whatever and they’ve just hardly begun to speak a regular language, but they’re already savvy and they enter this world taking it for granted. Twenty years ago, there was no internet the way we know it … Imagine, five generations from now, a kid speaking to me and telling this digital Deepak what’s going on.”
