Telepresence and death: Immortal avatars: Back up your brain, never die

[From New Scientist via Revelations Radio Network]

Immortal avatars: Back up your brain, never die

Jun 09, 2010

Zoe Graystone is a girl with two brains. Only one of them is human: the other is an exact digital copy that has become conscious in its own right. When the human Zoe dies, her digital brain is implanted into a humanoid robot, effectively bringing her back from the grave.

Such ideas have littered science fiction for decades. Indeed, Graystone is a character in the American TV drama Caprica. But could such a tale ever become reality?

Though there is little prospect of creating a genuinely conscious robo-clone in the foreseeable future, several companies are taking the first steps in that direction. Their initial goal is to enable you to create a lifelike digital representation, or avatar, that can continue long after your biological body has decomposed. This digitised “twin” might be able to provide valuable lessons for your great-grandchildren – as well as giving them a good idea of what their ancestor was like.

Ultimately, however, they aim to create a personalised, conscious avatar embodied in a robot – effectively enabling you, or some semblance of you, to achieve immortality. “If you can upload yourself into this digital form, it could live forever,” says Nick Mayer of Lifenaut, a US company that is exploring ways to build lifelike avatars. “It really is a way of avoiding death.”

For now, Lifenaut relies on a series of personality tests, teaching sessions and uploaded personal material such as photos, videos and correspondence. The result, Mayer says, will be an avatar that looks like you, talks like you and will be able to describe key events in your life, such as your wedding day. But how far can such technology go? How much of your personality and knowledge can be reproduced by a computer? Can we ever hope to use avatars to resurrect the dead?

Like many people, I have often dreamed of having a clone: an alternative self that could share my workload, give me more leisure time and perhaps provide me with a way to live longer. My first step on the road to immortality is to use Lifenaut’s website to create a basic visual interface with which others, hopefully including my descendants, can interact. This involves uploading an expressionless photo of myself, taken face-on. Lifenaut’s software then animates it so my face can speak, wink and blink.

Right now this kind of avatar is rather crude, though other companies are generating much more lifelike representations that could be adapted for use by projects like Lifenaut. One such company is Image Metrics in Santa Monica, California, which specialises in creating digital faces for films and games.

Faces are particularly difficult to reproduce. For years, animators have struggled with a problem dubbed the “uncanny valley”, in which a computer-generated face looks almost, but not quite, lifelike, triggering a sense of revulsion among human observers. “Systems which look close to real but not quite real are very creepy to people,” says Dmitri Williams of the University of Southern California in Los Angeles.

Image Metrics believes it has cracked the problem. The company’s engineers record a series of high-resolution images of a person’s face, each one with a different expression. Then they calculate the differences between these expressions using powerful mathematical modelling software. The result is pretty convincing. For example, the digital version of American actor Emily O’Brien, which the company unveiled at the ACM Siggraph meeting in Los Angeles in 2008, not only looks realistic, but can be manipulated in real time. “The movements are perfect. We can pretty much make Emily say anything we want,” says Mike Starkenburg, CEO of Image Metrics.
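
Image Metrics has not published how its system works, but the process described above – capturing a set of expressions and modelling the differences between them – is reminiscent of the blend-shape techniques widely used in facial animation. The Python sketch below is a hypothetical illustration of that general idea, not the company's method: each expression is reduced to an array of landmark positions, the differences from a neutral face are stored as reusable "deltas", and new expressions are mixed from those deltas on the fly.

```python
import numpy as np

# Hypothetical illustration only: each captured expression is reduced to an
# array of facial landmark positions (here 68 landmarks x 3 coordinates).
rng = np.random.default_rng(0)
neutral = rng.random((68, 3))                  # stand-in for the neutral capture
smile = neutral + rng.random((68, 3)) * 0.1    # stand-in for a "smile" capture
frown = neutral + rng.random((68, 3)) * 0.1    # stand-in for a "frown" capture

# The difference between each expression and the neutral face becomes a
# reusable "delta" that can be mixed on the fly.
deltas = {"smile": smile - neutral, "frown": frown - neutral}

def blend(weights):
    """Return a face as the neutral capture plus a weighted mix of deltas."""
    face = neutral.copy()
    for name, weight in weights.items():
        face += weight * deltas[name]
    return face

# Recomputed every frame, this lets an animator dial in 70% smile, 10% frown.
frame = blend({"smile": 0.7, "frown": 0.1})
```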

At the moment the process is expensive: creating the virtual Emily cost around $500,000, so for now I’ll make do with my primitive avatar and hope my grandchildren won’t feel too repelled.

Making a human

How my avatar looks may in the end matter less than its behaviour, according to researchers at the University of Central Florida in Orlando and the University of Illinois in Chicago. Since 2007, they have been collaborating on Project Lifelike, which aims to create a realistic avatar of Alexander Schwarzkopf, former director of the US National Science Foundation.

They showed around 1000 students videos and photos of Schwarzkopf, along with prototype avatars, and used the feedback to try to work out what features of a person people pay most attention to. They conclude that focusing on the idiosyncratic movements that make a person unique is more important than creating a lifelike image. “It might be how they cock their head when they speak or how they arch an eyebrow,” says Steve Jones of the University of Illinois.

Equally important is ensuring that these movements appear in the correct context. To do this, Jones’s team has been trying to link contextual markers like specific words or phrases with movements of the head, to indicate that the avatar is listening, for example. “If an avatar is listening to you tell a sad story, what you want to see is some empathy,” says Jones, though he admits they haven’t cracked this yet.
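
The article does not say how these links are implemented, but the basic idea can be sketched as a simple lookup from contextual markers to movements. Everything in the example below – the marker words, the gesture names – is invented for illustration.

```python
# Hypothetical sketch: map contextual markers (words or phrases) to head and
# face movements. The markers and gestures here are invented for illustration.
GESTURE_RULES = [
    ("died", "slow, empathetic nod"),
    ("sad", "slow, empathetic nod"),
    ("great news", "raised eyebrows and a smile"),
    ("?", "slight head tilt to signal listening"),
]

def pick_gesture(utterance: str) -> str:
    """Return the gesture triggered by the first matching marker, if any."""
    text = utterance.lower()
    for marker, gesture in GESTURE_RULES:
        if marker in text:
            return gesture
    return "neutral idle motion"

print(pick_gesture("My dog died last week."))   # -> slow, empathetic nod
print(pick_gesture("What do you think?"))       # -> slight head tilt to signal listening
```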

The next challenge is to make an avatar converse like a human. At the moment the most lifelike behaviour comes from chatbots: software that analyses the context of a conversation and produces intelligent-sounding responses, as though it were thinking. Lifenaut goes one step further by tailoring the chatbot software to an individual. According to Rollo Carpenter of artificial intelligence (AI) company Icogno in Exeter, UK, this is about the limit of what is possible at the moment: a software replica that is “not going to be self-aware or equivalent to you, but is one which other people could hold a conversation with and for a few moments at least believe that there was a part of you in there”.

The Lifenaut avatar’s conversational abilities come from a chatbot created by Carpenter called Jabberwacky. This has been developed through conversations with millions of people since 1997 and has twice won the Loebner prize for the most human-like chatbot. While many chatbots are preprogrammed with set phrases and reactions in response to keywords, Jabberwacky looks for common patterns across conversations and uses them to choose a response that makes as much sense as possible in the context of what has just been said to it.
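
Jabberwacky’s internals are not public, but the general shape of a retrieval-based chatbot – reuse replies from past conversations whose prompts best match what has just been said – can be sketched in a few lines. The tiny corpus and crude word-overlap score below are purely illustrative.

```python
# Purely illustrative sketch of a retrieval-based chatbot. A real system like
# Jabberwacky learns from millions of conversations; this tiny corpus and
# word-overlap score only show the shape of the approach.
corpus = [
    ("hello", "Hi there, how are you?"),
    ("how are you", "I'm fine, thanks for asking."),
    ("what is your name", "People call me all sorts of things."),
]

def respond(user_input: str) -> str:
    """Return the stored reply whose prompt shares the most words with the input."""
    words = set(user_input.lower().replace("?", "").replace("!", "").split())
    best_reply, best_score = "Tell me more.", 0
    for prompt, reply in corpus:
        score = len(words & set(prompt.split()))
        if score > best_score:
            best_reply, best_score = reply, score
    return best_reply

print(respond("Hello! How are you today?"))   # -> I'm fine, thanks for asking.
```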

Essence of me

Lifenaut’s avatar might appear to respond like a human, but how do you get it to resemble you? The only way is to teach it about yourself. This personality upload is a laborious process. The first stage involves rating some 480 statements, such as “I like to please others” and “I sympathise with the homeless”, according to how accurately they reflect my feelings. Having done this, I am then asked to upload items such as diary entries, photos and videos, tagged with place names, dates and keywords, to help my avatar build up “memories”. I also spend hours in conversation with other Lifenaut avatars, which my avatar learns from. This supposedly provides my avatar, which I have named “Linda”, with my mannerisms – the way I greet people or respond to questions, say – as well as more about my views, likes and dislikes.
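
Lifenaut does not document how this material is stored, but a hypothetical “mind file” might be organised along the following lines, pairing rated personality statements with tagged memories. The field names and values are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a "mind file"; Lifenaut's real schema is not public.
@dataclass
class Memory:
    kind: str                      # e.g. "photo", "video", "diary entry"
    date: str                      # when it happened
    place: str                     # where it happened
    keywords: List[str] = field(default_factory=list)
    caption: str = ""

mind_file = {
    # Statements rated by how accurately they reflect the user's feelings (1-5).
    "personality": {
        "I like to please others": 4,
        "I sympathise with the homeless": 5,
    },
    # Tagged uploads the avatar can draw on as "memories".
    "memories": [
        Memory("photo", "2009-06-20", "Brighton", ["wedding", "family"]),
        Memory("diary entry", "2010-01-04", "London", ["work"], "Back to the office."),
    ],
}
```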

A more sophisticated series of personality questionnaires is being used by a related project called CyBeRev. The project’s users work their way through thousands of questions developed by the American sociologist William Sims Bainbridge as a means of archiving the mind. Unlike traditional personality questionnaires, part of the process involves trying to capture users’ values, beliefs, hopes and goals by asking them to imagine the world a century in the future. It isn’t a quick process: “If you spent an hour a day answering questions, it would take five years to complete them all,” says Lori Rhodes of the nonprofit Terasem Movement, which funds CyBeRev. “But the further you go, the more accurate a representation of yourself the mind file will become.”

So is it possible to endow my digital double with a believable representation of my own personality? Carpenter admits that in order to become truly like you, a Lifenaut avatar would probably need a lifetime’s worth of conversations with you. Nor am I sure to what extent a bunch of photos and videos can ever represent my real memories. So might there be a better way to upload your mind?

One alternative would be to automatically capture information about your daily life and feed it directly into an avatar. “Lifeloggers” such as Microsoft researcher Gordon Bell are already doing this to some extent, by wearing a portable camera that records large portions of their lives on film.

A team led by Nigel Shadbolt at the University of Southampton, UK, is trying to improve on this by developing software that can combine digital images taken throughout the day with information from your diary, social networking sites you have visited, and GPS recordings of your location. Other researchers are considering integrating physiological data like heart rate to provide basic emotional context. To date, however, there has been little effort to combine all this into anything resembling an avatar. We’re still some way off creating an accurate replica of an individual, says Shadbolt. “I’m sure we could create a software agent with attitude, but whether it’s my attitude seems to be very doubtful,” he says.
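
The Southampton software is not described in detail, but fusing these sources essentially means merging several timestamped streams into one chronological record that an avatar could later draw on. The sketch below is a minimal, hypothetical version; the sources and record formats are invented.

```python
from datetime import datetime

# Hypothetical sketch of fusing lifelog streams into a single timeline.
# The sources and record formats are invented for illustration.
photos = [("2010-06-09T09:12", "camera", "platform at the railway station")]
diary = [("2010-06-09T09:30", "diary", "Train delayed again")]
gps = [("2010-06-09T09:05", "gps", "51.5033,-0.1196")]
heart = [("2010-06-09T09:31", "heart_rate", "96 bpm")]  # crude emotional context

def build_timeline(*streams):
    """Merge timestamped records from several sources into chronological order."""
    events = [event for stream in streams for event in stream]
    return sorted(events, key=lambda event: datetime.fromisoformat(event[0]))

for timestamp, source, payload in build_timeline(photos, diary, gps, heart):
    print(timestamp, source, payload)
```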

Not surprisingly then, creating a conscious avatar like Zoe Graystone’s alter ego is far more problematic. AI researchers have had some success in making machines with human-like characteristics, including the humanoid robots Cog and Kismet built by Rodney Brooks at the Massachusetts Institute of Technology, and an intelligent software system called Cyc developed by Doug Lenat of Cycorp, a Texas-based AI company. Yet according to Raúl Arrabales of the Carlos III University of Madrid in Spain, who has developed a test of machine consciousness called ConsScale, the best effort so far is probably equivalent to a 1-year-old child. That’s not to say we shouldn’t try, says David Hanson of Hanson Robotics in Dallas, Texas. “Certainly we have no proof that machines can be conscious – we still don’t understand consciousness – but likewise, it’s silly to assume that machines can’t be,” he says.

A bit of body

One problem is that some kind of physical body is probably essential for human-like consciousness to develop, says robotics researcher Antonio Chella from the University of Palermo in Italy. “Consciousness requires a tight interaction between brain, body and environment.” We perceive with our whole body, he says, so a conscious entity needs sensors both to perceive the world and to monitor its own movements.

Researchers working on Project Lifelike are trying to integrate a camera into their digital Schwarzkopf so that it can pick up visual clues from people’s body language and adapt its behaviour accordingly. Hanson is yet more ambitious. His company makes realistic-looking androids, and he and Mayer have discussed integrating one of Lifenaut’s avatars into a robot body. “Combining a mind emulation with a physical body allows that mind to physically interact with the world, to explore and live among us,” he says.

That’s a step towards making a conscious machine, but to go further will require a massive, coordinated effort involving the now fragmented areas of AI research. To this end, Hanson has launched the Apollo Mind Initiative to promote collaboration between research groups, setting the goal of achieving human-level creative intelligence by 2019. His first step is to launch collaborative software for the machine intelligence community, enabling scientists to map exactly what stage research has reached and helping them identify which improvements need to be made. Hanson says that the project’s eventual aim is to exceed human intelligence, creating Mozart-like genius. “In a way we’re looking for protégé machines,” he says.

What about my own avatar? At Carpenter’s suggestion, I ask my husband to assess it for realism. After a short chat, he tells me that its responses to questions on politics, food and sports were nonsense. It also told him I’m younger than I really am. I haven’t yet started to lie about my age, but perhaps Lifenaut’s questionnaires picked up on my latent vanity? Finally, the avatar revealed that it was depressed.

So how human is it? In July, Arrabales plans to test Lifenaut’s avatar using ConsScale. Although some aspects of the software might meet the criteria for higher consciousness, Arrabales predicts that gaps in its abilities mean it may score only 3 out of 10. Forget Zoe Graystone – that’s about the equivalent of an earthworm, he says. All that time and effort for an annelid. No wonder I’m depressed.

http://www.newscientist.com/article/mg20627631.100-immortal-avatars-back-up-your-brain-never-die.html?full=true
