Personal A.I.: The best virtual assistant might turn out to be your virtual self

[The technologies are or soon will be available to let us create multi-platform (embodied and disembodied) artificially intelligent versions of ourselves to interact with other people and companies and their virtual representatives. This story from Computerworld thoughtfully considers some of the possible presence-implicated scenarios (interacting with your own virtual assistant suggests an intriguing form of self presence!). For more information, including a 24-second video, see coverage in BGR. –Matthew]

[Image: ObEN co-founder and CEO Nikhil Jain and his virtual self]

Why you’ll fire Siri and do the job yourself

In the world of A.I., the best virtual assistant might turn out to be your virtual self.

By Mike Elgan, Contributing Columnist, Computerworld
January 6, 2018

Have you ever wished you could clone yourself? Imagine how much you could accomplish.

The future of A.I. will make something kind of like that possible. By scanning your face and voice and observing how you talk and what you know, future A.I. could build a virtual assistant that’s a virtual you.

Sounds like science fiction. But one company is already working on it.

PAI in the sky?

A company based in Pasadena, Calif., called ObEN has built a 3D A.I. avatar technology that produces what it calls a “personal A.I.,” or PAI.

I spoke to ObEN co-founder and CEO Nikhil Jain this week. He told me ObEN’s technology generates a 3D, computer-generated representation of the user’s face with a single selfie.

ObEN also learns to copy your voice. Once it’s got your voice down, it can do things with your voice that you cannot — speak Chinese, for example, or sing.

The resulting “personality” is based not only on how you speak, but also on what you know. It’s even possible to add knowledge manually.

ObEN is a technology play, rather than a product. Some of the first ObEN-based products will enable fans in South Korea and Hong Kong to interact with A.I. versions of celebrities.

The company is also thinking about less frivolous applications. For example, Jain suggested Kaiser Permanente could develop avatars of doctors who can interact with patients.

Today, Kaiser interacts with patients through an email-like message system on its website. It would be easy to imagine a virtual video avatar popping up and informing patients of, say, blood-test results, and then being able to answer non-medical questions on the spot (such as general questions about making an appointment). That avatar would be a representation of the patient’s actual, specific doctor.

Another application is in marketing and advertising. Celebrities have always been used for advertising. But ads are becoming increasingly personalized and interactive. The use of celebrities for personalized, interactive marketing doesn’t scale — unless, of course, the celebrities are virtualized.

ObEN is also working on WeChat integration. WeChat is the major messaging app in China.

By far the most interesting potential applications for ObEN’s technology are for enterprise and business.

Above all, ObEN envisions the use of its technology as a super assistant that looks and sounds like you. This “virtual you” would be able to go out and interact with business associates, partners, clients and others, setting up meetings, interacting and generally helping the real you carry the load.

ObEN also sees a role in the sharing economy of the future, where Airbnb-type services could use virtual avatars to negotiate arrival schedules, handle lodging issues between guests and hosts, or even provide general customer service.

ObEN has the potential to help on both a personal and a professional level. As in the 1996 science fiction comedy Multiplicity, in which Michael Keaton’s character feels overwhelmed by the competing demands of work and family (and so does the obvious thing and has himself cloned), ObEN’s virtual assistant application would create a virtual you to do much of the grunt work and low-level communication and decision-making at both work and home.

In the perfect ObEN universe, different simultaneous instances of your PAI would be off scheduling meetings, answering questions, negotiating rates and even telling bedtime stories to your children, according to Jain, while you are freed up to focus on the stuff that requires human attention and experience.

At the end of the day, the user can review everything the PAI did that day.

ObEN’s PAI approach is one answer to the question of how virtual assistants with agency might function. We’ve assumed for years that virtual assistants will do more than just answer our questions, which is mostly what they do today. Future virtual assistants should buy things, negotiate fees, automatically remind co-workers of their deadlines and more.

Consider Amy, the virtual assistant. Amy is an A.I. that interacts via email and schedules meetings. Amy has a personality and can make decisions in an email conversation, such as negotiating available meeting times with the other participants. Amy is a virtual person, and many people who encounter Amy assume they’re interacting with a real human.
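At its core, the negotiation Amy performs comes down to finding time slots that every participant has marked as free. Here is a minimal sketch of that intersection step in Python; the calendar format and names are my own illustration, not the actual Amy/x.ai API:

```python
# Hypothetical sketch of the core step in automated meeting
# scheduling: intersect the participants' free time slots.
# The data model here is illustrative only.

def common_slots(*calendars):
    """Return the slots that appear in every participant's calendar."""
    free = set(calendars[0])
    for cal in calendars[1:]:
        free &= set(cal)  # keep only slots everyone shares
    return sorted(free)

alice = ["Mon 10:00", "Mon 14:00", "Tue 09:00"]
bob   = ["Mon 14:00", "Tue 09:00", "Tue 15:00"]
carol = ["Tue 09:00", "Mon 14:00"]

options = common_slots(alice, bob, carol)
print(options)  # ['Mon 14:00', 'Tue 09:00']
```

A real scheduler would layer preferences and back-and-forth email on top of this, but the set intersection is the decision it is negotiating toward.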

If our virtual assistants are to be “personalities” like Amy, they could also be virtual representations of ourselves. This approach is actually more transparent than the A.I. that’s currently used.

For example, when people interact with Amy, or when they use Google’s “Smart Reply,” they usually don’t know A.I. is involved. With something like ObEN’s PAI, it’s obvious that you’re dealing with A.I.

Interestingly, the use of one’s self as the virtual assistant “personality” also could serve as a kind of authentication, if done right.

ObEN is working with something called Project PAI for authentication. (Project PAI is separate from ObEN, but ObEN is the main company the project is working with right now.)

The intent is to cultivate something called the PAI blockchain, enabling people to maintain control and ownership of their digital personas. (This is especially important for celebrities, who could “sell” or “rent” their personas for a fee.)

Today, ObEN’s technology is nascent, and many of the applications discussed here lie far into the future.

“A.I. is not magic. It takes a lot of human effort,” Jain told me. “It’s not going to happen overnight.”

That’s true. It’s not going to happen overnight. But it is going to happen.

A world of ‘me bots’

I wrote about the coming age of “me bots” two years ago, predicting that bots and agents would interact as us for trivial conversations.

These probably will, or at least should, be tied into our virtual assistants.

What’s easier to predict is that they’ll be available on various media. For example, ObEN is working on something that replaces live video chat. But other technologies will use only voice or only text (for use in messaging apps or email).

The voice-only virtual self would be a huge upgrade over today’s voicemail phone-tag hell. Instead of leaving a message, you could be connected directly to the person’s A.I. assistant, which could have a limited ability to answer questions or take action (such as scheduling meetings). Any request made by the caller could pop up on the user’s screen or phone with a simple “yes” or “no” option. (So if someone requests a document, the A.I. tells the caller it will check; if you approve, it then sends the document.)
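The approve-or-deny flow described above amounts to a simple pattern: the assistant queues the caller’s request, surfaces a yes/no prompt to the owner, and acts only on approval. A minimal sketch in Python, with all names and structure being hypothetical illustration rather than any real product’s API:

```python
# Hypothetical sketch of an approval-gated assistant action:
# the A.I. holds a caller's request pending and only performs
# the action after the owner answers "yes".

class PendingRequest:
    def __init__(self, caller, detail):
        self.caller = caller
        self.detail = detail
        self.status = "pending"

class Assistant:
    def __init__(self):
        self.queue = []

    def take_request(self, caller, detail):
        """Caller asks for something; the assistant promises to check."""
        self.queue.append(PendingRequest(caller, detail))
        return f"I'll check with the owner about sending {detail}."

    def owner_review(self, approve_fn):
        """Owner answers yes/no for each pending request."""
        for req in self.queue:
            if req.status == "pending":
                req.status = "approved" if approve_fn(req) else "denied"
        return [(r.caller, r.detail, r.status) for r in self.queue]

ai = Assistant()
ai.take_request("Carl", "the Q3 report")
ai.take_request("Janet", "the draft contract")

# Owner approves only Carl's request.
result = ai.owner_review(lambda r: r.caller == "Carl")
print(result)
# [('Carl', 'the Q3 report', 'approved'),
#  ('Janet', 'the draft contract', 'denied')]
```

The key design point is that the assistant never sends anything on its own; every outbound action waits on an explicit owner decision.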

Voice A.I. is interesting. But ObEN’s audio-visual approach is highly compatible with two major trends.

The first is face scanning, as represented in Apple’s iPhone X. This kind of technology could produce extremely accurate 3D renderings of people. And that capability should be standard in smartphones in three years.

The second trend is the rise of augmented reality and virtual reality.

(ObEN is leveraging Apple’s ARKit and Google’s ARCore to enable PAIs on phones, according to Jain.)

With people spending more time using augmented reality, first with phones and later with smart glasses — and also spending more time in virtual reality — the idea of interacting with video versions of people’s virtual agents becomes more compelling.

Imagine if Siri, Cortana and Alexa were each represented by a universal “person” created by the company to represent that virtual assistant. Then, imagine these virtual assistants gaining agency and acquiring the ability to show up in other people’s augmented reality glasses. Video Siri shows up and says, “Carl would like to schedule a meeting for next week.” Then the same video Siri shows up again representing Janet.

When everyone who interacts with you via virtual assistant does so using the same face and voice, confusion is inevitable. It doesn’t work.

It would be far more clear, and far easier to authenticate, if, instead of video Siri, video versions of Carl and Janet themselves showed up in the glasses.

Predicting the future is all about following the trends to see where they intersect.

Virtual assistants are growing more sophisticated and will gain agency. A.I. is advancing. Face-scanning will become mainstream. Augmented reality and virtual reality will definitely become ubiquitous.

All this points to a future where A.I. virtual assistants, which look and sound just like us, will help us in myriad ways.

People fear a future where A.I. takes our jobs. But the more likely outcome is that A.I. will simply do some of the work, helping us do our jobs better.

We’ll all have a clone of our own.

