Augmented reality robot provides physical sensations from virtual characters

[From ExtremeTech]

[Image: Futurama, "I Dated a Robot"]

By James Plafke on February 11, 2013

One of the most common tropes in science fiction is the ability to apply the image of a person over the frame of a robot, thus creating a visually identical copy of a human. Now, as Japanese startup Different Dimension Inc. (DDI) gears up to market a life-sized augmented reality robot, the world is one step closer to creating visual human doppelgangers.

Recent science fiction, such as Minority Report or the “I Dated a Robot” episode of Futurama, explores what it would be like to have either virtual reality mimic physical sensation, or a physical robot take on a virtual character’s appearance in order to provide that sensation. Different Dimension is working on a system that combines a physical robot with an augmented reality headset in order to achieve that seemingly sci-fi goal.

The tech is eerily similar to that episode of Futurama. In the episode, Fry and company travel to the internet, where they discover a parody version of Napster that allows them to download the image of a celebrity and have it projected onto a blank robot. Fry pirates Lucy Liu and embarks on a journey of human-robot love. DDI’s technology, based on a 2006 project from Yokohama National University and the Japan Science and Technology Agency, requires the use of an augmented reality headset. The robot can’t go out on dates, but the end result could be considered an early stage of the kind of advanced technology shown in that Futurama episode. The robot wears what is essentially a full-body green-screen suit; when you look through the glasses, a virtual character is projected over the robot, while the surrounding environment is captured and displayed through a head-mounted camera.
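To make the basic idea concrete, here is a minimal chroma-key compositing sketch in Python using OpenCV. It illustrates the general technique (mask the green-suit pixels in the camera image and replace them with a rendered character), not DDI’s actual pipeline; the file names and HSV thresholds are assumptions for the example.

    import cv2
    import numpy as np

    # Frame from the head-mounted camera showing the green-suited robot,
    # and a rendered frame of the virtual character at the same resolution.
    # (Both file names are hypothetical placeholders.)
    camera_frame = cv2.imread("camera_frame.png")
    character_frame = cv2.imread("character_frame.png")

    # Find the green-suit pixels in HSV space; the thresholds are illustrative.
    hsv = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2HSV)
    lower_green = np.array([40, 80, 80])
    upper_green = np.array([80, 255, 255])
    mask = cv2.inRange(hsv, lower_green, upper_green)

    # Where the suit was detected, show the virtual character instead;
    # everywhere else, keep the real environment from the camera.
    composite = np.where(mask[:, :, None] > 0, character_frame, camera_frame)
    cv2.imwrite("composite.png", composite)

In the actual headset this compositing would have to run per frame and per eye, but the mask-and-overlay step is the same principle.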

The original 2006 version, U-Tsu-Shi-O-Mi, had a few problems, such as outlines of the green-screen material protruding from behind thinner virtual models, and the rig could only play back prerecorded audio. DDI improved on the 2006 model, reducing its size by 60% by removing the bottom half of the robot and making it thinner to eliminate the protruding green-screen outlines. DDI also changed the green-screen material so it feels more natural to the touch.

The new robot can move its neck, shoulders, elbows, and wrists, and sensors inside the joints send the movement data back to the software, called MMDAgent, that dictates what the virtual character is doing. MMDAgent uses speech recognition so the system can hold real-time conversations, and at the moment it projects an anime character aesthetically reminiscent of characters from the newer Final Fantasy games. Check out the 1:56 video of the 2006 U-Tsu-Shi-O-Mi below to get an idea.
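As a rough illustration of that sensor-to-software loop, here is a short Python sketch that mirrors joint angles read from the robot onto the rendered character. It is not the real MMDAgent interface (the article does not describe its API); the robot and renderer objects are hypothetical placeholders.

    import time

    # Joints the article says the robot can move.
    JOINTS = ["neck", "left_shoulder", "right_shoulder",
              "left_elbow", "right_elbow", "left_wrist", "right_wrist"]

    def read_joint_angles(robot):
        """Hypothetical: poll the sensor inside each joint, returning angles in degrees."""
        return {name: robot.sensor(name).angle() for name in JOINTS}

    def update_character_pose(renderer, angles):
        """Hypothetical: apply the same angles to the virtual character's skeleton."""
        for name, angle in angles.items():
            renderer.set_bone_rotation(name, angle)

    def mirror_loop(robot, renderer, hz=60):
        """Keep the projected character's pose in sync with the physical robot."""
        period = 1.0 / hz
        while True:
            update_character_pose(renderer, read_joint_angles(robot))
            time.sleep(period)

The point of the loop is simply that the physical robot leads and the virtual character follows, so that what you touch lines up with what you see.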

DDI is bringing the system to market with preorders opening in March, though the company does not currently have a website where you can follow its progress. The system will cost around $4,800 to $5,300, and DDI expects to sell only around 120 units during its first year of availability.

Though we’re not yet at the point where we can pirate celebrities, this tech is certainly (and perhaps eerily) on track.
