Invoked Computing – Device-free Ubiquitous Augmented Reality

[From DigInfo TV, where the story includes a 2:51 video; more information is available at the researchers’ web site]

[Image: ‘Invoked Computing’ at home]

18 November 2011

A research group at the University of Tokyo is creating a new paradigm in human-computer interaction. Dubbed ‘Invoked Computing’, the idea is to turn everyday objects into computer interfaces and communication devices.

“For example, if you make a gesture, the computer should be able to recognize this as ‘I want to use the telephone.’ With an iPhone, for example, you have everything in a small device and you have to learn how to use it. Here we want to do the opposite: the computer will have to learn what you want to do.”

“If you want to use a laptop, you just make a gesture, it will recognize this and project the screen, the keyboard and everything. You won’t have to carry a device, no battery, nothing. Everything is ubiquitous: ubiquitous augmented reality.”

The system won the grand prize at Laval Virtual 2011 in France and was on display at the Digital Content Expo in Tokyo with two proof-of-concept prototypes. The first demonstration turns a regular banana into a phone: a high-speed camera tracks the banana while a parametric speaker array directs sound in a very narrow beam, creating the impression that the sound is coming directly out of the banana. The second demonstration is a laptop in a pizza box. Video and sound are projected onto the lid of the pizza box, and the user can interact with it by moving the playhead and changing the volume. A sketch of the tracking-to-steering loop follows below.
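
The core loop of the banana-phone demo, tracking an object with a camera and steering a narrow sound beam at it, can be illustrated with a short sketch. The Python/OpenCV snippet below is not the researchers’ code: the HSV color thresholds, the 60-degree camera field of view, and the send_pan_angle() stub are all assumptions made for illustration.

```python
import cv2

# Illustrative sketch only: track a yellow object (the "banana") with a
# camera and turn its image position into a pan angle for a hypothetical
# steerable directional speaker.

H_FOV_DEG = 60.0  # assumed horizontal field of view of the camera

def yellow_centroid(frame):
    """Return the (x, y) centroid of the largest yellow blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))  # rough banana yellow
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    c = yellow_centroid(frame)
    if c is not None:
        # Map horizontal pixel position to an angle off the camera axis.
        pan = (c[0] / frame.shape[1] - 0.5) * H_FOV_DEG
        print(f"pan angle: {pan:.1f} deg")
        # send_pan_angle(pan)  # hypothetical: aim the speaker array here
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```

The actual prototype uses a high-speed camera, so its tracking-to-steering latency would be far lower than a webcam loop like this one.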

“For this prototype we have tracking to get the position of the augmented object, and then we project sound onto the object as well as video. Usually for augmented reality we use goggles, or even an iPhone or iPad with a camera, and you see augmented reality through the device. Here it’s spatial augmented reality: we use a projector to directly augment objects, so it’s multi-user, and the particular thing is that we also have sound as well as the video.”
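
The projection half of the pizza-box demo, making flat content appear painted onto a tracked surface, is commonly done with a planar homography. The sketch below assumes that approach; the lid corner coordinates, the 1280x800 projector resolution, and the synthetic UI image are placeholders rather than details from the prototype.

```python
import cv2
import numpy as np

# Illustrative sketch only: warp screen content onto a tracked planar
# surface (the pizza-box lid) using a homography, then show the result
# full-screen on the projector output.

content = np.full((600, 800, 3), 40, np.uint8)  # stand-in for the projected UI
cv2.putText(content, "pizza-box laptop", (60, 310),
            cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 255, 255), 2)

h, w = content.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
# Corners of the lid in projector coordinates (placeholder values; in a
# live system these would come from the tracker each frame).
dst = np.float32([[310, 180], [920, 210], [900, 640], [290, 600]])

H = cv2.getPerspectiveTransform(src, dst)              # content -> projector
canvas = cv2.warpPerspective(content, H, (1280, 800))  # assumed projector size

cv2.namedWindow("projector", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("projector", cv2.WND_PROP_FULLSCREEN,
                      cv2.WINDOW_FULLSCREEN)
cv2.imshow("projector", canvas)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

Recomputing the destination corners from the tracker every frame and re-warping keeps the image “stuck” to the lid as it moves; the directed audio would be steered at the same tracked position, as in the first sketch.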

In the future the researchers want to broaden the range of gestures and objects that the system can recognize and interact with, with the goal of creating a ubiquitous AR system that can learn and anticipate the intentions of the user in various situations.
