Robots learn to move like humans using virtual reality

[This short story from Forbes describes the clever use of tele-operation via VR to help artificially intelligent robots learn how to move like humans. The original story includes a 1:29 minute video, and for more information see the short interview with Pieter Abbeel in Supply Chain Management Review and the November 2017 New York Times story it links to. –Matthew]

[Image: Robot performs human movements using VR. Photo Credit: Berkeley Robot Learning Lab.]

Robots Learn To Move Like Humans Using Virtual Reality

February 1, 2018
Andréa Morris, Contributor – I cover S.T.E.A.M. (Science, Technology, Engineering, Art and Math).

The historic 1997 chess match left World Chess Champion Garry Kasparov defeated by an IBM computer named Deep Blue. During the final tense moments, Kasparov held his head in his hands and breathed into his clenched fists. He suddenly stood, conceded, and walked away from the chess board, towards the audience, arms held out, palms facing up. Deep Blue remained where it was, doing nothing. In fact, Deep Blue hadn't even been able to move a single one of its own chess pieces; a human surrogate had to move them. Arguably, the most astounding thing Kasparov did the whole match wasn't a genius chess move. It was this movement through our three-dimensional environment, walking off the stage and leaving the new champion trapped in its problem-solving prison.

The variability problem.

Working to break AI out of its ivory tower is Pieter Abbeel, electrical engineering and computer science professor at UC Berkeley. Last quarter, Abbeel and his graduate students launched the startup Embodied Intelligence Inc. (EI) to make a dent in the variability problem. “Many problems in AI have to do with variability,” says Abbeel. “The more variability the harder the problem.”

The variables in chess boil down to the coordinates of the chess pieces. The number of possible coordinates for moving a chess piece is limited. Outside a chess match, our moves have far more variables. You might walk, jog, skip, trip or cartwheel, stop to smell the roses, crouch to tie your shoes, wave down a friend and walk through the park, etc. Abbeel and colleagues want to graduate artificial intelligence from a controlled learning environment to the obstacle course of our unruly world. They've hit the ground running using Virtual Reality (VR) to maneuver robots. The human commandeers the robot like an avatar, so the movements of the human are mirrored by the robot. Human moves, robot moves, robot learns by experience.
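The gap in variability between the two settings can be made concrete with some rough arithmetic. The numbers below are an illustrative sketch, not figures from the article: chess actions form a small discrete set, while a robot's continuous joint commands explode combinatorially even under coarse discretization.

```python
# Illustrative comparison of action-space sizes (assumed numbers, not
# from the article).

# Chess: every action is a move between two of 64 squares.
files, ranks = "abcdefgh", "12345678"
squares = [f + r for f in files for r in ranks]
# Upper bound on "from-square to-square" moves, ignoring legality:
chess_action_bound = len(squares) * (len(squares) - 1)
print(chess_action_bound)  # 4032 square-to-square pairs at most

# A robot arm: each action is a vector of continuous joint commands.
# Even coarsely discretizing 7 joints into 100 levels each explodes:
robot_action_bound = 100 ** 7
print(robot_action_bound)  # 100,000,000,000,000 discretized actions
```

Even this crude bound understates the difference, since real robot actions are continuous and arrive dozens of times per second.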

A virtual reality tele-operation system.

The team at Embodied Intelligence uses a standard, consumer-grade virtual reality system to demonstrate to a robotic pupil how to move with the agility (or functional awkwardness) of a human. The information imparted has a lot more detail than a piece of code. Each pixel is one of more than two million pieces in a visual puzzle: coordinated movement captured at more than 70 frames per second. This type of imitation learning is an enhanced version of the learning we humans do. When a pro golf instructor demonstrates how to execute a golf swing, you then have to try to repeat the maneuvers. But with virtual reality, a human teacher performs the demonstration and the robot executes the moves in tandem. There is no intermediate step of interpretation: the skills being demonstrated are integrated immediately.

After a half hour and a handful of reps, the robot “gets it” and moves objects through space like a human. This time, on its own. “Let’s puppeteer the robot around in a way that the robot experiences exactly what it’s going to experience when it’s doing it itself,” says Abbeel. “That way a robot can learn much more efficiently.”
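The teleoperation-and-imitation loop described above is, at its core, supervised learning from demonstrations, often called behavioral cloning. The following is a minimal sketch of that idea; all names, shapes, and the linear policy are illustrative assumptions, not Embodied Intelligence's actual system.

```python
# A minimal behavioral-cloning sketch: fit a policy to (observation,
# action) pairs recorded while a human teleoperated the robot in VR.
# All data here is synthetic and all shapes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Pretend demonstration data: 500 frames of visual features, each paired
# with the 7-DoF action the human issued at that moment.
obs = rng.normal(size=(500, 32))       # per-frame observation features
true_w = rng.normal(size=(32, 7))      # hidden "expert policy"
actions = obs @ true_w                 # actions the human demonstrated

# Behavioral cloning = ordinary supervised regression: train a policy
# that maps observations to demonstrated actions (linear for brevity).
w = np.zeros((32, 7))
lr = 0.01
for _ in range(2000):
    pred = obs @ w
    grad = obs.T @ (pred - actions) / len(obs)  # mean-squared-error gradient
    w -= lr * grad

# After training, the policy reproduces the demonstrated behavior closely.
mse = float(np.mean((obs @ w - actions) ** 2))
print(f"imitation error: {mse:.4f}")
```

Because the robot experiences exactly the observations it will see at run time, as Abbeel notes, the mapping learned this way transfers directly, without the robot having to interpret a demonstration from a third-person view.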

By using VR, Abbeel and his team have shortened the robotic skill-acquisition process from weeks or months down to a single day.

This entry was posted in Presence in the News. Bookmark the permalink.

One Comment

  1. Shuhua Li
    Posted February 17, 2018 at 6:18 pm | Permalink

    Not only are humans learning through VR, but robots are too! What if robots become proficient in VR technology? Will they still need humans as engineers? Or will they be able to repair and update themselves?


