NAVIgoid lets humans get immersed in a remote robot’s actions

[From Discovery]

[Image: Toyohashi University of Technology assistant professor Dzmitry Tsetserukou guides the NAVIgoid telepresence robot system.]

Human Gets Immersed In Remote Robot’s Actions

These kinds of robots could represent us in faraway places, letting us interact with friends, family and coworkers as if we were there.

By Alyssa Danigelis
Dec 28, 2011

What if you could be in two places at once? Or four? A group of Japanese roboticists envisions a world where we all use robots to visit friends and family, and represent us in distant work sites. They are developing a telepresence robot they think will give humans more physical immersion in remote locations.

“Vision is not enough,” said Dzmitry Tsetserukou, an assistant professor at Toyohashi University of Technology’s Advanced Interdisciplinary Electric Research Center. “We have to provide tactile feedback to make him or her more involved, and also motion feedback so we can feel more like we are human on the robot side.”

Tsetserukou, along with computer science and engineering professor Jun Miura and Ph.D. candidate Sugiyama Junichi, developed a robot called NAVIgoid that a human controller can guide remotely using torso movements while receiving physical feedback from the robot. The robot was recently demonstrated at SIGGRAPH Asia, the conference on computer graphics and interactive techniques.

The wheeled robot has two built-in cameras to transmit stereoscopic views of the remote location. It’s also equipped with laser range finders that scan the surroundings and map all the obstacles present. The system is programmed to detect the shape and speed of obstacles on all sides of the robot to make navigation easier and safer. Information is transmitted wirelessly between the robot and operator.
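The article doesn’t detail how the scans become an obstacle map, but a common approach is to cluster the range readings into discrete objects and estimate each object’s speed by comparing consecutive scans. The Python sketch below illustrates that idea; the function names, the 0.3-meter cluster gap and everything else here are illustrative assumptions, not details of the NAVIgoid system.

```python
import math

def scan_to_points(ranges, angle_min, angle_step):
    """Convert one laser range finder scan (a list of distances) to 2D points."""
    return [(r * math.cos(angle_min + i * angle_step),
             r * math.sin(angle_min + i * angle_step))
            for i, r in enumerate(ranges) if r > 0.0]

def cluster_points(points, gap=0.3):
    """Group consecutive scan points into obstacles; a jump of more than
    `gap` meters between neighboring points starts a new cluster."""
    clusters, current = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) > gap:
            clusters.append(current)
            current = []
        current.append(q)
    clusters.append(current)
    return clusters

def centroid(cluster):
    """Average position of a cluster, used as the obstacle's location."""
    xs, ys = zip(*cluster)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def obstacle_speed(centroid_now, centroid_prev, dt):
    """Rough speed estimate from how far a cluster's centroid moved
    between two scans taken `dt` seconds apart."""
    return math.dist(centroid_now, centroid_prev) / dt
```

Matching the same cluster across two scans and differencing its centroid is what would let a system like this warn the operator about moving people as well as static walls.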

On the human side, the user dons a virtual reality headset of the kind typically used by gamers to see the robot’s stereoscopic view. The user also puts on a special wide belt containing an actuator and a flex sensor. Instead of using a joystick or a mouse, the operator steers the robot by tilting the torso.
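The article doesn’t give the control mapping, but a plausible sketch is to treat forward-and-back lean as drive speed and side-to-side lean as turning, with a dead zone so ordinary postural sway doesn’t move the robot. All of the gains, limits and angles below are invented for illustration; they are not NAVIgoid’s values.

```python
def torso_to_velocity(pitch_deg, roll_deg,
                      dead_zone=5.0, max_tilt=30.0,
                      max_linear=0.8, max_angular=1.0):
    """Map torso tilt angles (e.g. from a belt-mounted flex sensor) to a
    (linear, angular) velocity command for a wheeled robot.

    Leaning forward/backward drives the robot; leaning left/right turns it.
    Tilts smaller than `dead_zone` degrees are ignored. All values here
    are hypothetical, for illustration only.
    """
    def scale(angle, limit):
        if abs(angle) < dead_zone:
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        # Ramp linearly from the edge of the dead zone up to max_tilt.
        fraction = min((abs(angle) - dead_zone) / (max_tilt - dead_zone), 1.0)
        return sign * fraction * limit

    return scale(pitch_deg, max_linear), scale(roll_deg, max_angular)

# Example: a 12-degree forward lean with a slight 3-degree sideways sway.
print(torso_to_velocity(12.0, 3.0))  # -> (0.224, 0.0): drives, doesn't turn
```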

Surprisingly, Tsetserukou said, he and his colleagues found that steering with the torso was nearly as precise as steering by hand. When the robot detects objects or people nearby, it relays that information to the belt. If it detects a person walking around it, for example, it produces a pattern of clockwise or counterclockwise stimuli in the operator’s belt.
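A rotating stimulus like that could be produced by firing vibration motors around the belt in sequence, sweeping in the direction the detected person is moving. The sketch below assumes eight evenly spaced motors and stubbed-out hardware calls; none of this is confirmed NAVIgoid hardware.

```python
import time

NUM_MOTORS = 8  # assumption: vibration motors spaced evenly around the belt

def activate_motor(i):    # stub standing in for a real motor-driver call
    print(f"motor {i} on")

def deactivate_motor(i):  # stub standing in for a real motor-driver call
    print(f"motor {i} off")

def bearing_to_motor(bearing_deg):
    """Pick the belt motor closest to an obstacle's bearing (0 = straight ahead)."""
    return round((bearing_deg % 360) / (360 / NUM_MOTORS)) % NUM_MOTORS

def sweep(start_bearing_deg, clockwise=True, steps=4, pulse_s=0.15):
    """Pulse motors one after another so the vibration appears to travel
    around the operator's waist, mirroring a person circling the robot."""
    motor = bearing_to_motor(start_bearing_deg)
    step = 1 if clockwise else -1
    for i in range(steps):
        m = (motor + i * step) % NUM_MOTORS
        activate_motor(m)
        time.sleep(pulse_s)
        deactivate_motor(m)

sweep(90.0, clockwise=False)  # someone passing on the left, circling counterclockwise
```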

Telepresence robots already exist, including the QB, made by the Mountain View company Anybots, which self-balances and is controlled through a Web interface. Tsetserukou praised the QB but pointed out that while the robot moves around, the user doesn’t.

“You just sit in the chair and see the video,” he said. “It’s just very similar to Skype.”

Skype has limitations, especially for elderly and disabled people who want to have more dynamic interactions with their families and friends, Tsetserukou said. The NAVIgoid platform could allow human users to physically feel like part of the action.

Ilana Nisky is a mechanical engineering postdoctoral research fellow at Stanford University who works in professor Allison Okamura’s Collaborative Haptics and Robotics in Medicine lab.

It was the first time Nisky had encountered controlling a mobile robot with one’s torso. Although the NAVIgoid platform seems promising, she said she wonders whether users will find the tactile feedback interface pleasant and useful.

Nisky added that she’s curious to see what happens with the robot in the future. “The more robots and computers are incorporated into our lives, the more an intuitive, natural, engaging and immersive interaction with them becomes important,” she said.

Currently the team is working on refining the system. They would like to give the robot more human-like features so that those interacting with it feel more like they’re communicating with the actual person on the other end. Such features might include an expressive face that reflects the user’s emotions and arms covered in warm, touch-sensitive material.

Another idea they’re considering is creating a hierarchical multi-robot telepresence so that robots with different, complementary skills can collaborate to perform tasks for the user. One could explore the location while another could fetch objects.
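That scheme is still speculative, but at its simplest the hierarchy could amount to a coordinator that routes each request to whichever robot advertises the needed skill. The toy sketch below is purely illustrative; every class, skill name and task in it is invented.

```python
class Robot:
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)

    def perform(self, task):
        print(f"{self.name} performing: {task}")

class Coordinator:
    """Dispatch each task to the first robot that advertises the required skill."""
    def __init__(self, robots):
        self.robots = robots

    def dispatch(self, task, skill):
        for robot in self.robots:
            if skill in robot.skills:
                robot.perform(task)
                return robot
        raise LookupError(f"no robot offers skill '{skill}'")

# One robot explores the location while another fetches objects.
fleet = Coordinator([Robot("scout", ["explore"]), Robot("porter", ["fetch"])])
fleet.dispatch("map the hallway", "explore")
fleet.dispatch("bring the coffee mug", "fetch")
```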

Citing sci-fi movies like “Surrogates” and “Avatar,” Tsetserukou said that controlling a remote robot through a direct connection to the brain is still in the distant future. Until then, he thinks humans could use the NAVIgoid platform to have multiple robots represent them.

“This kind of robot can enhance our ability to live and to spend our time in a more useful way,” he said. “We can use this technology to give us the ability to be in different places at the same time.”
