Virtual reality users must learn to use what they see

[An interesting new study demonstrates that users don’t perceive the 3D cues provided by virtual reality as they do the same cues in the real (nonmediated) world unless they’re trained to do so. This story is from University of Wisconsin-Madison News and the full study is available from Scientific Reports. –Matthew]

[Image: Figure 2c from the study indicating that task feedback improves performance over time. Left: Target interceptions over trials when feedback was not provided (blue symbols) and when feedback was provided (orange symbols). Circles correspond to the percentage of target interceptions across observers on each trial.]

Virtual reality users must learn to use what they see

December 4, 2017
By Chris Barncard

Anyone with normal vision knows that a ball that seems to be rapidly growing larger is probably going to hit them on the nose.

But strap them into a virtual reality headset, and they still may need to take a few lumps before they pay attention to the visual cues that work so well in the real world, according to a new study from University of Wisconsin–Madison psychologists.

“The companies leading the virtual reality revolution have solved major engineering challenges — how do you build a small headset that does a good job presenting images of a virtual world,” says Bas Rokers, UW–Madison psychology professor. “But they have not thought as much about how the brain processes these images. How do people perceive a virtual world?”

Turns out, they don’t perceive it like the real world — at least not without training, according to a study Rokers and postdoctoral psychology researcher Jacqueline Fulvio published recently in the journal Scientific Reports.

In 2015, Fulvio found that people were flunking her simple test of three-dimensional perception using a flat screen and standard 3D movie glasses. They were not good at discerning which direction a target was moving.

“Most importantly, they confused whether the object was coming toward them or going away from them,” she says. “It was a surprising finding. Nobody believed it, because it’s not something that happens often in the real world. You’d get hurt.”

The researchers decided to move the test to virtual reality to provide more realistic indications of motion in three dimensions — such as binocular cues, in which slightly different views from the left and right eye reveal depth, and parallax, where closer objects appear to be moving faster than those farther away.
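
To make those two cues concrete, here is a minimal Python sketch (not from the study; the interpupillary distance and viewing geometry are textbook assumptions) showing why nearer objects produce larger binocular disparity and faster motion parallax:

```python
import numpy as np

# Illustrative sketch only, not the study's code: two classic 3D cues.
IPD = 0.063  # interpupillary distance in meters; a typical adult value

def binocular_disparity(x, z):
    """Angular disparity (radians) of a point at lateral offset x (m) and
    depth z (m), seen by eyes at -IPD/2 and +IPD/2 on the x-axis."""
    return np.arctan2(x + IPD / 2, z) - np.arctan2(x - IPD / 2, z)

def motion_parallax(lateral_head_speed, z):
    """Approximate retinal speed (rad/s) of a stationary point at depth z
    while the head translates sideways at lateral_head_speed (m/s)."""
    return lateral_head_speed / z

# Nearer points (z = 0.5 m) yield larger disparity and faster parallax
# than farther points (z = 2.0 m):
print(binocular_disparity(0.0, 0.5), binocular_disparity(0.0, 2.0))
print(motion_parallax(0.1, 0.5), motion_parallax(0.1, 2.0))
```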

“We thought it was as easy as taking the same object-tracking task, putting it in the virtual environment, and having people do it the same way,” Fulvio says. “And they did do it the same way. They made the same mistakes.”

Given a one-second snippet of the movement of a small, round target across a plane that stretched away from the viewer at roughly eye level, study participants correctly moved a virtual paddle to intercept the target’s course less than a quarter of the time.
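
As a rough illustration of that trial logic (the catch tolerance below is an assumed value, not a parameter from the study), the interception check reduces to comparing the target’s true direction of motion with the angle at which the viewer parks the paddle on a ring around them:

```python
import math
import random

# Hypothetical sketch of the interception scoring; the angular tolerance
# for a "catch" is assumed, not taken from the study.
PADDLE_HALFWIDTH = 0.15  # radians of the ring covered by the paddle

def caught(target_direction, paddle_angle):
    """True if a paddle placed at paddle_angle on a ring around the viewer
    lies on the target's extrapolated direction of motion."""
    error = (target_direction - paddle_angle + math.pi) % (2 * math.pi) - math.pi
    return abs(error) <= PADDLE_HALFWIDTH

# A viewer guessing at random intercepts only a sliver of trials:
trials = [caught(random.uniform(-math.pi, math.pi),
                 random.uniform(-math.pi, math.pi)) for _ in range(10_000)]
print(sum(trials) / len(trials))  # ~ PADDLE_HALFWIDTH / pi, about 0.05
```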

What Fulvio and Rokers found was that when most people put on a virtual reality headset, they still treat what they see like it’s happening on any run-of-the-mill TV screen.

“There’s no depth to a computer screen. There are no binocular cues. Close one eye, close the other eye, nothing changes,” says Rokers, whose work was funded by Google. “If you take that expectation into a VR headset, where you do have binocular cues, you somehow just don’t use them.”

Unless you’re trained to use those cues.

Fulvio began giving study subjects visual and audible feedback. Once they’d watched the one-second flight and set their virtual paddle to catch the target, the game would reveal the full path of the target and play a cowbell sound for a success or a swish for a miss.
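
A minimal sketch of that feedback step (the function below is a placeholder standing in for the experiment’s actual rendering and audio code):

```python
# Placeholder sketch of the end-of-trial feedback described above; prints
# stand in for redrawing the trajectory and playing the sound clips.
def end_of_trial_feedback(hit, visual=True, auditory=True):
    if visual:
        print("Replaying the target's full trajectory...")  # visual feedback
    if auditory:
        print("cowbell!" if hit else "swish...")            # auditory feedback

end_of_trial_feedback(hit=True)   # successful interception
end_of_trial_feedback(hit=False)  # miss
```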

The visual feedback nearly doubled success rates. (The cowbell improved scores, too, but less so.)

“They were getting better, but how were they getting better? What were they doing differently?” Fulvio asked.

When she turned off the VR system’s head tracking, taking away the effects of players’ head movements and making them passive viewers, they were bad again. When she gave a little of that freedom back, restoring the system’s response to head movements but making the virtual world’s shifts lag behind players by as much as half a second, they were still bad.
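
One way to picture that lag manipulation: buffer the stream of head poses and render from samples roughly half a second old, so the view trails the head (the 90 Hz refresh rate below is an assumption; the half-second delay is the figure reported above):

```python
from collections import deque

FRAME_RATE = 90                        # Hz; assumed headset refresh rate
DELAY_FRAMES = int(0.5 * FRAME_RATE)   # about half a second of lag

pose_buffer = deque(maxlen=DELAY_FRAMES + 1)

def pose_to_render(current_pose):
    """Queue the newest head pose and render from the oldest buffered one,
    which is DELAY_FRAMES frames (~0.5 s) stale once the buffer fills."""
    pose_buffer.append(current_pose)
    return pose_buffer[0]

# Feeding in frame numbers as stand-in poses: by frame 50 the renderer
# is still drawing the pose captured at frame 5.
for frame in range(51):
    rendered = pose_to_render(frame)
print(frame, rendered)  # -> 50 5
```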

Interestingly, even players who reported keeping their heads stock-still showed improvements when the virtual reality system was incorporating the smallest wobbles of their heads into the scene they were seeing.

“These are head motions people make, tiny jitters, that are not planned movements,” Rokers says. “When you think you’re sitting still, your head is moving a little bit. And, it turns out, people actually use that information to improve depth perception. It’s tiny. It’s almost involuntary. But the visual system actually exploits that.”

The results show that tiny head movements and typical binocular cues to motion are there for the taking in virtual reality, but that most people use them only if they are actively shown how VR differs from a flat computer screen. That lesson should help virtual reality creators improve uptake of their products.

“Google packages a virtual reality YouTube viewer with their headset. That’s a passive experience, and not the best thing to do,” Rokers says. “What they should be doing is packaging action games with their headset, something that forces users to interact with the environment. That teaches them to use the information available in virtual reality, and treat it more like the real world and less like a computer screen.”

“Otherwise you just have a really fancy TV, really close to your face,” says Fulvio, who has moved on to testing the extent to which people’s expectations influence their perception of flat versus virtual depth by having her study subjects watch TV inside virtual reality.

Rokers says showing that people can be taught to use cues to three-dimensional motion they otherwise ignore may ultimately help refine treatments for vision disorders such as blind spots or amblyopia (“lazy eye”), in which the brain can be trained to compensate for perceptual limitations.
