[A new study indicates that we don’t have to physically move through virtual spaces to learn where things are in them. This story is from the UA News, where it includes 3 more images and a 1:55 minute video. –Matthew]
[Image: Derek Huffman navigates a virtual environment in 360 degrees on an omnidirectional treadmill. Credit: Roy Wageman/UAHS BioCommunications]
Brain May Not Need Body Movements to Learn Virtual Spaces
A new study conducted by the University of Arizona and the University of California, Davis, enhances our understanding of how the brain learns in virtual reality.
Alexis Blue, University Communications
September 18, 2019
Virtual reality is becoming increasingly present in our everyday lives, from online tours of homes for sale to high-tech headsets that immerse gamers in hyper-realistic digital worlds. While its entertainment value is well-established, virtual reality also has vast potential for practical uses that are just beginning to be explored.
Arne Ekstrom, director of the Human Spatial Cognition Lab in the University of Arizona Department of Psychology, uses virtual reality to study spatial navigation and memory. Among the lab’s interests is the technology’s potential for socially beneficial uses, such as training first responders, medical professionals and those who must navigate hazardous environments. For those types of applications to be most effective, though, we need to better understand how people learn in virtual environments.
In a new study published in the journal Neuron, Ekstrom and co-author Derek Huffman, a post-doctoral researcher in the Center for Neuroscience at the University of California, Davis, advance that understanding by examining whether being able to physically move through virtual spaces improves how we learn them.
“One of the big concerns or drawbacks with virtual reality is that it fails to capture the experience that we actually have when we navigate in the real world,” said Ekstrom, an associate professor of psychology and the study’s senior author. “That’s what we were trying to address in this study: What information is sufficient for forming spatial representations that are useful in actually knowing where things are?”
The researchers had study participants explore three virtual cities while wearing virtual reality headsets. The participants navigated each city in one of three ways:
- Participants wore the headset while walking on an omnidirectional, or 360-degree, treadmill, which allows users to walk freely in any direction. In this condition, the participants could navigate through the virtual environment by walking and turning their heads.
- Participants navigated through the virtual environments using only a handheld joystick; they were not able to navigate by moving their heads or walking.
- Participants navigated by moving their bodies side to side and moving a joystick back and forth; they were not able to walk around.
Participants spent two to three hours, on average, exploring the virtual cities and locating certain shops they were instructed to find. Once they’d had an opportunity to learn the environments well, they were asked a series of questions to test their spatial memory. For example, they might be asked to imagine they were standing at the coffee shop, facing the bookstore. They would then be asked to point in the direction of the grocery store.
The accuracy of participants’ responses did not vary based on which condition they were in.
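This kind of pointing question is typically scored as an angular error: the difference between the direction a participant points and the true direction of the target, given the imagined standing location and facing direction. The Python sketch below illustrates that arithmetic with made-up map coordinates; the shop positions, function names and sign convention are illustrative assumptions, not the study's actual scoring code.

```python
import math

def relative_bearing(standing, facing_landmark, target):
    """Angle (degrees) from the imagined facing direction to the target,
    positive = counterclockwise (to the participant's left).
    All locations are (x, y) map coordinates."""
    facing_angle = math.atan2(facing_landmark[1] - standing[1],
                              facing_landmark[0] - standing[0])
    target_angle = math.atan2(target[1] - standing[1],
                              target[0] - standing[0])
    diff = math.degrees(target_angle - facing_angle)
    return (diff + 180) % 360 - 180  # wrap to [-180, 180)

def angular_error(response_deg, correct_deg):
    """Smallest unsigned angle between the pointed and correct directions."""
    return abs((response_deg - correct_deg + 180) % 360 - 180)

# Hypothetical shop coordinates on a virtual city map (arbitrary units)
coffee_shop, bookstore, grocery = (0.0, 0.0), (0.0, 10.0), (8.0, 6.0)

correct = relative_bearing(coffee_shop, bookstore, grocery)
print(f"Correct direction: {correct:.1f} deg")  # about 53 deg to the right
print(f"Error for a response 45 deg to the right: "
      f"{angular_error(-45.0, correct):.1f} deg")
```

Averaging these errors across many such questions gives a single spatial-memory score per participant, which is the kind of measure that did not differ across the three navigation conditions.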
Participants then underwent an MRI scan while answering a similar set of questions. This allowed the researchers to see what was happening in the brain as participants retrieved spatial memories.
The researchers found that the same areas of the brain were activated for participants in all three situations. In addition, the patterns of interaction between different regions of the brain were similar among the three conditions.
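One common way to ask whether the same neural codes are present across conditions is to correlate multivoxel activity patterns measured while participants answer matching questions in each condition. The sketch below, using simulated data, shows the general idea of such a pattern-similarity comparison; it is an abstract illustration under stated assumptions, not a reproduction of the paper's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated activity patterns: rows = spatial-memory questions,
# columns = voxels in a brain region of interest.
# In a real analysis these would come from fMRI data.
n_questions, n_voxels = 20, 100
shared_code = rng.normal(size=(n_questions, n_voxels))  # hypothetical common code

def simulate_condition(noise=0.5):
    """Patterns for one navigation condition: shared code plus measurement noise."""
    return shared_code + noise * rng.normal(size=(n_questions, n_voxels))

walking = simulate_condition()
joystick = simulate_condition()

def pattern_similarity(a, b):
    """Mean correlation between matching question patterns in two conditions."""
    return np.mean([np.corrcoef(a[i], b[i])[0, 1] for i in range(len(a))])

print(f"Walking vs. joystick pattern similarity: "
      f"{pattern_similarity(walking, joystick):.2f}")
```

High similarity between conditions, as in this toy example, is the signature consistent with the study's conclusion that the retrieved spatial representations look alike regardless of how the environment was learned.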
“What we found was that the neural codes were identical between the different conditions,” Ekstrom said. “This suggests – as far as the brain is concerned and what we were also able to measure with behavior – that there is sufficient information with just seeing things in a virtual environment. The information you get from moving your body, once you know the environment well enough, doesn’t really add that much.”
The findings address a long-standing scientific debate around whether or not body movements aid in learning physical spaces.
“There’s been this idea that how you learn might make a huge difference, and that if you don’t have body-based cues, then you’re lacking a big part of what might be important for forming memories of space,” said Huffman, the study’s first author. “Our research would suggest that once you have a well-formed memory of an environment, it doesn’t matter as much how you learned it.”
“We would say you don’t need body immersion, and you don’t need body cues to form complex spatial representations,” Ekstrom added. “That can happen with sufficient exposure in simple virtual reality applications.”
From a practical standpoint, the research suggests that even basic virtual reality systems may be useful in instructional applications.
“Virtual reality has the potential to allow us to understand situations that we might not otherwise be able to directly experience,” Ekstrom said. “For example, what if we could train first responders to be able to find people after an attack on a building, without them actually ever having been to that building?
“Our findings suggest there’s promise for using virtual reality – even simple applications where you’re just moving a joystick – to teach people fairly complex knowledge about spatial environments.”
—
Original Study DOI: 10.1016/j.neuron.2019.08.012
CONTACTS
Researcher contact:
Arne Ekstrom
UA Department of Psychology
520-621-4594
adekstrom@email.arizona.edu
Media contact:
Alexis Blue
University Communications
520-626-4386
ablue@arizona.edu