[The VR add-on described in this story from Seeker should enhance presence – note the last short paragraph, in which the MindMaze creator and CEO says “We’re moving away from VR as a technological experience to being a real human experience…” The original story includes other images and a video. See coverage of Google’s related tech in an ISPR Presence News post from a few months ago. –Matthew]
MindMaze’s Neural VR Interface Reads Your Mind to Reflect Your Facial Expression
MASK, a new brain-computer product for desktop and mobile virtual reality headsets, can predict a smile or a wink milliseconds before you even move.
By Dave Roos
April 13, 2017
If Facebook CEO Mark Zuckerberg is reading his crystal ball correctly, then the next big thing will be social virtual reality. In the very near future, you’ll put on a virtual reality headset and meet up with friends for virtual hangouts, live concerts, and interactive games.
But as anyone who survived the early Second Life scene can attest, virtual avatars can be pretty socially inept. After all, there’s only so much you can say with a permasmile frozen on your face.
This week, a neurotechnology company based in Switzerland called MindMaze unveiled a product that can synchronize a variety of human facial expressions on virtual avatars. Called MASK, the technology reads your brain signals to predict a smile or a wink milliseconds before you even move. The result is a faster-than-real-time reflection of your changing facial expressions that has the potential to add new emotional depth to social and gaming interactions in VR and bring the technology’s use further into the mainstream.
Seeker tested out the predictive VR technology at MindMaze’s San Francisco offices. The MASK device is little more than a foam insert that fits comfortably into existing VR headsets, whether desktop models like the Oculus Rift and HTC Vive or mobile units like Google Daydream. The foam insert is equipped with eight sensors that detect different brain signal “channels.”
It takes between 10 and 20 milliseconds for a “smile” signal from the brain to make it to the mouth, which is just enough time for MASK to hijack the brain signal and project a simultaneous smile onto a virtual avatar.
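The article doesn’t describe MindMaze’s actual algorithm, but the predictive idea can be sketched in a few lines: a facial-muscle command shows up in the brain-signal channels some 10 to 20 milliseconds before the muscle itself moves, so a classifier running on short signal windows can label the expression before it appears on the face. The sketch below is purely illustrative — the channel count (eight) matches the article, while the sampling rate, template-matching approach, and every name are assumptions.

```python
import numpy as np

# Illustrative sketch (not MindMaze's method): classify a short window of
# 8-channel brain-like signals by correlation against per-expression templates.
N_CHANNELS = 8
WINDOW_MS = 10  # one sample per millisecond is assumed for simplicity


def classify_window(window, templates):
    """Label a (channels x time) window by highest correlation with a template."""
    flat = window.ravel()
    flat = (flat - flat.mean()) / (flat.std() + 1e-9)
    best_label, best_score = None, -np.inf
    for label, tmpl in templates.items():
        t = tmpl.ravel()
        t = (t - t.mean()) / (t.std() + 1e-9)
        score = float(np.dot(flat, t) / flat.size)  # normalized correlation
        if score > best_score:
            best_label, best_score = label, score
    return best_label


# Toy templates for two of the baseline expressions (random but fixed).
rng = np.random.default_rng(0)
templates = {
    "smile": rng.normal(size=(N_CHANNELS, WINDOW_MS)),
    "wink_left": rng.normal(size=(N_CHANNELS, WINDOW_MS)),
}

# A noisy observation of the "smile" pattern should still be labeled "smile",
# milliseconds before the physical smile would arrive.
observed = templates["smile"] + 0.3 * rng.normal(size=(N_CHANNELS, WINDOW_MS))
print(classify_window(observed, templates))
```

In a real pipeline this classification would run continuously on a sliding window, with the predicted label driving the avatar’s face inside the 10–20 ms head start.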
The effect is uncanny. In our demonstration, we put on a VR headset and found ourselves sitting across from a cartoony avatar that faithfully mimicked 10 different facial expressions, including a frown, a smirk, and both a right and left wink, and reflected when the person wearing the headset was speaking.
Mimicked isn’t quite the right word, since the facial expressions morphed instantly. The effect was almost like looking into a mirror, though the face staring back wasn’t the same.
The neuro VR technology behind the MASK was first developed for the healthcare sector. MindMaze was founded in 2011 by Tej Tadi, a neuroscientist with training in electrical engineering, VR, and computer graphics. The company’s first products were immersive virtual reality setups designed to aid the rehabilitation of people who had experienced a debilitating stroke or amputees who were coping with the loss of a limb. Partnering with hospitals such as UCSF Medical Center and the Stanford Stroke Center, the suite of technologies used a virtual avatar to walk patients through a series of movements while monitoring brain signals for targeted progress.
Tadi told Seeker that MindMaze’s healthcare research and development laid the groundwork for what the company is now doing with MASK. It has essentially learned how to analyze brain signals and match them with different physical intentions, such as “raise my right hand.” It’s the same type of brain-computer interface that allows a paralyzed person to steer an electric wheelchair with his or her mind, Tadi said.
All of our brains use the same electrical signals to communicate the intention to smile, wink, or smirk, he explained. That’s why MASK works out of the box for at least 10 baseline expressions. The product’s initial accuracy is impressive, with only occasional lapses in its recognition of certain facial adjustments. But it includes a robust machine learning component that works to better reflect your physiognomy, Tadi noted, which means its ability to register your personal and unique grins and grimaces will steadily improve over time.
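The article says only that MASK ships with roughly ten baseline expressions and then adapts to each wearer over time; it does not say how. One generic way such personalization could work — offered here strictly as an assumption, not MindMaze’s actual technique — is to blend each confirmed observation of a user’s signal into that user’s per-expression template with an exponential moving average, so the baseline gradually converges toward the individual’s idiosyncratic pattern. All class and parameter names below are hypothetical.

```python
import numpy as np

# Hypothetical personalization sketch: start from out-of-the-box templates,
# then nudge each template toward the user's own signal whenever an
# expression is confirmed (exponential moving average update).
class ExpressionPersonalizer:
    def __init__(self, baseline_templates, alpha=0.2):
        self.templates = {k: v.copy() for k, v in baseline_templates.items()}
        self.alpha = alpha  # adaptation rate: higher adapts faster

    def update(self, label, observed_window):
        """Blend a confirmed observation into this user's template."""
        t = self.templates[label]
        self.templates[label] = (1 - self.alpha) * t + self.alpha * observed_window


rng = np.random.default_rng(1)
baseline = {"smile": np.zeros((8, 10))}        # generic starting template
user_smile = rng.normal(size=(8, 10))          # this user's idiosyncratic pattern

p = ExpressionPersonalizer(baseline, alpha=0.2)
for _ in range(20):                            # 20 confirmed smiles
    p.update("smile", user_smile)

# After repeated updates the template has converged toward the user's signal.
err = float(np.abs(p.templates["smile"] - user_smile).max())
print(err < 0.1)
```

The design choice here is that personalization is incremental and per-user, which matches the article’s claim that recognition of “your personal and unique grins and grimaces” improves steadily with use.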
Developing sophisticated facial recognition in VR is a crowded field, with most products using some form of camera directed at a user’s face to capture facial movements and positions. Zuckerberg aims to be at the forefront of this field, but Tadi wasn’t impressed by Facebook’s demo of social VR avatars at last October’s Oculus Connect conference.
Rather than employ eye or facial tracking, the expressions conveyed through Oculus were triggered by gestures — shake your fist and the avatar’s face appears angry; shrug with your palms up and it appears confused, etc. Facebook likened the result to the use of emojis to convey reactions in virtual conversations.
“It was very forced,” Tadi remarked. “You had these very 2D characters who were being driven artificially. If someone bumped their fist, there was a smile generated on the avatar. That’s a very artificial way of thinking. If you want to smile, you just smile. There’s no other way to do it.”
MindMaze is currently in licensing talks with all of the major consumer VR headset makers. The plan is to strike a deal for MASK to ship with an Oculus Rift or HTC Vive in time for the holidays. In February of 2016, MindMaze’s valuation topped $1 billion after receiving a $100 million round of funding to expand into the consumer space.
“This is going to bring real human emotion into VR,” said Tadi, enabling everything from more immersive first-person gaming experiences to more engaging social interactions online. “We’re moving away from VR as a technological experience to being a real human experience for the first time.”