[Lots of press coverage in the last 24 hours cites the first-person report in Rolling Stone on Magic Leap’s reveal of the company’s first consumer mixed reality product. The abbreviated version of the original story below focuses on the author’s experience using the new technology – note the many indirect mentions of presence factors; see the original story for much more information, several images and a very short video. –Matthew]
Magic Leap: Founder of Secretive Start-Up Unveils Mixed-Reality Goggles
Rony Abovitz talks with Glixel about the tech and ambitions behind new MR Goggles
By Brian Crecente
December 20, 2017
Magic Leap today revealed a mixed reality headset that it believes reinvents the way people will interact with computers and reality. Unlike the opaque diver’s masks of virtual reality – which replace the real world with a virtual one – Magic Leap’s device, called Lightwear, resembles goggles, which you can see through as if you’re wearing a special pair of glasses. The goggles are tethered to a powerful pocket-sized computer, called the Lightpack, and can inject life-like moving and reactive people, robots, spaceships – anything – into a person’s view of the real world.
Magic Leap, founded in 2011, remains a bit of a mystery, confounding tech writers and analysts with its ability to pull in seemingly endless amounts of investment from major companies and interest from bright minds. (The company has raised $1.9 billion to date.) While the secretive augmented-reality startup has released a few high-level concept videos that show what it hopes to achieve by injecting virtual creations into the real world, it hasn’t shown off a single piece of working technology to the public. It’s been so long that some publications have even publicly wondered if the entire thing is a sort of scheme. That, despite the company’s ever-increasing valuation – last listed at $6 billion.
The whole company rides on the back of founder Rony Abovitz, a bombastic bioengineer who helped design the surgery-assisting robot arms of Mako Surgical Corp. The sale of that company for $1.65 billion helped fund nearly the first four years of Magic Leap.
The last time the company spoke publicly in any great detail was about a year ago, when it invited Wired magazine to its South Florida headquarters to see the tech in action, but not to write about what the hardware looked like. Earlier this month, Glixel received a similar invitation. Abovitz invited me down to visit the company headquarters in Fort Lauderdale to write about the science of the technology and to finally detail how the first consumer headgear works and what it looks and feels like.
This revelation – the first real look at what the secretive, multi-billion dollar company has been working on all these years – is the first step toward the 2018 release of the company’s first consumer product. It also adds some insight into why major companies like Google and Alibaba have invested hundreds of millions of dollars into Magic Leap, and why some researchers believe the creation could be as significant as the birth of the Internet. Technology like this “is moving us toward a new medium of human-computing interaction,” said David Nelson, creative director of the MxR Lab at USC Institute for Creative Technologies. “It’s the death of reality.”
The Leap
My first experience with Magic Leap’s technology in action occurred in a sort of sound stage, inside a building separated from the rest of the complex. This is where the company tests out experiences that might one day be built for use in a theme park or other large area. As with almost all of the demos I experienced over the course of an hour or so, I can describe the intent and my own thoughts, but I agreed not to divulge the specifics of the characters or IP. In many cases, these are experiences that will never see the light of day; instead, they were constructed to give visitors who pass through the facility – under non-disclosure agreements – a chance to see the magic in action.
This first, oversized demo dropped me into a science-fiction world, playing out an entire scene that was, in this one case, augmented with powerful hidden fans, building-shaking speakers and an array of computer-controlled, colorful lighting. It was a powerful experience, demonstrating how a theme park could potentially craft rides with no walls or waits. Most importantly, it took place among the set dressing of the stage – the real-world props that cluttered the ground and walls around me – and while it didn’t look indistinguishable from reality, it was close. To see those creations appearing not on the physical world around me, like some sort of animated sticker, but in it, was startling.
Next, we made our way back into the building to a large room decorated to look like a big living space, complete with couches, tables, knick-knacks, and rugs. The demo area also gave me a chance to try a half-dozen or so different demonstrations. My first was a visit with Gimble, a floating robot that hovered in the mid-field between my eyes and a distant wall. I walked up to it, around it, viewed it from different angles – and it remained silently hovering in my view. The world around it still existed, but I couldn’t see through it. It was as if it had substance, volume – not at all a flat image. I was surprised to find that, up to a point, the closer I got to the robot, the more detailed it became. Getting close to the floating object didn’t expose pixels; it highlighted details I wasn’t able to see from afar. If I got too close, though, it sort of disappeared, or else I was suddenly inside the thing – artifacts, I was told, of a demo that hasn’t yet been polished. I also noticed that the sound of the whirring robot shifted as I moved around it, always placing the noise where it should be no matter where I stood.
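That object-anchored audio comes down to simple geometry: every frame, the system knows where your head is and where the robot is, so it can recompute the direction and distance of the sound source. Here is a minimal sketch of the idea in Python – the function name and the inverse-distance falloff are illustrative assumptions, not Magic Leap’s actual audio pipeline:

    import math

    def spatialize(listener_pos, listener_yaw, source_pos):
        """Head-relative azimuth and gain for a world-anchored sound.

        listener_pos, source_pos: (x, z) floor-plane positions in meters.
        listener_yaw: heading in radians (0 = facing +z).
        """
        dx = source_pos[0] - listener_pos[0]
        dz = source_pos[1] - listener_pos[1]
        # Direction of the source in world space, then made head-relative.
        azimuth = math.degrees(math.atan2(dx, dz) - listener_yaw)
        azimuth = (azimuth + 180) % 360 - 180    # wrap to [-180, 180]
        distance = max(math.hypot(dx, dz), 0.1)  # clamp to avoid blowup up close
        gain = min(1.0, 1.0 / distance)          # simple inverse-distance falloff
        return azimuth, gain

    # Circling a robot fixed at (0, 2) while turning to face different headings:
    for yaw in (0.0, 0.8, 1.6):
        print(spatialize((0.0, 0.0), yaw, (0.0, 2.0)))

Feed an azimuth and gain like that to stereo speakers at the temples, and the whirring stays pinned to the robot however you move.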
Next, Sam Miller, senior director of systems engineering, and Shanna De Iuliis, senior technical marketing manager, walked me through the process of launching three screens – essentially computer monitors – into my world. They looked like large, extremely flat television screens or monitors. More importantly, they stayed put, so I could pop them up wherever I wanted, even creating an array of them that I could use just as I would any multi-monitor setup. In this case, I had three screens up, each spread out just enough that I had to turn my head to look at them. And the robot still hovered over to the side.
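The trick behind screens that “stay put” is world-locking: a screen’s pose is stored once in world coordinates, and every frame it is re-expressed relative to the freshly tracked head pose, so it stays fixed in the room while you move. A toy version of that transform, assuming a simplified yaw-only head pose (real tracking uses full six-degree-of-freedom poses):

    import math

    def world_to_head(point, head_pos, head_yaw):
        """Re-express a world-anchored 3D point in head-relative coordinates.

        point, head_pos: (x, y, z) in meters; head_yaw: radians about +y.
        """
        dx, dy, dz = (point[i] - head_pos[i] for i in range(3))
        c, s = math.cos(head_yaw), math.sin(head_yaw)
        # Rotate by -yaw about the vertical axis to undo the head's turn.
        return (c * dx - s * dz, dy, s * dx + c * dz)

    # A screen pinned 2 m in front of the starting spot stays world-fixed:
    screen = (0.0, 1.5, 2.0)
    print(world_to_head(screen, (0.0, 1.5, 0.0), 0.0))  # dead ahead
    print(world_to_head(screen, (1.0, 1.5, 0.0), 0.5))  # after stepping aside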
After the screens, I tried a little demo that created a floating, four-sided television, each showing live TV. I could walk around the object, watching different channels. All of the channels kept playing whether I was watching them or not.
At another point, a wall in the room suddenly showed the outline of a door with bright white light shining through it. The door opened and a woman walked in.
She walked up to me and stopped a few feet away. The level of detail was impressive, though I wouldn’t mistake her for a real person – there was something about her luminescence, her design, that gave her away. While she didn’t talk or react to what I was saying, she does have that ability. Miller had her on manual control, running her face through a series of emotions: smiling, angry, disgusted. I noticed that when I moved or looked around, her eyes tracked mine. The cameras inside the Lightwear were feeding her data so she could maintain eye contact. It was a little unnerving, and I eventually found myself breaking eye contact to avoid being rude.
One day, this human construct will be your Apple Siri, Amazon Alexa, or OK Google, but she won’t just be a disembodied voice. She will walk with you, look at you, and deliver AI-powered, embodied assistance.
The demonstrations were an odd collection of ideas, including a massive comic that you could walk up to and view as if looking through a window.
Earlier this year, I chatted with the people behind Madefire, a company that is working to breathe a bit more life into the act of reading comic books. In Florida, I got to see their work in action, and it was surprisingly substantive. The comic book page sort of floated over a coffee table; I could tell that some of the panels were floating at different depths. When I walked up to them, I could peer around the side and see the flat image floating in space, or simply view the art from different angles, like looking at a painting on the wall. The particular scene I was examining took place during a storm, and the air surrounding the comic was filled with falling rain and the sound of a thunderstorm. It was a subtle but powerful touch that helped pull me into the experience.
In another example, they showed me a quick demonstration of something they called volumetric capture. The team went to some capture studios and had them record live performances of actors using special equipment. They then dropped an actor into the system, essentially putting the live performance into whatever room the user happens to be standing in. While some of the finer points of the capture were rough – the space between nose and lip was a little too rounded together – the overall impression was of being able to watch, up close, a live performance by real people, and to walk around it. I can’t describe what the capture showed, only that it was of someone moving very quickly, and my view of it kept up. There was no stuttering or slowdown, even as I walked around the performance, up close and far away. And, I was told, the performance could be made larger or shrunk down to fit in the palm of my hand.
[snip re: the evolution of the technology and company]
This system – the ninth generation of the hardware – is made up of three components: a headset and a small pod-like computer, connected by a single long cable, and a controller known simply as Control. The headset looks almost like a pair of goggles held in place with a thick strap. They’re lightweight and modern-looking, if not exactly stylish, and certainly much sleeker than anything virtual reality has to offer. “The lenses are a very iconic form,” Natsume says. “The aspiration is that eventually, this will become like glasses and people will wear them every day.”
The headband that holds the goggles in place uses a “crown temple” design, Natsume says. “It came from our study on how to distribute weight evenly around your head.” To put on the goggles, a person holds either side of the plastic crown and pulls; the crown spreads apart into left, right and back pieces. Then you slide it onto your head like a headband. Two short cables come out of the back of the headband and merge into one before snaking four or five feet down to the system’s Lightpack. The Lightpack is two rounded pods joined smoothly at one end, forming a gap between them. It’s designed, Natsume says, to clip onto your pocket or onto a shoulder strap that Abovitz describes as a sort of guitar strap.
The goggles will come in two sizes, and the forehead pad, nose pieces, and temple pads can all be customized to tweak the comfort and fit. By the time they launch, the company will also take prescription details to build into the lenses for those who typically wear glasses.
The controller is a rounded bit of plastic that sits comfortably in your hand and features an array of buttons, six-degree-of-freedom motion sensing, haptics, and a touchpad.
The Lightwear and Lightpack are almost toy-like in their design, not because they feel cheap – they don’t – but because they’re so light and there seems to be so little to them. Abovitz, though, is quick to point out just how much is packed into that small space. “This is a self-contained computer,” he says. “Think about something close to like a MacBook Pro or an Alienware PC. It’s got a powerful CPU and GPU. It’s got a drive, WiFi, all kinds of electronics, so it’s like a computer folded up onto itself.”
Then he points to the Lightwear goggles. “There is another powerful computer in here, which is a real-time computer that’s sensing the world and does computer vision processing and has machine learning capability so it can constantly be aware of the world outside of you,” he says. “So you have the least amount of weight with what is like a satellite of engineering up here.”
The headset can also sense the sound around a user through four built-in microphones, and it uses a real-time computer vision processor along with – I counted six – external cameras to track the wearer and the world they’re in. Tiny, high-end speakers built into the temples of the device provide spatial sound that can react to your movement and the movement of the creations with which you’re interacting. “This isn’t a pair of glasses with a camera on it,” he says. “It’s what we think of as spatial computing. It has full awareness.”
Abovitz declines to say what the GPU, CPU or other specs of the headset are, nor will he say what the battery life is. They need to hold something back to release later, he says; besides, they’re still working on battery optimization. As we leave, I notice a long table shrouded in a white sheet. What’s under there? I ask. The next prototypes, Abovitz says.
Sigur Ros Music and Weta Robots
As things wrapped up in the demo room, Miller asked me what I thought, and I told him: The goggles were so comfortable I almost forgot I was wearing them. The computer attachment fit neatly into my pocket, and its tether to the headset never got in my way. The controller felt intuitive almost immediately. The sound was both accurate and powerful. But I had one concern: the field of view.
Like Microsoft’s HoloLens, which uses a different sort of technology to create mixed reality, Magic Leap’s Lightwear doesn’t offer a field of view that matches your eyes’. Instead, the Magic Leap creations appear in a field of view roughly the shape of a rectangle lying on its side. Because the view is floating in space, I couldn’t measure it, so I did the next best thing: I spent a few minutes holding first a credit card and then my hands out in front of my face, trying to gauge how big that invisible frame is. The credit card was much too small. I ended up with this: the viewing space is about the size of a VHS tape held in front of you with your arms half extended. It’s much larger than the HoloLens’s, but the frame is still there.
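For the curious, that comparison converts to degrees with basic trigonometry: an object of width w at distance d subtends an angle of 2·arctan(w/2d). A quick back-of-the-envelope in Python – the VHS shell dimensions are standard, but the 35 cm “arms half extended” distance is my assumption, so treat the result as a rough estimate, not a spec:

    import math

    def fov_degrees(size_m, distance_m):
        """Angle subtended by an object of a given size at a given distance."""
        return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

    # A VHS cassette shell is roughly 18.7 cm x 10.2 cm; call "arms half
    # extended" about 35 cm. Both inputs are rough, so the output is too.
    print(round(fov_degrees(0.187, 0.35)), "degrees horizontal")  # ~30
    print(round(fov_degrees(0.102, 0.35)), "degrees vertical")    # ~17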
“I can say that our future-gen hardware tech significantly expands the field of view,” Miller says. “So the field of view that you are looking at on these devices is the field of view this will ship with. For the next generation product, it will be significantly bigger. We have that stuff working in the labs.”
De Iuliis adds that developers have the option to fade the edges so that there won’t be such a harsh break where the image stops “and your brain will kind of naturally fill in the gaps for you.”
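That edge fade is essentially a vignette: content alpha ramps from opaque in the center of the frame to transparent at the border, so virtual objects dissolve rather than clipping against a hard rectangle. A minimal sketch of one way it could work – the linear ramp and 10 percent margin are assumptions, not Magic Leap’s implementation:

    def edge_fade(u, v, margin=0.1):
        """Alpha multiplier for a pixel at normalized position (u, v) in [0, 1].

        Returns 1.0 well inside the frame, falling linearly to 0.0 at the
        border, over a band that is `margin` of the frame wide.
        """
        def ramp(t):
            return min(t, 1.0 - t) / margin
        return max(0.0, min(1.0, ramp(u), ramp(v)))

    print(edge_fade(0.5, 0.5))   # 1.0 at the center
    print(edge_fade(0.95, 0.5))  # 0.5 halfway through the fade band
    print(edge_fade(1.0, 0.5))   # 0.0 at the edge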
Miller wanted to show me one other neat trick. He walked to the far end of the large room and asked me to launch Gimble. The robot obediently appeared in the distance, floating next to Miller. Miller then walked into the same space as the robot and promptly disappeared. Well, mostly disappeared – I could still see his legs jutting out from the bottom of the robot.
My first reaction was, “Of course that’s what happens.” But then I realized I was seeing a fictional thing created by Magic Leap technology completely obscure a real-world human being. My eyes were seeing two things existing in the same place and had decided that the creation, not the engineer, was the real thing, simply ignoring Miller – at least, that’s how Abovitz later explained it to me.
Finally, I went to a separate room to see an experience that I can talk about in full detail. Icelandic experimental rock band Sigur Ros has been quietly working with some folks at Magic Leap to create an experience that they like to call a soundscape. For this particular demo, the team had me put on earbuds plugged into the goggles. “What you are about to see is a project called Tonandi,” Mike Tucker, technical lead on the project, tells me. “What you’re going to see is not a recorded piece of music but an interactive soundscape. That’s how they like to describe it.”
Tonandi starts by creating a ring of ethereal trees around you and then waits to see what you do next. Inside, floating all around me, are these sorts of wisps dancing in the air. As I wave my hands at them, they create a sort of humming music, vanishing or shifting around me. Over time, different sorts of creations appear, and I touch them, wave at them, tap them, waiting to see what sort of music the interaction will add to the growing orchestral choir that surrounds me. Soon pods erupt from the ground on long stalks, and grass springs from the carpet and coffee table. The pods open like flowering buds, and I notice stingray-like creatures made of colorful light floating around me. My movements don’t just change this pocket world unfolding around me; they let me co-create the music I hear, combining my actions with Sigur Ros’ sounds.
Experiencing Tonandi was effortless; the sort of surreal, magic-realism-infused musical creation that you could hand over to anyone to try. But behind the scenes, a lot was going on. Tucker says the project uses a lot of the tech built into Magic Leap’s gear. “We’re using a bunch of unique features to Magic Leap,” he says. “We’re using the meshing of the room, we’re using eye tracking, and you’re going to use gesture, our input system for most of the experience.”
Later, over lunch in a conference room, Abovitz says that the team did once experiment with a horror experience. “It was terrifying,” he says. “People would not go into the room anymore. It was very, very, very scary, like almost life-threateningly scary, so we kind of said, ‘OK, let’s put that aside for now.’”
[snip: re: Dr. Grordbort game in development]
The Persistence of Reality
The billion-dollar technology of Magic Leap seems so effortless at times that it would be easy to underestimate it. And in some ways, that’s one of the key innovations of the technology. It can feel like it’s not there.
One of the fundamental problems that Abovitz and his team at Magic Leap were hoping to solve was the discomfort that some experience while using virtual reality headsets and nearly everyone finds in the prolonged use of screens of any type. “So our goal is to ultimately build spatial computing into something that a lot of people in the world can use all day every day all the time everywhere,” Abovitz tells me. “That’s the ambitious goal; it’ll take time to get there. But part of all day is that you need something that is light and comfortable. It has to fit you like socks and shoes have to fit you. It has to be really well tuned for your face, well tuned for your body. And I think a fundamental part of all day is the signal has to be very compatible with you.”
Finding a way to recreate a light field, he says, means that the result is a viewing experience as natural and comfortable as looking around you. That, he says, is the bedrock upon which Magic Leap’s work is built. “You don’t ever want to think about it again,” he says. “You just want to know that we took care of it, and we think that’s an important first step.”
[snip]
Abovitz also declined to give me a ship date or price. But he did say that there is no doubt that the first version will ship in 2018. As for the cost: “So we have an internal price, but we are not talking about that yet,” he says. “Pre-order and pricing will come together. I would say we are more of a premium computing system. We are more of a premium artisanal computer.”
Despite not answering a number of key questions, the day spent wandering the hallways and byways of Magic Leap left me with a much clearer sense of what the company was up to – that it wasn’t just about the headset, or even the light field tech that’s driving it. Magic Leap, in releasing their system to the world, is combining a slew of technologies into something that could one day reinvent the way we deal with all technology.
The light field photonics, which can line up a fake reality with your natural-light, real one, may be the most obvious of the innovations on display, but there’s much more. The visual perception system actively tracks the world you’re moving through, noting things like flat surfaces, walls, and objects. The result is a headset that sees what you do, and that can then have its creations behave appropriately, whether that means hanging a mixed reality monitor next to your real one or making sure the floating fish in your living room don’t drift through a couch. That room mapping is also used to keep track of the things you place in your world so they’re there waiting for you when you come back. Line up six monitors above your desk and go to sleep; the next day they’ll be exactly where you left them.

While we don’t know the specifications of the miniature computer built into the Lightpack, we do know that it is designed to run video games in a world it can see and react to. The Control is a straightforward way to interact with the system, but the system can also use Magic Leap’s own gesture tracking, which takes in not just hands and fingers but your head position, voice, eye tracking and more. And finally, the technology delivers not just a light field but a sound field – the sort of aware stereo that can track your movements and react to make sure the audio is always coming from the object, no matter where you stand. It can even relay distance and intensity.

This is just the hardware and software used to create the baseline of Magic Leap One and its Lightwear, Lightpack and Control. Early next year, the company plans to open up a creator portal and deliver access to its software development kit. Then it won’t just be Magic Leap and its partners – folks like Weta, Sigur Ros, ILMxLAB and Twilio – working on new experiences. It will be everyone and anyone.
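That kind of persistence implies some form of saved spatial layout: object poses stored relative to a recognized room and restored when the headset relocalizes there. Magic Leap hasn’t said how its version works, but a minimal sketch of the idea – with an entirely hypothetical file format and anchor ID – might look like this:

    import json

    def save_layout(path, anchor_id, objects):
        """objects: list of dicts like {"name": ..., "pose": [x, y, z, yaw]},
        with poses expressed relative to the room's recognized anchor."""
        with open(path, "w") as f:
            json.dump({"anchor": anchor_id, "objects": objects}, f)

    def load_layout(path, recognized_anchor_id):
        """Restore saved objects only if we relocalized in the same room."""
        with open(path) as f:
            layout = json.load(f)
        if layout["anchor"] != recognized_anchor_id:
            return []  # different room: nothing to restore here
        return layout["objects"]

    # Pin six monitors above a desk, then restore them the next morning:
    monitors = [{"name": "monitor_%d" % i, "pose": [0.4 * i, 1.6, 1.0, 0.0]}
                for i in range(6)]
    save_layout("office_layout.json", "room-anchor-42", monitors)
    print(len(load_layout("office_layout.json", "room-anchor-42")), "restored")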
[snip to end]