How Microsoft researchers might invent a holodeck

[From Wired’s Gadget Lab blog, where the original post includes many additional images; also see the follow-up at VentureBeat]

By Dylan Tweney
August 31, 2011

A few hundred yards away from Building 99, in Hardware Studio B, the rubber gets a little closer to the road. An impressive, multistory curtain of LEDs hangs in the lobby, displaying some sort of interactive art that responds to movement and sounds in the space, while employees enjoy a game of pingpong. The rest of the building is more prosaic, with surplus computers stacked up in the unused back sections of long, windowless corridors.

It’s here that hardware engineers carve 3-D mock-ups, create prototypes, test and refine circuitry, and get products ready for the market. A high-concept idea that originates in the rarefied air of Building 99 (hey! wouldn’t it be cool if your computer were a giant touchscreen table?) may get turned into an actual product in the hardware studio (hello, Microsoft Surface).

Wired recently toured both buildings to see some of the work Microsoft scientists and engineers are doing to invent the computer interfaces of the future.

Muscle Movement

Imagine playing Guitar Hero — with an air guitar.

That’s exactly what the “Skinput” system being developed by Microsoft researcher Scott Saponas can do. A bracelet of electrodes on your arm senses how you are moving your hand and fingers, and transmits the data wirelessly to your computer, where the game can put it to use.

You might also use it to control your phone: For instance, you could touch your forefinger and thumb together to answer a call, or touch your middle finger and thumb together to pause music playback.

“Our muscles are generating a lot of electrical data we can sense,” Saponas says. All the sensor has to do is figure out which electrical signals correspond to which gestures, and you could control your phone, computer or game console just by moving your fingers.

That could be handy, Saponas says, if your hands are otherwise occupied: washing dishes, making pottery, or riding a bike, for instance.

It’s an intriguing interface idea, but it has a ways to go before it’s practical.

For starters, figuring out which muscles correspond to which finger movements is a computational challenge.

“It would be nice if you had a ring finger muscle and an index finger muscle, but that’s not how it works,” Saponas says. Instead, there are groups of muscles that work in various combinations to move your fingers more or less individually. Sorting out the electrical signals is an exercise in pattern recognition that Saponas has been working on for several years.

“There’s a lot of noise in the data, which is one of the things that makes it difficult,” he says.
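
At its core, this is a classification problem: turn a window of noisy electrode readings into a guess about which gesture the hand just made. Here is a rough sketch of that kind of pattern-recognition step in Python, with invented feature dimensions, stand-in data, and an off-the-shelf classifier rather than anything from Saponas’s actual system:

```python
# Illustrative only: classify finger gestures from windowed EMG features.
# Each row of X stands in for a feature vector (e.g., per-channel signal
# amplitude) computed from a short window of electrode readings, and y holds
# the gesture label recorded during training. The labels, feature count, and
# classifier choice are invented, not Microsoft's.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in training data: 300 windows x 16 features (real data would come
# from the electrode bracelet).
X = rng.normal(size=(300, 16))
y = rng.choice(["pinch_index", "pinch_middle", "rest"], size=300)

# Normalize the features, then fit a support-vector classifier, one common
# way to sort noisy, overlapping muscle signals into discrete gestures.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)

# At run time, each new window of sensor data gets the same treatment.
new_window = rng.normal(size=(1, 16))
print(model.predict(new_window))
```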

It’s also a bit of a hardware problem at the moment. The Skinput system would look cool on the playa at Burning Man, but it’s a bit too bulky and aggressive-looking for consumer use just yet. It’s also not terribly accurate.

But these are all merely bumps in the road to Saponas, who seems genuinely delighted by his research — and his good fortune in being able to pursue it here. A few years ago, Saponas was a University of Washington graduate student in computer science. He was lucky enough to land an internship at Microsoft Research, where he did work pertaining to his dissertation — and when he got his Ph.D., he got hired on by the company to continue that work.

“Don’t tell them, because I like the paycheck, but I would come here even if they didn’t pay me,” Saponas confides as we leave his office.

And who knows? Something like this might be on the shelves at Best Buy before you know it.

Light Space

Senior researcher Andy Wilson helped start Microsoft’s investigations into tabletop displays in 2002. That work culminated, a few years ago, with the launch of Microsoft Surface.

Wilson’s still working with tabletops. But now his research extends the computer interface from the tabletop throughout the entire space around it, including the air above the table, adjacent walls and even the floor.

The key to his “Light Space” project is a trio of depth cameras: Cameras that can record 3-D data by sensing how far away each point is. A similar sensor is used in Microsoft’s Xbox Kinect, where it helps detect the position and orientation of your body, and can even be used by Kinect hackers to create 3-D maps of rooms.

In Wilson’s setup, three depth cameras are trained at different parts of a room to create a real-time map of the space.

“The data you get from the depth cameras is in millimeters,” says Wilson. “That allows you to combine the views of the three cameras into a 3-D view we can reason about.”
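
Put another way, each camera’s depth map is just a cloud of points measured in millimeters, and a calibration transform maps every camera’s points into one shared room coordinate frame. Here is a simplified sketch of that merging step; the transforms and point clouds are placeholders, not the lab’s actual calibration:

```python
# Illustrative only: merge point clouds from three calibrated depth cameras
# into one room-sized 3-D model. Each camera has a 4x4 rigid transform
# (rotation + translation) from its own frame into the shared room frame;
# real values would come from calibration, these are placeholders.
import numpy as np

def to_room_frame(points_mm, transform):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points in mm."""
    homogeneous = np.hstack([points_mm, np.ones((len(points_mm), 1))])
    return (homogeneous @ transform.T)[:, :3]

# Placeholder transforms: identity for camera 0, simple offsets for the rest.
transforms = [np.eye(4) for _ in range(3)]
transforms[1][:3, 3] = [2000.0, 0.0, 0.0]   # camera 1 mounted 2 m to one side
transforms[2][:3, 3] = [0.0, 2000.0, 0.0]   # camera 2 along the other wall

# Stand-in depth data: (N, 3) points in millimeters from each camera.
clouds = [np.random.rand(1000, 3) * 1000.0 for _ in range(3)]

# The merged cloud is what the system "reasons about": bodies, hands, tabletops.
room_cloud = np.vstack([to_room_frame(c, t) for c, t in zip(clouds, transforms)])
print(room_cloud.shape)   # (3000, 3) points in a single room coordinate frame
```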

Completing the setup are several high-definition projectors aimed at the tabletop and at a nearby wall. Everything is bolted to a cubical frame about ten feet on a side, made out of silver scaffolding, similar to the metal girders used by lighting designers for holding up stage lights.

The metal cube is kind of a wireframe of a room, and it encloses a brightly lit white table, which stands out against the theatrical darkness of Wilson’s lab.

When you step inside the cube, the computer recognizes your arrival, building a 3-D model of your body and of anyone else inside the space.

In Light Space, you can manipulate photos and video windows on a tabletop, just by using your hands. But the 3-D aspect of the space means you can do other nifty things: For example, you can swipe a window off the table and onto your hand, where it becomes a little red dot. You can carry this dot around the room — it follows your hand wherever you go — and when you want, you can throw it onto the wall, where it reconstitutes itself as a window.

Or you can move a window from one screen to another by touching it with one hand, then touching the other screen where you want it to go with your other hand. The window moves across, as if it were an electrical current zapping through your body.

[Photo: Senior researcher Andy Wilson]

It’s also possible to use virtual space to control things. For instance, Wilson has the system create a “menu” icon on the floor. Depending on how high you hold your hand above this menu, you can select different menu options. A light shining on your hand changes colors to indicate which option you’re selecting, and there’s an audible prompt as well.
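
Here is a toy version of that height-to-option mapping; the option names and band heights are invented, since the article doesn’t give the real thresholds:

```python
# Illustrative only: map the height of a hand above a floor "menu" spot to a
# discrete option, the way the vertical menu works in the demo. The option
# names and band boundaries are made up for this sketch.
def menu_option(hand_height_mm):
    bands = [
        (300, "option 1"),    # below roughly 30 cm
        (600, "option 2"),
        (900, "option 3"),
    ]
    for ceiling, option in bands:
        if hand_height_mm < ceiling:
            return option
    return None  # hand raised past the top of the menu: no selection

print(menu_option(450))   # -> "option 2"
```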

“With this kind of interaction, can you go to your Zune music library and find the song you want?” says Wilson. “I don’t know about that — it’s an open question.”

But it is cool.

Mouse House

[Photo: User-experience designer Karsten Aagaard]

Across a wide open space from Building 99 is Microsoft’s Studio B. At the end of a long corridor, a pair of double doors leads to the company’s model shop.

If you spent any time at all during your childhood assembling models, this place is Valhalla. A half-dozen craftsmen sit at workbenches here, manufacturing models and mock-ups of hardware concepts. Nearly every tool a model-maker could want is in the shop: Carvable blocks of foam, bits of wood and plastic and metal, knives, scrapers, chisels, glue, screws and, of course, piles and piles of discarded failed masterpieces. There’s a paint shop where you can mix and spray any color you can think of onto anything you can get under the hood.

In a closet off to the side, two Objet Eden 350V 3-D printers are humming 24-7, squirting tiny jets of epoxy with an accuracy of 1/1000 inch and then curing it in ultraviolet light, manufacturing three-dimensional plastic objects layer by layer. (During our visit, the observation window on one of the printers is covered with opaque paper, to keep us from seeing what’s inside.)

“We take care of anybody with anything tangible,” says Karsten Aagaard, a user-experience designer in Microsoft’s hardware group.

What that means in practice is that Microsoft engineers with clever ideas, CAD drawings, or manufacturing issues to debug come to the shop looking for help.

“We turn around concepts within hours,” says Aagaard. “So we basically are helping them run their schedules in real time.”

For example, when developing Microsoft’s Touch Mouse, the team carved dozens of possible variations in “Ren board,” a kind of soft, low-density foam that is easy to sculpt. It turns out there’s an art to designing mice: You can’t just compute the perfect curve, or even design it in a CAD program; you have to sculpt it, hold it in your hand, play with it, and try a bunch of variations.

Aagaard has been at Microsoft for five years, and before that spent eight years at Motorola. Before he got into tech, he was a toymaker and builder of custom houses. Now, he spends his days making things that are meant to be picked up, played with and then thrown away.

“A lot of what we do lives for half an hour,” says Aagaard. “People look at it and they say, ‘We didn’t know what we want, but now we do.’ We can make stuff really quick and it allows people to move on.”

The Wedge

Behind the nondescript double doors of Room 1960 — Microsoft’s “Edison Lab” — Steven Bathiche, the enthusiastic, polymath head of Microsoft’s Applied Sciences Group, shows us the latest technology he’s obsessing over.

It’s a wedge of clear acrylic.

“Not only is this a new interaction experience, this is also the technology to make it real,” Bathiche says, in the middle of a long, rhapsodic, and sometimes quite technical explanation of his experiments with “seeing displays” — monitors that can see you and respond to you. The key to this work, he says, is The Wedge. (You can hear the capital letters in the way he pronounces it.)

The Wedge is a very carefully engineered piece of acrylic. It is essentially a wide, flat prism. Its angles are computed precisely so that light entering at the narrow end bounces around inside, working its way along towards the thick end, and gradually coming out along the long flat side. In effect, it makes light from the narrow end turn 90 degrees while spreading it out across the face of the plastic. If you place a tiny LCD projector on the narrow end, it can throw a monitor-sized image on the flat surface.

The Wedge works in reverse, too, so a tiny scanner along the narrow end can capture an image of whatever is placed in front of the screen.
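
The trick behind both directions is total internal reflection: light that strikes the long face at a glancing angle is reflected back inside the acrylic, and the Wedge’s taper gradually walks each ray toward the angle at which it can escape through the face. Here is a back-of-the-envelope check of the relevant critical angle, assuming a typical acrylic refractive index of about 1.49 (the actual material and geometry are, of course, far more carefully engineered):

```python
# Back-of-the-envelope only: the critical angle for total internal reflection
# at an acrylic-air boundary, assuming a refractive index of about 1.49 for
# acrylic (an assumption; the Wedge's actual material isn't specified).
# Rays hitting the long face at more than this angle from the surface normal
# stay trapped and keep bouncing toward the thick end; the taper nudges each
# bounce closer to the critical angle until the ray exits through the face.
import math

n_acrylic = 1.49
n_air = 1.0

critical_angle_deg = math.degrees(math.asin(n_air / n_acrylic))
print(f"critical angle ~ {critical_angle_deg:.1f} degrees")  # about 42 degrees
```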

The Wedge was designed by a Cambridge University spinoff called CamFPD, which Microsoft acquired and incorporated into the Applied Sciences Group. Now, Bathiche, the CamFPD team, and the rest of the group’s engineers and scientists are working to create next-generation displays using this piece of plastic.

When Bathiche started at Microsoft, in 1999, he was the only member of the ASG. He had just completed a master’s degree in bioengineering at the University of Washington, after studying electrical engineering at the University of Virginia. Like Scott Saponas, he did internships at Microsoft while completing his graduate work, which segued into a full-time job after he graduated.

Bathiche later worked with the Surface Computing Group’s Andy Wilson in developing surface computing into a marketable product, Microsoft Surface.

“That’s the great thing about Microsoft: There are no walls between groups,” Bathiche says.

He developed a reputation for debugging engineering problems with new as well as established hardware products. Over time, his team grew, adding engineers, coders, and scientists of various descriptions. The ASG now numbers about 20 people.

Because the Wedge works in both directions, it’s possible to create a display that can “see” you at the same time that it’s showing an image. What’s more, the light emitted by a wedge-based display is collimated — the light rays travel in parallel — so the display can direct a different image to each eye, or a different image to the person sitting next to you. When the team combined eye-tracking technology with collimated light aimed at each eye, they created “the world’s first steerable autostereoscopic 3-D display,” as Bathiche calls it.

What that means in plain English: As you look at the display, you see a 3-D image. You might even see your own reflection in a shiny surface within that image. Move your head, and the 3-D effect still works, because the display is tracking your eyes to ensure each one gets the right image. What’s more, the person sitting next to you can see a different 3-D image.
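
Here is a sketch of the geometry involved: given tracked eye positions, the display needs to know the angle toward each eye in order to aim a separate image at it. The positions are made up, and reducing the steering problem to a single horizontal angle is a simplification for illustration:

```python
# Illustrative only: from tracked eye positions, compute the horizontal angle
# at which the display should steer each eye's image. Positions are invented;
# the real system's optics and tracking pipeline aren't described in this
# much detail in the article.
import math

def steering_angle_deg(eye_pos, display_center=(0.0, 0.0, 0.0)):
    """Horizontal angle (degrees) from the display center toward an eye (meters)."""
    dx = eye_pos[0] - display_center[0]
    dz = eye_pos[2] - display_center[2]
    return math.degrees(math.atan2(dx, dz))

# Two viewers side by side, roughly 0.6 m from the screen (made-up numbers).
viewers = {
    "viewer 1": {"left eye": (-0.28, 0.0, 0.6), "right eye": (-0.22, 0.0, 0.6)},
    "viewer 2": {"left eye": (0.22, 0.0, 0.6), "right eye": (0.28, 0.0, 0.6)},
}

# Each eye gets its own rendering of the scene, aimed along its own angle,
# which is what lets two people see two different 3-D images at once.
for name, eyes in viewers.items():
    for eye, pos in eyes.items():
        print(name, eye, f"{steering_angle_deg(pos):.1f} degrees")
```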

I saw a biplane circling a shiny teapot with my reflection. Looking into the same display at the same time, Wired’s photo editor Jim Merithew, sitting to my right, saw a skull.

It’s an impressive demo, but what’s it for? It’s not clear yet.

“Our job is to push the boundaries of how people use their computers,” says Bathiche.

[Photo: Reporter Dylan Tweney (left) joins Cati Boulanger, an Applied Sciences Group team member, in a demo of the Wedge.]

One way he sees the technology being used is to create more and more sophisticated “windows” into other parts of the world: A sort of hyper-realistic webcam. His ultimate goal, he says, is a 3-D display that incorporates viewpoint tracking. That means it would respond to your head’s motion, so you can move left, right, forward and back to see different perspectives on the scene. Bathiche’s lab is using the Wedge and other technologies, such as remote cameras that follow your head’s movement, to experiment with different ways of making this happen.

It’s still a ways off, but Bathiche seems confident he’s got the components he needs.

“These are the pieces we need to create the ultimate display, which is kind of like a holodeck window to anywhere in the world,” Bathiche says.

Surface 2.0

You can see how Microsoft’s hardware mavens work in the evolution of Microsoft Surface.

Surface started out in Andy Wilson’s lab as an experiment with tabletop displays.

By the time it got to market, five years later, it was still a bit impractical. Surface 1.0 was big and expensive ($12,500). One parody of a Microsoft promotional video mocked it as a “big-ass table,” showing how it worked much like a touchscreen smartphone or tablet, except much less conveniently.

But if Surface 1.0 wasn’t exactly a hit, Surface 2.0 might do better. That’s because Microsoft’s hardware group, working in conjunction with Samsung, has completely reworked the technology for the table’s display and sensor.

Surface 1.0 used a projection screen and infrared cameras, making it thick and boxy. Surface 2.0 uses a new kind of LCD with integrated IR sensors, called PixelSense.

In an ordinary LCD, each pixel is made up of a cluster of sub-pixels, one each for emitting red, green and blue light. In the PixelSense display, each pixel includes a fourth color, infrared, as well as a tiny infrared sensor. IR light emitted by each pixel is reflected back by objects near or on the screen, then picked up by the sensors, which can tell how far away things are by their brightness.

“Your fingertip looks like a comet,” says Microsoft group program manager Pete Kyriacou, who is showing us around a demonstration lab full of Surfaces. Where your finger touches the screen, it’s bright white, but the parts of your finger that are farther away fade into darkness. That lets Surface’s software tell which direction you’re pointing.
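
Here is a crude sketch of how such a frame of per-pixel infrared brightness might be turned into touch information; the array, the threshold, and the comet shape are all invented for illustration:

```python
# Illustrative only: turn a frame of per-pixel IR brightness into a touch map.
# Assumes the sensor grid is read back as a 2-D array of brightness values;
# the array contents and the threshold are made up for this sketch.
import numpy as np

ir_frame = np.zeros((1080, 1920), dtype=np.uint8)
ir_frame[500:520, 900:915] = 230   # stand-in for a bright fingertip "comet" head
ir_frame[520:560, 915:960] = 90    # dimmer pixels: parts of the finger farther away

TOUCH_THRESHOLD = 200              # invented value: "bright enough to be touching"
touching = ir_frame > TOUCH_THRESHOLD

# The bright blob is the contact point; the dimmer tail behind it hints at
# which way the finger is pointing, since brightness falls off with distance.
ys, xs = np.nonzero(touching)
print("contact centroid:", ys.mean(), xs.mean())
```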

The new display and sensing technology means that Surface 2.0 is thinner, cheaper, lighter and stronger than the old version. At 40 inches diagonal, it’s not much thicker than an ordinary TV. You can even hang it on a wall.

Its high-definition display captures images at 60 Hz at the same 1,920 by 1,080 resolution it uses to display them. That adds up to a gigabit per second of imaging data, which is pumped into the computer underneath through a custom image processing unit. Otherwise, the guts of the Surface look a lot like a typical computer motherboard, only much larger.
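
That gigabit figure checks out if you assume roughly one byte of sensor data per pixel per frame; the article doesn’t give the per-pixel bit depth, so that part is an assumption:

```python
# Quick arithmetic check of the "gigabit per second" claim, assuming roughly
# one byte (8 bits) of sensor data per pixel per frame.
pixels = 1920 * 1080
frames_per_second = 60
bits_per_pixel = 8

bits_per_second = pixels * frames_per_second * bits_per_pixel
print(f"{bits_per_second / 1e9:.2f} Gbit/s")   # about 1 Gbit/s
```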

It’s also strong. The design specs required that it support up to 180 pounds (heavy dudes at the bar, please don’t dance on the Surface). The front of the Surface is a large, 0.7mm thick sheet of Gorilla Glass, which gives it enough toughness to shrug off the impact of a full beer bottle dropped from 18 inches. It’s waterproof, too, and even the edges are sealed to prevent your beer from leaking into the electronics underneath.

Surface is interesting enough to developers that more than 350 of them have started creating Surface applications, mostly for use in commercial, retail and hospitality settings. High-profile customers include Red Bull, Sheraton Hotels, Fujifilm, Royal Bank of Canada, and Dassault Aviation (an executive jet company).

Surface 2.0 will cost $7,900 and will start shipping this summer.

“I want people to not really know how this is working,” says Kyriacou. “It’s an opportunity to take advantage of the technology and do really magical things.”

And for that, Kyriacou points out, all the smarts at Microsoft are not going to be enough. Once an idea has come out of the research labs, been prototyped and honed and turned into a product, once it’s been revised and its bugs ironed out, then it’s effectively out of Microsoft’s hands — and in the hands of its developers.

“We want our hardware to take a back seat to what software developers can light up,” Kyriacou says.

So get to work, brainiacs!
