[From MIT Media Relations; a 1:42 minute video is here]
MIT Media Lab unveils ‘Surround Vision’
New system would let TV programs spill off the screen and into the living room
For immediate release: April 9, 2010
Contact: Jen Hirsch, MIT News Office
Email: jfhirsch@mit.edu
Phone: 617-253-1682
Written by: Larry Hardesty, MIT News Office
CAMBRIDGE, Mass. — Surround sound technology uses multiple speakers to extend the world of a TV show or movie beyond the edges of the screen: the audience can, in effect, hear what’s happening just off-camera. Researchers at MIT’s Media Lab have developed a system called Surround Vision that uses ordinary handheld devices to do something analogous, but with images. “If you’re watching TV and you hear a helicopter in your surround sound,” says Santiago Alfaro, a graduate student in the lab who’s leading the project, “wouldn’t it be cool to just turn around and be able to see that helicopter as it goes into the screen?”
Surround Vision is intended to work with standard, Internet-connected handheld devices. If a viewer wanted to see what was happening off the left edge of the television screen, she could simply point her cell phone in that direction, and an image would pop up on its screen. The technology could also allow a guest at a Super Bowl party, for instance, to consult several different camera angles on a particular play, without affecting what the other guests see on the TV screen.
To get his prototype up and running, Alfaro had to attach a magnetometer — a compass — to an existing handheld device and write software that combined its readings with data from the device’s other sensors. But Alfaro says that devices now on the market, including the most recent version of the iPhone, have magnetometers built in.
Alfaro and his advisor, Media Lab research scientist Michael Bove, envision that, if the system were commercialized, the video playing on the handheld device would stream over the Internet: TV service providers wouldn’t have to modify their broadcasts or their set-top boxes. “In the Media Lab, and even my group, there’s a combination of far-off-in-the-future stuff and very, very near-term stuff, and this is an example of the latter,” Bove says. “This could be in your home next year if a network decided to do it.”
How they did it: Once he’d rigged up a handheld with the requisite motion sensors, Alfaro shot video footage of the street in front of the Media Lab from three angles simultaneously. A television set replays the footage from the center camera. If a viewer points a motion-sensitive handheld device directly at the TV, the same footage appears on the device’s screen. But if the viewer swings the device either right or left, it switches to one of the other perspectives. The viewer can, for instance, watch a bus approach on the small screen before it appears on the large screen.
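The switching behavior described above can be sketched in a few lines. This is not the Media Lab’s actual code, just a minimal illustration under assumed details: it maps the handheld’s compass heading, relative to a known bearing toward the TV set, onto one of the three camera feeds, with a hypothetical 20-degree threshold deciding when the view switches from the center feed to a side feed.

```python
# Minimal sketch of heading-based feed switching (illustrative only).
LEFT, CENTER, RIGHT = "left camera", "center camera", "right camera"

def pick_feed(device_heading, tv_bearing, half_angle=20.0):
    """Choose a camera feed from a compass heading, in degrees.

    half_angle is an assumed threshold: headings within +/- half_angle
    of the TV's bearing show the center feed; beyond it, a side feed.
    """
    # Signed angular difference, normalized to the range (-180, 180].
    diff = (device_heading - tv_bearing + 180.0) % 360.0 - 180.0
    if diff < -half_angle:
        return LEFT
    if diff > half_angle:
        return RIGHT
    return CENTER
```

For example, with the TV at bearing 10 degrees, pointing the device at 300 degrees selects the left camera, while pointing it straight at the TV keeps the center feed. A real implementation would also smooth the raw compass readings, which is presumably part of what Alfaro’s sensor-fusion software handles.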
Since many DVDs of commercial films now come with bonus footage of scenes shot from different angles, Alfaro was also able to devise demos that allowed the user to switch between the final version of a film and alternate takes.
Next steps: The researchers plan a series of user studies in the spring and summer, which will employ content developed in conjunction with a number of partners. Since sports broadcasts and other live television shows already feature footage taken from multiple camera angles, they’re a natural fit for the system. Many children’s shows already encourage a kind of audience participation that the system could enhance, Bove says, and viewers of the criminal-forensics shows now popular could use Surround Vision to, say, see what the show’s protagonists are looking at through the microscope lens.