Popular Science is building the telepresent robotic boss of the future

[From Popular Science, where the story includes an additional image and two videos]

[Image: The digital embodiment of editor in chief Jacob Ward. Big on “tele-” but not so much on “presence.” Dan Nosowitz]

To overcome the physical distance between our New York offices and our editor in chief–who lives and works on the West Coast–Popular Science is exploring the cutting edge of telepresence technologies.

By Clay Dillow Posted 01.28.2013

Earth circa 1993 was a radically different place. In roughly two decades, technology has completely reorganized our lives, our workplaces, and our interpersonal interactions. We have more means and methods of communicating, of interacting, of collaborating and sharing information than we could have envisioned twenty years ago. It’s easy to feel like there is no problem–especially where communication is concerned–that technology can’t solve. At least until you run headlong into one that it can’t.

Popular Science is a magazine about the future and the science and technology that will get us there, but right now PopSci has a technology problem. Our editor in chief, Jacob Ward, lives and works in San Francisco. The rest of us work out of PopSci’s Park Avenue offices in New York City. But building magazines is nothing if not collaborative, and office culture is nothing if not interpersonal. So for Jake, collaborating with us across video chat is somewhat like watching a live concert streaming over the Internet; the sensory information coming across the fiber networks is real and true to real life, but not only is the experience not the same as actually being there, it’s not even remotely close. Which has us wondering: Can technology ever shrink this distance? Could it even go beyond? Can technology create an experience that’s better than actually being present?


We need ubiquity, something that is easily accessible to most everyone Ward needs to interact with. We need collaborative tools that allow users at both ends to share ideas and interact with one another to create a shared experience beyond simple audio and video transmission. Where possible, effective telepresence requires some kind of humanizing element, something analogous to “breathing the same air,” Ward says, that helps convey the idea of physical presence, non-verbal cues, and the like. And we need mobility. “We have meetings in front of a wall covered in printouts or in front of a whiteboard sometimes,” Ward says. “I’d love to be able to go to wherever the meeting is taking place.”

But more than anything, our solution needs to be simple and unobtrusive–something that makes office life easier rather than more complicated. “This needs to be as simple as someone walking into my office in New York, hitting a button, and me popping up wherever I am in the country,” Ward says. Editor in chief on demand–that’s the future we want, now.

“I’ve tried a few different solutions, including Skype, FaceTime, and something called Fusebox, and I have reverted to Google Chat,” Ward says. For the magazine that promises “The Future Now,” that might not sound very futuristic, and it’s not. Google Chat has been around for years, and it’s somewhat restrictive in that it ties the users on both ends to their computer monitors. But it’s also a window into the first pillar of effective telepresence. Ward says he’s gone back to using Google’s video chat and instant messaging system because everyone is already using Gmail and Google Chat. It’s accessible, no downloads or hardware reconfigurations necessary. Google’s products have enough ubiquity and reliability to make them convenient.

This is the jumping-off point from which we’ve begun carving out a list of requirements for the telepresent editor in chief we hope to create in the very near future.


To build the most futuristic possible telepresence solution right here in our office, we decided to look as far toward the future as we reasonably could and work our way back to what’s feasible in the near term. That meant looking to the very vanguard of this discipline, asking ourselves “What is the most PopSci way we can think of to transport our editor in chief from west coast to east on demand? What is the most technologically dazzling means by which we might bring the future of telepresence to Park Avenue?” The answer: holograms. That’s how we ended up on the phone with Michael Bove, Director of MIT’s Object-Based Media Group.

Our vision: All six feet eight inches of our towering editor in chief striding through the office like a semi-opaque phantom overseer, interacting with us in real time while striking fear into the hearts of interns. “There are some elements of holographic telepresence that are really appealing,” Ward says of this notion, most notably the imparting of very human-like, non-verbal cues and behaviors, things like the ability to look someone in the eye while speaking to them or gesturing naturally with one’s hands. This is a future promised to us by science fiction, so how far away is it from being real?

“This is not science fiction, this is plain old engineering,” Bove says. “It’s clear that you could do it. It’s just not something that we have the circuit diagrams all drawn up for. We’re not ordering parts yet.”

The state of holographic telepresence today: Small and slow. But it’s improving all the time. There are two primary challenges in realizing our vision, Bove explains. The first is scale. “Desktop size is at the ragged edges of where we could get in the near future,” Bove says. “You could absolutely do full color holographic telepresence a couple of inches high, but that’s not very interesting for most applications.”

Then there’s the second problem of bandwidth and processing. To capture Ward in his office in San Francisco and recreate him in New York in real time, a huge amount of data would have to be streamed across the Web, and a massive amount of computation would have to happen at the receiving end. Researchers at the University of Arizona are probably the very best in the world at this, Bove says, but while they could possibly write a life-size hologram right now, it would be a static image, refreshable but not at anywhere near the frame rate needed to convince the human eye it is seeing continuous, fluid motion.
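To get a feel for the scale of that data problem, here is a rough back-of-envelope calculation. Every figure in it is an illustrative assumption of ours, not a number from Bove or the Arizona group; real holographic displays encode data quite differently, but the arithmetic shows why "huge" is the right word.

```python
# Rough, illustrative estimate of the raw (uncompressed) data rate for
# streaming a life-size dynamic hologram. All figures are assumptions
# chosen only to make the arithmetic concrete.

width_px = 4000        # assumed horizontal samples for a life-size subject
height_px = 8000       # assumed vertical samples (~2 m tall, high density)
bytes_per_sample = 3   # assumed RGB, one byte per channel
views = 100            # assumed number of angular views encoded per frame
fps = 30               # frame rate needed for fluid motion

bytes_per_frame = width_px * height_px * bytes_per_sample * views
gbits_per_second = bytes_per_frame * fps * 8 / 1e9

print(f"{gbits_per_second:,.0f} Gbit/s raw")  # 2,304 Gbit/s raw
```

Even under these loose assumptions, the raw stream is thousands of times faster than a typical 2013 office connection, before counting the computation needed to turn it back into fringe patterns at the receiving end.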

And that’s before taking other challenges into account: the user end would likely require some kind of highly immersive interface, and the New York side of the equation would need a mobile holographic video display fitted with all kinds of cameras and sensors, so the hologram could move around the office and Ward could see what the hologram version of himself was “seeing.”

“Life-size Jake is hard,” Bove sums up, citing the constraints that physics and the current state of technology place on the system. That technology will improve over time, but our near-term dreams for a disembodied holo-editor are on indefinite hold.

Bove, like most people working in the telepresence field, points us toward the much more practical and far more developed field of telerobotics, a nascent discipline that is still trying to figure out what it should be, what we humans want from it, and how best to deliver on those desires. Several companies, including robotics startups like Willow Garage, Anybots, and VGo, have developed untethered robotic solutions for telepresence problems that can actually move around a workspace under the telepresent user’s command. One of the more clever solutions we’ve seen comes from Double Robotics, whose app-driven robot is basically just a mobile platform for the iPad, which can be docked to the robot and used as the sensor/display package and conferencing interface, like FaceTime on wheels. The telepresent user simply controls the robot from elsewhere with another iPad. Hobbyists have even patched together telepresence robots from off-the-shelf products like iPads and laptops. Though it’s not quite where we want it yet, telerobotics technology is already a reality.

That’s how we ended up on the phone with Colin Angle, co-founder and CEO of iRobot, which makes everything from the bomb-disposal robots the U.S. military uses in Iraq, Afghanistan, and elsewhere to the floor-sweeping Roomba robot. In telerobotics, iRobot is entering a field that is already crowded. But it’s a field with plenty of room for good ideas to develop and thrive.

iRobot recently released its RP-VITA telepresence robot, aimed at porting doctors from their offices into hospitals around the globe, wherever they are needed, aboard a mobile robotic platform known as AVA. While the RP-VITA product is tailored for telemedicine, the AVA platform itself is not application-specific. It’s simply a mobile robotic platform that iRobot developed with versatility in mind. iRobot hopes to develop AVA into a variety of specialized robots suited to specific tasks as market opportunities present themselves, tasks that could someday include robo-editor, Angle says.

At iRobot, Angle and his engineers are pursuing an idea he calls “better than being there,” the notion that technology doesn’t have to simply be a mitigating solution for the problem of physical separation, but that it can actually enhance the experience, that having a robot present rather than an actual person could make the exchange more fruitful for all parties involved. He points to RP-VITA. It carries many of the doctor’s standard tools on board. It can stay on its feet all day while the doctor is seated at a computer terminal. And RP-VITA can automatically drive itself from one patient’s room to the next, during which time the doctor using it has instantaneous touchscreen access to patient records, lab data, medical databases, and other information that generally isn’t readily available when a doctor is making rounds. “You could argue that from one or two perspectives the doctor in the robot is somewhat better off than the doctor in the room who doesn’t have instant access to all the information,” Angle says.

“There are a lot of neat ideas that can make having technology between you and the people you’re interacting with even more effective,” Angle says. “Will we ever get better than actually being there? Maybe not. But can we get to the point where it is a viable alternative and your editor won’t feel like he simply can’t exist on the other coast and still get his job done? We think these better-than-being-there apps will augment the user’s experience and deliver a more effective and sustainable remote, collaborative existence.”

This kind of better-than-being-there notion isn’t rooted simply in robotics, but also in the interfaces they employ. And it’s at the confluence of the two where we find the most convincing reasons to be optimistic that better-than-being-there is a viable objective. The field of robotics is currently in the throes of a technological revolution (much of which has been driven by the other side of iRobot’s business: military robotics). And there’s no denying that user interfaces have drastically changed our relationships with our technology–and with each other–for the better.

Right now interfaces are being driven by tablet technology, Angle says, and that will continue for the foreseeable future. The potential to overlay what Ward sees on his screen in San Francisco with additional layers of information is vast. Imagine seven people in a conference room in New York linked up via our robo-editor with Ward in California. Facial recognition software could identify each person in the room, pull the topics Ward wants to discuss with each of them from his digital agenda, and overlay those notes on each face. Meanwhile, as he goes around the table speaking to each person, that individual’s recent email and chat correspondence could come up on his computer screen. This is where robotics meets augmented reality meets the huge potential of touchscreen interfaces, Angle says, and this is what’s going to add serious value to the telepresence experiences of the very near future.
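At its core, the overlay Angle describes is a lookup: once the camera recognizes a face, the system fetches that person's agenda items and paints them onto the display. Here is a minimal sketch of that mapping step; the recognition itself is stubbed out, and the names and agenda entries are entirely hypothetical, not from iRobot.

```python
# Sketch of the agenda-overlay lookup: map a recognized person's name to
# the items to display beside them. Face detection/matching is assumed
# to happen upstream; names and entries here are hypothetical examples.

agenda = {
    "Clay": ["telepresence feature draft", "photo selects"],
    "Dan":  ["robot test schedule"],
}

def overlay_for(recognized_name: str) -> list[str]:
    """Return the agenda items to overlay next to a recognized person."""
    return agenda.get(recognized_name, [])

print(overlay_for("Clay"))   # ['telepresence feature draft', 'photo selects']
print(overlay_for("Guest"))  # [] -- nothing to show for unrecognized faces
```

The hard part, of course, is the upstream recognition and the augmented-reality rendering; the lookup itself is trivial, which is Angle's point about how much low-hanging value tablets can deliver.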

“There’s so much untapped potential just in touchscreen tablets, and the near-term stuff is all going to be tablet-based,” Angle says. “But we’re just tapping the smallest elements of what can be done today. The video-gaming industry puts a huge amount of energy into the tactile touchscreen industry, and there will be plenty that we borrow shamelessly from it.”

Which is really what we want, right? Our video game avatars move through their virtual worlds, touching objects and altering their physical space, interacting with their environments as seamless extensions of ourselves. What we really want is to create an avatar that can just as seamlessly represent our editor in chief and interact with the environment that is our office–a task made much more difficult by the fact that the office is not a virtual world governed by computer code but a real physical space governed by the laws of physics, human interactions, and the ability to collaborate. But even here robots are gaining ground, often via off-the-shelf hardware.

“Do I need to be able to pick up paper?” Angle says. “Do I need to be able to go get coffee? What else do I need to be able to do? These are good questions. In your case, if the issue were largely paper-oriented, you could actually have a payload where you could hold up a piece of paper to the high-def camera, which could take an image of it and de-warp it, and then you could use the iPad to write on the digital version it creates, to annotate it. Then you could spit it back out in paper form with a printer attached to the robot. That would be a pretty neat thing in a paper-oriented environment.”
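The de-warping step Angle mentions is standard computer-vision math: find the four corners of the page in the camera image, solve for the perspective transform (a homography) that maps them to a flat rectangle, and resample. Here is a pure-NumPy sketch of that math under our own assumptions; the corner coordinates are hypothetical, and a real payload would detect them automatically rather than hard-code them.

```python
import numpy as np

# Solve for the 3x3 homography that maps the four corners of a page,
# seen at an angle by the robot's camera, onto a flat rectangle.
# Corner coordinates are hypothetical stand-ins for detected corners.

# Page corners in the camera image (top-left, top-right, bottom-right, bottom-left)
src = [(420, 180), (1480, 240), (1400, 950), (360, 900)]
# Target: a flat 850 x 1100 page (US letter at ~100 px per inch)
dst = [(0, 0), (850, 0), (850, 1100), (0, 1100)]

# Build the standard 8x8 linear system for the homography (h33 fixed to 1).
A, b = [], []
for (x, y), (u, v) in zip(src, dst):
    A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
    A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
h = np.linalg.solve(np.array(A, float), np.array(b, float))
H = np.append(h, 1.0).reshape(3, 3)

def warp(pt):
    """Apply the homography to one image point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Each camera-image corner lands on the matching flat-page corner.
for s, d in zip(src, dst):
    assert np.allclose(warp(s), d)
print([tuple(round(c) for c in warp(p)) for p in src])
# [(0, 0), (850, 0), (850, 1100), (0, 1100)]
```

Applying the same transform to every pixel (rather than just the corners) yields the straightened page image that could then be annotated on the iPad and printed back out, as Angle describes.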

These kinds of tailored applications are the very things AVA was designed for. Modular robots with advanced interfaces are a reality now, if not commercially then certainly in the lab. And they’re only going to get better, Angle says. We could see new application-optimized versions of AVA like the one described above on the market in the next few years.


The right robot could offer freedom of movement and enhanced collaboration, ticking off two of our three main requirements for porting Jake’s presence into the office. But lacking any kind of real humanizing element beyond realtime videoconferencing, can this kind of telepresence really be better than being there?

“The office is complicated enough,” Ward says. “I’m not sure we can invent a whole new social language around these robots.” The mobility they offer is fantastic, he says. But these elongated, wheeled robots with video monitor or tablet computer “faces” could also undermine his credibility. How can you take a Segway with an iPad perched atop it seriously? How do you project authority, or fill a room with your presence? How do you exude genuine elation or express anger across that medium? How do you fire somebody?

Still, the potential is intriguing. If we could solve the problem of not just being present, but actually being a presence, we’d be a long way toward achieving the future we envision. That means being able to participate in the physical world beyond simply rolling around and talking into a Webcam. We’re not sure exactly what that should look like right now, but we come across new ideas for improving our user experience every day even if we don’t yet have the technology at our disposal to put them into action. “If I could pour someone a drink from 3,000 miles away, that would be pretty good,” Ward says. We’re adding that to the list.

Popular Science has a real-world technology problem, and our offices are currently a real-world test-bed for potential solutions. In the coming months, we’re going to put various telepresence technologies and ideas to the test, and we’ll document our progress here at PopSci.com. In the meantime, we’re currently accepting suggestions as well as technologies that might contribute to the ultimate fix. Send us your best of both–we’ll put them to the test.


ISPR Presence News