[Computer science researchers at Princeton University are developing a system that brings the physical world into the virtual one via ‘invisible’ robots. The story below from the University provides the details; for more information including videos, follow the links within it, particularly the one to the “Reality Promises” website. Coverage in Scienmag – Science Magazine makes the connection with presence clear, without using the term:
“What sets this work apart is its focus on dissolving the traditional barriers posed by robotic presence. Usually, robots in physical spaces are intrusive and palpable, often breaking immersion. By rendering the robot ‘invisible’ through visual erasure techniques and coordinated virtual overlays, users are presented with an experience that feels magical—objects arrive and depart with fluid spontaneity, and the mechanism powering the illusion becomes irrelevant. The technology recedes into the background, letting users interact intuitively as if manipulating a conjured reality.”
–Matthew]

Erasing the seams between the virtual and physical worlds
By Julia Schwartz, Office of Engineering Communications
August 25, 2025
Computer scientists at Princeton are working to bring virtual reality into the physical world, with the potential to enhance a variety of experiences, including remote collaboration, education, entertainment and gaming.
Someday virtual and augmented reality technology will likely be commonplace, said Parastoo Abtahi, assistant professor of computer science. It will be important that users of this technology are able to seamlessly interact with the physical world.
Abtahi and postdoctoral research associate Mohamed Kari are working to make this possible by pairing virtual reality technology with a physical robot that the user can control. Their research will be presented next month at the ACM Symposium on User Interface Software and Technology in Busan, Korea.
Someone using this system can, while wearing a mixed reality headset, select a drink from a list of options and then place it, virtually, on the desk in front of them. Or, more fantastically, they can ask an animated bee to deliver a bag of chips to them on the sofa.
At first, the drink and the chips might be only pixels. After a minute or so, though, they will physically materialize, as if by magic. But it's not magic: it's a robot, rendered invisible to the user, that has delivered the snack. “Visually, it feels instantaneous,” said Abtahi.
By removing “all unnecessary technical details, even the robot itself,” said Kari, the experience appears seamless. The goal is to make the technology disappear and have the interaction between human and computer feel intuitive.
A key technical challenge in this system is communication. The user must be able to communicate their desires simply — selecting a pen across the room, for example, and moving it to the table in front of them. Kari and Abtahi created an interaction technique where a simple hand gesture allows the user to select an object, even from far away.
These gestures are then translated into commands for the robot to execute. The robot is outfitted with its own mixed reality headset, so it knows where to place objects within the virtual environment.
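The paper describes the full interaction technique; as a rough illustration of the idea only, the Python sketch below (with hypothetical names such as SceneObject, RobotCommand and select_by_ray, none of which come from the paper) shows how a distant pinch gesture might be turned into a pick-and-place command that could be handed to a robot controller.

```python
"""Illustrative sketch, not the authors' code: mapping a far-away pinch
gesture to a pick-and-place command. All names are hypothetical."""

from dataclasses import dataclass
import math


@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def sub(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def dot(self, o): return self.x * o.x + self.y * o.y + self.z * o.z
    def norm(self): return math.sqrt(self.dot(self))


@dataclass
class SceneObject:
    object_id: str
    center: Vec3
    radius: float  # coarse bounding sphere used for ray testing


@dataclass
class RobotCommand:
    object_id: str  # what the user selected
    target: Vec3    # where it should be placed, in shared room coordinates


def select_by_ray(origin: Vec3, direction: Vec3, objects: list) -> str | None:
    """Return the id of the nearest object whose bounding sphere the
    pinch ray hits, or None if the ray misses everything."""
    best_id, best_t = None, float("inf")
    d_sq = direction.dot(direction)
    for obj in objects:
        t = obj.center.sub(origin).dot(direction) / d_sq  # projection onto ray
        if t < 0:
            continue  # object is behind the user
        closest = Vec3(origin.x + t * direction.x,
                       origin.y + t * direction.y,
                       origin.z + t * direction.z)
        if closest.sub(obj.center).norm() <= obj.radius and t < best_t:
            best_id, best_t = obj.object_id, t
    return best_id


# Example: pinch at eye height, pointing across the room toward a pen.
objects = [SceneObject("pen-01", Vec3(3.0, 1.0, 0.1), 0.15),
           SceneObject("mug-02", Vec3(2.0, -1.5, 0.0), 0.2)]
picked = select_by_ray(Vec3(0, 0, 1.5), Vec3(1.0, 0.33, -0.45), objects)
if picked:
    # Target pose: the desk in front of the user. This command is what a
    # robot controller would then execute out of the user's sight.
    print(RobotCommand(picked, target=Vec3(0.6, 0.0, 0.75)))
```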
Another technical challenge is adding and erasing physical objects from the user’s field of view. Using a technology called 3D Gaussian splatting, Abtahi and Kari create a realistic digital copy of the physical space. Once everything in the room has been scanned, the system can erase something from view, like a moving robot, or add something, like an animated bee.
To achieve this, every inch of the room and every object within it must be scanned and rendered digitally. Right now, the process is somewhat tedious, said Abtahi. Streamlining it, perhaps by assigning the task to a robot, is a subject for future research in her lab.
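The erase-and-add step described above boils down to compositing: wherever the robot appears in the live passthrough view, show the pre-scanned background instead. The fragment below is a loose sketch of that idea, not the authors' implementation; it assumes the scanned room (for example, a 3D Gaussian splatting reconstruction) can be rendered from the headset's current viewpoint, and the array names and example values are hypothetical.

```python
"""Loose sketch of the 'erase the robot' compositing step: replace the
robot's pixels with the pre-scanned background rendered from the same
viewpoint. Not the authors' code; names and values are hypothetical."""

import numpy as np


def erase_robot(live_frame: np.ndarray,
                robot_mask: np.ndarray,
                background_render: np.ndarray) -> np.ndarray:
    """
    live_frame:        H x W x 3 passthrough camera image, floats in [0, 1]
    robot_mask:        H x W soft mask in [0, 1]; 1 where the robot is visible
    background_render: H x W x 3 render of the scanned room at the same
                       camera pose (e.g. from the Gaussian splat model)
    Returns the frame with the robot's pixels replaced by the background.
    """
    alpha = robot_mask[..., None]  # broadcast the mask over the RGB channels
    return (1.0 - alpha) * live_frame + alpha * background_render


# Tiny synthetic example: a 4x4 frame where the top-left 2x2 block is "robot".
h, w = 4, 4
live = np.full((h, w, 3), 0.2)         # dark live frame
background = np.full((h, w, 3), 0.8)   # bright pre-scanned background
mask = np.zeros((h, w))
mask[:2, :2] = 1.0                     # robot occupies the top-left corner
composited = erase_robot(live, mask, background)
print(composited[0, 0], composited[3, 3])  # background pixel vs. untouched pixel
```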
The paper, “Reality Promises: Virtual-Physical Decoupling Illusions in Mixed Reality via Invisible Mobile Robots,” will be presented at the ACM Symposium on User Interface Software and Technology in Busan, Korea, September 28-October 1, 2025. The work is supported by the Princeton Presidential Postdoctoral Fellowship and the Princeton School of Engineering and Applied Science Innovation Fund.