ISPR Presence News

Category Archives: Presence in the News

News stories explicitly or implicitly related to presence from a wide variety of sources

Ethical, practical challenges of VR-AR-AI-IoT blurring the lines between physical and virtual reality

[Presence scholars should lead the way in making sure everyone involved considers and addresses the challenges created by evolving technologies, as highlighted in this column from Futurism. –Matthew]

[Image: “Reality is An Illusion” by Louis Dyer via DeviantArt]

Will AI Blur the Lines Between Physical and Virtual Reality?

By Jay Iorio, Innovation Director for the IEEE Standards Association
August 15, 2017

The Notion of Reality

As technologies like artificial intelligence (AI), augmented and virtual reality (AR/VR), big data, 5G, and the internet of things (IoT) advance over the next generation, they will reinforce and spur one another. One plausible scenario is a physical world so enhanced by personalized, AI-curated digital content (experienced with what we today call augmented reality) that the very notion of reality is called into question.

Immersion can change how we interact with content in fundamental ways. For example, a fully immersive AR environment of the future, achieved with a wide-field-of-view headset and full of live content integrated with the built environment, would be designed to create in the user the illusion that everything being sensed was “real.” The evolution toward this kind of environment raises a host of ethical questions, particularly about the AI that would underlie such an intelligent and compelling illusion.

When watching a movie, the viewer is physically separated from the illusion. The screen is framed, explicitly distinct from the viewer. The frame is a part of traditional art forms; from the book to the painting to the skyscraper, each is explicitly separated from the audience. It is bounded and physically defined.

But with digital eyewear, things change. Digital eyewear moves the distance of digital mediation from the screen (approximately 20 feet) to the human face, which is at zero distance, and almost eliminates the frame. It starts raising inevitable questions about what constitutes “reality” when much of one’s sensory input is superimposed on the physical world by AI. At that stage of the technology’s evolution, one could still simply opt out by removing the eyewear. Although almost indistinguishable from the physical world, that near-future world would still be clinging precariously to the human face.

The next step would be moving the source of the digital illusion into the human body – a distance of less than zero – through contact lenses, implants, and ultimately direct communication. At that point, the frame is long gone. The digital source commandeers the senses, and it becomes very hard to argue that the digital content isn’t as “real” as a building on the corner – which, frankly, could be an illusion itself in such an environment. Enthusiasts will probably argue that our perception is already an electrochemical illusion, and implants merely enhance our natural selves. In any case, opting out would become impractical at best. This is the stage of the technology that will raise practical questions we have never had to address before. Read more on Ethical, practical challenges of VR-AR-AI-IoT blurring the lines between physical and virtual reality…


Understanding children’s relationships with social robots

[This post from the MIT Media Lab website (it also appears on Medium and in IEEE Spectrum) is a first-person report on a program of research that examines children’s social (medium-as-social-actor presence) responses to robots; I think it’s a model for how to introduce a wider audience to these ideas (e.g., I plan to assign and discuss it in undergraduate courses). The original version includes several more pictures and a video. –Matthew]

[Image: A child listens to DragonBot tell a story during one of our research studies. Credit: Personal Robots Group]

Making new (robot) friends

Understanding children’s relationships with social robots

by Jacqueline M. Kory Westlund

Hi, my name is Mox!

This story begins in 2013, in a preschool in Boston, where I hide, with laptop, headphones, and microphone, in a little kitchenette. Ethernet cables trail across the hall to the classroom, where 17 children eagerly await their turn to talk to a small fluffy robot.

“Hi, my name is Mox! I’m very happy to meet you.”

The pitch of my voice is shifted up and sent over the somewhat laggy network. My words, played by the speakers of Mox the robot and picked up by its microphone, echo back with a two-second delay into my headphones. It’s tricky to speak at the right pace, ignoring my own voice bouncing back, but I get into the swing of it pretty quickly.
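
For readers curious about the mechanics, the loop described above (operator microphone in, pitch shift up, robot speaker out) can be approximated in a few lines of Python. This is a minimal sketch, not the Media Lab’s actual pipeline; the libraries (sounddevice, librosa) and the sample rate, chunk length, and four-semitone shift are all illustrative assumptions.

```python
# Minimal sketch of a pitch-shifted teleoperation voice loop (illustrative
# only; not the Media Lab's pipeline). Assumes the sounddevice and librosa
# packages; the rates and the 4-semitone shift are arbitrary choices.
import numpy as np
import sounddevice as sd
import librosa

RATE = 22050          # audio sample rate, Hz
CHUNK_SECONDS = 0.5   # record-then-play in half-second chunks (this adds
                      # latency, much like the delay described above)
SEMITONES = 4         # shift the operator's voice up toward a child-friendly pitch

def shift_chunk(chunk: np.ndarray) -> np.ndarray:
    """Pitch-shift one mono chunk up by SEMITONES without changing its length."""
    return librosa.effects.pitch_shift(chunk, sr=RATE, n_steps=SEMITONES)

try:
    while True:
        # Record a chunk from the operator's microphone...
        recorded = sd.rec(int(RATE * CHUNK_SECONDS), samplerate=RATE,
                          channels=1, dtype="float32")
        sd.wait()
        # ...then play the shifted version out of the robot's speaker.
        sd.play(shift_chunk(recorded[:, 0]), samplerate=RATE)
        sd.wait()
except KeyboardInterrupt:
    pass  # operator ends the session
```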

We’re running show-and-tell at the preschool on this day. It’s one of our pilot tests before we embark on an upcoming experimental study. The children take turns telling the robot about their favorite animals. The robot (with my voice) replies with an interesting fact about each animal: “Did you know that capybaras are the largest rodents on the planet?” (Yes, one five-year-old’s favorite animal is a capybara.) Later, we share how the robot is made and talk about motors, batteries, and 3D printers. We show them the teleoperation interface for remote-controlling the robot. All the kids try their hand at triggering the robot’s facial expressions.

Then one kid asks if he can teach the robot how to make a paper airplane.

We’d just told them all how the robot was controlled by a human. I ask: Does he want to teach me how to make a paper airplane?

No, the robot, he says.

Somehow, there was a disconnect between what he had just learned about the robot and its human operator, and the character he perceived the robot to be.

Relationships with robots?

In the years since that playtest, I’ve watched several hundred children interact with both teleoperated and autonomous robots. The children talk with the robots. They laugh. They give hugs, drawings, and paper airplanes. One child even invited the robot to his preschool’s end-of-year picnic.

Mostly, though, I’ve seen kids treat the robots as social beings. But not quite like how they treat people. And not quite like how they treat pets, plants, or computers.

These interactions were clues: There’s something interesting going on here. Children ascribed physical attributes to robots—they can move, they can see, they can feel tickles—but also mental attributes: thinking, feeling sad, wanting companionship. A robot could break, yes, and it is made by a person, yes, but it can be interested in things. It can like stories; it can be nice. Maybe, as one child suggested, if it were sad, it would feel better if we gave it ice cream.

Although our research robots aren’t commercially available, investigating how children understand robots isn’t merely an academic exercise. Many smart technologies are joining us in our homes: Roomba, Jibo, Alexa, Google Home, Kuri, Zenbo…the list goes on. Robots and AI are here, in our everyday lives.

We ought to ask ourselves, what kinds of relationships do we want to have with them? Because, as we saw with the children in our studies, we will form relationships with them. Read more on Understanding children’s relationships with social robots…


Google’s presence experiment: VR vs. video to train people to make espresso

[This post from Google’s blog reports on interesting results and lessons learned from an experiment in presence. The original includes more images; for more information see coverage in Daily Coffee News. –Matthew]

Daydream Labs: Teaching Skills in VR

Ian MacGillivray, Software Engineer
July 20, 2017

You can read every recipe, but to really learn how to cook, you need time in the kitchen. Wouldn’t it be great if you could slip on a VR headset and have a famous chef walk you through the basics step by step? In the future, you might be able to learn how to cook a delicious five-course meal—all in VR. In fact, virtual reality could help people learn all kinds of skills.

At Daydream Labs, we tried to better understand how interactive learning might work in VR. So we set up an experiment aimed at teaching coffee making. We built a training prototype featuring a 3D model of an espresso machine that reacts like a real one would when you press the buttons, turn the knobs or drop the milk. We also added a detailed tutorial. Then, we tasked one group of people with learning how to pull espresso shots by doing it in VR. (At the end, we gave people a detailed report on how they’d done, including an analysis of the quality of their coffee.) For comparison, another group learned by watching YouTube videos. Both groups were able to train for as long as they liked before trying to make a coffee in the real world; people assigned to watch the YouTube tutorial typically did so three times, and people who took the VR training typically went through it twice.

We were excited to find that people learned faster and better in VR. Both the number of mistakes made and the time to complete an espresso were significantly lower for those trained in VR (although, in fairness, our tasting panel wasn’t terribly impressed with the espressos made by either group!). It’s impossible to generalize from a single experiment, of course, but these early results are promising. We also learned a lot about how to design future experiments. Here’s a glimpse at some of those insights. Read more on Google’s presence experiment: VR vs. video to train people to make espresso…
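
As a concrete illustration of how results like these might be checked, the sketch below compares two groups’ mistake counts with Welch’s t-test. The numbers are placeholders, not Google’s data, and the choice of test is an assumption; the post does not say how significance was assessed.

```python
# Illustrative analysis sketch for a two-group training experiment like the
# one described above. The placeholder numbers are NOT Google's data, and
# Welch's t-test is an assumed choice; the post doesn't name its method.
from scipy import stats

def compare_groups(vr, video, label):
    t, p = stats.ttest_ind(vr, video, equal_var=False)  # Welch's t-test
    print(f"{label}: VR mean = {sum(vr)/len(vr):.1f}, "
          f"video mean = {sum(video)/len(video):.1f}, p = {p:.3f}")

# Placeholder measurements for illustration only; replace with real data.
vr_mistakes    = [1, 2, 0, 1, 2, 1]
video_mistakes = [3, 4, 2, 5, 3, 4]
compare_groups(vr_mistakes, video_mistakes, "mistakes per espresso")
```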


“Be there” for the August 21 total solar eclipse via 4K and VR

[This story from 4K describes options for experiencing next week’s total solar eclipse via technology. The CNN press release notes that “While only a fraction of the country will be able to witness the total eclipse in-person, CNN’s immersive livestream will enable viewers nationwide to ‘go there’ virtually and experience a moment in history, seven times over.” For more information about all aspects of the eclipse see coverage on the NASA website. –Matthew]

Where to Watch The 2017 Total Solar Eclipse In Full 4K And Virtual Reality

by Stephen on August 14, 2017

Although not even the best 4K OLED or QLED TVs on the market will beat experiencing a total solar eclipse live and in person with your own eyes (using special-purpose solar filters, such as “eclipse glasses” or hand-held solar viewers), for those who can’t get a true naked-eye view of the upcoming astronomical event, there’s CNN. In partnership with Volvo, CNN will be broadcasting 2017’s total solar eclipse of North America on August 21, and you’ll be able to watch it on your 4K-resolution TV or monitor or in immersive 360-degree virtual reality using a VR headset.

The broadcast will be available all around the world in 4K resolution at CNN.com/eclipse, or in 4K and other available resolutions and formats through CNN’s mobile apps, Samsung Gear VR powered by Oculus via Samsung VR, Oculus Rift via Oculus Video and through CNN’s Facebook page via Facebook Live 360.

According to CNN, the livestream will be enhanced by real-time graphics, close-up views of the sun, and running commentary from experts in the science community. Four cameras will be positioned at four locations across the United States: Snake River Valley, Idaho; Beatrice, Nebraska; Blackwell, Missouri; and Charleston, South Carolina. Each camera will shoot 4K video of the total eclipse’s path.

Other sites, including NASA, The Weather Channel, National Geographic and Astronomy.com, will also be broadcasting the event through their main websites and Facebook pages. NASA, at least, will presumably stream in 4K as well, since it regularly shoots astronomical events in ultra HD for public consumption. Read more on “Be there” for the August 21 total solar eclipse via 4K and VR…


Have a near death experience in VR in “Flatline”

[This short interview from the Vive blog is about the use of virtual reality and presence to explore a universal experience in a visceral, first-person way. The original blog post includes a second image; for a text and audio report on the experience that includes more images see coverage by Southern California Public Radio (SCPR); and a 0:39 minute trailer is available on YouTube. –Matthew]

Take a trip to The Other Side in Flatline

Stephen Reid
August 3, 2017

What happens at the exact moment of death? Religion and science disagree, but many survivors of near-death experiences have similar stories from all around the world. In Flatline, you’ll have your own near-death experience in virtual reality. We chatted to Julian McCrea of Portal Experiences about the creation of this unique app.

Hello Julian! Tell us your part in the production of Flatline.

I’m the Executive Producer on Flatline (find us on Facebook, Twitter and Instagram). The production was directed by Jon Schnitzer and co-produced by Portal Experiences, The Brain Factory and 3DLive AXO. Our website is at www.flatlineexperience.com

How would you describe Flatline?

Flatline is a non-fiction VR series where the audience have a near-death experience, go to the Other Side and come back, irreversibly changed.

In each episode you experience a Flatline, as retold by someone who experienced it first-hand. At the end of the episode you can hear from world-expert psychologists, cardiologists and spiritualists who try to explain what happened to that ‘Flatliner’.

What was the initial inspiration for Flatline?

The initial inspiration came from the director Jon Schnitzer, whose close friend retold a near-death experience that had happened to him 16 years ago.

As we began digging into it, two things stuck out. Firstly, the stories were viscerally very different, but patterns started to emerge. It was like peeling an onion; every time you read a new one, the mystery of what happened on the Other Side grew larger and larger.

Secondly, virtual reality was perfect for telling these stories because it allowed us to retell them in the first person, intimately, in a visceral way that no other medium could. We try to explain it as ‘Dr Strange meets Tree of Life’. You will understand what I mean when you do it! Read more on Have a near death experience in VR in “Flatline”…


Augmented reality graffiti will lead to advertising ambush wars

[This story from New Scientist highlights negative consequences of presence-based advertising; while it focuses mostly on competition between advertisers, the larger concern is unwanted clutter in augmented/mixed reality, as illustrated in the Keiichi Matsuda short film Hyper-Reality. The original story includes a 0:35 minute video and a different image. –Matthew]

Augmented reality graffiti will lead to advertising ambush wars

By Matt Reynolds
4 August 2017

Imagine looking up at the sky and seeing every cloud crammed full of adverts. High above a fast food restaurant, a competitor has scrawled its own cheeky pitch, urging shoppers to eat their burgers instead.

This future is nearer than you might think: last month saw the launch of the first augmented reality app that lets anyone write on the sky. And, according to a report released this week by a global law firm, advertisers are worried.

The new app, called Skrite, lets users write messages or post photos onto the sky. Anyone who has installed the app can then point their phone skyward to see what other users have left there. “The sky’s not the limit, in fact, it is a barrier that must be broken,” the company wrote in a press release that was reprinted at Wired.
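
Skrite has not published how its backend works, but the basic mechanic the article describes (messages anchored to a location, shown only when the phone points skyward) can be sketched roughly as follows. Every name, threshold, and radius here is a hypothetical assumption, not Skrite’s actual design.

```python
# Hypothetical sketch of sky-anchored AR messages, as described above.
# Skrite's real data model is not public; all names and thresholds here
# are invented for illustration.
import math
from dataclasses import dataclass

@dataclass
class SkyMessage:
    lat: float   # where the message was posted
    lon: float
    text: str

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_messages(messages, user_lat, user_lon, pitch_deg, radius_km=5.0):
    """Return nearby messages, but only when the phone is tilted skyward."""
    if pitch_deg < 45:  # assumed "pointing at the sky" threshold
        return []
    return [m for m in messages
            if haversine_km(user_lat, user_lon, m.lat, m.lon) <= radius_km]
```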

Right now, the only companies using the app to advertise to customers are small businesses in Orlando, Florida, says Skrite co-founder Arshia Siddique. But she’s hoping to entice big brands. Augmented reality, she says, is a “third space” – after the physical world and the internet – just waiting to be “filled with content”. Read more on Augmented reality graffiti will lead to advertising ambush wars…


Exhibit explores realism and artifice in photographic portrayals of war

[This disturbingly timely story from Yale News describes a new exhibition about the blurring of the real and artificial in our representations and understanding of war. The original includes five more images, and more information and images are available at the Gallery’s website. –Matthew]

[Image: A detail from An-My Lê’s Film Set (“Free State of Jones”), Battle of Corinth, Bush, Louisiana, 2015. Inkjet print. Courtesy STX Entertainment]

Exhibit features photographic portrayals of war, real and staged

July 21, 2017

“Before the Event/After the Fact: Contemporary Perspectives on War,” an exhibition that brings together a range of contemporary approaches to the visual representation of conflict, is on view at the Yale University Art Gallery through the end of the year.

The works in the exhibit depict not only combat zones but also training sites, forensic reconstructions, and popular entertainment. Encompassing conceptual, documentary, and architectural imaging techniques, the exhibition investigates the visual relationships between staged images and real events, and between factual data and their digital representations. Among the works on view are photographs by Adam Broomberg and Oliver Chanarin, An-My Lê (Yale M.F.A. ’93), and Peter van Agtmael (Yale B.A. ’03); a video installation by the filmmaker Harun Farocki; and a video and digital reconstruction created by the interdisciplinary design studio SITU Research. Read more on Exhibit explores realism and artifice in photographic portrayals of war…


Startup Neurable unveils the world’s first brain-controlled VR game

[In addition to describing the company’s new brain-controlled VR game, in this interview from IEEE Spectrum the CEO of Neurable argues that the brain-computer interface will “be the interaction method that allows for ubiquitous VR and AR”; the original story includes four different pictures and a 0:42 minute video. –Matthew]

[Image: From Diorama, where coverage includes a 2:08 minute demonstration video]

Startup Neurable Unveils the World’s First Brain-Controlled VR Game

By Eliza Strickland
Posted 7 Aug 2017

Imagine putting on a VR headset, seeing the virtual world take shape around you, and then navigating through that world without waving any controllers around—instead steering with your thoughts alone.

That’s the new gaming experience offered by the startup Neurable, which unveiled the world’s first mind-controlled VR game at the SIGGRAPH conference this week.

In the Q&A below, Neurable CEO Ramses Alcaide tells IEEE Spectrum why he believes thought-controlled interfaces will make virtual reality a ubiquitous technology.

Neurable isn’t a gaming company; the Boston-based startup works on the brain-computer interfaces (BCIs) required for mind control. The most common type of BCI uses scalp electrodes to record electrical signals from the brain, then uses software to translate those signals into commands for external devices like computer cursors, robotic limbs, and even air sharks. Neurable designs that crucial BCI software.

The game on display at SIGGRAPH is a collaboration between Neurable and the Madrid-based VR graphics company estudiofuture, and it is meant merely as a demo of Neurable’s tech and its capabilities. It’s played on an HTC Vive headset by swapping out the Vive’s standard elastic strap and putting in Neurable’s upgraded strap, which is studded with seven electrodes.

CEO Alcaide, who has a PhD in neuroscience, tells Spectrum that his tech doesn’t use the EEG brainwave patterns associated with focused concentration or relaxation as control signals. Those signals are used by a few BCI devices already on the market, such as the Muse brain-sensing headband, designed to improve meditation and to play simple real-world games. Instead, Neurable’s software registers event-related potentials: more specific signals that occur when the brain responds to a stimulus, which allows for an intention-based interaction method.
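
For context, the textbook way to recover an event-related potential is to cut the EEG into short windows locked to each stimulus and average them, so the stimulus-locked response stands out from background activity. The sketch below shows that general technique only; the sampling rate, window, and baseline correction are standard-practice assumptions, not Neurable’s implementation.

```python
# General ERP-averaging technique (textbook approach, not Neurable's code).
# The sampling rate and window lengths are common defaults, assumed here.
import numpy as np

FS = 256              # sampling rate in Hz (assumed)
PRE, POST = 0.2, 0.8  # window: 200 ms before to 800 ms after each stimulus

def average_erp(eeg: np.ndarray, stim_onsets: list[int]) -> np.ndarray:
    """eeg: array of shape (n_channels, n_samples); stim_onsets: stimulus
    times in samples. Returns the stimulus-locked average waveform."""
    pre, post = int(PRE * FS), int(POST * FS)
    epochs = []
    for s in stim_onsets:
        if s - pre < 0 or s + post > eeg.shape[1]:
            continue  # skip events too close to the edges of the recording
        epoch = eeg[:, s - pre:s + post]
        # Baseline-correct each epoch using the pre-stimulus interval.
        epochs.append(epoch - epoch[:, :pre].mean(axis=1, keepdims=True))
    return np.mean(epochs, axis=0)  # shape: (n_channels, pre + post)
```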

IEEE Spectrum: Tell me about this game you’re showing off at SIGGRAPH. Read more on Startup Neurable unveils the world’s first brain-controlled VR game…


‘Computational zoom’ lets photographers adjust image compositions after capture

[The tools available to creators allow increasingly sophisticated manipulations of mediated reality; this story from the UC Santa Barbara Current describes an interesting new one; follow the links at the end for more information, examples and a 5:18 minute video, and see coverage in DIY Photography for an explanation of how this new tool compares to others. –Matthew]

Picture Perfect

UCSB and NVIDIA researchers develop a new technique that enables photographers to adjust image compositions after capture

By James Badham
Monday, July 31, 2017

When taking a picture, a photographer must typically commit to a composition that cannot be changed after the shutter is released. For example, when using a wide-angle lens to capture a subject in front of an appealing background, it is difficult to include the entire background and still have the subject be large enough in the frame.

Positioning the subject closer to the camera will make it larger, but unwanted distortion can occur. This distortion is reduced when shooting with a telephoto lens, since the photographer can move back while maintaining the foreground subject at a reasonable size. But this causes most of the background to be excluded. In each case, the photographer has to settle for a suboptimal composition that cannot be modified later.

As described in a technical paper to be presented July 31 at the ACM SIGGRAPH 2017 conference, UC Santa Barbara Ph.D. student Abhishek Badki and his advisor Pradeep Sen, a professor in the Department of Electrical and Computer Engineering, along with NVIDIA researchers Orazio Gallo and Jan Kautz, have developed a new system that addresses this problem. Specifically, it allows photographers to compose an image post-capture by controlling the relative positions and sizes of objects in the image.

Computational Zoom, as the system is called, allows photographers the flexibility to generate novel image compositions — even some that cannot be captured by a physical camera — by controlling the sense of depth in the scene, the relative sizes of objects at different depths and the perspectives from which the objects are viewed.

For example, the system makes it possible to automatically combine wide-angle and telephoto perspectives into a single multi-perspective image, so that the subject is properly sized and the full background is visible. In a standard image, the light rays travel in straight lines into the camera at an angle specified by the focal length of the lens (the field of view angle). However, this new functionality allows photographers to produce physically impossible images in which the light rays “bend,” changing from a telephoto to a wide angle as they go through the scene. Read more on ‘Computational zoom’ lets photographers adjust image compositions after capture…
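
To make the focal-length/field-of-view relationship above concrete: for a sensor of width w and a lens of focal length f, the horizontal field of view is 2·arctan(w / 2f). The snippet below is a small illustration, not code from the paper, and the 36 mm full-frame sensor width is an assumed example.

```python
# Field of view from focal length: fov = 2 * atan(sensor_width / (2 * f)).
# Illustration only (not from the Computational Zoom paper); the 36 mm
# full-frame sensor width is an assumption for the example.
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float = 36.0) -> float:
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(f"24 mm wide angle: {horizontal_fov_deg(24):.1f} degrees")   # ~73.7
print(f"200 mm telephoto: {horizontal_fov_deg(200):.1f} degrees")  # ~10.3
```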


Companion robots are here. Just don’t fall in love with them

[From Roomba to Kuri to sophisticated and adorable future companion robots that take care of us, we can’t help perceiving machines as independent, sentient social actors – i.e., experiencing media-as-social-actor presence. This interesting story from Wired considers some key design choices and ethical challenges. See the original for a 5:12 minute video. –Matthew]

Companion Robots Are Here. Just Don’t Fall In Love With Them

Matt Simon
August 2, 2017

“Hey, Kuri,” I say. “I love you.”

Pause. I brace for rejection, but then the robot lets out a balooop and shimmies back and forth. This, I am to presume, means Kuri loves me too.

Interacting with Kuri, a robot set to hit the market in December, is at once fascinating, delightful, and puzzling. Kuri’s creators call it a “companion robot,” but this is no Furby. Kuri belongs to a new class of machines that actually are intelligent, and actually make useful assistants at home. You can see them out in the wild, helping disabled people with routine daily tasks. Soon they’ll remind the elderly to take their medication. Kuri’s more of an all-purpose companion, a member of your family that also happens to play music and take video.

But the vanguard of increasingly intelligent machines invites questions about how people should interact with them. How do we build relationships with what is essentially a new kind of being? How do roboticists make it clear to people that the bond they form with a machine will never be as robust as a bond with a human? And how do we keep bad actors from exploiting these bonds, using robot companions to, say, squeeze money out of the elderly?

All big questions that society must start talking about, and now. Sure, no robot is in danger of forming a complex bond with its owner—not even Kuri. The technology just isn’t there yet. But the arrival of Kuri and other companion robots means that in the near future, you’ll need to pay very close attention to how robots make you feel. I mean, I just declared my love for one, for Pete’s sake. Read more on Companion robots are here. Just don’t fall in love with them…
