ISPR Presence News

Author Archives: Matthew Lombard

Call: Design for Health

Call for Papers: Design for Health

Design for Health is an international refereed journal covering all aspects of design in the context of health and wellbeing. The Journal is published twice a year and provides a forum for design and health scholars, design professionals, health-care practitioners, educators and managers worldwide.

Design for Health is affiliated with the Design4Health conference, established in 2011.

The Journal aims to publish thought-provoking work based on rigorous research. It invites high quality, original submissions that make a contribution to knowledge and practice in the context of the design of health products, services and interventions that promote dignity and enhance quality of life. It adopts the World Health Organisation’s definition of health as the ‘state of complete physical and mental wellbeing and not only the absence of disease’ (1948).

The Journal publishes work which utilizes design and creative practices as methods and tools within research to engage people to understand problems, and visualize new possibilities and future scenarios. Read more on Call: Design for Health…

Posted in Calls

Ethical, practical challenges of VR-AR-AI-IoT blurring the lines between physical and virtual reality

[Presence scholars should lead the way in making sure everyone involved considers and addresses the challenges created by evolving technologies, as highlighted in this column from Futurism. –Matthew]

[Image: “Reality is An Illusion” by Louis Dyer via Deviant Art]

Will AI Blur the Lines Between Physical and Virtual Reality?

By Jay Iorio, Innovation Director for the IEEE Standards Association
August 15, 2017

The Notion of Reality

As technologies like artificial intelligence (AI), augmented and virtual reality (AR/VR), big data, 5G, and the internet of things (IoT) advance over the next generation, they will reinforce and spur one another. One plausible scenario is a physical world so enhanced by personalized, AI-curated digital content (experienced with what we today call augmented reality) that the very notion of reality is called into question.

Immersion can change how we interact with content in fundamental ways. For example, a fully immersive AR environment of the future, achieved with a wide-field-of-view headset and full of live content integrated with the built environment, would be intended by design to create in the user an illusion that everything being sensed was “real.” The evolution toward this kind of environment raises a host of ethical questions, specifically with attention to the AI that would underlie such an intelligent and compelling illusion.

When watching a movie, the viewer is physically separated from the illusion. The screen is framed, explicitly distinct from the viewer. The frame is a part of traditional art forms; from the book to the painting to the skyscraper, each is explicitly separated from the audience. It is bounded and physically defined.

But with digital eyewear, things change. Digital eyewear moves the distance of digital mediation from the screen (approximately 20 feet) to the human face, which is at zero distance, and almost eliminates the frame. It starts raising inevitable questions about what constitutes “reality” when much of one’s sensory input is superimposed on the physical world by AI. At that stage of the technology’s evolution, one could still simply opt out by removing the eyewear. Although almost indistinguishable from the physical world, that near-future world would still be clinging precariously to the human face.

The next step would be moving the source of the digital illusion into the human body – a distance of less than zero – through contact lenses, implants, and ultimately direct communication. At that point, the frame is long gone. The digital source commandeers the senses, and it becomes very hard to argue that the digital content isn’t as “real” as a building on the corner – which, frankly, could be an illusion itself in such an environment. Enthusiasts will probably argue that our perception is already an electrochemical illusion, and implants merely enhance our natural selves. In any case, opting out would become impractical at best. This is the stage of the technology that will raise practical questions we have never had to address before. Read more on Ethical, practical challenges of VR-AR-AI-IoT blurring the lines between physical and virtual reality…

Posted in Presence in the News

Call: Parallel Worlds: Designing Alternative Realities in Videogames

Parallel Worlds: Designing Alternative Realities in Videogames
Saturday 30 September 2017
Victoria & Albert Museum, London

Building on the sell-out success of last year’s Parallel Worlds Videogame Design conference at the Victoria and Albert Museum in London, we’re excited to announce our follow up event, providing a critical and cultural platform to discuss one of the most important fields in contemporary design.

Read more on Call: Parallel Worlds: Designing Alternative Realities in Videogames…

Posted in Calls

Understanding children’s relationships with social robots

[This post from the MIT Media Lab website (it also appears in Medium and IEEE Spectrum) is a first-person report on a program of research that examines children’s social (medium-as-social-actor presence) responses to robots; I think it’s a model for how to introduce a wider audience to these ideas (e.g., I plan to assign and discuss it in undergraduate courses). The original version includes several more pictures and a video. –Matthew]

[Image: A child listens to DragonBot tell a story during one of our research studies. Credit: Personal Robots Group]

Making new (robot) friends

Understanding children’s relationships with social robots

by Jacqueline M. Kory Westlund

Hi, my name is Mox!

This story begins in 2013, in a preschool in Boston, where I hide, with laptop, headphones, and microphone, in a little kitchenette. Ethernet cables trail across the hall to the classroom, where 17 children eagerly await their turn to talk to a small fluffy robot.

“Hi, my name is Mox! I’m very happy to meet you.”

The pitch of my voice is shifted up and sent over the somewhat laggy network. My words, played by the speakers of Mox the robot and picked up by its microphone, echo back with a two-second delay into my headphones. It’s tricky to speak at the right pace, ignoring my own voice bouncing back, but I get into the swing of it pretty quickly.

We’re running show-and-tell at the preschool on this day. It’s one of our pilot tests before we embark on an upcoming experimental study. The children take turns telling the robot about their favorite animals. The robot (with my voice) replies with an interesting fact about each animal: “Did you know that capybaras are the largest rodents on the planet?” (Yes, one five-year-old’s favorite animal is a capybara.) Later, we share how the robot is made and talk about motors, batteries, and 3D printers. We show them the teleoperation interface for remote-controlling the robot. All the kids try their hand at triggering the robot’s facial expressions.

Then one kid asks if he can teach the robot how to make a paper airplane.

We’d just told them all how the robot was controlled by a human. I ask: Does he want to teach me how to make a paper airplane?

No, the robot, he says.

Somehow, there was a disconnect between what he had just learned about the robot and the robot’s human operator, and the character that he perceived the robot to be.

Relationships with robots?

In the years since that playtest, I’ve watched several hundred children interact with both teleoperated and autonomous robots. The children talk with the robots. They laugh. They give hugs, drawings, and paper airplanes. One child even invited the robot to his preschool’s end-of-year picnic.

Mostly, though, I’ve seen kids treat the robots as social beings. But not quite like how they treat people. And not quite like how they treat pets, plants, or computers.

These interactions were clues: There’s something interesting going on here. Children ascribed physical attributes to robots—they can move, they can see, they can feel tickles—but also mental attributes: thinking, feeling sad, wanting companionship. A robot could break, yes, and it is made by a person, yes, but it can be interested in things. It can like stories; it can be nice. Maybe, as one child suggested, if it were sad, it would feel better if we gave it ice cream.

Although our research robots aren’t commercially available, investigating how children understand robots isn’t merely an academic exercise. Many smart technologies are joining us in our homes: Roomba, Jibo, Alexa, Google Home, Kuri, Zenbo…the list goes on. Robots and AI are here, in our everyday lives.

We ought to ask ourselves, what kinds of relationships do we want to have with them? Because, as we saw with the children in our studies, we will form relationships with them. Read more on Understanding children’s relationships with social robots…

Posted in Presence in the News

Call: Philosophy and the Moving Image (book chapters)

Call for Chapters: Collective Book

Editors: Chris Rawls (Roger Williams University), Diana Neiva (University of Porto) and Steven S. Gouveia (University of Minho)

Preface: Professor Thomas E. Wartenberg (Mount Holyoke College)

Submission deadline: November 13, 2017

Topics and issues of interest include (but are not limited to):

  • Classical and contemporary film theory (formalist, psychoanalytic, feminist, cognitive, structuralist, etc.)
  • Definitions of cinema (essentialist – e.g. medium specificity – and nonessentialist)
  • Philosophical themes in film (ethics, political philosophy, philosophy of mind, philosophy of religion, epistemology, etc.)
  • Genre(s) (the issues on film genres, avant-garde, documentary, pornography, horror, drama, etc.)
  • Specific films (interpretations, influences, presented philosophy, etc.)
  • Sex and gender issues (feminist and queer film, the image of women in movies, etc.)
  • Issues on narrative, spectatorship, authorship, etc.
  • Authors on philosophy and film (Arnheim, Carroll, Wartenberg, Mulvey, Bordwell, Deleuze, etc.)
  • Cinema and other arts (film and music, film and photography, film and architecture, etc.)
  • Filmmakers (Bergman, Tarkovsky, Chaplin, Eisenstein, Carpenter, Kubrick, etc.)
  • Video games and other forms of digital media
  • Other issues and relations between philosophy and film

We encourage people from various backgrounds to submit proposals, as we support multidisciplinary dialogue. We also strongly encourage submissions from any under-represented groups! Read more on Call: Philosophy and the Moving Image (book chapters)…

Posted in Calls

Google’s presence experiment: VR vs. video to train people to make espresso

[This post from Google’s blog reports on interesting results and lessons learned from an experiment in presence. The original includes more images; for more information see coverage in Daily Coffee News. –Matthew]

Daydream Labs: Teaching Skills in VR

Ian MacGillivray, Software Engineer
July 20, 2017

You can read every recipe, but to really learn how to cook, you need time in the kitchen. Wouldn’t it be great if you could slip on a VR headset and have a famous chef walk you through the basics step by step? In the future, you might be able to learn how to cook a delicious five-course meal—all in VR. In fact, virtual reality could help people learn all kinds of skills.

At Daydream Labs, we tried to better understand how interactive learning might work in VR. So we set up an experiment aimed at teaching coffee making. We built a training prototype featuring a 3D model of an espresso machine that reacts like a real one would when you press the buttons, turn the knobs or drop the milk. We also added a detailed tutorial. Then we tasked one group of people with learning how to pull espresso shots by doing it in VR. (At the end, we gave people a detailed report on how they’d done, including an analysis of the quality of their coffee.) For comparison, another group learned by watching YouTube videos. Both groups were able to train for as long as they liked before trying to make a coffee in the real world; people assigned to watch the YouTube tutorial typically did so three times, and people who took the VR training typically went through it twice.

We were excited to find out that people learned faster and better in VR. Both the number of mistakes made and the time to complete an espresso were significantly lower for those trained in VR (although, in fairness, our tasting panel wasn’t terribly impressed with the espressos made by either group!). It’s impossible to tell from one experiment, of course, but these early results are promising. We also learned a lot about how to design future experiments. Here’s a glimpse at some of those insights. Read more on Google’s presence experiment: VR vs. video to train people to make espresso…

Posted in Presence in the News

Call: TVX 2018 – ACM International Conference on Interactive Experiences for TV and Online Video

Call for Papers

ACM TVX 2018: The ACM International Conference on Interactive Experiences for TV and Online Video
Theme: ‘Immersive Media Experiences’
26 – 28 June
Seoul, Republic of Korea

Workshops proposals – 30 Nov. 2017
Full & Short Papers proposals – 2 Feb. 2018

The ACM International Conference on Interactive Experiences for TV and Online Video (ACM TVX) is the leading international conference for presentation and discussion of research into online video and TV interaction and user experience. The conference brings together international researchers and practitioners from a wide range of disciplines, ranging from human-computer interaction, multimedia engineering and design to media studies, VR/AR technologies, media psychology, media art, and sociology.

ACM TVX 2014 was held in Newcastle upon Tyne, UK, TVX 2015 in Brussels, Belgium, TVX 2016 in Chicago, USA, and TVX 2017 in Hilversum, The Netherlands. Next year TVX 2018 comes to Seoul, Republic of Korea, the first time the conference will be held in Asia. All members of TVX look forward to the opportunity to build a strong companion research community with Asian researchers and students.

Next year’s theme is ‘Immersive Media Experiences’, and we particularly welcome contributions that look into specific tools, methods and evaluation of experiences related to creating and enabling high-resolution VR media content consumption. Topics of interest to the conference include, but are not limited to: Read more on Call: TVX 2018 – ACM International Conference on Interactive Experiences for TV and Online Video…

Posted in Calls

“Be there” for the August 21 total solar eclipse via 4K and VR

[This story from 4K describes options for experiencing next week’s total solar eclipse via technology. The CNN press release notes that “While only a fraction of the country will be able to witness the total eclipse in-person, CNN’s immersive livestream will enable viewers nationwide to ‘go there’ virtually and experience a moment in history, seven times over.” For more information about all aspects of the eclipse see coverage on the NASA website. –Matthew]

Where to Watch The 2017 Total Solar Eclipse In Full 4K And Virtual Reality

by Stephen on August 14, 2017

Although not even the best 4K OLED or QLED TVs on the market will beat experiencing a Total Solar Eclipse live and in-person with your own eyes (while using special-purpose solar filters, such as “eclipse glasses” or hand-held solar viewers), for those who can’t get a true naked-eye view of the upcoming fantastic astronomical event, there’s CNN. In partnership with Volvo, CNN will be broadcasting 2017’s Total Solar Eclipse of North America on August 21, and you’ll be able to watch it on your 4K resolution TV/monitor or in immersive 360-degree virtual reality using a VR headset.

The broadcast will be available all around the world in 4K resolution, as well as in 4K and other available resolutions and formats through CNN’s mobile apps, Samsung Gear VR powered by Oculus via Samsung VR, Oculus Rift via Oculus Video, and through CNN’s Facebook page via Facebook Live 360.

According to CNN, the livestream will be enhanced by real-time graphics, close-up views of the sun, and running commentary from experts in the science community. Cameras will be placed in four locations across the United States: Snake River Valley, Idaho; Beatrice, Nebraska; Blackwell, Missouri; and Charleston, South Carolina. Each camera will shoot 4K video of the total eclipse’s path.

Other sites, like NASA, The Weather Channel and National Geographic, will also be broadcasting the natural event through their main websites and Facebook pages, presumably in 4K as well, at least for NASA, which has a regular habit of shooting astronomical events in ultra HD for public consumption. Read more on “Be there” for the August 21 total solar eclipse via 4K and VR…

Posted in Presence in the News

Call: “Image Evolution: Technological Transformations of Visual Media Culture” for Yearbook of Moving Image Studies

Call for Abstracts/Articles

Yearbook of Moving Image Studies (YoMIS):
Image Evolution. Technological Transformations of Visual Media Culture

Deadline for Abstracts: November 5, 2017
Deadline for Articles: May 27, 2018

The double-blind peer-reviewed Yearbook of Moving Image Studies (YoMIS) is now accepting abstracts from scientists, scholars, artists, filmmakers, game designers and developers for the fourth issue, entitled »Image Evolution. Technological Transformations of Visual Media Culture«. YoMIS will be enriched by disciplines like media and film studies, image science, (film) philosophy, phenomenology, semiotics, design and fine arts, art and media history, game studies, and other research areas related to static, moving and digital images in general.

The history of images can be described as a history of technology and mediality, because material transformations have always had a great impact on the form, structure and content of mediatized and often multimodal representations. It took many years from the origin of images in the caves of our prehistoric ancestors to the interactive, arithmetic and highly immersive images of the digital age. This development has always seemed to be deeply rooted in the potentials of media technologies and the numerous human inventions in the range of traditional craftsmanship, engineering science, computer science, and art and design. This perspective marks the beginning of an autonomous media theory, whether it starts with leading thinkers like Walter Benjamin or Marshall McLuhan. Nowadays, these academic discourses would surely work with more profound and more detailed analytical tools and concepts. But a modern media theory that analyzes, describes and characterizes technological transformations will surely gain new insights as well. The factual embedding of images in historical-technological processes constitutes the complex structure of an autonomous »Image Evolution« that must be highlighted, characterized and analyzed by the interdisciplinary academic discourses related to the functions and structures of visuality, pictoriality and forms of multi-sensoric representation. The term »Evolution« was chosen deliberately to indicate structural laws that underlie historical events. These laws are not teleologically or ontologically driven, but are rather intentional and logical processes of a historical and technological interdependency. In this interdependency, technology evolves out of its inherent structures and is additionally embedded in anthropological conditions and sociocultural dynamics. In this context, we should work with the concept of an »Image Evolution«.
The editors of YoMIS would like to understand images as visual, and further multi-sensoric, artifacts that are historically and technologically embedded within the ‘developments’ and ‘relations’ of materiality, mediality and reception. Besides the integration of these different aspects, the issue also expands the time frame of the research topic: the development of mediality is not only a project for media historiographies in the context of a media archaeology, but is also connected with the logic of recent developments in the context of prototypes, future ideas and innovations.

Topics of submissions should focus on (but are not necessarily limited to):

  • the materiality and technology of images and media;
  • academic approaches to the history and logic of image evolution and media developments;
  • the processes of creating or programming digital images;
  • the material and technological effects on the reception of dynamic representations;
  • the multi-sensoriality of static, moving and digital images, which goes beyond pure visuality;
  • the historical, cultural and transformational impact of prototypes, prototype research and future innovations.

Read more on Call: “Image Evolution: Technological Transformations of Visual Media Culture” for Yearbook of Moving Image Studies…

Posted in Calls

Designing robots to account for intriguing ways humans interact with them

[This story from CNBC highlights the need for robot designers to carefully consider the range of responses, including medium-as-social-actor presence responses, their creations will evoke. For more on this topic see the recent story in Psychology Today’s blog “How do We Read Emotions in Robots? Of social robots, innovation spaces, and creatively trying things out.” –Matthew]

Next-gen robots: The latest victims of workplace abuse

  • Robots must contend not just with internal flaws and bugs but with humans.
  • Recent introductions of robots to everyday scenarios have led people to initiate some intriguing forms of interaction.
  • Knightscope’s security bot, for example, has been harassed by kids, painted in red lipstick and used as a canvas for graffiti artists.

Mike Juang
Published 9 Aug 2017 | Updated 11 Aug 2017

With jobs it’s oftentimes not the work that’s difficult, but the people.

Take STEVE, for instance. Throughout his career his brothers have been knocked over by drunkards, bullied by schoolkids and even sprayed with graffiti.

But STEVE is not a person. He is an autonomous security robot resembling a cross between a rocket ship and R2-D2, and is officially called the K5 Security Technology Enhancement Vehicle — STEVE, for short.

Hiccups, bugs and public failures are an inevitable part of the deployment of any tool in the real world, but robots must also be designed to account for sometimes unpredictable human interactions.

“Social robots, if they’re engaged in a public sense — even in a limited public sense — the design has to include considerations for social interactions,” said David Harris Smith, associate professor at McMaster University’s Department of Communication Studies and Multimedia in Canada.

Smith knows this firsthand. Together with associate professor Frauke Zeller of Ryerson University in Canada — and a cadre of other scientists, artists and engineers — he developed HitchBOT, a robot designed to travel across a country by “hitching” a ride from friendly humans. Completely immobile, HitchBOT was designed with an LED “face,” a hitchhiking thumb and the ability to respond to simple voice questions. Creator Frauke Zeller said she wanted to create the impression that HitchBOT “is a helpless robot and challenge people to become active and engaged.”

The HitchBOT experiment came to an end after the bot was found dismembered and destroyed in a Philadelphia alley. With the dawn of the everyday robot age, destruction is a necessary part of creation.

“In terms of designing these robots, we have to take a step back and have people decide,” said Zeller. The key question is how — or if — we actually want to live with robots in our midst. Read more on Designing robots to account for intriguing ways humans interact with them…

Posted in Uncategorized