ISPR Presence News

Author Archives: Matthew Lombard

Job: Post-doc: Multimodal data analysis of behavioral and physiological signals from HHI, HMI interactions – Aix Marseille University

Two-year Post-doctoral Position

Multimodal data analysis of behavioral and physiological signals from human-human and human-machine interactions
Laboratoire d’Informatique et des Systèmes (LIS) and Laboratoire Parole et Langage (LPL)
Aix-Marseille Université & CNRS

Deadline for application: 30 October

Keywords: conversational speech, multimodal data analysis, neurophysiological data, machine learning

The A*MIDEX project PhysSocial aims to better understand the specificities of social interactions by comparing the relationships between behavior and neurophysiology in human‐human and human‐robot discussion. The goal of the post-doc is to analyze the multimodal signals (speech, gaze direction, physiological, and neurophysiological signals) from conversational activity using signal processing and machine learning methodologies in order to compare human-human and human-robot interactions.
The post-doc is organized around two main tasks:

  • Multimodal data preprocessing: in a first step, the objective is to process the raw data (speech, transcribed speech, eye tracking, physiological and neurophysiological signals) corresponding to human-human and human-robot conversation in order to extract time series corresponding to behavioral features, as well as cognitive events derived from local activity in well-defined brain areas involved in language and social cognition.
  • Machine learning of causal relations: in a second step, the time series will be used in statistical learning to identify causal relations between behavioral and physiological features and cognitive events extracted from neurophysiological recordings with fMRI. From a learning point of view, one challenge in this project is the high dimensionality of the data. We address this issue with a focus on the problems of feature representation and selection.
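The causal-analysis task above can be illustrated with a toy sketch. The snippet below is a deliberate simplification of the statistical learning the project describes (all names and data are hypothetical, not from the project): it compares same-time and lagged correlations between a behavioral time series and a physiological one. A dominant lagged correlation is the kind of time-asymmetric dependence that Granger-style causal analyses formalize.

```python
def mean(xs):
    return sum(xs) / len(xs)

def corr(xs, ys):
    """Pearson correlation of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def lagged_corr(source, target, lag):
    """Correlation between source[t - lag] and target[t]."""
    if lag == 0:
        return corr(source, target)
    return corr(source[:-lag], target[lag:])

# Hypothetical data: a binary "speaking" indicator, and a physiological
# response that copies it one time step later (plus a constant offset).
speech = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
scl = [x + 0.1 for x in [0] + speech[:-1]]  # lag-1 copy of speech

same_time = lagged_corr(speech, scl, 0)
one_lag = lagged_corr(speech, scl, 1)
# The lag-1 correlation dominates the simultaneous one, hinting at a
# directed (time-asymmetric) dependence from behavior to physiology.
```

In practice the project would of course use richer models over high-dimensional features rather than pairwise correlations, but the asymmetry-across-lags intuition carries over.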

The candidate should have a PhD in Computer Science, Applied Mathematics, Signal Processing, or Natural Language Processing (with a solid background in machine learning).

The candidate should have a strong background in machine learning and signal processing with a focus on multimodality. Complementary prior experience in the following topics would be appreciated:

  • Multimodal data processing
  • Data science applied to language data

The post-doc is fully funded for two years as part of the A*MIDEX interdisciplinary project PhysSocial, including personalized training, travel expenses, and conference attendance.

Knowledge of French is not required.

Aix Marseille University, the largest French university, is ideally located on the Mediterranean coast and only 1h30 from the Alps. Read more on Job: Post-doc: Multimodal data analysis of behavioral and physiological signals from HHI, HMI interactions – Aix Marseille University…

Posted in Jobs | Leave a comment

Presence Pictures: Robots at Work and Play

[The Atlantic has published a set of 35 vivid photographs that demonstrate the diverse roles robots are occupying in 2018, and in many cases the medium-as-social-actor presence responses we have to them. Three of the photos are below and see the original feature for the full-size versions of all 35. –Matthew]

Read more on Presence Pictures: Robots at Work and Play…

Posted in Presence in the News, Presence Pictures | Leave a comment

Call: Towards Conscious AI Systems – AAAI Spring Symposium


AAAI Spring Symposium
Stanford, CA
March 25 – 27, 2019

DEADLINE: November 2, 2018

SUBMISSION: (look for this symposium)

The study of consciousness remains a challenge that spans multiple disciplines. Consciousness has a demonstrated, although poorly understood, role in shaping human behavior. The processes underpinning consciousness may be crudely replicated to build better AI systems. Such a ‘top-down’ perspective on AI readily reveals the gaps in current data-driven approaches and highlights the need for ‘better AI’. At the same time, the process of designing AI systems creates an opportunity to better explain biological consciousness and its importance in system behavior.

Measuring the components that may lead to consciousness (e.g., modeling and assessing others’ behaviors; calculating utility functions for not only an individual agent, but also an interacting society of agents) is increasingly important to address concerns about the surprising capabilities of today’s AI systems.

The symposium is an excellent opportunity for researchers considering consciousness as a motivation for ‘better AI’ to gather, share their recent research, discuss the fundamental scientific obstacles, and reflect on how it relates to the broader field of artificial intelligence and robotics.

Research on consciousness and its realization in AI systems motivates research to account for, with scientific rigor: the motivations of AI systems, the role of sociality with and between machines, and how to implement machine ethics.

The meeting will offer a platform to discuss the connection between AI systems and other fields such as psychology, philosophy of mind, ethics, and neuroscience.


Some of the topics that the symposium will cover are: Read more on Call: Towards Conscious AI Systems – AAAI Spring Symposium…

Posted in Calls | Leave a comment

Stanford research: VR (and presence) increases empathy and action on homelessness

[New research provides more evidence of the power of virtual reality and presence to enhance empathy, affect behavior, and improve people’s lives. This story is from Stanford News, where it includes a 1:05 minute trailer for the “Becoming Homeless” VR experience. Read the new paper at PLOS ONE. –Matthew]

[Image: Fernanda Herrera, left, watches as fellow student Hannah Mieczkowski navigates through the VR experience that begins with an eviction notice. Image credit: L.A. Cicero.]

Virtual reality can help make people more compassionate compared to other media, new Stanford study finds

Stanford researchers found that people who underwent a virtual reality experience, called “Becoming Homeless,” were more empathetic toward the homeless and more likely to sign a petition in support of affordable housing than other study participants. 

By Alex Shashkevich
October 17, 2018

A Stanford-developed virtual reality experience, called “Becoming Homeless,” is helping expand research on how this new immersive technology affects people’s level of empathy.

According to new Stanford research, people who saw in virtual reality, also known as VR, what it would be like to lose their jobs and homes developed longer-lasting compassion toward the homeless compared to those who explored other media versions of the VR scenario, like text. These findings are set to publish Oct. 17 in PLOS ONE.

“Experiences are what define us as humans, so it’s not surprising that an intense experience in VR is more impactful than imagining something,” said Jeremy Bailenson, a professor of communication and a co-author of the paper. Read more on Stanford research: VR (and presence) increases empathy and action on homelessness…

Posted in Presence in the News | Leave a comment

Call: 3rd International Conference on Human Computer Interaction Theory and Applications


3rd International Conference on Human Computer Interaction Theory and Applications
February 25 – 27, 2019
Prague, Czech Republic

New Submission Deadline: October 22, 2018

HUCAPP is sponsored by INSTICC – Institute for Systems and Technologies of Information, Control and Communication.

In Cooperation with
ACM Special Interest Group on Computer Human Interaction
European Association for Computer Graphics
French Association for Computer Graphics

With the presence of internationally distinguished keynote speakers:
Daniel McDuff, Microsoft, United States
Diego Gutierrez, Universidad de Zaragoza, Spain
Jiri Matas, Czech Technical University in Prague, Faculty of Electrical Engineering, Czech Republic
Dima Damen, University of Bristol, United Kingdom
Stefano Baldassi, Meta Company, United States


The International Conference on Human Computer Interaction Theory and Applications aims at becoming a major point of contact between researchers, engineers and practitioners in Human Computer Interaction. The conference will be structured along four main tracks, covering different aspects of Human Computer Interaction: Theories, Models and User Evaluation; Interaction Techniques and Devices; Haptic and Multimodal Interaction; and Agents and Human Interaction. We welcome papers describing original work in any of the areas listed below. Papers describing advanced prototypes, systems, tools and techniques as well as general survey papers indicating future directions are also encouraged. Paper acceptance will be based on quality, relevance to the conference themes and originality. The conference program will include both oral and poster presentations. Special sessions, dedicated to case-studies and commercial presentations, as well as technical tutorials, dedicated to technical/scientific topics, are also envisaged. Companies interested in presenting their products/methodologies or researchers interested in lecturing a tutorial are invited to contact the conference secretariat.


Each of these topic areas is expanded below but the sub-topics list is not exhaustive. Papers may address one or more of the listed sub-topics, although authors should not feel limited by them. Unlisted but related sub-topics are also acceptable, provided they fit in one of the following main topic areas:


Read more on Call: 3rd International Conference on Human Computer Interaction Theory and Applications…

Posted in Calls | Leave a comment

Macy’s sees benefits from VR and AR for furniture and beauty product sales

[Digital Commerce 360 reports on how the Macy’s department store chain is using presence-evoking technologies to enhance customer experience, increase sales, reduce returns, and save space while increasing its offerings. For more information, see a press release via Business Wire, a CNBC interview with the CEO of Marxent Labs, an earlier story and video from Marxent, Marxent’s Products page, and a report in PYMNTS on both Macy’s and Walmart’s recent adoption of VR and AR. –Matthew]

Virtual reality increases furniture AOV by 60% at Macy’s

On the heels of a virtual reality furniture pilot, Macy’s adds augmented reality for furniture in its iOS app.

April Berthene
September 19, 2018

Macy’s Inc.’s in-store virtual reality pilot is increasing basket size and decreasing returns, the retail chain announced this week, along with other technology-focused news.

By early November, Macy’s will have in-store virtual reality spaces at 69 U.S. stores. In pilot stores, shoppers who used the virtual reality headsets to view Macy’s furniture had more than a 60% greater average order value than non-virtual reality furniture shoppers, Macy’s says. Plus, shoppers who used virtual reality had less than a 2% return rate on transactions. Macy’s would not reveal its average return rate.

The retailer says this is because “customers more accurately visualize their space and add multiple furnishings with confidence.”

For Macy’s, it can offer a wider product assortment at stores using less space on the sales floor.

While virtual reality lets shoppers view virtual furniture in a virtual world, Macy’s recently launched an augmented reality feature in its iOS app that allows shoppers to view virtual furniture within the context of the real world. Read more on Macy’s sees benefits from VR and AR for furniture and beauty product sales…

Posted in Presence in the News | Leave a comment

Call: Philosophical issues raised by use of technologies in armed conflict – PJCV special issue

Call for Papers

Philosophical Journal of Conflict and Violence (PJCV)
Special issue on philosophical issues raised by the use of existing and emerging military and civilian forms of technologies in armed conflict

Read more on Call: Philosophical issues raised by use of technologies in armed conflict – PJCV special issue…

Posted in Calls | Leave a comment

New DextrES ultra-light gloves let users feel and manipulate virtual objects

[This press release from ETH Zurich describes a promising new technology for evoking realistic haptic sensations that contribute to presence illusions. For more information watch a 1:58 minute video on YouTube and see the project’s web page (which features the full paper and a longer video). –Matthew]

Ultra-light gloves let users “touch” virtual objects

Scientists from ETH Zurich and EPFL have developed an ultra-light glove – weighing less than 8 grams – that enables users to feel and manipulate virtual objects. Their system provides extremely realistic haptic feedback and could run on a battery, allowing for unparalleled freedom of movement.

October 15, 2018

Engineers and software developers around the world are seeking to create technology that lets users touch, grasp and manipulate virtual objects, while feeling like they are actually touching something in the real world. Scientists at ETH Zurich and EPFL have just made a major step toward this goal with their new haptic glove, which is not only lightweight – under 8 grams – but also provides feedback that is extremely realistic. The glove is able to generate up to 40 Newtons of holding force on each finger with just 200 Volts and only a few milliwatts of power. It also has the potential to run on a very small battery. That, together with the glove’s low form factor (only 2 mm thick), translates into an unprecedented level of precision and freedom of movement.

“We wanted to develop a lightweight device that – unlike existing virtual-reality gloves – doesn’t require a bulky exoskeleton, pumps or very thick cables,” says Herbert Shea, head of EPFL’s Soft Transducers Laboratory (LMTS). The scientists’ glove, called DextrES, has been successfully tested on volunteers in Zurich and will be presented at the upcoming ACM Symposium on User Interface Software and Technology (UIST). Read more on New DextrES ultra-light gloves let users feel and manipulate virtual objects…

Posted in Presence in the News | Leave a comment

Call: “Authoring for Interactive Storytelling” ICIDS 2018 Workshop

CALL for ICIDS 2018 WORKSHOP Submissions:
“Authoring for Interactive Storytelling”
December 8, 2018
Dublin, Ireland

3-6 page papers due: October 22

This call for participation seeks paper submissions and attendance at the Authoring for Interactive Storytelling workshop held at the International Conference on Interactive Digital Storytelling (ICIDS) 2018 in Dublin. This workshop seeks to provide a venue for researchers in the area of interactive digital narrative authoring and narrative systems to share early work, new ideas, and identify challenges facing the field, with a view to fostering collaboration and the formation of a coherent research community in this space.

In particular, we are focusing on the overarching question: When and why do we actually need authoring tools?

Relevant work discussed at recent workshops has evoked a number of more specific questions:

  • What is a tool, anyway, in the context of authoring for interactive storytelling? From visual editors to graphs or textual notations including scripting languages and story formalizations, tools can be considered very broadly as any technology intended to assist interactive story creators.
  • What are the main merits of graphical or interactive tool creation? In research projects, budget limitations often create a trade-off between sophisticated engine development vs. usable tools. While often the motivation for the latter is a greater accessibility, creative storytellers recently also criticized formal limitations of expressivity in specialized tools, a discrepancy that clearly needs to be addressed.
  • Are interactive storytelling tools necessarily specific to inherent interactive modalities? Experimental paradigms of story creation for different settings, such as location-based, language-based or using virtual and augmented reality, add new dimensions for consideration beyond just character and conflict, including interaction design for end-users.
  • Can tools be door-openers for non-programmers to AI-based storytelling? While easy-to-learn editors and tools often allowed for creating explicit storylines and simple branching interactivity, there is still very little reported experience with the successful implicit configuration of procedural content for stories.
  • Who are the future interactive storytellers and what are their talents? All in all, due to the long-term nature of authoring projects, publications of fully evaluated principles and tools with different target groups (including insights on pitfalls and failures) are not as common as for example in the domain of E-Learning. All of these questions relate to the basic assumptions underlying much existing research on authoring tools for interactive storytelling.

Paper contributions by participants will be expected to in some way address or contribute to exploring the questions and issues mentioned above. Accepted papers will be published online, as in the previous year’s workshop. A session at the workshop will be dedicated to collective discussion on the issues and positions presented beforehand. The outcomes of this discussion will be recorded and documented as a white paper after the workshop. Read more on Call: “Authoring for Interactive Storytelling” ICIDS 2018 Workshop…

Posted in Calls | Leave a comment

Snapchat’s Originals incorporate AR and presence for “whatever comes after TV”

[Presence-evoking technologies are changing traditional audio-visual media in interesting ways. Snapchat is experimenting with using augmented reality for both watching and responding to original interactive programming via mobile devices, as described in this story in Fast Company; see the original story for a 1:04 minute video and animated gifs. –Matthew]

Snap is making whatever comes after TV

Today the company is announcing new original programming that puts users inside the story. Is it prestige TV, social storytelling, or a whole new medium?

October 10, 2018
By Mark Wilson

I’m standing on a beach, and a group of beautiful twentysomethings gather round a bonfire. There’s Dylan, a passionate dreamer, who strums at an acoustic guitar in his millennial pink hoodie. And Summer, she’s an ambitious life-lover, who dons red Chuck Taylors that match her jacket.

I know their names, and their personalities, because their backgrounds float right over their heads. And in case I hadn’t made it clear, I’m not actually on the beach in Orange County. I’m standing in my basement, viewing the scene through a Snapchat AR “portal,” which immerses me in a 360-degree moment that feels like a cross between a chill beach party and a trailer for some new MTV reality show.

This effect is by design. The experience I’m previewing is part of a new Snapchat show called Endless Summer, produced by Bunim-Murray, the same production company that brought us The Real World. It’s all part of Snap’s continued obsession with leveraging its own interactive, social platform to push the nature of programming forward.

This week, Snap is unveiling its biggest initiative in original programming yet. Dubbed Snap Originals, it includes a dozen new shows produced specifically for Snap, ranging from a horror anthology, to a campus mystery created by a writer on Riverdale, to a docuseries about drag queens who are coming of age.

Snap’s own content hasn’t all been a hit; following a controversial redesign, in particular, its publishers reported losing views. But over the last two years, it’s become a successful platform for the company that Sean Mills, head of Original Content for Snap, is quick to point out mirrors the habitual viewing style of TV audiences rather than the viral one-offs of YouTube. SportsCenter’s show on Snapchat reaches 2.5 million viewers, and NBC News’s audience has doubled on Snap over the last year, from 2.5 to 5 million viewers a day. That’s a far cry from Snapchat’s 188 million daily active users worldwide, but it puts NBC’s Snapchat viewership on par with a hit cable show.

With its new content push, Snap doesn’t want to just duplicate TV shows or Netflix binges, though. For whatever position its stock might be in, Snap wants to do something more ambitious: Push the medium of storytelling forward, leveraging its AR breakthroughs and social platform to do so. Read more on Snapchat’s Originals incorporate AR and presence for “whatever comes after TV”…

Posted in Presence in the News | Leave a comment
  • Find Researchers

    Use the links below to find researchers listed alphabetically by the first letter of their last name.

    A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z