ISPR Presence News

Monthly Archives: April 2017

Call: Academic MindTrek Conference 2017

Call for Papers, Posters, Demos & Workshops

Academic MindTrek Conference 2017
September 20-21
Tampere, Finland
Full details:


  • Call for Papers submission deadline: May 21st
  • Acceptance/Rejection notification: July 14th
  • Copyright forms: August 14th
  • Conference registration/Camera-ready papers: August 21st

Academic Mindtrek is…

  • A meeting place where researchers, experts and thinkers can present results from their latest work in the conference thematic areas.
  • Special academic sessions (e.g. demonstrations, workshops and multidisciplinary sessions) held in parallel with the Mindtrek conference.
  • A real chance for media enthusiasts to think outside the box.
  • Keynote speakers announced later.

We are pleased to invite you to the 21st International Academic Mindtrek conference, 20th to 21st September 2017. Academic Mindtrek is a meeting place where researchers, experts and thinkers present results from their latest work regarding the development of novel technology, media and digital culture for the society of tomorrow.

Academic Mindtrek is part of the renowned Mindtrek business conference. Mindtrek brings together people not only from various fields and domains but also from different sectors: from companies, startups, academia and various governmental institutions. This makes Mindtrek the perfect opportunity for advancing research results towards practical utilization by the industry, as well as getting out-of-the-box research ideas based on the interaction with practitioners.

Mindtrek events are accessible to Academic Mindtrek attendees, and vice versa.

The academic conference features the following major themes:

  • Human-computer interaction (HCI)
  • Interaction design and user experience
  • Developer experience
  • Games and gamification
  • Virtual, augmented and mixed reality
  • Collaboration, literacies and multimedia technologies in education
  • Crowdsourcing and citizen participation
  • Open data and data science
  • New forms of journalism and media
  • Theatre, performance and media
  • Enhancing work in socio-technological environments

We are especially enthusiastic about applied research and papers related to practical work.

Submit your paper here


Academic Mindtrek is organized in cooperation with ACM SIGMM and ACM SIGCHI. The conference proceedings, which include full papers, posters, workshop proposals and demonstration proposals, will be published in the ACM Digital Library. All papers should follow the style guidelines of the conference (more information under submission guidelines). In the Finnish classification of publication forums, Academic Mindtrek proceedings are classified as Jufo 1.

There will also be an award for the best paper(s) of the academic conference. Read more on Call: Academic MindTrek Conference 2017…

Posted in Calls

MindMaze’s neural VR interface reads your mind to reflect your facial expression

[The VR add-on described in this story from Seeker should enhance presence – note the last short paragraph, in which the MindMaze creator and CEO says “We’re moving away from VR as a technological experience to being a real human experience…” The original story includes other images and a video. See coverage of Google’s related tech in an ISPR Presence News post from a few months ago. –Matthew]

MindMaze’s Neural VR Interface Reads Your Mind to Reflect Your Facial Expression

MASK, a new brain-computer product for desktop and mobile virtual reality headsets, can predict a smile or a wink milliseconds before you even move.

By Dave Roos
April 13, 2017

If Facebook CEO Mark Zuckerberg is reading his crystal ball correctly, then the next big thing will be social virtual reality. In the very near future, you’ll put on a virtual reality headset and meet up with friends for virtual hangouts, live concerts, and interactive games.

But as anyone who survived the early Second Life scene can attest, virtual avatars can be pretty socially inept. After all, there’s only so much you can say with a permasmile frozen on your face.

This week, a neurotechnology company based in Switzerland called MindMaze unveiled a product that can synchronize a variety of human facial expressions on virtual avatars. Called MASK, the technology reads your brain signals to predict a smile or a wink milliseconds before you even move. The result is a faster-than-real-time reflection of your changing facial expressions that has the potential to add new emotional depth to social and gaming interactions in VR and bring the technology’s use further into the mainstream. Read more on MindMaze’s neural VR interface reads your mind to reflect your facial expression…

Posted in Presence in the News

Call: Virtual Agents for Social Skills Training (VASST) – Journal on Multimodal User Interfaces special issue


Journal on Multimodal User Interfaces
Special Issue: Virtual Agents for Social Skills Training (VASST)

Guest Editors:
Merijn Bruijnes, University of Twente
Jeroen Linssen, University of Twente
Dirk Heylen, University of Twente

Paper submission deadline: 30th October 2017

Interactive technology for training social skills, such as virtual agents, can improve training curricula. For example, police officers can train for interviewing suspects or detecting deception with a virtual agent. Other application areas include (but are not limited to) social workers (training for dealing with broken homes), psychiatrists (training for interviewing people with various difficulties, vulnerabilities, or personalities), training of social skills such as job interviews, and social stress management.

We invite all researchers who investigate the design, implementation, and evaluation of such technology to submit their work to this special issue on Virtual Agents for Social Skills Training (VASST). By this technology we mean virtual agents for social skills training and any supporting technology. The aim of this special issue is to give an overview of recent developments in interactive virtual agent applications aimed at improving social skills. Research on VASST reaches across multiple research domains: intelligent virtual agents, (serious) game mechanics, human factors, (social) signal processing, user-specific feedback mechanisms, automated education, and artificial intelligence.


We welcome (literature) studies describing the state-of-the-art for sensing user behaviour, reasoning about this behaviour, and generation of virtual agent behaviour in training scenarios. Topics related to VASST include, but are not limited to:

  • Recognition and interpretation of (non)verbal social user behaviours;
  • Training and fusion of user’s signs detected in different modalities;
  • User/student profiling, such as level or training style preference;
  • Anonymous processing of user data;
  • Dialogue and turn-taking management;
  • Social-emotional and cognitive models;
  • Automatic improvement of knowledge representations;
  • Coordination of signs to be displayed by the virtual agents in several modalities;
  • Mechanics to support learning, for example:
    • Feedback or after-action review;
    • Personalised scenarios and dialogues;
  • Big data approaches to enrich social interactions;
  • Other topics dealing with innovations for VASST.

Read more on Call: Virtual Agents for Social Skills Training (VASST) – Journal on Multimodal User Interfaces special issue…

Posted in Calls

Lyrebird is a voice mimic for the fake news era

[The evolution of presence-evoking technology will increasingly make it harder to distinguish the ‘real’ from the artificial, with both positive and negative consequences. This story is from TechCrunch, where it includes a video of the (real) Lyre bird in action. –Matthew]

[Image: Source: TechSpot]

Lyrebird is a voice mimic for the fake news era

Posted April 26, 2017 by Natasha Lomas

A Montreal-based AI startup called Lyrebird has taken the wraps off a voice imitation algorithm that the team says can not only mimic the speech of a real person but shift its emotional cadence — and do all this with just a tiny snippet of real world audio.

The public demo, released online yesterday, consists of a series of audio samples of (fake) speech generated using their algorithm and one-minute voice samples of the speakers. They’ve used voice samples from Presidents Trump and Obama, and Hillary Clinton to demo the tech in action — and for maximum FAKE NEWS impact, obviously.

Here’s a sample of the fake Obama.

And here’s a fake Trump.

And here’s a totally fabricated discussion between fake Trump, fake Obama and fake Clinton. Truly we live in the strangest times…

Lyrebird says its intention is to offer an API in the future so that third parties can make use of the audio mimicry technology for their own ends. So if you think fake news online is bad now, wait until there’s a tech that lets anyone generate a ‘recording’ of a person apparently incriminating themselves, trivially easily. Read more on Lyrebird is a voice mimic for the fake news era…

Posted in Presence in the News

Call: 23rd ACM Symposium on Virtual Reality Software and Technology (VRST 2017)

Call for Papers, Posters, and Demos

VRST 2017: 23rd ACM Symposium on Virtual Reality Software and Technology
Gothenburg, Sweden, November 8-10

First submission deadline: June 30, 2017

The 23rd ACM Symposium on Virtual Reality Software and Technology (VRST 2017) is an international forum for the exchange of experience and knowledge among researchers, developers, and industry concerned with virtual and augmented reality (VR/AR) software and technology. VRST provides an opportunity for VR/AR researchers to interact, share new results, show live demonstrations of their work, and discuss emerging directions for the field. The event is sponsored by ACM SIGCHI and ACM SIGGRAPH.

We invite original, high-quality research papers in all areas of virtual reality, augmented reality, mixed reality, as well as 3D interaction. Research papers should describe results that contribute to advancements in the following areas:

  • 3D interaction for VR/AR
  • VR/AR systems and toolkits
  • Immersive projection technologies and other advanced display technologies
  • Presence, cognition, and embodiment in VR/AR/MR
  • Haptics, audio, and other non-visual modalities
  • User studies and evaluation
  • Multi-user and distributed VR, tele-immersion and tele-presence
  • Serious games and edutainment using VR/AR/MR
  • Novel devices (both input and output) for VR, AR, MR, and haptics
  • Applications of VR/AR/MR

Submissions in related areas are welcome too. See the symposium website for more details. Read more on Call: 23rd ACM Symposium on Virtual Reality Software and Technology (VRST 2017)…

Posted in Calls

VR and presence at the gym

[Can presence provide the motivation and distraction for long-term physical fitness? This story from Bloomberg examines some of the issues (and includes two more images). –Matthew]

Virtual Reality Hits the Gym

  • Icaros lets exercisers feel like they’re flying or diving
  • Skeptics say gimmicks won’t trick brain into making body work

by Yuji Nakamura
April 26, 2017

Johannes Scholl is betting virtual reality can keep people excited about working out.

Scholl’s startup, Munich-based Icaros GmbH, has developed a VR exercise machine that delivers a core workout by making it seem like users are flying and deep-ocean diving. About 200 gyms and entertainment centers from London to Tokyo have installed the machines, which cost about $10,000 including shipping and other costs. A cheaper home version for about $2,000 is under development and could be unveiled around the start of next year.

“There’s no comparable thing you can do at a gym,” says Scholl, who co-founded Icaros in 2015 with fellow industrial designer Michael Schmidt.

The fitness industry has been trying for decades to make exercise less boring — from TVs embedded in treadmills to apps nudging users to stay on schedule — but technology has yet to find a cure for the monotony of working out. Scholl is part of a nascent community that believes the addictive pull of video games combined with the immersive power of VR will do the trick. Read more on VR and presence at the gym…

Posted in Presence in the News

Call: Cyborg Classics: An Interdisciplinary Symposium

Call for Abstracts

Cyborg Classics: An Interdisciplinary Symposium
University of Bristol, UK
Friday July 7, 2017

Submission deadline: May 31, 2017

We are pleased to announce a one-day symposium, sponsored by BIRTHA (The Bristol Institute for Research in the Humanities and Arts) to be held at the University of Bristol, on Friday July 7, 2017.

Read more on Call: Cyborg Classics: An Interdisciplinary Symposium…

Posted in Calls

Using VR to help prevent falls in the elderly and others

[Telepresence via VR is being used to better understand and prevent balance impairments that become more likely as we age; this story is from UNC Healthcare and the UNC Newsroom; the study is available from Nature Scientific Reports. –Matthew]

[Image: Applied Biomechanics Laboratory at UNC (Courtesy of the UNC/NC State Department of Biomedical Engineering)]

Can virtual reality help us prevent falls in the elderly and others?

For the elderly and people with neurodegenerative conditions, balance is not taken for granted. UNC and NC State biomedical engineers are using a new virtual reality system that might one day be used to reveal balance impairments currently undetectable during conventional testing or normal walking.

April 20, 2017

Media Contact: Mark Derewicz, 919-974-1915,

CHAPEL HILL, NC – Every year, falls lead to hospitalization or death for hundreds of thousands of elderly Americans. Standard clinical techniques generally cannot diagnose balance impairments before they lead to falls. But researchers from the University of North Carolina at Chapel Hill and North Carolina State University have found evidence that virtual reality (VR) could be a big help – not only for detecting balance impairments early, but perhaps also for reversing those impairments and preventing falls.

In a study published in Nature Scientific Reports, a research team led by Jason R. Franz, PhD, assistant professor in the Joint UNC/NC State department of biomedical engineering, used a novel VR system to create the visual illusion of a loss of balance as study participants walked on a treadmill. By perturbing their sense of balance in this way and recording their movements, Franz’s team was able to determine how the participants’ muscles responded. In principle, a similar setup could be used in clinical settings to diagnose balance impairments, or even to train people to improve their balance while walking. Read more on Using VR to help prevent falls in the elderly and others…

Posted in Presence in the News

Call: Automotive HMI: Vehicles in the Transition from Manual to Automated Driving (at People and Computers 2017)

Call for Papers

6th Workshop Automotive HMI:
Vehicles in the Transition from Manual to Automated Driving
colocated with People and Computers (MuC) 2017
September 10-13, 2017 | Regensburg

Submission deadline: June 12, 2017

[via Google Translate:
People and Computers 2017 website in English
Automotive HMI workshop website in English]

Automotive user interfaces, and especially automated vehicle technology, pose plenty of challenges to researchers, vehicle manufacturers, and third-party suppliers in supporting the diverse facets of user needs. For example, challenges arise from the variety of user groups, ranging from inexperienced, thrill-seeking young novice drivers to elderly drivers with their natural limitations. To allow assessing the quality of automotive user interfaces and automated driving technology during development and within virtual test processes, the workshop is dedicated to finding objective and quantifiable quality criteria for describing future driving experiences. The workshop is intended for HCI, AutomotiveUI, and human factors researchers and practitioners, as well as for designers and developers.

In keeping with the main topic of this year’s conference, “Spielend einfach interagieren” (“playfully simple interaction”), the workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) and artificial intelligence (predictive HMIs, adaptive systems, intuitive interaction).


The main aim of this workshop is to discuss methods and models for the quantification of quality criteria for automotive user interfaces in the transition from manual to automated driving (human factors perspective).

Topics of interest include, but are not limited to:

  • Acceptance criteria for (automated) driving systems
  • Ergonomic aspects in highly automated driving
  • Natural user interfaces in the automotive context
  • Futuristic concepts of shared control, vehicle interior, and in-vehicle non-driving-related activities
  • Interface concepts that address the transition from manual to fully automated driving
  • Hedonic and pragmatic qualities of driving experiences
  • Methods for enabling and quantifying trust-in-automation
  • Methods to foster “driving fun” as well as concepts for in-vehicle gaming
  • Requirements for automated driving systems based on personality, age, gender, culture, or other parameters
  • Personalization of vehicle behavior and interfaces
  • Supporting situational awareness through design
  • Validity of driving simulator studies in the broader context of automated driving
  • User studies addressing automated driving within field operational tests
  • Artificial intelligence in UIs (predictive HMIs, adaptive systems)

We welcome CONTRIBUTIONS from both academia and industry in either GERMAN or ENGLISH language!

SUBMISSION GUIDELINES Read more on Call: Automotive HMI: Vehicles in the Transition from Manual to Automated Driving (at People and Computers 2017)…

Posted in Calls

How VR and presence are reinventing Holocaust remembrance

[This story from Haaretz discusses design decisions and the power and ethics of presence experiences across media in the most serious of contexts. The original version includes more images and a 0:51 minute video. –Matthew]

How Virtual Reality Is Reinventing Holocaust Remembrance

In ‘The Last Goodbye’ at the Tribeca Virtual Arcade this month, the viewer wears a virtual-reality headset as a survivor recounts his ordeal at Majdanek. It’s an experience more authentic than ‘Shoah,’ its producer says

Neta Alexander (New York) Apr 24, 2017

NEW YORK − When asked a question, Pinchas Gutter doesn’t simply provide an answer − the 85-year-old Holocaust survivor tells a story.

In an interview Saturday during a lunch in his honor at the Tribeca Film Festival, Gutter recalled how he barely survived five concentration camps and a death march from Germany to Czechoslovakia. In the early ’50s, the Jewish orphan who lost his family at Majdanek decided to volunteer for the Israeli army. He later moved to Jerusalem and found himself working in construction. The project he was helping build was the Yad Vashem Holocaust memorial museum.

While Yad Vashem − with its vast archive, outdoor sculptures and memorial sites such as the Children’s Memorial and Hall of Remembrance − set the standard for remembrance centers around the world, two new initiatives featuring Gutter can teach us something about the future of Holocaust education and preservation.

The first is “New Dimensions in Testimony,” which premiered last year at the international documentary festival in Sheffield, England. It featured a 3-D responsive hologram of Gutter − letting audiences ask questions and receive answers based on his prerecorded memories. The second initiative, which can be seen at the Tribeca Virtual Arcade until April 29, is “The Last Goodbye” − the first-ever immersive recreation of a concentration camp, shot at Majdanek last summer.

Gutter, who carefully leads the viewer of “The Last Goodbye” through Majdanek while recounting his tale of survival and loss, is a remarkable storyteller. In a moment of self-reflection he states with a smile, “I guess that’s why they chose me as their guinea pig.” “They” refers to the team behind this virtual-reality work, which was directed by Gabo Arora and Ari Palitz and produced by Stephen Smith in association with the University of Southern California and the USC Shoah Foundation.

While Gutter and 11 other survivors in “New Dimensions” were transformed into responsive holograms, his participation in “The Last Goodbye” takes memorialization and technology one step further. Upon entering a white exhibition space at Tribeca’s Spring Studios on Varick Street, you’re asked to take off your shoes and put on a VR headset covering your eyes and ears. You then meet Gutter in an unlikely place: a hotel bathroom in which the octogenarian shaves in front of a small mirror. You’re barefoot while Gutter is wearing a white bathrobe. Using a voice-over, Gutter confesses that he’s extremely anxious about going back to Majdanek for what he describes as “my very last visit to the camp.” Read more on How VR and presence are reinventing Holocaust remembrance…

Posted in Presence in the News