ISPR Presence News

Monthly Archives: December 2015

Call: Multimodal Corpora – LREC 2016 Workshop

Call for Papers

LREC 2016 Workshop

Multimodal Corpora:
Computer vision and language processing

24 May 2016, Grand Hotel Bernardin Conference Center, Portorož, Slovenia

Submission deadline: 12 February 2016

The creation of a multimodal corpus involves the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression, body posture, gaze, etc. An increasing number of research areas have moved, or are in the process of moving, from focused single-modality research to full-fledged multimodality research, and multimodal corpora are becoming a core research asset and an opportunity for interdisciplinary exchange of ideas, concepts and data.

We are pleased to announce that in 2016, the 11th Workshop on Multimodal Corpora will once again be co-located with LREC (the Language Resources and Evaluation Conference).


As always, we aim for a wide cross-section of the field, with contributions ranging from collection efforts, coding, validation, and analysis methods to tools and applications of multimodal corpora. Success stories of corpora that have provided insights into both applied and basic research are welcome, as are presentations of design discussions, methods and tools. This year, we would also like to pay special attention to the integration of computer vision and language processing techniques – a combination that is becoming increasingly important as the amount of accessible video and speech data increases. The special theme for this instalment of Multimodal Corpora is how processing techniques for vision and language can be combined to manage, search, and process digital content.

This workshop follows previous events held at LREC 2000, 2002, 2004, 2006, 2008 and 2010, ICMI 2011, LREC 2012, IVA 2013, and LREC 2014. The workshop series is complemented by a special issue of the Journal of Language Resources and Evaluation, which came out in 2008, a state-of-the-art book published by Springer in 2009, and a special issue of the Journal of Multimodal User Interfaces currently in publication. This year, some of the contributors, along with selected contributors from 2009 onward, will be invited to submit extended versions of their work for a new publication gathering recent research in the area.

The LREC’2016 workshop on multimodal corpora will feature a special session on the combination of processing techniques for vision and language.

Other topics to be addressed include, but are not limited to: Read more on Call: Multimodal Corpora – LREC 2016 Workshop…

Posted in Calls | Leave a comment

Experience, create and control New Year fireworks in VR

[Happy New Year and best wishes for a safe, healthy, productive 2016 filled with wonderful (nonmediated and telepresence) experiences! The story below from VR Focus highlights one option for celebrating within VR (see the creator’s 3:16 minute descriptive demo video on YouTube), and this and other virtual fireworks options are featured in a story at UploadVR. –Matthew]

Read more on Experience, create and control New Year fireworks in VR…

Posted in Presence in the News | Leave a comment

Call: Collaboration with Interactive Surfaces and Devices – Theory and Practice (book chapters)

Call for Book Chapters


Collaboration with Interactive Surfaces and Devices – Theory and Practice


Craig Anslow, Department of Computer Science, Middlesex University, UK
Pedro Campos, University of Madeira, Madeira Interactive Technologies Institute, Portugal
Joaquim Jorge, University of Lisbon, INESC-ID Lisbon, Portugal


Springer International Publishing AG


The vast screen real estate provided by large-scale interaction environments presents novel ways to visualize and interact with data-rich models across many different work domains. In parallel with this technological revolution, interactive surfaces, spaces and devices have become widespread in different sizes and shapes, ranging from large-scale walls and touch surfaces to wearable and mobile computing devices. Over the last decade, the HCI and CSCW communities have witnessed an exponential increase in the use of interactive large-scale walls, touch displays, tabletops, mobile devices (e.g. tablets and smartphones), and wearable devices (e.g. watches, glasses). In this context, this book aims to become an important reference for theoretical and applied research on collaboration with interactive surfaces and devices.

TOPICS (NOT LIMITED TO): Read more on Call: Collaboration with Interactive Surfaces and Devices – Theory and Practice (book chapters)…

Posted in Calls | Leave a comment

The Japanese professor who’s spent three decades perfecting a human avatar

[This piece from Motherboard covers some fascinating presence work and history, and includes some key terminological distinctions; see the original story for many more pictures and a 4:52 minute video. –Matthew]

Prof. Susumu Tachi with TELESAR-V

The Japanese Professor Who’s Spent Three Decades Perfecting a Human Avatar

Written by Emiko Jozuka
December 23, 2015

When Susumu Tachi made his first prototype telexistence machine in 1981, he was amazed at what he saw. As he peered through the contraption, he glimpsed another version of himself from behind, looking through the very same prototype device. The effect induced a curious out-of-body sensation. Excited, he called his lab mates, who experienced similar feelings of self-displacement when trying the tech.

“It was different to looking at yourself in a mirror, or looking at your image in a video recording. I saw my image moving as I was moving myself. At that moment, I wondered where I was,” said the virtual reality and robotics professor at the University of Tokyo with a boyish grin. “I really felt that this was telexistence.”

For the uninitiated, “telexistence” is the real-time feeling of being in another location (real or virtual) that is different from one’s current location. Telexistence is distinct from “teleoperation,” in which the operator electronically controls another machine using a remote control; with the latter, there is no sense of being in the remote place.

Tachi said he came up with the concept of “telexistence” in 1980 and corroborated his discovery with a publication in Japanese in 1982, and then in English in 1984. Over in the US, “telepresence,” which is similar to “telexistence,” was coined in 1980 by Marvin Minsky, a cognitive scientist at the Massachusetts Institute of Technology. Tachi has written that while the two are similar, telexistence focuses more on making users feel like they actually inhabit the virtual space they’re working in. Read more on The Japanese professor who’s spent three decades perfecting a human avatar…

Posted in Presence in the News | Leave a comment

Job: Researcher – Future Everyday Interaction with Autonomous IoT at University of Southampton

Job: Researcher – Future Everyday Interaction with Autonomous IoT at University of Southampton

Closing Date: Friday 29 January 2016

We are looking for a talented HCI or Ubicomp researcher to contribute to an exciting new project on Future Everyday Interaction with the Autonomous Internet of Things (A-IoT), funded by the EPSRC. The post-doctoral researcher position is for 36 months, starting on 1 April 2016.

Applications are invited from highly skilled and ambitious researchers in Human-Computer Interaction and Ubiquitous Computing. You should have a PhD or equivalent professional qualifications and a track record of publication at international leading venues (e.g. ACM CHI, Ubicomp, CSCW, TOCHI).

The role will involve the design, prototyping and field evaluation of new interactive A-IoT technology. You will lead on, or take part in, building software and hardware prototypes; designing, running and analyzing participatory design workshops and field trials; and writing up the results for academic publication. The successful candidate will have a “T-shaped profile,” with proficiency in one of these areas and familiarity with the others.

You will join an internationally renowned and multi-disciplinary team spanning the University of Southampton (where this post is based) and the Mixed Reality Lab at the University of Nottingham. The team has won several best paper awards at CHI and AAMAS. Co-investigators include professors Nick Jennings in Southampton and Tom Rodden in Nottingham. Read more on Job: Researcher – Future Everyday Interaction with Autonomous IoT at University of Southampton…

Posted in Jobs | Leave a comment

VR pornstar as therapist: Virtual intimacy and the promise/peril of presence

[The power, promise and peril of presence are all illustrated in this story from Inverse, which features two more images. –Matthew]

Ela Darling using VR

The VR Pornstar of the Future Will Play Therapist, for Better and Worse

Virtual reality intimacy is going to be a growth industry, but is emotionally empowering sex workers in transactional relationships really a good idea?

Yasmin Tayag
November 9, 2015

According to Ela Darling, porn’s potential as a tool for emotional growth and counseling is often overshadowed by a popularly imagined seediness or the immediacy of the thing. Darling, an adult performer who co-founded a virtual reality porn site, thinks the intimacy at the core of pornography, the germ of fantasy, is about to come into better focus. With the advent of VR porn and its emphasis on live, tailored experiences, Darling is predicting — and betting — that adult entertainment is about to become both more human and more humane. This is, she says, what the massive viewing audience has always wanted and never really been offered. Read more on VR pornstar as therapist: Virtual intimacy and the promise/peril of presence…

Posted in Presence in the News | Leave a comment

Call: Challenges, best practices to study HRI in natural interaction settings – HRI 2016 Workshop

Call for Position Statements

HRI 2016: full-day workshop on

The challenge (not) to go wild!
Challenges and best practices to study HRI in natural interaction settings

07 March 2016, Christchurch, New Zealand

Submission deadline: 15 January 2016

Robots are increasingly leaving our labs and entering natural interaction settings, such as homes, care facilities, factories, kindergartens, museums, and many more. In the HRI research community, we are aware of the challenges involved in studying autonomously behaving agents “in the wild”. In order “not to go wild,” we want to discuss best practices as well as pitfalls in studying robots in natural interaction settings. Specifically, we want to focus on challenges in the development of systems for everyday usage in people’s homes and on challenges in evaluating these systems in the wild, especially with regard to long-term interaction. We want to discuss and explore common and new methodological approaches to studying HRI outside the lab, and collect recommendations for fellow researchers. In tandem, we want to reflect on our visions for the “new era of socially capable robots” and on how we as robot developers and researchers imagine them integrating into the fabric of our everyday lives. Read more on Call: Challenges, best practices to study HRI in natural interaction settings – HRI 2016 Workshop…

Posted in Calls | Leave a comment

The Climb: Head-spinning, experiential gameplay provides insights about presence

[This story about a climbing simulation provides more insights regarding how to best evoke (spatial) presence, including the roles of narrative, photorealism and natural mapping; it’s from The Guardian, where it includes another image. –Matthew]

The Climb sim

[Image: The Climb uses the Oculus Rift’s motion-controller technology to let the player control the hands of the protagonist. Photograph: Crytek]

The Climb – the most head-spinning virtual reality experience yet

Crytek’s new project for the Oculus Rift shows us exactly where VR gaming is going – towards heady and experiential gameplay

Keith Stuart
Thursday 24 December 2015

Above you, the craggy face of the cliff seems to stretch up endlessly toward the sky, offering perilously few footholds. In the far distance there’s a small village by a beach, bathed in orange sunshine – an exotic idyll. But below you there is … nothing. Nothing but a long deadly drop into the crashing sea far below. Your only option is to keep climbing.

Crytek has always been interested in pushing graphics technology. In the mid-2000s, the Frankfurt-based developer and publisher achieved wide acclaim for its visually spectacular first-person shooters Far Cry and Crysis; although several years old, both are still widely used as a benchmark for near photo-realism in games, especially in terms of environmental detail. With its steamy tropical rain forests, Far Cry presented a lush counterpoint to the genre’s obsession with steel grey interiors.

But the company’s latest project is perhaps its most ambitious attempt to bring immersive naturalism to game worlds. The Climb is a virtual reality climbing simulator, which gives the player the chance to attempt a series of tricky ascents on rock faces based around the world. “We started out by working on the mechanics of virtual reality,” says executive producer Elijah Freeman, who started as an artist at Crytek 15 years ago. “When we were prototyping, climbing just stood out for us – it was almost instantaneously fun.” Read more on The Climb: Head-spinning, experiential gameplay provides insights about presence…

Posted in Presence in the News | Leave a comment

Presence Picture #11: Stephen Colbert warms hands with virtual fire

Stephen Colbert warming hands with virtual fire

During the opening monologue of The Late Show with Stephen Colbert on Friday, December 18, the host ‘warmed’ his hands using the ‘heat’ from an on-set hearth and fire.

Here’s wishing all members of the ISPR/Presence community a joyous holiday season…

Read more on Presence Picture #11: Stephen Colbert warms hands with virtual fire…

Posted in Presence Pictures | Leave a comment

Presence tech helps save couple’s ‘inoperable’ infant

[An inexpensive and increasingly common presence technology helps save a life; the story is from the Miami Herald, where it features a 4:02 video and several more images. –Matthew]

Doctors with Google Cardboard

[Image: Dr. John F. Rhodes, left, looks toward Dr. Redmond P. Burke, center, as he holds the Google Cardboard virtual reality device and explains its use in the cutting-edge medical procedure with Dr. Juan Carlos Muniz, right. On Tuesday, December 22, 2015, doctors at Nicklaus Children’s Hospital and the Lexcen family held a press conference on life-saving heart surgery performed on Teegan Lexcen at Nicklaus using a new procedure, virtual reality imaging, pioneered at the Miami hospital. CARL JUSTE Miami Herald Staff]

A virtual miracle in Miami for Minnesota couple’s ‘inoperable’ infant

December 21, 2015
By Howard Cohen

Teegan Lexcen, deemed inoperable after her birth in Minnesota, sleeps in a small crib festooned with Christmas blankets in an ICU room at Nicklaus Children’s Hospital. Santa’s toy factory couldn’t be a happier place.

The 4-month-old is ready for her first Christmas with her family in Miami after a seven-hour open-heart operation at Nicklaus. She’s on the mend, doctors say, because of a small cardboard box and a smartphone.

Virtual reality imaging, using an iPhone tucked inside a Google Cardboard device, which looks like the box a Baby Boomer’s childhood View-Master came in, was used for the first time to plan Teegan’s complex pediatric procedure on Dec. 10. The Google Cardboard turns a smartphone into a low-cost stereoscopic virtual reality viewer that doctors can use to convert a two-dimensional CT scan into a three-dimensional model of a tiny heart. The image is uploaded and used in the operating room to give the clearest possible view from which to make delicate repairs. Read more on Presence tech helps save couple’s ‘inoperable’ infant…

Posted in Presence in the News | Leave a comment