ISPR Presence News


Call: Computational Intelligence and Games – Special session at CEEC 2016

Call For Papers

CEEC 2016
Eighth Computer Science and Electronic Engineering Conference
Special Session on Computational Intelligence and Games
28th – 30th September 2016, University Of Essex

Paper Submission deadline: 15th May 2016

Games are an ideal domain in which to study Computational Intelligence (CI) methods because they provide affordable, competitive, dynamic, reproducible environments suitable for testing new search algorithms, pattern-based evaluation methods, or learning concepts. They are also interesting to observe, fun to play, and very attractive to students. Additionally, there is great potential for CI methods to improve the design and development of both computer games and non-digital games such as board games. This special session aims to gather leading researchers, young researchers, and practitioners who study applications of Computational Intelligence methods to computer games.

Researchers are hereby invited to submit a full paper (5-6 pages) detailing their research, or a short paper (max 4 pages) describing their work in progress. All submitted papers will be subject to peer review by at least two reviewers for technical merit, significance and relevance to the topics. Further information is available from the CEEC 2016 website: http://www.ceec.uk. Submission implies the willingness of at least one author per paper to register, attend the conference and present the paper. Proceedings will be published on IEEE Xplore. Authors of selected articles will be invited to submit an extended version to a Special Issue of the Computers journal (http://www.mdpi.com/journal/computers/special_issues/ceec_2016).

This special session welcomes submissions on (but not limited to) the following topics: Read more on Call: Computational Intelligence and Games – Special session at CEEC 2016…

Posted in Calls

Anne Frank virtual reality film raises ethical issues

[Technology can now reproduce and immerse us in historical and personal moments as never before; the presence experiences will not always be positive or deemed appropriate. This story is from The Hollywood Reporter; for more on the ethics involved see coverage in Bustle. –Matthew]

Anne Frank

[Image: Anne Frank at her home in Amsterdam in 1942, just weeks before she and her family entered the annex. Photograph: Reuters/Corbis. Source: The Guardian]

Anne Frank Virtual Reality Film Planned

The sensitive nature of ‘Anne’s’ plot — and the intensity of the Frank family’s situation — will invariably leave the project open to criticism and debate over its ethical implications.

May 3, 2016 by Seth Abramovitch

A new virtual reality film will take audiences where they never expected to go: directly inside Anne Frank’s attic in 1942.

The project, Anne, was announced on Tuesday in an eyebrow-raising press release touting how the technology will allow viewers to be “immersed in the presence of Anne Frank” and other inhabitants of the secret Amsterdam annex that hid them from the Nazis during World War II. Read more on Anne Frank virtual reality film raises ethical issues…

Posted in Presence in the News

Call: Social Believability in Games Workshop at DiGRA/FDG 2016

Call for Papers for the Social Believability in Games Workshop 2016 @ DiGRA/FDG
Dundee, Scotland
August 1, 2016

https://sites.google.com/site/socialbelievabilityingames

Submission of long and short papers: May 8, 2016

The Social Believability in Games Workshop intends to be a point of interaction for researchers and game developers interested in modeling, discussing, developing systems for, and pursuing humanistic inquiry into social believability in games. This can include behaviour based on social and behavioural science theories and models; frameworks, approaches, methodologies, theories, and interpretations for socially believable game systems; social affordances when interacting with game worlds; and more. We invite participants from a multitude of disciplines in order to create a broad spectrum of approaches to the area.

The SBG 2016 workshop is co-located and organised with DiGRA/FDG 2016, the Joint International Conference of the Digital Games Research Association (DiGRA) and the Foundations of Digital Games (FDG) conference, in Dundee, Scotland. The workshop will take place in the Dalhousie Building on the 1st of August 2016.

In the context of this workshop, characters in games are not limited to opponents but can play any role. The development of multiplayer games has increased the demands placed on non-player characters (NPCs) as believable characters, especially if they are to cooperate with human players. The social aspect of intelligent character behaviour has been comparatively neglected in the area of games. In particular, the interplay between task-related behaviour, the emotions that may be attached to events in the game world, and the social positioning and interaction of game entities is less developed. This workshop aims to address this by putting forward demonstrations of work on the integration of these three aspects of intelligent behaviour, as well as models and theories that can be used for the believable emotional and social aspects.

For this workshop, we invite participants to bring their research questions, the demonstrations and initial prototypes built to address them, and their research results. Additionally, we welcome contributions from research on social simulation, the social impact of believable characters, intelligent virtual agents, game studies and practices, and other related areas. The day will be dedicated to demonstration and discussion, with ample time for collaboration and comparison of theory, method, practice, and results. Read more on Call: Social Believability in Games Workshop at DiGRA/FDG 2016…

Posted in Calls

Next big thing for virtual reality: Eye-tracking lasers in your eyes

[Adding eye-tracking to VR headsets should allow more natural and effortless interactions and enhance both spatial and social presence; this story is from USA Today, where it features more images and a 1:53 video. A 0:21 Eyefluence ‘demo clip’ is available on YouTube. –Matthew]

Eyefluence eye-tracking

Next big thing for virtual reality: Lasers in your eyes

Marco della Cava, USA TODAY, May 2, 2016

SAN FRANCISCO – The next big leap for virtual and augmented reality headsets is likely to be eye-tracking, where headset-mounted laser beams aimed at eyeballs turn your peepers into a mouse.

A number of startups are working on this tech, with an aim to convince manufacturers of VR gear such as the Oculus Rift and HTC Vive to incorporate the feature in a next-generation device. They include SMI, Percept, Eyematic, Fove and Eyefluence, which recently allowed USA TODAY to demo its eye-tracking tech.

“Eye-tracking is almost guaranteed to be in second-generation VR headsets,” says Will Mason, cofounder of virtual reality media company UploadVR. “It’s an incredibly important piece of the VR puzzle.”

At present, making selections in VR or AR environments typically involves moving the head so that your gaze lands on a clickable icon, and then either pressing a handheld remote or, in the case of Microsoft’s HoloLens or the Meta 2, reaching out with your hand to make a selection by interacting with a hologram.

As shown in Eyefluence’s demonstration, all of that is accomplished by simply casting your eyes on a given icon and then activating it with another glance.
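
The article does not detail how Eyefluence’s glance-based activation works (it is proprietary), but a minimal sketch of the more common baseline it improves on, dwell-time gaze selection, gives a feel for the logic involved. The callback names get_fixated_target and on_activate are hypothetical, not part of any real SDK:

    import time

    DWELL_SECONDS = 0.5  # how long a steady fixation counts as a "click"

    def gaze_select_loop(get_fixated_target, on_activate):
        """Poll a gaze tracker; fire on_activate(target) after a steady dwell."""
        current, dwell_start = None, None
        while True:
            target = get_fixated_target()  # returns an icon id, or None
            if target != current:          # gaze moved to a different target
                current, dwell_start = target, time.monotonic()
            elif target is not None and time.monotonic() - dwell_start >= DWELL_SECONDS:
                on_activate(target)        # treat the completed dwell as a click
                current, dwell_start = None, None  # require re-fixation to repeat
            time.sleep(1 / 120)            # poll at roughly 120 Hz

The “activating it with another glance” the article describes suggests Eyefluence replaces the dwell timer with a second deliberate eye movement, avoiding the lag a fixed dwell imposes.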

“The idea here is that anything you do with your finger on a smartphone you can do with your eyes in VR or AR,” says Eyefluence CEO Jim Marggraff, who cofounded the Milpitas, Calif.-based company in 2013 with another entrepreneur, David Stiehr.

“Computers made a big leap when they went from punchcards to a keyboard, and then another from a keyboard to a mouse,” says Marggraff, who invented the kid-focused LeapFrog LeapPad device. “We want to again change the way we interface with data.” Read more on Next big thing for virtual reality: Eye-tracking lasers in your eyes…

Posted in Presence in the News

Call: Multisensory HCI – Special issue of International Journal of Human-Computer Studies

Call for Papers:
International Journal of Human-Computer Studies (IJHCS): Special Issue on
Multisensory Human-Computer Interaction

http://www.journals.elsevier.com/international-journal-of-human-computer-studies/call-for-papers/special-issue-on-multisensory-human-computer-interaction

IMPORTANT DATES

Expression of interest (400 words): July 1st, 2016
Submission deadline: August 1st, 2016
First round of notifications: October 10th, 2016
Final round of notifications: April 23rd, 2017

SPECIAL ISSUE EDITORS

Marianna Obrist (University of Sussex, UK)
Nimesha Ranasinghe (National University of Singapore, Singapore)
Charles Spence (University of Oxford, UK)

SCOPE OF THE SPECIAL ISSUE

Our interactions with technology are primarily limited to the senses of vision and audition, though they increasingly involve touch. Even though a growing number of researchers are working on touch, taste, and smell, these senses are still largely underexplored in the context of Human-Computer Interaction (HCI). Given the growing knowledge concerning multisensory perception and the development of sensory devices, there is a need to further develop our understanding of people’s multisensory experiences in HCI.

In this Special Issue, entitled “Multisensory Human-Computer Interaction”, we call for papers that contribute to the understanding of what tactile, gustatory, and olfactory experiences we can design for, and of how we can meaningfully stimulate such experiences when interacting with technology. Moreover, we also invite papers that aim to determine the contributions of multiple senses, along with their interactions, in order to understand and design HCI applications, digital multisensory experiences, and consumer applications. Contributions that address the limitations that come into play when users need to monitor more than one sense at a time are also welcome.

We encourage submissions dealing with one of the following areas promoting a user-centric and/or engineering perspective, and emphasising the combination of theory and practice in the creation of multisensory experiences with interactive technologies:

  • Designing for touch, taste, and/or smell interaction: How, why, and when?
  • Multisensory perception and crossmodal correspondences: Principles for design?
  • Case studies for new interactive multisensory experiences: Application use cases?

Read more on Call: Multisensory HCI – Special issue of International Journal of Human-Computer Studies…

Posted in Calls

Expedia takes sick children on thrilling real-time adventures — without leaving the hospital

[Here’s a heart-warming use of presence (which will be more impressive if the partners follow through in establishing a permanent installation at St. Jude to help more children); the story from Advertising Age features four videos. –Matthew]

St. Jude Dream Adventures - a virtual 'high five'

[Image: Source: AdWeek]

Expedia Takes Sick Children on Thrilling Real-Time Adventures — Without Leaving the Hospital

Campaign From 180LA for St. Jude Is the Stuff of Dreams

By Ann-Christine Diaz. Published on March 21, 2016.

Children with serious illnesses who are confined to their hospital beds might think that swimming with tropical fish or running with wild horses is the stuff of dreams. But Expedia, along with its agency 180LA, has made such dreams come true for kids battling cancer at St. Jude Children’s Research Hospital in Memphis, Tenn.

The “St. Jude Dream Adventures” campaign consisted of a temporary 360-degree installation at the hospital that “transported” the children to Cordoba and Talampaya Park in Argentina, Monkey Jungle in Florida and the Great Maya Reef in Mexico. In it, they experienced the locations’ natural wonders — from fossils to colorful sea-life to wily monkeys — in real time. Expedia employees whose own lives had been affected by serious illnesses were on location as personal tour guides to show the kids the sites.

In one “adventure,” a girl in love with horses watches them speed across the Argentine plains with Expedia employee Sara L., a brain tumor survivor. Chera, whose family members have been diagnosed with cancer, leads a boy on a monkey adventure, and Expedia employee Reenie, who also has family affected by cancer, goes scuba diving in Mexico to lead the underwater tour for Hannah, who has since passed away. Since it was all occurring in real time, the patients were able to ask questions, “touch” and explore the locations with the help of their guides.

See all the films, including a making-of video, on the campaign’s YouTube channel. Read more on Expedia takes sick children on thrilling real-time adventures — without leaving the hospital…

Posted in Presence in the News

Call: MHMC 2016 – International Workshop on Multimodal Interaction in Industrial Human-Machine Communication

Call for Papers

MHMC 2016 – International Workshop on Multimodal Interaction in Industrial Human-Machine Communication
In connection with the 21st IEEE International Conference on Emerging Technologies and Factory Automation
September 6, 2016, Berlin

http://etfa2016.org/images/track-cfp/MHMC_CfP.pdf

Deadline for submission of workshop papers: May 20

AIMS AND OBJECTIVES

Nowadays, industrial environments are full of sophisticated computer-controlled machines. In addition, recent developments in pervasive and ubiquitous computing provide further support for advanced activity control. Even if the operation of these technologies is very often entrusted to specialized workers who have been purposely trained to use complex equipment, easy and effective interaction is a key factor that can bring many benefits – from faster task completion to error prevention, cognitive load reduction and higher employee satisfaction.

Multimodal interaction means using “non-conventional” input and/or output tools and modalities to communicate with a device. The main purpose of multimodal interfaces is to combine multiple input modes – usually more “natural” than traditional input devices, such as touch, speech, hand gestures, head/body movements and eye gaze – with solutions in which different output modalities are used in a coordinated manner, such as visual displays (e.g. virtual and augmented reality), auditory cues (e.g. conversational agents) and haptic systems (e.g. force feedback controllers). Besides handling input fusion, multimodal interfaces can also handle output fission, in an essentially dynamic process. Sophisticated multimodal interfaces can integrate complementary modalities to get the most out of the strengths of each mode and overcome the weaknesses of individual modes. In addition, they can accommodate different environmental situations as well as different user (sensory/motor) abilities.
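
To make “input fusion” concrete, here is a minimal Python sketch in the spirit of the classic put-that-there pattern: a spoken command such as “start that machine” is paired with the most recent pointing gesture that arrived within a short time window. All names and the window length are illustrative assumptions, not part of any system described in this call:

    import time
    from collections import deque

    FUSION_WINDOW = 1.5  # seconds within which speech and gesture are merged

    class FusionEngine:
        """Late fusion: combine independent speech and gesture event streams."""

        def __init__(self):
            self.recent_gestures = deque(maxlen=16)  # (timestamp, pointed_target)

        def on_gesture(self, target):
            self.recent_gestures.append((time.monotonic(), target))

        def on_speech(self, command):
            """Resolve a deictic command against the most recent pointing gesture."""
            now = time.monotonic()
            for ts, target in reversed(self.recent_gestures):
                if now - ts <= FUSION_WINDOW:
                    return (command, target)  # fused multimodal event
            return (command, None)            # fall back to speech alone

Output fission runs in the other direction: a single system message is split across coordinated visual, auditory and haptic channels according to the current situation and the user’s abilities.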

Although multimodal interaction is becoming more and more common in our everyday lives, industrial applications are still rather few, in spite of their potential advantages. For example, a camera could allow a machine to be controlled through hand gesture commands, or the user might be monitored in order to detect potentially dangerous behaviors. On the output side, an augmented or virtual reality system could be employed to provide an equipment operator with sophisticated visual cues, while auditory or olfactory displays might be used as an additional alerting mechanism in risky environments. Besides being used in real working situations to increase the amount and quality of available information, augmented/virtual reality interaction can also be exploited to implement an effective and safe training plan.

This workshop aims to gather work presenting different forms of multimodal interaction in industrial processes, equipment and settings, with a twofold purpose:

  • Taking stock of the current state of multimodal systems for industrial applications.
  • Being a showcase to demonstrate the potential of multimodal communication to those who have never considered its application in industrial settings.

Both proposals of novel applications and papers describing user studies are welcome. Read more on Call: MHMC 2016 – International Workshop on Multimodal Interaction in Industrial Human-Machine Communication…

Posted in Calls

3D ‘zebra crossings’ stop drivers in their tracks

[Some relatively simple uses of presence can save lives; this story is from The Architect’s Newspaper, where it features other images; coverage in Mashable notes that India’s transport minister tweeted on April 27 that the country will be trying out the 3D paintings, and that the use of optical illusions as speed breakers was pioneered in Philadelphia. For more on the use of road design illusions created to increase safety, see a 2014 story from BBC News. –Matthew]

Side and driver view of 3D zebra crosswalk in India

3D “zebra crossings” stop drivers in their tracks

By Jason Sayer
April 21, 2016

Earlier this year, it was reported that Saumya Pandya Thakkar and Shakuntala Pandya, two women from Ahmedabad in western India, had come up with an imaginative solution for stopping cars and letting pedestrians cross the road without the aid of traffic lights. Their “zebra crossings” – rectangular volumes drawn in perspective – appeared to do the trick.
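
The geometry behind the illusion is simple similar-triangles perspective: a point meant to appear at height h at distance x from a viewer whose eye is at height H must be painted on the road where the line of sight crosses the ground, at distance x·H/(H − h). A small Python sketch with purely illustrative numbers (not measurements from the actual crossings):

    def ground_distance(x, h, eye_height):
        """Road distance at which to paint a point so it appears to float
        at height h and distance x for an eye at the given height."""
        if h >= eye_height:
            raise ValueError("virtual height must stay below the viewer's eye")
        return x * eye_height / (eye_height - h)

    # A stripe edge meant to look 0.1 m tall at 20 m from a driver whose
    # eye is 1.2 m above the road must be painted about 21.8 m ahead:
    print(round(ground_distance(20.0, 0.1, 1.2), 1))  # 21.8

Because the correction grows with distance, the painted shapes only read convincingly as solid volumes from roughly the intended viewing range, which is what lets them work as speed breakers.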

While Thakkar and Pandya may have thought they were pioneering new techniques, this strategy had already been realized in Taizhou and Xingsha in China some eight years prior. Using bright and bold colors, these “3D” roadblocks-cum-crossings span China’s roads to deceive drivers. Here, instead of using the bare road surface as one of the colors, as in India, blue or red is added to amplify the three-dimensional effect. Read more on 3D ‘zebra crossings’ stop drivers in their tracks…

Posted in Presence in the News

Call: International Workshop on Emotion Representations and Modelling for Companion Technologies (ERM4CT 2016)

International Workshop on Emotion Representations and Modelling for Companion Technologies (ERM4CT 2016)
in conjunction with ACM ICMI 2016, Tokyo
www.erm4ct.cogsy.de

Submission deadline: August 28, 2016

The major goal in human-computer interaction (HCI) research and applications is to improve the interaction between humans and computers. Because interaction is often very specific to an individual and generally multimodal in nature, a trend toward multi-modal, user-adaptable HCI systems has emerged over the past years. These systems are designed as companions capable of assisting their users based on the users’ needs, preferences, personality and affective state. Companion systems depend on reliable emotion recognition methods in order to provide natural, user-centred interactions.

In order to study natural, user-centred interactions, to develop user-centred emotion representations and to model adequate affective system behaviour, appropriate multi-modal data comprising more than just audio and video material must be available. Following its ancestor, the ERM4HCI workshop series, the ERM4CT workshop focuses on emotion representations and the signal characteristics used to describe and identify emotions, as well as their influence on the personality and user-state models to be incorporated in companion systems. The ERM4CT 2016 workshop allows for in-depth analysis of the technical prerequisites, modelling aspects and applications linked to the development of affective, multi-modal, user-adapted HCI systems.

Researchers are encouraged to discuss possible interdependencies of characteristics on an intra- and intermodality level. Such interdependencies may occur if two characteristics are influenced by the same physiological change in the observed user and are especially relevant to multi-modal affective systems. Theoretical papers contributing to the understanding of emotions in order to aid in the technical modelling of emotions for companion systems are welcomed. The workshop supports discussions on the necessary prerequisites for consistent emotion representations in multi-modal companion systems.

The ERM4CT 2016 workshop is the second joint workshop aiming to highlight the specific issues associated with the multi-modal emotion representations needed for companion technologies. As a further highlight, this year’s workshop offers a “hands-on” session, for which a dataset comprising 10 different modalities will be made available to participants prior to the workshop. Participants are encouraged to analyse the dataset in terms of emotion recognition, interaction studies, conversational analyses, etc. The dataset provided is a snapshot of a new multi-modal dataset (Tornow et al., 2016). Researchers are invited to address a specific research question using this dataset and submit their results to the workshop. During the workshop, all results will be presented and discussed in a subsequent panel session. Read more on Call: International Workshop on Emotion Representations and Modelling for Companion Technologies (ERM4CT 2016)…

Posted in Calls

Report forecasts more telepresence robots for more uses

[It looks like, in addition to the many other telepresence technology formats from phones to rooms, telepresence robots will become increasingly common; this story is from Inverse, where it includes a very short video of the early Greenman Teleoperated Humanoid system mentioned. –Matthew]

Hospital patient Cookie Topp attends school via VGo telepresence robot

[Image: From VGo: “Despite a diagnosis of lymphoma resulting in a bone-marrow transplant that’s left her bed-bound in the Children’s Hospital, 13-year-old Cookie Topp is going to school, doing her homework and meeting with her friends.”]

Telepresence Robots May Be Awkward Now, but Expect More of Them by 2020

Research group Tractica projects roughly 50 percent annual growth in telepresence robots through 2020.

Lauren J. Young
April 15, 2016

By 2020, instead of Skyping with colleagues or distant friends and relatives, you may be chatting through a mobile, interactive robot. At least that’s what a new report suggests.

The technology intelligence research firm Tractica forecasts that telepresence robot shipments will reach 31,600 units by 2020, an annual growth rate of 49.7 percent from the 4,200 units shipped in 2015, with cumulative shipments over the next five years expected to total nearly 92,000. Telepresence robots go beyond Skype, too: these are personal, mobile, teleoperated robotic systems, meaning they are computers with wheels that you can listen and talk to.
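
For readers checking the report’s arithmetic, the 49.7 percent figure is a compound annual growth rate, which links the two reported unit counts across the five-year span:

    # 4,200 units in 2015 growing at 49.7% per year for five years:
    units_2020 = 4200 * (1 + 0.497) ** 5
    print(round(units_2020))  # 31576, in line with the reported 31,600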

Wendell Chun, the author of the report, expects these teleoperated bots will roam around all sorts of industries, particularly hospitals and schools.

“I found thousands of applications,” Chun tells Inverse. “I plotted them all out and it seemed like medical and education were the highest, but it ranges from retail to corporate offices.” Read more on Report forecasts more telepresence robots for more uses…

Posted in Presence in the News
