ISPR Presence News

Author Archives: Matthew Lombard

Call: Symposium on Computational Modelling of Emotion: Theory and Application (during AISB 2017)

CALL FOR PAPERS:

Symposium on Computational Modelling of Emotion: Theory and Applications

During AISB 2017 Convention
http://aisb2017.cs.bath.ac.uk/index.html
18-21 April 2017, Bath, UNITED KINGDOM

http://www.cs.bham.ac.uk/~ddp/aisb17cme/

Deadline for submissions: 15th January 2017

OVERVIEW

Contemporary emotion modelling includes many projects attempting to understand natural emotions or to implement simulated emotions in chatbots, avatars or robots, for practical uses of many sorts from entertainment to caring. The numerous models of affective phenomena in the literature differ in important respects. They differ in how they describe and explain a range of phenomena, including the nature and order of perceptual, cognitive and emotional mental processes and behavioural responses in emotional episodes. They also differ in their target level of granularity: from fine-grained neural to coarse-grained psychological. Different models simulate emotions (and other mental states) with different ontological status and with a different focus on whether they model external behaviour or internal states. This diversity provides a challenge, but also an opportunity. This symposium aims to facilitate movement towards a mature integrated field with a deeper and richer understanding of biological minds by more clearly setting out interrelationships between emotion models.

Contributions that identify and attempt to remedy gaps and lack of breadth in current research on affective phenomena are particularly welcome. A narrow modelling focus may be appropriate for narrowly focused applications of AI, such as toys or entertainment. Richer theories that are intended to advance the science of mind should include affective states such as motives, attachments, preferences, values, standards, attitudes, moods, ambitions, obsessions, humour, grief, various kinds of pride, and various moral and aesthetic phenomena. The symposium will therefore consider how varieties of affect can be integrated and validated in computational models.

The aims of this symposium also include: presenting the state of the art in emotion modelling; bringing together an interdisciplinary community interested in exploiting this technology; and looking forward by defining new empirical, philosophical, and technological challenges, as well as contributing to our understanding of natural varieties of affect and how they fit with other aspects of cognition.

Topics include, but are not limited to:

  • How models explain the nature of interaction between reasoning and emotion, and the emotional underpinnings of reasoning
  • Computational architectures which model emotion
  • Models of affect incorporated within applications in human-computer interaction and health technology; for example, in the health domain, emotion models that can enhance assessment, diagnosis and treatment.
  • Explaining how technological applications can be used to make contributions to psychological theory
  • Is emotion algorithmic/computational, and to what extent?
  • Embodied, situated and enactivist approaches to emotion
  • Emotion model validation
  • Towards computational models for online dynamic diagnosis and therapeutic interventions
  • Modelling of emotion regulation for self-help, cognitive and mindfulness psychotherapy, and positive psychology.
  • Emotion modelling in computational psychiatry, including investigating the mechanisms of pathological thinking and emotion
  • Attachment modelling
  • How computational models can provide accounts of how emotions and cognitions shape each other over different timescales, from momentary episodes to the development of personality
  • Using computational emotion models in research on: self-control, meta-management, and coherence in thought and behaviour (and loss of these states)
  • As the AISB convention has the overall theme of “Society with AI,” submissions are welcome that focus on social and ethical questions, including:
  • Can artificial systems be given the full range of human emotions? Or can these emotions simply emerge from the functioning of the model components? If ‘yes’, are there ethical limits on what systems should be created or allowed to develop?
  • How will people respond to emotional agents as they become more realistic? What implications will sophisticated emotional agents have for human-to-human relations, and for how humans understand what it means to be human?
  • The near-future relevance of emotions in AI
  • The potential benefits or threats to society

Read more on Call: Symposium on Computational Modelling of Emotion: Theory and Application (during AISB 2017)…

Posted in Calls | Leave a comment

I was a robot and this is what I learned

[This story from The Register is an interesting first-person account of the experience of using a Beam telepresence robot (‘unit’) to attend an IT conference. –Matthew]

I was a robot and this is what I learned

The internet of me

7 Dec 2016, Trevor Pott

Sysadmin blog For one brief instant, Microsoft was the good guy. Deep within the often customer-hostile behemoth, left after the arrogance and straight on past the victim blaming is the office of Brian First, with the Microsoft Experience Design Group. Alongside a company called Event Presence, Brian made me feel like a real person, actual and whole.

I am not in great shape. My physical deterioration continues apace and in some places may even qualify me as disabled. This isn’t an easy thing to admit. Not to myself, and certainly not publicly. The carefully constructed lies we build up to maintain our fragile egos are often far more important to us than the truth.

IT conferences, however, have a way of slapping me in the face with reality. Airports go on forever. They’re enormous, sprawling things made out of queues and erratic, frequently hostile flesh-missiles that once were people with hopes and dreams and holiday plans somewhere nice.

Airport security and being trapped in a sardine can that smells like a sewer, with a dozen screaming infants for 12 hours, erases that humanity. And then the real gong show begins.

Careening through some foreign city’s streets at breakneck speed, clinging to the back of a taxi and wondering just exactly how many laws your cabbie is violating, is replaced by more queueing, surly hotel staff, another terror-taxi, and yet more queueing. Once you’re clear of the event registration line you’ve got that fistful of user group meetings, vendor parties and, hopefully, sleeping, all interspersed with either lots of walking, terror-taxis or both.

This is all the day before the conference even starts.

The big conferences are usually four days long. The show floor is giganamous, there are yet more parties, dozens of interviews, seemingly unlimited terror-taxis and I don’t even remember what else because the urge to repress everything is overwhelming.

To say that conferences are extremely difficult for me is a sad understatement. Beyond the physical pain, beyond the exhaustion lies humiliation. These events, though critical for my career, leave me feeling broken and inhuman.

To admit this, especially within the confines of the technology industry, is to show weakness. We are supposed to be emotionless automatons, interested only in technology and driven by nothing other than cold pragmatism. How ironic, then, that a remote-controlled robot provided me by a company I so often malign as the evil empire should restore some of my humanity. Read more on I was a robot and this is what I learned…

Posted in Presence in the News | Leave a comment

Call: European Conference on Cognitive Ergonomics (ECCE 2017)

CALL FOR PAPERS:
European Conference on Cognitive
Ergonomics (ECCE 2017)
September 19-22, 2017, Umeå, Sweden
http://www.informatik.umu.se/ecce2017/

Due date for all submissions: February 28, 2017

ECCE 2017 is the 35th annual conference of the European Association of Cognitive Ergonomics. This leading conference in human-technology interaction and cognitive engineering provides an opportunity for both researchers and practitioners to exchange new ideas and practical experiences from a variety of domains. We invite papers from researchers and practitioners that address the broad spectrum of challenges and opportunities in this area. The main theme of ECCE 2017 is “Transforming the everyday”. The role of digital technologies in our lives has changed dramatically over the last decades, and the scope of cognitive ergonomics has extended accordingly. While industrial, safety-critical applications and systems remain an essential object of analysis and design, an increasingly important focus of ECCE conferences is on how digital technologies become a part of our everyday environments and practices. The theme of ECCE 2017 reflects this development.

Submission Topics

We invite various types of contributions from researchers and practitioners – long and short papers, demos, posters, doctoral consortium applications, and proposals for workshops and panel sessions – which address the broad spectrum of cognitive ergonomics challenges in the analysis, design, and evaluation of digital technologies. This includes, but is not limited to, the following topics:

  • Design methods, tools, and methodologies for supporting cognitive tasks
  • Affective/emotional aspects of human interaction with IT artefacts
  • Motivation, engagement, goal sharing
  • Ecological approaches to human cognition and human-technology interaction
  • User research concepts, methods, and empirical studies
  • Cognitive processes in design
  • Collaborative creativity
  • Collaboration in design teams
  • Cognitive task analysis and modeling
  • Methods and tools for studying cognitive tasks
  • Decision aiding, information presentation and visualization
  • Human Factors and simulation
  • Trust and control in complex systems
  • Situation awareness
  • Human error and reliability
  • Resilience and diversity

Read more on Call: European Conference on Cognitive Ergonomics (ECCE 2017)…

Posted in Calls | Leave a comment

New Global Virtual Reality Association (GVRA) to promote industry best practices

[The new Global Virtual Reality Association (GVRA) seems likely to play an important role in the development of a key presence-related industry; this story is from Engadget, where it includes a different image; the official press release is available from PR Newswire. –Matthew]

Biggest names in VR band together to create industry standards

GVRA isn’t exactly the catchiest name

Cherlynn Low
December 7, 2016

The world’s most popular virtual reality headset makers have assembled. Google, Oculus, Sony, HTC, Samsung and Acer have come together to create a non-profit organization called the Global Virtual Reality Association (or the far snappier GVRA, for short). The association’s goal is to “promote responsible development and adoption of VR globally,” according to its website, and members will do so by researching, developing and sharing what they believe to be industry best practices. Read more on New Global Virtual Reality Association (GVRA) to promote industry best practices…

Posted in Presence in the News | Leave a comment

Call: Web3D 2017 – 22nd International Conference on 3D Web Technology

Call For Papers, Posters, Tutorials And Workshops

Web3D 2017
The 22nd International Conference on 3D Web Technology
Date:   5-7 June 2017
Location: Queensland University of Technology, Brisbane, Australia
Website:  http://web3d2017.org

Papers submission deadline: 13 February 2017

2017 will be a historic year for 3D on the Web. We are seeing the explosion of WebVR, with the potential of WebAR just around the corner.

With WebGL now widely supported by default in modern browsers, tools such as X3D, X3DOM, Cobweb, three.js, glTF, and A-Frame VR are allowing nearly anyone to create Web3D content. Commercial game engines such as Unity and Unreal are starting to offer ways to export and publish directly to Web3D.
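
As a rough illustration of how low that barrier now is, here is a complete (if trivial) browser scene in three.js, one of the tools named above. This sketch is our own addition rather than part of the call, and assumes three.js has been installed from npm:

    import * as THREE from 'three';

    // Minimal Web3D content: a spinning cube rendered with WebGL in the browser.
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.z = 3;

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
    scene.add(cube);

    function animate(): void {
      requestAnimationFrame(animate);
      cube.rotation.x += 0.01;
      cube.rotation.y += 0.01;
      renderer.render(scene, camera);
    }
    animate();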

The conference will explore topics including research on simulation and training using Web3D, enabling technologies for web-aware, interactive 3D graphics from mobile devices up to high-end immersive environments, and the use of ubiquitous multimedia across a wide range of applications and environments. For example:

  • Virtual Reality (VR)
  • Mixed and Augmented Reality (MAR)
  • Cultural and Natural Heritage
  • Medical, Telemedicine (eHealth)
  • Transportation and Geospatial
  • Industry Applications
  • Archival Digital Publications
  • Human Animation, Motion Capture
  • 3D Printing and 3D Scanning
  • CAD and Advanced Manufacturing
  • Education and E-learning
  • Collaboration and Annotation
  • Tourism and Accessibility
  • Gaming and Entertainment
  • Creativity and Digital Art
  • Public Sector and Government
  • Open Web Platform Integration

For the 2017 conference edition, we welcome works addressing the emerging opportunities and research of portable, integrated information spaces over the web including, but not limited to:

  • Emerging WebVR and WebAR capabilities.
  • Scalable, interoperable representations and modeling methods for complex geometry, structure, and behaviors.
  • 3D similarity search and matching, 3D search interfaces, and sketch-based approaches.
  • Visualization and exploration of 3D object repositories.
  • Scientific and medical visualization, medical simulation and training.
  • 3D content creation, analysis tools and pipelines, and 3D classification.
  • Annotation, metadata, hyperlinking and Semantic Web for 3D objects and scenes.
  • Generative and example-based shape modeling and optimization.
  • 3D scanning, 3D reconstruction, 3D digitization, and 3D printing.
  • Streaming and rendering of large-scale models, animations, and virtual worlds.
  • Progressive 3D compression and model optimization.
  • Collaborative operation and distributed virtual environments.
  • Web-wide human-computer interaction and 3D User Interfaces.
  • 3D City Models, Building Information Modeling & Geo-visualization.
  • Mixed and Augmented Reality (MAR) including standardization aspects.
  • Web3D and associated APIs, toolkits, and frameworks.
  • Novel Web3D interaction paradigms for mobile/handheld applications.
  • Web based interaction techniques including gesture, pen and voice.
  • Agents, animated humanoids, and complex reactive characters.
  • Motion capture for composition and streaming of behaviors and expressions.
  • Web3D for social networking.
  • Script algorithms and programming for lightweight Web3D.
  • Network transmission over mobile Internet.
  • Server-side 3D engines and cloud-based gaming.
  • Architecture support by large Web3D applications for CAD tools and solid modelers.
  • Data analysis and intelligent algorithms for big Web3D data.
  • Interactive Web3D applications across all domains and sectors.

Read more on Call: Web3D 2017 – 22nd International Conference on 3D Web Technology…

Posted in Calls | Leave a comment

‘Disfellowshipped’ pushes forward the possibilities of immersive storytelling in journalism

[Journalism continues to explore how best to use presence-evoking technologies to engage and inform; this is from Reveal’s website, where you can access various versions of the Disfellowshipped production. –Matthew]

A woman watching Disfellowshipped: A VR Experience

Disfellowshipped: A virtual reality experience

By Trey Bundy / November 29, 2016

Our first virtual reality production, Disfellowshipped, is an attempt to help push forward the possibilities of immersive storytelling in journalism – to tell a narrative, character-rich, investigative story in VR.

Media outlets around the world are getting their feet wet in VR, and some astonishing work has been done. But so far, VR largely has been used to provide visual context to stories, while the heavy lifting of storytelling is still left to text or video.

We didn’t know it at the time, but Disfellowshipped was born in May 2015. A group from Reveal from The Center for Investigative Reporting went to Berlin to host TechRaking, a gathering of journalists and technologists, supported by the Google News Lab. That’s where we met Stephan Gensch and Linda Rath-Wiggins, founders of a Berlin-based VR start-up called Vragments. Their idea was to build a tool called Fader that would allow journalists to quickly and easily turn their reporting into VR experiences.

To design the tool, they needed a reporter and a story. I was in the middle of a prolonged investigation into Jehovah’s Witnesses and child sexual abuse. We decided to focus our work on one of my sources, Debbie McDaniel, a woman with an extraordinary past.

So, what’s the point of telling this story in VR?

The answer is to give the viewer a more intimate understanding of a character and her experience. The technology allows us to put you in the reporter’s shoes, to feel what it’s like to sit with people as they look you in the eye and tell you their story, to visit their towns and the places that affected their lives. In some instances, it becomes a window into a person’s emotional memory. Read more on ‘Disfellowshipped’ pushes forward the possibilities of immersive storytelling in journalism…

Posted in Presence in the News | Leave a comment

Call: Experiencing Nonhuman Spaces: Between Description and Narration

Dear all,

Below is a Call for Proposals that may be of interest to some of you.

We’re looking for contributions to an edited volume on space and the nonhuman in narrative. Our focus will be on how the materiality of space shapes certain kinds of narratives (and readers’ imagination thereof), creating a platform for thinking beyond the human scale. The collection aims to bring narrative theory and cognitive approaches to literature into a conversation with the nonhuman turn.

If you have any questions, feel free to get in touch!

All the best,

David, Marco, and Marlene

Call for Proposals

Experiencing Nonhuman Spaces: Between Description and Narration

Abstracts due: 24 February 2017

“We felt enlarge itself round us the huge blackness of what is outside us, of what we are not,” declares Bernard in Virginia Woolf’s The Waves (1931/2000, 213). “What we are not,” the nonhuman, has emerged as one of the most thought-provoking concepts in contemporary literary scholarship. As Mark McGurl puts it, “the obdurate rock, the dead-cold stone [has taken] center stage as an image of the non-human thing, the thing that simply does not care, and has been not-caring for longer than anyone can remember” (2011, 384). Thinking about, and with, the raw physicality of matter has proven instrumental in destabilizing a metaphysically entrenched conception of the human. Just as Bernard’s language views the nonhuman through spatial metaphors (“enlarge,” “round,” “outside”), the experience and imagination of space are key to any attempt to move beyond what we are.

The proposed collection of essays seeks to come to grips with how literary narrative may confront readers with the materiality of the spaces we live in, leveraging spatial description and spatial metaphors as a springboard toward the nonhuman. Literary studies after the “spatial turn” has explored the intersection between spatiality and multiple embodiments of “otherness”: postcolonial, queer, animal. In narratology, space has been a rich area of recent research, including Katrin Dennerlein’s monograph Narratologie des Raumes (2009), Fludernik and Keen’s special issue of Style, “Interior Spaces and Narrative Perspective” (2014), and Ryan, Foote, and Azaryahu’s collaboration Narrating Space/Spatializing Narrative (2016). But in these areas of scholarship the phenomenological dimension of narrative space and its interpretive and epistemological ramifications for theories of the (non)human have been left on the sidelines. Michel Butor claimed: “[The novel] is the phenomenological realm par excellence, the best place to explore how reality appears to us, or might appear.” Revisiting and extending insights from phenomenologically and cognitively oriented approaches to literature, we are interested in how literary narrative may translate the nonhuman into a concrete experience that, like Bernard, readers may feel “enlarge itself round” them.

The primary focus of the proposed essays should be an open dialogue within and without literary studies in order to evaluate the role of spatial description and spatial language in challenging ingrained distinctions between human cultures and subjectivity and material realities. Of particular interest are connections between narratology and emerging fields that prioritize the nonhuman, such as new materialism and object-oriented ontology. Possible topics include:

  • The challenges involved in conveying an experience of spaces and phenomena beyond (or below) the human scale.
  • “Empty deictic center” texts, in Ann Banfield’s (1987) term, or narratives that focus on spaces in the absence of human observers.
  • Empathy and affect for inanimate objects, as elicited by spatial descriptions.
  • The productivity of spatial metaphors in genres such as climate change and postapocalyptic fiction.
  • Affordances of specific media that foreground space and the nonhuman.
  • Any other project exploring narrative challenges to anthropocentrism, and how they play out in spatial terms.

Read more on Call: Experiencing Nonhuman Spaces: Between Description and Narration…

Posted in Calls | Leave a comment

How virtual reality is about to take you to Pearl Harbor and the Berlin Wall

[I think one of the most interesting uses of presence is the recreation of historical places and events, which comes with an obligation to do it accurately and sensitively; this story is from The Washington Post and more information is available from the Newseum’s website. –Matthew]

Man using VR at the Newseum

[Image: Maria Bryk/Newseum]

How virtual reality is about to take you to Pearl Harbor and the Berlin Wall

By Brian Fung
December 1, 2016

If you’re like most Americans, you probably learned about Pearl Harbor from a textbook or by watching a film about the surprise attack in 1941 against U.S. naval forces.

But now, visitors to a Washington-area museum can experience a measure of what it was really like to be at the scene.

Standing amid the wreckage of burned-out trucks and a downed Japanese fighter plane, they can gaze up at the sinking U.S.S. West Virginia as thick, black smoke curls its way slowly across the sun. They can listen to the lapping of oil-choked waters against the shore and hear muffled explosions in the distance. They can even be transported to a typical American home the day after the attack and hear President Roosevelt address the nation while they leaf through that week’s newsmagazines and hang ornaments on a Christmas tree.

All it takes is a virtual-reality headset and pair of handheld controllers.

“It was very, very much like being there,” said Kathy Ernst, a retired teacher from Alaska who had never tried virtual reality before she strapped on a pair of goggles Monday at a pre-release event for “Remembering Pearl Harbor,” an exhibit running from Dec. 5 to Dec. 11 at the Newseum. Read more on How virtual reality is about to take you to Pearl Harbor and the Berlin Wall…

Posted in Presence in the News | Leave a comment

Call: Fifth International Conference on Human-Agent Interaction (HAI 2017)

HAI 2017 Call for Papers

The Fifth International Conference on Human-Agent Interaction (HAI 2017)
Bielefeld, Germany ~ 17 – 20 October 2017
http://hai-conference.net/hai2017/

Submission Site: http://precisionconference.com/~hai/

Full paper deadline: 14 May 2017

HAI 2017 is the 5th annual International Conference on Human-Agent Interaction. It aims to be the premier interdisciplinary venue for discussing and disseminating state-of-the-art research and results that have implications across conventional interaction boundaries including robots, software agents and digitally-mediated human-human communication.

The theme for HAI 2017 is “How autonomy shapes interaction”. During the last decade a large body of research has been devoted to increasing the quality of interaction with artificial agents, and this has now reached a quite convincing level for focused application scenarios. However, as robots and agents enter our everyday lives, such scenarios will require more initiative and flexibility from the agent, i.e. more autonomy. Such autonomous behavior also means that the interaction will become less predictable. This may become problematic given the strong research focus on statistical models of observable behavior built on rather shallow structures; these models may not be able to capture the underlying interaction structure in less restricted scenarios. We therefore need a better understanding, and better models, of the underlying interaction principles, ones that not only take situational and task aspects into account but also include detailed user models. A stronger research focus on the underlying principles of interaction between autonomous agents is needed, leading to better and deeper models of interaction. We encourage contributions that tackle this question by focusing on more realistic and life-like scenarios.

The conference seeks contributions from a broad range of fields spanning engineering, computer science, psychology and sociology, and will cover diverse topics, including: human-robot interaction, affective computing, computer-supported collaborative work, gaming and serious games, artificial intelligence, and more.

Topics of interest include, but are not limited to:

  • designs and studies of Human-Agent Interaction, including quantitative and qualitative results
  • theoretical models of Human-Agent Interaction
  • technological advances in Human-Agent Interaction
  • impacts of embodiment (e.g., physical vs digital, human vs animal-like)
  • experimental methods for Human-Agent Interaction
  • character and avatar design in video games
  • agents in social networks

This includes more targeted results that have implications for the broader human-agent interaction community:

  • human-robot interaction
  • human-virtual agent interaction
  • interaction with smart homes and smart cars
  • distributed groupware where people have remote embodiments and representations
  • and more!

Full papers, posters, late-breaking results, and tutorial/workshop overviews will be archived in the ACM Digital Library. Read more on Call: Fifth International Conference on Human-Agent Interaction (HAI 2017)…

Posted in Calls | Leave a comment

Riken’s Substitutional Reality (SR) raises new prospects for presence

[To virtual and augmented reality we have to add substitutional reality, and as in this story from The Japan Times, consider its far-reaching implications. –Matthew]

Riken's Naotaka Fujii tests SR

[Image: Riken researcher Naotaka Fujii (right) tests a ‘substitutional reality’ system that lets people experience how a prerecorded 360-degree video can be mixed into real-life surroundings. | Courtesy of RIKEN]

Riken mind bender stays one step ahead of virtual reality

By Ayako Mie, Staff Writer
December 4, 2016

Imagine you are standing on the Grand Canyon Skywalk, a horseshoe-shaped bridge suspended 1,200 meters above the Colorado River. You are likely to get dizzy and freeze up at the thought of venturing out onto the 10-cm thick glass.

Nevertheless, you step forward and breathe a sigh of relief after realizing you are not tumbling toward the river.

But how can you be sure what you see is really there? And more fundamentally, how do we know that we live in the real world and not a virtual one?

The boundary between what’s real and virtual is blurring with the rise of virtual and augmented reality.

By wearing Oculus Rift, a type of virtual reality headset, you can transport yourself to the middle of a desert without actually being there. By using Google Glass, you can use the overlaid information in front of your eyeballs to find the nearest coffee shop without opening the maps app on your smartphone.

And now Naotaka Fujii, a researcher at the state-backed Riken institute, is taking VR to an entirely new level: “substitutional reality.”

The CEO of VR company Hacosco led Riken’s Laboratory for Adaptive Intelligence in 2012 to develop a system with the goal of manipulating reality. The so-called SR system can be used to re-create an experience similar to the one presented in the sci-fi movie “Total Recall,” in which a character played by Arnold Schwarzenegger buys memory implants to enjoy a real-life experience on colonized Mars.

Riken’s SR system does not actually implant memories, but the visual and audio effects are so realistic that the brain becomes too confused to distinguish between present and past, or fake and real.

The trick is simple. First, the user puts on a head-mounted display (HMD) that shows a live camera view of what is actually in front of them. Later, a pre-recorded 360-degree video shot in the same surroundings is loaded into the HMD in place of the live view.
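
To make the mechanics concrete, here is a minimal sketch of that substitution step; it is our own illustration of the idea as described, not Riken’s implementation, and the interface and class names are hypothetical:

    // Hypothetical sketch: the HMD always renders a 360-degree panorama, and an
    // operator silently swaps the source between the live camera feed and a
    // pre-recorded clip filmed in the same room.
    interface PanoramaSource {
      label: string;
      nextFrame(): ImageBitmap | null; // one equirectangular frame, or null if none is ready
    }

    class SubstitutionalDisplay {
      private current: PanoramaSource;

      constructor(private live: PanoramaSource, private recorded: PanoramaSource) {
        this.current = live; // start with unmodified reality
      }

      // Operator-triggered switch; the wearer gets no visual cue that it happened.
      substitute(useRecorded: boolean): void {
        this.current = useRecorded ? this.recorded : this.live;
      }

      // Called once per display refresh; both sources pass through the same
      // rendering path, which is what makes the swap hard to detect.
      renderFrame(draw: (frame: ImageBitmap) => void): void {
        const frame = this.current.nextFrame();
        if (frame !== null) {
          draw(frame);
        }
      }
    }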

The shift is so subtle the user does not notice the difference. Read more on Riken’s Substitutional Reality (SR) raises new prospects for presence…

Posted in Presence in the News | Leave a comment