ISPR Presence News

Monthly Archives: April 2019

Call: “Uses and Effects of Smart Media: How AI Impacts User Experience” issue of Journal of Broadcasting & Electronic Media

Call for Papers

Journal of Broadcasting & Electronic Media
Special Issue: Uses and Effects of Smart Media: How AI Impacts User Experience

Submission Deadline: November 15, 2019

The increasing integration of artificial intelligence (AI) into digital media technologies has added new affordances and altered the nature of user experience, creating new opportunities for audience engagement and gratification that meet human needs for information, communication and entertainment in a variety of innovative ways.

Read more on Call: “Uses and Effects of Smart Media: How AI Impacts User Experience” issue of Journal of Broadcasting & Electronic Media…

Posted in Calls

Butterfly World: Gaming and VR for insect and ecosystem conservation

[A computer scientist and a biologist are using presence to teach users about insects and environmental conservation; some of the details are in this press release from EurekAlert!. More information is in their article in Rethinking Ecology, and the project’s Patreon page includes a 2:21 video (also available via YouTube; an earlier 1:30 YouTube video is also available). –Matthew]

Living room conservation: Gaming & virtual reality for insect and ecosystem conservation

Players explore and search for butterflies using knowledge gained through gameplay

News Release 18-Apr-2019
Pensoft Publishers

Gaming and virtual reality (VR) could bridge the gap between urban societies and nature, thereby paving the way to insect conservation by means of education, curiosity and life-like participation.

This is what a Florida International University team, computer scientist Alban Delamarre and biologist Dr Jaeson Clayborn, strives to achieve by developing a VR game (a desktop version is also available) dedicated to insect and plant species. Focused on imperiled butterflies, their innovative project, Butterfly World 1.0, is described in the open-access journal Rethinking Ecology.

Butterfly World 1.0 is an adventure game designed to engage its users in simulated exploration and education. Set in the subtropical dry forest of the Florida Keys (an archipelago situated off the southern coast of Florida, USA), Butterfly World draws the players into an immersive virtual environment where they learn about relationships between butterflies, plants, and invasive species. While exploring the setting, they interact with and learn about the federally endangered Schaus’ swallowtail butterfly, the invasive graceful twig ant, native and exotic plants, and several other butterflies inhabiting the dry forest ecosystem. By contrast, other nature-related VR experiences, including conservation awareness and educational programs, rely on passive observations with minimal direct interactions between participants and the virtual environment.

According to the authors, virtual reality and serious gaming are “the new frontiers in environmental education” and “present a unique opportunity to interact with and learn about different species and ecosystems”.

Read more on Butterfly World: Gaming and VR for insect and ecosystem conservation…

Posted in Presence in the News

Call: Designing Speech Synthesis for Intelligent Virtual Agents – IVA 2019 Workshop

Call for Papers

Workshop: Designing Speech Synthesis for Intelligent Virtual Agents
At ACM IVA 2019 (https://iva2019.sciencesconf.org)
2nd July 2019
Paris, France
http://homepages.inf.ed.ac.uk/matthewa/speechIVA2019wshop

Extended submission deadline: April 26, 2019

OVERVIEW

In this workshop we will look at the elements of an artificial voice that support either an embodied (digital or tangible) or a disembodied form of an intelligent virtual agent (IVA). In this context the agent can be seen to perform, or act a role, where naturalness of the voice may be subservient to appropriateness, and communicating the character of the agent can be as important as the information it presents. We will introduce the current ways a voice can be given character, how this can be designed to fit a specific agent, and how such a voice can be effectively deployed.

The intended audience is academics and industry researchers who are tired of seeing their carefully created conversational agents spoilt by inappropriate speech synthesis voices, and who want to lead the way in making speech synthesis take into account the design requirements of IVAs. The workshop is not primarily for speech technologists (although they are of course welcome) but rather for the engineers and scientists exploring the use of IVAs who are curious to see how modern speech synthesis can dramatically alter the way such agents are perceived and used.

ATTENDING

Attendees are encouraged to also attend the main conference, but this is not a requirement. Those interested should email matthewa@cereproc.com with the following information by April 26, 2019:

  1. A 100-200 word biography and your main motivation for attending.
  2. A sketch of an actual or imagined embodied agent that intends to use speech to converse or present dynamic content. The sketch could include pictures, example content and design motivations, and could range from well-specified to speculative, from groundbreaking to well-understood domains. It could run from a paragraph to a couple of pages. The pre-designs will be used to help the organizers select and formulate a design challenge of interest to the attendees.

Five of the submissions will be chosen for a short presentation at the workshop (7 minutes plus 3 minutes for questions). Please indicate if you would like to be considered for this. The chosen submissions will aim to give a varied and provocative view of potential use cases and speech interface designs.

Read more on Call: Designing Speech Synthesis for Intelligent Virtual Agents – IVA 2019 Workshop…

Posted in Calls

Vulcan Holodome at TED2019: An immersive 360-degree world… without a headset?

[As this report from the TED2019 blog makes clear, immersive platforms that don’t rely on isolating headsets have important advantages in the creation of (social) presence experiences. The original story includes more images. For more information, Axios has a concise summary, GeekWire has a detailed story about a new interactive, haptic-enhanced game for the Holodome as well as an earlier report on the platform’s debut in Seattle, and a 1:00 video that includes viewer reactions is available via YouTube. –Matthew]

[Image: Step up close to, and almost into, the work of Monet, a favorite artist of Vulcan founder Paul Allen. Vulcan brought their new Holodome environment to TED2019: Bigger Than Us, in Vancouver, BC, Canada. Credit: Bret Hartman / TED.]

Vulcan Holodome at TED2019: An immersive 360-degree world… without a headset?

Posted by: TED Staff
April 17, 2019

Have you ever loved a painting so much you wanted to step inside it? While the world of VR is usually utilised to take us to inaccessible locations like the depths of the ocean or the surface of the Moon, Vulcan’s Holodome offers the opportunity to enter the world of an impressionist painting in one of two experiences previewing at TED2019.

Unlike the usual headset-based VR experience, the Holodome is a fully immersive environment you can explore with your fellow adventurers, unhindered by wearable equipment. Inspired by a love of Monet’s works, the late chair of Vulcan, Microsoft cofounder Paul Allen, wanted to create a way to step inside them: one where you can walk across the painter’s Poppy Field as it undulates around and beneath you, and Woman with a Parasol disappears over a nearby rise.

“With Holodome, our goal is to transport people into immersive adventures across real and imagined worlds, from the highest mountaintop to an impressionist landscape to the boundaries of space, without the need for mounted headgear,” says Kamal Srinivasan, Vulcan’s director of product management.

Read more on Vulcan Holodome at TED2019: An immersive 360-degree world… without a headset?…

Posted in Presence in the News

Call: University College Dublin Workshop on Implementing Machine Ethics

Call for Abstracts

UCD Workshop on Implementing Machine Ethics
School of Computer Science, University College Dublin
2-3 July, 2019
https://aristotle.ucd.ie/

Confirmed keynote: Prof. Alan Winfield, Bristol Robotics Lab, University of the West of England

Read more on Call: University College Dublin Workshop on Implementing Machine Ethics…

Posted in Calls

Baylor virtual reality project puts viewers in Victorian poets’ living room

[Here’s another example of the potential of presence to enrich and expand education. The story is from the Waco Tribune-Herald, where it includes two more images. For more information see an earlier story in Baylor’s Instant Impact and Amanda Gardner’s website. –Matthew]

[Image: Baylor graduate student Amanda Gardner puts on the virtual reality goggles to experience a film shot in the Armstrong Browning Library’s salon room, a recreation of the living room of poets Robert and Elizabeth Barrett Browning. Credit: Rod Aydelotte.]

Baylor virtual reality project puts viewers in Victorian poets’ living room

By Carl Hoover
April 11, 2019

Baylor graduate student Amanda Gardner has always treated English and literature as immersive subjects, ones her high school students could plunge into to find shared experiences that could be transformative.

Her latest educational project also immerses participants, but in a virtual way: a short virtual reality film that puts viewers in the living room of British poet Robert Browning, where Browning, his son Pen, their friends and family recall the life and poetry of Browning’s wife Elizabeth Barrett Browning.

For Baylor’s Armstrong Browning Library, which preserves one of the world’s largest collections of Browningiana, Gardner’s Cinematic Virtual Reality project offers an unusual way to celebrate the library’s annual Browning Day on Friday.

Gardner and her virtual reality collaborators will talk about the project, and participants will be able to see it for themselves through Oculus Go VR goggles.

The library’s salon provided the ready-made and authentic setting during filming last year. It’s a recreation of the Brownings’ living room in Casa Guidi, their home in Florence, Italy, complete with Elizabeth’s writing desk, a prayer stand and two tables used by her sisters.

Baylor theater professor Steven Pounders and students Noah Alderfer and Bailey Harris led a cast of about 10, all dressed in period costumes and working off a script written by Gardner.

Those who put on the VR goggles can walk through the room and eavesdrop on several conversations as those at the memorial, set in 1866, share their memories of the poet and some of her work.

If viewers come away more inclined to explore her work and her life, then the immersive film has done what Gardner thinks VR filmmaking will bring to 21st-century classrooms: opening students’ eyes to subject matter that they’ll then want to explore on their own.

Read more on Baylor virtual reality project puts viewers in Victorian poets’ living room…

Posted in Presence in the News

Job: Post-doctoral position on “Modeling Virtual Coaches” project at Sorbonne University

Post-doctoral position – Modeling Virtual Coaches
Institute for Intelligent Systems and Robotics (ISIR) lab
Sorbonne University, Paris

There is a post-doctoral position open at Sorbonne University in the Institute for Intelligent Systems and Robotics (ISIR) lab on modeling a group of virtual coaches.

Read more on Job: Post-doctoral position on “Modeling Virtual Coaches” project at Sorbonne University…

Posted in Jobs

Antony Gormley’s Lunatick VR experience invites Londoners to moon walk

[The VR installation described in this story from Dezeen offers a version of an otherwise unavailable experience, one that provides new perspective, actual and metaphorical. If you’ll be in London before April 25, consider visiting. The original story includes more images. –Matthew]

Antony Gormley’s Lunatick invites Londoners to moon walk

Augusta Pownall
April 15, 2019

Artist Antony Gormley has teamed up with astrophysicist Priya Natarajan on a virtual-reality experience that allows users to walk on a digital version of the moon created using data from NASA.

The 15-minute immersive experience sees visitors don a virtual-reality (VR) headset to travel from an imagined version of Christmas Island in the Indian Ocean, through the earth’s atmosphere to the moon, where they can walk across its surface.

On the way, they pass through the stratosphere and around virtual asteroid belts before eventually travelling from the moon on towards the sun.

“Our nearest neighbour is the moon, and this project allows us to experience it as a found object in space, to explore its vast open spaces and swoop the ridges and valleys of its craters,” said Gormley.

“This collaboration is an opportunity to experience the mind/body relationship in a new way and consider our own body’s relationship to other bodies in space.”

Surface of the moon digitally recreated

Produced with Acute Art, a virtual and augmented reality production studio that specialises in creating digital artworks, the experience takes place in a room kitted out with five VR headsets at The Store X’s space at 180 The Strand in London.

With Acute Art’s chief technology officer Rodrigo Marques, the pair recreated the surface of the moon, using the publicly available data set from NASA’s ongoing Lunar Reconnaissance Orbiter – a robotic spacecraft currently orbiting and mapping the earth’s moon.

Using a hand-held gaming stick and the movement of their own bodies, visitors are able to move across the moon’s craters, and experience the weightlessness of bouncing on its surface.

“This year we celebrate the 50th anniversary of the moon landing. Only 12 human beings have walked on the moon to date, but many unmanned missions have provided comprehensive high-resolution maps of the lunar surface,” said Natarajan.

“We can experience walking on the moon, feel the sensation in our bodies and minds of stepping on the surface that has been so intricately mapped with data that space missions have provided,” she continued.

Read more on Antony Gormley’s Lunatick VR experience invites Londoners to moon walk…

Posted in Presence in the News

Call: Ethics Oversight and Committees for AI: Towards a Future Research Agenda – University of Oxford workshop

Call for Participation

Ethics Oversight and Committees for AI: Towards a Future Research Agenda
A Wellcome Centre for Ethics and Humanities (WEH) workshop
Friday, May 10, 2019, 10am to 6pm
Big Data Institute, University of Oxford
https://www.weh.ox.ac.uk/upcoming-events/workshop-ethics-oversight-and-committees-for-ai-towards-a-future-research-agenda

Read more on Call: Ethics Oversight and Committees for AI: Towards a Future Research Agenda – University of Oxford workshop…

Posted in Calls

AR will spark the next big tech platform – Call it mirrorworld

[This recent Wired story from the magazine’s founding executive editor is a thoughtful and thought-provoking perspective on the evolution of technology that explains how and why presence illusions will increasingly occur via augmented reality, creating both important benefits and potential costs. The original version of the story is extremely long, so it’s abridged here; see the original for more information and detail. –Matthew]

AR Will Spark the Next Big Tech Platform—Call it Mirrorworld

Kevin Kelly
February 12, 2019

Every December, Adam Savage—star of the TV show MythBusters—releases a video reviewing his “favorite things” from the previous year. In 2018, one of his highlights was a set of Magic Leap augmented reality goggles. After duly noting the hype and backlash that have dogged the product, Savage describes an epiphany he had while trying on the headset at home, upstairs in his office. “I turned it on and I could hear a whale,” he says, “but I couldn’t see it. I’m looking around my office for it. And then it swims by my windows—on the outside of my building! So the glasses scanned my room and it knew that my windows were portals and it rendered the whale as if it were swimming down my street. I actually got choked up.” What Savage encountered on the other side of the glasses was a glimpse of the mirrorworld.

The mirrorworld doesn’t yet fully exist, but it is coming. Someday soon, every place and thing in the real world—every street, lamppost, building, and room—will have its full-size digital twin in the mirrorworld. For now, only tiny patches of the mirrorworld are visible through AR headsets. Piece by piece, these virtual fragments are being stitched together to form a shared, persistent place that will parallel the real world. The author Jorge Luis Borges imagined a map exactly the same size as the territory it represented. “In time,” Borges wrote, “the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.” We are now building such a 1:1 map of almost unimaginable scope, and this world will become the next great digital platform.

Google Earth has long offered a hint of what this mirrorworld will look like. My friend Daniel Suarez is a best-selling science fiction author. In one sequence of his most recent book, Change Agent, a fugitive escapes along the coast of Malaysia. His descriptions of the roadside eateries and the landscape describe exactly what I had seen when I drove there recently, so I asked him when he’d made the trip. “Oh, I’ve never been to Malaysia,” he smiled sheepishly. “I have a computer with a set of three linked monitors, and I opened up Google Earth. Over several evenings I ‘drove’ along Malaysian highway AH18 in Street View.” Suarez—like Savage—was seeing a crude version of the mirrorworld.

It is already under construction. Deep in the research labs of tech companies around the world, scientists and engineers are racing to construct virtual places that overlay actual places. Crucially, these emerging digital landscapes will feel real; they’ll exhibit what landscape architects call placeness. The Street View images in Google Maps are just facades, flat images hinged together. But in the mirrorworld, a virtual building will have volume, a virtual chair will exhibit chairness, and a virtual street will have layers of textures, gaps, and intrusions that all convey a sense of “street.”

The mirrorworld—a term first popularized by Yale computer scientist David Gelernter—will reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world.

At first, the mirrorworld will appear to us as a high-resolution stratum of information overlaying the real world. We might see a virtual name tag hovering in front of people we previously met. Perhaps a blue arrow showing us the right place to turn a corner. Or helpful annotations anchored to places of interest. (Unlike the dark, closed goggles of VR, AR glasses use see-through technology to insert virtual apparitions into the real world.)

Eventually we’ll be able to search physical space as we might search a text—“find me all the places where a park bench faces sunrise along a river.” We will hyperlink objects into a network of the physical, just as the web hyperlinked words, producing marvelous benefits and new products.

The mirrorworld will have its own quirks and surprises. Its curious dual nature, melding the real and the virtual, will enable now-unthinkable games and entertainment. Pokémon Go gives just a hint of this platform’s nearly unlimited capability for exploration.

These examples are trivial and elementary, equivalent to our earliest, lame guesses of what the internet would be, just after it was born—fledgling CompuServe, early AOL. The real value of this work will emerge from the trillion unexpected combinations of all these primitive elements.

The first big technology platform was the web, which digitized information, subjecting knowledge to the power of algorithms; it came to be dominated by Google. The second great platform was social media, running primarily on mobile phones. It digitized people and subjected human behavior and relationships to the power of algorithms, and it is ruled by Facebook and WeChat.

We are now at the dawn of the third platform, which will digitize the rest of the world. On this platform, all things and places will be machine-readable, subject to the power of algorithms. Whoever dominates this grand third platform will become among the wealthiest and most powerful people and companies in history, just as those who now dominate the first two platforms have. Also, like its predecessors, this new platform will unleash the prosperity of thousands more companies in its ecosystem, and a million new ideas—and problems—that weren’t possible before machines could read the world.

Read more on AR will spark the next big tech platform – Call it mirrorworld…

Posted in Presence in the News