ISPR Presence News

Monthly Archives: March 2018

Call: Haptic Technologies for Healthcare – Workshop at Eurohaptics 2018

Call for Papers

Haptic Technologies for Healthcare is a one-day workshop to be held as part of the Eurohaptics 2018 conference.

It will be held on Wednesday, June 13, 2018 in Pisa, Italy.

Website: https://hapticsforhealthcare.wordpress.com

Deadline for extended abstracts: April 30th, 2018

About the workshop:

Recent technologies enable novel classes of wearable and holdable devices that use forms of haptic communication and interaction going beyond elementary uses such as simply attracting attention. The result is an expanded design space that raises many research issues, which this workshop will explore.

The goal of this workshop is to spark interest in the topic of haptic technologies for healthcare, with a specific focus on using haptic technologies for assisting, enhancing and extending current practices, both in the hospital and home setting.

The workshop is open to students, researchers, engineers, clinical and rehabilitation practitioners, and industry experts interested in research issues raised by novel or complex haptic interactions for wearable and holdable devices for healthcare. We will provide ample opportunity for round-table discussions where speakers and workshop participants will be encouraged to propose questions, identify issues and raise challenges for designers and researchers. Read more on Call: Haptic Technologies for Healthcare – Workshop at Eurohaptics 2018…

Posted in Calls

Presence as perceived realism: Two cautionary examples

[Two recent examples vividly illustrate the increasing use of technology to manipulate our perceptions of reality. Business Insider introduces the first example this way:

“’Question everything’ is the Instagram influencer and blogger Carolyn Stritch’s latest message to her followers. And to encourage people to do just that, she conducted an experiment to show people just how easy it is to fake ‘perfection’ on the photo-sharing platform.

Stritch, who is from the UK, is the 32-year-old lifestyle blogger and freelance photographer behind The Slow Traveler. She has amassed 190,000 followers on her Instagram account @theslowtraveler through sharing perfectly poised photos of cosy-looking settings involving copious cups of coffee and stacks of books […].

But as we should all know by now, not everything is as it seems on Instagram.

In a post titled ‘Why I hacked my own Instagram account,’ Stritch reveals how she fooled people into believing she had taken a trip to Disneyland.”

The thoughtful blog post is below; both the Business Insider story and blog post include more images. See also the second half of coverage in Inc.

The second example concerns the shameful, immoral and dangerous creation and distribution of altered and fabricated photos, GIFs, videos, social media posts and ‘news’ stories that make false claims designed to discredit the students of Stoneman Douglas High School in Parkland, Florida and their efforts to prevent future mass shootings and other gun deaths. Media Matters has documented many of these ‘hoaxes’ in a link-filled blog post, and the journalism analysis website Poynter has a detailed story on the topic.

The rapid evolution and wide availability of media technologies and the negative aspects of human nature suggest that we’ll need to be increasingly vigilant to counter attempts to use presence as realism to manipulate us.

–Matthew]

Why I hacked my own Instagram account

March 13, 2018

To be clear: all images I posted prior to this project are really me, really in those places.

I download FaceApp, £1.99! I take a selfie: bed hair, no makeup. I tap “Impression” and my face changes quickly and dramatically: fine lines flatten, wrinkles smooth out, blemishes unblemish, dark circles disappear, cheekbones rise, eyes brighten, lips get bigger, nose gets smaller.

My face is gone.

Staring back at me, wearing my clothes, sitting in my bed, is a stranger. Or, perhaps more accurately: it’s my perfect self.

I feel horrified by how much my face changes. Does FaceApp modify other people’s faces this much?! I must be less attractive than most.

When I swipe back to the real image, the flaws seem far more prominent than when I first took the selfie.

I quickly swipe back to the edited image. The longer I look at this new, perfect me, the more I wonder what it would be like if I really looked like that.

I uploaded the selfie as a profile picture on Facebook as a sort of experiment and nobody questioned it. Not my best friend, my sisters, or even my own mam!

QUESTION EVERYTHING

I’m finishing up the second year of a degree in photography. The degree teaches that above all else we should question everything, especially our own work.

I decided to bring that idea home and question the work I do on Instagram.

I came up with a story: my FaceApped perfect self, who’s ten years younger than I am, flies off to Disneyland for the day, and somehow manages to photograph herself all alone in front of Sleeping Beauty’s Castle.

I manipulated images, captioned them with a fictional narrative, and presented them as real-life.

I hacked my own Instagram account. Read more on Presence as perceived realism: Two cautionary examples…

Posted in Presence in the News

Call: Playable Cities: The City As A Digital Playground (at ArtsIT 2018)

Call for Workshop Papers

3rd International Workshop
PLAYABLE CITIES: THE CITY AS A DIGITAL PLAYGROUND

This workshop will be held as part of the 7th EAI International Conference: ArtsIT 2018, Interactivity & Game Creation, October 24-26, 2018, Braga, Portugal (http://artsit.org/)

Workshop Date: 24 October 2018

Paper Submission Deadline: 1 July 2018

BACKGROUND

After the success of the first (Utrecht, 2016) and second (Funchal, Madeira, Portugal) editions, we are now organizing the third one-day workshop on Playable Cities, this time in the beautiful town of Braga.

PLAYFULNESS AND THE (PLAYABLE) CITY

The Playable City is a term introduced a number of years ago in Bristol, UK, and imagined as a counterpoint to ‘A Smart City’. From the Playable City website (https://www.playablecity.com/background/): “A Playable City is a city where people, hospitality and openness are key, enabling its residents and visitors to reconfigure and rewrite its services, places and stories.”

Cities are by their very nature utilitarian creations, built to support the needs and/or represent the image of the communities that build them. However, the urban environment invites play both in its construction (architecture) and in the interactions that take place within its confines; as such, the city is an ever-changing living organism in which the past and the present are intertwined in a myriad of physical and ephemeral facets of the landscape. On the face of it, “the smart city,” with its array of sensors and actuators designed to bring a higher level of efficiency to the management of urban services, is equally utilitarian and banal. However, these sensors and the digital communication networks that unite them offer new opportunities for playful interaction: bringing to life inert objects such as park benches and garbage cans, preserving and visualizing previously lost bits of the urban experience, and enabling a host of new interactions and experiences, in addition to raising a number of new challenges and concerns.

In this workshop we wish to explore the ways in which the broad gamut of technologies that make up the smart city infrastructure can be harnessed to incorporate more playfulness into the daily life activities that take place within the city, making the city not only more efficient but also more enjoyable for the people who live and work within its confines.

Topics of interest include, but are not limited to:

  • Embedding playfulness in outdoor daily life activities
  • Digital art and entertainment in urban environments
  • Playful interactions with large digital displays
  • Playfulness and smart city infrastructure
  • Outdoor play for children and adults
  • Child-friendly cities
  • Enabling the disabled through playful interactions
  • Playful interactions for urban animals
  • Community building, maker cultures, and playfulness
  • Robust sensor and actuator technology for urban environments

Read more on Call: Playable Cities: The City As A Digital Playground (at ArtsIT 2018)…

Posted in Calls

Where presence happens: Virtual reality for all

[Reading this story made me feel good. It’s from the Boise Weekly (in the Rocky Mountain state of Idaho) and it’s about the real work and real benefits of bringing VR and presence to diverse members of the public. –Matthew]

[Image: By Adam Rosenlund]

Virtual Reality for All

“We have helped people with all ranges of body limitations, stress or anxiety, autism, social skills, and even if you are fully paralyzed, we have found relaxing VR videos or music experiences to be effective.”

By Harrison Berry
March 21, 2018

Ted, a resident of the assisted living center The Terraces, was the first of a group of seniors waiting in line to don a virtual reality headset, and he gripped the hand controls as if they were poisonous snakes. Prompted by Danielle Worthy, a library assistant for the Bown Crossing branch of the Boise Public Library, Ted said he saw some fish and a stingray.

“What do you think?” asked Worthy.

“It’s goofy. It was really weird,” Ted said. Gesturing to another resident sitting on a nearby couch, he added, “You’re going to like it.”

Ted was using a deep sea diving simulator called theBlu, and the rest of the residents tried other simulators in turn. Darlene chose Richie’s Plank Experience, in which she rode an elevator to the top floor of a building. The doors opened, and she was greeted by a pirate ship-style plank suspended 525 feet above a city street. Rather than walk the plank, she pressed another button that took her down to the “fire deck.” Her fellow residents watched on a laptop screen as the doors slid open to billowing digital flames.

This late-January event wasn’t the first virtual reality demonstration at The Terraces, and it won’t be the last. The trend began in early 2017 when a woman in hospice care at the facility used the technology to ride on the Trans Canada Railroad, checking an item off her bucket list and alerting the staff to the promise of technology. Trips to the Bown Crossing library branch for residents to use its headsets may soon become a regular occurrence for the seniors, though this time the library came to them.

In years past, people would put on cumbersome equipment at video game arcades to immerse themselves in poorly rendered 3D environments, paying extra for the privilege, but a high-tech breakout was on the horizon. In 2014, Facebook bought VR startup Oculus for $2 billion. Oculus offered a suite of headsets, cameras and controllers that was expensive ($600) and required a powerful computer to render high frame rates and detailed 3D graphics, but other companies quickly jumped on the VR bandwagon with products like the HTC Vive, PlayStation VR and Samsung Gear VR.

The decreasing cost and widening availability of the technology are slowly bringing virtual reality to the mass market, where it’s now used for work and play. Architects and engineers, for example, use virtual spaces to design structures. At the same time, the Boise Public Library and one Boise-area virtual reality arcade have discovered VR is a gateway for people of all abilities to play and learn. Read more on Where presence happens: Virtual reality for all…

Posted in Presence in the News

Call: “Speculative Futures for Artificial Intelligence and Educational Inclusion” (Book Chapters)

Call for Chapters:

Speculative Futures for Artificial Intelligence and Educational Inclusion
Springer Nature – AICFE Future Schools 2030 book series

https://jeremyknox.net/call-for-chapters-speculative-futures-for-artificial-intelligence-and-educational-inclusion/

Abstract submissions due: March 30, 2018

We are inviting experts and practitioners from around the world to write chapters for a book that will help to advance our understanding of education. The book is an outcome of the Future School 2030 research project supported by the Advanced Innovation Centre for Future Education, Beijing Normal University, and will be published by Springer Nature.

This book seeks to bring together the fields of ‘artificial intelligence’ (commonly abbreviated as A.I.) and ‘inclusive education’ in order to speculate on the future of educational practice. Heretofore, there has been little in the way of direct association between research and practice in these domains: A.I. has been predominantly a technical field of research and development, and while intelligent computer systems and software are increasingly being applied in many areas of industry, economics, social life, and education itself, a specific engagement with the idea of inclusion appears lacking. Inclusive education, the agenda of which has been addressed by the recent UN Sustainable Development Goal 4, concerns an education system’s ability to accept and celebrate student diversity (Armstrong et al. 2010). Pedagogically, the aim is to deliver quality educational provision for all learners, ‘regardless of any perceived difference, disability or other social, emotional, cultural or linguistic difference’ (Florian 2008, p. 202). While different kinds of technologies have been applied in various ways to support inclusive teaching and learning, the implications of the rapid development of A.I. seem to be much overlooked.

This book is motivated by the question of why this relationship between A.I. research and inclusive educational practice has not been more substantively formed. This position develops established work that has critiqued the often-simplistic ways that technology has been applied to education (Selwyn 2014), exposing deep-rooted assumptions of ‘solutionism’ in the increasingly powerful tech sector (Watters 2013), highlighting the alignment with neoliberal models of the university, and surfacing underlying conflicts between digital technologies and foundational ideas about the purpose of education (Selwyn & Facer 2013; Hoofd 2017). In a general sense, A.I. is grounded in disciplinary knowledge from computer science and psychology, and this combination tends towards methods that seek generalisations and models from data, and a world view that assumes a ‘normal’ human condition. Both of these tendencies may seem at odds with an inclusive education that attempts to foreground and embrace diversity, and to question the devaluing of difference in highly standardised and performative education systems (Wang 2016). In addition, the supposed ‘technology revolution’ in education is habitually framed as being driven from the ‘outside’ (DeMillo 2015), and this view has tended to overlook established educational expertise. Of course, these two positions are not completely contradictory, and A.I. systems may be put to work for one kind of ‘inclusion’ agenda while at the same time undermining other perspectives on equity or diversity. Reflecting on the global trends of inclusive education, Singal and Muthukrishna (2014) remind us to ask: ‘inclusion into what and for what purposes’ (p. 300).

At the same time, A.I. offers exciting possibilities for education, including intelligent systems that are able to ‘personalise’ learning or adapt to specific contexts. Emerging fields that draw on machine learning techniques, such as learning analytics, offer tangible opportunities to develop A.I. that can support teachers and students in embracing diverse classrooms and varied ways of teaching and learning. As many education institutions appear to be seeking to attract more international students, widen access, increase the diversity of their populations, and develop ways of scaling their provision, such A.I. systems may become essential parts of educational infrastructure, and key means of achieving future visions for the education sector.

Contributions to this book may include the following:

PHILOSOPHY AND THEORY OF A.I. AND INCLUSIVE EDUCATION – theoretical work that makes connections between philosophies of education and technology. For example: technological and social determinism and their relationship to educational A.I., focusing on inclusion; links between humanism and A.I. and implications for inclusive education; theories of learning (such as ‘social constructivism’ or ‘behaviourism’) and their relationships to A.I. and inclusive education. These might also include historical analyses of the development of education and A.I. technology, or critical accounts of the relationships between the A.I. industry and market-driven models of education. Work on the politics of education and A.I., and social justice critiques of these, would be relevant here too, as would considerations of divides and equality, and global North/South differences. Ethical applications of A.I. and/or usage of data in the inclusive education context (for example, vulnerable learners) would be appropriate here too.

DEVELOPING A.I. TECHNOLOGIES – empirical contributions that examine and analyse specific A.I. technologies employed in education, and their contribution (or lack of contribution) to inclusive education ideals and practices. For example, accounts of hardware (such as assistive technologies, or robotics) or software (such as digital assistants, personalised or adaptive systems, etc.) designed for classroom use. Analysis here could focus on technical design or interface. This can also include more speculative work, focused on future visions for technology. Scope should also be given here for work that considers A.I. technologies not necessarily designed for inclusion, but nevertheless used for inclusive education agendas.

STUDENT AND TEACHER PERSPECTIVES – engagement with educational practice: teachers and students working with or experiencing inclusive education issues and challenges, or indeed A.I. support or interventions. This might include interviews with teachers or students, or ethnographic accounts of inclusive classrooms that use A.I. technologies. Given the emerging and often speculative nature of A.I., particularly A.I. specifically designed for inclusive education, this theme can include speculative accounts that envision educational futures from the perspective of teachers and students.

SPECULATIVE FUTURES AND CREATIVE WRITING ON A.I. EDUCATION – In the spirit of speculative methods in education (Ross 2017) and ‘design fictions’, we also welcome creative writing pieces that address A.I. and inclusive education. These contributions will be an important part of any book that seeks to speculate and envision possible educational futures, and in so doing, reveal and examine the concerns, challenges, and beliefs that underpin our contemporary relationships with technology. How we foresee and anticipate the future can also have a profound influence on the creative practices and technical developments that eventually build it. Read more on Call: “Speculative Futures for Artificial Intelligence and Educational Inclusion” (Book Chapters)…

Posted in Calls

Study: 360-degree VR ads outperform 2D ads

[A new industry study suggests that presence-evoking media are effective venues for advertising. This story is from Retail Dive; see also the short OmniVirt report for more information. –Matthew]

360-degree VR ads have 300% higher click-through rates

Dan Alaimo
March 13, 2018

Dive Brief:

  • A recent study from OmniVirt revealed that 360-degree virtual reality ads perform appreciably better than regular advertising across metrics such as click-through rates, viewability, and video completion. The data was derived from over 700 million ads served.
  • In click-through rates, OmniVirt found that 360-degree VR photos performed 300% better than regular two-dimensional photos; they also performed 14% better than regular photos in viewability.
  • In video completion rates, 360-degree videos performed 46% better than regular videos, with an 85% completion rate for 360-degree video versus 58% for two-dimensional video.

Dive Insight: Read more on Study: 360-degree VR ads outperform 2D ads…

Posted in Presence in the News

Call: User Modeling for Personalized Interaction with Music – Special issue of UMUAI

[Presence isn’t mentioned here but could be usefully applied in this topic area.  –Matthew]

Call for Papers

User Modeling for Personalized Interaction with Music
Special Issue of User Modeling and User-Adapted Interaction: The Journal of Personalization Research (UMUAI)
http://www.cp.jku.at/journals/umuai_si_music.html
Flyer (PDF)

Submission deadline for extended abstracts: June 1, 2018
Submission deadline for full papers (for accepted abstracts): July 31, 2018

SCOPE OF THE SPECIAL ISSUE

Music search, retrieval, and recommendation systems have experienced a boom during the past few years, as streaming services provide access to huge catalogs anywhere and anytime. These streaming services record user behavior in the form of actions on music items, such as plays, skips, and playlist creation and modification. As a result, an abundance of user and usage data has been collected and is available to companies and academics, allowing for user profiling and the creation of personalized music search and recommendation systems. The importance and timeliness of research on such personalized music systems is evidenced by publications in venues including RecSys, UMAP, ISMIR, CHI, and IUI, as well as in journals including IEEE Transactions on Affective Computing and ACM Transactions on Intelligent Systems and Technology.

On the other hand, plenty of challenges remain unsolved. In particular, scholars have identified user understanding and modeling, personalization of recommendation and retrieval systems, user adaptivity in interfaces, and context-awareness as some of the most vital.

With this Special Issue, we intend to establish a high-impact forum for the latest research on user modeling and personalization for finding, making, and interacting with music. We invite researchers from both academia and industry to submit their original, high-quality research on these topics.

TOPICS

We solicit manuscripts covering all aspects of user modeling for adaptive and personalized music algorithms and applications, in particular for music search and recommendation. Topics include, but are not limited to, the following general categories: models of music consumption, algorithms for personalization and recommendation, interfaces and interaction, and evaluation.

Example topics belonging to one or more of these categories include the following: Read more on Call: User Modeling for Personalized Interaction with Music – Special issue of UMUAI…

Posted in Calls

Synthetic humans: New tech further closes gap between real and CGI

[Epic Games is leading a group of companies in making significant strides in “crossing the uncanny valley” to create more convincing, less creepy presence illusions of people. The short story below from Slashgear summarizes one of the developments demonstrated at the recent Game Developers Conference; more details follow in a story from VentureBeat. The original versions of both contain more images and videos, and more information is available in coverage by The Verge and the Unreal Engine blog. Note especially this prediction in the VentureBeat story: “By 2024, we may all be interacting with digital humans in some way or other, whether it’s via headsets, films, TV, games, live performances and broadcasts, or by directing digital assistants in our homes in real-time.” –Matthew]

[Image: Source: Engadget]

Epic Games’ Siren demo is both amazing and super-creepy

Eric Abent
Mar 22, 2018

Epic Games is no stranger to incredibly detailed motion capture technology. 2017’s Hellblade: Senua’s Sacrifice was based on Unreal Engine’s motion capture capabilities, and it received a lot of praise for just how detailed the in-game character of Senua was. Now Epic is looking to take what it learned with Hellblade one step further, creating a new digital personality called Siren who is as impressive as she is unsettling.

For this demo, Epic partnered with four other companies: Vicon, Cubic Motion, 3Lateral, and Tencent. What sets Siren apart from similar digital personalities is that she relies on real-time motion capture. Using technologies from all five companies, Unreal Engine is able to render Siren in real time based on the actions of an actress, with whom she appears to move in tandem.

That’s impressive enough on its own, but the level of detail in Siren’s model takes this to a completely different level. In an interview with VentureBeat, Epic Games chief technology officer Kim Libreri discussed how Siren’s model has improved over Senua from Hellblade, noting that facial detail has drastically improved. For instance, Libreri says that Siren actually has 300,000 tiny hairs all over her face – something that graphics technology couldn’t support back when Hellblade was being created in 2016, but can now.

Add to that improvements to shading, reflection, and refraction, and we have Siren, taking us one step closer to CGI that’s indistinguishable from the real thing. There are still some spots that need work, Libreri admits, and for as advanced as something like Siren is, it’s still somewhat easy to tell that she’s a CGI character. Even then, this represents a pretty big jump in quality over what we saw in Hellblade, and it’s going to keep getting better from here.

While we may not see motion capture on Siren’s level make it into video games any time in the immediate future, this has pretty big implications for the industry as a whole. Outside of video games, the ability to create a digital personality in real time can be applied to a lot of different industries, especially when you consider the rising popularity of live streaming, so Epic and its partners are definitely onto something special here.

—–

[From VentureBeat]

Epic Games shows off amazing real-time digital human with Siren demo Read more on Synthetic humans: New tech further closes gap between real and CGI…

Posted in Presence in the News

Call: ACM UMAP Workshop on Intelligent User-adapted and Context-aware Applications and User Interfaces (IUadapt 2018)

Call for Papers

ACM UMAP Workshop on Intelligent User-adapted and Context-aware Applications and User Interfaces (IUadapt 2018)
in conjunction with the 26th ACM Conference on User Modelling, Adaptation and Personalization – UMAP 2018 (http://www.um.org/umap2018/)
July 8-11, 2018, Singapore

Workshop website: http://teldh.dibris.unige.it/iuadapt/

IMPORTANT DATES

Paper Submission: 20 April 2018 (23:59 AoE – Anywhere on Earth)
Notification to authors: 15 May 2018
Camera-ready version due: 27 May 2018

DESCRIPTION

The evolution of social semantic environments and content personalization, along with the adoption of context-aware techniques, the spread of handheld devices and always-connected IoT devices, and the growth of cloud computing, is deeply changing the way people use the Web and digital devices for a variety of purposes. This change is also pushing application providers to offer a new wave of services, and it calls for intelligent user interfaces to support users in their interactions. Users want to freely access content as they choose, anytime and anywhere, according to their personal goals, preferences or special needs. Hence, they require:

  • ubiquitous applications that can be provided as a service and used on any mobile device, with no need for installation or software updates;
  • proactive user interfaces that track their progress (e.g., fitness, learning, etc.), adapt to their needs, prompt them without being disruptive or inconvenient, and provide analytics and predictive instruments;
  • smart and personalized information services to explore, search, interact, share, practice, and discover qualitative content that fits their diverse and changing needs.

All of these applications require a deep understanding of content, users, devices and the situations in which interaction happens; semantics plays a significant role toward these goals. Smart applications aim to address the needs and requirements of 21st-century systems, including context-awareness, ubiquity, mobility, adaptation and personalization, along with other new technologies and methods related to natural user interfaces, augmented reality, and serious games for education and training.

The objective of the Intelligent User-adapted and Context-aware Applications and User Interfaces workshop is to bring experts and practitioners in user modeling, adaptive systems, recommender systems and artificial intelligence together with domain experts and researchers in intelligent user interfaces and ubiquitous computing, in order to shape the next generation of ubiquitous, smart and adaptive application services.

TOPICS

We invite submissions on topics that include, but are not limited to, the following: Read more on Call: ACM UMAP Workshop on Intelligent User-adapted and Context-aware Applications and User Interfaces (IUadapt 2018)…

Posted in Calls

Artist’s “Parallax View” app creates 3-D illusions using iPhone X’s Face ID

[Artist Peder Norrby has created a free app that produces impressive optical illusions, as briefly described in this story from DesignTAXI. For a detailed explanation of the illusions, see the artist’s blog. For more interesting examples of presence, see the (free, public) ISPR Presence Community Facebook group. –Matthew]
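For readers curious about how such illusions are typically built: the effect is head-coupled perspective, in which the phone’s TrueDepth camera tracks the viewer’s head and the app shifts its virtual camera to match, so on-screen objects at different depths move by different amounts and appear to sit behind the glass. The Swift sketch below is only an illustrative assumption about that general approach, using Apple’s ARKit face tracking and SceneKit; it is not Norrby’s code, and the view controller and scene contents are hypothetical placeholders.

import UIKit
import ARKit
import SceneKit

// Hypothetical sketch of head-coupled parallax on a TrueDepth-equipped iPhone.
// ARKit face tracking estimates the viewer's head position; a SceneKit camera is
// offset in the opposite direction so the scene appears to sit behind the screen.
// (Requires NSCameraUsageDescription in Info.plist.)
final class HeadCoupledParallaxViewController: UIViewController, ARSessionDelegate {
    private let scnView = SCNView()
    private let session = ARSession()
    private let cameraNode = SCNNode()

    override func viewDidLoad() {
        super.viewDidLoad()
        scnView.frame = view.bounds
        scnView.autoenablesDefaultLighting = true
        view.addSubview(scnView)

        // Placeholder scene: a box "inside" the phone to make the parallax visible.
        let scene = SCNScene()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 0.4)   // viewpoint ~40 cm from the content
        scene.rootNode.addChildNode(cameraNode)
        let box = SCNNode(geometry: SCNBox(width: 0.08, height: 0.08, length: 0.08, chamferRadius: 0))
        box.position = SCNVector3(0, 0, -0.15)        // pushed "behind" the screen plane
        scene.rootNode.addChildNode(box)
        scnView.scene = scene
        scnView.pointOfView = cameraNode

        // Face tracking needs a TrueDepth camera (iPhone X and later).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Called by ARKit whenever the tracked face anchor is updated.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first,
              let camera = session.currentFrame?.camera else { return }
        // Express the head position in the device camera's coordinate space (metres).
        let headInCamera = simd_mul(camera.transform.inverse, face.transform).columns.3
        // Shift the virtual camera opposite to the head offset; nearer content then
        // slides more than farther content, which is the parallax cue behind the illusion.
        // (A production version would also use an asymmetric, off-axis projection matrix.)
        DispatchQueue.main.async {
            self.cameraNode.position = SCNVector3(-headInCamera.x, -headInCamera.y, 0.4)
        }
    }
}

Again, this is only a minimal sketch of the general technique; for the details of how Parallax View actually achieves its illusions, see the artist’s blog linked above.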

Read more on Artist’s “Parallax View” app creates 3-D illusions using iPhone X’s Face ID…

Posted in Presence in the News