SONICOM: Developing a practical way to create personalized immersive audio for VR/AR

[Imperial College London researchers are part of a consortium developing new “technologies and techniques” to create personalized audio experiences that are immersive and mimic our nonmediated experiences. This story from the University is a first-person report about one part of the project; see the original version for two more pictures and a 20-second audio demonstration. A February 2021 story from the University includes this:

“Imperial College London researchers have won a €5.7 million EU Horizon 2020 grant to develop AI-informed immersive audio techniques. The project, called SONICOM, will see researchers develop immersive 3D sound for virtual and augmented reality situations like online meetings, lectures, and gaming. Although similar technologies are already used in spaces like cinemas, virtual spaces like video chat, gaming, and doctor appointments lack them. During the COVID-19 pandemic, when many are working from home, emulating real-life scenarios more accurately could help rebuild the conversational nuances and social cues that can be lost during online communication. …

They will also alter spatial sound rendering depending on its context. Doctor and therapy appointments might sound ‘closer up’ for greater intimacy, whereas lectures and talks could sound farther away to emulate the real-world lecture experience. Once the personalised immersive audio techniques have been developed, the researchers will explore, map and model how their use influences listeners’ behaviour and physiology during social interactions.”

–Matthew]

3D scanning my head to further virtual reality research

I got my head 3D scanned as part of the Imperial-led SONICOM project – here’s my experience and how it contributes to virtual reality research.

By Harry Jenkins
15 March 2022

At the top of an ordinary metal staircase around the back of Imperial’s Dyson School of Design Engineering is a room unlike any other I’ve ever been in. Sitting in what feels like a sound-proofed chapel is a contraption that wouldn’t look out of place in a Star Trek episode.

This is the Turret Lab, a room kitted out especially for the SONICOM project as part of its research to develop immersive audio technologies for virtual and augmented reality (VR/AR). The project is coordinated by Imperial’s Dr Lorenzo Picinali, whose Audio Experience Design team is looking to use artificial intelligence (AI) to develop tools that personalise the way sound is delivered in VR/AR so that it is as similar to real-world sound as possible.

The SONICOM project is managed by Imperial’s Research Project Management team, and as their new Science Communication Officer, I decided to volunteer as part of the study. What better way to get to know a project than to have a 3D rendering of your head created as part of it?

Measuring how my ears receive sound

I was greeted by SONICOM PhD student Rapolas Daugintis and postdoc Dr Aidan Hogg who guided me through what was going to happen. They invited me to sit on the chair in the centre of the apparatus that was apparently not an interdimensional gateway, but a device that would measure something called my ‘head-related transfer function’ (HRTF).

Basically, an HRTF is a mathematical way of describing the unique way someone’s ears receive sound. Because everyone’s heads, ears and shoulders are different sizes and shapes, the way that sound waves actually enter your ears is different from person to person.

Recording someone’s HRTF and using it when delivering sound through headphones is the key to making ‘binaural’ audio. This is the kind of audio we experience in real life, where we can locate where a sound is coming from because our two ears, one on each side of our head, receive slightly different versions of it.

Using an HRTF means the headphones can reproduce ‘3D sounds’ that can appear like they’re coming from a specific direction, such as off in the distance above you or as if someone is whispering in your ear right behind you.
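If you’re curious what ‘using an HRTF’ actually means in practice, the idea is roughly this: the HRTF is typically stored as a pair of impulse responses (one per ear) for each measured direction, and a mono sound is convolved with that pair to produce the left- and right-ear headphone signals. The sketch below is only an illustration with made-up placeholder data, not the SONICOM pipeline; real impulse responses would come from a measurement like the one described in this post.

```python
# Minimal sketch of binaural rendering with an HRTF.
# The HRTF is usually stored as a pair of head-related impulse responses
# (HRIRs), one per ear, for each measured direction; convolving a mono
# signal with that pair 'places' the sound at that direction over headphones.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray) -> np.ndarray:
    """Return a stereo (N, 2) signal with the mono source placed at the
    direction the two HRIRs were measured for."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Example: one second of noise rendered through placeholder HRIRs.
fs = 48000
source = np.random.randn(fs)            # mono test signal
hrir_l = np.random.randn(256) * 0.01    # placeholder HRIRs; real ones come
hrir_r = np.random.randn(256) * 0.01    # from a measurement like SONICOM's
stereo = render_binaural(source, hrir_l, hrir_r)
```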

Apparently the process of measuring this complex mathematical function involves me putting a stocking on my head, sticking some microphones in my ears, and keeping very still whilst sitting on a chair that slowly rotates as sounds are played from loudspeakers all around me.

This might not sound like everyone’s idea of a good time, but the novelty of this bizarre series of events meant it was actually quite enjoyable. And at the end of it, the team had measured my own personal HRTF.

Matching the mathematical with the physical

Measuring individual HRTFs is all well and good, and is the best way to deliver immersive binaural audio, but realistically, it’s not feasible for everyone to have their very own HRTF measured. This is where a 3D scan of my head comes in.

SONICOM wants to connect the mathematical nature of HRTFs with the physical shapes of ears. This means, for example, that you could record your ear shape and AI could then use it to pick, from a database of measured HRTFs, the one it believes is closest to your own and use that to deliver realistic sound.

This doesn’t mean everyone would need their personal 3D scanner either (although I now really want one). The team took a scan of my head with a handheld high-quality 3D scanner, but also took multiple pictures of my head from different angles with just a phone camera.

By linking all these different pieces of data together – HRTFs, 3D scans, and static photos – one day, someone could sign up to a virtual concert, snap a few photos of their ears, and AI could then make sure their experience sounds as similar to an in-person venue as possible.
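The post doesn’t describe the actual matching method SONICOM will use, but as a purely illustrative sketch, here is the simplest version of the idea: describe each ear with a few geometric measurements and pick the measured HRTF in the database whose measurements are closest. All the names and numbers below are hypothetical.

```python
# Illustrative sketch only: this is NOT SONICOM's method, just the simplest
# possible baseline. Each ear is described by a few geometric measurements,
# and the database entry with the nearest measurements is chosen.
import numpy as np

# Hypothetical database: each row is [ear height, ear width, concha depth] in mm,
# one row per person whose HRTF has been measured.
database_features = np.array([
    [63.0, 34.0, 14.5],
    [58.5, 31.0, 12.0],
    [66.0, 36.5, 15.0],
])
database_hrtf_ids = ["subject_001", "subject_002", "subject_003"]

def pick_closest_hrtf(user_features: np.ndarray) -> str:
    """Return the ID of the measured HRTF whose ear features are nearest
    (in Euclidean distance) to the user's photo-derived features."""
    distances = np.linalg.norm(database_features - user_features, axis=1)
    return database_hrtf_ids[int(np.argmin(distances))]

print(pick_closest_hrtf(np.array([64.0, 35.0, 14.0])))  # -> "subject_001"
```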

The work that the members of SONICOM are doing is incredibly exciting, and if the past few years have proven anything, it’s that virtual experiences are here to stay – so let’s make them as realistic as possible.


Want to have your own HRTF recorded and head 3D scanned? The SONICOM team are always looking for new volunteers to add to their database! If interested, get in touch with Dr Picinali.
