“At-home holodeck”: Apple technology could create AR, VR content mapped to user’s physical setting

[A new Apple Insider story describes how the company is working on next-level presence-evoking technology that would adapt a multi-sensory “synthesized reality” experience to the contours of a user’s physical location. A December 2020 Apple Insider story described a related system that further blurs the line between actual and virtual surroundings: it could use information from standard 2D video, along with a variety of other sources including audio-only media, to extract and (re)create the content and form of a 3D synthesized reality experience. Here’s an excerpt from that story about Apple’s research on a “Method and device for generating a synthesized reality reconstruction of flat video content” (a short code sketch of the described pipeline follows the excerpt):

The patent kicks off by defining synthesized reality, or SR, as digitally generated objects and environments. SR is essentially a broad category that Apple says includes virtual reality, augmented reality, and mixed reality.

[snip]

As Apple notes, most SR content is ‘painstakingly created’ ahead of time and added to a library a user can choose from. However, the patent describes a method for generating on-demand SR reconstructions of video content by ‘leveraging digital assets’ to ‘seamlessly and quickly’ port flat content into an SR experience.

The system works by identifying a ‘plot-effectuator’ in a scene, synthesizing a scene description, and generating a reconstruction of the scene by ‘driving a first digital asset associated with the first plot-effectuator according to the scene description for the scene.’

As Apple points out, digital assets could correspond to video game models for ‘plot-effectuators,’ which it defines as characters or actors in the video content.

While the system could generate an SR reconstruction based on the video content alone, it could also draw information from associated text content and external data. That could include photos of the actors in the video clips, the actors’ heights and other measurements, various views of a scene, and other data.

‘In some implementations, the digital assets are generated on-the-fly based at least in part on the video content and external data associated with the video content. In some implementations, the external data associated with the video content corresponds to pictures of actors, height and other measurements of actors, various views (e.g., plan, side, perspective, etc. views) of sets and objects, and/or the like,’ the patent reads.

Interestingly, Apple also notes that a similar SR generation system could create interactive scenes based on audio and associated content — including audiobooks, radio dramas, and other audio-only content.

Several portions of the patent also detail the distinction between action objects — or objects a character interacts with — and inactionable ones. That distinction could allow the SR system to know ahead of time what type of objects in a scene need to be interactive.”
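The December story’s description amounts to a three-step pipeline: identify the ‘plot-effectuators’ in a scene, synthesize a scene description, and drive digital assets according to that description, with scene objects tagged as actionable or not. Here is a minimal Swift sketch of that flow; every type and function name is invented for illustration, and nothing here comes from Apple’s patent or code.

```swift
import Foundation

// Illustrative sketch only; all names are invented, not Apple’s.

/// A character or actor detected in flat video content (a “plot-effectuator”).
struct PlotEffectuator {
    let name: String
    /// External data the patent mentions: photos, measurements, and so on.
    let referenceImages: [URL]
    let heightMeters: Double?
}

/// A scene object, carrying the patent’s action/inaction distinction.
struct SceneObject {
    let label: String
    let isActionable: Bool  // true if a character interacts with it
}

/// A synthesized description of one scene from the source video.
struct SceneDescription {
    let effectuators: [PlotEffectuator]
    let objects: [SceneObject]
    let summary: String
}

/// A digital asset (e.g. a video-game-style model) for one plot-effectuator.
struct DigitalAsset {
    let effectuator: PlotEffectuator

    /// “Driving” the asset: a placeholder that just reports what a real
    /// renderer would animate according to the scene description.
    func drive(with description: SceneDescription) {
        print("Animating \(effectuator.name) through: \(description.summary)")
    }
}

/// End-to-end flow the excerpt outlines: synthesize a scene description,
/// then drive one digital asset per detected plot-effectuator.
func reconstructScene(effectuators: [PlotEffectuator],
                      objects: [SceneObject],
                      summary: String) {
    let description = SceneDescription(effectuators: effectuators,
                                       objects: objects,
                                       summary: summary)
    for effectuator in effectuators {
        DigitalAsset(effectuator: effectuator).drive(with: description)
    }
}

let hero = PlotEffectuator(name: "Hero", referenceImages: [], heightMeters: 1.8)
let door = SceneObject(label: "door", isActionable: true)   // gets interacted with
let wall = SceneObject(label: "wall", isActionable: false)  // scenery only
reconstructScene(effectuators: [hero], objects: [door, wall],
                 summary: "Hero crosses the room and opens the door")
```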

–Matthew]

Apple technology could create AR, VR content based on LiDAR mesh maps

By Mike Peterson
February 25, 2021

Apple is continuing to refine its virtual and augmented reality technology, including work on a system that can map an AR or VR experience to a physical setting.

The company is thought to be working on multiple devices that rely on synthesized reality (SR), a broad category that includes AR, VR, and mixed reality (MR). Apple is rumored to be developing an AR “Apple Glass” device as well as a separate MR visor.

Both of those devices could use new methods and technology detailed in a patent application published Thursday and titled “Method and Device for Tailoring a Synthesized Reality Experience to a Physical Setting.”

The patent application lays out a method that could obtain local environment data, including the spatial information of a “volumetric region around a user.” It would then create a mesh map from that data, likely using LiDAR technology. From there, Apple’s system could use the mesh map to composite or tailor an SR experience, as the sketch below illustrates.
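To make the mesh-map step concrete, here is a toy Swift sketch that back-projects a small grid of LiDAR-style depth samples into 3D points with a pinhole camera model and stitches them into triangles. The grid size, focal length, and all names are assumptions for illustration; the patent does not specify how the mesh is built.

```swift
// Toy example, not Apple’s method: convert a row-major grid of depth
// samples (meters) into mesh vertices and triangle indices that a
// renderer could use to composite SR content onto real surfaces.

struct MeshMap {
    var vertices: [SIMD3<Float>] = []
    var triangles: [(Int, Int, Int)] = []
}

/// Back-projects each pixel (c, r) at depth z into camera space with a
/// simple pinhole model, then connects grid neighbors into triangles.
/// `focalPixels` is an assumed focal length, not a real device value.
func meshFromDepth(_ depth: [[Float]], focalPixels: Float = 500) -> MeshMap {
    var mesh = MeshMap()
    let rows = depth.count
    let cols = depth[0].count
    for r in 0..<rows {
        for c in 0..<cols {
            let z = depth[r][c]
            let x = (Float(c) - Float(cols) / 2) * z / focalPixels
            let y = (Float(r) - Float(rows) / 2) * z / focalPixels
            mesh.vertices.append(SIMD3<Float>(x, y, z))
        }
    }
    // Two triangles per grid cell, wound consistently.
    for r in 0..<(rows - 1) {
        for c in 0..<(cols - 1) {
            let i = r * cols + c
            mesh.triangles.append((i, i + 1, i + cols))
            mesh.triangles.append((i + 1, i + cols + 1, i + cols))
        }
    }
    return mesh
}

// A flat 3x3 patch of wall two meters from the camera.
let wallDepth: [[Float]] = Array(repeating: [2, 2, 2], count: 3)
let meshMap = meshFromDepth(wallDepth)
print("\(meshMap.vertices.count) vertices, \(meshMap.triangles.count) triangles")
// Prints: 9 vertices, 8 triangles
```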

“For example, while a user is watching a movie in his/her living room on a television (TV), the user may wish to experience a more immersive version of the movie where portions of the user’s living room may become part of the movie scenery,” the patent reads, giving one example of a use case.

Other possibilities in the movie example include projecting portions of the movie onto walls or floors, or providing additional content like maps, graphs, or educational information. The SR system could also “skin” at least some of a living room to reflect content in the movie. Apple gives the example of an “at-home holodeck.”

It doesn’t stop there. The patent also includes uses tailored to AR or MR experiences, such as presenting synthesized information to a user wearing a pair of glasses. Apple also notes that the system isn’t restricted to head-mounted devices; tablets or smartphones could provide SR experiences too.

Apple also details how the experience could change as a user moves: using a location tracking system, it could update the synthesized content as someone moves around a room or environment.
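As a rough illustration of that update loop (with invented names, not Apple’s API), tracking could feed pose samples into the experience, which re-tailors its content around each new position:

```swift
// Invented names for illustration; not an Apple API.

struct UserPose {
    var position: SIMD3<Float>
    var yawRadians: Float
}

final class SRExperience {
    private(set) var contentCenter: SIMD3<Float> = .zero

    /// Re-tailor the synthesized content whenever tracking reports movement.
    /// A real system would re-query the mesh map around the new position;
    /// this placeholder just recenters the active content region.
    func update(for pose: UserPose) {
        contentCenter = pose.position
        print("Re-centering SR content at \(pose.position)")
    }
}

let experience = SRExperience()
// Simulated tracking samples as the user walks across the room.
for step in 0..<3 {
    let pose = UserPose(position: SIMD3<Float>(Float(step) * 0.5, 0, 0),
                        yawRadians: 0)
    experience.update(for: pose)
}
```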

Display types that could use the SR generation system include holographic, digital light processing, LCD, and OLED displays. The patent notes that users may also be able to interact with the environment or change it with voice or gesture commands. The system could also provide different feedback mechanisms (audio, haptic, skin shear, temperature) as a user interacts with their environment.
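Here is a sketch of how such feedback might be dispatched, again with invented names: the patent lists the channels (audio, haptic, skin shear, temperature) but not how they are selected, so the mapping below is purely hypothetical.

```swift
// Hypothetical mapping from interactions to the feedback channels the
// patent lists; the channel choices here are invented.

enum FeedbackChannel {
    case audio(cue: String)
    case haptic(intensity: Double)
    case skinShear(direction: String)
    case temperature(celsiusDelta: Double)
}

enum Interaction {
    case touchSurface
    case grabObject
    case approachHeatSource
}

/// Select feedback for an interaction; a shipping system would drive real
/// actuators, this just picks channels.
func feedback(for interaction: Interaction) -> [FeedbackChannel] {
    switch interaction {
    case .touchSurface:
        return [.haptic(intensity: 0.3), .audio(cue: "tap")]
    case .grabObject:
        return [.haptic(intensity: 0.8), .skinShear(direction: "downward")]
    case .approachHeatSource:
        return [.temperature(celsiusDelta: 2.0)]
    }
}

print(feedback(for: .grabObject))
```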

The patent lists Ian M. Richter, Maxime Meilland, and Patrick W. O’Keefe as its inventors. Of them, Richter has been named in past SR-related patents.

Apple files numerous patents on a weekly basis, so there’s no guarantee that the technology described in them will make it to market, nor any clear sense of a debut timeline. Given the state of Apple AR and Apple VR technology, however, there’s a good chance this specific system could end up in a consumer device sooner rather than later.

The company is also expanding the use of its LiDAR technology, which can create depth maps of an environment. It’s currently in use in the iPhone 12 Pro and iPad Pro lineups, and rumors suggest Apple could bring it to all iPhone models going forward.
