
An IDEA is Born: CableLabs Heads Up New Alliance That Will Bring Holodecks Into Your Living Room

Apr 11, 2019

CableLabs has joined forces with top players in cutting-edge media technology—Charter Communications, Light Field Lab, OTOY and Visby—to form the Immersive Digital Experiences Alliance (IDEA). Chaired by CableLabs’ Principal Architect and Futurist, Arianne Hinds, the alliance aims to facilitate the development of an end-to-end ecosystem for immersive media, including VR, AR, stereoscopic 3D and the much-talked-about light field holodeck, by creating a suite of display-agnostic, royalty-free specifications. Although the work is already well underway, the official IDEA launch event was on April 8 at the 2019 NAB Show. Learn more about it here.

IDEA’s Challenges: What problems do we want to solve?

Advancements in immersive media offer endless opportunities not only in gaming and entertainment but also in telemedicine, education, business and personal communication and many other areas that we haven’t even begun to explore. It’s an exciting technological frontier that always gets a lot of buzz at tech expos and industry conferences. The question now is not if, but when it will become reality, and what steps will get us there.

Despite numerous innovation leaps in VR and AR in recent years, the immersive media industry as a whole is still in its very early stages. Light field technology, the richest and densest form of immersive media, which allows the user to view and interact with a three-dimensional object in volumetric space, is particularly limited by the shortcomings of existing video interchange standards.

  • Problem #1: Too much data

A photorealistic, volumetric video requires substantially more data than the traditional 2D media we’re used to today. To deliver a truly seamless and lifelike immersive experience, we need a different approach to interoperable media formats and network delivery.
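
As a rough illustration of the scale involved, here is a back-of-envelope calculation. The resolutions, frame rates and view counts are purely illustrative assumptions on our part, not figures from IDEA or any specification:

```python
def raw_bitrate_gbps(width, height, fps, bits_per_pixel, views=1):
    """Uncompressed bitrate in Gbps for a given frame size, frame rate and number of views."""
    return width * height * fps * bits_per_pixel * views / 1e9

# One uncompressed 4K/60 view at 24 bits per pixel:
single_view = raw_bitrate_gbps(3840, 2160, 60, 24)            # ~11.9 Gbps

# A hypothetical 8x8 grid of views approximating a light field:
view_grid = raw_bitrate_gbps(3840, 2160, 60, 24, views=64)    # ~764 Gbps

print(f"single 2D view: {single_view:.1f} Gbps raw")
print(f"8x8 view grid:  {view_grid:.1f} Gbps raw")
```

Even with aggressive compression, that view-count multiplier is the heart of the problem that both the media format and the network have to address.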

  • Problem #2: Inadequate Network Ecosystem

There’s currently no common media format for storage, distribution and display of immersive images. We’ll need to build a media-aware network that’s fully optimized for the new generation of immersive entertainment.

IDEA’s Goals: How will we address these problems?

IDEA is already working on the first version of the Immersive Technologies Media Format (ITMF), a display-agnostic set of specifications for representation of immersive media. ITMF is based on OTOY’s well-established ORBX Scene Graph format currently used in 3D animation.
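
To make the scene-graph idea concrete, here is a minimal sketch of what a node-based scene description might look like. It is purely illustrative; the names and fields are made up and do not reflect the actual ITMF or ORBX schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SceneNode:
    """One node in a toy scene graph: a transform, an optional media asset and child nodes."""
    name: str
    transform: List[float] = field(default_factory=lambda: [1, 0, 0, 0,
                                                            0, 1, 0, 0,
                                                            0, 0, 1, 0,
                                                            0, 0, 0, 1])  # 4x4 identity matrix
    media_ref: Optional[str] = None          # e.g. a mesh, a volumetric capture or a light field asset
    children: List["SceneNode"] = field(default_factory=list)

# A tiny scene: a room containing a captured volumetric actor and a synthetic lamp.
scene = SceneNode("room", children=[
    SceneNode("actor", media_ref="captures/actor_volume_0001"),
    SceneNode("lamp",  media_ref="assets/lamp.mesh"),
])
```

The point of a display-agnostic description like this is that the same scene can be rendered for a 2D screen, a VR headset or a light field panel; only the renderer changes.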

The initial draft of ITMF, scheduled for release by the end of 2019, will meet the following criteria:

  • It will be royalty-free and open source
  • It will be built on established technologies already embraced by content creators
  • It will be unconstrained by legacy raster-based 2D approaches
  • It will allow for continued improvements and advancements
  • It will address real-life requirements based on input from content creators, technology manufacturers and network operators

In addition to the development of the ITMF standard, IDEA will also:

  • Gather marketplace and technical requirements to define and support new specifications
  • Facilitate interoperability testing and demonstration of immersive technologies in order to gain industry feedback
  • Produce immersive media educational events and materials
  • Provide a forum for the exchange of information and news relevant to the immersive media ecosystem, open to international participation of all interested parties

IDEA’s New Chairperson: A Woman With a 3D Vision

IDEA’s newly elected chairperson, Dr. Arianne Hinds, joined CableLabs in 2012 as a Principal Architect of Video & Standards Strategy. A VR futurist, innovator and inventor, she has over 25 years of experience in the areas of image and video compression, including MPEG and JPEG. Dr. Hinds has won numerous industry awards, including the prestigious 2017 WICT Rocky Mountain Woman in Technology Award. She is the Chair of the U.S. delegation to MPEG and is currently serving as the Chairperson of the L3.1 Committee for United States MPEG Development Activity for the International Committee for Information Technology Standards. Her new responsibilities at IDEA are a natural extension of her life’s work, perfectly aligned with IDEA’s mission to bring the beautiful world of immersive media technology into the mainstream.


Why CableLabs?

The 10G platform positions cable operators as the first commercial network service providers to support truly immersive services beyond the limits of legacy 2D video. With its ability to deliver up to 10 Gbps while simultaneously supporting the low latency needed for interactive applications, 10G will be crucial to delivering immersive media at bitrates (e.g. 1.5 Gbps for light field panels) that allow the corresponding displays to operate at their fullest potential.
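
As a back-of-envelope check, the 1.5 Gbps per-panel figure above leaves room for several concurrent light field streams on a 10 Gbps link. The overhead allowance here is our own illustrative assumption:

```python
link_gbps = 10.0    # 10G platform capacity
panel_gbps = 1.5    # example light field panel bitrate cited above
overhead = 0.15     # assumed allowance for protocol overhead and other household traffic

usable_gbps = link_gbps * (1 - overhead)
concurrent_panels = int(usable_gbps // panel_gbps)
print(f"~{concurrent_panels} concurrent light field streams on a 10 Gbps link")  # ~5
```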

Become an IDEA member

No one company can build the future in isolation. IDEA welcomes anyone—technologists, creative visionaries, equipment manufacturers and network distribution operators—who shares its vision. If you’re interested in learning more about becoming a member, please visit the website at www.immersivealliance.org.

You can learn more about the CableLabs future vision by clicking below. 


Learn More About 10G


Towards the Holodeck Experience: Seeking Life-Like Interaction with Virtual Reality

Arianne Hinds
Principal Architect, Video & Standards Strategy Research and Development

Sep 5, 2017

By now, most of us are well aware of the market buzz around the topics of virtual and augmented reality. Many of us, at some point or another, have donned the bulky, head-mounted gear and tepidly stepped into the experience to check it out for ourselves. And, depending on how sophisticated your setup is (and how much it costs), your mileage will vary. Ironically, some research suggests that it’s the baby boomers who are more likely to be “blown away” by virtual reality, rather than the millennials, who are more likely to respond with an ambivalent “meh.” And this brings us to the ultimate question that is simmering on the minds of a whole lot of people: is virtual reality here to stay?

It’s a great question.

Certainly, the various incarnations of 3D viewing in the last half-century suggest that we are not happy with something. Our current viewing conditions are not good enough, or … something isn’t quite right with the way we consume video today.

What do you want to see?

Let’s face it, the way that we consume video today is not the way our eyes were built to record visual information, especially in the real world. Looking into the real world (which, by the way, is not what you are doing right now), your eyes capture much more information than the color and intensity of light reflected off the objects in the scene. In fact, the Human Visual System (HVS) is designed to pick up on many visual cues, and these cues are extremely difficult to replicate in both current-generation display technology and content.

Displays and content? Yes. Alas, it is a two-part problem. But let’s first get back to the issue of visual cues.

What your brain expects you to see

Consider this: for those of us with the gift of sight, the HVS provides roughly 90% of the information we absorb every day, and as a result, our brains are well-tuned to the various laws of physics and the corresponding patterns of light. Put more simply, we recognize when something just doesn’t look the way it should, or when there is a mismatch between what we see and what we feel or do. These mismatches in sensory signals are where our visual cues come into play.

Here are some of the most important cues:

  • Vergence distance is the distance that the brain perceives when the muscles of the eyes move to focus at a physical location, or focal plane. When that focal plane is at a fixed distance from our eyes, say the screen in your VR headset, the brain is literally not expecting you to detect large changes in distance. After all, your eye muscles are fixed on something that is physically attached to your face, i.e. the screen. But when the visual content is produced in a way that simulates the illusion of depth (especially large changes in depth), the brain recognizes a mismatch between the distance information it is getting from our eyes and the distance it is trained to perceive in the real world based on where our eyes are physically focused. The result? Motion sickness and/or a slew of other unpleasantries.
  • Motion parallax: As you, the viewer, physically move, say while walking through a room in a museum, objects that are physically closer to you should move more quickly across your field of view (FOV) than objects that are farther away. Likewise, objects that are positioned farther away should move more slowly across your FOV.
  • Horizontal and vertical parallax: Objects in the FOV should appear different when viewed from different angles, as your horizontal and vertical position changes.
  • Motion-to-photon latency: It is really unpleasant when you are wearing a VR headset and the visual content doesn’t change right away to accommodate the movements of your head. This lag is called “motion-to-photon” latency. To achieve a realistic experience, motion-to-photon latency must be less than 20 ms, and that means that service providers, e.g. cable operators, will need to design networks that can deterministically support extremely low latency (a rough budget sketch follows this list). After all, from the time that you move your head, a lot of things need to happen, including signaling the head motion, identifying the content consistent with the motion, fetching that content if it is not already available to the headset, and so on.
  • Support for occlusions, including the filling of “holes”: As you move through, or across, a visual scene, objects that are in front of or behind other objects should block each other, or begin to reappear, consistent with your movements.
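
To see why that 20 ms motion-to-photon target is so demanding for a network-delivered experience, here is a simple budget sketch. The individual stage times are illustrative assumptions, not measured values:

```python
# Illustrative motion-to-photon budget (all values in milliseconds, assumed for the sketch)
budget_ms = 20.0

stages = {
    "head-motion sensing":          1.0,
    "signal motion to the network": 2.0,
    "fetch/generate the new view":  8.0,   # the piece the network and servers must keep small
    "render":                       5.0,
    "display scan-out":             3.0,
}

total = sum(stages.values())
print(f"total: {total:.1f} ms of a {budget_ms:.0f} ms budget "
      f"({'within' if total <= budget_ms else 'over'} budget)")
```

Shave any one stage and you buy headroom for the others; miss the budget and the vergence and parallax cues above stop lining up with what the viewer’s body is doing.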

It’s no wonder…

Given all of these huge demands placed on the technology by our brains, it’s no wonder that current VR is not quite there yet. But, what will it take to get there? How far does the technology still have to go? Will there ever be a real holodeck? If “yes”, when? Will it be something that we experience in our lifetimes?

The holodeck made its first proper appearance in Star Trek: The Next Generation in 1987. It was a virtual reality environment that used holographic projections to make it possible to interact physically with the virtual world.

Fortunately, there are a lot of positive signs to indicate that we might just get to see a holodeck sometime soon. Of course, that is not a promise, but let’s say that there is evidence that content production, distribution and display are all making significant strides. How, you say?

Capturing and displaying light fields

Light fields are 3D volumes of light as opposed to the ordinary 2D planes of light that are commonly distributed from legacy cameras to legacy displays. When the HVS captures light in the natural world (i.e. not from a 2D display), it does so by capturing light from a 3D space, i.e. a volume of light being reflected from the objects in our field of view. That volume of light contains the necessary information to trigger the all-too-important visual cues for our brains, i.e. allowing us to experience the visual information in a way that is natural to our brains.
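
One standard way to formalize this “volume of light” comes from the imaging literature: the plenoptic function, of which a light field is essentially a sampled slice. Sketched in notation (this framing is ours, not language from IDEA or MPEG):

```latex
% Radiance arriving at a point (x, y, z) from direction (theta, phi) -- the 5D plenoptic function:
L = L(x,\, y,\, z,\, \theta,\, \phi)

% In free space (no occluders along a ray) this reduces to the 4D light field,
% often parameterized by where a ray crosses two parallel planes:
L = L(u,\, v,\, s,\, t)
```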

So, in a nutshell, not only does there need to be a way to capture that volume of light, but there also needs to be a way to distribute it over a network (e.g. a cable network), and there needs to be a display at the end of the network that is capable of reproducing the volume of light from the digital signal sent over it. A piece of cake, right?

Believe it or not

There is evidence of significant progress on all fronts. For example, at the F8 conference earlier this year, Facebook unveiled its light field cameras and corresponding workflow. Lytro is also a key player in the light field ecosystem with its production light field cameras.

On the display side, there are Light Field Lab and Ostendo, both with the mission of making in-home viewing on light field displays, i.e. displays that are capable of projecting a volume of light, a reality.

On the distribution front, both MPEG and JPEG have projects underway to make the compression and distribution of light field content possible. And, by the way, what is the digital format for that content? Check out this news from MPEG’s 119th meeting in Torino:

At its 119th meeting, MPEG issued Draft Requirements to develop a standard to define a scene representation media container suitable for interchange of content for authoring and rendering rich immersive experiences. Called Hybrid Natural/Synthetic Scene (HNSS) data container, the objective of the standard will be to define a scene graph data representation and the associated container for media that can be rendered to deliver photorealistic hybrid scenes, including scenes that obey the natural flows of light, energy propagation and physical kinematic operations. The container will support various types of media that can be rendered together, including volumetric media that is computer generated or captured from the real world.

This latest work is motivated by contributions submitted to MPEG by CableLabs, OTOY, and Light Field Lab.

Hmmmm … reading the proverbial tea-leaves, maybe we are not so far away from that holodeck experience after all.

--

Subscribe to our blog to read more about virtual reality and more CableLabs innovations.
