Gaussian Splatting: Painting Immersive Scenes With Reality

Austin Pahl
Architect, Immersive Media Experiences

Nov 3, 2023

The state of the art of immersive media is evolving so rapidly that it’s hard to keep up! It often feels as if you can choose a random limitation in the latest research, wait a few weeks and find a new paper that’s solved that problem. Here, we’ll look at an example of exactly that, raising the bar for faster, higher-quality immersive content.

If you didn’t catch our previous post, we looked at Neural Radiance Fields (NeRF) for their ability to “memorize” beautiful, photorealistic 3D snapshots of the real world. In the past three years, a lively community has sprung up around NeRF. Developers and artists have created content, built tools and pushed boundaries on all the ways NeRF can be used.

One of NeRF’s biggest remaining limitations is that real-time interactive viewing of NeRF-based content generally requires reducing image quality, which can cause fog-like visual artifacts and color inaccuracies in the scene.

As it turns out, the answer to this problem came in a paper at SIGGRAPH 2023: 3D Gaussian Splatting for Real-Time Radiance Field Rendering. Despite having little to no conceptual connection to the original NeRF methodology, Gaussian Splatting dramatically improved both visual fidelity and performance of real-time viewing. The results speak for themselves: In just the few months since the paper was released, we’ve seen dozens of product enhancements and launches incorporating Gaussian Splatting functionality.

The Bridge at Argenteuil, Claude Monet, 1874 (Collection of Mr. and Mrs. Paul Mellon)

How It Works: Computer-Generated Impressionism

If you’re a fan of Monet or Renoir, you’re likely familiar with Impressionism. This 19th-century art movement is known for large, distinct brushstrokes and an emphasis on larger forms, as you can see in the example above. Look too closely and you’ll mostly see brushstrokes; the full scene comes together when you gaze at it from far enough away.

As it turns out, Impressionism is a useful analogy for Gaussian Splatting. Creating a scene with Gaussian Splatting is like making an Impressionist painting, but in 3D. The scene is composed of millions of “splats,” also known as 3D Gaussians. Each splat is like a voluminous cloud painted onto an empty 3D space, and each splat can show different colors from various angles to mimic view-dependent effects like reflections. When you build a scene from lots of small splats, the result can be amazingly photorealistic!

Here’s an example. I recorded this cellphone video at the Duke Gardens in Durham, North Carolina:

Here’s the result as an interactive Gaussian Splatting scene via Luma AI. You can click and drag to move the scene around.

You can view a couple more examples from my visit to the gardens here and here.

From a technical perspective, 3D Gaussians are a variant of point clouds in which each point encodes spherical harmonics for view-dependent color and a covariance matrix describing its shape (an ellipsoid, i.e., a sphere stretched and rotated along arbitrary axes). Although splat-based rendering has existed for a long time, the Gaussian Splatting paper was the first to show that 3D Gaussians serve as an excellent scene representation, and it describes new methods to create and efficiently render these scenes. For more details, refer to the SIGGRAPH paper or the video overview provided on the authors’ website.
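To make that concrete, here’s a minimal sketch in Python of what one splat carries. The field names and sizes follow the layout commonly used by Gaussian Splatting exporters (degree-3 spherical harmonics, with a quaternion plus per-axis scales standing in for the covariance matrix); treat the exact layout as an illustrative assumption, not a formal spec.

```python
import numpy as np

# Minimal sketch of one 3D Gaussian "splat." Field sizes are assumptions
# based on the common export convention: degree-3 spherical harmonics
# give 16 coefficients per color channel.
splat_dtype = np.dtype([
    ("position", np.float32, 3),   # center of the Gaussian in world space
    ("scale",    np.float32, 3),   # per-axis extent of the ellipsoid
    ("rotation", np.float32, 4),   # quaternion (w, x, y, z) orienting it
    ("opacity",  np.float32),      # alpha used when blending splats
    ("sh",       np.float32, 48),  # 16 SH coefficients x 3 color channels
])

def covariance(scale: np.ndarray, quat: np.ndarray) -> np.ndarray:
    """Recover the 3x3 covariance Sigma = R S S^T R^T that gives the
    splat its shape: a sphere stretched by S and rotated by R."""
    w, x, y, z = quat / np.linalg.norm(quat)
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])
    S = np.diag(scale)
    return R @ S @ S.T @ R.T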

Gaussian Splatting scenes tend to be large compared to other scene formats, on the order of hundreds of megabytes to gigabytes. Each splat is 248 bytes, and a scene is typically composed of millions of splats. However, programmer Aras Pranckevičius has a great technical deep dive showing that Gaussian Splatting is ripe for compression, bringing sizes under a gigabyte with little to no visual impact, or smaller if you can accept “lossy” visuals.
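Some quick arithmetic shows why these files get big. A sketch, with illustrative splat counts rather than measurements from any particular capture:

```python
BYTES_PER_SPLAT = 248  # uncompressed per-splat footprint cited above

for label, n_splats in [("small object", 1_000_000),
                        ("garden scene", 5_000_000),
                        ("large scene", 20_000_000)]:
    size_mb = n_splats * BYTES_PER_SPLAT / 1e6
    print(f"{label}: {n_splats:,} splats -> {size_mb:,.0f} MB uncompressed")

# small object: 1,000,000 splats -> 248 MB uncompressed
# garden scene: 5,000,000 splats -> 1,240 MB uncompressed
# large scene: 20,000,000 splats -> 4,960 MB uncompressed
```

Even a modest scene lands in the hundreds of megabytes before compression, which is why the compression results above matter so much for delivery.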

Network Traffic of the Future

With all this said about Gaussian Splatting, where are we going next?

The dust hasn’t settled on immersive scene representations. A new research preprint already proposes combining the strengths of NeRF and Gaussian Splatting into a hybrid approach. Still, everything is moving so fast that the state of the art could change any day. When things do settle, the next step will be standardization.

If Gaussian Splatting is here to stay, we should expect file sizes to grow with the scale of the use cases at play. For example, a real estate agent selling a house may want to deliver an online virtual tour that allows viewers to experience granular details like the sparkle of a fine granite countertop while also walking through the rooms and seeing the house from the outside.

Going even bigger, consider a power transmission and distribution company that constructs a visual digital twin of its entire power grid across a city, then syncs it across cloud simulations and user interfaces. Whereas previously we discussed scenes on the order of millions of splats, eventually we’ll need billions and beyond.

CableLabs’ Immersive Media Experiences team engages with immersive standards activities and monitors the state of the art of immersive media to understand and communicate key trends and their impact on the cable industry. Subscribe to our blog for more updates from the Immersive Media Team and other activities at CableLabs.


How NeRF Technology Is Creating the Next Generation of Media

Austin Pahl
Architect, Immersive Media Experiences

Jun 8, 2023

The ways we create and consume visual media are constantly evolving, allowing us to experience places and things as if we’re physically present in those environments. Today, thanks to a fast-growing technology called a Neural Radiance Field ("NeRF"), anyone with a regular camera can make and share "3D photographs" of the real world. NeRFs have been around since the 2020 publication of Representing Scenes as Neural Radiance Fields for View Synthesis, but recent developments have made it easier than ever to start making immersive 3D media.

If you’ve ever viewed a 3D house tour or a 3D piece of furniture on an e-commerce website, you might be wondering: What makes NeRF unique? The answer is that NeRF introduces unprecedented photorealistic detail, including the ability to see reflections and transparencies like never before. You can see an example of NeRF in this capture created by our intern, Tyler McCormick:


NeRF makes high-quality 3D content creation fast and intuitive. CableLabs' Immersive Media Experiences team has been following the developments surrounding NeRF and other forms of immersive media to understand how these technologies transform the ways we live, learn, work and play. In time, immersive applications may emerge as major drivers of network traffic, so we’re working to understand the resources required to deliver these next-generation experiences.

In this blog post, we take a look at how NeRF works, how to use it yourself and how it’s influencing the future of immersive media.

NeRF in a Nutshell: How It Works

Essentially, NeRF is a machine learning system that takes photos or videos of a subject and memorizes the appearance of that subject in 3D. The NeRF-creation process looks something like this:

  1. Record a regular video or take a set of photos of your subject. Your phone will do!
  2. Take each of those images and figure out their positions relative to each other. You can do this with sensors fixed to the camera or, more easily, with a photogrammetry pipeline such as COLMAP.
  3. Train a multi-layer perceptron (a kind of neural network) to behave like a renderer that specializes in producing images of this subject; a simplified sketch of this network follows the list below.
  4. Now, you have a NeRF! You can use this neural network to create new images and videos of your subject, as in the above example.
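For the curious, here’s a heavily simplified sketch of step 3 in Python (using PyTorch). A real NeRF adds positional encoding of the inputs and a more carefully structured network; this only illustrates the essential contract: a 3D position and a viewing direction go in, a color and a density come out.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy radiance field: (position, view direction) -> (color, density).
    Real NeRFs add positional encoding and skip connections."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)      # how "solid" the point is
        self.color_head = nn.Sequential(              # color depends on view dir
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        features = self.trunk(xyz)                           # (N, hidden)
        density = torch.relu(self.density_head(features))    # (N, 1), >= 0
        rgb = self.color_head(torch.cat([features, view_dir], dim=-1))
        return rgb, density
```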

When NeRF was first published in 2020, this creation process took hours. Today, advancements such as NVIDIA’s Instant Neural Graphics Primitives have brought the time down to the order of minutes or even seconds!

When we called NeRF a “3D photograph” earlier, we meant it. Essentially, a NeRF tries to describe the color emitted at each point in a 3D space, along with a density describing how opaque that point is. Look at the same point of a real object from different angles and you might see different colors; NeRF reproduces this view-dependent effect to achieve reflections and, through density, transparencies, just as if you were viewing a real 3D object.
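Rendering a NeRF means shooting a ray through each pixel, querying the network at sample points along that ray, and blending the results front to back. Here’s a minimal sketch of that blending step, assuming evenly spaced samples (real implementations use stratified and hierarchical sampling):

```python
import torch

def render_ray(rgbs: torch.Tensor, densities: torch.Tensor, delta: float) -> torch.Tensor:
    """Blend N samples along one ray into a single pixel color.

    rgbs:      (N, 3) colors the network predicted at each sample
    densities: (N, 1) densities at each sample
    delta:     spacing between consecutive samples along the ray
    """
    sigma = densities.squeeze(-1)                       # (N,)
    # Chance that light is absorbed within each segment of length delta.
    alpha = 1.0 - torch.exp(-sigma * delta)             # (N,)
    # Transmittance: how much light survives to reach each sample.
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    weights = trans * alpha                             # (N,)
    return (weights.unsqueeze(-1) * rgbs).sum(dim=0)    # final (3,) color
```

Because this runs for hundreds of samples per ray on every pixel, early NeRFs were slow to render; much of the follow-up research focuses on cutting that cost.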

The NeRF process results in a high level of detail, but there’s one catch: The NeRF model assumes that you’re working with a still, unchanging scene. Light-based effects are “baked in,” meaning that you can’t add new objects to the scene and see them cast shadows or appear in reflections. If subjects move or change over time in the input video, the NeRF output will appear blurry or misshapen. New research papers have identified ways around these limitations, but those solutions haven’t yet reached wider adoption. In the meantime, anyone want to bring back the Mannequin Challenge?

Getting Started

It’s easy to start playing with NeRF. For example, Luma AI has built an app for iPhones and the web that automatically builds NeRFs from your videos. Once you have a NeRF, you can make videos and export them to other content-creation tools, including the Unreal game engine. Luma has a gallery of diverse NeRF-based content submitted by its users here.

If you want to take a more hands-on approach to NeRF creation, nerfstudio is a free, open-source toolset for creating NeRFs and designing advanced 3D graphics pipelines with the new technology. The learning curve is steeper, but power users and developers may enjoy the increased flexibility that this method offers.

NeRF and Next-Generation Media

Improved 3D capture of real-world subjects opens up opportunities across multiple industries. Here are a few examples.

Digital production teams and VFX artists are already finding ways to incorporate NeRF into creative workflows. The most obvious use in content creation is converting real-world subjects to 3D representations that can be combined with synthetic content, but NeRF can also be used to smooth camera movements or compose multiple camera shots into unified sequences. To see for yourself, check out this Corridor Crew video on YouTube and this McDonald’s commercial about the Chinese New Year (including the additional behind-the-scenes content in the replies).

Digital twins and simulations, as described by platforms like NVIDIA Omniverse, have presented a compelling value proposition for accurate digital modeling of real-world systems such as factories and autonomous vehicles. Where applicable, NeRF may be an effective way to digitize real-world environments for use in models and simulations. One example in the wild is Wayve Technologies’ effort to build city-scale NeRFs for autonomous vehicle simulations, as presented at NVIDIA GTC 2023.

Finally, metaverse initiatives often aim to empower users to build and share their own content and experiences. Games like Minecraft and Roblox provide user-friendly content-creation tools, but photorealistic content creation is usually reserved for experts with training on professional tools or access to specialized photogrammetry software. Now, apps like Luma and open-source toolkits like nerfstudio make it possible to generate photorealistic content in minutes with your smartphone and a network connection.

NeRF Is Accelerating Immersive Media

Immersive media comes in many forms, including but not limited to virtual reality, augmented reality, mixed reality and light field displays. NeRF alone isn’t going to make or break any of these technologies as they continue to mature and enter the market, but it gives creators and developers another tool to get one step closer to a photorealistic holographic immersive experience.

In the past, we’ve asked readers to imagine that we had a way to capture life-like holograms of subjects. Thanks to NeRF and related technologies, there's no need for make-believe. Subscribe to our blog for more updates from the Immersive Media Team and other activities at CableLabs.


10G and Immersive Media Experiences

Austin Pahl
Architect, Immersive Media Experiences

Feb 10, 2022

Imagine if you could create a life-like hologram of a given subject—and then be able to study and experience every detail of that subject later without being physically near it. Sounds like science fiction, right? We’re living in an era when such futuristic technology is already available to us! To make this kind of experience a reality, we can capture the rays of light that bounce off a particular subject, and what makes this possible is “light field media.”

CableLabs’ Immersive Media Experiences team has been researching how light fields can transform the ways we live, learn, work and play. Today, there are already many ways to capture light fields, ranging from the latest smartphone cameras to professional light stage studios that capture the tiniest of details. To view a light field, the latest holographic displays provide high-resolution 3D video without the need for headwear or face tracking. Experiencing this technology in person feels like magic!

OTOY’s LightStage

How 10G Will Deliver an Immersive Future

10G will bring unprecedented speed, reliability and security to the world, which is why it’s essential for enabling light field media. Light fields require tremendous amounts of data, more than nearly any other media technology today. Traditional photographs and videos store only a single grid of pixels, whereas a light field records the color and direction of many light rays for every viewing position, multiplying the data volume. As the ecosystem evolves and reaches more people, the cable industry is preparing to deliver these immersive experiences over the network.
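For a rough sense of scale, here’s a back-of-the-envelope comparison of raw (uncompressed) data rates. Every number below is an illustrative assumption, not a spec of any particular display:

```python
# Rough, illustrative comparison of raw (uncompressed) data rates.
BYTES_PER_PIXEL = 3          # 8-bit RGB
WIDTH, HEIGHT = 1920, 1080   # per-view resolution (assumed)
FPS = 30

video_rate = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS   # a single 2D view
VIEWS = 45                   # hypothetical view count for a light field display
light_field_rate = video_rate * VIEWS

print(f"Traditional video: {video_rate / 1e9:.2f} GB/s raw")
print(f"{VIEWS}-view light field: {light_field_rate / 1e9:.2f} GB/s raw")
# Traditional video: 0.19 GB/s raw
# 45-view light field: 8.40 GB/s raw
```

Compression narrows that gap considerably, but the multiplier explains why light field delivery calls for the capacity and reliability that 10G targets.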

As part of our strategy to support the emergence of immersive media, CableLabs is a contributing member of the Immersive Digital Experiences Alliance (IDEA), a collaboration between diverse experts across immersive media technologies. IDEA is producing royalty-free specifications that enable standardized end-to-end conveyance of immersive media. These standards will make it possible to create, distribute and enjoy immersive content as the landscape grows richer over time.

From CableLabs’ Near Future series

Watch our 10G and immersive media experiences video, in which the Immersive Media Experiences team demonstrates how light fields work and showcases the latest commercially available light field displays.
