DOCSIS

CableLabs Releases DOCSIS® Simulation Model

Greg White
Distinguished Technologist

Sep 10, 2020

When it comes to technology innovation, one of the most powerful tools in an engineer’s toolbox is the ability to rapidly test hypotheses through simulations. Simulation frameworks are used in nearly all engineering disciplines as a way to understand complex system behaviors that would be difficult to predict analytically. Simulations also allow the researcher to control variables, explore a wide range of conditions and look deeply into emergent behaviors in ways that are either impossible or extremely challenging to accomplish in real-world testbeds or prototype implementations.

For some of our innovations, CableLabs uses the “ns” family of discrete-event network simulators (widely used in academic networking research) to investigate sophisticated techniques for making substantial improvements in broadband network performance. The ns family originated at Lawrence Berkeley National Laboratory in the mid-1990s and has evolved over three versions, with “ns-3” being the current, actively developed and maintained iteration. The open-source ns-3 is managed by a consortium of academic and industry members, including CableLabs. Examples of features developed with the help of ns include the Active Queue Management feature of the DOCSIS 3.1 specifications, which CableLabs developed using ns-2, and more recently, the Low Latency DOCSIS technology, which was created using models that we built in ns-3. In both cases, the simulation models were used to explore technology options and guide our decision making. In the end, these models were able to predict system behavior accurately enough to serve as the reference against which cable modems are compared to assess implementation compliance.
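To make the value of discrete-event simulation concrete, here is a toy example of our own (purely illustrative, and far simpler than the ns-3 DOCSIS model): a few lines of Python are enough to show how queueing delay on a single bottleneck link explodes as offered load approaches capacity, a behavior that is easy to measure in simulation but tedious to explore on real hardware.

```python
import random

def simulate_fifo(arrival_rate_pps, link_rate_pps, n_packets, seed=1):
    """Toy discrete-event simulation of one FIFO bottleneck link.

    Packets arrive as a Poisson process and drain at a fixed rate;
    we record the queueing delay each packet experiences.
    """
    rng = random.Random(seed)
    service = 1.0 / link_rate_pps
    t = 0.0          # current arrival time
    next_free = 0.0  # when the link finishes its current packet
    delays = []
    for _ in range(n_packets):
        t += rng.expovariate(arrival_rate_pps)  # next arrival
        start = max(t, next_free)               # wait if link is busy
        delays.append(start - t)                # queueing delay
        next_free = start + service
    return delays

# Average delay grows sharply as offered load approaches capacity.
light = simulate_fifo(arrival_rate_pps=500, link_rate_pps=1000, n_packets=50000)
heavy = simulate_fifo(arrival_rate_pps=950, link_rate_pps=1000, n_packets=50000)
print(sum(light) / len(light), sum(heavy) / len(heavy))
```

Running the link at 95 percent load rather than 50 percent multiplies the average queueing delay many times over, which is exactly the kind of emergent behavior simulation frameworks make easy to quantify.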

As a contribution to the global networking research community, CableLabs recently published its DOCSIS simulation model on the ns-3 “App Store,” thus enabling academic and industry researchers to easily include cable broadband links in their network simulations. This is expected to greatly enhance the ability of DOCSIS equipment vendors, operators and academic researchers to explore “what-if” scenarios for improvements in the core technology that underpins many of the services being delivered by cable operators worldwide. For example, a vCMTS developer could easily plug in an experimental new scheduler design and investigate its performance using high-fidelity simulations of real application traffic mixes. Because this DOCSIS model is open source, anyone can modify it for their own purposes and contribute enhancements that can then be published to the community.

If you’ve ever been interested in exploring DOCSIS performance in a particular scenario, or if you have had an idea about a new feature or capability to improve the way data is forwarded in the network, have a look at the new DOCSIS ns-3 module and let us know what you think!

Read More About The New DOCSIS NS-3 Module 

Latency

Rise of Cloud Gaming – Meeting the Challenges for ISPs

Matt Schmitt
Principal Architect

Aug 26, 2020

Light Reading recently posted an article titled “Operators need to prepare for the game-streaming tsunami,” which discusses a new wave of game streaming services (also known as cloud gaming services) that are on the way. The article points out that the network demands of these services are completely different from anything cable operators have had to deal with before: operators cannot simply assume that the work done previously to better support video streaming will be sufficient to effectively support game streaming. The authors warn that ISPs should get ahead of the network demands of these new services or replay the pain of the past. We are all familiar with the exasperation of watching the spinning loading “ball” in the middle of a favorite movie scene; imagine the frustration when things suddenly lock up or lag in the middle of an intense game.

Here at CableLabs, we agree with Light Reading’s assessment of the importance of readying operator networks for the impact of game streaming services. Although cloud gaming is still in its early adoption phase, Sandvine’s May 2020 Phenomena Report shows NVIDIA’s GeForce NOW game streaming service among the top 10 gaming traffic generators.

The good news is that CableLabs has been building and testing latency and congestion management solutions for some time, including one that is well-tailored to game streaming. The suite of features developed by CableLabs and our industry partners, known as Low Latency DOCSIS® (LLD), can provide better customer experiences for both current multiplayer online gaming and emerging cloud gaming services.

An early observation of the low latency team at CableLabs was that different applications have different traffic patterns and needs, which ultimately require different solutions for reducing and managing latency. This is true even between seemingly related applications like online gaming and game streaming:

  • Multiplayer online gaming uses very low data rates (~150 kbps) but can be very sensitive to latency and jitter (variations in latency).
  • Game Streaming – running the game on a remote server and streaming it to an end device – is also very sensitive to latency and jitter, but also requires high data rates on the order of tens of megabits per second, and cannot be buffered since it’s played in real-time.

Latency for online gaming comes not from a lack of capacity – since the data rates are very low – but rather from gaming traffic getting caught behind other types of traffic that aren’t latency sensitive. Therefore, LLD employs tools to keep that gaming traffic from getting stuck, without negatively impacting other traffic.

Game streaming, because of the high data rates involved, requires something more: the ability to sense and adapt to changing capacity at any bottleneck along the network path. This is why support for Low Latency, Low Loss, Scalable Throughput (L4S) is part of LLD technology. L4S builds on the mechanisms developed for online gaming by enabling the network to provide precise feedback to applications about impending congestion. If implemented by an application at both ends of a network connection, as well as at any bottleneck points in between, it permits the application to send at high data rates while maintaining consistently low latency.
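The control loop at the heart of this idea can be sketched in a few lines. The sketch below is our own simplification, not CableLabs code or the actual L4S algorithms: a DCTCP/L4S-style sender backs off in proportion to the *fraction* of packets the bottleneck marks, rather than halving its rate on any single loss, so it can hover near capacity while keeping the queue nearly empty.

```python
def step_sender(rate_mbps, marked_fraction, probe_mbps=1.0):
    """One control interval of an L4S-style scalable sender.

    Fine-grained marking feedback lets the sender make small,
    proportional adjustments instead of large multiplicative drops.
    """
    if marked_fraction > 0:
        return rate_mbps * (1 - marked_fraction / 2)  # proportional back-off
    return rate_mbps + probe_mbps                     # gentle probe upward

def mark_fraction(queue_delay_ms, ramp_ms=10.0):
    """Toy AQM: marking probability ramps up with queueing delay."""
    return min(1.0, max(0.0, queue_delay_ms / ramp_ms))

# Converge toward a 50 Mbps bottleneck: above capacity the queue
# grows, packets get marked, and the sender eases off a little.
rate, capacity = 10.0, 50.0
for _ in range(200):
    queue_delay_ms = max(0.0, rate - capacity)  # crude stand-in for queue growth
    rate = step_sender(rate, mark_fraction(queue_delay_ms))
print(round(rate, 1))  # hovers just around the 50 Mbps capacity
```

The key property this illustrates is that the sender oscillates only slightly around the bottleneck rate, so the queue (and therefore the latency) stays small even at high utilization.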

Therefore, by deploying DOCSIS equipment that supports the LLD feature set – including L4S support – cable operators will be able to provide the very best game streaming experience as soon as those services incorporate L4S support.

While gamers will be thrilled with this, LLD technology doesn’t just apply to gaming: when implemented by application developers, it will also enable improved service for work-from-home applications like video conferencing, making DOCSIS-based cable systems the platform of choice for these demanding applications. That’s why latency is one of the pillars of the cable industry’s 10G Platform.

Even better, availability of DOCSIS equipment that supports LLD is just around the corner. CableLabs has been actively working jointly with equipment suppliers to bring these features to market as soon as possible via software updates to their existing DOCSIS 3.1 equipment. We’ve seen support for these features rapidly evolve, and we will continue to support the industry in getting these features deployed in live networks. We’re always interested in working with more partners on testing and validation of these emerging technologies and applications, so please reach out to us here at CableLabs if you’d like to get involved or learn more.

There is a tsunami coming, but with preparation, it will be a tsunami of awesome.

Learn More About Low Latency DOCSIS

Latency

Gearing Up for 10G: Download the Technical Brief on CableLabs’ Low Latency Technologies for DOCSIS Networks

Sep 17, 2019

If you’ve been following our blog and our recent 10G announcement, you know that one of our main areas of focus is latency. Achieving near-zero latency on DOCSIS networks is one of the goals of the 10G initiative and is just as important as increasing speed or bandwidth. The success of future 10G networks that support seamless communication and next-level interactive experiences like holodecks and 360° video depends heavily on finding technological solutions that decrease latency to imperceptible levels, delivering the consistent, real-time responsiveness that customers desire.

The good news is we are well on our way to getting there. So far we’ve released a number of specifications, including Low Latency DOCSIS (LLD) and Low Latency Mobile Xhaul (LLX), aimed at reducing latency in the DOCSIS networks that provide residential services and also serve as backhaul, midhaul and fronthaul (collectively known as xhaul) for mobile traffic.

Low Latency DOCSIS (LLD)

In modern households, there are often multiple applications and devices connected to the same network at the same time, sending and receiving a variety of traffic. Some, like streaming video and large file downloads, send repeated large bursts of data and expect the network to buffer and play out those bursts, while others, like online gaming and voice chat, send traffic smoothly. Ordinarily, the traffic from the smooth senders is subjected to the widely varying buffering latency caused by the bursty senders. LLD technology is optimized for these two different types of traffic behavior and decreases delays for smooth-sending applications (many of which are latency-sensitive) without affecting the other traffic. Low Latency DOCSIS technology can support a consistent sub-1ms round-trip latency for smooth-sending applications, resulting in much better network performance overall.

Low Latency Mobile Xhaul (LLX)

LLX leverages collaboration between the mobile network scheduler and the DOCSIS scheduler to provide a low latency xhaul solution that achieves a consistent DOCSIS upstream delay of just 1 to 2 milliseconds. LLX also defines a common quality of service framework for both mobile and DOCSIS so that the relative priorities of different traffic streams are maintained across the two systems. In the foreseeable future, deploying LLX technology will help solidify DOCSIS cable networks as the xhaul transport of choice, capable of supporting the latency requirements of 5G and beyond.
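The latency benefit of scheduler collaboration comes down to simple timeline arithmetic, sketched below. This is our own illustrative model, and the numbers in it are assumptions, not values from the LLX specification: without coordination, uplink data arriving at the cable modem must trigger a DOCSIS bandwidth request and wait out the request-grant loop; with an LLX-style pipeline, the mobile scheduler’s uplink grant decision is relayed to the DOCSIS scheduler ahead of time, so a DOCSIS grant is already waiting when the data shows up.

```python
def upstream_delay_ms(request_grant_loop_ms, frame_ms, pipelined):
    """Rough upstream latency model for mobile xhaul over DOCSIS.

    pipelined=False: data must wait for the DOCSIS request-grant
    loop before transmission.  pipelined=True: the grant was
    pre-arranged from the mobile scheduler's decision, so only
    framing/serialization time remains.
    """
    if pipelined:
        return frame_ms
    return request_grant_loop_ms + frame_ms

baseline = upstream_delay_ms(request_grant_loop_ms=8.0, frame_ms=1.5, pipelined=False)
llx      = upstream_delay_ms(request_grant_loop_ms=8.0, frame_ms=1.5, pipelined=True)
print(baseline, llx)  # → 9.5 1.5
```

With these (hypothetical) parameters, eliminating the request-grant wait is what brings the upstream delay down into the 1-to-2-millisecond range the article describes.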

For more detail, please download the following member-only technical brief on Low Latency Technologies for DOCSIS Networks which includes information about sources of latency, how we address them, implementation strategies and more.

If you’re not yet a CableLabs member, find out how you can become one here.


Download Technical Brief

10G

CableLabs Low Latency DOCSIS® Technology Launches 10G Broadband into a New Era of Rapid Communication

Greg White
Distinguished Technologist

Sep 5, 2019

Remember the last time you waited (and waited) for a page to load?  Or when you “died” on a virtual battlefield because your connection couldn’t catch up with your heroic ambitions? Many internet users chalk those moments up to insufficient bandwidth, not realizing that latency is to blame. Bandwidth and latency are two very different things and adding more bandwidth won’t fix the internet lag problem for latency-sensitive applications. Let’s take a closer look at the difference:

  • Bandwidth (sometimes referred to as throughput or speed) is the amount of data that can be delivered across a network over a period of time (Mbps or Gbps). It is very important, particularly when your application is trying to send or receive a lot of data. For example, when you’re streaming a video, downloading music, syncing shared files, uploading videos or downloading system updates, your applications are using a lot of bandwidth.
  • Latency is the time that it takes for a “packet” of data to be sent from the sender to the receiver and for a response to come back to the sender. For example, when you are playing an online game, your device sends packets to the game server to update the global game state based on your actions, and it receives update packets from the game server that reflect the current state of all the other players. The round-trip time (measured in milliseconds) between your device and the server is sometimes referred to as “ping time.” The faster it is, the lower the latency, and the better the experience.
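A “ping time” of this sort is easy to measure yourself. The sketch below is our own illustration, using a loopback echo server as a stand-in for a real game server; it times one request/response round trip:

```python
import socket
import threading
import time

def echo_once(server_sock):
    """Stand-in for a game server: echo one message back."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(64))

def measure_rtt_ms(host, port):
    """Time one tiny request/response round trip ('ping time')."""
    with socket.create_connection((host, port)) as s:
        start = time.perf_counter()
        s.sendall(b"ping")
        s.recv(64)  # block until the echoed reply arrives
        return (time.perf_counter() - start) * 1000.0

server = socket.socket()
server.bind(("127.0.0.1", 0))  # any free port on loopback
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

rtt = measure_rtt_ms("127.0.0.1", server.getsockname()[1])
print(f"round trip: {rtt:.2f} ms")
```

Over loopback this round trip is a tiny fraction of a millisecond; across a real broadband connection to a game server, the same measurement is dominated by network latency, which is exactly the quantity the technologies below aim to keep low.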

Latency-Sensitive Applications

Interactive applications, where real-time responsiveness is required, can be more sensitive to latency than bandwidth. These applications really stand to benefit from technology that can deliver consistent low latency.

As we’ve alluded to, one good example is online gaming. In a recent survey we conducted with power users in the gaming community, network latency continually came up as one of the top issues. That’s because coordinating the actions of players in different network locations is very difficult over “laggy” connections. The emergence of cloud gaming makes this even more important, because even the responsiveness of local game controller actions depends on a full round trip across the network.

Queue Building or Not?

When multiple applications share the broadband connection of one household (e.g. several users performing different activities at the same time), each of those applications can have an impact on the performance of the others. They all share the total bandwidth of the connection, and they can all inflate the latency of the connection.

It turns out that applications that want to send a lot of data all at once do a reasonably good job of sharing the bandwidth in a fair manner, but they actually cause latency in the network when they do it, because they send data too quickly and expect the network to queue it up.  We call these “queue-building” applications. Examples are video streaming and large downloads, and they are designed to work this way.  There are also plenty of other applications that aren’t trying to send a lot of data all at once, and so don’t cause latency.  We call these “non-queue-building” applications. Interactive applications like online gaming and voice connections work this way.
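This interaction is easy to see in a back-of-the-envelope model (our own, for illustration only): push one flow’s 100-packet burst and another flow’s smoothly spaced packets through a single shared FIFO and look at who ends up waiting.

```python
def fifo_delays(arrivals, link_rate_pps):
    """Queueing delay of each packet through a single shared FIFO.

    `arrivals` is a list of (arrival_time_s, flow_name) tuples.
    """
    service = 1.0 / link_rate_pps
    next_free = 0.0
    delays = {}
    for t, flow in sorted(arrivals):
        start = max(t, next_free)  # wait behind every earlier packet
        delays.setdefault(flow, []).append(start - t)
        next_free = start + service
    return delays

# A queue-building flow dumps a 100-packet burst at t=0; a
# non-queue-building game flow sends one packet every 10 ms.
arrivals = ([(0.0, "video")] * 100 +
            [(i * 0.010, "game") for i in range(20)])
d = fifo_delays(arrivals, link_rate_pps=1000)   # 1 ms per packet
print(max(d["game"]), max(d["video"]))  # game packets wait ~90 ms at worst
```

The burst-sending flow is perfectly happy (its data was buffered, as designed), but the game flow’s packets inherit tens of milliseconds of delay they did nothing to cause.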

The queue-building applications, like video streaming or downloading apps, get the best performance when the broadband connection allows them to send their data in big bursts, storing that data in a buffer as it is being delivered. These applications benefit from the substantial upgrades the cable industry has already made to its networks, which are now gigabit-ready. These applications are also latency-tolerant: user experiences are generally not impacted by latency.

Non-queue-building applications like online gaming, on the other hand, get the best performance when their packets don’t have to sit and wait in a big buffer along with the queue-building applications. That’s where Low Latency DOCSIS comes in.

What is Low Latency DOCSIS 3.1 and how does it work?

The latest generation of DOCSIS that has been deployed in the field—DOCSIS 3.1—experiences typical latency performance of around 10 milliseconds on the access network link. However, under heavy load, the link can experience delay spikes of 100 milliseconds or more.

Low Latency DOCSIS (LLD) technology is a set of new features, developed by CableLabs, for DOCSIS 3.1 (and future) equipment.  LLD can provide consistent low latency (as low as 1 millisecond) on the access network for the applications that need it.  The user experience will be more consistent with much smaller delay variation.

In LLD, the non-queue-building applications (the ones that aren’t causing latency) can take a different path through the DOCSIS network and not get hung up behind the queue-building applications.  This mechanism doesn’t interfere with the way that applications go about sharing the total bandwidth of the connection. Nor does this reduce one application's latency at the expense of others. It is not a zero-sum game; rather, it is just a way of making the internet experience better for all applications.

So, LLD gives both types of applications what they want and optimizes the performance of both.  Any application that wants to be able to send big bursts of data can use the default “classic” service, and any application that can ensure that it isn’t causing queue build-up and latency can identify its packets so they use the “low latency” service. Both then share the bandwidth of the broadband connection without one getting preference over the other.
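The two-path idea can be sketched as follows. This is a rough illustration of our own: the strict-priority scheduler below is a simplification, since the actual LLD scheduler uses weighted inter-queue scheduling with a coupled AQM precisely so that neither queue gets a bandwidth advantage. Even so, it shows how separating non-queue-building packets into their own queue removes their waiting time.

```python
def dual_queue_delays(arrivals, link_rate_pps, ll_flows):
    """Two queues sharing one link: a crude stand-in for LLD's
    dual-queue approach.  `arrivals` is a list of
    (arrival_time_s, flow_name) tuples.
    """
    service = 1.0 / link_rate_pps
    arrivals = sorted(arrivals)
    ll, classic = [], []  # per-queue FIFO backlogs
    delays = {}
    t, i = 0.0, 0
    while i < len(arrivals) or ll or classic:
        # Admit everything that has arrived by time t.
        while i < len(arrivals) and arrivals[i][0] <= t:
            (ll if arrivals[i][1] in ll_flows else classic).append(arrivals[i])
            i += 1
        if not ll and not classic:
            t = arrivals[i][0]  # link idle: jump to next arrival
            continue
        at, flow = (ll or classic).pop(0)  # serve low-latency queue first
        delays.setdefault(flow, []).append(t - at)
        t += service
    return delays

# A 100-packet burst shares the link with a smooth 10-ms-spaced game flow.
arrivals = ([(0.0, "video")] * 100 +
            [(i * 0.010, "game") for i in range(20)])
d = dual_queue_delays(arrivals, link_rate_pps=1000, ll_flows={"game"})
print(max(d["game"]), max(d["video"]))  # game delay now ~1 ms or less
```

The game packets no longer sit behind the burst, while the bursty flow still gets its data through; its completion time shifts only by the handful of packet slots the game flow actually used.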

Incorporating LLD Technology

Deploying Low Latency DOCSIS in a cable operator’s network can be accomplished by field-upgrading existing DOCSIS 3.1 CMs and CMTSs with new software. Some of the low latency features are even available to customers with older (pre-DOCSIS 3.1) CMs.

The technology includes tools that enable automatic provisioning of these new services, and it also introduces new tools to report statistics of latency performance to the operator.

Next Steps

DOCSIS equipment manufacturers are beginning to develop and integrate LLD features into software updates for CMTSs and CMs, and CableLabs is hosting Interoperability Events this year and next year to bring manufacturers together to help iron out the technology kinks.

We expect these features to become available to cable operators in the next year as they prepare their network to support low latency services.

LLD provides a cost-effective means of leveraging the existing hybrid fiber-coaxial (HFC) network to provide a high-performance network for latency-sensitive services. These services will help address customers’ requirements for many years into the future, maximizing the investments that cable operators have made in their networks. The cable industry is provisioning the network with substantial bandwidth and low latency to take another leap forward with its 10G networks.

For those attending the SCTE Cable-Tec Expo in New Orleans, Greg will be presenting the details of this technology on an SCTE panel, “Low Latency DOCSIS: Current State and Future Vision” (Room 243-244, Monday, September 30, 2019, 3:30–4:30 PM). Hope to see you there!


Read Our Low Latency DOCSIS White Paper
