


CoMP over DOCSIS: Femtocells in the Age of vRAN

Joey Padden
Distinguished Technologist, Wireless Technologies

Sep 12, 2019

As promised in the last couple of blog posts discussing DOCSIS-based femtocells, we’ve saved the best for last. So far in the series, we’ve made the case for femtocells over DOCSIS networks and laid out the total cost of ownership (TCO) benefits of this deployment model. In this final blog post, I’ll share the results of some testing we’ve been doing at CableLabs on using Coordinated Multipoint (CoMP) to optimize femtocell performance in dense deployments.

Decluttering the Radio Signal

Let’s step back and look at a key issue that has limited the benefit of femtocells in the past: intercell interference. When femtocells (or any cells, for that matter) are placed in close proximity, the radio signals each cell site produces can bleed into its neighbor’s territory and negatively affect network performance.

With CoMP, neighboring cells can coordinate their transmissions in a variety of ways to work collaboratively and prevent interference. They can share scheduling and beamforming data to avoid creating interference. Or, they can use joint processing, which allows multiple cells to talk to a single cell phone at the same time, increasing the signal quality.

Although it’s not a perfect analogy, it’s a bit like trying to listen to a bunch of people singing their favorite song at the top of their lungs versus listening to a choir following a conductor, as you see in the following figure. The former is old femtocells, and the latter is virtualized RAN (vRAN) femtocells using CoMP.

[Figure: people singing over each other versus a choir following a conductor]

Icons made by Freepik from Flaticon are licensed under CC BY 3.0.

Since its inception, CoMP has largely been believed to require fiber transport links to work. For example, in TR 36.819, there’s a whole section devoted to the impact of “higher latency communication between points,” where “higher” refers to 5 ms, 10 ms or 15 ms of latency. In that text, the gains decrease as latency increases, ultimately going negative (i.e., becoming losses in performance).

However, with the increase in attention on vRAN, particularly lower-layer splits like the work going on in Telecom Infra Project (TIP) vRAN Fronthaul and O-RAN Alliance WG4, latency takes on new meanings with respect to CoMP.

For example, what matters more, the latency from one radio unit to another or the latency from one virtualized baseband unit (vBBU) to another? And if it’s the latter, does that mean CoMP can provide benefit even over long-latency non-ideal vRAN fronthaul like DOCSIS?

To find out the answers to these questions, we set up a test bed at CableLabs in collaboration with Phluido to explore CoMP over DOCSIS. We used the hardware from the TIP vRAN Fronthaul project, with an LTE software stack provided by Phluido that supports CoMP. We installed two radio units in different rooms, each radio connected via a DOCSIS® 3.0 network to the vBBU. We designated two test points: one at the cell center and the other in the cell edge/cell overlap region.

Notably, in our setup the latencies from radio unit to vBBU and from radio unit to radio unit were both about 10 ms. However, the latency between vBBUs was essentially zero because both radios shared the same vBBU. This setup is specifically designed to test whether vBBU-to-radio latency or vBBU-to-vBBU latency is more important for CoMP gains.

Gains!

We found that radio-to-radio latency and radio-to-vBBU latency can be quite large in absolute terms and we can still get good CoMP performance, provided that the latency between vBBUs is low and that the vBBU-to-radio latency is similar for all radios in the CoMP cluster, as you see below.

[Figure: CoMP test results]

In other words, to realize CoMP gains, the relative latency between a set of cells is more important than the absolute latency from vBBU to each radio.
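To make that rule concrete, here’s a minimal sketch in Python (the function, the numbers and the 2 ms skew threshold are illustrative assumptions, not values from our test software): a cluster is a good CoMP candidate when the vBBU-to-radio latencies are similar, regardless of how large they are in absolute terms.

```python
# Sketch of the "relative vs. absolute latency" rule for CoMP clustering.
# The 2 ms skew threshold is an illustrative assumption, not a measured limit.
def comp_cluster_ok(radio_latencies_ms, max_skew_ms=2.0):
    """radio_latencies_ms: vBBU-to-radio latency for each radio in a
    candidate cluster, where all radios are served by the same vBBU."""
    skew = max(radio_latencies_ms) - min(radio_latencies_ms)
    return skew <= max_skew_ms

# Our DOCSIS test bed: ~10 ms to each radio, near-zero skew -> gains possible.
print(comp_cluster_ok([10.1, 10.3]))   # True
# Large absolute latency is fine; a large *difference* between radios is not.
print(comp_cluster_ok([10.0, 18.0]))   # False
```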

We tested four configurations of phones at the cell center versus the cell edge, or some mix thereof, as the following figure shows.

[Figure: the four test cases and their throughput results]

In case 1, we see full cell throughput at each phone with CoMP enabled or disabled. This is great; this result shows that we haven’t lost any system capacity at the cell center by combining the cells into a single physical cell ID (PCI) and enabling CoMP.

In case 2, the phone throughput jumped from 55 Mbps to 78 Mbps when we enabled CoMP, a CoMP gain of more than 40 percent.

In case 3, when we enabled CoMP, the phone at the cell edge saw a throughput gain of 84 percent. In this scenario, the cell center phone saw a decrease in throughput. This illustrates a tradeoff of CoMP when using legacy transmission modes (TM4, in this case): the operator must choose whether it wants to favor cell edge users or cell center users. With more advanced transmission modes (e.g., TM10), this tradeoff is no longer an issue. Note that this is true of any CoMP deployment and is not related to our use of DOCSIS network fronthaul.

In case 4, we expected to see significant gains from CoMP, but so far we haven’t. This is an area of further investigation for our team.

vRAN Femtocell CoMP in MDUs

Let’s look at an example use case. Cell service in multi-dwelling units (MDUs) can be challenging. A combination of factors, such as commercial construction materials, glazing and elevation, affects the indoor signal quality. As discussed in my previous blog, serving those indoor users can be very resource-intensive.


As an operator, it would be great to have a low-cost way to deploy indoor cells. With vRAN over DOCSIS networks supporting CoMP, the operator can target femtocell deployments at heavy users, then build CoMP clusters (i.e., the set of radios that collaborate) as needed to optimize the deployment.

Putting It All Together

The testing described here has shown that CoMP gains can be realized even when using long-latency fronthaul over DOCSIS networks. As these solutions mature and become commercial-ready, deployments of this type will provide the following for operators:

  • Low-Cost Hardware: vRAN radios, particularly for femtocells, are low-complexity devices because the majority of the signal processing has been removed and put in the cloud. These radios can be built into the gateway customer premises equipment (CPE) already deployed by operators.
  • Low-OPEX Self Installs: With vRAN radios built into DOCSIS CPEs, operators can leverage the simplicity of self-installation. The ability to dynamically reconfigure CoMP clusters means that detailed RF planning and professional installation aren’t necessary.
  • High-Performing System: As shown in our testing results, CoMP gains can be realized over DOCSIS network–based vRAN femtocells. This eliminates another of the previous stumbling blocks encountered by earlier femtocell deployments.

Learn More About 10G


Enabling 5G with 10G Low Latency Xhaul (LLX) Over DOCSIS® Technology

Jennifer Andreoli-Fang
Distinguished Technologist, Wireless Technologies

Sep 10, 2019

I am a GenXer, and I am addicted to my iPhone. But it’s not just me: today’s consumers, from millennials to baby boomers and everyone in between, are spending more and more time on their mobile devices. Have you ever wondered what happens to your traffic when you interact with your iPhone or Android device? The traffic reaches a radio tower, but it doesn’t stop there; it needs to reach the internet via a connection between the cellular base station and a distant data center.

Traditionally, that connection (a.k.a. “xhaul”) is mostly provided by fiber. Fiber has great speed and latency performance but is costly to build. With advancements in LTE and 5G, mobile operators are deploying more and more radios deeper into neighborhoods. They will need a more scalable solution to provide that xhaul without sacrificing performance. This is where the hybrid fiber coaxial (HFC) network can help.

With ubiquitous cable infrastructure already in place, cable operators have the scalability to support today’s LTE and tomorrow’s 5G networks without the cost of building new fiber networks. With DOCSIS 3.0+ and Low Latency Xhaul (LLX) technology, the DOCSIS network delivers performance that is virtually indistinguishable from fiber. The CableLabs 10G technologies make the HFC network a better xhaul network, which is a win-win for consumers, mobile operators and cable operators.

How Low Latency Xhaul (LLX) Works

Today’s DOCSIS technology provides a good starting point for mobile xhaul but may not be enough to support the ultimate latency requirements of future mobile traffic. DOCSIS upstream latency ranges from a typical 8-12 milliseconds to a maximum of around 50 milliseconds under heavy load. We want to bring that latency down to the 1-2 millisecond range in order to support 5G.

The LLX technology is specifically designed to reduce the latency experienced by mobile traffic while traversing the DOCSIS transport network on its way to the internet. The LLX technology development started about 3 years ago as a joint innovation project between CableLabs and Cisco. I wrote about it here and here.

So, how does LLX work? Let’s look at the case of LTE backhauled over a DOCSIS network as an example. Today, LTE and DOCSIS are two independent systems: their operations occur in serial, and the overall latency is the sum of the two system latencies. But from an engineer’s point of view, both technologies use a similar request- and grant-based mechanism to access the channel. If the two processes can be pipelined, then LTE and DOCSIS operations can take place in parallel, removing the “sum” from the latency equation. To enable pipelining, we designed a protocol that uses a message called the bandwidth report (BWR), which allows the LTE network to share information with the DOCSIS network. Pipelining is a unique and inventive aspect of LLX and is the heart of what creates a low latency transport.
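Here’s a simplified model of what pipelining buys, as a sketch with illustrative numbers (the real message formats and timing are defined in the LLX specification): in the serial case the DOCSIS request-grant cycle starts only after the LTE scheduling cycle finishes, so the latencies add; with the BWR, the CMTS learns how much data is coming and when, so the DOCSIS grant can be prepared while the LTE cycle is still in progress and the two largely overlap.

```python
# Simplified latency model for LTE backhaul over DOCSIS (illustrative numbers).
LTE_SCHEDULING_MS = 8     # LTE request/grant cycle until data reaches the eNB
DOCSIS_GRANT_MS = 8       # DOCSIS request/grant cycle at the cable modem
BWR_DELIVERY_MS = 1       # time to deliver the bandwidth report (assumed)

# Serial: the modem requests DOCSIS capacity only after the LTE data arrives.
serial_ms = LTE_SCHEDULING_MS + DOCSIS_GRANT_MS

# Pipelined with LLX: the grant is (nearly) ready when the LTE data arrives,
# so overall latency is governed by the slower of the two overlapping cycles.
pipelined_ms = max(LTE_SCHEDULING_MS, BWR_DELIVERY_MS + DOCSIS_GRANT_MS)

print(f"serial: {serial_ms} ms, pipelined: {pipelined_ms} ms")
# -> serial: 16 ms, pipelined: 9 ms (the "sum" becomes roughly a "max")
```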

 


Operator Trials

So, just how well does LLX work? We recently teamed up with Shaw, one of our Canadian members, as well as our technology development partners Cisco and Sercomm, to perform a series of lab trials. The details of the trials will be published at the upcoming SCTE Cable-Tec Expo in October. But as a preview, we demonstrated that even when the DOCSIS network is heavily loaded, LLX consistently reduced the DOCSIS upstream latency to 1-2 milliseconds, all without adversely affecting other traffic.

Deploying LLX Technology

The LLX specification was published a few months ago, the result of collaborative efforts from key cable and mobile equipment vendors in the CableLabs-led LLX working group.

LLX technology is designed to work for a variety of deployment models, including backhaul and fronthaul, over DOCSIS as well as PON networks. To this end, we have taken the technology to mobile industry standardization organizations such as the O-RAN Alliance, whose current focus is fronthaul.

LLX works in DOCSIS 3.0 and later networks as a software upgrade to the CMTS. It has been implemented on commercial DOCSIS and mobile equipment. More information on LLX is available here.

For those attending the SCTE Cable-Tec Expo in New Orleans, I will be discussing the innovation on the Innovation Stage at 12:45 p.m. local time with my industry partners from Shaw, Cisco and Sercomm. I will also dive deep into the technology and the Shaw trial results in my SCTE panel “Mobile X-haul and DOCSIS” on Wednesday, October 2nd at 9 a.m. local time. Hope to see you there.


Learn More About 10G


CableLabs Low Latency DOCSIS® Technology Launches 10G Broadband into a New Era of Rapid Communication

Greg White
Distinguished Technologist

Karthik Sundaresan
Distinguished Technologist

Sep 5, 2019

Remember the last time you waited (and waited) for a page to load?  Or when you “died” on a virtual battlefield because your connection couldn’t catch up with your heroic ambitions? Many internet users chalk those moments up to insufficient bandwidth, not realizing that latency is to blame. Bandwidth and latency are two very different things and adding more bandwidth won’t fix the internet lag problem for latency-sensitive applications. Let’s take a closer look at the difference:

  • Bandwidth (sometimes referred to as throughput or speed) is the amount of data that can be delivered across a network over a period of time (Mbps or Gbps). It is very important, particularly when your application is trying to send or receive a lot of data. For example, when you’re streaming a video, downloading music, syncing shared files, uploading videos or downloading system updates, your applications are using a lot of bandwidth.
  • Latency is the time that it takes for a “packet” of data to be sent from the sender to the receiver and for a response to come back to the sender. For example, when you are playing an online game, your device sends packets to the game server to update the global game state based on your actions, and it receives update packets from the game server that reflect the current state of all the other players. The round-trip time (measured in milliseconds) between your device and the server is sometimes referred to as “ping time,” and the short sketch after this list shows one way to measure it. The faster it is, the lower the latency, and the better the experience.
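As a concrete illustration of ping time, here’s a minimal sketch that approximates a round-trip time by timing a TCP connection setup (the host and port are just examples; a true ping uses ICMP, which requires privileged raw sockets):

```python
import socket
import time

def rough_rtt_ms(host: str, port: int = 443) -> float:
    """Approximate round-trip time by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=3):
        pass  # connection completing means one round trip (plus a little processing)
    return (time.perf_counter() - start) * 1000

print(f"approximate RTT: {rough_rtt_ms('www.cablelabs.com'):.1f} ms")
```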

Latency-Sensitive Applications

Interactive applications, where real-time responsiveness is required, can be more sensitive to latency than bandwidth. These applications really stand to benefit from technology that can deliver consistent low latency.

As we’ve alluded to, one good example is online gaming. In a recent survey we conducted with power users in the gaming community, network latency continually came up as one of the top issues. That’s because coordinating the actions of players in different network locations is very difficult if you have “laggy” connections. The emergence of cloud gaming makes this even more important because even the responsiveness of local game controller actions depends on a full round trip across the network.

Queue Building or Not?

When multiple applications share the broadband connection of one household (e.g. several users performing different activities at the same time), each of those applications can have an impact on the performance of the others. They all share the total bandwidth of the connection, and they can all inflate the latency of the connection.

It turns out that applications that want to send a lot of data all at once do a reasonably good job of sharing the bandwidth in a fair manner, but they actually cause latency in the network when they do it, because they send data too quickly and expect the network to queue it up.  We call these “queue-building” applications. Examples are video streaming and large downloads, and they are designed to work this way.  There are also plenty of other applications that aren’t trying to send a lot of data all at once, and so don’t cause latency.  We call these “non-queue-building” applications. Interactive applications like online gaming and voice connections work this way.
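To see why “sending data too quickly and expecting the network to queue it up” translates directly into latency, a quick back-of-envelope calculation helps (the buffer size and link rate below are illustrative, not measurements):

```python
# Queueing delay = data waiting in the buffer / rate at which the link drains it.
# Illustrative numbers: a 2.5 MB buffer ahead of a 100 Mbps broadband link.
buffer_bytes = 2.5e6
link_rate_bps = 100e6

queueing_delay_ms = buffer_bytes * 8 / link_rate_bps * 1000
print(f"{queueing_delay_ms:.0f} ms of added latency")   # -> 200 ms

# Every packet that arrives behind that backlog (a game update, a voice frame)
# waits the full 200 ms, which is how queue-building flows inflate latency
# for everything sharing the connection.
```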

The queue-building applications, like video streaming or downloading apps, get the best performance when the broadband connection allows them to send their data in big bursts, storing that data in a buffer as it is being delivered. These applications benefit from the substantial upgrades the cable industry has made to its networks already, which are now gigabit-ready. These applications are also latency-tolerant: user experiences are generally not impacted by latency.

Non-queue-building applications like online gaming, on the other hand, get the best performance when their packets don’t have to sit and wait in a big buffer along with the queue-building applications. That’s where Low Latency DOCSIS comes in.

What is Low Latency DOCSIS 3.1 and how does it work?

The latest generation of DOCSIS that has been deployed in the field—DOCSIS 3.1—experiences typical latency performance of around 10 milliseconds on the access network link. However, under heavy load, the link can experience delay spikes of 100 milliseconds or more.

Low Latency DOCSIS (LLD) technology is a set of new features, developed by CableLabs, for DOCSIS 3.1 (and future) equipment. LLD can provide consistent low latency (as low as 1 millisecond) on the access network for the applications that need it. The user experience will be more consistent, with much smaller delay variation.

In LLD, the non-queue-building applications (the ones that aren’t causing latency) can take a different path through the DOCSIS network and not get hung up behind the queue-building applications.  This mechanism doesn’t interfere with the way that applications go about sharing the total bandwidth of the connection. Nor does this reduce one application's latency at the expense of others. It is not a zero-sum game; rather, it is just a way of making the internet experience better for all applications.

So, LLD gives both types of applications what they want and optimizes the performance of both.  Any application that wants to be able to send big bursts of data can use the default “classic” service, and any application that can ensure that it isn’t causing queue build-up and latency can identify its packets so they use the “low latency” service. Both then share the bandwidth of the broadband connection without one getting preference over the other.
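As a rough sketch of what “identifying its packets” could look like from an application’s point of view, the snippet below marks a UDP socket’s traffic with a Non-Queue-Building (NQB) DSCP so the network can classify it into the low latency service. The specific code point and addresses are illustrative assumptions; the classifiers an operator actually honors are defined by the LLD specification and the operator’s configuration.

```python
import socket

# Illustrative DSCP for Non-Queue-Building traffic (assumption; confirm the
# value your operator's Low Latency DOCSIS configuration classifies on).
NQB_DSCP = 45

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The IP TOS byte carries the 6-bit DSCP in its upper bits, hence the shift.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, NQB_DSCP << 2)

# Small, paced game-state updates: non-queue-building by design.
sock.sendto(b"player-position-update", ("203.0.113.10", 9000))  # example address
```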

Incorporating LLD Technology

Deploying Low Latency DOCSIS in a cable operator’s network can be accomplished by field-upgrading existing DOCSIS 3.1 CMs and CMTSs with new software. Some of the low latency features are even available to customers with older (pre-DOCSIS 3.1) CMs.

The technology includes tools that enable automatic provisioning of these new services, and it also introduces new tools to report statistics of latency performance to the operator.

Next Steps

DOCSIS equipment manufacturers are beginning to develop and integrate LLD features into software updates for CMTSs and CMs, and CableLabs is hosting Interoperability Events this year and next year to bring manufacturers together to help iron out the technology kinks.

We expect these features to become available to cable operators in the next year as they prepare their network to support low latency services.

LLD provides a cost-effective means of leveraging the existing hybrid fiber-coaxial (HFC) network to provide a high-performance network for latency-sensitive services. These services will help address customers’ requirements for many years into the future, maximizing the investments that cable operators have made in their networks. The cable industry is provisioning the network with substantial bandwidth and low latency to take another leap forward with its 10G networks.

For those attending the SCTE Cable-Tec Expo in New Orleans, Greg will be presenting the details of this technology on an SCTE panel, “Low Latency DOCSIS: Current State and Future Vision,” in Room 243-244 on Monday, September 30, 2019, 3:30-4:30 PM. Hope to see you there!


Read Our Low Latency DOCSIS White Paper


10G, Reliably

Jason Rupe
Principal Architect

Jul 31, 2019

The 10G platform is going to provide reliable service. As the cable industry embarks on the development of 10G services, there is a lot of work ahead, but we already have a strong foundation of experience and technology to build upon.

The 10 Gbps goal is about performance. But it must come with low cost, high quality, and sufficient reliability. 10G services have to be easy to install reliably, remain stable and robust against cable plant variations and conditions, and provide a wealth of service flexibility so that services remain reliable under a broad set of use cases.

The Road to 10G…

At CableLabs, we’ve taken big leaps toward 10G with DOCSIS® 4.0, including Full Duplex DOCSIS, and with cable modems (CMs) that will be capable of 5 Gbps symmetrical service in the near future. To fully arrive at 10G, we need to enable 10 Gbps downstream speeds. To accomplish that, we’ll need to expand our use of available spectrum, and we’ll likely need to use that spectrum in a highly efficient manner. Pushing higher bandwidth solutions deeper into the network and closer to the customers at the edge will be required, too. We have a lot of innovation ahead of us to get to the 10G future.
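As a rough back-of-envelope sketch of what “expanding our use of available spectrum” implies (the spectral efficiency below is an assumption in the neighborhood of DOCSIS 3.1 OFDM with high-order QAM, not a committed design target):

```python
# Back-of-envelope: how much downstream spectrum might 10 Gbps require?
TARGET_GBPS = 10
EFFICIENCY_BPS_PER_HZ = 9     # assumed usable efficiency after overhead
CHANNEL_MHZ = 192             # width of one DOCSIS 3.1 OFDM channel

spectrum_mhz = TARGET_GBPS * 1e9 / EFFICIENCY_BPS_PER_HZ / 1e6
channels = spectrum_mhz / CHANNEL_MHZ
print(f"~{spectrum_mhz:.0f} MHz of spectrum, about {channels:.1f} OFDM channels")
# -> roughly 1100 MHz, on the order of six 192 MHz OFDM channels
```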

…Is Paved with Innovation

Invention often begins with an initial solution that is later repeated for verification, then validated further. That initial solution then needs to be scaled; in other words, it needs to be made repeatable, at a low cost, and with sufficient reliability.

Fortunately, DOCSIS networking is a technology with many reliability traits built in. Data are delivered reliably thanks to Forward Error Correction. Profile management can control the data rate to deliver the best possible performance without pushing it into unreliable territory. Adjustments to the connection between the cable modem termination system (CMTS) and the CM ensure that reliable transmission continues through constant environmental and network changes. And Proactive Network Maintenance (PNM) ensures that plant conditions are discoverable and can be translated into maintenance activities that further keep services reliable at low cost. The cable industry is starting on a solid foundation.
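As a hedged sketch of the profile-management idea, the snippet below picks the highest modulation order a modem’s measured signal quality can support with some margin, rather than always pushing to the maximum. The MER thresholds and margin are illustrative values, not numbers from the DOCSIS specifications or any operator’s profile management application.

```python
# Illustrative profile selection: map a cable modem's measured MER (dB) to a
# modulation order with margin, trading peak data rate for reliability.
PROFILE_THRESHOLDS = [      # (illustrative minimum MER in dB, modulation)
    (41.0, "4096-QAM"),
    (37.0, "2048-QAM"),
    (34.0, "1024-QAM"),
    (30.5, "512-QAM"),
    (27.0, "256-QAM"),
]

def select_profile(measured_mer_db: float, margin_db: float = 1.5) -> str:
    """Return the highest-order modulation supported with the given margin."""
    for min_mer, modulation in PROFILE_THRESHOLDS:
        if measured_mer_db - margin_db >= min_mer:
            return modulation
    return "64-QAM"  # conservative fallback

print(select_profile(39.2))   # -> "2048-QAM" (37.7 dB after margin)
```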

Consider one possible direction we could take on the road to 10G. As we begin to expand the frequencies that DOCSIS uses, we may need improved error correction, better profile management, or better CMTS-to-CM coordination to assure reliable services continue at expected levels. However, pushing these limits might also mean new failure modes in the plant, or greater service sensitivity to existing failure modes, thus increasing the importance of PNM. Operators should up their PNM game now, understanding that it will be an even more important element to assure a reliable 10G future.

A Super Highway in Many Directions

Because of this strong reliability foundation in cable technologies, particularly DOCSIS, we can build our 10G future with reliability in mind. Rather than simply extending our boundaries and hoping that our existing methods to assure reliable services will be sufficient, we can define solutions that bring reliability with them. By focusing simultaneously on increased performance, lower operational costs, and reliable services, we can evolve into an effective, desirable 10G future for the world.

Also, by thoughtfully choosing the technologies to develop, we can create degrees of freedom and opportunities to enhance reliability while developing 10G. This is the right approach for the industry to take because reliability can only be built into a service, not added later. By choosing to develop solutions now that expand our options for reliable services, we can enable operators to have full control of their services. To make it all work reliably, PNM will be there, along with a few other advantages to come.

 Learn More About 10G


10G Platform: Coming to Homes, Offices and Cities Near You

Phil McKinney
President & CEO

Jan 7, 2019

In just 2 years, the cable industry has made an unparalleled technological leap by increasing availability of 1 gigabit broadband Internet from only 4 percent to 80 percent of U.S. households. Today, we’re excited to announce that this accomplishment is just the first step toward realizing cable’s 10G vision in the next decade.

Is 10G Technology a Future Vision or Can I Get It Today?

10G is not a single technology; it’s the cable broadband technology platform that can handle more data from more devices 10 times faster than today’s fastest cable broadband networks. But network speed isn’t the only feature of 10G. Its reduced latency, enhanced reliability and security features will open doors to a myriad of new immersive digital experiences and other emerging technologies that will revolutionize the way we live, work, learn and play.

The foundation for 10G technology already exists. The capacity of the cable networks that now deliver 1 gigabit speeds to more than 100 million homes across America will be incrementally expanded over the next few years. Plus, cable’s footprint will allow for deployment of new technologies on a massive scale, bringing multi-gigabit speeds to more homes and businesses globally.

Limitless Possibilities Powered by the 10G Platform

The cable industry has a track record of delivering on its promises. A few years ago, when we were talking about the impact of 1 gigabit speeds on the connectivity industry, we envisioned a world of lag-free 4K streaming, blazing fast upload and download speeds, and smoother gaming—experiences that are now available to 93 percent of U.S. cable customers.

The 10G technology promise, however, takes us into a whole new realm of possibilities that will impact every aspect of our lives:

  • How We Work—Telecommuting is already favored by many businesses around the world, but the new 10G platform-powered remote presence technology will make this practice commonplace. Coworkers will be able to securely and effectively collaborate via distance VR, video walls and realistic light field displays from various locations, maximizing productivity and minimizing business expenses.
  • How We Learn—10G technology will enable the advancement of many emerging technologies, such as head-mounted displays, that can be used in the classroom to integrate VR with real-life objects. This technology can help our children engage with the physical world, distant cultures and the entire universe in new and interesting ways, revolutionizing our approach to education.
  • How We Live—The data capacity and enhanced security of multi-gigabit networks will give rise to a new wave of remote diagnostics technology. Doctors will be able to remotely monitor their patients’ vitals in real time, providing better care, quality of life and peace of mind to the elderly and their families.
  • How We Play—10G networks’ capacity and speed also come with one-tenth the latency, making sluggish connections a thing of the past. Gamers will be able to enjoy a truly seamless, life-like experience with more control and zero lag time. Plus, very low-latency networks can boost innovation and open more opportunities for VR/AR applications in other areas of our lives.

Based on the double-digit bandwidth usage growth that we continue to see every year, we know our customers and the industry are ready for the next step in network innovation. The 10G platform will enable creators and innovators to fulfill their dreams while providing reliability and security that consumers can trust. We believe that 10G is the next leap into the future, and we’re already well on our way there.

To support the rollout, Intel will deliver 10 gigabit-ready technology from the network infrastructure to home gateways. To learn more about the technologies enabling the 10G platform, click on the link below.


Learn More About 10G
