HFC Network

Testing Bandwidth Usage of Popular Video Conferencing Applications

Jay Zhu
Senior Engineer

Nov 5, 2020

This year we have seen a shift toward working and learning from home and relying more on our broadband connection. Specifically, most of us use video conferencing for work, school and everyday communications. With that in mind, we looked at how much video conferencing a broadband connection can support.

In the U.S., the Federal Communications Commission (FCC) defines broadband as a minimum of 25 Mbps downstream and 3 Mbps upstream. So, we started there. The investigation looked at how many simultaneous conferencing sessions can be supported on the access network using popular software including Google Meet, GoToMeeting, and Zoom. Data was gathered with typical settings, measuring both upstream and downstream bandwidth usage for laptops connected by Ethernet cable to a modem on a wired broadband connection. To avoid any appearance of endorsement of a particular conferencing application, we have not labeled the figures below with the specific apps under test.

Since this is CableLabs, we used DOCSIS® cable broadband technology. The gateway was a Technicolor TC8305c, a DOCSIS 3.0 modem supporting 8 downstream channels and 4 upstream channels. Note that this modem is several years old and does not use the current DOCSIS 3.1 technology. The modem was connected through the cable access network to a CommScope E6000 cable modem termination system (CMTS).

The laptops used wired Ethernet connections to the modem to ensure that no variables outside the control of the service provider would affect the speeds delivered. Conferences were set up and parameters varied while traffic flow rates were collected over time. A mix of laptops running Windows, macOS and Ubuntu was used – nothing special, just laptops that were around the lab and available for use.

Most broadband providers over-provision the speeds delivered to customers’ homes – this is for assorted reasons, including accounting for protocol overhead and ensuring headroom in the system to handle unexpected loads. For this testing, the 25/3 service was over-provisioned by 25% (i.e., configured at roughly 31.25 Mbps downstream and 3.75 Mbps upstream), a typical configuration for this service tier.

At a high level, we found that all three conferencing solutions could support at least five concurrent sessions on five separate laptops connected to the same cable modem with the above 25/3 broadband service and with all sessions in gallery view. The quality of all five sessions was good and consistent throughout, with no jitter, choppiness, artifacts, or other defects noticed during the sessions.

This research doesn’t take into account the potential external factors that can affect Internet performance in the home, from the placement of Wi-Fi routers, to building materials, to Wi-Fi interference, to the age and condition of the user’s connected devices, but it does provide a helpful illustration of the baseline capabilities of 25/3 broadband.

The data is presented below; samples were collected every 200 milliseconds using tshark (the Wireshark command-line network analyzer).
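For readers curious about this kind of measurement, the sketch below illustrates the general approach (it is not the exact tooling used to produce the figures, and the file name is a placeholder): per-packet timestamps and sizes exported by tshark are binned into 200 ms throughput samples.

```python
#!/usr/bin/env python3
"""Bin a packet capture into 200 ms throughput samples.

Illustrative sketch only. First export per-packet data with, e.g.:
  tshark -r capture.pcap -T fields -e frame.time_epoch -e frame.len > frames.txt
then aggregate it here. The input file name and interval are placeholders.
"""
from collections import defaultdict

INTERVAL = 0.2  # seconds (200 ms)

def bin_throughput(path="frames.txt"):
    buckets = defaultdict(int)              # bucket index -> total bytes
    with open(path) as f:
        for line in f:
            try:
                ts, length = line.split()
                buckets[int(float(ts) / INTERVAL)] += int(length)
            except ValueError:
                continue                    # skip malformed lines
    # Convert bytes per 200 ms bucket into Mbps
    return {i: (b * 8) / INTERVAL / 1e6 for i, b in sorted(buckets.items())}

if __name__ == "__main__":
    for i, mbps in bin_throughput().items():
        print(f"{i * INTERVAL:8.1f}s  {mbps:6.2f} Mbps")
```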

Conferencing Application: A

The chart below (Figure 1) shows access network usage for the five concurrent sessions over 300 seconds (five minutes) for one of the above conferencing applications. The blue line is the total downstream usage, and the orange line is total upstream usage. Note that the upstream usage stays below 2 Mbps over the five minutes.

Figure 2 shows the upstream bandwidth usage of the five individual conference sessions where each is below 0.5 Mbps.

Figure 3 shows the downstream bandwidth usage for the five individual conference sessions.

Conferencing Application: B

Figure 4 shows access network usage for five concurrent sessions over 300 seconds (five minutes) for the next conferencing application tested. The blue line is the total downstream usage, and the orange line is total upstream usage. Note that the upstream usage hovers around 3 Mbps as each conference session attempts to use as much upstream bandwidth as possible.

Figure 5 shows the upstream bandwidth usage of the five individual conference sessions where each is below 1 Mbps, though the individual sessions sawtooth up and down as the individual conference sessions compete for more bandwidth. This is normal behavior for applications of this type, and did not have a negative impact on stream quality.

Figure 6 shows the downstream bandwidth usage for the five individual conference sessions.

Conferencing Application: C

Figure 7 shows access network usage for the five concurrent sessions over 300 seconds (five minutes) for the third of the applications tested. The blue line is the total downstream usage, and the orange line is total upstream usage. Note that the total upstream usage hovers around 3 Mbps over the five minutes.

Figure 8 shows the upstream bandwidth usage of the five individual conference sessions where each is below 1 Mbps, though the individual sessions sawtooth up and down as the individual conference sessions compete for more bandwidth. This is normal behavior for applications of this type, and did not have a negative impact on stream quality.

Figure 9 shows the downstream bandwidth usage for the five individual conference sessions. Note the scale of this diagram is different because of higher downstream bandwidth usage.

In summary, each of the video conferencing applications supported at least five concurrent sessions over the 25/3 broadband connection. The focus of this analysis is upstream bandwidth usage, and all three video conferencing technologies manage the upstream usage to fit within the provisioned 3 Mbps broadband speed. For at least two of the conferencing applications, there was also sufficient available downstream speed to run other common applications, such as video streaming and web browsing, concurrently with the five conferencing sessions.

Areas of Future Study

Conferencing services have enhanced modes that allow for higher-definition video but also use more bandwidth. These modes place additional load on the broadband connection and may reduce the number of simultaneous conferences that can be supported.

An interesting finding is that upstream bandwidth usage out of a home can depend on how other conference participants choose to view the video. Gallery mode uses lower-bit-rate thumbnail pictures of participants and is the most efficient for a conference. “Pinning” a speaker’s video can cause higher upstream usage out of that speaker’s home. In addition, users who purchase add-on cameras that provide higher-definition video than the camera included with their laptop may see higher upstream usage.

Learn More About Broadband Network Performance

10G

CableLabs Low Latency DOCSIS® Technology Launches 10G Broadband into a New Era of Rapid Communication

Greg White
Distinguished Technologist

Sep 5, 2019

Remember the last time you waited (and waited) for a page to load?  Or when you “died” on a virtual battlefield because your connection couldn’t catch up with your heroic ambitions? Many internet users chalk those moments up to insufficient bandwidth, not realizing that latency is to blame. Bandwidth and latency are two very different things and adding more bandwidth won’t fix the internet lag problem for latency-sensitive applications. Let’s take a closer look at the difference:

  • Bandwidth (sometimes referred to as throughput or speed) is the amount of data that can be delivered across a network over a period of time (Mbps or Gbps). It is very important, particularly when your application is trying to send or receive a lot of data. For example, when you’re streaming a video, downloading music, syncing shared files, uploading videos or downloading system updates, your applications are using a lot of bandwidth.
  • Latency is the time that it takes for a “packet” of data to be sent from the sender to the receiver and for a response to come back to the sender. For example, when you are playing an online game, your device sends packets to the game server to update the global game state based on your actions, and it receives update packets from the game server that reflect the current state of all the other players. The round-trip time (measured in milliseconds) between your device and the server is sometimes referred to as “ping time.” The shorter it is, the lower the latency, and the better the experience. (A minimal sketch of measuring round-trip time follows this list.)
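As a rough illustration of what “ping time” measures, the sketch below times a TCP connection handshake, which takes approximately one network round trip. The host and port are placeholders, and this is only an approximation of a true ping.

```python
#!/usr/bin/env python3
"""Rough round-trip-time probe using TCP connection setup.

Illustrative only: completing a TCP handshake to a server approximates the
network round trip that gamers call "ping." Host and port are placeholders.
"""
import socket
import time

def tcp_rtt_ms(host="example.com", port=443, samples=5):
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass                        # handshake done = roughly one round trip
        times.append((time.perf_counter() - start) * 1000)
    return min(times), sum(times) / len(times)

if __name__ == "__main__":
    best, avg = tcp_rtt_ms()
    print(f"best {best:.1f} ms, average {avg:.1f} ms")
```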

Latency-Sensitive Applications

Interactive applications, where real-time responsiveness is required, can be more sensitive to latency than bandwidth. These applications really stand to benefit from technology that can deliver consistent low latency.

As we’ve alluded to, one good example is online gaming. In a recent survey we conducted with power users within the gaming community, network latency continually came up as one of the top issues. That’s because coordinating the actions of players in different network locations is very difficult over “laggy” connections. The emergence of cloud gaming makes this even more important, because even the responsiveness of local game controller actions depends on a full round trip across the network.

Queue Building or Not?

When multiple applications share the broadband connection of one household (e.g. several users performing different activities at the same time), each of those applications can have an impact on the performance of the others. They all share the total bandwidth of the connection, and they can all inflate the latency of the connection.

It turns out that applications that want to send a lot of data all at once do a reasonably good job of sharing the bandwidth in a fair manner, but they actually cause latency in the network when they do it, because they send data too quickly and expect the network to queue it up.  We call these “queue-building” applications. Examples are video streaming and large downloads, and they are designed to work this way.  There are also plenty of other applications that aren’t trying to send a lot of data all at once, and so don’t cause latency.  We call these “non-queue-building” applications. Interactive applications like online gaming and voice connections work this way.

The queue-building applications, like video streaming or downloading apps, get best performance when the broadband connection allows them to send their data in big bursts, storing that data in a buffer as it is being delivered.  These applications benefit from the substantial upgrades the cable industry has made to its networks already, which are now gigabit-ready. These applications are also latency-tolerant – user experiences are generally not impacted by latency.

Non-queue-building applications like online gaming, on the other hand, get the best performance when their packets don’t have to sit and wait in a big buffer along with the queue-building applications. That’s where Low Latency DOCSIS comes in.
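To make the cost of that shared buffer concrete, here is a toy calculation (illustrative numbers only, not measurements from any DOCSIS system) of what a small game packet experiences when it lands behind a bulk transfer’s backlog:

```python
#!/usr/bin/env python3
"""Toy illustration of buffer-induced latency on a shared link.

Assumptions (illustrative, not measured): a 25 Mbps link and a bulk download
that has already queued 1.5 MB in the buffer when a small game packet arrives.
"""
LINK_MBPS = 25
QUEUED_BYTES = 1_500_000      # backlog left by a queue-building flow
GAME_PACKET_BYTES = 200       # a small, latency-sensitive packet

def drain_time_ms(byte_count, link_mbps=LINK_MBPS):
    """Time for the link to serialize `byte_count` bytes, in milliseconds."""
    return byte_count * 8 / (link_mbps * 1e6) * 1000

# The game packet must wait for everything ahead of it to drain first.
wait_ms = drain_time_ms(QUEUED_BYTES)
own_ms = drain_time_ms(GAME_PACKET_BYTES)
print(f"queueing delay: {wait_ms:.0f} ms, own transmission time: {own_ms:.3f} ms")
# ~480 ms of waiting versus well under a millisecond to send the packet itself.
```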

What is Low Latency DOCSIS 3.1 and how does it work?

The latest generation of DOCSIS that has been deployed in the field—DOCSIS 3.1—experiences typical latency performance of around 10 milliseconds on the access network link. However, under heavy load, the link can experience delay spikes of 100 milliseconds or more.

Low Latency DOCSIS (LLD) technology is a set of new features, developed by CableLabs, for DOCSIS 3.1 (and future) equipment.  LLD can provide consistent low latency (as low as 1 millisecond) on the access network for the applications that need it.  The user experience will be more consistent with much smaller delay variation.

In LLD, the non-queue-building applications (the ones that aren’t causing latency) can take a different path through the DOCSIS network and not get hung up behind the queue-building applications.  This mechanism doesn’t interfere with the way that applications go about sharing the total bandwidth of the connection. Nor does this reduce one application's latency at the expense of others. It is not a zero-sum game; rather, it is just a way of making the internet experience better for all applications.

So, LLD gives both types of applications what they want and optimizes the performance of both.  Any application that wants to be able to send big bursts of data can use the default “classic” service, and any application that can ensure that it isn’t causing queue build-up and latency can identify its packets so they use the “low latency” service. Both then share the bandwidth of the broadband connection without one getting preference over the other.
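A heavily simplified way to picture this is two queues drained onto the same link, with packets marked as low-latency served ahead of the classic backlog. The sketch below is conceptual only; it is not the actual DOCSIS scheduler or packet classifier.

```python
#!/usr/bin/env python3
"""Conceptual two-queue model of the Low Latency DOCSIS idea.

Not the real DOCSIS scheduler: it simply shows that packets identified as
non-queue-building can bypass the backlog of the classic queue while both
queues still share the same link.
"""
from collections import deque

classic = deque()       # queue-building traffic (downloads, video streaming)
low_latency = deque()   # non-queue-building traffic (gaming, voice)

def enqueue(packet, is_low_latency):
    (low_latency if is_low_latency else classic).append(packet)

def dequeue():
    """Serve the low-latency queue first; otherwise serve the classic queue."""
    if low_latency:
        return low_latency.popleft()
    if classic:
        return classic.popleft()
    return None

# A burst from a download, then one game packet arriving behind it.
for i in range(5):
    enqueue(f"download-segment-{i}", is_low_latency=False)
enqueue("game-update", is_low_latency=True)

print(dequeue())   # "game-update" goes out first instead of waiting behind the burst
```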

Incorporating LLD Technology

Deploying Low Latency DOCSIS in a cable operator’s network can be accomplished by field-upgrading existing DOCSIS 3.1 CMs and CMTSs with new software. Some of the low latency features are even available to customers with older (pre-DOCSIS 3.1) CMs.

The technology includes tools that enable automatic provisioning of these new services, and it also introduces new tools to report statistics of latency performance to the operator.

Next Steps

DOCSIS equipment manufacturers are beginning to develop and integrate LLD features into software updates for CMTSs and CMs, and CableLabs is hosting Interoperability Events this year and next year to bring manufacturers together to help iron out the technology kinks.

We expect these features to become available to cable operators in the next year as they prepare their network to support low latency services.

LLD provides a cost-effective means of leveraging the existing hybrid fiber-coaxial (HFC) network to provide a high-performance network for latency-sensitive services. These services will help address customers’ requirements for many years into the future, maximizing the investments that cable operators have made in their networks. The cable industry is provisioning the network with substantial bandwidth and low latency to take another leap forward with its 10G networks.

For those attending the SCTE Cable-Tec Expo in New Orleans, Greg will be presenting the details of this technology on the SCTE panel “Low Latency DOCSIS: Current State and Future Vision” in Room 243-244 on Monday, September 30, 2019, 3:30–4:30 PM. Hope to see you there!


Read Our Low Latency DOCSIS White Paper

Wireless

Moving Beyond Cloud Computing to Edge Computing

Omkar Dharmadhikari
Wireless Architect

May 1, 2019

In the era of cloud computing—a predecessor of edge computing—we’re immersed with social networking sites, online content and other online services giving us access to data from anywhere at any time. However, next-generation applications focused on machine-to-machine interaction with concepts like internet of things (IoT), machine learning and artificial intelligence (AI) will transition the focus to “edge computing” which, in many ways, is the anti-cloud.

Edge computing is where we bring the power of cloud computing closer to the customer premises at the network edge to compute, analyze and make decisions in real time. The goal of moving closer to the network edge—that is, within miles of the customer premises—is to boost the performance of the network, enhance the reliability of services and reduce the cost of moving data computation to distant servers, thereby mitigating bandwidth and latency issues.

The Need for Edge Computing

The growth of the wireless industry and new technology implementations over the past two decades have driven a rapid migration from on-premises data centers to cloud servers. However, with the increasing number of Industrial Internet of Things (IIoT) applications and devices, performing computation at data centers or cloud servers may not be an efficient approach. Cloud computing requires significant bandwidth to move data from the customer premises to the cloud and back, which further increases latency. With stringent latency requirements for IIoT applications and devices requiring real-time computation, the computing capability needs to be at the edge—closer to the source of data generation.

What Is Edge Computing?

The word “edge” refers to the geographic distribution of network resources. Edge computing makes it possible to perform data computation close to the data source, instead of going through multiple hops and relying on the cloud network to perform the computing and relay the data back. Does this mean we don’t need the cloud network anymore? No, but it means that instead of data traversing the cloud, the cloud is now closer to the source generating the data.

Edge computing refers to sensing, collecting and analyzing data at the source of data generation, and not necessarily at a centralized computing environment such as a data center. Edge computing uses digital devices, often placed at different locations, to transmit the data in real time or later to a central data repository. Edge computing is the ability to use distributed infrastructure as a shared resource, as the figure below shows.

Edge computing is an emerging technology that will play an important role in pushing the frontier of data computation to the logical extremes of a network.

Key Drivers of Edge Computing:

  • Plummeting cost of computing elements
  • Smart and intelligent computing abilities in IIoT devices
  • A rise in the number of IIoT devices and ever-growing demand for data
  • Technology enhancements with machine learning, artificial intelligence and analytics

Benefits of Edge Computing

Computational speed and real-time delivery are the most important features of edge computing, allowing data to be processed at the edge of the network. The benefits of edge computing manifest in these areas:

  • Latency

Moving data computation to the edge reduces latency. Without edge computing—when data must be computed at a server located far from the customer premises—latency varies with available bandwidth and server location. With edge computing, data does not have to traverse a network to a distant server or cloud for processing, which matters in situations where even a few milliseconds of latency can be untenable. With computation performed at the network edge, messaging between the distant server and edge devices is reduced, decreasing the delay in processing the data.
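To put the distance factor in perspective, here is a back-of-the-envelope sketch (illustrative distances, not measurements) of the round-trip floor that fiber propagation alone imposes before any queueing or processing delay is added:

```python
#!/usr/bin/env python3
"""Back-of-the-envelope propagation-delay comparison.

Illustrative only: light travels roughly 200 km per millisecond in optical
fiber, so server distance alone sets a floor on round-trip time.
"""
FIBER_KM_PER_MS = 200  # approximate speed of light in glass

def rtt_floor_ms(one_way_km):
    return 2 * one_way_km / FIBER_KM_PER_MS

for label, km in [("distant cloud region", 1500),
                  ("regional data center", 150),
                  ("edge site near the customer", 15)]:
    print(f"{label:30s} ~{rtt_floor_ms(km):5.2f} ms round-trip floor")
```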

  • Bandwidth

Pushing processing to edge devices, instead of streaming data to the cloud for processing, decreases the need for high bandwidth and improves response times. Bandwidth is a key and scarce resource, so reducing the load that high-bandwidth traffic places on the network helps improve spectrum utilization.

  • Security

From a certain perspective, edge computing provides better security because data does not traverse over a network, instead staying close to the edge devices where it is generated. The less data computed at servers located away from the source or cloud environments, the less the vulnerability. Another perspective is that edge computing is less secure because the edge devices themselves can be vulnerable, putting the onus on operators to provide high security on the edge devices.

What Is Multi-Access Edge Computing (MEC)?

MEC enables cloud computing at the edge of the cellular network with ultra-low latency. It allows running applications and processing data traffic closer to the cellular customer, reducing latency and network congestion. Computing data closer to the edge of the cellular network enables real-time analysis for providing time-sensitive response—essential across many industry sectors, including health care, telecommunications, finance and so on. Implementing distributed architectures and moving user plane traffic closer to the edge by supporting MEC use cases is an integral part of the 5G evolution.

Edge Computing Standardization

Various groups in the open source and standardization ecosystem are actively looking into ways to ensure interoperability and smooth integration of edge computing elements. These groups include:

  • The Edge Computing Group
  • CableLabs SNAPS programs, including SNAPS-Kubernetes and SNAPS-OpenStack
  • OpenStack’s StarlingX
  • Linux Foundation Networking’s OPNFV, ONAP
  • Cloud Native Compute Foundation’s Kubernetes
  • Linux Foundation’s Edge Organization

How Can Edge Computing Benefit Operators?

  • Dynamic, real-time and fast data computing closer to edge devices
  • Cost reduction with fewer cloud computational servers
  • Spectral efficiency with lower latency
  • Faster traffic delivery with increased quality of experience (QoE)

Conclusion

The adoption of edge computing has been rapid, with increases in IIoT applications and devices, thanks to myriad benefits in terms of latency, bandwidth and security. Although it’s ideal for IIoT, edge computing can help any applications that might benefit from latency reduction and efficient network utilization by minimizing the computational load on the network to carry the data back and forth.

Evolving wireless technology has enabled organizations to perform faster and more accurate data computing at the edge. Edge computing offers benefits to wireless operators by enabling faster decision making and lowering costs without the need for data to traverse the cloud network. It lets wireless operators place computing power and storage capabilities directly at the edge of the network. As 5G evolves and we move toward a connected ecosystem, wireless operators are challenged to maintain the status quo of operating 4G alongside 5G enhancements such as edge computing, NFV and SDN. The success of edge computing cannot be predicted (the technology is still in its infancy), but the benefits might provide wireless operators with a critical competitive advantage in the future.

How Can CableLabs Help?

CableLabs is a leading contributor to the European Telecommunications Standards Institute NFV Industry Specification Group (ETSI NFV ISG). Our SNAPS™ program is part of the Open Platform for NFV (OPNFV). We wrote the OpenStack API abstraction library “SNAPS-OO” and contributed it to the OPNFV project at the Linux Foundation, leveraging object-oriented software development practices to automate and validate applications on OpenStack. We also added Kubernetes support with SNAPS-Kubernetes, introducing a Kubernetes stack to provide CableLabs members with open source software platforms. SNAPS-Kubernetes is a certified CNCF Kubernetes installer targeted at lightweight edge platforms, scalable and able to efficiently manage failovers and software updates. It is optimized and tailored to address the needs of the cable industry and general edge platforms. Edge computing on Kubernetes is emerging as a powerful way to share, distribute and manage data on a massive scale in ways that cloud or on-premises deployments cannot necessarily provide.



Wireless

Sharing Bandwidth: Cyclic Prefix Elimination

Tom Williams
Principal Architect, Network Technologies

Dec 20, 2017

Unfortunately, there is only so much over-the-air wireless bandwidth, and it must be shared between a lot of folks. And the situation is not getting any better.  While you can usually run another wire or fiber optic cable between two locations to get more bandwidth, if you have a wireless application you must share this scarce resource.

New applications, such as IoT (internet of things), 3-D virtual reality headsets, and new cell phone applications are demanding more and more bandwidth. With cable subscribers watching video on portable devices such as tablets and phones, interference problems (such as frozen pictures and tiling) are becoming more frequent. More than half of customer complaints are caused by wireless problems, and the most common problem is Wi-Fi interference, frequently from a neighbor’s service.

Solutions to the Problem

  • One solution to the bandwidth problem is to use cellular technology and make the cell size smaller. Have you ever noticed that out in the country cell towers are tall for a long reach, but in crowded cities they are much closer to the ground, with the antennas pointed downward? This reduces cell diameter in highly populated areas, allowing bandwidth reuse in non-overlapping cells. Transmitted power is also reduced for small cells to limit signal reach, thus reducing interference. However, large numbers of cell sites are expensive to deploy and maintain - and the bandwidth itself can be expensive. In the latest FCC spectrum auction, the 600 MHz band in the United States was sold for almost $20 billion!
  • Other techniques to increase capacity include steerable beams and a technique called MIMO (multiple input, multiple output). MIMO reuses the spectrum by transmitting two or more distinct signals over the same air on physically separated antennas. At the receive site, sophisticated signal processing using two or more antennas separates the signals.

CableLabs Innovation: Cyclic Prefix Elimination

CableLabs researchers are constantly looking for efficiency improvements, and they have found one way to make wireless signals use less bandwidth. This method, called “OFDM CP Elimination” (the full mouthful is Orthogonal Frequency Division Multiplex Cyclic Prefix Elimination!), allows the data to be sent in less time, increasing the resolution of pictures and reducing the time for screen updates. The method eliminates an overhead called the “cyclic prefix,” thereby improving efficiency by up to 25%. A side benefit of finishing transmissions earlier is increased battery life for handheld devices.
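To see where a figure like 25% can come from, here is a worked example assuming a Wi-Fi-like OFDM numerology (a 3.2 µs useful symbol plus a 0.8 µs cyclic prefix); the numbers are illustrative, not specific to any deployed system.

```python
#!/usr/bin/env python3
"""Worked example of cyclic-prefix overhead, assuming Wi-Fi-like numbers.

Assumption (illustrative): each OFDM symbol is 3.2 us of useful data
preceded by a 0.8 us cyclic prefix, which carries no new information.
"""
USEFUL_US = 3.2   # portion of the symbol carrying data
CP_US = 0.8       # cyclic prefix (pure overhead)

symbol_us = USEFUL_US + CP_US
overhead = CP_US / symbol_us                  # fraction of airtime spent on the CP
throughput_gain = symbol_us / USEFUL_US - 1   # gain if the CP is eliminated

print(f"airtime spent on CP: {overhead:.0%}")          # 20%
print(f"throughput gain    : {throughput_gain:.0%}")   # 25%
```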

Interested in a deep dive into cyclic prefix elimination? Check out my video on the subject, my blog post "Getting Rid of a Big Communications Tax on OFDM Transmissions" and my technical paper in the December issue of the SCTE ISBE Journal titled "OFDM Cyclic Prefix Elimination."

--

CableLabs innovates to help our member companies provide better services to their customers, including higher data rates, higher reliability and lower latency. Subscribe to our blog to find out more.

Consumer

Li-Fi – A Bright Future for Home Networks

Josh Redmore
Principal Architect, Wireless Research & Development

Mar 8, 2016

At CableLabs, we are continually researching new methods of in-home wireless network distribution, and one exciting new contender is Li-Fi.

What is Li-Fi?

Li-Fi is the modulation of a free-space beam of light in order to transmit a signal. It can be thought of as analogous to Wi-Fi, just in a much higher frequency band (430 – 770 THz vs. 2.4 GHz). We’ve actually been using this same basic concept for over 100 years, in the form of Morse code being transmitted from ship to ship via signal lamps.

The Shannon–Hartley theorem allows us to calculate the maximum bitrate of a communications channel with a specific bandwidth. Since capacity increases with bandwidth, we can immediately see the vast potential of Li-Fi, which has ~340 THz to work with in the visible light frequencies. Compare that with Wi-Fi, which has less than 1 GHz available and is able to provide over a gigabit per second, and you can see the potential for ultra-high speed in-home networks. As virtual and augmented reality achieve widespread adoption, these ultra-high speeds will become mandatory.
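As a rough illustration of the Shannon–Hartley comparison, the sketch below applies C = B·log2(1 + S/N) to both channels; the signal-to-noise ratio and bandwidths are assumed, illustrative values, so the point is only that capacity scales directly with bandwidth.

```python
#!/usr/bin/env python3
"""Shannon-Hartley capacity comparison: Wi-Fi spectrum vs visible light.

Illustrative numbers only: the same 20 dB signal-to-noise ratio is assumed
for both channels, so the theoretical capacity scales with bandwidth.
"""
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

SNR = 10 ** (20 / 10)     # 20 dB, expressed as a linear ratio
WIFI_BW = 1e9             # roughly the total spectrum available to Wi-Fi, in Hz
LIFI_BW = 340e12          # ~340 THz of visible-light spectrum, in Hz

for name, bw in [("Wi-Fi spectrum", WIFI_BW), ("visible light", LIFI_BW)]:
    c = shannon_capacity_bps(bw, SNR)
    print(f"{name:15s} ~{c / 1e9:,.0f} Gbps theoretical ceiling")

print(f"bandwidth ratio: ~{LIFI_BW / WIFI_BW:,.0f}x")
```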

To give you an idea of just how large a difference this increase in bandwidth is, it’s close to the same difference between the mass of the Earth and the Sun!

Where are we going?

The ideal product is a Li-Fi enabled light bulb in the same form factor that consumers are used to now – and with the same ease of installation. Li-Fi could be a viable solution to improving the coverage and reliability of a home network by reusing existing light fixtures. Just screw the bulb in and you’ve expanded your network.


The other side of the connection is the endpoint device, and a number of consumer device manufacturers are beginning the process of integrating Li-Fi. Apple is exploring adding Li-Fi to their mobile devices, which is a natural product evolution, as the majority of smartphones already contain the two things needed for Li-Fi, a light detector (the camera) and a light emitter (the camera flash).

Where are we today?


CableLabs has fully functioning prototypes of a single-channel Li-Fi system, which have achieved data rates of around 300 Mbps. The system is free from Wi-Fi interference and simple to use. Currently the devices need to be directly in line with each other, so research into improving the signal-to-noise ratio (SNR) is needed before we can achieve omnidirectional Li-Fi.

We’ve also done extensive research into the necessary backhaul systems that will make Li-Fi a useful reality, such as next-generation powerline networking, which uses your existing home electrical wiring as a network. By networking the Li-Fi bulbs together, you can achieve seamless, whole-home coverage. Anywhere there is light, there is connectivity.

The cable industry, with the introduction of DOCSIS 3.1 and beyond, continues to increase Internet connection speeds. These speeds are beyond what any current-generation in-home wireless system can handle, so research into technologies like Li-Fi will play a vital part in ensuring customers are able to fully utilize their connections.

Josh Redmore is the Lead Architect in the Wireless Network Optimization group at CableLabs.
Follow him on Twitter.
