Hourly Data Consumption of Popular Video Conferencing Applications
Building on our prior work, this investigation explores the hourly data consumption of popular video conferencing applications: Google Meet, GoToMeeting, Microsoft Teams, and Zoom. As video conferencing applications have become an integral part of our daily lives, we wanted not only to better understand their bandwidth usage, as previously explored, but also their total data consumption. This investigation provides a first step toward better understanding that latter dimension. To avoid any appearance of endorsement of a particular conferencing application, we have not labeled the figures below with the specific apps under test. In short, we observed that a single user on a video conferencing application consumed roughly one gigabyte per hour, compared with about three gigabytes per hour when streaming an HD movie or other video. However, we did observe substantial variance in hourly data consumption based on the specific app and end-user device.
Key Components of the Testing Environment
Much like our prior work on bandwidth usage, the test setup used typical settings and measured both upstream and downstream data consumption from laptops connected to a cable broadband internet service. We used the same network equipment as in our November and February blog posts, including the same DOCSIS 3.0 Technicolor TC8305c gateway, supporting eight downstream channels and four upstream channels, and the same CommScope E6000 cable modem termination system (CMTS). The cable network was configured to provide 50 Mbps downstream and 5 Mbps upstream broadband service, overprovisioned by 25 percent.
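As a quick illustration of that provisioning arithmetic, the configured rates work out as follows (a hypothetical helper, not part of the test tooling):

```python
# Hypothetical helper illustrating the 25 percent overprovisioning
# applied to the nominal 50/5 Mbps service tier described above.

def provisioned_rate(nominal_mbps: float, overprovision_pct: float = 25.0) -> float:
    """Return the configured rate for a nominal service rate."""
    return nominal_mbps * (1 + overprovision_pct / 100.0)

down = provisioned_rate(50.0)  # 62.5 Mbps configured downstream
up = provisioned_rate(5.0)     # 6.25 Mbps configured upstream
print(f"downstream: {down} Mbps, upstream: {up} Mbps")
```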
The data gathering scenario:
- 10 people, each on their individual laptops, participated in the conference under test
- One person on the broadband connection under test, using either a lower-cost or a higher-cost laptop. The other nine participants were not using the broadband connection under test.
- For the laptop under test, the participant used the video conferencing application for the laptop’s operating system, rather than using the video conferencing application through the web browser.
- Total data consumption was recorded for the laptop using the broadband connection under test.
For all 10 participants, cameras and microphones were on. Conference applications were set to "gallery mode" with thumbnails of each person filling the screen, no slides were presented and the video conference sessions just included people talking.
The laptop under test used a wired connection to the cable modem to ensure that no variables outside the control of the service provider would impact broadband performance. Most notably, by using a wired connection, we removed the variable of Wi-Fi performance from our test setup. During data collection, the conference app was the only app open on the laptop under test.
Video conferencing sessions were set up and data consumption was measured over time. We collected 10 minutes of data for each conferencing session under test to calculate the total consumption for one hour. The charts below show the data consumed for each of the 10 minutes of the conference session. During the conference there was movement and discussion to keep the video and audio streams active throughout the period of data collection.
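The normalization from a 10-minute collection to an hourly figure is straightforward; a minimal sketch, using hypothetical per-minute values rather than our measured data:

```python
# Sketch of the normalization described above. The per-minute values
# (Megabytes) are hypothetical placeholders, not our measured data.

minute_mb = [16, 17, 15, 16, 18, 17, 16, 15, 17, 16]  # 10 one-minute samples

avg_mb_per_minute = sum(minute_mb) / len(minute_mb)
gb_per_hour = avg_mb_per_minute * 60 / 1000  # 60 minutes/hour, 1000 MB/GB

print(f"{gb_per_hour:.2f} GB/hour")
```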
For each test scenario, only one laptop was connected at a time to the broadband connection under test. Our goal was to measure the data consumption of one conferencing user on the broadband connection. The other conference participants were on the internet; they were not in the lab. Once again, we used TShark (a popular, widely used network protocol analyzer) to capture and measure the data.
For the laptop under test, we chose two that have quite different capabilities. The first was a low-cost laptop with an 11-inch screen, like the ones students are often provided by school districts for at-home learning. The second was a higher-cost laptop with a 15-inch screen, like what we often see in an enterprise environment. Note the two laptops not only have quite different hardware components (e.g., CPU, graphics processors, memory, cameras, screens), but also have different operating systems. Once again, to avoid any appearance of endorsement, we are not identifying the specific laptops used.
Table 1 shows hourly data consumption (combining both upstream and downstream) for the laptop under test, normalized to gigabytes per hour. The table provides the data consumption for the low-cost and higher-cost laptops in each scenario with the four conferencing applications.
Table 1: Video Conferencing App Hourly Data Consumption for Each User (Gigabytes/hour)
The following figures show the data consumption, in Megabytes, for each minute of the 10-minute data collection for each of the permutations of our testing.
A few notes on the charts:
- There was only one client behind the cable modem.
- Each bar represents one minute of data consumption.
- Each bar shows total consumption and includes both the upstream and downstream, and both audio and video, added together.
- App A is blue in each chart; App B is green; App C is orange; and App D is purple.
- These charts show real-time consumption, measured in Megabytes per minute, to illustrate consumption over time.
Figure 1 shows the data consumed when using the lower-cost laptop in the 10-person meetings.
Figure 2 shows data consumed each minute for each of the four apps when the higher-cost laptop was in the 10-person meetings.
Figure 3 shows the data consumed each minute using App A and compares the two laptops used for data collection. For each minute, the bar to the left is the lower-cost laptop and the bar to the right is the higher-cost laptop.
Figure 4 shows the data consumed each minute using App B and compares the two laptops. The bar to the left is the lower-cost laptop and the bar to the right is the higher-cost laptop.
Figure 5 shows the data consumed each minute using App C and compares the two laptops. The bar to the left is the lower-cost laptop and the bar to the right is the higher-cost laptop.
Figure 6 shows the data consumed each minute using App D and compares the two laptops. The bar to the left is the lower-cost laptop and the bar to the right is the higher-cost laptop.
A. Data Consumption Varies: The first takeaway is that different apps consume different amounts of data, as shown in Table 1: from 0.5 GBytes per hour up to 3.4 GBytes per hour for video conferences using the different laptops, the same broadband connection, the same general setup (e.g., gallery view), and the same people doing the same things on camera.
- For a given app on a given laptop, data consumption was consistent over the 10-minute collection time.
- App D on the higher-cost laptop consumed the most data.
- With App D on the lower-cost laptop, there was video quality degradation. We confirmed the broadband connection was operating as expected and was not the cause of the video degradation. Rather, it appeared that the combination of the hardware and operating system of the lower-cost laptop was unable to meet the resource requirements of App D.
- App B consistently consumed less data than the other apps, regardless of scenario.
B. Comparing Laptops: In Table 1, the two columns of data show the differences between the lower-cost and higher-cost laptops. On the lower-cost laptop, Apps A, B and C consumed about the same amount of data on an hourly basis.
C. Comparing Laptops: The second column of data shows that every app consumed more data on the higher-cost laptop than on the lower-cost laptop. This difference implies that when using the native conferencing app (not a web browser), the processing power available in the laptop may be a determining factor in consumption.
D. Comparing Apps: App C was the most consistent in data consumption regardless of the laptop used. The other conference applications noticeably consumed more on the higher-cost laptop.
In summary, we observed a nearly 7X variation in data consumption with a very limited exploration of just two variables: laptop and video conferencing application. Notably, however, even at its highest, data consumption was of the same magnitude as that of an HD video stream.
This is an area ripe for further research and study, both to more comprehensively explore these variables (e.g., other device types, larger meetings) and to explore other variables that may meaningfully influence data consumption.
Expanded Testing of Video Conferencing Bandwidth Usage Over 50/5 Mbps Broadband Service
As working from home and remote schooling remain the norm for most of us, we wanted to build on and extend our prior investigation of the bandwidth usage of popular video conferencing applications. In this post, we examine the use of video conferencing applications over a broadband service of 50 Mbps downstream and 5 Mbps upstream (“50/5 broadband service”). The goal remains the same, looking at how many simultaneous conferencing sessions can be supported on the access network using popular video conferencing applications. As before, we examined Google Meet, GoToMeeting, and Zoom, and this time we added Microsoft Teams and an examination of a mix of these applications. To avoid any appearance of endorsement of a particular conferencing application, we haven’t labeled the figures below with the specific apps under test.
We used the same network equipment from November. This includes the same cable equipment as the previous blog -- the same DOCSIS 3.0 Technicolor TC8305c gateway, supporting 8 downstream channels and 4 upstream channels, and the same CommScope E6000 cable modem termination system (CMTS).
The same laptops were also used, though this time we increased the number to 10. The laptops ran a mix of Windows, MacOS and Ubuntu – nothing special, just machines that were around the lab and available for use. All used wired Ethernet connections through a switch to the modem to ensure that no variables outside the control of the broadband provider would impact the speeds delivered (e.g., placement of the Wi-Fi access point, as noted below). Conference sessions were set up and parameters varied while traffic flow rates were collected over time. Throughout testing, we ensured there was active movement in view of each laptop’s camera to more fully simulate real-world use cases.
As in the previous blog, this research doesn’t take into account the potential external factors that can affect Internet performance in a real home -- from the use of Wi-Fi, to building materials, to Wi-Fi interference, to the age and condition of the user’s connected devices -- but it does provide a helpful illustration of the baseline capabilities of a 50/5 broadband service.
As before, the broadband speeds were over-provisioned. For this testing, the 50/5 broadband service was over provisioned by 25%, a typical configuration for this service tier.
First things first: We repeated the work from November using the 25/3 broadband service, and happily, those results were re-confirmed. Verifying this baseline was important to validate the setup.
Next, we moved to the 50/5 broadband service and got to work. At a high level, we found that all four conferencing solutions could support at least 10 concurrent sessions on 10 separate laptops connected to the same cable modem with the aforementioned 50/5 broadband service and with all sessions in gallery view. The quality of all 10 sessions was good and consistent throughout, with no jitter, choppiness, artifacts or other defects noticed during the sessions. Not surprisingly, with the increase in the nominal upstream speed from 3 Mbps to 5 Mbps, we were able to increase the number of concurrent sessions from the 5 we listed in the November blog to 10 sessions with the 50/5 broadband service under test.
The data presented below represents samples that were collected every 200 milliseconds over a 5-minute interval (300 seconds) using tshark (the Wireshark network analyzer).
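As a rough sketch of how such samples translate into the rate plots below, per-interval byte counts can be converted to Mbps like this (the 50,000-byte sample is a hypothetical illustration, not a measured value):

```python
# Sketch of how per-sample byte counts from a capture turn into rate
# curves: byte counts taken every 200 ms become instantaneous Mbps.

SAMPLE_INTERVAL_S = 0.2                             # 200 ms between samples
DURATION_S = 300                                    # 5-minute collection window
N_SAMPLES = round(DURATION_S / SAMPLE_INTERVAL_S)   # 1,500 samples per run

def to_mbps(sample_bytes: int, interval_s: float = SAMPLE_INTERVAL_S) -> float:
    """Convert bytes observed during one interval to megabits per second."""
    return sample_bytes * 8 / interval_s / 1_000_000

print(N_SAMPLES, to_mbps(50_000))  # one 200 ms sample of 50,000 bytes ≈ 2 Mbps
```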
Conferencing Application: A
The chart below (Figure 1) shows total access network usage for the 10 concurrent sessions over 300 seconds (5 minutes) while using one of the above conferencing applications. The blue line is the total downstream usage, and the orange line is total upstream usage. Note that the total upstream usage stays around 2.5 Mbps, which may be a result of running 10 concurrent sessions. Also, the downstream usage stays, on average, around 15 Mbps, which leaves roughly 35 Mbps of downstream headroom for other services, such as streaming video, that can use the broadband connection at the same time.
Figure 2 shows the upstream bandwidth usage of the 10 concurrent sessions and it appears that these individual sessions are competing amongst themselves for upstream bandwidth. However, all upstream sessions typically stay well below 0.5 Mbps -- these streams are all independent, with the amount of upstream bandwidth usage fluctuating over time.
Figure 3 shows the downstream bandwidth usage for the 10 individual conference sessions. Each conference session typically uses between 1 to 2 Mbps. As previously observed with this application, there are short periods of time when some of the sessions use more downstream bandwidth than the typical 1 to 2 Mbps.
Conferencing Application: B
Figure 4 shows access network usage for 10 concurrent sessions over 300 seconds (5 minutes) for the second conferencing application tested. The blue line is the total downstream usage, and the orange line is total upstream usage. Note that the total upstream usage hovers around 3.5 Mbps. The total downstream usage is very tight, right above 10 Mbps.
Figure 5 shows the upstream bandwidth usage of the 10 individual conference sessions, where all sessions but one stay well below 1 Mbps, and that one session sits right at 2 Mbps. We don’t have an explanation for why that session (blue) is so much higher than the others, but it falls well within the available upstream bandwidth.
Figure 6 shows that the downstream bandwidth usage for the 10 individual conference sessions clusters consistently around 1 Mbps.
Conferencing Application: C
Figure 7 shows access network usage for the 10 concurrent sessions over 300 seconds (5 minutes) for the third application tested. The blue line is the total downstream usage, and the orange line is total upstream usage. Note that the total upstream usage hovers right at 3 Mbps over the 5 minutes.
Figure 8 shows the upstream bandwidth usage of the 10 individual conference sessions where all stay well below 1 Mbps.
Figure 9 shows the downstream bandwidth usage for the 10 individual conference sessions. These sessions appear to track each other very closely around 2 Mbps, which matches Figure 7 showing aggregate downstream usage around 20 Mbps.
Conference Application: D
Figure 10 shows access network usage for the 10 concurrent sessions over 300 seconds (5 minutes) for the fourth application tested. The blue line is the total downstream usage, and the orange line is total upstream usage. Note that the total upstream usage hovers right at 5 Mbps over the 5 minutes, and no visible degradation to the conferencing sessions was observed.
Figure 11 shows the upstream bandwidth usage of the 10 individual conference sessions, where there is some variability in bandwidth consumed per session. One session (red) consistently uses more upstream bandwidth than the other sessions but remained well below the available upstream bandwidth.
Figure 12 shows the downstream bandwidth usage for the 10 individual conference sessions. These sessions show two groups, with one group using less than 1 Mbps of bandwidth and the second group using consistently between 2 Mbps and 4 Mbps of bandwidth.
Running All Four Conference Applications Simultaneously
In this section, we examine the bandwidth usage of all four conferencing applications running simultaneously. The test consists of three concurrent sessions from two of the applications and two concurrent sessions from the other two applications (once again a total of 10 conference sessions running simultaneously). The goal is to observe how the applications may interact in the scenario where members of the same household are using different conference applications at the same time.
Figure 13 shows access network usage for these 10 concurrent sessions over 300 seconds (5 minutes). The blue line is the total downstream usage, and the orange line is total upstream usage. Note that the total upstream usage once again hovers around 5 Mbps without any visible degradation to the conferencing sessions, and the downstream usage is pretty tight right above 10 Mbps.
Figure 14 shows the upstream bandwidth usage of the 10 individual conference sessions, with the four different apps running concurrently, where several distinct groupings of sessions are visible. One session (red) consumes the most upstream bandwidth, averaging around 2 Mbps, whereas the other sessions use less, and some much less.
Figure 15 shows the downstream bandwidth usage for the 10 individual conference sessions across the four apps and, again, there are different clusters of sessions, as each of the four apps follows its own algorithms.
In summary, with a 50/5 broadband service, each of the video-conferencing applications supported at least 10 concurrent sessions, both when using a single conferencing application and when using a mix of these four applications. In all cases, the quality of the 10 concurrent sessions was good and consistent throughout. The 5 Mbps of nominal upstream bandwidth was sufficient to support the conferencing sessions without visible degradation, and there was more than sufficient available downstream bandwidth to run other common applications, such as video streaming and web browsing, concurrently with the 10 conferencing sessions.
Cable Broadband: From DOCSIS 3.1® to DOCSIS 4.0®
In 1997, CableLabs released the very first version of the Data Over Cable Service Interface Specification (DOCSIS® technology), which enabled broadband internet service over Hybrid Fiber-Coaxial (HFC) networks. Ever since, we’ve been making improvements, greatly enhancing network speed, capacity, latency, reliability and security with every new version. Today, cable operators use DOCSIS 3.1 technologies to make 1 Gbps cable broadband services available to 80% of U.S. homes, easily enabling 4K video, seamless multi-player online gaming, video conferencing and much more. Although there is still significant runway for DOCSIS 3.1, CableLabs has been hard at work developing the next version – DOCSIS 4.0, which was officially released in March of 2020 and further advances the performance of HFC networks. Let’s take a look.
First, let’s talk about upstream speeds. DOCSIS 4.0 technology will quadruple the upstream capacity of the HFC network to 6 Gbps, compared with the 1.5 Gbps available with DOCSIS 3.1. While current cable customers still download significantly more data than they upload, upstream data usage is on the rise. In the near future, advanced video collaboration tools, VR and more will require even more upstream capacity. DOCSIS 4.0 also provides more options for operators to increase downstream speeds, with up to 10 Gbps of capacity. It has been designed to support the widespread availability of symmetric multigigabit speed tiers through full-duplex and extended-spectrum technologies that move us closer to our 10G goal.
In addition to faster speeds, DOCSIS 4.0 will deliver stronger network security through enhanced authentication and encryption capabilities, and greater reliability through improvements to Proactive Network Maintenance (PNM). It is a great leap toward 10G, setting the stage for a series of subsequent enhancements that will work together to help us build the future we have always dreamed of.
Upstream: How Much Speed Do You Need?
In the middle of a global pandemic, with people working and playing on their various devices at home, internet usage is surging—whether because of virtual meetings, streaming entertainment or mindlessly scrolling through apps. And it’s not just the heavily used downstream direction that’s seeing growth; upstream usage is increasing as well.
What Is Upstream?
Upstream is the direction in which data flows from the user to the network. When we play an online multiplayer video game or conduct a web conferencing call, we’re using the upstream channel. According to the NCTA’s COVID-19 dashboard, upstream internet traffic through late July was up 22.1 percent compared with pre-pandemic levels.
Cable networks have ably handled this increased traffic, aided by the fact that popular upstream-dependent applications require relatively modest bandwidth. A web audio conference call requires a modest 0.03 to 0.15 Mbps in bandwidth, whereas a video call may require up to 3 Mbps. Given that nearly all U.S. households passed by cable networks have currently available upstream speeds of at least 20 Mbps, there’s sufficient capacity to meet today's demands.
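A back-of-the-envelope check of those figures, using the bandwidth numbers quoted above (the calculation itself is only illustrative):

```python
# Back-of-the-envelope capacity check using the figures quoted above.

UPSTREAM_MBPS = 20.0       # minimum widely available upstream speed
AUDIO_CALL_MBPS = 0.15     # top of the quoted audio-conference range
VIDEO_CALL_MBPS = 3.0      # the quoted upper bound for a video call

max_audio_calls = int(UPSTREAM_MBPS // AUDIO_CALL_MBPS)
max_video_calls = int(UPSTREAM_MBPS // VIDEO_CALL_MBPS)

print(max_audio_calls, max_video_calls)  # 133 audio calls or 6 video calls
```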
Your cable broadband internet connection can handle it today and we continue to advance cable network technology to ensure we're also ready for tomorrow.
Getting Rid of a Big Communications Tax on OFDM Transmissions
You can find the background information for this article in the post "Sharing Bandwidth: Cyclic Prefix Elimination."
Most wireless transmissions use a modulation technology called OFDM (orthogonal frequency-division multiplexing). This method was invented by Chang and Saltzberg at Bell Labs in the 1960s, but it was not widely commercialized until the 1990s, when faster signal-processing chips became available. This modulation method has now been adopted into DOCSIS 3.1 technology.
In essence, data symbols are formed into blocks composed of a large number of cosine waves of differing magnitude and phase. Because the waves all have an integer number of cycles in the block, they do not interfere with each other; that is, they have the mathematical property of orthogonality.
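That orthogonality property is easy to verify numerically; here is a small sketch, with arbitrary illustrative block length and cycle counts:

```python
import numpy as np

# Numerical check of the orthogonality property described above:
# cosine waves with an integer number of cycles per block do not
# interfere. Block length and cycle counts are arbitrary choices.

N = 256          # samples per block
n = np.arange(N)

def wave(cycles: int) -> np.ndarray:
    """A cosine wave with an integer number of cycles in the block."""
    return np.cos(2 * np.pi * cycles * n / N)

cross = np.dot(wave(3), wave(7))        # different cycle counts: ~0
self_power = np.dot(wave(5), wave(5))   # same wave with itself: N/2

print(round(cross, 6), round(self_power, 6))
```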
This modulation technique excels when there are a lot of reflections, a.k.a. multipath, echoes, or dispersion. For this modulation to be successful, a portion of the transmitted signal must be copied from the back and pasted onto the front of the transmitted block. This is illustrated in Figure 1. The copied and pasted signal is called a cyclic prefix (CP), or a cyclic extension, or sometimes a guard interval. The function of the CP is to allow any echoes to die out before the remainder of the block is analyzed.
While this modulation technique works well, the CP is pure overhead, a waste of expensive scarce bandwidth, and depletes battery power on wireless transmitters. A CP is a communications tax on both bandwidth and battery power and CP overhead generally ranges between 5 and 25%.
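The mechanics described above can be sketched end to end: build a block, prepend a CP, pass it through an echo channel, then discard the CP and recover the symbols with a one-tap frequency-domain equalizer. The parameters below (64 subcarriers, a 16-sample CP, a two-tap echo) are illustrative assumptions, not DOCSIS 3.1 values:

```python
import numpy as np

# Sketch of why the cyclic prefix works, and what it costs.

N, CP = 64, 16
rng = np.random.default_rng(0)

# Random QPSK symbols on each subcarrier.
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X)

# Prepend the cyclic prefix: copy the block's tail onto its front.
tx = np.concatenate([x[-CP:], x])

# Channel with a delayed echo: direct path plus a 40% reflection.
h = np.array([1.0, 0.0, 0.4])
rx = np.convolve(tx, h)[: CP + N]

# Receiver: discard the CP (which absorbed the echo), then undo the
# channel with a one-tap frequency-domain equalizer per subcarrier.
Y = np.fft.fft(rx[CP:])
X_hat = Y / np.fft.fft(h, N)

print(np.allclose(X_hat, X))                # True: symbols recovered
print(f"CP overhead: {CP / (N + CP):.0%}")  # fraction of airtime spent on the prefix
```

With a 16-sample prefix on a 64-sample block, 20 percent of the transmitted samples carry no new data, which is exactly the kind of overhead tax described above.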
CableLabs has invented a method to get rid of the CP, using a math trick called an “overlapped circular convolution” to remove the effect of echoes without a CP. Parts of the preceding and subsequent blocks are used as “pseudo-prefixes.” After equalization, the pieces borrowed from the neighboring blocks are discarded, leaving a de-ghosted block. Essentially, the pseudo-prefix is applied at the receiver, so the transmitter doesn’t need to send any CP. That also means the duration of the pseudo-prefix can be arbitrarily increased at the receiver for severe echo environments.
Figure 2 is a block diagram illustrating the CableLabs method, where a pseudo-prefix is created at the receiver using neighboring blocks. For wide-bandwidth applications, the overlapped circular convolution can be replaced with an overlapped Fourier transform with frequency domain equalization. This is more computationally efficient. For a user, the implementation of this technology means the cell phone data rates go up when receiving, and battery life is longer when transmitting.
Watch the video below to learn more about cyclic prefix elimination:
A technical paper describing the technique in detail, titled "OFDM Cyclic Prefix Elimination," is available on page 42 of the December issue of the SCTE ISBE Journal of Network Operations. You can download a copy for free once you register on the site. Subscribe to our blog or contact Principal Architect Tom Williams for more information.
UpRamp™ – Connecting Networks, Creating Magic with a New Kind of Accelerator
For decades, CableLabs and our member cable operators have been at the forefront of the broadband revolution: connecting hundreds of millions of businesses and households to the Internet, connecting the many internet devices in the home, connecting families and the world to each other.
CableLabs has taken the next step to connect two of the most powerful networks in the world: the exciting and growing network of startup companies, and the largest, most powerful broadband network, run by our 55 global cable operator members.
UpRamp™ is a new kind of accelerator for established startups and later-stage emerging technology companies, designed to skillfully connect them with cable operators and amplify innovation that improves people's experience with cable by guiding these companies to the world’s largest and most powerful broadband network.
Starting in March, UpRamp will open applications for the first startup accelerator designed to help emerging technology companies find true product/market fit within the global cable industry. UpRamp is built to fill the gap between a startup’s time in a classic startup accelerator and their ability to scale a business for the massive cable and broadband industry. Unlike traditional accelerators, UpRamp is closer to an executive MBA for startups; something we like to call a “Fiterator™”.
The UpRamp Fiterator is a three-month, non-resident program for companies that already have a product in the market, have either raised capital or built a sustaining revenue stream, and are looking to engage real customers in this large and growing market. While most accelerators close their programs with a “demo day,” the outcomes of a Fiterator are real deals and reference customers. This highly selective program is limited to four startups per cohort, with each startup gaining access to our network of over 250 senior-level mentors from CableLabs and our member operators.
This past year, CableLabs ran a pilot program of UpRamp with Deepfield, working with the company to find product-market fit, to great success.
“UpRamp is the logical next step in how CableLabs catalyzes innovation in the industry. DeepField is proud to be a beta tester of the Fiterator accelerator concept. From identifying technology needs to facilitating industry-wide consensus on solutions, UpRamp helped bridge the gap between Deepfield startup innovation and broad industry adoption. Today, Deepfield is deployed in more than 85% of US cable companies and continues to work extensively with UpRamp on new areas of innovation that improve service quality, simplify network operations and power the next-generation of services," stated Dr. Craig Labovitz, CEO, Deepfield.
At CableLabs, our goal is to bring new innovation into the cable ecosystem. With UpRamp, we are putting our expertise into the game, helping young companies find their fit and expand their network - because we believe that people, communities and companies thrive when networks connect. And that is the magic.
Scott Brown is a Startup Catalyst at CableLabs.
5G — The Beginning of an Exhilarating Journey
“5G” is the next step in the evolution of wireless technology beyond “4G-LTE,” with the 2018 and 2020 Olympics acting as powerful incentives for vendors to accelerate their product development.
A few weeks ago, I travelled to San Francisco to chair a session at the new IEEE SDN-NFV conference and to participate in a panel session at the IEEE 5G Summit taking place in Silicon Valley that same week. I have long been convinced that NFV and SDN would be key enabling technologies for 5G. My panel role was to talk about how the international standards effort around NFV and SDN would support innovation in the 5G space.
In the weeks preceding the conference, Verizon had announced plans for “5G” field tests. As a consequence, demand to attend the 5G Summit was much greater than the organizers had originally anticipated, resulting in the event shifting to a larger venue. There was a capacity crowd of more than 300 academic and industry researchers, along with a sprinkling of business development types.
Industry initiatives to define the target for 5G have identified eight key requirements for the technology:
- 1-10Gbps connections to end points
- 1ms end-to-end round trip delay (latency)
- 1000x bandwidth per unit area
- 10-100x number of connected devices
- Perception of 99.999% availability
- Perception of 100% coverage
- 90% reduction in network energy usage
- 10-year battery life for low power devices
We have all seen the impact that broadband connectivity combined with smart devices has had on our lives. Imagine the possibilities if this ambitious target could be realized!
Innovation in a 5G World
As the conference progressed, I became more and more excited about the innovation possibilities in a 5G-enabled world. There were a number of very informative presentations during the day, but the talk that made the most impact on me was the keynote by Professor Gerhard Fettweis of the 5G Lab. I was floored by his vision for a “Tactile Internet,” illustrated with powerful video demonstrations showing that if network latency (round-trip delay) could be reduced below 10ms, the potential for machine-to-machine and human-to-machine interactions would become limitless. A surgeon performing remote surgery was one example; self-driving vehicles with real-time, network-assisted object tracking were another. Of course, safeguards would need to be in place to deal with loss of connectivity or faults, but you get the idea.
I am convinced the “Tactile Internet” will spawn a myriad of applications in every field of human endeavor, many of which we cannot even conceive of yet. However, a fellow delegate remarked that he didn’t think Gerhard’s vision would be realized any time soon. In my view, he missed the point: a compelling vision is the critical first step for any new ecosystem to get started, and I am sure that key elements of the Tactile Internet vision will be realized sooner than people think. The enabling technologies exist today; they just need to be brought together in the right way, with standards that facilitate open innovation.
Other keynotes including those from AT&T and Google reinforced my impression that we are on the cusp of an unprecedented era of innovation driven by low latency, high bandwidth wireless connectivity embodied in what is being termed “5G” — but having far greater implications than simply wireless connectivity.
Back to reality and my panel, which was the final event of the conference. We were challenged with answering the imponderable question, “What is 5G?” None of us was able to come up with an answer that satisfied the moderator, since 5G is a far richer vision than simply an increase in wireless bandwidth or a 5G icon appearing on a smartphone handset.
What CableLabs is doing in this Space
The cable network will provide an ideal foundation for 5G because it is ubiquitous and already supports millions of Wi-Fi nodes in the places where the majority of wireless data is consumed. It has high capacity for both access and backhaul. It is highly reliable and has low intrinsic latency because it is based on optical fiber, which penetrates deep into the access network, feeding wideband coaxial cables that reach all the way to the end-user premises. Moreover, it is a multi-node, remotely powered access topology, ideally suited to supporting the large number of small cells close to homes and businesses that will be needed for 5G.
A multi-faceted CableLabs R&D program is addressing the key technologies required for 5G. For example, we are developing end-to-end architectures based on software-defined networking (SDN) and network functions virtualization (NFV) that will provide the efficiency, resource flexibility, and adaptability required for 5G. We are studying the coexistence of wireless technologies, and we have joined with leading industry organizations such as NYU WIRELESS to evaluate how spectrum in the millimeter-wave region, combined with the cable network, will provide an economical foundation for 5G. CableLabs participated in the recent 3GPP RAN 5G Workshop, where we outlined the CableLabs vision for converged broadband and wireless networks.
It is exhilarating to think that the 5G journey is only just beginning. The road ahead will be incredibly exciting as we work out how to architect a dynamically configurable 5G network on top of the existing cable network and bring it all together to serve the needs of cable operators and their customers worldwide.
Watch this space for blogs from my CableLabs wireless colleagues, and consider attending our new Inform[ed]: Wireless conference in New York in April.
Don Clarke is a Principal Architect at CableLabs working in the Virtualization and Network Evolution group.
Tuesday, April 13, 2016
8:30am to 6:00pm
New York City
DOCSIS® 3.1 Update: Get Your Engines Running… The Engines ARE Running!
The original title for this blog was supposed to be “DOCSIS® 3.1 Update: Get your engines running,” but in reality, the engines ARE already running! In fact, the DOCSIS 3.1 engines have been running hot since the start of the year. With the technology rapidly maturing, vendors have accelerated their product development. Let’s take a look at some of the milestones we have hit over the past year as the DOCSIS 3.1 ecosystem evolved.
Last December we held the first official DOCSIS 3.1 equipment interoperability event (interop). Since then, four more interops have been successfully completed, with the next one scheduled for September. All the interops have seen strong vendor participation, including cable modem (CM) vendors, cable modem termination system (CMTS) vendors, and test equipment vendors. In the initial interops, we saw visible signs of how DOCSIS 3.1 technology will change the industry, including the delivery of multi-Gbps performance and higher-order modulation densities never seen before in other technologies, both showcasing the capabilities of DOCSIS 3.1 networks even before the equipment has been deployed.
It is also exciting to see how the ecosystem is collectively working to ensure that DOCSIS 3.1 technology is ready for deployment in cable networks. Along with the progress made by the CM and CMTS vendors, we have also seen excellent progress from test equipment vendors, who are preparing the right tools to support DOCSIS 3.1 field deployments. We are very excited that cable network operators are now priming their networks for DOCSIS 3.1 readiness through field testing and trials. Early results show that the utilization of high order DOCSIS 3.1 modulation schemes will significantly increase network efficiency.
With the great strides in product development, CableLabs® has also opened the door for DOCSIS 3.1 product certification submissions. We expect to see product submissions for certification in the near future.
Based on what we have learned, and all the excitement from the vendors and operators, we expect the upcoming DOCSIS 3.1 deployments to drive the next evolution in broadband connectivity.
Stay tuned for further updates…
Belal Hamzeh is the Director of Broadband Evolution at CableLabs.
Technology Implications of 2Gbps Symmetric Services
Service providers and municipalities alike continue their push toward offering gigabit services over fiber networks. In fact, fiberville is a website dedicated to listing which service providers and municipalities provide fiber solutions. Recently, Comcast significantly upped the ante by announcing a 2 Gbps symmetric service that will become available in certain locations: 2 Gbps downstream and 2 Gbps upstream. This is a substantial announcement both for its 2 Gbps speeds and for the symmetry of the service, which enables faster file uploads, a capability of interest to individuals who work from home, small businesses, and gamers.
With all that speedy yumminess, let’s examine some of the technologies required for delivering multi-gigabit symmetrical services to homes and businesses.
Setting the Stage
When a provider deploys broadband services, there is typically a peak rate, above the advertised speed, that provides the headroom necessary to support the speeds and service level agreements (SLAs) associated with the service.
When the total available bandwidth is shared among multiple users, as it is in PON solutions, an unscientific but common practice is for the network to support at least twice the highest advertised rate. Specifically, to support an advertised service of N Gbps, the peak rate must provide for at least 2×N Gbps. Thus, for a 2 Gbps service, the peak rate must be at least 4 Gbps to safely support the SLA. This premise allows the operator to investigate and determine the technology needed to support the advertised speeds. Once the technology is chosen, the engineering work required to build out the solution can begin.
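The headroom rule above amounts to a one-line calculation. Here is a minimal sketch (the 2× factor is the common practice described above, not a standard; the function name is ours):

```python
def required_peak_gbps(advertised_gbps, headroom_factor=2.0):
    """Peak rate needed to safely support an advertised speed,
    per the common-practice 2x headroom rule described above."""
    return advertised_gbps * headroom_factor

# A 2 Gbps symmetrical service needs at least a 4 Gbps peak rate.
print(required_peak_gbps(2))  # -> 4.0
```

The same rule of thumb applies at any tier: a 1 Gbps service would call for at least 2 Gbps of peak capacity.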
Let’s look at the two fiber to the home solutions that will support a 2 Gbps symmetric service today: Point-to-Point Fiber and 10 Gbps Ethernet Passive Optical Network (10G-EPON).
Point-to-Point Fiber: Best Performance
Point-to-point topology is a “home-run” active Ethernet fiber implementation that provides dedicated fiber from the home all the way through the access network to the headend. It is analogous to building your own personal highway from home to your office so you can get to work faster. While this solution provides the ultimate future-proof network in terms of bandwidth, flexibility, and network reach, it requires a significant amount of fiber and associated optical transceivers. Running a dedicated fiber to a residential customer premises is both complex and resource intensive due to the additional fiber management and ongoing maintenance. However, it delivers the best performance to meet customer needs.
10G-EPON: An Efficient 2-Gig Symmetrical Solution
While there are many flavors of Passive Optical Network (PON) (see: OnePON), 10G-EPON, with its symmetric 10 Gbps links, is the only standardized and commercially available PON technology able to provide the at-least-4 Gbps peak rate needed to support a 2 Gbps symmetrical service level agreement. Because of its point-to-multipoint topology and passive implementation, 10G-EPON is a cost-effective solution in terms of operations, fiber consolidation, and the headend real estate required.
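To make the comparison concrete, a small sketch checks candidate access technologies against the 4 Gbps peak-rate requirement derived above. Link rates are rounded for illustration, and a symmetric service is limited by the slower direction of each link:

```python
# Required peak rate for a 2 Gbps symmetrical SLA, per the 2x headroom
# rule discussed above.
REQUIRED_PEAK_GBPS = 4.0

# Illustrative per-direction link capacities (Gbps, rounded); the GPON
# entry is shown only for contrast and is limited by its upstream rate.
link_rates_gbps = {
    "GPON (2.5G down / 1.25G up)": 1.25,
    "10G-EPON (symmetric)": 10.0,
    "Point-to-point 10G Ethernet": 10.0,
}

for name, rate in link_rates_gbps.items():
    verdict = "meets" if rate >= REQUIRED_PEAK_GBPS else "falls short of"
    print(f"{name}: {rate} Gbps {verdict} the {REQUIRED_PEAK_GBPS} Gbps peak requirement")
```

The sketch mirrors the point in the text: among standardized PON flavors, only 10G-EPON clears the bar for a symmetric 2 Gbps SLA, while point-to-point Ethernet clears it by construction.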
CableLabs has championed PON initiatives through contributions to international standards, hardware and software certification, and interoperability events. CableLabs is facilitating a common approach to provide fiber solutions that will allow for quicker and higher-scale PON deployments.
Both 10G-EPON and point-to-point fiber solutions can provide 2 Gbps symmetrical services, opening up a world of possibilities for cable operators and customers alike. From the realization of all-IP service delivery and more efficient network implementations to improved cloud services and overall future-proofing of the network, 2 Gbps symmetrical fiber deployments are a reality today.
Jon Schnoor is a Senior Engineer at CableLabs.
FCC Votes to Expand Wireless Spectrum: A Win for Wi-Fi
Today is a big day for Wi-Fi and everyone that uses it – which is, of course, all of us. Our Wi-Fi is about to get twice as good. How? By doubling the size of the Wi-Fi pipe.
The FCC voted today to double the amount of wireless spectrum that Wi-Fi uses in the 5 gigahertz (GHz) band. That’s 100 megahertz (MHz) of newly useful Wi-Fi bandwidth.
You might have heard of 5 GHz – it's the globally harmonized home for the latest Wi-Fi technology: 802.11ac, also known as “gigabit Wi-Fi” for its incredibly fast broadband speed. 802.11ac is beginning to hit the market in force – the Moto X and the latest Samsung Galaxy smartphones, among many other devices, already have it. Pretty soon, 802.11ac will be in just about everything.
The only problem with gigabit Wi-Fi is that regulations prevented it from reaching its full gigabit potential.
It has taken a lot of work by many dedicated people to get to this moment. A little over a year ago, the FCC proposed a number of ways to increase Wi-Fi bandwidth. Additional spectrum is needed to support the continued growth of wireless broadband, a topic we have written about, and it is a central feature of the Administration’s technology policy and the National Broadband Plan.
[Related: CableLabs' Work on Wireless Spectrum]
A strong desire to make progress in wireless policy is not enough, however. Success requires attention to detail. In the context of 5 GHz, that means understanding how Wi-Fi can share the airwaves with the other wireless services that use the same spectrum.
That’s where CableLabs comes in. In collaboration with colleagues at the University of Colorado, we developed a sophisticated simulation of potential interference between Wi-Fi and a satellite phone system that uses part of the 5 gigahertz band. This analysis served as the technology framework for today’s FCC action.
“The FCC vote to expand Wi-Fi access in the 5 GHz band is a great step forward for wireless broadband,” said Phil McKinney, president and chief executive officer of CableLabs. “This action substantially increases Wi-Fi capacity, making gigabit Wi-Fi speeds possible. CableLabs’ insights on spectrum sharing, including sophisticated simulation of how Wi-Fi will interact with other services using the same spectrum, played a key role in helping the FCC move forward.”
Cable operators will put this new bandwidth to good use, along with the rest of the Wi-Fi community. But to be clear, today’s win for Wi-Fi is just the beginning. Regulators in other nations should take note and consider how to fully enable the global gigabit Wi-Fi standard. And the FCC has more work to do as well: Today’s newfound 100 MHz of Wi-Fi bandwidth, while significant, is only 20% of the national goal for new wireless broadband spectrum.
What’s next, then? A framework for spectrum sharing between Wi-Fi and nascent connected vehicle technology would be a good place to start.
To be continued …
Rob Alderfer is a Principal Strategic Analyst for CableLabs, the global research and development consortium of the cable industry, where he guides technology policy and strategy across the industry. He was the Chief Data Officer of the Wireless Telecommunications Bureau at the Federal Communications Commission from 2010 to 2012, leading data-driven wireless policy to encourage investment and innovation in wireless broadband. Previously, he was responsible for overseeing communications policy and programs on behalf of the Administration at the White House Office of Management and Budget.