Innovation Journeys: 10G is new. We have been working on it for years.
You may have noticed that CableLabs is focused on innovation. One of our goals is to be recognized as the leading industry innovation lab in the world, but talking about our innovation can be a bit tricky. Our job is to deliver innovation for the worldwide cable industry, but we can’t really talk about what we are working on now. We need to keep that secret for our member companies (cable operators) until the technology is ready to launch.
Our CEO, Phil McKinney, has talked about how innovation is messy. Where you start may not be where you end up. I want to tell you about the path that led to one of our most important innovations, and a key part of our 10G platform: low latency.
Our Low Latency Journey
We started on this journey over four years ago, with a challenge question (Focus in the FIRE methodology): What applications will drive a need for 60 Mbps or more of sustained Internet bandwidth? That led to ideation sessions that unearthed the usual suspects: Internet of Things (billions of sensors, but each with such low bandwidth that they still don’t add up to much), 4K streaming video (good try, but still only 15 Mbps or less) and “Big Data” (sorry, not really a candidate for consumer households). Those applications didn’t quite answer the question.
But the emergence of 360° immersive video looked promising. Experiencing some of the earliest 360° video at the beginning of 2014 (shot on six GoPros and manually stitched) on a low-resolution Oculus Development Kit virtual reality headset got us thinking about where the technology might lead. Six 4K videos streamed to the headset met the challenge of over 60 Mbps, although compression gains would reduce the bandwidth and resolution increases would increase it.
Rather than “geeking out” on the technical possibilities, we followed advice from Phil: “Talk to consumers!” In February of 2015, we did primary research, bringing 50 varied members of the public into CableLabs to try out “immersive video content.” Rather than just focusing on virtual reality (VR) headsets, we constructed some other ways of consuming the content, such as immersive multi-4K-TV displays, ultra-wide projectors, tablets and regular TVs. We needed to understand whether “regular humans” (not geeks) would like these technologies.
The consumer research was massively informative. We shared the insights with our member companies at the time and realized that this ecosystem was likely to take off. We stepped back and tried to work out other mass-market use cases for VR.
We pivoted. We started to look at the possibilities of transforming how people communicate, and the ability to have holographic telepresence using digital human technology to perform digital headset removal. We don’t really want to talk to another person and see that person with a headset on; we want to see other people eye to eye and have them see us eye to eye. To prove the point, later in 2015 and into early 2016 we developed eye and mouth tracking capabilities that we added to a wireless VR headset and developed a digital human avatar of one of our staff.
We linked the head, eye and mouth tracking to real-time control of the digital avatar.
And in May of 2016 we demonstrated this to our board of directors.
We also found that realistic digital human avatars take LOTS of compute to render in real time, and that required a tethered PC. Even as mobile processors get faster and more capable, PC graphics will always be faster and more capable, due to their power budget. Phones get hot when you try to render realistic humans. To get to mass-market adoption, we need to go wireless and move the PC out of the home.
No Less Than a Revolution
VR needs incredibly low latency between head movement and the delivery of new pixels to your eyes, or you start to feel nauseated. To move the PC out of the home, we need to make the communications over the cable network be a millisecond or less round trip. But our DOCSIS® technology at the time could not deliver that.
So, we pivoted again. Since 2016, CableLabs DOCSIS architects Greg White and Karthik Sundaresan have been focused on revolutionizing DOCSIS technology to support sub-1ms latency. Although VR is still struggling to gain widespread adoption, that low and reliable DOCSIS latency will be a boon to gamers in the short term and will enable split rendering of VR and augmented reality (AR) in the longer term. The specifications for Low Latency DOCSIS (as a software upgrade to existing DOCSIS 3.1 equipment) have been released, and we’re working with the equipment suppliers to get this out into the market and to realize the gains of a somewhat tortuous innovation journey.
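To give a flavor of the kind of mechanism involved, Low Latency DOCSIS separates traffic into a low-latency queue for non-queue-building flows and a classic queue for everything else, so that latency-sensitive packets never wait behind a deep buffer. The sketch below is a deliberately simplified illustration of that dual-queue idea, not the specification’s actual scheduler; the packet "mark" values are invented stand-ins for the real DSCP/ECN-based classification.

```python
from collections import deque

# Illustrative dual-queue sketch (NOT the actual Low Latency DOCSIS spec
# logic): latency-sensitive packets go to a low-latency queue; everything
# else goes to a classic queue. Marks here are hypothetical labels.
LOW_LATENCY_MARKS = {"NQB", "EF"}

class DualQueue:
    def __init__(self):
        self.low_latency = deque()
        self.classic = deque()

    def enqueue(self, packet):
        # Classify per packet; real LLD classifies on DSCP/ECN header fields.
        if packet["mark"] in LOW_LATENCY_MARKS:
            self.low_latency.append(packet)
        else:
            self.classic.append(packet)

    def dequeue(self):
        # Serve the low-latency queue first so latency-sensitive traffic
        # never waits behind a deep classic-queue backlog.
        if self.low_latency:
            return self.low_latency.popleft()
        if self.classic:
            return self.classic.popleft()
        return None

q = DualQueue()
q.enqueue({"id": 1, "mark": "BE"})   # bulk download packet
q.enqueue({"id": 2, "mark": "NQB"})  # latency-sensitive game packet
print(q.dequeue()["id"])             # the game packet is served first
```

Even this toy version shows the key property: queuing delay for the low-latency class is bounded by its own (shallow) queue, independent of how much bulk traffic is backlogged.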
Low latency is a key component of our 10G initiative. You can read more about the importance of latency here, and gain access both to a technical brief (members only) and to a detailed report (members only) on Wi-Fi latency in retail Wi-Fi routers.
Leveraging Machine Learning and Artificial Intelligence for 5G
The heterogeneous nature of future wireless networks, comprising multiple access networks, frequency bands and cells, all with overlapping coverage areas, presents wireless operators with network planning and deployment challenges. Machine Learning (ML) and Artificial Intelligence (AI) can help wireless operators overcome these challenges by analyzing geographic information, engineering parameters and historical data to:
- Forecast the peak traffic, resource utilization and application types
- Optimize and fine tune network parameters for capacity expansion
- Eliminate coverage holes by measuring the interference and using the inter-site distance information
5G can be a key enabler to drive the ML and AI integration into the network edge. The figure below shows how 5G enables simultaneous connections to multiple IoT devices generating massive amounts of data. The integration of ML and AI with 5G multi-access edge computing (MEC) enables wireless operators to offer:
- High level of automation from the distributed ML and AI architecture at the network edge
- Application-based traffic steering and aggregation across heterogeneous access networks
- Dynamic network slicing to address varied use cases with different QoS requirements
- ML/AI-as-a-service offering for end users
ML and AI for Beamforming
5G deployed using mm-wave has beam-based cell coverage, unlike 4G’s sector-based coverage. A machine-learned algorithm can assist the 5G cell site in computing a set of candidate beams, originating either from the serving cell site or from its neighbors. An ideal set contains few beams yet has a high probability of including the best beam: the beam with the highest Reference Signal Received Power (RSRP). The more beams that are activated, the higher the probability of finding the best beam, but also the greater the system resource consumption.
The user equipment (UE) measures and reports all the candidate beams to the serving cell site, which then decides whether the UE needs to be handed over to a neighboring cell site, and to which candidate beam. The UE reports Beam State Information (BSI) based on measurements of the Beam Reference Signal (BRS), comprising parameters such as Beam Index (BI) and Beam Reference Signal Received Power (BRSRP). Finding the best beam using BRSRP leads to a multi-target regression (MTR) problem, while finding the best beam using BI leads to a multi-class classification (MCC) problem.
ML and AI can assist in finding the best beam by considering the instantaneous values updated at each UE measurement of the parameters mentioned below:
- Beam Index (BI)
- Beam Reference Signal Received Power (BRSRP)
- Distance (of UE to serving cell site)
- Position (GPS location of UE)
- Speed (UE mobility)
- Channel quality indicator (CQI)
- Historic values based on past events and measurements including previous serving beam information, time spent on each serving beam, and distance trends
Once the UE identifies the best beam, it can start the random-access procedure to connect to the beam using timing and angular information. After the UE connects, a data session begins on the UE-specific (dedicated) beam.
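To make the classification framing concrete, here is a toy sketch that treats best-beam selection as a multi-class problem over a few of the UE measurements listed above (distance, speed, CQI). The training samples, beam indices and the nearest-centroid model are all invented for illustration; a real system would use far richer features and models.

```python
import math

# Toy multi-class "best beam" predictor: average historical feature vectors
# per beam (centroids), then pick the beam whose centroid is closest to a
# new UE report. All data below is synthetic.

# (distance_m, speed_mps, cqi) -> best beam index observed historically
history = [
    ((50.0,  1.0, 14), 0),
    ((60.0,  2.0, 13), 0),
    ((200.0, 15.0, 7), 1),
    ((220.0, 12.0, 6), 1),
    ((400.0, 30.0, 3), 2),
]

def centroids(samples):
    """Average the feature vectors per beam class."""
    sums, counts = {}, {}
    for feats, beam in samples:
        acc = sums.setdefault(beam, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[beam] = counts.get(beam, 0) + 1
    return {b: tuple(v / counts[b] for v in acc) for b, acc in sums.items()}

def predict_beam(model, feats):
    """Pick the beam whose historical centroid is closest to the new report."""
    return min(model, key=lambda b: math.dist(model[b], feats))

model = centroids(history)
print(predict_beam(model, (55.0, 1.5, 13)))  # a UE resembling the first cluster
```

The MTR variant mentioned above would instead regress the per-beam BRSRP values and take the argmax, trading a harder learning problem for a richer output.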
ML and AI for Massive MIMO
Massive MIMO is a key 5G technology. Massive simply refers to the large number of antennas (32 or more logical antenna ports) in the base station antenna array. Massive MIMO enhances user experience by significantly increasing throughput, network capacity and coverage while reducing interference by:
- Serving multiple spatially separated users with an antenna array in the same time and frequency resource
- Serving specific users with beamforming, steering a narrow, high-gain beam to send radio signals and information directly to the device instead of broadcasting across the entire cell, thereby reducing radio interference across the cell
The weights for antenna elements for a massive MIMO 5G cell site are critical for maximizing the beamforming effect. ML and AI can be used to:
- Identify dynamic changes and forecast user distribution by analyzing historical data
- Dynamically optimize the weights of antenna elements using the historical data
- Perform adaptive optimization of weights for specific use cases with unique user-distribution
- Improve the coverage in a multi-cell scenario considering the inter-site interference between multiple 5G massive MIMO cell sites
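To illustrate what “weights for antenna elements” means, the sketch below computes conjugate (matched-filter) beamforming weights for a uniform linear array: the weights are the conjugate of the steering vector toward the user, which concentrates gain in that direction. The 32-element array, half-wavelength spacing and angles are assumed examples; real systems derive weights from measured channel estimates, typically with the ML-assisted optimization described above.

```python
import cmath
import math

# Conjugate beamforming sketch for a uniform linear array (ULA).
# Spacing is in wavelengths; angles are in radians. Illustrative only.

def steering_vector(n_antennas, angle_rad, spacing=0.5):
    """Per-element phase ramp for a plane wave arriving from angle_rad."""
    return [cmath.exp(1j * 2 * math.pi * spacing * k * math.sin(angle_rad))
            for k in range(n_antennas)]

def conjugate_weights(n_antennas, user_angle_rad):
    """Matched filter: conjugate the steering vector, normalized to unit power."""
    v = steering_vector(n_antennas, user_angle_rad)
    norm = sum(abs(x) ** 2 for x in v) ** 0.5
    return [x.conjugate() / norm for x in v]

def array_gain(weights, angle_rad):
    """Received power toward angle_rad for the given antenna weights."""
    v = steering_vector(len(weights), angle_rad)
    return abs(sum(w * x for w, x in zip(weights, v))) ** 2

w = conjugate_weights(32, 0.3)   # steer a 32-element array toward ~17 degrees
print(array_gain(w, 0.3))        # peak gain in the user's direction (= 32)
print(array_gain(w, -0.8))       # far lower gain off-target
```

The on-target gain equals the number of elements (here 32), which is exactly why “massive” arrays pay off; choosing weights well in multi-user, multi-cell scenarios is the optimization problem ML is being applied to.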
ML and AI for Network Slicing
In the current one-size-fits-all implementation of wireless networks, most resources are underutilized and not optimized for high-bandwidth and low-latency scenarios. Fixed resource assignment for diverse applications with differing requirements may not be an efficient way to use available network resources. Network slicing creates multiple dedicated virtual networks on a common physical infrastructure, where each network slice can be independently managed and orchestrated.
Embedding ML algorithms and AI into 5G networks can enhance automation and adaptability, enabling efficient orchestration and dynamic provisioning of network slices. ML and AI can collect real-time information for multidimensional analysis and construct a panoramic data map of each network slice based on:
- User subscription
- Quality of service (QoS)
- Network performance
- Events and logs
Different aspects where ML and AI can be leveraged include:
- Predicting and forecasting network resource usage, enabling wireless operators to anticipate network outages, equipment failures and performance degradation
- Cognitive scaling, assisting wireless operators to dynamically modify network resources to meet capacity requirements based on predictive analysis and forecasts
- Predicting UE mobility in 5G networks, allowing the Access and Mobility Management Function (AMF) to update mobility patterns based on user subscription, historical statistics and instantaneous radio conditions for seamless transitions and better quality of service
- Enhancing security in 5G networks, preventing attacks and fraud by recognizing user patterns and tagging certain events to prevent similar attacks in the future
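The cognitive-scaling idea can be sketched very simply: forecast each slice’s utilization from recent history and scale its allocated capacity against thresholds. The moving-average forecast, slice names, thresholds and numbers below are all illustrative assumptions; production systems would use much richer predictive models.

```python
# Toy "cognitive scaling" sketch: per-slice utilization forecast plus a
# threshold-based scaling decision. All names and values are invented.

SCALE_UP, SCALE_DOWN = 0.8, 0.3   # utilization thresholds (fractions)

def forecast(utilization_history, window=3):
    """Moving-average forecast of next-interval utilization (0..1)."""
    recent = utilization_history[-window:]
    return sum(recent) / len(recent)

def scaling_decision(slice_name, history):
    u = forecast(history)
    if u > SCALE_UP:
        return (slice_name, "scale-up", u)
    if u < SCALE_DOWN:
        return (slice_name, "scale-down", u)
    return (slice_name, "hold", u)

slices = {
    "urllc-factory": [0.70, 0.85, 0.95],  # trending hot: needs capacity
    "embb-video":    [0.50, 0.45, 0.55],  # steady load
    "mmtc-sensors":  [0.20, 0.15, 0.10],  # mostly idle: reclaim resources
}
for name, hist in slices.items():
    print(scaling_decision(name, hist))
```

The point of embedding ML here is that better forecasts let the orchestrator scale slices before congestion or outages occur, rather than reacting after the fact.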
Future heterogeneous wireless networks will be implemented with varied technologies addressing different use cases, providing connectivity to millions of users simultaneously, requiring customization per slice and per service, and involving large numbers of KPIs to maintain. ML and AI will be essential tools for wireless operators to adopt in the near future.
Deploying ML and AI into Wireless Networks
Wireless operators can deploy AI in three ways:
- Embedding ML and AI algorithms within individual edge devices, enabling quick decision-making despite those devices’ low computational capability
- Lightweight ML and AI engines at the network edge to perform multi-access edge computing (MEC) for real-time computation and dynamic decision making suitable for low-latency IoT services addressing varied use case scenarios
- ML and AI platform built within the system orchestrator for centralized deployment to perform heavy computation and storage for historical analysis and projections
Benefits of Leveraging ML and AI in 5G
The application of ML and AI in wireless is still in its infancy and will gradually mature in the coming years, creating smarter wireless networks. The network topology, design and propagation models, along with users’ mobility and usage patterns, will be complex in 5G. ML and AI will play a key role in assisting wireless operators to deploy, operate and manage 5G networks amid the proliferation of IoT devices. ML and AI will build more intelligence into 5G systems and allow a shift from managing networks to managing services. ML and AI can address several use cases that help wireless operators transition from a human management model to self-driven automatic management, transforming network operations and maintenance processes.
There are strong synergies between ML, AI and 5G. All of them address low-latency use cases in which the sensing and processing of data are time sensitive, such as autonomous vehicles, time-critical industrial automation and remote healthcare. 5G offers ultra-reliable low-latency communication, with latency roughly 10 times lower than 4G’s. However, to achieve even lower latencies and to enable event-driven analysis, real-time processing and decision making, a paradigm shift is needed from today’s centralized, virtualized cloud-based AI toward a distributed AI architecture in which the decision-making intelligence sits closer to the edge of 5G networks.
The Role of CableLabs
The cable network carries a significant share of wireless data today and is well positioned to lay an ideal foundation for 5G through the continued advancement of broadband technology. Next-generation wireless networks will use higher-frequency spectrum bands that potentially offer greater bandwidth and improved network capacity but face challenges with reduced propagation range. 5G mm-wave small cells require deep, dense fiber networks, and the cable industry is ideally placed to backhaul these small cells because its existing fiber infrastructure penetrates deep into the access network, close to end-user premises. The short-range, high-capacity physical properties of 5G have strong synergies with fixed wireless networks.
A multi-faceted CableLabs team is addressing the key technologies for 5G deployments that can benefit the cable industry. We are a leading contributor to the European Telecommunications Standards Institute’s NFV Industry Specification Group (ETSI NFV ISG). Our SNAPS™ program is part of Open Platform for NFV (OPNFV). We are working to optimize Wi-Fi technologies and networks in collaboration with our members and the broader ecosystem. We are driving enhancements and standardizing features across the industry that will make the Wi-Fi experience seamless and consistent. We are driving active contributions to 3GPP Release 16 work items for member use cases and requirements.
Our 10G platform complements 5G and is also a key enabler of the supporting infrastructure 5G needs to achieve its full potential. CableLabs is leading the efforts in spectrum sharing to enable coexistence between Wi-Fi and cellular technologies, which will enable multi-access sharing in the 3.5 GHz band to make the 5G vision a reality.
Moving Closer to Reality: CableLabs Holds Second Interop•Labs Point-to-Point Coherent Optics Event
Not every time can be the first time: there can only be one first interop, or first spec release, or first technology demo. Saying you’ve done something for the second time doesn’t carry the same excitement or cachet as saying you did it for the first. And yet, the first time at anything is rarely the last: you take what you learn doing something the first time, and then you apply that to doing it better the second time. And then you take what you learn there, and you continue to improve. It’s that continuous cycle of improvement that brings things closer to reality and ultimately gets us to the finish line.
It’s in that spirit that we’d like to announce the successful completion of our second Interop·Labs Point-to-Point Coherent Optics event, hosted by CableLabs at our facility in Louisville, Colorado, April 23–25, during which we worked to bring point-to-point coherent optics technology closer to reality for cable operators.
We went into this event with two main objectives:
- Demonstrate the ability to pass Ethernet traffic between coherent optics transceivers from multiple different manufacturers, representing a real-world use of the technology; and
- Demonstrate compliance with the optical receiver sensitivity requirements from the specification.
Both of these objectives are incremental yet significant steps toward showing a real-world solution as compared with the “plugfest” style event we held in December 2018.
Assisting us in this work were five manufacturers: Acacia, ADVA, Ciena, Edge-Core, and NTT Electronics. It wasn’t a long list, but it was highly representative of the industry, including transceivers utilizing DSP silicon from the majority of the key manufacturers in the coherent optics space. It also represented multiple different pieces of network equipment, necessary for connecting coherent optics transceivers to other networks, including for the first time a network switch not provided by a transceiver module manufacturer and designed to work with a wide range of transceiver modules.
And just because the event built on a previous one doesn't mean that there weren’t issues to resolve. But that’s the point of events like this: to uncover those issues in the lab, to work together in the spirit of collaboration to resolve them and to move ever closer to seeing these products deployed in the field. Which is exactly what happened! I’m happy to say that everyone showed the type of interoperability we would expect, and demonstrated compliance with the optical power and optical signal-to-noise ratio (OSNR) sensitivity requirements that we were testing against.
In the final result, this was a solid step forward on the path toward making the deployment of this 10G technology in cable operator networks a reality. And it certainly won’t be the last, so there will be further opportunities to engage in more events in the future. Any company that manufactures coherent optics transceivers, network equipment for those transceivers, or test equipment for validating coherent optics equipment is welcome to join our Interop·Labs events. Please contact me if you’re interested in getting involved, or keep an eye on our website for announcements of future events.
ANGA 10 Gigabit and Beyond: Powering the Future of Cable
For the 15th consecutive year, thousands of network operators, service providers and vendors from around the globe will gather in Cologne, Germany, for ANGA COM Exhibition and Conference, June 4–6, 2019. Widely known as Europe’s leading business platform for broadband operators and content providers, ANGA COM is organized by ANGA Services GmbH, which represents the interests of more than 200 companies in the German broadband industry that provide service to more than 40 million customers. This year’s event is a 3-day deep dive into key topics such as gigabit networks, smart homes, Internet of Things (IoT), Wi-Fi, network virtualization, big data, streaming, cloud TV and 5G.
With more than 500 exhibitors and 20,000 participants from 75+ countries expected to attend, this year’s list of confirmed participants includes many of our members, together with our partner industry organizations, such as Society of Cable Telecommunications Engineers (SCTE), NCTA and other representatives from the cable community. And, for the first time, ANGA COM is engaging the startup community and will have a featured exhibition area where CableLabs subsidiary UpRamp will be talking to potential Fiterator companies.
10 Gigabit and Beyond: Powering the Future of Cable
On June 4, from 11 a.m. to 12:15 p.m. CEST, CableLabs COO Chris Lammers will kick off the technology track by discussing the cable industry’s 10G platform—a collection of technologies providing faster speeds, lower latency, higher reliability and greater security. He will be joined by some of the most influential names in cable, including Cisco Fellow and CTO of Cable Access John Chapman, General Manager of Intel’s Connected Home Strategy and Technology Office Robert Ferreira, Vodafone’s Director of Deployment Frank Hellemink and Liberty Global VP of Technology Bill Warga. The discussion will be moderated by SCTE·ISBE President and CEO Mark Dzuban. They will talk about the latest 10G developments and answer your burning questions.
Gigabit Broadband: Springboard for 10G
10G is the suite of technologies that will deliver Internet speeds 10 times faster than today’s networks, and its foundation already exists: the hybrid fiber-coaxial network currently offering 1-gigabit speeds to much of the United States and Europe. Panelists will discuss how cable networks have ramped up their services to offer 1-gigabit (gigabit-per-second download speed) service across 80 percent of the United States, up from just 5 percent in 2016, with similar growth seen in Europe.
Not convinced we need 10 gigabit or even 1 gigabit speeds? Based on the double-digit bandwidth growth that we continue to see every year, we know that customers are ready for the next step in innovation and, thanks to a consistent cadence of network updates, cable operators are ready too.
10G Platform: Cable’s Technologies Make It Real
The 10G standard’s promise of faster speeds, lower latency, higher reliability and greater security will enable a wide variety of new services and applications that will change the way millions of consumers, educators, businesses and innovators interact with the world. Learn which technologies power the 10G platform and what’s coming next:
- 10G Speed: With both coax and fiber, the cable industry is working on more efficient ways of using the infrastructure we already have, which—in addition to providing multiple gigabit speeds to our customers—will allow cable operators to grow network capacity to hundreds of terabits and beyond without spending more than they need to. Learn more about how DOCSIS® 3.0, DOCSIS 3.1 and Full Duplex DOCSIS technologies, along with fiber initiatives like coherent optics, will enable multi-gigabit speeds to support the high-resolution video used in bandwidth-sensitive applications.
- 10G Reliability: As the number of connected devices per household increases exponentially, network reliability is key. One of the technology highlights in the reliability suite is the Profile Management Application (PMA), which allows operators to reach two goals: minimizing transmission errors on the network and maximizing network capacity. The bandwidth gains realized by running a well-designed set of profiles can be anywhere from a 15 percent to a 40 percent capacity increase on a channel. These gains can translate to a solid 200 to 400 Mbps of extra capacity on each orthogonal frequency division multiplexing (OFDM) channel! PMA will continue to gain relevance over the years as operators deploy various 10G technologies, all of which will be based on OFDM. CableLabs’ PMA software interfaces directly with the Casa Systems CMTS, which will be demonstrated in the Casa booth in Hall 7, Stand G10.
- 10G Security: A good security system builds trust in the network and is an essential part of the 10G future. As more connected devices are added to homes and small businesses, the cable industry is tackling this issue by utilizing technologies and playing an active role in driving security in the broader Internet ecosystem. An example is CableLabs® Micronets, an easy-to-use open source platform that provides simpler and better security at scale.
- 10G Low Latency: Interactive experiences such as virtual reality and gaming require low-latency networks. CableLabs technologies including Low Latency DOCSIS, Low Latency Wi-Fi and Mobile Xhaul will spur a wave of innovation, enabling seamless next-level experiences like holodecks, lightfield displays and 360° video.
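As a quick sanity check on the PMA figures in the reliability item above, the quoted percentage gains and the quoted Mbps gains imply a baseline OFDM channel capacity of roughly 1 to 1.3 Gbps. That baseline is an inference from the post’s numbers, not a figure stated in it:

```python
# Back-of-the-envelope check: a 15-40% gain yielding 200-400 Mbps of extra
# capacity implies the baseline OFDM channel capacity shown below.

def extra_capacity_mbps(baseline_mbps, gain_fraction):
    return baseline_mbps * gain_fraction

def implied_baseline_mbps(extra_mbps, gain_fraction):
    return extra_mbps / gain_fraction

# 200 Mbps at a 15% gain and 400 Mbps at a 40% gain bound the baseline:
print(implied_baseline_mbps(200, 0.15))   # ~1333 Mbps
print(implied_baseline_mbps(400, 0.40))   # 1000 Mbps
```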
Network Convergence: Opening Up New Business Opportunities
Experience 10G with network convergence and the rollout of 5G! The 5G standard requires multiple small cells (radio equipment), and—because those small cells require an efficient way to communicate with one another and the core network—cable can provide the perfect infrastructure. Learn more about technologies like Mobile Xhaul and Fronthaul vRAN that allow the wireline network to efficiently carry traffic back to the mobile network.
Improved Wi-Fi = Happy Customers
In recent years, the cable industry has devoted substantial resources to developing and enhancing wireless technologies for a seamless experience. Learn about the technologies and policy initiatives that increase Wi-Fi speed and reliability for whole-home pervasive coverage and consistent throughput, allowing consumer devices to enjoy the full benefits of advances—in speed, reliability, security and latency—in the coax and fiber portions of the network.
Where: Koelnmesse, Cologne, Germany
When: June 4–6, 2019
Why: Learn the ins and outs of the 10G platform
See the ANGA COM 2019 agenda here for more information.
A Better Wi-Fi Experience with Dual Channel Wi-Fi™
At least 15 percent of customer service calls are Wi-Fi related, ranging from poor connections to video playback issues, translating to more than $600 million in annual support costs for the cable industry (in North America alone). As the Wi-Fi industry looks for ways to increase speed, coverage and support for more devices in the Wi-Fi ecosystem, one critical element has been overlooked: the need to have the necessary airtime to send data to all end devices in a timely manner. Even with faster Wi-Fi connections, if there is no available airtime to send the data, the connection is useless. CableLabs realized this shortfall and addressed it with the development of Dual Channel Wi-Fi technology.
What Is Dual Channel Wi-Fi?
Dual Channel Wi-Fi delivers an efficient and more reliable wireless connection.
The wireless networking technology that we commonly refer to as Wi-Fi is based on the 802.11 standard developed by the Institute of Electrical and Electronics Engineers (IEEE). The 802.11 standard, in turn, has its origins in a 1985 ruling by the U.S. Federal Communications Commission (FCC) that made the Industrial Scientific Medical (ISM) unlicensed radio frequency bands available for public use.
Wi-Fi is often referred to as “polite” because it uses a procedure called Listen-Before-Talk (LBT). LBT is a contention-based protocol that requires data-transmitting devices to listen and wait for a given frequency band to be clear before sending data. If the device (access point [AP] or station) does not detect transmissions, it proceeds to send data. If the device does detect transmissions, it waits for a random period of time and listens again for a clear frequency or channel before commencing transmission.
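The Listen-Before-Talk procedure described above can be sketched as a simple loop: sense the medium, transmit if clear, otherwise back off a random interval and listen again. The channel model and timing in this sketch are invented for illustration; real 802.11 CSMA/CA adds slotted contention windows, exponentially growing backoff, acknowledgments and more.

```python
import random

# Minimal Listen-Before-Talk sketch. channel_busy(attempt) stands in for
# carrier sensing; the random draw stands in for a backoff wait.

def transmit_with_lbt(channel_busy, rng, max_attempts=10):
    """Return the attempt number on which the device got to transmit."""
    for attempt in range(1, max_attempts + 1):
        if not channel_busy(attempt):
            return attempt        # medium sensed clear: transmit now
        rng.uniform(0, 1)         # wait a random backoff, then listen again
    raise RuntimeError("medium never became clear within max_attempts")

rng = random.Random(42)
# Toy medium: busy for the first two listens, then clear.
busy_pattern = {1: True, 2: True, 3: False}
attempt = transmit_with_lbt(lambda a: busy_pattern.get(a, False), rng)
print(attempt)   # transmits on the third listen
```

This politeness is exactly what makes airtime the scarce resource the surrounding text describes: every contender must wait its turn on a shared channel.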
Wi-Fi has become ubiquitous over the years and is the primary method by which we connect devices in the home, at work and in public places to reach the internet. Multiple Wi-Fi devices in a typical broadband home can cause contention for available frequencies. Dual Channel Wi-Fi addresses Wi-Fi congestion by providing one or more channels for downstream-only data in addition to the primary bi-directional channel. The primary channel is used for upstream and small downstream packets, and the other channel(s) are used for large downstream and time-critical data, like video. By offering operators configurable tools to intelligently redirect Wi-Fi traffic, Dual Channel Wi-Fi achieves better airtime utilization for all traffic, resulting in fewer interruptions and a much better Wi-Fi experience for everyone.
Benefits of Dual-Channel Wi-Fi
Dual Channel Wi-Fi benefits more than just downlink-only clients:
- A better overall multi-user experience: As demonstrated in our performance testing, without Dual Channel Wi-Fi, tests using two standard Wi-Fi channels show issues with video streams, gameplay delays, download buffering and slower throughput to individual devices. Moving data off the standard Wi-Fi channel effectively clears traffic from that channel, allowing both the AP and other clients more opportunities to send data.
- A radically improved multi-user experience: By virtually eliminating hesitation and pixelation in video delivery, Dual Channel Wi-Fi enables smooth gameplay without delays and faster overall delivery of data to both Dual Channel Wi-Fi and non-Dual Channel Wi-Fi devices. In our tests, depending on the application, data transfer speeds increased by up to 12 times, while airtime efficiency (by reducing the need for retransmissions) increased by 50 percent.
- Reduction of downlink data packet errors and packet retries: The AP’s ability to send data to clients without contention interference reduces downlink data packet errors and packet retries, resulting in fewer uplink retry messages. This, in turn, allows the AP to send more TCP segments at a time, further reducing the amount of uplink traffic.
These improvements in data delivery over the Wi-Fi network as a whole are an example of user experience improvements that can be achieved by technologies that complement the cable industry’s 10G initiative. As the cable industry drives towards faster speeds, lower latency and increased reliability, Dual Channel Wi-Fi helps ensure that those benefits are experienced all the way to end user devices.
Because Dual Channel Wi-Fi is not limited to one downlink-only data channel, deployments in venues such as stadiums, airports and outdoor arenas can also benefit. Dual Channel Wi-Fi’s configurable filters can selectively assign, move or remove devices from individual downlink-only data channels. The mechanism for determining which downlink-only channels different devices should be assigned to is open to vendor development. This ability will allow operators and vendors to perform load balancing across the downlink-only data channels, managing the network to ensure the best user experience.
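One vendor-defined assignment mechanism of the kind described above might simply place each downlink-heavy client on the currently least-loaded downlink-only channel. The sketch below illustrates that greedy load-balancing idea; the channel names, client list and per-client demand figures are invented for illustration.

```python
# Hypothetical Dual Channel Wi-Fi load balancer: greedily place each client
# on the least-loaded downlink-only data channel. All values are invented.

def assign_channel(channel_loads):
    """Pick the least-loaded downlink-only channel."""
    return min(channel_loads, key=channel_loads.get)

def place_clients(clients, channels):
    """Greedy placement: each client lands on the currently lightest channel."""
    loads = {ch: 0 for ch in channels}
    placement = {}
    for client, demand_mbps in clients:
        ch = assign_channel(loads)
        placement[client] = ch
        loads[ch] += demand_mbps
    return placement, loads

clients = [("tv-4k", 25), ("console", 15), ("laptop", 10), ("tablet", 5)]
placement, loads = place_clients(clients, ["dl-chan-1", "dl-chan-2"])
print(placement)
print(loads)
```

Real implementations could weigh signal quality, traffic type or airtime history instead of raw demand, which is precisely the flexibility left open to vendors.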
Dual Channel Wi-Fi is compatible with all Wi-Fi releases, including Wi-Fi 6. Dual Channel Wi-Fi has been developed and tested on various AP and client platforms. These include RDK-B, RDK-V, Ubuntu, Windows, macOS and OpenWrt, the last of which was co-implemented by Edgewater Wireless. Both CableLabs and Edgewater Wireless are excited about the opportunity to improve Wi-Fi for users around the world and look forward to working with standards bodies, internet service providers and device manufacturers of video set-tops, streaming devices, laptops, tablets and gaming consoles.
For more information about Dual Channel Wi-Fi, test results and implementation guides, click below.
Moving Beyond Cloud Computing to Edge Computing
In the era of cloud computing—a predecessor of edge computing—we’re immersed in social networking sites, online content and other online services that give us access to data from anywhere at any time. However, next-generation applications focused on machine-to-machine interaction, built on concepts like the Internet of Things (IoT), machine learning and artificial intelligence (AI), will shift the focus to “edge computing,” which, in many ways, is the anti-cloud.
Edge computing is where we bring the power of cloud computing closer to the customer premises at the network edge to compute, analyze and make decisions in real time. The goal of moving closer to the network edge—that is, within miles of the customer premises—is to boost the performance of the network, enhance the reliability of services and reduce the cost of moving data computation to distant servers, thereby mitigating bandwidth and latency issues.
The Need for Edge Computing
The growth of the wireless industry and new technology implementations over the past two decades have driven a rapid migration from on-premises data centers to cloud servers. However, with the increasing number of Industrial Internet of Things (IIoT) applications and devices, performing computation at either data centers or cloud servers may not be an efficient approach. Cloud computing requires significant bandwidth to move the data from the customer premises to the cloud and back, which also increases latency. With stringent latency requirements for IIoT applications, and devices requiring real-time computation, the computing capabilities need to be at the edge—closer to the source of data generation.
What Is Edge Computing?
The word “edge” refers to the geographic distribution of network resources. Edge computing makes it possible to perform data computation close to the data source, instead of traversing multiple hops and relying on the cloud network to perform the computing and relay results back. Does this mean we don’t need the cloud network anymore? No, but it means that instead of data traversing through the cloud, the cloud is now closer to the source generating the data.
Edge computing refers to sensing, collecting and analyzing data at the source of data generation, and not necessarily at a centralized computing environment such as a data center. Edge computing uses digital devices, often placed at different locations, to transmit the data in real time or later to a central data repository. Edge computing is the ability to use distributed infrastructure as a shared resource, as the figure below shows.
Edge computing is an emerging technology that will play an important role in pushing the frontier of data computation to the logical extremes of a network.
Key Drivers of Edge Computing:
- Plummeting cost of computing elements
- Smart and intelligent computing abilities in IIoT devices
- A rise in the number of IIoT devices and ever-growing demand for data
- Technology enhancements with machine learning, artificial intelligence and analytics
Benefits of Edge Computing
Computational speed and real-time delivery are the most important features of edge computing, allowing data to be processed at the edge of the network. The benefits of edge computing manifest in these areas:
Moving data computing to the edge reduces latency. Latency without edge computing—when data needs to be computed at a server located far from the customer premises—varies depending on available bandwidth and server location. With edge computing, data does not have to traverse over a network to a distant server or cloud for processing, which is ideal for situations where latencies of milliseconds can be untenable. With data computing performed at the network edge, the messaging between the distant server and edge devices is reduced, decreasing the delay in processing the data.
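The latency benefit can be illustrated with a back-of-envelope model. The sketch below compares round-trip propagation delay to a distant cloud server versus an edge node within miles of the customer premises. The distances and fiber propagation speed are assumptions for illustration; real latency also includes queuing, processing and protocol overhead, which this model ignores.

```python
# Back-of-envelope sketch (assumed distances and speeds) of round-trip
# propagation delay: distant cloud region vs. a nearby edge node.

C_FIBER_KM_S = 200_000  # light in fiber: roughly 2/3 of c, in km/s

def propagation_rtt_ms(distance_km):
    """Round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / C_FIBER_KM_S * 1000

cloud_rtt = propagation_rtt_ms(1500)  # assumed distant cloud region, km
edge_rtt = propagation_rtt_ms(10)     # assumed edge node within miles
```

Even under these idealized assumptions, the propagation component alone drops from the order of 15 ms to well under a millisecond, which is why edge computing matters for applications where milliseconds are untenable.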
Pushing processing to edge devices, instead of streaming data to the cloud for processing, decreases the need for high bandwidth while improving response times. Bandwidth is a key and scarce resource, so reducing the network load imposed by high-bandwidth traffic can help with better spectrum utilization.
From a certain perspective, edge computing provides better security because data does not traverse over a network, instead staying close to the edge devices where it is generated. The less data computed at servers located away from the source or cloud environments, the less the vulnerability. Another perspective is that edge computing is less secure because the edge devices themselves can be vulnerable, putting the onus on operators to provide high security on the edge devices.
What Is Multi-Access Edge Computing (MEC)?
MEC enables cloud computing at the edge of the cellular network with ultra-low latency. It allows running applications and processing data traffic closer to the cellular customer, reducing latency and network congestion. Computing data closer to the edge of the cellular network enables real-time analysis for providing time-sensitive response—essential across many industry sectors, including health care, telecommunications, finance and so on. Implementing distributed architectures and moving user plane traffic closer to the edge by supporting MEC use cases is an integral part of the 5G evolution.
Edge Computing Standardization
Various groups in the open source and standardization ecosystem are actively looking into ways to ensure interoperability and smooth integration of edge computing elements. These groups include:
- The Edge Computing Group
- CableLabs SNAPS programs, including SNAPS-Kubernetes and SNAPS-OpenStack
- OpenStack’s StarlingX
- Linux Foundation Networking’s OPNFV, ONAP
- Cloud Native Compute Foundation’s Kubernetes
- Linux Foundation’s Edge Organization
How Can Edge Computing Benefit Operators?
- Dynamic, real-time and fast data computing closer to edge devices
- Cost reduction with fewer cloud computational servers
- Spectral efficiency with lower latency
- Faster traffic delivery with increased quality of experience (QoE)
The adoption of edge computing has been rapid, with increases in IIoT applications and devices, thanks to myriad benefits in terms of latency, bandwidth and security. Although it’s ideal for IIoT, edge computing can help any application that might benefit from latency reduction and efficient network utilization, by minimizing the load on the network from carrying data back and forth.
Evolving wireless technology has enabled organizations to perform faster and more accurate data computing at the edge. Edge computing offers benefits to wireless operators by enabling faster decision making and lowering costs, without the need for data to traverse the cloud network. It lets wireless operators place computing power and storage capabilities directly at the edge of the network. As 5G evolves and we move toward a connected ecosystem, wireless operators are challenged to operate 4G alongside 5G enhancements such as edge computing, NFV and SDN. The success of edge computing cannot be predicted (the technology is still in its infancy), but the benefits might provide wireless operators with a critical competitive advantage in the future.
How Can CableLabs Help?
CableLabs is a leading contributor to the European Telecommunications Standards Institute NFV Industry Specification Group (ETSI NFV ISG). Our SNAPS™ program is part of Open Platform for NFV (OPNFV). We wrote the OpenStack API abstraction library “SNAPS-OO” and contributed it to the OPNFV project at the Linux Foundation; it leverages object-oriented software development practices to automate and validate applications on OpenStack. We also added Kubernetes support with SNAPS-Kubernetes, introducing a Kubernetes stack to provide CableLabs members with open source software platforms. SNAPS-Kubernetes is a certified CNCF Kubernetes installer that is targeted at lightweight edge platforms and scalable with the ability to efficiently manage failovers and software updates. SNAPS-Kubernetes is optimized and tailored to address the needs of the cable industry and general edge platforms. Edge computing on Kubernetes is emerging as a powerful way to share, distribute and manage data on a massive scale in ways that cloud or on-premises deployments cannot necessarily provide.
CableLabs Sponsors FCBA/IAPP “Data Is King”
Many of today’s most popular consumer products and services are powered by the exponential growth in the generation, collection and use of personal data, enabled by ever-increasing broadband capacity, processing power and storage. These products and services provide consumers with unparalleled personalization, efficiency and convenience. However, the technologies and practices surrounding personal data also create new dimensions of risk to individuals, institutions and society alike.
In response, governments both in the United States and around the world are under increasing pressure to develop new legislation and regulatory models to address these growing concerns. In the past year alone, we have seen the implementation of the European Union’s sweeping General Data Protection Regulation (GDPR), the passing of the California Consumer Privacy Act, and multiple hearings in the U.S. Congress stemming from numerous data breaches and other scandals involving the potential misuse of consumers’ personal data. Here at CableLabs, we recognize the interplay and potential impact of emerging privacy regulations on the direction of next-generation Internet applications.
In that spirit, CableLabs sponsored “Data Is King” – U.S. Privacy Developments and Implications for Global Markets and Technology Development, a recent event co-hosted by the Federal Communications Bar Association (FCBA) Rocky Mountain Chapter and the IAPP Denver/Boulder KnowledgeNet Chapter. The event gathered luminaries from across the policy and technology spectrum to explore trends and recent developments in privacy law and regulation, as well as the potential impact that these policies will have on the products and services of tomorrow.
The event was kicked off by Martin Katz (Chief Innovation Officer and Senior Advisor for Academic Innovation and Design at the University of Denver and the Executive Director at Project X-ITE). Katz discussed the existing gaps and fragmentation in today’s U.S. privacy regime and highlighted the drawbacks of the EU’s approach to comprehensive personal data protection legislation (GDPR). In Katz’s view, such an approach creates a significant and costly compliance regime that can stifle new startups and small businesses, and more generally, innovative new products and services. He emphasized that any comprehensive U.S. federal regime should recognize and seek to minimize compliance costs and ensure room for innovation while protecting consumer choice, trust and accountability.
Tracy L. Lechner (Attorney and Founder at the Law Offices of Tracy L. Lechner) moderated the first panel session, focused on trends and recent developments in privacy regulations domestically and internationally, with the following panelists: Beth Magnuson (Senior Legal Editor of Privacy and Data Security at Thomson Reuters Practical Law); Dale Skivington (Compliance and Privacy Consultant, Adjunct Professor at the University of Colorado, and Former Chief Privacy Officer at Dell); Erik Jones (Partner at Wilkinson, Barker, Knauer); and Scott Cunningham (Owner at Cunningham Tech Consulting and Founder of IAB Tech Lab).
The panelists agreed that the general position of industry has evolved from a preference for best practices with agency oversight to a recognized need for U.S. federal legislation. This shift has been spurred by a desire for a common compliance framework in light of developing differences in state laws and diverging international privacy regimes. The panelists emphasized that changing privacy regulatory requirements have forced organizations to make frequent and costly IT overhauls to ensure compliance that arguably create little to no value for consumers. For instance, GDPR’s expansive definition of “personal data” created a herculean project for large organizations to take the foundational step of identifying and classifying all the potentially covered data. The panelists agreed that state attorneys general could have a valuable and thoughtful role in enforcement, but they also believe that specific requirements should be standardized at the federal level and be based on an outcome- or risk-based approach, unlike GDPR’s highly prescriptive approach.
Mark Walker (Director of Technology Policy at CableLabs) led a second-panel discussion, focused on the interplay of privacy regulation and technology development. The panel featured Walter Knapp (CEO at Sovrn), Scott Cunningham and Danny Yuxing Huang (Postdoctoral Research Fellow at the Center for Information Technology Policy at Princeton University). Walker framed the panel discussion in historic terms, highlighting the privacy concerns generated through the widespread availability of the portable camera in the late 1800s, through the emergence of electronic eavesdropping capabilities in the 1960s and, more recently, through the broad adoption of RFID technology. For each of these examples, public concern drove legal and regulatory changes, but more fundamentally, the privacy “panic” subsided for each technology as society became more familiar and comfortable with each technology’s balance of benefits and drawbacks.
Through that lens, the panelists examined GDPR and highlighted the high associated compliance costs, from both a technical implementation and revenue perspective. Faced with these costs, many smaller publishers are choosing to cut off access to their content from covered geographies rather than trying to comply. In comparison, large Internet firms have the resources to ensure compliance even in a costly and highly fragmented regulatory environment. Until recently, the Internet has largely matured without defined geographic borders and has nearly eliminated global distribution costs for smaller publishers. However, this trend may be reversed in the face of an emerging fragmented and highly regulated environment, reducing the viability of smaller publishers and driving unintended market concentration.
Turning to emerging technologies, Huang described his research into the security and privacy implications of consumer Internet of Things (IoT). He provided an overview of a newly released research tool, Princeton IoT Inspector, that consumers can easily use to gain detailed insights into the network behaviors of their smart home IoT devices. Through this tool, consumers can gain a better understanding of how IoT devices share their personal information. He illustrated how IoT Inspector was able to identify the numerous ad networks and other domains a streaming video device communicated with while streaming a single television program; surprisingly, the streaming device communicated with more than 15 separate domains during that single streaming program.
The event closed with Phil Weiser, Colorado’s Attorney General, providing keynote remarks that outlined the current state of legislative efforts, explained potential approaches that address key privacy challenges and highlighted the role of state attorneys general in developing regulatory approaches and enforcing them. Attorney General Weiser recognized that although curbing a patchwork of state laws in favor of a single federal one would be the ideal outcome, it is unlikely to happen in a reasonable timeframe, saying:
A first best solution would be a comprehensive federal law that protected consumer privacy. Such a law, like the Dodd-Frank law, should authorize State AGs to protect consumers. When Congress starts working on such a law, I will be eager and willing to support such an effort. After all, differing laws and reporting requirements designed to protect privacy creates a range of challenges for companies and those working to comply with different—and not necessarily consistent—laws.
In today’s second-best world, I believe that States have an obligation to move forward. We should do so with a recognition that we need to collaborate with one another and develop approaches that recognize the challenges around compliance. We can use your help and engagement as we work towards just this end.
As CableLabs continues to focus on developing new and innovative network technologies, we must continue to ensure that we have a sound understanding of the rapidly evolving privacy landscape, both here and abroad. But, just as importantly, policymakers should have a sound understanding of how the various regulatory approaches may impact current and developing technologies. Events like this help bridge those gaps in understanding.
Be a Part of the Next Generation – Join the Next Remote PHY Interoperability Event
A CableLabs interoperability event is always a popular affair—and with good reason. It’s where manufacturers from all corners of the industry can come together to test the viability and interoperability of their products, as well as resolve technical issues before going to market. Our next Interop•Labs event, focused on Remote PHY technology, will be held May 6–10 in Louisville, Colorado. Space is limited, so be sure to register before May 1 to reserve your spot!
What to Expect at the Event
CableLabs is known for developing specifications, but our work doesn’t stop there. We want to do everything we can to ensure that our specifications are implemented properly and that the final consumer products deliver the best possible experience for customers. This philosophy benefits our members and vendors and, ultimately, the industry as a whole.
At the event, we will help you verify that your device and software meet the Remote PHY (R-PHY) requirements, and we will address any issues associated with implementation or interoperability. You will also get a rare opportunity to collaborate with other vendors and make sure that your products work together.
All event participants will get access to Kyrio’s state-of-the-art laboratories, fully equipped for comprehensive interoperability and performance testing. All you need to bring is the equipment or software that you intend to test.
A Bit of Housekeeping…
The event is open to all CableLabs members and NDA vendors. You must have a CableLabs member or vendor account to register, as well as approved R-PHY project access. Each participating company can send an appropriate number of engineers, in addition to any contributing engineers from the CableLabs R-PHY working groups. We also ask that you sign the DOCSIS® Participation Agreement prior to the event. If you have any questions, please email us at email@example.com.
OFC: A Third of a Mile of Next-Gen Optics
For the more technically inclined, that’s 2.6 microseconds: the time it takes light to travel a third of a mile through fiber optic cable. It was also the length of the show floor at OFC: The Optical Networking and Communications Conference and Exhibition, held in March in San Diego, California.
Of course, it took me considerably longer – 115,384,615 times longer, or about 5 minutes – to walk that same distance at the show. And that’s assuming I maintained a fast pace and avoided stopping for the entire distance – a feat that proved essentially impossible, given the amazing assortment of next-generation optical technology on display, as well as a large number of familiar faces around me!
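For readers who want to check the arithmetic, here is a short sketch. It assumes light travels through fiber at roughly two-thirds of its vacuum speed (a common approximation for silica fiber); small differences in the assumed refractive index explain why the result lands near, rather than exactly on, the 2.6 µs quoted above.

```python
# Rough check of the light-through-fiber figures, assuming light in
# fiber travels at about 2/3 of its vacuum speed.

MILE_M = 1609.34
c = 299_792_458          # speed of light in vacuum, m/s
v_fiber = c * 2 / 3      # approximate speed of light in fiber, m/s

distance = MILE_M / 3               # a third of a mile, in meters
light_time = distance / v_fiber     # ~2.7e-6 seconds, i.e. a few microseconds
walk_time = 5 * 60                  # about 5 minutes, in seconds
ratio = walk_time / light_time      # walking is ~1e8 times slower
```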
The show floor hosted 683 exhibitors – too many to take in over such a short time. Among them were many of the companies that have been involved in the CableLabs P2P Coherent Optics effort, helping to blaze the trail for the use of coherent optics technology in the cable access network, in turn enabling our 10G vision. In those booths, many were showcasing products that support 100G speeds based on our PHYv1.0 specification, as well as their roadmap and plans for devices supporting 200G speeds based on our PHYv2.0 specification. Roaming the show floor, checking out exhibited products or enjoying key sessions, I kept running into many of the direct participants in our efforts, despite the fact that 15,400 people were in attendance.
It didn’t seem that I could go very far without encountering someone from a significant CableLabs contingent or one of our members, reflecting the importance of next-generation optics to the cable industry, as well as CableLabs’ strong commitment to developing new optical technologies. Our Optical Center of Excellence has been at the forefront of developing innovative approaches for applying optical technology to cable networks, such as Full Duplex Coherent Optics.
CableLabs on Display
Although CableLabs wasn’t an official exhibitor, beyond having a contingent of people present, CableLabs and cable definitely had a presence at this year’s OFC. The importance of the cable industry was mentioned during a keynote speech; Curtis Knittle participated in a panel on “Action in the Access Network” as a part of the OIDA Executive Forum, and one of our interns presented a poster as part of a collaboration with CableLabs’ Bernardo Huberman and Lin Cheng.
Another presentation from our own Mu Xu also illustrated how CableLabs is pushing the boundaries of optical technology. This presentation – titled “Multi-Stage Machine Learning Enhanced DSP for DP-64QAM Coherent Optical Transmission” and co-authored by other CableLabs thought leaders including Junwen Zhang, Haipeng Zhang, Jing Wang, Lin Cheng, Zhensheng Jia, Alberto Campos, and Curtis Knittle – was particularly noteworthy because it brought together multiple areas of next-generation technology and research going on at CableLabs.
This was my first year attending OFC, and I feel like I barely scratched the surface of what was there. More than anything else, I came away impressed by the impact that the CableLabs team is making on the optical industry, one that will be critical for enabling our 10G future. I’m greatly looking forward to next year.
Mobility Lab Webinar #3 Recap: Inter-Operator Mobility with CBRS
Today we hosted our third webinar in the Mobility Lab Webinar series, “Inter-Operator Mobility with CBRS.” In case you missed the webinar, you can read about it in this blog or scroll down to see the recorded webinar and Q&A below.
Multiple service operators (MSOs) may be motivated to provide mobile services using the new 3.5 GHz spectrum introduced with Citizens Broadband Radio Service (CBRS). However, because CBRS operates low-power small cells to provide localized coverage in high-traffic environments, MSOs may rely on mobile virtual network operator (MVNO) agreements to provide mobile service outside the CBRS coverage area. In this scenario, MSOs will be motivated to:
- deliver a seamless transition,
- minimize the transition time between the home CBRS network and the visitor MVNO network, and
- maximize device attachment to the home CBRS network.
For inter-operator roaming, mobile operators use one of the two 3GPP roaming standards—Home Routing (HR) or Local Break Out (LBO)—to support the transition between a home network and roaming partner visitor networks. The international or domestic roaming agreements between home and visitor operator networks require the two networks to share roaming interfaces, as dictated by the 3GPP-defined roaming models. Because mobile operators are motivated to keep their subscribers on their network as long as possible to minimize LTE offload, they have little incentive to provide open access and connection to MVNO partners. Thus, the CBRS operator and host MVNO operators may have different and opposing motivations.
Our Webinar: Inter-Operator Mobility with CBRS
The “Inter-Operator Mobility with CBRS” webinar provides key findings that may assist MSOs in evaluating the implementation of the two roaming models for CBRS use cases with regards to:
- inter-operator mobility using network-based triggers for connected and idle modes,
- sharing of roaming interfaces,
- Public Land Mobile Network (PLMN) configurations, and
- higher-priority network selection timer.
The webinar also discusses the alternative solutions to network-based transition, such as:
- device transition controlled with an external server and
- enhancing dual SIM functionality.
You can view the webinar, webinar Q&A and technical brief below:
If you have any questions, please feel free to reach out to Omkar Dharmadhikari. Stay tuned for information about upcoming webinars by subscribing to our blog.