HFC Network

Cable Broadband: From DOCSIS 3.1® to DOCSIS 4.0®

Nov 6, 2020

In 1997, CableLabs released the very first version of the Data Over Cable Service Interface Specification (DOCSIS® technology), which enabled broadband internet service over Hybrid Fiber-Coaxial (HFC) networks. Ever since, we've been making improvements, greatly enhancing network speed, capacity, latency, reliability and security with every new version. Today, cable operators use DOCSIS 3.1 technology to make 1 Gbps cable broadband services available to 80% of U.S. homes, easily enabling 4K video, seamless multi-player online gaming, video conferencing and much more. Although there is still a significant runway for DOCSIS 3.1, CableLabs has been hard at work developing the next version, DOCSIS 4.0, which was officially released in March 2020 and further advances the performance of HFC networks. Let's take a look.

First, let's talk about upstream speeds. DOCSIS 4.0 technology will quadruple the upstream capacity of the HFC network to 6 Gbps, compared with the 1.5 Gbps available with DOCSIS 3.1. While current cable customers still download significantly more data than they upload, upstream data usage is on the rise. In the near future, advanced video collaboration tools, VR and more will require even more upstream capacity. DOCSIS 4.0 also provides more options for operators to increase downstream speeds, with up to 10 Gbps of capacity. It has been designed to support the widespread availability of symmetric multigigabit speed tiers through full-duplex and extended-spectrum technologies that move us closer to our 10G goal.

In addition to faster speeds, DOCSIS 4.0 will also deliver stronger network security through enhanced authentication and encryption capabilities, and greater reliability thanks to Proactive Network Maintenance (PNM) improvements. It is a great leap toward 10G, setting the stage for a series of subsequent enhancements that will all work together to help us build the future we've always dreamed of.


Download the infographic.


Security

A Proposal for a Long-Term Post-Quantum Transitioning Strategy for the Broadband Industry via Composite Crypto and PQPs

Massimiliano Pala
Principal Security Architect

Oct 22, 2020

The broadband industry has historically relied on public-key cryptography to provide secure and strong authentication across access networks and devices. In our environment, one of the most challenging issues when it comes to cryptography is supporting devices with different capabilities. Some of these devices may be only partially upgradeable, or not upgradeable at all. This can be due to software limitations (e.g., firmware or applications that cannot be securely updated) or hardware limitations (e.g., in crypto accelerators or secure elements).

A Heterogeneous Ecosystem

When researching our transitioning strategy, we realized that—especially for constrained devices—the only option at our disposal was the use of pre-shared keys (PSKs) to allow for post-quantum safe authentications for the various identified use cases.

In a nutshell, our proposal combines the use of composite crypto, post-quantum algorithms and timely distributed PSKs to accommodate the coexistence of our main use cases: post-quantum capable devices, post-quantum validation capable devices and classic-only devices. In addition to providing a classification of the various types of devices based on their crypto capabilities to support the transition, we also looked at the use of composite crypto for the next-generation DOCSIS® PKI to allow the delivery of multi-algorithm support for the entire ecosystem: Elliptic Curve Digital Signature Algorithm (ECDSA) as a more efficient alternative to the current RSA algorithm, and a post-quantum algorithm (PQA) for providing long-term quantum-safe authentications. We devised a long-term transitioning strategy for allowing secure authentications in our heterogeneous ecosystem, in which new and old must coexist for a long time.

Three Classes of Devices

The history of broadband networks teaches us that we should expect devices that are deployed in DOCSIS® networks to be very long-lived (i.e., 20 or more years). This translates into the added requirement—for our environment—to identify strategies that allow for the different classes of devices to still perform secure authentications under the quantum threat. To better understand what is needed to protect the different types of devices, we classified them into three distinct categories based on their long-term cryptographic capabilities.

Classic-Only Devices. This class of devices does not provide any crypto-upgrade capability, except for supporting the composite crypto construct. For this class of devices, we envision deploying post-quantum PSKs (PQPs) that are left dormant until quantum-safe protection of the public-key algorithm is needed.

Figure: PSK Protection

Specifically, while the identity is still provided via classic signatures and the associated certificate chains, protection against quantum attacks is provided via the pre-deployed PSKs. Various techniques have been identified to ensure these keys are randomly combined and updated while the device is attached to the network, so that an attacker would need access to the full history of the device's traffic to recover the PSKs. This solution can be deployed today for cable modems and other fielded devices.
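To make the idea of combining and updating PSKs concrete, here is a minimal, purely illustrative sketch of a hash-based PSK ratchet in Python. It is not a mechanism defined in any specification, and the function and parameter names are hypothetical; it only shows why an attacker would need every exchanged nonce, i.e., the full traffic history, to reconstruct the current key.

```python
import hashlib
import hmac
import os

def ratchet_psk(current_psk: bytes, fresh_nonce: bytes) -> bytes:
    """Mix the current PSK with fresh randomness to derive the next PSK.

    Recovering the latest key requires every nonce ever mixed in, i.e.,
    the full history of the device's traffic (illustrative only).
    """
    return hmac.new(current_psk, b"psk-update" + fresh_nonce, hashlib.sha256).digest()

# Example: both sides start from a factory-provisioned post-quantum PSK (PQP)
psk = os.urandom(32)           # stands in for the pre-deployed PQP
for _ in range(3):             # periodic updates while attached to the network
    nonce = os.urandom(16)     # random material exchanged over the link
    psk = ratchet_psk(psk, nonce)
```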

Quantum-Validation Capable Devices. This type of device does not allow upgrading the secure storage or the private-key algorithms, but its crypto libraries can be updated to support selected PQAs and quantum-safe key encapsulation mechanisms (KEMs). Devices with certificates issued under the original RSA infrastructure must still use the deployed PSKs to protect the full authentication chain, whereas devices whose credentials are issued under the new PKI need only protect the link between the signature and the device certificate. For these devices, PSKs can be transferred securely via quantum-resistant KEMs.
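The KEM-based PSK delivery mentioned above can be pictured with the generic interface below. This is only a sketch: the interface, function names and data flow are assumptions for illustration, and any concrete quantum-resistant scheme would plug in behind the same three operations (key generation, encapsulation, decapsulation).

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class KEM:
    """Hypothetical quantum-resistant KEM interface (any concrete scheme plugs in here)."""
    keygen: Callable[[], Tuple[bytes, bytes]]            # () -> (public_key, secret_key)
    encapsulate: Callable[[bytes], Tuple[bytes, bytes]]  # public_key -> (ciphertext, shared_secret)
    decapsulate: Callable[[bytes, bytes], bytes]         # (secret_key, ciphertext) -> shared_secret

def deliver_psk(kem: KEM, device_public_key: bytes) -> Tuple[bytes, bytes]:
    """Network side: wrap a fresh PSK for a quantum-validation capable device."""
    ciphertext, shared_secret = kem.encapsulate(device_public_key)
    return ciphertext, shared_secret       # the shared secret becomes the new PSK

def receive_psk(kem: KEM, device_secret_key: bytes, ciphertext: bytes) -> bytes:
    """Device side: recover the PSK from the KEM ciphertext."""
    return kem.decapsulate(device_secret_key, ciphertext)
```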

Quantum Capable Devices. These devices will have full PQA support (both for authentication and validation) and might also support classic algorithms for validation. The use of composite crypto allows the same entities to be validated across the quantum-threat hump, especially on the access network side. To validate classic-only devices, Kerberos can be used to address the distribution of symmetric pairwise PSKs for authentication and encryption.

Composite Crypto Solves a Fundamental Problem

In our proposal for a post-quantum transitioning strategy for the broadband industry, we identified the use of composite crypto and PQPs as the two necessary building blocks for enabling secure authentication for all PKI data (from digital certificates to revocation information).

When composite crypto and PQPs are deployed together, the proposed architecture allows for secure authentication across different classes of devices (i.e., post-quantum and classic); lowers the cost of transitioning to quantum-safe algorithms by requiring support for a single infrastructure (also required for indirect authentication data such as “stapled” OCSP responses); extends the lifetime of classic devices across the quantum hump; and does not require protocol changes (even proprietary ones), as the two-certificate solution would.

Ultimately, the use of composite crypto efficiently solves the fundamental problem of associating both classic and quantum-safe algorithms with a single identity.
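To illustrate the idea, and only as a sketch under assumed interfaces (none of the names below come from the DOCSIS PKI or any certificate format), a composite signature can be thought of as a pair of signatures over the same message, one classic and one post-quantum, carried under a single identity:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CompositeSigner:
    """Signs the same message with a classic algorithm and a post-quantum algorithm."""
    classic_sign: Callable[[bytes], bytes]       # e.g., an ECDSA or RSA signer
    pq_sign: Callable[[bytes], bytes]            # e.g., a lattice-based signer

    def sign(self, message: bytes) -> Dict[str, bytes]:
        return {"classic": self.classic_sign(message),
                "post_quantum": self.pq_sign(message)}

@dataclass
class CompositeVerifier:
    classic_verify: Callable[[bytes, bytes], bool]
    pq_verify: Callable[[bytes, bytes], bool]

    def verify(self, message: bytes, sig: Dict[str, bytes], require_pq: bool) -> bool:
        ok_classic = self.classic_verify(message, sig["classic"])
        ok_pq = self.pq_verify(message, sig["post_quantum"])
        # Classic-only relying parties check only the classic component;
        # post-quantum capable relying parties can insist that both components verify.
        return ok_classic and (ok_pq or not require_pq)
```

A classic-only relying party simply ignores the post-quantum component, which is what lets new and old devices coexist under one identity.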

To learn more, watch the SCTE Cable-Tec Expo 2020 “Evolving Security Tools: Advances in Identity Management, Cryptography & Secure Processing” recording and participate in KeyFactor's 2020 Summit.


HFC Network

The Cable Security Experience

Steve Goeringer
Distinguished Technologist, Security

Aug 31, 2020

We’ve all adjusted the ways we work and play and socialize in response to COVID. This has increased awareness that our broadband networks are critical – and they need to be secure. The cable industry has long focused on delivering best-in-class network security and we continue to innovate as we move on towards a 10G experience for subscribers.

CableLabs® participates in both hybrid fiber coaxial (HFC) and passive optical network (PON) technology development. This includes the development and maintenance of the Data Over Cable Service Interface Specification (DOCSIS®) technology that enables broadband internet service over HFC networks. We work closely with network operators and network equipment vendors to ensure the security of both types of networks. Let's review these two network architectures, discuss the threats that HFC and PON networks face, and conclude by briefly discussing the security of DOCSIS HFC networks. We'll see that the physical medium (fiber or coax) doesn't matter much to the security of the wired network.

A Review of HFC and PON Architectures

The following diagram illustrates the similarities and differences between HFC and PON.

Figure: HFC and PON network architectures

Both HFC and PON-based FTTH are point-to-multipoint network architectures, which means that in both architectures the total capacity of the network is shared among all subscribers on the network. Most critically, from a security perspective, all downlink subscriber communications in both architectures are present at the terminating network element at the subscriber – the cable modem (CM) or optical network unit (ONU). This necessitates protections for these communications to ensure confidentiality.

In an HFC network, the fiber portion is between a hub or headend that serves a metro area (or portion thereof) and a fiber node that serves a neighborhood. The fiber node converts the optical signal to radio frequency, and the signal is then sent on to each home in the neighborhood over coaxial cable. This hybrid architecture enables continued broadband performance improvements to support higher user bandwidths without the need to replace the coaxial cable throughout the neighborhood. It’s important to note that the communication channels to end users in the DOCSIS HFC network are protected, through encryption, on both the coaxial (radio) and fiber portions of the network.

FTTH is most commonly deployed using a passive optical networking (PON) architecture, which uses a shared fiber down to a point in the access network where the optical signal is split using one or more passive optical splitters and transmitted over fiber to each home. The network element on the network side of this connection is an Optical Line Terminal (OLT) and at the subscriber side is an ONU. There are many standards for PON. The two most common are Gigabit Passive Optical Networks (GPON) and Ethernet Passive Optical Networks (EPON). An interesting architecture option to note is that CableLabs developed a mechanism that allows cable operators to manage EPON technology the same way they manage services over the DOCSIS HFC network – DOCSIS Provisioning of EPON.

In both HFC and PON architectures, encryption is used to ensure the confidentiality of the downlink communications. In DOCSIS HFC networks, encryption is used bi-directionally by encrypting both the communications to the subscriber’s cable modem (downlink) and communications from the subscriber’s cable modem (uplink). In PON, bi-directional encryption is also available.

Attacker View

How might an adversary (a hacker) look at these networks? There are four attack vectors available to adversaries in exploiting access networks:

  • Adversaries can directly attack the access network (e.g., tapping the coax or fiber cable).
  • They may attack a customer premises equipment (CPE) device from the network side of the service, typically referred to as the wide area network (WAN) side.
  • They may attack the CPE device from the home network side, or the local area network (LAN) side.
  • And they may attack the network operator’s infrastructure.

Tapping fiber or coaxial cables is practical in both cases. In fact, tools that allow legitimate troubleshooting and management by authorized technicians abound for both fiber and coaxial cables. It's incorrect to assume that tapping fiber is difficult or highly technical relative to tapping a coaxial cable; you can easily find several examples on the internet of how simply it is done. Depending on where the medium is accessed, all user communications may be available on both the uplink and the downlink. However, both HFC and PON networks support having those communications encrypted, as highlighted above. Of course, that doesn't mean adversaries can't disrupt the communications. They can do so in both cases. Doing so, however, is limited to the homes passed by that specific fiber or coaxial cable; the attack is local and doesn't scale.

For the other attack vectors, the risks to HFC or PON networks are equivalent. CPE and network infrastructure (such as OLTs or CMTSs) must be hardened against both local and remote attacks regardless of transport media (e.g., fiber, coax).

Security Tools Available to Operators

In both HFC and PON architectures, the network operator can provide the subscriber with an equivalent level of network security. The three primary tools to secure both architectures rely on cryptography. These tools are authentication, encryption, and message hashing.

  • Authentication is conducted using a secret of some sort. In the case of HFC, a challenge-and-response exchange based on asymmetric cryptography, as supported by a public key infrastructure (PKI), is used. In FTTH deployments, mechanisms may rely on pre-shared keys, PKI, EAP-TLS (IETF RFC 5216) or some other scheme. The authentication of endpoints should be repeated regularly, which is supported in the CableLabs DOCSIS specification. Regular re-authentication increases the assurance that all endpoints attached to the network are legitimate and known to the network operator.
  • Encryption provides the primary tool for keeping communications private. User communications in HFC are encrypted using cryptographic keys negotiated during the authentication step, using the DOCSIS Baseline Privacy Interface Plus (BPI+) specifications. Encryption implementation for FTTH varies. In both HFC and PON, the most common encryption algorithm used today is AES-128.
  • Message hashing ensures the integrity of messages in the system, meaning that a message cannot be changed without detection once it has been sent. Sometimes this capability is built into the encryption algorithm. In DOCSIS networks, all subscriber communications to and from the cable modem are hashed to ensure integrity, and some network control messages receive additional hashing; a simple sketch of this idea follows this list.
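As a minimal illustration of message hashing (and not the exact MAC construction used in DOCSIS), the sketch below uses an HMAC computed with a shared secret: the sender attaches a tag, and the receiver recomputes it and rejects any message that has been altered. The key and message values are, of course, made up.

```python
import hashlib
import hmac
from typing import Tuple

KEY = b"key negotiated during the authentication step"   # made-up value

def protect(message: bytes, key: bytes = KEY) -> Tuple[bytes, bytes]:
    """Sender side: attach an integrity tag to the message."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes, key: bytes = KEY) -> bool:
    """Receiver side: recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = protect(b"upstream service request")
print(verify(msg, tag))                  # True: message unchanged
print(verify(msg + b" tampered", tag))   # False: any modification is detected
```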

It is important to understand where in the network these cryptography tools are applied. In DOCSIS HFC networks, user communications are protected between the cable modem and the CMTS. If the CMTS functionality is provided by another device such as a Remote PHY Device (RPD) or Remote MACPHY Device (RMD), DOCSIS terminates there. However, the DOCSIS HFC architecture provides authentication and encryption capabilities to secure the link to the hub as well. In FTTH, the cryptographic tools provide protection between the ONU and the OLT. If the OLT is deployed remotely as may be the case with RPDs or RMDs, the backhaul link should also be secured in a similar manner.

The Reality – Security in Cable

The specifications and standards that outline how HFC and PON should be deployed provide good cryptography-based tools to authenticate network access and keep both network and subscriber information confidential. The security of the components of the architecture at the management layer may vary per operator. However, operators are very adept at securing both cable modems and ONUs. And, as our adversaries innovate new attacks, we work on incorporating new capabilities to address those attacks – cybersecurity innovation is a cultural necessity of security engineering!

Building on more than two decades of experience, CableLabs continues to advance the security features available in the DOCSIS specification, soon enabling new or updated HFC deployments to be even more secure and ready for 10G. The DOCSIS 4.0 specification has introduced several advanced security controls, including mutual authentication, perfect forward secrecy, and improved security for network credentials such as private keys. Given our strong interest in both optical and HFC network technologies, CableLabs will ensure its own specifications for PON architectures adopt these new security capabilities and will continue to work with other standards bodies to do the same.

Learn More About 10G Security

Security

EAP-CREDS: Enabling Policy-Oriented Credential Management in Access Networks

Massimiliano Pala
Principal Security Architect

Aug 20, 2020

In our ever-connected world, we want our devices and gadgets to be always available, regardless of where we are or which access network we're currently using. There's a wide variety of Internet of Things (IoT) devices out there, and although they differ in myriad ways – power, data collection capabilities, connectivity – we want them all to work seamlessly with our networks. Unfortunately, it can be quite difficult to enjoy our devices without worrying about getting them securely onto our networks (onboarding), providing network credentials (provisioning) and even managing them.

Ideally, the onboarding process should be secure, efficient and flexible enough to meet the needs of various use cases. Because IoT devices typically lack screens and keyboards, provisioning their credentials can be a cumbersome task: Some devices might be capable of using only a username and a password, whereas others might be able to use more complex credentials such as digital certificates or cryptographic tokens. For consumers, secure onboarding should be easy; for enterprises, the process should be automated and flexible so that large numbers of devices can quickly be provisioned with unique credentials.

Ideally, at the end of the process, devices should be provisioned with network- and device-specific credentials, directly managed by the network and unique to the device, so that a compromise impacts only that specific device on that specific network. In practice, the creation and installation of new credentials is often a very painful process, especially for devices in the lower segment of the market.

It’s Credentials Management, Not Just Onboarding 

After a device is successfully “registered” or “onboarded,” the piece that has so far been ignored is how to manage these credentials. Even when devices allow credentials to be configured, deployments tend to be “static” and credentials rarely get updated. There are two reasons for this: the first is the lack of security controls, typically on smaller devices, to set these credentials; the second, and more relevant, is that users rarely remember to update authentication settings. According to a recent article, even in corporate environments, “almost half (47%) of CIOs and IT managers have allowed IoT devices onto their corporate network without changing the default passwords,” even though another CISO survey has found that “... almost half (47%) of CISOs were worried about a potential breach due to their organization’s failure to secure IoT devices in the workplace.”

At CableLabs we look at the problem from many angles. In particular, we focus on how to provide network credentials management that (a) is flexible, (b) can enforce credentials policies across devices and (c) does not require additional discovery mechanisms.

EAP-CREDS: The Right Tool for the Specific Task 

The IEEE Port-Based Network Access Control standard (802.1X) provides the basis for access network architectures to allow entities (e.g., devices, applications) to authenticate to the network even before being granted connectivity. Specifically, the Extensible Authentication Protocol (EAP) provides a communication channel in which various authentication methods can be used to exchange different types of credentials. Once the communication between the client and the server has been secured via a mechanism such as EAP-TLS or EAP-TEAP, our work (EAP-CREDS) uses the “extensible” nature of EAP to add access-network credentials management.

EAP-CREDS implements three separate phases: initialization, provisioning and validation. In the initialization phase, the EAP server asks the device to list all credentials available on the device (for the current network only) and, if needed, initiates the provisioning phase during which a credentials provisioning or renewal protocol supported by both parties is executed.  After that phase is complete, the server may initiate the validation phase (to check that the credentials have been successfully received and installed on the device) or declare success and terminate the EAP session.
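The three phases can be pictured with the toy state machine below. This is only an illustration of the phase ordering described above, not the EAP-CREDS message format, and the names are hypothetical.

```python
from enum import Enum, auto
from typing import List

class Phase(Enum):
    INIT = auto()          # server asks the device to list credentials for this network
    PROVISIONING = auto()  # an encapsulated provisioning/renewal protocol (e.g., SPP) runs
    VALIDATION = auto()    # optional check that the new credentials were installed
    DONE = auto()

def eap_creds_session(needs_provisioning: bool, validate: bool) -> List[Phase]:
    """Return the ordered phases of one session (illustrative, not the wire protocol)."""
    phases = [Phase.INIT]
    if needs_provisioning:
        phases.append(Phase.PROVISIONING)
        if validate:
            phases.append(Phase.VALIDATION)
    phases.append(Phase.DONE)
    return phases

# Example: the device needs a credential renewal and the server validates before ending
print(eap_creds_session(needs_provisioning=True, validate=True))
```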

To keep the protocol simple, EAP-CREDS comes with specific requirements for its deployment:

  • EAP-CREDS cannot be used as a stand-alone method. It must be used as an inner method of a tunneling mechanism that provides secrecy (encryption), server-side authentication and, for devices that already have a set of valid credentials, client-side authentication.
  • EAP-CREDS doesn't mandate (or provide) a specific protocol for provisioning or managing the device credentials because it's meant only to provide EAP messages for encapsulating existing (standard or vendor-specific) protocols. In its first versions, however, EAP-CREDS also incorporated a Simple Provisioning Protocol (SPP) that supported username/password and X.509 certificate management (server-side driven). The SPP has since been extracted from the original EAP-CREDS proposal and will be standardized as a separate protocol.

 


Figure 2 - Two EAP-CREDS sessions. On the left (Figure 2a), the server provisions a server-generated password (4 msgs). On the right (Figure 2b), the client renews its certificate by generating a PKCS#10 request; the server replies with the newly issued certificate (6 msgs).

 

When these two requirements are met, EAP-CREDS can manage virtually any type of credentials supported by the device and the server. An example of early adoption of the EAP-CREDS mechanism can be found in Release 3 of the CBRS Alliance specifications, where EAP-CREDS is used to manage non-USIM credentials (e.g., username/password or X.509 certificates) for authenticating end-user devices (e.g., cell phones). Specifically, CBRS-A uses EAP-CREDS to transport the provisioning messages from the SPP to manage username/password combinations as well as X.509 certificates. The combination of EAP-CREDS and SPP provides an efficient way to manage network credentials.

SPP and EAP-CREDS: Flexibility and Efficiency 

To understand the specific types of messages implemented in EAP-CREDS and SPP, let's look at Figure 2a, which shows a typical exchange between an already registered IoT device and a business network.

In this case, after successfully authenticating both sides of the communication, the server initiates EAP-CREDS and uses SPP to deliver a new password. The total number of messages exchanged in this case is between four (when server-side generation is used) and six (when co-generation between client and server is used). Figure 2b shows the same use case for X.509 certificates, where co-generation is used.

One of the interesting characteristics of EAP-CREDS and SPP is their flexibility and their ability to easily accommodate solutions that, today, need to go through more complex processes (e.g., OSU registration). For instance, SPP can also be used to register existing credentials in two ways: by using an authorization token during the initialization phase (i.e., any kind of unique identifier, whether a signed token or a device certificate), or by having devices register their existing credentials (e.g., their device certificate) for network authentication.

Policy-Based Credentials Management 

As we’ve seen, EAP-CREDS delivers an automatic, policy-driven, cross-device credentials-management system and its use can improve the security of different types of access networks: industrial, business and home.

For business and industrial environments, EAP-CREDS provides a cross-vendor, standard way to automate credentials management for large numbers of devices (not just IoT), thus making sure that (a) no default credentials are used, (b) the credentials that are used are regularly updated and (c) credentials aren't shared with other (possibly less secure) home environments. For the home environment, EAP-CREDS makes it possible to ensure that the small IoT devices we're buying today aren't easily compromised because of weak and static credentials, and it provides a complementary tool (for 802.1x-enabled networks only) to consumer-oriented solutions like the Wi-Fi Alliance's DPP.

If you’re interested in further details about EAP-CREDS and credentials management, please feel free to contact us and start something new today!

Learn More About 10G

Home Network

6 Tips to Speed Up Your Home Network

Jun 5, 2020

Are Your Devices up to Speed? Here’s Why It Matters…

Nobody likes to wait for an internet page or movie to load, but did you know there are things you can do to improve the speed? Yes YOU. Even if you think you’re not tech savvy, you can easily check your devices to see if they’re slowing things down. If you like to send/receive email messages, browse the internet, download files, and stream movies without delay, this blog post is for you.

Turns Out, Your Computer’s Age Might Be Affecting Your Speed

Did you know that even if you have purchased 1 Gbps service from your cable provider, your computer may not be able to deliver that speed because of its age? You should check the age of your computer and other devices. They may not have the processing speed, gigabit card or compatibility with your service level to meet your expectations. Check your computer settings, the manual or even the manufacturer's website for the specifications for your model number to make sure your laptop has wireless 802.11ac and gigabit Ethernet cards.

Update Your Software and Drivers

Are your devices updated with the latest drivers and software? Go to the support page of your manufacturer’s website and check the “drivers” section. You may need to download the latest drivers which allow your device to properly communicate with your operating system. If you know your processor brand, you can also go to the Intel or AMD websites and use the driver auto detect / support assistant tool which will automatically let you know which drivers you need to download. Be sure to check for the latest drivers every 3 to 4 months.

Need a New Router?

Wi-Fi standards govern how different wireless devices are designed and how they communicate with each other. As these standards are improved, your devices may need to be upgraded to keep up with the improvements. Perhaps everything in your home except your wireless router can deliver gigabit service. Your network speed is dependent on your router specifications, so make sure that your wireless access point, router, or switch has 802.11ac wireless functionality and gigabit ports built in. If not, it might be time to buy a new one.

What’s the Best Location to Place Your Wi-Fi Router?

Where-oh-where is your wireless access point? It’s all about location, location, location. We created a brief video with suggestions for where to place your access point and tips to improve your home Wi-Fi performance.

Let’s Not Forget About Security

Security is imperative. You can learn how to secure your Wi-Fi router to protect your home network.

One last security tip: Your antivirus software will not work at its full capacity unless your computer is fully updated with the latest updates. To get your system updates in Microsoft Windows, go to Settings and then click on Windows Update. On a Mac, go to System Preferences from the Apple menu and click Software Update. (There is also a check box to automatically keep your Mac up to date.)


10G

10G Integrity: The DOCSIS® 4.0 Specification and Its New Authentication and Authorization Framework

Massimiliano Pala
Principal Security Architect

May 28, 2020

One of the pillars of the 10G platform is security. Simplicity, integrity, confidentiality and availability are all different aspects of cable's 10G security platform. In this work, we want to talk about the integrity (authentication) enhancements that have been developed for the next generation of DOCSIS® networks and how they update the security profile of cable broadband services.

DOCSIS (Data Over Cable Service Interface Specifications) defines how networks and devices are created to provide broadband for the cable industry and its customers. Specifically, DOCSIS comprises a set of technical documents that are at the core of cable broadband services. CableLabs, manufacturers for the cable industry and cable broadband operators continuously collaborate to improve the specifications' efficiency, reliability and security.

With regard to security, DOCSIS networks have pioneered the use of public key cryptography on a mass scale: the DOCSIS Public Key Infrastructures (PKIs) are among the largest PKIs in the world, with half a billion active certificates issued and actively used every day around the world.

In the following, we introduce a brief history of DOCSIS security, look into the limitations of the current authorization framework and then describe the security properties introduced with the new version of the authorization (and authentication) framework, which addresses those limitations.

A Journey Through DOCSIS Security

The DOCSIS protocol, which is used in cable networks to provide connectivity and services to users, has undergone a series of security-related updates in its latest version, DOCSIS 4.0, to help meet the 10G platform requirements.

In the first DOCSIS 1.0 specification, the radio frequency (RF) interface included three security specifications: Security System, Removable Security Module and Baseline Privacy Interface. Combined, the Security System plus the Removable Security Module Specification became Full Security (FS).

Soon after public key cryptography was adopted in the authorization process, the cable industry realized that a secure way to authenticate devices was needed; a DOCSIS PKI was established for DOCSIS 1.1-3.0 devices to provide cable modems with verifiable identities.

With the DOCSIS 3.0 specification, the major security feature was the ability to perform authentication and encryption earlier in the device registration process, thus providing protection for important configuration and setup data (e.g., the configuration file for the CM or the DHCP traffic) that was otherwise not protected. The new feature, called Early Authorization and Encryption (EAE), allows Baseline Privacy Interface Plus (BPI+) to start even before the device is provisioned with IP connectivity.

The DOCSIS 3.1 specifications created a new Public Key Infrastructure (PKI) to handle the authentication needs of the new class of devices. This new PKI introduced several improvements over the original PKI when it comes to cryptography: a newer set of algorithms and increased key sizes were the major changes over the legacy PKI. The same new PKI that is used today to secure DOCSIS 3.1 devices will also provide the certificates for the newer DOCSIS 4.0 devices.

The DOCSIS 4.0 version of the specification introduces, among the numerous innovations, an improved authentication framework (BPI Plus V2) that addresses the current limitations of BPI Plus and implements new security properties such as full algorithm agility, Perfect Forward Secrecy (PFS), Mutual Message Authentication (MMA or MA) and Downgrade Attacks Protection.

Baseline Privacy Plus V1 and Its Limitations

In DOCSIS 1.0-3.1 specifications, when Baseline Privacy Plus (BPI+ V1) is enabled, the CMTS directly authorizes a CM by providing it with an Authorization Key, which is then used to derive all the authorization and encryption key material. These secrets are then used to secure the communication between the CM and the CMTS. In this security model, the CMTS is assumed trusted and its identity is not validated.


Figure 1: BPI Plus Authorization Exchange

The design of BPI+ V1 dates back more than just a few years, and in that time the security and cryptography landscapes have drastically changed. When BPI+ was designed, the crypto community was set on the use of the RSA public key algorithm; today, the use of elliptic-curve cryptography and the ECDSA signing algorithm is predominant because of its efficiency, especially when RSA 3072-bit or larger keys would otherwise be required.

Another limitation of BPI+ V1 is the lack of authentication for the authorization messages. In particular, CMs and CMTS-es are not required to authenticate (i.e., sign) their own messages, making those messages vulnerable to unauthorized manipulation.

In recent years, there has been a lot of discussion around authentication and how to make sure that the compromise of long-term credentials (e.g., the private key associated with an X.509 certificate) does not expose all of that user's sessions in the clear (i.e., does not enable the decryption of all recorded sessions by breaking a single key). Because BPI+ V1 directly encrypts the Authorization Key with the RSA public key in the CM's device certificate, it does not support Perfect Forward Secrecy.

To address these issues, the cable industry worked on a new version of its authorization protocol, namely BPI Plus Version 2. With this update, a protection mechanism was also required to prevent downgrade attacks, in which an attacker forces the use of an older, and possibly weaker, version of the protocol. To address this possibility, the DOCSIS community introduced a Trust On First Use (TOFU) mechanism.

The New Baseline Privacy Plus V2

The DOCSIS 4.0 specification introduces a new version of the authentication framework, namely Baseline Privacy Plus Version 2, that addresses the limitations of BPI+ V1 by providing support for the identified new security needs. Following is a summary of the new security properties provided by BPI+ V2 and how they address the current limitations:

  • Message Authentication. BPI+ V2 Authorization messages are fully authenticated. For CMs, this means that they need to digitally sign their Authorization Request messages, thus eliminating the possibility for an attacker to substitute the CM certificate with another one. For CMTS-es, BPI+ V2 requires them to authenticate their own Authorization Reply messages; this change adds an explicit authentication step to the current authorization mechanism. While recognizing the need for deploying mutual message authentication, the DOCSIS 4.0 specification allows for a transition period during which devices are still allowed to use BPI+ V1. The main reason for this choice is related to the new requirements imposed on DOCSIS networks, which are now required to procure and renew their DOCSIS credentials when enabling BPI+ V2 (Mutual Authentication).
  • Perfect Forward Secrecy. Unlike BPI+ V1, the new authentication framework requires both parties to participate in the derivation of the Authorization Key from authenticated public parameters. In particular, the introduction of Message Authentication on both sides of the communication (i.e., the CM and the CMTS) enables BPI+ V2 to use the Elliptic-Curve Diffie-Hellman Ephemeral (ECDHE) algorithm instead of the CMTS directly generating and encrypting the key for the different CMs. Because of the authentication on the Authorization messages, the use of ECDHE is safe against MITM attacks (a minimal ECDHE sketch follows this list).
  • Algorithm Agility. As the advancement of classical and quantum computing puts incredible computational power at users' fingertips, it provides the same ever-increasing capabilities to malicious users. BPI+ V2 removes the protocol dependencies on specific public-key algorithms that are present in BPI+ V1. By introducing the standard CMS format for message authentication (i.e., signatures), combined with the use of ECDHE, the DOCSIS 4.0 security protocol effectively decouples the public key algorithm used in the X.509 certificates from the key exchange algorithm. This enables the use of new public key algorithms when needed for security or operational reasons.
  • Downgrade Attacks Protection. A new Trust On First Use (TOFU) mechanism is introduced to provide protection against downgrade attacks; although the principles behind TOFU mechanisms are not new, their use to protect against downgrade attacks is. The mechanism leverages the security parameters used during a first successful authorization as a baseline for future ones, unless indicated otherwise. By establishing the minimum required version of the authentication protocol, DOCSIS 4.0 cable modems actively prevent unauthorized use of a weaker version of the DOCSIS authentication framework (BPI+). During the transition period for the adoption of the new version of the protocol, cable operators can allow “planned” downgrades – for example, when a node split occurs or when faulty equipment is replaced and BPI+ V2 is not enabled there. In other words, a successfully validated CMTS can set, on the CM, the allowed minimum version (and other CM-CMTS binding parameters) to be used for subsequent authentications.
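The sketch below illustrates the Perfect Forward Secrecy idea referenced above with a generic ECDHE exchange, written with the third-party Python cryptography package. It is not the BPI+ V2 key schedule; the curve choice, labels and HKDF step are assumptions made only to show that each side contributes an ephemeral key, so a later compromise of a long-term certificate key does not reveal past Authorization Keys.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral key pair for this authorization exchange only.
cm_ephemeral = ec.generate_private_key(ec.SECP256R1())     # cable modem side
cmts_ephemeral = ec.generate_private_key(ec.SECP256R1())   # CMTS side

# Both sides compute the same shared secret from their own private key and the
# peer's public key; the long-term certificate keys are never used for this.
cm_secret = cm_ephemeral.exchange(ec.ECDH(), cmts_ephemeral.public_key())
cmts_secret = cmts_ephemeral.exchange(ec.ECDH(), cm_ephemeral.public_key())
assert cm_secret == cmts_secret

# Derive key material from the shared secret (label and length are made up).
auth_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"illustrative authorization key").derive(cm_secret)
```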

Future Work

In this work, we provided a short history of DOCSIS security and reviewed the limitations of the current authorization framework. As CMTS functionality moves into the untrusted domain, these limitations could translate into security threats, especially in new distributed architectures like Remote PHY. The proposed changes to DOCSIS 4.0, although in their final stage of approval, are still being addressed in the Security Working Group.

Member organizations and DOCSIS equipment vendors are always encouraged to participate in our DOCSIS working groups – if you qualify, please contact us and participate in our weekly DOCSIS 4.0 security meeting where these, and other security-related topics, are addressed.

If you are interested in discovering even more details about DOCSIS 4.0 and 10G security, please feel free to contact us to receive more in-depth and updated information.

Learn More About 10G

Security

False Base Station or IMSI Catcher: What You Need to Know

Tao Wan
Principal Architect, Security

Oct 23, 2019

You might have heard of False Base Station (FBS), Rogue Base Station (RBS), International Mobile Subscriber Identifier (IMSI) Catcher or Stingray. All four of these terms refer to a tool consisting of hardware and software that allows for passive and active attacks against mobile subscribers over radio access networks (RANs). The attacking tool (referred to as FBS hereafter) exploits security weaknesses in mobile networks from 2G (second generation) to 3G, 4G and 5G. (Certain improvements have been made in 5G, which I'll discuss later.)

In mobile networks of all generations, cellular base stations periodically broadcast information about the network. Mobile devices or user equipment (UE) listen to these broadcasting messages, select an appropriate cellular cell and connect to the cell and the mobile network. Because of practical challenges, broadcasting messages aren’t protected for confidentiality, authenticity or integrity. As a result, broadcasting messages are subject to spoofing or tampering. Some unicasting messages aren’t protected either, also allowing for spoofing. The lack of security protection of mobile broadcasting messages and certain unicasting messages makes FBS possible.

An FBS can take various forms, such as a single integrated device or multiple separate components. In the latter form [1], an FBS usually consists of a wireless transceiver, a laptop and a cellphone. The wireless transceiver broadcasts radio signals to impersonate legitimate base stations. The laptop connects to the transceiver (e.g., via a USB interface) and controls what to broadcast as well as the strength of the broadcast signal. The cellphone is often used to capture broadcast messages from legitimate base stations and feed them into the laptop to simplify the configuration of the transceiver. In either form, an FBS can be made compact with a small footprint, allowing it to be left in a location unnoticed (e.g., mounted on a street pole) or carried conveniently (e.g., inside a backpack).

An FBS often broadcasts the same network identifier as a legitimate network but with a stronger signal to lure users away. How much stronger does an FBS's signal need to be to succeed? The answer to that question hasn't been well understood until recently. According to the experiments in the study [2], an FBS's signal must be more than 30 dB stronger than the legitimate signal to have any success. When the signal is 35 dB stronger, the success rate is about 80 percent. When it's 40 dB stronger, the success rate increases to 100 percent. In these experiments, the FBS broadcasts the same messages on the same frequency and band as the legitimate cell. Another strategy taken by an FBS is to broadcast the same network identifier but with a different tracking area code, tricking the UE into believing that it has entered a new tracking area so that it switches to the FBS. This strategy can make it easier to lure the UE to the FBS and should help reduce the signal strength required for the FBS to be successful. However, the exact signal strength requirement in this case wasn't measured in the experiments.

Once camped at an FBS, a UE is subject to both passive and active attacks. In passive attacks, an adversary only listens to radio signals from both the UE and legitimate base stations without interfering with the communication (e.g., with signal injection). Consequences from passive attacks include—but are not limited to—identity theft and location tracking. In addition, eavesdropping often forms a stepping stone toward active attacks, in which an adversary also injects signals. An active attacker can be a man-in-the-middle (MITM) or man-on-the-side (MOTS) attacker.

In MITM attacks, the attacker is on the path of the communication between a UE and another entity and can do pretty much anything to the communication, such as reading, injecting, modifying and deleting messages. One such attack is to downgrade a UE to 2G with weak or null ciphers to allow for eavesdropping. Another example of an MITM attack is aLTEr [3], which only tampers with DNS requests in LTE networks, without any downgrading or tampering of control messages. Although user plane data is encrypted in LTE, it’s still subject to tampering if the encryption (e.g., AES counter mode) is malleable due to the lack of integrity protection.

In MOTS attacks, an attacker doesn’t have the same amount of control over communication as with an MITM attack. More often, the attacker injects messages to obtain information from the UE (e.g., stealing the IMSI by an identity request), send malicious messages to the UE (e.g., phishing SMS) or hijack services from a victim UE (e.g., answering a call on behalf of the UE [4]). A MOTS attacker, without luring a UE to connect to it, can still interfere with existing communication—for example, by injecting slightly stronger signals that are well timed to overwrite a selected part of a legitimate message [2].

FBS has been a security threat to all generations of mobile networks since 2G. Mitigations for FBS were studied by 3GPP in the past, without success, because of practical constraints such as deployment challenges in cryptographic key management and difficulty in timing synchronization. In 5G release 15 [5], network-side detection of FBS is specified, which can help mitigate the risk although it does not prevent FBS. 5G release 15 also introduces public key encryption of the subscription permanent identifier (SUPI) before it is sent from the UE, which, if implemented, makes it difficult for an FBS to steal the SUPI. In 5G release 16 [6], FBS is being studied again. Various solutions have been proposed, including integrity protection of broadcasting, paging and unicasting messages. Other detection approaches have also been proposed.

Our view is that FBS arises mainly from the lack of integrity protection of broadcast messages. Thus, a fundamental solution is to protect broadcast messages with integrity (e.g., using public-key-based digital signatures). Although challenges remain with such a solution, we don't believe those challenges are insurmountable. Other solutions are based on the signatures of known attacks, which may help but can eventually be bypassed as attackers evolve their techniques and behaviors. We look forward to agreement from 3GPP SA3 on a long-term solution that can fundamentally solve the problem of FBS in 5G.
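As a toy illustration of what signing broadcast messages could look like (not a 3GPP proposal; key distribution, replay protection and message formats are all omitted, and Ed25519 is only a convenient stand-in for whatever signature scheme would actually be standardized):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

network_key = ed25519.Ed25519PrivateKey.generate()        # held by the operator
broadcast = b"plmn=310-410;tracking-area=42;cell-id=1234" # made-up broadcast payload
signature = network_key.sign(broadcast)

# UE side: verify with the operator's public key before acting on the broadcast.
operator_public_key = network_key.public_key()
try:
    operator_public_key.verify(signature, broadcast)
    trusted = True
except InvalidSignature:
    trusted = False
print(trusted)
```

A false base station without the operator's private key could still transmit, but its unsigned (or wrongly signed) broadcasts would be rejected by the UE.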

To learn more about 5G in the future, subscribe to our blog.



References

[1] Li, Zhenhua, Weiwei Wang, Christo Wilson, Jian Chen, Chen Qian, Taeho Jung, Lan Zhang, Kebin Liu, Xiangyang Li, and Yunhao Liu. “FBS-Radar: Uncovering Fake Base Stations at Scale in the Wild.” In Proceedings of ISOC Symposium on Network and Distributed Systems Security (NDSS), February 2017.

[2] Hojoon Yang, Sangwook Bae, Mincheol Son, Hongil Kim, Song Min Kim, and Yongdae Kim. “Hiding in Plain Signal: Physical Signal Overshadowing Attack on LTE.” In Proceedings of 28th USENIX Security Symposium (USENIX Security), August 2019.

[3] Rupprecht D, Kohls K, Holz T, and Popper C. “Breaking LTE on Layer Two.” In Proceedings of IEEE Symposium on Security & Privacy (S&P), May 2019.

[4] Golde N, Redon K, and Seifert JP. “Let Me Answer That for You: Exploiting Broadcast Information in Cellular Networks.” In Proceedings of the 22nd USENIX Security Symposium (USENIX Security), August 2013.

[5] 3GPP TS 33.501, “Security Architecture and Procedures for 5G System” (Release 15), v15.5.0, June 2019.

[6] 3GPP TR 33.809, “Study on 5G Security Enhancement against False Base Stations” (Release 16), v0.5.0, June 2019.

Security

Revisiting Security Fundamentals

Steve Goeringer
Distinguished Technologist, Security

Oct 17, 2019

It’s Cybersecurity Awareness Month—time to study up!

Cybersecurity is a complex topic. The engineers who address cybersecurity must not only be security experts; they must also be experts in the technologies they secure. In addition, they have to understand the ways that the technologies they support and use might be vulnerable and open to attack.

Another layer of complexity is that technology is always evolving. In parallel with that evolution, our adversaries are continuously advancing their attack methods and techniques. How do we stay on top of that? We must be masters of security fundamentals. We need to be able to start with foundational principles and extend our security tools, techniques and methods from there: Make things no more complex than necessary to ensure safe and secure user experiences.

In celebration of Cybersecurity Awareness Month, I’d like to devote a series of blog posts to address some basics about security and to provide a fresh perspective on why these concepts remain important areas of focus for cybersecurity.

Three Goals

At the most basic level, the three primary goals of security for cable and wireless networks are to ensure the confidentiality, integrity and availability of services. NIST documented these concepts well in its special publication, “An Introduction to Information Security.”

  • Confidentiality ensures that only authorized users and systems can access a given resource (e.g., network interface, data file, processor). This is a pretty easy concept to understand: The most well-known confidentiality approach is encryption.
  • Integrity, which is a little more obscure, guards against unauthorized changes to data and systems. It also includes the idea of non-repudiation, which means that the source of a given message (or packet) is known and cannot be denied by that source.
  • Availability is the uncelebrated element of the security triad. It’s often forgotten until failures in service availability are recognized as being “a real problem.” This is unfortunate because engineering to ensure availability is very mature.

In Part 1 of this series, I want to focus on confidentiality. I’ll discuss integrity and availability in two subsequent blogs.

As I mentioned, confidentiality is a security function that most people are aware of. Encryption is the most frequently used method to assure confidentiality. I'm not going to go into a primer about encryption. However, it is worth talking about the principles. Encryption is about applying math, using space, power and time, to ensure that only parties with the right secret (usually a key) can read certain data. Ideally, the math used should require much greater space, power or time for an unauthorized party without the right secret to read that data. Why does this matter? Because encryption provides confidentiality only as long as the math used is sound and the corresponding amount of space, power and time for adversaries to read the data is impractical. That is often a good assumption, but history has shown that over time, a given encryption solution will eventually become insecure. So, it's a good idea to apply other approaches to provide confidentiality as well.

What are some of those approaches? Ultimately, the other solutions prevent access to the data being protected. The notion is that if you prevent access (either physically or logically) to the data being protected, then it can’t be decrypted by unauthorized parties. Solutions in this area fall primarily into two strategies: access controls and separation.

Access controls validate that requests to access data or use a resource (like a network) come from authorized sources (identified using network addresses and other credentials). For example, an access control list (ACL) is used in networks to restrict resource access to specific IP or MAC addresses. As another example, a cryptographic challenge and response (often enabled by public key cryptography) might be used to ensure that the requesting entity has the “right credentials” to access data or a resource. One method we all use every day is passwords. Every time we “log on” to something, like a bank account, we present our username (identification) and our (hopefully) secret password.
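As a toy example of the access-control idea (the addresses and password are made up, and real ACLs are enforced in network equipment with cryptographic challenge/response rather than application code like this):

```python
# Allow a request only if the source address is on the ACL and the presented
# secret matches. This stands in for address-based filtering plus credentials.
ALLOWED_SOURCES = {"10.0.0.5", "10.0.0.17"}

def is_authorized(source_address: str, presented_secret: str, expected_secret: str) -> bool:
    on_acl = source_address in ALLOWED_SOURCES           # identification by address
    secret_ok = presented_secret == expected_secret      # stands in for a credential check
    return on_acl and secret_ok

print(is_authorized("10.0.0.5", "s3cret", "s3cret"))   # True: on the ACL, right secret
print(is_authorized("192.0.2.9", "s3cret", "s3cret"))  # False: address not on the ACL
```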

Separation is another approach to confidentiality. One extreme example of separation is to establish a completely separate network architecture for conveying and storing confidential information. The government often uses this tactic, but even large enterprises use it with “private line networks.” Something less extreme is to use some form of identification or tagging to encapsulate packets or frames so that only authorized endpoints can receive traffic. This is achieved in Ethernet by using virtual LANs (VLANs). Each frame is tagged by the endpoint or the switch to which it connects with a VLAN tag, and only endpoints in the same VLAN can receive traffic from that source endpoint. Solutions at higher network layers include IP Virtual Private Networks (VPNs) or, sometimes, Multiprotocol Label Switching (MPLS).

Threats to Confidentiality

What are the threats to confidentiality? I’ve already hinted that encryption isn’t perfect. The math on which a given encryption approach is based can sometimes be flawed. This type of flaw can be discovered decades after the original math was developed. That’s why it’s traditionally important to use cipher suites approved by appropriate government organizations such as NIST or ENISA. These organizations work with researchers to develop, select, test and validate given cryptographic algorithms as being provably sound.

However, even when an algorithm is sound, the way it’s implemented in code or hardware may have systemic errors. For example, most encryption approaches require the use of random number generators to execute certain functions. If a given code library for encryption uses a random number generator that’s biased in some way (less than truly random), the space, power and time necessary to achieve unauthorized access to encrypted data may be much less than intended.

One threat considered imminent to current cryptography methods is quantum computing. Quantum computers enable new algorithms that reduce the power, space and time necessary to solve certain specific problems, compared with what traditional computers required. For cryptography, two such algorithms are Grover’s and Shor’s.

Grover's algorithm. Grover's quantum algorithm addresses the length of time (number of computations) necessary to do an unstructured search. Roughly speaking, it reduces a brute-force key search to about the square root of the classical number of guesses, which is equivalent to halving the effective key length in bits. Given current commonly used encryption algorithms, which may provide confidentiality against two decades' worth of traditional cryptanalysis, Grover's algorithm is only a moderate threat—until you consider that systemic weaknesses in some implementations of those encryption algorithms may result in less than ideal security.

Shor's algorithm. Shor's quantum algorithm is a more serious threat, specifically to asymmetric cryptography. Current asymmetric cryptography relies on mathematics that assumes it's hard to factor integers into primes (as used by the Rivest-Shamir-Adleman algorithm) or to find given numbers in a mathematical function or field (as used in elliptic curve cryptography). Shor's quantum algorithm makes very quick work of these problems; in fact, it may be possible to solve them nearly instantly given a sufficiently large quantum computer able to execute the algorithm.

It's important to understand the relationship between confidentiality and privacy. They aren't the same. Confidentiality protects the content of a communication or data from unauthorized access, whereas privacy extends beyond the technical controls that protect confidentiality to the business practices governing how personal data is used. Moreover, in practice, a security infrastructure may require some data to be encrypted while in motion across a network but not while at rest on a server. Also, while confidentiality, in a security context, is a fairly straightforward technical topic, privacy is about rights, obligations and expectations related to the use of personal data.

Why do I bring it up here? Because a breach of confidentiality may also be a breach of privacy. And because application of confidentiality tools alone does not satisfy privacy requirements in many situations. Security engineers – adversarial engineers – need to keep these things in mind and remember that today privacy violations result in real costs in fines and brand damage to our companies.

Wow! Going through all that was a bit more involved than I intended – let's finish this blog. Cable and wireless networks have implemented many confidentiality solutions. Wi-Fi, LTE and DOCSIS technology all use encryption to ensure confidentiality on the shared media they use to transport packets. The cipher DOCSIS technology typically uses is AES-128, which has stood the test of time. We can anticipate future advances. One is a NIST initiative to select a new lightweight cipher – something that uses fewer processing resources than AES. This is a big deal. For just a slight reduction in security (measured using a somewhat obscure metric called “security bits”), some of the candidates being considered by NIST may use half the power or space compared with AES-128. That may translate to lower cost and higher reliability for endpoints that use the new ciphers.

Another area the cable industry, including CableLabs, continues to track is quantum-resistant cryptography. There are two approaches here. One is to use quantum technologies (to generate keys or transmit data) that may be inherently secure against quantum-computer-based cryptanalysis. Another approach is to use quantum-resistant algorithms (e.g., new math that is resistant to cryptanalysis using Shor's and Grover's algorithms) implemented on traditional computing platforms. Both approaches are showing great promise.

There’s a quick review of confidentiality. Next up? Integrity.

Want to learn more about cybersecurity? Register for our upcoming webinar: Links in the Chain: CableLabs' Primer on What's Happening in Blockchain. Block your calendars. Chain yourselves to your computers. You will not want to miss this webinar on the state of Blockchain and Distributed Ledger Technology as it relates to the Cable and Telecommunications industry.

Register NOW

Comments
Policy

Driving Increased Security in All IoT Devices

Mark Walker
Director, Technology Policy

Sep 18, 2019

CableLabs engages with the IoT industry and the broader stakeholder community, including governments, to help drive increased IoT device security.  The rapid proliferation of IoT devices has the potential to transform and enrich our lives and to drive significant productivity gains in the broader economy. However, the lack of sufficient security in a meaningful number of these newly connected devices creates significant risk to consumers and to the basic functionality of the Internet. Insecure IoT devices often serve as building blocks for botnets and other distributed threats that in turn perform DDoS attacks, steal personal and sensitive data, send spam, propagate ransomware, and more generally, provide the attacker access to the compromised devices and their connections.   

To help address the challenge of insecure IoT, CableLabs, along with 19 other industry organizations, came together to develop “The C2 Consensus on IoT Device Security Baseline Capabilities,” released earlier this week. The broad industry consensus identifies cybersecurity baseline capabilities that all new IoT devices should have, as well as additional capabilities that should be phased in over time. The development kicked off in March with a workshop hosted by the Consumer Technology Association (CTA). Over the past months, the group has coalesced around the identified cybersecurity capabilities. These include capabilities in the areas of device identity, secured access, data protection and patchability, among others.

CableLabs has also engaged with the National Institute of Standards and Technology (NIST) as it developed its recently released draft report, “Core Cybersecurity Feature Baseline for Securable IoT Devices: A Starting Point for IoT Device Manufacturers.” Both industry and governments largely agree on the capabilities that must be included to increase device security. Like the C2 Consensus, NIST focuses on foundational cybersecurity capabilities, including device identity, secure access, patchability of firmware and software, protection of device configuration and device data, and cybersecurity event logging.

The cybersecurity capabilities identified in the C2 Consensus and by NIST will help prevent and minimize the potential for exploitation of IoT devices. Both documents provide a strong foundation and help point IoT manufacturers in the right direction on how to increase device security. However, cybersecurity is an ongoing journey, not a destination. Security practices must evolve and continue to improve to address new and emerging threats and changes in technology. This foundation must continue to be built on over time.

CableLabs has long been a leader in the development of security technologies. For decades, CableLabs has helped guide the cable industry in incorporating many of the identified security capabilities into cable devices and has ensured the maintenance and advancement of these capabilities over time. For instance, since the first DOCSIS specification in 1997, CableLabs has helped ensure the protection of data: All traffic flows between each cable modem and the CMTS are encrypted to protect the confidentiality and integrity of those transmissions. This is not a once-and-done process; CableLabs has advanced, and must continue to advance, the cryptography used in cable devices to protect against new and more powerful brute-force attacks and other potential threats. Similarly, nearly 20 years ago, CableLabs adopted PKI-based digital certificates to support strong device identity and authentication for devices connecting directly to the cable network (e.g., cable modems, Internet gateways, set-top boxes). Since the initial implementation, CableLabs has continued to advance its PKI implementation to address new and emerging threats.
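
As a simplified, generic illustration of how PKI-based device identity works in general (the file names are hypothetical, and this is not the actual DOCSIS PKI tooling), a device certificate can be checked against its issuing CA with the Python cryptography library:

    # pip install cryptography
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import padding

    # Hypothetical PEM files standing in for a CA and a device (e.g., modem) certificate.
    ca_cert = x509.load_pem_x509_certificate(open("ca.pem", "rb").read())
    device_cert = x509.load_pem_x509_certificate(open("device.pem", "rb").read())

    # Verify the device certificate's signature with the CA's public key;
    # this raises InvalidSignature if the certificate was not issued by the CA.
    ca_cert.public_key().verify(
        device_cert.signature,
        device_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),                     # assumes an RSA-signed certificate
        device_cert.signature_hash_algorithm,
    )
    print("Device identity verified:", device_cert.subject.rfc4514_string())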

CableLabs has leveraged its experience and success in developing and implementing cybersecurity technologies in cable devices to help drive increased security in IoT devices. The underlying fundamentals, as well as many of the approaches to implementation, are transferable to IoT, as detailed in our white paper, “A Vision for Secure IoT.” We’ve engaged not only with the C2 Consensus and NIST’s IoT security efforts but also with industry specification organizations, specifically the Open Connectivity Foundation (OCF), to develop secure interoperability for IoT devices. OCF has implemented nearly all of the identified capabilities in its specification, tests for the capabilities in its certification regime, and provides the capabilities, free of charge, in its open source reference implementation, IoTivity.

Since publishing “A Vision for Secure IoT” in the summer of 2017, industry and the broader stakeholder community, including governments, have come to recognize and have begun to address the challenge of insecure IoT.

Download the C2 Document

Comments
Wireless

Moving Beyond Cloud Computing to Edge Computing

Omkar Dharmadhikari
Wireless Architect

May 1, 2019

In the era of cloud computing, the predecessor of edge computing, we’re immersed in social networking sites, online content and other online services that give us access to data from anywhere at any time. However, next-generation applications focused on machine-to-machine interaction, built around concepts like the internet of things (IoT), machine learning and artificial intelligence (AI), will shift the focus to “edge computing,” which, in many ways, is the anti-cloud.

Edge computing is where we bring the power of cloud computing closer to the customer premises at the network edge to compute, analyze and make decisions in real time. The goal of moving closer to the network edge—that is, within miles of the customer premises—is to boost the performance of the network, enhance the reliability of services and reduce the cost of moving data computation to distant servers, thereby mitigating bandwidth and latency issues.

The Need for Edge Computing

The growth of the wireless industry and new technology implementations over the past two decades have driven a rapid migration from on-premises data centers to cloud servers. However, with the increasing number of Industrial Internet of Things (IIoT) applications and devices, performing computation at either data centers or cloud servers may not be an efficient approach. Cloud computing requires significant bandwidth to move data from the customer premises to the cloud and back, further increasing latency. With the stringent latency requirements of IIoT applications and devices that need real-time computation, the computing capability needs to be at the edge, closer to the source of data generation.

What Is Edge Computing?

The word “edge” refers to the geographic distribution of network resources. Edge computation makes it possible to perform data computation close to the data source instead of sending traffic over multiple hops to a cloud network that performs the computing and relays the results back. Does this mean we don’t need the cloud network anymore? No, but it means that instead of data traversing the cloud, the cloud is now closer to the source generating the data.

Edge computing refers to sensing, collecting and analyzing data at the source of its generation, not necessarily in a centralized computing environment such as a data center. It uses digital devices, often placed at different locations, to transmit data in real time or later to a central data repository, and it treats distributed infrastructure as a shared resource.

Edge computing is an emerging technology that will play an important role in pushing the frontier of data computation to the logical extremes of a network.
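
To make the pattern concrete, here is a minimal, hypothetical sketch of edge-side processing: raw sensor readings are filtered and aggregated locally, and only a compact summary travels upstream (the names and thresholds are illustrative, not drawn from any specification):

    import statistics

    # Runs on the edge node: filter and aggregate locally instead of streaming
    # every raw sample to a distant cloud server.
    def process_at_edge(raw_readings: list[float]) -> dict:
        valid = [r for r in raw_readings if 0.0 <= r <= 150.0]  # drop sensor glitches
        return {
            "count": len(valid),
            "mean": round(statistics.mean(valid), 2),
            "max": max(valid),
        }

    def send_to_cloud(summary: dict) -> None:
        # Placeholder for the (much smaller) upstream transfer.
        print("uploading summary:", summary)

    # One local batch: many raw samples reduced to a few numbers before upload.
    readings = [72.1, 73.4, 999.0, 71.8, 74.2]   # 999.0 simulates a bad sample
    send_to_cloud(process_at_edge(readings))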

Key Drivers of Edge Computing:

  • Plummeting cost of computing elements
  • Smart and intelligent computing abilities in IIoT devices
  • A rise in the number of IIoT devices and ever-growing demand for data
  • Technology enhancements with machine learning, artificial intelligence and analytics

Benefits of Edge Computing

Computational speed and real-time delivery are the most important features of edge computing, allowing data to be processed at the edge of the network. The benefits of edge computing manifest in these areas:

  • Latency

Moving data computing to the edge reduces latency. Without edge computing, when data must be computed at a server located far from the customer premises, latency varies depending on available bandwidth and server location. With edge computing, data does not have to traverse a network to a distant server or cloud for processing, which is ideal for situations where even milliseconds of latency can be untenable. With data computing performed at the network edge, the messaging between the distant server and edge devices is reduced, decreasing the delay in processing the data.

  • Bandwidth

Pushing processing to edge devices, instead of streaming data to the cloud for processing, decreases the need for high bandwidth while improving response times. Bandwidth is a key and scarce resource, so reducing the load that high-bandwidth traffic places on the network helps improve spectrum utilization.

  • Security

From one perspective, edge computing provides better security because data does not traverse a network, instead staying close to the edge devices where it is generated. The less data that is computed at distant servers or in cloud environments, the smaller the exposure. From another perspective, edge computing is less secure because the edge devices themselves can be vulnerable, putting the onus on operators to provide strong security on those devices.

What Is Multi-Access Edge Computing (MEC)?

MEC enables cloud computing at the edge of the cellular network with ultra-low latency. It allows running applications and processing data traffic closer to the cellular customer, reducing latency and network congestion. Computing data closer to the edge of the cellular network enables real-time analysis for providing time-sensitive response—essential across many industry sectors, including health care, telecommunications, finance and so on. Implementing distributed architectures and moving user plane traffic closer to the edge by supporting MEC use cases is an integral part of the 5G evolution.

Edge Computing Standardization

Various groups in the open source and standardization ecosystem are actively looking into ways to ensure interoperability and smooth integration of edge computing elements. These groups include:

  • The Edge Computing Group
  • CableLabs SNAPS programs, including SNAPS-Kubernetes and SNAPS-OpenStack
  • OpenStack’s StarlingX
  • Linux Foundation Networking’s OPNFV, ONAP
  • Cloud Native Compute Foundation’s Kubernetes
  • Linux Foundation’s Edge Organization

How Can Edge Computing Benefit Operators?

  • Dynamic, real-time and fast data computing closer to edge devices
  • Cost reduction with fewer cloud computational servers
  • Spectral efficiency with lower latency
  • Faster traffic delivery with increased quality of experience (QoE)

Conclusion

The adoption of edge computing has been rapid as IIoT applications and devices multiply, thanks to its myriad benefits in terms of latency, bandwidth and security. Although it’s ideal for IIoT, edge computing can help any application that benefits from lower latency and more efficient network utilization by reducing the load the network would otherwise carry moving data back and forth.

Evolving wireless technology has enabled organizations to perform faster and more accurate data computing at the edge. Edge computing benefits wireless operators by enabling faster decision making and lowering costs, without the need for data to traverse the cloud network. It lets wireless operators place computing power and storage capabilities directly at the edge of the network. As 5G evolves and we move toward a connected ecosystem, wireless operators are challenged to keep 4G running while rolling out 5G enhancements such as edge computing, NFV and SDN. The success of edge computing cannot be predicted (the technology is still in its infancy), but its benefits might provide wireless operators with a critical competitive advantage in the future.

How Can CableLabs Help?

CableLabs is a leading contributor to the European Telecommunications Standards Institute NFV Industry Specification Group (ETSI NFV ISG). Our SNAPS™ program is part of Open Platform for NFV (OPNFV). We wrote an OpenStack API abstraction library, “SNAPS-OO,” and contributed it to the OPNFV project at the Linux Foundation; it leverages object-oriented software development practices to automate and validate applications on OpenStack. We also added Kubernetes support with SNAPS-Kubernetes, introducing a Kubernetes stack to provide CableLabs members with open source software platforms. SNAPS-Kubernetes is a certified CNCF Kubernetes installer targeted at lightweight edge platforms and built to scale, with the ability to efficiently manage failovers and software updates. It is optimized and tailored to address the needs of the cable industry and general edge platforms. Edge computing on Kubernetes is emerging as a powerful way to share, distribute and manage data on a massive scale in ways that cloud or on-premises deployments cannot necessarily provide.
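
As a generic illustration (not the SNAPS-Kubernetes tooling itself), here is a minimal sketch that uses the official Kubernetes Python client to inspect the nodes of a hypothetical edge cluster, assuming a kubeconfig is already in place:

    # pip install kubernetes
    from kubernetes import client, config

    # Load credentials for the (hypothetical) edge cluster from ~/.kube/config.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # List the nodes that make up the edge cluster and report their readiness.
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(f"{node.metadata.name}: Ready={ready}")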


SUBSCRIBE TO OUR BLOG

Comments