
A Proposal for a Long-Term Post-Quantum Transitioning Strategy for the Broadband Industry via Composite Crypto and PQPs

Massimiliano Pala
Principal Security Architect

Oct 22, 2020

The broadband industry has historically relied on public-key cryptography to provide secure and strong authentication across access networks and devices. In our environment, one of the most challenging issues when it comes to cryptography is supporting devices with different capabilities. Some of these devices may not be fully (or even partially) upgradeable. This can be due to software limitations (e.g., firmware or applications cannot be securely updated) or hardware limitations (e.g., missing crypto accelerators or secure elements).

A Heterogeneous Ecosystem

When researching our transitioning strategy, we realized that—especially for constrained devices—the only option at our disposal was the use of pre-shared keys (PSKs) to allow for post-quantum safe authentications for the various identified use cases.

In a nutshell, our proposal combines the use of composite crypto, post-quantum algorithms and timely distributed PSKs to accommodate the coexistence of our three main use cases: post-quantum capable devices, post-quantum validation capable devices and classic-only devices. In addition to classifying the various types of devices based on the crypto capabilities they bring to the transition, we also looked at the use of composite crypto for the next-generation DOCSIS® PKI to deliver multi-algorithm support for the entire ecosystem: the Elliptic Curve Digital Signature Algorithm (ECDSA) as a more efficient alternative to the current RSA algorithm, and a post-quantum algorithm (PQA) for providing long-term quantum-safe authentications. The result is a long-term transitioning strategy for secure authentications in our heterogeneous ecosystem, in which new and old must coexist for a long time.

Three Classes of Devices

The history of broadband networks teaches us that we should expect devices that are deployed in DOCSIS® networks to be very long-lived (i.e., 20 or more years). This translates into the added requirement—for our environment—to identify strategies that allow for the different classes of devices to still perform secure authentications under the quantum threat. To better understand what is needed to protect the different types of devices, we classified them into three distinct categories based on their long-term cryptographic capabilities.

Classic-Only Devices. This class of devices does not provide any crypto-upgrade capability beyond supporting the composite crypto construct. For this class of devices, we envision deploying post-quantum PSKs (PQPs) that are left dormant until quantum-safe protection is needed for the public-key algorithm.

Figure: PSK Protection

Specifically, while the identity is still provided via classic signatures and the associated certificate chains, protection against the quantum threat is provided via the pre-deployed PSKs. Various techniques have been identified to make sure these keys are randomly combined and updated while the device is attached to the network, so that an attacker would need the full history of the device's traffic to recover the PSKs. This solution can be deployed today for cable modems and other fielded devices.
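
To make the idea concrete, here is a minimal sketch of one such technique, assuming a simple hash-ratchet construction; the function names and parameters are illustrative, not the actual DOCSIS mechanism:

```python
import hashlib
import os

def ratchet_psk(current_psk: bytes, session_nonce: bytes) -> bytes:
    """Derive the next PSK by hashing the current PSK together with
    fresh session randomness; the hash cannot be reversed, so an
    attacker who misses even one exchanged nonce loses the chain."""
    return hashlib.sha256(current_psk + session_nonce).digest()

# Device and head-end both start from the same pre-deployed PQP
# (simulated here with random bytes).
device_psk = head_end_psk = os.urandom(32)

# Each time the device is attached to the network, both sides mix in
# the same exchanged nonce and step the key forward in lockstep.
for _ in range(3):
    nonce = os.urandom(16)          # exchanged over the attached link
    device_psk = ratchet_psk(device_psk, nonce)
    head_end_psk = ratchet_psk(head_end_psk, nonce)

assert device_psk == head_end_psk   # both ends now hold the same fresh PSK
```

An eavesdropper who records only part of the traffic history cannot reproduce the chain of nonces and therefore cannot reconstruct the current PSK.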

Quantum-Validation Capable Devices. This type of device cannot upgrade its secure storage or its private-key algorithms, but its crypto libraries can be updated to support selected PQAs and quantum-safe key encapsulation mechanisms (KEMs). Devices with certificates issued under the original RSA infrastructure must still use the deployed PSKs to protect the full authentication chain, whereas devices whose credentials are issued under the new PKI need only protect the link between the signature and the device certificate. For these devices, PSKs can be transferred securely via quantum-resistant KEMs.

Quantum Capable Devices. These devices will have full PQA support (both for authentication and validation) and might support classic algorithms for validation. The use of composite crypto allows the same entities to be validated across the quantum-threat hump, especially on the access network side. To validate classic-only devices, the use of Kerberos can address the distribution of pairwise symmetric PSKs for authentication and encryption.

Composite Crypto Solves a Fundamental Problem

In our proposal for a post-quantum transitioning strategy for the broadband industry, we identified the use of composite crypto and PQPs as the two necessary building blocks for enabling secure authentication for all PKI data (from digital certificates to revocation information).

When composite crypto and PQPs are deployed together, the proposed architecture allows for secure authentication across different classes of devices (i.e., post-quantum and classic); lowers the cost of transitioning to quantum-safe algorithms by requiring support for a single infrastructure (also required for indirect authentication data like “stapled” OCSP responses); extends the lifetime of classic devices across the quantum hump; and does not require protocol changes (even proprietary ones), as a two-certificate solution would.

Ultimately, the use of composite crypto efficiently solves the fundamental problem of associating both classic and quantum-safe algorithms to a single identity.
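
As a rough sketch of that idea, the snippet below binds two signatures to one identity and accepts a message only if both verify. The ECDSA half uses the pyca/cryptography package; Ed25519 merely stands in for a PQA (it is not post-quantum), and this is not the actual composite encoding proposed for the DOCSIS PKI:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, ed25519

# One identity, two component keys: a classic ECDSA key plus a second,
# independent key. Ed25519 stands in for the PQA (it is NOT post-quantum).
classic_key = ec.generate_private_key(ec.SECP256R1())
pq_stand_in = ed25519.Ed25519PrivateKey.generate()

def composite_sign(message: bytes):
    """Sign the same message under every component algorithm."""
    return (
        classic_key.sign(message, ec.ECDSA(hashes.SHA256())),
        pq_stand_in.sign(message),
    )

def composite_verify(message: bytes, sigs) -> bool:
    """Accept only if ALL component signatures verify: forging the
    composite requires breaking every algorithm, so the identity holds
    as long as at least one algorithm remains unbroken."""
    ecdsa_sig, pq_sig = sigs
    try:
        classic_key.public_key().verify(
            ecdsa_sig, message, ec.ECDSA(hashes.SHA256()))
        pq_stand_in.public_key().verify(pq_sig, message)
        return True
    except InvalidSignature:
        return False

assert composite_verify(b"device identity", composite_sign(b"device identity"))
```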

To learn more, watch SCTE Tec-Expo 2020’s “Evolving Security Tools: Advances in Identity Management, Cryptography & Secure Processing” recording and participate in KeyFactor’s 2020 Summit.


EAP-CREDS: Enabling Policy-Oriented Credential Management in Access Networks

Massimiliano Pala
Principal Security Architect

Aug 20, 2020

In our ever-connected world, we want our devices and gadgets to be always available, independently of where we are or which access network we’re currently using. There’s a wide variety of Internet of Things (IoT) devices out there, and although they differ in myriad ways – power, data collection capabilities, connectivity – we want them all to work seamlessly with our networks. Unfortunately, it can be quite difficult to enjoy our devices without worrying about getting them securely onto our networks (onboarding), providing network credentials (provisioning) and even managing them.

Ideally, the onboarding process should be secure, efficient and flexible enough to meet the needs of various use cases. Because IoT devices typically lack screens and keyboards, provisioning their credentials can be a cumbersome task: Some devices might be capable of using only a username and a password, whereas others might be able to use more complex credentials such as digital certificates or cryptographic tokens. For consumers, secure onboarding should be easy; for enterprises, the process should be automated and flexible so that large numbers of devices can quickly be provisioned with unique credentials.

Ideally, at the end of the process, devices should be provisioned with network- and device-specific credentials, directly managed by the network and unique to the device, so that a compromise impacts only that specific device on that specific network. In practice, the creation and installation of new credentials is often a very painful process, especially for devices in the lower segment of the market.

It’s Credentials Management, Not Just Onboarding 

After a device is successfully “registered” or “onboarded,” the missing piece, which has been and so far continues to be ignored, is how to manage these credentials. Even when devices allow credentials to be configured, deployments tend to be “static,” and credentials rarely get updated. There are two reasons for this: The first is the lack of security controls, typically on smaller devices, to set these credentials; the second, and more relevant, reason is that users rarely remember to update authentication settings. According to a recent article, even in corporate environments, “almost half (47%) of CIOs and IT managers have allowed IoT devices onto their corporate network without changing the default passwords,” even though another CISO survey has found that “... almost half (47%) of CISOs were worried about a potential breach due to their organization’s failure to secure IoT devices in the workplace.”

At CableLabs we look at the problem from many angles. In particular, we focus on how to provide network credentials management that (a) is flexible, (b) can enforce credentials policies across devices and (c) does not require additional discovery mechanisms.

EAP-CREDS: The Right Tool for the Specific Task 

The IEEE Port-Based Network Access Control standard (802.1X) provides the basis for access network architectures that allow entities (e.g., devices, applications) to authenticate to the network even before being granted connectivity. Specifically, the Extensible Authentication Protocol (EAP) provides a communication channel in which various authentication methods can be used to exchange different types of credentials. Once the communication between the client and the server has been secured via a mechanism such as EAP-TLS or EAP-TEAP, our work (EAP-CREDS) uses the “extensible” attribute of EAP to add access network credentials management.

EAP-CREDS implements three separate phases: initialization, provisioning and validation. In the initialization phase, the EAP server asks the device to list all credentials available on the device (for the current network only) and, if needed, initiates the provisioning phase, during which a credentials provisioning or renewal protocol supported by both parties is executed. After that phase is complete, the server may initiate the validation phase (to check that the credentials have been successfully received and installed on the device) or declare success and terminate the EAP session.
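
A toy sketch of that server-side phase ordering is below; the names and data structures are placeholders, not the actual EAP-CREDS wire protocol:

```python
from enum import Enum, auto

class Phase(Enum):
    INITIALIZATION = auto()   # device lists its credentials for this network
    PROVISIONING = auto()     # an embedded protocol (e.g., SPP) runs here
    VALIDATION = auto()       # optional check that the new credentials work

def run_eap_creds(device_creds, needs_renewal, validate=True):
    """Sketch of the server-side phase ordering; returns the phases run."""
    trace = [Phase.INITIALIZATION]
    if not device_creds or any(needs_renewal(c) for c in device_creds):
        trace.append(Phase.PROVISIONING)
        if validate:
            trace.append(Phase.VALIDATION)
    return trace  # the server then signals EAP success and ends the session

# A device holding one expired password goes through all three phases.
print(run_eap_creds(["expired-password"], lambda c: "expired" in c))
```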

To keep the protocol simple, EAP-CREDS comes with specific requirements for its deployment:

  • EAP-CREDS cannot be used as a stand-alone method. It’s required that EAP-CREDS is used as an inner method of any tunneling mechanism that provides secrecy (encryption), server-side authentication and, for devices that already have a set of valid credentials, client-side authentication.
  • EAP-CREDS doesn’t mandate (or provide) a specific protocol for provisioning or managing the device credentials because it’s meant only to provide EAP messages for encapsulating existing (standard or vendor-specific) protocols. In its first versions, however, EAP-CREDS also incorporated a Simple Provisioning Protocol (SPP) that supported username/password and X.509 certificate management (server-side driven). The SPP has since been extracted from the original EAP-CREDS proposal and will be standardized as a separate protocol.

 

Figure 2: Two EAP-CREDS sessions. On the left (Figure 2a), the server provisions a server-generated password (4 messages). On the right (Figure 2b), the client renews its certificate by generating a PKCS#10 request; the server replies with the newly issued certificate (6 messages).

When these two requirements are met, EAP-CREDS can manage virtually any type of credential supported by both the device and the server. An example of early adoption of the EAP-CREDS mechanism can be found in Release 3 of the CBRS Alliance specifications, where EAP-CREDS is used to manage non-USIM-based credentials (e.g., username/password or X.509 certificates) for authenticating end-user devices (e.g., cell phones). Specifically, CBRS-A uses EAP-CREDS to transport the provisioning messages from the SPP to manage username/password combinations as well as X.509 certificates. The combination of EAP-CREDS and SPP provides an efficient way to manage network credentials.

SPP and EAP-CREDS: Flexibility and Efficiency 

To understand the specific types of messages implemented in EAP-CREDS and SPP, let’s look at Figure 2a, which shows a typical exchange between an already registered IoT device and a business network.

In this case, after successfully authenticating both sides of the communication, the server initiates EAP-CREDS and uses SPP to deliver a new password. The total number of messages exchanged in this case is between four (when server-side generation is used) and six (when co-generation between client and server is used). Figure 2b provides the same use-case for X.509 certificates where co-generation is used.

One of the interesting characteristics of EAP-CREDS and SPP is their flexibility and ability to easily accommodate solutions that, today, need to go through more complex processes (e.g., OSU registration). For instance, SPP can also be used to register existing credentials in two ways: by presenting an authorization token during the initialization phase (i.e., any kind of unique identifier, whether a signed token or a device certificate), or by having devices register their existing credentials (e.g., their device certificate) for network authentication.

Policy-Based Credentials Management 

As we’ve seen, EAP-CREDS delivers an automatic, policy-driven, cross-device credentials-management system, and its use can improve the security of different types of access networks: industrial, business and home.

For the business and industrial environments, EAP-CREDS provides a cross-vendor, standard way to automate credentials management for large numbers of devices (not just IoT), thus making sure that (a) no default credentials are used, (b) the credentials that are used are regularly updated and (c) credentials aren’t shared with other (possibly less secure) environments. For the home environment, EAP-CREDS makes it possible to ensure that the small IoT devices we’re buying today aren’t easily compromised by weak and static credentials, and it provides a complementary tool (for 802.1X-enabled networks only) to consumer-oriented solutions like the Wi-Fi Alliance’s DPP.

If you’re interested in further details about EAP-CREDS and credentials management, please feel free to contact us and start something new today!


Maintaining Confidentiality in the 10G Network

Andy Dolan
Senior Security Engineer

Aug 4, 2020

The 10G platform will offer almost limitless opportunities for innovation and new experiences in the home, bolstering the capabilities of the Internet of Things (IoT) landscape. As the volume of data that passes over cable technologies continues to grow, the boundary of what counts as private and confidential continues to change.

Moreover, security is an abstract topic, particularly in the sense of the assurance it provides. We expect security to be present, but in a way that we don’t need to think about; we expect the assurance of security to be seamless. Behind the scenes, security is a constant source of innovation to make that seamless protection possible in the face of an ever-changing set of vulnerabilities, threats and exploits. The frequent application of the phrase “arms race” to describe that innovation is appropriate. Addressing confidentiality is a key pillar in the 10G security platform; it ensures that user data continues to be protected as new possibilities and services become available.

Brief Review: What Is Confidentiality?

Last fall, CableLabs produced a set of blog posts covering the security pillars of confidentiality, integrity and availability. Confidentiality ensures that access to resources such as hardware components, sensor data or private information is only granted to authorized actors, whether they’re users or processes. Authorization is part of the mechanism that enables confidentiality as the barrier between the actor and resource.

What are the primary ways that we can keep information confidential? We’re going to focus on two techniques applied to information:

  • Encryption: Using algorithms to render information unreadable without the proper materials (keys) to decrypt it (see the sketch after this list).
  • Separation and Isolation: Putting barriers in place that must be traversed before gaining access to information.
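
As a small illustration of the first technique, the snippet below uses AES-GCM (an authenticated-encryption cipher) from the pyca/cryptography package; the data is made up, and without the key the ciphertext is unreadable:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Whoever lacks the 256-bit key cannot read the ciphertext below:
# confidentiality through encryption in a few lines.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                 # never reuse a nonce with a key

ciphertext = aesgcm.encrypt(nonce, b"smart-home sensor reading", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"smart-home sensor reading"
```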

In addition, there are some threats against confidentiality, including vulnerabilities in encryption algorithms, exploits that can circumvent authorization mechanisms, and, of course, there’s the possibility of quantum computing right around the corner. To stay ahead of the curve, we must continue to innovate to meet these threats.

Where Confidentiality Counts: The Network

Confidentiality is key when it comes to the amount of data that passes between machines. Even over the past 5 years, the data that we consider to be critical in terms of confidentiality has evolved beyond simply what we’re browsing or streaming. It’s also about what our devices are doing.

In the age of the IoT, where every device is connected and often eager to capture aspects of our environment or the actions we take, confidentiality also applies to the data that is shared between devices and their cloud services. A great deal can be garnered from the passive observations and actions of smart devices as they’re used over time, including behaviors typical of users interacting with them (e.g., when the smart coffee pot is typically started in the morning). Protecting such data at home is particularly important during these unprecedented times of increased “work-from-home” routines.

Continuing to Ensure Confidentiality on the Network

Since its inception in 1997, the DOCSIS® specification (and later the extended DOCSIS® security specification) has implemented encryption to ensure that user data is protected from eavesdropping on the cable network. Over the years, changes to encryption algorithms and key sizes have been enacted in DOCSIS in line with the recommendations and best practices of industry and standards organizations.

As the threats against confidentiality continue to be revealed in the world of 10G, CableLabs will continue to adapt to the cryptographic standards and recommendations of such groups through updates to specifications and best practices. Future innovations in encryption technologies will continue to accommodate the incredible power of the 10G network while ensuring the confidentiality of the data that is carried over it.

Maintaining Confidentiality in the Age of the Smart Home

A major evolution in the architecture of the Internet that has come about in only the past decade has been the advent of the IoT. As noted before, confidential data and its privacy implications are now more prevalent than ever, even within only a single smart home. In addition, it may be possible for one compromised smart device to act as a starting point for further intrusions of other devices or points in the home; how can we utilize separation to protect against such a scenario?

Enter CableLabs Micronets, a system that can dynamically organize devices into different groups and provide network separation among them. With this system enabled on your home (or work) network, one compromised device doesn’t directly provide an attacker with the ability to target other devices and/or the confidential data they may have stored or transmitted over the network.

As for confidentiality when devices talk to each other, CableLabs continues to engage with standards organizations such as OCF to draft standards that ensure the secure operation (and interoperation) of IoT devices, as well as device interactions with cloud services.

Conclusions

The opportunities that the 10G platform will provide are immense in both promise and scope, allowing previously infeasible technologies to be brought to the forefront to provide new classes of products and services.

In light of these great innovations, we must remain cognizant when it comes to the confidentiality of our resources and the protection of user data. As the “arms race” continues, we will continue innovating and staying one step ahead. That’s the speed of security.



10G Integrity: The DOCSIS® 4.0 Specification and Its New Authentication and Authorization Framework

Massimiliano Pala
Principal Security Architect

Simon Krauss
Deputy General Counsel

May 28, 2020

One of the pillars of the 10G platform is security. Simplicity, integrity, confidentiality and availability are all different aspects of Cable’s 10G security platform. In this work, we want to talk about the integrity (authentication) enhancements that have been developed for the next generation of DOCSIS® networks and how they update the security profile of cable broadband services.

DOCSIS (Data Over Cable Service Interface Specifications) defines how networks and devices are created to provide broadband for the cable industry and its customers. Specifically, DOCSIS comprises a set of technical documents that are at the core of cable broadband services. CableLabs, manufacturers for the cable industry and cable broadband operators continuously collaborate to improve their efficiency, reliability and security.

With regard to security, DOCSIS networks have pioneered the use of public key cryptography on a mass scale – the DOCSIS Public Key Infrastructures (PKIs) are among the largest PKIs in the world, with half a billion active certificates issued and actively used every day around the world.

In the following, we present a brief history of DOCSIS security, look into the limitations of the current authorization framework and then describe the security properties introduced with the new version of the authorization (and authentication) framework, which addresses those limitations.

A Journey Through DOCSIS Security

The DOCSIS protocol, which is used in cable networks to provide connectivity and services to users, has undergone a series of security-related updates in its latest version, DOCSIS 4.0, to help meet the 10G platform requirements.

In the first DOCSIS 1.0 specification, the radio frequency (RF) interface included three security specifications: Security System, Removable Security Module and Baseline Privacy Interface. Combined, the Security System and Removable Security Module specifications became Full Security (FS).

Soon after public key cryptography was adopted in the authorization process, the cable industry realized that a secure way to authenticate devices was needed; a DOCSIS PKI was established for DOCSIS 1.1-3.0 devices to provide cable modems with verifiable identities.

With the DOCSIS 3.0 specification, the major security feature was the ability to perform authentication and encryption earlier in the device registration process, thus providing protection for important configuration and setup data (e.g., the configuration file for the CM or the DHCP traffic) that was otherwise not protected. The new feature, called Early Authorization and Encryption (EAE), allows Baseline Privacy Interface Plus (BPI+) to start even before the device is provisioned with IP connectivity.

The DOCSIS 3.1 specifications created a new Public Key Infrastructure (PKI) to handle the authentication needs of the new class of devices. This new PKI introduced several improvements over the original PKI when it comes to cryptography – a newer set of algorithms and increased key sizes were the major changes over the legacy PKI. The same new PKI that is used today to secure DOCSIS 3.1 devices will also provide the certificates for the newer DOCSIS 4.0 ones.

The DOCSIS 4.0 version of the specification introduces, among the numerous innovations, an improved authentication framework (BPI Plus V2) that addresses the current limitations of BPI Plus and implements new security properties such as full algorithm agility, Perfect Forward Secrecy (PFS), Mutual Message Authentication (MMA or MA) and Downgrade Attacks Protection.

Baseline Privacy Plus V1 and Its Limitations

In the DOCSIS 1.0-3.1 specifications, when Baseline Privacy Plus (BPI+ V1) is enabled, the CMTS directly authorizes a CM by providing it with an Authorization Key, which is then used to derive all the authorization and encryption key material. These secrets are then used to secure the communication between the CM and the CMTS. In this security model, the CMTS is assumed to be trusted, and its identity is not validated.

Figure 1: BPI Plus Authorization Exchange

The design of BPI+ V1 dates back more than just a few years, and in that period of time the security and cryptography landscapes have drastically changed. At the time BPI+ was designed, the crypto community was set on the use of the RSA public-key algorithm; today, the use of elliptic-curve cryptography and the ECDSA signing algorithm is predominant because of its efficiency, especially where RSA keys of 3072 bits or larger would be required.

Another missing feature in BPI+ is authentication of the authorization messages. In particular, CMs and CMTS-es are not required to authenticate (i.e., sign) their own messages, making those messages vulnerable to unauthorized manipulation.

In recent years, there has been a lot of discussion around authentication and how to make sure that the compromise of long-term credentials (e.g., the private key associated with an X.509 certificate) does not expose all of a user’s sessions in the clear (i.e., enable the decryption of all recorded sessions by breaking a single key). Because BPI+ V1 directly encrypts the Authorization Key with the RSA public key in the CM’s device certificate, it does not support Perfect Forward Secrecy.

To address these issues, the cable industry worked on a new version of its authorization protocol, namely BPI Plus Version 2. With this update, a mechanism was also required to prevent downgrade attacks, in which attackers force the use of an older, and possibly weaker, version of the protocol. To address this issue, the DOCSIS community introduced the Trust On First Use (TOFU) mechanism.

The New Baseline Privacy Plus V2

The DOCSIS 4.0 specification introduces a new version of the authentication framework, namely Baseline Privacy Plus Version 2, that addresses the limitations of BPI+ V1 by providing support for the identified new security needs. Following is a summary of the new security properties provided by BPI+ V2 and how they address the current limitations:

  • Message Authentication. BPI+ V2 Authorization messages are fully authenticated. CMs need to digitally sign their Authorization Request messages, thus eliminating the possibility for an attacker to substitute the CM certificate with another one. CMTS-es are required to authenticate their own Authorization Reply messages; this change adds an explicit authentication step to the current authorization mechanism. While recognizing the need for deploying mutual message authentication, the DOCSIS 4.0 specification allows for a transitioning period during which devices are still allowed to use BPI+ V1. The main reason for this choice is the new requirement imposed on DOCSIS networks, which must now procure and renew their DOCSIS credentials when enabling BPI+ V2 (Mutual Authentication).
  • Perfect Forward Secrecy. Unlike BPI+ V1, the new authentication framework requires both parties to participate in the derivation of the Authorization Key from authenticated public parameters. In particular, the introduction of Message Authentication on both sides of the communication (i.e., the CM and the CMTS) enables BPI+ V2 to use the Elliptic-Curve Diffie-Hellman Ephemeral (ECDHE) algorithm instead of the CMTS directly generating and encrypting the key for the different CMs (see the sketch after this list). Because the Authorization messages are authenticated, the use of ECDHE is safe against MITM attacks.
  • Algorithm Agility. As advancements in classical and quantum computing put incredible computational power at users’ fingertips, they provide the same ever-increasing capabilities to malicious users. BPI+ V2 removes the protocol’s dependencies on the specific public-key algorithms that are present in BPI+ V1. By introducing the standard CMS format for message authentication (i.e., signatures), combined with the use of ECDHE, the DOCSIS 4.0 security protocol effectively decouples the public key algorithm used in the X.509 certificates from the key exchange algorithm. This enables the use of new public key algorithms whenever security or operational needs require it.
  • Downgrade Attack Protection. A new Trust On First Use (TOFU) mechanism is introduced to provide protection against downgrade attacks – although the principles behind TOFU mechanisms are not new, their use to protect against downgrade attacks is. The mechanism leverages the security parameters used during a first successful authorization as a baseline for future ones, unless indicated otherwise. By establishing the minimum required version of the authentication protocol, DOCSIS 4.0 cable modems actively prevent unauthorized use of a weaker version of the DOCSIS authentication framework (BPI+). During the transitioning period for the adoption of the new version of the protocol, cable operators can allow “planned” downgrades – for example, when a node split occurs or when faulty equipment is replaced and BPI+ V2 is not enabled there. In other words, a successfully validated CMTS can set, on the CM, the allowed minimum version (and other CM-CMTS binding parameters) to be used for subsequent authentications.
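
To illustrate how ECDHE provides forward secrecy, here is a minimal sketch using the pyca/cryptography package; it shows only the key-agreement math, not the actual BPI+ V2 message flow or key schedule:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a throwaway (ephemeral) key pair per session.
cm_eph = ec.generate_private_key(ec.SECP384R1())     # cable modem side
cmts_eph = ec.generate_private_key(ec.SECP384R1())   # CMTS side

# Each side combines its own ephemeral private key with the peer's
# ephemeral public key; both arrive at the same shared secret.
cm_secret = cm_eph.exchange(ec.ECDH(), cmts_eph.public_key())
cmts_secret = cmts_eph.exchange(ec.ECDH(), cm_eph.public_key())
assert cm_secret == cmts_secret

# An Authorization Key can then be derived from the shared secret.
# Because the ephemeral keys are discarded after use, a later compromise
# of the long-term certificate keys cannot recover this session's key.
auth_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"illustrative-auth-key").derive(cm_secret)
```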

Future Work

In this work, we provided a short history of DOCSIS security and reviewed the limitations of the current authorization framework. As CMTS functionality moves into the untrusted domain, these limitations could translate into security threats, especially in new distributed architectures like Remote PHY. The proposed changes to DOCSIS 4.0, although in their final stage of approval, are currently being addressed in the Security Working Group.

Member organizations and DOCSIS equipment vendors are always encouraged to participate in our DOCSIS working groups – if you qualify, please contact us and participate in our weekly DOCSIS 4.0 security meeting where these, and other security-related topics, are addressed.

If you are interested in discovering even more details about DOCSIS 4.0 and 10G security, please feel free to contact us to receive more in-depth and updated information.


With Great Bandwidth, Comes Great Responsibility

Kyle Haefner
Senior Security Engineer

May 5, 2020

Cable's next-generation 10G networks hold the promise of delivering symmetrical multi-gigabit speeds that are 100 times faster than what some consumers are experiencing today. This great leap forward will enable services and experiences that will drive internet innovation for years to come. It is our mutual responsibility to ensure that the devices we connect to these blazing 10-gigabit internet connections are updated and patched, free from default passwords and use proper authentication and authorization.

The cost of not following basic cybersecurity principles surfaced in the late fall of 2016, when many popular sites, such as Twitter, Amazon, Reddit and Netflix, were unreachable for several periods lasting hours. The cause was a massive distributed denial of service (DDoS) attack coming from hundreds of thousands of compromised Internet of Things (IoT) devices. Traffic from these devices overwhelmed the DNS service provider dyn.com and effectively blocked customers and users from reaching these popular internet locations for hours at a time.

As we approach a world where households are connected at gigabit and greater speeds, building secure devices and getting them into the hands of consumers is essential. Over the last several years, CableLabs has been engaged with standards organizations such as the Consumer Technology Association (CTA) and the Open Connectivity Foundation (OCF) to draft specifications and guide security baselines for IoT devices. This work has culminated in the release of OCF's international ISO/IEC specification for IoT interoperability.

The OCF specification brings together over 450 member companies and work that spans half a decade to apply cybersecurity best practices to the IoT. This specification, combined with an open source reference implementation, seven approved global testing and certification labs and an active community of practitioners and member companies (from device vendors to network device vendors and network operators), is uniquely positioned to be the secure standard that unites the industry.

With the OCF specification, a consumer can buy a certified device from Vendor A and be confident in the knowledge that not only will it work with their certified appliance from Vendor B, but it will do so in a way that is encrypted and authenticated. OCF can work with many cloud services but does not inherently need the cloud, promising consumers a good balance between the convenience of the cloud and the privacy and availability of their local networks.

The OCF specification's security-first approach brings it into close alignment with several of the security guidelines from government and industry, including:

Figure 1: OCF mapping of Security Baselines

The road ahead for 10G and IoT is bright. Ultra-fast networks and connected devices have the potential to change every aspect of daily life, making our surroundings aware of and responsive to our presence and able to predict and adjust to our needs. Work, entertainment and social interaction will happen whenever and wherever we are, dynamically and organically. Education and healthcare will be forever changed as sensors and ubiquitous devices allow us to interact in ways never before possible. Yes, the future is bright, but it also must be secure.


Revisiting Security Fundamentals Part 3: Time to Examine Availability

Steve Goeringer
Distinguished Technologist, Security

Nov 12, 2019

As I discussed in parts 1 and 2 of this series, cybersecurity is complex. Security engineers rely on the application of fundamental principles to keep their jobs manageable. In the first installment of this series, I focused on confidentiality, and in the second installment, I discussed integrity. In this third and final part of the series, I’ll review availability. The application of these three principles in concert is essential to ensuring excellent user experiences on broadband.

Defining Availability

Availability, like most things in cybersecurity, is complicated. Availability of broadband service, in a security context, ensures timely and reliable access to and use of information by authorized users. Achieving this, of course, can be challenging. In my opinion, the topic is underrepresented among security professionals, and we have to rely on additional expertise to achieve our availability goals. The supporting engineering discipline for ensuring availability is reliability engineering. Many tomes are available to provide detailed insight on how to engineer systems to achieve desired reliability and availability.

How are the two ideas of reliability and availability different? Reliability focuses on how a system will function under specific conditions for a period of time. In contrast, availability focuses on how likely a system will function at a specified moment or interval of time. There are important additional terms to understand – quality, resiliency, and redundancy. These are addressed in the following paragraphs. Readers wanting more detail may consider reviewing some of the papers on reliability engineering at ScienceDirect.

Quality: We need to assure that our architectures and components are meeting requirements. We do that through quality assurance and reliability practices. Software and hardware vendors design, analyze, and test their solutions (both in development and then as part of shipping and integration testing) to assure they actually are meeting their reliability and availability requirements. When results aren’t sufficient, vendors apply process improvements (possibly including re-engineering) to bring their design, manufacturing, and delivery processes into line to hit the reliability and availability requirements.

Resiliency: Again, however, this isn’t enough. We need to make sure our services are resilient – that is, even when something fails, our systems will recover (and something will fail – in fact, many things over time are going to fail, sometimes concurrently). There are a few key aspects we address when making our networks resilient. One is that when something fails, it does so loudly: either the failing element sends messages to the management system, or the systems that it connects to or that rely upon it tell the management system that the element is failing or has failed. Another is that the system can gracefully recover, automatically restarting from the point at which it failed.

Redundancy: And, finally, we apply redundancy. That is to say, we set up the architecture so that critical components are replicated (usually in parallel). This may happen within a network element (such as having two network controllers, two power supplies or two cooling units) with failover (and appropriate network management notifications) from one unit to another. Sometimes we’ll use clustering to both distribute load and achieve redundancy (sometimes referred to as M:N redundancy). Sometimes we’ll have redundant network elements (often employed in data centers) or multiple routes by which network elements can connect through networks (using Ethernet, Internet, or even SONET). In cases where physical redundancy is not reasonable, we can introduce redundancy across other dimensions, including time, frequency, channel, etc. How much redundancy a network element should employ depends on the math that balances reliability and availability to achieve your service requirements.

I’ve mentioned requirements several times. What requirements do I mean? A typical one, but not the only one and not even necessarily the most important one, is the Mean Time Between Failures (MTBF). This statistic represents the expected average length of time between failures of a given element of concern, typically many thousands (even millions for some critical, well-understood components) of hours. There are variations. Seagate, for example, switched to Annualized Failure Rate (AFR), which is the “probable percent of failures per year, based on the [measured or observed failures] on the manufacturer’s total number of installed units of similar type.” The key thing here, though, is to remember that MTBF and AFR are statistical predictions based on analysis and measured performance. It’s also important to estimate and measure availability – at the software, hardware, and service layers. If your measurements aren’t hitting the targets you set for a service, then something needs to improve.

A parting note here: Lots of people talk about availability in terms of the percentage of time a service (or element) is up in a year. These figures are thrown around as “how many 9’s is your availability?” For example, “My service is available at 4x9s (99.99%).” This is often a misused estimate because the user typically doesn’t know what is being measured, what it applies to (e.g., what’s included in the estimate), or even the basis for how the measurement is made. Nevertheless, it can be useful when backed by evidence, especially with statistical confidence.

Warnings about Availability

Finally, things ARE going to fail. Your statistics will, sometimes, be found wanting. Therefore, it’s critical to also consider how long it will take to RECOVER from a failure. In other words, what is your repair time? There are, of course, statistics for estimating this as well. A common one is Mean Time To Repair (MTTR). This seems like a simple term, but it isn’t. Really, MTTR is a statistic that measures how maintainable or repairable systems are. And measuring and estimating repair time is critical: Repair time can be the dominant contributor to unavailability.
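
To tie MTBF, MTTR and the “9’s” together, here is a small worked example; the figures are illustrative, and the parallel-redundancy formula assumes units fail independently:

```python
import math

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: uptime / (uptime + downtime)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant(a_single: float, n: int) -> float:
    """Availability of n independent units in parallel: the service is
    down only when every unit is down at once."""
    return 1 - (1 - a_single) ** n

a = availability(mtbf_hours=8_000, mttr_hours=8)   # illustrative figures
print(f"single unit: {a:.4%}, downtime = {(1 - a) * 8760:.1f} h/yr")
print(f"1+1 redundant: {redundant(a, 2):.6%}")
# "how many 9s" is just -log10 of the unavailability
print(f"nines: {-math.log10(1 - a):.1f}")
```

With these numbers, a single unit delivers about three 9’s (roughly 8.7 hours of downtime a year), while a 1+1 redundant pair pushes the figure toward six 9’s; note how shrinking MTTR improves availability just as directly as growing MTBF.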

So… why don’t we just make everything really reliable and all of our services highly available? Ultimately, this comes down to two things. One, you can’t predict all things sufficiently. This weakness is particularly important in security and is why availability is included as one of the three security fundamentals. You can’t predict well or easily how an adversary is going to attack your system and disrupt service. When the unpredictable happens, you figure out how to fix it and update your statistical models and analysis accordingly, and you update how you measure availability and reliability.

The second thing is simply economics. High availability is expensive. It can be really expensive. Years ago, I did a lot of work on engineering and architecting metropolitan, regional, and nationwide optical networks with Dr. Jason Rupe (one of my peers at CableLabs today). We found, through lots of research, that as a general rule of thumb, each “9” of availability increases the cost of service by around 2.5x on typical networks. Sounds extreme, doesn’t it? The availability of a private line or Ethernet circuit between two points (regional or national) is typically quoted at around 99%. That’s a lot of downtime (over 80 hours a year) – and it won’t all happen at a predictable time within a year. Getting that down to around 9 hours a year, or 99.9% availability, will usually cost 2.5x as much. Of course, architecture and technology do matter; this is just my personal experience. What’s the primary driver of the cost? Redundancy: more equipment on additional paths of connectivity between that equipment.

Availability Challenges

There are lots of challenges in the design of a highly available, cost-effective access network. Redundancy is challenging. It’s implemented where economically reasonable, particularly at the CMTSs, routers, switches, and other server elements at the hub, headend, or core network. It’s a bit harder to achieve in the HFC plant. So, designers and engineers tend to focus more on the reliability of components and software, ensuring that CMs, nodes, amplifiers, and all the other elements that make DOCSIS® work are dependable. Finally, we do measure our networks. A major tool for tracking and analyzing network failure causes and for maximizing the time between service failures is DOCSIS® Proactive Network Maintenance (PNM). PNM is used to identify problems in the physical RF plant, including passive and active devices (taps, nodes, amplifiers), connectors, and the coax cable.

From a strictly security perspective, what can be done to improve the availability of services? Denial of service attacks are typically monitored and mitigated at the ingress points to networks (border routers) through scrubbing. Another major tool is ensuring authorized access through authentication and access controls.


Availability Strategies

What are the strategies and tools reliability and security engineers might apply?

  • Model your system and assess availability diligently. Include traditional systems reliability engineering faults and conditions, but also include security faults and attacks.
  • Execute good testing prior to customer exposure. Said another way, implement quality control and process improvement practices.
  • When redundancy is impractical, reliability of elements becomes the critical availability design consideration.
  • Measure and improve. PNM can significantly improve availability. But measure what matters.
  • Partner with your suppliers to assure reliability, availability, and repairability throughout your network, systems, and supply chains.
  • Leverage PNM fully. Solutions like DOCSIS® create a separation between a network problem and a service problem. PNM lets operators take advantage of that difference to fix network problems before they become service problems.
  • Remember that repair time can be a major factor in the overall availability and customer experience.

Availability in Security

It’s important that we consider availability in our security strategies. Security engineers often get too focused on threats caused by active adversaries. We must also include other considerations that disrupt the availability of experiences to subscribers. Availability is a fundamental security component and—like confidentiality and integrity—should be included in any security-by-design strategy.



Revisiting Security Fundamentals Part 2: Integrity

Steve Goeringer
Distinguished Technologist, Security

Oct 24, 2019

Let’s revisit the fundamentals of security during this year’s security awareness month – part 2: Integrity.

As I discussed in Part 1 of this series, cybersecurity is complex. Security engineers rely on the application of fundamental principles to keep their jobs manageable. The first blog focused on confidentiality. This second part will address integrity, and the third and final part of the series will review availability. Application of these three principles in concert is essential to ensuring excellent user experiences on broadband.

Nearly everyone who uses broadband has some awareness of confidentiality, though most may think of it exclusively as enabled by encryption. That’s not a mystery – our browsers even tell us when a session is “secure” (meaning the session they have initiated with a given server is at least using HTTPS, which is encrypted). Integrity is a bit more obscure and less known. It’s also less widely implemented and, even then, not always implemented well.

Defining Integrity

In their special publication, “An Introduction to Information Security,” NIST defines integrity as “a property whereby data has not been altered in an unauthorized manner since it was created, transmitted, or stored.” This definition is a good starting place, but it can be extended in today’s cybersecurity context. Integrity needs to be applied not only to data, but also to the hardware and software systems that store and process that data and the networks that connect those systems. Ultimately, integrity is about proving that things are as they should be and that they have not been changed, intentionally or accidentally, in unexpected or unauthorized ways.

How is this done? Well, that answer depends on what you are applying integrity controls to. (Again, this blog post isn’t intended to be an in-depth tutorial on the details but a simple update and overview.) The simplest and most well-known approach to ensuring integrity is to use a signature. Most people are familiar with this from signing a document or writing a check. And most people know that the bank, or whomever else we’re signing a document for, knows that signatures are not perfect, so you often have to present an ID (passport, driver’s license, whatever) to prove that you are the party to which your signature attests on that document or check.

We can also implement similar steps in information systems, although the process is a bit different. We produce a signature of data by using math; in fact, integrity is a field of cryptography that complements encryption. A signature is produced in two steps. First, the data is run through a mathematical function called hashing. Hashing is a one-way process that reduces a large piece of data to a few bits (128-256 bits is typical) in a way that is computationally difficult to reverse. The result is often referred to as a digest, and it is unlikely that different source data will produce the same digest (when that happens, we call it a collision). This alone can be useful and is used in many ways in information systems. But it doesn’t attest to the source of the data or the authenticity of the data; it just shows whether the data has been changed. If we encrypt the digest, perhaps using asymmetric cryptography supported by a public key infrastructure, we produce a signature. That signature can now be validated through a cryptographic challenge and response. This is largely equivalent to being asked to show your ID when you sign a check.
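
Here is what those two steps look like in practice, sketched with the pyca/cryptography package (ECDSA bundles the hashing and the signing of the digest into one call); the document contents are, of course, made up:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

document = b"Pay to the order of..."
private_key = ec.generate_private_key(ec.SECP256R1())

# Step 1 (hashing) and step 2 (signing the digest) happen together:
# ECDSA hashes the message with SHA-256, then signs the digest.
signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))

# Verification is the "show your ID" step: anyone holding the public
# key can check the signature, and any change to the document fails.
public_key = private_key.public_key()
public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))  # ok
try:
    public_key.verify(signature, document + b"!", ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered document detected")
```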

One thing to be mindful of is that encryption doesn’t ensure integrity. Although an adversary who intercepts an encrypted message may not be able to read that message, they may be able to alter the encrypted text and send it on its way to the intended recipient. That alteration may be decrypted as valid. In practice this is hard because without knowledge of the original message, any changes are likely to just be gibberish. However, there are attacks in which the structure of the original message is known. Some ciphers do include integrity assurances, as well, but not all of them. So, implementors need to consider what is best for a given solution.

Approaches to Integrity

Integrity is applied somewhat differently to data in motion and at rest, and to systems, software, and even supply chains. Here’s a brief summary of the tools and approaches for each area:

  • Information in motion: How is the signature scheme above applied to transmitted data? The most common means uses a process very similar to what is described above. Hash-based Message Authentication Codes (HMACs) create a digest of a packet keyed with a secret key shared by the communicating parties; the receiver recomputes the digest with the same key to prove that the packet was produced by an authorized source and has not been altered (see the sketch after this list). One old but still relevant description of HMAC is RFC 2104, available from the IETF.
  • Information at rest: In many ways, assuring the integrity of files on a storage server or a workstation is more challenging than assuring the integrity of transmitted information. Usually, storage is shared by many organizations or departments. What secret keys should be used to produce a signature of those files? Sometimes, things are simplified: Access controls can be used to ensure only authorized parties can access a file, and when access controls are effective, perhaps only hashing of the data file is sufficient to prove integrity. Again, some encryption schemes can include integrity protection, but the key-management problem noted above remains a challenge there. Most storage solutions, both proprietary and open source, provide a wide range of integrity protection options, and it can be challenging for the security engineer to architect the best solution for a given application.
  • Software: Software is, of course, a special type of information. And so, the ideas of how to apply integrity protections to information at rest can apply to protecting software. However, how software is used in modern systems with live builds adds additional requirements. Namely, this means that before a given system uses software, that software should be validated as being from an authorized source and that it has not been altered since being provided by that source. The same notion of producing a digest and then encrypting the digest to form a signature applies, but that signature needs to be validated before the software is loaded and used. In practice, this is done very well in some ecosystems and either done poorly or not at all in other systems. In cable systems, we use a process referred to as Secure Software Download to ensure the integrity of firmware downloaded to cable modems. (See section 14 of Data-Over-Cable Service Interface Specifications 3.1)
  • Systems: Systems are comprised of hardware and software elements, yet the overall operation of the hardware tends to be governed by configurations and settings stored in files and software. If the files and software are changed, the operation of the system will be affected. Consequently, the integrity of the system should be tracked and periodically evaluated. Integrity of the system can be tracked through attestation – basically producing a digest of the entire system and then storing that in protected hardware and reporting it to secure attestation servers. Any changes to the system can be checked to ensure they were authorized. The processes for doing this are well documented by the Trusted Computing Group. Another process well codified by the Trusted Computing Group is Trusted Boot. Trusted boot uses secure modules included in hardware to perform a verified launch of an OS or virtual environment using attestation.
  • Supply chain: A recent focus area for integrity controls has been supply chains. How do you know where your hardware or software is coming from? Is the system you ordered the system you received? Supply chain provenance can be attested using a wide range of tools, and application of distributed ledger or blockchain technologies is a prevalent approach.
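
As promised above, here is a minimal HMAC sketch using only the Python standard library; the packet contents and key handling are illustrative:

```python
import hashlib
import hmac
import secrets

shared_key = secrets.token_bytes(32)   # known only to sender and receiver

packet = b"GET /status"
tag = hmac.new(shared_key, packet, hashlib.sha256).digest()

# The receiver recomputes the tag with the same shared key and compares
# in constant time; a mismatch means tampering or an unauthorized sender.
def verify(key: bytes, data: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)

assert verify(shared_key, packet, tag)
assert not verify(shared_key, b"GET /admin", tag)   # altered in transit
```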

Threats to Integrity

What are the threats to integrity? One example that has impacted network operators and their users repeatedly is the changing of DNS settings on gateways. If an adversary can change the DNS server on a gateway to a server they control (incorporation of strong access controls minimizes this risk), then they can selectively redirect DNS queries to spoofed hosts that look like authentic parties (e.g., banks, credit card companies, charity sites) and capture customers’ credentials. The adversary can then use those credentials to access the legitimate site and do whatever that allows (e.g., empty bank accounts). This can also be done by altering the DNS query in motion at a compromised router or other server through which the query traverses (HMAC or encryption with integrity protection would prevent this). Software attacks can occur at the source but also at intermediate points in the supply chain. Tampered software is a prevalent way malware is introduced to end-points and can be very hard to detect because the software appears legitimate. (Consider the Wired article, “Supply Chain Hackers Snuck Malware Into Videogames.”)

The Future of Integrity

Cable technology has already addressed many integrity threats. As mentioned above, DOCSIS technology already includes support for Secure Software Download to provide integrity verification of firmware. This addresses both software supply chain provenance and tampering of firmware. Many of our management protocols are also protected by HMAC. Future controls will include trusted boot and hardened protection of our encryption keys (also used for signatures). We are designing solutions for virtualized service delivery, and integrity controls are pervasively included across the container and virtual machine architectures being developed.

Adding integrity controls to our tools for ensuring confidentiality provides defense in depth. Integrity is a fundamental security component and should be included in any security-by-design strategy.



False Base Station or IMSI Catcher: What You Need to Know

Tao Wan
Principal Architect, Security

Oct 23, 2019

You might have heard of False Base Station (FBS), Rogue Base Station (RBS), International Mobile Subscriber Identifier (IMSI) Catcher or Stingray. All four of these terms refer to a tool, consisting of hardware and software, that allows for passive and active attacks against mobile subscribers over radio access networks (RANs). The attacking tool (referred to as FBS hereafter) exploits security weaknesses in mobile networks from 2G (second generation) to 3G, 4G and 5G. (Certain improvements have been made in 5G, which I’ll discuss later.)

In mobile networks of all generations, cellular base stations periodically broadcast information about the network. Mobile devices or user equipment (UE) listen to these broadcast messages, select an appropriate cellular cell and connect to the cell and the mobile network. Because of practical challenges, broadcast messages aren’t protected for confidentiality, authenticity or integrity. As a result, broadcast messages are subject to spoofing or tampering. Some unicast messages aren’t protected either, also allowing for spoofing. The lack of security protection of mobile broadcast messages and certain unicast messages makes FBS possible.

An FBS can take various forms, such as a single integrated device or multiple separate components. In the latter form [1], an FBS usually consists of a wireless transceiver, a laptop and a cellphone. The wireless transceiver broadcasts radio signals to impersonate legitimate base stations. The laptop connects to the transceiver (e.g., via a USB interface) and controls what to broadcast as well as the strength of the broadcast signal. The cellphone is often used to capture broadcasting messages from legitimate base stations and feed them into the laptop, simplifying the configuration of the transceiver. In either form, an FBS can be made compact with a small footprint, allowing it to be left in a location unnoticed (e.g., mounted to a street pole) or carried conveniently (e.g., inside a backpack).

An FBS often broadcasts the same network identifier as a legitimate network but with a stronger signal to lure users away. How much stronger does an FBS’s signal need to be to succeed? The answer wasn’t well understood until recently. According to the experiments in [2], an FBS’s signal must be more than 30 dB stronger than the legitimate signal to have any success. When the signal is 35 dB stronger, the success rate is about 80 percent; at 40 dB stronger, the success rate reaches 100 percent. In these experiments, the FBS broadcast the same messages on the same frequency and band as the legitimate cell. Another strategy is for the FBS to broadcast the same network identifier but a different tracking area code, tricking the UE into believing it has entered a new tracking area so that it reselects to the FBS. This strategy can make it easier to lure the UE and should reduce the signal strength the FBS needs to succeed; however, the exact signal strength requirement in this case wasn’t measured in the experiments.

Once camped at an FBS, a UE is subject to both passive and active attacks. In passive attacks, an adversary only listens to radio signals from both the UE and legitimate base stations without interfering with the communication (e.g., with signal injection). Consequences from passive attacks include—but are not limited to—identity theft and location tracking. In addition, eavesdropping often forms a stepping stone toward active attacks, in which an adversary also injects signals. An active attacker can be a man-in-the-middle (MITM) or man-on-the-side (MOTS) attacker.

In MITM attacks, the attacker is on the path of the communication between a UE and another entity and can do pretty much anything to the communication, such as reading, injecting, modifying and deleting messages. One such attack is to downgrade a UE to 2G with weak or null ciphers to allow for eavesdropping. Another example of an MITM attack is aLTEr [3], which tampers only with DNS requests in LTE networks, without any downgrading or tampering of control messages. Although user plane data is encrypted in LTE, the cipher used (AES in counter mode) is malleable, and the lack of integrity protection means tampering goes undetected.
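
The malleability at the heart of aLTEr-style tampering is easy to demonstrate. Here is a sketch in Python using the pyca/cryptography library; the plaintext layout is invented for illustration, and this is not the aLTEr tooling itself. An attacker who knows (or guesses) the plaintext can rewrite it by flipping ciphertext bits, without ever knowing the key:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)
plaintext = b"DNS server: 8.8.8.8"  # attacker knows/guesses this layout

enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = bytearray(enc.update(plaintext) + enc.finalize())

# In CTR mode, flipping a ciphertext bit flips the same plaintext bit.
# XOR in the difference between the real and desired plaintext:
target = b"DNS server: 9.9.9.9"
for i, (p, t) in enumerate(zip(plaintext, target)):
    ciphertext[i] ^= p ^ t

dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
print(dec.update(bytes(ciphertext)) + dec.finalize())  # b"DNS server: 9.9.9.9"
```

An integrity tag over the ciphertext (e.g., AES-GCM or a separate MAC) would make this manipulation detectable.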

In MOTS attacks, an attacker doesn’t have the same amount of control over communication as with an MITM attack. More often, the attacker injects messages to obtain information from the UE (e.g., stealing the IMSI by an identity request), send malicious messages to the UE (e.g., phishing SMS) or hijack services from a victim UE (e.g., answering a call on behalf of the UE [4]). A MOTS attacker, without luring a UE to connect to it, can still interfere with existing communication—for example, by injecting slightly stronger signals that are well timed to overwrite a selected part of a legitimate message [2].

FBS has been a security threat to all generations of mobile networks since 2G. Mitigations were studied by 3GPP in the past, without success, owing to practical constraints such as deployment challenges in cryptographic key management and difficulty in timing synchronization. 5G release 15 [5] specifies network-side detection of FBS, which can help mitigate the risk but cannot prevent FBS. 5G release 15 also introduces public key encryption of the subscription permanent identifier (SUPI) before it is sent from the UE, which, if implemented, makes it difficult for an FBS to steal the SUPI. In 5G release 16 [6], FBS is being studied again. Various solutions have been proposed, including integrity protection of broadcasting, paging and unicasting messages, along with other detection approaches.

Our view is that FBS arises mainly from the lack of integrity protection of broadcasting messages. Thus, a fundamental solution is to integrity-protect broadcasting messages (e.g., using public-key digital signatures). Although challenges remain with such a solution, we don’t believe they are insurmountable. Other solutions are based on attack signatures, which may help but can eventually be bypassed as attackers evolve their techniques and behaviors. We look forward to agreement from 3GPP SA3 on a long-term solution that fundamentally solves the problem of FBS in 5G.
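
As a rough illustration of the signature-based direction (a sketch, not a 3GPP-specified mechanism; the key names and broadcast payload are invented), the snippet below signs a broadcast payload with ECDSA. A real design would also need standardized key distribution to UEs, replay protection and compact signatures:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical operator-held signing key; its public half would be
# provisioned to UEs (e.g., on the SIM).
network_key = ec.generate_private_key(ec.SECP256R1())

# The base station signs the system information it broadcasts.
sib = b"PLMN=310-410|TAC=0x2B67|cell=0x01A2F"  # invented payload
signature = network_key.sign(sib, ec.ECDSA(hashes.SHA256()))

# A UE verifies before trusting the broadcast; a spoofed or modified
# message from an FBS fails verification.
try:
    network_key.public_key().verify(signature, sib, ec.ECDSA(hashes.SHA256()))
    print("broadcast authenticated")
except InvalidSignature:
    print("reject: possible false base station")
```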

To learn more about 5G in the future, subscribe to our blog.


SUBSCRIBE TO OUR BLOG

References

[1] Li, Zhenhua, Weiwei Wang, Christo Wilson, Jian Chen, Chen Qian, Taeho Jung, Lan Zhang, Kebin Liu, Xiangyang Li, and Yunhao Liu. “FBS-Radar: Uncovering Fake Base Stations at Scale in the Wild.” In Proceedings of ISOC Symposium on Network and Distributed Systems Security (NDSS), February 2017.

[2] Hojoon Yang, Sangwook Bae, Mincheol Son, Hongil Kim, Song Min Kim, and Yongdae Kim. “Hiding in Plain Signal: Physical Signal Overshadowing Attack on LTE.” In Proceedings of 28th USENIX Security Symposium (USENIX Security), August 2019.

[3] David Rupprecht, Katharina Kohls, Thorsten Holz, and Christina Pöpper. “Breaking LTE on Layer Two.” In Proceedings of IEEE Symposium on Security & Privacy (S&P), May 2019.

[4] Nico Golde, Kévin Redon, and Jean-Pierre Seifert. “Let Me Answer That for You: Exploiting Broadcast Information in Cellular Networks.” In Proceedings of the 22nd USENIX Security Symposium (USENIX Security), August 2013.

[5] 3GPP TS 33.501, “Security Architecture and Procedures for 5G System” (Release 15), v15.5.0, June 2019.

[6] 3GPP TR 33.809, “Study on 5G Security Enhancement against False Base Stations” (Release 16), v0.5.0, June 2019.

Comments
Security

Revisiting Security Fundamentals

Steve Goeringer
Distinguished Technologist, Security

Oct 17, 2019

It’s Cybersecurity Awareness Month—time to study up!

Cybersecurity is a complex topic. The engineers who address cybersecurity must not only be security experts; they must also be experts in the technologies they secure. In addition, they have to understand the ways that the technologies they support and use might be vulnerable and open to attack.

Another layer of complexity is that technology is always evolving. In parallel with that evolution, our adversaries are continuously advancing their attack methods and techniques. How do we stay on top of that? We must be masters of security fundamentals. We need to be able to start from foundational principles and extend our security tools, techniques and methods from there: Make things no more complex than necessary to ensure safe and secure user experiences.

In celebration of Cybersecurity Awareness Month, I’d like to devote a series of blog posts to address some basics about security and to provide a fresh perspective on why these concepts remain important areas of focus for cybersecurity.

Three Goals

At the most basic level, the three primary goals of security for cable and wireless networks are to ensure the confidentiality, integrity and availability of services. NIST documented these concepts well in its special publication, “An Introduction to Information Security.”

  • Confidentiality ensures that only authorized users and systems can access a given resource (e.g., network interface, data file, processor). This is a pretty easy concept to understand: The most well-known confidentiality approach is encryption.
  • Integrity, which is a little more obscure, guards against unauthorized changes to data and systems. It also includes the idea of non-repudiation, which means that the source of a given message (or packet) is known and cannot be denied by that source.
  • Availability is the uncelebrated element of the security triad. It’s often forgotten until failures in service availability are recognized as being “a real problem.” This is unfortunate because engineering to ensure availability is very mature.

In Part 1 of this series, I want to focus on confidentiality. I’ll discuss integrity and availability in two subsequent blogs.

As I mentioned, confidentiality is a security function that most people are aware of. Encryption is the most frequently used method to assure confidentiality. I’m not going to go into a primer about encryption. However, it is worth talking about the principles. Encryption applies math, using space, power and time, to ensure that only parties with the right secret (usually a key) can read certain data. Ideally, the math should require vastly greater space, power or time from an unauthorized party without that secret. Why does this matter? Because encryption provides confidentiality only as long as the math is sound and the corresponding space, power and time an adversary needs to read the data remain impractical. That is often a good assumption, but history has shown that, over time, a given encryption solution will eventually become insecure. So, it’s a good idea to apply other approaches to provide confidentiality as well.
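
As a minimal illustration of that principle (a sketch using the pyca/cryptography library, not any specific DOCSIS mechanism), authenticated encryption makes reading the data trivial with the key and impractical without it:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # the secret that gates access
nonce = os.urandom(12)                     # must be unique per message

# Encrypting is cheap for the key holder...
ciphertext = AESGCM(key).encrypt(nonce, b"account balance: $42", None)

# ...and decrypting verifies integrity, too: a wrong key or a modified
# ciphertext raises InvalidTag instead of returning garbage.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
print(plaintext)  # b'account balance: $42'
```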

What are some of those approaches? Ultimately, the other solutions prevent access to the data being protected. The notion is that if you prevent access (either physically or logically) to the data being protected, then it can’t be decrypted by unauthorized parties. Solutions in this area fall primarily into two strategies: access controls and separation.

Access controls validate that requests to access data or use a resource (like a network) come from authorized sources (identified using network addresses and other credentials). For example, an access control list (ACL) is used in networks to restrict resource access to specific IP or MAC addresses. As another example, a cryptographic challenge and response (often enabled by public key cryptography) might be used to ensure that the requesting entity has the “right credentials” to access data or a resource. One method we all use every day is passwords. Every time we “log on” to something, like a bank account, we present our username (identification) and our (hopefully) secret password.
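
Here is a minimal sketch of the challenge-response idea in Python; the names and flow are invented for illustration, and production protocols (e.g., TLS client authentication) add much more:

```python
import os
from cryptography.hazmat.primitives.asymmetric import ed25519

device_key = ed25519.Ed25519PrivateKey.generate()
enrolled_pub = device_key.public_key()  # verifier stored this at enrollment

challenge = os.urandom(32)              # fresh random nonce prevents replay
proof = device_key.sign(challenge)      # requester proves it holds the key

enrolled_pub.verify(proof, challenge)   # raises InvalidSignature on failure
print("access granted")
```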

Separation is another approach to confidentiality. One extreme example of separation is to establish a completely separate network architecture for conveying and storing confidential information. The government often uses this tactic, but even large enterprises use it with “private line networks.” Something less extreme is to use some form of identification or tagging to encapsulate packets or frames so that only authorized endpoints can receive traffic. This is achieved in Ethernet by using virtual LANs (VLANs). Each frame is tagged, by the endpoint or the switch to which it connects, with a VLAN tag, and only endpoints in the same VLAN can receive traffic from that source endpoint. Higher network layer solutions include IP virtual private networks (VPNs) or, sometimes, Multiprotocol Label Switching (MPLS).
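
For example, here is what that 802.1Q tagging looks like at the frame level, sketched with the scapy packet library (assumed installed; the addresses are placeholders):

```python
from scapy.all import Dot1Q, Ether, IP

# Build a frame carrying an 802.1Q header with VLAN ID 10. A VLAN-aware
# switch delivers it only to ports that are members of VLAN 10, so
# endpoints on, say, VLAN 20 never see this traffic.
frame = Ether(src="00:11:22:33:44:55") / Dot1Q(vlan=10) / IP(dst="10.0.0.9")
print(frame.summary())  # e.g., "Ether / 802.1Q / IP ..."
```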

Threats to Confidentiality

What are the threats to confidentiality? I’ve already hinted that encryption isn’t perfect. The math on which a given encryption approach is based can sometimes be flawed, and such a flaw can be discovered decades after the original math was developed. That’s why it’s traditionally important to use cipher suites approved by appropriate government organizations such as NIST or ENISA. These organizations work with researchers to develop, select, test and validate cryptographic algorithms as sound.

However, even when an algorithm is sound, the way it’s implemented in code or hardware may have systemic errors. For example, most encryption approaches require the use of random number generators to execute certain functions. If a given code library for encryption uses a random number generator that’s biased in some way (less than truly random), the space, power and time necessary to achieve unauthorized access to encrypted data may be much less than intended.
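
A classic example of such an implementation pitfall, sketched in Python: the standard random module is a Mersenne Twister, whose internal state can be reconstructed from observed outputs, so it must never be used for keys.

```python
import random
import secrets

# Predictable: after observing 624 consecutive 32-bit outputs, an
# attacker can reconstruct the Mersenne Twister state and predict
# every "random" key this process will generate.
weak_key = random.getrandbits(128).to_bytes(16, "big")

# Appropriate: drawn from the operating system's CSPRNG.
strong_key = secrets.token_bytes(16)
```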

One threat considered imminent to current cryptography methods is quantum computing. Quantum computers enable new algorithms that reduce the power, space and time necessary to solve certain specific problems, compared with what traditional computers required. For cryptography, two such algorithms are Grover’s and Shor’s.

Grover’s algorithm. Grover’s quantum algorithm speeds up unstructured search: finding a k-bit key takes on the order of 2^(k/2) quantum operations instead of 2^k, effectively halving a symmetric key’s strength in bits. Given currently common encryption algorithms, which may provide confidentiality against decades’ worth of traditional cryptanalysis, Grover’s algorithm is only a moderate threat, until you consider that systemic weaknesses in some implementations of those encryption algorithms may result in less than ideal security.
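
In standard complexity terms (a textbook statement, not tied to any particular cipher):

```latex
% Grover search over an n-bit key space of size N = 2^n needs on the
% order of \sqrt{N} quantum evaluations:
\sqrt{N} \;=\; \sqrt{2^{\,n}} \;=\; 2^{\,n/2},
\qquad \text{e.g., AES-128 drops from } 2^{128} \text{ to roughly } 2^{64} \text{ operations.}
```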

Shor’s algorithm. Shor’s quantum algorithm is a more serious threat, specifically to asymmetric cryptography. Current asymmetric cryptography relies on mathematics assumed to be hard: factoring integers into primes (as used by the Rivest-Shamir-Adleman algorithm) or computing discrete logarithms in a mathematical group (as used in elliptic curve cryptography). Shor’s algorithm solves both problems in polynomial time, so a sufficiently large quantum computer able to execute it would break these schemes outright.
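
The asymptotics show why this is qualitatively different from Grover’s speedup. Shor’s cost is commonly cited as cubic in the modulus length n, versus the sub-exponential general number field sieve classically:

```latex
% Factoring an n-bit RSA modulus:
T_{\mathrm{Shor}}(n) \;=\; O\!\left(n^{3}\right)
\qquad\text{vs.}\qquad
T_{\mathrm{GNFS}}(n) \;=\; \exp\!\Big(\big(\tfrac{64}{9}\big)^{\!1/3}\,(n\ln 2)^{1/3}\,\big(\ln(n\ln 2)\big)^{2/3}\,(1+o(1))\Big)
```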

It’s important to understand the relationship between confidentiality and privacy. They aren’t the same. Confidentiality protects the content of a communication or data from unauthorized access; privacy extends beyond the technical controls that protect confidentiality to the business practices governing how personal data is used. Moreover, in practice, a security infrastructure may require some data to be encrypted while in motion across a network but not when at rest on a server. And while confidentiality, in a security context, is a fairly straightforward technical topic, privacy is about rights, obligations and expectations related to the use of personal data.

Why do I bring it up here? Because a breach of confidentiality may also be a breach of privacy, and because applying confidentiality tools alone does not satisfy privacy requirements in many situations. Security engineers, adversarial engineers, need to keep these things in mind and remember that today privacy violations result in real costs, in fines and brand damage, to our companies.

Wow! Going through all that was a bit more involved than I intended, so let’s finish this blog. Cable and wireless networks have implemented many confidentiality solutions. Wi-Fi, LTE and DOCSIS technology all use encryption to ensure confidentiality on the shared mediums they use to transport packets. The cipher DOCSIS technology typically uses is AES-128, which has stood the test of time. We can anticipate future advances. One is a NIST initiative to select a new lightweight cipher, something that uses fewer processing resources than AES. This is a big deal. For just a slight reduction in security (measured using a somewhat obscure metric called “security bits”), some of the candidates being considered by NIST may use half the power or space of AES-128. That may translate to lower cost and higher reliability for the endpoints that use the new ciphers.

Another area the cable industry, including CableLabs, continues to track is quantum-resistant cryptography. There are two approaches here. One is to use quantum technologies (to generate keys or transmit data) that may be inherently secure against quantum-computer-based cryptanalysis. The other is to use quantum-resistant algorithms (e.g., new math that resists cryptanalysis using Shor’s and Grover’s algorithms) implemented on traditional computers. Both approaches are showing great promise.

There’s a quick review of confidentiality. Next up? Integrity.

Want to learn more about cybersecurity? Register for our upcoming webinar: Links in the Chain: CableLabs' Primer on What's Happening in Blockchain. Block your calendars. Chain yourselves to your computers. You will not want to miss this webinar on the state of Blockchain and Distributed Ledger Technology as it relates to the Cable and Telecommunications industry.

Register NOW

Comments
Security

Vaccinate Your Network to Prevent the Spread of DDoS Attacks

Randy Levensalor
Principal Architect, Future Infrastructure Group, Office of the CTO

Oct 2, 2019

CableLabs has developed a method to mitigate Distributed Denial of Service (DDoS) attacks at the source, before they become a problem. By blocking attack traffic at the source, service providers can also help customers identify and fix compromised devices on their networks.

DDoS Is a Growing Threat

DDoS attacks and other cyberattacks cost operators billions of dollars, and the impact of these attacks continues to grow in size and scale, with some exceeding 1 Tbps. The number of Internet of Things (IoT) devices also continues to grow rapidly, many have poor security, and upstream bandwidth is ever increasing; this perfect storm drove an increase in IoT attacks of over 600 percent between 2016 and 2017 alone. With an estimated increase in the number of IoT devices from 5 billion in 2016 to more than 20 billion in 2020, we can expect the number of attacks to continue this upward trend.

As applications and services are moved to the cloud and the reliance on connected devices grows, the impact of DDoS attacks can continue to worsen.


Enabled by the Programmable Data Plane

Don’t despair! New technology brings new solutions. Instead of mitigating a DDoS attack at the target, where it’s at full strength, we can stop the attack at the source. With the use of P4, a programming language designed for managing traffic on the network, the functionality of switches and routers can be updated to provide capabilities that aren’t available in current switches. By coupling P4 programs with ASICs built to run these programs at high speed, we can do this without sacrificing network performance.

As service providers update their networks with customizable switches and edge compute capabilities, they can roll out these new features with a software update.

Comparison Against Traditional DDoS Mitigation Solutions

Feature                               Transparent Security    Typical DDoS solution
Mitigates ingress traffic             X                       X
Mitigates egress traffic              X
Deployed at network peering points    X                       X
Deployed at hub/head end              X
Deployed at customer premises         X
Requires specialized hardware                                 X
Mitigates with white box switches     X
Works with customer gateways          X
Identifies attacking device           X
Time to mitigate attack               Seconds                 Minutes
Packet header sample rate             100%                    < 0.1%

Transparent Security can mitigate ingress and egress traffic at every point in the network, from the customer premises to the core. Typical DDoS mitigation solutions are deployed only at the edge of the network to mitigate ingress attacks. This means they don’t protect the network from internal DDoS attacks and leave the network open to being weaponized against others.

Transparent Security runs on white box switches and software at the gateway. This provides a wide variety of vendor options and is compatible with open standards, such as P4. Typical solutions frequently rely on the purchase of specialized hardware called scrubbers, which aren’t feasible to deploy at the customer premises. Finally, Transparent Security can look at the header of every egress packet to quickly identify attacks originating on the service provider’s network; typical solutions sample only about 1 in 5,000 packets.
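
As a toy illustration of the egress-side idea (the prefix, threshold and function names are invented for this sketch; the real system expresses its logic in P4 on white box switches), per-packet header inspection can catch spoofed sources and floods close to where they originate:

```python
from collections import Counter
from ipaddress import ip_address, ip_network

ASSIGNED = ip_network("100.64.12.0/24")  # hypothetical subscriber prefix
RATE_LIMIT = 10_000                      # packets per window, hypothetical

counts = Counter()

def allow_egress(src_ip: str) -> bool:
    """Inspect one egress packet header; True means forward it."""
    if ip_address(src_ip) not in ASSIGNED:
        return False                     # spoofed source: drop at the source
    counts[src_ip] += 1
    return counts[src_ip] <= RATE_LIMIT  # crude per-device flood control
```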

Just the Beginning

Transparent Security is just the beginning and one of many solutions that can be deployed to improve broadband services. Through the programmable data plane, network management will become vastly smarter, and new services, from Micronets to firewall and managed router as a service, will benefit.

Join the Project

CableLabs is engaging members and vendors to define the interfaces between the Transparent Security components. This should create an interoperable solution with a broad vendor ecosystem. The SDNC-Dashboard, AE-SDNC, SDNC-Switch and Switch-AE interfaces in the diagram below have been identified for the initial iteration. Section 6 of the white paper describes these interfaces in detail.

[Diagram: Transparent Security architecture showing the SDNC-Dashboard, AE-SDNC, SDNC-Switch and Switch-AE interfaces]

The Transparent Security architecture and interface definitions will expand over time to support additional use cases. These interfaces leverage existing industry standards when possible.

You can see related projects here. You can find out more information on 10G and security here.

Read Our White Paper 

Comments