


Revisiting Security Fundamentals Part 3: Time to Examine Availability

Steve Goeringer
Distinguished Technologist, Security

Nov 12, 2019

As I discussed in parts 1 and 2 of this series, cybersecurity is complex. Security engineers rely on the application of fundamental principles to keep their jobs manageable. In the first installment of this series, I focused on confidentiality, and in the second installment, I discussed integrity. In this third and final part of the series, I’ll review availability. The application of these three principles in concert is essential to ensuring excellent user experiences on broadband.

Defining Availability

Availability, like most things in cybersecurity, is complicated. In a security context, availability of broadband service ensures timely and reliable access to and use of information by authorized users. Achieving this, of course, can be challenging. In my opinion, the topic is underrepresented among security professionals, and we have to rely on additional expertise to achieve our availability goals. The supporting engineering discipline for ensuring availability is reliability engineering. Many tomes are available that provide detailed insight on how to engineer systems to achieve desired reliability and availability.

How are the two ideas of reliability and availability different? Reliability focuses on how a system will function under specific conditions for a period of time. In contrast, availability focuses on how likely a system is to be functioning at a specified moment or interval of time. There are important additional terms to understand – quality, resiliency, and redundancy – which are addressed in the following paragraphs. Readers wanting more detail may consider reviewing some of the papers on reliability engineering at ScienceDirect.

Quality: We need to assure that our architectures and components are meeting requirements. We do that through quality assurance and reliability practices. Software and hardware vendors design, analyze, and test their solutions (both in development and then as part of shipping and integration testing) to assure they actually meet their reliability and availability requirements. When results aren’t sufficient, vendors apply process improvements (possibly including re-engineering) to bring their design, manufacturing, and delivery processes into line with the reliability and availability requirements.

Resiliency: Quality alone isn’t enough, however. We need to make sure our services are resilient – that is, that our systems will recover even when something fails (and something will fail; in fact, many things will fail over time, sometimes concurrently). There are a few key aspects we address when making our networks resilient. One is that when something fails, it does so loudly, so the operator knows something failed – either the failing element sends messages to the management system, or the systems it connects to or relies upon tell the management system that the element is failing or has failed. Another is that the system can gracefully recover, automatically restarting from the point at which it failed.

Redundancy: And, finally, we apply redundancy. That is to say, we set up the architecture so that critical components are replicated (usually in parallel). This may happen within a network element (such as having two network controllers, two power supplies, or two cooling units) with failover (and appropriate network management notifications) from one unit to another. Sometimes we’ll use clustering to both distribute load and achieve redundancy (sometimes referred to as M:N redundancy). Sometimes we’ll have redundant network elements (often employed in data centers) or multiple routes by which network elements can connect through networks (using Ethernet, Internet, or even SONET). In cases where physical redundancy is not reasonable, we can introduce redundancy across other dimensions, including time, frequency, and channel. How much redundancy a network element should employ depends on the math that balances reliability and availability to achieve your service requirements.
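
To make that math concrete, here is a minimal sketch in Python (with invented component availabilities) of how parallel redundancy improves a subsystem: if a single unit is available with probability A, then N independent units in parallel are all down at once only with probability (1 - A)^N.

```python
def parallel_availability(unit_availability: float, n_units: int) -> float:
    """Availability of N independent parallel units (service is up if any one is up)."""
    return 1.0 - (1.0 - unit_availability) ** n_units

# Illustrative numbers only: a 99%-available unit, duplicated and triplicated.
print(parallel_availability(0.99, 1))  # 0.99
print(parallel_availability(0.99, 2))  # ~0.9999
print(parallel_availability(0.99, 3))  # ~0.999999
```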

I’ve mentioned requirements several times. What requirements do I mean? A typical one – but not the only one, and not necessarily the most important one – is the Mean Time Between Failures (MTBF). This statistic represents the statistical or expected average length of time between failures of a given element of concern, typically many thousands (even millions for some critical, well-understood components) of hours. There are variations. Seagate, for example, switched to Annualized Failure Rate (AFR), which is the “probable percent of failures per year, based on the [measured or observed failures] on the manufacturer’s total number of installed units of similar type” (see Seagate’s description of AFR). The key thing here, though, is to remember that MTBF and AFR are statistical predictions based on analysis and measured performance. It’s also important to estimate and measure availability – at the software, hardware, and service layers. If your measurements aren’t hitting the targets you set for a service, then something needs to improve.

A parting note here: lots of people talk about availability in terms of the percentage of time a service (or element) is up in a year. These figures are thrown around as “how many 9’s is your availability?” For example, “My service is available at 4x9s (99.99%).” This is often a misused estimate because the user typically doesn’t know what is being measured, what it applies to (e.g., what’s included in the estimate), or even the basis for how the measurement is made. Nevertheless, it can be useful when backed by evidence, especially with statistical confidence.
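
To translate “nines” into downtime, a quick sketch helps (the 8,766-hour year is an assumption that averages in leap years; 8,760 hours is also commonly used):

```python
HOURS_PER_YEAR = 8766  # average year length, including leap years

def allowed_downtime_hours(availability: float) -> float:
    """Hours per year a service at the given availability can be down."""
    return (1.0 - availability) * HOURS_PER_YEAR

for nines in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{nines:.5f} -> {allowed_downtime_hours(nines):6.2f} hours/year")
# 0.99000 ->  87.66 hours/year
# 0.99900 ->   8.77 hours/year
# 0.99990 ->   0.88 hours/year
# 0.99999 ->   0.09 hours/year
```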

Warnings about Availability

Finally, things ARE going to fail. Your statistics will sometimes be found wanting. Therefore, it’s critical to also consider how long it will take to RECOVER from a failure. In other words, what is your repair time? There are, of course, statistics for estimating this as well. A common one is Mean Time To Repair (MTTR). This seems like a simple term, but it isn’t. Really, MTTR is a statistic that measures how maintainable or repairable systems are. Measuring and estimating repair time is critical: repair time can be the dominant contributor to unavailability.
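
A common way to tie these statistics together is the steady-state availability formula A = MTBF / (MTBF + MTTR). A short sketch with invented numbers shows how repair time can dominate:

```python
def steady_state_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Classic steady-state availability: uptime / (uptime + downtime)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Same MTBF, very different repair times (illustrative values only).
print(steady_state_availability(10_000, 1))   # ~0.9999 (about four 9's)
print(steady_state_availability(10_000, 24))  # ~0.9976 (under three 9's)
```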

So… why don’t we just make everything really reliable and all of our services highly available? Ultimately, this comes down to two things. One, you can’t predict all things sufficiently. This weakness is particularly important in security and is why availability is included as one of the three security fundamentals. You can’t predict well or easily how an adversary is going to attack your system and disrupt service. When the unpredictable happens, you figure out how to fix it and update your statistical models and analysis accordingly, along with how you measure availability and reliability.

The second thing is simply economics. High availability is expensive – it can be really expensive. Years ago, I did a lot of work engineering and architecting metropolitan, regional, and nationwide optical networks with Dr. Jason Rupe (one of my peers at CableLabs today). We found, through lots of research, that the general rule of thumb was that for each additional “9” of availability, you could expect the cost of service to increase by around 2.5x on typical networks. Sounds extreme, doesn’t it? The availability of a private line or Ethernet circuit between two points (regional or national) is typically quoted at around 99%. That’s a lot of downtime (over 80 hours a year) – and it won’t all happen at a predictable time within the year. Getting that down to around 9 hours a year, or 99.9% availability, will usually cost 2.5x as much. Of course, architecture and technology do matter; this is just my personal experience. What’s the primary driver of the cost? Redundancy: more equipment on additional paths of connectivity between that equipment.

Availability Challenges

There are lots of challenges in designing a highly available, cost-effective access network. Redundancy is challenging. It’s implemented where economically reasonable, particularly at the CMTSs, routers, switches, and other server elements at the hub, headend, or core network. It’s a bit harder to achieve in the HFC plant. So, designers and engineers tend to focus more on the reliability of components and software, ensuring that CMs, nodes, amplifiers, and all the other elements that make DOCSIS® work are dependable. Finally, we do measure our networks. A major tool for tracking and analyzing network failure causes and for maximizing the time between service failures is DOCSIS® Proactive Network Maintenance (PNM). PNM is used to identify problems in the physical RF plant, including passive and active devices (taps, nodes, amplifiers), connectors, and the coax cable.

From a strictly security perspective, what can be done to improve the availability of services? Denial-of-service attacks are typically monitored and mitigated at the ingress points to networks (border routers) through scrubbing. Another major tool is ensuring authorized access through authentication and access controls.


Availability Strategies

What are the strategies and tools reliability and security engineers might apply?

  • Model your system and assess availability diligently. Include traditional systems reliability engineering faults and conditions, but also include security faults and attacks.
  • Execute good testing prior to customer exposure. Said another way, implement quality control and process improvement practices.
  • When redundancy is impractical, reliability of elements becomes the critical availability design consideration.
  • Measure and improve. PNM can significantly improve availability. But measure what matters.
  • Partner with your suppliers to assure reliability, availability, and repairability throughout your network, systems, and supply chains.
  • Leverage PNM fully. Solutions like DOCSIS® create a separation between a network problem and a service problem. PNM lets operators take advantage of that difference to fix network problems before they become service problems.
  • Remember that repair time can be a major factor in the overall availability and customer experience.

Availability in Security

It’s important that we consider availability in our security strategies. Security engineers often get too focused on threats caused by active adversaries. We must also include other considerations that disrupt the availability of experiences to subscribers. Availability is a fundamental security component and—like confidentiality and integrity—should be included in any security-by-design strategy.

You can read more about 10G and security by clicking below.


Learn More About 10G


Revisiting Security Fundamentals Part 2: Integrity

Steve Goeringer
Distinguished Technologist, Security

Oct 24, 2019

Let’s revisit the fundamentals of security during this year’s security awareness month – part 2: Integrity.

As I discussed in Part 1 of this series, cybersecurity is complex. Security engineers rely on the application of fundamental principles to keep their jobs manageable. The first blog focused on confidentiality. This second part will address integrity. The third and final part of the series will review availability. Application of these three principles in concert is essential to ensuring excellent user experiences on broadband.

Nearly everyone who uses broadband has some awareness of confidentiality, though most may think of it exclusively as enabled by encryption. That’s not a mystery – our browsers even tell us when a session is “secure” (meaning the session they have initiated with a given server is at least using HTTPS, which is encrypted). Integrity is a bit more obscure and less well known. It’s also less widely implemented, and when it is implemented, not always well.

Defining Integrity

In its special publication “An Introduction to Information Security,” NIST defines integrity as “a property whereby data has not been altered in an unauthorized manner since it was created, transmitted, or stored.” This definition is a good starting place, but it can be extended in today’s cybersecurity context. Integrity needs to be applied not only to data but also to the hardware and software systems that store and process that data and the networks that connect those systems. Ultimately, integrity is about proving that things are as they should be and that they have not been changed – intentionally, inadvertently, or accidentally (as if by magic) – in unexpected or unauthorized ways.

How is this done? Well, that answer depends on what you are applying integrity controls to. (Again, this blog post isn’t intended to be an in-depth tutorial on the details but a simple update and overview.) The simplest and most well-known approach to ensuring integrity is to use a signature. Most people are familiar with this from signing a document or writing a check. And most people know that the bank, or whomever else we’re signing a document for, knows that signatures are not perfect, so you often have to present an ID (passport, driver’s license, whatever) to prove that you are the party to which your signature attests on that document or check.

We can also implement similar steps in information systems, although the process is a bit different. We produce a signature of data by using math; in fact, integrity is a field of cryptography that complements encryption. A signature comprises two parts, or steps. First, data is run through a mathematical function called hashing. Hashing is a one-way process that reduces a large piece of data to a few bits (128–256 bits is typical) in a way that is computationally difficult to reverse. The result is often referred to as a digest, and a digest is unlikely to be duplicated from different source data (when that happens, we call it a collision). This alone can be useful and is used in many ways in information systems. But it doesn’t attest the source of the data or the authenticity of the data; it just shows whether the data has been changed. If we then encrypt the digest, perhaps using asymmetric cryptography supported by a public key infrastructure, we produce a signature. That signature can then be validated through a cryptographic challenge and response. This is largely equivalent to being asked to show your ID when you sign a check.
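
As a concrete illustration, here is a minimal hash-then-sign sketch in Python. It assumes the third-party cryptography package is installed; the message, key size, and padding choices are examples only, not a recommendation:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

message = b"configuration file contents"

# A digest alone detects change but does not attest the source.
digest = hashes.Hash(hashes.SHA256())
digest.update(message)
print(digest.finalize().hex())

# Signing the data binds it to a key holder; anyone with the public key can verify.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# Raises InvalidSignature if either the message or the signature was altered.
private_key.public_key().verify(signature, message, pss, hashes.SHA256())
```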

One thing to be mindful of is that encryption doesn’t ensure integrity. Although an adversary who intercepts an encrypted message may not be able to read it, they may be able to alter the encrypted text and send it on its way to the intended recipient. That alteration may be decrypted as valid. In practice this is hard, because without knowledge of the original message, any changes are likely to produce gibberish. However, there are attacks in which the structure of the original message is known. Some ciphers do include integrity assurances as well, but not all of them, so implementers need to consider what is best for a given solution.
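
The sketch below (again using the cryptography package, with invented keys and message) shows why: with a malleable mode such as AES-CTR and no integrity check, an attacker who knows the message layout can flip ciphertext bits without ever decrypting. An authenticated mode such as AES-GCM, or a separate HMAC, would cause the tampered message to be rejected.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, counter_block = os.urandom(32), os.urandom(16)
plaintext = b"PAY $0100 TO ALICE"

encryptor = Cipher(algorithms.AES(key), modes.CTR(counter_block)).encryptor()
ciphertext = bytearray(encryptor.update(plaintext) + encryptor.finalize())

# The attacker knows byte 5 holds the first digit of the amount and flips it
# from '0' to '9' by XORing the ciphertext -- CTR mode is just a keystream XOR.
ciphertext[5] ^= ord("0") ^ ord("9")

decryptor = Cipher(algorithms.AES(key), modes.CTR(counter_block)).decryptor()
print(decryptor.update(bytes(ciphertext)) + decryptor.finalize())  # b'PAY $9100 TO ALICE'
```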

Approaches to Integrity

Integrity is applied somewhat differently to data in motion, data at rest, systems, software, and even supply chains. Here’s a brief summary of the tools and approaches for each area:

  • Information in motion: How is the signature scheme above applied to transmitted data? The most common means uses a process very similar to what is described above. A Hash-based Message Authentication Code (HMAC) creates a digest of a packet keyed with a secret shared between sender and receiver; the receiver recomputes the digest with the same secret to prove that the packet came from an authorized source and was not altered in transit. One old but still relevant description of HMAC is RFC 2104, available from the IETF. (A short HMAC sketch follows this list.)
  • Information at rest: In many ways, assuring the integrity of files on a storage server or a workstation is more challenging than assuring the integrity of transmitted information. Usually, storage is shared by many organizations or departments. What secret keys should be used to produce a signature of those files? Sometimes, things are simplified. Access controls can be used to ensure only authorized parties can access a file. When access controls are effective, perhaps hashing of the data file alone is sufficient to prove integrity. Again, some encryption schemes can include integrity protection, though the key-management question noted above remains a challenge there. Most storage solutions, both proprietary and open source, provide a wide range of integrity protection options, and it can be challenging for the security engineer to architect the best solution for a given application.
  • Software: Software is, of course, a special type of information. And so, the ideas of how to apply integrity protections to information at rest can apply to protecting software. However, how software is used in modern systems with live builds adds additional requirements. Namely, this means that before a given system uses software, that software should be validated as being from an authorized source and that it has not been altered since being provided by that source. The same notion of producing a digest and then encrypting the digest to form a signature applies, but that signature needs to be validated before the software is loaded and used. In practice, this is done very well in some ecosystems and either done poorly or not at all in other systems. In cable systems, we use a process referred to as Secure Software Download to ensure the integrity of firmware downloaded to cable modems. (See section 14 of Data-Over-Cable Service Interface Specifications 3.1)
  • Systems: Systems are composed of hardware and software elements, yet the overall operation of the hardware tends to be governed by configurations and settings stored in files and software. If the files and software are changed, the operation of the system will be affected. Consequently, the integrity of the system should be tracked and periodically evaluated. Integrity of the system can be tracked through attestation – basically, producing a digest of the entire system, storing that in protected hardware, and reporting it to secure attestation servers. Any changes to the system can be checked to ensure they were authorized. The processes for doing this are well documented by the Trusted Computing Group. Another process well codified by the Trusted Computing Group is Trusted Boot. Trusted boot uses secure modules included in hardware to perform a verified launch of an OS or virtual environment using attestation.
  • Supply chain: A recent focus area for integrity controls has been supply chains. How do you know where your hardware or software is coming from? Is the system you ordered the system you received? Supply chain provenance can be attested using a wide range of tools, and application of distributed ledger or blockchain technologies is a prevalent approach.
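
Here is a minimal HMAC sketch using Python’s standard library. The shared key and packet contents are placeholders; in a real protocol the key would come from a key-management system rather than a literal in code:

```python
import hashlib
import hmac

shared_key = b"example-shared-secret"   # placeholder; never hard-code real keys
packet = b"payload bytes to protect"

# The sender attaches a keyed digest to the packet.
tag = hmac.new(shared_key, packet, hashlib.sha256).digest()

# The receiver recomputes the digest and compares in constant time.
expected = hmac.new(shared_key, packet, hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))  # True only if key and packet both match
```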

Threats to Integrity

What are the threats to integrity? One example that has impacted network operators and their users repeatedly is changing the DNS settings on gateways. If an adversary can change the DNS server on a gateway to a server they control (incorporation of strong access controls minimizes this risk), then they can selectively redirect DNS queries to spoofed hosts that look like authentic parties (e.g., banks, credit card companies, charity sites) and capture customers’ credentials. The adversary can then use those credentials to access the legitimate site and do whatever that allows (e.g., empty bank accounts). This can also be done by altering the DNS query in motion at a compromised router or other server through which the query passes (HMAC or encryption with integrity protection would prevent this). Software attacks can occur at the source as well as at intermediate points in the supply chain. Tampered software is a prevalent way malware is introduced to endpoints and can be very hard to detect because the software appears legitimate. (Consider the Wired article, “Supply Chain Hackers Snuck Malware Into Videogames.”)

The Future of Integrity

Cable technology has already addressed many integrity threats. As mentioned above, DOCSIS technology already includes support for Secure Software Download to provide integrity verification of firmware. This addresses both software supply chain provenance and tampering of firmware. Many of our management protocols are also protected by HMAC. Future controls will include trusted boot and hardened protection of our encryption keys (also used for signatures). We are designing solutions for virtualized service delivery, and integrity controls are pervasively included across the container and virtual machine architectures being developed.

Adding integrity controls to our confidentiality tools provides defense in depth. Integrity is a fundamental security component and should be included in any security-by-design strategy. You can read more about security by clicking below.


Learn More About 10G


False Base Station or IMSI Catcher: What You Need to Know

Tao Wan
Principal Architect, Security

Oct 23, 2019

You might have heard of False Base Station (FBS), Rogue Base Station (RBS), International Mobile Subscriber Identifier (IMSI) Catcher or Stingray. All four of these terms refer to a tool, consisting of hardware and software, that allows for passive and active attacks against mobile subscribers over radio access networks (RANs). The attacking tool (referred to as FBS hereafter) exploits security weaknesses in mobile networks from 2G (second generation) to 3G, 4G and 5G. (Certain improvements have been made in 5G, which I’ll discuss later.)

In mobile networks of all generations, cellular base stations periodically broadcast information about the network. Mobile devices or user equipment (UE) listen to these broadcasting messages, select an appropriate cellular cell and connect to the cell and the mobile network. Because of practical challenges, broadcasting messages aren’t protected for confidentiality, authenticity or integrity. As a result, broadcasting messages are subject to spoofing or tampering. Some unicasting messages aren’t protected either, also allowing for spoofing. The lack of security protection of mobile broadcasting messages and certain unicasting messages makes FBS possible.

An FBS can take various forms, such as a single integrated device or multiple separate components. In the latter form [1], an FBS usually consists of a wireless transceiver, a laptop and a cellphone. The wireless transceiver broadcasts radio signals to impersonate legitimate base stations. The laptop connects to the transceiver (e.g., via a USB interface) and controls what to broadcast as well as the strength of the broadcasting signal. The cellphone is often used to capture broadcasting messages from legitimate base stations and feed them into the laptop to simplify the configuration of the transceiver. In either form, an FBS can be made compact with a small footprint, allowing it to be left unnoticed in a location (e.g., mounted to a street pole) or carried conveniently (e.g., inside a backpack).

An FBS often broadcasts the same network identifier as a legitimate network but with a stronger signal to lure users away. How much stronger does an FBS’s signal need to be to succeed? The answer to that question wasn’t well understood until recently. According to the experiments in the study [2], an FBS’s signal must be more than 30 dB stronger than the legitimate signal to have any success. When the signal is 35 dB stronger, the success rate is about 80 percent. When it’s 40 dB stronger, the success rate increases to 100 percent. In these experiments, the FBS broadcast the same messages on the same frequency and band as the legitimate cell. Another strategy taken by an FBS is to broadcast the same network identifier but with a different tracking area code, tricking the UE into believing that it has entered a new tracking area so that it switches to the FBS. This strategy can make it easier to lure the UE to the FBS and should reduce the signal strength the FBS needs to succeed. However, the exact signal strength requirement in this case wasn’t measured in the experiments.

Once camped at an FBS, a UE is subject to both passive and active attacks. In passive attacks, an adversary only listens to radio signals from both the UE and legitimate base stations without interfering with the communication (e.g., with signal injection). Consequences from passive attacks include—but are not limited to—identity theft and location tracking. In addition, eavesdropping often forms a stepping stone toward active attacks, in which an adversary also injects signals. An active attacker can be a man-in-the-middle (MITM) or man-on-the-side (MOTS) attacker.

In MITM attacks, the attacker is on the path of the communication between a UE and another entity and can do pretty much anything to the communication, such as reading, injecting, modifying and deleting messages. One such attack is to downgrade a UE to 2G with weak or null ciphers to allow for eavesdropping. Another example of an MITM attack is aLTEr [3], which only tampers with DNS requests in LTE networks, without any downgrading or tampering of control messages. Although user plane data is encrypted in LTE, it’s still subject to tampering if the encryption (e.g., AES counter mode) is malleable due to the lack of integrity protection.

In MOTS attacks, an attacker doesn’t have the same amount of control over communication as with an MITM attack. More often, the attacker injects messages to obtain information from the UE (e.g., stealing the IMSI by an identity request), send malicious messages to the UE (e.g., phishing SMS) or hijack services from a victim UE (e.g., answering a call on behalf of the UE [4]). A MOTS attacker, without luring a UE to connect to it, can still interfere with existing communication—for example, by injecting slightly stronger signals that are well timed to overwrite a selected part of a legitimate message [2].

FBS has been a security threat to all generations of mobile networks since 2G. Mitigation of FBS was studied by 3GPP in the past, but without success, due to practical constraints such as deployment challenges in cryptographic key management and difficulty in timing synchronization. In 5G release 15 [5], network-side detection of FBS is specified, which can help mitigate the risk, albeit without preventing FBS. 5G release 15 also introduces public key encryption of the subscription permanent identifier (SUPI) before it is sent out from the UE, which—if implemented—makes it difficult for an FBS to steal the SUPI. In 5G release 16 [6], FBS is being studied again. Various solutions have been proposed, including integrity protection of broadcasting, paging and unicasting messages. Other detection approaches have also been proposed.

Our view is that FBS arises mainly from the lack of integrity protection of broadcasting messages. Thus, a fundamental solution is to protect broadcasting messages with integrity (e.g., using public key based digital signatures). Although challenges remain with such a solution, we don’t believe those challenges are insurmountable. Other solutions are based on the signatures of attacks, which may help but can eventually be bypassed as attackers evolve their techniques and behaviors. We look forward to agreement from 3GPP SA3 on a long-term solution that can fundamentally solve the problem of FBS in 5G.

To learn more about 5G in the future, subscribe to our blog.


SUBSCRIBE TO OUR BLOG

References

[1] Li, Zhenhua, Weiwei Wang, Christo Wilson, Jian Chen, Chen Qian, Taeho Jung, Lan Zhang, Kebin Liu, Xiangyang Li, and Yunhao Liu. “FBS-Radar: Uncovering Fake Base Stations at Scale in the Wild.” In Proceedings of ISOC Symposium on Network and Distributed Systems Security (NDSS), February 2017.

[2] Hojoon Yang, Sangwook Bae, Mincheol Son, Hongil Kim, Song Min Kim, and Yongdae Kim. “Hiding in Plain Signal: Physical Signal Overshadowing Attack on LTE.” In Proceedings of 28th USENIX Security Symposium (USENIX Security), August 2019.

[3] Rupprecht D, Kohls K, Holz T, and Popper C. “Breaking LTE on Layer Two.” In Proceedings of IEEE Symposium on Security & Privacy (S&P), May 2019.

[4] Golde N, Redon K, and Seifert JP. “Let Me Answer That for You: Exploiting Broadcast Information in Cellular Networks.” In Proceedings of the 22nd USENIX Security Symposium (USENIX Security), August 2013.

[5] 3GPP TS 33.501, “Security Architecture and Procedures for 5G System” (Release 15), v15.5.0, June 2019.

[6] 3GPP TR 33.809, “Study on 5G Security Enhancement against False Base Stations” (Release 16), v0.5.0, June 2019.


Revisiting Security Fundamentals

Steve Goeringer
Distinguished Technologist, Security

Oct 17, 2019

It’s Cybersecurity Awareness Month—time to study up!

Cybersecurity is a complex topic. The engineers who address cybersecurity must not only be security experts; they must also be experts in the technologies they secure. In addition, they have to understand the ways that the technologies they support and use might be vulnerable and open to attack.

Another layer of complexity is that technology is always evolving. In parallel with that evolution, our adversaries are continuously advancing their attack methods and techniques. How do we stay on top of that? We must be masters of security fundamentals. We need to be able to start with foundational principles and extend our security tools, techniques and methods from there: Make things no more complex than necessary to ensure safe and secure user experiences.

In celebration of Cybersecurity Awareness Month, I’d like to devote a series of blog posts to address some basics about security and to provide a fresh perspective on why these concepts remain important areas of focus for cybersecurity.

Three Goals

At the most basic level, the three primary goals of security for cable and wireless networks are to ensure the confidentiality, integrity and availability of services. NIST documented these concepts well in its special publication, “An Introduction to Information Security.”

  • Confidentiality ensures that only authorized users and systems can access a given resource (e.g., network interface, data file, processor). This is a pretty easy concept to understand: The most well-known confidentiality approach is encryption.
  • Integrity, which is a little more obscure, guards against unauthorized changes to data and systems. It also includes the idea of non-repudiation, which means that the source of a given message (or packet) is known and cannot be denied by that source.
  • Availability is the uncelebrated element of the security triad. It’s often forgotten until failures in service availability are recognized as being “a real problem.” This is unfortunate because engineering to ensure availability is very mature.

In Part 1 of this series, I want to focus on confidentiality. I’ll discuss integrity and availability in two subsequent blogs.

As I mentioned, confidentiality is a security function that most people are aware of. Encryption is the most frequently used method to assure confidentiality. I’m not going to go into a primer about encryption. However, it is worth talking about the principles. Encryption is about applying math using space, power and time to ensure that only parties with the right secret (usually a key) can read certain data. Ideally, the math used should require much greater space, power or time for an unauthorized party without the right secret to read that data. Why does this matter? Because encryption provides confidentiality only as long as the math used is sound and the corresponding amount of space, power and time required for adversaries to read the data is impractical. That is often a good assumption, but history has shown that, over time, a given encryption solution will eventually become insecure. So, it’s a good idea to apply other approaches to provide confidentiality as well.

What are some of those approaches? Ultimately, the other solutions prevent access to the data being protected. The notion is that if you prevent access (either physically or logically) to the data being protected, then it can’t be decrypted by unauthorized parties. Solutions in this area fall primarily into two strategies: access controls and separation.

Access controls validate that requests to access data or use a resource (like a network) come from authorized sources (identified using network addresses and other credentials). For example, an access control list (ACL) is used in networks to restrict resource access to specific IP or MAC addresses. As another example, a cryptographic challenge and response (often enabled by public key cryptography) might be used to ensure that the requesting entity has the “right credentials” to access data or a resource. One method we all use every day is passwords. Every time we “log on” to something, like a bank account, we present our username (identification) and our (hopefully) secret password.
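
As a small illustration of the first strategy, here is a sketch of an IP-based access control check in Python. The prefixes are invented, and a production ACL would live in router or firewall configuration rather than application code:

```python
import ipaddress

# Hypothetical allow-list: only these prefixes may reach a management interface.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/24"),
    ipaddress.ip_network("192.168.1.0/24"),
]

def is_permitted(source_ip: str) -> bool:
    """Return True if the source address falls inside an allowed prefix."""
    address = ipaddress.ip_address(source_ip)
    return any(address in network for network in ALLOWED_NETWORKS)

print(is_permitted("10.0.0.17"))    # True
print(is_permitted("203.0.113.9"))  # False
```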

Separation is another approach to confidentiality. One extreme example of separation is to establish a completely separate network architecture for conveying and storing confidential information. The government often uses this tactic, but even large enterprises use it with “private line networks.” Something less extreme is to use some form of identification or tagging to encapsulate packets or frames so that only authorized endpoints can receive traffic. This is achieved in Ethernet by using virtual LANs (VLANs). Each frame is tagged by the endpoint or the switch to which it connects with a VLAN tag, and only endpoints in the same VLAN can receive traffic from that source endpoint. Higher network layer solutions include IP Virtual Private Networks (VPNs) or, sometimes, Multiprotocol Label Switching (MPLS).

Threats to Confidentiality

What are the threats to confidentiality? I’ve already hinted that encryption isn’t perfect. The math on which a given encryption approach is based can sometimes be flawed. This type of flaw can be discovered decades after the original math was developed. That’s why it’s traditionally important to use cipher suites approved by appropriate government organizations such as NIST or ENISA. These organizations work with researchers to develop, select, test and validate given cryptographic algorithms as being provably sound.

However, even when an algorithm is sound, the way it’s implemented in code or hardware may have systemic errors. For example, most encryption approaches require the use of random number generators to execute certain functions. If a given code library for encryption uses a random number generator that’s biased in some way (less than truly random), the space, power and time necessary to achieve unauthorized access to encrypted data may be much less than intended.

One threat considered imminent to current cryptography methods is quantum computing. Quantum computers enable new algorithms that reduce the power, space and time necessary to solve certain specific problems, compared with what traditional computers required. For cryptography, two such algorithms are Grover’s and Shor’s.

Grover’s algorithm. Grover’s quantum algorithm addresses the length of time (number of computations) necessary to do unstructured search. In effect, it halves the effective key length of a symmetric cipher: finding the secret (the key) needed to read a given piece of encrypted data takes on the order of the square root of the number of guesses a classical search would require. Given current commonly used encryption algorithms, which may provide confidentiality against two decades’ worth of traditional cryptanalysis, Grover’s algorithm is only a moderate threat—until you consider that systemic weaknesses in some implementations of those encryption algorithms may result in less than ideal security.

Shor’s algorithm. Shor’s quantum algorithm is a more serious threat, specifically to asymmetric cryptography. Current asymmetric cryptography relies on mathematics that assumes it’s hard to factor integers into primes (as used by the Rivest-Shamir-Adleman algorithm) or to find a secret number within a mathematical function or field (as used in elliptic curve cryptography). Shor’s quantum algorithm makes very quick work of these problems; in fact, it may be possible to break them nearly instantly given a sufficiently large quantum computer able to execute the algorithm.

It’s important to understand the relationship between confidentiality and privacy. They aren’t the same. Confidentiality protects the content of a communication or data from unauthorized access, whereas privacy goes beyond the technical controls that protect confidentiality to the business practices governing how personal data is used. Moreover, in practice, a security infrastructure may require some data to be encrypted while in motion across a network but perhaps not when at rest on a server. Also, while confidentiality, in a security context, is a fairly straightforward technical topic, privacy is about rights, obligations and expectations related to the use of personal data.

Why do I bring it up here? Because a breach of confidentiality may also be a breach of privacy. And because application of confidentiality tools alone does not satisfy privacy requirements in many situations. Security engineers – adversarial engineers – need to keep these things in mind and remember that today privacy violations result in real costs in fines and brand damage to our companies.

Wow! Going through all that was a bit more involved than I intended – let’s finish this blog. Cable and wireless networks have implemented many confidentiality solutions. Wi-Fi, LTE, and DOCSIS technology all use encryption to ensure confidentiality on the shared media they use to transport packets. The cipher DOCSIS technology typically uses is AES-128, which has stood the test of time. We can anticipate future advances. One is a NIST initiative to select a new lightweight cipher – something that uses fewer processing resources than AES. This is a big deal. For just a slight reduction in security (measured using a somewhat obscure metric called “security bits”), some of the candidates being considered by NIST may use half the power or space of AES-128. That may translate to lower cost and higher reliability for endpoints that use the new ciphers.

Another area the cable industry, including CableLabs, continues to track is quantum-resistant cryptography. There are two approaches here. One is to use quantum technologies (to generate keys or transmit data) that may be inherently secure against quantum-computer-based cryptanalysis. The other is to use quantum-resistant algorithms (e.g., new math that resists cryptanalysis using Shor’s and Grover’s algorithms) implemented on traditional computing platforms. Both approaches are showing great promise.

There’s a quick review of confidentiality. Next up? Integrity.

Want to learn more about cybersecurity? Register for our upcoming webinar: Links in the Chain: CableLabs' Primer on What's Happening in Blockchain. Block your calendars. Chain yourselves to your computers. You will not want to miss this webinar on the state of Blockchain and Distributed Ledger Technology as it relates to the Cable and Telecommunications industry.

Register NOW


Vaccinate Your Network to Prevent the Spread of DDoS Attacks

Randy Levensalor
Lead Architect, Software Research & Development

Oct 2, 2019

CableLabs has developed a method to mitigate Distributed Denial of Service (DDoS) attacks at the source, before they become a problem. By blocking these devices at the source, service providers can help customers identify and fix compromised devices on their network.

DDoS Is a Growing Threat

DDoS attacks and other cyberattacks cost operators billions of dollars, and the impact of these attacks continues to grow in size and scale, with some exceeding 1 Tbps. The number of Internet of Things (IoT) devices also continues to grow rapidly, many have poor security, and upstream bandwidth is ever increasing; this perfect storm has led to exponential increases in IoT attacks, by over 600 percent between 2016 and 2017 alone. With an estimated increase in the number of IoT devices from 5 billion in 2016 to more than 20 billion in 2020, we can expect the number of attacks to continue this upward trend.

As applications and services are moved to the cloud and the reliance on connected devices grows, the impact of DDoS attacks can continue to worsen.


Enabled by the Programmable Data Plane

Don’t despair! New technology brings new solutions. Instead of mitigating a DDoS attack at the target, where it’s at full strength, we can stop the attack at the source. With the use of P4, a programming language designed for managing traffic on the network, the functionality of switches and routers can be updated to provide capabilities that aren’t available in current switches. By coupling P4 programs with ASICs built to run these programs at high speed, we can do this without sacrificing network performance.
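
The data-plane program itself would be written in P4 and compiled for the switch, but the idea of watching egress traffic at the source can be sketched in a few lines of Python. This toy example (with a made-up threshold and packet records) flags a local device whose outbound packet rate jumps far above normal, which is the kind of per-device visibility the approach relies on:

```python
from collections import Counter

PACKETS_PER_SECOND_LIMIT = 5_000  # made-up threshold for a typical home device

def flag_attackers(egress_packets, window_seconds=1.0):
    """egress_packets: iterable of (source_mac, dest_ip) tuples seen in one window."""
    per_source = Counter(src for src, _dst in egress_packets)
    return {src for src, count in per_source.items()
            if count / window_seconds > PACKETS_PER_SECOND_LIMIT}

# Example: one gateway client sending far more than its neighbors.
sample = [("aa:bb:cc:00:00:01", "203.0.113.9")] * 20_000 + \
         [("aa:bb:cc:00:00:02", "198.51.100.7")] * 40
print(flag_attackers(sample))  # {'aa:bb:cc:00:00:01'}
```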

As service providers update their networks with customizable switches and edge compute capabilities, they can roll out these new features with a software update.

Comparison Against Traditional DDoS Mitigation Solutions

Feature                              | Transparent Security | Typical DDoS solution
Mitigates ingress traffic            | X                    | X
Mitigates egress traffic             | X                    |
Deployed at network peering points   | X                    | X
Deployed at hub/head end             | X                    |
Deployed at customer premises        | X                    |
Requires specialized hardware        |                      | X
Mitigates with white box switches    | X                    |
Works with customer gateways         | X                    |
Identifies attacking device          | X                    |
Time to mitigate attack              | Seconds              | Minutes
Packet header sample rate            | 100%                 | < 0.1%

Transparent Security can mitigate ingress and egress traffic at every point in the network, from the customer premises to the core of the network. Typical DDoS mitigation solutions are deployed only at the edge of the network to mitigate ingress attacks. This means that they don’t protect the network from internal DDoS attacks and can allow the operator’s network to be weaponized.

Transparent Security runs on white box switches and software at the gateway. This provides a wide variety of vendor options and is compatible with open standards, such as P4. Typical solutions frequently rely on the purchase of specialized hardware called scrubbers, which isn’t feasible to deploy at the customer premises. Finally, Transparent Security can look at the header of every egress packet to quickly identify attacks originating on the service provider’s network. Typical solutions sample only 1 in 5,000 packets.

Just the Beginning

Transparent Security is just the beginning, and it is one of many solutions that can be deployed to improve broadband services. Through the programmable data plane, network management will become vastly smarter, and new services will benefit, from Micronets to firewall and managed router as a service.

Join the Project

CableLabs is engaging members and vendors to define the interfaces between the transparent security components. This should create an interoperable solution with a broad vendor ecosystem. The SDNC-Dashboard, AE-SDNC, SDNC-Switch and Switch-AE interfaces in the diagram below have been identified for the initial iteration. Section 6 of the white paper describes these interfaces in detail.

[Diagram: Transparent Security architecture showing the SDNC-Dashboard, AE-SDNC, SDNC-Switch and Switch-AE interfaces]

The Transparent Security architecture and interface definitions will expand over time to support additional use cases. These interfaces leverage existing industry standards when possible.

You can see related projects here, and you can find more information on 10G and security here.

Read Our White Paper 


CableLabs® Micronets Security Reference Code Is Now Open Source

Darshak Thakore
Lead Software Architect

Mar 1, 2019

In November, we introduced CableLabs micronets, a next-generation on-premise networking platform focused on providing adaptive security for all devices connecting to home or small business networks. Micronets uses dynamic micro-segmentation to manage the connectivity to each device and is designed to provide seamless and transparent security without burdening end users with the technical aspects of configuring and maintaining the network. Micronets is also a foundational piece of the cable industry’s recently announced 10G vision – supporting increased security for home and small business users.

Today, we are pleased to announce the release of the Micronets reference implementation as open source software. You’ll find links to files and details on how to build and deploy the different Micronets components here. CableLabs plans to continue to develop and add new features to the open source reference implementation, and we also welcome contributions from the broader open source community.

Why Open Source?

Here at CableLabs, we believe in the importance of sharing our code to accelerate the adoption of new ideas and to stimulate industry-wide innovation. In this particular case, there was an even stronger sense of urgency to do so.

The rapid and growing proliferation of Internet-connected devices, or the “Internet of Things” (IoT), has ushered in a new era of connectivity that gives us unprecedented control over our environment at home and at work. Unfortunately, along with all the benefits comes significant risk to end users and the broader Internet alike. Vulnerable IoT devices are the fuel for botnets and other distributed threats. Compromised IoT devices are used to launch distributed denial of service (DDoS) attacks, spread ransomware, send spam and, more generally, enable the theft of personal or sensitive information. Moreover, vulnerable IoT devices may also create the risk of physical harm, as many connected devices now provide a bridge between the cyber and physical worlds.

CableLabs and the broader IoT ecosystem are committed to driving improved IoT security, but such efforts alone are not enough to address the risks of insecure IoT. We must also develop network technologies, such as Micronets, to help mitigate those risks. There will always be legacy devices that don’t meet current IoT security best practices and, potentially, manufacturers that don’t follow best practices.

We believe addressing the risks of insecure IoT is a shared responsibility. By releasing the reference code as open source, we’re hoping to accelerate the adoption of micronets and encourage others to build upon our work.

More on Micronets and How it Fits into Our Security Agenda

The Micronets platform leverages advanced mechanisms like device fingerprinting and artificial intelligence to enable real-time detection and quarantining of compromised IoT devices, minimizing the risk to other devices on the local network and to the broader Internet. Micronets can also provide enhanced security for high-value or sensitive devices, further reducing the risk of compromise for these devices and applications. Despite the complex technology under the hood, this self-organizing system is geared toward the everyday consumer and is very easy to use. For a deeper dive into Micronets’ security features, please download the Micronets white paper here. Missed our recent public webinar? You can find it on YouTube here.

Micronets is just one of many active security projects at CableLabs. For instance, we’re also working on advancing additional cyber-attack mitigation technologies, such as DDoS information sharing, IP-address spoofing prevention and more, as well as actively contributing to industry and government efforts to drive increased IoT security. And although there’s no single solution that protects every network, we will continue working with our members and vendors and various industry organizations to develop better tools that make our world a safer place—one network at a time.

Click below for details on how to build and deploy the different Micronets components. 


Micronets Developer Documentation


Comparing 4G and 5G Authentication: What You Need to Know and Why

Tao Wan
Principal Architect, Security

Feb 6, 2019

5G, the fifth generation of cellular mobile communication, is among the hottest technologies today and is under development by 3GPP. Besides providing faster speeds, higher bandwidth and lower latency, 5G also supports more use cases, such as:

  • Enhanced Mobile Broadband (eMBB)
  • Massive Machine Type Communications (mMTC)
  • Ultra Reliable Low Latency Communications (uRLLC)

With global deployment imminent, privacy and security protection are of critical importance to 5G. Calls, messaging, and mobile data must be protected with authentication, confidentiality, and integrity. Authentication and key agreement form the cornerstone of mobile communication security by providing mutual authentication between users and the network, as well as cryptographic key establishment that is required to protect both signaling messages and user data. Therefore, each generation of cellular networks defines at least one authentication method. For example, 4G defines EPS-AKA. 5G defines three authentication methods: 5G-AKA, EAP-AKA’, and EAP-TLS. Network practitioners are asking what motivates the adoption of the new 5G authentication methods, how they differ from 4G authentication, and how they differ from each other.

To answer these questions, CableLabs studied and compared 4G and 5G authentication. Our analysis shows that 5G authentication improves on 4G EPS-AKA authentication in a number of areas. For instance, 5G offers a unified authentication framework for supporting more use cases, better UE identity protection, enhanced home network control, and additional key separation in key derivation. The study also points out that 5G authentication is not without weaknesses and requires continuous evolution.

For more information, please download the “A Comparative Introduction of 4G and 5G Authentication” white paper. Be sure to contact Tao Wan if you have questions.




Security for Blockchains and Distributed Ledgers

Brian Scriber
Vice President, Security Technologies

Jan 10, 2019

Empirical evidence reveals a harmful belief that blockchains and distributed ledger technologies (DLTs) are inherently secure because they use cryptography, employ hashing algorithms and have public/private keypairs—in short, a belief that the data in these systems is extremely unlikely to become exposed. In reality, once you have evaluated requirements and decided to use a blockchain solution, security is important to consider from the start.

Over the past several years, the Security Technologies arm of CableLabs’ Research and Development organization has been tracking blockchain attacks and compromises. From this work, several hazard groupings have been identified. The following list is intended to act as an aid to architecture, design and implementation efforts surrounding enterprise projects that use these technologies.

Smart Contract Injection

The Smart Contract engine is an interpreter for a (sometimes novel) programming language and a parser of data related to the decisions the engine needs to make. The hazard in this situation is when executable code appears inside smart contracts in an effort to subvert the contract language or data. Implementers need to consider sanitizing inputs to smart contracts, proper parsing and error handling.

Replay Attacks

Not only is there a threat in transaction processing and validation, but also in node behavior, authentication, and the securing of confidential messaging. Adding nonces to check against prior transactions is critical.
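
As a minimal sketch of the nonce idea (the transaction format and in-memory store are invented for illustration; a real chain would persist per-account nonces and bind them to a chain identifier):

```python
highest_nonce_seen: dict[str, int] = {}  # highest nonce accepted per sender

def accept_transaction(tx: dict) -> bool:
    """Reject any transaction whose nonce does not advance the sender's counter."""
    sender, nonce = tx["sender"], tx["nonce"]
    if nonce <= highest_nonce_seen.get(sender, -1):
        return False  # replayed or stale transaction
    highest_nonce_seen[sender] = nonce
    return True

print(accept_transaction({"sender": "alice", "nonce": 0, "amount": 5}))  # True
print(accept_transaction({"sender": "alice", "nonce": 0, "amount": 5}))  # False (replay)
```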

History Revision Attacks

Blockchains that rely on fault-tolerant consensus models do well when there are many participating nodes processing, competing and collaborating on the next block. When the number of nodes drops, or if there is predictably cyclic behavior, lulls can be leveraged in a history revision attack where a new branch is created, effectively deleting a previously accepted transaction. Designers should consider how to best guarantee minimum support and the diversity of nodes.

Permanence Poisoning

Due to the permanence of blockchains and the cost to fork, it’s possible to sabotage a chain with mere claims of illegal content, drawing the ire of regulators and law enforcement.

Confidential Information Leaks

Permanence increases the risk of data being exfiltrated out of the chain. Even encrypted data is at risk for future threats against those algorithms or brute-force attacks. Designers need to make sure that they understand the data being stored, how it is protected, who owns it and how it could be re-associated with any pseudonymized users.

Participant Authentication Failure

Are transaction creators cryptographically signing their transactions? Is that signature verified by the protocol? Is transaction receipt confirmed (non-repudiation)? Are sessions managed? Architects need to consider the proof of possession of private keys in the verification and authentication of participants.

Node Spoofing

Nodes are the entities that create and agree on the next new blocks in a chain. Nodes should be authenticated like any other user or system, and authentication must be verified, with multiple votes prohibited. Designers who fail to look for voting irregularities open their implementation to risk.

Node Misbehavior

Nodes that behave incorrectly, intentionally circumventing fault-tolerance mechanisms, or trojan nodes (nodes in public chains that follow the standard protocol but have non-standard implementations) are problematic. Transaction propagation non-compliance is another concern—where nodes don’t convey transactions quickly to other nodes, nodes consistently act in opposition to other nodes, or verifications align consistently within small fiefdoms. In addition, architects need to consider what happens to the chain operations when the chain, the nodes or a subset of the nodes is subject to a denial of service attack.

Untrustworthy Node-Chain Seam

The cryptographic difference between what was intended by the participant, what happens in the node, and what happens on the chain must all be consistent. Architects should enforce a design such that the node is unable to modify a transaction (signing and hash verification), skip a transaction (non-repudiation) or add new transactions (source verification).

General Security Hazards

The hazards fall into this meta-category of general security concerns that have specific implications in the blockchain/DLT realm. Architects, designers and implementers all need to take heed of these practices and work to ensure a complete solution:

  • Unproven Cryptography: Look for best practices and proven cryptography in cipher suites, hash algorithms, key lengths, elliptical curves used, etc.
  • Non-Extensible Cryptography: Should a foundational algorithm aspect of the chain become compromised, can the chain easily migrate to another suite/hash/key pair? Is there a mechanism and process among node operators to agree and deploy this quickly?
  • Security Misconfiguration: Be aware of all code libraries used, stay abreast of the latest security information about deployment technologies such as Docker, and ensure that defaults present in test systems are not available in production systems. Ask if there are any components with known vulnerabilities, determine whether any open ports or file-system permissions may be at risk, and understand protection mechanics for private keys.
  • Insufficient Logging and Alerts: If something goes wrong, are there sufficient methods in place to capture actions that occurred (voting, smart contracts, authentication, authorization)? Project managers must ensure that alerts have been added to the code, that the correct recipients have been added at deployment time, and that procedures for constant monitoring and updating of those recipients take place.
  • Weak Boundary Defense: Development teams need to be aware of, and shore up, defenses so that there are no exploitable holes in client code or node software, smart contract engines, mobile applications, web applications, chain viewers or administrative tools.

Clearly, this list doesn’t contain everything that must be reviewed in a blockchain or DLT application, but the objective is to provide a few key areas to focus on and provide insight to dive deeper where it makes sense in your own applications. Blockchains can help bridge trust gaps in an ecosystem, but security is foundational to that trust.

Want to learn more about security for blockchain and distributed ledgers in the future? Subscribe to our blog by clicking below. 


SUBSCRIBE TO OUR BLOG

Security

Micronets: Enterprise-Level Security Is No Longer Just For Enterprises

Darshak Thakore
Lead Software Architect

Nov 14, 2018

Today we are introducing CableLabs® Micronets, a framework that simplifies and helps secure increasingly complex home and small business networks.

As we add devices to our networks such as cell phones, computers, printers, thermostats, appliances, lights and even medical monitors, our networks become more susceptible to intrusions. Micronets automatically segments devices into separate, policy-driven trust domains to help protect the devices, data and the user. Agile and easy-to-use, Micronets gives consumers increased protection and control of their local network without overwhelming them with technical details. Micronets reduces the risks associated with vulnerable devices but is not a substitute for strong device security.

The Micronets Advantage: Smart Security and Ease of Use

CableLabs Micronets is an advanced network management framework that utilizes three components to provide enhanced security:

Automated Networked Devices: While CableLabs is not the first organization to introduce the concept of network segmentation, Micronets’ primary advantage is in its implementation. The Micronets framework uses advanced mechanisms like device fingerprinting and Manufacturer Usage Description (MUD) to intelligently group networked devices into dynamically managed trust domains or “micronets.”

For example, children’s devices are assigned to one micronet, home automation devices to another and so on. If one device is compromised, devices on the other micronets will not be visible to the attacker. The system will automatically quarantine the infected device, minimizing the risk to the network and other connected devices. While the system is largely autonomous, the user has the visibility and control to adjust trust domains and add new devices.
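
As a rough illustration of how such grouping decisions might be made, the sketch below maps a device's MUD URL (carried via DHCP, LLDP or 802.1X per IETF RFC 8520) or a simple fingerprint to a trust domain. The profile table, domain names and matching rules are illustrative assumptions, not the Micronets framework's actual policy logic.

```python
# Hypothetical grouping sketch: MUD URL first, fingerprint heuristics as fallback.
KNOWN_MUD_PROFILES = {
    "https://vendor.example/thermostat.json": "home-automation",
    "https://vendor.example/camera.json": "cameras",
}

def assign_micronet(mud_url, fingerprint):
    if mud_url in KNOWN_MUD_PROFILES:
        return KNOWN_MUD_PROFILES[mud_url]
    # Fall back to fingerprint-based heuristics when no MUD file is available.
    if "printer" in fingerprint.lower():
        return "printers"
    return "untrusted-default"   # quarantine until classified

print(assign_micronet("https://vendor.example/camera.json", "ACME IP Camera v2"))
print(assign_micronet(None, "Generic Printer 9000"))
```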

Seamless User Experience: Micronets provides a layer of dynamic management and secure credential provisioning that hides the complexity associated with network orchestration and focuses on improving the user experience. It’s a self-organizing platform that’s easy to use and control, which is a major benefit for the average customer who lacks the time and knowledge required for manual network administration.

Adaptive Devices: The Micronets framework also includes an intelligence layer that manages the connectivity between the individual trust domains, the Internet and third-party provider services. Because security threats continuously evolve, Micronets is built to evolve as well. State-of-the-art identity management and cloud-based intelligence technologies, like machine learning and neural networks, are leveraged to provide adaptive security that can evolve over the years, thereby providing a solution that will work for today’s as well as tomorrow’s needs.

Another benefit that Micronets can provide is enhanced security for highly sensitive devices or applications, through secure network extension via APIs. For example, Micronets can be used to establish a secure, end-to-end network connection between an Internet-connected medical device, like a glucose tester, and the cloud services of a healthcare provider. This enhanced capability provides confidentiality, integrity and availability of the medical device and the healthcare data to and from the device.
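
As a rough sketch of the kind of mutually authenticated, encrypted channel such an extension relies on, the following snippet opens a client-authenticated TLS connection from a device to a provider endpoint. The host name, port, certificate file paths and payload are placeholders, not part of the Micronets specification.

```python
import socket
import ssl

# Client-authenticated TLS: the device presents its own certificate and
# validates the provider against a known CA (all file paths are placeholders).
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="provider_ca.pem")
context.load_cert_chain(certfile="device_cert.pem", keyfile="device_key.pem")

with socket.create_connection(("health.example.net", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="health.example.net") as tls:
        tls.sendall(b'{"glucose_mg_dl": 105}')
        print(tls.version(), "channel established; awaiting acknowledgment")
        print(tls.recv(1024))
```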

Micronets provides features, such as network isolation, similar to 5G network slicing but can operate across Wi-Fi and mobile networks. Micronets is focused on the security of private networks (e.g., home and SMB networks), whereas 5G slicing is focused on different service-segment performance levels of end-to-end networks. And because Micronets is an overlay technology, it’s compatible with existing networks, including 5G slicing, whereas 5G slicing depends on the broad deployment of the underlying 5G technologies.

Under the Hood: A Deeper Dive into How Micronets Works

Micronets has five major architectural components:

  • Intelligent Services and Business Logic: This layer acts as the interface for the Micronets platform to interact with the rest of the world. It functions as a receiver of the user’s intent and business rules from the user’s services and combines them into operational decisions that are handed over to the Micronets Manager for execution.
  • Micronets Manager: This critical element orchestrates all Micronets activities, especially flow switching rules between the home network, cable operator and third-party providers that allow the delivery of services. It also provides controls that allow the user to interact with the Micronets platform.
  • Micronets Gateway: The Micronets Gateway could be a cable modem, router, wireless access point, or LTE hub/femtocell. It’s a core networking component that uses Software Defined Networking (SDN) to define how Micronets services interact with the home network (a hypothetical quarantine rule push is sketched after this list). It also oversees device profiles across the user network, both wired and wireless.
  • The Home Network: All the devices on the customer’s home or SMB network are automatically organized into appropriate trust domains—or micronets—using the device identity and SDN based logic. However, the customer can always make manual changes through a user-friendly Micronets interface.
  • Micronets API: Operator partners and third parties can interact with the Micronets Manager via secure APIs. Micronets ensures that third-party devices and services are secured through mutually authenticated and encrypted communications channels.
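
To make the orchestration concrete, here is an illustrative sketch of how a management client might push a quarantine rule toward the Manager, which would in turn program the Gateway's SDN flows. The endpoint path, payload fields and the manager.example.net host are hypothetical assumptions, not the published Micronets API.

```python
import json
import urllib.request

MANAGER = "https://manager.example.net/api"  # hypothetical Manager endpoint

def quarantine_device(device_id, micronet_id):
    rule = {
        "device": device_id,
        "micronet": micronet_id,
        "action": "quarantine",          # drop all flows except remediation
        "allow": ["remediation-server"],
    }
    req = urllib.request.Request(
        f"{MANAGER}/micronets/{micronet_id}/rules",
        data=json.dumps(rule).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # In practice this channel would be mutually authenticated per the API item above.
    with urllib.request.urlopen(req) as resp:
        return resp.status == 201
```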


The Rollout: Getting Micronets Into Homes and Businesses

  • White Paper: Our white paper lays out the vision and architecture of Micronets in greater detail.
  • Industry Partnerships: We’re working with our industry partners and cable operator members to bring Micronets to consumers. We are also implementing an easy-onboarding framework that builds on features from the Wi-Fi Alliance (WFA), namely EasyConnect and WPA3 security, and on the Internet Engineering Task Force (IETF) Manufacturer Usage Description framework to enable the secure and seamless configuration and onboarding of consumer devices. We are also leading the development of a secure interoperability specification for IoT devices in the Open Connectivity Foundation, and with Micronets, we’re making significant strides toward simplifying and securing increasingly complex networks.
  • Code: We are releasing the reference code, currently under development, to the open source community in the coming months.
  • Government Collaboration: We’re participating in and supporting government efforts like NIST’s National Cybersecurity Center of Excellence project on mitigating botnets in home and small business networks.
  • Our Members and Vendors: In collaboration with our members and vendors, we are planning to develop and publish specifications for standardized APIs for advanced security services based on machine learning and device fingerprinting.

CableLabs has long been a leader in the development of security technologies for the delivery of video and broadband Internet access services. With Micronets, we are bringing our expertise to the growing world of connected devices, for which security is a shared responsibility across the Internet ecosystem. Micronets helps mitigate the risks associated with insecure IoT devices, but it is not a substitute for, or an alternative to, the ongoing efforts to improve device security and prevent vulnerabilities at their source.

Download our white paper by clicking below or learn more here.

Micronets White Paper

Interested in working with the CableLabs team or hearing more about Micronets? Contact Darshak Thakore (d.thakore@cablelabs.com).

Security

The Need for IoT Standards

Matt Forbes
Senior Systems Engineer, Kyrio

Aug 29, 2018

Imagine a world in which you can tell your phone you’re leaving work, and your washing machine automatically starts the laundry at home so that it’s ready for the dryer when you arrive. Or your oven begins preheating so that you can pop a pizza in when you get home. Or, on cold days, your car automatically starts and warms up for your drive home. Imagine coming home from the grocery store, and your hands are full. No worries! The camera above your door has recognized you, and your door has unlocked and is already swinging open for your convenience.

Actually, you don’t have to imagine these scenarios anymore; they’re happening now. It is estimated there will be 30 billion connected IoT devices by 2020 and 75 billion by 2025. But with all these devices from dozens of manufacturers exploding onto the scene, how will they all work together? Today, many of them don’t—but it’s essential that they do.

The Importance of Technical Standards

That’s where technical standards come in. Standardizing products allows devices to work together, making the products easier to use and more appealing to end users. It also creates competition among manufacturers, which reduces prices and gives consumers a choice. But what’s in it for the manufacturer?

Often, companies want to lock you into their products so that you solely use their brand. But most companies don’t make every type of product. Door lock companies don’t usually make dishwashers. Automotive product companies don’t usually make medical devices. So, allowing devices to work together actually expands the market for the manufacturer without having to develop products outside of their specialization. It also allows for smaller niche products to work with more widespread ones. Beyond that, making devices more versatile and easier to use makes these devices more appealing in general so that all manufacturers sell more products. As for the price, the best way for companies to keep prices up is to produce newer, better and more innovative products, which benefits the consumer as well.

Spearheading IoT Standards for Interoperability and Security

Where do standards come from? For standards related to IoT, an organization called the Open Connectivity Foundation (OCF) has been created. OCF is committed to delivering, for consumers, businesses and industries, a standard communication platform that ensures interoperability and security for IoT devices. These standards will span multiple industries, including smart homes, automotive, industrial, scientific and medical, to name a few.

OCF’s goal is for devices from various manufacturers to operate together seamlessly and securely. Currently, OCF’s membership includes roughly 400 member organizations, including major software companies, service providers and silicon chip manufacturers. OCF has developed specifications and is using an open-source platform called IoTivity (hosted by the Linux Foundation) that can be embedded in IoT devices. IoTivity is used to create middleware that allows various clients and servers to communicate with one another. The communication occurs in software, so the physical connections (e.g., Wi-Fi, Bluetooth, Zigbee, Z-Wave, Ethernet) aren’t an issue.

But OCF isn’t just about interoperability. The latest release of the OCF platform incorporates PKI security. At a time when security is often taken for granted or treated as an afterthought for new technologies, OCF is committed to the highest level of security possible for such low-power, limited-processing devices. Why is this important? We may not think that hacking a lightbulb is a big deal, but the weakest link in a network is often the biggest target for hackers. Once they’re in, they can cause irreparable damage. Therefore, every device on the network needs to be secured. Not to mention the fact that you probably don’t want someone else to be able to unlock your doors, turn off your security devices or control your medical device or vehicle without your knowledge or consent!

Furthering IoT Standards Development with CableLabs and Kyrio

So where do CableLabs and Kyrio fit in? CableLabs has been in the business of developing standards and certifying products for the cable industry for the past 30 years. Kyrio, as a subsidiary of CableLabs, is reaching out to other industries to help develop new technologies. The combination of experience in standards development, as well as certification testing, makes CableLabs and Kyrio a natural fit with the OCF.

For the past few years, CableLabs and Kyrio have been heavily involved with OCF. Our involvement ranges from acting as a standing member of the board, to chairing the security working group, to participating in various working groups such as certification and interoperability testing. Kyrio is also one of seven authorized test labs (ATLs) in the world and has performed certification testing for several of the first devices to be certified. In addition to OCF certification testing, we also offer development support to manufacturers that need to get their implementations ready for certification.

You can learn more about certification testing with the Open Connectivity Foundation here or contact Kyrio today with your IoT support needs at labs@kyrio.com.


Visit Kyrio
