Innovation

ChirpStack: The New Open Source LoRa Server

Daryl Malas
Principal Architect, Advanced Technology Group

Dec 5, 2019

Over the past couple of years, CableLabs and Orne Brocaar have introduced multiple major releases of the LoRa® Server, a community-led open source LoRaWAN® network server. The goal of this effort is to provide a powerful tool for enabling LPWAN services using unlicensed bands worldwide. The server is licensed under the MIT license, so it can be used freely for anything from testing to production. Our objective continues to focus on enabling growth and creativity in the LPWAN ecosystem using the LoRaWAN protocol.

We are excited to announce that LoRa Server has been renamed ChirpStack™. What does this rebranding mean for our community of users? Well, nothing really, with the exception of a new name. The server continues to provide the functions, capability, LoRa Alliance® compliance and MIT licensing it always has. However, the name and URL location of the resources have changed.

The ChirpStack software, source code and documentation are now available here: https://www.chirpstack.io. The discussion forum is now available here: https://forum.chirpstack.io

LoRa Server renamed ChirpStack

Since its debut in 2016, the LoRa Server project has gained a lot of traction and is now being used by thousands of users from (currently) 144 countries around the world. We fully expect the ChirpStack project will continue to serve this user base with valuable tools, software, and discussion.

“Solutions built on Semtech’s LoRa devices offer the real potential to change the world by delivering analytical insight into how we live and work today. To create a smarter tomorrow, developers working with LoRa devices and the LoRaWAN protocol need access to easy-to-use accelerators that help drive applications to market more quickly,” said Alistair Fulton, Vice President and General Manager of Semtech’s Wireless and Sensing Products Group. “CableLabs and its ChirpStack software have contributed to the growth of LoRaWAN, creating value to the ecosystem by helping to simplify the IoT development process and enable the creation of new, innovative products for the next generation of use cases.”

We have automated the renaming process in the latest version as much as possible, and we hope this migration will only be a nominal inconvenience. A full list of considerations and changes has been provided on the forum. If you experience any challenges with this migration, please communicate issues and feedback on the forum.

In the latest release(s) you will find a lot of interesting new features. Using NetID filters, you can reduce the bandwidth usage of your gateways, which is useful when they rely on a cellular backhaul. We have also made it easier to correlate log messages across the different components, which will help when troubleshooting issues as they occur. To increase geolocation accuracy, we have added support for performing geolocation on multiple uplink frames. We will continue to improve and add new features, and we are looking forward to your feedback and contributions to the ChirpStack project.

Note: LoRa is a registered trademark or service mark of Semtech Corporation or its affiliates.

LEARN MORE

Wireless

Field Trial Results Show Wi-Fi CERTIFIED Vantage™ Devices Offer Significant Improvement to Network Performance

John Bahr
Lead Architect, Wireless Technologies

Mark Poletti
Director, Wireless Network Technologies

Nov 21, 2019

In high-traffic, high-volume environments such as subways, airports, and stadiums, maintaining a reliable connection and moving consistently across access points (APs) in a Wi-Fi network has always been a challenge for users and operators. A solution to this issue is now commercially available in the form of Wi-Fi CERTIFIED Optimized Connectivity™ and Wi‑Fi CERTIFIED Agile MultiBand™ AP and client devices. These are core certifications within the Wi-Fi Alliance (WFA) Wi-Fi CERTIFIED Vantage™ program. These Wi-Fi Vantage™ devices contain features that optimize management and control frame transmissions, network discovery, authentication, and network transition. A field trial was conducted to measure the performance of a Wi‑Fi network using Wi-Fi Optimized Connectivity™ and Wi‑Fi Agile MultiBand™ devices deployed in a highly congested urban environment centered around a busy subway station. Results show the following improvements over non-Wi-Fi Vantage devices:

Optimized Network Discovery

Without Wi-Fi Vantage, the inefficiencies of network discovery and response messages can severely disrupt existing client connections and make it difficult for clients to attach to the network. The optimized network discovery features in Wi-Fi Vantage include suppression and broadcast of probe responses by the AP, as well as probe request deferral and suppression by the client. Field trial results show that the number of probe responses in a Vantage network was reduced by 76% on the 2.4 GHz radios and by 72% on the 5 GHz radios. This resulted in a probe response airtime usage reduction of 67% in 2.4 GHz and 44% in 5 GHz.

Optimized Authentication

Without Wi-Fi Vantage, clients can experience long reconnection setup times when moving back into a previously-joined network. With Wi-Fi Vantage, this re-connection setup time is reduced using Fast Initial Link Setup (FILS) Authentication. When FILS Authentication was tested in the Wi-Fi Vantage network, results showed that the connection setup times decreased by 76% (from 228 ms to 55 ms).

Fast Network Transition

Without Fast Network Transition (FT), clients must perform a full Extensible Authentication Protocol (EAP) exchange when roaming, possibly interrupting the end-user experience. With Wi-Fi Vantage, once a client device decides to roam to a different AP, band, or channel, the association and connection happen quickly and seamlessly. Test results show that FT roaming improved client reconnection setup times by 84%, reducing them from 203 ms to 31 ms. In addition, Fast Network Transition can be deployed with, and will work alongside, FILS Authentication to further optimize client connections and roams.

A full-featured Wi-Fi Vantage network will benefit overall network performance and user experience, especially in high-traffic, high-volume environments. Some Vantage features may already be included in operator-managed Wi-Fi networks using vendor-specific implementation and nomenclature. Field trial results will allow operators to assess the value of a partial- or full-featured Vantage certified Wi-Fi network. CableLabs’ joint leadership with the operator community (cable and mobile operators) created the vision and roadmap for the Wi-Fi Vantage program while partnering with the Wi-Fi ecosystem and will continue these efforts for the next generation of Wi-Fi Vantage.

Read More About Wi-Fi Vantage
For more information about this project or CableLabs’ involvement, please contact John Bahr (j.bahr@cablelabs.com) or Mark Poletti (m.poletti@cablelabs.com).

Wired

Everything You Want to Know About Coherent Optics for Access Networks (But Were Afraid to Ask)

Steve Jia
Distinguished Technologist, Wired Technologies

Alberto Campos
Fellow, Next-Gen Systems

Nov 19, 2019

The cable industry has been well served by its extensive fiber deployment that took place during the initial hybrid fiber-coax (HFC) buildout. Even though cable operators have answered capacity demand through fiber node-splits in specific high demand scenarios, only recently have operators embarked on deeper-fiber roll-out strategies as part of a comprehensive long-term evolution plan.

The exponential growth in demand for capacity prompted CableLabs to explore how to best use cable’s optical infrastructure resources. This exploration led to research activities for the introduction of coherent optics in the access environment. We’re delighted to announce the publication of the book “Coherent Optics for Access Networks” by CRC Press (Taylor & Francis Group), highlighting many of CableLabs’ research activities.

The book discusses how coherent optics in the access network is re-engineered to simultaneously achieve lower complexity and higher performance afforded by the generous link margins characteristic in shorter links. This instantiation of coherent optics is not only suitable for cable access but also for telco and cellular fiber access networks.

The book examines recent developments in the field of coherent optics for access network applications that will support point-to-point (P2P) aggregation use cases and point-to-multipoint (P2MP) fiber-to-the-user passive optical networks. It also presents optical industry trends, conventional intensity modulation and direct detection (IM-DD) systems, and newly developed advanced direct-detection architectures leveraging the four-level pulse amplitude modulation format, Stokes receivers and Kramers–Kronig receivers.

This book focuses on how to adapt coherent optics technology to the access environment in ways that address major cost challenges, such as simplified transceiver design and photonic integration. An example is the introduction of full-duplex coherent optics, which enables simultaneous bidirectional transmission on the same wavelength, thereby doubling fiber’s capacity. Full-duplex coherent optics is an approach that is feasible to implement in the shorter-link-length access environment.

The book provides economic modeling for aggregation use cases in comparison with traditional 10G IM-DD DWDM-based solutions. Implementation requirements unique to the access environment, including coexistence with existing services and security challenges, are also provided for introducing coherent optics into access scenarios. Progress on recent specification-development activities is reviewed for the many industry organizations that focus on short-distance coherent optics interoperability.

In writing this book, the authors have benefitted from numerous interactions with experts within the optical telecommunication components and systems community, in particular with the vendor and operator members that contributed to CableLabs’ point-to-point coherent optics specification. This book represents a first look at technological advances in coherent optics, in the interest of future-proofing our access networks.

Favorable coherent component cost-reduction trends are expected to continue, technological advancements will enable higher performance, and simpler implementations will make coherent technology more pervasive in the access network so that exponential growth in capacity can be achieved. Given the headway gained in specification-generation bodies and the development progress of optical component and transceiver manufacturers focusing on shorter link distances, a future with coherent optics in the access network is upon us.


READ NOW

Latency

The March to Budget-Friendly vRAN Continues!

Joey Padden
Distinguished Technologist, Wireless Technologies

Nov 13, 2019

As with most of my recent blog posts, I’m here to share some exciting updates on the work that CableLabs has been doing in the Telecom Infra Project (TIP) with virtualized RAN for non-ideal transport networks—for example, DOCSIS networks, passive optical networks (PONs) and really anything not on dedicated fiber. Over the past 6 months or so, we’ve reached some milestones that are worth a blog post blast. I’m going to keep each update brief, but please follow the links to dig in further where you’re interested.

TIP vRAN Fronthaul White Paper #2

On November 13, TIP’s vRAN Fronthaul Project Group is releasing a white paper discussing the results of Phase 1 of the project. The paper covers the combined learnings from the four Community Lab efforts led by Airtel, BT, CableLabs and TIM. We also include some key takeaways with which operators can assess the network assets that can be used in future vRAN deployments. You can find the paper here.

TIP Summit vRAN Fronthaul Demo

Also this week, the vRAN Fronthaul team has assembled a demo for TIP Summit ’19 in Amsterdam. The demo is showing the newly containerized multi-vendor vRAN solution running two remote radios (RUs) from a single CU/DU virtual baseband unit. In the LTE software stack, the Layer 2 and 3 containers come from Altran, and the Layer 1 container comes from Phluido, with RUs from Benetel. The containerized setup increases CPU efficiency by over 80 percent relative to our previous virtual machine–based architecture. If you’re in Amsterdam at TIP Summit, be sure to stop by the vRAN stand on the show floor.

TIP vRAN Fronthaul Trial with Shaw Communications

In July of this year, Shaw Communications, CableLabs and TIP collaborated to trial the vRAN Fronthaul LTE solution from Altran, Benetel, and Phluido over Shaw’s commercial-grade DOCSIS networks. In a fantastic result, we were able to demonstrate the ability of the Shaw DOCSIS networks to support Option 7-2 split fronthaul traffic for LTE services. In addition, we replicated all of our lab findings over the Shaw DOCSIS networks, validating that our lab results transfer to real-world networks. “The trial demonstrated that Shaw’s hybrid fibre coaxial FibrePlus network is well positioned to support not only existing wireless services but the significant densification coming with the deployment of 5G,” said Damian Poltz, Vice President, Technology Strategy and Networks, Shaw Communications.

O-RAN Specification Includes Non-Ideal Fronthaul

While the team was busy hitting all these milestones in the TIP vRAN Fronthaul project, during the first half of the year CableLabs also led a collaborative effort to bring non-ideal fronthaul support to the O-RAN Alliance CUS plane specification. As of July, the 2.0 version of the CUS plane specification now includes support for non-ideal fronthaul with latencies up to 30ms over a common Option 7-2 interface. In addition, a new appendix was added to provide further detail on the implementation and operational specifics of deploying the lower-layer split over non-ideal transport such as DOCSIS networks, PON or managed Ethernet.

You can find out more by clicking the link below.


Read the White Paper

Security

Revisiting Security Fundamentals Part 3: Time to Examine Availability

Steve Goeringer
Distinguished Technologist, Security

Nov 12, 2019

As I discussed in parts 1 and 2 of this series, cybersecurity is complex. Security engineers rely on the application of fundamental principles to keep their jobs manageable. In the first installment of this series, I focused on confidentiality, and in the second installment, I discussed integrity. In this third and final part of the series, I’ll review availability. The application of these three principles in concert is essential to ensuring excellent user experiences on broadband.

Defining Availability

Availability, like most things in cybersecurity, is complicated. Availability of broadband service, in a security context, ensures timely and reliable access to and use of information by authorized users. Achieving this, of course, can be challenging. In my opinion, the topic is underrepresented among security professionals, and we have to rely on additional expertise to achieve our availability goals. The supporting engineering discipline for ensuring availability is reliability engineering. Many tomes are available to provide detailed insight on how to engineer systems to achieve desired reliability and availability.

How are the two ideas of reliability and availability different? Reliability focuses on how a system will function under specific conditions for a period of time. In contrast, availability focuses on how likely a system will function at a specified moment or interval of time. There are important additional terms to understand – quality, resiliency, and redundancy. These are addressed in the following paragraphs. Readers wanting more detail may consider reviewing some of the papers on reliability engineering at ScienceDirect.

Quality: We need to assure that our architectures and components are meeting requirements. We do that through quality assurance and reliability practices. Software and hardware vendors design, analyze, and test their solutions (both in development and then as part of shipping and integration testing) to assure they actually meet their reliability and availability requirements. When results aren’t sufficient, vendors apply process improvements (possibly including re-engineering) to bring their design, manufacturing, and delivery processes into line with the reliability and availability requirements.

Resiliency: Again, however, this isn’t enough. We need to make sure our services are resilient – that is, even when something fails, our systems will recover (and something will fail – in fact, many things are going to fail over time, sometimes concurrently). There are a few key aspects we address when making our networks resilient. One is that when something fails, it does so loudly to the operator, so they know something failed – either the failing element sends messages to the management system, or the systems it connects to or relies upon tell the management system that the element is failing or has failed. Another is that the system can gracefully recover: it automatically restarts from the point it failed.

Redundancy: And, finally, we apply redundancy. That is to say, we set up the architecture so that critical components are replicated (usually in parallel). This may happen within a network element (such as having two network controllers, two power supplies or two cooling units) with failover (and appropriate network management notifications) from one unit to another. Sometimes we’ll use clustering to both distribute load and achieve redundancy (sometimes referred to as M:N redundancy). Sometimes we’ll have redundant network elements (often employed in data centers) or multiple routes by which network elements can connect through networks (using Ethernet, Internet, or even SONET). In cases where physical redundancy is not reasonable, we can introduce redundancy across other dimensions, including time, frequency, channel, etc. How much redundancy a network element should employ depends on the math that balances reliability and availability to achieve your service requirements.
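To make the redundancy math concrete, here is a minimal sketch of the textbook parallel-redundancy calculation (illustrative numbers only, and it assumes the units fail independently): with one unit at availability A, N units in parallel are unavailable only when all N are down at the same time.

```python
def parallel_availability(unit_availability: float, n_units: int) -> float:
    """Availability of N independent units in parallel: the system is down
    only when every unit is down at the same time."""
    return 1.0 - (1.0 - unit_availability) ** n_units

# Example: a single unit (say, a power supply) with 99% availability.
single = 0.99
for n in (1, 2, 3):
    print(f"{n} unit(s): {parallel_availability(single, n):.6f}")
# 1 unit(s): 0.990000
# 2 unit(s): 0.999900  (duplicating one two-nines unit buys roughly two more nines)
# 3 unit(s): 0.999999
```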

I’ve mentioned requirements several times. What requirements do I mean? A typical one, but not the only one and not even necessarily the most important one, is Mean Time Between Failures (MTBF). This statistic represents the statistical or expected average length of time between failures of a given element of concern, typically many thousands (even millions for some critical, well-understood components) of hours. There are variations. Seagate, for example, switched to Annualized Failure Rate (AFR), which is the “probable percent of failures per year, based on the [measured or observed failures] on the manufacturer’s total number of installed units of similar type” (see the Seagate link here). The key thing here, though, is to remember that MTBF and AFR are statistical predictions based on analysis and measured performance. It’s also important to estimate and measure availability – at the software, hardware, and service layers. If your measurements aren’t hitting the targets you set for a service, then something needs to improve.
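To illustrate how those two statistics relate, here is a small sketch that converts an MTBF figure into an implied AFR; it assumes a constant (exponential) failure rate and 24x7 operation, which is the usual basis for vendor MTBF claims, and the 1.2-million-hour figure is just an example:

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days

def afr_from_mtbf(mtbf_hours: float) -> float:
    """Annualized Failure Rate implied by an MTBF, assuming a constant
    (exponential) failure rate and continuous operation."""
    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# A drive rated at 1.2 million hours MTBF:
print(f"AFR ~ {afr_from_mtbf(1_200_000):.2%}")  # about 0.73% of units expected to fail per year
```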

A parting note here: lots of people talk about availability in terms of the percentage of time a service (or element) is up in a year. These figures are thrown around as “how many 9’s is your availability?” For example, “My service is available at 4x9s (99.99%).” This is often a misused estimate because the user typically doesn’t know what is being measured, what it applies to (e.g., what’s included in the estimate), or even the basis for how the measurement is made. Nevertheless, it can be useful when backed by evidence, especially with statistical confidence.
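For readers who want to translate the “nines” shorthand into hours, here is a quick back-of-the-envelope calculation (simple arithmetic, nothing vendor-specific):

```python
HOURS_PER_YEAR = 8766  # 365.25 days

for label, availability in (("2 nines", 0.99), ("3 nines", 0.999), ("4 nines", 0.9999)):
    downtime_hours = (1.0 - availability) * HOURS_PER_YEAR
    print(f"{label} ({availability:.2%}): ~{downtime_hours:.1f} hours of downtime per year")
# 2 nines (99.00%): ~87.7 hours of downtime per year
# 3 nines (99.90%): ~8.8 hours
# 4 nines (99.99%): ~0.9 hours (about 53 minutes)
```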

Warnings about Availability

Finally, things ARE going to fail. Your statistics will sometimes be found wanting. Therefore, it’s critical to also consider how long it will take to RECOVER from a failure. In other words, what is your repair time? There are, of course, statistics for estimating this as well. A common one is Mean Time to Repair (MTTR). This seems like a simple term, but it isn’t. Really, MTTR is a statistic that measures how maintainable or repairable systems are. Measuring and estimating repair time is critical: repair time can be the dominant contributor to unavailability.
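Putting MTBF and MTTR together gives the standard steady-state availability formula, A = MTBF / (MTBF + MTTR). A minimal sketch with made-up numbers shows why repair time can dominate:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the element is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Same element, two repair regimes: a 4-hour truck roll vs. waiting 30 days for a part.
print(f"MTBF 10,000 h, MTTR 4 h:   {availability(10_000, 4):.5f}")    # ~0.99960
print(f"MTBF 10,000 h, MTTR 720 h: {availability(10_000, 720):.5f}")  # ~0.93284
```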

So… why don’t we just make everything really reliable and all of our services highly available? Ultimately, this comes down to two things. One, you can’t predict all things sufficiently. This weakness is particularly important in security and is why availability is included as one of the three security fundamentals. You can’t predict well or easily how an adversary is going to attack your system and disrupt service. When the unpredictable happens, you figure out how to fix it, update your statistical models and analysis accordingly, and update how you measure availability and reliability.

The second thing is simply economics. High availability is expensive. It can be really expensive. Years ago, I did a lot of work on engineering and architecting metropolitan, regional, and nationwide optical networks with Dr. Jason Rupe (one of my peers at CableLabs today). We found, through lots of research, that the general rule of thumb was that for each “9” of availability, you could expect the cost of service to increase by around 2.5x on typical networks. Sounds extreme, doesn’t it? The availability of a private line or Ethernet circuit between two points (regional or national) is typically quoted at around 99%. That’s a lot of downtime (over 80 hours a year) – and it won’t all happen at a predictable time within a year. Getting that down to around 9 hours a year, or 99.9% availability, will usually cost 2.5x as much. Of course, architectures and technology do matter; this is just my personal experience. What’s the primary driver of the cost? Redundancy: more equipment on additional paths of connectivity between that equipment.

Availability Challenges

There are lots of challenges in the design of a highly available, cost-effective access network. Redundancy is challenging. It’s implemented where economically reasonable, particularly at the CMTSs, routers, switches, and other server elements at the hub, headend, or core network. It’s a bit harder to achieve in the HFC plant. So, designers and engineers tend to focus more on the reliability of components and software, ensuring that CMs, nodes, amplifiers, and all the other elements that make DOCSIS® work are dependable. Finally, we do measure our networks. A major tool for tracking and analyzing network failure causes and for maximizing the time between service failures is DOCSIS® Proactive Network Maintenance (PNM). PNM is used to identify problems in the physical RF plant, including passive and active devices (taps, nodes, amplifiers), connectors, and the coax cable.

From a strictly security perspective, what can be done to improve the availability of services? Denial-of-service attacks are typically monitored and mitigated at the ingress points to networks (border routers) through scrubbing. Another major tool is ensuring authorized access through authentication and access controls.

Availability Strategies

What are the strategies and tools reliability and security engineers might apply?

  • Model your system and assess availability diligently. Include traditional systems reliability engineering faults and conditions, but also include security faults and attacks.
  • Execute good testing prior to customer exposure. Said another way, implement quality control and process improvement practices.
  • When redundancy is impractical, reliability of elements becomes the critical availability design consideration.
  • Measure and improve. PNM can significantly improve availability. But measure what matters.
  • Partner with your suppliers to assure reliability, availability, and repairability throughout your network, systems, and supply chains.
  • Leverage PNM fully. Solutions like DOCSIS® create a separation between a network problem and a service problem. PNM lets operators take advantage of that difference to fix network problems before they become service problems.
  • Remember that repair time can be a major factor in the overall availability and customer experience.

Availability in Security

It’s important that we consider availability in our security strategies. Security engineers often get too focused on threats caused by active adversaries. We must also include other considerations that disrupt the availability of experiences to subscribers. Availability is a fundamental security component and—like confidentiality and integrity—should be included in any security-by-design strategy.

You can read more about 10G and security by clicking below.


Learn More About 10G

Policy

More Than 70 State Legislators Experience the Future of Connectivity at CableLabs

Kelton Shockey
Technology Policy Associate

Nov 7, 2019

For the past several years, CableLabs has annually hosted a large group of state legislators from dozens of states across the country to discuss the future of broadband connectivity. It was our privilege to repeat that honor recently, both to showcase the cable industry’s innovation roadmap and to listen to the interests and concerns of a diverse group of lawmakers.

Policymakers were excited to see firsthand the emerging network technologies that will enable increased cable broadband performance—faster symmetrical speeds, lower latency, enhanced reliability and better security. Collectively, the group of network technologies that will enable this increased performance is the cable “10G Platform,” which will equip cable networks to deliver 10 gigabit services.

CableLabs’ Rob Alderfer, Vice President of Technology Policy, kicked off the visit with an overview of the current state of cable broadband networks and the technologies on the horizon that will drive increased broadband performance. Gigabit-speed service is now available to nearly the entire footprint of the cable network in the United States. Although gigabit service is state of the art today, CableLabs sees an even faster future for connectivity with symmetrical multigigabit speeds coming soon to consumers through the 10G Platform. To help envision this future, CableLabs also screened its latest Near Future Video—“The Near Future: Diverse Thinkers Wanted”—to illustrate the applications and services that could be enabled by widely deployed 10G networks, including holo-rooms, on-call mixed reality (MR) and autonomous taxi fleets, among many others.

The state legislators then took part in a tour of CableLabs, including a series of demonstrations and discussions in the areas of wired network technologies, wireless network technologies, cybersecurity and immersive media.

Wired Network Technologies (Fiber and Coaxial)

In CableLabs’ Optical Center of Excellence, Curtis Knittle, vice president of wired technologies, demonstrated multi-gigabit broadband service in action, with speeds of nearly 5 Gbps being received by a single cable modem using DOCSIS® 3.1 network equipment—a clear step toward 10G. Curtis also provided an overview of the emerging wired network technologies that underpin the 10G Platform, including advances in DOCSIS technology (DOCSIS 4.0), fiber optic access networks (Full Duplex Coherent Optics) and distributed access architectures (Remote PHY and Remote MAC/PHY). Collectively, these technologies will scale to deliver 10 Gbps.

CableLabs’ Curtis Knittle demonstrates the future of broadband speeds over HFC and fiber optic networks to a group of state legislators.

Wireless Network Technologies (Wi-Fi, Mobile and Fixed)

CableLabs continues to grow its investment in wireless network technologies, recognizing that the consumer’s broadband experience depends on a robust wireless connection. Joey Padden, a distinguished technologist at CableLabs, provided a live demo of gigabit Wi-Fi. Robust wireless connectivity, through home Wi-Fi networks, is essential to delivering and fully enjoying the capabilities of today’s cable broadband service and the 10G service of the future.

Joey led a discussion of the underlying technologies and constraints to future growth, including how a shortage of wireless spectrum is a major bottleneck for delivering on industry innovation. Policymakers can play a key role in helping enable the market by making more spectrum available for new and existing technologies, such as Wi-Fi. Recognizing the importance of spectrum policy, CableLabs is an active technical contributor to these important decisions, which were of high interest to the visiting lawmakers.

Cybersecurity and the Internet of Things

The Internet of Things (IoT) has the potential to enhance all our lives through increased efficiency, convenience and productivity. However, this proliferation of Internet-connected devices also creates meaningful risk for consumers, online services and the broader Internet. Insecure IoT devices can fuel cyberattacks, spread ransomware and steal sensitive personal information, among other concerns.

The 10G Platform seeks to mitigate these risks through a number of new technologies, including CableLabs® Micronets. CableLabs’ Mark Walker, Director of Technology Policy, and Kyle Haefner, Senior Security Engineer, demonstrated how new tools such as Micronets will allow cable broadband customers to stay ahead of attackers. Protecting consumers has proven to be a priority we share with legislators. CableLabs and the cable industry are leveraging decades of experience and leadership to help address the challenges and risks that insecure IoT poses.

Immersive Media (VR, MR and Light Fields)

CableLabs’ Eric Klassen and Debbie Fitzgerald talk legislators through the future of immersive media experiences, while a visiting legislator explores the inside of a space shuttle cockpit through VR.

The tour also provided the legislators with the opportunity to experience the latest in virtual reality (VR) and discuss the near future of immersive media technologies, including on-call MR and holographic light field displays. CableLabs’ Eric Klassen, Innovation Project Engineer, and Debbie Fitzgerald, Director of Technology Policy, led the discussion and provided legislators with a sense of the applications that cable’s 10G broadband networks will enable. To support the development and adoption of immersive media, CableLabs helped found the Immersive Digital Experience Alliance (IDEA), which is standardizing a new media format for the transmission of volumetric media, such as light fields.

Following the lab tours, the event wrapped with a future-focused session on emerging technology megatrends with CableLabs’ CEO Phil McKinney. Phil provided an innovator’s perspective on key trends that will significantly impact technology development. For example, he highlighted exponential increases in storage capabilities (e.g., biological-based and memristor technologies), artificial intelligence, robotics and bandwidth. Phil explained how these technology megatrends will fundamentally change how we each live, learn, work and play.

As CableLabs continues to build the technologies that will make 10G networks a reality, we recognize the importance of dialogue with policymakers through events such as this. It is critical that government officials have a sound understanding of the industry’s innovation roadmap, and it is equally important that industry listen to public policy interests. Together, we can build the future of connectivity.


Learn More About 10G

Energy

The Case for Gridmetrics, SAGA and Grid Cybersecurity

Scott Caruso
Director Strategic Ventures

Oct 31, 2019

Today, the electrical grid is essentially blind, particularly in the distribution portion (think the last mile to your home). For all the talk of sensors and the Industrial Internet of Things (IIoT), there’s a distinct lack of visibility into the status of power availability and quality in the last miles of the electrical distribution grid. (Note that nearly 90 percent of all outages occur in the distribution grid.) Want proof? Just ask utilities how they identify outages. Many will say their number one source is the crowd—yup, phone calls, texts, tweets and so on.

Now, imagine a grid of sensors across the United States that live on the last mile of the electrical grid and are connected via a private, high-speed, low-latency network. All of those sensors could be sending data regularly to an aggregation point that provides near real-time insight into the availability and quality of power. That’s what the Gridmetrics™ project at CableLabs® does.

What could one do with this big-data set? One application we’re pursuing with our partners at National Renewable Energy Labs (NREL) is a collaborative R&D project called Situational Awareness of Grid Anomalies (SAGA), sponsored by the Department of Energy’s (DOE’s) Office of Cybersecurity, Energy Security and Emergency Response (CESER).

We believe there are many use cases for this type of data. In addition to receiving the DOE award related to grid cybersecurity, we’re working on multiple Gridmetrics pilot opportunities across multiple sectors: power utilities, public safety, insurance and smart cities. For power utilities in particular, we’re engaging innovative utility partners to ingest and activate our dataset for myriad use cases, including outage detection/management, mutual assistance resource acquisition, grid safety and power trading, to name a few.

There’s no use case more important than helping to ensure the security of our electrical grid. We’re pleased to be working with our cable operator members and National Renewable Energy Laboratory (NREL) to develop the analytics, insights and tools to identify and visualize anomalies on the nation’s electrical distribution grid.

Click below to read more about the SAGA program from NREL.


Learn More

Innovation

Immersive Media Experiences Reaches New Milestones

Debbie Fitzgerald
Technology Policy Director

Oct 30, 2019

In April this year, CableLabs joined Charter Communications, Light Field Lab, OTOY, Visby, and Cox Communications to establish the Immersive Digital Experiences Alliance™ (IDEA). The primary purpose of this endeavor is to develop a set of royalty-free standard specifications for immersive media formatting and distribution. This month, several significant milestones were achieved:

  • IDEA released its first set of draft specifications for public review,
  • CableLabs hosted the Light Field and Holographic Display Summit, and
  • IDEA demonstrated the first Immersive Technology Media Format™ (ITMF) content across multiple display types

IDEA Releases Draft Specifications

Based on OTOY’s ORBX format, the Immersive Technology Media Format (ITMF) is a display-agnostic interchange format for conveying light field imagery to a variety of display types, including light field displays. IDEA has released three draft specifications so far to document this media format: the Scene Graph Specification, the Container Specification and the Data Encoding Specification. As noted, these are draft specifications, and there is still work to do in the areas of display profiles, live-action capture and representation, as well as media-aware network streaming. We encourage interested stakeholders to join IDEA and help shape the future of immersive media.

CableLabs Hosts the Light Field and Holographic Display Summit

This year’s Light Field and Holographic Display Summit, produced by Insight Media, was hosted by CableLabs in Louisville, Colorado in early October. The two-day event covered not only display technology but the entire light field and holographic ecosystem. CableLabs, as a founding member of IDEA, is very interested in facilitating the acceleration of this ecosystem and envisions that 10G cable network technologies will enable the delivery of holographic experiences to consumers’ homes.

The agenda was full of many interesting sessions and thought-provoking panels representing 22 different companies in this space, including talks from these IDEA founders:

  • Pete Lude, Chairman of IDEA and CableLabs IDEA Board Director, provided an overview of Light Field Immersive Media and an introduction to the Immersive Digital Experiences Alliance.
  • Jon Karafin, CEO of Light Field Lab, presented an overview of the latest developments in light field display technologies.
  • Ryan Damm, CEO of Visby, discussed how to get real-world footage onto these next-gen displays.
  • Jules Urbach, CEO of OTOY, addressed synthetic media development and formats.
  • Curtis Knittle, VP of Wired Technologies at CableLabs, discussed how cable 10G networks are evolving to carry light field data.

The takeaway from the summit was that there is significant activity, interest, and exciting development in this space, for both commercial and military applications. As we heard Tony Werner, President of Comcast, exclaim during the most recent SCTE Cable-Tec Expo General Session, “Holographic displays are coming sooner than we may think”! Comcast, along with Liberty Global Ventures, Samsung, Verizon Ventures, and others were recently part of a $28 million round of funding raised by Light Field Lab.

IDEA Demonstrates First ITMF Content Across Multiple Display Types

One of the main objectives of the IDEA Immersive Technology Media Format (ITMF) is to make it display-agnostic so that it can be created and stored in one format and rendered out to support multiple types of displays, including traditional 2D flat panels, virtual reality head-mounted displays, and glasses-free light field displays. Only months after IDEA was launched, members of IDEA demonstrated this concept with content created in the ITMF format and played out on an Oculus Go VR headset, a standard 2D television, and a 3D TV with active glasses.

Although the Immersive Digital Experiences Alliance has only been established for a few months, these milestones demonstrate exciting progress in this space. And the alliance is just getting started! IDEA welcomes service providers, content producers, technologists and creative visionaries to join IDEA and define the media and distribution formats of the future.

Learn More About IDEA 

Security

Revisiting Security Fundamentals Part 2: Integrity

Steve Goeringer
Distinguished Technologist, Security

Oct 24, 2019

Let’s revisit the fundamentals of security during this year’s security awareness month – part 2: Integrity.

As I discussed in Part 1 of this series, cybersecurity is complex. Security engineers rely on the application of fundamental principles to keep their jobs manageable. The first blog focused on confidentiality. This second part will address integrity. The third and final part of the series will review availability. Application of these three principles in concert is essential to ensuring excellent user experiences on broadband.

Nearly everyone who uses broadband has some awareness of confidentiality, though most may think of it exclusively as enabled by encryption. That’s not a mystery – our browsers even tell us when a session is “secure” (meaning the session they have initiated with a given server is at least using HTTPS, which is encrypted). Integrity is a bit more obscure and less well known. It’s also less widely implemented – and, when it is implemented, not always well.

Defining Integrity

In their special publication, “An Introduction to Information Security,” NIST defines integrity as “a property whereby data has not been altered in an unauthorized manner since it was created, transmitted, or stored.” This definition is a good starting place, but it can be extended in today’s cybersecurity context. Integrity needs to be applied not only to data, but also to the hardware and software systems that store and process that data and the networks that connect those systems. Ultimately, integrity is about proving that things are as they should be and that they have not been changed, whether intentionally, inadvertently or accidentally (like magic), in unexpected or unauthorized ways.

How is this done? Well, that answer depends on what you are applying integrity controls to. (Again, this blog post isn’t intended to be an in-depth tutorial on the details but a simple update and overview.) The simplest and most well-known approach to ensuring integrity is to use a signature. Most people are familiar with this from signing a document or writing a check. And most people know that the bank, or whomever else we’re signing a document for, knows that signatures are not perfect, so you often have to present an ID (passport, driver’s license, whatever) to prove that you are the party to which your signature attests on that document or check.

We can also implement similar steps in information systems, although the process is a bit different. We produce a signature of data by using math; in fact, integrity is a field of cryptography that complements encryption. A signature consists of two parts, or steps. First, data is run through a mathematical function called hashing. Hashing is simply a one-way process that reduces a large piece of data to a few bits (128-256 bits is typical) in a way that is computationally difficult to reverse and unlikely to be reproduced from different data. The result is often referred to as a digest, and a digest is unlikely to be duplicated from different source data (when that happens, we call it a collision). This alone can be useful and is used in many ways in information systems. But it doesn’t attest to the source of the data or the authenticity of the data. It just shows whether the data has been changed. If we encrypt the digest, perhaps using asymmetric cryptography supported by a public key infrastructure, we produce a signature. That signature can now be validated through a cryptographic challenge and response. This is largely equivalent to being asked to show your ID when you sign a check.
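As a concrete (if simplified) illustration of those two steps, here is a short Python sketch using the standard library’s hashlib plus the third-party cryptography package; the message and keys are made up for the example, and a production system would manage keys through a PKI rather than generating them inline:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

message = b"configuration file contents"

# Step 1: hash the data down to a fixed-size digest (SHA-256 -> 256 bits).
print("digest:", hashlib.sha256(message).hexdigest())

# Step 2: sign with a private key; anyone holding the public key can verify.
# (Ed25519 hashes internally as part of signing.)
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(message)
public_key = private_key.public_key()

public_key.verify(signature, message)  # no exception: message is intact and authentic
print("signature valid for the original message")

# A single changed character breaks verification - that is the integrity guarantee.
try:
    public_key.verify(signature, b"configuration file Contents")
except InvalidSignature:
    print("tampered message rejected")
```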

One thing to be mindful of is that encryption doesn’t ensure integrity. Although an adversary who intercepts an encrypted message may not be able to read that message, they may be able to alter the encrypted text and send it on its way to the intended recipient. That alteration may be decrypted as valid. In practice this is hard because without knowledge of the original message, any changes are likely to just be gibberish. However, there are attacks in which the structure of the original message is known. Some ciphers do include integrity assurances, as well, but not all of them. So, implementors need to consider what is best for a given solution.
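To see why encryption alone doesn’t provide integrity, here is a small sketch using the cryptography package; the plaintext and the single-byte attack are contrived, but they show how an attacker who knows the message structure can flip ciphertext bits in AES counter (CTR) mode and change the decrypted result without ever touching the key:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
plaintext = b"PAY $100 TO ALICE"

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = bytearray(encryptor.update(plaintext) + encryptor.finalize())

# The attacker guesses the layout: byte 5 holds the '1' in "$100".
# In CTR mode, flipping ciphertext bits flips the same plaintext bits.
ciphertext[5] ^= ord("1") ^ ord("9")

decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
print(decryptor.update(bytes(ciphertext)) + decryptor.finalize())
# b'PAY $900 TO ALICE' -- decryption "succeeds," but the amount was altered.
```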

Approaches to Integrity

Integrity is applied somewhat differently to data in motion and at rest, and to systems, software, and even supply chains. Here’s a brief summary of the tools and approaches for each area:

  • Information in motion: How is the signature scheme above applied to transmitted data? The most common means uses a process similar to what is described above. Hash-based Message Authentication Codes (HMACs) create a digest of a packet with a shared secret key mixed into the hash computation, so only parties holding that key can produce or verify the digest (asymmetric signatures, by contrast, use a private key to sign and a public key to verify). One old but still relevant description of HMAC is RFC 2104, available from the IETF here; a minimal sketch follows this list.
  • Information at rest: In many ways, assuring integrity of files on a storage server or a workstation is more challenging than integrity of transmitted information. Usually, storage is shared by many organizations or departments. What secret keys should be used to produce a signature of those files? Sometimes, things are simplified. Access controls can be used to ensure only authorized parties can access a file. When access controls are effective, perhaps only hashing of the data file is sufficient to prove integrity. Again, some encryption schemes can include integrity protection. The key problem noted above still remains a challenge there. Most storage solutions, both proprietary and open source, provide a wide range of integrity protection options and it can be challenging for the security engineer to architect the best solution for a given application.
  • Software: Software is, of course, a special type of information. And so, the ideas of how to apply integrity protections to information at rest can apply to protecting software. However, how software is used in modern systems with live builds adds additional requirements. Namely, this means that before a given system uses software, that software should be validated as being from an authorized source and that it has not been altered since being provided by that source. The same notion of producing a digest and then encrypting the digest to form a signature applies, but that signature needs to be validated before the software is loaded and used. In practice, this is done very well in some ecosystems and either done poorly or not at all in other systems. In cable systems, we use a process referred to as Secure Software Download to ensure the integrity of firmware downloaded to cable modems. (See section 14 of Data-Over-Cable Service Interface Specifications 3.1)
  • Systems: Systems are comprised of hardware and software elements, yet the overall operation of the hardware tends to be governed by configurations and settings stored in files and software. If the files and software are changed, the operation of the system will be affected. Consequently, the integrity of the system should be tracked and periodically evaluated. Integrity of the system can be tracked through attestation – basically producing a digest of the entire system and then storing that in protected hardware and reporting it to secure attestation servers. Any changes to the system can be checked to ensure they were authorized. The processes for doing this are well documented by the Trusted Computing Group. Another process well codified by the Trusted Computing Group is Trusted Boot. Trusted boot uses secure modules included in hardware to perform a verified launch of an OS or virtual environment using attestation.
  • Supply chain: A recent focus area for integrity controls has been supply chains. How do you know where your hardware or software is coming from? Is the system you ordered the system you received? Supply chain provenance can be attested using a wide range of tools, and application of distributed ledger or blockchain technologies is a prevalent approach.
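For the information-in-motion item above, here is a minimal HMAC sketch using only the Python standard library; the key and message are placeholders, and in practice the shared secret would be provisioned out of band:

```python
import hashlib
import hmac

secret_key = b"shared-secret-provisioned-out-of-band"
message = b"packet payload bytes"

# Sender computes the tag and transmits it alongside the message.
tag = hmac.new(secret_key, message, hashlib.sha256).digest()

# Receiver recomputes the tag over what arrived and compares in constant time.
expected = hmac.new(secret_key, message, hashlib.sha256).digest()
print("intact message verified:", hmac.compare_digest(tag, expected))

# Any change to the message (or a tag forged without the key) fails the check.
tampered_tag = hmac.new(secret_key, b"packet payload byteZ", hashlib.sha256).digest()
print("tampered message verified:", hmac.compare_digest(tag, tampered_tag))  # False
```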

Threats to Integrity

What are the threats to integrity? One example that has impacted network operators and their users repeatedly is changing of the DNS settings on gateways. If an adversary can change the DNS server on a gateway to a server they control (incorporating strong access controls minimizes this risk), then they can selectively redirect DNS queries to spoofed hosts that look like authentic parties (e.g., banks, credit card companies, charity sites) and harvest customers’ credentials. The adversary can then use those credentials to access the legitimate site and do whatever that allows (e.g., empty bank accounts). This can also be done by altering the DNS query in motion at a compromised router or other server through which the query traverses (HMAC or encryption with integrity protection would prevent this). Software attacks can occur even at the source but also at intermediate points in the supply chain. Tampered software is a prevalent way malware is introduced to end-points and can be very hard to detect because the software appears legitimate. (Consider the Wired article, “Supply Chain Hackers Snuck Malware Into Videogames.”)

The Future of Integrity

Cable technology has already addressed many integrity threats. As mentioned above, DOCSIS technology already includes support for Secure Software Download to provide integrity verification of firmware. This addresses both software supply chain provenance and tampering of firmware. Many of our management protocols are also protected by HMAC. Future controls will include trusted boot and hardened protection of our encryption keys (also used for signatures). We are designing solutions for virtualized service delivery, and integrity controls are pervasively included across the container and virtual machine architectures being developed.

Adding integrity controls to our tools for ensuring confidentiality provides defense in depth. Integrity is a fundamental security component and should be included in any security-by-design strategy. You can read more about security by clicking below.


Learn More About 10G

Security

False Base Station or IMSI Catcher: What You Need to Know

Tao Wan
Principal Architect, Security

Oct 23, 2019

You might have heard of False Base Station (FBS), Rogue Base Station (RBS), International Mobile Subscriber Identifier (IMSI) Catcher or Stingray. All four of these terms refer to a tool consisting of hardware and software that allows for passive and active attacks against mobile subscribers over radio access networks (RANs). The attacking tool (referred to as FBS hereafter) exploits security weaknesses in mobile networks from 2G (second generation) to 3G, 4G and 5G. (Certain improvements have been made in 5G, which I’ll discuss later.)

In mobile networks of all generations, cellular base stations periodically broadcast information about the network. Mobile devices or user equipment (UE) listen to these broadcasting messages, select an appropriate cellular cell and connect to the cell and the mobile network. Because of practical challenges, broadcasting messages aren’t protected for confidentiality, authenticity or integrity. As a result, broadcasting messages are subject to spoofing or tampering. Some unicasting messages aren’t protected either, also allowing for spoofing. The lack of security protection of mobile broadcasting messages and certain unicasting messages makes FBS possible.

An FBS can take various forms, such as a single integrated device or multiple separate components. In the latter form [1], an FBS usually consists of a wireless transceiver, a laptop and a cellphone. The wireless transceiver broadcasts radio signals to impersonate legitimate base stations. The laptop connects to the transceiver (e.g., via a USB interface) and controls what to broadcast as well as the strength of the broadcasting signal. The cellphone is often used to capture broadcasting messages from legitimate base stations and feed them into the laptop to simplify the configuration of the transceiver. In either form, an FBS can be made compact with a small footprint, allowing it to be left in a location unnoticed (e.g., mounted to a street pole) or carried conveniently (e.g., inside a backpack).

An FBS often broadcasts the same network identifier as a legitimate network but with a stronger signal to lure users away. How much stronger does an FBS’s signal need to be to succeed? The answer to that question hasn’t been well understood until recently. According to the experiments in the study [2], an FBS’s signal must be more than 30 dB stronger than a legitimate signal to have any success. When the signal is 35 dB stronger, the success rate is about 80 percent. When it’s 40 dB stronger, the success rate increases to 100 percent. In these experiments, the FBS broadcast the same messages with the same frequency and band as the legitimate cell. Another strategy taken by an FBS is to broadcast the same network identifier but with a different tracking area code, tricking the UE into believing that it has entered a new tracking area and should switch to the FBS. This strategy can make it easier to lure the UE to the FBS and should help reduce the signal strength required by the FBS to be successful. However, the exact signal strength requirement in this case wasn’t measured in the experiments.
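To put those decibel figures in perspective, a small calculation (plain arithmetic, independent of the study) converts the dB differences into linear power ratios:

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a decibel difference into a linear power ratio."""
    return 10 ** (db / 10)

for db in (30, 35, 40):
    print(f"{db} dB stronger -> about {db_to_power_ratio(db):,.0f}x the received power")
# 30 dB stronger -> about 1,000x the received power
# 35 dB stronger -> about 3,162x
# 40 dB stronger -> about 10,000x
```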

Once camped at an FBS, a UE is subject to both passive and active attacks. In passive attacks, an adversary only listens to radio signals from both the UE and legitimate base stations without interfering with the communication (e.g., with signal injection). Consequences from passive attacks include—but are not limited to—identity theft and location tracking. In addition, eavesdropping often forms a stepping stone toward active attacks, in which an adversary also injects signals. An active attacker can be a man-in-the-middle (MITM) or man-on-the-side (MOTS) attacker.

In MITM attacks, the attacker is on the path of the communication between a UE and another entity and can do pretty much anything to the communication, such as reading, injecting, modifying and deleting messages. One such attack is to downgrade a UE to 2G with weak or null ciphers to allow for eavesdropping. Another example of an MITM attack is aLTEr [3], which only tampers with DNS requests in LTE networks, without any downgrading or tampering of control messages. Although user plane data is encrypted in LTE, it’s still subject to tampering if the encryption (e.g., AES counter mode) is malleable due to the lack of integrity protection.

In MOTS attacks, an attacker doesn’t have the same amount of control over communication as with an MITM attack. More often, the attacker injects messages to obtain information from the UE (e.g., stealing the IMSI by an identity request), send malicious messages to the UE (e.g., phishing SMS) or hijack services from a victim UE (e.g., answering a call on behalf of the UE [4]). A MOTS attacker, without luring a UE to connect to it, can still interfere with existing communication—for example, by injecting slightly stronger signals that are well timed to overwrite a selected part of a legitimate message [2].

FBS has been a security threat to all generations of mobile networks since 2G. Mitigation of FBS was studied by 3GPP in the past—however, without any success due to practical constraints such as deployment challenges in cryptographic key management and difficulty in timing synchronization. In 5G release 15 [5], network-side detection of FBS is specified, which can help mitigate the risk, albeit without preventing FBS. 5G release 15 also introduces public key encryption of the subscription permanent identifier (SUPI) before it is sent out from the UE, which—if implemented—makes it difficult for an FBS to steal the SUPI. In 5G release 16 [6], FBS is being studied again. Various solutions have been proposed, including integrity protection of broadcasting, paging and unicasting messages. Other detection approaches have also been proposed.

Our view is that FBS arises mainly from the lack of integrity protection of broadcasting messages. Thus, a fundamental solution is to protect broadcasting messages with integrity (e.g., using public key based digital signatures). Although challenges remain with such a solution, we don’t believe those challenges are insurmountable. Other solutions are based on the signatures of attacks, which may help but can eventually be bypassed when attacks evolve to change their attacking techniques and behaviors. We look forward to agreement from 3GPP SA3 on a long-term solution that can fundamentally solve the problem of FBS in 5G.

To learn more about 5G in the future, subscribe to our blog.


SUBSCRIBE TO OUR BLOG

References

[1] Li, Zhenhua, Weiwei Wang, Christo Wilson, Jian Chen, Chen Qian, Taeho Jung, Lan Zhang, Kebin Liu, Xiangyang Li, and Yunhao Liu. “FBS-Radar: Uncovering Fake Base Stations at Scale in the Wild.” In Proceedings of ISOC Symposium on Network and Distributed Systems Security (NDSS), February 2017.

[2] Hojoon Yang, Sangwook Bae, Mincheol Son, Hongil Kim, Song Min Kim, and Yongdae Kim. “Hiding in Plain Signal: Physical Signal Overshadowing Attack on LTE.” In Proceedings of 28th USENIX Security Symposium (USENIX Security), August 2019.

[3] Rupprecht D, Kohls K, Holz T, and Popper C. “Breaking LTE on Layer Two.” In Proceedings of IEEE Symposium on Security & Privacy (S&P), May 2019.

[4] Golde N, Redon K, and Seifert JP. “Let Me Answer That for You: Exploiting Broadcast Information in Cellular Networks.” In Proceedings of the 22nd USENIX Security Symposium (USENIX Security), August 2013.

[5] 3GPP TS 33.501, “Security Architecture and Procedures for 5G System” (Release 15), v15.5.0, June 2019.

[6] 3GPP TR 33.809, “Study on 5G Security Enhancement against False Base Stations” (Release 16), v0.5.0, June 2019.
