
Cable Network Reliability: ProOps Platform for PNM and More!

Jason Rupe
Principal Architect

Jul 23, 2019

Cable network reliability has many important dimensions, but operators are all too familiar with the significant cost of maintenance and repair, and some with the advantages of Proactive Network Maintenance (PNM). But not everyone has taken full advantage of PNM. Let’s have a look at some of the reasons for that, and what CableLabs is doing to address those needs as part of its PNM project.

The Proactive Operations Problem

CableLabs has been informally assessing the reasons why more operators don’t take advantage of the proactive gift that DOCSIS® provides: the ability to use PNM data to find problems in the network before they become impactful and costly.

It takes a lot of work to implement solid PNM solutions that keep working. A key task in operations is to make decisions based on data. That takes expertise and time. Not every operator or vendor has an expert army in place to analyze all the available operations data to find proactive maintenance work worth doing. Machine learning is anticipated to help, but it will require a lot of work to apply those techniques successfully to an operations task like PNM, and even more to develop the needed controls. Likewise, not every operator or vendor has a statistical analysis or IT army in place to build enterprise tools to automate the process of turning data into action.

Some operators need to start small with testing PNM concepts to find a solution that fits their needs. That means many operators must experiment and learn first. But that requires basic, general tools in hand before experimentation can begin.

A ProOps Platform for Everyone

Figure 1. ProOps with its elements and workers in four layers, built on CCF, on top of the network.

CableLabs created a generalized process for translating data into operations actions and applied it to PNM. Then we built the Proactive Operations (ProOps) platform to enable this process, thus making it easy for everyone to try, develop, deploy and make full use of PNM.

ProOps translates network data into action through a framework that is enabled and supported rather than strictly enforced, to better ensure effective proactive maintenance.

The steps we identify for turning network data into action are briefly as follows, moving up from the network, through data collection, and through the worker layers of ProOps in Figure 1; a minimal code sketch of the pipeline follows the list.

  • Extract Data from the Common Collection Framework (CCF)—ProOps uses CCF to extract the data it needs from the network, then applies basic analysis to translate the data into useful information.
  • Analyze for Triggering—Next, the results are analyzed further to determine whether they are interesting or not; interesting results are “triggered” for deeper scrutiny. The data are looked at over time and across data sources to orient the information into context.
  • Make It Actionable—Once we find the most interesting network elements to watch, we group them into work tasks and provide a measure of importance for the identified work.
  • Threshold Analysis—The best work opportunities are picked to become proactive work packages, which can be selected based on impact to customers, likelihood of becoming an emergency, and so on.
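
For operators who want to see the flow end to end, here is a minimal sketch of those four worker layers as a data pipeline. Every name below (classes, functions, metrics) is a hypothetical illustration of the pattern, not the actual ProOps API.

```python
# Illustrative sketch of the four ProOps worker layers as a data pipeline.
# All names here are hypothetical; the real ProOps interfaces may differ.

from dataclasses import dataclass

@dataclass
class Observation:
    """One PNM measurement for a network element (e.g., a cable modem)."""
    element_id: str
    metric: str            # e.g., "rx_mer_db"
    value: float

@dataclass
class WorkPackage:
    element_ids: list
    priority: float        # higher = more valuable proactive work

def extract(raw_records: list) -> list:
    """Layer 1: turn raw CCF records into useful information."""
    return [Observation(r["element"], r["metric"], float(r["value"]))
            for r in raw_records]

def trigger(observations: list, threshold: float) -> list:
    """Layer 2: keep only the 'interesting' results for deeper scrutiny."""
    return [o for o in observations if o.value < threshold]  # e.g., low MER

def make_actionable(triggered: list) -> list:
    """Layer 3: group flagged elements into candidate work, with a score."""
    groups = {}
    for o in triggered:
        groups.setdefault(o.metric, []).append(o.element_id)
    return [WorkPackage(ids, priority=len(ids)) for ids in groups.values()]

def threshold_analysis(candidates: list, top_n: int) -> list:
    """Layer 4: pick the best opportunities as proactive work packages."""
    return sorted(candidates, key=lambda w: w.priority, reverse=True)[:top_n]

raw = [{"element": "cm-001", "metric": "rx_mer_db", "value": 27.5},
       {"element": "cm-002", "metric": "rx_mer_db", "value": 38.1}]
work = threshold_analysis(make_actionable(trigger(extract(raw), 30.0)), top_n=5)
```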

You ShOODA Get ProOps!

The steps we outline for turning network data into action—or in this case pro-action—align nicely with the well-known strategy of observe, orient, decide, act (OODA). This OODA loop, or OODA process, was created by U.S. Air Force Colonel John Boyd for combat operations. The operations of combating network failure aren’t much different! If you work as a cable operator, then you know.
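
To make the analogy concrete, here is one possible reading of how the layers above line up with OODA; the mapping is our illustration, not a formal part of either OODA or ProOps.

```python
# One possible alignment of OODA phases with the ProOps worker layers
# described above (illustrative only).
OODA_TO_PROOPS = {
    "Observe": "Extract data from the network through the CCF",
    "Orient":  "Analyze over time and across data sources for triggering",
    "Decide":  "Group elements into work and rank it via threshold analysis",
    "Act":     "Dispatch the selected proactive work packages to the field",
}

for phase, layer in OODA_TO_PROOPS.items():
    print(f"{phase:>7}: {layer}")
```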

ProOps is available upon request to any operator member or vendor of the CableLabs community. CableLabs supports users by helping them deploy ProOps with an example application that shows how to configure it for a specific operator or use case, and we will help our members develop solutions in it, too. Just contact Jason Rupe to get your copy.

Our goal is to help operators provide highly reliable service, and efficient, effective operations are one proven way to do that. ProOps is the latest tool to combat network failures.



2 Resolutions for World Wi-Fi Day 2017

Rob Alderfer
VP, Technology Policy

Jun 19, 2017

It’s World Wi-Fi Day! Really, given how much we work on Wi-Fi technology, every day is Wi-Fi Day at CableLabs. Since the rest of the world has decided to take note, it feels a bit like New Year’s. So, how about a couple of resolutions?

In the coming year, we in the Wi-Fi industry should resolve to support the continued growth of Wi-Fi with two major initiatives:

1) Enhanced Wireless Spectrum Access

Spectrum, or the airwaves that wireless communication travels over, is the key ingredient for Wi-Fi. While CableLabs and others in the industry work hard to improve Wi-Fi technologies, it is all for naught if we don’t have the wireless bandwidth to make it work. This is becoming more important as wireless use grows, putting pressure on the capacity we have today.

To stick with this resolution, we’ll need the help of regulators around the world that control access to spectrum. For example, the latest Wi-Fi technology, 802.11ac, is known as “gigabit Wi-Fi” for the high performance it offers. Unfortunately, due to lack of spectrum, the full potential of this technology can’t be realized.

To fix this, we must look at the 5 GHz frequency band. You probably have a router at home that is “dual band,” meaning it uses the 5 GHz band in addition to 2.4 GHz. If you acquired it in the last year or two, it is likely 802.11ac (“gigabit Wi-Fi”) capable. We need to open up more spectrum for Wi-Fi in the 5 GHz range to fully enable this technology. Particularly as wired broadband speeds continue to increase, we need to be sure that the final step to your wireless device isn’t a bottleneck.
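
To see concretely why channel width, and therefore spectrum, matters, here is a back-of-the-envelope 802.11ac PHY rate calculation; the subcarrier counts and symbol timing below come from the 802.11ac specification, and the script simply multiplies them out.

```python
# Back-of-the-envelope 802.11ac (VHT) peak PHY rates by channel width,
# showing why wide 5 GHz channels are what make "gigabit Wi-Fi" possible.

DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}  # per width in MHz
SYMBOL_US = 3.2        # OFDM symbol duration in microseconds
SHORT_GI_US = 0.4      # short guard interval

def vht_peak_mbps(width_mhz: int, streams: int = 1) -> float:
    """Peak rate at MCS 9 (256-QAM, rate 5/6) with a short guard interval.
    (MCS 9 is not defined for every width/stream combination; this is a
    rough calculation, not a capability table.)"""
    bits_per_symbol = DATA_SUBCARRIERS[width_mhz] * 8 * (5 / 6)
    return streams * bits_per_symbol / (SYMBOL_US + SHORT_GI_US)

for width in (20, 40, 80, 160):
    print(f"{width:>3} MHz, 1 stream: {vht_peak_mbps(width):7.1f} Mbps")
# 80 MHz -> ~433 Mbps and 160 MHz -> ~867 Mbps per stream: gigabit-class
# rates need wide channels, and wide channels need contiguous 5 GHz spectrum.
```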

Progress on enabling Wi-Fi access to additional spectrum requires technical acumen to ensure that Wi-Fi can share the airwaves without causing harmful interference to other services. That’s where our team at CableLabs comes in.

In the US, the FCC is examining how Wi-Fi can share with transportation communications technology in the upper part of the 5 GHz band, an issue they have been looking at for over four years. We’ve studied this, and we believe spectrum sharing technology supports this proposal. It’s time to move forward so that consumers can realize the full benefits of gigabit Wi-Fi.

CableLabs has applied our expertise to questions of spectrum sharing before, with a lot of success. In 2014, we did the behind-the-scenes work to open up the lower part of the 5 GHz band for Wi-Fi in the US. And just last month, the Canadian government followed this precedent.

2) Enable Reliable Coexistence Between Wi-Fi and Other Technologies

Though most people may not think too much about 5 GHz spectrum, those who do probably think of it as Wi-Fi spectrum. That’s understandable, since there are literally billions of Wi-Fi devices out there. More accurately, however, it is unlicensed spectrum (or license-exempt, in Europe), meaning other technologies can also use the same frequencies.

Unlicensed spectrum is becoming more popular, and new technologies are moving in. Those that have a similar usage pattern are likely to run into Wi-Fi. Therefore, these new technologies need to be designed to play nicely, just as Wi-Fi is designed to do with its listen-before-talk protocols. Coexistence between technologies in unlicensed spectrum is of paramount importance to ensure that consumers win from new wireless innovation.
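
For readers unfamiliar with the mechanism, here is a toy sketch of the listen-before-talk idea: energy detection plus random backoff, in the spirit of Wi-Fi’s CSMA/CA. The threshold and window values are illustrative, and real 802.11 behavior (slot timing, exponential contention windows, virtual carrier sense) is considerably richer.

```python
# Toy model of listen-before-talk (LBT) with random backoff, in the spirit
# of Wi-Fi's CSMA/CA. Illustrative values; not any standard's exact procedure.

import random

ED_THRESHOLD_DBM = -62.0   # assumed energy-detection threshold

def channel_is_idle(power_dbm: float) -> bool:
    """'Listen': the medium counts as idle below the detection threshold."""
    return power_dbm < ED_THRESHOLD_DBM

def transmit_with_lbt(sense) -> bool:
    """'Before talk': count down a random backoff during idle slots only."""
    backoff = random.randint(0, 15)      # slots drawn from a contention window
    while backoff > 0:
        if channel_is_idle(sense()):     # sense() returns measured power, dBm
            backoff -= 1                 # count down while the medium is idle
        # if the medium is busy, freeze the counter and keep listening
    return True                          # contention won: send the frame

# Example: sensing a mostly idle channel near the thermal noise floor.
if transmit_with_lbt(lambda: random.gauss(-90.0, 5.0)):
    print("medium acquired politely; frame sent")
```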

The leading case in this area is, of course, LTE-U, which we’ve written about extensively. CableLabs worked diligently to surface problems with LTE-U coexistence technology and led industry-wide efforts in the Wi-Fi Alliance to develop tests that can verify how well LTE-U equipment shares spectrum with Wi-Fi before it hits the street. Industry collaboration is an effective means of addressing coexistence issues, and mobile carriers have stated that they will stick with the results of that process. We have since seen LTE-U devices approved after going through the tests.

The LTE-U story is, for the most part, a good example of how industries can come together to protect consumers who rely on unlicensed spectrum. However, the level of collaboration seen since then has, unfortunately, diminished significantly. Specifically, there isn’t visibility into how the industry-agreed coexistence tests are implemented and used. No LTE-U vendor has released the results of its coexistence tests, even though all are happy to tout that they have passed with flying colors. Transparency is important to validate coexistence performance, and mobile carriers and vendors should be more forthcoming.

Beyond LTE-U, which is a proprietary and non-standard technology aimed at unlicensed spectrum, we have LAA-LTE, which is the global standard version developed at 3GPP. There’s more reason for optimism around LAA coexistence since it uses listen-before-talk etiquette similar to Wi-Fi. But, when it comes to validating that optimism through coexistence tests, the work at 3GPP has been sorely lacking.

Just a couple of weeks ago, a key 3GPP working group produced what it deems to be final coexistence testing guidance associated with LAA. This work product does little to reassure consumers who rely on unlicensed spectrum. The guidance recommends only limited testing, which will not come close to approximating real-world technology interactions. Additionally, mobile carriers and vendors may not follow even this limited guidance, since it is completely optional under the 3GPP specification. It is important that we get coexistence right, because new technologies that will also use unlicensed spectrum alongside Wi-Fi are coming: MulteFire, eLAA, and 5G.

These are our two big World Wi-Fi Day resolutions. Help us celebrate World Wi-Fi Day by commenting yours below. Be sure to check out our blog posts “Solutions for Whole Home Wi-Fi Coverage,” “Carrier Wi-Fi is Now Certified Vantage” and “Multiple Access Point Architectures and Whole Home Wi-Fi Coverage” to read more about how CableLabs is engaged in Wi-Fi efforts and does its best to protect consumers, making new wireless innovations a win for everyone!


NFV and SDN: Paving the Way to a Software-Based Networking Future

Don Clarke
Principal Architect, Network Technologies

Mar 23, 2015

When ONF Executive Director Dan Pitt invited me to contribute a blog post, it brought to mind our interaction in the summer of 2012 on how to treat SDN in the seminal NFV White Paper I was then editing. The operator co-authors were keen to ensure that SDN and NFV were positioned in the paper as complementary. This was important because we wanted to create momentum for NFV by highlighting use cases that did not require the then-perceived complexity of SDN. As soon as the ETSI NFV Industry Specification Group (NFV ISG) was launched, we engaged with ONF, recognizing its key role in championing an open SDN ecosystem. And in 2014 the NFV ISG entered into an MoU with ONF to facilitate joint work.

The vision for NFV was compelling because the benefits could be readily attained. By replacing network appliances based on proprietary hardware with virtualized network functions (VNFs) running on industry-standard servers, operators could greatly accelerate time to market for new services and streamline operations through automation. Moreover, important NFV use cases (e.g., virtualized CPE) would not require massive systems upgrades, which have been a huge barrier to innovation in telecoms. We are seeing this first-hand at CableLabs, where we have been able to prototype virtualized CPE for business services and home networks on a two-month development cycle.

In contrast, the simplified definition of SDN (the separation of the control plane from the data plane) does not, in my mind, adequately convey the compelling benefits of SDN. The term ‘Software Defined Networking’ should mean just that: every element of the network, including the VNFs and network control, should be implemented within a fully programmable software environment, exposing open interfaces and leveraging the open source community. This is the only way to create an open ecosystem and unleash a new and unprecedented wave of innovation in every aspect of networking.

NFV releases network functions “trapped inside hardware” (a description I stole from an HP colleague), achieving tremendous benefits. But VNFs must be dynamically configured and connected at scale to deliver tangible value. While today’s telecommunications operations support systems (OSS) are adequate for static NFV use cases, the real potential for NFV to transform networking can only be realized through SDN control. Consequently, SDN represents much more than the mere separation of control plane and data plane.
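
To make the control-plane/data-plane split concrete, here is a minimal match-action sketch in which a controller programs switches that do nothing but match and forward. The classes are illustrative toys under assumed names, not OpenFlow or any real controller API.

```python
# Minimal sketch of control/data plane separation: the controller computes
# forwarding decisions and programs match-action rules; switches only match
# and forward. Names are illustrative, not a real SDN API.

from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass(frozen=True)
class Match:
    dst_ip: str                  # toy match key; real rules match many fields

class Switch:
    """Data plane: forwards using installed rules; no path logic of its own."""
    def __init__(self) -> None:
        self.flow_table: Dict[Match, int] = {}

    def install(self, match: Match, out_port: int) -> None:
        self.flow_table[match] = out_port

    def forward(self, dst_ip: str) -> Optional[int]:
        # In practice an unmatched packet is punted to the controller.
        return self.flow_table.get(Match(dst_ip))

class Controller:
    """Control plane: holds the global view and programs every switch."""
    def __init__(self, switches: List[Switch]) -> None:
        self.switches = switches

    def steer(self, dst_ip: str, out_port: int) -> None:
        for sw in self.switches:  # e.g., steer traffic through a VNF chain
            sw.install(Match(dst_ip), out_port)

sw = Switch()
Controller([sw]).steer("10.0.0.7", out_port=2)
assert sw.forward("10.0.0.7") == 2   # the data plane now forwards per rule
```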

Given that telecommunications networks are deployed at massive geographic scale, it is a hard sell to convince thousands, or even millions, of customers that their services will be migrating to a new network platform where those services will not be quite the same but prices won’t go down. Couple that with the significant time and cost to upgrade the OSS, wide-ranging operational process changes, the need to validate that the new platforms are sufficiently stable and reliable, and the obligations of regulation, and it is not surprising that there is hesitancy to contemplate significant telecommunications network transformations.

Consequently, the telecoms industry has resorted to decades of incremental network upgrades which have piled legacy functionality on top of legacy functionality to avoid the costs and risks of wholesale network and services migration. In the face of these realities, SDN was perceived to offer insufficient benefit to justify significant investment except in niche areas where it could be overlaid on top of existing systems. Furthermore, the idea of logically centralized SDN control is very scary to network designers who don’t readily understand abstract software concepts and who lose sleep striving to deliver reliable connectivity at massive scale, with relentless downward pressure on costs.

Just over two years into the NFV revolution, it is clear that the emergence of NFV has galvanized the industry to embrace software-based networking, short-circuiting a transition that might otherwise have taken years. The revelation that NFV can be deployed in digestible chunks, without massive system upgrades, has forced network designers to take notice. After all, it is difficult to ignore a pervasive industry trend when vendors’ product plans have morphed into software roadmaps!

Given that NFV is now accepted by all major network operators and some have already made significant announcements, there is no turning back. Leading vendors have committed to NFV roadmaps and analysts talk about ‘when’ and not ‘if’ NFV will be deployed. More importantly, SDN and NFV are now frequently discussed in the same breath. In my mind, the distinction between NFV and SDN is becoming an artifact of history, and the terms will ultimately be subsumed by a software-based networking paradigm, which itself will emerge as an integral aspect of Cloud technology.

The emergence of NFV with SDN is accelerating the evolution of cloud technologies to satisfy the stringent requirements of software-based telecommunications networks. Whereas a web service could momentarily stall with minimal customer impact while a virtual machine reboots, some business-critical network services cannot tolerate loss of connectivity even for a few milliseconds. Therein lies both challenge and opportunity. Challenge because meeting stringent telecommunications availability and performance requirements is not easy as evidenced by the ETSI NFV ISG’s deliberations. Opportunity, because I foresee an unprecedented wave of telecommunications innovation on a par with the birth of the Internet.

Carrier-grade network resilience (e.g., five-nines and beyond) will be achieved by pooling virtualized resources; fault management will be supplanted by autonomic self-healing networks that can not only withstand equipment failures but even rapidly recover from large-scale natural disasters by instantly migrating network capacity to remote locations, as demonstrated by NTT DOCOMO et al. in the aftermath of the Fukushima disaster. And exciting new routing paradigms such as intent-based networking and content-based networking will become feasible in a much earlier timeframe, with innovation galvanized by the potential for imminent experimentation on deployed infrastructures. I could go on…
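
As a rough illustration of the pooling point (with assumed figures, not measured data): a service that needs k healthy instances out of an n-instance pool is up whenever at least k instances are up, so its availability is a binomial tail that accumulates nines quickly.

```python
# Rough availability math for pooled virtualized resources. The per-instance
# availability below is an assumed figure, purely for illustration.

from math import comb

def pool_availability(n: int, k: int, a: float) -> float:
    """P(at least k of n instances up), each independently up w.p. a."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

a = 0.999  # assume three-nines availability per instance
print(f"1-of-1: {pool_availability(1, 1, a):.6f}")    # 0.999000 (three nines)
print(f"2-of-3: {pool_availability(3, 2, a):.6f}")    # ~0.999997
print(f"3-of-5: {pool_availability(5, 3, a):.9f}")    # ~0.999999990 (~8 nines)
# Pooling modestly reliable instances yields five-nines-class availability
# and beyond, without any single instance being anywhere near that reliable.
```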

The genie of software-based networking — where synergies between NFV and SDN result in significantly greater capability than either could deliver alone — is now truly out of the bottle. The ultimate challenge is to encourage growth of an open telecommunications ecosystem, where operators and vendors can work together to create and deliver value to their customers. Energized by the NFV ISG and ONF, among other industry groups, and open source projects that are becoming increasingly important, the reality is just around the corner.

Don Clarke is Principal Architect for Virtualization Technologies at CableLabs and Chairman of the ETSI NFV ISG Network Operator Council.
