Device Security in the Internet of Things
As of this writing, some of the largest distributed denial-of-service (DDoS) attacks ever recorded are actively disrupting major service and content providers. Many of the attacks are reported to leverage Internet of Things devices such as IP cameras. It is striking that these dramatic attacks are happening during Cybersecurity Awareness Month.
How to Effect Change in Security
For many, IoT literally opens doors: for those of us who need electronic assistance with key tasks, it is critical to daily living. And with an estimated 20 billion devices online four years from now, securing those devices is a critical requirement. CableLabs is focused on securing Internet of Things (IoT) devices for three specific reasons: 1) our desire to protect the privacy and security of our subscribers; 2) enabling trust in the technology automating the environments we live in; and 3) the need to protect the network infrastructure supporting subscriber services. Our technical teams are actively working toward solutions both for existing devices, whose heterogeneous security models can be handled through advanced networking techniques, and for future devices, by guiding standards bodies and industry coalitions on security considerations.
Who is Looking out for Your Privacy?
Subscriber privacy goes beyond personal anonymity; it includes protecting information that can be used to identify people or their devices. Consider a mobile device, such as a Bluetooth fitness band, that broadcasts its unique identifier whenever requested (such as during any handshake to authenticate the device on various networks). That broadcast identifier could be used without the device owner’s knowledge to identify and track shoppers in a mall, protesters, or visitors at medical clinics, among other concerns. Interestingly, network protection starts with device identity, and while many put this in opposition to subscriber privacy, it does not need to be. Prior to onboarding into the network, which involves authentication and authorization as well as exchanging credentials and network configuration details, devices can present a temporary, random identifier for each new onboarding request. After onboarding into a network, devices need an immutable, attestable, and unique identifier so that network operators can trace malicious behavior. Insecure devices that evade identification, spoof their network addresses, or misrepresent themselves while participating in botnets are a threat to everyone. Being able to rapidly trace attacks back to offending devices allows operators to coordinate more effectively with device owners in surgically tracking down and quarantining these threats.
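The two identity phases described above can be sketched in a few lines. This is a minimal illustration of the idea, not any particular product's scheme; the class, method names, and the factory identifier are all hypothetical:

```python
import secrets


class IoTDevice:
    """Illustrative device identity model (all names are hypothetical)."""

    def __init__(self, factory_id: str):
        # Immutable, attestable identifier fixed at manufacture.
        self._factory_id = factory_id
        self.onboarded = False

    def advertised_id(self) -> str:
        """Identifier the device presents on the network."""
        if not self.onboarded:
            # Before onboarding: a fresh random identifier per request,
            # so passive observers cannot correlate or track the device.
            return secrets.token_hex(6)
        # After onboarding: the stable identifier, so the operator can
        # trace malicious behavior back to this specific device.
        return self._factory_id


device = IoTDevice("24:6f:28:aa:bb:cc")
pre_a, pre_b = device.advertised_id(), device.advertised_id()   # differ
device.onboarded = True
post_a, post_b = device.advertised_id(), device.advertised_id() # identical
```

The point of the sketch is the switch: unlinkable identifiers while untrusted parties may be listening, a single traceable identifier once the operator has authenticated the device.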
Security – Where, When and How
Subscriber security is different from privacy and looks to ensure availability, confidentiality, and integrity. Availability is the key reason immutable identifiers are needed within networks: when networked devices are subverted to participate in DDoS attacks, the ability to trace traffic back to the corrupted devices is key. Encryption of data (in use, at rest, and in transit) is the primary means of assuring confidentiality. Because many IoT devices are constrained in processing power, it has been easy for manufacturers to overlook the need for confidentiality (data protection), arguing that the processing, storage, and power costs of traditional PKI exceed device capabilities. Today, even disposable IoT devices are capable of using PKI thanks to Elliptic Curve Cryptography (ECC). ECC requires smaller keys and enables faster encryption than traditional methods, all while maintaining the same level of security assurance as traditional (RSA) cryptography. This allows not only for confidentiality, but can also deliver integrity through non-repudiation (a device cannot later deny having sent a command or message) and message origin assurance (through signing or credential exchange). However, good ECC curve selection is very important. A final element of security is the ability for these devices to securely update their operating systems, firmware, drivers, and protocol stacks. No system is perfect, and when a vulnerability is discovered, updating the devices already deployed will be a key part of the success of the IoT and of how we interact with these tools.
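The key-size advantage is easy to quantify. The figures below are the comparable key strengths published in NIST SP 800-57; the helper function is just illustrative arithmetic showing why ECC fits constrained devices:

```python
# Comparable key sizes (in bits) for equivalent security strength,
# per NIST SP 800-57 Part 1.
COMPARABLE_KEYS = {
    112: {"rsa": 2048,  "ecc": 224},
    128: {"rsa": 3072,  "ecc": 256},
    192: {"rsa": 7680,  "ecc": 384},
    256: {"rsa": 15360, "ecc": 512},
}


def size_advantage(security_bits: int) -> float:
    """How many times smaller an ECC key is than an RSA key
    offering the same security strength."""
    entry = COMPARABLE_KEYS[security_bits]
    return entry["rsa"] / entry["ecc"]


size_advantage(128)  # -> 12.0: a 256-bit ECC key matches a 3072-bit RSA key
size_advantage(256)  # -> 30.0: the gap widens as security requirements grow
```

Note that the advantage grows with the security level, which is exactly why ECC remains viable on small devices as key strengths increase while RSA does not.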
These elements, availability, privacy, confidentiality, and integrity, all work together to build trust. This trust comes from personal and shared experiences. The more positive security experiences consumers have with devices, the more trust is earned. Negative experiences erode this trust, often disproportionately to the events that built it, and often vicariously rather than through personal experience. For example, a subscriber who reads about a personal security camera being visible to others on the internet may forgo purchasing that device, or similar devices. The overall goal is to improve consumer experiences with future devices and to limit not only how many devices are compromised, but also the scope and impact of any individual vulnerability, by leveraging multiple layers of defense.
Working Together Toward Network Protection
When IoT devices can be used en masse to leverage attacks targeting DNS servers, and when consumer market incentives don’t enforce security as a primary concern, industry standards bodies and consortia are typically called on to develop solutions. The Open Connectivity Foundation (OCF) is the leading IoT influence group, with over 200 leading global manufacturers and software developers (Intel, Qualcomm, Samsung, Electrolux, Microsoft and others) joining forces to ensure secure and interoperable IoT solutions. Other ecosystems are converging on OCF as well, and groups like UPnP, the AllSeen Alliance, and OneM2M have merged into the OCF organization. CableLabs and network operators including Comcast and Shaw are part of this movement, contributing code, technical security expertise, leadership, specifications, and time to make the Internet of Things safer for everyone. The Linux Foundation project, IoTivity, is being built as a platform to enable device manufacturers to more economically include security and interoperability in their products. OCF is driving toward support within IoT devices for subscriber privacy, security, and trust.
Standards organizations tend to focus on future devices, but helping manage existing devices is another area of research and exploration. The IoT security community is actively engaged not only with the future but with the present, and with how to improve consumer, manufacturer, and operator experiences. Key tools for supporting existing IoT systems will be intermediating device-to-internet connections, bridging between ecosystems for interoperability, and using advanced networking techniques to help manage devices.
These different needs, privacy, security, trust and network protection, all combine to create a positive perspective on the IoT environment. Imagine devices which are highly available, trusted to do what they need to do, when they need to, for only whom they are intended to, and that communicate across networks securely, all while maintaining privacy. This is the focus of component and device manufacturers, network operators, integrators, academics, and practitioners alike. The convergence we are seeing around standards and open source projects is great news for all of us.
Interested in learning more? Join Brian and several others at the Inform[ED]™ Conference in New York, April 12, 2017.
Multiple Access Point Architectures and Wi-Fi Whole Home Coverage
As mentioned in a previous blog post on AP Coordination by my colleague Neeharika Allanki, home sizes are growing and the number of client devices in a home network is increasing exponentially. There is a need not only for consistent performance in terms of throughput and connectivity, but also for Wi-Fi coverage throughout the home. Consumers often need more than one Wi-Fi Access Point (AP) in the home network to provide that coverage.
Many houses in the world do not have existing wires that can be used to network these APs together, so one of the easiest and most cost-effective ways to provide whole-home Wi-Fi coverage is to use Wi-Fi itself to connect the APs in the home. The technologies available today that can do this are Mesh APs (MAPs), repeaters, and extenders.
Wireless repeaters and extenders have been around for years due to consumers seeing the need to expand Wi-Fi coverage in their homes. While some form of wireless mesh networking has been around for more than ten years, until recently there were not products designed for the home that used mesh to connect multiple APs. In the past year, there has been a dizzying array of product announcements and introductions for home Wi-Fi coverage, with many of them using mesh networking.
Mesh Access points (MAPs) are quickly gaining traction in home networks mainly due to ease of installation (even over Repeaters/Extenders) and the promise of high throughput with whole home coverage. A mesh AP network can be defined as a self-healing, self-forming, and self-optimizing network of MAPs. Each MAP can communicate with others using smart routing protocols and thereby choose an optimal path in order to relay the data from one point to another.
As mentioned before in our AP Coordination blog, client steering (moving Wi-Fi clients to the best AP in each location) and band steering (moving and keeping Wi-Fi clients on the best band: 2.4 GHz or 5 GHz) are very important in any multi-AP solution, such as mesh or an AP + repeaters/extenders network. Steering is needed to ensure that each mobile client stays connected to the best AP for its current location. Without client steering, Wi-Fi clients may show connectivity to Wi-Fi, but throughput may suffer tremendously. This often shows up as the dreaded “Buffering…” message when streaming a video, or a slow progress bar when loading a web page. In a fully wireless multiple-AP solution, client steering and band steering are even more critical because of the throughput and latency penalty when traffic is repeated over Wi-Fi from one AP to another. As MAPs communicate with each other to form the mesh network, they implement some form of AP Coordination, which is usually proprietary in nature.
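A steering decision of this kind can be sketched as a simple rule. This is a hypothetical illustration, not any vendor's algorithm; the AP names, RSSI values, and the 8 dB hysteresis margin are all assumptions chosen for the example:

```python
# Illustrative client-steering rule: move a client to another AP only when
# that AP's signal beats the current one by a hysteresis margin, to avoid
# "ping-ponging" between APs of similar strength.
HYSTERESIS_DB = 8  # illustrative margin; real products tune this value


def pick_ap(current_ap: str, rssi_dbm: dict) -> str:
    """Return the AP the client should associate with.

    rssi_dbm maps AP name -> signal strength seen from the client (dBm,
    less negative is stronger).
    """
    best_ap = max(rssi_dbm, key=rssi_dbm.get)
    if best_ap == current_ap:
        return current_ap
    if rssi_dbm[best_ap] - rssi_dbm[current_ap] >= HYSTERESIS_DB:
        return best_ap   # clearly better elsewhere: steer the client
    return current_ap    # not enough gain to justify a move: stay put


# Client near the living-room AP stays despite a slightly stronger basement AP:
pick_ap("living", {"living": -62, "basement": -58, "garage": -80})  # -> "living"
# After the client walks downstairs, the gap exceeds the margin, so it moves:
pick_ap("living", {"living": -75, "basement": -58, "garage": -80})  # -> "basement"
```

Real multi-AP systems also weigh band (2.4 vs 5 GHz), backhaul load, and client capabilities, but the hysteresis idea above is the core of avoiding unstable roaming.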
CableLabs recently tested mesh networking solutions and AP + repeater solutions consisting of 3 APs in a 5000+ sq. ft. test house. We performed throughput, jitter, latency and coverage testing at more than twenty locations in and around the house. We found that we were able to run two streaming videos, at HD bitrates (~20Mbps), to video clients in the home while also delivering over 50Mbps to our test client. Both mesh and AP + repeater solutions were able to handle this video throughput, as well as deliver over 50Mbps throughput throughout the house and even to some areas 20’ outside the house. This is excellent news for consumers whose access to the Internet is wireless and who want that access everywhere in their homes.
CableLabs is working with vendors to define a standardized AP Coordination Protocol that would allow all APs in a home network to share information to allow them to make client steering decisions, along with other network maintenance tasks.
Debunking the Myths of Shared Networks: The Point-to-Multipoint Effect
“I don’t want to have to share a pipe. The problem with ‘cable’ is shared pipes. If my neighbor is doing a bunch of stuff over the network, I get impacted too. With fiber I get speed and no shared pipes.”
--- Entrepreneur in a focus group
The notion that subscribers connected to residential fiber networks do not “share pipes” is often misunderstood. For residential fiber networks, sharing pipes is one of the main reasons fiber to the home (FTTH) is even remotely cost-effective for service providers to deploy. But what is most surprising is the following: deploying shared network solutions has led to a more rapid increase in residential broadband speeds than otherwise would have been the case with non-shared access network solutions. I like to call this the Point-to-Multipoint Effect. In the process, sharing pipes has allowed broadband speed growth to surpass the predicted 50% compounded annual growth rate commonly known as Nielsen’s Law of Internet Bandwidth. Read on to learn more…
First, a couple of definitions:
- A (non-shared) point-to-point (P2P) network topology is one in which there is a single dedicated connection between two endpoints. In the case of access networks, one endpoint is typically located at the hub or central office, or could be located at a remote distribution point. The other endpoint is a digital subscriber line (DSL) modem, for example, or a simple Ethernet switch, located on the customer premises. In P2P networks, the peak capacity of a link is used exclusively by the two connected endpoints.
- A (shared) point-to-multipoint (P2MP) network topology is one in which there is a single downstream transmitter and multiple access termination devices that all selectively listen to the same downstream data stream. A key characteristic of P2MP networks is that the peak capacity of the network is shared among all connected endpoints. Two examples of P2MP networks are HFC and passive optical networking (PON), shown in the figure below (showing downstream transmission).
Two examples of (shared) point-to-multipoint networks: HFC and PON
The PON solution represents the most prevalent residential fiber solution in the world, primarily due to lower costs compared to P2P fiber solutions. To illustrate the sharing, referring to the diagram above, if 10G-EPON is the technology choice, each optical network unit (ONU) connected to the network transmits upstream at ~10 Gbps, but they don’t transmit simultaneously. Instead, an ONU must be scheduled by the OLT for upstream transmission to avoid collisions with other ONUs. In essence, the scheduling of ONUs results in the sharing of the 10 Gbps peak capacity. Consequently, there is a whole lotta pipe sharing going on in PON solutions.
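The scheduling idea can be sketched as follows. Real OLTs use dynamic bandwidth allocation (DBA), weighting grants by each ONU's queue occupancy; this minimal sketch simply splits one upstream cycle evenly, and the ONU names and the 125 µs cycle length are illustrative assumptions:

```python
# Simplified round-robin grant scheduler: the OLT grants each ONU an
# exclusive, non-overlapping upstream transmission window in turn, so the
# shared upstream capacity is divided without collisions.

def schedule_grants(onus, cycle_us=125.0):
    """Split one upstream cycle evenly among the ONUs.

    Returns a list of (onu, start_us, end_us) grants that tile the cycle.
    """
    slot = cycle_us / len(onus)
    grants = []
    start = 0.0
    for onu in onus:
        grants.append((onu, start, start + slot))
        start += slot
    return grants


grants = schedule_grants(["onu-1", "onu-2", "onu-3", "onu-4"])
# Each ONU gets a non-overlapping 31.25 us window; no two ONUs ever
# transmit at the same time, which is how the "10 Gbps pipe" is shared.
```

Even in this toy form, the essential property holds: every bit of upstream capacity is used by exactly one ONU at a time, which is precisely the pipe sharing the quoted entrepreneur believed fiber avoids.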
Do shared networks necessarily perform better or worse than non-shared networks? It depends on how performance is measured, but in one key area, residential broadband speeds, shared networks have substantially outperformed non-shared networks.
A recent blog discussed Nielsen’s Law of Internet Bandwidth and how the cable industry was preparing to meet future broadband speeds with 100G-EPON. When Mr. Nielsen made his initial prediction in 1998, residential broadband access was dominated by dialup and ISDN connections, which are both P2P solutions. Indeed, for approximately the first 14 years since that initial 300 bits per second dialup connection in 1982/1983, the progression of available peak service tier bit rates followed the 50% annual growth rate prediction.
The release of the first DOCSIS® specifications by CableLabs in 1996 essentially represented the dawn of P2MP solutions, i.e. shared, for residential Internet connectivity. According to the data in the chart above, the tremendous rate of technology advancements resulting from the shared DOCSIS/HFC network solution, and later with the development of shared PON technologies, coupled with the relative cost-effectiveness of these solutions, has far exceeded other P2P technologies for residential broadband. While the initial growth prediction in 1998 was a 50% annual growth rate, the Point-to-Multipoint Effect increased the growth rate closer to 70% for residential Internet connectivity. The Point-to-Multipoint Effect indicates that sharing pipes for residential connectivity has provided a solution that has actually allowed residential high speed data rates to increase at a faster pace! This “sharing” trend is expected to continue with the development of Full Duplex DOCSIS and 100G-EPON, making the introduction of new services possible. Thus, just like our parents always told us, it is good to share.
In his role as Vice President Wired Technologies at CableLabs, Curtis Knittle leads the activities which focus on cable operator integration of optical technologies in access networks. Curtis is also Chair of the 100G-EPON (IEEE 802.3ca) Task Force.
Snapping Together a Carrier Grade Cloud
Today's enterprise and hyper-scale cloud solutions will not deliver everything needed to virtualize the service providers’ networks. However, cloud solutions do provide many of the building blocks as a great starting point.
Service providers are evolving their networks and services to better meet customer needs and expectations. Hosted applications are continuously updated with new features and consumers are starting to demand a similar frequency of change with services innovation. This rate of change and innovation in service provider networks will not be achieved by rolling more and more specialized hardware boxes to tens of millions of customers. Delivering software-based network solutions that reduce dependency on specialized hardware boxes is the only way to meet these customer expectations.
End users' expectation for service quality continues to increase, and they are typically not willing to accept a tradeoff between performance and capabilities. They want both increased performance and increased capabilities. Service Level Agreements (SLAs) are typically required for enterprise customers, but simply over-provisioning dedicated resources to meet these needs is neither economic, nor sustainable. High performance and network proximity are key to delivering interactive voice and video solutions with high bandwidth and low latency. No one wants to be misunderstood when delivering nuanced details during a videoconference with their stakeholders!
Currently, network services are delivered on several specialty devices located at customer sites or hosted by operators. Today, these specialty devices only provide a subset of needed capabilities and physical upgrades are both expensive and time consuming.
Critical Success Factors
In addition to being consistent and predictable, the network must be fast. There are no milliseconds to spare while moving across the network. For time-sensitive applications such as cellular networks, there is no tolerance for routing packets along a poor physical path; they need to traverse the quickest route to their ultimate destination. To use a reference from "Smokey and the Bandit," one of my favorite movies, Bandit (Burt Reynolds' character) didn't drive through New York City to win the race from Texas to Georgia. He took the shortest and fastest route possible. Network traffic needs to do the same thing: stick to the fastest, most direct route and deviate only when absolutely necessary.
This is not the natural mode for software running in an interrupt-driven, multi-tasking environment. Much as people take longer on each task when they are juggling many, software tasks take much longer on a busy system. Software needs to be configured to prevent, or at least bound, interference when multiple workloads run on the same computer.
"Location, location, location" is as important to network virtualization as it is to real estate. Virtual Network Functions (VNFs) are the software components that replace the current Physical Network Functions (PNFs). VNFs need to be strategically placed, including at the customer site or even at other service provider nodes. Managing Wi-Fi networks requires access to devices at customer sites. Even when offloading the majority of the work to a hosted cloud, there are still physical access, routing, and local security workloads that are best hosted at the customer site.
Low-latency services, such as content delivery networks, need caching instances located relatively close to the customer site to reduce latency and core-network bandwidth. Storage of data should not sit on the other side of a busy or slow network connection. The path the data takes over the network needs to provide a consistent user experience. The network also needs to be flexible, as it must adapt to varying network loads and outages. Typically, enterprise cloud applications are designed for high availability and low cost; responsiveness for the individual customer is not always a consideration. The ability to easily manage service delivery locations, by automatically placing and moving workloads within a data center or across geographies, is a must for virtualizing network services.
VNFs must work with the deployed Network Function Virtualization (NFV) infrastructure and hardware. Should each VNF require a different infrastructure, it would be nearly impossible to manage and would cost much more to deploy. Interoperability can enable more competition and a broader set of vendors to deliver network services. Competition drives innovation. Standards and interoperability drive economies of scale.
ETSI-NFV is leading the way in developing the foundational standards for NFV based on a set of use cases and requirements coming from industry. Other standards bodies are referencing the ETSI-NFV work to address application-specific needs. These standards are becoming the basis for defining interoperability. But as with any standards effort, there will be many interpretations and implementations that follow these guidelines.
All of the independent components will need to be validated at key touch points to ensure interoperability, and there is still no single test suite available today that guarantees interoperability between VNFs, or between VNFs and the infrastructure that hosts them. To help address this issue, ETSI-NFV is developing test specifications that are being referenced by OPNFV, which itself was initiated by the ETSI NFV co-founders to accelerate implementation of, and feedback on, the NFV specifications.
Over the next two to three years, we should see NFV being incorporated in mainstream cloud platforms. The expected performance and interoperability enhancements will increase the efficiency of compute and networking resources while requiring less power and space to run the same work. The improved, distributed nature of a trusted cloud will simplify managing applications running on or near the customers’ locations.
What CableLabs is Doing
CableLabs’ SDN/NFV Application development Platform and Stack project (SNAPS for short) is just one of the initiatives at CableLabs that attempts to accelerate and ease the adoption of network virtualization.
We are identifying the performance needs for network virtualization by evaluating the best open source software components and commercially available servers. The goal is a stable, replicable platform for developing and demonstrating virtualized network capabilities, and for validating interoperability and repeatability. Currently, the SNAPS project leverages a specific configuration of OPNFV that is being tested and hardened. Many of our enhancements have been included in the Apex installer of the OPNFV "Colorado" release.
Sharing our Expertise
While trying out different OpenStack installers, we soon ran into the dilemma of how to quickly use and validate our cloud in a repeatable manner. In response, we created a Python library whose responsibility is to deploy and provision OpenStack tenants from which we built a set of test suites to perform this validation. While the test suite tools are still under development, we have already made them available under the Apache v2 open source license in CableLabs' C3 collaborative software environment.
Additional contributors are always welcome. The source repository is located here: https://gerrit.cablelabs.com/#/admin/projects/snaps-provisioning
Accelerating NFV Adoption
The SNAPS project team, consisting of CableLabs member companies and vendors, is currently generating requirements and defining use cases to be shared publicly. These requirements include both performance and interoperability guidelines.
CableLabs wholly owned subsidiary Kyrio is using the lessons learned through this R&D process to drive evolution of the Kyrio SDN/NFV Interoperability lab.
We are actively involved in OPNFV and OpenDaylight, and we actively contribute to ETSI NFV.
CableLabs Joins the CBRS Alliance
On April 28, the FCC finalized its rules for the Citizens Broadband Radio Service (CBRS), opening 150 MHz of spectrum for shared use by commercial entities in the 3.5 GHz band (3.55-3.7 GHz). There will be 15 channels, each 10 MHz wide, available at a granular census-tract geography across the United States, suitable for LTE time division duplex (TDD). 80 MHz is reserved for unlicensed use, and the other 70 MHz can be auctioned for license periods of three years. Should an auction not take place for lack of interest, the full 150 MHz is available for unlicensed use until another auction opportunity a year later. This represents the first opportunity to democratize LTE for new, innovative applications. Unlike spectrum for mobile networks, which can be used to cover very wide areas, CBRS is designed for small cells in both indoor and outdoor locations.
CableLabs has joined the CBRS Alliance, founded by Google, Qualcomm, Intel, Nokia, Ruckus, and Federated Wireless, to evangelize LTE-based CBRS technology, use cases, and business opportunities for our members. We plan to help drive the technology developments necessary to fulfill our mission. The Alliance will also establish an effective product certification program for LTE equipment in the US 3.5 GHz band, ensuring multi-vendor interoperability. Kyrio, a wholly owned subsidiary of CableLabs, will evaluate expanding its current testing services to support the CBRS program.
The CBRS Alliance believes that LTE-based solutions in the CBRS band, utilizing shared spectrum, can enable both in-building and outdoor coverage and capacity expansion at massive scale. For example, cable operators could deploy small cells in their customers’ homes to carry mobile data where it is used, at much faster speeds than external LTE networks and with owner economics. Outdoor small cells with higher transmit powers could cover busy streets and similar areas.
To realize CBRS’s full potential, the CBRS Alliance aims to foster a robust ecosystem that makes LTE-based CBRS solutions widely available.
The innovative shared spectrum model adopted by the U.S. Federal Communications Commission for the Citizens Broadband Radio Service (CBRS) constitutes a bold and historic shift in spectrum allocation.
For more information, see the CBRS Alliance web site.
A Milestone in Wi-Fi / LTE-U Coexistence
Today is an important milestone for unlicensed spectrum coexistence - the Wi-Fi Alliance (WFA) has released its plan for testing how well LTE-Unlicensed coexists with Wi-Fi.
This milestone is the culmination of many months of work by expert engineers within the WFA and its membership, including CableLabs staff. The outcome is that we now have a definitive set of tests, based on real-world consumer data, against which to judge LTE-U, and we can move past the competing technical studies that were the hallmark of 2015.
The WFA and its staff are to be commended for bringing all sides to the table on an issue of such importance for broadband consumers everywhere. The test plan, developed in record time, is a product of compromise by all sides, and LTE-U proponents participated robustly in the process. There are a number of tests that CableLabs supported as important that ultimately were not adopted. But the final product is nevertheless essential, both in validating the coexistence performance of any LTE-U device proposed for deployment and as a sign that diverse industry interests can work toward solutions as wireless access becomes ever more important for consumers.
CableLabs will continue to be engaged as the WFA moves to implement this plan with authorized test labs. We look forward to a transparent process with results reported publicly by the WFA. As we move to this implementation phase, it is worth describing what the test plan does, in order to understand why it is so important.
At a high level, the test plan does the following:
- Checks that LTE-U devices select the most lightly used channel, as LTE-U proponents say they will do;
- Ensures that new Wi-Fi networks can access the channel when LTE-U is active;
- Measures the impact to Wi-Fi throughput and latency from LTE-U; and,
- Ensures that LTE-U adapts its use of the spectrum in response to variation in consumer use of Wi-Fi, as occurs in the real world, in real time.
And it does all of this at signal levels that real-world data have shown to be reflective of consumer use of Wi-Fi hotspots. These tests are necessary because of the well-documented shortcomings in the LTE-U Forum coexistence specification and the lack of standardized test procedures to date, which have yielded vastly different coexistence conclusions. For more information on our views of the test procedures, see Jennifer Andreoli-Fang’s contribution to the August workshop of the WFA, which is available here.
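The first behavior in the list above, selecting the most lightly used channel, reduces to a rule simple enough that a test can check it directly. The sketch below is purely illustrative; the channel numbers and utilization figures are hypothetical, and real devices measure utilization from energy detection and observed airtime:

```python
# Illustrative version of the channel-selection behavior the test plan
# checks: an LTE-U device should land on the unlicensed channel with the
# least observed utilization.

def least_utilized_channel(utilization: dict) -> int:
    """utilization maps channel number -> observed airtime fraction [0..1]."""
    return min(utilization, key=utilization.get)


# Hypothetical scan results across four 5 GHz channels:
observed = {36: 0.42, 40: 0.07, 44: 0.19, 48: 0.55}
least_utilized_channel(observed)  # -> 40, the quietest channel
```

A conformance test can then load specific channels with controlled Wi-Fi traffic and verify the device under test actually lands on the quiet one.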
Reasonable compromises have been made by all sides in developing this test plan. It is time to move forward using the outcome of this process, in full, as the sole source of reliable determinations of LTE-U coexistence.
Liberty Global and CableLabs Join MulteFire Alliance
Today, CableLabs is taking a significant step to drive the development of next-generation wireless technology. We are excited to announce that, along with our member Liberty Global, we are joining the MulteFire Alliance, an open consortium dedicated to making mobile technologies more widely available for use in shared, unlicensed spectrum.
MulteFire is based on 3GPP License Assisted Access LTE (LAA-LTE), which uses listen-before-talk etiquette to share spectrum in a manner similar to Wi-Fi. But unlike LAA, MulteFire will place control signaling entirely in the unlicensed band, breaking the reliance on licensed spectrum and mobile networks. This is a capability that we and others have proposed several times in 3GPP, as yet without successful adoption in that body. Our hope is that pursuing this technology in the Alliance will enable its rapid integration to global standards.
We see this step as the basis for renewed collaboration on next-generation wireless technology, which will become ever more important as we move toward 5G. Reliable coexistence, full transparency, and deep engagement with partners have long been central to our work on technologies that use unlicensed, shared spectrum. These same principles will continue to apply as we work with the MulteFire Alliance, 3GPP, the Wi-Fi Alliance, IEEE, and other groups going forward.
Below is the full copy of the joint press release that was issued today:
Full Duplex DOCSIS® 3.1 Specification Effort Launches
During the CableLabs 2016 Winter Conference, CableLabs announced the Full Duplex DOCSIS 3.1 specification project that will significantly increase upstream speeds on the DOCSIS network. The announcement of the Full Duplex extension of the DOCSIS 3.1 specification, and its potential to offer multi-Gbps symmetric services over the HFC network, created a lot of excitement in the industry. Since then, a lot has been going on behind the scenes.
CableLabs has been actively collaborating with the vendor community to further refine the concept and system architecture of a Full Duplex DOCSIS 3.1 system. The ecosystem support for the Full Duplex DOCSIS 3.1 technology has been staggering, with many vendors collaborating and contributing to the development of the technology. A recent example is Cisco’s contribution of a new silicon reference design of a digital echo canceler that maximizes the use of HFC capacity to provide a scalable multi-gigabit return path.
In June, CableLabs officially launched the Full Duplex DOCSIS 3.1 project, transitioning it from the innovation phase to the R&D phase focused on specification development. Our first face-to-face meeting held in Louisville last month featured strong participation from CableLabs members and the vendor community including several new participants. Working group meetings will be held on a regular basis until the specification development is complete.
Full Duplex DOCSIS 3.1 technology will radically change the art-of-the-possible on the HFC network by delivering an unparalleled experience to cable customers.
Keeping Pace with Nielsen’s Law
The telecommunications industry typically uses Nielsen’s Law of Internet Bandwidth to represent historical broadband Internet speeds and to forecast future broadband Internet speeds. Mr. Nielsen predicted many years ago that a high-end user’s downstream connection speed grows at a compound annual growth rate (CAGR) of approximately 50%. In reality, the peak service tiers offered by service providers over the years appear to be following something closer to a 60% CAGR, as shown in the figure below.
The point of this blog is not to debate whether the growth rate is 50% or 60%, but to ask: if the growth rate continues, how do we evolve our networks to keep pace?
For point-to-multipoint networks there is a general rule of thumb for determining the peak service tier given a particular peak network capacity. This capacity-to-peak-tier ratio of 2:1 isn’t necessarily grounded in scientific fact; rather, years of experience have shown that a 2:1 ratio gives service providers reasonable confidence that speed test measurements will accurately reflect a user’s subscription level. For example, for a particular access network technology, if the network supports 2 Gbps transmission rates to/from the access termination device (i.e., a cable modem), then the peak service tier typically won’t exceed 1 Gbps.
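The 2:1 rule of thumb described above can be expressed as a trivial calculation. The sketch below is illustrative only; the function name and the configurable ratio are our own.

```python
def max_peak_tier_gbps(network_capacity_gbps: float, ratio: float = 2.0) -> float:
    """Highest service tier an operator would typically offer on a network
    with the given peak capacity, per the 2:1 capacity-to-tier rule of thumb."""
    return network_capacity_gbps / ratio

# A 2 Gbps access network supports at most a 1 Gbps peak service tier.
print(max_peak_tier_gbps(2))   # 1.0
# Today's 10 Gbps PON and DOCSIS 3.1 networks top out around 5 Gbps tiers.
print(max_peak_tier_gbps(10))  # 5.0
```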
The present state-of-the-art access network technology peaks at 10 Gbps. The IEEE 802.3 10 Gbps Ethernet Passive Optical Network (10G-EPON) has been deployed in China and the United States. The ITU-T has recently consented XGS-PON, another 10 Gbps symmetric PON standard that uses the physical layer of XG-PON (ITU-T G.987.2) and 10G-EPON. Even the ITU-T’s NG-PON2 standard, which uses multiple wavelengths to increase network capacity, only defines a single wavelength per optical network unit (ONU), which puts NG-PON2 on par with 10G-EPON and XGS-PON in terms of meeting peak service tier rates. Finally, CableLabs is now certifying DOCSIS 3.1 devices which are capable of 10 Gbps downstream, and soon will certify 10 Gbps symmetric devices based on Full Duplex DOCSIS technology.

What does this mean for accommodating Nielsen’s Law? Assuming the peak service tier trends continue, and given the 10 Gbps peak network capacity of current solutions, the maximum peak service tier will level off at approximately 5 Gbps (see red dashed line in chart above) until technology advances to allow higher rates. The telecommunications industry needs a technology roadmap beyond the current state of the art which allows for peak service tiers to exceed 5 Gbps.
CableLabs and its members, along with other service providers and the IEEE, are determined to stay ahead of the trends displayed in the graph above by contributing to the world’s first 100 Gbps EPON solution as part of the IEEE 802.3ca Task Force. The prevailing sentiment of the 802.3ca Task Force is to create a generational standard that allows for growth of peak network capacity (and corresponding peak service tiers) if and when such growth becomes necessary, without creating a new standard. This growth is expected to be achieved by defining four wavelengths, with each wavelength supporting 25 Gbps. Initial product developments will revolve around a single wavelength to provide a 25 Gbps EPON solution. When market conditions demand it, using two wavelengths along with a channel bonding solution will allow an ONU to transmit and receive at up to 50 Gbps. Similarly, with four wavelengths and channel bonding, the ONU will transmit and receive at up to 100 Gbps. Examining the chart above, the reader can see that the 100G-EPON standard will support peak service tiers out to approximately 2030, give or take a couple of years, assuming the 50% CAGR predicted by Nielsen continues.
One of the interesting facets of the 802.3ca Task Force activities relates to improving the efficiency of the media access control (MAC). Previously, the IEEE 802.3 standard did not allow frame fragmentation, but with the recent completion of the IEEE 802.3br Interspersing Express Traffic Task Force, frame fragmentation is now allowed in networks based on the 802.3 standard. The 802.3ca Task Force plans to leverage fragmentation to make transmission more efficient in a multi-wavelength, channel-bonded environment. Additionally, contributions to the 802.3ca Task Force will improve the efficiency of the upstream bandwidth allocation process by allowing multiple service flow queue depth reporting and upstream granting in a single message pair. Considering that ITU-T SG15/Q2 is also investigating 25 Gbps per wavelength, perhaps the most promising and exciting aspect of these 802.3ca Task Force decisions is that the next generation of IEEE EPON and ITU-T GPON standards could be more closely aligned than ever before. This brings a converged optical access solution closer to reality (see a previous blog regarding a converged optical access initiative).
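To give a feel for why fragmentation matters in a channel-bonded link, the sketch below models a frame split into near-equal fragments sent in parallel over multiple 25 Gbps wavelengths. This is a simplified illustration of the general idea, not the actual 802.3ca fragmentation mechanism, and it ignores fragment headers and other overhead.

```python
def fragment(frame_bytes: int, num_wavelengths: int) -> list[int]:
    """Split a frame into near-equal fragment sizes, one per wavelength."""
    base, extra = divmod(frame_bytes, num_wavelengths)
    return [base + (1 if i < extra else 0) for i in range(num_wavelengths)]

def transmit_time_ns(frame_bytes: int, num_wavelengths: int,
                     gbps_per_wl: float = 25.0) -> float:
    """Serialization time when fragments are sent in parallel: the link is
    done when the largest fragment finishes (bits / Gbps = nanoseconds)."""
    worst_fragment = max(fragment(frame_bytes, num_wavelengths))
    return worst_fragment * 8 / gbps_per_wl

# A 1500-byte frame: bonding four wavelengths cuts serialization time ~4x.
print(transmit_time_ns(1500, 1))  # 480.0 ns on one wavelength
print(transmit_time_ns(1500, 4))  # 120.0 ns across four
```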
In his role as Vice President Wired Technologies at CableLabs, Curtis Knittle leads the activities which focus on cable operator integration of optical technologies in access networks. Curtis is also Chair of the IEEE 802.3ca Task Force.
A Sneak Peek of SCTE Cable-Tec Expo
CableLabs and Kyrio will be hosting a booth at the SCTE Cable-Tec Expo 2016. To provide you with a sneak peek of what we plan to show at the event, below are highlights of six demonstrations:
Full Duplex DOCSIS® 3.1 Technology
With Full Duplex DOCSIS 3.1 technology, the HFC network can support 10 Gbps downstream x 10 Gbps upstream symmetrical capacity in 1.2 GHz of spectrum. Multi-Gbps symmetric services will meet user demands and support future applications. Learn more about the next evolution of DOCSIS technology.
3.5GHz Shared Spectrum and Wi-Fi Traffic Aggregation
3.5 GHz (3.55–3.7 GHz) shared spectrum offers the potential democratization of LTE. Cable operators can deploy LTE-based solutions within homes, offices, and even public environments to create low-cost mobile networks. See how CableLabs multi-path TCP technology can help cable operators aggregate IP traffic from both 3.5 GHz spectrum and existing Wi-Fi access points to provide their customers with great wireless speeds.
Energy Efficiency of CPE (Consumer Premises Equipment)
CableLabs provides technical leadership to the industry through influencing energy efficiency voluntary agreements for set-top boxes and small network equipment as well as other energy efficiency initiatives. CableLabs also works closely with SCTE Energy 2020 to address end-to-end energy efficiency in the cable infrastructure. Learn more about the voluntary agreement initiatives and see CPE energy efficiency in action!
Automated Leakage Detection and Time Domain Reflectometer (TDR)
Cable operators can automatically gather their own leakage data for use as a diagnostic tool. This new detection method employs GPS and a continuous wave test signal and can pinpoint leakage sources. The leakage data can be used to make Proactive Network Maintenance (PNM) map overlays to speed problem resolution and to prevent LTE interference. The TDR uses standing waves on digital signals to accurately calculate the distance to reflections without interrupting service. Learn more about how these methods can improve network performance.
DOCSIS® 3.1 Profile Management Application (PMA)
The Profile Management Application (PMA) is a software application that configures and manages DOCSIS 3.1 OFDM subcarrier modulation profiles on a DOCSIS 3.1 CMTS. This demonstration shows how the PMA interacts with CMTSs, CMs, and other network elements to monitor, create, modify, and then assign specific profiles to specific DOCSIS 3.1 CMs, optimizing and maximizing the capacity of a DOCSIS 3.1 OFDM channel.
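The core idea behind profile management, assigning each modem the most efficient modulation its plant conditions allow, can be sketched as follows. The profile names, SNR thresholds, and modem readings below are purely illustrative and do not come from any DOCSIS specification or from the PMA itself.

```python
# (profile name, bits per subcarrier, minimum required SNR in dB) --
# illustrative thresholds, ordered from most to least efficient.
PROFILES = [
    ("QAM-4096", 12, 41.0),
    ("QAM-1024", 10, 34.0),
    ("QAM-256",   8, 28.0),
    ("QAM-64",    6, 22.0),
]

def assign_profile(snr_db: float) -> tuple[str, int]:
    """Pick the highest-order profile the modem's measured SNR supports."""
    for name, bits, min_snr in PROFILES:
        if snr_db >= min_snr:
            return name, bits
    return ("QAM-16", 4)  # fallback for poor plant conditions

# Hypothetical per-modem SNR measurements reported to the application.
modems = {"cm1": 42.5, "cm2": 30.1, "cm3": 20.0}
for cm, snr in modems.items():
    print(cm, assign_profile(snr))
```

Real profile management also weighs per-subcarrier measurements and groups modems into a limited number of shared profiles, but the threshold-matching step above captures the basic optimization.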
Real World Testing
In a world of constant change and innovation, the need for agility and assurance is crucial. Learn how Kyrio Testing Services provides their customers with the ability to adapt to new requirements, accelerate change, and assure quality of product performance and usability from the end user perspective.
We look forward to meeting you at the SCTE Cable-Tec Expo, Booth #1424, September 26–29 in Philadelphia.