3.5 GHz: The Democratization of LTE
The Wireless Broadband Alliance's Wireless Global Congress was held November 14 – 17 in San Jose, CA. One of the world's leading wireless events, it drew more than 700 attendees and over 60 speakers and panelists.
The main theme for this year's conference program was "Innovation and Convergence." The wireless industry is truly at a crossroads with the coexistence and convergence of licensed and unlicensed spectrum. I presented a paper titled "3.5 GHz – The Democratization of LTE" in the session on "Convergence and Coordinated Shared Spectrum Solution" alongside Neville Meijers, VP of Small Cells at Qualcomm, who presented his paper "Harmonious Integration of Unlicensed and Licensed Spectrum." Both presentations addressed the new opportunities in unlicensed spectrum with LTE-based technologies using either LTE TDD or MuLTEfire.
My presentation addressed an exciting development here in the USA where the U.S. Federal Communications Commission has opened up 150 MHz of spectrum for shared use by commercial entities in the 3.5 GHz band (specifically 3.55-3.7 GHz). The innovative shared spectrum model adopted by the FCC for the Citizens Broadband Radio Service (CBRS) constitutes a bold and historic shift in spectrum allocation.
There will be 15 ten-megahertz-wide channels available at a granular census tract geography across the United States, suitable for LTE time division duplex (TDD) and other technologies such as MuLTEfire and License Assisted Access (LAA). Perhaps more importantly, this frequency range is defined by 3GPP in the mobile standards for mobile use.
CBRS represents the first opportunity for the democratization of LTE for cable operators and other fixed operators for new innovative applications. Unlike spectrum for mobile networks, which can be used to cover very wide areas, CBRS is designed for small cells in both indoor and outdoor locations. Additionally, the use of LTE TDD avoids the need for a macro-cell anchor, as all the signaling is contained within the band. Effectively, LTE technology becomes available to fixed operators for the first time. The CBRS frequency range covers bands 42 and 43 of the 3GPP mobile bands, is expected to be supported in smartphones within the next two years, and offers exciting opportunities.
Recently, CableLabs joined the CBRS Alliance to evangelize LTE-based CBRS technology, use cases and business opportunities for our members. The CBRS Alliance believes that LTE-based solutions in the CBRS band, utilizing shared spectrum, can enable both in-building and outdoor coverage and capacity expansion at massive scale. In order to maximize CBRS’s full potential, the CBRS Alliance aims to enable a robust ecosystem towards making LTE-based CBRS solutions available.
Improving Infrastructure Security Through NFV and SDN
October was Cybersecurity Awareness Month in the US. We certainly were aware. In September, IoT cameras were hacked and used to create the largest denial-of-service attacks to date, well over 600 Gbps. On October 21, the same devices were used in a modified attack against Dyn's authoritative DNS services, resulting in disruption of around 1,200 websites. Consumer impacts were widely felt, as popular services such as Twitter and Reddit became unstable.
Open distributed architectures can be used to improve the security of network operators’ rapidly evolving networks, reducing the impacts of attacks and providing excellent customer experiences. Two key technologies enabling open distributed architectures are Network Function Virtualization (NFV) and Software Defined Networking (SDN). Don Clarke detailed NFV further in his blog post on ETSI NFV activities. Randy Levensalor also reviewed one of CableLabs’ NFV initiatives, SNAPS earlier this year.
Future networks based on NFV and SDN will enable simpler security processes and controls than we experience today. Networks using these technologies will be easier to upgrade and patch as security threats evolve. Encryption will be supported more easily, and other security mechanisms more consistently, than with legacy technologies. And network monitoring to manage threats will be easier and more cost-effective.
Open distributed architectures provide the opportunity for more consistent implementation of fundamental features, processes, and protocols, including easier implementation of new, more secure protocols. This in turn may enable simpler implementation and deployment of security processes and controls. Legacy network infrastructure features and processes are largely characterized by proprietary systems. Even implementing basic access control lists on IP-based interfaces varies widely, not only in the interfaces used to implement the control lists, but in the granularity and specificity of the controls. Some areas have improved, but NFV and SDN can go further. For example, BGP Flowspec has helped standardize blocking, rate limiting, and traffic redirection on routers; however, it faces strict practical limits today on the number of rules routers can support. NFV and SDN can provide improved scalability and greater functionality. NFV provides an opportunity to readdress this complexity by providing common methods to implement security controls. SDN offers a similar opportunity, providing standardized interfaces to push flow tables to devices and to deploy configuration through model-based configuration (e.g., using YANG and NETCONF).
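To make the flow-table idea concrete, here is a minimal sketch of an SDN-style match/action table, analogous to a BGP Flowspec filter a controller might push to a device. The rule structure, field names, and addresses are illustrative, not any specific controller's API.

```python
# Minimal sketch of an SDN-style flow table: an ordered list of
# match/action rules, evaluated first-match-wins per packet.
# Rule fields and the "scrubber-1" target are hypothetical examples.

FLOW_TABLE = [
    # Drop traffic from a known-malicious source toward the DNS port.
    {"match": {"src_ip": "198.51.100.7", "dst_port": 53}, "action": "drop"},
    # Redirect suspicious HTTP traffic to a scrubbing appliance.
    {"match": {"dst_port": 80}, "action": "redirect:scrubber-1"},
]

def apply_flow_table(packet, table=FLOW_TABLE):
    """Return the action of the first rule whose match fields all agree."""
    for rule in table:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "forward"  # default action: normal forwarding

print(apply_flow_table({"src_ip": "198.51.100.7", "dst_port": 53}))  # drop
print(apply_flow_table({"src_ip": "203.0.113.9", "dst_port": 443}))  # forward
```

The value of the model is that the same rule shape works whether the enforcement point is a hardware switch, a virtual router, or a VNF, which is exactly the consistency legacy per-vendor ACL interfaces lack.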
Standardized features, processes, and protocols naturally lead to simpler and more rapid deployment of security tools and easier patching of applications. NFV enables the application of Development and Operations (DevOps) best practices to develop, deploy, and test software patches and updates. Physical and virtual routers and network appliances can be similarly programmatically updated using SDN. Such agile and automated reconfiguration of the network will likely make it easier to address security threats. Moreover, security monitors and sensors, firewalls, virtual private network instances, and more can be readily deployed or updated as security threats evolve.
Customer confidentiality can be further enhanced. In the past, encryption was not widely deployed for a wide range of very good economic and technical reasons. The industry has learned a great deal in deploying secure and encrypted infrastructure for DOCSIS® networks and also radio access networks (RANs). New hardware and software capabilities already used widely in data center and cloud solutions can be applied to NFV to enable pervasive encryption within core networks. Consequently, deployment of network infrastructure encryption may now be much more practical. This may dramatically increase the difficulty of conducting unauthorized monitoring, man-in-the-middle attacks and route hijacks.
A key challenge for network operators continues to be detection of malicious attacks against subscribers. Service providers use a variety of non-intrusive monitoring techniques to identify systems that have been infected by malware and are active participants in botnets. They also need to quickly identify large-scale denial of service attacks and try to limit the impacts those attacks have on customers. Unfortunately, such detection has been expensive. NFV promises to distribute monitoring functions more economically and more widely, enabling much more agile responses to threats to customers. In addition, NFV can harness specific virtualization techniques recommended by NIST (such as hypervisor introspection) to ensure active monitoring of applications. Moreover, SDN provides the potential to quickly limit or block malicious traffic flows much closer to the source of attacks.
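As a simple illustration of the kind of check a distributed NFV monitoring function might run, the sketch below flags an interface whose traffic suddenly exceeds a multiple of its recent baseline. The threshold multiplier and window size are illustrative assumptions, not recommendations.

```python
# Sketch of a volumetric-attack detector: flag a sample when it exceeds
# `multiplier` times the trailing-window average. Real detectors use far
# richer signals; this only shows the shape of the logic.

def detect_spike(samples_bps, multiplier=5.0, baseline_window=10):
    """Return indices of samples exceeding multiplier x the trailing mean."""
    alerts = []
    for i in range(baseline_window, len(samples_bps)):
        baseline = sum(samples_bps[i - baseline_window:i]) / baseline_window
        if samples_bps[i] > multiplier * baseline:
            alerts.append(i)
    return alerts

# Steady ~1 Gbps of traffic with a sudden 600 Gbps flood at the end.
traffic = [1e9] * 12 + [600e9]
print(detect_spike(traffic))  # [12]
```

Running many cheap instances of logic like this near the network edge, rather than one expensive centralized probe, is the economic shift NFV promises.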
Finally, NFV promises to allow us the opportunity to leap ahead on security practices in networks. Most of the core network technologies in place today (routing, switching, DNS, etc.) were developed over 20 years ago. The industry providing broadband services knows so much more today than when the initial broadband and enterprise networks were first deployed. NFV and SDN technologies provide an opportunity to largely clean the slate and remove intrinsic vulnerabilities. The Internet was originally conceived as an open environment – access to the Internet was minimally controlled and authentication never integrated at the protocol level. This has proven to be naïve, and open distributed architecture solutions enabled by NFV and SDN can help to provide a better, more securable infrastructure. Of course, there will continue to be vulnerabilities – and new ones will be discovered that are unique to NFV and SDN solutions.
As Cybersecurity Awareness Month closes and we start a new year focused on improving consumer experiences, CableLabs is pursuing several projects to leverage these technologies to improve the security of broadband services. We are working to define and enable the key imperatives required to secure virtualized environments. We are using our expertise to influence key standards initiatives. For example, we participate in the ETSI NFV Industry Specification Group (ETSI NFV), the most influential NFV standards organization. In fact, CableLabs chairs the ETSI NFV Security Working Group, which has substantially advanced the security of distributed architectures over the past four years. Finally, we continue to innovate new open and distributed network solutions to create home networks that can adaptively support secure services, new methods of authentication and attestation in virtual infrastructures, and universal provisioning interfaces.
Device Security in the Internet of Things
As of this writing, some of the largest distributed denial-of-service (DDoS) attacks ever are actively disrupting major service and content providers. Many of the attacks are being reported as leveraging Internet of Things devices such as IP cameras. It is striking that these dramatic attacks are happening during Cybersecurity Awareness Month.
How to Effect Change in Security
For many, IoT literally opens doors; for those of us who need electronic assistance with key tasks, it is critical for daily living. With an estimated 20 billion devices online four years from now, securing it is equally critical. CableLabs is focused on specific goals in securing Internet of Things (IoT) devices for three specific reasons: 1) our desire to protect the privacy and security of our subscribers; 2) the need to enable trust in the technology automating the environments we live in; and 3) the need to protect the network infrastructure supporting subscriber services. Our technical teams are actively working toward solutions on two fronts: handling the heterogeneous security models of existing devices through advanced networking techniques, and shaping future devices by guiding standards bodies and industry coalitions on security considerations.
Who is Looking out for Your Privacy?
Subscriber privacy goes beyond personal anonymity; it includes protecting information that can be used to identify people or their devices. Consider a mobile device, such as a Bluetooth fitness band, that broadcasts its unique identifier whenever requested (such as during any handshake to authenticate the device on various networks). That broadcast identifier could be used without the device owner's knowledge to identify and track shoppers in a mall, protesters, or visitors at medical clinics, among other concerns. Interestingly, network protection starts with device identity, and while many put this in opposition to subscriber privacy, it does not need to be. Prior to onboarding into the network, which involves authentication and authorization as well as exchanging credentials and network configuration details, devices can provide a temporary random identifier for new onboarding requests. After onboarding into a network, devices need an immutable, attestable, and unique identifier so that network operators can trace malicious behavior. Insecure devices that can evade identification, spoof their network address, or misrepresent themselves, all while participating in botnets, are a threat to everyone. Being able to rapidly trace attacks back to offending devices allows operators to coordinate more effectively with device owners in surgically tracking down and quarantining these threats.
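The two-identifier pattern described above can be sketched as follows. This is an illustrative scheme, not a standardized protocol: a throwaway random identifier for each onboarding attempt, and a stable per-network identity derived from a device secret afterward.

```python
# Sketch of the two-identifier pattern: unlinkable random identifiers
# before onboarding, a stable attestable identity after. The derivation
# (hash of device secret + network ID) is a hypothetical illustration.

import hashlib
import secrets

def onboarding_identifier():
    """Fresh random identifier per onboarding attempt: not trackable."""
    return secrets.token_hex(8)

def operational_identifier(device_secret, network_id):
    """Stable per-network identity: traceable by the operator, but not
    the same identifier across unrelated networks."""
    return hashlib.sha256(device_secret + network_id).hexdigest()[:16]

print(onboarding_identifier())  # different on every call
secret = b"device-unique-secret"
print(operational_identifier(secret, b"home-net-1"))  # stable per network
```

Because the operational identifier depends on the network, a tracker observing the device on one network learns nothing it can correlate with sightings elsewhere, while the home operator can still attribute traffic to the device.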
Security – Where, When and How
Subscriber security is different from privacy and looks to ensure availability, confidentiality, and integrity. Availability is the key reason for the need for immutable identifiers within networks: when networked devices are subverted to participate in DDoS attacks, the ability to trace traffic to the corrupted devices is key. Encryption of data (in use, at rest, and in transit) is the primary means of assuring confidentiality. Since many IoT devices are constrained in processing power, it has become easy for manufacturers to overlook the need for confidentiality (data protection), arguing that the processing, storage, and power costs of traditional PKI exceed device capabilities. Today, even disposable IoT devices are capable of using PKI thanks to Elliptic Curve Cryptography (ECC). ECC requires smaller keys and enables faster encryption than traditional methods have allowed, all while maintaining the same level of security assurance as traditional (RSA) cryptography. This allows not only for confidentiality, but can also be used to deliver integrity through non-repudiation (a device cannot deny it received a command/message) and message origin assurance (through signing or credential exchange). However, good ECC curve selection is very important. A final element of security is the ability for these devices to securely update their operating system, firmware, drivers, and protocol stacks. No system is perfect, and when a vulnerability is discovered, updating devices already deployed will be a key part of the success of the IoT and how we interact with these tools.
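The "smaller keys" claim is worth quantifying. The table below reflects the widely cited NIST SP 800-57 equivalences between RSA modulus sizes and ECC key sizes at the same security strength, which is why ECC fits constrained, even disposable, IoT hardware.

```python
# Key sizes at equivalent security levels, per NIST SP 800-57 guidance.
# The ratio shows why ECC suits devices with little storage, CPU, or power.

EQUIVALENT_KEY_BITS = {
    # security bits: (RSA modulus bits, ECC key bits)
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 521),
}

for security, (rsa_bits, ecc_bits) in EQUIVALENT_KEY_BITS.items():
    ratio = rsa_bits / ecc_bits
    print(f"{security}-bit security: RSA {rsa_bits} vs ECC {ecc_bits} "
          f"(~{ratio:.0f}x smaller keys)")
```

Note that the gap widens as security strength increases: at the 256-bit level, an RSA key is roughly 30 times larger than its ECC equivalent.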
The elements described above (availability, privacy, confidentiality, and integrity) all work together to develop trust. This trust comes from personal and shared experiences. The more positive security experiences consumers have with devices, the more trust is earned. Negative experiences erode this trust, often disproportionately to the events that built it, and often vicariously rather than through personal experience. For example, a subscriber who reads about a personal security camera whose feed has been visible to others on the internet may forgo the purchase of that, or similar, devices. The overall goal is to improve experiences for consumers in future devices, and not only to limit how many devices are compromised, but also to limit the scope and impact of any individual vulnerability by leveraging multiple layers of defense.
Working Together Toward Network Protection
When IoT devices can be used en masse to launch attacks targeting DNS servers, and when consumer market incentives don't enforce security as a primary concern, industry standards bodies and consortia are typically called on to develop solutions. The Open Connectivity Foundation (OCF) is the leading IoT influence group, with over 200 leading global manufacturers and software developers (Intel, Qualcomm, Samsung, Electrolux, Microsoft and others) joining forces to ensure secure and interoperable IoT solutions. Other ecosystems are converging on OCF as well, and groups like UPnP, the AllSeen Alliance, and OneM2M have merged into the OCF organization. CableLabs and network operators including Comcast and Shaw are part of this movement, contributing code, technical security expertise, leadership, specifications, and time to make the Internet of Things safer for everyone. The Linux Foundation project IoTivity is being built as a platform to enable device manufacturers to more economically include security and interoperability in their products. OCF is driving toward support within IoT devices for subscriber privacy, security, and trust.
Standards organizations tend to focus on future devices, but helping manage existing devices is another area of research and exploration. The IoT security community is actively engaged not only on the future but on the present, and on how to improve consumer, manufacturer, and operator experiences. A key tool to support existing IoT systems will be intermediating device/internet connections and providing bridges between ecosystems for interoperability, using advanced networking techniques to help manage devices.
These different needs, privacy, security, trust and network protection, all combine to create a positive perspective on the IoT environment. Imagine devices which are highly available, trusted to do what they need to do, when they need to, for only whom they are intended to, and that communicate across networks securely, all while maintaining privacy. This is the focus of component and device manufacturers, network operators, integrators, academics, and practitioners alike. The convergence we are seeing around standards and open source projects is great news for all of us.
Interested in learning more? Join Brian and several others at the Inform[ED]™ Conference in New York, April 12, 2017.
Multiple Access Point Architectures and Wi-Fi Whole Home Coverage
As mentioned in a previous blog post on AP Coordination by my colleague Neeharika Allanki, home sizes are growing and the number of client devices in a home network is increasing exponentially. There is a need not only for consistent performance in terms of throughput and connectivity, but also for Wi-Fi coverage throughout the home. Consumers often need more than one Wi-Fi Access Point (AP) in the home network to provide that coverage.
Many houses in the world do not have existing wires that can be used to network these APs together, and so one of the easiest and most cost effective ways to provide whole home Wi-Fi coverage is by using Wi-Fi itself to connect together the APs in the home. The technologies available today that can do this are Mesh APs (MAPs), Repeaters or Extenders.
Wireless repeaters and extenders have been around for years because consumers have long seen the need to expand Wi-Fi coverage in their homes. While some form of wireless mesh networking has been around for more than ten years, until recently there were no products designed for the home that used mesh to connect multiple APs. In the past year, there has been a dizzying array of product announcements and introductions for home Wi-Fi coverage, with many of them using mesh networking.
Mesh Access points (MAPs) are quickly gaining traction in home networks mainly due to ease of installation (even over Repeaters/Extenders) and the promise of high throughput with whole home coverage. A mesh AP network can be defined as a self-healing, self-forming, and self-optimizing network of MAPs. Each MAP can communicate with others using smart routing protocols and thereby choose an optimal path in order to relay the data from one point to another.
As mentioned before in our AP Coordination blog, client steering (moving Wi-Fi clients to the best AP in each location) and band steering (moving and keeping Wi-Fi clients on the best band: 2.4 GHz or 5 GHz) are very important in any multi-AP solution, such as a mesh or an AP + repeaters/extenders network. They are needed to ensure that each mobile client stays connected to the best AP for its current location. Without client steering, Wi-Fi clients may show a Wi-Fi connection, but throughput may suffer tremendously. This often shows up as the dreaded "Buffering…" message when streaming a video or a slow progress bar when loading a web page. In a fully wireless multi-AP solution, client steering and band steering are even more critical due to the throughput and latency penalty when traffic is repeated over Wi-Fi from one AP to another. As MAPs communicate with each other to form the mesh network, they implement some form of AP Coordination, which is usually proprietary in nature.
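A minimal steering decision can be sketched as follows. The AP names and the 8 dB margin are illustrative assumptions; real implementations weigh many more signals (load, band, airtime), but the hysteresis idea, only steer when another AP is meaningfully stronger, is the core that prevents clients from ping-ponging between APs.

```python
# Sketch of an RSSI-based client-steering decision with hysteresis.
# rssi_dbm maps AP name -> received signal strength for one client.

def pick_target_ap(current_ap, rssi_dbm, hysteresis_db=8):
    """Return the AP this client should associate with."""
    best_ap = max(rssi_dbm, key=rssi_dbm.get)
    if best_ap != current_ap and \
            rssi_dbm[best_ap] - rssi_dbm[current_ap] > hysteresis_db:
        return best_ap  # steer: the other AP is clearly better here
    return current_ap   # stay put: difference is within the margin

readings = {"living-room-ap": -72, "upstairs-ap": -58}
print(pick_target_ap("living-room-ap", readings))  # upstairs-ap
# A 5 dB difference is within the margin, so the client is not steered:
print(pick_target_ap("upstairs-ap",
                     {"living-room-ap": -60, "upstairs-ap": -65}))
```

A standardized AP Coordination protocol would let every AP in the home share these per-client measurements so the decision uses network-wide data rather than one AP's view.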
CableLabs recently tested mesh networking solutions and AP + repeater solutions consisting of 3 APs in a 5000+ sq. ft. test house. We performed throughput, jitter, latency and coverage testing at more than twenty locations in and around the house. We found that we were able to run two streaming videos, at HD bitrates (~20Mbps), to video clients in the home while also delivering over 50Mbps to our test client. Both mesh and AP + repeater solutions were able to handle this video throughput, as well as deliver over 50Mbps throughput throughout the house and even to some areas 20’ outside the house. This is excellent news for consumers whose access to the Internet is wireless and who want that access everywhere in their homes.
CableLabs is working with vendors to define a standardized AP Coordination Protocol that would allow all APs in a home network to share information to allow them to make client steering decisions, along with other network maintenance tasks.
Debunking the Myths of Shared Networks: The Point-to-Multipoint Effect
“I don’t want to have to share a pipe. The problem with ‘cable’ is shared pipes. If my neighbor is doing a bunch of stuff over the network, I get impacted too. With fiber I get speed and no shared pipes.”
--- Entrepreneur in a focus group
The notion that subscribers connected to residential fiber networks do not “share pipes” is often misunderstood. For residential fiber networks, sharing pipes is one of the main reasons fiber to the home (FTTH) is even remotely cost-effective for service providers to deploy. But what is most surprising is the following: deploying shared network solutions has led to a more rapid increase in residential broadband speeds than otherwise would have been the case with non-shared access network solutions. I like to call this the Point-to-Multipoint Effect. In the process, sharing pipes has allowed broadband speed growth to surpass the predicted 50% compounded annual growth rate commonly known as Nielsen’s Law of Internet Bandwidth. Read on to learn more…
First, a couple of definitions:
- A (non-shared) point-to-point (P2P) network topology is one in which there is a single dedicated connection between two endpoints. In the case of access networks, one endpoint is typically located at the hub or central office, or could be located at a remote distribution point. The other endpoint is a digital subscriber line (DSL) modem, for example, or a simple Ethernet switch, located on the customer premises. In P2P networks, the peak capacity of a link is used exclusively by the two connected endpoints.
- A (shared) point-to-multipoint (P2MP) network topology is one in which there is a single downstream transmitter and multiple access termination devices that all selectively listen to the same downstream data stream. A key characteristic with P2MP networks is the peak capacity of the network is shared between all connected endpoints. Two examples of P2MP networks are HFC and passive optical networking (PON), shown in the figure below (showing downstream transmission).
Two examples of (shared) point-to-multipoint networks: HFC and PON
The PON solution represents the most prevalent residential fiber solution in the world, primarily due to lower costs compared to P2P fiber solutions. To illustrate the sharing, referring to the diagram above, if 10G-EPON is the technology choice, each optical network unit (ONU) connected to the network transmits upstream at ~10 Gbps, but they don’t transmit simultaneously. Instead, an ONU must be scheduled by the OLT for upstream transmission to avoid collisions with other ONUs. In essence, the scheduling of ONUs results in the sharing of the 10 Gbps peak capacity. Consequently, there is a whole lotta pipe sharing going on in PON solutions.
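The effect of OLT scheduling on per-ONU throughput can be sketched numerically. Each ONU bursts at the full line rate during its granted slots, so its sustained rate is the line rate times its share of slots. The round-robin-style slot counts below are illustrative, not a real DBA algorithm.

```python
# Sketch of shared PON capacity: the OLT grants upstream time slots, so
# sustained per-ONU throughput is the line rate scaled by slot share.

LINE_RATE_GBPS = 10  # e.g., 10G-EPON upstream line rate

def sustained_rates(slot_demands):
    """slot_demands: ONU name -> granted slots per cycle. Returns Gbps."""
    total_slots = sum(slot_demands.values())
    return {onu: LINE_RATE_GBPS * slots / total_slots
            for onu, slots in slot_demands.items()}

# Four ONUs on one PON; onu-1 is busy, the others are lightly loaded.
rates = sustained_rates({"onu-1": 5, "onu-2": 1, "onu-3": 1, "onu-4": 1})
print(rates)  # onu-1 gets 6.25 Gbps of the shared 10 Gbps; others 1.25
```

Note the statistical-multiplexing payoff: a busy ONU can briefly enjoy most of the 10 Gbps pipe, far more than it could ever get from a dedicated link sized to the same total cost.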
Do shared networks necessarily perform better or worse than non-shared networks? It depends on how performance is measured, but in one key area, residential broadband speeds, shared networks have significantly outperformed non-shared networks by a substantial amount.
A recent blog discussed Nielsen's Law of Internet Bandwidth and how the cable industry was preparing to meet future broadband speeds with 100G-EPON. When Mr. Nielsen made his initial prediction in 1998, residential broadband access was dominated by dialup and ISDN connections, which are both P2P solutions. Indeed, for approximately the first 14 years after that initial 300-bits-per-second dialup connection in 1982/1983, the progression of available peak service tier bit rates followed the 50% annual growth rate prediction.
The release of the first DOCSIS® specifications by CableLabs in 1996 essentially represented the dawn of P2MP solutions, i.e. shared, for residential Internet connectivity. According to the data in the chart above, the tremendous rate of technology advancements resulting from the shared DOCSIS/HFC network solution, and later with the development of shared PON technologies, coupled with the relative cost-effectiveness of these solutions, has far exceeded other P2P technologies for residential broadband. While the initial growth prediction in 1998 was a 50% annual growth rate, the Point-to-Multipoint Effect increased the growth rate closer to 70% for residential Internet connectivity. The Point-to-Multipoint Effect indicates that sharing pipes for residential connectivity has provided a solution that has actually allowed residential high speed data rates to increase at a faster pace! This “sharing” trend is expected to continue with the development of Full Duplex DOCSIS and 100G-EPON, making the introduction of new services possible. Thus, just like our parents always told us, it is good to share.
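The difference between a 50% and a 70% compounded annual growth rate is easy to underestimate, so here is a back-of-the-envelope check of the Point-to-Multipoint Effect (the 1 Mbps starting tier is an illustrative figure, not historical data):

```python
# Compound a starting speed at Nielsen's predicted 50% annual growth
# versus the ~70% observed for shared P2MP residential networks.

def compound(start, rate, years):
    """Value of `start` after `years` of compounded annual growth."""
    return start * (1 + rate) ** years

start_mbps = 1.0  # illustrative starting service tier
for years in (10, 20):
    at_50 = compound(start_mbps, 0.50, years)
    at_70 = compound(start_mbps, 0.70, years)
    print(f"After {years} years: {at_50:,.0f} Mbps at 50% vs "
          f"{at_70:,.0f} Mbps at 70% ({at_70 / at_50:.0f}x higher)")
```

Over 20 years, the 70% curve ends up roughly an order of magnitude above the 50% curve, which is why a seemingly modest bump in growth rate translates into such a dramatic gap in available speeds.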
In his role as Vice President Wired Technologies at CableLabs, Curtis Knittle leads the activities which focus on cable operator integration of optical technologies in access networks. Curtis is also Chair of the 100G-EPON (IEEE 802.3ca) Task Force.
Snapping Together a Carrier Grade Cloud
Today's enterprise and hyper-scale cloud solutions will not deliver everything needed to virtualize the service providers’ networks. However, cloud solutions do provide many of the building blocks as a great starting point.
Service providers are evolving their networks and services to better meet customer needs and expectations. Hosted applications are continuously updated with new features and consumers are starting to demand a similar frequency of change with services innovation. This rate of change and innovation in service provider networks will not be achieved by rolling more and more specialized hardware boxes to tens of millions of customers. Delivering software-based network solutions that reduce dependency on specialized hardware boxes is the only way to meet these customer expectations.
End users' expectation for service quality continues to increase, and they are typically not willing to accept a tradeoff between performance and capabilities. They want both increased performance and increased capabilities. Service Level Agreements (SLAs) are typically required for enterprise customers, but simply over-provisioning dedicated resources to meet these needs is neither economic, nor sustainable. High performance and network proximity are key to delivering interactive voice and video solutions with high bandwidth and low latency. No one wants to be misunderstood when delivering nuanced details during a videoconference with their stakeholders!
Currently, network services are delivered on several specialty devices located at customer sites or hosted by operators. Today, these specialty devices only provide a subset of needed capabilities and physical upgrades are both expensive and time consuming.
Critical Success Factors
In addition to being consistent and predictable, the network must be fast. There are no milliseconds to spare while moving across the network. For time-sensitive applications such as cellular networks, there is no tolerance for routing packets along inappropriate physical paths; they need to traverse the quickest route to their ultimate destination. To use a reference from "Smokey and the Bandit," one of my favorite movies, Bandit (Burt Reynolds' character) didn't drive through New York City to win the race from Texas to Georgia. He took the shortest and fastest route possible. Network traffic needs to do the same thing: stick to the fastest and most direct route and deviate only when absolutely necessary.
This is not the natural mode for software running in an interrupt-driven, multi-tasking environment. Much as with humans trying to multi-task, tasks tend to take much longer when the system is very busy. Software needs to be configured to prevent or bound interference when multiple workloads are running on the same computer.
"Location, location, location" is as important to network virtualization as it is to real-estate. Virtual Network Functions (VNFs) are the software components that replace the current Physical Network Functions (PNFs). VNFs need to be strategically placed, including positioning at the customer site or even other service provider nodes. Managing Wi-Fi networks requires access to devices at customer sites. Even when offloading the majority of the work to a hosted cloud, there are still physical accesses, routing and local security workloads that are best hosted on the customer site.
Low latency services, such as Content Delivery Networks, need caching instances located relatively close to the customer site to reduce latency and core network bandwidth. Storage of data should not sit on the other side of a busy or slow network connection, and the path the data takes over the network needs to provide a consistent user experience. The network also needs to be flexible, as it must adapt to varying network loads and outages. Enterprise cloud applications are typically designed for high availability and low cost; responsiveness for the end user is not always a primary consideration. The ability to easily manage service delivery locations by automatically placing and moving workloads within a data center, or across geographies, is a must for virtualizing network services.
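The placement decision described above can be sketched as a simple constrained choice: put the workload at the lowest-latency node that still has capacity. The node names, latencies, and capacities below are illustrative, and a real orchestrator would weigh cost, affinity, and redundancy as well.

```python
# Sketch of a VNF placement decision: pick the lowest-latency node
# (relative to the customer) that has enough free capacity.

NODES = [
    {"name": "customer-premise", "latency_ms": 1,  "free_cores": 0},
    {"name": "hub-site",         "latency_ms": 5,  "free_cores": 8},
    {"name": "regional-dc",      "latency_ms": 25, "free_cores": 64},
]

def place_vnf(nodes, cores_needed):
    """Return the name of the best placement, or None if nothing fits."""
    candidates = [n for n in nodes if n["free_cores"] >= cores_needed]
    if not candidates:
        return None  # orchestrator must free capacity or scale out
    return min(candidates, key=lambda n: n["latency_ms"])["name"]

print(place_vnf(NODES, 4))   # hub-site: the premise is closer but full
print(place_vnf(NODES, 32))  # regional-dc: only node with 32 free cores
```

The interesting part is the fallback behavior: when the ideal location is full, the workload lands one tier further away rather than failing, which is exactly the flexibility physical appliances cannot offer.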
VNFs must work with the deployed Network Function Virtualization (NFV) infrastructure and hardware. Should each VNF require a different infrastructure, it would be nearly impossible to manage and would cost much more to deploy. Interoperability can enable more competition and a broader set of vendors to deliver network services. Competition drives innovation. Standards and interoperability drive economies of scale.
ETSI-NFV is leading the way in developing the foundational standards for NFV based on a set of use cases and requirements coming from industry. Other standards bodies are referencing the ETSI-NFV work to address application-specific needs. These standards are becoming the basis for defining interoperability. But as with any standards effort, there will be many interpretations and implementations that follow these guidelines.
All of the independent components will need to be validated at key touch points to ensure interoperability, and there is still no single test suite available today that will guarantee interoperability between VNFs, or between VNFs and the infrastructure that hosts them. To help address this issue, ETSI-NFV is developing test specifications that are being referenced by OPNFV, which itself was initiated by the ETSI NFV co-founders to accelerate implementation of, and feedback on, the NFV specifications.
Over the next two to three years, we should see NFV being incorporated in mainstream cloud platforms. The expected performance and interoperability enhancements will increase the efficiency of compute and networking resources while requiring less power and space to run the same work. The improved, distributed nature of a trusted cloud will simplify managing applications running on or near the customers’ locations.
What CableLabs is Doing
CableLabs’ SDN/NFV Application Development Platform and Stack project (SNAPS for short) is just one of the initiatives at CableLabs aimed at accelerating and easing the adoption of network virtualization.
We are identifying the performance needs for network virtualization by evaluating the best open source software components and commercially available servers. The goal is to build a stable, replicable platform for developing and demonstrating virtualized network capabilities, and for validating interoperability and repeatability. Currently, the SNAPS project leverages a specific configuration of OPNFV that is being tested and hardened, and many of our enhancements have been included in the OPNFV "Colorado" release of the Apex installer.
Sharing our Expertise
While trying out different OpenStack installers, we soon ran into the dilemma of how to quickly use and validate our cloud in a repeatable manner. In response, we created a Python library responsible for deploying and provisioning OpenStack tenants, from which we built a set of test suites to perform this validation. While the test suite tools are still under development, we have already made them available under the Apache v2 open source license in CableLabs' C3 collaborative software environment.
Additional contributors are always welcome. The source repository is located here: https://gerrit.cablelabs.com/#/admin/projects/snaps-provisioning
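To illustrate the repeatable-provisioning idea, here is a minimal, hypothetical Python sketch of a declarative tenant specification with a validation step. The class and field names are illustrative assumptions for this post, not the actual SNAPS API.

```python
# Hypothetical sketch of a declarative OpenStack tenant spec plus a
# validation step, in the spirit of repeatable cloud provisioning.
# All names here are illustrative, not the real SNAPS library API.
from dataclasses import dataclass, field

@dataclass
class TenantSpec:
    """Declarative description of a tenant to provision."""
    name: str
    network_cidr: str
    image: str
    flavor: str
    users: list = field(default_factory=list)

def validate(spec: TenantSpec) -> list:
    """Return a list of problems; an empty list means the spec is deployable."""
    problems = []
    if not spec.name:
        problems.append("tenant name is required")
    # Very light IPv4 CIDR sanity check: "a.b.c.d/len"
    addr, _, prefix = spec.network_cidr.partition("/")
    if not (prefix.isdigit() and 0 <= int(prefix) <= 32
            and len(addr.split(".")) == 4):
        problems.append("network_cidr must be IPv4 CIDR notation")
    if not spec.users:
        problems.append("at least one tenant user is required")
    return problems
```

Validating a spec up front, before touching the cloud, is what makes repeated deploy-and-teardown cycles cheap to run in a test suite.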
Accelerating NFV Adoption
The SNAPS project team, consisting of CableLabs member companies and vendors, is currently generating requirements and defining use cases to be shared publicly. These requirements include both performance and interoperability guidelines.
CableLabs’ wholly owned subsidiary Kyrio is using the lessons learned through this R&D process to drive the evolution of the Kyrio SDN/NFV Interoperability lab.
We are actively involved in OPNFV and OpenDaylight, and we contribute to ETSI NFV.
CableLabs Joins the CBRS Alliance
On April 28, the FCC finalized its rules for the Citizens Broadband Radio Service (CBRS), opening 150 MHz of spectrum for shared use by commercial entities in the 3.5 GHz band (3.55-3.7 GHz). There will be 15 channels, each 10 MHz wide, available at a granular census tract geography across the United States, suitable for LTE time division duplex (TDD). 80 MHz is reserved for unlicensed use, and the other 70 MHz can be auctioned as licenses with three-year terms. Should an auction not happen for lack of interest, the full 150 MHz is available for unlicensed use until another auction opportunity a year later. This represents the first opportunity for the democratization of LTE for new, innovative applications. Unlike spectrum for mobile networks, which can be used to cover very wide areas, CBRS is designed for small cells in both indoor and outdoor locations.
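The channel plan follows directly from the numbers above: 150 MHz from 3550 to 3700 MHz divided into 10 MHz channels yields 15 channels. A quick illustrative sketch:

```python
# Enumerate the CBRS channel edges implied by the FCC band plan:
# 3550-3700 MHz divided into 10 MHz channels.
BAND_START_MHZ = 3550
BAND_END_MHZ = 3700
CHANNEL_WIDTH_MHZ = 10

def cbrs_channels():
    """Return (low, high) edges in MHz for each 10 MHz CBRS channel."""
    return [(f, f + CHANNEL_WIDTH_MHZ)
            for f in range(BAND_START_MHZ, BAND_END_MHZ, CHANNEL_WIDTH_MHZ)]
```

The list runs from (3550, 3560) to (3690, 3700); how many of those 15 channels are licensed versus unlicensed in a given census tract depends on the auction outcome described above.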
CableLabs has joined the CBRS Alliance, founded by Google, Qualcomm, Intel, Nokia, Ruckus and Federated Wireless, to evangelize LTE-based CBRS technology, use cases and business opportunities for our members. We plan to help drive the technology developments necessary to fulfill our mission. The Alliance will also establish an effective product certification program for LTE equipment in the US 3.5 GHz band, ensuring multi-vendor interoperability. Kyrio, a wholly owned subsidiary of CableLabs, will evaluate expanding its current testing services to support the CBRS program.
The CBRS Alliance believes that LTE-based solutions in the CBRS band, utilizing shared spectrum, can enable both in-building and outdoor coverage and capacity expansion at massive scale. For example, cable operators could deploy small cells in their customers’ homes to deliver mobile data where it is consumed, at much faster speeds than external LTE networks and with owner economics. Outdoor small cells with higher transmit powers could cover busy streets and similar areas.
To maximize CBRS’s full potential, the CBRS Alliance aims to enable a robust ecosystem that makes LTE-based CBRS solutions widely available.
The innovative shared spectrum model adopted by the U.S. Federal Communications Commission for the Citizens Broadband Radio Service (CBRS) constitutes a bold and historic shift in spectrum allocation.
For more information, see the CBRS Alliance web site.
A Milestone in Wi-Fi / LTE-U Coexistence
Today is an important milestone for unlicensed spectrum coexistence: the Wi-Fi Alliance (WFA) has released its plan for testing how well LTE-Unlicensed coexists with Wi-Fi.
This culminates many months of work by expert engineers within the WFA and its membership, including CableLabs staff. The outcome is that we now have a definitive set of tests, based on real-world consumer data, against which to judge LTE-U, and we can move past the competing technical studies that were the hallmark of 2015.
The WFA and its staff are to be commended for bringing all sides to the table on this issue of such importance for broadband consumers everywhere. The test plan, developed in record-time, is a product of compromise by all sides, and LTE-U proponents participated robustly in the process. There are a number of tests that CableLabs supported as important that ultimately were not adopted. But the final product is nevertheless essential – both in validating coexistence performance of any LTE-U device proposed for deployment, and as a sign that diverse industry interests can work toward solutions as wireless access becomes ever more important for consumers.
CableLabs will continue to be engaged as the WFA moves to implement this plan with authorized test labs. We look forward to a transparent process with results reported publicly by the WFA. As we move to this implementation phase, it is worth describing what the test plan does, in order to understand why it is so important.
At a high level, the test plan does the following:
- Checks that LTE-U devices select the most lightly used channel, as LTE-U proponents say they will do;
- Ensures that new Wi-Fi networks can access the channel when LTE-U is active;
- Measures the impact to Wi-Fi throughput and latency from LTE-U; and,
- Ensures that LTE-U adapts its use of the spectrum in response to variation in consumer use of Wi-Fi, as occurs in the real world, in real time.
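The first and third behaviors above can be illustrated with a toy airtime model. The functions and numbers below are illustrative only; they are not part of the WFA test plan.

```python
def pick_channel(utilization):
    """Select the most lightly used channel, given observed airtime
    fractions (0..1) per channel -- the behavior the plan's first
    check verifies LTE-U devices actually exhibit."""
    return min(utilization, key=utilization.get)

def wifi_throughput_share(lte_u_duty_cycle):
    """Crude upper bound on Wi-Fi's remaining throughput share when
    LTE-U occupies the channel for a fixed duty cycle: Wi-Fi keeps
    at most the leftover airtime fraction."""
    return max(0.0, 1.0 - lte_u_duty_cycle)
```

In this simple model, an LTE-U device that duty-cycles 50% of the airtime caps Wi-Fi at half its standalone throughput; the real test plan measures the actual throughput and latency impact over the air rather than assuming this idealized airtime split.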
And it does all of this at signal levels that have been shown with real-world data to be reflective of consumer use of Wi-Fi hotspots. These tests are necessary due to the well-documented shortcomings in the LTE-U Forum coexistence specification, and the lack of standardized test procedures to date, which have yielded vastly different coexistence conclusions. For more information on our views of the test procedures, see Jennifer Andreoli-Fang’s contribution to the August workshop of the WFA, which is available here.
Reasonable compromises have been made by all sides in developing this test plan. It is time to move forward using the outcome of this process, in full, as the sole source of reliable determinations of LTE-U coexistence.
Liberty Global and CableLabs Join MulteFire Alliance
Today, CableLabs is taking a significant step to drive the development of next-generation wireless technology. We are excited to announce that, along with our member Liberty Global, we are joining the MulteFire Alliance, an open consortium dedicated to making mobile technologies more widely available for use in shared, unlicensed spectrum.
MulteFire is based on 3GPP License Assisted Access (LAA) LTE, which uses listen-before-talk etiquette to share spectrum in a manner similar to Wi-Fi. But unlike LAA, MulteFire will place control signaling entirely in the unlicensed band, breaking the reliance on licensed spectrum and mobile networks. This is a capability that we and others have proposed several times in 3GPP, so far without successful adoption in that body. Our hope is that pursuing this technology in the Alliance will enable its rapid integration into global standards.
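As a rough illustration of listen-before-talk etiquette, here is a toy model: sense the channel, and defer with a random backoff while it is busy. This is a sketch of the general idea only, not the LAA or MulteFire channel-access procedure defined in the 3GPP specifications.

```python
import random

def listen_before_talk(channel_busy, max_backoff_slots=8, rng=random):
    """Toy listen-before-talk: channel_busy() is assumed to report an
    energy-detect result; each time the channel is sensed busy, defer
    for a random backoff. Returns total slots deferred before the
    transmit opportunity."""
    slots = 0
    while channel_busy():
        slots += rng.randint(1, max_backoff_slots)
    return slots
```

Deferring by a random, rather than fixed, number of slots is what lets many uncoordinated devices share a channel without repeatedly colliding, which is the same fairness principle Wi-Fi's CSMA/CA relies on.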
We see this step as the basis for renewed collaboration on next-generation wireless technology, which will become ever more important as we move toward 5G. Reliable coexistence, full transparency, and deep engagement with partners have long been central to our work on technologies that use unlicensed, shared spectrum. These same principles will continue to apply as we work with the MulteFire Alliance, 3GPP, the Wi-Fi Alliance, IEEE, and other groups going forward.
Below is the full copy of the joint press release that was issued today:
Full Duplex DOCSIS® 3.1 Specification Effort Launches
During the CableLabs 2016 Winter Conference, CableLabs announced the Full Duplex DOCSIS 3.1 specification project that will significantly increase upstream speeds on the DOCSIS network. The announcement of the Full Duplex extension of the DOCSIS 3.1 specification, and its potential of offering multi-Gbps symmetric services over the HFC network, created a lot of excitement in the industry. Since then, a lot has been going on behind the scenes.
CableLabs has been actively collaborating with the vendor community to further refine the concept and system architecture of a Full Duplex DOCSIS 3.1 system. The ecosystem support for the Full Duplex DOCSIS 3.1 technology has been staggering, with many vendors collaborating and contributing to the development of the technology. A recent example is Cisco’s contribution of a new silicon reference design of a digital echo canceler that maximizes the use of HFC capacity to provide a scalable multi-gigabit return path.
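To give a flavor of what a digital echo canceler does, here is a generic least-mean-squares (LMS) sketch of the underlying DSP idea, not Cisco's reference design: adaptively estimate the echo of the transmitted downstream signal present in the received upstream signal, and subtract it.

```python
def lms_echo_cancel(tx, rx, taps=8, mu=0.05):
    """Subtract an adaptively estimated echo of tx from rx using a
    least-mean-squares (LMS) filter. Returns the residual signal,
    which approaches the true upstream signal as the filter converges.
    tx, rx: equal-length lists of samples; mu: adaptation step size."""
    w = [0.0] * taps                      # adaptive filter weights
    out = [0.0] * len(rx)
    for n in range(taps, len(rx)):
        x = tx[n - taps:n][::-1]          # recent downstream samples
        echo_est = sum(wi * xi for wi, xi in zip(w, x))
        err = rx[n] - echo_est            # residual after cancellation
        w = [wi + mu * err * xi for wi, xi in zip(w, x)]
        out[n] = err
    return out
```

Fed a received signal that is a delayed, attenuated copy of the transmitted signal, the filter weights converge and the residual echo power quickly drops, which is the property that lets Full Duplex DOCSIS transmit and receive in the same spectrum at the same time.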
In June, CableLabs officially launched the Full Duplex DOCSIS 3.1 project, transitioning it from the innovation phase to the R&D phase focused on specification development. Our first face-to-face meeting held in Louisville last month featured strong participation from CableLabs members and the vendor community including several new participants. Working group meetings will be held on a regular basis until the specification development is complete.
Full Duplex DOCSIS 3.1 technology will radically change the art-of-the-possible on the HFC network by delivering an unparalleled experience to cable customers.