
CableLabs Announces Major Update to the Open Source LoRa Server

Daryl Malas
Principal Architect, Advanced Technology Group

Dec 18, 2018

Last week, in my blog post “CableLabs Open Source LPWAN Server Brings Diverse LPWAN Technologies Together,” we announced our LPWAN Server. This project is open source and:

  • Provides new capabilities to bring IoT LPWAN wireless technologies together
  • Is a flexible tool to enable the use of multiple servers across multiple vendors

The LPWAN Server was designed to work with the CableLabs sponsored open source LoRa Server and, together, provide a comprehensive solution to enable many LPWAN use cases. It has been nearly 18 months since we released the first major revision of the LoRa Server and, during this time, many improvements have been made.

In this blog, I’ll discuss why we invested in the LoRa Server, how the project continues to improve and how it aligns with the latest specifications released from the LoRa Alliance. If you need a refresher on the LoRa Server, please see my blog post “CableLabs Announces an Open Source LoRaWAN Network Solution.”

Why Did CableLabs Invest in the LoRa Server?

The LoRa Server project was conceived and started by Orne Brocaar. His goal was to develop a fully open source LoRa Server that could be used by anyone looking to gain an introduction to LoRaWAN and LPWANs. Due to limited time and resources, the project remained minimal in functionality and progression for nearly a year.

CableLabs had a goal to find a fully community-based open-source LoRaWAN server to provide the cable industry with the ability to easily prototype, test and trial LPWAN services using unlicensed RF spectrum. We discovered the LoRa Server and began investing heavily into developing the functionality to align with our goal. Shortly after this, Orne joined the CableLabs team to lead the development of the LoRa Server into the exceptional tool it has become.

Our design strategy began and continues to focus on these key areas:

  • Full functional compliance with LoRa Alliance specifications
  • Extensive debug and logging tools
  • Protocol transparency to the operator of the server
  • Scalable for any sized testing, trial or use

While our goals are to provide a tool for testing, trials and related use, the server is fully open source under the MIT license. This allows it to be used freely for anything from testing to production. We want to enable growth and creativity in the LPWAN ecosystem using the LoRaWAN protocol.

Introducing a New Version of the LoRa Server

In the summer of 2018, we released LoRa Server v2. Since then, we have released several additional updates that introduce new features and improvements while maintaining backward compatibility with LoRaWAN 1.0. Whereas v1 (released in June 2017) focused on delivering the first stable release after many test versions, v2 focuses on an improved API, an improved User Interface (UI), compliance with LoRaWAN 1.1 and additional interesting new features.

LoRaWAN 1.1

The major feature of LoRa Server v2 is support for LoRaWAN 1.1. LoRaWAN 1.1 is an important release for many reasons:

Enhanced Security

LoRa Server v2 enhances the security of LoRaWAN devices by providing LoRaWAN 1.1 support. Not only does LoRaWAN 1.1 add better protection against replay attacks, it also provides better separation between the encryption of the MAC layer and the application payloads. This separation also facilitates the implementation of roaming in the future. It is important to mention that LoRa Server v2 still supports LoRaWAN 1.0 devices.
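Replay protection in LoRaWAN rests on frame counters that must strictly increase. The sketch below illustrates the core idea only; it is not LoRa Server's actual implementation, which also handles counter rollover and per-session resets:

```python
class ReplayGuard:
    """Reject frames whose counter does not strictly increase.

    Minimal illustration of LoRaWAN-style replay protection; a real
    network server also handles counter rollover and session resets.
    """

    def __init__(self):
        self.last_fcnt = {}  # device address -> last accepted frame counter

    def accept(self, dev_addr: str, fcnt: int) -> bool:
        last = self.last_fcnt.get(dev_addr)
        if last is not None and fcnt <= last:
            return False  # replayed or out-of-order frame: drop it
        self.last_fcnt[dev_addr] = fcnt
        return True
```

A captured frame that is re-transmitted later carries a counter the server has already seen, so it is rejected without ever reaching the application.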

Redesigned Web Interface

Another major feature of LoRa Server v2 is the completely redesigned and rewritten web interface. The new interface is more responsive thanks to smarter caching, and it is more user-friendly and easier to navigate.

API Improvements

As many users are integrating LoRa Server into their own platforms using the LoRa Server APIs, we want to make sure these APIs are easy to use and consistent. LoRa Server v2 removes many inconsistencies present in the v1 API and makes it possible to reuse objects so that code duplication is avoided.

Multicast

Multicast is a long-requested feature that has finally been available since LoRa Server v2.1.0. It makes it possible to assign devices to a multicast group, so a group of devices can be controlled without the need to address each device individually, reducing the required airtime. One of its use cases is Firmware Updates Over The Air (FUOTA), which was recently released by the LoRa Alliance. In an upcoming version, we are planning to further integrate this into the LoRa App Server component of the LoRa Server.
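The airtime saving is easy to quantify: unicast repeats a frame once per device, while multicast transmits it once for the whole group. A back-of-the-envelope sketch (the 1.5 s frame airtime and device count below are hypothetical, not figures from the project):

```python
def downlink_airtime_s(num_devices: int, frame_airtime_s: float, multicast: bool) -> float:
    """Total airtime needed to deliver one downlink frame to a group.

    Unicast repeats the frame once per device; multicast transmits a
    single frame that every device in the group receives.
    """
    frames = 1 if multicast else num_devices
    return frames * frame_airtime_s

# Hypothetical example: a 1.5 s FUOTA fragment sent to 100 devices.
unicast_s = downlink_airtime_s(100, 1.5, multicast=False)   # 150.0 s of airtime
multicast_s = downlink_airtime_s(100, 1.5, multicast=True)  # 1.5 s of airtime
```

For firmware updates spanning many fragments, this difference is what makes FUOTA practical at all on duty-cycle-limited bands.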

Geolocation

Since LoRa Server v2.2.0, the server provides geolocation support. By default, it integrates with the Collos platform provided by Semtech, but by using the provided geolocation API, other platforms can be used. Please note this requires a v2 LoRa Gateway with geolocation capabilities, as a high precision timestamp is required for proper geolocation.
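The need for a high-precision timestamp follows directly from the physics: TDOA-style geolocation turns differences in arrival time at multiple gateways into differences in distance at the speed of light, so timing error maps straight into position error. A quick sketch of that relationship:

```python
# Speed of light in meters per second.
C = 299_792_458

def position_error_m(timestamp_error_s: float) -> float:
    """Approximate geolocation error contributed by gateway timing error.

    Arrival-time differences are converted to distance differences at
    the speed of light, so timestamp error scales directly into meters.
    """
    return C * timestamp_error_s

# Even a 1 microsecond timestamp error corresponds to roughly 300 m:
print(round(position_error_m(1e-6)))  # prints 300
```

This is why ordinary gateway clocks are not enough: useful accuracy requires timestamps good to tens of nanoseconds, which is exactly what the geolocation-capable v2 gateway hardware provides.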

Google Cloud Platform integration

A common request we have received is how to scale LoRa Server. Since LoRa Server v2.3.0, it is possible to make use of the Google Cloud Platform infrastructure to improve scalability and availability. LoRa gateways can directly connect to the Cloud IoT Core MQTT bridge (using the LoRa Gateway Bridge), and the LoRa Server and LoRa App Server integrate with Google Cloud Pub/Sub.
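As a small illustration of how software might consume gateway events in this setup, the sketch below parses a "gateway/&lt;gateway-id&gt;/&lt;event&gt;" MQTT topic of the kind used by the v2 LoRa Gateway Bridge; verify the exact topic layout and event names (e.g. rx, tx, stats) against the documentation for your version:

```python
def parse_gateway_topic(topic: str):
    """Split a LoRa Gateway Bridge MQTT topic into (gateway_id, event).

    Assumes the "gateway/<gateway-id>/<event>" layout; the actual
    topics published depend on the LoRa Gateway Bridge version.
    """
    parts = topic.split("/")
    if len(parts) != 3 or parts[0] != "gateway":
        raise ValueError(f"unexpected topic: {topic}")
    return parts[1], parts[2]
```

With Cloud IoT Core in the path, the same event streams arrive via Google Cloud Pub/Sub instead of a self-hosted broker, but the per-gateway routing idea is the same.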

Open Source Community

The open source community is encouraged to take advantage of our efforts and to further functional support for even more gateways, solutions and use cases. There are many LoRaWAN gateways and applications, and we would like the development community to help us integrate these.

To find out more information about the LoRa Server and become involved in this project, go to the LoRa Server site.

Subscribe to our blog for updates on the open source LoRa Server.




CableLabs Open Source LPWAN Server Brings Diverse LPWAN Technologies Together

Daryl Malas
Principal Architect, Advanced Technology Group

Dec 11, 2018

CableLabs is excited to announce a new open source project called LPWAN Server. The LPWAN Server provides new capabilities to bring IoT LPWAN wireless technologies together.

Before we go into more details on the LPWAN Server, let us first get some background on this space. In my previous blog post, I discussed the Internet of Things (IoT) as a growing industry composed of a massive number of devices that connect to each other to benefit our lives. For example, a soil moisture sensor can help a farmer determine when to water their crops rather than potentially wasting water through a legacy timer-based approach. In that blog post, CableLabs announced the release of an open source LoRaWAN solution, LoRa Server.

What Are LoRa Server and LPWANs?

LoRa Server is a community-sourced open source LoRaWAN network server for setting up and managing LoRaWAN networks. LPWANs connect sensors that are designed to last for years on a single battery while periodically transmitting information over long distances.

There are many potential use cases shown below:

LPWA use cases graphic by LoRa Alliance member Actility on the occasion of its collaboration with SoftBank in Japan.

LPWANs are designed to cover large geographical areas and minimize the amount of power required for sensors to interact with the network. There are many solutions available to enable these use cases, including:

  • LoRaWAN™: LoRaWAN is a partially open, unlicensed-spectrum solution developed through the specification efforts of the LoRa Alliance. While the specifications are developed within the Alliance, they are made available to the general public upon completion.
  • Mobile solutions from 3GPP: 3GPP defined Cat-M1 and NB-IoT for varying connectivity requirements. These are also open standards, but they require licensed spectrum.
  • Weightless: Weightless is an open specification effort but has struggled to gain traction in the LPWAN space.

It should be noted that there are many other proprietary LPWAN technologies with active deployments and use in this ecosystem.

Why No One Solution Will Own the Technology

We believe no single LPWAN technology will fully own the IoT space, for multiple reasons. Some sensors in this space are intended for real-time applications with consistent and verified uploads, while others simply wake up periodically and transmit small data payloads. Without going into more specific examples, we believe some LPWAN applications are better suited for licensed-spectrum mobile networks, while other LPWAN applications are better supported with unlicensed solutions, such as LoRaWAN™. LoRaWAN services can be further explored through some of our member offerings via MachineQ™ and Cox℠.

Our New Open Source Solution

With these considerations in mind, we developed a new open source solution to enable easily moving data from devices and applications across varying network types and related solutions. The LPWAN Server was designed to enable multiple use cases:

  • First, it can be used to simply migrate or operate between two LoRaWAN™ network servers, such as the LoRa Server and The Things Network.
  • Second, and more importantly, the long-term design intention is to enable the routing of multiple LPWAN technologies, such as LoRaWAN™ and SigFox or LoRaWAN™ and Narrow Band IoT (NB-IoT). In order to integrate IP-based devices, the server will include a “relay server” of sorts. This allows for the IP traffic to mix with LoRaWAN™ traffic for a single upstream interface to an application or data collector, such as Google Cloud and Microsoft Azure.

Our goal with this project is to see developers add more back-end integration with network servers and technologies to enable this routing of traffic across many LPWAN technologies.

LPWAN Server Use Cases

The LPWAN Server was designed to support the following use cases:

1. Multi-vendor LoRaWAN™ environment: Using the LPWAN Server in a multi-vendor LoRaWAN™ environment allows a network provider to:

  • Test multiple servers from multiple vendors in a lab
  • Trial with multiple network servers from multiple vendors
  • Run multiple vendor solutions in production

Multi-vendor LoRaWAN environment

2. NB-IoT & LoRaWAN™ device deployment: The LPWAN Server will allow you to operate a single application for devices deployed on both LoRaWAN™ and NB-IoT networks. The LPWAN Server will enable an IP relay-server for connecting with NB-IoT (and Cat-M1) devices commonly behind a 3GPP mobile network Evolved Packet Core (EPC). It also allows for managing devices on the LoRaWAN™ network. The devices are managed under a single application within the LPWAN Server. This allows an application to receive data over a single northbound Application Program Interface (API) rather than maintaining API connections and data flows to multiple networks.

3. Simplify device provisioning across multiple LPWAN network types and solutions: The LPWAN Server simplifies provisioning to one or more LPWAN networks. A major challenge for a back-office solution is to integrate provisioning into a new network server. This is further complicated with multiple new network servers and types. In order to simplify this, the LPWAN Server manages the APIs to the networks, and the back-office solution only needs to integrate with a single API to the LPWAN Server. The following figure illustrates this.

Multi-vendor LoRaWAN environment

4. Create consistent data order and formats from LPWAN devices: The final use case explains how the LPWAN Server can normalize data from varying devices on one or more networks. Unfortunately, even in a single network environment, such as LoRaWAN™, there is no standard for data formats from multiple “like” sensors. For example, a weather sensor from two different vendors could send the same type of data but reverse the order. An application will need to interpret the data format from multiple sensors. In order to simplify this, the LPWAN Server can be used to reformat the data payload into a common format for sending up to the application server. In this way, the application server will not need to interpret the data.
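The reordering example above can be sketched in a few lines. The two vendors, field order and payload encoding below are hypothetical; the point is simply that the application only ever sees the common format:

```python
import struct

def normalize_weather(vendor: str, payload: bytes) -> dict:
    """Normalize weather payloads from two hypothetical vendors.

    Both send temperature (signed, 0.1 degC units) and humidity (%)
    as big-endian 16-bit values, but in opposite order; the server
    maps both to one common dictionary for the application.
    """
    if vendor == "vendor_a":      # temperature first, then humidity
        temp, hum = struct.unpack(">hH", payload)
    elif vendor == "vendor_b":    # humidity first, then temperature
        hum, temp = struct.unpack(">Hh", payload)
    else:
        raise ValueError(f"unknown vendor: {vendor}")
    return {"temperature_c": temp / 10, "humidity_pct": hum}
```

With this translation step in the LPWAN Server, adding a third vendor means adding one decoder branch there, rather than touching every application.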

CableLabs & the Development Community Together

The LPWAN Server is intended to be a community open source project. The initial release from CableLabs provides support for the multi-vendor LoRaWAN™ use case. The back-end has been designed for future support of all of the use cases, and the UI is flexible enough to support them as well. We are currently using the server for data normalization, too; however, this is via a back-end process.

The open source community is encouraged to take advantage of the initial CableLabs development and further the development into a useful application for even more servers, solutions and use cases. There are many network types and related servers, and we would like the development community to help us integrate these. 

To find out more information about the LPWAN Server and become involved in this project, go to https://lpwanserver.com.

The LPWAN Server was designed to work with the CableLabs sponsored open source LoRa Server. In an upcoming blog, I will discuss how that project continues to evolve and align with the latest specification releases from the LoRa Alliance. The LPWAN Server and LoRa Server provide a comprehensive solution to enable many LPWAN use cases.




CableLabs Announces SNAPS-Kubernetes

Randy Levensalor
Principal Architect, Future Infrastructure Group, Office of the CTO

Jul 23, 2018

Today, I’m pleased to announce the availability of SNAPS-Kubernetes. The latest in CableLabs’ portfolio of open source projects to accelerate the adoption of Network Functions Virtualization (NFV), SNAPS-Kubernetes provides easy-to-install infrastructure software for lab and development projects. SNAPS-Kubernetes was developed with Aricent, and you can read more about this release on their blog.

In my blog post six months ago, I announced the release of SNAPS-OpenStack and SNAPS-Boot, and I highlighted Kubernetes as a future development area. As with the SNAPS-OpenStack release, we’re making this installer available while it's still early in the development cycle. We welcome contributions and feedback from anyone to help make this an easy-to-use installer for a pure open source and freely available environment. We’re also releasing support for the Queens release of OpenStack, the latest OpenStack release.

Member Impact

The use of cloud-native technologies, including Kubernetes, should provide for even lower overhead and an even better-performing network virtualization layer than existing virtual machine (VM)-based solutions. It should also improve total cost of ownership (TCO) and quality of experience for end users. A few operators have started to evaluate Kubernetes, and we hope with SNAPS-Kubernetes that even more members will be able to begin this journey.

Our initial TCO analysis with a virtual Converged Cable Access Platform (CCAP) core, distributed access architecture (DAA) and Remote PHY technology has shown the following improvements:

  • Approximately 89% savings in OpEx costs (power and cooling)
  • 16% decrease in rack space footprint
  • 1015% increase in throughput

We anticipate that Kubernetes will only increase these numbers.

Three Waves of NFV

SNAPS-Kubernetes will help deliver Virtual Network Functions (VNFs) that use fewer resources, are more fault-tolerant and scale quickly to meet demand. This is part of a movement coined “cloud native,” the second of the three waves of NFV maturity that we are observing.

With the adoption of NFV, we have identified three overarching trends:

  1. Lift & Shift
  2. Cloud Native
  3. Autonomous Networks

Lift & Shift

Service providers and vendors typically support the Lift & Shift model today. These are large VMs running on an OpenStack-type Virtualized Infrastructure Manager (VIM). This is a mature technology, and many of the gaps in this area have closed.

In this space, VNF vendors often brag that their VNF solution runs the same version of software that runs on their appliances. Although achieving feature parity with their existing product line is admirable, these solutions don’t take advantage of the flexibility and versatility that can be achieved by fully leveraging virtualization.

There can be a high degree of separation between the VM and the underlying hardware and operating system. This separation is great for portability, but it comes at a cost: without some level of hardware awareness, it isn’t possible to take full advantage of acceleration capabilities, and an extra layer of indirection is introduced, which can add latency.

Cloud Native

Containers and Kubernetes excel in this quickly evolving section of the market. These solutions aren’t yet as mature as OpenStack and other virtualization solutions, but they are lighter weight and integrate software and infrastructure management. This means that Kubernetes will scale and fail over applications, and the software updates are also managed.

Cloud native is well suited for edge and customer-premises solutions where compute resources are limited by space and power.

Autonomous Networks

Autonomous networks are the desired future in which every element of the network is automated. High-resolution data is being evaluated to continually optimize the network for current and projected conditions. The 3–6-year projection for this technology is probably a bit optimistic, but we need to start implementing monitoring and automation tools in preparation for this shift.

Features

This release is based on Kubernetes 1.10. We will update Kubernetes as new releases stabilize and we have time to validate these releases. As with SNAPS-OpenStack, we believe it’s important to adopt the latest stable releases for lab and evaluation work. Doing so will prepare you for future features that help you get the most out of your infrastructure.

This initial release supports Docker containers. Docker is one of the most popular types of containers, and we want to take advantage of the rich ecosystem of build and management tools. If we later find other container technologies that are better suited to specific cable use cases, this support may change in future releases.

Because Kubernetes and containers are so lightweight, you can run SNAPS-Kubernetes on an existing virtual platform. Our Continuous Integration (CI) scripts use SNAPS-OO to completely automate the installation on almost any OpenStack platform. This should work with most OpenStack versions from Liberty to Queens.

SNAPS-Kubernetes supports the following six solutions for cluster-wide networking:

  • Weave
  • Flannel
  • Calico
  • Macvlan
  • Single Root I/O Virtualization (SRIOV)
  • Dynamic Host Configuration Protocol (DHCP)

Weave, Calico and Flannel provide cluster-wide networking and can be used as the default networking solution for the cluster. Macvlan and SRIOV, however, are specific to individual nodes and are installed only on specified nodes.

SNAPS-Kubernetes uses Container Network Interface (CNI) plug-ins to orchestrate these networking solutions.
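The split between a cluster-wide default network and node-specific attachments can be expressed as a small sanity check. The sketch below is illustrative only and not part of SNAPS-Kubernetes:

```python
# Cluster-wide defaults vs. node-specific attachments, per the
# SNAPS-Kubernetes networking description (illustrative sketch).
CLUSTER_WIDE = {"weave", "flannel", "calico"}
NODE_SPECIFIC = {"macvlan", "sriov"}

def validate_networking(default: str, node_plugins: dict) -> bool:
    """Check a SNAPS-Kubernetes-style networking selection.

    The cluster-wide default must be Weave, Flannel or Calico;
    Macvlan and SRIOV may only be attached to specific nodes.
    """
    if default not in CLUSTER_WIDE:
        raise ValueError(f"{default!r} cannot be the cluster-wide default")
    for node, plugins in node_plugins.items():
        bad = [p for p in plugins if p not in NODE_SPECIFIC]
        if bad:
            raise ValueError(f"{node}: {bad} are not node-specific plug-ins")
    return True
```

For example, Calico as the default with SRIOV attached only on the worker nodes that host packet-intensive VNFs would pass this check, while Macvlan as the cluster default would be rejected.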

Next Steps

As we highlighted before, serverless infrastructure and orchestration continue to be future areas of interest and research. In addition to extending the scope of our infrastructure, we are focusing on using and refining the tools.

Multiple CMTS vendors have announced and demonstrated virtual CCAP cores, so this will be an important workload for our members.

Try It Today

Like other SNAPS releases, SNAPS-Kubernetes is available on GitHub under the Apache Version 2 license. SNAPS-Kubernetes follows the same installation process as SNAPS-OpenStack. The servers are prepared with SNAPS-Boot, and then SNAPS-Kubernetes is installed.

Have Questions? We’d Love to Hear from You

Subscribe to our blog to learn more about SNAPS in the future.




Container Workloads: Evolution of SNAPS for Cloud-Native Development

Feb 8, 2018

Application developers drive cloud-platform innovation by continuously pushing the envelope when it comes to defining requirements for the underlying platform. In the emerging application programming interface (API) and algorithm economy, developers are leveraging a variety of tools and already-built services to rapidly create new applications. Edge computing and Internet-of-Things (IoT) use cases require platforms that can be used to offload computing from low-power devices to powerful servers. Application developers deliver their software in iterations where user feedback is critical for product evolution. This requires building platforms that allow developers to develop new features rapidly and deploy them in production. In other words, to adopt DevOps.

In the telecommunications world, network function virtualization (NFV) is driving the evolution of telco clouds. However, the focus is shifting towards containers as a lightweight virtualization option that caters to the application developer’s requirements of agility and flexibility. Containerization and cluster-management technologies such as Docker and Kubernetes are becoming popular alternatives for tenant, network and application isolation at higher performance and lower overhead levels.

A container is an operating-system-level virtualization mechanism that allows the execution of lightweight, independent instances with isolated resources on a single Linux instance. Container implementations like Docker avoid the overhead and maintenance of virtual machines and help enable portability and flexibility of applications across public and private cloud infrastructure.

Microservice architectures are enabling developers to easily adopt the API and algorithm economy. It has become imperative that we start to look at containers as an enabler for carrier-grade platforms to power new cloud-native applications.

Edge computing and IoT require containers

Edge Computing and IoT are introducing new use cases that demand low-latency networks. Robotics, autonomous cars, drones, connected living, industrial automation and eHealth are just some of the areas where either low latency is required, or a large amount of data needs to be ingested and processed. Due to the physical distance between the device and public clouds, the viability of these applications depends on the availability of a cloud platform at the edge of the network. This can help operators and MSOs leverage their low-latency access networks—their beachfront property—to enable such applications and create new revenue streams. The edge platforms require cloud-native software stacks to help “cloud-first” developers travel deep inside the operators’ networks and make the transition frictionless.

On the other hand, the devices also require client software that can communicate with the “edge.” The diversity of devices such as drones, sensors and cars makes it difficult to install and configure software. Containers can make life easier because they require only a Linux operating system and a container runtime to launch, manage, configure and upgrade software on any device.

The role of intelligence and serverless architectures in the carrier-grade platform

Let’s consider the example of a potential new service for real-time object recognition. By integrating artificial intelligence (AI) and machine learning (ML) algorithms, operators can enhance the edge platform so developers can create applications for pedestrian or obstacle detection in autonomous driving, intrusion detection in video surveillance and image and video search. The operator’s platform that hosts such applications needs to be “intelligent” to provide autonomous services. It requires the ability to host ML tools and support event-driven architectures where computing can be offloaded to the edge on-demand. Modern serverless architectures could be a potential solution for such requirements, but containers and cloud-native architectures are a near-perfect fit.

Are containers ready for carrier-grade workloads?

Containers as a technology have existed for over a decade. Linux containers and FreeBSD Jails are two early examples. However, it was not easy to network or manage the lifecycle of containers. Docker made this possible by simplifying container management and operations, which led to the ability to scale and port applications through containers. Today, the Open Container Initiative of the Linux Foundation is defining the standards for container runtime and image formats. APIs provided by container runtimes and additional tools help abstract low-level resource management of the environment for application developers. Container runtimes can download, verify and run containerized application images.

The production applications are typically composed of several containers that can independently scale. To manage such deployments, new software ecosystems have emerged that primarily orchestrate, manage and monitor applications across multiple hosts. Kubernetes and Docker Swarm are examples of such solutions, commonly called container orchestration engines (COE).

Some of the key challenges for carrier-grade deployments of container-based platforms are:

  • Complex networking, with several alternatives for overlay and underlay networks within a cluster of containers
  • Lack of well-defined resource-management procedures for isolating containers with huge pages, CPU pinning, GPU sharing, inter-pod and node affinity, etc.
  • Complex deployment techniques required for multi-homed pods
  • A large, fragmented ecosystem for securing container platforms, because it is not easy to deploy and manage container security solutions

SNAPS and Containers

SNAPS, which is short for SDN/NFV Application Development Platform and Stack, is an open-source platform developed by CableLabs. The platform enables rapid deployment of virtualized platforms for developers. SNAPS accelerates adoption of virtual network functions by bootstrapping and configuring a cloud platform for developers so they can focus on their applications. Aricent is involved in the SNAPS-OpenStack and SNAPS-Boot projects and contributed to the platform development with CableLabs.

An obvious next phase is to enable containerized platforms. A key first step was already achieved in the SNAPS-OpenStack project, where Docker containers are used to execute many OpenStack components. The next step is to create a roadmap for enabling containers for application developers. A cursory look at the cloud-native landscape reveals that this ecosystem is huge. There are several options available for DevOps, tooling, analytics, management, orchestration, security, serverless computing, etc. This can create confusion for developers about what to use and how to configure these components. They would have to “learn” the ecosystem, which would delay their own application development. The future roadmap for SNAPS is to enable developers by bootstrapping a secure and self-service container platform with the following features:

  • Container orchestration and resource management
  • In-built tooling for monitoring and diagnostics
  • A reference microservices architecture for application development
  • Easy management and deployment of container networking
  • Pre-configured and provisioned security components
  • DevOps-enabled for rapid development and continuous deployment

These are exciting times for developers. The availability of platforms and technologies will drive innovation throughout the developer community. The SNAPS community is focused on ensuring that best-in-class developer platforms are created in the spirit of open innovation. The SNAPS platform roadmap, which embraces the cloud-native ecosystem, will provide developers with an easy-to-use platform. We look forward to larger participation from the developer and operator community. As a community, we must solve the key challenges and create a resilient containerized application platform for network applications.

Have questions? We’d love to hear from you.

--

The author, Shamik Mishra, is the Assistant VP of Technology at Aricent. SNAPS, CableLabs’ SDN/NFV Application Development Platform and Stack Project, was developed leveraging the broader industry’s open source projects with the help of the Software Engineering team at Aricent. CableLabs selected Aricent for this specific project because of their world-class expertise in software-defined networks and network virtualization. In a little less than a year, CableLabs and Aricent worked closely to extend CableLabs’ initial code base to the full SNAPS platform. The SNAPS platform has now been released to open source to enable the wider industry to collaboratively build on our work and to use it to test new network approaches based on SDN and NFV.


Kyrio NFV Interop Lab: Powered by SNAPS

Robin Ku
Director, Kyrio NFV Interop Lab

Jan 25, 2018

On Dec. 14, 2017, CableLabs released two new open source projects, SNAPS-Boot and SNAPS-OpenStack. SNAPS, which is short for SDN/NFV Application Development Platform and Stack, is an open source platform with the following objectives:

  • Speed development of cloud applications
  • Facilitate collaboration between solution providers and operators
  • Ensure interoperability
  • Accelerate adoption of virtual network functions and platform components

In this post, we explore some of the synergies between the SNAPS projects and the Kyrio NFV Interop Lab.

Background: Delivering on the NFV Promise

The Kyrio NFV Interop Lab is designed as an open, collaborative system integration environment where multiple solution providers can work together in a neutral setting to develop concept systems and then showcase them to the operator community.

At last year’s Summer Conference, we displayed proof-of-concept systems demonstrating orchestrated deployment of SD-WAN with firewalling and LTE to WiFi call hand-off over a D3.1 R-PHY access network connected to a virtual CCAP Core and a virtual mobile core.

These technologies are fundamental enablers for converged networks composed of virtualized network functions running on virtual network cores. The SD-WAN, firewall and mobile calling use cases represent a significant opportunity for operators to offer efficient, flexible and agile services to their customers.

The systems were envisioned and designed by Kyrio NFV Lab sponsoring partners, integrated at CableLabs, and demonstrated at the CableLabs Summer Conference. They remain on display in the Kyrio NFV Lab in order to provide operators with ongoing access to the systems and to enable solution providers to continue development of new functions and features. Further system details are available in this webinar.

Running on Open Source: The Way of the Future

Kyrio NFV Lab systems are designed by lab sponsors using a variety of hardware and software components. However, open source software and generic commercial-off-the-shelf (COTS) hardware are the preferred environment for operators. To that end, SNAPS has been developed to provide a cloud environment that is freely available to operators and developers, based on and synchronized to OPNFV OpenStack, one of the world’s largest open source projects delivering cloud software.

Project code is publicly available and located here:

SNAPS-Boot: Automates the imaging and configuration of servers that constitute a cloud.

SNAPS-OpenStack: Automates the deployment of the OpenStack VIM on those servers.

Together they provide a powerful method for creating a standard development and testing environment.

For details on project objectives, timelines and participation, contact Randy Levensalor, the SNAPS project lead.

The Mobile Call Hand-off system mentioned above was built on a beta version of SNAPS, based on the Newton release of OpenStack. New systems in the Kyrio NFV Lab are running on the public release of SNAPS, based on the Pike release of OpenStack. OpenStack synchronization is a key benefit for operators, solution developers and interop testing.

Kyrio NFV Lab: Taking the Next Steps

New system development planned for the next two quarters includes orchestration of multi-vendor software firewalls and orchestration of a virtual CCAP Core.

The latest generation of Intel COTS servers has arrived, featuring dual Xeon 6152 CPUs (44 cores per host), 364 GB of RAM, four 1 TB SSDs, two 250 GB SSDs and multiple 40/10 Gb/s NICs.

Evaluation is underway to determine data throughput under various BIOS settings, using select versions of Linux. Work is also underway to measure power consumption baselines under various load conditions.

A stable, well-characterized hardware/software platform is the foundation of the Kyrio NFV Lab’s work toward evaluation of SDN/NFV component interoperability, and Virtual Network Function on-boarding. The main questions operators will ask when considering trial or deployment of a virtual application will be:

  • “Does it work as designed?”
  • “Does it interoperate with other elements in my environment?”
  • “How easy is it to deploy?”

These are the questions that the Kyrio NFV Lab, working over the SNAPS platform, will consider on behalf of the operator community. The faster we can answer “Yes”, “Yes” and “Very”, the faster the ecosystem will advance, the faster operators will adopt, and the faster customers will have access to newer and more reliable services. Stay tuned for progress updates from the Kyrio NFV Interop Lab - powered by SNAPS.

For further information on Kyrio NFV Lab programs and participation opportunities:

Email: Robin Ku, Director Kyrio NFV Lab

For further information on SNAPS and open source software development:

See Broadband Technology Report’s article and Randy Levensalor’s blog post “CableLabs Announces SNAPS-Boot and SNAPS-OpenStack Installers.”

Email: Randy Levensalor, Lead Architect Application Technologies

For CableLabs members:

Attend the Inspir[ED] NFV workshop February 13-15, in Louisville CO, for business and technical track sessions and access to all demo systems.

 

Comments
Virtualization

CableLabs Announces SNAPS-Boot and SNAPS-OpenStack Installers

Randy Levensalor
Principal Architect, Future Infrastructure Group, Office of the CTO

Dec 14, 2017

Having lived and breathed open source since experimenting with it in high school, I can tell you there is nothing as sweet as sharing your latest project with the world! Today, CableLabs is thrilled to announce the extension of our SNAPS-OO initiative with two new projects: SNAPS-Boot and SNAPS-OpenStack installers. SNAPS-Boot and SNAPS-OpenStack are based on requirements generated by CableLabs to meet our member needs and drive interoperability. The software was developed by CableLabs and Aricent.

SNAPS-Boot

SNAPS-Boot will prepare your servers for OpenStack. With a single command, you can install Linux on your servers and prepare them for your OpenStack installation using IPMI, PXE and other standard technologies to automate the installation.
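As a rough illustration of the kind of automation involved (this is not the actual SNAPS-Boot code; the hostnames, credentials and exact ipmitool flags are assumptions for the sketch), planning the IPMI commands to PXE-boot a rack of servers might look like this:

```python
# Illustrative sketch only: building the ipmitool invocations an installer
# might issue to force each server to network-boot into a Linux installer.
# Host addresses, credentials, and flag details are assumptions.

def ipmi_pxe_commands(host, user="admin", password="secret"):
    """Return the ipmitool command lines to PXE-boot one server via its BMC."""
    base = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password]
    return [
        base + ["chassis", "bootdev", "pxe"],   # set next boot to the PXE NIC
        base + ["chassis", "power", "cycle"],   # reboot into the PXE installer
    ]

def plan_boot(inventory):
    """Build the full command plan for a list of BMC addresses."""
    return {host: ipmi_pxe_commands(host) for host in inventory}
```

In practice a tool like this would execute each command and then hand the freshly imaged servers off to the OpenStack installation step.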

SNAPS-OpenStack

The SNAPS-OpenStack installer will bring up OpenStack on your running servers. We are using a containerized version of the OpenStack software. SNAPS-OpenStack is based on the OpenStack Pike release, as this is the most recent stable release of OpenStack. You can find an updated version of the platform that we used for the virtual CCAP core and mobile convergence demo here.
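The real configuration format is more involved, but as a purely illustrative sketch (the field names are assumptions, not the actual SNAPS-OpenStack schema), a pre-flight sanity check over a deployment config might look like:

```python
# Hypothetical pre-flight check an installer could run before bringing up
# containerized OpenStack. The service names are real OpenStack projects;
# the config layout is an assumption for illustration.

CORE_SERVICES = {"keystone", "glance", "nova", "neutron"}  # minimum viable cloud

def validate_config(config):
    """Return a list of problems found in a deployment config dict."""
    problems = []
    missing = CORE_SERVICES - set(config.get("services", []))
    if missing:
        problems.append("missing core services: " + ", ".join(sorted(missing)))
    if config.get("release") != "pike":
        problems.append("this sketch targets the Pike release")
    if not config.get("controllers"):
        problems.append("at least one controller node is required")
    return problems
```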

How you can participate:

We encourage you to go to GitHub and try it for yourself:

Why SNAPS?

SNAPS (SDN & NFV Application Platform and Stack) is the overarching program to provide the foundation for virtualization projects and deployment leveraging SDN and NFV. CableLabs spearheaded the SNAPS project to fill in gaps in the open source community to ease the adoption of SDN/NFV with our cable members by:

Encouraging interoperability for both traditional and emerging software-based network services: As cable networks evolve and add more capabilities, SNAPS seeks to organize and unify the industry around distributed architectures and virtualization, providing a stable open source platform with baseline OpenStack and NFV installations and configurations.

Network virtualization requires an open platform. Rather than basing our platform on a vendor-specific version, or being over 6 months behind the latest OpenStack release, we added a lightweight wrapper on top of upstream OpenStack to instantiate virtual network functions (VNFs) in a real-time dynamic way.

Seeding a group of knowledgeable developers that will help build a rich and strong open source community, driving developers to cable: SNAPS is aimed at developers who want to experiment with building apps that require low latency (gaming, virtual reality and augmented reality) at the edge. Developers are able to share information in the open source community on how they optimize their application. This not only helps other app developers, but helps the cable industry understand how to implement SDN/NFV in their networks and gain easy access to these new apps.

At CableLabs, we pursue a “release early” principle to enable contributions to improve and guide the development of new features and encourage others to participate in our projects. This enables us to continuously optimize the software, extend features and improve the ease of use. Our subsidiary, Kyrio, is also handling the integration and testing on the platform at their NFV Interoperability lab.

You can find more information about SNAPS in my previous blog posts “SNAPS-OO is an Open Sourced Collaborative Development” and “NFV for Cable Matures with SNAPS.”

Who benefits from SNAPS?

  • App Developers will have access to a virtual sandbox that allows them to test how their app will run in a cable scenario, saving them time and money.
  • Service providers, vendors and enterprises will be able to build more exciting applications, on a pure open source NFV platform focused on stability and performance, on top of the cable architecture.

How we developed SNAPS:

We leverage containers that have been built and tested by the OpenStack Kolla project. If you are not familiar with Kolla, it is an OpenStack project that maintains a set of Docker containers for many of the OpenStack components. We use the Kolla-Ansible scripts to deploy the containers because they are the most mature and include a broad set of features that can be used in a low-latency edge data center. Using containers simplifies both installation and future upgrades.
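Order matters when bringing these services up: Keystone must exist before the services that authenticate against it. That dependency handling can be sketched as a generic topological sort; the dependency table below is illustrative, not Kolla-Ansible's actual deployment logic:

```python
# Illustrative only: compute a start order in which every OpenStack service
# comes up after the services it depends on. Dependencies are assumed.

DEPS = {                      # service -> services it must wait for
    "keystone": [],
    "glance": ["keystone"],
    "neutron": ["keystone"],
    "nova": ["keystone", "glance", "neutron"],
}

def start_order(deps):
    """Depth-first topological sort of the dependency graph."""
    order, seen = [], set()

    def visit(svc):
        if svc in seen:
            return
        seen.add(svc)
        for dep in deps[svc]:
            visit(dep)          # ensure prerequisites are started first
        order.append(svc)

    for svc in sorted(deps):
        visit(svc)
    return order
```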

To maximize the usefulness of the SNAPS platform, we included many of the most popular OpenStack projects:

Additional services we included:

Where the future of SNAPS is headed:

  • We plan to continue to make the platform more robust and stable. 
  • Because of the capabilities we have developed in SNAPS, we have started discussions with the OPNFV Cross Community Continuous Integration (XCI) project to use SNAPS-OpenStack as a stable platform for testing test tools and VNFs, with the goal of piloting the project in early 2018.
  • Aricent is a strong participant in the open source community and has co-created the SNAPS-Boot and SNAPS-OpenStack installer project. Aricent will be one of the first companies to join our open source community contributing code and thought leadership, as well as helping others to create powerful applications that will be valuable to cable.
  • As an open source project, we encourage other cable vendors and our members to join the project, contribute code and utilize the open source work products.

There are three general areas where we want to enhance the SNAPS project:

  • Integration with NFV orchestrators: We are including the OpenStack NFV orchestrator (Tacker) with this release and we want to extend this to work with other orchestrators in the future.
  • Containers and Kubernetes support: We already have some support for Kubernetes running in VMs. We would like to evaluate running Kubernetes with and without the overhead of VMs.
  • Serverless computing: We believe that serverless computing will be a powerful new paradigm that will be important to the cable industry, and we will be exploring how best to use SNAPS as a serverless computing platform.

Interactive SNAPS portfolio overview:


Have Questions? We’d love to hear from you

Don’t forget to subscribe to our blog to read more about NFV and SNAPS in our upcoming in-depth SNAPS series. Members can join our NFV Workshop February 13-15, 2018. You can find more information about the workshop and the schedule here.

Comments
Virtualization

5 Things I Learned at OpenStack Summit Boston 2017

Randy Levensalor
Principal Architect, Future Infrastructure Group, Office of the CTO

May 23, 2017

Recently, I attended OpenStack Summit in Boston with more than 5,000 other IT business leaders, cloud operators and developers from around the world. OpenStack is the leading open source cloud software, run by enterprises and public cloud providers, and is increasingly being used by service providers for their NFV infrastructure. Many of the attendees are operators and vendors who collaboratively develop the platform to meet an ever-expanding set of use cases.

With over 750 sessions, it was impossible to see them all. Here are my top five takeaways and highlights of the event:

1. Edward Snowden's Opinions on Security and Open Source

In the biggest surprise of the event, Edward Snowden, former US NSA employee and self-declared liberator, joined us over a live video feed from an undisclosed location. He talked about the ethics and importance of the open source movement and how open source can be used to improve security and privacy.

Unlike vulnerabilities in proprietary software, those in open source are transparent. As a result, the entire community can learn from these exploits and how to prevent them in the future. Though not mentioned by Snowden, his remarks brought to mind the work done to secure OpenSSL after the Heartbleed vulnerability was made public, which changed the way that core projects are managed. Snowden mentioned Apple’s iPhone as an example where vulnerabilities were found and the solution was not transparent:

“When Apple or Google has a bug, not only can we have no influence over the cure, but we don’t know anything about the cause and we don’t know what they have learned in effecting a cure. So, it’s not possible for everyone to use that knowledge to help build a better world for everyone.”

His talk brought applause from the audience and was a call to action as much as it was informative.

2. OpenStack is Helping Make the World Safer

The U.S. Army is using OpenStack to rapidly deliver the required curriculum for cyber command training, saving millions of dollars in the process. Borrowing from software development practices, they created an agile process in which instructors can improve the course rapidly, and they presented an example deployment of virtual machines loaded with malware and threat-detection software. Instructors can create new content by submitting code to a source code repository and have it approved in less than a day. The new content is also available to graduates of the course in support of ongoing training. As a taxpayer, I can only hope that the other branches of the military will follow the Army’s lead in adopting the same innovative philosophy and process. These processes can be leveraged by service providers to deliver new services, apply security patches and remedy service disruptions.

You can watch the keynote here and the in-depth talk below:

3. Lightweight OpenStack Control Planes for Edge Computing

OpenStack was designed to run large clouds managing thousands of servers in traditional data centers, but lightweight control planes now make it practical on a single local server. This allows service and OTT providers to manage CPEs using the same toolchain they use to manage VMs in their hosted cloud solutions.

Verizon’s keynote highlighting their uCPE is available here.

4. Aligning Container and Virtual Machine Technologies

My favorite forum session was a discussion on aligning VMs and containers. Containers address the application configuration and management challenges that are not as easily addressed with virtual machines, while OpenStack can be used to manage the dependencies that containers need to run. (In addition to the general summit proceedings, OpenStack has a forum format; you can learn more about the format here.)

Leaders from both the OpenStack Nova team and the Linux Foundation’s Kubernetes project were on the panel. Kubernetes performs many complementary, and some overlapping, tasks relative to OpenStack. Because Kubernetes was developed several years after Nova, it improved on some of the similar features.

CableLabs hosted an OpenStack Users Group meeting recently on the same subject called "OpenStack & Containers: Better Together".

5. Data Plane Acceleration 

With the growth of OpenStack in the service provider space, the focus on moving packets from point A to point B is as critical as ever. Open vSwitch continues to be a popular choice, and with the addition of DPDK support, it is reducing the latency involved with processing packets in a virtualized network. Tapio Tallgren, the chair of OPNFV’s Technical Steering Committee, provides some results of testing DPDK with OPNFV. As many of you may know, the CableLabs SNAPS project leverages OPNFV as a foundation. The Yardstick performance testing project, which Tapio discusses in his blog post “Snaps-OO Open Sourced Collaborative Development Resource,” is in the process of migrating many of its scenarios to leverage our SNAPS-OO utility.

FD.io is the newest player for accelerating the data plane. Their testing results in the lab are remarkable, and we are beginning to see some adoption for use in production. There was even a 1-day training session dedicated solely to FD.io.

 

With demos, product launches, and informative talks, OpenStack Summit Boston 2017 was a huge success. I hope to see you at the next one! If you have any questions about OpenStack don’t hesitate to leave a comment below.

Comments
DOCSIS

New Open Source Initiative at CableLabs

Karthik Sundaresan
Distinguished Technologist

Feb 10, 2016

Open source software continues to make solid inroads in the world of network technology. Various open source industry efforts are becoming de-facto standards adopted by operators and equipment manufacturers (Linux, Apache, OpenStack, Docker, etc.). Open source leads to the free and rapid proliferation of good ideas, and collaboration tackles tough problems that may not be solved individually. The open source approach facilitates rapid prototyping of new technologies and allows improvement on the most important features. It also allows communities to form around a common cause.

CableLabs is increasing its focus on game-changing innovations and accelerating the delivery of unique competitive advantages to the global cable industry with its CableLabs 2.0 initiative. As a part of this focus, CableLabs would like to announce a new major open/community source project for the cable industry. CableLabs and Cisco are initiating a new collaborative project called ‘OpenRPD’ to develop software which can be used by the industry to build a Remote PHY Device. Cisco will be contributing their Remote PHY interface software to this effort, which forms a baseline on which this project will build. (See Cisco's press release.)

Over the past year, CableLabs, along with our member and vendor community, has worked on the different Distributed CCAP Architectures including Remote PHY and Remote MAC-PHY Architectures. The Remote PHY technology allows for an integrated CCAP to be separated into two components: the CCAP Core and the Remote PHY Device (RPD). The RPD allows all the PHY layer components of a CCAP to be moved out as a separate device into the fiber node in the field.
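To make the split concrete, here is a minimal, purely illustrative Python sketch of the functional division (this is not OpenRPD code, and all class and field names are assumptions): the CCAP Core retains the MAC and scheduling layers, while each RPD in a fiber node performs only PHY-layer duties.

```python
# Conceptual model of the Remote PHY split. Real RPDs modulate frames onto
# the coax plant; here that step is represented by recording the frame.

class RemotePhyDevice:
    """PHY-layer half of the split, living in the fiber node."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.transmitted = []

    def modulate_and_send(self, frame):
        # Stand-in for OFDM/QAM modulation onto the HFC plant.
        self.transmitted.append(frame)

class CcapCore:
    """MAC/scheduling half of the split, remaining in the headend."""
    def __init__(self):
        self.rpds = {}

    def register(self, rpd):
        self.rpds[rpd.node_id] = rpd

    def send_downstream(self, node_id, payload):
        # The core schedules and frames traffic, then hands it to the RPD.
        frame = {"node": node_id, "payload": payload}
        self.rpds[node_id].modulate_and_send(frame)
        return frame
```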

Remote PHY is a big transition in the traditional access network architecture. To meet the deployment needs of the operators, it is imperative we move the ecosystem quickly. The OpenRPD software effort allows us to do this by enabling faster development of RPD products. We believe that open source software is becoming the new way to create specifications, the mantra now is to ‘write code’ not documents.
A collaborative effort allows for greater reliability in software products, while offering a greater level of security, both of which are important to an RPD platform which will be out in the field. Developing a common code base for some of the basic RPD functions creates a software platform which will minimize interoperability issues between the CCAP-Core and the RPD. It enables companies to focus on their added value and accelerates time to market for a product. This creates a scenario in which everybody wins and the operator gets to deployment of technology faster.

We welcome all interested developers within the CableLabs community to participate in this project. If you would like to participate in the CableLabs OpenRPD Software initiative, please contact me.

May the source be with you.

Karthik Sundaresan is a Principal Architect at CableLabs responsible for the development and architecture of cable access network technologies. He is primarily involved in the DOCSIS family of technologies and their continued evolution.

Comments
Data

Capitalizing on Data: An Engine to Business Services Growth

Carmela Stuart
Director of Business Technologies

Jul 22, 2015

Aligning a service provider’s business with its drive for new services and optimized operations relies heavily on one foundational element — data. Working closely with our cable operator members, the CableLabs team recognizes that access to data provides key insights that drive business and technology innovations. Supporting the growth opportunities our members have in Business Services, CableLabs has three projects focused on helping enable our members to capitalize and realize the revenue growth.

Present Growth Opportunity: The Enterprise Market

An expanding number of business services opportunities are driving an increased focus on data. For example, enabling B2B Ethernet services for large enterprise businesses (500+ employees) that span multiple geographic footprints requires a common data structure to seamlessly integrate end-to-end solutions. By incorporating existing global standards and providing necessary extensions based on our members’ insight, CableLabs is designing and developing open application programming interfaces (APIs) to create a common data structure that automates the exchange of data. This accelerates the sales proposal, service delivery and ongoing service assurance processes, and it enables seamless cross-provider service delivery and support for the shared accounts that enterprise business locations present.
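As a purely hypothetical sketch of such a common data structure (the field names are assumptions, not the CableLabs API definitions), a multi-site Ethernet service order that two providers could exchange might look like:

```python
# Illustrative common data model for a multi-site B2B Ethernet order.
# Field names are invented for this sketch.
from dataclasses import dataclass, asdict

@dataclass
class ServiceSite:
    provider: str          # operator serving this location
    address: str
    bandwidth_mbps: int

@dataclass
class EthernetServiceOrder:
    order_id: str
    enterprise: str
    sites: list            # list of ServiceSite, possibly spanning providers

    def to_record(self):
        """Serialize to a plain dict, ready for JSON exchange between providers."""
        return asdict(self)
```

Because every party serializes to the same structure, a site served by one operator and a site served by another can travel in a single order record.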

Sales Automation & Interoperability Approach


Future Growth Opportunity: Virtualized Service Delivery & Management

While Ethernet services provide the backbone, small, mid-size and enterprise businesses are increasing demand for dynamic applications and managed services. Firewall, DDoS mitigation and multi-site VPNs are a few examples of managed security services that provide an important opportunity to increase revenue per unit served. The emergence of virtualized service delivery and management will provide critical infrastructure support to seamlessly exchange the data needed for the rapid development and deployment of these solutions. The shift toward virtualized delivery of products and services is defining the future of network enablement through software-centric solutions with a high reliance on configuration and performance data. Utilizing common data definitions and access methods for network data, reused across applications, will reduce the risk of supplier lock-in, simplify dynamic network management, and enable easy sharing of network data with peripheral applications (e.g., security, load balancing and analytics engines).

Industry collaboration that leads to a well-defined data framework will enable the rapid development of new services that can be easily integrated between partner companies. This flexible framework and streamlined integration will shorten new service time-to-market, elevate the cable operator’s competitive positioning, and continue to attract a robust supplier community.

Open API Access & Community Engagement

These trends are driving architecture evolution and a shift away from the historical practice of housing data in a vertical format, which limits sharing across enterprise applications. CableLabs has developed an approach with a reusable library of data artifacts, which includes common APIs, data models, and entity definitions as examples (which registered users can access at diadeveloper.cablelabs.com) that aims to replace the siloed approaches of the past.


The approach strives to support CableLabs members as they begin to unlock the value of their big data reserves, adapt quickly to the shifts in technology, and move towards a software-centric and cloud networking operational world.

For more information on CableLabs projects related to Business Services, please contact Carmela Stuart, c.stuart@cablelabs.com

Carmela Stuart is the Director of Business Technologies at CableLabs.

Comments
News

SCOTUS Sidesteps an Interface with APIs

Jud Cary
Deputy General Counsel

Jul 9, 2015

On the last day of its term, the Supreme Court refused to hear an appeal from the Court of Appeals for the Federal Circuit, thus letting stand a controversial decision by the appellate court on the copyrightability of application programming interfaces, or APIs.

The case, Oracle v. Google, dates back to 2010, when Oracle (which had acquired Sun Microsystems) sued Google over the use of certain Java APIs belonging to Sun that were used in Google’s Android operating system. Both Oracle and Google agreed that Google did NOT copy Oracle’s “implementing code,” but did copy verbatim Oracle’s “declaring code.” That is, Google copied the “method headers” from 37 Java packages with over 600 classes and over 6,000 methods. Google then implemented each method with its own code.
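A language-neutral way to see the distinction (sketched here in Python rather than Java): the two functions below share an identical declaration and documented contract, which is what callers depend on for interoperability, while each body is written independently. This mirrors copying the “declaring code” but not the “implementing code.”

```python
# Two "platforms" exposing the same declared contract with independent bodies.

class PlatformA:
    @staticmethod
    def max_of(a, b):
        """Return the larger of a and b."""     # the shared declaration
        return a if a >= b else b               # PlatformA's implementation

class PlatformB:
    @staticmethod
    def max_of(a, b):
        """Return the larger of a and b."""     # identical header, copied
        return sorted([a, b])[-1]               # independent implementation
```

Any caller written against one platform's `max_of` works unchanged against the other, which is exactly the interoperability argument Google made for keeping the headers identical.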

The original trial court that reviewed the lawsuit held that APIs were not subject to copyright protection. The court reasoned “there is only one way to write” the header, and thus the “merger doctrine bars anyone from claiming exclusive copyright ownership of that expression.”

OK, so what is the “merger doctrine!?”

Basically, if there is only one way to express something, say “2+2=4”, then that expression “merges” with the idea itself. And, because ideas cannot be protected by copyright, the expression cannot be copyrighted. (Note, an idea might be protectable under patent law, but not copyright law). Google also argued that, of course the method headers must be the same in Android as they are in Java in order to maintain interoperability. The trial court agreed.

But wait, there’s more! Copyright in computer code covers the literal code itself, but can also cover the non-literal “sequence, structure and organization,” or SSO, so long as there is some modicum of creativity in the SSO. Here, Google argued that the SSO of the 37 Java packages, 600 classes, and 6000 methods was simply a “command structure” and excluded from copyright protection. Again, the trial court agreed.

Copyright Law Terms
Merger Doctrine
Also called the “idea-expression dichotomy.” The Supreme Court stated "[u]nlike a patent, a copyright gives no exclusive right to the art disclosed; protection is given only to the expression of the idea—not the idea itself." Mazer v. Stein, 347 U.S. 201, 217.
"[C]opyright's idea/expression dichotomy 'strike[s] a definitional balance between the First Amendment and the Copyright Act by permitting free communication of facts while still protecting an author's expression.” Harper & Row Publishers, Inc. v. Nation Enters., 471 U.S. 539, 556 (1985)
Sequence Structure and Organization (SSO)
SSO is an alternative way of comparing one software code base to another in order to determine if copying has occurred, even when the second work is not a literal copy of the first. Whelan v. Jaslow (1986). SSO attempts to avoid the extremes of over-protection and under-protection of software code, both of which are considered to discourage innovation.
Fair Use
Fair use was created by the courts, but is now enshrined in the Copyright Act 17 U.S.C. § 107. The Act directs courts as follows: “In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include:

  1. "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;"
  2. "the nature of the copyrighted work;"
  3. "the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and"
  4. "the effect of the use upon the potential market for or value of the copyrighted work.”

Sounds reasonable? Well, the trial court got schooled by the appellate court.

The trial court made several key mistakes in applying the merger doctrine. The trial court focused its merger analysis on the options available to Google at the time of copying, rather than on Oracle’s options at the time of creating. Looked at from the time of creation, Oracle had almost unlimited ways to determine, create, name, and express the 6000 method headers. So, as long as there are several alternative expression choices at the time of creation, the merger doctrine does not apply.

The appellate court further found the sequence structure and organization of the Java packages, classes, and methods sufficiently creative to be copyrightable. And, the court noted, Google did not need to copy verbatim the SSO to make a functionally equivalent platform, albeit not interoperable with Java. See, for example, competitive mobile platforms of Apple iOS or Microsoft Windows Phone.

The case heads back to the trial court to determine if Google’s use of the APIs nevertheless falls under the “fair use” defense doctrine. The factors the trial court will use to determine fair use are: (1) “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;” (2) “the nature of the copyrighted work;” (3) “the amount and substantiality of the portion used in relation to the copyrighted work as a whole;” and (4) “the effect of the use upon the potential market for or value of the copyrighted work.”

Typically, courts consider the “commercial nature” of the use as almost dispositive. That is, if any direct or indirect commercial gain is obtained by the use, the fair use defense does not apply. So, IMHO, Google will have a hard time succeeding on a fair use defense.

So, what does this mean?

The high tech sector uses, and for that matter CableLabs develops, APIs, including Java APIs, in many projects, platforms, and systems. APIs are intrinsically necessary whenever you want two software systems, platforms, or layers to communicate with each other in an interoperable manner.

The court’s ruling holds that such APIs are likely copyrightable by the creator/owner of the APIs. This means the copyright owner can enforce a copyright license on users of the APIs, or choose not to license the APIs at all. We note that many projects employ an “open source” license with very few restrictions on use, and no royalty or fee. Other projects, like Oracle’s Commercial Use License for Java, may impose fees and require strict adherence to the APIs and a certification program in order to maintain interoperability — maybe a good thing.

Similar to APIs, “data models” are widely used by the high tech sector. For example, data models are widely used in the burgeoning Internet of Things to generically represent anything from a light bulb to a refrigerator. It is unclear if data models are so similar to APIs that they too are copyrightable.
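As a sketch of what such a generic data model might look like (the names here are hypothetical, not any standardized IoT schema), one schema can represent very different devices by varying its declared properties:

```python
# Hypothetical generic IoT data model: the same structure represents a
# light bulb or a refrigerator, differing only in its property set.

class DeviceModel:
    def __init__(self, device_type, properties):
        self.device_type = device_type
        self.properties = dict(properties)   # property name -> current value

    def set(self, name, value):
        """Update a declared property; reject ones the model doesn't define."""
        if name not in self.properties:
            raise KeyError(f"{self.device_type} has no property {name!r}")
        self.properties[name] = value

bulb = DeviceModel("light_bulb", {"on": False, "brightness": 0})
fridge = DeviceModel("refrigerator", {"door_open": False, "temp_c": 4})
```

The declared property set plays the same interface-defining role that method headers play in an API, which is why the copyright question carries over so naturally.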

CableLabs plays an important role in establishing ownership and managing the associated copyrights of specific APIs and data models. Through our well-tested project creation, project management, and legal agreements, CableLabs ensures that copyright ownership vests with CableLabs (or, at a minimum, a joint ownership interest with the creator), and CableLabs can enforce such copyrights if needed. This disciplined role is especially key in the world of open source, which can often become fragmented or “tainted” with multiple ownership rights that are difficult to later enforce. CableLabs will continue to foster collaborative work with our members and suppliers, while also ensuring copyright ownership is made clear.

Jud Cary is the Deputy General Counsel at CableLabs.

Comments