Cable Information Architecture
Passive Optical Networking – for the Next Generation
Service providers invest billions of dollars in their access networks. Ideally, the deployed technology meets consumer demand for many years, allowing service providers to avoid costly upgrades before fully recovering their investments. In addition to technology longevity, service providers also look for a clear evolution path, a "next generation," to borrow an overused technology term, so that future consumer demands can be met while staying within the same technology family. Nowhere is the next generation moniker more prevalent than in the development of passive optical networking (PON) standards.
Both the ITU-T and IEEE are creating next generation PON standards. The ITU-T has approved two documents from its G.989 series defining Next Generation PON 2 (NG-PON2), and a third document is currently undergoing final comments. The NG-PON2 architecture settled on a time and wavelength division multiplexing (TWDM) method, which stacks four wavelengths in a coordinated manner onto a single fiber, with each wavelength delivering 10 Gb/s. Total NG-PON2 TWDM-PON capacity is therefore 40 Gb/s. Since an ONU receives only one wavelength, the bit rate received by a single ONU is capped at 10 Gb/s. In such a solution, the dynamic bandwidth allocation (DBA) would likely be independent for each wavelength. Multiple vendors have reported demonstrating the TWDM-PON solution. A four-wavelength TWDM-PON is illustrated in the figure below.
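The capacity arithmetic above can be summarized in a short sketch. This is an illustrative Python model only, not anything from the G.989 documents; the wavelength count and per-wavelength rate are the figures quoted in the text.

```python
# Illustrative NG-PON2 TWDM-PON capacity model (figures from the text above).
WAVELENGTHS = 4
RATE_PER_WAVELENGTH_GBPS = 10

def total_capacity_gbps(wavelengths=WAVELENGTHS,
                        rate=RATE_PER_WAVELENGTH_GBPS):
    """Aggregate PON capacity: all wavelengths are stacked on one fiber."""
    return wavelengths * rate

def onu_peak_rate_gbps(rate=RATE_PER_WAVELENGTH_GBPS):
    """Each ONU tunes to a single wavelength, so its peak rate is capped."""
    return rate

print(total_capacity_gbps())  # 40 Gb/s aggregate
print(onu_peak_rate_gbps())   # 10 Gb/s cap per ONU
```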
Is a 100 Gb/s Solution on the Horizon?
The IEEE 802.3 Working Group has recently formed a Study Group to develop objectives for the next generation of Ethernet Passive Optical Networking (NG-EPON). Several key technology decisions await the NG-EPON Study Group: (1) number of wavelengths, (2) bit rate per wavelength, (3) transceiver tunability, and (4) channel bonding. Faced with the same consumer demands and industry competition as other access network technologies, the NG-EPON Study Group could relatively easily define a four-wavelength, 10 Gb/s-per-wavelength PON to put it on par with NG-PON2, shown above. However, NG-EPON participants are investigating alternative solutions that would take PON technology multiple steps further and presumably allow consumer demand to be met for many years into the future. For example, using advanced modulation techniques to provide 25 Gb/s per wavelength, combined with four multiplexed wavelengths on a single fiber, could yield the first 100 Gb/s PON solution. By incorporating channel bonding, a concept popularized by CableLabs and the cable industry in the DOCSIS® 3.0 specifications, an NG-EPON ONU would be capable of receiving one or more wavelengths, potentially receiving 50 Gb/s or more. In a channel-bonded solution the DBA must closely coordinate upstream transmissions across one or more wavelengths simultaneously. Whether the optical transceivers will be tunable is another of the many important technology decisions yet to be made. A channel-bonded, time-wavelength division multiplexed PON is shown in the diagram below.
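As a rough illustration of the channel-bonding arithmetic, the sketch below models an ONU bonded across one or more 25 Gb/s wavelengths, plus a toy DBA that spreads a single upstream grant across the bonded channels. The grant-splitting logic is a hypothetical simplification for illustration, not anything defined by the 802.3 Study Group.

```python
# Hypothetical channel-bonding sketch for NG-EPON. The 25 Gb/s-per-wavelength
# and four-wavelength figures come from the text; the DBA logic is illustrative.
RATE_GBPS = 25
WAVELENGTHS = 4

def bonded_rate_gbps(bonded_channels):
    """Peak rate of an ONU receiving `bonded_channels` wavelengths at once."""
    return min(bonded_channels, WAVELENGTHS) * RATE_GBPS

def split_grant(grant_bytes, bonded_channels):
    """Toy bonded DBA: spread one upstream grant evenly across the channels."""
    base, extra = divmod(grant_bytes, bonded_channels)
    return [base + (1 if i < extra else 0) for i in range(bonded_channels)]

print(bonded_rate_gbps(2))     # 50 -> the "50 Gb/s or more" case
print(bonded_rate_gbps(4))     # 100 -> a full 100 Gb/s PON
print(split_grant(10_000, 4))  # [2500, 2500, 2500, 2500]
```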
With a keen eye on vendor implementation schedules and interoperability, the increased capacity of NG-EPON ONUs and OLTs could be staged according to a timeline that aligns with consumer demand and service provider requirements, alleviating the need to develop yet another next generation standard (next-next generation?).
These technology decisions will begin to take shape within the NG-EPON Study Group (and subsequent Task Force) starting at the September 2015 802.3 Interim meeting. Anyone with the desire to contribute is welcome to attend. Perhaps the ITU-T will also investigate these same approaches for NG-PON2, ultimately resulting in another step toward a converged optical access solution (See the OnePON blog regarding a converged optical access initiative).
In his role as Vice President Wired Technologies at CableLabs, Curtis Knittle leads the activities which focus on cable operator integration of optical technologies in access networks. Curtis is also Chair of the 100G-EPON (IEEE 802.3ca) Task Force.
NFV and SDN: Paving the Way to a Software-Based Networking Future
When ONF Executive Director Dan Pitt invited me to contribute a blog post, it brought to mind our interaction in the summer of 2012 on how to treat SDN in the seminal NFV White Paper I was then editing. The operator co-authors were keen to ensure that SDN and NFV were positioned in the paper as being complementary. This was important because we wanted to create momentum for NFV by highlighting use cases that did not require the then perceived complexity of SDN. As soon as the ETSI NFV Industry Specification Group (NFV ISG) was launched, we engaged with ONF, recognizing its key role in championing an open SDN ecosystem. And in 2014 the NFV ISG entered into an MoU with ONF to facilitate joint-work.
The vision for NFV was compelling because the benefits could be readily attained. By replacing network appliances based on proprietary hardware with virtualized network functions (VNFs) running on industry standard servers, operators could greatly accelerate time to market for new services and streamline operations through automation. Moreover, important NFV use cases (e.g. virtualized CPE) would not require massive systems upgrades — a huge barrier for innovation in telecoms. We are seeing this first-hand at CableLabs, where we have been able to prototype virtualized CPE for business services and home networks on a two-month development cycle.
In contrast, the simplified definition of SDN, the separation of the control plane from the data plane, does not in my mind adequately convey the compelling benefits of SDN. The term ‘Software Defined Networking’ should mean just that: every element of the network, including the VNFs and network control, should be implemented within a fully programmable software environment, exposing open interfaces and leveraging the open source community. This is the only way to create an open ecosystem and to unleash a new and unprecedented wave of innovation in every aspect of networking.
NFV releases network functions “trapped inside hardware” (a description I stole from an HP colleague), achieving tremendous benefits. But VNFs must be dynamically configured and connected at scale to deliver tangible value. While today’s telecommunications operations support systems (OSS) are adequate for static NFV use cases, the real potential for NFV to transform networking can only be realized through SDN control. Consequently, SDN represents much more than the mere separation of control plane and data plane.
Given that telecommunications networks are deployed at massive geographic scale, it is a hard sell to convince thousands, or even millions, of customers that their services will be migrated to a new network platform where those services will not be quite the same but prices won’t go down. Couple that with the significant time and cost to upgrade the OSS, wide-ranging changes to operational processes, the need to validate that the new platforms are sufficiently stable and reliable, and the obligations of regulation, and it is not surprising that there is hesitancy to contemplate significant telecommunications network transformations.
Consequently, the telecoms industry has resorted to decades of incremental network upgrades which have piled legacy functionality on top of legacy functionality to avoid the costs and risks of wholesale network and services migration. In the face of these realities, SDN was perceived to offer insufficient benefit to justify significant investment except in niche areas where it could be overlaid on top of existing systems. Furthermore, the idea of logically centralized SDN control is very scary to network designers who don’t readily understand abstract software concepts and who lose sleep striving to deliver reliable connectivity at massive scale, with relentless downward pressure on costs.
Just over two years into the NFV revolution, it is clear that the emergence of NFV has galvanized the industry to embrace software-based networking, short-circuiting a transition that might otherwise have taken years. The revelation that NFV can be deployed in digestible chunks, without massive system upgrades, has forced network designers to take notice. After all, it is difficult to ignore a pervasive industry trend when vendors’ product plans have morphed into software roadmaps!
Given that NFV is now accepted by all major network operators and some have already made significant announcements, there is no turning back. Leading vendors have committed to NFV roadmaps and analysts talk about ‘when’ and not ‘if’ NFV will be deployed. More importantly, SDN and NFV are now frequently discussed in the same breath. In my mind, the distinction between NFV and SDN is becoming an artifact of history, and the terms will ultimately be subsumed by a software-based networking paradigm, which itself will emerge as an integral aspect of Cloud technology.
The emergence of NFV with SDN is accelerating the evolution of cloud technologies to satisfy the stringent requirements of software-based telecommunications networks. Whereas a web service could momentarily stall with minimal customer impact while a virtual machine reboots, some business-critical network services cannot tolerate loss of connectivity even for a few milliseconds. Therein lies both challenge and opportunity. Challenge because meeting stringent telecommunications availability and performance requirements is not easy as evidenced by the ETSI NFV ISG’s deliberations. Opportunity, because I foresee an unprecedented wave of telecommunications innovation on a par with the birth of the Internet.
Carrier-grade network resilience (e.g. 5-nines and beyond) will be achieved by pooling virtualized resources. Fault management will be supplanted by autonomic self-healing networks that can not only withstand equipment failures but can even rapidly recover from large-scale natural disasters by instantly migrating network capacity to a remote location, as demonstrated by NTT DOCOMO et al. in the aftermath of the Fukushima disaster. And exciting new routing paradigms such as intent-based networking and content-based networking will become feasible much sooner, with innovation galvanized by the potential for imminent experimentation on deployed infrastructures. I could go on…
The genie of software-based networking — where synergies between NFV and SDN result in significantly greater capability than either could deliver alone — is now truly out of the bottle. The ultimate challenge is to encourage growth of an open telecommunications ecosystem, where operators and vendors can work together to create and deliver value to their customers. Energized by the NFV ISG and ONF, among other industry groups, and open source projects that are becoming increasingly important, the reality is just around the corner.
Don Clarke is Principal Architect for Virtualization Technologies at CableLabs and Chairman of the ETSI NFV ISG Network Operator Council.
The Future of Home Networking: Putting the ‘HIP’ in HIPnet™
You’ve been hearing all the hype about IPv6 networks but don’t know why you should care. Perhaps you’ve heard that your cable provider is delivering IPv6 service to your home but you don’t know what that means or why you should take advantage of it. Maybe you’ve just purchased a new home router that touts IPv6 support but don’t know how to go about connecting it. How do you sort through the seemingly endless configuration parameters required to set up your home network, whether it’s IPv4 or IPv6?
Fortunately, a technology exists today that takes all of the guesswork out of the task of configuring your home network. CableLabs, in conjunction with several service providers, has been developing a standard called Home IP Networking (HIPnet). HIPnet provides users with a friendly and seamless way to connect router devices to the public Internet via your service provider. In addition, it provides a way to connect multiple routers within your home, creating a multi-router home network. All a user needs to do is connect the routers with an Ethernet cable and the home router takes care of the rest.
That sounds pretty hip.
Let’s take a step back before we talk about how it works. Why should you care about IPv6? You may have heard talk that the Internet’s IPv4 address space is running out. This is due to the explosion of smartphones, tablets and other devices that need to communicate with one another – all of which need an IP address. In anticipation, cable operators have been actively migrating to IPv6 networks for several years so they can provide addresses to every new device that we buy and put in our home network.
IPv6 solves the problem of an IP address shortage because it provides a nearly inexhaustible number of IP addresses: 2^128 to be exact. To quote Martin Levy of Hurricane Electric, that’s “more than four quadrillion addresses for every star in the observable universe.”
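That claim is easy to sanity-check. In the sketch below, the star count (roughly 10^22) is a commonly cited astronomical estimate assumed for illustration, not a figure from the quote:

```python
# Back-of-the-envelope check of the IPv6 address-space claim above.
ipv6_addresses = 2 ** 128                 # total IPv6 address space
stars_in_observable_universe = 10 ** 22   # rough estimate (assumption)

per_star = ipv6_addresses // stars_in_observable_universe
print(per_star > 4 * 10 ** 15)  # True: more than four quadrillion per star
```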
Consumer electronics devices are also beginning to support IPv6 and some are ready to function in an IPv6 home network today. It’s only a matter of time before IPv4 is phased out and you’ll need IPv6 to do all the cool and fun things you love to do online.
Still not convinced that you need IPv6? Then think about all the devices that you use today to connect to the Web: Laptop, cell phone, tablet, etc. Now add to that the number of people in the world doing the same things as you. Then add all of the things coming in the future: Smart appliances, sensor networks, Wi-Fi in your car, and a myriad of new technologies. It’s easy to see how the demand for IP addresses will grow exponentially, and it’s not that far away.
So, let’s get back to HIPnet. How does it work?
Let’s assume that you already have cable service in your home. Chances are you let your service provider take care of setting everything up and, at a minimum, you’ve got broadband service and Wi-Fi provided in one box. You’ve also connected a number of devices to the home side of your network along the way, some wired and some wireless: PC, laptop, tablet, cell phone, printer, and maybe a Smart TV. So far everything’s working. You probably haven’t even thought about IPv6 or IPv4.
But let’s say you want to do something more sophisticated with your home network. Maybe you have a son or daughter who has just started college and you think it might be a good idea to segregate a portion of your home network so they can stay connected at school. Configuring that new router seems like a daunting task. And why should you even have to think about what kind of IP network to set up?
With HIPnet, everything is plug and play. You simply connect the Ethernet ports of the two routers and the routers take care of configuring the home network for you. This works for IPv4 as well as IPv6. If you have IPv6 delivered to the home you’ll be able to take advantage of it without any of the guesswork necessary to configure it. Plain and simple, connect it and it just works.
So, how soon can you get this? CableLabs has been actively working with CE product vendors and service providers to develop HIPnet-capable products for about a year. During that time we have hosted several interoperability events that test the ability to connect HIPnet-capable router devices together and then access the public Internet to stream video. In the most recent testing event we observed nine different router devices successfully implementing the HIPnet capability. That’s triple the number of devices that supported HIPnet three months ago. We think it’s possible to see these products in the home within a year.
That’s very cool, and very hip.
By John Berg