

Reliability

A Better Wi-Fi Experience with Dual Channel Wi-Fi™

Luther Smith
Director, Wireless Technology

May 21, 2019

At least 15 percent of customer service calls are Wi-Fi related, ranging from poor connections to video playback issues, translating to more than $600 million in annual support costs for the cable industry (in North America alone). As the Wi-Fi industry looks for ways to increase speed, coverage and support for more devices in the Wi-Fi ecosystem, one critical element has been overlooked: the need to have the necessary airtime to send data to all end devices in a timely manner. Even with faster Wi-Fi connections, if there is no available airtime to send the data, the connection is useless. CableLabs realized this shortfall and addressed it with the development of Dual Channel Wi-Fi technology.

What Is Dual Channel Wi-Fi?

Dual Channel Wi-Fi delivers an efficient and more reliable wireless connection.

The wireless networking technology that we commonly refer to as Wi-Fi is based on the 802.11 standard developed by the Institute of Electrical and Electronics Engineers (IEEE). The 802.11 standard, in turn, has its origins in a 1985 ruling by the U.S. Federal Communications Commission (FCC) that made the Industrial, Scientific and Medical (ISM) unlicensed radio frequency bands available for public use.

Wi-Fi is often referred to as “polite” because it uses a procedure called Listen-Before-Talk (LBT). LBT is a contention-based protocol that requires data-transmitting devices to listen and wait for a given frequency band to be clear before sending data. If the device (access point [AP] or station) does not detect transmissions, it proceeds to send data. If the device does detect transmissions, it waits for a random period of time and listens again for a clear frequency or channel before commencing transmission.
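
To make the procedure concrete, here is a deliberately simplified sketch of LBT-style channel access in Python. It is illustrative only: real 802.11 CSMA/CA uses slot-timed contention windows, exponential backoff and carrier-sense thresholds defined by the standard, and the function names and timing constants below are hypothetical.

```python
import random
import time

def channel_is_busy() -> bool:
    """Stand-in for carrier sensing; a real radio measures RF energy on the
    channel. Here we simply simulate a busy channel 30 percent of the time."""
    return random.random() < 0.3

def listen_before_talk(send_frame, max_attempts: int = 8) -> bool:
    """Simplified Listen-Before-Talk: wait for a clear channel, backing off
    for a random period whenever the channel is sensed busy."""
    for attempt in range(max_attempts):
        if not channel_is_busy():
            send_frame()                      # channel clear: transmit
            return True
        # Channel busy: wait a random backoff period, then listen again.
        backoff_ms = random.uniform(0, 10) * (attempt + 1)
        time.sleep(backoff_ms / 1000.0)
    return False                              # gave up after repeated contention

if __name__ == "__main__":
    listen_before_talk(lambda: print("frame sent"))
```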

Wi-Fi has become ubiquitous over the years and is the primary method by which we connect devices in the home, at work and in public places to reach the internet. Multiple Wi-Fi devices in a typical broadband home can cause contention for available frequencies. Dual Channel Wi-Fi addresses Wi-Fi congestion issues by providing one or more channels for downstream-only data in addition to the primary bi-directional channel. The primary channel is used for upstream and small downstream packets, and the other channel(s) are used for large downstream and time-critical data, like video. By offering operators configurable tools to intelligently redirect Wi-Fi traffic, Dual Channel Wi-Fi achieves better airtime utilization for all traffic, resulting in fewer interruptions and a much better Wi-Fi experience for everyone.
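
A rough sketch of the traffic-steering idea follows, assuming a hypothetical classifier: upstream and small downstream packets stay on the bi-directional primary channel, while large downstream flows and time-critical video are steered to a downlink-only channel. The class names and threshold are illustrative and are not taken from the Dual Channel Wi-Fi specification.

```python
from dataclasses import dataclass

PRIMARY = "primary"            # bi-directional channel
DOWNLINK_ONLY = "downlink-1"   # one of possibly several downstream-only channels

# Illustrative threshold: packets larger than this are treated as bulk downstream data.
LARGE_PACKET_BYTES = 1000

@dataclass
class Packet:
    direction: str             # "upstream" or "downstream"
    size_bytes: int
    is_video: bool = False

def assign_channel(pkt: Packet) -> str:
    """Steer a packet following the Dual Channel Wi-Fi idea: upstream and small
    downstream traffic uses the primary channel, large or time-critical
    downstream traffic uses a downlink-only channel."""
    if pkt.direction == "upstream":
        return PRIMARY
    if pkt.is_video or pkt.size_bytes >= LARGE_PACKET_BYTES:
        return DOWNLINK_ONLY
    return PRIMARY

print(assign_channel(Packet("downstream", 1400, is_video=True)))  # downlink-1
print(assign_channel(Packet("downstream", 80)))                   # primary
```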


Benefits of Dual Channel Wi-Fi

Dual Channel Wi-Fi benefits more than just downlink-only clients:

  • A better overall multi-user experience: As demonstrated in our performance testing, without Dual Channel Wi-Fi, tests using two standard Wi-Fi channels show issues with video streams, gameplay delays, download buffering and slower throughput to individual devices. Moving data off the standard Wi-Fi channel effectively clears traffic from that channel, allowing both the AP and other clients more opportunities to send data.
  • A radically improved multi-user experience: By virtually eliminating hesitation and pixelation in video delivery, Dual Channel Wi-Fi enables smooth gameplay without delays and faster overall delivery of data to both Dual Channel Wi-Fi and non-Dual Channel Wi-Fi devices. In our tests, data transfer speeds increased by up to 12 times, depending on the application's download speeds, while airtime efficiency increased by 50 percent (by reducing the need for retransmissions).
  • Reduction of downlink data packet errors and packet retries: The AP’s ability to send data to clients without contention interference has reduced downlink data packet errors and packet retries, resulting in a reduction in uplink retry messages. This, in turn, allows the AP to send more TCP segments at a time, further reducing the amount of uplink traffic.

These improvements in data delivery over the Wi-Fi network as a whole are an example of user experience improvements that can be achieved by technologies that complement the cable industry’s 10G initiative. As the cable industry drives towards faster speeds, lower latency and increased reliability, Dual Channel Wi-Fi helps ensure that those benefits are experienced all the way to end user devices.

Because Dual Channel Wi-Fi is not limited to one downlink-only data channel, deployments in venues such as stadiums, airports or outdoor arenas can also benefit. Dual Channel Wi-Fi's configurable filters can selectively assign, move or remove devices from individual downlink-only data channels. The mechanism for determining which downlink-only channel each device should be assigned to is open to vendor development. This ability will allow operators and vendors to perform load balancing across the downlink-only data channels. The result is a managed network that ensures the best user experience.
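
Because that mechanism is left open, the sketch below shows just one hypothetical approach (least-loaded assignment); it is not the behavior defined by any vendor or the specification.

```python
# Hypothetical least-loaded assignment of client devices to downlink-only channels.
downlink_channels = {"dl-1": set(), "dl-2": set(), "dl-3": set()}

def assign_device(mac_address: str) -> str:
    """Place a device on the downlink-only channel currently serving the fewest
    devices (a stand-in for a vendor-defined load metric)."""
    channel = min(downlink_channels, key=lambda ch: len(downlink_channels[ch]))
    downlink_channels[channel].add(mac_address)
    return channel

def remove_device(mac_address: str) -> None:
    """Remove a device from whichever downlink-only channel currently holds it."""
    for members in downlink_channels.values():
        members.discard(mac_address)

for mac in ["aa:01", "aa:02", "aa:03", "aa:04"]:
    print(mac, "->", assign_device(mac))
```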

Dual Channel Wi-Fi is compatible with all Wi-Fi releases, including Wi-Fi 6. Dual Channel Wi-Fi has been developed and tested on various AP and client platforms. These include RDK-B, RDK-V, Ubuntu, Windows, MacOS and OpenWrt, which was co-implemented by Edgewater Wireless. Both CableLabs and Edgewater Wireless are excited about the opportunity to improve Wi-Fi for users around the world and look forward to working with standards bodies, internet service providers and device manufacturers of video set-tops, streaming devices, laptops, tablets and gaming consoles.

For more information about Dual Channel Wi-Fi, test results and implementation guides, click below.


Learn More

Wireless

Moving Beyond Cloud Computing to Edge Computing

Omkar Dharmadhikari
Wireless Architect

May 1, 2019

In the era of cloud computing—a predecessor of edge computing—we’re immersed in social networking sites, online content and other online services that give us access to data from anywhere at any time. However, next-generation applications focused on machine-to-machine interaction with concepts like the internet of things (IoT), machine learning and artificial intelligence (AI) will transition the focus to “edge computing” which, in many ways, is the anti-cloud.

Edge computing is where we bring the power of cloud computing closer to the customer premises at the network edge to compute, analyze and make decisions in real time. The goal of moving closer to the network edge—that is, within miles of the customer premises—is to boost the performance of the network, enhance the reliability of services and reduce the cost of moving data computation to distant servers, thereby mitigating bandwidth and latency issues.

The Need for Edge Computing

The growth of the wireless industry and new technology implementations over the past two decades have driven a rapid migration from on-premises data centers to cloud servers. However, with the increasing number of Industrial Internet of Things (IIoT) applications and devices, performing computation at either data centers or cloud servers may not be an efficient approach. Cloud computing requires significant bandwidth to move the data from the customer premises to the cloud and back, further increasing latency. With stringent latency requirements for IIoT applications and devices requiring real-time computation, the computing capabilities need to be at the edge—closer to the source of data generation.

What Is Edge Computing?

The word “edge” refers to the geographic distribution of network resources. Edge computing makes it possible to perform data computation close to the data source instead of going through multiple hops and relying on the cloud network to perform the computing and relay the data back. Does this mean we don’t need the cloud network anymore? No, but it means that instead of data traversing through the cloud, the cloud is now closer to the source generating the data.

Edge computing refers to sensing, collecting and analyzing data at the source of data generation, and not necessarily at a centralized computing environment such as a data center. Edge computing uses digital devices, often placed at different locations, to transmit the data in real time or later to a central data repository. Edge computing is the ability to use distributed infrastructure as a shared resource, as the figure below shows.

Edge computing is an emerging technology that will play an important role in pushing the frontier of data computation to the logical extremes of a network.

Key Drivers of Edge Computing:

  • Plummeting cost of computing elements
  • Smart and intelligent computing abilities in IIoT devices
  • A rise in the number of IIoT devices and ever-growing demand for data
  • Technology enhancements with machine learning, artificial intelligence and analytics

Benefits of Edge Computing

Computational speed and real-time delivery are the most important features of edge computing, allowing data to be processed at the edge of the network. The benefits of edge computing manifest in these areas:

  • Latency

Moving data computing to the edge reduces latency. Latency without edge computing—when data needs to be computed at a server located far from the customer premises—varies depending on available bandwidth and server location. With edge computing, data does not have to traverse over a network to a distant server or cloud for processing, which is ideal for situations where latencies of milliseconds can be untenable. With data computing performed at the network edge, the messaging between the distant server and edge devices is reduced, decreasing the delay in processing the data.
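
As a rough, back-of-the-envelope illustration (the distances and propagation speed are assumptions, not measurements), propagation delay alone grows with the distance to the compute location, before any queuing or processing time is counted:

```python
# Illustrative round-trip propagation delay: distant cloud vs. network edge.
# Assumes roughly 200,000 km/s signal propagation in fiber; figures are examples only.
SPEED_IN_FIBER_KM_PER_MS = 200.0   # about two-thirds the speed of light, per millisecond

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(f"Cloud server 2,000 km away: {round_trip_ms(2000):.1f} ms of propagation alone")
print(f"Edge node 10 km away:       {round_trip_ms(10):.2f} ms of propagation alone")
```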

  • Bandwidth

Pushing processing to edge devices, instead of streaming data to the cloud for processing, decreases the need for high bandwidth while improving response times. Bandwidth is a key and scarce resource, so reducing the network load imposed by high-bandwidth traffic helps achieve better spectrum utilization.

  • Security

From a certain perspective, edge computing provides better security because data does not traverse over a network, instead staying close to the edge devices where it is generated. The less data computed at servers located away from the source or cloud environments, the less the vulnerability. Another perspective is that edge computing is less secure because the edge devices themselves can be vulnerable, putting the onus on operators to provide high security on the edge devices.

What Is Multi-Access Edge Computing (MEC)?

MEC enables cloud computing at the edge of the cellular network with ultra-low latency. It allows running applications and processing data traffic closer to the cellular customer, reducing latency and network congestion. Computing data closer to the edge of the cellular network enables real-time analysis for providing time-sensitive response—essential across many industry sectors, including health care, telecommunications, finance and so on. Implementing distributed architectures and moving user plane traffic closer to the edge by supporting MEC use cases is an integral part of the 5G evolution.

 Edge Computing Standardization

Various groups in the open source and standardization ecosystem are actively looking into ways to ensure interoperability and smooth integration of edge computing elements. These groups include:

  • The Edge Computing Group
  • CableLabs SNAPS programs, including SNAPS-Kubernetes and SNAPS-OpenStack
  • OpenStack’s StarlingX
  • Linux Foundation Networking’s OPNFV, ONAP
  • Cloud Native Compute Foundation’s Kubernetes
  • Linux Foundation’s Edge Organization

How Can Edge Computing Benefit Operators?

  • Dynamic, real-time and fast data computing closer to edge devices
  • Cost reduction with fewer cloud computational servers
  • Spectral efficiency with lower latency
  • Faster traffic delivery with increased quality of experience (QoE)

Conclusion

The adoption of edge computing has been rapid, with increases in IIoT applications and devices, thanks to myriad benefits in terms of latency, bandwidth and security. Although it’s ideal for IIoT, edge computing can help any applications that might benefit from latency reduction and efficient network utilization by minimizing the computational load on the network to carry the data back and forth.

Evolving wireless technology has enabled organizations to use faster and more accurate data computing at the edge. Edge computing offers benefits to wireless operators by enabling faster decision making and lowering costs without the need for data to traverse through the cloud network. Edge computing enables wireless operators to place computing power and storage capabilities directly at the edge of the network. As 5G evolves and we move toward a connected ecosystem, wireless operators are challenged to maintain the status quo of operating 4G along with 5G enhancements such as edge computing, NFV and SDN. The success of edge computing cannot be predicted (the technology is still in its infancy), but the benefits might provide wireless operators with a critical competitive advantage in the future.

How Can CableLabs Help?

CableLabs is a leading contributor to the European Telecommunications Standards Institute NFV Industry Specification Group (ETSI NFV ISG). Our SNAPS™ program is part of the Open Platform for NFV (OPNFV). We wrote the OpenStack API abstraction library—“SNAPS-OO”—and contributed it to the OPNFV project at the Linux Foundation; it leverages object-oriented software development practices to automate and validate applications on OpenStack. We also added Kubernetes support with SNAPS-Kubernetes, introducing a Kubernetes stack to provide CableLabs members with open source software platforms. SNAPS-Kubernetes is a certified CNCF Kubernetes installer that is targeted at lightweight edge platforms and scalable with the ability to efficiently manage failovers and software updates. SNAPS-Kubernetes is optimized and tailored to address the needs of the cable industry and general edge platforms. Edge computing on Kubernetes is emerging as a powerful way to share, distribute and manage data on a massive scale in ways that cloud or on-premises deployments cannot necessarily provide.



Events

CableLabs Sponsors FCBA/IAPP “Data Is King”

Mark Walker
Director, Technology Policy

Kelton Shockey
Technology Policy Associate

Apr 29, 2019

Many of today’s most popular consumer products and services are powered by the exponential growth in the generation, collection and use of personal data, enabled by ever-increasing broadband capacity, processing power and storage. These products and services provide consumers with unparalleled personalization, efficiency and convenience. However, the technologies and practices surrounding personal data also create new dimensions of risk to individuals, institutions and society alike.

In response, governments both in the United States and around the world are under increasing pressure to develop new legislation and regulatory models to address these growing concerns. In the past year alone, we have seen the implementation of the European Union’s sweeping General Data Protection Regulation (GDPR), the passing of the California Consumer Privacy Act, and multiple hearings in the U.S. Congress stemming from numerous data breaches and other scandals involving the potential misuse of consumers’ personal data. Here at CableLabs, we recognize the interplay and potential impact of emerging privacy regulations on the direction of next-generation Internet applications.

In that spirit, CableLabs sponsored “Data Is King” – U.S. Privacy Developments and Implications for Global Markets and Technology Development, a recent event co-hosted by the Federal Communications Bar Association (FCBA) Rocky Mountain Chapter and the IAPP Denver/Boulder KnowledgeNet Chapter. The event gathered luminaries from across the policy and technology spectrum to explore trends and recent developments in privacy law and regulation, as well as the potential impact that these policies will have on the products and services of tomorrow.

The event was kicked off by Martin Katz (Chief Innovation Officer and Senior Advisor for Academic Innovation and Design at the University of Denver and the Executive Director at Project X-ITE). Katz discussed the existing gaps and fragmentation in today’s U.S. privacy regime and highlighted the drawbacks of the EU’s approach to comprehensive personal data protection legislation (GDPR). In Katz’s view, such an approach creates a significant and costly compliance regime that can stifle new startups and small businesses, and more generally, innovative new products and services. He emphasized that any comprehensive U.S. federal regime should recognize and seek to minimize compliance costs and ensure room for innovation while protecting consumer choice, trust and accountability.

Tracy L. Lechner (Attorney and Founder at the Law Offices of Tracy L. Lechner) moderated the first panel session, focused on trends and recent developments in privacy regulations domestically and internationally, with the following panelists: Beth Magnuson (Senior Legal Editor of Privacy and Data Security at Thomson Reuters Practical Law); Dale Skivington (Compliance and Privacy Consultant, Adjunct Professor at the University of Colorado, and Former Chief Privacy Officer at Dell); Erik Jones (Partner at Wilkinson, Barker, Knauer); and Scott Cunningham (Owner at Cunningham Tech Consulting and Founder of IAB Tech Lab).

The panelists agreed that the general position of industry has evolved from a preference for best practices with agency oversight to a recognized need for U.S. federal legislation. This shift has been spurred by a desire for a common compliance framework in light of developing differences in state laws and diverging international privacy regimes. The panelists emphasized that changing privacy regulatory requirements have forced organizations to make frequent and costly IT overhauls to ensure compliance, overhauls that arguably create little to no value for consumers. For instance, GDPR’s expansive definition of “personal data” created a herculean project for large organizations to take the foundational step of identifying and classifying all the potentially covered data. The panelists agreed that state attorneys general could have a valuable and thoughtful role in enforcement, but they also believe that specific requirements should be standardized at the federal level and be based on an outcome- or risk-based approach, unlike GDPR’s highly prescriptive approach.

Mark Walker (Director of Technology Policy at CableLabs) led the second panel discussion, focused on the interplay of privacy regulation and technology development. The panel featured Walter Knapp (CEO at Sovrn), Scott Cunningham and Danny Yuxing Huang (Postdoctoral Research Fellow at the Center for Information Technology Policy at Princeton University). Walker framed the panel discussion in historic terms, highlighting the privacy concerns generated by the widespread availability of the portable camera in the late 1800s, through the emergence of electronic eavesdropping capabilities in the 1960s and, more recently, through the broad adoption of RFID technology. For each of these examples, public concern drove legal and regulatory changes, but more fundamentally, the privacy “panic” subsided for each technology as society became more familiar and comfortable with each technology’s balance of benefits and drawbacks.

Through that lens, the panelists examined GDPR and highlighted the high associated compliance costs, from both a technical implementation and revenue perspective. Faced with these costs, many smaller publishers are choosing to cut off access to their content from covered geographies rather than trying to comply. In comparison, large Internet firms have the resources to ensure compliance even in a costly and highly fragmented regulatory environment. Until recently, the Internet has largely matured without defined geographic borders and has nearly eliminated global distribution costs for smaller publishers. However, this trend may be reversed in the face of an emerging fragmented and highly regulated environment, reducing the viability of smaller publishers and driving unintended market concentration.

Turning to emerging technologies, Huang described his research into the security and privacy implications of consumer Internet of Things (IoT). He provided an overview of a newly released research tool, Princeton IoT Inspector, that consumers can easily use to gain detailed insights into the network behaviors of their smart home IoT devices. Through this tool, consumers can gain a better understanding of how IoT devices share their personal information. He illustrated how IoT Inspector was able to identify the numerous ad networks and other domains a streaming video device communicated with while streaming a single television program; surprisingly, the streaming device communicated with more than 15 separate domains during that single streaming program.

The event closed with Phil Weiser, Colorado’s Attorney General, providing keynote remarks that outlined the current state of legislative efforts, explained potential approaches that address key privacy challenges and highlighted the role of state attorneys general in developing regulatory approaches and enforcing them. Attorney General Weiser recognized that although curbing a patchwork of state laws in favor of a single federal one would be the ideal outcome, it is unlikely to happen in a reasonable timeframe, saying:

A first best solution would be a comprehensive federal law that protected consumer privacy. Such a law, like the Dodd-Frank law, should authorize State AGs to protect consumers. When Congress starts working on such a law, I will be eager and willing to support such an effort. After all, differing laws and reporting requirements designed to protect privacy creates a range of challenges for companies and those working to comply with different—and not necessarily consistent—laws.

In today’s second-best world, I believe that States have an obligation to move forward. We should do so with a recognition that we need to collaborate with one another and develop approaches that recognize the challenges around compliance. We can use your help and engagement as we work towards just this end.

As CableLabs continues to focus on developing new and innovative network technologies, we must continue to ensure that we have a sound understanding of the rapidly evolving privacy landscape, both here and abroad. But, just as importantly, policymakers should have a sound understanding of how the various regulatory approaches may impact current and developing technologies. Events like this help bridge those gaps in understanding.



Events

Be a Part of the Next Generation – Join the Next Remote PHY Interoperability Event

Jon Schnoor
Lead Engineer, Network Technologies

Apr 25, 2019

A CableLabs interoperability event is always a popular affair—and with good reason. It’s where manufacturers from all corners of the industry can come together to test the viability and interoperability of their products, as well as resolve technical issues before going to market. Our next Interop•Labs event, focused on Remote PHY technology, will be held May 6–10 in Louisville, Colorado. Space is limited, so be sure to register before May 1 to reserve your spot!

What to Expect at the Event

CableLabs is known for developing specifications, but our work doesn’t stop there. We want to do everything we can to ensure that our specifications are implemented properly and that the final consumer products deliver the best possible experience for customers. This philosophy benefits our members and vendors and, ultimately, the industry as a whole.

At the event, we will help you verify that your device and software meet the Remote PHY (R-PHY) requirements, and we will address any issues associated with implementation or interoperability. You will also get a rare opportunity to collaborate with other vendors and make sure that your products work together.

All event participants will get access to Kyrio’s state-of-the-art laboratories, fully equipped for comprehensive interoperability and performance testing. All you need to bring is the equipment or software that you intend to test.

A Bit of Housekeeping…

The event is open to all CableLabs members and NDA vendors. You must have a CableLabs member or vendor account to register, as well as approved R-PHY project access. Each participating company can send an appropriate number of engineers, in addition to any contributing engineers from the CableLabs R-PHY working groups. We also ask that you sign the DOCSIS® Participation Agreement prior to the event. If you have any questions, please email us at events@cablelabs.com.

REGISTER NOW

Wired

OFC: A Third of a Mile of Next-Gen Optics

Matt Schmitt
Principal Architect

Apr 23, 2019

0.0000026 seconds.

For the more technically inclined, that’s 2.6 microseconds. Which is how long it would take light to travel a third of a mile through fiber optic cable. It was also the length of the show floor at OFC: The Optical Networking and Communications Conference and Exhibition, held in March in San Diego, California.

Of course, it took me considerably longer – 115,384,615 times longer, or about 5 minutes – to walk that same distance at the show. And that’s assuming I maintained a fast pace and avoided stopping for the entire distance – a feat that proved essentially impossible, given the amazing assortment of next-generation optical technology on display, as well as a large number of familiar faces around me!

CableLabs Represented

The show floor hosted 683 exhibitors – too many to take in over such a short time. Among them were many of the companies that have been involved in the CableLabs P2P Coherent Optics effort, helping to blaze the trail for the use of coherent optics technology in the cable access network, in turn enabling our 10G vision. In those booths, many were showcasing products that support 100G speeds based on our PHYv1.0 specification, as well as their roadmap and plans for devices supporting 200G speeds based on our PHYv2.0 specification. Roaming the show floor, checking out exhibited products or enjoying key sessions, I kept running into many of the direct participants in our efforts, despite the fact that 15,400 people were in attendance.

It didn’t seem that I could go very far without encountering someone from a significant CableLabs contingent or one of our members, reflecting the importance of next-generation optics to the cable industry, as well as CableLabs’ strong commitment to developing new optical technologies. Our Optical Center of Excellence has been at the forefront of developing innovative approaches for applying optical technology to cable networks, such as Full Duplex Coherent Optics.

CableLabs on Display

Although CableLabs wasn’t an official exhibitor, beyond having a contingent of people present, CableLabs and cable definitely had a presence at this year’s OFC. The importance of the cable industry was mentioned during a keynote speech; Curtis Knittle participated on a panel on “Action in the Access Network” as a part of the OIDA Executive Forum, and one of our interns presented a poster as part of a collaboration with CableLabs’ Bernardo Huberman and Lin Cheng.

Another presentation from our own Mu Xu also illustrated how CableLabs is pushing the boundaries of optical technology. This presentation – titled “Multi-Stage Machine Learning Enhanced DSP for DP-64QAM Coherent Optical Transmission” and co-authored by other CableLabs thought leaders including Junwen Zhang, Haipeng Zhang, Jing Wang, Lin Cheng, Zhensheng Jia, Alberto Campos, and Curtis Knittle – was particularly noteworthy because it brought together multiple areas of next-generation technology and research going on at CableLabs.

This was my first year attending OFC, and I feel like I barely scratched the surface of what was there. More than anything else, I came away impressed by the impact that the CableLabs team is making on the optical industry, one that will be critical for enabling our 10G future. I’m greatly looking forward to next year.



Wireless

Mobility Lab Webinar #3 Recap: Inter-Operator Mobility with CBRS

Omkar Dharmadhikari
Wireless Architect

Apr 18, 2019

Today we hosted our third webinar in the Mobility Lab Webinar series, “Inter-Operator Mobility with CBRS.” In case you missed the webinar, you can read about it in this blog or scroll down to see the recorded webinar and Q&A below.

Background

Multiple system operators (MSOs) may be motivated to provide mobile services using the new 3.5 GHz spectrum introduced with Citizens Broadband Radio Service (CBRS). However, because CBRS uses low-power small cells to provide localized coverage in high-traffic environments, MSOs may rely on mobile virtual network operator (MVNO) agreements to provide mobile service outside the CBRS coverage area. In this scenario, MSOs will be motivated to:

  • deliver a seamless transition,
  • minimize the transition time between the home CBRS network and the visitor MVNO network, and
  • maximize device attachment to the home CBRS network.

For inter-operator roaming, mobile operators use one of the two 3GPP roaming standards—Home Routing (HR) or Local Break Out (LBO)—to support the transition between a home network and roaming partner visitor networks. The international or domestic roaming agreements between home and visitor operator networks require the two networks to share roaming interfaces, as dictated by the 3GPP-defined roaming models. Because mobile operators are motivated to keep their subscribers on their network as long as possible to minimize LTE offload, they have little incentive to provide open access and connection to MVNO partners. Thus, the CBRS operator and host MVNO operators may have different and opposing motivations.

Our Webinar: Inter-Operator Mobility with CBRS

The “Inter-Operator Mobility with CBRS” webinar provides key findings that may assist MSOs in evaluating the implementation of the two roaming models for CBRS use cases with regard to:

  • inter-operator mobility using network-based triggers for connected and idle modes,
  • sharing of roaming interfaces,
  • Public Land Mobile Network (PLMN) configurations, and
  • higher-priority network selection timer.

The webinar also discusses the alternative solutions to network-based transition, such as:

  • device transition controlled with an external server and
  • enhancing dual SIM functionality.

You can view the webinar, webinar Q&A and technical brief below:

If you have any questions, please feel free to reach out to Omkar Dharmadhikari. Stay tuned for information about upcoming webinars by subscribing to our blog.



Policy

Driving Global Connectivity Well Beyond Cable Technology

Kelton Shockey
Technology Policy Associate

Mark Walker
Director, Technology Policy

Apr 15, 2019

CableLabs participates in more than 30 unique standards organizations, industry consortia, and open source efforts. 

CableLabs is focused on developing innovative technologies, not only in the performance of cable’s hybrid fiber coax (HFC) networks, but also in many areas that extend beyond the traditional cable network, including wireless (both licensed and unlicensed), cybersecurity, network function virtualization (NFV), optical technologies for access networks, and the application of artificial intelligence (AI) and machine learning to network management and orchestration. To be successful, CableLabs recognizes that, in these areas beyond traditional cable technology, it must engage and work with the broader technology community to drive advancements. This effort is visible through CableLabs’ deep commitment to leading and contributing to standards organizations, industry consortia, and open source efforts in these broader areas.  

Developing standards and industry specifications is at the core of CableLabs, which has been in the specification and standardization business since its inception over 30 years ago. In 1997, CableLabs released the initial version of the Data Over Cable Service Interface Specification (DOCSIS), the technology that enables broadband service to be provided over an HFC network. Standardization of the cable interface specification allowed the cable network operators to work at scale with the network equipment manufacturers to build the interoperable technology needed for cable to meet the exploding demand for broadband Internet access.

Ever since, CableLabs, along with its members and the vendor community, has continued to advance DOCSIS technology. Cable operators today have largely moved to DOCSIS 3.1 technology, enabling the availability of gigabit-speed broadband across nearly the entire cable footprint in the US, and driving towards a “10G” network capability. As cable has broadened its focus, CableLabs has responded by broadening its standards efforts and industry engagement.

Improving Wi-Fi and Enabling 5G through Wireless Standards Engagement

CableLabs contributes significantly to almost a dozen different standards organizations to improve wireless connectivity through standardization-related mechanisms. Our work is not restricted to improvements in the traditionally separate spheres of in-home and mobile wireless; it also includes work toward seamless network convergence in the future. Along those lines, CableLabs is engaged in the O-RAN Alliance, where we are leading an effort to establish an open virtualized RAN (“radio access network”) fronthaul specification that will allow for low-cost small cells with DOCSIS network backhaul.

At 3GPP, CableLabs is driving the Wireless-Wireline Convergence (WWC) effort to make the operation, management, and traversal of 5G wireless networks and 10G DOCSIS networks more seamless. CableLabs is also working to bring consumers a faster and safer in-home network experience through a next-generation adaptive security platform, CableLabs® Micronets, which enables enterprise-level smart security at home. Beyond making home networks safer, we’re working to make them more powerful, as exhibited by our role in achieving recent milestones with carrier-grade Wi-Fi certification through the Wi-Fi Alliance’s Vantage™ program and the launch of the new EasyMesh™ certification program.

Driving Increased Performance of Optical Technologies in the Access Network through Broad Industry Collaboration

As cable drives its fiber infrastructure deeper into the HFC network, CableLabs has developed new technology for the use of fiber in the access portion of the network and has promoted standardization of such technology. We are involved in several global standards development bodies—including IEEE, ETSI, O-RAN, and SCTE—where we work to level up all aspects of the fiber network. These efforts combine our internal specification development work (such as the Coherent Optics specifications) with broad industry collaboration in order to deliver dramatic improvements to the access network across all areas. This means that while working toward ever faster speeds through developing the next generations of PON protocols, the whole network ecosystem needs to be addressed, which includes innovation in network operations with projects such as Proactive Network Maintenance (PNM).

Building a Common, Secure, Foundation for IoT Devices of the Future

CableLabs envisions a future empowered by technologies that improve our lives—a future where augmented reality (AR)/virtual reality (VR) head-mounted displays, video walls, AI-enabled media, ubiquitous Internet of Things (IoT) devices, light field holodecks and displays (as seen in our latest Near Future video) are just the beginning. However, in order for AR/VR devices to be populated with high-quality content, for video walls to connect seamlessly, or for our IoT devices to assist us securely, we will first need high-quality, secure, industry-driven standards on which the technology and applications can be built. This belief has led to our involvement in the Open Connectivity Foundation (OCF), an industry effort to develop a secure interoperability specification for IoT.

Catalyzing the Future of Immersive Media Experiences

Recognizing the importance of building consensus throughout the ecosystem, even beyond the broadband network, CableLabs is significantly involved in and contributing technical expertise toward a number of emerging technology areas, including significant projects in video, VR/AR, and immersive media. Essential to the actual adoption of standards, we recently played a founding role in establishing Media Coding Industry Forum (MC-IF) to address patent licensing of future MPEG codecs. In addition, we announced a new collaboration called IDEA (Immersive Digital Experiences Alliance) to establish and promote end-to-end delivery of immersive content, including light fields, over broadband networks.

To learn more about our work in standards, open source, and industry consortia, please see our members-only (login required) Standards Strategy Update (April 2019) on current engagements.



Innovation

An IDEA is Born: CableLabs Heads Up New Alliance That Will Bring Holodecks Into Your Living Room

Apr 11, 2019

CableLabs has joined forces with top players in cutting-edge media technology—Charter Communications, Light Field Lab, OTOY and Visby—to form the Immersive Digital Experiences Alliance (IDEA). Chaired by CableLabs’ Principal Architect and Futurist, Arianne Hinds, the alliance aims to facilitate the development of an end-to-end ecosystem for immersive media, including VR, AR, stereoscopic 3D and the much-talked-about light field holodeck, by creating a suite of display-agnostic, royalty-free specifications. Although the work is already well underway, the official IDEA launch event was on April 8 at the 2019 NAB Show. Learn more about it here.

IDEA’s Challenges: What problems do we want to solve?

Advancements in immersive media offer endless opportunities not only in gaming and entertainment but also in telemedicine, education, business and personal communication and many other areas that we haven’t even begun to explore. It’s an exciting technological frontier that always gets a lot of buzz at tech expos and industry conferences. The question now is not if, but when it will become a reality and what the steps are to get there.

Despite numerous innovation leaps in VR and AR in recent years, the immersive media industry as a whole is still in its very early stages. Light field technology, the richest and most dense form of immersive media that allows the user to view and interact with a three-dimensional object in volumetric space, is particularly limited by the shortcomings of the existing video interchange standards.

  • Problem #1: Too much data

A photorealistic, volumetric video requires substantially more data than the traditional 2D media we’re used to today. In order to deliver a truly seamless and lifelike immersive experience, we need to take a different approach for an interoperable media format and network delivery.

  • Problem #2: Inadequate Network Ecosystem

There’s currently no common media format for storage, distribution and display of immersive images. We’ll need to build a media-aware network that’s fully optimized for the new generation of immersive entertainment.

IDEA’s Goals: How will we address these problems?

IDEA is already working on the first version of the Immersive Technologies Media Format (ITMF), a display-agnostic set of specifications for representation of immersive media. ITMF is based on OTOY’s well-established ORBX Scene Graph format currently used in 3D animation.

The initial draft of ITMF, scheduled for release by the end of 2019, will meet the following criteria:

  • It will be royalty-free and open source
  • It will be built on established technologies already embraced by content creators
  • It will be unconstrained by legacy raster-based 2D approaches
  • It will allow for continued improvements and advancements
  • It will address real-life requirements based on input from content creators, technology manufacturers and network operators.

In addition to the development of the ITMF standard, IDEA will also:

  • Gather marketplace and technical requirements to define and support new specifications
  • Facilitate interoperability testing and demonstration of immersive technologies in order to gain industry feedback
  • Produce immersive media educational events and materials
  • Provide a forum for the exchange of information and news relevant to the immersive media ecosystem, open to international participation of all interested parties

IDEA’s New Chairperson: A Woman With a 3D Vision

IDEA’s newly-elected chairperson, Dr. Arianne Hinds, joined CableLabs in 2012 as a Principal Architect of Video & Standards Strategy. A VR futurist, innovator and inventor, she has over 25 years of experience in areas of image and video compression, including MPEG and JPEG. Dr. Hinds has won numerous industry awards, including the prestigious 2017 WICT Rocky Mountain Woman in Technology Award. She is the Chair for the U.S. delegation to MPEG and is currently serving as the Chairperson of the L3.1 Committee for United States MPEG Development Activity for the International Committee for Information Technology Standards. Her new responsibilities at IDEA are a natural extension of her life’s work, perfectly aligned with the IDEA’s mission to bring the beautiful world of immersive media technology into the mainstream.


Why CableLabs?

The 10G platform positions cable operators as the first commercial network service providers to support truly immersive services beyond the limits of legacy 2D video. With its ability to deliver up to 10 Gbps while supporting low latency for interactive applications, 10G will be crucial to delivering immersive media at bitrates (e.g., 1.5 Gbps for light field panels) that allow the corresponding displays to operate at their fullest potential.

Become an IDEA member

No one company can build the future in isolation. IDEA welcomes anyone—technologists, creative visionaries, equipment manufacturers and network distribution operators—who share its vision. If you’re interested in learning more about becoming a member, please visit the website at www.immersivealliance.org.

You can learn more about the CableLabs future vision by clicking below. 


Learn More About 10G

Wired

Proactive Network Maintenance (PNM): Cable Modem Validation Application(s)

Jason Rupe
Principal Architect

Jay Zhu
Senior Engineer

Apr 10, 2019

Sometimes, two apps are better than one. We now have two versions of the Cable Modem Validation Application (CMVA) available for download and use: a new lab automation version, and a data exploration version.

Thing One and Thing Two

Lab automation and certification have unique requirements, but investigation and invention require flexibility. Because the CMVA found value as a cable modem (CM) data plotter and browser on top of its original purpose as a lab testing tool, we decided there should be two versions—one focused on each use case.

Sometimes You Feel Like a DUT

The newest, most complex version of CMVA is built specifically for CM Cert-Lab testing and includes several new features and automations:

  • Improved efficiency for CMVA on certification testing: CMVA now discovers OFDM/OFDMA-based topology information from the CMTS and loads all related channel configuration information automatically for testing. CMVA also synchronizes PNM SNMP SET command parameters with XCCF for better efficiency and greater control.
  • Automated discovery of the active DOCSIS® 3.1 CM list: Users can easily select CMs with their test configurations automatically filled to start tests with a few clicks.
  • CMVA now runs multiple PNM tests sequentially on multiple CMs in parallel with simple clicks on a single user login: The latest test reports are directly served from the CM table. Different users are handled in parallel, as previously.
  • CMVA now embeds detailed testing logs into the HTML test report: The log file can be downloaded from the HTML test report. The HTML test report is portable.
  • CMVA now keeps copies of raw PNM test files together with the test reports for vendor debugging references: When downloading the test reports, CMVA packages the raw-text test logs and the portable HTML test report into a single archive.
  • All the Acceptance Test Plan (ATP) calculation activities are placed in the log file for vendor debugging references.
  • We added a function for resetting CMs remotely with one click: This is important for testing and useful for other purposes.

 


Figure 1: New layout for test and configuration management


Figure 2: Select CM directly from the table to start tests; the latest reports are linked directly in the table for convenience

 


Figure 3: The test procedures run last time are tracked, and the configurations are automatically filled

 


Figure 4: Detailed test logs are embedded directly into the portable HTML test report and can be downloaded as a plain-text log

All these new features are important for test automation, but some of them are useful for other needs. Go nuts! But if you simply want the basic capabilities that CMVA always provided, you can still get that version.

Sometimes You Don’t

Sometimes you just want a simple way to poll a set of modems and see what you can get. The previous version is a bit simpler, but it still has the validation capabilities if you need them. So, it might be the version that can address most, if not all, of your needs. We use it for many purposes but mainly as a testing and development tool. Here are some specific use cases we’ve encountered:

  • Testing ideas in the lab: The PNM Working Group InGeNeOS conducted lab testing, as reported on before, and we used CMVA to grab data from CMs under test.
  • Developing applications: As we work to develop our first large-scale PNM base application, inside our prototype PNM Application Environment, we use CMVA to develop theories about how the data can be processed for automated processing.
  • Building reports and documenting: So often, we need to capture what certain impairments look like, or obtain a good visualization of a PNM measurement, and CMVA makes that handy.
  • Investigating issues: With CMVA, it’s a simple matter to collect data from a pool of CMs and compare the results. This helps us investigate many issues, including changes in firmware versions, CM responsiveness, and other potential issues with plant configuration, software changes and so on.
  • Combined Common Collection Framework (XCCF) development and testing: As we develop new capabilities with our XCCF, we can use CMVA to validate its functionality.

If you are a user of CMVA, let us know how you have used it!

Two Can Play at That Game

Although the more complicated testing tool can be used for all these use cases and many more, some users don’t need the automation, overhead and many controls required for automated testing. When you contact us to get an updated version of CMVA, please let us know what you would like to use it for. That way, we can offer you the right version.



Wired

Forward Error Correction (FEC): A Primer on the Essential Element for Optical Transmission Interoperability

Steve Jia
Distinguished Technologist, Wired Technologies

Apr 4, 2019

Forward error correction (FEC) has been a powerful tool in the cable industry for many years. In fact, perhaps the single biggest performance improvement in the DOCSIS 3.1 specifications was achieved by changing the FEC used in previous versions – Reed-Solomon (RS) – to a new coding scheme with improved performance: low-density parity check (LDPC). Similarly, FEC has become an indispensable element for high-speed optical transmission systems, especially in the current coherent optical transmission age.

FEC is an effective digital signal processing method that improves the bit error rate of communication links by adding redundant information (parity bits) to the data at the transmitter side; the receiver side then uses that redundant information to detect and correct errors that may have been introduced in the transmission link. As the following figure shows, the signal encoding that takes place at the transmitter has to be properly decoded by the receiver in order to extract the original signal information. Precise definition and implementation of the encoding rules are required to avoid misinterpretation of the information by the receiver decoding the signal. Successful interoperability will only take place when both the transmitter and receiver follow and implement the same encoding and decoding rules.

[Figure: FEC encoding at the transmitter and decoding at the receiver]
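
As a toy illustration of that encode/decode contract (not any FEC actually used on optical links), a rate-1/3 repetition code adds redundancy at the transmitter and lets the receiver correct a single flipped bit per group by majority vote:

```python
# Toy FEC example: a rate-1/3 repetition code. Real optical-transport FECs
# (RS, Staircase, oFEC, cFEC) are far more powerful, but the contract is the
# same: both ends must agree on the encoding and decoding rules.

def encode(bits):
    """Transmitter: add redundancy by repeating each information bit 3 times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Receiver: majority-vote each group of 3 bits to correct single-bit errors."""
    out = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

data = [1, 0, 1, 1]
tx = encode(data)
tx[4] ^= 1                       # the channel flips one bit
print(decode(tx) == data)        # True: the error was detected and corrected
```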

As you can see, FEC is the essential element that needs to be defined to enable the development of interoperable transceivers using optical technology over point-to-point links. Industry trends are currently moving toward removing proprietary aspects and becoming interoperable, as operators advocate more open and disaggregated transport in high-volume, short-reach applications.

When considering which FEC to choose for a new specification, you need to consider some key metrics, including the following:

  • Coding overhead rate — The ratio of the number of redundant bits to the number of information bits
  • Net coding gain (NCG) — The improvement in received optical sensitivity with FEC compared with without it, accounting for the bit-rate increase from the added overhead
  • Pre-FEC BER threshold — A predefined threshold for error-free post-FEC transmission, determined by the NCG

Other considerations include hardware complexity, latency, and power consumption.

One major decision point for FEC coding and decoding is between Hard-Decision FEC (HD-FEC) and Soft-Decision FEC (SD-FEC). HD-FEC decides whether a 1 or 0 has occurred based on an exact threshold, whereas SD-FEC makes decisions based on probabilities that a 1 or 0 has occurred. SD-FEC can provide higher NCG, getting closer to the ideal Shannon limit, at the cost of higher complexity and more power consumption.
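
To picture the difference, here is a minimal sketch assuming a simple binary signal in which a 0 is transmitted as -1.0 and a 1 as +1.0 (illustrative only; this is not the decision logic of any standardized FEC). A hard decision keeps only the most likely bit, while a soft decision keeps a confidence value, such as a log-likelihood ratio, that the decoder can weigh.

```python
# Hard vs. soft decisions on noisy samples of a binary signal (0 -> -1.0, 1 -> +1.0).

def hard_decision(sample: float) -> int:
    """Threshold at 0: keep only the most likely bit value."""
    return 1 if sample >= 0.0 else 0

def soft_decision(sample: float, noise_var: float = 0.5) -> float:
    """Return a log-likelihood ratio: its sign gives the likely bit,
    its magnitude gives the confidence the decoder can exploit."""
    return 2.0 * sample / noise_var

for s in [0.9, 0.1, -0.05, -1.2]:
    print(f"sample {s:+.2f}: hard={hard_decision(s)}, soft LLR={soft_decision(s):+.2f}")
```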

The first-generation FEC code standardized for optical communication is the RS code, used for long-haul optical transmission as defined by the ITU-T G.709 and G.975 recommendations. In this RS implementation, each codeword contains 255 bytes, of which 239 bytes are data and 16 bytes are parity, usually expressed as RS(255,239) and known as Generic FEC (GFEC). Several FEC coding schemes were recommended in ITU-T G.975.1 for high bit-rate dense wavelength division multiplexing (DWDM) submarine systems in the second generation of FEC codes. The common mechanism for increased NCG was the use of concatenated coding schemes with iterative hard-decision decoding. The most commonly implemented example is the Enhanced FEC (EFEC) from G.975.1 Clause I.4 for 10G and 40G optical interfaces.
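
As a quick worked check of the overhead metric defined above, the coding overhead rate of RS(255,239) follows directly from its codeword parameters:

```python
# Coding overhead rate for RS(255, 239): redundant bytes divided by information bytes.
n, k = 255, 239                      # codeword bytes, information bytes
overhead_rate = (n - k) / k          # 16 parity bytes per 239 data bytes
print(f"RS(255,239) coding overhead: {overhead_rate:.1%}")  # ~6.7%
```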

At the 100 Gbps data rate, CableLabs has adopted the Hard-Decision (HD) Staircase FEC, defined in ITU-T G.709.2 and included in the CableLabs P2P Coherent Optics Physical Layer v1.0 (PHYv1.0) Specification. This Staircase FEC, also known as high-gain FEC (HG-FEC), is the first coherent FEC that provides an NCG of 9.38 dB with a pre-FEC BER threshold of 4.5E-3. The 100G line-side interoperability has been verified in CableLabs’ very first Point-to-Point (P2P) Coherent Optics Interoperability Event.

At the 200 Gbps data rate, openFEC (oFEC) was selected in CableLabs’ most recent release of the P2P Coherent Optics PHYv2.0 Specification. The oFEC provides an NCG of 11.1 dB for the Quadrature Phase-Shift Keying (QPSK) format with a pre-FEC BER of 2E-2, and 11.6 dB for the 16QAM format, after 3 soft-decision iterations, to cover multiple use cases. This oFEC was also standardized by Open ROADM, targeting metro applications.

Although CableLabs has not specified 400G coherent optical transport, the Optical Internetworking Forum (OIF) has adopted a 400G concatenated FEC (cFEC) with a soft-decision inner Hamming code and a hard-decision outer Staircase code in its 400ZR standard; this same FEC has been selected as a baseline proposal in the IEEE 802.3ct Task Force. This 400G implementation agreement (IA) provides an NCG of 10.8 dB and a pre-FEC BER of 1.22E-2 for the coherent dual-polarized 16QAM modulation format, specifically for Data Center Interconnection (DCI).

The following table summarizes performance metrics for standardized FEC in optical fiber transmission systems.

[Table: Performance metrics of standardized FEC codes in optical fiber transmission systems]

CableLabs is the first specification organization to demonstrate 100G coherent optics interoperability with a significant number of participants. Please register below for our next coherent optics interoperability testing event.


REGISTER NOW
