Culture

Meet CableLabs Principal Architect and Futurist Dr. Arianne Hinds

Sep 14, 2017

We've all heard of virtual reality, but few know about the scientists behind the technology shaping the future of our experiences. Virtual reality has come a long way from primitive headsets to a technology poised to revolutionize the way we communicate, learn and receive medical care. Meet the woman who's bringing the holodeck to your living room.

With over 25 years of experience in image and video compression, innovator and inventor Dr. Arianne Hinds joined the CableLabs team in 2012. The recent winner of the prestigious 2017 WICT Rocky Mountain Woman in Technology Award and Chair of the US delegation to MPEG (formally known as INCITS L3.1), she is actively engaged in developing virtual reality standards in MPEG.

Watch the video below to learn more about Dr. Hinds and how she's defining the future of virtual reality.

You can find out more about Arianne’s research by reading her blog posts and publications. Subscribe to our blog to find out more about how CableLabs is inventing the future.  

Innovation

5 Ways to Raise Innovation Leadership

Michelle Vendelin
Director, Innovation Services

Sep 13, 2017

The need to innovate now is greater than ever. Yet many leaders admit that they just don't have the time, or that they don't practice innovation consistently enough, either individually or with their teams. With an accelerating stream of start-ups and popular, well-funded competitors challenging the set-top box, connectivity methods, and business models of the cable industry, we must rise up together and win the innovation game! It will take our individual and collective commitment to delivering great value with our networks, entertainment & connectivity solutions today. We also need to raise our commitment to innovation leadership in order to place the cable industry at the forefront of the connected experience with purposeful innovations for decades to come.

With all this in mind, here are 5 ways for you and your team to innovate at new heights: 

1. Check Yourself: Transformation always begins with self-awareness. Contemplate these questions as an innovator and correct any tensions that may arise:

  • How important is innovation in my role, my team or company?
  • What would another level of innovation in my role, team or company look like?
  • Am I or is my team regularly discovering, developing or delivering innovation to my customers inside or outside my company?
  • Do I really know my customer's challenges, problems, gaps or missed opportunities that are the ripest for innovation?
  • What idea or innovation has been on the back burner for way too long?

2. Ask better questions: Let’s start with the most obvious question: “Do you know what questions need to be asked to spark game-changing innovation and influence others to see the value you see?”

Why ask better questions?  So, you can:

  • Help your organization make great decisions
  • Challenge your team to see beyond the status quo
  • Anticipate business challenges
  • Understand your customers when they are not always clear about what they want

Wondering how to start?  I invite you to formulate or seek out questions that help you challenge beliefs and assumptions about what's true or possible, consider the most expansive form of your idea, and empathize with others in order to solve real problems and innovate with purpose.

3. Build Innovation Grit: Coming up with breakthrough ideas is really hard. It's even harder to stay resilient when the "antibodies," or resistors to change, cut off your every attempt. Innovation has taught us that most great ideas were first impractical, impossible or downright stupid before they became the NEXT big thing or even just the next important thing. As humans, we'd like to think we are fully adaptable, but in reality, we are comforted by the status quo and certainty overall. It requires a special mindset, commitment and a lot of perseverance (grit) to see an innovation through.

  • Did you know that Innovation Grit must be developed consciously, or your idea will be lost to the innovation graveyard, thanks to those antibodies AGAIN?

4. Practice, Practice, Practice: To master anything, you must practice! So, let's check in on your innovation practice:

  • What are the daily, weekly, monthly, quarterly or yearly innovation practices you apply consistently, and with ever-increasing competency?
  • Are these practices truly generating more ideas, more value or more impact?
  • If you were going to be 10x more innovative 6 months from now, what practice would you need to START or re-commit to NOW?

5. Accelerate Change: If you noticed tensions, gaps or challenges in the above, and you KNOW it’s time to make a change, then take action and/or find an innovation mentor, coach or training program to help you accelerate the learning curve, focus your attention and get back in the game with the kind of intensity we need right now for the Cable Industry!

I invite you to consider our upcoming Innovation Boot Camp Intensive:

  • At boot camp, you will learn a framework for successful innovation and how to generate breakthrough innovation, BY DOING IT, in an immersive, intimate and accelerated way. Our upcoming boot camps are October 23-27, 2017 in Boulder, CO or April 23-27, 2018 in Silicon Valley.
  • We keep each boot camp small, so you get plenty of time and attention with our experts. These include CableLabs CEO and renowned innovator Phil McKinney, author of “Beyond the Obvious – Killer Questions that Spark Game-Changing Innovation” and creator of the award-winning, nationally syndicated radio show/podcast “Killer Innovations.” You will leave Boot Camp with more strategic questions and greater confidence when you return to work.
  • You will learn about the innovation myths that need to be busted and great strategies for dealing with the antibodies who may kill your best ideas or slow their progress. A team of battle-tested innovators will challenge you to transform your mindset and build creative confidence, grit and, ultimately, innovation impact in any role.

So there you have it, 5 tangible and accelerated ways to RAISE your Innovation Game: Check in on your current state of innovation, move your attention to killer questions, find new ways to overcome "antibodies" to your innovations, commit to a regular innovation practice AND, when you are ready, go DEEP at Innovation Boot Camp - or go home!

Interested in reading about what Innovation Boot Camp is really like? Check out my blog post about our previous Boot Camp here and watch the video below. Don’t hesitate to contact me with any questions.

Discover how CableLabs supports the cable industry to stay on the forefront of innovation here.

Energy

Savin’ Some Rosenfelds

Debbie Fitzgerald
Principal Architect and Director of the Energy Efficiency Program

Sep 7, 2017

Have you ever heard of a Rosenfeld? The Rosenfeld metric was created in 2010 and named after Art Rosenfeld, a former Lawrence Berkeley National Laboratory scientist and former California Energy Commissioner known as "the godfather of energy efficiency." It is a unit of energy savings representing 3 billion kilowatt-hours per year, which is also equivalent to the amount of energy generated by one 500-megawatt coal-fired power plant. Why is this important? The Rosenfeld is often used to quantify the savings from energy-efficiency initiatives.

Art Rosenfeld in his Berkeley Lab office in 1989 (credit: Lawrence Berkeley National Laboratory)

I was first introduced to the Rosenfeld in 2011 when the Natural Resources Defense Council estimated that the pay-TV industry could save 3 Rosenfelds of energy annually by 2016 through the adoption of energy-saving technologies and practices. Shortly thereafter, the U.S. Department of Energy (DOE) opened a proceeding to consider the development of energy regulations for set-top boxes (STB), but by law, DOE energy standards cannot take effect for five years after adoption. To achieve faster savings, the pay-TV industry and NRDC (along with the American Council for an Energy-Efficient Economy (ACEEE)) established a non-regulatory “Voluntary Agreement (VA)” in which all of the country’s largest pay-TV providers committed to the purchase of energy-efficient devices beginning in 2014.

The DOE was so satisfied with this agreement that it closed its proceeding. The Secretary of Energy at the time, Ernest Moniz, explained that the VA’s “energy efficiency standards reflect a collaborative approach among the Energy Department, the pay-TV industry and energy efficiency groups – building on more than three decades of common-sense efficiency standards that are saving American families and businesses hundreds of billions of dollars.”

I’m happy to report that in 2016, the VA saved approximately 33% more energy than NRDC had hoped for in 2011 - a savings of nearly 4 Rosenfelds!  The 2016 Annual Report for the STB VA released earlier this month found that set-top boxes in 2016 used just over 8 Rosenfelds, compared to the 9 Rosenfelds that NRDC set as a goal for 2016 in its earlier report and the 12 Rosenfelds that NRDC had projected would be consumed in 2016 absent immediate regulation. The VA has therefore not only succeeded in delivering energy savings far faster than DOE regulation could have, but it has actually resulted in savings that greatly exceeded the expectations of the leading energy-efficiency advocates.  And because the savings under the VA are expected to increase even more under its more rigorous “Tier 2” standards that became effective in 2017, the best is yet to come.

As stated in the most recent annual report, this program has been extremely successful in reducing the energy consumption of STBs and reducing the number of power plants required in the United States. Over the four years the signatories have been reporting, it is estimated that the program has saved 16.8 TWh of energy, saved consumers $2.1 billion in energy costs, and avoided 11.8 million metric tons of CO2 emissions.
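To put those numbers in perspective, here is a quick back-of-the-envelope conversion (my own arithmetic, not the report's) using the definition of a Rosenfeld given above; the helper function is hypothetical and the figures plugged in are simply the ones cited in this post.

```python
# 1 Rosenfeld = 3 billion kWh per year = 3 TWh per year (definition above).
TWH_PER_ROSENFELD_YEAR = 3.0

def to_rosenfelds(twh_per_year: float) -> float:
    """Express an annual energy figure (in TWh/year) in Rosenfelds."""
    return twh_per_year / TWH_PER_ROSENFELD_YEAR

# 2016 STB figures cited above: ~12 Rosenfelds (36 TWh) projected absent regulation,
# just over 8 Rosenfelds (~24 TWh) actually consumed -> roughly 4 Rosenfelds saved.
print(to_rosenfelds(36 - 24))   # ~4.0

# The 16.8 TWh saved over four reporting years averages ~4.2 TWh per year,
# about 1.4 Rosenfelds of average annual savings (savings ramped up each year).
print(to_rosenfelds(16.8 / 4))  # ~1.4
```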

The STB VA was established in 2012 as a five-year program, and 2017 is the last year for commitments with a final report in 2018.  The VA signatories, including the energy efficiency advocates NRDC and ACEEE, are actively working on a renewal of the Voluntary Agreement to keep the momentum of this successful program going.

That’s great news for set-top boxes, but what about cable modems and routers?

There is also a Voluntary Agreement for Small Network Equipment (SNE) energy efficiency that was established in 2015, and 2016 was the first year that the signatories had to meet the commitment that 90% of their purchases or retail sales would be within the power limits established by the VA. The Annual Report for the SNE VA was just released this week, and it demonstrated progress as well.  It found that 98.3% of the devices reported met the required levels, up from 89.6% last year. In addition, ten of the eleven signatories met the 90% commitment, and the eleventh fell just short at 88%. As per the VA, that signatory is working on a remediation plan to offset the incremental energy associated with its devices that exceeded its commitment.

Small Network Equipment has been evolving at a rapid pace, increasing network speeds from the Service Provider to the home, and also increasing networking capability within the home. Many of the newer SNE products integrate multiple functionalities that had been supported by separate devices, including broadband modem functionality, high-powered WiFi, MoCA, multi-port routing, and even IoT controllers.  In spite of this evolution, the energy consumption of the devices has decreased relative to broadband speeds, as depicted in this chart taken from the annual report:

Energy Usage by Equipment Type, Weighted by Broadband Speed

These trends demonstrate that service providers and manufacturers are making energy-conscious design and purchasing decisions, and saving Rosenfelds in the process!

You can learn more about the Voluntary Agreements here, and on the CableLabs site here.

--

Debbie Fitzgerald is a Principal Architect in the Technology Policy Department and leads the Energy Efficiency Program at CableLabs.

 

Technology

Towards the Holodeck Experience: Seeking Life-Like Interaction with Virtual Reality

Arianne Hinds
Principal Architect, Video & Standards Strategy Research and Development

Sep 5, 2017

By now, most of us are well aware of the market buzz around the topics of virtual and augmented reality. Many of us, at some point or another, have donned the bulky, head-mounted gear and tepidly stepped into the experience to check it out for ourselves. And, depending on how sophisticated your setup is (and how much it costs), your mileage will vary. Ironically, some research suggests that it's baby boomers who are more likely to be "blown away" by virtual reality, while millennials are more likely to respond with an ambivalent "meh." And this brings us to the ultimate question that is simmering on the minds of a whole lot of people: is virtual reality here to stay?

It’s a great question.

Certainly, the various incarnations of 3D viewing in the last half-century suggest that we are not happy with something. Our current viewing conditions are not good enough, or … something isn't quite right with the way we consume video today.

What do you want to see?

Let's face it, the way that we consume video today is not the way our eyes were built to record visual information, especially in the real world. Looking into the real world (which, by the way, is not what you are doing right now), your eyes capture much more information than the color and intensity of light reflected off of the objects in the scene. In fact, the Human Visual System (HVS) is designed to pick up on many visual cues, and these cues are extremely difficult to replicate both in current-generation display technology and in content.

Displays and content? Yes. Alas, it is a two-part problem. But let’s first get back to the issue of visual cues.

What your brain expects you to see

Consider this: for those of us with the gift of sight, the HVS provides roughly 90% of the information we absorb every day, and as a result, our brains are well-tuned to the various laws of physics and the corresponding patterns of light. Put more simply, we recognize when something just doesn't look like it should, or when there is a mismatch between what we see and what we feel or do. These mismatches in sensory signals are where our visual cues come into play.

Here are some cues that are most important:

  • Vergence distance is the distance that the brain perceives when the muscles of the eyes move to focus at a physical location, or focal plane. When that focal plane is at a fixed distance from our eyes, let's say, like with the screen in your VR headset, then the brain is literally not expecting you to detect large changes in distance. After all, your eye muscles are fixed on something that is physically attached to your face, i.e. the screen. But when the visual content is produced in a way that simulates the illusion of depth (especially large changes in depth), the brain recognizes a mismatch between the distance information it is getting from our eyes and the distance it is trained to perceive in the real world based on where our eyes are physically focused. The result? Motion sickness and/or a slew of other unpleasantries.
  • Motion parallax: As you, the viewer, physically move, let’s say walk through a room in a museum, then objects that are physically closer to you should move more quickly across your field of view (FOV) vs. objects that are farther away. Likewise, objects that are positioned farther away should move more slowly across your FOV.
  • Horizontal and vertical parallax: Objects in the FOV should appear different when viewed from different angles, as your horizontal and vertical position changes.
  • Motion to photon latency: It is really unpleasant when you are wearing a VR headset and the visual content doesn't change right away to accommodate the movements of your head. This lag is called "motion to photon" latency. To achieve a realistic experience, motion to photon latency must be less than 20ms, and that means that service providers, e.g. cable operators, will need to design networks that can deterministically support extremely low latency. After all, from the time that you move your head, a lot of things need to happen, including signaling head motion, identifying the content consistent with the motion, fetching that content if not already available to the headset, and so on. (A rough, illustrative latency budget follows this list.)
  • Support for occlusions, including the filling of “holes”. As you move through, or across, a visual scene, objects that are in front of or behind other objects should block each other, or begin to reappear consistent with your movements.
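To make the 20ms target concrete, here is a rough, hypothetical motion-to-photon budget for a network-assisted VR session. The stage names and millisecond values are illustrative assumptions of mine, not figures from this post; the only number taken from the text is the roughly 20ms end-to-end ceiling.

```python
# Illustrative motion-to-photon budget. Every stage and value here is an assumption
# made for the sketch; only the ~20 ms total comes from the post.
BUDGET_MS = 20.0

stages_ms = {
    "head-motion sensing and signaling": 2.0,
    "network round trip to the content source": 7.0,
    "content fetch / view selection": 4.0,
    "rendering": 4.0,
    "display scan-out": 2.0,
}

total = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:42s} {ms:4.1f} ms")
print(f"{'total':42s} {total:4.1f} ms of a {BUDGET_MS:.0f} ms budget "
      f"({BUDGET_MS - total:+.1f} ms margin)")
```

However the budget is split, the network's share has to be small and, above all, predictable, which is why deterministic low latency matters here at least as much as raw throughput.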

It’s no wonder…

Given all of these huge demands placed on the technology by our brains, it’s no wonder that current VR is not quite there yet. But, what will it take to get there? How far does the technology still have to go? Will there ever be a real holodeck? If “yes”, when? Will it be something that we experience in our lifetimes?

The holodeck first appeared properly in Star Trek: The Next Generation in 1987. The holodeck was a virtual reality environment that used holographic projections to make it possible to interact physically with the virtual world.

Fortunately, there are a lot of positive signs to indicate that we might just get to see a holodeck sometime soon. Of course, that is not a promise, but let's say that there is evidence that content production, distribution, and display are making significant strides. How, you say?

Capturing and displaying light fields

Light fields are 3D volumes of light as opposed to the ordinary 2D planes of light that are commonly distributed from legacy cameras to legacy displays. When the HVS captures light in the natural world (i.e. not from a 2D display), it does so by capturing light from a 3D space, i.e. a volume of light being reflected from the objects in our field of view. That volume of light contains the necessary information to trigger the all-too-important visual cues for our brains, i.e. allowing us to experience the visual information in a way that is natural to our brains.

So, in a nutshell, not only does there need to be a way to capture that volume of light, but there also needs to be a way to distribute that volume of light over a network (e.g. a cable network), and there needs to be a display at the end of the network that is capable of reproducing the volume of light from the digital signal that was sent over the network. A piece of cake, right?

Believe it or not

There is evidence of significant progress on all fronts. For example, at the F8 conference earlier this year, Facebook unveiled its light field cameras and corresponding workflow. Lytro is also a key player in the light field ecosystem with their production-based light field cameras.

For the display side, there is Light Field Lab and Ostendo, both with the mission to make in-home viewing with light field displays, i.e. displays that are capable of projecting a volume of light, a reality.

On the distribution front, both MPEG and JPEG have projects underway to make the compression and distribution of light field content possible. And, by the way, what is the digital format for that content? Check out this news from MPEG’s 119th meeting in Torino:

At its 119th meeting, MPEG issued Draft Requirements to develop a standard to define a scene representation media container suitable for interchange of content for authoring and rendering rich immersive experiences. Called Hybrid Natural/Synthetic Scene (HNSS) data container, the objective of the standard will be to define a scene graph data representation and the associated container for media that can be rendered to deliver photorealistic hybrid scenes, including scenes that obey the natural flows of light, energy propagation and physical kinematic operations. The container will support various types of media that can be rendered together, including volumetric media that is computer generated or captured from the real world.

This latest work is motivated by contributions submitted to MPEG by CableLabs, OTOY, and Light Field Labs.

Hmmmm … reading the proverbial tea-leaves, maybe we are not so far away from that holodeck experience after all.

--

Subscribe to our blog to read more about virtual reality and more CableLabs innovations.

Innovation

Behind the Tech: Connected Healthcare and The Near Future. A Better Place

Aug 31, 2017

Our short film The Near Future: A Better Place explores how emerging technologies in healthcare will transform our daily lives and allow us to age gracefully in place. We envision a future where embedded IoT devices are able to monitor us anywhere, keeping us safe and healthy longer. These technologies, once thought of as science fiction, rely on the high-speed, secure, reliable wireless connectivity and networking protocols enabled by the cable industry. Below, take an inside look at the tech featured in our film that will change the way we connect and interact with the world around us.

Smart Medicine

Battery-free ingestible pills containing microchips optimize drug levels and transmit a signal to doctors, where effects can be monitored and displayed.

Brain Scans

Neurodegenerative diseases like Alzheimer's, as well as strokes, are easily detected through high-resolution MRIs able to probe the microstructure of the brain.

Sensory Body Reading

Wireless, water-resistant sensors are able to monitor vital signs and bio activity on a continuing basis. Health issues are detected early by monitoring daily trends.

Remote Diagnostics

Say goodbye to those monthly doctor's visits and weekly blood tests! Mobile monitoring allows doctors to monitor patients in real time via a secure connection, so you can enjoy a complete healthcare appointment from the comfort of your own home.

Networked Health Care and Smart Cities

Sensors in the home and city infrastructure monitor weather, temperature, pollution and pollen, allowing seniors to make the best decisions for their daily activities.

AI Agents

Cookie, the AI agent in the film, is able to take the place of an in-home nurse or doctor's visit by providing an in-home companion capable of social interaction and health monitoring that is fully knowledgeable about treatments and drug regimens.

Nano Surgery

Nanobots powered by electromagnetic impulses are injected directly into the bloodstream and are able to treat diseases more efficiently and accurately, making certain diseases a thing of the past.

 

You can find more information about connected healthcare and The Near Future. A Better Place here. Subscribe to our blog to find out how CableLabs is enabling the tech of the future.

Technology

5G For All: The Need for Standardized 5G Technologies in the Unlicensed Bands

Belal Hamzeh
VP, Research & Development, Wireless Technologies

Aug 29, 2017

A version of this article appeared in S&P Global Market Intelligence in July 2017. You can find the original here.

Wherever you turn in the wireless ecosystem today, 5G is the buzzword and the popular kid on the block… well, at least in some blocks. 3GPP, the 3rd Generation Partnership Project that defines specifications for mobile networks (including GSM) and radio access technologies, is working on developing the 5G standards at an accelerated pace, emphasizing the importance of 5G in the evolution of mobile networks. But what is missing from the picture is an equal emphasis and urgency in developing standardized 5G solutions for the unlicensed bands.

--

According to the FCC:

Unlicensed Spectrum: “In spectrum that is designated as "unlicensed" or "licensed-exempt," users can operate without an FCC license but must use certified radio equipment and must comply with the technical requirements, including power limits, of the FCC's Part 15 Rules. Users of the license-exempt bands do not have exclusive use of the spectrum and are subject to interference.”

Licensed Spectrum: “Licensed spectrum allows for exclusive, and in some cases non-exclusive, use of particular frequencies or channels in particular locations. Some licensed frequency bands were made available on a site-by-site basis, meaning that licensees have exclusive use of the specified spectrum bands in a particular point location with a radius around that location.”

--

The unlicensed spectrum has a history of delivering connectivity to the masses at unparalleled scales and economies. Taking Wi-Fi as a proxy for the unlicensed spectrum, by 2020 the installed base of Wi-Fi devices is expected to reach nearly 12 billion, and cumulative shipments of Wi-Fi devices to surpass a whopping 28 billion. (Note: the world population is forecast to be 7.7 billion in 2020!)

5G For All

One of the fundamental drivers of the success of Wi-Fi is its use of unlicensed spectrum, which makes Wi-Fi available to everyone while significantly lowering the cost and complexity of deploying a wireless network. Additionally, unlicensed spectrum plays a critical role in the success of licensed spectrum technologies. In 2016, 60% of mobile traffic was offloaded to the unlicensed spectrum. This means that mobile networks would have needed roughly 2.5 times their capacity (250%) had offloading to unlicensed spectrum not been viable (the quick arithmetic is below). However, the user experience when switching between mobile networks and Wi-Fi has not been ideal due to a lack of interoperability.
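For the skeptical reader, here is the arithmetic behind that multiplier; it is simply a sanity check of the offload share cited above, not additional data.

```python
# If a fraction f of mobile traffic rides on unlicensed spectrum (Wi-Fi offload),
# the licensed network carries only (1 - f) of the total. Without offload it would
# have to carry everything, i.e. 1 / (1 - f) times its current load.
f = 0.60                      # 2016 offload share cited above
print(f"{1 / (1 - f):.1f}x")  # 2.5x -> 250% of today's licensed-network capacity
```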

3GPP is now looking at enabling 5G technologies in the unlicensed bands; specifically, 3.5 GHz, 5 GHz and 60 GHz. The study led by Qualcomm was approved in March of 2017, with the results expected to be handed off to 3GPP in June of 2018 for review. While this is exciting news for fans of unlicensed spectrum, it comes at a slower pace than its licensed spectrum counterpart. It is expected that 3GPP will finalize the licensed spectrum Non-Standalone (NSA) 5G enhanced mobile broadband specifications by March of 2018, thus enabling 5G network deployments in early 2019.

Although the schedule is not ideal, it is a step in the right direction. The popularity of the unlicensed spectrum (2.4 GHz and 5 GHz) has driven high levels of congestion. With the continuous increase in user demand, spectrum depletion is a real risk. Fortunately, the 60 GHz spectrum offers a huge swath of underutilized spectrum that is ripe for deploying 5G standalone networks within it. The 60 GHz band has 14 GHz of available spectrum (57 GHz – 71 GHz), which on its own is larger than all the licensed spectrum that is being considered for 5G networks, including licensed spectrum for 2G/3G/4G mobile networks!

We have addressed the business case for 5G technologies in the unlicensed band, but what’s in it for the end user?  The ability to make high-speed low latency wireless networks widely available has the potential to significantly disrupt what’s possible in our everyday lives, such as in education, healthcare, transportation, commerce, the way we work, entertainment, and most importantly, the way people connect with each other. Additionally, the availability of high-speed low latency wireless networks enables a new platform for innovation, on which applications we have yet to think of will be developed. The possibilities are endless and limited only by our imagination. As an example, take a look at our short video The Near Future: A Better Place here.

Harnessing the capabilities of 5G technologies and coupling them with unlicensed spectrum so that users can enjoy a seamless wireless experience across licensed and unlicensed bands is one of those truly rare instances where 1 + 1 = 3. What we cannot afford is to let standardized 5G technologies for the unlicensed bands lag, because we would miss out on what the coupling of 5G and unlicensed spectrum has to offer.

--

You can find out more about what CableLabs is doing in this space by reading our Inform[ED] Insight on 5G here. Subscribe to our blog to find out more about 5G in the future.

 

Technology

Introduction to Proactive Network Maintenance (PNM): The Importance of Broadband

Doug Jones
Principal Architect

Aug 24, 2017

This is the introduction for our upcoming series on Proactive Network Maintenance (PNM).

The advent of the Internet has had a profound impact on American life. Broadband is a foundation for economic growth, job creation, global competitiveness and a better way of life. The internet is enabling entire new industries and unlocking vast new possibilities for existing ones. It is changing how we educate children, deliver health care, manage energy, ensure public safety, engage government and access, organize and disseminate knowledge.

There is a lot riding on broadband service, which places a focus on customer experience: creating a faster and more reliable broadband experience that delights customers. Recent technological advancements in systems and solutions, as well as agile development, have enabled new cloud-based tools to enhance the customer experience.

Over the past decade, CableLabs has been inventing and refining tools to improve the experience of broadband. CableLabs is providing both specifications and reference designs to interested parties to improve how customers experience their broadband service. Proactive Network Maintenance (PNM) is one of these innovations.

What is Proactive Network Maintenance and Why Should You Care?

Proactive network maintenance (PNM) is a revolutionary philosophy. Unlike predictive or preventive maintenance, proactive maintenance depends on constant and rigorous inspection of the network to look for the causes of a failure before that failure occurs, rather than treating network failures as routine or normal. PNM is about detecting impending failure conditions followed by remediation before problems become evident to users.

In 2008, the first instantiation of PNM was pioneered at CableLabs. This powerful innovation used information available in each cable modem and mathematically analyzed it to identify impairments in the coax portion of the cable network. From that point forward, every cable modem in the network became a troubleshooting device that could be used as a preventive diagnostic tool.
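To give a flavor of what "mathematically analyzing" modem data can look like, here is a minimal, hypothetical sketch of one metric commonly described in PNM practice: the ratio of non-main-tap energy to total energy in a modem's adaptive pre-equalizer coefficients. The sample taps, main-tap index and threshold are illustrative values of mine, not numbers from this post or from any specification.

```python
import math

def nmter_db(taps: list[complex], main_tap_index: int) -> float:
    """Non-main-tap-to-total-energy ratio, in dB. A high (less negative) value
    suggests the equalizer is working hard against a plant impairment such as a
    micro-reflection."""
    total = sum(abs(t) ** 2 for t in taps)
    non_main = total - abs(taps[main_tap_index]) ** 2
    return 10 * math.log10(non_main / total)

# Example: a mostly clean response with a small echo a few taps after the main tap.
taps = [0.01 + 0j, 0.02 + 0j, 0.98 + 0.05j, 0.05 - 0.02j, 0.12 + 0.04j, 0.02 + 0j]
ratio = nmter_db(taps, main_tap_index=2)
print(f"NMTER = {ratio:.1f} dB")
if ratio > -18.0:  # illustrative flagging threshold, not a specification value
    print("Flag this modem for follow-up before the customer notices a problem.")
```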

This is important when trying to track down transient issues related to the time of day, temperature, and other environmental variables, which can play a huge role in the performance of the cable system. With transient issues, it is important to have sensors continually monitoring the network. Since then, with improvements in technology, more sophisticated tools have been added giving operators unprecedented amounts of information about the state of the network.

Problems are solved quickly and efficiently because we can pinpoint where they are. Technicians like PNM because they become empowered to find and fix issues. An impairment originating from within a customer's home can be dispatched to a service technician, while impairments originating on the cable plant itself can be dispatched to line technicians. Customer service agents also like the tools because they create actionable service requests. Lastly, impairments that can be attributed to headend alignment issues can be routed to a headend technician. All of this can be done before the customer is even aware there is a problem!

So, what does CableLabs have to do with all this?

The PNM project continues to innovate. Because of the success of PNM for the cable network, capabilities have been added to investigate in-home coax, WiFi and, soon, fiber optic networks. Monitoring is key, and by using powerful cloud-based predictive algorithms and analytics, networks can be monitored 24x7 to provide insights, follow trends and detect important clues, with the goal of identifying, diagnosing and fixing issues before customers notice any impact.
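As a purely illustrative example of that trend-watching idea (not an actual CableLabs tool), the sketch below flags a device whose signal quality drifts well below its own recent baseline; the metric, polling window and threshold are assumptions made for the sake of the example.

```python
from statistics import mean, stdev

def drifting_low(readings_db: list[float], window: int = 96, k: float = 3.0) -> bool:
    """True if the latest reading sits more than k standard deviations below the
    mean of the preceding `window` readings (96 ~= one day of 15-minute polls)."""
    if len(readings_db) <= window:
        return False                      # not enough history to judge a trend
    baseline = readings_db[-(window + 1):-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and readings_db[-1] < mu - k * sigma

# Example: a day of steady ~36 dB downstream SNR readings, then a drop to 31 dB.
history = [36.0 + 0.2 * ((i % 5) - 2) for i in range(96)] + [31.0]
print(drifting_low(history))  # True -> open a proactive ticket before the phone rings
```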

CableLabs provides a toolkit of technical capabilities and reference designs that interested parties can use to create and customize tools fitting specific business needs. Operators can get started with reference designs, build expertise and their own solutions, and integrate the tools into their own systems. In addition, suppliers have licensed the technology and are creating a turn-key solution that operators can choose to work with.

--

In my upcoming series, I will cover DOCSIS PNM, MoCA PNM, Optical PNM, Common Collection Framework and explore in greater depth how PNM enhances the customer experience. Be sure to subscribe to our blog to find out more.

Security

IoT Security – Insight on Trends, Challenges and the Road Ahead

Ann Finnie
Global Communications Manager

Aug 17, 2017

The Internet of Things (IoT) industry isn't part of the "Near Future" - it's already here and growing rapidly. The Wall Street Journal hails IoT as the next Industrial Revolution and, according to Cisco, there are 4.9 billion connected devices today, with 12 billion expected by 2020. Fully matured, this rapid growth is expected to produce a $6 trillion industry.

AT&T's Cybersecurity Insights Report surveyed more than 5,000 enterprises around the world and found that 85% of enterprises are in the process of or intend to deploy IoT devices. Yet a mere 10% of those surveyed feel confident that they could secure those devices against cyber attacks.

The big question that emerges as individuals think more deeply about the implications of almost every device being connected is: "How do we keep our devices secure?"

To further our discussion on IoT Security from our Insight paper, we talked to Kyrio’s Director of Business Development, Security Services, Ron Ih, to get expert insight into one of the most pressing questions in tech today...

  1. What is the most important IoT security trend we are seeing this year?

As consumers and businesses adopt more IoT devices and threats continue to multiply, securing those devices easily and at scale has become a daunting task. We are seeing more specialized security tools and processes for IoT devices this year, in particular the use of digital certificates and public key infrastructure (PKI) to enable a more secure onboarding process.

“‘Onboarding’ is the process by which a new device is connected and added to the network and the local IoT ecosystem. Onboarding includes the process for authentication, authorization, and accountability of that new device.” -- A Vision for Secure IoT

Digital certificates are issued and signed by a reputable source, often referred to as a Certificate Authority or Root of Trust. Like a digital identity card, devices exchange digital certificates to cryptographically authenticate each other’s identity and origin. In other words, authentication credentials allow you to prove you are what you say you are. As the IoT Security Informed Insight explains, “not only do digital certificates increase security, they enable a better customer experience (e.g. no PIN to enter.)”

The cryptographic signatures within the certificates cannot feasibly be forged or re-created unless you have the proper private key at the source. You can read more about the authentication process, digital certificates and PKIs here.
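For readers who want to see what that check looks like in practice, here is a minimal sketch of verifying that a device certificate was signed by a trusted root, using the widely available Python cryptography package. It assumes RSA-signed certificates in PEM files; the file names are placeholders, and this is not CableLabs or Kyrio tooling.

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def signed_by(device_cert: x509.Certificate, ca_cert: x509.Certificate) -> bool:
    """True if device_cert's signature verifies under ca_cert's public key
    (assumes RSA with PKCS#1 v1.5, a common case for device certificates)."""
    try:
        ca_cert.public_key().verify(
            device_cert.signature,
            device_cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            device_cert.signature_hash_algorithm,
        )
        return True
    except InvalidSignature:
        return False

with open("root_of_trust.pem", "rb") as f:   # placeholder file names
    root = x509.load_pem_x509_certificate(f.read())
with open("device.pem", "rb") as f:
    device = x509.load_pem_x509_certificate(f.read())

print("Device certificate chains to the root of trust:", signed_by(device, root))
# A real onboarding flow would also check validity dates and revocation status,
# and require the device to prove possession of its private key (e.g. in a TLS handshake).
```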

  2. What are the main challenges facing the IoT industry today?

The challenges are multifaceted, but the three most common I see are:

  • While many companies are beginning to explore solutions, most device makers do not have security experts and are unprepared to manage security complexities

Device manufacturers and security companies have traditionally operated in two quite separate worlds.

Device manufacturers operate in a world of physical devices, often on the scale of hundreds of thousands, even millions, of devices manufactured each year. Tightly managing inventory, bill-of-materials costs, and just-in-time delivery is essential to remaining competitive. Device manufacturers work with firmware and small-footprint applications, often with limited compute power and storage. Security can be limited to only that which is essential, in order to keep costs down and delivery times short. This market is generally characterized by tens of thousands of small to medium-sized companies that individually might not drive very high volumes, but in aggregate ship billions of devices.

Security companies have traditionally operated in the world of enterprise computing, networking, web servers and web applications. These accounts are typically characterized by large corporations with IT groups and staff or consultants that specialize in security. Generally, these are large companies, banks, data centers, health care providers, etc., where there may not be a physical product, but there is valuable data stored in vast database servers. The data enables services and usually involves personal and/or financial information that must be protected.

As you can see, this can result in a large mismatch between what a device maker needs, and what a security company is equipped to provide, resulting in the two parties talking past each other. As a result, device security often doesn’t get implemented properly. This is not because the device maker doesn’t want to do it, but because they are not effectively guided on HOW to do it.

  • Under pressure to meet product schedules and quarterly earnings, device security is often omitted or left as an afterthought because it currently takes too much effort and cost to understand and implement

People often hear that cost is the reason for not implementing security, but misinterpret where that cost lies. There is indeed strong pressure to lower BOM costs, but the larger cost is often in the staff a company needs just to understand security itself. Whether it is allocating brain cycles from existing staff or new hires, headcount is generally one of the largest costs a company incurs. Understanding takes brain cycles. Brain cycles = time. Time = money, big money.

If we are to address the IoT security issue effectively, we need to address the time aspect of implementing security.

  • Although IoT has existed for some time now, the market pressure to go wireless leaves devices more vulnerable to attacks

Autonomous networked devices have existed for quite some time already, but have primarily been implemented on wired networks on a relatively limited scale, using general purpose computers. However, with the relentless march of Moore’s Law, microcontrollers have advanced to the point where even a very small, inexpensive chip can operate a full TCP/UDP network stack in addition to managing a wireless radio. This high integration and lower cost have driven the market towards the adoption of small, wirelessly connected autonomous devices. In addition, the convenience of wireless connectivity has increased the scale of adoption to levels that are orders of magnitude greater than we have ever seen before.

Every device that is connected to your network is effectively a user on that network. Would you let a human user onto your network without verifying their identity? If you wouldn’t do that, why would you let a “device” do it? I put “device” in quotes because, in a network environment, you can’t always be sure if something claiming to be a device actually is what it says it is.

The justification for omitting security I often hear is “there is nothing important on that device”. That is the data center way of thinking about it where you are protecting what is directly on the system where security is implemented. My response is usually this, “You are absolutely correct. No one cares about what’s on the device. They care about the network it’s connected to.” That usually gets them to rethink their position. Insecure devices provide a foothold on the network to attack higher value devices or capture sensitive data.

  3. How can companies work to ensure better security in their IoT products?

  • Businesses need to stop looking at security as a burden

Instead, businesses should leverage security as an opportunity to improve customer experience and revenues. Consumers don't buy security for security's sake; they buy products that make their lives easier and more convenient. If a product is secure, it improves the customer experience.

  • Security must be addressed holistically at the design stage of a device

To bring products to market faster, it's easy to fall into the trap of a "sell now and we'll patch it later" mentality. It's nearly impossible to predict every security issue that may arise, so manufacturers need to consistently ask themselves: "How would this feature play out over time?" and "How do we do this in a way that's scalable and secure over time?" Retrofitting security midway through the product lifecycle generally doesn't work nearly as well and often sets you up for failure.

  • Businesses must understand what “security” actually means and look for solutions that are easily digestible if they don’t employ security experts

Device makers need to understand what security actually means and what it is. Just because you use encryption doesn't mean your device is secure. The biggest element of security is not encryption, but authentication: identifying who you are communicating with and being able to verify it.

--

As IoT devices gather more information about us and our daily lives, consumers and businesses must pay more attention to the security risks and vulnerabilities. As Chris Connors, the General Manager of Internet of Things Offerings at IBM, states: “This means that device manufacturers, application developers, consumers, operators, integrators and enterprise businesses all have their part to play to follow best practices.”

You can find more information on IoT security here. Don’t forget to subscribe to our blog for more information on IoT in future blog posts.

Events

12 Things We Learned at Summer Conference 2017

Aug 15, 2017

Last week, at our annual Summer Conference in the picturesque Rocky Mountains, 350 cable operators representing North America, South America, Europe, Asia and Australia gathered to explore cutting-edge technology products. From VR to AI to autonomous vehicles, President and CEO Phil McKinney introduced several ways CableLabs is set to enhance our quality of life through broadband networks and wireless connectivity.

Not only did we witness the world premiere of our groundbreaking vision video The Near Future. A Better Place, we made some new connections and learned plenty along the way. So, without further ado, here are some of our biggest takeaways from our best Summer Conference yet...

  1. Innovation requires dedication every day.
  2. Creativity is important. Making time to think creatively can pay large dividends. Just as a person would strengthen their physical muscles, we must also tone our creativity "muscles." We do this by abandoning the status quo and pushing ourselves out of our comfort zones.
  3. Make a fool of yourself, work with the unexpected, give things away, realize you can't do it alone and ask the unasked.
  4. Don't be afraid to fail. Perfection is overrated and often holds you back from innovating something truly amazing.
  5. Share your innovation early. You will inspire others and they will inspire you to improve in unusual and meaningful ways.
  6. Try to be conscious of your biases. They might keep you from listening to someone with a good idea (or a critical warning).
  7. Artificial intelligence will soon power everything in our home from robots to mobile devices to holograms. AI will know everything when it comes to the details of a patient’s treatments so that people can remain in the comfort of their own home.
  8. There's been nearly 100 years of history with wireless technology starting in 1918 with Germany's wireless telephony experiments on military trains.
  9. Not all innovation comes from Silicon Valley. UpRamp announced 4 new Fiterator companies featuring technology in AI, IoT, P2P CDN, and cybersecurity from places including Pittsburgh, Pennsylvania; Alexandria, Virginia; and Madrid, Spain.
  10. AI capabilities should not be outsourced and should live close to the business units. It's not simply a problem of the technology stack, but how AI can be paired with intuition to make better business and product decisions.
  11. It's an amazing time to rethink wireless. The Internet of Things will force us to reimagine what a wireless network could and should be.
  12. Rain doesn't stop a Kyrio BBQ!

12 Things we learned from Summer Conference 2017

CableLabs is the innovation lab for the global cable industry focused on innovation with purpose. Watch our video here and check out our Buzzfeed article “10 Ways Tech will Change your Life in the Near Future” to learn more.

Leave a comment below to let us know how the technology of the future inspires you and what you learned from our 2017 Summer Conference.

Innovation

10 Fun Facts about The Near Future. A Better Place.

Eric Klassen
Innovation Project Lead

Aug 10, 2017

“Innovate with Purpose” – CableLabs President and CEO Phil McKinney

This week at our Summer Conference we released a short film titled The Near Future. A Better Place. The second in our Near Future series, focusing on virtual reality and AI, the film highlights how our broadband networks and increased connectivity in the home play a crucial role in the innovations of the future of healthcare and telemedicine.

10 Fun facts about The Near Future. A Better Place.

Exclusive from the team that created The Near Future. A Better Place, here are 10 fun facts about our film that will both inspire you and blow your mind:

  1. The star of the film, Rance Howard, has been acting for over 70 years and has appeared in over 250 films and TV shows.
  2. Rance had never cast a fishing lure before this film.
  3. To create Cookie the robot, an art director designed and 3D printed the head and body. A robotics expert programmed the wheels and head motors, two operators remote-controlled Cookie's performance on set, two animators designed and created the performance of the eyes, a voice actor read for the voice, and a sound engineer synthesized the voice.
  4. The Olli bus is a real autonomous vehicle created by Local Motors and is mostly 3D printed. Multiple Ollis were not available, so a special effects technique was used to create the shot with three Ollis.
  5. All photos with Rance are his personal photos with an actress standing in for his wife, except for one where his real wife appears.
  6. A medical technologist expert from the Mayo Clinic was consulted to ensure the nanobots technology was realistic and not too sci-fi.
  7. The hospital was created in the office space of an architecture company in downtown Oakland, CA.
  8. The producer’s dog was hired to work with Cookie on set and she gave a stellar performance.
  9. To create the video wall in the living room, a separate wall was built, painted, and designed over an existing wall in the house and removed after shooting.
  10. The Super 8 shot of Rance remembering his wife was created by bringing in hundreds of flowers that were placed in the existing foliage and then removed after shooting.

Now, grab a coffee and watch out for these in our video below:

--

You can learn more about the integral role the cable industry is playing in the innovations of the future here.

 
