Behind the Tech: Near Future. Diverse Thinkers Wanted.
During Summer Conference, we released Diverse Thinkers Wanted, the fourth Near Future film—this one about the ways we’ll all be working a few years from now. Just like the previous three films, Diverse Thinkers Wanted highlights how advancements in cable technology can affect the way we interact with other people and the world around us—but this time in a professional capacity. The film explores multiple future technologies that can eventually help us become better, faster and smarter versions of ourselves, enhancing our creative problem solving, time management and collaboration skills.
On-Call Mixed Reality
Our eyeglasses and other wearables will be outfitted with connected mixed reality (MR) tech that can display a variety of useful and timely information about everything we see. This will give new meaning to the term “plugged in” at work.
Public Light Field
Public Light Field technology will allow multiple users to sign in and share information in a virtual 3D space or take part in private discussions through a secure channel if they need to. Thanks to this tech, working from home or another location will be easier than ever.
Autonomous Taxi Fleet
We envision a future, just a few years away, where a connected autonomous taxi service is instantly available to safely take you from point A to point B no matter where you are. Beats trying to get an Uber during rush hour!
Layered Videoconferencing
The traditional video telepresence solutions available today will be enhanced with MR and holographic technologies that will allow you and your team to be exponentially more productive, leaving less room for misunderstandings and more room for creativity.
AI Assistants
Next-generation AI applications can help you make the right decisions more quickly than ever. They can continuously listen for context and adapt to your needs as time goes by. Eventually, these tools will know what you need when you need it, without your explicit instructions. This tech will be very handy when you’re in a time crunch.
Holo-Rooms
Holo-rooms are gathering spaces that include the latest holographic tech such as light field displays, light field rooms and volumetric light field tables. They’re perfect for running “virtual” meetings where all the participants feel as if they’re in the same room even if they’re miles apart.
Alternative Interfacing
Future innovations will allow you to do things you’ve never been able to do before, such as moving virtual objects with the gaze of your eyes. This type of tech can revolutionize the workforce by creating exciting new opportunities and even entire new fields of work.
Affordable Light Field Units
As light field technology becomes more mainstream, it will become more affordable, allowing manufacturers to create a variety of products for use at home, work and in public places, such as museums and bus stops. Soon, it will become one of the most effective ways to convey information.
The Near Future. Diverse Thinkers Wanted: 10 Fun Facts
This week, at our Summer Conference, we released a short film titled The Near Future. Diverse Thinkers Wanted. The fourth installment in our Near Future series focusing on light field technology, mixed reality and AI, the film highlights how our broadband networks and increased connectivity keep everyone in the workplace seamlessly connected and more creative. Here are ten fun facts about our film:
- The autonomous cars in the film appear to have no steering wheel. This was achieved by using real cars with steering wheels and producing carefully mirrored shots: the set, costumes, props and stage direction were all mirrored, and the shot was then flipped in post-production, creating a realistic driver’s side with no steering wheel.
- The lead actress ran so much in the film that she had to use two sets of shoes to avoid blisters. In shots that showed her feet, she used her costume’s business shoes; in other shots, she used running shoes.
- The opening chase scene from the café to the cars took more than 20 takes to get everything shot properly from every angle. Both actors were exhausted but happy to add a chase scene to their acting experience.
- The café in the film does not exist. Every table, chair, cup, painting and every other prop was brought into an empty retail space that was built (art designed) as a café. Two days after it was built, the whole thing was taken down, leaving only an empty retail space again.
- The holographic video content in the autonomous car assumes that the windshield glass works with the dashboard element to generate the media. The producers initially thought that glass light field technology was “too sci-fi,” but approved it because glass displays already exist today.
- One day of shooting happened at a college, and parking had to be coordinated on narrow campus grounds. While one of the red “autonomous” cars was being parked, it hit a concrete corner of an outdoor seating area, which ripped through the metal of the car’s passenger side door. Nobody was hurt, and thankfully the car scene had already been shot.
- The set for the quadriplegic character was actually an office kitchen that was converted into a home space. Every item in the office kitchen was taken out, and every prop—including the tables and chairs in the background and the items on the walls of the set—was brought in and designed to look like a home. After shooting, it was all torn down and the office kitchen was put back together exactly as it was.
- Several quadriplegic actors auditioned for the part, but the actor who got it is not disabled. He said that being able to move only his eyes and face was one of the hardest acting challenges he’s ever had.
- The Holo-Room was designed and mostly constructed beforehand so that it could be moved piece by piece into an office space for quick assembly. It was moved in and built in one day, then torn down after filming.
- The film was shot entirely in San Diego, marking the first time a Near Future film had no scenes filmed in the Bay Area.
Just Released: A New “Near Future” Film Takes a Look at How Innovation Will Affect the Way You Work
This week, CableLabs released the fourth installment in its Near Future series. Titled The Near Future: Diverse Thinkers Wanted, this short film explores the aspect of life that takes up most of our time and energy: work. Have you ever wondered what a typical day at the office might be like in a decade? Will a 9-to-5 workday still exist, or will the technology of tomorrow redefine the concept of work as we know it? Let’s take a closer look.
The Future Vision
The film’s narrative is centered on Nikki, an ambitious go-getter who’s about to deliver an important presentation. But as often happens in the world of business, things don’t go exactly as planned and Nikki is faced with a number of seemingly insurmountable challenges.
Fortunately, she has all the tools she needs to not only solve every problem but to do so without ever slowing down. On-call mixed reality apps and helpful light field displays provide the information she needs. An autonomous taxi is always there to take her anywhere she wants to go. Layered videoconferencing solutions and holographic telepresence technology help her maintain continuous contact with her team. And an ever-present AI assistant takes care of everything else, from confirming appointment details to booking a holo-room, in seconds.
Thanks to all this advanced tech at her fingertips, Nikki has the opportunity to be her best, most creative and efficient self, and to make smart, calculated decisions without ever losing focus. Not everyone’s workday will resemble Nikki’s, but this kind of technological advancement is certain to have a profound effect on the way we approach our daily tasks, conduct meetings and solve problems in the near future, no matter what line of work we’re in.
Technologies That Will Help Us Get There
The technology shown in the film will shape the way we think about work in the future. Powered by a multi-gigabit super network of tomorrow, it will create a more efficient, productive and creative work environment that will help us perform at our best. For example, technology can be used to:
- Manage our time better: Picture a world where you don’t waste half your morning resolving calendar conflicts or worrying about logistics. How much more would you be able to get done in a day? According to Accenture, technologies such as Nikki’s ear-piece AI assistant are projected to increase labor productivity by up to 40 percent, enabling you to make more efficient use of your time.
- Access the information we need, whenever we need it: A lot of workplace slowdowns occur because of missing or inadequate information. How much more productive do you think you’d be if all the information you ever needed was readily available to you? In the film, Nikki’s eyeglasses have built-in mixed-reality tech that overlays street addresses and other data on top of everything she sees, allowing her to make critical decisions on the go.
- Collaborate more efficiently, from anywhere: To accommodate a more talented and diverse workforce, businesses around the world are seeking advanced remote collaboration solutions that allow their teams to seamlessly interact as if they’re physically present at the same location. In the film, we explore a few ideas about how this might work, including layered videoconferencing technology that combines traditional video with mixed and virtual reality, public light field tables and holographic telepresence systems (holo-rooms), where Nikki’s entire team gathers to work on a common project.
- Enhance our skills and abilities: According to the World Economic Forum, 65 percent of children now entering elementary school will hold jobs that currently don’t exist. This is partially due to technologies like alternative interfacing, which gave Nikki’s coworker the ability to manipulate virtual objects with the movement of his eyes. This type of new and exciting technology will drive the need for more interesting and fulfilling jobs—and redefine the nature of work as we know it.
- Focus on creative solutions: According to McKinsey, 50 percent of current work activities are automatable, and the demand for skills like creativity, critical thinking, decision making and complex information processing is projected to grow 19 percent in the United States by 2030. Outsourcing some of the boring and mundane tasks—such as double-checking locations, hailing a cab or booking a room—to machines will free up more of our brainpower for a whole new level of creativity and imagination.
Although we’re not yet in Nikki’s world, we’re well on our way. The 10G platform will set the foundation for many of these technologies, enabling app developers and entrepreneurs to innovate without worrying about the speed, capacity and latency restrictions they had to deal with in the past. Take a look for yourself! You can view the film in its entirety below.
Innovation Journeys: 10G is new. We have been working on it for years.
You may have noticed that CableLabs is focused on innovation. One of our goals is to be recognized as the leading industry innovation lab in the world, but talking about our innovation can be a bit tricky. Our job is to deliver innovation for the worldwide cable industry, but we can’t really talk about what we are working on now. We need to keep that secret for our member companies (cable operators) until the technology is ready to launch.
Our CEO, Phil McKinney, has talked about how innovation is messy: where you start may not be where you end up. I want to tell you about the path that led to one of our most important innovations, and a key part of our 10G platform: low latency.
Our Low Latency Journey
We started on this journey over four years ago with a challenge question (the Focus step in the FIRE methodology): What applications will drive a need for 60 Mbps+ of sustained Internet bandwidth? That led to ideation sessions that unearthed the usual suspects: the Internet of Things (billions of sensors, but each with such low bandwidth that they still don’t add up to much), 4K streaming video (good try, but still only 15 Mbps or less) and “Big Data” (sorry, not really a candidate for consumer households). Those applications didn’t quite answer the question.
But the emergence of 360° immersive video looked promising. Experiencing some of the earliest 360° video at the beginning of 2014 (shot on six GoPros and manually stitched) on a low-resolution Oculus Development Kit VR headset got us thinking about where the technology might lead. Six 4K videos streamed to the headset met the challenge of over 60 Mbps, although compression gains would reduce the bandwidth and resolution increases would increase it.
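As a rough sanity check on that 60 Mbps figure, the arithmetic can be sketched as follows. The per-stream bitrate here is an illustrative assumption (a typical 4K streaming figure), not a number from the original experiments:

```python
# Back-of-the-envelope check: if each of the six camera views is streamed as
# its own 4K video, the aggregate bitrate is the per-stream bitrate times six.
def aggregate_bitrate_mbps(streams: int, per_stream_mbps: float) -> float:
    """Total bandwidth needed to deliver all views simultaneously."""
    return streams * per_stream_mbps

# Assuming roughly 15 Mbps per 4K stream:
total = aggregate_bitrate_mbps(streams=6, per_stream_mbps=15)
print(total)  # 90 -> comfortably above the 60 Mbps challenge threshold
```

Better compression would pull the number down and higher per-eye resolution would push it up, but the order of magnitude is clear.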
Rather than “geeking out” on the technical possibilities, we followed advice from Phil: “Talk to consumers!” In February of 2015, we did primary research, bringing 50 varied members of the public into CableLabs to try out “immersive video content.” Rather than just focusing on virtual reality (VR) headsets, we constructed some other ways of consuming the content, such as immersive multi-4K-TV displays, ultra-wide projectors, tablets and regular TVs. We needed to understand whether “regular humans” (not geeks) would like these technologies.
The consumer research was massively informative. We shared the insights with our member companies at the time and realized that this ecosystem was likely to take off. We stepped back and tried to work out other mass-market use cases for VR.
We pivoted. We started to look at the possibilities of transforming how people communicate, and the ability to have holographic telepresence using digital human technology to perform digital headset removal. We don’t really want to talk to another person and see that person with a headset on; we want to see other people eye to eye and have them see us eye to eye. To prove the point, later in 2015 and into early 2016 we developed eye and mouth tracking capabilities that we added to a wireless VR headset and developed a digital human avatar of one of our staff.
We linked the head, eye and mouth tracking to real-time control of the digital avatar, and in May of 2016 we demonstrated this to our board of directors.
We also found that realistic digital human avatars take LOTS of compute to render in real time, which required a tethered PC. Even as mobile processors get faster, PC graphics will stay ahead because of their larger power budget; phones get hot when you try to render realistic humans. To get to mass-market adoption, we need to go wireless and move the PC out of the home.
No Less Than a Revolution
VR needs incredibly low latency between head movement and the delivery of new pixels to your eyes, or you start to feel nauseated. To move the PC out of the home, we need to bring round-trip communication over the cable network down to a millisecond or less. But our DOCSIS® technology at the time could not deliver that.
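To see why a single millisecond matters, here is an illustrative motion-to-photon budget. The ~20 ms total is a commonly cited comfort threshold; the individual component timings are assumptions for the sketch, not measured DOCSIS figures:

```python
# Illustrative motion-to-photon budget for remotely rendered VR. Once sensing,
# rendering, encode/decode and display each take their share of a ~20 ms
# budget, only about a millisecond is left for the access network round trip.
BUDGET_MS = 20.0  # commonly cited comfort threshold for motion-to-photon delay

components_ms = {
    "head-tracking sensing": 2.0,
    "remote rendering": 8.0,
    "video encode": 4.0,
    "video decode": 3.0,
    "display scan-out": 2.0,
}

network_allowance = BUDGET_MS - sum(components_ms.values())
print(f"Round-trip allowance for the access network: {network_allowance} ms")
# -> 1.0 ms under these assumptions
```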
So, we pivoted again. Since 2016, CableLabs DOCSIS architects Greg White and Karthik Sundaresan have been focused on revolutionizing DOCSIS technology to support sub-1ms latency. Although VR is still struggling to gain widespread adoption, that low and reliable DOCSIS latency will be a boon to gamers in the short term and will enable split rendering of VR and augmented reality (AR) in the longer term. The specifications for Low Latency DOCSIS (as a software upgrade to existing DOCSIS 3.1 equipment) have been released, and we’re working with the equipment suppliers to get this out into the market and to realize the gains of a somewhat tortuous innovation journey.
Low latency is a key component of our 10G initiative. You can read more about the importance of latency here, and gain access both to a technical brief (members only) and to a detailed report (members only) on Wi-Fi latency in retail Wi-Fi routers.
An IDEA is Born: CableLabs Heads Up New Alliance That Will Bring Holodecks Into Your Living Room
CableLabs has joined forces with top players in cutting-edge media technology—Charter Communications, Light Field Lab, OTOY and Visby—to form the Immersive Digital Experiences Alliance (IDEA). Chaired by CableLabs’ Principal Architect and Futurist, Arianne Hinds, the alliance aims to facilitate the development of an end-to-end ecosystem for immersive media, including VR, AR, stereoscopic 3D and the much-talked-about light field holodeck, by creating a suite of display-agnostic, royalty-free specifications. Although the work is already well underway, the official IDEA launch event was on April 8 at the 2019 NAB Show. Learn more about it here.
IDEA’s Challenges: What problems do we want to solve?
Advancements in immersive media offer endless opportunities not only in gaming and entertainment but also in telemedicine, education, business and personal communication and many other areas that we haven’t even begun to explore. It’s an exciting technological frontier that always gets a lot of buzz at tech expos and industry conferences. The question now is not if but when it will become reality, and what the steps are to getting there.
Despite numerous innovation leaps in VR and AR in recent years, the immersive media industry as a whole is still in its very early stages. Light field technology, the richest and most dense form of immersive media that allows the user to view and interact with a three-dimensional object in volumetric space, is particularly limited by the shortcomings of the existing video interchange standards.
- Problem #1: Too much data
A photorealistic, volumetric video requires substantially more data than the traditional 2D media we’re used to today. In order to deliver a truly seamless and lifelike immersive experience, we need to take a different approach for an interoperable media format and network delivery.
- Problem #2: Inadequate Network Ecosystem
There’s currently no common media format for storage, distribution and display of immersive images. We’ll need to build a media-aware network that’s fully optimized for the new generation of immersive entertainment.
IDEA’s Goals: How will we address these problems?
IDEA is already working on the first version of the Immersive Technologies Media Format (ITMF), a display-agnostic set of specifications for representation of immersive media. ITMF is based on OTOY’s well-established ORBX Scene Graph format currently used in 3D animation.
The initial draft of ITMF, scheduled for release by the end of 2019, will meet the following criteria:
- It will be royalty-free and open source
- It will be built on established technologies already embraced by content creators
- It will be unconstrained by legacy raster-based 2D approaches
- It will allow for continued improvements and advancements
- It will address real-life requirements based on input from content creators, technology manufacturers and network operators.
In addition to the development of the ITMF standard, IDEA will also:
- Gather marketplace and technical requirements to define and support new specifications
- Facilitate interoperability testing and demonstration of immersive technologies in order to gain industry feedback
- Produce immersive media educational events and materials
- Provide a forum for the exchange of information and news relevant to the immersive media ecosystem, open to international participation of all interested parties
IDEA’s New Chairperson: A Woman With a 3D Vision
IDEA’s newly elected chairperson, Dr. Arianne Hinds, joined CableLabs in 2012 as a Principal Architect of Video & Standards Strategy. A VR futurist, innovator and inventor, she has over 25 years of experience in image and video compression, including MPEG and JPEG. Dr. Hinds has won numerous industry awards, including the prestigious 2017 WICT Rocky Mountain Woman in Technology Award. She is the Chair of the U.S. delegation to MPEG and is currently serving as the Chairperson of the L3.1 Committee for United States MPEG Development Activity for the International Committee for Information Technology Standards. Her new responsibilities at IDEA are a natural extension of her life’s work, perfectly aligned with IDEA’s mission to bring the beautiful world of immersive media technology into the mainstream.
The 10G platform positions cable operators as the first commercial network service providers to support truly immersive services beyond the limits of legacy 2D video. With its ability to deliver up to 10Gbps while at the same time supporting low latency for interactive applications, 10G will be crucial to delivering the immersive media at bitrates (e.g. 1.5 Gbps for light field panels) that allow the corresponding displays to operate at their fullest potential.
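The headroom arithmetic behind that claim can be sketched quickly. The 1.5 Gbps light field figure comes from the text above; the reserve for other household traffic is an assumption for illustration:

```python
# Rough capacity check: how many 1.5 Gbps light field streams fit in a
# 10 Gbps connection, leaving some headroom for ordinary household traffic?
link_gbps = 10.0
stream_gbps = 1.5       # example light field panel bitrate cited in the post
headroom_gbps = 1.0     # assumed reserve for other traffic

streams = int((link_gbps - headroom_gbps) // stream_gbps)
print(streams)  # 6 simultaneous light field streams under these assumptions
```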
Become an IDEA member
No one company can build the future in isolation. IDEA welcomes anyone who shares its vision: technologists, creative visionaries, equipment manufacturers and network distribution operators. If you’re interested in learning more about becoming a member, please visit the website at www.immersivealliance.org.
You can learn more about the CableLabs future vision by clicking below.
2019 Tech Innovation Predictions
Now that 2019 is here, it’s time to share my tech innovation predictions for the year. Watch the video below to find out what you can expect to see in 2019.
What are your innovation predictions for 2019? Tell us in the comment section below. Best wishes for a great new year!
Subscribe to our blog to see how CableLabs enables innovation.
1+1=100: CableLabs’ University Research Relationships and Their Role Within the Innovation Ecosystem
One of CableLabs’ most important objectives is the continuous pursuit of new ideas that can lead to game-changing innovations for the cable industry. CableLabs university research relationships give us access to great minds around the world that can bring innovative ideas to the cable industry and supercharge our own efforts at CableLabs. It’s also an opportunity to build long-term, mutually beneficial working relationships with some of the best research labs in the country.
When you think of building the future, a university lab, traditionally considered a goldmine for radical thinking and innovative research, is a natural place to start. Some of the best academic institutions in the U.S., such as Georgia Tech, Carnegie Mellon, Princeton and our neighbor, Colorado State, share our vision of a highly connected near future and are doing amazing research in networking, 5G, cybersecurity and other areas of interest to the cable industry. Our partnerships with these institutions have already proven to be a worthwhile investment, producing innovative solutions that are helping drive our progress in IoT security and mobile networking.
CableLabs + Universities: Building the Future Together
Current innovation projections for the near future, including the proliferation of IoT devices, VR/AR applications, artificial intelligence and seamless mobile communication, all require a powerful broadband network. Together with our university partners, we’re developing ideas that’ll bring us closer to the multi-Gigabit network reality of the future. Let’s take a look at some examples of how we work together to make it happen.
- Future Mobile Infrastructure
In just 20 years we’ve migrated from basic flip phones to powerful multi-use smartphones that are essentially our pocket-sized lifeline to everyone and everything we need. Not only do we have better hardware, but our mobile networks have also been enhanced to keep up with the exponentially growing user demand. But what will our hyperconnected future look like years from now? How will our mobile networks deal with massive amounts of data? Does our current mobile infrastructure require radical changes? Our partners at Carnegie Mellon University’s Electrical & Computer Engineering Department are working on answering these questions by taking a fresh look at Mobile Core Network Architecture and the implications of building and operating future mega-powerful mobile networks.
- The Future of IoT & Network Security
Our users’ desire for increased connectivity and productivity has already led to the proliferation of various IoT sensors and devices in our homes, cars, offices and everywhere in between. In response, companies are rushing to meet user demand by selling products without adequate cybersecurity measures. Since smart technology is only going to become more prevalent in the near future, this hacker’s dream is becoming an industry-wide problem that needs urgent attention. We’ve been working with the Center for Information Technology Policy at Princeton to understand IoT device behavior and potential issues. We’ve also been working closely with the faculty and graduate students at Colorado State University to develop new ways of identifying problems and protecting against security threats. This work will help inform CableLabs’ larger effort to drive better IoT security standards across the industry. In addition to addressing IoT issues, Colorado State is also exploring ways of using real-time network data to identify unusual traffic patterns and applying multiple strategies to mitigate the rapidly evolving denial of service attacks.
- 5G and Fiber-Wireless Integration
4G wireless networks are fast but not nearly fast enough for the low-latency technologies of tomorrow. The 5G rollout in the next few years will introduce multi-Gbps mobile broadband speeds and, along with them, a new era in connectivity. 5G can support cutting-edge technologies like VR, AI and IoT devices in large quantities, opening the door to a plethora of exciting new inventions, such as self-driving AI-powered cars. Together with our research partners at Georgia Tech, we’re exploring the possibilities of the 5G network and are looking into expanding the bandwidth capacity of cable’s optical technologies to meet the demand of 5G devices.
Moving forward, we will continue seeking out extraordinary thinkers within the academic community and supporting the development of new ideas and talent—the two main ingredients for a brighter future.
CableLabs Announces Major Update to the Open Source LoRa Server
Last week, in my blog post “CableLabs Open Source LPWAN Server Brings Diverse LPWAN Technologies Together,” we announced our LPWAN Server. This project is open source and:
- Provides new capabilities to bring IoT LPWAN wireless technologies together
- Is a flexible tool to enable the use of multiple servers across multiple vendors
The LPWAN Server was designed to work with the CableLabs sponsored open source LoRa Server and, together, provide a comprehensive solution to enable many LPWAN use cases. It has been nearly 18 months since we released the first major revision of the LoRa Server and, during this time, many improvements have been made.
In this blog, I’ll discuss why we invested in the LoRa Server, how the project continues to improve and how it aligns with the latest specifications released from the LoRa Alliance. If you need a refresher on the LoRa Server, please see my blog post “CableLabs Announces an Open Source LoRaWAN Network Solution.”
Why Did CableLabs Invest in the LoRa Server?
The LoRa Server project was conceived and started by Orne Brocaar. His goal was to develop a fully open source LoRa Server that could be used by anyone looking for the opportunity to gain an introduction into LoRaWAN and LPWANs. Due to limited time and resources, the project remained minimal in functionality and progression for nearly a year.
CableLabs had a goal to find a fully community-based open-source LoRaWAN server to provide the cable industry with the ability to easily prototype, test and trial LPWAN services using unlicensed RF spectrum. We discovered the LoRa Server and began investing heavily into developing the functionality to align with our goal. Shortly after this, Orne joined the CableLabs team to lead the development of the LoRa Server into the exceptional tool it has become.
Our design strategy began and continues to focus on these key areas:
- Full functional compliance with LoRa Alliance specifications
- Extensive debug and logging tools
- Protocol transparency to the operator of the server
- Scalability for testing, trials or deployments of any size
While our goal is to provide a tool for testing, trials and related use, the server is fully open source under the MIT license. This allows it to be used freely for any purpose, from testing to production. We want to enable growth and creativity in the LPWAN ecosystem using the LoRaWAN protocol.
Introducing a New Version of the LoRa Server
In the summer of 2018, we released LoRa Server v2, and we have released several additional updates since then to introduce new features and improvements while maintaining backward compatibility with LoRaWAN 1.0. Where v1 (released in June 2017) focused on delivering the first stable release after many test versions, v2 focuses on an improved API, an improved user interface (UI), compliance with LoRaWAN 1.1 and other interesting new features.
The major feature of LoRa Server v2 is support for LoRaWAN 1.1, which enhances the security of LoRaWAN devices in several ways. Not only does LoRaWAN 1.1 add better protection against replay attacks, it also adds better separation between the encryption of the MAC layer and the application payloads, which will facilitate the implementation of roaming in the future. It is important to mention that LoRa Server v2 still supports LoRaWAN 1.0 devices.
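The replay protection mentioned above rests on monotonically increasing frame counters. Here is a deliberately simplified sketch of that principle; it is not the actual LoRa Server implementation, and the device identifier is a made-up example (LoRaWAN 1.1 additionally splits counters and keys between the MAC and application layers):

```python
# Simplified illustration of LoRaWAN-style replay protection: the network
# server accepts an uplink only if its frame counter (FCnt) is strictly
# greater than the last one seen for that device, so a captured-and-resent
# frame is rejected.
last_fcnt = {}  # devEUI -> highest frame counter accepted so far

def accept_uplink(dev_eui: str, fcnt: int) -> bool:
    """Return True and record the counter if the frame is not a replay."""
    if fcnt <= last_fcnt.get(dev_eui, -1):
        return False  # replayed or out-of-order frame: reject
    last_fcnt[dev_eui] = fcnt
    return True

assert accept_uplink("70b3d5000000abcd", 1)       # first uplink accepted
assert accept_uplink("70b3d5000000abcd", 2)       # counter advanced: accepted
assert not accept_uplink("70b3d5000000abcd", 2)   # replayed frame rejected
```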
Another major feature of LoRa Server v2 is the completely redesigned and rewritten web interface. The new interface is more responsive thanks to smarter caching, and it is more user-friendly and easier to navigate.
As many users are integrating LoRa Server into their own platforms using the LoRa Server APIs, we want to make sure these APIs are easy to use and are consistent. LoRa Server v2 removes many inconsistencies present in the v1 API and makes it possible to reuse objects so that code duplication is avoided.
Multicast, a long-requested feature, has been available since LoRa Server v2.1.0. It makes it possible to assign devices to a multicast group, so a group of devices can be controlled without the need to address each device individually, reducing the required airtime. One of its use cases is Firmware Updates Over The Air (FUOTA), which the LoRa Alliance recently released. In an upcoming version, we plan to integrate this further into the LoRa App Server component of the LoRa Server.
Since LoRa Server v2.2.0, the server provides geolocation support. By default, it integrates with the Collos platform provided by Semtech, but by using the provided geolocation API, other platforms can be used. Please note this requires a v2 LoRa Gateway with geolocation capabilities, as a high precision timestamp is required for proper geolocation.
Google Cloud Platform integration
A common request we have received is how to scale LoRa Server. Since LoRa Server v2.3.0, it is possible to make use of the Google Cloud Platform infrastructure to improve scalability and availability. LoRa gateways can directly connect to the Cloud IoT Core MQTT bridge (using the LoRa Gateway Bridge), and the LoRa Server and LoRa App Server integrate with Google Cloud Pub/Sub.
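Whether uplinks arrive via the MQTT bridge or Google Cloud Pub/Sub, an application ultimately receives a JSON event with a base64-encoded payload. The sketch below shows the general shape of handling one; the field names ("devEUI", "fCnt", "data") follow the typical LoRa Server uplink event format but should be verified against the documentation of the server version you deploy, and the device identifier is a made-up example:

```python
import base64
import json

# Sketch of handling an uplink event delivered by the broker: parse the JSON
# envelope and decode the base64 application payload into raw bytes.
def handle_uplink(message: bytes) -> dict:
    event = json.loads(message)
    payload = base64.b64decode(event["data"])  # application payload bytes
    return {
        "device": event["devEUI"],
        "frame_counter": event["fCnt"],
        "payload": payload,
    }

# Example message as it might arrive from the broker:
raw = json.dumps({"devEUI": "70b3d5000000abcd", "fCnt": 7,
                  "data": base64.b64encode(b"\x01\x42").decode()}).encode()
print(handle_uplink(raw)["payload"])  # b'\x01B'
```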
Open Source Community
The open source community is encouraged to take advantage of our efforts and to extend functional support to even more gateways, solutions and use cases. There are many LoRaWAN gateways and applications, and we would like the development community to help us integrate them.
To find out more information about the LoRa Server and become involved in this project, go to the LoRa Server site.
Subscribe to our blog for updates on the open source LoRa Server.
CableLabs Open Source LPWAN Server Brings Diverse LPWAN Technologies Together
CableLabs is excited to announce a new open source project called LPWAN Server. The LPWAN Server provides new capabilities to bring IoT LPWAN wireless technologies together.
Before we go into more detail on the LPWAN Server, let us first get some background on this space. In my previous blog post, I discussed the Internet of Things (IoT) as a growing industry comprised of a massive number of devices that connect to each other to benefit our lives. For example, a soil moisture sensor can help a farmer determine when to water their crops rather than potentially wasting water through a legacy timer-based approach. In that blog post, CableLabs announced the release of an open source LoRaWAN solution, LoRa Server.
What Are LoRa Server and LPWANs?
LoRa Server is a community-sourced open source LoRaWAN network server for setting up and managing LoRaWAN networks. Low-power wide-area networks (LPWANs) connect sensors designed to last for years on a single battery, transmitting information periodically over long distances.
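As a back-of-the-envelope illustration of the "years on a single battery" claim, the sketch below estimates battery life from a sleep current and a daily transmit budget. All of the numbers are illustrative, and real devices also lose capacity to self-discharge and temperature effects.

```python
def battery_life_years(battery_mah, sleep_ua, tx_ma, tx_seconds_per_day):
    """Idealized battery life: average daily drain from sleep plus transmit."""
    sleep_mah_per_day = sleep_ua / 1000 * 24            # sleep current, all day
    tx_mah_per_day = tx_ma * tx_seconds_per_day / 3600  # short transmit bursts
    return battery_mah / (sleep_mah_per_day + tx_mah_per_day) / 365

# Two AA cells (~2400 mAh), 10 uA sleep, 40 mA while transmitting,
# 24 uplinks per day at ~0.2 s each (about 5 s of airtime daily):
years = battery_life_years(2400, sleep_ua=10, tx_ma=40, tx_seconds_per_day=5)
```

Even with generous rounding, the sleep current dominates, which is why LPWAN radios are designed to spend almost all of their life asleep.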
There are many potential use cases for this type of network.
LPWANs are designed to cover large geographical areas and minimize the amount of power required for sensors to interact with the network. There are many solutions available to enable these use cases, including:
- LoRaWAN™: LoRaWAN is a partially open unlicensed spectrum solution developed through the specification efforts of the LoRa Alliance. While the specifications are developed within the Alliance, they are made available to the general public upon completion.
- Mobile solutions from 3GPP: 3GPP defined Cat-M1 and NB-IoT for varying connectivity requirements. These are also open standards, but they require licensed spectrum.
- Weightless: Weightless is an open specification effort but has struggled to gain traction in the LPWAN space. It should be noted that there are many other proprietary LPWAN technologies with active deployments in this ecosystem.
Why No One Solution Will Own the Technology
We believe no single LPWAN technology will fully own the IoT space, for several reasons. Some sensors are intended for real-time applications with consistent and verified uploads, while others simply wake up periodically and transmit small data payloads. Without going into more specific examples, we believe some LPWAN applications are better suited to licensed-spectrum mobile networks, while others are better served by unlicensed solutions such as LoRaWAN™. LoRaWAN services can be further explored through some of our member offerings via MachineQ™ and Cox℠.
Our New Open Source Solution
With these considerations in mind, we developed a new open source solution to enable easily moving data from devices and applications across varying network types and related solutions. The LPWAN Server was designed to enable multiple use cases:
- First, it can be used to simply migrate or operate between two LoRaWAN™ network servers, such as the LoRa Server and The Things Network.
- Second, and more importantly, the long-term design intention is to enable the routing of multiple LPWAN technologies, such as LoRaWAN™ and SigFox or LoRaWAN™ and Narrow Band IoT (NB-IoT). In order to integrate IP-based devices, the server will include a “relay server” of sorts. This allows for the IP traffic to mix with LoRaWAN™ traffic for a single upstream interface to an application or data collector, such as Google Cloud and Microsoft Azure.
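A minimal sketch of the relay idea: wrap uplinks from either network in one common envelope so the application sees a single interface. The field names and message shapes here are assumptions for illustration, not the LPWAN Server's actual schema.

```python
import base64

def envelope(network, device_id, payload: bytes):
    """Wrap an uplink from any network into one common upstream message."""
    return {
        "network": network,                      # e.g. "lorawan" or "nb-iot"
        "device": device_id,
        "data": base64.b64encode(payload).decode(),
    }

def from_lorawan(frame):
    # LoRaWAN uplinks arrive via the network server, keyed by DevEUI.
    return envelope("lorawan", frame["devEUI"], frame["data"])

def from_ip(packet):
    # NB-IoT devices behind the mobile EPC send plain IP datagrams.
    return envelope("nb-iot", packet["imei"], packet["body"])

# Both uplinks reach the application through the same interface:
msgs = [
    from_lorawan({"devEUI": "0102030405060708", "data": b"\x01\x17"}),
    from_ip({"imei": "490154203237518", "body": b"\x01\x17"}),
]
```

The application or data collector only ever parses one message shape, regardless of which network carried the uplink.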
Our goal with this project is to see developers add more back-end integration with network servers and technologies to enable this routing of traffic across many LPWAN technologies.
LPWAN Server Use Cases
The LPWAN Server was designed to support the following use cases:
1. Multi-vendor LoRaWAN™ environment: Using the LPWAN Server in a multi-vendor LoRaWAN™ environment allows a network provider to:
- Test multiple servers from multiple vendors in a lab
- Trial with multiple network servers from multiple vendors
- Run multiple vendor solutions in production
2. NB-IoT & LoRaWAN™ device deployment: The LPWAN Server will allow you to operate a single application for devices deployed on both LoRaWAN™ and NB-IoT networks. The LPWAN Server will enable an IP relay-server for connecting with NB-IoT (and Cat-M1) devices commonly behind a 3GPP mobile network Evolved Packet Core (EPC). It also allows for managing devices on the LoRaWAN™ network. The devices are managed under a single application within the LPWAN Server. This allows an application to receive data over a single northbound Application Programming Interface (API) rather than maintaining API connections and data flows to multiple networks.
3. Simplify device provisioning across multiple LPWAN network types and solutions: The LPWAN Server simplifies provisioning to one or more LPWAN networks. A major challenge for a back-office solution is to integrate provisioning into a new network server. This is further complicated with multiple new network servers and types. In order to simplify this, the LPWAN Server manages the APIs to the networks, and the back-office solution only needs to integrate with a single API to the LPWAN Server. The following figure illustrates this.
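The single-API idea can be sketched as a thin facade that fans one provisioning call out to every attached network. The class and method names below are hypothetical; in the real LPWAN Server, actual network server APIs would sit where the fake clients are.

```python
class FakeNetworkClient:
    """Stand-in for one network server's provisioning API."""
    def __init__(self, name):
        self.name, self.devices = name, []

    def create_device(self, dev_eui, app_key):
        self.devices.append(dev_eui)   # a real client would call a remote API here
        return {"network": self.name, "devEUI": dev_eui}

class ProvisioningFacade:
    """One provisioning call fans out to every attached network."""
    def __init__(self, networks):
        self.networks = networks

    def provision(self, dev_eui, app_key):
        return [n.create_device(dev_eui, app_key) for n in self.networks]

facade = ProvisioningFacade([FakeNetworkClient("loraserver"),
                             FakeNetworkClient("ttn")])
results = facade.provision("0004a30b001c0530", app_key="00" * 16)
```

The back-office solution integrates once, with the facade; adding a new network type means adding one client behind it, not another integration in the back office.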
4. Create consistent data order and formats from LPWAN devices: The final use case explains how the LPWAN Server can normalize data from varying devices on one or more networks. Unfortunately, even in a single network environment, such as LoRaWAN™, there is no standard for data formats from multiple “like” sensors. For example, a weather sensor from two different vendors could send the same type of data but reverse the order. An application will need to interpret the data format from multiple sensors. In order to simplify this, the LPWAN Server can be used to reformat the data payload into a common format for sending up to the application server. In this way, the application server will not need to interpret the data.
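Continuing the weather-sensor example, the sketch below registers one decoder per vendor and maps both framings onto a common schema. The vendor payload layouts are invented for illustration; real sensors each document their own encoding.

```python
import struct

# Hypothetical framings: vendor A sends temperature (int16, tenths of a
# degree C) then humidity (uint8, percent); vendor B reverses the order.
def decode_vendor_a(payload):
    temp, hum = struct.unpack(">hB", payload)
    return {"temperature_c": temp / 10, "humidity_pct": hum}

def decode_vendor_b(payload):
    hum, temp = struct.unpack(">Bh", payload)
    return {"temperature_c": temp / 10, "humidity_pct": hum}

DECODERS = {"vendor-a": decode_vendor_a, "vendor-b": decode_vendor_b}

def normalize(vendor, payload):
    """Map any vendor's framing onto one common schema for the application."""
    return DECODERS[vendor](payload)
```

With the decoders living in the LPWAN Server, the application server receives identical records from both vendors and never has to interpret raw payloads.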
CableLabs & the Development Community Together
The LPWAN Server is intended to be a community open source project. The initial release from CableLabs provides support for a multi-vendor LoRaWAN™ use case. The back-end has been designed for future support of all of the use cases, and the UI is flexible enough to support them as well. We are currently using the server for data normalization, too; however, this is done via a back-end process.
The open source community is encouraged to take advantage of the initial CableLabs development and extend it into a useful application for even more servers, solutions and use cases. There are many network types and related servers, and we would like the development community to help us integrate them.
To find out more information about the LPWAN Server and become involved in this project, go to https://lpwanserver.com.
The LPWAN Server was designed to work with the CableLabs sponsored open source LoRa Server. In an upcoming blog, I will discuss how that project continues to evolve and align with the latest specification releases from the LoRa Alliance. The LPWAN Server and LoRa Server provide a comprehensive solution to enable many LPWAN use cases.
Innovation From All Corners: The Role of Vendors in the Innovation Ecosystem
So far we’ve covered the cycle of innovation and commercialization in Transforming Ideas into Solutions and how CableLabs helps turn innovative ideas into reality through startup collaboration and creative licensing agreements. This next part of the Innovation Ecosystem Series focuses on vendors and their role in moving our industry forward.
CableLabs has a long history of vendor community collaboration, teaming up to bring new, innovative ideas to reality. Vendors’ research, unique vantage point and expertise help shape our innovation roadmaps, inform our members’ business decisions and bring new cutting-edge technology products to market faster and more cost-effectively.
We could cite many examples of successful vendor collaboration, but let’s look at one that has had a widespread and significant market impact: DOCSIS® technology. Its role in the evolving cable modem sector is a great example of how vendor contributions drive the innovation cycle in the industry.
DOCSIS Technology: A Vendor Collaboration Success Story
DOCSIS technology, well established and universally adopted today, was revolutionary at the outset, establishing a telecommunications standard that permitted the addition of high-bandwidth data transfer to an existing cable TV (CATV) system. This meant users could finally experience true interoperability, regardless of the cable vendor they chose.
Prior to the creation of the original DOCSIS specifications, many vendors brought proprietary cable modems to market. Each of these proprietary solutions had its own unique, innovative approach to offering broadband internet over a cable network. Some offered solutions for carrying downstream traffic but were weaker at carrying upstream traffic. Some had better Media Access Control (MAC) layer protocols but did not offer great RF performance. Few addressed the issue of security and privacy. None of them individually offered the best solution, and none captured sufficient market share to make the economics work for truly mass deployment. In short, neither suppliers nor cable customers were getting an optimal outcome.
Together with our member companies and industry suppliers, we embarked on an exhaustive process to select the “best of breed” innovations proposed by the vendors. This process involved testing a multitude of different solutions to determine which ones would eventually make it into DOCSIS technology. With each successive generation of DOCSIS technology, innovative contributions to the specifications have enabled greater speed, lower latency and higher reliability. Now, thanks to our continued collaboration with our vendors, DOCSIS 3.1 technology supports phenomenal speeds of up to 10 gigabits per second downstream and 1 gigabit per second upstream—an amazing achievement that allows our cable operators to stay very competitive in today’s markets.
The Industry Impact
DOCSIS technology catapulted the cable industry into a new service arena, providing broadband internet access to many homes across the country and beyond. Bruce Leichtman, president and principal analyst for Leichtman Research Group, relates the results: "At the end of 2Q 2017, cable had a 64% market share. The broadband market share for cable is now at the highest level it has been since the first quarter of 2004."
Thanks to DOCSIS technology, consumers can now enjoy ultra-high definition 4K television, augmented reality (AR), advanced gaming options, IoT and many other benefits of internet connectivity. This, coupled with the soon-to-be-delivered 5G, makes this a very exciting time for broadband innovation.
How We Engage With Our Vendors
Over the last 30 years, we’ve worked hard to develop close working relationships with vendors through various programs and collaboration opportunities. Here are just some of the ways our vendors can engage with us and our member community.
- Working Groups: As Working Group members, vendors get the opportunity to participate in the creation of new specifications and collaborate with other industry professionals.
- Visiting and Contributing Engineer Programs: Vendors can work on-site in our labs as Visiting Engineers or remotely—as Contributing Engineers. They get access to our tools and workspace in exchange for their expertise.
- Draft Specification Reviews: Vendors can get access to draft specifications not yet available to the public. This gives them an opportunity to comment and submit change requests 30 to 60 days prior to issuance.
- Co-innovation Opportunities: Just like with DOCSIS technology, we invite all members and vendors to join us in tackling a specific problem facing the industry and are open to many co-innovation opportunities.
- Events & Showcases: Vendors have multiple opportunities to show off their inventions and connect with cable operators throughout the year. Some of our free events, like the Envision Forum (formerly Connect[ED] Forum), are geared specifically to vendors.
- Kyrio Product Certifications: In our industry, interoperability is key. Through Kyrio, vendors can test their products to make sure they meet all the industry specifications.
Vendors’ work is crucial to our industry on both the innovation and commercialization side of the spectrum. These companies are some of the most prolific sources of great ideas and it’s part of CableLabs’ mission to make sure they are heard by the right people at the right time in the innovation cycle. By working together with both our members and vendors, we can continue to reach our collective goals as an industry and discover new possibilities.
Learn more about vendor collaboration by clicking below.