Home Network

  Tips to Improve Your Home Wi-Fi Performance

Mar 31, 2020

In response to the Covid-19 outbreak, most people around the world are being asked to stay home. At the same time, they’re going about their daily lives as best they can under the circumstances. This means people will be doing more online, such as working from home, completing classroom assignments, calling friends and watching videos.

With more users in the home simultaneously active online, you may be experiencing Wi-Fi overload. Joanna Stern over at the Wall Street Journal posted a helpful article about fixing Wi-Fi pain points here. We have a few more suggestions and tips to better share your home network.

The first step is to ensure that your router is centrally located in your home. Router antennas send Wi-Fi signals out in all directions, so if yours is tucked away in a corner of your home, much of your wireless coverage may be radiating outside. A central location will optimize your signal. Ideally, the router should be off the floor and placed as high as possible. It’s also important to keep the router away from obstructions, such as TVs and other large electronics. The more obstacles near your router, the more interference with the signal.

Don’t forget to stay close to your Wi-Fi router when you’re using data-intensive applications. The closer your device is to the router, the better its connection will be. If several people in your household are online at the same time watching videos and such, consider using standard definition (versus high definition) to help things run smoothly. Go into the video settings on your streaming platforms and lower the resolution as needed. Do you really need full HD 1080p for the kids to watch Baby Shark? Look for “data saver” or “good” under the settings of your favorite services. Also consider lowering the video resolution on your webcam to 720p to ensure smooth video while reducing the risk of competing for Wi-Fi bandwidth with the kids binge-watching Star Wars movies.

Services such as Zoom, Google Hangouts and Webex allow you to change your video resolution under their settings tabs. Others may require a change by your systems administrator. It can also help to turn off video security cameras and nanny cams while you’re home or when they’re not in use.

If you’re on a deadline for work or school and need to speed things up, try minimizing the use of concurrent devices and applications (e.g., gaming or streaming video) to reduce contention. Backing up videos, pictures and files can take a while, depending on their size and the other applications in use, so schedule large file backups for overnight hours.

Don’t forget that having multiple tabs open in your browser can slow things down, especially if you’re streaming music or have a video running on another website in the background. This can decrease your browsing speed, so we recommend closing tabs when they’re not in use. We’re all used to working wirelessly these days, but a wired Ethernet connection will speed up data transfer from your computer to the Internet. So, go to the garage or closet where you store your miscellaneous electronic accessories and dig up your CAT 6 cables (standard twisted-pair cable) to use a wired connection instead of Wi-Fi for your PCs, gaming consoles and TVs where available.

By taking some of the steps outlined above, we can all help improve our Internet experience within the home network. We will get through this and whatever comes next!


Innovation

  Behind the Tech: Connected Healthcare and The Near Future. A Better Place

Aug 31, 2017

Our short film The Near Future. A Better Place explores how emerging technologies in healthcare will transform our daily lives and allow us to age gracefully in place. We envision a future where embedded IoT devices are able to monitor us anywhere, keeping us safe and healthy longer. These technologies, once thought of as science fiction, rely on the high-speed, secure, reliable wireless connectivity and networking protocols enabled by the cable industry. Below, take an inside look at the tech featured in our film that will change the way we connect and interact with the world around us.

Smart Medicine

Battery-free ingestible pills containing microchips optimize drug levels and transmit a signal to doctors, who can monitor and display the effects.

Brain Scans

Neurodegenerative diseases like Alzheimer’s, as well as strokes, are easily detected through high-resolution MRIs able to probe the microstructure of the brain.

Sensory Body Reading

Wireless, water-resistant sensors monitor vital signs and bio-activity on a continuous basis. Health issues are detected early by monitoring daily trends.

Remote Diagnostics

Say goodbye to those monthly doctor’s visits and weekly blood tests! Mobile monitoring allows doctors to observe patients in real time via a secure connection, so patients can enjoy a complete healthcare appointment from the comfort of their own homes.

Networked Health Care and Smart Cities

Sensors in the home and city infrastructure monitor weather, temperature, pollution and pollen, allowing seniors to make the best decisions for their daily activities.

AI Agents

Cookie, the AI agent in the film, can take the place of an in-home nurse or a doctor’s visit by providing an in-home companion capable of social interaction and health monitoring that is fully knowledgeable of treatments and drug regimens.

Nano Surgery

Nanobots powered by electromagnetic impulses are injected directly into the bloodstream and can treat disease more efficiently and accurately, making certain illnesses a thing of the past.


You can find more information about connected healthcare and The Near Future. A Better Place here. Subscribe to our blog to find out how CableLabs is enabling the tech of the future.

Innovation

  10 Fun Facts about The Near Future. A Better Place.

Aug 10, 2017

“Innovate with Purpose” – CableLabs President and CEO Phil McKinney

This week at our Summer Conference we released a short film titled The Near Future. A Better Place. The second in our Near Future series focusing on virtual reality and AI, the film highlights how our broadband networks and increased connectivity in the home play a crucial role in the future of healthcare and telemedicine.

10 Fun facts about The Near Future. A Better Place.

Exclusive from the team that created The Near Future. A Better Place, here are 10 fun facts about our film that will both inspire you and blow your mind:

  1. The star of the film, Rance Howard, has been acting for over 70 years and has appeared in more than 250 films and TV shows.
  2. Rance had never cast a fishing lure before this film.
  3. To create Cookie the robot, an art director designed and 3D printed the head and body. A robotics expert programmed the wheels and head motors, two operators remote-controlled Cookie’s performance on set, two animators designed and created the performance of the eyes, a voice actor read for the voice, and a sound engineer synthesized the voice.
  4. The Olli bus is a real autonomous vehicle created by Local Motors and is mostly 3D printed. Multiple Ollis were not available, so a special-effects technique was used to create the shot with three Ollis.
  5. All photos with Rance are his personal photos with an actress standing in for his wife, except for one where his real wife appears.
  6. A medical technology expert from the Mayo Clinic was consulted to ensure the nanobot technology was realistic and not too sci-fi.
  7. The hospital was created in the office space of an architecture company in downtown Oakland, CA.
  8. The producer’s dog was hired to work with Cookie on set and she gave a stellar performance.
  9. To create the video wall in the living room, a separate wall was built, painted, and dressed over an existing wall in the house, then removed after shooting.
  10. The Super 8 shot of Rance remembering his wife was created by bringing in hundreds of flowers, which were placed in the existing foliage and removed after shooting.

Now, grab a coffee and watch out for these in our video below:

--

You can learn more about the integral role the cable industry is playing in the innovations of the future here.

 

Culture

  CableLabs Loves Pedal Power: Our Bike to Work Day Story

Jun 14, 2017

It's that time of year again! While most of the country celebrates Bike to Work Day in May, Colorado waits for the snow to melt, designating June as Bike to Work Month and June 28, 2017, as this year's #BikeToWorkDay. Whether you bike for health, to get to work, to preserve the environment, or simply for fun, Bike to Work Day is a unique opportunity to celebrate the power of two pedals.

CableLabs has made cycling a priority at our workplace and is excited for our employees to take part in this year's festivities. Watch the video below to find out more about CableLabs' Bike to Work Day story.

--

With more than 30,000 Coloradans hitting the streets on two wheels in 2016, Colorado hosted the second-largest Bike to Work Day in the nation last year. Summer's here, so dust off that bike and register today to join CableLabs' employees on the Colorado bike lanes and trails for #BikeToWorkDay 2017 to make this year one for the record books. Click here to see a list of Bike to Work Day events in your area.

To learn more about the benefits of bicycling to work, school or errands, don't forget to check out the Colorado Department of Transportation's website.

Consumer

A Look into the Near Future

Eric Klassen
Executive Producer

Aug 8, 2016

CableLabs has done something surprising for an innovation and R&D lab: it has released a short film that provides a vision of possibilities arising from the high-speed, low-latency networks that will connect our homes, businesses and mobile devices in the not-too-distant future.

Portraying a number of vignettes in the life of a family, the video illustrates the impact of new technologies on a range of human interactions. These include hologram-based education, autonomous vehicles, augmented and virtual reality gaming, collaborative work and much more.

The film, produced from the vision of the CableLabs innovation team, offers a compelling view into the technology-driven transformative shifts that could occur over the next five years.

Filming was not a trivial task. To properly illustrate each technology, special effects were required in most of the shots. The visual effects company FirstPerson was brought onto the team to 3D-map each set in preparation for post-production. After shooting was completed, the FX team was tasked with matching the 3D-generated maps to the real locations, bringing the virtual reality technology to life.

To see the film and more, go to www.thenearfuture.network


Labs

The Cost of Codecs: Royalty-Bearing Video Compression Standards and the Road that Lies Ahead

Feb 1, 2016

Since the dawn of digital video, royalty-bearing codecs have been the gold standard for much of the distribution and consumption of our favorite TV shows, movies, and other personal and commercial content. Since the early ’90s, the Moving Picture Experts Group (MPEG) has defined the formats and technologies for video compression that allow all those pixels and frames to be squeezed into smaller and smaller files. For better or for worse, along with these technologies comes the intellectual property of the individuals and organizations that created them. In the United States and in many other countries, this means patents and royalties.

MPEG-LA

The MPEG Licensing Authority (MPEG-LA) was established after the development of the MPEG-2 codec to manage patents essential to the standard and to create a sensible consolidated royalty structure through which intellectual property holders could be compensated. With over 600 patents from 27 different companies considered to be essential to MPEG-2, MPEG-LA spared anyone wishing to develop products utilizing MPEG-2 from having to negotiate licensing agreements with each patent owner individually. The royalty structure that MPEG-LA and the patent owners adopted for MPEG-2 was fairly simple: a one-time flat fee per codec. See table below.

The successor codec to MPEG-2, MPEG-4 Advanced Video Coding (AVC), was completed in 2003. Shortly thereafter, MPEG-LA released its licensing terms, which contained per-unit royalties and a ceiling on the amount that any one licensee must pay in a year. However, it seemed that not all parties who claimed to have essential patents for AVC were satisfied with the royalty structure established by MPEG-LA. Thus, a new AVC patent pool, managed by Via Licensing, a spin-off of Dolby Laboratories, Inc., emerged in 2004 and announced its separate licensing terms. This created a significant amount of confusion among those trying to license the technology. Would you end up having to license from both parties to protect yourself from future litigation? Some patent owners were in both pools; would you have to pay twice for these patents? In the end, Via Licensing signed an agreement with MPEG-LA, and its licensors merged with the MPEG-LA licensors to create a single licensing authority.

The royalty structure established by the MPEG-LA group for MPEG-4 added new fees to the mix. In addition to a one-time device fee (covering software and hardware applications), per-subscriber and per-title fees (e.g., VOD, digital media) were added. These additional fees have deterred many cable operators and other MVPDs from adopting MPEG-4. Royalties were also established for over-the-air broadcast services, while free Internet broadcast video (e.g., YouTube, Facebook) remained exempt. See table below.

The latest MPEG video codec is called High Efficiency Video Coding (HEVC), and with it comes the latest licensing terms from MPEG-LA. A flat royalty per device is incurred by those shipping over 100,000 devices to end users; companies with device volumes under 100,000 will pay no royalties. In addition, the royalty cap for organizations was raised substantially. Most companies distributing HEVC encoders/decoders will not be charged royalties because they will easily fall under the 100,000-unit limit. Larger companies that sell mobile devices (Samsung, LG, etc.), OS vendors (Microsoft), and large media software vendors (Adobe, Google) will likely be the ones paying HEVC royalties. Learning from the lackluster adoption of MPEG-4 by content distributors, MPEG-LA established no royalties on content distribution. This seems like a step in the right direction.

The following table details the royalty structures established from the days of MPEG-2 up to the proposed terms for HEVC.
[Table: royalty structures for MPEG-2, MPEG-4 AVC, and HEVC; a PDF version is available for download]

Trouble on the Horizon

As one might expect, many companies that hold patents they claim are essential to HEVC were not happy with the MPEG-LA terms and have instead joined a new licensing organization, HEVCAdvance. In early 2015, when HEVCAdvance first announced its licensing terms, it caused quite a stir in the industry. Not only did it set much higher per-device rates, but there was also a very vague “attributable revenue” royalty for content distributors and any other organization that utilizes HEVC to facilitate revenue generation. On top of all that, there were no royalty caps.

In December 2015, due in no small part to the media and industry backlash that followed the release of the original license terms, HEVCAdvance released a modified terms and royalties summary. Royalties for devices were lowered slightly but remain significantly higher than what MPEG-LA has established, and with no yearly cap. Additionally, there are slightly different royalty rates depending on the type of device (e.g., mobile phone, set-top box, 4K UHD+ TV). No royalties are assessed for free over-the-air and Internet broadcast video. However, per-title and per-subscriber royalties remain. See table above. The HEVCAdvance terms also assess royalties, with no caps, on past sales of per-title and subscription-based units. Finally, there is an incentive program in which early adopters of the HEVCAdvance license can receive limited discounts on royalties for past and future sales.

It is important to note that the companies participating in each patent pool (HEVCAdvance, MPEG-LA) are not the same. If the two patent pools are not reconciled or merged, licensees may have to sign agreements with both organizations to better protect themselves against infringement claims. This will result in a substantial financial impact on device vendors (TVs, browsers, operating systems, media players, Blu-ray players, set-top boxes, game consoles, etc.). Of course, content distributors are also unhappy with the HEVCAdvance terms, which revert to the content-based royalties of MPEG-4 with a higher cap of $5M per year. As a result, you may expect content distributors to pass a portion of their subscription- and title-based royalties on to their customers, resulting in increased prices for consumers like you and me.

Update (2/3/16): Technicolor announced that it will withdraw from HEVCAdvance and license its patents directly to device manufacturers. While this is an encouraging sign (and good for content distributors), I'm curious as to why it did not just join MPEG-LA. If more and more companies take the route of individual licensing, it may make the process of acquiring licenses for HEVC more complex.

The Royalty-Free Codec Revolution

In response to the growing concerns regarding the costs of using and deploying MPEG-based codecs, numerous companies and organizations have developed, or are planning to develop, alternatives that are starting to show some promise.

Google VPx

Google, through its acquisition of On2 Technologies, has developed the VP8 and VP9 codecs to compete with AVC and HEVC, respectively. Google has released the VP8 codec, and all IP contained therein, under a cross-license agreement that provides users royalty-free rights to use the technology. In 2013, Google signed an agreement with MPEG-LA and 11 MPEG-LA AVC licensors to acquire a license to VP8 essential patents from this group. This makes VP8 a relatively safe choice, legally speaking, as an alternative to AVC. The agreement between Google and MPEG-LA also covers one “next-generation” codec. According to my discussions with Google, this is meant to be the VP9 codec. The VP8 cross-license FAQ indicates that a VP9 cross-license is in progress, but nothing official exists to date.

Most of the content served by Google’s YouTube is compressed with VP8/VP9 technology. Additionally, numerous software and hardware vendors have added native support for VPx codecs. Chrome, Firefox, and Opera browsers all have software-based VP8 and VP9 decoders. Recently, Microsoft announced that they would be adding VP9 decode support to their Edge browser in an upcoming release. On the hardware side, Google has signed agreements with almost every major chipset vendor to add VP8/VP9 functionality to their designs.

From a performance standpoint, we feel that VP8/AVC and VP9/HEVC are very similar. Numerous experiments have been run to try to assess the relative performance of the codecs and, while it is difficult to do an apples-to-apples comparison, they seem very close. If you’d like to look for yourself, here is a sampling of the many tests we found:

  • https://www.ietf.org/mail-archive/web/rtcweb/current/msg09064.html
  • Mukherjee, et al., Proceedings of the International Picture Coding Symposium, pp. 390-393, Dec. 2013, San Jose, CA, USA.
  • Grois, et al., Proceedings of the International Picture Coding Symposium, pp. 394-397, Dec. 2013, San Jose, CA, USA.
  • Bankoski, et al., Proceedings of IEEE International Conference on Multimedia and Expo, pp. 1-6, July 2011, Barcelona, ES.
  • Rerabek, et al., Proceedings of the SPIE Applications of Digital Image Processing, XXXVII, vol. 9217, August 2014, San Diego, CA, USA.
  • Feller, et al., Proceedings of the IEEE International Conference on Consumer Electronics, pp. 57-61, Sept. 2011, Berlin, DE.
  • Kufa, et al., Proceedings of the 25th International Radioelektronika Conference, pp. 168-171, April 2015, Pardubice, CZ.
  • Uhrina, et al., 22nd Telecommunications Forum (TELFOR), pp. 905-908, Nov. 2014, Belgrade, RS.

An Alliance of Titans

At least in part in reaction to the HEVCAdvance license announcement in 2015, a collection of the largest media and Internet companies in the world announced that they were forming a group with the sole goal of helping develop a royalty-free video codec for use on the web. The Alliance for Open Media (AOM), founded by Google, Cisco, Amazon, Netflix, Intel, Microsoft, and Mozilla, aims to combine their technological and legal strength to ensure the development of a next-generation compression technology that is both superior in performance to HEVC and free of charge.

Several of the companies in AOM have already started work on a successor codec. Google has been steadily progressing on VP10, while Cisco has introduced a new codec called Thor. Finally, Mozilla has long been working on a codec named Daala, which takes a drastically different approach from the standard Discrete Cosine Transform (DCT) algorithm used in most compression systems. Cisco, Google, and Mozilla have all stated that they would like to combine the best of their separate technologies into the next-generation codec envisioned by AOM. It is likely that this work would be done in the IETF NetVC working group, but not much has been decided as of yet.

The Future is Free?

We can't say this with utmost confidence, but it seems clear that the confusing landscape of IP-encumbered video codecs is driving the industry toward a future where digital video, the predominant source of traffic on the Internet, may truly be free from patent royalties.

Greg Rutz is a Lead Architect in the Advanced Technology Group at CableLabs. 

Special thanks to Dr. Arianne Hinds, Principal Architect at CableLabs, and Jud Cary, Deputy General Counsel at CableLabs for their contributions to this article.

Consumer

Who Will Win The Race For Virtual Reality Content?

Steve Glennon
Distinguished Technologist, Advanced Technology Group

Oct 1, 2015

This is the second of a series of posts on Virtual Reality technology by Steve and others at CableLabs. You can find the first post here.

In the recent study we performed at CableLabs, we asked what would stop people from buying a virtual reality headset. High on the list of items was availability of content. Setting aside VR gaming, people didn’t want to spend money on a device that only had a few (or a few hundred) pieces of content. So there has to be lots. Who is creating content, and how quickly will content show up?

Leave it to the Professionals?

Three companies have emerged as leaders in the space of professional content production:

  • JauntVR, which just secured another $66M investment
  • Immersive Media, which worked on the Emmy award-winning Amex Unstaged: Taylor Swift Experience
  • NextVR, which seemingly wants to become the “Netflix of VR.”

Each company is developing content with top-tier talent, working to establish brand equity and develop critical intellectual property. Another thing they have in common is that they have developed their own cameras to capture the content, yet they all say they are not in the business of building cameras.

Jaunt

JauntVR Neo Camera 

The new Jaunt Neo camera seems to have 16 sensors around the periphery, which we might expect to yield a 16K x 8K resolution. This is far in excess of what the Samsung GearVR will decode and display (limited to 4K x 2K at 30fps, or lower resolution at 60fps), but it shows the type of quality experience we might expect in the future. The NextVR camera uses six 4K RED professional cameras, each of which is a rather expensive proposition before you even buy lenses.

But there seems to be a problem of scaling up. To get to thousands of pieces of professional content, we will have to wait years for the three companies and any new entrants to scale up their staff and equipment and produce enough interesting projects.

Or will consumer-generated content win the day?

On the flip side, we have seen the rate at which regular video is created and uploaded to YouTube (300 hours uploaded per minute). All we need is for affordable consumer-grade cameras to show up. The first cameras are already here, though their resolution leaves something to be desired. Brand-name cameras like the Ricoh Theta and the Kodak SP360 are affordable consumer-grade 360° cameras at around $300, but they only shoot HD video. When the 1920 pixels of HD video are expanded to fill the full horizon, all that can be seen in the VR headset is less than a quarter of that, or around 380 pixels. Most of the content we used in consumer testing was 4000x2000 pixels, and even that showed up as fairly low resolution. There are some startup names entering the scene (like Bublcam and Giroptic), but these are not shipping yet and don’t advance beyond HD video for the full sphere of what you see. Perhaps more interesting in terms of quality are cameras coming from Sphericam and Nokia that promise 4K video at 60 frames per second, and an updated Kodak camera that supports 4K. These cameras reach a little beyond what the Samsung GearVR headset can decode (4K at 30fps) but are a little beyond the consumer price point, entering the market at $1,500. Oddly missing from the consumer arena is GoPro, but I think we can reasonably predict a consumer-grade 360° camera from them in late 2015 or early 2016 that is a little more manageable than the 16-GoPro Google Jump camera or the 6-camera rigs that hobbyists have been using to create the early content.

YouTube and Facebook 360 are already here

Facebook launched support for 360° videos in timelines just last week. YouTube launched 360° support in March and in six months already has about 14,000 spherical videos, with few mainstream cameras available. I strongly suspect that user-generated content will massively outnumber professionally produced 360° content. Our studies suggested that while consumers guessed they would be willing to wear a VR headset for 1-2 hours, short-form content would be favored. This matches the predominant YouTube content length well. While Samsung has created its own video browsing app (MilkVR), a YouTube VR application is sure to emerge soon, allowing us all to spend hours consuming 360° videos in a VR headset.

What Content Appeals?

Shooting interesting 360° content is hard. There is nowhere to hide the camera operator. Everything is visible, and traditional video production focuses on a “stage” that the director populates with the action he or she wants you to see. We had a strong inkling that live sports and live music concerts would be compelling types of content, and might be experiences people would value enough to pay for. We tried several different types of content when we tested VR with consumers, trying to cover the bases of sports, music, travel and “storytelling” (fiction in short or long form). We were surprised to see the travel content we chose emerge as the clear winner, leaving all other content categories in the dust on a VR headset. We gained lots of insights into the difficulty of producing good VR video content (like not moving the camera and not doing frequent scene cuts), and the difficulty of creating great storytelling content that really leverages the 360° medium. Let’s just say I am not expecting vast libraries of professionally produced 360° movies any time soon.

The ingenuity of the population in turning YouTube into the video library of everything is likely to play out here too, with lots of experimentation and new types of content becoming popular. The general public, through trying lots of different things, is most likely to find the answers for us. For that to happen, mass adoption of VR headsets has to take place, and consumer 360° cameras have to become cheap and desirable. Samsung announced the “general public” version of its headset at Oculus Connect 2 last week, at a price point of $99 over the cost of its flagship phones. Oculus expects to ship over 1 million headsets in the first year of retail availability. Cameras are already out there, inexpensive, and getting better. In 12 months I expect over 100,000 360° videos to be published on YouTube.

Is the killer app for VR… TV?

Netflix just launched its app on the GearVR. Here at my desk I can watch movies and TV episodes on a VR headset, sitting in a ski lodge with views out the window of ski runs. I’m sitting in front of a (virtual) 105” TV, enjoying an episode of Sherlock. The screen quality is pretty grainy, but I quickly forget that and become engrossed in the episode -- until my boss taps me on the shoulder and tells me to get back to work. I thought this was a pretty dumb use case for the technology, but I was wrong. Dead wrong. The only problem is I can’t see the popcorn to pick it up and try to get it into a mouth I can’t see. There is lots to ponder here.

Steve Glennon is a Principal Architect in the Advanced Technology Group at CableLabs.

Consumer

Is Virtual Reality the Next Big Thing?

Steve Glennon
Distinguished Technologist, Advanced Technology Group

Sep 8, 2015

In the Advanced Technology Group at CableLabs, I get to play with all the best toys, trying to work out what is a challenge or an opportunity, and what is a flop waiting to be unwrapped. 3D is a great case study. I first got to work with 3D TV in 1995 (in a prior life), with an early set of shutter glasses and a PC display. Back then it faced the chicken-and-egg problem: no TVs and no content. Which would come first?

I still remember the first 3D video I saw in 1995. It was high quality, high frame rate, and beautiful, but not compelling. I still didn’t find it compelling in 2010 when the 3D TV/content logjam was broken.

I remember the first time I saw Virtual Reality video at CES in January 2014. My reactions were starkly different. The first VR Video I saw was grainy, blurry, and a little jerky. And it was incredibly compelling. I was hooked.

You can’t talk about VR - you have to experience it

Lots of people talk about VR; they “get it” in their heads. You see video playing, and you can look all around. And you can look stupid with a headset on. But you don’t really get it until you experience some good content in a good headset.

[Image: still from a 360° video of elephants]

This doesn’t convey what it feels like to see elephants up close. Everyone loves elephants, and with a headset you feel like you are meeting one for the first time, in its natural habitat.

[Image: still from a 360° video of an icy Icelandic stream]

And this doesn’t convey the reflex of stepping back to avoid getting your feet wet in icy cold Icelandic water.

So we have been doing a couple of things. We have been showing the headsets to people at CableLabs conferences, where we got almost half the attendees to try on a headset and experience it firsthand. We also wanted to put some market research behind it, so we designed some consumer panels to get regular consumer feedback across a representative demographic cross-section.

VR Is Only For Gamers. Or Not.

A lot of the early focus for the Oculus Rift (and Crystal Cove) headset has been around gamers and gaming. It is the perfect add-on for the gamer, where every frame is rendered in real time and you can go beyond “looking around” on the PC screen to just looking around. I had the pleasure of getting a tour and demo of the HTC Vive headset at Valve’s headquarters in Bellevue, and there you can walk around in addition to looking around. It is the closest thing to a Holodeck that I have ever experienced. I think we can reasonably expect these to go mainstream within the gaming community.

But the consumption of 360° video has much more importance to cable, as mentioned by Phil McKinney in this Variety interview, because of the need for high bandwidth to deliver the content. We’ll look at that in more detail in a future blog post.

Rather than use our own tech-geek, early-adopter attitudes to evaluate this, we wanted to get a representative sample of the general population and ask their opinions. So that’s what we did. With some excellent 360° video content from Immersive Media, a former Innovation Showcase participant, we asked regular people with regular jobs: teachers, nurses, students, physical therapists, personal trainers, and a roadie. We tried to come up with some different display technologies to compare against, and showed them the Samsung GearVR with the content. Here’s what they told us:

57% said they “had to have it.” 88% could see themselves using a head-mounted display within 3 years. Only 11% considered the headset to be either uncomfortable or very uncomfortable. And 96% of those who were cost-sensitive would consider a purchase at a $200 price point (the GearVR is a $200 add-on to the Samsung Galaxy Note 4 or Galaxy S6 phone).

So this seems overwhelmingly positive. There is the novelty factor to take into account, but we were surprised by how few expressed any discomfort and how positively regular people described the experience.

“VR is the most Important Development since the iPhone”

I had the distinct pleasure of spending time on stage with Robert Scoble during his keynote at the recent CableLabs Summer Conference. We discussed the state of the art in head-mounted displays, 360° cameras and content (we’ll talk more about that in later blog posts), but he expressed this sentiment (paraphrased): “Virtual reality is the most important technology development since the original iPhone.” I hadn’t thought about it that way, but now I agree with him. This is not just hot and sexy, not a passing fad. It has massive potential to transform much of what we do, and we can all expect incredible developments in this space.

I want to meet Palmer Luckey

Palmer Luckey created his first prototype head-mounted display at age 18. Four years later he sold Oculus to Facebook for $2 billion, having dropped out of school to build more prototypes. I didn’t see the Facebook deal coming, and didn’t understand it. Now I get it. I want to meet him and thank him for transforming our lives in ways we cannot imagine. We just haven’t witnessed it yet. We will explore more in the next few posts.

Steve Glennon is a Principal Architect in the Advanced Technology Group at CableLabs.

Technical Blog

Content Creation Demystified: Open Source to the Rescue

Jun 9, 2015

There are four main concepts involved in securing premium audio/visual content:

  1. Authentication
  2. Authorization
  3. Encryption
  4. Digital Rights Management (DRM)

Authentication is the act of verifying the identity of the individual requesting playback (e.g., username/password). Authorization involves verifying that individual’s right to view the content (e.g., subscription, rental). Encryption is the scrambling of audio/video samples; decryption keys (licenses) must be acquired to enable successful playback on a device. Finally, DRM systems are responsible for the secure exchange of encryption keys and the association of various “rights” with those keys. Digital rights may include things such as:

  • Limited time period (expiration)
  • Supported playback devices
  • Output restrictions (e.g. HDMI, HDCP)
  • Ability to make copies of downloaded content
  • Limited number of simultaneous views

This article will focus on the topics of encryption and DRM and describe how open source software can be used to create protected, adaptive bitrate test content for use in evaluating web browsers and player applications. I will close by describing the steps by which protected streams can be created.

One Encryption to Rule Them All

Encryption is the process of applying a reversible mathematical operation to a block of data to disguise its contents. Numerous encryption algorithms have been developed, each with varying levels of security, inputs and outputs, and computational complexity. In most systems, a random sequence of bytes (the key) is combined with the original data (the plaintext) using the encryption algorithm to produce the scrambled version of the data (the ciphertext). In symmetric encryption systems, the same key used to encrypt the data is also used for decryption. In asymmetric systems, keys come in pairs; one key is used to encrypt or decrypt data while its mathematically related sibling is used to perform the inverse operation. For the purposes of protecting digital media, we will be focusing solely on symmetric key systems.
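To make the symmetric case concrete, here is a quick round trip using the openssl command-line tool; the key and IV below are throwaway example values, not anything a real system would use:

# Encrypt with AES-128 CTR; -K is the 128-bit key in hex, -iv the initialization vector
echo "some sample plaintext" > sample.txt
openssl enc -aes-128-ctr -K 000102030405060708090a0b0c0d0e0f \
    -iv 000102030405060708090a0b0c0d0e0f -in sample.txt -out sample.enc

# Decrypt with the identical key and IV to recover the original plaintext
openssl enc -d -aes-128-ctr -K 000102030405060708090a0b0c0d0e0f \
    -iv 000102030405060708090a0b0c0d0e0f -in sample.enc -out sample.dec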

With so many encryption methods to choose from, one could reasonably expect that web browsers would only support a small subset. If the sets of algorithms implemented in different browsers did not overlap, multiple copies of the media would have to be stored on streaming servers to ensure successful playback on most devices. This is where the ISO Common Encryption standards come in. These specifications identify a single encryption algorithm (AES-128) and two block chaining modes (CTR, CBC) that all encryption and decryption engines must support. Metadata in the stream describes which keys and initialization vectors are required to decrypt media sample data. Common Encryption also supports the concept of “subsample encryption,” in which unencrypted byte ranges are interspersed within the samples and Common Encryption metadata describes the byte offsets where encrypted data begins and ends.

In addition to encryption-based metadata, Common Encryption defines the Protection Specific System Header (PSSH). This bit of metadata is an ISOBMFF box structure that contains data proprietary to a particular DRM that will guide the DRM system in retrieving the keys needed to decrypt the media samples. Each PSSH box contains a “system ID” field that uniquely identifies the DRM system to which the contained data applies. Multiple PSSH boxes may appear in a media file, indicating support for multiple DRM systems. Hence the magic of Common Encryption: a standard encryption scheme and multi-DRM support for retrieval of decryption keys, all within a single copy of the media.

The main Common Encryption standard (ISO 23001-7) specifies metadata for use in ISO BMFF media containers. ISO 23001-9 defines Common Encryption for the MPEG transport stream container. In many cases, the box structures defined in the main specification are simply inserted into MPEG TS packets in an effort to avoid introducing completely new structures that essentially hold the same data.

Creating Protected Adaptive Bitrate Content

CableLabs has developed a process for creating encrypted MPEG-DASH adaptive bitrate media involving some custom software combined with existing open source tools. The following sections will introduce the software and go through the process of creating protected streams. These tools and some accompanying documentation are available on GitHub. A copy of the documentation is also hosted here.

EME Content Tools

Adaptive Bitrate Transcoding

The first step in the process is to transcode source media files into several lower-bitrate versions. This can be as simple as reducing the bitrate, but in most cases the resolution should be lowered as well. To accomplish this, we use the popular FFMpeg suite of utilities. FFMpeg is a multi-purpose audio/video recorder, converter, and streaming library with dozens of supported formats. An FFMpeg installation will need to have the x264 and fdk_aac codec libraries enabled; if appropriate binaries are not available, FFMpeg can be built from source.

CableLabs has provided an example script that can be used as a guide to generating multi-bitrate content. There are some important items in this script that should be noted.

One of the jobs of an ABR packager is to split the source stream up into “segments.” These segments are usually between 2 and 10 seconds in duration and are in frame-accurate alignment across all the bitrate representations of media. For bitrate switching to appear seamless to the user, the player must be able to switch between the different bitrate streams at any segment boundary and be assured that the video decoder will be able to accept the data. To ensure that the packager can split the stream at regular boundaries, we need to make sure that our transcoder is inserting I-Frames (video frames that have no dependencies on other frames) at regular intervals during the transcoding process. The following arguments to x264 in the script accomplish that task:

-x264opts "keyint=$framerate:min-keyint=$framerate:no-scenecut"

We use the framerate detected in the source media to instruct the encoder to insert new I-Frames at least once every second. Assuming our packager will segment using an integral number of seconds, the stream will be properly conditioned. The no-scenecut argument tells the encoder not to insert random I-Frames when it detects a scene change in the source material. We detect the framerate of the source video using the ffprobe utility that is part of FFMpeg.

framerate=$((`./ffprobe $1 -select_streams v -show_entries stream=avg_frame_rate -v quiet -of csv="p=0"`))

At the bottom of the script, we see the commands that perform the transcoding using bitrates, resolutions, and codec profile/level selections that we require.

transcode_video "360k" "512:288" "main" 30 $2 $1
transcode_video "620k" "704:396" "main" 30 $2 $1
transcode_video "1340k" "896:504" "high" 31 $2 $1
transcode_video "2500k" "1280:720" "high" 32 $2 $1
transcode_video "4500k" "1920:1080" "high" 40 $2 $1

transcode_audio "128k" $2 $1
transcode_audio "192k" $2 $1

For example, the first video representation is 360kb/s with a resolution of 512x288 pixels using AVC Main Profile, Level 3.0. The other thing to note is that the script transcodes audio and video separately, because the DASH-IF guidelines forbid multiplexed audio/video segments (see section 3.2.1 of the DASH-IF Interoperability Points v3.0 for DASH AVC/264).
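The transcode_video function itself lives in the script on GitHub; as a rough sketch (not the script's exact code), a single call might expand to an FFMpeg invocation along these lines, with the output path invented here for illustration:

# Hypothetical expansion of: transcode_video "1340k" "896:504" "high" 31 $outdir $source
# -an drops the audio track, since audio is transcoded separately per DASH-IF
ffmpeg -i "$source" \
    -c:v libx264 -profile:v high -level 31 -b:v 1340k \
    -vf "scale=896:504" \
    -x264opts "keyint=$framerate:min-keyint=$framerate:no-scenecut" \
    -an "$outdir/video_1340k.mp4"

A matching transcode_audio call would use the libfdk_aac encoder with -vn to drop the video track.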

Encryption

Next, we must encrypt the video and/or audio representation files we created. For this, we use the MP4Box utility from GPAC. MP4Box is perfect for this task because it supports the Common Encryption standard and is highly customizable. Not only will it perform AES-128 CTR or CBC mode encryption, but it can also do subsample encryption and insert multiple, user-specified PSSH boxes into the output media file.

To configure MP4Box for performing Common Encryption, the user creates a “cryptfile”. The cryptfile is an XML-based description of the encryption parameters and PSSH boxes. Here is an example cryptfile:
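(The sketch below follows the structure described in the MP4Box documentation; the track ID, key ID, key value, IV, and PSSH contents are placeholders, and only a single Widevine PSSH box is shown.)

<?xml version="1.0" encoding="UTF-8"?>
<GPACDRM type="CENC AES-CTR">
  <!-- One DRMInfo element per PSSH box; ID128 here is the Widevine system ID -->
  <DRMInfo type="pssh" version="1">
    <BS ID128="EDEF8BA979D64ACEA3C827DCD51D21ED"/>
    <!-- DRM-specific payload bytes would follow as additional BS elements -->
  </DRMInfo>
  <!-- One CrypTrack element per track to encrypt; key and IV are placeholders -->
  <CrypTrack trackID="1" IsEncrypted="1" IV_size="8"
             first_IV="0x0123456789abcdef" saiSavedBox="senc">
    <key KID="0x000102030405060708090a0b0c0d0e0f"
         value="0x00112233445566778899aabbccddeeff"/>
  </CrypTrack>
</GPACDRM>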


The top-level <GPACDRM> element indicates that we are performing AES-128 CTR mode Common Encryption. The <DRMInfo> elements describe the PSSH boxes we would like to include. Finally, a single <CrypTrack> element is specified for each track in the source media we would like to encrypt. Multiple encryption keys may be specified for each track, in which case the number of samples to encrypt with each key is indicated before moving to the next key in the list.

Since PSSH boxes are really just containers for arbitrary data, MP4Box has defined a set of XML elements specifically to define bitstreams in the <DRMInfo> nodes. Please see the MP4Box site for a complete description of the bitstream description syntax.

Once the cryptfile has been generated, you simply pass it as an argument on the command line to MP4Box. We have created a simple script to help encrypt multiple files at once, since you will most likely be encrypting each bitrate representation file of your ABR media.
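The invocation itself is a one-liner (file names here are examples):

# Encrypt one representation with the cryptfile; repeat for each bitrate file
MP4Box -crypt cryptfile.xml video_1340k.mp4 -out video_1340k_enc.mp4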

Creating MP4Box Cryptfiles with DRM Support

In any secure system, the keys necessary to decrypt protected media will most certainly be kept separate from the media itself. It is the DRM system’s responsibility to retrieve the decryption keys and any rights afforded to the user for that content by the content owner. If we wish to encrypt our own content, we will also need to ensure that the encryption keys are available on a per-DRM basis for retrieval when the content is played back.

CableLabs has developed custom software to generate MP4Box cryptfiles and ensure that the required keys are available on one or more license servers. The software is written in Java and can be run on any platform that supports a Java runtime. A simple Apache Ant buildfile is provided for compiling the software and generating executable JAR files. Our tools currently support the Google Widevine and Microsoft PlayReady DRM systems, with a couple of license server choices for each. The first Adobe Access CDM is just now being released in Firefox, and we expect to update the tools in the coming months to support Adobe DRM. Support for W3C ClearKey encryption is also available, but we will focus on the commercial DRM systems for the purposes of this article.

The base library for the software is CryptfileBuilder. This set of Java classes provides abstractions to facilitate the construction and output of MP4Box cryptfiles. All other modules in the toolset are dependent upon this library. Each DRM-specific tool has detailed documentation available on the command line (-h arg) and on our website.

Microsoft PlayReady Test Server

The code in our PlayReady software library provides two main pieces of functionality:

  1. A PlayReady PSSH generator for MP4Box cryptfiles
  2. An encryption key generator for use with the Microsoft PlayReady test license server

Instead of allowing clients to ingest their own keys, the license server uses an algorithm based on a “key seed” and a 128-bit key ID to derive decryption keys. The algorithm definition can be found in this document from Microsoft (in the section titled “Content Key Algorithm”). Using this algorithm, the key seed used by the test server, and a key ID of our choosing, we can derive the content key that will be returned by the server during playback of the content.

Widevine License Portal

Similar to PlayReady, our Widevine toolset provides a PSSH generator for the Widevine DRM system. Widevine, however, does not provide a generic test server like the one from Microsoft. Users will need to contact Widevine to get their own license portal on their servers. With that portal, you will get a signing key and initialization vector for signing your requests. You provide this information as input to the Widevine cryptfile generator.

The Widevine license server will generate encryption keys and key IDs based on a given “content ID” and media type (e.g., HD, SD, AUDIO). Their API has been updated since our tools were developed; it now supports ingest of “foreign keys” that our tool could generate itself, but we don’t currently support that.

DRMToday

The real power of Common Encryption is made apparent when you add support for multiple DRM systems in a single piece of content. With the license servers used in our previous examples, this was not possible because we were not able to select our own encryption keys (as explained earlier, Widevine has added support for “foreign keys”, but our tools have not been updated to make use of them). With that in mind, a new licensing system is required to provide the functionality we seek.

CableLabs has partnered with CastLabs to integrate support for their DRMToday multi-DRM licensing service in our content creation tools. DRMToday provides a full suite of content protection services including encryption and packaging. For our needs, we only rely on the multi-DRM licensing server capabilities. DRMToday provides a REST API that our software uses to ingest Common Encryption keys into their system for later retrieval by one of the supported DRM systems.

MPEG-DASH Segmenting and Packaging

The final step in the process is to segment our encrypted media files and generate an MPEG-DASH manifest (.mpd). For this, we once again use MP4Box, but this time with the -dash argument. There are many options in MP4Box for segmenting media files, so please run MP4Box -h dash to see the full list of configurations.

For the purposes of this article, we will focus on generating content that meets the requirements of the DASH-IF DASH-AVC264 “OnDemand” profile. Our repository contains a helper script that will take a set of MP4 files and generate DASH content according to the DASH-IF guidelines. Run this script to generate your segmented media files and produce your manifest.
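As a rough sketch of what such a script does (the segment duration and file names below are illustrative, not necessarily the script's exact values):

# Segment encrypted representations into 10-second chunks and emit a
# DASH-AVC/264 OnDemand-profile manifest
MP4Box -dash 10000 -rap -profile dashavc264:onDemand \
    -out stream.mpd \
    video_1340k_enc.mp4 video_2500k_enc.mp4 audio_128k_enc.mp4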

Greg Rutz is a Lead Architect at CableLabs working on several projects related to digital video encoding/transcoding and digital rights management for online video.

This post is part of a technical blog series, "Standards-Based, Premium Content for the Modern Web".

Technical Blog

Web Media Playback: Solved with dash.js

Jun 2, 2015

(Disclaimer: the author is a regular contributor to the dash.js project)

A crucial piece of any standards development process is the creation of a reference implementation to validate the feasibility of the many requirements set forth within the standard. After all, it makes no sense to create a standard if it is impossible to create fully compliant products. The DASH Industry Forum recognized this and created the dash.js project.

dash.js is an open-source JavaScript media player library coupled with a few example applications. It relies on the W3C Media Source Extensions (MSE) for adaptive bitrate playback and Encrypted Media Extensions (EME) for protected content support. While dash.js started out as a reference implementation, it has been adopted by many organizations for use in commercial products. The DASH-IF Interoperability Points specification has achieved relative stability with respect to the base functionality, so the development team is focused on adding features and improving performance to further increase its usefulness to companies producing web media players.

One of the benefits of dash.js is its richness of features. It supports both live and on-demand content, multi-period manifests, and subtitles, to name a few. The player is highly extensible, with an adaptive-bitrate rules engine, configurable buffer scheduling, and a metrics reporting system. The example application provided with the library source displays the current buffer levels and current representation index for both audio and video, and it allows the user to manually initiate bitrate changes or let the player rules handle them automatically. Finally, the app contains a buffer level graph, manifest display and debug logging window.

CableLabs has been an active contributor to the dash.js project. Much of the EME support in dash.js was designed and implemented by CableLabs to ensure that the application will support the wide variety of API implementations found in production desktop web browsers today. Additionally, CableLabs has created and hosted test content in the dash.js demo application to ensure that others can observe the dash.js EME implementation in action and evaluate support for protected media on their target browsers.

Content Protection

The media library contains extensive support for playback of protected content. The EME specification has seen many modifications and updates over the years, and browser vendors have selected various points during its development to release their products. In order to support as many of these browsers as possible, dash.js has developed a set of APIs (MediaPlayer.models.ProtectionModel) as an abstraction layer to interface with the underlying implementation, whatever that may be. The APIs are designed to mimic the functionality of the most recent EME spec. Several implementations of this API have been developed to translate back to the EME versions found in production browsers. The media player will detect browser support and instantiate the proper JavaScript class automatically.

The MediaPlayer.models.ProtectionModel and MediaPlayer.controllers.ProtectionController classes provide the application with access to the EME system both inside and outside the player. ProtectionModel provides management of MediaKeySessions and protection system event notification for a single piece of content. Most actions performed on the model are made using ProtectionController. Applications can use the media player library to instantiate these classes outside of the playback environment to pre-fetch licenses. Once licenses have been pre-fetched, the app can attach the protection objects to the player to associate the licenses with the HTMLMediaElement that will handle playback.

An EME demo app (samples/dash-if-reference-player/eme.html) is provided with dash.js; it offers visibility into, and control over, the EME operations taking place in the browser. The app allows the user to pre-fetch licenses, manage key session persistence, and play back associated content. It also shows the selected key system and the status of all licenses associated with key sessions.

[Screenshot: the dash.js EME demo app]

Greg Rutz is a Lead Architect at CableLabs working on several projects related to digital video encoding/transcoding and digital rights management for online video.

This post is part of a technical blog series, "Standards-Based, Premium Content for the Modern Web".
