AI

Leveraging Machine Learning and Artificial Intelligence for 5G

Omkar Dharmadhikari
Wireless Architect

Jun 18, 2019

The heterogeneous nature of future wireless networks, comprising multiple access networks, frequency bands and cells - all with overlapping coverage areas - presents wireless operators with network planning and deployment challenges. Machine Learning (ML) and Artificial Intelligence (AI) can assist wireless operators in overcoming these challenges by analyzing geographic information, engineering parameters and historic data to:

  • Forecast the peak traffic, resource utilization and application types
  • Optimize and fine tune network parameters for capacity expansion
  • Eliminate coverage holes by measuring the interference and using the inter-site distance information

5G can be a key enabler in driving ML and AI integration into the network edge. 5G enables simultaneous connections to large numbers of IoT devices that generate massive amounts of data. The integration of ML and AI with 5G multi-access edge computing (MEC) enables wireless operators to offer:

  • High level of automation from the distributed ML and AI architecture at the network edge
  • Application-based traffic steering and aggregation across heterogeneous access networks
  • Dynamic network slicing to address varied use cases with different QoS requirements
  • ML/AI-as-a-service offering for end users

ML and AI for Beamforming

5G, when deployed using mm-wave, has beam-based cell coverage, unlike 4G, which has sector-based coverage. A machine learning algorithm can assist the 5G cell site in computing a set of candidate beams, originating either from the serving cell site or from its neighbors. An ideal set contains few beams yet has a high probability of including the best beam - the beam with the highest Reference Signal Received Power (RSRP). The more beams that are activated, the higher the probability of finding the best beam; however, a higher number of activated beams also increases system resource consumption.

The user equipment (UE) measures and reports all the candidate beams to the serving cell site, which then decides whether the UE needs to be handed over to a neighboring cell site and, if so, to which candidate beam. The UE reports Beam State Information (BSI) based on measurements of the Beam Reference Signal (BRS), comprising parameters such as Beam Index (BI) and Beam Reference Signal Received Power (BRSRP). Finding the best beam using BRSRP can be framed as a multi-target regression (MTR) problem, while finding the best beam using BI can be framed as a multi-class classification (MCC) problem.

ML and AI can assist in finding the best beam by considering the instantaneous values, updated at each UE measurement, of the parameters below (a simplified classification sketch follows the list):

  • Beam Index (BI)
  • Beam Reference Signal Received Power (BRSRP)
  • Distance (of UE to serving cell site)
  • Position (GPS location of UE)
  • Speed (UE mobility)
  • Channel quality indicator (CQI)
  • Historic values based on past events and measurements including previous serving beam information, time spent on each serving beam, and distance trends
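
As a rough illustration of the MCC framing described above, the following sketch trains a classifier that predicts the best beam index from UE measurement features and then keeps a small, high-probability candidate set. The synthetic data, the feature set and the use of scikit-learn are illustrative assumptions only, not a 3GPP procedure or a vendor implementation.

```python
# Hypothetical sketch: best-beam selection framed as multi-class classification (MCC).
# All data is synthetic; the features mirror the measurements listed above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_beams = 5000, 8

# One row per UE measurement report: BRSRP (dBm), distance (m), speed (m/s), CQI.
X = np.column_stack([
    rng.uniform(-120, -70, n_samples),   # BRSRP of the current serving beam
    rng.uniform(10, 500, n_samples),     # distance from UE to serving cell site
    rng.uniform(0, 30, n_samples),       # UE speed
    rng.integers(1, 16, n_samples),      # channel quality indicator (CQI)
])
# Label: the beam index that later proved best (synthetic stand-in for ground truth).
y = rng.integers(0, n_beams, n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Rank beams by predicted probability and keep a small candidate set, trading off
# set size against the probability that the set contains the best beam.
probs = clf.predict_proba(X_test[:1])[0]
candidate_set = np.argsort(probs)[::-1][:3]
print("Top-3 candidate beams:", candidate_set)
```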

Once the UE identifies the best beam, it can start the random-access procedure to connect to that beam using timing and angular information. After the UE connects, the data session begins on the UE-specific (dedicated) beam.

ML and AI for Massive MIMO

Massive MIMO is a key 5G technology. Massive simply refers to the large number of antennas (32 or more logical antenna ports) in the base station antenna array. Massive MIMO enhances user experience by significantly increasing throughput, network capacity and coverage while reducing interference by:

  • Serving multiple spatially separated users with an antenna array in the same time and frequency resource
  • Serving specific users with beamforming, steering a narrow, high-gain beam that sends radio signals and information directly to the device instead of broadcasting across the entire cell, which reduces radio interference across the cell

The weights applied to the antenna elements of a massive MIMO 5G cell site are critical for maximizing the beamforming effect (a simplified weight-computation sketch follows the list below). ML and AI can be used to:

  • Identify dynamic change and forecast the user distribution by analyzing historical data
  • Dynamically optimize the weights of antenna elements using the historical data
  • Perform adaptive optimization of weights for specific use cases with unique user-distribution
  • Improve the coverage in a multi-cell scenario considering the inter-site interference between multiple 5G massive MIMO cell sites
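
As a minimal sketch of what these antenna-element weights look like, the example below computes conjugate (matched-filter) beamforming weights for a uniform linear array steered toward a single user. The array geometry, half-wavelength spacing and steering angle are illustrative assumptions; they describe neither a specific 5G product nor how an ML model would adapt the weights.

```python
# Hypothetical sketch: conjugate beamforming weights for a uniform linear array (ULA).
# Assumes half-wavelength element spacing and one served user at a known angle.
import numpy as np

n_elements = 32                  # "massive" array with 32 logical antenna ports
d_over_lambda = 0.5              # element spacing in wavelengths
theta = np.deg2rad(25.0)         # direction of the served user

# Steering vector: the phase progression across the array toward angle theta.
n = np.arange(n_elements)
steering = np.exp(1j * 2 * np.pi * d_over_lambda * n * np.sin(theta))

# Matched-filter (conjugate) weights maximize array gain toward theta.
weights = np.conj(steering) / np.sqrt(n_elements)

# Evaluate the resulting gain pattern over candidate angles.
angles = np.deg2rad(np.linspace(-90, 90, 361))
responses = np.exp(1j * 2 * np.pi * d_over_lambda * np.outer(np.sin(angles), n))
gain_db = 20 * np.log10(np.abs(responses @ weights) + 1e-12)
print("Peak gain of %.1f dB at %.0f degrees" % (gain_db.max(), np.rad2deg(angles[gain_db.argmax()])))
```

An ML-driven system would, in effect, learn how to adjust such weights as the user distribution and interference conditions change, rather than fixing them for a single angle.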

ML and AI for Network Slicing

In the current one-size-fits-all implementation of wireless networks, most resources are underutilized and not optimized for high-bandwidth and low-latency scenarios. Fixed resource assignment for diverse applications with differing requirements may not be an efficient way to use available network resources. Network slicing creates multiple dedicated virtual networks on a common physical infrastructure, where each network slice can be independently managed and orchestrated.

Embedding ML algorithms and AI into 5G networks can enhance automation and adaptability, enabling efficient orchestration and dynamic provisioning of network slices. ML and AI can collect real-time information for multidimensional analysis and construct a panoramic data map of each network slice based on:

  • User subscription
  • Quality of service (QoS)
  • Network performance
  • Events and logs

Different aspects where ML and AI can be leveraged include the following (a simplified forecasting sketch follows this list):

  • Predicting and forecasting network resource usage, enabling wireless operators to anticipate network outages, equipment failures and performance degradation
  • Cognitive scaling, which assists wireless operators in dynamically adjusting network resources to meet capacity requirements based on predictive analysis and forecast results
  • Predicting UE mobility in 5G networks, allowing the Access and Mobility Management Function (AMF) to update mobility patterns based on user subscription, historical statistics and instantaneous radio conditions, enabling optimization and seamless transitions for better quality of service
  • Enhancing security in 5G networks by recognizing user patterns and tagging certain events, preventing similar attacks and fraud in the future
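
As a rough illustration of the forecasting and cognitive-scaling idea above, the sketch below fits a simple trend model to historical per-slice utilization and projects the next intervals to drive a scaling decision. The synthetic load series, the threshold and the linear model are assumptions for illustration; a production system would use richer features and operator-specific policies.

```python
# Hypothetical sketch: forecast per-slice utilization and decide whether to scale out.
# The utilization history and the scaling threshold are synthetic placeholders.
import numpy as np

# Historical utilization of one network slice (fraction of allocated capacity),
# sampled once per interval.
history = np.array([0.42, 0.45, 0.47, 0.52, 0.55, 0.60, 0.63, 0.68, 0.72, 0.75])

# Fit a simple linear trend (a stand-in for a learned forecasting model).
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history, deg=1)

# Forecast the next three intervals.
future_t = np.arange(len(history), len(history) + 3)
forecast = slope * future_t + intercept

SCALE_OUT_THRESHOLD = 0.80
if forecast.max() > SCALE_OUT_THRESHOLD:
    print("Forecast peak %.2f exceeds threshold: provision more resources for this slice" % forecast.max())
else:
    print("Forecast peak %.2f within capacity: no scaling action needed" % forecast.max())
```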

Future heterogeneous wireless networks will be implemented with varied technologies addressing different use cases, providing connectivity to millions of users simultaneously, requiring customization per slice and per service, and involving large numbers of KPIs to maintain. ML and AI will therefore be an essential methodology for wireless operators to adopt in the near future.

Deploying ML and AI into Wireless Networks

Wireless operators can deploy AI in three ways:

  • Embedding ML and AI algorithms within individual edge devices, suited to low computational capability and quick decision-making
  • Lightweight ML and AI engines at the network edge to perform multi-access edge computing (MEC) for real-time computation and dynamic decision making suitable for low-latency IoT services addressing varied use case scenarios
  • ML and AI platform built within the system orchestrator for centralized deployment to perform heavy computation and storage for historical analysis and projections

Benefits of Leveraging ML and AI in 5G

The application of ML and AI in wireless is still in its infancy and will gradually mature in the coming years, creating smarter wireless networks. The network topology, design and propagation models, along with users' mobility and usage patterns, will be complex in 5G. ML and AI will play a key role in assisting wireless operators to deploy, operate and manage 5G networks amid the proliferation of IoT devices. ML and AI will build more intelligence into 5G systems and allow for a shift from managing networks to managing services. ML and AI can be used to address several use cases that help wireless operators transition from a human management model to self-driven automatic management, transforming network operations and maintenance processes.

There are strong synergies between ML, AI and 5G. All address low-latency use cases in which the sensing and processing of data is time sensitive, such as self-driving autonomous vehicles, time-critical industrial automation and remote healthcare. 5G offers ultra-reliable low-latency communication, with latency roughly 10 times lower than 4G. However, to achieve even lower latencies and enable event-driven analysis, real-time processing and decision making, there is a need for a paradigm shift from the current centralized, virtualized cloud-based AI toward a distributed AI architecture in which decision-making intelligence sits closer to the edge of 5G networks.

The Role of CableLabs

The cable network carries a significant share of wireless data today and is well positioned to lay an ideal foundation for 5G with the continued advancement of broadband technology. Next-generation wireless networks will use higher-frequency spectrum bands that potentially offer greater bandwidth and improved network capacity; however, they face challenges with reduced propagation range. 5G mm-wave small cells require deep, dense fiber networks, and the cable industry is ideally placed to backhaul these small cells because of its already-deployed fiber infrastructure, which penetrates deep into the access network, close to end-user premises. The short-range, high-capacity physical properties of 5G have strong synergies with fixed wireless networks.

A multi-faceted CableLabs team is addressing the key technologies for 5G deployments that can benefit the cable industry. We are a leading contributor to the European Telecommunications Standards Institute NFV Industry Specification Group (ETSI NFV ISG), and our SNAPS™ program is part of Open Platform for NFV (OPNFV). We are working to optimize Wi-Fi technologies and networks in collaboration with our members and the broader ecosystem, driving enhancements and standardizing features across the industry that will make the Wi-Fi experience seamless and consistent. We are also contributing actively to 3GPP Release 16 work items for member use cases and requirements.

Our 10G platform complements 5G and is a key enabler of the supporting infrastructure 5G needs to achieve its full potential. CableLabs is leading efforts in spectrum sharing to enable coexistence between Wi-Fi and cellular technologies, which will enable multi-access sharing in the 3.5 GHz band and help make the 5G vision a reality.



AI

Why Consensual AI Improves Problem Solving

Bernardo Huberman
Fellow and Vice President of Core Innovation

Feb 22, 2019

A version of this blog post was published on February 21, 2019, on the S&P Global Market Intelligence site. 

We are surrounded by embedded sensors and devices with more processing power than many of the computers standing on our desks. Machine learning modules inside phones, home control systems, thermostats and the ubiquitous voice-operated gadgets constitute a whole technological species that now coexists with us in the same Internet environment we populate with our own communication devices. These are the simple components of what is commonly called the "Internet of Things" (IoT).

The real revolution is taking place in a different, less visible setting, an industrial one, that ranges from manufacturing and refineries to health care. Within these very large systems and organizations, myriads of embedded smart sensors are connected through shared APIs, leading to a new form of networked computing power that will likely dwarf what we conceive of as the present-day Internet.

The Challenges of Embedded Systems

This vast industrial array of connected sensors has many characteristics that make it different from the consumer smart devices with which most people are familiar. First, the pervasiveness and interconnectivity of these sensors, coupled with the unpredictability of their inputs, require responses that are autonomous from human intervention. For instance, a fitness tracker running out of power does not necessitate an urgent response. In comparison, the failure or delayed emergency signal of a smart sensor controlling several refinery valves can trigger an undesirable chain reaction from other sensors and actuators, leading to consequential systemic failures.

Second, these smart sensors constitute an open and asynchronous distributed system which cannot predict the behavior of the environment in which it is embedded. This system is also decentralized since it would be hard for a central unit to receive and transmit up-to-date information on the state of the whole system.

Third, the distributed nature of this industrial internet of things makes it open to a host of security threats, since a single break into a component of the distributed fabric can compromise the entire system.

While it is easy to create machine learning algorithms that report on the behavior of parts of the system, it is hard for these programs to be able to reason and act swiftly in response to inputs and malfunctions of a large, interconnected embedded system. A notable improvement can be achieved by using Edge Computing, which entails sensing and processing the information from embedded systems in close spatial proximity to them.

Far more complicated is the aggregation of such local information at a coarser level, so that one can obtain global and timely information and take corrective action when needed. The reason why this is difficult is that sensors differ in the precision and sophistication with which they sense and process data, while at times reporting faulty readings.

Consensual AI Solutions

One way to remedy these problems is to design distributed algorithms that can cooperate in the overall task of diagnosing and acting on systems of embedded sensors. Examples include the sensing of local anomalies that are aggregated intelligently in order to decide on a given action, the collective detection of malware in parts of the network, and effective responses to predetermined traffic and content patterns, to name a few.

We know that such distributed systems are extremely effective at solving the global problems posed by interconnected local units because this is the manner in which humans successfully manage large distributed tasks. Organizations are created at a dizzying speed to deal with problems of control and distribution in a number of industries and services, and they function by interweaving local expertise that is able to detect and solve problems that require timely solutions, from network malfunctions to supply chain interruptions.

This distributed form of artificial intelligence is not an illusory goal. A few such systems have already been designed and tested and have shown dramatic improvements in the time needed to solve hard problems such as optimization and graph coloring. These problems are characterized by the fact that as their size increases linearly, the time to find a solution rises exponentially. A common example is the traveling salesman problem, which can be seen as a metaphor for laying out networks in a manner that minimizes the number of traversals needed to cover multiple cities and users. As more nodes are added to such a network, the number of possible solutions grows explosively, making it impossible to find the route that minimizes the connections in a reasonable time.
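
To make that scaling behavior concrete, here is a small sketch contrasting exhaustive search with a simple greedy heuristic on a toy traveling salesman instance. The city coordinates and the nearest-neighbor heuristic are illustrative assumptions; the point is only that the exhaustive search space explodes combinatorially while the heuristic stays cheap (at the cost of optimality).

```python
# Hypothetical sketch: exhaustive vs. greedy search on a toy traveling salesman instance.
import itertools
import math
import random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(9)]

def tour_length(order):
    # Total length of the closed tour visiting cities in the given order.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exhaustive search: (n - 1)! tours to examine, which explodes as n grows.
best_rest = min(itertools.permutations(range(1, len(cities))),
                key=lambda rest: tour_length((0,) + rest))
print("Tours examined:", math.factorial(len(cities) - 1))
print("Optimal length: %.3f" % tour_length((0,) + best_rest))

# Greedy nearest-neighbor heuristic: examines only O(n^2) edges; fast but not optimal.
unvisited, tour = set(range(1, len(cities))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda c: math.dist(cities[tour[-1]], cities[c]))
    tour.append(nxt)
    unvisited.remove(nxt)
print("Greedy length:  %.3f" % tour_length(tuple(tour)))
```

Cooperative, message-passing agents aim at a middle ground: sharing partial results so that the effective search space collapses without giving up solution quality.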

The beauty of cooperative systems is that once deployed, they can exhibit combinatorial implosions. These implosions are characterized by a sudden collapse in the number of possible avenues to a solution due to the effectiveness of cooperation. As a result, problems that took an enormous amount of time to solve are now rendered in linear or polynomial time. These implosions occur when both the quality and the number of messages exchanged by these agents or programs increase while working on the solution of complex problems.

In the context of these networked embedded sensors, consensual systems have the ability to aggregate disparate, and at times incorrect, information to quickly and reasonably diagnose problems. These types of cooperative systems will allow for the control of large embedded systems without having to resort to an exhaustive analysis of all the data that floods the network. Even better, one hopes that as algorithmic forms of common-sense reasoning (e.g., "haven't I seen this problem before?") are developed, we will reach the dream of fully embedded sensors being able to control their systems in a form that one would call intelligent.

Want to learn more about AI in the future? Subscribe to our blog. 



Innovation

2019 Tech Innovation Predictions

Phil McKinney
President & CEO

Jan 3, 2019

Now that 2019 is here, it’s time to share my tech innovation predictions for the year. Watch the video below to find out what you can expect to see in 2019.

What are your innovation predictions for 2019? Tell us in the comment section below. Best wishes for a great new year!


Subscribe to our blog to see how CableLabs enables innovation.


 

AI

A Different Future for Artificial Intelligence

Bernardo Huberman
Fellow and Vice President of Core Innovation

Nov 27, 2018

Not a single day goes by without us hearing about AI. Machine learning, and AI, as these terms are often conflated, have become part of the lexicon in business, technology and finance. Great strides in pattern recognition and the discovery of hidden correlations in vast seas of data are fueling the enthusiasm and hopes of both the technical and business communities.

While this success is worth celebrating, we should not lose track of the fact that there are many other aspects of artificial intelligence beyond machine learning. Common sense reasoning, knowledge representation, inference, to name a few, are not part of the toolbox today but will have to be if we seek forms of machine intelligence that have much in common with our own.

The reason such forms of machine intelligence are not being used is the difficulty those problems entail. Unlike the recent advances in machine learning, half a century of research in symbolic systems, cognitive psychology and machine reasoning has not produced major breakthroughs.

Intelligent Networks: A New Form of Artificial Intelligence

A more promising future can be expected once we realize that intelligence is not restricted to single brains; it also appears in groups, such as insect colonies, organizations and markets in human societies, to name a few. In all these cases, large numbers of agents capable of performing local tasks that can be conceived as computations, engage in collective behavior which successfully solves a number of problems that transcend the capacity of a single individual to solve. And they often do so without global controls, while exchanging information that is imperfect and at times delayed.

Many of the features underlying distributed intelligence can be found in the computing networks that link our planet. Within these systems, processes are created or "born," migrate across networks and spawn other processes in remote computers. As they do, they solve complex problems (think of what it takes to render a movie on your screen) while competing with other processes for resources such as bandwidth or CPU.

Interestingly, we understand the performance of distributed intelligence, both natural and artificial, much better than the workings of individual minds. This is partly due to the ease with which we can observe and measure the interactions among individuals and programs as they navigate complex informational spaces. Contrast this with the difficulty of learning about detailed cognitive processes within the human brain. From this large body of knowledge we know that while the overall performance of a distributed system is determined by the capacity of many agents exchanging partial results that are not always optimal, success is determined by the few making the most progress per unit time (think of many agents looking for the proverbial needle in the haystack).

Distributed Intelligence: Better than the Best

This suggests a promising path forward for AI: the creation of distributed intelligent programs that can sense, learn, recognize and aggregate information when deployed throughout the network in the service of particular goals. Examples include the sensing of local anomalies that are aggregated intelligently in order to decide on a given action, the collective detection of malware in parts of the network, sensor fusion, and effective responses to predetermined traffic and content patterns, to name a few.

Distributed AI is not an illusory goal. A few such systems have already been designed and tested and have shown large improvements in the time needed to solve hard computational problems such as cryptarithmetic and graph coloring. These are problems characterized by the fact that as their size increases linearly, the time to find a solution rises exponentially. A common example is the traveling salesman problem, which can be seen as a metaphor for laying out networks in such a way that they minimize the number of traversals needed to cover a number of cities and users. While there are a number of powerful heuristics for approaching this optimization problem, for large instances one can only hope for solutions that, while not optimal, satisfy a certain number of constraints.

The beauty of cooperative systems is that once deployed, they can exhibit combinatorial implosions. These implosions are characterized by a sudden collapse in the number of possible avenues to a solution due to the effectiveness of cooperation; as a result, what took exponential time to solve is now rendered in linear or polynomial time. These implosions appear as both the quality and the number of messages exchanged by AI agents increase while working on the solution of complex problems.

In closing, the emergence of distributed AI will allow for the solution of a number of practically intractable problems, many of them connected with the smooth and safe functioning of our cable networks. Imagine applying different AI solutions to search for security anomalies and combining them in order to identify and act on them. Or monitoring distal parts of the network with different kinds of sensors whose outputs are aggregated by intelligent agents. These are just two instances of the myriad problems that could be tackled by a distributed form of artificial intelligence. The more examples we think of and implement, the closer we will get to this vision of a society of intelligent agents who, like the social systems we know, will vastly outperform the single machine learning algorithms we are so familiar with.

Subscribe to our blog to learn more about the future of artificial intelligence. 



Legal

Should Artificial Intelligence Practice Law?

Simon Krauss
Deputy General Counsel

Jun 27, 2018

As in many other professions, artificial intelligence (AI) has been making inroads into the legal profession. A service called Donotpay uses AI to defeat parking tickets and arrange flight refunds. Morgan Stanley reduced its legal staff and now uses AI to perform 360,000 hours' worth of contract review in seconds, and a number of legal services can conduct legal research (e.g., Ross Intelligence), perform contract analysis (e.g., Kira Systems and LawGeex) and help develop legal arguments in litigation (e.g., Case Text).

Many of these legal AI companies are just a few years old; clearly, there are more AI legal services to come. Current laws allow only humans who have passed a bar exam to practice law. But if non-humans could practice law, should we have AI lawyers? The answer may depend on how we want our legal analysis performed.

AI Thinking

Today, when people talk about AI, they often refer to machine learning. Machine learning has been around for many years, but because it is computationally intensive, it has not been widely adopted until more recently. In years past, if you wanted a computer to perform an operation, you had to write the code that told the computer what to do step-by-step. If you wanted a computer to identify cat pictures, you had to code into the computer the visual elements that make up a cat, and the computer would match what it “saw” with those visual elements to identify a cat.

With machine learning, you provide the computer with a model that can learn what a cat looks like and then let the computer review millions of cat (and non-cat) pictures, rewarding the model when it correctly discerns a cat and correcting it when it doesn't properly identify one. Note that we have no idea how the computer structured the data it used in identifying a cat; we see only the results of the identification. The upshot is that the computer develops a probabilistic model of what a cat looks like, such as "if it has pointy ears, is furry, and has eyes that can penetrate your soul, there is a 95 percent chance that it is a cat." And there is room for error. I've known people who fit that cat description. We all have.
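
As a lighthearted, hedged sketch of the probabilistic model described above, the example below trains a logistic regression on made-up "cat features" and outputs a probability rather than a yes/no verdict. The features and data are invented for illustration; real image classifiers learn their own internal representations rather than hand-named traits.

```python
# Hypothetical sketch: a probabilistic "is it a cat?" model built on made-up features.
# Features per example: [has pointy ears, is furry, has soul-penetrating eyes] as 0/1 flags.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [1, 1, 1], [1, 1, 0], [0, 1, 1], [1, 0, 1],   # examples labeled as cats
    [0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0],   # examples labeled as not cats
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])            # 1 = cat, 0 = not a cat

model = LogisticRegression().fit(X, y)

# The model yields a probability of cat-ness, not a chain of legal-style reasoning.
candidate = np.array([[1, 1, 1]])                  # pointy ears, furry, those eyes
p_cat = model.predict_proba(candidate)[0, 1]
print("Probability that this is a cat: %.0f%%" % (100 * p_cat))
```

The output is a correlation-driven probability, which is exactly the contrast with the causal analysis a lawyer applies in the next section.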

Lawyer Thinking

If a lawyer applies legal reasoning to identifying cat pictures, that lawyer will become well versed in the legal requirements as to what pictorial elements (when taken together) make up a cat picture. The lawyer will then look at a proposed cat picture and review each of the elements in the picture as it relates to each of the legally cited elements that make up a cat and come up with a statement like, “Because the picture shows an entity with pointy ears, fur, and soul-penetrating eyes, this leads to the conclusion that this is a picture of a cat.”

Unlike in machine learning, the room for error here does not lie in the probabilistic fit of the legally cited cat elements to the proposed cat picture. The room for error is in the lawyer's interpretation of the cat elements as they relate to the proposed cat picture. This is because the lawyer uses a causal analysis to come to his or her conclusion, unlike AI, which uses probability. Law is causal. To win a personal injury or contracts case, the plaintiff needs to show that a breach of duty or of contractual performance caused damages.

For criminal cases, the prosecutor needs to demonstrate that a person with a certain mental intent took physical actions that caused a violation of law. Probability appears in the law only when it comes to picking the winner in a court case. In civil cases, the plaintiff wins with “a preponderance of the evidence” (51 percent or better). If it is a criminal case, the prosecution wins if the judge or jury is convinced “beyond a reasonable doubt” (roughly 98 percent or better). Unlike in machine learning, probability is used to determine the success of the causal reasoning, and is not used in place of causal reasoning.

Lawyer or Machine?

Whether a trial hinges on a causal or probabilistic analysis may seem like a philosophical exercise devoid of any practical impact. It’s not. A causal analysis looks at causation. A probabilistic analysis looks at correlation. Correlation does not equal causation. For example, just because there is a strong correlation between an increase in ice cream sales and an increase in murders doesn’t mean you should start cleaning out your freezer.

I don’t think we want legal analysis to change from causation to correlation, so until machine learning can manage a true causal analysis, I don’t think we want AI acting like lawyers. However, AI is still good at a lot of other things at Kyrio and CableLabs. Subscribe to our blog to learn more about what we are working on in the field of AI at CableLabs and Kyrio.



Video

The Near Future of AI Media

Eric Klassen
Executive Producer

Dec 7, 2017

CableLabs is developing artificial intelligence (AI) technologies that are helping pave the way to an emerging paradigm of media experiences. We’re identifying architecture and compute demands that will need to be handled in order to deliver these kinds of experiences, with no buffering, over the network.

The Emerging AI Media Paradigm

Aristotle ranks story as the second-best way to arrive at empirical truth and philosophical debate as the first. Since the days of ancient oral tradition, storytelling practices have evolved, and in recent history, technology has brought story to the platforms of cinema and gaming. Although the methods to create story differ on each platform, the fact that story requires an audience has always kept it separate from the domain of Aristotle’s philosophical debate…until now.

The screen and stage are observational: they're by nature something separate and removed from causal reality. For games, the same principle applies: a child observing an anthill can use a stick to poke and prod at the ants to see how they respond to the interaction, but the experience remains observational, and the practices that have made gaming work until now have been built upon that principle.

To address how AI is changing story, it’s important to understand that the observational experience has been mostly removed with platforms using six degrees of freedom (6DoF). Creating VR content with observational methods is ultimately a letdown: the user is present and active within the VR system, and the observational methods don’t measure up to what the user’s freedom of movement, or agency, is calling for. It’s like putting a child in a playground with a stick to poke and prod at it.

AI story is governed by methods of experience, not observation. This paradigm shift requires looking at actual human experience, rather than the narrative techniques that have governed traditional story platforms.

Why VR Story is Failing

The HBO series Westworld is about an AI-driven theme park with hundreds of robot "Hosts" that are indistinguishable from real people. Humans are invited to indulge their every urge, killing and desecrating the Hosts as they wish, since none of their actions have any real-world consequence. What Westworld misses, despite a compelling argument from its writers, is that in reality humans are fragile, complex social beings rather than sociopaths, and when given a choice, becoming more primal and brutal is not a trajectory for a better human experience. What is real about Westworld, however, is the mythic lessons that the story handles: the seduction of technology, the risk of humans playing God, and overcoming controlling authority, to name a few.

The same experiential principle applies to superheroes or any kind of fictional action figure: being invincible or having superhuman powers is not a part of authentic human becoming. These devices work well for teaching about dormant psychological forces, but not for the actual assimilation of them, and this is where the confusion around VR story begins.

Producing narrative experiences for 6DoF, including social robotics, means handling real human experience, and that means understanding a person’s actual emotion, behavior, and perception. Truth is being arrived at not by observing a story, but rather, truth is being assimilated by experience.

A New Era for Story

The most famous scene in Star Wars is when Vader is revealed as Luke's father. Luke makes the decision to commit suicide rather than join the dark side with his father. Luke miraculously survives his fall into a dark void, is rescued by Leia, and ultimately emerges anew. The audience naturally understands the evil archetype of Vader, the good archetype of Obi-Wan, and Luke's epic sacrifice of choosing principle over self in this pivotal scene. The archetypes and the sacrifice are real, and that's why they work for Star Wars or for any story and its characters, but Jedi are not real. Actually training to become a Jedi is delusional, but the archetype of a master warrior in any social domain is very real.

And this is where it gets exciting.

Story, it turns out, was not originally intended for passive entertainment; rather, it was a way to guide and teach about actual experience. The concept of archetypes goes back to Plato, but Carl Jung and Joseph Campbell made them popular, and it was Hollywood that adopted them for character development and screenwriting. According to Jung, archetypes are primordial energies of the unconscious that move into the conscious. In order for a person to grow and build a healthy psyche, he or she must assimilate the archetype that's showing up in his or her experience.

The assimilation of archetypes is a real experience, and according to Campbell, the process of assimilation is when a person feels most fully alive. Movies, being observed by the audience, are like a mental exercise for how a person might go about that assimilation. If story is to scale on the platforms of VR, MR (mixed reality) and social robotics, the user must experience archetypal assimilation to a degree of reality that the flat screen cannot achieve.

But how? The scope of producing for user archetypes is massive. To really consider doing that means going full circle back to Aristotle's philosophy and its offspring disciplines of psychology, theology, and the sciences - all of which have their own thought leaders, debates, and agreements on what truth by experience really means. The future paradigm of AI story, it seems, includes returning to the Socratic method of questioning, probing, and arguing in order to build critical thinking skills and self-awareness in the user. If AI can be used to achieve a greater reality in that experience, then this is the beginning of a technology that could turn computer-human interaction into an exciting adventure into the self.

You can learn more about what CableLabs is working on by watching my demonstration, in partnership with USC's Institute for Creative Technologies, on AI agents.

Eric Klassen is an Executive Producer and Innovation Project Engineer at CableLabs. You can read more about storytelling and VR in his article in Multichannel News, "Active Story: How Virtual Reality is Changing Narratives." Subscribe to our blog to keep current on our work in AI. 

Education

Human and Computer Cognition

Sep 7, 2016

In my mind, the future is not in how we interact with computers, but how computers interact with us, and how lifelike we can make the series of connected artificial intelligence that will become a part of our daily lives.

What do I mean by Artificial Intelligence?

My view of AI is a program capable of some level of autonomy and intelligence, not a super-intelligent system of the sort Elon Musk seems to fear appearing in the near future. That level of intelligence, a true simulation of the human mind, is probably impossible. Of course, that doesn’t mean that our machines do not and will not ever have any degree of intelligence. In the words of Alan Turing, “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?” That is, just because a computer’s thought won’t resemble that of a human doesn’t mean that it won’t be, on some level, intelligent. Machine intelligence is already superior to human intelligence in recall, math, and logic, so we should see it as a parallel to human intelligence rather than a replacement for it.

What will that look like?

Already, people are becoming accustomed to the ease of use an intelligent system can provide. After just a few days of using the Amazon Echo, a voice-enabled wireless speaker, trying to change music with anything else feels clunky and unusable by comparison. In the future, we will likely see technologies like this extended to entire homes, with interconnected appliances controlled by a user's voice and capable of learning from that user's routines and daily actions. Obviously, privacy and security are major concerns, but they are not the focus here.

Cognitive Control

As part of my summer internship, I was tasked with attempting to fly a small consumer drone using an $800 cognitive headset. Weeks of experimenting with third party software ended with me only able to direct the drone forward and backward, and not with any real precision. Other attempts at cognitive control of remote vehicles seem to have been largely successful only when the pilot was trained in meditation or other cognitive discipline. I did have the chance to demonstrate the headset on a coworker’s young son, and he had no difficulty at all using multiple inputs to interact with a simulated object. While the technological limitations are clearly the largest hurdle that still needs to be overcome, the mental limitations of the user cannot be disregarded. Perhaps younger generations, growing up with easy access to this technology, will have an easier time interacting with it, but either way, I believe that widespread cognitive control is still many years away.


Can Cable Contribute?

Most intelligent products today are dependent on an internet connection. The Amazon Echo, for example, quickly becomes a fancy paperweight without a constant, strong connection. Any additional connected device will only increase the homeowner’s bandwidth usage. If these intelligent devices eventually achieve mass adoption, we will need much more powerful and reliable networks than we have today.

The Big Picture

Artificial intelligence has the potential to radically improve our standard of living across the board. Robots are already replacing humans in menial production tasks, so why couldn’t an AI eventually replace an accountant or data analyst? The obvious fear this sort of thinking creates is that we eventually end up in a world similar to that envisioned by Aldous Huxley in his famous novel “Brave New World,” one in which humanity has become a race of technicians, maintaining machines that produce their art and music. Personally, I don’t think we have anything to fear, at least in terms of our total replacement by machines. In the words of Thomas J. Watson Jr, the namesake of IBM’s famous Watson supercomputer,

“Computing will never rob man of his initiative or replace the need for creative thinking. By freeing man from the more menial or repetitive forms of thinking, computers will actually increase the opportunities for the full use of human reason.”

Artificial intelligence will never be able to fully replace our creativity, spontaneity, and compassion, much in the same way that no human will ever have the same instant recall and data analysis abilities of a powerful computer. As such, the AI of the future will work in concert with us, its unparalleled computational power combined with our creativity and problem solving.

 

Sean Fernandes is currently in his third year of undergraduate studies at the University of British Columbia majoring in Cognitive Systems, a mixture of Computer Science, Psychology, and Philosophy, exposing him to a number of perspectives on the future of our interactions with computers and their implications.

 

 

Video

Active Story: How Virtual Reality is Changing Narratives

Eric Klassen
Executive Producer

Nov 3, 2015

I love story; it's why I got into filmmaking. I love the archetypal form of the written story, the many compositional techniques to achieve a visual story, and using layers of association to tell story with sound. But most of all, I love story because of its capacity to teach. Telling a story to teach an idea, it has been argued, is the most fundamental technology for social progress.

When the first filmmakers were experimenting with shooting a story, they couldn’t foresee using the many creative camera angles that are used today: the films more resembled watching a play. After decades of experimentation, filmmakers discovered more and more about what they could do with the camera, lighting, editing, sound, and special effects, and how a story could be better told as a result. Through much trial and error, film matured into its own discipline.

Like film over a century ago, VR is facing unknown territory and needs creative vision balanced with hard testing. If VR is to be used as a platform for story, it’s going to have to overcome some major problems. As a recent cover story from Time reported, gaming and cinema story practices don’t work in virtual reality.

VR technology is beckoning storytellers to evolve, but there’s one fundamental problem standing in the way: the audience is now in control of the camera. When the viewer is given this power, as James Cameron points out, the experience becomes a gaming environment, which calls for a different method to create story.

Cameron argues that cinematic storytelling can't work in VR: it's simply a gaming environment with the goggles replacing the POV camera and joystick. He's correct, and as long as the technology stays that way, VR story will be no different than gaming. But there's an elephant in the room that Cameron is not considering: the VR wearable is smart. The wearable is much more than just a camera and joystick; it's also a recording device for what the viewer is doing and experiencing. And this changes everything.

If storytelling is going to work in VR, creatives need to make a radical paradigm shift in how story is approached. VisionaryVR has come up with a partial solution to the problem, using smart zones in the VR world to guide viewing and control the playback of content, but story is still in its old form. There's not a compelling enough reason to move from the TV to the headset in this case. But they're on the right track.

Enter artificial intelligence.

The ICT at USC has produced a prototype of a virtual conversation with the Holocaust survivor Pinchas Gutter (Video). In order to populate the AI, Pinchas was asked two thousand questions about his experiences. His answers were filmed on the ICT LightStage, which also recorded a full 3D video of him. The final product, although not perfect, gives the feeling of an authentic discussion with Pinchas about his experience in the Holocaust. They've used a similar "virtual therapist" to treat soldiers with PTSD, and when using the technology alongside real treatment, they've achieved an 80% recovery rate, which is incredible. In terms of AI applications, these examples are very compelling use cases.

The near future of VR will include character scripting using this natural language application. Viewers are no longer going to be watching characters, they are going to be interacting with them and talking to them. DreamWorks will script Shrek with five thousand answers about what it’s like when people make fun of him, and when kids use it, parents will be better informed about how to talk to their kids. Perhaps twenty leading minds will contribute to the natural languaging of a single character about a single topic. That’s an exciting proposition, but how does that help story work in VR?

Gathering speech, behavioral, and emotive analytics from the viewer, and applying AI responses to it, is an approach that turns story on its head. The producer, instead of controlling the story from beginning to end, follows the lead of the viewer. As the viewer reveals an area of interest, the media responds in a way that’s appropriate to that story. This might sound like gaming, but it’s different when dealing with authentic, human experience. Shooting zombies is entertaining on the couch, but in the VR world, the response is to flee. If VR story is going to succeed, it must be built on top of real world, genuine experience. Working out how to do this, I believe, is the paradigm shift of story that VR is calling for.

If this is the future for VR, it will change entirely how media is produced, and a new era of story will develop. I’m proposing the term, “Active Story,” as a name for this approach to VR narratives. With this blog, I’ll be expanding on this topic as VR is produced and tested at CableLabs and as our findings reveal new insight.

Eric Klassen is an Associate Media Engineer in the Client Application Technologies group at CableLabs.
