
Why Consensual AI Improves Problem Solving


Bernardo Huberman
Fellow and Vice President of Core Innovation

Feb 22, 2019

A version of this blog post was published on February 21, 2019, on the S&P Global Market Intelligence site. 

We are surrounded by embedded sensors and devices with more processing power than many of the computers standing on our desks. Machine learning modules inside phones, home control systems, thermostats, and the ubiquitous voice-operated gadgets constitute a whole technological species that now coexists with us in the same Internet environment we populate with our own communication devices. These are the simple components of what is commonly called the “Internet of Things” (IoT).

The real revolution is taking place in a different, less visible setting, an industrial one that ranges from manufacturing and refineries to health care. Within these very large systems and organizations, myriads of embedded smart sensors are connected through shared APIs, leading to a new form of networked computing power that will likely dwarf what we conceive of as the present-day Internet.

The Challenges of Embedded Systems

This vast industrial array of connected sensors has many characteristics that distinguish it from the consumer smart devices with which most people are familiar. First, the pervasiveness and interconnectivity of these sensors, coupled with the unpredictability of their inputs, demand responses that are autonomous of human intervention. For instance, a fitness tracker running out of power does not call for an urgent response. By contrast, the failure or delayed emergency signal of a smart sensor controlling several refinery valves can trigger an undesirable chain reaction in other sensors and actuators, leading to consequential systemic failures.

Second, these smart sensors constitute an open, asynchronous distributed system that cannot predict the behavior of the environment in which it is embedded. The system is also decentralized, since it would be hard for a central unit to receive and transmit up-to-date information on the state of the whole system.

Third, the distributed nature of this industrial internet of things makes it open to a host of security threats, since a single breach of a component of the distributed fabric can compromise the entire system.

While it is easy to create machine learning algorithms that report on the behavior of parts of the system, it is hard for these programs to reason and act swiftly in response to the inputs and malfunctions of a large, interconnected embedded system. A notable improvement can be achieved by using Edge Computing, which entails sensing and processing the information from embedded systems in close spatial proximity to them.

Far more complicated is the aggregation of such local information at a coarser level, so that one can obtain global and timely information and take corrective action when needed. This is difficult because sensors differ in the precision and sophistication with which they sense and process data, while at times reporting faulty readings.
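To illustrate why aggregation must tolerate faulty readings, here is a minimal sketch (all names and values hypothetical, not any deployed system): a mean can be dragged arbitrarily far by a single stuck sensor, whereas a median tolerates a minority of bad values.

```python
import statistics

def aggregate_readings(readings):
    """Aggregate noisy sensor readings robustly.

    The median tolerates a minority of faulty (arbitrarily wrong)
    values; the mean can be skewed by a single outlier.
    """
    return statistics.median(readings)

# Nine sensors agree on roughly 70 degrees; one is stuck at 999.
readings = [69.8, 70.1, 70.0, 69.9, 70.2, 70.1, 69.7, 70.0, 70.3, 999.0]

print(statistics.mean(readings))    # badly skewed by the faulty sensor
print(aggregate_readings(readings)) # close to the true value
```

Real systems use far more sophisticated fusion rules, but the principle is the same: no single faulty reading should dominate the aggregate.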

Consensual AI Solutions

One way to remedy these problems is to design distributed algorithms that can cooperate in the overall task of diagnosing and acting on systems of embedded sensors. Examples include the sensing of local anomalies that are aggregated intelligently in order to decide on a given action, the collective detection of malware in parts of the network, and effective responses to predetermined traffic and content patterns, to name a few.
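One of the simplest such aggregation rules is a quorum vote over local detections. The sketch below (names and threshold hypothetical) shows how a quorum keeps a single faulty detector from triggering, or suppressing, a system-wide action.

```python
def quorum_alarm(local_flags, quorum=0.5):
    """Raise a system-wide alarm only when more than a given fraction
    of local detectors report an anomaly, so a lone faulty detector
    cannot decide the global action on its own."""
    votes = sum(bool(f) for f in local_flags)
    return votes > quorum * len(local_flags)

# One spurious detection among eight detectors: no global action.
print(quorum_alarm([True, False, False, False, False, False, False, False]))
# A genuine event seen by most detectors triggers the response.
print(quorum_alarm([True, True, True, True, True, True, False, False]))
```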

We know that such distributed systems are extremely effective at solving the global problems posed by interconnected local units because this is the manner in which humans successfully manage large distributed tasks. Organizations are created at a dizzying speed to deal with problems of control and distribution in a number of industries and services, and they function by interweaving local expertise that is able to detect and solve problems that require timely solutions, from network malfunctions to supply chain interruptions.

This distributed form of artificial intelligence is not an illusory goal. A few such systems have already been designed and tested, and have shown dramatic improvements in the times needed to solve hard problems such as optimization and graph coloring. These problems are characterized by the fact that as their size increases linearly, the time to find a solution rises exponentially. A common example is the traveling salesman problem, which can be seen as a metaphor for laying out networks in a manner that minimizes the number of traversals needed to cover multiple cities and users. As more nodes are added to such a network, the number of possible tours grows exponentially, making it impossible to find the route that minimizes the connections in any reasonable time.
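The blow-up is easy to see concretely: exhaustive search must consider every ordering of the cities, and the count of distinct tours grows factorially with their number. A minimal illustrative sketch:

```python
from itertools import permutations
from math import dist, factorial

def brute_force_tsp(cities):
    """Exact traveling-salesman solution by exhaustive search:
    fix the starting city and try every ordering of the rest."""
    start, *rest = cities
    def tour_length(order):
        path = [start, *order, start]
        return sum(dist(a, b) for a, b in zip(path, path[1:]))
    return min(permutations(rest), key=tour_length)

# Number of distinct tours for n cities: (n - 1)! / 2.
for n in (5, 10, 15, 20):
    print(n, factorial(n - 1) // 2)
```

Brute force is fine for a handful of cities, but the printed counts show why it becomes hopeless almost immediately as the instance grows.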

The beauty of cooperative systems is that once deployed, they can exhibit combinatorial implosions. These implosions are characterized by a sudden collapse in the number of possible avenues to a solution due to the effectiveness of cooperation. As a result, problems that took an enormous amount of time to solve can now be solved in linear or polynomial time. These implosions occur when both the quality and the number of messages exchanged by these agents or programs increase while they work on the solution of complex problems.

In the context of these networked embedded sensors, consensual systems have the ability to aggregate disparate, and at times incorrect, information to diagnose problems quickly and sensibly. These types of cooperative systems will allow for the control of large embedded systems without resorting to an exhaustive analysis of all the data that floods the network. Even better, one hopes that as algorithmic forms of common-sense reasoning (e.g., haven't I seen this problem before?) are developed, we will reach the dream of networks of embedded sensors controlling their systems in a way one would genuinely call intelligent.

Want to learn more about AI in the future? Subscribe to our blog. 




A Different Future for Artificial Intelligence


Bernardo Huberman
Fellow and Vice President of Core Innovation

Nov 27, 2018

Not a single day goes by without us hearing about AI. Machine learning and AI, terms that are often conflated, have become part of the lexicon of business, technology and finance. Great strides in pattern recognition and the discovery of hidden correlations in vast seas of data are fueling the enthusiasm and hopes of both the technical and business communities.

While this success is worth celebrating, we should not lose track of the fact that there are many other aspects of artificial intelligence beyond machine learning. Common-sense reasoning, knowledge representation, and inference, to name a few, are not part of today's toolbox but will have to be if we seek forms of machine intelligence that have much in common with our own.

The reason such forms of machine intelligence are not being used is the sheer difficulty of the problems they entail. Unlike the recent advances in machine learning, half a century of research in symbolic systems, cognitive psychology and machine reasoning has not produced major breakthroughs.

Intelligent Networks: A New Form of Artificial Intelligence

A more promising future can be expected once we realize that intelligence is not restricted to single brains; it also appears in groups, such as insect colonies, and in the organizations and markets of human societies. In all these cases, large numbers of agents capable of performing local tasks that can be conceived of as computations engage in collective behavior that successfully solves problems transcending the capacity of any single individual. And they often do so without global controls, while exchanging information that is imperfect and at times delayed.

Many of the features underlying distributed intelligence can be found in the computing networks that link our planet. Within these systems, processes are created or "born", migrate across networks and spawn other processes on remote computers. And as they do, they solve complex problems (think of what it takes to render a movie on your screen) while competing with other processes for resources such as bandwidth or CPU.

Interestingly, we understand the performance of distributed intelligence, both natural and artificial, much better than the workings of individual minds. This is partly due to the ease with which we can observe and measure the interactions among individuals and programs as they navigate complex informational spaces. Contrast this with the difficulty of learning about detailed cognitive processes within the human brain. And from this large body of knowledge we know that while the overall performance of a distributed system is determined by the capacity of many agents exchanging partial results that are not always optimal, success is determined by the few making the most progress per unit time (think of many agents looking for the proverbial needle in the haystack).
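This "best of many" effect is easy to simulate: when each of n independent searchers has a small per-step chance of finding the target, the time to the first success falls sharply as n grows, because the fastest searcher sets the pace. An illustrative sketch (all parameters invented):

```python
import random

def time_to_first_hit(n_agents, p_hit, rng):
    """Steps until the first of n independent searchers finds the
    target, when each succeeds with probability p_hit per step."""
    t = 0
    while True:
        t += 1
        if any(rng.random() < p_hit for _ in range(n_agents)):
            return t

rng = random.Random(42)
p = 0.01  # a hard search: a 1% chance per agent per step
for n in (1, 10, 100):
    trials = [time_to_first_hit(n, p, rng) for _ in range(500)]
    print(n, sum(trials) / len(trials))
```

A lone searcher needs on the order of 1/p steps on average, while the first success among many searchers arrives far sooner, which is exactly why the few fastest agents dominate the system's performance.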

Distributed Intelligence: Better than the Best

This suggests a promising path forward for AI: the creation of distributed intelligent programs that can sense, learn, recognize and aggregate information when deployed throughout the network in the service of particular goals. Examples include the sensing of local anomalies that are aggregated intelligently in order to decide on a given action, the collective detection of malware in parts of the network, sensor fusion, and effective responses to predetermined traffic and content patterns, to name a few.

Distributed AI is not an illusory goal. A few such systems have already been designed and tested, and have shown large improvements in the times needed to solve hard computational problems such as cryptarithmetic and graph coloring. These are problems characterized by the fact that as their size increases linearly, the time to find a solution rises exponentially. A common example is the traveling salesman problem, which can be seen as a metaphor for laying out networks in a way that minimizes the number of traversals needed to cover a number of cities and users. While there are a number of powerful heuristics for this optimization problem, for large instances one can only hope for solutions that, while not optimal, satisfy a certain number of constraints.
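Cryptarithmetic makes the exhaustive baseline concrete: assigning distinct digits to letters and testing the arithmetic explores a space that grows with the number of distinct letters. A small illustrative sketch for the puzzle TWO + TWO = FOUR:

```python
from itertools import permutations

def solve_cryptarithm():
    """Brute-force search for TWO + TWO = FOUR: assign distinct digits
    to the six letters and test the arithmetic. With k distinct letters
    the search space is P(10, k) assignments."""
    letters = "TWOFUR"
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["T"] == 0 or a["F"] == 0:  # no leading zeros
            continue
        two = 100 * a["T"] + 10 * a["W"] + a["O"]
        four = 1000 * a["F"] + 100 * a["O"] + 10 * a["U"] + a["R"]
        if two + two == four:
            return a
    return None

print(solve_cryptarithm())
```

Cooperative searchers attack the same space by exchanging partial assignments that already satisfy some constraints, pruning vast regions that a lone brute-force search must still enumerate.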

The beauty of cooperative systems is that once deployed, they can exhibit combinatorial implosions. These implosions are characterized by a sudden collapse in the number of possible avenues to a solution due to the effectiveness of cooperation, and as a result, what took exponential time to solve is now rendered in linear or polynomial time. These implosions appear as both the quality and the number of messages exchanged by AI agents increase while they work on the solution of complex problems.

In closing, the emergence of distributed AI will allow for the solution of a number of practically intractable problems, many of them connected with the smooth and safe functioning of our cable networks. Imagine applying different AI solutions to search for security anomalies and combining them in order to identify and act on them. Or monitoring distant parts of the network with different kinds of sensors whose outputs are aggregated by intelligent agents. These are just two instances of the myriad problems that could be tackled by a distributed form of artificial intelligence. The more examples we think of and implement, the closer we will get to this vision of a society of intelligent agents that, like the social systems we know, will vastly outperform the single machine learning algorithms we are so familiar with.

Subscribe to our blog to learn more about the future of artificial intelligence. 

