The Future of Network Security
I recently attended a panel discussion that considered technology evolution over the next thirty years. Of course, predicting such long-term evolution and revolution is daunting. However, it’s interesting that all three panelists chose first to look to the mid-1980s to provide guidance for forecasting the mid-2040s.
As a forward-looking security engineer, I find looking into the past a frustrating approach. In 1984, William Gibson wrote Neuromancer, predicting hackers before we had hackers. In this work of science fiction, people hack into a network represented in virtual reality to gain illicit access to information and processors. The book developed a cult following and even today is often a major inspiration for criminal hackers. Four years later, Kevin Mitnick and Robert Morris both committed what today we consider hacking. Mitnick was a hacker before we had a name for hackers – using social engineering, dumpster diving, phone phreaking, and various technical exploits, he gained access to the phone network and Digital Equipment Corporation’s computer network. He was eventually convicted of wire fraud. Contemporary with Mitnick, Robert Morris became notorious for developing the first computer worm, which disrupted large swathes of what eventually became the Internet. Morris was the first person convicted under the 1986 Computer Fraud and Abuse Act.
Strangely, many of the vulnerabilities exploited in the 1980s by Mitnick and Morris remain the vehicles for attacks today: social engineering, poor passwords, vulnerabilities in operating systems, exposed open interfaces, and more. When considering the evolution of network security over the next thirty years, it becomes easy to be very pessimistic. There have been many advances, and the tools and practices of network security have evolved. New solutions are introduced every year. Frequently, these are expensive and not widely applied. Often, new solutions are cost effective – many even reduce overall costs – yet they are not implemented or applied properly. And, often, those that actually get deployed in turn get hacked.
Security Conflicts Development
It’s important to consider why network security is challenging and why it has evolved in such fits and starts. The fundamental strategies to network security have been to limit access to resources and to minimize network connectivity. These are contrary to development of value in networks. Typically, the more people or devices that can access the resource, the greater the value of the resource. The increase in value may be exponential — Metcalfe’s Law asserts that the value of a telecommunications network is proportional to the square of the connected users of the system. Consequently, the more people and devices a network connects, the greater the value of the network. This reality creates a necessary dynamic tension that may never go away.
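Metcalfe’s Law can be made concrete with a little arithmetic: among n connected users there are n(n − 1)/2 possible pairwise connections, which is why the modeled value grows roughly with the square of the user count. A minimal illustrative sketch:

```python
# Illustrative sketch of Metcalfe's Law: the number of possible
# pairwise connections among n users is n * (n - 1) / 2, so network
# "value" under this model grows roughly with n squared.

def potential_connections(n: int) -> int:
    """Number of distinct pairs among n connected users or devices."""
    return n * (n - 1) // 2

# Ten times the users yields roughly a hundred times the connections.
for users in (10, 100, 1000):
    print(users, potential_connections(users))
```

This quadratic growth is exactly why limiting connectivity for security’s sake works against the value of the network.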
Why is this tension dynamic and necessary? Value is a neutral measurement – a network that is valuable to its creators and users may also be useful to somebody else. If so, somebody else may try to leverage that value for purposes for which the network was not created. This is what hackers really do – they take over an asset that has value so they can apply that value to their own purposes. Consequently, network security exists as an exercise in adversarial engineering. Within the enterprise or service provider, this means that as network engineers continually strive to add value and new features to networks, security engineers are always considering how others might subvert that new value and those features, and respond by implementing controls that ultimately limit network functionality.
Technology, Personal Motivation and The Business Case
There are at least three other reasons the security challenge hasn’t really been met over the past thirty years: technology, personal motivation, and the business case. I’m sure many people will find it hard to believe, but the fact is that the technology has not been available to secure networks. The problem has been our limited ability to assert strong personal and device identities for network authentication and authorization. Consider, for a moment, just how little your driver’s license has changed over the past thirty years. And consider that even with recent technologies, it’s still possible to get forged drivers’ licenses. It’s not that much different for networks – proving that a person is who you think they are, much less that the devices being used are what you expect them to be, has been very elusive. Again, there have been many advances – they just haven’t quite been sufficient.
There have been fairly cumbersome solutions to personal and network device identification, but they’ve been expensive and very limiting. Unfortunately, there really hasn’t been much personal motivation to apply these solutions. Only recently have we started to see network applications that mandate strong security. Just a few years ago, it was cheaper to use insurance or business mechanisms to address security lapses, or to do nothing at all. For example, when your credit card number is stolen, the credit card company doesn’t hold you personally liable.
Given such low personal motivation, it’s been hard for companies to support business cases to improve security. Network security engineers really work from a business approach similar to insurance: you assess risk, apply what you think are reasonable mitigations, and accept the risks that can’t reasonably be mitigated. Given the adversarial environment of network security, it should be no surprise that sometimes (maybe often) the network security engineers’ assessments are not quite what we’d wish in hindsight.
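The insurance-style reasoning above is often formalized as annualized loss expectancy (ALE): expected loss per incident times expected incidents per year, compared against what a control costs. A minimal sketch, with entirely hypothetical figures:

```python
# Hedged sketch of insurance-style risk assessment via annualized
# loss expectancy (ALE). All dollar figures and rates below are
# hypothetical, chosen only to illustrate the comparison.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = expected loss per incident * expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

def worth_mitigating(ale_before: float, ale_after: float,
                     annual_mitigation_cost: float) -> bool:
    """Apply a control only if the risk reduction exceeds its cost."""
    return (ale_before - ale_after) > annual_mitigation_cost

before = ale(50_000, 0.30)  # a $50k breach expected roughly once every 3 years
after = ale(50_000, 0.05)   # residual risk with the control in place
print(worth_mitigating(before, after, annual_mitigation_cost=10_000))
```

The adversarial twist, of course, is that the occurrence rates in such models are estimates about an intelligent opponent, which is why these assessments so often look wrong in hindsight.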
Fortunately, there are reasons to believe these problems will be solved, and that the next thirty years will therefore see dramatic improvements in the value of our networks. The fundamental technology challenges have been personal identity, software validation, and hardware validation. These are being solved. The payment and medical industries are working on very compelling solutions to prove that a person is who they claim to be, at least to a reasonable degree. Network operators will hopefully be able to leverage these abilities. We’ve had good solutions for trusted hardware and software systems for some time, but they have been somewhat expensive. The systems and solutions to make highly trusted computer software and hardware environments are becoming available now. And we are getting new tools. For example, distributed ledger technologies record transactions so that we can measure trust and reputation in new ways. The result of this technology renaissance will be a much firmer basis for trust. However, there needs to be a reason that drives application of the improving technology.
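The core property that makes distributed ledgers useful for trust is tamper evidence: each record incorporates the hash of the one before it, so altering history breaks every subsequent hash. A toy sketch of that idea (illustrative only, not a real ledger protocol):

```python
import hashlib

# Toy hash-chained ledger: each record's hash covers the previous
# record's hash, so tampering with any earlier entry invalidates
# everything that follows. Illustrative only; real distributed
# ledgers add consensus, signatures, and replication on top.

def record_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

GENESIS = "0" * 64  # conventional all-zero starting hash

def build_chain(payloads):
    chain, prev = [], GENESIS
    for p in payloads:
        h = record_hash(prev, p)
        chain.append((p, h))
        prev = h
    return chain

def verify_chain(chain) -> bool:
    prev = GENESIS
    for payload, h in chain:
        if record_hash(prev, payload) != h:
            return False
        prev = h
    return True

ledger = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(verify_chain(ledger))                        # intact chain verifies
ledger[0] = ("alice pays bob 500", ledger[0][1])   # rewrite history
print(verify_chain(ledger))                        # verification now fails
```

It is this cheap, mechanical verifiability that lets reputation be measured rather than merely asserted.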
Personal motivation is rising. First, more and more of our financial transactions are done electronically. People care about their money, and that drives strong motivation to do what is necessary to protect it. However, there are new motivators. With the advent of connected cars, homes, and medical devices, the nature of attacks can be much more personal. Targeted attacks at individuals are not new, but with the Internet of Things where everything is connected, the risks are both more direct and more widely applicable.
As a consequence, the business case for strong security is becoming much more compelling. As everything is connected, hacking becomes highly automated. One organization, RouterCheck, has even coined the phrase “hack of mass destruction,” defining it as: “A computer hacking attack in which a large group of people are targeted based on their use of homogeneous computer networking equipment.” Furthermore, as targeted attacks become more common, negligence will take on a much more personal and measurable character. Between the industrialization of cyber crime and increased liability for people’s well-being, the business case for strong network security becomes much more tenable.
Can We See 2040?
So, what does the future look like? Mostly, it looks promising. Both the tools and the motivation to secure networks are becoming increasingly available. In fact, when you consider the growth rate of broadband in terms of customers and bandwidth against the growth of cyber crime, it seems that network operators have been gaining ground for a few years. Strong network authentication and authorization will capitalize on this trend. However, network security will remain challenging. The value of our networks will continue to grow; we will use them in increasingly interesting ways. There will continue to be a drive to subvert the network for nefarious purposes. The dynamic tension between network engineering and network security will continue. Network operators will continue to conduct business in an adversarial environment. The need for network security will continue to be driven by human nature.
OPNFV Builds Momentum With First Code Release
Today sees the first OPNFV release, known as ‘Arno’ (OPNFV releases are named after rivers), which the community has been busily creating since it was launched last September. In my blog celebrating the OPNFV launch I outlined the importance of open source in stimulating innovation and accelerating progress on implementation. At CableLabs we are very keen on the open source approach because it enables the industry to collaborate on building common features while avoiding duplication of effort, letting everyone focus on product development and service creation. We have been eagerly awaiting this first release, as it provides the foundation for our virtualization projects, and we will be proposing to bring our Virtual Business CPE APIs into the next OPNFV release.
The initial scope of OPNFV is focused on the NFV Infrastructure layer of the ETSI NFV Architectural Framework as shown below:
Bounding the scope of OPNFV to the NFVI in this first phase has enabled this new global community to focus on rapidly creating a software development framework, and has allowed participants to get to know each other and build awareness of this new topic through deeper involvement in a smaller set of projects.
We congratulate the OPNFV community on this achievement. The community has solved a lot of open source integration problems and has created and debugged toolsets, work that would otherwise have been done independently and repeatedly in different labs. OPNFV is proving the value of the industry working together to do the heavy lifting once.
There was a lively debate at the first OPNFV Hackfest in Prague on whether it was better to include more features in the initial release and allow more time for development, or to release earlier with fewer features. We argued for early release to enable the industry to become familiar with the tools and to start to accrue learning as quickly as possible. This release will result in more developers becoming familiar with the OPNFV platform more quickly, enabling them to contribute to future OPNFV releases as well as to their own proprietary innovation on top of the platform.
The OPNFV Arno release enables the industry to create NFV integration platforms according to a common baseline, thereby accelerating collaboration and shared learning. Full details of what’s included in the Arno release can be found on the OPNFV website, but as a quick summary, it includes the base operating system (CentOS Linux), SDN controller (OpenDaylight Helium), and infrastructure controller (OpenStack Juno).
CableLabs considers open source and formal standards processes to be complementary, and we are actively involved in both. We actively contributed to the OPNFV ‘Pharos’ testbed infrastructure project, including contributing governance documents based on our vendor-neutral test and certification experience. We are involved in the new OPNFV Certification &amp; Compliance Committee, and we are building OPNFV reference platforms at our Sunnyvale, CA and Louisville, CO locations to integrate and validate our collaborative open source development on behalf of the cable industry. We will be providing feedback to OPNFV and the ETSI NFV ISG as well as contributing our own code.
The next few months are going to be very exciting as we begin to see the ETSI NFV ISG Architectural Framework brought to life through the efforts of the OPNFV community and we’ll be able to share insights on NFV performance and interoperability because we’ll all be using a common infrastructure configuration.
Don Clarke is a Principal Architect at CableLabs working in the Virtualization and Network Evolution group.