Adversarial Engineering

Steve Goeringer
Distinguished Technologist, Security

Jul 13, 2016

Security engineering is one of the few technical endeavors in which you deal with an adversary; electronic warfare and fire prevention are among the few others. Working against an adversary in this way is like playing a twisted game of chess. As the game begins, the security engineer is aware of most of the board and most of the pieces, while the attacker discovers the board and pieces as the game is played. Both players invent new rules or change old ones throughout the game without telling each other. Either player may add new squares to the board or new pieces to the game, or remove them. The attacker’s twisted advantage is that they can sometimes use the security engineer’s pieces.

Security engineering makes for a rough game, and the stakes are very high. Revenue loss and brand damage to companies can be huge. The Ponemon Institute released a study in June 2016 indicating that the average cost of a data breach is $4 million, while the average cost per lost or stolen record is $158. Of course, the actual and incidental damages of each particular breach are unique. The largest security events affect many millions of customers. Information is Beautiful provides a fascinating interactive graphic showing the history of the world’s biggest data breaches since 2004.

All in the mindset

Ultimately, attackers hijack the intended user experience to achieve personal goals — financial gain, extortion, fame, fun, harm. How does the security engineer cope? The security engineer needs to approach work with the mindset of their adversary – the attacker. I like to call this approach adversarial engineering. An adversarial engineer focuses on how to misuse or change a service or product with an eye towards what attackers (various kinds of cyber criminals) may want to do. This way, the adversarial engineer can better integrate mitigations and controls to keep hackers out.

Tools and strategies for adversarial engineering

The adversarial engineer understands and identifies security problems by thinking offensively and creatively about how to get a network or IT resource to provide access to data that shouldn’t be available or provide functionality that isn’t intended. The adversarial engineer employs some great tools and strategies, including:

  • Threat analysis — The adversarial engineer creates models of the architecture used to provide services. Hacking techniques can then be postulated on how malcontents might try to access the network, servers, databases, and other resources used to provide services. Threat vectors are identified so they can be systematically addressed, ensuring each vector is faced with multiple controls and mitigations to prevent hackers from achieving their goals.
  • Misuse cases — Network and IT services are dynamic and fluid, reacting to events and changing state as users interact with resources. Service designers create use cases that define how resources should behave and be used. The adversarial engineer needs to consider these use cases and develop “misuse” cases for each one. Once misuse cases are crafted, multiple controls and mitigations are considered and integrated into the overall solution to foil bad actors from hijacking user experiences and doing unintended activities.
  • Vulnerability scanning — Even well-designed services can be vulnerable. The adversarial engineer discovers what they may have missed the same way hackers might — they use a variety of tools to scan network interfaces and computer resources for vulnerabilities. Classic examples of such tools are nmap, developed by Gordon Lyon (aka Fyodor Vaskovich); Metasploit, developed by HD Moore (now available from Rapid7); and Nessus (from Tenable Network Security). There are dozens of other tools available, sometimes packaged into entire environments such as Kali Linux (offered by Offensive Security). Some very advanced scanners look for completely new kinds of vulnerabilities using code analysis or by performing fuzzing.
  • Penetration testing — Once vulnerabilities are discovered, the engineer needs to go one more step. They need to find how vulnerabilities might be exploited by doing penetration testing. This is where the craft of adversarial engineering can get deeply technical. Hand crafted investigation is often applied. However, many penetration testing tools are packaged in the same environments as mentioned above under vulnerability scanning.
  • Pervasive monitoring — Not all intrusions can be stopped – the Internet, by nature and design, is a fairly open environment. Pervasive monitoring keeps tabs on services and their associated resources, continually watching to ensure that things are being used as expected and performing as designed. This helps to minimize the time intruders are in systems or networks and potentially decrease the damage done by intrusions. Often, hackers will find vulnerabilities that were not discovered by the adversarial engineer and new controls and mitigations will be integrated into the service infrastructure.
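To make the vulnerability-scanning step above concrete, here is a minimal, hypothetical sketch of the core idea behind a TCP connect scan — probing ports to see which accept a connection. Real tools such as nmap and Nessus are vastly more capable; the function name and the local demo listener are illustrative assumptions, not part of any real tool.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection (a crude scan)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demo against a listener we control, so the result is deterministic.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))            # the OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]

found = scan_ports("127.0.0.1", [open_port])
listener.close()
print(found == [open_port])  # → True
```

The adversarial engineer runs this kind of probe against their own infrastructure first, before an attacker does.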

Mitigations and controls

What are the mitigations and controls that adversarial engineers consider? There are literally hundreds. The US government identifies over 300 fundamental controls in NIST Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations (“800-53”). The controls fall into several families, summarized from 800-53 in the table below. Not all of these are applicable to commercial services, and commercial services often need more than what the government applies. The Center for Internet Security (CIS) maintains a more concise list, available on its website, that provides a minimum framework for effective cyber defense.



Figure 1: NIST 800-53 security control identifiers and family names

Applications must be considered as well. A good starting point is the Open Web Application Security Project (OWASP), which, similar to CIS, maintains a top 10 list.

The challenge in applying network and application controls is achieving defense in depth. A robust security strategy requires deploying controls and mitigations in multiple dimensions — in line, at multiple layers, and even in time. The adversarial engineer assumes controls may be compromised, and therefore tries to contain or at least slow perpetrators so they can be recognized and stopped.

Pervasive monitoring enables an agile operations strategy referred to as the “kill chain”. This is a “special forces”-inspired approach in which you design multiple points in your strategy where adversaries can be monitored, intercepted, and stopped. The idea was initially documented by Lockheed Martin as a way to proactively detect and respond to persistent threats. Today, it is an increasingly applied strategy for mounting an agile response to the ever-evolving tactics and strategies of hackers.
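As a hypothetical sketch of how kill-chain thinking turns monitoring into operations, alerts from different sensors can be tagged with the Lockheed Martin stage they correspond to, and responders can prioritize the earliest stage observed. The stage names follow Lockheed Martin's published model; the alert format and function are illustrative assumptions.

```python
# Lockheed Martin's seven kill-chain stages, in attack order.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command-and-control", "actions-on-objectives",
]

def earliest_stage(alerts):
    """Return the earliest kill-chain stage seen in the alerts —
    the best place to break the chain — or None if nothing matched."""
    seen = [KILL_CHAIN.index(a["stage"]) for a in alerts if a["stage"] in KILL_CHAIN]
    return KILL_CHAIN[min(seen)] if seen else None

# Two hypothetical alerts from different monitoring points.
alerts = [
    {"source": "ids",   "stage": "delivery"},
    {"source": "proxy", "stage": "command-and-control"},
]
stage = earliest_stage(alerts)
print(stage)  # → delivery
```

Stopping the intrusion at "delivery" is far cheaper than cleaning up after "actions-on-objectives", which is the whole point of designing multiple interception points.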

It’s not ALL about bad actors

Network equipment fails. Applications do not always behave as designed. Mistakes are made. Sometimes, network attackers will at least partially succeed. Consequently, good networks are actually designed to fail well. The adversarial engineer also considers how resilient the network and security controls must be to achieve design goals. Systems and software will be deployed redundantly, sometimes to extreme levels, so that if something does fail, it doesn’t completely take down services. And, because things do break in the real world, graceful recovery after disruptions and outages must be designed in.

What about CableLabs?

CableLabs ensures cable operators have multiple tools to apply adversarial engineering practices. For example,

  • DOCSIS® technology includes three areas of control and mitigation: authentication, encryption, and integrity. And, DOCSIS implementations allow for controls both in the network and also at the home or business.
  • CableLabs is developing new specifications that also provide for secure devices in the home, including access points, home routers, and even IoT devices.
  • CableLabs is developing extremely high speed wireless environments to extend the reach of network operators into communities, cities, and campuses, and security is a core consideration of these emerging technologies.
  • CableLabs is considering new ways to secure applications and hardware in virtualized environments and clouds.

Security engineering is challenging given the adversarial nature of the Internet, and cable technology is meeting that challenge.


The Future of Network Security

Steve Goeringer
Distinguished Technologist, Security

May 24, 2016

I recently attended a panel discussion that considered technology evolution over the next thirty years. Of course, predicting such long-term evolution and revolution is daunting. However, it’s interesting that all three panelists chose first to look back to the mid-1980s for guidance in forecasting the mid-2040s.

As a forward-looking security engineer, I find looking into the past a frustrating approach. In 1984, William Gibson wrote Neuromancer, predicting hackers before we had hackers. In this work of science fiction, people hack into a network represented in virtual reality and then gain illicit access to information and processors. The book developed a cult following and even today is often a major inspiration for criminal hackers. Four years later, Kevin Mitnick and Robert Morris were both convicted of what today we consider hacking. Mitnick was a hacker before we had a name for hackers – using social engineering, dumpster diving, phone phreaking, and various technical exploits, he gained access to the phone network and Digital Equipment Corporation’s computer network. He was eventually convicted of wire fraud. Contemporary with Mitnick, Robert Morris became notorious for developing the first computer worm, which disrupted large swathes of what eventually became the Internet. Morris was the first person convicted under the 1986 Computer Fraud and Abuse Act.

Strangely, many of the vulnerabilities used in the 1980s by Mitnick and Morris remain vehicles for exploits today: social engineering, poor passwords, vulnerabilities in operating systems, exposed open interfaces, and more. When considering the evolution of network security over the next thirty years, it becomes easy to be very pessimistic. There have been many advances, and tools and practices in network security have evolved. New solutions are introduced every year. Frequently, these are expensive and not widely applied. Often, new solutions are cost effective – many even reduce overall costs – but they are not implemented or applied properly. And, often, those that actually get deployed in turn get hacked.


Security Conflicts Development

It’s important to consider why network security is challenging and why it has evolved in such fits and starts. The fundamental strategies of network security have been to limit access to resources and to minimize network connectivity. These run contrary to the development of value in networks: typically, the more people or devices that can access a resource, the greater the value of that resource. The increase in value may be quadratic — Metcalfe’s Law asserts that the value of a telecommunications network is proportional to the square of the number of connected users. This reality creates a necessary dynamic tension that may never go away.
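As a toy illustration of Metcalfe's Law as stated above (the user counts are hypothetical, and Python is used only for the arithmetic), doubling the connected-user count quadruples the modeled network value:

```python
def metcalfe_value(n):
    """Toy model of Metcalfe's Law: network value proportional to n squared."""
    return n * n

# Doubling the user base quadruples the modeled value.
ratio = metcalfe_value(2_000) / metcalfe_value(1_000)
print(ratio)  # → 4.0
```

This quadratic growth is exactly why limiting connectivity for security's sake collides with the economics of building networks.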

Why is this tension dynamic and necessary? Value is a neutral measurement — a network that is valuable to its creators and users may also be useful to somebody else. If so, that somebody may try to leverage the network’s value for purposes for which it was not created. This is what hackers really do: they take over an asset that has value so they can apply that value to their own purposes. Consequently, network security exists as an exercise in adversarial engineering. Within the enterprise or service provider, this means that as network engineers continually strive to add value and new features to networks, security engineers are always considering how others might subvert those new features, and they respond by implementing controls that ultimately limit network functionality.


Technology, Personal Motivation and The Business Case

There are at least three other reasons the security challenge hasn’t really been met over the past thirty years: technology, personal motivation, and the business case. I’m sure many people will find it hard to believe, but the fact is that the technology to secure networks has not been available. The problem has been our limited ability to establish strong personal and device identities for network authentication and authorization. Consider, for a moment, just how little your driver’s license has changed over the past thirty years. And consider that even with recent technologies, it’s still possible to get forged drivers’ licenses. It’s not that much different for networks: proving that a person is who you think they are, much less that the devices being used are what you expect them to be, has been very elusive. Again, there have been many advances – they just haven’t quite been sufficient.

There have been solutions for personal and network device identification, but they’ve been fairly cumbersome, expensive, and very limiting. Unfortunately, there really hasn’t been much personal motivation to apply them. Only recently have we started to see network applications that mandate strong security. Just a few years ago, it was cheaper to use insurance or business mechanisms to address security lapses, or to do nothing at all. For example, when your credit card number is stolen, the credit card company doesn’t hold you personally liable.

Given a low personal motivation, it’s been hard for companies to support business cases to improve security. Network security engineers really work on a business approach similar to insurance; you assess risk, apply what you think are reasonable mitigations and accept the risks that can’t be reasonably mitigated. Given the adversarial environment of network security, it should be no surprise that sometimes (maybe often), the network security engineers’ assessments are not quite what we’d wish in hindsight.

Fortunately, there is reason to believe these challenges will be solved, and that the next thirty years will therefore see dramatic improvements in the value of our networks. The fundamental technology challenges have been personal identity, software validation, and hardware validation, and these are being solved. The payment and medical industries are working on very compelling solutions to prove that a person is who they claim to be, at least to a reasonable degree. Network operators will hopefully be able to leverage these abilities. We’ve had good solutions for trusted hardware and software systems for some time, but they have been somewhat expensive; the systems and solutions to make highly trusted computer software and hardware environments are becoming available now. And we are getting new tools. For example, distributed ledger technologies record transactions so that we can measure trust and reputation in new ways. The result of this technology renaissance will be a much firmer basis for trust. However, there still needs to be a reason that drives adoption of the improving technology.

Personal motivation is rising. First, more and more of our financial transactions are done electronically. People care about their money, and that drives strong motivation to do what is necessary to protect it. However, there are new motivators. With the advent of connected cars, homes, and medical devices, the nature of attacks can be much more personal. Targeted attacks at individuals are not new, but with the Internet of Things where everything is connected, the risks are both more direct and more widely applicable.

As a consequence, the business case for strong security is becoming much more compelling. As everything is connected, hacking becomes highly automated. One organization, RouterCheck, even coined the phrase “hack of mass destruction”, defining it as “a computer hacking attack in which a large group of people are targeted based on their use of homogeneous computer networking equipment.” Furthermore, as targeted attacks become more common, negligence will take on a much more personal and measurable character. Between the industrialization of cyber crime and increased liability for people’s well-being, the business case for strong network security becomes much more tenable.


Can We See 2040?

So, what does the future look like? Mostly, it looks promising. Both the tools and the motivation to secure networks are becoming increasingly available. In fact, when you consider the growth rate of broadband in terms of customers and bandwidth against the growth of cyber crime, it seems that network operators have been gaining ground for a few years. Strong network authentication and authorization will capitalize on this trend. However, network security will remain challenging. The value of our networks will continue to grow; we will use them in increasingly interesting ways. There will continue to be a drive to subvert the network for nefarious purposes. The dynamic tension between network engineering and network security will continue. Network operators will continue to perform business in an adversarial environment. The need for network security will continue to be driven by human nature.