It can sometimes feel like cyber security is just too complex for us to understand and manage properly. With such a fast-moving and technically complicated topic, how can we accurately assess our cyber risks, and then manage them effectively?
The view that cyber risk is just too complex to manage is commonplace and understandable. But in this post, we’ll explain why we’re optimistic that cyber risks can be managed. A few months ago, Kate R and Geoff E introduced the concept of complexity and discussed how it might apply to cyber security. Here, we’ll build on that idea by looking at what we can learn from the safety engineering world – another highly complex domain.
Are cyber risks unpredictable?
Suppose we wanted to somehow calculate the ‘riskiness’ of our digital technology, and then use this to predict how a system could break or be attacked. The first thing to consider is the huge list of variables we’d have to deal with. There are the vulnerabilities in our technology to analyse and the intentions of threat actors to assess, along with a whole load of other variables, including how much money our organisation has to spend on cyber security. There are techniques for merging these assessments, which are fine in themselves, but they quickly become unworkable in the face of large, interconnected systems.
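To make that unworkability concrete, here is a deliberately simplified sketch. It is not an NCSC method, and every component name and score in it is invented for illustration: it shows a basic component-by-component calculation, and then how quickly the number of interactions you would need to assess grows once components are connected to one another.

```python
from math import comb

# Invented, illustrative components with made-up likelihood (0-1) and impact (1-5) scores.
components = {
    "web_server":   {"likelihood": 0.30, "impact": 4},
    "database":     {"likelihood": 0.10, "impact": 5},
    "admin_laptop": {"likelihood": 0.25, "impact": 3},
    "mail_gateway": {"likelihood": 0.40, "impact": 2},
}

# A component-by-component view is straightforward: one score per component.
for name, c in components.items():
    print(f"{name}: risk score = {c['likelihood'] * c['impact']:.2f}")

# But once components interact, the assessments to be merged grow combinatorially:
# every pair (let alone triple, or chain) of connected components is a potential
# interaction to reason about.
for n in (4, 10, 50, 200):
    print(f"{n} components -> {comb(n, 2)} pairwise interactions alone")
```

Even this toy example only counts pairs; real estates also involve longer chains of dependency, which is why the approach stops scaling long before you run out of components.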
The problem is that we can’t precisely calculate cyber risk, because complex, interconnected systems are intrinsically unpredictable: their behaviour can’t simply be derived from the behaviour of their individual parts. The only way to build up a detailed understanding of how these kinds of system will act is to observe their behaviour, or to model it, whilst being aware of the limitations of the models you use.
For cyber systems, is failure inevitable?
On March 28, 1979, the second reactor of the Three Mile Island nuclear power station suffered a partial meltdown, leading to radioactive substances being released into the nearby air and water. In the investigations that followed the meltdown, the sociologist Charles Perrow identified a number of properties of the reactor system which meant that failure was inevitable. Two of these properties are particularly relevant to cyber risk.
The first of these is interactive complexity. In a nutshell, this refers to systems where there are a lot of connections between individual components. The second property is tight coupling, which refers to the speed with which actions propagate through the overall system. If something happening in one part of the system affects another part very quickly, the system can be called tightly coupled. Perrow argued that in systems with a high degree of interactive complexity and tight coupling, large-scale failure is unavoidable.
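One crude way to make these two properties concrete for a digital estate is to think of it as a dependency graph. The sketch below is purely illustrative (the components, connections and thresholds are all invented, and this is not how Perrow measured anything): the number of connections gives a rough feel for interactive complexity, and how far and how fast a single failure can spread gives a rough feel for tight coupling.

```python
from collections import deque

# Invented, illustrative dependency graph: an edge means "a problem here can
# quickly affect the component it points to".
system = {
    "auth_service": ["web_app", "vpn", "admin_portal"],
    "web_app":      ["database", "payment_api"],
    "database":     ["reporting", "backup"],
    "vpn":          ["admin_portal"],
    "payment_api":  [],
    "admin_portal": ["database"],
    "reporting":    [],
    "backup":       [],
}

# Interactive complexity (very roughly): how interconnected is the system?
edges = sum(len(deps) for deps in system.values())
print(f"components: {len(system)}, connections: {edges}")

# Tight coupling (very roughly): starting from one failure, how much of the
# system is reachable, and in how few steps?
def propagation(start):
    seen, frontier, max_depth = {start}, deque([(start, 0)]), 0
    while frontier:
        node, depth = frontier.popleft()
        max_depth = max(max_depth, depth)
        for nxt in system[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return len(seen) - 1, max_depth

affected, depth = propagation("auth_service")
print(f"a failure in auth_service can reach {affected} other components "
      f"within {depth} steps")
```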
Perrow identified these properties by looking at industrial systems and the organisations which manage them. As a result, the kinds of failure he was thinking about were accidents, or failures of safety. But let’s now apply those two properties to cyber security. Digital systems are both interactively complex and tightly coupled, because that is precisely what we need them to be. They are the tools we use to make organisations responsive and efficient. As a result, applying Perrow’s theory of accidents to cyber security tells us that cyber security breaches are basically inevitable, as well as being unpredictable.
Do we give up trying to manage cyber risk?
We’ve just argued that cyber risks are unpredictable and that breaches are inevitable. What options does this leave us? Do we follow the thrust of Douglas and Wildavsky’s statement in their book Risk and Culture:
“Can we know the risks we face, now or in the future? No, we cannot; but yes, we must act as if we do.”
Not quite. Practically, this perspective on risk can feel pretty disempowering, even if it is often accurate. But it also conceals a crucial part of the story: techniques have already been developed that can help you analyse the kind of complexity Perrow identified in the systems you manage. And there are tools that can help you model the causality of cyber security risks, which can make some sense of their unpredictability.
We need more tools to manage cyber risk effectively
So, what are these tools and techniques that we can use to help us manage our cyber risks? In the first phase of the NCSC’s guidance on Risk Management for cyber security, we introduced two distinct, but mutually supporting, types of risk management technique: component-driven risk management and system-driven risk management. It’s worth mentioning that some of the system-driven techniques we introduced in that guidance were originally developed in the safety engineering world, to deal with precisely the issues described above. But there is far more to risk management than these two types of technique, and for this reason we will be publishing the second phase of this guidance in the autumn of 2018.
First, we will introduce quantitative methods applied to cyber security. We will also present some techniques for analysing the causality of security breaches, such as attack trees and scenario planning. Last, but by no means least, we’ll present some practical suggestions around security governance, whilst recognising that there can be no one-size-fits-all approach to security governance. As with the first phase of our risk guidance, our aim is to broaden the range of techniques available to cyber risk managers.
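To give a flavour of what an attack tree looks like in practice, here is a small illustrative sketch. It is not taken from the forthcoming guidance, and the goals and probabilities are entirely made up: the point is simply that OR nodes succeed if any branch does, AND nodes only if every step does, and you can then reason about a top-level goal from estimates attached to the leaves.

```python
# A toy attack tree for "read customer records". Leaf probabilities are
# invented purely for illustration; real estimates would come from your
# own assessment of threat and vulnerability.

def evaluate(node):
    """Return a rough probability that this (sub)attack succeeds."""
    if "probability" in node:                      # leaf node
        return node["probability"]
    child_probs = [evaluate(c) for c in node["children"]]
    if node["type"] == "AND":                      # every step must succeed
        result = 1.0
        for p in child_probs:
            result *= p
        return result
    # OR: the attack succeeds if at least one branch does (assumes independence)
    failure = 1.0
    for p in child_probs:
        failure *= (1 - p)
    return 1 - failure

attack_tree = {
    "goal": "read customer records", "type": "OR", "children": [
        {"goal": "steal admin credentials", "type": "AND", "children": [
            {"goal": "phish an administrator", "probability": 0.3},
            {"goal": "bypass two-factor auth", "probability": 0.1},
        ]},
        {"goal": "exploit unpatched database server", "probability": 0.05},
    ],
}

print(f"estimated probability of top-level goal: {evaluate(attack_tree):.3f}")
```

Even a toy tree like this makes the causal structure of a breach explicit, which is exactly the kind of reasoning that a purely component-driven view tends to miss.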
So, to the question ‘Can we manage our cyber risks?’ the answer is yes. But a qualified yes. If the cyber risk management profession uses only a small variety of risk management techniques, centred around component-driven approaches, then as Kate and Geoff said, we will be gradually overtaken by the complexity of what we’re seeking to manage. But, if we adapt our approach to include a wider range of techniques, we stand a much better chance of keeping our systems, our organisations, and our country, secure in cyberspace.
John Y
Risk Research Lead
Source: National Cyber Security Centre