The Economist, a weekly newspaper, ran an article on the Deepwater Horizon report issued last week by the President's commission, and it prompted me to reflect on our approach to information security.
Such self-policing should be an adjunct, not an alternative, to better regulation. The commission wants the regulatory reforms already put into place to be beefed up with the creation of a new, fully independent safety regulator. And it wants that regulator to take a new approach. After the loss in 1980 of the Alexander Kielland, a Norwegian rig, and the explosion in 1988 of Britain’s Piper Alpha platform, which between them claimed almost 300 lives, regulators put a new responsibility on operating companies to go beyond meeting existing standards and demonstrate that in the round their plans had minimised all the relevant risks: to make a positive “safety case” for their proposal.
The commission is keen on this kind of approach, which automatically keeps up with the ever more extreme technology being used—deepwater and ultradeepwater drilling has developed at staggering speed—in a way that setting standards in advance cannot. Such safety cases, it notes, should include well-developed plans for what to do if things go wrong, plans that were signally lacking in the case of Deepwater Horizon.

Have you ever read an article about the state of information security that didn't refer to an arms race with hackers or the challenge of maintaining security amid the flood of new technologies? Policies, standards, and precisely defined IT controls clearly have their places, but I like the emphasis on security planning and on demonstrating that the relevant risks have been minimized. When policies and standards are too thin or too narrow, effective governance is very difficult. When they are taken too far in the other direction, the failures are just as obvious. But where is the sweet spot in the middle?
How many IT projects have you seen where management took 100% of their security requirements straight from policies and standards without appearing to give a moment's consideration to the risks unique to the particular systems, environment, and business use? And when such detailed policies and standards didn't exist, how many project managers complained loudly that the security review function was getting in the way rather than helping?
My preference has always been for guidelines over standards, for practical reasons. The biggest difference is that the administrative expense of tracking known exceptions to standards rarely seems worth the benefit it provides. When guidelines fall behind, progress can continue, albeit at the cost of reduced security. When standards fall behind, they interfere with progress and erode the trust that the security function has built with management. Moreover, standards are almost guaranteed to be somewhat outdated by the time they are socialized, approved, published, and adopted.
From detailed configuration guidelines to reusable design patterns, we need to enable strong security architectures by providing examples of what good security can look like. However, compliance is not security, and we have to make sure that our communications always reinforce the idea that management must think first in terms of risk. Where formal guidance has not yet caught up to new technology, it is management's responsibility to demonstrate to their auditors, accreditors, and security managers that their planned use of it is safe.
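To make the idea of a guideline-as-example concrete, here is a minimal sketch, in Python, of what a detailed configuration guideline might look like when it is published as something teams can actually run. The specific SSH settings, the file path, and the script itself are assumptions chosen purely for illustration, not an authoritative baseline; the point is that guidance framed this way shows what "good" looks like while still leaving room for a documented, risk-based deviation.

```python
#!/usr/bin/env python3
"""Hypothetical example: an SSH hardening guideline expressed as a runnable check.

The recommended settings below are illustrative only. The intent is to show a
guideline that surfaces deviations for a risk discussion, not a hard pass/fail gate.
"""
import sys

# Guideline (assumed values for illustration): directive -> recommended setting
GUIDELINE = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "protocol": "2",
}


def parse_sshd_config(path):
    """Return a dict of lowercased directive -> first value found in the file."""
    settings = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2:
                key, value = parts[0].lower(), parts[1].strip()
                settings.setdefault(key, value)
    return settings


def check(path):
    """Compare a config file against the guideline and list any deviations."""
    actual = parse_sshd_config(path)
    deviations = []
    for directive, recommended in GUIDELINE.items():
        found = actual.get(directive)
        if found is None or found.lower() != recommended:
            deviations.append((directive, recommended, found))
    return deviations


if __name__ == "__main__":
    config_path = sys.argv[1] if len(sys.argv) > 1 else "/etc/ssh/sshd_config"
    for directive, recommended, found in check(config_path):
        print(f"{directive}: guideline recommends '{recommended}', found '{found}'")
```

A deviation reported by a script like this invites exactly the conversation I want: is this a gap to fix, or a risk that management has considered and can justify to their auditors and accreditors?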