A couple of weeks ago I met with the email management and security vendor Retarus. While I was unfamiliar with the company (and it had a reasonably standard portfolio, I thought at first glance), my interest was piqued because it was German, and the country is very particular about such questions as personal data, privacy and so on.
As our conversation turned towards what the organisation is calling patient zero detection (referring to how the vendor’s software looks to improve how it reacts to security attacks that have already taken place), I found myself on fundamentally more interesting and potentially valuable ground. It’s difficult to explain why I think this is so important, so please bear with me while I try.
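To make the idea concrete: in email terms, patient zero detection means that when an attachment is identified as malicious only after delivery, you look back through delivery logs to find who received it first, and who else it reached. The sketch below is purely illustrative (the log entries and hashes are invented, and this is not a description of Retarus’ actual product):

```python
from datetime import datetime

# Hypothetical delivery log: (timestamp, recipient, attachment_hash).
DELIVERY_LOG = [
    (datetime(2020, 3, 1, 9, 0), "alice@example.com", "abc123"),
    (datetime(2020, 3, 1, 9, 5), "bob@example.com", "abc123"),
    (datetime(2020, 3, 1, 10, 0), "carol@example.com", "def456"),
]

def find_patient_zero(log, malicious_hash):
    """Return the earliest recipient of a flagged attachment,
    plus every other mailbox it subsequently reached."""
    hits = sorted((ts, rcpt) for ts, rcpt, h in log if h == malicious_hash)
    if not hits:
        return None, []
    patient_zero = hits[0][1]
    also_affected = [rcpt for _, rcpt in hits[1:]]
    return patient_zero, also_affected

pz, rest = find_patient_zero(DELIVERY_LOG, "abc123")
# pz is "alice@example.com" (first recipient); rest is ["bob@example.com"]
```

The point is the retroactive stance: rather than assuming the perimeter filter caught everything, the log is treated as evidence to be re-examined once new intelligence arrives.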
IT security has had a chequered history. Back in the day when I used to manage UNIX systems, workstations and servers tended to be delivered with all technological ‘doors’ left open (front and back), so that any person with a reasonable grasp of the operating system could gain access to whatever they wanted.
Some systems were better than others — indeed, good old mainframes had an almost-militaristic attitude to their own protection, the principles of which were adopted over time by the open systems movement and then the PC wave of computing (cf. Microsoft’s late-to-the-party-but-still-laudable Trusted Computing Initiative, kicked off in 1999).
(As an aside, pretty much any time I have pushed back against such efforts being promoted by IT vendors, my discomfort has been driven by the presentation of well-established, accepted and required truth as something new.)
As computing moved into the mainstream, security best practices came into alignment with the broader world of contextual risk, itself well known in military and safety-critical circles. This world had taken a different path, moving from risk (and litigation) avoidance to the (more affordable) risk management philosophy we still follow today.
Having bottomed out the best practices required for managing and mitigating the probability and impact of risk, attention has turned to resolving any issues when they arise. The parallel fields of business continuity and disaster recovery are testament to these efforts, their principles later applied to IT security not least in terms of how to deal with zero-day exploits.
Here’s the ‘however’: while this philosophical path from risk mitigation to breach resolution remains a constant, it is based on assumptions that are difficult to maintain where IT is involved. Not only the assumption that decisions, once made, can be stuck with; but also the idea that by dealing with the tangible assets (be they physical, electronic or software-based), the stuff they deal with is also protected.
In the case of IT, the ‘stuff’ is called data. When the Jericho Forum got together to discuss the changing nature of IT security, they did so because they saw the protect-the-border approach to security as being a recipe for disaster. Their focus moved to identity management as a result, proposing models now espoused by Google’s zero-trust, end-to-end encrypted BeyondCorp initiative.
And so, in this day and age, traditional asset-based security runs uneasily alongside the school of thought that says data, not devices, needs to be protected. I was faced with this dichotomy myself when I released my (more asset-centric) book on Security Architecture to pointed criticism from luminaries of the latter camp.
The truth, however, is that neither perspective is completely right — and indeed, both start from the wrong point. Specifically, neither considers what to do when things don’t work out, when (as all too often) a breach or data leak takes place. The prevailing view from security professionals is “well, you didn’t listen”, which, however accurate, is not the most helpful in a time of crisis.
The mindset of all parties, that we are trying to prevent things from going wrong to the best of our abilities, is fundamentally flawed. The core notion (which goes back to the origins of both IT security and broader risk management) is that if we did everything right, we would pretty much ensure bad things didn’t happen.
This notion is false. It is not simply that bad things are going to happen anyway, in the same way that a vehicle crash might happen even with all the right protections in place. That may be true, but if we do have a car accident we are typically distraught, in the knowledge that we were, figuratively and statistically, one of the unlucky ones.
A far better framing of the nature of IT security is similar to that of disease. Of course we can look to avoid illness, but when we succumb, we recognise it is part of the tapestry of life. Prevention will never fully work; recovery is a necessary and well-understood set of steps.
Indeed, so it is with human weakness, in that sometimes we succumb to our less positive traits. Far from being just another analogy, this is a fundamental input to our understanding of how digital technology is as likely to be misused as used for positive reasons.
The overall consequence is that we should be accepting the consequences of such weakness as the norm, not treating any incidents as exceptions. We should also be thinking about risk management and mitigation in the same way as we think of hygiene when dealing with germs — as necessary as it is imperfect and, sometimes, counterproductive.
Thinking more broadly, even as I write this we are getting far better at using data to understand the spread of disease. It makes absolute sense that we should be investing in tools that look at how computer attacks spread virally, and how they can potentially be contained and their wider impact minimised.
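Tracing how an attack spreads once inside can be thought of as walking an infection graph outwards from patient zero. A minimal sketch, assuming we have (hypothetical) records of who forwarded the payload to whom:

```python
from collections import deque

# Hypothetical internal forwarding records: sender -> list of recipients.
FORWARDS = {
    "alice@example.com": ["bob@example.com", "carol@example.com"],
    "bob@example.com": ["dave@example.com"],
}

def blast_radius(patient_zero, forwards):
    """Breadth-first walk of forwarding records to list every mailbox
    the payload could have reached from the first infection."""
    seen = {patient_zero}
    queue = deque([patient_zero])
    while queue:
        current = queue.popleft()
        for nxt in forwards.get(current, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {patient_zero}

reached = blast_radius("alice@example.com", FORWARDS)
# reached covers bob, carol and dave — the containment target
```

Real products will use far richer signals than this, but the epidemiological shape of the problem — find the index case, map the spread, contain the rest — is the same.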
It’s not for me to say whether Retarus’ product is any better or worse than any other (as I haven’t tested it), but the company’s philosophy was decidedly refreshing. Thinking in terms of ‘patient zero’ outbreak detection and mitigation is a good start for any security vendor and, more importantly, it should be part of the mindset adopted by any organisation wanting to define its attitude to IT security in this increasingly digital world.