The recent “cyber attack” and its implications for the healthcare industry.

As you will all know by now, a recent “cyber attack” has affected many computers around the world, including, most prominently, the NHS. In this article I will ask the question, “what implications does this have for future IT services in the healthcare industry?”

Firstly, this incident was not a “cyber attack”. No targeted attack against the NHS took place. Indeed, whether it was an attack at all is open to debate. The incident was caused by a piece of “ransomware” which takes the form of an Internet worm. Worms are self-replicating pieces of code which spread from computer to computer over networks. They usually exploit a vulnerability in the target’s software or operating system code to gain access and/or elevated privileges on the target system. Once a machine is infected, the part of the worm that carries out its real purpose, termed the payload, activates. The payload can range from something benign to something far more sinister. In the case of ransomware, the usual modus operandi is to encrypt data on the target computer’s hard drive and then offer to decrypt it in return for the payment of a ransom.

So, having established what happened, how did it happen and why was the impact so significant? The answer lies in the exploit used. This particular exploit leveraged a vulnerability which, according to Microsoft, was not known to them. It was developed by the NSA, who kept quiet about the vulnerability so they could use it against their own targets. Recently, a group leaked a set of NSA exploits, including one which used this vulnerability. Because the vulnerability was not discovered, disclosed to the vendor and patched, as is the normal way these issues are dealt with, Microsoft did not have an immediate fix. This type of exploit is termed a “zero-day exploit” in the industry, and it is the worst kind: not just a theoretical vulnerability but a tried and tested working exploit. Because Microsoft was now on the back foot, coders managed to release a worm that used this exploit before a security fix could be rolled out; in truth, Microsoft had little hope of stopping a determined coder in time. So, this worm is in large part a direct result of the cracking activity of the NSA and, by extension, GCHQ, as the two agencies are very closely linked. Is this something we should be concerned about? Absolutely! Could it have been handled better? Most definitely!

Having established what happened and why, what lessons can we learn from this? Well, firstly, the standard response to this type of threat is to ensure that your software patching schedule and methodology keep your operating systems and software up to date. However, in this circumstance, this would have done nothing to mitigate the risk. There are, nonetheless, things that could have helped to protect important data, and I will deal with these below.
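As an aside, even if patching could not have stopped this particular worm, the “standard response” is at least easy to audit. The sketch below is a minimal illustration, assuming a Windows terminal and using a deliberately fictitious KB identifier (not any specific patch), of how a machine missing a named security update could be flagged automatically.

```python
# Minimal sketch: flag a Windows terminal that is missing a named security
# update, using the built-in "wmic qfe" hotfix listing.
# The KB identifier below is a placeholder, not a reference to a real patch.
import subprocess

REQUIRED_UPDATE = "KB0000000"  # placeholder KB identifier

def installed_hotfixes() -> set:
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Keep only lines that look like KB identifiers
    return {line.strip() for line in output.splitlines() if line.strip().startswith("KB")}

if REQUIRED_UPDATE not in installed_hotfixes():
    print(f"Warning: {REQUIRED_UPDATE} is not installed on this machine")
```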

The first question that springs to my mind is why raw data is accessible from a terminal in the first place. If files are not directly accessible, they cannot be encrypted. This means that, even if a terminal is compromised, a simple re-image will get you up and running again. If we take this a step further and look at network-boot, thin-client environments, the risk can be greatly mitigated and the recovery time greatly reduced.

Coupled with this, we must look at how our data is accessed and presented. Placing our data in the cloud would help to mitigate this type of attack. If our data is hosted on a highly secure system and accessed, for example, using HTTPS or XMLRPC, then our data would remain safe even if the terminal were compromised. Data could continue to be accessed and it could not be held to ransom. We must also be mindful of correct backup procedure and cold storage, so that any data that is compromised can be restored intact. Placing data in the cloud provides a unique opportunity to protect ourselves from local network attack, so the only element at direct risk from attack vectors such as the one used by this worm is the access layer to our data. Cloud computing allows us to treat our local and wide-area networks as we should treat them: hostile, untrusted environments. It is obvious from the impact on the NHS that both the NHS National Network (N3) and local NHS Trust networks were heavily involved in the propagation of this worm and should not be treated as trusted networks. Perhaps the existing paradigm, in which N3 is widely considered safe for passing patient data, should come under heavy scrutiny, and more controls should be applied to data transiting this network.
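To make the access-layer point above concrete, here is a rough, hypothetical sketch. It assumes a cloud-hosted records service exposed over HTTPS via XMLRPC at a made-up URL with a made-up method name, and it fetches a record into memory on demand rather than storing files on the terminal, so there is nothing on the local drive for ransomware to encrypt.

```python
# Minimal sketch: a terminal fetches data over HTTPS/XMLRPC on demand.
# The endpoint URL and remote method are hypothetical placeholders, not a real NHS API.
import xmlrpc.client

ENDPOINT = "https://records.example.invalid/rpc"  # hypothetical cloud-hosted service

def fetch_record(patient_id: str) -> dict:
    # The record is held in memory only; nothing is written to the local drive,
    # so a compromised terminal has no local files for ransomware to encrypt.
    proxy = xmlrpc.client.ServerProxy(ENDPOINT)
    return proxy.get_record(patient_id)  # hypothetical remote method

if __name__ == "__main__":
    record = fetch_record("example-id-123")
    print(record.get("summary", "no summary available"))
```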

When we consider N3 as an untrusted network, we realise that our second line of defence, beyond our firewalls and security procedures, is very simple: isolate, contain, eliminate. We must be prepared to pull the plug on our links to the outside world when threats such as this emerge, in order to protect the integrity of our local networks and our data. Commonly, a loss of connectivity is considered an undesirable event. However, IT managers must consider a controlled disconnection as one of the tools in their arsenal for protecting their network. This approach does present unique challenges to business continuity, particularly around access to services and data, and these challenges become more apparent as we move towards a cloud-enabled data model. It is in this specific area that my company, iCoriolis, is working on innovative solutions to ensure data remains accessible even when disconnected from the WAN, and by extension the cloud, whether that disconnection is controlled or the result of an incident.
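To show what a controlled disconnection might look like as a tool rather than a panic measure, the sketch below assumes a Linux-based gateway and uses a placeholder interface name for the N3/WAN uplink; the same idea could equally be driven through a switch or firewall management interface.

```python
# Minimal sketch of a "pull the plug" control: administratively disable the
# WAN-facing interface on a Linux gateway and record why it was done.
# The interface name is a placeholder for the N3/WAN uplink.
import datetime
import subprocess

WAN_INTERFACE = "eth1"  # placeholder interface name

def controlled_disconnect(reason: str) -> None:
    # Requires root privileges; uses the standard iproute2 "ip" command.
    subprocess.run(["ip", "link", "set", WAN_INTERFACE, "down"], check=True)
    with open("/var/log/controlled-disconnect.log", "a") as log:
        log.write(f"{datetime.datetime.now().isoformat()} {WAN_INTERFACE} down: {reason}\n")

if __name__ == "__main__":
    controlled_disconnect("worm outbreak reported on N3")
```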

Lastly, and possibly most important in my mind, are the choices IT managers make about the software and operating systems they deploy on terminals and servers. This incident has shown us that Microsoft, despite considerable effort, cannot predict the future. They simply cannot fix an unknown vulnerability fast enough in these circumstances. This is not inherently their fault, as they rely on the security community to identify and report vulnerabilities; no one company can discover everything. This is where Open Source software really shows its advantage. It’s not that Open Source developers are better (although some are). It’s not an ideological issue. It’s simply that the code of Open Source software is made freely available, and the community constantly peer reviews and improves it. Vulnerabilities are discovered, shared, discussed and fixed. Rather than a time-bomb like this hanging around for years, it could have been fixed in a short amount of time. With these facts in mind, and putting my personal preference for Open Source software and my dislike of Windows aside for a moment, I find it difficult to understand how anyone can now trust a closed-source operating system with critical data. Indeed, governments seem to agree, with the NSA and GCHQ widely using and recommending Open Source software. Whilst Open Source software is not a magic bullet, in my mind this is certainly a case of “better the devil you know”.
