WannaCry and the Elephant in the Room

After the recent news of “WannaCry” ransomware crippling systems worldwide, people have started to opine on the host of reasons this attack occurred and what to do about it. Next-generation endpoint security (i.e. the replacement of antivirus), improved patch management, better business continuity planning and disaster recovery, modern security operations centre capabilities, and a strong incident response capability will all be held up as the most critical solutions to stop similar attacks in the future.

What is the real root cause here?

At first, it may seem that the root issue is organizations not updating (i.e. patching) their systems in time, leaving them vulnerable to the WannaCry malware. It’s worthwhile asking, however, what caused the need to patch in the first place?

In this case, you could argue the reason was the ShadowBrokers’ alleged attack on the NSA and their subsequent leak of its exploit tools. Of course, even if that attack and leak hadn’t occurred, there were clearly actors who could have used this exploit, particularly before it was publicly known and patched (i.e. as a “zero day”), to break into many systems.

We need to dig deeper by examining the reason the patch exists. The technical details of the patch point to a security flaw in the way “SMBv1 handles specially crafted requests”. In other words, the root cause is a software vulnerability.

Insecure software is the root cause, as it almost always is in major hacks. Need proof? Take a look at the Metasploit exploit database. Metasploit is one of the most well-known exploit frameworks in the world, and its database is full of code that takes advantage of software vulnerabilities like “command execution”, “code execution” and “buffer overflow”. These are all technical names for flaws in code that allow an attacker to compromise a remote system, including via malware like WannaCry.
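
To make the “buffer overflow” class concrete, here is a minimal C sketch. The function and parameter names are hypothetical, and this is not the actual SMBv1 code; it simply shows the shape of the flaw: a handler copies attacker-controlled bytes into a fixed-size stack buffer without checking the length.

```c
#include <string.h>

/* A minimal sketch of the "buffer overflow" class named above,
 * not the actual SMBv1 flaw. The names here are hypothetical. */
void handle_request(const char *payload, size_t payload_len)
{
    char buf[64];

    /* BUG: copies attacker-controlled data without checking that
     * it fits in buf. If payload_len exceeds 64, adjacent stack
     * memory, potentially including the return address, gets
     * overwritten. That is the raw material of "code execution"
     * exploits. */
    memcpy(buf, payload, payload_len);

    /* ... parse buf ... */
}
```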

Despite the fact that software is the root issue, it is conspicuously absent from the conversations about next steps after an attack like this. Insecure code, built in-house or by third parties, is not mentioned in the NIST Cybersecurity Framework, gets very little attention in the industry-standard ISO 27001, and wasn’t featured as a topic in the recent National Association of Corporate Directors’ Cyber-Risk Oversight document. Organizations use these documents to architect and oversee their security programs, meaning it’s entirely possible for them to pay little attention to software. Most see it as a technical detail that doesn’t need to be addressed at such a high level. The net result, however, is that it simply gets left out of mindshare, effort and budgets.

Perhaps that’s because writing secure code is extremely difficult. After all, Microsoft, the vendor that wrote the insecure code, is an industry leader in software security through its Security Development Lifecycle. Yet the difficulty of writing secure code doesn’t justify ignoring the topic. Nobody claims software will ever be bulletproof, but the question we ought to ask in every instance of an attack rooted in software is: “How easily could this vulnerability have been prevented in the code?” If people aren’t asking, there is no reason to innovate or make the systemic changes we need to build an infrastructure of secure software. Having worked in software security for over a decade, I can tell you that there is room to do much, much better than what we see today. We just have to pay more attention to the problem as a collective.

While the exact secure coding issue behind the Microsoft patch isn’t clear, a common coding weakness referenced in the Metasploit database is the “buffer overflow”. We have known about buffer overflows for decades, yet a recent study found that state-of-the-art code security scanners (i.e. static analysis tools) detected fewer than half of the known buffer overflows in code. Our recent research indicates that many organizations, including software vendors, rely on this kind of scanning as their primary, or even sole, method of securing software. Relying on techniques that find fewer than half of extremely dangerous flaws is, in other words, the industry standard.
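
To illustrate the “how easily could this have been prevented” question, here is the same hypothetical handler from earlier with an explicit bounds check. This is a sketch under assumed names, not Microsoft’s actual fix, but it shows the point: rejecting over-long input up front eliminates the overflow outright, leaving no bug for a scanner to miss.

```c
#include <stdbool.h>
#include <string.h>

/* The same hypothetical handler, with an explicit bounds check
 * added. Rejecting over-long input up front removes the overflow
 * entirely. */
bool handle_request_safely(const char *payload, size_t payload_len)
{
    char buf[64];

    /* Refuse any request that does not fit in the buffer. */
    if (payload == NULL || payload_len > sizeof(buf))
        return false;

    memcpy(buf, payload, payload_len);

    /* ... parse buf ... */
    return true;
}
```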

As a collective, we can do much better. We can ask software vendors to adopt more holistic secure development processes, such as those outlined in ISO/IEC 27034. We can pressure vendors to stop powering our critical infrastructure with unsafe programming languages in which secure code is difficult to write. We can encourage research that builds and promotes more secure variants of, and alternatives to, those languages. Most importantly, we can make secure software the top-level issue it ought to be in cyber security frameworks, discussions and regulations.

Let’s be clear: we will never be totally safe from hacking or malware. Attacks like this will keep happening for the foreseeable future. However, if we want to dramatically reduce the frequency and impact of these attacks, we have to address the root cause.