Why was “WannaCry” so successful?

In May 2017, “WannaCry” marched across the world virtually unchecked for several days, infecting over 230,000 computers in at least 150 countries, reminiscent of the Sasser and Morris worms. In hindsight, these infections were mostly preventable. The hardest-hit systems were running Windows 7 and Windows Server 2008, the backbone of many enterprises, and this is where the loudest cries were heard: parts of Britain’s National Health Service (NHS), Spain’s Telefónica, FedEx and Deutsche Bahn, to name but a few. The worm jumped from computer to computer, encrypting content on each machine it infected and demanding a time-limited ransom for the return of the information.

The infection was greatly slowed several days after the initial release, when a malware researcher found and activated a kill switch in the first version of the worm. This was soon countered by 2.0 variants released without the switch. Fortunately, by this time organizations were aware of the threat and were patching their environments with urgency.

WannaCry uses EternalBlue, an exploit targeting a vulnerability in Microsoft’s Server Message Block (SMB) protocol. The vulnerability was discovered by the US National Security Agency (NSA), but rather than report it to Microsoft, the agency kept quiet and built an exploit for its own use. It was only once it became clear that the exploit had fallen into the hands of the “Shadow Brokers” that Microsoft created and released a critical patch (MS17-010) in March 2017.

By May 2017, many organizations had not yet applied this patch to their servers and workstations.
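As an illustration of the kind of spot check an administrator could run, the minimal sketch below shells out to the standard Windows wmic qfe command and looks for one of the March 2017 security updates. The KB numbers listed are examples associated with Windows 7 / Server 2008 R2 era systems; the exact KBs differ per OS build, so treat this as a sketch to adapt rather than a complete audit.

```python
import subprocess

# Illustrative KB numbers associated with the March 2017 SMB fix (MS17-010).
# These vary by OS version and build; adjust the list for your environment.
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4012598"}

def installed_hotfixes():
    """Return the set of hotfix IDs reported by 'wmic qfe' on this Windows host."""
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in output.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    present = installed_hotfixes() & MS17_010_KBS
    if present:
        print("MS17-010-related update(s) found: " + ", ".join(sorted(present)))
    else:
        print("No MS17-010-related update found - host may be vulnerable.")
```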

So why not patch?

Many organizations are hesitant to patch immediately after a patch is released. Microsoft has in the past released, and subsequently had to withdraw, several patches after users installed them and ran into issues ranging from crashes and reboots to total “bricking” of their systems. Deploying a patch like that to several hundred workstations can quickly become a very bad day for desktop support teams. Having a production server environment crash is an even bigger calamity, as it affects customer-facing business, especially in a mostly automated Business-to-Consumer (B2C) scenario.

This means a regime of testing patches before they are approved for deployment. Moving from workstations to servers, the task becomes even more difficult: environments must be re-created to ensure that the patches do not break the applications running on the servers. Once that has been established, application downtime is needed for the servers to be updated. Many systems operate 24/7, and a shutdown is not easily afforded; standard maintenance windows become few and far between. Patching hundreds, or thousands, of production servers manually, even with pre-loaded patches, can be a very time-consuming effort.
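At fleet scale, even knowing which servers still need a given update is a task in itself. The sketch below shows one way to triage that from an inventory export; the CSV file name, its column layout, and the required KB are all assumptions for illustration, standing in for whatever a patch- or configuration-management tool actually produces.

```python
import csv
from collections import defaultdict

# Assumed inventory export: one row per (hostname, installed hotfix),
# e.g. produced by a patch-management or configuration-management tool.
INVENTORY_CSV = "patch_inventory.csv"   # assumed columns: hostname,hotfix_id
REQUIRED_KB = "KB4012212"               # illustrative: a March 2017 security-only update

def load_inventory(path):
    """Map each hostname to the set of hotfix IDs recorded for it."""
    hotfixes = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hotfixes[row["hostname"]].add(row["hotfix_id"].strip())
    return hotfixes

def hosts_missing(hotfixes, required_kb):
    """Return hostnames that do not report the required update."""
    return sorted(host for host, kbs in hotfixes.items() if required_kb not in kbs)

if __name__ == "__main__":
    missing = hosts_missing(load_inventory(INVENTORY_CSV), REQUIRED_KB)
    print(f"{len(missing)} host(s) missing {REQUIRED_KB}:")
    for host in missing:
        print(f"  {host}")
```

A report like this does not patch anything, but it turns “are we exposed?” into a short, prioritized list that can be worked through during the next maintenance window.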

Ultimately, the onus is on companies to weigh the risk of patch-related problems and the impact of maintenance downtime against the impact of having to rebuild infected servers and recover from backups. Many application owners balk at downtime, but an incident like this shows that, when faced with no other option, preventative maintenance is indeed possible.


Andrew Smith

Andrew is a senior systems engineer with over 20 years’ experience in corporate and small business environments. This includes consulting for large ICT service providers. He has supported systems at every level in the organization, including infrastructure, operating systems, applications, and perimeter protection. He also collaborates with software development teams on web, database, and infrastructure security. Andrew has co-founded multiple ICT businesses, where he advises on cybersecurity strategies and policies. Andrew has a 3-year National Diploma in Electronics (light current).
