Ransomware attack, the biggest in history, hits hundreds of thousands of machines in over 100 countries, and it was all preventable.
This was the largest single-day ransomware attack ever. It involved malware going by two names, "WannaCry" and "WannaCrypt". It was completely avoidable: the patch was released six to eight weeks in advance, yet so far in excess of 230,000 machines have been affected, with over 35 million paid out in Bitcoin and billions in costs and expenses from service outages.
First things first: government controls. The US government knew about this vulnerability some time ago. It is believed to stem from a flaw the NSA found in Windows SMB; the NSA developed an exploit for it, called EternalBlue, and included it in one of its toolkits. Then the NSA got hacked and the exploit was released to hackers. It is believed the reason Microsoft canceled February's "Patch Tuesday" release was that it got wind of the vulnerability, and it released the patch for it in March. Instead of notifying Microsoft, the NSA used the flaw to build a possible attack, and when Microsoft's president, Brad Smith, sharply criticized the NSA, it was a public indication of how serious this is.
This malware spreads via a phishing email, and once inside your network it uses the SMB port to spread like a worm to your other computers and to infect machines on the internet.
This was not a zero-day, as Microsoft had released the patch over six weeks earlier. It has also been reported that another backdoor, called DoublePulsar, has been installed on infected machines, and this will also need to be removed during cleanup.
In the article I published, "How to protect my O365 users from phishing emails", I cover in detail the tools Microsoft has made available that would have stopped the email attack right in its tracks. There are plenty of other email solutions out there; it's just that none of them are powered by Microsoft's Intelligent Security Graph, and that makes all the difference.
Let's start off the patching story with the number one reason we don't or can't patch: old, crappy software! I cannot tell you how many companies tell me that they have systems out of support. They have an app that they can't upgrade for all the usual reasons, and so it's just left there. We all joke about the excuses we came up with as kids for homework not being done, and to be honest, the excuses we give for systems that fall out of management are in the same vein.
Microsoft, along with a host of competitors, offers a patching solution to fit every use case. With Microsoft, we can patch via SCCM, Intune, and OMS. Then we come to the elephant in the room: "I can't patch those systems because they always break", "they're too mission critical", or "they're out of support". Basically, we have chosen the mantra of "if it works, don't fix it"; in other words, leave well enough alone. We are failing to factor application lifecycle management into our systems management.
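Whatever tool you patch with, the underlying discipline is the same: know which machines are missing which updates. As a minimal sketch, here is a hypothetical compliance check in Python; the inventory format and machine names are made up, and the KB numbers shown are a sample of the MS17-010 updates rather than a complete list for every OS version.

```python
# Hypothetical patch-compliance check: flag machines whose installed-update
# list contains none of the KBs that deliver the MS17-010 SMB fix.
# The inventory dict is illustrative, not output from SCCM/Intune/OMS.

# Sample of MS17-010 KB numbers (they vary by OS version).
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4012213"}

def unpatched_machines(inventory):
    """Return the sorted names of machines missing all MS17-010 KBs."""
    return sorted(
        name for name, kbs in inventory.items()
        if not MS17_010_KBS & set(kbs)
    )

inventory = {
    "FILESRV01": ["KB4012212", "KB3150513"],
    "APPSRV02":  ["KB3150513"],           # missing the SMB fix
    "SQLSRV03":  ["KB4012215"],
}
print(unpatched_machines(inventory))      # ['APPSRV02']
```

The point is not the script itself but the habit: an automated, repeatable answer to "which of my systems are still exposed?" is the first deliverable of any patching process.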
Application Lifecycle Management
When we buy or build an application that becomes integral to a business process, we have a duty of care to the business to manage that application. And when I say application, I mean the app, the OS it's running on, and the continuous care and feeding it's going to need from the IT department. We have to ask questions like:
- How often do you bring out updates that support new OS versions?
- Can you show us your previous support statements?
- How portable is the application?
- At what point will we start our modernization drive for this app?
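The answers to those questions belong in a register you actually consult, not a spreadsheet that dies after the procurement phase. As a minimal sketch, assuming a hand-maintained list of apps with vendor-support end dates (the names and dates below are invented), you can flag anything due for modernization within a chosen horizon:

```python
# Minimal ALM register sketch: record each application's OS and the date
# vendor support ends, then flag anything unsupported within a horizon.
# App names, OS versions, and dates are illustrative assumptions.
from datetime import date, timedelta

apps = [
    {"name": "ERP",     "os": "Windows Server 2008 R2", "support_ends": date(2020, 1, 14)},
    {"name": "Payroll", "os": "Windows Server 2016",    "support_ends": date(2027, 1, 12)},
]

def needs_modernization(apps, today, horizon_days=365):
    """Names of apps whose support ends within the horizon (or already has)."""
    cutoff = today + timedelta(days=horizon_days)
    return [a["name"] for a in apps if a["support_ends"] <= cutoff]

print(needs_modernization(apps, today=date(2019, 6, 1)))  # ['ERP']
```

Reviewing that output quarterly is the difference between planning a modernization drive and discovering, mid-deployment, that your platform is already out of support.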
I am working with a major food retailer that has non-stop problems with its ERP system. The deployment project will run for another two years before it's finished, and the software version is already out of support. The vendor was making fundamental changes to its ERP product and was close to releasing the new version, but the company went with the older version, and it's a mess. It's running on a soon-to-be-unsupported Windows Server, it's going to be a challenge to secure, and it's not even fully deployed.
The current pause of this malware attack is temporary.
It turns out that the malware checked whether it could reach an unregistered website before running. Once a very smart 22-year-old security researcher registered the domain, he tripped the kill switch and the attack stopped. But removing that check requires only a small change to the code, and we will be off again.
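The kill-switch logic itself is trivially simple, which is why it is so easy for a new variant to drop. Here is a sketch of the idea in Python; the domain is a placeholder (not the real kill-switch domain), and the DNS resolver is injected as a function so the behavior can be shown without touching the network:

```python
# Sketch of WannaCry-style kill-switch logic: the malware tried to reach a
# then-unregistered domain and aborted if the lookup succeeded. Registering
# the domain therefore acted as a global off switch.
# The domain below is a stand-in, and `resolve` is injected for illustration.
KILL_SWITCH_DOMAIN = "example.invalid"  # placeholder, not the real domain

def should_proceed(resolve, domain=KILL_SWITCH_DOMAIN):
    """Return True (malware keeps running) only if the domain does NOT resolve."""
    try:
        resolve(domain)
    except OSError:
        return True   # lookup failed: domain unregistered, keep spreading
    return False      # lookup succeeded: kill switch tripped, abort

# Before the domain was registered: lookups fail, the worm runs.
def unregistered(_domain):
    raise OSError("NXDOMAIN")

# After registration: lookups succeed, every new infection exits early.
def registered(_domain):
    return "198.51.100.7"  # documentation-range IP, purely illustrative

print(should_proceed(unregistered))  # True
print(should_proceed(registered))    # False
```

Note what this sketch makes obvious: deleting the `try` block, or swapping in a different domain, revives the worm. The kill switch was a lucky break, not a defense.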
The great thing about this attack is that it is literally costing billions, and over 35 million has been paid into the Bitcoin wallets (there is even a Twitter bot measuring this). So why is this a good thing, I hear you say? Because, as someone in the IT department, you now have a very real business case. We can no longer say we can't afford this upgrade or it's not a business priority. The reality is that if your company is running old, out-of-date OSes and applications, or apps that break every time there is a patch, then guess what: you have no choice. You have to upgrade this stuff, buy new, or power it off.
The US and other countries are looking for ways to see into encrypted systems for security reasons, and to a degree this makes some sense. But the fact that the same government departments who demand these "back doors" can't then keep them protected is unconscionable.
- Patch your systems
- Use O365 ATP
- Implement ALM on your systems