CROWDSTRIKE GLOBAL OUTAGE: Attack plot thickens. It's worse than we thought
Everything we said about how to view this was true. And then some.
You can throw a grenade around all day. It'll only go bang when you insert the detonator and operate it correctly.
The ex-Google engineer who published the technical breakdown we referenced and paraphrased, Zach Vorhies, continues his work.
CrowdStrike has validated his analysis AND confirmed worse.
We are now into attack scenario B.
Vorhies’ explanation is straightforward:
Update:
Crowdstrike came out and released a technical report confirming my analysis. They were reading in a bad data file and attempting to access invalid memory.
This global crash was a two-part bomb. The detonator, apparently, was NOT new. It was PRE-INSTALLED.
Contrary to initial suspicions, CrowdStrike did NOT push out a faulty driver. The faulty driver ALREADY existed on Mac, Linux AND Windows, likely for months or years.
Sitting there like a ticking time bomb.
This bug was a two-part series.
All it needed was bad data in order to detonate it.
The recent data update delivered that payload.
Deploying new data files to computers is generally thought to be safe. Data doesn't contain executable instructions for the CPU, after all.
Therefore it doesn't go through the same code review process as new executable code.
In effect, this proved to be the perfect backdoor.
When there is a ticking time bomb that's been PRE-INSTALLED on computers, then all it needs is just the right data to activate it.
And unfortunately, this is exactly what happened.
This data update, because of its assumed low-security implications, was allowed to be raw dogged into every Windows box running CrowdStrike, without consent and without notification.
And btw, this same ticking time bomb apparently exists on Linux AND macOS. They just weren't targets of this data update, so they didn't crash. If a similar push had gone to Linux, we would have seen a global catastrophe.
Originally, I thought this was simply programmer error. But now, I'm not so sure. My experience seeing corruption at Google showed me that obvious bugs were allowed to exist, with apparent insiders who were aware of them and exploiting them for their own agenda.
For example, Jordan Peterson got his entire Gmail/YouTube account wiped because some insider knew they could create a nearly identical email address and start sending spam from it, knowing that Peterson's account would be wiped out by the AI, despite being over a decade old.
Is something similar going on here with Crowdstrike?
Some insider with the knowledge that this nuke existed on every Windows/macOS/Linux box with their software, only needing the proper data-detonator to act as the trigger?
And why wasn't this software bug caught by automated checking at Microsoft? This code is reading data, interpreting it as valid memory locations, and attempting to read them.
HELLO?
@Microsoft, are you aware that tools have existed for DECADES that are designed to find these simple access violations and flag them?
@Microsoft, have you NEVER bothered to run these tools on CrowdStrike's system drivers?
This is really bizarre. And the recent facts raise a LOT more questions about why this ticking time bomb has existed on mission-critical devices for months or years.
It gets worse: Microsoft granted CrowdStrike's ticking time bomb "boot-start" privileges, normally reserved for Microsoft drivers.
A boot-start driver MUST be installed in order to start the Windows operating system.
https://x.com/Perpetualmaniac/status/1815316367958290828?t=2sXkLyXIZ2ixpRN_1vTFuA&s=19
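To make the two-part-bomb mechanic concrete, here is a minimal, purely illustrative sketch in C. This is NOT CrowdStrike's code; the file format, struct layout and names are all invented. It simply shows how a pre-installed parser with a missing bounds check can run happily for years on well-formed content files, then read invalid memory the moment a malformed file arrives.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical on-disk layout for a content/"channel" update file.
       Invented purely for illustration; not CrowdStrike's real format. */
    struct content_header {
        uint32_t magic;        /* file signature                        */
        uint32_t rule_count;   /* how many rule offsets follow          */
        uint32_t offsets[64];  /* byte offsets of each rule in the blob */
    };

    /* The "pre-installed detonator": parser code that ships with the driver
       and never changes. Note what it never does: it never checks that
       rule_count fits the offsets[] table, or that each offset actually
       lands inside the blob it was handed.                               */
    static void apply_rules(const uint8_t *blob, size_t blob_len)
    {
        (void)blob_len;  /* the whole problem: the real size is never consulted */
        const struct content_header *hdr = (const struct content_header *)blob;

        for (uint32_t i = 0; i < hdr->rule_count; i++) {
            /* With well-formed data this is always in range, so the flaw is
               invisible in normal use and in happy-path testing.           */
            const uint8_t *rule = blob + hdr->offsets[i];

            /* Feed it a malformed file (huge rule_count, garbage offsets) and
               this dereference wanders into unmapped memory. In user space
               that is a segfault; in a boot-start kernel driver it is a blue
               screen on every machine that loads the file.                  */
            printf("rule %u starts with byte 0x%02x\n", i, rule[0]);
        }
    }

    int main(void)
    {
        /* Years of updates look like this: sane count, sane offsets. */
        uint8_t good[512] = {0};
        struct content_header hdr = { .magic = 0xC0FFEE01, .rule_count = 2,
                                      .offsets = { 300, 400 } };
        memcpy(good, &hdr, sizeof hdr);
        apply_rules(good, sizeof good);   /* works fine, bomb stays inert */

        /* The trigger is just data: e.g. rule_count = 0xFFFFFFFF, or offsets
           pointing gigabytes past the end of the blob. No new code needed. */
        return 0;
    }

The detonator stays inert for as long as every file it is fed is well formed, and the payload is "just data", which is exactly why it sails past the review and staging applied to new executable code. It is also exactly the class of defect that memory-error checkers and fuzzers have been flagging for decades, which is Vorhies' point about automated checking.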
So let's revisit our claim about why this was an attack, because the above reveals a flaw in our assumptions that is important to admit to and show to you.
We said that if the driver code contained a critical error involving an invalid memory address, it should never and could never have got through a proper test process to release. Therefore, something had to have been deliberately changed, decided or allowed between the buggy software and the non-buggy software.
Vorhies’ latest wrecks our assumption because he's saying the bug's always been there. How come?
Simple.
You either write code with a path or function in it that serves no intentional purpose of the program, so it is never activated in real use, or you knowingly write otherwise good code while knowing exactly what bad data, which should never normally be fed to the program, will break it.
In the first case, the problem is how you obfuscate unnecessary or extraneous code that isn't in the design/functional spec of the program, so that another developer, tester or reviewer never spots it and questions it. There are ways to do this, and you cannot assume anything about how CrowdStrike does development and testing. Obfuscation in code is possible.
The second case is even more obfuscated and difficult to spot, because detection would hinge on testing with a data set that contained the right kind of error to trigger the crash. It's very possible the tests were never comprehensive enough to achieve this. So it's possible that someone knowingly built the CrowdStrike program with this deliberate design flaw from the outset, or introduced it at some point.
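As a purely hypothetical illustration of that second case (again, invented code, not anything from CrowdStrike): a function that reads as careful, passes every test fed ordinary data, and only misbehaves when one specific kind of crafted value turns up in the data file.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_RULE_LEN 1024   /* hypothetical limit */

    /* Looks defensive: the declared length is "checked" before the copy.
       The trap is that the length field from the data file is interpreted
       as a SIGNED 16-bit value.                                           */
    static int copy_rule(uint8_t *dst, const uint8_t *src, int16_t declared_len)
    {
        if (declared_len > MAX_RULE_LEN)   /* rejects 1025..32767 as expected */
            return -1;

        /* A crafted raw length of 0x8000..0xFFFF arrives here as a NEGATIVE
           int16_t, sails past the check above, then converts to an enormous
           size_t. memcpy then runs far beyond both buffers.                 */
        memcpy(dst, src, (size_t)declared_len);
        return 0;
    }

    int main(void)
    {
        uint8_t in[MAX_RULE_LEN] = {0}, out[MAX_RULE_LEN] = {0};

        /* Every plausible test value behaves: small lengths copy, oversized
           ones are rejected. The function looks correct and well tested.   */
        printf("len 16   -> %d\n", copy_rule(out, in, 16));     /* 0: copies  */
        printf("len 2048 -> %d\n", copy_rule(out, in, 2048));   /* -1: reject */

        /* Only a raw on-disk field of 0x8000 or above (reinterpreted as a
           negative int16_t) reaches the bad path. Uncomment to crash:
           copy_rule(out, in, (int16_t)0x8000);                              */
        return 0;
    }

Nothing in that function screams backdoor; it reads like a bounds check. Detection hinges entirely on a test corpus that happens to contain a raw length of 0x8000 or higher, which is exactly the "never comprehensive enough" problem described above.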
However, as Vorhies makes clear, you don't need intent from the outset. You can simply find and wilfully tolerate known, genuine, unintentional bugs that are kept secret so that a select few can exploit them without being detected.
This is basically the zero-day exploit concept, but from the inside.
Zero-days are bugs or exploits that a hacker finds and never discloses: the program's developer and/or the public have had zero days of awareness, and therefore zero days to effect a fix. Those with the knowledge can exploit that weakness until it becomes known and patched.
In this case, the hacker isn't necessarily an outside hacker but an insider with knowledge of the bug or exploit, of how it can be allowed to survive the dev/test system, and of how it can be exploited at some point.
Vorhies’ observation also creates the possibility that this was an accident, finally triggered by a dataset that simply never went bad until now.
As stated, a data error as the trigger is worse, because the code problem exists on Windows AND other operating systems, so the bug could have taken down even more machines across more of the globe if the data release had gone to more operating systems.
This could still have been stopped inside an adequate test and release process: one that put the real data into the real program in a test environment to see whether that data and that version of the program worked together or blew up. So our original claim still stands. This shouldn't have happened, because CrowdStrike should have had sophisticated, mature, bomb-proof dev/test/release processes that combined program and data safely before a careful, staged release to end users.
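As a hedged sketch rather than a statement of how CrowdStrike actually works, that missing gate doesn't have to be exotic. Assuming the shipped parser exposes some entry point (here a hypothetical validate_content()), the release pipeline can run every candidate content file through the exact code customers will receive, in a disposable environment, and refuse to ship on any crash or rejection:

    /* release_gate.c: a sketch of the missing step. Build the same parser
       source the driver uses into this user-space harness (ideally with
       -fsanitize=address,undefined) and feed it the exact bytes that are
       about to be shipped. A crash, a sanitizer report, or a non-zero exit
       blocks the release. validate_content() is a hypothetical entry point. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    extern int validate_content(const uint8_t *blob, size_t blob_len);

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <candidate-content-file>\n", argv[0]);
            return 2;
        }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 2; }

        if (fseek(f, 0, SEEK_END) != 0) { perror("fseek"); fclose(f); return 2; }
        long len = ftell(f);
        rewind(f);
        if (len <= 0) { fprintf(stderr, "empty or unreadable file\n"); fclose(f); return 1; }

        uint8_t *blob = malloc((size_t)len);
        if (!blob || fread(blob, 1, (size_t)len, f) != (size_t)len) {
            fprintf(stderr, "read failed\n");
            fclose(f);
            return 2;
        }
        fclose(f);

        /* The actual gate: the real parser sees the real bytes before any
           customer machine does. If this blows up, the update stays in the lab. */
        int rc = validate_content(blob, (size_t)len);
        free(blob);

        printf(rc == 0 ? "content file accepted\n" : "content file REJECTED\n");
        return rc == 0 ? 0 : 1;
    }

Put a staged rollout on top of that, with a small canary ring of machines before any global push, and even a bug that slips past the harness takes out hundreds of boxes rather than millions.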
The above is even more reason to treat this as an attack, albeit a more sophisticated one.
Make no mistake, assume criminality somewhere until proven otherwise.
You as readers need to flag this to your company leadership AND your government representatives.
If you're in the USA, focus on Thomas Massie. He is guaranteed to have the experience and intellectual capacity to fully understand this from the get go.
I've no idea about politicians anywhere else. In the UK it is safer to assume everyone is a corrupt, ignorant, idiotic and technically incompetent cunt and therefore proceed with them accordingly from the ground floor, with utter contempt and total suspicion.
You might be just one person but doing nothing about this is a fatal mistake. Don't leave it to someone else. Do your part and get others to do theirs with you. It matters. It adds up.
Don't be nothing.
That's what they think you are.
That is why they treat you as they do and why they have power.
That's your fault. Not theirs.
Covid was a two-part weapon and it was a crime. This appears to be the same until proven otherwise.