CrowdStrike released a relatively minor patch on Friday, and it wreaked havoc on large swaths of the IT world running Microsoft Windows, bringing down airports, healthcare facilities and 911 call centers. While we know a faulty update caused the problem, we don’t know how it got released in the first place. A company like CrowdStrike very likely has a sophisticated DevOps pipeline with release policies in place, but even so, the buggy code somehow slipped through.
In this case, it was perhaps the mother of all buggy code. The company has taken a steep hit to its reputation, and its stock price plunged from $345.10 on Thursday evening to $263.10 by Monday afternoon. It has since recovered slightly.
In a statement on Friday, the company acknowledged the consequences of the faulty update: “All of CrowdStrike understands the gravity and impact of the situation. We quickly identified the issue and deployed a fix, allowing us to focus diligently on restoring customer systems as our highest priority.”
Further, it explained the root cause of the outage, though not how the faulty update made it into the release. That’s a post-mortem process that will likely go on inside the company for some time as it looks to prevent such a thing from happening again.
Dan Rogers, CEO at LaunchDarkly, a firm that uses a concept called feature flags to deploy software in a highly controlled way, couldn’t speak directly to the CrowdStrike deployment problem, but he could speak to software deployment issues more broadly.
“Software bugs happen, but most of the software experience issues that someone would experience are actually not because of infrastructure issues,” he told TechCrunch. “They’re because someone rolled out a piece of software that doesn’t work, and those in general are very controllable.” With feature flags, you can control the speed at which new features are deployed, and turn a feature off if something goes wrong, preventing the problem from spreading widely.
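To make the idea concrete, here is a minimal sketch of what a feature-flag check can look like in application code. The FlagClient class, the flag name and the percentage-rollout logic are illustrative assumptions, not any particular vendor’s SDK:

```python
import hashlib


class FlagClient:
    """Toy in-memory flag store; a real system would fetch flag state from a service."""

    def __init__(self, flags):
        # e.g. {"new-parser": {"enabled": True, "rollout_pct": 5}}
        self._flags = flags

    @staticmethod
    def _bucket(flag_key, user_id):
        # Stable hash so a given user always lands in the same 0-99 bucket.
        digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100

    def is_enabled(self, flag_key, user_id, default=False):
        flag = self._flags.get(flag_key)
        if not flag or not flag.get("enabled", False):
            return default
        return self._bucket(flag_key, user_id) < flag.get("rollout_pct", 100)


def parse_with_new_code(event):
    return {"parsed_by": "new", "event": event}


def parse_with_old_code(event):
    return {"parsed_by": "old", "event": event}


flags = FlagClient({"new-parser": {"enabled": True, "rollout_pct": 5}})


def handle_event(user_id, event):
    # The new code path only runs for the small rollout cohort; flipping
    # "enabled" to False (or rollout_pct to 0) turns it off for everyone.
    if flags.is_enabled("new-parser", user_id):
        return parse_with_new_code(event)
    return parse_with_old_code(event)
```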
It is important to note, however, that in this case the problem was at the operating system kernel level, and once code at that level runs amok, it’s much harder to fix than, say, a web application. Still, a slower deployment could have alerted the company to the problem a lot sooner.
What happened at CrowdStrike could potentially happen to any software company, even one with good software release practices in place, said Jyoti Bansal, founder and CEO at Harness, a maker of DevOps pipeline developer tools. While he also couldn’t say precisely what happened at CrowdStrike, he talked generally about how buggy code can slip through the cracks.
Typically, there is a process in place where code gets tested thoroughly before it gets deployed, but sometimes an engineering team, especially in a large engineering group, may cut corners. “It’s possible for something like this to happen when you skip the DevOps testing pipeline, which is pretty common with minor updates,” Bansal told TechCrunch.
He says this often happens at larger organizations where there isn’t a single approach to software releases. “Let’s say you have 5,000 engineers, which probably will be divided into 100 teams of 50 or so different developers. These teams adopt different practices,” he said. And without standardization, it’s easier for bad code to slip through the cracks.
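One way teams try to close that gap is with an automated gate that refuses to promote a build unless the required checks have passed, so a “minor update” can’t quietly skip the pipeline. The sketch below is hypothetical; the check names and build-metadata shape are assumptions for illustration:

```python
# Hypothetical pre-deploy gate: block promotion unless every required check passed.
REQUIRED_CHECKS = {"unit_tests", "integration_tests", "canary_smoke_test"}


def can_deploy(build_metadata: dict) -> bool:
    """Return True only if every required check passed for this build."""
    passed = {
        name for name, result in build_metadata.get("checks", {}).items()
        if result == "passed"
    }
    missing = REQUIRED_CHECKS - passed
    if missing:
        print(f"Blocking deploy: missing or failed checks: {sorted(missing)}")
        return False
    return True


# A "minor update" that skipped integration tests would be blocked here,
# no matter which team produced it.
build = {"checks": {"unit_tests": "passed", "canary_smoke_test": "passed"}}
assert can_deploy(build) is False
```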
How to prevent bugs from slipping through
Both CEOs acknowledge that bugs get through sometimes, but there are ways to minimize the risk, including perhaps the most obvious one: practicing standard software release hygiene. That involves testing before deploying and then deploying in a controlled way.
Rogers points to his company’s software and notes that progressive rollouts are the place to start. Instead of delivering the change to every user at once, you release it to a small subset and see what happens before expanding the rollout. Along the same lines, if you have controlled rollouts and something goes wrong, you can roll back. “This idea of feature management or feature control lets you roll back features that aren’t working and get people back to the prior version if things are not working,” he said.
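In code, a progressive rollout often amounts to ratcheting a flag’s rollout percentage upward while watching a health metric, and zeroing it out if the metric degrades. This is a rough sketch under those assumptions; the metric query and flag shape are stand-ins:

```python
import time


def error_rate_for_cohort(flag_key: str) -> float:
    """Stand-in for a real metrics query (e.g. errors / requests in the cohort)."""
    return 0.002


def progressive_rollout(flags: dict, flag_key: str,
                        steps=(1, 5, 25, 50, 100),
                        max_error_rate=0.01,
                        soak_seconds=600):
    for pct in steps:
        flags[flag_key]["rollout_pct"] = pct
        time.sleep(soak_seconds)  # let the cohort generate real traffic
        if error_rate_for_cohort(flag_key) > max_error_rate:
            # Roll back: every user immediately falls back to the prior code path.
            flags[flag_key]["rollout_pct"] = 0
            flags[flag_key]["enabled"] = False
            return "rolled_back"
    return "fully_rolled_out"
```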
Bansal, whose company bought feature flag startup Split.io in May, also recommends what he calls “canary deployments,” which are small, controlled test deployments. They are called this because they hark back to canaries being sent into coal mines to test for carbon monoxide leakage. Once you prove the test rollout looks good, you can move to the progressive rollout that Rogers alluded to.
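A canary deployment can be sketched in a similar spirit: push the new version to a couple of hosts, compare their error rate against the rest of the fleet, and only then promote it. Everything below, including the deploy and metrics functions and the thresholds, is a hypothetical stand-in rather than a real deployment API:

```python
def deploy_version(hosts, version):
    print(f"deploying {version} to {len(hosts)} host(s): {hosts}")


def error_rate(hosts) -> float:
    return 0.001  # placeholder for a real metrics query


def canary_release(all_hosts, version, canary_count=2, tolerance=1.5):
    canary_hosts = all_hosts[:canary_count]
    stable_hosts = all_hosts[canary_count:]

    deploy_version(canary_hosts, version)
    baseline = error_rate(stable_hosts)
    canary = error_rate(canary_hosts)

    # If the canary misbehaves, stop here; only a few machines are affected.
    if canary > baseline * tolerance:
        deploy_version(canary_hosts, "previous-stable")
        return "aborted"

    # Canary looks healthy: proceed to the wider, progressive rollout.
    deploy_version(stable_hosts, version)
    return "promoted"
```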
As Bansal notes, code can look fine in testing, but a lab test doesn’t always catch everything, which is why good DevOps testing has to be combined with controlled deployment to catch what the lab misses.
Rogers suggests when doing an analysis of your software testing regimen, you look at three key areas — platform, people and processes — and they all work together in his view. “It’s not sufficient to just have a great software platform. It’s not sufficient to have highly enabled developers. It’s also not sufficient to just have predefined workflows and governance. All three of those have to come together,” he said.
One way to prevent individual engineers or teams from circumventing the pipeline is to require the same approach for everyone, but in a way that doesn’t slow the teams down. “If you build a pipeline that slows down developers, they will at some point find ways to get their job done outside of it because they will think that the process is going to add another two weeks or a month before we can ship the code that we wrote,” Bansal said.
Rogers agrees that it’s important not to put rigid systems in place in response to one bad incident. “What you don’t want to have happen now is that you’re so worried about making software changes that you have a very long and protracted testing cycle and you end up stifling software innovation,” he said.
Bansal says a thoughtful automated approach can actually be helpful, especially with larger engineering groups. But there is always going to be some tension between security and governance and the need for release velocity, and it’s hard to find the right balance.
We might not know what happened at CrowdStrike for some time, but we do know that certain approaches help minimize the risks around software deployment. Bad code is going to slip through from time to time, but if you follow best practices, it probably won’t be as catastrophic as what happened last week.