Eric Goldstein, the executive assistant director for cybersecurity at the US government’s Cybersecurity and Infrastructure Security Agency (CISA), recently said, “To say that our solution to cybersecurity is at least in part, patch faster, fix faster, that is a failed model.” That remark, and his subsequent analysis, resonated with me.
I talk to organizations all the time that struggle with effective vulnerability management, particularly in the world of cloud-native, DevOps-driven workloads. A “shift-left” approach to vulnerability management, where scanning happens early in the code’s lifecycle to catch issues long before they’re deployed, is philosophically compelling, but it comes with its own headaches: implementation can be complex, and it frequently requires non-security teams to do the heavy lifting to make it work. And even if organizations do manage to shift scanning left, they’re usually overwhelmed by the sheer number of vulnerabilities found in the libraries and code included in their projects. Often, old vulnerabilities are re-introduced in new projects, too.
Our observations at Orca support the depth of the issue. While preparing for an upcoming research report, I was at once surprised and unsurprised to see that many organizations have exploitable, fixable vulnerabilities more than 20 years old in workloads running in the cloud; we also see many cases in which the Log4Shell vulnerability is being reintroduced into deployments.
Goldstein calls for the software industry to take on more of the burden of addressing vulnerabilities, and I can’t disagree with him. However, that’s a long-term change, and I’m more interested in how organizations can get ahead of “patch faster, fix faster” or, at least, get more efficient at it.
Embracing Context
It’s fair to say that not all instances of a vulnerability are equal. A vulnerability exposed to the Internet is likely to be more concerning than one with no exposure at all, and a vulnerability in a library that a workload actually uses is more concerning than one that is present but never called. By using context, we can decide what needs to be patched immediately and what can wait for a regular cycle.
One of the most important contexts is “How is the vulnerable asset used?” – not only whether it’s exposed to the outside world but, also, whether it’s part of a business-critical process, whether it has access to critical roles and data, and whether a compromise of the asset might lead to additional, more interesting assets. While analysts can look at this sort of context manually, it’s more efficient if vulnerability management systems automate it.
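To make that concrete, here’s a minimal sketch of what context-weighted prioritization might look like. The field names and weights are hypothetical, purely for illustration; a real system would derive this context automatically from the environment rather than hard-coding it.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float              # base severity score
    internet_exposed: bool   # reachable from the outside world?
    business_critical: bool  # does the asset back a critical process?
    package_in_use: bool     # is the vulnerable library actually loaded/called?

def priority(f: Finding) -> float:
    """Weight raw severity by how the vulnerable asset is actually used.

    The multipliers here are illustrative assumptions, not published guidance.
    """
    score = f.cvss
    score *= 2.0 if f.internet_exposed else 1.0
    score *= 1.5 if f.business_critical else 1.0
    score *= 1.0 if f.package_in_use else 0.25  # present but never called
    return score

# The same CVE can land very differently depending on context.
findings = [
    Finding("CVE-2021-44228", 10.0, internet_exposed=True,
            business_critical=True, package_in_use=True),
    Finding("CVE-2021-44228", 10.0, internet_exposed=False,
            business_critical=False, package_in_use=False),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, round(priority(f), 1))
```

The point isn’t the particular weights; it’s that identical CVEs end up at opposite ends of the queue once usage context is applied.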
Another context might be “How likely is this to actually be exploited?” While a vulnerability’s Common Vulnerability Scoring System (CVSS) score tells us how severe exploitation might be, it tells us very little about how likely exploitation actually is. FIRST’s Exploit Prediction Scoring System (EPSS) is an adjunct to CVSS that predicts the likelihood of exploitation. Given that only 2.3% of vulnerabilities with a CVSS score of 7 or higher have been observed being exploited in attacks, adding a system like this to a vulnerability management program can increase efficiency by reserving “patch faster, fix faster” for where it’s warranted. (Of course, this doesn’t mean that other vulnerabilities can be ignored, but they can probably be updated on a more regular, lower-effort cadence.)
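EPSS scores are free to look up via FIRST’s public API. Here’s a small Python sketch (using the third-party requests library) that fetches the current EPSS probability for a CVE; what threshold you act on is a policy decision for your own program.

```python
import requests

def epss_score(cve_id: str) -> float:
    """Fetch the EPSS exploitation probability for a CVE from FIRST's public API."""
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": cve_id},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json().get("data", [])
    # The API returns the probability as a string; no data means no score yet.
    return float(data[0]["epss"]) if data else 0.0

if __name__ == "__main__":
    print(epss_score("CVE-2021-44228"))  # Log4Shell
```

A finding with a high CVSS score but a near-zero EPSS probability is a good candidate for the regular patch cadence rather than an emergency fix.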
Adopting Minimalism
Cloud-native workloads tend to run as “microservices”: instead of one monolithic service, they’re composed of many smaller services that work in concert. This suits cloud workloads well, as microservices typically scale more efficiently.
This introduces the possibility, particularly with containerized microservices, of reducing each service’s footprint to only the executables and libraries it actually requires. While removing unused code takes some additional work, it can also dramatically reduce the number of vulnerabilities found, and with it the vulnerability management burden for those workloads.
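One cheap way to see the effect is to compare scanner results for a full base image against a slimmer variant. This sketch assumes the open-source Trivy scanner is installed on your PATH, and the image names are just examples; it’s an illustration of the idea, not a description of any particular product.

```python
import json
import subprocess

def vuln_count(image: str) -> int:
    """Count the vulnerabilities Trivy reports for a container image."""
    out = subprocess.run(
        ["trivy", "image", "--format", "json", "--quiet", image],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    # Sum findings across all scan targets (OS packages, language libs, etc.).
    return sum(len(r.get("Vulnerabilities") or []) for r in report.get("Results", []))

# Slimmer base images ship fewer packages, and so fewer findings to triage.
for image in ["python:3.12", "python:3.12-slim"]:
    print(image, vuln_count(image))
```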
Easing Shift-Left
As discussed above, scanning for vulnerabilities in the pipeline before deployment is a powerful tool but, for many organizations, one that’s difficult to deploy and operationalize.
I’ve found that two things make this easier. First, integrations that don’t require manually modifying every pipeline dramatically lower the bar for security teams and increase adoption. Second, where possible, showing developers only the impact of their change (like a particular pull request), rather than every vulnerability in the project, lets the right people focus on fixing the right things.
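That second idea reduces, in essence, to a set difference: scan the base branch, scan the change, and report only what the change introduced. A toy sketch, with hypothetical findings keyed by (package, CVE) pairs:

```python
# A finding is identified by the affected package and the CVE. These example
# values are hypothetical, for illustration only.
Finding = tuple[str, str]  # (package, cve_id)

def introduced_by_change(base_scan: set[Finding], pr_scan: set[Finding]) -> set[Finding]:
    """Report only vulnerabilities present in the PR scan but not on the base branch."""
    return pr_scan - base_scan

base = {
    ("log4j-core-2.14.1", "CVE-2021-44228"),
    ("openssl-1.1.1", "CVE-2022-0778"),
}
pr = base | {("snakeyaml-1.33", "CVE-2022-1471")}

# Only the newly introduced snakeyaml finding is surfaced to the developer.
print(introduced_by_change(base, pr))
```

The developer who opened the pull request sees one actionable finding they caused, not the project’s entire backlog.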
I have yet to hear an organization tell me that they’re too good at vulnerability management, but I’m (not entirely irrationally) hopeful that 2024 will be the year one does. If you’re looking to bolster your vulnerability management strategy this year, talk to a cloud security expert today.