Reflections on “CVE Approach for Cloud Vulnerabilities”

At Black Hat Briefings USA this week, Ami Luttwak and Shir Tamari called for a “CVE” style approach to documenting vulnerabilities that affect cloud offerings (note: I have not seen the talk). As one of two people (the other being Jake Kouns) who may have the longest history in this specific space, I wanted to give some history and a very high-level reflection on the challenges and benefits of such an endeavor. First, for perspective, CVE covers at least 56 pure cloud / site-specific vulnerabilities, the first going back as far as 2000, and at least another 461 that are hosted blockchain contracts. At times, some of the CVE (Editorial) Board has advocated for CVE to expand to cover cloud vulnerabilities while others have argued against it. At least one member who advocated for cloud coverage said such issues should get CVE IDs, while others, both for and against the idea, said that if cloud were covered, those issues should get their own ID scheme. MITRE has not made a clear policy change on that in years, despite increasing coverage, primarily of hosted blockchain contracts.

Next, the idea of a “CVE for Cloud” has been done! Well, it was attempted at least. Back in the day, the Open Security Foundation was desperately trying to maintain the Open Sourced Vulnerability Database (OSVDB) and DataLossDB. Neither project had enough volunteers, and community support simply wasn’t there. Despite that, in 2010 we spun up a project called Cloutage (archive.org link) that aimed to catalog instances of cloud service outages as well as security incidents. The idea was that this would tell companies which providers actually had the alleged 99.999% uptime, which regions of a provider were more prone to issues, and more. We had been collecting incidents categorized as “hack”, “outage”, or “vuln” long before that with such a project in mind, but that is when we finally made them publicly available in an organized fashion. So my reflections come with real-world experience; the challenges we faced, along with the pitfalls of such an endeavor, are still largely present. (For convenience, I will use the term ‘cloud’ to refer to service providers that are subscription offerings and not on-premises hardware / solutions.)

First, let’s begin with one significant change from the mid 2000s to today. Over ten years ago, a vulnerability in a cloud offering was typically an issue the provider had to fix and was out of the customer’s control. While that is still prevalent today, we see another class of cloud vulnerabilities that better warrants a “CVE for Cloud” approach: cases where a vulnerability is present in a cloud offering and requires the customer to mitigate it themselves. These are often described as misconfigurations, like the pervasive open S3 buckets that can be accessed without credentials, disclosing a world of information or hosting malicious software [1] [2] [3]. There is a fundamental difference between a security issue that requires the customer to mitigate and one they must wait for the cloud provider to fix.
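To make that distinction concrete: this is the class of issue a customer can (and must) check for themselves. Below is a minimal sketch, assuming boto3 with configured AWS credentials, that flags a bucket with a public ACL grant or a missing public access block; the bucket name is hypothetical, and a real audit would also inspect the bucket policy and account-level settings.

```python
# Minimal sketch: flag an S3 bucket that has an ACL grant to "everyone"
# or that lacks a bucket-level public access block.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_looks_public(bucket: str) -> bool:
    s3 = boto3.client("s3")

    # Any ACL grant to AllUsers / AuthenticatedUsers is a red flag.
    acl = s3.get_bucket_acl(Bucket=bucket)
    acl_public = any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
        for grant in acl.get("Grants", [])
    )

    # A missing or permissive public access block leaves the door open.
    try:
        pab = s3.get_public_access_block(Bucket=bucket)
        blocked = all(pab["PublicAccessBlockConfiguration"].values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            blocked = False
        else:
            raise

    return acl_public or not blocked

if __name__ == "__main__":
    print(bucket_looks_public("my-example-bucket"))  # hypothetical bucket name
```

The point is not this particular check, but that the fix sits entirely on the customer’s side of the shared responsibility line, which is what makes a public catalog of such issues actionable.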

Historically, when the vast majority of security issues fell into the “wait for the provider to fix” category, we cataloged those vulnerabilities but questioned the value. Sure, it had value in that a company could evaluate cloud providers to see which had more vulnerabilities and, sometimes, how long they took to fix. But even that was crippled by the fact that providers would only announce an issue after it was fixed. That meant we knew a vulnerability had been present for some undetermined time and was then patched. But cloud providers weren’t saying how long customers had been exposed, and most providers simply were not disclosing at all. So the database was a fraction of the actual vulnerabilities, typically with no known exposure window, and customers were powerless to do anything about it.

Another issue we ran into was that the rare times a cloud provider disclosed an issue, the disclosure was extremely vague. More frustrating was when they used language implying that some, but not all, customers were impacted. They didn’t include regions or IP ranges or anything else to identify who was affected, so a customer couldn’t even confirm whether they were impacted without opening a support ticket and hoping for an honest answer. That was frustrating and not very helpful. How will security teams cope if they find out there is a public vulnerability in a cloud offering and no mitigating control? Seems like a recipe for stressful days. Of course, it’s no different than now, except that today ignorance is bliss.

CVSS is the de facto risk scoring framework for vulnerabilities. It was designed around vulnerabilities that appear in CVE, which are predominantly on-premises software. Would CVSS scale to work for cloud vulnerabilities? CVSSv2 seems like it would be as suitable for cloud as for on-premises software, given its simpler scoring elements. CVSSv3 seems like it could be more problematic and not take advantage of some of the changes over v2. First, the ‘physical’ attack vector would effectively be eliminated. Second, would Authorization Scope be neutered? Defined initially as the “ability for a vulnerability in one software component to impact resources beyond its means, or privileges”, on the surface it seems like every vulnerability would have to be evaluated more closely for scope change. Every vulnerability potentially causes the cloud provider, a different company than the one using the service, to get involved in some way. The cloud provider’s support and security teams spending time because $CompanyA’s instance got popped technically “impacts resources beyond its means”. I don’t think that is what the CVSS SIG had in mind, though. That said, given the shared environments of many cloud offerings, sandbox escapes and lateral movement potentially impact completely different organizations. Does a simple “scope changed” modifier to a score adequately convey the potential risk?
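As a rough illustration of how little the Scope metric moves the needle, here is a sketch of the published CVSS v3.1 base score formula applied to a hypothetical local sandbox escape in a shared environment, scored once as Scope Unchanged and once as Scope Changed. The vectors are my own illustrative examples, not from any advisory or from the talk.

```python
# Sketch of the CVSS v3.1 base score formula (per the FIRST specification),
# used to compare a hypothetical cross-tenant escape with and without a
# scope change. Example vectors are illustrative only.
import math

WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {  # Privileges Required weight depends on whether scope changes
        "U": {"N": 0.85, "L": 0.62, "H": 0.27},
        "C": {"N": 0.85, "L": 0.68, "H": 0.50},
    },
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(value: float) -> float:
    """Round up to one decimal place, as defined in the CVSS v3.1 spec."""
    int_input = round(value * 100000)
    if int_input % 10000 == 0:
        return int_input / 100000.0
    return (math.floor(int_input / 10000) + 1) / 10.0

def base_score(vector: str) -> float:
    m = dict(part.split(":") for part in vector.split("/")[1:])
    changed = m["S"] == "C"
    iss = 1 - (1 - WEIGHTS["CIA"][m["C"]]) * (1 - WEIGHTS["CIA"][m["I"]]) * (1 - WEIGHTS["CIA"][m["A"]])
    impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15 if changed else 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                      * WEIGHTS["PR"]["C" if changed else "U"][m["PR"]]
                      * WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    if changed:
        return roundup(min(1.08 * (impact + exploitability), 10))
    return roundup(min(impact + exploitability, 10))

# Hypothetical local "sandbox escape" in a shared environment:
print(base_score("CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H"))  # 7.8
print(base_score("CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H"))  # 8.8
```

Whether a one-point bump adequately conveys “another tenant’s data is at risk” is exactly the question.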

Finally, who’s going to run this “CVE for Cloud”? OSVDB and subsequent similar projects give a damning history showing there is a lot of vocal support for community-run vulnerability resources, but those vocal folks very rarely contribute the actual time required. If it is run by one person or a small group of dedicated volunteers, how long before they burn out or commercialize it? There’s a reason most of the public VDBs have gone away, or gone commercial to varying degrees (e.g. OSVDB, Secunia, BID). The only thing worse than such a project not getting off the ground is one that does, becomes an essential part of security programs, and then goes away.

I look forward to seeing the talk by Luttwak and Tamari, if it is made public. I’m curious if they have come up with solutions to the problems that have plagued vulnerability databases for over two decades.

[Update: Josh Bressers points out that the Cloud Security Alliance (CSA) wants to address this problem too.]

One response to “Reflections on “CVE Approach for Cloud Vulnerabilities””

  1. I agree we need something similar to CVEs for cloud offerings. However, I think it should be focused on vulnerabilities, not risks. Weak defaults are not vulnerable defaults, and a correct configuration is not a mitigation for a vulnerability… We also have to be careful about which service will “own” the vulnerability. Cloud services are usually “layered”, which means that you have cloud infra like AWS, a web server on top of it, and an application on top of the web server. If there is a misconfiguration caused by a weak default on the cloud infra, who would get the CVE? I think it is necessary, but it is not trivial and it should be defined carefully.
