Rants of a deranged squirrel.

Vulnerability Forecasting Technical Colloquium – A Few Thoughts

[I wrote this on September 21st, but apparently forgot to move it from GDoc to blog. I suspect that's because it really needs to be cleaned up, as it is a first draft. Rather than do that, since the event has passed, I will just backdate it instead. This blog was actually published December 28, 2024.]


Part of me says I must attend. The other part of me wants to make a fresh bowl of popcorn. I was actually asked a couple of days ago if I was attending, and that was the first I had heard of it. Given that vulnerability databases are my life, it's kind of odd I didn’t hear about it before.

This has been my niche for basically 30 years. Let me explain, because I am having to do this in various ways lately for the newcomers in this arena. There are (were), fundamentally, two types of vuln people: those who ‘do’, and those who ‘observe’.

On the surface, I am an observer, since I mostly observe vuln disclosures. But at this point, it might be time to argue there are fundamentally three types. There are those who ‘do’, discovering and disclosing vulns, and those who ‘do’ the aggregation, with all its challenges, while also ‘observing’ those disclosures. That combination produces more refined data, like we see in any vulnerability database, including CVE.

Then there are those who ‘observe’ that refined set of data and do their fancy graphs and predictions and such. The problem is that they are twice removed from the data they are observing. They don’t know how the data is generated, nor how it is aggregated. They only analyze the final product.

So, let’s look at this “Vulnerability Forecasting Technical Colloquium”.

“… gathers people to talk about vulnerabilities; published or unpublished …”

Yes, everyone can talk about unpublished vulnerabilities, and recently, they often do. It’s the “known unknown”, and we love to predict them based on past data. But our industry has shown how considerably that landscape evolves: the early days of iDefense, the evolution of bug bounties, dark web sales, private but unaffiliated government contractors that buy these vulns, the agencies that outright purchase them on the dark web, and more. There’s a reason the future is a known unknown. We can all speculate, but I don’t recall anyone truly predicting a quarter of what I hastily summarized in this paragraph.

“Forecasting and prediction of anything to do with potential exploits, actual exploits, or hypothetical exploits is on topic.”

Ooh yeah, let’s spend time hypothesizing about whether one vuln can be exploited or not! Let’s consider a recent example, VulnDB 369784 (CVE-2024-38063), and all the analysis of it. A lot of researchers have been examining this vulnerability, for good reason; it is potentially a pure RCE in Windows, over IPv6, a protocol that many often ignore. A month later, there is lots of analysis and just crashing PoCs, which is to be expected. Is it exploitable? Eh, maybe! But if in 30 days modern researchers of any hat can’t figure it out, that says something. Is it worth the academic masturbation over that vuln? No. Let researchers sort it out and publish a PoC / exploit code / video claiming they have a working exploit.

With the rate of vuln disclosures, focusing on that seems too academic and too focused on the offense. I say this as a former hacker, former pen-tester, and former contributor to a commercial vulnerability scanner.

“We welcome metrics, measurement, and moderation of vulnerabilities, coordinated or unilaterally published.”

Ooh, “I’ma let you finish…” To me, that is one of the current problems. Welcoming that idea isn’t a plus in my book. As said above, there are those who do, those who do/observe, and those who only observe the final data. The last group is not qualified to accurately provide *meaningful* metrics and measurement, and most certainly not poised to provide any level of moderation.

Also interesting, we get “coordinated or unilaterally published”. Is that academic speak for “coordinated vs uncoordinated”? If not, what does that mean? Why not just say uncoordinated? Double irony points for the number of academics that “unilaterally publish” vulnerabilities. I think I will use that term from now on, but only for academics. Seems fair, since they appear to have birthed this terminology.

This is where it gets fun, and becomes a true roller coaster:

“We try to measure: define, identify, count, and catalog vulnerabilities, assess characteristics, detect existence and exploitation, and prioritize responses. In recent years, we’ve worked on prediction of the occurrence of new vulnerabilities (vuln4cast) and the likelihood that they will be exploited (EPSS). We are also interested in the growth of software, such as measurement of CPE records. Further topics include CVSS, CWE, or SBOMs, or decision support such as SSVC.”

That’s the entire vulnerability landscape, basically! Good luck forecasting that shit. Really… you start out, unquoted above, saying that the overall field “has been scattered for decades”. You aren’t wrong there! But let me give you some advice. Thirty-plus years of a scattered landscape cannot be measured with any degree of certainty. Not at this point, when several genies are out of the bottle, none of which were predicted in the real world or in cyberpunk.

“This Technical Colloquia gathers interested parties to present, discuss, and improve vulnerability measurement and prediction models, methodologies, and techniques.”

Great, observers that are twice removed, please keep providing us statistics and metrics and charts and colorful presentations that show how bad things are! And how … I don’t know, unpredictable and scary and … fill in some adjectives? Heaven forbid you work more toward solving the problem instead of charting how bad it is. Anyone who works in this arena knows damn well how bad it is.

Today was Patch Tuesday again! A ritual almost born on the back of Microsoft propaganda about how researchers are unethical and vendors are ethical. You know who has to catalog -all- of those vulnerabilities? It isn’t the news outlets, and it isn’t the third-party summarizers sucking on the teat of CVE. I’ll interrupt myself from finishing that thought by providing one more example, because those who feed off the teat of CVE, without realizing just how bad it has become, need to go away.

And this is where I will end, because I know what this conference is really born out of. In their CFP they have eight bullets, and it isn’t until the seventh that you see “CVE and CNA”. When you read that bullet, I really hope you stop to think for a minute.

“CVE and CNA statistics or yearly reports”

What first-year high school student couldn’t submit a paper here on that topic?

How many companies do yearly reports on CVE statistics? And how many companies are now doing CNA statistics? The first is very boring. The latter is mostly boring, at least the way it has been presented so far: pretty charts based on raw data, without any understanding of that middle layer. How and why is that data generated? Sure, you can tell me which CNA produced how much… great!

DOES. NOT. MATTER.

Not for the most part, at least. There is so much context lost in all of that that it becomes laughable. And if you don’t understand why, that’s fine. Just understand that academics and third-party data analyzers who don’t actually work with the data don’t get the full picture.

“We intend to have a discussion and exploratory atmosphere, and invite academics and practitioners alike.”

If anyone wants to challenge them on that, here’s your opportunity. Tell them to move the date to after October 25, since I will be on my first real vacation in 14 years during the conference. If they want a serious practitioner with real-world experience and, more importantly, the willingness to speak up against the wave of bullshit that is likely to consume it, then delay it. You do that, and I will pay out of pocket to attend this conference and bring some reality to your world.

You may test that assumption at your convenience.

Reminder… this is FIRST. The organization that brought you CVSSv3, which had a world of issues; CVSSv4, which is basically absurd and written by academics; EPSS, which refuses to share real-world, critically valuable data with the rest of us; VulnCon, with its issues; and other ilk.

Thanks for coming to my TED talk. I may make a blog of this tomorrow. So yes, like others, I would love to see a report of this. I’d much rather see recorded speakers, BoF sessions, or anything else they do.

Ah crap, I missed perhaps the most important parts of this page, the very end:

“We are not in love with the problem…”

And that is why you won’t actually help. You can go on to try to qualify it with “and while zerodays make heroes”, but you got that wrong too. The only “zeroday heroes” are in the world of those causing the problems, whatever color hat you choose. You go on… “we’re more interested in making vulnerability management manageable”, and measurements, statistics, metrics, calculations, graphs, pie charts, and fancy presentations won’t do that one bit. You finish with “and exploitation easy to foresee.” So what, you don’t have any faith in EPSS? Because that is a FIRST project, one that doesn’t openly share the data it uses to generate the model, making it non-repeatable.

And the absolute clown car of a finale:

“In short form; Less reactionary and more confident. Overachieving and under budget. We foresee the harm and contain it before it is realized. The vulnerabilities of the future are no longer surprises or surprising.”

This conference is quite literally the definition of REACTIONARY. You won’t overachieve, and you are definitely under budget, since based on the page there isn’t even a virtual offering.

Seriously? This literally conjures the image of a bunch of academics sitting around a table, each with one hand on top scribbling notes, laughing at each other’s jokes, their other hands below the table going in a frantic motion.

Sorry for the bad visual, but that is the reality based on the words they offered. 
