Rants of a deranged squirrel.

Leave AI Slop out of CVE; Humans Make Mistakes Just Fine

I was recently asked, again, if so-called AI could help CVE. My reply was quick and direct: no. At least, not right now, and in my view not for the foreseeable future. Anyone who knows me is probably aware of my disdain for so-called AI. The fact that I preface it with “so-called” should be a good clue, because what people call “AI” right now certainly isn’t. Machine learning (ML), scripting, and large language models (LLMs) are one thing, but they are not artificial intelligence. Until your so-called AI can pass the Turing Test, stop calling it AI and be honest with yourself. Artificial ignorance is a more apt description of the implementations I have seen.

First, I want to acknowledge that there are some companies and groups in the early stages of doing cool things with ML and LLMs. They are starting to use them effectively to find vulnerabilities, accurately, as well as to detect and prevent exploitation of zero-day vulnerabilities. But these are the outliers in the world of so-called AI being used in the vulnerability realm. Day to day, most implementations are certified slop. Anyway, back to the point: so-called AI cannot save CVE. First, as I have frequently blogged, CVE can’t be saved in its current state, period. Right now, so-called AI and the “enshittification” of our space are doing more harm than good in the bigger picture.

MITRE and the CVE program do not appear to be using so-called AI currently, and that is a good thing. However, more and more companies that “run vulnerability databases”, which are just lipstick on the CVE pig, are starting to use it. And it is a raging headache for many, even those who don’t use their products or services. Having AI slop hallucinate CVE IDs is the biggest frustration I have experienced, but it is just a sign of what is to come. Using this technology to introduce confusion and problems, rather than solutions, isn’t the right path forward.

I have to ask, though: why even use AI slop? There are hundreds of lipstick-on-the-CVE-pig regurgitations that do it without AI slop today. So what value does it bring? You can apply the lipstick to CVE with scripts and traditional coding. Oh, that’s right, you are lazy. Worse, AI slop is a pathway for people who don’t really understand vulnerability databases to step in thinking they do, or that they can solve problems they haven’t defined or don’t grasp.
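To illustrate the “scripts and traditional coding” point: even a few lines of Python with no AI anywhere can reject CVE IDs that are not even syntactically valid, which would catch a chunk of the hallucinated IDs described later in this post. This is a minimal sketch; the function name is mine, and the pattern follows the published CVE ID syntax (four-digit year, sequence number of at least four digits with no upper bound since the 2014 syntax change).

```python
import re

# CVE IDs follow the pattern CVE-YYYY-NNNN..., where the year is four
# digits and the sequence number is at least four digits long (it may be
# longer; the old four-digit cap was lifted in 2014).
CVE_RE = re.compile(r"^CVE-(19|20)\d{2}-\d{4,}$")

def looks_like_cve(candidate: str) -> bool:
    """Return True if the string is syntactically a well-formed CVE ID."""
    return bool(CVE_RE.match(candidate))

print(looks_like_cve("CVE-2021-44228"))  # True: well-formed ID
print(looks_like_cve("CVE-21-44228"))    # False: two-digit year
print(looks_like_cve("CVE-2024-123"))    # False: sequence too short
```

Of course, a syntax check only catches malformed IDs; a well-formed ID that was simply invented would still need to be checked against the official CVE list, which is also plain scripting, not AI.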

Case Study: Transilience / transilienceai

A relatively new company named Transilience advertises “Autonomous AI agents for Cyber Defense”, which is just a word salad of buzzwords, a warning sign unto itself. It gets worse when you look at their page a year prior:

“By automating the curation of relevant threat intelligence alerts, Transilience GPTs have enabled our team to save tens of hours of manual work each week, significantly enhancing our operational efficiency”

At one point, their page boasted “Adaptive AI Agents for Unburdened Cyber Defense”, which sounds good, because who likes to be burdened? More recently their page claims “Transilience delivers advanced, structured vulnerability data with real-world impact insights, vendor-specific details, and seamless integration—empowering developers, security teams”. Note the “real-world” bit, and let’s remember that “real” is the most important thing in everything we do. So the marketing claims above, with their promise of autonomous, curated, unburdened intelligence that saves tens of hours of manual work a week, sound great! Right? Let’s examine the reality of it. Turning to Twitter:

The number of times I have pointed out errors in their so-called AI, where it hallucinates CVE IDs, is utterly ridiculous. There is no “real-world” impact, it is not relevant to anyone, and it only creates a burden, causing me to waste more hours, not save them. Worse, while they spew this hallucinated slop to social media, they don’t have a human who will reply to you, and they won’t heed my advice to turn it off until it is fixed.

Tip of the Iceberg

Unfortunately for the information security industry, this is the tip of the iceberg, and Transilience is hardly the only so-called AI offender. Note that the primary experience that led to this blog was strictly incorrect CVE IDs mentioned on Twitter. So imagine how bad the problem really is when all of my examples are just a fragment of the bigger issue. What’s worse is that each time I encounter such a CVE hallucination, no one else seems to notice or care enough to reply. I don’t know if defenders are using this bad intel and creating more work, or if this is a sign that no one is using this AI-slop-driven business model.

F5, a company that specializes “in application security, multi-cloud management, online fraud prevention, application delivery networking (ADN), application availability and performance, and network security, access, and authorization”, has acquired “Fletch AI”. This is another AI-slop-driven effort that hallucinates CVE IDs and has no humans monitoring social media. It should worry you when a company known for producing its own vulnerable security software acquires a company known for producing bogus threat intel.

UNDERCODE NEWS (note the all-caps excitement there!) has a Twitter profile that boasts “Latest in Cyber & Tech News with AI-Powered Analysis and Fact Checking.” Note the last part, “fact checking”, and then note that their AI slop hallucinates as well. The worst offense was announcing a remote code execution vulnerability in Apache HTTP Server, which is run in some form or another by just about every large company. The problem, of course, being that the CVE ID they gave was bogus, and of course they had no reply for me. Tim McGuffin pointed out that “their whole site is clickbait, so i’d expect their twitter feed to be nothing different.”

Next up we have “Trending Tech News”, which shows that AI slop can hallucinate more than numbers. In this example, they give the incorrect CVE ID and hashtag it #Cisco, meaning they got the vendor wrong as well. Littering the post with hashtags to get more eyes on their errors is icing on the cake. But this isn’t just a game for the little AI slopsters, either. The bigger, better-known AI slop vendors are just as guilty.

The Bigger Picture

It would be difficult to imagine any reader of this blog has not run into articles, memes, or examples of AI slop giving incorrect information. The frequency, and absurdity, of these cases is pure humor at this point. Right up until you realize what AI is being baked into, and how much control it already has over our lives. The race to enshittify products and services is at break-neck speed, and it is already to the detriment of us all. Even in an industry where ‘integrity’ is a backbone of what we do, AI slop is still flying by.

Anyone who blindly implements a so-called AI solution without rigorous testing, not just for security and prompt injection but specifically for accuracy of information, needs to reevaluate their role in the industry. Just because another company has reduced the quality and integrity of its offering by introducing AI slop doesn’t mean you should too. In fact, you should market the opposite message: that humans still do a significant portion of curation, and that accuracy and integrity are your mission. That would sell me a lot faster than bragging about bullshit “AI”.

This is what your “AI” looks like to many of us:

Trust me, humans make plenty of mistakes with CVE numbers; we don’t need you.
