Yesterday many people ran across a headline that was shocking, and repetitive. This time it read “‘Gone in 9 seconds’: Claude-powered AI agent deletes startup’s entire database”. The first thing I had to do was check the date of the article, because I swore I had just read about this recently. Yep, April 28, so it’s a new one! In a prior blog on the topic of so-called AI, or as I argued it’s better termed “Artificial Humanity”, I linked to the “AI Incident Database” that tracks a wide range of negative-impact incidents related to the use of the technology. The headline from days ago certainly isn’t new to the world. So why blog about it? Glad you asked, here we go!
I will cite some lines from the recent article about the incident, along with similar prior incidents, to argue that journalists need to stop chasing these rookie headlines and start approaching the topic with more expertise, asking more questions of the “victims”. I’ll even go so far as to be a callous asshole and say that some of these companies got exactly what they were begging for, in the form of deleted databases, long-term outages, and financial losses resulting from their own actions; they are more to blame than the software everyone seems keen to fault. Please note, I only use the language above in the context of technology being used poorly, the “victim” being a tech company, and the harm being a bunch of 1’s and 0’s. I do not apply such victim-blaming to humans that are abused in any fashion.
Missing the Blame Target
The article cited above misses the bigger picture, the finer points, who is really to blame, and just about everything except the clickbait headline, now that I start to list it out. Before we start, it’s worth noting that the first line of the article suggests the author is brand new to journalism or simply hasn’t read any article on the topic this year: “An AI coding agent wiping out an entire company database in seconds is no longer a hypothetical risk.” If anyone thought this was “hypothetical” in late April 2026, it seems they don’t read any tech headlines either.
So let’s examine some of the lines quoted verbatim from this article together to see if a pattern emerges. I am confident my more technical readers who work in information technology/security will notice it right away:
“An AI coding agent, Cursor running Anthropic’s Claude Opus 4.6, deleted our production database and all volume-level backups in a single API call to Railway,” Crane wrote. “It took 9 seconds.”
Instead of flagging the issue, it searched for an API token, found one in an unrelated file, and used it to delete a Railway volume, the one containing live production data.
Crane also took responsibility for assumptions made during setup. “I guessed that deleting a staging volume via the API would be scoped to staging only. I didn’t verify,” he said.
Rather than enumerate all the ways this company did not employ even the most basic mature development practices, had not undergone any real security audit, and had basically glued together a bunch of technology in the most unstable way, begging to fail, let’s try a brief thought experiment. Try re-reading the entire article, and this time substitute “Claude” or “AI Agent” with “human developer” and see what you think. Should a human be able to make one call to one API, using credentials they found in a random file, and cause the damage described? If the answer ranges from “of course not” to “are you out of your fuckin’ mind?!” then there you go!
Jer Crane, founder of PocketOS, and whatever developers are on staff are entirely to blame for this, long before the introduction of an AI Agent. If one was involved from the start, meaning this is a vibe-coded fly-by-night shop, then the same statement stands: Crane is entirely to blame here. Even if you wanted to trust an AI Agent as much as was demonstrated here, it simply should not have the capability to do this level of damage. Credentials should not be left in random files, catastrophic results stemming from a single API call should require multiple levels of confirmation including a human, without exception, and none of this should have been permissible in a production environment. These are things you learn in first-year programs, and they are considered elementary concepts to almost anyone practicing in the real world.
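To make that concrete, here is a rough sketch in Python of the kind of guard rail that was missing. To be clear, this is entirely hypothetical: the function names, the DEPLOY_API_TOKEN variable, and the confirmation flow are mine for illustration, not Railway’s or Cursor’s actual API.

```python
import os
import sys

# Hypothetical guard rail for destructive infrastructure calls. Every name
# here (DEPLOY_API_TOKEN, confirm_destructive, delete_volume) is invented
# for illustration; this is not Railway's or Cursor's real API.

PROTECTED_ENVIRONMENTS = {"production", "prod"}

def get_api_token() -> str:
    """Pull the token from the process environment (or a secret manager),
    never from a random file an agent can stumble across."""
    token = os.environ.get("DEPLOY_API_TOKEN")
    if not token:
        raise RuntimeError("No token in the environment; refusing to guess.")
    return token

def confirm_destructive(action: str, target: str, environment: str) -> None:
    """Hard stop before any destructive call: production is blocked outright,
    and everything else demands a typed confirmation from a live human."""
    if environment in PROTECTED_ENVIRONMENTS:
        raise PermissionError(
            f"{action} on {target!r} targets {environment!r}; automated "
            "deletion of production resources is never allowed."
        )
    if not sys.stdin.isatty():
        raise PermissionError("No interactive human present; aborting.")
    typed = input(f"Type {target!r} to confirm {action}: ")
    if typed.strip() != target:
        raise PermissionError("Confirmation did not match; aborting.")

def delete_volume(volume: str, environment: str) -> None:
    """Only after both gates pass does the real call happen (simulated here)."""
    confirm_destructive("DELETE", volume, environment)
    token = get_api_token()  # scoped token, fetched at the last moment
    print(f"Deleted {volume} in {environment} (simulated; token not shown).")
```

With even one of those three checks in place, the “9 seconds” call dies with a PermissionError before it ever reaches a production volume.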
Jumping back to March, 2026, and the headline “An AI agent destroyed this coder’s entire database. He’s not the only one with a horror story”, there are similarities (bolding is my emphasis):
The root cause was a small setup mistake on a new laptop that confused the automation about what was “real” and what was safe to delete, so it erased the actual production system instead of just cleaning up duplicates.
Then July 2025, when “Vibe” coding service Replit deleted a production database and more:
“Replit QA’s it itself (super cool), at least partially with some help from you … and … then you push it to production — all in one seamless flow.”
From the same timeframe as above, this time at the hands of Gemini, which deleted code:
In his GitHub post, Gupta, who is a product lead at cybersecurity firm Cyware, clarified he’s not a developer, but a “curious PM [product manager] experimenting with vibe coding.”
These are just a few examples from an alarmingly fast-growing trend of similar incidents, and while the last one hit a person rather than a company, that person works for a security firm and should better understand security controls, even if they are not a developer.
Blame in the Wrong Place
In addition to not asking more pointed questions and putting the victims on the spot, journalists seem quick to attribute blame elsewhere. From the first incident at the start of this blog:
“This matters because the easy counter-argument from any AI vendor in this situation is ‘well, you should have used a better model.’ We did,” Crane wrote. “We were running the best model the industry sells, configured with explicit safety rules in our project configuration, integrated through Cursor — the most-marketed AI coding tool in the category.”– Alex Perry, Tech Reporter, Mashable
This illustrates why tech journalists need to ask better questions. Any “AI” vendor that tries to counter an argument with “use a better model” is myopic and disingenuous at best.
Different article but covering the same incident:
What made the situation worse was the complete absence of safeguards. There was no confirmation prompt, no warning about deleting production data, and no environment-level restrictions. The command executed instantly and irreversibly.– Unattributed MoneyControl Writer
So close! Absolutely correct about the “absence of safeguards”, but so wrong in attributing them to the software first and the “environment-level restrictions” last, especially when those restrictions should be even more robust than described. The entire development system should be wrapped in safeguards from the ground up, rather than over-relying on the AI Agent coding safely for you. A security auditor should actually test whether any of this is possible before you stake your company’s reputation on it.
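What would an auditor actually test? Here is a minimal sketch, again in Python and again hypothetical: the ScopedClient class and its delete_volume method are stand-ins I invented, not any vendor’s real SDK. The property under test is simply that a credential scoped to staging is provably unable to touch production, asserted before launch rather than guessed at.

```python
import pytest

# Hypothetical audit test. ScopedClient and delete_volume are invented
# stand-ins; the property being checked is that a credential scoped to
# staging cannot act on production.

class ScopedClient:
    """Toy stand-in for an API client whose token is bound to one environment."""

    def __init__(self, token_environment: str) -> None:
        self.token_environment = token_environment

    def delete_volume(self, volume: str, environment: str) -> str:
        if environment != self.token_environment:
            raise PermissionError(
                f"token scoped to {self.token_environment!r} "
                f"cannot act on {environment!r}"
            )
        return f"deleted {volume} in {environment}"

def test_staging_token_cannot_touch_production():
    client = ScopedClient(token_environment="staging")
    with pytest.raises(PermissionError):
        client.delete_volume("data-volume", environment="production")

def test_staging_token_can_clean_up_staging():
    client = ScopedClient(token_environment="staging")
    assert "staging" in client.delete_volume("scratch-volume", "staging")
```

Run under pytest, the first test fails the build the moment a token can cross environments. Crane’s admission, “I guessed that deleting a staging volume via the API would be scoped to staging only. I didn’t verify,” is exactly the gap a test like this closes: the guess gets answered by a failing assertion instead of a deleted production volume.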
The root cause was a small setup mistake on a new laptop that confused the automation about what was “real” and what was safe to delete, so it erased the actual production system instead of just cleaning up duplicates.– Beatrice Nolan, Tech Reporter, Fortune
No, that is not the ‘root’ cause of this incident. The actual root cause was running that technology in a production environment in the first place, which is what allowed the setup mistake to matter. Had the agentic AI software been limited to staging and/or development environments, then we never would have read about this incident. From the July, 2025 Gemini incident:
Seasoned developers might know to experiment with AI coding agents in an isolated environment or otherwise protect the original source files, as some Redditors pointed out. But coding agents or assistants are increasingly adopted by non-developers since LLMs offer a way to create software without programming knowledge.– Cecily Mauran, Tech Reporter, Mashable
No no no, there is no “but” here. If coding agents and assistants are “increasingly adopted by non-developers”, you stop right there, because that last point is incredibly important. The term “non-developers” is the key. It’s no different than hiring contractors who aren’t licensed, don’t have much experience, and promise they can do it cheaper than others.
Recall my earlier thought exercise of replacing the AI Agent with a human and do the same here, this time replacing “coding agents” with “electricity” and “programming knowledge” with “water”. If you play with dangerous things without knowledge of them, let alone their safety implications, you will get hurt. There is a reason many parents let their children do something that will briefly hurt, or scold the kid after touching a hot stove rather than immediately showing empathy before the lesson lands. They know that at a formative age we learn that lesson for life.
These stories have the narrative wrong. An AI Agent did not destroy a production database; the person wielding the tool without experience did. Time to let these non-developers playing with electricity in the middle of a metal rod factory while wearing a blindfold get zapped once or twice. Hopefully it isn’t too late for their formative years.