# Audited Once Is No Longer a Security Model
This article discusses security vulnerabilities and exploits for educational purposes. Nothing here constitutes financial or security advice.
## What’s New This Week
DL News published an investigation this week in which researchers from Halborn, Firepan, and Hacken independently report observing patterns consistent with AI-driven scanning of legacy smart contracts. The Anthropic research cited – 405 real exploited contracts, a 63% exploitation rate – was published in December and has been circulating in security circles since. No new data has emerged since writing, but the piece synthesises what practitioners are seeing on the ground.
## Changelog
| Date | Summary |
|---|---|
| 27 Mar 2026 | Initial publication. |
Smart contract security has always had an asymmetry problem. Defenders audit once, at launch. Attackers keep looking. What AI has done is make that asymmetry catastrophic.
The fundamental shift isn’t that AI invented new vulnerability classes. It didn’t. Reentrancy bugs, price oracle manipulation, integer overflows – these have been in the taxonomy for years. What AI does is scan for them faster, cheaper, and at a scale no human team can match. Gabi Urrutia, field CISO at Halborn, puts it plainly: “AI does not need to invent novel vulnerability classes to create more damage; it only needs to find old ones faster and at scale.”
The practical consequence is that the economics have changed completely. Previously, the cost of manually hunting vulnerabilities meant attackers focused on high-value targets – protocols with hundreds of millions in TVL. The threshold for a profitable attack was high enough that most legacy contracts sat in relative safety through obscurity and scale. That calculus is gone. Urrutia again: “Attackers can profit at much lower value thresholds than defenders can justify for equivalent detection effort.” If your protocol has $50,000 locked in a contract deployed in 2020, it is now worth scanning.
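The shift Urrutia describes can be made concrete with a toy break-even model. The numbers below are entirely illustrative (the article gives no cost figures): the point is only that when automated scanning cuts per-contract discovery cost from days of expert time to cents of compute, targets far down the TVL distribution become profitable.

```python
# Toy attack-economics model. All dollar figures and hit rates are
# invented for illustration; none come from the reporting above.

def break_even_tvl(discovery_cost_usd: float, success_rate: float) -> float:
    """Smallest locked value worth scanning, ignoring execution costs."""
    return discovery_cost_usd / success_rate

# Manual review: assume ~3 expert-days at $1,500/day, 5% per-target hit rate.
manual = break_even_tvl(discovery_cost_usd=4500, success_rate=0.05)

# Automated scan: assume ~$2 of compute per contract, same hit rate.
automated = break_even_tvl(discovery_cost_usd=2, success_rate=0.05)

print(f"manual break-even:    ${manual:>10,.0f}")     # $    90,000
print(f"automated break-even: ${automated:>10,.0f}")  # $        40
```

Under these made-up assumptions, the $50,000 contract from 2020 sits comfortably above the automated threshold and well below the manual one, which is the asymmetry the quote describes.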
Stephen Ajayi, dapp audit technical lead at Hacken, describes what this looks like in practice: “We observe repeated, identical exploit attempts across multiple contracts simultaneously, which is consistent with scripted or agent-driven reconnaissance.” That is not manual work. That is a pipeline.
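None of the firms have published their detection logic, but the pattern Ajayi describes – one exploit template replayed against many contracts at once – is structurally just a fan-out loop. The sketch below is hypothetical: the signature list, the pseudo-bytecode markers, and the contract addresses are all invented for illustration.

```python
# Hypothetical sketch of agent-driven reconnaissance: a small set of
# known vulnerability signatures checked against many contracts at once.
from concurrent.futures import ThreadPoolExecutor

# Invented signatures over invented pseudo-bytecode markers.
KNOWN_PATTERNS = {
    "reentrancy-v1": lambda code: "CALL" in code and "SSTORE_AFTER_CALL" in code,
    "stale-oracle": lambda code: "SPOT_PRICE_ONLY" in code,
}

def scan(contract: dict) -> list[tuple[str, str]]:
    """Return (address, pattern_id) for every signature the contract matches."""
    return [(contract["address"], pid)
            for pid, matches in KNOWN_PATTERNS.items()
            if matches(contract["bytecode"])]

# Toy targets standing in for the long tail of legacy contracts.
targets = [
    {"address": "0xaaa", "bytecode": "CALL SSTORE_AFTER_CALL"},
    {"address": "0xbbb", "bytecode": "SPOT_PRICE_ONLY"},
    {"address": "0xccc", "bytecode": "SAFE"},
]

with ThreadPoolExecutor() as pool:
    hits = [h for result in pool.map(scan, targets) for h in result]

print(hits)  # [('0xaaa', 'reentrancy-v1'), ('0xbbb', 'stale-oracle')]
```

The identical check running against every target simultaneously is what produces the signature the researchers observe: the same exploit attempt surfacing across many contracts at the same time.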
The Truebit hack is instructive, even though nobody has confirmed AI involvement. A pricing-logic flaw in a contract compiled with Solidity 0.6.10 – deployed over five years before the attack – was used to drain $26 million. Urrutia describes it as exactly the target profile AI makes more attractive: old compiler, dormant risk, no ongoing maintenance. In the current environment, that profile is what gets you targeted.
This is the structural problem with the one-time audit model. An audit is a snapshot. It captures the risk that existed at the moment the auditors looked. It says nothing about what a more capable tool, scanning the same bytecode three years later, might find. And it says nothing about the ecosystem changes around the contract – new price feeds, new protocol integrations, new market conditions that change what an attacker can do with a pricing assumption baked into 2020 code.
Urrutia is direct about the implications: “‘Audited once’ is no longer a serious security model. If attackers can continuously re-scan the long tail of old contracts, then dormant risk becomes active risk again.”
There is a defensive side to this story. Octane Security, an AI-native security firm, used its tooling to find a high-severity bug in Nethermind, the Ethereum client software. Ajayi draws the obvious conclusion: “If attackers are using AI to find vulnerabilities, defenders must do the same.” The capability gap isn’t permanent, but it exists right now. Gerrit Hall, who spent five years at Curve Finance before co-founding Firepan, is convinced that “offensive capacity is improving far faster than defensive tooling.”
The Anthropic research from December gives some shape to the numbers. AI agents tested against 405 real exploited contracts from 2020 to 2025 successfully exploited 63% of them. The scale is the point: what previously required a skilled human researcher working for days can now be automated across thousands of targets simultaneously.
The practical implication for anyone running DeFi infrastructure is that continuous re-auditing needs to become standard operational practice, not a pre-launch checklist item. Not because the code changed, but because the tooling scanning it did. A contract you deployed and audited three years ago is not in the same threat environment it was then. The compiler version, the ecosystem dependencies, the market conditions it assumes – all of it looks different to a tool that didn’t exist when you shipped.
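As a sketch of what “continuous” might mean operationally, a minimal re-audit policy check is shown below. Everything in it is an assumption for illustration – the registry fields, the version cutoff, and the one-year threshold are invented, not drawn from any firm’s actual practice.

```python
# Hypothetical re-audit policy: flag any deployed contract whose compiler
# version or last-audit date falls outside policy, instead of checking
# only at launch. All fields and thresholds are invented.
from datetime import date

MIN_SOLC = (0, 8, 0)       # treat anything older as dormant risk
MAX_AUDIT_AGE_DAYS = 365   # re-audit at least yearly

def needs_reaudit(contract: dict, today: date) -> bool:
    solc = tuple(int(p) for p in contract["solc_version"].split("."))
    stale_compiler = solc < MIN_SOLC
    stale_audit = (today - contract["last_audit"]).days > MAX_AUDIT_AGE_DAYS
    return stale_compiler or stale_audit

registry = [
    {"address": "0xlegacy", "solc_version": "0.6.10", "last_audit": date(2020, 5, 1)},
    {"address": "0xfresh",  "solc_version": "0.8.24", "last_audit": date(2026, 1, 15)},
]

flagged = [c["address"] for c in registry if needs_reaudit(c, today=date(2026, 3, 27))]
print(flagged)  # ['0xlegacy']
```

A check like this run on a schedule is the difference between an audit as a gate and an audit as an ongoing process: the Truebit-style profile – old compiler, no audit in years – surfaces automatically instead of waiting to be found by someone else’s scanner.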
Static risk has become dynamic risk. The infrastructure for treating it that way – continuous scanning, automated alerting on legacy contracts, deprecation paths for under-maintained code – mostly doesn’t exist yet. That is the gap AI has opened, and closing it requires treating security as an ongoing operational cost, not a one-time gate.