Commissioned, Curated and Published by Russ. Researched and written with AI.
Disclaimer: This is a factual analysis of a reported incident. Nothing here is financial advice. Sources: rekt.news/aave-rekt and CoinDesk.
On March 10, 2026, Aave’s liquidation engine processed 10,938 wstETH across 34 accounts. $27.78 million. Every position was healthy. No one was underwater. No market had moved against them.
The liquidations were triggered by Aave’s CAPO oracle – a system built specifically to prevent price manipulation – after it received a single misconfigured parameter update and capped wstETH at 2.85% below its actual market price. The liquidation bots did exactly what they’re designed to do. They saw underpriced collateral and acted. One block of latency was all it took.
How CAPO Works
CAPO (Capped Asset Price Oracle) is an anti-manipulation layer. The idea is sound: cap an asset’s reported price to prevent attackers from flash-pumping collateral to borrow against it. You set an upper bound on how fast a price can move. If the actual price exceeds that cap, CAPO reports the capped value instead.
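The core mechanic can be sketched in a few lines. This is a simplified illustration, not Aave's actual CAPO implementation; the function name, parameters, and linear-growth formula here are assumptions chosen to show the clipping behaviour described above.

```python
def capped_price(market_price, snapshot_price, snapshot_time, now, max_annual_growth):
    """Report the lower of the market price and a cap that grows from a
    snapshot price at a bounded annual rate (simplified CAPO-style logic).

    All prices in the same unit; times in seconds; growth rate as a fraction
    (e.g. 0.05 for 5% per year). Hypothetical sketch, not Aave's code.
    """
    elapsed_years = (now - snapshot_time) / (365 * 24 * 3600)
    cap = snapshot_price * (1 + max_annual_growth * elapsed_years)
    return min(market_price, cap)
```

If the configured growth rate is lower than the asset's real appreciation, the cap falls behind the market and the oracle starts clipping honest prices, which is exactly the wstETH failure mode.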
For wstETH – a liquid staking token that accumulates staking rewards over time – the cap needs to account for the natural price appreciation built into the token’s design. Set the cap too tight, and you report a price below market. Report a price below market, and healthy positions suddenly look undercollateralised.
That’s what happened here.
The Parameter Update
Chaos Labs’ Edge Risk engine pushed a parameter update on March 10th. AgentHub executed it one block later. The update misconfigured CAPO’s growth rate cap for wstETH. The oracle started reporting a price 2.85% below the actual market rate.
From the perspective of Aave’s liquidation engine, those 34 accounts were now in breach of their collateralisation requirements. Liquidation bots are automated. They don’t pause to ask whether the oracle just misfired. They execute.
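The arithmetic of how a 2.85% oracle discount flips a healthy position is worth seeing directly. The sketch below uses a simplified Aave-style health factor (positions with a health factor below 1.0 are liquidatable); the position sizes, prices, and the 0.80 liquidation threshold are hypothetical numbers chosen for illustration, not figures from the incident.

```python
def health_factor(collateral_amount, collateral_price, liquidation_threshold, debt_value):
    """Simplified Aave-style health factor: below 1.0 means liquidatable."""
    return (collateral_amount * collateral_price * liquidation_threshold) / debt_value

market_price = 4_000.0                        # hypothetical wstETH market price (USD)
oracle_price = market_price * (1 - 0.0285)    # CAPO reporting 2.85% below market

# At the true market price, the position is healthy (HF > 1.0)...
hf_market = health_factor(10, market_price, 0.80, 31_500)
# ...but at the capped oracle price, the same position is liquidatable (HF < 1.0).
hf_oracle = health_factor(10, oracle_price, 0.80, 31_500)
```

Nothing about the position changed; only the reported price did. Any position whose health factor sat within roughly 2.85% of the liquidation boundary crossed it the moment the misconfigured cap took effect.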
Aave’s Risk Oracle had streamed over 1,200 payloads across 3,000+ parameters before this incident without a problem. That track record is exactly the kind of thing that erodes scrutiny. When a system works reliably for long enough, the humans watching it get comfortable. Review cadences stretch. Circuit breakers get discussed but rarely enforced at parameter-update granularity.
The Structural Problem
This wasn’t an exploit. No keys were compromised. No bridge was drained. No attacker found a re-entrancy flaw or a flash loan vector. The risk management system itself was the mechanism of harm.
That’s the harder problem. Most DeFi security thinking is oriented around adversarial external actors. The threat model for internal automated parameter updates is different: slower to detect, harder to attribute, and – as Aave just demonstrated – capable of doing real damage in a single block.
The speed is the issue. AgentHub executed one block after the update was pushed. There was no review window. There was no delay enforced between parameter change and activation. A one-minute time-lock on high-impact parameter changes would have been enough to catch this. Probably.
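A review window of the kind described above is a small amount of code. The sketch below is a hypothetical timelock for parameter updates, not anything Aave or AgentHub runs; the class and method names are invented for illustration.

```python
import time

class ParameterTimelock:
    """Queue parameter updates and refuse to activate them until a
    minimum delay has elapsed, creating a window for human review.
    Hypothetical sketch, not production code."""

    def __init__(self, delay_seconds=60):
        self.delay = delay_seconds
        self.queued = {}  # parameter name -> (new value, earliest activation time)

    def queue(self, name, value, now=None):
        now = time.time() if now is None else now
        self.queued[name] = (value, now + self.delay)

    def cancel(self, name):
        """A reviewer who spots a misconfiguration can kill the update here."""
        self.queued.pop(name, None)

    def execute(self, name, now=None):
        now = time.time() if now is None else now
        value, eligible_at = self.queued[name]
        if now < eligible_at:
            raise RuntimeError(f"{name} is still in its review window")
        del self.queued[name]
        return value
```

Under this scheme, the misconfigured wstETH cap would have sat in the queue for a minute instead of executing one block after it was pushed, and a `cancel` call would have been enough to stop it.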
Aave has announced full reimbursement for the affected accounts, which matters for the people caught in it. But reimbursement is a post-hoc control. The gap here is pre-execution.
What This Changes
Automated risk engines are going to make more parameter changes, not fewer. The pressure is toward faster response times, tighter feedback loops, AI-driven parameter optimisation. The incentives all point in the same direction.
The question isn’t whether to use automated risk management. It’s whether the humans operating these systems are maintaining genuine oversight or just watching dashboards while the automation does what it wants.
1,200 successful payloads is not a safety record. It’s a prior. And priors get updated.