Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New This Week
Q1 2026 exploit data from PeckShield puts January losses at $86 million and February at $26.5 million – lower than the catastrophic individual incidents of 2025, but the attack surface is not shrinking. A Medium post from OKcontract Chainwall notes that early 2026’s biggest security failures were not smart contract hacks but operational and social engineering failures – the same conclusion security teams keep reaching. The Bybit hack in February 2025 ($1.5 billion, North Korea) remains the defining incident shaping how exchanges and protocols think about custody and signing infrastructure in 2026.
Changelog
| Date | Summary |
|---|---|
| 23 Mar 2026 | Initial publication – 2025 losses, Bybit aftermath, bridge risk, audit landscape. |
2025 Exploit Statistics
2025 was a significant year for crypto theft. Chainalysis’s 2026 Crime Report finds that DPRK-linked hackers alone stole $2 billion in 2025, approximately $1.5 billion of it via the Bybit exploit. A separate figure from one Chainalysis-citing source puts total protocol hack losses at $3.4 billion for 2025. Crypto scams and fraud reached an estimated record $17 billion in 2025, according to Chainalysis’s scams report.
The Bybit figure is the anchor. On February 21, 2025, attackers compromised Bybit’s signing infrastructure and stole approximately $1.5 billion worth of Ethereum. The FBI attributed the attack to North Korea’s Lazarus Group (TraderTraitor). This was the largest digital asset theft in history and accounted for the majority of exchange losses in that period.
The structural insight from Chainalysis: the top three hacks in 2025 accounted for 69% of all losses. Crypto security risk is extremely fat-tailed. The median hack is relatively small; the tail events are catastrophic.
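That concentration is easy to see in a few lines of code. The loss figures below are hypothetical placeholders chosen to illustrate the shape of the distribution, not the actual 2025 incident data:

```python
# Illustrative sketch of fat-tailed loss concentration.
# These figures (in $M) are made up for illustration, not real 2025 data.
losses = [1500, 300, 250, 90, 60, 45, 30, 25, 20, 15, 10, 8, 5, 5, 4, 3]

def top_n_share(losses, n=3):
    """Fraction of total losses contributed by the n largest incidents."""
    ranked = sorted(losses, reverse=True)
    return sum(ranked[:n]) / sum(ranked)

print(f"Top 3 share: {top_n_share(losses):.0%}")
print(f"Median incident: ${sorted(losses)[len(losses) // 2]}M")
```

In a distribution like this, averages are dominated by a handful of tail events, which is why median incident size and total annual losses tell very different stories.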
Why Bridges Are Still the Primary Attack Surface
Cross-chain bridges are the highest-value target in DeFi, and have been since the Ronin hack in 2022. The fundamental problem: bridges hold large pools of locked assets on one chain to issue synthetic representations on another. That locked pool is a honeypot.
Bridge security models vary:
- Multisig bridges – N-of-M keys control the locked assets. If enough keys are compromised, funds walk. Bybit was not a bridge, but the attack vector was similar: compromised signing infrastructure.
- Optimistic bridges – fraud proofs protect assets but introduce withdrawal delays.
- ZK bridges – cryptographic validity proofs remove the trust-in-validators model. Harder to hack in principle, but the proving systems themselves can have implementation bugs.
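The multisig model's failure mode is easy to state precisely. A minimal sketch of an N-of-M release check, using bare signer IDs in place of real signature verification (the signer names and threshold are hypothetical):

```python
# Minimal sketch of an N-of-M multisig release check. Real bridges verify
# cryptographic signatures over the release message; this uses bare
# identifiers only to show the threshold logic.
APPROVED_SIGNERS = {"key_a", "key_b", "key_c", "key_d", "key_e"}  # M = 5
THRESHOLD = 3  # N = 3

def can_release(signatures: set[str]) -> bool:
    """Funds move only if at least N distinct recognised signers approve."""
    valid = signatures & APPROVED_SIGNERS  # ignore unrecognised signers
    return len(valid) >= THRESHOLD

# Compromising any THRESHOLD keys is sufficient -- the honeypot problem:
can_release({"key_a", "key_b", "key_c"})  # releases funds: 3 of 5 approve
```

The security of the entire locked pool reduces to the operational security of those N keys, which is why signing infrastructure, not contract logic, is where attacks like Bybit's land.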
The Wormhole ($320 million, 2022) and Ronin ($625 million, 2022) hacks remain the canonical examples of what bridge failure looks like. Neither has been repeated at that scale in 2025–2026, but smaller bridge incidents occur regularly.
The Audit Industry
Smart contract audits are a requirement, not a differentiator. Every serious protocol gets audited. The top firms:
CertiK – highest volume, widest coverage. “Audited by CertiK” is a standard badge. That volume has come with variable quality: post-audit exploits on CertiK-certified contracts are a recurring industry criticism.
Trail of Bits – research-grade security work. Slower and more expensive, but the methodology is deeper. The firm’s research-driven approach makes it the choice for critical infrastructure.
OpenZeppelin – produces battle-tested contract libraries that most protocols build on. Also audits. The library approach reduces the number of times the same code needs to be written and audited from scratch.
Formal verification – mathematically proving contract correctness rather than manually reviewing code. More expensive and requires specialist expertise. Used for the highest-risk contract components.
The uncomfortable truth: even audited contracts get exploited. Audits reduce risk; they do not eliminate it. The finding from the 2025–2026 data is that the most damaging incidents are not in smart contract logic – they are in operational security, signing key management, and supply chain attacks on development tooling.
MEV – The Grey Area
Maximal extractable value (MEV) is the profit available to block producers (validators, sequencers) by reordering, inserting, or censoring transactions within a block. It spans a wide ethical spectrum:
Arbitrage – capturing price differences across pools. Broadly considered legitimate; improves price efficiency.
Sandwich attacks – detecting a large pending trade, front-running it, and back-running it to extract profit from the victim’s slippage. Widely considered exploitative.
Liquidation MEV – competing to liquidate undercollateralised positions first. Legitimate in that liquidations are required for protocol health, but the competitive nature drives gas wars.
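The sandwich mechanic falls out of constant-product AMM pricing. A toy x·y=k pool (fees ignored, reserve sizes made up) shows how the attacker's front-run trade worsens the victim's execution:

```python
# Toy constant-product (x * y = k) pool showing why a sandwich attack
# worsens the victim's execution price. Swap fees are ignored.
def swap(x_reserve, y_reserve, dx):
    """Sell dx of token X into the pool; return (dy_out, new_x, new_y)."""
    k = x_reserve * y_reserve
    new_x = x_reserve + dx
    new_y = k / new_x
    return y_reserve - new_y, new_x, new_y

# Victim alone: sell 100 X into a 10_000 X / 10_000 Y pool.
dy_alone, _, _ = swap(10_000, 10_000, 100)

# Attacker front-runs with 500 X; the victim then trades at a worse price.
_, x1, y1 = swap(10_000, 10_000, 500)
dy_sandwiched, _, _ = swap(x1, y1, 100)

print(dy_alone, dy_sandwiched)  # the sandwiched victim receives less Y
```

The attacker's back-run (selling the Y acquired in the front-run back into the pool after the victim's trade) is what converts the victim's extra slippage into attacker profit.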
MEV extraction from Ethereum reached billions of dollars cumulatively. Flashbots built the MEV-Boost infrastructure that made MEV more transparent (and partially democratised access). The debate continues on whether MEV is a feature (market efficiency) or a bug (user extraction).
On Solana, the equivalent is Jito’s MEV infrastructure. The design is different but the economic dynamic is the same.
Code Is Law – Except When It Isn’t
The crypto industry has a recurring identity crisis when protocols get hacked. The “code is law” position says: the smart contract executed as written, and reversing that execution would undermine the trustless contract model. The pragmatic position says: $600 million walked out the door, reverse it.
The Ethereum DAO hack in 2016 set the precedent: the community forked the chain to reverse the theft, splitting it into Ethereum (ETH) and Ethereum Classic (ETC). Code-is-law purists stayed on ETC; almost everyone else moved to ETH.
Since then, protocols have generally handled large hacks with a combination of post-mortem disclosure, bug bounty payments to white-hat hackers who return funds, and in some cases governance-approved patches that effectively rewrite history. The pattern: theoretical immutability, practical pragmatism when the losses are large enough.
This creates a trust problem in the other direction: if governance can patch out bad outcomes, what are the constraints on governance?
Security Tooling Landscape
What is improving: static analysis tooling (Slither, Mythril), invariant testing frameworks, and fuzz testing are now standard in serious development workflows. Audit competition platforms (Code4rena, Sherlock) allow multiple independent auditors to review the same contract, improving coverage.
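The core idea behind invariant testing can be sketched in a few lines: throw random operations at a system and assert that a property holds after every step. This hand-rolled loop over a toy token ledger is only the shape of the idea; tools like Echidna and Foundry's invariant tests do this against real contracts:

```python
# Hand-rolled fuzz loop illustrating invariant testing: random transfers
# against a toy token ledger, asserting total supply is conserved after
# every operation. Real tools (Echidna, Foundry) apply this to contracts.
import random

def transfer(balances, sender, receiver, amount):
    """Move funds only if the sender can cover the amount."""
    if balances.get(sender, 0) >= amount:
        balances[sender] -= amount
        balances[receiver] = balances.get(receiver, 0) + amount

balances = {"alice": 1_000, "bob": 1_000, "carol": 1_000}
total_supply = sum(balances.values())

rng = random.Random(0)  # seeded for reproducibility
accounts = list(balances)
for _ in range(10_000):
    s, r = rng.choice(accounts), rng.choice(accounts)
    transfer(balances, s, r, rng.randint(0, 2_000))
    assert sum(balances.values()) == total_supply  # invariant holds
```

A violated invariant here would surface as an assertion failure with a concrete reproducing sequence, which is exactly the kind of counterexample fuzzers hand back to developers.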
What is still broken: most protocols ship with insufficient test coverage. Operational security for signing key management remains inconsistent. Supply chain attacks on npm packages used in frontend code remain a live threat – users interact with a frontend, not the contract directly, and a compromised frontend can redirect approvals.
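One defence against a compromised frontend is to validate transactions before signing them. A sketch of a wallet-side guard that decodes ERC-20 `approve()` calldata and refuses spenders outside an allowlist; the selector `0x095ea7b3` is the real function ID for `approve(address,uint256)`, but the addresses below are made up:

```python
# Sketch of a wallet-side guard: decode ERC-20 approve() calldata and
# refuse spenders outside an allowlist. 0x095ea7b3 is the real selector
# for approve(address,uint256); the trusted address below is hypothetical.
APPROVE_SELECTOR = "095ea7b3"
TRUSTED_SPENDERS = {"0x" + "11" * 20}  # hypothetical router address

def check_approve(calldata_hex: str) -> bool:
    data = calldata_hex.removeprefix("0x")
    if not data.startswith(APPROVE_SELECTOR):
        return True  # not an approve call; out of scope for this check
    spender_word = data[8:8 + 64]        # first 32-byte ABI word after selector
    spender = "0x" + spender_word[-40:]  # address = low 20 bytes of the word
    return spender in TRUSTED_SPENDERS

good = "0x" + APPROVE_SELECTOR + ("11" * 20).rjust(64, "0") + "f" * 64
bad = "0x" + APPROVE_SELECTOR + ("22" * 20).rjust(64, "0") + "f" * 64
print(check_approve(good), check_approve(bad))  # True False
```

A compromised frontend typically swaps the spender for an attacker address while leaving everything else looking normal, which is exactly the field this check inspects.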
Connections to Traditional Cybersecurity
The attack vectors are familiar to anyone in traditional security: phishing, supply chain compromise, key management failures, social engineering. The consequences are different. There are no chargebacks. There is no fraud department to call. In most jurisdictions, stolen crypto is effectively unrecoverable.
The Bybit hack used what amounts to an advanced persistent threat (APT) playbook: patient, targeted, exploiting trusted tooling. The difference from a bank robbery is that the funds moved immediately and irreversibly across multiple protocols designed to obscure the trail. Chainalysis and TRM Labs track these flows, and some funds have been frozen at centralised exchanges. But recovery is the exception, not the rule.
For security engineers: the threat model in crypto is the threat model of financial infrastructure, without the safety nets.