Commissioned, Curated and Published by Russ. Researched and written with AI.
## What’s New This Week
Initial publication. No updates since launch.
### Changelog
| Date | Summary |
|---|---|
| 23 Mar 2026 | Initial publication. |
Steve Krouse published a piece this week at stevekrouse.com/precision titled “Reports of Code’s Death Are Greatly Exaggerated.” It’s worth reading in full rather than relying on the HN summary. The argument is more careful than the headline suggests, and the HN thread (47476315) mostly argued past it.
## What Krouse Is Actually Saying
The piece starts from an observation about vibe coding: prompting AI until something works gives you the illusion of precision without the substance of it. English specifications feel precise until they meet reality. You think you’ve specified “live collaboration” until you discover what live collaboration actually requires – the edge cases, the failure modes, the operational complexity that only surfaces when you’ve tried to build it.
Krouse’s core argument is about abstraction. Precision isn’t about writing low-level code manually – it’s about creating semantic levels at which you can reason without losing correctness. Good abstractions let you hold complexity. Bad ones let it leak. Vibe coding optimises for shipping fast by staying at the “English vibes” level, and that works until the abstraction beneath it collapses under load, scale, or an edge case nobody thought to specify. His claim isn’t that AI is useless – it’s that the work of finding good abstractions hasn’t gone away, and probably can’t.
## Where He’s Right
The failure mode he’s describing is specific and real. Vibe-coded systems tend to work on the happy path and fall apart at the edges, not because AI writes bad code (though it sometimes does) but because the specification was never precise enough to cover the edge cases in the first place. You can’t test for correctness you haven’t defined. You can’t debug behaviour you can’t reason about.
The HN thread had a useful observation from one commenter describing how they now use AI: not to write code without thinking, but to capture invariants and expected failure modes in conversation before writing anything. That’s the opposite of vibe coding. That’s using AI to sharpen your thinking rather than bypass it. The engineers doing that are getting more out of the tools than the ones who are just prompting until tests go green.
For certain classes of systems – financial calculations, security-sensitive paths, anything where an error has consequences that don’t announce themselves immediately – vibe coding isn’t a productivity shortcut. It’s a liability deferral. The bugs are still there; you just don’t know where yet.
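A generic illustration of an error that doesn’t announce itself (my example, not Krouse’s): naive floating-point arithmetic on money drifts by fractions of a cent with no exception and no crash, whereas specifying money as `Decimal` or integer cents makes the behaviour exact.

```python
from decimal import Decimal

# Floats drift silently: no exception, no crash, just a wrong total.
float_total = 0.10 + 0.10 + 0.10
print(float_total == 0.30)            # False: float_total is 0.30000000000000004

# Specifying money as Decimal (or integer cents) makes the arithmetic exact.
dec_total = Decimal("0.10") + Decimal("0.10") + Decimal("0.10")
print(dec_total == Decimal("0.30"))   # True
```

A vibe-coded checkout would pass every happy-path test with the float version; the discrepancy only surfaces at reconciliation time, which is exactly the deferred-liability pattern described above.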
## The Reframe
But here’s where the “code lives or dies” framing breaks down.
Most of what engineers actually write doesn’t require that level of precision. The glue code, the integrations, the plumbing between services that have well-documented APIs – none of that is where correctness gets defined. It’s assembly work. AI is genuinely good at assembly work, and claiming otherwise isn’t rigour, it’s denial.
The useful question isn’t whether precision matters. It’s where precision is load-bearing versus habitual. In most systems, the correctness constraints are concentrated. The invariants that must hold, the failure modes that must be handled, the boundaries that must not be crossed – these live in a small fraction of the codebase. That fraction still requires careful human thought. The rest is tolerant of ambiguity, always was, and AI handles it fine.
Krouse’s Dan Shipper example illustrates this well. The failure wasn’t that Shipper used AI – it was that “live collaboration” was treated as a vague specification when it needed a precise one. The AI didn’t cause the problem; the underspecified intent did. A human writing that from scratch would have hit the same wall. The difference is they might have hit it faster and recognised it sooner.
## What This Means for Engineers in 2026
The practical skill this environment rewards is knowing which parts of a system require precision and which don’t. That’s not the same as being able to write a lot of code. It’s closer to systems thinking – understanding how failure propagates, where correctness must be guaranteed, what can be rebuilt versus what must be right the first time.
Architects have always needed this. The difference now is that every engineer needs it, because the cost of getting the scaffolding wrong is lower and the cost of getting the invariants wrong is the same as it always was. AI hasn’t changed the blast radius of a miscalculated financial figure or a broken auth flow. It’s just made the surrounding code cheaper to produce.
The engineers who adapt well won’t be the fastest prompters or the ones most resistant to using AI. They’ll be the ones who can look at a system and correctly identify the 20% that needs careful human specification – and then use every tool available, AI included, for the other 80%.
## The Deeper Question
The HN thread spent a lot of energy on whether AI can be innovative, whether 1x programmers are safe, whether the field is oversupplied. These are real questions, but they’re mostly economic questions dressed up as engineering ones.
Krouse’s piece is about something narrower and more durable: whether the activity of programming – the precise specification of behaviour, the construction of abstractions that hold under pressure – has value independent of the software it produces. His answer is yes, and I think he’s right.
The analogy to writing is useful here. Nobody argues that ChatGPT ended journalism. Good writing – precise, argued, honest about uncertainty – still requires someone who knows what they’re trying to say. The same is true for good code. AI is a powerful tool for producing text. It doesn’t remove the need to have something worth saying.
The engineers who understand this will be fine. The ones who’ve confused productivity with precision – who’ve been shipping things without understanding them – were always going to have a hard time. AI just speeds up the reckoning.