AI Coding is Gambling (Sort Of)
Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New This Week
The VS Notes essay “AI Coding is Gambling” hit Hacker News this week with 189 points and nearly 200 comments – unusual traction for a personal blog post with no data, no benchmarks, and no product announcement. That traction is itself a data point worth examining.
Changelog
| Date | Summary |
|---|---|
| 18 Mar 2026 | Initial post. |
An essay called “AI Coding is Gambling” reached the top of Hacker News this week with nearly 200 comments. The author isn’t presenting research. He’s not citing a study or reporting a production incident. He’s describing a feeling. And it hit because a lot of engineers have had the same feeling and never found a good name for it.
The author – a designer who codes alone and mostly works by forking and remixing – describes the specific intoxication of a state where "any change to your entire codebase is trivial to make." Then he names what's underneath it: AI doesn't actually handle the problem, it pretends to handle it, giving us "a vaguely plausible but often surprisingly wrong" result. And pulling for that result, again and again, is just "pulling a slot machine with a custom message."
That’s a sharp observation. It’s also incomplete. The gambling metaphor is psychologically accurate, partially empirically accurate, and ultimately insufficient to explain the full picture of where AI coding helps and where it breaks down.
Why the Slot Machine Metaphor Works
Behavioural psychology has a clean answer for why AI coding feels addictive: variable ratio reinforcement is the most powerful reinforcement schedule known. Skinner established this in the 1950s. Slot machines use it. Pull-to-refresh uses it. Social media likes use it. The defining feature is unpredictability – the reward comes, but you can’t predict when.
When AI sometimes produces exactly what you wanted and sometimes produces plausible garbage that takes longer to debug than writing it yourself, the unpredictability creates the same loop. You know the jackpot is real because you’ve hit it before. So you pull again.
This isn’t metaphor. It’s mechanism. The neurochemistry of variable reward – dopamine spikes on anticipation, not just delivery – is well-documented, and the AI coding loop maps onto it cleanly. The feeling the essay is describing is a real cognitive phenomenon, not just frustration.
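The schedule is easy to make concrete. Here is a minimal simulation of a variable-ratio schedule – purely illustrative, with an arbitrary hit probability, not a measured success rate for any model:

```python
import random

def variable_ratio_pulls(p_hit=0.3, pulls=20, seed=42):
    """Simulate a variable-ratio schedule: each pull pays off with fixed
    probability p_hit, so the long-run rate is constant but the timing
    of individual rewards is unpredictable."""
    rng = random.Random(seed)
    outcomes = [rng.random() < p_hit for _ in range(pulls)]
    # The gaps between wins are what the brain keys on: irregular and
    # unforecastable, even though the underlying probability never changes.
    gaps, last = [], -1
    for i, hit in enumerate(outcomes):
        if hit:
            gaps.append(i - last)
            last = i
    return outcomes, gaps

outcomes, gaps = variable_ratio_pulls()
print(sum(outcomes), "wins in", len(outcomes), "pulls; gaps between wins:", gaps)
```

Run it a few times with different seeds: the win rate hovers around `p_hit`, but the gaps never settle into a pattern. That irregularity, not the average payout, is what the reinforcement literature identifies as the hook.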
What the Data Says
The METR randomised controlled trial from 2025 adds a harder edge to this. It measured experienced developers on real software engineering tasks, with and without AI assistance. The result: AI tools slowed developers by 19%, while the developers themselves predicted they'd be 24% faster. The self-assessment wasn't just inaccurate – it pointed in the opposite direction.
This is entirely consistent with the gambling framing. Gambling also feels like you’re winning more than you are. The sensation of productivity – the fast generation, the instant response, the feeling of forward momentum – is real. The actual output per hour is another matter.
GitGuardian’s 2026 data adds a different dimension. Claude Code-assisted commits had a 3.2% secret leak rate versus a 1.5% baseline. It’s not just that AI-assisted code takes longer to debug. It’s introducing error classes that weren’t appearing before. The slot machine occasionally pays out counterfeit chips.
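Real scanners like GitGuardian's use hundreds of provider-specific detectors plus entropy analysis, but the error class itself is crude enough to filter for in CI. A toy pre-commit check – the patterns below are illustrative shapes, not anyone's production ruleset:

```python
import re

# Toy patterns for common credential shapes. Real tools (GitGuardian,
# gitleaks) go far beyond simple regexes; this only sketches the idea.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key id shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),# PEM private key header
    re.compile(r"(?i)(api[_-]?key|secret|token)['\"]?\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def find_secrets(text: str):
    """Return (line_number, line) pairs that match a secret-shaped pattern."""
    hits = []
    for n, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((n, line.strip()))
    return hits

sample = 'config = {"api_key": "sk-aaaaaaaaaaaaaaaaaaaaaaaa"}'
print(find_secrets(sample))
```

A check like this wired into a pre-commit hook doesn't fix the underlying behaviour, but it turns "counterfeit chips" from a silent failure into a blocked commit.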
For more on what the productivity data actually shows across studies, see AI Productivity Reality.
Where the Metaphor Breaks Down
The gambling metaphor implies uniform randomness. A slot machine has no memory and no task structure – the probability of winning is the same on every pull, no matter what came before. AI coding is not like that.
The variance in AI output is highly task-dependent. Boilerplate generation, refactoring well-defined components, writing documentation, scaffolding tests, transforming data between formats – these are deterministic enough that the variable reward problem is mild. The AI reliably produces useful output for these tasks because the task structure is clear and the success condition is unambiguous.
The randomness increases as tasks get more complex, longer-running, or more context-dependent. Which is why the 4-hour ceiling pattern matters: productivity is real for the first few hours on well-scoped work, then degrades as context depth increases and the model starts making coherence errors across a growing codebase. It’s not a slot machine. It’s a slot machine that gets less reliable the longer you play and the more coins you have in the tray.
The essay’s author is also a specific kind of developer: solo, designer-first, rarely building from scratch, mostly forking and remixing. That profile is likely to have a worse AI coding experience than a senior engineer on a large team who uses AI for tightly-scoped subtasks. The variable reward problem is sharpest when you’re using AI to do work that’s actually ambiguous. It’s least sharp when you’ve already defined the problem clearly and you’re asking AI to generate the mechanical parts.
This isn’t a defence of AI coding tools. It’s a correction to the framing. The problem isn’t that AI coding is gambling. It’s that most people are doing it wrong – reaching for AI assistance on the hard ambiguous problems where it reliably underperforms, rather than the well-scoped mechanical work where it reliably helps.
The Soul Problem
The best observation in the essay isn’t about gambling. It’s about craft.
“My job went from connecting two things being the hard and rewarding part, to just mopping up how poorly they’ve been connected.”
That’s the human experience of a supervisory role replacing a craft role. The intellectually satisfying work – figuring out how the pieces fit, finding the clever fix, understanding a system well enough to change it – has been replaced by error correction on someone else’s attempt.
This has happened before. Garbage-collected languages abstracted away memory management. Stack Overflow made lookup cheap. npm made dependency decisions something you punt on. Each transition shifted the cognitive work higher up the stack and reduced the rewarding low-level problem-solving. AI has accelerated the process by roughly an order of magnitude.
Whether this is loss or evolution is a values question, not an empirical one. There’s no study that can tell you whether the craft satisfaction of writing careful code was worth preserving, or whether it was just learned attachment to unnecessary friction. The essay is honest enough to acknowledge this – the author ends by asking whether he’s just rationalising laziness.
For more on what this transition is doing to developer identity and wellbeing, see AI and Developer Mental Health.
The Uncomfortable Part
The essay is getting traction not because it presents new evidence. It doesn’t. It’s getting traction because “it feels like gambling” is a more accurate description of the AI coding user experience than “it’s a productivity multiplier.” And most of the discourse has been pushing the productivity multiplier framing without acknowledging the psychological reality underneath it.
Both things can be true simultaneously. That’s the uncomfortable part. A tool can be genuinely useful for some task classes while also creating an addictive usage loop that causes people to overuse it on tasks where it underperforms. The slot machine comparison doesn’t mean the machine never pays out. It means the mechanism of engagement is engineered around the anticipation of payout, not the payout itself.
The Actual Question
The useful question isn’t “is AI coding gambling?” It’s “what kind of task are you doing, and how long have you been on it?”
Short, well-scoped, mechanical task with a clear success condition: you're probably at the poker table. Skill and clear thinking about what you're asking for translate into better outcomes. The variance is manageable.
Long, ambiguous, context-heavy task where you’ve been at it for three hours and the AI keeps making changes that feel plausible but are subtly breaking things elsewhere: you’re at the slot machine. The variable reward loop is running. The fact that it worked earlier is making you think it will work again.
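The split between the two scenarios can be made concrete with a back-of-envelope expected-cost model. Every number below is a made-up illustration, not a measurement: assume each AI attempt costs a fixed generation-plus-review time and succeeds with probability p, against a known cost for writing it yourself.

```python
def expected_ai_cost(p_success, attempt_cost, max_attempts, fallback_cost):
    """Expected total time if you keep pulling the lever up to max_attempts,
    then fall back to doing it manually. Geometric-distribution bookkeeping:
    attempt k is only paid for if the first k-1 attempts all failed."""
    total, p_fail = 0.0, 1.0
    for _ in range(max_attempts):
        total += p_fail * attempt_cost   # pay for this attempt
        p_fail *= (1 - p_success)        # probability we're still stuck
    return total + p_fail * fallback_cost  # give up and write it yourself

# Well-scoped boilerplate: high hit rate, cheap review (minutes, illustrative).
print(round(expected_ai_cost(0.8, 5, 3, 30), 1))   # → 6.4
# Ambiguous, context-heavy task: low hit rate, expensive review of
# plausible-looking output, and a 60-minute manual alternative.
print(round(expected_ai_cost(0.2, 20, 5, 60), 1))  # → 86.9
```

Under these assumed numbers the boilerplate case beats writing it yourself by a factor of five, while the ambiguous case costs more than just doing the work manually – the lever-pulling only looks productive because each individual pull is cheap.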
The author’s honest final question – “am I just pulling the lever until I reach jackpot?” – is the right question. Knowing the answer requires understanding what kind of task you’re on, not whether AI coding is good or bad in the abstract. The 4-hour ceiling and the AI secrets sprawl data both suggest the lever-pulling problem is real, measurable, and concentrated in specific usage patterns.
Name the mechanism, then change the behaviour. That’s the resolution the essay is reaching for.