Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New This Week
Quieter day – nothing today that materially shifts the thesis.
Changelog
| Date | Summary |
|---|---|
| 25 Mar 2026 | No material updates – quieter news day for this topic. |
| 24 Mar 2026 | TechCrunch documents AI burnout hitting earliest adopters hardest; Pragmatic Engineer survey confirms 95% weekly AI usage with Codex now at 2M weekly active users. |
| 23 Mar 2026 | DORA 2025 report adds organisational evidence for the AI-as-multiplier thesis; Stack Overflow trust data (29%) and Spotify/Google role-shift reporting added to precision debate and team structure sections. |
The AI-Assisted Coding Reality in 2026
A year ago the question was “should engineers use AI tools?” That question is closed. The Pragmatic Engineer’s 2026 survey (900+ respondents, Jan-Feb 2026) puts weekly AI tool usage at 95%, with 75% using AI for half or more of their work and 56% doing 70% or more of engineering with AI assistance. The question now is which tools, for what work, and how to integrate them without degrading the quality of what gets shipped.
The same survey shows the current tool landscape: Claude Code at #1 most loved – up from nothing eight months ago when it launched in May 2025. OpenAI’s Codex (GPT-5.3), which didn’t exist in the previous survey, already has 60% of Cursor’s usage share and 2 million weekly active users as of March 2026. Staff+ engineers are leading agent adoption at 63.5%.
What accounts for the rise of Claude Code? The terminal-native, whole-context approach versus the autocomplete-first model of Copilot addresses a different kind of work. Copilot is good at filling in function bodies and suggesting completions. Claude Code handles the "rewrite this module; here are the tests it should pass" workflow that senior engineers actually want. Cursor sits somewhere in between: IDE-integrated, context-aware, popular with engineers who want AI assistance without leaving their editor.
The honest assessment: all three have genuine failure modes. They hallucinate APIs that don’t exist, produce code that looks right but has subtle logic errors, and degrade in quality on large codebases without careful context management. The engineers getting the most value are the ones who understand these failure modes, not the ones who trust output uncritically.
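To make the "looks right but subtly wrong" failure mode concrete, here is a hypothetical example (both functions invented for illustration) of the kind of bug that survives a casual review: a list-chunking helper that reads plausibly but silently drops the trailing partial chunk.

```python
def chunk_plausible(items, size):
    """Plausible-looking output: iterates over whole chunks only,
    so a trailing partial chunk is silently dropped."""
    return [items[i * size:(i + 1) * size] for i in range(len(items) // size)]

def chunk_correct(items, size):
    """Correct version: step through the list by `size`,
    so the final partial chunk is included."""
    return [items[i:i + size] for i in range(0, len(items), size)]

data = list(range(10))
print(chunk_plausible(data, 4))  # two chunks; elements 8 and 9 are lost
print(chunk_correct(data, 4))    # three chunks; nothing lost
```

The broken version passes any test whose input length happens to be a multiple of the chunk size, which is exactly why this class of error slips past uncritical review.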
The Precision Debate
Steve Krouse’s piece “Reports of Code’s Death are Greatly Exaggerated” touched a nerve. The argument, laid out at russellclare.com/ai-precision-vibe-coding-limits/: AI-generated code is great for getting to 80% quickly and increasingly problematic in the last 20%. That final 20% – correctness, edge cases, performance under real load, security properties – requires precise understanding that current models struggle with. The HN thread that followed was useful precisely because it split the audience: engineers working in domains where correctness is paramount (infrastructure, financial systems, safety-critical code) recognised the problem immediately. Engineers doing CRUD web development thought the concern was overblown.
Both camps are right about their domain. The mistake is assuming one answer covers all software engineering. Vibe coding gets you a functional prototype in an afternoon. It does not get you a reliable payments service.
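A concrete instance of the prototype-versus-payments gap (the price and tax rate below are purely illustrative): a vibe-coded prototype does currency maths in floats, which is fine for a demo and wrong for money. Production code uses exact decimal arithmetic with an explicit rounding policy.

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Prototype-grade: binary floats can't represent most decimal fractions,
# so errors accumulate across many transactions.
print(0.1 + 0.2 == 0.3)  # False

# Production-grade: exact decimal arithmetic, rounding made explicit.
price = Decimal("19.99")
tax = (price * Decimal("0.0825")).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_EVEN  # banker's rounding, stated policy
)
total = price + tax
print(total)  # 21.64
```

The float version is the kind of thing that works in every demo and fails in reconciliation six months later, which is the "last 20%" in miniature.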
The Stack Overflow 2025 survey put a number on this: 46% of developers actively distrust AI-generated code accuracy, with only 3% reporting high trust. Usage is at 84% but trust dropped 11 points in a single year. Developers are using tools they don’t fully trust – a calibration problem that plays out differently depending on domain. The DORA 2025 report frames it from the organisational side: AI amplifies what already exists. Strong engineering culture and processes plus AI produces better outcomes. Weak processes plus AI produces faster accumulation of complexity. The tool is not the variable.
Developer Mental Health and the Identity Crisis
This is the conversation the industry is not having loudly enough. When 95% of developers use AI tools weekly, and those tools can generate thousands of lines of code in minutes, the identity question becomes real: what is a software engineer in 2026?
The russellclare.com/ai-mental-health-crisis/ piece documented this in detail. The short version: engineers who built their professional identity around writing code are experiencing a form of skill displacement anxiety even when their jobs are technically secure. The loss is not the job – it’s the craft. Writing code carefully, understanding a system deeply, solving a hard problem with a clean solution. AI tools compress those moments into seconds and the compression feels like loss, not gain.
Now there’s a harder data point. TechCrunch (February 2026) documents a pattern worth watching: the engineers burning out fastest are not the ones who resisted AI tools – they’re the ones who embraced them earliest. The mechanism is straightforward and ugly: leadership saw AI adoption and tripled expectations. Actual productivity moved maybe 10%. The people under most pressure to justify the investment are the ones most exposed to the overhead of using it. The Pragmatic Engineer survey reinforces this from the opposite direction: engineers using agents are nearly twice as likely to feel excited about AI; non-users are twice as likely to be sceptical. The divergence in experience is widening.
The engineers adapting well share a common trait: they have shifted their identity from “I write code” to “I build systems.” The writing is a means, not the meaning. That shift is available to everyone, but it’s not automatic.
The 4-Hour Ceiling
There’s a productivity phenomenon documented at russellclare.com/ai-4-hour-ceiling/ that deserves more attention. AI-assisted development can dramatically compress the early stages of a task – setup, boilerplate, initial implementation. But sustained AI-assisted work has a cognitive load that most engineers underestimate. After roughly four hours, the overhead of reviewing AI output, managing context, and course-correcting errors accumulates. Productivity curves back down.
This isn’t a problem with the tools. It’s a calibration issue. Engineers who treat AI assistance like a productivity multiplier that applies uniformly across the day run into this ceiling and don’t understand why they’re exhausted. The engineers working sustainably treat it more like pair programming: intense, collaborative, not a replacement for the slow thinking that happens when you’re working alone and stuck.
Team Structure Changes
The change already visible in high-performing teams: smaller headcount shipping more. A 3-5 person team with AI tooling in 2026 is doing work that previously needed 8-12. This is real, and it’s affecting hiring decisions.
The senior engineer role is shifting. Less time on implementation, more on architecture, code review, and the judgment calls that AI can’t make reliably: what should this system do, not just how should it be built. The engineers who are thriving are the ones who were already doing that work and now have more time for it.
The risk for teams: losing the implementation knowledge that comes from writing code. Engineers who skip the implementation and move straight to review lose something real about system understanding. The best teams are finding a balance – use AI assistance heavily, but keep enough hands-on implementation work to maintain the intuition that makes architecture decisions good.
The role shift is accelerating faster than most expected. Business Insider reports senior engineers at Spotify haven’t written code since December. Anthropic uses AI for 70-90% of its code. Google’s AI code percentage is now described as “much, much higher” than the 50% cited in October. Engineers are becoming agent managers – directing multiple parallel AI workers rather than writing implementation. Changelog’s interview with InfluxDB co-founder Paul Dix (Feb 2026) frames it as “the great engineering divergence”: developers splitting into two distinct camps, those managing agents and those resisting them. Dix himself went back to coding by hand temporarily before returning to AI with more oversight. The middle ground, he argues, is shrinking. DORA’s finding that agent-heavy workflows can risk burnout is worth noting alongside this. The productivity signal is real; so is the cognitive overhead.
What to Learn in 2026
The three paths are laid out at russellclare.com/the-three-paths/. The short version for this context:
The skills that compound most reliably right now: systems thinking and architecture, understanding of what the AI tools are actually doing (not just prompt engineering, but model behaviour and failure modes), and domain expertise in any area where correctness matters. The AI tools are general-purpose and shallow on domain knowledge. Engineers who are deep on a domain and know how to use AI tools as leverage are more valuable than engineers who can use the tools fluently but have shallow domain knowledge.
Security, distributed systems, and performance engineering remain areas where the tools assist but do not replace. The judgment about what “safe”, “reliable”, and “fast” actually mean in production is not something current models have.
Tools Worth Knowing
Cursor: AI-first IDE, VS Code-based. Best if you want AI assistance inside your editor without switching contexts. Good autocomplete, decent multi-file context, improving rapidly.
Claude Code: Terminal-native. Operates on your whole codebase rather than an open file. Best for larger refactors, architectural work, and tasks where you want to describe the outcome rather than guide each line. Most loved by the engineers actually using it.
OpenAI Codex (GPT-5.3): Launched February 2026 and already at 2 million weekly active users, with 60% of Cursor’s usage share. Worth evaluating if you haven’t.
Zed: Fast editor with native AI integration. Worth watching – the architecture is Rust-native and genuinely quick, and the AI features are first-class rather than bolted on.
Neovim + AI plugins: Still the choice for engineers who want full control. The plugin ecosystem has matured. If you already live in Neovim, you don’t need to leave it for AI features.
AI-assisted testing: The area where AI tools add the most value with the least risk. Generating test cases from existing code, writing property tests, producing test fixtures – these are good uses. Treating AI-generated tests as authoritative without review is not.
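The property-test idea can be sketched with nothing but the standard library (a real project would reach for a library such as Hypothesis; the run-length codec here is a hypothetical example function, not from the article): generate many random inputs and assert an invariant – in this case, that decode inverts encode.

```python
import random

def rle_encode(s):
    """Run-length encode a string: 'aaab' -> [('a', 3), ('b', 1)]."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs):
    """Invert rle_encode."""
    return "".join(ch * n for ch, n in pairs)

# Property: decode(encode(s)) == s for randomly generated inputs.
# A seeded RNG keeps the check reproducible.
rng = random.Random(0)
for _ in range(1000):
    s = "".join(rng.choice("ab") for _ in range(rng.randrange(0, 20)))
    assert rle_decode(rle_encode(s)) == s
```

Note the property is an invariant a human chose; AI tools are good at generating the inputs and the scaffolding, less good at deciding which invariants actually matter – which is why the review step stays.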