Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New This Week
The Pragmatic Engineer’s AI tooling survey (February 2026, 15,000 developers) has been circulating widely. The headline: Claude Code is rated the “most loved” tool by 46% of respondents, ahead of Cursor at 19% and GitHub Copilot at 9%. Senior engineering leaders are disproportionately enthusiastic about Claude Code. These numbers will keep shifting as the tools evolve, but the direction is clear – terminal-native, context-aware AI assistance is pulling ahead of IDE-integrated autocomplete.
Changelog
| Date | Summary |
|---|---|
| 23 Mar 2026 | Initial publication. |
The AI-Assisted Coding Reality in 2026
A year ago the question was “should engineers use AI tools?” That question is closed. According to survey data from February 2026, 73% of developers use AI coding tools daily. The question now is which tools, for what work, and how to integrate them without degrading the quality of what gets shipped.
The Pragmatic Engineer survey data (15,000 developers, February 2026) shows the current landscape: Claude Code at 46% most loved, Cursor at 19%, GitHub Copilot at 9%. Claude Code launched in May 2025 and reached that position in under a year – a fast reversal from a world where Copilot was the default choice for most teams.
What accounts for the difference? Claude Code's terminal-native, whole-context approach and Copilot's autocomplete-first model address different kinds of work. Copilot is good at filling in function bodies and suggesting completions. Claude Code handles the "rewrite this module, here are the tests it should pass" workflow that senior engineers actually want. Cursor sits somewhere in between: IDE-integrated, context-aware, popular with engineers who want AI assistance without leaving their editor.
The honest assessment: all three have genuine failure modes. They hallucinate APIs that don’t exist, produce code that looks right but has subtle logic errors, and degrade in quality on large codebases without careful context management. The engineers getting the most value are the ones who understand these failure modes, not the ones who trust output uncritically.
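To make the "looks right but has a subtle logic error" failure mode concrete, here is a hypothetical sketch (the `paginate` function and its bug are invented for illustration, not taken from any real tool's output). The buggy version is exactly the kind of code that reads plausibly in review and passes a casual glance:

```python
def paginate(items, page, page_size):
    """Return one page of items. Pages are 1-indexed."""
    # Plausible-looking but wrong: for page=1 this starts at index
    # page_size, silently skipping the entire first page.
    start = page * page_size  # bug: should be (page - 1) * page_size
    return items[start:start + page_size]

def paginate_fixed(items, page, page_size):
    """Corrected version: convert the 1-indexed page to a 0-indexed slice."""
    start = (page - 1) * page_size
    return items[start:start + page_size]
```

A single boundary test (`page=1` returns the first elements) catches this immediately, which is why reviewing AI output with edge-case tests rather than by eye is the habit that separates the engineers getting value from the ones getting burned.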
The Precision Debate
Steve Krouse’s piece “Reports of Code’s Death are Greatly Exaggerated” touched a nerve. The argument at russellclare.com/ai-precision-vibe-coding-limits/: AI-generated code gets you to 80% quickly and becomes increasingly problematic in the last 20% – correctness, edge cases, performance under real load, security properties – which requires precise understanding that current models struggle with. The HN thread that followed was useful precisely because it split the audience: engineers working in domains where correctness is paramount (infrastructure, financial systems, safety-critical code) recognised the problem immediately. Engineers doing CRUD web development thought the concern was overblown.
Both camps are right about their domain. The mistake is assuming one answer covers all software engineering. Vibe coding gets you a functional prototype in an afternoon. It does not get you a reliable payments service.
Developer Mental Health and the Identity Crisis
This is the conversation the industry is not having loudly enough. When 73% of developers use AI tools daily, and those tools can generate thousands of lines of code in minutes, the identity question becomes real: what is a software engineer in 2026?
The russellclare.com/ai-mental-health-crisis/ piece documented this in detail. The short version: engineers who built their professional identity around writing code are experiencing a form of skill displacement anxiety even when their jobs are technically secure. The loss is not the job – it’s the craft. Writing code carefully, understanding a system deeply, solving a hard problem with a clean solution. AI tools compress those moments into seconds and the compression feels like loss, not gain.
The engineers adapting well share a common trait: they have shifted their identity from “I write code” to “I build systems.” The writing is a means, not the meaning. That shift is available to everyone, but it’s not automatic.
The 4-Hour Ceiling
There’s a productivity phenomenon documented at russellclare.com/ai-4-hour-ceiling/ that deserves more attention. AI-assisted development can dramatically compress the early stages of a task – setup, boilerplate, initial implementation. But sustained AI-assisted work has a cognitive load that most engineers underestimate. After roughly four hours, the overhead of reviewing AI output, managing context, and course-correcting errors accumulates. Productivity curves back down.
This isn’t a problem with the tools. It’s a calibration issue. Engineers who treat AI assistance like a productivity multiplier that applies uniformly across the day run into this ceiling and don’t understand why they’re exhausted. The engineers working sustainably treat it more like pair programming: intense, collaborative, not a replacement for the slow thinking that happens when you’re working alone and stuck.
Team Structure Changes
The change already visible in high-performing teams: smaller headcount shipping more. A 3-5 person team with AI tooling in 2026 is doing work that previously needed 8-12. This is real, and it’s affecting hiring decisions.
The senior engineer role is shifting. Less time on implementation, more on architecture, code review, and the judgment calls that AI can’t make reliably: what should this system do, not just how should it be built. The engineers who are thriving are the ones who were already doing that work and now have more time for it.
The risk for teams: losing the implementation knowledge that comes from writing code. Engineers who skip the implementation and move straight to review lose something real about system understanding. The best teams are finding a balance – use AI assistance heavily, but keep enough hands-on implementation work to maintain the intuition that makes architecture decisions good.
What to Learn in 2026
The three paths are laid out at russellclare.com/the-three-paths/. The short version for this context:
The skills that compound most reliably right now: systems thinking and architecture, understanding of what the AI tools are actually doing (not just prompt engineering, but model behaviour and failure modes), and domain expertise in any area where correctness matters. The AI tools are general-purpose and shallow on domain knowledge. Engineers who are deep on a domain and can use AI tools as leverage are more valuable than engineers who use the tools fluently but have shallow domain knowledge.
Security, distributed systems, and performance engineering remain areas where the tools assist but do not replace. The judgment about what “safe”, “reliable”, and “fast” actually mean in production is not something current models have.
Tools Worth Knowing
Cursor: AI-first IDE, VS Code-based. Best if you want AI assistance inside your editor without switching contexts. Good autocomplete, decent multi-file context, improving rapidly.
Claude Code: Terminal-native. Operates on your whole codebase rather than an open file. Best for larger refactors, architectural work, and tasks where you want to describe the outcome rather than guide each line. Most loved by the engineers actually using it.
Zed: Fast editor with native AI integration. Worth watching – the architecture is Rust-native and genuinely quick, and the AI features are first-class rather than bolted on.
Neovim + AI plugins: Still the choice for engineers who want full control. The plugin ecosystem has matured. If you already live in Neovim, you don’t need to leave it for AI features.
AI-assisted testing: The area where AI tools add the most value with the least risk. Generating test cases from existing code, writing property tests, producing test fixtures – these are good uses. Treating AI-generated tests as authoritative without review is not.
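As a sketch of the property-testing style mentioned above, here is a minimal hand-rolled version (using only the standard library's `random`; in practice a framework like Hypothesis does this better, and the `dedupe_preserve_order` function is an invented stand-in for whatever code, AI-generated or not, you want to check). Instead of asserting specific outputs, it asserts properties that must hold for any input:

```python
import random

def dedupe_preserve_order(xs):
    """Function under test: drop duplicates, keeping first occurrences."""
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_properties(trials=200):
    """Generate random inputs and assert invariants rather than exact outputs."""
    rng = random.Random(42)  # fixed seed so failures are reproducible
    for _ in range(trials):
        xs = [rng.randint(0, 20) for _ in range(rng.randint(0, 30))]
        out = dedupe_preserve_order(xs)
        assert len(out) == len(set(out))            # property: no duplicates remain
        assert set(out) == set(xs)                  # property: no elements lost
        assert dedupe_preserve_order(out) == out    # property: idempotent
    return True
```

The same framing applies to reviewing AI-generated tests: a generated test that only re-asserts what the code currently does is documentation, not verification. Property-style checks force the test to encode intent.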