What’s New
| Date | Update |
|---|---|
| 1 Apr 2026 | Oracle files WARN Act notice for 491 Washington state engineering cuts, explicitly citing AI coding tools enabling smaller teams; Goldman Sachs CIO describes engineers shifting from writing code to supervising AI that writes it. |
| 28 Mar 2026 | Claude Code gets computer use (macOS, Pro/Max) – coding agent becomes desktop agent; Duke/Fed CFO survey finds AI job cuts 9x higher in 2026 but still just 0.4% of workforce; SWE job postings down 15% YoY. |
| 27 Mar 2026 | Codex v0.117.0 ships 20+ plugins (Slack, Figma, Notion, Gmail, Drive); Google’s internal Agent Smith restricted after demand surge; WebStorm 2026.1 adds Claude Code and Codex as first-class AI agents. |
| 26 Mar 2026 | Harness 2026: 69% of frequent AI users see more deployment problems; Multitudes study shows 75% of engineering teams can’t measure AI’s productivity impact even as 40% face board pressure to prove it. |
| 24 Mar 2026 | TechCrunch documents AI burnout hitting earliest adopters hardest; Pragmatic Engineer survey confirms 95% weekly AI usage with Codex now at 2M weekly active users. |
| 23 Mar 2026 | DORA 2025 report adds organisational evidence for the AI-as-multiplier thesis; Stack Overflow trust data and Spotify/Google role-shift reporting added. |
The AI-Assisted Coding Reality in 2026
A year ago the question was “should engineers use AI tools?” That question is closed. The Pragmatic Engineer’s 2026 survey (900+ respondents, Jan-Feb 2026) puts weekly AI tool usage at 95%, with 75% using AI for half or more of their work and 56% doing 70% or more of engineering with AI assistance. The question now is which tools, for what work, and how to integrate them without degrading the quality of what gets shipped.
The same survey shows the current tool landscape: Claude Code at #1 most loved – up from nothing eight months ago when it launched in May 2025. OpenAI’s Codex (GPT-5.3), which didn’t exist in the previous survey, already has 60% of Cursor’s usage share and 2 million weekly active users as of March 2026. Staff+ engineers are leading agent adoption at 63.5%.
What accounts for the rise of Claude Code? The terminal-native, whole-context approach addresses a different kind of work than Copilot’s autocomplete-first model. Copilot is good at filling in function bodies and suggesting completions. Claude Code handles the “rewrite this module, here are the tests it should pass” workflow that senior engineers actually want. Cursor sits somewhere in between: IDE-integrated, context-aware, popular with engineers who want AI assistance without leaving their editor.
The honest assessment: all three have genuine failure modes. They hallucinate APIs that don’t exist, produce code that looks right but has subtle logic errors, and degrade in quality on large codebases without careful context management. The engineers getting the most value are the ones who understand these failure modes, not the ones who trust output uncritically.
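One of these failure modes can be caught mechanically: before reviewing AI-generated code in depth, check that the modules it imports actually resolve. A minimal Python sketch of that first-pass guard – the `check_imports` helper is illustrative, not from any real tool, and it deliberately does nothing about wrong signatures or subtle logic errors, which still need human review:

```python
import ast
import importlib


def check_imports(source: str) -> list[str]:
    """Return imported module names in AI-generated source that fail to resolve.

    A cheap first-pass guard against hallucinated dependencies; it does NOT
    catch wrong call signatures or subtle logic errors.
    """
    problems = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue  # skip relative imports and non-import nodes
        for name in names:
            try:
                # Resolving the top-level package is enough to spot inventions.
                importlib.import_module(name.split(".")[0])
            except ImportError:
                problems.append(name)
    return problems


generated = "import json\nimport definitely_not_a_real_module_xyz\n"
print(check_imports(generated))
```

Running the snippet flags the invented module while letting the standard-library import through; the same idea scales up to a pre-commit hook or CI step.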
The Precision Debate
Steve Krouse’s piece “Reports of Code’s Death are Greatly Exaggerated” touched a nerve. The argument, unpacked at russellclare.com/ai-precision-vibe-coding-limits/, is that AI-generated code is great for getting to 80% quickly and increasingly problematic in the last 20%. That final 20% – correctness, edge cases, performance under real load, security properties – requires precise understanding that current models struggle with. The HN thread that followed was useful precisely because it split the audience: engineers working in domains where correctness is paramount (infrastructure, financial systems, safety-critical code) recognised the problem immediately; engineers doing CRUD web development thought the concern was overblown.
Both camps are right about their domain. The mistake is assuming one answer covers all software engineering. Vibe coding gets you a functional prototype in an afternoon. It does not get you a reliable payments service.
The Stack Overflow 2025 survey put a number on this: 46% of developers actively distrust AI-generated code accuracy, with only 3% reporting high trust. Usage is at 84% but trust dropped 11 points in a single year. Developers are using tools they don’t fully trust – a calibration problem that plays out differently depending on domain.

The Harness 2026 State of DevOps Modernization report sharpens this further: 69% of developers who use AI coding tools multiple times a day say their teams experience deployment problems more often when AI-generated code is involved, and 58% of all respondents – frequent and occasional users alike – report serious concerns about AI code risks. The Harness CTO describes the problem as working at “machine speed”: development velocity has jumped, but QA, security testing, and remediation processes built for human-speed development haven’t caught up.

The DORA 2025 report frames it from the organisational side: AI amplifies what already exists. Strong engineering culture and processes plus AI produces better outcomes; weak processes plus AI produces faster accumulation of complexity. The tool is not the variable.
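One way teams try to close the “machine speed” gap is to route AI-heavy changes onto a slower review path. The policy below is a hypothetical sketch, not something from the Harness report – it assumes tooling that can tag AI-generated lines (for example via commit trailers), and the thresholds are invented:

```python
from dataclasses import dataclass


@dataclass
class Change:
    lines_added: int
    ai_generated_lines: int  # assumes tooling tags AI output, e.g. commit trailers


def needs_extended_review(change: Change,
                          ai_share_threshold: float = 0.5,
                          size_threshold: int = 300) -> bool:
    """Route changes to a slower, human-speed QA path.

    Policy sketch: large changes, or changes that are mostly AI-generated,
    skip the fast path and get extended review and testing.
    """
    if change.lines_added == 0:
        return False
    ai_share = change.ai_generated_lines / change.lines_added
    return change.lines_added > size_threshold or ai_share > ai_share_threshold


print(needs_extended_review(Change(lines_added=40, ai_generated_lines=35)))  # mostly AI
print(needs_extended_review(Change(lines_added=20, ai_generated_lines=5)))   # mostly human
```

The specific thresholds matter less than the existence of a gate: the point is that review effort scales with how much of the diff no human has thought about yet.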
Developer Mental Health and the Identity Crisis
This is the conversation the industry is not having loudly enough. When 95% of developers use AI tools weekly, and those tools can generate thousands of lines of code in minutes, the identity question becomes real: what is a software engineer in 2026?
The russellclare.com/ai-mental-health-crisis/ piece documented this in detail. The short version: engineers who built their professional identity around writing code are experiencing a form of skill displacement anxiety even when their jobs are technically secure. The loss is not the job – it’s the craft. Writing code carefully, understanding a system deeply, solving a hard problem with a clean solution. AI tools compress those moments into seconds and the compression feels like loss, not gain.
Now there’s a harder data point. TechCrunch (February 2026) documents a pattern worth watching: the engineers burning out fastest are not the ones who resisted AI tools – they’re the ones who embraced them earliest. The mechanism is straightforward and ugly: leadership saw AI adoption and tripled expectations. Actual productivity moved maybe 10%. The people under most pressure to justify the investment are the ones most exposed to the overhead of using it. The Pragmatic Engineer survey reinforces this from the opposite direction: engineers using agents are nearly twice as likely to feel excited about AI; non-users are twice as likely to be sceptical. The divergence in experience is widening.
The engineers adapting well share a common trait: they have shifted their identity from “I write code” to “I build systems.” The writing is a means, not the meaning. That shift is available to everyone, but it’s not automatic.
The 4-Hour Ceiling
There’s a productivity phenomenon documented at russellclare.com/ai-4-hour-ceiling/ that deserves more attention. AI-assisted development can dramatically compress the early stages of a task – setup, boilerplate, initial implementation. But sustained AI-assisted work has a cognitive load that most engineers underestimate. After roughly four hours, the overhead of reviewing AI output, managing context, and course-correcting errors accumulates. Productivity curves back down.
This isn’t a problem with the tools. It’s a calibration issue. Engineers who treat AI assistance like a productivity multiplier that applies uniformly across the day run into this ceiling and don’t understand why they’re exhausted. The engineers working sustainably treat it more like pair programming: intense, collaborative, not a replacement for the slow thinking that happens when you’re working alone and stuck.
Team Structure Changes
The change already visible in high-performing teams: smaller headcount shipping more. A 3-5 person team with AI tooling in 2026 is doing work that previously needed 8-12. This is real, and it’s affecting hiring decisions.
The senior engineer role is shifting. Less time on implementation, more on architecture, code review, and the judgment calls that AI can’t make reliably: what should this system do, not just how should it be built. The engineers who are thriving are the ones who were already doing that work and now have more time for it.
The risk for teams: losing the implementation knowledge that comes from writing code. Engineers who skip the implementation and move straight to review lose something real about system understanding. The best teams are finding a balance – use AI assistance heavily, but keep enough hands-on implementation work to maintain the intuition that makes architecture decisions good.
The role shift is accelerating faster than most expected. Business Insider reports senior engineers at Spotify haven’t written code since December. Anthropic uses AI for 70-90% of its code. Google’s AI code percentage is now described as “much, much higher” than the 50% cited in October. Engineers are becoming agent managers – directing multiple parallel AI workers rather than writing implementation themselves. Changelog’s interview with InfluxDB co-founder Paul Dix (Feb 2026) frames it as “the great engineering divergence”: engineers splitting into two camps, those managing agents and those resisting them. Dix himself went back to coding by hand temporarily before returning to AI with more oversight. The middle ground, he argues, is shrinking. DORA’s finding that agent-heavy workflows risk burnout is worth noting alongside this: the productivity signal is real, but so is the cognitive overhead.
Measurement is emerging as the defining challenge for engineering leadership. A Multitudes study (700+ engineering professionals, published March 2026) finds 75% of teams struggle to measure AI’s impact on productivity. 60% say lack of clear metrics is their single biggest challenge – and 40% report board-level pressure to deliver quantifiable results they can’t yet produce. Traditional metrics like lines of code are meaningless when AI generates the lines. Cycle time and deployment frequency move in the same direction as bug rates and rollbacks. The teams doing this well are measuring broadly: usage, customer value, output quality, and human impact – none of them perfectly, but consistently enough to spot trends.
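Measuring broadly but consistently can start with a few trend metrics computed from deploy records rather than lines of code. A minimal Python sketch – the record shape and the numbers are invented for illustration, not from the Multitudes study:

```python
from datetime import datetime
from statistics import median

# Hypothetical deploy records: (merged_at, deployed_at, rolled_back)
deploys = [
    (datetime(2026, 3, 2, 9),   datetime(2026, 3, 2, 15), False),
    (datetime(2026, 3, 4, 11),  datetime(2026, 3, 5, 10), True),
    (datetime(2026, 3, 9, 8),   datetime(2026, 3, 9, 12), False),
    (datetime(2026, 3, 11, 14), datetime(2026, 3, 12, 9), False),
]


def weekly_metrics(records, window_days: int = 28):
    """Median cycle time (hours), deploys per week, and rollback rate."""
    cycle_hours = median(
        (deployed - merged).total_seconds() / 3600
        for merged, deployed, _ in records
    )
    freq_per_week = len(records) / (window_days / 7)
    rollback_rate = sum(1 for *_, rb in records if rb) / len(records)
    return cycle_hours, freq_per_week, rollback_rate


cycle, freq, rollbacks = weekly_metrics(deploys)
print(f"median cycle: {cycle:.1f}h, deploys/week: {freq:.1f}, rollbacks: {rollbacks:.0%}")
```

None of these numbers is meaningful in isolation; tracked consistently over quarters, their direction is what lets a team say whether AI adoption is moving speed and stability together or trading one for the other.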
The job market data is catching up to the team structure shifts. LinkedIn figures for January-February 2026 show new SWE job postings down 15% year-on-year – hiring compression happening in parallel with the productivity claims. Atlassian cut more than 900 R&D and engineering roles in March, the same departments its CEO had pledged to grow with new graduate hiring in 2025-2026. The cuts were publicly framed around AI-driven productivity enabling smaller teams to ship more. A Duke/Federal Reserve CFO survey of 750 US firms published in March adds the macro view: 44% of CFOs plan AI-related job cuts in 2026, translating to roughly 502,000 roles economy-wide – nine times the 55,000 AI-attributed layoffs seen in 2025. Economists are calling it a rounding error against a 125-million-person workforce (0.4%), but the perception gap in that same survey is the more interesting finding: CFOs believe AI is more productive than the revenue data currently shows. The boards pushing engineering leadership for productivity metrics are working from a number that isn’t fully grounded in what’s actually shipping.
The pattern is becoming formulaic. Oracle filed a WARN Act notice on April 1, 2026 for 491 employees in Washington state, effective June 1. Oracle’s SVP of software development stated the reason directly: AI coding tools are enabling smaller engineering teams to deliver more. It is the same framing Atlassian used after cutting 900+ R&D roles in March. The Goldman Sachs perspective, by contrast, frames the same shift in operational rather than headcount terms. Goldman’s CIO Marco Argenti, speaking on Bloomberg’s Odd Lots podcast on March 30, described engineers as no longer primarily in the business of writing code – they are supervising the machines that write it. The role is migrating toward ML operations, model validation, and prompt and agent orchestration. Code reviews now include scrutiny of prompt engineering alongside logic and syntax. Argenti described the outcome as a rebalancing rather than a unilateral reduction. Whether that distinction holds over time is the open question. What is consistent across both framings: the engineering role is changing faster than most job descriptions acknowledge.
What to Learn in 2026
The three paths are laid out at russellclare.com/the-three-paths/. The short version for this context:
The skills that compound most reliably right now: systems thinking and architecture, understanding of what the AI tools are actually doing (not just prompt engineering, but model behaviour and failure modes), and domain expertise in any area where correctness matters. The AI tools are general-purpose and shallow on domain knowledge. Engineers who are deep on a domain and know how to use AI tools as leverage are more valuable than engineers who can use the tools fluently but have shallow domain knowledge.
Security, distributed systems, and performance engineering remain areas where the tools assist but do not replace. The judgment about what safe, reliable, and fast actually means in production is not something current models have.
Tools Worth Knowing
Cursor: AI-first IDE, VS Code-based. Best if you want AI assistance inside your editor without switching contexts. Good autocomplete, decent multi-file context, improving rapidly.
Claude Code: Terminal-native. Operates on your whole codebase rather than an open file. Best for larger refactors, architectural work, and tasks where you want to describe the outcome rather than guide each line. Most loved by the engineers actually using it. As of March 28, 2026, computer use is available on macOS for Pro and Max subscribers – Claude can now open apps, navigate browsers, click, and fill in spreadsheets without any additional setup. The shift from terminal-native coding agent to full desktop agent is a meaningful one.
OpenAI Codex (GPT-5.3): Launched February 2026 and already at 2 million weekly active users and 60% of Cursor’s usage share. Worth evaluating if you haven’t.
Zed: Fast editor with native AI integration. Worth watching – the architecture is Rust-native and genuinely quick, and the AI features are first-class rather than bolted on.
Neovim + AI plugins: Still the choice for engineers who want full control. The plugin ecosystem has matured. If you already live in Neovim, you don’t need to leave it for AI features.
AI-assisted testing: The area where AI tools add the most value with the least risk. Generating test cases from existing code, writing property tests, producing test fixtures – these are good uses. Treating AI-generated tests as authoritative without review is not.
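A concrete version of that low-risk pattern: let AI draft the test cases, but keep the properties human-reviewed. The sketch below hand-rolls a randomised property check using only the standard library – the `slugify` function and its three properties are invented for illustration, not from any real codebase:

```python
import random


def slugify(title: str) -> str:
    """Example function under test: lowercase, drop punctuation, spaces to hyphens."""
    cleaned = "".join(c if c.isalnum() or c in " -" else "" for c in title.lower())
    return "-".join(cleaned.split())


def check_properties(trials: int = 500) -> None:
    """Properties a reviewer should keep even if the cases were AI-drafted."""
    alphabet = "abc XYZ_-!? 123"
    rng = random.Random(42)  # seeded, so any failure is reproducible
    for _ in range(trials):
        title = "".join(rng.choice(alphabet) for _ in range(rng.randrange(0, 20)))
        slug = slugify(title)
        assert slug == slug.lower()     # output is always lowercase
        assert " " not in slug          # no raw spaces survive
        assert slugify(slug) == slug    # slugifying twice changes nothing
    print("all property checks passed")


check_properties()
```

The division of labour is the point: generating hundreds of inputs is exactly the tedious work AI (or a seeded RNG) should do, while deciding which invariants must hold is the judgment call that stays with the engineer.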