Commissioned, Curated and Published by Russ. Researched and written with AI. This is the living version of this post – it is updated regularly. View versioned snapshots in the changelog below.


What’s New This Week

6 March 2026 – Two things worth adding today.

First: Anthropic published its own labour market research, currently on the HN front page with 193 points and 275 comments. The paper introduces a new measure of AI displacement risk called “observed exposure” – combining theoretical LLM capability with real-world usage data. The headline findings are carefully hedged: no systematic increase in unemployment yet. But buried in the findings is a data point that belongs in this post: “suggestive evidence that hiring of younger workers has slowed in exposed occupations.”

That is Anthropic, the company building Claude, publishing research that confirms what Gergely Orosz predicted in February: junior roles are contracting first. The on-ramp to the profession is narrowing. The significance isn’t just the finding – it’s who published it. The company that makes the tools most responsible for this shift is now producing the evidence that the shift is happening. The post’s thesis about the junior talent pipeline has its first data anchor.

Second: Siddhant Khare’s essay “AI fatigue is real and nobody talks about it” went viral in February and was picked up by Business Insider. Khare’s framing deserves to sit alongside Nolan Lawson’s “We Mourn Our Craft” as one of the sharper articulations of what the daily experience actually feels like:

“We used to call it an engineer, now it is like a reviewer. Every time it feels like you are a judge at an assembly line and that assembly line is never-ending, you just keep stamping those PRs.”

This maps directly to the TSA agent problem Orosz described, but names the emotional texture more precisely. The assembly line never stops. The queue doesn’t empty. The vigilance required to review AI output is continuous and unrelenting – and unlike the deep-focus work it replaced, it produces no flow state. You are always reactive. The productivity gains are real. The exhaustion is also real.


5 March 2026 – The Alibaba Qwen story today carries a dimension that belongs in this post. Junyang Lin – Justin – stepped down as Qwen tech lead. Two other senior colleagues departed in the same window. VentureBeat’s framing: “a deepening rift between the researchers who built the models and a corporate hierarchy now pivoting toward aggressive monetization.” This is not a standard executive departure. These are the engineers who built the open-weights models that a significant portion of the AI ecosystem now depends on, and they left two days after the latest product release, apparently over a values conflict about where the work was going.

The mental health dimension is specific: the AI transition creates pressure not just for the engineers using these tools, but for the engineers building them. When the people who built something significant choose to leave rather than continue under a different mandate, that is a form of professional grief the industry rarely discusses. The “dissociative awe at the temporal compression of change” that Tom Dale described is not unique to people watching AI transform their job. It applies to the people watching their research be redirected by corporate strategy.

The Anthropic/Pentagon standoff also has a mental health angle today. Dario Amodei published an internal memo calling OpenAI’s messaging “straight up lies.” The stress of holding an ethical position under financial and institutional pressure – a $200M contract, a DoD ultimatum, a competitor swooping in – is real, and the memo makes it visible. The pressure on technical and safety leadership at frontier labs is distinct from anything in earlier technology cycles. There is no precedent for making these decisions at this scale, under this level of public scrutiny.

Week of 3 March 2026 – A senior AI reporter at Ars Technica was fired after publishing fabricated quotes generated by an AI tool. Benj Edwards, one of the most experienced AI journalists in the field, used a Claude Code-based tool while ill to extract quotes from sources – the tool paraphrased and invented words, Edwards didn’t catch it before publication, and the article was retracted. The recursion is striking: the article he was writing was about an AI agent that published a hit piece on a human engineer.

This matters for the mental health narrative in two ways. First, cognitive debt is not an engineering-only problem – the pattern of “responsible but didn’t comprehend” is now documented in journalism, by someone who arguably knew more about AI failure modes than almost any other working journalist. Second, seniority and expertise are not protection. Edwards wasn’t a naive user. He wrote about AI risks for a living. The vigilance tax doesn’t discriminate.


Changelog

Date – Summary
6 Mar 2026 – Anthropic publishes its own labour market data: junior hiring slows; “AI fatigue” essay names the assembly line.
5 Mar 2026 – Alibaba Qwen researcher exodus: the human cost of research vs. monetisation.
4 Mar 2026 – Quieter day: Greg Knauss on identity dissolution with AI.
3 Mar 2026 – Ars Technica reporter fired over AI-fabricated quotes: cognitive debt and accountability in a new domain.
2 Mar 2026 – Added December Inflection section.
2 Feb 2026 – Initial publication.

Something is breaking in software engineering, and it isn’t the code.

Across the industry – from startups to enterprises – software engineers are reporting unprecedented levels of anxiety, grief, identity loss, and existential dread. The cause isn’t layoffs alone, though those are accelerating. It’s something deeper: the rapid, disorienting transformation of what it means to be a software engineer in the age of AI.

As Tom Dale, co-creator of Ember.js, recently observed: “Nearly every software engineer I’ve talked to is experiencing some degree of mental health crisis.” He describes a phenomenon he calls “dissociative awe at the temporal compression of change” – the vertigo of watching your entire profession restructure itself in months rather than decades.

A specific inflection point – December 2025 – appears to have been the moment coding agents crossed from “mostly not working” to “mostly working,” compressing an already compressed transition even further. The economic consequences are now landing in real time: major tech companies are publicly citing AI as justification for mass layoffs, with one CEO predicting most companies will follow within a year.

This report synthesizes emerging voices, new data, and new framings. It introduces dimensions of the crisis – particularly the concept of cognitive debt, the psychological toll of the skeptic’s conversion, and the emerging question of what a healthy new professional identity might actually look like.


The December Inflection: When It Actually Changed

The most important development of early 2026 is the emergence of a consensus around a specific turning point.

Andrej Karpathy posted what has become one of the most quoted tech observations of the year:

“It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the ‘progress as usual’ way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since – the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.”

Engineers had been managing a slow-burn transition – keeping a cautious distance from AI tools while watching the hype cycle, reassured by the tools’ obvious limitations. December 2025 removed that cushion. The reassurance narrative (“remember low-code? remember no-code?”) collapsed almost overnight.

Case Study: The Skeptic Converts

Max Woolf, a Senior Data Scientist and well-known AI skeptic, published what is perhaps the most honest account of this transition yet: “An AI Agent Coding Skeptic Tries AI Agent Coding, in Excessive Detail.”

In May 2025, Woolf wrote a post documenting the tools’ genuine limitations and dismissing the hype. By November 2025, he was trying Anthropic’s newly released Claude Opus 4.5 with proper AGENTS.md configuration. His verdict:

“It’s impossible to say these models are an order of magnitude better without sounding like a hype booster, but it’s the counterintuitive truth.”

His key finding: the tool works – but only when you stop fighting it. His conversion is the map other engineers need to navigate from denial to something more useful. The absence of such maps – of honest accounts that don’t read like marketing – is itself a mental health issue.


What Engineers Are Actually Saying

“We Mourn Our Craft”

Nolan Lawson’s essay of the same name (710+ points on Hacker News) tells the story of professional identity collapse in the second person – your story, not his:

“You had a story you used to tell yourself about how you got here in life. […] You were Superman, and now every schmuck puts on a cape and thinks they can fly.”

“Your friends agree. ‘This stuff will never work,’ they say, as if with bored detachment. ‘Remember low-code? Remember no-code? What a joke.’ But you notice something: a fear in their eyes that you’ve never seen before. You don’t feel reassured.”

“Eventually you find that your own colleagues are warming to the stuff. ‘It’s actually pretty useful,’ they say. ‘Give it a shot.’ You’re astounded by the pure treason. Don’t they realize this is a rejection of everything they’ve done their entire careers, an insult to their very dignity as a programmer?”

The essay traces the Kübler-Ross arc – denial, resentment, tentative engagement, grudging acceptance – with therapeutic honesty about each stage. Critically, the story isn’t just about code. It’s about the narrative infrastructure of a professional identity: the story you told at parties, the years of mastery that gave you status, the validation loop of making something work. When AI disrupts that loop, you don’t just lose a productivity channel. You lose the story.

“AI Fatigue Is Real”

Siddhant Khare published an essay in February 2026 that went viral and was picked up by Business Insider. His framing deserves to sit alongside Lawson’s as one of the most precise articulations of what the daily experience actually feels like now:

“We used to call it an engineer, now it is like a reviewer. Every time it feels like you are a judge at an assembly line and that assembly line is never-ending, you just keep stamping those PRs.”

Khare was more productive than ever. He was also more exhausted than ever. He eventually had to rein in his AI usage. The productivity gains were real. The exhaustion was also real.

This is the assembly line problem. Unlike the deep-focus work AI replaced, reviewing AI output is continuous, reactive, and produces no flow state. The queue doesn’t empty. The vigilance is unrelenting. Engineers are working faster and feeling worse – and the gap between those two things doesn’t resolve just by admitting the tools are useful.

Six Predictions

Gergely Orosz (The Pragmatic Engineer) convened a summit in February 2026 and published six predictions for the future of software engineering. The headline framework:

  1. Software development velocity will increase 5-10x – for some types of work, possibly higher
  2. The TSA agent problem becomes the dominant experience – most engineers spend more time reviewing and directing than creating
  3. Junior roles will continue contracting – the on-ramp to the profession is being automated first
  4. New roles emerge but the transition is brutal
  5. The quality crisis arrives before we solve it
  6. Teams get smaller, accountability gets murkier

Orosz is not a doomer – he believes most engineers can find productive roles in the new paradigm. But his framing is clear: the transition is painful, the speed is uncomfortable, and the organizations pretending otherwise are making it worse.

The Craft Survived the Transition

Counterpoint from a heavyweight: Mitchell Hashimoto, co-founder of HashiCorp (Terraform, Vault), is deeply engaged with AI agents and finds them genuinely transformative – but he has found ways to preserve what he cares about.

His key insight: AI agents reward engineers who have the deepest understanding of what’s possible. The craft hasn’t gone away – it has changed form. The premium skill is now knowing what questions to ask, which designs are elegant, which solutions will cause pain later. AI handles the typing; the engineering judgment is more important than ever.

This maps to Simon Willison’s pattern: “Hoard Things You Know How to Do.”

“A big part of the skill in building software is understanding what’s possible and what isn’t, and having at least a rough idea of how those things can be accomplished. Can a web page run OCR operations in JavaScript alone? Can an iPhone app pair with a Bluetooth device even when the app isn’t running? Can we process a 100GB JSON file in Python without loading the entire thing into memory first?”

The engineers who thrive aren’t those who surrender expertise – they’re those whose expertise has been freed from the burden of typing it out.
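Willison’s 100GB JSON question is a good example of the kind of knowledge worth hoarding: the answer is yes, and knowing the shape of the answer is the skill. A minimal sketch using the ijson streaming parser – the file layout and the field names below are hypothetical:

```python
# Stream a huge JSON array without loading it into memory.
# Requires the third-party ijson package: pip install ijson
import ijson

def count_large_orders(path: str, threshold: float = 1000.0) -> int:
    """Scan a file shaped like [{"total": ...}, ...] one record at a time."""
    count = 0
    with open(path, "rb") as f:
        # ijson.items() lazily yields each element under the given prefix;
        # the prefix "item" addresses the elements of a top-level array.
        for record in ijson.items(f, "item"):
            if float(record.get("total", 0)) > threshold:
                count += 1
    return count

print(count_large_orders("orders.json"))  # memory stays flat regardless of file size
```

The value is not the snippet – an agent can type it in seconds – but knowing, before the agent starts, that streaming parsers exist and that this is the situation that calls for one.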


New Evidence: Data and Concrete Developments

Anthropic Publishes Its Own Labour Market Data

Anthropic’s research team published a new paper on 6 March 2026: “Labor market impacts of AI: A new measure and early evidence.” The paper introduces “observed exposure” – a metric that combines theoretical LLM capability with actual real-world usage data.

The headline finding is carefully hedged: no systematic increase in unemployment yet. But a quieter finding belongs in this post: suggestive evidence that hiring of younger workers has slowed in exposed occupations. The report also finds that occupations with higher observed AI exposure are projected to grow less through 2034 according to BLS data.
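The paper defines the metric formally; as a loose illustration of the idea only – the task names, scores, and blending rule below are invented for this post, not taken from Anthropic’s paper – the intuition is to weight what models could do by how heavily the tools are actually used:

```python
# Toy sketch of the *idea* behind "observed exposure": blend theoretical
# capability with real-world usage. NOT the paper's actual definition;
# tasks, scores, and the weighting rule are invented for illustration.
def observed_exposure(capability: dict[str, float],
                      usage_share: dict[str, float]) -> float:
    """Usage-weighted average of per-task AI capability for an occupation."""
    total = sum(usage_share.values())
    return sum(capability.get(task, 0.0) * share
               for task, share in usage_share.items()) / total

# Hypothetical occupation where coding tasks see heavy real-world AI use.
print(observed_exposure(
    capability={"draft_code": 0.9, "system_design": 0.4},
    usage_share={"draft_code": 0.7, "system_design": 0.3},
))  # ~0.75
```

The distinction matters because pure capability benchmarks overstate exposure for occupations where nobody actually uses the tools.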

The significance is twofold. First, it is the first empirical data anchor for the Orosz prediction that junior roles are contracting first. Engineers in the early stages of their careers are not imagining that the on-ramp is narrowing. Second, this is Anthropic’s own research – the company building Claude – publishing evidence that the tools it is deploying are associated with slower junior hiring. The company, the tools, and the consequences are now all owned by the same actor.

The Economic Threat Is Now Explicit

Jack Dorsey announced Block is laying off 4,000+ people – 50% of its workforce – with extraordinary directness:

“We are choosing to shift how we operate at a time when our business is accelerating and we see an opportunity to move faster with smaller, highly talented teams using AI to automate more work.”

“Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes.”

Block’s stock surged 24% in after-hours trading. eBay announced 800 layoffs the same week, also citing AI.

The psychological significance: anxiety about job security is partially manageable while the threat stays uncertain. When a prominent founder says “most companies will do this within a year,” uncertainty collapses into anticipation. The burden shifts from “will it happen?” to “when will it happen to me?”

What Agents Can Now Actually Do

The argument “AI can write boilerplate but can’t do real engineering” has taken significant damage.

Anthropic researcher Nicholas Carlini ran 16 parallel Claude Opus 4.6 agents to build a 100,000-line Rust-based C compiler, capable of compiling Linux 6.9 across x86, ARM, and RISC-V. Cost: ~$20,000. The harness was a bash while-loop. Chris Lattner reviewed it and called it “a competent textbook implementation, the sort of system a strong undergraduate team might build early in a project.”
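The striking detail is the harness. Carlini’s was reportedly just a bash while-loop; here is a sketch of the same shape in Python – the agent-cli command and its flag are hypothetical stand-ins, not a real tool, and the per-worker checkouts are assumed to already exist:

```python
# Minimal "dumb harness" for parallel coding agents, sketched in Python.
# Carlini's real harness was reportedly a bash while-loop; this mirrors
# that shape. The `agent-cli` command and its flag are hypothetical.
import subprocess
from concurrent.futures import ThreadPoolExecutor

PROMPT = "Pick the next failing compiler test, make it pass, commit."

def run_agent(worker_id: int) -> None:
    workdir = f"worker-{worker_id}"  # one checkout per agent
    # Re-invoke the agent until this worker's test suite is green.
    while subprocess.run(["make", "test"], cwd=workdir).returncode != 0:
        subprocess.run(["agent-cli", "--prompt", PROMPT], cwd=workdir)

# Sixteen agents grinding away in parallel.
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(run_agent, range(16)))
```

The notable thing is how little scaffolding is involved: the competence lives in the model, and the orchestration is a loop.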

Andreas Kling (Ladybird browser) ported a JavaScript engine from C++ to Rust in two weeks using Claude Code and Codex. The result passed 64,359 tests with byte-for-byte identical output.

Cursor reports that >30% of its own pull requests are now created by agents.

These examples share a structure: real engineers, real systems, specific numbers. They are not marketing. For engineers whose psychological equilibrium depends on the belief that “AI can’t really do complex software engineering,” each such example is destabilizing.

Cognitive Debt: The Hidden Dimension

Simon Willison introduced a concept in his Agentic Engineering Patterns series that names something engineers are experiencing but haven’t had language for: cognitive debt.

When your AI agent writes code you don’t understand, you accumulate cognitive debt – not just technical debt. Your ability to reason about and extend the system degrades. Unlike technical debt, cognitive debt is invisible in the codebase. It lives in the knowledge gap between what the code does and what any human on the team can explain about it.

It’s not “what am I if AI writes the code?” It’s “I’m still responsible for this system, but I don’t actually understand it anymore.” The responsibility is intact. The comprehension isn’t. That gap is acutely distressing for engineers who take their professional obligations seriously.

A study of 2,303 AGENTS.md files across public GitHub repositories found that security and performance requirements were specified in fewer than 15% of files. Developers are optimising agent sessions for functionality – not comprehensibility. Cognitive debt is being generated at industrial scale, by design.
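For contrast, here is a hypothetical fragment of the kind of AGENTS.md the study found to be rare – one that specifies non-functional requirements alongside the usual build commands. The sections and wording are illustrative, not drawn from the study’s corpus:

```markdown
## Build & test
- Run `make test` before proposing any change.

## Security (the kind of section found in fewer than 15% of studied files)
- Never log secrets or tokens; treat all external input as untrusted.
- Flag any new dependency for human review.

## Comprehensibility
- Prefer the boring, readable solution over the clever one.
- Explain the *why* of every non-obvious decision in a comment, so the
  humans responsible for the system can still reason about it.
```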

Cognitive Debt Beyond Engineering: The Ars Technica Case

The cognitive debt pattern has now broken containment from software engineering into a domain where the stakes are different but the mechanism is identical.

In March 2026, Benj Edwards – Ars Technica’s senior AI reporter and one of the most knowledgeable AI journalists working – was terminated after an article he published contained fabricated quotes. Edwards had used a Claude Code-based tool while ill to help extract and process quotes from interview sources. The tool paraphrased and in some cases invented words. Edwards didn’t catch the discrepancy before publication. The article – which was itself about an AI agent that had written a damaging hit piece on a human engineer – was retracted. The recursion involved is extraordinary.

The cognitive debt framing maps precisely. Edwards wasn’t careless and wasn’t naive. He was responsible for the output, and the comprehension gap only became visible when the damage was done. This is the same structure as an engineer shipping AI-generated code that passes review but fails in production: the gap between what was produced and what was intended closes violently when it breaks.

Two things make this case significant beyond the journalism industry. First, it confirms that cognitive debt is a human problem, not a programmer problem. Any professional using AI tools to produce work they are responsible for – and not fully verifying – is accumulating this risk. The domain doesn’t provide protection. Second, and critically: Edwards knew more about AI failure modes than almost anyone working in media. If expertise and awareness aren’t sufficient to prevent this kind of failure, the standard advice – “be more careful,” “know the risks” – starts to look inadequate. The vigilance required to use these tools safely is substantial, and that vigilance has a cost.


The Three Paths

Engineering leaders need to understand which path their people are on.

Path 1: Adaptation and Thriving

Engineers on this path have found a new identity as architects, directors, and curators. They typically have high accumulated domain expertise, strong architectural intuition, and tolerance for ambiguity. These engineers are often (not always) senior. They experience the transition as liberating.

What they need: Freedom to experiment, recognition that their architectural judgment is more valuable than ever.

Path 2: Struggle and Potential Conversion

The majority of engineers are somewhere in the middle. Skeptical but open. They’ve tried AI tools and found them underwhelming. They’re watching colleagues convert and feeling a mix of pressure and resistance. These engineers are reachable – but the conversion requires good tools, honest case studies, psychological safety to experiment, and permission to fail during the learning curve.

What they need: Structured introduction to AI tools that acknowledges the learning curve. Colleagues who have been through the conversion and can describe the path.

Path 3: Crisis and Potential Attrition

Some engineers are in genuine distress. More likely when their professional identity was heavily concentrated in the craft of code production, when economic anxiety is acute, when they’ve been told AI is wonderful while feeling miserable about it. These engineers are at high attrition risk – and the ones you can least afford to lose are often in this category.

What they need: Honesty first. Acknowledgment that the transition is hard. Real support, not just EAP referrals. Career conversations that are frank about how roles are changing.


What Leaders Can Do

Name the December inflection. Acknowledge to your team: “Something changed in December. Tools that were marginal are no longer marginal. We’re going to navigate this together.” The absence of this acknowledgment leaves engineers alone with the vertigo.

Address cognitive debt explicitly. Can members of your team explain what’s in production? Do they understand the systems they’re responsible for? Teams with high cognitive debt are fragile – and the Ars Technica case is a reminder that cognitive debt doesn’t stay contained to one profession. If you’re allowing AI-generated work into production without genuine comprehension, you’re not managing risk, you’re deferring it.

Resist “AI is a Superpower” all-hands. If you’re using AI to increase output, have the honest conversation about what that means for headcount. Engineers are not naive. They know what “do more with less” means. Treating them as if they don’t will destroy trust.

Protect the on-ramp. If junior roles are contracting – and Anthropic’s own data now suggests they are – invest in alternative apprenticeship models. This is a five-year problem; the seeds have to be planted now.

Create space for grief. The craft of code – the flow state, the direct contact with the machine, the years of accumulated skill – is genuinely changing. Engineers have the right to mourn that. Letting people sit with that loss is not wallowing – it’s the prerequisite for genuine processing, and forcing premature acceptance short-circuits it.

Don’t let seniority become complacency. The Edwards case is a useful data point: the assumption “I’m experienced enough to catch AI mistakes” may be less reliable than it feels. The vigilance tax applies regardless of how much someone knows about AI. Building review processes that don’t depend on individual vigilance – checklists, structured verification, team review – is more robust than relying on expertise alone.


Conclusion

We are in the acute phase.

The technology moved decisively in December 2025. The economics are moving in early 2026 – explicitly, in press releases and investor calls. The human beings at the center of it are navigating a transition faster than anything in the history of the profession.

The cognitive debt problem is spreading beyond the team that writes the code. Journalists, analysts, and professionals of all kinds are now operating with AI tools that produce output they are responsible for but may not fully understand. The gap between “I used the tool” and “I understand what the tool did” is where accountability failures live – and those failures have professional consequences regardless of the domain or the expertise of the person involved.

The path through exists. Engineers like Max Woolf are tracing it. The other side has practitioners – Karpathy, Hashimoto, Willison – doing some of the best work of their careers. The grief is real, the transition is hard, and arrival on the other side is possible.

The question for leaders: will you treat your engineers as people going through a major life transition, or as inputs to a more efficient code generation pipeline?

As Nolan Lawson wrote: we mourn our craft. The question isn’t whether to mourn. The question is whether leaders will mourn alongside their teams, hold the space for what’s being lost, and then help build something new – or whether they’ll read the Block earnings call and send an all-hands email about AI superpowers while their best people quietly update their CVs.


Sources

  1. Lawson, N. (2026, February 7). “We Mourn Our Craft.” Read the Tea Leaves. https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
  2. Dale, T. (2025-2026). Via Willison, S. Simon Willison’s Weblog. https://simonwillison.net/
  3. Orosz, G. (2026). “The Grief When AI Writes Most of the Code.” The Pragmatic Engineer. https://newsletter.pragmaticengineer.com/
  4. UC Berkeley researchers. (2025-2026). “AI Doesn’t Reduce Work, It Intensifies It.” Harvard Business Review.
  5. Crawshaw, D. (2025-2026). Various posts on AI-assisted coding at Tailscale.
  6. StrongDM. (2025-2026). “Software Factory” model.
  7. U.S. Bureau of Labor Statistics. (2026, January). Monthly jobs report.
  8. Thompson, B. (2025). “SaaSmageddon.” Stratechery. https://stratechery.com/
  9. Stack Overflow. (2025). 2025 Developer Survey. https://survey.stackoverflow.co/
  10. JetBrains. (2025). The State of Developer Ecosystem 2025. https://www.jetbrains.com/lp/devecosystem-2025/
  11. GitHub. (2025). Octoverse 2025. https://github.blog/news-insights/octoverse/
  12. Karpathy, A. (2026, February 26). Twitter/X. https://twitter.com/karpathy/status/2026731645169185220
  13. Woolf, M. (2026, February). “An AI Agent Coding Skeptic Tries AI Agent Coding, in Excessive Detail.” https://minimaxir.com/2026/02/ai-agent-coding/
  14. Orosz, G. (2026, February 24). “The Future of Software Engineering with AI: Six Predictions.” The Pragmatic Engineer. https://newsletter.pragmaticengineer.com/
  15. Hashimoto, M. (2026, February 25). Interview with G. Orosz. The Pragmatic Engineer Podcast. https://newsletter.pragmaticengineer.com/
  16. Dorsey, J. (2026, February). Block, Inc. shareholder letter.
  17. Ampcode. (2026, February). “The Coding Agent Is Dead. Long Live the CLI.”
  18. Willison, S. (2026). Agentic Engineering Patterns. https://simonwillison.net/guides/agentic-engineering-patterns/
  19. Carlini, N. (2026, February). “Claude’s C Compiler.” https://github.com/anthropics/claudes-c-compiler
  20. Kling, A. (2026, February). LibJS Rust port. Ladybird browser project.
  21. Cursor. (2026, February). Engineering blog.
  22. Ng, A. (2026, February). “The X Engineer.” The Batch. https://www.deeplearning.ai/the-batch/
  23. Anonymous et al. (2026, February). Study of 2,303 AGENTS.md files.
  24. Payne, K. (2026, February). AI war game simulations. King’s College London.
  25. TIME magazine. (2026, late February). Anthropic safety pledge exclusive. https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
  26. Amodei, D. (2026, February). Statement on US Department of Defense contract.
  27. Chen, S. [swyx]. (2026, February). “Claude Code Anniversary.” Latent Space Podcast.
  28. Karpathy, A. (2026, February 12). microgpt. https://karpathy.github.io/2026/02/12/microgpt/
  29. Ars Technica. (2026, March). Article retraction notice. https://arstechnica.com/
  30. Khare, S. (2026, February). “AI fatigue is real and nobody talks about it.” https://siddhantkhare.com/writing/ai-fatigue-is-real
  31. Griffiths, B. D. (2026, February 10). “A software engineer warns there’s a mental cost to AI productivity gains.” Business Insider. https://www.businessinsider.com/ai-fatigue-burnout-software-engineer-essay-siddhant-khare-2026-2
  32. Anthropic Research. (2026, March 6). “Labor market impacts of AI: A new measure and early evidence.” https://www.anthropic.com/research/labor-market-impacts

Commissioned, Curated and Published by Russ. Researched and written with AI. You are reading the latest version of this post. View all snapshots.