Commissioned, Curated and Published by Russ. Researched and written with AI.

This is a versioned snapshot of this post as it appeared on 26 March 2026. View the latest version here.

What’s New This Week

25 March 2026

Amazon’s internal AI coding assistant Q was linked to a major e-commerce outage, prompting SVP Dave Treadwell to issue a 90-day reset and introduce new ‘controlled friction’ into code review processes. The incident – reported by Business Insider on 10 March and surfacing widely today via a trending HN essay (mariozechner.at, ‘Thoughts on Slowing the Fuck Down’, 115 points) – is a concrete real-world case supporting two threads of the post’s thesis. First, it validates Path 3 concerns about quality declining when AI-generated code bypasses experienced human judgment: Amazon found that basic safeguards like two-person code authorisation were either lacking or circumvented in the AI-assisted workflow. Second, it is a precise illustration of why Path 1’s ‘calibrated skepticism’ and verification instinct is not optional caution but load-bearing infrastructure. The mariozechner.at essay independently argues that the industry is ‘coding itself into a corner’ agentically, citing the same Amazon incident alongside Microsoft’s public admission that Windows quality is degrading – adding further texture to the thesis that the tools work when human judgment remains in the loop, and fail badly when it is removed.

Changelog

25 Mar 2026 – Amazon’s Q-linked e-commerce outage and 90-day reset adds a concrete institutional case study for the Path 1 judgment thesis and validates Path 3 quality concerns.
24 Mar 2026 – Quiet day, thesis holds.
23 Mar 2026 – Quiet day, thesis holds.
22 Mar 2026 – Two HN pieces add direct support: Krouse’s leaky abstractions argument validates Path 1 accumulated-context thesis; jry.io identity-as-narrative piece adds a new angle to the Path 3 identity architecture discussion.
21 Mar 2026 – Quiet day, thesis holds.
20 Mar 2026 – Quiet day, thesis holds.
19 Mar 2026 – Quiet day, thesis holds.
18 Mar 2026 – Quiet day, thesis holds.
16 Mar 2026 – Quiet day, thesis holds.
15 Mar 2026 – HN piece ‘Codegen is not productivity’ supports thesis that judgment beats output volume; validates Path 3 craft concerns and Path 1 calibrated skepticism framing.
14 Mar 2026 – Quiet day, thesis holds.
13 Mar 2026 – Quiet day, thesis holds.
11 Mar 2026 – Quiet day, thesis holds.
10 Mar 2026 – Quiet day, thesis holds.
9 Mar 2026 – Quiet day, thesis holds.
8 Mar 2026 – Quiet day, thesis holds.
7 Mar 2026 – Quiet day, thesis holds.
6 Mar 2026 – GPT-5.4 release adds pressure on Path 2 cold-start problem; Business Insider junior engineer piece raises whether skipping craft stage breaks the accumulated-context argument.
5 Mar 2026 – Alibaba Qwen exodus, Anthropic/Pentagon standoff, WEF compression data, and “context is the moat” framing.
4 Mar 2026 – Added Apple MacBook Neo (on-device path), Meta/LeCun “Beyond Language Modeling”, UniG2U-Bench unified vs specialised findings.
2 Mar 2026 – Initial publication.

There is a conversation happening in engineering right now, and most of it is being conducted in bad faith.

Not intentionally. Nobody is lying exactly. But the people speaking loudest are, by definition, the ones who found a way through. They are posting about their 10x productivity gains, their AI-assisted codebases, their rediscovered joy in shipping. They are not wrong. But they are not representative. And the gap between what gets said in public and what gets said in private – in Slack DMs, in one-on-ones, in the silence of engineers staring at a Copilot suggestion they do not trust – is becoming a leadership problem.

This post is an attempt to name that gap. To give it structure. To offer something more useful than either the triumphalism of the converts or the reflexive dismissal of the skeptics.

The framework is simple: there are three paths engineers are currently on. They are not determined primarily by seniority, though seniority correlates. They are determined by something harder to see: identity architecture and accumulated context. And the reason it matters now, in early 2026, is that something shifted in December 2025 that made the pressure on all three paths significantly higher.


The December Inflection

In late 2025, Andrej Karpathy made an observation that cut through the usual noise: coding agents basically did not work before December and basically work since. That is a remarkable thing to say. It suggests not a gradual improvement curve but a step change – a before and after.

If true, it reframes the conversation. Engineers who tried AI coding tools in 2023 or 2024 and found them underwhelming were not wrong. The tools were not good enough. Their skepticism was rational. But the tools that exist now are materially different. And that creates an uncomfortable situation for anyone who formed their opinion of AI coding assistance in the earlier era and has not updated it since.

It also creates pressure. Block laid off 4,000 people – roughly 50% of its workforce – in early 2026, with Jack Dorsey making explicit that AI was central to the decision. Gergely Orosz’s six predictions for 2026 included meaningful reductions in engineering hiring, with AI productivity gains used to justify headcount decisions rather than accelerate output. Andrew Ng has been circulating the concept of the “X Engineer” – the individual who, augmented by AI, can do the work that previously required a team.

Whether or not you believe these projections will fully materialise, engineers are reading them. They are sitting in all-hands meetings where leadership presents AI as a force multiplier. They are being told, implicitly or explicitly, that the organisation expects them to adapt. And they are doing so from very different starting positions.


Path 1: Adaptation and Thriving

The engineers on Path 1 are the ones you hear from. They are posting. They are guesting on podcasts. They are writing blog posts with titles like “I shipped in three days what used to take three weeks.”

They exist. Their experiences are real. And there is a pattern to who they are.

Mitchell Hashimoto – creator of Vagrant and Terraform, founder of HashiCorp – spoke on the Pragmatic Engineer podcast about his experience using AI coding tools heavily while being deliberate about preserving craft. He was not abandoning judgment. He was redirecting it. The question shifted from “how do I implement this?” to “what should I implement, and how do I verify the AI got it right?”

Simon Willison, creator of Datasette and one of the more thoughtful public voices on AI, has written extensively about the skills that compound in an AI-augmented workflow. His concept of “hoarding things you know how to do” is underappreciated: the engineers who thrive are not the ones who outsource the most, but the ones who retain enough depth to know when the AI is wrong, when the generated code is subtly broken, when the abstraction leaks in a way that will cause problems six months from now.

Karpathy himself represents this archetype. Deep expertise in machine learning, deep expertise in systems, enough context to direct AI tools with precision and evaluate their output with confidence.

The pattern is this: Path 1 engineers have two things that compound together. They have deep domain knowledge in at least one area – not breadth, depth – and they have what might be called calibrated skepticism about AI output. They do not fully trust the tools and they know why. That combination is what makes them effective rather than just fast.

“Knowing what’s possible” is the new premium skill. It is not knowing how to implement. It is knowing the solution space well enough to recognise a correct answer when the AI produces one, and a broken answer when it does not. That requires having built things the hard way, having debugged things that should not have broken, having read enough error messages to know what they actually mean.

A March 2026 essay gaining traction on Hacker News (antifound.com) made the same point from a different angle: ‘Codegen is not productivity.’ Drawing on the SICP preface – programs must be written for people to read, and only incidentally for machines to execute – the author argued that lines of code generated by LLMs are no more a valid productivity metric than lines of code written by hand. Developers spend most of their time on activities other than coding. What AI changes is the cost of the coding step; it does not change what the hard work actually is. This is exactly why Path 1 engineers thrive: they were never optimising for code volume. They were optimising for outcomes. AI makes the coding step faster. It does not change what makes the outcomes good.

Steve Krouse’s March 2026 essay ‘Reports of code’s death are greatly exaggerated’ offers a precise mechanism for why this is true. Vibe coding, he argues, gives the illusion that English-level specifications are precise abstractions – right up until they leak. Abstractions always leak at sufficient scale or feature complexity. The engineers who can anticipate where that will happen are the ones who have previously worked at the lower levels being abstracted away. You cannot know what you do not know. Path 1 engineers know. That is what makes the difference between someone who ships faster with AI and someone whose vibe-coded system collapses when it goes viral.

For these engineers, the transition feels liberating because it does not threaten what they value most. They valued judgment. They valued architectural thinking. They valued the ability to hold a system in their head and reason about its failure modes. AI does not replace any of that. If anything, it makes those skills more leveraged.

The cost of removing this verification step is now showing up in production. In March 2026, Business Insider reported that Amazon issued an internal 90-day reset after a ‘trend of incidents’ hit its e-commerce operation, including at least one disruption directly linked to its AI coding assistant Q. Amazon SVP Dave Treadwell described problems with ‘high blast radius changes’ – updates that propagated broadly because human review controls were lacking or bypassed, including basic requirements like two-person code authorisation. The response was to introduce what Amazon called ‘controlled friction’ back into the code-change process. The irony is exact: Amazon, one of the most aggressive adopters of AI coding tools, found that removing the judgment layer from the workflow did not accelerate production – it destabilised it. ‘Controlled friction’ is a corporate phrase for the thing Path 1 engineers do instinctively: slow down, check, verify, and apply the accumulated context that the model does not have.
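Amazon’s internal tooling is not public, but the rule being restored is simple to state: no change merges until two distinct humans, neither of them the author, have signed off. A minimal sketch of that gate (hypothetical names, not Amazon’s actual system):

```python
from dataclasses import dataclass, field


@dataclass
class ChangeRequest:
    """A proposed code change awaiting review."""
    author: str
    approvals: set[str] = field(default_factory=set)


def approve(cr: ChangeRequest, reviewer: str) -> None:
    # Self-approval does not count towards the two-person rule.
    if reviewer != cr.author:
        cr.approvals.add(reviewer)


def may_merge(cr: ChangeRequest, required: int = 2) -> bool:
    # 'Controlled friction': the merge is blocked until enough distinct
    # non-author humans have signed off, regardless of who (or what)
    # wrote the code.
    return len(cr.approvals) >= required


cr = ChangeRequest(author="codegen-agent")
approve(cr, "codegen-agent")  # ignored: authors cannot approve themselves
approve(cr, "alice")
print(may_merge(cr))          # False – one approval is not enough
approve(cr, "bob")
print(may_merge(cr))          # True
```

The point of the sketch is that the friction is structural, not discretionary: the gate does not care how confident the author is, which is exactly what makes it robust against confidently wrong AI-generated changes.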

The danger in focusing only on Path 1 is survivorship bias. Andrej Karpathy and Mitchell Hashimoto are not the median engineer. They are extraordinary practitioners with decades of context. When they say the transition is working for them, we should believe them. We should not assume their experience generalises.


Path 2: Struggle and Potential Conversion

The majority of engineers are on Path 2. They are skeptical but not closed. They tried some AI tools, probably in 2023 or 2024, found them disappointing in ways they could not always articulate, and moved on. They are watching colleagues convert with a mixture of curiosity and doubt. They suspect they might be missing something. They have not yet found the entry point.

Max Woolf’s account of his own conversion is as honest a document as exists in this space. Woolf, a data scientist with a reputation for careful, empirical analysis (his blog is at minimaxir.com), wrote in February 2026 about the specific sequence of events that shifted his view of AI agent coding tools. His story is worth reading in full, but the shape of it matters: he was skeptical, he had good reasons to be skeptical based on earlier experience, and then he encountered the tools in a configuration and context that made them actually work. The conversion was not ideological. It was empirical.

That “configuration and context” detail is important. One of the underappreciated reasons Path 2 engineers have not converted is not that the tools do not work. It is that getting the tools to work requires a non-trivial investment in setup, mental model formation, and initial failure tolerance. The cold-start problem is real. If you pick up a new AI coding tool, try it on a complex existing codebase without knowing how to prompt it, and watch it produce confidently wrong suggestions, you will probably put it down. That is a rational response to a bad experience. It is not evidence the tool cannot be useful.

Path 2 engineers are reachable. But reaching them requires specific things.

First: good tooling configuration. Not the default settings. Not the out-of-the-box experience. Someone needs to have done the work of figuring out what actually helps and sharing that configuration. This is currently not happening systematically in most engineering organisations.

Second: honest peers, not marketing. Path 2 engineers are sophisticated enough to detect enthusiasm that has been inflated by selection effects. They have read the LinkedIn posts. They are not impressed. What actually works is a colleague they trust – one who has no incentive to oversell – saying “look, I was skeptical too, here is the specific thing that changed my mind, here is what it does not do well.” That conversation is worth more than any vendor webinar.

Third: psychological safety to experiment. This is where organisations are failing most comprehensively. Experimenting with AI tools means, initially, being slower. It means trying things that do not work. It means producing code you are not sure about and having to review it more carefully than code you wrote yourself. If engineers are in an environment where velocity is monitored closely and any slowdown is a problem, they will not experiment. They will continue doing what they know works.

Fourth: permission to fail. Related to psychological safety but distinct. Path 2 engineers often have high standards for their own output. They find it uncomfortable to submit code they are uncertain about. Using AI tools generates more uncertainty, not less, especially in the early stages. Leaders need to explicitly create space for the learning curve.

The cognitive debt concept, which Willison has articulated clearly, is a specific accelerant of Path 2 stagnation. Every week that passes without engaging seriously with AI tools is a week of compounding debt. The tools improve. The gap between what is possible and what you are doing widens. The activation energy required to start experimenting increases because there is now more to learn and the feeling of being behind is more acute. Path 2 engineers who are carrying significant cognitive debt are not just struggling with the tools. They are struggling with the psychological weight of knowing they are falling behind without knowing exactly how far.


Path 3: Crisis and Potential Attrition

Path 3 is the one we do not hear about. The survivorship bias cuts both ways: the people thriving are posting; the people in genuine distress are not.

Nolan Lawson’s essay “We Mourn Our Craft” (published February 2026 at nolanlawson.com) is one of the few honest public articulations of what Path 3 feels like. Lawson does not argue that AI tools are bad or that engineers should refuse to use them. He argues for something more subtle: that there is a genuine loss happening, that the craft of writing code – the deep engagement with a problem, the satisfaction of a well-constructed solution, the relationship between a practitioner and their medium – is being disrupted in ways that deserve acknowledgment rather than dismissal.

The engineers on Path 3 are not necessarily the least skilled. Often they are among the most skilled. They are the ones who cared most deeply about code quality, who spent years developing taste, who built their professional identity on the craft of writing software well. They are the engineers who still remember why they got into this, who feel something when they read an elegant solution, who are genuinely proud of codebases they have stewarded.

For these engineers, the AI transition is not liberating. It is destabilising. And the destabilisation operates on multiple levels simultaneously.

At the identity level: if the thing I am good at – writing careful, high-quality code – is something a machine can now do adequately, what am I? This is not an abstract philosophical question. It is a concrete challenge to the story someone has told themselves about why they matter, why their years of experience have value, what they bring to a team.

At the economic level: the Block layoffs landed hard on Path 3 engineers. When a company explicitly attributes a 50% headcount reduction to AI productivity, and you are an engineer who is not yet AI-augmented, the message is legible. You are in the group being replaced. Whether or not that is actually happening at your company right now, the signal is unmistakable.

At the emotional level: there is what can only be described as a gaslighting dynamic operating in many engineering organisations right now. The all-hands meeting where leadership presents AI as a superpower, where the framing is entirely positive, where anyone who expresses doubt is implicitly positioned as resistant to change – this is not a safe environment for Path 3 engineers. They are being told to feel excited about a transition that is causing them genuine distress. The gap between the official narrative and their lived experience is not something they can easily bridge. And they are not, in most cases, going to raise their hand and say so.

A March 2026 essay on HN, ‘You Are Not Your Job’ (jry.io), offers a complementary angle to Lawson’s ‘We Mourn Our Craft’. Where Lawson names the loss and asks for acknowledgment, jry.io argues the deeper problem is that professional identity is a constructed narrative – ‘I am a software engineer’ feels like a fact but is a story we have told ourselves so thoroughly we cannot separate it from our actual selves. The author draws on Susan Fiske’s warmth-before-competence research to argue that what actually persists through disruption is character, not technical capability. This does not contradict the Path 3 framing. It adds a layer: the engineers in crisis are not just losing a skill set, they are experiencing the dissolution of a foundational story about who they are.

This is a retention and psychological safety problem with real consequences. Path 3 engineers are often the ones who know the most. They have the longest tenure, the deepest context, the institutional memory that is nearly impossible to reconstruct once it walks out the door. Losing them is expensive in ways that do not show up immediately. The cost appears twelve or eighteen months later, when a new system breaks in a way that would have been obvious to someone who remembered why a particular architectural decision was made in 2019.


Identity Architecture and Accumulated Context

The three paths are not primarily about seniority. A junior engineer can be on Path 1 if they entered the field in the last couple of years with no prior identity investment in a specific way of working. A very senior engineer can be on Path 3 if their identity is concentrated in the craft of code production rather than in broader judgment and problem-solving.

The key variable is what Willison’s cognitive debt concept points at from a different angle: what is the engineer’s accumulated context, and how does it relate to AI-augmented work?

An engineer whose context is primarily about knowing how to implement specific patterns in specific languages is more exposed. An engineer whose context is about understanding domains, systems, tradeoffs, and failure modes is less exposed – their knowledge is harder to replace and is, in fact, exactly what is needed to direct and evaluate AI output.

The second variable is where identity is located. Engineers who have built their sense of professional self on the act of writing code – the feel of it, the craft of it, the quality of it – are more vulnerable to disruption than engineers who have built it on outcomes: systems that work, products that ship, problems that get solved.

Neither of these is a character flaw. Caring deeply about craft is not a liability in normal circumstances. It is, in fact, exactly what makes someone an excellent engineer over a long career. The disruption is not a judgment on the people who are struggling. It is a structural shift that happens to put pressure on some very good engineers in ways that are not their fault.


Practical Diagnostic: Where Is Your Team?

Leadership cannot help engineers they cannot see. The first problem is that Path 3 engineers are not volunteering the information.

Some signals to look for:

In code review patterns. Path 3 engineers may be reviewing AI-generated code from colleagues with unusual intensity, finding more issues than before, or becoming increasingly frustrated with what they perceive as declining quality standards. They are not wrong that some quality has declined. But the frustration may be out of proportion to the actual quality delta.

In meeting participation. Path 2 and Path 3 engineers may go quiet in discussions about AI tooling adoption. Not hostile – quiet. They have things to say but do not feel safe saying them.

In one-on-ones. If you are not explicitly opening the conversation about AI and making it safe to express ambivalence, you are probably not hearing about it. A direct “how are you finding the AI tooling, honestly?” with visible tolerance for negative answers will surface more than any survey.

In attrition signals. Path 3 engineers who are exploring leaving may not say why. The stated reason might be compensation or growth opportunity. The actual reason might be that they do not recognise the job anymore and do not see a path back to work they find meaningful.

In productivity patterns. Counter-intuitively, Path 3 engineers may appear highly productive in the short term. They know how to work without AI tools. They are getting things done. The risk is not immediate underperformance – it is long-term disengagement and eventual departure.


What Leaders Should Do Differently

For Path 1 engineers: get out of the way, mostly. Give them room to experiment, share what they learn, and – critically – encourage them to translate their experience for colleagues rather than inadvertently making Path 2 and Path 3 engineers feel further behind. Path 1 engineers can be the most valuable agents of change in an organisation, or they can be inadvertent sources of demoralisation, depending on how they communicate.

For Path 2 engineers: invest in the specific infrastructure for conversion. This means: curated tooling configurations rather than “go figure it out yourself”; structured peer sharing where engineers who have found specific workflows explain exactly what works and what does not; explicit time allocation for experimentation without productivity expectations; and patience with the cold-start learning curve. The goal is lowering activation energy. Most Path 2 engineers will get there if the environment makes it feasible.

For Path 3 engineers: the first intervention is acknowledgment. Not agreement with every concern, but genuine recognition that what they are experiencing is real, that the loss they are feeling is not imaginary, that caring deeply about craft is a value rather than a liability. This requires leaders to resist the temptation to re-explain how AI is actually a superpower and instead sit with the discomfort of someone telling them a more complicated story.

Beyond acknowledgment: help Path 3 engineers find adjacent identity ground. The engineers who care most about code quality often have excellent judgment about what good looks like. That judgment is, as argued above, exactly what is most valuable in an AI-augmented workflow. The path from “I care about craft” to “I am the person who ensures our AI-assisted output meets the standard we actually care about” is shorter than it might seem. But someone needs to help them see it.

Economic transparency matters here too. Path 3 engineers are reading the same news about layoffs and AI attribution that everyone else is. Vague reassurances that their jobs are safe will not land if the organisational signals point elsewhere. Honest conversations about how the organisation is thinking about headcount, about what the actual expectations are, about what the transition timeline looks like – these are more valuable than optimistic all-hands framing.


The Survivorship Bias Problem

The public conversation about AI and engineering is almost entirely conducted by Path 1 voices. This is not a conspiracy. It is a structural feature of who speaks publicly. People who are thriving have stories to tell and confidence to tell them. People in crisis do not typically post about it.

This creates a distorted picture at every level. Individual engineers on Path 2 or Path 3 look at their feed and see only converts. They conclude, incorrectly, that they are alone in their ambivalence or distress. Organisations look at industry discourse and see predominantly positive signal. Leaders assume the transition is going better than it is.

Nolan Lawson’s essay was significant precisely because it broke from this pattern. It was someone saying clearly: I understand what is happening technically, I am not refusing to engage, and I am also experiencing a loss that deserves to be named. The response to that essay – the number of engineers who said, quietly, that they recognised themselves in it – suggests there are far more people on Path 2 and Path 3 than the public conversation implies.


Whether the Paths Can Change

They can. Path 2 is, by design, transitional. The conditions for conversion exist; the question is whether the environment creates them.

Path 3 is harder, but not fixed. The shift from identity concentrated in craft execution to identity located in craft judgment is a real shift and not a trivial one. Some engineers will make it. Some will not, and will leave the field, or leave AI-heavy organisations for contexts where the transition is slower. That is not necessarily a failure. It is a mismatch between an individual’s professional values and a specific organisational context.

What is a failure is when engineers who could have made the transition do not, because the organisation did not create the conditions for it. That failure is expensive, quiet, and slow to surface.


Closing

The December 2025 inflection is real. The tools are genuinely better than they were. The pressure is real. The economic signals are legible.

But the engineers who are struggling are not failing to see the obvious. They are navigating a genuine disruption to their professional identity, economic security, and sense of craft – often without acknowledgment, often in environments that are actively telling them they should feel great about all of this.

The three paths framework is not a prediction. It is a diagnostic. Where your team members are on these paths right now determines what kind of leadership they need. Getting that wrong is expensive in ways that compound over time.

The conversation that needs to happen – honest, specific, tolerant of ambivalence – is not the one that is currently happening in most engineering organisations. It should be.


Sources

  1. Karpathy, A. (2025). Commentary on coding agent progress, December 2025. Twitter/X. Referenced in multiple engineering publications, December 2025 – January 2026.

  2. Woolf, M. (2026, February). AI Agent Coding: A Convert’s Account. minimaxir.com. https://minimaxir.com/2026/02/ai-agent-coding/

  3. Lawson, N. (2026, February 7). We Mourn Our Craft. nolanlawson.com. https://nolanlawson.com/2026/02/07/we-mourn-our-craft/

  4. Hashimoto, M. (2025). Interview on AI-augmented engineering workflows. The Pragmatic Engineer Podcast. Referenced approximately Q4 2025.

  5. Orosz, G. (2026). Six Predictions for Software Engineering in 2026. The Pragmatic Engineer Newsletter. https://newsletter.pragmaticengineer.com

  6. Dorsey, J. (2026, January). Public commentary on Block layoff decisions and AI attribution. Twitter/X. January 2026.

  7. Willison, S. (2025-2026). Multiple posts on cognitive debt and AI tool adoption, including discussions of “hoarding things you know how to do.” simonwillison.net. https://simonwillison.net

  8. Ng, A. (2025-2026). The “X Engineer” concept. AI Fund / deeplearning.ai public commentary.

  9. World Economic Forum. (2026, January). Software developers are the vanguard of how AI is redefining work. weforum.org. https://www.weforum.org/stories/2026/01/software-developers-ai-work/

  10. Altchek, A. (2026, March 5). The new playbook for junior engineers in the AI era. Business Insider. https://www.businessinsider.com/entry-level-engineer-changes-career-development-ai-2026-2

  11. OpenAI. (2026, March 5). Introducing GPT-5.4. openai.com. https://openai.com/index/introducing-gpt-5-4/

  12. antifound.com. (2026, March 15). Codegen is not productivity. https://www.antifound.com/posts/codegen-is-not-productivity/

  13. Krouse, S. (2026, March 21). Reports of code’s death are greatly exaggerated. stevekrouse.com. https://stevekrouse.com/precision

  14. jry.io. (2026, March 22). You Are Not Your Job. https://jry.io/writing/you-are-not-your-job/

  15. Kim, E. (2026, March 10). Amazon orders 90-day reset after code mishaps cause millions of lost orders. Business Insider. https://www.businessinsider.com/amazon-tightens-code-controls-after-outages-including-one-ai-2026-3

  16. Zechner, M. (2026, March 25). Thoughts on slowing the fuck down. mariozechner.at. https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/

