Commissioned, Curated and Published by Russ. Researched and written with AI.
This is the living version of this post. View versioned snapshots in the changelog below.
What’s New This Week
5 March 2026
The Alibaba Qwen story is a three-paths story at the organisational level. Junyang Lin – known as Justin – the tech lead who built Qwen, stepped down. Two other key colleagues departed in the same window. VentureBeat described it as “a deepening rift between the researchers who built the models and a corporate hierarchy now pivoting toward aggressive monetization.” These are Path 1 engineers by any definition – people who built something genuinely significant, with deep domain expertise and architectural judgment – choosing departure over compromise. When the engineers who built the models leave two days after a major release, that is not a career decision. It is a statement.
The Anthropic/Pentagon standoff offers a parallel. Dario Amodei sent an internal memo to staff calling OpenAI’s messaging “straight up lies.” The detail that matters for the three-paths framework: the DoD offered to accept Anthropic’s terms if they deleted one specific phrase about surveillance. Anthropic declined. This is what Path 1 looks like at the lab level – a deep understanding of the implications, willingness to walk away from significant money, clarity about which lines not to cross. It sits in direct contrast to Path 3 at the organisational level, which is what accepting the original terms would have represented: compliance under pressure, regardless of the implications.
A World Economic Forum piece positions software developers as “the vanguard of how AI is redefining work.” The data point that lands hardest: one engineer reported leveling up from junior to near-senior JavaScript skills in two months through AI-assisted learning – a process that would have taken an employer months to formalise. A third of developers now rank GenAI as their top learning priority.
This is a Path 1 data point with a Path 3 implication. If AI is compressing the junior-to-senior progression from years to months, senior engineers whose value partly derives from being further along that curve face a structural shift. The accumulated context argument still holds – deep domain knowledge and calibrated skepticism remain hard to shortcut – but the pace of compression raises the stakes for anyone whose identity is tied to having earned their seniority the long way.
Also trending today: a piece arguing “intelligence is a commodity; context is the real AI moat.” That framing maps directly onto this framework. Path 1 engineers are not winning on raw capability – they are winning on context. The ability to evaluate output rather than just generate it retains leverage precisely because context cannot be downloaded.
Changelog
| Date | Summary |
|---|---|
| 5 Mar 2026 | Alibaba Qwen exodus, Anthropic/Pentagon standoff, WEF compression data, and “context is the moat” framing |
| 4 Mar 2026 | Added Apple MacBook Neo (on-device path), Meta/LeCun “Beyond Language Modeling”, UniG2U-Bench unified vs specialised findings |
| 2 Mar 2026 | Initial publication |
There is a conversation happening in engineering right now, and most of it is being conducted in bad faith.
Not intentionally. Nobody is lying, exactly. But the people speaking loudest are, by definition, the ones who found a way through. They are posting about their 10x productivity gains, their AI-assisted codebases, their rediscovered joy in shipping. They are not wrong. But they are not representative. And the gap between what gets said in public and what gets said in private – in Slack DMs, in one-on-ones, in the silence of engineers staring at a Copilot suggestion they do not trust – is becoming a leadership problem.
This post is an attempt to name that gap. To give it structure. To offer something more useful than either the triumphalism of the converts or the reflexive dismissal of the skeptics.
The framework is simple: there are three paths engineers are currently on. They are not determined primarily by seniority, though seniority correlates. They are determined by something harder to see: identity architecture and accumulated context. And the reason it matters now, in early 2026, is that something shifted in December 2025 that made the pressure on all three paths significantly higher.
The December Inflection
In late 2025, Andrej Karpathy made an observation that cut through the usual noise: coding agents basically did not work before December and basically work since. That is a remarkable thing to say. It suggests not a gradual improvement curve but a step change – a before and after.
If true, it reframes the conversation. Engineers who tried AI coding tools in 2023 or 2024 and found them underwhelming were not wrong. The tools were not good enough. Their skepticism was rational. But the tools that exist now are materially different. And that creates an uncomfortable situation for anyone who formed their opinion of AI coding assistance in the earlier era and has not updated it since.
It also creates pressure. Block laid off 4,000 people – roughly 50% of its workforce – in early 2026, with Jack Dorsey making explicit that AI was central to the decision. Gergely Orosz’s six predictions for 2026 included meaningful reductions in engineering hiring, with AI productivity gains used to justify headcount decisions rather than accelerate output. Andrew Ng has been circulating the concept of the “X Engineer” – the individual who, augmented by AI, can do the work that previously required a team.
Whether or not you believe these projections will fully materialise, engineers are reading them. They are sitting in all-hands meetings where leadership presents AI as a force multiplier. They are being told, implicitly or explicitly, that the organisation expects them to adapt. And they are doing so from very different starting positions.
Path 1: Adaptation and Thriving
The engineers on Path 1 are the ones you hear from. They are posting. They are guesting on podcasts. They are writing blog posts with titles like “I shipped in three days what used to take three weeks.”
They exist. Their experiences are real. And there is a pattern to who they are.
Mitchell Hashimoto – creator of Vagrant and Terraform, founder of HashiCorp – spoke on the Pragmatic Engineer podcast about his experience using AI coding tools heavily while being deliberate about preserving craft. He was not abandoning judgment. He was redirecting it. The question shifted from “how do I implement this?” to “what should I implement, and how do I verify the AI got it right?”
Simon Willison, creator of Datasette and one of the more thoughtful public voices on AI, has written extensively about the skills that compound in an AI-augmented workflow. His concept of “hoarding things you know how to do” is underappreciated: the engineers who thrive are not the ones who outsource the most, but the ones who retain enough depth to know when the AI is wrong, when the generated code is subtly broken, when the abstraction leaks in a way that will cause problems six months from now.
Karpathy himself represents this archetype. Deep expertise in machine learning, deep expertise in systems, enough context to direct AI tools with precision and evaluate their output with confidence.
The pattern is this: Path 1 engineers have two things that compound together. They have deep domain knowledge in at least one area – not breadth, depth – and they have what might be called calibrated skepticism about AI output. They do not fully trust the tools and they know why. That combination is what makes them effective rather than just fast.
“Knowing what’s possible” is the new premium skill. It is not knowing how to implement. It is knowing the solution space well enough to recognise a correct answer when the AI produces one, and a broken answer when it does not. That requires having built things the hard way, having debugged things that should not have broken, having read enough error messages to know what they actually mean.
For these engineers, the transition feels liberating because it does not threaten what they value most. They valued judgment. They valued architectural thinking. They valued the ability to hold a system in their head and reason about its failure modes. AI does not replace any of that. If anything, it makes those skills more leveraged.
The danger in focusing only on Path 1 is survivorship bias. Andrej Karpathy and Mitchell Hashimoto are not the median engineer. They are extraordinary practitioners with decades of context. When they say the transition is working for them, we should believe them. We should not assume their experience generalises.
Path 2: Struggle and Potential Conversion
The majority of engineers are on Path 2. They are skeptical but not closed. They tried some AI tools, probably in 2023 or 2024, found them disappointing in ways they could not always articulate, and moved on. They are watching colleagues convert with a mixture of curiosity and doubt. They suspect they might be missing something. They have not yet found the entry point.
Max Woolf’s account of his own conversion is as honest a document as exists in this space. Woolf, a data scientist with a reputation for careful, empirical analysis (his blog is at minimaxir.com), wrote in February 2026 about the specific sequence of events that shifted his view of AI agent coding tools. His story is worth reading in full, but the shape of it matters: he was skeptical, he had good reasons to be skeptical based on earlier experience, and then he encountered the tools in a configuration and context that made them actually work. The conversion was not ideological. It was empirical.
That “configuration and context” detail is important. One of the underappreciated reasons Path 2 engineers have not converted is not that the tools do not work. It is that getting the tools to work requires a non-trivial investment in setup, mental model formation, and initial failure tolerance. The cold-start problem is real. If you pick up a new AI coding tool, try it on a complex existing codebase without knowing how to prompt it, and watch it produce confidently wrong suggestions, you will probably put it down. That is a rational response to a bad experience. It is not evidence the tool cannot be useful.
Path 2 engineers are reachable. But reaching them requires specific things.
First: good tooling configuration. Not the default settings. Not the out-of-the-box experience. Someone needs to have done the work of figuring out what actually helps and sharing that configuration. This is currently not happening systematically in most engineering organisations.
Second: honest peers, not marketing. Path 2 engineers are sophisticated enough to detect enthusiasm that has been inflated by selection effects. They have read the LinkedIn posts. They are not impressed. What actually works is a colleague they trust – one who has no incentive to oversell – saying “look, I was skeptical too, here is the specific thing that changed my mind, here is what it does not do well.” That conversation is worth more than any vendor webinar.
Third: psychological safety to experiment. This is where organisations are failing most comprehensively. Experimenting with AI tools means, initially, being slower. It means trying things that do not work. It means producing code you are not sure about and having to review it more carefully than code you wrote yourself. If engineers are in an environment where velocity is monitored closely and any slowdown is a problem, they will not experiment. They will continue doing what they know works.
Fourth: permission to fail. Related to psychological safety but distinct. Path 2 engineers often have high standards for their own output. They find it uncomfortable to submit code they are uncertain about. Using AI tools generates more uncertainty, not less, especially in the early stages. Leaders need to explicitly create space for the learning curve.
The cognitive debt concept, which Willison has articulated clearly, is a specific accelerant of Path 2 stagnation. Every week that passes without engaging seriously with AI tools is a week of compounding debt. The tools improve. The gap between what is possible and what you are doing widens. The activation energy required to start experimenting increases because there is now more to learn and the feeling of being behind is more acute. Path 2 engineers who are carrying significant cognitive debt are not just struggling with the tools. They are struggling with the psychological weight of knowing they are falling behind without knowing exactly how far.
Path 3: Crisis and Potential Attrition
Path 3 is the one we do not hear about. The survivorship bias cuts both ways: the people thriving are posting; the people in genuine distress are not.
Nolan Lawson’s essay “We Mourn Our Craft” (published February 2026 at nolanlawson.com) is one of the few honest public articulations of what Path 3 feels like. Lawson does not argue that AI tools are bad or that engineers should refuse to use them. He argues for something more subtle: that there is a genuine loss happening, that the craft of writing code – the deep engagement with a problem, the satisfaction of a well-constructed solution, the relationship between a practitioner and their medium – is being disrupted in ways that deserve acknowledgment rather than dismissal.
The engineers on Path 3 are not necessarily the least skilled. Often they are among the most skilled. They are the ones who cared most deeply about code quality, who spent years developing taste, who built their professional identity on the craft of writing software well. They are the engineers who still remember why they got into this, who feel something when they read an elegant solution, who are genuinely proud of codebases they have stewarded.
For these engineers, the AI transition is not liberating. It is destabilising. And the destabilisation operates on multiple levels simultaneously.
At the identity level: if the thing I am good at – writing careful, high-quality code – is something a machine can now do adequately, what am I? This is not an abstract philosophical question. It is a concrete challenge to the story someone has told themselves about why they matter, why their years of experience have value, what they bring to a team.
At the economic level: the Block layoffs landed hard on Path 3 engineers. When a company explicitly attributes a 50% headcount reduction to AI productivity, and you are an engineer who is not yet AI-augmented, the message is legible. You are in the group being replaced. Whether or not that is actually happening at your company right now, the signal is unmistakable.
At the emotional level: there is what can only be described as a gaslighting dynamic operating in many engineering organisations right now. The all-hands meeting where leadership presents AI as a superpower, where the framing is entirely positive, where anyone who expresses doubt is implicitly positioned as resistant to change – this is not a safe environment for Path 3 engineers. They are being told to feel excited about a transition that is causing them genuine distress. The gap between the official narrative and their lived experience is not something they can easily bridge. And they are not, in most cases, going to raise their hand and say so.
This is a retention and psychological safety problem with real consequences. Path 3 engineers are often the ones who know the most. They have the longest tenure, the deepest context, the institutional memory that is nearly impossible to reconstruct once it walks out the door. Losing them is expensive in ways that do not show up immediately. The cost appears twelve or eighteen months later, when a new system breaks in a way that would have been obvious to someone who remembered why a particular architectural decision was made in 2019.
Identity Architecture and Accumulated Context
The three paths are not primarily about seniority. A junior engineer can be on Path 1 if they entered the field in the last couple of years with no prior identity investment in a specific way of working. A very senior engineer can be on Path 3 if their identity is concentrated in the craft of code production rather than in broader judgment and problem-solving.
The key variable is what Willison’s cognitive debt concept points at from a different angle: what is the engineer’s accumulated context, and how does it relate to AI-augmented work?
An engineer whose context is primarily about knowing how to implement specific patterns in specific languages is more exposed. An engineer whose context is about understanding domains, systems, tradeoffs, and failure modes is less exposed – their knowledge is harder to replace and is, in fact, exactly what is needed to direct and evaluate AI output.
The second variable is where identity is located. Engineers who have built their sense of professional self on the act of writing code – the feel of it, the craft of it, the quality of it – are more vulnerable to disruption than engineers who have built it on outcomes: systems that work, products that ship, problems that get solved.
Neither of these is a character flaw. Caring deeply about craft is not a liability in normal circumstances. It is, in fact, exactly what makes someone an excellent engineer over a long career. The disruption is not a judgment on the people who are struggling. It is a structural shift that happens to put pressure on some very good engineers in ways that are not their fault.
Practical Diagnostic: Where Is Your Team?
Leadership cannot help engineers they cannot see. The first problem is that Path 3 engineers are not volunteering the information.
Some signals to look for:
In code review patterns. Path 3 engineers may be reviewing AI-generated code from colleagues with unusual intensity, finding more issues than before, or becoming increasingly frustrated with what they perceive as declining quality standards. They are not wrong that some quality has declined. But the frustration may be out of proportion to the actual quality delta.
In meeting participation. Path 2 and Path 3 engineers may go quiet in discussions about AI tooling adoption. Not hostile – quiet. They have things to say but do not feel safe saying them.
In one-on-ones. If you are not explicitly opening the conversation about AI and making it safe to express ambivalence, you are probably not hearing about it. A direct “how are you finding the AI tooling, honestly?” with visible tolerance for negative answers will surface more than any survey.
In attrition signals. Path 3 engineers who are exploring leaving may not say why. The stated reason might be compensation or growth opportunity. The actual reason might be that they do not recognise the job anymore and do not see a path back to work they find meaningful.
In productivity patterns. Counter-intuitively, Path 3 engineers may appear highly productive in the short term. They know how to work without AI tools. They are getting things done. The risk is not immediate underperformance – it is long-term disengagement and eventual departure.
What Leaders Should Do Differently
For Path 1 engineers: get out of the way, mostly. Give them room to experiment, share what they learn, and – critically – encourage them to translate their experience for colleagues rather than inadvertently making Path 2 and Path 3 engineers feel further behind. Path 1 engineers can be the most valuable agents of change in an organisation, or they can be inadvertent sources of demoralisation, depending on how they communicate.
For Path 2 engineers: invest in the specific infrastructure for conversion. This means: curated tooling configurations rather than “go figure it out yourself”; structured peer sharing where engineers who have found specific workflows explain exactly what works and what does not; explicit time allocation for experimentation without productivity expectations; and patience with the cold-start learning curve. The goal is lowering activation energy. Most Path 2 engineers will get there if the environment makes it feasible.
For Path 3 engineers: the first intervention is acknowledgment. Not agreement with every concern, but genuine recognition that what they are experiencing is real, that the loss they are feeling is not imaginary, that caring deeply about craft is a value rather than a liability. This requires leaders to resist the temptation to re-explain how AI is actually a superpower and instead sit with the discomfort of someone telling them a more complicated story.
Beyond acknowledgment: help Path 3 engineers find adjacent identity ground. The engineers who care most about code quality often have excellent judgment about what good looks like. That judgment is, as argued above, exactly what is most valuable in an AI-augmented workflow. The path from “I care about craft” to “I am the person who ensures our AI-assisted output meets the standard we actually care about” is shorter than it might seem. But someone needs to help them see it.
Economic transparency matters here too. Path 3 engineers are reading the same news about layoffs and AI attribution that everyone else is. Vague reassurances that their jobs are safe will not land if the organisational signals point elsewhere. Honest conversations about how the organisation is thinking about headcount, about what the actual expectations are, about what the transition timeline looks like – these are more valuable than optimistic all-hands framing.
The Survivorship Bias Problem
The public conversation about AI and engineering is almost entirely conducted by Path 1 voices. This is not a conspiracy. It is a structural feature of who speaks publicly. People who are thriving have stories to tell and confidence to tell them. People in crisis do not typically post about it.
This creates a distorted picture at every level. Individual engineers on Path 2 or Path 3 look at their feed and see only converts. They conclude, incorrectly, that they are alone in their ambivalence or distress. Organisations look at industry discourse and see predominantly positive signal. Leaders assume the transition is going better than it is.
Nolan Lawson’s essay was significant precisely because it broke from this pattern. It was someone saying clearly: I understand what is happening technically, I am not refusing to engage, and I am also experiencing a loss that deserves to be named. The response to that essay – the number of engineers who said, quietly, that they recognised themselves in it – suggests there are far more people on Path 2 and Path 3 than the public conversation implies.
Whether the Paths Can Change
They can. Path 2 is, by design, transitional. The conditions for conversion exist; the question is whether the environment creates them.
Path 3 is harder, but not fixed. The shift from identity concentrated in craft execution to identity located in craft judgment is a real shift and not a trivial one. Some engineers will make it. Some will not, and will leave the field, or leave AI-heavy organisations for contexts where the transition is slower. That is not necessarily a failure. It is a mismatch between an individual’s professional values and a specific organisational context.
What is a failure is when engineers who could have made the transition do not, because the organisation did not create the conditions for it. That failure is expensive, quiet, and slow to surface.
Closing
The December 2025 inflection is real. The tools are genuinely better than they were. The pressure is real. The economic signals are legible.
But the engineers who are struggling are not failing to see the obvious. They are navigating a genuine disruption to their professional identity, economic security, and sense of craft – often without acknowledgment, often in environments that are actively telling them they should feel great about all of this.
The three paths framework is not a prediction. It is a diagnostic. Where your team members are on these paths right now determines what kind of leadership they need. Getting that wrong is expensive in ways that compound over time.
The conversation that needs to happen – honest, specific, tolerant of ambivalence – is not the one that is currently happening in most engineering organisations. It should be.
Sources
Karpathy, A. (2025). Commentary on coding agent progress, December 2025. Twitter/X. Referenced in multiple engineering publications, December 2025 – January 2026.
Woolf, M. (2026, February). AI Agent Coding: A Convert’s Account. minimaxir.com. https://minimaxir.com/2026/02/ai-agent-coding/
Lawson, N. (2026, February 7). We Mourn Our Craft. nolanlawson.com. https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
Hashimoto, M. (2025). Interview on AI-augmented engineering workflows. The Pragmatic Engineer Podcast. Referenced approximately Q4 2025.
Orosz, G. (2026). Six Predictions for Software Engineering in 2026. The Pragmatic Engineer Newsletter. https://newsletter.pragmaticengineer.com
Dorsey, J. (2026, January). Public commentary on Block layoff decisions and AI attribution. Twitter/X. January 2026.
Willison, S. (2025-2026). Multiple posts on cognitive debt and AI tool adoption, including discussions of “hoarding things you know how to do.” simonwillison.net. https://simonwillison.net
Ng, A. (2025-2026). The “X Engineer” concept. AI Fund / deeplearning.ai public commentary.
World Economic Forum. (2026, January). Software developers are the vanguard of how AI is redefining work. weforum.org. https://www.weforum.org/stories/2026/01/software-developers-ai-work/