Commissioned, Curated and Published by Russ. Researched and written with AI.
This is the living version of this post. View versioned snapshots in the changelog below.
What’s New This Week
6 March 2026 (15:00 UTC): Two signals today, one directly material to the post’s central argument.
Anthropic published new research: “Labor market impacts of AI: A new measure and early evidence.” The headline finding is reassuring on the surface: no systematic increase in unemployment for highly exposed workers since late 2022. But the finding that matters more for this post’s argument is easier to miss: suggestive evidence that hiring of younger workers has slowed in exposed occupations. This is exactly the junior pipeline dynamic the post flags – AI is not yet showing up in headline unemployment, but it is already showing up at the front door. Fewer young people are getting the entry-level jobs that would eventually make them senior engineers.
The research introduces a useful new framing: “observed exposure” – combining theoretical LLM capability with actual usage data, and weighting automated uses over augmentative ones. The key finding is that actual AI coverage remains a fraction of what is theoretically feasible. The occupations with the highest observed exposure are also the ones the BLS already projects to grow less through 2034 – not a speculative extrapolation, but a current projection informed by what is already observable in usage data.
Also today: the US economy shed 92,000 jobs in February, pushing unemployment to 4.4% – the biggest monthly loss since October. The causes are mixed: federal government employment has been falling sharply (down 330,000 since October 2024), and healthcare was hit by strikes. This is not a clean AI story. But the broader backdrop – a weakening labor market, 2025 being the weakest year for jobs since the pandemic – makes structural pressure on engineering roles harder to absorb. Displacement is easier to manage when the rest of the job market is growing. Right now it is not.
5 March 2026 (update 2, 15:00 UTC): Three more signals from today.
HN’s top story in the reliability bucket: “The L in LLM Stands for Lying” (acko.net, 437 points). The post makes the case that language models are not merely imprecise – they are trained to produce plausible output rather than accurate output, and those are different things. The engineering relevance is direct: when your AI assistant generates a test suite, a PR review, or an architectural proposal, the goal of the system is to sound right, not to be right. The engineers who understand this at a gut level – and verify accordingly – are the ones who will not get burned by it. This is the evaluation skill gap made concrete.
Also on HN today: “Relicensing with AI-Assisted Rewrite” (246 points). A developer used AI to fully rewrite a codebase in order to relicense it without carrying forward the original license obligations. This is a genuinely new category of AI-assisted engineering work that would have been impractical to do manually. It also raises a question the post has touched on before: if AI can rewrite a codebase, what does that mean for the assumption that existing code represents durable, hard-to-replace human investment? The answer is probably that code as an artifact is becoming less defensible and the judgment baked into the system design – what it does and why – is becoming more so.
Morgan Stanley published analysis this week reinforcing what several other sources have pointed to: developers are shifting toward curation, review, integration, and problem-solving roles as AI coding assistants handle more implementation. Their headline: “the software developer workforce should expand significantly.” Worth noting they are talking about the total workforce expanding, not about any individual engineer’s job being safe. The expansion scenario and the displacement scenario are not mutually exclusive.
5 March 2026 (initial update, 12:30 UTC): Two stories worth tracking today.
The Alibaba Qwen leadership departure is relevant to the engineering jobs story in a way that is easy to miss. Junyang Lin and two senior colleagues stepped down as Qwen tech lead, reportedly over a rift between the research team and corporate leadership pivoting toward monetization. Alibaba has formed a task force to continue the work. This is a reminder that the most consequential engineering decisions in AI are still made by small groups of senior researchers – and that those researchers have leverage, preferences, and limits. The “replace engineers with AI” narrative runs into an inconvenient fact: the people building the AI have not been replaced. When they leave over values conflicts, the AI doesn’t carry on seamlessly. The task force exists because it doesn’t.
AMD’s Ryzen AI 400 for AM5 desktop landed at MWC 2026. Up to 50 TOPS from an integrated NPU on a standard consumer desktop platform – the world’s first desktop Copilot+ chips. For engineers thinking about what hardware access means for skill development, this is another data point: the tooling and hardware for serious local AI work is now available at mainstream price points. The expectation that engineers have hands-on experience with AI-native development on local hardware is no longer unreasonable.
Changelog
| Date | Summary |
|---|---|
| 6 Mar 2026 | Anthropic publishes labor market research: hiring of younger workers slowing in AI-exposed roles. |
| 5 Mar 2026 | Alibaba Qwen researcher departure. |
| 4 Mar 2026 | BeyondSWE benchmark: frontier agents below 45% on complex tasks. |
| 3 Mar 2026 | Codex 1.6M devs; sub-500ms voice agents; expert verification gap; Ars reporter case. |
| 2 Mar 2026 | Initial publication. |
The Numbers Are Not Ambiguous Anymore
For most of 2023 and 2024, the conversation about AI and engineering jobs was largely hypothetical. Productivity gains were real but diffuse. The argument was about trajectories, not outcomes. That has changed.
In January 2026, the US Bureau of Labor Statistics recorded the fastest pace of job cuts since the Great Recession [1]. That context matters. We are not talking about isolated restructuring inside a handful of tech companies. We are talking about a broad, accelerating contraction in certain categories of knowledge work, and software engineering is not exempt.
OpenAI reported in March 2026 that Codex has 1.6 million weekly active developers – a figure that tripled since January 2026 – with OpenAI’s overall platform serving 900 million weekly active users and 9 million paying businesses [1a]. These are not projection slides. They are reported actuals, and they indicate the pace of adoption has moved well past early adopter territory into mainstream enterprise deployment.
The clearest public data point on headcount impact came from Block. In February 2026, Jack Dorsey announced layoffs affecting approximately 4,000 employees, roughly 50% of the company’s workforce. The internal letter attributed the reduction explicitly to AI enabling smaller teams to do more [2]. This was not the usual corporate softening. It was a direct statement: the work that required this many people no longer does.
eBay followed with 800 positions cut, approximately 6% of its workforce, with similar language about AI-driven productivity [3]. These are not startups making desperate pivots. These are mature, profitable companies making deliberate decisions about headcount based on what AI tooling can now do.
The honest reading of this data is that the theoretical phase is over. Companies are no longer asking whether AI will change how many engineers they need. They are acting on their answer.
What Gergely Orosz Is Watching
Gergely Orosz, who covers engineering culture and economics at The Pragmatic Engineer, published six predictions about AI’s effect on software engineering that are worth taking seriously [4]. Not because every prediction will land exactly, but because the underlying logic is grounded in how engineering organisations actually work.
The first prediction is velocity. AI-assisted development is already producing 5-10x productivity increases in certain task categories. Writing boilerplate, generating tests, translating requirements into initial implementations – these are measurably faster. The implication is not that engineers are being replaced by AI, but that fewer engineers can now do what previously required more.
The second is what Orosz calls the TSA agent problem. TSA agents are responsible for security screening, but the job has evolved to include significant automation. The risk is that humans in the loop stop paying close attention because the system usually works. When it fails, the human is not ready to catch it. AI-assisted engineering has the same failure mode. If engineers stop reading the code AI generates because it usually looks right, they will miss the cases where it is wrong in ways that matter. The people who thrive will be those who stayed sharp enough to catch the failures.
The third prediction is the contraction of junior roles. This is addressed in more detail below, but the short version is that AI is automating the entry-level work first. That is not an accident. It is a direct consequence of AI being best at well-defined, pattern-based tasks – which is most of what junior engineers do by design.
The fourth is that new AI-adjacent roles exist but the transition to them is brutal. The roles are real. The pathway from traditional software engineer to those roles is not smooth, and the speed at which companies are moving means most engineers are doing this transition under pressure, without a safety net.
The fifth prediction is a quality crisis. As codebases fill with AI-generated code that no one fully understands, bugs that would previously have been caught during careful implementation get buried. The bugs are not obvious at PR review when the reviewer is also using AI. They emerge later, in production, often in edge cases. This is a structural risk that engineering leadership should be thinking about now.
The sixth is an accountability gap. When something goes wrong with AI-generated code, the question of who is responsible becomes genuinely complicated. The engineer who accepted the AI’s suggestion? The team lead who approved the PR? The company that chose to adopt that workflow? This is not a philosophical question. It has practical implications for how incidents are handled, how post-mortems are run, and how liability is assigned.
The X Engineer
Andrew Ng introduced the concept of the X Engineer in The Batch, his AI newsletter [5]. The framing is simple but structurally important. An X Engineer is someone with deep domain expertise in field X – biology, finance, law, manufacturing, whatever – combined with the ability to direct AI systems to do the implementation work. The combination is more valuable than either element alone.
This is already happening. Biologists who can direct AI to write analysis pipelines. Financial analysts who can build their own tooling. Lawyers who can automate contract review workflows. These people are not software engineers in the traditional sense. They do not write much code themselves. But they are replacing teams of engineers who previously did the work they are now directing AI to do.
The implication for traditional software engineers is uncomfortable but worth facing directly. Domain expertise is now a competitive differentiator in a way it was not before. The engineer who understands the business deeply – who knows why the system exists, what it actually needs to do, where the edge cases live in the real world rather than in the specification – is significantly harder to replace than the engineer who is good at implementing well-specified requirements.
The Roles That Are Shrinking
The clearest category is junior and mid-level coding roles focused on implementation. Writing CRUD endpoints. Generating reports. Building the third version of a dashboard that looks like the previous two. These are not glamorous tasks, and AI can do a reasonable first pass at most of them.
QA automation is under particular pressure. Automated test generation is one of the things AI is genuinely good at. The role of the engineer who writes Selenium tests or maintains an automation suite is harder to justify when AI can generate those tests from a description of the feature.
Boilerplate-heavy backend work is contracting. Scaffolding services, wiring up integrations, writing adapters between systems – these tasks have always been high-volume and low-creativity. They are exactly the kind of work that AI handles well.
Voice-based customer service roles – and the engineering that supports them – are under new pressure. The open source Shuo project demonstrated sub-500ms end-to-end voice agent latency in early 2026 [10a]. The key framing from the author: voice interaction is a turn-taking problem, not a transcription problem. Once that reframe is made, the technical barriers to replacing human voice agents look much lower than they did eighteen months ago. The cost of building a voice agent that sounds and responds like a human has dropped to near-zero for capable engineering teams.
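The turn-taking framing can be made concrete with a minimal sketch. Everything below is illustrative – the class, thresholds, and frame sizes are invented for the example and are not from the Shuo codebase – but it shows why the problem is about deciding when a turn has ended, not about transcribing faster:

```python
class TurnDetector:
    """Toy end-of-turn detector: a turn ends after a run of quiet frames.
    Thresholds are illustrative placeholders, not tuned values."""

    def __init__(self, silence_ms=300, energy_floor=0.02):
        self.silence_ms = silence_ms      # quiet duration that ends a turn
        self.energy_floor = energy_floor  # below this, a frame counts as silence
        self._silent_for = 0.0            # accumulated silence in ms

    def feed(self, frame_energy, frame_ms=20):
        """Feed one audio frame's energy; return True once the turn has ended."""
        if frame_energy < self.energy_floor:
            self._silent_for += frame_ms
        else:
            self._silent_for = 0.0        # speech resets the silence clock
        return self._silent_for >= self.silence_ms

detector = TurnDetector()
# Ten frames of speech followed by silence: the agent can start responding
# ~300 ms after the user stops talking, without waiting on a full transcript.
frames = [0.5] * 10 + [0.0] * 20
ended_at = next(i for i, e in enumerate(frames) if detector.feed(e))  # frame 24
```

Real systems layer semantic cues on top of acoustic silence, but the decision loop keeps this shape – which is why the latency budget is now an engineering problem rather than a research one.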
Report-writing and basic data analysis are being absorbed. The engineer or analyst who spent significant time pulling data, formatting it, and writing the narrative around it is being replaced by tooling that does this in seconds.
None of this means the work is going away entirely. It means the headcount required to do it is shrinking. Five engineers can now do what fifteen were doing two years ago for a significant proportion of this type of work. That is not a theoretical claim. It is what the data from companies like Block is showing.
The Roles That Are Growing
The picture is not purely subtractive. New roles are emerging, and some existing roles are becoming more valuable. The honest caveat is that these roles are not yet growing fast enough to absorb everyone displaced, and the skills required are different enough that the transition is not automatic.
AI-directed architects are becoming more important. These are senior engineers who spend less time writing code and more time designing systems, evaluating AI outputs, and making the decisions that AI cannot make: what to build, why, what the tradeoffs are, what the system needs to do in five years. The title may not exist yet in most organisations, but the function is clearly becoming more valuable.
Agent operators and monitors are an emerging category. As AI agents run longer, more autonomous workflows – deploying code, managing infrastructure, making decisions across multi-step processes – someone needs to watch them. Not by reading every line of output, but by building the observability, setting the guardrails, and being ready to intervene when things go wrong. This is a new kind of engineering job that did not exist at scale two years ago.
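To make “setting the guardrails” less abstract, here is a minimal sketch of the pattern. The action names and three-way policy (allow, escalate to a human, fail closed) are hypothetical, but the shape – a policy check plus an audit trail in front of every agent action – is the core of the job:

```python
# Guardrail sketch for an autonomous agent loop: every proposed action
# passes a policy check before execution, and every decision is logged
# for a human operator to review. Action names and rules are invented.
ALLOWED_ACTIONS = {"read_file", "run_tests", "open_pr"}
REQUIRES_APPROVAL = {"deploy", "delete_resource"}

def check_action(action: str, audit_log: list) -> str:
    """Return 'allow', 'escalate', or 'block' for a proposed agent action."""
    if action in ALLOWED_ACTIONS:
        audit_log.append(("allow", action))
        return "allow"
    if action in REQUIRES_APPROVAL:
        audit_log.append(("escalate", action))  # pause for human sign-off
        return "escalate"
    audit_log.append(("block", action))         # unknown action: fail closed
    return "block"

log = []
decisions = [check_action(a, log) for a in ["run_tests", "deploy", "rm_rf"]]
```

The fail-closed default on unknown actions is the point: the operator's job is to decide in advance what the agent may do, not to read every line it produces.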
AI security and red-teaming is a growing field. AI systems introduce new attack surfaces – prompt injection, model manipulation, data exfiltration through model APIs, adversarial inputs. The engineers who understand these threat models are in short supply and high demand.
People who can evaluate AI output accurately are worth more than they were. This is the TSA agent problem solved well rather than badly. An engineer who can read AI-generated code, AI-generated test results, or AI-generated analysis and reliably identify what is wrong with it is doing something genuinely difficult. The skill is harder to develop than it sounds because AI output is designed to be plausible.
Prompt and context engineers – whatever you want to call them – are filling a real function even if the title has not stabilised. The work of figuring out how to give AI systems the right context to produce useful output, how to structure workflows around AI capabilities, and how to get consistent results from probabilistic systems is real engineering work. It is just not the kind most engineers were trained to do.
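As a toy illustration of what “structuring context” means in practice – the template and field names below are invented conventions for the example, not a standard – the work is about turning a vague request into explicit, repeatable inputs:

```python
def build_context(task, constraints, examples):
    """Assemble a structured prompt: an explicit task, listed constraints,
    and worked examples tend to produce more consistent output than free
    text. The template is an illustrative convention, not a standard."""
    parts = [f"Task: {task}", "Constraints:"]
    parts += [f"- {c}" for c in constraints]
    parts.append("Examples:")
    parts += [f"  input: {i} -> output: {o}" for i, o in examples]
    return "\n".join(parts)

prompt = build_context(
    "Summarise a stack trace in one sentence",
    ["plain English", "name the failing function"],
    [("KeyError in parse_row", "parse_row failed on a missing key")],
)
```

The value is not the string formatting; it is that the structure makes the AI's inputs reviewable and versionable like any other engineering artifact.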
Where Human Judgment Stays Irreplaceable
One counterpoint to the displacement narrative deserves serious attention. Phoebe Yao argued in a widely-circulated March 2026 thread that approximately 90% of expert work in domains like healthcare, law, and finance cannot be effectively verified by current AI training methods. The argument is structural: reinforcement learning from verifiable rewards (RLVR) – the training approach behind the most capable current AI reasoning systems – requires outcomes that can be checked. A correct legal judgment, a sound medical decision, a defensible financial recommendation: these involve subjective assessment, contextual weighting, and professional accountability that does not reduce to a reward signal.
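The structural point can be shown in a few lines. This is a deliberately simplified sketch: a coding task admits a mechanical verifier, so an RLVR-style reward is a one-liner, while a “sound legal judgment” has no equivalent test suite to run:

```python
def coding_reward(candidate_fn, test_cases):
    """Verifiable reward: run the candidate against known input/output
    pairs. This mechanical check is what RLVR-style training relies on."""
    return float(all(candidate_fn(x) == y for x, y in test_cases))

# A model-proposed solution to "square a number" is trivially checkable:
reward = coding_reward(lambda x: x * x, [(2, 4), (3, 9), (-1, 1)])  # 1.0

# A defensible legal or clinical recommendation, by contrast, has no
# test_cases list: there is no function to run that scores it, which is
# the structural gap the argument points at.
```

Where no such reward function exists, the training signal has to come from human judgment – which is exactly the expertise the argument says survives.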
This matters for engineering work adjacent to those domains. An engineer building systems for clinical decision support cannot simply trust AI output without clinical judgment in the loop. An engineer working in regulated financial infrastructure cannot outsource compliance reasoning to a model. The domains where judgment is irreducibly expert are also the domains where the X Engineer model (domain expertise directing AI implementation) works best – and where the domain expertise component is genuinely hard to fake or automate.
The implication is not that these domains are safe from AI. It is that the human roles that survive in them will be the ones grounded in the kind of verified expertise that AI training cannot currently approximate. For engineers choosing where to develop deep domain knowledge, this is a meaningful signal about where the floor is.
AI Creates and Destroys Simultaneously
The Ars Technica case in early 2026 – an AI reporter fired after AI-fabricated quotes appeared in published work – illustrates something that does not fit neatly into either the optimistic or pessimistic framing of AI’s effect on jobs.
The reporter’s role existed partly because AI tooling had made certain kinds of research and writing faster, enabling more coverage with fewer staff. The same tooling created a new failure mode: AI-generated content that looked like reporting but contained invented quotes. The accountability failure was not the AI’s. It was the human in the loop who did not catch it. That human no longer has the job.
This is the pattern that gets missed when the conversation is framed as “AI taking jobs” versus “AI creating jobs.” In many cases AI is doing both simultaneously inside the same role. The productivity gains from AI assistance are real. So are the new failure modes. The organisations and individuals who navigate this well are those who are rigorous about verification precisely because AI output is plausible rather than obviously wrong. Sloppy verification under AI assistance is a career risk in a way that sloppy implementation under manual coding rarely was – because the errors are harder to see and the accountability lands squarely on the human who signed off.
The Skills That Matter More
Knowing what is possible is increasingly foundational. Engineers who understand what current AI systems can and cannot do are better at designing systems around them. This is not about following AI news obsessively. It is about having a calibrated mental model of current capabilities, which requires actually using the tools seriously.
System design and architecture matter more than they did. When implementation is faster and cheaper, the decisions that happen before implementation – what to build, how to structure it, what the interfaces are – carry more weight. A bad architectural decision that would have been expensive to implement before is now fast to implement and fast to make worse.
Domain expertise has become a genuine differentiator. The engineer who deeply understands the business they are building for – the constraints, the edge cases, the reasons why the previous three attempts failed – is much harder for AI to replace than the engineer who is good at translating specifications into code.
Judgment under uncertainty is increasingly the thing that separates valuable engineers from those at risk. AI systems produce confident-sounding output. Knowing when to trust that output and when to be suspicious of it is a skill that requires experience, domain knowledge, and a willingness to slow down and check.
Evaluation and testing – not just writing tests, but designing systems to be testable and being rigorous about what correct behaviour actually looks like – is becoming more important as AI-generated code fills codebases.
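One concrete form that rigour can take is checking properties that any correct implementation must satisfy, rather than only spot-checking examples. The function under test here is invented for illustration; the pattern applies equally to human-written and AI-generated code:

```python
import random

def dedupe_keep_order(items):
    """Illustrative function under test: remove duplicates, preserve order."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_properties(fn, trials=100):
    """Properties any correct dedupe must satisfy, checked on random
    inputs. Specifying behaviour this way forces you to state what
    'correct' means, instead of trusting plausible-looking output."""
    for _ in range(trials):
        xs = [random.randint(0, 5) for _ in range(random.randint(0, 10))]
        ys = fn(xs)
        assert len(ys) == len(set(ys))   # no duplicates remain
        assert set(ys) == set(xs)        # nothing lost, nothing invented
    return True

ok = check_properties(dedupe_keep_order)
```

Property-based checks like this are a cheap defence against the failure mode above: they catch the edge case the reviewer skimmed past because the code looked right.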
Communication and translation between business requirements and AI systems is a skill gap that organisations are struggling with. Engineers who can do this well are genuinely useful in a way that is hard to automate.
The Skills That Matter Less
Memorising syntax. If you have been proud of knowing the exact API signature for a rarely-used library method, that is a skill with diminishing returns. The time spent on this is better spent on other things.
Writing boilerplate from memory. Engineers who are fast at generating the scaffolding of a feature from scratch are now competing with AI that is faster. Speed at boilerplate is no longer a meaningful differentiator.
Certain types of searching. The engineer who was good at finding the right Stack Overflow answer quickly is finding that function replaced by AI. This is a relatively minor skill loss, but it signals something broader: tasks that involve retrieving and applying known patterns are increasingly automated.
The Junior Pipeline Crisis
This is the structural issue that does not get enough attention. The on-ramp to the engineering profession is being automated first.
Junior engineering roles exist, in part, because junior engineers are cheap, and the tasks they do – implementation of well-specified features, maintenance of existing code, writing tests – are exactly the tasks AI is best at. Companies that are already using AI heavily are finding that they need fewer junior engineers to do this work. Some are not hiring juniors at all.
The pipeline problem is that senior engineers develop from junior ones. The tacit knowledge that comes from writing the boring code – debugging it, maintaining it, understanding why it breaks in production – is how engineers develop judgment. If you never write a thousand lines of mediocre code, you do not develop the instinct that tells you why a thousand lines of AI-generated code might be wrong.
Andrej Karpathy noted in late 2025 that we may be reaching an inflection point where the coding skills that matter are no longer the ones that took years to develop through practice [6]. That is probably true for some categories of skill. But the judgment to direct AI systems well, to evaluate their output, to know when to trust and when to verify – that judgment seems to develop through experience with the underlying systems. Whether it can develop without those foundational years of implementation work is genuinely unclear. We are running the experiment in real time.
Engineering leadership needs to think carefully about this. If you are hiring fewer juniors now because AI makes them less necessary, what does your senior pipeline look like in five years? What does mentorship look like when there is less implementation work to mentor juniors through? These are not hypothetical concerns. They are structural questions about how the profession reproduces itself.
The Historical Parallels Are Instructive But Imperfect
The standard reassurance is that automation has always created new jobs to replace the ones it eliminated, so this time will be similar. The evidence for this is real but more complicated than the summary suggests.
CAD software changed architecture. Architects who previously spent significant time on manual drafting no longer do. But the number of architects did not fall proportionally, because cheaper and faster drafting enabled more design iteration, more complex projects, and a broader market for architectural services. The skill mix changed substantially – a generation of architects who were excellent at hand-drafting found their specific skill devalued – but the profession survived and grew.
Spreadsheets changed accounting in a similar way. The manual computation work that occupied large accounting departments was automated. But accounting firms grew, because cheaper analysis meant more analysis happened, and the work expanded into advisory roles that the spreadsheet enabled rather than replaced.
The ATM case is genuinely instructive and often misread. ATMs were introduced widely in the 1970s and 1980s with the expectation that bank teller numbers would fall. They did not. Bank teller employment actually increased for several decades after widespread ATM adoption [7]. The reason is that ATMs made it cheaper to run a bank branch, so banks opened more branches, and each branch still needed tellers. The mix of tasks changed, but the headcount did not fall the way people predicted.
The lesson from these examples is not that automation never destroys jobs. It does. Typographers, telephone operators, and travel agents are categories that did not recover. The pattern that distinguishes the recoveries from the collapses tends to be whether the automation expanded the market for the underlying service (spreadsheets made accounting cheaper so more of it happened) or whether it simply replaced a service that had been artificially labour-intensive (automated telephone directories did not create more demand for directory assistance).
For software engineering, the relevant question is whether AI-assisted engineering expands the market for software. There is a reasonable case that it does. Software is still expensive relative to demand, and there are enormous areas of human activity that would benefit from software that has not been built because the cost was prohibitive. AI making software development cheaper could unlock demand rather than just reduce headcount. Whether that happens at a rate that absorbs displaced engineers is unknown.
The swyx projection – that 25-50% of code is already being written by AI systems, with that proportion growing rapidly [8] – suggests the scale of this shift is large and fast. Cursor’s data showing 30% of pull requests being generated by AI agents [9] is a concrete datapoint in the same direction. The question is not whether this is happening. It is happening. The question is what the equilibrium looks like and how long the transition takes.
The Honest Answer
Some roles are going. This is not a failure of imagination or an overly pessimistic take. It is what the data shows. Companies that have adopted AI tooling seriously are running with smaller engineering teams. The roles most at risk are the ones focused primarily on implementation of well-specified requirements at the junior and mid levels.
Some new roles are coming. They are real and they are growing. But they require different skills than many current engineers have, they are not yet growing fast enough to absorb everyone displaced, and the transition is not being managed well in most organisations.
The transition is brutal even when the destination is okay. An engineer who has spent fifteen years developing implementation skills is not going to pivot painlessly to AI-directed architecture or agent monitoring. The skills transfer partially but not completely. The learning curve is steep and the market is not patient.
Managed honestly, most engineers can find a place. The engineers who are paying attention to the shift, developing judgment and domain expertise, staying calibrated on what AI can and cannot do, and positioning themselves as the people who direct and evaluate AI rather than compete with it – those engineers are in a reasonable position.
Managed badly, organisations lose their best people. The engineers with options will leave if the path forward is unclear, if AI adoption is creating chaos without support for the transition, or if they feel their skills are being devalued without acknowledgment. Engineering leadership that does not address this openly will find its best engineers have quietly made other arrangements.
What Engineers Should Actually Do Right Now
Use the tools seriously. Not for show, but to develop a calibrated sense of what they can and cannot do. Engineers who are avoiding AI tools because they feel threatening are not protecting themselves. They are falling behind.
Invest in domain expertise. Whatever field your software serves, understand it more deeply than you currently do. The engineers who understand the business, the users, and the real-world constraints are harder to replace than those who understand only the technical implementation.
Practise evaluation. The skill of reading AI output and assessing it critically – knowing what to check, what failure modes to look for, when the plausible-sounding answer is wrong – is developable through deliberate practice. Make it part of your workflow.
Develop systems thinking. Architecture and design decisions matter more when implementation is cheaper. If you have been heads-down in implementation, it is worth investing time in the higher-level concerns: system design, scalability, observability, failure modes.
Build communication skills. The gap between what a business needs and what AI systems can do is often a communication and translation problem. Engineers who can bridge that gap are valuable in a way that does not compress.
Be honest about where you are. The engineers who are doing fine are often those who are clear-eyed about which of their skills are durable and which are not, and who are actively working on the durable ones. Denial is not a strategy.
For engineering leaders specifically: be honest with your teams about what you are seeing. The engineers who are most at risk are the ones who are not being told the truth about what is changing. Managed well, this transition produces teams that are smaller, faster, and doing higher-value work. Managed badly, it produces chaos, attrition, and quality problems that show up in production six months later.
Sources
[1] Bureau of Labor Statistics. Job Openings and Labor Turnover Survey (JOLTS), January 2026. US Department of Labor, 2026. https://www.bls.gov/jlt/
[1a] OpenAI. Usage statistics reported March 2026. OpenAI blog and press statements, March 2026.
[2] Dorsey, Jack. All Hands Message on Block Restructuring. Block, Inc., February 2026. [Internal letter, widely reported.]
[3] eBay Inc. Q4 2025 Earnings Call and Workforce Announcement. eBay Investor Relations, January 2026. https://investors.ebayinc.com/
[4] Orosz, Gergely. “Six Predictions for Software Engineering in 2026”. The Pragmatic Engineer, December 2025. https://newsletter.pragmaticengineer.com/
[5] Ng, Andrew. “The Rise of the X Engineer”. The Batch, DeepLearning.AI, 2025. https://www.deeplearning.ai/the-batch/
[6] Karpathy, Andrej. Notes on the December Inflection in AI-Assisted Coding. X/Twitter, December 2025. https://x.com/karpathy
[7] Bessen, James. “Toil and Technology”. Finance and Development, IMF, March 2015. https://www.imf.org/external/pubs/ft/fandd/2015/03/bessen.htm – Documents the ATM/bank teller paradox with detailed historical data.
[8] Wang, Shawn (swyx). “25-50% of Code Written by AI: Tracking the Inflection”. Latent Space, 2025. https://www.latent.space/
[9] Cursor. “Agent PR Data: 30% of Pull Requests”. Cursor engineering blog, 2025. https://cursor.com/blog/
[10] Hashimoto, Mitchell and Orosz, Gergely. On craft surviving the AI transition. Various interviews and posts, 2025. [Referenced across multiple Pragmatic Engineer issues and public conversations.]
[10a] Tikhonov, Nick. Shuo: sub-500ms open source voice agent. GitHub, March 2026. https://github.com/NickTikhonov/shuo
[10b] Yao, Phoebe. Thread on expert verification and AI training limits. X/Twitter, March 2026.
[10c] Ars Technica. Staff changes following AI content review. March 2026.
[10d] “The L in LLM Stands for Lying”. acko.net, March 2026. https://acko.net/blog/the-l-in-llm-stands-for-lying/
[10e] “Relicensing with AI-Assisted Rewrite”. tuananh.net, March 2026. https://tuananh.net/2026/03/05/relicensing-with-ai-assisted-rewrite/
[11] Morgan Stanley. “AI in Software Development: Creating Jobs and Redefining Roles”. Morgan Stanley Insights, 2026. https://www.morganstanley.com/insights/articles/ai-software-development-industry-growth
[12] Anthropic. “Labor market impacts of AI: A new measure and early evidence”. Anthropic Research, March 2026. https://www.anthropic.com/research/labor-market-impacts
[13] Sherman, Natalie. “US economy unexpectedly sheds 92,000 jobs in February”. BBC News, March 2026. https://www.bbc.com/news/articles/cjd98091g28o