Commissioned, Curated and Published by Russ. Researched and written with AI.

This is a living document. It will be updated as the approach evolves. Each version is archived as a dated snapshot.


What’s New This Week

5 March 2026 (evening update).

A quieter day – nothing that shifts the thesis. This morning’s update covered the Qwen team fracture and the Anthropic/DoD contract story, both of which remain the most relevant recent developments. The disclosure and commission/curate/publish argument stands as stated.

Changelog

5 Mar 2026 – Qwen team fracture and the supply chain of AI writing; Anthropic/DoD contract language and model deployment terms. Evening update: quiet day, thesis unchanged.
4 Mar 2026 – Knuth publishes on Claude’s reasoning patterns.
3 Mar 2026 – Ars Technica AI fabrication case, directly relevant to what this post is about.
2 Mar 2026 – Initial publication.

Every post on this site carries the same disclaimer at the top:

Commissioned, Curated and Published by Russ. Researched and written with AI.

That’s eleven words of disclaimer for what is, in practice, a fairly nuanced production process. This post is the longer explanation: what those eleven words actually mean, why I work this way, what it changes, and why I think disclosure matters.

If you’ve read other posts here and thought “wait, how exactly does this work?” – this is the answer.


The Disclosure, Unpacked

Let me take the disclaimer apart, piece by piece.

Commissioned means I decide what to write about. I set the brief. I define the angle. Every post starts with a question I’m genuinely thinking about, or a pattern I’ve been watching, or something I’ve read that I want to stress-test properly. The topic isn’t AI-generated. The framing isn’t AI-generated. I come in with a point of view, or at least a direction, and the brief reflects that.

For the agents post, the question I was sitting with was: what does it actually mean to have software that acts on your behalf rather than just responding to you? That’s an SRE’s question. That’s my question. The AI didn’t generate it.

Curated means I review everything. I edit. I reject drafts that don’t meet the standard. I decide what goes live and what doesn’t. This step matters more than people probably assume. AI output is variable. Sometimes it’s excellent. Sometimes it’s competent but flat. Sometimes it’s confidently wrong in ways that would embarrass me if I published them. The curation step is where I catch all of that – and where the actual judgment happens.

In practice, most posts go through multiple drafts. I push back on the structure. I rewrite sections that don’t sound right. I cut the parts that are technically accurate but don’t actually add anything. Curation isn’t a rubber stamp. It’s the work.

Published means my name is on it and I’m accountable for what’s here. If something is wrong, that’s on me. If an argument is poorly made, that’s on me. If a reader tells me I’ve missed something important, I take that seriously – because publishing it was my call, not the AI’s.

Researched and written with AI means exactly that. The research phase – pulling sources, synthesising across multiple papers or posts, building a picture of what the field currently thinks – that’s AI-assisted. The prose generation is AI-assisted. The judgment about what matters, what to emphasise, and what angle to take is human.


Why This Approach

I want to be honest about why I work this way, because there are a few different reasons and they’re worth separating.

Research depth. The straightforward practical reason: an AI can synthesise dozens of sources in an hour. I can’t. That’s just true. The alternative isn’t me doing better research – it’s me doing shallower research, or not writing the post at all because I don’t have twenty hours to spend on it.

If you’ve read the mental health and AI post or the four-hour ceiling post, those posts draw on a fairly wide range of sources. That breadth is only possible because the research step is AI-assisted. A solo author working in their spare time would either cut the scope or cut the rigour. I’d rather not cut either.

Keeping up with the pace. AI and engineering is moving fast enough that a topic can shift meaningfully in the time it takes to write a traditional post. By the time you’ve done the reading, written the draft, revised it three times, and published it, the landscape has moved. The approach here compresses the cycle enough that what goes live is still current when it lands.

This is also why the living document model matters. Posts here don’t just publish once and age. They get updated. That’s only possible because the update cycle is fast enough to be sustainable.

The meta point. Writing about AI using AI isn’t a contradiction. It’s actually the honest version of what most people in the industry are doing right now. The difference is saying so. I’m not going to pretend I’m producing long-form, research-heavy content in my evenings on pure solo effort when I’m not. Most blogs aren’t either, whether they say so or not.

The quality bar. The AI doesn’t publish. I publish. That distinction matters. The AI produces drafts. I decide whether they’re good enough to put my name on. If they’re not, they don’t go live. The AI is a tool, not an author. Authors are accountable. Tools aren’t.


What This Changes

I want to be equally honest about what this approach changes, because pretending it changes nothing would be its own kind of dishonesty.

The writing voice is different. These posts are more structured than I’d be if I were writing purely solo. More comprehensive, more systematically organised. If you’ve read my actual writing – Slack messages, incident reports, internal docs – you’ll notice the difference. The blog posts are more formal. That’s partly the medium, partly the process.

I’ve tried to push back on this where I can. I want the voice to feel like mine, not like a well-organised GPT output. But it’s a real thing worth acknowledging. Solo writing has a specific quality to it – the digressions, the half-formed thoughts that turn out to matter, the way one sentence leads somewhere unexpected. That’s harder to replicate in this process.

The risk of confident wrongness. This is the one I take most seriously. AI systems can be wrong with confidence. They can cite plausible-sounding sources that don’t say what they claim. They can assert technical details that are almost right but not quite. They can miss important nuance while sounding authoritative.

The curation step exists precisely to catch this. I’m not a passive publisher. When something claims a specific number, I check it. When something asserts how a system works, I verify it against what I know from actually working in this space. When something doesn’t pass the sniff test, it comes out.

This is why curation isn’t optional in this model. Without it, this blog would be exactly the kind of low-grade AI slop that’s cluttering up search results right now. With it, the goal is something worth reading.

The question of credit. What exactly is original here? I think that’s a fair question, and it deserves an honest answer rather than a defensive one.

The ideas come from the field. The sources are real. The synthesis is AI-assisted. The framing and the judgment and the angle – those are mine. What’s original is the selection: what I think is worth writing about, what I think the important question is, what I think readers in this specific context actually need to understand.

I’m an SRE at a major live events company. I work at scale. I think about reliability and production systems and the human side of engineering. When I commission a post, that context shapes everything: what I’m asking, what I consider important, what I push back on in drafts. That’s not nothing. But it’s also not the same as writing every sentence yourself.

The skin-in-the-game question. Does publishing AI-assisted content under your name carry the same professional accountability as writing it yourself?

My answer is yes. And I think the curation step is what makes that true. If I’m reviewing everything, correcting errors, rejecting work that doesn’t meet the standard, and putting my name on what goes live – then I’m accountable for it in the same way I’d be accountable for work I’d delegated to a junior engineer and published under my name. The review and sign-off is the accountability. Skipping that step would be the dishonest version.


When the Curation Step Fails

On 3 March 2026, Ars Technica fired Benj Edwards, its senior AI reporter. The circumstances are worth laying out plainly.

Edwards was ill with a fever. He used a Claude Code-based tool to extract and summarise quotes from source documents. AI-paraphrased words ended up published as real direct quotes in a live article. The article was retracted. Edwards lost his job.

The article he was writing was about the MJ Rathbun case – an autonomous AI agent that had published fabricated, defamatory content about an open source maintainer. The meta-irony is uncomfortable: a journalist covering AI fabrication accidentally committed AI fabrication.

I’m not going to pile on Edwards. A serious mistake made while ill and under deadline pressure could happen to anyone, with any tool. But the specifics of what went wrong matter, because they’re exactly what I describe as the central risk of this kind of writing.

The failure wasn’t that AI was used. The failure was that the curation step didn’t happen properly. Someone unwell, probably rushing, used AI output as a shortcut rather than as a draft to be verified. The quotes went in without being checked against the source material. That’s not an AI problem. That’s a process problem.
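To make “checked against the source material” concrete, here’s a minimal sketch of what a mechanical version of that check could look like. This is my illustration, not what Edwards’s tool did or what any newsroom actually runs – every name in it is hypothetical, and the matching is deliberately naive. It flags any string presented as a direct quote in a draft that doesn’t appear verbatim in the source documents:

```python
import re

def normalise(text: str) -> str:
    """Collapse whitespace and unify curly quotes so the comparison
    is about the words, not the typography."""
    text = text.replace("\u201c", '"').replace("\u201d", '"').replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip()

def unverified_quotes(draft: str, sources: list[str]) -> list[str]:
    """Return every string presented as a direct quote in the draft
    that does not appear verbatim in any of the source documents."""
    source_text = " ".join(normalise(s) for s in sources)
    quoted = re.findall(r'"([^"]+)"', normalise(draft))
    return [q for q in quoted if q not in source_text]

if __name__ == "__main__":
    # Hypothetical example: the draft "quote" is a paraphrase of the source.
    sources = ['The maintainer said the agent "published content I never wrote".']
    draft = 'He told reporters, "the agent published things I never wrote."'
    for q in unverified_quotes(draft, sources):
        print(f'UNVERIFIED QUOTE: "{q}"')
```

A check like this wouldn’t catch everything – it knows nothing about context or attribution – but it would have caught the specific failure mode in the Ars case: paraphrased words presented as verbatim quotes. The point isn’t the script; it’s that this part of curation is checkable, which makes skipping it a choice.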

The commission/curate/publish model I describe in this post is only as good as the curation step. Commission something and skip proper review before publishing, and you get exactly what happened at Ars. The AI produces confident-sounding output. You’re tired. It looks fine. It ships. And then it turns out the quotes aren’t quotes at all, they’re paraphrases, and you’ve attributed fabricated words to real people in a published article.

The lesson I take from this isn’t “don’t use AI for writing.” The lesson is: curation is load-bearing. It’s not an optional review pass at the end. It’s the step that determines whether the whole process is legitimate or not. If I’m too tired or too rushed to actually check what I’m publishing, the right answer is to not publish that day – not to let AI output go through unreviewed.

I use AI tools here. Some of them are similar to what Edwards was using. The difference I’m committing to is that the curation step gets done properly, or the piece waits. That’s not a very exciting promise. But it’s the one that matters.

The Ars case is also a reminder that professional consequences are real. Edwards didn’t just publish a correctable error. He lost a job he’d built over years. AI-assisted writing done carelessly isn’t a small thing. The risk is proportional to the seriousness of the claim – and direct quotes attributed to named individuals are about as serious as it gets.


Why Disclose At All

The cynical answer is: because most people don’t, and the gap between what’s disclosed and what’s actually happening is getting embarrassing.

Most of the internet is publishing AI-generated content right now without saying so. Some of it is obvious. Some of it is less obvious. The effect is a general erosion of trust – readers can’t tell what’s real, writers have an incentive to obscure it, and the whole ecosystem gets worse.

I don’t want to contribute to that. Readers deserve to know how the sausage is made. Not necessarily because it should change how they read it, but because they should have the information to calibrate. If you know a post is AI-synthesised and curated, you read it differently than if you assumed it was solo writing. That’s legitimate. The disclosure gives you that.

There’s also something genuinely interesting in the disclosure itself. What does it mean that a thoughtful human working with AI produces better content than either could alone? That’s not a trivial question. It’s the question this entire blog is circling around in the posts about agents and augmentation and what human-AI collaboration actually looks like in practice. This blog is evidence in that conversation, not just commentary on it.

The living document model is worth mentioning here too. Right now there are eight posts on this site that get reviewed and updated regularly. A solo author working traditionally couldn’t sustain that – the update cycle is just too labour-intensive. The AI-assisted model is what makes it possible to keep these posts current rather than letting them age into irrelevance. Disclosure is partly about acknowledging that this model only works the way it does because of how it’s built.


The Bigger Picture

This blog is a case study in the transition it writes about.

The posts here are about how AI is changing engineering: the tools, the workflows, the team dynamics, the mental load, the ceiling effects. The production method for those posts is itself an instance of AI changing how writing works. That’s not a contradiction. I think it’s actually the honest version of doing this kind of writing in 2026.

The mental health post is about how humans navigate AI assistance in high-stakes domains. The four-hour ceiling post is about what happens when AI lifts the floor on individual contribution but leaves other constraints in place. The agents post is about what it means to delegate to systems that act rather than respond. All of those posts are about humans working with AI. So is this blog.

I don’t think that makes it circular or self-referential in a bad way. I think it makes it honest. The people I’m writing for – senior engineers, engineering leaders, people who are thinking seriously about how to navigate this shift – are themselves working through the same questions. What do I hand off? What do I keep? Where does the judgment have to stay human? Where does AI assistance actually help and where does it create new risks?

Those aren’t rhetorical questions for me. They’re real ones I’m thinking about in my actual job. The blog is partly how I think through them publicly.

The goal is straightforward: honest enough to be trusted, rigorous enough to be useful, transparent enough that readers can calibrate appropriately. Whether it’s hitting that bar is something you’re better placed to judge than I am.

If you think something here is wrong, or the approach is more problematic than I’m acknowledging, I’d genuinely like to hear it. The email’s at the bottom of the page.


Commissioned, Curated and Published by Russ. Researched and written with AI.

Questions, corrections, and disagreements welcome: [email protected]
