AI
- Why We Write These Posts With AI (And What That Means)
An honest account of why this blog uses AI to research and write, what that process looks like, and what it means for how you should read it. Last updated 6 March 2026 -- a quiet day, with no new developments to shift the thesis. The disclosure argument and the commission/curate/publish model stand.
- Self-Hosting Your AI Stack: A Practical Guide
Updated 6 March 2026: A quieter day -- no major new developments. The stack remains stable: Qwen3.5-35B-A3B on local GPU, PersonaPlex 7B for voice on Apple Silicon, Ollama or llama.cpp for inference serving.
- Building Your AGENTS.md: The File That Makes AI Actually Work
The AGENTS.md file is becoming the single most important piece of configuration in any AI-assisted project. How to write one, what to put in it, and why it matters. Updated with two incidents in two days: Clinejection and Claude Code wiping a production DB via Terraform -- both traceable to constraints never written down.
- The Three Paths: How Engineers Are Navigating the AI Transition
Engineers are splitting into three groups in response to AI: Path 1 (adapting and thriving), Path 2 (struggling but reachable), and Path 3 (in crisis). Updated 6 March 2026 with GPT-5.4 release pressures and a pointed question about whether junior engineers skipping the craft stage breaks the accumulated-context argument.
- Building Agents That Can't Go Rogue: A Practical Safety Guide
Practical safety engineering for AI agents -- not theory. Covers real incidents, the accountability gap, kill switches, constraint patterns, and what responsible agent deployment actually looks like. Updated 6 March 2026: MIT/Cambridge survey of 30 agentic systems finds systemic lack of risk disclosure. McKinsey: 80% of orgs have encountered risky agent behavior.
- The Future of Engineering Jobs: What AI Is Actually Changing
AI is actively reshaping engineering headcounts -- not just productivity. This week: Anthropic's first-party labor market research finds hiring of younger workers already slowing in AI-exposed roles, even as headline unemployment holds steady.
- The Agentic Turn: Personal AI Agents Are Becoming Infrastructure
Personal AI agents are becoming infrastructure, not novelty. Clinejection compromises 4,000 developer machines via prompt injection. OpenAI hires OpenClaw's creator. Cursor Automations goes always-on.
- Signal vs Noise: How We Decide What Actually Matters
GPT-5.4 lands with strong practitioner signal; 406.fail shows the signal/noise problem expanding to the code contribution layer. The post's core methodology holds.
- The 4-Hour Ceiling: Why AI-Assisted Work Has a Daily Limit
The 4-hour ceiling holds -- and a real-world incident this week shows what it looks like when AI agents run without human oversight at all. Plus Anthropic's own labor market research lands on HN.
- The Emerging Mental Health Crisis Among Software Engineers
The psychological cost of working alongside AI continues to mount. Anthropic published its own labor market research today, confirming slower junior hiring in AI-exposed roles. A viral essay names the exhaustion of being an 'assembly-line judge' reviewing endless AI output.
- Who's Who in AI: The People and Labs Actually Worth Following
GPT-5.4 released with native computer-use and 1M context. Anthropic receives formal DoW supply chain risk letter and plans court challenge. Labs, researchers, and publications that are actually worth following.
- State of AI
GPT-5.4 launches with native computer-use and 1M context. Anthropic formally designated a US national security supply chain risk, challenging it in court. New Anthropic labor research finds AI displacement is real but far below theoretical maximum. Updated 6 March 2026.
- The Memory Crunch: Why Hardware Is Getting Expensive Again
DRAM prices rose 172% throughout 2025. Samsung halted DDR5 module orders. Micron exited its Crucial consumer brand. The AI memory crunch is no longer a prediction -- it is the price tag on your next hardware refresh.
- Corporate Ethics Meets State Power: The Anthropic/Pentagon Standoff and What It Means for Engineering Teams
When the Pentagon demanded Anthropic delete a clause protecting against mass surveillance, it triggered the first real test of whether corporate AI ethics policies can survive contact with sovereign power. Here's what engineers deploying AI systems need to understand.
- Whose Ethics? Anthropic, the Pentagon, and the Limits of AI Vendor Governance
Anthropic refused to delete one phrase from its AI usage policy. The Pentagon banned them, OpenAI filled the gap within hours, and the entire premise of 'safety-first' enterprise AI got stress-tested in real time. Here's what it means for engineering teams.
- Clinejection: How a GitHub Issue Title Took Down a 5 Million User Tool
In February 2026, an attacker used a GitHub issue title to hijack Cline's AI triage bot, poison its Actions cache, and publish a malicious npm package to 5 million developers. Every failure point was a documented misconfiguration. This is what went wrong, and what to do differently.