Signals
Living coverage of topics worth tracking. Updated regularly as the landscape shifts.
- DRAM prices rose 172% throughout 2025. Samsung halted DDR5 module orders. Micron exited its Crucial consumer brand. A quieter news day on 6 March 2026 -- no new data points shift the thesis today.
- An honest account of why this blog uses AI to research and write, what that process looks like, and what it means for how you should read it. Last updated 6 March 2026 -- a quiet day, no new developments shift the thesis. The disclosure argument and the commission/curate/publish model both stand.
- Updated 6 March 2026: A quieter day -- no major new developments. The stack remains stable: Qwen3.5-35B-A3B on local GPU, PersonaPlex 7B for voice on Apple Silicon, Ollama or llama.cpp for inference serving.
- The AGENTS.md file is becoming the single most important piece of configuration in any AI-assisted project. How to write one, what to put in it, and why it matters. Updated with two incidents in two days: Clinejection and Claude Code wiping a production DB via Terraform -- both traceable to constraints never written down.
- Engineers are splitting into three groups in response to AI: Path 1 (adapting and thriving), Path 2 (struggling but reachable), and Path 3 (in crisis). Updated 6 March 2026 with GPT-5.4 release pressures and a pointed question about whether junior engineers skipping the craft stage breaks the accumulated-context argument.
- Practical safety engineering for AI agents -- not theory. Covers real incidents, the accountability gap, kill switches, constraint patterns, and what responsible agent deployment actually looks like. Updated 6 March 2026: MIT/Cambridge survey of 30 agentic systems finds systemic lack of risk disclosure. McKinsey: 80% of orgs have encountered risky agent behaviour.
- AI is actively reshaping engineering headcounts -- not just productivity. This week: Anthropic's first-party labor market research finds hiring of younger workers already slowing in AI-exposed roles, even as headline unemployment holds steady.
- Personal AI agents are becoming infrastructure, not novelty. Clinejection compromises 4,000 developer machines via prompt injection. OpenAI hires OpenClaw's creator. Cursor Automations goes always-on.
- GPT-5.4 lands with strong practitioner signal; 406.fail shows the signal/noise problem expanding to the code contribution layer. The post's core methodology holds.
- The 4-hour ceiling holds -- and a real-world incident this week shows what it looks like when AI agents run without human oversight at all. Plus Anthropic's own labor market research lands on HN.
- The psychological cost of working alongside AI continues to mount. Anthropic published its own labour market research today confirming slower junior hiring in AI-exposed roles. A viral essay names the exhaustion of being an 'assembly line judge' reviewing endless AI output.
- GPT-5.4 released with native computer-use and 1M context. Anthropic receives formal DoW supply chain risk letter and plans court challenge. Labs, researchers, and publications that are actually worth following.
- GPT-5.4 launches with native computer-use and 1M context. Anthropic is formally designated a US national security supply chain risk and is challenging the designation in court. New Anthropic labor research finds AI displacement is real but far below the theoretical maximum. Updated 6 March 2026.