GPT-5.4 released with native computer-use and 1M context. Anthropic receives formal DoW supply chain risk letter and plans court challenge. Plus: the labs, researchers, and publications actually worth following.
CBP has officially acknowledged it buys location data sourced from the real-time bidding ecosystem -- data that flows directly from ordinary apps through ad SDKs to government analysts. This is a product engineering post about what your app is actually participating in, and what to do about it.
GPT-5.4 launches with native computer-use and 1M context. Anthropic is formally designated a US national security supply chain risk and is challenging the designation in court. New Anthropic labor research finds AI displacement is real but far below the theoretical maximum. Updated 6 March 2026.
DRAM prices rose 172% throughout 2025. Samsung halted DDR5 module orders. Micron exited its Crucial consumer brand. The AI memory crunch is no longer a prediction -- it is the price tag on your next hardware refresh.
When the Pentagon demanded Anthropic delete a clause protecting against mass surveillance, it triggered the first real test of whether corporate AI ethics policies can survive contact with sovereign power. Here's what engineers deploying AI systems need to understand.
Drones hit three AWS facilities in the UAE and Bahrain during the US-Iran conflict. Availability-zone isolation failed. Banking services went down. And Iranian state media told us exactly why they targeted cloud infrastructure. Here's what changes now.
Anthropic refused to delete one phrase from its AI usage policy. The Pentagon banned them, OpenAI filled the gap within hours, and the entire premise of 'safety-first' enterprise AI got stress-tested in real time. Here's what it means for engineering teams.
An honest account of why this blog uses AI to research and write, what that process looks like, and what it means for how you should read it. Last updated 5 March 2026 -- the disclosure argument and the commission/curate/publish model, with a case study from the Ars Technica fabrication incident.
Updated 5 March 2026: NVIDIA PersonaPlex 7B makes fully local full-duplex voice practical on Apple Silicon. AMD Ryzen AI 400 brings NPU to AM5 desktop. A practical guide to running your own AI stack -- inference, agent layer, memory, publishing, monitoring.
In February 2026, an attacker used a GitHub issue title to hijack Cline's AI triage bot, poison its Actions cache, and publish a malicious npm package to 5 million developers. Every failure point was a documented misconfiguration. This is what went wrong, and what you do differently.
The AGENTS.md file is becoming the single most important piece of configuration in any AI-assisted project. How to write one, what to put in it, and why it matters. Updated with Clinejection: the attack that proved what happens when AI agents have no security constraints.
Engineers are splitting into three groups in response to AI: Path 1 (adapting and thriving), Path 2 (struggling but reachable), and Path 3 (in crisis). Updated 5 March 2026 with Alibaba researcher exodus, Anthropic/Pentagon standoff, WEF compression data, and the 'context is the moat' framing.
Practical safety engineering for AI agents -- not theory. Covers real incidents, the accountability gap, kill switches, constraint patterns, and what responsible agent deployment actually looks like. Updated 5 March 2026: Anthropic/DoD negotiation as a precision-of-constraint case study.
AI is actively reshaping engineering headcounts -- not just productivity. This week: LLM reliability as a career risk, AI-assisted codebase rewrites showing up on HN, and Morgan Stanley projecting developer workforce expansion even as displacement accelerates.
Personal AI agents are becoming infrastructure, not novelty. Apple announced MacBook Neo, NVIDIA PersonaPlex brings on-device speech to Apple Silicon, BMW deploys humanoid robots, and Willison's Qwen deep-dive lands at 711 points on HN.
How we decide what AI and tech news actually matters -- sources, filters, and the things we deliberately ignore. A guide to the methodology behind this blog.
AI-assisted work has a daily ceiling -- most engineers hit it around the four-hour mark. Why it happens, what the research says, and how to structure your day around it.
The psychological cost of working alongside AI is emerging as a real and under-discussed crisis. Covers cognitive debt, moral distress, identity erosion, and what engineering leaders should be doing about it.