Russell Clare

Russell Clare -- engineering, AI, and the things worth writing down.

  • About
  • Posts

powered by Hugo | themed with poison
© 2026 Russell Clare. All rights reserved.

  • Who's Who in AI: The People and Labs Actually Worth Following March 6, 2026
    GPT-5.4 released with native computer-use and 1M context. Anthropic receives formal DoW supply chain risk letter and plans court challenge. Labs, researchers, and publications that are actually worth following.
  • The Ad SDK You Shipped Is a Government Surveillance Vector March 6, 2026
    CBP has officially acknowledged it buys location data sourced from the real-time bidding ecosystem -- data that flows directly from ordinary apps through ad SDKs to government analysts. This is a product engineering post about what your app is actually participating in, and what to do about it.
  • State of AI March 6, 2026
    GPT-5.4 launches with native computer-use and 1M context. Anthropic is formally designated a US national security supply chain risk and is challenging the designation in court. New Anthropic labor research finds AI displacement is real but far below the theoretical maximum. Updated 6 March 2026.
  • The Memory Crunch: Why Hardware Is Getting Expensive Again March 5, 2026
    DRAM prices rose 172% throughout 2025. Samsung halted DDR5 module orders. Micron exited its Crucial consumer brand. The AI memory crunch is no longer a prediction -- it is the price tag on your next hardware refresh.
  • Corporate Ethics Meets State Power: The Anthropic/Pentagon Standoff and What It Means for Engineering Teams March 5, 2026
    When the Pentagon demanded Anthropic delete a clause protecting against mass surveillance, it triggered the first real test of whether corporate AI ethics policies can survive contact with sovereign power. Here's what engineers deploying AI systems need to understand.
  • Infrastructure in the Line of Fire: What the AWS Drone Strikes Actually Mean for SREs March 5, 2026
    Drones hit three AWS facilities in the UAE and Bahrain during the US-Iran conflict. AZ isolation failed. Banking services went down. And Iranian state media told us exactly why they targeted cloud infrastructure. Here's what changes now.
  • Whose Ethics? Anthropic, the Pentagon, and the Limits of AI Vendor Governance March 5, 2026
    Anthropic refused to delete one phrase from its AI usage policy. The Pentagon banned the company, OpenAI filled the gap within hours, and the entire premise of 'safety-first' enterprise AI got stress-tested in real time. Here's what it means for engineering teams.
  • Why We Write These Posts With AI (And What That Means) March 5, 2026
    An honest account of why this blog uses AI to research and write, what that process looks like, and what it means for how you should read it. Last updated 5 March 2026 -- the disclosure argument and commission/curate/publish model, with a case study from the Ars Technica fabrication incident.
  • Self-Hosting Your AI Stack: A Practical Guide March 5, 2026
    Updated 5 March 2026: NVIDIA PersonaPlex 7B makes fully local full-duplex voice practical on Apple Silicon. AMD Ryzen AI 400 brings NPU to AM5 desktop. A practical guide to running your own AI stack -- inference, agent layer, memory, publishing, monitoring.
  • Clinejection: How a GitHub Issue Title Took Down a 5 Million User Tool March 5, 2026
    In February 2026, an attacker used a GitHub issue title to hijack Cline's AI triage bot, poison its Actions cache, and publish a malicious npm package to 5 million developers. Every failure point was a documented misconfiguration. This is what went wrong, and what you do differently.
  • Building Your AGENTS.md: The File That Makes AI Actually Work March 5, 2026
    The AGENTS.md file is becoming the single most important piece of configuration in any AI-assisted project. How to write one, what to put in it, and why it matters. Updated with Clinejection: the attack that proved what happens when AI agents have no security constraints.
  • The Three Paths: How Engineers Are Navigating the AI Transition March 5, 2026
    Engineers are splitting into three groups in response to AI: Path 1 (adapting and thriving), Path 2 (struggling but reachable), and Path 3 (in crisis). Updated 5 March 2026 with Alibaba researcher exodus, Anthropic/Pentagon standoff, WEF compression data, and the 'context is the moat' framing.
  • Building Agents That Can't Go Rogue: A Practical Safety Guide March 5, 2026
    Practical safety engineering for AI agents -- not theory. Covers real incidents, the accountability gap, kill switches, constraint patterns, and what responsible agent deployment actually looks like. Updated 5 March 2026: Anthropic/DoD negotiation as a precision-of-constraint case study.
  • The Future of Engineering Jobs: What AI Is Actually Changing March 5, 2026
    AI is actively reshaping engineering headcounts -- not just productivity. This week: LLM reliability as a career risk, AI-assisted codebase rewrites showing up on HN, and Morgan Stanley projecting developer workforce expansion even as displacement accelerates.
  • The Agentic Turn: Personal AI Agents Are Becoming Infrastructure March 5, 2026
    Personal AI agents are becoming infrastructure, not novelty. Apple announced MacBook Neo, NVIDIA PersonaPlex brings on-device speech to Apple Silicon, BMW deploys humanoid robots, and Willison's Qwen deep-dive lands at 711 points on HN.
  • Signal vs Noise: How We Decide What Actually Matters March 5, 2026
    How we decide what AI and tech news actually matters -- sources, filters, and the things we deliberately ignore. A guide to the methodology behind this blog.
  • The 4-Hour Ceiling: Why AI-Assisted Work Has a Daily Limit March 5, 2026
    AI-assisted work has a daily ceiling -- most engineers hit it around the four-hour mark. Why it happens, what the research says, and how to structure your day around it.
  • The Emerging Mental Health Crisis Among Software Engineers March 5, 2026
    The psychological cost of working alongside AI is emerging as a real and under-discussed crisis. Covers cognitive debt, moral distress, identity erosion, and what engineering leaders should be doing about it.