Hardware
- Linux Gaming in 2026: The Year It Got Serious
GE-Proton 10-34 ships targeted fixes for God of War Ragnarok, Final Fantasy XIV, and Assassin's Creed 1. Forza Horizon 6 confirmed for Steam Deck at its May launch. Epic lays off 1,000+ employees amid Fortnite's decline, raising questions about its EAC Linux holdout. NTSYNC now shipping by default in the SteamOS 3.7.20 beta.
- Hardware for AI: Self-Build Recommendations and the Inference Landscape
A living guide to building your own machine for AI inference and gaming. Three tiers at £500, £800, and £1500 for AI inference and gaming/general-purpose use, a GPU quick reference, and what to avoid in 2026. Updated 1 Apr 2026: the AMD RX 9060 XT 16GB arrives at the RTX 5060 Ti 8GB's MSRP, reshaping mid-tier options; DDR5-5200 pricing documented rising from $100 to $400+ in five months; US semiconductor tariffs take effect today.
- The Memory Crunch: Why Hardware Is Getting Expensive Again
Memory inflation has spread from gaming hardware into smartphones and ultra-low-cost consumer devices. The thesis holds: no new data points shifted the picture on 1 April 2026, with the shortage trajectory intact and the 2028-2030 relief window unchanged.
- Intel Arc Pro B70: 32GB for $949 and What It Does to the Inference Cost Equation
Intel launched the Arc Pro B70 on 25 March 2026 -- 32GB GDDR6, 608 GB/s bandwidth, $949. That's more VRAM than Nvidia's $1,800 RTX Pro 4000 Blackwell, at nearly half the price. The VRAM:price calculus for local AI inference just shifted.
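That calculus is easy to make concrete. A quick back-of-envelope sketch: prices and the B70's 32GB are from the article, while the RTX Pro 4000 Blackwell's 24GB capacity is an assumption here (the article says only that it is less than 32GB).

```python
# $/GB-of-VRAM comparison. Prices and the B70's 32GB are from the
# article; the RTX Pro 4000 Blackwell's 24GB capacity is assumed.
cards = {
    "Intel Arc Pro B70":             (32, 949),
    "Nvidia RTX Pro 4000 Blackwell": (24, 1800),  # VRAM figure assumed
}

for name, (vram_gb, price_usd) in cards.items():
    print(f"{name}: ${price_usd / vram_gb:.0f} per GB of VRAM")

# Intel Arc Pro B70: $30 per GB of VRAM
# Nvidia RTX Pro 4000 Blackwell: $75 per GB of VRAM
```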
- The Memory Crunch: Why Hardware Is Getting Expensive Again
A quieter day on 26 March -- nothing new shifts the picture. This week's major data points (Micron Q2 2026 earnings, Samsung's $73B capex, SK Group's shortage-to-2030 forecast) remain the dominant signals, and the supply-squeeze thesis holds.
- Arm AGI CPU: When the Licensor Becomes the Chip Maker
Arm has launched its first production data center CPU, the AGI CPU, co-developed with Meta. After 35 years of IP licensing, Arm is now in the silicon business -- and it's a bet on the CPU becoming the pacing element in agentic AI infrastructure.
- iPhone 17 Pro Runs a 400B Model Locally. Here's What That Actually Means.
The iPhone 17 Pro has been demonstrated running a 400B parameter model locally via storage-as-RAM paging at 0.6 tokens per second. That speed makes it useless for production work today -- but the architectural threshold it crosses matters.
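To make 0.6 tokens per second concrete in wall-clock terms, a minimal sketch; the response lengths below are illustrative assumptions, not figures from the demo.

```python
# Wall-clock time for a reply at 0.6 tokens/second (rate from the
# article; the response lengths are illustrative assumptions).
RATE_TPS = 0.6

for tokens in (50, 300, 1000):
    minutes = tokens / RATE_TPS / 60
    print(f"{tokens:>5} tokens -> {minutes:.1f} minutes")

#    50 tokens -> 1.4 minutes
#   300 tokens -> 8.3 minutes
#  1000 tokens -> 27.8 minutes
```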
- Building a Gaming PC in 2026: Three Builds at £500, £800, and £1500
A practical self-build guide for gaming and everyday use in 2026. Three tiers at £500, £800, and £1500 -- covering what to build, why, and where the real value sits.
- Building a Local AI Machine: Three Builds at £500, £800, and £1500
A practical buying guide for engineers who want to run local AI models and agents in 2026. Three tiers at £500, £800, and £1500, with honest assessments of what each actually runs.
- How MoE Sparsity and Apple Silicon SSD Architecture Make 397B Local Inference Possible
Flash-MoE runs a 397-billion-parameter model on a MacBook Pro with 5.5GB of active RAM by combining MoE weight sparsity with Apple Silicon's direct SSD-to-GPU memory architecture. This is a specific technical convergence, not a general trick, and understanding why it works on Apple Silicon but not on a standard PC changes how you think about hardware selection for local inference.
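The arithmetic behind the headline, as a sketch: this treats the 5.5GB figure as the per-token active-weight footprint and derives the quantization level from the article's sizes -- neither reading is confirmed by the piece itself.

```python
# Derive what 5.5GB of active weights implies for a 397B-param MoE.
# Sizes are from the article; treating 5.5GB as the per-token active
# weight footprint is an assumption.
total_params = 397e9          # total parameters
model_bytes  = 209e9          # on-disk size
active_bytes = 5.5e9          # active weight memory

bytes_per_param = model_bytes / total_params
active_params   = active_bytes / bytes_per_param

print(f"~{bytes_per_param * 8:.1f} bits/param, "
      f"~{active_params / 1e9:.0f}B active params per token")
# ~4.2 bits/param, ~10B active params per token
```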
- Locked In: What $1 Trillion in AI Compute Capital Means for Your Infrastructure Decisions
At GTC 2026, Jensen Huang said he now sees at least $1 trillion in purchase orders for Blackwell and Vera Rubin through 2027. Those orders are committed and the silicon is already in production -- with structural implications for every engineering team making build-vs-buy decisions over the next three years.
- Running a 397B Model on 48GB: Flash-MoE and the Active-Parameter Insight
Dan Woods streamed a 209GB MoE model from SSD on a 48GB MacBook Pro and got 5-5.7 tokens per second. The key insight: memory constraints on local inference are about active parameters, not total ones. MoE architecture changes the math entirely.
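A sanity check on those numbers, under stated assumptions: the 5.5GB active-weight figure is borrowed from the companion Flash-MoE piece, and the 6 GB/s SSD read bandwidth is a ballpark for a recent MacBook Pro, not a measured value.

```python
# If every token's experts were read cold from SSD, throughput would be
# capped by bandwidth. The gap to the observed rate implies heavy expert
# reuse across tokens. All inputs except observed_tps are assumptions.
active_gb_per_token = 5.5     # assumed, from the companion piece
ssd_gbps            = 6.0     # assumed sustained SSD read bandwidth
observed_tps        = 5.0     # low end of the reported 5-5.7 tok/s

cold_ceiling     = ssd_gbps / active_gb_per_token
ssd_gb_per_token = ssd_gbps / observed_tps
implied_reuse    = 1 - ssd_gb_per_token / active_gb_per_token

print(f"all-cold ceiling: {cold_ceiling:.1f} tok/s")
print(f"implied expert reuse at {observed_tps:.0f} tok/s: {implied_reuse:.0%}")
# all-cold ceiling: 1.1 tok/s
# implied expert reuse at 5 tok/s: 78%
```

Read that way, the streaming works because consecutive tokens tend to route to overlapping experts, so most of the needed weights are already resident.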
- Tinybox vs Apple Silicon vs Project Digits: Which Local AI Box for Engineering Teams
Three different philosophies for running AI locally: raw GPU VRAM (Tinybox), unified memory that just works (Apple Silicon), and the Nvidia stack in a compact box (Project Digits). This is a decision guide, not a benchmark sheet.
- Amazon's AI Phone Bet: Is 'Transformer' a Vision or Another Fire Phone?
Reuters reports Amazon is developing a new smartphone codenamed Transformer, built around Alexa+ and designed to potentially replace traditional app stores with conversational AI. The idea is more coherent than the Fire Phone ever was -- but Amazon still has to build it.
- NVIDIA Vera Rubin: What 10x Cheaper Inference Actually Means
NVIDIA announced Vera Rubin at GTC 2026: 3.3-5x inference improvement over Blackwell, 10x inference token cost reduction, custom Vera ARM CPU, HBM4 at 22 TB/s. Ships H2 2026. The performance numbers matter for procurement. The cost numbers matter for every engineer deciding what to build.
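An illustrative read of what the 10x claim does to a feature's unit economics: only the 10x factor comes from the announcement, while the baseline price and traffic figures below are invented for the example.

```python
# What a 10x token-cost reduction does to daily serving cost.
# The 10x factor is from the announcement; the baseline price and
# traffic figures are invented for illustration.
baseline_usd_per_mtok = 5.00        # assumed current blended price
requests_per_day      = 1_000_000   # assumed traffic
tokens_per_request    = 2_000       # assumed prompt + completion

def daily_cost(usd_per_mtok):
    total_mtok = requests_per_day * tokens_per_request / 1e6
    return total_mtok * usd_per_mtok

print(f"today:       ${daily_cost(baseline_usd_per_mtok):,.0f}/day")
print(f"10x cheaper: ${daily_cost(baseline_usd_per_mtok / 10):,.0f}/day")
# today:       $10,000/day
# 10x cheaper: $1,000/day
```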
- Nvidia's $26 Billion Open-Weight Bet
Nvidia released Nemotron 3 Super -- a 120B-parameter hybrid reasoning model -- and Wired surfaced a $26 billion commitment to open-weight AI buried in a 2025 financial filing. The hardware monopoly is building the models too.
- $130 Billion in Illegal Tariffs: What the Refund Ruling Means for Hardware Teams
A US trade court ordered refunds on $130B in tariffs ruled illegal by the Supreme Court, affecting ~300,000 importers including hardware buyers. Here's what it means for engineering budgets, CapEx planning, and procurement strategy.