Nvidia
- Locked In: What $1 Trillion in AI Compute Capital Means for Your Infrastructure Decisions
At GTC 2026, Jensen Huang said he now sees at least $1 trillion in purchase orders for Blackwell and Vera Rubin through 2027. That capital is committed and the hardware is already in manufacturing -- and it has structural implications for every engineering team making build-vs-buy decisions over the next three years.
- Nvidia's Open-Source Play: Nemotron 3 and the Agentic Token Tax
Running agentic AI workflows through closed APIs is getting expensive fast. Nvidia's Nemotron 3 Super is the most credible open-weight answer yet -- but the hardware strategy underneath it is worth understanding before you reach for the Ollama docs.
- Tinybox vs Apple Silicon vs Project Digits: Which Local AI Box for Engineering Teams
Three different philosophies for running AI locally: raw GPU VRAM (Tinybox), unified memory that just works (Apple Silicon), and the Nvidia stack in a compact box (Project Digits). This is a decision guide, not a benchmark sheet.
- NemoClaw: Nvidia's Enterprise Agent Security Stack
NemoClaw is Nvidia's enterprise agent security stack for OpenClaw -- a single-command install that adds OpenShell sandboxing, policy-based guardrails, and a privacy router to autonomous agents. It launched at GTC 2026 on March 16. This signal tracks how the security infrastructure layer for enterprise AI agents develops.
- NVIDIA Vera Rubin: What 10x Cheaper Inference Actually Means
NVIDIA announced Vera Rubin at GTC 2026: 3.3-5x inference improvement over Blackwell, 10x inference token cost reduction, custom Vera ARM CPU, HBM4 at 22 TB/s. Ships H2 2026. The performance numbers matter for procurement. The cost numbers matter for every engineer deciding what to build.
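What a 10x token cost reduction means in practice is easiest to see as back-of-envelope arithmetic. The sketch below is illustrative only: the baseline price and the daily token volume are hypothetical placeholders, not Nvidia or cloud-provider figures; only the 10x multiplier comes from the announcement.

```python
# Back-of-envelope: implied savings from a 10x inference token cost cut.
# baseline_cost and tokens_per_day are HYPOTHETICAL placeholders.
baseline_cost_per_m_tokens = 10.00          # assumed baseline, USD per 1M tokens
rubin_cost_per_m_tokens = baseline_cost_per_m_tokens / 10  # the claimed 10x cut

tokens_per_day = 500_000_000                # assumed agentic workload volume

daily_before = tokens_per_day / 1e6 * baseline_cost_per_m_tokens
daily_after = tokens_per_day / 1e6 * rubin_cost_per_m_tokens

print(f"before: ${daily_before:,.0f}/day, after: ${daily_after:,.0f}/day")
```

The point of the exercise: at agentic volumes, a 10x cut moves whole classes of workloads from "route to a cheaper model" to "just run it" -- which is why the cost numbers, not the speedup numbers, are the ones engineers should plug their own volumes into.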
- Nvidia's $26 Billion Open-Weight Bet
Nvidia released Nemotron 3 Super -- a 120B-parameter hybrid reasoning model -- and Wired surfaced a $26 billion commitment to open-weight AI buried in a 2025 financial filing. The hardware monopoly is building the models too.
- NVIDIA Nemotron 3: What the Architecture Tells Us About Agentic AI Infrastructure
NVIDIA's Nemotron 3 family -- 31.6B parameters, 3.6B active, hybrid Mamba-Transformer MoE -- is engineered specifically for multi-agent systems. Here's what the architectural choices tell engineers about where agentic AI infrastructure is heading.
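The total-vs-active parameter split is the whole story for MoE deployment economics, and it is worth making concrete. A rough sketch using the announced figures (31.6B total, 3.6B active); the precision choices and the 2-FLOPs-per-parameter rule of thumb are standard assumptions, not Nemotron-specific numbers.

```python
# Sparse MoE sizing sketch for Nemotron 3 (31.6B total, 3.6B active per token).
total_params = 31.6e9
active_params = 3.6e9

# Memory tracks TOTAL parameters: every expert must be resident to serve traffic.
weights_gb_fp8 = total_params * 1 / 1e9    # 1 byte/param at fp8 (assumed precision)
weights_gb_bf16 = total_params * 2 / 1e9   # 2 bytes/param at bf16

# Per-token compute tracks ACTIVE parameters (~2 FLOPs per param per token,
# the usual dense-forward rule of thumb).
flops_per_token = 2 * active_params

active_fraction = active_params / total_params  # ~11% of weights fire per token
print(f"resident: {weights_gb_fp8:.1f} GB (fp8) / {weights_gb_bf16:.1f} GB (bf16), "
      f"active fraction: {active_fraction:.1%}")
```

The asymmetry is the architectural message: you pay for 31.6B parameters in VRAM but only ~3.6B in latency per token, which is exactly the trade a multi-agent system making many cheap parallel calls wants.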