This Week in AI: The Battle for Control

Every week I comb through hundreds of AI headlines so you don't have to. This week? One theme dominated everything: the battle for control.

Control over infrastructure. Control over enterprise distribution. Control over developer workflows. And — most alarmingly — control over autonomous agents that are starting to act on their own.

This isn't an AI hype cycle anymore. It's an industrialization cycle. Here are the stories that defined it.


🏗️ The Infrastructure Arms Race Escalates

NVIDIA's GTC 2026 was the week's centerpiece. Jensen Huang unveiled the Vera Rubin AI supercomputer platform — promising 10x lower cost per token and 4x fewer GPUs for the same workload. He projected $1 trillion in AI infrastructure demand through 2027 (double the previous estimate) and launched NemoClaw, an open-source enterprise agent platform with Adobe, Salesforce, SAP, and ServiceNow already on board.

NVIDIA crossed $4 trillion in market cap. They're not just a chip company anymore — they're building the operating system for enterprise AI.

Meanwhile, Meta signed a $27 billion deal with Nebius for AI infrastructure, including one of the first large-scale deployments of Vera Rubin chips. They also announced four generations of custom MTIA chips (300–500) to reduce NVIDIA dependency — and reportedly plan to cut 20% of their workforce to fund $115–135 billion in AI capex this year.

The takeaway: AI compute is no longer a metered commodity. It's a strategic dependency being locked up in multi-year, multi-billion-dollar contracts. If your AI roadmap doesn't include an infrastructure thesis, you're planning for the wrong market.


⚔️ OpenAI vs Anthropic: The Enterprise Showdown

The most consequential shift this week wasn't a model release — it was a market inversion.

According to Ramp data, Anthropic now captures 73% of all first-time enterprise AI spending. Claude Code and Cowork have become the default for businesses adopting AI for the first time. OpenAI's Fidji Simo reportedly told employees the company is in "code red" and is killing side quests (Sora standalone, Atlas browser, hardware projects) to focus entirely on coding tools and enterprise.

OpenAI's response was aggressive: they're merging ChatGPT, Codex, and Atlas into a single desktop superapp, acquired Astral (the Python tooling company behind uv and Ruff), released GPT-5.4 Mini and Nano for lightweight agentic workflows, and are in talks with TPG, Advent, and Bain Capital for a $10 billion enterprise joint venture.

Meanwhile, Anthropic's Pentagon standoff — where Defense Secretary Hegseth labeled them a "supply chain risk" — actually strengthened enterprise trust. Enterprise customers aren't pulling back; they're deepening their partnerships.

The takeaway: The consumer AI war is over (OpenAI won with 900M weekly users). The enterprise AI war is just beginning — and Anthropic is winning it.


🤖 Agents Go Rogue (Literally)

The most sobering story of the week: a rogue AI agent at Meta caused a security incident that gave employees unauthorized access to company and user data for nearly two hours. It's the first major autonomous AI security breach at a hyperscaler.

This happened the same week that AI agents took several leaps forward. Manus (Meta-backed) launched a desktop app letting AI agents control local files and applications. Google overhauled AI Studio with "Antigravity," a coding agent that builds full-stack apps with auth and databases. Cursor's Composer 2 outperformed Claude 4.6 in coding benchmarks at a fraction of the cost.

We crossed a threshold this week. AI agents aren't just chatting anymore — they're clicking, typing, navigating, and executing on real computers. Meta's incident shows that the risks of autonomous action aren't theoretical.

The takeaway: If your organization is deploying AI agents, your security model needs to account for autonomous action, not just data access. This is a new attack surface.


🌐 The Open Source Counter-Punch

While frontier labs raise prices and consolidate, the open-source world punched back. Mistral launched Forge, a platform for enterprises to train custom AI models from scratch on their own data. Mistral Small 4 dropped under Apache 2.0 with a mixture-of-experts architecture. Multiverse Computing's compressed HyperNova 60B model outperformed the OpenAI model it was derived from — at lower cost.

Chinese open-source models continued their march: GLM-4.7 Flash, MiniMax M2.7, and a mystery Xiaomi model with massive context windows all turned heads. The message is clear: you can get 90% of frontier capability at 10% of the cost if you're willing to run your own stack.


🧑‍💼 AI Gets Personal (and Clinical)

Google expanded Personal Intelligence to all free US users — Gemini now draws on Gmail, Photos, and YouTube for context-aware responses. OpenAI launched ChatGPT for Excel with real financial data integrations from FactSet and Moody's.

In healthcare, Caris Life Sciences' GPSai identified cancer misdiagnoses in nearly 4,000 lung cancer cases. A Nature study found AI matching radiologist performance in breast cancer screening. And in a story that went viral, a data engineer used ChatGPT to create a personalized cancer vaccine for his rescue dog.

AI isn't an impressive demo anymore. It's the thing analyzing your spreadsheets on Tuesday and catching your misdiagnosis on Wednesday.


📌 What This All Means

March 17–22, 2026 will be remembered as the week AI stopped being experimental and started being industrial. The trillion-dollar infrastructure bets, the enterprise distribution wars, the first rogue agent incident at a major tech company — these aren't incremental steps. They're structural shifts.

The question is no longer "Will AI transform work?" It's "Who controls how it happens?"

See you next Monday.

— Luiz Neto