A thought to start
Two weeks ago, the EU AI Act Omnibus trilogue collapsed in Brussels with ninety-three days to enforcement. Last weekend, Brussels closed the deal. In between, the US government negotiated pre-deployment testing access to every new frontier model before it ships, the Pentagon awarded classified-network AI contracts to seven vendors, and Anthropic got cut from that list over a safety guardrails dispute.

Governance is not lagging anymore. It is catching up, fast, on both sides of the Atlantic. The companies that have been treating compliance as a Q3 problem are about to find out it was a Q1 problem.

The last two weeks in AI

The headlines worth keeping, drawn from the Saturday news segments.

The EU AI Act Omnibus closed. A week after the trilogue collapsed, Brussels landed the deal with seventy-nine days to enforcement. The negotiating room narrowed sharply. Enterprises that delayed their AI literacy programs waiting for the rules to soften bet on the wrong outcome.

The US government got a seat at the table on frontier AI. Every new frontier model now goes through pre-deployment government testing before it ships. The procurement side moved at the same time — the Pentagon awarded classified-network AI contracts to seven vendors, and notably cut Anthropic over a safety guardrails dispute. Whether you read that as Anthropic holding the line or losing the contract depends on which side of the table you sit on.

The first joint security framework for agentic AI landed. CISA and the Five Eyes published it together — and conceded, in the document itself, that prompt injection may never be fully solved. Read that sentence twice. It is a remarkable admission from the agencies whose job it is to tell you the threats can be managed.

Claude Security and GPT-5.5-Cyber both went live in the same week. Anthropic moved Claude Security into public beta for Enterprise. OpenAI launched GPT-5.5-Cyber for vetted defenders. UK AISI evaluations put both models at expert-human level for offensive cyber capability. The defenders and the attackers just got the same tools at the same time.

The vibe-coding bill came due. Roughly 380,000 AI-generated apps were found publicly accessible, and about 5,000 of them held sensitive data. Shadow AI is no longer a future risk. It is a current liability waiting on a breach report.

The two most-watched episodes since the last digest.

The AI Model Nobody's Talking About — Mistral 3.5 & The Sovereignty Question (23K views). Thirteen minutes on why the Mistral release matters less for its benchmarks and more for what it signals about European AI sovereignty and on-premise deployment economics.

Someone Just Trained a Frontier AI — Without Nvidia: ZAYA1-8B Explained (3.9K views). Fourteen minutes on what a frontier-grade model trained on non-Nvidia silicon actually proves — and what it doesn't.

The deeper read

The Integration Paradox. Seventy-nine days from today, the EU AI Act becomes fully operational. Most enterprises have an AI strategy. Almost none have AI governance. That gap has a name — governance debt — and starting August 2nd, it has a price.

The companies that win on AI in the next eighteen months won't be the ones with the biggest model budgets. They'll be the ones who close their governance debt before the regulators — and their own boards — force them to.

What's coming

The Flip Phone Rebel

Season 1 wrapped this week with Episode 7 — The AI Dictionary, the term-by-term reference for everything we covered in the foundations arc.

Season 2 starts Tuesday and shifts the lane: practical AI. How to actually use it, where it breaks, and how to build governance around it without slowing your teams down.

From the AI archive

In 1966, MIT's Joseph Weizenbaum built ELIZA, one of the first chatbots. It worked by pattern-matching: type "I am sad" and it replies "Why are you sad?" That was the entire trick. No understanding. No memory. Just reflection.
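The whole trick fits in a few lines. Here is a toy sketch of ELIZA-style reflection (illustrative rules of my own, not Weizenbaum's original script):

```python
import re

# Toy ELIZA-style rules: a regex and a response template.
# These two rules are invented for illustration, not from the 1966 script.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why are you {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
]

def reply(text: str) -> str:
    """Reflect the user's own words back via the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Echo the captured fragment, stripping trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # stock fallback when nothing matches

print(reply("I am sad"))  # -> Why are you sad?
```

No model, no state, no meaning. A rule fires, the user's words come back as a question, and that was enough to feel like a conversation.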

His own secretary, who had watched him build it, asked him to leave the room so she could talk to it privately.

Sixty years later, we're still figuring out what to do about that.

Every promise. Every risk. The truth.

— Fredrik

Keep reading