Kernel Weekly — March 5, 2026: Pentagon vs. Anthropic

Rabbit-in-the-Headlights — AI stories that actually matter.

Welcome to Kernel Weekly, your short, sharp hit of AI news — the breakthroughs, bold moves, and occasional blunders worth sinking your teeth into.


1) Pentagon blacklists Anthropic as a “supply chain risk”

The Trump administration designated Anthropic a “supply chain risk” — the first time the US government has ever applied this label to an American technology company. The move came after Anthropic refused to remove two hard limits from its Pentagon contract: a ban on mass surveillance of American citizens, and a prohibition on fully autonomous weapons systems. Defense Secretary Hegseth invoked the designation after negotiations broke down, effectively barring federal agencies from using Claude. Anthropic fired back, arguing that under federal statute, Hegseth lacks the authority to restrict which companies other agencies work with. Axios | MIT Technology Review | DefenseScoop

Why it matters: This isn’t a procurement dispute. It’s the first real test of whether AI companies can set ethical boundaries with the world’s most powerful military customer — and survive the consequences.


2) OpenAI rushes in on the Pentagon deal — then calls it “opportunistic and sloppy”

Hours after Anthropic’s blacklisting, OpenAI announced its own deal to deploy models in classified military environments. CEO Sam Altman initially framed it as de-escalation. Days later, facing fierce backlash, he admitted the deal was “definitely rushed” and “looked opportunistic and sloppy.” OpenAI is now renegotiating, adding language that its systems “shall not be intentionally used for domestic surveillance of US persons,” consistent with the Fourth Amendment. Instead of Anthropic’s hard contractual bans, OpenAI accepted an “all lawful purposes” framework layered with architectural controls: cloud-only deployment, a proprietary safety stack the Pentagon agreed not to override, and security-cleared engineers embedded on-site. Fortune | CNBC | TechCrunch

Kernel take: When your CEO publicly calls your own deal “sloppy” within 72 hours, that tells you more about the internal pressure than the external PR. The technical safeguards are better than nothing — but “all lawful purposes” is a very wide lane when the government defines what’s lawful.


3) 300+ Google and OpenAI employees sign open letter backing Anthropic

More than 300 Google employees and over 60 OpenAI staffers signed an open letter titled “We Will Not Be Divided,” urging their employers to stand with Anthropic against the Pentagon’s demands. The letter calls for industry solidarity on boundaries for AI in surveillance and autonomous weapons. OpenAI researcher Aidan McLaughlin posted publicly that the Pentagon deal “wasn’t worth it” — drawing nearly 500,000 views. Some 70 current Anthropic staffers also signed, seeking to create “shared understanding and solidarity in the face of this pressure.” TechCrunch | The Hill | CNBC

Why it matters: The talent that builds these models is now publicly drawing red lines. In a labor market where top AI researchers have their pick of employers, this letter is leverage — not just symbolism.


4) Claude hits #1 on the App Store as consumers rally

In the most unexpected twist, Claude surged from 42nd place to the #1 most-downloaded free app in the US — overtaking ChatGPT and Gemini — within days of the Pentagon blacklisting. Anthropic reported sign-ups breaking all-time daily records every day that week, free users up more than 60% since January, and paid subscribers more than doubling year-over-year. Axios | TechCrunch | CNN | The Hill

Kernel take: The Pentagon wanted to make an example of Anthropic. Instead, they made a martyr. This is the first time an AI company has been commercially rewarded for saying “no” to a government — and others will now be running the math on whether that trade is worth it.


5) Defense tech companies flee Claude — the enterprise cost materializes

While consumers flocked to Claude, the enterprise picture is grimmer. Ten defense-focused portfolio companies backed by venture firm J2 Ventures have already stopped using Claude for military applications. Major contractors including Lockheed Martin are expected to remove Anthropic technology from their supply chains. Beyond defense, officials at Treasury, State, and HHS have directed employees to move off Claude. With approximately 80% of Anthropic’s revenue coming from enterprise customers, the supply chain designation is built to squeeze where it hurts most. CNBC | TechCrunch

Why it matters: Consumer downloads don’t replace enterprise contracts. Anthropic is betting that principled positioning attracts enough new enterprise business to offset the government exodus — but that’s a multi-quarter bet with a very public clock ticking.


6) $189 billion: February smashes the global startup funding record

Global venture investment hit $189 billion in February — the largest single month in startup funding history. But here’s the catch: 83% of that capital went to just three companies. OpenAI raised $110 billion at a $730 billion valuation, with Amazon ($50B), Nvidia, and SoftBank ($30B each) leading. Anthropic pulled in $30 billion in the third-largest venture round ever recorded. Meanwhile, Ayar Labs (optical interconnects) raised $500M from Nvidia and AMD. Crunchbase | TechCrunch

Kernel take: When three companies absorb 83% of a record month’s capital, that’s not a rising tide — it’s a gravitational field. The infrastructure layer of AI is consolidating into a handful of mega-players, and everyone else is competing for what’s left.


7) OpenAI launches GPT-5.3 Instant — less preachy, fewer hallucinations

OpenAI released GPT-5.3 Instant on March 3, directly addressing user complaints that ChatGPT had become overly preachy and patronizing. The update delivers a 26.8% reduction in hallucinations when pulling web search results, significantly fewer unnecessary refusals, and a toned-down moralizing preamble. Web search integration is smarter — the model now balances online sources with its own knowledge rather than simply summarizing results. GPT-5.2 Instant remains available for three months before retirement in June. 9to5Mac | TechCrunch | The Register

Why it matters: “Less cringe” isn’t a benchmark — but it’s what keeps paying users from switching. OpenAI is learning that tone and trust matter as much as capability in a market where the models are converging.


8) Anthropic launches Import Memory — making switching from rivals effortless

Anthropic made its memory feature free for all users on March 2 and introduced a new Import Memory tool. The feature lets users transfer conversation history, preferences, and context from ChatGPT, Gemini, and Microsoft Copilot into Claude. Users paste a specific prompt into a competitor’s chatbot, copy the exported memories, and paste them into Claude’s settings. Timing is everything — this launched the same weekend Claude hit #1 on the App Store. MacRumors | Bloomberg | 9to5Mac

Kernel take: Anthropic’s product team saw the window and jumped through it. When millions of users are rage-downloading your app, reducing switching costs is the highest-leverage move you can make. This is textbook competitive execution.


9) Alphabet surges 80% YoY as Gemini 3 + Apple deal reshapes the Magnificent Seven

Alphabet shares have surged 80% year-over-year, outperforming Nvidia and Microsoft. The catalyst: Gemini 3.1 Pro’s “Native Multimodal Fusion” — processing video, audio, and 3D geospatial data simultaneously — combined with the Apple partnership that makes Gemini the default intelligence layer for Siri across 2+ billion active devices. The Broadcom TPU alliance attacks AI’s two biggest problems at once: capability and cost. Microsoft, meanwhile, sees margins compressed paying “the Nvidia tax” while subsidizing Copilot. Financial Content | Seeking Alpha

Why it matters: Google went from “AI laggard” to the company everyone’s chasing in 18 months. The Apple distribution deal is the kind of structural advantage that compounds — every Siri query is now a Gemini training signal.


10) EU AI Act high-risk rules loom — sandbox deadline hits August

The EU’s AI Act high-risk provisions take effect in August 2026, requiring each member state to establish at least one AI regulatory sandbox by then. The European Commission has already missed its own deadline for guidance on Article 6 compliance, and the Transparency Code of Practice won’t be finalized until May or June. Meanwhile, the global AI infrastructure market is projected to grow from $158 billion (2025) to $419 billion by 2030. EU AI Act | IAPP | GlobeNewswire

Kernel take: The EU is building the regulatory framework while the planes are already in the air. If you’re deploying AI in Europe, the compliance clock is ticking — and the regulators themselves are behind schedule.


Final Thought

This week’s theme is unmistakable: the AI industry’s relationship with power is being tested in real time. Anthropic drew a line with the Pentagon and got blacklisted — then got rewarded by consumers. OpenAI rushed in and got burned by its own employees. Google quietly consolidated an 80% stock surge while everyone else was fighting.

The Pentagon saga isn’t just about one contract. It’s the first real-world test of whether AI companies can say “no” to governments and survive. So far, the answer is: it depends on which customers you’re willing to lose.


Want this in your inbox every week? Subscribe at thekernel.io/newsletter or reply with topics for next time.

Sources: TechCrunch, CNBC, Fortune, Axios, MIT Technology Review, Bloomberg, CNN, The Hill, DefenseScoop, MacRumors, 9to5Mac, The Register, Seeking Alpha, Financial Content, IAPP, Crunchbase, GlobeNewswire, EU AI Act
