Kernel Weekly: AI stories that actually matter.
March 26, 2026
Welcome to Kernel Weekly, your short, sharp hit of AI news — the breakthroughs, bold moves, and occasional blunders worth sinking your teeth into.
1) 🏛️ White House Drops Its AI Blueprint — And Wants to Override Every State Law
On March 20, the Trump Administration released its National Policy Framework for Artificial Intelligence — a 7-category legislative roadmap for Congress. The headline: broad federal preemption of state AI laws. The framework calls for a “minimally burdensome” national standard, opposes creating any new federal AI regulatory body, and wants to preclude states from regulating AI model development or imposing liability on developers for third-party misuse. It also backs stronger child safety protections, wants AI companies — not ratepayers — to cover data center energy costs, and punts the copyright-versus-fair-use question to the courts. Crowell & Moring | Roll Call
Kernel take: The framework is non-binding; it's a wish list, not a law. But the preemption play is the real signal. If Congress follows through, every state-level AI safety law (Colorado, California, Illinois) becomes moot overnight. For boards navigating compliance strategy, the worst outcome isn't one set of rules or another. It's spending two years building governance around state laws that evaporate the moment Congress acts.
2) 🇪🇺 EU Votes 101–9 to Delay Its Own AI Law
On March 18, the European Parliament’s joint IMCO/LIBE committee voted overwhelmingly — 101 in favor, 9 against — to delay the EU AI Act’s high-risk compliance deadlines. The original August 2026 deadline for Annex III high-risk systems now shifts to December 2027, with Annex I systems pushed to August 2028. Only 8 of 27 EU member states were ready for the original timeline. MEPs also introduced a new ban on AI “nudifier” systems that create non-consensual intimate imagery. European Parliament | Trending Topics
Why it matters: The EU wrote the world’s most ambitious AI regulation — then discovered that nobody, including its own member states, could implement it on time. The 101–9 vote isn’t close; it’s an admission that regulation outpaced both technical standards and institutional readiness. Meanwhile, Washington is pushing preemption and Beijing is accelerating.
3) ⚖️ Boards Are Now on the Hook for AI Oversight — Whether They Realize It or Not
A new analysis from Skadden published on the Harvard Law School Forum on Corporate Governance on March 25 makes the fiduciary case explicit: directors who allow AI deployment without adequate governance, testing, or monitoring risk breaching their duty of care — especially where problems were foreseeable and preventable. The authors argue that waiting for comprehensive federal legislation is not a defense. Existing laws across privacy, employment discrimination, consumer protection, and securities regulation already apply to AI systems. Harvard Law Forum
Kernel take: This is the piece every board member should read this week. The legal exposure isn’t hypothetical — it’s the gap between what your company is deploying and what your board is overseeing. “We didn’t know” stopped being a defense the moment your organization started using AI in customer-facing or decision-making workflows. The fiduciary clock is already running.
4) 📐 NIST Launches AI Agent Standards — 930+ Organizations Respond
NIST’s Center for AI Standards and Innovation (CAISI) launched the AI Agent Standards Initiative in February, and the public comment period that closed March 9 drew more than 930 submissions from organizations including the American Bankers Association, TechNet, and BSA. The initiative aims to foster industry-led technical standards for interoperable, secure AI agents — covering everything from authentication and policy enforcement to cross-platform communication. Cybersecurity Dive | MetricStream
Why it matters: 930 responses isn’t standard-setting — it’s a stampede. The industry clearly wants guardrails for autonomous AI agents before the market fragments into incompatible ecosystems. For enterprise buyers, NIST standards will become the de facto procurement checklist. If your AI agent vendor can’t point to NIST alignment within 18 months, expect that to become a dealbreaker.
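To make "authentication" concrete: one primitive an interoperability standard might pin down is how one agent proves a request really came from another. Below is a minimal sketch using HMAC request signing; the field names and scheme are our own illustration, not anything NIST has specified.

```python
import hashlib
import hmac
import json
import time

# Hypothetical illustration: HMAC-signed agent-to-agent requests, the kind of
# authentication primitive an interoperability standard might formalize.
# The field names and scheme are our own, not NIST's.

SHARED_KEY = b"provisioned-out-of-band"  # assumption: pre-shared per agent pair

def sign_request(agent_id: str, action: str, payload: dict) -> dict:
    """Package an agent action with a timestamp and an HMAC-SHA256 signature."""
    body = {"agent_id": agent_id, "action": action,
            "payload": payload, "ts": int(time.time())}
    msg = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return body

def verify_request(request: dict, max_age_s: int = 300) -> bool:
    """Reject requests with a bad signature or a stale timestamp (replay guard)."""
    sig = request.pop("sig", "")
    msg = json.dumps(request, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    fresh = (time.time() - request.get("ts", 0)) <= max_age_s
    return hmac.compare_digest(sig, expected) and fresh

req = sign_request("billing-agent", "export_invoices", {"quarter": "Q1"})
assert verify_request(req)
```

The real standards work lives in what this sketch skips: key provisioning, rotation, and what trust looks like across organizational boundaries. That's presumably where most of those 930 submissions are arguing.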
5) 🖥️ GPT-5.4 Brings Computer Use to the Mainstream
OpenAI released GPT-5.4 on March 5, and the headline feature is native computer use — the model can see your screen and operate applications by clicking, typing, scrolling, and navigating menus. It supports up to 1 million tokens of context, enabling agents to plan, execute, and verify tasks across long horizons. OpenAI reports a 33% reduction in factual errors compared to GPT-5.2. NxCode | Deeper Insights
Kernel take: Computer use is where AI agents stop being chatbots and start being workers. The ability to navigate applications, fill out forms, extract data from legacy systems — this is the unlock for the 80% of enterprise workflows that live outside APIs. It also means your security perimeter just expanded dramatically. Every AI agent with screen access is a new attack surface.
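For security teams sizing up that new attack surface, the mechanics help: a computer-use agent is essentially a loop that captures the screen, asks the model for one action, executes it, and repeats. Here's a minimal sketch of that loop with the model call stubbed out; the action schema and function names are our assumptions for illustration, not OpenAI's actual API.

```python
from dataclasses import dataclass

# Minimal sketch of a computer-use agent loop: observe the screen, ask the
# model for one action, execute it, repeat. The Action schema and call_model
# stub are assumptions for illustration, not OpenAI's actual GPT-5.4 API.

@dataclass
class Action:
    kind: str           # "click" | "type" | "scroll" | "done"
    x: int = 0
    y: int = 0
    text: str = ""

def capture_screenshot() -> bytes:
    return b"<png bytes>"  # stub; a real agent grabs the actual framebuffer

def call_model(goal: str, screenshot: bytes, history: list) -> Action:
    # Stub standing in for the model call. A real loop sends the goal,
    # screenshot, and prior actions, and parses a structured action back.
    return Action(kind="done")

def execute(action: Action) -> None:
    print(f"executing: {action}")  # real version drives mouse/keyboard APIs

ALLOWED = {"click", "type", "scroll", "done"}  # guardrail: whitelist actions

def run_agent(goal: str, max_steps: int = 20) -> None:
    history: list[Action] = []
    for _ in range(max_steps):
        action = call_model(goal, capture_screenshot(), history)
        if action.kind not in ALLOWED:
            raise RuntimeError(f"blocked unapproved action: {action.kind}")
        if action.kind == "done":
            return
        execute(action)
        history.append(action)

run_agent("Export last quarter's invoices from the billing portal")
```

The action whitelist and the step cap are where governance gets real: every action type the model can emit is something your policy layer now has to have an opinion about.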
6) 💊 Eli Lilly Commissions Pharma’s Most Powerful AI Supercomputer
Eli Lilly officially launched LillyPod on February 25: a DGX SuperPOD powered by 1,016 NVIDIA Blackwell Ultra GPUs delivering over 9,000 petaflops of AI performance. Built in just four months at Lilly's Indianapolis campus, LillyPod is the most powerful AI factory wholly owned by a pharmaceutical company. Workloads already in production span genomics, molecule design, single-cell biology, imaging, and manufacturing optimization. NVIDIA Blog | Fierce Biotech
Why it matters: This isn’t a pilot or a partnership announcement — it’s a production-scale AI factory owned outright by a pharma company. Lilly is betting that owning the compute, not renting it, gives them a structural advantage in drug discovery speed. If LillyPod delivers even a marginal acceleration in clinical timelines, the ROI dwarfs the hardware cost. Expect every top-10 pharma company to follow within 18 months.
7) ⚖️ Harvey Hits $11B — Legal AI’s First Mega-Unicorn
Legal AI startup Harvey raised $200 million at an $11 billion valuation, led by GIC and Sequoia. The company’s valuation has leaped from $3B to $5B to $8B to $11B in roughly a year. Harvey’s products are now used by more than 100,000 lawyers across 1,300 organizations, covering contract analysis, compliance, due diligence, and litigation. TechCrunch
Kernel take: Harvey is the proof point for vertical AI. While horizontal platforms fight over general-purpose use cases, Harvey built deep domain expertise in legal workflows and is now valued higher than most of the law firms it serves. The playbook — pick a profession, go deep, own the workflow — is replicable in accounting, consulting, and healthcare. The question isn’t whether vertical AI will eat professional services. It’s which verticals get eaten first.
8) 🔧 Oracle and Block Signal the AI Workforce Reckoning
Two major workforce stories landed within days of each other. Oracle is planning to cut 20,000–30,000 positions — up to 18% of its 162,000-strong workforce — to fund a massive data center buildout for AI workloads (including OpenAI). The cuts are driven by a cash crunch that Wall Street expects to keep Oracle cash-flow-negative for years. Meanwhile, Block cut 4,000 employees — nearly half its staff — with Jack Dorsey declaring that “a significantly smaller team, using intelligence tools, can do more and do it better.” Block’s stock surged 24%. Fortune
Why it matters: Oracle is firing people to afford AI infrastructure. Block is firing people because AI supposedly replaces them. Same outcome, different narratives — and both got rewarded by the market. For boards, the uncomfortable question isn’t whether AI reduces headcount. It’s whether you’re making structural workforce decisions based on genuine capability assessment or because the stock price rewards the announcement.
9) 🇰🇷 Samsung Bets $73 Billion on an AI Chip Comeback
Samsung Electronics will spend 110 trillion won ($73.3 billion) on chip capacity expansion and R&D in 2026 — a 22% increase and the single largest annual semiconductor investment by any company in history, surpassing TSMC’s projected $45 billion. The spending targets advanced memory (HBM), next-generation 2nm processors, and new fabrication facilities. Samsung recently reclaimed a technical edge by being first to commercially ship HBM4 chips, and received a public endorsement from Jensen Huang at GTC for its next-generation HBM4E. Evertiq
Kernel take: Samsung isn’t just catching up — it’s trying to rewrite the supply map. SK Hynix has dominated HBM supply to NVIDIA, and Samsung’s $73 billion is a statement that the memory hierarchy is too important to concede. For enterprise buyers, this is good news: a genuine two-supplier HBM market means better pricing and less concentration risk. For investors, it’s a reminder that the AI infrastructure race has no finish line.
10) 🤖 NVIDIA’s GTC Declares the Age of AI Agents — and Builds the Stack to Run Them
At GTC 2026, Jensen Huang unveiled the Vera Rubin platform, a full-stack computing system comprising seven chips, five rack-scale systems, and one supercomputer purpose-built for agentic AI. It delivers 10x more performance per watt than Grace Blackwell. Huang projected combined purchase orders for Blackwell and Vera Rubin reaching $1 trillion through 2027. On the software side, NVIDIA launched the Agent Toolkit with OpenShell, an open source runtime enforcing policy-based security for autonomous agents, now adopted by Adobe, Atlassian, Salesforce, SAP, ServiceNow, and Siemens. CNBC | NVIDIA Newsroom
Why it matters: NVIDIA isn’t just selling chips anymore — it’s building the entire operating environment for autonomous AI. Hardware, runtime, security guardrails, enterprise integrations. The Agent Toolkit with OpenShell is the play to become the default platform layer, the way AWS became the default cloud. When Jensen says “the age of AI agents,” he means the age of NVIDIA infrastructure. And with $1 trillion in projected orders, the market agrees.
Final Thought
This was the week the scaffolding became visible. Washington laid out a legislative framework. Brussels delayed its own. NIST started standardizing agents. NVIDIA built the platform to run them. Samsung committed $73 billion to power them. And in the middle of it all, companies are firing tens of thousands of people — some to build AI, some because of AI, and some just because the market rewards saying “AI.”
The infrastructure layer is consolidating. The regulatory layer is fragmenting. And the workforce implications are accelerating faster than any governance structure can absorb. If you’re advising a board right now, the Skadden/Harvard analysis isn’t optional reading — it’s the starting brief.
See you next week.
Subscribe to Kernel Weekly: thekernel.io/newsletter
Sources: White House, European Parliament, Harvard Law School Forum on Corporate Governance, Skadden, NIST, OpenAI, NVIDIA, Eli Lilly, Harvey AI, Bloomberg, CNBC, TechCrunch, CNN, Fortune, Cybersecurity Dive, Roll Call