US Commerce Dept to Vet AI Models From Google DeepMind, Microsoft, xAI

Source: Lex Blog

The Commerce Department's CAISI secured agreements to vet AI models from Google DeepMind, Microsoft, and xAI before release.

Why it matters: Federal oversight of frontier AI models could set new standards for legal tech vendor compliance and AI governance. These protocols may become crucial benchmarks for law firms and corporate legal departments integrating AI tools.

  • Under agreements announced May 5, 2026, CAISI can evaluate AI models for national security risks before public launch.
  • Agreements cover Google DeepMind, Microsoft, and xAI, with prior deals including OpenAI and Anthropic.
  • CAISI has completed over 40 AI model evaluations, including for unreleased technologies.
  • Evaluations target threats in areas such as cybersecurity, biosecurity, and chemical weapons, with testing conducted in classified environments where needed.

The U.S. Department of Commerce's Center for AI Standards and Innovation (CAISI) announced formal agreements with Google DeepMind, Microsoft, and xAI to conduct rigorous pre-deployment testing of their AI models.

  • Under the May 5, 2026 announcement, CAISI gains pre-release access to AI models to assess potential national security risks—including cybersecurity, biosecurity, and chemical weapons vulnerabilities—before public launch.
  • This move builds on earlier agreements with OpenAI and Anthropic made in 2024, signaling expanded industry-government collaboration.
  • CAISI, originally established as the AI Safety Institute in 2023 and later renamed, has already completed 40+ evaluations, including of unreleased, state-of-the-art models.

Evaluations will draw on expertise from across the federal government, supported by the CAISI-convened TRAINS Taskforce, and can involve testing within classified settings.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said Chris Fall, CAISI Director.

According to Microsoft's Chief Responsible AI Officer Natasha Crampton, "Testing for national security and large-scale public safety risks necessarily must be a collaborative endeavor with governments." Microsoft has also agreed to comparable pre-deployment testing with the UK's AI Security Institute (ITPro).

The Trump administration is reportedly considering an executive order to formalize the federal oversight process for AI models (The Guardian).

For legal departments and tech-forward law firms, these oversight agreements set a precedent in AI governance and pre-market compliance—especially as AI products increasingly underpin sensitive legal operations.

By the numbers:

  • 40+ — AI model evaluations CAISI has completed to date
  • 2023 — Year CAISI (as AI Safety Institute) was established
  • May 5, 2026 — Date of the latest agreements announcement

Yes, but: Details on CAISI's evaluation methodologies remain undisclosed, and the timeline for a possible executive order is unclear.

What's next: Watch for potential executive action from the Trump administration to cement a federal AI oversight process.