Stanford AI Index: 90% of Top AI Models Built by Industry in 2025
Stanford's 2026 AI Index finds more than 90% of leading AI models were industry-built last year.
Why it matters: Legal professionals face novel risks and compliance gaps as AI progress outpaces regulation. Real-time oversight is critical as incidents jump and legal exposure widens.
- Over 90% of top AI models released in 2025 were built by private industry, per Stanford HAI report.
- AI incidents rose 55% year-over-year to 362 in 2025, signaling mounting legal and reputational risks.
- U.S. private AI investment hit $285.9B in 2025, more than 22 times China's figure.
- Only 6% of K-12 teachers found AI policy clear, illustrating broader regulatory uncertainty.
The 2026 AI Index Report from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) finds commercial players are driving the overwhelming majority of recent progress in advanced AI, outpacing the evolution of legal and regulatory systems in the U.S. and globally.
- Of the major frontier AI models (those at the cutting edge of performance and efficiency) launched in 2025, more than 90% originated in industry rather than academia or cross-sector labs. These models now rival or surpass humans in tasks from math and language reasoning to generating images and video.
- Reported AI-related incidents, including misuse and safety failures, spiked 55% compared to 2024, reaching 362 documented cases and sharpening legal concern over liability and governance.
- The U.S. cemented its global AI lead with $285.9 billion in private investment in 2025, and 5,427 tracked AI data centers, per Stanford’s survey. In contrast, Chinese investment reached just $12.8 billion.
- Policy confusion is widespread: in 2025, only 6% of surveyed K-12 teachers felt school guidance on AI use was clear, mirroring broader uncertainty across public and private sectors.
Experts say the regulatory response is not keeping pace with technology. According to Stanford HAI co-chairs Yolanda Gil and Raymond Perrault, “The data does not point in a single direction. It reveals a field that is scaling faster than the systems around it can adapt.”
Outside commentary echoes the legal stakes: The New York Times recently detailed how law firms are tracking the surge in AI incident-related litigation and the pressure on in-house counsel to proactively update risk frameworks in response to new threats.
With generative AI reaching more than half the global population within three years of its debut (per Stanford HAI), legal teams must rapidly adapt oversight and compliance strategies to evolving regulatory realities.
By the numbers:
- >90% — Share of 2025's top AI models built by private industry (Stanford HAI)
- 55% — Year-over-year increase in documented AI incidents in 2025 (Stanford HAI)
- $285.9B — U.S. private AI investment in 2025, roughly 22x China's (Stanford HAI)
Yes, but: Many cutting-edge AI advances remain concentrated in a handful of large tech companies, potentially complicating future regulation and access.
What's next: New U.S. and EU regulatory guidelines for AI are expected by early 2027, aiming to address gaps highlighted in the report.