KPMG Expands AI Assurance to Address Global Regulatory Demands

Sources: Lex Blog

KPMG expanded its global AI Trust Assurance services for legal and compliance teams in 2025, targeting new AI regulations.

Why it matters: Legal departments increasingly bear the responsibility to prove AI compliance, facing audit, litigation, and enforcement risks. KPMG’s expanded services promise clear documentation and oversight, helping in-house and law firm teams deliver defensible compliance with evolving laws and standards.

  • KPMG’s AI Trust Assurance services debuted globally in May 2025, with advanced model assurance added in September 2025.
  • Legal teams must now align with complex requirements under the EU AI Act and ISO/IEC 42001:2023.
  • NIST AI RMF 1.0 calls for thorough documentation and structured AI risk controls, now viewed by many regulators as a baseline.
  • Expert insight: Dr. Megan Palmer, Stanford University, stresses independent oversight as critical for credible compliance.

KPMG’s expanded AI Trust Assurance targets a sharp rise in legal and regulatory scrutiny of artificial intelligence. Launched globally in May 2025 and enhanced in September, these services support legal and compliance teams by providing risk mapping, control validation, and ongoing monitoring of AI models—a direct answer to global expectations for accountable, transparent AI practices.

  • The NIST AI Risk Management Framework (AI RMF 1.0) sets out step-by-step risk management and documentation practices. Legal teams need to ensure each stage of the AI lifecycle, from development through testing to deployment, is carefully recorded to withstand scrutiny in audits or litigation. In the United States, the framework is fast becoming regulatory shorthand for what courts and regulators expect.
  • The EU AI Act and ISO/IEC 42001:2023 both add strict requirements for high-risk systems, including explicit policies, continuous record-keeping, human oversight, and regular audits. Lawyers can face direct liability for deficiencies in record-keeping or for discriminatory outcomes.
  • KPMG’s controls help translate these standards into workflows: building documentation, approving changes, and keeping real-time records for each AI model (a minimal sketch of such a record appears after this list). This helps teams respond to regulatory inquiries and demonstrate compliance if challenged legally.
  • Dr. Megan Palmer, an expert in responsible AI at Stanford University, notes, “Regulators are watching for not just completed checklists but ongoing, independent oversight that’s documented over time.”
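To make the record-keeping expectations above more concrete, here is a minimal, hypothetical sketch in Python of a per-model lifecycle log of the kind that NIST AI RMF and ISO/IEC 42001-style documentation contemplates. The class names, fields, and example entries are illustrative assumptions, not KPMG's actual tooling or an official schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical illustration only: not KPMG's tooling or an official NIST/ISO
# schema. It sketches the kind of structured, per-model lifecycle record the
# article describes (who did what, when, and who approved it).

@dataclass
class LifecycleEvent:
    stage: str          # e.g. "development", "testing", "deployment"
    description: str    # what was done or decided at this stage
    approved_by: str    # named human reviewer, supporting oversight requirements
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ModelRecord:
    model_name: str
    risk_tier: str                      # e.g. "high-risk" under the EU AI Act
    events: List[LifecycleEvent] = field(default_factory=list)

    def log(self, stage: str, description: str, approved_by: str) -> None:
        """Append an auditable entry; records are append-only by convention."""
        self.events.append(LifecycleEvent(stage, description, approved_by))


# Example: assembling the kind of audit trail a regulator or court might request.
record = ModelRecord(model_name="claims-triage-model", risk_tier="high-risk")
record.log("development", "Trained v1.2 on de-identified 2024 claims data", "ML lead")
record.log("testing", "Bias evaluation across protected attributes met thresholds", "Compliance officer")
record.log("deployment", "Released to production with human review of adverse decisions", "Legal counsel")

for event in record.events:
    print(f"{event.timestamp} [{event.stage}] {event.description} (approved by {event.approved_by})")
```

In practice such records would live in a controlled system of record rather than application code, but the structure illustrates what continuous, attributable documentation of an AI model's lifecycle can look like.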

For legal teams, the expansion means shifting from reactive, post-hoc defenses to proactive, continuous compliance, easing the burden of responding to inquiries, preparing for audits, and managing cross-border liabilities. Even so, industry experts remind lawyers that external audits and robust documentation only go so far without demonstrably independent oversight and regular enforcement of AI policies.

By the numbers:

  • May 2025 — Global launch of KPMG's AI Trust Assurance services
  • 2023 — ISO/IEC 42001 published as first international AI management standard
  • 9,000+ — Estimated number of high-risk AI systems subject to new reporting requirements under the EU AI Act

Yes, but: Even with third-party assurance, regulators and courts may scrutinize internal independence and actual daily adherence to AI risk policies—not just formal documentation.

What's next: KPMG plans to offer additional region-specific compliance modules in late 2025 as regulatory regimes evolve.