UK Courts Tackle Generative AI Risks in Legal Disclosure

Sources: Lex Blog

UK courts and regulators are scrutinizing generative AI's role in legal disclosure, urging transparency and oversight.

Why it matters: Firms and in-house teams must address the risks of AI-generated content as it permeates disclosure workflows. Maintaining defensibility, accuracy, and compliance is now critical under evolving best practice and regulatory expectations.

  • ILTA released the 'Generative AI Best Practice Guide' on 30 September 2025, outlining responsible AI use in legal disclosure.
  • The Civil Justice Council launched a February 2026 consultation, proposing mandatory disclosure of AI use in court documents.
  • In the 2025 Ayinde case, the court warned against reliance on unverified, fictitious authorities generated by AI, underscoring the need for human oversight.
  • Legal thought leaders and the Law Society stress structured validation, transparency, and proportionality in AI-assisted disclosure.

The rapid adoption of generative AI across UK legal practice is reshaping disclosure, with regulators and practitioners racing to set new guardrails.

  • ILTA's 'Generative AI Best Practice Guide', released in September 2025, gives firms a framework to agree on GenAI use at the outset of proceedings, aiming to avoid later disputes under Practice Direction 57AD. Fiona Campbell, Partner at Fieldfisher, welcomes the guide as a way to streamline agreements on these tools early in the disclosure process.
  • Tom Whittaker of Burges Salmon notes the guide's focus on practical compliance, and Imogen Jones at DAC Beachcroft underscores the importance of rigorous testing and oversight, stating, "the guide emphasises the need for structured testing, validation and proper oversight."
  • On the regulatory side, the Civil Justice Council opened a consultation in February 2026 that could require practitioners to disclose any AI use in preparing court materials—an indicator that transparency is now a compliance concern, not just best practice.
  • The judicial response is evolving. In R. (on the application of Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin), the court flagged serious risks when parties relied on AI-generated authorities without verification. The threat of fictitious or inauthentic documentation spurred industry bodies, including the International Bar Association, to call for robust validation processes.

Legal professionals now face a dual imperative: harness GenAI's efficiencies while preserving the integrity of the disclosure process. Jonathan Howell of DAC Beachcroft cautions that, as e-disclosure evolves, defensibility must not come at the expense of innovation.

By the numbers:

  • 30 September 2025 — ILTA's 'Generative AI Best Practice Guide' published
  • 23 February 2026 — CJC launches consultation on AI disclosure rules
  • 2025 — Ayinde decision addresses risks of AI-generated authorities

Yes, but: Comprehensive information on adoption rates of AI tools and outcomes of the CJC consultation is not yet available.

What's next: The legal community is awaiting outcomes from the CJC's 2026 consultation, which could lead to new mandatory rules on AI use in disclosure.