AI-Generated False Citations Lead to Sanctions Against Butler Snow Attorneys

Key points:

  • Butler Snow attorneys sanctioned for submitting court filings with AI-generated false citations.
  • U.S. District Judge Anna Manasco disqualified three lawyers from the case and referred them to the Alabama State Bar.
  • The incident underscores the risks of unverified AI use in legal research and the necessity for human oversight.

A federal judge has sanctioned three attorneys from the law firm Butler Snow for submitting court filings containing fabricated legal citations generated by ChatGPT. U.S. District Judge Anna Manasco reprimanded William R. Lunsford, Matthew B. Reeves, and William J. Cranford for including unverified, entirely fictitious legal references in documents related to a case involving an inmate who was repeatedly stabbed at the William E. Donaldson Correctional Facility. The filings alleged that prison officials had failed to ensure inmate safety. ([apnews.com](https://apnews.com/article/c6a64736cb488cf6379624403d3757ca?utm_source=openai))

Judge Manasco characterized the use of AI without verification as "recklessness in the extreme" and removed the lawyers from the case. She also ordered them to share the sanctions order with all current clients, opposing counsel, and judges, and referred the incident to the Alabama State Bar for possible disciplinary action. ([apnews.com](https://apnews.com/article/c6a64736cb488cf6379624403d3757ca?utm_source=openai))

The attorneys had previously apologized, admitting they had failed to confirm the AI-generated citations. Butler Snow, which has received more than $40 million from Alabama since 2020 to represent the state in multiple prison-related lawsuits, expressed embarrassment over the errors and acknowledged that relying on unverified AI sources violated its own practices. ([apnews.com](https://apnews.com/article/8cbaf729dafc2b56bee59545391707c0?utm_source=openai))

The case highlights growing concern over the use of artificial intelligence in legal research. While AI tools like ChatGPT can speed up information gathering, they are prone to "hallucinations": plausible-sounding but incorrect or fabricated output. Legal professionals are reminded of the critical importance of verifying every source and maintaining rigorous oversight when incorporating AI into their workflows.
The incident serves as a cautionary tale for the legal community, emphasizing that while AI can be a valuable tool, it cannot replace the due diligence and critical analysis required in legal practice. As AI continues to evolve, law firms must establish clear guidelines and training to ensure its responsible use, safeguarding the integrity of legal proceedings.