California Supreme Court Demands Answers on AI Use in Bar Exam Questions Controversy

Key points:

  • The California Supreme Court was unaware of the State Bar's use of AI in crafting February 2025 bar exam questions.
  • The court demands full disclosure on the use, reliability, and oversight of AI-generated questions.
  • State Bar leadership acknowledges a communication breakdown and is implementing structural reforms.
The California Supreme Court has asked the State Bar of California to fully explain its use of artificial intelligence (AI) in developing multiple-choice questions for the February 2025 bar examination. The request follows the court's recent discovery that AI was employed without prior notification or approval.

In a statement issued on April 24, 2025, the court expressed concern over the lack of transparency and oversight in the exam preparation process. The justices were reportedly unaware that the State Bar had permitted its independent psychometrician, ACS Ventures, to use AI in crafting a subset of exam questions. The court has now requested detailed information on how and why AI was used, the measures taken to ensure the reliability of these questions, and whether any AI-generated questions were excluded from scoring due to reliability concerns. ([latimes.com](https://www.latimes.com/california/story/2025-04-24/california-supreme-court-demands-state-bar-answer-ai-questions?utm_source=openai))

The inquiry arises amid broader problems with the February exam, which was marred by technical failures and irregularities; the State Bar has petitioned the court to adjust test scores for affected candidates. ([legal.io](https://www.legal.io/articles/5672299/California-Supreme-Court-Demands-Answers-on-AI-Use-in-Bar-Exam-Questions?utm_source=openai))

The controversy centers not only on the use of AI but also on the rigor of the vetting process for a high-stakes exam that determines the licensure of aspiring attorneys in California. It also raises questions about the transparency of the State Bar's decision-making, especially as it transitions away from the National Conference of Bar Examiners' Multistate Bar Examination to a new hybrid model of in-person and remote testing.
In response to the court's demand, the State Bar acknowledged a breakdown in communication and said structural changes are being implemented to address the lapse. The Bar has not, however, disclosed which AI platform was used or how it was trained to generate questions suitable for assessing minimal competence to practice law in California. ([legal.io](https://www.legal.io/articles/5672299/California-Supreme-Court-Demands-Answers-on-AI-Use-in-Bar-Exam-Questions?utm_source=openai))

The State Bar maintains confidence in the validity of the AI-assisted multiple-choice questions, asserting that all questions were reviewed by content validation panels and subject matter experts for legal accuracy, minimum competence, and potential bias. When measured for reliability, the combined scored multiple-choice questions from all sources, including AI, performed above the psychometric target of 0.80. ([latimes.com](https://www.latimes.com/california/story/2025-04-23/state-bar-of-california-used-ai-for-exam-questions?utm_source=openai))

Despite these assurances, the revelation has sparked significant concern among legal educators and professionals. Mary Basick, assistant dean of academic skills at UC Irvine Law School, described the situation as "worse than we imagined," expressing disbelief that questions were drafted by non-lawyers using AI. ([latimes.com](https://www.latimes.com/california/story/2025-04-23/state-bar-of-california-used-ai-for-exam-questions?utm_source=openai))

The California Supreme Court's demand for answers underscores the importance of transparency and reliability in the bar examination process, especially as new technologies like AI are integrated into high-stakes assessments.