Florida AG Probes OpenAI After FSU Shooting Tied to ChatGPT Use
Florida's Attorney General has launched an investigation into OpenAI linked to the 2025 FSU shooting.
Why it matters: AI platforms face rising scrutiny over potential misuse. Legal teams must assess the exposure of tech providers to liability and regulatory risks when their tools are allegedly used for harm.
- AG James Uthmeier opened a formal probe into OpenAI on April 9, 2026.
- Attorneys allege suspect Phoenix Ikner relied on ChatGPT to plan the April 17, 2025 attack.
- Victim Robert Morales' attorney intends to file a product liability suit against OpenAI.
- OpenAI is cooperating with law enforcement and rolled out a new AI Safety Bug Bounty program.
Florida Attorney General James Uthmeier has initiated an investigation into OpenAI following reports that suspect Phoenix Ikner used ChatGPT in planning the April 17, 2025 shooting on the Florida State University campus.
- On April 9, 2026, Uthmeier stated his office is examining whether OpenAI adequately protected its platform from criminal misuse and whether its data security practices could allow foreign access to sensitive information.
- Attorney Barry Richard, who represents victim Robert Morales, alleges that Ikner used ChatGPT conversations to help plan the attack and cited over 270 images and chatbot logs as evidence. Richard stated that a product liability lawsuit against OpenAI is being prepared.
- OpenAI has confirmed it located a ChatGPT account likely linked to Ikner and is working with law enforcement. The company emphasized ongoing efforts to detect and handle policy violations by users.
- In response to rising concerns around AI misuse, OpenAI recently introduced a public Safety Bug Bounty program, offering incentives for researchers to identify security flaws.
A key technical risk is so-called prompt injection—where users manipulate AI outputs by crafting specific inputs. This vector is cited in recent conversations about vulnerabilities in generative AI, including those involving OpenAI's Atlas browser. Legal experts note that liability rules for providers remain unsettled as courts test the boundaries of product responsibility in the context of AI-facilitated harm.
The Florida AG's investigation underscores increasing regulatory attention on AI providers and the growing need for in-house counsel and compliance teams to track emerging legal risks associated with AI deployment.
By the numbers:
- April 9, 2026 — Date AG Uthmeier formally announced the OpenAI probe
- Over 270 — Number of images and chatbot logs cited as evidence by victim's attorney
Yes, but: The AG's claims about possible foreign access to OpenAI's data have not been independently confirmed.
What's next: Attorney Barry Richard plans to file the product liability lawsuit against OpenAI in coming weeks.