California Judge Blocks ChatGPT User Over Stalking Allegations

Sources: Volokh Conspiracy

A California judge ordered OpenAI to block a ChatGPT user following stalking allegations.

Why it matters: Legal and compliance teams must track new risks as courts scrutinize AI platforms’ responsibilities for harmful user behavior. This case signals that AI companies may be held accountable for failing to moderate promptly and effectively when abuse is reported.

  • Judge Harold Kahn issued a temporary restraining order on April 13, 2026.
  • OpenAI must block the user’s ChatGPT account until at least May 6, 2026.
  • The plaintiff says her ex-partner used ChatGPT to create and share false psychological reports about her.
  • OpenAI had flagged and briefly suspended the account in August 2025 for harmful content, then reinstated it after review.

The Superior Court of California directed OpenAI to suspend a ChatGPT account after the platform was linked to alleged stalking and harassment. Judge Harold Kahn’s temporary restraining order on April 13, 2026, requires OpenAI to keep the user locked out of ChatGPT until May 6 as the court reviews the case (Bloomberg Law).

  • The plaintiff, identified as Jane Doe, alleges her former partner used ChatGPT to generate and distribute fake psychological evaluations about her. These reports, according to Doe, were shared with her family and colleagues, along with threatening voice messages (Volokh Conspiracy).
  • OpenAI’s system detected prompts concerning the creation of “Mass Casualty Weapons” in August 2025, triggering an automatic account suspension. A human reviewer soon reversed this ban, and the account was restored.
  • After Doe filed an abuse complaint in November 2025, OpenAI responded but did not terminate the account (TechCrunch).
  • The user was arrested in January 2026 on felony charges related to the harassment claims, found unfit for trial, and sent to psychiatric care. Doe sought the restraining order after the user was released because of what court documents describe as procedural issues in the criminal process.

This case highlights how AI platforms can be misused. For legal teams, the ruling suggests that courts may require faster, more robust moderation on generative AI tools when serious abuse is reported. The upcoming hearing may set important guidance about AI companies’ obligations—and potential liability—when notified of user misconduct.

By the numbers:

  • April 13, 2026 — Date of the court’s restraining order against the ChatGPT user.
  • August 2025 — OpenAI initially flagged the account for dangerous content.
  • May 6, 2026 — Next scheduled court review date.

Yes, but: Courts have not yet set a clear national or industry-wide legal standard for when AI companies must act on alleged abuse.

What's next: The court will review whether to extend or modify OpenAI’s suspension of the ChatGPT user at a hearing set for May 6, 2026.