Court Considers Blocking ChatGPT User After Harassment Lawsuit Targets OpenAI

Sources: Volokh Conspiracy

A California plaintiff asks the court to bar an ex-boyfriend's ChatGPT access over alleged abuse.

Why it matters: The case could clarify AI providers' legal responsibilities when users pose safety risks. Its outcome may influence how tech companies handle user abuse, liability, and regulatory obligations.

  • Jane Doe sued OpenAI on April 10, 2026, alleging ChatGPT enabled criminal harassment.
  • The accused user’s account was flagged for weapon-related activity but reinstated after review.
  • Doe seeks a restraining order to block access, monitor new attempts, and preserve chat records.
  • OpenAI suspended user accounts following the lawsuit and is reviewing the court filing.

A potentially precedent-setting lawsuit in California is testing whether courts can require AI firms to restrict access by individuals accused of criminal harassment. Jane Doe's April 10, 2026, filing claims her ex-boyfriend used ChatGPT to sustain a campaign of harassment and defamatory accusations. Prosecutors later filed felony charges against him for bomb threats and assault.

  • Doe alleges the ex-boyfriend used ChatGPT to manufacture fake psychological evaluations and share them widely.
  • OpenAI’s automated tools flagged his account in August 2025 due to suspected discussions of weapons, but human review reactivated access the next day.
  • Doe reported further abuse in November 2025; OpenAI called it “extremely serious and troubling” but took no immediate further steps.

After his arrest in January 2026, Doe’s legal complaint cited ChatGPT’s role in supporting the harassment campaign. The suit requests a court order forcing OpenAI to block both current and new account access by the accused, alert Doe of future attempts to regain access, and retain all relevant chat data.

OpenAI has stated, “We are reviewing the plaintiff’s filing to understand the details, and with current information, we've identified and suspended relevant user accounts.” CNN Tech coverage notes this case highlights the tension between AI product safety and user rights, and industry experts say it could influence how tech companies implement monitoring or restriction tools.

The legal community is closely monitoring the court’s approach for signals on liability and possible regulatory change as generative AI becomes more embedded in daily life. Independent analysts suggest outcomes here may set guidance for AI policy and for legal expectations around user management. NPR Tech reports legal scholars anticipate increased judicial scrutiny over AI tools where public safety is alleged to be at risk.

By the numbers:

  • 4 — Felony counts charged against the accused user, including assault and bomb threats
  • 2 — Times OpenAI reviewed and acted on user complaints before and after the lawsuit

Yes, but: Legal experts caution that broad restrictions on user access could raise complex First Amendment and due process questions for AI platforms.

What's next: The court will hear oral arguments on the temporary restraining order on May 20, 2026.