Families Sue OpenAI Over ChatGPT Role in Deadly Tumbler Ridge Shooting

Sources: Lex Blog

Families of Tumbler Ridge shooting victims sued OpenAI and CEO Sam Altman in U.S. federal court, alleging negligence tied to ChatGPT use by the shooter, Jesse Van Rootselaar.

Why it matters: The case is one of the first to test whether AI companies like OpenAI have a legal obligation to report or act on user conduct linked to threats. Its outcome could define the scope of liability and reporting duties for AI platforms handling dangerous or criminal behavior.

  • Suit filed April 29, 2026, in U.S. federal court against OpenAI and CEO Sam Altman.
  • Plaintiffs say OpenAI’s failure to alert law enforcement about ChatGPT user Jesse Van Rootselaar contributed to the Tumbler Ridge shooting in British Columbia.
  • OpenAI banned Van Rootselaar’s ChatGPT account in June 2025 for discussing gun violence but did not notify police.
  • OpenAI’s safety team reportedly urged a referral to authorities, but company leadership declined; no law sets the threshold for such referrals.

The families affected by the Tumbler Ridge, British Columbia shooting have filed a wrongful death lawsuit in U.S. federal court against OpenAI and CEO Sam Altman. The complaint, lodged April 29, 2026, alleges OpenAI was grossly negligent for failing to report dangerous user activity by Jesse Van Rootselaar on ChatGPT ahead of the February 2026 mass shooting.

  • Van Rootselaar killed seven people and injured 27 at two locations before dying by suicide, making it one of Canada’s deadliest mass shootings (AP).
  • OpenAI’s automated moderation flagged and banned Van Rootselaar’s ChatGPT account in June 2025 over discussions about gun violence, but the company decided the incident did not cross its “threshold for referral,” meaning it did not pass the matter to police (AP).
  • The safety team at OpenAI reportedly wanted to alert law enforcement, but senior leadership overruled the recommendation, citing privacy and business risks (The Guardian).

Attorney Jay Edelson, representing the families, called leadership’s decision “pretty close to the definition of evil” after children and adults died in the shooting.

In a public statement on April 24, CEO Sam Altman apologized: “I am deeply sorry that we did not alert law enforcement to the account that was banned in June” (AP).

This suit stands out because no explicit U.S. or Canadian law currently requires AI companies to report user conversations unless imminent harm or a specific threat is detected, and those thresholds are themselves under legal scrutiny. The outcome could shape new standards for identifying and escalating potentially criminal AI-assisted behavior.

OpenAI, with a valuation near $1 trillion, faces unprecedented scrutiny as the courts weigh tech liability in tragedies enabled or amplified by digital platforms.

By the numbers:

  • 7 — Number of people killed in the Tumbler Ridge shooting
  • 27 — Number of people injured
  • $1 trillion — OpenAI’s estimated valuation ahead of possible IPO

Yes, but: No law currently requires AI companies to automatically report user discussions of violence to the authorities unless there is a clear, imminent threat, and that standard is still evolving in AI contexts.

What's next: The court will decide whether OpenAI’s duty of care extends to monitoring and reporting user conduct, possibly setting new legal precedents for AI platform liability.