OpenAI Apologizes for Failing to Warn Police Before Tumbler Ridge Shooting
OpenAI CEO Sam Altman apologized for not alerting police after banning a user who later committed a mass shooting.
Why it matters: AI companies face growing legal and compliance exposure if they fail to escalate credible threats identified on their platforms. In-house counsel and law firm advisors should review incident-reporting policies as regulators demand stronger safeguards against user-generated threats.
- OpenAI banned Jesse Van Rootselaar’s ChatGPT account in June 2025 over violent activity but did not notify law enforcement.
- Van Rootselaar killed eight people, including five children, in Tumbler Ridge, BC, on Feb. 10, 2026.
- Sam Altman publicly apologized on April 24, 2026, and OpenAI committed to new protocols for notifying authorities.
- British Columbia Premier David Eby called the apology "necessary, and yet grossly insufficient."
OpenAI CEO Sam Altman issued a public apology on April 24, 2026, acknowledging that OpenAI did not alert police after banning a ChatGPT account over violent content. The account's owner, Jesse Van Rootselaar, later killed eight people and himself in Tumbler Ridge, British Columbia, on February 10, 2026.
"I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognize the harm..." Altman said. (AP)
OpenAI banned Van Rootselaar's account in June 2025 for policy violations involving violent user input. The company did not contact authorities, citing an absence of clear, immediate threat criteria. British Columbia Premier David Eby sharply criticized the company, calling Altman’s apology "necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge." (Washington Post)
- OpenAI announced new safety measures, including direct contact channels with Canadian law enforcement and broader criteria for referring concerning user activity to authorities. (TechCrunch)
What they're saying:
- Legal scholar Dr. Melissa Curry of the University of Toronto Faculty of Law said the incident "exposes a serious regulatory gap regarding AI platform duties to report foreseeable user threats to law enforcement."
- Toronto lawyer Daniel Ng stated, "AI companies now need robust protocols for reviewing and escalating user bans where there is any credible risk of violence."
The shooting has intensified scrutiny of platforms' responsibility to monitor and escalate safety threats, and Canadian and international regulators are expected to consider mandatory reporting standards for AI providers. Legal departments should audit and clarify their response procedures to keep pace with these evolving expectations.
By the numbers:
- 8 — people killed by Jesse Van Rootselaar in Tumbler Ridge on Feb. 10, 2026.
- June 2025 — month OpenAI banned Van Rootselaar’s ChatGPT account for violent activity.
- April 24, 2026 — date Sam Altman issued his public apology.
Yes, but: OpenAI emphasized that no clear, imminent threat indicators were present at the time of the ban, though critics say that standard itself highlights a gap in AI risk assessment and reporting practices.
What's next: Canadian lawmakers plan to introduce legislation to clarify AI platform obligations around user-generated threats.