OpenAI is facing a lawsuit over its decision not to alert authorities to violent ChatGPT messages from the perpetrator of last month's school shooting in Tumbler Ridge, BC. The family of 12-year-old Maya Gebala, who was critically wounded in the shooting that killed eight, alleges the teenage suspect described "various scenarios involving gun violence" to the chatbot, treating it as a "trusted confidante," per the BBC. According to the complaint, a ChatGPT account linked to Jesse Van Rootselaar was banned in June 2025 after OpenAI staff flagged messages as posing an "imminent risk of serious harm" and recommended alerting Canadian authorities.
The lawsuit claims that recommendation was rejected, police were never notified, and the suspect later opened a second account used to "continue planning scenarios involving gun violence." The company "had specific knowledge of the shooter's long-range planning of a mass casualty event," but "took no steps to act upon this knowledge," the suit claims, per the CBC. The plaintiffs also say OpenAI failed to verify the user's age despite requiring users under 18 to have parental consent, and designed the chatbot so that users in general would become "psychologically and socially dependent" on it.
OpenAI has previously said the conversations did not meet its internal standard for a "credible or imminent" threat that would trigger a police report. In a statement, the company called the shooting an "unspeakable tragedy" and said it is working with governments and law enforcement to improve safeguards. It has since pledged to lower its threshold for alerting authorities, bring in mental health and behavioral experts, strengthen its detection systems, and set up a direct line to Canadian police. Canada's AI minister, Evan Solomon, said officials welcome the commitments but are still waiting for a detailed implementation plan.