Families of victims of a school shooting in a remote Canadian Rockies town are suing artificial intelligence company OpenAI in a United States federal court, alleging that the ChatGPT maker failed to alert police to the shooter’s alarming interactions with the chatbot.
A lawsuit filed on Wednesday on behalf of 12-year-old Maya Gebala, who was critically injured in the February shooting, is among the first of more than two dozen cases from families in Tumbler Ridge, British Columbia. Lawyers say this represents “an entire community stepping forward to hold OpenAI accountable.”
Six other lawsuits filed in a San Francisco federal court allege wrongful death claims on behalf of five children and an educator killed in Canada’s deadliest mass shooting in years. The victims include Zoey Benoit, Abel Mwansa Jr, Ticaria “Tiki” Lampert and Kylie Smith, all 12; Ezekiel Schofield, 13; and education assistant Shannda Aviugana-Durand.
According to police, 18-year-old Jesse Van Rootselaar shot her mother and stepbrother at home before killing an education assistant and five students aged 12 to 13 at her former school on February 10. She then died by suicide. Twenty-five other people were injured.
An OpenAI spokesperson called the shooting “a tragedy” and said the company has a zero-tolerance policy for using its tools to assist in committing violence. However, the lawsuits allege that OpenAI’s automated systems flagged ChatGPT conversations in June 2025 in which the attacker described gun violence scenarios.
Safety team members recommended contacting police after concluding she posed a credible and imminent threat, but CEO Sam Altman and other OpenAI leadership overruled the safety team, the lawsuit claims. The shooter’s account was deactivated, but she was able to create a new account and continue using the platform to plan her attack.
The lawsuits allege “the victims didn’t learn this because OpenAI was forthcoming, but because its own employees leaked it to The Wall Street Journal after they could no longer stomach the company’s silence.” OpenAI denies the claims, stating it trains its models to refuse requests that could “meaningfully enable violence” and notifies law enforcement when conversations suggest “an imminent and credible risk of harm to others.”
Lawyer Jay Edelson said he plans to file another two dozen lawsuits in the coming weeks. The lawsuits seek an unspecified amount of damages and a court order requiring OpenAI to overhaul its safety practices, including mandatory law enforcement referral protocols.
Source: www.aljazeera.com