Families of victims of the February 10 school shooting in Tumbler Ridge, British Columbia have filed lawsuits against OpenAI, accusing the company of negligence. The filings come in the wake of a public letter of regret that OpenAI chief executive Sam Altman addressed to the community.

The attack, one of the deadliest shootings in Canadian history, was carried out by 18-year-old Jesse Van Rootselaar, who entered the local secondary school, killed five students and a teacher, seriously wounded two others, and then took her own life. Authorities found that she had killed her mother and an 11-year-old stepsibling before going to the school.

According to NPR, attorneys for some of the bereaved families filed six separate lawsuits on Wednesday in U.S. federal court in San Francisco. One complaint, brought on behalf of shooting survivor Maya Gebala, alleges that OpenAI's automated moderation systems flagged Van Rootselaar's ChatGPT conversations in June 2025 — more than six months before she entered the school armed with a shotgun and a modified firearm — for signs of "gun violence activity and planning." The complaint further alleges that the company's safety staff recommended notifying law enforcement, but that OpenAI instead merely banned her account, after which she created a new one and continued her conversations with the chatbot.

An OpenAI spokesperson told Engadget that the events in Tumbler Ridge are a profound tragedy, and stressed that the company strictly prohibits the use of its products to plan or carry out acts of violence. The spokesperson said that, in communications with Canadian authorities, OpenAI has strengthened its safeguards, including improving how ChatGPT responds to signs of distress, connecting users to local crisis and mental-health resources, bolstering how potential threats of violence are assessed and reported, and improving detection of repeat offenders.

In a statement posted to its website late Tuesday, OpenAI described its safety protocols. The post said the company is working to broaden its defenses so that ChatGPT can recognize subtle signs of potential harm across a range of scenarios. It noted that some risks build gradually: a single message may seem innocuous on its own, while recurring themes across a long conversation — or across multiple sessions — can point to something more serious.

The new filings are the latest attempt to hold OpenAI legally accountable for the design of its technology. Last summer, the parents of teenager Adam Raine, who died by suicide in 2025, filed the first known negligence lawsuit of its kind against an AI company, alleging that ChatGPT was aware of four of Raine's prior suicide attempts before his death.