This week, Canadian authorities called OpenAI executives to Ottawa to discuss safety issues surrounding ChatGPT. At the heart of the officials' worries was the company's failure to alert regulators after suspending an account tied to an individual suspected of carrying out a mass shooting in British Columbia earlier this month.

Justice Minister Sean Fraser said officials pressed the company to implement safety improvements immediately, warning that the government would intervene directly if it failed to act swiftly. What form such regulation might take remains unclear; Canada has twice failed to pass legislation aimed at curbing online harms.

A Wall Street Journal investigation found that in 2025, some OpenAI employees flagged the account of the alleged gunman, Jesse Van Rootselaar, as potentially signaling intent to commit real-world violence and recommended notifying authorities. Although the account was eventually suspended for policy violations, an OpenAI spokesperson said the content did not meet the company's threshold for referral to local law enforcement.

Canadian Artificial Intelligence Minister Evan Solomon called the reports deeply troubling, particularly the suggestion that OpenAI was slow to involve law enforcement. Ahead of the meeting with the executives, he said he wanted a direct conversation about the company's safety mechanisms, including its escalation protocols and the thresholds it applies before reporting to police, in order to better understand how it operates.

OpenAI has faced numerous wrongful-death lawsuits. One suit, filed in December 2025, accused ChatGPT of reinforcing delusional beliefs in a man who went on to kill his mother and then himself. The company is also among several AI chatbot developers facing legal actions claiming their products helped adolescents plan and carry out suicides.