Canada's Privacy Commissioner, Philippe Dufresne, determined that OpenAI failed to comply with federal and provincial privacy laws when developing its artificial intelligence systems. Following an investigation, Dufresne, along with officials from Alberta, Quebec, and British Columbia, concluded that OpenAI's data-collection and consent practices violated several statutes, most notably the Personal Information Protection and Electronic Documents Act (PIPEDA), which governs the handling of personal data in commercial activities.
The commissioners identified several shortcomings in OpenAI's data practices, including the collection of vast amounts of personal information without adequate safeguards against its incorporation into AI training, and the failure to obtain valid consent for that collection and use. Although ChatGPT warns users that their conversations may be used to improve its models, the externally sourced data, whether purchased or scraped, often contains personal information that individuals never knew was collected. Investigators also noted the absence of mechanisms for ChatGPT users to access, correct, or delete their data, as detailed in the investigation summary, as well as OpenAI's insufficient efforts to address inaccuracies in the chatbot's outputs.
The Canadian Privacy Commissioner noted that OpenAI cooperated fully with the inquiry and has committed to several changes to bring ChatGPT into line with Canadian privacy standards. According to the Commissioner, OpenAI has discontinued earlier versions that violated Canadian rules and now uses a screening mechanism to detect and redact sensitive details, such as names and phone numbers, in scraped or licensed training data. In addition, OpenAI will introduce within three months a clearer disclaimer on the version of ChatGPT available without an account, warning that conversations may be used for training and advising users not to share confidential information, and within six months it will:
- Simplify its data-download features to make them more accessible and usable, and clarify the process for disputing the accuracy of ChatGPT's outputs.
- Assure the Privacy Commissioners of robust safeguards for archived datasets to prevent their reuse in ongoing projects.
- Evaluate protections for children of prominent individuals who are not themselves public figures, verifying that the AI refuses requests for their personal identifiers, such as names or birthdates.
Although the Canadian review of OpenAI's data practices began in 2023, the company has faced intensified scrutiny recently because of its links to the February 2026 mass shooting in Tumbler Ridge. Reports indicated that OpenAI had detected threats of real-world harm in the suspect's account as early as 2025 but did not alert Canadian authorities. In the aftermath, officials pressed for safety improvements, and OpenAI committed to closer cooperation with Canadian police and health organizations going forward.