Ottawa (Rajeev Sharma): The revelation that OpenAI banned the ChatGPT account of mass shooter Jesse Van Rootselaar months before she killed eight people in Tumbler Ridge, British Columbia, has sparked intense scrutiny of AI safety and law enforcement intervention. In the wake of one of Canada's most devastating mass killings, federal officials and crime experts are asking whether the tech giant's failure to report the 18-year-old's violent online behavior was a catastrophic missed opportunity for prevention.
Artificial Intelligence Minister Evan Solomon has formally summoned OpenAI officials to Ottawa to account for the company's safety protocols and reporting thresholds. The controversy centers on OpenAI's admission that it identified "misuses of our models in furtherance of violent activities" as early as June 2025. Despite banning the account, the company opted not to notify police, concluding that the activity did not meet its internal "higher threshold" for credible or imminent threats. OpenAI defended its stance by citing privacy concerns and the distress that law enforcement intervention can cause young users and their families.
The tragedy has also cast light on the systemic failures that preceded the attack. Local police were already aware of Van Rootselaar's history of mental health challenges, which included diagnoses of ADHD, depression, OCD, and autism. Notably, authorities had previously removed firearms from her home, only to return them later. Criminologists and mental health experts, such as University of Ottawa professor Tracy Vaillancourt, argue that while privacy is essential, the refusal to refer such high-risk cases to authorities is a failure of social responsibility. Conversely, some legal experts warn against turning tech companies into a "private surveillance wing" of law enforcement, a role that could disproportionately affect marginalized communities.
The investigation into the massacre remains active as British Columbia Premier David Eby and other leaders demand greater transparency from tech companies whose platforms are increasingly viewed as the “new public sphere.” Beyond ChatGPT, it was discovered that Van Rootselaar had created a shooting simulation game on the Roblox Studio app shortly before the tragedy. While OpenAI has expressed its condolences and pledged to support the Royal Canadian Mounted Police, the debate continues over whether current AI regulations are sufficient to prevent such violence in an era where digital footprints often precede physical harm.
