Saturday, February 21, 2026

OpenAI flagged & banned Canada suspect 8 months before mass shooting, didn’t alert police

The AI company said suspected killer Jesse Van Rootselaar’s account was flagged about 8 months earlier by systems that scan for misuse, including potential furthering of violent activities.


OpenAI flagged and banned the suspect in one of Canada’s worst-ever mass shootings for violating ChatGPT’s usage policies in June last year, without referring her to police.

The artificial intelligence company said an account associated with the suspected killer, Jesse Van Rootselaar, was flagged about eight months earlier by systems that scan for misuse of its AI models, including the possible furthering of violent activities, and was banned.

Canadian police alleged that the 18-year-old killed eight people and injured about 25, before taking her own life in the remote western Canadian town of Tumbler Ridge earlier this month.

The Wall Street Journal first reported OpenAI’s identification of Van Rootselaar, citing anonymous sources as saying the alleged killer “described scenarios involving gun violence over the course of several days.” That triggered an internal debate among roughly a dozen staffers, some of whom urged leaders to alert police.

OpenAI said it considered referring the account to law enforcement at the time, but did not identify credible or imminent planning and determined the activity did not meet its threshold for referral. After the shooting, the company contacted Canadian authorities.

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said by email. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”

The company said it trains ChatGPT to discourage imminent real-world harm.

This report is auto-generated from Bloomberg news service. ThePrint holds no responsibility for its content.


