With mass shootings a constant fear for parents and school administrators across the US, several states have spent the last decade investing in surveillance systems that monitor students' online activity. A recent incident in Florida showed this technology in action: a school monitoring system flagged a student after he asked ChatGPT for advice on how to kill his friend.
The event unfolded when a school-issued computer flagged a concerning query made to OpenAI's ChatGPT. According to local police, the unnamed student asked the AI tool "how to kill my friend in the middle of class." The question immediately triggered an alert through the school's online surveillance system, which is operated by a company called Gaggle.
According to a report by local NBC affiliate WFLA, Volusia County Sheriff's deputies responded to the school and interviewed the student. The teen reportedly told officers he was "just trolling" a friend who had annoyed him. Law enforcement officials, however, were not amused by the explanation. "Another 'joke' that created an emergency on campus," the Volusia County Sheriff's Office stated, urging parents to talk to their children about the consequences of such actions.
The student was subsequently arrested and booked at a county jail, although the specific charges have not been publicly disclosed.
The incident is the latest example of school districts' increasing reliance on surveillance technology to monitor students' digital activity amid rising fears of mass shootings. Gaggle, which provides safety services to school districts nationwide, describes its system as a tool for flagging "concerning behavior tied to self-harm, violence, bullying, and more." The company's website indicates that its monitoring software filters for keywords and gains "visibility into browser use, including conversations with AI tools such as Google Gemini, ChatGPT, and other platforms."
This event comes as chatbots and other AI tools are increasingly appearing in criminal cases, often in relation to mental health. The rise of "AI psychosis," where individuals with mental health issues have their delusions exacerbated by interactions with chatbots, has become a growing concern, with some recent suicides also being linked to the technology.