U.S. News
October 27, 2025

AI Mistakes Doritos Bag For Gun At Maryland School

A Baltimore County student was handcuffed after an AI security system falsely flagged his snack as a firearm, prompting calls for accountability and a review of school safety protocols.

On Monday evening, October 20, 2025, the routine calm outside Kenwood High School in Baltimore County, Maryland, was shattered by flashing lights and blaring sirens. Taki Allen, a 16-year-old student, was waiting with friends for a ride home after football practice—just another ordinary day, until it wasn’t. What triggered the chaos? Not a weapon, but an empty bag of Doritos chips, misidentified as a firearm by the school’s AI-powered security system.

According to CNN and WBAL-TV, the AI-driven gun detection system, operated by Omnilert and deployed across Baltimore County public schools since 2023, flagged Allen’s snack as a possible threat. Within minutes, Allen was surrounded by eight police cars and officers with guns drawn, and met with a terrifying demand: “Get on the ground.” The teen, shaken and confused, complied. “They made me get on my knees, put my hands behind my back, and cuffed me,” Allen recounted to WBAL. “They searched me, and they figured out I had nothing. Then, they went over to where I was standing and found a bag of chips on the floor.”

For Allen, the ordeal was more than a misunderstanding—it was a traumatic encounter that left him questioning his own safety at school. “Are they going to kill me?” he recalled thinking in the heat of the moment, as reported by Fox 45 News. “I just, in that moment, I didn’t feel safe. I didn’t feel like the school actually cared about me. Because nobody came up to me after, not even the principal.”

The Baltimore County Police Department confirmed the incident, stating officers “responded appropriately and proportionally based on the information provided at the time.” After a thorough search, no weapon was found, and the incident was resolved without charges. Yet the fallout from this false alarm has rippled far beyond the school’s walls, igniting a fierce debate over the role—and reliability—of artificial intelligence in school security.

Kenwood High School Principal Kate Smith, in a message to parents the following day, emphasized that “ensuring the safety of our students and school community is one of our highest priorities.” She acknowledged the distress caused by the incident, adding, “We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident. Our counselors will provide direct support to the students who were involved in this incident and are also available to speak with any student who may need support.”

The chain of events that led to Allen’s handcuffing reveals a series of missteps and communication breakdowns. The AI system triggered an alert, which was reviewed and canceled by the school’s security department after confirming no weapon was present, according to Principal Smith. However, a lack of clear communication meant that the alert was still referred to the school resource officer, who then contacted local police—setting the stage for the heavy-handed response that followed.

Omnilert, the company behind the AI system, expressed regret over the incident. In a statement to CNN, the company said, “We regret that this incident occurred and wish to convey our concern to the student and the wider community affected by the events that followed. While the object was later determined not to be a firearm, the process functioned as intended: to prioritize safety and awareness through rapid human verification.” The company’s technology is designed to scan live video feeds for objects resembling firearms, flagging potential threats for human review. But as the Kenwood episode shows, even advanced algorithms can be tripped up by something as benign as a crumpled snack bag held in an unusual way.

Superintendent Myriam Rogers called the incident “truly unfortunate” during a news conference on October 23, 2025, and stressed that the district never wants “any student, whether it’s during school hours or not, to be in a situation that is frightening.” She added that reviewing the system and security practices is “part of our regular practice.”

The incident has drawn sharp criticism from local officials and community members. Baltimore County Councilman Izzy Patoka was unequivocal: “No child in our school system should be accosted by police for eating a bag of Doritos,” he said in a statement on social media, calling for a full review of the AI-powered weapon detection system and its procedures. Councilman Julian Jones echoed those sentiments, urging the district to ensure that “safeguards are in place, so this type of error does not happen again.”

Allen’s grandfather, Lamont Davis, demanded accountability in the aftermath. “Something has got to be done. Changes have to be made and people have to be held accountable,” he told WBAL. The call for change has only grown louder as the community grapples with the implications of the incident.

This episode shines a harsh light on the broader challenges of using AI in school safety. AI gun detection systems, like those made by Omnilert and other vendors, rely on computer vision models trained on vast datasets to identify firearms in real time. But as experts note, real-world conditions—messy hallways, backpacks, musical instruments, and even hands holding snacks—can confound these models. Lighting, motion blur, and camera angles introduce further complications, increasing the likelihood of false positives.

There’s also a lack of standardized federal benchmarks for evaluating the accuracy of AI weapon detection in schools. Investigative reports and independent testing have revealed that error rates in these systems can be higher than some vendors claim. The U.S. Government Accountability Office has found scant evidence that many school security technologies, including surveillance and analytics, actually reduce harm in real-world conditions.

Beyond technical limitations, the human impact of false alarms is profound. Students like Allen describe feeling frightened, humiliated, and unsupported after such incidents. Civil liberties organizations, including the ACLU, warn that these tools can amplify bias and subject some students to disproportionate law enforcement contact. As schools increasingly turn to AI-driven security amid public pressure to prevent mass shootings, the risk that even a low-probability error will disrupt or traumatize students becomes ever more real.

Experts recommend several safeguards: making decision ownership explicit so that human overrides are immediately communicated to all responders, running scenario-driven drills that include false alarms, releasing anonymized metrics on alert rates and response times, and demanding transparency from vendors about their systems’ limitations. Independent audits and third-party testing in the environments where these systems are deployed can help expose failure modes before they lead to incidents like the one at Kenwood.

For Kenwood High School, Allen, and the Baltimore County community, the Doritos debacle is a cautionary tale about the promises and perils of AI in the classroom. As districts nationwide consider similar technologies, the lesson is clear: the tools meant to keep students safe must be matched by disciplined protocols, transparent communication, and an unwavering commitment to human judgment. Otherwise, the line between safety and suspicion can blur in the blink of an algorithm—and trust, once lost, is hard to reclaim.