U.S. News
26 October 2025

AI Gun Detection Mistakes Doritos Bag For Firearm

A Baltimore County student was handcuffed after an AI security system falsely identified his snack as a weapon, prompting calls for accountability and a review of school safety technology.

On an ordinary Monday evening at Kenwood High School in Baltimore County, Maryland, a simple bag of Doritos triggered a chain of events that’s now sparking national debate over the use of artificial intelligence in schools. Student Taki Allen, just waiting for a ride home after football practice, found himself surrounded by armed police, handcuffed, and searched—all because an AI-powered security system flagged his empty snack bag as a possible firearm.

Allen’s ordeal began when the school’s recently installed AI gun detection system, operated by Omnilert, analyzed video footage from security cameras and identified something in his hands as a potential threat. “They made me get on my knees, put my hands behind my back, and cuffed me,” Allen recounted to CNN affiliate WBAL. The teenager described a harrowing scene: “The first thing I was wondering was, was I about to die? Because they had a gun pointed at me.” According to Allen, about “eight cop cars” pulled up to the school, and all he’d been holding was a Doritos bag. “It was two hands and one finger out, and they said it looked like a gun,” he said.

The incident, which occurred on October 20, 2025, quickly escalated. School safety officials reviewed the AI-generated alert and determined there was no weapon; the alert was canceled, but a crucial breakdown in communication meant the cancellation wasn't immediately relayed to everyone involved. Principal Kate Smith, in a statement to parents shared with CNN, confirmed that she reported the matter to the school resource officer, who then called local police for support. By the time officers arrived, the situation had already spun out of control.

Omnilert, the company behind the AI system, expressed regret over the incident, telling CNN, “We regret that this incident occurred and wish to convey our concern to the student and the wider community affected by the events that followed.” Yet, the company maintained that “the process functioned as intended: to prioritize safety and awareness through rapid human verification.” Their statement, while apologetic, has fueled further questions about what it truly means for a system to “function as intended” if it results in a student being handcuffed over a snack.

The AI gun detection system has been in use in Baltimore County public schools since 2023, according to Superintendent Myriam Rogers. The technology is designed to analyze video feeds from existing security cameras and flag potential firearms, sending alerts for rapid human review. Rogers, speaking at a news conference, described the incident as “truly unfortunate” and emphasized, “The district never wants to put any of its students in such a frightening situation.”
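Neither Omnilert nor the district has published implementation details, but the workflow Rogers describes (automated flagging from camera feeds, followed by rapid human verification) maps onto a familiar human-in-the-loop pattern. The sketch below is a minimal, hypothetical illustration of that pattern in Python; every name in it, from flag_frame to AlertStatus, is invented for illustration and says nothing about Omnilert's actual software.

```python
from dataclasses import dataclass, field
from enum import Enum

class AlertStatus(Enum):
    PENDING = "pending"      # flagged by the model, awaiting human review
    CONFIRMED = "confirmed"  # reviewer verified a real threat
    CANCELED = "canceled"    # reviewer determined it was a false positive

@dataclass
class Alert:
    camera_id: str
    confidence: float
    status: AlertStatus = AlertStatus.PENDING
    notified: list = field(default_factory=list)  # parties already alerted

THRESHOLD = 0.80  # hypothetical confidence cutoff for raising an alert

def flag_frame(camera_id: str, model_confidence: float) -> Alert | None:
    """Raise an alert only when the detector's confidence clears the cutoff."""
    if model_confidence >= THRESHOLD:
        return Alert(camera_id, model_confidence)
    return None

def notify(alert: Alert, party: str) -> None:
    alert.notified.append(party)
    print(f"Notified {party}: possible weapon on {alert.camera_id}")

def cancel(alert: Alert) -> None:
    """Cancel the alert and relay the cancellation to everyone notified.
    The Kenwood account suggests it was this second step that broke down."""
    alert.status = AlertStatus.CANCELED
    for party in alert.notified:
        print(f"Cancellation relayed to {party}")

if __name__ == "__main__":
    alert = flag_frame("cam-12-front-entrance", 0.91)
    if alert:
        notify(alert, "school safety office")
        notify(alert, "school resource officer")
        # A human reviewer inspects the frame and finds a snack bag, not a gun.
        cancel(alert)
```

The design point the incident highlights is in cancel(): verifying that an alert is false is only half the job; the cancellation has to propagate to every party the original alert reached, or armed officers can still respond to a threat that has already been ruled out.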

But for Allen and those who witnessed the event, the damage was done. “They searched me, and they figured out I had nothing. Then, they went over to where I was standing and found a bag of chips on the floor,” Allen told WBAL. The heavy-handed response—eight police cars, drawn weapons, and handcuffs—has left many in the community shaken and angry.

Allen’s grandfather, Lamont Davis, voiced the family’s frustration and called for accountability. “Something has got to be done. Changes have to be made and people have to be held accountable,” Davis told WBAL. Community outrage has only grown as more details have emerged about the breakdown in communication and the reliance on AI for critical security decisions.

Local officials have joined the chorus demanding a thorough review. “No child in our school system should be accosted by police for eating a bag of Doritos,” Baltimore County Councilman Izzy Patoka stated on social media, urging the school district to review its procedures around the AI-powered weapon detection system. Councilman Julian Jones echoed the call, insisting on safeguards “so this type of error does not happen again.” Superintendent Rogers assured the public that reviewing the system and security practices “is part of our regular practice.”

The incident has ignited a broader conversation about the reliability and ethics of AI in sensitive environments like schools. As reported by BitcoinWorld, the event “serves as a stark reminder that while AI promises efficiency, its flaws can have profound human impacts.” False positives—when a system incorrectly identifies a non-threat as a threat—are not just technical glitches; they can traumatize students, misallocate resources, and erode public trust in the very systems meant to protect them.
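Neither the district nor Omnilert has disclosed the system's error rates, so the figures in the sketch below are purely illustrative assumptions. The point is the base-rate arithmetic: even a detector with a tiny per-frame error rate produces routine false alarms once it is watching millions of benign frames a day.

```python
# Illustrative base-rate arithmetic; all numbers below are assumptions,
# not published figures for Omnilert or Baltimore County schools.
cameras = 100                            # cameras across a district (assumed)
frames_per_camera_per_day = 8 * 60 * 60  # one analyzed frame/second, 8 hours
false_positive_rate = 1e-6               # 1 false flag per million benign frames (assumed)

benign_frames = cameras * frames_per_camera_per_day
expected_false_alarms_per_day = benign_frames * false_positive_rate

print(f"Benign frames analyzed per day: {benign_frames:,}")
print(f"Expected false alarms per day:  {expected_false_alarms_per_day:.1f}")
# Even at a one-in-a-million error rate, that is ~2.9 false alarms per day
# district-wide. Since real weapons on camera are vanishingly rare, nearly
# every alert the system raises will be a false positive.
```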

Why do such errors happen? AI systems, especially those tasked with visual detection, are only as good as the data they’re trained on. If training datasets lack diversity or fail to account for everyday objects like snack bags, the system can misinterpret what it sees. “A Doritos bag, under certain conditions, might possess visual characteristics that, to a machine learning algorithm, superficially resemble the outline of a firearm,” BitcoinWorld explained. The consequences in a high-stakes environment like a school can be severe, ranging from student trauma to unnecessary police interventions.
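One mitigation commonly recommended for exactly this failure mode is auditing a detector, before deployment, against a held-out test set that deliberately includes everyday handheld objects. The sketch below illustrates the idea with a stand-in detect() stub; it is not Omnilert's model, and the labels and numbers are invented for illustration.

```python
# Minimal sketch of a pre-deployment false-positive audit. The detector here
# is a stub; a real audit would run the vendor's model on labeled images of
# benign handheld objects (snack bags, phones, umbrellas, and so on).

def detect(image_label: str) -> bool:
    """Stub detector: pretend the model confuses crumpled chip bags with guns."""
    return image_label == "crumpled_chip_bag"

benign_test_set = [
    "crumpled_chip_bag", "smartphone", "water_bottle",
    "umbrella", "crumpled_chip_bag", "notebook",
]

false_positives = sum(detect(label) for label in benign_test_set)
fp_rate = false_positives / len(benign_test_set)

print(f"False positives on benign objects: {false_positives}/{len(benign_test_set)}")
print(f"Benign-object false-positive rate: {fp_rate:.1%}")
# A rate this high on everyday objects would argue for retraining on a more
# diverse dataset or raising the alert threshold before going live.
```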

Beyond technical shortcomings, the incident has raised questions about algorithmic bias and privacy. While there’s no direct evidence of racial bias in Allen’s case, experts warn that AI systems trained on unrepresentative data can disproportionately affect certain demographics. The lack of contextual understanding means AI may see patterns but miss the nuance, leading to errors when objects are presented in unexpected ways.

Privacy advocates are also sounding alarms over the proliferation of AI surveillance in schools. Constant monitoring, data collection, and the creation of digital records—even of false accusations—can have lasting impacts on students’ sense of autonomy and trust. The cryptocurrency community, as noted by BitcoinWorld, is particularly sensitive to these issues, drawing parallels between AI surveillance and broader debates about digital rights and data ownership.

So, where does this leave schools and communities striving for safety without sacrificing civil liberties? Experts and community leaders are calling for enhanced human oversight of AI alerts, rigorous testing and training of AI models, and greater transparency from companies like Omnilert. There’s also a push for engaging parents, students, and the wider community in conversations about how such technology is deployed and what safeguards are in place.

“AI is a tool, not a replacement for human judgment,” BitcoinWorld emphasized. The role of trained personnel in verifying alerts and de-escalating situations remains indispensable. As schools continue to adopt advanced security technologies in response to the threat of mass violence, the Kenwood High School incident is a vivid reminder that well-intentioned systems can have unintended and deeply personal consequences if not implemented with caution and robust oversight.

The story of Taki Allen and his Doritos bag has become more than just a local controversy—it’s a flashpoint in the national debate over how to balance safety, technology, and the rights of students. As the community demands answers and reforms, the hope is that lessons learned will lead to smarter, safer, and more humane approaches to school security in the digital age.