Artificial intelligence is rapidly transforming the way doctors diagnose and treat patients, offering new tools that promise greater accuracy and efficiency. Yet a new study out of Poland is prompting a wave of reflection, and some concern, about what happens when doctors become too reliant on these technological marvels. The research, published on August 19, 2025, in The Lancet Gastroenterology & Hepatology, found that experienced gastroenterologists who had grown accustomed to using an AI-assisted system for colonoscopies saw a notable drop in their ability to independently detect polyps and other abnormalities when the technology was taken away.
The study, conducted across four clinics in Poland, set out to evaluate the real-world impact of AI on medical diagnostics, specifically in the high-stakes environment of colon cancer screenings. The AI system in question works by analyzing real-time video from a camera inserted into the colon, highlighting suspicious areas with a green box for the physician’s attention. On the surface, the technology appeared to be a boon: it helped doctors spot more potential polyps during screenings. But when the researchers looked closer, they noticed a counterintuitive trend.
After doctors spent time using the AI system, their ability to detect polyps without it dropped from 28.4% to 22.4%. That's a decline of six percentage points, or roughly 20% in relative terms, and the figure raised eyebrows among the medical community. Dr. Marcin Romańczyk, the study's lead author and a gastroenterologist at H-T Medical Center in Tychy, Poland, was candid about the team's reaction. "We were not expecting such a significant decline in detection rates," he told NPR, underscoring the element of surprise that accompanied the findings.
Romańczyk theorizes that the doctors may have subconsciously begun to wait for the AI’s green box to highlight abnormalities, instead of relying on their own vigilance and expertise. “We are subconsciously waiting for the green box to come out to show us the region where the polyp is and we’re not paying so much attention,” he explained. This subtle shift in focus, from active searching to passive confirmation, could have real consequences when the AI is not in use.
It's a phenomenon that isn't entirely new. Other research has documented what's known as the "safety-net effect," in which people, especially those with less experience, perform worse when they know AI assistance is available at the push of a button. Johan Hulleman, a researcher at the University of Manchester who has studied human reliance on artificial intelligence, helped lead a similar study involving mammogram scans. He describes the effect as a kind of psychological crutch: the mere presence of AI makes users less attentive and less thorough in their own right.
But Hulleman is also quick to caution against drawing sweeping conclusions from the Polish study. “I think three months seems like a very short period to lose a skill that you took 26 years to build up,” he noted, pointing out that the doctors in the study were seasoned professionals. He believes that statistical variations—such as the average age of patients in different phases of the study—could account for some of the drop in detection rates. “We don’t know how many polyps there really were, so we don’t know the ground truth,” he added, raising the possibility that some of the missed polyps may not have been medically significant.
Still, the study’s results have sparked a lively debate about the integration of AI in clinical practice. There’s no denying that artificial intelligence is becoming a staple in medical diagnostics, from eye scans to breast cancer screenings and, of course, colonoscopies. “AI is spreading everywhere,” Romańczyk observed. Yet, as he pointed out, most doctors weren’t trained to use these technologies. “We’ve been taught from books and from our teachers. No one told us how to use AI,” he said. This gap in training may leave physicians ill-prepared to strike the right balance between leveraging AI’s strengths and maintaining their own diagnostic skills.
The Polish study's methodology was straightforward but illuminating. Over a period of three months, doctors at four clinics used the AI system during colonoscopies, with the technology providing real-time visual cues. Researchers then compared the doctors' detection rates in colonoscopies performed without the system before and after that period of AI use. The drop was consistent enough to raise questions about whether even short-term exposure to AI might foster a kind of learned dependency.
Romańczyk isn’t opposed to using AI—far from it. He believes the technology can help clinicians perform better colonoscopies and ultimately improve patient outcomes. But he’s also clear-eyed about the need for more robust data and a deeper understanding of how AI is changing the way doctors work. “We have powerful AI systems at our disposal,” he remarked, “but a significant gap exists in understanding their long-term effects on clinical practice.”
Hulleman, for his part, remains skeptical of the study’s implications. He notes that the relatively short duration and the experienced nature of the participating doctors make it difficult to draw firm conclusions about skill degradation. “There are a lot of variables the researchers couldn’t control,” he said, emphasizing the need for larger and longer-term studies to tease apart the true impact of AI on medical expertise.
What’s clear is that the conversation around AI in medicine is far from settled. The technology’s promise is undeniable: faster, more accurate diagnoses, and the potential to catch diseases earlier than ever before. But as this study shows, the human side of the equation can’t be overlooked. If doctors begin to trust AI at the expense of their own skills, patients could be at risk when the technology isn’t available—or when it makes a mistake.
The Polish study is a timely reminder that as AI becomes more entrenched in healthcare, the medical community must grapple with new challenges. Training programs may need to evolve to include not just how to use AI, but how to maintain core clinical skills alongside technological advances. And as more data emerges, the debate over AI’s role in medicine will likely grow even more nuanced.
For now, the findings serve as both a caution and a call to action: artificial intelligence is here to stay, but so is the need for vigilant, well-trained human doctors. As AI continues to reshape medicine, striking the right balance will be essential for delivering the best possible care.