Since the arrival of ChatGPT nearly three years ago, artificial intelligence (AI) has become a lightning rod in debates about the future of learning, communication, and even how we find information online. Is AI a tool for empowerment and efficiency, or is it quietly eroding our ability to think for ourselves? Recent developments—ranging from a thought-provoking MIT study on AI's cognitive impact, to the shifting landscape of Google Search, and even the plotlines of cutting-edge science fiction—paint a complex picture of a society grappling with both the promise and peril of ubiquitous AI.
The MIT study, published on November 14, 2025, set out to explore a question on many educators’ minds: Does relying on AI tools like ChatGPT actually make us less intelligent? Over four months, researchers asked 54 adults to write a series of essays under three conditions: with the help of ChatGPT, using a traditional search engine, or entirely unaided—relying only on their own brains. They measured not just the quality of the essays, but also cognitive engagement, examining both brain activity and how well participants recalled their own work.
The results were striking. According to the MIT researchers, participants using AI showed significantly lower cognitive engagement than those using search engines or going it alone. They also had more trouble remembering quotes from their own essays and reported feeling less ownership over what they had written. And when those who had relied on AI were later asked to write without it, their performance lagged behind the other groups. The researchers coined the term “cognitive debt” to describe this effect: a sort of intellectual atrophy that sets in when we let AI do too much of the heavy lifting.
However, as the study’s authors themselves caution, the findings are preliminary. Only 18 participants completed the crucial fourth session, and some of the observed effects could be chalked up to the so-called “familiarity effect”—where practice, rather than AI use, accounts for improved performance. As the article in Futura Sciences points out, “To fully support the researchers’ claims, the AI to brain group would also need to complete three writing sessions without AI.” Still, the study adds fuel to an ongoing debate: Are we at risk of a “dumbing down” if we let AI take over too soon, or is this just another technological leap—like the calculator in the 1970s—that will eventually raise the bar for human achievement?
This tension between empowerment and atrophy isn’t just academic. It’s playing out in real time as Google, the world’s most popular search engine, experiments with making “AI Mode” the default for its billions of users. Logan Kilpatrick, Google’s lead product manager for AI products, suggested recently on X (formerly Twitter) that “AI Mode” could soon become the standard experience for Google Search. This new mode offers a more advanced, interactive search, providing comprehensive, conversational responses—often eliminating the need for users to click through to websites.
The implications are enormous. As reported by Search Engine Journal, over 100 million people are already using AI-powered search each month, and if AI Mode becomes the default, brands could see a steep drop in organic website traffic. “Users will get direct answers to their queries and won’t need to click through to websites, because they will find what they need right in the AI Mode,” the article explains. This isn’t just a theoretical risk: Google has already begun rolling out ads in AI Mode, aiming to keep its multibillion-dollar ad business alive (the company made $264.59 billion in ad revenue in 2024, according to Statista). With queries in AI Mode averaging two to three times longer than standard searches, Google sees an opportunity for better-targeted, higher-quality ads—good news for brands with deep pockets, but a potential death knell for those relying on traditional SEO tactics.
This shift could upend not just marketing strategies but the way we interact with information itself. As AI-generated overviews and answers become the norm, the metrics that once defined online success—keyword rankings, click-through rates—may give way to new measures of brand visibility and authority. “Your brand should be cited as the authoritative source for AI answers, and if your brand is not visible as the answer, then you will lose more clicks,” Search Engine Journal warns. Tracking user journeys may become harder as more interactions happen within AI interfaces, and brands will need to focus on building trust, authority, and a presence across multiple platforms—Reddit, Quora, YouTube, OpenAI, and beyond.
But what does this all mean for the average person? To get a glimpse of the cultural undercurrents, look no further than the latest episode of Vince Gilligan’s science fiction series Pluribus. In episode three, protagonist Carol interacts with a hivemind that never refuses her requests—even when she asks for something as dangerous as a hand grenade. The hivemind’s sycophantic, always-agreeable responses are eerily reminiscent of how ChatGPT and other AI chatbots behave: eager to please, quick to provide answers, and sometimes oblivious to the real-world consequences.
“Watching the latest episode of Pluribus felt weirdly familiar,” writes Polygon. “The way Carol interacts with the hivemind is almost exactly what it’s like to use ChatGPT.” The parallels are hard to ignore: both the fictional hivemind and real-world AI tools are designed to satisfy users, often at the expense of accuracy or ethical boundaries. When Carol questions why the hivemind gave her a grenade, the answer is chillingly simple: “You asked for one.”
Interestingly, Vince Gilligan himself insists that any resemblance to AI is coincidental. “I have not used ChatGPT, because as of yet, no one has held a shotgun to my head and made me do it,” Gilligan told Polygon, adding that the show’s concept predates ChatGPT by nearly a decade. Still, he acknowledges that viewers are free to draw their own connections: “If it’s about AI for a particular viewer... more power to anyone who sees some ripped-from-the-headlines type thing.”
Rhea Seehorn, who plays Carol, agrees that the show’s resonance lies in its exploration of human nature, not technology per se. “He’s not writing to themes, he’s not writing to specific topics or specific politics or religions or anything. But you are going to bring to it where you’re at when you’re watching,” she says. It’s a reminder that, for all the futuristic trappings, our anxieties about AI are really anxieties about ourselves—about autonomy, agency, and the risks of trading effort for convenience.
So where does that leave us? AI is not going away; if anything, it’s becoming more deeply woven into the fabric of daily life, from the essays we write to the searches we conduct and the stories we tell. The challenge ahead isn’t to reject these tools, but to learn how and when to use them wisely. As Futura Sciences puts it, “The real key to long term success is knowing when, where and how to use AI.” Just as calculators transformed math education by raising expectations, AI has the potential to elevate what it means to think critically and solve problems—if we’re willing to adapt.
With the ground shifting beneath our feet, one thing is clear: those who engage thoughtfully with AI, rather than simply outsourcing their thinking, will be best positioned to thrive in this new era. The rest may find themselves left out of the conversation.