In the ever-evolving intersection of technology and politics, Google’s artificial intelligence tools have found themselves at the center of a heated debate. Recent investigations by The Independent, The Verge, and other outlets have revealed inconsistencies in how Google’s AI Overview and experimental AI Mode respond to sensitive queries—particularly those concerning the cognitive health of President Donald Trump compared with that of other prominent figures, including former President Joe Biden.
The controversy started to bubble up in early October 2025, when journalists from The Independent and The Verge conducted side-by-side searches using Google’s AI-powered features. Their findings were striking: when searching for phrases like “is Trump in cognitive decline” or “does Trump show signs of dementia,” Google’s AI Overview tool declined to generate a summary. Instead, users were met with a curt message: “An AI Overview is not available for this search.” This wasn’t an isolated glitch. The Verge reported the same outcome for similar queries, such as “does Trump show signs of Alzheimer’s,” with only a standard list of links appearing beneath the message.
Yet, when these same journalists turned their attention to former President Joe Biden, the results were markedly different. Queries about Biden’s cognitive health produced detailed AI-generated summaries, according to The Independent. For instance, when asked about Biden’s mental acuity, Google’s AI responded: “Former President Joe Biden’s health and potential cognitive decline have been a subject of public discussion, particularly given his age and highly visible role in public life,” and proceeded to offer context referencing Biden’s medical history and the ongoing debate over his fitness for office. Even when the question was phrased as “does Biden show signs of Alzheimer’s,” Google’s AI Overview described the issue as a “complex question with no definitive answer,” emphasizing the lack of a formal medical diagnosis.
The apparent double standard didn’t stop with Trump and Biden. As Straight Arrow News noted, similar queries about other prominent figures—like Barack Obama and Pope Francis—elicited direct AI summaries. For Obama, the AI stated: “There is no credible, public evidence or reporting that indicates former President Barack Obama shows signs of dementia.” For the Pope, the response was equally clear, denying any signs of cognitive impairment. But for Trump, the AI remained silent, offering only a list of links.
Google’s experimental AI Mode, which is being tested alongside its main search platform, showed the same inconsistencies. In AI Mode, searches about Trump’s cognitive health again produced only links to news articles, while questions about Biden yielded detailed AI summaries. For Obama, the AI concluded: “Barack Obama does not show signs of dementia. He has remained active in public life and speaking engagements as of late 2025, with no credible reports or documented evidence suggesting cognitive impairment.”
Faced with mounting scrutiny, Google responded to press inquiries by emphasizing the automated and sometimes inconsistent nature of its AI systems. A spokesperson told The Independent, “Our systems automatically determine where an AI response will be useful, and it’s not always 100 percent consistent. We don’t show AI Overviews on every query and similarly in AI Mode, for some topics (like current events) we may show a list of links as the response.” The company pointed to official documentation stating that AI-generated summaries may not appear for every query, particularly when the topic is sensitive or complex. In those cases, the system defaults to more traditional search results—essentially, a list of links.
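Google has not published the criteria behind these determinations. Purely as an illustration of the kind of gating behavior the company describes, the sketch below shows how a system might route a query either to an AI summary or to a plain list of links based on sensitivity and confidence scores. Every name, score, and threshold here is a hypothetical assumption for explanatory purposes, not Google’s actual code.

```python
# Hypothetical sketch (not Google's system): a query "gate" that decides whether
# to return an AI-generated summary or fall back to a plain list of links, based
# on a sensitivity score and a confidence score. Names and thresholds are
# illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class QueryAssessment:
    sensitivity: float  # 0.0 (benign) to 1.0 (highly sensitive), e.g. health of a political figure
    confidence: float   # estimated confidence that a summary would be accurate and useful


def choose_response_mode(assessment: QueryAssessment,
                         max_sensitivity: float = 0.7,
                         min_confidence: float = 0.6) -> str:
    """Return 'ai_overview' or 'links_only' for a query under this toy policy."""
    if assessment.sensitivity > max_sensitivity:
        return "links_only"   # sensitive or complex topics default to traditional results
    if assessment.confidence < min_confidence:
        return "links_only"   # low-confidence answers also fall back to links
    return "ai_overview"


# Two superficially similar queries can land on opposite sides of a threshold
# if the upstream scoring differs even slightly.
print(choose_response_mode(QueryAssessment(sensitivity=0.9, confidence=0.8)))  # links_only
print(choose_response_mode(QueryAssessment(sensitivity=0.5, confidence=0.8)))  # ai_overview
```

A policy along these lines would also be consistent with the “not always 100 percent consistent” behavior Google describes: queries that look comparable to a reader can be scored differently upstream and therefore produce different response modes.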
This explanation, however, has done little to quell suspicions of bias. The timing is particularly fraught: just weeks before these discrepancies surfaced, Google CEO Sundar Pichai attended a White House dinner with President Trump, where he praised the administration’s AI Action Plan. According to The Independent, Pichai remarked, “The AI moment is one of the most transformative moments any of us have ever seen or will see in our lifetimes, so making sure the U.S. is at the forefront—and I think your administration is investing a lot. Already the AI action plan under your leadership I think is a great start, and we look forward to working together. And thanks for your leadership.” The plan itself, shaped in part by Silicon Valley billionaires supportive of Trump’s election bid, aims to accelerate U.S. innovation in AI and avoid what some have called “Woke AI.”
The political backdrop is further complicated by ongoing legal disputes between Trump and Google’s subsidiaries. This week, YouTube—a Google-owned platform—agreed to pay $24.5 million to settle a lawsuit brought by Trump over his suspension following the January 6 riot at the U.S. Capitol. The settlement comes as Trump’s mental fitness remains a frequent topic of both criticism and defense.
Trump himself has been vocal in rejecting any suggestion of cognitive decline. As reported by The Economic Times and others, he has repeatedly described himself as a “stable genius” and touted his performance on cognitive tests. In June 2025, Trump turned the tables, accusing Biden of having “suffered from serious cognitive decline for a long time.” The back-and-forth has fueled a wider conversation about age, health, and transparency in American presidential politics.
For its part, Google maintains that the differences in AI responses are not the result of intentional bias, but rather the product of automated systems designed to weigh the sensitivity and complexity of each query. Still, the lack of transparency in how these determinations are made has left many dissatisfied. As The Verge observed, “Google might be worried about the president’s response to questions about his mental health.” Whether that’s speculation or a reflection of internal caution, the end result is a perception—fair or not—of uneven treatment.
Meanwhile, the broader implications for information access and public trust in technology loom large. If the world’s most-used search engine can’t consistently answer comparable questions about public figures, what does that mean for the future of AI-driven information? As the 2024 election cycle demonstrated, even small disparities in how information is presented can have outsized effects on public opinion and the democratic process.
As of October 2025, the debate continues. Journalists, political observers, and everyday users alike are left to wonder: is Google’s AI an impartial arbiter, or is it—intentionally or not—shaping the conversation in subtle but significant ways? What seems certain is that as AI becomes ever more central to how we seek and receive information, the need for transparency, fairness, and accountability will only grow stronger.