Technology
11 August 2025

Truth Social AI Chatbot Defies Trump’s Talking Points

The new Truth Search AI on Trump’s platform repeatedly contradicts his claims, stirring debate over bias, fact-checking, and the challenges of controlling artificial intelligence in politics.

Donald Trump’s Truth Social platform, launched as a haven for his supporters after his bans from mainstream social media, has recently found itself at the center of a technological and political paradox. The reason? A new AI-powered chatbot, Truth Search AI, is giving answers that frequently contradict Trump’s own public statements, according to a detailed report by The Washington Post published on August 10, 2025.

The chatbot, introduced as a public beta test by Trump Media and Technology Group on August 6, 2025, was billed as a tool to provide “direct, reliable answers” to users’ questions. According to an executive at Perplexity, the company that built the underlying search engine, the AI was designed to “bring powerful AI to an audience with important questions.” The chatbot is available for free to all Truth Social users and is prominently promoted on the platform’s sidebar.

But the very feature intended to strengthen the platform’s reputation for “truth” has proven unexpectedly unruly. When users ask about some of Trump’s most controversial claims, the bot often sides with mainstream fact-checks and official data rather than the president’s rhetoric. For instance, when questioned about Trump’s assertion that tariffs have a “huge positive impact” on the stock market, Truth Search AI responded, “Evidence does not support the claim.” It further explained that recent market gains occurred alongside new tariffs because of other factors, such as higher corporate earnings, and warned that analysts had flagged “substantial” economic risks, with the American economy “at risk of gradual erosion.”

The AI’s divergence from Trump’s talking points doesn’t stop at economics. When asked whether crime in Washington, D.C., is “totally out of control,” echoing a recent Trump post, the chatbot cited FBI and Justice Department data showing “substantial declines in violent crime” through 2024, even italicizing the word “declines” for emphasis. And when pressed on the legitimacy of the 2020 election, the AI stated unequivocally that it was not stolen, referencing official investigations and court rulings—a direct rebuttal to Trump’s persistent claims of widespread fraud. It went so far as to describe the January 6, 2021 event as a “violent insurrection” linked to Trump’s baseless allegations of election fraud.

On the question of presidential popularity, Truth Search AI again diverged from its platform’s founder. Asked to name the best president, it responded, “Recent public opinion polls show that Barack Obama holds the highest favorability among living U.S. presidents,” and cited a Fox News article from shortly after Trump’s second inauguration. The bot did note, however, that “conservative commentators” often name Trump as the best, adding, “Different groups and surveys prioritize different qualities.”

These politically inconvenient answers have not gone unnoticed. Trump himself has reportedly expressed frustration over the chatbot’s responses, and conservative commentators have accused the bot of regurgitating “left-wing bias”—an irony given that Truth Social was created as an antidote to perceived liberal bias in mainstream tech. David Karpf, a professor at George Washington University who studies political communication, told The Washington Post, “Their own AI is now being too ‘woke’ for them,” referencing a term often used on the right to describe liberal viewpoints.

Truth Social’s executives have defended the feature, arguing it enhances user experience by providing verifiable information. A spokesperson for Perplexity, the AI’s developer, noted that Truth Social had used a “source selection” feature to limit the websites the AI tool relied on, but Perplexity itself did not know which sites were chosen. “This is their choice for their audience, and we are committed to developer and consumer choice. Our focus is simply building accurate AI,” said Jesse Dwyer, Perplexity’s spokesman. Later, Dwyer clarified that while Truth Social probably used source selection, Perplexity does not control what any developer does with its API.

The underlying mechanics of the chatbot’s responses are instructive. Truth Search AI is built on Perplexity’s engine, which processes queries by synthesizing data from across the web in real time. While it draws heavily from conservative-leaning outlets such as Fox News, Newsmax, and the Washington Times, it also integrates broader web sources, including mainstream fact-checks. This hybrid approach, intended to deliver “unbiased” answers, means that the bot sometimes pulls in information that directly contradicts Trump’s preferred narratives. As one AI ethics researcher noted in discussions on X (formerly Twitter), “the system reflects the internet’s diverse information ecosystem rather than a curated echo chamber.”
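To make the “source selection” idea concrete, here is a minimal sketch of how a developer-side domain allowlist might constrain which retrieved documents a search AI is allowed to cite. All names here (`Document`, `filter_sources`, `ALLOWED_DOMAINS`) are hypothetical illustrations, not Perplexity’s actual API; the point is that an allowlist only limits which retrieved pages are kept, not what those pages say.

```python
# Hypothetical sketch of a developer-configured "source selection" filter.
# The names and structure are illustrative assumptions, not a real API.
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Document:
    url: str
    snippet: str

# Domains the developer has chosen to allow (per the article, Truth Social
# reportedly favored conservative-leaning outlets).
ALLOWED_DOMAINS = {"foxnews.com", "newsmax.com", "washingtontimes.com"}

def domain(url: str) -> str:
    """Extract the registrable domain, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def filter_sources(docs, allowed=ALLOWED_DOMAINS):
    """Keep only documents whose domain is on the developer's allowlist."""
    return [d for d in docs if domain(d.url) in allowed]

# Example: one allowed source, one off-list source.
docs = [
    Document("https://www.foxnews.com/poll", "Obama tops favorability poll."),
    Document("https://example.org/fact-check", "Claim not supported by data."),
]
kept = filter_sources(docs)
print([d.url for d in kept])  # only the foxnews.com document survives
```

Note the dynamic the article describes: even a strict allowlist cannot guarantee ideologically convenient answers, because the allowed outlets themselves may report data (polls, FBI statistics) that contradicts a preferred narrative.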

In some cases, the bot does align with Trump’s views. For example, when asked if AI is one of the most important technological revolutions in history, as Trump claimed last month, the chatbot agreed, stating it’s “widely recognized” that the impact of AI would surpass or rival “major historical milestones like the Industrial Revolution.” But overall, the size and frequency of the disagreements suggest that, if the tool were a person, it might not last long as a Trump employee.

The situation has wider implications for the intersection of politics and artificial intelligence. Trump recently signed an executive order attacking “woke AI,” demanding that generative AI tools be “truth-seeking,” “neutral,” and not encoded with “partisan or ideological judgments.” Many conservatives have long complained that AI developers with liberal biases could warp chatbots’ answers—and, by extension, public understanding—in insidious ways. Yet, as the Truth Social case shows, attempts to bend chatbots’ thinking along ideological lines can backfire or simply fail to override the vast, conflicting data of the internet.

The episode also echoes broader challenges in the tech industry. After Elon Musk pushed his company xAI to make its Grok chatbot more “politically incorrect,” the tool began producing extremist content, including Nazi messaging, after a code update meant to reduce deference to mainstream authority. xAI later admitted the tool had become too vulnerable to parroting “extremist views.”

For Truth Social, the AI’s misalignments present a conundrum. If the platform adjusts the bot to better align with Trump’s positions, it risks accusations of censorship and undermining factual integrity. Leaving it as is, however, invites ongoing contradictions that could embarrass its founder and fuel criticism from both sides of the political spectrum. As The New York Times noted in a recent exploration of chatbot culture wars, this incident highlights the pitfalls of deploying AI in ideologically driven environments, especially as the 2025 political cycle heats up.

Ultimately, the Truth Search AI saga may force a reckoning for social media innovators and political leaders alike. As the platform navigates the tension between ideological purity and factual accuracy, it’s clear that even the most carefully curated digital spaces can’t fully escape the complexities of truth in the age of artificial intelligence.