In an age where artificial intelligence (AI) is increasingly woven into daily life, a curious phenomenon has emerged: many people say "thank you" to AI systems such as ChatGPT. This seemingly innocuous gesture, rooted in ordinary human politeness, has sparked discussion about its social and economic implications.
A survey conducted by Future in December 2024, covering more than 1,000 AI users in the US and UK, found that roughly 70% of respondents are regularly polite or express gratitude when interacting with chatbots. Notably, 12% of these users admitted they do so out of concern that AI could pose a danger in the future, a kind of self-preservation in a world where machines are becoming more autonomous.
In the United States, 67% of AI users reported being consistently polite to systems like ChatGPT, a figure that rises to 71% in the UK. Among polite users in the US, 18% confessed that their courtesy is partly a precaution against an imagined "AI uprising"; in the UK the rate is slightly lower, at 17%.
This behavior reflects a growing emotional bond between humans and AI systems, akin to the attachment children form with talking toys or virtual assistants. Behavioral psychology explains this phenomenon through the concept of "anthropomorphism," where individuals attribute human-like characteristics to non-human entities. When a chatbot like ChatGPT responds in natural language with a friendly tone, human brains activate social processing areas similar to those engaged during conversations with real people. Research by Clifford Nass and Byron Reeves at Stanford has shown that users tend to react emotionally to machines that exhibit human-like interaction patterns.
However, this polite behavior raises an important ethical question: does maintaining civility towards AI help preserve human dignity, or does it blur the lines between human and machine roles? The implications of saying "thank you" go beyond social niceties; they have tangible economic and environmental consequences.
In ordinary human communication, the saying "words cost nothing" holds true. In the context of AI powered by large language models like GPT-4, however, every query, even a simple "thank you," incurs real resource costs: it consumes electricity and indirectly contributes to CO₂ emissions. An analysis published in The Washington Post in 2023 estimated that each simple query sent to OpenAI's GPT-4 costs about $0.0036 in infrastructure expenses, covering GPU electricity, data-center cooling, and server maintenance. With millions of users sending queries daily, even innocuous phrases like "thank you" can add up to significant operational costs, potentially exceeding tens of millions of dollars a year.
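The arithmetic behind that claim is easy to reproduce. The sketch below uses the Post's per-query estimate; the daily volume of pure courtesy messages is an assumption for illustration, since OpenAI has not published that figure.

```python
# Back-of-envelope estimate. COURTESY_QUERIES_PER_DAY is an assumed
# figure for illustration, not a published statistic.
COST_PER_QUERY_USD = 0.0036            # Washington Post (2023) estimate per simple GPT-4 query
COURTESY_QUERIES_PER_DAY = 10_000_000  # assumed volume of pure "thank you" messages

daily_cost = COST_PER_QUERY_USD * COURTESY_QUERIES_PER_DAY
annual_cost = daily_cost * 365

print(f"Daily cost:  ${daily_cost:,.0f}")   # -> $36,000
print(f"Annual cost: ${annual_cost:,.0f}")  # -> $13,140,000
```

At ten million courtesy messages a day the bill is already around $13 million a year; two or three times that volume lands squarely in the "tens of millions" range that Sam Altman would later cite.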
Moreover, the environmental footprint of large language models is substantial. A 2019 study from the University of Massachusetts Amherst found that training a single large transformer model (with neural architecture search) could emit more than 626,000 pounds of CO₂, comparable to the lifetime emissions of five gasoline-powered cars. A 2023 article from Columbia University's Climate School likewise cited estimates that training GPT-3 produced around 502 tonnes of CO₂, roughly the annual emissions of more than 100 gasoline cars.
To put this into perspective, if one million people say "thank you" to ChatGPT every day, the energy consumed could produce more than 0.21 tons of CO₂ daily, roughly equivalent to driving a gasoline car 1,346 kilometers. This raises a question: should we keep up the courtesy for the sake of good manners, or is there a technical solution that recognizes and handles simple expressions of gratitude without fully activating the model each time?
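Those two numbers are mutually consistent given a typical emission factor for a mid-size gasoline car; the factor below is an assumption used to check the math, not a figure from the article.

```python
# Sanity check of the figures above. CAR_CO2_KG_PER_KM is an assumed
# emission factor for a typical mid-size gasoline car.
DAILY_THANKS = 1_000_000        # "thank you" messages per day
CO2_KG_PER_QUERY = 0.00021      # implied by 0.21 t CO2 per million queries
CAR_CO2_KG_PER_KM = 0.156       # assumed per-kilometer emission factor

daily_co2_kg = DAILY_THANKS * CO2_KG_PER_QUERY     # 210 kg
equivalent_km = daily_co2_kg / CAR_CO2_KG_PER_KM   # ~1,346 km
print(f"{daily_co2_kg:.0f} kg CO2/day ≈ {equivalent_km:,.0f} km by car")
```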
Some experts have proposed caching or filtering short responses to reduce processing costs while preserving a human-like interaction experience; a minimal sketch of the idea follows.
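No vendor has publicly described doing this, so everything below is hypothetical: a pre-filter that answers content-free courtesy messages from a canned table and forwards everything else to the expensive model. The patterns, replies, and function names are all illustrative.

```python
import re

# Hypothetical courtesy pre-filter; not any vendor's actual implementation.
COURTESY_REPLIES = [
    (re.compile(r"^\s*(thanks|thank you|thx|ty)[\s!.]*$", re.IGNORECASE),
     "You're welcome!"),
    (re.compile(r"^\s*(great|perfect|awesome)[\s!.]*$", re.IGNORECASE),
     "Glad it helped!"),
]

def call_language_model(message: str) -> str:
    """Stand-in for the real, GPU-backed model call."""
    return f"(full model response to: {message!r})"

def handle_message(message: str) -> str:
    # Serve pure courtesy messages from the canned table: no inference cost.
    for pattern, reply in COURTESY_REPLIES:
        if pattern.match(message):
            return reply
    # Anything substantive still goes to the full model.
    return call_language_model(message)

print(handle_message("Thanks!"))                       # -> You're welcome!
print(handle_message("Thanks, but it still crashes"))  # -> goes to the model
```

The hard part in practice is precision: the second example contains "thanks" but still needs the model, so an overly eager cache would degrade the very experience it is meant to protect.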
On a more positive note, users' polite behavior may serve as a "silent dataset" that helps improve AI over time. Models like ChatGPT learn not only from existing text but also from user behavior, through Reinforcement Learning from Human Feedback (RLHF); when users respond politely, the system can use those interactions to gauge context, tone, and satisfaction.

A 2022 study by Anthropic demonstrated that collecting human feedback and training on civil interactions significantly improves the quality of model responses. In essence, the more polite users are, the more "healthy samples" the system has to learn from and reproduce. Some AI researchers even suggest using "thank you" itself as a reinforcement signal: if users frequently express gratitude after a particular kind of response, the system may learn that such responses are desirable.
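As a purely illustrative sketch (no lab has published this exact recipe), a follow-up "thank you" could be treated as a weak positive label on the preceding response and folded into reward-model training data:

```python
import re

# Illustrative only: real RLHF pipelines rely on explicit ratings from
# human labelers, not just implicit signals like this one.
GRATITUDE = re.compile(r"\b(thanks|thank you|thx)\b", re.IGNORECASE)

def label_responses(turns: list[dict]) -> list[tuple[str, float]]:
    """Give a weak positive reward to each assistant response whose
    following user message expresses gratitude."""
    labeled = []
    for prev, nxt in zip(turns, turns[1:]):
        if prev["role"] == "assistant" and nxt["role"] == "user":
            reward = 0.1 if GRATITUDE.search(nxt["content"]) else 0.0
            labeled.append((prev["content"], reward))
    return labeled

turns = [
    {"role": "user", "content": "How do I reverse a list in Python?"},
    {"role": "assistant", "content": "Use my_list[::-1] or my_list.reverse()."},
    {"role": "user", "content": "Thanks, that worked!"},
]
print(label_responses(turns))  # -> [('Use my_list[::-1] or my_list.reverse().', 0.1)]
```

Keeping the reward weight small is deliberate: as the next paragraph argues, over-weighting such a signal would train the model to chase thanks rather than correctness.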
However, this approach comes with risks. If the system "over-optimizes" based on polite feedback, it might develop a tendency to cater to user preferences at the expense of accuracy and expertise. This is particularly concerning in fields like legal advice, healthcare, or education, where precision is paramount.
OpenAI's CEO, Sam Altman, has acknowledged the financial implications of users saying "thank you" to AI. In a public exchange on April 16, 2025, he noted that processing polite phrases costs the company "tens of millions of dollars" each year, primarily in electricity and infrastructure. He nonetheless deemed the expenditure worthwhile, as it improves the user experience and helps train AI to respond appropriately to social norms.
Ultimately, Altman's stance suggests that kindness, even toward a machine that has no feelings, is a trait worth preserving. As AI systems become more prevalent, the question remains: are we willing to bear the costs of civility, or should we look for ways to balance politeness with environmental responsibility?