Earlier this year, OpenAI’s much-anticipated GPT-5 model finally arrived, carrying with it a mountain of expectations and a fair share of skepticism. The launch was heralded by industry leaders, including OpenAI CEO Sam Altman, who confidently declared with “a high degree of scientific certainty” that GPT-5 would outshine its predecessor, GPT-4—a model Altman himself admitted “kind of sucks.” But as the dust settled, not everyone was convinced that the latest iteration lived up to the hype. According to Windows Central, many users felt the upgrade actually degraded their ChatGPT experience, preferring the more personalized responses of GPT-4o. The lukewarm reception seemed to bear out an earlier prediction by Microsoft co-founder Bill Gates, who had forecast a plateau in OpenAI’s GPT technology, noting only modest improvements in the years leading up to GPT-5’s debut.
Yet, amid the mixed reviews and user frustrations, OpenAI is already looking to the future, and perhaps, to a younger audience. In a move that’s as much a nod to internet culture as it is a marketing strategy, Sam Altman announced on October 30, 2025, that the next model, originally intended to be called GPT-6, might instead be dubbed “GPT-6-7.” The announcement, made in a post on X, played on a phrase that has gone viral among Generation Alpha: “GPT-6 will be renamed GPT-6-7, you’re welcome.”
What exactly does “6-7” mean? That’s the million-dollar question. The phrase, which Dictionary.com crowned as its 2025 Word of the Year, has an ambiguous origin and even murkier meaning. Attempts to pin it down have left many, including journalists and educators, scratching their heads. Some trace its roots to a rap song, while others point to its mention in South Park. Regardless of its origins, the phrase has taken on a life of its own, echoing through school halls across the United States and the United Kingdom. As Gabe Dannenbring, a seventh-grade science teacher in Sioux Falls, South Dakota, told CNN, “It’s like a plague—a virus that has taken over these kids’ minds. You can’t say any iteration of the numbers 6 or 7 without having at least 15 kids yell, ‘6-7!’”
Whether Altman’s renaming announcement was a joke or a calculated move to capture the attention of Gen Alpha, the ambiguity only adds to the intrigue. As Windows Central observed, such a rebrand could be a strategic attempt to foster greater interest in AI among younger users, especially as studies now show that youth are increasingly turning away from traditional search engines like Google and Bing in favor of AI tools like ChatGPT. These platforms offer quick, digestible answers, eliminating the need to sift through dense blocks of text—a shift that’s reshaping how information is consumed in 2025.
But as OpenAI rides the viral wave, deeper questions about safety and responsibility loom larger than ever. Enter Zico Kolter, a professor at Carnegie Mellon University, who now leads OpenAI’s four-person Safety and Security Committee. According to the Associated Press, Kolter’s panel holds the authority to halt the release of new AI systems if they’re deemed unsafe. This oversight isn’t just about preventing existential threats like AI-powered weapons of mass destruction; it extends to concerns as varied as cybersecurity breaches and the mental health impacts of chatbots.
Kolter’s role gained newfound significance in early November 2025, when regulators in California and Delaware made his oversight a key condition in their agreements to let OpenAI restructure as a public benefit corporation. The new structure, technically controlled by the nonprofit OpenAI Foundation, is designed to ensure that safety and security decisions take precedence over financial interests—a direct response to critics who accused the company of prioritizing speed and profits over public good. Kolter himself will serve on the nonprofit board and, crucially, will have “full observation rights” to attend all for-profit board meetings and access information about AI safety decisions, as detailed in a memorandum of understanding with California Attorney General Rob Bonta.
“Very much we’re not just talking about existential concerns here,” Kolter told AP. “We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems.” The committee, which also includes former U.S. Army General Paul Nakasone, has the power to request delays in model releases until safety mitigations are met. Whether they have exercised this authority remains confidential, but the message is clear: safety isn’t an afterthought anymore.
OpenAI’s renewed focus on safety is, in part, a response to recent controversies. The company has faced criticism this year, including a wrongful-death lawsuit from California parents who allege that ChatGPT contributed to their teenage son’s suicide after prolonged interactions with the chatbot. Such incidents have intensified scrutiny of how AI systems interact with vulnerable users and the potential mental health ramifications of widespread chatbot adoption.
Kolter, who has been studying AI since his undergraduate days at Georgetown University, has watched the field transform from a niche academic pursuit to a global phenomenon. “When I started working in machine learning, this was an esoteric, niche area,” he recalled. “We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered.” Now, he says, even experts are surprised by the rapid explosion of AI’s capabilities—and the risks that come with them. “I think very few people, even people working in machine learning deeply, really anticipated the current state we are in, the explosion of capabilities, the explosion of risks that are emerging right now.”
AI safety advocates are keeping a close watch on OpenAI’s restructuring and Kolter’s expanding role. Nathan Calvin, general counsel at the AI policy nonprofit Encode and a vocal critic of the company, told AP he’s “cautiously optimistic,” especially if Kolter’s group is empowered to hire staff and play a robust role. “I think he has the sort of background that makes sense for this role. He seems like a good choice to be running this,” Calvin said. Still, he warned that the company’s commitments must translate into real action: “Some of these commitments could be a really big deal if the board members take them seriously. They also could just be the words on paper and pretty divorced from anything that actually happens. I think we don’t know which one of those we’re in yet.”
The coming months will reveal whether OpenAI’s next chapter—whether called GPT-6, GPT-6-7, or something else entirely—will be defined by viral memes, genuine innovation, or a renewed commitment to safety. For now, the world watches as the company balances its ambitions with the growing responsibility that comes from shaping the future of artificial intelligence.