Technology
18 September 2025

Meta And Google DeepMind Unveil Historic AI Breakthroughs

Meta launches advanced smart glasses while Google DeepMind’s AI outperforms human programmers, signaling a new era for artificial intelligence.

In a week marked by major milestones in artificial intelligence, two tech giants—Meta and Google DeepMind—have unveiled advances that could reshape how people interact with technology and solve some of the world’s most challenging problems. At Meta’s annual Connect conference on September 18, 2025, CEO Mark Zuckerberg took the stage at the company’s Silicon Valley campus to reveal a new generation of AI-powered smart glasses, while Google DeepMind announced a historic achievement for its Gemini 2.5 model, which outperformed human competitors at an international programming contest in Azerbaijan.

Meta’s showcase was a spectacle of innovation and ambition. Zuckerberg, holding up a sleek pair of black-framed glasses, described the technology as a “huge scientific breakthrough” before an audience of hundreds. The new Meta Ray-Ban Display glasses, developed in partnership with iconic eyewear brands Ray-Ban and Oakley, feature a full-color, high-resolution screen embedded in one lens, a 12-megapixel camera, and the ability to make video calls and view messages right before the wearer’s eyes. With a neural wristband that pairs to the glasses, users can even send messages with small hand gestures—a nod to the seamless integration of AI into everyday life.

According to BBC, the Ray-Ban Display glasses will hit the market this month, priced at $799 (£586)—a significant jump from Meta’s earlier models. For sports enthusiasts, the company also introduced the $499 Oakley Meta Vanguard glasses, and the second generation of Ray-Ban Meta glasses, priced at $379. These launches come as Meta continues to pour resources into AI, with Zuckerberg revealing plans to spend hundreds of billions of dollars on sprawling data centers in the US (one nearly the size of Manhattan) and to attract top AI talent from competitors.

Meta’s bet is that smart glasses—unlike the company’s earlier, much-hyped virtual reality headsets—will become an everyday accessory. “Unlike VR headsets, glasses are an everyday, non-cumbersome form factor,” Forrester VP and Research Director Mike Proulx told the BBC. But he cautioned that “the onus is on Meta to convince the vast majority of people who don’t own AI glasses that the benefits outweigh the cost.”

Industry analysts remain divided about the mass appeal of these devices. Leo Gebbie of CCS Insight expressed skepticism that the new, pricier Ray-Ban Display would match the traction of Meta’s earlier, more affordable models. “The Ray-Bans have done well because they’re easy to use, inconspicuous and relatively affordable,” Gebbie noted. Meta, for its part, has not released precise sales figures but is understood to have sold around two million pairs of smart glasses since entering the market in 2023.

The company’s AI ambitions don’t stop at hardware. Zuckerberg has made clear that Meta’s vision is to weave its artificial intelligence tool, Meta AI, into the fabric of daily life. The hardware launches are part of a broader push to develop what Meta calls “superintelligence”—AI technology that can out-think humans. This vision, however, comes amid ongoing scrutiny of Meta’s products, especially regarding their impact on children.

Just hours before the Connect conference, activists and families of suicide victims protested outside Meta’s New York headquarters, demanding tighter safeguards for children on social media platforms like Facebook, Instagram, and WhatsApp. The protest followed testimony last week by two former Meta safety researchers before the US Senate. They alleged that Meta not only failed to address potential harms to children from its virtual reality products but also discouraged internal research that might reveal such risks. Meta has firmly denied these allegations, calling them “nonsense.”

Meanwhile, on September 17, 2025, Google DeepMind announced a leap in AI capabilities that some experts are calling “historic.” At an international programming competition in Azerbaijan earlier this month, a version of DeepMind’s Gemini 2.5 AI model solved a complex, real-world problem that had stumped even the world’s top college programmers. The AI figured out how to send liquid through a network of ducts to interconnected reservoirs in the shortest possible time—a task with an effectively infinite number of possible configurations to weigh. Notably, it did so in less than half an hour, outperforming all human teams and earning a gold medal, making it the first AI to achieve this distinction at the event.

Google described the feat as a “profound leap in abstract problem-solving” and likened it to IBM’s Deep Blue defeating chess grandmaster Garry Kasparov in 1997 and DeepMind’s AlphaGo besting Go champion Lee Sedol in 2016. “For me it’s a moment that is equivalent to Deep Blue for Chess and AlphaGo for Go,” said Quoc Le, Google DeepMind’s vice-president. “Even bigger, it is reasoning more towards the real world, not just a constrained environment … because of that I think this advance has the potential to transform many scientific and engineering disciplines.” He pointed to fields like drug and chip design as areas poised for disruption.

While the Gemini 2.5 model failed two of the 12 tasks it was assigned, its overall performance ranked it second among 139 teams of elite college-level programmers from Russia, China, Japan, and elsewhere. Google said the model was specially trained on difficult coding, math, and reasoning problems, and performed “as well as a top 20 coder in the world.” The company also confirmed that the AI’s computing power far exceeds what’s available to average subscribers of its $250-a-month Google AI Ultra service.

The achievement has drawn both praise and skepticism from experts. Stuart Russell, a leading computer science professor at the University of California, Berkeley, said, “Claims of epochal significance seem overblown.” He pointed out that AI systems have performed well on programming tasks for some time and that previous breakthroughs like Deep Blue had “essentially no impact on the real world of applied AI.” Still, he acknowledged that “this performance may show progress towards making AI-based coding systems sufficiently accurate for producing high-quality code.”

Michael Wooldridge, Ashall Professor of Artificial Intelligence at Oxford, echoed the excitement but questioned the resources required, asking just how much computing power was needed for Gemini 2.5’s feat. Google declined to provide specifics, only confirming that it was more than what’s available to most users. Dr. Bill Poucher, executive director of the International Collegiate Programming Contest, called it “a key moment in defining the AI tools and academic standards needed for the next generation.”

These latest advances stand on the shoulders of decades of AI milestones—from Frank Rosenblatt’s 1957 Perceptron, which laid the groundwork for neural networks, to IBM’s Deep Blue, DeepMind’s AlphaGo, and AlphaFold, which helped win a Nobel Prize in chemistry in 2024 by predicting the 3D structures of proteins.

As Meta and Google race to shape the future of AI, the world is left to ponder both the promise and the peril of these technologies. With every breakthrough comes new questions—about ethics, safety, and the very nature of intelligence. For now, one thing is clear: the AI revolution is no longer a distant vision. It’s happening right before our eyes, and its impact is only just beginning to unfold.