04 February 2026

AI Infrastructure Shifts To Edge As Firms Race To Innovate

From law firms to tech startups, organizations are rethinking AI infrastructure and governance to deliver faster, more reliable services and unlock real-world value.

February 4, 2026 brought a flurry of significant developments in artificial intelligence (AI) infrastructure and adoption across industries, from law firms and tech startups to global enterprises. As the AI & Big Data Expo and Intelligent Automation Conference opened their doors, the spotlight fell not just on the promise of AI as a digital co-worker, but on the nuts and bolts that make such ambitions possible: data quality, network infrastructure, and cultural readiness. Meanwhile, Canadian startup PolarGrid made headlines with a prototype network designed to slash AI latency by shifting inference to the edge, and leading law firm Eversheds Sutherland announced a new AI-focused innovation department in the US. Together, these moves paint a picture of an industry moving beyond hype and grappling with the realities of turning AI into a reliable, everyday tool.

For many, the first phase of the AI revolution was defined by massive investments in centralized data centers, with hyperscalers pouring hundreds of billions into vast server farms. But as Investing News Network reports, 2026 is shaping up to be the year when the question shifts from "who can build fastest" to "who can deliver the best user experience and return on investment." Nicholas Mersch from Purpose Investments summed up the mood: "The focus is turning from who can build fastest to who can drive the highest revenue and margin per dollar of AI infrastructure." With some data centers now pushing past 1 gigawatt of power usage and supply shortages for crucial components like high-bandwidth memory, the limits of centralized architectures are becoming increasingly clear.

Enter PolarGrid, a Canadian startup led by former Canopy Growth president Rade Kovacevic. The company is betting that the future of AI lies not in ever-bigger data centers, but in bringing AI inference closer to users—what's known as the "edge." In recent tests shared with INN, PolarGrid's prototype network cut latency by more than 70% compared to traditional centralized hyperscalers, bringing total response times down to around 300 milliseconds. That may sound technical, but it has real-world consequences: for applications like voice assistants, video agents, or even autonomous vehicles, a one-second pause can break trust and render the technology unusable.

Kovacevic draws a parallel to the early days of the commercial internet. "Initially we’ve all been enamored with the new features and capabilities," he explained to INN, "but as we’ve gotten used to it, our expectations have continued to increase." Just as consumers once marveled at waiting 30 seconds for an image to load, only to later demand near-instant results, users of AI will soon expect seamless, human-like interactions. Anything less, especially in latency-sensitive applications like talent recruitment or customer service, risks driving users away. As Kovacevic puts it, "Inference latency is the bottleneck for real-time AI at scale—whether it’s real-time voice or video solutions."

The technical solution is elegantly simple: instead of routing every user request to a handful of distant data centers, PolarGrid distributes GPUs across major population centers in North America. Imagine swapping a warehouse in another state for a neighborhood vending machine—the trip is shorter, and the response is faster. This approach doesn't eliminate the need for large, centralized training clusters, but it does shift the latency-sensitive part of the workload closer to the user. For policymakers and enterprises concerned with data sovereignty and security, edge AI also offers the advantage of keeping sensitive data local.
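The arithmetic behind the latency claim is easy to sketch. The toy model below uses illustrative numbers (not PolarGrid's measured figures): total response time is treated as network round trip plus queueing plus inference compute, and moving inference to an edge node mainly shrinks the first two terms while the compute itself stays roughly the same.

```python
# Illustrative latency model for centralized vs. edge inference.
# All numbers are hypothetical, chosen only to mirror the rough
# magnitudes discussed in the article.

def response_time_ms(rtt_ms: float, queue_ms: float, inference_ms: float) -> float:
    """Total user-perceived latency for one inference request."""
    return rtt_ms + queue_ms + inference_ms

# Distant centralized data center: long network haul, heavy contention.
centralized = response_time_ms(rtt_ms=140, queue_ms=700, inference_ms=260)

# Edge node in the user's metro area: short hop, lighter load.
edge = response_time_ms(rtt_ms=12, queue_ms=28, inference_ms=260)

reduction = 1 - edge / centralized
print(f"centralized: {centralized:.0f} ms, edge: {edge:.0f} ms, "
      f"reduction: {reduction:.0%}")
```

Under these assumed inputs the edge path lands around 300 ms, a reduction of just over 70%, which is the ballpark the article attributes to PolarGrid's prototype tests.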

This shift comes at a time when analysts expect hyperscalers to spend between US$300 billion and US$600 billion on AI infrastructure in 2026 alone. Yet, as INN notes, the winners may not be those who spend the most, but those who squeeze the most utility—and revenue—out of every dollar invested. PolarGrid's early pilots are targeting verticals like voice agents and interactive entertainment, where even a small improvement in responsiveness can translate directly into higher engagement and revenue.

While the technical challenges of latency and infrastructure are front and center, the AI & Big Data Expo and Intelligent Automation Conference underscored a broader set of hurdles facing AI adoption. According to AI News, the transition from passive automation to "agentic" systems—AI that can reason, plan, and execute tasks autonomously—requires much more than just clever algorithms. Amal Makwana from Citi explained how these systems are now acting across enterprise workflows, closing the "automation gap" and functioning as true digital co-workers. However, Brian Halpin from SS&C Blue Prism cautioned that organizations must master standard automation before deploying such agentic AI, as these systems require robust governance frameworks to handle non-deterministic outcomes.
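The governance requirement Halpin describes can be made concrete with a small sketch. The class below (all names hypothetical, not any vendor's API) gates every action an agent proposes through an allow-list policy and writes an audit trail, so non-deterministic proposals are blocked and escalated rather than silently executed.

```python
# Hypothetical governance gate for an agentic workflow: each proposed
# action passes a policy check before execution, and every decision is
# logged for audit. Illustrative only; real frameworks are far richer.
from dataclasses import dataclass, field

@dataclass
class GovernedAgent:
    allowed_actions: set[str]
    audit_log: list[str] = field(default_factory=list)

    def execute(self, action: str, payload: str) -> str:
        """Run an action only if policy allows it; record every decision."""
        if action not in self.allowed_actions:
            self.audit_log.append(f"BLOCKED {action}: {payload}")
            return "escalated to human reviewer"
        self.audit_log.append(f"RAN {action}: {payload}")
        return f"done: {action}"

agent = GovernedAgent(allowed_actions={"draft_email", "summarize_doc"})
print(agent.execute("summarize_doc", "Q3 board pack"))  # permitted action
print(agent.execute("wire_transfer", "$50,000"))        # blocked and escalated
```

The design choice worth noting is that the gate fails closed: anything outside the explicit allow-list is escalated to a human, which is one way to contain the non-deterministic outcomes the speakers warned about.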

Speakers from Informatica, MuleSoft, and Salesforce echoed the need for strict oversight, emphasizing that a governance layer must control how AI agents access and utilize data to prevent operational failures. Data quality remains a critical stumbling block. Andreas Krause from SAP warned, "AI fails without trusted, connected enterprise data." To combat issues like hallucinations in large language models, Meni Meller of Gigaspaces advocated for enterprise retrieval-augmented generation (eRAG) combined with semantic layers, allowing AI to retrieve factual enterprise data in real time.
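The retrieval-augmented pattern Meller describes can be sketched in a few lines. The toy below (documents and keyword scoring are stand-ins, not Gigaspaces's actual product) retrieves the best-matching enterprise record and grounds the prompt in it, so the model answers from retrieved facts rather than guessing.

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank enterprise
# records against the query, then build a prompt grounded in the winner.
# Toy data and naive keyword overlap stand in for a real vector search.

DOCS = {
    "q3_revenue": "Q3 revenue was $4.2M, up 12% year over year.",
    "headcount": "Engineering headcount is 85 as of October.",
    "churn": "Monthly customer churn held at 1.8% in Q3.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by keyword overlap with the query (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(
        DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the prompt in retrieved facts so the model can't just guess."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What was Q3 revenue?")
print(prompt)
```

A semantic layer would sit between the query and `retrieve`, mapping business terms like "Q3 revenue" onto the right tables and fields; here the keyword match is a crude stand-in for that mapping.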

The conversation extended beyond software. As AI systems become embodied—deployed in factories, offices, and public spaces—physical safety becomes a pressing concern. Edith-Clare Hall from ARIA and Matthew Howard from IEEE RAS discussed the importance of safety protocols for robots interacting with humans, while Perla Maiolino from the Oxford Robotics Institute highlighted research into Time-of-Flight sensors and electronic skin to improve robots' self-awareness and environmental perception.

Network infrastructure, too, must keep pace. Julian Skeels from Expereo argued that networks must be designed specifically for AI workloads, with sovereign, secure, and always-on fabrics capable of handling high throughput. Yet, as Paul Fermor from IBM Automation warned, traditional automation thinking often underestimates the complexity of AI adoption—a phenomenon he called the "illusion of AI readiness." Jena Miller reinforced that strategies must be human-centered, ensuring workforce trust and adoption, or the technology will fail to deliver returns. Ravi Jay from Sanofi advised that leaders must ask operational and ethical questions early, deciding where to build proprietary solutions and where to buy established platforms.

Meanwhile, in the legal world, Eversheds Sutherland—a global law firm headquartered in London with over 430 lawyers in the US—announced the launch of a 20-person AI-centric innovation department. As reported by Global Legal Post, the new department consolidates technology functions including data and analytics, research and knowledge services, legal technology, and client-facing technologies. Led by Katrina Dittmer, the team aims to "accelerate" the firm's use of AI and get "the right tools" into lawyers' hands. About 40% of the firm's US lawyers already use a generative AI legal platform for tasks like due diligence, litigation, and document drafting, and several pilot projects—including an AI-powered enterprise search tool—are planned for 2026.

Dittmer emphasized the importance of experimentation and a startup-like culture: "There’s an element of innovation that is about experimentation… We’re able to take what we learn from feedback, from pilots and make decisions quickly [if they] make sense for the firm and pivot quickly when they don’t." Ron Friedmann, a senior director analyst at Gartner, called the move a sign that generative AI has become a "must-have" for large firms to stay competitive.

As AI infrastructure evolves—shifting from centralized data centers to the edge, from passive automation to agentic systems, and from cautious pilots to full-scale adoption—the challenge is no longer just technical. Success will depend on building reliable, responsive networks, ensuring data quality, establishing robust governance, and fostering a culture ready to embrace change. For those willing to tackle these hurdles, the payoff could be transformative.