February 4, 2026, was a day of major moves and bold promises across the artificial intelligence landscape, as companies ranging from Silicon Valley start-ups to global tech giants and security innovators announced new funding, product lines, and strategic expansions. Together, the news paints a vivid picture of an industry in flux, one where AI is shifting from a behind-the-scenes tool to a visible, sometimes autonomous co-worker and even a physical presence in our workplaces.
In the San Francisco Bay Area, start-up Expert Intelligence made headlines with its announcement of $5.8 million in seed funding, a round led by Sierra Ventures and joined by TSVC Management and Acorn Pacific Ventures, as reported by C&EN. The company, founded in 2022 by machine learning expert Lalin Theverapperuma, offers a platform designed to automate regulated laboratory workflows—most notably, the time-consuming and often subjective process of data interpretation in fields like drug development and pharmaceutical manufacturing.
As Theverapperuma explained, "Sometimes these instruments can produce terabytes and gigabytes of data in minutes, but to analyze or interpret it takes days or weeks, which is the real bottleneck. . . . And data interpretation is often based on intuition and knowledge that human scientists have gathered over the years." The company’s Limited Sample Model, a machine learning system trained on human scientist interpretations, promises to cut that analysis time down to hours, offering accuracy on par with seasoned chemists. The tool’s ability to learn from relatively small datasets distinguishes it from the large language models dominating the AI conversation, and it is designed to comply with strict US Food and Drug Administration guidelines.
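The appeal of small-data learning is that a handful of expert-labeled examples can define a usable decision rule. The toy sketch below is purely illustrative and is not Expert Intelligence's actual method: the nearest-centroid classifier, the two-feature vectors, and the pass/fail labels are all invented for the example.

```python
# Hypothetical sketch: learning an "interpretation" rule from a handful of
# expert-labeled measurements, in the spirit of a small-data model.
# The feature vectors and labels below are invented for illustration.
from math import dist  # Euclidean distance (Python 3.8+)

def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs labeled by experts."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest class centroid."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

# A dozen expert calls suffice to fit this model -- no large corpus needed.
labeled = [
    ([0.9, 0.1], "pass"), ([0.8, 0.2], "pass"), ([0.85, 0.15], "pass"),
    ([0.2, 0.9], "fail"), ([0.1, 0.8], "fail"), ([0.15, 0.85], "fail"),
]
model = train_centroids(labeled)
print(classify(model, [0.82, 0.12]))  # -> pass
print(classify(model, [0.18, 0.88]))  # -> fail
```

The point of the sketch is that the "training set" is six labeled points, not millions, which is the regime the company says it targets.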
Expert Intelligence already counts more than 10 customers, including three major pharmaceutical companies and several food and beverage producers. Theverapperuma’s confidence in the platform’s reach was clear: "If you had breakfast today, I am confident that some ingredients for that came from one of our customers." The new funding will be used to ramp up the company’s marketing team and speed up deployment at customer sites, reflecting both the urgency and the appetite for automation in regulated industries.
But the day’s AI news wasn’t limited to start-ups. At the Cisco AI Summit, Intel CEO Lip-Bu Tan announced that the company will enter the graphics processing unit (GPU) market, a space currently dominated by Nvidia. According to Reuters, Intel’s move is a strategic expansion beyond its traditional CPU focus, aiming to capture market share in both gaming and AI model training—a sector where GPUs are essential. The initiative will be overseen by Kevork Kechichian, executive vice president and general manager of Intel’s data center group, with support from Eric Demers, a former Qualcomm engineering executive. Tan acknowledged that Intel’s GPU strategy is still in its early stages and will be shaped by customer demand, but the intention is clear: to challenge Nvidia’s dominance in AI-focused compute.
While hardware giants and start-ups push the boundaries of what AI can do, the AI & Big Data Expo and Intelligent Automation Conference, which kicked off the same day, focused on the practical challenges of integrating AI into enterprise workflows. According to AI News, the conference’s technical sessions centered on the evolution from passive automation to "agentic" systems—AI tools that can reason, plan, and execute tasks autonomously, rather than simply following rigid, pre-programmed scripts.
Amal Makwana from Citi described these agentic systems as digital co-workers, a sentiment echoed by Scott Ivell and Ire Adewolu of DeepL, who argued that such AI closes the "automation gap" by reducing the distance between intent and execution. But, as Brian Halpin from SS&C Blue Prism pointed out, organizations must master traditional automation before deploying agentic AI, and this shift requires robust governance frameworks to handle the unpredictable, non-deterministic outcomes these systems can produce.
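The contrast the speakers drew between scripted automation and agentic systems can be sketched in a few lines. Everything below is hypothetical: the tools, the goal test, and the one-step "reasoning" policy are invented stand-ins, not any vendor's implementation.

```python
# Hypothetical sketch of an agentic loop vs. a fixed script: the agent
# re-plans after each observation instead of following pre-programmed steps.
# The "tools" and goal test below are invented for illustration.

def scripted_automation(steps, tools):
    """Classic RPA: execute a fixed sequence, regardless of outcomes."""
    return [tools[name]() for name in steps]

def agentic_loop(goal_reached, choose_next_step, tools, max_steps=10):
    """Agent: observe state, pick the next action, stop when the goal holds."""
    history = []
    for _ in range(max_steps):
        if goal_reached(history):
            break
        action = choose_next_step(history)         # the "reasoning" step
        history.append((action, tools[action]()))  # act, then observe
    return history

tools = {"fetch": lambda: "record", "validate": lambda: "ok"}
print(scripted_automation(["fetch", "validate"], tools))  # always runs both
run = agentic_loop(
    goal_reached=lambda h: any(obs == "ok" for _, obs in h),
    choose_next_step=lambda h: "fetch" if not h else "validate",
    tools=tools,
)
print([a for a, _ in run])  # stops on its own once the goal is met
```

The non-determinism Halpin warned about lives in `choose_next_step`: once that decision comes from a model rather than a lambda, the action sequence is no longer fixed in advance, which is exactly why governance frameworks matter.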
Steve Holyer of Informatica, along with speakers from MuleSoft and Salesforce, stressed that careful oversight is essential when architecting these systems: a governance layer must control how AI agents access and use data to prevent operational failures. Andreas Krause from SAP went a step further, warning that AI is doomed to fail without trusted, connected enterprise data. "For GenAI to function in a corporate context, it must access data that is both accurate and contextually relevant," Krause stated. Meni Meller of GigaSpaces addressed the notorious problem of AI "hallucinations," advocating for enterprise retrieval-augmented generation (eRAG) combined with semantic layers to ensure models retrieve factual enterprise data in real time.
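In miniature, retrieval-augmented generation means fetching relevant records first and grounding the answer in them. The sketch below is an illustrative toy, not the eRAG product mentioned above: word-overlap scoring stands in for vector search, and quoting the retrieved fact stands in for an LLM's generation step. The corpus is invented.

```python
# Hypothetical sketch of retrieval-augmented generation: ground an answer
# in retrieved enterprise records instead of letting a model free-associate.
# The corpus, scoring, and "generate" step are invented for illustration.

CORPUS = {
    "policy-17": "refund requests are approved within 14 days of purchase",
    "policy-42": "enterprise accounts renew annually on the contract date",
}

def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query (stand-in for search)."""
    words = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(words & set(kv[1].split())),
                    reverse=True)
    return scored[:k]

def answer(query, corpus):
    """'Generate' by quoting the retrieved fact, with its source attached."""
    doc_id, text = retrieve(query, corpus)[0]
    return f"{text} [source: {doc_id}]"

print(answer("when are refund requests approved", CORPUS))
```

Attaching the source identifier to every answer is the part that addresses hallucinations: a claim the retriever cannot ground in a record never reaches the output unsupported.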
Physical safety was another hot topic, as the integration of AI into factories, offices, and public spaces introduces risks that differ from traditional software failures. A panel featuring Edith-Clare Hall from ARIA and Matthew Howard from IEEE RAS discussed the need for safety protocols before robots can interact with humans. Perla Maiolino from the Oxford Robotics Institute offered a technical perspective, sharing her research into Time-of-Flight sensors and electronic skin to give robots both self-awareness and environmental awareness—crucial for preventing accidents in manufacturing and logistics.
Observability in software development was also highlighted. Yulia Samoylova from Datadog noted that as AI systems become more autonomous, teams need new ways to observe their internal states and reasoning processes to ensure reliability. Julian Skeels from Expereo argued that network infrastructure must be designed specifically for AI workloads, requiring sovereign, secure, and "always-on" networks capable of handling high throughput.
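One common way to observe an autonomous system's behavior, in the spirit of Samoylova's point, is to emit a structured trace event for every decision. The decorator, event fields, and traced function below are invented for illustration and do not represent Datadog's tooling.

```python
# Hypothetical sketch of decision-level observability: emit one structured
# trace event per autonomous step so its reasoning can be audited later.
# The event schema and the traced function are invented for illustration.
import json
import time

TRACE = []

def traced(step_name):
    """Decorator: record inputs, output, and latency for each step."""
    def wrap(fn):
        def inner(*args):
            start = time.perf_counter()
            result = fn(*args)
            TRACE.append({
                "step": step_name,
                "inputs": list(args),
                "output": result,
                "latency_ms": round((time.perf_counter() - start) * 1000, 3),
            })
            return result
        return inner
    return wrap

@traced("score_invoice")
def score_invoice(amount):
    return "review" if amount > 1000 else "auto-approve"

score_invoice(250)
score_invoice(5000)
print(json.dumps(TRACE, indent=2))  # one auditable event per decision
```

Because each event carries the inputs alongside the output, a reviewer can later ask not only what the system did but what it saw when it did it.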
Yet, even the best technical solutions can stumble without cultural buy-in. Paul Fermor from IBM Automation warned against the "illusion of AI readiness," emphasizing that strategies must be human-centered to ensure adoption. Jena Miller reinforced this, noting that if the workforce doesn’t trust the tools, the technology yields no return. Ravi Jay from Sanofi suggested that leaders need to make operational and ethical decisions early in the process, deciding where to build proprietary solutions and where to buy established platforms.
Meanwhile, in the security sector, Artificial Intelligence Technology Solutions Inc. (AITX) announced an expansion order from a major auction operator, as detailed in its February 4, 2026, SEC filing. The order covers two RIO 360 and two RIO Mini units with SARA licenses, following a successful initial deployment in October 2025. The expansion reflects growing customer confidence in the autonomous security solutions of Robotic Assistance Devices (RAD), AITX's subsidiary, which are designed to secure large outdoor auction yards. RAD plans to showcase these solutions at ISC West 2026 with live demonstrations, underlining the increasing demand for AI-powered security in complex environments.
From laboratory automation and enterprise AI governance to hardware innovation and physical security, the developments of February 4, 2026, reveal an industry that’s not just evolving—it’s accelerating. As AI systems become more capable, the pressure mounts to ensure they’re deployed safely, ethically, and with a clear eye on both technical and human realities. The stakes are high, and the race is on.