During National Robotics Week, which ran from April 4 to April 12, 2026, the world of artificial intelligence and robotics saw a flurry of announcements, partnerships, and bold visions for the future. Two companies stood out: NVIDIA, the established titan of AI hardware and software, and DeepX, a rising South Korean semiconductor firm with ambitions to reshape the landscape of physical AI. Their recent unveilings—spanning everything from next-generation chips to real-world applications in robotics and automation—signal a new era where AI isn’t just digital, but deeply embedded in the machines and devices that shape our daily lives.
NVIDIA kicked off the robotics celebration with a showcase of new technologies announced at its March GTC event. According to NVIDIA’s official blog, the company unveiled the Isaac GR00T open model, which can understand natural language commands and perform complex, multi-step tasks by combining vision, language, and action reasoning. Also introduced were the Cosmos world foundation model, designed to generate synthetic data and train robots at scale, and Newton 1.0, an open-source physics engine supporting precise collision detection and complex system simulation.
These aren’t just abstract advances. The University of Maryland, for example, is developing an AI-based humanoid system capable of tackling sophisticated household chores. Their approach, as highlighted by NVIDIA, uses the Isaac open robotics platform to simulate realistic home environments, allowing robots to safely train on millions of task variations—even in rare or tricky situations. Meanwhile, in healthcare, PeritasAI is leveraging NVIDIA’s Isaac for Healthcare and the Rheo automation platform to create intelligent, multi-agent systems designed to operate seamlessly in surgical suites.
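Simulation-based training of this kind typically relies on domain randomization: sampling many variations of the environment so a policy generalizes even to rare situations. The snippet below is a minimal, generic sketch of that idea in plain Python; it does not use NVIDIA's Isaac APIs, and the parameter names are purely illustrative.

```python
import random

def sample_home_scene(rng: random.Random) -> dict:
    """Sample one randomized training scene (parameter names are illustrative)."""
    return {
        "lighting_lux": rng.uniform(50, 1000),        # dim lamp .. bright daylight
        "floor_friction": rng.uniform(0.3, 0.9),      # rug .. hardwood
        "object_count": rng.randint(1, 12),           # clutter level
        "object_pose_jitter_cm": rng.uniform(0.0, 5.0),
        "camera_noise_sigma": rng.uniform(0.0, 0.02),
    }

def generate_training_scenes(n: int, seed: int = 0) -> list[dict]:
    """Deterministically generate n randomized scenes from a seed."""
    rng = random.Random(seed)
    return [sample_home_scene(rng) for _ in range(n)]

scenes = generate_training_scenes(1000)  # scaled into the millions for real training
```

Seeding the generator keeps runs reproducible, which matters when comparing policies trained on the same distribution of scenes.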
Automotive and industrial sectors are also feeling the impact. The Toyota Research Institute, according to NVIDIA, has applied the Cosmos World foundation model to achieve top-tier performance in dynamic view synthesis and remote control data augmentation. And in agriculture, the startup Aigen is using NVIDIA Jetson Orin edge AI modules to power solar-driven autonomous rovers that weed crops without herbicides—a win for both productivity and sustainability.
Energy infrastructure is another frontier. AES Corporation’s Maximo division recently completed the installation of a 100-megawatt solar power plant using autonomous robot swarms coordinated by NVIDIA Isaac Sim. These robots worked together to deploy solar panels at a scale and speed previously unimaginable, showing how AI and robotics are transforming even the most traditional industries.
In the logistics space, Doosan Robotics has developed an intelligent palletizing system that leverages Cosmos Reason. With just a single camera image, the system can determine the contents and condition of a box—adjusting its handling approach automatically. This kind of smart automation promises to make warehouses safer, more efficient, and less reliant on manual labor.
But perhaps most intriguing is the work happening below the waves. The University of Michigan has created OceanSim, a GPU-accelerated simulator for underwater robotics, using NVIDIA Isaac Sim and Omniverse libraries. OceanSim enables the generation of highly realistic synthetic underwater images, speeding up research and development for subaquatic robots that could one day explore, monitor, or even repair our oceans.
As these examples show, NVIDIA’s ecosystem is vast and growing. AWS and MassRobotics recently announced a second cohort of nine startups—ranging from humanoid robotics to industrial automation and haptics—further fueling innovation. Meanwhile, OpenClaw now runs fully locally on NVIDIA’s Jetson Thor, using the Nemotron open model and the vLLM inference library to push the boundaries of private, low-latency edge AI in robotics.
Yet as dominant as NVIDIA remains, there’s a new challenger on the horizon. On April 14, 2026, DeepX held a press conference at its Pangyo headquarters in Seongnam, South Korea. CEO Kim Nok-won didn’t mince words: “If NVIDIA contributed to creating AI, DeepX will help ensure that AI runs well in the real world. In the era of physical AI, DeepX will become the NVIDIA.”
DeepX’s strategy is built on proprietary chip technology and a full-stack approach that connects chips, hardware, and software like Lego blocks. Kim emphasized the firm’s power efficiency with a memorable anecdote: during a customer evaluation, butter placed on their chip didn’t melt, while competitors’ chips ran so hot that the butter liquefied. That’s not just a party trick—DeepX’s initial yield for Samsung Foundry’s 5nm process is an impressive 91%, far above the industry average of 50–80%. The company has also shrunk die size to a quarter of competitors’, quadrupling wafer production and slashing costs.
These efficiencies translate into real-world advantages. DeepX says its chip offers 20 times better power efficiency than NVIDIA’s Jetson Orin GPU at one-tenth the price. The next-generation DX-M2 chip, built on Samsung’s cutting-edge 2nm process, is designed to consume under 5 watts while delivering up to 80 TOPS (trillions of operations per second), with mass production targeted for 2027. The chip aims to run generative AI models with hundreds of billions, if not trillions, of parameters on battery-powered devices, moving AI inference from distant clouds to the very edge of the network.
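Taken at face value, the DX-M2 figures imply an efficiency budget that is easy to check. The sketch below computes TOPS per watt from the numbers quoted above; real-world efficiency depends on numeric precision, model, and workload, so this is an upper bound at the quoted limits, not a benchmark.

```python
def tops_per_watt(tops: float, watts: float) -> float:
    """Compute efficiency in TOPS per watt from a peak-throughput and power figure."""
    return tops / watts

# DX-M2 target figures quoted above: up to 80 TOPS at under 5 W.
dx_m2_efficiency = tops_per_watt(80, 5)
print(dx_m2_efficiency)  # 16.0 TOPS/W at the quoted limits
```

By this arithmetic, the chip would need to sustain at least 16 TOPS per watt to hit both targets simultaneously.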
The company isn’t just talking theory. DeepX’s AI computing solutions, co-developed with Hyundai Robotics Lab, have passed mass production validation and will soon be deployed in Hyundai’s delivery robot DAL-e and mobility platform MoBed. Full-scale production is scheduled for the end of 2026. DeepX has already secured an initial order of 40,000 units from Baidu and projects annual sales of around 400 million dollars (about 590 billion KRW) in 2026.
To support its ambitions, DeepX has built a web of partnerships. Its chips are compatible with major processors from Qualcomm, Intel, and Renesas, and it collaborates with AWS, Baidu, and Wind River on the software side. In robotics, DeepX’s API layer DX-Newton is fully compatible with NVIDIA’s Isaac ROS, allowing developers to start in a familiar environment and then switch to DeepX hardware for production—gaining both power and cost advantages.
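The value of an API-compatible layer like this is that application code can stay fixed while the inference backend underneath it changes. The sketch below illustrates that pattern in generic Python; the class and method names are invented for this example and are not DeepX's or NVIDIA's actual APIs.

```python
from typing import Protocol

class InferenceBackend(Protocol):
    """Minimal interface the application codes against (hypothetical)."""
    def infer(self, frame: bytes) -> str: ...

class DevWorkstationBackend:
    """Stand-in for a development backend, e.g. prototyping on a GPU workstation."""
    def infer(self, frame: bytes) -> str:
        return f"detections:{len(frame)}"

class EdgeNpuBackend:
    """Stand-in for an edge NPU used in production."""
    def infer(self, frame: bytes) -> str:
        return f"detections:{len(frame)}"

def run_pipeline(backend: InferenceBackend, frame: bytes) -> str:
    # Application logic is identical regardless of which backend is plugged in.
    return backend.infer(frame)

# Swap backends without touching the pipeline code:
dev_result = run_pipeline(DevWorkstationBackend(), b"frame-bytes")
prod_result = run_pipeline(EdgeNpuBackend(), b"frame-bytes")
assert dev_result == prod_result
```

Structural typing via `Protocol` means neither backend has to inherit from a shared base class; any object with a matching `infer` method satisfies the interface.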
Kim Nok-won summed up DeepX’s vision with characteristic confidence: “Just as TSMC made Taiwan what it is today, DeepX will build South Korea’s physical AI semiconductor industry. We won’t repeat the history of dependence on foreign technology in the CPU and GPU era.” The company’s three-stage Lego-like full-stack strategy aims to let customers rapidly build physical AI applications by choosing their preferred hardware and AI models, with minimal integration headaches.
It’s a bold claim, and one that comes at a time when the AI and robotics market is expected to triple in the next five years, driven by the spread of autonomous vehicles, robots, unmanned factories, and AI-powered devices. DeepX’s focus on efficiency, cost, and open compatibility offers a clear alternative to the cloud-centric, power-hungry models of the past.
As the age of physical AI dawns, the rivalry between established giants like NVIDIA and ambitious newcomers like DeepX is set to define not just who leads the market, but how AI will touch every aspect of our physical world.