Silicon Valley is buzzing this week as the world’s tech elite descend on San Jose, California, for Nvidia’s annual GPU Technology Conference (GTC 2026), running from March 16 to 19. The conference has become a global showcase for the latest in artificial intelligence (AI) and accelerated computing, and this year’s edition is making even bigger waves thanks to headline-grabbing announcements and fierce competition between semiconductor giants Samsung Electronics and SK hynix.
The spotlight is squarely on memory technology—critical infrastructure for the AI era. Both Samsung and SK hynix are leveraging GTC 2026 to demonstrate their latest advances and deepen their partnerships with Nvidia, the company at the heart of the AI revolution.
SK hynix, a major supplier of memory to Nvidia, has crafted an immersive exhibition themed “Spotlight on AI Memory.” According to Yonhap Infomax, the company’s display is divided into three main zones: the Nvidia Collaboration Zone, the Product Portfolio Zone, and the Event Zone. Each area is designed to offer visitors a hands-on understanding of how advanced memory solutions are fueling AI’s explosive growth.
In the Nvidia Collaboration Zone, SK hynix is showing off real-world applications of its memory products on Nvidia’s AI platforms. These include the sixth-generation high-bandwidth memory (HBM4), HBM3E, and the SOCAMM2 server memory module, all of which are already integrated into Nvidia’s powerful GPU-based AI accelerators. There’s also a liquid-cooled enterprise SSD—jointly developed with Nvidia—and the DGX Spark supercomputer, which features SK hynix’s low-power DRAM, LPDDR5X.
The Product Portfolio Zone is a feast for tech enthusiasts, featuring everything from HBM4 and HBM3E to high-capacity server DRAM modules, next-gen mobile DRAM (LPDDR6), graphics DRAM (GDDR7), automotive memory, and enterprise SSDs. Meanwhile, the Event Zone offers interactive experiences, such as a virtual chip-stacking game that lets visitors grasp the intricacies of TSV (Through Silicon Via) processing and high-stacking packaging—technologies vital for building next-gen memory.
SK hynix’s presence at GTC 2026 is further underlined by the attendance of top brass, including SK Group Chairman Chey Tae-won and CEO Kwak Noh-jung. The company is using the opportunity to strengthen ties with Nvidia and other big tech companies, with a meeting between Chey and Nvidia CEO Jensen Huang also anticipated. As reported by Hankyung, SK hynix aims to “showcase the competitiveness of our memory technology, which is a core infrastructure of the AI era, based on our partnership with Nvidia.”
Samsung Electronics, not to be outdone, is taking the wraps off its next-generation HBM4E memory and its memory lineup for Nvidia’s Vera Rubin platform, aiming to cement its global AI leadership. According to Samsung Newsroom, Samsung’s booth is organized into several thematic areas: the HBM4 Hero Wall, the Nvidia Gallery, and three technology zones covering AI Factories, Local AI, and Physical AI. The company is positioning itself as the only provider capable of supplying a total memory solution for the Vera Rubin platform, spanning both memory and storage.
Samsung’s HBM4E, which is being unveiled in physical form for the first time, promises blistering speeds of 16Gbps per pin and a bandwidth of 4.0TB/s. The company is touting its proprietary Hybrid Copper Bonding (HCB) technology, which reduces thermal resistance by more than 20% compared to Thermal Compression Bonding (TCB) and enables stacks of more than 16 layers, key for supporting the ever-increasing demands of AI workloads. The HBM4 Hero Wall also features live chips and wafer displays, highlighting Samsung’s prowess in memory, logic design, and advanced packaging.
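Those headline figures line up with the HBM4-class interface width. Assuming the 2048-bit data bus defined in the JEDEC HBM4 standard (a detail not stated in the announcement itself), the per-pin rate multiplies out to the quoted bandwidth:

```python
# Back-of-the-envelope check of the quoted HBM4E bandwidth.
# Assumption: a 2048-bit data interface per stack, as in the JEDEC HBM4 spec.

PINS = 2048          # data pins (bits) per HBM4-class stack
GBPS_PER_PIN = 16    # quoted per-pin transfer rate, in Gbit/s

bandwidth_gbs = PINS * GBPS_PER_PIN / 8  # Gbit/s across all pins -> GB/s
print(f"{bandwidth_gbs:.0f} GB/s")       # 4096 GB/s, i.e. the quoted ~4.0 TB/s
```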
Samsung is also demonstrating a complete solution for the Vera Rubin platform, displaying HBM4 for Nvidia GPUs, SOCAMM2 for Vera CPUs, and the PM1763 SSD for storage. The company has started mass production of SOCAMM2, an LPDDR-based server memory module, while the PM1763 SSD, built on PCIe Gen6, is being showcased with live demonstrations of Nvidia’s SCADA workload. Samsung also plans to supply the PCIe Gen5-based PM1753 SSD for the CMX (Context Memory eXtension) platform, which extends AI inference cache data beyond GPU memory.
This high-stakes collaboration is not lost on Jensen Huang, Nvidia’s charismatic CEO. During his GTC 2026 keynote, Huang projected that Nvidia’s latest AI chips could propel the company to at least $1 trillion in revenue by 2027, as reported by Kyunghyang Shinmun. “AI’s inflection point for inference has arrived. AI can finally do productive work,” he declared, noting that computational demand has increased a million-fold in just two years and will only accelerate with the rise of AI agents.
Huang also took a moment to thank Samsung for its relentless effort, saying, “They are cranking as hard as they can,” and confirmed that Samsung’s foundry division is producing Nvidia’s newest inference chip, the Groq 3 LPU (Language Processing Unit). The chip’s design came to Nvidia through its recent acquisition of Groq; it is built for lightning-fast inference and will be integrated into the Vera Rubin platform, with shipments expected in the second half of 2026.
The Vera Rubin platform itself represents a leap forward in AI infrastructure, combining Arm-based Vera CPUs and next-gen Rubin GPUs to handle both massive AI model training and inference. Nvidia also unveiled the Rubin Ultra platform, which can link up to 144 GPUs, and teased next year’s debut of the Rosa CPU and Feynman GPU architecture. As Huang explained, “AI infrastructure investments are made in multi-year cycles, so a long-term platform roadmap is crucial. We plan to introduce new architectures every year.”
But GTC 2026 isn’t just about chips and memory. Nvidia is pushing the boundaries of AI’s real-world applications, from security-hardened agentic AI models like NemoClaw—designed to automate enterprise workflows while safeguarding data—to the integration of physical AI in robotics and autonomous vehicles. Hyundai Motor Group has joined Nvidia’s roster of self-driving partners, alongside Mercedes-Benz, Toyota, and GM, with plans to roll out autonomous vehicle platforms at scale. The conference even saw the debut of Olaf, a robot version of the beloved character from Disney’s Frozen, developed in collaboration with Disney Research to showcase Nvidia’s physical AI and simulation capabilities.
The convergence of Samsung and SK hynix’s memory innovations with Nvidia’s AI platform ambitions underscores a broader transformation: memory is no longer a passive component but a strategic pillar shaping the future of AI infrastructure. As an SK hynix official put it to Hankyung, “As AI technology advances, memory is becoming a core element that determines the structure and performance of AI infrastructure, from data centers to on-device AI.”
With every announcement and partnership, GTC 2026 is making it clear—AI’s next chapter will be written not just by software, but by the relentless drive for faster, smarter, and more efficient hardware at every level of the stack.