AMD has once again pushed the boundaries of artificial intelligence (AI) hardware with the unveiling of its Instinct MI350 Series GPUs on June 16, 2025. This launch marks a significant milestone in AMD’s quest to build an open AI ecosystem, combining cutting-edge hardware with a robust software stack and scalable infrastructure. The MI350 Series is designed to deliver leadership rack-scale AI performance well beyond 2027, setting new standards in performance, efficiency, and scalability for generative AI and high-performance computing.
Dr. Lisa Su, AMD’s chair and CEO, emphasized the company’s rapid innovation pace, stating, “AMD is driving AI innovation at an unprecedented pace, highlighted by the launch of our AMD Instinct MI350 series accelerators, advances in our next generation AMD ‘Helios’ rack-scale solutions, and growing momentum for our ROCm open software stack.” She further highlighted the collaborative spirit behind these advancements, noting that AMD’s expanding leadership across a broad ecosystem of hardware and software partners is helping define the future of AI through open standards and shared innovation.
AMD’s announcement was not limited to the hardware itself. The company also showcased its end-to-end, open-standards rack-scale AI infrastructure, which is already being deployed in hyperscaler environments such as Oracle Cloud Infrastructure (OCI). This infrastructure integrates the MI350 Series accelerators with AMD’s 5th Gen EPYC processors and AMD Pensando Pollara NICs, promising powerful performance and scalability. Broad availability of this infrastructure is expected in the second half of 2025, signaling AMD’s commitment to delivering comprehensive AI solutions.
Adding to the excitement, AMD previewed its next-generation AI rack, codenamed “Helios,” which promises to continue the company’s leadership in rack-scale AI performance. Alongside these hardware innovations, AMD announced the broad availability of the AMD Developer Cloud, aimed at supporting global developer and open-source communities in building and optimizing AI applications.
Oracle, a major cloud hyperscaler, quickly followed with its own announcement on the same day. It revealed plans to be among the first to offer an AI supercomputer powered by AMD’s latest Instinct MI355X GPUs on OCI. These GPUs deliver more than double the price-performance for large-scale AI training and inference workloads compared to the previous generation, promising customers unprecedented capabilities for AI innovation at scale.
Oracle’s zettascale AI clusters, accelerated by up to 131,072 MI355X GPUs, are designed to handle the demands of new AI applications that require processing larger and more complex datasets. The zettascale OCI Supercluster features a high-throughput, ultra-low latency RDMA cluster network architecture, enabling massive parallelism and efficiency. The MI355X GPUs themselves nearly triple the compute power and offer a 50 percent increase in high-bandwidth memory compared to their predecessors.
Customers using OCI can expect up to 2.8 times higher throughput for AI deployments, thanks to the MI355X’s 288 gigabytes of HBM3 high-bandwidth memory and up to eight terabytes per second of memory bandwidth. The GPUs also support the new FP4 standard, a 4-bit floating-point compute format that enables cost-effective deployment of modern large language and generative AI models with ultra-efficient, high-speed inference.
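To make the FP4 capacity claim concrete, the back-of-the-envelope sketch below compares how much weight memory a model needs at different precisions against the 288 GB figure cited above. The model sizes are hypothetical examples, and the calculation deliberately ignores KV cache, activations, and runtime overhead, so treat it as a rough sizing intuition rather than a deployment guide.

```python
# Rough check: how large a model's weights fit in one MI355X's 288 GB
# of HBM3 at different precisions. Illustrative only; real deployments
# must also budget for KV cache, activations, and framework overhead.

HBM_GB = 288  # per-GPU high-bandwidth memory cited for the MI355X

# bytes per parameter at each precision
PRECISIONS = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

def weight_footprint_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (using 1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for params_b in (70, 180, 405):  # hypothetical model sizes, in billions
    for name, bpp in PRECISIONS.items():
        gb = weight_footprint_gb(params_b, bpp)
        verdict = "fits" if gb <= HBM_GB else "exceeds"
        print(f"{params_b}B params @ {name}: ~{gb:.0f} GB ({verdict} {HBM_GB} GB HBM)")
```

The pattern the sketch surfaces is the point of FP4: halving bytes per parameter relative to FP8 (and quartering relative to FP16) lets substantially larger models fit on a single GPU before any sharding is required.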
Oracle’s AI infrastructure is also notable for its dense, liquid-cooled design, which achieves a power density of 125 kilowatts per rack. Each rack houses 64 GPUs, each drawing 1,400 watts, allowing for faster training with higher throughput and lower latency. The system includes a powerful head node featuring an AMD Turin high-frequency CPU and up to three terabytes of system memory, designed to optimize GPU performance through efficient job orchestration and data processing.
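A quick arithmetic check shows how those figures relate: 64 GPUs at 1,400 watts account for roughly 89.6 kW, leaving the balance of the 125 kW rack budget for the head node, networking, and cooling-adjacent components. The sketch below works this out; the headroom breakdown is our inference, not an Oracle-published figure.

```python
# Sanity-check the rack power figures cited above. The headroom split
# is an inference from the stated numbers, not an official breakdown.

RACK_KW = 125          # stated per-rack power density
GPUS_PER_RACK = 64     # stated GPU count per rack
GPU_WATTS = 1_400      # stated per-GPU power draw

gpu_kw = GPUS_PER_RACK * GPU_WATTS / 1_000
headroom_kw = RACK_KW - gpu_kw

print(f"GPUs draw ~{gpu_kw:.1f} kW of the {RACK_KW} kW rack budget")
print(f"~{headroom_kw:.1f} kW remains for head node, NICs, and other components")
```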
Oracle will be the first hyperscaler to deploy AMD’s Pollara AI NICs on backend networks. These NICs provide advanced RDMA over Converged Ethernet (RoCE) functionality, including programmable congestion control and support for open industry standards from the Ultra Ethernet Consortium (UEC). These capabilities give customers greater flexibility in network fabric design while maintaining the high performance and low latency demanding AI workloads require.
Meanwhile, Vultr, a leading cloud provider and top-tier sponsor of AMD Advancing AI, announced it is among the first to offer the AMD Instinct MI355X GPU to its customers. Vultr is now accepting pre-orders for early access to the MI355X, with availability slated for the third quarter of 2025. The company offers direct liquid cooling (DLC) options for higher-density environments, enhancing thermal efficiency and unlocking higher performance per rack.
Complementing the hardware, Vultr benefits from the latest enhancements in AMD’s ROCm software stack, which optimizes AI inference, training, and framework compatibility. This results in high throughput and ultra-low latency, essential for modern AI workloads. Negin Oliver, AMD’s corporate vice president of business development for the Data Center GPU Business, remarked, “AMD Instinct MI350 series GPUs paired with AMD ROCm software provide the performance, flexibility, and security needed to deliver tailored AI solutions that meet the diverse demands of the modern AI landscape.”
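The framework compatibility mentioned above is worth illustrating: ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda interface that NVIDIA-targeted code uses, so much existing GPU code runs unchanged on Instinct hardware. Here is a minimal sketch, assuming a ROCm-enabled PyTorch installation; it is a smoke test, not a tuned workload.

```python
# Minimal sketch: on a ROCm build of PyTorch, AMD Instinct GPUs are
# reached through the familiar torch.cuda interface, so CUDA-path
# code typically runs unchanged.
import torch

if torch.cuda.is_available():  # True on ROCm builds with an AMD GPU present
    device = torch.device("cuda")
    print("Running on:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No GPU found; falling back to CPU")

# Use half precision on the accelerator, full precision on CPU.
dtype = torch.float16 if device.type == "cuda" else torch.float32

# A small matmul to confirm the accelerator path works end to end.
a = torch.randn(4096, 4096, device=device, dtype=dtype)
b = torch.randn(4096, 4096, device=device, dtype=dtype)
c = a @ b
print("Result shape:", tuple(c.shape), "on", c.device)
```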
Vultr’s lineup already includes AMD EPYC 9004 Series and 7003 Series CPUs, as well as AMD Instinct MI325X and MI300X GPUs. The addition of the MI355X GPUs, especially when paired with AMD EPYC 4005 Series CPUs, offers customers full stack support—delivering high-performance compute and seamless integration from processor to accelerator.
J.J. Kardwell, CEO of Vultr, highlighted the significance of the new GPUs, stating, “AMD MI355X GPUs are designed to meet the diverse and complex demands of today’s AI workloads, delivering exceptional value and flexibility. As AI development continues to accelerate, the scalability, security, and efficiency these GPUs deliver are more essential than ever. We are proud to be among the first cloud providers worldwide to offer AMD MI355X GPUs, empowering our customers with next-generation AI infrastructure.”
Vultr’s participation in the AMD Cloud Alliance further underscores the collaborative nature of AMD’s AI ecosystem, bringing together best-of-breed technology partners to deliver integrated cloud computing solutions. Vultr aims to make high-performance cloud infrastructure easy to use, affordable, and locally accessible for enterprises and AI innovators across 185 countries.
AMD’s announcements on June 16, 2025, collectively signal a major leap forward in AI infrastructure technology. The combination of the MI350 Series GPUs, including the flagship MI355X, the open ROCm software stack, and the support of hyperscalers such as Oracle and cloud providers like Vultr positions AMD as a formidable force in the AI hardware market. These advancements promise to accelerate AI research, development, and deployment, enabling organizations to tackle increasingly complex AI challenges with greater speed, efficiency, and scalability.
As AI continues to evolve rapidly, AMD’s open and scalable approach—highlighted by Dr. Su’s vision of shared innovation and open standards—may well shape the future of AI infrastructure for years to come. The industry will be watching closely as these new technologies become broadly available in the latter half of 2025 and beyond.