In the ever-evolving world of computing, the closing days of 2025 have seen a shake-up in both the desktop CPU and AI workstation markets, with Intel, AMD, and Nvidia each making bold moves that have left industry watchers and consumers alike asking: Who's really winning the performance-for-price race?
On the desktop front, Intel has made a surprising comeback in the sub-$230 CPU segment, a space once dominated by AMD's aggressive pricing and high core counts. According to TechRadar, Intel's Core Ultra 5 245KF, available for just under $220, now delivers a level of performance that, not so long ago, would have required spending hundreds more. With 14 cores (six performance and eight efficiency), the chip boosts up to 5.2GHz and posts a PassMark score near 43,000. For those seeking integrated graphics and the latest platform features, the Core Ultra 5 245K, priced just under $230, adds Intel Graphics and support for the new LGA1851 platform, PCIe 5.0, and large cache sizes.
What does this mean for AMD? The Ryzen 9 5900XT, once a darling of value seekers, now seems less compelling. Despite its impressive 16 cores and 32 threads, it's built on the older Zen 3 architecture and is limited to DDR4 and PCIe 4.0. At a current Amazon price of about $309, even after discounts, it struggles to justify its premium when Intel's new chips offer similar or better everyday performance for considerably less.
This role reversal is notable. As TechRadar points out, "AMD built its comeback years ago by undercutting Intel with aggressive core counts and solid value. Now Intel is doing something similar, flooding the lower price tiers with CPUs that deliver strong multi-threaded performance without demanding flagship pricing." For users building general-purpose systems, workstations, or mid-range gaming rigs, Intel's offering now represents the sweet spot between price and performance. While AMD still holds advantages in platform longevity and at higher price points, Intel is setting the pace in the crucial $200–$230 range.
But the story doesn't end with desktop CPUs. The AI workstation market, too, is undergoing a transformation, with AMD and Nvidia battling for the hearts and wallets of developers, engineers, and researchers. In a hands-on comparison published on December 25, 2025, by The Register, AMD's Strix Halo and Nvidia's DGX Spark, two compact AI workstations each equipped with 128 GB of unified memory accessible to the GPU, were put through their paces to see which could claim the AI crown.
Nvidia's DGX Spark, launched in October 2025, is described as an "AI lab in a box": a compact, all-metal system that doubles as a heat sink and relies on an external USB-C power brick. At $3,999, it's not cheap, but it brings the promise of running nearly any AI workload, thanks in part to its Blackwell architecture GPU and 20-core Arm CPU. The Spark's 192 fifth-generation tensor cores and 48 fourth-generation RT cores deliver a theoretical 1 petaFLOPS of sparse FP4 performance, though real-world workloads typically run at 8- or 16-bit precision, capping actual performance at 250 or 125 teraFLOPS, respectively.
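The gap between the headline figure and what workloads actually see follows from simple precision and sparsity scaling. A minimal sketch of that arithmetic (the halving-per-step convention is the standard way tensor-core specs are quoted; the function name is ours, not Nvidia's):

```python
# Scale a sparse-FP4 headline figure down to dense throughput at
# higher precisions: dropping 2:4 structured sparsity halves it,
# and each doubling of precision halves it again.
def dense_tflops(sparse_fp4_tflops, bits):
    dense_fp4 = sparse_fp4_tflops / 2       # remove the sparsity speed-up
    return dense_fp4 / (bits // 4)          # 8-bit -> /2, 16-bit -> /4

spark_headline = 1000.0                     # 1 petaFLOPS sparse FP4
print(dense_tflops(spark_headline, 8))      # 250.0 teraFLOPS at FP8
print(dense_tflops(spark_headline, 16))     # 125.0 teraFLOPS at FP16
```

Plugging in the Spark's 1 petaFLOPS headline reproduces the article's 250 and 125 teraFLOPS real-world figures exactly.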
AMD's Strix Halo, meanwhile, is based on the Ryzen AI Max+ 395 APU and powers systems like HP's Z2 Mini G1a workstation. This system is physically larger, thanks to an integrated power supply and more robust cooling, but it costs between half and three-quarters of the Spark's price, with retail configurations around $2,950. Strix Halo features 16 Zen 5 cores clocking up to 5.1GHz, paired with a Radeon 8060S GPU and a 50 TOPS XDNA 2 neural processing unit (NPU). Importantly, it runs on AMD's ROCm and HIP software stack, easing the migration from desktop to datacenter for those already using AMD's server products.
Benchmarks and real-world tests reveal a nuanced picture. In traditional CPU tasks, the Zen 5 architecture in Strix Halo delivered 10–15% higher performance across benchmarks like Sysbench, 7zip, and HandBrake compared to the Spark's Arm-based CPU. And in high-performance computing (HPC) tasks, the G1a achieved more than twice the double-precision performance of the Spark (1.6 teraFLOPS versus 708 gigaFLOPS), though this advantage was limited to specific workloads.
For generative AI, the GPU is king. While Nvidia touts massive AI compute numbers, The Register notes that "most users will never get close to that." In practice, the Spark's GPU is 2.2–9 times faster than Strix Halo's in raw AI compute, particularly for workloads that leverage low-precision data types like FP4 and FP8—areas where AMD's older RDNA 3.5 architecture lags. Still, memory bandwidth, at 273 GB/s for Spark and 256 GB/s for Strix Halo, can level the playing field in certain large language model (LLM) inference tasks.
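The reason bandwidth can level the field: single-stream token generation must stream essentially all of a model's weights from memory for every token, so decode speed is often bounded by bandwidth rather than compute. A rough, illustrative ceiling (the model size and quantization figures below are our assumptions, not numbers from the article):

```python
# Rough ceiling on single-stream LLM decode speed when it is
# memory-bandwidth-bound: tokens/s <= bandwidth / bytes of weights
# streamed per generated token.
def decode_ceiling_tps(bandwidth_gbs, params_billions, bytes_per_param):
    weights_gb = params_billions * bytes_per_param  # GB read per token
    return bandwidth_gbs / weights_gb

# Hypothetical 70B-parameter model quantized to ~4.5 bits per weight:
for name, bw in [("DGX Spark", 273.0), ("Strix Halo", 256.0)]:
    print(f"{name}: ~{decode_ceiling_tps(bw, 70, 0.56):.1f} tok/s ceiling")
```

With bandwidths only about 7% apart, the two ceilings land within a token per second of each other, which is why generation speeds trade blows despite the Spark's much larger raw compute advantage.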
Single-batch LLM inference, a common use case for many developers, sees the two systems trading blows. The AMD system, running the popular Llama.cpp framework, matches or narrowly beats the Spark in token generation speed when using the Vulkan backend, though it falls behind in "time to first token"—a measure of prompt processing speed. For longer prompts or multi-turn conversations, the difference is just a second or two, which many at-home users may find acceptable given the lower cost of the Strix Halo box.
When workloads scale up—think batch inference, fine-tuning, or image generation—the Spark's superior GPU muscle becomes more apparent. In fine-tuning Meta's Llama 3.2 3B model, for instance, the Spark completed the task in about two-thirds the time of the AMD system. For massive jobs, such as QLoRA fine-tuning on a 70B parameter model, the Spark's faster GPU cut completion time from over 50 minutes on the AMD box to just 20 minutes. For image and video generation, the Spark again pulled ahead, matching the performance of workstation-class GPUs like the Radeon Pro W7900.
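Those completion times reduce to simple speedup ratios, which make the scaling story concrete (a quick sanity check of the figures above):

```python
# Speedup = baseline time / faster time.
def speedup(baseline_minutes, faster_minutes):
    return baseline_minutes / faster_minutes

print(speedup(50, 20))    # QLoRA on a 70B model: 2.5x for the Spark
print(1 / (2 / 3))        # "two-thirds the time" on Llama 3.2 3B: 1.5x
```

So the Spark's edge grows from roughly 1.5x on a small fine-tune to 2.5x on the heaviest job tested, consistent with the GPU-bound nature of these workloads.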
However, AMD's Strix Halo brings its own unique strengths. Its XDNA 2 NPU, capable of 50 TOPS, is being integrated into more AI applications, including Stable Diffusion 3 for image generation. In some cases, the NPU outperformed the GPU for specific tasks, though software support remains limited and integration is still evolving. As The Register observes, "It's good to see more applications taking advantage of the NPU for more than background blurring in video calls."
Software compatibility is still Nvidia's trump card. With nearly two decades of development, CUDA remains the gold standard for AI and machine learning, ensuring widespread support and a seamless user experience on the Spark. AMD, while making strides with ROCm and HIP, still requires more manual effort—building libraries from source or using platform-specific forks—to achieve similar results. That said, most PyTorch scripts now run on AMD hardware with minimal modification, a significant improvement from just a year ago.
So, which system should buyers choose? It depends on the use case. If you want a machine specifically for AI prototyping, fine-tuning, or heavy image generation, and don't mind paying a premium for the best performance and software support, Nvidia's Spark is the clear winner. But for developers, engineers, or enthusiasts looking for a versatile PC that handles both AI workloads and everyday tasks (including gaming), AMD's Strix Halo-based systems offer compelling value—especially given Microsoft's growing focus on NPUs in the Windows ecosystem.
Ultimately, the market in late 2025 is more competitive than ever. Intel is reclaiming its value crown in desktop CPUs, while AMD and Nvidia are battling for dominance in the AI workstation arena. For buyers, that's good news: more choice, better performance, and—at least in some segments—lower prices than ever before.