AMD Unveils 5th Gen EPYC “Turin” & Roadmap for Future Accelerators
In addition to unveiling products based on the Zen 5, RDNA 3+, and XDNA 2 architectures, AMD confirmed during its Computex 2024 keynote that the fifth-generation EPYC server processors, codenamed Turin, will likewise be built on the Zen 5 architecture. AMD will also continue its annual accelerator cadence: the CDNA 3-based MI325X is scheduled for release in Q4 this year, followed by the CDNA 4-based MI350 in 2025 and the next-generation-CDNA MI400 in 2026.
Named after Turin, the major industrial city in northern Italy, the fifth-generation EPYC server processors will feature up to 128 compute cores, rising to 192 cores in the Zen 5c version. These processors will also improve memory support and retain the SP5 socket, maintaining compatibility with the previous fourth-generation EPYC processors, codenamed “Genoa.”
For instance, the 128-core version of “Turin” is said to deliver nearly 8 times the efficiency of its competitor, Intel’s 64-core Xeon Platinum 8592+, when running the 70-billion-parameter Llama 2 model, and to outperform the competition by up to 2.5 times in other AI computational throughput benchmarks.
The fifth-generation EPYC server processors are expected to launch in the second half of this year. On the accelerator side, AMD reiterated its roadmap: the CDNA 3-based MI325X will debut in Q4 2024, followed by the CDNA 4-based MI350 in 2025 and the next-generation-CDNA MI400 in 2026.
The MI325X will pair the CDNA 3 architecture with HBM3e high-bandwidth memory, supporting up to 288GB of capacity and 6TB/s of bandwidth, which AMD says translates to 1.3 to 2 times the performance of NVIDIA’s H200 accelerator. It can hold AI models with up to 1 trillion parameters, nearly twice the capacity of the H200.
Scheduled for release in 2025, the CDNA 4-based MI350 will be manufactured on TSMC’s 3nm process and will likewise support up to 288GB of HBM3e memory, along with FP4/FP6 computational precision formats. AMD emphasizes that the CDNA 4 architecture will deliver a 35-fold performance increase over CDNA 3, backing up its earlier statements about a significant computational performance boost in 2025. AMD also reaffirmed its deep partnership with TSMC, dispelling market rumors of a shift to Samsung’s 3nm process.
Compared with NVIDIA’s B200 accelerator, AMD asserts that the MI350 will offer 1.2 times the AI computational performance and 1.5 times the memory capacity. AMD also announced plans to promote larger-scale data center interconnects through the recently launched UALink (Ultra Accelerator Link) and Ultra Ethernet (from the Ultra Ethernet Consortium, UEC) standards, aiming to build broader AI computing fabrics to rival NVIDIA’s NVLink design.
Moreover, AMD confirmed that the Zen 5 architecture will span both 4nm and 3nm processes, with different products adopting the appropriate node. However, given Apple’s priority access to TSMC’s 3nm capacity, market analysts expect most of AMD’s Zen 5 products to remain on the 4nm process, with only a select few moving to 3nm.
Looking ahead, AMD plans to develop the Zen 6 architecture, which will span 3nm and 2nm processes, to be used in the sixth-generation EPYC server processors, codenamed Venice, as well as in consumer-grade Ryzen series processors.