Intel Unveils Xeon 6 Processors and Gaudi 3 AI Accelerators: A New Era in Computing Power

Intel has officially introduced its latest Xeon 6 server processors along with the Gaudi 3 AI accelerators, making significant claims about their capabilities.

The Xeon 6 6900P is the first model in the Xeon 6 line built on performance cores (P-cores), which are optimized for compute-heavy tasks; the family also includes models built on efficient cores (E-cores), aimed at lighter workloads that draw less power. Intel is mixing P-core and E-core configurations across the Xeon 6 range to match specific market demands, as the E-core-based Xeon 6 6700E, which debuted in June, illustrates.

According to Intel, the Xeon 6 6900P delivers twice the performance of its predecessor, a gain the company attributes to a higher core count, greater memory bandwidth, and built-in AI acceleration in every core.

Beyond the P-cores, Intel has built AI inference acceleration into the Xeon 6900P series, turning the CPU itself into a kind of AI coprocessor, an approach similar to what AMD has done. The rationale is that AI inferencing typically demands much less processing power than training, so it can run on standard server CPUs rather than on power-hungry processors such as GPUs.
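As a rough illustration of what inference-style work on the CPU looks like, here is a minimal sketch (ours, not Intel's) using PyTorch's bfloat16 support, the low-precision format that built-in matrix acceleration on recent Xeons targets; the layer sizes and iteration count are arbitrary:

    # Minimal sketch: bfloat16 inference-style math on the CPU with PyTorch.
    # bfloat16 is the low-precision format that Xeon's built-in matrix
    # acceleration targets; this code runs on any x86-64 CPU, just more
    # slowly without that hardware support. Sizes are arbitrary.
    import time
    import torch

    layer = torch.nn.Linear(4096, 4096).eval().to(torch.bfloat16)
    batch = torch.randn(64, 4096, dtype=torch.bfloat16)

    with torch.inference_mode():
        start = time.perf_counter()
        for _ in range(100):
            out = layer(batch)
        elapsed = time.perf_counter() - start

    print(f"100 forward passes of a 4096x4096 layer in {elapsed:.2f} s")

PyTorch's CPU backend generally dispatches bfloat16 math to the best instructions the processor offers, so a script like this tends to run faster on chips with matrix acceleration without any code changes.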

The specifications for the Xeon 6900P processors are striking. Compared with the previous Xeon generation, the maximum core count has doubled to 128 cores, thanks to a chiplet design: rather than one large monolithic die, the chip is split into three smaller compute dies.

The Xeon 6 is also the first processor to support Micron's new MRDIMM memory modules, which improve both bandwidth and latency. On the Xeon 6900P, memory speeds rise by as much as 57%, to 8,800 MT/s.
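That 57% figure presumably uses the prior generation's DDR5-5600 top speed as the baseline; 5,600 MT/s × 1.57 ≈ 8,800 MT/s.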

The 6900P series provides support for six Ultra Path Interconnect 2.0 links, allowing for CPU-to-CPU transfer rates as high as 24 GT/s, alongside support for up to 96 lanes of PCIe 5.0 and CXL 2.0 connectivity. Additionally, it introduces new vector extensions aimed at high-performance computing and a novel matrix extension supporting 16-bit floating point, which is ideal for AI inferencing.
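For readers who want to check whether a given Linux system exposes these matrix and half-precision extensions, here is a minimal sketch; the flag names (amx_tile, amx_bf16, amx_int8, amx_fp16, avx512_fp16) are the ones recent kernels report and are an assumption on our part, so older kernels may omit some of them:

    # Minimal sketch (Linux only): report which matrix/vector extensions the
    # kernel exposes for this CPU. The flag names below are assumptions based
    # on what recent kernels print in /proc/cpuinfo.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break

    for feature in ("amx_tile", "amx_bf16", "amx_int8", "amx_fp16", "avx512_fp16"):
        print(f"{feature:12} {'present' if feature in flags else 'absent'}")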

However, the increased core count does come with a trade-off, particularly in terms of power consumption. The thermal design power (TDP) for four out of the five processors in the 6900P line is set at 500 watts, while one model has a TDP of 400 watts. In comparison, the fifth-generation Xeon had a top TDP of 350 watts.

The performance gains are significant. In a benchmark running a Llama 2 chatbot with 7 billion parameters, Intel's 96-core Xeon 6972P delivered more than three times the performance of AMD's 96-core Epyc 9654 and beat the prior-generation Xeon by 128%. In a BERT language-processing benchmark, the Xeon 6972P was 4.3 times faster than the Epyc and 2.2 times faster than the prior-generation Xeon.

It’s important to consider, however, that the Epyc processor used for these benchmarks has been available for two years, and AMD is expected to release a new generation of processors soon.
