At the Hot Chips 2024 event hosted at Stanford University, Intel introduced its latest advancements, including new processors and a novel interconnect design. The highlights were the Xeon 6 System on a Chip (SoC) and a high-speed optical compute interconnect aimed at AI data processing workloads.
Although the Xeon 6 launch has been covered previously, Intel shared additional details about the SoC line, code-named Granite Rapids-D, which is expected to debut in the first half of 2025.
Designed to operate efficiently across everything from edge devices to edge nodes, Granite Rapids-D pairs a compute chiplet from Intel's Xeon 6 line with an edge-optimized I/O chiplet. This design improves performance, energy efficiency, and transistor density over earlier models.
Intel’s development of the Granite Rapids-D was informed by telemetry data gathered from over 90,000 edge deployments. It is equipped with up to 32 lanes of PCI Express 5.0, up to 16 lanes of Compute Express Link (CXL) 2.0, and dual 100G Ethernet ports.
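How this I/O appears to software can be seen in a minimal sketch like the one below, which assumes a Linux host whose kernel exposes PCIe link attributes and a CXL bus in sysfs; these are standard kernel interfaces rather than anything specific to Granite Rapids-D, and their availability varies by kernel and platform.

```python
from pathlib import Path

def pcie_links():
    """Yield PCIe devices with their negotiated link speed and width (Linux sysfs)."""
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        speed = dev / "current_link_speed"   # e.g. "32.0 GT/s PCIe" for Gen5
        width = dev / "current_link_width"   # e.g. "16"
        if speed.exists() and width.exists():
            yield dev.name, speed.read_text().strip(), width.read_text().strip()

def cxl_devices():
    """List CXL devices registered with the kernel's CXL driver, if any."""
    cxl_bus = Path("/sys/bus/cxl/devices")
    return [d.name for d in cxl_bus.iterdir()] if cxl_bus.exists() else []

if __name__ == "__main__":
    for bdf, speed, width in pcie_links():
        print(f"{bdf}: x{width} @ {speed}")
    print("CXL devices:", cxl_devices() or "none found")
```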
Intel has also made notable improvements for edge applications, including an expanded operating-temperature range and enhanced industrial-grade reliability. New media acceleration features improve video transcode and analytics for live OTT, VOD, and broadcast media, while Advanced Vector Extensions (AVX) and Advanced Matrix Extensions (AMX) deliver a substantial boost to inferencing performance.
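For a rough sense of how those instruction-set extensions surface to software, the sketch below checks the corresponding CPU feature flags that the Linux kernel reports in /proc/cpuinfo; the flag names follow the upstream kernel, and the check only detects support rather than measuring inferencing performance.

```python
def cpu_flags():
    """Return the set of CPU feature flags reported by the Linux kernel."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

# Feature flags relevant to the AVX/AMX inferencing extensions mentioned above.
INFERENCE_FLAGS = {
    "avx512f": "AVX-512 foundation",
    "avx512_vnni": "AVX-512 Vector Neural Network Instructions",
    "amx_tile": "AMX tile architecture",
    "amx_int8": "AMX INT8 matrix multiply",
    "amx_bf16": "AMX BF16 matrix multiply",
}

if __name__ == "__main__":
    present = cpu_flags()
    for flag, desc in INFERENCE_FLAGS.items():
        print(f"{desc:45s} ({flag}): {'yes' if flag in present else 'no'}")
```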
Intel’s Integrated Photonics Solutions Group also showcased its first fully integrated optical compute interconnect (OCI) chiplet, co-packaged with an Intel CPU and handling live data traffic. In the demonstration, the OCI chiplet supported 64 channels, each transmitting 32 gigabits per second in both directions, over fiber links of up to 100 meters.
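Taking only the figures quoted above, the aggregate throughput of the demonstrated chiplet works out to roughly 2 Tbps per direction, or about 4 Tbps bidirectional; the short calculation below is a back-of-the-envelope sketch based on those numbers, not an Intel specification.

```python
channels = 64          # optical channels in the demonstrated OCI chiplet
gbps_per_channel = 32  # gigabits per second, per direction

per_direction_tbps = channels * gbps_per_channel / 1000
print(f"Aggregate per direction: {per_direction_tbps:.3f} Tbps")      # ~2.048 Tbps
print(f"Bidirectional total:     {2 * per_direction_tbps:.3f} Tbps")  # ~4.096 Tbps
```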
These capabilities point toward future scaling of CPU/GPU cluster connectivity and toward new computing architectures such as coherent memory expansion and resource disaggregation, both of which are central to AI infrastructure in data centers and high-performance computing (HPC).
Intel also revealed a new client product, Lunar Lake, designed for upcoming AI PCs. It combines Performance-cores (P-cores) and Efficient-cores (E-cores) with a new neural processing unit that Intel says delivers up to four times the AI performance of its predecessor, promising notable advances in generative AI workloads.
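As an illustration of how a hybrid P-core/E-core design is visible to software, the sketch below assumes a Linux kernel that enumerates hybrid Intel CPUs through /sys/devices/cpu_core and /sys/devices/cpu_atom; these nodes appear only on hybrid parts, and the NPU, which is handled by separate drivers, is not covered here.

```python
from pathlib import Path

def core_lists():
    """Map core type to the CPU list string the kernel publishes for hybrid CPUs."""
    mapping = {"P-cores": "/sys/devices/cpu_core/cpus",
               "E-cores": "/sys/devices/cpu_atom/cpus"}
    result = {}
    for label, path in mapping.items():
        p = Path(path)
        result[label] = p.read_text().strip() if p.exists() else "not present"
    return result

if __name__ == "__main__":
    for label, cpus in core_lists().items():
        print(f"{label}: {cpus}")
```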
Lunar Lake also includes upgraded Xe2 graphics processing unit cores, delivering 1.5 times the gaming and graphics performance of its predecessor.