Nvidia’s CEO, Jensen Huang, announced at CES 2026 that the new Vera Rubin AI superchip platform is in full production and is set to be delivered to customers later this year. The Vera Rubin chips are designed to significantly reduce the cost of training and running AI models, at an estimated one-tenth the cost of the current Blackwell chips, while requiring fewer resources to train large models.
The first customers slated to receive Rubin chips include major partners Microsoft and CoreWeave. Microsoft will integrate thousands of the chips into AI data centers it is building in Georgia and Wisconsin, and some partners have already begun testing their next-generation AI models on early Rubin systems.
Named after the pioneering American astronomer Vera Rubin, the new platform consists of six different chips, including a Rubin GPU and Vera CPU, manufactured using advanced 3-nanometer technology. Huang emphasized that each chip is revolutionary and sets a new standard in its class.
Nvidia’s work on the Rubin chip system dates back several years, with initial announcements made in 2024. However, the "full production" claim raises questions about the timeline, since advanced chip production typically begins at low volume and scales up only after thorough testing and validation.
The announcement is strategically aimed at reassuring investors, particularly in light of earlier delays to the Blackwell chips caused by overheating issues. Intensifying competition in AI has driven companies to scramble for access to Nvidia’s cutting-edge GPUs, even as some firms, such as OpenAI, explore custom chip designs of their own.
Experts suggest that Nvidia is transitioning from merely supplying GPUs to becoming a comprehensive AI system architect, offering integrated solutions covering computing, networking, and software orchestration. Despite the rise of custom silicon, Nvidia’s tightly integrated platform could become increasingly difficult for competitors to displace.