Nvidia Resolves Chip Glitch, Restores Production Timeline for Q4

Nvidia has resolved an issue with its latest Blackwell chip, its CEO confirmed during a recent earnings call, and production is expected to ramp in the fourth quarter.

Earlier reports of the glitch had raised concerns among enterprise IT leaders.

“We modified the Blackwell GPU mask to enhance the production yields,” Nvidia CEO Jensen Huang explained. “We plan to commence the Blackwell production ramp in the fourth quarter and extend into fiscal 2026. In Q4, we foresee Blackwell generating several billion dollars in revenue. Additionally, Hopper shipments are expected to grow in the latter half of fiscal 2025.” Nvidia’s fiscal year 2025 started on January 29, 2024.

“The mask modification is complete,” he added. “No functional changes were required. Functional versions of Blackwell, Grace Blackwell, and a variety of system configurations are being sampled now. Roughly 100 different Blackwell-based systems were showcased at Computex, helping our ecosystem partners begin their sampling. Blackwell’s functionality remains unaffected, and production is slated to begin in Q4.”

The CEO added: “Blackwell will start shipping in billions of dollars at the end of this year.”

Huang did not discuss the nature of either the problem or the fix.

Technology analyst Jeff Kagan said he doubts the delay will have any meaningful impact on enterprise IT operations.

“We have learned to always expect these kinds of glitches. Fortunately, they don’t stop progress and growth, although they can slow things down from time to time,” Kagan said. “In the end, this is not a long-term problem (as much as) one of many short-term issues that will be resolved.”

In the analyst call, Huang shared his insights on the evolving landscape of enterprise computing and how AI is poised to transform the hardware and computing sectors significantly.

“The cost of training large language models and deep learning systems has fallen to the point that it is now feasible to develop multitrillion-parameter models. These can be pretrained on practically the entire corpus of global knowledge. Such models learn representations of human language, encode knowledge, and develop reasoning capabilities, advances that are fueling the generative AI revolution,” Huang explained.

The CEO also highlighted how the economics of IT infrastructure are shifting rapidly.

“As you double the size of a model, the data set required for training must more than double. Consequently, the computational power needed escalates quadratically,” he noted. “It’s logical to anticipate that future models will demand 10, 20, or even 40 times the computational power of their predecessors. This drives the necessity to significantly boost generational performance to reduce both the energy use and the costs involved.”
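Huang’s arithmetic tracks the standard scaling picture. As a rough back-of-envelope sketch (an assumption, not a formula cited on the call), take the common industry rule of thumb that training compute scales with the product of parameter count and training tokens, roughly 6 FLOPs per parameter per token; the function name and model sizes below are purely illustrative.

```python
# Back-of-envelope sketch of the scaling Huang describes.
# Assumption: the common heuristic that training compute is roughly
# 6 FLOPs per parameter per training token; this constant is an
# industry rule of thumb, not a figure from Nvidia's call.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: ~6 * parameters * tokens FLOPs."""
    return 6.0 * params * tokens

# Hypothetical baseline: a 1-trillion-parameter model on 10T tokens.
base = training_flops(params=1e12, tokens=10e12)

# Double the model AND the dataset, per Huang's "more than double" point.
bigger = training_flops(params=2e12, tokens=20e12)

print(f"base:   {base:.2e} FLOPs")
print(f"bigger: {bigger:.2e} FLOPs")
print(f"ratio:  {bigger / base:.0f}x")  # 4x: doubling both inputs quadruples compute
```

Doubling both factors quadruples the product, which is the quadratic growth Huang refers to; grow both tenfold and compute rises a hundredfold, which is why each hardware generation needs a large jump in performance per watt and per dollar.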
