Google has emerged as the leading owner of AI compute, controlling roughly 25% of global capacity, according to a recent analysis by Epoch AI. The report finds that more than 60% of the world's AI compute capacity is concentrated among major hyperscalers, with Google taking the lead without relying heavily on Nvidia hardware, unlike many of its competitors.
Google meets its compute needs with custom tensor processing units (TPUs). The tech giant's capacity is equivalent to around 5 million Nvidia H100 GPUs, roughly 4 million of which come from its TPUs. By contrast, other major players remain largely dependent on Nvidia's architecture: Microsoft holds nearly 3.5 million H100 equivalents and Amazon around 2.5 million.
The dominance of hyperscale operators continues to grow: they now account for nearly half of global data center capacity, a share projected to exceed two-thirds by 2031 as hyperscalers increasingly invest in building their own facilities, which already represent 60% of their total capacity.
Nvidia, while still a crucial player in the AI space with its powerful chips, faces potential shifts in market dynamics as companies including Google, Meta, and Amazon explore alternative solutions and develop custom silicon to reduce dependency on a single supplier. This strategy may enhance their competitiveness in the rapidly evolving AI industry.
As AI infrastructure evolves, the focus is shifting toward both training and inference capabilities, which is expected to bring new contenders into the market. Businesses should approach AI deployment strategically, weighing the variety of available technologies and avoiding single-vendor lock-in.
For more details, visit Epoch AI.