In a surprising move, Apple revealed in a research paper that it chose to use chips designed by Google, rather than the dominant player Nvidia, for its upcoming suite of AI tools and features. This decision is particularly noteworthy, as Nvidia currently holds around 80% of the market for AI processors. While Apple did not explicitly state that it excluded Nvidia chips, the absence of any mention of Nvidia hardware in the description of its AI infrastructure is telling.
For training its AI models, Apple used two varieties of Google’s tensor processing units (TPUs) arranged in large clusters: 2,048 TPUv5p chips for the model that will run on devices such as iPhones, and 8,192 TPUv4 processors for its server-side model. Unlike Nvidia, which sells its AI GPUs as standalone hardware, Google makes its TPUs available only through Google Cloud Platform, so customers must build their software on that platform to use the chips.
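The article does not describe Apple's training code, but as an illustration of what programming against Cloud TPUs typically looks like, here is a minimal, hypothetical sketch using JAX, Google's device-agnostic framework commonly used on TPUs. The same XLA-compiled function runs on a local CPU and on TPU cores inside a Cloud TPU VM; the model, data, and learning rate here are invented for the example.

```python
import jax
import jax.numpy as jnp

@jax.jit  # XLA-compiles the function for whatever backend is present (CPU, GPU, or TPU)
def train_step(w, x, y, lr=0.1):
    # One gradient-descent step on a least-squares loss (toy stand-in for a real model).
    grad = jax.grad(lambda w: jnp.mean((x @ w - y) ** 2))(w)
    return w - lr * grad

x = jnp.ones((8, 4))   # toy inputs
y = jnp.ones((8,))     # toy targets
w = jnp.zeros((4,))    # toy parameters
w = train_step(w, x, y)

# jax.devices() reports TpuDevice entries inside a Cloud TPU VM,
# and CpuDevice when run locally -- the training code itself is unchanged.
print(jax.devices())
```

The point of this style, and part of the appeal of Google's cloud-only model, is that scaling from one chip to a 2,048- or 8,192-chip cluster is handled by the platform's sharding machinery rather than by rewriting the training step.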
Apple’s choice to rely on Google hardware for its AI tools underscores a shift in the industry’s landscape as companies explore alternatives for AI development and deployment. With Apple rolling out portions of Apple Intelligence to beta users this week, the extent of its reliance on Google chips became clear through the research paper. Neither Google nor Nvidia has commented officially, but Apple’s engineers noted that even larger and more sophisticated models could be trained on Google’s chips, hinting at further advancements to come.