Apple, known for its in-house development, has taken an unconventional path, relying on Google’s technology to power its AI advancements. The company announced that it opted to use Google’s tensor processing units (TPUs) for two critical components of its AI infrastructure, instead of Nvidia’s widely used chips.
This decision, outlined in an Apple research paper, marks a departure from the industry norm of relying on Nvidia's processors, which are widely recognised as the leading choice for AI applications. Historically, Nvidia has been the go-to provider for high-performance AI chips, thanks to its advanced technology and market dominance, holding approximately 80% of the total market share.
Apple’s research paper did not explicitly mention Nvidia chips, but it highlighted the use of Google’s TPUv5p and TPUv4 processors for its AI models. The company used 2,048 TPUv5p chips for iPhone and on-device AI models, and 8,192 TPUv4 chips for server-based AI.
Nvidia does not design tensor processing units (TPUs); instead, it concentrates on graphics processing units (GPUs), which are commonly used for AI tasks. Unlike Nvidia, which offers its chips and systems as standalone products, Google provides access to its TPUs through the Google Cloud Platform. To use these chips, customers must develop their software within Google’s cloud environment.
The research paper notes that Apple’s engineers believe even more advanced models could be developed using Google’s chips, beyond the two discussed. At its June developer conference, Apple introduced several new AI features, including the integration of OpenAI’s ChatGPT technology into its software. This week, Apple began rolling out portions of Apple Intelligence, with a preview of its iPhone AI integration released to beta developers.
With Reuters inputs