In context: It’s no secret that Nvidia is dominating the AI space, with companies big and small using its GPUs and the CUDA software stack to power their machine-learning projects. Intel CEO Pat Gelsinger says Team Green’s success is partly the result of luck, as Nvidia enjoyed 15 years of Intel being inactive in the discrete GPU space. Whether Intel can get similarly lucky in the coming years now that Gelsinger is back at the helm remains to be seen.
Intel CEO Pat Gelsinger believes Nvidia’s dominance in AI is more the result of luck than of the software and hardware Team Green has been developing over the past several years. As you’d expect, it didn’t take long for someone from Nvidia’s machine-learning group to fire back at the remark, though Gelsinger did explain how Intel might also get lucky in the coming decades.
During an interview hosted by the Massachusetts Institute of Technology (MIT), Team Blue’s chief was asked about Intel’s AI hardware efforts and whether he believes they represent a competitive advantage. Gelsinger began by lamenting Intel’s past mistakes, noting that Nvidia CEO Jensen Huang “got extraordinarily lucky” with his bet on AI and Intel could have been just as lucky had the company not given up on the Larrabee discrete GPU project.
Gelsinger went on to explain how his departure from Intel 13 years ago set the company on a bad trajectory in which projects like Larrabee that “would have changed the shape of AI” were canceled, allowing Nvidia to thrive with very little competition in the high-performance computing space. He then characterized Jensen Huang as a hard worker who was initially laser-focused on graphics technology but was lucky enough to branch out into AI accelerators as the industry started moving in that direction.
Related reading: The Last Time Intel Tried to Make a Graphics Card
Now that he’s at the helm of Intel, Gelsinger is determined to course-correct with a strategy of “democratizing” AI. To that end, Intel is looking to bake a neural processing unit (NPU) into every machine, an effort already visible in the launch of the Meteor Lake CPU lineup. Another area of focus is software, with considerable work going into open-source libraries meant to eliminate the need for proprietary technologies like CUDA.
Moving forward, Gelsinger says we can expect at least two decades of innovations in the AI space. He believes that since AI today is mostly used to tap into simple data sets like text to create services like ChatGPT, there’s a lot of room for advancements in training AI models for a variety of other applications using more complex data sets.
Demand for AI hardware is growing rapidly, so Intel’s investments in new chip factories could also pay off in spades in the coming years. Even Nvidia is contemplating using Intel as a manufacturing partner, and it will be interesting to see if Intel can use that interest to help its foundry business succeed in the long term.
“I worked at Intel on Larrabee applications in 2007. Then I went to NVIDIA to work on ML in 2008. So I was there at both places at that time and I can say:”
– Bryan Catanzaro (@ctnzr) December 20, 2023
Gelsinger’s remarks read more like an admission that Intel made a big mistake in giving up on its discrete GPU ambitions for over a decade, but they nonetheless drew a response from Nvidia’s VP of Applied Deep Learning Research, Bryan Catanzaro. Catanzaro explains that he was part of the Larrabee project at Intel before moving on to work at Nvidia and, from his point of view, Nvidia’s dominance came from executing a vision that Intel simply lacked.