This story referred to performance, not revenue. So, no.
AMD acquired Pensando in 2022.
You can’t and I ain’t selling mine.
Isn’t that what GigaIO is doing, basically a PCIe-based network switch?
Thank you, Jim.
I think if it can run PyTorch and LLaMA 2, it is pretty well tuned up, and AMD can sell every one it can make at the same “price,” adjusted for features, that Nvidia is getting.
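To make that “it just runs PyTorch and LLaMA 2” point concrete, here is a minimal sketch, assuming a ROCm build of PyTorch, the Hugging Face transformers library, and access to the gated meta-llama/Llama-2-7b-hf weights; the model ID and prompt are illustrative. The key detail is that ROCm builds of PyTorch expose AMD GPUs through the familiar torch.cuda device API, so the same code path works unmodified on AMD or Nvidia hardware.

```python
# Minimal sketch: load LLaMA 2 with stock PyTorch + transformers.
# On a ROCm build of PyTorch, the "cuda" device string maps to the AMD GPU,
# so no AMD-specific code is needed. Assumes the gated Llama 2 weights
# have been granted and downloaded.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # gated model; license acceptance required
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")  # "cuda" resolves to the ROCm device on AMD GPUs

prompt = "Enterprise AI workloads are"
inputs = tok(prompt, return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```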
And right now, that might be enough. There are people who believe that the game will not be more parameters, but a lot more data. For enterprises, this Meta Platforms stack might be enough, and it could become the preferred way to run custom expert models for each business on a few thousand GPUs. They are not going to buy 25,000 or 50,000 GPUs and replicate what Google, AWS/Anthropic, and Microsoft/OpenAI are doing with trillions of parameters and trillions of tokens.
Here’s the thing: A few tens of thousands of enterprise customers paying a profitable margin to AMD and its OEMs is a hell of a lot smarter than trying to sell to the Super 8 at a discount for their internal models. The clouds will love AMD+PyTorch+LLaMA for the same reason: more margin for them, because enterprise customers buying or renting chips in lower volumes will pay a premium compared to cloud builders and hyperscalers.