Lego cephalopods with reconfigurable translucent CPO tentacles sound about right. Then again, it seems that Google's TPUs are not super for branch-heavy linear algebra, sparse memory access, or high-precision math ( https://cloud.google.com/tpu/docs/intro-to-tpu ). Therefore, the Legos will need to be rather beefy; EPYC Zen Duplos, I think! 8^p
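A minimal JAX sketch of that divide (my own illustration, not from Google's docs; shapes and dtypes are arbitrary) — dense matmuls map straight onto the TPU's systolic MXU, while sparse gathers, per-element branching, and float64 do not:

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)

# TPU-friendly: a big dense matmul in bfloat16, exactly what the MXU is built for.
a = jax.random.normal(k1, (1024, 1024), dtype=jnp.bfloat16)
b = jax.random.normal(k2, (1024, 1024), dtype=jnp.bfloat16)
dense = jnp.dot(a, b)  # systolic-array heaven

# TPU-unfriendly: data-dependent sparse gather (irregular memory access);
# this runs, but gets no help from the MXU and tends to be memory-bound.
idx = jax.random.randint(k3, (4096,), 0, 1024)
gathered = a[idx, :]

# TPU-unfriendly: per-element branching. XLA lowers jnp.where to a select,
# i.e. both sides get computed -- branches don't come cheap here.
branchy = jnp.where(a > 0, jnp.sin(a), jnp.cos(a))

# High precision: float64 requires jax.config.update("jax_enable_x64", True)
# and is emulated (slowly) on TPU -- hence the need for beefy Duplos for FP64 work.
print(dense.dtype, gathered.shape, branchy.dtype)
```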
Google said it was "Missing The Moat With AI" in TNP, May 4, 2023. Nvidia has a big CUDA moat, but TNP cautioned on October 12, 2023: "You can build a moat, but you can't drink it"! Like AMD's Dr. Su, I don't believe in moats and prefer a drawbridge to medieval confinement!
But maybe a big moat makes a trillion-dollar market cap an easier shot, so the sAMMANTAs can buy a small country or two?
Myth 11: CoWoS woes will set HPC/AI/ML development back a whole decade, or more(?).
(P.S. My understanding, which could be wrong, is that CoWoS is mostly an issue for chips with HBM, since the silicon interposer is what links the HBM stacks to the compute die, yes?)
These Myths (6-10) are interesting because, as noted by both interviewee and interviewer, they may have held true in the past, at earlier points in tech development. Much innovation was produced over the years to turn them into the myths they now are (which is great!).
One thing about Myth 7, though, is that one company today is doing something quite exceptional with an ultra-monolithic die: Cerebras (wafer-scale chippery). One wonders about the potential for this tech to be re-implemented in a Lego-style chiplet approach (to quote TPM), distributed over several packages, while maintaining the same performance, and possibly moving into FP64 as well!