Comments on: Isambard 3 To Put Nvidia’s “Grace” CPU Through The HPC Paces
https://www.nextplatform.com/2023/05/25/isambard-3-to-put-nvidias-grace-cpu-through-the-hpc-paces/

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/05/25/isambard-3-to-put-nvidias-grace-cpu-through-the-hpc-paces/#comment-209164 Fri, 26 May 2023 16:57:38 +0000 In reply to 8^b.

It’s because the French savor each bite thoroughly….

By: q^8 https://www.nextplatform.com/2023/05/25/isambard-3-to-put-nvidias-grace-cpu-through-the-hpc-paces/#comment-209159 Fri, 26 May 2023 15:08:36 +0000 In reply to Hubert.

Speaking of cool French tech, Liam Proven (not in Prague) wrote a very good piece on the successful application of exo-cortices (and a bit on exo-skeleta too) ( https://www.theregister.com/2023/05/26/experimental_brain_spine_interface/ ) that features the WIMAGINE brain-computer interface (BCI) implants from France’s Clinatec (in Grenoble). Very high-tech stuff, human-pilot-tested in Switzerland (with a SuPeRb open-access paper in Nature), and backed by (get this):

“Recursive Exponentially Weighted N-way Partial Least Squares Regression […] in Brain-Computer Interface”

Wow! Long live French biomath, and bioengineering!

By: 8^b https://www.nextplatform.com/2023/05/25/isambard-3-to-put-nvidias-grace-cpu-through-the-hpc-paces/#comment-209156 Fri, 26 May 2023 14:14:15 +0000 In reply to Hubert.

64KB, schm-64KB! Just yesterday, Sally Ward-Foxton reported on Axelera (French) putting 4MB of L1 on its RISC-V matrix-vector multiply (MVM) accelerator (no OS). They apparently refer to that as “in-memory compute”, seeing how each ALU/MVM is bathed in sizeable fast cache RAM (that 4MB). It is a rather surprising architecture (as suggested, I think, by Mark Sobkow in the Meta Platforms MTIA TNP piece), but the French appetite knows no bounds (and yet they somehow remain quite slim)!
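
To make that “bathed in cache” point a bit more concrete, here is a toy sketch (my own illustration, with made-up tile sizes, not Axelera’s actual design parameters) of why a big local memory next to each MVM unit pays off: once a weight tile is resident, only activations and results ever cross the external memory interface.

# Toy sketch of why a large per-ALU local memory helps matrix-vector multiply (MVM).
# Sizes are illustrative assumptions, not Axelera's actual design parameters.
import numpy as np

LOCAL_BYTES = 4 * 1024 * 1024          # assumed 4 MB of local memory per MVM unit
rows, cols = 1024, 1024                # one int8 weight tile: 1 MB, fits easily
weights = np.random.randint(-128, 127, size=(rows, cols), dtype=np.int8)
assert weights.nbytes <= LOCAL_BYTES   # the tile stays resident next to the ALU

def mvm(tile, activations):
    # Per call, only the activations (cols bytes) and the int32 result (rows * 4 bytes)
    # cross the external memory interface; the weight tile never moves.
    return tile.astype(np.int32) @ activations.astype(np.int32)

x = np.random.randint(-128, 127, size=cols, dtype=np.int8)
y = mvm(weights, x)
print("external bytes per MVM:", x.nbytes + y.nbytes, "vs resident weights:", weights.nbytes)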

By: Hubert https://www.nextplatform.com/2023/05/25/isambard-3-to-put-nvidias-grace-cpu-through-the-hpc-paces/#comment-209130 Fri, 26 May 2023 03:09:50 +0000

Yummy! It’ll be great to finally see these CPUs grace the inner sockets of live HPC hardware … as they’ve been quite elusive thus far (as noted by Thomas Hoberg in the “Google invests heavily in GPU compute” TNP story). Hopefully they won’t be delayed by the “sky freaking high” mountains of gold coins that nVidia needs to painstakingly bulldoze into its enormous coffers due to the AI-induced boom in GPU accelerator demand (covered in the other TNP article published today, on etherband and infininet, or vice-versa). 64KB of L1 instruction cache (and another 64KB for L1 data cache), as found in these V2 Neoverses, is the way to go for HPC (not so sure that more channels of LPDDR5 can compensate for no HBM, but am willing to learn!).
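
For a rough sense of the LPDDR5-versus-HBM gap, here is a quick back-of-envelope in Python. The interface widths and transfer rates are my own illustrative assumptions (roughly a Grace-class LPDDR5X interface versus a generic four-stack HBM2e part), not vendor-confirmed figures.

# Back-of-envelope peak DRAM bandwidth: transfer rate x bus width.
# All part parameters below are illustrative assumptions, not vendor specs.

def peak_bw_gbs(gigatransfers_per_sec, bus_width_bits):
    """Peak bandwidth in GB/s for one memory interface."""
    return gigatransfers_per_sec * bus_width_bits / 8.0

# Assumed Grace-like LPDDR5X: ~512-bit aggregate interface at ~8.5 GT/s.
lpddr5x_gbs = peak_bw_gbs(8.5, 512)            # ~544 GB/s peak

# Assumed HBM2e: 4 stacks, each 1024 bits wide at ~3.2 GT/s.
hbm2e_gbs = 4 * peak_bw_gbs(3.2, 1024)         # ~1638 GB/s peak

print(f"LPDDR5X (assumed): {lpddr5x_gbs:.0f} GB/s peak")
print(f"HBM2e x4 (assumed): {hbm2e_gbs:.0f} GB/s peak")

So, counted generously, the LPDDR5X side still lands a few times short of a multi-stack HBM part on raw peak bandwidth; the open question is how much the V2’s big L1 and the rest of the cache hierarchy can close that gap on real HPC kernels.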
