Comments on: Nvidia Pushes Hopper HBM Memory, And That Lifts GPU Performance
https://www.nextplatform.com/2023/11/13/nvidia-pushes-hopper-hbm-memory-and-that-lifts-gpu-performance/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.
Wed, 22 Nov 2023 17:00:01 +0000 hourly 1 https://wordpress.org/?v=6.7.1

By: Slim Albert https://www.nextplatform.com/2023/11/13/nvidia-pushes-hopper-hbm-memory-and-that-lifts-gpu-performance/#comment-216247 Tue, 14 Nov 2023 00:04:22 +0000 https://www.nextplatform.com/?p=143235#comment-216247

It’ll be interesting to see how these H100 (and H200, GH200, B100 …) systems perform on HPCG, which demands a bit more concentration in the execution of memory-access kung-fu (beyond bandwidth and capacity). Today’s (Nov. 13) Top500, for example, has HPCG #10 AOBA-S (NEC Type-30A Vector Engines) doing 1.1 PF/s at 1.4 MW, or 0.8 PF/MJ, while Frontier (MI250X, #2) does 0.6 PF/MJ, Fugaku (A64FX, #1) is at 0.5 PF/MJ, Leonardo (A100, #4) gives 0.4 PF/MJ, and #11 Crossroads (Xeon 9480) pushes 0.2 PF/MJ. If Hoppers and/or their updates push past NEC Vector Motors on HPCG, then that’ll be quite something to write home about, I think!

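The PF/MJ figures in the comment above follow from a simple unit conversion: a megawatt is a megajoule per second, so dividing sustained HPCG throughput (PF/s) by power draw (MW) gives petaflops of work per megajoule of energy. A minimal sketch of that arithmetic (the function name `pf_per_mj` is illustrative, not from the source; only the AOBA-S figures come from the comment):

```python
# Sketch of the energy-efficiency arithmetic in the comment above.
# 1 MW = 1 MJ/s, so (PF/s) / (MJ/s) = PF/MJ with no other factors needed.

def pf_per_mj(pflops_sustained: float, power_mw: float) -> float:
    """Petaflops of HPCG work delivered per megajoule of energy consumed."""
    return pflops_sustained / power_mw

# AOBA-S figures quoted in the comment: 1.1 PF/s sustained at 1.4 MW.
print(round(pf_per_mj(1.1, 1.4), 1))  # → 0.8
```

The same one-liner reproduces the other systems' ratios given their Top500 throughput and power entries.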
By: Timothy Prickett Morgan https://www.nextplatform.com/2023/11/13/nvidia-pushes-hopper-hbm-memory-and-that-lifts-gpu-performance/#comment-216243 Mon, 13 Nov 2023 23:06:45 +0000 https://www.nextplatform.com/?p=143235#comment-216243

In reply to EC.

So, I want HBM memory in my phone and PC! Let’s solve this problem!

By: EC https://www.nextplatform.com/2023/11/13/nvidia-pushes-hopper-hbm-memory-and-that-lifts-gpu-performance/#comment-216241 Mon, 13 Nov 2023 22:01:12 +0000 https://www.nextplatform.com/?p=143235#comment-216241

Great write-up as usual, TPM, keep ’em honest!

“H100 should have always had what is being called HBM3e. Because clearly that is the only way to get its true value of the device.”

Having spent a bit of time in GPU land back in the day when GPUs were primarily display controllers, I can say part of the problem here is the balancing act between memory suppliers and GPU vendors. There is this tightrope walk of how much boutique memory performance you can get versus how much you actually want, because in order to get it you have to commit upfront, betting on the come. Each side holds back a little, and those little compromises add up. It was the same crap 25 years ago, when there literally was VRAM, Video RAM, which could be dialed up and optimized a little tighter and a little tighter, but it was going to cost you. Everyone is financially engineering their business, so yes, it feels like the sweet-spot solution is being held back. Micron, Hynix, and Samsung are all just doing what memory guys have always done. If HBM had the same volumes as main memory, you would see a vastly different landscape.
