Comments on: What Do We Do When Compute And Memory Stop Getting Cheaper? https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/ In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-215806 Thu, 02 Nov 2023 01:09:50 +0000 https://www.nextplatform.com/?p=141801#comment-215806 In reply to Luis river.

I agree that if we have neat tech in Roswell, now might be the time to reverse-engineer it. I want a portable fusion reactor and a warp drive, please.

By: Luis river https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-215799 Wed, 01 Nov 2023 19:23:34 +0000 https://www.nextplatform.com/?p=141801#comment-215799 One possibility not yet raised in the NextPlatform comments above is that the folks at the laboratory in Roswell, Nevada, USA do a better job of reverse-engineering UFO tech.

By: Michael https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-212176 Tue, 08 Aug 2023 10:48:24 +0000 https://www.nextplatform.com/?p=141801#comment-212176 Shouldn’t 0.55 NA (High-NA) EUV in 2025/2026 reduce CPU costs?
https://pics.computerbase.de/8/8/5/8/2/2-1080.a15e6f85.png

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-206518 Thu, 30 Mar 2023 13:40:16 +0000 https://www.nextplatform.com/?p=141801#comment-206518 In reply to Aslan.

In all seriousness, I think analog is interesting. I am most definitely not up on my differential equations — that was a long, long time ago — and this method reminds me of how I was taught physics. When my cranky British professor threw the chalk at you, you were going to be the one up in front of the class solving problems for the day. He just stood by and nudged you here or there. But before you started doing the math, he would make you walk through an estimate of how to solve the problem and what you thought an approximate answer might look like. And armed with that estimate, you checked to see if your math was converging in that direction. At first, we all had panic attacks, and then it got to be fun. And today, I look at every problem this way, Aslan. And of course I was taking you seriously, but with the kind of wry humor that is encouraging, not at all dismissive or mocking. I don’t roll that way.

By: Aslan https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-206515 Thu, 30 Mar 2023 12:38:26 +0000 https://www.nextplatform.com/?p=141801#comment-206515 In reply to Timothy Prickett Morgan.

Analog computing may be the way forward for AI and inference.

I honestly can’t tell whether you were taking me seriously or not regarding my previous comment, Timothy Prickett Morgan.

Here’s a 10×10 mm analog chip in silicon, doing programmable computation with transmission gates. (Yes, I know it’s Wired, but this is real.) https://www.wired.com/story/unbelievable-zombie-comeback-analog-computing/amp

“A key innovation of Cowan’s was making the chip reconfigurable—or programmable. Old-school analog computers had used clunky patch cords on plug boards. Cowan did the same thing in miniature, between areas on the chip itself, using a preexisting technology known as transmission gates. These can work as solid-state switches to connect the output from processing block A to the input of block B, or block C, or any other block you choose.”

“His second innovation was to make his analog chip compatible with an off-the-shelf digital computer, which could help to circumvent limits on precision. “You could get an approximate analog solution as a starting point,” Cowan explained, “and feed that into the digital computer as a guess, because iterative routines converge faster from a good guess.” The end result of his great labor was etched onto a silicon wafer measuring a very respectable 10 millimeters by 10 millimeters. “Remarkably,” he told me, “it did work.””

Analog is waiting in the wings looking for believers and funding, for when digital hits the wall.

Are you up on your differential equations? There are boards available to people who could make use of them; that would make for a very interesting review.
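
To make the quoted point about iterative routines converging faster from a good guess concrete, here is a minimal sketch of my own (not Cowan’s actual hybrid analog-digital setup; the equation and starting values are arbitrary illustrations), showing a Newton iteration seeded with a coarse estimate versus a far-off one:

def newton(f, df, x0, tol=1e-12, max_iter=100):
    # Newton's method: returns (root, iterations used) from initial guess x0.
    x = x0
    for i in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i
    return x, max_iter

f = lambda x: x**3 - 2*x - 5    # arbitrary test equation with a root near 2.09
df = lambda x: 3*x**2 - 2       # its derivative

root_a, iters_a = newton(f, df, x0=2.0)   # coarse "analog-style" estimate
root_b, iters_b = newton(f, df, x0=50.0)  # far-off starting point
print(f"good guess: {root_a:.10f} in {iters_a} iterations")
print(f"poor guess: {root_b:.10f} in {iters_b} iterations")

The same root comes out either way; the good starting point just needs far fewer iterations, which is the role the analog front end plays in the scheme Cowan describes.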

By: S.Miller https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-204072 Sat, 28 Jan 2023 21:37:49 +0000 https://www.nextplatform.com/?p=141801#comment-204072 Good update and summary. It’s been clear to those most closely following the various metrics of Moore’s Law (SPECint and SPECfp per watt and per dollar, DRAM costs, HDD costs, flash costs, transistor speed vs. power, instructions per clock, clock rate advancement at power limits, etc.) that Moore’s Law has been ending in stages. That’s why it’s so hard for many to see it: they can often point to one last iota of advancement and say, “See, that thing got better, so Moore’s Law is just fine!” The last element that has been advancing at a reasonable pace is the number of transistors in a package (ignoring cost and overall performance), but that is starting to stumble. Most sensible folks see it; others look to nano-this, quantum-that, 3D-whatever from the various sources of hype generation for hope, not really knowing how far we are from realizing a robust alternative. The limitations of physics and economics will have the final word, I’m afraid. It sure was a fun run, though.

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-203633 Thu, 19 Jan 2023 17:45:06 +0000 https://www.nextplatform.com/?p=141801#comment-203633 In reply to Hubert.

This is a group effort, and I appreciate you.

By: Hubert https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-203627 Thu, 19 Jan 2023 14:13:10 +0000 https://www.nextplatform.com/?p=141801#comment-203627 In reply to Timothy Prickett Morgan.

Ooops! You are correct — I misinterpreted and then miscalculated too. Rethinking it suggests that Intel 7 (10nm) is about 1.25 times more efficient than 14nm (comparing the 8-core Cascade Lake, Ice Lake, and Sapphire Rapids). So things are OK efficiency-wise (continued improvements!).

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-203624 Thu, 19 Jan 2023 13:16:55 +0000 https://www.nextplatform.com/?p=141801#comment-203624 In reply to Hubert.

I get the idea of normalizing to four cores, but you can’t actually have just four cores. Or, if you turn four off, you double the effective cost of the remaining cores. I am with you about Intel 4, but Intel is not using that on Xeon SPs. They are moving straight to Intel 3 with Granite Rapids; Emerald Rapids should have been Intel 4, but it is just another refined Intel 7. Intel will try to close the gap against Turin somewhat with Granite Rapids, I guess.

By: Hubert https://www.nextplatform.com/2023/01/18/what-do-we-do-when-compute-and-memory-stop-getting-cheaper/#comment-203623 Thu, 19 Jan 2023 13:06:02 +0000 https://www.nextplatform.com/?p=141801#comment-203623 I hope that Intel’s 10nm process (Intel 7) is a bit of an outlier, and that the path of increasing efficiency with reduced feature size resumes with Intel 4. From the high-clock-speed table of the “Hefty Premium” article, normalizing to 4 cores, it seems that efficiency (1K perf/Watt) improved by a factor of 2 from 45nm (Nehalem, approx. 10) to 14nm (Skylake, approx. 20), but it regressed slightly at 10nm (Sapphire Rapids, approx. 15 for 4 cores — better than 45nm, but not as good as 14nm). Hopefully Intel 4 will bring the missing single-core efficiency back, despite SRAM (maybe) not scaling further area-wise.
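
For concreteness, here is a tiny sketch of that back-of-the-envelope comparison, using only the approximate 4-core-normalized figures quoted above (the actual values come from the “Hefty Premium” table):

# Approximate 4-core-normalized efficiency figures (units: 1K perf/Watt),
# taken from the estimates in this comment, not from the article's raw table.
efficiency = {
    "45nm Nehalem": 10.0,
    "14nm Skylake": 20.0,
    "10nm (Intel 7) Sapphire Rapids": 15.0,
}

baseline = efficiency["45nm Nehalem"]
for node, eff in efficiency.items():
    print(f"{node}: {eff:.0f} -> {eff / baseline:.2f}x vs. 45nm")
# Skylake works out to about 2x Nehalem, while Sapphire Rapids lands at 1.5x:
# better than 45nm, but a step back from 14nm, as noted above.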
