Comments on: Show Me The Money: What Bang For The HPC Buck? https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/ In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds. Mon, 23 Apr 2018 11:13:20 +0000

By: John L. Larson https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-9020 Tue, 11 Aug 2015 20:20:37 +0000 http://www.nextplatform.com/?p=648#comment-9020 I worked at Cray Research in Chippewa Falls when the CRAY X-MP/4 came out in 1984. It had an 8.5 ns clock (that's 117.64 MHz; the CRAY X-MP/2 had a 9.5 ns clock, or 105.26 MHz) and could produce 2 results per clock period on each of its 4 processors. The peak 64-bit floating-point execution rate of the CRAY X-MP/4 was 940 megaflops.
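Those figures are easy to sanity-check. A minimal sketch of the arithmetic, using only the numbers quoted in the comment (8.5 ns clock period, 2 results per clock per CPU, 4 CPUs):

```python
# Peak 64-bit floating-point rate for the CRAY X-MP/4, derived from
# the figures quoted in the comment (not official Cray marketing data).
clock_period_ns = 8.5
clock_mhz = 1000.0 / clock_period_ns          # ~117.65 MHz
results_per_clock_per_cpu = 2
cpus = 4
peak_mflops = clock_mhz * results_per_clock_per_cpu * cpus
print(f"{clock_mhz:.2f} MHz -> {peak_mflops:.0f} Mflops peak")  # ~941
```

The computed ~941 versus the quoted 940 megaflops comes down to rounding in how clock rates were quoted in that era.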

By: Annan Publius https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-3251 Sat, 06 Jun 2015 19:11:23 +0000 http://www.nextplatform.com/?p=648#comment-3251 I had an emeritus professor who had his office adjacent to mine. He had formerly headed a small department at a large university. At one point the university came to him and told him that he must hire a secretary. He responded, paraphrasing: “No thank you, I do all the correspondence myself and do not need one. The taxpayers of the state work hard for their money; it would not be fair to them.” When a successor took over after his retirement, the secretary was hired and the nest was feathered.

These massively parallel systems have very long setup times and rarely scale well when the applications are written as they are formulated. The SPEC HPC benchmarks are much more indicative of real application performance. Application developers complained behind closed doors that these massively parallel machines were impossible to scale past a few CPUs.

Peak numbers and the Linpack Rmax/Rpeak numbers have become the macho-FLOPS of old. These figures exist mostly to impress funding agencies and the public. The game became how many boxes you can get into a warehouse, and how big a warehouse you can build.

What you have now are throughput clusters. My associates have found that they can farm out scalable applications to Amazon and other cloud providers at a tiny fraction of the cost and hassle of requesting time on these systems.

Are taxpayers getting value from these efforts? It is time to consider whether these projects are worthwhile.

By: John West https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-1680 Wed, 22 Apr 2015 18:43:59 +0000 http://www.nextplatform.com/?p=648#comment-1680 In reply to Buddy Bland.

Buddy is right, and procurement data can be protected from FOIA requests in some cases. However, the government doesn’t have to agree to these terms; the costs of some very large military weapons programs (all of which dwarf HPC investments in the United States) are publicly known. With enough citizen or media pressure, the current practice could (and probably should) change.

By: John West https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-1679 Wed, 22 Apr 2015 18:39:50 +0000 http://www.nextplatform.com/?p=648#comment-1679 It would be valuable to go back and include cost data for DoD and NSF investments as well. DoD has spent roughly $1 billion on supercomputers over its history, and NSF’s investments in HPC over the past 25 years have to be significant as well. Both agencies have had top-10 systems at various points (NSF has one now), and both are tracked in the Top500.

By: Buddy Bland https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-1572 Sun, 19 Apr 2015 19:52:11 +0000 http://www.nextplatform.com/?p=648#comment-1572 One of the primary reasons that the pricing for these supercomputers isn’t made public is that the vendors want to keep the pricing data proprietary. The big centers sign contracts for these machines years ahead of general availability. Of course, by the time the systems are installed and accepted, they are yesterday’s news and no one asks for the pricing information then. If you ask for data on systems that are no longer in production, I suspect you could get rather detailed pricing broken out by hardware, services, maintenance, etc.

By: Timothy Prickett Morgan https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-1356 Wed, 15 Apr 2015 19:24:05 +0000 http://www.nextplatform.com/?p=648#comment-1356 In reply to Al Stutz.

This is exactly the kind of information I wish were publicly available. It would be interesting to do a real TCO (total cost of ownership) analysis on these massive machines.

By: Timothy Prickett Morgan https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-1355 Wed, 15 Apr 2015 19:21:59 +0000 http://www.nextplatform.com/?p=648#comment-1355 In reply to Hubertus van Dam.

Math error, and not in my favor. In flipping between my teras and petas I left a stray factor of 1,000 in the equation. Deepest apologies to all. Fixed now, I hope.

By: david serafini https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-1320 Wed, 15 Apr 2015 03:04:37 +0000 http://www.nextplatform.com/?p=648#comment-1320 I disagree with the comment that it was tough to get 50% of peak on the old Cray machines, or that the difficulty was similar on later machines. Rather, it was relatively easy to get more than 50% of peak on the Crays (particularly the X-MP, which may have been the best-balanced of the Cray ECL vector machines), and performance above 80% was not uncommon. I used a CFD code that ran at about 90% of peak on all 4 CPUs of the X-MP. Although it was well suited to the Cray vector architecture, it was a real application, not all linear algebra (like LINPACK), and it didn’t take a heroic effort to optimize.

On the other hand, the CMOS microprocessor-based massively-parallel systems that came later were very hard to get high performance rates out of, although they made up for it by having much higher peaks.

On the log scale of history this isn’t hugely important, but the difference is more significant than the article implies. On real programs, the Crays could be 5-10 times better when measured by percent of peak, compared to the MPPs. It’s a shame that the ECL circuits and static RAM of the vector Crays were too expensive and power-hungry to scale to the performance levels of the microprocessor MPPs, since the ease of programming of the Crays would have saved their users a lot of time and effort.
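To make the percent-of-peak point concrete, here is a sketch with illustrative numbers only: the 940-megaflops X-MP/4 quoted earlier in this thread, running at the 90% of peak the commenter reports, against a hypothetical MPP with ten times the peak but only 10% efficiency (the MPP figures are assumptions, not measurements).

```python
# Illustrative only: sustained throughput at different percent-of-peak
# efficiencies. The MPP peak and efficiency are hypothetical.
xmp_peak_mflops = 940.0
xmp_sustained = 0.90 * xmp_peak_mflops        # ~846 Mflops delivered
mpp_peak_mflops = 10 * xmp_peak_mflops        # hypothetical 9,400 Mflops peak
mpp_sustained = 0.10 * mpp_peak_mflops        # 940 Mflops delivered
# Ten times the peak, yet roughly the same delivered performance.
print(xmp_sustained, mpp_sustained)
```

This is the sense in which a 5-10x advantage in percent of peak can cancel out a large advantage in raw peak.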

By: Hubertus van Dam https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-1307 Tue, 14 Apr 2015 23:55:01 +0000 http://www.nextplatform.com/?p=648#comment-1307 Can you check your math, please? For the Tianhe-1A machine you quote a performance of 4.7 petaflops at a cost of $95 million; to me that works out to around $20M per petaflop, not $20 per petaflop. If you can really get me a petaflop machine for $20, I am going to have a chat with my boss…
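The corrected arithmetic, using only the figures quoted in this comment ($95 million for 4.7 petaflops), works out as follows:

```python
# Dollars per petaflop for Tianhe-1A, from the figures quoted above.
cost_dollars = 95_000_000
petaflops = 4.7
dollars_per_petaflop = cost_dollars / petaflops
print(f"${dollars_per_petaflop / 1e6:.1f}M per petaflop")  # about $20.2M
```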

By: Al Stutz https://www.nextplatform.com/2015/04/13/show-me-the-money-what-bang-for-the-hpc-buck/#comment-1296 Tue, 14 Apr 2015 20:32:40 +0000 http://www.nextplatform.com/?p=648#comment-1296 I like your article.

I wonder how your chart would change when you add the costs of cooling, power, software, and heroic datacenter builds. These have risen exponentially over the last ten years.
