Comments on: Reimagining Accelerators with Sparsity at the Core
https://www.nextplatform.com/2021/01/05/reimagining-accelerators-with-sparsity-at-the-core/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

By: Kevin Cameron
Mon, 11 Jan 2021 17:32:36 +0000
https://www.nextplatform.com/2021/01/05/reimagining-accelerators-with-sparsity-at-the-core/#comment-159140

Sparse neural networks look a lot like circuits: the matrix math is similar to what you see in (fast-)SPICE. That implies that analog circuits with the same behavior can be built as ASICs, and those would hit performance levels well beyond FPGAs or GPUs.
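To make the analogy concrete: a pruned layer's forward pass reduces to a sparse matrix–vector product, the same kernel a SPICE-style solver applies to its sparse conductance matrix, so work scales with the number of connections rather than with the dense dimensions. A minimal sketch in Python with scipy (the 1024x1024 shape and ~90% sparsity are illustrative assumptions, not figures from the comment):

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Hypothetical pruned layer: 1024x1024 weights with ~90% of entries removed,
# stored in CSR form -- the same compressed format sparse circuit solvers use.
rng = np.random.default_rng(0)
W = sparse_random(1024, 1024, density=0.10, format="csr", random_state=0)
x = rng.standard_normal(1024)

# Forward pass: only the ~10% nonzero weights are touched, so the cost tracks
# the number of "circuit elements" (connections), not n^2.
y = W @ x
print(y.shape, W.nnz, "nonzeros instead of", 1024 * 1024)
```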

By: Dr. FPGA
Sun, 10 Jan 2021 21:21:32 +0000
https://www.nextplatform.com/2021/01/05/reimagining-accelerators-with-sparsity-at-the-core/#comment-159098

The most impressive number in this article is the >10x streaming throughput versus GPUs. (As an aside, streaming usually implies a batch size of 1 rather than N/A; see the Microsoft Brainwave paper for reference.) Streaming and massive connectivity have always been a forte of FPGAs in telecom, datacom, and other communications markets, yet these ace cards have not been played well in current FPGA AI offerings. While sparse xNNs are indeed faster, connectivity can also break the single-chip barrier, letting FPGAs take the same multichip computing path as GPUs and CPUs.
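The batch-size aside matters because a streaming deployment answers each sample as it arrives instead of queueing N samples into a batch, so per-sample latency, not aggregate throughput, is the figure of merit. A minimal sketch of the distinction (the layer shape and batch sizes are placeholders, not measurements from the article):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096)).astype(np.float32)

def infer(batch):
    """Stand-in for one layer of inference on a batch of samples."""
    return batch @ W.T

# Streaming: batch size 1 -- each sample is answered as soon as it arrives.
x = rng.standard_normal((1, 4096)).astype(np.float32)
t0 = time.perf_counter()
infer(x)
print(f"batch-1 latency: {(time.perf_counter() - t0) * 1e3:.2f} ms")

# Batched: better device utilization, but every sample in the batch waits
# for the batch to be assembled and fully processed before it gets an answer.
xs = rng.standard_normal((64, 4096)).astype(np.float32)
t0 = time.perf_counter()
infer(xs)
print(f"batch-64 total: {(time.perf_counter() - t0) * 1e3:.2f} ms")
```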
On a side note, I would question the appearance of the ZU3EG with infinite speedups. Perhaps whoever formats these tables in the future should pay attention to division by zero?
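An infinite speedup is what falls out when a zero (or missing) measurement slips into the denominator of a speedup column. A minimal sketch of the guard (the timings are placeholders, not the article's data):

```python
def speedup(baseline_ms, accelerated_ms):
    """Format a speedup cell, flagging missing or zero measurements
    instead of letting the division produce inf."""
    if not baseline_ms or not accelerated_ms:
        return "N/A"  # no measurement -- don't divide
    return f"{baseline_ms / accelerated_ms:.1f}x"

# A zero entry (e.g. a device that was never benchmarked) yields N/A
# rather than the "infinite speedup" being poked fun at above.
print(speedup(41.0, 0.0))  # N/A
print(speedup(41.0, 3.2))  # 12.8x
```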

By: Eric Olson
Sun, 10 Jan 2021 06:35:36 +0000
https://www.nextplatform.com/2021/01/05/reimagining-accelerators-with-sparsity-at-the-core/#comment-159078

From a product reliability and testing point of view, continuous learning could lead to unexpected behaviour, much like a teenager deciding to find out how fast the family automobile can go on the freeway. In my opinion this is not the holy grail of AI.

What makes AI potentially more useful than hiring a bunch of human workers to do the same thing is that AI doesn't start experimenting with shortcuts in the middle of doing what it is supposed to do. This and other differences between AI and human intelligence are what make it valuable. In particular, the ability to switch off the learning aspects of an AI model is what leads to predictable behaviour, and that is extremely important.
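Switching off learning is straightforward in practice: freeze the weights and run the network in inference mode, so the same input always produces the same output and the deployed behaviour can be tested once and trusted. A minimal PyTorch sketch (the two-layer model is a placeholder, not anything from the comment):

```python
import torch
import torch.nn as nn

# Placeholder model standing in for whatever network was trained.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# "Switch off learning": no gradients, no weight updates, and inference-mode
# behaviour for layers like dropout/batchnorm. After this, the model's
# behaviour is fixed -- the predictability argued for above.
for p in model.parameters():
    p.requires_grad = False
model.eval()

with torch.no_grad():
    x = torch.randn(1, 16)
    print(model(x))
```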
