The bright new world of AI has arrived rather quickly as a byproduct of Big-Data statistical computing from folks like Google trying to sift through the world’s data. The bulk of it is sparse-matrix math, which is something I did in college decades ago.
My first encounter with that math was building a steady-state (DC) circuit solver as a college project (maybe in Algol60); later I used SPICE simulators for circuit design.
So when you hear CS folks talking about “neural networks”, they are usually referring to the matrix-math version of something doing an AI task. However, that matrix math (as with SPICE internals) is really a simulation of an actual network of neurons.
What’s the difference between the matrix math and a network of neurons? Well, as anyone who does SPICE simulation will tell you: the matrix math is orders of magnitude slower than the real circuit it simulates. So, while a TPU or a GPU might chew through your matrices faster, breaking the matrices up into smaller chunks and directly implementing the math (maybe in analog circuits) is the most efficient way to implement your AI, and if you are doing that, you are a neural-network engineer.
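To make the “matrix-math version” concrete, here’s a minimal sketch in plain Python/NumPy of one fully connected layer: the whole “network of neurons” collapses into a matrix-vector multiply plus a nonlinearity. The sizes and names are illustrative, not taken from any particular framework.

```python
import numpy as np

# One layer of "neurons" as matrix math: y = f(W @ x + b).
# Each row of W holds one neuron's input weights, so the entire
# layer is a single matrix-vector product.

rng = np.random.default_rng(0)

n_inputs, n_neurons = 4, 3
W = rng.standard_normal((n_neurons, n_inputs))  # synaptic weights
b = rng.standard_normal(n_neurons)              # biases

def layer(x):
    # The multiply-accumulate step a TPU/GPU (or an analog array
    # of multipliers) would implement directly.
    return np.maximum(0.0, W @ x + b)           # ReLU nonlinearity

x = rng.standard_normal(n_inputs)               # one input vector
print(layer(x))
```

An analog implementation does the same multiply-accumulate physically, with currents summing on a wire, which is why it can be so much faster than stepping through the matrix in software.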
Related
The Intersection of SSCS and AI – A Tale of Two Journeys, by Vivienne Sze and Boris Murmann
Makes sense. Thanks, Kevin.
Hey, Kevin. Are people building analog equivalents of neural networks (NNs) on a substrate?
Hi Todd, there are a few companies in that space – Rain Neuromorphics and Lightelligence being a couple. Other folks are using hybrid approaches, e.g. Mythic.
It’s difficult to store real (analog) values on the silicon where the multipliers are.