xela wrote:
At the time of the AlphaGo-Lee Sedol match, I remember hearing a lot about the algorithms, and the fact that computer go had achieved a new level by using smarter software... and I don't recall any mention of custom hardware during the match. Seems strange that the topic didn't come up at the time, or am I misremembering things?
So this leaves me wondering: how much of AlphaGo's success really is due to the new deep learning methods, and how much is from souped-up hardware? Thoughts on this?
I'll try to explain.
A "neural net" is an ABSTRACT concept. Your brain cells connect with each other to function as a neural net. In this case we have a computer running a program which emulates (implements) a neural net. That emulation could be done by a stepwise process, but since a neural net has a whole lot of "cells" each doing the same thing at the same time (receiving data from neighbors, sending data to neighbors), a parallel processor can speed things up.
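To make that concrete, here's a minimal sketch (not AlphaGo's actual architecture, just an illustrative toy layer): the stepwise emulation updates one cell at a time in a loop, while the parallel-friendly version computes every cell at once as a single matrix product, which is exactly the kind of operation parallel hardware accelerates. Both compute the same thing.

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal(4)        # signals arriving from neighboring cells
weights = rng.standard_normal((3, 4))  # connection strengths: 3 cells x 4 inputs

# Stepwise emulation: update one cell at a time.
stepwise = np.empty(3)
for i in range(3):
    total = 0.0
    for j in range(4):
        total += weights[i, j] * inputs[j]
    stepwise[i] = max(0.0, total)      # ReLU-style activation

# Parallel-friendly emulation: all cells at once as one matrix product.
parallel = np.maximum(0.0, weights @ inputs)

assert np.allclose(stepwise, parallel)  # same result, different execution strategy
```

Same abstract net, two different ways of running it; the hardware question is just about which execution strategy is fastest.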
Understand? The PROGRAM is implementing the neural net. The neural net is being trained (learning) to evaluate a function (given a state of the board, return a legal move, ideally the best move). That training is NOT changing the program, just the data (the cell values, i.e. the connection weights).
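Here's a toy sketch of that point (a made-up one-weight "net" learning to double its input, nothing to do with go): the function `f` below is the fixed program; training only rewrites the number stored in `w`.

```python
# The fixed program: evaluate the net. This code never changes.
def f(w, x):
    return w * x

# The learnable data (a "cell value" / connection weight).
w = 0.0

# Training loop: nudge the data toward the target behavior f(x) = 2 * x.
for _ in range(200):
    x, target = 1.0, 2.0
    error = f(w, x) - target
    w -= 0.1 * error * x   # gradient step: updates data only, not code

assert abs(w - 2.0) < 1e-3  # the weight has learned the target value
```

Before training and after training, it's the same program; only the stored numbers differ, which is why training and playing can run on the same hardware.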
The IDEAL "special hardware" for implementing neural nets does not yet exist. It may never exist, if it turns out to be more costly and slower than emulation using fast, more general-purpose parallel processors.