Distributed neural network training?
Posted: Sun Aug 20, 2017 9:16 am
by Cyan
Will developers of go programs be interested in setting up a distributed computing project to train their neural networks, like Folding@home and BOINC? That way millions of go fans worldwide can contribute to making go programs stronger.
Re: Distributed neural network training?
Posted: Sun Aug 20, 2017 9:52 am
by Uberdude
Someone asked GCP (author of Leela) about this, since he has said that his lack of compute power for training the networks (and generating self-play training data) was a major factor holding back Leela's strength. He explained that training requires a very fat data pipe between compute units, so you can't effectively split a task meant for one beefy machine with several high-end GPUs across 20 low-end laptops. That said [my interpretation], if there are people with high-spec gaming machines with a few Titan Xs, perhaps they could usefully contribute. I don't know how common that is.
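To see why the pipe matters, here's a rough back-of-the-envelope sketch. The numbers are assumptions for illustration (a ~10M-parameter policy network, 32-bit float gradients, a 10 Mbit/s home uplink, ~12 GB/s for PCIe 3.0 x16), not Leela's actual figures:

```python
# Rough estimate of why gradient synchronization needs a fat pipe.
# Assumed numbers: ~10M-parameter network, fp32 gradients.
params = 10_000_000
bytes_per_sync = params * 4            # full gradient in 32-bit floats

home_uplink_Bps = 10e6 / 8             # 10 Mbit/s home uplink, in bytes/s
pcie_Bps = 12e9                        # ~12 GB/s, PCIe 3.0 x16

print(f"gradient size: {bytes_per_sync / 1e6:.0f} MB")
print(f"over home uplink: {bytes_per_sync / home_uplink_Bps:.0f} s per sync")
print(f"over PCIe: {bytes_per_sync / pcie_Bps * 1000:.1f} ms per sync")
```

With these assumptions each synchronization step is ~40 MB: a few milliseconds inside one machine, but half a minute over a typical home connection, and training wants to do it constantly. That's the gap GCP was describing.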
Re: Distributed neural network training?
Posted: Mon Aug 21, 2017 6:59 am
by zakki
I think an architecture like Folding@home is better suited to generating millions of self-play games than to training the neural network itself.
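The reason this distributes well is that each self-play game is independent: volunteers only need the current network, not each other's gradients. A minimal sketch of what a volunteer client might look like (the engine call here is a dummy stand-in, not any real program's API):

```python
import json
import random

def play_selfplay_game(seed):
    """Stand-in for a real engine's self-play.

    A real client would run MCTS with the current network for every
    move; here we just fabricate a dummy game record with a seeded RNG.
    """
    rng = random.Random(seed)
    moves = [rng.randrange(361) for _ in range(rng.randrange(150, 300))]
    return {"seed": seed, "moves": moves, "result": rng.choice(["B+", "W+"])}

def worker(num_games, start_seed=0):
    # Games are independent, so thousands of volunteer machines can run
    # this loop with different seeds and upload the JSON records to a
    # central server; only that server needs to do the actual training.
    return [json.dumps(play_selfplay_game(start_seed + i))
            for i in range(num_games)]

records = worker(3)
print(len(records))  # 3 uploadable game records
```

No fat pipe needed: each finished game is a few kilobytes of moves plus a result, and latency doesn't matter at all.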
Re: Distributed neural network training?
Posted: Mon Aug 21, 2017 12:24 pm
by Cyan
zakki wrote:I think an architecture like Folding@home is better suited to generating millions of self-play games than to training the neural network itself.
Would that be useful for current programs like Rayn?
Re: Distributed neural network training?
Posted: Tue Aug 22, 2017 5:16 am
by zakki
AlphaGo used about 30,000,000 games.
Right now, 1,600,000 human games without random moves and 1,440,000 Aya games with random moves are available.
Rayn (and other programs) would need roughly 10x that amount of high-quality games.
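The arithmetic behind the 10x figure, using the game counts above:

```python
human_games = 1_600_000   # human games, no random moves
aya_games = 1_440_000     # Aya games with random moves
available = human_games + aya_games
alphago_games = 30_000_000

print(f"available now: {available:,}")
print(f"shortfall vs AlphaGo: {alphago_games / available:.1f}x")
```

So the existing pool is about 3 million games, roughly a tenth of what AlphaGo trained on, which is where distributed self-play generation could help.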