
AlphaZero paper published in journal Science

Posted: Thu Dec 06, 2018 2:01 pm
by Uberdude
About a year after the pre-print appeared on arXiv (AlphaZero L19 thread, not the same as the AlphaGo Zero L19 thread), the AlphaZero paper has finally passed peer review and is in the journal Science:
http://science.sciencemag.org/content/362/6419/1140
pdf: http://science.sciencemag.org/content/s ... 0.full.pdf

Focus seems to be on chess and shogi. There's a new match vs stockfish, hopefully a better test than the last one. Chess media report: https://www.chess.com/news/view/updated ... game-match

DeepMind article and video:

Supplementary materials include some Shogi games too, which is something the community was missing:
http://science.sciencemag.org/content/s ... ver-SM.pdf

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 2:16 am
by pookpooi
chessbase.com wrote:So does this wrap up AlphaZero for good now? Hardly. As Demis Hassabis was so ready to point out recently, a new AlphaZero has been developed that is stronger than the one referenced in the paper. Be ready for new announcements!

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 3:37 am
by Javaness2
For a few moments I was convinced that their chess board was the wrong way around. Then I decided, no, they've just had a very active game.

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 8:33 am
by mumps
Hmm

Looking at the graphs shows that komi is too large!

AlphaZero wins 68.9% of games as White against AlphaGo Zero and 53.7% as Black...

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 9:04 am
by jonsa
mumps wrote:Hmm

Looking at the graphs shows that komi is too large!

AlphaZero wins 68.9% of games as White against AlphaGo Zero and 53.7% as Black...
Yeah, I was also thinking something along those lines. An "unusual" discrepancy.

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 9:09 am
by Uberdude
mumps wrote:Hmm

Looking at the graphs shows that komi is too large!

AlphaZero wins 68.9% of games as White against AlphaGo Zero and 53.7% as Black...
Well yes, all the bots (except Elf v1) and quite a few top pros too (even before AI) think 7.5 komi gives White a slight advantage (53% according to AG Teach). But that's not exactly the same as saying it's too much, as in something else would be better, because maybe reducing it to 6.5 would give Black an even bigger advantage (e.g. 55%).

For the Go version of AlphaZero it's not immediately obvious, but after careful reading of the paper (see below) I'm pretty sure it's 'only' the fully-trained 20-block AlphaGo Zero which AlphaZero beat 61% overall (they also report it beating the weaker AlphaGo Lee version before that, though still taking longer to reach that level than vs Stockfish/Elmo). So by not having any AlphaZero Go games we aren't missing games from some new bot even stronger than those we already have, though it would be nice to see another instance of a strong bot learning from scratch, to see whether it ended up playing a similar style to AlphaGo Zero, LeelaZero, Elf OpenGo etc.
Science paper wrote:We trained separate instances of AlphaZero for chess, shogi, and Go. Training proceeded for 700,000 steps (in mini-batches of 4096 training positions).
In chess, AlphaZero first outperformed Stockfish after just 4 hours (300,000 steps); in shogi, AlphaZero first outperformed Elmo after 2 hours (110,000 steps); and in Go, AlphaZero first outperformed AlphaGo Lee (9) after 30 hours (74,000 steps).
So to beat AlphaGo Lee, which is pretty weak by Go bot standards these days, it still took longer to train than the chess and shogi versions (a training step for Go was obviously slower, presumably because it's a bigger board). Then:
The Go match was played against the previously published version of AlphaGo Zero [also trained for 700,000 steps (footnote 25 = AlphaGo Zero was ultimately trained for 3.1 million steps over 40 days.)]. <snip> In Go, AlphaZero defeated AlphaGo Zero, winning 61% of games.
From the AlphaGo Zero paper, the 20-block version was trained for a total of 700k steps aka mini-batches (of 2048 positions, cf. AlphaZero's 4096) over a total of 4.9 million self-play games. They then made the 40-block version, trained from scratch over 3.1 million batches (of 2048 positions again) with 29 million games of self-play (LeelaZero is currently 40 blocks at 11 million self-play games, bootstrapped through increasing network sizes). So my reading is that A0 beat the fully-trained 20-block version (which is stronger than AG Lee but weaker than AG Master), but not the 40-block version. Beating AG0 20-block (around 4350 Elo on their graphs) by only 61% means I think A0 is weaker than AG Master (4858) and AG0 40b (5185).
Science figure 2 caption wrote:Tournament evaluation of AlphaZero in chess, shogi, and Go in matches against, respectively, Stockfish, Elmo, and the previously published version of AlphaGo Zero (AG0) that was trained for 3 days
Using the DeepMind Elo scale which is an extension of goratings.org we have:

Code:

Player                         Elo      Matches
Fan Hui                       ~3000 
AlphaGo Fan                    3144    Beat Fan Hui 5-0  
Lee Sedol / top human         ~3600
AlphaGo Lee                    3739    Beat Lee Sedol 4-1    
AlphaGoZero 20b                4350    Beat AG Lee 100-0
AlphaZero                     ~4500    Beat AG0 20b 61% (over 1000 games?) 
AlphaGo Master                 4858    Beat top pros online 60-0
AlphaGo Zero 40b               5185    Beat AG Master 89-11       
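As a sanity check on the table, the standard logistic Elo formula relates rating gaps to winrates (a sketch; DeepMind's anchoring to the goratings.org scale may differ in detail):

```python
import math

def expected_winrate(elo_a: float, elo_b: float) -> float:
    """Expected score of player A against player B under the
    standard logistic Elo model (400-point scale)."""
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400))

def elo_gap_from_winrate(winrate: float) -> float:
    """Invert the model: how many Elo points does this winrate imply?"""
    return -400 * math.log10(1.0 / winrate - 1.0)

# AlphaZero's 61% against AlphaGo Zero 20b:
print(round(elo_gap_from_winrate(0.61)))  # → 78
```

Note that 61% corresponds to only about 78 Elo under this model, so the ~4500 entry above is a rough placement rather than a computed value.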

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 10:35 am
by John Fairbairn
Top human 3600 to top AI 5185 seems like an enormous gap.

What would you say that means in handicap terms?

If we say the range from Fan Hui 2d at 3000 to Yi Se-tol (obviously more than 9d) at 3600 is close to 3 stones (maybe too generous, but I'd find it hard to believe it's not more than 2 stones), we get 1 pro dan = 200 Elo. So the latest AI should give the top human about 9 stones???? Even halving the figures to give a handicap of 4.5 stones seems a stretch, but I wouldn't rule that out.
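Spelling out that back-of-the-envelope conversion (purely the arithmetic from the figures above; whether Elo can be mapped to stones at all is another question):

```python
# Fan Hui (~3000) to Lee Sedol (~3600) taken as roughly 3 stones:
elo_per_stone = (3600 - 3000) / 3   # 200 Elo per stone
gap = 5185 - 3600                   # AlphaGo Zero 40b minus top human
print(gap / elo_per_stone)          # → 7.925
```

The straight division gives about 8 stones, the same order of magnitude as the "about 9" above.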

Do the top bots still play so as to win by half a point rather than by as much as possible? If so, can that behaviour be easily modified so that the bot will try to maximise the score? That would give us a way to compare humans more directly (i.e. by playing only even human-AI games, telling the bot the komi is 7.5 and telling the human the real komi is 40 points or whatever).

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 10:59 am
by dfan
John Fairbairn wrote:Do the top bots still play so as to win by half a point rather than by as much as possible?
They play so as to maximize the probability that they will win by at least half a point.
If so, can that behaviour be easily modified so that the bot will try to maximise the score.
People are still working on it. One problem is that at some point you have to make a tradeoff and say, for example, "I am willing for my chance of winning to go down from 98% to 97% in return for winning by 10.5 points instead of 0.5". Due to the nature of the playing system, there's no good way to say "I have a 100% chance of winning, and now I want to maximize my score while retaining that 100% chance", although of course that statement is logically meaningful.
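One conceivable way to encode that tradeoff is to have the bot optimize a blend of winrate and a squashed score term instead of winrate alone. A minimal sketch, where `score_weight` and `scale` are purely illustrative parameters, not anyone's actual implementation:

```python
import math

def move_utility(winrate: float, expected_margin: float,
                 score_weight: float = 0.1, scale: float = 20.0) -> float:
    """Blend win probability with a bounded score term. The tanh
    squashing keeps the bot from trading real winning chances for
    arbitrarily large margins."""
    win_term = 2.0 * winrate - 1.0            # map [0, 1] -> [-1, 1]
    score_term = math.tanh(expected_margin / scale)
    return win_term + score_weight * score_term

# dfan's example: 98% to win by 0.5 vs 97% to win by 10.5.
safe   = move_utility(0.98, 0.5)
greedy = move_utility(0.97, 10.5)
print(greedy > safe)  # → True: with these weights, the bigger margin wins
```

With a small enough `score_weight` the bot reverts to pure winrate play, so the dial between "safe" and "greedy" is explicit.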

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 11:40 am
by Uberdude
I wouldn't try to convert those Elo differences to handicap, it's like converting apples to volts. To take the example of LeelaZero vs Haylee a while ago (a bit weaker than Fan Hui I suppose), it absolutely demolished her on even and 2 stones, in a manner that if a human (e.g. Lee Sedol) did that I'd expect her to lose on 3 stones too, but she won easily on 3 with LZ going silly.

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 12:08 pm
by jlt
Note that the Elo rating does not vary linearly with handicap stones. Elo ratings are calculated in terms of winrate. God's Elo rating is infinite (well, not exactly, but extremely high), but God cannot give 359 stones to a human.

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 2:33 pm
by mitsun
I can think of one fairly simple way to gauge the strength of a computer program, relative to a human, expressed in meaningful units. Start playing an even game. The computer evaluates its winning chances after every move as usual. If and when the computer calculates that passing will still result in a likely win, the computer passes. At the end of the game, the computer probably wins by a small margin. The strength difference is the number of passes issued along the way. This scheme has the desirable feature that the computer is always playing the game it was trained to play, with no need to alter komi or introduce handicap stones.
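mitsun's scheme is simple enough to sketch. Here `pass_winrates` stands in for a hypothetical engine's estimate, at each of its turns, of its chance of still winning if it passes now; all the numbers are invented for illustration:

```python
def count_affordable_passes(pass_winrates, threshold=0.9):
    """mitsun's gauge: for each of the stronger player's turns,
    pass_winrates[i] is its estimated chance of winning if it passes
    now instead of playing. The strength gap is the number of turns
    where that chance stays above the threshold, i.e. where the
    engine can afford to pass."""
    return sum(1 for wr in pass_winrates if wr > threshold)

# Toy run: passes are affordable early against a much weaker
# opponent, then stop being affordable as the lead erodes.
print(count_affordable_passes([0.99, 0.97, 0.95, 0.92, 0.85, 0.60]))  # → 4
```

In a real game the engine would interleave passes and moves, but the count of above-threshold passes is the measurement either way.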

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 4:01 pm
by Bill Spight
Interesting idea. :D

One possible problem is that, as the temperature drops, the odds that a pass by the computer will not affect who wins increase, so the computer will probably pass more often in the endgame than in the opening. It is passes in the opening that approximate handicap stones. The number of passes under this scheme is likely not only to be greater than the number of handicap stones, it is likely to be more variable. Still, an interesting idea. :)

Re: AlphaZero paper published in journal Science

Posted: Fri Dec 07, 2018 6:58 pm
by ez4u
dfan wrote:..."I am willing for my chance of winning to go down from 98% to 97% in return for winning by 10.5 points instead of 0.5". Due to the nature of the playing system, there's no good way to say "I have a 100% chance of winning, and now I want to maximize my score while retaining that 100% chance", although of course that statement is logically meaningful.
The statements may be logically meaningful but they are trivial. Isn't the real challenge to make sense of a statement like, "I have a 51% chance of winning by 0.5 points by playing X and a 49% chance of winning by 1.5 points by playing Y. I want to maximize my score; which should I choose?"

Re: AlphaZero paper published in journal Science

Posted: Sat Dec 08, 2018 4:15 am
by Bill Spight
ez4u wrote:
dfan wrote:..."I am willing for my chance of winning to go down from 98% to 97% in return for winning by 10.5 points instead of 0.5". Due to the nature of the playing system, there's no good way to say "I have a 100% chance of winning, and now I want to maximize my score while retaining that 100% chance", although of course that statement is logically meaningful.
The statements may be logically meaningful but they are trivial. Isn't the real challenge to make sense of a statement like, "I have a 51% chance of winning by 0.5 points by playing X and a 49% chance of winning by 1.5 points by playing Y. I want to maximize my score; which should I choose?"
The thing is, amateur dans play the late endgame almost perfectly; but even pros do not play the late endgame perfectly. Under those circumstances, if it's a close call in the late endgame between going for a ½ pt. win versus going for a 1½ pt. win, the extra point gives a margin of safety. At least for humans.

But most, if not all, modern top bots do not assume nearly perfect play when they calculate winrates. And they do not estimate the margin of safety by expected scores, but by percentages.* As far as I can tell, the endgame, particularly the late endgame, is one of the places where humans play better than bots; life and death, semeai, and ladders being others. In all of these places, local reading can give the right global results. Bots excel at global reading, humans still excel at local reading.

* Edit: That's not right, is it? Modern top bots do not actually estimate the margin of safety, do they?

Re: AlphaZero paper published in journal Science

Posted: Sat Dec 08, 2018 5:17 am
by ez4u
Bill Spight wrote:
ez4u wrote:
dfan wrote:..."I am willing for my chance of winning to go down from 98% to 97% in return for winning by 10.5 points instead of 0.5". Due to the nature of the playing system, there's no good way to say "I have a 100% chance of winning, and now I want to maximize my score while retaining that 100% chance", although of course that statement is logically meaningful.
The statements may be logically meaningful but they are trivial. Isn't the real challenge to make sense of a statement like, "I have a 51% chance of winning by 0.5 points by playing X and a 49% chance of winning by 1.5 points by playing Y. I want to maximize my score; which should I choose?"
The thing is, amateur dans play the late endgame almost perfectly; but even pros do not play the late endgame perfectly. Under those circumstances, if it's a close call in the late endgame between going for a ½ pt. win versus going for a 1½ pt. win, the extra point gives a margin of safety. At least for humans.

But most, if not all, modern top bots do not assume nearly perfect play when they calculate winrates. And they do not estimate the margin of safety by expected scores, but by percentages. As far as I can tell, the endgame, particularly the late endgame, is one of the places where humans play better than bots; life and death, semeai, and ladders being others. In all of these places, local reading can give the right global results. Bots excel at global reading, humans still excel at local reading.
If the discussion is about switching from a winrate strategy to a maximum point strategy, then the starting point is the fuseki not the late endgame.