Life In 19x19
http://lifein19x19.com/

MuZero beats AlphaZero
http://lifein19x19.com/viewtopic.php?f=18&t=17074
Page 1 of 1

Author:  sorin [ Wed Nov 20, 2019 11:00 pm ]
Post subject:  MuZero beats AlphaZero

DeepMind published a paper about MuZero, a new approach to learning, which they evaluated on several board games and Atari video games: https://arxiv.org/pdf/1911.08265.pdf

From what I understand from a quick browse of the paper, the innovative part compared to the AlphaZero-style approach is that MuZero doesn't "know" the rules in advance, so it is a more general learning algorithm that can be used in more open-ended domains.
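For anyone wondering what "not knowing the rules" means in practice: as I read the paper, MuZero learns three networks (a representation function, a dynamics function, and a prediction function) and searches by unrolling the learned dynamics function, never calling a real rules engine. Below is a minimal Python sketch of that idea; the tiny random "networks", dimensions, and function names are my own illustration, not DeepMind's code.

Code:
# Illustrative sketch only: the "networks" are random linear maps, just to show
# how planning can proceed without any implementation of the game rules.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, NUM_ACTIONS = 8, 4

W_h = rng.normal(size=(STATE_DIM, STATE_DIM))               # representation h
W_g = rng.normal(size=(STATE_DIM + NUM_ACTIONS, STATE_DIM)) # dynamics g
W_p = rng.normal(size=(STATE_DIM, NUM_ACTIONS))             # policy head of f
W_v = rng.normal(size=(STATE_DIM,))                         # value head of f

def representation(observation):
    """h: raw observation -> hidden state."""
    return np.tanh(W_h @ observation)

def dynamics(state, action):
    """g: (hidden state, action) -> (reward, next hidden state)."""
    one_hot = np.eye(NUM_ACTIONS)[action]
    next_state = np.tanh(np.concatenate([state, one_hot]) @ W_g)
    return 0.0, next_state  # board games: the reward only comes at the end

def prediction(state):
    """f: hidden state -> (policy, value)."""
    logits = W_p.T @ state
    policy = np.exp(logits - logits.max())
    return policy / policy.sum(), float(W_v @ state)

def imagined_rollout(observation, actions):
    """Evaluate a line of play by unrolling the learned dynamics;
    no rules engine is ever consulted."""
    state = representation(observation)
    for a in actions:
        _, state = dynamics(state, a)
    _, value = prediction(state)
    return value

print(imagined_rollout(rng.normal(size=STATE_DIM), actions=[0, 2, 1]))

The real algorithm runs MCTS over these three functions rather than a single rollout, but the key point survives the simplification: every "future position" inside the search is imagined by the dynamics network, not computed from the rules.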

They tested it against AlphaZero at go and MuZero won; here is an exact quotation:

"In Go, MuZero slightly exceeded the performance of AlphaZero, despite using less computation per node in the search tree (16 residual blocks per evaluation in MuZero compared to 20 blocks in AlphaZero)"

Very interesting news, I hope they will publish some game records too!

Author:  Bill Spight [ Wed Nov 20, 2019 11:18 pm ]
Post subject:  Re: MuZero beats AlphaZero

sorin wrote:
DeepMind published a paper about MuZero, a new approach to learning, which they evaluated on several board games and Atari video games: https://arxiv.org/pdf/1911.08265.pdf

From what I understand from a quick browse of the paper, the innovative part compared to the AlphaZero-style approach is that MuZero doesn't "know" the rules in advance, so it is a more general learning algorithm that can be used in more open-ended domains.


Actually, learning the rules is not innovative.

Quote:
Very interesting news, I hope they will publish some game records too!


Very interesting, indeed. :)

Author:  EdLee [ Thu Nov 21, 2019 12:37 am ]
Post subject: 

Hi sorin, thanks.

Nice to see the classic Atari games.
Mr. Aja Huang (the relayer in the AlphaGo-LSD match) is not listed in this paper.

Too bad the "casual" readers of these papers would have no idea of the etymology of Atari and its connection to Go. :scratch: (Unless they accidentally wikipedia it up.)

Author:  Uberdude [ Thu Nov 21, 2019 1:00 am ]
Post subject:  Re: MuZero beats AlphaZero

I don't think we ever got game records of AlphaZero for Go, did we? Also, AlphaZero was only stronger than the 20-block version of AlphaGo Zero (which was between AG Lee and AG Master), not the 40-block version; see https://lifein19x19.com/viewtopic.php?p=239589#p239589. So these games would be interesting to see from a "what style does this new bot from an independent training run of self-discovery of the rules have" perspective, but they will likely be weaker than AG0 40b.

Author:  Uberdude [ Thu Nov 21, 2019 2:34 am ]
Post subject:  Re: MuZero beats AlphaZero

Now the real challenge for MuZero is can it play Mao?

Author:  Kirby [ Thu Nov 21, 2019 8:30 am ]
Post subject:  Re: MuZero beats AlphaZero

Next step: AI to decide to play go when it doesn’t know the rules, and also doesn’t know it can use board or stones.

Author:  Yakago [ Thu Nov 21, 2019 8:40 am ]
Post subject:  Re: MuZero beats AlphaZero

Yes, we should eagerly anticipate the day that the AI learns Go out of sheer interest

Author:  Gomoto [ Thu Nov 21, 2019 8:44 am ]
Post subject:  Re: MuZero beats AlphaZero

Mu Zero, can you tell us more about Go?

I don't care. I just win.

Author:  Bill Spight [ Thu Nov 21, 2019 9:44 am ]
Post subject:  Re: MuZero beats AlphaZero

Gomoto wrote:
Mu Zero, can you tell us more about Go?


Mu.

Author:  MikeKyle [ Thu Nov 21, 2019 4:51 pm ]
Post subject:  Re: MuZero beats AlphaZero

Uberdude wrote:
Now the real challenge for MuZero is can it play Mao?


I played Mao in college and genuinely thought it was just made up by a small group of bored Yorkshiremen.
I guess that's your point, but Mao is kind of the only game MuZero seems to play.

Author:  pookpooi [ Fri Nov 22, 2019 10:20 am ]
Post subject:  Re: MuZero beats AlphaZero

Surprised not to see Aja Huang in this, but he appears in the AlphaStar paper.

Anyway, I just love the name: Zero is nothing, and Mu is also "nothing" in Japanese and Korean (Wu in Chinese), something like that.

I'm wondering if they'll also manage to play StarCraft at AlphaStar level in their next project; the AI's name could be MuZeroNova, since Nova is "new" in Latin and also a "star explosion" in astronomical terms. Though I might consider adding another "nothing" to the name if the AI manages to win even without being given a winning reward.

Author:  jlt [ Fri Nov 22, 2019 10:45 am ]
Post subject:  Re: MuZero beats AlphaZero

For the next name of a DeepMind product, I suggest EpsilonZero (vacuum permittivity).

Author:  EdLee [ Fri Nov 22, 2019 12:11 pm ]
Post subject: 

μ

Author:  Bill Spight [ Fri Nov 22, 2019 2:38 pm ]
Post subject:  Re: MuZero beats AlphaZero


Author:  sorin [ Fri Nov 22, 2019 2:41 pm ]
Post subject:  Re: MuZero beats AlphaZero

Bill Spight wrote:
sorin wrote:
DeepMind published a paper about MuZero, a new approach to learning, which they evaluated on several board games and Atari video games: https://arxiv.org/pdf/1911.08265.pdf

From what I understand from a quick browse of the paper, the innovative part compared to the AlphaZero-style approach is that MuZero doesn't "know" the rules in advance, so it is a more general learning algorithm that can be used in more open-ended domains.


Actually, learning the rules is not innovative.



Right. And this is not about "learning the rules", but learning to act in an environment where there are no clear rules.

They used it for go as well just as a proof of concept, I guess, but go (or board games in general) is not the main target for this family of algorithms. Nevertheless, I think it's very cool. I am mostly interested in the learning trajectory for go: whether it ended up learning in a different way, or whether it converged to AlphaZero's style, etc.

Author:  pookpooi [ Fri Nov 22, 2019 9:17 pm ]
Post subject:  Re: MuZero beats AlphaZero

Since DeepMind is not gonna provide exact Elo values anyway, I'll do this for fun: I'll try to estimate the Elo ratings from the graphs, assuming the graphs have an accurate scale.

We'll start with the exact numbers the paper mentions (taken from the AlphaGo Zero paper):
AlphaGo Fan: 3,144
AlphaGo Lee: 3,739
AlphaGo Master: 4,858
AlphaGo Zero (40 blocks / 40 days): 5,185

Now the estimated numbers:
AlphaGo Zero (20 blocks / 3 days): 4,884 (from the AlphaZero paper)
AlphaZero (20 blocks / 13 days): 4,987 (from the MuZero paper), 4,980 (from the AlphaZero paper); the numbers are very similar across these two papers, so I think the graphs have an accurate scale
MuZero (16 blocks / 12 hours?): 5,161 (from the MuZero paper)
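For a sense of scale, here is a quick back-of-the-envelope conversion of those gaps into expected scores using the standard logistic Elo formula (my own arithmetic, and remember the ratings themselves are read off graphs):

Code:
# Expected score from an Elo difference (standard logistic Elo model).
def expected_score(elo_a, elo_b):
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400.0))

matchups = [
    ("MuZero 5,161 vs AlphaZero 4,987", 5161, 4987),
    ("AlphaGo Zero 40b 5,185 vs MuZero 5,161", 5185, 5161),
]
for label, a, b in matchups:
    print(f"{label}: {expected_score(a, b):.1%} expected score for the first player")

So the estimated MuZero vs AlphaZero gap would be roughly a 73% expected score, while the AG0 40-block vs MuZero gap is close to a coin flip.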

Though there is a very BIG caution: these are different match conditions. In the MuZero paper the condition is 800 simulations per move, and another graph shows that MuZero is able to outperform AlphaZero from 0.1 seconds to 20 seconds per move; at 20 to 50 seconds per move AlphaZero outperforms MuZero, and we don't know what will happen at even longer thinking times.

Author:  lightvector [ Fri Nov 22, 2019 9:57 pm ]
Post subject:  Re: MuZero beats AlphaZero

Actually, we do know what will happen at longer thinking times: it's almost guaranteed that AlphaZero continues to pull further ahead of MuZero.

AlphaZero pulls ahead at longer thinking times because the accuracy of MuZero's representation of the board degrades each time it passes through the dynamics function, so as it thinks more and more moves ahead, its "mental picture" of the future board state becomes worse and worse until it degrades into garbage. (This is a general phenomenon that afflicts all known RNN-style architectures that attempt to model any kind of state dynamics.)
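To make the compounding-error point concrete, here is a toy example (entirely my own, not from the paper): the "learned" dynamics below rotates the state by a slightly wrong angle each step, and the imagined trajectory drifts further from the true one the deeper you unroll it.

Code:
# Toy illustration: a tiny per-step model error compounds with unroll depth.
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

true_step    = rotation(0.100)   # stands in for the real rules
learned_step = rotation(0.103)   # learned model with a small per-step error

x_true = x_model = np.array([1.0, 0.0])
for depth in range(1, 201):
    x_true  = true_step @ x_true
    x_model = learned_step @ x_model
    if depth in (1, 10, 50, 100, 200):
        print(f"depth {depth:3d}  error {np.linalg.norm(x_true - x_model):.3f}")

The error after one imagined move is negligible, but a couple of hundred unroll steps later the model's picture of the state is badly off, even though each individual step was almost right. AlphaZero avoids this entirely because every node in its tree is a real board position.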

The paper itself remarks on the fact that, quite amazingly, the degradation only becomes really noticeable at least a whole order of magnitude beyond the unroll depth used in self-play training. But for deep searches, as things currently stand, it can't compete with AlphaZero, which has an actual software implementation of a Go board to make the moves on and therefore perfect perception of future board states.

(As others have mentioned, it's very clear from design features like this one that Go wasn't really the target problem being solved here; they're focused on more general tasks where you can't simply implement the rules of the game in your model.)

Author:  Bill Spight [ Fri Nov 22, 2019 10:39 pm ]
Post subject:  Re: MuZero beats AlphaZero

Thanks, lightvector. :)

One thing that keeps coming to my mind is Richard Feynman's caution about extrapolation. OC, everybody knows that you can't trust extrapolation, but Feynman pointed out that you can't trust extreme data points, either. They are not validated by further exploration. See the horizon effect.

That's why, when I see long variations produced by analysis programs, I cringe. The Elf commentaries sometimes produce long variations as well, but they cut them off when the number of visits or playouts drops below 1500. You can't trust moves that have not been explored at least that much.
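In code terms, the kind of cutoff I mean is simple (the variation data below is made up, and 1500 is the threshold mentioned above): walk down the variation and stop trusting it at the first move whose visit count falls below the threshold.

Code:
# Truncate an engine variation at the first insufficiently explored move.
def truncate_variation(variation, min_visits=1500):
    trusted = []
    for move, visits in variation:
        if visits < min_visits:
            break
        trusted.append(move)
    return trusted

# Hypothetical principal variation as (move, visit count) pairs.
pv = [("Q16", 42000), ("D4", 18500), ("R4", 6200), ("C16", 1400), ("E3", 300)]
print(truncate_variation(pv))   # -> ['Q16', 'D4', 'R4']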
