In practical terms, there may not be much difference between the two for SDK reviews. In either case I would suggest looking at, say, what the bot thinks are the three biggest mistakes and trying to understand why. Using the program to play through variations can aid understanding, rather than just observing the lines the program used to make its decision. Since the program is so strong, the player will probably need to explore the game further to see why the program made its assessment.
That said, I would recommend Leela Zero, Elf, or another top bot. Leela 11 makes demonstrable mistakes. Also, IMHO SDKs have started to form habits and opinions, so that they see some good plays but overlook others. The bots that have been trained on self-play are more likely to suggest good plays that would never have occurred to SDKs, and that is a good thing at that stage. SDKs are starting to build their ability to calculate variations, but what good is deep reading if you don't see the best possible plays and replies?
Edit: Corrected typo. The last word was replays, but I meant replies.
_________________ The Adkins Principle: At some point, doesn't thinking have to go on? — Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
Last edited by Bill Spight on Sat Feb 09, 2019 9:39 am, edited 1 time in total.