Mike Novack wrote:
RBerenguel wrote:
I perfectly understand how an MC evaluator works. But the reason the AI gives may have nothing to do with why the game was won in most of the MC variations. Imagine an AI that does not understand ladder breakers and plays a move giving "threatens to attack a weak group" as the reason, while the move turns out to be a ladder breaker for some other area. Most MC branches then follow this ladder and register a win. The real reason was not "threatens to attack a weak group", so as a learning tool it is not especially good.
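To make the point concrete, here is a toy sketch (all names, probabilities, and the `position` structure are invented for illustration) of how an MC evaluator scores moves: it simply averages win/loss outcomes over playouts, so the human-readable "reason" attached to a move never enters the evaluation at all.

```python
import random

def playout(position, move, rng):
    # Stand-in for one random playout to the end of the game.
    # Returns True for a win. The outcome here depends only on the
    # (hidden) effect of the move, e.g. breaking a ladder elsewhere,
    # not on whatever reason was stated for playing it.
    return rng.random() < position["win_prob"][move]

def mc_evaluate(position, moves, n_playouts=1000, seed=0):
    # Score each candidate move by its win rate over many playouts.
    # The "reason" string is carried along but never consulted.
    rng = random.Random(seed)
    scores = {}
    for move, reason in moves:
        wins = sum(playout(position, move, rng) for _ in range(n_playouts))
        scores[move] = (wins / n_playouts, reason)
    return scores

# Toy position: the "attack" move mostly wins because it happens to
# break a ladder in another area, which the playouts capture implicitly.
position = {"win_prob": {"attack": 0.8, "defend": 0.4}}
moves = [("attack", "threatens to attack a weak group"),
         ("defend", "secures the corner")]
scores = mc_evaluate(position, moves)
best = max(scores, key=lambda m: scores[m][0])
```

The evaluator will prefer "attack" and report the attached reason, even though the win rate comes from the ladder side effect; the stated reason is just a label riding on top of the statistics.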
Imagine that we have before us a go-playing program that claims to be able to give go reasons for the moves it makes. Why do you want to decide what it can or cannot do without looking? Why are you saying that if there were more than one go reason for a move, it would display just one? Is this based on your observation of how this program behaves?
How about we go back to just before the first MCTS-based program? The program in question used an AI to select a set of plausible moves based on go reasons and then chose a "best move" from among these. It made that choice by having an AI decide which reasons counted more in the particular situation. You could ask it to show you why. And it was able to play go just a stone or so weaker than you are now. Do you think that could have been done without the AI knowing about ladders, or knowing that a move could have more than one go reason behind it?
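A minimal sketch of the architecture described above, with every rule name, move coordinate, and weight invented for illustration: rule "experts" propose plausible moves, each tagged with the go reason that fired, and a scorer weighs those reasons for the situation. Note that one move can accumulate several reasons.

```python
def generate_candidates(position):
    # Collect rule firings into a move -> [reasons] map.
    # Different rules can tag the SAME move, so a single move
    # may carry more than one go reason.
    candidates = {}
    for move, reason in position["rule_firings"]:
        candidates.setdefault(move, []).append(reason)
    return candidates

def choose_move(position, weights):
    # Pick the candidate whose combined reason weights are highest,
    # and return the move together with ALL the reasons behind it.
    candidates = generate_candidates(position)
    def score(move):
        return sum(weights.get(r, 0) for r in candidates[move])
    best = max(candidates, key=score)
    return best, candidates[best]

# Toy situation: q10 is proposed by two different rules.
position = {"rule_firings": [
    ("q10", "ladder breaker"),
    ("q10", "threatens to attack a weak group"),
    ("c3", "takes a big point"),
]}
weights = {"ladder breaker": 5,
           "threatens to attack a weak group": 3,
           "takes a big point": 4}
move, reasons = choose_move(position, weights)
```

Such a program can show you "why" directly, because the reasons are the inputs to its choice rather than labels attached after the fact.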
You are totally missing the point, so, whatever.