Ootakamoku wrote: Increasing the rank for it mitigates the problem, hopefully to the point that it becomes irrelevant on the larger scale. In any case, the ideal situation would be to know the exact point loss from each move compared to the best possible move in the given situation, and to use that to judge the player's fuseki skill. However, having no such data available, I make do with what is almost binary data, but it still seems to yield accurate results with an increased sample size. Just like in go, we can play better or worse and anywhere in between, yet the end result is almost binary: you either win or you lose. Here we are working backwards: we know the move was almost perfect, or it was not, yet this distinction, multiplied by many others from different situations, adds up to a quite accurate prediction of eventual skill.
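The statistical claim here, that a noisy binary right/wrong signal converges on an accurate skill estimate as the number of problems grows, can be checked with a small simulation. This is just a sketch under an invented assumption: the player's "true skill" is modeled as their probability of finding the near-perfect move on any one problem.

```python
import random

def estimate_skill(true_skill, n_problems, rng):
    """Simulate a player whose chance of finding the 'almost perfect'
    move on each problem equals true_skill, and estimate that skill
    from the binary right/wrong record alone."""
    hits = sum(rng.random() < true_skill for _ in range(n_problems))
    return hits / n_problems

rng = random.Random(42)
true_skill = 0.7  # hypothetical: finds the best fuseki move 70% of the time

# With few problems the binary signal is noisy...
print(f"10 problems:    {estimate_skill(true_skill, 10, rng):.2f}")
# ...but with a large sample it converges on the true value.
print(f"10000 problems: {estimate_skill(true_skill, 10_000, rng):.3f}")
```

The standard error of such an estimate shrinks like 1/sqrt(n), which is why each individual near-binary judgment can be crude while the aggregate is still accurate.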
This is fine if you're evaluating a whole game, but when you are evaluating a position as open as a fuseki, you can't easily calculate all possible variations to determine whether a move wins, keeps the game in contention, or loses.
I think it would make more sense to classify each problem's various answers by rank. A 10k needs to play in the right corner, a 5k needs to discriminate between a pincer and an extension, and as skill goes up, finer distinctions are made.
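The rank-graded grading scheme above could be sketched as follows. All the answer labels and rank values here are invented for illustration: each candidate answer is tagged with the weakest kyu rank expected to find it, and a player's rank is estimated from the ranks their choices imply across many problems.

```python
# Hypothetical rank-graded answers for one fuseki problem.
# Ranks are in kyu (lower number = stronger player); values are invented.
problem = {
    "wrong corner":        30,  # even a beginner should avoid this
    "right corner, slack": 10,  # a 10k plays in the right area
    "extension":            5,  # a 5k separates extension from pincer
    "pincer (best)":        1,  # the finest distinction in this position
}

def rank_of_answer(problem, answer):
    """Return the kyu rank implied by the chosen answer."""
    return problem[answer]

def estimate_rank(implied_ranks):
    """Estimate a player's rank as the median of the ranks their
    choices imply, so one lucky or careless pick doesn't dominate."""
    ordered = sorted(implied_ranks)
    return ordered[len(ordered) // 2]

choices = ["extension", "right corner, slack", "extension",
           "pincer (best)", "extension"]
print(estimate_rank([rank_of_answer(problem, c) for c in choices]))  # → 5
```

Using a median rather than a mean keeps a single outlier answer from skewing the estimate, which matches the idea that only consistent fine distinctions indicate real skill.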
This doesn't seem to cover the 'fuseki' problems that are primarily about how to continue a particular joseki, though.