Cassandra wrote:
Tryss wrote:
One thing to understand is that the programs don't know the difference between a problem and a game. And that has consequences for how you approach the position
If this were true, I would expect the programs to perform better.
A couple of things. Many classical problems were, in fact, whole board problems, considering that a fight over the life or death of a corner could spill out into the rest of the board. However, if the purpose of the problem was to win the game, it might well be best to play somewhere else and possibly lose the fight in the corner. But this is not one of those problems, for two reasons. 1) The fight for life and death is huge, dwarfing anything else, at least at first. 2) Regular yose are part of the problem, not just life and death. So the purpose of the problem is to win the game.
Given that the purpose is to win the game, komi matters. In fact, to the best of human analysis, if Black gives 7½ pts. komi, she cannot win the game. So far in this discussion, the bots are assuming a 7½ pt. komi. In this note, Cassandra has shown a number of AI mistakes by White that allow Black to win the game, even giving komi. To wit:
Cassandra wrote:
Let me give an example, showing the two types of White replies during the growth of the hanezeki's tail:
{snip}
White should only play this way if she knew that she could win this semeai. This is something the programs usually do NOT do during their analysis.
{snip}
White should only play this way if she "knew" that she could win this semeai. This is something the programs usually do NOT do during their analysis (nor do the programs "know" that capturing the tail loses the game for White).
= = = = = = = = = = = = = = = = = = = =
I would like to argue that the AI does understand (short) future sequences that capture some of the opponent's stones (and so either make two eyes or connect to the outside), but is unable to handle a (longer) semeai correctly.
This is a well-known defect of today's top bots. Humans do better at large semeai. In fact, not too long ago I posted a mistaken review by Elf in which the semeai was not that big, but there was a tesuji at the end. Elf failed to see the human win until late in the actual play. (Or maybe the human loser resigned first; I don't recall exactly.)
My working hypothesis for why this happens is twofold. First, there is a horizon effect: you have to search the tree deeply enough to find the correct play. Humans excel at depth-first search, at least consciously. Second, searching the whole board is inefficient for finding locally deep plays. Usually whole board search is better than local depth-first search, which is one reason that today's bots play at superhuman levels. Another is that they are better than humans at whole board evaluation, but that is something humans can learn from them.
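The horizon effect can be sketched with a toy depth-limited search. This is not a Go engine; the "semeai" below is just a hypothetical forced six-move line whose winning tesuji only becomes visible at the last ply, so a search cut off one ply short evaluates the position as lost.

```python
def reaches_win(position, depth, children, is_win):
    """Depth-limited depth-first search: can a winning position be
    reached within `depth` plies from here?"""
    if is_win(position):
        return True
    if depth == 0:
        return False  # cut off: anything beyond this horizon is invisible
    return any(reaches_win(p, depth - 1, children, is_win)
               for p in children(position))

# Hypothetical semeai: a single forced line, six liberty-filling moves long,
# with the decisive tesuji only at the very end.
LINE = 6
children = lambda n: [n + 1] if n < LINE else []
is_win = lambda n: n == LINE

print(reaches_win(0, 5, children, is_win))  # False: the tesuji lies past the horizon
print(reaches_win(0, 6, children, is_win))  # True: one ply deeper finds it
```

The same position gets two different verdicts depending only on the search depth, which is exactly the trap a long semeai sets for a search budget spread over the whole board.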
Anyway, most go problems are designed for human solution, that is, for depth-first local search, even if the search takes in the whole board. If bots were trained on such problems, they could very likely do better than humans at them. But such training would take many thousands, maybe millions, of such problems, and they do not exist, nor do we know how to have computers construct them.
I think that in the future, go-playing programs will, like humans, utilize local search as well as whole board search. We know mathematically that it is more efficient to search independent regions of the board locally and combine the results of those searches into a whole board evaluation. That is how humans do well at the endgame.
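A back-of-envelope sketch of why decomposition pays off; the branching factor and depths here are illustrative assumptions, not measurements. If two regions are truly independent, each can be searched on its own and the settled local scores summed, instead of searching all interleavings of moves in the merged position.

```python
branching = 3          # assumed legal moves per region (illustrative)
depth_a = depth_b = 8  # plies needed to settle each region (illustrative)

# A global search must consider moves in either region at every ply,
# so the branching factors add and the depths add.
global_nodes = (2 * branching) ** (depth_a + depth_b)

# Local search settles each region separately, then combines the results.
local_nodes = branching ** depth_a + branching ** depth_b

print(f"global: {global_nodes:,} leaf nodes")  # trillions
print(f"local:  {local_nodes:,} leaf nodes")   # 13,122

# Combining the local results is then just addition of settled scores,
# e.g. a hypothetical +4 in one corner and -1 in the other:
total_score = 4 + (-1)
```

The catch, of course, is the caveat "truly independent": when move order between regions matters (who gets sente, tedomari, ko threats), a plain sum is no longer exact, which is why recognizing independent regions is the hard part.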
An unsolved problem is how to get computers to recognize independent regions of the board. Right now, programs continue to improve rapidly by improving their whole board evaluation. But eventually diminishing returns will set in, and my guess is that new programs will start to incorporate local search in order to improve. Not using local search is a known weakness, after all.