Bots that undo

For discussing go computing, software announcements, etc.
Mike Novack
Lives in sente
Posts: 1045
Joined: Mon Aug 09, 2010 9:36 am
GD Posts: 0
Been thanked: 182 times

Re: Bots that undo

Post by Mike Novack »

quantumf wrote:From the point of view of explanations, then, the AI will only be able to give 5k reasons. The MCTS aspect is essentially meaningless (from a tutoring point of view).

The program will occasionally give wrong reasons for the right move, and that pains me, but I can accept that the 5k explanations are still useful for someone aspiring to 5k.


Not quite. You don't know (nobody knows) the "level" of the reasons themselves. Perhaps the AI was as good at identifying all the things that a move did as a dan player. It was playing at the 5 kyu level not because it was only able to spot 5 kyu reasons but because it couldn't judge between them at better than a 5 kyu level. In other words, if move A has one set of reasons behind it and move B has some other set of reasons behind it, which set is more compelling in the context of this particular board position? That is a very different problem than "find all the reasons behind this move".

Now let's say you have an MCTS evaluator playing at the dan level. That means it is, on average, as good at picking the best move as a human 1 dan. So now we know the move made. We can ask an AI designed to "list the go reasons behind this move" to show us those reasons. We are no longer in the dark wondering "what did that move do?".

Why can't the other MCTS programs do this (why just MFOG)? Well, the others could do it if they had an AI designed to "find the go things that this move does". They would have to design that AI, write it, and test it. It was easy for Fotland to have MFOG 12 provide the "give reasons" facility because he already had an AI able to do that (as an important part of MFOG 11).
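The pipeline Mike describes can be sketched in a few lines. This is a toy illustration, not Fotland's actual code: `find_reasoned_moves` and `mcts_score` are hypothetical stand-ins for the rule-based reason finder and the MCTS evaluator, but the structure shows why the chosen move always comes with reasons attached.

```python
import random

def find_reasoned_moves(position):
    """Hypothetical stand-in for a rule-based 'go reason finder':
    returns candidate moves, each tagged with the go reasons it satisfies."""
    # Toy data; a real finder would derive these from the board position.
    return [
        ("D4", ["takes a big corner point"]),
        ("C6", ["extends from a stone", "approaches the corner"]),
        ("K10", ["takes the center"]),
    ]

def mcts_score(position, move):
    """Stand-in for an MCTS evaluator: returns an estimated win rate."""
    return random.random()  # a real engine would run playouts here

def choose_move_with_explanation(position):
    candidates = find_reasoned_moves(position)
    # The evaluator only ever sees moves the reason finder proposed,
    # so the chosen move always has at least one reason attached.
    best_move, reasons = max(candidates, key=lambda c: mcts_score(position, c[0]))
    return best_move, reasons

move, reasons = choose_move_with_explanation(position=None)
print(move, "because:", "; ".join(reasons))
```

The key design point is the ordering: because the reason finder runs *first* and filters the candidate set, the explanation comes for free, and "weird" moves are excluded by construction.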
quantumf
Lives in sente
Posts: 844
Joined: Tue Apr 20, 2010 11:36 pm
Rank: 3d
GD Posts: 422
KGS: komi
Has thanked: 180 times
Been thanked: 151 times

Re: Bots that undo

Post by quantumf »

I'm pretty certain that there is no AI, including MFOGO, that can give post-move justifications for a move. MFOGO will use its 5k AI to generate some moves, ask the 1d MCTS to evaluate them, and then select the best one. This way it can explain the move. I'd guess that the MCTS engine would be asked to evaluate some other moves too, in case a critical but non-heuristic move is the ONLY move. In this case the explanation would have to be of the form "ONLY move".

Asking a (5k) AI to explain a move selected by another engine (one that used statistical probability) is not only a very hard problem, but one I'd trust even less than you should trust my explanations for Lee Sedol's moves.

Re: Bots that undo

Post by Mike Novack »

quantumf wrote:I'm pretty certain that there is no AI, including MFOGO, that can give post-move justifications for a move. MFOGO will use its 5k AI to generate some moves, ask the 1d MCTS to evaluate them, and then select the best one. This way it can explain the move. I'd guess that the MCTS engine would be asked to evaluate some other moves too, in case a critical but non-heuristic move is the ONLY move. In this case the explanation would have to be of the form "ONLY move".


Getting much closer. OK, MFOG 12 first uses its AI to generate a set of plausible moves, which are then fed to the MCTS evaluator, so it doesn't have to do anything extra to be able to display the reasons behind whichever move was chosen. Since the set of moves considered all had reasons that the AI could find, it's 100% going to be able to give a reason. Fotland chose to do this (only consider moves for which his AI could find reasons) to prevent his program from making "weird" moves. When MFOG 12 makes a move, even a hopeless sort of move, as it will when far behind shortly before resigning, you should be able to see why that move had a chance (if you blundered your response to it). Since Fotland already had this AI (in MFOG 11) it cost him nothing extra.

But suppose it were not done that way. Suppose a program first used MCTS on all possible moves, selected the best, and then fed that to an AI like Fotland's. Usually that AI would be able to find go reasons for the move. After all, not all of the moves made by the other MCTS programs are weird, perhaps not more than 2 or 3 weird moves per game. So almost all of the time it would be able to display reasons.

I think what is causing confusion here is that, because the MCTS algorithm evaluated the move as best without referring to the set of go reasons behind the move, folks are assuming that those reasons were irrelevant to why the move was the best move.

It is precisely because we sometimes cannot see the go reasons behind the move made by a MCTS program that we consider the move made "weird". This is not unlike the situation looking at a game between strong pros and sometimes seeing a move that we don't immediately understand. Those moves are going to be far beyond the capabilities of a "reason finding AI". We have to remember that before MCTS came along several people and teams were working on both AI reason finders and AI evaluators. That development ceased a good way back. It would take a lot of people clamoring for "I want to see reasons and I'm willing to pay for that" (pay or donate) to get development of better "reason finders".
Polama
Lives with ko
Posts: 248
Joined: Wed Nov 14, 2012 1:47 pm
Rank: DGS 2 kyu
GD Posts: 0
Universal go server handle: Polama
Has thanked: 23 times
Been thanked: 148 times

Re: Bots that undo

Post by Polama »

Mike Novack wrote:
Not quite. You don't know (nobody knows) the "level" of the reasons themselves. Perhaps the AI was as good at identifying all the things that a move did as a dan player. It was playing at the 5 kyu level not because it was only able to spot 5 kyu reasons but because it couldn't judge between them at better than a 5 kyu level. In other words, if move A has one set of reasons behind it and move B has some other set of reasons behind it, which set is more compelling in the context of this particular board position? That is a very different problem than "find all the reasons behind this move".


Or the opposite is possible: that a 5 kyu AI would generate 15 kyu reasons but process more possibilities per second than a 5 kyu. Personally, I find that case more likely.

I don't think there's terribly much to "local" explanations. Individual moves are part of interesting sequences. When reviewing a professional game, I'm often left wondering why they protect a corner in gote. I understand that they "protected the corner", but from what? Would it have died? Ko? Lived small, with the point difference big enough to play gote for now? Would White have gotten too many forcing moves in?

"Black protects" is obvious. "White would get a ko" is better. "White gets a ko like this" better still, and "White gets a ko like this and here's why black doesn't have enough threats" better still. The MCTS in some sense knows the outcome of a black tenuki, but can't express it. The move generator knows you might want to protect, but without reading can't say exactly why.

Similarly with aji: it's easy to say "ok, there's a lot going on here, that's a good local shape." But the important thing is often "this 10 move sequence means White can't cut now" and "White here was sente when that cut was good and would have negated black's thickness after this followup." I can understand very easily that a reinforcing stone is valuable locally, but the question I need to answer to get stronger is why this reinforcing move, and what would I be up against if I tenuki.

None of this is to say a program couldn't generate relevant sequence diagrams or discuss life&death situations or what have you. And certainly knowing that a 3-dan algorithm played here in this position is enough to learn from without any additional text. But I don't think generating candidate moves based on the position and then seeing which move works means that the candidate move description is a valuable learning aid.

Re: Bots that undo

Post by quantumf »

Mike Novack wrote:Since the set of moves considered all had reasons that the AI could find, it's 100% going to be able to give a reason.


Agreed, but as discussed, this reason could be false.

Mike Novack wrote:But suppose it were not done that way. Suppose a program first used MCTS on all possible moves, selected the best, and then fed that to an AI like Fotland's. Usually that AI would be able to find go reasons for the move. After all, not all of the moves made by the other MCTS programs are weird, perhaps not more than 2 or 3 weird moves per game. So almost all of the time it would be able to display reasons.


Nope, all it can do is generate the usual 5k candidates and hope there's an overlap. If not, then there's a problem. A human might be able to fake it, generate some fake explanation, and hope that his audience isn't sharp enough to spot the trickery, but no way can a program do this convincingly. What you're suggesting is, I think, essentially impossible: the 5k AI needs to generate information that doesn't exist.

Mike Novack wrote:I think what is causing confusion here is that, because the MCTS algorithm evaluated the move as best without referring to the set of go reasons behind the move, folks are assuming that those reasons were irrelevant to why the move was the best move.


There's no confusion. No one disagrees that there might frequently or even usually be an overlap. But if I know that some explanations will be bogus, even if it's less than 1%, then I'm not sure how much faith I'd put into any explanation.

Mike Novack wrote:It is precisely because we sometimes cannot see the go reasons behind the move made by a MCTS program that we consider the move made "weird". This is not unlike the situation looking at a game between strong pros and sometimes seeing a move that we don't immediately understand. Those moves are going to be far beyond the capabilities of a "reason finding AI". We have to remember that before MCTS came along several people and teams were working on both AI reason finders and AI evaluators. That development ceased a good way back. It would take a lot of people clamoring for "I want to see reasons and I'm willing to pay for that" (pay or donate) to get development of better "reason finders".


On this we can at least agree.
emeraldemon
Gosei
Posts: 1744
Joined: Sun May 02, 2010 1:33 pm
GD Posts: 0
KGS: greendemon
Tygem: greendemon
DGS: smaragdaemon
OGS: emeraldemon
Has thanked: 697 times
Been thanked: 287 times

Re: Bots that undo

Post by emeraldemon »

I hesitate to jump into this thread because I've only skimmed the long arguments back and forth, but I just wanted to add that I personally do use Fuego to study and review games occasionally. It isn't that strong since I'm just running it on my laptop and not a big fast machine, but it often finds moves I hadn't considered and makes me think differently about positions, so I like to "ask its opinion" sometimes.

Fuego at least always reports the "main variation" that it found, and if you're curious about why it chose a particular move, it's easy to play a handful of moves against the bot and see what it thinks is the followup. Having an infinitely patient opponent who will respond to you as you move about in a game tree is quite nice.

I don't play chess, but I can't help but wonder how chess players feel about studying with a computer, since it's almost equivalent to having a pro on your machine at all times. I wonder if a player could take a Go-like approach, taking a big handicap from a strong computer (it starts without a queen or similar?), and just play it until you win 3 in a row, then decrease the handicap, etc. Would you learn quickly this way? Even though it can't tell you why a move is wrong exactly, it can show you refutations and point out where you lost most of your lead...

I kinda want to try this now... I must be a 25 kyu chess player, I wonder how quickly I would improve. I wonder if someone has tried this before. Is there a chess forum like L19? Hmmm...
Shaddy
Lives in sente
Posts: 1206
Joined: Sat Apr 24, 2010 2:44 pm
Rank: KGS 5d
GD Posts: 0
KGS: Str1fe, Midorisuke
Has thanked: 51 times
Been thanked: 192 times

Re: Bots that undo

Post by Shaddy »

quantumf wrote:But if I know that some explanations will be bogus, even if it's less than 1%, then I'm not sure how much faith I'd put into any explanation.


There's nearly no one I'd trust to be able to explain things this consistently. Considering that you have to be both correct about the move in the position, and able to tell your audience about all of the interesting positions arising therefrom... I certainly don't trust any amateur to that level of accuracy, but professionals may come close.
Boidhre
Oza
Posts: 2356
Joined: Mon Mar 05, 2012 7:15 pm
GD Posts: 0
Universal go server handle: Boidhre
Location: Ireland
Has thanked: 661 times
Been thanked: 442 times

Re: Bots that undo

Post by Boidhre »

emeraldemon wrote:I don't play chess, but I can't help but wonder how chess players feel about studying with a computer, since it's almost equivalent to having a pro on your machine at all times. I wonder if a player could take a Go-like approach, taking a big handicap from a strong computer (it starts without a queen or similar?), and just play it until you win 3 in a row, then decrease the handicap, etc. Would you learn quickly this way? Even though it can't tell you why a move is wrong exactly, it can show you refutations and point out where you lost most of your lead...

I kinda want to try this now... I must be a 25 kyu chess player, I wonder how quickly I would improve. I wonder if someone has tried this before. Is there a chess forum like L19? Hmmm...


There's a handicap mode in Fritz: https://www.chesscentral.com/Articles.asp?ID=331

Might be what you're looking for? I've played against it a few times; I'm rather bad at chess, so I can't give a good judgement on it other than that it gives you a version of the engine you can beat if you get the handicap setting right.
PeterPeter
Lives with ko
Posts: 285
Joined: Wed Oct 03, 2012 1:11 am
GD Posts: 0
Location: UK
Has thanked: 42 times
Been thanked: 52 times

Re: Bots that undo

Post by PeterPeter »

Lucas Chess has over 100 engines you can play against, with a steady progression in strength from almost-random to something stronger than Magnus Carlsen.
Regards,

Peter

Re: Bots that undo

Post by Mike Novack »

Polama wrote:
Or the opposite is possible: that a 5 kyu AI would generate 15 kyu reasons but process more possibilities per second than a 5kyu. Personally, I find that case more likely.


I think we could maybe rule that out. Let's consider MFOG 11 vs MFOG 12.

MFOG 11 had two AI parts. It had a "go reason finder" which it used to examine all legal moves and from those find the subset of moves which had go reasons (and what those reasons were; there could be more than one). Then this result (the set of plausible moves) was fed to a second AI, the evaluator, whose job was to look at the reasons behind each move and select the move judged best. The program was able to play at about 5-6 kyu.

MFOG 12 starts out the same way (possibly, even probably, the exact same AI). But instead it feeds that set of plausible moves to a MCTS based evaluator to judge the best. The program is able to play at about 1 dan on average hardware and about 3 dan on rather strong hardware (a stone or two weaker than the MCTS programs that are not limiting the set of moves examined to those for which an AI can find go reasons).

I would conclude from this that the limiting factor (5 kyu vs 1 dan) was not the AI "go reason finder" but the AI evaluator vs the MCTS evaluator. The AI "go reason finder" was good enough that its set of moves with go reasons included the 1 dan move.

I can guess what you are going to suggest to counter that. You want to say that while the AI found go reasons for the moves, it was missing some more important reasons, and wasn't including those in the set of reasons backing this move. Only the 15 kyu reasons. But then how can you explain that MFOG 11 was able to play at 5-6 kyu? It only had those 15 kyu go reasons to go by. Yes, some of those moves might have had stronger reasons, but the MFOG 11 evaluator wouldn't be seeing those reasons, randomly picking among the moves with 15 kyu reasons. Which is why I would say that the "find reasons" part had to be at least 5-6 kyu. And in fact it could be stronger than that, but presumably less than 1 dan based on what MFOG 12 can do (remember, MFOG 12 is using an MCTS evaluator, so unlike MFOG 11 it can end up playing stronger than the reasons; it isn't using them).

As somebody who designed software in my day (though not game playing software) I can tell you that I would think an AI to find "go reasons" much easier than an AI to try to evaluate "best of the lot" in the context of a particular board position. I can see how fairly simple rules might be able to tell whether this or that go reason applied to a move. Not so for the evaluation process.

Re: Bots that undo

Post by Polama »

Mike Novack wrote:
...
I can guess what you are going to suggest to counter that. You want to say that while the AI found go reasons for the moves, it was missing some more important reasons, and wasn't including those in the set of reasons backing this move. Only the 15 kyu reasons. But then how can you explain that MFOG 11 was able to play at 5-6 kyu? It only had those 15 kyu go reasons to go by. Yes, some of those moves might have had stronger reasons, but the MFOG 11 evaluator wouldn't be seeing those reasons, randomly picking among the moves with 15 kyu reasons. Which is why I would say that the "find reasons" part had to be at least 5-6 kyu. And in fact it could be stronger than that, but presumably less than 1 dan based on what MFOG 12 can do (remember, MFOG 12 is using an MCTS evaluator, so unlike MFOG 11 it can end up playing stronger than the reasons; it isn't using them).


I wish I knew more about MFOG, but from what I can find with a quick search, it appears that it previously used min-max with alpha-beta pruning. The reason finder generated plausible looking moves, and then it read out where those plausible moves would take it. The reasons could just be "this is nice shape", and "in close fighting anything could work" and with deep enough reading you could reach dan.

Essentially, I think we can divide the problem up into how well you describe an individual move, and how deeply you read based on those descriptions. If you can look at a position and instantly know "this move kills", then you don't have to read any deeper than one move. If you only see "this move reduces eyespace" you're on your way to a solution, but have to figure out what combination, if any, goes with that. If you only see "poking at a group can be helpful" you're very far from killing, but could still get there with a fairly exhaustive read.

The reason I suspect MFOG is at the low end of describing individual moves is that in my experience 15 kyu players don't read very exhaustively. They often play exclusively with surface-level descriptions: "that would harass their group. Not sure to what end or if it works for sure, but that group isn't alive yet, so let's poke at it". Similarly, I usually consider fewer than 5 candidates at a time, while I suspect MFOG 11 was generating dozens. Since we play at a comparable level, I suspect my candidate generation is stronger, because I'm dismissing many of its moves outright and still playing at the same level.

Re: Bots that undo

Post by quantumf »

Mike Novack wrote:As somebody who designed software in my day (though not game playing software) I can tell you that I would think an AI to find "go reasons" much easier than an AI to try to evaluate "best of the lot" in the context of a particular board position. I can see how fairly simple rules might be able to tell whether this or that go reason applied to a move. Not so for the evaluation process.


If it's that easy, I wonder why MFOGO 12 doesn't do that?

It's easy to test whether a move can be explained by your pre-programmed rules. If the move cannot be explained, then what do you do? I assert that there is nothing you can do, except fall back to showing a sequence of moves that the MCTS engine thinks will (statistically) lead to a win. That's probably OK, but is it meeting the goal of a "go explanation"?
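The fallback quantumf describes could be sketched like this. The names are hypothetical: `rule_based_reasons` stands in for the pre-programmed rules, and the principal variation would come from the MCTS engine.

```python
def rule_based_reasons(position, move):
    """Hypothetical pattern matcher standing in for the pre-programmed
    rules; returns whatever reasons the rules can attach to this move."""
    rules = {  # toy rule table; a real one would inspect the board
        "C3": ["secures the corner"],
        "R14": ["approaches the corner"],
    }
    return rules.get(move, [])

def explain_or_fallback(position, engine_move, principal_variation):
    """Try to explain the MCTS engine's chosen move with the rules;
    if none apply, fall back to showing the engine's expected sequence."""
    reasons = rule_based_reasons(position, engine_move)
    if reasons:
        return engine_move + ": " + "; ".join(reasons)
    return (engine_move + ": no rule applies; expected continuation: "
            + " ".join(principal_variation))
```

This makes quantumf's objection concrete: the fallback branch shows *what* the engine expects to happen, but not *why*, which may or may not count as a "go explanation".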
HermanHiddema
Gosei
Posts: 2011
Joined: Tue Apr 20, 2010 10:08 am
Rank: Dutch 4D
GD Posts: 645
Universal go server handle: herminator
Location: Groningen, NL
Has thanked: 202 times
Been thanked: 1086 times

Re: Bots that undo

Post by HermanHiddema »

Ok, a thought experiment.

Suppose I, a 4 dan player, was paired with a 15 kyu who must, on each move, suggest move candidates to me. Say he suggests 20 candidate moves on each turn, from which I must choose. What do you think our joint playing level would be?
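The thought experiment can be made concrete as a toy simulation: model each player as ranking moves by their true value plus a random error, with a weak (noisy) generator proposing candidates and a strong (less noisy) selector choosing among them. All numbers here are arbitrary illustrations, not claims about real ranks.

```python
import random

def noisy_top_k(true_values, noise, k):
    """A player with the given `noise` level ranks every move by its true
    value plus a random error, and offers their top k as candidates."""
    order = sorted(
        range(len(true_values)),
        key=lambda i: true_values[i] + random.gauss(0, noise),
        reverse=True,
    )
    return order[:k]

def joint_pick(true_values, gen_noise, sel_noise, k):
    """A weak generator (gen_noise) proposes k candidates; a stronger
    selector (sel_noise, smaller) chooses among them. Returns the true
    value of the move finally played."""
    candidates = noisy_top_k(true_values, gen_noise, k)
    chosen = max(candidates, key=lambda i: true_values[i] + random.gauss(0, sel_noise))
    return true_values[chosen]
```

Running `joint_pick` many times for various noise levels would show the question's crux: the pair can only be as good as the best move the generator actually offers, so the joint strength depends on how often the 15k's 20 candidates include the move the 4d would have played.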
Uberdude
Judan
Posts: 6727
Joined: Thu Nov 24, 2011 11:35 am
Rank: UK 4 dan
GD Posts: 0
KGS: Uberdude 4d
OGS: Uberdude 7d
Location: Cambridge, UK
Has thanked: 436 times
Been thanked: 3718 times

Re: Bots that undo

Post by Uberdude »

20 is a lot; I doubt a 15k could do that without resorting to mindlessly naming intersections. Hey, I would struggle to name 20 plausible moves a lot of the time. The mere act of being forced to think of many moves would likely mean a 15k wasn't 15k by the end of the game. When I think Malkovich players here are being too narrow-minded, I suggest they write down just 3 alternative move suggestions. As a test I have picked the most recent Malkovich game here and added my 20:

[go]$$Bcm23
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . e X . . . . . . . . . O . . . . . |
$$ | . . W . . X . . c . . X . O . O O j . |
$$ | . . . O p . . b . , . . . . X X X . . |
$$ | . . . . . a . . . . . . X . . . . . . |
$$ | . . O . . . k . . . . . . . . . . . . |
$$ | . . . d . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . l . . . . . . . |
$$ | . . . , . . . . . , f . O . . X s . . |
$$ | . . o . . . . . . . . . . r . . . . . |
$$ | . . . . . . . . . . . . . . . t . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . h . . . . . . . . . . . . . . . . |
$$ | . . . . q . . . . . . . X . . . . . . |
$$ | . . . O . n o . . , . . . . X X X . . |
$$ | . . m . . g . . . . . X . O . O O i . |
$$ | . . . . . . . . . . . . . O . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]

Re: Bots that undo

Post by HermanHiddema »

Uberdude wrote:20 is a lot, I doubt a 15k could do that without resorting to mindlessly naming intersections;


We can use another number of moves, of course. I don't know how many MFOG suggests to its MCTS engine.

But the idea really is that the 15k doesn't think about it much at all. Just point out 20 intersections that intuitively feel like they might be a reasonable move with no reading.