
Otake's masterpiece

Posted: Thu Jan 16, 2020 4:44 pm
by Knotwilg
In the previous 5 games of Otake Hideo I analyzed with KataGo, inevitably either player made a major "mistake", resulting in a big change in the winning probability. Not so in this game. Otake nearly always picks one of Kata's top 3 and steadily builds his advantage. On two occasions he makes a "mistake" of >5% but at that point he's already ahead by a lot.

Black's strategy is indeed one that we can expect bots to appreciate: take corner territory first, live in the center later.

His opponent, Takagawa, seems to lack fighting spirit in this game. He attacks Black's central group only after it has defended itself, which suggests he should have done so earlier. But perhaps he was simply trailing an Otake Hideo who was impossible to catch that day.

It is especially impressive to see how, in the late middle game, Otake maintained his lead while deploying the same order of play as Kata would.


Re: Otake's masterpiece

Posted: Thu Jan 16, 2020 6:54 pm
by Bill Spight
Thank you, Knotwilg. A very impressive game. :)

Given the state of go knowledge before the AI era, I think that Otake would not have criticized Takagawa's play before :w26:, which looks heavy against Black's solid wall. :)

I went over the Elf commentary, and counted only two definite errors (i.e., more than minor, but not blunders) by Otake, eight by Takagawa. Half of those errors by Takagawa came in the opening, of which Takagawa was considered a master. (Otake got that reputation, as well.) They were :w14:, :w20:, :w24:, and :w26:. :w14: might actually have been considered an advance over previous thinking because it was not an extension from White's four stones. White may have made the hanging connection, which has some eye potential, in anticipation of :w14:.

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 4:57 am
by John Fairbairn
Knotwilg wrote: Otake nearly always picks one of Kata's top 3
I have been finding this sort of pattern in most pro games I look at. I also find that Leela and KataGo (and FineArt and others when quoted in Japanese commentaries) can disagree with each other as much as with the pros. There are also many long sequences where both bots and pros play "the only move."

The bots are ultimately stronger, of course, but I'm not at all sure that that's because they "know" more than pros. Humans blunder, err or make inaccurate plays for lots of reasons that have nothing to do with go knowledge: time pressure, psychology, experimentation, a row with the wife/mistress, a tax demand, pressure of expectations, etc, etc, etc. Allowing for all that, bots and top pros seem in close synch.

I therefore infer that what pros show us (via games) about how to improve is hardly flawed at all. You can point to a change in behaviour in most pros as regards some fusekis and josekis as a result of what they have seen in bot play, but that's nothing new. People like Go Seigen and Kobayashi Koichi, as well as a whole raft of Shin Fuseki players, had the same inspiring effect, and that research has always been ongoing, never intermittent.

Where I have found a discrepancy is in what pros tell us in books about how to improve. I have been not just astonished but even appalled at the winrate differences for moves recommended by pros in constructed examples compared to the bot recommendations. I am thinking, for instance, of classics such as Katsugo Shinpyo as well as the many pot boilers of the "Next Move" type. I think we have to assume there that either the pro was not really involved (a hardly likely explanation in many cases, though a charge of driving without due care and attention might sometimes stick), or - the explanation I favour - the pro is telling us not the best move that a pro should make but the best move that an amateur should make, for various reasons, e.g. it's the best an amateur can cope with without further specialised knowledge, or it's the best to create good habits for an amateur, or it's the move that tends to work best in the hyperquick play typical of amateurs.

Do others have similar feelings?

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 6:17 am
by Kirby
Playing moves in the top three choices is admirable - it can mean there’s a similar sense of intuition.

But always playing in the top 3 is a lot different from always playing the top move. Let’s say that your move is always 1% worse than the best play. Over the course of 200 moves, those mistakes really compound. We think about analysis in terms of moves, but the overall win rate of sequences can have a big impact.

Compounding is *huge* in many areas of life - if you get 1% better every day, the impact over time is unbelievable.

Don’t get me wrong, pros are still super strong. But I don’t think we can say that their knowledge is comparable to top AI just because they can guess one of the top three move choices.
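To put that compounding in numbers (a toy calculation only - winrates are probabilities, not interest on capital, so this shows the shape of the argument rather than a model of how go evaluations actually combine):

```python
# Toy compounding arithmetic. Assumes, purely for illustration, that
# "1% better/worse" multiplies like interest; real winrates don't combine this way.

gain_per_day = 1.01 ** 365   # getting 1% better every day for a year
loss_per_game = 0.99 ** 200  # giving up 1% on each of 200 moves

print(f"1.01^365 = {gain_per_day:.2f}")   # about 37.78
print(f"0.99^200 = {loss_per_game:.3f}")  # about 0.134
```

On this crude reading, a player who leaks 1% per move keeps only about 13% of their starting "value" by move 200.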

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 6:28 am
by Knotwilg
Kirby wrote: Playing moves in the top three choices is admirable - it can mean there’s a similar sense of intuition.

But always playing in the top 3 is a lot different from always playing the top move. Let’s say that your move is always 1% worse than the best play. Over the course of 200 moves, those mistakes really compound. We think about analysis in terms of moves, but the overall win rate of sequences can have a big impact.

Compounding is *huge* in many areas of life - if you get 1% better every day, the impact over time is unbelievable.

Don’t get me wrong, pros are still super strong. But I don’t think we can say that their knowledge is comparable to top AI just because they can guess one of the top three move choices.
What I said (or meant to say) is that this game shows a high consistency with Kata's top 3, compared to previous games, where we would see multiple 10-15% deviations on either side. Usually the curve shows big jumps up and down; this time it was a smooth ramp up to 95%.

On another note, I'm not as confident in the 1% differences as you are or suggest we are. I see them as an error range, more than a clear advantage. Compounding may then not have the same effect as with interest rates. Consistently playing Kata's top choice doesn't mean you walk down the best possible path, precisely because bots play the odds, not a master plan.

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 6:58 am
by Kirby
I’m not confident in 1% either - it was an example. My point is that staying in the “top 3” doesn’t seem as impressive as staying in the top 1 - and I still stand by that.

If I’m clicking through a pro game on my phone, I can guess what the pro plays with some fudge factor (he’ll play A or B or C). I can probably be right a good percentage of the time with this approach. But that’s totally different than guessing the exact move every time.

And again, I think top 3 is impressive. Just not as much as John suggests.

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 7:18 am
by Knotwilg
John Fairbairn wrote: The bots are ultimately stronger, of course, but I'm not at all sure that that's because they "know" more than pros. Humans blunder, err or make inaccurate plays for lots of reasons that have nothing to do with go knowledge: time pressure, psychology, experimentation, ...
Yes, you can see that in certain choices, especially in battles or towards the end of the game, pressure or fatigue can lead to mistakes that would have been obvious to human commentators or pros in the analysis, with or without bots to point them out.

As you say, opening theory has been evolving and underwent major changes several times before the AI revolution.
John Fairbairn wrote: Where I have found a discrepancy is in what pros tell us in books about how to improve. I have been not just astonished but even appalled at the winrate differences for moves recommended by pros in constructed examples compared to the bot recommendations. I am thinking, for instance, of classics such as Katsugo Shinpyo as well as the many pot boilers of the "Next Move" type. I think we have to assume there that either the pro was not really involved (a hardly likely explanation in many cases, though a charge of driving without due care and attention might sometimes stick), or - the explanation I favour - the pro is telling us not the best move that a pro should make but the best move that an amateur should make, for various reasons, e.g. it's the best an amateur can cope with without further specialised knowledge, or it's the best to create good habits for an amateur, or it's the move that tends to work best in the hyperquick play typical of amateurs.

Do others have similar feelings?
I don't have an explanation but I find the one you offer too condescending from the pro point of view (not "you are condescending"). It's giving the pros the wrong kind of credit, one of hiding knowledge they have. I don't think they deliberately hid knowledge, rather that they genuinely wanted to pass on the governing idea, but in some cases the idea was just wrong, by today's bot standards. Due to a momentary lapse of reason? A not so clever go-between? Knowledge shared among pros that didn't seem worth challenging? That I don't know.

Certain techniques, like attaching to a shimari to force it into overconcentration and get outside influence, may be candidates for such "hidden knowledge" and instead simple extensions without the forcing sequence have been recommended. But those choices rarely amount to the big swings in % that we observe.

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 8:11 am
by John Fairbairn
Kirby wrote: But always playing in the top 3 is a lot different from always playing the top move. Let’s say that your move is always 1% worse than the best play. Over the course of 200 moves, those mistakes really compound. We think about analysis in terms of moves, but the overall win rate of sequences can have a big impact.
I think this is another of those cases of what I call irrational logic. It's logical in itself but you can get a very different "logical" result if you start somewhere else. Here I wouldn't (for my given context) start from the premise that a pro move is always 1% worse than the best. As I pointed out, the bots often differ among themselves (and differ within themselves according to how much time you give them - just like humans), and in addition the pro move is often equal to the bot's best move. Not to mention those cases Bill has pointed out where the human move seems to be better than the bots'.

I repeat also that I suggested the caveat that human moves are often inferior not because of inferior go knowledge but because of off-board factors, so we could exclude many such moves in assessing quality rather than quantity.

Yes, on average the bots are stronger and eventually there will be a cumulative impact. But is it really "big"? Again, not necessarily in terms of quantitative results for mechanical bots that never get tired or upset, but in terms of qualitative go theory - which is the only portion of much use to humans, I suggest.

I think it also worth remembering that much of what bots have so far demonstrated, such as the effectiveness of early overconcentration, has already been absorbed by pros.
Knotwilg wrote: I don't think they deliberately hid knowledge, rather that they genuinely wanted to pass on the governing idea, but in some cases the idea was just wrong, by today's bot standards. Due to a momentary lapse of reason? A not so clever go-between?
I too have never shared the view that Japanese pros have hidden their knowledge. They may withhold it to encourage pupils to think for themselves (and then reveal it), but usually I think it's just that they are not very interested in teaching. Still, for those who do like to teach, I do believe there is a tendency to tailor knowledge to the audience. I'm influenced in this by what was a formative experience for me. I was very, very good at chemistry in junior school. In high school I opted entirely for languages, which also meant that's what I did at university. But there my best friend was studying chemistry and I would peruse his books. Despite having what I thought was a decent starting knowledge of the subject, I was completely at sea. My friend explained that when you study chemistry at high school level you are - with not too much exaggeration - told to forget everything you learned in junior school and start again. Then when you go to university you are told to forget everything you learned in high school and start again.

I have since learned that that paradigm applies in other subjects. Language is not one of them, but piano technique may be one, and maybe art. Perhaps go is? (But of course I accept that somewhere in that mix, so-called experts can also just be plain wrong - just like expert bots with ladders :))

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 8:22 am
by jlt
Kirby wrote:If I’m clicking through a pro game on my phone, I can guess what the pro plays with some fudge factor (he’ll play A or B or C). I can probably be right a good percentage of the time with this approach. But that’s totally different than guessing the exact move every time.
"The pro chooses one of your top three moves" is not the same as "you choose one of the pro's top three moves". The first case doesn't exclude the possibility that one of your top three moves is so bad (from the pro's point of view) that the pro wouldn't even consider it.

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 8:34 am
by Kirby
@John
Semi-related responses
I think this is another of those cases of what I call irrational logic.
I'm sure.
Here I wouldn't (for my given context) start from the premise that a pro move is always 1% worse than the best.
Neither do I. I'll repeat that 1% is an example. The underlying point is that being a little bit wrong, in aggregate, ends up being not such a little bit.
As I pointed out, the bots often differ among themselves
Agreed!
(and differ within themselves according to how much time you give them - just like humans)
This is off topic, but since you're bringing it up, I'll note that it's not "just like humans". Humans experience fatigue, and move quality can degrade over time. This doesn't happen with a computer unless there are real physical hardware problems (e.g. processor overheating or something).
in addition the pro move is often equal to the bot's best move
I suppose I could argue that my moves are often equal to pro moves, too. But those edge cases can get me, can't they?

---

The main point
Yes, on average the bots are stronger and eventually there will be a cumulative impact. But is it really "big"?
In a sense, this is the same argument that I'm making, but in reverse. You ask "is it really 'big'"? I don't have enough data to answer that. But my counter question is, "is it really that 'small'"?

And I don't believe that some correlations between the top N choices of some bot is sufficient evidence to say much about how closely a human's thinking is aligned with a bot's, unless N=1.

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 8:35 am
by Bill Spight
John Fairbairn wrote:Do others have similar feelings?
Sadly, no. :sad: More below.
John Fairbairn wrote:
Knotwilg wrote: Otake nearly always picks one of Kata's top 3
I have been finding this sort of pattern in most pro games I look at.
As I pointed out years ago when people accused a go player of cheating because he almost always chose one of Leela's top three, that is an almost meaningless statistic. If winrates mean anything, then the winrate difference between the pro's play and the bot's top choice is a much better indication.
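The difference between the two statistics can be sketched in a few lines (every number below is invented, purely to illustrate how top-3 membership can look perfect while winrate losses pile up):

```python
# Hypothetical per-move analysis records: the bot's candidate moves with
# their winrates, plus the move the pro actually played. All values made up.
moves = [
    {"candidates": {"A": 0.52, "B": 0.51, "C": 0.45}, "played": "C"},
    {"candidates": {"D": 0.55, "E": 0.54, "F": 0.53}, "played": "D"},
    {"candidates": {"G": 0.60, "H": 0.50, "I": 0.40}, "played": "I"},
]

# Statistic 1: was the pro's move among the bot's top three?
top3_hits = sum(m["played"] in m["candidates"] for m in moves)

# Statistic 2: winrate given up relative to the bot's top choice.
losses = [max(m["candidates"].values()) - m["candidates"][m["played"]]
          for m in moves]

print(f"top-3 hit rate:    {top3_hits}/{len(moves)}")       # 3/3 - looks perfect
print(f"mean winrate loss: {sum(losses)/len(losses):.3f}")  # 0.090 - it isn't
```

A 100% top-3 hit rate coexists here with an average loss of nine winrate points per move, which is the sense in which the membership statistic is almost meaningless on its own.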
John Fairbairn wrote:I also find that Leela and KataGo (and FineArt and others when quoted in Japanese commentaries) can disagree with each other as much as with the pros. There are also many long sequences where both bots and pros play "the only move."
Well, when the pros and the bots pick the same play, the winrate difference is 0, eh? ;) "The only move" is another question, but not really relevant here. As for top bots quoted in Japanese commentaries, I rather suspect that the number of rollouts is not given, and that is important. The Elf team does not trust any play with fewer than 1500 rollouts, and, for publication, I would go further and not trust any play with fewer than 4k rollouts. (If you are making a real time commentary, you go with what you've got, but that's another question.)
John Fairbairn wrote:The bots are ultimately stronger, of course, but I'm not at all sure that that's because they "know" more than pros. Humans blunder, err or make inaccurate plays for lots of reasons that have nothing to do with go knowledge: time pressure, psychology, experimentation, a row with the wife/mistress, a tax demand, pressure of expectations, etc, etc, etc. Allowing for all that, bots and top pros seem in close synch.
In reviewing the Elf commentaries I have been impressed with how often pro plays have been off or nearly off Elf's radar. OC, getting zero or a handful of Elf's rollouts does not mean that a play is a mistake or unplayable, but it does not indicate that Elf and the pros are in close synch.
John Fairbairn wrote:I therefore infer that what pros show us (via games) about how to improve is hardly flawed at all.
{sigh}
John Fairbairn wrote:Where I have found a discrepancy is in what pros tell us in books about how to improve. I have been not just astonished but even appalled at the winrate differences for moves recommended by pros in constructed examples compared to the bot recommendations.
One of the promised advantages of human commentaries is that we can explain our plays. The trouble is, as your comment underscores, we are very good at coming up with explanations. ;) This is not such a bad thing, however. As humans come up with explanations for bots' choices, we are bound to come up with some good ones. But I think that even more important for improvement than verbal understanding is imitation, whether it is imitation of a top pro or a top bot or even a 2 kyu who is at least 5 stones stronger than you are. :)

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 8:40 am
by Kirby
jlt wrote:
Kirby wrote:If I’m clicking through a pro game on my phone, I can guess what the pro plays with some fudge factor (he’ll play A or B or C). I can probably be right a good percentage of the time with this approach. But that’s totally different than guessing the exact move every time.
"The pro chooses one of your top three moves" is not the same as "you choose one of the pro's top three moves". The first case doesn't exclude the possibility that one of your top three moves is so bad (from the pro's point of view) that the pro wouldn't even consider it.
I agree with that, and I was actually thinking about this point while I was driving to work this morning. Maybe I used a poor choice of example. But I'd still maintain that it's a lot easier to be aligned with a fudge factor of "being in the top 3", compared to always guessing the right move.

In the case where a pro's moves were always in the top 3, there *is* some positive merit - it means that the moves the pro played were aligned with the intuition the bot had. But let's assume the bot's choices were optimal - then always choosing the 3rd best move would result in a lower quality game than always choosing the best move. The question is, how much? And I don't think we have enough information to answer that. Just claiming that a pro is within the top 3 moves every time doesn't mean that their game is necessarily good.

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 8:41 am
by Kirby
Bill Spight wrote: As I pointed out years ago when people accused a go player of cheating because he almost always chose one of Leela's top three, that is an almost meaningless statistic. If winrates mean anything, then the winrate difference between the pro's play and the bot's top choice is a much better indication.
This! This summarizes the main point I want to convey, but Bill is much more elegant in writing than I am.

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 9:31 am
by Bill Spight
John Fairbairn wrote:Still, for those who do like to teach, I do believe there is a tendency to tailor knowledge to the audience.
Martin Buber said that the job of the teacher is to build a bridge between where the teacher is and where the student is. IMX, this seldom happens. Teachers who are experts often present their knowledge and don't bother with where the students are at all. For them education is a process of elimination. Others go to where they think their students are and teach a watered down or sugar coated and often incorrect version of the subject. That does not build a bridge, either. ;) Teaching is not easy.
John Fairbairn wrote:I repeat also that I suggested the caveat that human moves are often inferior not because of inferior go knowledge but because of off-board factors, so we could exclude many such moves in assessing quality rather than quantity.
That is so. However, inferior go knowledge has a life of its own. My recent explorations of the Elf commentaries have allowed me to trace some erroneous plays and understandings (according to Elf) back into previous centuries. They persisted by being taught or imitated. Since they were generally accepted, they were not challenged or refuted. (The analogy with bot self play is, I trust, not lost on the reader. :))

Re: Otake's masterpiece

Posted: Fri Jan 17, 2020 10:24 am
by TelegraphGo
Knotwilg wrote: On another note, I'm not as confident in the 1% differences as you are or suggest we are. I see them as an error range, more than a clear advantage. Compounding may then not have the same effect as with interest rates. Consistently playing Kata's top choice doesn't mean you walk down the best possible path, precisely because bots play the odds, not a master plan.
Any particular 1% difference might not be objectively better, but if you take a large number of them, I think the AI is going to be objectively correct about which play is better the majority of the time. Compounding them will absolutely have the same effect as interest rates. As an admittedly crude simplification, imagine that the AI's winrate estimate for any given position deviates from the winrate a perfectly consistent AI would assign by a symmetric random error. Bill seems to believe that the width of the statistically significant portion of this distribution is about 8%, but it could be much larger and the following would still hold true.

Then the majority of positions X that it thinks are 1% better than position Y will still be objectively better than position Y. Once you have any positive-expectation game like this, repeating it a large number of times will lead you to a large gain. If you don't believe me, try playing against an AI and taking back any move that the AI says drops >1%, and that you also think may have been a mistake. If you're anything like me, you'll be lost by move 100 :sad:
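That thought experiment can be run directly. The sketch below assumes (my numbers, not anyone's measurements) uniformly distributed true winrates and a symmetric ±4% evaluation error, i.e. the 8% band mentioned above; under those assumptions, whenever the noisy evaluator claims one position is at least 1% better, it turns out to be right far more often than not:

```python
import random

random.seed(0)
NOISE = 0.04  # symmetric +/-4% evaluation error: an 8%-wide band (assumed)

right = claims = 0
for _ in range(100_000):
    true_x = random.uniform(0.3, 0.7)   # true winrate of position X (assumed)
    true_y = random.uniform(0.3, 0.7)   # true winrate of position Y (assumed)
    est_x = true_x + random.uniform(-NOISE, NOISE)  # noisy evaluation of X
    est_y = true_y + random.uniform(-NOISE, NOISE)  # noisy evaluation of Y
    if est_x - est_y >= 0.01:           # evaluator claims X is >=1% better...
        claims += 1
        right += true_x > true_y        # ...and is X actually better?

print(f"correct on {right / claims:.0%} of its '1% better' claims")
```

The intuition is that most claimed 1% gaps come from true gaps much wider than the noise band, where the ranking cannot be wrong; only the near-ties get misordered.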