 Post subject: How much do strong bots agree on moves? Case study
Post #1 Posted: Tue Mar 31, 2020 6:51 pm 
Lives in sente

Posts: 757
Liked others: 114
Was liked: 916
Rank: maybe 2d
Case study analyzing AlphaGo Master vs Ke Jie future of go summit game 1.

I started this case study more thoroughly and with more playouts after idly playing around with KataGo in Lizzie and noticing that Kata was choosing the exact same moves as AlphaGo Master on quite a lot of moves in this game, including the precise ladder-protector on move 24, the entire invasion sequence from move 26 to 46, as well as many of the moves after that. I've also anecdotally noticed a lot of agreement between strong bots in other cases too, including lots of moves in open-space fights and other areas where it seems to me (at least, as a mid-dan amateur) that humans could easily have many possible choices. It's eerie how often strong bots agree on these moves.

So anyways, attached is an SGF taking a look at how often AlphaGo and Kata match in this particular game. In about the first 120 moves, except for move 6, AlphaGo always chose a move from Kata's top three, and it was only the third move twice - every other time was the top or second move. Hope someone finds this interesting!

SGF intro notes:
KataGo 30 blocks s2.4G analyzing AlphaGo Master moves in AlphaGo vs Ke Jie match game 1. (notes by lightvector as of March 2020)

It's not clear how strong AlphaGo Master is. There is a non-trivial chance that LZ and/or KataGo have matched or surpassed it already, and it seems to me quite likely that some of the Chinese commercial bots have (indeed, I'd be very surprised if Fine Art isn't well past where AlphaGo Master was).

How often does AlphaGo Master's move match KataGo's preference in this game? Very often! It's amazing how often such bots agree almost exactly with each other.

The reason for this could have to do with them being similar in strength, but we also know that even top human pros similar in strength could often prefer very different moves. Plus we know that Master was bootstrapped off of human data, whereas KataGo (despite having some Go-specific inputs) is still zero with respect to its training data.

So it also seems possible that what's going on is simply that we are seeing convergence on "correct" play. I'd guess such strong bots are capable of understanding the positions in this "simple" game very well ("simple" by bot standards, at least), and therefore they often agree because the moves are the correct moves. Maybe there would be more disagreement in "messy" games like you sometimes see in bot vs bot.

And what about the differences we still see? Keep in mind that the winrates for black go down pretty steadily in this game, and it's not too long before the game is basically over, so differences in move preferences could simply be differences in bot ideas on how to secure a won game, or simply differences in style, or in some cases differences in the order of moves that don't make that big of a difference.

-------------------------------

In the first 122 moves of the game:

AlphaGo move matches Kata's top preference: 48 times
AlphaGo move matches Kata's 2nd preference: 10 times
AlphaGo move matches Kata's 3rd preference: 2 times
AlphaGo move matches Kata's 4th preference: 1 time
AlphaGo move was not in Kata's top 4: never

Method: Ran the analysis in Lizzie until, eyeballing the top several moves, their visits summed to approximately 100k (sometimes a little more since I didn't catch it in time).

Obviously results could vary on a second run due to randomness.

On each turn, Kata's most notable preferred moves are noted with the approximate number of visits each received, along with a note like "others < 2k" indicating that every other move received less than 2000 playouts.
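For anyone who wants to repeat this exercise on another game, the bookkeeping is simple enough to script. A minimal sketch of the tally above - the data format, function name, and all moves/visit counts here are my own invention for illustration, not output from Lizzie or KataGo:

```python
# Sketch: count how often the played move matches the analyzing bot's
# 1st, 2nd, 3rd... preference, given per-position candidate lists.
from collections import Counter

def tally_agreement(positions):
    """positions: list of (played_move, candidates), where candidates is
    a list of (move, visits) already sorted by visits, descending."""
    tally = Counter()
    for played, candidates in positions:
        ranked = [move for move, _visits in candidates]
        if played in ranked:
            tally[f"choice #{ranked.index(played) + 1}"] += 1
        else:
            tally["not in list"] += 1
    return tally

# Toy example (made-up moves and visit counts, not from the game):
positions = [
    ("Q16", [("Q16", 62000), ("D4", 21000), ("C3", 9000)]),  # top choice
    ("D10", [("C10", 55000), ("D10", 30000)]),               # 2nd choice
    ("K10", [("Q3", 70000), ("R5", 20000)]),                 # off the list
]
print(tally_agreement(positions))
# Counter({'choice #1': 1, 'choice #2': 1, 'not in list': 1})
```

In practice you would fill `positions` from the analysis data recorded in the SGF comments.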




Attachments:
agkejie-kataanalysis.sgf [8.5 KiB]
Downloaded 1079 times

This post by lightvector was liked by 7 people: Dusk Eagle, Joaz Banbeck, MagicJade, Maharani, Uberdude, Waylon, zermelo
 Post subject: Re: How much do strong bots agree on moves? Case study
Post #2 Posted: Tue Mar 31, 2020 8:13 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
lightvector wrote:
Case study analyzing AlphaGo Master vs Ke Jie future of go summit game 1.

I started this case study more thoroughly and with more playouts after idly playing around with KataGo in Lizzie and noticing that Kata was choosing the exact same moves as AlphaGo Master on quite a lot of moves in this game, including the precise ladder-protector on move 24, the entire invasion sequence from move 26 to 46, as well as many of the moves after that. I've also anecdotally noticed a lot of agreement between strong bots in other cases too, including lots of moves in open-space fights and other areas where it seems to me (at least, as a mid-dan amateur) that humans could easily have many possible choices. It's eerie how often strong bots agree on these moves.

So anyways, attached is an SGF taking a look at how often AlphaGo and Kata match in this particular game. In about the first 120 moves, except for move 6, AlphaGo always chose a move from Kata's top three, and it was only the third move twice - every other time was the top or second move. Hope someone finds this interesting!


I think that this is a great idea. :) It's particularly interesting because Master was trained on human moves while KataGo was not.

BTW:

Quote:
In the first 122 moves of the game:

AlphaGo move matches Kata's top preference: 48 times
AlphaGo move matches Kata's 2nd preference: 10 times
AlphaGo move matches Kata's 3rd preference: 2 times
AlphaGo move matches Kata's 4th preference: 1 time
AlphaGo move was not in Kata's top 4: never


48/61 ≅ 79%

13(48/61) ≅ 10

3(48/61) ≅ 2

:)
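Spelled out, the arithmetic above checks whether the lower-rank counts are roughly what you would get by applying the same 79% top-choice rate to the moves still unmatched at each rank. A quick sketch, using only the counts from the tally earlier in the thread:

```python
# Rank-of-match counts from the post above: AlphaGo's move was KataGo's
# 1st/2nd/3rd/4th preference this many times (61 AlphaGo moves total).
counts = [48, 10, 2, 1]
total = sum(counts)                       # 61
p = counts[0] / total                     # 48/61 ≈ 0.79

print(f"top-choice agreement: {p:.0%}")   # prints "top-choice agreement: 79%"

# Apply that same rate to the moves not yet matched at each rank:
remaining = total - counts[0]             # 13 moves missed the top choice
print(round(remaining * p))               # 10, matching the 2nd-choice count
remaining -= counts[1]                    # 3 moves missed the top two
print(round(remaining * p))               # 2, matching the 3rd-choice count
```

So at each rank, roughly 79% of the still-unmatched moves land on the next preference - which is what the "13(48/61) ≅ 10" and "3(48/61) ≅ 2" lines are noting.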

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: How much do strong bots agree on moves? Case study
Post #3 Posted: Tue Mar 31, 2020 10:56 pm 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Interesting stuff! Highlights for me were also that KataGo agreed with:
- c7, the restrained two-space extension from the two-space high shimari, which is a bit unusual
- the peep and then the gote-but-thick cut at E10, which surprised many human commentators at the time; in fact KataGo wanted to cut rather than play the g3 kick
- the timing of the 'random bot sente move' 62
- 98, the Goldilocks extension, though KG's 1st choice being a cool attachment makes me think KG is stronger. But maybe AG Master would have found and played it if it was a close game and it needed to, but its winrate- rather than point-maximizing objective function means it went for the simpler extension.

For comparison it would be interesting to know how many times Ke Jie played KG's top or second move (don't bother with playout counts).

 Post subject: Re: How much do strong bots agree on moves? Case study
Post #4 Posted: Wed Apr 01, 2020 5:47 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Here are some measures of concordance for this game up through move 122. Elf is from the commentaries.

Top choice match

AlphaGo-KataGo 48
KataGo-Elf 46
AlphaGo-Elf 42
Elf-Ke Jie 29

Edit: Thanks to lightvector's post below, if I have counted correctly we have the following concordance measure for KataGo and Ke Jie for 60 moves. :)

KataGo-Ke Jie 29

----

Also, Elf's top choice was KataGo's 2d choice 9 times, its 3d choice 2 times, and off the charts 4 times. The winrate differences were slight, according to Elf, except for :w22:, where Elf did not make the ladder, probably because of misreading it.

Uberdude wrote:
Interesting stuff! Highlights for me were also that KataGo agreed with:
- c7, the restrained two-space extension from the two-space high shimari, which is a bit unusual
- the peep and then the gote-but-thick cut at E10, which surprised many human commentators at the time; in fact KataGo wanted to cut rather than play the g3 kick
- the timing of the 'random bot sente move' 62
- 98, the Goldilocks extension, though KG's 1st choice being a cool attachment makes me think KG is stronger. But maybe AG Master would have found and played it if it was a close game and it needed to, but its winrate- rather than point-maximizing objective function means it went for the simpler extension.


IMO, the C-07 extension should not be considered unusual. Back in the no komi days when White frequently made an ogeima enclosure, the two space extension was regarded as appropriate from it, because of the weakness of the enclosure.

KataGo and Elf agree about the peep and cut at E-10. :)

Elf does not play the bot sente for :w62:, but the B-05 hane. IMO this is a matter of style.

KataGo and Elf agree about the attachment for :w98:. :) However Elf regards it as only 0.2% better than the AlphaGo extension. IOW, the difference is noise.



Last edited by Bill Spight on Thu Apr 02, 2020 4:39 am, edited 1 time in total.

This post by Bill Spight was liked by: Uberdude
 Post subject: Re: How much do strong bots agree on moves? Case study
Post #5 Posted: Wed Apr 01, 2020 7:34 pm 
Lives in sente

Posts: 757
Liked others: 114
Was liked: 916
Rank: maybe 2d
Bill Spight wrote:
KataGo and Elf agree about the attachment for :w98:. :) However Elf regards it as only 0.2% better than the AlphaGo extension. IOW, the difference is noise.


It's probably worth drawing a careful distinction here. If your goal is to try to determine with confidence which move is "better", then yes, such a tiny difference you can regard as noise.

But also I think that for small differences, even though on any particular move you can't be sure if a move is "better" due to noise, on average across many moves and many games following the bot's preferences will probably lead to slightly stronger play than, say, choosing randomly among all moves within such a tolerance window of the top-preferred move. I would expect even such small differences to be positively correlated to some degree with move quality, even if highly imperfectly. (But of course stronger still might be to run many similarly strong bots and see how much they agree, and/or to increase the playouts yet more.)

Since Uberdude was interested, I ran the numbers for Ke Jie's side too. Same methodology, waiting until about 100k playouts. This time, though, I didn't bother to do all the work of recording the playout counts, and instead did the much less laborious thing of noting 1st choice, 2nd choice, etc., along with Kata's first choice when different. If there's no variation, then the first choice matches up. I'll let someone else count up the totals they like. :)

Also, for those people less familiar with bot stuff, keep in mind that while the number-of-playouts-based "choice index" is correlated with move quality, it's far from monotone with it. Especially past the 4th or 5th choice you're often mostly looking at a heavy dose of the policy prior mixed with a shallow evaluation rather than a ranking of the quality of the move.



Attachments:
agkejie-kataanalysis2.sgf [3.01 KiB]
Downloaded 968 times

This post by lightvector was liked by 2 people: Maharani, Uberdude
 Post subject: Re: How much do strong bots agree on moves? Case study
Post #6 Posted: Thu Apr 02, 2020 4:21 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
lightvector wrote:
Bill Spight wrote:
KataGo and Elf agree about the attachment for :w98:. :) However Elf regards it as only 0.2% better than the AlphaGo extension. IOW, the difference is noise.


It's probably worth drawing a careful distinction here. If your goal is to try to determine with confidence which move is "better", then yes, such a tiny difference you can regard as noise.

But also I think that for small differences, even though on any particular move you can't be sure if a move is "better" due to noise, on average across many moves and many games following the bot's preferences will probably lead to slightly stronger play than, say, choosing randomly among all moves within such a tolerance window of the top-preferred move. I would expect even such small differences to be positively correlated to some degree with move quality, even if highly imperfectly. (But of course stronger still might be to run many similarly strong bots and see how much they agree, and/or to increase the playouts yet more.)


On your last point about consulting different bots, in A Treatise on Probability Keynes pointed out that induction depends upon variation. I.e., repeatedly getting the same result does not necessarily strengthen that result if the conditions that produce it remain the same. (Unless, OC, your claim is that those conditions produce that result. In practice we usually are not aware of all the conditions.) This is a problem with self play, despite its current success. At some point the law of diminishing returns should kick in.

Yes, even small winrate differences are correlated with move quality. I have used that idea from time to time, without talking about it. For instance, I came up with my heuristic about the value of occupying the last open corner by noting that, in the Elf commentaries, even though the winrate difference was very often not enough to say that not occupying the last open corner was a mistake, occupying the last open corner was overwhelmingly preferred across a wide variety of positions.

Edit: And, OC, I noted that there was no such effect with two open corners. IOW, the rule is not just to occupy an open corner. It is a last move heuristic, one that has not been articulated before, to the best of my knowledge.

Edit2: Also, if I have counted correctly, Ke Jie played KataGo's top choice 29 times out of 60 moves, not counting :b1:. :)



This post by Bill Spight was liked by: Maharani
 Post subject: Re: How much do strong bots agree on moves? Case study
Post #7 Posted: Fri Apr 03, 2020 6:13 am 
Oza

Posts: 3647
Liked others: 20
Was liked: 4626
Quote:
Also, if I have counted correctly, Ke Jie played KataGo's top choice 29 times out of 60 moves


I have been working on a book "The First Teenage Meijin" which has just gone off for proof-reading. I was stimulated to do this for various reasons, but one was that the Japanese commentators have started seasoning their commentaries with AI data, and I have a long-standing interest in trends in go journalism.

Although the five games of the last Meijin match were all covered with that added spice, it was a very modest amount, and so I decided to go to town with Game 5 - partly because that was rated as a game of unusually high quality. I therefore checked every move with LeelaZero and KataGo (LZ and KG do differ quite often, but do indeed seem to confirm the quality of the human play; all the results are in the book). But there are a couple of teasing insights.

Cho U had perfect match with a bot 79 times out of his 126 moves. He had a very close match, which may be interpreted as matching with the computers’ second or third choices, 18 times.

Shibano Toramaru had a perfect match 88 times out of his 126 moves, and very close match 12 times.

So both players were keeping in very close step with the bots for about 80% of the time. What is fascinating, though, is that both players seemed either to get an exact match or to go off the rails significantly. In other words there were rather few cases where they were choosing the second or third best move.

Furthermore, it seemed to me that, whenever the humans went astray, they usually did so for several moves at a time. It was as if they had lost the flow of the game. On top of that, Shibano mentioned “flow” several times in his own comments. The most significant was perhaps when he revealed that he studied by playing through new games quickly either to suss out new moves or to get a feel for the flow of the game.

Flow is not a new concept in go, but I have long felt that it has been underestimated.

Because of my interest in go journalism, I long ago noted that modern commentaries have in some cases become like bloatware. There is a lot to be said for the old style of commentary where very few comments are given, but those that are given adumbrate the flow of the game. My favourite has always been a Shusai commentary which was one number and one word in Japanese (128 yoshi - 128 was good). It was actually surprisingly helpful, simply because it marked a significant bend in the river.

Because of that experience, I learned to take special note of a fairly common phrase in go texts, ichidanraku (an ordinary Japanese word rather than a technical term), which is used to mark a "pause" in the flow of the game. I introduced it (as "pause point") in the Go Wisdom indexes in Genjo-Chitoku and Games of Shuei because I thought it was very important. My encounters with Shibano's views on flow and AI have strengthened my feelings about this importance.

Just to prime readers' own thoughts about this concept, I believe that debates over whether bots or players agree or disagree on a particular move can be misleading. It is very often more important to judge whether a player's (or a bot's) moves agree with each other. In other words, do they have a consistent flow? Game 5 of the 44th Meijin illustrates that thinking nicely in Shibano's case, and may explain why he won - he said he started off feeling the flow of each game was awkward, but he finally got on top of it.


This post by John Fairbairn was liked by: SoDesuNe
 Post subject: Re: How much do strong bots agree on moves? Case study
Post #8 Posted: Fri Apr 03, 2020 6:40 am 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
I have heard some Japanese professionals mentioning "flow" or "natural flow (like water)" since I started to play go. Now you mention it. Never has anybody explained.

What IS flow?

My definition would be "sequence of optimal moves" but this tells us nothing in practice because we do not know which moves are optimal during the opening and middle game. (Belief in bots or in a few agreeing professionals also does not provide any confirmation of optimal move choice.)

Is flow more than pretence of knowing without being able to explain? If so, what (better than a call for belief) is it?

 Post subject: Re: How much do strong bots agree on moves? Case study
Post #9 Posted: Fri Apr 03, 2020 8:06 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
RobertJasiek wrote:
I have heard some Japanese professionals mentioning "flow" or "natural flow (like water)" since I started to play go. Now you mention it. Never has anybody explained.


Actually, I have. And to you, as I recall. But that was about the flow of the stones. You can see the positions of successive moves changing in either a linear or circular direction. Whether that apparent movement is natural or not is another question. ;)

RobertJasiek wrote:
(Belief in bots or in a few agreeing professionals also does not provide any confirmation of optimal move choice.)


Well worth repeating. Confirmatory evidence is weak.


 Post subject: Re: How much do strong bots agree on moves? Case study
Post #10 Posted: Fri Apr 03, 2020 8:35 am 
Oza

Posts: 3647
Liked others: 20
Was liked: 4626
Quote:
I have heard some Japanese professionals mentioning "flow" or "natural flow (like water)" since I started to play go. Now you mention it. Never has anybody explained.


I haven't explained it, but I did give two pointers which seem to point the way, and in a very different direction from your definition: consistency and pause points. These terms often come up in the same context as flow in Japanese, so I'm sure there's a link.

But I think that could also be approached in terms of "temperature". Although that's all Klingon to me, it does seem that a sustained level represents a flow, a sudden fluctuation is a break in the flow, and a drop to zero equates to a flow point.

That doesn't seem very useful in practice, though - do you learn more by watching the pitcher or the radar gun? Most baseball experts focus on the pitcher's action, I believe.

In go, it seems to me, the main point is consistency of plan. That implies not just a flow from one move to the next, but also must include consistency with previously played stones. That in turn means knowing why earlier stones were played. The best, maybe the only, way to understand that - the plan - is to get it from the horse's mouth. That means reading commentaries in which the players themselves tell you what they were trying to achieve. With the demise of the English Go World there is virtually nothing that offers such commentaries in English. There are now tons of videos or comments on forums, but these are almost all by amateurs. It is my (vast) experience that even very strong amateurs talk about very different things from pros, or put different accents on the same words. It might be painful to accept that, but it should be uncontroversial as we can each see it starkly in our own respective professional fields. Compare what Christy Mathewson has to say about pitching with what the likes of John Buck, or even non-pitching pros like Joe Morgan, say. (I mention them solely because I aim to settle down and re-watch lots of old baseball DVDs while stuck at home, and the Buck/Morgan combo is my absolute favourite.)

 Post subject: Re: How much do strong bots agree on moves? Case study
Post #11 Posted: Fri Apr 03, 2020 8:47 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
John Fairbairn wrote:
Cho U had perfect match with a bot 79 times out of his 126 moves. He had a very close match, which may be interpreted as matching with the computers’ second or third choices, 18 times.

Shibano Toramaru had a perfect match 88 times out of his 126 moves, and very close match 12 times.

So both players were keeping in very close step with the bots for about 80% of the time. What is fascinating, though, is that both players seemed either to get an exact match or to go off the rails significantly. In other words there were rather few cases where they were choosing the second or third best move.


I don't know if that means going off the rails, but one possibility is that the bots' 2d and 3d choices (N.B. not the 2d or 3d best moves) may have relatively few rollouts, and thus be relatively more influenced by their initial evaluations. We can't tell without looking. We know that their initial evaluations are different from those of humans.

Quote:
Furthermore, it seemed to me that, whenever the humans went astray, they usually did so for several moves at a time.


Was that true of both players at the same time? If they both did for more than a few moves, then that suggests a shared understanding, or misunderstanding, as the case may be.

Quote:
It was as if they had lost the flow of the game.


Not sure what you mean by the flow of the game, here. But I have observed that when both players avoid a bot's top choices for a while, it often indicates that they have a shared blind spot.

As for flow in one sense, it seems to me that the bots' games have less flow than human games, because the bots tenuki so often.


 Post subject: Re: How much do strong bots agree on moves? Case study
Post #12 Posted: Fri Apr 03, 2020 9:03 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
John Fairbairn wrote:
Quote:
I have heard some Japanese professionals mentioning "flow" or "natural flow (like water)" since I started to play go. Now you mention it. Never has anybody explained.


I haven't explained it, but I did give two pointers which seem to point the way, and in a very different direction from your definition: consistency and pause points. These terms often come up in the same context as flow in Japanese, so I'm sure there's a link.

But I think that could also be approached in terms of "temperature". Although that's all Klingon to me, it does seem that a sustained level represents a flow, a sudden fluctuation is a break in the flow, and a drop to zero equates to a flow point.


It seems to me that a "pause point" indicates a local drop in temperature. :)

John Fairbairn wrote:
In go, it seems to me, the main point is consistency of plan. That implies not just a flow from one move to the next, but also must include consistency with previously played stones.


IIUC, the bots don't make plans. They live in the moment, and go with the flow. Like hippies. :cool: :lol:


 Post subject: Re: How much do strong bots agree on moves? Case study
Post #13 Posted: Fri Apr 03, 2020 9:20 am 
Oza

Posts: 3647
Liked others: 20
Was liked: 4626
Quote:
IIUC, the bots don't make plans. They live in the moment, and go with the flow. Like hippies


I did say consistency of plans, not plans, was the main point, and I was talking about human commentators.

But there is a sense in which we could say bots make plans. If we accept that a style is the pattern shown when a player repeatedly makes choices when he has multiple moves to choose from, I think we could argue that a computer has a style, even when we can't ourselves describe it. I think we can also argue that a style runs on parallel tracks with a plan, given that plans are also choices.

There may be more eddies in the bots' flows, but there is still a flow, a directionality.

Hippies went with the flow in more than one sense. They tried so hard to be different, so how come they all ended up looking like each other?

Your question about Cho and Shibano going off the rails separately or together: usually together, if it lasted more than a few moves. It was also noticeable (as I recall) that if one player made a mistake (according to the bots), the other player did not necessarily also get derailed, and so went on to punish the mistake with a reasonable move - but he did not often punish it as strongly as the bot. Again, we could say he was affected (somewhat) by the discontinuity in the flow.

 Post subject: Re: How much do strong bots agree on moves? Case study
Post #14 Posted: Sat Apr 04, 2020 3:56 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Speaking of bot agreements and Shibano, I saw a game between him and Elf online this morning. I played through it with LZ running on my phone (a few hundred playouts of a 15-block network). I was struck by how many times in the opening and middlegame my phone agreed that Elf's move was good (and Shibano's not, though I didn't have Elf's view on that, other than that my phone would punish it with the same move as Elf and see its winrate go up), particularly in the sort of open positions that rely more on positional judgement (which bots are very good at) rather than close-combat semeai/ladder fighting (where even I as 4d can find mistakes in LZ phone at low playouts). So I wouldn't be sure my phone could beat Shibano (particularly if he knows he is playing a phone, not a powerful computer, so maximizes the need for deep reading), but I am pretty sure my phone's positional judgement instinct is better than his.

For example, Shibano played the low extension/pincer of 1, but that wasn't on phone LZ's (henceforth PLZ) radar at low playouts (the high one was). PLZ agreed with the 5th-line cap, a shape I've seen in plenty of bot games. This sets up a miai for white to manage his group by either pressing against the corner or the top side. Shibano chose to defend the corner, but PLZ says this was a mistake and he should have played at 4, which Elf and PLZ agreed on - a wrong direction-of-play or flow decision from Shibano. PLZ wanted black to save 1, but Shibano played on the right side; PLZ agreed with Elf's moves there, which got sente, and then with 10 too, which completes the punishment of black not playing at 4 himself. Shibano probably thought that it was ok to allow white 10 because his 9 had follow-ups against the white groups above and below, so he invaded at r7 next, but the bots' judgement is that that and the top aren't miai - the top is slightly more valuable, but that's a hard thing for even top humans to judge.

Click Here To Show Diagram Code
[go]$$Bc
$$ +---------------------------------------+
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . X . |
$$ | . . X O . . . 1 . . 0 . . X . 7 X O . |
$$ | . . X O . . . . . 4 . . . . . X O 6 . |
$$ | . . . O . . . 2 . . . . . . . . . . . |
$$ | . X . . . . . . . . . . . . . . O . . |
$$ | . . 3 . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . 8 . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . , . . . . . , . . . . 9 , 5 . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . O . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . O . . |
$$ | . . . O . . . . . , . . . . X , . . . |
$$ | . . . . . . . . . . . . . . . X . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ +---------------------------------------+[/go]


P.S. This shape of the 10 kosumi to finish off an ignored knight press is one I see time and again with bots, and it wasn't part of my go vocabulary before. The jump down to the 2nd line has a shape problem, with the peep and then the attach into the knight moves trying to give you an inefficient empty triangle at a.

Click Here To Show Diagram Code
[go]$$Wc
$$ +---------------------------------------+
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . 1 . . . . . . . X . |
$$ | . . X O . . . X . 3 2 . . X . X X O . |
$$ | . . X O . . . . a O . . . . . X O O . |
$$ | . . . O . . . O 4 . . . . . . . . . . |
$$ | . X . . . . . . . . . . . . . . O . . |
$$ | . . X . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . O . . |
$$ | . . . . . . . . . . . . . . . . . . . |[/go]


This post by Uberdude was liked by: Bill Spight