 Post subject: Re: This 'n' that
Post #321 Posted: Thu Jun 22, 2017 1:29 pm 
Honinbo

Posts: 9545
Liked others: 1600
Was liked: 1711
KGS: Kirby
Tygem: 커비라고해
Bill Spight wrote:
I may be wrong, but my impression is that neural networks generalize from what they are trained on, and so they can produce some new things from time to time.


Here's my understanding of how AlphaGo works, described in layman's terms - at least the Fan Hui version, on which the Nature paper was based:

Step 1.) Train a policy network to construct a function that can predict the moves a strong player makes. Initially, this is done with supervised learning - give it a bunch of high-dan player games, and adjust the network's weights so that you end up with a (non-linear) function that predicts the next move in a new high-dan player game.
Step 2.) Improve the policy network through reinforcement learning. To do this, have the latest version of the policy network (A) play against an older version (B). See who wins. Update the weights of the policy network's function, giving positive value for a win and negative value for a loss.
Step 3.) Train a value network, not with sample data like in Step 1, but directly by playing games like in Step 2: At a random board position, predict who will win the game. Use the policy network from Step 2 to play out the rest of the game, and see who won. Just like before, if the prediction was correct, add positive value to the weights; if the prediction was wrong, add negative value.
Step 4.) Combine the policy network, value network, and Monte Carlo Tree Search: A tree is constructed, starting at the root board state. At each node in the tree, the policy network gives a prior probability that each of the candidate moves will be good (e.g. 62% chance I should play move X). Then you can traverse the tree to search for the best outcome. The outcome is defined by a linear combination of the value given by the value network PLUS the outcome of a Monte Carlo simulation from that point in the tree. How much weight to give to MCTS vs. the value network is not clear to me.
Step 5.) Profit (beat Lee Sedol, Ke Jie, earn millions, and start the robot revolution).

So anyway, this allows for generalization to occur, as Bill suggests. Fundamentally, the program still does a search. But the breadth of actions from a given state that are likely to lead to a good result is greatly reduced by the policy network (which has been trained first on training data, and then refined by playing against itself). And leaf evaluation combines the trained value network and Monte Carlo playouts from that position. The neural networks themselves basically produce non-linear functions with weights that have been adjusted through training. A totally new situation and board position can be fed to those functions to produce a result.
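
To make Step 4 a bit more concrete, here is a minimal sketch in Python of how a leaf could be scored by mixing the value network's estimate with a playout result. The function and method names, and the mixing weight, are illustrative assumptions on my part, not DeepMind's actual code:

Code:
def evaluate_leaf(position, value_net, policy_net, mixing=0.5):
    # Mix the value net's win estimate with one playout's result.
    # mixing=0.5 weights them equally; purely an illustrative sketch.
    v = value_net(position)            # estimated P(win) for the side to move
    z = rollout(position, policy_net)  # 1.0 if the playout was won, else 0.0
    return (1.0 - mixing) * v + mixing * z

def rollout(position, policy_net):
    # Play the game out with the (fast) policy network and score the result
    # from the point of view of the side to move at the start.  All methods
    # on `position` are hypothetical stand-ins for a real Go implementation.
    side = position.side_to_move()
    while not position.is_terminal():
        position = position.play(policy_net.sample_move(position))
    return 1.0 if position.winner() == side else 0.0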

This is basically my understanding of how things work. Please feel free to correct any misunderstandings that I have, because I'm interested in learning more about it, too.

_________________
be immersed


This post by Kirby was liked by: Bill Spight
 Post subject: Re: This 'n' that
Post #322 Posted: Thu Jun 22, 2017 2:21 pm 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
Baywa wrote:
Now, in order to train such a network you don't have to feed it all possible 10^100 or so board positions. This is the whole point of NNs! Somehow (by magic, or better by variants of the gradient descent method) it learns from a far smaller number of training examples.

The NN can only give answers based on the data it received. The point is, IMO this answer will not be "globally" correct (except for the opening), there is simply not enough data for this. It will contain generalizations, localizations, and simplifications similar to human intuition. For local shapes, this can be quite accurate because a distilled, generalized view can emerge for them (though it will still blunder from time to time without search), but for global strategy, the effect of one part of the board on another, sente, attacking maneuvers, etc., you need search. Even for local fights it can only give options to search on, not correct answers (at pro level). But you CAN search, if you have an NN for pruning! This is the real innovation here.

One example is the Lee Sedol match, game 3, from move 16 (IIRC) onward. This is one of the rare cases where AlphaGo "commits" itself to a line that could end the game with a misread. This is different from the flexible and souba style it normally plays. IMO such commitment is only possible with very deep search. This is also what Lee Sedol concluded after the match - feeling (his NN) is not enough; you need to read out everything. Playing at that level is simply not possible otherwise.

(A minor correction: it seems the value net is simply used to refine the evaluation, averaged into the MC result, not for optimization as I guessed. -- I now see Kirby already corrected that.)


Last edited by moha on Thu Jun 22, 2017 2:25 pm, edited 1 time in total.
 Post subject:
Post #323 Posted: Thu Jun 22, 2017 2:22 pm 
Honinbo

Posts: 8859
Location: Santa Barbara, CA
Liked others: 349
Was liked: 2076
GD Posts: 312
Hi Kirby,
Quote:
How much weight to give to MCTS vs. the value network is not clear to me.
My understanding is that anything that's not explicitly spelled out in their paper(s) -- i.e. anything that's implementation dependent, or "user" adjustable -- is part of DM's "secret sauce". And that, plus the massive resource requirements (custom TPUs, other custom hardware, massive power supply, etc.), is why AG has a significant (?) lead over its nearest machine rivals (DeepZen, etc.).

Seems they'll continue to develop new and improved methods for other fields (e.g. medicine).

 Post subject: Re: This 'n' that
Post #324 Posted: Thu Jun 22, 2017 2:27 pm 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
Kirby wrote:
How much weight to give to MCTS vs. the value network is not clear to me.

I recall seeing a weight of 0.5 mentioned somewhere.

 Post subject: Re: This 'n' that
Post #325 Posted: Thu Jun 22, 2017 3:24 pm 
Dies in gote

Posts: 39
Liked others: 40
Was liked: 10
moha wrote:
Baywa wrote:
Now, in order to train such a network you don't have to feed it all possible 10^100 or so board positions. This is the whole point of NNs! Somehow (by magic, or better by variants of the gradient descent method) it learns from a far smaller number of training examples.

The NN can only give answers based on the data it received. The point is, IMO this answer will not be "globally" correct (except for the opening), there is simply not enough data for this.
The question of whether a method is globally correct (that is, will always give the best answer) is pretty theoretical and practically irrelevant. You can only prove that a move is not optimal by finding a better move. How do you find such a move if the machine beats you every time? Especially for opening moves this is very difficult to determine. From a practical standpoint, the self-learning AlphaGo plays very good moves. But it is more than likely that AlphaGo still does not play the optimal move.
Quote:
It will contain generalizations, localizations, and simplifications similar to human intuition.
Of course! But that's the point of the AlphaGo architecture. However, by playing itself many, many times and learning from those games, it gains new intuition.
Quote:
For local shapes, this can be quite accurate
Maybe just the opposite! Those local situations with crosscuts and shortages of liberties require heavy reading.
Quote:
because a distilled, generalized view can emerge for them
Sorry, I don't understand that. ...
Quote:
but for global strategy, the effect of one part of the board on another, sente, attacking maneuvers, etc., you need search.
Well of course you do. But what would you do without intuition? Searching for a needle in a haystack...
Quote:
Even for local fights it can only give options to search on, not correct answers (at pro level).
Forget your notion of correctness! (see above) Even local losses in many cases turn out to be globally good. AlphaGo and skilled human players show that in many games. Of course that requires good reading but also good intuition to even consider.
Quote:
But you CAN search, if you have an NN for pruning! This is the real innovation here.
Now you're with me. Did we go in circles? Sorry, I have to stop here. I still have to read up on that part about Lee Sedol's game 3.

_________________
Couch Potato - I'm just watchin'!

 Post subject: Re:
Post #326 Posted: Thu Jun 22, 2017 5:45 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
EdLee wrote:
My understanding is that anything that's not explicitly spelled out in their paper(s) -- i.e. anything that's implementation dependent, or "user" adjustable -- is part of DM's "secret sauce".


I have no quarrel with the AlphaGo team. However, I have a certain disappointment with computer science papers in general. I have written a couple of papers on the mathematics of go, which got published in computers and games collections. When I made use of a computer program, I included it in an appendix. My programs were in Prolog, and I wrote them to be human readable. :) Quite often computer science papers include pseudocode, which is also human readable. However, I often found that the pseudocode was not enough for me to verify the claims that papers made. I shrugged it off because I am not really a computer scientist. Recently, though, I have heard of some research where computer scientists other than the authors were also not able to verify the results in some papers. In at least some cases the authors stated that they had not, in fact, included the tweaks necessary to produce the results in their papers, and were not going to make them public. This practice seems to be quite common. Sorry, but irreproducible results do not science make, IMO. :(

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.


Last edited by Bill Spight on Thu Jun 22, 2017 6:31 pm, edited 1 time in total.
 Post subject: Re: This 'n' that
Post #327 Posted: Thu Jun 22, 2017 5:46 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
moha wrote:
Kirby wrote:
How much weight to give to MCTS vs. the value network is not clear to me.

I recall seeing a weight of 0.5 mentioned somewhere.


Yes, I have seen that, too. :)

Edit: To be clear, I believe that they average the MC playout results with the value network results, not the results of the MC tree search.

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.


Last edited by Bill Spight on Thu Jun 22, 2017 6:23 pm, edited 1 time in total.
 Post subject: Re: This 'n' that
Post #328 Posted: Thu Jun 22, 2017 6:16 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Kirby wrote:
Step 3.) Train a value network, not with sample data like in Step 1, but directly by playing games like in Step 2: At a random board position, predict who will win the game. Use the policy network from Step 2 to play out the rest of the game, and see who won. Just like before, if the prediction was correct, add positive value to the weights; if the prediction was wrong, add negative value.
Step 4.) Combine the policy network, value network, and Monte Carlo Tree Search: A tree is constructed, starting at the root board state. At each node in the tree, the policy network gives a prior probability that each of the candidate moves will be good (e.g. 62% chance I should play move X). Then you can traverse the tree to search for the best outcome. The outcome is defined by a linear combination of the value given by the value network PLUS the outcome of a Monte Carlo simulation from that point in the tree. How much weight to give to MCTS vs. the value network is not clear to me.


As moha points out, I believe that they weight each equally. Since they both produce probabilities, I suppose that they use the geometric mean, but I don't know.

From what you say in 3) it sounds like the value network gives an estimate of the probability of winning, given AlphaGo vs. AlphaGo. As I have stated elsewhere, the MC probabilities are those of semi-random player vs. semi-random player, which is not the situation at hand. Why, then, use the MC probabilities at all, since they are inherently inaccurate? (I think I could get rich by betting against the MC probabilities. ;)) The value network may produce play that is too "honest", particularly if you are a bit behind. Then you need to play for errors, and using an intermediate value may help to do that. Another possible advantage is what moha points out, that the MC probabilities are based upon the actual position at hand, not some aggregate of more or less similar positions. :)
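
For concreteness, here is a tiny numerical sketch (Python, made-up numbers) of the difference between an equally weighted average of the two estimates and their geometric mean. If I recall the paper correctly it reads like a plain weighted average, but treat that, and the numbers below, as illustration rather than gospel:

Code:
import math

v_net = 0.58   # value network's estimate of P(win) -- made-up number
z_mc  = 0.46   # Monte Carlo playout winrate        -- made-up number

arithmetic = 0.5 * v_net + 0.5 * z_mc   # equally weighted average: 0.52
geometric  = math.sqrt(v_net * z_mc)    # geometric mean: about 0.517

print(arithmetic, geometric)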

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject:
Post #329 Posted: Thu Jun 22, 2017 7:00 pm 
Honinbo

Posts: 8859
Location: Santa Barbara, CA
Liked others: 349
Was liked: 2076
GD Posts: 312
Hi Bill,
Quote:
In at least some cases the authors stated that they had not, in fact, included the tweaks necessary to produce the results in their papers, and were not going to make them public. This practice seems to be quite common. Sorry, but irreproducible results do not science make, IMO. :(
An interesting point. Where to draw the line between "fundamental pure research" and commerce. Toward the end of 2015, before AG was made public, the top non-AG engines were about 4-5 stones from pro level (they lost to pros at 5 stones; I forget the name of the computer tourney; afterwards, there were exhibition matches between the top engine and pros). That was when many who didn't know about AG were still saying it would be "at least another decade" before machines could beat top pros. After DM published their paper(s), the other engines jumped to near pro level (MLily: DeepZen v. human pros, 1-1). AG's results were consistent and reproducible (AG-Master's 60-0, and AG-2017's 3-0 v. Mr. Ke Jie), and like Coca-Cola's secret formula, they were reproducible by the proprietor, just not (yet) by others.

 Post subject: Re: This 'n' that
Post #330 Posted: Thu Jun 22, 2017 7:45 pm 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
Bill Spight wrote:
As I have stated elsewhere, the MC probabilities are those of semi-random player vs. semi-random player, which is not the situation at hand. Why, then, use the MC probabilities at all, since they are inherently inaccurate?

I think this is the essence of Alphago (& co).

A mostly random MC is next to useless since it measures the distribution of legal moves from a node, so if you have lots of bad options, that would result in a low winrate. But in reality the number of bad options does not matter; only the outcome of the best option (or at most the number of good options) counts.

But with an NN, you can get the "amateur dan" level playouts mentioned earlier, which ARE informative for winrates. (For Baywa: the NN itself is weak without search, strategically as well as tactically; it has no hope of producing a pro-level game - this is what I meant by correctness.) Even if the MC winrate is just a rough estimate, the same is true for the value net. There may be positions where one works better than the other (such as early game vs late game).
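
A toy illustration of this point (Python, made-up position): suppose only 2 of 50 legal moves from a node actually win. A uniform-random playout reports a winrate of about 4%, even though the position is winnable, while a prior that concentrates on the good moves reports something much closer to the truth. The numbers and the fake "policy" are my own illustrative assumptions:

Code:
import random

winning_moves = {7, 31}          # only 2 of the 50 legal moves win (made up)
legal_moves = list(range(50))

def playout(choose_move):
    # One "playout" reduced to a single move choice, for illustration only.
    return 1.0 if choose_move() in winning_moves else 0.0

uniform = lambda: random.choice(legal_moves)          # random MC
guided  = lambda: random.choice([7, 31, 7, 31, 12])   # NN-like prior on good moves

n = 10000
print(sum(playout(uniform) for _ in range(n)) / n)    # about 0.04
print(sum(playout(guided)  for _ in range(n)) / n)    # about 0.8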

 Post subject: Re: This 'n' that
Post #331 Posted: Thu Jun 22, 2017 7:47 pm 
Honinbo

Posts: 9545
Liked others: 1600
Was liked: 1711
KGS: Kirby
Tygem: 커비라고해
Bill Spight wrote:
Another possible advantage is what moha points out, that the MC probabilities are based upon the actual position at hand, not some aggregate of more or less similar positions. :)


The value network component of a leaf node evaluation is also based on the actual position at hand, isn't it? It's true that the value network was trained and refined by self-play, thereby adjusting itself over time from similar game experiences, but the weighted function that's produced gives a unique value for that particular position, as I understand it.

I suppose MC gives the advantage of actually continuing to play out until the end of the game, whereas the value network produces a value based on past experience, which is agnostic to any hypothetical playouts...

_________________
be immersed

 Post subject: Re: This 'n' that
Post #332 Posted: Thu Jun 22, 2017 8:54 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Kirby wrote:
Bill Spight wrote:
Another possible advantage is what moha points out, that the MC probabilities are based upon the actual position at hand, not some aggregate of more or less similar positions. :)


The value network component of a leaf node evaluation is also based on the actual position at hand, isn't it? It's true that the value network was trained and refined by self-play, thereby adjusting itself over time from similar game experiences, but the weighted function that's produced gives a unique value for that particular position, as I understand it.

I suppose MC gives the advantage of actually continuing to play out until the end of the game, whereas the value network produces a value based on past experience, which is agnostic to any hypothetical playouts...


Good point. :)

What I had in mind was that the MC estimate is based solely upon the given position, whereas the value net estimate is based upon the given position plus (implicitly) many similar positions. The MC playouts could lead to the construction of a game tree that finds the exception in the current position, where it is significantly different from the other positions, which is what moha seems to be talking about, or at least provide some correction to a mistaken value net estimate. OC, it may lead to a worse estimate, which I suspect will usually be the case. But the players are not laying bets on the game ( ;) ); they are trying to win, so a worse estimate may have no effect on the outcome.

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.


This post by Bill Spight was liked by: Kirby
 Post subject: Re: This 'n' that
Post #333 Posted: Fri Jun 23, 2017 2:01 am 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
Bill Spight wrote:
What I had in mind was that the MC estimate is based solely upon the given position, whereas the value net estimate is based upon the given position plus (implicitly) many similar positions. The MC playouts could lead to the construction of a game tree that finds the exception in the current position, where it is significantly different from the other positions, which is what moha seems to be talking about, or at least provide some correction to a mistaken value net estimate. OC, it may lead to a worse estimate, which I suspect will usually be the case.

Their paper mentioned that a version with MC rollouts only (for evaluation, with otherwise similar search I suppose) is significantly stronger than a version with value net only. You guys expect way too much from NNs (esp wholeboard-wise).

Although this was only when using the best policy net - presumably since MC depends on quality moves as I mentioned.

 Post subject: Re: This 'n' that
Post #334 Posted: Fri Jun 23, 2017 7:45 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
moha wrote:
Bill Spight wrote:
What I had in mind was that the MC estimate is based solely upon the given position, whereas the value net estimate is based upon the given position plus (implicitly) many similar positions. The MC playouts could lead to the construction of a game tree that finds the exception in the current position, where it is significantly different from the other positions, which is what moha seems to be talking about, or at least provide some correction to a mistaken value net estimate. OC, it may lead to a worse estimate, which I suspect will usually be the case.

Their paper mentioned that a version with MC rollouts only (for evaluation, with otherwise similar search I suppose) is significantly stronger than a version with value net only.


Interesting. Thanks. :)

Quote:
You guys expect way too much from NNs (esp wholeboard-wise).

Although this was only when using the best policy net - presumably since MC depends on quality moves as I mentioned.


Well, back in the 90s there were go playing programs that used Monte Carlo rollouts and programs that used neural networks. Neither approach was successful. The programs were very weak. I played around a bit with Monte Carlo rollouts myself, and I know that Monte Carlo evaluation sucks. But there was a breakthrough early in this century with Monte Carlo Tree Search, which uses Monte Carlo methods for both evaluation and exploration (building the search tree). As you might expect, it was guiding the search that was the breakthrough. The evaluation still sucked. (As I said, I think I could get rich betting against Monte Carlo evaluation. ;)) But for winning the game what counts is relative evaluation. If the relative evaluation orders plays well, that is what matters for winning the game, even if you could lose your shirt placing bets.
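
(For anyone curious what "using Monte Carlo methods for exploration" looks like mechanically, here is a minimal, generic UCB1 selection rule of the sort MCTS programs use to decide which move to explore next. This is the textbook formula, not AlphaGo's actual selection rule, which also folds in the policy network's prior.)

Code:
import math

def ucb1_select(children, total_visits, c=1.4):
    # children: list of (wins, visits) pairs, one per candidate move.
    # Returns the index of the move to explore next: average winrate
    # plus an exploration bonus that favors rarely visited moves.
    def score(wins, visits):
        if visits == 0:
            return float("inf")   # always try an unvisited move first
        return wins / visits + c * math.sqrt(math.log(total_visits) / visits)
    return max(range(len(children)), key=lambda i: score(*children[i]))

print(ucb1_select([(3, 5), (1, 2), (0, 0)], total_visits=7))   # picks index 2, the unvisited move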

Now there is a new breakthrough, Deep Learning neural networks. AlphaGo's policy network alone, with no search tree, plays as well as the first MCTS programs. Hell, yes, that's impressive! In a short time AlphaGo reached pro level, then top pro level, and now has left pros in the dust, reaching a level surpassing even the go geniuses of all time. Furthermore, its rate of improvement has shown no signs of slowing down, which indicates that programs of the future have a lot of room to get even stronger. Yes, it was search that brought very weak Monte Carlo methods to the fore, but it is deep learning that has made an even more impressive improvement possible. We still do not know how much of an advance is possible via deep learning neural networks.

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject:
Post #335 Posted: Fri Jun 23, 2017 9:51 am 
Honinbo

Posts: 8859
Location: Santa Barbara, CA
Liked others: 349
Was liked: 2076
GD Posts: 312
Quote:
We still do not know how much of an advance is possible via deep learning neural networks.
Interesting to guesstimate how far below perfect play is the top human level: 5 stones ? Maybe that's too much; 4 ?

 Post subject: Re:
Post #336 Posted: Fri Jun 23, 2017 10:13 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
EdLee wrote:
Quote:
We still do not know how much of an advance is possible via deep learning neural networks.
Interesting to guesstimate how far below perfect play is the top human level: 5 stones ? Maybe that's too much; 4 ?


Before AlphaGo I would have guessed 4 stones. They say that AlphaGo can give handicaps, despite not having trained for those conditions. Maybe AlphaGo can give today's 9 dans 3 stones? And remember that a 3 stone handicap already takes away much of the advantage White has in the opening, which is a good bit of the difference between AlphaGo and humans.

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: This 'n' that
Post #337 Posted: Fri Jun 23, 2017 10:22 am 
Honinbo

Posts: 9545
Liked others: 1600
Was liked: 1711
KGS: Kirby
Tygem: 커비라고해
EdLee wrote:
Quote:
We still do not know how much of an advance is possible via deep learning neural networks.
Interesting to guesstimate how far below perfect play is the top human level: 5 stones ? Maybe that's too much; 4 ?


This brings about an interesting question regarding what we mean by "perfect play". I presume that the intended definition was a set of moves that are optimal in the sense that they are guaranteed to produce the best result, no matter how the opponent responds. However, this way of playing may not bring about the best results against humans. In the same way that some high-dan amateurs can sometimes give weaker players more stones than professionals, there may exist strategies for computer programs to play in such a confusing (but non-optimal) way that they are able to beat top pros with a greater number of stones.

For example, against a computer program playing "properly", only playing moves that cannot possibly be refuted, let's say that a human can win with 4 stones. A different computer program that plays in a confusing way, or perhaps a way that is not aligned with that particular pro's style, may be able to achieve wins with a greater handicap.

How many stones a top pro would need to win against a computer playing properly, with moves that are guaranteed to be optimal, is a difficult question to answer. But I think it's even more challenging to determine the maximum number of stones that a computer program could give a top pro and win, given an arbitrary strategy that may be particularly difficult for the human.

_________________
be immersed

 Post subject:
Post #338 Posted: Fri Jun 23, 2017 10:38 am 
Honinbo

Posts: 8859
Location: Santa Barbara, CA
Liked others: 349
Was liked: 2076
GD Posts: 312
Hi Bill, even if it's 5 stones, it seems very impressive to me for this (stone age?) brain. :)

Kirby, interesting point !

 Post subject: Re: This 'n' that
Post #339 Posted: Fri Jun 23, 2017 11:22 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Kirby wrote:
EdLee wrote:
Quote:
We still do not know how much of an advance is possible via deep learning neural networks.
Interesting to guesstimate how far below perfect play is the top human level: 5 stones ? Maybe that's too much; 4 ?


This brings about an interesting question regarding what we mean by "perfect play". I presume that the intended definition was a set of moves that are optimal in the sense that they are guaranteed to produce the best result, no matter how the opponent responds.


That's not the definition used by AlphaGo or MCTS programs. Rather, they do not aim at perfect play, and have no definition, explicit or implied, for it.

Quote:
However, this way of playing may not bring about the best results against humans. In the same way that some high-dan amateurs can sometimes give weaker players more stones than professionals, there may exist strategies for computer programs to play in such a confusing (but non-optimal) way that they are able to beat top pros with a greater number of stones.

{snip}

How many stones a top pro would need to win against a computer playing properly, with moves that are guaranteed to be optimal, is a difficult question to answer. But I think it's even more challenging to determine the maximum number of stones that a computer program could give a top pro and win, given an arbitrary strategy that may be particularly difficult for the human.


Well, if a program perceives itself to be far behind, as in giving 5 stones, will it make desperation moves that the human can take advantage of? IIUC, some programs today use variable komi to adjust the final score of playouts in the computer's favor, reducing the komi as the game progresses, to avoid that kind of thing.
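
(A rough sketch of the dynamic-komi idea, in Python; the schedule and the numbers are my own illustrative assumptions, not how any particular engine actually does it. The virtual komi starts large, so that early positions in a handicap game do not all look lost, and fades to zero so that late evaluations stay honest.)

Code:
def virtual_komi(move_number, handicap_stones, game_length=250, points_per_stone=7.0):
    # Bonus points credited to the program's side when scoring playouts.
    initial_bonus = handicap_stones * points_per_stone
    remaining = max(0.0, 1.0 - move_number / game_length)
    return initial_bonus * remaining

# Giving 5 stones, the adjustment shrinks as the game goes on:
for move in (0, 100, 200, 250):
    print(move, virtual_komi(move, handicap_stones=5))   # 35.0, 21.0, ~7.0, 0.0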

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: This 'n' that
Post #340 Posted: Sun Jun 25, 2017 8:57 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Top attachment

[go]$$Wcm16 Top attachment
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . O O . . . . |
$$ | . . . . . . . . . . . . X X X O O . . |
$$ | . . . O . . . . . , . . . . . X X . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . , . . . . . , . . . . . , . . . |
$$ | . . . . . . . . . . . . . . . 2 1 . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . O . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . O . . . . . , . . . . . , X . . |
$$ | . . . . . X . . . . . X . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]


We teach beginners not to attach to stones they are attacking, but once you see this top attachment, it looks very nice, doesn't it? :) Top attachments against pincers and pincered stones are not unusual, but I think that the top attachment against a wedge is another AlphaGo innovation. It avoids the problems of a third line approach from either side. It also indicates the value of the center. The center is important, but not so important as to play a boshi. :)

[go]$$Wcm16 Stretch
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . O O . . . . |
$$ | . . . . . . . . . . . . X X X O O . . |
$$ | . . . O . . . . . , . . . . . X X . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . , . . . . . , . . . . . b 3 . . |
$$ | . . . . . . . . . . . . . . . 2 1 . . |
$$ | . . . . . . . . . . . . . . . a c . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . O . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . O . . . . . , . . . . . , X . . |
$$ | . . . . . X . . . . . X . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]


Neither hane-counterhane sequence looks good. Wa - :b18: lets the Black wall work, either in a fight or with the :b17: and :b18: stones, and White does not have much room to develop on the bottom right. If Wb - Bc instead, Black gets good development of his moyo. :w18: is solid, taking away Black's preferred hane.

[go]$$Wcm16
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . 5 O O . . . . |
$$ | . . . . . . . . . . . 6 X X X O O . . |
$$ | . . . O . . . . . , . . . . . X X . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . , . . . . . , . . . . . , 3 . . |
$$ | . . . . . . . . . . . . . . b 2 1 . . |
$$ | . . . . . . . . . . . . . . . . a . . |
$$ | . . . . . . . . . . . . . . . 4 . 7 . |
$$ | . . O . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . O . . . . . , . . . . . , X . . |
$$ | . . . . . X . . . . . X . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]


:b19: is very nice, and shows the utility of :b17:. If Black had played :b19: first, White could have jumped to "b". :b17: is perfectly placed. :)

:w20: relieves the pressure on the White corner with sente, and then White slides to :w22:. :w22: is not an AlphaGo-style play; a human could easily have played it. It prevents a Black hane at "a", which is now senseless. It also, OC, gives the White stones some eye potential. But more important, I think, is that it offers potential for further White play in the bottom right. By contrast, an extension towards the Black wall does not have much potential for development, precisely because of the wall.

Large scale development

[go]$$Bcm23 One space jump
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . O O O . . . . |
$$ | . . . . . . . . . . . X X X X O O . . |
$$ | . . . O . . . . . , . . . . . X X . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . , . . . . . , . . . . . , O . . |
$$ | . . . . . . . . . . . . . . . X O . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . X . O . |
$$ | . . O . . . . . . . . . . . . . . . . |
$$ | . . . . . 1 . . . . . . . . . . . . . |
$$ | . . . O . . . . . , . . . . . , X . . |
$$ | . . . . . X . . . . . X . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]


:b23: is one of the oldest plays in go — against the large knight's enclosure. Against the small knight's enclosure it does not threaten the corner as much. But it is very much in the AlphaGo - Go Seigen style of rapid but thin development. Surely White has to play inside Black's huge sphere of influence, but where?

[go]$$Bcm23 Deep invasion
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . O O O . . . . |
$$ | . . . . . . . . . . . X X X X O O . . |
$$ | . . . O . . . . . , . . . . . X X . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . 4 . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . , . . . . . , . . . . . , O . . |
$$ | . . . . . . . . . . . . . . . X O . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . X . O . |
$$ | . . O . . . . . . . . . . . . . . . . |
$$ | . . . . . X . . . . . a . . . 3 . . . |
$$ | . . . O . . . . . , . . . . . , X . . |
$$ | . . . . . X . . . . . X . . 2 . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]


:w24: invades the bottom right. After :b25:, :w26: settles the White group on the right side. Note that White extends only one space to forestall any attack. Now :w24: looks like a gift. And indeed, it dies later in the game, along with other stones White plays in the vicinity. But it has aji. It exemplifies the AlphaGo - Kitani style of salting the opponent's position with stones which often die, but which have aji. :)

Where does Black play now? "a" would surely kill :w24:, but it is a single purpose play, and kind of small looking at that. Black has better plays.

[go]$$Bcm23 Prelude to a fight
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . O O O . . . . |
$$ | . . . . . . . . . . . X X X X O O . . |
$$ | . . . O . . . . . , . . . . . X X . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . O . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . , . . . . . , . . . . . , O . . |
$$ | . . . . . . . . . . . . . . . X O . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . X . O . |
$$ | . . O . . . . . . . . . . . . . . . . |
$$ | . . . . . B . . . . . . . . . X . . . |
$$ | . . . O . . . . . , . . . . . , X . . |
$$ | . . 5 . . X . . . . . X . . W . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]


The invasion, :b27:, surprised me. Surely in exchange for the corner White will go into Black's bottom left position, after which Black will have the difficult task of keeping White from utilizing the aji of :wc: while at the same time making use of :bc:. And indeed that is how the game developed. The fight is quite interesting, but any commentary is above my pay grade. ;) I'll leave that to the pros. :)

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.


This post by Bill Spight was liked by 2 people: dfan, jeromie