I am wondering if a better method would be to study the AI's alternative moves, even when the game move isn't considered a mistake by the AI.
I think it's a psychological problem. It feels natural to try to spot a mistake and correct it, and the bigger the mistake, the more correction we think we will achieve. But there's a world of difference between avoiding mistakes and making good moves.
In my experience, in most disciplines, you learn mostly by being shown the correct way to do something - by imitation in other words. You are told to read the
best books - you are not told to spot the mistakes in Shakespeare. You are told to keep your fingers on the
right keys on the piano. You are not encouraged to play bum notes and sort out the mess later. In calligraphy you learn from following the best
models - until modern times at least, you are not supposed to indulge in a "self-expressive" scrawl.
There are disciplines where your task is to spot the flaw and fix it, e.g. medicine. But even there, there is good practice and bad practice in applying the remedy, and the good practice is learned by imitating the experts.
This advice is also the perennial advice of go experts, i.e. pros. They tell us things like "Play over 1,000 games". Although it's been decades since I studied go seriously, I can still vividly remember my Eureka moments when playing over a pro game and suddenly realising, "Oh, I didn't know you could do that!" I then imitated that move and suddenly found I had added an extra arrow to my quiver.
Given both the frequency of such pro advice and my own experiences in following it, I have been staggered over the years at how little most people seem to play over whole games. Most seem to obsess over portions of a game: josekis, fusekis, tsumego or the endgame. That's like trying to become a good tennis player without trying a backhand, or without learning to move around the court.
I think if an AI bot could talk, it would also say to us, "I learned to play well by playing over millions of complete games and imitating the
best in each case." Aside from corrections imposed from the outside by programmers (e.g. for reading ladders), I don't think bots ever try to learn by correcting mistakes. Mistakes are spotted, of course, but are just put in the dustbin.
But even a commitment to imitation has to be carefully considered, in various ways. Choosing the best model to follow, for example, may be a matter of temperament - you want to play in a certain style, say. But, also, do you want quick results, which tend to be easy-come-easy-go, or are you prepared to take your time and achieve good habits that stick?
In go, the traditional pro advice has been to study a single player thoroughly (and then move on to another). Again going by my own experience, amateurs tend to make the choice on the basis of a style they like, and they very rarely go on to a second player. I think that's a mistake, and not what is behind the pro advice.
There is a problem with playing over game records. Commentaries are actually quite rare. So you play over a game record partly blind, and on trust. The trust is that the moves are all good ones, or at least acceptable to a pro. But, despite our much-vaunted claim that humans can tell you what is happening and bots can't, in 99% of the games you play over there is no commentary, and so there is no human actually telling you what is going on. Nor do you get any advice on alternative good moves, or even on what other moves the pro looked at. The only way you can acquire an understanding in such cases may be to see repeated examples by the same player in game after game. In other words, you try to absorb his habits - doing what neural-network bots do, but on a very small scale.
This is where AI scores. In a separate post I put forward my ideas on nexusology. That was just my own way of looking at something we can all see and interpret in our own ways: a bot will reveal to us all the main moves it considers as candidates, typically about 10 for much of the game. We still have to trust that these are all decent moves, but we are no longer blind.
Trust can be dealt with fairly easily. You need only play against a bot, choosing a non-blue move, and you'll get a feel for how much that loses. My guess is that you may lose the game, but you'll play far better than you did as a pure human. So your trust would not be misplaced unless you wanted to be Sin Chin-seo or Ke Jie.
So you can extrapolate from that and say to yourself, it's good enough if I can create a palette of candidate moves to choose from that more or less matches the palette that the bot creates.
My sense of how to do this evolved from trying to categorise all the candidate moves a bot showed. Because I don't study go to become stronger (I prefer to be a fan who "appreciates" the game), I have not tested my method properly, but provisionally I got good results from categorising candidate moves at a first level as settling, colonising and overconcentrating moves. But these were all nexuses with multiple components (definitely not binary, as in profit/thickness). At a second level I had other nexuses: one, for example, included trading/sacrificing/miai moves/tenuki.
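The two-level scheme can be pictured as a simple labelled structure. This is only a minimal sketch: the category names come from the description above, but the move coordinates and the particular assignments are invented for illustration.

```python
# Hypothetical sketch of the two-level categorisation scheme.
# Category names are from the text; the example moves are invented.

first_level = {
    "settling":          ["R14", "C3"],   # moves that stabilise a group
    "colonising":        ["Q10", "D10"],  # moves that stake out new ground
    "overconcentrating": ["F4"],          # moves that duplicate strength
}

second_level = {
    "trading":     ["O3"],
    "sacrificing": ["S17"],
    "miai":        ["C6", "R6"],          # interchangeable pair
    "tenuki":      ["K16"],
}

# A candidate palette for a position is the union of all labelled moves.
palette = sorted({m for moves in first_level.values() for m in moves}
                 | {m for moves in second_level.values() for m in moves})
print(len(palette))  # roughly ten candidates, as a bot typically shows
```

The point of the structure is only that every candidate move carries a human-readable label, rather than a bare winrate.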
By looking at positions through that prism and producing a list of ten candidate moves, I found I could produce a list that married up pretty well (maybe around 75% much of the time) with the bot's candidates (Lizzie/Leela's in my case). I could not say with any confidence which was the best move, but as I say, it's probably sufficient at my level to play any one of those moves. On the other hand, because of the categories I had imposed, I felt I had some human understanding of the moves listed. And at no point did I try to fix mistakes. My goal was exclusively to get a matching list. I stopped once I felt I had got satisfactory results, but I feel confident I could get better results if I continued on the same path (mainly by refining the categories).