When we review games, we often go through them move-by-move questioning everything, or we stop at a few places and spend an hour trying to come to some conclusions. This is especially common when a group of players sits down after their match at a tournament, but it is clear that many people do it on their own as well (I have caught myself doing it). I never found it useful to ponder over a game like that, and I have started calling it overanalyzing.
The best review is a quick one that gives you some idea of why the game was won or lost, whether you are making your usual mistakes, and whether your important decisions were good (and what you could have done if they were not). Often the only thing to learn from a game is "never make that mistake again".
When I first started reviewing my games using AI, it was exactly this kind of overanalysis. I set up a server with multiple brand-new GPUs (only to learn that I could use just one at a time, though it was fast enough anyway), downloaded the best Leela weights, and would step through the game having the AI ponder every move. Fairly useless: the top weights easily find fault in normal play, and with more playouts their logic usually becomes irrefutable (except that they can't always find a logical resolution in very complicated local situations). Once you see their moves you just can't un-see them, and your chances of learning anything are diminished.
This seems to be how many players use the computer: have it do the work for you, then try to understand its reasoning, end up very confused, and never discern anything you could actually improve.
Later I discovered that weaker weights would actually agree with most of my moves, and when my moves were not good, the weights would usually not refute them with overly crazy lines. So I started using KaTrain on my laptop (which has a bad GPU that is fast enough per se but overheats and slows down after a while). Now I find it very useful to run analysis to 250 playouts for every move and look for obvious errors; a drop of around -1.5 points usually indicates a big mistake at my Pandanet level. If I have time (and the ambient temperature is not too high for the laptop to cool off), I'll do a second run to 2,500 playouts per move to make sure the mistakes are actually mistakes (they are not always). If there is something interesting, I will play out some variations (as opposed to just thinking about them) to see if I can find a reasonable way to play that I might not easily think of, or might be unsure what to think of.
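The "look for obvious errors" pass above amounts to scanning per-move score estimates for drops beyond a threshold. A minimal sketch of that idea, with hypothetical score data (in practice the numbers would come from an engine run such as KataGo at ~250 playouts per move; this is not KaTrain's actual API):

```python
# Sketch: flag moves that lose more than ~1.5 points, as in the
# quick-review pass described above. The score trace is made-up
# example data, not real engine output.

MISTAKE_THRESHOLD = 1.5  # points lost that counts as a "big mistake"

def flag_mistakes(scores, threshold=MISTAKE_THRESHOLD):
    """Return (move_number, points_lost) for every move whose player
    dropped more than `threshold` points by the engine's estimate.

    `scores` holds the score estimate from Black's perspective:
    before move 1, after move 1, after move 2, ...
    """
    mistakes = []
    for move_no in range(1, len(scores)):
        before, after = scores[move_no - 1], scores[move_no]
        # Black plays the odd-numbered moves; a Black mistake lowers
        # the score, a White mistake raises it.
        black_to_move = (move_no % 2 == 1)
        loss = (before - after) if black_to_move else (after - before)
        if loss > threshold:
            mistakes.append((move_no, round(loss, 1)))
    return mistakes

# Hypothetical trace: move 3 loses ~2.3 points for Black,
# move 6 loses ~1.8 points for White.
scores = [0.5, 0.7, 0.4, -1.9, -1.7, -2.0, -0.2]
print(flag_mistakes(scores))  # [(3, 2.3), (6, 1.8)]
```

The second, higher-playout run then only needs to re-check the handful of flagged moves rather than the whole game.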
Sometimes I just throw the game into KaTrain and start another game, even though that often means the review will not have very high priority later. That seems to be a good thing, because playing more is better than reviewing more.
Someone might say I am obviously lazy now that I have figured out my AI analyses.