Thanks, Philip, for sharing this. Indeed it's beautiful and fascinating. It's part of a larger research paper, "Opening strategies in the Game of Go from feudalism to superhuman AI" by Bret Beheim: there's a preprint at
https://osf.io/preprints/psyarxiv/cewst_v5, and code at
https://github.com/babeheim/go-learning-eras.
It's a type of evidence-based research that I'd like to see more of, but I'm frustrated by some of the methodological choices in this particular study. It's a shame the author didn't collaborate with an expert (high dan or professional) go player.
The overall theme is to measure how innovation happens more slowly or more quickly depending on "infostructure" (information infrastructure): it's not just about having a large group of people working on something; it also matters how they organise themselves and exchange information. There are references to literature suggesting that innovation sometimes happens in slower groups.
The paper uses GoGoD as a dataset, and derives some numerical measures of "diversity" in opening play over time. Specifically, a go game is represented in SGF notation -- so, for example, the Shusaku fuseki is "qd;dc;pq;oc;cp;qo;pe;" -- and two openings are "similar" if the Levenshtein edit distance between them is small. Notably, terminology such as "3-4 point" is completely absent from the paper, and there is scarcely a go diagram to be seen.
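Roughly what such a measure looks like, as a minimal Python sketch: I'm assuming move-by-move tokenisation here, which may not match the paper's actual implementation (the repository above has the real code), and the "variant" sequence is one I've invented purely for illustration.
[code]
def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # delete x
                           cur[j - 1] + 1,           # insert y
                           prev[j - 1] + (x != y)))  # substitute x -> y
        prev = cur
    return prev[-1]

def moves(sgf):
    """Split an SGF-style opening string like 'qd;dc;pq' into move tokens."""
    return [m for m in sgf.split(";") if m]

shusaku = moves("qd;dc;pq;oc;cp;qo;pe")   # the Shusaku fuseki, as above
variant = moves("qd;dc;pq;oc;cp;qo;qe")   # invented: only the last move differs
print(levenshtein(shusaku, variant))      # 1 -> counted as very "similar"
[/code]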
To my mind, this leads to two major problems. Firstly, openings are classified into "families" based on the first few moves, sometimes literally on the first two moves. So, for example, the opening in the diagram below (with white 2 at the top-left star point) and the symmetric opening with white's stone at 'a' instead are treated as different families. And in general, transpositions (the same position reached by a different move order) are not accounted for properly; see the sketch after the diagram.
[go]$$c Not accounting properly for symmetry/transposition
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . 2 . . . . . , . . . . . 1 . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . , . . . . . , . . . . . , . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . a . . . . . , . . . . . 3 . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]
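To make this concrete, here is a small Python sketch (the same illustrative move-level edit distance as above, not the paper's actual code, with coordinates I've chosen for the example) showing how both a pure transposition and the symmetric choice between white 2 and 'a' in the diagram come out as "different", even though the positions are identical or mirror images:
[code]
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def moves(sgf):
    return [m for m in sgf.split(";") if m]

# Transposition: the same three stones, reached in a different move order.
a = moves("pd;dd;pq")   # B top-right 4-4, W top-left 4-4, B bottom-right 3-4
b = moves("pq;dd;pd")   # same stones, black's two moves swapped
print(set(a) == set(b), levenshtein(a, b))   # True 2 -- identical position, nonzero distance

# Symmetry, as in the diagram: white 2 at the top-left 4-4 versus at 'a'.
c = moves("pd;dd;pp")   # white takes the upper-left star point
d = moves("pd;dp;pp")   # white takes 'a' instead: the mirror-image opening
print(levenshtein(c, d))   # 1 -- the raw strings already diverge at move two
[/code]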
Secondly, common formations (san-ren-sei, low Chinese, Kobayashi fuseki, taisha joseki, etc) won't appear as single clusters in the dataset, but will get split up according to the different contexts in which they appear (low Chinese against two 4-4 points is, by these measures, quite different from low Chinese if white has played a 3-4 point).
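As a concrete illustration of that splitting (same caveats as before: an illustrative edit distance, with schematic coordinates I've made up), two games in which black builds an identical Chinese-style framework end up a sizeable distance apart purely because white's replies differ:
[code]
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def moves(sgf):
    return [m for m in sgf.split(";") if m]

# Black builds the same Chinese-style framework on the right side in both games
# (coordinates schematic); only white's replies differ.
vs_hoshi  = moves("pd;dd;pq;dp;qj")   # white answers with two 4-4 points
vs_komoku = moves("pd;dc;pq;cq;qj")   # white answers with two 3-4 points
dist = levenshtein(vs_hoshi, vs_komoku)
print(dist, dist / len(vs_hoshi))     # 2 0.4 -- black's formation is identical,
                                      # yet white's replies alone push the two
                                      # sequences well apart
[/code]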
This means that the patterns in the various graphs, although nice to look at, don't map at all onto how I conceive of the opening in go. A stronger player might disagree with me, but I would like to see that articulated in the paper! From a machine learning perspective, you need to get your feature selection right before plunging into the analysis.
So the abstract says "Surprisingly, the influence of AI has produced only a modest, short-lived disruption in opening move diversity, suggesting human-AI convergence and incremental, rather than revolutionary, change." And I think this reflects the fact that AI has a strong preference for 4-4 points and early 3-3 invasions, but the analysis is not picking up the explosion of new variations arising from the 3-3 invasion, AI's greater willingness to tenuki or to sacrifice, the re-evaluation of thickness and aji...
And later on: "It was not at all obvious before AlphaGoZero whether SAI would rediscover human opening play, or simply show us that we’d been doing it wrong the whole time. A similar phenomenon occurred in Chess when AlphaZero bootstrapped its ‘understanding’ of the game and converged on many of the preferred human openings e.g. d4, e4, Nf3, c4, in roughly the same proportions (McGrath et al., 2021). Such convergence is a major success for our collective problem-solving abilities." It's not spelled out, but the implication seems to be: go-playing AIs choose 4-4 points or 3-4 points just like humans do, therefore humans' understanding of the opening was already pretty good before AI came along. Personally, I think AI has in fact shown us that we were doing plenty of things wrong the whole time.
Despite these quibbles, the paper is worth reading. It's certainly thought-provoking, and I hope to see more of this sort of analysis over time. The reference list covers a lot of ground, pointing to research on cultural evolution across many different domains, not just go. (I'm expecting John Fairbairn to pop up and object to too many numbers. Download the paper and go to the last five pages: there are plenty of words to be found as well.)