 Post subject: Re: “Decision: case of using computer assistance in League A #101 Posted: Fri Mar 30, 2018 11:02 am
 Oza

Posts: 2465
Liked others: 15
Was liked: 3558
Some years ago Mark Hall and I ran a sideshow at the London Open where players (nearly all dan and high kyu) took up an invitation to predict all the moves of a pro game (i.e. both sides) they had never seen before using GoScorer. They had to guess the move played, not just one of the top three. It was accepted that the first dozen moves or so would be more or less impossible to guess, so straightaway no-one could score close to 100% (or even 98%). There was, however, a function that gave you a broad hint (e.g. which quarter of the board), though this meant you didn't get the full score for that move.

Nevertheless, scores were consistently high, which surprised us. I can't remember the percentages now, but I think 60-70% was common and a high dan scored (I think) over 80% by thinking a long time. But the surprise at the generally high scores is still a fresh memory.

Thinking about the explanations later, it became apparent that quite a lot more moves than we expected are routine (e.g. hanetsugi) or trivial (connecting after atari).

But some years later, I noticed another delimiting effect. A very high percentage of moves are adjacent to or within one space of the previous move (i.e. the opponent's move). Just as a rough pointer, I have just looked at a recent game between a pro and an AI, limiting myself to moves 11-110, and counted how many moves fell within that scope. It was 64 (i.e. 64%).

You can extend this. If you instead count all the moves adjacent to or within one space of the last move (his) and all those adjacent to or within one space of the move before (yours), you get noticeably high figures. I didn't actually do a count here but I could see at a glance there were many such moves.

Now of course "adjacent to or within one space of the last move" can cover a fair number of points (not all empty, though), so there is some guesswork, and this is a big part of the reason why you can't use a computer to generate good moves like this. But in most cases even an amateur dan human (and, apparently, high kyus) can make a decent stab at which is the right point. And a strong human can also often tell when to tenuki.
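The proximity count described above can be sketched in code. This is purely my own illustration (the coordinates below are a made-up toy sequence, not a real game), and it assumes "adjacent to or within one space" means a Chebyshev distance of at most 2:

```python
# Sketch of the proximity count: the fraction of moves played adjacent
# to, or within one space of, the immediately preceding move. "Within
# one space" is taken here as Chebyshev distance <= 2, which is an
# assumption about what the post means.

def chebyshev(p, q):
    """Board distance where orthogonal and diagonal steps both count as 1."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def proximity_rate(moves, max_dist=2):
    """Fraction of moves within max_dist of the previous move.

    moves: list of (col, row) coordinates in playing order.
    """
    if len(moves) < 2:
        return 0.0
    near = sum(
        1 for prev, cur in zip(moves, moves[1:])
        if chebyshev(prev, cur) <= max_dist
    )
    return near / (len(moves) - 1)

# Toy example: a short contact exchange followed by a tenuki.
game = [(3, 3), (4, 3), (4, 4), (3, 4), (16, 16)]
print(f"{proximity_rate(game):.0%}")  # 3 of the 4 follow-up moves are close
```

Run over moves 11-110 of a real game record, this would reproduce the kind of 64% figure quoted above.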

If you add in style training, I imagine you can up the percentages even more.

Does style training work? It must, surely, otherwise no-one would have a style. Is it possible to copy a style well enough to bring a high score obtained by other factors, such as the above? It may be unusual but it seems possible. I recall Jan van der Steen's remarkable ability to comment on a game in exactly the manner used for the pro commentaries in Go World. It wasn't a parody. If you posed a sort of Turing test and presented someone with his commentary and a GW commentary, I don't think they could tell the difference in origin.

But at that time, Go World was just about the only thing in existence that gave long commentaries in English and, like many others, Jan studied them intensely faute de mieux (he was 3-dan at the time, I think). So, studying Leela intensely could in like manner possibly produce a Leela clone, especially given that unlike humans Leela is probably very consistent. The clone may not understand what he's doing, but his subconscious has learnt enough to be a good mimic?

 This post by John Fairbairn was liked by: Akura

 Post subject: Re: “Decision: case of using computer assistance in League A #102 Posted: Fri Mar 30, 2018 12:24 pm
 Dies with sente

Posts: 105
Location: Ventura
Liked others: 42
Was liked: 48
Rank: KGS 4 kyu
So I ran some old chess games through a strong chess engine (Houdini Pro 4), and the results were a bit surprising to me, similar to those noted by Mr. Fairbairn.

I looked at just five games. They were played in 1992 at a small local FIDE invitational. At the time my FIDE rating was in the low 2300s; I would guess that this is roughly the equivalent of the lower end of the mid-dan range in Go. All of my opponents were rated within 120 points of me.

The results were as follows, looking at only the number of matches with Houdini's top three moves:

Game 1: 28/43 or 65 percent
Game 2: 14/16 or 87.5 percent
Game 3: 29/38 or 76 percent
Game 4: 55/77 or 71 percent
Game 5: 26/30 or 87 percent
Overall: 152/204 or 74.5 percent
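The arithmetic in the table above can be reproduced in a few lines. The per-game counts are taken straight from the table; the rounding is my assumption:

```python
# Recompute the top-three match rates from the raw counts in the table.
games = {  # game number -> (moves matching the engine's top three, moves checked)
    1: (28, 43),
    2: (14, 16),
    3: (29, 38),
    4: (55, 77),
    5: (26, 30),
}

for n, (hits, total) in games.items():
    print(f"Game {n}: {hits}/{total} or {100 * hits / total:.1f} percent")

overall_hits = sum(h for h, _ in games.values())
overall_total = sum(t for _, t in games.values())
print(f"Overall: {overall_hits}/{overall_total} "
      f"or {100 * overall_hits / overall_total:.1f} percent")  # 152/204
```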

Frankly, these match rates are significantly higher than I anticipated, so I am having to rethink my position that top three moves matching can be a strong indicator.

Interestingly, of my two games with the highest match rates, in Game 2 my opponent played rather weakly and lost quickly; in Game 5 I lost rather miserably to a stronger player.

 This post by Bartleby was liked by 3 people: Bill Spight, jeromie, Uberdude

 Post subject: Re: “Decision: case of using computer assistance in League A #103 Posted: Fri Mar 30, 2018 1:44 pm
 Judan

Posts: 6331
Location: Cambridge, UK
Liked others: 365
Was liked: 3435
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Bartleby wrote:
Are you saying that the fact that Leela is weaker makes matching more likely? I would think the opposite is true. There are generally more suboptimal moves in any given position than optimal moves, so I suspect that match rate should be lower with weaker engines, not higher.

I suspect that normal play of mid- to high-dan amateur has a closer match to Leela (whose policy network has been trained on human games) than to a stronger bot like AlphaGo Zero without human training (or recent Leela Zero, or AlphaGo Master which started with human training but has had a lot of self play training to develop its own style). I also suspect a top pro would have a lower match against Leela. Of course I'd like some real data and would update my views accordingly.

Bartleby wrote:
I still think 98 percent is really high. Although confirmatory evidence may be weak in general, at some point that becomes no longer true. If a player had a 100 percent match rate over an entire game would this not be highly suspect? 98 per cent is quite close to 100 per cent.

I agree 98% is suspicious, but not particularly so if 80-90% is normal.
But suspicion is not enough to convict and punish. Once suspicions are raised, I think a human analysis is warranted, like the one Stanislaw did which I posted, or the ones we did here of moves e2, l17 and t13.

Bartleby wrote:
But my main point is not about this game in particular, or even match rates in general. It's rather that in a competitive game like chess or Go, and especially when playing over the Internet when a cheater can always rely on plausible deniability, cheating is likely to become a real problem as stronger and stronger engines become available.

Yup. It's difficult. If we were already in a position where it's accepted 10% or something of people are cheating online then I'd be happier with much weaker evidence to convict someone, "on the balance of probabilities" level (as for civil cases in English law). But if we are still in the cheating is rare world (maybe I'm being naive) then stronger "beyond reasonable doubt" (criminal law) evidence is needed.
Bartleby wrote:
Maybe the problem will be less in Go than in chess: my general impression is that there is a significantly higher percentage of dysfunctional personalities among chess players than Go players.

I hope so!

P.S. That 1k on reddit with the 64% made an interesting point: in his game Leela had a delusion about the status of a dead group (it's sometimes really stupid at nakades), so it wanted to keep playing dumb moves there which sensible humans don't, thus lowering the matching metric. Neither Carlo's game nor mine had a dead group to confuse Leela. Games which do should perhaps be excluded from the dataset for finding the usual distribution of this similarity metric.


 Post subject: Re: “Decision: case of using computer assistance in League A #104 Posted: Fri Mar 30, 2018 3:24 pm
 Gosei

Posts: 1455
Liked others: 751
Was liked: 485
Rank: AGA 3k KGS 1k Fox 1d
GD Posts: 61
KGS: dfan
I don't have enough information to have a real opinion in this matter (and I suspect neither do the people in charge of the decision), but I think that the numbers to compare when hand-wavily throwing around stats are the disagreements of 2% vs 10-20%, not the agreements of 98% vs 80-90%. It's easy to look at numbers above 80 as just all being pretty large, but there's a really big difference between 2% disagreement and 10% disagreement which is easier to see when you look at it that way.
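dfan's framing can be made concrete with a toy binomial model. Under the (much too simple) assumption that each scored move independently disagrees with the engine with probability p, the chance of seeing only two disagreements in 100 moves collapses as p rises from 2% to 10-20%. The numbers here are illustrative, not taken from the actual case:

```python
# How likely is a 98% match (2 disagreements in 100 scored moves) if the
# player's true per-move disagreement rate is p? This assumes moves are
# independent, which real games are not, so it is only an illustration.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

n = 100  # scored moves (hypothetical)
k = 2    # observed disagreements, i.e. a 98% match
for p in (0.02, 0.10, 0.20):
    print(f"p={p:.2f}: P(<= {k} disagreements) = {binom_cdf(k, n, p):.2e}")
```

With p = 2% the observation is unremarkable (probability above 0.6), while with p = 10% it is already around one in five hundred, which is dfan's "really big difference" in numbers.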


 Post subject: Re: “Decision: case of using computer assistance in League A #105 Posted: Fri Mar 30, 2018 10:29 pm
 Honinbo

Posts: 9445
Liked others: 2956
Was liked: 3156
Uberdude wrote:
Bartleby wrote:
Are you saying that the fact that Leela is weaker makes matching more likely? I would think the opposite is true. There are generally more suboptimal moves in any given position than optimal moves, so I suspect that match rate should be lower with weaker engines, not higher.

I suspect that normal play of mid- to high-dan amateur has a closer match to Leela (whose policy network has been trained on human games) than to a stronger bot like AlphaGo Zero without human training (or recent Leela Zero, or AlphaGo Master which started with human training but has had a lot of self play training to develop its own style). I also suspect a top pro would have a lower match against Leela. Of course I'd like some real data and would update my views accordingly.

My suspicions are similar, given the kind of matching done. The range of moves that (strong amateur) Leela considers good is very likely to include the move a strong but slightly weaker amateur human might play. This is ignoring painfully obvious moves and one lane roads, OC, where the actual moves chosen should match almost 100%. But pros will often dismiss a move that a strong amateur thinks is good, because they see at a glance that it doesn't stand up. So there are probably in general fewer pro moves for the strong amateur human to match.

Uberdude wrote:
Bartleby wrote:
I still think 98 percent is really high. Although confirmatory evidence may be weak in general, at some point that becomes no longer true. If a player had a 100 percent match rate over an entire game would this not be highly suspect? 98 per cent is quite close to 100 per cent.

I agree 98% is suspicious, but not particularly so if 80-90% is normal.
But suspicion is not enough to convict and punish. Once suspicions are raised, I think a human analysis is warranted, like the one Stanislaw did which I posted, or the ones we did here of moves e2, l17 and t13.

I agree with Regan. Matches with Leela or other very strong bot may provide supporting evidence, given other evidence of cheating, but is very rarely good enough to stand alone. I also agree with Uberdude that the high number of matches with Leela justifies looking for more evidence, such as analysis of specific plays.

Uberdude wrote:
Bartleby wrote:
But my main point is not about this game in particular, or even match rates in general. It's rather that in a competitive game like chess or Go, and especially when playing over the Internet when a cheater can always rely on plausible deniability, cheating is likely to become a real problem as stronger and stronger engines become available.

Yup. It's difficult. If we were already in a position where it's accepted 10% or something of people are cheating online then I'd be happier with much weaker evidence to convict someone, "on the balance of probabilities" level (as for civil cases in English law). But if we are still in the cheating is rare world (maybe I'm being naive) then stronger "beyond reasonable doubt" (criminal law) evidence is needed.

That's a very Bayesian outlook, Uberdude. And I think that the similarity to civil and criminal law is important. I became a duplicate bridge director after taking a class from the world's best. The class did not cover cheating, and it was emphasized that the attitude of civil law was the right one. If there is an irregularity the idea is to restore equity, not to punish wrongdoing. The burden of proof is also less in civil law. I defer to the officials who found that this game was irregular, in which case throwing out the game result would restore equity. The close similarity to Leela's play is enough to say that there is something funny about this game. But it is, as Regan says, only supporting evidence of cheating. Also, cheating is not just an irregularity, it is wrongdoing that should be punished. For a finding of cheating, the burden of proof needs to be higher, just as it is in criminal law.

BTW, if online cheating at chess is like other anti-social behavior, it may well be that the number of players who have cheated could be as high as 25%; but the number who cheat frequently or who cheat in tournaments is probably less than 5%. A lot of people succumb to temptation once or twice, but then find that it is not all that rewarding, and give it up.

_________________
At some point, doesn't thinking have to go on?


 Post subject: Re: “Decision: case of using computer assistance in League A #106 Posted: Sat Mar 31, 2018 2:58 am
 Oza

Posts: 2465
Liked others: 15
Was liked: 3558
Quote:
BTW, if online cheating at chess is like other anti-social behavior, it may well be that the number of players who have cheated could be as high as 25%; but the number who cheat frequently or who cheat in tournaments is probably less than 5%. A lot of people succumb to temptation once or twice, but then find that it is not all that rewarding, and give it up.

It's useful to stress that this refers (I assume) to computer assistance. The nature of chess events is such that cheating can take other forms, some of which can appeal in particular to grandmasters. Many events are round robins or Swisses where it is possible to deliberately lose or accept a draw in order to boost the chances of a compatriot - or of someone who is willing to give you a share of his extra prize money. This was allegedly popular, normal even, when Soviet Union players travelled en bloc, and it apparently still happens. Presumably there are suspicions it can happen (or has happened) in go, given that draws for playing partners nowadays seem routinely to be made so as to pair compatriots in early rounds. Go professionals have certainly not been immune to other forms of cheating, so we can't be sanctimonious about go.

Chess officials seem to rely on two main strands to detect computer-assisted cheating. One is a loo patrol and other ways of searching for mobile phones. The other is to look for sudden improvements in Elo ratings. Both can be implemented in go. But there are yet other differences between go and chess, in that most chess events are either open or invitation only (much more so than in go). On the one hand, open events seem to provide more opportunities for weak players to try cheating. On the other, mere suspicions of cheating can mean invitations are no longer forthcoming, so cheating can be controlled to some extent without even proving it. Go pro events are mostly controlled by guilds, so amateurs are mostly excluded. Also, the guilds rather than the event organisers control invitations.

I suppose it's also worth asking whether we are getting too uptight about cheating. When I was young the first electronic calculators came out and there was a huge kerfuffle about some kids using them in exams. They were rigorously banned even in the classroom. But nowadays calculators are allowed, encouraged even, and I was shocked last week to hear that a grandson taking a language exam is to be allowed to take a dictionary into his exam, as well as bullet points for essays (though there is a 30-word limit).

Applying this approach to go, some computer assistance (or "cheating" as it would then become) could be allowed but controlled less by officials and more by social disapproval, and to some extent also by reducing time limits (as in exams - the blitz approach seems popular in online chess). A report on the BBC this week claimed calculators had actually helped people learn. It could be that allowing computers could also allow go and chess players under tournament conditions to learn faster.

Does anyone know what sort of effect computers have had in correspondence chess? It seems to generate a big debate, and one comment I found interesting was this:
Quote:
In competitive correspondence chess everybody uses the strongest computer they can get, but there are still consistent differences in strength between different players.
Computers are strong, but they're nowhere near perfect. They're extremely good in positions where calculation is the primary factor but not that good in endgames and positional play. Good correspondence players know which computer lines to trust and how to direct the computer to the critical lines.
Unassisted humans have no chance at all, of course.

 This post by John Fairbairn was liked by: Bill Spight

 Post subject: Re: “Decision: case of using computer assistance in League A #107 Posted: Sat Mar 31, 2018 7:07 am
 Tengen

Posts: 4308
Location: North Carolina
Liked others: 461
Was liked: 708
Rank: AGA 3k
GD Posts: 65
OGS: Hyperpape 4k
One of the chapters in Freakonomics concerns this sort of cheating in sumo. There was a decided pattern where the competitor who needed a win to meet some cutoff (promotion, tournament qualification? I’m not sure) would win far more often than you’d expect. By itself that’s not much evidence. Perhaps you just fight harder when it matters and take it easy when the game doesn’t matter (we see this all the time in the NBA/NFL).

The real evidence was that after one player got that important win, they were extremely likely to lose their next match against that same opponent, more likely than their normal win-loss rates would predict. It’s not enough to accuse a single competitor, but the collective pattern says someone is “loaning” a win in a match that doesn’t matter in exchange for a future win that may be much more valuable.

_________________
Occupy Babel!


 Post subject: Re: “Decision: case of using computer assistance in League A #108 Posted: Sat Mar 31, 2018 7:48 am
 Honinbo

Posts: 9445
Liked others: 2956
Was liked: 3156
hyperpape wrote:
One of the chapters in Freakonomics concerns this sort of cheating in sumo. There was a decided pattern where the competitor who needed a win to meet some cutoff (promotion, tournament qualification? I’m not sure) would win far more often than you’d expect. By itself that’s not much evidence. Perhaps you just fight harder when it matters and take it easy when the game doesn’t matter (we see this all the time in the NBA/NFL).

The real evidence was that after one player got that important win, they were extremely likely to lose their next match against that same opponent, more likely than their normal win-loss rates would predict. It’s not enough to accuse a single competitor, but the collective pattern says someone is “loaning” a win in a match that doesn’t matter in exchange for a future win that may be much more valuable.

I don't know what they said in Freakonomics, but I remember being shocked at age 13 to find out about prize money sharing in golf, where the pattern usually involved two pro golfers of differing skills. They would share their prize money in the same tournament at some unequal percentage. Later I came to understand that that arrangement tended to reduce the risk for both players by smoothing the variations in income. Something that is important if you are raising a family, buying a house, putting kids through college. Another effect might be to enable the less skilled player to remain a pro after a bad year. This could benefit pros as a whole by maintaining a more skilled field than otherwise, thus adding to the entertainment value of the sport.

I haven't played poker for a long time, but poker players used to take out that kind of insurance all the time, by buying and selling shares of each other's tournament winnings. Also, it was not unusual when everybody was eliminated except two players for them to agree to share their prize money equally and compete for the glory alone. OC, in poker there were potential problems when two players who owned parts of each other's winnings met at the same table. In one celebrated case in the 1990s two young players were kicked out of the tournament and banned for life by the sponsoring casino. One player was caught throwing away the best hand as a way to pass money to the other one, to help keep him from being knocked out. There are other ways of playing as a team at poker, as well.

The prize money schedule of most sports is set up more for the excitement of the spectators than for sustaining the sport. OC, you could let the lower level pros drop out, but at the cost of lowering the quality of the field and of discouraging young people from becoming pros in the first place.

Edit: OT, sort of.

When I was in high school a couple of little old ladies told me about some ways to cheat at bridge. At that time some people would open One Club with fewer than four cards in the suit. (Actually, a lot of people played that system.) The cheaters would politely cough before bidding One Club with only three cards in the suit. OC, everybody at the table was in on the secret, so it was not exactly cheating. Years later, when I was playing in my first national tournament, in the Swiss Team event, my partner and I faced an elderly married couple who were playing the Precision Club system, in which One Club was conventional, and players were sometimes forced to open One Diamond with only two cards in the suit. The elderly lady coughed and opened One Diamond. My partner and I just looked at each other and grinned.

_________________
At some point, doesn't thinking have to go on?


 Post subject: Re: “Decision: case of using computer assistance in League A #109 Posted: Sat Mar 31, 2018 9:27 am
 Lives in gote

Posts: 434
Liked others: 64
Was liked: 93
Rank: 4 Dan European
Bill Spight wrote:
That's a very Bayesian outlook, Uberdude. And I think that the similarity to civil and criminal law is important. I became a duplicate bridge director after taking a class from the world's best. The class did not cover cheating, and it was emphasized that the attitude of civil law was the right one. If there is an irregularity the idea is to restore equity, not to punish wrongdoing. The burden of proof is also less in civil law. I defer to the officials who found that this game was irregular, in which case throwing out the game result would restore equity. The close similarity to Leela's play is enough to say that there is something funny about this game. But it is, as Regan says, only supporting evidence of cheating. Also, cheating is not just an irregularity, it is wrongdoing that should be punished. For a finding of cheating, the burden of proof needs to be higher, just as it is in criminal law.

In English criminal law, Bayes' based statistical arguments are explicitly not allowed.

This was established in Regina versus Adams and the associated appeals.


 Post subject: Re: “Decision: case of using computer assistance in League A #110 Posted: Sat Mar 31, 2018 3:07 pm
 Honinbo

Posts: 9445
Liked others: 2956
Was liked: 3156
Bill Spight wrote:
That's a very Bayesian outlook, Uberdude.

drmwc wrote:
In English criminal law, Bayes' based statistical arguments are explicitly not allowed.

This was established in Regina versus Adams and the associated appeals.

Thanks. From what I read, it seems like an attempt was made to get the jury to apply Bayes' Theorem in their deliberations, and, after appeals, that was overridden by a different way to instruct the jury.

To be clear, here is what Uberdude said that seemed Bayesian to me.

Uberdude wrote:
If we were already in a position where it's accepted 10% or something of people are cheating online then I'd be happier with much weaker evidence to convict someone, "on the balance of probabilities" level (as for civil cases in English law).

If 10% of people cheat online, Carlo is much more likely to have cheated than if only 1% of people cheat online (to pick a number). This is what cognitive scientists refer to as the use of background knowledge. The use of background knowledge is a hallmark of Bayesian reasoning. I don't know exactly what "balance of probabilities" means in English civil law, but this particular kind of background knowledge is not allowed to be used against a defendant in US criminal law. I suspect that the same is the case in English criminal law, as well.
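The background-knowledge point can be made concrete with a toy application of Bayes' theorem. Every number here is invented for illustration: suppose a suspiciously high Leela match shows up in 20% of games where the player actually cheated, but in only 1% of honest games.

```python
# Toy Bayes'-theorem illustration of why the prior cheating rate matters.
# The likelihoods are made up for the sake of the example.
def posterior_cheat(prior, p_match_given_cheat=0.20, p_match_given_honest=0.01):
    """P(cheated | high match rate) via Bayes' theorem."""
    num = prior * p_match_given_cheat
    den = num + (1 - prior) * p_match_given_honest
    return num / den

for prior in (0.01, 0.10):
    print(f"prior {prior:.0%} -> posterior {posterior_cheat(prior):.0%}")
```

With these made-up likelihoods, a 1% prior gives a posterior of roughly 17%, while a 10% prior gives roughly 69%: the same evidence, very different conclusions, which is exactly the Bayesian point.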

_________________
At some point, doesn't thinking have to go on?


 Post subject: Re: “Decision: case of using computer assistance in League A #111 Posted: Mon Apr 02, 2018 1:29 am
 Judan

Posts: 6331
Location: Cambridge, UK
Liked others: 365
Was liked: 3435
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
There's just been another case of an accusation, conviction, and punishment for Leela assistance cheating in the online go world, this time in the Creators Invitation Tournament, an informal event for streamers (people who broadcast video of themselves talking whilst playing go). But rather than learning from the mistakes of the PGETC case with Carlo, they've made the same ones, plus some more. So the same Leela top 3 metric was used (probably without the stricter within 5% win rate of top 1 clause, though in my test that was hardly relevant), a figure of 95% found, and then the player convicted and thrown out of the tournament. On the plus side an sgf of that game with the analysis was released, but no control seems to have been done, nor human analysis of the plausibility of the moves in the game (RobertT, a dan player who's been around a while said it looked pretty normal). On the downside the analysis was done by a supposed and self-admitted troll who has a personal animosity against the accused. There's a whole load of other drama in these streaming communities which I'm not part of, but it looks to me like if you don't like a dan player you can just find one of their games with a high Leela similarity metric, accuse them of cheating and Boom! silly referees convict and punish them for you. Perhaps I should report someone for playing 100% of their moves on the intersections of the go board, a perfect match with Leela!

 This post by Uberdude was liked by: Bill Spight

 Post subject: Re: “Decision: case of using computer assistance in League A #112 Posted: Mon Apr 02, 2018 2:30 am
 Lives in sente

Posts: 1281
Liked others: 106
Was liked: 267
This latest episode, slyly announced on April 1st, seems more like a possible case of https://www.cse.buffalo.edu/~regan/ches ... esults.txt
With the former episode, you can at least start with a higher than expected performance rating, before moving on to the Leela metrics. Not sure if the Creator's Invitational Tournament has even that to help it out? But come on, it must be an April Fool - right?

_________________


 Post subject: Re: “Decision: case of using computer assistance in League A #113 Posted: Mon Apr 02, 2018 3:56 am
 Honinbo

Posts: 9445
Liked others: 2956
Was liked: 3156
Uberdude wrote:
There's just been another case of an accusation, conviction, and punishment for Leela assistance cheating in the online go world, this time in the Creators Invitation Tournament, an informal event for streamers (people who broadcast video of themselves talking whilst playing go). But rather than learning from the mistakes of the PGETC case with Carlo, they've made the same ones, plus some more. So the same Leela top 3 metric was used (probably without the stricter within 5% win rate of top 1 clause, though in my test that was hardly relevant), a figure of 95% found, and then the player convicted and thrown out of the tournament. On the plus side an sgf of that game with the analysis was released, but no control seems to have been done, nor human analysis of the plausibility of the moves in the game (RobertT, a dan player who's been around a while said it looked pretty normal).

Again, people are falling into the trap of matching plays. If you want to know whether a player plays like Leela, that is a good thing to look at. But that is not the same thing as whether a player is cheating. I'll not belabor the point, I have made it before. In fact, looking at the game, I have found evidence that Triton (White) was not cheating.

[go]$$Wcm44 Critical error, says Leela
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . O . . . . . . . . . a . . . . . |
$$ | . . O , O . . . . , . . . . . X . . . |
$$ | . . X X . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . X . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . X , . . . . . , . . . . . , X . . |
$$ | . . . O . . . . . . . . . . . . . X . |
$$ | . . X O . O . . . . . . . . . . X O . |
$$ | . . X X O . . . . . . . . . . . 1 O . |
$$ | . . . . . . . . . . . . . . . X X O . |
$$ | . . . O . . . . . . . . . . . O O X . |
$$ | . . . O . O . . . , . . . . . O X X . |
$$ | . . O X . . X X . . . . . O . X O X . |
$$ | . . . X . . . . . . . . . . . O O . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]

Leela regards the move marked 1 in the diagram as a critical error, preferring the approach at "a". According to Leela, that approach gives White a 56-44 lead, while Triton's move gives Black a 52-48 lead. Not only is it an 8 pt. blunder, according to Leela, it also yields the advantage to Black. Yes, it is Leela's second best play, but so what? It's a bad play. Furthermore, Leela's number one play, the approach, is an ordinary play, one which will not arouse suspicion. If you are going to use Leela to cheat, this is the perfect time to do so. What are your accusers going to say? "Oh, look, he didn't make a blunder"?

Edit: I can't load the commented SGF file, but here are the next few plays.

[go]$$Wcm44 Black blunders back
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . O . . . . . . . . . . . . . . . |
$$ | . . O , O . . . . , . . . . . X . . . |
$$ | . . X X . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . X . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . X , . . . . . , . . . . . , X . . |
$$ | . . . O . . . . . . . . . . . . . X . |
$$ | . . X O . O . . . . . . . . . 7 X O . |
$$ | . . X X O . . . . . . . . . 8 6 O O . |
$$ | . . . . . . . . . . . . . 9 5 X X O . |
$$ | . . . O . . . . . . . . . 4 3 O O X . |
$$ | . . . O . O . . . , . . . . 2 O X X . |
$$ | . . O X . . X X . . . . . O . X O X . |
$$ | . . . X . . . . . . . . . . . O O . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]

Perhaps White thought that Black would reply at 49 right away, but Black found Leela's best play instead. However, for White's next move Leela recommends the point of 50; the move actually played, it says, was a blunder that cost 24%.

[go]$$Bcm53 Black blunders again
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . O . . . . . . . . . . . . . . . |
$$ | . . O , O . . . . , . . . . . X . . . |
$$ | . . X X . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . X . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . X , . . . . . , . . . . . , X . . |
$$ | . . . O . . . . . . . . . . 5 . 3 X . |
$$ | . . X O . O . . . . . . . . a O X O . |
$$ | . . X X O . . . . . . . 4 1 X X O O . |
$$ | . . . . . . . . . . . . 2 O O X X O . |
$$ | . . . O . . . . . . . . . X O O O X . |
$$ | . . . O . O . . . , . . . . X O X X . |
$$ | . . O X . . X X . . . . . O . X O X . |
$$ | . . . X . . . . . . . . . . . O O . . |
$$ | . . . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]

For Leela recommends "a", but Black plays and . Leela regards as a 10% blunder. With Leela gives White a 67-33 lead.

_________________
At some point, doesn't thinking have to go on?

 This post by Bill Spight was liked by: Uberdude

 Post subject: Re: “Decision: case of using computer assistance in League A #114 Posted: Mon Apr 02, 2018 5:14 am
 Oza

Posts: 2465
Liked others: 15
Was liked: 3558
Quote:
Again, people are falling into the trap of matching plays.

I'm sure you're right, but as I don't think in a mathematical/statistical way I don't get the right vibes from that approach. I prefer just to try to see it from a practical viewpoint, and two "traps" seem to have to be negotiated even before we get to the numbers game.

1. No-one (even in the chess world, which has much more experience of these accusations) seems to have explained why, in such meaningless environments, people want to cheat, and especially why they want to cheat anonymously. It's possible to invent explanations, of course, but in the absence of proper tests none of them seems entirely satisfactory to me.

2. More relevant to this particular case, no-one has yet put forward a good case for why people would want to cheat in such a way as to pick (?randomly) one of the top three moves chosen by Leela, at least two of which might actually be awful, and not always the top one.

The latter point is even more relevant if you try to imagine a practical situation. It is probably inconvenient, to say the least, to run Leela for every move with typical amateur time controls. A more likely scenario is to use it only for critical moves. But if someone is using Leela just a handful of times in a game (and even then possibly picking just one of the top three moves), how on earth can the current statistical approach work?

I accept that cheating of various sorts goes on, but I have no answers as to how to stop it, beyond those I mentioned earlier - essentially, social disapproval. Under that system false accusations can be made, of course, but if the community also demands that accusations be supported, people can study them and just as easily voice disagreement (or even disapproval), as is happening here.

It seems to me that the accusers in the two cases here are on shaky ground. Apart from the doubtful statistical assumptions, I think we need to hear more about the motives for the accusations.


 Post subject: Re: “Decision: case of using computer assistance in League A #115 Posted: Mon Apr 02, 2018 5:20 am
 Judan

Posts: 6331
Location: Cambridge, UK
Liked others: 365
Was liked: 3435
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Javaness2 wrote:
But come on, it must be an April Fool - right?

I did wonder that, but it would be a joke in terribly bad taste to pick on a particular person and trash their reputation, and all the various related commenters in that reddit thread (including the guy who made the analysis) make it look real.

Bill Spight wrote:
Perhaps White thought that Black would reply at 49 right away, but Black found Leela's best play, . However, for Leela recommends 50. , it says, was a blunder that cost 24%.

Or perhaps he watched AlphaGo versus Ke Jie game 2 in which a very similar sequence occurred. Nothing unusual about finding good moves when you are copying good moves from pros and then messing up when you have to think for yourself (though actually the extend was also in the AG game).

 This post by Uberdude was liked by: Bill Spight

 Post subject: Re: “Decision: case of using computer assistance in League A #116 Posted: Mon Apr 02, 2018 6:13 am
 Lives with ko

Posts: 282
Liked others: 93
Was liked: 149
Rank: OGS 7 kyu
It could be interesting to run the same metrics for other bots. AQ, Leela Zero and Ray are go bots strong enough to be used for cheating. What if the metric is equally high for those other bots?

In fact, this could be used to agree on which moves are obvious enough not to be considered copied from a bot: if, among a group of X bots, more than Y bots give the same answer as the player in their top three moves, then one can consider that move an obvious move.
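The voting rule above can be sketched as follows. This is a minimal illustration, not real engine output: the bot names and candidate moves are made up, and `is_obvious` is a hypothetical helper, not part of any existing tool.

```python
# Hypothetical sketch of the proposed consensus filter: a move is
# treated as "obvious" (and so excluded from cheat-detection metrics)
# when at least `threshold` of the bots list it among their top three
# candidates. All move data below is illustrative.

def is_obvious(move, bot_top3, threshold):
    """bot_top3 maps bot name -> list of that bot's top-3 candidate moves."""
    votes = sum(1 for top3 in bot_top3.values() if move in top3)
    return votes >= threshold

bot_top3 = {
    "LeelaZero": ["D4", "Q16", "C3"],
    "AQ":        ["D4", "C3", "R4"],
    "Ray":       ["Q16", "D4", "K10"],
}

print(is_obvious("D4", bot_top3, threshold=2))   # all three bots agree
print(is_obvious("K10", bot_top3, threshold=2))  # only one bot suggests it
```

The choice of X (how many bots) and Y (the vote threshold) would still have to be agreed on, and the filter only identifies consensus moves; it says nothing by itself about the remaining, harder moves.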

_________________
I am the author of GoReviewPartner, a small software aimed at assisting reviewing a game of Go. Give it a try!


 Post subject: Re: “Decision: case of using computer assistance in League A #117 Posted: Mon Apr 02, 2018 6:41 am
 Lives with ko

Posts: 187
Liked others: 31
Was liked: 62
Rank: 2d
John Fairbairn wrote:
1. No-one (even in the chess world, which has much more experience of these accusations) seems to have explained why, in such meaningless environments, people want to cheat, and especially why they want to cheat anonymously. It's possible to invent explanations, of course, but in the absence of proper tests none of them seems entirely satisfactory to me.

I did once see it discussed in Twitch chat for a chess stream. Some people more or less admitted to using engines, saying it's kind of a game in itself to see if you can get away with it. You don't have to understand or be able to explain the mindset, you just have to be aware that it exists. I've seen it often enough that someone creates a new account, plays a meaningless online tournament and afterwards gets banned because all his moves matched engine output. People also cheat at video games by hacking their games so they can see opponents through walls. Etc.

So, the problem is real, and if we want online go to continue to exist, we need to think about what we can do to combat it. When I originally saw this thread, for me there was no reasonable doubt that cheating happened: I had the awareness that cheating can and does happen, and I don't for one moment buy 98% agreement between two playing entities (even if one postulates a teacher-student relationship between Leela and a human, styles do differ). I do know that when I run post-game analysis on my games (3d IGS, 3d OGS, 2d real life), there are wild swings in win rate and very often I choose entirely different plans than the computer.

I'm slightly less convinced now about the original example, and I do agree with Bill that move 44 seems to be a giveaway that a human was playing in the second game.

The problem with saying "we can't really say anything from fuseki/joseki moves" is that if you let LZ play your fuseki, you might have an unloseable game by move 30 or 40 and you can phone it in from there on, using the engine to check for obvious blunders. I think our game differs from chess in this regard; tactical blunders are much more of an issue in chess even at a high level, in any position, and not making any in a blitz game is a very strong indicator that you're either Magnus Carlsen or Stockfish.

 This post by bernds was liked by: Bill Spight

 Post subject: Re: “Decision: case of using computer assistance in League A #118 Posted: Mon Apr 02, 2018 6:47 am
 Honinbo

Posts: 9445
Liked others: 2956
Was liked: 3156
John Fairbairn wrote:
2. More relevant to this particular case, no-one has yet put forward a good case for why people would want to cheat in such a way as to pick (?randomly) one of the top three moves chosen by Leela, at least two of which might actually be awful, and not always the top one.

Tarzan want to win. Leela win. Tarzan make Leela plays. But wait! If Tarzan always make Leela plays, Tarzan get caught. Tarzan not always make Leela plays, also make plays Leela like, but not play. Tarzan not get caught playing Leela plays. Tarzan King of Jungle. Tarzan King of Go.

_________________
At some point, doesn't thinking have to go on?

 This post by Bill Spight was liked by: Gomoto

 Post subject: Re: “Decision: case of using computer assistance in League A #119 Posted: Mon Apr 02, 2018 6:54 am
 Honinbo

Posts: 9445
Liked others: 2956
Was liked: 3156
pnprog wrote:
Could be interesting to run the same metrics for other bots. AQ, Leela Zero and Ray are strong enough go bots to be used for cheating. What if the metric is equally high for those others bots ?

This could be used in fact to agree on what moves are really obvious to not be considered copied from a bot: if among a group of X bots, more than Y bots gives the same answer as the players in their top three moves, then one can consider that move an obvious move.

Regan's approach addresses these issues. But he has better engines. The metric, however, is not matching strong bots or engines, per se. It is playing better than you usually do, plus evidence that you are playing better because you are cheating. If you play very well, you will often choose plays that the top bots do. But if you choose obvious plays, you may be playing well, but not better than usual. Regan rates the difficulty of the plays, so obvious plays get little or no weight.
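The idea of weighting moves by difficulty can be made concrete with a toy calculation. This is not Regan's actual model (which fits a full statistical profile of the player); it is only a sketch of the principle that matching the engine on a forced move should count for almost nothing, while matching on a hard move counts a lot. The function name and difficulty values are invented for illustration.

```python
# Toy difficulty-weighted match rate. Each move is a pair:
# (did the player match the engine's choice?, difficulty weight in 0..1).
# Obvious moves (atari connections, hanetsugi) get tiny weights, so
# matching them barely moves the score.

def weighted_match_rate(moves):
    """moves: list of (matched: bool, difficulty: float)."""
    total = sum(d for _, d in moves)
    if total == 0:
        return 0.0
    matched = sum(d for m, d in moves if m)
    return matched / total

game = [
    (True, 0.05),  # connecting after atari: matching means almost nothing
    (True, 0.90),  # difficult middle-game choice matched: very informative
    (False, 0.70), # difficult move missed
    (True, 0.10),  # routine endgame move
]
print(weighted_match_rate(game))
```

Here the raw match rate is 3 out of 4 (75%), but the weighted rate is 1.05/1.75 = 60%, because most of the matches were on easy moves. Estimating the difficulty weights is of course the hard part, and is where a model like Regan's does the real work.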

_________________
At some point, doesn't thinking have to go on?


 Post subject: Re: “Decision: case of using computer assistance in League A #120 Posted: Mon Apr 02, 2018 7:04 am
 Gosei

Posts: 1499
Location: Earth
Liked others: 562
Was liked: 240
Quote:
Tarzan want to win. Leela win. Tarzan make Leela plays. ... Tarzan King of Jungle. Tarzan King of Go.

My plan too (but not by looking at Leela during the game; I learn the Leela Zero moves by heart).

(King of Ko only counts in realtime at my Go Club without computer support)

We already have a system against cheating in online go: it is called rating. I don't care if my opponent is cheating; according to the rating, he is playing around my strength.

I do not take part in any online tournaments. I only play for fun and learning. Cheating is only a problem for online tournaments in my view.

 This post by Gomoto was liked by: pnprog
