ez4u wrote:
Bill Spight wrote:
...
Thanks for the links to Regan's writing.
He provides another link to the Parable of the Golfers ( https://www.cse.buffalo.edu//~regan/che ... lfers.html ), in which he states something close to my position.
Ken Regan wrote:
the statistical analysis can only be supporting evidence of cheating, in cases that have some other concrete distinguishing mark.
I am enough of a Bayesian to accept very strong statistical evidence by itself, especially when the question is one of throwing out a result. In that article he refers to the civil court level of evidence. For disciplinary action I think that we should require much more. IMO there is enough evidence to have Carlos's play in future tournaments monitored.
In this case shouldn't the Parable of the Golfers be modified to ask how many golfers we need in order to observe someone sink their drive on all 4 par 3's on a typical course?
First, let me say something about being a Bayesian. Before the 20th century, everybody was a Bayesian, but by then people were aware of its problems. To be a Bayesian is to believe that hypotheses have, or may have, probabilities. Frequentists believe that only events have probabilities. Frequentists won the field in the early 20th century, as ideas such as hypothesis testing were developed without assigning probabilities to hypotheses. In the mid 20th century there were still a few famous Bayesians, such as L. J. Savage and I. J. Good. I was fortunate enough to spend an afternoon with Savage, who showed me that I was a closet Bayesian.
Later in the 20th century Bayesianism had a rebirth, perhaps because programmers were able to write programs that made sense in Bayesian terms, and those programs worked.
Bayesianism, when I came across it in the mid 20th century, was largely subjectivist, because how you alter your probabilities in the face of evidence depends upon your prior beliefs. This subjective property, OC, is a major problem from a scientific point of view. I. J. Good talked about how to interrogate yourself to find out what your probabilistic beliefs were. The question you ask is a good example. What would be convincing statistical evidence in itself? How about acing all four par threes on the course?
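To put a number on that question: using Regan's 1/5000 figure for a single hole-in-one (he states it later in this post) and assuming the four holes are independent, the arithmetic for the four-ace event looks like this (my rough sketch, not Regan's own calculation):

```python
# Rough sketch: how rare is acing all four par 3s on a typical course?
# Assumes Regan's ~1/5000 chance of a scratch golfer's hole-in-one on
# one par 3, and independence between the four holes.

p_ace = 1 / 5000          # chance of acing a single par 3
p_four = p_ace ** 4       # chance of acing all four par 3s in one round

# How many golfers we would need so that one such round is *expected*:
golfers_needed = 1 / p_four

print(f"P(four aces)   = {p_four:.2e}")          # ~1.6e-15
print(f"golfers needed ~ {golfers_needed:.2e}")  # ~6.25e14
```

At roughly 625 trillion golfers required, that event really would be convincing on its own, which is the point of the modified question.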
Quote:
As a Bayesian, if you start with an expectation of 80% accuracy (the high end of what was observed in FTF games), how do you interpret 98 out of 100 (or 49 out of 50?)? [This is not argumentative; I simply would like to know!]
IIRC, Uberdude observed 89% in his opponent, who was of comparable strength but had not undergone Carlos's training regime. Anyway, as a Bayesian, you have to have a non-zero belief in any probability that you may possibly come to assign to a hypothesis, or you will never assign that probability. IOW, your beliefs have a distribution over a range of probabilities. So you can't just focus on, say, 75% or 80% in itself.
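To make "a distribution over a range of probabilities" concrete, here is a minimal sketch of one Bayesian update (my illustration, with assumed numbers, not anything from the thread's analysis): start from a Beta prior whose mean is the 80% match rate mentioned in the quote, then update on a hypothetical 98-matches-out-of-100 observation.

```python
# Minimal sketch of a Bayesian update on a match rate.
# Assumptions (mine, for illustration only): a Beta(8, 2) prior, whose
# mean 8/(8+2) = 0.8 stands in for "I expect about 80% accuracy", and a
# hypothetical observation of 98 Leela-matching moves out of 100.

prior_a, prior_b = 8, 2
matches, trials = 98, 100

# Beta-binomial conjugacy: posterior is Beta(a + successes, b + failures)
post_a = prior_a + matches
post_b = prior_b + (trials - matches)

post_mean = post_a / (post_a + post_b)
print(f"posterior mean match rate = {post_mean:.3f}")  # ~0.964
```

The point is that the whole prior distribution matters: a diffuse prior centered at 0.8 is pulled most of the way to the data, whereas a very concentrated prior at 0.8 would move much less on the same evidence.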
I know in a way I am dodging the question, but let me say that first, I would want to know more about the games and what kind of moves he made which were like Leela's and what kind were not, and so on. Second, like just about everybody else, I think that there is enough evidence to think that in this one game he played unusually like Leela. But there could be any number of reasons for that. Cheating is not the only hypothesis to consider. It may be the one with the highest prior degree of belief, and the one that is most bolstered by the evidence, but that still may not give it a high posterior degree of belief. It may be the most supported hypothesis, but that is a different question.
Another question that Bayesians must ask is how do we know what we know? An example from the early scientific literature has to do with the probability that the sun will rise tomorrow. Suppose we start by assigning equal weight to each possible probability that the sun would rise the next day, starting from day 1, which in those days was supposed to be some 6,000 years in the past. As we update our beliefs with the sun rising every day for 6,000 years, the probability that it will rise tomorrow becomes very, very close to 1. (Yay!) But someone, I forget who and where, published a paper pointing out that on the same evidence the probability that the sun would rise 5,000 years from today was only ⅔. That result was not appealing.
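The calculation described here is Laplace's rule of succession: with a uniform prior on the daily probability and n consecutive sunrises observed, the chance of a sunrise tomorrow is (n+1)/(n+2), and the chance of k further consecutive sunrises is (n+1)/(n+k+1). A quick sketch (the exact numbers depend on whether you count trials in days or years, so they need not match the ⅔ figure exactly):

```python
# Laplace's rule of succession: after n consecutive successes under a
# uniform prior, P(one more success) = (n + 1) / (n + 2), and
# P(k more consecutive successes) = (n + 1) / (n + k + 1).

n = 6000 * 365   # ~6,000 years of daily sunrises (illustrative count)
k = 5000 * 365   # the next ~5,000 years of days

p_tomorrow = (n + 1) / (n + 2)
p_next_5000_years = (n + 1) / (n + k + 1)

print(f"P(sunrise tomorrow)              = {p_tomorrow:.7f}")         # ~0.9999995
print(f"P(sun rises for 5,000 more years) = {p_next_5000_years:.3f}")  # ~0.545
```

Either way, the long-horizon probability comes out uncomfortably close to a coin flip, which is exactly why the result was "not appealing".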
That result illustrates the problematic nature of Bayesianism for science. Speaking for myself, my belief about the sun rising has to do with the rotation of the earth. Given that knowledge, whether the sun rose yesterday is irrelevant to whether it will rise tomorrow.
In the parable of the golfers Regan states that the chance of a scratch golfer making a hole in one on a par three is about 1/5000. If we gathered 10,000 such golfers and had them each hit a ball from the tee on a par three hole, we would expect 2 of them to make a hole in one. He goes on to suppose that we slipped a piece of paper with a black dot on it into the pocket of 10 of those 10,000, and one of them made a hole in one. The probability that (at least) one of them would do so is around 1/500, i.e., 1 - (4999/5000)^10. The statistical evidence is very strong that this is not just a chance event, that there is a reason for it. Good enough, Regan suggests, to win a civil case in court.
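Both of the parable's numbers check out; here is the arithmetic as a quick sketch:

```python
# Checking the parable's arithmetic, using Regan's stated 1/5000 chance
# of a hole-in-one for a scratch golfer on a par three.
p_ace = 1 / 5000

# Expected holes-in-one among 10,000 golfers each hitting one tee shot:
expected = 10_000 * p_ace
print(f"expected aces among 10,000 golfers: {expected:.1f}")  # 2.0

# Probability that at least one of the 10 black-dot golfers aces:
p_at_least_one = 1 - (4999 / 5000) ** 10
print(f"P(at least one of 10 aces) = {p_at_least_one:.5f}")   # ~0.00200, i.e. ~1/500
```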
Something that he does not go on to say, but I think that he should have, is that once we know about the black dots, we should chalk that hole in one up to chance. Having a black dot in your pocket is irrelevant to your golfing ability. But in the parable he has the black dot stand for "physical or observational evidence of cheating". I.e., given such evidence, the hole in one (a statistically very rare occurrence) may offer support. In this case, all we have is the statistical evidence. IMO, this evidence supports looking for physical or observational evidence of cheating. For example, by monitoring Carlos's play in the future.
Frequentists agree, BTW. I still remember the prof announcing, "Statistics proves nothing."