I think the statistical question is quite murky, unless you make some simplifying assumptions (which then make it meaningless).
It's trivially obvious that the loser of a game lost more points, relative to perfect play, than the winner did. If you count the severity of a mistake as the number of points it cost compared to perfect play, then that tells you... not much, because the loser could have made 100 one-point mistakes while the winner made 9 ten-point mistakes. The loser's average mistake was much smaller, but their mistakes were more numerous.
If the mistakes of the two players have the same frequency and size distributions, then I think (with low confidence) that, given a sufficient sample size, the loser's average mistake will be larger than the winner's. I'm not at all certain that the loser's largest mistake would (on average) be bigger than the winner's largest mistake; I suspect that depends on the exact frequency and size distributions. I could write a computer program to simulate this, but I don't think I'm quite that curious.
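For anyone who is that curious, here's roughly the sketch I have in mind, in Python. The Poisson mistake counts and exponential mistake sizes are arbitrary choices on my part (the whole point is that the real distributions are unknown):

```python
import numpy as np

def simulate(n_games=100_000, mean_mistakes=10, mean_size=2.0, seed=0):
    """Both players draw mistake counts from the same Poisson distribution
    and mistake sizes from the same exponential distribution (both assumed).
    The player who gives away more total points loses the game."""
    rng = np.random.default_rng(seed)
    stats = {"loser_avg": [], "winner_avg": [], "loser_max": [], "winner_max": []}
    for _ in range(n_games):
        a = rng.exponential(mean_size, rng.poisson(mean_mistakes))
        b = rng.exponential(mean_size, rng.poisson(mean_mistakes))
        if a.size == 0 or b.size == 0 or a.sum() == b.sum():
            continue  # skip mistake-free players and exact ties
        loser, winner = (a, b) if a.sum() > b.sum() else (b, a)
        stats["loser_avg"].append(loser.mean())
        stats["winner_avg"].append(winner.mean())
        stats["loser_max"].append(loser.max())
        stats["winner_max"].append(winner.max())
    for name, values in stats.items():
        print(f"{name}: {np.mean(values):.3f}")

simulate()
```

Whatever it prints only answers the question for these particular distributions, of course.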
And anyway, that "same distribution" requirement is almost certainly false given any two particular players.
Given all the above, I think some of you are blaspheming the holy name of Bayes. You're saying, if I read you correctly, "Given that I experienced a loss, Bayes says I should expect that my mistakes were bigger." In isolation, yes. But you're not done: you also have to run the other hypotheses through, like "my mistakes must have been more numerous" and "my opponent's mistakes were fewer and/or less severe".
You can't use Bayes unless your evidence distinguishes between those hypotheses, i.e., it has to actually be evidence for one over the others. Without knowing the players' mistake frequency and size distributions, I don't think the bare fact of a loss favors any of those explanations.
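To spell out why: Bayes moves the odds between hypotheses only by the likelihood ratio, so if every hypothesis on the table predicts a loss about equally well, observing the loss moves nothing. Toy numbers (entirely made up):

```python
# Posterior odds = prior odds * likelihood ratio.
prior_odds = 1.0            # "bigger mistakes" vs. "more numerous mistakes", even prior
p_loss_given_bigger = 0.9   # assumed: both hypotheses predict the
p_loss_given_more = 0.9     #   observed loss equally well
posterior_odds = prior_odds * (p_loss_given_bigger / p_loss_given_more)
print(posterior_odds)       # 1.0 -- the loss didn't move the needle
```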