Slightly off topic: instead of a sorting network, you can consider more generally applying any sorting algorithm to a set of players.
But something implicit in tournaments, and not in sorting algorithms, is that you try to have everybody play the same number of games, with the exception of a few byes (meaning you want a heavily parallelizable algorithm: so sorting networks are indeed good candidates, but a decent tournament model should maximise parallelism above all else).
Imagine applying a standard selection algorithm to a tournament of 100 players when you want to find the top 3 to qualify them for an event.
Apply quickselect:
You select a random player (the pivot), then make him play against everybody else;
Then, if more than 3 players have beaten him, select a random player among those who beat the first pivot and make him play against all the others who beat the first pivot; those who were beaten by the first pivot are out (assuming more than 3 players beat him).
Rinse and repeat until exactly 3 players have beaten your pivot.
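The procedure above can be sketched as code. This is a hypothetical illustration, not a real tournament format: `beats(a, b)` is an assumed match-result oracle, and the sketch also handles the case the description glosses over, where *fewer* than 3 players beat the pivot (those players all qualify, and the remaining spots are filled from the players the pivot beat).

```python
import random

def top_k(players, k, beats):
    """Quickselect-style top-k: the pivot plays everybody, then we recurse.

    beats(a, b) -> True if player a wins a game against player b
    (a hypothetical, deterministic match-result oracle).
    """
    if len(players) <= k:
        return list(players)
    i = random.randrange(len(players))
    pivot = players[i]
    rest = players[:i] + players[i + 1:]
    # the pivot plays against everybody else, one partition pass
    winners, losers = [], []
    for p in rest:
        (winners if beats(p, pivot) else losers).append(p)
    if len(winners) == k:
        return winners                    # exactly k beat the pivot: done
    if len(winners) > k:
        return top_k(winners, k, beats)   # pivot and his victims are out
    # fewer than k beat the pivot: they qualify, and so does the pivot;
    # fill the remaining spots from the players the pivot beat
    return winners + [pivot] + top_k(losers, k - len(winners) - 1, beats)

# e.g. 100 players ranked by a hidden strength (here, just their number)
qualified = top_k(list(range(100)), 3, lambda a, b: a > b)
```

Note how badly this parallelizes: the first pivot plays 99 games in a row while everybody else sits idle.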
That would be a pretty dumb tournament model: the pivot plays dozens of games in a row while everybody else waits, so there is almost no parallelism at all.
Back on topic: I don't think there have been studies of the effect of errors in the comparisons, and of the resilience of sorting algorithms to that?
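One can at least experiment with this directly. Here is a minimal sketch, assuming a simple error model where each individual comparison lies independently with probability `eps` (the error model, the choice of insertion sort, and the displacement metric are all my assumptions, not something from established results):

```python
import random

def noisy_less(a, b, eps):
    # comparison that returns the wrong answer with probability eps
    truth = a < b
    return (not truth) if random.random() < eps else truth

def noisy_insertion_sort(xs, eps):
    # plain insertion sort, but every comparison may be wrong
    out = []
    for x in xs:
        i = 0
        while i < len(out) and noisy_less(out[i], x, eps):
            i += 1
        out.insert(i, x)
    return out

def displacement(xs):
    # total distance of each value from its correct sorted position
    # (assumes xs is a permutation of 0..n-1)
    return sum(abs(i - x) for i, x in enumerate(xs))

# rough experiment: average damage done by a 5% comparison error rate
trials = [displacement(noisy_insertion_sort(list(range(50)), 0.05))
          for _ in range(200)]
avg_damage = sum(trials) / len(trials)
```

With `eps = 0` this reduces to an ordinary sort; as `eps` grows, `displacement` measures how far the output drifts from the true order, which makes it easy to compare the resilience of different algorithms under the same error rate.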
In theory, there is no difference between theory and practice. In practice, there is.