 Post subject: Re: Measuring player mistakes versus bots
Post #21 Posted: Mon Jun 11, 2018 12:05 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Bill Spight wrote:
moha wrote:
dfan wrote:
To be clearer, what I meant with that phrase was "if you assume that the win rate accurately represents the probability that a human would win a game against another human of equal ability starting from the position in question".
This assumption also seems false. The winrate approximates the probability of the given bot winning against itself starting from the position. This is how it was trained,


Are you sure about that? In that case it would be easy to produce margin of error statistics, which, IIUC, are not given.


Tryss wrote:
No, it's not easy, because the given winrate is mostly based on the winrate output by the network's evaluation, and there is no easy way to get a margin of error for those numbers.


That's my point. :)

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Measuring player mistakes versus bots
Post #22 Posted: Mon Jun 11, 2018 12:08 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
moha wrote:
Bill Spight wrote:
moha wrote:
The winrate approximates the probability of the given bot winning against itself starting from the position. This is how it was trained,
Are you sure about that? In that case it would be easy to produce margin of error statistics, which, IIUC, are not given.
Consider the training method: from zillions of positions taken from zillions of selfplay games the value head is trained with a loss function that is the difference of its current output and the actual outcome (1/-1).


Isn't that a form of reinforcement learning? You don't need accurate winrates for that to work.

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Measuring player mistakes versus bots
Post #23 Posted: Mon Jun 11, 2018 12:26 pm 
Gosei

Posts: 1590
Liked others: 886
Was liked: 528
Rank: AGA 3k Fox 3d
GD Posts: 61
KGS: dfan
moha wrote:
dfan wrote:
To be clearer, what I meant with that phrase was "if you assume that the win rate accurately represents the probability that a human would win a game against another human of equal ability starting from the position in question".
This assumption also seems false. The winrate approximates the probability of the given bot winning against itself starting from the position. This is how it was trained, but this can be significantly different from the human winrate due to different playstyles. In fact, a drop of 2% in (bot) winrate may even be a 1% (human) winrate gain.
OK. This is all incidental to the actual point I was trying to make anyway, which has now gotten lost in the noise, so I'm just going to drop it.

 Post subject: Re: Measuring player mistakes versus bots
Post #24 Posted: Mon Jun 11, 2018 12:39 pm 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
Bill Spight wrote:
moha wrote:
Consider the training method: from zillions of positions taken from zillions of selfplay games the value head is trained with a loss function that is the difference of its current output and the actual outcome (1/-1).
Isn't that a form of reinforcement learning? You don't need accurate winrates for that to work.
It's closer to supervised learning than to "real" reinforcement learning (the selfplay cycle makes it a bit different: net -> selfplay -> new net). And the winrates will be pretty "accurate" in a sense, since the network is trained until the loss diminishes; at that point it will output reasonable values - in the positions it was trained on. Hence the need for a different test set if you are interested in its real accuracy.
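
For concreteness, a minimal numpy sketch of the loss described in the quote above (the arrays are made-up examples; real training also includes a policy term and runs over large batches of positions):
Code:
import numpy as np

# Hypothetical batch: value-head outputs in [-1, 1] and actual game outcomes (+1 win, -1 loss)
predicted_value = np.array([0.62, -0.10, 0.85, 0.30])
game_outcome    = np.array([1.0,  -1.0,  1.0,  -1.0])

# Mean squared difference between the current output and the actual outcome;
# training nudges the network weights to reduce this quantity.
value_loss = np.mean((predicted_value - game_outcome) ** 2)
print(value_loss)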

Or one could actually run hundreds of selfplays from hundreds of chosen test positions. To go back to dfan's original assumption: you could also do the same with human games starting from chosen test positions and collect the accuracy statistics.
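
As a rough sketch of the accuracy statistics such sampling would give: if you ran N selfplay games from one test position, the empirical winrate is a binomial proportion, so a normal-approximation margin of error is straightforward (the numbers are made up):
Code:
import math

def winrate_margin_of_error(wins, games, z=1.96):
    """95% normal-approximation confidence interval for an empirical winrate."""
    p = wins / float(games)
    half_width = z * math.sqrt(p * (1 - p) / games)
    return p, half_width

# Hypothetical: 540 wins out of 1000 selfplay games from one test position
p, moe = winrate_margin_of_error(540, 1000)
print("winrate {:.3f} +/- {:.3f}".format(p, moe))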

Edit: I somehow missed your comment about move selection / number of visits. What I wrote above concerns the value net only; when strengthened with search, it will most often use an average of the value evaluations at the leaves below the candidate move. And selecting on number of visits converges to selecting on average value, since the higher-valued candidates get more future visits (either reducing the average if a refutation is found, or increasing the visit count).

It's true this would work even with inaccurate values/winrates, provided at least their ordering is reasonably good. But the sampling tests above still seem possible. And by the way, if the nets were much faster, policy-net-based rollouts (almost real winrates) would be used for the evaluation.

 Post subject: Re: Measuring player mistakes versus bots
Post #25 Posted: Mon Jun 11, 2018 1:25 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Anyway, we can test the winrates by bot vs. bot self play ourselves. :)

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Measuring player mistakes versus bots
Post #26 Posted: Tue Jun 12, 2018 1:18 am 
Lives with ko

Posts: 142
Liked others: 27
Was liked: 89
Rank: 5 dan
Go Review Partner can analyze an entire game using a selection of bots.
After the analysis, it can produce a histogram showing deviations from the bot's play.
This is not direct proof of similarity: of course the joseki would be similar, as would the opening and even close fighting.
But if a player has a long game that closely matches Leela, that is cause for further examination.

Here is a histogram of one game between European pros.
Red bars are deviations from Leela's choice (moves it considers bad), and green bars are moves it considers better.


Attachment: QIQJWEPNSE.png (histogram, 22.23 KiB)
 Post subject: Re: Measuring player mistakes versus bots
Post #27 Posted: Tue Jun 12, 2018 3:11 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
It would be interesting to compare the same game with a LeelaZero analysis. When I was reviewing one of Ilya Shikshin's games with Leela 0.11, it often didn't like or expect his moves; as a 4d I thought it was sometimes right that they were bad, but sometimes I think his moves were actually better (and indeed Leela would sometimes like them once they were shown to it, a point pnprog recently explained). As LZ is more strongly opinionated I would expect more red overall, but maybe some of those bars would be relatively smaller. Of course sometimes even the Euro pros do just play pretty badly ;-) .

 Post subject: Re: Measuring player mistakes versus bots
Post #28 Posted: Tue Jun 12, 2018 3:15 am 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
Bill Spight wrote:
Anyway, we can test the winrates by bot vs. bot self play ourselves. :)
This is what I was suggesting. And for their accuracy in human games you may not even need the mentioned hundreds of special games from chosen positions: just take a large human database, get the bot's prediction (both the raw net and the search result) for a chosen sample of positions, then calculate the overall correlation with the outcomes. You could even do this separately for opening, middlegame and endgame positions (or for various winrate ranges).
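
A minimal sketch of that calibration check, assuming the (predicted winrate, actual outcome) pairs have already been extracted from such a database; it buckets the predictions and compares each bucket's average prediction with the observed win frequency (the sample data is made up):
Code:
from collections import defaultdict

def calibration_table(samples, n_buckets=10):
    """samples: list of (predicted_winrate, outcome) pairs, winrate in [0, 1], outcome 1 = win, 0 = loss."""
    buckets = defaultdict(list)
    for winrate, outcome in samples:
        buckets[min(int(winrate * n_buckets), n_buckets - 1)].append((winrate, outcome))
    rows = []
    for key in sorted(buckets):
        entries = buckets[key]
        mean_prediction = sum(w for w, _ in entries) / float(len(entries))
        observed_freq = sum(o for _, o in entries) / float(len(entries))
        rows.append((mean_prediction, observed_freq, len(entries)))
    return rows

# A well-calibrated bot shows mean_prediction close to observed_freq in every row
samples = [(0.55, 1), (0.62, 1), (0.48, 0), (0.71, 1), (0.35, 0), (0.52, 0)]
for mean_prediction, observed_freq, n in calibration_table(samples):
    print("predicted {:.2f}  observed {:.2f}  (n={})".format(mean_prediction, observed_freq, n))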

Uberdude wrote:
It would be interesting to compare the same game with a LeelaZero analysis
My first thought was taking a game between two different bots (like an LZ vs. Golaxy game from earlier) and analyzing it with a third bot (Leela?). :)

 Post subject: Re: Measuring player mistakes versus bots
Post #29 Posted: Wed Jun 13, 2018 2:01 am 
Lives with ko

Posts: 284
Liked others: 94
Was liked: 153
Rank: OGS 7 kyu
Hi!

Uberdude wrote:
This info is basically the raw data behind the win rate delta graph, so if you could somehow dump out the data for the whole game as text/file somewhere that'd be super useful, e.g. a CSV (I added a few bonus columns) like
Quote:
Move number,Colour,Bot move,Bot winrate,Game move,Game winrate,Bot choice,Policy prob
20,W,h17,54.23,j17,53.5,2,5.12
21,B,h18,46.5,h18,46.5,1,45.32
So I prepared an "analysis kit" pretty similar to the one I already prepared for Ales.
I call it a "kit" because it includes a copy of Leela 0.11 and can be used to perform batch analysis of SGF files and conversion of the resulting RSGF files to CSV files.

So inside, there are:
  • A Python file rsgf2csv.py that extracts the data from Leela's RSGF files into a CSV file. If you run it directly, it will have you select an RSGF file on your computer and then create the CSV. For example: mygame.rsgf => mygame.rsgf.csv (see the sketch below for an example of reading these CSV files).
  • A minimalist version of GRP that can only be used to perform analysis with Leela. It has been configured to run Leela with these parameters: Leela0110GTP.exe --gtp --noponder --playouts 150000 --nobook and a thinking time of 1000 seconds per move. In fact, Leela does not follow --playouts very strictly and tends to use many more playouts when she is not sure, but 150000 playouts seems to be her minimum in that case.
  • An empty folder games_to_be_analysed where you can place the SGF files you want to analyse.
  • Two batch files (bash scripts for Linux) that run the batch analysis of all SGF files in the games_to_be_analysed folder: one for Leela CPU (batch_analysis_CPU) and one for Leela GPU (batch_analysis_GPU). On Windows, the batch file first has to detect where Python is installed in order to run the analysis. It works on my Windows computer, but I am not so confident it will work on other Windows computers, so let me know.
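
As an illustration of what can be done with the CSV files produced by rsgf2csv.py (using the column layout from Uberdude's example quoted above), here is a small post-processing sketch, independent of the kit itself; the file name is hypothetical:
Code:
import csv

def summarise(csv_path):
    deltas = []
    matches = 0
    total = 0
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            total += 1
            # Winrate given up by the game move compared to Leela's preferred move
            deltas.append(float(row["Bot winrate"]) - float(row["Game winrate"]))
            if row["Bot choice"].strip() == "1":
                matches += 1
    print("moves analysed: {}".format(total))
    print("game move was Leela's top choice: {}/{}".format(matches, total))
    print("average winrate delta: {:.2f}%".format(sum(deltas) / len(deltas)))

summarise("mygame.rsgf.csv")  # hypothetical file name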

You can modify the Leela command line by modifying the config.ini file:
Code:
[Leela]
slowcommand = Leela0110GTP.exe
slowparameters = --gtp --noponder --playouts 150000 --nobook
slowtimepermove = 1000
fastcommand = Leela0110GTP_OpenCL.exe
fastparameters = --gtp --noponder --playouts 150000 --nobook
fasttimepermove = 1000

slowparameters is for the CPU analysis, and fastparameters for the GPU analysis.

If you want to perform the analysis only on a subset of moves, you can edit batch_analysis_CPU/GPU and add the --range parameter to the GRP command line. For example:
Code:
for /f "delims=" %%i in ('Assoc .py') do set filetype=%%i
set filetype=%filetype:~4%
echo filetype for .py files: %filetype%

for /f "delims=" %%i in ('Ftype %filetype%') do set pythonexe=%%i
set pythonexe=%pythonexe:~12,-7%

echo path to python interpreter: %pythonexe%

for %%f in (games_to_be_analysed/*.sgf) do (
   %pythonexe% leela_analysis.py --profil=slow --range="30-1000" "games_to_be_analysed/%%~nf.sgf"
)

for %%f in (games_to_be_analysed/*.rsgf) do (
   %pythonexe% rsgf2csv.py "games_to_be_analysed/%%~nf.rsgf"
)

echo ==================
echo Analysis completed

pause
In the above example, %pythonexe% leela_analysis.py --profil=slow --range="30-1000" "games_to_be_analysed/%%~nf.sgf" will make Leela skip the analysis of moves before 30 and after 1000, so the opening won't be analysed.

At the moment, the main drawback is that it requires Python 2.7 to be installed on the computer. For Mac users, I think the Linux version can be used, but the Leela executables need to be replaced by macOS executables, and the names of the executables have to be updated in config.ini.

Please have a try and let me know if it works, or can be improved.

Edit: in that "kit", I also set GRP to save up to 361 variations, so one can be sure no information is discarded. The --nobook parameter prevents Leela from using her joseki dictionary in the opening, so she is forced to think about all moves, including the opening ones. I deliver all this together in a zip to help make the analysis repeatable: if more people want to help analyse large volumes of data by sharing their computing power, it is easy to distribute this zip file so that everybody analyses in conditions as similar as possible to everybody else.

_________________
I am the author of GoReviewPartner, a small software aimed at assisting reviewing a game of Go. Give it a try!


This post by pnprog was liked by: Uberdude
 Post subject: Re: Measuring player mistakes versus bots
Post #30 Posted: Wed Jun 13, 2018 5:08 am 
Lives with ko

Posts: 284
Liked others: 94
Was liked: 153
Rank: OGS 7 kyu
Uberdude wrote:
I'm also thinking that we should also analyse the games with GnuGo, and any move on which GnuGo agrees with both the human and the strong bot should be discarded from the analysis as an obvious move with little information. This should help mitigate the "this was a simple game with many obvious forced moves, so it will be more similar to the bot" problem.

This can also be performed with GRP, because GnuGo has a command to produce its 10 preferred moves (maybe one could modify GnuGo's source code to get more). That is what GRP does when using GnuGo to perform an analysis.

I made a quick proof of concept using the controversial game from PGETC. I enclose the CSV file.
Attachment: WWIWTFDSGS.rsgf.csv.zip (1.03 KiB)

The column Bot choice indicates the rank of the game move among GnuGo's preferred moves, so a rank of 1 means that GnuGo would have played the same move. When the rank shows ">10", the move is not among GnuGo's 10 best moves.

I calculated the average rank for both players (using rank=11 when rank>10), and both averages are between 6 and 7.
23/83 of Black's moves match GnuGo's first choice.
14/82 of White's moves match GnuGo's first choice.
Both players played exactly 48 moves inside GnuGo's top 10.
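
For anyone who wants to reproduce those numbers from the attached CSV, here is a small sketch of the calculation (rank 11 substituted when the column shows ">10", as above; the column names follow the earlier CSV layout and the file name is assumed from the attachment):
Code:
import csv

def gnugo_rank_stats(csv_path):
    ranks = {"B": [], "W": []}
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            choice = row["Bot choice"].strip()
            rank = 11 if choice == ">10" else int(choice)  # ">10" counted as 11
            ranks[row["Colour"].strip()].append(rank)
    for colour in ("B", "W"):
        values = ranks[colour]
        first = sum(1 for r in values if r == 1)
        top10 = sum(1 for r in values if r <= 10)
        average = sum(values) / float(len(values))
        print("{}: average rank {:.1f}, {}/{} first choices, {} moves in GnuGo's top 10".format(
            colour, average, first, len(values), top10))

gnugo_rank_stats("WWIWTFDSGS.rsgf.csv")  # file name assumed from the attached zip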

_________________
I am the author of GoReviewPartner, a small software aimed at assisting reviewing a game of Go. Give it a try!

 Post subject: Re: Measuring player mistakes versus bots
Post #31 Posted: Wed Jun 13, 2018 5:17 am 
Lives with ko

Posts: 142
Liked others: 27
Was liked: 89
Rank: 5 dan
Pnprog,

first of all, I would like to thank you for GRP: it is excellent software, great work!

----

On the topic: you cannot simply count the player's moves that match GnuGo.
There are atari, peeps, joseki - and all of those would probably be answered with the best move by any player.

It is necessary to focus on important moves, move sequences, etc.
Simple statistics are not good enough.

 Post subject: Re: Measuring player mistakes versus bots
Post #32 Posted: Wed Jun 13, 2018 7:50 am 
Lives with ko

Posts: 284
Liked others: 94
Was liked: 153
Rank: OGS 7 kyu
Bojanic wrote:
On the topic: you cannot simply count the player's moves that match GnuGo.
There are atari, peeps, joseki - and all of those would probably be answered with the best move by any player.

It is necessary to focus on important moves, move sequences, etc.
Simple statistics are not good enough.

Haha, I am just some guy who can make tools that might be useful for testing your hypotheses or performing analysis :)

So I am trying to stay "neutral" on the existing PGETC case, and I won't embark on trying to develop a method to solve future cases.

But if you have some ideas that you want to apply to a large set of data, and it's too much work (and too error-prone) to do by hand, then I would be happy to help :salute:

The above was just a proof of concept of the sort of data that can be extracted from GnuGo, as Uberdude mentioned. If some of you believe it could be a useful tool in itself, then I will release it in an easy-to-use form.

Bojanic wrote:
There are atari, peeps, joseki - and all of those would probably be answered with the best move by any player
On this specific question, one way to differentiate between an important move and an urgent (forced) move with Leela would be the following (a small sketch follows below):
  • Check whether Leela proposes only one move: this is a strong indicator of a do-or-die move.
  • Check the drop in win rate between the first and second top moves. If the first top move has a 51% win rate and the second only 15%, this also indicates a forced move.
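
A minimal sketch of that heuristic, assuming Leela's candidates for a position are already available as (move, winrate) pairs sorted best first; the 20% gap threshold is just an arbitrary illustration:
Code:
def looks_forced(candidates, gap_threshold=20.0):
    """candidates: Leela's proposals as (move, winrate_percent) pairs, best first.
    Returns True if the position looks like a forced (urgent) move."""
    if len(candidates) == 1:
        return True  # Leela proposes only one move: strong do-or-die indicator
    best_winrate = candidates[0][1]
    second_winrate = candidates[1][1]
    return best_winrate - second_winrate >= gap_threshold  # e.g. 51% vs 15% is clearly forced

print(looks_forced([("D4", 51.0), ("Q16", 15.0)]))  # True
print(looks_forced([("D4", 48.0), ("C3", 45.5)]))   # False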

_________________
I am the author of GoReviewPartner, a small software aimed at assisting reviewing a game of Go. Give it a try!


This post by pnprog was liked by: Javaness2
 Post subject: Re: Measuring player mistakes versus bots
Post #33 Posted: Wed Jun 13, 2018 8:59 am 
Lives with ko

Posts: 142
Liked others: 27
Was liked: 89
Rank: 5 dan
pnprog wrote:
On this specific question, one way to differentiate between an important move and an urgent (forced) move with Leela would be:
  • Check whether Leela proposes only one move: this is a strong indicator of a do-or-die move.
  • Check the drop in win rate between the first and second top moves. If the first top move has a 51% win rate and the second only 15%, this also indicates a forced move.

It could be helpful, but some analysis would be needed.
For example, in one game I have seen a forced move with two answers, both good.
In other cases, someone might choose not to answer a peep, or to play another move nearby.

 Post subject: Re: Measuring player mistakes versus bots
Post #34 Posted: Sun Jun 17, 2018 6:30 am 
Lives with ko

Posts: 284
Liked others: 94
Was liked: 153
Rank: OGS 7 kyu
Hi!

In some other thread, you mentioned that the PGETC games also have time records. This is something that could also be extracted together with the other information, in its own column.

_________________
I am the author of GoReviewPartner, a small software aimed at assisting reviewing a game of Go. Give it a try!

 Post subject: Re: Measuring player mistakes versus bots
Post #35 Posted: Mon Jun 18, 2018 2:08 am 
Lives with ko

Posts: 284
Liked others: 94
Was liked: 153
Rank: OGS 7 kyu
That's me again!

I was thinking about something that might work, but would be a lot of work to implement:

Basically, it would consist of training a set of policy networks, each one corresponding to a specific level of play (3k, 2k, 1k, 1d, 2d, 3d...).

<Edit> to be clear, I am not proposing to train a bot, only a policy network. Not something that can play Go, no play-out, no tree search, no MC rolls, no value network...</Edit>

A policy network, as I understand it, was developed by DeepMind for their first version of AlphaGo by showing it games of strong amateur players downloaded from the internet. This policy network was used to indicate, for a specific game position, which moves a strong amateur would play. This reduced the number of moves AlphaGo had to evaluate (the evaluation being done with the value network and Monte Carlo rollouts). Later they used AlphaGo vs AlphaGo games to further improve their policy network.

So, we could try to train one policy network using ~2k players' games, then another one using ~1k players' games, then another one using ~1d players' games, and so on.

Note that we don't really care what level a given policy network is labelled with (1k or 3d); we only need them to be in increasing order of strength, ideally at regular intervals. We could classify them using Elo or simply A, B, C...

With such a set of policy networks, we could evaluate how the moves of one player in a game correlate with each of our policy networks, and draw a chart. One could expect this chart to peak at the policy network closest to that player's level.

Then, by comparing those charts across different games, we could tell that in a particular game the player did not play at his usual level.
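
To illustrate the comparison step, here is a sketch that assumes each trained policy network can be queried for the probability it assigns to the move actually played (that query interface is an assumption); each level is scored by the average log-probability of the player's moves, and the peak suggests the closest level:
Code:
import math

def level_profile(game_moves, policies):
    """game_moves: list of (position, move) pairs played by one player in a game.
    policies: dict mapping level label -> callable(position, move) that returns
    the probability that network assigns to the move (assumed interface)."""
    scores = {}
    for level, policy in policies.items():
        log_probs = [math.log(max(policy(pos, move), 1e-9)) for pos, move in game_moves]
        scores[level] = sum(log_probs) / len(log_probs)
    return scores

# Usage sketch: chart the scores per level; the peak is the estimated playing level
# profile = level_profile(moves, policies)
# best_level = max(profile, key=profile.get)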

The difficult part would be to gather enough games for training: games from players with a stable level, classified by level...

One way to do that could be to work with Go servers, more specifically with the players they use as anchors.
Now, they probably won't want to disclose publicly which players are used as anchors, but maybe this could be done under a non-disclosure agreement. Or maybe they could disclose this information once an anchor is removed; then we could download his games from the period during which he was an anchor.
Or maybe we could collaborate with Go servers to get statistics on which players have a very high rating confidence.

Once we have enough games to train our policy networks, it also opens up all sorts of possibilities regarding the rating of players or their games (for example, one could finally learn the equivalence of ranks across Go servers).

_________________
I am the author of GoReviewPartner, a small software aimed at assisting reviewing a game of Go. Give it a try!


Last edited by pnprog on Mon Jun 18, 2018 4:55 am, edited 2 times in total.
 Post subject: Re: Measuring player mistakes versus bots
Post #36 Posted: Mon Jun 18, 2018 4:14 am 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
I see two problems with training a 1k network, for example. First, to get 1k-level play you would have to disable search (otherwise you get a much higher level: someone did this with 1d games and the results were comparable to full bot strength - the policy is only used for pruning the search, and good search with 1d pruning is VERY strong). OTOH a no-search policy net will have specific NN-related oversights, atypical and different from a human 1k's.

Second, even if you get an artificial 1k player, comparing to it doesn't seem much better than comparing to other humans of similar strength. And even two 1k-s can have quite different playstyles and error distributions.

The stronger approach seems to be to compare to a "perfect" player, collect detailed error statistics (exact size of the errors in points dropped in various phases of the game), and then compare those DISTRIBUTIONS to known reference distributions. But even with this approach one should start by studying typical human error distributions, and see how similar or different two humans can be. Those errors may be quite dependent on playing style, for example.
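
As a sketch of one way to compare such distributions, assuming per-move point losses (against a strong bot's preferred move) are available both for the suspect game and for a reference set of games by players of similar strength; the two-sample Kolmogorov-Smirnov test is used here only as an example of a distribution comparison:
Code:
from scipy.stats import ks_2samp

def compare_error_distributions(suspect_losses, reference_losses):
    """Both arguments are lists of per-move point losses (0 = bot's choice, larger = bigger error).
    A very small p-value suggests the suspect's error distribution differs from the reference."""
    statistic, p_value = ks_2samp(suspect_losses, reference_losses)
    return statistic, p_value

# Made-up data: a bot-assisted game would have its losses squeezed toward zero
suspect = [0.0, 0.2, 0.0, 0.5, 0.1, 0.0, 0.3]
reference = [0.0, 1.5, 0.4, 3.0, 0.8, 2.2, 0.6]
print(compare_error_distributions(suspect, reference))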

But if you only want NN aid in detecting cheaters, you could train a net specifically for this. By showing it a lot of bot games and a lot of human games of different strengths (maybe even human+bot games), you have a direct training target if you ask whether the player was human (maybe subdivided by strength level). But since a cheater may not use the bot for all moves (only for blunder checking), such direct approaches don't seem viable.

Detailed study of error statistics seems to be the only promising way - whatever a player does will leave SOME mark on his distribution.

 Post subject: Re: Measuring player mistakes versus bots
Post #37 Posted: Mon Jun 18, 2018 4:51 am 
Lives with ko

Posts: 284
Liked others: 94
Was liked: 153
Rank: OGS 7 kyu
moha wrote:
... Second, even if you get an artificial 1k player ...

No no no, you got me wrong!

I am not proposing to train a bot, I am just proposing to train a policy network :)

_________________
I am the author of GoReviewPartner, a small software aimed at assisting reviewing a game of Go. Give it a try!

 Post subject: Re: Measuring player mistakes versus bots
Post #38 Posted: Mon Jun 18, 2018 6:04 am 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
pnprog wrote:
I am not proposing to train a bot, I am just proposing to train a policy network :)
Ok, but then:
moha wrote:
a no-search policy net will have specific NN-related oversights, atypical and different from a human 1k's.
There are some things a raw net often gets wrong, because of the lack of tactical understanding that is inevitable without search (and because of the fuzzy, approximate nature of NNs). These can be quite different from human mistakes.

EDIT: Back to the original suggestion, even assuming these policies form worthwhile comparison points: suppose you find a game where the player played better than usual (the correlation peak above). This would correspond to his error distribution being shifted/scaled a bit. How do you judge whether he was lucky, had a good day, or cheated, without a closer look at the details of his distribution?

 Post subject: Re: Measuring player mistakes versus bots
Post #39 Posted: Mon Jun 18, 2018 6:54 am 
Gosei

Posts: 1754
Liked others: 177
Was liked: 492
I think it's very hard to detect the difficulty of a move using a neural net. The levels of the problems on the website https://neuralnetgoproblems.com/ are far from accurate: some 1d problems are quite easy (common joseki moves, for instance), while some 10k problems look much harder than 10k. In addition, the strength of a player depends on
  • knowledge
  • reading.

Knowledge corresponds roughly to the neural network, and reading to simulations. Some players don't have a lot of knowledge but are good at reading, and conversely. Also, you can be (relatively) strong because you make many good moves but regular blunders, or because you make mostly small mistakes.

Maybe the following approach could work:

  • Choose a database of at least a few hundred games.
  • Choose a strong bot, like a recent version of LeelaZero.
  • Say that a position is "relevant" when it occurs between moves 30 and 150, LeelaZero evaluates the winrate as between 30% and 80%, and the move it suggests differs from the move suggested by GnuGo.
  • Define the "winrate loss" of a human move as the difference between the winrate before the move and the winrate after the move. It can be negative when the human finds a better move than LeelaZero.
  • Using the database, determine the parameters a and b such that exactly 10% of moves made by 1d players at relevant positions have a winrate loss less than a, and 10% have a winrate loss more than b.
  • Define a "good move" as a move, made at a relevant position, with winrate loss less than a.
  • Define a "bad move" as a move, made at a relevant position, with winrate loss more than b.
  • By definition, a "good move" is a move that would be found by less than 10% of 1d players, and a "bad move" is a mistake that less than 10% of 1d players would make.
  • Using the database, given a grade g (g = ..., 2k, 1k, 1d, 2d, ...), define a_g as the percentage of good moves and b_g as the percentage of bad moves made by players of grade g. The point M_g = (a_g, b_g) in the plane represents the average play of grade-g players.
  • We will say that a person played at level g during a game if the proportions of good and bad moves he made during that game are closest to the point M_g.
  • Then one can check using the database how often a 6d player plays at level 4d or conversely.

Of course I have no idea whether the above approach works at all. The reference to 1d is arbitrary, as are the 10% proportions. The approach could be refined by classifying moves as "very good", "good", "average", "bad", "blunder". The notion of "relevant position" is also arbitrary and could be refined, but as given above it is easy to check on a computer.
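
A rough sketch of the procedure above, taking as given the per-move winrate losses at relevant positions (the data structures and names are assumptions):
Code:
import numpy as np

def thresholds_from_1d(losses_1d):
    """a = 10th and b = 90th percentile of winrate losses made by 1d players at relevant positions."""
    return np.percentile(losses_1d, 10), np.percentile(losses_1d, 90)

def good_bad_proportions(losses, a, b):
    losses = np.asarray(losses)
    return (losses < a).mean(), (losses > b).mean()  # (fraction of good moves, fraction of bad moves)

def estimate_level(game_losses, reference_points, a, b):
    """reference_points: dict grade -> M_g = (a_g, b_g) averaged over the database.
    Returns the grade whose point M_g is closest to this game's proportions."""
    point = np.array(good_bad_proportions(game_losses, a, b))
    return min(reference_points,
               key=lambda g: np.linalg.norm(point - np.array(reference_points[g])))

# Usage sketch (all data hypothetical):
# a, b = thresholds_from_1d(database_losses["1d"])
# refs = {g: average good/bad proportions over grade-g games, for each grade g}
# print(estimate_level(losses_from_one_game, refs, a, b))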
