Mike Novack wrote:b) Until "weaknesses discovered" Well, this is more or less dated. An AI playing based on "go knowledge" might not be able to "see" something (it has no code for that), but the strongest programs of the last couple of years all use MCTS to select moves, and that algorithm doesn't depend on "go knowledge".
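For anyone unfamiliar with MCTS, the loop (select, expand, simulate, backpropagate) can be sketched in a few lines. Everything below is my own toy illustration, not code from Erica, Zen, or any real engine, and the game is a trivial Nim variant (take 1 or 2 stones, last stone wins) rather than go. Real engines add many refinements, but the core point of the post holds: the algorithm needs only the rules (legal moves, terminal test), not domain knowledge.

```python
import math
import random

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, to_move, parent=None, move=None):
        self.pile = pile          # stones remaining
        self.to_move = to_move    # player about to move (0 or 1)
        self.parent = parent
        self.move = move          # move that led to this node
        self.children = []
        self.untried = legal_moves(pile)
        self.visits = 0
        self.wins = 0.0           # wins from the parent's perspective

    def ucb1(self, c=1.4):
        # Exploitation term plus exploration bonus for rarely-visited children.
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def playout(pile, to_move):
    # Uniformly random self-play to the end; returns the winner.
    while True:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return to_move        # this player took the last stone
        to_move = 1 - to_move

def mcts(pile, iterations=3000):
    root = Node(pile, to_move=0)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one untried child, if any.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, 1 - node.to_move, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout (or exact result at a terminal node).
        if node.pile == 0:
            winner = 1 - node.to_move   # previous player took the last stone
        else:
            winner = playout(node.pile, node.to_move)
        # 4. Backpropagation: credit each node from its parent's perspective.
        while node.parent is not None:
            node.visits += 1
            if winner != node.to_move:
                node.wins += 1
            node = node.parent
        node.visits += 1  # root
    # Recommend the most-visited move.
    return max(root.children, key=lambda n: n.visits).move
```

In this toy game the losing piles are the multiples of 3, so from a pile of 4 the sketch should converge on taking 1 stone. Nothing in the code knows that; it emerges from random playouts and the visit statistics.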
I mean more general weaknesses, weaknesses of style, not specific shape weaknesses. If you find a style that beats a computer, that style will KEEP working.
This was a problem for the best chess computers right up until they finally established dominance: GMs could play a "closed" game, which made the look-aheads fiendishly difficult.
The systematic weaknesses are in the way such engines over/undervalue certain kinds of positions.
Soon computers will be strong in the areas below: 1) a computer will overpower any human in the endgame. 2) a computer will know many, many joseki and their known variations. 3) a computer will read small, closed life-and-death positions perfectly.
But they will not be able to read well if the board becomes too complicated, and they will not be good at assessing thickness.
In the game of go it takes only one move to lose the game. That is why I am saying that I cannot lose to a computer in my lifetime.
"The more we think we know about
The greater the unknown" — words by Neil Peart, music by Geddy Lee and Alex Lifeson
Magicwand wrote:That is why I am saying that I cannot lose to a computer in my lifetime.
I don't know how old you are, but looking at your picture, and you being 4d on KGS, I think you will need some serious studying to still be ahead of the computer in 30-40 years.
Although I'm in the camp that welcomes research into computer go and is pleased by progress, I'm also in the camp that believes current programs are probably significantly overranked, for various reasons - people not being serious when playing them, not analysing the program's weaknesses, etc. I also believe that computers that do score well do so often because they have strengths that work against amateurs but that would not work against professionals. In particular, they seem to have a pretty powerful endgame. I reckon even dan players among amateurs lose 10-20 points per game in this area. Most pros, however, claim to play a pretty good endgame, and mistakes are usually only of the order of a point or so.
There is also the point that the weaknesses of programs have not been properly studied yet. I know, for example, that when I have played programs, I have often tried to engineer unusual positions such as semeais and sekis, which are difficult enough that they may not have been taken account of in the programming, and which require extreme precision, something Monte Carlo may be bad at. My experience is that programs play utterly stupid moves in these cases. I am sure a pro who dedicated himself to studying these weaknesses would do even better.
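A small numerical illustration (entirely my own toy construction, not a model of any actual program) of why uniform random playouts can badly misvalue positions that demand exact sequences, like a tight semeai. Suppose the player to move wins if and only if they find the single correct move among three candidates at each of three consecutive turns:

```python
import random

def random_playout_wins(choices=3, depth=3):
    # A random playout "wins" only if it stumbles on the one correct
    # move (arbitrarily labelled 0) at every one of the critical turns.
    return all(random.randrange(choices) == 0 for _ in range(depth))

def estimate(n=100000):
    # Monte Carlo estimate of the position's value under random play.
    wins = sum(random_playout_wins() for _ in range(n))
    return wins / n

# Optimal play wins this position 100% of the time, but uniform random
# playouts win only about (1/3)**3, i.e. roughly 3.7% of the time.
```

So a playout-based evaluation sees a near-hopeless position where a precise reader sees a forced win. Real engines mitigate this with smarter playout policies and by growing the search tree into such lines, but the precision problem the post describes is real.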
The argument about computer weaknesses is often countered by the claim that they exist but are sorted out over time. This has certainly been true in chess. Exploiting the horizon effect, the lack of randomness in play, and taking the program out of its book were all strategies that even weak amateurs could use for a long time but are now irrelevant. Among chess pros, the strategy of playing closed games with as few tactics as possible has also been demolished.
But I do wonder if this really will apply to go (except in the very long term). The nature of go is such that it involves several battles over an entire game (and can be made to have even more). One mistake in chess is usually fatal against a computer. In go you get to fight again. But on the computer side, the very programming strategies that have been devised to deal with the huge branching factor involve a large measure of randomness and therefore (I presume) mistakes. You could argue that, because it is go, mistakes don't have to be fatal for the computer any more than they are for the human, but I suspect there might be a major difference. The computer would be unaware that a mistake has been made, but a pro would be aware. The computer finds it hard to change its behaviour; the human is designed to create coping strategies in new environments. Intuitively, I also feel a pro would be able to find enough precision to punish a mistake whereas a computer of the current type would not.
I also have reservations about pro reactions to computer go so far. I think there has been a large measure of politeness, or maybe noblesse oblige. One exception may have been when a program called Erica won the World Computer Olympiad recently and so got to play young Fujisawa Rina on six stones. She made a monkey out of it, relying on a large semeai strategy, incidentally. She made it plain beforehand that she wasn't going to go easy, as previous pros have apparently done, although she hadn't made any special study of computer go. This game doesn't seem to have appeared elsewhere, so I give it here as another GoGoD Christmas present (thanks to TMark, who transcribed it).
As regards pros going easy on computers, I do know this happens. I can't speak for every case, obviously, but many years ago, when I went to Japan to help market the British-made computer Shogimaster (program written by David Levy's team that had won world chess programming titles), we had the benefit of this sort of behaviour.
We had already encountered, in various visits to major Japanese companies that we hoped would take on production, and to universities doing shogi research, a resentment that a non-Japanese team had made such a product with a previously unheard-of level of play. But one evening in the Shogi Renmei (where George Hodges and I were billeted for the week in the pros' overnight quarters) we were invited to the poshest playing room and a senior pro deigned to play our program, giving a four-piece handicap (which meant he was treating us as potential dan level). Not only that, he let us win. Very soon after, someone at the Renmei arranged for a reporter from a major Japanese newspaper to interview us about this scoop of beating a pro at our first attempt (and we even got paid for the interview!).
I regard that as pure altruism, and I have seen that sort of behaviour countless times. I therefore factor it in when I see pro-program games in go.
Although I'm a computer go fan, I will be rooting for John Tromp, by the way.
I think it is close to certain that go-playing computer programs will not have the strength of a high-ranking pro any time soon, and I think that remains a true statement even if there is another breakthrough equivalent to the last.
But that wasn't the question, was it? Not high rated pro but amateur ~3 dan.
As an aside, am I alone in thinking that the latest computer tournament didn't really tell us much about the comparative strength of the top three finishers? Yes of course, there were rules in place that would define a "winner" in a situation like that* so clearly Erica "won" the contest according to the rules.
* A round robin where the results were each of the top three finishers lost exactly one game to each other and won all the rest of its games.
hyperpape wrote:Mike: sounds like what you need is a thread in the Go Rules forum. They'll definitely have something to say.
Why? I wasn't expressing an opinion that anything was wrong with the tournament rules. We like to have a well-defined winner of tournaments and so have rules in place to ensure that, and since all the contestants know the rules this is perfectly fair.
No matter what the rules (or whether it is a human or computer competition), the results sometimes allow us to draw conclusions about the strength of the contestants and sometimes they don't. I was expressing an opinion that in this case the outcome was inconclusive with regard to the strength of the top three programs. I wasn't saying that Erica shouldn't have won the tournament, just that its winning didn't let us conclude it is stronger than Zen or MFOG.
Don't others here find what people are saying even mildly odd? I don't mean the statements given as conclusions as much as the reasons presented.
Are the bots playing at the strength their ratings indicate? Well, perhaps not, but instead of people saying things like "probably the people playing the bots aren't really trying" (which would presuppose the conclusion), I would expect to see a reason for this belief. After all, if I didn't believe the bot was as strong as its rating and I were playing it, I don't think I'd want to lose to it under conditions where, given what I believe my own rating to be, I shouldn't.
We aren't seeing here postings of personal experience. Why not? My own observations are useless in this regard but surely some of you have ratings similar to some of these bots. What happens when you play against them? I know that if I believed they had grossly inflated ratings I'd want to confirm that for myself.
After reading this thread, I felt compelled to create a personal experience. Even getting a game against a bot stronger than 7k or so was very difficult, and I was unable to get a game against the strongest ones. My first game was against the PS3 1d-bot on KGS, which I won by more than 160 points in an even game. Then I gave it three stones and won by more than 50. After that I narrowly lost two games with two and three stones, then won an even game by about 70-80 points. But by then I was tired and only played because I thought the bot was seriously overranked; I would not play other people in that state. So that bot, at least, I believe is overranked.
I would love to test the "4d" zenbot, but just getting a game is too much trouble. I want to confirm its strength myself, but if I have to sit for hours trying to be the first to challenge the bot for a game, then that's too much effort, and spending energy on that would make me play worse anyway.
I think the people who say that those who think the bots are overranked should try playing them themselves should first try getting a game with the bots themselves. Or, if I am the only one experiencing difficulties with this, could someone explain to me how to get a game with a decent bot more easily?
Mildly amusing that someone would actually believe that a computer will not beat them in the next ~50 years... (A computer 50 years ago...)
Computers will beat humans in Go much sooner than in 50 years, unless an unforeseen "wall" is hit. But it works both ways, we might make practical quantum computers, or maybe even something we can't even imagine yet, and solve the whole game for all I know.
And yet, I am rooting for the human in this bet. Woo.
We aren't seeing here postings of personal experience. Why not? My own observations are useless in this regard but surely some of you have ratings similar to some of these bots. What happens when you play against them?
I've found them very easy to play against, except at blitz speeds where I can't move my mouse fast enough, or in those cases where the computer takes literally every second of its time and the game becomes so tedious that I start to mess around in the hope of triggering a quicker outcome (or I escape, and presumably the computer eventually gets the win).
I gather other people have the same experience and so I think quite a few computer wins are unearned runs. They count on the scoreboard but are not taken all that seriously.
Another factor is the weird style. A computer will play daft moves and so you start to expect them, only for you to be caught out once in a while by a proper move. More familiarity with the computer would probably obviate that.
As to arguing that if you are a 1-dan and the computer is a 1-dan you must be the same strength, I'd differ. You are just the same grade. We see a version of this paradox in human play. There are many players who have a certain reliable grade in even games but who go all to pieces when playing a handicap game. Substitute "computer" for "handicap game" and you get a similar distortion.
I managed to get a game with the 4-dan zenbot. It was blitz and I lost on time close to the end. I tried to play out the last moves by myself, and concluded that I would have won, but it was close so I am not really sure. I think that if I just got used to its strange style, I would win more easily. There was a group that I could have killed several times, but the strange moves made me forget about it.
But maybe I would have been wrong about the result if I had not lost on time; perhaps the bot had read out the rest of the game and knew it would win by 0.5. So it is either weaker than 4d, or it is actually trying to fool me.