Computers reach 5d on KGS

For discussing go computing, software announcements, etc.

How long until a bot reaches consistent 6d on KGS?

Poll ended at Fri Jul 01, 2011 2:33 pm

<3 months: 0 (no votes)
<6 months: 4 (13%)
<1 year: 15 (50%)
<2 years: 4 (13%)
<3 years: 3 (10%)
<5 years: 0 (no votes)
never: 2 (7%)
The Terminator Skynet takes over the world first: 2 (7%)

Total votes: 30

iazzi
Beginner
Posts: 16
Joined: Sun May 08, 2011 4:48 am
Rank: 9k
GD Posts: 0
KGS: iazzi
Has thanked: 6 times

Re: Computers reach 5d on KGS

Post by iazzi »

RobertJasiek wrote:Nothing changes as long as programs only use sheer calculation power. Humans will have to "justify" their superiority only when computers start to explain their decisions by human-readable reasoning and can maintain their hard- and software alone.


Actually, the goal in computer chess has shifted to using less power than before. Computers stronger than Deep Blue now run on mobile phones. Pocket Fritz 4 plays at grandmaster level and analyses (if I remember correctly) a few million times fewer positions than Deep Blue did.

Computers can and do train people by showing variations and giving positional judgement. Even champions use them.

Most people would like to think otherwise, of course: that computers are dumb and just sheer power.

I think it is a safer and more relaxed attitude to just admit that computers are better chess players and "understand" chess better, and that this will apply to go at some point. Board games are a minor use of human intelligence. If we take it too badly that computers are better than us here, what will happen when they surpass humans in the really important stuff?
Mike Novack
Lives in sente
Posts: 1045
Joined: Mon Aug 09, 2010 9:36 am
GD Posts: 0
Been thanked: 182 times

Re: Computers reach 5d on KGS

Post by Mike Novack »

More generally (on tasks that require intelligence):

Yes, back then we thought we'd soon have computers that could interpret speech, etc. But that was because we were rather naive about the difficulty of the task. In the process we have learned a great deal about human languages and how, in practice, they depend on far more than the rules of language and the meanings of words.

An example?

"The chickens are ready to eat"
"The steaks are ready to eat"

The first you find ambiguous, but the second not. Yet it is not by any meaning of words or rule of language that you determine the steaks must be cooked and couldn't be hungry. You have called your entire knowledge of how things behave in the real world into play.
Mnemonic
Lives in gote
Posts: 324
Joined: Wed Aug 11, 2010 10:41 pm
Rank: KGS 7 kyu
GD Posts: 0
KGS: Mnemonic, dude13
Location: Dresden
Has thanked: 26 times
Been thanked: 22 times

Re: Computers reach 5d on KGS

Post by Mnemonic »

hyperpape wrote:What goes unmentioned in Kurzweil's comment is that people once thought that once we had a chess playing computer, we would have computers that recognized speech, had vision and could write a novel. But it turned out that chess was vastly easier than those, and really didn't lead to much progress on those fronts. So the earlier valuation of a chess playing computer was based on massive factual errors.

Kurzweil does comment on this belief in his book, but the professional community has long since abandoned the notion that chess (or go, for that matter) is the Holy Grail of computing or AI. Sure, they are important steppingstones, but the accepted test for intelligence is still the Turing Test (or variations thereof). According to Kurzweil, 'Strong AI' will have been developed by 2030 and will have surpassed all (unaugmented) human intelligence by 2040.
While I was teaching the game to a friend of mine, my mother from the other room:
"Cutting? Killing? Poking out eyes? What the hell are you playing?"
hyperpape
Tengen
Posts: 4382
Joined: Thu May 06, 2010 3:24 pm
Rank: AGA 3k
GD Posts: 65
OGS: Hyperpape 4k
Location: Caldas da Rainha, Portugal
Has thanked: 499 times
Been thanked: 727 times

Re: Computers reach 5d on KGS

Post by hyperpape »

I have a hard time seeing the point you're making, Mike. I agree that we had naive expectations about how hard those problems would be.

Alternately, you can say that we simplified the problem space with chess. Very early on, I think, people assumed you would teach a computer to play chess by teaching it to think like a human.

In one way or another, people had assumed problems like chess were more tightly coupled to general purpose AI, and they weren't.

(Btw: that last paragraph steps into a minefield in philosophy of language. You could argue forever about whether you can understand words like 'steak' and 'eat' without realizing that a steak couldn't be hungry. I recommend against such discussions unless you take a perverse pleasure in them. Without stepping further into that minefield--we haven't even really taught computers the meaning of words, or the rules of language, so I think you're not getting the obstacles quite right).
hyperpape
Tengen
Posts: 4382
Joined: Thu May 06, 2010 3:24 pm
Rank: AGA 3k
GD Posts: 65
OGS: Hyperpape 4k
Location: Caldas da Rainha, Portugal
Has thanked: 499 times
Been thanked: 727 times

Re: Computers reach 5d on KGS

Post by hyperpape »

@Mnemonic Well, that's the reason why the importance of chess was downgraded. And it's a good reason!
Mnemonic
Lives in gote
Posts: 324
Joined: Wed Aug 11, 2010 10:41 pm
Rank: KGS 7 kyu
GD Posts: 0
KGS: Mnemonic, dude13
Location: Dresden
Has thanked: 26 times
Been thanked: 22 times

Re: Computers reach 5d on KGS

Post by Mnemonic »

hyperpape wrote:Well, that's the reason why the importance of chess was downgraded. And it's a good reason!

Are you talking about chess as a measurement of intelligence? In the professional community it was never regarded as such, and therefore cannot have been downgraded. And if you are talking about how computers are now able to beat humans, and the effect that has had on the public perception of chess, I disagree with you. The rules of chess have not changed. Humanity at large hasn't 'solved' chess yet, unlike several other board games. Why should it be downgraded?

hyperpape wrote:Alternately, you can say that we simplified the problem space with chess. Very early on, I think people assumed you would teach a computer to play chess by teaching it to think it like a human.

Right. We have taken a shortcut and taught computers to play chess by teaching them to think like a computer. That is why chess is a bad example of "human-level intelligence". Speech, on the other hand, can only be learned by teaching somebody to think like a human. That is also the point Mike is trying to make. That minefield of yours can only be navigated by a human (and probably one older than, say, 7 years).

hyperpape wrote:we haven't even really taught computers the meaning of words, or the rules of language, so I think you're not getting the obstacles quite right

Wrong. We have, and quite some time ago at that. The problem Mike is hinting at is that the meaning of words and the rules of language are not enough. You need a whole lot of reasoning power and real-world experience to understand speech. Those are the obstacles.
RobertJasiek
Judan
Posts: 6273
Joined: Tue Apr 27, 2010 8:54 pm
GD Posts: 0
Been thanked: 797 times
Contact:

Re: Computers reach 5d on KGS

Post by RobertJasiek »

Ok, it depends on how "intelligence" is defined. If the criterion is "solves problems", then programs are reasonably intelligent in go. If it is "lets humans explore variations to understand part of the solution", then programs are becoming useful. If it is "explains solutions to humans", then programs are still duffers.
hyperpape
Tengen
Posts: 4382
Joined: Thu May 06, 2010 3:24 pm
Rank: AGA 3k
GD Posts: 65
OGS: Hyperpape 4k
Location: Caldas da Rainha, Portugal
Has thanked: 499 times
Been thanked: 727 times

Re: Computers reach 5d on KGS

Post by hyperpape »

Mnemonic wrote:
hyperpape wrote:Well, that's the reason why the importance of chess was downgraded. And it's a good reason!

Are you talking about chess as a measurement of intelligence? In the professional community it was never regarded as such, and therefore cannot have been downgraded. And if you are talking about how computers are now able to beat humans, and the effect that has had on the public perception of chess, I disagree with you. The rules of chess have not changed. Humanity at large hasn't 'solved' chess yet, unlike several other board games. Why should it be downgraded?
Never? That's a long time. My understanding is that super-early AI people thought chess would be tackled via general purpose intelligence. This did quickly fall out of favor, but you said "never". I could be wrong, though.

Mnemonic wrote:Right. We have taken a shortcut and taught computers to play chess by teaching them to think like a computer. That is why chess is a bad example of "human-level intelligence". Speech, on the other hand, can only be learned by teaching somebody to think like a human. That is also the point Mike is trying to make. That minefield of yours can only be navigated by a human (and probably one older than, say, 7 years).
This is why the Kurzweil quote is a bit silly. He writes as if we're moving the goalposts out of pique. But really, it was not the accomplishment one might have thought it was.

Mnemonic wrote:
hyperpape wrote:we haven't even really taught computers the meaning of words, or the rules of language, so I think you're not getting the obstacles quite right

Wrong. We have, and quite some time ago at that. The problem Mike is hinting at is that the meaning of words and the rules of language are not enough. You need a whole lot of reasoning power and real-world experience to understand speech. Those are the obstacles.
Show me where there are computers that understand what words mean. Where are there even computers that understand syntax for natural language?
daniel_the_smith
Gosei
Posts: 2116
Joined: Wed Apr 21, 2010 8:51 am
Rank: 2d AGA
GD Posts: 1193
KGS: lavalamp
Tygem: imapenguin
IGS: lavalamp
OGS: daniel_the_smith
Location: Silicon Valley
Has thanked: 152 times
Been thanked: 330 times
Contact:

Re: Computers reach 5d on KGS

Post by daniel_the_smith »

Mike Novack wrote:a) explain "why" in human understandable terms
You are assuming that none of the existing programs can do this? Just because few of the programs have been given this capability doesn't make that so. What I believe isn't possible at the present time is giving a why for "why is move A (which has x, y, z "go reasons" behind it) better than move B (which has u, v, w go reasons behind it)". In other words, in human understandable terms, why in this instance are x, y, and z more important than u, v, and w. ...


I think when humans do this it's mostly confabulation. (In other words, your brain internally comes up with a good/bad judgment, and then you verbally come up with reasons to support your feeling. You know this is what happened if you have ever started to explain something and realized halfway through that you were totally wrong!) Some are better at producing convincing confabulations than others...

So, the hard part isn't coming up with reasons-- I could probably write a program right now with a built in set of possible reason fragments, give it a few simple rules, and it would generate moderately convincing reasons for any move in a pro game. This would be the digital equivalent of confabulation.
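As a rough illustration of that "digital confabulation" idea, such a generator really could be this simple. Everything below is hypothetical (the reason fragments, the crude region heuristic, the function names); it is a sketch of the scheme described above, not any real program:

```python
import random

# Canned reason fragments keyed to a crude feature of the move.
# No analysis happens anywhere; the output merely sounds plausible.
REASONS = {
    "corner": ["secures territory in the corner", "takes the last big corner point"],
    "edge": ["extends along the side for a stable base", "limits the opponent's framework"],
    "center": ["builds thickness toward the center", "reduces the opponent's moyo"],
}

def crude_region(x, y, size=19):
    """Classify a 0-indexed coordinate as corner, edge, or center by distance from the sides."""
    d = min(x, y, size - 1 - x, size - 1 - y)
    if d <= 3:
        near_both = min(x, size - 1 - x) <= 3 and min(y, size - 1 - y) <= 3
        return "corner" if near_both else "edge"
    return "center"

def confabulate(x, y):
    """Produce a plausible-sounding justification for any move whatsoever."""
    return f"This move {random.choice(REASONS[crude_region(x, y)])}."
```

Feed it every move of a pro game and it will cheerfully "explain" each one, which is exactly the point: the reasons need not correspond to anything.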

The hard part is making those reasons correspond with reality. And if the bot's choice is based on "in 100,000 positions, this move came out best most often", there's not going to be a way to express that. The bot would have to examine all the failed positions, identify the commonalities between them, and then it could say something like, "If I play X, it's no good because of Y; if Z, then W, ... So this move avoids most of the problems."

But even that isn't terribly useful in the way you guys might want. For the computer to produce a general principle for the situation would genuinely be impressive (unless it's via the confabulation route, in which case it's not impressive and it's questionable that it's based in reality).

The process of selecting moves is somewhat opaque, even to the person doing it. Unraveling that process and putting it into understandable terms is not trivial even when you have the source code for that process. And if your bot is based on Bayesian weighting of a jillion small automatically tuned factors or something similar, you could have written the source code and still not have even the slightest inkling of how it works.
That which can be destroyed by the truth should be.
--
My (sadly neglected, but not forgotten) project: http://dailyjoseki.com
Mnemonic
Lives in gote
Posts: 324
Joined: Wed Aug 11, 2010 10:41 pm
Rank: KGS 7 kyu
GD Posts: 0
KGS: Mnemonic, dude13
Location: Dresden
Has thanked: 26 times
Been thanked: 22 times

Re: Computers reach 5d on KGS

Post by Mnemonic »

hyperpape wrote:Never? That's a long time. My understanding is that super-early AI people thought chess would be tackled via general purpose intelligence. This did quickly fall out of favor, but you said "never". I could be wrong, though.

Honestly, I don't know how super-early AI people thought they would tackle chess. I do know that they thought general-purpose machines would have been developed by the 1960s, so that trying to conquer specific problems might be a waste of time. However, my statement did not concern the methods of trying to solve chess, but whether chess was ever considered a test of intelligence (i.e. if a computer beats a human at chess, the computer is as smart as a human). AFAIK this was never the case. The Turing Test was introduced very early (1950) and has garnered wide support over several decades of research.

hyperpape wrote:Show me where there are computers that understand what words mean. Where are there even computers that understand syntax for natural language?

Showing that computers understand words is easy: automatic spell check, online translation, search engines, etc. Now you might argue that this is an unfair comparison because computers don't actually 'understand' words, but then you have to define 'understand'. For all intents and purposes they do exactly what a human would do if you asked them to translate, spell check, etc. Or would you say that a human translating a text shows intelligence, but a computer translation does not? You can't have it both ways.

Understanding syntax is harder, but even Word does it to a certain extent. For better examples, see ELIZA, which was programmed back in the 1960s, or chatterbots in general.
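For a sense of how shallow ELIZA-style "understanding" is, here is a minimal sketch in the same spirit. The rules and responses are invented; this is not the original ELIZA, just surface pattern matching with no model of meaning:

```python
import re

# Each rule pairs a surface pattern with a canned response template.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
]

def respond(sentence):
    """Echo the matched fragment back inside a template; otherwise deflect."""
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(m.group(1))
    return "Please go on."
```

The program never represents what any word means; it only shuffles the user's own text into slots, which is why such bots seem fluent right up until they don't.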

All of these examples are imperfect, but not because of a poor understanding of words or of the concepts of language; it is because those alone are simply not enough.
ez4u
Oza
Posts: 2414
Joined: Wed Feb 23, 2011 10:15 pm
Rank: Jp 6 dan
GD Posts: 0
KGS: ez4u
Location: Tokyo, Japan
Has thanked: 2351 times
Been thanked: 1332 times

Re: Computers reach 5d on KGS

Post by ez4u »

Do or do not... there is no "why"!

GOda (Yoda's younger sister; may the force be with you!) :)
Dave Sigaty
"Short-lived are both the praiser and the praised, and rememberer and the remembered..."
- Marcus Aurelius; Meditations, VIII 21
hyperpape
Tengen
Posts: 4382
Joined: Thu May 06, 2010 3:24 pm
Rank: AGA 3k
GD Posts: 65
OGS: Hyperpape 4k
Location: Caldas da Rainha, Portugal
Has thanked: 499 times
Been thanked: 727 times

Re: Computers reach 5d on KGS

Post by hyperpape »

Mnemonic wrote:Honestly, I don't know how super-early AI people thought they would tackle chess. I do know that they thought general purpose machines would have been developed by the 60's so that trying to conquer specific problems might be wasting time. However, my statement did not regard the methods of trying to solve chess, but if chess was ever considered as a test of intelligence (i.e. if a computer beats a human in chess a computer is as smart as a human) AFAIK this was never the case. The Turing Test was introduced very early (1950) and has garnered wide support over several decades of research.
This seems to rest on the idea that there can be only one test of intelligence, and that is not plausible.

Showing that computers understand words is easy: automatic spell check, online translation, search engines, etc. Now you might argue that this is an unfair comparison because computers don't actually 'understand' words, but then you have to define 'understand'. For all intents and purposes they do exactly what a human would do if you asked them to translate, spell check, etc.
Spell check is neither here nor there, because it's only tangentially about meaning or even syntax, but take the case of translation. Humans and computers do not do the same thing.

A human, according to probably the best theory that we have, forms an internal representation of the syntax tree of what they read, one which follows the rules of a transformational grammar*. They probably incorporate general purpose knowledge in disambiguating terms that have the same spelling or sound, but different syntactic categories, but let's focus on the tree itself. That tree is simply not used by mainstream translation software, which typically uses statistical machine translation based on smaller segments of the sentence.

The difference is that the human recursively creates a tree, so that the syntax of the sentence as a whole depends on the syntactic categories of every element. The computer doesn't--it takes little bits, and uses statistics to predict how those might be translated.
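The contrast can be sketched with toy code. The grammar, the phrase table, and the function names below are all invented for illustration; real parsers and statistical MT systems are vastly more sophisticated, but the structural difference is the same:

```python
def parse_np_vp(tokens):
    """Toy recursive parse: build a nested tree, S -> (NP, VP).

    Hypothetical grammar: the first token is the subject NP and the
    remainder is the VP. A real parser recurses much deeper, but the
    point is that the output is a tree over the whole sentence.
    """
    np, vp = tokens[0], tokens[1:]
    return ("S", ("NP", np), ("VP", *vp))

# Toy phrase table for segment-based statistical translation (invented entries).
PHRASE_TABLE = {("the", "steaks"): "die Steaks", ("are", "ready"): "sind fertig"}

def segment_translate(tokens):
    """Toy phrase-based translation: slide over bigrams and look each one up.

    No tree is ever built; unknown words pass through untranslated.
    """
    out, i = [], 0
    while i < len(tokens):
        pair = tuple(tokens[i:i + 2])
        if pair in PHRASE_TABLE:
            out.append(PHRASE_TABLE[pair])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return " ".join(out)
```

The first function's output depends on the structure of the whole sentence; the second never represents the sentence at all, only local chunks.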

So there is something very fundamental that the computer does not do that is at the core of human performance. This doesn't even touch what humans do to understand what words mean--it's all just at the level of syntactic rules, but already there is a gap.

P.S. The obligation to define "understand" either applies to both of us or neither.

* I'm not a linguist, so I'm really dodgy about the categories like transformational vs. generative grammars and all that. There's probably a lot of places this could be clearer or better informed, but I hope it's accurate enough.
Joaz Banbeck
Judan
Posts: 5546
Joined: Sun Dec 06, 2009 11:30 am
Rank: 1D AGA
GD Posts: 1512
Kaya handle: Test
Location: Banbeck Vale
Has thanked: 1080 times
Been thanked: 1434 times

Re: Computers reach 5d on KGS

Post by Joaz Banbeck »

daniel_the_smith wrote:...
So, the hard part isn't coming up with reasons-- I could probably write a program right now with a built in set of possible reason fragments, give it a few simple rules, and it would generate moderately convincing reasons for any move in a pro game...


I'd find even that fascinating.
Help make L19 more organized. Make an index: https://lifein19x19.com/viewtopic.php?f=14&t=5207
RobertJasiek
Judan
Posts: 6273
Joined: Tue Apr 27, 2010 8:54 pm
GD Posts: 0
Been thanked: 797 times
Contact:

Re: Computers reach 5d on KGS

Post by RobertJasiek »

Mnemonic wrote:but then you have to define understand


Let me try...

To _understand_ something is to have a mental or stored representation, in the form of semantic expressions, such that what is described equals that something together with its characteristics, behaviours, and context embeddings.

Hm. Is this good enough???
quantumf
Lives in sente
Posts: 844
Joined: Tue Apr 20, 2010 11:36 pm
Rank: 3d
GD Posts: 422
KGS: komi
Has thanked: 180 times
Been thanked: 151 times

Re: Computers reach 5d on KGS

Post by quantumf »

Joaz Banbeck wrote:
daniel_the_smith wrote:...
So, the hard part isn't coming up with reasons-- I could probably write a program right now with a built in set of possible reason fragments, give it a few simple rules, and it would generate moderately convincing reasons for any move in a pro game...


I'd find even that fascinating.


I suspect it might be a bit harder than you say, depending on how convincing you want it to be. Plausible-sounding sentences, yes, but ones that are even remotely related to the actual move or position? Consider yourself challenged!