All times are UTC - 8 hours [ DST ]
Post subject: Re: On AI vs human thinking
Post #41 Posted: Sun Jun 10, 2018 2:30 am
Honinbo
Tryss wrote:
Quote:
They resemble human club players who never study or analyze their games but, through just playing a lot, reach a dan level. The "Zero" type AIs do that, playing millions and millions of games but still not "studying" or understanding theory


Is there really a theory to understand? In my opinion, go theory is "just" a crutch that allows humans to play better than with pure reading and intuition alone. It is a (human) way to create heuristics to prune the game tree.


A lot of theory consists of heuristics, and a lot is informal. But, as Robert Jasiek points out, there is theory that is formal, either logical or mathematical. Life and death knowledge about corners is theory, for instance. The proverb, "Eye vs. no eye" may be considered a heuristic, but it may also be considered an informal expression of theoretical knowledge that has been proven and formalized.

Quote:
And while theory is indeed very useful for humans, is it necessarily that useful for a program?


That depends upon how the program is written. IIUC, AlphaGo Zero and Leela Zero start out with no theoretical go knowledge and end up with none. It may be possible that other strong programs will come along that utilize go theory. Don't top level chess engines incorporate chess theory in the form of opening books and tablebases?
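As a toy illustration of that kind of built-in theory, here is a minimal sketch (in Python, with made-up book lines; real engines use binary book formats such as Polyglot) of how an engine might consult an opening book before falling back on search:

```python
# Toy sketch: consult an opening book while the position is known,
# otherwise fall back to search.  The book entries here are purely
# illustrative, not a real engine's book.

OPENING_BOOK = {
    (): "e4",                      # book line: 1. e4
    ("e4", "e5"): "Nf3",           # 2. Nf3 after 1. e4 e5
    ("e4", "c5"): "Nf3",           # 2. Nf3 against the Sicilian
}

def choose_move(moves_so_far, search_fn):
    """Play from the book while in a known line; otherwise search."""
    key = tuple(moves_so_far)
    if key in OPENING_BOOK:
        return OPENING_BOOK[key]
    return search_fn(moves_so_far)

# Stand-in for a real engine's search.
fallback = lambda moves: "searched-move"

print(choose_move([], fallback))             # book hit: e4
print(choose_move(["e4", "e5"], fallback))   # book hit: Nf3
print(choose_move(["d4", "d5"], fallback))   # out of book -> search
```

Tablebases work the same way at the other end of the game: a lookup replaces search once the position is in the table.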

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

Post subject: Re: On AI vs human thinking
Post #42 Posted: Sun Jun 10, 2018 2:53 am
Oza
gowan wrote:
From my perspective, the AI "players" don't understand go at all. They resemble human club players who never study or analyze their games but, through just playing a lot, reach a dan level. The "Zero" type AIs do that, playing millions and millions of games but still not "studying" or understanding theory. That's why these systems can't explain what they do.


Perhaps the reason is simply that that is not what they are programmed to do. It doesn't seem implausible to me that future programmers, tired of making Zero clones, will try to focus on how the AIs justify their moves and find ways of expressing it. Bring on the Malkovich bot!

_________________
Patience, grasshopper.

Post subject: Re: On AI vs human thinking
Post #43 Posted: Sun Jun 10, 2018 5:38 am
Lives in sente
Bill Spight wrote:

That depends upon how the program is written. IIUC, AlphaGo Zero and Leela Zero start out with no theoretical go knowledge and end up with none. It may be possible that other strong programs will come along that utilize go theory. Don't top level chess engines incorporate chess theory in the form of opening books and tablebases?


The neural net programs aren't written to do anything but implement a neural net. A neural net has the property of being able to learn (of being taught) to implement a function. In OUR case (the human brain) we probably start out with some connections already in place, biasing us to form "theories" of why things happen << presumably this biasing was useful in our evolutionary process >>.

IF you have ever taken "experimental psychology" you will have learned that this tendency is not uniquely human, and that "superstition" develops in the learning process << the rat might accidentally "learn" to do a back flip before pressing the reward lever -- the back flip, of course, having nothing to do with the mechanism giving the reward >>.

Understand? We humans are more comfortable with "theories", thinking that we understand why. Why must we suffer? Why must we die? << to give the underlying questions of a couple of major religions >> I stuck THIS example in here because many/most of you find SOME of these theories "nonsense" -- the ones of religions other than your own.

But back to WHY thinking life might have evolved with this tendency to be subject to "superstition" << the flip side of what we think of as correct theories >>. I rather suspect that it optimizes/speeds up the learning of "what will work": including "superstition" elements (unnecessary parts) in a theory might cost only minor inefficiency, while the price of not learning quickly enough is severe << not learning what is a predator means getting eaten --- the "price" of learning that a non-predator is safe is just a bit of energy spent running when it isn't necessary >>.

The "theories" of WHY "the next move in a game of go is such that there is no better move" represent shortcuts to the evaluation of that function, shortcuts that we humans might find useful even when slightly incorrect. This makes sense, since any practical evaluation of the function is an approximation of the exact function. The neural net AIs are also learning to evaluate the function, still with inexact approximations. But they are learning to do that directly, skipping the theory business.
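The "neural net learns to implement a function" idea can be sketched in a few lines. Here a tiny two-layer network is trained by gradient descent to fit XOR; the layer sizes, learning rate, and iteration count are arbitrary toy choices, nothing like what a real go program uses:

```python
# Minimal sketch: a neural net as a trainable function approximator.
# A two-layer network learns XOR from examples (numpy only).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predicted probability of "1"
    dz2 = p - y                       # cross-entropy gradient at the output
    dz1 = (dz2 @ W2.T) * (1 - h**2)   # backpropagate through tanh
    W2 -= 0.1 * h.T @ dz2; b2 -= 0.1 * dz2.sum(0)
    W1 -= 0.1 * X.T @ dz1; b1 -= 0.1 * dz1.sum(0)

print(p.ravel())  # should end up close to [0, 1, 1, 0]
```

Nothing in the trained weights "explains" XOR; the net has simply learned to compute it, which is the point being made above.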

 
Post subject: Re: On AI vs human thinking
Post #44 Posted: Sun Jun 10, 2018 8:21 am
Honinbo
Bill Spight wrote:

That depends upon how the program is written. IIUC, AlphaGo Zero and Leela Zero start out with no theoretical go knowledge and end up with none. It may be possible that other strong programs will come along that utilize go theory. Don't top level chess engines incorporate chess theory in the form of opening books and tablebases?

Mike Novack wrote:
The neural net programs aren't written to do anything but implement a neural net. A neural net has the property of being able to learn (of being taught) to implement a function. In OUR case (the human brain) we probably start out with some connections already in place, biasing us to form "theories" of why things happen << presumably this biasing was useful in our evolutionary process >>.

IF you have ever taken "experimental psychology" you will have learned that this tendency is not uniquely human, and that "superstition" develops in the learning process << the rat might accidentally "learn" to do a back flip before pressing the reward lever -- the back flip, of course, having nothing to do with the mechanism giving the reward >>.

Understand? We humans are more comfortable with "theories", thinking that we understand why.


A lot of go theories are heuristics. I suppose by superstition you mean a heuristic that is unsound. For instance, when I was learning go, textbooks said that it was incorrect to extend further from a 3-4 point than an enclosure. That is wrong. Curiously, this was at a time when the Chinese Fuseki was becoming popular. ;) This was a heuristic that held sway for a time. If you go back a couple of centuries, players often extended further, just as we do today.

Quote:
The "theories" of WHY "the next move in a game of go is such that there is no better move" represent shortcuts to the evaluation of that function, shortcuts that we humans might find useful even when slightly incorrect. This makes sense, since any practical evaluation of the function is an approximation of the exact function. The neural net AIs are also learning to evaluate the function, still with inexact approximations. But they are learning to do that directly, skipping the theory business.


Go theories need not be incorrect at all. For instance, one of the first theories that many human beginners are taught is the basic theory of ladders. The theory is simple, and because the reading of simple ladders does not tax human working memory, even human beginners can play ladders that span the board. IIUC, modern go neural nets have not learned that theory to the same depth, I suppose for two reasons: first, they cannot represent the theory explicitly as humans do, and second, they have not encountered enough instances where such long ladders are relevant to have learned them.
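For what it's worth, the beginner's ladder rule is so mechanical that its core fits in a few lines of code. This sketch is a deliberate oversimplification: it only checks for defending stones exactly on the diagonal escape path, ignoring liberties, captures, and near-diagonal ladder breakers, but it shows the flavor of the shortcut humans use:

```python
# Toy sketch of the textbook ladder rule: the chased stone zig-zags
# along a diagonal, so (to a first approximation) the ladder works for
# the attacker unless a defending stone sits on that diagonal path.
# Real ladder reading must also track liberties and captures.

def ladder_works(start, direction, defender_stones, size=19):
    """start: (row, col) of the chased stone; direction: (dr, dc)
    diagonal, e.g. (1, 1); defender_stones: set of (row, col) of other
    stones friendly to the chased stone."""
    r, c = start
    dr, dc = direction
    while 0 <= r < size and 0 <= c < size:
        if (r, c) in defender_stones:      # ladder breaker on the path
            return False                   # the chased stones escape
        r, c = r + dr, c + dc
    return True                            # reached the edge: captured

# A stone chased from (3, 3) toward the far corner:
print(ladder_works((3, 3), (1, 1), set()))        # True: no breaker
print(ladder_works((3, 3), (1, 1), {(10, 10)}))   # False: breaker on path
```

A human applies exactly this kind of O(board size) walk instead of reading every forced exchange, which is why board-spanning ladders are easy for us.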


 
Post subject: Re: On AI vs human thinking
Post #45 Posted: Sun Jun 10, 2018 9:46 am
Oza
Quote:
one of the first theories that many human beginners are taught is the basic theory of ladders


Apropos of probably nothing, ladders have the distinction of being the first go theory (the six-diagonals heuristic) to appear in the go literature, in the Dunhuang Classic well over a millennium ago.

I think this highlights something special about theories/heuristics for humans. They need to be practical first of all, but if they entertain us as well they become something special and have a curious knock-on effect. I can certainly remember the frisson of delight at discovering ladders. It led to an interest in go.

Other firm favourites such as extending three from a two-stone wall (and so on) likewise have an ancient pedigree. We take them for granted, but the ancient who came up with them was maybe the Demis Hassabis of his day.


This post by John Fairbairn was liked by: Bill Spight
 
Post subject: Re: On AI vs human thinking
Post #46 Posted: Sun Jun 10, 2018 1:28 pm
Gosei
Go knowledge consists of

(1) Good shapes: shapes that have potential for eye space, connecting stones; living and dead shapes; shapes that give many liberties...
(2) Good sequences: openings, josekis, invasion sequences, reduction sequences...
(3) Concepts: strength, influence, sente/gote, proverbs...

AI cannot explain its moves in terms of type (3) concepts, but can contribute to knowledge of types (1) and (2). For instance:

(1) Attaching to a stone is more often a good move than humans previously thought.
(2) New josekis.


This post by jlt was liked by: Bill Spight
 
Post subject: Re: On AI vs human thinking
Post #47 Posted: Mon Jun 11, 2018 6:11 am
Lives in sente
I think I need to explain what I meant by "theories" being useful but slightly wrong shortcuts for humans. Since "ladders" were given as an example, that is a good starting point.

Early on, we human players learn about ladders and how to determine whether they work or not. At our early playing level, this is extremely useful.

But ultimately, a ladder (the potential of a ladder) can't be judged simply by whether it works or not, but by the collective value of all the sente moves made possible by the potential of that ladder, weighed against the plus and minus of the ladder working or not.

Most of our go theories are like that. They give "local" answers, but go is a "global" game. Thus, if you think of josekis as theories, one could play joseki in all four corners and have a hopelessly lost game << because although each is locally correct, they do not cooperate globally over the whole board >>.

There may be very good evolutionary reasons why our animal brains tend to learn in terms of "theories" and "explanations" << quick, good-enough solutions >>. OUR "neural nets" may begin not empty but with some connections biasing us toward such solutions.

 
Post subject: Re: On AI vs human thinking
Post #48 Posted: Mon Jun 11, 2018 7:06 am
Honinbo
KGS: Kirby
isn't it similar to having multiple layers in a neural network? a neural network may learn/chunk certain sub-ideas at a given layer, which will be input to a subsequent layer. it may not equate to the same idea as a "ladder", but it seems similar to learning subproblems that are used as input to larger problems in subsequent layers of the neural network.

that being said, i don't know enough about deep reinforcement learning to understand well if there's a connection there.
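The layered-subproblem intuition can be made concrete with a hand-wired toy network (weights chosen by hand for clarity rather than learned): the first layer computes OR and NAND as "sub-ideas", and the second layer combines those sub-results into XOR:

```python
# Sketch of "layers learn subproblems": the hidden layer computes OR
# and NAND of the inputs, and the output layer ANDs those two
# sub-features together, yielding XOR.  Weights are hand-wired.

step = lambda z: 1 if z > 0 else 0   # threshold unit

def hidden(x1, x2):
    or_unit   = step(x1 + x2 - 0.5)      # fires if either input is on
    nand_unit = step(-x1 - x2 + 1.5)     # fires unless both are on
    return or_unit, nand_unit

def xor(x1, x2):
    h1, h2 = hidden(x1, x2)              # layer 1: sub-features
    return step(h1 + h2 - 1.5)           # layer 2: AND of the features

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

No single layer of threshold units can compute XOR, so the sub-features are doing real work; whether a deep go net chunks something ladder-like in an analogous way is, as noted above, an open question.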

_________________
be immersed

 
Post subject: Re: On AI vs human thinking
Post #49 Posted: Mon Jun 11, 2018 8:18 am
Gosei
you do not learn to play music by theories and explanations

at least not if you play at all well ;-)



Go too is an art and not a science.

There are go scholars and go players. I like and appreciate both characters, but I personally enjoy developing my play, not my knowledge, in the first place.


I doubt the argument that humans mainly learn by theories and explanations. Humans probably learn mainly by intuition.

We combine intuitive and logical features in a nice human way :)


@jlt, I think AI is quite strong at sente/gote, strength and influence. I learned a lot about these concepts by reviewing with AI.

 
Post subject: Re: On AI vs human thinking
Post #50 Posted: Mon Jun 11, 2018 8:45 am
Judan
Gomoto wrote:
Go too is an art and not a science.
I doubt the argument that humans mainly learn by theories and explanations. Humans probably learn mainly by intuition.


Go is art, science and much more.

Humans learn by theory, subconscious thinking and other means. Different humans emphasise different means differently. (JFTR, I learn mostly by theory and, where it is missing, develop it.)

 
Post subject: Re: On AI vs human thinking
Post #51 Posted: Mon Jun 11, 2018 11:26 am
Honinbo
Mike Novack wrote:
I think I need to explain what I meant by "theories" being useful but slightly wrong shortcuts for humans. Since "ladders" were given as an example, that is a good starting point.

Early on, we human players learn about ladders and how to determine whether they work or not. At our early playing level, this is extremely useful.

But ultimately, a ladder (the potential of a ladder) can't be judged simply by whether it works or not, but by the collective value of all the sente moves made possible by the potential of that ladder, weighed against the plus and minus of the ladder working or not.

Most of our go theories are like that. They give "local" answers, but go is a "global" game. Thus, if you think of josekis as theories, one could play joseki in all four corners and have a hopelessly lost game << because although each is locally correct, they do not cooperate globally over the whole board >>.


No disagreement. However, we must keep in mind that, although current bots make global evaluations, they also produce slightly wrong evaluations. (Go is not solved. ;)) The lack of theoretical shortcuts is a disadvantage for bots. Chess has good examples of positions that top engines misjudge, but humans can evaluate correctly using theory and logic. Also, chess engines utilize theory through opening books and tablebases. The Zero go bots can achieve superhuman play without theory, but who knows what the future holds?

To underscore the point, here is an example Uberdude posted here (post #3).
Click Here To Show Diagram Code
[go]$$Wc
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . O . . . . . |
$$ | . . . O . . . . . . . . X O . O O O . |
$$ | . . . , . . . . . X . . . . X X X O . |
$$ | . . O . . . . . . . . . X . . . X X . |
$$ | . . . . . . . . . . . . . . . . O . . |
$$ | . . . O O . . . . . . . . . . . . . . |
$$ | . . O X . . . . . . . . . . . . . . . |
$$ | . . . X . X . . . . . . . . . . X . . |
$$ | . . X , . . . . . , . . . . . 1 2 . . |
$$ | . X X O . . . . . . . . . . . . . . . |
$$ | X O X O . O O . . . . . . . . 3 . . . |
$$ | . O X X X X O . . . . . . . . . . . . |
$$ | . O O X X O O X . . . . . . . . . . . |
$$ | a . O X O . O . . . . . . . . . . . . |
$$ | 5 4 O X X O . . . , . . . . . , X . . |
$$ | . 6 O O X . O . . O . X . . X . . . . |
$$ | O O X X . . . . . . . . . . . . . . . |
$$ | . X . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]

LeelaElf missed :b6:, wanting to play at "a".

The depth of local reading to find :b6: is shallow. The required depth of global reading is surely much greater. The thing is to find :b6: to start with. That is not difficult for humans who are familiar with this theoretical shortcut.

Click Here To Show Diagram Code
[go]$$Bc
$$ | . . . . . . . . .
$$ | X X X . X . . . .
$$ | X O X . . O O . .
$$ | X O X X X X O . .
$$ | O O O X X O O . .
$$ | 5 4 O X . X O . .
$$ | 2 1 O X X . X O .
$$ | . 3 O O X . X O .
$$ -------------------[/go]


Because of damezumari, :w4: fails.

Even a player who had not seen the position below but had seen the previous one could find the descent in the following variations.

Click Here To Show Diagram Code
[go]$$Bc
$$ | . . . . . . . . .
$$ | . X . . . . . . .
$$ | . . . . . . . . .
$$ | . X X . X . . . .
$$ | X O X . . O O . .
$$ | 5 O X X X X O . .
$$ | . O O X X O O . .
$$ | . 4 O X . X O . .
$$ | 2 1 O X X . X O .
$$ | . 3 O O X . X O .
$$ -------------------[/go]

Click Here To Show Diagram Code
[go]$$Bc
$$ | . . . . . . . . .
$$ | . X . . . . . . .
$$ | . . . . . . . . .
$$ | 6 X X . X . . . .
$$ | X O X . . O O . .
$$ | 4 O X X X X O . .
$$ | . O O X X O O . .
$$ | 5 . O X . X O . .
$$ | 2 1 O X X . X O .
$$ | 7 3 O O X . X O .
$$ -------------------[/go]


Given enough time, a Zero bot could learn :b6: in the original diagram, but it would require two things, I think. First, enough similar examples. Second, enough examples where the analogous play was found. With self-play, whether the analogous play would be found is a real question.


 
Post subject: Re: On AI vs human thinking
Post #52 Posted: Tue Jun 26, 2018 12:36 pm
Honinbo
Also Sprach Judea Pearl: https://www.quantamagazine.org/to-build ... -20180515/
:D



This post by Bill Spight was liked by: Gomoto
 
Post subject: Re: On AI vs human thinking
Post #53 Posted: Tue Jun 26, 2018 2:16 pm
Gosei
He is probably wrong,

the missing part is not WHY,

the missing part is consciousness.

(consciousness is probably an intuition of oneself, by the way ;-), even more curve fitting)

 
Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group