
Re: On AI vs human thinking

Posted: Sun Jun 10, 2018 1:28 pm
by jlt
Go knowledge consists of

(1) Good shapes: shapes that have potential for eye space, connecting stones; living and dead shapes; shapes that give many liberties...
(2) Good sequences: openings, josekis, invasion sequences, reduction sequences...
(3) Concepts: strength, influence, sente/gote, proverbs...

AI cannot explain its moves in terms of type (3) concepts, but can contribute to knowledge of types (1) and (2). For instance:

(1) Attaching to a stone is more often a good move than humans previously thought.
(2) New josekis.

Re: On AI vs human thinking

Posted: Mon Jun 11, 2018 6:11 am
by Mike Novack
I think I need to explain what I meant by "theories" being useful but slightly wrong shortcuts for humans. Since ladders were given as an example, they are a good starting point.

Early on, we human players learn about ladders and how to determine whether they work. At our early playing level, this is extremely useful.

But ultimately, a ladder (the potential of a ladder) can't be judged simply by whether the ladder works, but by the collective value of all the sente moves that can be made because of the potential of that ladder, weighed against the plus and minus of the ladder working or not.

Most of our go theories are like that. They give "local" answers, but go is a "global" game. Thus, if you think of josekis as theories, one could play joseki in all four corners and have a hopelessly lost game (because although each is locally correct, they do not cooperate globally over the whole board).

There may be very good evolutionary reasons why our animal brains tend to learn in terms of "theories" and "explanations" (quick, good-enough solutions). Our "neural nets" may begin not empty, but with some connections biasing toward such solutions.

Re: On AI vs human thinking

Posted: Mon Jun 11, 2018 7:06 am
by Kirby
Isn't it similar to having multiple layers in a neural network? A neural network may learn/chunk certain sub-ideas at a given layer, which become input to a subsequent layer. It may not equate to the same idea as a "ladder", but it seems similar to learning subproblems that are used as input to larger problems in subsequent layers of the network.

That being said, I don't know enough about deep reinforcement learning to say whether there's a real connection there.
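Kirby's analogy can be made concrete with a toy example (not a go engine, and all weights below are hand-picked for clarity rather than learned): a two-layer network whose first layer computes reusable sub-features and whose second layer combines them into the answer to a "larger problem".

```python
def step(x):
    """Hard threshold activation: 1 if x > 0, else 0."""
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    """One fully connected layer of step units."""
    return [step(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor(a, b):
    # Layer 1 computes two "sub-ideas" of the inputs: OR and NAND.
    hidden = layer([a, b], weights=[[1, 1], [-1, -1]], biases=[-0.5, 1.5])
    # Layer 2 combines the sub-ideas: XOR = OR AND NAND.
    return layer(hidden, weights=[[1, 1]], biases=[-1.5])[0]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

Neither hidden unit "is" XOR on its own, just as no single feature in a go net "is" a ladder; the higher layer gets the job done by reusing what the lower layer already chunked.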

Re: On AI vs human thinking

Posted: Mon Jun 11, 2018 8:18 am
by Gomoto
You do not learn to play music by theories and explanations,

at least not if you play at all well ;-)

Go, too, is an art and not a science.

There are go scholars and go players. I like and appreciate both characters, but I personally enjoy developing my play, not my knowledge, in the first place.

I doubt the argument that humans mainly learn by theories and explanations. Humans probably learn mainly by intuition.

We combine intuitive and logical features in a nice human way :)

@jlt: I think AI is quite strong at sente/gote, strength and influence. I learned a lot about these concepts by reviewing with AI.

Re: On AI vs human thinking

Posted: Mon Jun 11, 2018 8:45 am
by RobertJasiek
Gomoto wrote:Go, too, is an art and not a science.
I doubt the argument that humans mainly learn by theories and explanations. Humans probably learn mainly by intuition.
Go is art, science and much more.

Humans learn by theory, subconscious thinking and other means. Different humans emphasise different means differently. (JFTR, I learn mostly by theory and, where it is missing, develop it.)

Re: On AI vs human thinking

Posted: Mon Jun 11, 2018 11:26 am
by Bill Spight
Mike Novack wrote:I think I need to explain what I meant by "theories" being useful but slightly wrong shortcuts for humans. Since ladders were given as an example, they are a good starting point.

Early on, we human players learn about ladders and how to determine whether they work. At our early playing level, this is extremely useful.

But ultimately, a ladder (the potential of a ladder) can't be judged simply by whether the ladder works, but by the collective value of all the sente moves that can be made because of the potential of that ladder, weighed against the plus and minus of the ladder working or not.

Most of our go theories are like that. They give "local" answers, but go is a "global" game. Thus, if you think of josekis as theories, one could play joseki in all four corners and have a hopelessly lost game (because although each is locally correct, they do not cooperate globally over the whole board).
No disagreement. However, we must keep in mind that, although current bots make global evaluations, they also produce slightly wrong evaluations. (Go is not solved. ;)) The lack of theoretical shortcuts is a disadvantage for bots. Chess has good examples of positions that top engines misjudge, but humans can evaluate correctly using theory and logic. Also, chess engines utilize theory through opening books and tablebases. The Zero go bots can achieve superhuman play without theory, but who knows what the future holds?
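The opening-book/tablebase idea above can be sketched in a few lines. This is a hedged illustration, not any real engine's API: the names `opening_book`, `tablebase`, `net_eval`, and the positions/moves in them are all made up for the example. The point is only the structure: exact "theory" is consulted first, and the fallible learned evaluation is used only where no exact knowledge exists.

```python
# Illustrative exact-knowledge tables (hypothetical positions/moves).
opening_book = {"start": "move_a"}   # position -> known-good book move
tablebase = {"endgame_1": 1}         # position -> proven game result

def net_eval(position):
    # Stand-in for a learned evaluation: always a slightly uncertain estimate.
    return 0.1

def evaluate(position):
    """Prefer exact theory where it exists; fall back to the learned net."""
    if position in tablebase:
        return tablebase[position]   # proven result, zero evaluation error
    return net_eval(position)        # learned estimate, possibly slightly wrong

def choose_move(position, candidates):
    """Theory shortcut first, otherwise pick the best-scoring candidate."""
    if position in opening_book:
        return opening_book[position]
    return max(candidates, key=net_eval)
```

A Zero-style bot, by contrast, uses only the `net_eval` branch (plus search), which is exactly why positions like the one below can trip it up while a human with the right theoretical shortcut is fine.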

To underscore the point, here is an example Uberdude posted here (post #3).
[go]$$Wc
$$ ---------------------------------------
$$ | . . . . . . . . . . . . . . . . . . . |
$$ | . . . . . . . . . . . . . O . . . . . |
$$ | . . . O . . . . . . . . X O . O O O . |
$$ | . . . , . . . . . X . . . . X X X O . |
$$ | . . O . . . . . . . . . X . . . X X . |
$$ | . . . . . . . . . . . . . . . . O . . |
$$ | . . . O O . . . . . . . . . . . . . . |
$$ | . . O X . . . . . . . . . . . . . . . |
$$ | . . . X . X . . . . . . . . . . X . . |
$$ | . . X , . . . . . , . . . . . 1 2 . . |
$$ | . X X O . . . . . . . . . . . . . . . |
$$ | X O X O . O O . . . . . . . . 3 . . . |
$$ | . O X X X X O . . . . . . . . . . . . |
$$ | . O O X X O O X . . . . . . . . . . . |
$$ | a . O X O . O . . . . . . . . . . . . |
$$ | 5 4 O X X O . . . , . . . . . , X . . |
$$ | . 6 O O X . O . . O . X . . X . . . . |
$$ | O O X X . . . . . . . . . . . . . . . |
$$ | . X . . . . . . . . . . . . . . . . . |
$$ ---------------------------------------[/go]
LeelaElf missed :b6:, wanting to play at "a".

The depth of local reading needed to find :b6: is shallow. The required depth of global reading is surely much greater. The thing is to find :b6: to start with. That is not difficult for humans who are familiar with this theoretical shortcut.
[go]$$Bc
$$ | . . . . . . . . .
$$ | X X X . X . . . .
$$ | X O X . . O O . .
$$ | X O X X X X O . .
$$ | O O O X X O O . .
$$ | 5 4 O X . X O . .
$$ | 2 1 O X X . X O .
$$ | . 3 O O X . X O .
$$ -------------------[/go]
Because of damezumari, :w4: fails.

Even a player who had not seen the position below but had seen the previous one could find the descent in the following variations.
[go]$$Bc
$$ | . . . . . . . . .
$$ | . X . . . . . . .
$$ | . . . . . . . . .
$$ | . X X . X . . . .
$$ | X O X . . O O . .
$$ | 5 O X X X X O . .
$$ | . O O X X O O . .
$$ | . 4 O X . X O . .
$$ | 2 1 O X X . X O .
$$ | . 3 O O X . X O .
$$ -------------------[/go]
[go]$$Bc
$$ | . . . . . . . . .
$$ | . X . . . . . . .
$$ | . . . . . . . . .
$$ | 6 X X . X . . . .
$$ | X O X . . O O . .
$$ | 4 O X X X X O . .
$$ | . O O X X O O . .
$$ | 5 . O X . X O . .
$$ | 2 1 O X X . X O .
$$ | 7 3 O O X . X O .
$$ -------------------[/go]
Given enough time, a Zero bot could learn :b6: in the original diagram, but that would require two things, I think: first, enough similar examples; second, enough examples where the analogous play was found. With self-play, whether the analogous play would be found at all is a real question.

Re: On AI vs human thinking

Posted: Tue Jun 26, 2018 12:36 pm
by Bill Spight

Re: On AI vs human thinking

Posted: Tue Jun 26, 2018 2:16 pm
by Gomoto
He is probably wrong:

the missing part is not WHY,

the missing part is consciousness.

(Consciousness is probably an intuition of oneself, by the way ;-), even more curve fitting.)