On AI vs human thinking
-
gowan
- Gosei
- Posts: 1628
- Joined: Thu Apr 29, 2010 4:40 am
- Rank: senior player
- GD Posts: 1000
- Has thanked: 546 times
- Been thanked: 450 times
Re: On AI vs human thinking
I also dislike talk of programs as if they were people. I don't find the fact that the best programs can beat the best humans at go to be a matter of awe and shock. Fifty years ago the Japanese pros ruled the go world and, among Western amateurs at least, Japanese go was considered the most important to study and emulate. Thirty years or so ago, when the Chinese teams were defeating the Japanese teams in the Supergo matches, there was an indication that the pros and go styles we admired were not the end, and that something different could be good; players flocked to study the new ideas. Then came the Korean wave, playing seemingly superior go. Now, among human players, the Chinese seem to be at the top, and computers driven by programs are defeating everyone. I believe these shifts will continue and that sufficiently different AI play will appear. There is a vast amount of undiscovered go still waiting to be found.
From my perspective, the AI "players" don't understand go at all. They resemble human club players who never study or analyze their games but, through just playing a lot, reach a dan level. The "Zero" type AIs do that, playing millions and millions of games but still not "studying" or understanding theory. That's why these systems can't explain what they do. What we get is that such-and-such a move has a better chance of leading to a win than some other move does. The AI programs are just scratching the surface, not getting deep in. The new moves the AI programs discover are fun to play with, but the pros aren't using them arbitrarily; they are working hard analyzing them so they understand why they work.
-
hyperpape
- Tengen
- Posts: 4382
- Joined: Thu May 06, 2010 3:24 pm
- Rank: AGA 3k
- GD Posts: 65
- OGS: Hyperpape 4k
- Location: Caldas da Rainha, Portugal
- Has thanked: 499 times
- Been thanked: 727 times
Re: On AI vs human thinking
Kirby wrote:
Think about it. Humans have been around for thousands of years, but electricity was just being investigated a few hundred years ago. In my grandfather's lifetime, he had been using a kerosene oil lamp to study. The first airplane was invented just over a hundred years ago, and since then, we've traveled to the moon. They're discovering organic material on Mars, now. The world wide web was invented when - like the 90s? That's just a couple of decades ago.
The rate of advancement in technology isn't happening linearly. It's starting to lift off like crazy.
Here's a funny article along the same lines:
https://waitbutwhy.com/2015/01/artifici ... ion-1.html

I saw a contrarian take recently. It said something to the effect that, if you look back, technological progress might be having less of an impact. Take 1865 to 1940 (I can’t remember what years the article used): you get cars, planes, radios, phones. These technologies change some of the most fundamental things about how life is lived. Can smartphones really compete? A pessimist would say that rocketry was already an area of rapid progress by 1940, so even the moon landing, impressive as it is, was still just improving existing ideas.

I don’t know what I think, but there was something compelling about the claim.
-
Bill Spight
- Honinbo
- Posts: 10905
- Joined: Wed Apr 21, 2010 1:24 pm
- Has thanked: 3651 times
- Been thanked: 3373 times
Re: On AI vs human thinking
Edit: Corrected author of quote.

Kirby wrote:
The rate of advancement in technology isn't happening linearly. It's starting to lift off like crazy.
Here's a funny article along the same lines:
https://waitbutwhy.com/2015/01/artifici ... ion-1.html
Epigraph of the article:
Vernor Vinge wrote:
We are on the edge of change comparable to the rise of human life on Earth.
Which made me think of this quote:
Bertolt Brecht wrote:
Wir wissen, daß wir Vorläufige sind
Und nach uns wird kommen: nichts Nennenswertes.
("We know that we are forerunners
And after us will come: nothing worth mentioning.")
Last edited by Bill Spight on Sun Jun 10, 2018 2:14 am, edited 1 time in total.
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
-
Tryss
- Lives in gote
- Posts: 502
- Joined: Tue May 24, 2011 1:07 pm
- Rank: KGS 2k
- GD Posts: 100
- KGS: Tryss
- Has thanked: 1 time
- Been thanked: 153 times
Re: On AI vs human thinking
gowan wrote:
They resemble human club players who never study or analyze their games but, through just playing a lot, reach a dan level. The "Zero" type AIs do that, playing millions and millions of games but still not "studying" or understanding theory.

Is there really a theory to understand? In my opinion, go theory is "just" a crutch that allows humans to play better than with pure reading and intuition alone. It is a (human) way to create heuristics to prune the game tree.
And while theory is indeed very useful for humans, is it necessarily that useful for a program?
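Tryss's "heuristics to prune the game tree" idea can be sketched in miniature. The example below is an invented toy, not code from any go engine: alpha-beta search on one-heap Nim (take 1 to 3 stones; taking the last stone wins), where a heuristic move ordering - prefer the move that leaves a multiple of four - lets the same search reach the same answer while visiting fewer nodes.

```python
# Toy illustration: a heuristic doesn't have to be "deep understanding",
# it just has to guide the search so that pruning fires earlier.

def negamax(stones, alpha, beta, use_heuristic, counter):
    counter[0] += 1
    if stones == 0:
        return -1                         # previous player took the last stone and won
    moves = [m for m in (1, 2, 3) if m <= stones]
    if use_heuristic:
        # Heuristic ordering: leaving the opponent a multiple of 4 is best.
        moves.sort(key=lambda m: (stones - m) % 4)
    best = -1
    for m in moves:
        score = -negamax(stones - m, -beta, -alpha, use_heuristic, counter)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                         # prune: opponent won't allow this line
    return best

plain, ordered = [0], [0]
v1 = negamax(21, -2, 2, False, plain)
v2 = negamax(21, -2, 2, True, ordered)
print(v1, v2, plain[0], ordered[0])       # same value either way; fewer nodes with ordering
```

The heuristic here happens to encode the game's exact theory; a weaker or slightly wrong heuristic would still prune, just less effectively, which is essentially Tryss's point about human go theory.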
-
RobertJasiek
- Judan
- Posts: 6273
- Joined: Tue Apr 27, 2010 8:54 pm
- GD Posts: 0
- Been thanked: 797 times
- Contact:
Re: On AI vs human thinking
There are different kinds of go theory, such as
- informal, traditional go theory from guesswork to supported by empirical evidence,
- informal go theory supported by systematic study but without mathematical proof,
- formal go theory of theorems aka truths proven mathematically,
- hybrids of systematic study awaiting allegedly possible mathematical proofs or requiring working out details so that some variation of study can be proven,
- hybrids involving computers from AI guesswork (of which AlphaGo Zero is successful empirically) to algorithms being part of mathematical proving,
- etc.
Only a few people do research on theorems, but do not overlook them!
-
Bill Spight
- Honinbo
- Posts: 10905
- Joined: Wed Apr 21, 2010 1:24 pm
- Has thanked: 3651 times
- Been thanked: 3373 times
Re: On AI vs human thinking
Gomoto wrote:
@Bill (Kirby wrote ... not Gomoto)

Thanks. Corrected. I don't know how that happened. I didn't write in your name.
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
-
John Fairbairn
- Oza
- Posts: 3724
- Joined: Wed Apr 21, 2010 3:09 am
- Has thanked: 20 times
- Been thanked: 4672 times
Re: On AI vs human thinking
hyperpape wrote:
I saw a contrarian take recently. It said something to the effect that, if you look back, technological progress might be having less of an impact. Take 1865 to 1940 (I can’t remember what years the article used): you get cars, planes, radios, phones. These technologies change some of the most fundamental things about how life is lived. Can smartphones really compete? A pessimist would say that rocketry was already an area of rapid progress by 1940, so even the moon landing, impressive as it is, was still just improving existing ideas.
I don’t know what I think, but there was something compelling about the claim.

I used to gasp at the rate of change but now I feel more detached, and not always impressed.
People tend to assume technological progress is positive. It can be, but in reality it also brings overpopulation, pollution, depletion of resources and apprehension - even those who think that all that glitters is gold must surely, once in a while, have a shiver of apprehension at the potential technology carries for future wars, terrorism and spread of epidemics.
And is the impact really that great? Yes, change can be huge, and the sheer speed of movement of "progress" can lend an illusion that any change is even bigger than it really is. But I'm inclined now to think that stasis has a much bigger impact on the human condition - there's a phrase you don't often hear now but it would be useful to resurrect it.
I'll explain what I mean by stasis - lack of change - in that context. I may have mentioned here before what was for me a relatively recent discovery. I used to think "second childhood" meant when you went gaga and had to wear nappies again. Maybe it still does, but I discovered a much nicer meaning. Undistracted by work and mortgages I had time to observe my grandchildren in a leisurely way that I couldn't do with my own children. What I saw reminded me of my own childhood and the things I used to do. It was very pleasurable. But what also dawned on me gradually was that, despite them having developed fully rotatable thumbs to play video games and type on iPhones, deep down we haven't changed one bit mentally as humans.
There are physical things that astonish the grandkids. Such as when I tell them that I began life with a tin bath and outside toilet and actually WALKED to school ON MY OWN, in all weathers. Deliveries of coal and milk and other stuff were by horse and cart, and men would rush out and scoop up horse poo into sacks for their allotments. But I believe that in time they will also come to believe that the human condition hasn't really changed.
I can also look back at my own grandfather. He fought in the First World War and lost his brother there. When I was 15, and because I was good at French at school, he asked me to accompany him to the war cemeteries in France. Because his brother was in an unmarked grave, this involved going round lots of cemeteries so that he could pay his respects to every unmarked grave in the hope he got the right one. As you can imagine, I was bored and more interested in seeing if I could wangle an under-age Pernod at the café and visiting the Quai d'Orsay to see where Maigret worked. But I'm now convinced that my grandfather saw right through my behaviour, very different from his, and understood we all shared the same human condition.
Another thing that has influenced me is a slow realisation of how wise Confucius was. Well over 2,000 years ago he saw how important seemingly irrational and wasteful things like pomp and pageantry were for humans. We are not machines - or, if we are, we are far, far more complex than anything technology has come up with.
But apart from Confucius there have been countless others in the past who have made enormous contributions in both thinking and technology, and they too tend to get overlooked. I'm sure it would be easy to fill an encyclopaedia just listing such changes, but to mention a few that get overlooked by people blinded by the glitter of computer screens, think of the impact of tea in Britain (made water safe to drink, led to healthier workers, led to the Industrial Revolution), penicillin, the postal service (smart phones aren't the only way to communicate), the steam engine, fire, wheels, oil lamps, axes, ploughs, literacy for all, brushes (everyone uses several every day), vaccinations, etc etc.
Evidently, therefore, I believe that modern technology is really just improving existing ideas. I don't think you really understand that by reading about it. At least, I didn't. You have to live through it.
But there are two areas where I still feel confused. One is that while I think we take both the existence and the size of new changes in our stride (stripping away, in due time, the hype and frenzy of excitement), the rate of change may be quite another matter, and I'm not very sanguine about that. I expect some sort of explosion or implosion.
The other area is AI. I'm not certain whether this represents a paradigm shift, e.g. robots may take over from us. It's a huge intellectual idea. But at present I'm optimistic. Mankind has coped perfectly well with huge intellectual change, in, say, learning to live without religion, even to the extent that unbelievers co-exist in a world where many people cling to the old ways. It causes friction, of course, but we muddle through, as humans tend to do. I expect the AI sceptics and the AI worshippers to muddle along together, too. Muddling along is La Condition Humaine, and doesn't have to be as bleak as André Malraux painted it.
-
Bill Spight
- Honinbo
- Posts: 10905
- Joined: Wed Apr 21, 2010 1:24 pm
- Has thanked: 3651 times
- Been thanked: 3373 times
Re: On AI vs human thinking
Tryss wrote:
gowan wrote:
They resemble human club players who never study or analyze their games but, through just playing a lot, reach a dan level. The "Zero" type AIs do that, playing millions and millions of games but still not "studying" or understanding theory.
Is there really a theory to understand? In my opinion, go theory is "just" a crutch that allows humans to play better than with just pure reading and intuition. It is a (human) way to create heuristics to prune the game tree.

A lot of theory consists of heuristics, and a lot is informal. But, as Robert Jasiek points out, there is theory that is formal, either logical or mathematical. Life and death knowledge about corners is theory, for instance. The proverb "Eye vs. no eye" may be considered a heuristic, but it may also be considered an informal expression of theoretical knowledge that has been proven and formalized.
Tryss wrote:
And while theory is indeed very useful for humans, is it necessarily that useful for a program?

That depends upon how the program is written. IIUC, AlphaGo Zero and Leela Zero start out with no theoretical go knowledge and end up with none. It may be possible that other strong programs will come along that utilize go theory. Don't top level chess engines incorporate chess theory in the form of opening books and tablebases?
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
-
daal
- Oza
- Posts: 2508
- Joined: Wed Apr 21, 2010 1:30 am
- GD Posts: 0
- Has thanked: 1304 times
- Been thanked: 1128 times
Re: On AI vs human thinking
gowan wrote:
From my perspective, the AI "players" don't understand go at all. They resemble human club players who never study or analyze their games but, through just playing a lot, reach a dan level. The "Zero" type AIs do that, playing millions and millions of games but still not "studying" or understanding theory. That's why these systems can't explain what they do.

Perhaps the reason is simply that that is not what they are programmed to do. It doesn't seem implausible to me that future programmers, tired of making Zero clones, will try to focus on how the AIs justify their moves and find ways of expressing it. Bring on the Malkovich bot!
Patience, grasshopper.
-
Mike Novack
- Lives in sente
- Posts: 1045
- Joined: Mon Aug 09, 2010 9:36 am
- GD Posts: 0
- Been thanked: 182 times
Re: On AI vs human thinking
Bill Spight wrote:
That depends upon how the program is written. IIUC, AlphaGo Zero and Leela Zero start out with no theoretical go knowledge and end up with none. It may be possible that other strong programs will come along that utilize go theory. Don't top level chess engines incorporate chess theory in the form of opening books and tablebases?

The neural net programs aren't written to be able to do anything but implement a neural net. A neural net has the property of being able to learn (or be taught) to implement a function. In OUR case (the human brain) we probably start out with some connections biasing us to form "theories" of why things happen << presumably this biasing was useful in our evolutionary process >>
IF you have ever had "experimental psychology" you will have learned not only that this tendency is not uniquely human, but also that "superstition" develops in the learning process << the rat might have accidentally "learned" to do a back flip before pressing the reward lever -- the back flip of course having nothing to do with the mechanism giving the reward >>
Understand? We humans are more comfortable with "theories", thinking that we understand why. Why must we suffer? Why must we die? << to give the underlying questions of a couple of major religions >> I stuck THIS example in here because many/most of you find SOME of these theories "nonsense" -- the ones of religions other than your own.
But back to WHY thinking life might have evolved with this tendency to be subject to "superstition" << the flip side of what we think of as correct theories >>. I rather suspect that it optimizes/speeds up the learning of "what will work", given that including "superstition" elements (unnecessary parts) in a theory might cost only minor inefficiency, while the price of not learning quickly enough is severe << not learning what is a predator means getting eaten -- the "price" of learning that a non-predator is safe is just a bit of energy spent running when not necessary >>
The "theories" of WHY "the next move in a game of go is such that there is no better move" would represent shortcuts to the evaluation of that function, shortcuts that we humans might find useful even when slightly incorrect. This makes sense, since the practical evaluation of the function is an approximation of the exact function. These neural net AIs are also learning to evaluate the function, also still as inexact approximations. But they are learning to do it directly, skipping the theory business.
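Mike's last paragraph can be made concrete with a toy model (my own invention, assuming nothing about real go programs): a learner that converges on the correct evaluation function purely by backing up outcomes, without ever being handed the game's theory. The game is one-heap Nim (take 1 to 3 stones; taking the last stone wins), whose theory says positions with stones % 4 == 0 are lost for the player to move. The learner never sees that rule.

```python
# Learn value[s] = 1.0 if the player to move at s stones wins, else 0.0,
# by repeatedly backing up the values of successor positions ("skipping
# the theory business" and evaluating the function directly).

def learn_values(max_stones, sweeps=30):
    value = [0.0] * (max_stones + 1)      # value[0] = 0: no stones left, you lost
    for _ in range(sweeps):
        for s in range(1, max_stones + 1):
            # A move is good if it leaves the opponent a bad position.
            value[s] = max(1.0 - value[s - m] for m in (1, 2, 3) if m <= s)
    return value

v = learn_values(21)
losses = [s for s in range(1, 22) if v[s] == 0.0]
print(losses)    # the learner rediscovers the multiples of 4: [4, 8, 12, 16, 20]
```

The learned table agrees with the % 4 theory everywhere, yet contains no compact statement of it: that compressed, explainable form is exactly what the human "theory" adds.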
-
Bill Spight
- Honinbo
- Posts: 10905
- Joined: Wed Apr 21, 2010 1:24 pm
- Has thanked: 3651 times
- Been thanked: 3373 times
Re: On AI vs human thinking
Mike Novack wrote:
Bill Spight wrote:
That depends upon how the program is written. IIUC, AlphaGo Zero and Leela Zero start out with no theoretical go knowledge and end up with none. It may be possible that other strong programs will come along that utilize go theory. Don't top level chess engines incorporate chess theory in the form of opening books and tablebases?
The neural net programs aren't written to be able to do anything but implement a neural net. A neural net has the property of being able to learn (or be taught) to implement a function. In OUR case (the human brain) we probably start out with some connections biasing us to form "theories" of why things happen << presumably this biasing was useful in our evolutionary process >>
IF you have ever had "experimental psychology" you will have learned not only that this tendency is not uniquely human, but also that "superstition" develops in the learning process << the rat might have accidentally "learned" to do a back flip before pressing the reward lever -- the back flip of course having nothing to do with the mechanism giving the reward >>
Understand? We humans are more comfortable with "theories", thinking that we understand why.

A lot of go theories are heuristics. I suppose by superstition you mean a heuristic that is unsound. For instance, when I was learning go, textbooks said that it was incorrect to extend further from a 3-4 point than an enclosure. That is wrong. Curiously, this was at a time when the Chinese Fuseki was becoming popular.

Mike Novack wrote:
The "theories" of WHY "the next move in a game of go is such that there is no better move" would represent shortcuts to the evaluation of that function, shortcuts that we humans might find useful even when slightly incorrect. This makes sense since the practical evaluation of the function is an approximation of the exact function. These neural net AIs are also learning to evaluate the function, also still inexact approximations. But they are learning to do that directly, skipping the theory business.

Go theories need not be incorrect at all. For instance, one of the first theories that many human beginners are taught is the basic theory of ladders. The theory is simple, and because the reading of simple ladders does not tax human working memory, even human beginners can play ladders that span the board. IIUC, modern go neural nets have not learned that theory to the same depth, I suppose for two reasons: first, they cannot represent the intensive theory the way humans do, and second, they have not encountered enough instances where such long ladders are relevant to have learned them.
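As a toy illustration of the shortcut Bill describes (an invention for this post, not code from any engine): the human ladder "theory" collapses dozens of moves of whole-board reading into one cheap check. The fleeing stone zigzags between two directions, and the ladder works for the chaser unless a stone of the defender's colour - a ladder breaker - lies on the path.

```python
# Human-style ladder theory in one function: trace the diagonal path,
# look for a breaker, never read the capturing sequence move by move.

def ladder_works(start, directions, defender_stones, size=19):
    """Return True if the chased stone is caught (no breaker on the path)."""
    x, y = start
    step = 0
    while 0 <= x < size and 0 <= y < size:
        if (x, y) in defender_stones and (x, y) != start:
            return False                  # ladder breaker: the chase fails
        dx, dy = directions[step % 2]     # alternate the two chase directions
        x, y = x + dx, y + dy
        step += 1
    return True                           # path ran off the edge: stone is caught

# A ladder from the top-left area heading toward the bottom-right corner.
clean = ladder_works((2, 2), [(1, 0), (0, 1)], defender_stones=set())
broken = ladder_works((2, 2), [(1, 0), (0, 1)], defender_stones={(10, 10)})
print(clean, broken)                      # works on an empty board, fails with a breaker
```

A net, by contrast, has to have absorbed enough long-ladder positions to approximate this check implicitly, which is why the compact human theory can out-read it in this one narrow situation.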
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins
Visualize whirled peas.
Everything with love. Stay safe.
-
John Fairbairn
- Oza
- Posts: 3724
- Joined: Wed Apr 21, 2010 3:09 am
- Has thanked: 20 times
- Been thanked: 4672 times
Re: On AI vs human thinking
Bill Spight wrote:
one of the first theories that many human beginners are taught is the basic theory of ladders

Apropos of probably nothing, ladders have the distinction of being the first piece of go theory (the six-diagonals heuristic) in the go literature, appearing in the Dunhuang Classic well over a millennium ago.
I think this highlights something special about theories/heuristics for humans. They need to be practical first of all, but if they entertain us as well they become something special and have a curious knock-on effect. I can certainly remember the frisson of delight at discovering ladders. It led to an interest in go.
Other firm favourites such as extending three from a two-stone wall (and so on) likewise have an ancient pedigree. We take them for granted, but the ancient who came up with them was maybe the Demis Hassabis of his day.