Commonsense Go

For discussing go computing, software announcements, etc.
Kirby
Honinbo
Posts: 9553
Joined: Wed Feb 24, 2010 6:04 pm
GD Posts: 0
KGS: Kirby
Tygem: 커비라고해
Has thanked: 1583 times
Been thanked: 1707 times

Re: Commonsense Go

Post by Kirby »

djhbrown wrote:...so when a scientific issue is debased into playground needling...


scientific? :scratch:
be immersed
pnprog
Lives with ko
Posts: 286
Joined: Thu Oct 20, 2016 7:21 am
Rank: OGS 7 kyu
GD Posts: 0
Has thanked: 94 times
Been thanked: 153 times

Re: Commonsense Go

Post by pnprog »

djhbrown wrote:An empty point is said to be colour-controlled if at least 3 of its links are coloured and none is enemy coloured.
The edge of the board is friendly to both sides, so an edge point is colour-controlled if 2 of its links are exclusively-coloured, and a corner point is if just 1 is.
Clear enough.
In my understanding, this carries the same meaning as your step 2b (with the correction in red that I added):
djhbrown wrote:step 2b: a neutral point in the middle/edge/corner, 3/2/1 of whose edge links are exclusively-coloured, becomes colour-controlled.
And with that change, goban corner points (A1, T1, T19 and A19) can now become colour-controlled points.
So far, this is what I get:
[attachment: map3.png]


Regarding the other change of definition:
djhbrown wrote:step 1: a colour-controlled point colours its links and their endpoints.
step 2a: a link connecting two singly-coloured points, or a singly-coloured point on either the second or third line with neutral link(s) to a neutral edge point, is coloured.
step 2b: a neutral point in the middle/edge/corner, 3/2/1 of whose edge links are exclusively-coloured, becomes colour-controlled.
I think you should not try to make your definitions so tidy. On the contrary, I think they should be explicit and verbose.
This is how I understand it:
step 1: a colour-controlled point colours its links and their endpoints.
step 2: a link connecting two singly-coloured points is coloured.
step 3: a link connecting a singly-coloured point on the second line with neutral link(s) to a neutral edge point is coloured.
step 4: a link connecting a singly-coloured point on the third line with neutral link(s) to a neutral edge point is coloured.
step 5: a neutral point in the middle/edge/corner, 3/2/1 of whose edge links are exclusively-coloured, becomes colour-controlled.
As those steps repeat until no new links are discovered, the order of the steps does not matter.
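The repeat-until-stable idea can be sketched as a fixed-point loop. This is only a toy sketch under a simplification of mine: link colours are collapsed into the colours of neighbouring points, so only the colour-control rule (the 3/2/1 thresholds for middle/edge/corner) is modelled; the threshold falls out naturally as (number of links) - 1, since the board edge counts as friendly.

```python
# Minimal sketch of the colour-control fixed point. Hypothetical
# simplification: a neutral point becomes colour-controlled when all
# but at most one of its neighbours are friendly and none is enemy.
def colour_control(board, size):
    """board: dict {(x, y): 'B' | 'W'}; neutral points are absent.
    Repeats the control rule until no new point is coloured."""
    board = dict(board)
    changed = True
    while changed:
        changed = False
        for x in range(size):
            for y in range(size):
                if (x, y) in board:
                    continue
                nbrs = [(x + dx, y + dy)
                        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= x + dx < size and 0 <= y + dy < size]
                cols = [board.get(p) for p in nbrs]
                for c in ('B', 'W'):
                    enemy = 'W' if c == 'B' else 'B'
                    # deg-1 gives the 3/2/1 middle/edge/corner thresholds
                    if cols.count(c) >= len(nbrs) - 1 and enemy not in cols:
                        board[(x, y)] = c
                        changed = True
                        break
    return board
```

On a tiny 2x2 board a single black stone controls everything, while one stone of each colour controls nothing new, which matches the "none is enemy coloured" proviso.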

I am doubtful about step 4, regarding a singly-coloured point on the third line. With such a rule, you would not get that result in your paper, page 14:
[attachment: one_space_jump.png]

What do you think?
I am the author of GoReviewPartner, a small software aimed at assisting reviewing a game of Go. Give it a try!
djhbrown
Lives in gote
Posts: 392
Joined: Tue Sep 15, 2015 5:00 pm
Rank: NR
GD Posts: 0
Has thanked: 23 times
Been thanked: 43 times

open apology to Uberdude

Post by djhbrown »

open apology to Uberdude

this morning i re-read through all the comments on this thread and suddenly saw Uberdude's comments in a new light.

i had replied somewhat acerbically when he quoted Baudis, who is not exactly the President of my fan club, which is hardly surprising, because there isn't one.

but Uberdude wasn't trolling me.

and indeed, Baudis may even have genuinely believed the absurd nonsense he wrote on his own graffiti wall about what i wrote on Gnugo's.

so i may owe him an apology too.

there is a third man... <<<<<<<<<<< one day, 20 years ago, [redacted before submission] never seen before or since.
Henry Ford wrote:history is bunk


>>>>>>>>>>>>>Back to the present, and my apology to Uberdude, whom i perceive to be a totally different character to she who must be obeyed:

His first comment in that thread is the one that showed me the mote in my own eye when i re-read it: at the time i had misunderstood his intentions in writing it, and i've only now woken up to that fact.

he and i are (were) coming to the subject of computer Go - and AI in general - from different places, and using words differently.

in particular, the word "think".

this little word opens up a whole can of worms, but - to me at least - it is a can worth looking into, as many many others have done before, even before Turing.

the mistake was mine, because in my post, i used the word in the sense in which i meant it, but - reasonably enough - Uberdude read into it (what i now infer to be) a very different, and much more particular, sense.

and indeed, the sense in which Uberdude was using it is common sense, whereas mine was not!!
the irony could hardly be more apposite :oops:

if the above makes no sense to you, that is explicable, because i didn't start at the beginning, but about halfway through and then got sidetracked on a redacted time travel. it's what they call a stream of consciousness (or flood of self-consciousness)

let me try to "please explain"...

the fault was mine from the outset, because in the post on which Uberdude commented, i had written:

design of software able to think and talk about Go in a commonsense way, is described and illustrated


there it is, in black and white - or rather, dark grey and light grey.

what i should have written is this:
design of software able to 'think' and talk about Go in a commonsense way, is described and illustrated
because Lewis Carroll's Humpty-Dumpty was spot on when he said to Alice:
Humpty-Dumpty wrote:when i use a word, i use it to mean what I want it to mean. Neither more, nor less.
and - self-evidently it does not go without saying - Dodgson is showing his readers that what you mean when you say something is entirely irrelevant, because the only thing that matters is what the other person thinks you mean - and that is more often than not very different!

as it happens, in the dozens of drafts of my paper, i had several times vacillated between putting think in quotes and not doing so. Eventually i settled on not, for after all, Turing hadn't, so why should i?

well, i now have the answer to that question - of course i should!!

many of those of you that haven't already switched off by now, if indeed there be any that started to read this in the first place, will probably be steaming: "Get to the point!"

yes, right, sorry, ok, here's what i mean:

Code: Select all

1. there is a thing called "thinking"
2. people do it
3. BUT, they are not the only ones
4. even bacteria think!!
Because, "thinking" = processing information in a sensible way, so as to extract meaning from it, and - if you are smart enough - responding to it appropriately (which means in a way that is in your own best interests (or, rather, as that pea fellow and CD (not forgetting the other one who got there at the same time, whose letter prompted him to publish but whose name he didn't even mention) and RD showed us, your selfish genome's best interests)).

Bacteria meet all these requirements, when they sniff out whether to flagellate or not, depending on what it smells like around them.

Rotating (using the only known example of a biological wheel) its corkscrew-shaped flagellum anticlockwise propels a bacterium forwards, whereas rotating it clockwise makes it tumble in the ebb and flow of the sea it's in, which has a fair chance of it ending up pointing in a different direction.

In this way, a bacterium - let's call it E (short for E.coli) - can navigate chemical gradients, moving towards things that are good to eat, and away from things that are poisonous.

Guess what?! - That's EXACTLY what Alfadog's reinforcement learning gradient ascent does!!!!

Alfa's policy network is the computational electronic equivalent of E's sniffer, and her value network is the equivalent of E's tumbling or screwing, depending on what it detected, which is the biological equivalent of what Alfa imagines in her Monte-Carlo rollouts of possible futures.
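The run-and-tumble policy described above can be simulated in a few lines. This is a hypothetical one-dimensional toy of mine, not anything from the paper: keep swimming while the smell improves, tumble (pick a random direction) when it worsens.

```python
import random

# Toy run-and-tumble chemotaxis on a line: a biased random walk that
# climbs a chemical gradient without ever computing the gradient.
def chemotaxis(steps=2000, source=50.0, seed=0):
    rng = random.Random(seed)
    x, direction = 0.0, 1
    smell = lambda p: -abs(p - source)    # concentration peaks at the food
    last = smell(x)
    for _ in range(steps):
        x += direction                    # run: one step in current direction
        now = smell(x)
        if now < last:                    # things got worse: tumble
            direction = rng.choice((1, -1))
        last = now
    return x
```

On this toy gradient the bacterium swims straight to the source and then jitters around it - the same hill-climbing that, the post argues, gradient-ascent learners perform in a much bigger space.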

Both of them use the principle that Ulam's flash of inspiration revealed to him all those years ago:
Eckhardt, Roger (1987). "Stan Ulam, John von Neumann, and the Monte Carlo method" http://library.lanl.gov/cgi-bin/getfile?15-13.pdf

Long before Ulam, there was another fellow who had an equally good idea. His name was Plato, and he wrote stories about an imaginary man called Socrates, who went around telling people he was mortal, which proved to be true, because, due to his annoying habit of tricking them into contradicting themselves, they eventually got so fed up about being made to look like fools to themselves, they forced him to commit suicide by drinking hemlock.

Plato was one of the first to write about what the Greeks called Logos - the notion that one thing can lead to another because the Universe ain't random, no matter what Schrödinger says.

Do bacteria use logic?

In one sense, they do - because their built-in associative machinery (learned by reinforcement learning across generations of evolution) does implement a form of Modus Ponens, which Russell and Whitehead (or was it Frege? i forget now) might have written like this:

Code: Select all

Axioms:

A1. nice_smell implies nice
A2. nice implies i should flagellate
A3. flagellate implies ("well i don't actually know what it implies, i just do it")

Premiss:

1. exists (nice_smell)

Lemma:

L1: nice (1, A1, MP)

Conclusion:

flagellate (L1, A2, MP)


-some while later-

mmm... yummy!


QED
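For what it's worth, the derivation above is plain forward chaining, and a generic sketch of it fits in a few lines (the atom names follow the axioms above; the engine itself is an illustration of mine, not Swim's code):

```python
# A minimal forward-chaining engine: repeatedly apply Modus Ponens
# (premise, premise -> conclusion |- conclusion) until nothing new follows.
def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (premise, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The bacterium's axioms A1 and A2, as rules
AXIOMS = [("nice_smell", "nice"),
          ("nice", "flagellate")]

derived = forward_chain({"nice_smell"}, AXIOMS)
```

Feeding in the premiss `nice_smell` derives the lemma `nice` and the conclusion `flagellate`, exactly as in the derivation above.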


... Q: Do people use logic?
A: Yes. (Homer uses exactly the above reinforcement-learned logic in the latest episode of MiG).
Q: How do you know?
A: Because people have neurons, and neurons implement logical implication.
Q: Huh?!
A: http://sites.google.com/site/djhbrown/H ... es/umb.pdf (Ch 13)
https://3fc9298e-a-62cb3a1a-s-sites.goo ... C6to10.pdf pp198-229.
Q: So what?
A: Fair question; understanding how things work isn't needed to be able to use them. For example, i can drive a car without needing to know anything about Boyle's Law.

Q: I am Right and you are Wrong. Right? RIGHT?!!
A: If you say so; please don't hurt me (remembers de Bono)

Enter, stage left, the ghost of Turing

T: To think, or not to think, that is the question
Q: No it's not! What the f are you talking about?
A: I think he means that thinking is something we think only people can do, but maybe it isn't. Maybe even a machine can...
Q: This is just stupid. Of course machines can't think - and even if they could, we're still better than them because we have consciousness, and feelings, which they can't because only i can have feelings because i'm me and i'm important and f you. I've had enough of this; I'm off to play with my stones.
several audience members, in chorus and sequence: He's right. They tried that before and it didn't work and it can't ever work, at least, not doing it that way, and your way is unintelligible and malconceived and ill-defined and just plain wrong and the same as theirs, regardless of your saying it isn't.
A: I wish i were as streetwise as Mr T, to know that the only way to make a statement even slightly acceptable is to express it as a question, rather than a statement.


Scene 2, May 2017, somewhere in China.

crowd: gasp! isn't she beautiful!, omg, i can't believe it!!
MC: this is the most significant moment in the entire history of the world, for it signals the end of Man's dominion over the Earth and all the living things that dwell thereupon, and (house of) Ushers in the Dawn of the Age of the Machines.
voice at the back, sotto voce: No, it doesn't.

<<<<<<flashback 18 months
hideous Hebdo satirist: Monte is great. I avow that there is only one Monte, and Monte shall be his name.
Pope B: You are hereby excommunicated and commanded to neither say nor think said heresy again, and if you fail in this, you will be shown the instruments of torture.
>>>>>>flashforward

crowd: Kill him!
T: which one?

Curtain falls.


Q: Have you finished??!
A: This isn't the end. It isn't even the end of the beginning. Nor is it the beginning of the end (digital oxymorons that only a moron with two fingers in the air would say, because Time flows smoothly - my brother Esau is an hairy man, but i am a smooth man - like an arrow, not in mythical Planck steps (regrettably still the only half-decent model we have, 117 years old and still spluttering along)), but it might be the beginning of the beginning.
Q: Pffft! - what's on the other side?
Mike Novack
Lives in sente
Posts: 1045
Joined: Mon Aug 09, 2010 9:36 am
GD Posts: 0
Been thanked: 182 times

Re: Commonsense Go

Post by Mike Novack »

I am not sure you want to go there, raise the issue of "think" and "consciousness".

Are you really wanting to claim "hardware makes a difference" << to a computer program in the abstract >>

If something is computable, it is computable on a Turing machine, yes? << just very slow >>

Agreed so far?
djhbrown
Lives in gote
Posts: 392
Joined: Tue Sep 15, 2015 5:00 pm
Rank: NR
GD Posts: 0
Has thanked: 23 times
Been thanked: 43 times

Re: Commonsense Go

Post by djhbrown »

Mike Novack wrote:I am not sure you want to go there
i've already gone there
viewtopic.php?f=8&t=14175
djhbrown
Lives in gote
Posts: 392
Joined: Tue Sep 15, 2015 5:00 pm
Rank: NR
GD Posts: 0
Has thanked: 23 times
Been thanked: 43 times

Re: Commonsense Go

Post by djhbrown »

pnprog wrote:Your paper uses concepts that are easier said than implemented. For example:

"A group with two eyes, or a single eye large enough to be able to form two eyes, is alive".

Any life and death book for beginners contains tons of examples that will challenge our definition/understanding of "single eye large enough to be able to form two eyes" (just think of groups alive by seki or by ko, or by ladders). There is probably no easy way to code such a concept without going for exhaustive/recursive search.

And with that definition being already that hard (impossible?) to implement, working out a "proof of concept" of your map will block at the second step of the cluster map (when the dead stones are noted and the colour map redrawn). And so all the following steps (shadow map, groups, path...) cannot be done either.


anything is easier to say than to do :) ... except of course, for poor Alfadog, who hasn't learned to talk and possibly never will, but that's another story...


extract from https://sites.google.com/site/djhbrown2/icgo
Swim's perceptions of lad (life and death) are only perceptions, not proofs. A single eye large enough to be able to form 2 eyes - just how big is that? In reference [4] [edit: no, i mean "Swimming with Alphago", or somewhere, i can't remember now.. oh yes, it's in "JueYi's New Move"] i discuss this issue; i haven't come up with an exact number yet, but it has to be at least 7x5. Now, of course, there are going to be enemy stones nearby, and that will affect things a lot. i discuss that issue in "JueYi's New Move".

Yes, let's think about seki: what is seki? - in common parlance (ie Go books), it's a tussle between two eyeless "groups" - but that definition stops short of defining what a group is, so it's a bit vague. Of course, book authors know that readers have their own ideas of what a group is, so they don't bother defining it.

Swim is not vague. Swim doesn't think or talk about lad of "groups", but of clusters, which are precisely defined by the colour map.

Who's going to win a seki? No-one can say for sure until they read it out, as you rightly say. And that could take forever - or at least 10^170, which is too big to think about.

So Swim can't say either - so she doesn't!

The same applies to kos and ladders.

[edit2 - look, i can't prove mathematically that Swim's lad perception is 100% accurate, any more than you can prove that Alfadog's is. As it happens, i can prove mathematically that Alfa's isn't! :) - but i'm not going to do it here because it's off topic and i don't want to be swatted again.]
djhbrown
Lives in gote
Posts: 392
Joined: Tue Sep 15, 2015 5:00 pm
Rank: NR
GD Posts: 0
Has thanked: 23 times
Been thanked: 43 times

Re: Commonsense Go

Post by djhbrown »

One of the ironic things about commonsense is that most people don't have it. Especially programmers. They live in a world of their own, using a sublanguage of their own, which is fine, so long as they only talk to each other.

2001-10, i had to teach usability (it was called HCI) to 2nd year undergrads and based my class on an MIT OpenCourseWare graduate class, since it required no prior anything. one of the many nice things about that courseware, unlike html which won't let you put 2 spaces after a full stop, was that the author had assembled a "Hall of Shame" in which he put a gallery of GUI design cockups in apps from all over the place.

Well worth a look, if you're thinking of making software for use by people who don't speak your particular brand of geek.

i was prompted to write this note by an experience today which forced me to Google to find out how to undo the damage i had just done to myself by pressing a button that should never have been there, at just the Murphy-wrong time. luckily, i wasn't the first to fall foul of programmers being blissfully unaware of what users actually think, want and do.

https://forum.xfce.org/viewtopic.php?id=8703 :)

i might cite as another example, the variation tree navigation >> buttons on Go clients that hop 10 moves, which no-one in their right mind would ever want to do, except maybe those who have never heard of sente or quiescence and are in too much of a hurry.

three more for the hall... at least Thunar doesn't have Nemo's scrollbar that zooms instead of scrolling ... sigh.... when will they ever learn, Marlene?

PS why is it that cover versions of songs (eg PPM, Joanie, the Kingston Trio (never heard of them)) are almost never as good as the original, just as Go program clones don't match up to the original alphas?

PPS Can anyone tell me what i'm doing wrong here? (it used to work fine on xubuntu 14 but i lost it during upgrade to 16.04):

Code: Select all

cd /home/d/go/gogui-1.4.9
sudo ./install.sh -j /usr/lib/jvm/java-8-openjdk-amd64 -s /etc
d@d-HP-Pavilion-dv6700-Notebook-PC:~/go/gogui-1.4.9$ sudo ./install.sh -j /usr/lib/jvm/java-8-openjdk-amd64
install: cannot stat 'lib/*.jar': No such file or directory
install: cannot stat 'doc/manual/html/*.html': No such file or directory
install: cannot stat 'doc/manual/man/*.1': No such file or directory
i shrink, therefore i swarm
djhbrown
Lives in gote
Posts: 392
Joined: Tue Sep 15, 2015 5:00 pm
Rank: NR
GD Posts: 0
Has thanked: 23 times
Been thanked: 43 times

Re: Commonsense Go

Post by djhbrown »

i shrink, therefore i swarm
djhbrown
Lives in gote
Posts: 392
Joined: Tue Sep 15, 2015 5:00 pm
Rank: NR
GD Posts: 0
Has thanked: 23 times
Been thanked: 43 times

who reads the subject line on comments? the font is so small

Post by djhbrown »

Code: Select all

i swim, therefore i think

i shrink, therefore i swarm
alphaville
Dies with sente
Posts: 101
Joined: Sat Apr 22, 2017 10:28 pm
GD Posts: 0
Has thanked: 24 times
Been thanked: 16 times

Re: Commonsense Go

Post by alphaville »

djhbrown wrote:Updated Q&A here:
https://sites.google.com/site/djhbrown2/icgo


It would be very useful indeed to have some software that can explain the reasons behind AlphaGo's moves!

As for using more than one decision making agent: I am pretty sure that AlphaGo itself should already combine Monte-Carlo with some local reading module - I don't think ladders for instance can be estimated statically by any neural-network system.

How do you plan to decide in icGo between the suggestions of various modules? Let's say that the influence module tells you to play at A, the life-and-death module tells you to play B, and the generic Monte-Carlo module tells you to play C; how is the decision made between A, B and C?
djhbrown
Lives in gote
Posts: 392
Joined: Tue Sep 15, 2015 5:00 pm
Rank: NR
GD Posts: 0
Has thanked: 23 times
Been thanked: 43 times

Re: Commonsense Go

Post by djhbrown »

Alphaville wrote:explain the reasons behind AlphaGo's moves!
Swim does not pretend to be able to explain Alfadolfa's own reasons for her moves, but she (Swim is a she too..) can sometimes - as in *Swimming with Alphago* and in *JueYi's New Move* - find a move which, if not identical, is (i believe) close enough in strategic import to explain why Alpha's (or JueYi's) move is a good one, and, more significantly, provide a meaningful rationale for Swim's own version of it, which Monte-Carloers cannot do, because they don't reason, they just search, albeit in a convoluted guided way, better than old Zen's large patterns, but one which can't tell the difference between a baseball bat and a toothbrush, as DARPA recently showed us (or the difference between a cat and a load of Pollock's, as Rodney Brooks pointed out yonks ago).

At the moment, the best Monty Pythons can do to explain themselves is to show you their heatmaps and expected continuations, as Leela does, although even there, she doesn't show you her win% map, for some secret reason sjeng has of his own - i think it's to trip up cheats, so good on him :).

Alfa et al can read ladders 361+ rungs long - even as we speak, Leela is still thinking about what Andrew should do in MiG34; over 24 hrs of thinking, 5 million positions examined, and still counting... she seems stuck on O8 (67.41 win%), with Andrew's N6 coming in 4th, and Swim's F10 in 6th place with an estimated win% of 65.51.

Leela isn't as bright as Nick, however, because in their original video, Nick already said he should resign even before move 116, but Leela still thinks he has a 100-67.41 = 32.59% chance. Given how absurd that is, measuring woefully inaccurate estimates to two decimal places seems a trifle unnecessary...

Alphaville wrote:How do you plan to decide in icGo between the suggestions of various modules?
Obviously, Swim wouldn't ask anyone whose opinion she didn't value, and she regards all opinions as equally valid.

So, in MiG34 for example, Andrew proposes 4 candidates, Swim another 4 (in the vid, i mistakenly included G9, which doesn't fit Swim's criteria) and now Leela is coming up with another 4 different ones (O8, O9, K8 and F11, in that order), making 13 candidates all up.

The sensible thing for Swim to do is to examine each of these by seeing how white could respond, and reassessing the situation, etc, until byo-yomi starts to run out (i see no merit in Swim allowing herself to drop even a single byo-yomi flag, no matter how many she has).

I don't like the idea of trying to use Swim's value function as a decision maker like Alfredo's value net does; it's better to use it just as a preliminary perception to choose a strategy.

So, when byo-yomi starts to run out, i reckon Swim should defer to Leela's latest appraisal (which could be of a downstream position starting from an initial move Swim or Andrew suggested), since Leela will at least have made a big effort to read ahead, even if, like Alfadolfa, she doesn't "know" what she's looking for until she gets there (ie the end of a Monte-Carlo rollout).
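If it helps, the deference scheme just described can be sketched as follows. Everything here is a placeholder of mine (adviser names, the `appraise` callback standing in for Leela's win% estimate, the time budget), not Swim's actual interface: pool the advisers' candidates, examine them while the clock allows, and return the best appraisal seen so far when byo-yomi threatens.

```python
import time

# Hypothetical sketch: merge candidate moves from several advisers,
# appraise them under a time budget, return the best one examined.
def choose_move(candidates, appraise, budget_seconds):
    """candidates: {adviser: [moves]}; appraise(move) -> win% estimate."""
    pool = []
    for moves in candidates.values():
        for m in moves:
            if m not in pool:               # merge advisers' suggestions
                pool.append(m)
    deadline = time.monotonic() + budget_seconds
    best, best_score = None, float("-inf")
    for move in pool:
        if time.monotonic() >= deadline and best is not None:
            break                           # byo-yomi about to run out
        score = appraise(move)
        if score > best_score:
            best, best_score = move, score
    return best
```

With the win% figures quoted earlier as the appraisal, the scheme picks O8 over F10 - ie it defers to the searcher's numbers once the advisers' candidates have been pooled.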

PS Since it's not, as John McCarthy advocated, a one-design sailboat race (ie standardised hardware), i don't see why Swim shouldn't have googleplexes of parallel cpus as well, except that i'm not going to buy them. right, time's up; i have to go and repair my brand new rusty old second-hand bicycle that i bought for 50 bucks this morning, which to my way of thinking is every bit as fit for purpose as the $4000 gleaming new carbon-fibre ultralight monster in the showroom.

PPS i asked on reddit how much is that alfadoggie in the windows, but it was taken down as fast as it was putten up, so you will have to do the sums yourself.
Mike Novack
Lives in sente
Posts: 1045
Joined: Mon Aug 09, 2010 9:36 am
GD Posts: 0
Been thanked: 182 times

Re: Commonsense Go

Post by Mike Novack »

I still think you need to give your reasons why you think "the ability to explain why while doing some task" is NECESSARILY better than "doing the task". What I mean by that, is why you think IN THIS CASE it implies being better at doing the task. We do not ordinarily expect that to be true. Consider the adage "them that can, do; those that can't, teach how to do". We all know examples of this. Being able to do something well and being able to teach/explain doing that something are quite different -- teaching is a separate skill. The better teacher of X might not be the best doer of X, and at least with humans, we do not expect that to be the case. As a student, we likely profit most from the better teacher as opposed to the better doer, so I am NOT arguing against your proposal for that use. I am just asking WHY you think "better teacher (able to explain why) implies a better doer".
lobotommy
Lives in gote
Posts: 408
Joined: Thu Jul 29, 2010 2:01 am
Rank: EGF 3kyu
GD Posts: 0
Universal go server handle: tommyray (1d/2d)
Location: Poland, Gliwice
Has thanked: 127 times
Been thanked: 94 times

Re: Commonsense Go

Post by lobotommy »

Mike Novack wrote:I still think you need to give your reasons why you think "the ability to explain why while doing some task" is NECESSARILY better than "doing the task". What I mean by that, is why you think IN THIS CASE it implies being better at doing the task. We do not ordinarily expect that to be true. Consider the adage "them that can, do; those that can't, teach how to do". We all know examples of this. Being able to do something well and being able to teach/explain doing that something are quite different -- teaching is a separate skill. The better teacher of X might not be the best doer of X, and at least with humans, we do not expect that to be the case. As a student, we likely profit most from the better teacher as opposed to the better doer, so I am NOT arguing against your proposal for that use. I am just asking WHY you think "better teacher (able to explain why) implies a better doer".


In go both are the same, making a distinction between doer and teacher is rather false in this case.

However, you have asked an important question. It's a tough one :). Let me rephrase it: why would anybody want to know and understand x, when x can be done automatically, without any cognitive action involved and without any understanding of the action? Good question, a philosophical one: why do we want to know anything? :)

And then you got it all wrong starting from second sentence ;)
What kind of argument, in terms of logic, is this Doer vs Teacher? It's induction, it's anecdotal, and it has a logical value of 0.

The point of the SWIM thing proposed by djhbrown is to provide explanations for us humans, in natural language, using our heuristics for describing the situation on the board. Good enough for me.
Tsumego/Tesuji apps for iPad, iPhone & Android devices: http://www.lifein19x19.com/forum/viewto ... =18&t=7511
djhbrown
Lives in gote
Posts: 392
Joined: Tue Sep 15, 2015 5:00 pm
Rank: NR
GD Posts: 0
Has thanked: 23 times
Been thanked: 43 times

Re: Commonsense Go

Post by djhbrown »

John Cleese wrote:it is not an ex-parrot, it has not ceased to be, because it never was in the first place.
Mike Novack wrote:I am just asking WHY you think "better teacher (able to explain why) implies a better doer".
to repeat and emphasise what i said last time you raised that exact question in this very thread, i don't think that, never thought that, never will think that.

neither did i ever say what you say i think.

i didn't raise my voice then, nor do i do so now, because i know you are sincere, even if a little heavy-handed with the caps lock.

to the best of my memory, the last time you raised the same thing (maybe you forgot you did), i cited the example of Boris Becker's coach not being able to beat him (or it may have been Andy Murray's, i can't recall who it was exactly - probably Boris', because i remember being impressed that his much older coach could return his boom-boom serve, which few humans on the planet could do at the time). the coach explained that he had to move before Boris put racket to ball, which made so much sense i tried to apply it in my own tennis game, and it sure did improve my return of serve at the time. the same basic principle applies to Bobby Jones' waggle (not Bobby, that other great of the day) - ie get your body moving before you do anything else. Hey, maybe we should wiggle our fingers before putting a stone down?!... But i can't stand those types that rattle the stones in their bowl when i'm trying to think, or have an "uncontrollable" cough in the middle of my backswing.

it might make dialogue flow more productively if you could refrain from starting out with "you think X", and instead criticise or query what i actually write, not what you think i think.

i DO think - and have written (in video) - that all Monte Pythons have an inherent, intrinsic hole in their defences (we could call it the Alpha-hole, or A-...., no, better not) that pros might (please note the use of the conditional here) be able to exploit.

Maybe Swim could too...:)

i call this the Monte-Carlo analogue of the Horizon Effect. I discuss it at length in one of my videos.

Sure, that video was made before Alphago appeared on the scene, and her strength took me by surprise as much as anyone, but she's still Monte, so still horizon-bound, as Lee Sedol demonstrated.

Despite this, i don't think Ke Jie has any chance, and i DO think that he agrees with that, because he said as much, in print, before i came independently to the same conclusion. So he was very wise to ask for appearance money - which, by the way, you and i have to pay for, by paying extra for the goods that are advertised on Google, to defray the producers' and retailers' advertising costs, even if we have ad-blockers, a software tool that is even more of a public service than Swim.
djhbrown
Lives in gote
Posts: 392
Joined: Tue Sep 15, 2015 5:00 pm
Rank: NR
GD Posts: 0
Has thanked: 23 times
Been thanked: 43 times

Re: Commonsense Go

Post by djhbrown »

lobotommy wrote:The point of the SWIM thing proposed by djhbrown is to provide explanation for us, humans, in natural language, using our heuristics for description of situation on the board. Good enough for me.
:) here we go... actually, as it happens, as far as i am concerned, that's not my point of the Swim thing at all and never was! - and i've said so before in this thread too....:)

i started my journey on this topic back in 1971, when i set out to see if i could program a computer to learn a language. i figured that children begin by learning single word associations, and then move on to Chomsky type 3, then 1, skipping 2 because people don't think that way, except some Bishop in 18something or other, who caused all the trouble for schoolkids today because he misunderstood Aristotle. Specifically, a Sentence in language (as opposed to a logical statement) is NOT composed of a Subject and a Predicate (and, as Boole showed, neither is a logical statement, so Aristotle was wrong too!!).

type 0 sounds meaningless to me as far as human language is concerned; there has to be some kind of syntax - but let me tell you this: the English grammar you learned in school is completely wrong!! and i proved it, following in the footsteps of Chomsky who proved it long before when i was still in short pants - >>>>>>>>> but, thanks to a student at Shenzhen University in 2001, who asked me to explain prepositions, i discovered the syntactic form of Chomsky's hypothesised "universal grammar", which will make me famous long after i'm dead because no-one knows about it yet (not even Chomsky), despite it being there for all to see in black and white [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2205530]

<<<<<<<<<< back in 1971, i started on the linear language of Contract Bridge-bidding, but that was too tough (for me to program a computer to learn properly), so i ended up producing a program that could learn any language :)

Or, to be more precise (and restricted), any linear language.

Such as the language of differential diagnosis of liver disease. I was disappointed when it only performed at 85% accuracy, but when i expressed this disappointment to a Professor of Internal Medicine, he said "Oh, that's as much as we get, anyway".

My program also learned to bid at Contract Bridge, but only as well as a weak amateur. I privately concluded from this that Bridge was more complicated than medicine :)

But i was still dissatisfied, because i realised that although my (nameless) concept-learning program could learn "how" to do something, it couldn't learn "why" it should do it.
[Reasoning About Games. Proc European Conference on Artificial Intelligence, Hamburg, 1978. Reprinted in AISB Newsletter 32, 14-17, 1978.]

I saw this as a significant defect, and started a new journey, which was quickly interrupted by a need to make a living; i didn't come back to it for 37 years, not until 2015, after having been ethnically cleansed and put out to grass, and having solved the riddle of who invented Christmas, and why, quicker than i would have imagined possible (it turned out to be a much straighter lineage than i had expected, just by lots of Googling). With nothing better to do, because i was by then a rentier capitalist, living (modestly but comfortably) off the interest alone, and without the avarice of Morgan, i found my old interest in Go resurfacing.

By then i had started to enjoy making movies, and no longer needed peer review to keep my job, as i didn't have one, so could freely indulge my sense of humour and love of exploration at the same time.

My quest was - and still is - and i am finally getting around to the point - to find a way for a machine to understand Go, not as an end in itself, but as a stepping-stone to understanding understanding in general (and yes, i did proofread that and do mean two "understanding" words; the first is the act (relation) and the second is the object (concept). C -> RC )

First came HaLY, then HoLY, then CG, then Swim, and now icGo.

Moreover - and this point is key - i do not try to make it think like a human - on the contrary, i try to make it think about the fundamental nature of Go, based upon the objective and mind-independent nature of the semiotic relationships between stones and the rules of the game. Whenever something Swim thinks happens to coincide with a Go proverb (such as Andrew's 5-space jump, for example), i celebrate the fact, but my idea of AI concurs with what i believe to be the view of its prophets McCarthy and Minsky, who said that their objective for AI was not to replicate human intelligence, but to discover the essence of intelligence in general.

As it happens, that's what Demis says he's up to as well, but frankly i am unconvinced that he and all the other neural net nutters are going in the right direction - i agree with Chomsky. btw, the epithet "nutters" is not deprecatory, it merely means people who are single-mindedly fascinated by something. Eg, Einstein and Feynman were both particle nutters; Debussy was an impressionist nutter; i (and, i like to think, Feynman) am a hierarchy nutter, etc, etc.

The aim of icGo is twofold:
1. to help people
2. to beat the pants off Alphago!

Now look here, let's get serious for a moment; i am as big a fan of Alfie as anyone, but i see a hole in her, the very same hole that was in my concept learning program all those years ago.

She knows "how", but she doesn't know "why" - and because of that, she can drive herself off a cliff, as she did in game 4.

DM found a patch for that, and have improved her by using "anti-Alpha", so she is further up the beanstalk than this time last year, which is why Ke Jie is toast.

But she still can't see eyes!!

So, it's not that Swim can explain herself, it's that she can see eyes and so on.
That might give her the edge over A's dcnn - one piece of evidence for which is that a dcnn without Monte Carlo is almost as hopeless across the board as i am. btw, GnuGo and all that lot tried to see eyes, but they couldn't see potential eyes well enough to avoid having needles stuck in them, being cut into pieces, and being dumped on their backsides by even weak players like me.
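To make "seeing eyes" concrete, here is a minimal sketch of spotting candidate eye points using the colour-controlled rule quoted earlier in this thread: an empty middle/edge/corner point is colour-controlled when 3/2/1 of its links are one colour and none is enemy-coloured. The board representation, the function names, and the 'b'/'w' codes are my own assumptions for illustration, not Swim's actual data structures.

```python
SIZE = 19

def neighbours(x, y):
    """Orthogonal neighbours of (x, y) that lie on the board."""
    return [(nx, ny)
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= nx < SIZE and 0 <= ny < SIZE]

def colour_controlling(board, x, y):
    """Return 'b' or 'w' if the empty point (x, y) is colour-controlled,
    else None.  `board` maps (x, y) -> 'b' or 'w'; absent keys are empty."""
    if board.get((x, y)) is not None:
        return None  # only empty points can be colour-controlled
    nbrs = neighbours(x, y)
    # Threshold is 3 in the middle, 2 on an edge, 1 in a corner:
    # exactly one less than the number of on-board neighbours.
    needed = len(nbrs) - 1
    colours = [board[p] for p in nbrs if board.get(p) is not None]
    for c in ('b', 'w'):
        enemy = 'w' if c == 'b' else 'b'
        if colours.count(c) >= needed and enemy not in colours:
            return c
    return None

# A black one-point eye in the corner: A1 empty, A2 and B1 black.
board = {(0, 1): 'b', (1, 0): 'b'}
print(colour_controlling(board, 0, 0))  # -> 'b'
```

This only finds single-point candidates; the hard part the old programs stumbled on, telling a real eye from a false or merely potential one, would need further analysis on top of a rule like this.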

and Swim can use googleplexes of cpus just as easily as the doggie in the windows, so A's comparative advantage is nothing!

Even without them, even with no Monte Carlo at all, CG found a better move than Alfie - i'm convinced J13 works, because it's kikashi, and then Swim can play Myungwan Kim's move at L10. Swim found that move all by herself, with no help from Kim, who himself didn't notice it until Haylee suggested it in the middle of a different sequence she was exploring, when they were commenting on the game, before Lee made his move. That's why they were so disappointed when Lee made the wedge: unlike all the other commentators - and Alfie! - they knew it didn't work (shouldn't have worked). Had Swim been watching over Lee's shoulder, she would have whispered in his ear not to do that silly thing that Demis even today still calls a godlike move because it beat his own baby up.

See what i mean?

Of course, every proud father sees his own child through rose-tinted glasses (until it becomes a teenager) so my view of Swim's prowess potential may well be exaggerated, but i challenge you all to this:

My God exists unless you can prove otherwise! - which you can easily do by programming her and showing that she falls over her own feet.

PS it feels good to get things off one's chest, doesn't it :)