 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #41 Posted: Thu Jan 28, 2016 12:37 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Nikolas73 wrote:
but is the database not publicly accessible to some extent? I imagine it wouldn't be difficult to find a way to bulk download SGF files...


Perhaps they bought the BiGo database which already has loads of KGS games.
http://bigo.baduk.org/assistant_databases.html

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #42 Posted: Thu Jan 28, 2016 12:48 am 
Lives in sente

Posts: 923
Location: UK
Liked others: 72
Was liked: 479
Rank: 5 dan
KGS: macelee
xed_over wrote:
palapiku wrote:
Quote:
This data set contains 29.4 million positions from 160,000 games played by KGS 6 to 9 dan human players

Is it just me or does that seem like a pretty bad dataset to try to beat Lee Sedol with?

those are all (mostly) just amateur games.
why didn't they use a collection of professional games, such as from GoGoD?


Their published work needed to be peer reviewed, so it must be many weeks old. I supplied a copy of the Go4Go collection to the team in December, which must have been after the paper was submitted.


This post by macelee was liked by: Bonobo
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #43 Posted: Thu Jan 28, 2016 2:02 am 
Lives in gote

Posts: 448
Liked others: 5
Was liked: 187
Rank: BGA 3 dan
Marcel Grünauer wrote:
By the way, the paper also said that the Fan Hui matches took place from 5th to 9th October 2015, so who knows what happened since then.


Things got "serious" in October - this I have from the horse's mouth.

Fairly clearly there was a three-month timescale to get the work written up and published with a big PR splash (done), and a six-month timescale for a challenge match. I spoke to Demis about this on Monday: he said contacts with the KBA were very friendly.

To state the obvious, AlphaGo is being trained as we speak.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #44 Posted: Thu Jan 28, 2016 2:39 am 
Gosei

Posts: 2011
Location: Groningen, NL
Liked others: 202
Was liked: 1087
Rank: Dutch 4D
GD Posts: 645
Universal go server handle: herminator
Well, I just got my 5 minutes of fame over this.

Yesterday morning I got a call at work from our national broadcasting company, saying they were doing a radio show on "man and computer" that evening and asking whether I'd be willing to be on it to say something about go and computers. I agreed, and they promised to call me back later to hammer out the details. They did so around 8 pm, when it turned out this was going to be about AlphaGo, of which I had at that point been aware for all of an hour :)

Long story short, I appeared on a 10-minute segment together with a science journalist who filled in the parts about AI and Google, and we had a nice radio chat about go, computers and what this means for the future. I told them that this is a very impressive result, and, when asked, that it does not "spoil the game" for me, because it takes nothing away from the fun and challenging nature of the game. I told them I expect the match with Lee Sedol to be very close and exciting, and that although I still slightly favor Lee, it could go either way.

Really interesting all in all, and a great opportunity to get some much needed publicity for the game!

EDIT: For those who understand Dutch: http://www.radio1.nl/popup/terugluister ... 1-27/23:00 (skip to 41:43 in, it's about 8 minutes total)


Last edited by HermanHiddema on Thu Jan 28, 2016 4:21 am, edited 1 time in total.

This post by HermanHiddema was liked by 4 people: Codexus, daal, DrStraw, emeraldemon
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #45 Posted: Thu Jan 28, 2016 3:38 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
British media appearances:

Charles Leedham-Green was interviewed on Go and AI on
The World Tonight on Radio 4.
http://www.bbc.co.uk/programmes/b06yfcpm 34:26 in.

(I can't help feeling that the obsessing over how to hold a stone is off-putting, likewise starting with "No, you're doing it wrong", but it gets better. There were also a few inaccuracies: they said DeepMind was the name of the program rather than the company, and that it beat a world (rather than European) champion.)

News at 10 is at
http://www.bbc.co.uk/iplayer/episode/b0 ... n-27012016
until 18:30 today.

(Links might not work outside the UK without IP location spoofing)

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #46 Posted: Thu Jan 28, 2016 6:19 am 
Lives in sente

Posts: 1037
Liked others: 0
Was liked: 180
qianyilong wrote:
Also, I didn't see much mention that 5 informal games were played at the same time, and the record for those was 3-2 in favor of AlphaGo.
The informal games had a time control of 3 × 30-second byo-yomi. So when time gets tight it starts to struggle a bit.


This merits discussion. Unlike us, a machine isn't in "absolute time". In other words, if it has more computing crunch, time effectively slows down for it (a Turing machine can compute anything computable, but it would be doing it VERY slowly).

Now the actual machine being used here has a vast amount of crunch power compared to our home machines, huge even compared to the larger machines being used in computer go tournaments. The fact that it does worse relative to the human at faster time controls implies that it NEEDS that much crunch to play at the pro level.

So I would NOT expect this breakthrough to affect go programs on our home machines any time soon (if ever). Current development on our small machines is for using less power while retaining adequate crunch to get the job done (the jobs that they are asked to do). Not toward developing more crunch, since so few users have any use for more crunch power. Remember, it's not just a matter of needing more battery capacity (and so weight) if the device uses more power but the problem of dissipating that much waste heat from a small device.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #47 Posted: Thu Jan 28, 2016 6:50 am 
Lives in gote

Posts: 677
Liked others: 6
Was liked: 31
KGS: 2d
fwiffo wrote:
Five minute simplified explanation of neural networks and why you would use GPUs for them:

Neural networks can be visualized as a graph consisting of nodes and edges (connections between nodes). The nodes are some sort of mathematical function like f(x) = m*x + b. That node would have three input edges (x, m and b) and one output edge. The inputs and outputs are connected to other nodes, or can be inputs/outputs to things outside the network.

A neural net will consist of many nodes arranged in interconnected layers. For example, part of a neural net for a go playing program might be a 19x19 grid of nodes, each with an input representing a point on a go board (0, 1 and 2 for empty, black and white). Those 361 nodes performing a mathematical function on individual numeric inputs can also be represented as a single node performing the same function on a 19x19 matrix.

So, actual neural network programs boil down to massive amounts of math on vectors, matrices and higher-dimensional arrays of numbers called tensors. Video game graphics are also a bunch of linear algebra and math on vectors and matrices. And we already have commodity specialized processors (GPUs) for doing exactly that, and they're many times more efficient at it than traditional CPUs.
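To make the quoted description a bit more concrete, here is a minimal sketch in Python/NumPy of one such layer acting on a board; the layer size and the random weights are made up for illustration and are not taken from the paper.

Code:
import numpy as np

# A 19x19 board: 0 = empty, 1 = black, 2 = white (the encoding described above).
board = np.zeros((19, 19), dtype=np.float32)
board[3, 3] = 1    # a black stone
board[15, 15] = 2  # a white stone

# One "layer" of nodes, each computing f(x) = m*x + b on the flattened board,
# followed by a simple non-linearity. The weights are random, i.e. untrained.
x = board.reshape(361)             # input vector (361 points)
m = np.random.randn(128, 361)      # weight matrix: 128 nodes, 361 inputs each
b = np.random.randn(128)           # biases
hidden = np.maximum(0, m @ x + b)  # matrix-vector product plus ReLU

print(hidden.shape)  # (128,) -- exactly the kind of linear algebra GPUs are built for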


What I do not understand: if there is a neural network, then there is a huge complexity of wires and firings, but someone has to make sure that this network produces a specific result, e.g. a good Go move. Can one conceptualize such a network, or do they just play around with networks trial-and-error-wise?

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #48 Posted: Thu Jan 28, 2016 7:11 am 
Tengen

Posts: 4380
Location: North Carolina
Liked others: 499
Was liked: 733
Rank: AGA 3k
GD Posts: 65
OGS: Hyperpape 4k
Mike Novack wrote:
So I would NOT expect this breakthrough to affect go programs on our home machines any time soon (if ever). Current development on our small machines is for using less power while retaining adequate crunch to get the job done (the jobs that they are asked to do). Not toward developing more crunch, since so few users have any use for more crunch power. Remember, it's not just a matter of needing more battery capacity (and so weight) if the device uses more power but the problem of dissipating that much waste heat from a small device.
They said that a non-distributed version of the program can beat Crazy Stone, Zen and one other program quite reliably (https://news.ycombinator.com/item?id=10982973). Of course, the non-distributed version is still a beefy server-class machine, but that's also true of how the strongest versions of those programs have been run.

_________________
Occupy Babel!

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #49 Posted: Thu Jan 28, 2016 7:17 am 
Beginner

Posts: 11
Liked others: 0
Was liked: 2
Rank: ogs 12kyu
Universal go server handle: qianyilong
Mike Novack wrote:
qianyilong wrote:

So I would NOT expect this breakthrough to affect go programs on our home machines any time soon (if ever). Current development on our small machines is for using less power while retaining adequate crunch to get the job done (the jobs that they are asked to do). Not toward developing more crunch, since so few users have any use for more crunch power. Remember, it's not just a matter of needing more battery capacity (and so weight) if the device uses more power but the problem of dissipating that much waste heat from a small device.


They did, however, mention a scaled-down version. The full version uses a mix of rollouts and a neural network that estimates the value of each move, but running just the value network without the rollouts, they estimated it at amateur 5 dan with a 15,000-fold decrease in computation time. So it most definitely will affect our local computers. That version might even manage to make it onto our mobile devices. Neural networks are blazing fast to evaluate but slow to train (in the paper I believe the value network was trained using 50 GPUs for a week), but once trained (which only needs to happen once) it can be shipped elsewhere and definitely run with very few resources.

Edited: misread their graph, 5 dan not 5-7; see in particular page 11 of the paper for the graph in question.


Last edited by qianyilong on Thu Jan 28, 2016 7:20 am, edited 1 time in total.
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #50 Posted: Thu Jan 28, 2016 7:18 am 
Lives with ko

Posts: 199
Liked others: 6
Was liked: 55
Rank: KGS 3 kyu
Pippen wrote:
What I do not understand: if there is a neural network, then there is a huge complexity of wires and firings, but someone has to make sure that this network produces a specific result, e.g. a good Go move. Can one conceptualize such a network, or do they just play around with networks trial-and-error-wise?


That is what "training" a network is all about. You need to learn the correct weights for your "wires", so that your nodes only fire when the right inputs are present.

There are many ways in which this can be done, hence the use of pro databases or self-play. Deep learning has quite a bit of black magic going on, in the sense that many things are done by experience rather than by mathematical rigor. Also, there are many kinds of artificial neural networks, and I haven't read the paper, so I don't know which one they used. This explanation is just how simple feed-forward networks (i.e. no cycles) with back-propagation work, but hopefully it gives you an idea of the possibilities.

Let's pretend that you have a network which has not been trained at all.
You start by showing it a game position, encoding the game state in whatever way you want at the input layer. This might be done by having one node (i.e. neuron) for each of the 361 points of the board, and giving each one a signal along an edge (synapse, wire, whatever you want to call it) connected to it representing whether that point holds a black stone, a white stone, or is empty. You then look at the output layer to see whatever random thing this untrained network output as a result.
The trick is: you have the "right" answer, because you know which move was played in the real game. Therefore, you can slowly adjust the weights of all the wires connected to your output layer so that the output is closer to the answer you want, and similarly for all the layers before the output layer, all the way back to the input layer. Imagine doing this for 300 million positions from high-dan KGS or pro games and, once again hopefully, the network will start imitating these moves. The network is not big enough to "memorize" all the positions and corresponding outputs (which is what we usually call overfitting), so in order to give the correct answers, it will have to have identified whatever the positions that lead to the same move have in common.

There are many ways in which self-play could also lead to improvement. For instance, a naive way would be to slightly perturb the weights of the current network and then have both versions play many games against each other. Whichever network wins could be further refined.
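To illustrate the weight-adjustment idea, here is a deliberately tiny sketch in Python/NumPy: a single linear layer with a softmax over the 361 points, nudged toward the move a human actually played. This is not the architecture from the paper (AlphaGo uses deep convolutional networks); all sizes and the learning rate are invented for illustration.

Code:
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((361, 361))  # weights: 361 outputs (candidate moves) x 361 inputs (points)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def train_step(position, played_move, lr=0.01):
    """One supervised update: make the move that was actually played more likely."""
    global W
    probs = softmax(W @ position)               # the network's current guess
    target = np.zeros(361)
    target[played_move] = 1.0                   # the "right" answer from the real game
    grad = np.outer(probs - target, position)   # gradient of the cross-entropy loss
    W -= lr * grad                              # adjust weights toward the right answer

# Fake example: one random position in which the human played point 72.
position = rng.integers(0, 3, size=361).astype(float)
for _ in range(100):
    train_step(position, played_move=72)
print(softmax(W @ position)[72])  # probability of the played move rises with training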

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #51 Posted: Thu Jan 28, 2016 7:37 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Marcel Grünauer wrote:
Uberdude wrote:
Perhaps they bought the BiGo database which already has loads of KGS games.
http://bigo.baduk.org/assistant_databases.html


I'm sure a company owned by Google gets access to things it wants to get access to.


They probably already had all the SGF files in a basement inside Area 51. ;)

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #52 Posted: Thu Jan 28, 2016 9:37 am 
Lives in sente

Posts: 827
Location: UK
Liked others: 568
Was liked: 84
Rank: OGS 9kyu
Universal go server handle: WindnWater, Elom
Quote:
Google's AlphaGo defeats Fan Hui 2p, 19x19,


Wow, that's impressive. I wonder what the handi is...

Quote:
5-0


Okay, I understand how it could make news, but I'm not so sure it's quite the level Google were hoping for, or anything especially new...

Oh, wait

*read thrice*...


Even so, I don't think it's such a simple thing to jump to the level of a top active professional yet. While I understand it has been commented that White was clearly the stronger player (in those games), indicating that White is well above 1p level, you can be clearly above 1p level and still clearly below top pro level. Therefore, AlphaGo could be at any pro level, and since the programmers themselves cannot simply look at the games and draw a conclusion, it seems to be a matter of probability on their part. While I get the impression that a few of the team believe they have come tantalisingly close to finally conquering Go, I wouldn't be so confident as to be disappointed by a loss to Lee 9p.

_________________
On Go proverbs:
"A fine Gotation is a diamond in the hand of a dan of wit and a pebble in the hand of a kyu" —Joseph Raux misquoted.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #53 Posted: Thu Jan 28, 2016 9:54 am 
Gosei

Posts: 1435
Location: California
Liked others: 53
Was liked: 171
Rank: Out of practice
GD Posts: 1104
KGS: fwiffo
If you're interested in how neural networks work, the tutorials for Tensorflow are actually pretty accessible for anyone with basic programming skill (they are in Python).

I don't know the details about what tools Deepmind used here, but Tensorflow is used by machine learning teams at Google.
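As a taste of what those tutorials cover, here is a toy move-prediction model written against the Keras API that ships with TensorFlow. It is purely illustrative; the layer sizes are arbitrary and it has nothing to do with whatever DeepMind actually used.

Code:
import tensorflow as tf

# Toy "policy" model: a 19x19 board in, a probability distribution over 361 points out.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(19, 19)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(361, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # training would then be model.fit(positions, moves_played)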

_________________
KGS 4 kyu - Game Archive - Keyboard Otaku


This post by fwiffo was liked by: emeraldemon
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #54 Posted: Thu Jan 28, 2016 10:02 am 
Oza

Posts: 2180
Location: ʍoquıɐɹ ǝɥʇ ɹǝʌo 'ǝɹǝɥʍǝɯos
Liked others: 237
Was liked: 662
Rank: AGA 5d
GD Posts: 4312
Online playing schedule: Every tenth February 29th from 20:00-20:01 (if time permits)
Bill Spight wrote:
They probably already had all the SGF files in a basement inside Area 51. ;)


Pretty soon they may have all the POSSIBLE sgf files in their basement.

_________________
Still officially AGA 5d but I play so irregularly these days that I am probably only 3d or 4d over the board (but hopefully still 5d in terms of knowledge, theory and the ability to contribute).

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #55 Posted: Thu Jan 28, 2016 10:08 am 
Oza

Posts: 2180
Location: ʍoquıɐɹ ǝɥʇ ɹǝʌo 'ǝɹǝɥʍǝɯos
Liked others: 237
Was liked: 662
Rank: AGA 5d
GD Posts: 4312
Online playing schedule: Every tenth February 29th from 20:00-20:01 (if time permits)
Pippen wrote:
What I do not understand: if there is a neural network, then there is a huge complexity of wires and firings, but someone has to make sure that this network produces a specific result, e.g. a good Go move. Can one conceptualize such a network, or do they just play around with networks trial-and-error-wise?


A very, very simple analogy:

A baby is born with no knowledge of the world around it. Initially its actions are random. But it gets a positive response from crying when it is hungry, in the form of food. So the crying mechanism gets reinforced. Next time it is hungry, it is more likely to cry than not to cry.
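A toy version of that reinforcement loop, sketched in Python (all the numbers and action names are invented, just to show the mechanism):

Code:
import random

# An action that produced a reward becomes more likely to be chosen next time.
weights = {"cry": 1.0, "stay_quiet": 1.0}

def choose_action():
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action  # fallback for floating-point edge cases

for _ in range(1000):
    action = choose_action()
    reward = 1.0 if action == "cry" else 0.0  # crying when hungry gets food
    weights[action] += 0.1 * reward           # reinforce the rewarded behaviour

print(weights)  # "cry" ends up with a far larger weight than "stay_quiet"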

_________________
Still officially AGA 5d but I play so irregularly these days that I am probably only 3d or 4d over the board (but hopefully still 5d in terms of knowledge, theory and the ability to contribute).

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #56 Posted: Thu Jan 28, 2016 10:56 am 
Lives in gote

Posts: 677
Liked others: 6
Was liked: 31
KGS: 2d
Ok, some dummy questions:

1. When we speak about neural networks, do we mean hardware (neural-network-like wiring) or software that simulates those networks on a normal computer with a von Neumann architecture?
2. If somebody designs a normal computer chip, he basically has to go to the drawing board first and lay out all the NAND, NOR, ... gates to make sure that each input produces the output he wants and that no input ever breaks the machine. Is it the same approach with neural networks, just that all those gates are more dependent on each other and need more calibration to function smoothly, compared to a normal architecture where everything works straightforwardly right from the beginning?

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #57 Posted: Thu Jan 28, 2016 11:40 am 
Lives with ko

Posts: 199
Liked others: 6
Was liked: 55
Rank: KGS 3 kyu
Pippen wrote:
Ok, some dummy questions:

1. When we speak about neural networks, do we mean hardware (neural-network-like wiring) or software that simulates those networks on a normal computer with a von Neumann architecture?
2. If somebody designs a normal computer chip, he basically has to go to the drawing board first and lay out all the NAND, NOR, ... gates to make sure that each input produces the output he wants and that no input ever breaks the machine. Is it the same approach with neural networks, just that all those gates are more dependent on each other and need more calibration to function smoothly, compared to a normal architecture where everything works straightforwardly right from the beginning?


Artificial neural networks are only software.

In your brain, there are synapses that allow one neuron to pass a signal to another neuron. These are created during your life, but particularly during brain development. A neuron "fires" when it receives sufficiently strong signals from the neurons connected to it.

An artificial neural network tries to emulate this behavior, but it is only software. Instead of creating synapses, we usually model that through a weight between two neurons - "a strong weight corresponds to a strong synapse".
Therefore, the structure (i.e. neurons) itself is not enough - you also need to know how strongly they are connected. This is what the training part tries to figure out.
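For what it's worth, here is a one-neuron sketch of that idea in Python/NumPy; the weights, bias and sigmoid activation are arbitrary choices, just to show how strong versus weak "synapses" change whether the neuron fires.

Code:
import numpy as np

def neuron(inputs, weights, bias):
    signal = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-signal))  # sigmoid: near 0 = quiet, near 1 = firing

inputs = np.array([1.0, 0.0, 1.0])   # signals arriving from three upstream neurons
strong = np.array([2.0, 0.5, 2.0])   # "strong synapses"
weak = np.array([0.1, 0.1, 0.1])     # "weak synapses"

print(neuron(inputs, strong, bias=-2.0))  # ~0.88: the neuron fires
print(neuron(inputs, weak, bias=-2.0))    # ~0.14: it stays quiet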


This post by uPWarrior was liked by 3 people: emeraldemon, Pippen, wolfking
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #58 Posted: Fri Jan 29, 2016 9:01 am 
Gosei

Posts: 1744
Liked others: 703
Was liked: 288
KGS: greendemon
Tygem: greendemon
DGS: smaragdaemon
OGS: emeraldemon
Here is what An Younggil 8p says about AlphaGo at the end of his review:

Quote:
White's play in the second half of the game was excellent, and I couldn't find any serious mistakes from White.

AlphaGo's style seems to be slightly territorial, but well balanced. It's good at haengma and tesuji, but its play is not yet perfect.

As we can see, it made several questionable moves and mistakes, but its play in the second half of the game was much more accurate and refined than in the first.

AlphaGo's games throughout this match were very impressive, and I never expected that a computer Go program could play such strong and smooth moves.

It's so surprising and shocking, and I look forward to watching the match between Lee Sedol 9p and AlphaGo in March 2016.

When AlphaGo faces Lee, we will be able to see more clearly just how strong it is, and how much it has improved since October 2015 (when this game was played).

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #59 Posted: Fri Jan 29, 2016 9:04 am 
Gosei

Posts: 1744
Liked others: 703
Was liked: 288
KGS: greendemon
Tygem: greendemon
DGS: smaragdaemon
OGS: emeraldemon
It's interesting to me that An Younggil says AlphaGo is strongest in the second half of the game. Maybe we will see games where Lee Sedol takes an early lead from a strong opening, but then AlphaGo tries to claw its way back.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #60 Posted: Fri Jan 29, 2016 9:37 am 
Lives in sente

Posts: 1311
Liked others: 14
Was liked: 153
Rank: German 1 Kyu
I remember some comments by Ôhashi Hirofumi 6p on professional (tournament) games that had an "unusual", centre-oriented opening.

It seemed to me that -- after 60 to 80 moves or so -- the position reached could not be clearly distinguished from a position deriving from "normal" opening play (with the exception of one or two seemingly "misplaced" stones).

This would imply that there will be many more similar (maybe partial) positions to learn from in the "later" stages of the games than during the opening stage, when the board is relatively empty.

_________________
The really most difficult Go problem ever: https://igohatsuyoron120.de/index.htm
Igo Hatsuyōron #120 (really solved by KataGo)

Top
 Profile  
 
Display posts from previous:  Sort by  
Post new topic Reply to topic  [ 101 posts ]  Go to page Previous  1, 2, 3, 4, 5, 6  Next

All times are UTC - 8 hours [ DST ]


Who is online

Users browsing this forum: No registered users and 1 guest


You cannot post new topics in this forum
You cannot reply to topics in this forum
You cannot edit your posts in this forum
You cannot delete your posts in this forum
You cannot post attachments in this forum

Search for:
Jump to:  
Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group