Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-0

For discussing go computing, software announcements, etc.
DrStraw
Oza
Posts: 2180
Joined: Tue Apr 27, 2010 4:09 am
Rank: AGA 5d
GD Posts: 4312
Online playing schedule: Every tenth February 29th from 20:00-20:01 (if time permits)
Location: ʍoquıɐɹ ǝɥʇ ɹǝʌo 'ǝɹǝɥʍǝɯos
Has thanked: 237 times
Been thanked: 662 times
Contact:

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by DrStraw »

emeraldemon wrote:It's interesting to me that An Younggil says AlphaGo is strongest in the second half of the game. Maybe we will see games where Lee Sedol takes an early lead from a strong opening, but then AlphaGo tries to claw its way back.


Not really surprising as there are far fewer things to think about in the second half of the game.
Still officially AGA 5d but I play so irregularly these days that I am probably only 3d or 4d over the board (but hopefully still 5d in terms of knowledge, theory and the ability to contribute).
Charles Matthews
Lives in gote
Posts: 450
Joined: Sun May 13, 2012 9:12 am
Rank: BGA 3 dan
GD Posts: 0
Has thanked: 5 times
Been thanked: 189 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by Charles Matthews »

DrStraw wrote:
emeraldemon wrote:It's interesting to me that An Younggil says AlphaGo is strongest in the second half of the game. Maybe we will see games where Lee Sedol takes an early lead from a strong opening, but then AlphaGo tries to claw its way back.


Not really surprising as there are far fewer things to think about in the second half of the game.


There would be a truth in there, but maybe only a partial one. Pro time-management, if graphed (does anyone do this?), and smoothed out, would probably be consistent with the thought.

But, but, ... come the endgame and definite one-point errors can be glaring, to a pro. And I don't suppose one gets away with saying they also actually lie on the surface.

Urrgh. One thing that a strong AI might cast a light on is "the moment when the board really can be treated as a sum of sub-boards", coupled only weakly by ko threats. I'm not quite sure how. There is the traditional oyose, which I suppose starts when both players can be sure that a big group isn't going to die. Then there is the "phase transition" to sums of sub-boards.

Maybe AlphaGo has an edge when those two points in the game are widely separated? Generalising from one instance, or fewer, which we all enjoy, that might be the lesson from Game 1.
DrStraw
Oza
Posts: 2180
Joined: Tue Apr 27, 2010 4:09 am
Rank: AGA 5d
GD Posts: 4312
Online playing schedule: Every tenth February 29th from 20:00-20:01 (if time permits)
Location: ʍoquıɐɹ ǝɥʇ ɹǝʌo 'ǝɹǝɥʍǝɯos
Has thanked: 237 times
Been thanked: 662 times
Contact:

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by DrStraw »

Charles Matthews wrote:
DrStraw wrote:Not really surprising as there are far fewer things to think about in the second half of the game.


There would be a truth in there, but maybe only a partial one. Pro time-management, if graphed (does anyone do this?), and smoothed out, would probably be consistent with the thought.



I was not thinking at all about the pro when I made my comment. I was thinking of the computer. The more moves left before the game ends, the faster the potential game tree grows. I know that AlphaGo is using neural nets rather than relying on tree search alone, but it still seems that there will be far more opportunities for the computer to play better during the latter part of the game.
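A back-of-the-envelope sketch of that growth, with purely illustrative numbers (nothing taken from the paper):

```python
# Naive view: with b legal moves on average and d moves still to play,
# an exhaustive game tree has roughly b**d leaves, so the tree left to
# consider collapses as the game approaches its end.

def naive_tree_size(branching_factor, moves_left):
    return branching_factor ** moves_left

# Hypothetical figures: ~250 candidate moves in the opening, ~30 late on.
print(f"{naive_tree_size(250, 20):.2e}")  # looking 20 moves ahead early in the game
print(f"{naive_tree_size(30, 20):.2e}")   # looking 20 moves ahead near the end
```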
Still officially AGA 5d but I play so irregularly these days that I am probably only 3d or 4d over the board (but hopefully still 5d in terms of knowledge, theory and the ability to contribute).
Shawn Ligocki
Dies with sente
Posts: 109
Joined: Sat Dec 28, 2013 12:10 am
Rank: AGA 1k
GD Posts: 0
KGS: sligocki
Online playing schedule: Ad hoc
Location: Boston
Has thanked: 159 times
Been thanked: 19 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by Shawn Ligocki »

Marcel Grünauer wrote:Some thoughts about the continued benefit of AlphaGo for Go players.

DeepMind is a business concerned with Artificial Intelligence. AlphaGo is an interesting step in that direction, but it's not DeepMind's core business.

I don't see DeepMind in the business of making a commercial version of AlphaGo, neither by producing a dumbed-down desktop or mobile version, nor by connecting the bot to KGS or making it available for rent, like Champion Go HD's "Engine Server Games". And they are probably not going to open-source AlphaGo either, given that the algorithms seem to be the foundation of their far-ranging business ideas...


I think they could release the final policy and value neural nets without giving away general intellectual property. I think the business value here is in the process, not the product.

On the other hand, their Nature paper actually provides an extensive description of how they produced this AI, including details of the parameter values they used. Maybe someone more invested in the Go community could make a version that would be useful for learning!
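Roughly speaking, "releasing the nets" would amount to publishing the learned parameter arrays plus the architecture definition needed to interpret them; a toy sketch with made-up shapes (nothing to do with DeepMind's actual files):

```python
import numpy as np

# Made-up layer shapes, standing in for a trained network's parameters.
params = {
    "conv1_filters": np.random.randn(192, 48, 5, 5).astype(np.float32),
    "conv1_bias": np.zeros(192, dtype=np.float32),
}

# "Releasing" the net: write the values out ...
np.savez_compressed("policy_params.npz", **params)

# ... and anyone with the matching architecture definition can load them back.
restored = np.load("policy_params.npz")
assert np.array_equal(params["conv1_filters"], restored["conv1_filters"])
```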
Pippen
Lives in gote
Posts: 677
Joined: Thu Sep 16, 2010 3:34 pm
GD Posts: 0
KGS: 2d
Has thanked: 6 times
Been thanked: 31 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by Pippen »

Marcel Grünauer wrote:Therefore I fear that after AlphaGo has defeated Yi Se-tol, whether it's this year or the next, the project will be abandoned and other more lucrative AI challenges will be tackled. (Side note: Why play Yi Se-tol, not Ke Jie? Yi Se-tol is more famous.)



I see some significant potential for manipulation here. If AlphaGo is not made public afterwards (by playing other pros too, or in whatever way), I could easily see Google bribing Sedol into not giving his very best, so that Google gets all the publicity. That is always a danger when a lot of fame and money is involved and we have a one-on-one fight (ask boxing fans). Just as a reminder.
Mike Novack
Lives in sente
Posts: 1045
Joined: Mon Aug 09, 2010 9:36 am
GD Posts: 0
Been thanked: 182 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by Mike Novack »

I think this discussion is handicapped by too many of us being confused about what a neural net IS, as compared to other sorts of programs.

Step back for a moment. Think of a human brain. Think of a newborn child. The child's brain starts out not knowing any human language, but in its first year of life it will be LEARNING one. The child's brain didn't start out different depending on whether it would go on to learn, say, French or Mandarin. It was the changes during the learning process that made the difference.

Well, the software neural net is like that. In the beginning it is just a program which simulates a lot of nodes, defines how those nodes communicate with their neighbors, and assigns each of those communication paths a value that modifies (amplifies or attenuates) the signal sent along it. The ONLY thing that changes during training is those values. In the beginning, the neural net can do nothing. After having been trained, it can evaluate some function (in this case, input "a board position"; output "a move").

The SAME neural net could instead have been taught to play chess or do anything else. The neural net program itself is knowledge neutral. It's the training process that makes the difference. But of course, having trained a neural net (accumulated all those values), one could easily "clone" the knowledge into an identical (but empty of knowledge) neural net: just copy those values. But note, VERY IMPORTANT, that this only works for an IDENTICAL neural net (not a smaller version).
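A toy numpy sketch of that "just copy those values" idea, with made-up layer sizes (nothing like AlphaGo's real architecture):

```python
import numpy as np

# Toy net: the architecture is just the shapes; the "knowledge" is the values.
def make_net(layer_sizes, rng):
    return [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def clone_into(trained, empty):
    # Copying only works if every layer has the identical shape.
    for src, dst in zip(trained, empty):
        if src.shape != dst.shape:
            raise ValueError("can only clone into an identical net")
        dst[...] = src

rng = np.random.default_rng(0)
trained = make_net([361, 128, 361], rng)   # sizes are made up
empty = make_net([361, 128, 361], rng)
clone_into(trained, empty)                 # works: identical shapes
smaller = make_net([361, 64, 361], rng)
# clone_into(trained, smaller)             # would raise: not an identical net
```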

So no, you CAN'T do a "dumbed down version" for small machines. This is a very big neural net: lots of nodes, lots of "signals" going between nodes, and it takes a very powerful machine to get all that done within the allowed real time.

What the "dumbed down" question is really asking is could there be built a smaller neural net that could also be taught to play go and how strong would that be. Open question. What we have here is only showing us the upper bound.

A couple of other points. A certain amount of randomization is involved in the training process. So if (instead of copying in the values) we trained that second neural net using the same training data, we would end up with a (trained) neural net able to play at the same strength, BUT the set of values would in general be different!

In terms of "intellectual property" law, this is probably new territory. There is no particular value in the implementation details of the neural net per se. The real costs (and resulting value) come from the training process.
luigi
Lives in gote
Posts: 352
Joined: Wed Jul 06, 2011 12:01 pm
Rank: Low
GD Posts: 0
Location: Spain
Has thanked: 181 times
Been thanked: 41 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by luigi »

Mike Novack wrote:So no, you CAN'T do a "dumbed down version" for small machines. This is a very big neural net: lots of nodes, lots of "signals" going between nodes, and it takes a very powerful machine to get all that done within the allowed real time.

Interesting. So how much hard drive space would such a big neural net take? I had hoped it would be like any other program.
Uberdude
Judan
Posts: 6727
Joined: Thu Nov 24, 2011 11:35 am
Rank: UK 4 dan
GD Posts: 0
KGS: Uberdude 4d
OGS: Uberdude 7d
Location: Cambridge, UK
Has thanked: 436 times
Been thanked: 3718 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by Uberdude »

The hard drive space to store the code of the program when it is not running is not important. Even if the code is big, and all the various parameters within the network (which get tuned by the training) take up a lot of space, hard disks are dirt cheap. The expensive factor is the memory and processing power required to run the program. A program whose source/compiled/whatever code is just 1 KB can contain an instruction saying "allocate 100 TB of memory".
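A trivial sketch of the point: the source below is a handful of bytes, but what it asks for at run time is not:

```python
import numpy as np

# This little script stores in a few hundred bytes, but the array it requests
# would need roughly 37 GiB of RAM to actually materialise.
shape = (100_000, 100_000)
print(shape[0] * shape[1] * 4 / 2**30, "GiB requested for float32")
# big = np.zeros(shape, dtype=np.float32)  # uncomment only with that much memory
```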
luigi
Lives in gote
Posts: 352
Joined: Wed Jul 06, 2011 12:01 pm
Rank: Low
GD Posts: 0
Location: Spain
Has thanked: 181 times
Been thanked: 41 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by luigi »

Uberdude wrote:The hard drive space to store the code of the program when it is not running is not important. Even if the code is big, and all the various parameters within the network (which get tuned by the training) take up a lot of space, hard disks are dirt cheap. The expensive factor is the memory and processing power required to run the program. A program whose source/compiled/whatever code is just 1 KB can contain an instruction saying "allocate 100 TB of memory".

A DeepMind programmer created a chess engine based on deep learning a few months ago. It runs smoothly on a cheap laptop. I would expect the same to be true of AlphaGo, even if its performance on such a machine will clearly not be pro level. At the very least, the version without lookahead shouldn't require many resources to run (and not much time, either), and it's allegedly as good as the previous state of the art.
mika
Beginner
Posts: 11
Joined: Tue Jul 28, 2015 12:13 am
GD Posts: 0
Has thanked: 1 time

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by mika »

Mike Novack wrote:So no, you CAN'T do a "dumbed down version" for small machines. This is a very big neural net: lots of nodes, lots of "signals" going between nodes, and it takes a very powerful machine to get all that done within the allowed real time.


Are you absolutely certain about that? Somebody here argues that some ANNs can be compressed to save space without sacrificing accuracy; see for example http://arxiv.org/abs/1510.00149.
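A toy illustration of two of the tricks in that paper (magnitude pruning and weight sharing), heavily simplified and not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(10_000).astype(np.float32)   # stand-in for one layer's weights

# 1. Pruning: zero out small-magnitude weights.
pruned = np.where(np.abs(w) > 0.5, w, 0.0)

# 2. Weight sharing: quantise the survivors to a tiny codebook
#    (the paper clusters with k-means; uniform levels keep the sketch short).
levels = np.linspace(pruned.min(), pruned.max(), 16)        # 4-bit codebook
codes = np.abs(pruned[:, None] - levels[None, :]).argmin(axis=1)
reconstructed = levels[codes]

print("weights kept:", np.count_nonzero(pruned), "of", w.size)
```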
Krama
Lives in gote
Posts: 436
Joined: Mon Jan 06, 2014 3:46 am
Rank: KGS 5 kyu
GD Posts: 0
Has thanked: 1 time
Been thanked: 38 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by Krama »

AlphaGo was playing a teaching game against the human, thus it made the obvious pro mistakes in order to see if the human is strong enough.

I don't expect it to be able to do a teaching game with Lee Sedol but winning should be easy for the AI overlord.

Hail our new robotic overlords!
Mike Novack
Lives in sente
Posts: 1045
Joined: Mon Aug 09, 2010 9:36 am
GD Posts: 0
Been thanked: 182 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by Mike Novack »

We're getting nowhere.

This is NOT a data problem. This is an intensive computation problem. "All those parameters" wouldn't take a great deal of space.

Look, someday, if and when neural nets become widely used, we will have HARDWARE to implement the cells, and so neural nets where the nodes are all doing their thing in parallel (instead of the process being simulated sequentially). Could be orders of magnitude faster.

Your brain cells, in the neural net that is "you", receive and send (to neighbors) in times that might be thousandths of a second (not billionths, as in our modern electronic devices). But there are lots of them doing that simultaneously. So in half a second of real time your brain analyzes the visual image, determines that a ball is coming your way, and you begin to move to catch it.
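The arithmetic behind that, spelled out:

```python
# Signalling steps in the brain take on the order of a millisecond, yet the
# "catch the ball" computation finishes in about half a second, so any serial
# chain of steps can only be a few hundred long; the rest is parallelism.
step_time_s = 1e-3    # ~thousandths of a second per neuron-to-neuron step
reaction_s = 0.5
print("maximum serial depth:", int(reaction_s / step_time_s))   # ~500 steps
```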
Pippen
Lives in gote
Posts: 677
Joined: Thu Sep 16, 2010 3:34 pm
GD Posts: 0
KGS: 2d
Has thanked: 6 times
Been thanked: 31 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by Pippen »

@mikenovack: can you recommend a book or link about neural networks and how they work in the brain and in a computer, respectively?
fwiffo
Gosei
Posts: 1435
Joined: Tue Apr 20, 2010 6:22 am
Rank: Out of practice
GD Posts: 1104
KGS: fwiffo
Location: California
Has thanked: 49 times
Been thanked: 168 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by fwiffo »

Mike Novack wrote:The SAME neural net could instead have been taught to play chess or do anything else. The neural net program itself is knowledge neutral.

This is incorrect. The topology and other characteristics of the network are chosen for a particular problem. These networks were built specifically for the 19x19 dimensions of a go board and for the tri-state nature of the points. Furthermore, there must be *some* explicit knowledge built into the system as a whole - it is expected and OK that the network will suggest sub-optimal moves, but it cannot suggest illegal moves (e.g. it must know the ko rule).
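A minimal sketch of those two points (board-specific input encoding, legality enforced by masking the net's output), with details invented purely for illustration:

```python
import numpy as np

# Toy sketch: the input encoding is board-specific (19x19, three states per
# point), and legality is enforced outside the net by masking its output.
BOARD = 19
board = np.zeros((BOARD, BOARD), dtype=np.int8)   # 0 empty, 1 black, 2 white

# Tri-state points become binary feature planes (the real system feeds the
# net many more planes than these three).
planes = np.stack([(board == v).astype(np.float32) for v in (1, 2, 0)])

# Pretend policy output: one score per point.
raw_policy = np.random.default_rng(2).random((BOARD, BOARD))

# Mask moves the rules forbid (occupied points here; a real engine would also
# mask ko retakes and suicide) before picking a move.
legal = (board == 0)
masked = np.where(legal, raw_policy, -np.inf)
move = np.unravel_index(np.argmax(masked), masked.shape)
print("chosen move:", move)
```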

In theory, you could wire in chess games in a really hacky way, but the network probably would work very poorly or not at all. It would be the electronic equivalent of traumatic brain injury.

Mike Novack wrote:Look, someday, if and when neural nets become widely used, we will have HARDWARE to implement the cells, and so neural nets where the nodes are all doing their thing in parallel (instead of the process being simulated sequentially). Could be orders of magnitude faster.

Such highly-parallel hardware exists and is already in most PCs - GPUs. And neural networks are already in broad use. They're the reason that speech recognition on your phone works quite well these days but was quite terrible 5 years ago.

Wikipedia wrote:Large-scale automatic speech recognition is the first and most convincing successful case of deep learning in the recent history, embraced by both industry and academia across the board. Between 2010 and 2014, the two major conferences on signal processing and speech recognition, IEEE-ICASSP and Interspeech, have seen a large increase in the numbers of accepted papers in their respective annual conference papers on the topic of deep learning for speech recognition. More importantly, all major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products, etc.) are based on deep learning methods.
emeraldemon
Gosei
Posts: 1744
Joined: Sun May 02, 2010 1:33 pm
GD Posts: 0
KGS: greendemon
Tygem: greendemon
DGS: smaragdaemon
OGS: emeraldemon
Has thanked: 697 times
Been thanked: 287 times

Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-

Post by emeraldemon »

Pippen wrote:@mikenovack: can you recommend a book or link about neural networks and how they work in the brain and in a computer, respectively?


Free online textbook:
http://neuralnetworksanddeeplearning.com/