
All times are UTC - 8 hours [ DST ]




Post new topic Reply to topic  [ 101 posts ]  Go to page Previous  1, 2, 3, 4, 5, 6  Next
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #61 Posted: Fri Jan 29, 2016 12:28 pm 
Oza

Posts: 2180
Location: ʍoquıɐɹ ǝɥʇ ɹǝʌo 'ǝɹǝɥʍǝɯos
Liked others: 237
Was liked: 662
Rank: AGA 5d
GD Posts: 4312
Online playing schedule: Every tenth February 29th from 20:00-20:01 (if time permits)
emeraldemon wrote:
It's interesting to me that An Younggil says AlphaGo is strongest in the second half of the game. Maybe we will see games where Lee Sedol takes a lead early from strong opening, but then AlphaGo tries to claw its way back.


Not really surprising as there are far fewer things to think about in the second half of the game.

_________________
Still officially AGA 5d but I play so irregularly these days that I am probably only 3d or 4d over the board (but hopefully still 5d in terms of knowledge, theory and the ability to contribute).


This post by DrStraw was liked by: xed_over
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #62 Posted: Fri Jan 29, 2016 2:14 pm 
Lives in gote

Posts: 448
Liked others: 5
Was liked: 187
Rank: BGA 3 dan
DrStraw wrote:
emeraldemon wrote:
It's interesting to me that An Younggil says AlphaGo is strongest in the second half of the game. Maybe we will see games where Lee Sedol takes a lead early from strong opening, but then AlphaGo tries to claw its way back.


Not really surprising as there are far fewer things to think about in the second half of the game.


There would be a truth in there, but maybe only a partial one. Pro time-management, if graphed (does anyone do this?), and smoothed out, would probably be consistent with the thought.

But, but, ... come the endgame and definite one-point errors can be glaring, to a pro. And I don't suppose one gets away with saying they also actually lie on the surface.

Urrgh. One thing that a strong AI might cast a light on is "the moment when the board really can be treated as a sum of sub-boards", coupled only weakly by ko threats. I'm not quite sure how. There is the traditional oyose, which I suppose starts when both players can be sure that a big group isn't going to die. Then there is the "phase transition" to sums of sub-boards.

Maybe AlphaGo has an edge when those two points in the game are widely separated? Generalising from one instance, or fewer, which we all enjoy, that might be the lesson from Game 1.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #63 Posted: Fri Jan 29, 2016 3:21 pm 
Oza

Posts: 2180
Location: ʍoquıɐɹ ǝɥʇ ɹǝʌo 'ǝɹǝɥʍǝɯos
Liked others: 237
Was liked: 662
Rank: AGA 5d
GD Posts: 4312
Online playing schedule: Every tenth February 29th from 20:00-20:01 (if time permits)
Charles Matthews wrote:
DrStraw wrote:
Not really surprising as there are far fewer things to think about in the second half of the game.


There would be a truth in there, but maybe only a partial one. Pro time-management, if graphed (does anyone do this?), and smoothed out, would probably be consistent with the thought.



I was not thinking about the pro at all when I made my comment. I was thinking of the computer. The more moves left before the game ends, the faster the potential game tree grows. I know that AlphaGo is using nets rather than brute-force tree search, but it still seems that there will be far more opportunities for the computer to play better during the latter part of the game.
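A toy sketch of why (my own illustrative numbers, nothing from the paper): if the branching factor is roughly the number of empty points, the tree over the next few moves is vastly bigger early in the game.

```python
# Toy estimate of the game-tree size for the next `depth` moves,
# assuming the branching factor is roughly the number of empty points.
def tree_size(empty_points, depth):
    size = 1
    for ply in range(depth):
        size *= max(empty_points - ply, 1)
    return size

early = tree_size(350, 5)  # early game: ~350 empty points
late = tree_size(40, 5)    # late endgame: ~40 empty points
print(early // late)       # the early tree is orders of magnitude larger
```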

_________________
Still officially AGA 5d but I play so irregularly these days that I am probably only 3d or 4d over the board (but hopefully still 5d in terms of knowledge, theory and the ability to contribute).

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #64 Posted: Fri Jan 29, 2016 3:24 pm 
Dies with sente

Posts: 109
Location: Boston
Liked others: 159
Was liked: 19
Rank: AGA 1k
KGS: sligocki
Online playing schedule: Ad hoc
Marcel Grünauer wrote:
Some thoughts about the continued benefit of AlphaGo for Go players.

DeepMind is a business concerned with Artificial Intelligence. AlphaGo is an interesting step in that direction, but it's not DeepMind's core business.

I don't see DeepMind in the business of making a commercial version of AlphaGo, neither by producing a dumbed-down desktop or mobile version, nor by connecting the bot to KGS or making it available for rent, like Champion Go HD's "Engine Server Games". And they are probably not going to open-source AlphaGo either, given that the algorithms seem to be the foundation of their far-ranging business ideas...


I think they could release the final policy and value neural nets without giving away general intellectual property. I think the business value here is in the process, not the product.

On the other hand, their Nature paper actually provides an extensive description of how they produced this AI, including details on the parameter values that they used. Maybe someone with more interest in the Go community could make a version that would be useful for learning!

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #65 Posted: Sat Jan 30, 2016 6:55 am 
Lives in gote

Posts: 677
Liked others: 6
Was liked: 31
KGS: 2d
Marcel Grünauer wrote:
Therefore I fear that after AlphaGo has defeated Yi Se-tol, whether it's this year or the next, the project will be abandoned and other more lucrative AI challenges will be tackled. (Side note: Why play Yi Se-tol, not Ke Jie? Yi Se-tol is more famous.)



I see some significant potential for manipulation here. If AlphaGo is not made public afterwards (by playing other pros too, or in whatever way), I could easily see Google bribing Sedol into not giving his very best, so that Google gets all the publicity. That is always a danger when a lot of fame/money is involved and we have a one-on-one fight (ask boxing fans). Just as a reminder.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #66 Posted: Sat Jan 30, 2016 8:27 am 
Lives in sente

Posts: 1037
Liked others: 0
Was liked: 180
I think this discussion is handicapped by too many of us being confused about what a neural net IS, as compared to other sorts of programs.

Step back for a moment. Think of a human brain. Think of a newborn child. The child's brain starts out not knowing any human language, but in its first year of life it will be LEARNING one. The child's brain didn't start out different depending on whether it would go on to learn, say, French or Mandarin. It was the changes during the learning process that made the difference.

Well, the software neural net is like that. In the beginning there is a program which simulates a lot of nodes and defines how these nodes communicate with their neighbors, and for each of those communication paths a value that modifies (amplifies or attenuates) the signal sent along that path. The ONLY thing that changes during training is those values. In the beginning, the neural net can do nothing. After having been trained, it can evaluate some function (in this case, input "a board position"; output "a move").

The SAME neural net could instead have been taught to play chess or do anything else. The neural net program itself is knowledge neutral. It's the training process that makes the difference. But of course, having trained a neural net (accumulated all those values), you could easily "clone" the knowledge into an identical (but empty of knowledge) neural net: just copy those values. But note, VERY IMPORTANT, that this only works for an IDENTICAL neural net (not a smaller version).
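To make the "just copy those values" point concrete, here is a minimal toy sketch (plain NumPy; the architecture is invented for illustration and has nothing to do with AlphaGo's):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(sizes):
    """A toy fully-connected net: just a list of weight matrices."""
    return [rng.standard_normal((a, b)) for a, b in zip(sizes, sizes[1:])]

def forward(net, x):
    for w in net:
        x = np.tanh(x @ w)
    return x

trained = make_net([4, 8, 2])   # imagine this one has been trained
empty   = make_net([4, 8, 2])   # identical topology, "empty of knowledge"

# "Cloning" the knowledge is just copying the weight values ...
for dst, src in zip(empty, trained):
    dst[:] = src

x = rng.standard_normal(4)
assert np.allclose(forward(empty, x), forward(trained, x))
# ... and it only works because the two nets have identical shapes.
```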

So no, you CAN'T do a "dumbed down version" for small machines. This is a very big neural net: lots of nodes, lots of "signals" going between nodes; it takes a very powerful machine to get all that done within the allowed real time.

What the "dumbed down" question is really asking is whether a smaller neural net could be built that could also be taught to play go, and how strong that would be. Open question. What we have here only shows us the upper bound.

A couple of other points. A certain amount of randomization is involved in the training process. So if (instead of copying in the values) we trained that second neural net using the same training data we would end up with a (trained) neural net able to play at the same strength BUT the set of values would in general be different!

In terms of "intellectual property" law, this is probably new territory. There is no particular value in the implementation details of the neural net per se. The real costs (and resulting value) come from the training process.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #67 Posted: Sat Jan 30, 2016 8:45 am 
Lives in gote

Posts: 340
Location: Spain
Liked others: 181
Was liked: 41
Rank: Low
Mike Novack wrote:
So no, CAN'T do a "dumbed down version" for small machines. This is a very big neural net, lot of nodes, lots of "signals" going between nodes, takes a very powerful machine to get all that done within the allowed real time.

Interesting. So how much hard drive space would such a big neural net take? I had hoped it would be like any other program.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #68 Posted: Sat Jan 30, 2016 9:03 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
The hard drive space to store the code of the program when it is not running is not important. Even if the code is big, and storing all the various parameters from within the network (which get tuned by the training) is big, hard disks are dirt cheap. The expensive factor is the memory and processing power required to run the program. A program whose source/compiled/whatever code is just 1 KB can contain an instruction saying "allocate 100 TB of memory".
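As a hedged back-of-envelope (using the paper's 13 layers and 192 filters, but simplifying every layer to a 3x3 kernel and taking 48 input channels as an assumption), the on-disk weight storage is indeed modest:

```python
# Rough float32 weight storage for a 13-layer convolutional net.
# Simplification: every layer uses 3x3 kernels (the real net differs).
def conv_params(layers, filters, kernel=3, in_channels=48):
    total = filters * in_channels * kernel * kernel              # first layer
    total += (layers - 1) * filters * filters * kernel * kernel  # the rest
    return total

params = conv_params(layers=13, filters=192)
print(f"{params:,} weights at 4 bytes each ≈ {params * 4 / 1e6:.0f} MB")
```

On this estimate the weights fit in a few tens of megabytes, which supports the point above: the cost is in running the net, not storing it.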

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #69 Posted: Sat Jan 30, 2016 9:43 am 
Lives in gote

Posts: 340
Location: Spain
Liked others: 181
Was liked: 41
Rank: Low
Uberdude wrote:
The hard drive space to store the code of the program when it is not running is not important. Even if the code is big, and storing all the various parameters from within the network (which get tuned by the training) is big, hard disks are dirt cheap. The expensive factor is the memory and processing power required to run the program. A program whose source/compiled/whatever code is just 1 KB can contain an instruction saying "allocate 100 TB of memory".

A DeepMind programmer created a chess engine based on deep learning a few months ago. It runs smoothly on a cheap laptop. I would expect the same to be true of AlphaGo, even if its performance on such a machine will clearly not be pro level. At the very least, the version without lookahead shouldn't require many resources to run (and not much time, either), and it's allegedly as good as the previous state of the art.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #70 Posted: Sat Jan 30, 2016 10:46 am 
Beginner

Posts: 11
Liked others: 1
Was liked: 0
Mike Novack wrote:
So no, CAN'T do a "dumbed down version" for small machines. This is a very big neural net, lot of nodes, lots of "signals" going between nodes, takes a very powerful machine to get all that done within the allowed real time.


Are you absolutely certain about that? There is research arguing that some ANNs can be compressed to save space without sacrificing accuracy; see for example http://arxiv.org/abs/1510.00149.
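For the curious, the core idea of that paper (magnitude pruning, before its quantization and Huffman-coding stages) can be caricatured in a few lines; this is a cartoon, not the paper's actual pipeline:

```python
import numpy as np

def prune(weights, keep_fraction=0.1):
    """Zero out all but the largest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_fraction))
    threshold = np.partition(flat, -k)[-k]   # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

w = np.random.default_rng(1).standard_normal((100, 100))
sparse = prune(w, keep_fraction=0.1)
print(f"{(sparse == 0).mean():.0%} of weights are now zero")
```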

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #71 Posted: Sat Jan 30, 2016 11:19 am 
Lives in gote

Posts: 436
Liked others: 1
Was liked: 38
Rank: KGS 5 kyu
AlphaGo was playing a teaching game against the human, so it made the obvious pro mistakes in order to see if the human was strong enough.

I don't expect it to be able to do a teaching game with Lee Sedol but winning should be easy for the AI overlord.

Hail our new robotic overlords!


This post by Krama was liked by: Shawn Ligocki
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #72 Posted: Sat Jan 30, 2016 2:17 pm 
Lives in sente

Posts: 1037
Liked others: 0
Was liked: 180
We're getting nowhere.

This is NOT a data problem. This is an intensive computation problem. "All those parameters" wouldn't take a great deal of space.

Look, someday, if and when neural nets become widely used, we will have HARDWARE to implement the cells, and so neural nets where the nodes are all doing their thing in parallel (instead of the process being simulated sequentially). That could be orders of magnitude faster.

The cells in the neural net that is "you" receive and send signals (to their neighbors) in times measured in thousandths of a second (not billionths, as in our modern electronic devices). But there are lots of them doing that simultaneously. So in half a second of real time your brain analyzes the visual image, determines that a ball is coming your way, and you begin to move to catch it.


This post by Mike Novack was liked by: luigi
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #73 Posted: Sat Jan 30, 2016 3:17 pm 
Lives in gote

Posts: 677
Liked others: 6
Was liked: 31
KGS: 2d
@mikenovack: can you recommend a book or link about neural networks and how they work in the brain and in a computer, respectively?

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #74 Posted: Sat Jan 30, 2016 4:48 pm 
Gosei

Posts: 1435
Location: California
Liked others: 53
Was liked: 171
Rank: Out of practice
GD Posts: 1104
KGS: fwiffo
Mike Novack wrote:
The SAME neural net could instead have been taught to play chess or do anything else. The neural net program itself is knowledge neutral.

This is incorrect. The topology and other characteristics of the network are chosen for a particular problem. These networks were built specifically for the 19x19 dimensions of a go board and for the tri-state nature of the points. Furthermore, there must be *some* explicit knowledge built into the system as a whole: it is expected and OK that the network will suggest sub-optimal moves, but it cannot suggest illegal moves (e.g. it must know the ko rule).

In theory, you could wire in chess games in a really hacky way, but the network would probably work very poorly or not at all. It would be the electronic equivalent of traumatic brain injury.
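The legality point can be illustrated with a toy input encoding and move mask (a minimal sketch; AlphaGo's real feature planes and ko handling are much richer than this):

```python
import numpy as np

SIZE = 19
EMPTY, BLACK, WHITE = 0, 1, 2

def encode(board):
    """Encode the tri-state board as three binary 19x19 feature planes."""
    return np.stack([(board == s) for s in (BLACK, WHITE, EMPTY)]).astype(np.float32)

def mask_illegal(move_probs, board, ko_point=None):
    """Zero out occupied points (and a simple ko ban), then renormalize."""
    legal = (board == EMPTY).astype(np.float32)
    if ko_point is not None:
        legal[ko_point] = 0.0
    masked = move_probs * legal
    return masked / masked.sum()

board = np.zeros((SIZE, SIZE), dtype=int)
board[3, 3] = BLACK
planes = encode(board)
probs = mask_illegal(np.full((SIZE, SIZE), 1.0), board, ko_point=(5, 5))
assert probs[3, 3] == 0 and probs[5, 5] == 0   # occupied and ko points banned
```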

Quote:
Look, someday, if and when neural nets become greatly used, we will have HARDWARE to implement the cells, and so neural nets where the nodes are all doing their thing in parallel (instead of the process being simulated by a linear process). Could be orders of magnitude faster.

Such highly-parallel hardware exists and is already in most PCs - GPUs. And neural networks are already in broad use. They're the reason that speech recognition on your phone works quite well these days but was quite terrible 5 years ago.

Wikipedia wrote:
Large-scale automatic speech recognition is the first and most convincing successful case of deep learning in the recent history, embraced by both industry and academia across the board. Between 2010 and 2014, the two major conferences on signal processing and speech recognition, IEEE-ICASSP and Interspeech, have seen a large increase in the numbers of accepted papers in their respective annual conference papers on the topic of deep learning for speech recognition. More importantly, all major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products, etc.) are based on deep learning methods.

_________________
KGS 4 kyu - Game Archive - Keyboard Otaku

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #75 Posted: Sat Jan 30, 2016 5:04 pm 
Gosei

Posts: 1744
Liked others: 703
Was liked: 288
KGS: greendemon
Tygem: greendemon
DGS: smaragdaemon
OGS: emeraldemon
Pippen wrote:
@mikenovack: can u recommend some book or link about neural networks and how they work in the brain resp. computer?


Free online textbook:
http://neuralnetworksanddeeplearning.com/


This post by emeraldemon was liked by 2 people: Pippen, yoyoma
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #76 Posted: Sun Jan 31, 2016 5:31 am 
Dies with sente

Posts: 111
Liked others: 9
Was liked: 23
Marcel Grünauer wrote:
There are videos of a Korean press conference where the AlphaGo team also answers questions.

Briefing
https://www.youtube.com/watch?v=yR017hmUSC4

Q & A
https://www.youtube.com/watch?v=_r3yF4lV0wk

There they mention that they are going to sponsor tournaments and try to make the game more popular, so that's very nice to hear.

By the way, one of the questions was whether they have any plans to register AlphaGo as a professional player in Korea; they don't have such a plan, but it's an interesting idea.


They'll be providing some sponsorship for the London Open 2016.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #77 Posted: Mon Feb 01, 2016 11:34 am 
Dies with sente

Posts: 109
Location: Boston
Liked others: 159
Was liked: 19
Rank: AGA 1k
KGS: sligocki
Online playing schedule: Ad hoc
mika wrote:
Mike Novack wrote:
So no, CAN'T do a "dumbed down version" for small machines. This is a very big neural net, lot of nodes, lots of "signals" going between nodes, takes a very powerful machine to get all that done within the allowed real time.


Are you absolutely certain about that? Since here somebody is arguing that some ANNs can be compressed to save space without sacrificing accuracy. See for example http://arxiv.org/abs/1510.00149.


Well, the AlphaGo team did actually train a "compressed" neural net for the move-prediction policy engine. It sacrificed a significant amount of accuracy (57% -> 24%), but sped the computation up by over 1000 times (3 ms -> 2 µs): enough to be worth it for evaluating the Monte Carlo rollouts.

AlphaGo Paper wrote:
We trained a 13 layer policy network, which we call the SL policy network, from 30 million positions from the KGS Go Server. The network predicted expert moves with an accuracy of 57.0% on a held out test set, using all input features, and 55.7% using only raw board position and move history as inputs, compared to the state-of-the-art from other research groups of 44.4% at date of submission 24 (full results in Extended Data Table 3). Small improvements in accuracy led to large improvements in playing strength (Figure 2,a); larger networks achieve better accuracy but are slower to evaluate during search. We also trained a faster but less accurate rollout policy pπ(a|s), using a linear softmax of small pattern features (see Extended Data Table 4) with weights π; this achieved an accuracy of 24.2%, using just 2 µs to select an action, rather than 3 ms for the policy network.
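The tradeoff those figures describe is easy to quantify (arithmetic only, using the paper's quoted numbers):

```python
# Figures quoted from the paper: 3 ms per move for the SL policy network,
# 2 microseconds per move for the fast rollout policy.
policy_seconds = 3e-3
rollout_seconds = 2e-6

speedup = policy_seconds / rollout_seconds
print(f"rollout policy is roughly {speedup:.0f}x faster per move")

# Moves each policy can evaluate in a 1-second budget:
print(round(1.0 / policy_seconds))    # a few hundred with the big network
print(round(1.0 / rollout_seconds))   # half a million with the rollout policy
```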


This post by Shawn Ligocki was liked by: mika
 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #78 Posted: Mon Feb 01, 2016 6:31 pm 
Dies in gote

Posts: 43
Liked others: 4
Was liked: 22
That's not a compressed neural network but just a frequency-based specialized pattern classifier.

_________________
Go programmer and researcher: http://pasky.or.cz/~pasky/go/
EGF 1921, KGS ~1d and getting weaker

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #79 Posted: Mon Feb 01, 2016 7:05 pm 
Dies with sente

Posts: 109
Location: Boston
Liked others: 159
Was liked: 19
Rank: AGA 1k
KGS: sligocki
Online playing schedule: Ad hoc
pasky wrote:
That's not a compressed neural network but just a frequency-based specialized pattern classifier.


Ah, my mistake. I see the paper you referenced is specifically referring to compressing an existing neural network rather than training a smaller network.

 Post subject: Re: Google's AlphaGo defeats Fan Hui 2p, 19x19, no handi, 5-
Post #80 Posted: Thu Feb 11, 2016 6:51 am 
Lives in gote

Posts: 319
Liked others: 4
Was liked: 39
Rank: 6k
GD Posts: 25
OGS: phillip1882
incredible, i didn't think it would happen for another 5 years. an 8-9 Dan computer go program.
