
All times are UTC - 8 hours [ DST ]




Post new topic Reply to topic  [ 39 posts ]  Go to page 1, 2  Next
 Post subject: Will AI ever beat humans at Go with energy parity?
Post #1 Posted: Sat Mar 25, 2017 3:23 pm 
Beginner

Posts: 18
Liked others: 0
Was liked: 0
AlphaGo used thousands of CPUs and hundreds of GPUs to edge out Lee Sedol, while the human brain uses only about 20 to 25 watts. In chess, smartphones can nowadays beat top grandmasters. Will we ever see the day that a Go AI can beat top professionals while using the same or a similar amount of energy? This could take the form of scaling the AI down to a single machine with a single CPU and perhaps no TPU, or alternatively, if the program still needs the huge datacenter AlphaGo uses, capping the AI's time at a few milliseconds per move while giving the human several hours per move, so that both sides expend energy at the same rate for an even comparison.

Lee Sedol himself recently remarked that, to be truly fair, AlphaGo should be given time limits while humans are given extra time, as opposed to the stone-handicap system. The metric should be energy parity. The human body consumes roughly 100 watts, the brain only about 20 watts, so if AlphaGo runs on Google's distributed datacenter and they can't scale it down to a single machine and still win reliably, then they should cap AlphaGo's total thinking time so that it uses the same total energy over a match as the human player. Chess has already reached and even surpassed this level (my smartphone uses less power than a human brain, yet a chess app on it can beat all top chess professionals), but it will be a very long time, if ever, before any Go AI can beat top professionals while using the same energy as a human body. The version of AlphaGo that beat Lee Sedol used at least 1,202 CPUs and 176 GPUs. Assume a CPU draws only 100 watts and a GPU only 250 watts; the AI was then using at least 1,600 times more power than Lee Sedol. Give a top pro 30 minutes per move while limiting AlphaGo to about one second per move, and it would be a fair game. Only when Google can win on those terms, without any handicap, will it have truly beaten humans and "solved" Go ("solved" in the same comparative sense that chess is solved today, with chess apps on phones using less power than human brains yet still beating top chess pros).
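The arithmetic above can be checked in a few lines (all wattages are the round numbers assumed in this post, not measured figures):

```python
# Power draw of the Lee Sedol-era AlphaGo vs. a human, using the
# assumed round numbers from this post (100 W/CPU, 250 W/GPU, 100 W human).
cpus, gpus = 1202, 176
alphago_watts = cpus * 100 + gpus * 250      # 120,200 + 44,000 = 164,200 W
human_watts = 100

ratio = alphago_watts / human_watts          # ~1,642x more power
print(f"AlphaGo draws {alphago_watts:,} W, about {ratio:.0f}x a human")

# Time handicap that equalizes energy per move: if the human gets
# 30 minutes, AlphaGo gets 30 minutes divided by the power ratio.
ai_seconds_per_move = (30 * 60) / ratio
print(f"Energy-equal AlphaGo time per move: {ai_seconds_per_move:.1f} s")
```

So the post's "about one second per move" figure follows directly from its assumed wattages.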

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #2 Posted: Sat Mar 25, 2017 8:12 pm 
Judan

Posts: 7468
Liked others: 1358
Was liked: 1149
KGS: Kirby
Tygem: 커비라고해
AlphaGo is probably training and improving its evaluation capability as we speak. As it improves through training, the power it needs during an actual game will keep decreasing (at least up to a point).

_________________
Discipline is remembering what you want. -David Campbell

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #3 Posted: Sat Mar 25, 2017 8:21 pm 
Beginner

Posts: 18
Liked others: 0
Was liked: 0
Kirby wrote:
AlphaGo is probably training and improving its evaluation capability as we speak. As it improves through training, the power it needs during an actual game will keep decreasing (at least up to a point).



I noticed that the new version of Leela, 0.9.1, while still buggy, already plays better than the paid 2016 deep-learning version of Crazy Stone 1.01 once its flaws are corrected for; on KGS, Leela was ranked about 8 dan.

Leela is GPU-optimized and, on a GTX 1080 Ti, can approach professional level.

So I think this is already strong enough for the average Go player. But there was talk of the DeepMind CEO saying that Master now needs only one GPU, so I'm curious when a single desktop will be able to consistently beat the world's top pros.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #4 Posted: Tue Mar 28, 2017 8:22 pm 
Lives with ko

Posts: 175
Liked others: 6
Was liked: 27
Rank: 3k
KGS: dankenzon
It's not a matter of if this will happen; it's a matter of when.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #5 Posted: Wed Mar 29, 2017 5:55 am 
Lives in sente

Posts: 852
Liked others: 0
Was liked: 138
But to be fair, the human brain should be considered to require 100-150 watts, since it cannot function without the rest of its "support system", just as we should count a computer's power as the input power to the machine as a whole, not just what the CPU consumes.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #6 Posted: Wed Mar 29, 2017 7:12 am 
Lives in gote

Posts: 498
Liked others: 30
Was liked: 163
GD Posts: 10
hydrogenpi7 wrote:
but it will be a very long time, if ever, before any Go AI can beat top professionals while using the same energy as a human body.

Just how long is your 'very long' time?
And if I remember correctly, Aja Huang said while testing AlphaGo v19 (or v20) on KGS that AlphaGo running on a smartphone was already better than him, and he's KGS 7 dan. That's no surprise to me, as Crazy Stone 2016 running on an LG Nexus 5 also reached 5 dan on KGS, even with the pondering option off.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #7 Posted: Wed Mar 29, 2017 7:39 am 
Lives with ko

Posts: 199
Liked others: 6
Was liked: 55
Rank: KGS 3 kyu
This question surprises me. If we change it to "when will AI beat humans at Go with energy parity?", then I expect the answer to be along the lines of "three months ago", or something close.

I read somewhere that "Master" was playing on a single machine and didn't lose a single game. If that's true, then we're already past that point, because a 2x or 4x energy reduction could be achieved with current hardware: just use the most energy-efficient platforms instead of whatever was at hand, which is probably what happened.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #8 Posted: Wed Mar 29, 2017 10:03 am 
Dies in gote

Posts: 24
Liked others: 0
Was liked: 5
Rank: Europe 5 dan
KGS: Flashgoe
Pardon me, please, but playing a computer at 20 seconds per move is nonsense. When things became serious, DeepZen lost 2 out of 3 games on pretty strong hardware, and they had time to prepare. So give a strong pro the motivation to fight the machine and they will win. Actually, Ichiriki even out-read the famous FineArt and killed a big group. Had he had more time, who knows what the result would have been. By the way, does anyone know what hardware it was running on?

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #9 Posted: Thu Apr 06, 2017 12:56 pm 
Beginner

Posts: 3
Liked others: 0
Was liked: 4
Rank: kgs 2 dan
KGS: Pettyx2
First, I agree it's mainly a matter of time. If that were really the goal, I think Google would have no problem drastically reducing the power required while giving up a little calculation strength and still toppling pros; with custom electronics built just for AlphaGo, power consumption could drop dramatically. But it would cost money, and it would have an impact on performance depending on how far you push it; I doubt power consumption was even an issue for them at the time. Switching the question to "will my personal computer, or even my smartphone, ever beat pros consistently?": I thought that was a matter of time even before the AlphaGo jump in AI programming, and now it's just around the corner.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #10 Posted: Thu Apr 06, 2017 9:02 pm 
Dies in gote

Posts: 24
Liked others: 0
Was liked: 5
Rank: Europe 5 dan
KGS: Flashgoe
Just switch to a 21x21 board and you'll have another 10 years of laughing at the computer's attempts to beat an average amateur.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #11 Posted: Thu Apr 06, 2017 10:20 pm 
Lives in gote

Posts: 498
Liked others: 30
Was liked: 163
GD Posts: 10
Bohdan wrote:
Just switch to a 21x21 board and you'll have another 10 years of laughing at the computer's attempts to beat an average amateur.

This is very interesting, and since you're a European 5 dan you're the perfect subject for this challenge. Since AlphaGo is not available to anyone, the nearest program we could contact to make this challenge happen is DeepZenGo. (This year we'll also have the codecentric challenge at the European Go Congress, in which an AI running on a mobile phone will try to beat the European Amateur Champion.)

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #12 Posted: Fri Apr 07, 2017 2:01 am 
Lives with ko

Posts: 199
Liked others: 6
Was liked: 55
Rank: KGS 3 kyu
Bohdan wrote:
Just switch to a 21x21 board and you'll have another 10 years of laughing at the computer's attempts to beat an average amateur.


I have the impression that this would backfire spectacularly. There is nothing special about 19x19, all the training could simply be replicated on 21x21. Human knowledge and intuition, on the other hand, would take a very big hit.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #13 Posted: Fri Apr 07, 2017 2:48 am 
Tengen

Posts: 4069
Location: Cambridge, UK
Liked others: 145
Was liked: 2001
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
uPWarrior wrote:
Bohdan wrote:
Just switch to a 21x21 board and you'll have another 10 years of laughing at the computer's attempts to beat an average amateur.


I have the impression that this would backfire spectacularly. There is nothing special about 19x19, all the training could simply be replicated on 21x21. Human knowledge and intuition, on the other hand, would take a very big hit.


Indeed, DeepMind surprised us by taking only about 2 years to bring top Go bots from strong-amateur level to beating a top pro, so 10 years is an awfully long time for 21x21. There is the problem of having no human games to seed the training, but if recent rumours and hints about a version of AlphaGo trained from the rules alone, without human games, are true, then 21x21 would be fairly easy: just a few weeks or months of retraining (and some reprogramming of data structures etc. for the new board size, if we assume 19x19 is hard-coded and optimized).

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #14 Posted: Fri Apr 07, 2017 4:30 am 
Dies in gote

Posts: 24
Liked others: 0
Was liked: 5
Rank: Europe 5 dan
KGS: Flashgoe
Uberdude wrote:
uPWarrior wrote:
Bohdan wrote:
Just switch to a 21x21 board and you'll have another 10 years of laughing at the computer's attempts to beat an average amateur.


I have the impression that this would backfire spectacularly. There is nothing special about 19x19, all the training could simply be replicated on 21x21. Human knowledge and intuition, on the other hand, would take a very big hit.


Indeed, DeepMind surprised us by taking only about 2 years to bring top Go bots from strong-amateur level to beating a top pro, so 10 years is an awfully long time for 21x21. There is the problem of having no human games to seed the training, but if recent rumours and hints about a version of AlphaGo trained from the rules alone, without human games, are true, then 21x21 would be fairly easy: just a few weeks or months of retraining (and some reprogramming of data structures etc. for the new board size, if we assume 19x19 is hard-coded and optimized).


AlphaGo's main power is still Monte Carlo tree search. So increasing the board size will hit the bot's strength dramatically. Also, where are they going to get games to train the bot? Take into account that even changing the komi from 6.5 to 7.5 requires retraining the whole neural network!

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #15 Posted: Fri Apr 07, 2017 4:57 am 
Tengen

Posts: 4069
Location: Cambridge, UK
Liked others: 145
Was liked: 2001
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Bohdan wrote:
AlphaGo's main power is still Monte Carlo tree search.


I disagree, and various statements from DeepMind people suggest they would too. The breakthrough and source of AlphaGo's power is the value network, a quick and strong board evaluation function that didn't exist in bots prior to AlphaGo.

Bohdan wrote:
Also, where are they going to get games to train the bot?

Did you read what I wrote? Self-play games from the rules, without any human expert games as initial training.

Bohdan wrote:
Take into account that even changing the komi from 6.5 to 7.5 requires retraining the whole neural network!

That might be true for the value network (not the policy network). But it might no longer be true: when asked about Zen's komi problems in its value network at the WGC, Hideki Kato said AlphaGo had apparently solved the komi problem somehow. I don't know whether that was by retraining or something cleverer. But as Demis said, training now takes only a week (probably helped by lots of TPUs), so 10 years seems rather an overestimate.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #16 Posted: Fri Apr 07, 2017 5:33 am 
Lives in sente

Posts: 852
Liked others: 0
Was liked: 138
Neural nets have some interesting properties, which makes it difficult to discuss questions like "if the komi changes, wouldn't that mean retraining the net from scratch?".

For example: suppose a neural net has been trained from scratch to evaluate a function. You then randomly destroy the cell values in some percentage of the cells << you give the neural net the equivalent of a "stroke" >>. You then retrain the net to evaluate the function. It will NOT take anywhere near as long (as much training) as it did when starting from scratch.

Think about how the komi question might be related to that << because of the komi change, some percentage of the cell values are wrong >>.
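The "stroke" effect is easy to demonstrate on a toy problem. A minimal sketch, with plain gradient descent on a quadratic loss standing in for real network training (all numbers are illustrative):

```python
import math

# "Trained" parameters the model should recover.
w_true = [1.0 + i / 19 for i in range(20)]

def distance(w):
    """Euclidean distance from the trained parameters."""
    return math.sqrt(sum((wi - ti) ** 2 for wi, ti in zip(w, w_true)))

def steps_to_converge(w, lr=0.1, tol=1e-6):
    """Gradient-descent steps on L(w) = 0.5 * sum_i (w_i - true_i)^2
    until the parameters are within tol of the trained values."""
    steps = 0
    while distance(w) >= tol:
        w = [wi - lr * (wi - ti) for wi, ti in zip(w, w_true)]
        steps += 1
    return steps

# Training from scratch (all-zero start).
from_scratch = steps_to_converge([0.0] * 20)

# A "stroke": wipe out 20% of the trained parameter values, then retrain.
damaged = list(w_true)
damaged[:4] = [0.0] * 4
after_stroke = steps_to_converge(damaged)

print(from_scratch, after_stroke)   # retraining after damage needs fewer steps
```

A komi change would similarly leave most of the learned values approximately right, so retraining starts much closer to the target than training from scratch.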

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #17 Posted: Fri Apr 07, 2017 5:44 am 
Dies in gote

Posts: 24
Liked others: 0
Was liked: 5
Rank: Europe 5 dan
KGS: Flashgoe
Quote:
I disagree, and various statements from DeepMind people suggest they would too. The breakthrough and source of AlphaGo's power is the value network, a quick and strong board evaluation function that didn't exist in bots prior to AlphaGo.


Then why did they need 500 CPUs and 200 GPUs? Why does Zen need 44 cores? You cannot play without reading out variations. The value and policy networks only narrowed the number of candidate moves, which helped read variations more efficiently. In the Nature paper they wrote that the network-only bot, without MC search, scored 85% against the 2-dan version of Pachi.
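For what it's worth, the search described in the Nature paper did lean on both signals: each leaf position was scored by mixing the value-network output with the result of a fast rollout, with mixing weight λ = 0.5. A minimal sketch (the `value_net` and `fast_rollout` callables are hypothetical stand-ins, not the real networks):

```python
def evaluate_leaf(position, value_net, fast_rollout, lam=0.5):
    """Leaf evaluation from the AlphaGo Nature paper:
    V(leaf) = (1 - lam) * v(leaf) + lam * z(leaf)."""
    v = value_net(position)      # learned evaluation, in [-1, 1]
    z = fast_rollout(position)   # +1 / -1 outcome of a fast playout
    return (1 - lam) * v + lam * z

# lam = 0 trusts the value network alone; lam = 1 is classic
# Monte Carlo evaluation. The paper's lam = 0.5 weighs them equally.
score = evaluate_leaf("dummy", value_net=lambda p: 0.4, fast_rollout=lambda p: 1.0)
print(score)
```

With λ = 0.5, the rollouts carry exactly as much weight as the value network, so both sides of this argument have a point.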

Quote:
Did you read what I wrote? Self-play games from the rules, without any human expert games as initial training.


Last year DeepMind promised they would train a bot using only self-play, without any human games. No success so far. Why do you think it will work? And remember that you need millions of self-play games to make a relevant database.


Quote:
That might be true for the value network (not the policy network). But it might no longer be true: when asked about Zen's komi problems in its value network at the WGC, Hideki Kato said AlphaGo had apparently solved the komi problem somehow. I don't know whether that was by retraining or something cleverer. But as Demis said, training now takes only a week (probably helped by lots of TPUs), so 10 years seems rather an overestimate.


Transferring knowledge between neural networks with different architectures is quite a challenge. Success is not guaranteed.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #18 Posted: Fri Apr 07, 2017 7:56 am 
Tengen

Posts: 4069
Location: Cambridge, UK
Liked others: 145
Was liked: 2001
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Bohdan wrote:
Quote:
I disagree, and various statements from DeepMind people suggest they would too. The breakthrough and source of AlphaGo's power is the value network, a quick and strong board evaluation function that didn't exist in bots prior to AlphaGo.


Then why did they need 500 CPUs and 200 GPUs? Why does Zen need 44 cores? You cannot play without reading out variations. The value and policy networks only narrowed the number of candidate moves, which helped read variations more efficiently. In the Nature paper they wrote that the network-only bot, without MC search, scored 85% against the 2-dan version of Pachi.


Yes, you still need to read, but as the policy and value networks get better you need to read less. For Lee Sedol they needed lots of processing power because the networks weren't as good as they are now. Demis said they used only one GPU for Master, so, assuming that's correct, their networks are now even stronger (after lots of up-front training computation) and AlphaGo can play very strongly without hundreds of processors. Remember the Nature paper described v13, which played Fan Hui; it got a lot stronger for Lee Sedol and stronger still since. So while I expect the policy network alone, the value network alone, or even the two together will still be weaker than the networks plus MCTS, I expect the networks on their own are now a lot stronger than they were 18 months ago.

Bohdan wrote:
Quote:
Did you read what I wrote? Self-play games from the rules, without any human expert games as initial training.


Last year DeepMind promised they would train a bot using only self-play, without any human games. No success so far. Why do you think it will work? And remember that you need millions of self-play games to make a relevant database.

I'm not sure it will work, but I think it's very likely DeepMind pursued that avenue (it is very important to the more general AI goal of training systems where you don't have a large corpus of expert examples, as we do in Go), and quite possibly they succeeded, given Demis's response to a question about it. It's only Chinese media rumours, which have been wrong before, but the recent Sina article about the rumoured upcoming match with Ke Jie mentioned this trained-from-scratch aspect. Their Atari-playing AIs also trained from scratch, and that worked. Sure, training requires a lot of compute, but DeepMind is part of Google and has access to a lot of oomph. If training from human games used to take 3 months and now takes 1 week, as Demis said, then suppose training from scratch takes 10 times as long: that's 10 weeks. To take 10 years it would need to take roughly 500 times longer, with no improvements at all to their training algorithms. They are a clever bunch.
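That scaling estimate is just this arithmetic:

```python
# If training now takes 1 week, how many times slower would it have
# to become for training to take 10 years? (~52 weeks per year)
weeks_in_ten_years = 10 * 52
slowdown_needed = weeks_in_ten_years / 1     # vs. the 1-week baseline
print(slowdown_needed)

# Even granting a 10x penalty for training from scratch (10 weeks),
# ten years would still require another ~52x slowdown on top of that.
remaining_factor = weeks_in_ten_years / 10
print(remaining_factor)
```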


Bohdan wrote:
Quote:
That might be true for the value network (not the policy network). But it might no longer be true: when asked about Zen's komi problems in its value network at the WGC, Hideki Kato said AlphaGo had apparently solved the komi problem somehow. I don't know whether that was by retraining or something cleverer. But as Demis said, training now takes only a week (probably helped by lots of TPUs), so 10 years seems rather an overestimate.


Transferring knowledge between neural networks with different architectures is quite a challenge. Success is not guaranteed.

Yes, but they have already done some very challenging things few thought possible, haven't they? Before AlphaGo I'd made what I thought was a safe bet with a chess-playing friend that no Go bot would beat a top pro within 5 years. How the landscape has changed in under 2 years; we now wonder whether humans can win with a two-stone handicap. But even if such transfers aren't achieved, just retrain the network for 21x21: weeks or months, not 10 years, would be my bet. And that's for AlphaGo's top-pro-beating level, not an average amateur (by which do you mean a high dan like yourself, or something around 4 kyu, which is closer to the actual average?). I wouldn't be surprised if the existing AlphaGo policy network plus MCTS (no value network), with minor hacks to get it working on 21x21, would already beat a 4 kyu.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #19 Posted: Fri Apr 07, 2017 8:03 am 
Lives in gote

Posts: 498
Liked others: 30
Was liked: 163
GD Posts: 10
Bohdan, the free DCNN program Leela has an option to play on a 25x25 board. Can you tell us how much weaker it is compared to 19x19? You'll have no problem winning on either board size in an even game, since it has a serious bug that prevents it from going beyond 6 dan KGS, so you'll have to give it handicap stones.

 Post subject: Re: Will AI ever beat humans at Go with energy parity?
Post #20 Posted: Fri Apr 07, 2017 8:20 am 
Dies in gote

Posts: 24
Liked others: 0
Was liked: 5
Rank: Europe 5 dan
KGS: Flashgoe
Let us look from a different angle. The Crazy Stone bot without neural networks plays around 5 dan on an average laptop. It was built by just one man, doing it mostly for fun while having plenty of other things to do. Imagine how a whole team of very skilled programmers and scientists from DeepMind (read: Google) could improve on the Crazy Stone algorithm, multiplied by 500 CPUs and 200 GPUs of power. I would not be surprised at all if an optimized version of MC tree search could easily beat an average pro.

Regarding neural-network training:
the training process always converges to some value. So a bot can jump from 30 kyu to 1 dan in one month, then from 1 dan to 6 dan in 3 months, from 6 dan to 1p in 6 months, from 1p to top pro in 1 year. There is always a point where you cannot improve any more with the current algorithm. That's basic math.
The DeepMind guys were not the first to use a self-training algorithm. It was also used in chess, but at some point it simply stopped improving. For example: https://www.technologyreview.com/s/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/ You can't just run self-training and wait until it becomes a perfect player.
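The diminishing-returns curve described here can be sketched numerically (the rate and ceiling below are made-up numbers, purely to illustrate convergence toward an asymptote):

```python
# Fixed-algorithm training modelled as closing a constant fraction of
# the remaining gap to a ceiling each month. Units and numbers invented.
ceiling = 100.0
strength = 10.0
monthly_gains = []
for month in range(12):
    gain = 0.3 * (ceiling - strength)   # close 30% of the remaining gap
    strength += gain
    monthly_gains.append(gain)

# Early months yield big jumps; later months barely move, and the
# strength never crosses the ceiling of the current algorithm.
print(round(monthly_gains[0], 2), round(monthly_gains[-1], 2), round(strength, 2))
```

Under this model the only way past the plateau is to raise the ceiling, i.e. change the algorithm, which is exactly the point being argued.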

Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group