 Post subject: Confirmation bias in neural nets?
Post #1 Posted: Tue Feb 19, 2019 9:51 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
I just stumbled across this paper about confirmation bias in humans based upon previous choices. https://www.sciencedirect.com/science/a ... 2218309825

Here are the highlights of the paper:
Quote:
Highlights

People’s interpretation of new evidence is often biased by their previous choices


Talluri, Urai et al. developed a new task for probing the underlying mechanisms


Evidence consistent with an observer’s initial choice is processed more efficiently


This “choice-induced gain change” affects both perceptual and numerical decisions
Emphasis mine.

While, of course, brains are not the same as neural nets, the highlighted point seems to me similar to how neural net bots work, both in play and in training. In play, initial choices based upon prior learning (with some randomness) are used to build the game tree, and thus are processed more efficiently. In self-play training, the points identified as important by each player get more processing. In actual brains, neural connections that get more processing (activation) tend to be reinforced. I don't know if the same kind of thing applies to AI neural nets, but if so, "Confirmation Bias through Selective Overweighting of Choice-Consistent Evidence" may apply to them, as well. :)
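For what it's worth, here is a minimal sketch in Python (my own illustration, not any bot's actual code) of the PUCT-style selection rule used by the AlphaGo Zero family. The policy prior plays the role of the "initial choice": moves it favours get a larger exploration bonus, are searched more, and so have their evidence accumulate faster, which looks at least loosely analogous to the choice-induced gain change in the paper.

Code:
import math

def puct_select(children, c_puct=1.5):
    # children: list of dicts with keys 'prior', 'visits', 'value_sum'.
    # The policy prior (the net's "initial choice") scales the exploration
    # bonus, so moves the net already favours collect visits faster and
    # their evidence accumulates more efficiently than everything else's.
    total_visits = sum(child['visits'] for child in children)
    best_child, best_score = None, -float('inf')
    for child in children:
        q = child['value_sum'] / child['visits'] if child['visits'] else 0.0
        u = c_puct * child['prior'] * math.sqrt(total_visits + 1) / (1 + child['visits'])
        if q + u > best_score:
            best_child, best_score = child, q + u
    return best_child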

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.


This post by Bill Spight was liked by 2 people: mhlepore, MikeKyle
 Post subject: Re: Confirmation bias in neural nets?
Post #2 Posted: Tue Feb 19, 2019 10:51 am 
Lives in gote

Posts: 653
Location: Austin, Texas, USA
Liked others: 54
Was liked: 216
Yes, I think some problems with NNs (especially when Reinforcement Learning gets involved) are similar to confirmation bias in humans. Just recently someone did a study of a positive feedback problem that could cause an NN to become convinced certain moves are best and ignore the alternatives:

https://github.com/leela-zero/leela-zero/issues/2230


This post by yoyoma was liked by: Bill Spight
 Post subject: Re: Confirmation bias in neural nets?
Post #3 Posted: Tue Feb 19, 2019 11:04 am 
Lives with ko

Posts: 205
Liked others: 49
Was liked: 36
Rank: EGF 2k
KGS: MKyle
I've heard Catalin Taranu 5p mention a few times something his teacher would say. I may get the quote wrong but I think it's something along the lines of
Quote:
If you find that you want to play a move then just keep reading until you discover that it's the right move


I kind of feel like confirmation bias might lead to blind spots and isolated mistakes, but might be pretty good for go strength.

 Post subject: Re: Confirmation bias in neural nets?
Post #4 Posted: Tue Feb 19, 2019 11:38 am 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
MikeKyle wrote:
I've heard Catalin Taranu 5p mention a few times something his teacher would say. I may get the quote wrong but I think it's something along the lines of
Quote:
If you find that you want to play a move then just keep reading until you discover that it's the right move


I kind of feel like confirmation bias might lead to blind spots and isolated mistakes, but might be pretty good for go strength.


I would word that slightly differently.

If you find that you want to play a move then just keep trying to prove it wrong until you discover that it's the right move. :)

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Confirmation bias in neural nets?
Post #5 Posted: Tue Feb 19, 2019 12:20 pm 
Oza

Posts: 3647
Liked others: 20
Was liked: 4626
Bill: Good and timely topic. I think it can apply also to efforts to understand why AI bots are stronger.

We look at their games and notice first, or most intently, those moves which we think we can understand on the basis of existing knowledge. It seems easy enough to find things that represent an improvement that we can then, a posteriori, justify in a way that we find very convincing because it confirms what we already know! We could know it just a little better, it seems, but we do know it!

I suspect the early (and continuing) focus on josekis derives from precisely this approach. Josekis are an area we feel we must know pretty well.

I mention this because of something I read just yesterday. I had felt early on that the focus on josekis was a bit misguided. My own hunch was that the real explanation of the strength of bots was primarily in the fact that they didn't make careless mistakes like us (ladders excepted) but they probably also saw more in the centre of the board than we will ever be able to.

I had no proof, of course, but felt it rather strongly. I was accordingly rather shaken this week when I read something by a Japanese pro in which he said they have concluded something different: the reason the bots don't like the small knight's move shimari is because it is overconcentrated. Well, overconcentration is something I know a lot about. I make it more than most people. So I started looking at some games in that light. These were pro games, not AI games, but nevertheless games in which the pros were clearly trying to play like bots. Blow me, just about every move could be explained as an attempt to force overconcentration or to resist it - all these shoulder hits, attachments, playing close to thickness... Everything fits. It must be true because I know what overconcentration is, after all :)

But a little more seriously, the thing I have picked up most from recently studying the Genjo-Chitoku games (or, more precisely, their commentaries) is how fixated pros are with the efficiency of plays. For example, the sheer number of comments on forcing moves and timing is staggering.

And overconcentration is just an aspect of efficiency. Unlike us, a bot can measure efficiency quite easily. It occurred to me therefore that if they do ever learn to talk about go, they won't ever use any of our terms (except to pander to us). Every explanation will basically be just "this is the most efficient move in this sort of position," all backed up by what is essentially just looking up results stored in a massive database. This is essentially what the recent Elf exercise amounts to.

But if that, or what the Japanese pros say, is close to correct, it does at least tell us very specifically what we need to study more: efficiency. And the beauty of doing that is that even if it is wide of the mark, it can't do any harm. But we are in the same position as the bots on that. We need to find a way to talk about it :scratch:


This post by John Fairbairn was liked by 2 people: Aidoneus, jonsa
 Post subject: Re: Confirmation bias in neural nets?
Post #6 Posted: Tue Feb 19, 2019 12:57 pm 
Lives in gote

Posts: 389
Liked others: 81
Was liked: 128
KGS: lepore
Bill Spight wrote:
...In self-play training, the points identified as important by each player get more processing. In actual brains, neural connections that get more processing (activation) tend to be reinforced. I don't know if the same kind of thing applies to AI neural nets, but if so, "Confirmation Bias through Selective Overweighting of Choice-Consistent Evidence" may apply to them, as well. :)


Presuming that there is a legitimate reason the points were identified for more processing in the first place (i.e., because they increase the chance of winning), then I don't see how this leads to confirmation bias. Especially if we are talking about a self-play system where millions of rounds of learning can take place. Biases that are based on something suboptimal would be exploited and weeded out.

Or am I giving these algorithms too much credit? (I probably am, but I'll keep the question as a straw man)

 Post subject: Re: Confirmation bias in neural nets?
Post #7 Posted: Tue Feb 19, 2019 1:13 pm 
Lives in gote

Posts: 502
Liked others: 1
Was liked: 153
Rank: KGS 2k
GD Posts: 100
KGS: Tryss
You need to understand how the training of a neural network like LZ (or Alpha Zero, or ELF) works:

Step 0: You start with a random network that predicts a "best move" distribution for a board position, and the winner.

Step 1: You play games with this network (with some randomness).

Step 2: You then use these games to train a new network that predicts the moves of the winners of these games, and the winner.

Step 3: You test whether this new network is better than the old one; if it is, it becomes the new network. Return to Step 1.


So obviously, this approach may get the network stuck in a local optimum that is worse than the global optimum. Fortunately, it seems that a little randomness allows for enough exploration to reach superhuman strength, but that doesn't mean it can't happen.
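A minimal sketch of that loop in Python (my own illustration; play_selfplay_games, train_on_games and match_winrate are placeholders passed in as parameters, since the details differ between LZ, Alpha Zero and ELF):

Code:
def train_zero_style(net, play_selfplay_games, train_on_games, match_winrate,
                     generations=100, gate=0.55):
    # Step 0: 'net' starts out random, predicting a move distribution and the winner.
    for generation in range(generations):
        games = play_selfplay_games(net)        # Step 1: self-play, with some randomness
        candidate = train_on_games(net, games)  # Step 2: fit a new net to the moves and winners
        if match_winrate(candidate, net) >= gate:
            net = candidate                     # Step 3: keep it only if it beats the old net
    return net

The gating threshold (55% here) is just an example value; the point is that each generation is trained on games produced by its own predecessor, which is where the positive-feedback risk comes from.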

 Post subject: Re: Confirmation bias in neural nets?
Post #8 Posted: Tue Feb 19, 2019 1:20 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
John Fairbairn wrote:
I suspect the early (and continuing) focus on josekis derives from precisely this approach. Josekis are an area we feel we must know pretty well.

I mention this because of something I read just yesterday. I had felt early on that the focus on josekis was a bit misguided.


Well, it seems to me that all of a sudden we have glimpsed the joseki of 200 years into the future. That's a pretty big deal. :) But I also think that that is not the main difference between us and the bots.

Quote:
My own hunch was that the real explanation of the strength of bots was primarily in the fact that they didn't make careless mistakes like us (ladders excepted) but they probably also saw more in the centre of the board than we will ever be able to.


Yes, I think that the bots understand the centre even better than the sainted Go Seigen, and that is a major advance. They understand the sides better, too.

Quote:
I had no proof, of course, but felt it rather strongly. I was accordingly rather shaken this week when I read something by a Japanese pro in which he said they have concluded something different: the reason the bots don't like the small knight's move shimari is because it is overconcentrated.


Sounds right to me. My crude influence heuristic doesn't like it, either, for that reason. FWIW.

Quote:
Well, overconcentration is something I know a lot about. I make it more than most people. So I started looking at some games in that light. These were pro games, not AI games, but nevertheless games in which the pros were clearly trying to play like bots. Blow me, just about every move could be explained as an attempt to force overconcentration or to resist it - all these shoulder hits, attachments, playing close to thickness... Everything fits. It must be true because I know what overconcentration is, after all :)

But a little more seriously, the thing I have picked up most from recently studying the Genjo-Chitoku games (or, more precisely, their commentaries) is how fixated pros are with the efficiency of plays. For example, the sheer number of comments on forcing moves and timing is staggering.


Interesting observation. :)

Quote:
And overconcentration is just an aspect of efficiency. Unlike us, a bot can measure efficiency quite easily.


Well, they certainly do not try to measure it. It seems plausible that a good theory of their play would have the concept of efficiency, however. As for overconcentration, one crude measure might be the difference between Black and White plays in a given region.
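That crude measure is simple enough to write down. A toy sketch (my own, and only a stone-count proxy for "plays in a given region"):

Code:
def quadrant_balance(board):
    # board: square 2D list, +1 for a black stone, -1 for white, 0 for empty.
    # Returns the black-minus-white stone count in each quadrant; a large
    # surplus of one colour in a small area is one crude hint of overconcentration.
    n = len(board)
    half = n // 2
    regions = {
        'upper_left':  (range(0, half), range(0, half)),
        'upper_right': (range(0, half), range(half, n)),
        'lower_left':  (range(half, n), range(0, half)),
        'lower_right': (range(half, n), range(half, n)),
    }
    return {name: sum(board[r][c] for r in rows for c in cols)
            for name, (rows, cols) in regions.items()}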

Quote:
It occurred to me therefore that if they do ever learn to talk about go, they won't ever use any of our terms (except to pander to us).


I think that we will need new concepts. ::)

Quote:
Every explanation will basically be just "this is the most efficient move in this sort of position," all backed up by what is essentially just looking up results stored in a massive database. This is essentially what the recent Elf exercise amounts to.


Humans are rather good at coming up with explanations, and good ones, too. A massive database is not necessarily a problem. I have mentioned how researchers were able to reproduce, within a close margin of error, the predictions of a big data algorithm, using only three parameters. :cool: In the next decade we are going to come up with some new insights into go. :D

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Confirmation bias in neural nets?
Post #9 Posted: Tue Feb 19, 2019 1:31 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
Tryss wrote:
So obviously, this approach may get the network stuck in a local optimum that is worse than the global optimum. Fortunately, it seems that a little randomness allows for enough exploration to reach superhuman strength, but that doesn't mean it can't happen.


Oh, I am not aware of getting stuck in a local optimum, or even that the law of diminishing returns has kicked in. But there does seem to be some path dependency. Greater exploration might help reduce blind spots, but it might also slow down improvement.

For instance, suppose that we trained two zero bots to play against each other, rather than against themselves. If one of them found a blind spot in the other, it could learn to exploit it, and the other bot could learn to correct it. But training two bots at the same time would increase development time.
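As a purely hypothetical sketch (no zero-style project trains this way, as far as I know), the change from self-play would amount to something like:

Code:
def train_two_bots(net_a, net_b, play_match_games, train_on_games, generations=100):
    # Each generation, the two bots play each other instead of themselves.
    # If net_a has found a blind spot in net_b, net_b now sees losing games
    # that expose it and can train the weakness away -- at roughly double
    # the cost of training a single self-play bot.
    for generation in range(generations):
        games = play_match_games(net_a, net_b)
        net_a = train_on_games(net_a, games)
        net_b = train_on_games(net_b, games)
    return net_a, net_b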

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Confirmation bias in neural nets?
Post #10 Posted: Tue Feb 19, 2019 2:28 pm 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
Bill Spight wrote:
Greater exploration might help reduce blind spots, but it might also slow down improvement.
Not necessarily significantly. Depending on the bot, some of these blind spots develop because a key move is never looked at. So the bias comes from always selecting the better move from two previous candidates (reinforcing it), never a third non-candidate. This can be corrected relatively cheaply by allocating a few visits to non-candidate moves as well.

A common assumption is that the strongest playing configuration should be used to generate training games. This leads to looking at a narrow range of moves, as that happens to maximize playing strength (per visit). But the strongest player is not necessarily the best teacher, and the value of exploration during training is different from its value during play.
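One cheap way to do that, sketched below (my own illustration, loosely in the spirit of the root noise used in zero-style training rather than any particular bot's code), is to mix a small uniform floor into the policy before search, so that no legal move sits at exactly zero prior and every move earns at least an occasional visit during training games:

Code:
def add_exploration_floor(priors, epsilon=0.03):
    # priors: dict mapping each legal move to its policy prior (sums to 1).
    # Blending in a small uniform share guarantees that "non-candidate"
    # moves still receive a few visits, so a better move the net currently
    # ignores can eventually be noticed and reinforced by training.
    uniform = 1.0 / len(priors)
    return {move: (1.0 - epsilon) * p + epsilon * uniform
            for move, p in priors.items()}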

 Post subject: Re: Confirmation bias in neural nets?
Post #11 Posted: Tue Feb 19, 2019 2:50 pm 
Honinbo

Posts: 10905
Liked others: 3651
Was liked: 3374
moha wrote:
Bill Spight wrote:
Greater exploration might help reduce blind spots, but it might also slow down improvement.
Not necessarily significantly. Depending on the bot, some of these blind spots develop because a key move is never looked at. So the bias comes from always selecting the better move from two previous candidates (reinforcing it), never a third non-candidate. This can be corrected relatively cheaply by allocating a few visits to non-candidate moves as well.


Right.

Quote:
A common assumption is that the strongest playing configuration should be used to generate training games. This leads to looking at a narrow range of moves, as that happens to maximize playing strength (per visit). But the strongest player is not necessarily the best teacher, and the value of exploration during training is different from its value during play.
Emphasis mine.

Back when I was considering such things, it was plain that a good player needed to be able to refute bad moves, but there are some bad moves that a good opponent will never play, and so the ability to refute them may not be learned or may be lost. (Efficient learning involves forgetting.) So when I was thinking about a training regime, I thought that it might be a good idea, before testing Alpha(n+1) against Alpha(n), that Alpha(n) had to beat all the previous Alphas, to prove that it could refute their bad plays.
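In code, that regime might look something like this (hypothetical; match_winrate stands in for however the test games would actually be run):

Code:
def beats_all_predecessors(candidate, previous_nets, match_winrate, threshold=0.55):
    # previous_nets: every earlier accepted network, oldest first.
    # Beating only the latest version shows improvement; beating the whole
    # pool is meant to show the candidate can still refute the bad moves
    # that older, weaker versions actually play.
    return all(match_winrate(candidate, old_net) >= threshold
               for old_net in previous_nets)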

_________________
The Adkins Principle:
At some point, doesn't thinking have to go on?
— Winona Adkins

Visualize whirled peas.

Everything with love. Stay safe.

 Post subject: Re: Confirmation bias in neural nets?
Post #12 Posted: Tue Feb 19, 2019 4:01 pm 
Lives in gote

Posts: 603
Location: Indiana
Liked others: 114
Was liked: 176
Tryss wrote:
So obviously, this approach may get the network stuck in a local optimum that is worse than the global optimum. Fortunately, it seems that a little randomness allows for enough exploration to reach superhuman strength, but that doesn't mean it can't happen.


Sounds like premature convergence, a subject much studied in designing genetic algorithms, as well as experience with evolution in isolated populations.

 Post subject: Re: Confirmation bias in neural nets?
Post #13 Posted: Tue Feb 19, 2019 5:34 pm 
Honinbo

Posts: 9545
Liked others: 1600
Was liked: 1711
KGS: Kirby
Tygem: 커비라고해
There is a classic problem in machine learning - the tradeoff between "exploration" and "exploitation". It comes up in reinforcement learning quite a bit.

Basically, there's a balance between exploiting what you already "know" and exploring what's yet to be known. Here are some lecture slides by our good friend, David Silver: http://www0.cs.ucl.ac.uk/staff/d.silver ... les/XX.pdf

The idea of confirmation bias in neural nets reminds me a little bit of this. I suppose leaning toward the "exploitation" side too much in RL would have a similar effect to confirmation bias.

Anyway, a takeaway from this is that there is a balance between reinforcing what you already know and exploring new options. Either strategy, in isolation, is suboptimal in many domains.
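The textbook toy for this balance is the multi-armed bandit. A minimal epsilon-greedy sketch (standard RL illustration, nothing go-specific):

Code:
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1):
    # With probability epsilon, explore a random arm; otherwise exploit the
    # arm with the best estimate so far. epsilon = 0 is pure exploitation
    # (it can lock onto a mediocre arm forever -- the "confirmation bias"
    # failure mode); epsilon = 1 is pure exploration (it never cashes in).
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(true_means))
        else:
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps

print(epsilon_greedy_bandit([0.2, 0.5, 0.8]))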

_________________
be immersed


This post by Kirby was liked by: mhlepore
 Post subject: Re: Confirmation bias in neural nets?
Post #14 Posted: Tue Feb 19, 2019 6:11 pm 
Gosei

Posts: 1733
Location: Earth
Liked others: 621
Was liked: 310
Confirmation bias theory is an example of confirmation bias. ;-)


This post by Gomoto was liked by: Bill Spight
 Post subject: Re: Confirmation bias in neural nets?
Post #15 Posted: Wed Feb 20, 2019 12:41 am 
Gosei

Posts: 1753
Liked others: 177
Was liked: 491
Bill Spight wrote:
Back when I was considering such things, it was plain that a good player needed to be able to refute bad moves, but there are some bad moves that a good opponent will never play, and so the ability to refute them may not be learned or may be lost. (Efficient learning involves forgetting.) So when I was thinking about a training regime, I thought that it might be a good idea, before testing Alpha(n+1) against Alpha(n), that Alpha(n) had to beat all the previous Alphas, to prove that it could refute their bad plays.


I don't think there is any need to worry. I would be surprised if, for some n, LeelaZero(n) didn't beat LeelaZero(n-10) more than 50% of the time.

 Post subject: Re: Confirmation bias in neural nets?
Post #16 Posted: Wed Feb 20, 2019 3:14 am 
Judan

Posts: 6725
Location: Cambridge, UK
Liked others: 436
Was liked: 3719
Rank: UK 4 dan
KGS: Uberdude 4d
OGS: Uberdude 7d
Bill Spight wrote:
Back when I was considering such things, it was plain that a good player needed to be able to refute bad moves, but there are some bad moves that a good opponent will never play, and so the ability to refute them may not be learned

There was a nice example of this I posted about a while ago, I think from a neural network bot shortly before AlphaGo came along. Said bot was doing well in a fight and had trapped some key cutting stones with a potential crane's nest tesuji (they had the 3 liberties and the opponent 2 extensions on the side). The other bot then played the one-point jump to escape, which everyone stronger than 20 kyu knows is doomed to fail. The neural bot wedged, the other bot gave atari, the neural bot connected instead of playing the squeeze, and whoops, game over. A little MCTS reading would have saved the day, but presumably playing out the doomed crane's nest tesuji is so rare in the strong-player games used for training that the neural network hadn't learned how to refute it.

Also, with exploration vs exploitation there is a conflict in what is needed in different situations. We see blind spots where bots don't consider moves even 1 ply deep, which is due to insufficient exploration. But to read ladders you want high exploitation and low exploration to quickly go deep down the 1 relevant variation.


This post by Uberdude was liked by: Bill Spight
 Post subject: Re: Confirmation bias in neural nets?
Post #17 Posted: Wed Feb 20, 2019 3:51 am 
Lives in gote

Posts: 311
Liked others: 0
Was liked: 45
Rank: 2d
Uberdude wrote:
But to read ladders you want high exploitation and low exploration to quickly go deep down the 1 relevant variation.
This assumes that the opponent (or its policy view) will also continue a losing ladder, which in turn would mean that the net has high policy weight on ladder moves regardless of whether they work. This is a failure for the net (it could do better), and it is also unlikely from a training point of view, since the net is trained towards the result of a search, which will not be the ladder move if the ladder doesn't work (which is about 50% of the time).

 Post subject: Re: Confirmation bias in neural nets?
Post #18 Posted: Wed Feb 27, 2019 11:25 am 
Dies with sente

Posts: 113
Liked others: 11
Was liked: 27
Rank: 1d
Universal go server handle: iopq
Leela Zero never seems to converge to the higher-win-percentage node: even after thousands of playouts, the alternative still shows a 0.3% higher win rate. You'd think it would give it a chance. But it stupidly prefers to exploit the move it chose right away and just searches there a little bit more.

This didn't seem to occur in pure MCTS bots; they would eventually switch to the ever-so-slightly better option and exploit it instead.
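At least in a simplified model, that behaviour falls out of the PUCT formula, because the exploration bonus is scaled by the policy prior: a low-prior move with a marginally better winrate still gets starved of visits. A toy simulation (my own sketch, with fixed "true" winrates instead of real playouts):

Code:
import math

def simulate_root_visits(priors, winrates, playouts=10000, c_puct=1.5):
    # Allocate playouts over root moves with the PUCT rule and fixed values.
    # With priors [0.8, 0.05] and winrates [0.500, 0.503], the favoured move
    # keeps the large majority of visits even though it is slightly worse,
    # because the exploration term is proportional to the prior.
    visits = [0] * len(priors)
    for _ in range(playouts):
        total = sum(visits)
        scores = [winrates[i] + c_puct * priors[i] * math.sqrt(total + 1) / (1 + visits[i])
                  for i in range(len(priors))]
        visits[scores.index(max(scores))] += 1
    return visits

print(simulate_root_visits([0.8, 0.05], [0.500, 0.503]))

A plain UCT bot with no prior term would eventually shift most of its visits to whichever move shows the better sampled winrate, which matches the difference described above.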
