Mike Novack wrote:
> > a) explain "why" in human understandable terms
>
> You are assuming that none of the existing programs can do this? Just because few of the programs have been given this capability doesn't make that so. What I believe isn't possible at the present time is giving a why for "why is move A (which has x, y, z "go reasons" behind it) better than move B (which has u, v, w go reasons behind it)". In other words, in human understandable terms, why in this instance are x, y, and z more important than u, v, and w. ...
I think when humans do this it's mostly confabulation. (IOW, your brain internally came up with a good/bad judgment, and then you verbally come up with reasons to support your feeling. You know this is what happened if you have ever started to explain something and realized halfway through that you were totally wrong!) Some are better at producing convincing confabulations than others...
So, the hard part isn't coming up with reasons -- I could probably write a program right now with a built-in set of possible reason fragments, give it a few simple rules, and it would generate moderately convincing reasons for any move in a pro game. This would be the digital equivalent of confabulation.
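Something like this toy sketch, just to show how cheap it is (the reason fragments and the "which side of the board" rule are entirely invented for illustration; no real engine is being described):

```python
import random

# Hypothetical "confabulation" move explainer. The fragments and the
# single rule below are made up; the point is that plausible-sounding
# reasons are easy to generate without any connection to reality.
REASON_FRAGMENTS = [
    "builds a strong framework on the {side} side",
    "reduces the opponent's influence toward the {side} side",
    "keeps the {side}-side group connected",
    "takes a big point before the opponent can",
]

def explain_move(move):
    """Generate a plausible-sounding (but unverified) reason for a move."""
    col, row = move
    # One "simple rule": pick a side based on where the move is.
    side = "left" if col < 10 else "right"
    template = random.choice(REASON_FRAGMENTS)
    return "This move " + template.format(side=side) + "."

print(explain_move((4, 16)))
print(explain_move((15, 3)))
```

Run it on every move of a pro game and it will never be at a loss for words -- which is exactly the problem.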
The hard part is making those reasons correspond with reality. And if the bot's choice is based on "in 100,000 positions, this move came out the best most often", there's not going to be a way to express that. The bot would have to examine all the failed positions, identify the commonalities between them, and then it could say something like, "If I play X, it's no good because of Y; if Z, then W; ... So, this move avoids most of the problems."
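To make the "came out the best most often" point concrete, here's a toy sketch of playout-counting move selection (the per-move win probabilities are an arbitrary made-up function standing in for real playouts):

```python
import random

# Illustrative sketch, not a real engine: choose the move whose random
# playouts won most often. Note that the only "explanation" the method
# produces is a win count.
def random_playout_wins(move, trials=1000):
    # Stand-in for an actual playout: an invented win probability per move,
    # just so there are numbers to count.
    p = 0.4 + 0.2 * (move % 3) / 2
    return sum(random.random() < p for _ in range(trials))

def choose_move(candidate_moves, trials=1000):
    wins = {m: random_playout_wins(m, trials) for m in candidate_moves}
    best = max(wins, key=wins.get)
    # The only honest "why" available at this level:
    print(f"Move {best}: won {wins[best]}/{trials} playouts.")
    return best

choose_move([1, 2, 3])
```

Everything the bot "knows" about the move is in that one count; there is no verbal reasoning to recover.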
But even that isn't terribly useful in the way you guys might want. For the computer to produce a general principle for the situation would genuinely be impressive (unless it's via the confabulation route, in which case it's not impressive and it's questionable whether it corresponds to reality).
The process of selecting moves is somewhat opaque, even to the person doing it. Unraveling that process and putting it into understandable terms is not trivial
even when you have the source code for that process. And if your bot is based on Bayesian weighting of a jillion small automatically tuned factors or something similar, you could have
written the source code and still not have even the slightest inkling of how it works.
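A toy example of what I mean (the feature names and the weight values are invented; pretend an automated tuner produced them):

```python
# Toy evaluator built from small tuned factors. In a real system the
# weights come out of an optimizer, and the author has no story for why
# "wall_adjacency" landed at -0.173 rather than some other value.
TUNED_WEIGHTS = {                    # pretend output of automated tuning
    "liberties": 0.412,
    "wall_adjacency": -0.173,
    "distance_to_last_move": 0.091,
    "eye_potential": 0.287,
}

def score_move(features):
    """Linear combination of tuned factors; the resulting score has no
    human-readable decomposition into 'reasons'."""
    return sum(TUNED_WEIGHTS[name] * value
               for name, value in features.items())

a = score_move({"liberties": 3, "wall_adjacency": 1,
                "distance_to_last_move": 2, "eye_potential": 0})
b = score_move({"liberties": 2, "wall_adjacency": 0,
                "distance_to_last_move": 1, "eye_potential": 1})
print("preferred:", "A" if a > b else "B")
```

You wrote every line of that, and you still can't say in human terms *why* move A beats move B beyond "the weighted sum was bigger" -- and now imagine a jillion factors instead of four.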