hyperpape wrote:Show me where there are computers that understand what words mean. Where are there even computers that understand syntax for natural language?
IBM is trying to go there. Watson was a big step forward; how far forward is debatable. But there are a couple of things that are usually overlooked when it comes to computers.
First, you can inspect them. When Watson said Toronto was a US city, you may laugh, but it can detail exactly how it arrived at that conclusion, and you may then see how to avoid such faulty reasoning.
The second is that they can inspect themselves. After you build a monster like Deep Blue or Watson, after every answer you can tell it: "Good job! But... couldn't you have realised that faster?" It will optimise itself, or, to put it in more human terms, it will try to extract principles from raw data. The principles need not be the same ones a human would use. In correct go play there may be nothing even remotely similar to what we call "influence".
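A minimal sketch of what "extracting principles from raw data" can mean in practice: fit weights for a few position features against game outcomes, then read the learned weights back as human-style principles. All the data, feature names, and numbers below are invented for illustration; real engines tune thousands of parameters, but the idea is the same.

```python
import math

# Each row: (material_diff, rook_on_7th, doubled_pawns), outcome (1 = win).
# Entirely made-up games, just enough to let weights be learned.
games = [
    (( 1.0, 1.0, 0.0), 1),
    (( 0.0, 1.0, 1.0), 1),
    ((-1.0, 0.0, 1.0), 0),
    (( 0.0, 0.0, 1.0), 0),
    (( 2.0, 0.0, 0.0), 1),
    ((-2.0, 1.0, 0.0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

weights = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(2000):          # plain gradient descent on the log-loss
    for features, outcome in games:
        p = sigmoid(sum(w * f for w, f in zip(weights, features)))
        for i, f in enumerate(features):
            weights[i] += lr * (outcome - p) * f

# The tuned weights read like principles: material and a rook on the 7th
# help, doubled pawns hurt.
for name, w in zip(["material", "rook_on_7th", "doubled_pawns"], weights):
    print(f"{name:>13}: {w:+.2f}")
```

The point is that the machine-derived "principle" is just a weight on a feature; it need not match any human proverb, but when it does, it is directly readable.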
daniel_the_smith wrote:The hard part is making those reasons correspond with reality. And if the bot's choice is based off of "in 100,000 positions, this move came out the best most often", there's not going to be a way to express that. The bot would have to examine all the failed positions, identify the commonalities between them, and then it could say something like, "If I play X, it's no good because of Y, if Z, then W, ... So, this move avoids most of the problems."
I am not sure what you are measuring here.
A chess-playing program will show you the variations it considers best, along with their refutations. It will also tell you that one position is better than another because of doubled or passed pawns; it will evaluate the strength of knights and bishops depending on how many pawns are still on the board, and so on. Incidentally, such programs do not evaluate moves, only positions. In game theory there is no concept of playing "the same move" in different positions.
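The kind of evaluation described above can be sketched as a toy scoring function. The terms and weights here are invented for illustration, not taken from any real engine, but they show the shape: the function scores a position, not a move, and the worth of minor pieces shifts with how many pawns remain.

```python
def evaluate(material, doubled_pawns, passed_pawns,
             knights, bishops, pawns_on_board):
    """Return a score in pawns from the side-to-move's point of view."""
    score = material
    score -= 0.25 * doubled_pawns          # structural weakness
    score += 0.50 * passed_pawns           # structural asset
    openness = (16 - pawns_on_board) / 16  # 0 = closed, 1 = wide open
    # Knights prefer closed, pawn-heavy boards; bishops prefer open ones.
    score += knights * (3.0 + 0.25 * (1 - openness))
    score += bishops * (3.0 + 0.25 * openness)
    return score

# Same minor pieces, different pawn structure, different judgement:
knights_closed = evaluate(0, 0, 0, knights=2, bishops=0, pawns_on_board=16)
knights_open   = evaluate(0, 0, 0, knights=2, bishops=0, pawns_on_board=4)
```

Real evaluations have hundreds of such terms, but each one is as nameable and explainable as these.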
They will say: "I would like to reach this position because the advantage of a rook on the 7th rank is good enough to win from here". I think that is exactly what you would call a "general principle".
Of course there is a detail here. They will actually say: "I would like to reach this position because there is no way to stop me from getting a rook on the 7th rank within ten moves, and this advantage is good enough to win from here", i.e. they combine positional judgement with plain old reading ahead.
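That combination of positional judgement and reading ahead can be sketched as a negamax search over a game tree. The tree, positions, and leaf scores below are all invented; interior nodes are expanded by search, while leaves get a static positional score (the "rook on the 7th rank is worth this much" part).

```python
TREE = {                       # position -> legal successor positions
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
STATIC = {"a1": 1.2, "a2": -0.3, "b1": 0.4, "b2": 0.5}  # leaf evaluations

def negamax(pos):
    """Score `pos` for the side to move, reading ahead to the leaves."""
    if pos not in TREE:                       # leaf: positional judgement only
        return STATIC[pos], [pos]
    best, line = None, None
    for child in TREE[pos]:
        score, cont = negamax(child)
        score = -score                        # opponent's gain is our loss
        if best is None or score > best:
            best, line = score, [pos] + cont
    return best, line

score, line = negamax("start")                # best score and principal variation
```

The `line` returned is exactly the "variation it considers best" that an engine shows you, ending in the position whose static judgement decided the choice.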
daniel_the_smith wrote:But even that isn't terribly useful in the way you guys might want. For the computer to produce a general principle for the situation would genuinely be impressive (unless it's via the confabulation route, in which case it's not impressive and it's questionable that it's based in reality).
You assume that general principles exist at all. That is very unlikely. What you call general principles are a set of mnemonics for estimating the relative winning probability, something the computer in fact does very well.

daniel_the_smith wrote:And if your bot is based on Bayesian weighting of a jillion small automatically tuned factors or something similar, you could have written the source code and still not have even the slightest inkling of how it works.
You overestimate the number of factors and their obscurity. For example, if I remember correctly, Rybka has an abnormally high opinion of a knight on an advanced rank: something easily understandable and translatable into human terms, yet completely unexpected!
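A sketch of why such factors are inspectable: engines commonly encode "a knight on an advanced rank" as a piece-square (here simplified to per-rank) table. The numbers below are invented, but the point stands: the table is directly readable, so an abnormally high entry jumps out at a human reviewing the weights.

```python
# Bonus in centipawns by rank, for illustration only (not Rybka's values).
KNIGHT_RANK_BONUS = {
    1: -10, 2: 0, 3: 10, 4: 20, 5: 35, 6: 60, 7: 25, 8: -5,
}

def suspicious_factors(table, threshold=50):
    """List entries a human reviewer might flag as unexpectedly large."""
    return [(rank, bonus) for rank, bonus in table.items() if bonus >= threshold]

flagged = suspicious_factors(KNIGHT_RANK_BONUS)   # the rank-6 entry stands out
```

Whether the tuned value is right is an empirical question, but nothing about it is opaque: it translates straight into "this engine really likes knights on the 6th rank".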