I just came across Cleverbot the other day, and found it pretty intriguing. When I got my first PC (an IBM AT 80286), a friend gave me a floppy with a little program on it called Eliza.
I could spend hours chatting with Eliza, not so much trying to hold an actual conversation as trying to exploit her simplistic nature and get her to say outrageous things. I even went on to write a few similar programs myself that made basic attempts at understanding sentence structure or picking out keywords, and tried to respond appropriately.
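For flavor, here’s roughly the kind of thing those little programs did, sketched in a few lines of Python. The patterns and canned replies are made-up examples of mine, not Eliza’s actual script:

```python
import re

# Hypothetical keyword rules: a regex to spot, and a reply template.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\b(?:mother|father|family)\b", "Tell me more about your family."),
    (r"\bbecause\b", "Is that the real reason?"),
]
DEFAULT = "Please, go on."

def respond(line: str) -> str:
    """Return the reply for the first keyword rule that matches."""
    for pattern, template in RULES:
        match = re.search(pattern, line.lower())
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel ignored"))  # -> "Why do you feel ignored?"
```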
Cleverbot actually takes a simpler approach than this, and that’s what really piqued my interest. Instead of making a real attempt at behaving intelligently, it simply learns to mimic. This makes quite a lot of sense, as that’s how humans learn anyway (watch a toddler around adults sometime).
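The core of a mimicking bot can be surprisingly small. This sketch is my guess at the general shape, not Cleverbot’s actual implementation: remember every (prompt, reply) pair ever seen, and answer a new prompt with whatever reply followed the most similar remembered prompt:

```python
from difflib import SequenceMatcher

# Conversation memory: (prompt, reply) pairs harvested from past chats.
memory: list[tuple[str, str]] = []

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def mimic(prompt: str) -> str:
    """Reply with whatever followed the most similar prompt seen before."""
    if not memory:
        return "Hello."
    _, reply = max(memory, key=lambda pair: similarity(pair[0], prompt))
    return reply

def learn(prompt: str, reply: str) -> None:
    """Record what a (presumably human) speaker said in response to a prompt."""
    memory.append((prompt, reply))
```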
The bot, unfortunately, is crap. This is most likely because people chatting with it know they’re chatting with a bot, so the data set it has available to learn from isn’t a proper example of intelligent conversation. It’s really no more likely to pass a Turing test than Eliza was.
I got to thinking, though: why don’t we see this sort of learning in games? I could probably put together a simple tic-tac-toe program that easily learned to play a perfect game without the winning strategy ever being explicitly programmed. It would simply record every move of every game it had played so far, so that in the future it could examine the board’s state and repeat any action that had previously led to a win from that same state. This is roughly how a chess program’s opening book works, except that the data is all pre-programmed from the games of masters.
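A sketch of what I have in mind, assuming the program trains itself through self-play with a little random exploration (this is just the record-and-repeat idea above, not any known engine’s algorithm):

```python
import random

WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

# Learned memory: board state -> {move: wins minus losses seen after it}
memory = {}

def choose_move(board, explore=0.1):
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if random.random() < explore:
        return random.choice(moves)  # occasionally try something new
    scores = memory.get(board, {})
    return max(moves, key=lambda m: scores.get(m, 0))  # best record so far

def record_game(history, result):
    """history: (state, move) pairs for one player; result: +1 win, -1 loss, 0 draw."""
    for state, move in history:
        table = memory.setdefault(state, {})
        table[move] = table.get(move, 0) + result

def play_training_game():
    board, player = (' ',) * 9, 'X'
    histories = {'X': [], 'O': []}
    while True:
        move = choose_move(board)
        histories[player].append((board, move))
        board = board[:move] + (player,) + board[move + 1:]
        w = winner(board)
        if w or ' ' not in board:
            for p, hist in histories.items():
                record_game(hist, 0 if w is None else (1 if p == w else -1))
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(50000):
    play_training_game()
```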
The tricky part of this sort of AI, I suppose, is that games have just gotten too complicated. A tic-tac-toe board has a very limited number of possible states before a win is reached, at most 3^9 = 19,683 raw arrangements of the nine cells and far fewer legal positions (so few that I’ve previously written a complete tic-tac-toe program in about 100 lines of code), but moving up even to games with still-simple rules like chess or Go, the number reaches astronomical levels. One would assume, then, that applying such strategies to a game like Civilization would be impossible, but I don’t think that’s necessarily the case.
The key would be to make the conditions fuzzy enough. Think like a human player, who might learn a lesson such as “when my treasury is low, I should build banks.” By identifying a few hundred key measurements, an AI could track any actions it took in those situations (and, of course, track the human player’s actions as well). These would be stored in a database and flagged as part of either a winning or losing game. Repeating this process would let the best and worst strategies filter to the top and bottom of the rankings, to be picked accordingly based on the chosen game difficulty.
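A minimal sketch of the bookkeeping, using a handful of made-up measurements (a real game would track hundreds, and the thresholds here are arbitrary):

```python
from collections import defaultdict

# (situation, action) -> win/loss tallies across finished games
record = defaultdict(lambda: {'wins': 0, 'losses': 0})

def situation(game):
    """Reduce raw game state to a few fuzzy, human-style conditions."""
    return (
        'low_treasury' if game['gold'] < 100 else 'ok_treasury',
        'at_war' if game['at_war'] else 'at_peace',
        'behind_in_tech' if game['tech_rank'] > 3 else 'ahead_in_tech',
    )

def flag_game(history, won):
    """history: (situation, action) pairs logged during one game."""
    for sit, action in history:
        record[(sit, action)]['wins' if won else 'losses'] += 1

def ranked_actions(sit, available):
    """Sort available actions by their win rate in this situation."""
    def win_rate(action):
        tally = record[(sit, action)]
        total = tally['wins'] + tally['losses']
        return tally['wins'] / total if total else 0.5  # unknowns rank mid-table
    return sorted(available, key=win_rate, reverse=True)
```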
This has the incredible advantage of delivering what people truly want when they choose Easy or Hard difficulty in a strategy game: a simulation of playing against a poor or an exceptional player, respectively. Instead of the game adjusting production bonuses, a simpleton AI would purposely make bad decisions, or at the very least not take full advantage of its learned behaviors.
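Continuing the sketch above, difficulty then falls out almost for free: Hard pulls from the top of the ranking, Easy from the bottom.

```python
def pick_action(sit, available, difficulty):
    """difficulty in [0.0, 1.0]: 1.0 plays the best-ranked action, 0.0 the worst."""
    ranked = ranked_actions(sit, available)
    index = round((1.0 - difficulty) * (len(ranked) - 1))
    return ranked[index]

# e.g. an Easy AI deciding what to build while broke and at war:
build = pick_action(('low_treasury', 'at_war', 'behind_in_tech'),
                    ['bank', 'barracks', 'wonder', 'settler'], difficulty=0.2)
```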
So why hasn’t this happened yet? I first heard mention of games being programmed with learning AI routines in the early 90s, but I’ve seen about as many examples of it as I have playable Virtual Reality games.