So the big news this morning is that Google DeepMind's AlphaGo
system just beat Lee Sedol, one of the world's top-ranked human players, at the game of Go, taking the first three straight games in a best-of-five match.
The hype is that this is around a decade early compared to the best industry estimates from a month or so ago. Not sure I'm buying it as a quantum leap, though. Deep learning is pretty much where everyone knew it was, aside from week-on-week improvements in setup and application. I think this is more an illustration that human players don't really have a solid grasp of the game. That makes sense when you consider that Go has more legal board positions (roughly 10^170) than there are atoms in the observable universe (roughly 10^80), and we already learned back in '97, when Deep Blue beat Kasparov, that even the far smaller search space of chess is way beyond the capability of the best players our species has to offer.
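The scale gap is worth a quick back-of-envelope check. The branching factors and game lengths below are the commonly quoted ballpark figures (assumptions, not measurements), but they're enough to show why Go dwarfs chess:

```python
import math

# Commonly quoted rough figures; treat these as ballpark assumptions.
CHESS_BRANCHING, CHESS_PLIES = 35, 80   # moves per position, typical game length
GO_BRANCHING, GO_PLIES = 250, 150
ATOMS_EXP = 80                          # atoms in observable universe ~ 10^80

def tree_size_exponent(branching, plies):
    """Return log10 of branching**plies, i.e. the game-tree size as 10^n."""
    return plies * math.log10(branching)

chess_exp = tree_size_exponent(CHESS_BRANCHING, CHESS_PLIES)
go_exp = tree_size_exponent(GO_BRANCHING, GO_PLIES)
print(f"chess tree ~ 10^{chess_exp:.0f}, go tree ~ 10^{go_exp:.0f}, "
      f"atoms ~ 10^{ATOMS_EXP}")
```

Even the chess tree towers over the atom count, and the Go tree towers over the chess tree by a couple of hundred orders of magnitude again, which is the whole point: nobody, human or machine, is exploring these spaces exhaustively.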
Either way, it's undoubtedly a significant milestone. Unlike Deep Blue, which was built from the ground up as a highly specialized, dedicated chess machine and nothing else, AlphaGo is a collection of general-purpose learning algorithms that can be pointed at other problem domains largely by switching the training data. Make no mistake, machine learning is poised to have a much more significant impact than the traditional computing paradigm ever delivered, and probably on a shorter timescale to boot.
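To make the "switch the training data" point concrete, here's a deliberately tiny sketch: one generic training routine, two unrelated datasets. This is plain logistic regression, nothing like AlphaGo's actual pipeline (which pairs deep networks with Monte Carlo tree search), and every name in it is made up for illustration; the point is only that the learner has no domain knowledge baked in.

```python
import numpy as np

def train(X, y, lr=0.5, steps=2000):
    """Fit a logistic regression by gradient descent; knows nothing about the domain."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])           # random starting weights
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # average logistic-loss gradient step
    return w

def predict(X, w):
    """Threshold the predicted probabilities at 0.5."""
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)

# Inputs with a bias column appended, and two unrelated toy "domains".
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y_and = np.array([0., 0., 0., 1.])  # domain 1: logical AND
y_or  = np.array([0., 1., 1., 1.])  # domain 2: logical OR

w_and = train(X, y_and)  # same learner...
w_or  = train(X, y_or)   # ...different data, different behaviour
```

The same `train` function ends up implementing AND in one case and OR in the other purely because the labels changed, which is the (toy-scale) shape of the claim being made about AlphaGo's components.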
The word on the street is that the next game target is StarCraft. For a whole bunch of reasons, mostly that the fog of war means players only ever see part of the map, plus the need for long-term planning under real-time pressure, cracking StarCraft would make the chess and Go accomplishments look infantile by comparison.