I think the next really major breakthrough in AI will come when we finally have enough computing power to model the whole-brain neural network of an animal such as a rat. We have a lot of theory, but unfortunately computers are so far behind that theory that we can't actually test anything. Recently, though, one of the world's most powerful supercomputers managed to model the connectivity of one cubic millimeter of rat cortex in under a year - about nine months, if I recall - which tells us that once the technology catches up, it will ultimately be possible to model the complexity of the networks that make up a mammalian brain. There will still be a lot of missing pieces - how and where memories are stored, for instance - and honestly we know very little about brain function, but being able to model the connectivity should help us start to answer those questions, and that would be a good start in figuring out how to move forward toward true AI.
Spoken like a true neuroscientist! I've no doubt that being able to model an entire brain will yield tons of awesome data in the neuroscience space, but, as you rightly point out, that's a ways off. Short term, the advances are in domain-specific narrow AI, which is exploding right now. There's not as much public demand for an artificial entity that can eat cheese and navigate a simple maze (even though that'd be a pretty fkin awesome accomplishment) as there is for, say, medical imaging analysis, voice recognition, or self-driving cars.
Even if they throttle it down to human-comparable speed, it will still easily be able to pick the most relevant one, since it can assess the whole game state instantly.
The thinking is that they're not giving it access to the whole game state. Bear in mind that AlphaGo is an evolution of the same system that beat all those Atari games using only screen input. They'll presumably take the same approach with StarCraft; otherwise, like you say, it'd be kind of pointless. These game milestones are just that - milestones. They're not trying to bring new and improved chess computers to market; it's more about PR and measuring how well the tech can solve real-world use cases.
Again, I'd like to see a learning AI that isn't modeled on a human.
Because it would scare the shit out of everyone. Also, it would probably be less likely to decide that we're only good for building out infrastructure as slaves, etc, etc.
AIs aren't really modeled on a human, per se; they're just algorithms that transform data in a vaguely similar way to neurons. Functionally, they may as well be modeled on rat or lizard brain cells. What's funny to me is that with each successful use case, they give humanity a serious kick in the ego, since we're forced to accept there's yet another thing our brains are totally shit at. Don't get me wrong - I think our skullmeat is a pretty impressive piece of hardware (doubly so when you consider the "design process") - but what we're finding here is that there are better tools for many of the jobs it used to be considered good at.
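To make the "vaguely similar to neurons" point concrete, here's a toy sketch (not DeepMind's code, and the numbers are made up purely for illustration) of what one artificial neuron actually does: a weighted sum of its inputs plus a bias, pushed through a nonlinearity. That's the whole biological resemblance.

```python
def relu(x):
    """Nonlinear activation: pass positive values, clamp negatives to zero."""
    return max(0.0, x)

def neuron(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs + bias, then activation."""
    return relu(sum(i * w for i, w in zip(inputs, weights)) + bias)

# Arbitrary illustrative values:
# 0.4*1.0 + 0.3*(-2.0) + (-0.2)*0.5 + 0.1 = -0.2, which relu clamps to 0.0
print(neuron([1.0, -2.0, 0.5], [0.4, 0.3, -0.2], bias=0.1))  # 0.0
```

A real network is just thousands of these stacked in layers, with the weights tuned by training rather than set by hand - no actual neurochemistry anywhere in sight.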
*update* Sedol just forced the machine to resign game 4. Looks like there's life in the old ape-configuration yet.