Speaking of DNNs, Nvidia, the cornerstone of pretty much everything that's happened in the last couple of years (the same way Intel was the foundation of the information revolution), is poised to give us 40+ years' worth of Moore's-law progress in the space of a couple of months by refining architecture instead of metal.
The downside is more presentations featuring the eminently uncharismatic Jen-Hsun Huang.
So Google has been making TPUs, their own custom hardware for their TensorFlow framework. Nvidia has these machine learning chips you linked. IBM is back in the hardware game with their new POWER9 chips. Meanwhile, what is Intel doing?
Crammulus, you are pretty much right. The learning algorithm can only really figure out the most probable response.
However, it very well might have a variable cordoned off for certain words that it assigned itself. For both K-means and SVMs (for more general DNNs this may or may not be true), you figure out the most probable response by first chucking your input off to a different vector space with thousands of dimensions (even infinitely many, with Hilbert spaces), doing the clumping there, and then mapping the info back. While it's in that other space, there very well may be a specific dimension for the concept of a dog. That slot for the concept may even be language agnostic, if they feature-extract the language out before chucking the input into the algorithm.
I probably explained that poorly.
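Maybe a toy sketch helps. This isn't what any of these systems literally do; it's a made-up six-sentence corpus run through scikit-learn, with TF-IDF + truncated SVD (latent semantic analysis) standing in for the fancier kernel or DNN feature maps, just to show the "map to another space, clump there, and inspect the concept dimensions" idea:

```python
# A minimal sketch, assuming a hypothetical toy corpus.
# Documents get mapped into a latent vector space (TF-IDF + SVD here),
# the clustering ("clumping") happens in that space, and individual
# latent dimensions can end up loosely tracking concepts like "dog".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "the dog barked at the mailman",
    "my dog loves chasing the ball",
    "puppies are young dogs",
    "the stock market fell sharply today",
    "investors worry about interest rates",
    "the central bank raised rates again",
]

# Map raw text into a sparse bag-of-words vector space.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)

# Project into a small dense latent space; each latent dimension is a
# weighted mix of words, and one may roughly align with "dog-ness".
svd = TruncatedSVD(n_components=2, random_state=0)
Z = svd.fit_transform(X)

# Do the clumping in the latent space rather than on raw words.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z)
print(km.labels_)  # dog sentences and finance sentences separate

# Inspect which words load on each latent dimension ("concept slot").
terms = tfidf.get_feature_names_out()
for i, comp in enumerate(svd.components_):
    top = comp.argsort()[-4:][::-1]
    print(f"dimension {i}:", [terms[t] for t in top])
```

In the kernel-SVM case the mapping is implicit and the space can be infinite-dimensional, so you can't print the dimensions out like this, but the intuition is the same.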