The unrecognized death of speech recognition

The accuracy of computer speech recognition flat-lined in 2001, before reaching human levels. The funding plug was pulled, but no funeral, no text-to-speech eulogy followed. [...]

In 2001 recognition accuracy topped out at 80%, far short of HAL-like levels of comprehension. Adding data or computing power made no difference. Researchers at Carnegie Mellon University checked again in 2006 and found the situation unchanged. With human discrimination as high as 98%, the unclosed gap left little basis for conversation. But sticking to a few topics, like numbers, helped. Saying "one" into the phone works about as well as pressing a button, approaching 100% accuracy. But loosen the vocabulary constraint and recognition begins to drift, turning to vertigo in the wide-open vastness of linguistic space. (A toy simulation below makes the slide concrete.) [...]

A 1996 look at the state of the art reported that "Despite over three decades of research effort, no practical domain-independent parser of unrestricted text has been developed." As with speech recognition, parsing works best inside snug linguistic boxes, like medical terminology, but weakens when you take down the fences holding back the untamed wilds. Today's parsers "very crudely are about 80% right on average on unrestricted text," according to Cambridge professor Ted Briscoe, author of the 1996 report. Parsers and speech recognizers have penetrated language to similar, considerable depths, but without reaching a fundamental understanding. (A second sketch below shows the same fenced-in-or-lost pattern.) [...]

We are surrounded by unceasing, rapid technological advance, especially in information technology. Surely nothing can be unattainable. There has to be another way. Right? Yes, but it's more difficult than the approach that didn't work. In place of simple speech recognition, researchers last year proposed "cognition-derived recognition" in a paper authored by leading academics, a scientist from Microsoft Research, and a co-founder of Dragon Systems. The project entails research to "understand and emulate relevant human capabilities" as well as understanding how the brain processes language. The researchers, with that particularly human talent for euphemism, are actually saying that we need artificial intelligence if computers are going to understand us.

Originally, however, speech recognition was going to lead to artificial intelligence. Computing pioneer Alan Turing suggested in 1950 that we "provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English." Over half a century later, artificial intelligence has become a prerequisite to understanding speech. We have neither the chicken nor the egg.

[Image: Strings, heavy with meaning (http://farm1.static.flickr.com/52/129197471_e1a8db8459.jpg)]
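The slide from near-perfect digit recognition to drift on an open vocabulary is easy to reproduce in miniature. Below is a toy Python simulation, everything in it invented for illustration: each word is a random vector standing in for an acoustic signature, the noise level is arbitrary, and the "recognizer" simply picks the nearest stored signature. With ten words the noise barely matters; with a thousand, confusable neighbors crowd in and accuracy drifts downward.

    import random

    random.seed(0)

    def make_vocab(size, dim=8):
        # Each word gets a random "acoustic signature" vector.
        # (Invented stand-in for real acoustic features.)
        return [[random.gauss(0, 1) for _ in range(dim)] for _ in range(size)]

    def nearest(heard, vocab):
        # Guess the word whose signature is closest to what was heard.
        dist = lambda v: sum((a - b) ** 2 for a, b in zip(heard, v))
        return min(range(len(vocab)), key=lambda i: dist(vocab[i]))

    def accuracy(vocab, noise=0.9, trials=400):
        hits = 0
        for _ in range(trials):
            w = random.randrange(len(vocab))
            # Speech arrives as the true signature plus channel noise.
            heard = [x + random.gauss(0, noise) for x in vocab[w]]
            hits += (nearest(heard, vocab) == w)
        return hits / trials

    for size in (10, 100, 1000):
        print(f"{size:5d} words: {accuracy(make_vocab(size)):.0%} accuracy")

Nothing about the noise changes as the vocabulary grows; only the number of ways to be wrong does.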
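Parsing inside a snug box can be sketched the same way. The grammar and lexicon below are invented toys covering a sliver of clinical language; a simple top-down parser accepts anything inside the fence and returns nothing for anything outside it, a caricature of the domain dependence Briscoe's report describes.

    # A toy domain-restricted grammar: rules plus a narrow clinical lexicon.
    RULES = {
        "S":  [["NP", "VP"]],
        "NP": [["Det", "N"], ["N"]],
        "VP": [["V", "NP"], ["V"]],
    }
    LEXICON = {
        "Det": {"the", "a"},
        "N":   {"patient", "nurse", "dose", "aspirin"},
        "V":   {"received", "administered", "recovered"},
    }

    def spans(sym, toks, i):
        # Return every position where `sym` could end, given it starts at `i`.
        if sym in LEXICON:
            return {i + 1} if i < len(toks) and toks[i] in LEXICON[sym] else set()
        ends = set()
        for rhs in RULES.get(sym, []):
            starts = {i}
            for part in rhs:
                starts = {e for s in starts for e in spans(part, toks, s)}
            ends |= starts
        return ends

    def parses(sentence):
        toks = sentence.lower().split()
        return len(toks) in spans("S", toks, 0)

    print(parses("the nurse administered a dose"))   # True: inside the fence
    print(parses("time flies like an arrow"))        # False: outside it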
Statistics veiling ignorance
Many spoken words sound the same. Saying "recognize speech" makes a sound that can be indistinguishable from "wreck a nice beach." Other laughers include "wreck an eyes peach" and "recondite speech." But with a little knowledge of word meaning and grammar, it seems like a computer ought to be able to puzzle it out. Ironically, however, much of the progress in speech recognition came from a conscious rejection of the deeper dimensions of language. As an IBM researcher famously put it: "Every time I fire a linguist my system improves." But pink-slipping all the linguistics PhDs only gets you 80% accuracy, at best.
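What the fired linguists left behind was counting. The sketch below is a minimal, invented illustration of the statistical approach: a bigram model with add-one smoothing, trained on a five-sentence stand-in corpus (real systems use billions of words). It knows nothing of meaning or grammar, yet the counts alone prefer "recognize speech" over the acoustically similar "wreck a nice beach."

    import math
    from collections import Counter

    # Invented stand-in corpus; real models train on billions of words.
    corpus = [
        "computers still struggle to recognize speech",
        "systems that recognize speech need data",
        "we taught the machine to recognize speech",
        "they walked along the nice beach",
        "a storm can wreck a small boat",
    ]

    bigrams, unigrams = Counter(), Counter()
    for line in corpus:
        words = ["<s>"] + line.split() + ["</s>"]
        unigrams.update(words[:-1])
        bigrams.update(zip(words, words[1:]))
    V = len({w for line in corpus for w in line.split()} | {"</s>"})

    def log_prob(sentence):
        # Add-one smoothed bigram log-probability of a transcription.
        words = ["<s>"] + sentence.split() + ["</s>"]
        return sum(
            math.log((bigrams[(w1, w2)] + 1) / (unigrams[w1] + V))
            for w1, w2 in zip(words, words[1:])
        )

    # Acoustically near-identical hypotheses. A real recognizer would combine
    # this score with an acoustic model; here the prior alone separates them.
    for hyp in ("recognize speech", "wreck a nice beach"):
        print(f"{hyp!r}: {log_prob(hyp):.1f}")

That is the trick, and also its ceiling: the counts say which word strings are common, not what any of them mean.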