(title shamelessly stolen from Dr. Steel.)
writing (k) me
-----
There's been a lot of talk over the past twenty years or so about the singularity. It's an exciting prospect, and new technology seems to inch ever closer to the tortoise it's chasing. Predictions for the singularity's arrival range from the ever-optimistic (or not) "ten years hence" to "perhaps within the next millennium." I don't want to make a prediction myself, but I'd like to address some of the issues surrounding the topic.
First: Whose singularity? Different authors paint different pictures of what constitutes the technological singularity. The general definition is laid out by the Wikipedia article I linked to above -- roughly "the point in time at which technology is capable of designing its own successors." But that's more problematic than it sounds. If we include genetic engineering and other biotechnologies, we're arguably already there... although the argument might be made that we're simply hijacking Nature. Perhaps we mean only "mechanical" technology? But then what about wet networks or similar solutions? While these are still largely the domain of science fiction, they're a not-so-improbable means of addressing some of the problems that have plagued fuzzy logic systems for decades. Or perhaps we simply mean "design" more selectively... that is, the chaotic randomness of genetic evolution 'doesn't count' as the intentional process of design. I'll address that more directly in a moment.
But another issue first. Wrapped up in the idea of the singularity is "improvement." The child machines are supposed to be better than their parents. But "better" is a hugely qualifiable term. Are they 'better' if they're basically the same design, but built from higher-grade materials and engineered with logical efficiency improvements that take advantage of them (smaller size, better heat control, less waste)? Many people would say there's no real 'invention' there. Are they 'better' if they sacrifice some elements of the design to specialize further for a particular environment? Again, there are obvious criticisms this position would need to surmount.
The issue here is two-fold. The first, more obvious aspect is that of the black swan. Most technological improvements fall into the category I described above as 'logical efficiency improvements,' which occur as ideas and technologies filter through the memetic environment -- a better grasp of some physical process yields minor refinements to some area of thermodynamics, perhaps, which results in a compound with better heat distribution properties, which results in smaller, faster computer processors. This process can span many decades. But what really jumps industries forward are ideas that borrow something from a completely unrelated field, or arrive at something almost completely original: the black swan. Many of these discoveries are memorialized for the unlikely story of their origin, penicillin being one very well-known example. The origin of programmable computing, with Babbage's borrowing of punched cards from the Jacquard loom (although exactly where the black swan lay -- with Babbage, Jacquard, or Bouchon -- is a matter of some debate), is another. These are unpredictable moments of confusion and inspiration, when someone looks at something and sees something else entirely.
Which brings me to the second aspect: these moments, I would argue, are very often a result of humans being very bad at logical thought. Despite the many analogies to the contrary, our minds don't work much like a computer... at least, not like any computer a sane technician would build. We cross-reference material effortlessly, and we frequently bring up completely the wrong information for a given context. We temporarily forget things and are forced to make do without... at the dinner table, a request for 'ketchup' becomes "Hey, pass me that thing. The red bottle. Next to the salt." We mishear song lyrics as mondegreens. We look at an abstract shape in a good mood and see a rainbow; in a bad mood, we see a frown. These accidents of context are an important attribute of the process of invention.
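To make that concrete, here's a toy sketch (my own contrivance, not a model of cognition, and the word-overlap "similarity" is as crude as it looks): an associative store that answers every cue with whatever memory is closest, and never says "no match."

```python
def overlap(cue, memory):
    """Similarity = count of shared words (deliberately crude)."""
    return len(set(cue.lower().split()) & set(memory.lower().split()))

memories = [
    "the ketchup is that red bottle next to the salt",
    "the hot sauce is the small red bottle on the shelf",
    "the mustard is the yellow bottle in the fridge",
]

def recall(cue):
    # Always returns *something*: whichever memory shares the most words.
    return max(memories, key=lambda m: overlap(cue, m))

print(recall("pass me that red bottle"))           # -> the ketchup memory, correctly
print(recall("the jar next to the milk, maybe?"))  # -> still the ketchup memory: confident and wrong
```

Note what the toy gets right by accident: it never fails to retrieve, it just retrieves whatever's closest to hand -- which is exactly the kind of sloppy cross-referencing that occasionally pays off.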
Making a mechanical device that operated this way would require a substantial degradation of its abilities as a machine. We don't want a calculator to tell us that the square root of 269 is 13, even if it then laughs at the mistake and tells us a story about some other time it made a mistake that ended up being pretty funny if you think about it. We have strangers on the subway for shit like that. Machines are built to be useful. Logical. Precise. A device that can not only use new information to improve itself up to a point, but continually improve on its own design by making novel discoveries, is far removed from current technology. And the occurrence of a technological singularity seems to depend entirely on such a thing.
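And note that the degradation really would have to be engineered in -- precision is the default, distraction is extra work. A deliberately silly sketch (the 30% figure and the "269 misread as 169" slip are pure invention on my part):

```python
import math
import random

def strict_sqrt(x):
    """What we actually want from a machine: the right answer, every time."""
    return math.sqrt(x)

def distractible_sqrt(x):
    """The 'human-like' version: occasionally wanders off to a nearby
    association -- here, misreading 269 as 169. The 30% rate is arbitrary."""
    if random.random() < 0.3:
        return math.sqrt(x - 100)   # 269 misremembered as 169
    return math.sqrt(x)

print(strict_sqrt(269))        # 16.40121946..., every single time
print(distractible_sqrt(269))  # usually 16.40...; now and then a cheerful 13.0
```

The error-prone version is strictly more code than the correct one, which is rather the point.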
Of course, as suggested above, if we permit cybernetic, mechasymbiotic, or biological definitions... then we're just riding the wave of a three-and-a-half-billion-year-old singularity already.