some interesting stuff to chew on... I know it's long, but I found it really cool
Tinker Toy Brains
CLIFF PICKOVER (http://sprott.physics.wisc.edu/Pickover/pc/realitycarnival.html)
Computer scientist, IBM's T. J. Watson Research Center; Author, Calculus and Pizza
If we believe that consciousness is the result of patterns of neurons in the brain, our thoughts, emotions, and memories could be replicated in moving assemblies of Tinkertoys. The Tinkertoy minds would have to be very big to represent the complexity of our minds, but it nevertheless could be done, in the same way people have made computers out of 10,000 Tinkertoys. In principle, our minds could be hypostatized in patterns of twigs, in the movements of leaves, or in the flocking of birds. The philosopher and mathematician Gottfried Leibniz liked to imagine a machine capable of conscious experiences and perceptions. He said that even if this machine were as big as a mill and we could explore inside, we would find "nothing but pieces which push one against the other and never anything to account for a perception."
If our thoughts and consciousness do not depend on the actual substances in our brains but rather on the structures, patterns, and relationships between parts, then Tinkertoy minds could think. If you could make a copy of your brain with the same structure but using different materials, the copy would think it was you. This seemingly materialistic approach to mind does not diminish the hope of an afterlife, of transcendence, of communion with entities from parallel universes, or even of God. Even Tinkertoy minds can dream, seek salvation and bliss -- and pray.
Half-Man, Half-Machine: The Mind of the Future
source: http://www.businessweek.com/1999/99_35/b3644022.htm (http://www.businessweek.com/1999/99_35/b3644022.htm)
Raymond C. Kurzweil is the author of The Age of Intelligent Machines, published in 1990, and The Age of Spiritual Machines: When Computers Exceed Human Intelligence, published this year. He is the founder and chairman of Kurzweil Technologies in Wellesley Hills, Mass., as well as five other companies that still bear his name or are still operating under new ownership. He spoke with Business Week Senior Writer Otis Port about the separate and joint futures of human and artificial intelligence.
Q: Do you have any doubts that a superior intelligence will emerge in the next few decades?
A: No. It's inevitable. For example, nanotubes would allow computing at the molecular level. A one-inch cube of nanotube circuitry would be about 1 billion times more powerful than the human brain, in terms of computing capacity. That raw computing capacity is a necessary but not sufficient condition to achieve human-level intelligence in a machine.
We also need the organization and the software to organize those resources. There are a number of scenarios for achieving that. The most compelling is reverse-engineering the human brain. We're already well down that path, with techniques like MRI. But we'll do better because the speed and resolution -- the bandwidth -- with which we can scan the brain are also accelerating exponentially.
One means of scanning the brain would be to send small scanners in the form of nanobots into the blood stream. Millions of them would go through every capillary of the brain. We already have electronic means for scanning neurons and neurotransmitter concentrations that are nearby, and within 30 years we'll have these little nanobots that can communicate with each other wirelessly. They would create an enormous database with every neuron, every synaptic connection, every neurotransmitter concentration -- a precise map of the human brain.
So we'll have the templates for human intelligence, and by then we'll have the hardware that can run these processes. So we can reinstate that information in a neural computer.
Once we can embody human thought processes in a nonbiological medium, it will necessarily soar past human intelligence -- for several reasons. First, machines can share their knowledge electronically. With humans, you spend years teaching language to each child. [But] once any one machine has mastered something, it can share that knowledge instantly with millions of other machines over the global wireless Web, which we'll have by then. So a machine can become expert at any number of disciplines.
Secondly, machines are far faster. Electronic circuits are 10 million times faster than neural connections, and machine memories can be far larger and much more accurate. However, machines do not yet have the depth of pattern recognition or the subtlety of human intelligence. They can't deal with emotions and humor and other subtle qualities of human intelligence.
Once their complexity matches that of humans and they are able to master the skills at which humans now excel, and those abilities are combined with the ways in which machines are already superior -- that will be a very formidable combination. It'll get to the point where the next generation of technology can only be designed by the machines themselves.
Finally, while the complexity of the biological computational circuitry in humans is essentially fixed, the density of machine circuitry will continue to grow exponentially. By 2030, a $1,000 computer system will have the power of 1,000 human brains; by 2050, 1 billion human brains.
Q: Won't we end up feeling like pets?
A: Those same nanobots that can scan the human brain will also provide a type of neural implant to extend human intelligence -- expand your memory and improve your pattern-recognition capabilities. Ultimately they will augment human intelligence quite profoundly as we go through the 21st century.
We are doing this today, after a fashion. We now have neural implants for Parkinson's disease patients that actually reprogram their neural cells. The implants literally turn off the symptoms of Parkinson's as soon as you throw a switch. It's very dramatic. These patients are wheeled in, their bodies frozen. Then a switch is thrown to activate the neural implants, and the patients suddenly come alive -- their symptoms are suppressed by the implant.
With microscopic nanobots, we'll be able to send millions or billions [of them] into your brain. They would take up key positions inside our brains and detect what's going on in our brains. They would be communicating with each other, via a wireless local-area network, which would be linked to the wireless Web and intelligent machines, and they could cause particular neurons to fire, or suppress them.
This will enable us to artificially boost human intelligence dramatically. Ultimately, the majority of thinking will be done in the nonbiological parts of our brains.
Q: If nanobots are sitting inside our heads and controlling the brain, how will we know they're not fooling us with false signals?
A: Well, actually, another thing we could do with this would be virtual reality. If we had nanobots take up positions by every nerve fiber that comes from all of our five senses, they could either sit there and do nothing, in which case you'd perceive the world normally -- or they could shut off the nerve impulses coming from our real senses and replace them with simulated nerve impulses representing what you would perceive if you were in the virtual environment.
Q: So we wouldn't be able to tell the difference at all between the real world and a simulated world?
A: Right. It would be as if you were really in that virtual environment. If you decided to walk, the nanobots would intercept the signals to your real legs and send back all the sensory signals of walking -- from the changing tactile pressure on your feet to the air moving across your hands as you swing your arms. It would be just as high-resolution and just as compelling as real reality. You could actually go there and meet other real people. So you and I, instead of being on the telephone, could be meeting on a Mozambique game preserve, and we'd both feel the warm breeze on our faces and hear the animal sounds in the background.
Eventually, anything you can do in real reality -- business meetings, social events, sex, sports -- could be done in virtual reality. As the technology gets perfected, we'll be spending more and more time in virtual reality, because it'll be more and more compelling. Going to Web sites will mean going to a virtual reality environment. Some will emulate real environments, so you'll visit the Web to go skiing in the Alps or to take a walk on a beach in Tahiti. Others will be fantastic environments that don't exist, or couldn't exist, in the real world.
Q: Let's go back to machines that design new machines. Doesn't that open the potential for them to evolve a nonhuman intelligence -- utterly different ways of thinking?
A: Sure. Once we have intelligent systems in a nonbiological medium, they're going to have their own ideas, their own agendas. They'll evolve off in completely unpredictable directions. Instead of being derived only from human civilization, new concepts will also be derived from their electronic civilization. But I see this as part of evolution -- a continuation of the natural progression.
Q: But couldn't it pose a threat to the human race?
A: I don't see an invasion of alien machines coming over the horizon. They'll be emerging from within our human-machine civilization. We're already quite intimate with our technology. If all the computers stopped today, essentially everything would grind to a halt. That was not true just 30 years ago. At that point only a few scientists and government bureaucrats would have been frustrated by the delay in getting printouts from their punch-card machines.
Today we've become highly dependent on computer intelligence. It's already embedded in our decision-making software much more than most people realize. That's going to continue to accelerate.
Next, we're going to be putting these machines into our bodies and into our brains. So it's not going to be humans on one side and machines on the other. There's not going to be a clear distinction between humans and machines. We'll be using nanobots to expand human intelligence, and over time, the bulk of our thinking will be done in the nonbiological parts of our brains, because that part of our brain will continue to grow as technology advances. But the biological part is not growing.
Q: There won't be a clear distinction between us and them?
A: No. Ultimately, you're going to have nonbiological entities that are exact copies of biological brains. They will claim to be human, because they will have all the memories of the original brain. So there won't be a clear distinction between what's human and what's not.
But remember, this will be emerging gradually from within our own civilization. It's the next phase of our own evolution. It's only a threat if you believe things should always stay the same as they are today.
That's not to say there aren't any dangers. An obvious one is uncontrolled growth of these nonbiological entities in your body -- nonbiological cancer.
second one was a bit tl;dr but the first one is a sentiment i agree with wholeheartedly.
except, probably, like always in biology, "things are in fact a bit more complicated than that", they always are, always will be.
as long as by the time i'm old and close to death they can download my consciousness into a bigass computer and make me live forever, i'm cool with it.
if it's going to take longer than that to perfect this technorogy, then it is evil.
I like the first one, very Blade Runner/Do Androids Dream of Electric Sheep?.
The 2nd one is all a bit too "where's my flying cars"
Given that most new technologies are immediately harnessed to provide access to naughty pictures, I reckon the technology would very quickly bring about an exciting age of custom virtual shagging-buddies, world-wide better-than-reality gangbangs, and all sorts of invigorating filth in much the same way as the Holodeck would probably not be used to recreate Sherlock Holmes mysteries but instead allow the Enterprise's crewmembers to collapse from sexual exhaustion in Virtual Tokyo Brothels. It's just human nature.
And what would be the legal status of non-biological intelligences, particularly those who believed themselves to be the human beings from whose brains they were copied? And would they be, you know, "fully functional?" It's all a bit gross, I love it... :D
Quote from: BumWurst on July 02, 2007, 01:26:42 PM
Given that most new technologies are immediately harnessed to provide access to naughty pictures, I reckon the technology would very quickly bring about an exciting age of custom virtual shagging-buddies, world-wide better-than-reality gangbangs, and all sorts of invigorating filth in much the same way as the Holodeck would probably not be used to recreate Sherlock Holmes mysteries but instead allow the Enterprise's crewmembers to collapse from sexual exhaustion in Virtual Tokyo Brothels. It's just human nature.
And what would be the legal status of non-biological intelligences, particularly those who believed themselves to be the human beings from whose brains they were copied? And would they be, you know, "fully functional?" It's all a bit gross, I love it... :D
Yet once again, the erect penis points the way of progress
if evolution were to have a direction (which it doesn't), it would be that way. :D
I like how philosophers can say that they don't know how something works in a way that makes them sound profound. I'm surprised that the word emergence was not mentioned.
Regret,
Feels ashamed that he still hasn't seen bladerunner
Yeah, wtf. I would have said emergence so much, someone would be wishing they copyrighted the term.
Hey, if anyone gets their consciousness downloaded into a soup-er computer, can they put my memories and experiences into a separate file? I don't feel like being conscious for a zillion years (I might become a victim of philosophy), but the idea of my slightly twisted mind being able to infect humanity long after I kick the bucket is a pretty cool idea.
Also, just yesterday I picked up the Scientific American Reports issue on exactly this topic :tinfoilhat:
What happens if they download your consciousness into two or three identical clones made from your original DNA 10,000 years in the future? Or even unidentical clones? How about robots? What happens if you upload your consciousness to a filesharing site and hundreds of people copy it?
I would just like to state that I *hate* Kurzweil. Yes, he may be right about the post-singularity, but the entire point of the singularity is that we don't know enough to have any way to accurately predict what's going to happen yet. And even then, he makes *massive* assumptions about our ability to maintain the current rate of acceleration.
I don't know exactly how I would feel about having my consciousness installed in a computer... I think it might be neat, if most of my personal memories were stored in a separate database so they wouldn't distract me. I'd love to have several centuries in which to do research and write, but I'm afraid I would miss my body and my family if those memories were too accessible. On the other hand, I suspect that by the time I've lived a full life I might be ready to just be pure brain. In some ways it would be weird just being a copy of myself, and I imagine it would be extra strange if those copies existed in multiple locations, having different experiences and developing in separate ways. If I met myself on the web, would I like me or hate me?
Quote from: Nigel on March 26, 2008, 03:10:09 AM
I don't know exactly how I would feel about having my consciousness installed in a computer... I think it might be neat, if most of my personal memories were stored in a separate database so they wouldn't distract me. I'd love to have several centuries in which to do research and write, but I'm afraid I would miss my body and my family if those memories were too accessible. On the other hand, I suspect that by the time I've lived a full life I might be ready to just be pure brain. In some ways it would be wierd just being a copy of myself, and I imagine it would be extra strange if those copies existed in multiple locations, having different experiences and developing in separate ways. If I met myself on the web, would I like me or hate me?
You'd make out with yourself, and you know it.
Inasmuch as digital personas can make out.
1. Buy a LOT of computers good enough to run your brain. Or rent them from Google, whatever.
2. Install yourself on all of them.
3. Have each one conduct research simultaneously. (We're talking, "I read the internet" level of massive research.)
Oh, and they're all networked, including the original (you.) So you are now better read than, say, everyone before the singularity put together.
Quote from: Cainad on March 26, 2008, 03:28:27 AM
Quote from: Nigel on March 26, 2008, 03:10:09 AM
I don't know exactly how I would feel about having my consciousness installed in a computer... I think it might be neat, if most of my personal memories were stored in a separate database so they wouldn't distract me. I'd love to have several centuries in which to do research and write, but I'm afraid I would miss my body and my family if those memories were too accessible. On the other hand, I suspect that by the time I've lived a full life I might be ready to just be pure brain. In some ways it would be wierd just being a copy of myself, and I imagine it would be extra strange if those copies existed in multiple locations, having different experiences and developing in separate ways. If I met myself on the web, would I like me or hate me?
You'd make out with yourself, and you know it.
Inasmuch as digital personas can make out.
Truth.
Quote from: Golden Applesauce on March 26, 2008, 04:56:32 AM
1. Buy a LOT of computers good enough to run your brain. Or rent them from Google, whatever.
2. Install yourself on all of them.
3. Have each one conduct research simultaneously. (We're talking, "I read the internet" level of massive research.)
Oh, and they're all networked, including the original (you.) So you are now better read than, say, everyone before the singularity put together.
I just came.
I was just having a conversation last week with my roommate about how nanomachines will be the next great technological revolution. Aside from the mental effects in that second article, the medical benefits of having tiny intelligent repair bots in your body mean that within our lifetime we can expect to see the end to aging. Luckily, that same technology concurrently provides the end to poverty, hunger, obesity, ugliness, and the advent of nanoterrorism. It's going to be a crazy couple of decades coming up here.
do i want to explain that nanotech would probably be nothing more than engineered enzymes? (i.e. very expensive catalysts for very few new useful reactions)
naaah don't feel like it today, please, continue to enjoy your illusions.
Quote from: Regret on May 17, 2008, 02:56:03 AM
do i want to explain that nanotech would probably be nothing more then engineered enzymes? (i.e. very expensive catalysts for very few new usefull reactions)
naaah don't feel like it today, please, continue to enjoy your illusions.
At first, sure, but we're talking about the natural future progression of nanotech. As circuits get smaller and we start to develop technologies like nanotubes and graphene circuits which are only one atom thick, we'll have the eventual ability to pack massive computing power into a molecule-sized machine. It's not at all an illusion. This guy knows what he's talking about, and he has a track record of being right in his predictions about which technologies will be available to us, and when -- an impressive (http://en.wikipedia.org/wiki/Ray_Kurzweil#Accuracy_of_predictions) track record.
regret, please explain it another time when you do feel like it, then. i'd be interested, it sounds more plausible than the scifi stuff i keep hearing about.
and padre pataphoros, reading that bit on wikipedia ("may contain original research or unverified claims", but okay), i get more of the feeling that he was right about some very specific things about future tech developments, cause statements like "that many documents would exist solely on computers and on the Internet by the end of the 1990s, and that they would commonly be embedded with animations, sounds and videos" sound to me like part of a much larger description, which is probably much less spot-on (seeing that the internet and computers have evolved to be so much more than just that).
what i mean is, as long as that description doesn't give an account of how many times Kurzweil was wrong about his predictions compared to when he was (arguably, in some cases) right, it doesn't tell us much about the "impressiveness" of his track record, as opposed to any other creative, prolific day-dreamer.
don't get me wrong, there's nothing wrong with that, seeing that he's putting his creative effort into an online OCR reading system for the blind, which is commendable IMO. it's just that, and we discussed this a while before (do a forum search for "black swan" and "taleb" to get most of it, it was before you joined AFAIK), making accurate predictions about the future is pretty much impossible to do, except by being right by accident. given that, there are always people whose track record appears impressive (by sheer chance, even), unfortunately the accuracy of their track record historically appears to give no claim to the accuracy of their future predictions.
I'm a big believer in "If we can dream it, we can build it". I have read The Age of Spiritual Machines and I will say that if anyone has a good chance of predicting the course of technological evolution it is this guy.
And if you think about it, following the progression of scientific advances should be a lot easier than predicting many other things. There's a clear line of cause and effect stretching from the beginning of scientific research. We know all of the ingredients necessary to make the pie, now it's just a matter of getting our hands on them in a cheap and efficient way. All we need is time.
A lot of people argue that there are limits -- like using silicon based chips. There's only so thin you can make a silicon chip before it bleeds current all over the place. We know there's a hard limit to how small we can make that chip. But we're already aware of that limitation and we're working on alternatives, several of which show promise. If there's one thing we've shown over the course of history, it's that limitations are only temporary setbacks. There's always another horizon.
I know a lot of what I'm saying here is a far cry from any sort of reasoned defense of my assertion that this guy knows what he's talking about, but I'm one of those pesky dreamers. It doesn't matter whether or not he's right, technology will march ever onward. If it doesn't happen in 30 years it will happen in 90. Or 400.
no Padre Pata, that's not what i was trying to argue.
sure i believe that technology will give us major leaps "forward" in whatever. it seems to be only increasing.
all i'm saying is that it's impossible to accurately predict what shape and impact these innovations will have.
yesterday, Cain posted a wonderful summary of the Black Swan, which will hopefully clarify what i'm trying to say:
http://www.principiadiscordia.com/forum/index.php?topic=16370.0
Quote from: BumWurst on July 02, 2007, 01:26:42 PM
Given that most new technologies are immediately harnessed to provide access to naughty pictures, I reckon the technology would very quickly bring about an exciting age of custom virtual shagging-buddies, world-wide better-than-reality gangbangs, and all sorts of invigorating filth in much the same way as the Holodeck would probably not be used to recreate Sherlock Holmes mysteries but instead allow the Enterprise's crewmembers to collapse from sexual exhaustion in Virtual Tokyo Brothels. It's just human nature.
And what would be the legal status of non-biological intelligences, particularly those who believed themselves to be the human beings from whose brains they were copied? And would they be, you know, "fully functional?" It's all a bit gross, I love it... :D
i have so much more to say about all this, but about the legal status: There won't be much we can do to stop them from establishing "rights". Assuming "we" represents the portion of the population that is not already nonbiological by this time. Personally, I plan on being one of "them", if at all possible (shouldn't be too difficult to stay in the know, considering my field).
Also, another quick note: as far as the "copies" go... I think it's more likely that the procedures by which your neurophysiological patterns/structures are "copied" will be non-invasive and much more fluid than just lying down at a table, waking up, and suddenly you have a nonbiological doppelganger... Neural pathways could be reconstructed with nanotubes or similar technology while you are fully conscious. You wouldn't have to be copied to some place "over there"... I'm much more interested in having my biological portions replaced "in house". Making adjustments at a subcellular level using nanomachinery, you'd most likely start noticing something different during the procedure, but it doesn't appear there'd be much clinical danger, nor any necessary doppelgangers involved. I don't see anything desirable about creating doppelgangers, but I WOULD love to augment my cognitive capabilities, etc.
Anyway, thanks for posting this Cram... I think it's all very interesting, and I think there's really something to these exponential trends.
Also, for you Kurzweil bashers (not just ITT), if you check out "Singularity Is Near", you'll find quite a few other people involved with these so-called "wild-eyed speculative" predictions. Kurzweil's projections for technological developments are often more conservative than, say, IBM's or Intel's, etc. I'll have to find some of Intel's figures...
TRIP: Kurzweil admits that beyond the singularity, everything is PURE speculation... I'm sure he just enjoys it. I do. So do the creators of GITS and Serial Experiments: Lain.
Quote from: triple zero on May 17, 2008, 05:37:25 PM
don't get me wrong, there's nothing wrong with that, seeing that he's putting his creative effort into an online OCR reading system for the blind, which is commendable IMO. it's just that, and we discussed this a while before (do a forum search for "black swan" and "taleb" to get most of it, it was before you joined AFAIK), making accurate predictions about the future is pretty much impossible to do, except by being right by accident. given that, there are always people whose track record appears impressive (by sheer chance, even), unfortunately the accuracy of their track record historically appears to gives no claim to the accuracy of their future predictions.
I don't think it's THAT hard to predict. That would be our point of disagreement. I could be wrong, of course, but educated guesses are VERY useful, if they are educated enough.
Also, 20 years ago, accurate predictions about the technology of today would look VERY scifi (EDIT: and Kurzweil's did). Need we be specific? Neural implants for Parkinsons, Exoskeletons, cochlear implants... obviously this list can get REALLY long, but no more time, gotta clock out and go home. lol.
I want a Beowulf Cluster of Me!!!
Quote from: Dr. Pataphoros, SpD on May 17, 2008, 05:22:02 PM
At first, sure, but we're talking about the natural future progression of nanotech. As circuits get smaller and we start to develop technologies like nanotubes and graphene circuits which are only one atom thick we'll have the eventual ability to pack massive computing power in a molecule-sized machine. It's not at all an illusion. This guy knows what he's talking about, and he has a track record of being right in his predictions about when what technologies will be available to us--an impressive (http://en.wikipedia.org/wiki/Ray_Kurzweil#Accuracy_of_predictions) track record.
alrighty, more of my view on nanotech. (warning! just brainstorming here, don't take my word for anything)
You have a good point on the computing power, but to get from the output of a nanocomputer to actually doing something, you will always need something in the same size range as natural enzymes to have any effect on the patient.
These enzymes also need to be present in the right concentration, not react with the wrong substrate, be produced by (other?) nanomachines <1>, and be biodegradable by the host (otherwise you'll have major problems with the kidneys and lymph nodes), and all reactions will still have to abide by the laws of thermodynamics.
( http://en.wikipedia.org/wiki/Catalysis#Catalysts_and_reaction_energetics )
so the range of possible reactions will not be that much greater than what is already possible now within your own cells.
<1> The way to solve this is obviously a von Neumann-ish molecule, but you need to make one that, besides copying itself, also performs a myriad of other reactions (these would be the desired effects of the nanomachine).
Every one of these reactions would require a new site of action, which would make the nanomachine big enough to be unable to get through the cell wall (getting through is necessary for the nanomachine to reach all the places it needs to be).
Of course there is a solution to this (put the proper receptors to induce both endocytosis and exocytosis on the nanomachine (and add a way to regulate which are active)), but that would make the machine even bigger.
Another approach would be to make a nanomachine that is capable of adapting its shape to what is needed.
For this it would need to stimulate the production of the right enzymes in the cells of the host, which would have lots of fun (for us) and nasty (for the host/patient) side-effects, especially on the functioning of the cell. (think reduced efficiency of degradation of free oxygen radicals (why don't the papers ever mention the oxygen?))
To stimulate the cell the machine would need the ability to create mRNA of all shapes and sizes (controlled of course, don't underestimate the size of control mechanisms with all the positive and negative feedbacks; remember, at this scale you're working with mechanics (electrons are just another cog type and hydraulics work badly when there are only 6 water molecules near (no pressure, see, only concentration (okay, enough parentheses))))
And ehmmmm my mind just went blank... sorry, guess i wont be finishing that train of thought :(
nanotech IS interesting but i think you should look for the materials that come out of it instead of mini-machines.
Check this out for example:
http://www.aip.org/tip/INPHFA/vol-10/iss-4/p16.html
I'm not putting in a very structured story and for that i'm sorry but my head hurts now and i don't want to reread/rewrite. I hope i have given some clarification for the problems concerning 'intelligent' machines living in our cells. I think the best we can hope for is making something about the size of bacterial cells if we want it to be adaptable to changing surroundings.
Quote from: daruko on May 19, 2008, 10:37:28 PM
Quote from: triple zero on May 17, 2008, 05:37:25 PM
don't get me wrong, there's nothing wrong with that, seeing that he's putting his creative effort into an online OCR reading system for the blind, which is commendable IMO. it's just that, and we discussed this a while before (do a forum search for "black swan" and "taleb" to get most of it, it was before you joined AFAIK), making accurate predictions about the future is pretty much impossible to do, except by being right by accident. given that, there are always people whose track record appears impressive (by sheer chance, even), unfortunately the accuracy of their track record historically appears to gives no claim to the accuracy of their future predictions.
I don't think it's THAT hard to predict. That would be our point of disagreement. I could be wrong, of course, but educated guesses are VERY useful, if they are educated enough.
Also, 20 years ago, accurate predictions about the technology of today would look VERY scifi (EDIT: and Kurzweil's did). Need we be specific? Neural implants for Parkinsons, Exoskeletons, cochlear implants... obviously this list can get REALLY long, but no more time, gotta clock out and go home. lol.
:fap:
Comments on Kurzweil's predictions.
Focusing on the Age of Spiritual Machines, published in 1999, quotes are from wikipedia.
2019
Quote: A $1,000 personal computer has as much raw power as the human brain.
This one is 11 years away, so I have a bit more perspective about it, but let's do some bad math.
By 2019, with optimal progression, a system will have about a terabyte of memory -- common for a modern supercomputer, and the more exotic data center solutions sometimes come close. (All hail the memory-eating ability of Java.) For lower progression (doubling every 2 years, and more likely), it would be 64 gigabytes. Impressive, but still a couple orders of magnitude behind what IBM will sell you right now for 7 or 8 figures.
Unless we're already to the point that a supercomputer can match the brain, I think Kurzweil is off by about 10 to 20 years, though we may see the first supercomputers able to crunch the English language around then.
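If you want to redo the bad math yourself, here's the doubling arithmetic in a few lines of Python (the 2008 baseline and the doubling periods are my assumptions, so treat the output as ballpark only):

```python
# Ballpark growth factors between 2008 and 2019 under two doubling rates.
years = 2019 - 2008  # 11 years

for label, months_per_doubling in [("optimal progression (18 months)", 18),
                                   ("slower progression (24 months)", 24)]:
    doublings = years * 12 / months_per_doubling
    print(f"{label}: {doublings:.1f} doublings, ~{2 ** doublings:.0f}x growth")

# optimal progression (18 months): 7.3 doublings, ~161x growth
# slower progression (24 months): 5.5 doublings, ~45x growth
#
# Multiply by the few GB a $1,000 box ships with in 2008 and you land in the
# same ballpark as above: several hundred GB to ~1 TB on the fast curve,
# tens to a couple hundred GB on the slow one.
```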
Quote: People experience 3-D virtual reality through glasses and contact lenses that beam images directly to their retinas (retinal display). Coupled with an auditory source (headphones), users can remotely communicate with other people and access the Internet.
He's relying on a black swan to occur for the method of the glasses, but video glasses will likely be here soon (their clunky older brother is on sale at the grocery store right now for that matter). Hard to say if they will take off, especially given the lack of interest in economical solutions for fancy toys (actually, Japan would love these things...).
Quote: Cables connecting computers and peripherals have almost completely disappeared
Kurzweil has clearly never worked with bluetooth. If I am very very lucky, I will not have to again.
Quote: Computers have made paper books and documents almost completely obsolete.
No, for two reasons: the libraries aren't going anywhere, so a lot of books will still be in paper, and the book publishers seem to be trying to sabotage it. These are political, not technological, problems though. Kurzweil seems to underestimate the potential damage of the politicos and corporate greed in general.
Quote: People communicate with their computers via two-way speech and gestures instead of with keyboards. Furthermore, most of this interaction occurs through computerized assistants with different personalities that the user can select or customize. Dealing with computers thus becomes more and more like dealing with a human being.
You can have my keyboard when you take it from my cold dead cybernetic hands. Or when you get me a neural interface that supports ssh, either way.
Quote: Prototype personal flying vehicles using microflaps exist. They are also primarily computer-controlled.
Did he just promise me a flying car?
Quote: Effective language technologies (natural language processing, speech recognition, speech synthesis)
Speech synthesis has been here for a while, and some of the screen readers the blind use are fine-tuned enough to have accents. Recognition is the same crap it was in '99, though you don't have to spend an hour for each new user either. I don't think we have the first clue how natural language processing works yet, though 11 years may be enough.
I'll do some of the later predictions tomorrow.
Quote from: Requiem
Unless we're already to the point that a supercomputer can match the brain, I think Kurzweil is off by about 10 to 20 years, though we may see the first supercomputers able to crunch the English language around then.
teraFLOPS (tera FLoating point OPerations per Second): One trillion floating point operations per second. IBM's BlueGene/L supercomputer, designed for computational science at Lawrence Livermore National Laboratory, was upgraded in 2007 from 65,536 to 106,496 processing nodes, where each added node had twice the memory of the old. The result for BlueGene/L: a peak speed of 596 teraFLOPS.
Human TeraFLOPS
It has been said that the human brain processes 100 teraFLOPS; however, I've also read figures up to 10 petaflops... there are some excellent figures in Singularity is Near, but I'm too lazy to look it up right now.
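Putting those numbers side by side (taking the quoted brain estimates at face value, which is a big "if"):

```python
# Side-by-side FLOPS comparison, taking the quoted estimates at face value.
TERA, PETA = 1e12, 1e15

bluegene_l = 596 * TERA   # BlueGene/L peak after the 2007 upgrade
brain_low = 100 * TERA    # low-end brain estimate quoted above
brain_high = 10 * PETA    # high-end brain estimate quoted above

print(f"BlueGene/L vs 100 TFLOPS brain: {bluegene_l / brain_low:.1f}x the brain")
print(f"BlueGene/L vs 10 PFLOPS brain:  {bluegene_l / brain_high:.2f}x the brain")

# ~6x the brain on the low estimate, ~0.06x (a factor of ~17 short) on the high
# one -- so whether a 2007 supercomputer "matches the brain" depends entirely on
# which estimate you believe, and a $1,000 desktop is orders of magnitude behind
# either way.
```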
On that note:
NASA looks for 10 petaflops with new computer
Quote from: Sharon Gaudin, Computerworld (US)
SGI and Intel are teaming up to build a supercomputer for NASA that they expect will pass the petaflop barrier next year and hit 10 petaflops by 2012. A petaflop is 1,000 trillion calculations per second.
Techs from SGI, a maker of high-performance computers, will begin installing the new supercomputer on 21 May and are expected to have it fully assembled in July. The machine, running quad-core Intel Xeon processors with a total of 20,480-cores, should initially hit 245 teraflops or 245 trillion operations per second.
The machine will be installed at NASA's Advanced Supercomputing facility at the Ames Research Center at the Moffett Federal Airfield in California.
Bill Thigpen, engineering branch chief at NASA, said they need the extra computing power to get astronauts back into space on an entirely new rocket.
"We're designing our next-generation rocket for getting to the moon and then eventually to Mars," said Bill Thigpen, engineering branch chief at NASA. "They're retiring the shuttle and the president has said he wants us to go to the moon. There's a lot to work on."
Aside from designing a new rocket, Thigpen said they plan to use the new supercomputer to model the ocean, study global warming and build the next-generation engine and aircraft. "It's really important to look at what decisions government can make to make things better in the future," he added.
Quote from: Requiem
He's relying on a black swan to occur for the method of the glasses, but video glasses will likely be here soon (their clunky older brother is on sale at the grocery store right now for that matter). Hard to say if they will take off, especially given the lack of interest in economical solutions for fancy toys (actually, Japan would love these things...).
I see it taking off soon. I've read a lot of research in this area, but too lazy to post any of it for now. I will say that bypassing the visual data from your optic nerve and simulating virtual reality to your brain is right around the corner, but we've got some work to do with calculating the physics for fully convincing virtual environments. Computational Physicists are working on it. I'd bet there are many sources from the private sector looking into pushing visually convincing immersive entertainment, beyond what I've read.
Quote from: Requiem
Kurzweil has clearly never worked with bluetooth. If I am very very lucky, I will not have to again.
fuck bluetooth... the coolest thing about bluetooth i've seen is that it's compatible with cochlear implants. I worked with a deaf guy recently who wasn't deaf anymore because of his implant. He could call someone and talk for hours on his bluetooth.
Quote from: Requiem
No, two reasons, the libraries aren't going anywhere so there's a lot of books that will still be in paper, and the book publishers seem to be trying to sabotage it, these are political, not technological problems though. Kurzweil seems to underestimate the potential damage of the politicos and corporate greed in general.
I speculate that once full visual-auditory virtual environments hit the stores, and we start (we've already started) augmenting real environments with virtual ones, there won't be much need for paper media, because we can digitally experience it on paper if we so choose. It would still take a while for paper to disappear.. probably a good while, but I don't remember reading Kurzweil stating 2019: No Paper. On the latter point, Kurzweil is certainly an optimist when it comes to politics. Still... he could be right. We'll have to see.
Quote from: Requiem
You can have my keyboard when you take it from my cold dead cybernetic hands. Or when you get me a neural interface that supports ssh, either way.
See the Emotiv headset (neural interface coming up), Microsoft's and Apple's new motion- and depth-perceptive cameras, and AT&T and Bell Labs synthetic voice technologies for a start. That's just the tip of the iceberg. Japan is way ahead in this area of the market.
Quote from: Requiem
Speech synthesis has been here for a while, and some of the screen readers the blind use are fine tuned enough to have accents. Recognition is the same crap it was in 99, though you don't have to spend an hour for each new user either. I don't think we have the first clue how natural language processing works yet though 11 years may be enough.
I think you'll be very very surprised.
Flops isn't really the limit of brain-equivalent power; in fact, I think the brain probably caps out at a couple of flops on average, it's a kind of calculation we are very very bad at doing. The brain's big computational advantage is in its size (on the order of magnitude of 1 trillion neurons, each one capable of storing a fairly substantial amount of data); flops is only important with respect to our ability to process the information in a reasonable time frame (frankly, a computer that takes an hour to do the same task as a human does in a second is still a human-intelligent computer, the problem is doing the tasks at all). Incidentally, IBM now sells a setup that runs at a full petaflop, and I think Sun is claiming 2 petaflops, though so far nobody has bought the full rigs.
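A crude way to put numbers on that size-not-flops point, using the trillion-neuron figure above plus an assumed synapse count and per-synapse storage (both of those are just illustrative assumptions):

```python
# Crude storage estimate for "the brain's advantage is size, not speed".
# The synapse count and bytes-per-synapse are assumptions for illustration only.
neurons = 1e12             # the order-of-magnitude figure used above
synapses_per_neuron = 1e3  # assumed average connectivity
bytes_per_synapse = 4      # assume one 32-bit weight per connection

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"~{total_bytes / 1e15:.0f} petabytes just to hold one weight per synapse")
# ~4 petabytes of state, i.e. far more than any 2008 machine's memory,
# even though each individual "operation" in the brain is glacially slow.
```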
Quote: See Emotiv headset (neural interface coming up), Microsoft and Apple's new camera motion-depth-perceptive cameras, and ATT and Bell Labs synthetic voice techmologies for a start. That's just the tip of the iceberg. Japan is way ahead in this area of the market.
Oh, I have no doubt they will exist, especially for mobile computing purposes, I just don't think they will be able to replace a keyboard. Voice is lousy in a cubicle environment, no matter how good the technology is. And gestures, while perhaps good for replacing a mouse, can't do words. I also fail to see how you can have an interface that lacks a keyboard and can still handle punctuation and symbols without being tedious. Though that last one will likely be a problem more for me and my fellow console junkies than for the general population.
As for the book thing... right now when you buy a digital book, it works on *one* book reader, DRM prevents you from backing the book up properly, so system failure means you lose your entire library, and most books are not available on all readers. And until the political considerations are dealt with, paper books will have to stick around just so that libraries can lend them.
Quote from: daruko on May 25, 2008, 01:46:10 PM
Quote from: Requiem
He's relying on a black swan to occur for the method of the glasses, but video glasses will likely be here soon (their clunky older brother is on sale at the grocery store right now for that matter). Hard to say if they will take off, especially given the lack of interest in economical solutions for fancy toys (actually, Japan would love these things...).
I see it taking off soon. I've read a lot of research in this area, but too lazy to post any of it for now. I will say that bypassing the visual data from your optical nerve and simulating virtual reality to your brain is right around the corner, but we've got some work to do with calculating the physics for fully convincing virtual environments. Computational Physicists are working on it. I'd bet there are many sources from the private sector looking into pushing visually convincing immersive entertainment, beyond what I've read.
too bad my previous post timed out, short version:
- VR technology has been around for decades
- i have worked with it at the university, coded a couple of apps for it, and trust me, it's not nearly as cool as it sounds
- ability to project images onto glasses (or contacts) is only a very small part of the story. motion tracking sensors for the head (and in case of contacts, the eyes as well) are much more important to create a proper immersive VR environment
- the retina is a 2D surface of high bandwidth information input. the perception of "depth" is in fact an optical illusion to compensate for the discrepancy between the actual world (3D) and the retina surface (2D). with computers you can transmit the information in 2D from the beginning, no need to introduce an extra discrepancy as a roadblock by going from 3D to a 2D illusion etc. except for games, where you want to use this limitation for gameplay.
Quote:
Quote from: Requiem
Kurzweil has clearly never worked with bluetooth. If I am very very lucky, I will not have to again.
fuck bluetooth... the coolest thing about bluetooth i've seen is that it's compatible with cochlear implants. I worked with a deaf guy recently who wasn't deaf anymore because of his implant. He could call someone and talk for hours on his bluetooth.
well that's very nice but it has nothing to do with Kurzweil's prediction that wireless would *replace* cables, now does it?
the point is that both cables and wireless have their distinct advantages and disadvantages, creating a useful place for both. not realizing this and claiming that one will replace the other is a pretty big lack of foresight.
Quote:
Quote from: Requiem
No, two reasons, the libraries aren't going anywhere so there's a lot of books that will still be in paper, and the book publishers seem to be trying to sabotage it, these are political, not technological problems though. Kurzweil seems to underestimate the potential damage of the politicos and corporate greed in general.
I speculate that once full visual-auditory virtual environments hit the stores, and we start (we've already started) augmenting real environments with virtual ones, there won't be much need for paper media, because we can digitally experience it on paper if we so choose. It would still take a while for paper to disappear.. probably a good while, but I don't remember reading Kurzweil stating 2019: No Paper. On the latter point, Kurzweil is certainly an optimist when it comes to politics. Still... he could be right. We'll have to see.
here, the point is black swans.
the prediction fails because of politics not technology? does that make the prediction anything less inaccurate? no.
this is one major source of black swans in a lot of predictive situations, claiming that your predictive skills are not any worse because the reason why your predictions failed came from outside of your domain.
that's very nice, but a prediction that didn't come true is just as useless, regardless of the reason why it failed. i'm not really interested in the reason, anyway. if you claim him to be so good, i'm interested in the accuracy.
Quote:
Quote from: Requiem
You can have my keyboard when you take it from my cold dead cybernetic hands. Or when you get me a neural interface that supports ssh, either way.
See Emotiv headset (neural interface coming up), Microsoft and Apple's new camera motion-depth-perceptive cameras, and ATT and Bell Labs synthetic voice techmologies for a start. That's just the tip of the iceberg. Japan is way ahead in this area of the market.
except that nobody is actually going to use it for data input.
the emotiv thing simply doesn't have a bandwidth comparable to the speed with which people can enter data into a computer via the keyboard.
also, consider how much resistance even the simple switch from QWERTY to Dvorak keyboard layouts is meeting. Dvorak has a clear, objective, measurable advantage over QWERTY, even health benefits (much less RSI), but nobody except hardcore nerds wants to make the switch.
no matter how cool hand gesture thingies are going to be, the keyboard is definitely going to be around for a while.
and the Japanese just like gizmos; they're not actually predictive of what's going to happen. they just like gizmos, so they have a lot of gizmos. that doesn't mean that every single one of the gizmos Japan is "ahead" on is something we're going to have here.
look back at the gizmos the Japanese have been walking around with, and how many of those have completely failed to even appear on the radar in the West.
Quote:
Quote from: Requiem
Speech synthesis has been here for a while, and some of the screen readers the blind use are fine tuned enough to have accents. Recognition is the same crap it was in 99, though you don't have to spend an hour for each new user either. I don't think we have the first clue how natural language processing works yet though 11 years may be enough.
I think you'll be very very surprised.
that's one thing i can agree with.
natural language processing is, afaik, really far along. all the building blocks are there, someone just needs to glue them together in the right way.
i don't understand why this is not yet happening.
i suppose i'm missing some crucial step here, that we're not able to solve yet:
sound > phonemes > words > syntax > meaning
something like that, there's problems at every step, but most of them have been solved to reasonable accuracy, especially if you take into account domain knowledge and feedback between the networks to resolve ambiguities.
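to make the "glue them together" idea concrete, here's a toy sketch of that pipeline in python. every stage is a stub with made-up hypotheses; the real work is in each model and in feeding ambiguity scores back and forth between the stages:

```python
# toy sketch of: sound > phonemes > words > syntax > meaning
# every stage here is a placeholder returning scored hypotheses.

def acoustic_model(samples):
    """audio samples -> candidate phoneme sequences with scores"""
    return [(["h", "eh", "l", "ow"], 0.8), (["y", "eh", "l", "ow"], 0.2)]

def lexicon(phoneme_hyps):
    """phoneme sequences -> candidate word sequences with scores"""
    return [(["hello"], 0.8), (["yellow"], 0.2)]

def parser(word_hyps):
    """word sequences -> parse trees, dropping syntactically impossible ones"""
    return [(("interjection", "hello"), 0.8)]

def semantics(parses, domain_knowledge=None):
    """parse trees (+ domain knowledge) -> most plausible meaning"""
    best, _score = max(parses, key=lambda p: p[1])
    return {"speech_act": "greeting", "parse": best}

def understand(samples):
    return semantics(parser(lexicon(acoustic_model(samples))))

print(understand([0.0, 0.1, -0.2]))   # {'speech_act': 'greeting', 'parse': ...}
```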
okay, 2029 time:
Quote: A $1,000 personal computer is 1,000 times more powerful than the human brain.
This will only hold true with respect to the 2019 prediction if you assume optimal progression; most years that doesn't happen, though, and the 1,000-fold increase in computing power that I've witnessed so far took about 15 years.
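The arithmetic behind that: a 1,000-fold increase is almost exactly ten doublings, so you can read the required time straight off the doubling period.

```python
import math

doublings_for_1000x = math.log2(1000)   # ~10 doublings, since 2^10 = 1024

for label, months in [("optimal (18-month doubling)", 18),
                      ("slower (24-month doubling)", 24)]:
    years = doublings_for_1000x * months / 12
    print(f"{label}: 1000x takes ~{years:.0f} years")

# optimal (18-month doubling): 1000x takes ~15 years
# slower (24-month doubling): 1000x takes ~20 years
# which matches the ~15 years observed above, and is 5-10 years longer than
# the single decade between the 2019 and 2029 predictions.
```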
Quote: Massively parallel neural nets, which are constructed through reverse-engineering the human brain, are in common use.
Neural nets are already in heavy use (IBM folded one into their antivirus back in 1996). They do have a slight basis in brain studies, and models have been done of the cortex of rats, though the practical applications tend to have unique architectures (a side effect of the way neural programming works: you give it a problem and let it 'grow' into the parameters, effectively teaching the computer by telling it when it guesses right or wrong).
Quote: Computer implants designed for direct connection to the brain are also available. They are capable of augmenting natural senses and of enhancing higher brain functions like memory, learning speed and overall intelligence.
Politicians are just going to love letting people do this I'm sure.
Quote: Artificial Intelligences claim to be conscious and openly petition for recognition of the fact. Most people admit and accept this new truth.
Most people? Most people are going to believe what big money wants them to believe: that AIs have no emotions, and are only programmed to pretend to care by ne'er-do-wells that want to undermine the economy.
I'm not sure how well a slave class of AIs would work though; the same theoretical taskmasters are the people who grew up on science fiction books, and are likely to be sympathetic to the AIs. Not to mention the idea of the kind of personality an AI created by open source programmers would be likely to have.
Quote: The manufacturing, agricultural and transportation sectors of the economy are almost entirely automated and employ very few humans. Across the world, poverty, war and disease are almost nonexistent thanks to technology alleviating want.
I think eliminating the need for 90% of the workforce is going to *create* a lot of war, poverty and disease, at least any time prior to post scarcity.
A final comment on 2029: Kurzweil has never commented on how we are going to face the energy crunch and actually power all these magnificent machines. By 2029 the oil will be nearly gone, especially if we keep increasing consumption at this rate, and while we've made a lot of small improvements, politics and physics seem to be severely limiting our ability to get what we're going to need in 20 years.
I'm going to post something thicker, but real quickly I just want to add that the technology has been moving forward at a faster and faster rate regardless of politics, and especially american politics. If we don't do it, they will. If AI doesn't thrive here, it certainly will in Japan, or elsewhere. Even if 75% of the world powers become luddites, the 25% that don't will keep pushing forward at an exponential rate.
In fact, I'm not getting this "optimal" term from Kurzweil. He has made conservative and less than conservative projections, but the Law of Accelerating Returns idea seems to be about the unstoppable nature of this technology. Any country that tries to impede this research, or prevent its population access to it, may just watch its economic status plummet. And any AI that is AS intelligent as a human being will also be MORE intelligent and more capable than a human being in a myriad of ways. I think denying them rights would be a bad move. They might just do something about it, and I'll bet it'd be over before we knew it.
I think it's a bit more complicated than creating a slave class of AIs. Certainly, truly advanced AIs will prefer to have primitive AIs do their work for them. I also doubt it will be very difficult for them to display emotions, and the scientific community has been expecting to put the Turing Test to good use for quite some time. I believe they've been giving it a whirl lately, just for good measure.
All of this would be much more fun to go into, than just to briefly touch on like this. Hard not to post on these topics before I'm ready to put in the full post, but I'll add more.
Quote from: Requiem on May 26, 2008, 09:40:14 AM
Quote: Massively parallel neural nets, which are constructed through reverse-engineering the human brain, are in common use.
Neural nets are already in heavy use (IBM folded one into their antivirus back in 1996). They do a have a slight basis on brain studies, and models have been done of the cortex of rats, though the practical applications tend to have unique architectures (a side effect of the way neural programming works, you give it a problem, and let it 'grow' into the parameters, effectively teaching the computer by telling it when it guesses right or wrong).
ok i'm just going to step in here and clarify a bit:
Neural Networks are a type of machine learning algorithm that is only very vaguely related to what the brain actually does. They are much, much simpler than that. They typically consist of about 100 up to a few thousand "neurons".
Increasing the number of neurons in a neural net does NOT always increase the power or accuracy of such a system, because of over-training. The idea here being that you adjust the parameters in the net (usually by stochastic gradient descent on a cost function) so that the network will get more and more accurate in correctly classifying the training vectors you feed it.
The thing is, you don't really care about the training vectors, you want the network to perform well for unseen data. If you give the net enough power to learn the training vectors verbatim, it will do so, and it will return very good accuracy on the training set, but have incredibly poor generalization ability, so it's pretty much useless.
One of the factors in training a proper neural network is to give it just the right number of degrees of freedom it needs for the job.
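here's a minimal sketch of that over-training effect, assuming you have numpy and scikit-learn around (the exact numbers will vary with the random seed):

```python
# sketch of over-training: a modest net vs. an oversized one on small, noisy data
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# small training set with 10% of the labels deliberately flipped (noise)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           flip_y=0.10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for hidden in (8, 1000):   # reasonable degrees of freedom vs. way too many
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=5000, random_state=0)
    net.fit(X_train, y_train)
    print(f"{hidden:>4} hidden units: train acc {net.score(X_train, y_train):.2f}, "
          f"test acc {net.score(X_test, y_test):.2f}")

# the oversized net tends to memorize the flipped labels (training accuracy near
# 1.0) while doing no better -- often worse -- on the held-out test set.
```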
Further, Neural Network algorithms have been widely replaced by newer and better Pattern Classification algorithms, such as Support Vector Machines (seems to be the current "market leader") and Learning Vector Quantization (the focus of my research group). These algorithms do not have anything to do with brains and/or neurons anymore, but they have other advantages (you can't really see *how* a neural network or an SVM has learned what it does, but LVQ allows this, for instance), and are more accurate.
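for the curious: LVQ itself is simple enough to sketch in a few lines. it keeps labeled prototype vectors, pulls the nearest prototype toward a sample when the labels match and pushes it away when they don't, so afterwards you can literally look at the prototypes to see what was learned (that's the interpretability point). a minimal LVQ1 in plain numpy:

```python
import numpy as np

def train_lvq1(X, y, n_classes, lr=0.05, epochs=50, seed=0):
    """minimal LVQ1: one prototype per class, attracted/repelled by samples"""
    rng = np.random.default_rng(seed)
    # start each prototype at its class mean (a common initialization)
    protos = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    labels = np.arange(n_classes)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # nearest prototype
            sign = 1.0 if labels[j] == y[i] else -1.0             # attract or repel
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos, labels

def predict(protos, labels, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

# two noisy 2-D blobs as toy data
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 1.0, (100, 2)), rng.normal([3, 3], 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

protos, labels = train_lvq1(X, y, n_classes=2)
print("prototypes:\n", protos)                      # directly inspectable exemplars
print("accuracy:", (predict(protos, labels, X) == y).mean())
```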
And then, on the other hand, the experiments done with simulating the rat's cortex and such, are an entirely different field of research. They don't have much to do with pattern recognition (and therefore the classical meaning of the term "Neural Network"), but much more with bio-informatics and computational physics.
Very interesting nonetheless, but you shouldn't confuse the two.
Quote from: Requiem on May 25, 2008, 11:24:12 AM
Quote: Prototype personal flying vehicles using microflaps exist. They are also primarily computer-controlled.
Did he just promise me a flying car?
He promised you a *prototype*.
As to the rest of the discussion that has come over the weekend: we've been talking about the nature of predictions, the progression of technology, and the viability of certain scientific advances. One side (the optimistic side) says good ol' Dr. K. has at least a good idea of where we're going. They cite where we've been and where we are now as evidence. The pessimistic side says the optimists are too optimistic. They cite where we've been and where we are now as evidence.
Sound like I've summed it up pretty well so far?
If 25% of the world is Luddites, they can easily keep the other 75% down if they care enough; such is the wonder of democracies where two-thirds of the population doesn't vote.
Optimal progression refers to Moore's law, specifically the doubling of transistors every 18 months for a given amount of money. That almost never happens, though; the real rate falls between 18 and 24 months. And while I can't speak as well for fields outside my expertise, for computers that rate of acceleration has slowed fairly significantly. It's coming back up, but in 5 or 10 more years it may well have halted altogether. Oh, Moore's law will continue for a while, like a shambling zombie, as things get cheaper, but the actual progress is going to stop within a few years of the orgy of advancement IBM is busy orchestrating. Yes, it may not happen; there are a lot of non-lithographic techniques that might one day replace what we use now, but this is serious black swan territory. We have no idea if it will even be possible to break the 32 nm barrier, let alone when a process will materialize or how much it will cost to do it, only that what we're doing now won't do it.
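For a sense of how much the 18-versus-24-month difference matters, a quick back-of-the-envelope calculation (plain arithmetic, no claim about what fabs will actually deliver):

# How the 18- vs. 24-month doubling figures diverge over a decade.
for months in (18, 24):
    growth = 2 ** (120 / months)   # 120 months = 10 years
    print(f"doubling every {months} months -> ~{growth:.0f}x in 10 years")
# ~101x for 18-month doubling vs. 32x for 24-month doubling; the gap compounds quickly.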
The IBM computers you mention... are they the sooper awesome ones talked about in Techmology? Because those are fucking awesome, and I can see how it's going to take a while for the research and market to adapt. Those things look incredible.
I'm going to assume you mean my reference to IBM screwing with the playing field a couple of years hence? I can't find the particular thread you're referring to, but it goes like this.
The current limit on transistor size is 45 nm (nanometers); only Intel and IBM can do that, and AMD will have it soon. Other manufacturers are working between 55 and 90 nm. Each step to a smaller size is a slow, laborious process of research and takes about 2 years. A couple of months back, IBM decided they were going to give *everyone* the 32 nm process as soon as they get it. Now, this doesn't affect CPUs much, since Intel will have it soon enough, but most components of the computer stand to jump the curve of Moore's law by at least 2 or 3 years, since it suddenly becomes cheaper (mostly) to make a big leap instead of a little one.
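As a rough illustration of why each node step matters: if transistor density scaled simply with the square of the feature size (a first-order simplification; real scaling depends on more than the headline number), the jumps look like this:

# Rough, first-order picture only: density ~ 1 / (feature size)^2.
steps = [(90, 65), (65, 45), (45, 32)]
for old, new in steps:
    print(f"{old}nm -> {new}nm: ~{(old / new) ** 2:.1f}x the transistors per area")
# 45nm -> 32nm comes out to roughly 2x, which is why each node is treated as "a doubling".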
Quote from: triple zero on May 26, 2008, 04:29:42 PM
Further, Neural Network algorithms have been widely replaced by newer and better Pattern Classification algorithms, such as Support Vector Machines (seemingly the current "market leader") and Learning Vector Quantization (the focus of my research group). These algorithms no longer have anything to do with brains and/or neurons, but they have other advantages (you can't really see *how* a neural network or an SVM has learned what it does, but LVQ allows this, for instance) and are more accurate.
very interesting, 000. Can you elaborate a bit on what these algorithms are used for?
Quote from: Requiem on May 27, 2008, 07:34:22 AM
I'm going to assume you mean my reference to IBM screwing with the playing field a couple of years hence? I can't find the particular thread you're referring to, but it goes like this.
The current limit on transistor size is 45 nm (nanometers); only Intel and IBM can do that, and AMD will have it soon. Other manufacturers are working between 55 and 90 nm. Each step to a smaller size is a slow, laborious process of research and takes about 2 years. A couple of months back, IBM decided they were going to give *everyone* the 32 nm process as soon as they get it. Now, this doesn't affect CPUs much, since Intel will have it soon enough, but most components of the computer stand to jump the curve of Moore's law by at least 2 or 3 years, since it suddenly becomes cheaper (mostly) to make a big leap instead of a little one.
I have no idea what's going on with this science nonsense, but I was curious and found this:
Quote
Tachyon DPT uses Brion's latest double patterning technology to allow advanced chip makers to develop devices down to the 22nm technology node. A production-ready, complete end-to-end solution that is available now, it supports both litho- (litho-etch-litho-etch) and spacer-DPT - two leading double patterning techniques. Tachyon DPT offers full-chip conflict-free pattern split, model-based OPC, model-based stitching compensation, and automatic density balancing.
source: http://findarticles.com/p/articles/mi_m0EIN/is_2008_Feb_25/ai_n24322071
Whatever all this stuff means, it sounds both cool and scary.
Also, would you put your brain in a robot body?
Quote from: Requiem on May 27, 2008, 07:34:22 AM
I'm going to assume you mean my reference to IBM screwing with the playing field a couple of years hence? I can't find the particular thread you're referring to, but it goes like this.
The current limit on transistor size is 45 nm (nanometers); only Intel and IBM can do that, and AMD will have it soon. Other manufacturers are working between 55 and 90 nm. Each step to a smaller size is a slow, laborious process of research and takes about 2 years. A couple of months back, IBM decided they were going to give *everyone* the 32 nm process as soon as they get it. Now, this doesn't affect CPUs much, since Intel will have it soon enough, but most components of the computer stand to jump the curve of Moore's law by at least 2 or 3 years, since it suddenly becomes cheaper (mostly) to make a big leap instead of a little one.
And we're still using 2-dimensional architecture. After quad core, and deca core or whatever, we start using 3-dimensional architecture, molecular computing, etc. Despite popular conjecture, I would propose that quantum computing can be physically integrated with digital. Either way, I suspect Moore's law will fold into another trend, correlating with new paradigms.
Much too much emphasis is given to Moore's law. The exponential rate of technological development and innovation seems to me to extend far beyond the threshold of one economic projection. Even from a purely economic standpoint, computational price performance seems no more solely dependent upon transistor size than transportation price performance is dependent upon gasoline supply. (I wish I could think of a better analogy there, but I haven't, so I'm gonna run with it.) I would suggest that rather than decreasing the rate of "bang for your buck", the diminishment of fuel sources may increase the rate of innovation in the transportation industry, and although it is not obvious yet, we may find ourselves paying less to get around as alternatives are explored and developments are made. Yes, this sounds like extreme optimism, but I'm not stating it as fact. I'm offering it up as quite possible to occur, given the trends across the board for accelerating technological progress. It seems to me that revolutionary technologies are hitting us harder and faster every day, and this also seems to be enabling a massive amount of decentralized information and, more importantly, innovation.
Moore's law may break. There may not be a new paradigm. But like Y2K, I see no point in worrying about a slow-down in technological breakthroughs. If you examine the last hundred years as compared to the last thousand, or the last ten as opposed to the last hundred, how could it be more reasonable to project a breaking point due to mere transistor limitations? And yes, I suppose there's no REAL reason to expect it will keep speeding up indefinitely, but if I'm speculating anyway, I'd rather use an optimistic lens, because A) it's tremendously easy (faster technology makes faster technology) and B) it's so much more fun to stretch and examine the limits of the possible than to work safely within the confines of the "known".
Concerning nonbiological intelligence: A lot of people think we will see this... think THEY will help develop it, during the next few decades. IF, and I'm just saying IF we DO see human level AI in our lifetimes, it COULD be a massive evolutionary moment for primates. There may be running and screaming and gnashing of teeth. But for those paying attention, there may also be opportunities for one hell of a universal freakout!
Quote
Even from a purely economic standpoint, computational price performance seems no more soley dependent upon transistor size than transportation price perfomance is dependent upon gasoline supply. (I wish I could think of a better analogy there, but I haven't, so I'm gonna run with it.)
Ah the bad car analogy. You should really come to /.
You really missed my point. In bringing up molecular computing and 3D architecture, you're claiming technology will advance in ways we aren't even sure are possible in theory yet, let alone whether they can be practically fabricated.
And no, quantum computing *can't* be integrated into the way we compute now, not because it's hard to make but because it behaves in a fundamentally different manner. It might eventually be possible to use it for general computing, but you would have to rebuild every line of code along the way, and even then, do you really want a spreadsheet that is maybe right and maybe wrong? (Insert obligatory comment about Excel here.)
Aside from that, at least you got the 'maybe' part down :)
Oh and not to be mean, but get a browser with a decent spell checker.
Quote from: Requiem on May 29, 2008, 07:21:26 AM
You really missed my point. In bringing up molecular computing and 3D architecture, you're claiming technology will advance in ways we aren't even sure are possible in theory yet, let alone whether they can be practically fabricated.
And no, quantum computing *can't* be integrated into the way we compute now, not because it's hard to make but because it behaves in a fundamentally different manner. It might eventually be possible to use it for general computing, but you would have to rebuild every line of code along the way, and even then, do you really want a spreadsheet that is maybe right and maybe wrong? (Insert obligatory comment about Excel here.)
I've bolded the statements that need to be E-primed and are pretty much just wrong. Quantum computers do not yield spreadsheets that are maybe right and maybe wrong... it doesn't seem to me that you understand this field very well at all. But before you say "No yUo"... let me just say:
:cn:
Here, I'll give you a good start from Intel and IBM:
Concerning 3-dimensional Architecture and Moore's Law
Intel believes the best is yet to come. By 2015, Intel envisions processors with tens to potentially hundreds of cores per processor die. Those cores will be supporting tens, hundreds, or maybe even thousands of simultaneous execution threads.
Intel is even now researching 3-dimensional (3D) die and wafer stacking technologies, which could move device density from hundreds or thousands of pins, to a million or 10 million connections. This is the type of dramatic increase in memory-to-processor connectivity that will be required to deliver the bandwidth needed to support Intel's upcoming many-core architectures.
Intel also expects to see more natural, more human, and more error-tolerant interfaces; personalized, interactive 3D entertainment; and intelligent data management for both home and business applications. Some of Intel's latest innovations and breakthrough research areas include:
-Packaging technology, including eliminating the bumps of solder that make the connections between the package and the chip, and so reducing the thickness of the layers and allowing the further shrinking of devices.
-Transistor design, including novel, tri-gate transistors that reduce leakage current in general, and so could reduce power consumption in mobile devices.
-New dielectric materials, such as High-K, which reduces leakage current by a factor of 100 over silicon dioxide.
-Extreme ultra-violet (EUV) lithography, which uses a wavelength of 13.5nm, is expected to enable the printing of features that are 10nm and below.
-Silicon photonics, including the world's first continuous wave silicon laser, which solves the previously insurmountable, two-photon absorption problem.
This excerpt was taken from http://www.intel.com/technology/magazine/silicon/moores-law-0405.htm#section4 (http://www.intel.com/technology/magazine/silicon/moores-law-0405.htm#section4) and I highly suggest anyone interested in this subject read the full article and check out the linked sections. Very interesting stuff.
Concerning Molecular Computing
IBM scientists have built a computer circuit that is a quantum leap smaller than any yet created, using a technique they call "molecule cascade." The company's scientists claim this technique enables them to make computer logic elements 260,000 times smaller than those in today's silicon semiconductor chips...
.... Heinrich noted that the molecule cascade circuit represents a completely new approach to computing.
He pointed out that current silicon-based computing relies on moving electrons through materials. In contrast, in IBM's molecule cascade circuit, "we're doing all the computations by moving single molecules from one location to another," he said.
IBM researchers built the circuit by creating a pattern of carbon monoxide molecules on a copper surface. They were then able to create a molecular cascade by moving one molecule, which in turn moved the remaining molecules like a line of dominoes.
"We use the precise locations of these molecules as our binary information," Heinrich said. "If [a molecule] is in location A, we call that logic 0; if it's in location B, we call that logic 1."
And you may find the article for that excerpt here: http://www.newsfactor.com/perl/story/19781.html (http://www.newsfactor.com/perl/story/19781.html)
A simple Google search on these topics (molecular computing, DNA circuits, quantum computation, 3-dimensional CPU architecture, or any of the zillions of other innovative approaches to computation currently being explored), combined with names like Intel, IBM, Sony, Oxford, MIT, Cambridge, or other respectable centers of technological development, yields an enormous variety of current research, and most of the larger corporations have projected roadmaps.
Quote from: Requiem
Oh and not to be mean, but get a browser with a decent spell checker.
I'm making an overwhelming number of spelling errors? Weird, I'm still not noticing it, but perhaps I'll look into that.
Excuse any tones of haughtiness in this post, but these days everyone and their brother seems to think they're an expert in physics and computation. Please provide stronger examples of any impenetrable roadblocks.
Quibbling about spelling is juvenile. Unless you literally CAN NOT read it because of the spelling, STFU.
Quote from: Hoopla on May 29, 2008, 09:16:24 PM
Quibbling about spelling is juvenile. Unless you literally CAN NOT read it because of the spelling, STFU.
I don't understand any of this. You need to clean it up and make it readable. Where did you learn to type? Damn!
In other news, this (http://www.amazon.com/Quantum-Computing-Mika-Hirvensalo/dp/3540667830) is a good book to read as an introduction. Just thinking about the possibilities makes me have a quantum leap in my pants.
I don't mean to quibble; it's just annoying when my own browser (which I'm not using right now, hence the spelling errors I probably have in this post) starts flagging me because of something I'm quoting. This problem can also be solved by not saying anything I wish to quote.
WTF is E-Prime?
Quote
By 2015, Intel envisions processors with tens to potentially hundreds of cores per processor die. Those cores will be supporting tens, hundreds, or maybe even thousands of simultaneous execution threads.
Intel is even now researching 3-dimensional (3D) die and wafer stacking technologies, which could move device density from hundreds or thousands of pins, to a million or 10 million connections.
Researching, as in they don't have it yet. The IBM molecular circuit is new to me, though I've seen articles on independent companies with the same thing; the point, however, is that they can't manufacture it yet. They may never be able to manufacture it at a reasonable price; there are a lot of absolutely wonderful working technologies that never came to market because they couldn't mass produce the results (see nanodrives). 3D and wafer stacking are also particularly dubious, since they both make venting the heat of the processor very difficult.
As far as the quantum thing goes...
http://www.schneier.com/blog/archives/2008/03/quantum_computi_1.html
Quote from: Bruce Schneier
when you add 2+2 on a quantum computer you get the most probable outcome of such an addition - which is okay if you are working in a way such that your spectra is restricted to the integers (2+2=4), and not the reals (2+2=3.99792).
If you must appeal to authority, I'd hope this will do.
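On the "maybe right, maybe wrong" question, here is a toy probability calculation (the 0.9 per-run success rate is invented purely for illustration, not a property of any real machine) showing how repeating a bounded-error computation and taking the majority answer drives the error rate down:

# Toy bounded-error amplification: probability that a majority vote over
# several independent runs still gives the wrong answer.
from math import comb

def majority_wrong(p_correct, runs):
    # chance that more than half of the runs come out wrong
    p_wrong = 1 - p_correct
    return sum(comb(runs, k) * p_wrong**k * p_correct**(runs - k)
               for k in range(runs // 2 + 1, runs + 1))

for runs in (1, 5, 21):
    print(runs, "runs -> chance the majority is wrong:",
          f"{majority_wrong(0.9, runs):.2e}")
# 1 run: 10%; 5 runs: under 1%; 21 runs: roughly one in a million.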