http://www.popsci.com/technology/article/2009-11/neuron-computer-chips-could-overcome-power-limitations-digital
Quote
According to Kwabena Boahen, a computer scientist at Stanford University, a robot with a processor as smart as the human brain would require at least 10 megawatts to operate. That's the amount of energy produced by a small hydroelectric plant. But a small group of computer scientists may have hit on a new neural supercomputer that could someday emulate the human brain's low energy requirements of just 20 watts--barely enough to run a dim light bulb.
Discover Magazine (http://discovermagazine.com/2009/oct/06-brain-like-chip-may-solve-computers-big-problem-energy/) has the story on how the Neurogrid computer could completely overhaul the traditional approach to computers. It trades the extreme precision of digital transistors for the brain's chaos of many neurons firing, with misfires 30 percent to 90 percent of the time. Yet the brain works with this messy system by relying on crowds of neurons to shout over the noise of misfires and competing signals.
That willingness to give up precision for chaos could lead to a new era of creative computing that simulates the unpredictable patterns of brain activity. It could also represent a far more energy-efficient era -- the Neurogrid fits in a briefcase and runs on what amounts to a few D batteries, or less than a watt. Rather than transistors, it uses capacitors that reach the same voltages as neurons.
Boahen has so far managed to squeeze a million neurons onto his new supercomputer, compared to just 45,000 silicon neurons on previous neural machines. A next-generation Neurogrid may host as many as 64 million silicon neurons by 2011, or approximately the brain of a mouse.
This new type of supercomputer will not replace the precise calculations of current machines. But its energy efficiency could provide the necessary breakthrough to continue upholding Moore's Law, which suggests that the number of transistors on a silicon chip can double about every two years. Perhaps equally exciting, the creative chaos from a chaotic supercomputer system could ultimately lay the foundation for the processing power necessary to raise artificial intelligence to human levels.
Wow. Sounds like we're approaching the Singularity more rapidly than I thought.
Not that any of this makes any sense to me. How can something misfire 90% of the time and still be efficient?
Yeah, see, that's why our current computers are as useful as they are: consistency. If your computer isn't getting the same result 9 times out of 10, then most people are simply going to be frustrated.
I can't figure it out exactly myself.
I'm guessing the efficiency is emergent.
Quote from: Regret on November 09, 2009, 02:41:11 PM
I can't figure it out exactly myself.
I'm guessing the efficiency is emergent.
Ion flow and protein-gated channels run on much less energy overall than copper wire, especially since action potentials are intermittent rather than continuous. Ion movement in fluid versus electron movement in a wire: there's less resistance in the former.
If we're talking about the efficiency of "how often, when I input one stimulus, do I get the same output?", then that is much, much lower than for silicon chips. If you're looking for creative output efficiency, it's higher, but if you're looking for consistency efficiency, it's lower. So I guess you could use it to discover novel pathways, but for regular computing it wouldn't be that useful.
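A toy way to see the article's "crowds of neurons shout over the noise" claim -- just a population-averaging sketch, my guess at the mechanism, not anything Neurogrid-specific:

import random

# 1,000 noisy "neurons" all try to report the same signal, 1.0, but each
# individual reading is swamped by noise and unreliable on its own.
random.seed(1)
signal = 1.0
readings = [signal + random.gauss(0, 2.0) for _ in range(1000)]
print(round(readings[0], 2))           # one neuron alone: unreliable
print(round(sum(readings) / 1000, 2))  # the crowd average: close to 1.0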
It's only a matter of time before Luca Turin's research on protein conductors leads to computers made out of meat.
Quote from: Kai on November 09, 2009, 01:05:22 PM
Yeah, see, that's why our current computers are as useful as they are: consistency. If your computer isn't getting the same result 9 times out of 10, then most people are simply going to be frustrated.
And yet somehow we continue to function efficiently enough. As I understood it, it works as well as a human brain.
Quote from: rygD on November 09, 2009, 07:02:18 PM
Quote from: Kai on November 09, 2009, 01:05:22 PM
Yeah, see, that's why our current computers are as useful as they are: consistency. If your computer isn't getting the same result 9 times out of 10, then most people are simply going to be frustrated.
And yet somehow we continue to function efficiently enough.
:cn:
Quote from: Nigel on November 09, 2009, 06:59:47 PM
It's only a matter of time before Luca Turin's research on protein conductors leads to computers made out of meat.
We already have those. We call them "math students".
Quote from: rygD on November 09, 2009, 07:02:18 PM
Quote from: Kai on November 09, 2009, 01:05:22 PM
Yeah, see, that's why our current computers are as useful as they are: consistency. If your computer isn't getting the same result 9 times out of 10, then most people are simply going to be frustrated.
And yet somehow we continue to function efficiently enough. As I understood it, it works as well as a human brain.
Meaning it's also capable of making tremendous errors for no good reason, with no obvious remedy.
With a computer, you know that stupid in will give you stupid out. With a brain, sometimes you just get stupid. And it likely won't be as good for math or databases. It will be good for interfacing with the local chaos (real matter), though. It might be able to count the number of birds in the air better than existing man and machine, for less cost, for example.
Also, 20 Watts is actually a significant amount of energy. It's enough to burn you, and it's actually quite bright if you use a 20W FLUORESCENT light.
Quote from: yhnmzw on November 09, 2009, 10:12:09 PM
Also, 20 Watts is actually a significant amount of energy. It's enough to burn you, and it's actually quite bright if you use a 20W FLUORESCENT light.
I think you are missing the point that 20 Watts is a hell of a lot lower than 10,000,000 Watts. This could make supercomputers cheaper to run/cool in the future.
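Quick sanity check with the article's own figures (trivial, but the gap is the whole point):

# Boahen's estimate for a brain-smart digital processor vs. the real brain
watts_digital = 10e6   # 10 megawatts
watts_brain = 20.0
print(watts_digital / watts_brain)  # 500000.0 -> half a million times the power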
Quote from: LMNO on November 09, 2009, 07:06:22 PM
Quote from: rygD on November 09, 2009, 07:02:18 PM
Quote from: Kai on November 09, 2009, 01:05:22 PM
Yeah, see, that's why our current computers are as useful as they are: consistency. If your computer isn't getting the same result 9 times out of 10, then most people are simply going to be frustrated.
And yet somehow we continue to function efficiently enough.
:cn:
Ok, bunnies function efficiently enough.
Quote from: yhnmzw on November 09, 2009, 10:12:09 PM
Quote from: rygD on November 09, 2009, 07:02:18 PM
Quote from: Kai on November 09, 2009, 01:05:22 PM
Yeah, see, that's why our current computers are as useful as they are: consistency. If your computer isn't getting the same result 9 times out of 10, then most people are simply going to be frustrated.
And yet somehow we continue to function efficiently enough. As I understood it, it works as well as a human brain.
Meaning it's also capable of making tremendous errors for no good reason, with no obvious remedy.
With a computer, you know that stupid in will give you stupid out. With a brain, sometimes you just get stupid. And it likely won't be as good for math or databases. It will be good for interfacing with the local chaos (real matter), though. It might be able to count the number of birds in the air better than existing man and machine, for less cost, for example.
Also, 20 Watts is actually a significant amount of energy. It's enough to burn you, and it's actually quite bright if you use a 20W FLUORESCENT light.
Considering mid- to high-end consumer computers require anywhere from 400 to 700 watts, I would say that 20 watts is next to nothing in the field of computers.
Quote from: yhnmzw on November 09, 2009, 10:12:09 PM
Quote from: rygD on November 09, 2009, 07:02:18 PM
Quote from: Kai on November 09, 2009, 01:05:22 PM
Yeah, see, that's why our current computers are as useful as they are: consistency. If your computer isn't getting the same result 9 times out of 10, then most people are simply going to be frustrated.
And yet somehow we continue to function efficiently enough. As I understood it, it works as well as a human brain.
Meaning it's also capable of making tremendous errors for no good reason, with no obvious remedy.
With a computer, you know that stupid in will give you stupid out. With a brain, sometimes you just get stupid. And it likely won't be as good for math or databases. It will be good for interfacing with the local chaos (real matter), though. It might be able to count the number of birds in the air better than existing man and machine, for less cost, for example.
Also, 20 Watts is actually a significant amount of energy. It's enough to burn you, and it's actually quite bright if you use a 20W FLUORESCENT light.
^The above. I don't want errors in my research data.
Quote from: Kai on November 11, 2009, 01:19:27 AM
^The above. I don't want errors in my research data.
Well, yes. That is why the article said:
Quote
This new type of supercomputer will not replace the precise calculations of current machines
This won't be used for number-crunching at first. It will more likely be used in artificial intelligence and evolutionary computation. The Discover magazine article covers this a lot better than the original link: http://discovermagazine.com/2009/oct/06-brain-like-chip-may-solve-computers-big-problem-energy/article_view?b_start:int=3&-C=
Quote
Neurogrid's noisy processors will not have anything like a digital computer's rigorous precision. They may, however, allow us to accomplish everyday miracles that digital computers struggle with, like prancing across a crowded room on two legs or recognizing a face.
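For the evolutionary computation part, this is roughly the kind of loop I mean -- a toy (1+1) hill-climber that still converges even when every fitness evaluation is noisy (my sketch, nothing Neurogrid-specific):

import random

# Maximize f(x) = -(x - 3)^2 when every evaluation comes back noisy,
# the way it would on an imprecise neural substrate.
random.seed(0)
def noisy_fitness(x):
    return -(x - 3.0) ** 2 + random.gauss(0, 0.5)

x = 0.0
for _ in range(2000):
    child = x + random.gauss(0, 0.1)             # mutate
    if noisy_fitness(child) > noisy_fitness(x):  # select, noisily
        x = child
print(round(x, 1))  # lands near 3.0 despite the noise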
Plus neurocomputers will help us keep up with Moore's Law and cut energy costs drastically:
Quote
The lessons of Neurogrid may soon start to pay off in the world of conventional computing too. For decades the electronics industry has hummed along according to what is known as Moore's law: As technology progresses and circuitry shrinks, the number of transistors that can be squeezed onto a silicon chip doubles every two years or so.
So far so good, but this meteoric growth curve may be headed for a crash.
For starters, there is, again, the matter of power consumption. Heat, too, is causing headaches: As engineers pack transistors closer and closer together, the heat they generate threatens to warp the silicon wafer. And as transistors shrink to the width of just a few dozen silicon atoms, the problem of noise is increasing. The random presence or absence of a single electricity-conducting dopant atom on the silicon surface can radically change the behavior of a transistor and lead to errors, even in digital mode. Engineers are working to solve these problems, but the development of newer generations of chips is taking longer. "Transistor speeds are not increasing as quickly as they used to with Moore's law, and everyone in the field knows that," Sarpeshkar says. "The standard digital computing paradigm needs to change—and is changing."
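(To put that doubling in perspective:

# doubling every ~2 years compounds fast
print(2 ** (10 / 2))  # 32.0 -> ~32x the transistors per chip in a decade

...which is why everybody is so touchy about the curve flattening.)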
Not to mention that we are already running into the problem that our ultra-small transistors are more prone to error than previous generations:
Quote
As transistors shrink, the reliability of digital calculation will at some point fall off a cliff, a result of the "fundamental laws of physics," says Sarpeshkar. Many people place that statistical precipice at a transistor size of 9 nanometers, about 80 silicon atoms wide. Some engineers say that today's digital computers are already running into reliability problems. In July a man in New Hampshire bought a pack of cigarettes at a gas station, according to news reports, only to discover his bank account had been debited $23,148,855,308,184,500. (The error was corrected, and the man's $15 overdraft fee was refunded the next day.) We may never know whether this error arose from a single transistor in a bank's computer system accidentally flipping from a 1 to a 0, but that is exactly the kind of error that silicon-chip designers fear.
"Digital systems are prone to catastrophic errors," Sarpeshkar says. "The propensity for error is actually much greater now than it ever was before. People are very worried."
Neurally inspired electronics represent one possible solution to this problem, since they largely circumvent the heat and energy problems and incorporate their own error-correcting algorithms. Corporate titans like Intel are working on plenty of other next-generation technologies, however. One of these, called spintronics, takes advantage of the fact that electrons spin like planets, allowing a 1 or 0 to be coded as a clockwise versus counterclockwise electron rotation.
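To see why one flipped bit is so scary, a toy demo (made-up balance, not the actual bank bug):

# A balance stored in cents as a 64-bit integer, with one high bit flipped.
balance = 1500                   # $15.00
corrupted = balance ^ (1 << 54)  # a single bit error
print(corrupted / 100.0)         # ~1.8e14 dollars, from one wrong bit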
The error-correcting part seems to be the key. The "neurons" are built to be redundant and dynamic. If you can coax it into doing enough redundant computation, it can give you not only precision but also creativity:
Quote
The most important achievement of Boahen's Neurogrid, therefore, may be in re-creating not the brain's efficiency but its versatility. Terrence Sejnowski, a computational neuroscientist at the Salk Institute in La Jolla, California, believes that neural noise can contribute to human creativity.
Digital computers are deterministic: Throw the same equation at them a thousand times and they will always spit out the same answer. Throw a question at the brain and it can produce a thousand different answers, canvassed from a chorus of quirky neurons. "The evidence is overwhelming that the brain computes with probability," Sejnowski says. Wishy-washy responses may make life easier in an uncertain world where we do not know which way an errant football will bounce, or whether a growling dog will lunge. Unpredictable neurons might cause us to take a wrong turn while walking home and discover a shortcut, or to spill acid on a pewter plate and during the cleanup to discover the process of etching.
Re-creating that potential in an electronic brain will require that engineers overcome a basic impulse that is pounded into their heads from an early age. "Engineers are trained to make everything really precise," Boahen says. "But the answer doesn't have to be right. It just has to be approximate."
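That "computes with probability" line is easy to play with yourself. A bare-bones stochastic neuron (my toy model, not Sejnowski's):

import math, random

# The same input fires with probability sigmoid(input), so identical
# questions get different answers on different trials.
random.seed(42)
def neuron(x):
    p = 1.0 / (1.0 + math.exp(-x))
    return 1 if random.random() < p else 0

print([neuron(0.5) for _ in range(10)])  # same input, a mix of 1s and 0s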
Oh great, mechanical monkeys.
I propose using both types of computers in the transition stage; that'll give 'em enough time to work out the making-mistakes bit.
Actually, there are enough examples where it is perfectly fine to make efficiency/accuracy trade-offs like this.
In fact, this would be pretty fucking useful for databases. I've read some stuff about neural models for databases, and if you design them that way, they are able to deduce missing pieces of data and discover patterns and trends in the data as primitive operations, the same way you'd normally do an exact query on a traditional DB. It's not always what you want, but there are quite a few situations in which this would actually be better than a traditional DB.
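The "deduce the missing piece" trick is basically associative memory. A tiny Hopfield-style sketch (my toy, not from those papers):

# Store one pattern, then recover it from a corrupted query.
stored = [1, -1, 1, 1, -1, 1, -1, -1]
W = [[a * b if i != j else 0 for j, b in enumerate(stored)]
     for i, a in enumerate(stored)]       # Hebbian weights, zero diagonal

query = [1, -1, 1, 1, -1, -1, -1, -1]     # one entry corrupted
for _ in range(5):                        # update until stable
    query = [1 if sum(w * q for w, q in zip(row, query)) >= 0 else -1
             for row in W]
print(query == stored)  # True: the net fills the bad entry back in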
It's kinda scary too, though, 'cause this is getting pretty close to those positronic brain things in Asimov's novels: very complex, computer-designed networks of computer stuff that doesn't always work entirely right, and the very worst thing is that it is way too complex for us to ever truly understand what is going on. Yes, kind of like the human brain, or many other processes in biology.
Problem is, what if it works? What if it happens to be more efficient than biology?
You could argue that that's not gonna happen 'cause evolution has had millions of years to polish and perfect its biological processors, and we would be hard pressed to outdo nature in this field.
But you don't know that. These bio-processors were evolved for fitness at every step along the way, which means they could only move along the ridges of the fitness landscape toward local maxima (toy example below). If you add some human "intelligent design" into that mix, who knows where you can carry it.
I mean, look at the regular computer. Such a thing wasn't gonna evolve by itself either.
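Here's the local-maximum trap in miniature (a toy fitness landscape, obviously nothing like real evolution):

# A small bump at x=1 and a much higher peak at x=6. A walker that must
# improve fitness at every step (evolution) parks on the bump; a designer
# who can jump doesn't have to.
def fitness(x):
    return max(2 - (x - 1) ** 2, 10 - (x - 6) ** 2)

x = 0.0
while fitness(x + 0.1) > fitness(x):  # greedy uphill steps only
    x += 0.1
print(round(x, 1), round(fitness(x), 1))  # stuck at ~1.0, fitness ~2
print(6.0, fitness(6.0))                  # the peak it never reaches: 10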
Quote from: Kai on November 11, 2009, 02:15:42 AM
Oh great, mechanical monkeys.
I figure that's the goal.