Nonbiological Thinking

Started by Cramulus, June 28, 2007, 04:40:35 PM


Requia ☣

Flops isn't really the limit of brain-equivalent power; in fact, I think the brain probably caps out at a couple of flops on average, since that's a kind of calculation we are very, very bad at doing.  The brain's big computational advantage is its size (on the order of magnitude of 1 trillion neurons, each capable of storing a fairly substantial amount of data); flops only matter with respect to our ability to process the information in a reasonable time frame (frankly, a computer that takes an hour to do the same task a human does in a second is still a human-intelligent computer; the problem is doing the tasks at all).  Incidentally, IBM now sells a setup that runs at a full petaflop, and I think Sun is claiming 2 petaflops, though so far nobody has bought the full rigs.
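
Just to put rough numbers on that, here's a quick back-of-envelope in Python.  Every figure in it (neuron count, synapses per neuron, firing rate) is an assumption for illustration, not a measurement:

# Back-of-envelope comparison of raw "operations per second".
# All numbers below are rough assumptions, for illustration only.
neurons = 1e11              # order-of-magnitude figure cited above (10^11 to 10^12)
synapses_per_neuron = 1e3   # assumed average connectivity
firing_rate_hz = 100        # assumed average update rate per synapse

brain_ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz   # ~1e16
petaflop_machine = 1e15                                              # the IBM rig mentioned above

slowdown = brain_ops_per_sec / petaflop_machine
print(f"brain estimate: {brain_ops_per_sec:.0e} ops/s")
print(f"a petaflop machine is ~{slowdown:.0f}x short on raw throughput")
# Which is the point above: being 10x (or even 3600x) slower than a human
# at a task still counts, as long as the task can be done at all.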


Quote
See the Emotiv headset (a neural interface coming up), Microsoft's and Apple's new motion- and depth-perceiving cameras, and AT&T and Bell Labs' synthetic voice technologies for a start.  That's just the tip of the iceberg.  Japan is way ahead in this area of the market.

Oh, I have no doubt they will exist, especially for mobile computing purposes; I just don't think they will be able to replace a keyboard.  Voice is lousy in a cubicle environment, no matter how good the technology is.  And gestures, while perhaps good for replacing a mouse, can't do words.  I also fail to see how you can have an interface that lacks a keyboard and can still handle punctuation and symbols without being tedious.  Though that last one will likely be a problem more for me and my fellow console junkies than for the general population.

As for the book thing... right now when you buy a digital book, it works on *one* book reader; DRM prevents you from backing the book up properly, so a system failure means you lose your entire library; and most books are not available on all readers.  And until the political considerations are dealt with, paper books will have to stick around just so that libraries can lend them.
Inflatable dolls are not recognized flotation devices.

Triple Zero

Quote from: daruko on May 25, 2008, 01:46:10 PM
Quote from: Requiem
He's relying on a black swan to occur for the method of the glasses, but video glasses will likely be here soon (their clunky older brother is on sale at the grocery store right now, for that matter).  Hard to say if they will take off, especially given the lack of interest in economical solutions for fancy toys (actually, Japan would love these things...).

I see it taking off soon.  I've read a lot of research in this area, but I'm too lazy to post any of it for now.  I will say that bypassing the visual data from your optic nerve and simulating virtual reality to your brain is right around the corner, but we've got some work to do on calculating the physics for fully convincing virtual environments.  Computational physicists are working on it.  I'd bet there are many sources in the private sector looking into pushing visually convincing immersive entertainment, beyond what I've read.

too bad my previous post timed out, short version:

- VR technology has been around for decades
- i have worked with it at the university, coded a couple of apps for it, and trust me, it's not nearly as cool as it sounds
- the ability to project images onto glasses (or contacts) is only a very small part of the story. motion-tracking sensors for the head (and in the case of contacts, the eyes as well) are much more important for creating a proper immersive VR environment
- the retina is a 2D surface of high-bandwidth information input. the perception of "depth" is in fact an optical illusion to accommodate the discrepancy between the actual world (3D) and the retina surface (2D). with computers you can transmit the information in 2D from the beginning, no need to introduce an extra discrepancy as a roadblock by going from 3D to a 2D illusion etc. except for games, where you want to use this limitation for gameplay (see the little projection sketch after this list)
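
just so it's concrete what that extra 3D-to-2D step looks like, here's a toy pinhole-style perspective projection in Python (focal length and points are arbitrary illustration values):

# tiny sketch of the 3D -> 2D step a VR renderer has to do anyway:
# a pinhole-style perspective projection of a 3D point onto the image plane
def project(point3d, focal_length=1.0):
    x, y, z = point3d
    # the division by depth is the whole "depth illusion": nearer points
    # spread further apart on the 2D plane than distant ones
    return (focal_length * x / z, focal_length * y / z)

for p in [(1.0, 1.0, 2.0), (1.0, 1.0, 4.0), (1.0, 1.0, 8.0)]:
    print(p, "->", project(p))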

Quote
Quote from: Requiem
Kurzweil has clearly never worked with bluetooth.  If I am very very lucky, I will not have to again.

fuck bluetooth... the coolest thing about bluetooth i've seen is that it's compatible with cochlear implants.  I worked with a deaf guy recently who wasn't deaf anymore because of his implant.  He could call someone and talk for hours on his bluetooth.

well that's very nice but it has nothing to do with Kurzweil's prediction that wireless would *replace* cables, now does it?

the point is that both cables and wireless have their own distinct advantages and disadvantages, which creates a useful place for each. not realizing this and claiming that one will replace the other is a pretty big lack of foresight.

Quote
Quote from: Requiem
No, two reasons: the libraries aren't going anywhere, so a lot of books will still be on paper, and the book publishers seem to be trying to sabotage it.  These are political, not technological, problems though.  Kurzweil seems to underestimate the potential damage of the politicos and corporate greed in general.

I speculate that once full visual-auditory virtual environments hit the stores, and we start (we've already started) augmenting real environments with virtual ones, there won't be much need for paper media, because we can digitally experience it as paper if we so choose.  It would still take a while for paper to disappear... probably a good while, but I don't remember reading Kurzweil stating 2019: No Paper.  On the latter point, Kurzweil is certainly an optimist when it comes to politics.  Still... he could be right.  We'll have to see.

here, the point is black swans.

the prediction fails because of politics, not technology? does that make the prediction any less inaccurate? no.

this is one major source of black swans in a lot of predictive situations: claiming that your predictive skills are no worse because the reason your predictions failed came from outside your domain.

that's very nice, but a prediction that didn't come true is just as useless, regardless of the reason why it failed. i'm not really interested in the reason, anyway. if you claim him to be so good, i'm interested in the accuracy.

Quote
Quote from: Requiem
You can have my keyboard when you take it from my cold dead cybernetic hands.  Or when you get me a neural interface that supports ssh, either way.

See the Emotiv headset (a neural interface coming up), Microsoft's and Apple's new motion- and depth-perceiving cameras, and AT&T and Bell Labs' synthetic voice technologies for a start.  That's just the tip of the iceberg.  Japan is way ahead in this area of the market.

except that nobody is actually going to use it for data input.

the emotiv thing simply doesn't have a bandwidth comparable to the speed with which people can enter data into a computer via the keyboard.
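
a crude way to put numbers on that gap, in Python. both rates below are assumptions (an ordinary typing speed, and a generous guess at what an EEG speller manages, not an Emotiv spec):

# rough input-bandwidth comparison: keyboard vs. a consumer EEG interface
# (all numbers are illustrative assumptions, not benchmarks)
wpm = 60                                   # an ordinary touch typist
keyboard_chars_per_sec = wpm * 5 / 60.0    # ~5 chars/s at 5 chars per word

eeg_chars_per_min = 10.0                   # generous assumption for an EEG speller
eeg_chars_per_sec = eeg_chars_per_min / 60.0

print(f"keyboard:    {keyboard_chars_per_sec:.1f} chars/s")
print(f"EEG speller: {eeg_chars_per_sec:.2f} chars/s")
print(f"ratio: ~{keyboard_chars_per_sec / eeg_chars_per_sec:.0f}x in favour of the keyboard")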

also, consider how much resistance even the simple switch from QWERTY to Dvorak keyboard layouts is meeting. Dvorak has a clear, objective, measurable advantage over QWERTY, even health benefits (much less RSI), but nobody except hardcore nerds wants to make the switch.

no matter how cool hand gesture thingies are going to be, the keyboard is definitely going to be around for a while.

and the Japanese just like gizmos; they're not actually predictive of what's going to happen. they just like gizmos, so they have a lot of gizmos. that doesn't mean that every single one of the gizmos Japan is "ahead" on is something we're going to have here.
look back at the gizmos the Japanese have been walking around with, and how many of those have completely failed to even appear on the radar in the West.

Quote
Quote from: Requiem
Speech synthesis has been here for a while, and some of the screen readers the blind use are fine-tuned enough to have accents.  Recognition is the same crap it was in '99, though you don't have to spend an hour training it for each new user either.  I don't think we have the first clue how natural language processing works yet, though 11 years may be enough.

I think you'll be very very surprised.

that's one thing i can agree with.

natural language processing is, afaik, really far along. all the building blocks are there; someone just needs to glue them together in the right way.

i don't understand why this is not yet happening.

i suppose i'm missing some crucial step here that we're not able to solve yet:

sound > phonemes > words > syntax > meaning

something like that. there are problems at every step, but most of them have been solved to reasonable accuracy, especially if you take into account domain knowledge and feedback between the networks to resolve ambiguities.
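
roughly the shape i mean, as a sketch in Python. every stage below is a stub standing in for a real component (acoustic model, lexicon plus language model, parser, semantics); only the chain and the feedback idea are the point:

# skeleton of the pipeline above: sound > phonemes > words > syntax > meaning
# every stage is a stub standing in for a real model; only the shape matters here

def sound_to_phonemes(audio):
    # would be an acoustic model returning scored candidate phoneme sequences
    return [("h", "e", "l", "o")]

def phonemes_to_words(phonemes):
    # would be a pronunciation lexicon plus language model resolving ambiguity
    return ["hello"]

def words_to_syntax(words):
    # would be a parser producing a syntactic structure
    return ("S", words)

def syntax_to_meaning(tree):
    # would map the parse onto a semantic representation, using domain
    # knowledge to pick between competing readings
    return {"act": "greeting"}

def understand(audio):
    # in a real system each arrow also feeds back: later stages re-rank
    # the hypotheses of earlier ones to resolve ambiguities
    return syntax_to_meaning(words_to_syntax(phonemes_to_words(sound_to_phonemes(audio))))

print(understand(b"raw samples"))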
Ex-Soviet Bloc Sexual Attack Swede of Tomorrow™
e-prime disclaimer: let it seem fairly unclear I understand the apparent subjectivity of the above statements. maybe.

INFORMATION SO POWERFUL, YOU ACTUALLY NEED LESS.

Requia ☣

okay, 2029 time:
Quote
* A $1,000 personal computer is 1,000 times more powerful than the human brain.

This will only hold true with respect to the 2019 prediction if you assume optimal progression, which doesn't happen most years; the 1000-fold increase in computing power that I've witnessed so far took about 15 years.
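
Quick sanity check on those numbers (treating growth as a clean exponential, which is itself an assumption):

# what doubling time does a 1000x increase over ~15 years imply, and how far
# does that rate get you in the 10 years between the 2019 and 2029 predictions?
import math

observed_years = 15.0
observed_factor = 1000.0
doubling_time = observed_years / math.log2(observed_factor)
print(f"implied doubling time: {doubling_time:.2f} years")   # ~1.5 years

window = 10.0  # 2019 -> 2029
for dt in (1.5, 2.0):
    factor = 2 ** (window / dt)
    print(f"doubling every {dt} years gives ~{factor:.0f}x over {window:.0f} years")
# even the optimistic 18-month rate gives ~100x in a decade, not 1000x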


Quote
Massively parallel neural nets, which are constructed through reverse-engineering the human brain, are in common use.

Neural nets are already in heavy use (IBM folded one into their antivirus back in 1996).  They do have a slight basis in brain studies, and models have been made of the cortex of rats, though the practical applications tend to have unique architectures (a side effect of the way neural programming works: you give it a problem and let it 'grow' into the parameters, effectively teaching the computer by telling it when it guesses right or wrong).
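
For a toy illustration of that "right or wrong" feedback loop, here's a single perceptron learning the OR function in Python.  Nothing like a production system, just the bare teaching mechanism:

# minimal sketch of "teaching by telling it when it guesses right or wrong":
# a single perceptron learning OR from feedback on each guess
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

for epoch in range(20):
    for (x1, x2), target in data:
        guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - guess          # the "right or wrong" signal
        w[0] += lr * error * x1         # nudge the weights toward the answer
        w[1] += lr * error * x2
        b += lr * error

print(w, b)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])  # [0, 1, 1, 1]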

Quote
Computer implants designed for direct connection to the brain are also available. They are capable of augmenting natural senses and of enhancing higher brain functions like memory, learning speed and overall intelligence.

Politicians are just going to love letting people do this, I'm sure.

Quote
Artificial Intelligences claim to be conscious and openly petition for recognition of the fact. Most people admit and accept this new truth.

Most people?  Most people are going to believe what big money wants them to believe: that AIs have no emotions, and are only programmed to pretend to care by ne'er-do-wells who want to undermine the economy.

I'm not sure how well a slave class of AIs would work though; the theoretical taskmasters are the same people who grew up on science fiction books and are likely to be sympathetic to the AIs.  Not to mention the kind of personality an AI created by open source programmers would be likely to have.

Quote
The manufacturing, agricultural and transportation sectors of the economy are almost entirely automated and employ very few humans. Across the world, poverty, war and disease are almost nonexistent thanks to technology alleviating want.

I think eliminating the need for 90% of the workforce is going to *create* a lot of war, poverty and disease, at least any time prior to post-scarcity.

A final comment on 2029: Kurzweil has never commented on how we are going to face the energy crunch and actually power all these magnificent machines.  By 2029 the oil will be nearly gone, especially if we keep increasing consumption at this rate, and while we've made a lot of small improvements, politics and physics seem to be severely limiting our ability to get what we're going to need in 20 years.
Inflatable dolls are not recognized flotation devices.

Daruko

I'm going to post something thicker, but real quickly I just want to add that the technology has been moving forward at a faster and faster rate regardless of politics, and especially American politics.  If we don't do it, they will.  If AI doesn't thrive here, it certainly will in Japan, or elsewhere.  Even if 75% of the world powers become Luddites, the 25% that don't will keep pushing forward at an exponential rate.

In fact, I'm not getting this "optimal" term from Kurzweil.  He has made conservative and less-than-conservative projections, but the Law of Accelerating Returns idea seems to be about the unstoppable nature of this technology.  Any country that tries to impede this research, or to prevent its population's access to it, may just watch its economic status plummet.  And any AI that is AS intelligent as a human being will also be MORE intelligent and more capable than a human being in a myriad of ways.  I think denying them rights would be a bad move.  They might just do something about it, and I'll bet it'd be over before we knew it.

I think it's a bit more complicated than creating a slave class of AIs.  Certainly, truly advanced AIs will prefer to have primitive AIs do their work for them.  I also doubt it will be very difficult for them to display emotions, and the scientific community has been expecting to put the Turing Test to good use for quite some time.  I believe they've been giving it a whirl lately, just for good measure.

All of this would be much more fun to go into than to just briefly touch on like this.  It's hard not to post on these topics before I'm ready to put in the full post, but I'll add more.

Triple Zero

Quote from: Requiem on May 26, 2008, 09:40:14 AM
Quote
Massively parallel neural nets, which are constructed through reverse-engineering the human brain, are in common use.

Neural nets are already in heavy use (IBM folded one into their antivirus back in 1996).  They do have a slight basis in brain studies, and models have been made of the cortex of rats, though the practical applications tend to have unique architectures (a side effect of the way neural programming works: you give it a problem and let it 'grow' into the parameters, effectively teaching the computer by telling it when it guesses right or wrong).

ok i'm just going to step in here and clarify a bit:

Neural Networks are a type of machine learning algorithm that is only very vaguely related to what the brain actually does. They are much, much simpler than that, and typically consist of about 100 up to a few thousand "neurons".

Increasing the number of neurons in a neural net does NOT always increase the power or accuracy of such a system, because of over-training. The idea here is that you adjust the parameters in the net (usually by stochastic gradient descent on a cost function) so that the network gets more and more accurate at correctly classifying the training vectors you feed it.
The thing is, you don't really care about the training vectors; you want the network to perform well on unseen data. If you give the net enough power to learn the training vectors verbatim, it will do so, and it will return very good accuracy on the training set but have incredibly poor generalization ability, so it's pretty much useless.
One of the factors in training a proper neural network is giving it just the number of degrees of freedom it needs for the job.
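
a minimal sketch of that over-training effect in Python, using scikit-learn (the sizes and settings below are arbitrary choices for illustration). the labels are pure noise, so the only thing the net can do is memorize them: near-perfect training accuracy, chance-level accuracy on held-out data:

# over-training demo: far more degrees of freedom than the data justifies
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.randn(200, 10)              # 200 random points, 10 features
y = rng.randint(0, 2, size=200)     # random 0/1 labels: nothing to generalize

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# a network with tens of thousands of parameters for ~150 training points
net = MLPClassifier(hidden_layer_sizes=(200, 200), solver="lbfgs",
                    max_iter=5000, random_state=0)
net.fit(X_train, y_train)

print("train accuracy:", net.score(X_train, y_train))   # close to 1.0 (memorized)
print("test accuracy: ", net.score(X_test, y_test))     # around 0.5 (chance level)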

Further, Neural Network algorithms have been widely replaced by newer and better Pattern Classification algorithms, such as Support Vector Machines (seems to be the current "market leader") and Learning Vector Quantization (the focus of my research group). These algorithms do not have anything to do with brains and/or neurons anymore, but they have other advantages (you can't really see *how* a neural network or an SVM has learned what it does, but LVQ allows this, for instance), and are more accurate.

And then, on the other hand, the experiments done with simulating the rat's cortex and such are an entirely different field of research. They don't have much to do with pattern recognition (and therefore with the classical meaning of the term "Neural Network"), but much more with bio-informatics and computational physics.
Very interesting nonetheless, but you shouldn't confuse the two.
Ex-Soviet Bloc Sexual Attack Swede of Tomorrow™
e-prime disclaimer: let it seem fairly unclear I understand the apparent subjectivity of the above statements. maybe.

INFORMATION SO POWERFUL, YOU ACTUALLY NEED LESS.

Dr. Pataphoros, SpD

Quote from: Requiem on May 25, 2008, 11:24:12 AM

Quote
Prototype personal flying vehicles using microflaps exist. They are also primarily computer-controlled.

Did he just promise me a flying car?


He promised you a prototype.

As to the rest of the discussion that has come over the weekend: we've been talking about the nature of predictions, the progression of technology, and the viability of certain scientific advances.  One side (the optimistic side) says good ol' Dr. K. has at least a good idea of where we're going.  They cite where we've been and where we are now as evidence.  The pessimistic side says the optimists are too optimistic.  They cite where we've been and where we are now as evidence.

Sound like I've summed it up pretty well so far?
-Padre Pataphoros, Bearer of Nine Names, Custodian of the Gate to the Forward Four, The Man Called Nobody, Philosopher of the Eleventeenth Sphere, The Noisy Ninja, Guardian of the Silver Hammer, Patron of the Perpetual Plan B, The Lord High Slacker, [The Secret Name of Power]

Requia ☣

If 25% of the world is Luddites, they can easily keep the other 75% down if they care enough; this is the wonder of democracies where two-thirds of the population doesn't vote.

Optimal progression refers to Moore's law, specifically the doubling of transistors every 18 months for a given amount of money; this almost never happens, though, and the real rate falls between 18 and 24 months.  And while I can't speak as much for fields outside my expertise, for computers that rate of acceleration has slowed fairly significantly.  It's coming back up, but in 5 or 10 more years it may well have halted altogether.  Oh, Moore's law will continue for a while, like a shambling zombie, as things get cheaper, but the actual progress is going to stop within a few years of the orgy of advancement IBM is busy orchestrating.  Yes, it may not happen; there are a lot of non-lithographic techniques that might one day replace what we use now, but this is serious black swan territory.  We have no idea if it will even be possible to break the 32 nm barrier, let alone when a process will materialize, or how much it will cost to do it, only that what we're doing now won't do it.
Inflatable dolls are not recognized flotation devices.

Cain

The IBM computers you mention... are they the sooper awesome ones talked about in Techmology?  Because those are fucking awesome, and I can see how it's going to take a while for the research and market to adapt.  Those things look incredible.

Requia ☣

I'm going to assume you mean my reference to IBM screwing with the playing field a couple of years hence?  I can't find the particular thread you're referring to, but it goes like this.

The current limit on transistor size is 45 nm (nanometers); only Intel and IBM can do that, and AMD will have it soon.  Other manufacturers are working between 55 and 90 nm.  Each step to a smaller size is a slow, laborious process of research and takes about 2 years.  A couple of months back, IBM decided they were going to give *everyone* the 32 nm process as soon as they get it.  Now, this doesn't affect CPUs much, since Intel will have it soon enough, but most other components of the computer stand to jump the curve of Moore's law by at least 2 or 3 years, since it suddenly becomes cheaper (mostly) to make a big leap instead of a little one.
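
The rough scaling arithmetic, if anyone wants it (assuming density goes with the inverse square of the feature size, and using the standard full nodes around the sizes above):

# rough node-shrink arithmetic: each full node is close to one Moore's-law doubling
nodes_nm = [90, 65, 45, 32]   # standard full nodes around the sizes cited above
for old, new in zip(nodes_nm, nodes_nm[1:]):
    density_gain = (old / new) ** 2
    print(f"{old}nm -> {new}nm: ~{density_gain:.1f}x more transistors per area")
# 45 -> 32 nm comes out near 2x, which is why handing everyone 32 nm early
# amounts to giving them a doubling ahead of schedule.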
Inflatable dolls are not recognized flotation devices.

Cramulus

Quote from: triple zero on May 26, 2008, 04:29:42 PM
Further, Neural Network algorithms have been widely replaced by newer and better Pattern Classification algorithms, such as Support Vector Machines (seems to be the current "market leader") and Learning Vector Quantization (the focus of my research group). These algorithms do not have anything to do with brains and/or neurons anymore, but they have other advantages (you can't really see *how* a neural network or an SVM has learned what it does, but LVQ allows this, for instance), and are more accurate.

very interesting, 000. Can you elaborate a bit on what these algorithms are used for?

NWC

Quote from: Requiem on May 27, 2008, 07:34:22 AM
I'm going to assume you mean my reference to IBM screwing with the playing field a couple of years hence?  I can't find the particular thread you're referring to, but it goes like this.

The current limit on transistor size is 45 nm (nanometers); only Intel and IBM can do that, and AMD will have it soon.  Other manufacturers are working between 55 and 90 nm.  Each step to a smaller size is a slow, laborious process of research and takes about 2 years.  A couple of months back, IBM decided they were going to give *everyone* the 32 nm process as soon as they get it.  Now, this doesn't affect CPUs much, since Intel will have it soon enough, but most other components of the computer stand to jump the curve of Moore's law by at least 2 or 3 years, since it suddenly becomes cheaper (mostly) to make a big leap instead of a little one.

I have no idea what's going on with this science nonsense, but I was curious and found this:

Quote
Tachyon DPT uses Brion's latest double patterning technology to allow advanced chip makers to develop devices down to the 22nm technology node. A production-ready, complete end-to-end solution that is available now, it supports both litho- (litho-etch-litho-etch) and spacer-DPT - two leading double patterning techniques. Tachyon DPT offers full-chip conflict-free pattern split, model-based OPC, model-based stitching compensation, and automatic density balancing.
source: http://findarticles.com/p/articles/mi_m0EIN/is_2008_Feb_25/ai_n24322071

Whatever all this stuff means, it sounds both cool and scary.



Also, would you put your brain in a robot body?
PROSECUTORS WILL BE TRANSGRESSICUTED

Daruko

Quote from: Requiem on May 27, 2008, 07:34:22 AM
I'm going to assume you mean my reference to IBM screwing with the playing field a couple of years hence?  I can't find the particular thread you're referring to, but it goes like this.

The current limit on transistor size is 45 nm (nanometers); only Intel and IBM can do that, and AMD will have it soon.  Other manufacturers are working between 55 and 90 nm.  Each step to a smaller size is a slow, laborious process of research and takes about 2 years.  A couple of months back, IBM decided they were going to give *everyone* the 32 nm process as soon as they get it.  Now, this doesn't affect CPUs much, since Intel will have it soon enough, but most other components of the computer stand to jump the curve of Moore's law by at least 2 or 3 years, since it suddenly becomes cheaper (mostly) to make a big leap instead of a little one.

And we're still using 2-dimensional architecture... after quad-core, and deca-core or whatever, we start using 3-dimensional architecture, molecular computing, etc.  Despite popular conjecture, I would propose that quantum computing can be physically integrated with digital... either way, I suspect Moore's law will fold into another trend, correlating with new paradigms.

Much too much emphasis is given to Moore's law.  The exponential rate of technological development and innovation seems to me to extend far beyond the threshold of one economic projection.  Even from a purely economic standpoint, computational price-performance seems no more solely dependent upon transistor size than transportation price-performance is dependent upon gasoline supply.  (I wish I could think of a better analogy there, but I haven't, so I'm gonna run with it.)  I would suggest that rather than decreasing the rate of "bang for your buck", the diminishment of fuel sources may increase the rate of innovation in the transportation industry, and although it is not obvious yet, we may find ourselves paying less to get around as alternatives are explored and developments are made.  Yes, this sounds like extreme optimism, but I'm not stating it as fact.  I'm offering it up as quite possible, given the trends across the board for accelerating technological progress.  It seems to me that revolutionary technologies are hitting us harder and faster every day, and this also seems to be enabling a massive amount of decentralized information and, more importantly, innovation.

Moore's law may break.  There may not be a new paradigm.  But like Y2K, I see no point in worrying about a slow-down in technological breakthroughs.  If you examine the last hundred years as compared to the last thousand, or the last ten as opposed to the last hundred, how could it be more reasonable to project a breaking point due to mere transistor limitations?  And yes, I suppose there's no REAL reason to expect it will keep speeding up indefinitely, but if I'm speculating anyway, I'd rather use an optimistic lens, because A) it's tremendously easy: faster technology makes faster technology, and B) it's so much more fun to stretch and examine the limits of the possible than to work safely within the confines of the "known".

Concerning nonbiological intelligence: a lot of people think we will see this... think THEY will help develop it during the next few decades.  IF, and I'm just saying IF, we DO see human-level AI in our lifetimes, it COULD be a massive evolutionary moment for primates.  There may be running and screaming and gnashing of teeth.  But for those paying attention, there may also be opportunities for one hell of a universal freakout!

Requia ☣

Quote
Even from a purely economic standpoint, computational price-performance seems no more solely dependent upon transistor size than transportation price-performance is dependent upon gasoline supply.  (I wish I could think of a better analogy there, but I haven't, so I'm gonna run with it.)

Ah the bad car analogy.  You should really come to /.

You really miss my point: in bringing up molecular computing and 3D architecture, you're claiming technology will advance in ways we aren't even sure are possible in theory yet, let alone whether they can be practically fabricated.

And no, quantum computing *can't* be integrated into the way we compute now, not because it's hard to make but because it behaves in a fundamentally different manner.  It might eventually be possible to use for general computing, but you would have to rebuild every line of code along the way, and even then, do you really want a spreadsheet that is maybe right and maybe wrong?  (Insert obligatory comment about Excel here.)

Aside from that, at least you got the 'maybe' part down :)

Oh and not to be mean, but get a browser with a decent spell checker.
Inflatable dolls are not recognized flotation devices.

Daruko

Quote from: Requiem on May 29, 2008, 07:21:26 AM
You really miss my point: in bringing up molecular computing and 3D architecture, you're claiming technology will advance in ways we aren't even sure are possible in theory yet, let alone whether they can be practically fabricated.

And no, quantum computing *can't* be integrated into the way we compute now, not because it's hard to make but because it behaves in a fundamentally different manner.  It might eventually be possible to use for general computing, but you would have to rebuild every line of code along the way, and even then, do you really want a spreadsheet that is maybe right and maybe wrong?  (Insert obligatory comment about Excel here.)

I've bolded the statements that need to be E-primed and are pretty much just wrong.  Quantum computers do not yield spreadsheets that are maybe right and maybe wrong... it doesn't seem to me that you understand this field very well at all.  But before you say "No yUo"... let me just say:
:cn:
Here, I'll give you a good start from Intel and IBM:

Concerning 3-dimensional Architecture and Moore's Law
Intel believes the best is yet to come. By 2015, Intel envisions processors with tens to potentially hundreds of cores per processor die. Those cores will be supporting tens, hundreds, or maybe even thousands of simultaneous execution threads.
Intel is even now researching 3-dimensional (3D) die and wafer stacking technologies, which could move device density from hundreds or thousands of pins, to a million or 10 million connections. This is the type of dramatic increase in memory-to-processor connectivity that will be required to deliver the bandwidth needed to support Intel's upcoming many-core architectures.
Intel also expects to see more natural, more human, and more error-tolerant interfaces; personalized, interactive 3D entertainment; and intelligent data management for both home and business applications.


Some of Intel's latest innovations and breakthrough research areas include:

-Packaging technology, including eliminating the bumps of solder that make the connections between the package and the chip, and so reducing the thickness of the layers and allowing the further shrinking of devices.
-Transistor design, including novel, tri-gate transistors that reduce leakage current in general, and so could reduce power consumption in mobile devices.
-New dielectric materials, such as High-K, which reduces leakage current by a factor of 100 over silicon dioxide.
-Extreme ultra-violet (EUV) lithography, which uses a wavelength of 13.5nm, is expected to enable the printing of features that are 10nm and below.   
-Silicon photonics, including the world's first continuous wave silicon laser, which solves the previously insurmountable, two-photon absorption problem.


This excerpt was taken from http://www.intel.com/technology/magazine/silicon/moores-law-0405.htm#section4 and I highly suggest anyone interested in this subject read the full article and check out the linked sections.  Very interesting stuff.

Concerning Molecular Computing

IBM scientists have built a computer circuit that is a quantum leap smaller than any yet created, using a technique they call "molecule cascade." The company's scientists claim this technique enables them to make computer logic elements 260,000 times smaller than those in today's silicon semiconductor chips..........

.... Heinrich noted that the molecule cascade circuit represents a completely new approach to computing.
He pointed out that current silicon-based computing relies on moving electrons through materials. In contrast, in IBM's molecule cascade circuit, "we're doing all the computations by moving single molecules from one location to another," he said.
IBM researchers built the circuit by creating a pattern of carbon monoxide molecules on a copper surface. They were then able to create a molecular cascade by moving one molecule, which in turn moved the remaining molecules like a line of dominoes.
"We use the precise locations of these molecules as our binary information," Heinrich said. "If [a molecule] is in location A, we call that logic 0; if it's in location B, we call that logic 1."

And you may find the article for that excerpt here: http://www.newsfactor.com/perl/story/19781.html

A simple Google search on these topics (molecular computing, DNA circuits, quantum computation, 3-dimensional CPU architecture, or any of the zillions of other innovative approaches to computation currently being explored), with Intel, IBM, Sony, Oxford, MIT, Cambridge, or other respectable centers of technological development included in the search, yields an enormous variety of current research, and most of the larger corporations have projected roadmaps.

Quote from: Requiem
Oh and not to be mean, but get a browser with a decent spell checker.

Am I making an overwhelming number of spelling errors?  Weird, I'm still not noticing it, but perhaps I'll look into that.

Excuse any tones of haughtiness in this post, but these days everyone and their brother seems to think they're an expert in physics and computation.  Please provide stronger examples of any impenetrable roadblocks.

hooplala

Quibbling about spelling is juvenile.  Unless you literally CAN NOT read it because of the spelling, STFU.
"Soon all of us will have special names" — Professor Brian O'Blivion

"Now's not the time to get silly, so wear your big boots and jump on the garbage clowns." — Bob Dylan?

"Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)"
— Walt Whitman