what i should have said is that the research field of AI is not actively looking to make any sort of self-aware/self-conscious system. the best we could currently do, i guess, is generate a population of random Kauffman-like networks (random boolean networks), throw that into some sort of evolutionary scheme (though with what fitness criteria, i wouldn't know), and it might become self-aware, or slightly intelligent, but not in any meaningful way :)
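to give an idea of what such a network looks like: here's a rough python sketch of a random boolean network, kauffman-style. just an illustration of the idea, not a serious experiment (the values of N and K are arbitrary, and i've left out the evolutionary part entirely, since as i said i wouldn't know what fitness criteria to use):

import random

N = 16  # number of nodes
K = 2   # inputs per node (kauffman's classic choice)

def random_network():
    # each node gets K random input nodes and a random boolean function
    inputs = [random.sample(range(N), K) for _ in range(N)]
    tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    return inputs, tables

def step(state, net):
    inputs, tables = net
    # every node looks up its next value in its truth table,
    # indexed by the current values of its input nodes
    return [tables[i][sum(state[src] << b for b, src in enumerate(inputs[i]))]
            for i in range(N)]

net = random_network()
state = [random.randint(0, 1) for _ in range(N)]
for _ in range(20):
    state = step(state, net)
print(state)  # typically settles into a cyclic attractor, or wanders chaotically

the interesting bit (kauffman's point, really) is that even completely random networks like this tend to fall into a small number of attractors, i.e. they self-organize a little. but self-aware? no.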
what we are looking for is simulating/emulating small modules and parts of human cognition. the AI programme at my university, which i do not follow (i'm Computer Science, but doing an AI-ish master), used to be called "Technical Cognition Science", which is IMO a much more accurate description of what they actually do (but they changed it because AI sounds more snazzy, and i can't blame them for it).
topics we/they study:
- speech recognition. translating soundwaves to phonemes, and translating phonemes to words/sentences. (fun fact: did you ever notice that the "space" between words in written text is completely absent when spoken?)
- computer vision. this is basically the reverse of what a 3D engine for a computer game does :) given an image, try to figure out the perspective transform, 3D coordinates of objects, shapes, textures, faces, whatnot.
- pattern recognition. this is my area of research. simulating human cognition's ability to sort incoming data (of any type) into classes, and getting better at it given more example data to learn from. it's sometimes vaguely based on the way human brains seem to work, but usually also combined with a bayesian/statistical approach (which our brains are intrinsically wired to fail at, badly) to make it work better. (see the little sketch after this list.)
- computational linguistics. from written text, automatically analyze verbs, nouns, grammatical structures. i would expect them to be reasonably good at this these days, but i'm afraid they aren't.
- robotics. yep, they're playing with Lego Mindstorms at the AI faculty ;-) and other stuff.
- multi-agent systems and logical reasoning. something humans are also not particularly good at, but computers are. used to solve logical puzzles, like the Prisoner's Dilemma and Game Theory stuff (sketch after the list as well).
- a lot more
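re: the pattern recognition item above, here's a toy illustration of the bayesian/statistical approach i mentioned: a naive bayes classifier in a few lines of python. the fruit data is completely made up, it's just to show the principle of sorting incoming data into classes based on example data:

from collections import defaultdict

# toy training data: (features, class label). purely made-up examples.
examples = [
    (("round", "red"), "apple"),
    (("round", "green"), "apple"),
    (("long", "yellow"), "banana"),
    (("long", "green"), "banana"),
]

def train(examples):
    class_counts = defaultdict(int)
    feature_counts = defaultdict(int)
    for features, label in examples:
        class_counts[label] += 1
        for f in features:
            feature_counts[(label, f)] += 1
    return class_counts, feature_counts

def classify(features, model):
    class_counts, feature_counts = model
    total = sum(class_counts.values())
    best, best_p = None, 0.0
    for label, count in class_counts.items():
        # p(class) times the product of p(feature|class),
        # with add-one smoothing so unseen features don't zero everything out
        p = count / total
        for f in features:
            p *= (feature_counts[(label, f)] + 1) / (count + 2)
        if p > best_p:
            best, best_p = label, p
    return best

model = train(examples)
print(classify(("round", "yellow"), model))  # -> 'apple' (barely)

note how it gets its answer purely from counting: add more examples and the estimates get better. which is exactly the part our own brains are wired to fail at.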
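and re: the multi-agent/game theory item, the prisoner's dilemma is easy enough to write down. a little sketch of two example strategies playing the iterated version (the payoffs are the standard textbook numbers, the strategies are just illustrations):

# payoff matrix: (my payoff, their payoff) for (my move, their move)
# C = cooperate, D = defect
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then copy whatever the opponent did last
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # what the *opponent* played so far
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # -> (9, 14)

this is the kind of thing computers reason through effortlessly and humans reliably mess up.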
now there are two problems:
- if you plug all these systems and some more together in a meaningful way, would you get something approaching human behaviour? perhaps, probably. but would it be self-aware? i don't think so.
- most of these systems are currently at anywhere between 50% and 98% accuracy (i get 98% sometimes with machine learning datasets, but those aren't really the interesting problems). plugging them together would pretty much fail and collapse in on itself.
Quote from: triple zero on February 15, 2008, 11:07:04 PM
and if you want me to make a guess, but don't hold me to it :) i think it's likely that any self aware AI would sooner come into existence by accident, than by design. just somebody hooking the right systems together and crosses the threshold, is what i expect, if it's gonna happen.
Hooking the right systems together and crosses the threshold? I'd be generous to assume something is lost in translation here, but you're fluent enough for me to think you just said AI will be an accident. And you call me unscientific? What's this threshold? There's no threshold for consciousness, it's an analog phenomenon.
as i said, this is my guess, my personal idea. also i don't intend to call you unscientific ;-)
but i do believe there is a threshold for consciousness, simply because a system is either capable of meaningful self-reference/self-representation/self-modelling, or it isn't.
also, it shouldn't be an analog phenomenon, because none of the interesting complex phenomena in nature are, in fact, analog. they are all based on complex combinatorics. i'd even dare say that without chemical reactions somehow rising above the level of analog phenomena into combinatorics, there wouldn't have been life.
think of DNA: it's discrete, it's combinatorics, it's a huge mess, but it works with symbols. when organisms mutate, they do so in little (or large) jumps, not along any continuous analog path.
think of neurons in the brain: they're discrete too. they build up potential and fire in pulses, not analog signals; when the threshold is reached, the neuron starts emitting spikes at a certain frequency.
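to illustrate the pulse/threshold thing: the simplest textbook model of this is the (leaky) integrate-and-fire neuron. a rough sketch, with arbitrary parameter values just chosen to show the behaviour:

def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    # accumulate input, leak a little each step, and emit a discrete
    # spike whenever the potential crosses the threshold, then reset
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0   # reset after firing
        else:
            spikes.append(0)
    return spikes

# a constant analog input comes out as a regular, discrete pulse train
print(integrate_and_fire([0.3] * 20))  # -> [0, 0, 0, 1, 0, 0, 0, 1, ...]

the analog quantity (the membrane potential) is in there, but what actually gets communicated is discrete pulses, and the input strength shows up as the firing frequency.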
but as i said, these are my own speculations. i can elaborate on them, but in that case you might wanna split this thread into one about whether it's possible to make predictions on the social impact of future technology, and one for musings about artificial intelligence (please do :) )
Quote
My current theory of AI is that to be truly conscious you must have an array of physical sensors that feed constant data into a behaviour algorithm. The model for intelligence simply does not need consciousness to be intelligent, and all intelligence is intrinsic to behaviour and how it reacts to new experiences and internal stimuli like abstract calculations such as emotions. I could go on, because this has a high mindshare with me.
physical sensors are very possible. the behaviour algorithm is highly complex, and we can't do that (yet?)
Quote
To address your main point, AI isn't robots right now, but it will inevitably be so in my opinion because intelligence without a physical presence is incapable of doing real work, which is what we want robots for in the first place. And in a physically enabled intelligence of any type it must have sensors to understand the world it's interfacing with, so it must de facto have senses. And there you have it; my prediction is that robotic AI will occur because there is money in it (real work to be done) and they will be governed by behavioural intelligence which must require sensors and will THUSLY be comparatively conscious in the same way humans are.
well, it's my opinion that robots don't need intelligence at all to be able to do real work.
look at car factories. lots of robots there. intelligent? not at all.
in fact i'd be very hesitant to build any kind of autonomous reasoning system into a robot. not because it just might become self-aware and take over the world, but because before it even gets there, it's gonna cause a shitload of trouble with the mistakes it will undoubtedly make.
i heard that the unmanned robot gun tank thingies they deployed somewhere (Korea?) killed a bunch of friendly soldiers?
also, think of the ED-209 from Robocop. a robot? yes. intelligent? kind of.
Quote
And strong AI will perforce be MADE on a computer, because broccoli and cheese doesn't process binary code very fast. So yes, it WILL occur first on a computer lacking robotics. However, I am highly suspect of any robotic AI that was programmed on a computer without some kind of physical interface to train on. I think any kind of useful strong AI with a robotic form will have to be given the form, then trained with algorithmic learning processes to attain a level of control that makes it useful. The future of AI is in robotics because if it remains purely digital then it has far fewer ways of being a profit to its creators. You're just not following the money if you can't see that.
yes, i agree with that principle, but personally i think that a digital environment, like the Internet would also be sufficient for an AI to train and learn its skills.
and it just seems to me that the Internet is a very likely breeding ground for the accidental hookup of the right systems/modules to spawn such an AI for the first time, rather than a robotics laboratory.
This is fascinating, and I'll respond with a lengthy post from my philosophical angle as to how I see things, but right now I'm sort of burned out and stressed from other stuff, so expect it tomorrow.
kthx
I am glad you posted this, if only because most people don't understand the work that's being done in the area of AI, let alone the areas of departure from whence it is most likely to rise.
i'm hesitant. i usually can't stand transhumanists.
they think they can solve all their problems with technology, instead of people
IMO
eta: Not that I'd say no to being digitized, I just think people who can't stop talking about it are waterheads.