FFFFUUUUUUUUUU

Started by Jasper, January 14, 2010, 06:47:56 AM


Jasper

Okay, first things first.  I've set my life goal.  I'm currently working on it.  Said goal is to research and develop machine consciousness.  It means a lot to me, but the reasons are for another thread.  This is a subject I ponder as often as one could sanely ponder anything.  It consumes me at the expense of my being able to talk about other subjects competently.

Okay.  So, machine consciousness.  How hard could that be?  Sometime last year I found out about a guy named Chalmers, who has caused me a great deal of discomfiture.  He has described a problem with consciousness that supports his brand of dualism.




I hate dualism.





Here's a brief on Chalmers' hard problem:

http://en.wikipedia.org/wiki/Hard_problem_of_consciousness


SEE THIS

WHAT DO

:x

BabylonHoruv

Fall back on Turing.

You're a special case, Babylon.  You are offensive even when you don't post.

Merely by being alive, you make everyone just a little more miserable

-Dok Howl

Jasper

Quote from: BabylonHoruv on January 14, 2010, 06:51:47 AM
Fall back on Turing.

Intelligence =/= Consciousness.  Sure, intelligent behavior is theoretically feasible with enough hard work, but I am interested in consciousness.  To be more specific, I want machines that experience reality the way we do, as opposed to just acting smart. ...Like we do. :/

I'm currently researching a cool old guy by the name of Dan Dennett, another philosopher who studies consciousness in terms of actual neuroscience.  His goal is to prove that consciousness itself isn't all it's cracked up to be: just a bag of tricks.  This pleases me greatly, because if he's right, materialism is valid, meaning a machine can theoretically become conscious.

Dennett wrote a swathe of books.  I'm reading them.  Right now I'm reading "Kinds of Minds", which is interesting.  I will post notes here when I finish, if this thread gets some interest.

LMNO

Perhaps Hofstadter has some insight?


Crudely put, if you stack enough high-function metaprocessors on top of the sensory equipment, the possibility of emergence may increase.  That is to say, you can't "program" consciousness, but you can create fertile environments which may foster it.
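
(Purely as an illustration of the "stacking" idea, and definitely not anything out of Hofstadter himself: here is a minimal Python sketch of processors that watch other processors.  Every number in it, the layer count, the random coupling, the feedback strength, is an arbitrary assumption; the point is only the architecture, layers listening to the layers below them plus a bit of top-down feedback, not a claim that this produces consciousness.)

# A toy "stack of metaprocessors": each layer listens to the layer below it,
# plus a little feedback from the layer above, on top of noisy sensory input.
# All constants are arbitrary placeholders.
import math
import random

LAYERS = 5     # metaprocessor layers stacked on the sensory input
UNITS = 8      # units per layer
STEPS = 200    # simulation steps

random.seed(23)

# Random coupling: unit i of layer k listens to every unit of the layer below.
weights = [[[random.uniform(-1.0, 1.0) for _ in range(UNITS)]
            for _ in range(UNITS)]
           for _ in range(LAYERS)]

state = [[0.0] * UNITS for _ in range(LAYERS)]

def sensory(t):
    """Bottom-level input: a slow rhythm per channel plus noise."""
    return [math.sin(t / 10.0 + i) + random.gauss(0.0, 0.3) for i in range(UNITS)]

def step(state, t):
    below = sensory(t)
    new_state = []
    for k in range(LAYERS):
        above = state[k + 1] if k + 1 < LAYERS else state[0]  # crude feedback loop
        layer = [math.tanh(sum(w * x for w, x in zip(weights[k][i], below))
                           + 0.2 * above[i])
                 for i in range(UNITS)]
        new_state.append(layer)
        below = layer   # this layer is the "sensory input" of the next one up
    return new_state

for t in range(STEPS):
    state = step(state, t)

def corr(a, b):
    """Rough measure of how much two layers still track each other."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    norm = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return cov / norm if norm else 0.0

print("top vs bottom layer correlation:", round(corr(state[-1], state[0]), 3))

Running it prints a single correlation number; the hand-wavy idea is that as LAYERS grows and the couplings feed back on each other, tracing why the top layer is in the state it's in stops being practical, which is the "fertile environment" part rather than the "program it directly" part.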

The Johnny


How about making a "rat overmind"? Instead of building consciousness from scratch, you just add the technology to enhance an already functioning one.

Or maybe that's cheating?
<<My image in some places, is of a monster of some kind who wants to pull a string and manipulate people. Nothing could be further from the truth. People are manipulated; I just want them to be manipulated more effectively.>>

-B.F. Skinner

Elder Iptuous

What LMNO said.  First thing I thought of was Hofstadter...
I don't think the problem is a discrete one.  I would think the initial supposition would be that consciousness is a gradient.

LMNO

It almost seems a cop-out, though.  As soon as you get enough metaprocessors cross-talking so much that you can't figure out what's going on, you're probably going to get some sort of consciousness.


OMG, Chaos Theory as grounds for consciousness.
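
(Since chaos got invoked: for anyone who wants the one-liner version, here's a toy logistic map in Python, nothing to do with actual neurons, just to show that a completely deterministic rule can still be unpredictable in practice.  The parameter 3.9 and the two starting values are arbitrary.)

# Two runs of the logistic map x -> r*x*(1-x), started almost identically.
# Same deterministic rule, yet the trajectories end up totally unrelated.
r = 3.9                        # parameter chosen in the chaotic regime
a, b = 0.500000, 0.500001      # nearly identical starting states

for step in range(1, 41):
    a = r * a * (1.0 - a)
    b = r * b * (1.0 - b)
    if step % 10 == 0:
        print("step %2d:  a = %.6f   b = %.6f   gap = %.6f" % (step, a, b, abs(a - b)))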

Triple Zero

Quote from: LMNO on January 14, 2010, 01:30:16 PM
Perhaps Hofstadter has some insight?


Crudely put, if you stack enough high-function metaprocessors on top of the sensory equipment, the possibility of emergence may increase.  That is to say, you can't "program" consciousness, but you can create fertile environments which may foster it.

Yes, this. I was gonna say about the same thing.

I'm pretty convinced myself that this is the way our (human) consciousness forms.

Of course that doesn't mean there are no other (more controllable?) ways.

However, I don't see any arguments in this "hard problem" wiki page that preclude taking the emergence route.

I think Asimov's positronic brains also did a similar thing? [although he never explicitly stated this in his books, it seemed to me they were like highly complex clumps of computronium, built piece by piece, module upon module, and humans were no longer really able to grasp the entire workings of the things, hence the need for android psychologists :) ]
Ex-Soviet Bloc Sexual Attack Swede of Tomorrow™
e-prime disclaimer: let it seem fairly unclear I understand the apparent subjectivity of the above statements. maybe.

INFORMATION SO POWERFUL, YOU ACTUALLY NEED LESS.

Vaudeville Vigilante

I'm not sure we've developed a fully functional description of consciousness.  The dividing line between consciousness and intelligence also seems very wiggly and hard to pin down.  In fact, even the term intelligence is difficult to categorically define, and I think this was what prompted the development of tests like Turing's. 

There are a lot of differing theories developing from different angles, which is a good thing, because we're attempting to model very sophisticated processes.  Although I don't share all of Dennett's opinions on this subject, he does share my distaste for the overinflated dualistic language often used to describe consciousness, which we can't seem to keep from beating our heads against.

I think Dennett developed an interesting and flexible approach in his Consciousness Explained (<--audacious humor), and you might like his take on this mahdjickal quantum qualia hard problem crap, as he thoroughly rapes its definitions.  As Patricia Churchland has said, "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules."  As an optimistic note towards your similar end, I think each new success in any area of modelling the computational processes of the brain is very likely to yield results closer to a conscious AI, or an intelligence which seems to possess a so-called "emergent subjective experience".  While we diverge at points, I'm at least with Dennett that this "hard problem" is not a problem at all, and holds little promise towards achieving much beyond masturbatory rhetoric.  There are far too many engineering problems to work out, algorithmic problems to conquer, and computational models to realize to waste time battling with tired philosophical garbage.  Philosophical terminology is virtually useless in this domain.  Empirical data is crucial.

Template

I seem to recall that Asimov's positronic brains were seeded with an element of pure randomness.  Imagine the tides coming and going on a beach.  Things changing inexorably, based on a random starting state and mechanical drive.
Now, in my view, there's no problem if consciousness exists dually to the living body!  The question becomes how to mount or adapt consciousness to a synthetic body.  Or a synthetic body to consciousness.

Iason Ouabache

Quote from: LMNO on January 14, 2010, 01:30:16 PM
Perhaps Hofstadter has some insight?
Kill two birds with one stone and read The Mind's I, which was edited by Hofstadter and Dennett.
You cannot fathom the immensity of the fuck i do not give.
    \
┌( ಠ_ಠ)┘┌( ಠ_ಠ)┘┌( ಠ_ಠ)┘┌( ಠ_ಠ)┘

Elder Iptuous

Iason,
Not having read that, does it have a significant amount of overlap with "I Am a Strange Loop"?
I noticed that it seems to have some essays that are similar to, or the same as, ones from either that or GEB, if I recall correctly...

Cain

Quote from: Iason Ouabache on January 14, 2010, 03:37:18 PM
Quote from: LMNO on January 14, 2010, 01:30:16 PM
Perhaps Hofstadter has some insight?
Kill two birds with one stone and read The Mind's I, which was edited by Hofstadter and Dennett.

Hah, I was just coming back to this thread to suggest the same thing.

Vaudeville Vigilante

Quote from: Iason Ouabache on January 14, 2010, 03:37:18 PM
Quote from: LMNO on January 14, 2010, 01:30:16 PM
Perhaps Hofstadter has some insight?
Kill two birds with one stone and read The Mind's I, which was edited by Hofstadter and Dennett.
Yes, thank you much for the recommendation.  I will have to read this one.

Jasper

You guys rock.  I already knew about Hofstadter's work, but I didn't know he made a book with Dennett.  I have been trying to work up the enthusiasm to read GEB, but it's just such a damned big book.  I will definitely check out The Mind's I.

Still, my concerns are not entirely quelled.  Until there is strong evidence that shows that our subjective experience of reality can be explained objectively (or that it can't), I can't be satisfied.
Quote from: LMNO on January 14, 2010, 02:31:28 PM
It almost seems a cop-out, though.  As soon as you get enough metaprocessors cross-talking so much that you can't figure out what's going on, you're probably going to get some sort of consciousness.


OMG, Chaos Theory as grounds for consciousness.


Despite it being sort of disappointing ("Oh, yes, great, consciousness is chaotic and therefore inscrutable, oh well.") I think there are sufficient grounds to say chaos is inherent in the brain.  Neuroscience has provided us with math that describes the exact behavior of neurons mechanistically, but at the scale of a whole brain the equations become so hugely complex that they are functionally intractable.  Any mechanistic theory of mind will have to provide for these conditions.
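
(For anyone curious what that neuron math looks like at its absolute simplest, here's a toy Python sketch of a single leaky integrate-and-fire neuron stepped forward with Euler integration.  It's a deliberately stripped-down stand-in for the real biophysical models like Hodgkin-Huxley; every constant is a placeholder value, and the point is only that one neuron is trivial while ~10^11 densely coupled ones are where the intractability comes from.)

# One leaky integrate-and-fire neuron, stepped forward with plain Euler
# integration.  Easy for one neuron; the brain has on the order of 1e11 of
# them, densely coupled, which is where "functionally intractable" comes in.

V_REST = -65.0      # resting potential, mV
V_THRESH = -50.0    # spike threshold, mV
V_RESET = -70.0     # reset potential after a spike, mV
TAU = 10.0          # membrane time constant, ms
R = 10.0            # membrane resistance, MOhm
DT = 0.1            # Euler time step, ms

def simulate(input_current, duration_ms):
    """Return the spike times (ms) for a constant input current (nA)."""
    v = V_REST
    spikes = []
    t = 0.0
    while t < duration_ms:
        dv = (-(v - V_REST) + R * input_current) / TAU
        v += dv * DT
        if v >= V_THRESH:
            spikes.append(round(t, 1))
            v = V_RESET
        t += DT
    return spikes

if __name__ == "__main__":
    print("spike times (ms):", simulate(input_current=2.0, duration_ms=100.0))

Scaling that same loop up to tens of billions of coupled units with realistic channel dynamics is where "we have the equations" and "we can actually compute with them" part ways.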