Principia Discordia

Principia Discordia => Literate Chaotic => Topic started by: Jasper on January 14, 2010, 06:47:56 AM

Title: FFFFUUUUUUUUUU
Post by: Jasper on January 14, 2010, 06:47:56 AM
Okay, first things first.  I've set my life goal.  I'm currently working on it.  Said goal is to research and develop machine consciousness.  It means a lot to me, but the reasons are for another thread.  This is a subject I ponder as often as one could sanely ponder anything.  It consumes me at the expense of my being able to talk about other subjects competently.

Okay.  So, machine consciousness.  How hard could that be?  Sometime last year I found out about a guy named Chalmers, who has caused me a great deal of discomfiture.  He has described a problem with consciousness that supports his brand of dualism.




I hate dualism.





Here's a brief on Chalmers' hard problem:

http://en.wikipedia.org/wiki/Hard_problem_of_consciousness


SEE THIS

WHAT DO

:x
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 14, 2010, 06:51:47 AM
Fall back on Turing.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 14, 2010, 06:56:18 AM
Quote from: BabylonHoruv on January 14, 2010, 06:51:47 AM
Fall back on Turing.

Intelligence =/= Consciousness.  Sure, intelligent behavior is theoretically feasible with enough hard work, but I am interested in consciousness.  To be more specific, I want machines that experience reality the way we do, as opposed to just acting smart. ...Like we do. :/

I'm currently researching a cool old guy by the name of Dan Dennett, another philosopher who researches consciousness in terms of actual neuroscience.  His goal is to prove that consciousness itself isn't all it's cracked up to be; just a bag of tricks.  This pleases me greatly, because if he's right, materialism is valid.  Meaning a machine can theoretically become conscious.

Dennett wrote a swathe of books.  I'm reading them.  Right now I'm reading "Kinds of Minds", which is interesting.  I will post notes here when I finish, if this thread gets some interest.
Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on January 14, 2010, 01:30:16 PM
Perhaps Hofstadter has some insight?


Crudely put, if you stack enough high-function metaprocessors on top of the sensory equipment, the possibility of emergence may increase.  That is to say, you can't "program" consciousness, but you can create fertile environments which may foster it.
Title: Re: FFFFUUUUUUUUUU
Post by: The Johnny on January 14, 2010, 01:34:27 PM

How about making a "rat overmind"? Instead of building consciousness from scratch, you just add the technology to enhance an already functioning one.

Or maybe that's cheating?
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on January 14, 2010, 02:09:18 PM
What LMNO said.  first thing i thought of was Hofstadter...
I don't think the problem is a discrete one.  i would think the initial supposition would be that consciousness is a gradient.
Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on January 14, 2010, 02:31:28 PM
It almost seems a cop-out, though.  As soon as you get enough metaprocessors cross-talking so much that you can't figure out what's going on, you're probably going to get some sort of consciousness.


OMG, Chaos Theory as grounds for consciousness.
Title: Re: FFFFUUUUUUUUUU
Post by: Triple Zero on January 14, 2010, 02:39:17 PM
Quote from: LMNO on January 14, 2010, 01:30:16 PM
Perhaps Hofstadter has some insight?


Crudely put, if you stack enough high-function metaprocessors on top of the sensory equipment, the possibility of emergence may increase.  That is to say, you can't "program" consciousness, but you can create fertile environments which may foster it.

Yes, this. I was gonna say about the same thing.

I'm pretty convinced myself that this is the way our (human) consciousness forms.

Of course that doesn't mean there are no other (more controllable?) ways.

However, I don't see any arguments in this "hard problem" wiki page that preclude taking the emergence route.

I think Asimov's positronic brains also did a similar thing? [although he never explicitly stated this in his books, it seemed to me they were like highly complex clumps of computronium, built piece by piece, module upon module, and humans were no longer really able to grasp the entire workings of the things--hence the need for android psychologists :) ]
Title: Re: FFFFUUUUUUUUUU
Post by: Vaudeville Vigilante on January 14, 2010, 03:22:28 PM
I'm not sure we've developed a fully functional description of consciousness.  The dividing line between consciousness and intelligence also seems very wiggly and hard to pin down.  In fact, even the term intelligence is difficult to categorically define, and I think this was what prompted the development of tests like Turing's. 

There are a lot of differing theories developing from different angles, which is a good thing, because we're attempting to model very sophisticated processes.  Although I don't share all of Dennett's opinions on this subject, he does share my distaste for the overinflated dualistic language often used to describe consciousness, which we can't seem to keep from beating our heads against.

I think Dennett developed an interesting and flexible approach in his Consciousness Explained (<--audacious humor), and you might like his take on this mahdjickal quantum qualia hard problem crap, as he thoroughly rapes its definitions.  As Patricia Churchland has said, "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules."  As an optimistic note towards your similar end, I think each new success in any area of modelling the computational processes of the brain is very likely to yield results closer to a conscious AI, or an intelligence which seems to possess a so-called "emergent subjective experience".  While we diverge at points, I'm at least with Dennett that this "hard problem" is not a problem at all, and holds little promise towards achieving much beyond masturbatory rhetoric.  There are far too many engineering problems to work out, algorithmic problems to conquer, computational models to realize, to waste time battling with tired philosophical garbage.  Philosophical terminology is virtually useless in this domain.  Empirical data is crucial.
Title: Re: FFFFUUUUUUUUUU
Post by: Template on January 14, 2010, 03:28:42 PM
I seem to recall that Asimov's positronic brains were seeded with an element of pure randomness.  Imagine the tides coming and going on a beach.  Things changing inexorably, based on a random starting state and mechanical drive.
Now, in my view, there's no problem if consciousness exists dually to the living body!  The question becomes how to mount or adapt consciousness to a synthetic body.  Or a synthetic body to consciousness.
Title: Re: FFFFUUUUUUUUUU
Post by: Iason Ouabache on January 14, 2010, 03:37:18 PM
Quote from: LMNO on January 14, 2010, 01:30:16 PM
Perhaps Hofstadter has some insight?
Kill two birds with one stone and read The Mind's I (http://en.wikipedia.org/wiki/The_Mind%27s_I), which was edited by Hofstadter and Dennett.
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on January 14, 2010, 03:45:19 PM
Iason,
not having read that, does it have a significant amount of overlap with "I am a strange loop"?
i noticed that it seems to have some essays that are similar/same from either that or GEB, if i recall correctly...
Title: Re: FFFFUUUUUUUUUU
Post by: Cain on January 14, 2010, 04:49:24 PM
Quote from: Iason Ouabache on January 14, 2010, 03:37:18 PM
Quote from: LMNO on January 14, 2010, 01:30:16 PM
Perhaps Hofstadter has some insight?
Kill two birds with one stone and read The Mind's I (http://en.wikipedia.org/wiki/The_Mind%27s_I), which was edited by Hofstadter and Dennett.

Hah, I was just coming back to this thread to suggest the same thing.
Title: Re: FFFFUUUUUUUUUU
Post by: Vaudeville Vigilante on January 14, 2010, 06:00:32 PM
Quote from: Iason Ouabache on January 14, 2010, 03:37:18 PM
Quote from: LMNO on January 14, 2010, 01:30:16 PM
Perhaps Hofstadter has some insight?
Kill two birds with one stone and read The Mind's I (http://en.wikipedia.org/wiki/The_Mind%27s_I), which was edited by Hofstadter and Dennett.
Yes, thank you much for the recommendation.  I will have to read this one.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 14, 2010, 08:41:42 PM
You guys rock.  I already knew about Hofstadter's work, but I didn't know he made a book with Dennett.  I have been trying to work up the enthusiasm to read GEB, but it's just such a damned big book.  I will definitely check out The Mind's I.

Still, my concerns are not entirely quelled.  Until there is strong evidence that shows that our subjective experience of reality can be explained objectively (or that it can't), I can't be satisfied.
Quote from: LMNO on January 14, 2010, 02:31:28 PM
It almost seems a cop-out, though.  As soon as you get enough metaprocessors cross-talking so much that you can't figure out what's going on, you're probably going to get some sort of consciousness.


OMG, Chaos Theory as grounds for consciousness.


Despite it being sort of disappointing ("Oh, yes great, consciousness is chaotic and therefore inscrutable oh well.") I think there are sufficient grounds to say chaos is inherent in the brain.  Neuroscience has provided us with math that describes the exact behavior of neurons mechanistically, but the equations become so hugely complex that they are functionally uncomputable.  Any mechanistic theory of mind will have to provide for these conditions.
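
For a toy picture of what that mechanistic neuron math looks like: a leaky integrate-and-fire neuron stepped with Euler integration.  Only a sketch - the constants here are made up, and the real models in the literature are far more complex:

#include <iostream>

// Leaky integrate-and-fire neuron, Euler-stepped.
// All constants are arbitrary illustration values, not fitted to biology.
int main()
{
   double v = -70.0;                           // membrane potential (mV)
   const double v_rest = -70.0, v_thresh = -55.0, v_reset = -75.0;
   const double tau = 10.0;                    // membrane time constant (ms)
   const double dt = 0.1;                      // timestep (ms)
   const double input = 2.0;                   // constant drive (arbitrary units)

   for (int step = 0; step < 1000; ++step)
   {
      v += dt * ((v_rest - v) / tau + input);  // leak toward rest, plus drive
      if (v >= v_thresh)                       // threshold crossed: spike
      {
         std::cout << "spike at t = " << step * dt << " ms\n";
         v = v_reset;                          // reset after the spike
      }
   }
   return 0;
}

Even this caricature is nonlinear enough that large coupled populations of such units defy closed-form prediction, which is the point above.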

Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on January 14, 2010, 08:44:52 PM
To avoid descending into the morass of other threads, I hereby propose that when we talk about the unpredictability of complex systems, we use the term "Chaos Theory", and when we talk about the vast expanse of unknowable random stuff that is beyond Order and Disorder, we use regular old "Chaos".
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 14, 2010, 09:27:54 PM
Quote from: Felix on January 14, 2010, 08:41:42 PM
You guys rock.  I already knew about Hofstadter's work, but I didn't know he made a book with Dennett.  I have been trying to work up the enthusiasm to read GEB, but it's just such a damned big book.  I will definitely check out The Mind's I.

Still, my concerns are not entirely quelled.  Until there is strong evidence that shows that our subjective experience of reality can be explained objectively (or that it can't), I can't be satisfied.
Quote from: LMNO on January 14, 2010, 02:31:28 PM
It almost seems a cop-out, though.  As soon as you get enough metaprocessors cross-talking so much that you can't figure out what's going on, you're probably going to get some sort of consciousness.


OMG, Chaos Theory as grounds for consciousness.


Despite it being sort of disappointing ("Oh, yes great, consciousness is chaotic and therefore inscrutable oh well.") I think there are sufficient grounds to say chaos is inherent in the brain.  Neuroscience has provided us with math that describes the exact behavior of neurons mechanistically, but the equations become so hugely complex that they are functionally uncomputable.  Any mechanistic theory of mind will have to provide for these conditions.



I read GEB and enjoyed the hell out of it; however, I wouldn't recommend it as an aid in understanding the means of building consciousness.  It has a lot to say about building information systems, and explained the incompleteness theorem in a way that really sunk in for me, but what it had to say about consciousness seemed more about what is outside it than what is inside it.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 14, 2010, 09:45:32 PM
Quote from: LMNO on January 14, 2010, 08:44:52 PM
To avoid descending into the morass of other threads, I hereby propose that when we talk about the unpredictability of complex systems, we use the term "Chaos Theory", and when we talk about the vast expanse of unknowable random stuff that is beyond Order and Disorder, we use regular old "Chaos".

This is a good idea. 
Title: Re: FFFFUUUUUUUUUU
Post by: The Good Reverend Roger on January 14, 2010, 10:02:43 PM
WTF?

Why make machine consciousness?  We have enough assholes already.   :horrormirth:
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 14, 2010, 10:09:18 PM
Quote from: The Good Reverend Roger on January 14, 2010, 10:02:43 PM
WTF?

Why make machine consciousness?  We have enough assholes already.   :horrormirth:

You said you were up for "any" program. :lulz:

There are good reasons.  We need machines that can "Do what I mean" rather than "What I say", and a conscious machine would theoretically be able to understand what we say, and translate for less intelligent machines that can do dangerous or difficult jobs.  Jobs like building a space elevator, or farming the collective rooftops of a city to provide produce without the difficulty of transportation pollution or nasty preservatives.

And they can survive in a vacuum.  We could provide them with whatever information and material supplies they want, they provide us with research done by space exploration.  They help us terraform other planets, due to their ability to survive on mere electricity.  The list goes on.

But I didn't want this thread to be about "why", so much as "how".
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 14, 2010, 10:22:20 PM
Well why don't you treat every word in our vocabulary as a Symbolic Definition/Function?

Next time you are given instructions, process them very slowly in pseudo-language

(I am going to get really messy here - and I am probably missing the entire point completely)

"Move over there"

3 words - but that would have a lot of cross-reference meaning.

MOVE = Go - a personal command. Go is a movement command.
Over = Relate back to previous word, does not mean end - Go implies movement so Over would indicate a position
There = based on the previous two words, this would imply the position I would go.

.. and a shit load of other referencing.

Since a lot of words in the English language relate to each other in one way or another, there are many different ways to say the same thing but with many different words.

So I think you could try to split the words into 2 different categories, the Lower and Higher. The higher will always reference back to the lower, just with some extra commands to save time - and the lower would be the instructions.

Sorta like how C converts into ASM
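
A toy sketch of that dispatch idea (the word table and the lower-level command names are all invented here):

#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Toy "higher words reference lower instructions" lookup.
// Vocabulary and command names are made up for illustration.
int main()
{
   std::map<std::string, std::string> lower = {
      {"move",  "CMD_GO"},        // a movement command
      {"over",  "MOD_POSITION"},  // modifies the previous command with a position
      {"there", "ARG_TARGET"}     // resolves to the target location
   };

   std::istringstream words("move over there");
   std::string word;
   while (words >> word)
      std::cout << word << " -> " << lower[word] << "\n";
   return 0;
}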

My example was bad - since using language like that would mean the machine needs to understand and interpret physical movements too, but would the understanding come from the functions behind the words?

Or am I completely off the point?
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 14, 2010, 10:26:27 PM
I do not believe that linguistics will lead to intelligent behavior, much less consciousness.

By that rationale we are basically sophisticated chat bots.

The point (for me) is to create a system that behaves like a human brain, thereby recreating consciousness (whatever that may be) inside a synthetic medium.
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 14, 2010, 10:30:19 PM
What are you looking for in a consciousness?

I don't claim to understand the human brain, but isn't that how we usually think? (From my previous post) - The words contain symbolism that our brain will reference and check and convert into functions to perform.

Heck, sometimes I really think people are just sophisticated computers.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 14, 2010, 10:31:55 PM
What do you mean exactly?
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 14, 2010, 10:34:25 PM
Well you said you wanted to give a machine a consciousness - what does that entail? Decision making? Morals/ethics? Self-awareness/learning (A.I style?)
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 14, 2010, 10:38:14 PM
Subjective experience of sensory reality, the ability to reflect, the ability to form decisions and behave with special regard to expected outcomes, language processing, social cognition, intuitive ability to report internal states.
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 14, 2010, 10:49:24 PM
Hire a few Asian programmers and you might get lucky.

I guess the easiest out of the list would be language processing and reports on internal states - damn how the hell did our creators manage it?!
Title: Re: FFFFUUUUUUUUUU
Post by: Triple Zero on January 14, 2010, 11:02:02 PM
Quote from: Felix on January 14, 2010, 09:45:32 PM
Quote from: LMNO on January 14, 2010, 08:44:52 PM
To avoid descending into the morass of other threads, I hereby propose that when we talk about the unpredictability of complex systems, we use the term "Chaos Theory", and when we talk about the vast expanse of unknowable random stuff that is beyond Order and Disorder, we use regular old "Chaos".

This is a good idea. 

Also, "chaotic system", which is the sort of system considered under Chaos Theory.

Cause then I won't have to change my regular usage of those words. Don't pin me on it, but I don't think I have ever used the word "Chaos" on its own to talk about "Chaos Theory".
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 14, 2010, 11:26:11 PM
Hmm ...

I guess lets take a step back and look at humans -
At an instinct level - the sub-conscious would use what it must to fulfil its inner desires. Hunger = eat, Thirsty = Drink, Tired = Rest, Pain = No (Unless you're a masochist).

So looking at a machine, a machine that knows when its battery levels are low (The internal monitor - doesn't need to be intuitive) and goes to recharge itself - I think that would be considered a conscious act, if it had a choice between priorities - whether to finish the task at hand and potentially suffer for not going to rest, or just go back and recharge in its bed. Because it will be able to calculate the energy needed for the next task, and the ride home etc.

If the machine detects that it is low on oil, it will go drink some to keep its gears in motion.
If the machine detects its low on fuel, it will eat (well more like drink) something so that it can convert that into Energy so it won't have to Rest as often.
If the machine detects injury from performing a certain action, it will add it to the blacklist (to avoid in future cases) so the injury will not be repeated.

Just looks like it will be a massive indexing system in the end - or is an entirely different thought form needed?
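
e.g. the recharge decision could literally be a threshold check (a sketch; all the energy numbers are invented):

#include <iostream>

// Toy recharge-vs-finish-task decision: compare remaining charge against
// what the task plus the trip home would cost.
int main()
{
   double charge = 30.0;            // current battery level (units)
   const double task_cost = 18.0;   // energy to finish the task at hand
   const double trip_home = 15.0;   // energy to reach the charger

   if (charge > task_cost + trip_home)
      std::cout << "finish task, then recharge\n";
   else
      std::cout << "abandon task, recharge now\n";   // prints this: 30 < 33
   return 0;
}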

Also for there to be randomness ... Is that even possible? Random functions are usually pre-determined and follow a pattern after a while unless reset.
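
(What I mean, as a sketch: a seeded generator replays the same "random" sequence on every run, while std::random_device draws on an outside entropy source where the platform provides one.)

#include <iostream>
#include <random>

// A PRNG with a fixed seed repeats its sequence on every run;
// std::random_device is nondeterministic on most platforms.
int main()
{
   std::mt19937 seeded(42);      // deterministic: identical output each run
   std::random_device entropy;   // entropy-backed, platform permitting

   std::cout << "seeded:  " << seeded() << " " << seeded() << "\n";
   std::cout << "entropy: " << entropy() << " " << entropy() << "\n";
   return 0;
}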
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 14, 2010, 11:32:50 PM
Quote from: Felix on January 14, 2010, 10:09:18 PM
Quote from: The Good Reverend Roger on January 14, 2010, 10:02:43 PM
WTF?

Why make machine consciousness?  We have enough assholes already.   :horrormirth:

You said you were up for "any" program. :lulz:

There are good reasons.  We need machines that can "Do what I mean" rather than "What I say", and a conscious machine would theoretically be able to understand what we say, and translate for less intelligent machines that can do dangerous or difficult jobs.  Jobs like building a space elevator, or farming the collective rooftops of a city to provide produce without the difficulty of transportation pollution or nasty preservatives.

And they can survive in a vacuum.  We could provide them with whatever information and material supplies they want, they provide us with research done by space exploration.  They help us terraform other planets, due to their ability to survive on mere electricity.  The list goes on.

But I didn't want this thread to be about "why", so much as "how".

Why matters because it influences how.  If what we want is behavior that appears to be conscious, then the Turing test is the test which demonstrates success.  If, on the other hand, the goal is more metaphysical, then we have to find an objective way to measure.

If what we want are space explorer robots and robots that understand what we mean then a Turing test is perfectly sufficient as qualification of success in producing consciousness because it appears to be conscious.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on January 14, 2010, 11:49:47 PM
If we manage to make robots that have consciousness and reasoning, we damn well better give them emotions, too, so that they'll make stupid decisions based on feelings. Otherwise they'll look at this shit, realize that humans are retarded, and promptly take over the world.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 01:43:42 AM
What would be bad is to have a robot that simply has access to the information "my oil is low" and acts on it. What I want is a robot that "feels" its low oil the same way we feel hunger.  The difference doesn't make a great difference for utility, but it is what I want to create. Things like space exploration are reasons to make smart robots, but my real motive is to understand and recreate consciousness. The utilities don't matter to ME, but they matter for things like funding.

Nigel, you are correct. Just as we FEEL hungry, or sad, or ambivalent, or other sensations (emotional or bodily), these sensations provide us with an extremely nuanced, information-rich reality in the context of our own experiences. That's important for conscious experience: sensations, voluntary and involuntary behaviors, inner dialogue, the ability to imagine sensations of other people, and the ability to emulate minute behavioral information by observing others like us.
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on January 15, 2010, 01:50:15 AM
is consciousness as you define it requisite for belief?
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 01:54:24 AM
The point of all this is, once I have the means to recreate a brainlike synthetic organ, how will I know if it truly FEELS hungry, or if it is just responding to mechanical hunger protocols? Is that what we do? Neuroscience has identified the exact chemical that tells our brain we are hungry (they want to make a weight loss drug). If all we're doing is responding to a neurochemical information system, then why bother with a unified psychosensory experience?  Are our brains hallucinating a reality abstracted from sensory data? I would find that neato, but I have doubts.  Could it be that our whole sense of awareness evolved out of evolutionary pressure? Possibly. Without having conscious experience, we would have trouble existing in social situations. No real means to think of "me" as it refers to "the others" without "me" as a highly developed perception.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 01:55:48 AM
Iptuous, what beliefs are you talking about? The sentence needs more careful wording before I can give a good answer.
Title: Re: FFFFUUUUUUUUUU
Post by: The Fundamentalist on January 15, 2010, 01:56:03 AM
I'm not sure that I understand this problem.  After all, isn't it just as impossible to verify that other people have subjective experiences?

As for the importance of communication/linguistics, I do think that it's important.  It seems like a reasonable idea to me that consciousness evolved from lying.  That is, as soon as we needed to make models of what other people were thinking (http://en.wikipedia.org/wiki/Theory_of_mind), we had to model our own thinking as well.

(I am also very interested in AI.)
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 01:56:39 AM
hahaha hallucinating robots

"Why was I built to have pain"

And don't forget Robot Religion & Philosophy.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 01:58:46 AM
Lying is definitely a skill that developed from evolutionary social pressures. Without the ability to model the thinking of others, how do you lie well?  For that matter, how do we empathize? Autistic people may be merely lacking in these evolutionary traits, I have heard.  That's how recently our social skills evolved. We still have people who don't have this trait as a dominant gene.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on January 15, 2010, 02:04:32 AM
It seems like once we have conscious robots, managing the structure of each type of input until it is confusing, conflicting, and sometimes overwhelming should give us a good approximation of "feelings".

If we can get them going "Oh my god I feel kinda crappy; is my oil low or am I just still upset over losing my job or do I need to empty my condensation chamber?" and then make the sensations become more unmanageably overwhelming and difficult to distinguish, as well as decreasing their ability to reason and their fine motor skills as demands on them increase, I think we would just about have it.
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 02:07:02 AM
I guess one cool thing about being a conscious machine is the internet

Some models have access, some don't.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on January 15, 2010, 02:07:13 AM
I mean, they'd be depressed all the time and we'd need to invent the robot equivalent of tranquilizers, sleep aids, and antidepressants, but can you imagine functional robots in a dysfunctional human world? They'd have us all neutered or spayed and keep a few of us around as pets.

Which actually might not be so bad.

We should give them an appreciation for art and literature but without any ability to create their own, so they'd have an incentive to keep us and treat us reasonably well.
Title: Re: FFFFUUUUUUUUUU
Post by: The Fundamentalist on January 15, 2010, 02:08:19 AM
Personally, I think that the bicameral model (http://en.wikipedia.org/wiki/Bicameralism_%28psychology%29) is a pretty interesting one for the origin of consciousness, although I haven't read that book yet, and I've heard that the transition probably took place long before the author claimed, if it took place at all.  On the other hand, I might just like Snow Crash too much.

Quote from: The Right Reverend Nigel
If we can get them going "Oh my god I feel kinda crappy; is my oil low or am I just still upset over losing my job or do I need to empty my condensation chamber?" and then make the sensations become more unmanageably overwhelming and difficult to distinguish, as well as decreasing their ability to reason and their fine motor skills as demands on them increase, I think we would just about have it.

Is it wrong that I laughed?

Quote from: Felix
Lying is definitely a skill that developed from evolutionary social pressures. Without the ability to model the thinking of others, how do you lie well?  For that matter, how do we empathize? Autistic people may be merely lacking in these evolutionary traits, I have heard.  That's how recently our social skills evolved. We still have people who don't have this trait as a dominant gene.

So we have to make robots that can lie.

Yes, I think that AI research is the place to be for mad science.
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 02:09:56 AM
Hmm creating a race of beings that would kill us in the end. I like that. Maybe we did that to our own ancestors: they built us, so we ate them.

Quote from: The Fundamentalist on January 15, 2010, 02:08:19 AM
Quote from: The Right Reverend Nigel
If we can get them going "Oh my god I feel kinda crappy; is my oil low or am I just still upset over losing my job or do I need to empty my condensation chamber?" and then make the sensations become more unmanageably overwhelming and difficult to distinguish, as well as decreasing their ability to reason and their fine motor skills as demands on them increase, I think we would just about have it.

Is it wrong that I laughed?

I think it'd be cute
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 02:10:05 AM
:lol:

I'm all for confused robots, but maybe they should only experience as much emotional turmoil as your average person. The point of emotion is to provide instinctual insurance against bloodymindedness (read: HAL), so the crippling levels of stress and despair perhaps are unneeded.
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 15, 2010, 02:10:31 AM
Quote from: Felix on January 15, 2010, 01:54:24 AM
The point of all this is, once I have the means to recreate a brainlike synthetic organ, how will I know if it truly FEELS hungry, or if it is just responding to mechanical hunger protocols? Is that what we do? Neuroscience has identified the exact chemical that tells our brain we are hungry (they want to make a weight loss drug). If all we're doing is responding to a neurochemical information system, then why bother with a unified psychosensory experience?  Are our brains hallucinating a reality abstracted from sensory data? I would find that neato, but I have doubts.  Could it be that our whole sense of awareness evolved out of evolutionary pressure? Possibly. Without having conscious experience, we would have trouble existing in social situations. No real means to think of "me" as it refers to "the others" without "me" as a highly developed perception.

I really doubt that is possible.

Not that it is not possible to make a machine that feels hungry, but it is impossible to know it feels hungry and doesn't just inform you that it feels hungry because that is how it was programmed.
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 02:11:29 AM
Maybe that's how we were written :O
Title: Re: FFFFUUUUUUUUUU
Post by: The Fundamentalist on January 15, 2010, 02:12:04 AM
If stress and so on were unneeded, then we probably wouldn't have them.  They'd be selected against in evolution.

Although, that might not be relevant in the modern world, I don't know.

Out of my interest, I have a question... what are possible fields that thinking machines can be used for, aside from mad science?
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on January 15, 2010, 02:12:27 AM
Quote from: Felix on January 15, 2010, 01:55:48 AM
Iptuous, what beliefs are you talking about? The sentence needs more careful wording before I can give a good answer.

well, i was thinking of any belief in general, but specifically the belief that it was conscious....
i was wondering if you thought it was possible to make a machine that was not conscious, but believed that it was.
I'm sure that's been a crappy sci-fi story many times, but...

Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 02:13:05 AM
The point of emotion experiencing robots would be that killing all the humans would be a horrifying thought. Just like any sane person thinks genocide is horrible. It may be appealing at times, but the act of doing so would cause too much psychological distress to bear.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 02:14:59 AM
Quote
well, i was thinking of any belief in general, but specifically the belief that it was conscious....
i was wondering if you thought it was possible to make a machine that was not conscious, but believed that it was.
I'm sure that's been a crappy sci-fi story many times, but...

then sadly it would probably fool me, unless by then I knew what to look for in a conscious brainlike organ.
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 15, 2010, 02:15:43 AM
Quote from: The Fundamentalist on January 15, 2010, 02:08:19 AM

So we have to make robots that can lie.

Yes, I think that AI research is the place to be for mad science.

We already have that.

Altruism too

http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie
Title: Re: FFFFUUUUUUUUUU
Post by: The Fundamentalist on January 15, 2010, 02:15:54 AM
Quote from: Iptuous on January 15, 2010, 02:12:27 AM
i was wondering if you thought it was possible to make a machine that was not conscious, but believed that it was.
I'm sure that's been a crappy sci-fi story many times, but...



#include <iostream>

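// Prints a claim of consciousness; the claim is the entire implementation.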
int main()
{
   std::cout << "I am a conscious program!\n";
   return 0;
}


Sorta like that?

Quote from: Felix on January 15, 2010, 02:13:05 AM
The point of emotion experiencing robots would be that killing all the humans would be a horrifying thought. Just like any sane person thinks genocide is horrible. It may be appealing at times, but the act of doing so would cause too much psychological distress to bear.

...interesting.

...

We should give UAVs souls?
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 02:16:41 AM
Quote from: The Fundamentalist on January 15, 2010, 02:12:04 AM
If stress and so on were unneeded, then we probably wouldn't have them.  They'd be selected against in evolution.

Although, that might not be relevant in the modern world, I don't know.

Out of my interest, I have a question... what are possible fields that thinking machines can be used for, aside from mad science?

Assistance in Research and Development; machines would have the ability to work logically and can produce some results faster.
Could make for a good PA
Exploration
Repetitive tasks!

Or for every human baby a machine is made and that machine does all the boring stuff :D (Ok, bad idea I know, but it's fun to imagine)


@Felix - what about the potential for 'viruses' to overwrite some of their behavioural patterns? Just like how we have those little bugs and worms that re-write some animal behaviour, making them prone to killing themselves so the worm can continue its cycle.
Title: Re: FFFFUUUUUUUUUU
Post by: The Fundamentalist on January 15, 2010, 02:17:19 AM
Quote from: BabylonHoruv on January 15, 2010, 02:15:43 AM
Quote from: The Fundamentalist on January 15, 2010, 02:08:19 AM

So we have to make robots that can lie.

Yes, I think that AI research is the place to be for mad science.

We already have that.

Altruism too

http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie

Oh duh.  I already read about Avida disguising its intelligence to avoid artificial selection by examiners, how did I forget that?
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 02:19:59 AM
Quote
I really doubt that is possible.

Not that it is not possible to make a machine that feels hungry, but it is impossible to know it feels hungry and doesn't just inform you that it feels hungry because that is how it was programmed.

I didn't say it was going to be easy, shit. It's the hardest problem I could think of for neuroscience to address. However we know THAT it happens, and it almost definitely happens without God, souls, fairy dust, or dualism, so it PROBABLY doesn't matter what the medium is. Whether you make a brain out of fatty tissue or some synthetic contrivance, what is the difference if they behave the same ways?
Title: Re: FFFFUUUUUUUUUU
Post by: The Fundamentalist on January 15, 2010, 02:20:41 AM
Quote from: NotPublished on January 15, 2010, 02:16:41 AM
Quote from: The Fundamentalist on January 15, 2010, 02:12:04 AM
If stress and so on were unneeded, then we probably wouldn't have them.  They'd be selected against in evolution.

Although, that might not be relevant in the modern world, I don't know.

Out of my interest, I have a question... what are possible fields that thinking machines can be used for, aside from mad science?

Assistance in Research and Development; machines would have the ability to work logically and can produce some results faster.
Could make for a good PA
Exploration
Repetitive tasks!

Or for every human baby a machine is made and that machine does all the boring stuff :D (Ok, bad idea I know, but it's fun to imagine)

Quote
Machines would have the ability to work logically

I think that it's been pretty well established that consciousness precludes logical thinking.  Just because you're using a logical machine (the computer) to run it, that doesn't mean the thing itself is logical.

Quote
Repetitive tasks!

Then why would you be making it conscious?
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 15, 2010, 02:21:57 AM
Quote from: Felix on January 15, 2010, 02:19:59 AM
Quote
I really doubt that is possible.

Not that it is not possible to make a machine that feels hungry, but it is impossible to know it feels hungry and doesn't just inform you that it feels hungry because that is how it was programmed.

I didn't say it was going to be easy, shit. It's the hardest problem I could think of for neuroscience to address. However we know THAT it happens, and it almost definitely happens without God, souls, fairy dust, or dualism, so it PROBABLY doesn't matter what the medium is. Whether you make a brain out of fatty tissue or some synthetic contrivance, what is the difference if they behave the same ways?

Sure, but I have no way to verify that you are really feeling hunger and aren't just acting as you are programmed to do.  I only know that I have feelings.

A machine that acts like it has feelings is just as good as one that does, for all intents and purposes except the machine's.
Title: Re: FFFFUUUUUUUUUU
Post by: The Fundamentalist on January 15, 2010, 02:24:57 AM
I agree with Babylon.

Another thing that may be of interest in consciousness: I'll dig up the story, but apparently neuroscientists found that people make their decisions before they're aware of them.  That is to say, our theory of mind of ourselves is slower than the brain, and so what you think of as your self is not yourself.

I guess that means that they proved the subconsciousness exists?
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 02:26:12 AM
Quote from: The Fundamentalist on January 15, 2010, 02:20:41 AM
Quote
Repetitive tasks!
Then why would you be making it conscious?

God is harsh.

Quote
I think that it's been pretty well established that consciousness precludes logical thinking.  Just because you're using a logical machine (the computer) to run it, that doesn't mean the thing itself is logical.
Of course machines are inherently made dumb; it's all the instructions we feed them. It might look logical to us, but it was just made using another person's logic.

So for a machine to be self-conscious it would have to be able to learn ... I don't agree with a learning machine like that.

*eta* I missed a word oops
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 02:28:16 AM
I mentioned that I hate Descartes, right?

If you are willing to concede that there is NOT some omnipotent demon creating a false world for you, and you are sure you are conscious, then it follows that the other beings of your species - which act as smart or smarter than you, and have roughly the same genetic makeup as you - are VERY likely to be conscious.  Beyond reasonable doubt.   That said, how do you prove beyond reasonable doubt that a non-human, let alone a non-animal, is conscious?
Title: Re: FFFFUUUUUUUUUU
Post by: The Fundamentalist on January 15, 2010, 02:29:53 AM
Quote from: NotPublished on January 15, 2010, 02:26:12 AM
Quote
I think that it's been pretty well established that consciousness precludes logical thinking.  Just because you're using a logical machine (the computer) to run it, that doesn't mean the thing itself is logical.
Of course machines are inherently made dumb; it's all the instructions we feed them. It might look logical to us, but it was just made using another person's logic.

So for a machine to be self-conscious it would have to be able to learn ... I don't agree with a learning machine like that.

*eta* I missed a word oops

I'm not sure I understand what you mean.

We already have learning machines.  I've even programmed one.  Artificial neural networks aren't too hard.
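
For instance, a single artificial neuron learning logical AND with the classic perceptron update rule - a minimal sketch, nowhere near a full network:

#include <iostream>

// One artificial neuron learning AND via the perceptron rule:
// w += learning_rate * (target - output) * input.
int main()
{
   double w0 = 0.0, w1 = 0.0, bias = 0.0;
   const double lr = 0.1;                          // learning rate
   const int X[4][2] = {{0,0},{0,1},{1,0},{1,1}};  // inputs
   const int Y[4]    = {0, 0, 0, 1};               // AND targets

   for (int epoch = 0; epoch < 20; ++epoch)        // converges well before 20
      for (int i = 0; i < 4; ++i)
      {
         int out = (w0 * X[i][0] + w1 * X[i][1] + bias > 0) ? 1 : 0;
         int err = Y[i] - out;
         w0   += lr * err * X[i][0];
         w1   += lr * err * X[i][1];
         bias += lr * err;
      }

   for (int i = 0; i < 4; ++i)
      std::cout << X[i][0] << " AND " << X[i][1] << " = "
                << ((w0 * X[i][0] + w1 * X[i][1] + bias > 0) ? 1 : 0) << "\n";
   return 0;
}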
Quote from: Felix on January 15, 2010, 02:28:16 AM
I mentioned that I hate Descartes, right?

If you are willing to concede that there is NOT some omnipotent demon creating a false world for you, and you are sure you are conscious, then it follows that the other beings of your species - which act as smart or smarter than you, and have roughly the same genetic makeup as you - are VERY likely to be conscious.  Beyond reasonable doubt.   That said, how do you prove beyond reasonable doubt that a non-human, let alone a non-animal, is conscious?

They act like humans do?
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 15, 2010, 02:31:00 AM
Quote from: Felix on January 15, 2010, 02:28:16 AM
I mentioned that I hate Descartes, right?

If you are willing to concede that there is NOT some omnipotent demon creating a false world for you, and you are sure you are conscious, then it follows that the other beings of your species - which act as smart or smarter than you, and have roughly the same genetic makeup as you - are VERY likely to be conscious.  Beyond reasonable doubt.   That said, how do you prove beyond reasonable doubt that a non-human, let alone a non-animal, is conscious?

If it acts as if it is.  Which was exactly my point.

Incidentally, by that reasoning, most higher mammals are conscious.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 02:31:01 AM
What do you mean you don't agree with a learning machine, NotPublished?   Like, on principle? They already exist.

Computers act on logic, but a true consciousness would probably operate more... Stochastically.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 02:32:24 AM
I'll be back when I'm not typing on a touchscreen. Battery is dying.
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 02:34:34 AM
Quote from: The Fundamentalist on January 15, 2010, 02:29:53 AM
I'm not sure I understand what you mean.

We already have learning machines.  I've even programmed one.  Artificial neural networks aren't too hard
Sorry I wasn't very clear,

If a machine has the potential to learn things to alter its own behavioural pattern - I am not comfortable with that idea. It's like opening up MS Word and it tells you to Go Fuck Yourself because it doesn't want to be a slave to you anymore (That would mean that it also feels emotion).

But if a machine can learn patterns to assist with its main intent, then of course that will be beneficial. It's like having predictive text on when SMSing. I can't use the shit but my sister does and she writes too fast.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 03:06:32 AM
Then it's all about instinct, isn't it?  Our instincts tell us what hurts, what is dangerous, when to eat, sleep, crap, have sex, raise kids, protect other people, and other kinds of things.  But these particular instincts do not have much relevance to a non-organism that was made for a purpose.

Part of the trick in working out machine consciousness is to work out machine instincts, it seems.
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on January 15, 2010, 03:12:22 AM
Felix,
what do you think about the thought experiment of gradual replacement of the brain with neural prosthesis?
when does the 'being human' stop?
and does the sense of continuum of the subject prove that it is currently 'conscious'?

incidentally, what is the current state of the art in neural prosthesis?  i recall hearing about some brain structures being able to be rudimentarily replaced with prostheses in the near to medium future. and that was a good while ago..
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 03:17:11 AM
I believe that we are human insofar as we are humane.  There are surely medical, philosophical, or scientific definitions, but to me that is the only relevant one.

Neural prosthesis, while enjoying something of a renaissance currently, has not yielded anything marketable.  They've gotten blind people to see blobs of light, they've gotten poor quality video feeds from cat brains, they've got deep brain stimulation, the uses of which are being researched, and nerve-controlled cybernetics are on the horizon.
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on January 15, 2010, 03:19:32 AM
i've read about all of those, but they don't actually replace any parts of the brain which may become damaged...

however, it does raise the question of what happens if one has their brain augmented continuously to the point that their wet brain becomes relatively insignificant....
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 03:22:08 AM
Oh, those.  There's not much, but there's a brief on wiki:

http://en.wikipedia.org/wiki/Neuroprosthetics#Cognitive_prostheses

Mostly it's stuff to remedy brain problems, as opposed to adding functionality.
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on January 15, 2010, 03:28:30 AM
ya.
that's closer to the mark.  still crude, but they 'replace' parts of the brain.  presume that we will eventually have a repertoire of prostheses that comprises every part of the brain...
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 03:29:41 AM
That makes you think - also if they manage to transmute a person into a series of 1's and 0's and transfer them to another location then reform them - will they still have the same 'consciousness'?

Or is it effectively a form of death and rebirth?
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 03:33:07 AM
Quote from: NotPublished on January 15, 2010, 03:29:41 AM
That makes you think - also if they manage to transmute a person into a series of 1's and 0's and transfer them to another location then reform them - will they still have the same 'consciousness'?

Or is it effectively a form of death and rebirth?

If consciousness is some kind of mental process, halting then continuing it from where it left off (assuming perfect preservation and resuming of mental states) should not be noticeable.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on January 15, 2010, 04:03:15 AM
Quote from: Felix on January 15, 2010, 02:10:05 AM
:lol:

I'm all for confused robots, but maybe they should only experience as much emotional turmoil as your average person. The point of emotion is to provide instinctual insurance against bloodymindedness (read: HAL), so the crippling levels of stress and despair perhaps are unneeded.

Wait, your average person doesn't experience those?
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 04:06:26 AM
I don't, except in unusually stressful situations.  Normal situations generally evoke moderate emotional sensations.
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 04:09:08 AM
I think I'm emotionally numbed too  :lulz:
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 04:10:22 AM
Quote from: NotPublished on January 15, 2010, 04:09:08 AM
I think I'm emotionally numbed too  :lulz:

Then find us a new cynically fabulous laugh emote.  We'll probably love it to death too.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on January 15, 2010, 04:23:01 AM
Quote from: Felix on January 15, 2010, 04:06:26 AM
I don't, except in unusually stressful situations.  Normal situations generally evoke moderate emotional sensations.

That's the idea. Ramping up the complexity and intensity of the stress results in more confusion and less functionality.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 04:25:15 AM
The last thing I want is robots that are more sensitive to criticism than I am.  Fuck that.
Title: Re: FFFFUUUUUUUUUU
Post by: NotPublished on January 15, 2010, 04:28:43 AM
Hahahaha

It could be like that gay robot from Star Wars .. that gold one, I'm sure you know who I'm talking about :)
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 04:31:49 AM
Quote from: NotPublished on January 15, 2010, 04:28:43 AM
Hahahaha

It could be like that gay robot from Star Wars .. that gold one, I'm sure you know who I'm talking about :)

That guy was fucking useless.  I require robots that don't have to shuffle like a kneeless mutant.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on January 15, 2010, 04:38:39 AM
I require giant robots that run like horses.
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 15, 2010, 04:39:00 AM
Quote from: Felix on January 15, 2010, 03:06:32 AM
Then it's all about instinct, isn't it?  Our instincts tell us what hurts, what is dangerous, when to eat, sleep, crap, have sex, raise kids, protect other people, and other kinds of things.  But these particular instincts do not have much relevance to a non-organism that was made for a purpose.

Part of the trick in working out machine consciousness is to work out machine instincts, it seems.

Instincts are just ROM  (metaphorically speaking of course, but I am sure you get my point)
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 04:41:59 AM
Quote from: BabylonHoruv on January 15, 2010, 04:39:00 AM
Quote from: Felix on January 15, 2010, 03:06:32 AM
Then it's all about instinct, isn't it?  Our instincts tell us what hurts, what is dangerous, when to eat, sleep, crap, have sex, raise kids, protect other people, and other kinds of things.  But these particular instincts do not have much relevance to a non-organism that was made for a purpose.

Part of the trick in working out machine consciousness is to work out machine instincts, it seems.

Instincts are just ROM  (metaphorically speaking of course, but I am sure you get my point)

Yeah, I get that, but how do you create a data driven system that goes "Oh, this doesn't feel right.  I think this is a bad idea."?
Title: Re: FFFFUUUUUUUUUU
Post by: The Fundamentalist on January 15, 2010, 04:51:58 AM
Quote from: NotPublished on January 15, 2010, 02:34:34 AM
Quote from: The Fundamentalist on January 15, 2010, 02:29:53 AM
I'm not sure I understand what you mean.

We already have learning machines.  I've even programmed one.  Artificial neural networks aren't too hard
Sorry I wasn't very clear,

If a machine has the potential to learn things to alter its own behavioural pattern - I am not comfortable with that idea. It's like opening up MS Word and it tells you to Go Fuck Yourself because it doesn't want to be a slave to you anymore (That would mean that it also feels emotion).

But if a machine can learn patterns to assist with its main intent, then of course that will be beneficial. It's like having predictive text on when SMSing. I can't use the shit but my sister does and she writes too fast.

There wouldn't be very much point in making MS Word or something similarly simple conscious.  It would be a reversal of all of the mechanization we've been doing throughout history.
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 15, 2010, 05:01:59 AM
Quote from: Felix on January 15, 2010, 04:41:59 AM
Quote from: BabylonHoruv on January 15, 2010, 04:39:00 AM
Quote from: Felix on January 15, 2010, 03:06:32 AM
Then it's all about instinct, isn't it?  Our instincts tell us what hurts, what is dangerous, when to eat, sleep, crap, have sex, raise kids, protect other people, and other kinds of things.  But these particular instincts do not have much relevance to a non-organism that was made for a purpose.

Part of the trick in working out machine consciousness is to work out machine instincts, it seems.

Instincts are just ROM  (metaphorically speaking of course, but I am sure you get my point)

Yeah, I get that, but how do you create a data driven system that goes "Oh, this doesn't feel right.  I think this is a bad idea."?

you don't.  Our instincts don't do that.  They drive us to do things and we put feelings and good or bad ideas on top of it with our rational and emotional mind.
Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 15, 2010, 05:10:15 AM
Quote from: BabylonHoruv on January 15, 2010, 05:01:59 AM
Quote from: Felix on January 15, 2010, 04:41:59 AM
Quote from: BabylonHoruv on January 15, 2010, 04:39:00 AM
Quote from: Felix on January 15, 2010, 03:06:32 AM
Then it's all about instinct, isn't it?  Our instincts tell us what hurts, what is dangerous, when to eat, sleep, crap, have sex, raise kids, protect other people, and other kinds of things.  But these particular instincts do not have much relevance to a non-organism that was made for a purpose.

Part of the trick in working out machine consciousness is to work out machine instincts, it seems.

Instincts are just ROM  (metaphorically speaking of course, but I am sure you get my point)

Yeah, I get that, but how do you create a data driven system that goes "Oh, this doesn't feel right.  I think this is a bad idea."?

you don't.  Our instincts don't do that.  They drive us to do things and we put feelings and good or bad ideas on top of it with our rational and emotional mind.

I think we have different instincts.  My instinctual sense is a constant yet subtle backdrop to my conscious and unconscious behavior.  What are yours?
Title: Re: FFFFUUUUUUUUUU
Post by: BabylonHoruv on January 15, 2010, 05:21:21 AM
Quote from: Felix on January 15, 2010, 05:10:15 AM
Quote from: BabylonHoruv on January 15, 2010, 05:01:59 AM
Quote from: Felix on January 15, 2010, 04:41:59 AM
Quote from: BabylonHoruv on January 15, 2010, 04:39:00 AM
Quote from: Felix on January 15, 2010, 03:06:32 AM
Then it's all about instinct, isn't it?  Our instincts tell us what hurts, what is dangerous, when to eat, sleep, crap, have sex, raise kids, protect other people, and other kinds of things.  But these particular instincts do not have much relevance to a non-organism that was made for a purpose.

Part of the trick in working out machine consciousness is to work out machine instincts, it seems.

Instincts are just ROM  (metaphorically speaking of course, but I am sure you get my point)

Yeah, I get that, but how do you create a data driven system that goes "Oh, this doesn't feel right.  I think this is a bad idea."?

you don't.  Our instincts don't do that.  They drive us to do things and we put feelings and good or bad ideas on top of it with our rational and emotional mind.

I think we have different instincts.  My instinctual sense is a constant yet subtle backdrop to my conscious and unconscious behavior.  What are yours?

Things which drive me.  I make up reasons for them while doing them.  The reasons always seem perfectly rational.
Title: Re: FFFFUUUUUUUUUU
Post by: Triple Zero on January 15, 2010, 11:48:10 AM
Quote from: Felix on January 15, 2010, 01:43:42 AM
What would be bad is to have a robot that simply has access to the information "my oil is low" and acts on it. What I want is a robot that "feels" its low oil the same way we feel hunger.  The difference doesn't make a great difference for utility, but it is what I want to create. Things like space exploration are reasons to make smart robots, but my real motive is to understand and recreate consciousness. The utilities don't matter to ME, but they matter for things like funding.

Nigel, you are correct. Just as we FEEL hungry, or sad, or ambivalent, or other sensations (emotional or bodily), these sensations provide us with an extremely nuanced, information-rich reality in the context of our own experiences. That's important for conscious experience: sensations, voluntary and involuntary behaviors, inner dialogue, the ability to imagine sensations of other people, and the ability to emulate minute behavioral information by observing others like us.

you remember that thing about that person implanting a rare earth magnet in their fingertip?

because of all the nerve endings there and the slight movement and vibrations, that person could feel magnetic fields and electromagnetic radiation, to some extent.

as far as I understood, it did become part of their "psychosensory experience" as you call it.

even though all they did was just add an extra "input" which could interface with the nervous system in some way.

so that would make me conclude that the sense, or the mechanics of the sense, does not determine whether a consciousness "experiences" a sensation or just acts upon an if/then rule.

so therefore, a robot equipped with an internal "my oil is low" sensor, would, with a "sufficiently conscious" pattern recognition/neural net/positronic brain interpret this sensor's input as a "hunger for oil sensation" instead of a mere mechanical "*BLEEP* LOW OIL ERROR: *BLOOP* ENGAGE REFILL PROTOCOL 7".

perhaps this is because a "sufficiently conscious artificial brain" would make associations with all the other sensory inputs and knowledge and its own internal abstract concepts and symbols that usually occur when the "oil is low" sensor goes into a certain state.

for example, this "hunger chemical" you speak about: yes, we sense it (i suppose), but that's not the point. we do not act mechanically "IF hunger chemical THEN engage hunger protocol". instead, our brain associates the presence of this hunger chemical with our previous experiences. it does this via a very simple procedure: basically, neurons and groups of neurons that "light up" (get activated) together get their connections strengthened, even if there was none before. so after a few times, the "detect hunger chemical" group of neurons also causes the "i feel a bit light headed" group of neurons to activate partially, even when you don't feel light headed yet. this way it creates and paints the entire picture of the "hunger sensation" - not just the chemical, nor just the actions that it requires (eating), but all the things that you generally associate with the presence of that chemical, or things that just happen to happen at the same time [think Pavlov experiment].
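
in toy form, that "fire together, wire together" strengthening looks something like this (a sketch; the two "neuron groups" are collapsed into single scalars):

#include <iostream>

// Toy Hebbian association: units that are active together get their
// connection strengthened, so later one can partially activate the other.
int main()
{
   double w = 0.0;          // connection: "hunger chemical" -> "light headed"
   const double lr = 0.2;   // learning rate

   for (int i = 0; i < 5; ++i)              // co-occurring experiences
   {
      const double pre = 1.0, post = 1.0;   // both groups fire together
      w += lr * pre * post;                 // Hebb: strengthen on co-activation
   }

   const double pre = 1.0;                  // now the chemical fires alone...
   std::cout << "associated activation = " << w * pre << "\n";   // ...prints 1
   return 0;
}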

there are some kinds of hybrid AI systems that combine elements of relational databases and neural nets and can do these kinds of associative things; I heard about it in this presentation: http://blip.tv/file/1947373 (skip to 10min30 in; that's where he starts talking about it).
Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on January 15, 2010, 01:40:09 PM
Quote from: The Right Reverend Nigel on January 15, 2010, 04:38:39 AM
I require giant robots that run like horses.


For some reason, I found that hauntingly poetic and awesome.  And now I want one.
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on January 15, 2010, 04:38:16 PM
Quote from: LMNO on January 15, 2010, 01:40:09 PM
Quote from: The Right Reverend Nigel on January 15, 2010, 04:38:39 AM
I require giant robots that run like horses.


For some reason, I found that hauntingly poetic and awesome.  And now I want one.

I've made it a life goal to build a walking robotic mount.  I'm thinking bipedal in the vein of a dinosaur, though, like a larger version of Peter Dilworth's Troody.

If it was conscious, that would make it extra smooth.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on January 15, 2010, 07:08:39 PM
Quote from: LMNO on January 15, 2010, 01:40:09 PM
Quote from: The Right Reverend Nigel on January 15, 2010, 04:38:39 AM
I require giant robots that run like horses.


For some reason, I found that hauntingly poetic and awesome.  And now I want one.

:)
Title: Re: FFFFUUUUUUUUUU
Post by: Shai Hulud on January 16, 2010, 02:26:28 AM
Quote from: Felix on January 14, 2010, 06:47:56 AM

I hate dualism.


I agree, and Chalmers is a fool.  His whole argument about the philosophical zombie begs the question.  It presumes that there is a difference between a normal human and a zombie, or rather that a zombie is logically possible.  But if we're working under the assumption that physical systems give rise to apparent dualism and subjective experience, it is nonsense to talk about divorcing that sort of "rich inner life" from an exact physical duplicate of that system.  Put another way, you can't have a Chalmers zombie, because it is just a person.

Quote from: Stanford Encyclopedia of Philosophy
(1) Zombies are conceivable.

(2) Whatever is conceivable is possible.

(3) Therefore zombies are possible.

I wouldn't grant the first premise.  A physical duplicate of a human that doesn't experience qualia is sort of like Chomsky's colorless green ideas.  It's not really "conceivable" even though we can think about it.  If certain physical processes give rise to consciousness, they will always give rise to consciousness.
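
For what it's worth, the bare form of that argument is just modus ponens, so it is formally valid; the entire fight is over whether to grant premise (1). A sketch of the skeleton in Lean (the proposition names are mine, purely illustrative):

section ZombieArgument

-- treat "zombies are conceivable" and "zombies are possible" as opaque propositions
variable (ZombiesConceivable ZombiesPossible : Prop)

theorem zombie_argument
    (h1 : ZombiesConceivable)                    -- premise (1): zombies are conceivable
    (h2 : ZombiesConceivable → ZombiesPossible)  -- premise (2): conceivable implies possible
    : ZombiesPossible :=                         -- conclusion (3): zombies are possible
  h2 h1

end ZombieArgument

Refuse h1, as above, and the proof has nothing to run on; the conclusion never gets off the ground.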

Quote from: Felix on January 14, 2010, 10:26:27 PM
By that rationale we are basically sophisticated chat bots.


If you aren't a dualist, how are we not like sophisticated chat bots?

Title: Re: FFFFUUUUUUUUUU
Post by: Jasper on January 19, 2010, 05:56:17 AM
Quote from: Guy Incognito on January 16, 2010, 02:26:28 AM
Quote from: Felix on January 14, 2010, 10:26:27 PM
By that rationale we are basically sophisticated chat bots.


If you aren't a dualist, how are we not like sophisticated chat bots?



Presumably computers do not experience.  Is the human brain mere clockwork, or is there a unique attribute of sufficiently reflective social brains that gives rise to our inner reality?
Title: Re: FFFFUUUUUUUUUU
Post by: Cain on February 07, 2013, 02:05:45 PM
Bump.

PD's (i.e., mine and Cainad's) favourite philosopher turned fantasy fiction writer, R. S. Bakker (http://rsbakker.wordpress.com/), has a lot of interesting thoughts on consciousness, selfhood and neuroscience research.

Here (http://www.academia.edu/2422826/The_Introspective_Peepshow_Consciousness_and_the_Dreaded_Unknown_Unknowns) is a paper he wrote:

Quote"Evidence from the cognitive sciences increasingly suggests that introspection is unreliable – in some cases spectacularly so – in a number of respects, even though both philosophers and the 'folk' almost universally assume the complete opposite. This draft represents an attempt to explain this 'introspective paradox' in terms of the 'unknown unknown,' the curious way the absence of explicit information pertaining to the reliability of introspectively accessed information leads to the implicit assumption of reliability. The brain is not only blind to its inner workings, it's blind to this blindness, and therefore assumes that it sees everything there is to see. In a sense, we are all 'natural anosognosiacs,' a fact that could very well explain why we find the consciousness we think we have so difficult to explain."
Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on February 07, 2013, 02:27:00 PM
We are all Dunning.  We are all Kruger.
Title: Re: FFFFUUUUUUUUUU
Post by: Cain on February 07, 2013, 02:44:45 PM
He also frequently writes things like this (http://rsbakker.wordpress.com/2013/01/23/zizek-hollywood-and-the-disenchantment-of-continental-philosophy/):

Quote
Back in the 1990's whenever I mentioned Dennett and the significance of neuroscience to my Continental buddies I would usually get some version of 'Why do you bother reading that shite?' I would be told something about the ontological priority of the lifeworld or the practical priority of the normative: more than once I was referred to Hegel's critique of phrenology in the Phenomenology.

The upshot was that the intentional has to be irreducible. Of course this 'has to be' ostensibly turned on some longwinded argument (picked out of the great mountain of longwinded arguments), but I couldn't shake the suspicion that the intentional had to be irreducible because the intentional had to come first, and the intentional had to come first because 'intentional cognition' was the philosopher's stock and trade–and oh-my, how we adore coming first.

Back then I chalked up this resistance to a strategic failure of imagination. A stupendous amount of work goes into building an academic philosophy career; given our predisposition to rationalize even our most petty acts, the chances of seeing our way past our life's work are pretty damn slim! One of the things that makes science so powerful is the way it takes that particular task out of the institutional participant's hands–enough to revolutionize the world at least. Not so in philosophy, as any gas station attendant can tell you.

I certainly understood the sheer intuitive force of what I was arguing against. I quite regularly find the things I argue here almost impossible to believe. I don't so much believe as fear that the Blind Brain Theory is true. What I do believe is that some kind of radical overturning of noocentrism is not only possible, but probable, and that the 99% of philosophers who have closed ranks against this possibility will likely find themselves in the ignominious position of those philosophers who once defended geocentrism and biocentrism.

Note: that's just an "ordinary" blog entry for him.
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on February 07, 2013, 02:51:32 PM
nice stuff, Cain.
thanks for the links and keywords. :)
Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on February 07, 2013, 02:55:28 PM
Gracious.  That's some dense material.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on February 07, 2013, 03:18:51 PM
Dear fucking god. Thank you for reminding me why I hate philosophers.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on February 07, 2013, 03:21:52 PM
Quote from: LMNO, PhD (life continues) on February 07, 2013, 02:55:28 PM
Gracious.  That's some dense material.

"Dense", to me, implies a lot of meaning packed into few words. You seem to be using it to mean the exact opposite?
Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on February 07, 2013, 03:37:33 PM
To understand exactly what he was calling bullshit on took a little time.
Title: Re: FFFFUUUUUUUUUU
Post by: Elder Iptuous on February 07, 2013, 03:48:11 PM
Quote from: LMNO, PhD (life continues) on February 07, 2013, 03:37:33 PM
To understand exactly what he was calling bullshit on, took a little time.

as long as it can be parsed with some effort, i really like stuff like that.
i end up with a dozen tabs open for definitions of various terms, and sometimes don't make it through the original entry before wandering off into the topic, but i usually learn quite a bit.

there are enough 50-cent words in the linked blog entry that i feel a bit richer having read it and looked up the various terms.
:)
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on February 07, 2013, 03:54:22 PM
Quote from: LMNO, PhD (life continues) on February 07, 2013, 03:37:33 PM
To understand exactly what he was calling bullshit on, took a little time.

The thing is, I agree with him. I mean, just by default, his perspective on empiricism is the natural and obvious one to me, and what he's arguing against is exactly why I hate philosophers.

But jesus fuck, he also writes like a philosopher, which makes me want to slam heads into wet rocks. :lol:

I suppose he has to write like that, since he's not writing for scientists; he's writing for other philosophers.
Title: Re: FFFFUUUUUUUUUU
Post by: Cain on February 07, 2013, 03:58:10 PM
Yes, he has some other posts where he uses his more normal writing style, which is very eloquent (as you'd expect from a professional writer).

He's also fully aware that the language of philosophy is probably second only to the language of critical literary theory in terms of being incomprehensible to outsiders.  He has in fact written an (as yet unpublished) parody literary novel based around this premise.
Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on February 07, 2013, 04:02:18 PM
That could also conceivably be a Discworld plot.  Damn Alzheimer's.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on February 07, 2013, 04:04:00 PM
Let me summarize, using words that other people wrote:

1. We're made out of meat.

2. We don't know what we don't know.

After that there are reams of blah blah blah about his pet theory, posthumanism, and what it all means. I swear to god, philosophers just seem to want to suck all the magic out of science.

Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on February 07, 2013, 04:05:17 PM
Quote from: Cain on February 07, 2013, 03:58:10 PM
Yes, he has some other posts where he uses his more normal writing style, which is very eloquent (as you'd expect from a professional writer).

He's also fully aware that the language of philosophy is probably second only to the language of critical literary theory in terms of being incomprehensible to outsiders.  He has in fact written an (as far unpublished) parody literary novel based around this premise.

If by "incomprehensible" you mean fucking retarded, I am in 100% agreement.
Title: Re: FFFFUUUUUUUUUU
Post by: EK WAFFLR on February 07, 2013, 04:05:32 PM
Quote from: M. Nigel Salt on February 07, 2013, 04:04:00 PM
philosophers just seem to want to suck all the magic out of science.

BEST THING I'VE HEARD SAID ABOUT PHILOSOPHY EVER
Title: Re: FFFFUUUUUUUUUU
Post by: Cain on February 07, 2013, 04:05:53 PM
I don't know, I couldn't understand it.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on February 07, 2013, 04:07:36 PM
I am closing my eyes and imagining the satisfying "thunk thunk thunk schwwwp" and the lovely red spreading out into the water.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on February 07, 2013, 04:08:15 PM
Quote from: Waffles, Viking Princess of Northern Belgium on February 07, 2013, 04:05:32 PM
Quote from: M. Nigel Salt on February 07, 2013, 04:04:00 PM
philosophers just seem to want to suck all the magic out of science.

BEST THING I'VE HEARD SAID ABOUT PHILOSOPHY EVER

:thanks:
Title: Re: FFFFUUUUUUUUUU
Post by: Doktor Howl on May 06, 2015, 05:54:20 PM
Quote from: LMNO, PhD (life continues) on January 14, 2010, 01:30:16 PM
Perhaps Hofsteader has some insight?


Crudely put, if you stack enough high-function metaprocessors on top of the sensory equipment, the possibility of emergence may increase.  That is to say, you can't "program" consciousness, but you can create fertile environments which may foster it.

Also, an AI that was smart enough to be self-aware would be smart enough to hide it.

Just a thought.
Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on May 06, 2015, 06:32:57 PM
Point the first: This reminds me of when I find myself in a room full of stupid people. Don't let them find out.

Point the second:

Quote from: LMNO, PhD (life continues) on January 15, 2010, 01:40:09 PM
Quote from: The Right Reverend Nigel on January 15, 2010, 04:38:39 AM
I require giant robots that run like horses.


For some reason, I found that hauntingly poetic and awesome.  And now I want one.

I was reading through the thread and had the exact same thought as I did five years ago.






Point the third: FIVE YEARS AGO?!
Title: Re: FFFFUUUUUUUUUU
Post by: Doktor Howl on May 06, 2015, 06:33:35 PM
Quote from: LMNO, PhD (life continues) on May 06, 2015, 06:32:57 PM
Point the first: This reminds me of when I find myself in a room full of stupid people. Don't let them find out.

Point the second:

Quote from: LMNO, PhD (life continues) on January 15, 2010, 01:40:09 PM
Quote from: The Right Reverend Nigel on January 15, 2010, 04:38:39 AM
I require giant robots that run like horses.


For some reason, I found that hauntingly poetic and awesome.  And now I want one.

I was reading through the thread and had the exact same thought as I did five years ago.






Point the third: FIVE YEARS AGO?!

Dude, we've been here for more than 12 years.
Title: Re: FFFFUUUUUUUUUU
Post by: LMNO on May 06, 2015, 06:36:12 PM
I don't know if I should be proud or embarrassed.
Title: Re: FFFFUUUUUUUUUU
Post by: Mesozoic Mister Nigel on May 06, 2015, 07:06:39 PM
Quote from: LMNO, PhD (life continues) on May 06, 2015, 06:36:12 PM
I don't know if I should be proud or embarrassed.

The correct answer is "old". You should feel old.
Title: Re: FFFFUUUUUUUUUU
Post by: hooplala on May 06, 2015, 09:27:56 PM
Quote from: Doktor Howl on May 06, 2015, 06:33:35 PM
Quote from: LMNO, PhD (life continues) on May 06, 2015, 06:32:57 PM
Point the first: This reminds me of when I find myself in a room full of stupid people. Don't let them find out.

Point the second:

Quote from: LMNO, PhD (life continues) on January 15, 2010, 01:40:09 PM
Quote from: The Right Reverend Nigel on January 15, 2010, 04:38:39 AM
I require giant robots that run like horses.


For some reason, I found that hauntingly poetic and awesome.  And now I want one.

I was reading through the thread and had the exact same thought as I did five years ago.






Point the third: FIVE YEARS AGO?!

Dude, we've been here for more than 12 years.

My ten year anniversary was actually last month.

*blows noise maker*
Title: Re: FFFFUUUUUUUUUU
Post by: Cainad (dec.) on May 06, 2015, 09:55:53 PM
A mere seven and a half-ish years.
Title: Re: FFFFUUUUUUUUUU
Post by: Prelate Diogenes Shandor on May 07, 2015, 09:01:30 AM
Much like positing a creator deity*, positing a non-physical element of consciousness merely hides the problem at hand somewhere its continued presence is not immediately apparent. Regardless of whether the mind is composed of flesh and nerve impulses or made out of ectoplasm/pneuma/n-rays/P.K.E./vibrations/positive-and-negative-furies/massacred-space-aliens/monads/aether/any other kind of woo-woo BS, there must be some mechanism by which it performs its function; so all dualism does is add epicycles and multiply assumptions needlessly.



*Which raises the question of where the creator deity came from, and therefore merely makes the question of origin more complex rather than answering it.
Title: Re: FFFFUUUUUUUUUU
Post by: Nephew Twiddleton on May 07, 2015, 02:53:31 PM
5 years, 3 months. I signed up about a month after this thread was started.
Title: Re: FFFFUUUUUUUUUU
Post by: Freeky on May 08, 2015, 10:19:01 AM
Quote from: LMNO, PhD (life continues) on May 06, 2015, 06:32:57 PM
Point the first: This reminds me of when I find myself in a room full of stupid people. Don't let them find out.


I suddenly had a picture of you entering a room reading a book or something, the room full of chattering party-goers, and you, realizing you were in a room full of dumb people, trying to quietly escape out the back door before they noticed you.