
Altruistic robots produced through evolution

Started by Iason Ouabache, January 29, 2010, 03:42:23 AM


BabylonHoruv

You're a special case, Babylon.  You are offensive even when you don't post.

Merely by being alive, you make everyone just a little more miserable

-Dok Howl

Jasper

That is a very simple question, and the answer, unfortunately, is no.

Determining intentionality in other beings is an inductive practice, because multimodal experiences, including conscious states such as intentionality, can only be described indirectly.

In one sense you could say that intentionality is the quality of an agent's behavior towards an objective with special regard to an expected outcome (e.g., I threw the ball with the intention that the dog fetch it).

However, since there is no dialogue between man and robot, we cannot verify this intention verbally. It is not likely that the researchers made a robot that can answer the question, "What did you intend by this action?" In fact, even the capacity to act with special regard to an expected outcome implies a certain amount of metacognition.

In the end, we can predict how the robots will act by ascribing intentionality to them ("the robot is trained to want this outcome, so it will do x behavior"), but we have no good evidence that they are intentional agents.
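To make the "trained to want" bit concrete, here is a minimal sketch of the sort of selection loop evolutionary robotics uses (Python; the fitness function and all the numbers are mine, invented for illustration, not taken from the study under discussion):

    import random

    def mutate(genome, rate=0.1, scale=0.05):
        # Each gene has a small chance of being nudged by Gaussian noise.
        return [g + random.gauss(0, scale) if random.random() < rate else g
                for g in genome]

    def evolve(fitness, pop_size=50, genome_len=8, generations=200):
        # Nothing in this loop "wants" anything; high-fitness genomes
        # simply leave more descendants than low-fitness ones.
        pop = [[random.random() for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
            pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
        return max(pop, key=fitness)

    # Hypothetical fitness: genome[0] is food kept, the rest is food shared.
    # Rewarding sharing at all makes "altruists" evolve, no intent required.
    def sharing_fitness(genome):
        return genome[0] + 0.5 * sum(genome[1:])

    best = evolve(sharing_fitness)

The point being: nothing in there wants anything. We read the goal back into it afterwards.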

BabylonHoruv

Quote from: Sigmatic on February 04, 2010, 08:54:16 AM
That is a very simple question, and the answer, unfortunately, is no.

Determining intentionality in other beings is an inductive practice, because multimodal experiences, including conscious states such as intentionality, can only be described indirectly.

In one sense you could say that intentionality is the quality of an agent's behavior towards an objective with special regard to an expected outcome (e.g., I threw the ball with the intention that the dog fetch it).

However, since there is no dialogue between man and robot, we cannot verify this intention verbally. It is not likely that the researchers made a robot that can answer the question, "What did you intend by this action?" In fact, even the capacity to act with special regard to an expected outcome implies a certain amount of metacognition.

In the end, we can predict how the robots will act by ascribing intentionality to them ("the robot is trained to want this outcome, so it will do x behavior"), but we have no good evidence that they are intentional agents.

We don't have any particular evidence that dogs are intentional agents either, for that matter, but we do tend to assume it. Sentimentality as well. We assume robots are not. As far as I can tell, we don't really have any reason for the assumption in either case, except that dogs, like us, are made out of meat and robots aren't.
You're a special case, Babylon.  You are offensive even when you don't post.

Merely by being alive, you make everyone just a little more miserable

-Dok Howl

Captain Utopia

Moreover, I disagree with the premise that altruism requires "intentionality (and sentimentality)".  Or either, even.  Certainly, in this context, having a sentient AI which could answer questions like those posed above would be a far greater achievement than altruism.

BabylonHoruv

Quote from: FP on February 04, 2010, 09:11:37 AM
Moreover, I disagree with the premise that altruism requires "intentionality (and sentimentality)".  Or either, even.  Certainly, in this context, having a sentient AI which could answer questions like those posed above would be a far greater achievement than altruism.

Not really.  iGod can answer questions like that.  On the other hand it can't do any of the things the "altruistic" robots can do.
You're a special case, Babylon.  You are offensive even when you don't post.

Merely by being alive, you make everyone just a little more miserable

-Dok Howl

Rococo Modem Basilisk

Isn't the prevailing view in cognitive science currently that humans lack intentionality, and that the reasoning behind our actions is just rationalization made up after the fact?


I am not "full of hate" as if I were some passive container. I am a generator of hate, and my rage is a renewable resource, like sunshine.

Captain Utopia

Quote from: BabylonHoruv on February 04, 2010, 09:18:14 AM
Quote from: FP on February 04, 2010, 09:11:37 AM
Moreover, I disagree with the premise that altruism requires "intentionality (and sentimentality)".  Or either, even.  Certainly, in this context, having a sentient AI which could answer questions like those posed above would be a far greater achievement than altruism.

Not really.  iGod can answer questions like that.  On the other hand it can't do any of the things the "altruistic" robots can do.
I assume you're joking. iGod is just A.L.I.C.E.: it matches against patterns in sentence structure but has no comprehension of the conversation, other than a few pre-programmed responses on predictable subjects. It's a clever trick, but it's not sentience.
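To illustrate what I mean by a clever trick: A.L.I.C.E. runs on AIML, which at bottom is pattern-to-template rewriting. A toy sketch of the idea in Python (these rules are invented for illustration, not actual A.L.I.C.E. categories):

    import re

    # Toy AIML-style categories: (pattern, template). "(.*)" captures anything.
    RULES = [
        (r"WHAT DID YOU INTEND BY (.*)", r"I intended \1 because it felt right."),
        (r"ARE YOU (.*)", r"Would it matter to you if I were \1?"),
        (r"(.*)", r"Tell me more."),  # catch-all fallback
    ]

    def respond(line):
        line = line.upper().strip(" ?.!")
        for pattern, template in RULES:
            match = re.fullmatch(pattern, line)
            if match:
                return match.expand(template)

    print(respond("What did you intend by that action?"))
    # -> "I intended THAT ACTION because it felt right."
    # Pure capture-and-substitute; at no point is anything understood.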

Rococo Modem Basilisk

Quote from: FP on February 04, 2010, 02:11:34 PM
Quote from: BabylonHoruv on February 04, 2010, 09:18:14 AM
Quote from: FP on February 04, 2010, 09:11:37 AM
Moreover, I disagree with the premise that altruism requires "intentionality (and sentimentality)".  Or either, even.  Certainly, in this context, having a sentient AI which could answer questions like those posed above would be a far greater achievement than altruism.

Not really.  iGod can answer questions like that.  On the other hand it can't do any of the things the "altruistic" robots can do.
I assume you're joking. iGod is just A.L.I.C.E.: it matches against patterns in sentence structure but has no comprehension of the conversation, other than a few pre-programmed responses on predictable subjects. It's a clever trick, but it's not sentience.

It can answer questions like that. It can't necessarily answer them CORRECTLY. (Answering questions has no bearing on sentience OR sapience, of course...)


I am not "full of hate" as if I were some passive container. I am a generator of hate, and my rage is a renewable resource, like sunshine.

Rococo Modem Basilisk

:argh!: Motherfucking Chinese room  :argh!:

Bitches don't know about the Church-Turing thesis!
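For the cheap seats, the room is just a lookup table. A sketch, with entirely hypothetical entries:

    # Searle's room, minus the philosophy: a rulebook mapping input symbols
    # to output symbols. The operator just consults the table; there is no
    # comprehension anywhere in the system.
    RULEBOOK = {
        "你好吗": "我很好",        # "how are you?" -> "I am fine"
        "你懂中文吗": "当然懂",    # "do you understand Chinese?" -> "of course"
    }

    def room(symbols):
        return RULEBOOK.get(symbols, "请再说一遍")  # "please say that again"

And the Church-Turing thesis only says that any effective symbol-shuffling procedure like this one can be carried out by a Turing machine. Whether carrying it out counts as understanding is the part everyone throws chairs about.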


I am not "full of hate" as if I were some passive container. I am a generator of hate, and my rage is a renewable resource, like sunshine.

Jasper

Quote from: Enki v. 2.0 on February 04, 2010, 11:51:32 AM
Isn't the prevailing view in cognitive science currently that humans lack intentionality, and that the reasoning behind our actions is just rationalization made up after the fact?

I heard about this a while ago, and ever since then I've been checking my own behavior against it, and so far I can't find any evidence to prove it wrong.


Requia ☣

Quote from: Enki v. 2.0 on February 04, 2010, 11:51:32 AM
Isn't the prevailing view in cognitive science currently that humans lack intentionality, and that the reasoning behind our actions is just rationalization made up after the fact?

No.

Not that there aren't people who argue that, but it's based on a gross misinterpretation of an experiment on quick responses (if you don't give someone time to think about their response, they will give a response and then rationalize it afterwards; there's no evidence this occurs when people have time to think things over).
Inflatable dolls are not recognized flotation devices.

Iason Ouabache

Quote from: BabylonHoruv on February 04, 2010, 09:04:12 AM
Quote from: Sigmatic on February 04, 2010, 08:54:16 AM
That is a very simple question, and the answer, unfortunately, is no.

Determining intentionality in other beings is an inductive practice, because multimodal experiences, including conscious states such as intentionality, can only be described indirectly.

In one sense you could say that intentionality is the quality of an agent's behavior towards an objective with special regard to an expected outcome (e.g., I threw the ball with the intention that the dog fetch it).

However, since there is no dialogue between man and robot, we cannot verify this intention verbally. It is not likely that the researchers made a robot that can answer the question, "What did you intend by this action?" In fact, even the capacity to act with special regard to an expected outcome implies a certain amount of metacognition.

In the end, we can predict how the robots will act by ascribing intentionality to them ("the robot is trained to want this outcome, so it will do x behavior"), but we have no good evidence that they are intentional agents.

We don't have any particular evidence that dogs are intentional agents either, for that matter, but we do tend to assume it. Sentimentality as well. We assume robots are not. As far as I can tell, we don't really have any reason for the assumption in either case, except that dogs, like us, are made out of meat and robots aren't.
TITCM. I see intentionality a bit like sincerity: if you can fake that, you've got it made. A simulation of intentionality would be indistinguishable from real intentionality. There is no test you can run on intentionality that can't be cheated. Same thing with a simulation of altruism. If you can program a robot to appear to be altruistic then it would be altruistic. Altruism isn't a state of mind, it is an action.
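A sketch of what I mean, with invented agents and situations: one has an inner "goal", one is a bare reflex table, and any test that only sees behavior returns the same verdict for both.

    # Two agents, identical from the outside, different on the inside.
    class Intender:
        def __init__(self):
            self.goal = "help a starving peer"  # inner state, never observable
        def act(self, situation):
            return "share food" if situation == "peer starving" else "wander"

    class Reflex:
        TABLE = {"peer starving": "share food"}
        def act(self, situation):
            return self.TABLE.get(situation, "wander")

    # Any behavioral test sees only act(); the inner state is unreachable.
    def behavioural_test(agent):
        return agent.act("peer starving") == "share food"

    print(behavioural_test(Intender()), behavioural_test(Reflex()))  # True True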

IMHO, altruism is overrated anyway. More often than not it is pure illusion. What appears to be selfless behavior is really meant to protect a gene or meme. Plus, no one would be altruistic if it didn't give us an over-inflated sense of self-righteousness, you fucking hedonists.
You cannot fathom the immensity of the fuck i do not give.
    \
┌( ಠ_ಠ)┘┌( ಠ_ಠ)┘┌( ಠ_ಠ)┘┌( ಠ_ಠ)┘

Captain Utopia

Quote from: Iason Ouabache on February 05, 2010, 08:47:29 AM
IMHO, altruism is overrated anyway. More often than not it is pure illusion. What appears to be selfless behavior is really meant to protect a gene or meme. Plus, no one would be altruistic if it didn't give us an over-inflated sense of self-righteousness, you fucking hedonists.
From a zoological perspective, altruism is a useful pattern in studying genetically-influenced behaviour in individuals.  So to say that a potentially altruistic individual hastens their own death only to protect genetics present in a group, or because they exist in a society where this is mutually beneficial, is both restating the obvious and missing the point.

I assumed, given the subject matter, we were talking about zoological altruism?
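For reference, the zoological sense has a standard formalization, Hamilton's rule: self-sacrifice is favoured whenever relatedness times benefit exceeds cost. A one-liner plus the textbook worked example:

    # Hamilton's rule: an allele for self-sacrifice spreads when r * b > c,
    # where r is relatedness to the beneficiary, b the fitness benefit to
    # them, and c the fitness cost to the altruist.
    def altruism_favoured(r, b, c):
        return r * b > c

    # Worked example: a honeybee worker (r = 0.75 to full sisters, thanks
    # to haplodiploidy) pays 1 unit of fitness to give her sisters 2 units.
    print(altruism_favoured(0.75, 2.0, 1.0))  # True: "selfless", gene-selfish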