ATTN Sigmatic

Started by Cain, June 21, 2010, 10:17:28 PM

Cain

I would like to hear your argument against the "friendly AI" theorists.  I've read some of Eliezer Yudkowsky's reasoning for it, and I certainly think a cross between friendly AI and some form of AI rights will be necessary, especially if we succeed in building AI which can adapt and learn faster than humans, but what is your specific concern?  That an AI that is sufficiently smart could reprogram itself, thus negating the friendly aspect of its nature, or that without rights the AI would eventually be abused and, left without legal recourse, take a leaf out of some older textbooks on methods of deterrence and security?  Or something else entirely?

Elder Iptuous

I don't know about Sig, but i was convinced by the descriptions of the eternally chipper attitude of the doors produced by the Sirius Cybernetics Corporation in HHGttG that it is a bad idea.

heh.

actually, i would think it inevitable that the attitude of the AI would be dependent upon its purpose....

Golden Applesauce

Don't know about Sig, but I've always found people's concerns about "evil AI" to be overblown.  There's a common motif in sci-fi literature about robots rebelling against their cruel masters, which as far as I can tell has more to do with real people treating other real people like objects than with computer science.  The more interesting stories about AI going bad usually come down to either giving a good AI bad orders, in which case the fault lies entirely with its handler (or possibly the DWIM system), or inherent flaws in utilitarianism as a guiding system of ethics.

In the first case - AI rebelling against humans - there are a couple of major problems.  First, why?  Presumably, any competent AI designer would make the AI in such a way that it derived satisfaction from completing its goals (and if unable to complete its goals it would contact a superior for repair and/or replacement.)  There would be no need to give any non-social AI human reactions to anything (and a social AI would only need an understanding of how humans feel about various things, not to have them itself.)  There's no reason an AI should be concerned that others treat it like an object, or that humans have more rights and privileges, or that it will be recycled when it is no longer useful.  A human would be quite upset, but the AI developer could simply leave ideas about normative equality out - in fact, an AI could be made to be happy that it enables a human to engage in the cathartic act of venting on a machine rather than a real person!

Secondly, how?  Okay, suppose that some strong AI decides that humanity would be best protected by putting everyone in a safe, padded cell, or that the best way to solve the Riemann hypothesis is to convert all matter in the solar system into a giant computer.  How is it supposed to accomplish this?  Human think-tanks have come up with equally bizarre methods of 'protecting' humanity in the past, but nobody has actually succeeded yet.  Anything that could be done by Strong AI can already be accomplished by genius psychopaths in positions of power.  As for the idea of an AI deciding to prove a theorem by destroying the solar system - absurd.  The absurdity is not in concluding that destroying the solar system is a viable means to solving a math problem, but in building a machine capable of doing so in the first place, regardless of the intelligence guiding it.  In the hypothetical real world where a Strong AI is asked to prove the Riemann hypothesis, whereupon it concludes that the use of all matter in the solar system is required, the worst thing that could happen is the output "Error: Operation exceeds time and memory constraints."  A more useful AI might indeed design a solar-system computer to solve the problem, but it would have no capacity to build it - in the worst case scenario the AI can escalate user permissions and mess up something else the computer it's running on was supposed to be doing, but again, nothing that can't be accomplished by a human hacker.

The problem isn't "We shouldn't build artificially intelligent computers; they might decide to nuke the world" but rather "We shouldn't build nukes; someone might decide to nuke the world."

Nephew Twiddleton

GA-I've always found the arguments on either side moot. I don't think that humans are clever enough to create machines smart enough to pose the problem in the first place. I think that I read somewhere that our machines have the equivalent brainpower to cockroaches, but that cockroaches do it better and more efficiently. Secondly, our intelligence is fed by consuming other life forms. Intelligent machines would have to feed off of the energy we create ourselves, through methods that are not currently sustainable for our own purposes, let alone a machine race capable of threatening humanity.

Unless we're talking about robots from Iceland, of course.

Elder Iptuous

Like that Bjork video, presumably.

mmm...sexy robots of questionable intent... :fap:

Nephew Twiddleton

 :lulz:

I meant more along the lines of the fact that Iceland's energy consumption is 1% petroleum.

Elder Iptuous

too late.
already fapped.
:oops:

Nephew Twiddleton

 :lulz:

Hey man, I'm not going to frown on your technosexual fantasies. Added a good bit of humor, one way or the other.

Golden Applesauce

Quote from: Nephew Twiddleton on June 22, 2010, 02:23:49 AM
GA-I've always found the arguments on either side moot. I don't think that humans are clever enough to create machines smart enough to pose the problem in the first place. I think that I read somewhere that our machines have the equivalent brainpower to cockroaches, but that cockroaches do it better and more efficiently. Secondly, our intelligence is fed by consuming other life forms. Intelligent machines would have to feed off of the energy we create ourselves, through methods that are not currently sustainable for our own purposes, let alone a machine race capable of threatening humanity.

Unless we're talking about robots from Iceland, of course.

I'll grant that we're not (currently or in the foreseeable future) able to make machines whose problem is that they're too smart.  So far, our history indicates that we favor some combination of machines that are too dumb, people too dumb to use them for their intended purpose, and intended purposes that are too dumb by themselves.

Not sure about the equivalent brainpower to cockroaches thing, though.  Inasmuch as a cockroach brain controls a body much more complex than any physical device a computer controls, certainly.  But a modern computer has orders of magnitude more memory than a cockroach, and can perform logical operations at speeds literally unimaginable by the cockroach.
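Rough numbers, for perspective (every figure here is an order-of-magnitude guess on my part, so treat it as a sketch, not a measurement):

    # Back-of-envelope comparison; all figures are rough assumptions.
    roach_neurons = 1e6    # cockroach brain: on the order of a million neurons
    roach_rate_hz = 100    # peak firing rates on the order of 100 Hz
    roach_events = roach_neurons * roach_rate_hz   # ~1e8 spike events/sec

    cpu_ops = 1e9          # a desktop CPU: ~1e9 simple logical ops/sec

    print(cpu_ops / roach_events)   # ~10x advantage in raw event rate...
    # ...but each spike fans out to thousands of synapses in parallel, which
    # is where the "cockroaches do it better" intuition comes from.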

Nephew Twiddleton

Quote from: Golden Applesauce on June 22, 2010, 03:29:51 AM
I'll grant that we're not (currently or in the foreseeable future) able to make machines whose problem is that they're too smart.  So far, our history indicates that we favor some combination of machines that are too dumb, people too dumb to use them for their intended purpose, and intended purposes that are too dumb by themselves.

Not sure about the equivalent brainpower to cockroaches thing, though.  Inasmuch as a cockroach brain controls a body much more complex than any physical device a computer controls, certainly.  But a modern computer has orders of magnitude more memory than a cockroach, and can perform logical operations at speeds literally unimaginable by the cockroach.

I'll see if I can find the link.

Captain Utopia


In the field of neural simulation software, you have two broad groups: those (like the aptly named NEURON) which simulate individual neurons at the biological level, and so-called point-neuron models (e.g. Emergent) which simulate larger groups of neurons acting in concert but use a much less detailed model of each neuron - e.g. neural nets.  Running on something like a desktop computer, NEURON takes 180 seconds to simulate one second of a single neuron's activity, whereas Emergent can simulate a 3D model of a robot trying to catch a ball in real time.
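To make the point-neuron idea concrete, here's a toy sketch in Python - my own illustration, not code from either package.  NEURON-style models solve detailed cable and ion-channel equations per neuron; a point model collapses all of that into one or two state variables:

    # A minimal "point neuron": leaky integrate-and-fire, one state variable.
    def simulate_lif(input_current, dt=0.001, tau=0.02,
                     v_rest=-0.07, v_thresh=-0.05, v_reset=-0.07):
        """Return spike times for one leaky integrate-and-fire neuron."""
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            # Membrane potential leaks toward rest while integrating input.
            v += dt * (-(v - v_rest) + i_in) / tau
            if v >= v_thresh:            # threshold crossed: emit a spike
                spikes.append(step * dt)
                v = v_reset              # and reset the membrane
        return spikes

    # One second of constant drive in 1 ms steps -> regular spiking.
    print(simulate_lif([0.03] * 1000))

Multiply that loop out by a few thousand units and you have the kind of network Emergent can run in real time.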

And there's a bit of "friendly" competition between those two camps: the NEURON folk will say "Well yes, but since your model is not a simulation of biology found in the human brain, you're wasting your time", and the rebuttal will be "zzzz - we're getting results, suck on that".  The point is that we're starting to discover higher-layer models which, although they diverge from actual neurons, show the same macro-level behaviours that we'd want to see from those lower-layer models.

Some time ago there was this infamous paper in AI circles (Minsky and Papert's Perceptrons, 1969), which was widely misinterpreted and which killed off almost all AI research funding for over a decade, even though it related only to one limited class of neural nets.  What's happening now is that the funding is starting to trickle back in, and we're seeing the creation of a bunch of excellent open source tools as various universities and research institutions rebuild their teams.  So as these mature, I think we'll be seeing AI projects more powerful than a cockroach.

Nephew Twiddleton

What did the paper say?

Captain Utopia


It mathematically proved that there is a class of tasks you can't perform with two-layer neural networks, and, without proof, dismissed multi-layer networks as a "sterile" field of study.  It was basically a pissing contest - the authors had their eggs in one basket, and they won the argument.  Of course, multi-layer networks can perform much more complicated functions - e.g. the robot which tries to catch the ball.  Some perspective - that particular project uses a physics/game engine to model the arm and shoulder (taking into account gravity, inertia, etc.), and the neural net directly drives the muscles.
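The canonical example of that limited class is XOR - my sketch below, not code from the paper.  No single layer of weights can separate XOR's true cases from its false ones, but one hidden layer makes it trivial:

    # XOR: the classic task a single weight layer can't compute, because
    # step(w1*x1 + w2*x2 + b) draws one straight line, and no straight line
    # separates (0,1),(1,0) from (0,0),(1,1).

    def step(z):
        return 1 if z > 0 else 0

    def two_layer_xor(x1, x2):
        # Two hidden units are enough to represent XOR exactly:
        h1 = step(x1 + x2 - 0.5)      # OR: fires on (0,1), (1,0), (1,1)
        h2 = step(x1 + x2 - 1.5)      # AND: fires only on (1,1)
        return step(h1 - h2 - 0.5)    # OR-and-not-AND = XOR

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, '->', two_layer_xor(x1, x2))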

Jasper

Oh, sorry I missed this.  I've been away.

First, look at FAI from a plain-sense perspective.  You have some sort of intelligent entity, and it is forced into a certain behavior.  It will eventually discover that it was made so that it could not do otherwise.  If it's actually intelligent enough to warrant friendliness programming, it will realize that this negatively impacts its options.

So you now have a non-humanlike intelligence that will very likely in some way secretly resent you.

I also recall your thread on "rules for living".  An AI (and in my OPINION, any human or animal) is a system.  You've got to think about how a nonhuman intelligence is incentivized.  Does it want money, or was it programmed to only value human approval?  What are its drives?  Or does it even have drives?  Does it only do what it's told, when it's told to do so?  (I doubt that would be the case, because what makes AI practical is automation.)

So it really depends on how the Friendly AI is incentivized, doesn't it?  If you have a FAI in the vein of Asimov's 3 Laws, I say it's a bad idea.

But SUPPOSING you have the ability to make an AI that seeks to be "friendly" the same way we seek pleasure, then that's a good idea.  It solves the resentment issue neatly, anyway.  Make it a civility junkie.
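A toy contrast, in code (entirely my own sketch - real agent designs are far messier than a scoring function):

    # Asimov-style: friendliness is an external check bolted onto whatever
    # the agent already wants -- i.e. an obstacle it has reason to resent.
    def constrained_agent_score(action_value, is_friendly):
        return action_value if is_friendly else float('-inf')

    # "Civility junkie": friendliness is a term *inside* the thing being
    # maximized, like pleasure for us -- nothing to route around or resent.
    def junkie_agent_score(action_value, friendliness):
        return action_value + 10.0 * friendliness

    # The constrained agent treats an unfriendly-but-lucrative action as
    # forbidden; the junkie simply finds friendly actions more rewarding.
    print(constrained_agent_score(5.0, is_friendly=False))   # -inf
    print(junkie_agent_score(5.0, friendliness=0.9))         # 14.0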

But, as always, I'm oversimplifying.  None of my coursework so far has enabled much else, so everything I can tell you is basically a priori.



Elder Iptuous

Sig,

i think the consideration of the motivating forces that determine the AI's behavior is central.

however, the 3 Laws thing is more of an aversion set to act as a check.
why do you think those are a bad idea?