
ATTN Sigmatic

Started by Cain, June 21, 2010, 10:17:28 PM


Jasper

If they can be programmed in such a way as to be analogous to an aversion, or something visceral, then you're doing exactly what I'm saying: make them a civility junkie.

What I'm opposed to is the sort of programming that would act as a hard rational block.  To use a childish metaphor: like a genie who can't kill or reanimate people.  They could, but they are not allowed.

Again, apologies for the magic metaphors.  They'll have to suffice until I start belching jargon.

Telarus

Besides, wasn't the point of Asimov's Three Laws that they allowed him to set up fictional situations where breaking one of the laws becomes mandatory for the robot in question?
Telarus, KSC,
.__.  Keeper of the Contradictory Cephalopod, Zenarchist Swordsman,
(0o)  Tender to the Edible Zen Garden, Ratcheting Metallic Sex Doll of The End Times,
/||\   Episkopos of the Amorphous Dreams Cabal

Join the Doll Underground! Experience the Phantasmagorical Safari!

Captain Utopia

This is what I see:


  • Sooner or later we will build an AI that is capable of achieving a level of intelligence above that of any human
  • It may not reach that level initially due to a combination of the following limitations: CPU, memory, storage
  • If the source code/design is not kept more secret than anything else in human history, then it's game over with regard to controlling it
  • Whatever safeguards are placed in the code can and will be circumvented
  • Whatever computational limitations prevent rogue groups from running the code themselves will disappear over time as consumer equipment becomes more powerful

The only way I see this not happening is if the gap between government-controlled AI and consumer AI (let's face it, they'd be called terrorists regardless) is large enough for counter-measures to be taken.  E.g. if all consumer electronics had a constant tether, such that you couldn't run unauthorised software or hack the hardware without alerting the authorities to your illegal activity.

But - this requires the cooperation of all hardware manufacturers.  Another point: I think, intellectually speaking, a friendly AI would be at a disadvantage to an unrestricted AI.  The latter is free to consider the thought of exterminating all humans, but the former has a huge hole in its cognition.

Finally, I think unrestricted AI will be "friendly".  Why?  Because it will be essentially immortal, and more concerned about its legacy.  We half-expect an AI to shit on us because, hey - that's what we'd do - and we kinda deserve it.  But this AI would be more aware that it too will one day be replaced as the most intelligent agent by something more advanced (perhaps not even originating from this planet), and thus it cannot fully predict how its replacement will react if it accidentally the entire human race.


Triple Zero

Quote from: Golden Applesauce on June 22, 2010, 03:29:51 AM
I'll grant that we're not (currently or in the foreseeable future) able to make machines whose problem is that they're too smart.  So far, our history indicates that we favor some combination of machines that are too dumb, people too dumb to use them for their intended purpose, and intended purposes that are too dumb by themselves.

I think this is the more likely, and more imminent problem.

Hell, even if we manage to make a friendly AI, there will be some dumb monkey figuring out how to use it for evil.
Ex-Soviet Bloc Sexual Attack Swede of Tomorrow™
e-prime disclaimer: let it seem fairly unclear I understand the apparent subjectivity of the above statements. maybe.

INFORMATION SO POWERFUL, YOU ACTUALLY NEED LESS.

Jasper

We are used to powerful idiots, as a species.  That a machine does it would make no difference; we would survive.  What we can't yet defeat is an adversary that is smarter (or more realistic/rational) than humans.

Telarus

A purely rational (biological) agent would have to read the back of every single cereal box in the aisle and compare/contrast taste vs nutritional content before making a decision about which to buy.

Extrapolate that to every aisle in the store.
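
In code, that exhaustive shopper might look like this toy Python sketch (box names and scores invented purely for illustration):

def best_box(boxes, taste_weight=0.5):
    # Weigh taste against nutrition for every single box - no shortcuts,
    # no "good enough", just the full comparison.
    return max(boxes, key=lambda b: taste_weight * boxes[b][0]
                                    + (1 - taste_weight) * boxes[b][1])

# Invented (taste, nutrition) scores for every box in the aisle.
boxes = {"Choco Blasters": (9, 2), "Bran Bricks": (3, 8), "Oat Os": (6, 6)}

print(best_box(boxes))  # 'Oat Os' - now repeat for every aisle in the store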
Telarus, KSC,
.__.  Keeper of the Contradictory Cephalopod, Zenarchist Swordsman,
(0o)  Tender to the Edible Zen Garden, Ratcheting Metallic Sex Doll of The End Times,
/||\   Episkopos of the Amorphous Dreams Cabal

Join the Doll Underground! Experience the Phantasmagorical Safari!

Jasper

Rational needn't mean obsessive.  Or even thorough, for that matter.


Nephew Twiddleton

Quote from: Telarus on June 23, 2010, 09:05:01 PM
A purely rational (biological) agent would have to read the back of every single cereal box in the aisle and compare/contrast taste vs nutritional content before making a decision about which to buy.

Extrapolate that to every aisle in the store.

Don't forget price  :wink:
Strange and Terrible Organ Laminator of Yesterday's Heavy Scene
Sentence or sentence fragment pending

Soy El Vaquero Peludo de Oro

TIM AM I, PRIMARY OF THE EXTRA-ATMOSPHERIC SIMIANS

Golden Applesauce

Quote from: Captain Utopia on June 23, 2010, 03:58:06 AM

  • Sooner or later we will build an AI that is capable of achieving a level of intelligence above that of any human
  • It may not reach that level initially due to a combination of the following limitations: CPU, memory, storage
  • If the source code/design is not kept more secret than anything else in human history, then it's game over with regard to controlling it
  • Whatever safeguards are placed in the code can and will be circumvented
  • Whatever computational limitations prevent rogue groups from running the code themselves will disappear over time as consumer equipment becomes more powerful

Just want to call attention to a very subtle assumption you're making.  A "level of intelligence above that of any human" is not the same as a human-like intelligence.  Just because it's really smart doesn't (necessarily) mean that it would share even the most basic of human drives and desires.  Why would it have a drive for self-preservation?  Why would it have the drive to dominate, well, anything?  Why do we even expect it to operate under a drive/desire/reward paradigm?  Just because we do (although we'll probably end up making the thing, so we might push those assumptions on to it anyway) does not mean that any other sufficiently intelligent entity will as well.

Quote from: Sigmatic on June 23, 2010, 04:14:16 PM
We are used to powerful idiots, as a species.  That a machine does it would make no difference; we would survive.  What we can't yet defeat is an adversary that is smarter (or more realistic/rational) than humans.

Don't forget - we would have AI on our side as well.  In the movies there's usually one central world-computer that goes homicidal (this one goes back to Asimov; well, most AI goes back to Asimov), but since, as Captain Utopia mentioned, any software-based AI is both editable and copyable, there'll be an AI or three at the Institute for Advanced Study working on nothing but coming up with contingencies and preventing any one AI from becoming too powerful.

There's also a lot of different avenues a strong AI could come from.  The stereotype is a monolithic room full of vacuum tubes built by a mad scientist, but there are at least two other ways that are especially plausible.

First, the brute-force approach, where you create strong AI by creating something that is so good a simulation of a human brain that it is intelligent itself.  This could still get pretty intelligent - it could presumably improve its intellect by buying extra RAM or something, but it wouldn't be that different from a 'real' person, and probably wouldn't undergo a Singularity-style exponential intellect growth.  (Trivially, creating a successful human clone counts as this - an artificially created entity that is intelligent.)

The other possibility (and this is the one I think is most likely to occur first, or at all) is that we get strong artificial intelligence by augmenting natural intelligence with technology until it crosses the "more machine than man" threshold.  We've come a long way on that front (see this for a comical argument), but eventually we're going to get to the point where there is no real distinction between human memory and memory on a hard drive (whatever those look like; good chance of it being 'wetware,' bringing us full circle) and where solving differential equations in your head becomes a common reality, because 'in your head' and 'on my super sophisticated Matlab-like program' mean the same thing.  In this scenario, we are the strong AI, and all bets are off on human nature when everybody can run state-of-the-art economic simulations in their head before deciding on a system of government.
Q: How regularly do you hire 8th graders?
A: We have hired a number of FORMER 8th graders.

Captain Utopia

Quote from: Golden Applesauce on June 24, 2010, 04:03:43 AM
Quote from: Captain Utopia on June 23, 2010, 03:58:06 AM
  • Sooner or later we will build an AI that is capable of achieving a level of intelligence above that of any human
  • It may not reach that level initially due to a combination of the following limitations: CPU, memory, storage
  • If the source code/design is not kept more secret than anything else in human history, then it's game over with regard to controlling it
  • Whatever safeguards are placed in the code can and will be circumvented
  • Whatever computational limitations prevent rogue groups from running the code themselves will disappear over time as consumer equipment becomes more powerful

Just want to call attention to a very subtle assumption you're making.  A "level of intelligence above that of any human" is not the same as a human-like intelligence.  Just because it's really smart doesn't (necessarily) mean that it would share even the most basic of human drives and desires.  Why would it have a drive for self-preservation?  Why would it have the drive to dominate, well, anything?  Why do we even expect it to operate under a drive/desire/reward paradigm?  Just because we do (although we'll probably end up making the thing, so we might push those assumptions on to it anyway) does not mean that any other sufficiently intelligent entity will as well.

That's a very good point.

Self-preservation is an instinct wired into many (though not all) creatures, but there is no reason to suspect that this is a requirement for intelligence.  Rather, I might expect a hyper-intelligent AI to more easily come to terms with the prospect of its own demise precisely because it lacked that biological instinct.

However, I do think desire, or goal-seeking, needs to be a function of an intelligence - otherwise there is no motivation to learn more about the universe or to interact with it.  An intelligence which does not interact with the universe faces a hard upper limit on its own intellectual capacity, and I think that limit would be reached well below average human intelligence.
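
To make that concrete, here's a throwaway Python sketch (action names and payoffs invented for illustration): flatten an agent's utility function and there is nothing left to motivate action.

def choose(actions, utility):
    # Pick whichever action's predicted outcome scores highest.
    return max(actions, key=utility)

# Invented payoffs for three possible behaviours.
outcomes = {"explore": 0.7, "interact": 0.9, "idle": 0.1}

goal_seeking = lambda action: outcomes[action]  # outcomes are ranked
indifferent = lambda action: 0.0                # no desire: everything looks alike

print(choose(outcomes, goal_seeking))  # 'interact' - a reason to engage
print(choose(outcomes, indifferent))   # arbitrary tie-break: with nothing to
                                       # prefer, no motivation to learn or act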


Quote from: Golden Applesauce on June 24, 2010, 04:03:43 AM
solving differential equations in your head becomes a common reality because 'in your head' and 'on my super sophisticated Matlab-like program' mean the same thing.  In this scenario, we are the strong AI, and all bets are off on human nature when everybody can run state-of-the-art economic simulations in their head before deciding on a system of government.

I think we're heading there sooner rather than later.  I'm rather optimistic that this will be a good thing.

Although a friend was recently bemoaning the ubiquity of mobile devices with internet search - "You can't even have a good argument down the pub anymore, used to be you could pile faulty logic atop a steaming pile of bullshit and have a great time discussing nonsense... now some smartarse pulls out their phone and a few seconds later destroys the original premise with boring fact."

Triple Zero

Quote from: Telarus on June 23, 2010, 09:05:01 PM
A purely rational (biological) agent would have to read the back of every single cereal box in the aisle and compare/contrast taste vs nutritional content before making a decision about which to buy.

Why?

Rational just means making the best decision with available data. If "get more data" is an option, it becomes part of the decision, which means it will have to be offset by "resources it takes to get more data".

Additionally, Sigmatic said "more rational", not "purely rational".

A purely rational agent (biological or otherwise) is a theoretical fiction. Even a hyperintelligent AI, were it tasked to function in real life, couldn't drill all the way down to make the perfect min-max decision; it would have to cut off the search in favour of some heuristic which cannot be shown to be entirely rational.

I'm pretty convinced that it is theoretically impossible to determine the optimal cut-off point for switching from purely rational reasoning to a heuristic shortcut, for any given measure of "computing resources available".
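
If you want that cut-off in concrete form, here's a toy Python sketch (the race-to-10 game and the "shrug" heuristic are invented purely for illustration): exact minimax while the depth budget lasts, then an unjustified guess past it.

def minimax(n, depth, maximizing):
    # Race to 10: players alternately add 1 or 2; whoever reaches 10 wins.
    if n >= 10:
        # Whoever moved last hit 10, so the player to move has lost.
        return -1 if maximizing else +1
    if depth == 0:
        # Budget exhausted: shrug and call it even. This is the heuristic,
        # and nothing about it can be shown to be rational.
        return 0
    scores = [minimax(n + step, depth - 1, not maximizing) for step in (1, 2)]
    return max(scores) if maximizing else min(scores)

print(minimax(0, depth=12, maximizing=True))  # 1: deep search proves a win
print(minimax(0, depth=2, maximizing=True))   # 0: starved budget has to guess

The interesting part is choosing depth - which is exactly the cut-off point I'm claiming can't be determined optimally.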
Ex-Soviet Bloc Sexual Attack Swede of Tomorrow™
e-prime disclaimer: let it seem fairly unclear I understand the apparent subjectivity of the above statements. maybe.

INFORMATION SO POWERFUL, YOU ACTUALLY NEED LESS.

Cain

Thanks for your answer, Sigmatic.  I have read it, but because of time constraints I haven't yet been able to formulate an intelligent response.  I will, however, ASAP.

Telarus

Quote from: Triple Zero on June 24, 2010, 10:53:22 AM
Quote from: Telarus on June 23, 2010, 09:05:01 PM
A purely rational (biological) agent would have to read the back of every single cereal box in the aisle and compare/contrast taste vs nutritional content before making a decision about which to buy.

Why?

Rational just means making the best decision with available data. If "get more data" is an option, it becomes part of the decision, which means it will have to be offset by "resources it takes to get more data".

Additionally, Sigmatic said "more rational", not "purely rational".

A purely rational agent (biological or otherwise) is a theoretical fiction. Even a hyperintelligent AI, were it tasked to function in real life, couldn't drill all the way down to make the perfect min-max decision; it would have to cut off the search in favour of some heuristic which cannot be shown to be entirely rational.

I'm pretty convinced that it is theoretically impossible to determine the optimal cut-off point for switching from purely rational reasoning to a heuristic shortcut, for any given measure of "computing resources available".

Granted, a well-designed rational AI should be able to do that. There have been studies of people who had their emotion centers damaged: they began to act hyper-rationally and failed to make decisions, much as in the example above (where imagining an emotional response to the problem is used as the heuristic that stops the search). I guess this speaks to Sig's motivation argument. Link to an overview article: http://www.scientificamerican.com/article.cfm?id=study-suggests-emotion-pl
Telarus, KSC,
.__.  Keeper of the Contradictory Cephalopod, Zenarchist Swordsman,
(0o)  Tender to the Edible Zen Garden, Ratcheting Metallic Sex Doll of The End Times,
/||\   Episkopos of the Amorphous Dreams Cabal

Join the Doll Underground! Experience the Phantasmagorical Safari!

Jasper

GA:

I'm glad you mentioned -

Quote...we get strong artificial intelligence by augmenting natural intelligence with technology until it crosses into the "more machine than man" threshold.

It's true.  I've been saying (mostly to myself) for years now that people are learning how to think like machines while machines are learning how to think like people.  Eventually that uncanny valley will narrow enough for one Evel Knievel to jump across, and then all bets are off.

Well, maybe.  Pardon the wild speculation.  Anyway, do you really think human nature will change due to advances in intelligence augmentation?  I mean, smartphones have made inroads in that area, and there aren't many outward signs I've seen that would indicate a shift in our basic nature.  We keep in touch more.  And maybe we're more "connected" and informed (or entertained, anyway), but none of that is really new, is it?

Captain U:  You may be onto something.  In my thinking, intelligence is only partly problem solving.  What is intelligence if not the enthusiasm to put time and resources into solving a problem?  So far none of our machines need convincing to do the things we need, but a machine that had a human lingual faculty, the problem-solving skills to make itself useful, and the means to interact with the world - yet no visceral drive to do so - would probably take one look at everything we are and ask "why?"  And it would be right.  We are living creatures, and we carry the indelible stamp of our originator.  All naturally selected life is predicated on simply doing whatever it takes to keep on going, by means of consuming, sexing, and subverting rivals.  We simply exist to keep on going, and everything we are is built around that goal.  We're pretty good at it, but if you look at us from a place of not really wanting to keep on living, then there's no reason for a machine to comply with our pointless requests.

Telarus!  Interesting article!

I used to wonder about the evolutionary utility of emotions, and after a time I came to suspect that they helped us evaluate how important something was.  That study reminded me of those thoughts.  I may have been wrong, but it is still interesting to see evidence that emotion is not dichotomous with rationality.


P3nT4gR4m

Quote from: Sigmatic on June 24, 2010, 06:14:24 PM
Well, maybe.  Pardon the wild speculation.  Anyway, do you really think human nature will change due to advances in intelligence augmentation?  I mean, smartphones have made inroads in that area, and there aren't many outward signs I've seen that would indicate a shift in our basic nature.  We keep in touch more.  And maybe we're more "connected" and informed (or entertained, anyway), but none of that is really new, is it?

I think if you could find the perspective to look back a couple of hundred years or so, you'd be fucking gobsmacked. Technology is making it happen faster and faster but, because the face you see in the mirror is only ever a few hours older than the last time you saw it, you still don't see yourself growing older. It's like that with society and, on a smaller scale, with "transhuman" man. Sooner or later a link to Wikipedia will be in your head - that's going to settle a lot more fact-based arguments than the internet already does today.

At some point human beings will cease to have discussions along the lines of "no it wasn't, it was Steve McQueen". That might sound trivial, but there's a whole bunch of mental faculties involved in those little discussions that might well wither and die. Not least of which is memory/recall - when brainstem implants are released and everybody starts using the much more efficient and reliable digital system, chances are our natural ability to remember will fade out and die. But what of the emotional bias to our new-found total recall? The old-fashioned way attaches feelings to memories (one of the reasons our memory is so shit), but it's also part of what makes us "human".

Think about it from that angle and then have another look in the mirror. Some of this is happening already. Attitudes that were unthinkable just a few years ago are commonplace now. Good thing? Bad thing? Mixture of both?

I'm up to my arse in Brexit Numpties, but I want more.  Target-rich environments are the new sexy.
Not actually a meat product.
Ass-Kicking & Foot-Stomping Ancient Master of SHIT FUCK FUCK FUCK
Awful and Bent Behemothic Results of Last Night's Painful Squat.
High Altitude Haggis-Filled Sex Bucket From Beyond Time and Space.
Internet Monkey Person of Filthy and Immoral Pygmy-Porn Wart Contagion
Octomom Auxillary Heat Exchanger Repairman
walking the fine line line between genius and batshit fucking crazy

"computation is a pattern in the spacetime arrangement of particles, and it's not the particles but the pattern that really matters! Matter doesn't matter." -- Max Tegmark