News:

Endorsement:  I am not convinced you even understand my concepts of moral relativity, so perhaps it would be best for you not to approach them.

Main Menu

Something I rushed off

Started by Cain, May 30, 2012, 11:47:57 PM

Previous topic - Next topic

Elder Iptuous

I would assume there would be some primary motivator for the AI.
ours (as an intelligence) is reproduction. all else is to support that, ultimately. It seems to be very tightly coupled to the  physical chemical gradient motivation that is underneath the layer of intelligence.
the AI needs to have 'protect humanity' as its' prime motivator.
a computer's prime motivator is simply 'make the electrons go through the circuit' and we just arrange the circuit to give the result we want while it pursues that.
in making an intelligence out of a computer, we need to couple the 'simple circuit' motivation tightly to the 'protect people' motivation.
of course it can still break just like we do when we occasionally follow the 'chemical gradient' motivation and end up killing ourselves in the head before we reproduce.
if it is a broken AI that we fear, rather than design that destroys us, then redundancy could save us.

Triple Zero

There was this AI Challenge that Yudkowsky held a couple of times:

The challenge was about a self-modifying AI, hyperintelligent of course, that communicates via chat. Yudkowsky played the part of the AI and the challenge was held in a private IRC channel. The challenger is assigned the role of "gatekeeper". The AI is in a type of cyber box firewall that prevents it from accessing the Internet and taking over the world or something. If the gatekeeper says "okay I let you out of the box" or something to that end, the AI wins. The gatekeeper wins if he can chat with the AI for X hours (I believe it was 2 or 3) without deliberately stalling the AI by not responding to it or reading its messages. There were a few other rules (I suppose such as acting like "you're not a hyperintelligent AI, you're just Yudkowsky"), but the AI was allowed to play just about every terrible dirty trick in the book. Which is why the IRC logs were previously agreed upon to be kept secret. I think only the final few lines where the gatekeeper let out the AI were published somewhere.

All contestants were certain beforehand they would not let out the AI. Yudkowsky won 2 times out of 3 (IIRC). Thereby proving that even if you "only" got a hyper-intelligent self-modifying AI that's "perfectly safe" because it's not connected to anything that allows it to do harm, as long as it can freely communicate via a text interface there you still have a possible privilege escalation vulnerability that exploits the human brain.

Having shown it's possible, he said he wasn't interested in doing it again unless someone's got a compelling reason to get something novel out of it beyond "I think I could beat you". I bet some of those tricks must have been pretty shameful ...

You can read more about it by following a bunch of random links from here: http://news.ycombinator.com/item?id=195959 (you should also scroll a bit down to see the part of the thread where eyudkowsky wrote. HN's ranking system is awful)
Ex-Soviet Bloc Sexual Attack Swede of Tomorrow™
e-prime disclaimer: let it seem fairly unclear I understand the apparent subjectivity of the above statements. maybe.

INFORMATION SO POWERFUL, YOU ACTUALLY NEED LESS.

Elder Iptuous

very interesting!
will check it out.

minuspace

#18
QuoteEvery mode of conflict contains in and of itself the contradiction which allows warfare to mature to a newer gradient – much like the process of Hegelian synthesis, contradictions arise in how that mode of warfare is handled, which are then resolved via a newer model of conflict. 

I think there may be something about this part that could represent a fork in the process.  In the background, the idea of Hegelian synthesis is coupled to how the evolution of self-modifying algorithms would proceed according to some form of "natural" selection.  This framework assumes the evolution will rationally converge to a singularity, however, it is unclear whether a process can rationally direct itself to termination, ultimately.  They's like rollin' rocks...

This brings into play the "red-queen" of evolutionary models, whereby, instead of sublating into a state of non-violent singularity, there is a recursive perpetuation of conflict.  According to this perspective, the "war" never ends.  I once read an interesting paper on the topic from the Institute of Occidental Studies [College] comparing the recursive structure of violence and conflict in acts of sedition vs. secession via the works of Machiavelli and Spinoza.
[link]
http://www.borderlands.net.au/vol6no3_2007/lucchese_sedition.htm

minuspace

Quote from: Triple Zero on June 01, 2012, 09:48:33 PM

...
All contestants were certain beforehand they would not let out the AI. Yudkowsky won 2 times out of 3 (IIRC). Thereby proving that even if you "only" got a hyper-intelligent self-modifying AI that's "perfectly safe" because it's not connected to anything that allows it to do harm, as long as it can freely communicate via a text interface there you still have a possible privilege escalation vulnerability that exploits the human brain.
...


You know it's interesting when computers start using social engineering in order to make us think we willfully provided them the privileges they only asked for politely.  Seems like the premise assumes our brains have already been hijacked - I mean, just because it IS possible...

The Good Reverend Roger

Quote from: LuciferX on June 11, 2012, 10:06:14 PM
Quote from: Triple Zero on June 01, 2012, 09:48:33 PM

...
All contestants were certain beforehand they would not let out the AI. Yudkowsky won 2 times out of 3 (IIRC). Thereby proving that even if you "only" got a hyper-intelligent self-modifying AI that's "perfectly safe" because it's not connected to anything that allows it to do harm, as long as it can freely communicate via a text interface there you still have a possible privilege escalation vulnerability that exploits the human brain.
...


You know it's interesting when computers start using social engineering in order to make us think we willfully provided them the privileges they only asked for politely.  Seems like the premise assumes our brains have already been hijacked - I mean, just because it IS possible...

Hijacking implies something being seized involuntarily.
" It's just that Depeche Mode were a bunch of optimistic loveburgers."
- TGRR, shaming himself forever, 7/8/2017

"Billy, when I say that ethics is our number one priority and safety is also our number one priority, you should take that to mean exactly what I said. Also quality. That's our number one priority as well. Don't look at me that way, you're in the corporate world now and this is how it works."
- TGRR, raising the bar at work.

Nephew Twiddleton

This thread gave me an interesting image. Say all of the major powers have their own skynet for lack of a better term. Except that since these skynets are programmed to protect the interests of the nation state that owns it- a scenario where increasingly the skynets focus almost exclusively on each other. A state of war with no physical violence but rather constant attempts for the ai generals to sabotage each other through viruses and other cyberattacks.
Strange and Terrible Organ Laminator of Yesterday's Heavy Scene
Sentence or sentence fragment pending

Soy El Vaquero Peludo de Oro

TIM AM I, PRIMARY OF THE EXTRA-ATMOSPHERIC SIMIANS

Juana

Just got around to properly reading your essay, Cain, and good god that's chilling. It's like the premise of a sci-fi novel (can I give this future back to the past, do you reckon? I think we got a lemon).
"I dispose of obsolete meat machines.  Not because I hate them (I do) and not because they deserve it (they do), but because they are in the way and those older ones don't meet emissions codes.  They emit too much.  You don't like them and I don't like them, so spare me the hysteria."

minuspace

Quote from: Secret Agent GARBO on June 12, 2012, 12:45:30 AM
Just got around to properly reading your essay, Cain, and good god that's chilling. It's like the premise of a sci-fi novel (can I give this future back to the past, do you reckon? I think we got a lemon).

Seize, Squeeze & Secede  :lulz:

Elder Iptuous

Quote from: Twiddlegeddon on June 11, 2012, 11:06:02 PM
This thread gave me an interesting image. Say all of the major powers have their own skynet for lack of a better term. Except that since these skynets are programmed to protect the interests of the nation state that owns it- a scenario where increasingly the skynets focus almost exclusively on each other. A state of war with no physical violence but rather constant attempts for the ai generals to sabotage each other through viruses and other cyberattacks.

I was just thinking the same thing on the way to work this morning.
The papers would say "WAR!", and nothing visible would happen... 
the AIs constantly twarting each other's attempts to deal physical damage while attempting to do the same.
the frantic conflict fought autonomously in bits and packets would rage on as it slipped from public memory over the years, people going about their daily lives happily.  The human generals responsible for overseeing the war increasingly relegating the tasks to the  category of routine, mundane, and ultimately, ignorable, until, even they forget about it.
then....BREAKTHROUGH! and the bombs would drop. and one half of the planet becomes uninhabitable.
and nobody knows why.

LMNO

It's because Mike the Engineer tripped over the power cord.

The Good Reverend Roger

Quote from: Elder Iptuous on June 12, 2012, 04:46:26 PM
Quote from: Twiddlegeddon on June 11, 2012, 11:06:02 PM
This thread gave me an interesting image. Say all of the major powers have their own skynet for lack of a better term. Except that since these skynets are programmed to protect the interests of the nation state that owns it- a scenario where increasingly the skynets focus almost exclusively on each other. A state of war with no physical violence but rather constant attempts for the ai generals to sabotage each other through viruses and other cyberattacks.

I was just thinking the same thing on the way to work this morning.
The papers would say "WAR!", and nothing visible would happen... 
the AIs constantly twarting each other's attempts to deal physical damage while attempting to do the same.
the frantic conflict fought autonomously in bits and packets would rage on as it slipped from public memory over the years, people going about their daily lives happily.  The human generals responsible for overseeing the war increasingly relegating the tasks to the  category of routine, mundane, and ultimately, ignorable, until, even they forget about it.
then....BREAKTHROUGH! and the bombs would drop. and one half of the planet becomes uninhabitable.
and nobody knows why.

Why use bombs, though?  Just shut off power to sanitation plants, hospitals, and traffic signals, etc.  Shut down communications.  Re-route shipping manifests to keep food out of cities.

Computers wouldn't wage war like we do.  There's no monkey urge to kill violently.  Just the imperative to eliminate.
" It's just that Depeche Mode were a bunch of optimistic loveburgers."
- TGRR, shaming himself forever, 7/8/2017

"Billy, when I say that ethics is our number one priority and safety is also our number one priority, you should take that to mean exactly what I said. Also quality. That's our number one priority as well. Don't look at me that way, you're in the corporate world now and this is how it works."
- TGRR, raising the bar at work.

Anna Mae Bollocks

That would turn into a massive Donner Party in no time flat.
Zombies, my ass. Hunger-crazed food freaks.  :p
Scantily-Clad Inspector of Gigantic and Unnecessary Cashews, Texas Division

Elder Iptuous

Quote from: The Good Reverend Roger on June 12, 2012, 05:02:59 PM
Quote from: Elder Iptuous on June 12, 2012, 04:46:26 PM
Quote from: Twiddlegeddon on June 11, 2012, 11:06:02 PM
This thread gave me an interesting image. Say all of the major powers have their own skynet for lack of a better term. Except that since these skynets are programmed to protect the interests of the nation state that owns it- a scenario where increasingly the skynets focus almost exclusively on each other. A state of war with no physical violence but rather constant attempts for the ai generals to sabotage each other through viruses and other cyberattacks.

I was just thinking the same thing on the way to work this morning.
The papers would say "WAR!", and nothing visible would happen... 
the AIs constantly twarting each other's attempts to deal physical damage while attempting to do the same.
the frantic conflict fought autonomously in bits and packets would rage on as it slipped from public memory over the years, people going about their daily lives happily.  The human generals responsible for overseeing the war increasingly relegating the tasks to the  category of routine, mundane, and ultimately, ignorable, until, even they forget about it.
then....BREAKTHROUGH! and the bombs would drop. and one half of the planet becomes uninhabitable.
and nobody knows why.

Why use bombs, though?  Just shut off power to sanitation plants, hospitals, and traffic signals, etc.  Shut down communications.  Re-route shipping manifests to keep food out of cities.

Computers wouldn't wage war like we do.  There's no monkey urge to kill violently.  Just the imperative to eliminate.

hmm good point.
it probably wouldn't be bombs hitting the cities.  but if i were the AI, i would probably use the nukes for EMP.
it would throw the target back into the stoneage, and hopefully kill the other AI (or isolate it, at least) at the same time.

so:
then....BREAKTHROUGH! and the lights would go out. and one half of the planet becomes hell.
and nobody knows why.

i like survivalist fiction, and the EMP/grid-down scenario seems scariest to me because of the incredible repercussions, and the feasibility of it happening in my lifetime.
have you ever read the Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack? Link is the executive summary.

tyrannosaurus vex

If we invented AI we would be abdicating our position on the food chain. Whether that AI turns out to be good or bad, we have effectively said we are done being in charge, and asked for a God to come down from heaven (or out of a wire) and please lead us by the nose, thank you. And that's something I can totally see us doing, for a lot of reasons. And you'd think I would have a strong opposition to that, but I don't.

Left to our own devices, we are going to fuck the planet and kill each other until the Sun turns into a red giant and cooks everything on Earth. That's just our identity, and any pretending we are capable of anything more than passing phases of higher reasoning is just wrong. It may be that our evolutionary destiny is to realize (or for some of us to realize) that we are incapable of transcending ourselves before we destroy ourselves. An AI master in charge of the future of Humanity has a high probability of doing a better job than we will. Even if it murders us all in the process, at least we will have the honor of knowing we directly created our evolutionary successor.
Evil and Unfeeling Arse-Flenser From The City of the Damned.