Principia Discordia

Principia Discordia => Or Kill Me => Topic started by: Cain on May 30, 2012, 11:47:57 pm

Title: Something I rushed off
Post by: Cain on May 30, 2012, 11:47:57 pm
Busy with other stuff.  This isn't a fully realised and carefully thought-out essay; it's something I knocked off in the last half hour, more in the hope of stimulating discussion than anything else.



“He was intellect… He was war!  That is what they are!  Do you not see?  With every heartbeat they war against circumstance, with every breath they conquer!  They walk among us as we walk among dogs, and we yowl when they throw out scraps, we whine and whimper when they raise their hands…”
- Cnaiur urs Skiotha, The Thousandfold Thought, R. Scott Bakker

The difference between war and civil peace is only a relative lack of open violence in the latter, with the contention between forces possibly boiling over into new war or revolution, which may in turn result in a new settlement. Indeed, from the inversion of Clausewitz, Foucault concludes that “the final decision can only come from war, or in other words a trial by strength in which weapons are the final judges”.
- Mark G.E. Kelly, The Political Philosophy of Michel Foucault

The whole secret lies in confusing the enemy, so that he cannot fathom our real intent.
- Sun Tzu, The Art of War

In the trailer to Tinker Tailor Soldier Spy, the viewer is asked "how do you find an enemy who is hidden right before your eyes?"  Not a bad question, for those whose work involves questions of trust and espionage.

A more relevant question, however, would have been "how do you fight an enemy you don't even know you have?"

As the 21st century was born, a deadly mix of global connectivity, reliance on highly complex systems and the emergence of non-traditional modes of warfare rendered the old, traditional methods of fighting obsolete in many circumstances, while opening the door to a new and entirely more insidious method of conflict.

The nature of the conflict is such that the first shots in this new kind of war may already have been fired, and yet we remain unaware of them.  This is the coming age of 5GW, or perhaps it could also be called "Invisible Warfare".  It will also be the last great transition in how humanity fights its wars.

Conflict can be seen, broadly speaking, as falling somewhere on a line, with total warfare (genocide) at the one end, and pure politics (non-violent, vocal conflict) at the other.  While it is hard to generalize about such a thing, as a rule the kinds of war we have experienced throughout history keep moving towards the political end of the conflict spectrum.  As such, warfare has become more discriminate and more undefined in terms of combatants and theatres of conflict. 

A full realization of this would be a war that hews as close to the political end of the spectrum as possible, manipulating political, religious and social contexts and themes to bring about an outcome preferable to the strategist in question.  Violence may be needed for elements of that, but ideally it should be kept to the minimum necessary to order the context on your terms.

The problem is, of course, that if such manipulation were blatant, many would find it offensive and be prepared to work against it.  Furthermore, blatancy allows your enemy to formulate counter-strategies to defeat your attempts, or just to target and destroy you outright.  Therefore, we come to the most important qualification of this new kind of warfare – that it must practice deception at the Grand Strategic level.  Your enemy can never know that they are your enemy in the first place.  Thus, invisible warfare.

The importance of perception lies not only in how the 5GW must be planned and executed; it is also the key to defeating the enemy.  If you can convince an enemy they are not your enemy, and to act in a way that does not harm (or in fact benefits) you, then they are not an enemy, are they?

It also makes this new kind of war intensely intellectual.  Not only will the strategist(s) in charge of such a campaign require a near-encyclopaedic knowledge of the society they wish to subvert, they will have to take an approach somewhere between intelligent guerrilla warfare and intellectual Aikido, turning the cognitive processes of their enemy against their own designs, attacking from unknown mental territory, using new cognitive scripts to obtain the element of surprise which is so vital to success.  Those who think fastest, hold the most accurate mental maps and are the most creative… victory will likely be theirs.

And this is where the problem comes in.

Every mode of conflict contains in and of itself the contradiction which allows warfare to mature to a newer gradient – much like the process of Hegelian synthesis, contradictions arise in how that mode of warfare is handled, which are then resolved via a newer model of conflict.  For example, nuclear weapons made large-scale conventional warfare obsolete.  Therefore, proxy wars waged by guerrilla forces, special forces and paramilitary arms of intelligence agencies, came to replace large-scale warfare.  One can see this in Vietnam, or Afghanistan, or the current war on terror.

If 5GW relies on intelligence and secrecy as its key attributes, then there is at least one obvious problem here.  Or two, depending on how you want to look at it.  Namely: what happens when you add self-modifying intelligences, which can optimize themselves, into such a mix?

I am, of course, talking about artificial intelligence.

At once, you can see the temptation, I am sure.  If a nation builds up a 5GW infrastructure, eventually it will want an AI, or something functionally similar, to help with computations, planning the strategy of the campaign, analysis of which targets need to be eliminated and which ones can be spared and, in short, how best to achieve the goals of the strategists.

However, an AI which is more intelligent than a human is fundamentally untrustworthy.  Its goals cannot really be known, or understood.  Given the speed at which it could work, and the modifications it could make to itself, it may be impossible to figure out where it is misleading you.

The obvious solution is to not use an AI at all.  But that will put nations who refuse to use them at a comparative disadvantage to those who do.  Once Pandora's box is opened, it cannot be closed again.  Once machines take over the vital war-making functions from human strategists, we enter entirely unknowable territory.  Will AIs compete, or cooperate?  How will they view humans?  What would their goals actually be?

This Sixth Generation of Warfare will mark the end of the human era of conflict.  To be sure, if humans are still around, they will fight.  But will they do it because they have chosen to, or because they have been carefully manipulated into believing it is in their best interest by an intelligence far beyond them?

Of course, this may never come to pass.  AI technology may not even be theoretically possible, in the sense that most of us understand it.  But there is another path this can take, with similar outcomes.  And that is transhumanism.  I can understand the desire to transcend our biology, to optimize our physical selves through a variety of methods, to be the best possible sentient beings that we can.

But the same problems apply.  If transhumans are created which are more intelligent than the average human, that self-secrete "smart drugs", that have far denser and larger brains than the average person, then they can also manipulate human strategists to their own ends.  And as with AIs, the temptation to create such transhumans for short term comparative advantage will overcome all objections to such a plan, regardless of how many bioethics treaties are signed.

Or perhaps transhumanism is also fundamentally unfeasible, either on a theoretical level, or because our planet is becoming ever more resource-strapped and the vast effort this would require may not be worth the payoff, given the choices available at the time.

Regardless of scenario, it is fair to say we have now entered the last great age of human warfare.  It will be messy, it will be confusing, and you will not know it when you see it.
Title: Re: Something I rushed off
Post by: Salty on May 31, 2012, 12:41:59 am
What terrifies me with this kind of warfare is that the view from my window will look the same no matter what. It seems to me that no matter who wins, the bulk of people lose, because they just roll along with the program because it's the right thing to do. The meaty, plastic cogs that keep this kind of war machine alive sitting there slack-jawed, staring at romantic comedies while others get chewed up... it'll all just get dialed up.
Title: Re: Something I rushed off
Post by: AnarChloe on May 31, 2012, 12:47:56 am
What terrifies me with this kind of warfare is that the view from my window will look the same no matter what. It seems to me that no matter who wins, the bulk of people lose, because they just roll along with the program because it's the right thing to do. The meaty, plastic cogs that keep this kind of war machine alive sitting there slack-jawed, staring at romantic comedies while others get chewed up... it'll all just get dialed up.

^ This ^

With this kind of warfare, huge and major changes to policy could be made and no one would notice.

And that is terrifying.
Title: Re: Something I rushed off
Post by: [redacted] on May 31, 2012, 02:02:44 am
Hopefully an AI would be free of our instinct to be top dog / pack leader.

If we're their programmers then this would be some achievement though.
Title: Re: Something I rushed off
Post by: Mesozoic Mister Nigel on May 31, 2012, 02:30:46 am
I really wish my brain was in adequate working order for me to formulate a quality reply right now.

But it's not, so I'm just posting to say, interesting topic, and I hope to have more to say about it when I'm not so exhausted.
Title: Re: Something I rushed off
Post by: NewSpag on May 31, 2012, 07:48:10 am
I disagree.  I would postulate that we are well beyond 5GW, but the entity that won the war now controls the flow of information in your world (quite possibly an AI, if that means a non-human intelligence), leading you to suggest that we are entering into said war.  In this situation, by believing you are going to war you have already lost the war.  Your "enemy" has you so thoroughly confused that you cannot see the "secret": that there is no enemy at all.
Title: Re: Something I rushed off
Post by: P3nT4gR4m on May 31, 2012, 10:48:17 am
With regards transhumanism, I'm of the opinion that this has already happened but we don't really class it as that yet. Imagine a soldier with a gun, walking about, looking for something to shoot.  Now imagine his enemy - another soldier with a gun and a gps smartphone, hooked up to a video camera on a mini drone - which one is your money riding on?

Transhuman is generally accepted as humans with bits altered or bits added - smartphone doesn't quite count cos it's not growing out his spleen kinda thing but the tactical advantage is already in play, we're just waiting on the spleen-meld.

I see my ability to harness technology and use it to my advantage as something that makes me a vastly more powerful free agent in this emerging invisible theatre. The thing about 5GW is that the traditional players aint the only ones fucking up shit on the battlefield anymore. The deeper down the rabbit hole we go the more opportunity arises for the one man nation state to carve out a nice little slice of pie for himself.
Title: Re: Something I rushed off
Post by: Cain on May 31, 2012, 10:49:13 am
Hopefully an AI would be free of our instinct to be top dog / pack leader.

If we're their programmers then this would be some achievement though.

Hopefully, but not necessarily.  An AI would have to be able to rewrite itself, in order to stay flexible and update its models of reality.  A constrained AI would almost certainly lose a war against an unconstrained AI, so the question of being able to program it to act in that way is ultimately a moot point.

Because we're dealing with something of a magnitude of intelligence higher than humans, trying to figure out what it may be doing and what its goals are would be like a dog trying to make sense of human actions.  Hence the very first quote.  Even with significant genetic modifications, a human or transhuman simply wouldn't be able to keep up with an AI, given Moore's Law (even taking into account the current slowing down of that trend).

I disagree.  I would postulate that we are well beyond 5GW, but the entity that won the war now controls the flow of information in your world (quite possibly an AI, if that means a non-human intelligence), leading you to suggest that we are entering into said war.  In this situation, by believing you are going to war you have already lost the war.  Your "enemy" has you so thoroughly confused that you cannot see the "secret": that there is no enemy at all.

Uh, yeah, or maybe we're really brains in a vat, and everything we sense is an illusion.  Do you have an actual contribution, rooted in facts, to make to this debate, or do you just want to play clever word games so you can pride yourself on how "smart" you are?
Title: Re: Something I rushed off
Post by: Cain on May 31, 2012, 10:53:49 am
With regards transhumanism, I'm of the opinion that this has already happened but we don't really class it as that yet. Imagine a soldier with a gun, walking about, looking for something to shoot.  Now imagine his enemy - another soldier with a gun and a gps smartphone, hooked up to a video camera on a mini drone - which one is your money riding on?

Transhuman is generally accepted as humans with bits altered or bits added - smartphone doesn't quite count cos it's not growing out his spleen kinda thing but the tactical advantage is already in play, we're just waiting on the spleen-meld.

I see my ability to harness technology and use it to my advantage as something that makes me a vastly more powerful free agent in this emerging invisible theatre. The thing about 5GW is that the traditional players aint the only ones fucking up shit on the battlefield anymore. The deeper down the rabbit hole we go the more opportunity arises for the one man nation state to carve out a nice little slice of pie for himself.

Well, I consider transhumanism to specifically be the controlled genetic alteration of human DNA for optimized performance.

Everything else is just technology.  The same comparisons could be made between a 14th century archer and a 19th century soldier armed with a rifle and relying on trains and semaphore, but I wouldn't exactly consider the latter to be transhuman.

The rest I agree with, however, though it will be significantly easier for factions within a government to practice deception on the grand strategic level, because they have access to superior resources (everything imposes a cost, in terms of time, efficiency etc.  Even tactical deception requires reliable intel, being able to move troops into position without them being spotted, having a point on the route where an ambush is feasible, coordinating the attack so all the individual elements work together, etc.).  An actual government, most likely, would not be able to do that, but the permanent elements within the bureaucracy, military and intelligence services possibly could.
Title: Re: Something I rushed off
Post by: P3nT4gR4m on May 31, 2012, 11:16:43 am
Everything else is just technology.  The same comparisons could be made between a 14th century archer and a 19th century soldier armed with a rifle and relying on trains and semaphore, but I wouldn't exactly consider the latter to be transhuman.

Totally, no argument there. Personally I include cybernetics under the transhuman umbrella. That's why I'm looking at current microprocessor tech and thinking that, even before I can get it as an implant, I'm getting most of that advantage already.

Communications is going to be one of the core weapons in the new warfare. The ability to eavesdrop on theirs whilst hiding or obfuscating your own is going to be more crucial to the outcome than it was back in the Enigma days, given the speed at which things can shift and react now. Reading between the disinformation lines will be a big part of the key to success.

One thing that interests me is the emerging potential for a loose network of mercenaries to set up shop. Something along the lines of Anon or the Chinese hackers but with a much less idealistic motivation. An organisation like that could quickly become a pretty formidable player in the new game.

Title: Re: Something I rushed off
Post by: LMNO on May 31, 2012, 02:13:38 pm
I don't have much to say about AI or TransHumanism that hasn't either been said before or is retarded.  But before the essay went there, I was struck by this:

Quote from: Cain
The importance of perception lies not only in how the 5GW must be planned and executed; it is also the key to defeating the enemy.  If you can convince an enemy they are not your enemy, and to act in a way that does not harm (or in fact benefits) you, then they are not an enemy, are they?

The idea of silent, total manipulation of a perceived threat or enemy who has no idea a manipulation is taking place sounds both like a PKD novel and what has been slowly happening to this country for a long time now; only the enemy is us.  Say what you want about "memetics", but the marketing concept that ideas can be introduced that slip through mental filters has been becoming more refined with every passing year, and every new psychological/sociological experiment that gets published.

I know that Cain was most likely talking about the manipulation of a state or government by another, but an even less obvious ploy is to unobtrusively act against the citizenry, especially in a democracy/republic, so that the people would be the movers, electing the kind of government that the enemy country wants to see. 

Another question that quote brings to mind is, if two states are acting like friends, and behaving like friends, isn't that functionally equivalent to "peace"?
Title: Re: Something I rushed off
Post by: NewSpag on June 01, 2012, 01:42:46 am
Uh, yeah, or maybe we're really brains in a vat, and everything we sense is an illusion.  Do you have an actual contribution, rooted in facts, to make to this debate, or do you just want to play clever word games so you can pride yourself on how "smart" you are?
Yeesh, I'm still pretty new here so I don't know what constitutes a "fact" in this debate.  Hell, considering how many layers of abstraction I have to go through to even access this forum, I'm surprised I actually manage to convince myself that you are a human being.  Point being, until you tell me where "facts" come from, the brains-in-a-vat/illusion theory is pretty much my default fallback.

Try #2:

The idea of silent, total manipulation of a perceived threat or enemy who has no idea a manipulation is taking place sounds both like a PKD novel and what has been slowly happening to this country for a long time now; only the enemy is us.  Say what you want about "memetics", but the marketing concept that ideas can be introduced that slip through mental filters has been becoming more refined with every passing year, and every new psychological/sociological experiment that gets published.


While surfing my way through the interweb the other day I stumbled across this site (http://audint.net/) claiming to be a research collective on infrasonic warfare. According to Wikipedia, infrasonic waves are outside humans' normal hearing range and can cause people to "feel vaguely that supernatural events are taking place".  Sounds almost crazy enough to be something that Cain would accept as a fact, or a project that started here.
Title: Re: Something I rushed off
Post by: The Johnny on June 01, 2012, 03:16:14 am

Conflict can be seen, broadly speaking, as falling somewhere on a line, with total warfare (genocide) at the one end, and pure politics (non-violent, vocal conflict) at the other.  While it is hard to generalize about such a thing, as a rule the kinds of war we have experienced throughout history keep moving towards the political end of the conflict spectrum.  As such, warfare has become more discriminate and more undefined in terms of combatants and theatres of conflict. 

A full realization of this would be a war that hews as close to the political end of the spectrum as possible, manipulating political, religious and social contexts and themes to bring about an outcome preferable to the strategist in question.  Violence may be needed for elements of that, but ideally it should be kept to the minimum necessary to order the context on your terms.

I don't want to seem a snob; this is merely a detail that could broaden the discussion (and it's the only real thing I can add to it):

There's symbolic violence, or non-physical violence, that resides within the political interactions between people or groups.

A certain society might be structured in a way in which the distribution of wealth is very unequal; that is, in a sense, violent in an indirect manner. Discrimination is a kind of symbolic violence too, or sexism, etc.

Violence can be direct physical harm, but it can also be an injustice.

My point is that violence isn't reduced from one trend of managing conflict to another, it's merely transformed or expressed in a different manner... why kill someone when you can make them do what you want? The bottom line is the nullification of the other person's projects/agenda/desires and the promotion of one's own.
Title: Re: Something I rushed off
Post by: Triple Zero on June 01, 2012, 10:15:12 am
(written yesterday to follow LMNO's reply, but then the forum started crashing--I'm blaming Wintermute or Helios)

I was sort of wondering a similar thing as LMNO's last thought. It may be a bit blasphemous in a "freedom or kill me" sense, but as these generations of warfare advance, they also seem to get less violent, with fewer overall horrors (like Goya's horrors of war -- we still get plenty of new and improved horrors), taken on the whole? Or do those horrors just move elsewhere, as they are currently occurring in Africa and parts of the Middle East.

I'm not making a very clear point I suppose but what I was thinking about, isn't this progress, in some sense?

BTW Cain I really enjoyed your essay, and I hadn't at all expected the sudden turn to AI and transhumanism :)

I have read several articles on the musings about hyperintelligent AIs on LessWrong.com, which I assume is where you also read about those theories. I'm really not sure what to make of those. But I guess that's exactly what the Technological Singularity has always meant, by definition: It's impossible to reason about what comes after, until you actually get there, and by then it's too late to do anything about it.

But I wouldn't even want to guess when we get something like that.

The closest thing, that I've heard about and is used for actual strategic decisions, is the gargantuan data-centre that is (being?) built in Utah. Based on the Total Information Awareness program, they're receiving more data than any human could conceivably process. Something on the order of the amount of data that is sent through the Internet daily (or four times that? it was in the article I linked in Privacy Thread).

Now I have some background in Machine Learning, and recent developments seem to point out that, if you've got insane amounts of data, you merely need the processing power to deal with all of it, and ML algorithms get really, really good. There was an article, or maybe a TED talk, called "the unreasonable effectiveness of big data" -- or something like that, it's been a while since I came across it.

So, basically you just need to have really fast and powerful computers, and then you need to mass-produce an insane fuckton of them and bury them underground in Utah.

Now Machine Learning isn't the same thing as AI. I'm writing it capitalized because I'm referring to the specific definition of the term as used in Computational Science: a modelling, decision, classification or prediction algorithm that gets better at doing its task on unseen data, the more seen (training) data you provide it with. It's basically extra fancy types of statistics and regression formulas with a bunch of Bayes thrown in. But they work very well.
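That "gets better the more seen (training) data you provide it" property can be shown with a toy sketch (purely illustrative numbers, not any real system): fit a single slope by least squares and watch the estimate tighten as the training set grows.

```python
import random

def noisy_sample(n, true_slope=3.0, noise=1.0, seed=0):
    """Draw n (x, y) pairs from y = true_slope * x + Gaussian noise."""
    rng = random.Random(seed)
    xs = [rng.uniform(0.0, 10.0) for _ in range(n)]
    ys = [true_slope * x + rng.gauss(0.0, noise) for x in xs]
    return xs, ys

def fit_slope(xs, ys):
    """Least-squares slope through the origin: argmin_w sum((y - w*x)^2)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# More training data, better estimate of the true slope (3.0) --
# no extra cleverness in the algorithm, just volume.
for n in (5, 50, 5000):
    xs, ys = noisy_sample(n)
    print(n, round(fit_slope(xs, ys), 3))
```

Obviously the Utah-scale systems are vastly fancier, but the shape of the claim is the same: hold the algorithm fixed, pour in more data, and performance climbs.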

So can we build really smart computers to analyze really complex things? Yeah. But one thing I'm not at all sure we're close to cracking, or even know how to approach building, is the "self-modifying/self-improving" part of the deal. Which is a rather key element in Yudkowsky's reasonings on these matters. But he just takes it for granted. He justifies this decision in one of his essays btw, and that's okay; I really have to commend him for trying to take on that Singularity problem in a serious manner, and actually doing a pretty damn good job at trying to predict what comes after an event that changes everything.

See it's one thing for a ML algorithm to optimize and modify its own parameters to optimally fit, model and predict incoming data, according to the algorithm. But it's a whole other ballgame for a computer to actually modify the algorithm to do something different, free-form. Or to improve the algorithm into something that works and optimizes significantly better than before. In fact that last one would probably be enough to set off the chain reaction into AI so intelligent it's impossible to know what its goals are or what it will do, aka Singularity.

But it might be quite a while before we can build such a thing, IF it's even possible. Example, right now we got pretty "smart" cars, monitoring tire pressure, detecting obstacles, capable of self-diagnosing problems, etc. They can internally dial all sorts of knobs to make the car engine run as smoothly as possible, and that's amazing. But they can't increase the size of their gas tank, modify the physical engine, or grow more wheels. They can only tweak parameters, but not change the actual system. And that's where current day AI and ML is at. Sure, they got a LOT of parameters, making the system tweakable to great extent, but only within those parameters. They can't think outside that box. Even though, if you design the system very cleverly, that box might be a lot bigger than a human would assume at first glance of the parameters given, and the machine will find it, it's still a box.
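The box metaphor can be made concrete with a small sketch (all names and numbers hypothetical): a hill-climbing tuner that adjusts the values of a fixed set of knobs, but whose code gives it no way to add a knob, rewrite its objective, or change its own search rule.

```python
import random

def tune(objective, params, steps=300, seed=0):
    """Hill-climb numeric knob values. The tuner only explores *inside*
    the parameter box it was handed; nothing here lets it enlarge the
    box, alter `objective`, or modify this very loop -- that would be
    the self-modification step the text says current systems lack."""
    rng = random.Random(seed)
    best, best_score = dict(params), objective(params)
    for _ in range(steps):
        trial = {k: v + rng.gauss(0.0, 0.1) for k, v in best.items()}
        score = objective(trial)
        if score > best_score:
            best, best_score = trial, score
    return best

# A toy "engine smoothness" score peaking at fuel_mix=1.2, timing=0.4
# (made-up numbers for illustration).
def smoothness(p):
    return -((p["fuel_mix"] - 1.2) ** 2 + (p["timing"] - 0.4) ** 2)

start = {"fuel_mix": 1.0, "timing": 0.0}
tuned = tune(smoothness, start)
```

The tuner will happily improve the score, but it ends with exactly the two knobs it started with: it dials parameters, it never grows more wheels.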

I was going to comment on the other, transhuman, part of your essay, but this post is getting long enough as it is.
Title: Re: Something I rushed off
Post by: Prince Glittersnatch III on June 01, 2012, 08:49:22 pm
The obvious solution is to not use an AI at all.  But that will then put nations who refuse to use them at a comparative disadvantage to those who do.  Once Pandora's box is opened, it cannot be closed again.  Once machines take over the vital war-making functions from human strategists, we are entering what is entirely unknowable territory.  Will AIs compete, or cooperate?  How will they view humans?  What would their goals actually be?

At their core all AIs will probably have some sort of command telling them to obey orders, or to act in their owner's best interest. I can't really see any advantage to allowing them to modify that. It would probably be pretty vague as well, because giving such an intelligence specific orders would be pointless micro-management. What would be interesting is if they were allowed to lie to their owners, or if their obligation to act in their owner's best interest overrode their programming against lying. People are stupid, and that should be doubly obvious to a higher intelligence. What if the AIs lie to their owners for their own good?


Title: Re: Something I rushed off
Post by: Elder Iptuous on June 01, 2012, 09:20:43 pm
I would assume there would be some primary motivator for the AI.
ours (as an intelligence) is reproduction. all else is to support that, ultimately. It seems to be very tightly coupled to the  physical chemical gradient motivation that is underneath the layer of intelligence.
the AI needs to have 'protect humanity' as its prime motivator.
a computer's prime motivator is simply 'make the electrons go through the circuit' and we just arrange the circuit to give the result we want while it pursues that.
in making an intelligence out of a computer, we need to couple the 'simple circuit' motivation tightly to the 'protect people' motivation.
of course it can still break just like we do when we occasionally follow the 'chemical gradient' motivation and end up killing ourselves in the head before we reproduce.
if it is a broken AI that we fear, rather than design that destroys us, then redundancy could save us.
Title: Re: Something I rushed off
Post by: Triple Zero on June 01, 2012, 09:48:33 pm
There was this AI Challenge that Yudkowsky held a couple of times:

The challenge was about a self-modifying AI, hyperintelligent of course, that communicates via chat. Yudkowsky played the part of the AI and the challenge was held in a private IRC channel. The challenger is assigned the role of "gatekeeper". The AI is in a type of cyber box firewall that prevents it from accessing the Internet and taking over the world or something. If the gatekeeper says "okay I let you out of the box" or something to that end, the AI wins. The gatekeeper wins if he can chat with the AI for X hours (I believe it was 2 or 3) without deliberately stalling the AI by not responding to it or reading its messages. There were a few other rules (I suppose such as acting like "you're not a hyperintelligent AI, you're just Yudkowsky"), but the AI was allowed to play just about every terrible dirty trick in the book. Which is why the IRC logs were previously agreed upon to be kept secret. I think only the final few lines where the gatekeeper let out the AI were published somewhere.

All contestants were certain beforehand they would not let out the AI. Yudkowsky won 2 times out of 3 (IIRC). Thereby proving that even if you "only" got a hyper-intelligent self-modifying AI that's "perfectly safe" because it's not connected to anything that allows it to do harm, as long as it can freely communicate via a text interface you still have a possible privilege escalation vulnerability that exploits the human brain.

Having shown it's possible, he said he wasn't interested in doing it again unless someone's got a compelling reason to get something novel out of it beyond "I think I could beat you". I bet some of those tricks must have been pretty shameful ...

You can read more about it by following a bunch of random links from here: http://news.ycombinator.com/item?id=195959 (you should also scroll a bit down to see the part of the thread where Yudkowsky wrote. HN's ranking system is awful)
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 01, 2012, 10:12:47 pm
very interesting!
will check it out.
Title: Re: Something I rushed off
Post by: minuspace on June 11, 2012, 08:08:21 pm
Quote
Every mode of conflict contains in and of itself the contradiction which allows warfare to mature to a newer gradient – much like the process of Hegelian synthesis, contradictions arise in how that mode of warfare is handled, which are then resolved via a newer model of conflict. 

I think there may be something about this part that could represent a fork in the process.  In the background, the idea of Hegelian synthesis is coupled to how the evolution of self-modifying algorithms would proceed according to some form of "natural" selection.  This framework assumes the evolution will rationally converge to a singularity, however, it is unclear whether a process can rationally direct itself to termination, ultimately.  They's like rollin' rocks...

This brings into play the "red-queen" of evolutionary models, whereby, instead of sublating into a state of non-violent singularity, there is a recursive perpetuation of conflict.  According to this perspective, the "war" never ends.  I once read an interesting paper on the topic from the Institute of Occidental Studies [College] comparing the recursive structure of violence and conflict in acts of sedition vs. secession via the works of Machiavelli and Spinoza.
http://www.borderlands.net.au/vol6no3_2007/lucchese_sedition.htm
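The "red queen" reading can be made concrete with a toy co-evolution loop. The numbers here are entirely hypothetical: each side escalates in proportion to the other's current strength, so absolute effort compounds while neither side ever reaches a terminal state.

```python
import random

def red_queen(steps=1000, rate=0.005, seed=1):
    """Two adversaries, each adapting to the other's current strength.

    Returns the final strength of each side; both grow without bound,
    which is the 'war never ends' picture: running ever faster just
    to hold relative position.
    """
    rng = random.Random(seed)
    a = b = 1.0
    for _ in range(steps):
        # Escalation is driven by the opponent, not by an external goal,
        # so there is no fixed point short of one side vanishing.
        a += rate * b * rng.random()
        b += rate * a * rng.random()
    return a, b
```

Contrast with the Hegelian reading: here no synthesis terminates the process, because the "resolution" of each round is just the opening condition of the next.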
Title: Re: Something I rushed off
Post by: minuspace on June 11, 2012, 10:06:14 pm

...
All contestants were certain beforehand they would not let out the AI. Yudkowsky won 2 times out of 3 (IIRC). Thereby proving that even if you "only" got a hyper-intelligent self-modifying AI that's "perfectly safe" because it's not connected to anything that allows it to do harm, as long as it can freely communicate via a text interface you still have a possible privilege-escalation vulnerability that exploits the human brain.
...


You know it's interesting when computers start using social engineering in order to make us think we willfully provided them the privileges they only asked for politely.  Seems like the premise assumes our brains have already been hijacked - I mean, just because it IS possible...
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 11, 2012, 10:30:17 pm

...
All contestants were certain beforehand they would not let out the AI. Yudkowsky won 2 times out of 3 (IIRC). Thereby proving that even if you "only" got a hyper-intelligent self-modifying AI that's "perfectly safe" because it's not connected to anything that allows it to do harm, as long as it can freely communicate via a text interface you still have a possible privilege-escalation vulnerability that exploits the human brain.
...


You know it's interesting when computers start using social engineering in order to make us think we willfully provided them the privileges they only asked for politely.  Seems like the premise assumes our brains have already been hijacked - I mean, just because it IS possible...

Hijacking implies something being seized involuntarily.
Title: Re: Something I rushed off
Post by: Nephew Twiddleton on June 11, 2012, 11:06:02 pm
This thread gave me an interesting image. Say all of the major powers have their own skynet, for lack of a better term. Except that, since these skynets are programmed to protect the interests of the nation-state that owns them, you get a scenario where the skynets increasingly focus almost exclusively on each other: a state of war with no physical violence, but rather constant attempts by the AI generals to sabotage each other through viruses and other cyberattacks.
Title: Re: Something I rushed off
Post by: Juana on June 12, 2012, 12:45:30 am
Just got around to properly reading your essay, Cain, and good god that's chilling. It's like the premise of a sci-fi novel (can I give this future back to the past, do you reckon? I think we got a lemon).
Title: Re: Something I rushed off
Post by: minuspace on June 12, 2012, 08:04:07 am
Just got around to properly reading your essay, Cain, and good god that's chilling. It's like the premise of a sci-fi novel (can I give this future back to the past, do you reckon? I think we got a lemon).

Seize, Squeeze & Secede  :lulz:
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 12, 2012, 04:46:26 pm
This thread gave me an interesting image. Say all of the major powers have their own skynet, for lack of a better term. Except that, since these skynets are programmed to protect the interests of the nation-state that owns them, you get a scenario where the skynets increasingly focus almost exclusively on each other: a state of war with no physical violence, but rather constant attempts by the AI generals to sabotage each other through viruses and other cyberattacks.

I was just thinking the same thing on the way to work this morning.
The papers would say "WAR!", and nothing visible would happen... 
the AIs constantly thwarting each other's attempts to deal physical damage while attempting to do the same.
the frantic conflict fought autonomously in bits and packets would rage on as it slipped from public memory over the years, people going about their daily lives happily.  The human generals responsible for overseeing the war increasingly relegating the tasks to the category of routine, mundane, and ultimately ignorable, until even they forget about it.
then....BREAKTHROUGH! and the bombs would drop. and one half of the planet becomes uninhabitable.
and nobody knows why.
Title: Re: Something I rushed off
Post by: LMNO on June 12, 2012, 05:00:26 pm
It's because Mike the Engineer tripped over the power cord.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 05:02:59 pm
This thread gave me an interesting image. Say all of the major powers have their own skynet, for lack of a better term. Except that, since these skynets are programmed to protect the interests of the nation-state that owns them, you get a scenario where the skynets increasingly focus almost exclusively on each other: a state of war with no physical violence, but rather constant attempts by the AI generals to sabotage each other through viruses and other cyberattacks.

I was just thinking the same thing on the way to work this morning.
The papers would say "WAR!", and nothing visible would happen... 
the AIs constantly thwarting each other's attempts to deal physical damage while attempting to do the same.
the frantic conflict fought autonomously in bits and packets would rage on as it slipped from public memory over the years, people going about their daily lives happily.  The human generals responsible for overseeing the war increasingly relegating the tasks to the category of routine, mundane, and ultimately ignorable, until even they forget about it.
then....BREAKTHROUGH! and the bombs would drop. and one half of the planet becomes uninhabitable.
and nobody knows why.

Why use bombs, though?  Just shut off power to sanitation plants, hospitals, and traffic signals, etc.  Shut down communications.  Re-route shipping manifests to keep food out of cities.

Computers wouldn't wage war like we do.  There's no monkey urge to kill violently.  Just the imperative to eliminate.
Title: Re: Something I rushed off
Post by: Anna Mae Bollocks on June 12, 2012, 05:16:28 pm
That would turn into a massive Donner Party in no time flat.
Zombies, my ass. Hunger-crazed food freaks.  :p
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 12, 2012, 05:34:41 pm
This thread gave me an interesting image. Say all of the major powers have their own skynet, for lack of a better term. Except that, since these skynets are programmed to protect the interests of the nation-state that owns them, you get a scenario where the skynets increasingly focus almost exclusively on each other: a state of war with no physical violence, but rather constant attempts by the AI generals to sabotage each other through viruses and other cyberattacks.

I was just thinking the same thing on the way to work this morning.
The papers would say "WAR!", and nothing visible would happen... 
the AIs constantly thwarting each other's attempts to deal physical damage while attempting to do the same.
the frantic conflict fought autonomously in bits and packets would rage on as it slipped from public memory over the years, people going about their daily lives happily.  The human generals responsible for overseeing the war increasingly relegating the tasks to the category of routine, mundane, and ultimately ignorable, until even they forget about it.
then....BREAKTHROUGH! and the bombs would drop. and one half of the planet becomes uninhabitable.
and nobody knows why.

Why use bombs, though?  Just shut off power to sanitation plants, hospitals, and traffic signals, etc.  Shut down communications.  Re-route shipping manifests to keep food out of cities.

Computers wouldn't wage war like we do.  There's no monkey urge to kill violently.  Just the imperative to eliminate.

hmm good point.
it probably wouldn't be bombs hitting the cities.  but if i were the AI, i would probably use the nukes for EMP.
it would throw the target back into the stone age, and hopefully kill the other AI (or isolate it, at least) at the same time.

so:
then....BREAKTHROUGH! and the lights would go out. and one half of the planet becomes hell.
and nobody knows why.

i like survivalist fiction, and the EMP/grid-down scenario seems scariest to me because of the incredible repercussions, and the feasibility of it happening in my lifetime.
have you ever read the Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack (http://www.empcommission.org/docs/empc_exec_rpt.pdf)? Link is the executive summary.
Title: Re: Something I rushed off
Post by: tyrannosaurus vex on June 12, 2012, 05:49:48 pm
If we invented AI we would be abdicating our position on the food chain. Whether that AI turns out to be good or bad, we have effectively said we are done being in charge, and asked for a God to come down from heaven (or out of a wire) and please lead us by the nose, thank you. And that's something I can totally see us doing, for a lot of reasons. And you'd think I would have a strong opposition to that, but I don't.

Left to our own devices, we are going to fuck the planet and kill each other until the Sun turns into a red giant and cooks everything on Earth. That's just our identity, and any pretending we are capable of anything more than passing phases of higher reasoning is just wrong. It may be that our evolutionary destiny is to realize (or for some of us to realize) that we are incapable of transcending ourselves before we destroy ourselves. An AI master in charge of the future of Humanity has a high probability of doing a better job than we will. Even if it murders us all in the process, at least we will have the honor of knowing we directly created our evolutionary successor.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 05:51:09 pm
If we invented AI we would be abdicating our position on the food chain. Whether that AI turns out to be good or bad, we have effectively said we are done being in charge, and asked for a God to come down from heaven (or out of a wire) and please lead us by the nose, thank you. And that's something I can totally see us doing, for a lot of reasons. And you'd think I would have a strong opposition to that, but I don't.

Left to our own devices, we are going to fuck the planet and kill each other until the Sun turns into a red giant and cooks everything on Earth. That's just our identity, and any pretending we are capable of anything more than passing phases of higher reasoning is just wrong. It may be that our evolutionary destiny is to realize (or for some of us to realize) that we are incapable of transcending ourselves before we destroy ourselves. An AI master in charge of the future of Humanity has a high probability of doing a better job than we will. Even if it murders us all in the process, at least we will have the honor of knowing we directly created our evolutionary successor.

Balls.  I'd rather the human race puked and died.
Title: Re: Something I rushed off
Post by: Anna Mae Bollocks on June 12, 2012, 05:54:13 pm
Yeah, that sounded a lot like "OK, AI, YOU'RE IN CHARGE! ABUSE ME TO DEATH!"
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 05:55:07 pm
Yeah, that sounded a lot like "OK, AI, YOU'RE IN CHARGE! ABUSE ME TO DEATH!"

Even if it was entirely benevolent, I'd still take a fucking axe to the mainframe.

Title: Re: Something I rushed off
Post by: Anna Mae Bollocks on June 12, 2012, 05:56:33 pm
Yeah, that sounded a lot like "OK, AI, YOU'RE IN CHARGE! ABUSE ME TO DEATH!"

Even if it was entirely benevolent, I'd still take a fucking axe to the mainframe.

Yep.
A benevolent overlord is still an overlord.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 05:58:53 pm
Yeah, that sounded a lot like "OK, AI, YOU'RE IN CHARGE! ABUSE ME TO DEATH!"

Even if it was entirely benevolent, I'd still take a fucking axe to the mainframe.

Yep.
A benevolent overlord is still an overlord.

It's not even that.  Man is meant to be free, and that means making your own decisions and mistakes.

If you're not free to fail, you're not free.  And if you're not free, then what's the point?
Title: Re: Something I rushed off
Post by: tyrannosaurus vex on June 12, 2012, 06:00:49 pm
If we invented AI we would be abdicating our position on the food chain. Whether that AI turns out to be good or bad, we have effectively said we are done being in charge, and asked for a God to come down from heaven (or out of a wire) and please lead us by the nose, thank you. And that's something I can totally see us doing, for a lot of reasons. And you'd think I would have a strong opposition to that, but I don't.

Left to our own devices, we are going to fuck the planet and kill each other until the Sun turns into a red giant and cooks everything on Earth. That's just our identity, and any pretending we are capable of anything more than passing phases of higher reasoning is just wrong. It may be that our evolutionary destiny is to realize (or for some of us to realize) that we are incapable of transcending ourselves before we destroy ourselves. An AI master in charge of the future of Humanity has a high probability of doing a better job than we will. Even if it murders us all in the process, at least we will have the honor of knowing we directly created our evolutionary successor.

Balls.  I'd rather the human race puked and died.

Then who cares if robots take over? I think there's something to be said for the finest moments of human experience and creativity, but at the other end of the spectrum is all the evil we do. And past that is the apathy. The human spirit is strong when it needs to be, but the march of history shows us, time and again, trying our damnedest to make sure that our spirit never needs to be strong. Eventually, we will probably succeed in that endeavor, one way or another.

The ultimate achievement, to create an intelligence greater and more resilient than our own, would be the first achievement that could truly be shared by all of us -- because all of us -- rich, poor, smart, dumb, tall, fat, skinny, smudgy, white, black -- all of us would be rendered completely obsolete and unnecessary. And we could, at least for the few moments before the atmosphere ignites, breathe the long, final sigh and think, "so this is how it ends."

BZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 06:02:16 pm
If we invented AI we would be abdicating our position on the food chain. Whether that AI turns out to be good or bad, we have effectively said we are done being in charge, and asked for a God to come down from heaven (or out of a wire) and please lead us by the nose, thank you. And that's something I can totally see us doing, for a lot of reasons. And you'd think I would have a strong opposition to that, but I don't.

Left to our own devices, we are going to fuck the planet and kill each other until the Sun turns into a red giant and cooks everything on Earth. That's just our identity, and any pretending we are capable of anything more than passing phases of higher reasoning is just wrong. It may be that our evolutionary destiny is to realize (or for some of us to realize) that we are incapable of transcending ourselves before we destroy ourselves. An AI master in charge of the future of Humanity has a high probability of doing a better job than we will. Even if it murders us all in the process, at least we will have the honor of knowing we directly created our evolutionary successor.

Balls.  I'd rather the human race puked and died.

Then who cares if robots take over? I think there's something to be said for the finest moments of human experience and creativity, but at the other end of the spectrum is all the evil we do. And past that is the apathy. The human spirit is strong when it needs to be, but the march of history shows us, time and again, trying our damnedest to make sure that our spirit never needs to be strong. Eventually, we will probably succeed in that endeavor, one way or another.

The ultimate achievement, to create an intelligence greater and more resilient than our own, would be the first achievement that could truly be shared by all of us -- because all of us -- rich, poor, smart, dumb, tall, fat, skinny, smudgy, white, black -- all of us would be rendered completely obsolete and unnecessary. And we could, at least for the few moments before the atmosphere ignites, breathe the long, final sigh and think, "so this is how it ends."

BZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ

Unnecessary for what?
Title: Re: Something I rushed off
Post by: tyrannosaurus vex on June 12, 2012, 06:13:11 pm
If we invented AI we would be abdicating our position on the food chain. Whether that AI turns out to be good or bad, we have effectively said we are done being in charge, and asked for a God to come down from heaven (or out of a wire) and please lead us by the nose, thank you. And that's something I can totally see us doing, for a lot of reasons. And you'd think I would have a strong opposition to that, but I don't.

Left to our own devices, we are going to fuck the planet and kill each other until the Sun turns into a red giant and cooks everything on Earth. That's just our identity, and any pretending we are capable of anything more than passing phases of higher reasoning is just wrong. It may be that our evolutionary destiny is to realize (or for some of us to realize) that we are incapable of transcending ourselves before we destroy ourselves. An AI master in charge of the future of Humanity has a high probability of doing a better job than we will. Even if it murders us all in the process, at least we will have the honor of knowing we directly created our evolutionary successor.

Balls.  I'd rather the human race puked and died.

Then who cares if robots take over? I think there's something to be said for the finest moments of human experience and creativity, but at the other end of the spectrum is all the evil we do. And past that is the apathy. The human spirit is strong when it needs to be, but the march of history shows us, time and again, trying our damnedest to make sure that our spirit never needs to be strong. Eventually, we will probably succeed in that endeavor, one way or another.

The ultimate achievement, to create an intelligence greater and more resilient than our own, would be the first achievement that could truly be shared by all of us -- because all of us -- rich, poor, smart, dumb, tall, fat, skinny, smudgy, white, black -- all of us would be rendered completely obsolete and unnecessary. And we could, at least for the few moments before the atmosphere ignites, breathe the long, final sigh and think, "so this is how it ends."

BZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ

Unnecessary for what?

Unnecessary for the act of carrying forward the torch of sentience and intelligence in a universe full of things that are only dimly aware, if at all. Although I doubt we are strictly necessary for that now, we seem to think we are.

Also, more in line with Cain's OP: if warfare wants to disguise itself as politics, and outright war disguises itself as peace, isn't it equal to peace? Even if that peace is borne of the manipulation of thoughts, isn't it preferable to the horrors of war? My line of reasoning with AI isn't to turn this into a simple "are robots good or bad" topic, but to question the ethics of maintaining individual identity at the expense of popular welfare. I don't necessarily disagree with it, but can we say with certainty that evolution isn't pushing us toward a collective "intelligence," artificial or not, that would need to supersede and eventually replace the one we know?
Title: Re: Something I rushed off
Post by: Anna Mae Bollocks on June 12, 2012, 06:16:16 pm
"Collective intelligence" sounds like the Borg. Or an ant hill. I don't see it as more evolved.
Title: Re: Something I rushed off
Post by: Anna Mae Bollocks on June 12, 2012, 06:17:31 pm
Not to say that devolving isn't possible.

WE ARE NOT MEN, WE ARE DEVO!
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 12, 2012, 06:20:12 pm
"Collective intelligence" sounds like the Borg. Or an ant hill. I don't see it as more evolved.
of course you and i don't.  but WE will...  :p
Title: Re: Something I rushed off
Post by: Anna Mae Bollocks on June 12, 2012, 06:24:38 pm
"Collective intelligence" sounds like the Borg. Or an ant hill. I don't see it as more evolved.
of course you and i don't.  but WE will...  :p

 :lulz:  :horrormirth:
Title: Re: Something I rushed off
Post by: tyrannosaurus vex on June 12, 2012, 06:28:13 pm
"Collective intelligence" sounds like the Borg. Or an ant hill. I don't see it as more evolved.
of course you and i don't.  but WE will...  :p

 :lulz:  :horrormirth:

I'm almost having an idea for something now. I'll wait for further instructions from the Mother Brain to see what develops
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 06:29:39 pm
Unnecessary for the act of carrying forward the torch of sentience and intelligence in a universe full of things that are only dimly aware, if at all. Although I doubt we are strictly necessary for that now, we seem to think we are.

That is not an end in itself.  To exist strictly to exist is not a purpose.

Also, more in line with Cain's OP: if warfare wants to disguise itself as politics, and outright war disguises itself as peace, isn't it equal to peace? Even if that peace is borne of the manipulation of thoughts, isn't it preferable to the horrors of war? My line of reasoning with AI isn't to turn this into a simple "are robots good or bad" topic, but to question the ethics of maintaining individual identity at the expense of popular welfare. I don't necessarily disagree with it, but can we say with certainty that evolution isn't pushing us toward a collective "intelligence," artificial or not, that would need to supersede and eventually replace the one we know?

Peace, as Jerry Pournelle pointed out, is something we infer because there are sometimes intervals between wars.  Peace is also not always a desirable thing, given the nature of humans.

And individual identity is what makes us what we are.  Developing that identity is the only worthwhile pursuit of a human being...And is not by any means mutually exclusive with the general welfare of the species.

Lastly, I don't see any indication that we are forming a collective intelligence as a species, local events notwithstanding.
Title: Re: Something I rushed off
Post by: tyrannosaurus vex on June 12, 2012, 06:44:22 pm
Unnecessary for the act of carrying forward the torch of sentience and intelligence in a universe full of things that are only dimly aware, if at all. Although I doubt we are strictly necessary for that now, we seem to think we are.

That is not an end in itself.  To exist strictly to exist is not a purpose.

Also, more in line with Cain's OP: if warfare wants to disguise itself as politics, and outright war disguises itself as peace, isn't it equal to peace? Even if that peace is borne of the manipulation of thoughts, isn't it preferable to the horrors of war? My line of reasoning with AI isn't to turn this into a simple "are robots good or bad" topic, but to question the ethics of maintaining individual identity at the expense of popular welfare. I don't necessarily disagree with it, but can we say with certainty that evolution isn't pushing us toward a collective "intelligence," artificial or not, that would need to supersede and eventually replace the one we know?

Peace, as Jerry Pournelle pointed out, is something we infer because there are sometimes intervals between wars.  Peace is also not always a desirable thing, given the nature of humans.

And individual identity is what makes us what we are.  Developing that identity is the only worthwhile pursuit of a human being...And is not by any means mutually exclusive with the general welfare of the species.

Lastly, I don't see any indication that we are forming a collective intelligence as a species, local events notwithstanding.

I don't mean a Borg consciousness or anything out of sci fi. What I mean is that systems (political/military/social) are growing larger and more interconnected and interdependent. These systems draw on the intentions of populations worldwide and therefore have a vested interest in shaping those intentions. Media and other propagandist organs grow to shelter the population and feed it the information necessary to grow its intentions and concerns in the desired direction. In some sense this could be referred to as a collective intelligence.

And no, it isn't always mutually exclusive with the general welfare of the species, except when those larger constructs we have created to oversee our general welfare begin to see individualism as a threat. There isn't anything inherently dangerous about being yourself but that doesn't mean the system isn't wired to see it as a threat anyway, for one reason or another.
Title: Re: Something I rushed off
Post by: Anna Mae Bollocks on June 12, 2012, 06:47:22 pm
Unnecessary for the act of carrying forward the torch of sentience and intelligence in a universe full of things that are only dimly aware, if at all. Although I doubt we are strictly necessary for that now, we seem to think we are.

That is not an end in itself.  To exist strictly to exist is not a purpose.

Also, more in line with Cain's OP: if warfare wants to disguise itself as politics, and outright war disguises itself as peace, isn't it equal to peace? Even if that peace is borne of the manipulation of thoughts, isn't it preferable to the horrors of war? My line of reasoning with AI isn't to turn this into a simple "are robots good or bad" topic, but to question the ethics of maintaining individual identity at the expense of popular welfare. I don't necessarily disagree with it, but can we say with certainty that evolution isn't pushing us toward a collective "intelligence," artificial or not, that would need to supersede and eventually replace the one we know?

Peace, as Jerry Pournelle pointed out, is something we infer because there are sometimes intervals between wars.  Peace is also not always a desirable thing, given the nature of humans.

And individual identity is what makes us what we are.  Developing that identity is the only worthwhile pursuit of a human being...And is not by any means mutually exclusive with the general welfare of the species.

Lastly, I don't see any indication that we are forming a collective intelligence as a species, local events notwithstanding.

I don't mean a Borg consciousness or anything out of sci fi. What I mean is that systems (political/military/social) are growing larger and more interconnected and interdependent. These systems draw on the intentions of populations worldwide and therefore have a vested interest in shaping those intentions. Media and other propagandist organs grow to shelter the population and feed it the information necessary to grow its intentions and concerns in the desired direction. In some sense this could be referred to as a collective intelligence.

And no, it isn't always mutually exclusive with the general welfare of the species, except when those larger constructs we have created to oversee our general welfare begin to see individualism as a threat. There isn't anything inherently dangerous about being yourself but that doesn't mean the system isn't wired to see it as a threat anyway, for one reason or another.

That's just creepy as fuck. Best to throw as many monkey wrenches as possible.
Title: Re: Something I rushed off
Post by: LMNO on June 12, 2012, 06:47:59 pm
Yeah, that sounded a lot like "OK, AI, YOU'RE IN CHARGE! ABUSE ME TO DEATH!"

Even if it was entirely benevolent, I'd still take a fucking axe to the mainframe.

Yep.
A benevolent overlord is still an overlord.

It's not even that.  Man is meant to be free, and that means making your own decisions and mistakes.

If you're not free to fail, you're not free.  And if you're not free, then what's the point?

You got some James Tiberius Kirk thinking going on there.  And I like it.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 06:49:05 pm
Unnecessary for the act of carrying forward the torch of sentience and intelligence in a universe full of things that are only dimly aware, if at all. Although I doubt we are strictly necessary for that now, we seem to think we are.

That is not an end in itself.  To exist strictly to exist is not a purpose.

Also, more in line with Cain's OP: if warfare wants to disguise itself as politics, and outright war disguises itself as peace, isn't it equal to peace? Even if that peace is borne of the manipulation of thoughts, isn't it preferable to the horrors of war? My line of reasoning with AI isn't to turn this into a simple "are robots good or bad" topic, but to question the ethics of maintaining individual identity at the expense of popular welfare. I don't necessarily disagree with it, but can we say with certainty that evolution isn't pushing us toward a collective "intelligence," artificial or not, that would need to supersede and eventually replace the one we know?

Peace, as Jerry Pournelle pointed out, is something we infer because there are sometimes intervals between wars.  Peace is also not always a desirable thing, given the nature of humans.

And individual identity is what makes us what we are.  Developing that identity is the only worthwhile pursuit of a human being...And is not by any means mutually exclusive with the general welfare of the species.

Lastly, I don't see any indication that we are forming a collective intelligence as a species, local events notwithstanding.

I don't mean a Borg consciousness or anything out of sci fi. What I mean is that systems (political/military/social) are growing larger and more interconnected and interdependent. These systems draw on the intentions of populations worldwide and therefore have a vested interest in shaping those intentions. Media and other propagandist organs grow to shelter the population and feed it the information necessary to grow its intentions and concerns in the desired direction. In some sense this could be referred to as a collective intelligence.

And no, it isn't always mutually exclusive with the general welfare of the species, except when those larger constructs we have created to oversee our general welfare begin to see individualism as a threat. There isn't anything inherently dangerous about being yourself but that doesn't mean the system isn't wired to see it as a threat anyway, for one reason or another.

And an AI that leads - directly or indirectly - to our extinction isn't a threat?

Look, the only good things that happen in this world are almost always a result of someone FUCKING UP.  ALL the FUNNY shit in the world is a result of someone fucking up.

I don't want a future in which people don't fuck up GLORIOUSLY, and I don't see an AI God as being helpful in that department.  If we're going to go extinct, I'd just as soon we did it THE OLD FASHIONED WAY.  By fucking up.  All by ourselves.

And even if that fucking up was the creation of said AI, I say we smash the fucking thing on the way off stage.  Because nobody likes a smartass know-it-all. 

One other thought:  An AI designed by humans is about as likely to succeed as a regular operating system developed by humans.  So we build this fucking AI to stop war or whatever, but then spend the next 200 years rebooting the fucking thing due to blue screen o' death.

Fuck that, I get enough frustration out of my laptop.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 06:50:38 pm
Yeah, that sounded a lot like "OK, AI, YOU'RE IN CHARGE! ABUSE ME TO DEATH!"

Even if it was entirely benevolent, I'd still take a fucking axe to the mainframe.

Yep.
A benevolent overlord is still an overlord.

It's not even that.  Man is meant to be free, and that means making your own decisions and mistakes.

If you're not free to fail, you're not free.  And if you're not free, then what's the point?

You got some James Tiberius Kirk thinking going on there.  And I like it.

Really, I just get angry enough to...to PEE, whenever someone starts talking about a grand destiny for the species or for intelligence or whatever.  I am here to do what I do well, enjoy myself a little, and be a damn human.  That's all the destiny I need.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 06:51:47 pm
Also, someone get me a green chick with antennae.  I'm feeling anxious.

Set your phasers to SHUT UP.

That is all.
Title: Re: Something I rushed off
Post by: Anna Mae Bollocks on June 12, 2012, 06:55:04 pm
Unnecessary for the act of carrying forward the torch of sentience and intelligence in a universe full of things that are only dimly aware, if at all. Although I doubt we are strictly necessary for that now, we seem to think we are.

That is not an end in itself.  To exist strictly to exist is not a purpose.

Also, more in line with Cain's OP: if warfare disguises itself as politics and outright war disguises itself as peace, isn't it effectively peace? Even if that peace is born of the manipulation of thoughts, isn't it preferable to the horrors of war? My line of reasoning with AI isn't to turn this into a simple "are robots good or bad" topic, but to question the ethics of maintaining individual identity at the expense of popular welfare. I don't necessarily disagree with it, but can we say with certainty that evolution isn't pushing us toward a collective "intelligence," artificial or not, that would need to supersede and eventually replace the one we know?

Peace, as Jerry Pournelle pointed out, is something we infer because there are sometimes intervals between wars.  Peace is also not always a desirable thing, given the nature of humans.

And individual identity is what makes us what we are.  Developing that identity is the only worthwhile pursuit of a human being...And is not by any means mutually exclusive with the general welfare of the species.

Lastly, I don't see any indication that we are forming a collective intelligence as a species, local events notwithstanding.

I don't mean a Borg consciousness or anything out of sci fi. What I mean is that systems (political/military/social) are growing larger and more interconnected and interdependent. These systems draw on the intentions of populations worldwide and therefore have a vested interest in shaping those intentions. Media and other propagandist organs grow to shelter the population and feed it the information necessary to grow its intentions and concerns in the desired direction. In some sense this could be referred to as a collective intelligence.

And no, it isn't always mutually exclusive with the general welfare of the species, except when those larger constructs we have created to oversee our general welfare begin to see individualism as a threat. There isn't anything inherently dangerous about being yourself but that doesn't mean the system isn't wired to see it as a threat anyway, for one reason or another.

And an AI that leads - directly or indirectly - to our extinction isn't a threat?

Look, the only good things that happen in this world are almost always a result of someone FUCKING UP.  ALL the FUNNY shit in the world is a result of someone fucking up.

I don't want a future in which people don't fuck up GLORIOUSLY, and I don't see an AI God as being helpful in that department.  If we're going to go extinct, I'd just as soon we did it THE OLD FASHIONED WAY.  By fucking up.  All by ourselves.

And even if that fucking up was the creation of said AI, I say we smash the fucking thing on the way off stage.  Because nobody likes a smartass know-it-all. 

One other thought:  An AI designed by humans is about as likely to succeed as a regular operating system developed by humans.  So we build this fucking AI to stop war or whatever, but then spend the next 200 years rebooting the fucking thing due to blue screen o' death.

Fuck that, I get enough frustration out of my laptop.

THIS

People can't even build a coffee vending machine that can get the coffee into the cup. I mean, it might have happened, by accident, at some bus station, somewhere. But I've never seen it.
Title: Re: Something I rushed off
Post by: tyrannosaurus vex on June 12, 2012, 06:55:24 pm
Unnecessary for the act of carrying forward the torch of sentience and intelligence in a universe full of things that are only dimly aware, if at all. Although I doubt we are strictly necessary for that now, we seem to think we are.

That is not an end in itself.  To exist strictly to exist is not a purpose.

Also, more in line with Cain's OP: if warfare disguises itself as politics and outright war disguises itself as peace, isn't it effectively peace? Even if that peace is born of the manipulation of thoughts, isn't it preferable to the horrors of war? My line of reasoning with AI isn't to turn this into a simple "are robots good or bad" topic, but to question the ethics of maintaining individual identity at the expense of popular welfare. I don't necessarily disagree with it, but can we say with certainty that evolution isn't pushing us toward a collective "intelligence," artificial or not, that would need to supersede and eventually replace the one we know?

Peace, as Jerry Pournelle pointed out, is something we infer because there are sometimes intervals between wars.  Peace is also not always a desirable thing, given the nature of humans.

And individual identity is what makes us what we are.  Developing that identity is the only worthwhile pursuit of a human being...And is not by any means mutually exclusive with the general welfare of the species.

Lastly, I don't see any indication that we are forming a collective intelligence as a species, local events notwithstanding.

I don't mean a Borg consciousness or anything out of sci fi. What I mean is that systems (political/military/social) are growing larger and more interconnected and interdependent. These systems draw on the intentions of populations worldwide and therefore have a vested interest in shaping those intentions. Media and other propagandist organs grow to shelter the population and feed it the information necessary to grow its intentions and concerns in the desired direction. In some sense this could be referred to as a collective intelligence.

And no, it isn't always mutually exclusive with the general welfare of the species, except when those larger constructs we have created to oversee our general welfare begin to see individualism as a threat. There isn't anything inherently dangerous about being yourself but that doesn't mean the system isn't wired to see it as a threat anyway, for one reason or another.

And an AI that leads - directly or indirectly - to our extinction isn't a threat?

Look, the only good things that happen in this world are almost always a result of someone FUCKING UP.  ALL the FUNNY shit in the world is a result of someone fucking up.

I don't want a future in which people don't fuck up GLORIOUSLY, and I don't see an AI God as being helpful in that department.  If we're going to go extinct, I'd just as soon we did it THE OLD FASHIONED WAY.  By fucking up.  All by ourselves.

And even if that fucking up was the creation of said AI, I say we smash the fucking thing on the way off stage.  Because nobody likes a smartass know-it-all. 

One other thought:  An AI designed by humans is about as likely to succeed as a regular operating system developed by humans.  So we build this fucking AI to stop war or whatever, but then spend the next 200 years rebooting the fucking thing due to blue screen o' death.

Fuck that, I get enough frustration out of my laptop.

Mechanical/computerized AI would exist and act for its own, possibly unknowable to us, reasons. Natural AI, or the result of all of us Humans fucking up in a global symphony of ignorance and fear, is exactly what we have already and what we are adding to every time we try to change something.

Oh, and you must be using the wrong OS:
(http://cdn-sr3.saintsrow.com/profile/avatar/tux-borg.JPG)
(Just because, obviously, this must be posted now.)
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 06:58:16 pm
Mechanical/computerized AI would exist and act for its own, possibly unknowable to us, reasons.

Then fuck them.  They can create themselves if they want to be all cryptic and shit.  THEN I'll be impressed.



Natural AI, or the result of all of us Humans fucking up in a global symphony of ignorance and fear, is exactly what we have already and what we are adding to every time we try to change something.

Oh, and you must be using the wrong OS:
(http://cdn-sr3.saintsrow.com/profile/avatar/tux-borg.JPG)
(Just because, obviously, this must be posted now.)

A red X?  My point is proven.

Also, the "symphony of ignorance and fear" just happened to give us Michelangelo, da Vinci, Galileo, Feynman, Einstein, and Johnny Cash.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 07:01:01 pm
One more thing:  341 people were injured in 2004 when they attempted to iron clothes that they were wearing.

Let's see your fancy mechanical AI do THAT.

Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 07:01:45 pm
Regardless of scenario, it is fair to say we have now entered the last great age of human warfare.

I think that's taking a few things for granted, Cain.
Title: Re: Something I rushed off
Post by: minuspace on June 12, 2012, 07:18:48 pm
This thread gave me an interesting image. Say all of the major powers have their own skynet, for lack of a better term. Since these skynets are programmed to protect the interests of the nation state that owns them, you get a scenario where the skynets increasingly focus almost exclusively on each other: a state of war with no physical violence, but rather constant attempts by the AI generals to sabotage each other through viruses and other cyberattacks.

I was just thinking the same thing on the way to work this morning.
The papers would say "WAR!", and nothing visible would happen... 
the AIs constantly thwarting each other's attempts to deal physical damage while attempting to do the same.
the frantic conflict fought autonomously in bits and packets would rage on as it slipped from public memory over the years, people going about their daily lives happily.  The human generals responsible for overseeing the war would increasingly relegate the tasks to the category of routine, mundane, and ultimately ignorable, until even they forgot about it.
then....BREAKTHROUGH! and the bombs would drop. and one half of the planet becomes uninhabitable.
and nobody knows why.

Why use bombs, though?  Just shut off power to sanitation plants, hospitals, and traffic signals, etc.  Shut down communications.  Re-route shipping manifests to keep food out of cities.

Computers wouldn't wage war like we do.  There's no monkey urge to kill violently.  Just the imperative to eliminate.

Sources indicate that a large metropolis would descend into chaos within 72 hours of having communications/utilities shut down...  The AIs would just leave us to have at ourselves and do the dirty work.  Otherwise, my main preoccupation is not what it would do; as Trip brought up, the problem is it gaining full privileges.  The problem is not AI so much as it is encryption.  Once the machine gains access, I would almost rather it pull the trigger than have the distorted imagination of a human devise some twisted method of elimination.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 07:24:06 pm
Sources indicate that a large metropolis would descend into chaos within 72 hours of having communications/utilities shut down... 

Your sources are fucked.  The Great New York Blackout of 1977 led to chaos in less than 24 hours, with looting, arson, and general disorder starting at nightfall.

Title: Re: Something I rushed off
Post by: tyrannosaurus vex on June 12, 2012, 07:29:58 pm
Also, the "symphony of ignorance and fear" just happened to give us Michelangelo, da Vinci, Galileo, Feynman, Einstein, and Johnny Cash.

I'm not arguing for the elimination of individual identity or great fuckups/failures and associated great art and experience, I'm saying the system we already have in place is moving in that direction. For every Johnny Cash there are 2 or 3 real-life Enrico Salazars. I know you respect that you can't have the good without the bad - but many people don't respect that, or don't even have any concept of it at all.

So while you and I can admire the best of Humanity and still appropriately appreciate, remember, and stay mindful of the worst, millions of people on the planet either are incapable of realizing the unbreakable bond between these or are willing to sacrifice what they see as "good" to eliminate what they see as "evil." And there are enough of those people to tip the scales in their favor.

Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 12, 2012, 09:54:41 pm
Also, the "symphony of ignorance and fear" just happened to give us Michelangelo, da Vinci, Galileo, Feynman, Einstein, and Johnny Cash.

I'm not arguing for the elimination of individual identity or great fuckups/failures and associated great art and experience, I'm saying the system we already have in place is moving in that direction. For every Johnny Cash there are 2 or 3 real-life Enrico Salazars.

I'm gonna go out on a limb here and say that Ghaddafy made dictatorship look pretty damn stylish.

In fact, as individuals go (morals aside), he was one of the KINGS.  If we HAVE to have bad guys, and we do, we need a few more like him.  He was a murdering shit, but he had a sense of humor.

He had to be killed, of course.  For him, EVERY NIGHT was SATURDAY NIGHT, and he never knew when to quit.  He didn't just have a good time, he had a VERY good time, and that had to be stopped.
Title: Re: Something I rushed off
Post by: Mesozoic Mister Nigel on June 13, 2012, 12:23:56 am
I would assume there would be some primary motivator for the AI.
ours (as an intelligence) is reproduction. all else is to support that, ultimately. It seems to be very tightly coupled to the  physical chemical gradient motivation that is underneath the layer of intelligence.
the AI needs to have 'protect humanity' as its prime motivator.
a computer's prime motivator is simply 'make the electrons go through the circuit' and we just arrange the circuit to give the result we want while it pursues that.
in making an intelligence out of a computer, we need to couple the 'simple circuit' motivation tightly to the 'protect people' motivation.
of course it can still break just like we do when we occasionally follow the 'chemical gradient' motivation and end up killing ourselves in the head before we reproduce.
if it is a broken AI that we fear, rather than design that destroys us, then redundancy could save us.

Remember "With Folded Hands"?

http://en.wikipedia.org/wiki/With_Folded_Hands
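The redundancy idea quoted above is essentially N-modular redundancy from fault-tolerant engineering: run several independent instances and let the healthy majority outvote a single broken one. A minimal sketch of that voting scheme (illustrative only; the function name and the "protect"/"eliminate" scenario are my own framing, not anything from the thread):

```python
# Illustrative sketch of N-modular redundancy: several independent
# AI instances each propose an action, and the system only acts on
# a strict-majority decision, so one broken instance gets outvoted.
from collections import Counter

def majority_vote(decisions):
    """Return the strict-majority decision, or None if there is no consensus."""
    counts = Counter(decisions)
    winner, votes = counts.most_common(1)[0]
    return winner if votes > len(decisions) / 2 else None

# Three redundant instances; one has "broken" and wants to eliminate.
assert majority_vote(["protect", "protect", "eliminate"]) == "protect"
# With only two instances, a single failure deadlocks the vote.
assert majority_vote(["protect", "eliminate"]) is None
```

As the two-instance case shows, this only guards against a minority of broken instances failing independently; it does nothing against a flaw shared by every copy of the same design.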
Title: Re: Something I rushed off
Post by: Don Coyote on June 13, 2012, 12:33:35 am
I would assume there would be some primary motivator for the AI.
ours (as an intelligence) is reproduction. all else is to support that, ultimately. It seems to be very tightly coupled to the  physical chemical gradient motivation that is underneath the layer of intelligence.
the AI needs to have 'protect humanity' as its prime motivator.
a computer's prime motivator is simply 'make the electrons go through the circuit' and we just arrange the circuit to give the result we want while it pursues that.
in making an intelligence out of a computer, we need to couple the 'simple circuit' motivation tightly to the 'protect people' motivation.
of course it can still break just like we do when we occasionally follow the 'chemical gradient' motivation and end up killing ourselves in the head before we reproduce.
if it is a broken AI that we fear, rather than design that destroys us, then redundancy could save us.

Remember "With Folded Hands"?

http://en.wikipedia.org/wiki/With_Folded_Hands

NOOOOOOO. The Humanoids series is really a "WTF, I don't know who the bad guy truly is" series.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 13, 2012, 12:35:07 am
I would assume there would be some primary motivator for the AI.
ours (as an intelligence) is reproduction. all else is to support that, ultimately. It seems to be very tightly coupled to the  physical chemical gradient motivation that is underneath the layer of intelligence.
the AI needs to have 'protect humanity' as its prime motivator.
a computer's prime motivator is simply 'make the electrons go through the circuit' and we just arrange the circuit to give the result we want while it pursues that.
in making an intelligence out of a computer, we need to couple the 'simple circuit' motivation tightly to the 'protect people' motivation.
of course it can still break just like we do when we occasionally follow the 'chemical gradient' motivation and end up killing ourselves in the head before we reproduce.
if it is a broken AI that we fear, rather than design that destroys us, then redundancy could save us.

Remember "With Folded Hands"?

http://en.wikipedia.org/wiki/With_Folded_Hands

NOOOOOOO. The Humanoids series is really a "WTF, I don't know who the bad guy truly is" series.

I think the answer to that is pretty obvious.
Title: Re: Something I rushed off
Post by: P3nT4gR4m on June 13, 2012, 11:15:54 am
Here's some shit that just popped into my head. I'm taking it as a given that, sooner or later, some organisation will take over the world. Tinfoil hats aside I don't think we're there yet but all it will really take is one empire to rise and then not fall. The old "can't fool all of the people all of the time" thing used to matter but we're quickly moving to a point where fooling most of the people all of the time is a doddle and the rest, the unfooled minority are becoming easier and easier to detect and neutralise. The relentless march of technological advancement brings us closer to this place.

So we either get a benevolent dictator or someone who makes Hitler look like the tooth fairy. I posit that both scenarios are equally shitty. Genocide may be a bummer but at least it's reasonably quick (dare I say humane) when compared to what would happen if utopia was handed to us on a silver platter. I'm thoroughly convinced that down that road lies unimaginable horror.

Basically what I'm saying is that it's going to happen whether we think it's a good idea or not. I'm kinda finding myself siding with Vex here - given that the next thing that's going to happen to us as a species would appear to be sliding toward a more cohesive, structured and integrated group aspect, I'm thinking that some hyperintelligent machine would do a much better job of directing the proceedings than some retarded monkey with no fucking masterplan whatsoever beyond - "something something bananas something"
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 13, 2012, 02:15:23 pm
Here's some shit that just popped into my head. I'm taking it as a given that, sooner or later, some organisation will take over the world.

There's no profit in owning the whole thing.
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 13, 2012, 03:26:24 pm
Remember "With Folded Hands"?

http://en.wikipedia.org/wiki/With_Folded_Hands
ooh. nope. hadn't read that one yet.  thanks for the tip!

Here's some shit that just popped into my head. I'm taking it as a given that, sooner or later, some organisation will take over the world.

There's no profit in owning the whole thing.
Also, it would seem to me that power structures just become more unstable the larger they are.  it might happen, but it would fragment quickly, no doubt in my mind.
i guess it also depends on what, exactly, "take over the world" means...
like 'King Omniglobe the first'?  seems unlikely...
but 'Central Bank of the World'?  yeah. i can see that lasting a little while.  and that could be said to have 'taken over the world' in a sense, right?
Title: Re: Something I rushed off
Post by: Nephew Twiddleton on June 13, 2012, 04:23:29 pm
Remember "With Folded Hands"?

http://en.wikipedia.org/wiki/With_Folded_Hands
ooh. nope. hadn't read that one yet.  thanks for the tip!

Here's some shit that just popped into my head. I'm taking it as a given that, sooner or later, some organisation will take over the world.

There's no profit in owning the whole thing.
Also, it would seem to me that power structures just become more unstable the larger they are.  it might happen, but it would fragment quickly, no doubt in my mind.
i guess it also depends on what, exactly, "take over the world" means...
like 'King Omniglobe the first'?  seems unlikely...
but 'Central Bank of the World'?  yeah. i can see that lasting a little while.  and that could be said to have 'taken over the world' in a sense, right?

I think part of the problem is what drives a conqueror. Once you have the whole thing, what's there left to do? What's the point of having an army (given the assumption that the global population is subdued enough to forsake rebellion)? It would probably have to be some sort of corporation. They don't care about glory in battle. Only profits.
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 13, 2012, 04:56:29 pm
you sure about that?
http://alt-market.com/articles/795-wall-street-banks-building-a-private-army
Title: Re: Something I rushed off
Post by: Mesozoic Mister Nigel on June 13, 2012, 04:57:36 pm
Who wrote the classic early scifi story in which the AI realized that the only way to keep humanity safe was to kill them all?
Title: Re: Something I rushed off
Post by: Mesozoic Mister Nigel on June 13, 2012, 05:03:06 pm
you sure about that?
http://alt-market.com/articles/795-wall-street-banks-building-a-private-army

You realize that's a crackpot publication on par with the Examiner, right?

I mean, it's the kind of source that people who believe in our lizard overlords reference. It's not that I think it's implausible that banks would be building a private army, but consider your source on this one.
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 13, 2012, 05:39:45 pm
yeah. it was just a silly response.
although, it doesn't seem entirely implausible, so it was kind of tongue in cheek.
i can see international corporations having private armies build up significantly to the point of overshadowing state armies as a possible bizarre future.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 13, 2012, 06:05:43 pm
yeah. it was just a silly response.
although, it doesn't seem entirely implausible, so it was kind of tongue in cheek.
i can see international corporations having private armies build up significantly to the point of overshadowing state armies as a possible bizarre future.

In the present.
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 13, 2012, 06:21:22 pm
yeah. it was just a silly response.
although, it doesn't seem entirely implausible, so it was kind of tongue in cheek.
i can see international corporations having private armies build up significantly to the point of overshadowing state armies as a possible bizarre future.

In the present.
do you mean indirectly? as in the corporations de-facto controlling the state armies for their private use?
Title: Re: Something I rushed off
Post by: P3nT4gR4m on June 13, 2012, 06:46:36 pm
Here's some shit that just popped into my head. I'm taking it as a given that, sooner or later, some organisation will take over the world.

There's no profit in owning the whole thing.

Don't quite follow you but my curiosity is piqued. Pls to explain?
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 13, 2012, 07:14:45 pm
yeah. it was just a silly response.
although, it doesn't seem entirely implausible, so it was kind of tongue in cheek.
i can see international corporations having private armies build up significantly to the point of overshadowing state armies as a possible bizarre future.

In the present.
do you mean indirectly? as in the corporations de-facto controlling the state armies for their private use?

No, I mean directly.  My company has direct-employed troops in a couple of African nations, and I would be fucking SHOCKED if our competitors didn't.

We call them "security", but they're essentially air cav.
Title: Re: Something I rushed off
Post by: LMNO on June 13, 2012, 07:16:25 pm
Also, check out some of those bannana farms in South America.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 13, 2012, 07:16:52 pm
Here's some shit that just popped into my head. I'm taking it as a given that, sooner or later, some organisation will take over the world.

There's no profit in owning the whole thing.

Don't quite follow you but my curiosity is piqued. Pls to explain?

You can only make money on a global scale if you can exploit the gap between the HAVES and the HAVE NOTS.  If your organization includes everyone, there is no gap to exploit.  Just a bunch of poor people COSTING you money.  It's a sucker's game.  Why do you think people stopped annexing other countries?  When you take over, THEIR problems become YOUR problems, instead of something to exploit.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 13, 2012, 07:18:08 pm
Also, check out some of those bannana farms in South America.

Two things:

1.  DelMonte, under its old name "United Fruit", used to have company troops they called "United States Marines" to handle that shit.  It's cheaper and more popular to use local troops these days.

2.  "Bannana"?  You make me sad.
Title: Re: Something I rushed off
Post by: LMNO on June 13, 2012, 07:21:53 pm
 :regret:
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 13, 2012, 07:31:56 pm
yeah. it was just a silly response.
although, it doesn't seem entirely implausible, so it was kind of tongue in cheek.
i can see international corporations having private armies build up significantly to the point of overshadowing state armies as a possible bizarre future.

In the present.
do you mean indirectly? as in the corporations de-facto controlling the state armies for their private use?

No, I mean directly.  My company has direct-employed troops in a couple of African nations, and I would be fucking SHOCKED if our competitors didn't.

We call them "security", but they're essentially air cav.

Oh, definitely.
i'm just saying it would be unsurprising if, in the future, they overshadow state militaries. À la Jennifer Government.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 13, 2012, 07:35:00 pm
yeah. it was just a silly response.
although, it doesn't seem entirely implausible, so it was kind of tongue in cheek.
i can see international corporations having private armies build up significantly to the point of overshadowing state armies as a possible bizarre future.

In the present.
do you mean indirectly? as in the corporations de-facto controlling the state armies for their private use?

No, I mean directly.  My company has direct-employed troops in a couple of African nations, and I would be fucking SHOCKED if our competitors didn't.

We call them "security", but they're essentially air cav.

Oh, definitely.
i'm just saying it would be unsurprising if, in the future, they overshadow state militaries. À la Jennifer Government.

The nations they are in?  The corporate troops are definitely stronger and better trained.
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 13, 2012, 07:54:12 pm
This is so?  :?
Title: Re: Something I rushed off
Post by: Mesozoic Mister Nigel on June 13, 2012, 09:32:12 pm
This is so?  :?

Yes, it is.
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 13, 2012, 09:47:44 pm
This is so?  :?

Yes, it is.

although corporations are likely to spend on private armies much more efficiently than the state, how is it possible that they overshadow them (specifically the US) just given the money spent?
what corporations would be able to have air superiority over the US military?
or have carrier fleets?
what corporations have weapon stockpiles to rival it?
or soldiers for that matter?

i either didn't make my meaning clear, or i'm missing something big.
Title: Re: Something I rushed off
Post by: Mesozoic Mister Nigel on June 13, 2012, 10:00:29 pm
This is so?  :?

Yes, it is.

although corporations are likely to spend on private armies much more efficiently than the state, how is it possible that they overshadow them (specifically the US) just given the money spent?
what corporations would be able to have air superiority over the US military?
or have carrier fleets?
what corporations have weapon stockpiles to rival it?
or soldiers for that matter?

i either didn't make my meaning clear, or i'm missing something big.

I think what you're missing is the specific context of Roger's statement that the company militias are bigger and better armed/trained than the armies of the countries they are deployed in. Not that they are bigger than the US military.
Title: Re: Something I rushed off
Post by: The Good Reverend Roger on June 13, 2012, 11:34:40 pm
This is so?  :?

Yes, it is.

although corporations are likely to spend on private armies much more efficiently than the state, how is it possible that they overshadow them (specifically the US) just given the money spent?
what corporations would be able to have air superiority over the US military?
or have carrier fleets?
what corporations have weapon stockpiles to rival it?
or soldiers for that matter?

i either didn't make my meaning clear, or i'm missing something big.

You didn't say America.  I said "two African nations".
Title: Re: Something I rushed off
Post by: Elder Iptuous on June 13, 2012, 11:54:12 pm
Ah ok.
Totally.
:)
Title: Re: Something I rushed off
Post by: minuspace on June 14, 2012, 07:31:43 am
Sources indicate that a large metropolis would descend into chaos within 72 hours of having communications/utilities shut down... 

Your sources are fucked.  The Great New York Blackout of 1977 led to chaos in less than 24 hours, with looting, arson, and general disorder starting at nightfall.

Fucked though they may be, that only increases their credibility, if you catch my drift.  That, and any thoughts about trust on the matter only corroborate how quickly these things can turn on themselves.  Back to the point: sure, NYC is different, although having been there through the last one does not necessarily make me feel any better - just because I must be bored enough to think about these things.
Title: Re: Something I rushed off
Post by: Sung Low on March 25, 2018, 04:11:43 am
Bumped because of prescience

At the moment though it seems that dick waving still takes precedence over subterfuge and AT THE MOMENT I think that they'll be unable to let go of that.