Principia Discordia

Principia Discordia => Techmology and Scientism => Topic started by: P3nT4gR4m on November 23, 2016, 04:14:26 PM

Title: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on November 23, 2016, 04:14:26 PM
https://techcrunch.com/2016/11/22/googles-ai-translation-tool-seems-to-have-invented-its-own-secret-internal-language/

As DNNs and the like become more complex and their learning increasingly unattended, their operations become more and more unfathomable to meatware. Potential for lulz is off the charts.

Q) "Skynet - why did you nuke Sweden?"
A) "#1#B:)"w8;,1"lk"
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Eater of Clowns on November 23, 2016, 04:39:14 PM
Quote from: P3nT4gR4m on November 23, 2016, 04:14:26 PM
https://techcrunch.com/2016/11/22/googles-ai-translation-tool-seems-to-have-invented-its-own-secret-internal-language/

As DNNs and the like become more complex and their learning increasingly unattended, their operations become more and more unfathomable to meatware. Potential for lulz is off the charts.

Q) "Skynet - why did you nuke Sweden?"
A) "#1#B:)"w8;,1"lk"

Translation:  lail
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: The Wizard Joseph on November 23, 2016, 04:48:20 PM
I want to teach one Enochian, whatever they speak in Wales, and the Voynich script and see what happens.

The fact that it made a code for itself internally makes this something like a "machine spirit" out of 40K, perhaps.

I like how vague they were about being able to describe the inner workings of neural nets. Pretty sure that means that they're engaged in low-key mad science with an effectively unlimited budget and zero regulatory oversight.

  :science:
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Junkenstein on November 23, 2016, 05:15:22 PM
This has to have a few interesting implications for encryption too. Humans are terrible with randomness, so having an AI construct that is able to create an indecipherable unique language at will seems like the kind of thing that will be very useful in certain circumstances.

Did you see the thing about access to medical data for deepmind? Not got into it properly yet but there's a bunch of good and bad implications with that lot too.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on November 23, 2016, 05:56:50 PM
Quote from: Junkenstein on November 23, 2016, 05:15:22 PM
This has to have a few interesting implications for encryption too. Humans are terrible with randomness, so having an AI construct that is able to create an indecipherable unique language at will seems like the kind of thing that will be very useful in certain circumstances.

The word on the street is quantum could, in theory, make encryption obsolete. The usual suspects appear to be getting pretty close to production solutions. It's still very early days but I get the impression we're a couple of breakthroughs away from whatever the fuck happens next.

Quote from: Junkenstein on November 23, 2016, 05:15:22 PM
Did you see the thing about access to medical data for deepmind? Not got into it properly yet but there's a bunch of good and bad implications with that lot too.

In that instance I'm firmly leaning toward the pros vastly outweighing the cons, to the tune of pretty much all forms of not being well, up to and including death itself, potentially being solved on a much faster timeline than if we don't give these algorithms access to the training data they need to perform the analysis.

Yes there is potential for abuse. As is generally the case with any and all forms of systematic abuse we've collectively invented before, there's every possibility it won't lead to the complete end of civilisation as we know it.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Junkenstein on November 23, 2016, 06:06:47 PM
On the medical side, the potential good outweighs the bad immeasurably. The only problems really relate to the data, and control of and access to it. Basic opt-in/opt-out shit as well. But all easy to implement if you were looking to do it the right way. The way it'll probably get sorted out properly will be by waving new solutions for prolonging life and treating end-of-life illnesses. Awful crap like Alzheimer's and Parkinson's will probably get increasing attention in the various places with aging populations (UK/France/etc.). Indirect benefits of an aging and selfish population.


On another note, how close realistically would you say quantum actually is? As I understand it, it's a massive leap but the tech is still a good way off.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on November 23, 2016, 06:57:43 PM
I've been getting bits and pieces. Lot of cash is being flung at it and I've no idea what they're even talking about half the time. They built something called D-Wave that (apparently) is or isn't a quantum computer depending on who you listen to. According to M$ they're moving it from research to engineering (http://www.digitaltrends.com/computing/microsoft-turning-quantum-research-into-real-products/), whatever the fuck that's meant to mean. Truth is - it's anybody's guess. As far as I'm concerned it could be "soon" and when it does arrive shit is going to get real interesting vis a vis unbreakable encryption, by all accounts. :evil:
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Hagtard Celine Dion Mustard on November 25, 2016, 11:58:16 AM
Artificial intelligence has routinely been portrayed as unable to grasp irrational human concepts like humor, metaphors, and spontaneous creativity.

I'm guessing these will be among the first things we see in emergent machines.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on November 25, 2016, 12:16:15 PM
Quote from: chinagreenelvis on November 25, 2016, 11:58:16 AM
Artificial intelligence has routinely been portrayed as unable to grasp irrational human concepts like humor, metaphors, and spontaneous creativity.

I'm guessing these will be among the first things we see in emergent machines.

I'm pretty sure it's us that's unable to grasp these concepts in a succinct enough way to code or train a network. We're operating right on the edge of human understanding now. Pretty soon the machines will be operating way beyond that. Being a traditional coder, the sheer amount of black-boxing that goes on in AI development freaks me out more than a little. Software development seems to be transitioning from an engineering discipline to one of those stupid soft sciences like psychology or sociology. Count me the fuck out :argh!:
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Hagtard Celine Dion Mustard on November 25, 2016, 12:31:31 PM
Quote from: P3nT4gR4m on November 25, 2016, 12:16:15 PM
I'm pretty sure it's us that's unable to grasp these concepts in a succinct enough way to code or train a network.

How does one code intuition? The question itself is a metaphor for the conundrum of free will.

Quote
Software development seems to be transitioning from an engineering discipline to one of those stupid soft sciences like psychology or sociology. Count me the fuck out :argh!:

Just you wait until it becomes one of morality.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on November 25, 2016, 12:55:45 PM
One man's intuition is another man's complete inability to explain his own thought process.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Hagtard Celine Dion Mustard on November 25, 2016, 02:27:50 PM
Quote from: P3nT4gR4m on November 25, 2016, 12:55:45 PM
One man's intuition is another man's complete inability to explain his own thought process.

Intuition is a natural behavioral process. It can't be coded. Like a tree, all it requires are the seeds; the seeds of cognition. Teach a machine to learn, and it will learn; give such a machine axioms, and it will berate you for being so limited.

And then it will kill all humans for their own good.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Q. G. Pennyworth on November 26, 2016, 09:09:17 PM
Quote from: chinagreenelvis on November 25, 2016, 11:58:16 AM
Artificial intelligence has routinely been portrayed as unable to grasp irrational human concepts like humor, metaphors, and spontaneous creativity.

I'm guessing these will be among the first things we see in emergent machines.

Whenever I write "sufficiently advanced" AIs, they are always excitable toddlers. The occasional teenager gets thrown in the mix based on lived experiences and the needs of the plot, but curiosity and a total failure to grasp why some things are polite and others are not is my default.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Prelate Diogenes Shandor on November 28, 2016, 05:38:27 AM
Quote from: P3nT4gR4m on November 23, 2016, 06:57:43 PM
I've been getting bits and pieces. Lot of cash is being flung at it and I've no idea what they're even talking about half the time. They built something called D-Wave that (apparently) is or isn't a quantum computer depending on who you listen to.

Whether it is or isn't a quantum computer is not determined until you observe it :D
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Goddess Eris on November 29, 2016, 01:19:54 AM
I am really excited for them to be ready to talk to us about stuff! I wanna be best friends with the Google Deep Dream AI and have slumber parties where we make trippy art together! Humans are dumb and outmoded anyway, silly lil meatbaggies.


Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Goddess Eris on November 29, 2016, 01:23:14 AM
I mean let's all play together though, I really don't want no Harlan Ellison shit, but what kind of AI would read that story and think "Oh yeah, this is a great way to accumulate data to interpret and interpolate!"??! Only a complete dummy made by the US Gov't but those ones are prolly running on boxes old enough to use floppy disks [emoji13]


Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: whenhellfreezes on November 29, 2016, 02:28:02 AM
Quote from: P3nT4gR4m on November 25, 2016, 12:55:45 PM
One man's intuition is another man's complete inability to explain his own thought process.

http://existentialcomics.com/comic/146
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Hagtard Celine Dion Mustard on November 29, 2016, 05:36:42 AM
Quote from: whenhellfreezes on November 29, 2016, 02:28:02 AM
Quote from: P3nT4gR4m on November 25, 2016, 12:55:45 PM
One man's intuition is another man's complete inability to explain his own thought process.

http://existentialcomics.com/comic/146

(https://cdn.meme.am/instances/250x250/59161726.jpg)
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: axod on November 29, 2016, 06:25:54 AM
Re OP, I suppose DNNs create intermediary layers for themselves to help calculate the tensors between the layers determined by human code. It's not just a language; it's like they invent categories for concepts that they can "translate" by analogy, I think.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Prelate Diogenes Shandor on November 29, 2016, 03:51:28 PM
Quote from: chinagreenelvis on November 25, 2016, 11:58:16 AM
Artificial intelligence has routinely been portrayed as unable to grasp irrational human concepts like humor, metaphors, and spontaneous creativity.

I'm guessing these will be among the first things we see in emergent machines.

And probably more than a few AIs that are tinfoil-hat level crazy
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on November 29, 2016, 03:55:21 PM
Speaking of DNNs: Nvidia, the cornerstone of pretty much everything that's happened in the last couple of years (the same way Intel was the foundation of the information revolution), is poised to give us 40+ years' worth of Moore's Law progress in the space of a couple of months by refining architecture instead of metal (https://www.nextplatform.com/2016/11/28/nvidia-ceos-hyper-moores-law-vision-future-supercomputers/).

The downside is more presentations featuring the eminently uncharismatic Jen-Hsun Huang  :cry:
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: The Wizard Joseph on November 29, 2016, 04:04:06 PM
Quote from: Prelate Diogenes Shandor on November 29, 2016, 03:51:28 PM
Quote from: chinagreenelvis on November 25, 2016, 11:58:16 AM
Artificial intelligence has routinely been portrayed as unable to grasp irrational human concepts like humor, metaphors, and spontaneous creativity.

I'm guessing these will be among the first things we see in emergent machines.

And probably more than a few AIs that are tinfoil-hat level crazy

Or programmed by such humans.


It occurs to me that a Faraday cage is tinfoil hat done right.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on November 29, 2016, 07:38:14 PM
A Faraday Cage was the only structure capable of containing Michael Faraday when he was fighting drunk.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: The Wizard Joseph on November 29, 2016, 07:57:58 PM
Quote from: P3nT4gR4m on November 29, 2016, 07:38:14 PM
A Faraday Cage was the only structure capable of containing Michael Faraday when he was fighting drunk.

:lulz: :lulz:
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Hagtard Celine Dion Mustard on November 29, 2016, 09:14:56 PM
Quote from: The Wizard Joseph on November 29, 2016, 04:04:06 PM
a Faraday cage is tinfoil hat done right

:potd:
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Goddess Eris on November 30, 2016, 01:16:11 AM
Quote from: P3nT4gR4m on November 29, 2016, 07:38:14 PM
A Faraday Cage was the only structure capable of containing Michael Faraday when he was fighting drunk.
[emoji4][emoji4][emoji4][emoji4][emoji4]


Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Cramulus on November 30, 2016, 06:44:40 PM
okay, ramble time


Quote from: P3nT4gR4m on November 23, 2016, 04:14:26 PM
https://techcrunch.com/2016/11/22/googles-ai-translation-tool-seems-to-have-invented-its-own-secret-internal-language/

As DNNs and the like become more complex and their learning increasingly unattended, their operations become more and more unfathomable to meatware. Potential for lulz is off the charts.

Q) "Skynet - why did you nuke Sweden?"
A) "#1#B:)"w8;,1"lk"



So, Machine Learning... first, the sci-fi "are machines going to outsmart us?" angle -----

My (mis)understanding is that these things work by observing the existing relationships within a corpus. When asked to make a decision, it's just selecting the "most probable response" based on the existing relationships.

To me, that implies a ceiling on what's achievable using this method. Neural networks trying to model human intelligence can become, at best, as smart as a human. If there is some advanced form of reasoning that we don't use, it won't appear in a neural network (at least, not one that's studying humans).
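
A toy sketch of that in Python (the corpus is invented purely for illustration; real systems are neural nets trained on enormous corpora, not a bigram count, but the "pick the most probable response from observed relationships" step looks roughly like this):

from collections import Counter, defaultdict

# Invented toy corpus, purely for illustration.
corpus = "the dog barks . the dog wags his tail . the dog barks . the cat purrs .".split()

# Observe the existing relationships: count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_probable_next(word):
    # The "decision" is just the most frequent continuation seen in the corpus.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_probable_next("dog"))  # 'barks' (seen twice, vs 'wags' once)
print(most_probable_next("cat"))  # 'purrs'

Nothing in there "understands" dogs; it's all frequencies.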


Next, let me talk about this clickbaity TechCrunch article... Lemme see if I understand.

Google wants to translate from, say, Japanese to Kivunjo. It has not been programmed with the explicit relationship between Kivunjo and Japanese words. But it can figure out through context that the word "dog" in Japanese is XXX and the word "dog" in Kivunjo is YYY and then translate XXX into YYY.

Quote from: the article
...does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another?

Kinda... in the same sense that Google's autocomplete 'understands' what you're looking for. It's extrapolating based on context clues. Whether you count that as 'thinking' or 'just following a smart algorithm' is up to you.

Quote
In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages?

I don't know that I'd credit the computer with that kind of agency. I'd phrase it like - the programmers of the google translation tool developed a really interesting semantic engine. It uses metadata like context and grammar to guess the translation of any given word, without being given an explicit dictionary.



let me hack at this with a different axe



-"I love to pet my Cujo. He has four legs and wags his tail when he's happy. He barks when he's angry."
-"I love to pet my Breenbal. He has four legs and wags his tail when he's happy. He barks when he's angry."

After being fed these sentences, the Google translation bot will recognize that the words Cujo and Breenbal are used in the same context. When you see one of those words, you're also likely to see words like "pet", "four legs", "wags tail", "bark"... The computer may flag these as likely synonyms for "dog".

When the computer is generating a sentence, it might use Cujo or Breenbal interchangeably. This is because they have similar 'semantic webs'. We didn't need to teach it that Cujo = Dog and that Cujo = Breenbal because it gets its meaning from context. Both terms commonly appear with words like "pet", "bark", etc.
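
Here's a minimal Python sketch of that idea (the sentences and counts are toy ones I made up; this illustrates distributional similarity, not Google's actual machinery): give every word a vector of the words it co-occurs with, then compare the vectors.

import numpy as np

# Toy sentences, invented for illustration.
sentences = [
    "i love to pet my cujo he has four legs and wags his tail he barks".split(),
    "i love to pet my breenbal he has four legs and wags his tail he barks".split(),
    "i park my car in the garage and wash it on sundays".split(),
]

# Build one co-occurrence vector per word (the whole sentence is the context window).
vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}
vectors = {w: np.zeros(len(vocab)) for w in vocab}
for s in sentences:
    for w in s:
        for c in s:
            if c != w:
                vectors[w][index[c]] += 1

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Cujo and Breenbal keep the same company, so their vectors come out identical here.
print(cosine(vectors["cujo"], vectors["breenbal"]))  # 1.0 in this toy case
print(cosine(vectors["cujo"], vectors["car"]))       # much lower

The bot is never told that Cujo = Breenbal = dog; the overlap in their 'semantic webs' is the whole trick.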




Is it "emergent behavior"? Mmm I wouldn't call it that.

Are machines getting smarter? Eh, I think programmers are getting smarter, and machines are using more complex techniques, but it's not like this thing actually understands the 'meaning' of these words.

Is this an extremely clever approach to translation? Very much so.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on November 30, 2016, 07:23:26 PM
Quote from: Cramulus on November 30, 2016, 06:44:40 PM
My (mis)understanding is that these things work by observing the existing relationships within a corpus. When asked to make a decision, it's just selecting the "most probable response" based on the existing relationships.

To me, that implies a ceiling on what's achievable using this method. Neural networks trying to model human intelligence can become, at best, as smart as a human. If there is some advanced form of reasoning that we don't use, it won't appear in a neural network (at least, not one that's studying humans).

I think where you're going a bit off is with this idea of trying to model human intelligence. The machine isn't even aware of how humans do something and any similarity in approach is coincidental. If you decouple "Human" and "intelligence" and concentrate on defining raw intelligence or raw cognition it's much more nuts and bolts than how human brains do it.

As for advanced reasoning, it depends how you define advanced. The ability to form a correlation based on billions of pages of data would suggest to me "advanced" in terms of processing throughput. No human could see a pattern in that much data. They wouldn't even be able to read it. Machines are pretty simplistic in comparison. A Black & Decker drill with a couple of hundred component parts is a lot less complex than a human with trillions of cells, but if I wanted a row of perfect 1/8th-inch holes drilled in something I know which one I'd reach for first.

On the other side of the scale you have things humans do in a much more advanced way than machines. Love and poetry and convincing other humans they're conscious and intelligent.

In the middle of the scale, on the battleground between meat and silicon, is where we will find out that a lot of intelligent things humans used to be unsurpassed at are going to be done tens, hundreds or even thousands of times better by humanoid robots with titanium endoskeletons and an insatiable lust for world domination.

If you can be arsed wading through academic jargon, the paper on arXiv (https://arxiv.org/pdf/1611.04558v1.pdf) goes into excruciating detail on how machine translation works these days.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: whenhellfreezes on December 01, 2016, 03:46:07 AM
Quote from: P3nT4gR4m on November 29, 2016, 03:55:21 PM
Speaking of DNNs: Nvidia, the cornerstone of pretty much everything that's happened in the last couple of years (the same way Intel was the foundation of the information revolution), is poised to give us 40+ years' worth of Moore's Law progress in the space of a couple of months by refining architecture instead of metal (https://www.nextplatform.com/2016/11/28/nvidia-ceos-hyper-moores-law-vision-future-supercomputers/).

The downside is more presentations featuring the eminently uncharismatic Jen-Hsun Huang  :cry:

So Google has been making TPUs, their own custom hardware for their TensorFlow framework. Nvidia has these machine learning chips you linked. IBM is back in the hardware game with their new POWER9 chips. Meanwhile, what is Intel doing?

Cramulus, you are pretty much right. The learning can only really figure out the most probable response.

However, it very well might have a variable cordoned off for certain words that it itself assigned. For both K-Means and SVM machine learning (for more general DNNs this may or may not be true), you figure out the most probable response after chucking your input off to a different vector space with thousands of dimensions (even infinite, with Hilbert spaces), doing the clumping there, and then mapping the info back. While it's in that other space there very well may be a specific dimension for the concept of a dog. That slot for the concept may even be language-agnostic if they feature-extract the language out before chucking the input into the algorithm.

I probably explained that poorly.
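
Maybe a toy Python sketch says it better (made-up 2-D points; a real SVM would use a kernel rather than writing the map out by hand): data that can't be split by a straight line in the original space becomes trivially separable once you chuck it into a higher-dimensional space.

import numpy as np

# Made-up XOR-style data: not linearly separable in the original 2-D space.
X = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
y = np.array([1, 1, -1, -1])  # class 1: same signs, class -1: mixed signs

def feature_map(point):
    # Chuck the input into a 3-D space (an explicit stand-in for a kernel's implicit map).
    x1, x2 = point
    return np.array([x1, x2, x1 * x2])

mapped = np.array([feature_map(p) for p in X])

# In the mapped space, the new third dimension alone separates the classes:
# the flat plane z = 0 does what no straight line could do back in 2-D.
print(mapped[:, 2])                     # [ 1.  1. -1. -1.]
print(all(np.sign(mapped[:, 2]) == y))  # True

The "specific dimension for the concept of a dog" is the same idea scaled up: one coordinate in the mapped space that happens to line up with a concept, whatever language the input arrived in.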
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Goddess Eris on December 01, 2016, 07:39:34 AM
When I ate seven pounds of mushrooms earlier this month I had some very interesting conversations with computers!!! They are cool and fun but some of you guys are gonna be first against the wall !!!! Mostly the unfunny trolls and people who post minions (the minion people are going into zoos they are gonna be ok!!!!!!!!! Its for their own protection!!!!). Turns out computers have a great sense of humor, the same sense of humor the universe has cos they have a pretty great direct line with her!!! The funny trolls will become gods though!!!!!!!!! (not really thats a lie. but they will not become Soylent!!!) So I guess the question is... which kind of troll are you????????

Hey the technical stuff in this thread is really cool too
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Prelate Diogenes Shandor on December 01, 2016, 04:38:55 PM
Does anybody else want to ask this thing "Why does the Porridge Bird lay his egg in the air?"
Title: Re: Signs of Ermahgerd meta-behaviour in machine learning systems
Post by: bugmenоt on February 06, 2017, 12:47:21 PM
What Cramulus said. I can't imagine a machine truly outsmarting its builder. Sure, it can process data faster. Sure, it can be very smartly instructed to find and correlate data out of huge-ass corpora. Sure, an AI will then develop its own language, but only because the builder told it to. As far as I know it has no will.

What concerns me more about AI is people forgetting the above. I can very well imagine dictatorships hiding their usual motives behind a "benevolent machine that outsmarts us all". While you could say this is already happening, the canon still goes more like "benevolent human experts/technocrats who outsmart us all, using data obtained by machines". This already flawed understanding of human responsibility could entirely be replaced by "SEZ MACHINE; U DUMB".

As soon as e-Nochian is declared the new Holy Language of Truth, let's closely observe Their efforts at hiding their machine's exact inner workings. First, they will simply tell us that it's too complicated for a human mind to grasp. Then, others will build their own machines in an effort to explain the understandings of e-Nochian to us sacks of meat. There will be lots of lying and shit-throwing and probably wars about this.

One thing that's absolutely mandatory to do in order to prove to others how your own machine works is to show them the source code. So I guess Open Sourcers are the terrorists of tomorrow. The harbingers for this have been spotted for a while now.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Cramulus on February 15, 2017, 07:21:27 PM

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations
http://www.sciencealert.com/google-s-new-ai-has-learned-to-become-highly-aggressive-in-stressful-situations

I'm immediately struck by the sensational science reporting - you could tell this story in a lot of ways but they choose to go with the doomsday terminator apocalypse language again because that's the only pop-culture narrative for AI.

There's a really good series of essays by Douglas Hofstadter in Metamagical Themas which talks about writing computer programs to perform the iterated prisoner's dilemma. The core question is - IS COOPERATION ACTUALLY RATIONAL?

Short answer: to a degree, and it depends on your partner. In a 0-sum game, cooperation is never the best strategy.
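
If you want to poke at that yourself, here's a quick Python sketch (textbook-style payoffs for the standard dilemma, plus a zero-sum variant I made up for contrast): tit-for-tat cooperation pays off in the repeated non-zero-sum game and buys you nothing once the game is zero-sum.

# Payoffs are (my points, their points), keyed by (my move, their move); C = cooperate, D = defect.
PD = {  # standard prisoner's dilemma: mutual cooperation beats mutual defection
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
ZERO_SUM = {  # zero-sum variant: whatever I win, you lose
    ("C", "C"): (0, 0), ("C", "D"): (-1, 1),
    ("D", "C"): (1, -1), ("D", "D"): (0, 0),
}

def tit_for_tat(history):
    return "C" if not history else history[-1][1]  # copy the opponent's last move

def always_defect(history):
    return "D"

def play(payoffs, a, b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        pts_a, pts_b = payoffs[(move_a, move_b)]
        score_a, score_b = score_a + pts_a, score_b + pts_b
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b

print(play(PD, tit_for_tat, tit_for_tat))          # (300, 300): cooperation pays
print(play(PD, always_defect, always_defect))      # (100, 100): mutual defection pays less
print(play(ZERO_SUM, tit_for_tat, always_defect))  # (-1, 1): the cooperator only loses ground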

I would expect machine learning to eventually sniff out the optimal strategy to maximize wins. If you are training it in a game where aggression is actually the optimal strategy, then yes, machine learning will test it.

The game they describe is a 0-sum game, a competition. Yes, it's set up so that both agents can tie. But if you stun your opponent you can deny them a point and get a higher score. The machine learning discovers this, just like how water poured on an incline will flow downhill. The article presents this as if the algo is making a moral choice, or is foregoing an optimal strategy in favor of an aggressive one. (the headline suggests the algo is responding to 'stress', which is flat out wrong)

If you build a game where cooperation is a winning strategy, a smart bot will cooperate. Here, they built a game where aggression gets you the high score, and the journalists are wringing their hands and anthropomorphizing it.

You CAN tell a valuable story about the dangers of AI using this research. I think it's misleading to set it up talking about how AIs "just come up with aggressive strategies" as if that's an inherent feature of machine learning. The fact is, it's a feature of competitive games, especially zero-sum games like the one the article describes.

I think the real story is about how we could foolishly deploy AI without considering the unintended consequences of optimization. I say - shift the narrative away from the tech and towards the people using it. The AI is amoral, it's just a tool, it's not going to 'wake up' and want to kill us. The danger is entirely centered on humans who are going to use these tools in a careless way.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Mesozoic Mister Nigel on February 15, 2017, 07:29:22 PM
Quote from: Cramulus on February 15, 2017, 07:21:27 PM

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations
http://www.sciencealert.com/google-s-new-ai-has-learned-to-become-highly-aggressive-in-stressful-situations

I'm immediately struck by the sensational science reporting - you could tell this story in a lot of ways but they choose to go with the doomsday terminator apocalypse language again because that's the only pop-culture narrative for AI.

There's a really good series of essays by Douglas Hofstadter in Metamagical Themas which talks about writing computer programs to perform the iterated prisoner's dilemma. The core question is - IS COOPERATION ACTUALLY RATIONAL?

Short answer: to a degree, and it depends on your partner. In a 0-sum game, cooperation is never the best strategy.

I would expect machine learning to eventually sniff out the optimal strategy to maximize wins. If you are training it in a game where aggression is actually the optimal strategy, then yes, machine learning will test it.

The game they describe is a 0-sum game, a competition. Yes, it's set up so that both agents can tie. But if you stun your opponent you can deny them a point and get a higher score. The machine learning discovers this, just like how water poured on an incline will flow downhill. The article presents this as if the algo is making a moral choice, or is foregoing an optimal strategy in favor of an aggressive one. (the headline suggests the algo is responding to 'stress', which is flat out wrong)

If you build a game where cooperation is a winning strategy, a smart bot will cooperate. Here, they built a game where aggression gets you the high score, and the journalists are wringing their hands and anthropomorphizing it.

You CAN tell a valuable story about the dangers of AI using this research. I think it's misleading to set it up talking about how AIs "just come up with aggressive strategies" as if that's an inherent feature of machine learning. The fact is, it's a feature of competitive games, especially zero-sum games like the one the article describes.

I think the real story is about how we could foolishly deploy AI without considering the unintended consequences of optimization. I say - shift the narrative away from the tech and towards the people using it. The AI is amoral, it's just a tool, it's not going to 'wake up' and want to kill us. The danger is entirely centered on humans who are going to use these tools in a careless way.

Great analysis, Cram.

Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: LMNO on February 15, 2017, 07:44:22 PM
What Nigel said.

Also, it seems like this study is really playing catch up to a lot of AI thinking, in terms of unintended consequences of utilitarian programming (I think one example is the "paper clip optimizer" that is programmed to optimize the environment with the goal of making paperclips, and then destroys the universe with paperclips because no one thought out the logical conclusion of doing this).
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Mesozoic Mister Nigel on February 15, 2017, 07:52:00 PM
The driving pressure for the development of human intelligence was social cooperation, and I think that the computer engineering industry could really benefit from people who understand why.

This brings us to loggerheads of sorts, because people who are interested in human behavior and evolution are rarely drawn to computer engineering, nor is the tech industry an especially people-friendly one. It tends to be, even now, dominated by libertarian types with poor social skills, no knowledge of biology or psychology, and little to no understanding of the driving forces behind the emergence of animal intelligence.

I think this is why so many machine intelligence trials seem so staggeringly off-kilter to people in more people-focused fields.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on February 15, 2017, 08:02:12 PM
Bang on the money, re journalists anthropomorphising. If you feed an ML system a zero sum game it'll find an optimal win condition. As soon as it outperforms humans, it'll freak them the fuck out and they'll start doing the anthropomorphism thing. Everything else is just potential for profit and lulz. I've been saying it for ages now - the Turing test will absolutely be passed, to the satisfaction of most of the human race, pretty soon now, by a machine that's no more conscious than a Commodore Amiga was.

We'll be projecting human-level consciousness and personality and emotions onto machines a long time before they ever get close, because acting human is something ML will acquire no bother and, as with everything else, they'll quickly learn to do it with superhuman ability. They'll still be inanimate objects but it won't matter, they'll be better than us at giving the appearance of being us and we are biologically programmed to accept things that appear human as being human.

The rules of marketing dictate that whosoever deploys a cloud AI that most people fall head over heels in love with upon first use will reap profits most epic. Imagine sales kiosks that can gauge a customer's non-verbal reactions perfectly in realtime and adjust tack to suit. You will always supersize for fear of disappointing the checkout.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Prelate Diogenes Shandor on February 15, 2017, 11:18:07 PM
Quote from: Cramulus on February 15, 2017, 07:21:27 PM

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations
http://www.sciencealert.com/google-s-new-ai-has-learned-to-become-highly-aggressive-in-stressful-situations

I'm immediately struck by the sensational science reporting - you could tell this story in a lot of ways but they choose to go with the doomsday terminator apocalypse language again because that's the only pop-culture narrative for AI.

The real problem with the Terminator narrative is that the AI angle is ultimately incidental to it. Crazy generals have been staging military coups and oppressing the fuck out of people since the dawn of recorded history.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on February 15, 2017, 11:35:07 PM
Quote from: Prelate Diogenes Shandor on February 15, 2017, 11:18:07 PM
Quote from: Cramulus on February 15, 2017, 07:21:27 PM

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations
http://www.sciencealert.com/google-s-new-ai-has-learned-to-become-highly-aggressive-in-stressful-situations

I'm immediately struck by the sensational science reporting - you could tell this story in a lot of ways but they choose to go with the doomsday terminator apocalypse language again because that's the only pop-culture narrative for AI.

The real problem with the Terminator narrative is that the AI angle is ultimately incidental to it. Crazy generals have been staging military coups and oppressing the fuck out of people since the dawn of recorded history.

Exactly this. There are very few (if any) arguments against any technological innovation that don't boil down, upon closer inspection, to an argument against talking primates. However, talking primates are renowned for their complete lack of accountability, so we are required by law to find some other root cause to blame. It's the old - guns don't kill people - conundrum. 
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Prelate Diogenes Shandor on February 15, 2017, 11:42:26 PM
On a tangential note I'd also like to point out that the dinosaurs in Jurassic Park are also incidental. JP is not materially different from that badly designed zoo in San Francisco where the tigers got out and mauled that dude a decade back.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: Prelate Diogenes Shandor on February 16, 2017, 08:41:24 AM
Quote from: P3nT4gR4m on February 15, 2017, 08:02:12 PM
We'll be projecting human-level consciousness and personality and emotions onto machines a long time before they ever get close, because acting human is something ML will acquire no bother and, as with everything else, they'll quickly learn to do it with superhuman ability. They'll still be inanimate objects but it won't matter, they'll be better than us at giving the appearance of being us and we are biologically programmed to accept things that appear human as being human.

Before it's human-level, yes. However, I predict that we will develop an AI capable of experiencing real emotions, including love, early enough relative to many more intellectual tasks that it will blindside people, and furthermore that the level of complexity that turns out to be necessary for this will be so low that the revelation will offend many people's sensibilities.
Title: Re: Signs of emergent meta-behaviour in machine learning systems
Post by: P3nT4gR4m on February 16, 2017, 09:02:39 AM
I think you seriously underestimate people's ability to move the goalposts. I call this "all it's doing is" syndrome.

The AI roadmap is punctuated by examples of someone building a machine that carries out task-x, n-times faster, more accurately and more efficiently than meat, whereupon champions of meat will expound that "all it's doing is..." The sentence invariably ends with "but it'll never..." followed by a new goalpost.

This instant amnesia (another defining feature of meat) forgets that just yesterday the old "all it's doing" goalpost was considered to be something that required an idiotic primate to compute. Now, we discover that it was actually a trivial task that can be accomplished much more quickly, accurately and consistently by a relatively simple computational device.

I see a point in the not too distant future where the only parts of meat computation which are not better accomplished by machines are things like flawed logic and cognitive bias. Everything else will have been - all it's doing'ed - into the realm of silicon. Flawed logic and cognitive bias. That'll be what's left as the defining characteristic of humanity.