
Signs of emergent meta-behaviour in machine learning systems

Started by P3nT4gR4m, November 23, 2016, 04:14:26 PM


Prelate Diogenes Shandor

Does anybody else want to ask this thing "Why does the Porridge Bird lay his egg in the air?"
Praise NHGH! For the tribulation of all sentient beings.


a plague on both your houses -Mercutio


https://www.youtube.com/watch?v=zrTGgpWmdZQ
https://www.youtube.com/watch?v=rVWd7nPjJH8


It is an unfortunate fact that every man who seeks to disseminate knowledge must contend not only against ignorance itself, but against false instruction as well. No sooner do we deem ourselves free from a particularly gross superstition, than we are confronted by some enemy to learning who would plunge us back into the darkness -H.P.Lovecraft


He who fights with monsters must take care lest he thereby become a monster -Nietzsche


https://www.youtube.com/watch?v=SHhrZgojY1Q


You are a fluke of the universe, and whether you can hear it or not the universe is laughing behind your back -Deteriorata


Don't use the email address in my profile, I lost the password years ago

bugmenot

What Cramulus said. I can't imagine a machine truly outsmarting its builder. Sure, it can process data faster. Sure, it can be very smartly instructed to find and correlate data out of huge-ass corpora. Sure, an AI will then develop its own language, but only because the builder told it to. As far as I know it has no will.

What concerns me more about AI is people forgetting the above. I can very well imagine dictatorships hiding their usual motives behind a "benevolent machine that outsmarts us all". While you could say this is already happening, the canon still goes more like "benevolent human experts/technocrats who outsmart us all, using data obtained by machines". This already flawed understanding of human responsibility could entirely be replaced by "SEZ MACHINE; U DUMB".

As soon as e-Nochian is declared the new Holy Language of Truth, let's closely observe Their efforts to hide their machine's exact inner workings. First, they will simply tell us that it's too complicated for a human mind to grasp. Then others will build their own machines in an effort to translate the understandings of e-Nochian for us sacks of meat. There will be lots of lying and shit-throwing, and probably wars, about this.

The one thing that's absolutely mandatory if you want to prove to others how your machine works is to show them the source code. So I guess Open Sourcers are the terrorists of tomorrow. The harbingers of this have been spotted for a while now.

Cramulus


Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations
http://www.sciencealert.com/google-s-new-ai-has-learned-to-become-highly-aggressive-in-stressful-situations

I'm immediately struck by the sensational science reporting - you could tell this story in a lot of ways but they choose to go with the doomsday terminator apocalypse language again because that's the only pop-culture narrative for AI.

There's a really good series of essays by Douglas Hofstadter in Metamagical Themas which talks about writing computer programs to perform the iterated prisoner's dilemma. The core question is - IS COOPERATION ACTUALLY RATIONAL?

Short answer: to a degree, and it depends on your partner. In a 0-sum game, cooperation is never the best strategy.
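Hofstadter's question can be made concrete. Here's a minimal sketch of the iterated dilemma, using the standard textbook payoffs (3/5/1/0) and two strategies familiar from Axelrod's tournaments; everything below is illustrative, not the code from Metamagical Themas:

```python
# Iterated prisoner's dilemma: does cooperation pay?
# Standard non-zero-sum payoffs: (my move, their move) -> my score.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,  # C = cooperate
    ("D", "C"): 5, ("D", "D"): 1,  # D = defect
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's last move."""
    return history[-1] if history else "C"

def always_defect(history):
    return "D"

def play(a, b, rounds=100):
    """Run the iterated game and return each player's total score."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a(hist_a), b(hist_b)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
        hist_a.append(mb)
        hist_b.append(ma)
    return score_a, score_b

coop = play(tit_for_tat, tit_for_tat)     # (300, 300)
mixed = play(tit_for_tat, always_defect)  # (99, 104)
```

Against a fellow tit-for-tat, cooperation compounds (300 points each over 100 rounds); against always-defect, tit-for-tat gets suckered exactly once and then stops. That's "to a degree, and it depends on your partner" in miniature.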

I would expect machine learning to eventually sniff out the optimal strategy to maximize wins. If you are training it in a game where aggression is actually the optimal strategy, then yes, machine learning will test it.

The game they describe is a 0-sum game, a competition. Yes, it's set up so that both agents can tie. But if you stun your opponent you can deny them a point and get a higher score. The machine learning discovers this, just like water poured on an incline flows downhill. The article presents this as if the algo is making a moral choice, or is forgoing an optimal strategy in favor of an aggressive one. (The headline suggests the algo is responding to 'stress', which is flat-out wrong.)
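A toy version of that payoff structure makes the point. The numbers below are invented, not taken from the paper, but they capture the incentive: when stunning denies your opponent the apples, a bare reward-maximizer "chooses" to stun the same way water chooses downhill:

```python
# Hypothetical payoffs for a gathering game where stunning the other
# agent denies them the resource. All quantities are invented.
APPLES = 10

def payoff(me, them):
    """Return (my apples, their apples) for one round."""
    if me == "stun" and them == "stun":
        return (0, 0)            # both stunned, nobody gathers
    if me == "stun":
        return (APPLES, 0)       # opponent stunned; I take everything
    if them == "stun":
        return (0, APPLES)
    return (APPLES // 2, APPLES // 2)  # share peacefully

# A greedy "learner" that just maximizes its own total reward
# across the opponent's possible moves:
best = max(("share", "stun"),
           key=lambda m: sum(payoff(m, o)[0] for o in ("share", "stun")))
# best == "stun": 10 + 0 beats 5 + 0
```

No stress, no malice, no moral choice: "stun" simply scores 10 against 5 across the opponent's possible moves, so a reward-maximizer lands on it.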

If you build a game where cooperation is a winning strategy, a smart bot will cooperate. Here, they built a game where aggression gets you the high score, and the journalists are wringing their hands and anthropomorphizing it.

You CAN tell a valuable story about the dangers of AI using this research. I think it's misleading to set it up talking about how AIs "just come up with aggressive strategies" as if that's an inherent feature of machine learning. The fact is, it's a feature of competitive games, especially zero-sum games like the one the article describes.

I think the real story is about how we could foolishly deploy AI without considering the unintended consequences of optimization. I say - shift the narrative away from the tech and towards the people using it. The AI is amoral, it's just a tool, it's not going to 'wake up' and want to kill us. The danger is entirely centered on humans who are going to use these tools in a careless way.

Mesozoic Mister Nigel

Quote from: Cramulus on February 15, 2017, 07:21:27 PM

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations
http://www.sciencealert.com/google-s-new-ai-has-learned-to-become-highly-aggressive-in-stressful-situations

I'm immediately struck by the sensational science reporting - you could tell this story in a lot of ways but they choose to go with the doomsday terminator apocalypse language again because that's the only pop-culture narrative for AI.

Great analysis, Cram.

"I'm guessing it was January 2007, a meeting in Bethesda, we got a bag of bees and just started smashing them on the desk," Charles Wick said. "It was very complicated."


LMNO

What Nigel said.

Also, it seems like this study is really playing catch-up to a lot of existing AI thinking about the unintended consequences of utilitarian programming. One well-known example is the "paperclip maximizer": a machine programmed to optimize its environment with the goal of making paperclips, which then destroys the universe by turning everything into paperclips, because no one thought through the logical conclusion of that goal.
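As a sketch of that failure mode (all names and quantities invented): the objective below counts paperclips and nothing else, so nothing in it stops the optimizer from consuming everything it can reach.

```python
# Toy misspecified objective: reward counts paperclips and nothing
# else, so the optimizer converts every reachable resource.
# The "world" and its quantities are invented for illustration.
world = {"iron": 50, "forests": 30, "cities": 20}

def make_paperclips(world):
    """Greedy optimizer: consume any resource, credit one paperclip
    per unit. Nothing in the objective says what is off-limits."""
    paperclips = 0
    for resource in list(world):
        paperclips += world[resource]
        world[resource] = 0  # the resource is gone
    return paperclips

clips = make_paperclips(world)
# clips == 100 and the world is empty: the objective was satisfied
# perfectly, the intent was not.
```

The bug isn't in the loop; it's in the objective, which is the whole point of the thought experiment.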

Mesozoic Mister Nigel

The driving pressure for the development of human intelligence was social cooperation, and I think that the computer engineering industry could really benefit from people who understand why.

This brings us to loggerheads of sorts, because people who are interested in human behavior and evolution are rarely drawn to computer engineering, nor is tech an especially people-friendly industry. It tends to be, even now, dominated by libertarian types with poor social skills, no knowledge of biology or psychology, and little to no understanding of the driving forces behind the emergence of animal intelligence.

I think this is why so many machine intelligence trials seem so staggeringly off-kilter to people in more people-focused fields.
"I'm guessing it was January 2007, a meeting in Bethesda, we got a bag of bees and just started smashing them on the desk," Charles Wick said. "It was very complicated."


P3nT4gR4m

Bang on the money, re journalists anthropomorphising. If you feed an ML system a zero-sum game it'll find an optimal win condition. As soon as it outperforms humans, it'll freak them the fuck out and they'll start doing the anthropomorphism thing. Everything else is just potential for profit and lulz. I've been saying it for ages now - the Turing test will absolutely be passed, to the satisfaction of most of the human race, pretty soon now, by a machine that's no more conscious than a Commodore Amiga was.

We'll be projecting human-level consciousness and personality and emotions onto machines a long time before they ever get close, because acting human is something ML will acquire no bother and, as with everything else, they'll quickly learn to do it with superhuman ability. They'll still be inanimate objects but it won't matter, they'll be better than us at giving the appearance of being us and we are biologically programmed to accept things that appear human as being human.

The rules of marketing dictate that whosoever deploys a cloud AI that most people fall head over heels in love with upon first use will reap profits most epic. Imagine sales kiosks that can gauge a customer's non-verbal reactions perfectly in real time and adjust tack to suit. You will always supersize for fear of disappointing the checkout.

I'm up to my arse in Brexit Numpties, but I want more.  Target-rich environments are the new sexy.
Not actually a meat product.
Ass-Kicking & Foot-Stomping Ancient Master of SHIT FUCK FUCK FUCK
Awful and Bent Behemothic Results of Last Night's Painful Squat.
High Altitude Haggis-Filled Sex Bucket From Beyond Time and Space.
Internet Monkey Person of Filthy and Immoral Pygmy-Porn Wart Contagion
Octomom Auxiliary Heat Exchanger Repairman
walking the fine line between genius and batshit fucking crazy

"computation is a pattern in the spacetime arrangement of particles, and it's not the particles but the pattern that really matters! Matter doesn't matter." -- Max Tegmark

Prelate Diogenes Shandor

Quote from: Cramulus on February 15, 2017, 07:21:27 PM

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations
http://www.sciencealert.com/google-s-new-ai-has-learned-to-become-highly-aggressive-in-stressful-situations

I'm immediately struck by the sensational science reporting - you could tell this story in a lot of ways but they choose to go with the doomsday terminator apocalypse language again because that's the only pop-culture narrative for AI.

The real problem with the Terminator narrative is that the AI angle is ultimately incidental to it. Crazy generals have been staging military coups and oppressing the fuck out of people since the dawn of recorded history.

P3nT4gR4m

Quote from: Prelate Diogenes Shandor on February 15, 2017, 11:18:07 PM
The real problem with the Terminator narrative is that the AI angle is ultimately incidental to it. Crazy generals have been staging military coups and oppressing the fuck out of people since the dawn of recorded history.

Exactly this. There are very few (if any) arguments against any technological innovation that don't boil down, upon closer inspection, to an argument against talking primates. However, talking primates are renowned for their complete lack of accountability, so we are required by law to find some other root cause to blame. It's the old "guns don't kill people" conundrum.


Prelate Diogenes Shandor

On a tangential note I'd also like to point out that the dinosaurs in Jurassic Park are also incidental. JP is not materially different from that badly designed zoo in San Francisco where the tigers got out and mauled that dude a decade back.

Prelate Diogenes Shandor

Quote from: P3nT4gR4m on February 15, 2017, 08:02:12 PM
We'll be projecting human-level consciousness and personality and emotions onto machines a long time before they ever get close, because acting human is something ML will acquire no bother and, as with everything else, they'll quickly learn to do it with superhuman ability. They'll still be inanimate objects but it won't matter, they'll be better than us at giving the appearance of being us and we are biologically programmed to accept things that appear human as being human.

Before it's human-level, yes. However, I predict that we will develop an AI capable of experiencing real emotions, including love, early enough relative to many more intellectual tasks that it will blindside people. Furthermore, the level of complexity that turns out to be necessary for this will be so low that the revelation will offend many people's sensibilities.

P3nT4gR4m

I think you seriously underestimate people's ability to move the goalposts. I call this "all it's doing is" syndrome.

The AI roadmap is punctuated by examples of someone building a machine that carries out task X, n times faster, more accurately and more efficiently than meat, whereupon champions of meat will expound that "all it's doing is..." The sentence invariably ends with "but it'll never..." followed by a new goalpost.

This instant amnesia (another defining feature of meat) forgets that just yesterday the old "all it's doing" goalpost was considered to be something that required an idiotic primate to compute. Now, we discover that it was actually a trivial task that can be accomplished much more quickly, accurately and consistently by a relatively simple computational device.

I see a point in the not-too-distant future where the only parts of meat computation not better accomplished by machines are things like flawed logic and cognitive bias. Everything else will have been "all it's doing"-ed into the realm of silicon. Flawed logic and cognitive bias. That'll be what's left as the defining characteristic of humanity.
