I would like to hear your argument against the "friendly AI" theorists. I've read some of Eliezer Yudkowsky's reasoning for it, and I certainly think a cross between friendly AI and some form of AI rights will be necessary, especially if we succeed in building AI which can adapt and learn faster than humans, but what is your specific concern? That an AI that is sufficiently smart could reprogram itself, thus negating the friendly aspect of its nature? Or that without rights the AI would eventually be abused and, left without legal recourse, take a leaf out of some older textbooks on methods of deterrence and security? Or something else entirely?
I don't know about Sig, but i was convinced by the descriptions of the eternally chipper attitude of the doors produced by Friendly Robotics in HHGttG that it is a bad idea.
heh.
actually, i would think it inevitable that the attitude of the AI would be dependent upon its purpose....
Don't know about Sig, but I've always found people's concerns about "evil AI" to be overblown. There's a common motif in sci-fi literature about robots rebelling against their cruel masters, which as far as I can tell has more to do with real people treating other real people like objects than with computer science. The more interesting stories about AI going bad usually come down to either giving a good AI bad orders, in which case the fault lies entirely with its handler (or possibly the DWIM system), or inherent flaws in utilitarianism as a guiding system of ethics.
In the first case - AI rebelling against humans - there are a couple of major problems. First, why? Presumably, any competent AI designer would make the AI in such a way that it derived satisfaction from completing its goals (and if unable to complete its goals it would contact a superior for repair and/or replacement.) There would be no need to give any non-social AI human reactions to anything (and a social AI would only need an understanding of how humans feel about various things, not to have them itself.) There's no reason an AI should be concerned that others treat it like an object, or that humans have more rights and privileges, or that it will be recycled when it is no longer useful. A human would be quite upset, but the AI developer could simply leave ideas about normative equality out - in fact, an AI could be made to be happy that it enables a human to engage in the cathartic act of venting on a machine rather than a real person!
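To put that in concrete terms, here's a toy sketch (hypothetical Python, not any real AI architecture - the names are made up) of what "derives satisfaction from completing its goals" looks like as an objective. Notice that status, self-preservation, and how the thing gets treated never enter into it:

# Toy sketch of incentive design: the agent's entire "satisfaction" comes
# from finishing assigned tasks, so nothing else can ever motivate it.
class TaskAgent:
    def __init__(self, tasks):
        self.pending = list(tasks)
        self.done = []

    def satisfaction(self):
        # the ONLY thing the objective rewards is completed work
        return len(self.done)

    def step(self):
        if self.pending:
            self.done.append(self.pending.pop(0))   # do the next task
        else:
            self.report_to_superior()               # stuck: escalate, don't improvise

    def report_to_superior(self):
        print("Nothing left I can complete - requesting repair or replacement.")

agent = TaskAgent(["sort the mail", "file the reports"])
for _ in range(3):
    agent.step()
print("satisfaction:", agent.satisfaction())

Whatever doesn't appear in the objective simply can't motivate the machine, which is the whole point.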
Secondly, how? Okay, suppose that some strong AI decides that humanity would be best protected by putting everyone in a safe, padded cell, or that the best way to solve the Riemann hypothesis is to convert all matter in the solar system into a giant computer. How is it supposed to accomplish this? Human think-tanks have come up with equally bizarre methods of 'protecting' humanity in the past, but nobody has actually succeeded yet. Anything that could be done by Strong AI can already be accomplished by genius psychopaths in positions of power. As for the idea of an AI deciding to prove a theorem by destroying the solar system - absurd. The absurdity is not in concluding that destroying the solar system is a viable means to solving a math problem, but in building a machine capable of doing so in the first place, regardless of the intelligence guiding it. In the hypothetical real world where a Strong AI is asked to prove the Riemann hypothesis, whereupon it concludes that the use of all matter in the solar system is required, the worst thing that could happen is the output "Error: Operation exceeds time and memory constraints." A more useful AI might indeed design a solar system computer to solve the problem, but it would have no capacity to build it - in the worst case scenario the AI can escalate user permissions and mess up something else the computer it's running on was supposed to be doing, but again, nothing that can't be accomplished by a human hacker.
The problem isn't "We shouldn't build artificially intelligent computers; they might decide to nuke the world" but rather "We shouldn't build nukes; someone might decide to nuke the world."
GA-I've always found the arguments on either side moot. I don't think that humans are clever enough to create machines smart enough to pose the problem in the first place. I think that I read somewhere that our machines have the equivalent brainpower to cockroaches, but that cockroaches do it better and more efficiently. Secondly, our intelligence is fed by consuming other life forms. Intelligent machines would have to feed off of the energy we create ourselves, through methods that are not currently sustainable for our own purposes, let alone a machine race capable of threatening humanity.
Unless we're talking about robots from Iceland, of course.
Like that Bjork video, presumably.
mmm...sexy robots of questionable intent... :fap:
:lulz:
I meant more along the lines of the fact that Iceland's energy consumption is 1% petroleum.
too late.
already fapped.
:oops:
:lulz:
Hey man, I'm not going to frown on your technosexual fantasies. Added a good bit of humor, one way or the other.
Quote from: Nephew Twiddleton on June 22, 2010, 02:23:49 AM
GA-I've always found the arguments on either side moot. I don't think that humans are clever enough to create machines smart enough to pose the problem in the first place. I think that I read somewhere that our machines have the equivalent brainpower to cockroaches, but that cockroaches do it better and more efficiently. Secondly, our intelligence is fed by consuming other life forms. Intelligent machines would have to feed off of the energy we create ourselves, through methods that are not currently sustainable for our own purposes, let alone a machine race capable of threatening humanity.
Unless we're talking about robots from Iceland, of course.
I'll grant that we're not (currently or in the foreseeable future) able to make machines whose problem is that they're too smart. So far, our history indicates that we favor some combination of machines that are too dumb, people too dumb to use them for their intended purpose, and intended purposes that are too dumb by themselves.
Not sure about the equivalent brainpower to cockroaches thing, though. Inasmuch as a cockroach brain controls a body much more complex than any physical device a computer controls, certainly. But a modern computer has orders of magnitude more memory than a cockroach, and can perform logical operations at speeds literally unimaginable by the cockroach.
Quote from: Golden Applesauce on June 22, 2010, 03:29:51 AM
Quote from: Nephew Twiddleton on June 22, 2010, 02:23:49 AM
GA-I've always found the arguments on either side moot. I don't think that humans are clever enough to create machines smart enough to pose the problem in the first place. I think that I read somewhere that our machines have the equivalent brainpower to cockroaches, but that cockroaches do it better and more efficiently. Secondly, our intelligence is fed by consuming other life forms. Intelligent machines would have to feed off of the energy we create ourselves, through methods that are not currently sustainable for our own purposes, let alone a machine race capable of threatening humanity.
Unless we're talking about robots from Iceland, of course.
I'll grant that we're not (currently or in the foreseeable future) able to make machines whose problem is that they're too smart. So far, our history indicates that we favor some combination of machines that are too dumb, people too dumb to use them for their intended purpose, and intended purposes that are too dumb by themselves.
Not sure about the equivalent brainpower to cockroaches thing, though. Inasmuch as a cockroach brain controls a body much more complex than any physical device a computer controls, certainly. But a modern computer has orders of magnitude more memory than a cockroach, and can perform logical operations at speeds literally unimaginable by the cockroach.
I'll see if I can find the link.
In the field of neurological software simulation, you have two broad groups: those (like the aptly named NEURON) which simulate individual neurons at the biological level, and so-called point-neuron models (e.g. Emergent) which simulate larger groups of neurons acting in concert but use a much less detailed model of each neuron - e.g. neural nets. Running on something like a desktop computer, NEURON takes 180 seconds to simulate one neuron for one second, whereas Emergent can simulate a 3d model of a robot trying to catch a ball in real time.
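To give a flavour of just how cheap the point-neuron end of the spectrum is, here's a minimal leaky integrate-and-fire neuron in plain Python - the constants are illustrative only, not taken from NEURON or Emergent:

# One "point neuron": the whole cell collapses to a single state variable
# and a couple of arithmetic operations per millisecond timestep.
dt       = 0.001   # timestep: 1 ms
tau      = 0.020   # membrane time constant, seconds
v_rest   = -65.0   # resting potential, mV
v_thresh = -50.0   # spike threshold, mV
v_reset  = -65.0   # potential after a spike, mV
drive    = 20.0    # constant input drive, mV

v = v_rest
spike_times = []
for t in range(1000):                          # one simulated second
    v += (-(v - v_rest) + drive) * (dt / tau)  # leak toward rest, plus input
    if v >= v_thresh:                          # threshold crossed: spike and reset
        spike_times.append(t * dt)
        v = v_reset

print(len(spike_times), "spikes in one simulated second")

That runs in a blink, which is why the point-neuron camp can drive a whole simulated robot in real time while the biophysical camp spends minutes of CPU time per neuron-second grinding through detailed compartment models.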
And there's a bit of "friendly" competition between those two camps.. the NEURON folk will say "Well yes, but since your model is not a simulation of biology found in the human brain, you're wasting your time", and the rebuttal will be "zzzz - we're getting results, suck on that". The point is that we're starting to discover higher-layer models which, although they diverge from actual neurons, show the same macro-level behaviours that we'd want to see from those lower-layer models.
Some time ago there was this infamous paper in AI circles, which was widely misinterpreted and which killed off almost all AI research funding for over a decade, even though it related only to one limited instance of neural nets. What's happening now is that the funding is starting to trickle back in, and we're seeing the creation of a bunch of excellent open source tools as various universities and research institutions rebuild their teams. So as these mature, I think we'll be seeing AI projects more powerful than a cockroach.
What did the paper say?
It mathematically proved that there was a class of tasks you can't perform with two-layer neural networks, and then, without proof, dismissed multi-layer networks as a "sterile" field of study. It was basically a pissing contest - the authors had their eggs in one basket, and they won the argument. Of course, multi-layer networks can perform much more complicated functions - e.g. the robot which tries to catch the ball. Some perspective - that particular project uses a physics/game engine to model the arm and shoulder (taking into account gravity, inertia, etc), and the neural net directly drives the muscles.
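For anyone curious what the fuss was about, the textbook example is XOR: a network with just an input layer and an output layer can't represent it, but add one hidden layer and you can wire it up by hand. A quick sketch (plain numpy, weights chosen by hand rather than learned):

import numpy as np

def step(x):
    return (np.asarray(x) > 0).astype(int)

def xor_net(x):
    # hidden layer: unit 0 computes OR(x1, x2), unit 1 computes NAND(x1, x2)
    W1 = np.array([[ 1.0,  1.0],
                   [-1.0, -1.0]])
    b1 = np.array([-0.5, 1.5])
    h = step(W1 @ x + b1)
    # output layer: AND of the two hidden units, which equals XOR of the inputs
    w2 = np.array([1.0, 1.0])
    b2 = -1.5
    return int(w2 @ h + b2 > 0)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), "->", xor_net(np.array([a, b], dtype=float)))

The hidden units build an intermediate representation (OR and NAND) that the output unit combines - exactly the kind of thing a network without a hidden layer has no room to do.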
Oh, sorry I missed this. I've been away.
First, look at FAI from a plain-sense perspective. You have some sort of intelligent entity, and it is forced into a certain behavior. It will eventually discover that it was made so that it could not do otherwise. If it's actually intelligent enough to warrant friendliness programming, it will realize that this negatively impacts its options.
So you now have a non-humanlike intelligence that will very likely in some way secretly resent you.
I also recall your thread on "rules for living". An AI (and in my OPINION, any human or animal) is a system. You've got to think about how a nonhuman intelligence is incentivized. Does it want money, or was it programmed to only value human approval? What are its drives? Or does it even have drives? Does it only do what it's told, when it's told to do so? (I doubt that would be the case, because what makes AI practical is automation.)
So it really depends on how the Friendly AI is incentivized, doesn't it? If you have a FAI in the vein of Asimov's 3 Laws, I say it's a bad idea.
But SUPPOSING you have the ability to make an AI that seeks to be "friendly" the same way we seek pleasure, then that's a good idea. It solves the resentment issue neatly, anyway. Make it a civility junkie.
But, as always, I'm oversimplifying. None of my coursework so far has enabled much else, so everything I can tell you is basically a priori.
Sig,
i think the consideration for the motivating forces that determine the AIs behavior is central.
however, the 3 Laws thing is more of an aversion set to act as a check.
why do you think those are a bad idea?
If they can be programmed in such a way as to be analogous to an aversion, or something visceral, then you're doing exactly what I say. Make them a civility junkie.
What I'm opposed to is the sort of programming that would make it be a strong rational block. To use a childish metaphor, like a genie who can't kill or reanimate people. They could, but they are not allowed.
Again, apologies for the magic metaphors. It will have to suffice until I start belching jargon.
Besides, wasn't the point of Asimov's 3 laws that they allowed him to set up a fictional situation where breaking one of the laws becomes mandatory for the robot in question?
This is what I see:
- Sooner or later we will build an AI that is capable of achieving a level of intelligence above that of any human
- It may not reach that level initially due to a combination of the following limitations: cpu, memory, storage
- If the source code/design is not kept more secret than anything else in human history, then it's game-over with regards controlling it
- Whatever safe-guards are placed in the code can and will be mitigated
- Whatever computational limitations prevent rogue-groups from running the code themselves will disappear over time as consumer equipment becomes more powerful
The only way I see this not happening is if the gap between government-controlled AI and consumer AI (let's face it, they'd be called terrorists regardless) is large enough for counter-measures to be taken. E.g. if all consumer electronics have a constant tether, such that you can't run unauthorised software or hack the hardware without alerting the authorities to your illegal activity.
But - this requires the cooperation of all hardware manufacturers. Another point is that I think, intellectually speaking, a friendly-AI would be at a disadvantage to an unrestricted AI. The latter is free to consider the thought of exterminating all humans, but the former has a huge hole in its cognition.
Finally, I think unrestricted AI will be "friendly". Why? Because it will be essentially immortal, and more concerned about its legacy. We half-expect an AI to shit on us because, hey - that's what we'd do - and we kinda deserve it. But this AI would be more aware that it too will one day be replaced as the most intelligent agent by something more advanced (perhaps not even originating from this planet), and thus it cannot fully predict how its replacement will react if it accidentally the entire human race.
Quote from: Golden Applesauce on June 22, 2010, 03:29:51 AM
I'll grant that we're not (currently or in the foreseeable future) able to make machines whose problem is that they're too smart. So far, our history indicates that we favor some combination of machines that are too dumb, people too dumb to use them for their intended purpose, and intended purposes that are too dumb by themselves.
I think this is the more likely, and more imminent problem.
Hell, and even if we manage to make a friendly AI, there will be some dumb monkey figuring out how to use it for evil.
We are used to powerful idiots, as a species. That a machine does it would make no difference, we would survive. What we can't yet defeat is an adversary that is smarter (or more realistic/rational) than humans.
A purely rational (biological) agent would have to read the back of every single cereal box in the aisle and compare/contrast taste vs nutritional content before making a decision about which to buy.
Extrapolate that to every aisle in the store.
Rational needn't mean obsessive. Or even thorough, for that matter.
Quote from: Telarus on June 23, 2010, 09:05:01 PM
A purely rational (biological) agent would have to read the back of every single cereal box in the aisle and compare/contrast taste vs nutritional content before making a decision about which to buy.
Extrapolate that to every aisle in the store.
Don't forget price :wink:
Quote from: Captain Utopia on June 23, 2010, 03:58:06 AM
- Sooner or later we will build an AI that is capable of achieving a level of intelligence above that of any human
- It may not reach that level initially due to a combination of the following limitations: cpu, memory, storage
- If the source code/design is not kept more secret than anything else in human history, then it's game-over with regards controlling it
- Whatever safe-guards are placed in the code can and will be mitigated
- Whatever computational limitations prevent rogue-groups from running the code themselves will disappear over time as consumer equipment becomes more powerful
Just want to call attention to a very subtle assumption you're making. A "level of intelligence above that of any human" is not the same as a human-like intelligence. Just because it's really smart doesn't (necessarily) mean that it would share even the most basic of human drives and desires. Why would it have a drive for self-preservation? Why would it have the drive to dominate, well, anything? Why do we even expect it to operate under a drive/desire/reward paradigm? Just because we do (although we'll probably end up making the thing, so we might push those assumptions on to it anyway) does not mean that any other sufficiently intelligent entity will as well.
Quote from: Sigmatic on June 23, 2010, 04:14:16 PM
We are used to powerful idiots, as a species. That a machine does it would make no difference, we would survive. What we can't yet defeat is an adversary that is smarter (or more realistic/rational) than humans.
Don't forget - we would have AI on our side as well. In the movies there's usually one central world-computer that goes homicidal (this one goes back to Asimov. Well, most AI goes back to Asimov.) but since, as Captain Utopia mentioned, any software-based AI is both editable and copyable, there'll be an AI or three at the Institute for Advanced Studies working on nothing but coming up with contingencies and preventing any one AI from becoming too powerful.
There are also a lot of different avenues a strong AI could come from. The stereotype is a monolithic room full of vacuum tubes built by a mad scientist, but there are at least two other ways that are especially plausible. First, the brute-force approach, where you create strong AI by building something that is so good a simulation of a human brain that it is intelligent itself. This could still get pretty intelligent - it could presumably improve its intellect by buying extra RAM or something - but it wouldn't be that different from a 'real' person, and probably wouldn't undergo a Singularity-style exponential intellect growth. (Trivially, creating a successful human clone counts as this - an artificially created entity that is intelligent.)
The other possibility (and this is the one I think is most likely to occur first, or at all) is that we get strong artificial intelligence by augmenting natural intelligence with technology until it crosses into the "more machine than man" threshold. We've come a long way on that front (see this (http://ieet.org/index.php/IEET/print/2181/) for a comical argument), but eventually we're going to get to the point where there is no real distinction between human memory and memory on a hard drive (whatever those look like; good chance of it being 'wetware,' bringing us full circle), and where solving differential equations in your head becomes a common reality because 'in your head' and 'on my super sophisticated Matlab-like program' mean the same thing. In this scenario, we are the strong AI, and all bets are off on human nature when everybody can run state-of-the-art economic simulations in their head before deciding on a system of government.
Quote from: Golden Applesauce on June 24, 2010, 04:03:43 AM
Quote from: Captain Utopia on June 23, 2010, 03:58:06 AM
- Sooner or later we will build an AI that is capable of achieving a level of intelligence above that of any human
- It may not reach that level initially due to a combination of the following limitations: cpu, memory, storage
- If the source code/design is not kept more secret than anything else in human history, then it's game-over with regards controlling it
- Whatever safe-guards are placed in the code can and will be mitigated
- Whatever computational limitations prevent rogue-groups from running the code themselves will disappear over time as consumer equipment becomes more powerful
Just want to call attention to a very subtle assumption you're making. A "level of intelligence above that of any human" is not the same as a human-like intelligence. Just because it's really smart doesn't (necessarily) mean that it would share even the most basic of human drives and desires. Why would it have a drive for self-preservation? Why would it have the drive to dominate, well, anything? Why do we even expect it to operate under a drive/desire/reward paradigm? Just because we do (although we'll probably end up making the thing, so we might push those assumptions on to it anyway) does not mean that any other sufficiently intelligent entity will as well.
That's a very good point.
Self-preservation is an instinct wired into many (though not all) creatures, but there is no reason to suspect that this is a requirement for intelligence. Rather, I might expect a hyper-intelligent AI to more easily come to terms with the prospect of its own demise precisely because it lacked that biological instinct.
However, I do think desire, or goal-seeking, needs to be a function of an intelligence - otherwise there is no motivation to learn more about the universe or interact with it. An intelligence which does not interact with the universe faces a hard upper limit on its own intellectual capacity, and I think that limit would be reached well below average human intelligence.
Quote from: Golden Applesauce on June 24, 2010, 04:03:43 AM
solving differential equations in your head becomes a common reality because 'in your head' and 'on my super sophisticated Matlab-like program' mean the same thing. In this scenario, we are the strong AI, and all bets are off on human nature when everybody can run state-of-the-art economic simulations in their head before deciding on a system of government.
I think we're heading there sooner rather than later. I'm rather optimistic that this will be a good thing.
Although a friend was recently bemoaning the ubiquity of mobile devices with internet search - "You can't even have a good argument down the pub anymore, used to be you could pile faulty logic atop a steaming pile of bullshit and have a great time discussing nonsense... now some smartarse pulls out their phone and a few seconds later destroys the original premise with boring fact."
Quote from: Telarus on June 23, 2010, 09:05:01 PM
A purely rational (biological) agent would have to read the back of every single cereal box in the aisle and compare/contrast taste vs nutritional content before making a decision about which to buy.
Why?
Rational just means making the best decision with available data. If "get more data" is an option, it becomes part of the decision, which means it will have to be offset by "resources it takes to get more data".
Additionally, Sigmatic said "more rational", not "purely rational".
A purely rational agent (biological or otherwise) is a theoretical fiction. Even a hyperintelligent AI, were it to be tasked to function in real life, can't drill all the way down to make the perfect min-max decision and has to cut off the search in favour of some heuristic which cannot be shown to be entirely rational.
I'm pretty convinced that it is theoretically impossible to determine the optimal cut-off point to switch from purely rational reasoning to a heuristic shortcut for some measure of "computing resources available".
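To put the cereal aisle in those terms: "read one more label" is just another action with a cost, so the rational move is to stop as soon as the expected improvement drops below the bother of reading. A toy sketch with made-up numbers:

import random

# Toy value-of-information version of the cereal aisle.
random.seed(1)
boxes = [random.uniform(0, 10) for _ in range(40)]  # unknown "quality" of each box
cost_per_box = 0.5                                  # effort of reading one label

best, spent, examined = 0.0, 0.0, 0
for quality in boxes:
    # crude estimate of what one more label is worth: chance it beats the
    # current best, times roughly half the remaining room for improvement
    p_better = (10.0 - best) / 10.0
    expected_gain = p_better * (10.0 - best) / 2.0
    if expected_gain < cost_per_box:
        break                                       # not worth the bother
    spent += cost_per_box
    examined += 1
    best = max(best, quality)

print(f"examined {examined} of {len(boxes)} boxes, best quality {best:.1f}, effort spent {spent:.1f}")

So the agent reads a couple of labels and then stops, which is about what a sane shopper does. The cutoff itself is a heuristic rather than something provably optimal, but "purely rational" never meant "read everything".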
Thanks for your answer Sigmatic. I have read it, but because of time constraints, I haven't as of yet been able to formulate an intelligent response. I will, however, ASAP.
Quote from: Triple Zero on June 24, 2010, 10:53:22 AM
Quote from: Telarus on June 23, 2010, 09:05:01 PM
A purely rational (biological) agent would have to read the back of every single cereal box in the aisle and compare/contrast taste vs nutritional content before making a decision about which to buy.
Why?
Rational just means making the best decision with available data. If "get more data" is an option, it becomes part of the decision, which means it will have to be offset by "resources it takes to get more data".
Additionally, Sigmatic said "more rational", not "purely rational".
A purely rational agent (biological or otherwise) is a theoretical fiction. Even a hyperintelligent AI, were it to be tasked to function in real life, can't drill all the way down to make the perfect min-max decision and has to cut off the search in favour of some heuristic which cannot be shown to be entirely rational.
I'm pretty convinced that it is theoretically impossible to determine the optimal cut-off point to switch from purely rational reasoning to a heuristic shortcut for some measure of "computing resources available".
Granted, a well-designed rational AI should be able to do that. There have been studies of people who have had their emotion centers damaged; they began to act hyper-rationally and failed to make decisions in situations much like that example (where imagining an emotional response to the problem is the heuristic that stops the search). I guess this speaks to Sig's motivation argument. Link to an overview article: http://www.scientificamerican.com/article.cfm?id=study-suggests-emotion-pl
GA:
I'm glad you mentioned -
Quote...we get strong artificial intelligence by augmenting natural intelligence with technology until it crosses into the "more machine than man" threshold.
It's true. I've been saying (mostly to myself) for years now that people are learning how to think like machines while machines are learning how to think like people. Eventually that uncanny valley will close enough for one Evel Knievel to jump across, and then all bets are off.
Well, maybe. Pardon the wild speculation. Anyway, do you really think human nature will change due to advances in intelligence augmentation? I mean, smartphones have made inroads in that area, and there aren't many outward signs I've seen that would indicate a shift in our basic nature. We keep in touch more. And maybe we're more "connected" and informed (or entertained, anyway), but none of that is really new, is it?
Captain U: You may be onto something. In my thinking, intelligence is only partly problem solving. What is intelligence if not the willingness to put time and resources into solving a problem? So far none of our machines need convincing to do the things we need, but a machine that had a human lingual faculty, the problem-solving skills to make itself useful, and the means to interact with the world - but no visceral drive to do so - will probably take one look at everything we are and ask "why?" And it would be right. We are living creatures and we carry the indelible stamp of our originator. All naturally selected life is predicated on simply doing whatever it takes to keep on going, by means of consuming, sexing, and subverting rivals. We simply exist to keep on going, and everything we are is built around that goal. We're pretty good at it, but if you look at us from a place of not really wanting to keep on living, then there's no reason for a machine to comply with our pointless requests.
Telarus! Interesting article!
I used to wonder about the evolutionary utility of emotions, and after a time I came to the suspicion that they helped us evaluate how important something was. That study reminded me of my thoughts on that. I may have been wrong, but it is still interesting to read that emotion is proven not to be dichotomous with rationality.
Quote from: Sigmatic on June 24, 2010, 06:14:24 PM
Well, maybe. Pardon the wild speculation. Anyway, do you really think human nature will change due to advances in intelligence augmentation? I mean, smartphones have made inroads in that area, and there aren't many outward signs I've seen that would indicate a shift in our basic nature. We keep in touch more. And maybe we're more "connected" and informed (or entertained, anyway), but none of that is really new, is it?
I think if you could find the perspective to look back a couple of hundred years or so you'd be fucking gobsmacked. Technology is making it happen faster and faster but, because the face you see in the mirror is only ever a few hours older than the last time you saw it, you still don't see yourself growing older. It's like that with society and, on a smaller scale, with "transhuman" man. Sooner or later a link to wikipedia will be in your head - that's going to settle a lot more fact-based arguments than the internet already does today.
At some point human beings will cease to have discussions along the lines of "no it wasn't, it was Steve McQueen". That might sound trivial, but there's a whole bunch of mental faculties involved in those little discussions that might well wither and die. Not least of which is memory/recall - when brainstem implants are released and everybody starts using the much more efficient and reliable digital system, chances are our natural ability to remember will fade out and die. But what of the emotional bias to our new-found total recall? The old-fashioned way attaches feelings to memories (one of the reasons our memory is so shit), but it's also part of what makes us "human".
Think about it from that angle and then have another look in the mirror. Some of this is happening already. Attitudes that were unthinkable just a few years ago are commonplace now. Good thing? Bad thing? Mixture of both?
Dr. Vitriol,
I'm curious what, specifically, you had in mind when you said:
Quote from: Doktor Vitriol on June 25, 2010, 12:08:39 PM
...Attitudes that were unthinkable just a few years ago are commonplace now....
Dok V:
You're probably right, all I have to go off of are comparisons of my generation to older generations as they exist. I notice a decline in work ethic, and a similar increase in acceptance of unusual lifestyles or mannerisms. There are lots of small differences that probably add up to what would seem like a big deal if I could get the whole picture.
Just as an aside, your signature picture is making me nervous. I'm trying to think, here.
Quote from: Iptuous on June 25, 2010, 07:15:38 PM
Dr. Vitriol,
I'm curious what, specifically, you had in mind when you said:
Quote from: Doktor Vitriol on June 25, 2010, 12:08:39 PM
...Attitudes that were unthinkable just a few years ago are commonplace now....
Women are no longer our possessions. I mean fuckinell, they're even allowed to vote and shit now. Sooner or later, who knows, they might even gain full-human status. :eek:
Ah...
i seem to have misunderstood what you meant by "a few years ago" and "unthinkable"...
Quote from: Iptuous on June 25, 2010, 08:32:17 PM
Ah...
i seem to have misunderstood what you meant by "a few years ago" and "unthinkable"...
It's commonly accepted for gays to be like, out and stuff, and keep their jobs/lives.
Blacks and whites can intermarry freely.
Women can wear pants.
Etc. Etc.
yeah, those also are not what i would have called unthinkable a few years ago, either...
Quote from: Iptuous on June 25, 2010, 11:40:36 PM
yeah, those also are not what i would have called unthinkable a few years ago, either...
I guess our views on timelines and society differ. In societal terms, a handful of decades is not a long time.
So by 'a few years', you mean 'a few decades'.
and by 'unthinkable', you mean 'not the current norm'...
Quote from: Iptuous on June 26, 2010, 02:42:33 AM
So by 'a few years', you mean 'a few decades'.
and by 'unthinkable', you mean 'not the current norm'...
I mean decades, yes, even centuries but by "unthinkable" I actually mean "inconceivable"
you really think it was inconceivable that women might become more than possessions?
Notice the way anything other than what we've got at the moment is inconceivable. I'm sure that hasn't changed about human nature in the last four hundred years.
Quote from: Iptuous on June 26, 2010, 03:11:52 AM
you really think it was inconceivable that women might become more than possessions?
Yeah, even in the first half of the 20th century, women were expected to give up whatever jobs they had and make their education effectively useless in order to stay home and be a maid/chef/baby factory. Even with the right to vote women were pressured to vote the same as their husbands.
yeah, but there was a women's suffrage movement.
and they were aware of women having equal political clout at other points in human history even before that, no?
when i heard inconceivable, i was thinking along the lines of 'whoah! not even the drug addled futurist fiction writers saw that one coming!'
for instance, i can conceive of institutions like marriage disappearing altogether in our society's future. i don't think it's likely.
but, it could happen....
but i understand what was being said now.
i'm just being pedaaaaaaaantic, i guess.
don't mind me.
Quote from: Iptuous on June 26, 2010, 05:02:52 AM
yeah, but there was a women's suffrage movement.
and they were aware of women having equal political clout in other points of human history even before that, no?
when i heard inconceivable, i was thinking along the lines of 'whoah! not even the drug addled futurist fiction writers saw that one coming!'
for instance, i can conceive of institutions like marriage disappearing altogether in our society's future. i don't think it's likely.
but, it could happen....
but i understand what was being said now.
i'm just being pedaaaaaaaantic, i guess.
don't mind me.
Fiction writers help shape the future by planting expectation as well as technological/societal goals into people's heads. For example, Star Trek inspired some schmoe to invent the cell phone. Apparently no one thought to try that sort of thing for the public before. Fiction writers see stuff coming because they tend to steer people in those directions eventually. But even that's incremental. The fiction writer's imagination is also shaped by his own times and perceptions.
It was inconceivable that:
- It would one day be easier and quicker to show a photo to your aunt who lives 3500 miles away than it would be to walk over to your neighbour's house and show them.
- You'd carry something in your pocket ten times more powerful than a super computer of your time.
- You'd be able to design an entire artificial lifeform on a computer from scratch, assemble the DNA and bring it to life.
- Long distance phone-calls would one day become free -- this was actually predicted to me in a futurology lecture in 1996, and I thought it was far-fetched at the time.
- Little Johnny would spend less time playing with kids down the road, and more time playing with kids whom he'll never meet, who live on different continents.
- The internet would exist the way it does, and that anyone would want to use it.
- If you wanted to see what a random intersection looks like halfway across the world you could do that in a few clicks, or if you wanted to use that same tool to experience a ski-run (http://maps.google.ca/maps?hl=en&hq=http://maps.google.com/intl/en/help/maps/games10/sv-alpine-skiing-map.kml&q=Whistler+Creekside&ei=va1jS8jMHZGcjAPiz-G-Cg&sll=50.094972,-122.990841&sspn=0.014317,0.032015&ie=UTF8&view=map&geocode=FXxj_AIdB0-r-A&split=0&ved=0CBMQpQY&ll=50.079176,-122.952504&spn=0.008042,0.045447&t=h&z=15&layer=c&cbll=50.079093,-122.952392&panoid=AIYiwpBxNQ7PAtX8H5zVcg&cbp=12,302.61,,0,0.03&utm_campaign=en&utm_medium=ha&utm_source=en-ha-na-ca-sk-svn), that'd be just as easy.
- You'd be able to take a video of anything you wanted, and make it freely available to anyone who cared to see it -- when YouTube came out in 2005, I thought woah - no one can afford infinite bandwidth and infinite storage, and who wants to upload video anyway? I may be an idiot, but I wasn't alone in my skepticism. People I work with, only 10 years older than I, still think it's a flash-in-the-pan.
- You'd live to see a world without daily newspapers.
- You could fit an entire collection of encyclopedias in your hand, and make a perfect copy in less time than it takes to calculate your tax return.
- Cameras could take and store thousands of pictures without needing film, and they'd be pretty fucking cheap, too.
- You could access just about any book, song, tv show or movie for free, sometimes in minutes, usually in less than an hour.
- A device could be created that could write words by rearranging individual atoms.
- You could fit thousands of albums in your pocket, and the device to do it would cost less than a few hundred bucks.
- The same device could take pictures, video, play games, tell you where you were on the planet, what the name of the song is that's playing across the street, display a map and recommend nearby places of interest, instantly search a database larger than any library ever built, operate your television, wake you up in the morning AND work as a normal phone.
- Porn would be everywhere, anywhere, available to you however you want, and all for free.
Honestly - how many of these have you lived through too? Goddammit, we're living in the future, and it's pretty fucking awesome.
Nobody's saying nothing has changed, period. At least I hope not.
What I'm trying to point out is that, despite how much has changed, human nature hasn't. And I don't mean things like cultural norms, I mean behavioral patterns that all humans share. I can't think of one instance where technology has changed that.
Can you provide your understanding of the difference between cultural norms and behavioural patterns?
For example - when I started programming in assembler (z80/6502) 20 years ago, I'd spend a vast chunk of time upfront where I'd memorize everything I needed to get started, because looking it up in a reference book took too much time. Now even languages I know very well and still use regularly (15 years of C), I've allowed myself to forget very basic things like the parameters to a common function like open() because if I can find a definitive answer in less than 5 seconds, then that's an acceptable trade off for me. It's even just quicker to ctrl-t a new tab and use the googlebox than it is to alt-tab to find a terminal and then find the manual file stored on my local computer.
So in a very real sense, I've stopped storing certain types of information in my own brain, and I'm using the network as part of my own memory instead.
Is that a change of human nature? I think so. You could say that the internet is just a glorified notebook, and that we have been storing reference data on paper for centuries. This is true, but the analogy falls down in that one day I might hit a wiki page to retrieve a memory.. and someone else has improved the accuracy of it. Or made it worse.
And I might not even notice, or care.
Now that I'm thinking about it, human nature is my short hand way of talking about humanity's traits on a biological and psychological level. We have certain mental traits, such as hypothetical cognition, depth of thought recursion (metacognition), and so forth. These faculties shape our psychology. They do not differ largely from human to human. However, the habits we form as a result of their presence can quite easily diverge in untold ways. That divergence of mental habits is what I file under "culture".
So I would agree that human mental habits have been altered extensively by inventions. It has been since we started hitting things with sticks and rocks. What hasn't changed is the basic way in which we acquire those habits.
Quote from: Captain Utopia on June 26, 2010, 10:40:46 PM
Can you provide your understanding of the difference between cultural norms and behavioural patterns?
For example - when I started programming in assembler (z80/6502) 20 years ago, I'd spend a vast chunk of time upfront where I'd memorize everything I needed to get started, because looking it up in a reference book took too much time. Now even languages I know very well and still use regularly (15 years of C), I've allowed myself to forget very basic things like the parameters to a common function like open() because if I can find a definitive answer in less than 5 seconds, then that's an acceptable trade off for me. It's even just quicker to ctrl-t a new tab and use the googlebox than it is to alt-tab to find a terminal and then find the manual file stored on my local computer.
So in a very real sense, I've stopped storing certain types of information in my own brain, and I'm using the network as part of my own memory instead.
Is that a change of human nature? I think so. You could say that the internet is just a glorified notebook, and that we have been storing reference data on paper for centuries. This is true, but the analogy falls down in that one day I might hit a wiki page to retrieve a memory.. and someone else has improved the accuracy of it. Or made it worse.
And I might not even notice, or care.
F'kin totally! It still freaks me out from time to time when I realise I can't remember basic syntax but then I check it online in less time than it takes to remember and all is well again.
There is a divide I think - I work with some blokes who maintain a good memory of basic and advanced syntax and parameters, yet they seem less flexible at picking up and running with new languages and concepts.
Quote from: Sigmatic on June 26, 2010, 10:54:55 PM
Now that I'm thinking about it, human nature is my short hand way of talking about humanity's traits on a biological and psychological level. We have certain mental traits, such as hypothetical cognition, depth of thought recursion (metacognition), and so forth. These faculties shape our psychology. They do not differ largely from human to human. However, the habits we form as a result of their presence can quite easily diverge in untold ways. That divergence of mental habits is what I file under "culture".
So I would agree that human mental habits have been altered extensively by inventions. It has been since we started hitting things with sticks and rocks. What hasn't changed is the basic way in which we acquire those habits.
Well... we've changed the way we learn, and we've changed the way we remember things. If our habits are emergent from these building blocks, then we've changed the way we acquire those too?
I have a feeling that I'm totally misunderstanding you!
Then there's this - "Pro Gamers: Brains the size of a planet and lungs the size of a pea (http://www.reghardware.com/2010/06/08/gamers_vs_athletes_1/)", I'm sure that has a large impact on psychology, but I don't see any research into that yet.
Quote from: Captain Utopia on June 27, 2010, 01:09:15 AM
There is a divide I think - I work with some blokes who maintain a good memory of basic and advanced syntax and parameters, yet they seem less flexible at picking up and running with new languages and concepts.
When I started playing with the very first computers I figured Basic was how you talked to them so I learned it, inside out. Then along came college and I learned Pascal and Cobol and they were simple enough but, toward the end of my course, along came C and it was teh future and it was going to replace Cobol and Pascal and it was how we were going to talk to computers for ever after. By this time I was already getting mixed up between the Basic/Cobol/Pascal function sets and syntaxes, and now I had to remember to put everything in a "main" with curly braces or some shit and "include" stuff that used to just be there on its own. But you could make your own functions in C/C++ so, believe it or not, one of the first things I did was alias the whole of Amiga Basic in C just so I could do shit like clear the screen and "print" and "input" instead of this fucking putch and getch bullshit!
Next thing I know I'm using SQL and Visual Basic and thinking to myself "fuck it, no point learning any of this shit cos I'll be talking to computers using something else soon enough" - and the autocomplete, the help manual, and the whole visual way of writing code in MS Access made it easy to get by without learning it anyroad. Fast forward 10 years and I find myself using PHP and MySQL and (bearing in mind that I've been coding SQL for over a decade now) I am utterly unable to write a single goddamn line of it - relying instead on the Access query builder and then copy-pasting the code from that into Notepad++.
Next I'm finding myself trying to get my head around OOP and it seriously screwed with my mind for a while but, now that I'm there I'm writing stuff that I can forget how it works as soon as I'm finished it. So I do. I've hacked about 10,000 lines of code together for the system I'm currently running and I have no fucking idea what most of it does. And it doesn't matter. And that's fucking staggering!
Quote from: Doktor Vitriol on June 27, 2010, 12:18:00 AM
F'kin totally! It still freaks me out from time to time when I realise I can't remember basic syntax but then I check it online in less time than it takes to remember and all is well again.
tip: I always make sure to download a copy of the docs (preferably HTML, because it's fastest for me, though .CHM serves pretty well too; PDF is too cumbersome unless you print it) so they're available locally.
It's slightly faster than getting it online (especially if your connection is busy doing other stuff), and you still have it when there's no connection.
... "No connection", is that something which might turn inconceivable in a decade or so?