I've come to some sort of conclusiony idea about all this

Started by Jasper, February 11, 2008, 07:14:59 PM


Mesozoic Mister Nigel

It's hard for me to participate meaningfully in a discussion when the premise of the argument shifts every time the OP posts.

Felix, every time someone gives you a meaningful answer, you try to discredit it by changing the terms of your argument... which isn't really productive. Why not instead acknowledge their argument, THEN present either the ways in which you disagree in the context of your earlier argument, THEN present new related thoughts?

It's just frustrating to discuss anything rationally when you keep introducing different, tangentially-related arguments as answers to other people's perspectives. For instance, the massive and illogical shift from "technology affects every aspect of life" to "In the future, information will be intelligent and we will have self-replicating machines". WHUT?
"I'm guessing it was January 2007, a meeting in Bethesda, we got a bag of bees and just started smashing them on the desk," Charles Wick said. "It was very complicated."


Jasper

Thread drift, Nigel.  I think the problem is that I'm just discussing and you're trying to debate.

I really do love talking about all this stuff, but I can't really restrict myself to just one point that easily.

Mesozoic Mister Nigel

I'm not talking about thread drift though, I'm talking about replies that are phrased as rebuttals, but make no logical sense as rebuttals.


Jasper

I'm having a hard time picking up where you left off, actually.  It's difficult to keep track of several rebuttals at once in a thread, and I'm amazed Roger ever had the attention span for it.  I'll try harder.  Could you indicate what I'm to rebut, please?

Mesozoic Mister Nigel

Oh, it's not that you are supposed to rebut anything; it's that what often appear to be rebuttals are apparently not rebuttals at all, but introductions of new thoughts. Because they are phrased as direct replies to other people's posts (and often start out as rebuttals but then go in some other direction), it becomes difficult to follow any kind of cohesive discussion.


Jasper

Yeah, I can ramble and stuff.  I try not to, but it's hard.  My mind really does work like a grape stomp, you know?  I just fill it with stuff, work it all together, store the stuff for a while, then something delicious comes out.

Mesozoic Mister Nigel

I don't mind rambling discussions at all; I'm just suggesting separating the rebuttals from the rambling, so that your replies don't come across as "you're wrong and the reason is <totally unrelated argument>", and so that other people (well, in this case just me) don't mistake it for a really illogical debate, try to defend their position, and then give up and wander off. I don't actually like debate much at all, but if I think someone is telling me I'm wrong I'll usually feel obligated to try to defend my position.

It's just a suggestion though, from ADD girl.


Richter

I'm not sure I follow either POV here.  I've been working under a few assumptions with this sub-forum that seem functional so far.

1.  This is brainstorming.  Not every idea will be finished.
2.  Arguments will not be lost nor won.  Good luck even getting them acknowledged.
3.  If an idea you've thrown out is left unaddressed, there is nothing the hive mind has to add.

Your results may vary.
Quote from: Eater of Clowns on May 22, 2015, 03:00:53 AM
Anyone ever think about how Richter inhabits the same reality as you and just scream and scream and scream, but in a good way?   :lulz:

Friendly Neighborhood Mentat

Bebek Sincap Ratatosk

Quote from: Richter on February 15, 2008, 08:44:46 PM
I'm not sure I follow either POV here.  I've been working under a few assumptions with this sub-forum that seem functional so far.

1.  This is brainstorming.  Not every idea will be finished.
2.  Arguments will not be lost nor won.  Good luck even getting them acknowledged.
3.  If an idea you've thrown out is left unaddressed, there is nothing the hive mind has to add.

Your results may vary.

And here I was thinking that those points applied to the whole of the Internets ;-)

LAWL!!
- I don't see race. I just see cars going around in a circle.

"Back in my day, crazy meant something. Now everyone is crazy" - Charlie Manson

Triple Zero

Quote from: Dr. Felix Mackay on February 14, 2008, 01:59:43 AM
So you're saying tools are just labor saving devices like always, more or less?

no.

i was just saying that we are unable to predict the cultural impact of new technology and tools.

you can dream and fantasize all you want, but just take a look at my examples (and i can continue adding to that list a lot longer than you can come up with "counterexamples"), and see how it's not useful at all.

it is useful, maybe, to make new technology, but playing the "what if" game is fruitless.

for example, this Artificial Intelligence that you are so certain will appear Any Day Now, i can guarantee you two things:

1) it will take a completely different form than you are now expecting -- also because i strongly get the idea that you have no idea what the field of AI has accomplished so far, is researching right now, and which way it is trying to head. let me tell you this, they are not looking for a "talking computer".

2) whatever form this AI will take, we are in NO position to make any predictions about its cultural and societal impact when it arrives. which is why i think musings like these are a waste of time. instead i'd be better off improving my Machine Learning algorithms (next week .. hopefully)
Ex-Soviet Bloc Sexual Attack Swede of Tomorrow™
e-prime disclaimer: let it seem fairly unclear I understand the apparent subjectivity of the above statements. maybe.

INFORMATION SO POWERFUL, YOU ACTUALLY NEED LESS.

Richter

Quote from: Ratatosk on February 15, 2008, 08:59:49 PM
Quote from: Richter on February 15, 2008, 08:44:46 PM
I'm not sure I follow either POV here.  I've been working under a few assumptions with this sub-forum that seem functional so far.

1.  This is brainstorming.  Not every idea will be finished.
2.  Arguments will not be lost nor won.  Good luck even getting them acknowledged.
3.  If an idea you've thrown out is left unaddressed, there is nothing the hive mind has to add.

Your results may vary.

And here I was thinking that those points applied to the whole of the Internets ;-)

LAWL!!

Some parts of the internet think they are serious business, it seems. :lol:
Still, you're riding the correct motorcycle there, Rat.

Bebek Sincap Ratatosk

Quote from: triple zero on February 15, 2008, 09:06:21 PM
Quote from: Dr. Felix Mackay on February 14, 2008, 01:59:43 AM
So you're saying tools are just labor saving devices like always, more or less?

no.

i was just saying that we are unable to predict the cultural impact of new technology and tools.

you can dream and fantasize all you want, but just take a look at my examples (and i can continue adding to that list a lot longer than you can come up with "counterexamples"), and see how it's not useful at all.

it is useful, maybe, to make new technology, but playing the "what if" game is fruitless.

for example, this Artificial Intelligence that you are so certain will appear Any Day Now, i can guarantee you two things:

1) it will take a completely different form than you are now expecting -- also because i strongly get the idea that you have no idea what the field of AI has accomplished so far, is researching right now, and which way it is trying to head. let me tell you this, they are not looking for a "talking computer".

2) whatever form this AI will take, we are in NO position to make any predictions about its cultural and societal impact when it arrives. which is why i think musings like these are a waste of time. instead i'd be better off improving my Machine Learning algorithms (next week .. hopefully)


Zoooom goes TripZip and his snazzy motorcycle!!

On top of all that, the form that AI will take (if it ever arrives) will be heavily influenced by our society. Consider how much work is going into making new robots "look" human. Good AI could fit in a box and do what it needed to do. However, such technology wouldn't be accepted by society (so AI researchers have found), thus they are constantly trying to make AI fit in a shell that appears more human, even if it means they must constrain what they can do with AI.

It's sort of like Wearable Computers. When I built my first wearable, it involved an old camcorder viewfinder, a belt with battery packs, a PC/104 system, and a small clicker keyboard. The potential was impressive (google Steve Mann for details)... but wearables didn't catch on. Instead, computers turned into little handheld devices that were attached to the hip.

Society didn't accept cyborg looking systems.

Triple Zero

but AI aren't robots. robots are just machines, mechanical. you don't go developing complex software like AI straight into a mechanical device. something that is able to reason, self-aware, etc is imo much more likely to first appear on a computer, without any mechanical limbs attached.

and if you want me to make a guess, but don't hold me to it :) i think it's likely that any self aware AI would sooner come into existence by accident, than by design. just somebody hooking the right systems together and crosses the threshold, is what i expect, if it's gonna happen.

Mesozoic Mister Nigel

Quote from: Richter on February 15, 2008, 08:44:46 PM
I'm not sure I follow either POV here.  I've been working under a few assumptions with this sub-forum that seem functional so far.

1.  This is brainstorming.  Not every idea will be finished.
2.  Arguments will not be lost nor won.  Good luck even getting them acknowledged.
3.  If an idea you've thrown out is left unaddressed, there is nothing the hive mind has to add.

Your results may vary.

The only thing I'm getting at is that it's hard for me to respond meaningfully to certain types of presentation, and one of them is "No, you're wrong because (insert unrelated brainstorming)." The first part makes it hard for me to respond to the second part. This is just something I'm throwing out there, in the interest of communication and blah blah etc.


Jasper

Quote from: Nigel on February 16, 2008, 04:45:37 AM
Quote from: Richter on February 15, 2008, 08:44:46 PM
I'm not sure I follow either POV here.  I've been working under a few assumptions with this sub-forum that seem functional so far.

1.  This is brainstorming.  Not every idea will be finished.
2.  Arguments will not be lost nor won.  Good luck even getting them acknowledged.
3.  If an idea you've thrown out is left unaddressed, there is nothing the hive mind has to add.

Your results may vary.

The only thing I'm getting at is that it's hard for me to respond meaningfully to certain types of presentation, and one of them is "No, you're wrong because (insert unrelated brainstorming)." The first part makes it hard for me to respond to the second part. This is just something I'm throwing out there, in the interest of communication and blah blah etc.

And I sort of find it hard to respond to that.  Instead of explaining what you meant you're just repeating yourself.  Try again.

Quote from: triple zero on February 15, 2008, 09:06:21 PM
Quote from: Dr. Felix Mackay on February 14, 2008, 01:59:43 AM
So you're saying tools are just labor saving devices like always, more or less?

no.

i was just saying that we are unable to predict the cultural impact of new technology and tools.

you can dream and fantasize all you want, but just take a look at my examples (and i can continue adding to that list a lot longer than you can come up with "counterexamples"), and see how it's not useful at all.

it is useful, maybe, to make new technology, but playing the "what if" game is fruitless.

for example, this Artificial Intelligence that you are so certain will appear Any Day Now, i can guarantee you two things:

1) it will take a completely different form than you are now expecting -- also because i strongly get the idea that you have no idea what the field of AI has accomplished so far, is researching right now, and which way it is trying to head. let me tell you this, they are not looking for a "talking computer".

2) whatever form this AI will take, we are in NO position to make any predictions about its cultural and societal impact when it arrives. which is why i think musings like these are a waste of time. instead i'd be better off improving my Machine Learning algorithms (next week .. hopefully)

1) The Popsci Predictions Exchange uses a stock exchange format for betting on predictions, and is highly successful when compared to any unilateral prediction by any author.  Google it.

2) The "what if" game is not relevant.  I may mention works in progress but we're talking about the current direction of technology's role in society.

3) You're really rude to just assume I have no idea what I'm talking about when you say I don't know what the field of AI has accomplished.  I don't know half of it, but I do make an effort, thanks.  You're just making appeals to authority because you're known here as a technology specialist.

4) If you think this thread is a waste of time, why are you still here?
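(On point 1: the "stock exchange format" for aggregating predictions can be sketched with a logarithmic market scoring rule, a standard mechanism for prediction markets. The PPX's actual internals aren't described in this thread, so the functions and parameters below are purely illustrative.)

```python
import math

def lmsr_price(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Instantaneous price of the YES outcome, which doubles as the
    market's current consensus probability that the prediction comes true."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def lmsr_cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Market maker's cost function; the cost of a trade is
    lmsr_cost(after) - lmsr_cost(before)."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

# An empty market starts at 50/50; bettors buying YES shares pay the
# market maker and push the consensus probability up.
p_before = lmsr_price(0, 0)                      # 0.5
buy_cost = lmsr_cost(50, 0) - lmsr_cost(0, 0)    # cost of 50 YES shares
p_after = lmsr_price(50, 0)                      # now above 0.5
```

The point of the exchange format is exactly this aggregation: each bet moves the price, so the price at any moment reflects the pooled judgment of everyone with money on the line, rather than one author's unilateral guess.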

Quote from: triple zero on February 15, 2008, 11:07:04 PM
but AI aren't robots. robots are just machines, mechanical. you don't go developing complex software like AI straight into a mechanical device. something that is able to reason, self-aware, etc is imo much more likely to first appear on a computer, without any mechanical limbs attached.

and if you want me to make a guess, but don't hold me to it :) i think it's likely that any self aware AI would sooner come into existence by accident, than by design. just somebody hooking the right systems together and crosses the threshold, is what i expect, if it's gonna happen.

Hooking the right systems together and crosses the threshold?  I'd be generous to assume something is lost in translation here, but you're fluent enough for me to think you just said AI will be an accident.  And you call me unscientific?  What's this threshold?  There's no threshold for consciousness, it's an analog phenomenon. 

My current theory of AI is that to be truly conscious you must have an array of physical sensors that feed constant data into a behaviour algorithm.  The model for intelligence simply does not need consciousness to be intelligent, and all intelligence is intrinsic to behaviour and how it reacts to new experiences and internal stimuli like abstract calculations such as emotions.  I could go on, because this has a high mindshare with me.
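(The sensors-feeding-a-behaviour-algorithm model described above can be sketched as a minimal sense-act loop. The class, sensor names, and the reflex rules are invented for illustration; this isn't any particular robotics framework.)

```python
from typing import Callable, Dict

class BehaviourAgent:
    """Toy agent: sensors stream readings into a behaviour algorithm
    that picks an action; a decaying internal state stands in for the
    'abstract calculations such as emotions' mentioned above."""

    def __init__(self, sensors: Dict[str, Callable[[], float]]):
        self.sensors = sensors   # sensor name -> callable returning a reading
        self.arousal = 0.0       # crude internal/emotional state

    def sense(self) -> Dict[str, float]:
        """Poll every physical sensor for its current reading."""
        return {name: read() for name, read in self.sensors.items()}

    def act(self) -> str:
        readings = self.sense()
        # Internal stimulus: arousal is an exponentially-decayed trace of
        # the strongest recent sensor reading.
        self.arousal = 0.9 * self.arousal + 0.1 * max(readings.values())
        # Behaviour algorithm: a simple reflex rule over sensors + state.
        if readings.get("proximity", 0.0) > 0.8:
            return "retreat"
        return "explore" if self.arousal < 0.5 else "freeze"

agent = BehaviourAgent({"proximity": lambda: 0.9, "light": lambda: 0.2})
```

Calling `agent.act()` in a loop is the whole architecture in miniature: behaviour emerges from the interplay of fresh sensor data and accumulated internal state, with no separate "consciousness module" anywhere.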

To address your main point, AI isn't robots right now, but it will inevitably be so in my opinion, because intelligence without a physical presence is incapable of doing real work, which is what we want robots for in the first place.  And a physically enabled intelligence of any type must have sensors to understand the world it's interfacing with, so it must de facto have senses.  And there you have it; my prediction is that robotic AI will occur because there is money in it (real work to be done), and it will be governed by behavioural intelligence, which must require sensors and will THUSLY be comparatively conscious in the same way humans are.

And strong AI will perforce be MADE on a computer, because broccoli and cheese doesn't process binary code very fast.  So yes, it WILL occur first on a computer lacking robotics.  However, I am highly suspicious of any robotic AI that was programmed on a computer without some kind of physical interface to train on.  I think any kind of useful strong AI with a robotic form will have to be given the form, then trained with algorithmic learning processes to attain a level of control that makes it useful.  The future of AI is in robotics, because if it remains purely digital it has far fewer ways of being a profit to its creators.  You're just not following the money if you can't see that.