Messages - The Fundamentalist

#1
Wikipedia says that RAND still exists. </uninformed>

I did hear something about an "electronic Pearl Harbor".  Chinese hackers stole a large amount of data from the DoD or something.  They left some infected flash drives lying around.

Or maybe it was that they installed a rootkit and were watching the system's activities for over a week.  I don't remember.
#2
Quote from: Requia ☣ on January 19, 2010, 06:10:33 AM
Quote from: The Fundamentalist on January 19, 2010, 03:41:36 AM
Quote from: Requia ☣ on October 24, 2009, 09:14:39 PM
Quote from: rong on October 24, 2009, 04:53:27 AM
be sure to drink your ovaltine

:?

Ovaltine key rings.  Haven't you ever seen A Christmas Story?

I should probably work out how to do this encryption stuff.  I know the theory, but I haven't actually tried it.

Is that the one with the BB Gun?

In which case, like 5 minutes of it (every year a different 5 minutes).

Yep.  Pretty sure that that's the correct way to watch it.
#3
Quote from: Requia ☣ on October 24, 2009, 09:14:39 PM
Quote from: rong on October 24, 2009, 04:53:27 AM
be sure to drink your ovaltine

:?

Ovaltine key rings.  Haven't you ever seen A Christmas Story?

I should probably work out how to do this encryption stuff.  I know the theory, but I haven't actually tried it.
#4
Quote from: BabylonHoruv on January 18, 2010, 12:54:04 AM
A fair amount of what is in the Principia is visual art so obviously we'd need to leave that out.

Perhaps this is a bit ambitious, but could auditory art (music) be substituted?
#6
Literate Chaotic / Re: FFFFUUUUUUUUUU
January 15, 2010, 04:51:58 AM
Quote from: NotPublished on January 15, 2010, 02:34:34 AM
Quote from: The Fundamentalist on January 15, 2010, 02:29:53 AM
I'm not sure I understand what you mean.

We already have learning machines.  I've even programmed one.  Artificial neural networks aren't too hard.
Sorry I wasn't very clear,

If a machine has the potential to learn things to alter its own behavioural pattern - I am not comfortable with that idea. It's like opening up MS Word and it tells you to Go Fuck Yourself because it doesn't want to be a slave to you anymore (That would mean that it also feels emotion).

But if a machine can learn patterns to assist with its main intent, then of course that will be beneficial. It's like having predictive text on when SMSing. I can't use the shit but my sister does and she writes too fast.

There wouldn't be very much point in making MS Word or something similarly simple conscious.  It would be a reversal of all of the mechanization we've been doing throughout history.
#7
Hearsay: Robertson thinks that Haitians used Voodoo to overcome their French slavemasters in 1791.  Voodoo is of course Satanic as he sees it.  Anybody have backup for that?

(If I remember my history classes, that would mean that Robertson's existence is indirectly reliant on Satanism, because Haitian revolution -> Napoleon gives up dreams of empire in the Americas -> Napoleon sells Louisiana -> Robertson's ancestors not slaughtered in French-American War)
#8
Literate Chaotic / Re: FFFFUUUUUUUUUU
January 15, 2010, 02:29:53 AM
Quote from: NotPublished on January 15, 2010, 02:26:12 AM
Quote
I think that it's been pretty well established that consciousness precludes logical thinking.  Just because you're using a logical machine (the computer) to run it, that doesn't mean the thing itself is logical.
Of course machines are inherently made dumb; it's all the instructions we feed them. It might look logical to us, but it was just made using another person's logic.

So for a machine to be self-conscious it would have to be able to learn ... I don't agree with a learning machine like that.

*eta* I missed a word oops

I'm not sure I understand what you mean.

We already have learning machines.  I've even programmed one.  Artificial neural networks aren't too hard.
Quote from: Felix on January 15, 2010, 02:28:16 AM
I mentioned that I hate Descartes, right?

If you are willing to concede that there is NOT some omnipotent demon creating a false world for you, and you are sure you are conscious, then it follows that the other beings of your species, which act as smart or smarter than you and have roughly the same genetic makeup as you, are VERY likely to be conscious.  Beyond reasonable doubt.  That said, how do you prove beyond reasonable doubt that a non-human, let alone a non-animal, is conscious?

They act like humans do?
#9
Literate Chaotic / Re: FFFFUUUUUUUUUU
January 15, 2010, 02:24:57 AM
I agree with Babylon.

Another thing that may be of interest regarding consciousness: I'll dig up the story, but apparently neuroscientists found that people make their decisions before they're aware of them.  That is to say, our theory of mind about ourselves is slower than the brain, and so what you think of as your self is not yourself.

I guess that means they proved that the subconscious exists?
#10
Literate Chaotic / Re: FFFFUUUUUUUUUU
January 15, 2010, 02:20:41 AM
Quote from: NotPublished on January 15, 2010, 02:16:41 AM
Quote from: The Fundamentalist on January 15, 2010, 02:12:04 AM
If stress and so on were unneeded, then we probably wouldn't have them.  They'd be selected against in evolution.

Although, that might not be relevant in the modern world, I don't know.

Out of my interest, I have a question... what are possible fields that thinking machines can be used for, aside from mad science?

Assistance in research and development; machines would have the ability to work logically and can produce some results faster.
Could make for a good PA
Exploration
Repetitive tasks!

Or for every human baby a machine is made, and that machine does all the boring stuff :D (OK, bad idea I know, but it's fun to imagine)

Quote
Machines would have the ability to work logically

I think that it's been pretty well established that consciousness precludes logical thinking.  Just because you're using a logical machine (the computer) to run it, that doesn't mean the thing itself is logical.

Quote
Repetitive tasks!

Then why would you be making it conscious?
#11
Literate Chaotic / Re: FFFFUUUUUUUUUU
January 15, 2010, 02:17:19 AM
Quote from: BabylonHoruv on January 15, 2010, 02:15:43 AM
Quote from: The Fundamentalist on January 15, 2010, 02:08:19 AM

So we have to make robots that can lie.

Yes, I think that AI research is the place to be for mad science.

We already have that.

Altruism too

http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie

Oh duh.  I already read about Avida disguising its intelligence to avoid artificial selection by examiners, how did I forget that?
#12
Literate Chaotic / Re: FFFFUUUUUUUUUU
January 15, 2010, 02:15:54 AM
Quote from: Iptuous on January 15, 2010, 02:12:27 AM
i was wondering if you thought it was possible to make a machine that was not conscious, but believed that it was.
I'm sure that's been a crappy sci-fi story many times, but...



#include <iostream>

int main()
{
   std::cout << "I am a conscious program!\n";
   return 0;
}


Sorta like that?

Quote from: Felix on January 15, 2010, 02:13:05 AM
The point of emotion experiencing robots would be that killing all the humans would be a horrifying thought. Just like any sane person thinks genocide is horrible. It may be appealing at times, but the act of doing so would cause too much psychological distress to bear.

...interesting.

...

We should give UAVs souls?
#13
Literate Chaotic / Re: FFFFUUUUUUUUUU
January 15, 2010, 02:12:04 AM
If stress and so on were unneeded, then we probably wouldn't have them.  They'd be selected against in evolution.

Although, that might not be relevant in the modern world, I don't know.

Out of my interest, I have a question... what are possible fields that thinking machines can be used for, aside from mad science?
#14
Literate Chaotic / Re: FFFFUUUUUUUUUU
January 15, 2010, 02:08:19 AM
Personally, I think that the bicameral model is a pretty interesting one for the origin of consciousness, although I haven't read that book yet, and I've heard that it probably took place long before the author claimed, if it happened at all.  On the other hand, I might just like Snow Crash too much.

Quote from: The Right Reverend Nigel
If we can get them going "Oh my god I feel kinda crappy; is my oil low or am I just still upset over losing my job or do I need to empty my condensation chamber?" and then make the sensations become more unmanageably overwhelming and difficult to distinguish, as well as decreasing their ability to reason and their fine motor skills as demands on them increase, I think we would just about have it.

Is it wrong that I laughed?

Quote from: Felix
Lying is definitely a skill that developed from evolutionary social pressures. Without the ability to model the thinking of others, how do you lie well?  For that matter, how do we empathize? Autistic people may be merely lacking in these evolutionary traits, I have heard.  That's how recently our social skills evolved. We still have people who don't have this trait as a dominant gene.

So we have to make robots that can lie.

Yes, I think that AI research is the place to be for mad science.
#15
Literate Chaotic / Re: FFFFUUUUUUUUUU
January 15, 2010, 01:56:03 AM
I'm not sure that I understand this problem.  After all, isn't it just as impossible to verify that other people have subjective experiences?

As for the importance of communication/linguistics, I do think that it's important.  It seems like a reasonable idea to me that consciousness evolved from lying.  That is, as soon as we needed to make models of what other people were thinking, we had to think ourselves.

(I am also very interested in AI.)