Principia Discordia

Principia Discordia => Techmology and Scientism => Topic started by: Telarus on January 10, 2011, 07:29:57 AM

Title: American Psychological Association to publish controversial 'PSI' paper
Post by: Telarus on January 10, 2011, 07:29:57 AM
http://dbem.ws/FeelingFuture.pdf


Sticking this here so I can read it when I'm not all feverish.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Requia ☣ on January 10, 2011, 08:05:01 AM
I'm now envisioning an experiment where people are asked to choose the left or right box, one of which will have the erotic images and one of which will have horrible mind-rending things selected by Roger, and seeing which one gets picked more often.   :lulz:

Not done reading, but this is... interesting.  In particular, the powers he's proposing actually have a reason to exist from a natural selection standpoint*.  Assuming he's not faking the data or lying about his methods, this actually holds up so far.  A great deal of repetition is necessary.

*Apparently my confidence in the theory of evolution is higher than my confidence in the laws of physics, interesting.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Cramulus on January 10, 2011, 03:45:19 PM
I, for one, am really excited that an established psychologist who already has a lot of credibility is looking at these things. It's difficult for parapsychologists to be taken seriously by mainstream psychological researchers because

(a) so many parapsychologists use really weird unproven explanations for the effects they find
(b) most parapsychologists don't have any cred in mainstream circles

the word Parapsychology itself makes a lot of psych researchers roll their eyes, which is sort of sad, because when parapsychologists DO find something interesting, there are no ... hmm... social instruments for mainstream science to detect it. I think Bem's credibility will go a long way here.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Requia ☣ on January 10, 2011, 07:36:38 PM
Psych researchers rolling their eyes at parapsychology has more to do with ~80 years of parapsychologists failing to produce any repeatable experimental results before the rest of the psych community said 'fuck it, get out'.

This paper isn't the same tired old debunked ideas being dragged out again, but new ideas that might be correct, and at the least require new debunking.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Jasper on January 10, 2011, 07:41:27 PM
Freely admitted, I did roll my eyes.  As a psych major, I'm definitely chagrined by the effect unscientific claims have had on the overall credibility of psychology as a science.  But if the paper turns out to be an appropriate application of psychological science with compelling results, I will accept them.  Plan to read this after school.  Maybe do a write-up?
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: LMNO on January 10, 2011, 07:45:11 PM
I'm wondering if the experiments have the rigor of a "hard science" experiment (physics, etc), or if it's a tightly controlled "correlation" experiment.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Jasper on January 10, 2011, 07:48:12 PM
No behavioral science can be as rigorous as a phys or chem experiment, ever.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: LMNO on January 10, 2011, 07:51:00 PM
I suppose.  Plus, as noted above, what is being suggested violates known physical laws; I'm guessing no one attempts to reconcile this.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Jasper on January 10, 2011, 07:52:36 PM
I'll try to post a write-up tonight.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Richter on January 10, 2011, 08:19:35 PM
Quote from: Sigmatic on January 10, 2011, 07:48:12 PM
No behavioral science can be as rigorous as a phys or chem experiment, ever.

Which is why it's important to do so with a very objective viewpoint and careful methodology, including multiple levels of blinds and controls.  You literally CAN see different results depending on whether or not you're looking for them, when psychology is involved. 

Correlation is NOT causation, as the professors made certain to demonstrate to us in school, due to the daunting number of factors involved if nothing else.

Especially with parapsychology, even valid results with solid experimental design may be discarded, simply due to prejudice against the field and a biased view from a "Scholarly", "Scientific" community.  (Which REALLY irks me.)
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Jasper on January 10, 2011, 08:22:12 PM
Psych experiments are particularly hard because of "realism".  Like in the Asch conformity test with the glasses that was mentioned recently?  People will behave differently with regard to how they expect you to expect them to act.  Which is Fucking Frustrating.  QM is downright obliging by comparison.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Requia ☣ on January 10, 2011, 08:33:18 PM
What exactly do you mean by a correlation experiment?  The only things like that in psychology that I'm aware of are questionnaire relationship type deals (IE, people with high scores on the authoritarian construct correlate to high aversion on INC-NON humor, stuff like that).  The two experiments I've read so far aren't anything like that.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Jasper on January 10, 2011, 08:40:03 PM
A lot of them are basically creating an environment, providing a stimulus, and observing behaviors. 
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: LMNO on January 10, 2011, 08:40:53 PM
Quote from: Requia ☣ on January 10, 2011, 08:33:18 PM
What exactly do you mean by a correlation experiment?  The only things like that in psychology that I'm aware of are questionnaire relationship type deals (IE, people with high scores on the authoritarian construct correlate to high aversion on INC-NON humor, stuff like that).  The two experiments I've read so far aren't anything like that.

I meant "When we do X, we get Y."


As opposed to "Because of Z (which works because of A and B), when we do X, we get Y."



Fucking violation of physical laws, how do they work?
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: The Good Reverend Roger on January 10, 2011, 08:58:46 PM
Quote from: Requia ☣ on January 10, 2011, 07:36:38 PM
Psych researchers rolling their eyes at parapsychology has more to do with ~80 years of parapsychologists failing to produce any repeatable experimental results before the rest of the psych community said 'fuck it, get out'.

This

Quote
This paper isn't the same tired old debunked ideas being dragged out again, but new ideas that might be correct, and at the least require new debunking.

And that.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: The Good Reverend Roger on January 10, 2011, 09:00:14 PM
Quote from: LMNO, PhD on January 10, 2011, 07:51:00 PM
I suppose.  Plus, as noted above, what is being suggested violates known physical laws; I'm guessing no one attempts to reconcile this.

Violate away.

As long as you have good, repeatable data, I'm willing to re-scrutinize physical laws.

Let me say that again:  As long as you have good, repeatable data.

I'm not holding my breath.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Jasper on January 11, 2011, 12:30:09 AM
Experiment 1: Precognitive Detection of Erotic Stimuli

Basically, a computer test where you were supposed to look at two pictures of curtains, and then try to guess which one had the porn behind it.  The actual methodology seems sound when reviewed in detail, but I would quibble over their conclusions.  They claim that a success rate of 53.1% over 100 experimental observations is significantly higher than chance.  I would not be so eager to claim to have found anything.
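As a quick sanity check on that skepticism, the tail probability can be computed exactly. This treats the figure as roughly 53 hits in 100 independent two-choice trials, which is a deliberate simplification of the paper's actual session structure (Bem's sessions each contained multiple trials), so take it as an illustration rather than a reanalysis:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """Exact probability of seeing k or more successes in n Bernoulli(p) trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# ~53 hits out of 100 guesses, where pure chance predicts 50/50
p_value = binom_tail(53, 100)
print(f"P(X >= 53 | n=100, p=0.5) = {p_value:.3f}")
```

Under this simplified reading the one-sided p comes out around .3, nowhere near the usual .05 cutoff; whatever significance the paper claims has to rest on the much larger total trial count, not on "53 out of 100" by itself.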

Experiment 2: Precognitive Avoidance of Negative Stimuli

The way this test was presented: 

Quote
this is an experiment that tests for ESP (Extrasensory Perception). The experiment is run entirely by computer and takes about 15 minutes....On each trial of the experiment you will be shown a picture and its mirror image side by side and asked to indicate which image you like better. The computer will then flash a masked picture on the screen. The way in which this procedure tests for ESP will be explained to you at the end of the session

and then,

Quote
the participant was shown a low-arousal, affectively neutral picture and its mirror image side by side and asked to press one of two keys on the keyboard to indicate which neutral picture he or she liked better

The computer did not determine which was which until after the choice was made, ruling out any remotely possible explanation by pattern matching.

Whenever the participant had indicated a preference for the target-to-be, the computer flashed a positively valenced picture on the screen subliminally three times. Whenever the participant had indicated a preference for the non-target, the computer subliminally flashed a highly arousing, negatively valenced picture.


So yes, this test should test for whether a precognitive could foretell and avoid negative stimuli.

Page 19, table 2 shows their results. 

Quote
As Table 2 reveals, all four analyses yielded comparable results, showing significant psi performance across the 150 sessions. Recall, too, that the RNG used in this experiment was tested in the simulation, described above in the discussion of Experiment 1, and was shown to be free of nonrandom patterns that might correlate with participants' response biases.

(http://i518.photobucket.com/albums/u346/heinous_simian/p19t2.png)

Quote
Stimulus Seeking. In the present experiment, the correlation between stimulus seeking and psi performance was .17 (p = .02). Table 3 reveals that the subsample of high stimulus seekers achieved an effect size more than twice as large as that of the full sample. In contrast, the hit rate of low stimulus seekers did not depart significantly from chance: 50.7%–50.8%, t < 1, p > .18, and d < 0.10 in each of the four analyses.

Which I don't have enough stats under my belt to interpret confidently.  Anybody? 
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Requia ☣ on January 11, 2011, 12:57:26 AM
Correlation values of .2 are normal in psychology, and usually accepted.  (The highest value I've ever seen is .4).
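For what it's worth, the reported r = .17 can be roughly sanity-checked by hand. Assuming n = 150 (the session count quoted above — my assumption, since the paper isn't explicit in the excerpt about the n behind that correlation) and using the standard t transformation for a correlation coefficient, with a normal approximation in place of the t distribution since the degrees of freedom are large:

```python
from math import sqrt, erf

def norm_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * (1 - erf(z / sqrt(2)))

r, n = 0.17, 150                      # correlation and session count from the thread
t = r * sqrt(n - 2) / sqrt(1 - r**2)  # t statistic for testing r = 0
p_one_tailed = norm_sf(t)             # normal approx is close enough at df = 148
print(f"t = {t:.2f}, one-tailed p = {p_one_tailed:.3f}")
```

That lands right around the quoted .02 for a one-tailed test; the two-tailed version would be closer to .04, which suggests (if the assumption about n holds) that the paper's p values are one-tailed.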
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Cain on January 11, 2011, 01:15:59 PM
Still seems an overall very weak test though.  Like Sigmatic, I think claiming a 53.1% success rate, over 100 observations, where the participants are offered two choices, is not anywhere near strong enough to stake claims on.

I'd want to see a lot more tests done, first of all replicating this one and seeing if the results are steady.  I'd then want to introduce tests with more options.  And then tests with brain scans, where the activity in the brain of both successful and non-successful participants could be studied and compared.

Because if ESP does exist, there must be a point at which it has a physical effect on the brain.  And if a physical effect of some sort cannot be found, then the only reasonable conclusion is that the participants are guessing, and some are getting lucky and others are not, in roughly the ratios one would expect given the test parameters (near 50%).
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Richter on January 11, 2011, 02:28:33 PM
53.1% out of 100 observations isn't significant in a test where only two choices are presented (if I'm following their methodology right).  Flip a coin 100 times, and 53.1% of the flips coming up tails wouldn't be proof of anything. 

Make it a 4- or 5-choice test, get the same results, replicate them, and THEN you've got something to base a claim on.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: The Johnny on January 11, 2011, 03:32:18 PM

What I have learned about correlation value significance (if I did learn correctly  :fnord:) is that the value has to be.....

2/3... that is, 66.6%... OR above, to pass the "could go either way" threshold... otherwise it's just a matter of insignificant correlation that may fall in the realms of the "error range".

And no, I don't know the proper terms.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Cramulus on January 11, 2011, 03:48:49 PM
Quote from: Sigmatic on January 11, 2011, 12:30:09 AM
Page 19, table 2 shows their results. 

Quote
As Table 2 reveals, all four analyses yielded comparable results, showing significant psi performance across the 150 sessions. Recall, too, that the RNG used in this experiment was tested in the simulation, described above in the discussion of Experiment 1, and was shown to be free of nonrandom patterns that might correlate with participants' response biases.

(http://i518.photobucket.com/albums/u346/heinous_simian/p19t2.png)

Quote
Stimulus Seeking. In the present experiment, the correlation between stimulus seeking and psi performance was .17 (p = .02). Table 3 reveals that the subsample of high stimulus seekers achieved an effect size more than twice as large as that of the full sample. In contrast, the hit rate of low stimulus seekers did not depart significantly from chance: 50.7%–50.8%, t < 1, p > .18, and d < 0.10 in each of the four analyses.

Which I don't have enough stats under my belt to interpret confidently.  Anybody? 

if I recall my stats correctly...

the important thing to look at here is the p value. The lower the p value, the less likely it is that the result is due to random chance.

Most psych studies consider something a "real" correlation at a p value of .05? maybe more like .02. Parapsychological studies generally use a more rigorous p threshold because they need airtight proof that their finding isn't due to chance.

a p value of .009 (table 2, column 1) is highly significant - basically it means that the odds of this data being due to random chance are less than 1%.



I've been digging around for replication - which is what will make or break this paper - so far it looks like 3 groups have registered replication attempts, but I can't find data on them. Somebody did a replication of experiment 8 and did not replicate results, but I didn't read the paper.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Requia ☣ on January 12, 2011, 03:36:38 AM
Quote
Somebody did a replication of experiment 8 and did not replicate results, but I didn't read the paper.

Title of the paper?
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Kai on January 12, 2011, 04:52:11 AM
I took a look at the methods and results for experiment one, and I don't trust their use of a t-test or lack of a control. Were I designing the experiment, I would have devised some sort of control group with equal numbers, so that there were two variable sample sets. A one-way ANOVA would allow a much more rigorous test of significance. On their work alone I would not consider 53.1% to be significant, and from a common sense standpoint of what psi would really entail: why, if it exists, would it be just slightly higher than the cutoff point? Also, the sample size (given the size of the overall population, 6 billion) is way, way too small. And furthermore, why oh why do they draw the conclusion of psi on so little evidence? Why did they even FUCKING USE that word in their paper?

Goddammit. Now I'm pissed off.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Requia ☣ on January 12, 2011, 08:45:34 AM
There is a control in the experiments, specifically the control group is the set of neutral pictures.  Since the hypothesis is that *all* humans have reverse causative abilities, a second group of test subjects cannot be used as a control.  If I understand what a t-test is correctly, one cannot be used in this kind of experiment (at least, as I remember it, a t-test assumes a control group with random assignment).  This is perfectly normal scientific procedure.

On the 53.1% thing, as Cram has pointed out, the strength of the effect is irrelevant, what matters is that the effect not have a high chance of being obtained at random (p<.05 is the standard).  Nor would it be rational to expect a large effect (if the effect was large it wouldn't take a tightly controlled experiment to detect it, we would already know from day to day experience).
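The point that a true-but-tiny effect would hide in everyday experience can be made concrete with a rough power calculation. This uses the textbook normal-approximation formula for testing a proportion against chance (my own illustration, with conventional choices of one-sided alpha = .05 and 80% power — nothing here comes from the paper):

```python
from math import sqrt, ceil

def trials_needed(p_true, p_null=0.5, z_alpha=1.645, z_beta=0.84):
    """Approximate number of trials for 80% power in a one-sided
    test of a proportion against p_null at alpha = .05."""
    delta = p_true - p_null
    s0 = sqrt(p_null * (1 - p_null))   # null standard deviation
    s1 = sqrt(p_true * (1 - p_true))   # alternative standard deviation
    return ceil(((z_alpha * s0 + z_beta * s1) / delta) ** 2)

print(trials_needed(0.53))  # a true 53% hit rate vs the 50% chance baseline
```

On the order of 1,700 trials are needed before a genuine 53% hit rate would be reliably distinguishable from coin flipping, which is exactly why an effect of that size could only show up in a deliberately controlled, high-volume experiment.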

The PSI thing... yeah that's fucking silly.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Requia ☣ on January 13, 2011, 03:03:23 AM
There's a shocking number of criticisms out there from PhDs who don't appear to have read the paper.

My favorite so far is a guy talking, in a peer-reviewed paper, about how 'Bem's Psychic' (apparently he was under the impression that Bem was working with a single psychic instead of the typical group of students used for psych experiments) would be able to bankrupt a casino with a 53.1% accuracy rate, unless there was some reason he couldn't use the power for roulette (which, according to the paper in the OP, you can't, for a couple different reasons).

However, there's no reason to doubt his math, and while I'm not really qualified to judge his premise (that the statistical methods used in psychology should be replaced), what I do understand suggests that there's little reason to expect successful repetition even if Bem is being honest.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Kai on January 13, 2011, 01:17:00 PM
Quote from: Requia ☣ on January 12, 2011, 08:45:34 AM
There is a control in the experiments, specifically the control group is the set of neutral pictures.  Since the hypothesis is that *all* humans have reverse causative abilities, a second group of test subjects cannot be used as a control.  If I understand what a t-test is correctly, one cannot be used in this kind of experiment (at least, as I remember it, a t-test assumes a control group with random assignment).  This is perfectly normal scientific procedure.

Then it's pseudoreplication. Either way, I don't trust the results on that account.

Quote
On the 53.1% thing, as Cram has pointed out, the strength of the effect is irrelevant, what matters is that the effect not have a high chance of being obtained at random (p<.05 is the standard).  Nor would it be rational to expect a large effect (if the effect was large it wouldn't take a tightly controlled experiment to detect it, we would already know from day to day experience).

If there is really something going on, and it's going on in all humans, then this experiment could be run many more times and obtain the same result, thus eliminating the issue of pseudoreplication. I'm waiting.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Requia ☣ on January 14, 2011, 03:59:28 AM
What exactly is pseudoreplication?
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: LMNO on January 14, 2011, 02:35:32 PM
From what I understand, lack of a suitable control.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Kai on January 14, 2011, 11:14:12 PM
Quote from: Requia ☣ on January 14, 2011, 03:59:28 AM
What exactly is pseudoreplication?

It's when the experimenter attempts to replicate, but the replicates aren't independent from one another. It could be that the control isn't independent from the treatment, as in this case, or it could be that the treatments aren't independent in space (i.e., something about the place or the study object/organism introduces bias) or in time (i.e., something about the time of day, or timing in general, inserts bias). Other examples of pseudoreplication would be: taking multiple samples from the same organism and only that organism, conducting the experimental replicates on the same day, or in the same place. As an entomologist, the thoughts go through my head: what about weather? What if it was cloudy one day and sunny the next? I can't control it, but at least I can make it random by replicating across different days. How about my study location? Maybe it's just a bad location. I can eliminate location bias by setting up in several different places.

To have this be truly replicated, the authors would have had to have controls separate from treatment, OR repeat the experiment on multiple occasions and locations with multiple /sets/ of people.
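Why non-independent replicates matter can be shown with a small simulation (entirely my own construction, not anything from the paper): generate null data whose observations share a per-cluster effect — think "same day" or "same location" — then analyze them as if every observation were independent, and watch the false-positive rate climb well past the nominal 5%:

```python
import random

def false_positive_rate(n_experiments=2000, n_clusters=10, per_cluster=10,
                        cluster_sd=0.5, noise_sd=1.0, seed=42):
    """Simulate null experiments whose observations share per-cluster effects,
    then test the grand mean as though all observations were independent."""
    rng = random.Random(seed)
    rejections = 0
    n = n_clusters * per_cluster
    for _ in range(n_experiments):
        obs = []
        for _ in range(n_clusters):
            shared = rng.gauss(0, cluster_sd)  # effect common to the whole cluster
            obs += [shared + rng.gauss(0, noise_sd) for _ in range(per_cluster)]
        mean = sum(obs) / n
        var = sum((x - mean) ** 2 for x in obs) / (n - 1)
        z = mean / (var / n) ** 0.5            # naive test: pretends n obs are independent
        if abs(z) > 1.96:                      # nominal two-sided alpha = .05
            rejections += 1
    return rejections / n_experiments

print(false_positive_rate())                   # well above the nominal 0.05
print(false_positive_rate(cluster_sd=0.0))     # truly independent: near 0.05
```

With these (arbitrary) numbers the naive analysis rejects a true null roughly a quarter of the time instead of 5% of the time, which is the kind of inflation a pseudoreplicated design can quietly produce.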

Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Requia ☣ on January 15, 2011, 12:06:26 AM
By control not being separate from the treatment, I assume you mean that, e.g., in experiment 1 the same test subjects attempted to predict the location of the neutral and erotic pictures?  I've seen plenty of experiments like that in psychology and not seen comment on it, but I suppose it would be better procedure to separate them wherever possible.

The other stuff is fairly well known to be problematic in psychology, but nobody has come up with solutions that can be implemented without also increasing the budget for experiments past the point where anybody would get any work done.
Title: Re: American Psychological Association to publish controversial 'PSI' paper
Post by: Kai on January 15, 2011, 08:05:08 PM
Quote from: Requia ☣ on January 15, 2011, 12:06:26 AM
By control not being separate from the treatment I assume you mean that IE, in experiment 1 the same test subjects attempted to predict the location of the neutral and erotic pictures?  I've seen plenty of experiments like that in psychology and not seen comment on it, but I suppose it would be better procedure to separate them wherever possible.

The other stuff is fairly well known to be problematic in psychology, but nobody has come up with solutions that can be implemented without also increasing the budget for experiments past the point where anybody would get any work done.

I guess I'll just have to remain unwilling to accept their conclusions then.