Principia Discordia

Principia Discordia => Techmology and Scientism => Topic started by: Kai on March 11, 2012, 02:25:57 PM

Title: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Kai on March 11, 2012, 02:25:57 PM
Full summary over here: http://blogs.discovermagazine.com/notrocketscience/2012/03/10/failed-replication-bargh-psychology-study-doyen/

Basically, John Bargh, a researcher at Yale, conducted a psychological experiment back in 1996 to see whether exposing people to words associated with old age could prime them to act old (i.e., walk more slowly).

Just recently, Stéphane Doyen attempted to replicate this study, but added blinding in some conditions, something the original study lacked. The replication showed that individuals walked slower "only when they were tested by experimenters who expected them to move slowly". A basic case of blinding revealing unconscious bias.

Bargh then wrote a blog post where he proceeded to throw a fit. (http://www.psychologytoday.com/blog/the-natural-unconscious/201203/nothing-in-their-heads) Mind you, it's the sort of gentlemanly fit you see in academic circles, but it's plainly a case of a grown man losing it because he wasn't right. He attacks the journal, the authors, and Ed Yong as well. The last was a particularly dumb move, since Yong is probably the best damn science journalist out there, and wasn't exactly going to let it go at that.

This all illustrates that scientists aren't really different from other people when it comes to territoriality, and that replication, /published/ replication studies, are needed now more than ever. Until a study is replicated, it's a sample size of one.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Nephew Twiddleton on March 11, 2012, 02:37:54 PM
That's actually a really good idea. Any particular reason why follow-up studies aren't published?
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Kai on March 11, 2012, 02:59:32 PM
Quote from: An Twidsteoir on March 11, 2012, 02:37:54 PM
That's actually a really good idea. Any particular reason why follow-up studies aren't published?

Because there's this idiotic taboo against publishing negative results. The whole "you can't prove a negative" is deeply ingrained.

Quote from: The Art of Scientific Investigation, Beveridge, pg. 35
Commoner, however, is the failure of an experiment to demonstrate something because the exact conditions necessary are not known, such as Faraday's early repeated failures to obtain an electric current by means of a magnet. Such experiments demonstrate the well-known difficulty of proving a negative proposition, and the folly of drawing definite conclusions from them is usually appreciated by scientists. It is said that some research institutes deliberately destroy records of "negative experiments", and it is a commendable custom usually not to publish investigations which merely fail to substantiate the hypothesis they were designed to test.

Now, okay, it's not all stupid. There are good reasons not to publish an experiment that failed to substantiate your personal hypothesis. But in the case of replicating someone else's work, a negation is even more important than an affirmation. Results that do not point to one hypothesis will point to another, even if that hypothesis is "our experiment was faulty".

But since it's already drilled into scientists (and therefore journal editors) not to publish negative results, they usually don't get published.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Nephew Twiddleton on March 11, 2012, 03:15:31 PM
Seems counterproductive if you can't publish follow-up results that negate a previous hypothesis or at least throw it into question.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 11, 2012, 03:46:16 PM
I understand the reasoning, I just don't agree with it... not in the age of internet publishing, where it need not be prohibitively expensive to publish studies that essentially just serve to say "Yo, previous study is not to be relied on".

In addition, I think there is an interesting double standard, in that studies that prove, for instance, that there is no link between Atrazine and cancer have no problem getting published.

And I also find the high drama in the scientific community hilarious.  :lulz:

I swear, I should start a journal called "The Journal of Negative Findings". It would be SO popular.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Oysters Rockefeller on March 11, 2012, 04:03:26 PM
Quote from: An Twidsteoir on March 11, 2012, 03:15:31 PM
Seems counterproductive if you can't publish follow-up results that negate a previous hypothesis or at least throw it into question.

Agreed.

Quote from: Nigel on March 11, 2012, 03:46:16 PM
"Yo, previous study is not to be relied on".

Man, it's not even like the reactor reached critical mass or some shit.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 11, 2012, 04:15:51 PM
Also, I realize that behavioral sciences are a bit different, but I really took issue with the "There are many reasons for an experiment not to work" bit, because findings that don't confirm your hypothesis do not equal "not working". How many fucking times did Bahcall and Davis re-run their numbers and their very expensive experiment? It was working FINE, it just didn't yield the results they were expecting, which led to an even more significant discovery.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Q. G. Pennyworth on March 11, 2012, 05:14:26 PM
Quote from: Nigel on March 11, 2012, 03:46:16 PM
I swear, I should start a journal called "The Journal of Negative Findings". It would be SO popular.

This should be a thing. Let's find some scientists and make it a thing.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 11, 2012, 07:53:57 PM
Quote from: Queen Gogira Pennyworth, BSW on March 11, 2012, 05:14:26 PM
Quote from: Nigel on March 11, 2012, 03:46:16 PM
I swear, I should start a journal called "The Journal of Negative Findings". It would be SO popular.

This should be a thing. Let's find some scientists and make it a thing.

I'm gonna try to do it. Seriously. If I don't forget.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Kai on March 12, 2012, 12:31:18 AM
Quote from: Nigel on March 11, 2012, 03:46:16 PM
I understand the reasoning, I just don't agree with it... not in the age of internet publishing, where it need not be prohibitively expensive to publish studies that essentially just serve to say "Yo, previous study is not to be relied on".

In addition, I think there is an interesting double standard, in that studies that prove, for instance, that there is no link between Atrazine and cancer have no problem getting published.

And I also find the high drama in the scientific community hilarious.  :lulz:

I swear, I should start a journal called "The Journal of Negative Findings". It would be SO popular.

Well, there's already a Journal of Irreproducible Results (http://www.jir.com/), so it should go over well.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 17, 2012, 05:11:48 PM
Quote from: ZL 'Kai' Burington, M.S. on March 12, 2012, 12:31:18 AM
Quote from: Nigel on March 11, 2012, 03:46:16 PM
I understand the reasoning, I just don't agree with it... not in the age of internet publishing, where it need not be prohibitively expensive to publish studies that essentially just serve to say "Yo, previous study is not to be relied on".

In addition, I think there is an interesting double standard, in that studies that prove, for instance, that there is no link between Atrazine and cancer have no problem getting published.

And I also find the high drama in the scientific community hilarious.  :lulz:

I swear, I should start a journal called "The Journal of Negative Findings". It would be SO popular.

Well, there's already a Journal of Irreproducible Results (http://www.jir.com/), so it should go over well.

The JIR is always entertaining. Nerdery in general is always entertaining, especially when nerds quibble.

This morning I was reading what may have been the pettiest debate ever, about the best longhand method of finding a square root. There was name-calling. It was amazing. I think that a journal that published nothing but negative findings would become a major locus for science drama.

Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 17, 2012, 06:09:26 PM
I also really enjoy the hell out of the Annals of Improbable Research, which most people are probably familiar with thanks to the Ig-Nobels and the Luxuriant Flowing Hair Club for Scientists (which I am looking forward to joining, once I am a scientist).
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Kai on March 18, 2012, 07:30:43 PM
Quote from: Nigel on March 17, 2012, 06:09:26 PM
I also really enjoy the hell out of the Annals of Improbable Research, which most people are probably familiar with thanks to the Ig-Nobels and the Luxuriant Flowing Hair Club for Scientists (which I am looking forward to joining, once I am a scientist).

I have yet to join the LFHCS. This is a moral imperative.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 19, 2012, 03:15:05 AM
Quote from: ZL 'Kai' Burington, M.S. on March 18, 2012, 07:30:43 PM
Quote from: Nigel on March 17, 2012, 06:09:26 PM
I also really enjoy the hell out of the Annals of Improbable Research, which most people are probably familiar with thanks to the Ig-Nobels and the Luxuriant Flowing Hair Club for Scientists (which I am looking forward to joining, once I am a scientist).

I have yet to join the LFHCS. This is a moral imperative.

I can't even believe that you're not already a member! I kind of just assumed that you were.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: hirley0 on March 20, 2012, 12:28:32 AM
 :fnord:  ~o~  (http://improbable.com/projects/hair/) v read top down v
SO: 2 sepant / 21March N. i'poise the prime problem R thumbnails
7:47Ab while its true i do NOT know what to do HERE is A link Anyway
:fnord: 3/21 ? thumbnail photo ?  (http://sf0.org/madbird/Player-Photograph/)

http://sf0.org/SFmedia/badges/newplayer.gif & 18
(http://sf0.org/SFmedia/badges/newplayer.gif) (http://media.turnofspeed.com/media/hub/18_26101659.jpg)  {18X18 piX

http://media.turnofspeed.com/media/hub/square_26101659.jpg
(http://media.turnofspeed.com/media/hub/square_26101659.jpg) {60x60

http://media.turnofspeed.com/media/hub/main_26101659.jpg
(http://media.turnofspeed.com/media/hub/main_26101659.jpg) {400x320

640x480
http://photos3.meetupstatic.com/photos/member/4/f/f/d/highres_12620477.jpeg
(http://photos3.meetupstatic.com/photos/member/4/f/f/d/highres_12620477.jpeg)
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Cain on March 25, 2012, 10:37:44 AM
Quote
Last month, I learned about a publication that has been quickly gaining popularity, the Journal of Negative Results in BioMedicine (JNRBM). Published, presumably, by a gang of dour curmudgeons who hate everything, JNRBM openly welcomes the data that other journals won't touch because it doesn't fit the unspoken rule that all articles must end on a cheery note of promise. ("This could lead to new therapies!" boast most journal articles, relying on the word "could" to keep their platitudes accurate and the exclamation point to boost excitement, stand for "factorial," or make a clicking sound, depending on your field.)

You might imagine that JNRBM is a place where losers gather to celebrate their failures, kind of like Best Buy or Division III football. But JNRBM meets two important needs in science reporting: the need to combat the positive spin known as publication bias and the need to make other scientists feel better about themselves.

(Unfortunately, if you don't work in biomedicine, you're still screwed. The Journal of Negative Results in Zoology, for example, is just called "not seeing animals." And the Journal of Negative Results in Homeopathy is the entire field of homeopathy.)

When it comes time to put our science into words, why do we pretend that the negative results never happened? Why do we have so much trouble accepting that sometimes our hypotheses are disproved? But most importantly, where was this freaking journal when I was in grad school? You can get published even when the experiment fails—it's the easiest way to pad your CV since the invention of 1.25-inch margins.

http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2012_02_24/caredit.a1200021
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Cain on March 25, 2012, 10:38:17 AM
The Journal for Negative Results in Political Science is also referred to as "history", for those wondering.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 25, 2012, 01:12:27 PM
Quote from: Cain on March 25, 2012, 10:38:17 AM
The Journal for Negative Results in Political Science is also referred to as "history", for those wondering.

:lulz:
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 25, 2012, 01:14:01 PM
Quote from: Cain on March 25, 2012, 10:37:44 AM
Quote
Last month, I learned about a publication that has been quickly gaining popularity, the Journal of Negative Results in BioMedicine (JNRBM). Published, presumably, by a gang of dour curmudgeons who hate everything, JNRBM openly welcomes the data that other journals won't touch because it doesn't fit the unspoken rule that all articles must end on a cheery note of promise. ("This could lead to new therapies!" boast most journal articles, relying on the word "could" to keep their platitudes accurate and the exclamation point to boost excitement, stand for "factorial," or make a clicking sound, depending on your field.)

You might imagine that JNRBM is a place where losers gather to celebrate their failures, kind of like Best Buy or Division III football. But JNRBM meets two important needs in science reporting: the need to combat the positive spin known as publication bias and the need to make other scientists feel better about themselves.

(Unfortunately, if you don't work in biomedicine, you're still screwed. The Journal of Negative Results in Zoology, for example, is just called "not seeing animals." And the Journal of Negative Results in Homeopathy is the entire field of homeopathy.)

When it comes time to put our science into words, why do we pretend that the negative results never happened? Why do we have so much trouble accepting that sometimes our hypotheses are disproved? But most importantly, where was this freaking journal when I was in grad school? You can get published even when the experiment fails—it's the easiest way to pad your CV since the invention of 1.25-inch margins.

http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2012_02_24/caredit.a1200021

This is fucking AWESOME.  :lulz:  :lulz:  :lulz:

Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: hirley0 on March 25, 2012, 01:15:14 PM
Read 253 times 5:15 pdT
Quote from: hirley0 on March 20, 2012, 12:28:32 AM
v read top down v

(http://sf0.org/SFmedia/badges/newplayer.gif) (http://media.turnofspeed.com/media/hub/18_26101659.jpg)  {18X18 piX
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 25, 2012, 01:20:53 PM
I am, however, kind of bummed that someone has already started that journal, and it's even in the field I want to go into. :(
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: hirley0 on March 25, 2012, 01:27:51 PM

Four OFFered
Quote from: Nigel on March 25, 2012, 01:20:53 PM
I am, however, kind of bummed that someone has already started that journal, and it's even in the field I want to go into. :(
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Cain on March 25, 2012, 01:29:05 PM
Quote from: Nigel on March 25, 2012, 01:20:53 PM
I am, however, kind of bummed that someone has already started that journal, and it's even in the field I want to go into. :(

On the other hand, you can start submitting papers already.  Get falsifying!
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: hirley0 on March 25, 2012, 01:31:54 PM
? TWO ?  Read 262 times)
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Mesozoic Mister Nigel on March 26, 2012, 07:32:11 AM
Quote from: Cain on March 25, 2012, 01:29:05 PM
Quote from: Nigel on March 25, 2012, 01:20:53 PM
I am, however, kind of bummed that someone has already started that journal, and it's even in the field I want to go into. :(

On the other hand, you can start submitting papers already.  Get falsifying!

:lulz:
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Rococo Modem Basilisk on March 31, 2012, 12:25:11 AM
The Language Log posted a very good article on the subject of the non-replication mentioned in the OP, and what it does and does not mean. It does not involve itself with Bargh's rebuttals (since that's entirely unrelated to the point -- Bargh's study was considered a landmark study and strongly influenced much of cogsci, in addition to being repeatedly cited both in academic works in the field and in works for a general audience). The original is here (http://languagelog.ldc.upenn.edu/nll/?p=3850), but because it's such a good article (and because many of you are likely to be too lazy to click the link, and will continue to argue about the tangential issue of whether or not the first guy on the authors list of the original study is an asshole) I will quote it in its entirety here.

Quote from:  Julie Sedivy
Replication Rumble
March 17, 2012 @ 3:16 pm · Filed by Julie Sedivy under Psychology of language

In other non-replication news lately: There's been a pretty kerfuffle this month in social psychology and science blogging corners over a recent failure to replicate a classic 1996 study of automatic priming by John Bargh, Mark Chen, and Lara Burrows. The non-replication drew the attention of science writer Ed Yong who blogged about it over at Discover, and naturally, of John Bargh, who elected to write a detailed and distinctly piqued rebuttal at Psychology Today.

The original paper reported three experiments; the one that's the target of controversy used a task in which subjects unscramble lists of words and isolate one word in the list that doesn't fit into the resulting sentence. The Bargh et al. study showed that when the experimental materials contained words that were associated with stereotypes of the elderly (e.g. Florida, bingo, gray, cautious), subjects walked more slowly down the hall upon leaving the lab compared to subjects who saw only neutral words. The result has been energetically cited, and has played no small role in spawning a swarm of experiments documenting various ways in which behavior can be impacted by situational or subliminal primes. The authors explained their findings by suggesting that when the concept of a social stereotype is activated (e.g. via word primes), this can prompt behaviors that are associated with that stereotype (e.g. slow walking).

But allegedly, despite scads of studies that have built on some of Bargh et al.'s conclusions, the slow-walker study has yet to be fully replicated, which motivated Stéphane Doyen and colleagues at the Université Libre de Bruxelles to undertake the job, reporting their attempts in a recently-published article in PLoS ONE. Their first experiment, which contained sober experimental precautions such as using automated timing systems and ensuring that experimenters were blind as to which conditions subjects were assigned to, failed to produce a priming effect. This led them to wonder in print whether the original priming results could have come from a failure to strictly implement double-blind experimental methods, which serves as the motivation for their second experiment.

The second study reported by Doyen et al. focused on whether an effect could be induced by specifically manipulating the experimenters' expectations of how the subjects would behave. Ten different experimenters were included; these experimenters were made aware of which of their subjects were assigned to the word prime condition, and which were assigned to the neutral word condition. However, half of them were led to expect that when primed, their subjects would walk more slowly as a result of the experimental manipulation, and half of them were led to believe they would walk more quickly. (In reality, all subjects received the same elderly-priming materials as in the first experiment). The paper doesn't go into detail as to how these experimenter expectations were established, other than to report that all this took place during "a one hour briefing and persuasion session prior to the first participant's session." In addition, the experimenters had their expectations reinforced by the behavior of their very first study subject, who was a confederate in cahoots with the researchers and obligingly walked quickly or slowly, as expected.

Not surprisingly, when subjects' walking speed was measured by the experimenters themselves on a stopwatch, their pace aligned with expectations: subjects in the word prime condition were timed at faster speeds than those in the neutral word condition when the experimenters expected that priming would speed them up, and conversely, when experimenters expected priming to slow the subjects down, they timed them at slower speeds in the word prime condition relative to the neutral word condition. This wasn't the whole story, though—the subjects' actual speed was also timed by an automated motion-sensitive system. Objective measures of walking speed showed that when the experimenters expected priming to accelerate their subjects, that's exactly what happened. But when they expected subjects to slow down as a result of the priming, there was no difference between the primed subjects and those in the neutral word condition.

This tells us that the actual walking speed of subjects isn't determined entirely by experimenters' expectations; if that were the case, subjects should have walked more quickly when expected to do so as a result of priming. But it does suggest that the priming effect can be either boosted or dampened by experimenter expectations, presumably because the experimenter is emitting subtle and possibly inadvertent cues that impact the subjects' behavior (it would have been interesting, for example, to measure the experimenters' speech rate).

The authors' take on all this is to conclude that:

Quote
although automatic behavioral priming seems well established in the social cognition literature, it seems important to consider its limitations. In line with our result it seems that these methods need to be taken as an object of research per se before using it can be considered as an established phenomenon.

I'm really not sure what the above statement actually means. But it certainly invites a first-blush response of the Ohmygosh-is-all-this-stuff-we-thought-we-knew-about-unconscious behavioral-priming-wrong? variety. But it's worth waiting for that first flush to settle. Because in the end, the result in and of itself causes little trauma to the original Bargh et al. interpretation of their priming data, and none whatsoever to the more general issue of whether automatic behavioral priming exists.

First of all, the fact that experimenter expectations led to an effect on subjects' behavior doesn't mean that this accounts for the original Bargh et al. results. It just means that it has a measurable impact on any priming effects that may or may not occur. To find otherwise would be rather surprising, especially given the rather heavy-handed way in which these expectations seem to have been induced. (Bargh has countered the paper by claiming that in fact, their own study did implement double-blind methods; whether or not this was done rigorously enough, it certainly seems clear that the later Doyen et al. paper went to special lengths to create a salient experimenter bias above and beyond what would plausibly have existed in the earlier work).

So what we're really left with is the issue of how to interpret the non-replication. There are a number of possible reasons for this, some of them really boring, some of them mildly interesting, but most of them unrelated to the important theoretical questions. For example:

1. The non-replication itself is an experimental failure. In experiments involving humans, all "replications" are at best approximate. Other unforeseen aspects of the experimental design and implementation may have obscured a priming effect or led to unusually noisy data. For example, maybe the Belgian experimenter was attractive to the point of distraction. Maybe more of the undergraduate subjects were tested in the morning while still sluggish. Maybe the experimenter was flaky and inconsistent in implementing the study. Obviously, if an effect is repeatedly vulnerable to these kinds of obliterations, that can speak to the fragility of the effect; but the point is that for any single failure to replicate, we can't tell for sure what the source of the non-replication is. Perfectly robust results can be and often are drowned in noise inadvertently introduced somewhere in the experimental procedure. We can simply document that the failure to replicate occurred, while noting (and further testing) any obvious discrepancies from the original implementation.

2. The word primes may not have successfully triggered a stereotype for the elderly in the minds of the subjects, or the conceptual stereotype may not have had a strong association with slow walking movements. It's entirely conceivable that stereotypes would shift due to time or geographic location. A lot has happened demographically since 1991 when Bargh et al. first collected their data. Upon hearing about this study, for example, my own son remarked (referring to his alpine-skiing, Nepal-trekking grandmother): "Those subjects have obviously never met Nanny." In this case, there's no threat to Bargh's original theoretical contribution about the activation of social stereotypes as a driver of behavior; it's just that any given stereotype isn't going to be held by all populations.

3. There was nothing wrong with the stereotypes; the original result really was a statistical fluke, or an experimental artifact, or limited to a very narrow population or set of experimental circumstances. This eventuality is the most damaging to Bargh et al. But does it really threaten the more general conclusion that behavior can be unconsciously, or automatically primed? No; it simply casts doubt on the more specific interpretation of the results as being due to the activation of social stereotypes. In fact, it's hard to interpret Doyen et al.'s second study, which manipulated experimenter expectations, without appealing to unconscious behavioral priming (as fairly pointed out by Ed Yong in his post). Unless the experimenters actually violated experimental ethics outright by instructing the subjects to walk more slowly, it seems likely that the subjects were unconsciously picking up on experimenter cues (but which ones? Speech rate? Certain words?) unconsciously emitted by the experimenters. What's more, there are by now dozens and quite possibly hundreds of demonstrations of automatic priming effects using a variety of different experimental paradigms, some of which do apply the activation of stereotypes. (Some examples here and here.) Given that it's now 2012, not 1996 when the Bargh et al. paper first appeared, any non-replication of that original result is going to have to be interpreted within the context of that entire body of work.

So. Hardly material to launch a full-scale kerfuffle. This is just science plodding its plodding way towards its plodding approximation of truth. Enough with the rubbernecking already—there are no bloody conclusions to be found here, at least not yet.

So why am I bothering to add my voice to the fray? Because I think that it's very important that we actually talk about replication, what it means and doesn't mean, and that we do so in a way that moves beyond thinking about it as a cagematch between scientists.

When I talk to non-scientists, I'm distressed by a general illiteracy in the understanding of non-replication. All too often, failures to replicate are treated as abrupt reversals of truth. As if any new result, especially a startling or counterintuitive one, were anything but an opening gambit, not a declaration of truth. New studies, whether they replicate the result or not, are simply the next moves that change the way the board is now configured. But all too often, a failure to replicate is portrayed as an instance of science "changing its mind" or an indictment of the scientific method, when really, it's at the heart of the scientific method. When it comes down to it, the sound of non-replication isn't the sound of the puck being slapped into the opponent's net. It's the sound of a muttered "hmmm, what's going on here," the sound of science rolling up its sleeves with a sigh and settling in for a long night's work.


The TL;DR version: the original study attempted to analyze the effect of cultural stereotypes on primed behaviors. A later study, performed in a different country using a different language, failed to get similar results, which led to an analysis of other variables. The upshot is that in Belgium, French-speaking students seem to be more strongly influenced in their behavior by the expectations of the experimenters than by their stereotype of old people, whereas in the earlier study at Yale the experimenters' expectations were not adequately controlled for, and may have influenced results that were attributed solely to the stereotypes Yale psychology students had about old people.

This is arguably a good thing, because it will lead to exploration of unconscious use of non-verbal communication and its effects on primed responses, and because it will bring life back into the study of the effects of social stereotypes on primed responses (and probably how this differs between cultures). However, some media outlets and other dumb people will spin this as refutation (or even, if they are dumb enough, disproof) of primed responses.
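The confound described above can be sketched in a toy simulation (hypothetical numbers, not the actual Doyen et al. data): even when both groups walk at exactly the same true speed, an experimenter who expects slowness and times subjects by hand can produce an apparent "priming" gap that an objective timer does not show.

```python
import random

random.seed(1)

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

def trial(true_time, expected_slow, bias=0.3, noise=0.2):
    """One subject: objective hallway time, plus a stopwatch reading
    nudged by the experimenter's expectation (starting early / stopping late)."""
    actual = true_time + random.gauss(0, noise)
    stopwatch = actual + (bias if expected_slow else -bias) + random.gauss(0, 0.1)
    return actual, stopwatch

# Both groups walk at the same true speed: there is NO real priming effect.
primed  = [trial(7.0, expected_slow=True)  for _ in range(50)]
neutral = [trial(7.0, expected_slow=False) for _ in range(50)]

obj_gap   = mean(a for a, _ in primed) - mean(a for a, _ in neutral)
watch_gap = mean(s for _, s in primed) - mean(s for _, s in neutral)
print(f"objective gap: {obj_gap:+.2f}s   stopwatch gap: {watch_gap:+.2f}s")
```

The objective gap comes out near zero while the stopwatch gap is large, which is exactly the pattern that automated timing in the replication was able to expose (the `bias` and `noise` values are made up for illustration).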
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: Kai on April 01, 2012, 03:37:54 AM
I'm far more concerned with the lack of replication, and the distaste for publishing replications with negative results,

than I am with people who grasp every negative result as a disproof. Or, more likely, who latch onto results that follow their biases and discard those that don't.

The latter, well, people will be primates.

The former is something I can work to change.
Title: Re: Failure to replicate earlier study causes original author to have a hissy fit.
Post by: hirley0 on April 01, 2012, 02:15:20 PM
GOOD D-