Principia Discordia

Principia Discordia => Techmology and Scientism => Topic started by: Kai on October 13, 2013, 01:05:29 AM

Title: Journalist submits fake paper, passes peer review.
Post by: Kai on October 13, 2013, 01:05:29 AM
http://www.sciencemag.org/content/342/6154/60.full (http://www.sciencemag.org/content/342/6154/60.full)

QuoteI know [that it should have been rejected] because I wrote the paper. Ocorrafoo Cobange does not exist, nor does the Wassee Institute of Medicine. Over the past 10 months, I have submitted 304 versions of the wonder drug paper to open-access journals. More than half of the journals accepted the paper, failing to notice its fatal flaws. Beyond that headline result, the data from this sting operation reveal the contours of an emerging Wild West in academic publishing.

QuoteThe paper took this form: Molecule X from lichen species Y inhibits the growth of cancer cell Z. To substitute for those variables, I created a database of molecules, lichens, and cancer cell lines and wrote a computer program to generate hundreds of unique papers. Other than those differences, the scientific content of each paper is identical.

The fictitious authors are affiliated with fictitious African institutions. I generated the authors, such as Ocorrafoo M. L. Cobange, by randomly permuting African first and last names harvested from online databases, and then randomly adding middle initials. For the affiliations, such as the Wassee Institute of Medicine, I randomly combined Swahili words and African names with generic institutional words and African capital cities. My hope was that using developing world authors and institutions would arouse less suspicion if a curious editor were to find nothing about them on the Internet.

QuoteThere are numerous red flags in the papers, with the most obvious in the first data plot. The graph's caption claims that it shows a "dose-dependent" effect on cell growth—the paper's linchpin result—but the data clearly show the opposite. The molecule is tested across a staggering five orders of magnitude of concentrations, all the way down to picomolar levels. And yet, the effect on the cells is modest and identical at every concentration.

One glance at the paper's Materials & Methods section reveals the obvious explanation for this outlandish result. The molecule was dissolved in a buffer containing an unusually large amount of ethanol. The control group of cells should have been treated with the same buffer, but they were not. Thus, the molecule's observed "effect" on cell growth is nothing more than the well-known cytotoxic effect of alcohol.

In short, of the 255 journals that took the fake paper through their full review process (not counting the derelict journals), 157 accepted it.

Michael Eisen points out the hilarity of this expose being published in Science: http://www.michaeleisen.org/blog/?p=1439

QuoteMy sting exposed the seedy underside of "subscription-based" scholarly publishing, where some journals routinely lower their standards – in this case by sending the paper to reviewers they knew would be sympathetic - in order to pump up their impact factor and increase subscription revenue. Maybe there are journals out there who do subscription-based publishing right – but my experience should serve as a warning to people thinking about submitting their work to Science and other journals like it.

OK – this isn't exactly what happened. I didn't actually write the paper. Far more frighteningly, it was a real paper that contained all of the flaws described above that was actually accepted, and ultimately published, by Science.

That's right, the arsenic-eating bacteria paper from a few years back was published by Science, a big-name, "closed-access" journal. He goes on to argue:

QuoteBut it's nuts to construe this as a problem unique to open access publishing, if for no other reason than the study, didn't do the control of submitting the same paper to subscription-based publishers (UPDATE: The author, Bohannon emailed to say that, while his original intention was to look at all journals, practical constraints limited him to OA journals, and that Science played no role in this decision). We obviously don't know what subscription journals would have done with this paper, but there is every reason to believe that a large number of them would also have accepted the paper (it has many features in common with the arsenic DNA paper afterall). Like OA journals, a lot of subscription-based journals have businesses based on accepting lots of papers with little regard to their importance or even validity. When Elsevier and other big commercial publishers pitch their "big deal", the main thing they push is the number of papers they have in their collection. And one look at many of their journals shows that they also will accept almost anything.

None of this will stop anti-open access campaigners  (hello Scholarly Kitchen) from spinning this as a repudiation for enabling fraud. But the real story is that a fair number of journals who actually carried out peer review still accepted the paper, and the lesson people should take home from this story not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. Any scientist can quickly point to dozens of papers – including, and perhaps especially, in high impact journals – that are deeply, deeply flawed – the arsenic DNA story is one of many recent examples. As you probably know there has been a lot of smoke lately about the "reproducibility" problem in biomedical science, in which people have found that a majority of published papers report facts that turn out not to be true. This all adds up to showing that peer review simply doesn't work.

While I agree that pre-publication peer review is inconsistent, I don't think it's broken. It does reasonably well when the reviewers and editors are on task and not just passing papers through a pipeline. Once in a while a paper like the one above gets through a big-name journal whose checks on methods weren't strict enough, but for the most part, pre-publication peer review does exactly what it is supposed to.

My disagreement with all these folks is on the /purpose/ of pre-PR: it is NOT to judge the scientific value of a paper.

Pre-PR is, at best, a low-pass filter. It ensures that editors don't get shamed (often) for publishing gaffes, and that scientists in a particular field of research don't have to trudge through miles and miles of dreck just to find a paper that might be worthwhile. As such, reviewers and editors clean up the writing, make sure the methods follow from the introduction and the conclusion follows from all of the above, check logical consistency and reasoning, read the methods carefully, and look over the statistics. This is the job of pre-PR.

The job of judging the scientific /worth/ of a paper is what POST-publication peer review is for. I have spent many hours during my graduate career in these classes called "journal clubs". A group of students meets, one of us presents a paper which we have all read, and the rest of the hour is spent tearing it to shreds. This is post-PR, and it is absolutely necessary and far, far more important than pre-PR could ever be. This is where the judgement comes in. All this training wasn't to make me a better pre-PR reviewer; it was to make me a post-PR reviewer who is not bound by the opinions of a few, usually anonymous, people.

The problem is, people treat pre-PR as if it were the be-all, end-all of the peer review process, as if whatever is published is automatically worthy, or correct, or not fraudulent, just because it "passed peer review". The solution isn't to change the pre-PR process; the solution is to actually do post-PR review, to not simply trust the contents of a paper because it has Science or Nature or PLoS ONE in the header. If I simply take whatever I find in an article as the Word of God because some unnamed people got together and decided it was good enough to go, then what is /my/ worth as a scientist? How am I any more useful than people who take whatever media spin gets distributed?

This semester, I'm taking a journal-club-like class with a bit more structure, but the main event every week is always a paper presentation and discussion. I spend HOURS carefully reading through these papers, even if I am not the one presenting. I go through the steps on this page (http://violentmetaphors.com/2013/08/25/how-to-read-and-understand-a-scientific-paper-2/), not only so I have good questions to ask, but so I train this skill until it becomes second nature. This is how every scientist should approach any paper they are doing more than skimming: one they are considering for use in their research, or as background for a paper, or one that is controversial. It is necessary, it is not just a game, and it plays a far more important role than the so-called "broken" pre-publication peer review.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Nephew Twiddleton on October 13, 2013, 01:47:25 AM
Great post, Kai. I think that the idea people seem to have that getting published somehow makes something a fact is part of the reason that people spout off shit about climate change and such. Though, to be fair, science reporting needs to improve as well.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 13, 2013, 01:54:16 AM
It's really disheartening, because the whole reason peer review exists is to prevent nonsense from being published as science; by falling down on the job, journals are in effect not only failing to weed out the bullshit but endorsing it.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 13, 2013, 02:03:14 AM
Quote from: Doktor Blight on October 13, 2013, 01:47:25 AM
Great post, Kai. I think that the idea people seem to have that getting published somehow makes something a fact is part of the reason that people spout off shit about climate change and such. Though, to be fair, science reporting needs to improve as well.

Science reporting is darn good and gets better every year. The fine folks at Discover blogs and Nat Geo's Phenomena do excellent work. The problem is, they are still overshadowed by the Old Media, like Huff Post, and do not get the exposure they deserve. I'm not quite sure why; maybe because hype rules media right now. Which is why I post stories by Ed Yong, Carl Zimmer, and Gwen Pearson on Facebook nearly every day.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: LMNO on October 13, 2013, 05:26:19 AM
I really like
The idea of
Propping up
A post-PR pR.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 13, 2013, 12:12:50 PM
Quote from: LMNO, PhD (life continues) on October 13, 2013, 05:26:19 AM
I really like
The idea of
Propping up
A post-PR pR.

...I don't get it.

I know the abbreviations were less than optimal, but I think reading "pre-publication peer review" and "post-publication peer review" over and over would be worse.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: LMNO on October 14, 2013, 05:22:08 PM
Hm.  October 12 was a Saturday.  The post was at 11:40 pm.


:checks: Yup, I was drinking. 


I think I was saying that doing additional peer review after a peer-reviewed paper is published would be a good idea.  Which, I believe, is what is already happening.


So, yeah.  Feel free to ignore that one.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 15, 2013, 12:16:39 AM
Quote from: LMNO, PhD (life continues) on October 14, 2013, 05:22:08 PM
Hm.  October 12 was a Saturday.  The post was at 11:40 pm.


:checks: Yup, I was drinking. 


I think I was saying that doing additional peer review after a peer-reviewed paper is published would be a good idea.  Which, I believe, is what is already happening.


So, yeah.  Feel free to ignore that one.

No problem.

I actually was inspired by my Bavaria-born professor (in all his spirited yet harmless contentiousness) to bring up my ideas in today's lab meeting. And of course they tore them to shreds: when they do pre-PR, they consider scientific worth, and so do editors. Except...when I told them that I don't trust a paper based on its journal, nor do I believe it till I read it carefully, they said that's the right way to be.

Which seems contradictory. If post-PR is the way to do things, that suggests that pre-PR as a mediocre filter is fine. Not that I will perpetrate such a thing with the paper I am currently reviewing, nor with future papers. But it does suggest that the system isn't broken when it actually is present, and that what's broken is people's unwillingness to read critically, and only believe after careful consideration. I thought that growing up with Science had shaped my rational tendencies, but it seems once again that PD is responsible for good, or at least /intelligent/, things in my life.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Reginald Ret on October 15, 2013, 09:15:16 AM
Quote from: Kai on October 15, 2013, 12:16:39 AM
Quote from: LMNO, PhD (life continues) on October 14, 2013, 05:22:08 PM
Hm.  October 12 was a Saturday.  The post was at 11:40 pm.


:checks: Yup, I was drinking. 


I think I was saying that doing additional peer review after a peer-reviewed paper is published would be a good idea.  Which, I believe, is what is already happening.


So, yeah.  Feel free to ignore that one.

No problem.

I actually was inspired by my Bavaria-born professor (in all his spirited yet harmless contentiousness) to bring up my ideas in today's lab meeting. And of course they tore them to shreds: when they do pre-PR, they consider scientific worth, and so do editors. Except...when I told them that I don't trust a paper based on its journal, nor do I believe it till I read it carefully, they said that's the right way to be.

Which seems contradictory. If post-PR is the way to do things, that suggests that pre-PR as a mediocre filter is fine. Not that I will perpetrate such a thing with the paper I am currently reviewing, nor with future papers. But it does suggest that the system isn't broken when it actually is present, and that what's broken is people's unwillingness to read critically, and only believe after careful consideration. I thought that growing up with Science had shaped my rational tendencies, but it seems once again that PD is responsible for good, or at least /intelligent/, things in my life.
You are the best combination of SCIENCE! and PD.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 16, 2013, 07:25:31 AM
Quote from: Kai on October 15, 2013, 12:16:39 AM
Quote from: LMNO, PhD (life continues) on October 14, 2013, 05:22:08 PM
Hm.  October 12 was a Saturday.  The post was at 11:40 pm.


:checks: Yup, I was drinking. 


I think I was saying that doing additional peer review after a peer-reviewed paper is published would be a good idea.  Which, I believe, is what is already happening.


So, yeah.  Feel free to ignore that one.

No problem.

I actually was inspired by my Bavaria-born professor (in all his spirited yet harmless contentiousness) to bring up my ideas in today's lab meeting. And of course they tore them to shreds: when they do pre-PR, they consider scientific worth, and so do editors. Except...when I told them that I don't trust a paper based on its journal, nor do I believe it till I read it carefully, they said that's the right way to be.

Which seems contradictory. If post-PR is the way to do things, that suggests that pre-PR as a mediocre filter is fine. Not that I will perpetrate such a thing with the paper I am currently reviewing, nor with future papers. But it does suggest that the system isn't broken when it actually is present, and that what's broken is people's unwillingness to read critically, and only believe after careful consideration. I thought that growing up with Science had shaped my rational tendencies, but it seems once again that PD is responsible for good, or at least /intelligent/, things in my life.

I suspect that the reality is that both pre-and-post PR are necessary.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 16, 2013, 12:19:04 PM
Quote from: Not Your Nigel on October 16, 2013, 07:25:31 AM
I suspect that the reality is that both pre-and-post PR are necessary.

Necessary, yes. But I don't see them as having equal importance. Especially since for all but a few papers I have no control over pre-PR. But I can post-publication peer review any paper I want.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 17, 2013, 05:40:42 AM
Quote from: Kai on October 16, 2013, 12:19:04 PM
Necessary, yes. But I don't see them as having equal importance. Especially since for all but a few papers I have no control over pre-PR. But I can post-publication peer review any paper I want.

The advantage being that post-publication PR will make pre-publication PR tighter by necessity.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Reginald Ret on October 17, 2013, 12:20:05 PM
Quote from: Not Your Nigel on October 17, 2013, 05:40:42 AM
The advantage being that post-publication PR will make pre-publication PR tighter by necessity.
I would imagine you would want pre-publication PR to be looser; that way, no good stuff gets filtered out.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 17, 2013, 12:52:37 PM
I'm not concerned about filtering out. With the explosion of open access journals, it's very unlikely that an article of worth will not find some publishing outlet. It's far more likely that an unworthy paper will get published than the inverse. And like I said, I'm not concerned about that either. Rather, I'm concerned about the perception that peer review is infallible.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: LMNO on October 17, 2013, 03:48:19 PM
Quote from: Kai on October 17, 2013, 12:52:37 PM
I'm not concerned about filtering out. With the explosion of open access journals, it's very unlikely that an article of worth will not find some publishing outlet. It's far more likely that an unworthy paper will get published than the inverse. And like I said, I'm not concerned about that either. Rather, I'm concerned about the perception that peer review is infallible.

Now that you mention it, there does seem to be an Appeal to Authority aura around peer review.  "The argument is valid.  It was peer reviewed!"
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 17, 2013, 04:35:16 PM
Quote from: Kai on October 17, 2013, 12:52:37 PM
I'm not concerned about filtering out. With the explosion of open access journals, it's very unlikely that an article of worth will not find some publishing outlet. It's far more likely that an unworthy paper will get published than the inverse. And like I said, I'm not concerned about that either. Rather, I'm concerned about the perception that peer review is infallible.

What Kai said.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 17, 2013, 04:41:43 PM
Quote from: LMNO, PhD (life continues) on October 17, 2013, 03:48:19 PM
Quote from: Kai on October 17, 2013, 12:52:37 PM
I'm not concerned about filtering out. With the explosion of open access journals, it's very unlikely that an article of worth will not find some publishing outlet. It's far more likely that an unworthy paper will get published than the inverse. And like I said, I'm not concerned about that either. Rather, I'm concerned about the perception that peer review is infallible.

Now that you mention it, there does seem to be an Appeal to Authority aura around peer review.  "The argument is valid.  It was peer reviewed!"

yes, and the concept behind it actually should make that appeal sound; essentially, a non-expert should be able to point to a peer-reviewed article and say "I am not an expert, but a panel of experts found this research methodologically sound so I am offering it to support my position on X". Unfortunately, for whatever reason, a lot of peer reviewers seem to be falling down on the job, and in some cases don't even seem to be reading the articles they're passing. This is a huge problem, not just for the layperson but for other scientists who are basing their research on the existing body of research as communicated in, you guessed it, peer-reviewed papers.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: The Good Reverend Roger on October 17, 2013, 05:13:03 PM
Quote from: Not Your Nigel on October 17, 2013, 04:41:43 PM
Quote from: LMNO, PhD (life continues) on October 17, 2013, 03:48:19 PM
Quote from: Kai on October 17, 2013, 12:52:37 PM
I'm not concerned about filtering out. With the explosion of open access journals, it's very unlikely that an article of worth will not find some publishing outlet. It's far more likely that an unworthy paper will get published than the inverse. And like I said, I'm not concerned about that either. Rather, I'm concerned about the perception that peer review is infallible.

Now that you mention it, there does seem to be an Appeal to Authority aura around peer review.  "The argument is valid.  It was peer reviewed!"

yes, and the concept behind it actually should make that appeal sound; essentially, a non-expert should be able to point to a peer-reviewed article and say "I am not an expert, but a panel of experts found this research methodologically sound so I am offering it to support my position on X". Unfortunately, for whatever reason, a lot of peer reviewers seem to be falling down on the job, and in some cases don't even seem to be reading the articles they're passing. This is a huge problem, not just for the layperson but for other scientists who are basing their research on the existing body of research as communicated in, you guessed it, peer-reviewed papers.

Yep.  So now I basically can't even trust the scientific community.  May as well listen to advertising execs and religious whackjobs...Because it's the same fucking thing, when you get down to brass tacks.

You're hearing what someone WANTS you to believe is the truth, not the truth.

It's fucking disgusting.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: The Good Reverend Roger on October 17, 2013, 05:45:12 PM
Quote from: LMNO, PhD (life continues) on October 17, 2013, 03:48:19 PM
Quote from: Kai on October 17, 2013, 12:52:37 PM
I'm not concerned about filtering out. With the explosion of open access journals, it's very unlikely that an article of worth will not find some publishing outlet. It's far more likely that an unworthy paper will get published than the inverse. And like I said, I'm not concerned about that either. Rather, I'm concerned about the perception that peer review is infallible.

Now that you mention it, there does seem to be an Appeal to Authority aura around peer review.  "The argument is valid.  It was peer reviewed!"

AH, MY PEOPLE, I LOVE YOU!
(http://newsimg.bbc.co.uk/media/images/47913000/jpg/_47913344_wakefield_512.jpg)
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 24, 2013, 06:29:14 PM
Okay. I'm done with the butthurt. Here's some food for Germans.

http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

QuoteAcademic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think.

Various factors contribute to the problem. Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. "There is no cost to getting things wrong," says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline's persistent errors. "The cost is not getting them published."

The whole article is on mistakes and falsehoods in scientific publishing, and why replications (which are a kind of post publication peer review) are absolutely necessary and not happening. And you know what? This DOES upset me. I accept completely that peer reviewed journals are going to slip up sometimes, that peer reviewers are going to fail, that mistakes and falsehoods are going to be published. It happens, it's going to continue to happen, there's not a damn thing anyone can do to eliminate it completely. Which is why follow ups are so damn important.

Maybe Science really /is/ broken/short-circuited, and if it IS, then the broken part is that it's become like the media. The entire point is to pour out stories, with not a bit of thought to questioning whether the stories that just got poured out were any good. THAT'S the supposed self-correcting, and since we've been letting the journalists do it FOR us, the letters are still PR but pronounced "public relations" and not "peer review". This is disturbing. And I don't know fuck all I can do about it.

Also, I've been wondering who the hell that guy in the picture is.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: The Good Reverend Roger on October 24, 2013, 06:34:58 PM
Andrew Wakefield.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 24, 2013, 06:44:31 PM
Quote from: Dirty Old Uncle Roger on October 24, 2013, 06:34:58 PM
Andrew Wakefield.

Oh. Well, what's worse, finding out about him later, or never finding out? Even if peer review works 99.9% of the time, 0.1% are still going to get through. Admittedly, the numbers are worse than that. The difference between the two is that in one, Science is aware of this and works to self-correct, and in the other, Science ignores the ever-constant dilemma of peer review. I refuse to pretend that it can be perfect; that would really make the assholes "my people".
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 24, 2013, 06:53:34 PM
I just realized I need some help with this figure. The proportions don't seem right. Need some Bayes up in this here thing. LMNO?

(http://cdn.static-economist.com/sites/default/files/imagecache/original-size/images/print-edition/20131019_FBC916.png)
Title: Re: Journalist submits fake paper, passes peer review.
Post by: LMNO on October 24, 2013, 07:41:16 PM
At first blush, something does seem off.  I can't see exactly what it is, however.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 24, 2013, 09:54:47 PM
It's the false positives. 5% rate of false positives is 5% of those studies that had significant results, not five percent of the total, right?
Title: Re: Journalist submits fake paper, passes peer review.
Post by: LMNO on October 24, 2013, 10:25:56 PM
Not sure. It seems oddly worded.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: LMNO on October 24, 2013, 10:28:05 PM
Wait. 100 "true things", 5% error rate...


Ah! Where's the rigor? Shouldn't we be testing more than once, if we have a known error rate?
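
Rough back-of-the-envelope, assuming the 5% figure really is a per-test false positive rate and that repeat tests are independent (both of those are my assumptions, nothing from the figure itself), a replication requirement knocks the false alarms down fast:

```python
# Sketch only: treat 0.05 as the chance a true "no" comes up positive,
# and assume repeated tests are independent of each other.
alpha = 0.05

print(f"one test looks positive by accident:  {alpha:.4f}")      # 0.0500
print(f"two independent tests both do:        {alpha**2:.4f}")   # 0.0025
print(f"three in a row all do:                {alpha**3:.6f}")   # 0.000125
```

So even one honest replication would change the picture a lot, if anyone bothered to do them.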
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Stupid Youngin on October 24, 2013, 10:44:57 PM
(http://24.media.tumblr.com/262a318ba4ae3021598482893fb8cd1f/tumblr_mv70dyLwl71qdzw1so1_500.jpg)
Title: Re: Journalist submits fake paper, passes peer review.
Post by: The Good Reverend Roger on October 24, 2013, 10:47:18 PM
That is an EXCELLENT work, and the sources provided are pretty Goddamn good.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 24, 2013, 11:18:04 PM
Quote from: LMNO, PhD (life continues) on October 24, 2013, 10:28:05 PM
Wait. 100 "true things", 5% error rate...


Ah! Where's the rigor? Shouldn't we be testing more than once, if we have a known error rate?

Something is just not right about the middle part of that figure. It needs some Bayes-jutsu.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 25, 2013, 12:23:32 AM
Quote from: Kai on October 24, 2013, 06:29:14 PM
Okay. I'm done with the butthurt. Here's some food for Germans.

http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

QuoteAcademic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think.

Various factors contribute to the problem. Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. "There is no cost to getting things wrong," says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline's persistent errors. "The cost is not getting them published."

The whole article is on mistakes and falsehoods in scientific publishing, and why replications (which are a kind of post publication peer review) are absolutely necessary and not happening. And you know what? This DOES upset me. I accept completely that peer reviewed journals are going to slip up sometimes, that peer reviewers are going to fail, that mistakes and falsehoods are going to be published. It happens, it's going to continue to happen, there's not a damn thing anyone can do to eliminate it completely. Which is why follow ups are so damn important.

Maybe Science really /is/ broken/short-circuited, and if it IS, then the broken part is that it's become like the media. The entire point is to pour out stories, with not a bit of thought to questioning whether the stories that just got poured out were any good. THAT'S the supposed self-correcting, and since we've been letting the journalists do it FOR us, the letters are still PR but pronounced "public relations" and not "peer review". This is disturbing. And I don't know fuck all I can do about it.

Also, I've been wondering who the hell that guy in the picture is.

Fantastic article, Kai! One of the questions that's been brought up somewhere around here is why negative findings are so rarely published, even though negative findings can stand to tell us more, more definitively, about a question than positive findings. They aren't sexy, they aren't speculative, but sometimes a solid "Nope!" (if you'll forgive me for the expression) can be more meaningful than a bright and shiny "Maybe".

I find it a bit troublesome that apparently not all scientists are required to take statistics. I admit that I hated statistics; that's no secret. I was bored to tears. But as time goes on I am really, really glad that I took it, because it makes the results of a paper so much easier to interpret and understand, including being able to look at powers and levels of significance and say "hmm, that is far too high an error rate with far too low an n for me to take these findings very seriously without a great deal of further investigation".
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 25, 2013, 12:35:45 AM
Quote from: Kai on October 24, 2013, 11:18:04 PM
Quote from: LMNO, PhD (life continues) on October 24, 2013, 10:28:05 PM
Wait. 100 "true things", 5% error rate...


Ah! Where's the rigor? Shouldn't we be testing more than once, if we have a known error rate?

Something is just not right about the middle part of that figure. It needs some Bayes-jutsu.

This is what it is saying:  there are 1000 test cases. 10% are "yes". There are 100 true "yeses" and 900 true "nos". A power of .8 means 80% of the true "yeses" will be captured by the test. That means there will be 20 apparent "nos" that are really "yeses". There is also a .05 false positive rate. That means that out of 900 TRUE "nos", 45 will appear to be "yeses". False positives look exactly like true positives.

However, although their math works out just fine if they are talking about a 5% false positive rate, they seem to have confused confidence level with false positive rate. That is not what a .05 confidence level is. A .05 confidence level is a measure of how likely the test is to have produced data this far or farther from a no-change mean by chance alone. It is not an error rate. Therefore all their numbers are hopelessly borked and meaningless.
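
If it helps to see the bookkeeping laid out, here is the figure's arithmetic as I'm reading it, written as a quick sketch. The 10% prior, the 0.8 power, and treating 0.05 as a per-test false positive rate are all the figure's assumptions, not mine:

```python
# The Economist figure's bookkeeping, as I read it (their assumptions, not mine).
total = 1000            # hypotheses tested
prior_true = 0.10       # the figure assumes 10% of them are actually true
power = 0.80            # chance a real effect gets detected
false_pos_rate = 0.05   # chance a true "no" comes up positive anyway

true_yes = total * prior_true                   # 100 real effects
true_no = total - true_yes                      # 900 true nulls

true_positives = power * true_yes               # 80 detected
false_negatives = true_yes - true_positives     # 20 missed
false_positives = false_pos_rate * true_no      # 45 spurious "yeses"
true_negatives = true_no - false_positives      # 855 correctly negative

published_positives = true_positives + false_positives   # 125
print(false_positives / published_positives)    # 0.36 -> over a third of the "positives" are wrong
```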
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 25, 2013, 01:54:40 AM
Quote from: Mrs. Nigelson on October 25, 2013, 12:35:45 AM
Quote from: Kai on October 24, 2013, 11:18:04 PM
Quote from: LMNO, PhD (life continues) on October 24, 2013, 10:28:05 PM
Wait. 100 "true things", 5% error rate...


Ah! Where's the rigor? Shouldn't we be testing more than once, if we have a known error rate?

Something is just not right about the middle part of that figure. It needs some Bayes-jutsu.

This is what it is saying:  there are 1000 test cases. 10% are "yes". There are 100 true "yeses" and 900 true "nos". A power of .8 means 80% of the true "yeses" will be captured by the test. That means there will be 20 apparent "nos" that are really "yeses". There is also a .05 false positive rate. That means that out of 900 TRUE "nos", 45 will appear to be "yeses". False positives look exactly like true positives.

However, although their math works out just fine if they are talking about a 5% false positive rate, they seem to have confused confidence level with false positive rate. That is not what a .05 confidence level is. A .05 confidence level is a measure of how likely the test is to have produced data this far or farther from a no-change mean by chance alone. It is not an error rate. Therefore all their numbers are hopelessly borked and meaningless.

Actually, they're right about that. Alpha is not only the threshold a p-value has to beat for significance, it is also the chance of a Type I error. That is, if the null hypothesis is true, it is the fraction of replications in which you would expect, by chance alone, the estimate of the parameter (usually the mean) to fall far enough out to be considered significant. And...I think I just explained it.

The problem is, the article words the situation weirdly. Instead, it should be like this.

1. Of 1000 hypotheses tested, suppose 100 are actually true, that is, for 100 of them we really should reject the null hypothesis in favor of the alternative.

2. Given an alpha of 0.05, we expect 1/20th of the tests that should not have rejected the null hypothesis to do so anyway. 1/20th of 900 is 45. These are the false positives. Given a power of 0.8 [I'm not sure how they got a power of 0.8, I just have to take their word for it, since calculating power by hand is complicated, since calculating beta is complicated, and power is 1 minus beta], our beta is 0.2, which means that 20% of the time when we /should have/ rejected the null hypothesis, we will not. This is the Type II error, and it means that 0.2*100 = 20 of the negative results are actually false negatives; they should have rejected the null hypothesis.

3. If researchers only publish positive test results, that means that by chance alone there will be 9 false positives published for every 16 true positives (45 against 80), so more than a third of the published positive results are wrong. The ratio of false negatives to true negatives is about 0.02, which means the random chance of a false negative is much lower than that of a false positive, if all tests are equally published.


Now, I have a few problems with this. The first is that people are generally not interested in detecting sameness; they are interested in detecting differences, at least in hypothesis testing. And furthermore, those alpha and beta values? They are /tailored/ to a high standard of detecting differences. We could easily design a test where the rate of false negatives is higher, and all we have to do is decrease the alpha value. Make it tiny. Make it small enough, and the false negative level will skyrocket. (ETA: Really, why we use 0.05 as our alpha is based around something called the central limit theorem, which has to do with central tendencies of variability and the rareness of extreme values. It assumes data are normally distributed. They aren't always.)

But here's the main problem, and that's the premise of assuming a very uneven ratio of negative to positive results. People do not run around testing hypotheses at random. It is, frankly, a waste of time. When the article assumes a 10 to 1 ratio of negatives to positives, it is exactly that, an assumption. What if we make it 50:50? Well then, 1/20th of 500 is 25 false positives, and 0.2 times 500 is 100 false negatives. That makes the ratio of false positives to true positives 25:400, or about 0.06, and the ratio of false negatives to true negatives 100:475, or about 0.2. So the share of published positives that are wrong drops to roughly 6%, which is a HELL of a lot lower than more than a third.
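
To make that concrete, here's a little sketch that redoes the figure's bookkeeping for whatever prior you feel like assuming. The function name and the two example priors are mine; the alpha and power are the figure's:

```python
def false_discovery_share(prior_true, alpha=0.05, power=0.8, total=1000):
    """Fraction of 'significant' results that are false positives, given an
    assumed fraction of tested hypotheses that are actually true."""
    true_yes = total * prior_true
    true_no = total - true_yes
    false_pos = alpha * true_no     # true nulls that come up significant anyway
    true_pos = power * true_yes     # real effects that get detected
    return false_pos / (false_pos + true_pos)

print(false_discovery_share(0.10))   # ~0.36, the figure's 10:1 assumption
print(false_discovery_share(0.50))   # ~0.06, a 50:50 assumption
```

Same math, wildly different headline, and the only thing that changed is the untested assumption.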

This means the whole figure is nonsensical, because it is based on an untested assumption: the ratio of negatives to positives in hypothesis testing. It does the /math/ right, but it starts from a flimsy premise. These post-hoc power tests have been looked down upon for years; this is not how you use power.

What you use power for is to decide on an appropriate sample size for the effect size you are looking for. In other words, if you are going to test a fertilizer, and you only care if the tree growth difference is larger than a foot (this is the effect size), power calculations can help tell you what an appropriate sample size would be to detect that difference between your control and treatment, given the natural variability and the desired alpha (again, the probability of a Type I error, usually 0.05). You can then rest assured, if you have properly estimated the inherent variability, that the appropriate sample size will, with high probability, give you a significant p-value whenever the true effect is at least as large as the one you care about.
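
For what it's worth, this is the sort of calculation I mean. The library (statsmodels) and the particular numbers are my own choices for illustration; a standardized effect size of 0.5 with the usual alpha of 0.05 and power of 0.8:

```python
# Prospective power analysis: how many samples per group do I need to detect
# a standardized effect of 0.5 at alpha = 0.05 with power = 0.8?
# (Library and numbers are illustrative choices, not anything from the article.)
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.8, alternative='two-sided')
print(round(n_per_group))   # roughly 64 per group
```

You do this /before/ the experiment. Running it backwards after the fact is the post-hoc nonsense mentioned above.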

This is turning into a tangent, but it must be said: a significant p-value is /meaningless/ without knowing the effect size. You could say that the difference between those two tree fertilizers is significant, but if the actual difference is only a change in inches, who gives a shit? When you see a significant p-value in a paper, always always always check what the actual difference is, what the units are, and if the difference even matters.
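
A toy simulation of the fertilizer example, with every number invented for illustration: crank the sample size up enough and a one-inch difference in growth comes back "highly significant", even though nobody planting trees should care:

```python
# Toy example (all numbers invented): a trivially small true difference turns
# "statistically significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50_000                                              # trees per fertilizer
control = rng.normal(loc=120.0, scale=12.0, size=n)     # growth in inches
treated = rng.normal(loc=121.0, scale=12.0, size=n)     # true difference: 1 inch

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"p-value: {p_value:.1e}")                                   # absurdly small
print(f"effect:  {treated.mean() - control.mean():.2f} inches")    # ~1 inch. Who cares?
```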


And I think that's all for now. This message has been brought to you by the statistics software program R and the number 0.05.


ETA2: Oh, I think I just repeated what you said, except more complicated, and with more flailing at the end.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 25, 2013, 02:20:52 AM
Never mind, I am properly dizzied!  :lol:
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 25, 2013, 02:21:22 AM
Quote from: Kai on October 25, 2013, 01:54:40 AM
ETA2: Oh, I think I just repeated what you said, except more complicated, and with more flailing at the end.

Ahhhh OK thanks, my head was spinning a bit there!
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 25, 2013, 02:35:11 AM
Quote from: Mrs. Nigelson on October 25, 2013, 02:21:22 AM
Quote from: Kai on October 25, 2013, 01:54:40 AM
ETA2: Oh, I think I just repeated what you said, except more complicated, and with more flailing at the end.

Ahhhh OK thanks, my head was spinning a bit there!

Sorry! My head was spinning too, trying to figure out the math. But you have the right of it; these proportions of error are not meant for determining after the fact what the possibility of statistical error is. Alpha, beta, and power are supposed to be used for individual hypothesis tests, not for judging the error rate of a large number of different tests, and are supposed to be computed before the test, not after.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: The Good Reverend Roger on October 25, 2013, 02:36:30 AM
I have been gibbering monkey noises for the last 3 posts.

And I was once a math/physics major.

:horrormirth:
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Kai on October 25, 2013, 02:42:27 AM
Quote from: Mrs. Nigelson on October 25, 2013, 12:23:32 AM
Fantastic article, Kai! One of the questions that's been brought up somewhere around here is why negative findings are so rarely published, even though negative findings can stand to tell us more, more definitively, about a question than positive findings. They aren't sexy, they aren't speculative, but sometimes a solid "Nope!" (if you'll forgive me for the expression) can be more meaningful than a bright and shiny "Maybe".

I find it a bit troublesome that apparently not all scientists are required to take statistics. I admit that I hated statistics; that's no secret. I was bored to tears. But as time goes on I am really, really glad that I took it, because it makes the results of a paper so much easier to interpret and understand, including being able to look at powers and levels of significance and say "hmm, that is far too high an error rate with far too low an n for me to take these findings very seriously without a great deal of further investigation".

To get back to this: yes, negative findings can tell us things. The really important thing is to follow up on both positive and negative results, repeat experiments, and question the authority of the literature. It takes time, but it must be done.

As for statistics...the necessity of statistics is determined by how much variability your data have, and how large your effect size is. If your effect size is huge and your variability is low, then statistics is pretty much unnecessary. You just /look/ at the thing. A lot of the time, physicists don't use statistics. But biology, for example, is messy. There's a great deal of variability in biological systems, and the effect sizes are often small and still meaningful. Therefore, statistics is standard. In our PhD program, everyone is required to take at least one statistics course, sometimes multiple.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 25, 2013, 03:27:18 AM
Quote from: Kai on October 25, 2013, 02:35:11 AM
Quote from: Mrs. Nigelson on October 25, 2013, 02:21:22 AM
Quote from: Kai on October 25, 2013, 01:54:40 AM
ETA2: Oh, I think I just repeated what you said, except more complicated, and with more flailing at the end.

Ahhhh OK thanks, my head was spinning a bit there!

Sorry! My head was spinning too, trying to figure out the math. But you have the right of it; these proportions of error are not meant for determining after the fact what the possibility of statistical error is. Alpha, beta, and power are supposed to be used for individual hypothesis tests, not for judging the error rate of a large number of different tests, and are supposed to be computed before the test, not after.

Cool, we're on the same page then. I actually didn't get to what was wrong with it until I started walking through it. Their math is right only if their logic is right, and their logic is wrong so it's all fucked.
Title: Re: Journalist submits fake paper, passes peer review.
Post by: Mesozoic Mister Nigel on October 25, 2013, 03:46:38 AM
Posting to remind me to post ITT tomorrow, when my brain decides to come online again.