Thursday, February 12, 2015

How peer-review works…and doesn’t work (Part 2: When peer-review goes wrong)

Last week in Part 1 we looked at the basics of how the peer-review process works for scientific papers - what it does, and what it doesn't do. You can read the entire Part 1 article here.

The gist is that peer review is the first step in the evaluation process: an attempt to ensure that a paper reflects a valid scientific study and is documented sufficiently that others can more fully determine its veracity and its relationship to the existing relevant science. Only after a paper is published can the greater scientific community review it in the depth necessary for a full evaluation.

But what happens when peer review doesn't weed out the woefully unscientific (you know, the kind of "science" Uncle George posts on his blog, right next to the conspiracy du jour)? How do papers that are unsupportable on their face get published? How does peer review "fail"? The reasons are varied.

First, peer review may not have failed at all. As noted, the two or three peer-reviewers evaluate the paper for completeness, plausibility, logic, and the adequacy of its methodology, results, statistics, discussion, and conclusions. If the paper appears valid, it generally is accepted for publication. Peer-reviewers simply can't assess the entire body of related science to determine whether the paper is valid; that is what post-publication review by the broader community of experts in the field accomplishes. Only after that broad scrutiny may errors in analysis, interpretation, or other inconsistencies be found. Sometimes major errors, and on rare occasions fraud, are identified. But mostly, papers that are found lacking simply fade away without being cited by others.

Second, quality standards vary among journals. Each field generally has two or three journals considered the highest echelon of excellence, and demand to publish in them is high because they are the most prestigious. That demand overwhelms the space available, so these journals can select only the highest-quality and most important papers. Since there are hundreds of other journals, authors can usually find a place to publish their research. While most of these journals also have high standards, some are perhaps more interested in collecting page charges than in ensuring the quality of the papers they print. Sometimes bad papers get published - and by bad I mean papers that lack scientific robustness and should not have been published at all. In most cases, these papers simply fade into obscurity because the scientific community sees no reason to cite them. In some cases, bad papers have been retracted long after they were published.

Third, there is what some have euphemistically called "pal-review." Scientists often collaborate on research, which is why published papers commonly have many authors rather than just one or two. That opens up the potential for peer-reviewers to have collaborated on other papers with the lead author (though you aren't allowed to peer-review a paper you co-authored or worked on directly). But this isn't what constitutes pal-review. Pal-review is when a pal (i.e., a friend or co-conspirator) abuses their position as editor of a journal to slip an otherwise obviously faulty paper into press. Sometimes the editor simply finds amenable reviewers to rubber-stamp the paper; in rare cases the editor may skip any semblance of peer review altogether. Pal-review has been documented, for example, in the publication of climate denial papers. [In Part 3 we'll take a look at specific examples of climate denier abuse of the peer-review process.]

Finally, a new problem has cropped up with the advent of "open access" journals. As any scientist knows, most journals are available only to people who pay for them (just as most books must be purchased before reading), and they can be expensive. Many journals are accessible if you join the associated scientific society (e.g., members of the Society of Environmental Toxicology and Chemistry get access to two highly acclaimed journals as part of their membership), and students can usually access journals through their university libraries. All of this sits behind some sort of paywall, so the general public generally can't get access to most newly published scientific papers. That is the impetus for the open access movement.

Under open access, online-only journals allow anyone to read and download every paper they publish. That's a great boon to public accessibility, but it presents an obvious potential for quality-control problems. Like traditional journals, open access journals span a wide range of quality standards. But there are also open access journals that can be called "predatory" because they will publish anything as long as the author pays the publication fee. This has led to some high-profile examples of nonsense papers (including one that consisted entirely of repeated expletives) being "published." Clearly there is a problem with this "pay-per-publish" model, one that raises questions about whether any meaningful peer review occurs at such journals. How deep the problem goes remains to be seen, and the key question is how journals of all types can ensure high quality control of the papers they publish.

These examples give us some insight into the limits of peer review and of open access. However, it must be emphasized that in the vast majority of cases, peer review works as a screen for publication. Most scientific papers are incremental, tearing off a piece of a very big canvas to examine and investigate. Individual papers rarely make a huge difference on their own (though there are plenty of examples of single papers that were deeply influential). Normally any given paper goes into the mix with all the other relevant papers, and only the sum total of the information they contain tells us a more or less whole story. So most of the time, a not-so-robust paper getting published isn't a big deal.

That said, there are times when editors have resigned after failing to properly screen faulty papers. Sometimes papers are retracted after publication. Occasionally there is even fraud (though any scientist caught engaging in fraud is quickly retired into driving a taxi or some other non-scientist line of work). Cases of intentional abuse of the peer-review system are rare, but important enough that Part 3 of this series will take a closer look at how some people have tried to abuse the system, sometimes successfully. Given that it isn't only rogue scientists but also political and lobbying interests trying to discredit the science, Part 3 will be a critical read for all practicing scientists.

[Note: Peer-review graphic can be seen larger at http://undsci.berkeley.edu/article/howscienceworks_16]
