
Thursday, August 27, 2015

The Irony of Climate Deniers Attacking Published Journal Articles

A new peer-reviewed paper was published recently in the scientific journal Theoretical and Applied Climatology. Its title is "Learning from Mistakes in Climate Research" and the objective is to survey recent "denier" papers, that is, the rare papers that reject the unequivocal scientific consensus that human activity is warming our climate system. The authors - seven climate scientists and science communicators from Norway, the Netherlands, the United States, the UK, and Australia - highlighted the errors in fact and logic common to the selected denier papers.

Not surprisingly, the denier lobbyists and their network of front groups and bloggers attacked the study. The lines of attack ignored the substance of the actual points being made and focused instead on the paper's publishing history and the impact factor of the journal. These attacks are about as ironic as you can get given that deniers rarely even attempt to publish in actual scientific journals (preferring instead to "publish" opinion pieces in business blogs). The one journal they publish most in has an impact factor that is essentially non-existent. As the proverb goes, those who live in glass houses should not throw stones.

But it was rejected by other journals?

Deniers (on Facebook and other non-scientific venues, mainly by non-scientist ideologues and/or conspiracy theorists) are trying to denigrate the study by suggesting it was rejected by other journals. Their false conclusion is that if a paper is rejected by other journals it must somehow be wrong. That false conclusion shows an incredible ignorance of how scientific publishing works.

In previous posts I discussed how peer-review works (and how deniers try to abuse the process) so I won't repeat the basics here. Scientific journals reject thousands of papers every year for reasons that have nothing to do with whether the paper is good or bad. In every field there are a few journals that professionals consider the most prestigious, so those professionals tend to submit their papers to the best journals first. That demand runs up against each journal's limited page space, so the most prestigious journals reject the vast majority of papers received simply because there is no room to print them. Journals may also reject papers because the topic doesn't fit the narrow scope of that particular journal.

In short, rejection in scientific journals is common, and expected.

The reason for the initial rejection of this particular paper is likely that it is an unconventional paper that doesn't fit the scope of most journals. Most climate studies collect data on temperature, sea level, ice thickness, or hundreds of other measurable factors, run the statistics, and report the results. This paper is more of a survey of other papers, selected because they represent the tiny percentage of papers rejecting the unequivocal science. The goal was to see if there were commonalities in their methods or logic. There are limitations to such a survey (as there are with all studies), and the authors acknowledge those limitations. The observations they make may be incomplete because the survey didn't look at all denier papers, but they are valid.

The irony here is that deniers rarely publish scientific papers, and when they try to publish they often are rejected. Those rejections may stem from the same factors as above, but they also include rejections based on the lack of veracity of the data presented and the logic used to derive conclusions. As the "Learning from Mistakes" paper highlights, even the rare denier papers that do make it through the publication process have serious flaws that invalidate their conclusions. In fact, denier conclusions often don't even agree with the data they present in their own paper, never mind with reality.

But the journal has a low "impact factor?"

These same deniers have suggested that the journal the paper was published in has a "low impact factor." They falsely conclude from this that the journal is not to be trusted. That's silly, and inaccurate.

To begin with, the journal in which this paper was ultimately published, Theoretical and Applied Climatology, is put out by Springer Science, a renowned publishing company in business since 1842. The journal is a continuation of journals that have been published since 1949. In recent years the journal has evolved into an Open Access format, that is, the papers are available in full as PDFs for free to the public.

An "Impact Factor" is a measure of the average number of citations of recent articles, that is, how often are those articles cited by other authors in newer papers. It's a rather arbitrary metric with many criticisms, and there are other metrics that are also used. It's use is based on the assumption that papers that are cited more are somehow more important, but impact factors tend to be biased towards journals that publish review articles (people cite review articles instead of each individual study reviewed) and journals that publish cutting edge news (like Science and Nature). The more specialized the journal, the fewer opportunities there are for citing it.

The reason deniers have focused on this one metric is because they think it allows them to dismiss the paper without having to address any of its points. That, and the fact that the denier lobbyists sent word out via their blogging networks to tell everyone to focus on it.

The most recent impact factor for Theoretical and Applied Climatology in 2014 was 2.015. This falls within the range of most climate journals.

Not surprisingly, the journal in which deniers most often place their rare publication attempts, Energy & Environment, had an impact factor of 0.319, which ranked it 90th out of 93 journals in its category. Hardly something to brag about, especially since its editor admitted to "following her political agenda" in choosing the papers to publish (mostly from a small group of deniers). Of course, deniers' favorite platform for "publishing," that is, blogs, has no impact factor at all because blogs aren't peer-reviewed. Which is why virtually everything in denier blogs is wrong.

There are many more instances of denier ignorance and double standards demonstrating that deniers don't understand most of what they parrot from their denier blogs. I've cataloged many of them on this page under Exposing Climate Denialism.

The main goal of the denier lobbyists and their blogger network (including Facebook trolls) is to deflect from the valid points being made in the journal article "Learning from Mistakes in Climate Research." Those "mistakes" made by deniers may be intentional, as the history of people like Willie Soon and Richard Lindzen suggests. They include "cherry picking," "curve-fitting," and other factual and logical errors like drawing conclusions that aren't even supported by the data they themselves present. This likely happens because they start with the conclusions they want and try to force-fit the cherry-picked data to support them.

There's a word for that.

Take the time to read the article, as it is important to see what the denier lobbyists have tried to hide from the public. Dana Nuccitelli, one of the co-authors on the paper and a regular contributor to the Guardian, has provided a nice summary of their findings. Because the journal is open access you can download the full paper from their website and read it yourself. And here is the PDF copy.

Thursday, July 2, 2015

How Scientific Peer-Review Works - The Series

Earlier this year I posted a series of articles explaining what scientific peer-review is, and what it isn't. The series was very popular so I've decided to create this single post that links to all the previous ones.

In Part 1 we gave a basic definition of peer-review, described the process, and explained what it is expected to accomplish and what it is not. In a nutshell, scientists conduct research and then write that research up in a formal paper (including methods, results, how the statistics were done, conclusions, and some discussion of what it all means). The paper is then submitted to a scientific journal, whose editors send it out to other scientists in the field who are capable of reviewing it for clarity, content, and value in expanding our collective knowledge. The reviewers don't validate or invalidate the work; they just make sure it meets some basic scientific principles and is complete enough for others to 1) know what the researchers did, and 2) replicate it.

Part 2 looked at how peer-review can go wrong. Standards for scientific journals can differ, with some being akin to Ivy League colleges while others may be less stringent. The relatively rare problem of "pal-review" (common among climate deniers) was examined, as were the difficulties caused by some (but not all) of the new "open access journals."

Part 3 looked at some people who have intentionally abused the peer-review system. In addition to the other points made in the article, it also highlights a prime example of intentional abuse - the pal-review case in which Willie Soon and Sallie Baliunas were paid to write an error-filled "review paper" (i.e., no new research) that was shepherded through a suspect review process at a journal notorious for printing faulty (read: error-filled) papers by climate deniers funded by industry lobbyists.

The final article, Part 4, examined how the internet (which is not peer-reviewed) has been used by climate denier lobbyists to bypass the peer-review system. One tactic used is posting something on a blog that would not withstand the scientific scrutiny of peer-review, then citing it as if it were valid science. Another tactic is to take any paper that did get through peer-review (which, as Part 1 noted, is only the first, most basic review) and then promote that single paper as if it overturns 100+ years of unequivocal science and the more than 100,000 other peer-reviewed papers that demonstrate the single one to be wrong. As already noted, most denier papers don't stand up to even minor scrutiny.

The sum of these four articles, along with many other articles here on The Dake Page, provides a good background on how scientific peer-review works, what its limitations are, and how some lobbyists have tried to abuse or completely bypass the process. Be sure to follow the links in each article to sources and further information, as these help flesh out the points made.

Thursday, March 5, 2015

How peer-review works...(Part 4: Using the internet to bypass peer review)

Part 4 of this series on how peer-review works...and doesn't work focuses on the power of the internet to rapidly spread the message of published papers to the public - and why that often elevates inconsequential papers to a level of importance that isn't warranted. Click on these links to read Part 1 (basics of peer review), Part 2 (when peer-review goes wrong), and Part 3 (abusing the system) of the peer-review series.

As noted in Part 3, sometimes the peer-review system can be abused. Two big examples are the outright fraud of Andrew Wakefield and the "pal review" scheme of Chris de Freitas that allowed Willie Soon to get his start fronting papers for oil industry lobbyists. Another abuse of the system is the creation of a "pal" journal for skeptics called Energy & Environment, whose Editor-in-Chief admits to following her "political agenda" rather than scientific veracity.

These examples occurred early in what is now the ubiquitous presence of blogs where anyone can post anything they want. As noted by science authors Chris Mooney and Sheril Kirshenbaum in their book Unscientific America, "There's tons of information available [on the internet], but much of it is crap."

Blogs, of course, are not peer-reviewed, but some blogs can be reliable sources of discussion about the science. See here for how to tell reliable from unreliable blogs. But blogs have also been used to intentionally elevate inconsequential published papers to an undeserved iconic status, often to spread misinformation.

The recent publication of a paper in the Chinese journal Science Bulletin is a good example. A journal essentially unknown in climate science, with unknown standards of peer-review, published a paper ostensibly about climate science by the fake "Lord" Christopher Monckton, the now infamous Willie Soon, David Legates, and William M. Briggs, all four of whom are well-known climate deniers who do little actual climate research. The paper used a simple model (despite deniers constantly dissing models) with arbitrarily restricted parameters that essentially gave the authors the results they wanted, then declared on that basis that all the more sophisticated models used by real climate scientists were "wrong." The paper was laughable on its face, completely unsupported by its own data, full of errors, and wildly over-interpreted. In the past, most such papers would simply be ignored because they don't stand up to scrutiny. More on why this time was different in a moment.

Another paper that got more attention than it deserved was one by Roy Spencer and William Braswell published in the journal Remote Sensing in 2011. Again, a simplified model with questionable parameterization dramatically over-interpreted results into an unjustified damnation of all the other science to date. The paper didn't stand up to scrutiny. In fact, the Editor-in-Chief resigned, stating that the paper had been sent out to reviewers better known for denying the science than for doing it. Similar work by the authors had already been found lacking. Ironically, the paper Spencer is best known for is one in which he and co-author John Christy made major errors that, when corrected by others, reversed their initial conclusions. Like Willie Soon, Spencer and Christy are associated with oil-industry and libertarian lobbying groups.

There are other examples of seemingly inconsequential published papers, and many, many more examples of papers that were never published, that somehow take on a life of their own in the blogosphere. Suddenly a paper that doesn't stand up to scientific scrutiny is hailed as "blowing gaping holes in global warming alarmism" in an Op-Ed by a paid lobbyist lawyer. To any educated and informed person the immediate response is something akin to "huh??"

Sometimes the oddest things go viral in the world of Facebook and the blogosphere. Often we don't understand how something "caught on" (like the white-and-gold vs. blue-and-black dress meme), but in the case of climate denial the virality is intentional, a product of public relations/lobbyist networks designed for exactly that purpose. The process is the same one used by the tobacco industry to deny that smoking causes cancer. It goes roughly like this:

1) Professional denier lobbyists seed their network of media transmitters.

These are often Forbes, Fox News, the Wall Street Journal, and various other right-wing media outlets owned by Rupert Murdoch and like-minded media moguls. Often the pieces are written by other lobbyists (e.g., lobbyist/lawyer James Taylor of the Heartland Institute). These outlets usually have a "go-to" guy who will write up intentional misinformation about the paper (or a blog post); such "go-tos" include Christopher Booker, David Rose, Matt Ridley, and James Delingpole, who essentially relay the lobbyist talking points into the mainstream media.

2) Professional front groups further saturate the blogosphere.

Industry-supported blogs like Climate Depot, Watts Up With That, Climate Audit, JoNova, and others make sure the professional lobbyist talking points get out to the ideologically motivated amateur climate deniers. Often these front groups will print verbatim what was seeded by the paid folks in step 1. These front groups may also pay people to be "sock-puppets," that is, commenters on Facebook and other blogs who insert and reiterate denier misinformation into the public discussion.

3) Amateur climate deniers plagiarize and completely saturate the blogosphere.

The reason the professional lobbyists and front groups spend so much time putting out misinformation is that they know the amateur climate deniers won't understand it enough to see how obviously bogus it is. Professional deniers also know that most amateur deniers simply don't care that so much of the information is blatantly, and laughably, false. Professional deniers, in fact, count on this willful ignorance. So amateur deniers simply parrot and plagiarize the talking points fed to them and repeat them ad nauseam no matter how many times the falsehoods are corrected.

This process can take an insignificant paper and make it the most important thing on Earth. The fact that most papers usually only examine a small piece of a huge puzzle is either ignored or lost to ignorance. In the past, insignificant or faulty published papers would simply fade away; today those same papers may be given a false level of importance. These are joined by papers that aren't even papers - blog posts, propaganda pieces, opinion pieces, and even random quotes taken out of context and given an entire story line completely divorced from (and often opposite of) the actual story.

And this is done intentionally by the climate denier lobbyists.

[Note: Peer-review graphic can be seen larger at http://undsci.berkeley.edu/article/howscienceworks_16]

Thursday, February 19, 2015

How peer-review works…and doesn’t work (Part 3: Abusing the system)

This is a continuation of the series on how peer-review works…and doesn’t work. Part 1 looked at the basics of how the peer-review process works for scientific papers - what it does, and what it doesn't do. You can read the entire Part 1 article here. Part 2 looked at what happens when peer-review goes wrong. You can read the entire Part 2 article here. Now we’ll take a look at some cases where the peer-review system has been abused. 

Before starting Part 3, however, it must be stressed that any inadequacies discussed so far are exceptions to the rule. Peer-review almost always does what it is supposed to do – a first screen to make sure papers represent legitimate research and are fully documented so that they can be assessed by the larger scientific community. It’s rare that peer-review “fails” (see Part 2). It’s even rarer that papers are retracted once they are published. A study published in 2012 examined the 2,047 retractions of papers indexed in the PubMed database (mostly biomedical papers). That sounds like a lot until you realize that this was out of over 21 million published papers in that database, meaning less than 0.01% of published papers were retracted. Retraction is rare even though the bar for retracting papers has been lowered (i.e., it's much easier and faster to retract now than previously).
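The arithmetic behind that percentage is easy to verify. This snippet simply re-runs the division using the counts cited above (the 21 million is the study's approximate total, not an exact count):

```python
# Retracted papers as a share of all papers indexed in PubMed,
# per the counts in the 2012 study cited above.
retractions = 2047
papers_indexed = 21_000_000  # "over 21 million" (approximate)

rate = retractions / papers_indexed
print(f"{rate:.4%}")  # prints 0.0097%, i.e., fewer than 1 paper in 10,000
```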
  
That said, let’s look at the cases where papers have been published that probably shouldn’t have been. The new problem of “open access journals,” i.e., journals that publish for a fee, was mentioned in Part 2. The biggest concern here is that some of these “journals” are simply predatory publishers that will post online anything sent to them as long as the fee is paid. These predatory journals will likely disappear as people refuse to be associated with them, especially since they obviously aren’t really peer-reviewed. So while they may be a big headache right now, the bad eggs will likely be weeded out through, fittingly enough, peer pressure.

Which gets us to the real problem. The following examples highlight some of what can happen when unscrupulous people try to take advantage of the system. 

The most famous example of “pal review” as discussed in Part 2 is the publication of a climate-related paper by Soon and Baliunas in the journal Climate Research in 2003. The paper was shuttled through the review process by fellow climate denier Chris de Freitas, an editor for the journal. Once published, the paper was roundly criticized by the scientific community as unsupportable on its face. Further review revealed that Soon and Baliunas were funded by the fossil fuel industry, that the conclusions stated were inconsistent with their own data (which were inconsistent with reality), and that de Freitas had a history of pushing through papers by climate deniers despite their obvious failings. Details of the controversy can be read here. Since then, Soon and a small group of lobbyist-associated authors have been implicated in a series of questionable papers that misrepresent the science. Often these papers are published in a journal called Energy & Environment, a non-science, pal-review type of journal whose editor has acknowledged that papers are published based on political motives.

Following publication of the Soon and Baliunas paper described above, and in one or two other cases where apparently fraudulent papers were published in peer-reviewed journals, senior editors chose to resign. While the reputations of any scientists involved can be severely damaged, for some this doesn’t appear to matter much as long as the lobbyist funding continues (e.g., Soon was recently accused of violating basic ethics conventions by failing to disclose his fossil fuel industry funding in a paper he co-authored with the usual band of climate deniers).

There isn’t much that can be done about such papers other than to keep strengthening peer-review standards, a difficult proposition given the thousands of journals that now compete for papers to publish. Sometimes the papers are retracted, but as noted above, retractions are rare, though increasing.  

This latter point can actually work against legitimate scientists. In the past, scientific papers were scrutinized and critiqued by other scientists, and that feedback helped move the science along. Now that papers are more accessible to the general public through blogs, the public is more likely to get a "spun" version of the paper than the actual science. While press releases by the scientific organizations may be poorly worded, the real problem is when bloggers, either intentionally or unintentionally, get the gist of the paper's findings wrong. So the public may be misinformed. Worse, the papers are read by political and lobbying interests, which would be okay if they honestly evaluated the science. But that isn’t the case. Most political operatives and lobbyists have a particular policy view and are not hesitant to misrepresent the science if they feel doing so will help them achieve their preferred policy action – which in most cases is no action at all. These operatives and lobbyists can exert tremendous pressure on journals; in at least one recent case that pressure led to a legitimate, scientifically robust paper being retracted solely because the journal feared an expensive legal battle with lobbyists. This sets a dangerous precedent.

In addition, there are many cases of politicians saying things about science that are not scientific. Senator James Inhofe is notorious for arguing that the science of man-made climate change is all a hoax, originally basing this politically convenient opinion on the 2003 Soon and Baliunas paper, which many suggest was the main reason the petroleum industry funded the paper in the first place. Not surprisingly, Inhofe’s home state of Oklahoma is highly dependent on the oil and gas industry, and that industry routinely lavishes significant campaign funding upon him. This is true of other politicians as well. And yes, health and environmental advocacy groups also financially support their preferred politicians and feed them information that supports their advocacy. The main difference is that health and environmental lobbyists generally pressure politicians to listen to the scientists, while fossil fuel lobbyists generally pressure politicians to listen to, well, the fossil fuel lobbyists and their small cadre of associated scientists who disagree with the overwhelming consensus of the science.

But that’s a topic for another post.  

To recap, the last three weeks have taken a look at the peer-review process – what it is, and what it isn’t. We’ve looked at some ways that peer-review can “fail,” and some ways that people have abused the process. Due to the space limitations of a blog format, these discussions are necessarily incomplete. The links provide more detail on some of the points being made, but there are many others that could also be discussed in greater depth. The main points to understand are that peer-review is merely the first step in the scientific evaluation process, and only after publication can the greater scientific community scrutinize the studies being presented. Sometimes bad papers get published, but most of the time they are inconsequential. Attempts at fraud do happen, and while relatively rare, can have significant impacts (e.g., see Andrew Wakefield).
  
Overall, peer-review works, and is necessary. There are challenges for the future because of predatory practices related to the “open access” nature of the worldwide web, but these are likely to be worked out so that some combination of public access and quality assurance can be achieved. 

In Part 4 we'll take a look at how some papers that might have been inconsequential in the past can now be artificially elevated into a level of importance they don't merit. We'll explore the role of the internet in making this happen, both for good and for evil.

[Note: Peer-review graphic can be seen larger at http://undsci.berkeley.edu/article/howscienceworks_16]

Thursday, February 12, 2015

How peer-review works…and doesn’t work (Part 2: When peer-review goes wrong)

Last week in Part 1 we looked at the basics of how the peer-review process works for scientific papers - what it does, and what it doesn't do. You can read the entire Part 1 article here.

The gist is that peer review is the first step in the evaluation process, merely an attempt to ensure that papers actually reflect some valid scientific study and are documented sufficiently so that others can more fully determine their veracity and relationship to the existing relevant science. Only after a paper is published can it be reviewed by the greater scientific community in the depth necessary to fully evaluate it.

But what happens when peer review doesn't weed out the woefully unscientific (you know, the kind of "science" stuff Uncle George posts on his blog, right next to the conspiracy du jour)? How do some papers that are unsupportable on their face get published? How does peer review "fail?" The reasons are varied.

First, it may not have failed at all. As noted, the 2 or 3 peer-reviewers evaluate the paper for completeness, plausibility, logic, and description of methodology, results, statistics, discussion, and conclusions. If the paper appears valid it generally is accepted for publication. Peer-reviewers simply can't assess the entire body of related science to determine if the paper is valid; that's what the post publication review by the broader experts in the field accomplishes. Only after that broad scrutiny may some errors in analysis, interpretation, or other inconsistencies be found. Sometimes some major errors, even on rare occasions fraud, are identified. But mostly papers that are found lacking simply fade away without being cited by others.

Second, the quality standards of journals may vary. Each field generally has a journal or two or three considered the highest echelon of excellence, and there is high demand to be published in these journals because they are the most prestigious. But that demand overwhelms the space available, so these journals can select only the highest quality and most important papers for publication. Since there are hundreds of other journals available, authors can usually find a place to publish their research. While most of these journals also have high standards, some are perhaps more interested in collecting page charges than in ensuring the quality of the paper. Sometimes bad papers get published, and by bad I mean papers that should not have been published because they lack scientific robustness. In most cases, these papers also simply fade into obscurity as the scientific community sees no reason to cite them. In some cases, bad papers have been retracted long after they were published.

Third, there is what some have euphemistically called "pal-review." Scientists often collaborate on research, which is why published papers commonly have many authors instead of just one or two. That opens up the potential for peer-reviewers to have collaborated on other papers with the main author (though you aren't allowed to "peer-review" a paper you co-authored or worked on directly). But this isn't what constitutes pal-review. Pal-review is when some pal (i.e., friend/conspirator) abuses their position as editor of a journal to slip an otherwise obviously faulty paper into press. Sometimes the editor simply finds amenable reviewers to rubber-stamp the paper, but in rare cases the editor may skip any semblance of peer-review. Pal-review has been documented, for example, in the publication of climate denial papers. [In Part 3 we'll take a look at specific examples of climate denier abuse of the peer-review process.]

Finally, a new problem has cropped up with the advent of "open access journals." As any scientist knows, most journals are available only to people who can afford to pay for them (just as most books have to be purchased before reading). And they can be expensive. Many journals are accessible if you join the associated scientific society (e.g., members of the Society of Environmental Toxicology and Chemistry get access to two highly acclaimed journals as part of their membership). Students can usually access these journals in their university libraries. All of this sits behind some sort of paywall, and the general public generally can't get access to most newly published scientific papers. Hence the impetus for the open access movement.

Under open access, online-only journals allow anyone to read and download every paper they publish. That's a great boon to public accessibility, but it presents an obvious potential for quality control problems. Like regular journals, open access journals have a wide range of quality standards. But there are also open access journals that can be called "predatory" because they will publish anything as long as the author pays the publication fee. This has led to some high-profile examples of nonsense papers (including one that consisted entirely of repeated expletives) being "published." Clearly, there is a problem with this “pay-per-publish” model, one that raises questions about the integrity of a publishing process that skips real peer review. How deep a problem remains to be seen, with the key question being how journals of all types can ensure high quality control of published papers.

These examples give us some insight into the limits of peer-review, and the limits of open access. However, it must be emphasized that in the vast majority of cases, peer-review of scientific papers as a screen for publication works. Most scientific papers are incremental, tearing off a small piece of a very big canvas to examine and investigate. Individual papers rarely make a huge difference (though there are plenty of examples where single papers are deeply influential). Normally any given paper goes into the mix with all the other relevant papers, and only the sum total of all the information contained therein tells us a more or less whole story. So most of the time a not-so-robust paper getting published isn't a big deal.

That said, there are times when editors have quit after their failure to properly screen faulty papers. Sometimes papers are retracted after publication. Occasionally there is even fraud (though any scientist caught engaging in fraud is quickly retired into driving a taxi or some other non-scientist field). The cases of intentional abuse of the peer-review system are rare, but important enough that Part 3 of this series on peer-review will take a closer look at how some people have tried, and sometimes succeeded, in abusing the system. Given that it isn't only scientists, but political and lobbying interests, that are trying to discredit the science, Part 3 will be a critical read for all practicing scientists.

[Note: Peer-review graphic can be seen larger at http://undsci.berkeley.edu/article/howscienceworks_16]

Thursday, February 5, 2015

How peer-review works…and doesn’t work (Part 1)


You’ll see the term “peer-review” a lot on these pages, as well as on both scientific and denialist blogs, and in the media. Unfortunately, the term is often used incorrectly, sometimes on purpose, but mostly because the process isn’t clear to the public. This extended post will take a shot at explaining what peer-review is…and what it isn’t. We’ll talk about how it works…and why it sometimes doesn’t work.

In its most basic sense, peer-review is when a scientist’s research paper is evaluated by his “peers” to determine if it meets the basic standards required for publication in a scientific journal. But this simple definition doesn’t really explain the process, so let’s explore that in greater depth.

To get us started, let’s define what we mean by “peer.” We’re not talking about the kind of “peer” we think of when we say “a jury of our peers.” In that situation, it simply means other citizens. For a jury you often want to get some cross-section of the community – college educated and not, male and female, white collar and blue collar employment. Everyone in the community is your “peer” and the final jury empaneled is largely a factor of the random order of the selection from the jury pool (plus a little selective tweaking by lawyers for the defendant and plaintiff).

In science a “peer” is somewhat different. To be a peer you need to have knowledge of the highly specialized subject of the paper being reviewed. If the paper is about climate science, you obviously need to have sufficient knowledge of climate science to be able to review the paper effectively. Sending a climate paper to a brain surgeon for review makes no more sense than going to a chiropractor to have your cows milked. With that in mind, a “peer” would be another climate scientist. [Needless to say, if the paper is about brain surgery, you would not send it to a climate scientist for review.]

Every legitimate (i.e., peer-reviewed) journal has a staff of editors to manage the process of review and publication. These editors receive research papers from the authors, determine which scientists out there have the necessary expertise to effectively review each paper, and coordinate the reviews and feedback to the authors. Most journals will send the paper to three peer-reviewers, though particularly important and/or potentially contentious papers may be sent to four or even five. While the editor is the go-between, the authors generally do not know who the proposed paper has been sent to for peer review. In many cases, but not all, the peer reviewers also don’t know the name of the author who submitted the paper. These peers review the paper and provide their comments and recommendations: publish as is, publish if minor errors and/or questions are addressed, publish if major errors are addressed, or reject because the paper fails to meet even the most basic standards of veracity.

Okay, so what are these peer reviewers looking for? Mostly they are looking to ensure that the research has been conducted, reported, and evaluated adequately. And it has to be research. Blogs don’t normally get any peer-review, which is why most of what you read on blogs is opinion and not science. [But, some blogs can discuss the published science – see article here for how to discern a reliable blog from an unreliable blog.]

Peers who are reviewing a potential paper for publication ask themselves a series of questions. The first question is always, “does this paper fit into the scope of the journal?” Since journals tend to focus on narrow topics, papers that don’t fit that topic shouldn’t even be considered. Luckily, there are a myriad of journals with overlapping scopes, so a good research paper should easily be able to find a place to be published. With that as a given, the questions peer reviewers ask include: Is the scope of the research study clearly presented? Do the authors review the prior literature on the topic? Are the stipulated premises valid? Do they adequately explain the methodology so others can see how they conducted the study? Do they present the results in full and clearly? Do the data tables and graphics look correct? Are the statistical procedures clearly explained and valid? Are the conclusions reached logically derived from the data presented?

While that sounds like a lot, the idea of peer-review is not to approve or disapprove of the research or conclusions. The goal is merely to ensure that the paper documents and demonstrates a well-thought-out and conducted scientific study. If it does, then it usually is published in the journal.

Done, right?

Actually, getting through this initial peer-review should be considered only the first step in the scientific review process. What most people think of as peer-review just makes sure the paper appears sufficiently documented, is a significant contribution to the science, and should be made available to the scientific community at large through publication. But only then – once the paper is out in what scientists call “the literature” – does it begin to be closely scrutinized by the broader scientific community. Scientists in the field will read it and evaluate it and, often, debate it. Are the author’s points defensible? Does it agree or conflict with the existing literature? Does the new paper enhance our knowledge? Are there any mistakes the initial peer-reviewers missed? Does it stand up to scrutiny?

This could go on for some time. If the paper makes important points, especially if it changes our view of the science, it will get cited by other papers whose authors do follow-up research. Papers cited a lot tend to be important papers.

Many people have the impression that getting a paper peer-reviewed means it is “science.” That isn’t exactly true. “Science” isn’t a single scientific paper; science is the compendium of scientific papers published on a particular topic.

This point is critical.

Scientific research usually works by increments. Individual studies don’t investigate, for example, “is global warming happening?” That is too big a chunk to evaluate. Instead, a study may test whether CO2 can make an atmosphere warmer. This was done in many studies in the laboratory by many different independent researchers. Each study is written up and published in scientific journals.  There are dozens (actually, hundreds) of studies in the last 150 years that examine this exact same question using many different methodologies, all of which are published in journals. The sum total of all of those studies – each looking at the same thing from different angles – tell us without any doubt that yes, CO2 can make an atmosphere warmer. 

Other studies may look at “how much warmer?” Or “if it works in the lab, does it also work in the global atmosphere?” Or any number of related questions. Still other studies look at the effect of clouds, the impacts of a warmer climate on extreme weather events, the acidification of the oceans, etc. All researched, all published, all scrutinized by dozens or hundreds or even thousands of other scientists. Eventually the data are so overwhelming and so clear and undeniable that everyone recognizes the fact of the science. That is the case for evolution, gravity, and yes, man-made climate change.

One more aspect of peer-review is important – it never stops. Scientists continue to conduct new studies to examine new questions. The new results are assessed in the context of all the other results – do they agree or disagree with our current understanding? Do they enhance our knowledge? Do they change our understanding of the science? All of these questions are revisited with every new study and every new chance at peer-review. In the case of climate change, new studies overwhelmingly confirm that human activity is warming the climate system.

Finally, I mentioned earlier that the initial peer-review process (deciding whether to publish or not) doesn’t make a final determination about the defensibility of the paper. That comes afterward, when any questions from other scientists will have to be addressed by the authors. That is part of the process. But sometimes, a paper gets through peer-review that isn’t supportable even on its face. In the next post I’ll talk about what happens when unsupportable papers get published. I’ll also talk about why some journals have intentionally low standards or “pal-review” systems. Lastly, I’ll talk about the challenges created by a new breed of journals – the “pay-per-publish” type that raises questions about the integrity of the “non-peer-reviewed” publishing process.

[Note: The above is Part 1 of a series on peer-review, how it works and doesn't work, and how some people try to influence the public through bypassing peer-review. Click on the links to read Part 2, Part 3, and Part 4.]

[Note: Peer-review graphic can be seen larger at http://undsci.berkeley.edu/article/howscienceworks_16]

Friday, August 17, 2012

EPA Announces Availability of Risk Assessment Plans for 2012 Work Plan Chemicals

This morning the EPA published Peer Review Plans for the risk assessments on the seven chemicals previously identified as 2012 work plan chemicals. According to EPA, "the plans, which form part of the Agency's Peer Review Agenda, describe the focus of the risk assessment being conducted on each chemical, indicate how peer reviewers will be selected and how the peer review will be conducted, and provide the time line for the reviews."

The External Review Drafts of the plans still need to be published in the Federal Register, and when that happens and the risk assessments become officially available there will be a 60-day public comment period. There will also be conference calls of the peer review panel in which the public can provide additional comments. 

EPA notes that the public can access and submit comments on the individual peer review plans for each chemical by using the following links: