Thursday, August 27, 2015

The Irony of Climate Deniers Attacking Published Journal Articles

A new peer-reviewed paper was published recently in the scientific journal Theoretical and Applied Climatology. Its title is "Learning from Mistakes in Climate Research" and the objective is to survey recent "denier" papers, that is, the rare papers that reject the unequivocal scientific consensus that human activity is warming our climate system. The authors - seven climate scientists and science communicators from Norway, the Netherlands, the United States, the UK, and Australia - highlighted the errors in fact and logic common to the selected denier papers.

Not surprisingly, the denier lobbyists and their network of front groups and bloggers attacked the study. The lines of attack ignored the substance of the points being made and focused instead on the paper's publishing history and the impact factor of the journal. These attacks are about as ironic as you can get, given that deniers rarely even attempt to publish in actual scientific journals (preferring instead to "publish" opinion pieces in business blogs). The one journal they do publish in most often has an impact factor that is essentially non-existent. As the proverb goes, those who live in glass houses should not throw stones.

But it was rejected by other journals?

Deniers (mainly non-scientist ideologues and conspiracy theorists on Facebook and other non-scientific venues) are trying to denigrate the study by suggesting it was rejected by other journals. Their false conclusion is that if a paper is rejected by other journals it must somehow be wrong. That false conclusion shows an incredible ignorance of how scientific publishing works.

In previous posts I discussed how peer review works (and how deniers try to abuse the process), so I won't repeat the basics here. Scientific journals reject thousands of papers every year for reasons that have nothing to do with whether a paper is good or bad. In every field there are journals that professionals consider the most prestigious, so researchers tend to submit their papers to those journals first. That demand runs up against each journal's limited space, and the most prestigious journals reject the vast majority of the papers they receive simply because there is no room to print them. Journals may also reject papers because the topic doesn't fit the narrow scope of that particular journal.

In short, rejection in scientific journals is common, and expected.

This particular paper was likely rejected initially because it is an unconventional paper that doesn't fit the scope of most journals. Most climate studies collect data on temperature, sea level, ice thickness, or hundreds of other measurable factors, run the statistics, and report the results. This paper is instead a survey of other papers, selected because they represent the tiny percentage of published studies rejecting the unequivocal science. The goal was to see if there were commonalities in their methods or logic. Such a survey has limitations (as all studies do), and the authors acknowledge them. The observations they make may be incomplete because the survey didn't look at every denier paper, but they are valid.

The irony here is that deniers rarely publish scientific papers, and when they try to publish they are often rejected. Those rejections may involve the same factors described above, but they also include rejections based on the lack of veracity of the data presented and flaws in the logic used to derive conclusions. As the "Learning from Mistakes" paper highlights, even the rare denier papers that do make it through the publication process have serious flaws that invalidate their conclusions. In fact, denier conclusions often don't even agree with the data presented in their own papers, never mind with reality.

But the journal has a low "impact factor"?

These same deniers have suggested that the journal the paper was published in has a "low impact factor." They falsely conclude from this that the journal is not to be trusted. That's silly, and inaccurate.

To begin with, the journal in which this paper was ultimately published, Theoretical and Applied Climatology, is put out by Springer Science, a renowned publishing company in business since 1842. The journal is a continuation of journals that have been published since 1949. In recent years the journal has evolved into an Open Access format, that is, the papers are available in full as PDFs for free to the public.

An "Impact Factor" is a measure of the average number of citations of recent articles, that is, how often are those articles cited by other authors in newer papers. It's a rather arbitrary metric with many criticisms, and there are other metrics that are also used. It's use is based on the assumption that papers that are cited more are somehow more important, but impact factors tend to be biased towards journals that publish review articles (people cite review articles instead of each individual study reviewed) and journals that publish cutting edge news (like Science and Nature). The more specialized the journal, the fewer opportunities there are for citing it.

The reason deniers have focused on this one metric is because they think it allows them to dismiss the paper without having to address any of its points. That, and the fact that the denier lobbyists sent word out via their blogging networks to tell everyone to focus on it.

The most recent impact factor for Theoretical and Applied Climatology in 2014 was 2.015. This falls within the range of most climate journals.

Not surprisingly, the journal deniers prefer for their rare publication attempts, Energy & Environment, had an impact factor of 0.319, which ranked it 90th out of 93 journals in its category. Hardly something to brag about, especially since its editor admitted to "following her political agenda" in choosing which papers to publish (mostly from a small group of deniers). Of course, deniers' favorite platform for "publishing," that is, blogs, has an impact factor of zero because blogs aren't peer-reviewed at all. Which is why virtually everything in denier blogs is wrong.

There are many more instances of denier ignorance and double standards demonstrating that they don't understand most of what they parrot from denier blogs. I've cataloged many of them on this page under Exposing Climate Denialism.

The main goal of the denier lobbyists and their blogger network (including Facebook trolls) is to deflect from the valid points made in the journal article "Learning from Mistakes in Climate Research." Those "mistakes" made by deniers may be intentional, as the history of people like Willie Soon and Richard Lindzen suggests. They include "cherry picking," "curve-fitting," and other factual and logical errors, such as drawing conclusions that aren't even supported by the data they themselves present. This likely happens because they start with the conclusion they want and try to force-fit cherry-picked data to support it.

There's a word for that.

Take the time to read the article; it is important to see what the denier lobbyists have tried to hide from the public. Dana Nuccitelli, one of the co-authors of the paper and a regular contributor to the Guardian, has provided a nice summary of their findings. Because the journal is open access, you can download the full paper from its website and read it yourself. And here is the PDF copy.