To evaluate that possibility, a new paper published in the journal Theoretical and Applied Climatology examines a selection of contrarian climate science papers and attempts to replicate their results. The idea is that accurate scientific research should be replicable, and through replication we can also identify any methodological flaws in that research. The study also seeks to answer the question: why do these contrarian papers come to a different conclusion than 97% of the climate science literature?
This new study was authored by Rasmus Benestad, myself (Dana Nuccitelli), Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland, and John Cook. Benestad (who did the lion’s share of the work for this paper) created a tool using the R programming language to replicate the results and methods used in a number of frequently referenced research papers that reject the expert consensus on human-caused global warming. Using this tool, we discovered some common themes among the contrarian research papers.
Cherry picking was the most common characteristic they shared. We found that many contrarian research papers omitted important contextual information or ignored key data that did not fit the research conclusions. For example, in the discussion of a 2011 paper by Humlum et al. in our supplementary material, we note,
The core of the analysis carried out by [Humlum et al.] involved wavelet-based curve-fitting, with a vague idea that the moon and solar cycles somehow can affect the Earth’s climate. The most severe problem with the paper, however, was that it had discarded a large fraction of data for the Holocene which did not fit their claims.
When we tried to reproduce their model of the lunar and solar influence on the climate, we found that the model only simulated their temperature data reasonably accurately for the 4,000-year period they considered. However, for the 6,000 years’ worth of earlier data they threw out, their model couldn’t reproduce the temperature changes. The authors argued that their model could be used to forecast future climate changes, but there’s no reason to trust a model forecast if it can’t accurately reproduce the past.
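The discarded-data problem can be illustrated with a toy sketch. Everything below is invented for illustration (synthetic data, not the Humlum paper’s actual series or the replication code from our paper): a cycle model fitted only to a recent window can look accurate in-sample yet fail badly on the earlier data it was never asked to explain.

```python
import numpy as np

# Hypothetical illustration: a synthetic 10,000-"year" series where an
# apparent cycle only exists in the most recent 4,000 years.
rng = np.random.default_rng(0)
t = np.arange(10_000.0)
truth = np.where(t >= 6_000, np.sin(2 * np.pi * t / 1_000), 0.0)
y = truth + rng.normal(0, 0.05, t.size)  # add small observational noise

# Fit a fixed-period sinusoid by linear least squares, but ONLY on the
# recent window -- mimicking a model built after discarding older data.
recent = t >= 6_000
X = np.column_stack([np.sin(2 * np.pi * t / 1_000),
                     np.cos(2 * np.pi * t / 1_000),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X[recent], y[recent], rcond=None)
pred = X @ coef

rmse_in = np.sqrt(np.mean((pred[recent] - y[recent]) ** 2))
rmse_out = np.sqrt(np.mean((pred[~recent] - y[~recent]) ** 2))
print(f"in-sample RMSE:     {rmse_in:.3f}")   # small: the fitted window looks good
print(f"out-of-sample RMSE: {rmse_out:.3f}")  # much larger: the model fails on the rest
```

The in-sample error is essentially just the noise level, while the out-of-sample error is an order of magnitude larger, which is exactly why a model that can’t reproduce the withheld past offers no reason to trust its forecasts.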
We found that the ‘curve fitting’ approach used in the Humlum paper is another common theme in contrarian climate research. ‘Curve fitting’ describes taking several different variables, usually ones with regular cycles, and adjusting their amplitudes, phases, and periods until the combination fits a given curve (in this case, temperature data). It’s a practice I discuss in my book, and one about which mathematician John von Neumann once said,
With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.
Good modeling will constrain the possible values of the parameters being used so that they reflect known physics, but bad ‘curve fitting’ doesn’t limit itself to physical realities. For example, we discuss research by Nicola Scafetta and Craig Loehle, who often publish papers trying to blame global warming on the orbital cycles of Jupiter and Saturn.
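Von Neumann’s elephant can be demonstrated with another toy sketch (again, all data and periods here are invented, not drawn from any of the papers we examined): if the assumed cycle periods are free choices rather than constrained by physics, the apparent fit improves as parameters are added, even when the series being fitted contains no real cycles at all.

```python
import numpy as np

# Hypothetical sketch of the 'curve fitting' pitfall: a random walk has
# no genuine cycles, yet a stack of arbitrarily chosen sinusoids can
# still "explain" much of it in-sample.
rng = np.random.default_rng(42)
t = np.arange(150.0)                        # 150 "years" of data
y = np.cumsum(rng.normal(0, 0.1, t.size))   # a random walk: pure noise accumulation

def fit_cycles(n_periods):
    """In-sample variance fraction captured by n_periods arbitrary cycles."""
    periods = np.linspace(5, 80, n_periods)  # made-up 'orbital' periods
    cols = [np.ones_like(t), t]              # intercept + linear trend
    for p in periods:
        cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

for n in (2, 5, 20):
    print(f"{n:2d} assumed cycles -> in-sample R^2 = {fit_cycles(n):.2f}")
```

The fit keeps improving as cycles are added, yet none of it reflects any real mechanism, which is why a good in-sample fit from unconstrained parameters tells us nothing by itself.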
This particular argument also displays a clear lack of plausible physics, which was another common theme we identified among contrarian climate research. In another example, Ferenc Miskolczi argued in 2007 and 2010 papers that the greenhouse effect has become saturated, but as I also discuss in my book, the ‘saturated greenhouse effect’ myth was debunked in the early 20th century. As we note in the supplementary material to our paper, Miskolczi left out some important known physics in order to revive this century-old myth.
This represents just a small sampling of the contrarian studies and flawed methodologies that we identified in our paper; we examined 38 papers in all. As we note, the same replication approach could be applied to papers that are consistent with the expert consensus on human-caused global warming, and undoubtedly some methodological errors would be uncovered. However, these types of flaws were the norm, not the exception, among the contrarian papers that we examined. As lead author Rasmus Benestad wrote,
we specifically chose a targeted selection to find out why they got different answers, and the easiest way to do so was to select the most visible contrarian papers … Our hypothesis was that the chosen contrarian paper was valid, and our approach was to try to falsify this hypothesis by repeating the work with a critical eye.
If we could find flaws or weaknesses, then we would be able to explain why the results were different from the mainstream. Otherwise, the differences would be a result of genuine uncertainty.
After all this, the conclusions were, to my mind, surprisingly unsurprising. The replication revealed a wide range of errors, shortcomings, and flaws involving both statistics and physics.
You may have noticed another characteristic of contrarian climate research – there is no cohesive, consistent alternative theory to human-caused global warming. Some blame global warming on the sun, others on orbital cycles of other planets, others on ocean cycles, and so on. There is a 97% expert consensus on a cohesive theory that’s overwhelmingly supported by the scientific evidence, but the 2–3% of papers that reject that consensus are all over the map, even contradicting each other. The one thing they seem to have in common is methodological flaws like cherry picking, curve fitting, ignoring inconvenient data, and disregarding known physics.
If any of the contrarians were a modern-day Galileo, he would present a theory that’s supported by the scientific evidence and that’s not based on methodological errors. Such a sound theory would convince scientific experts, and a consensus would begin to form. Instead, as our paper shows, the contrarians have presented a variety of contradictory alternatives based on methodological flaws, which therefore have failed to convince scientific experts.
Human-caused global warming is the one exception: it’s based on overwhelming, consistent scientific evidence, and it has therefore convinced over 97% of scientific experts that it’s correct.