Editor’s Note: Patrick Michaels, Ph.D., is a past president of the American Association of State Climatologists and for 30 years was a research professor of environmental sciences at the University of Virginia. Michaels was honored with the Courage in Defense of Science Award from The Heartland Institute at the 13th International Conference on Climate Change (ICCC-13).
Burnett: You coauthored a book titled Lukewarming: The New Climate Science that Changes Everything. What is the thesis of the book?
Michaels: Global warming is often presented as one of two alternative visions: left unattended, it will bring about an unmitigated disaster with tremendous consequences to humanity and the planet; or there is really “no such thing”—meaning little to no warming and little to no influence of human greenhouse-gas emissions on climate.
There is a third way. I call it the “lukewarm” synthesis, which holds that the surface temperature is indeed a bit warmer than it was around 1900, that people have something (but hardly everything) to do with that change, and that a synthesis of models and observations leads to the inescapable conclusion that future warming—within the policy-relevant window—is likely to be modest and possibly quite beneficial.
Burnett: Your ICCC-13 presentation discussed the misuse of climate data. Could you give a couple of examples of how climate data has been manipulated or misused to push alarming climate claims?
Michaels: Climate records are in a constant state of revision—which seems odd, as there is only so much data. One blatant example is the so-called “pause-buster” dataset published by Thomas Karl and colleagues in Science in 2015.
It had been well known that, for largely unknown reasons, surface warming “stopped” after the 1997-1999 El Niño/La Niña cycle. Statistically, there was no significant warming trend from then through late 2014.
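To see what “no significant warming trend” means operationally, here is a minimal sketch of an ordinary least-squares trend test. The monthly anomalies, noise level, and seed below are invented for illustration; a real analysis would use the observed record and would also adjust for autocorrelation.

```python
import numpy as np
from scipy import stats

# Synthetic monthly temperature anomalies for 1999 through 2014, built
# with no underlying trend (illustration only, not the actual record).
rng = np.random.default_rng(2)
years = 1999 + np.arange(16 * 12) / 12.0            # 192 monthly time steps
anomaly = 0.25 + rng.normal(0.0, 0.12, years.size)  # flat mean plus noise

# Ordinary least-squares fit; linregress reports the slope and a p-value
# for the null hypothesis that the true slope is zero.
res = stats.linregress(years, anomaly)
print(f"trend: {res.slope * 10:+.3f} deg/decade, p-value: {res.pvalue:.2f}")

# If the p-value exceeds 0.05, the fitted slope cannot be distinguished
# from zero at the usual confidence level -- the sense in which a period
# is said to show "no significant warming trend."
```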
In the 1990s, Karl developed a new global history replacing satellite-sensed sea-surface temperatures with, in part, readings from cooling intake tubes onboard oceangoing vessels. This is an inferior dataset, as ships are of different composition and size, with different thermal characteristics and cooling tubes at different depths. At the same time, Karl’s employer, the National Oceanic and Atmospheric Administration (NOAA), deployed a large fleet of drifting buoys with calibrated instrumentation to measure surface temperatures over the ocean. The new dataset took the very good, unbiased buoy temperatures and raised them all by 0.2 degrees Fahrenheit to match the bad intake-tube temperatures. As more and more buoys were deployed, this was guaranteed to produce a warming trend.
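The arithmetic behind that guarantee is simple enough to sketch. The toy calculation below uses invented numbers (a flat true temperature, a hypothetical buoy share growing from 10 to 90 percent, and a 0.12-degree-Celsius ship bias, roughly the 0.2-degree-Fahrenheit figure above) to show how raising buoy readings to match warm-biased ship readings adds an increment that grows as buoys proliferate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration only -- not NOAA's data or code. A flat "true"
# sea-surface temperature is sampled by two instrument types.
years = np.arange(1998, 2015)                         # 17 annual values
true_sst = 20.0 + rng.normal(0.0, 0.03, years.size)   # no underlying trend

buoy_share = np.linspace(0.1, 0.9, years.size)  # hypothetical buoy growth
SHIP_BIAS_C = 0.12  # assumed warm bias of ship intake-tube readings

# Unadjusted blend: warm-biased ships mixed with unbiased buoys.
blend_raw = (1 - buoy_share) * (true_sst + SHIP_BIAS_C) + buoy_share * true_sst

# Adjusted blend: buoy readings raised to match the warmer ship readings.
blend_adj = true_sst + SHIP_BIAS_C

for name, series in (("raw blend", blend_raw), ("adjusted blend", blend_adj)):
    slope = np.polyfit(years, series, 1)[0] * 10  # degrees per decade
    print(f"{name}: {slope:+.3f} deg/decade")

# The adjustment adds SHIP_BIAS_C * buoy_share to each year; because the
# buoy share grows over time, so does the added amount, shifting the
# fitted trend upward relative to the unadjusted blend even though the
# underlying temperature is flat.
```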
Another strange example relates to the oft-repeated nostrum that tropical cyclones—hurricanes, typhoons, etc.—are becoming more severe. Disturbingly, the third (2014) National Assessment of climate change impacts on the United States, published quadrennially by the U.S. Global Change Research Program, showed a graphic of hurricane power from 1970 through 2009, superimposing a red, upward-pointing line from 1982 through 2009, with the obvious intent of showing a stark increase in recent hurricane power.
Why did the data end in 2009 in a study published in 2014? It would have been easy to include data from 2010 through 2013. But had the assessment done so, the upward trend would have vanished, with activity returning to its 1970 through 1982 baseline.
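The sensitivity of a fitted trend to the chosen endpoint is easy to demonstrate. The series below is invented (it is not the assessment’s hurricane-power data), shaped to rise into the 2000s and fall back afterward; fitting a trend through 2009 versus through 2013 gives very different slopes.

```python
import numpy as np

# Invented cyclone-power-like index (not the assessment's data): flat
# through 1982, rising into the 2000s, then falling back toward baseline.
years = np.arange(1970, 2014)
index = np.interp(years, [1970, 1982, 2005, 2009, 2013],
                  [1.0, 1.0, 1.9, 1.7, 1.0])
index = index + np.random.default_rng(1).normal(0.0, 0.05, years.size)

def decadal_trend(start, end):
    """Least-squares slope of the index over [start, end], per decade."""
    mask = (years >= start) & (years <= end)
    return np.polyfit(years[mask], index[mask], 1)[0] * 10

print(f"1982-2009 trend: {decadal_trend(1982, 2009):+.2f} per decade")
print(f"1982-2013 trend: {decadal_trend(1982, 2013):+.2f} per decade")

# Stopping the fit in 2009 captures only the rising portion and yields a
# steep slope; adding 2010-2013 pulls the fitted trend sharply down. The
# endpoint choice, not the series itself, drives the headline number.
```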
Such errors, omissions, and commissions always seem to point in the direction of “it’s worse than we thought,” which is perhaps why retiring NOAA scientist John Bates accused the agency of “putting its thumb on the scales” of climate data in 2017.
Burnett: What have you found to be the most disturbing aspect of the way climate research and climate policy have developed over the past two decades?
Michaels: There has been a profound suspension of the normal rules of science in the climate arena. John Christy and Richard McNider at the University of Alabama-Huntsville have demonstrated that climate models have predicted several times more warming in the tropical mid-troposphere than has actually occurred since 1979, when satellite-sensed temperature records begin.
This is exceedingly important because it is the difference between surface and upper air temperatures that determines buoyancy or upward motion, allowing the abundant moisture over the tropical oceans to ultimately be transported northward to feed mid-latitude agriculture. Around 90 percent of the moisture that feeds the U.S. midwestern agricultural belt—in many respects, the most productive large agricultural area on Earth—originates over the tropics. Consequently, getting the moisture flux into this region wrong means future forecasts of world food supply as affected by climate change will be useless.
This error is made in 31 of the 32 groups of models employed in the latest [fifth] Scientific Assessment of the United Nations’ Intergovernmental Panel on Climate Change. Yet it is this community of models that is used to drive impact models such as those for agriculture.
A much better scientific practice would be to use the one model suite that works: the Russian version, labelled INM-CM4. But INM-CM4 projects less warming in the coming century than any of the other model suites.
The debates surrounding climate science and policy have become so vitriolic because of the tremendous amounts of money and power at stake in this issue. Proponents of drastic policy intervention know their models are on thin ice, and they will do anything to keep this fact from becoming general knowledge, including wrecking the career of anyone who might say something about it.
H. Sterling Burnett, Ph.D. ([email protected]) is a senior fellow at The Heartland Institute.