Is Climate Forecasting Immune from Occam’s Razor?

Published December 13, 2015

Several authors have argued that the hypothesis of dangerous manmade global warming fails the test of Occam’s razor because the simple hypothesis of natural variation fits the data with fewer assumptions. As Harold Jeffreys noted, “simpler laws have the greater prior probability”. But are forecasts of dangerous warming immune from Occam’s razor?

It is on the basis of forecasts that the political leaders and government officials gathered in Paris are discussing agreements that would impose extraordinarily disruptive and expensive policies on the nations of the world. Those forecasts—called scenarios and projections by the U.N. Intergovernmental Panel on Climate Change (IPCC)—are the product of complex computer models involving multitudes of interacting assumptions.

Kesten Green and Scott Armstrong’s recent review found that complexity increased forecast errors by 27% on average, a finding that should give delegates at the Paris climate policy talks pause for thought. Occam’s razor would appear to apply to scientific forecasting, too.

At this year’s International Symposium on Forecasting, Kesten and Scott presented a review of the IPCC’s modeling procedures, using a nine-item checklist to assess conformance with evidence-based guidance on simplicity in forecasting.

They found that the IPCC procedures have a “simplicity rating” of 19%. That figure contrasts with a simplicity rating of 93% for the Green, Armstrong, and Soon no-change (no-trend) model of long-term global average temperatures.
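As a rough sketch of how such a rating can be computed (the paper defines the actual scoring), one can average per-item conformance scores, with each of the nine checklist items scored from 0 for no conformance to 1 for full conformance. The item names and scores below are hypothetical placeholders, not the authors’ ratings:

```python
# Hypothetical sketch of a checklist-based "simplicity rating".
# Item names and scores are illustrative placeholders, NOT the
# actual checklist items or ratings from Green and Armstrong's paper.
ITEM_SCORES = {
    "simple_problem_specification": 0.5,  # placeholder score in [0, 1]
    "simple_data_sources": 0.0,
    "simple_forecasting_method": 0.0,
    # ... the remaining checklist items would be scored the same way
}

def simplicity_rating(scores):
    """Average of per-item conformance scores, expressed as a percentage."""
    return 100.0 * sum(scores.values()) / len(scores)

print(f"Simplicity rating: {simplicity_rating(ITEM_SCORES):.0f}%")
```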

Given the vast sums that have been spent on the IPCC process and how seriously the outputs are being taken by the Paris delegates, is it possible that alarm over dangerous manmade global warming is an exception to Occam’s razor in forecasting?

Apparently not. A notional bet between Scott Armstrong and Al Gore pits the simple no-trend model against the IPCC’s “business as usual” projected warming rate of 0.03°C per annum. The evidence from that bet is that the IPCC’s preference for complexity increased the size of forecast errors by as much as 45% over a seven-year period.
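The arithmetic behind evaluating such a bet is simple to sketch. The following Python snippet, with synthetic temperature anomalies standing in for the actual observed series (the real evaluation uses published temperature data), compares the mean absolute error of a no-change forecast against a 0.03°C-per-annum trend forecast over a seven-year horizon; the numbers it prints are illustrative only:

```python
import random

# Synthetic stand-in for observed monthly temperature anomalies (degC).
# Illustrative only; a real evaluation would use a published series.
random.seed(0)
months = 7 * 12  # seven-year horizon, monthly data
base = 0.4       # hypothetical anomaly level at the start of the bet
observed = [base + random.gauss(0, 0.1) for _ in range(months)]

# Forecast 1: no-change (no-trend) model -- every month equals the base level.
no_change = [base] * months

# Forecast 2: linear trend at the "business as usual" rate of
# 0.03 degC per annum (0.0025 degC per month).
trend = [base + 0.03 * (m / 12) for m in range(months)]

def mae(forecast, actual):
    """Mean absolute error of a forecast series against the actuals."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

mae_nc = mae(no_change, observed)
mae_tr = mae(trend, observed)
print(f"No-change MAE: {mae_nc:.3f} degC")
print(f"Trend MAE:     {mae_tr:.3f} degC")
print(f"Trend model error is {100 * (mae_tr / mae_nc - 1):.0f}% larger")
```

The same mean-absolute-error comparison underlies the relative-error figures quoted in this article: the percentage by which one model’s error exceeds the other’s depends only on the ratio of the two error measures.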

Earlier, Green, Armstrong, and Soon’s (2009) validation study found that the IPCC’s complex forecasting models produced errors seven times larger than those of the simple no-change model over the period of exponentially increasing atmospheric CO2 from 1851 to 1975.

Kesten and Scott’s conference paper abstract and slides are available from ResearchGate, here.

Their paper, “Simple versus complex forecasting: The evidence,” and their Simplicity Checklist are available from the Simple-Forecasting.com pages of the ForecastingPrinciples.com (ForPrin.com) website. (You can do your own ratings of the IPCC procedures to check whether your ratings lead to a different conclusion.) The original Green, Armstrong, and Soon validation study of IPCC forecasting is available here.

[First published at Watts Up With That.]