In researching another point, I came across an interesting argument from a 2015 blog post by the Niskanen Center's Jerry Taylor. (It summarized his contribution in a debate on carbon taxes.) It seems to me that JT is drawing a pretty bad inference from a chart, but since I'm not predisposed to agree with him, I'm seeking feedback from you folks.
Taylor wrote:

"We also hear quite a bit from the Right about how the computer models have wildly over-predicted warming and thus should not be informing our policy going forward. Again, courtesy of Berkeley Earth, let's see how the computer models used in the fourth IPCC report (released in 2007) perform when run against Berkeley Earth's historical temperature record.

"The multi-colored lines represent runs from the climate models featured in the fourth IPCC report. The heavy black line represents the Berkeley Earth land temperature record. The heavy red line represents the average of the various model runs. It would appear that the climate models used by the IPCC are now pretty good at replicating temperatures and are not, on balance, running hot."
So my question: If the models were published in 2007, I'm assuming that means they were calibrated against observations up through 2007 (or close to it), right? If so, then the goodness of fit before 2007 isn't really relevant; the models were tuned to match that data. What matters is how the models performed out of sample, i.e., from 2007 forward.
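To make the in-sample/out-of-sample distinction concrete, here is a toy sketch with entirely synthetic numbers (not real climate data, and not the IPCC models): we fit a linear trend only on data up to a 2007 "calibration" cutoff, then compare its error on the calibration period against its error on the years afterward, where the underlying trend flattens.

```python
import numpy as np

# Synthetic illustration: a series that warms steadily until 2007,
# then goes flat. All numbers are made up for the sketch.
rng = np.random.default_rng(0)
years = np.arange(1960, 2016)
cutoff = 2007

# True process: 0.02 per year of warming until the cutoff, then flat.
truth = 0.02 * (np.minimum(years, cutoff) - 1960)
obs = truth + rng.normal(0.0, 0.05, size=years.size)  # add noise

# "Calibrate" a linear trend using only pre-2007 (in-sample) data.
in_mask = years < cutoff
coef = np.polyfit(years[in_mask], obs[in_mask], 1)
pred = np.polyval(coef, years)

# In-sample fit looks good by construction; out-of-sample, the
# extrapolated trend keeps rising while the data do not.
rmse_in = np.sqrt(np.mean((pred[in_mask] - obs[in_mask]) ** 2))
rmse_out = np.sqrt(np.mean((pred[~in_mask] - obs[~in_mask]) ** 2))

print(f"in-sample RMSE:     {rmse_in:.3f}")
print(f"out-of-sample RMSE: {rmse_out:.3f}")
```

The point of the sketch is that a chart showing close agreement before the calibration date tells you little; the fitted line matches the pre-2007 data nearly perfectly while running well above the post-2007 data.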
And as Taylor’s own chart shows, the models predicted much more warming after 2007 than actually occurred.
So doesn’t this chart prove the exact opposite of JT’s point?