24 Dec 2019

Should We Trust the Climate Models? Part 1 of 3


The title is pretty self-explanatory. In this first post, I spend most of the article making sure the reader understands what a “95% confidence envelope” is, so that we can then properly interpret the graphs that pro-“consensus” scientists share to ostensibly demonstrate how reliable the latest batch of models is.

17 Responses to “Should We Trust the Climate Models? Part 1 of 3”

  1. Transformer says:

    Just eyeballing the chart, it looks like the measured data is still rising on a steady trend, just at a rate somewhat below what the forecasts had predicted. Is that a fair reading?

    • Bob Murphy says:

      Yes Transformer. And to be clear, just about everyone in this debate agrees that the earth is warmer now than it was in 1750.

      • Transformer says:

        I was focusing more on the post-2005 forecasts, as I think you said before that the data up to 2005 was hindcast and only after that was forecast.

        BTW: I was curious what the dotted lines meant in your figure 3 and after a bit of googling found the following:

        It was ‘found that there have been two significant issues – the first is that volcanoes (and the cooling associated with their emissions) was underestimated post-2000 in these runs, and secondly, that solar forcing in recent years has been lower than was anticipated. ‘

        (http://www.realclimate.org/index.php/archives/2015/06/noaa-temperature-record-updates-and-the-hiatus/)

        If I am understanding this correctly, scientists have been looking at the data and the models and have identified at least two factors that, once accounted for, bring the (adjusted) forecasts much closer to the actual data without necessarily affecting the long-term trend (this is what the dotted lines mean). Wouldn’t this give us less reason to distrust the models?

        • Bob Murphy says:

          Transformer, ah, I probably should’ve been clearer about that, at least in an endnote. I actually linked to the article you found, in the part of my article where I said: “I invite the reader to peruse the relevant blog posts (HERE and HERE for an earlier, fuller discussion), where the RealClimate blogger interprets these charts.”

          The second HERE hyperlink in my article was to the one you found.

          I’ll come back to this issue in either Part 2 or Part 3, but for now: Yes, after your prediction doesn’t work, you can scratch your head and find items that help you out. But there are probably other things that cut the other way, that they weren’t looking for. Pat Michaels found an example regarding the attempt to rehabilitate James Hansen’s 1980s predictions. (I’ll cover this in my future post.)

          • Transformer says:

            Thanks, look forward to the next parts (and you are right, the quote came from the link you provided, so I misspoke when I said I googled for it – I had been googling for other related information and got confused about what was googled for and what was just clicked through to).

        • Matt M says:

          “Wouldn’t this be a reason for us to have less reason to distrust the models?”

          Going back to the carnival analogy – let’s say that after your test, which has left you skeptical (but unable to statistically reject the hypothesis) of a fair coin, the carnival barker says “Ah, I see you’re still skeptical. Well, let me just assure you that the low amount of heads you observed was due to this man standing nearby wearing a large magnet that was affecting the outcome of the toss. Had he not been there, the results would have been more in line with a fair coin. I’ll make sure he leaves before we start the wager.” You taking the wager now?

          Ultimately, though, I think the carnival analogy fails, because *everyone* is generally skeptical of carnival barkers, whereas with climate change, one’s belief or skepticism in the consensus seems to correlate almost perfectly with one’s priors in terms of whether we should be naturally trusting, or naturally skeptical, of mainstream scientists/economists/journalists/politicians.

          We all assume the carnival barker is probably trying to get one over on us, for entirely selfish reasons. But ~50% of the population will *not* assume that about mainstream scientists/economists/journalists/politicians. Those people will happily believe any justification that the “consensus” provides to explain their somewhat suspicious results…

          • Transformer says:

            Good comment.

            I think it was smart of Bob to introduce the carnival thought experiment because it puts the image of hucksterism into our minds and sets us up to accept the possibility that perhaps mainstream climate scientists are also engaged in hucksterism.

            Bob’s line of reasoning appears to be: ‘Mainstream climate scientists put together some models to forecast the earth’s temperature, and these models have a tendency to overestimate it. We should therefore be very skeptical about the usefulness of these models, and perhaps (by way of analogy) the climate scientists who defend them are in fact just hucksters trying to dupe us.’

            I’m waiting for parts 2 and 3 but I’m sure hoping they provide a stronger case for the prosecution than part 1!

  2. Transformer says:

    FYI: It looks like when I use my old email address my comments just get eaten; with a fake one they go to moderation.

  3. Harold says:

    Goodness, it is almost as if the global conspiracy to prevent any dissenting views was total nonsense! Maybe scientists are actually reporting their real findings after all! Reading Tel’s comments I understood that such things were not possible.

    However, if you cannot discern that the Cornwall Alliance are science deniers, it casts doubt on what you have to say on this subject. Use the data, use the science, but when you defend obvious deniers I wonder if the wool is being pulled over my eyes.

    Statistics is complex. Climate science is complex. I like to think that someone is engaging in a genuine effort to explain or discover the truth. Otherwise it is relatively easy to tell a false but convincing story. How can I tell if a person is trying to convey truth or pushing an agenda? One way is if they can discern obvious agenda-pushers and science deniers.

    • Dan says:

      One way to easily dismiss a crank is if they use the term “science denier.” I see someone say that and I can safely assume they have an agenda and it’s not worth pretending otherwise.

  4. Bitter Clinger says:

    Dr. Murphy, I got out my binomial distribution tables and spent an enjoyable couple of hours reviewing them. The problem that I see is that the confidence limits in the CMIP5 graph are NOT statistical, if I understand how they create it correctly. I believe the way they created it was that the simulation run with the most optimistic inputs is the lower bound and the upper bound is the most pessimistic inputs (or vice versa, depending on your prejudice). If that is the case, then the fact that the actual data most closely resemble the optimistic models, or are even more optimistic than the most optimistic model, is not a matter of probabilities. It is a problem in that the model just doesn’t have optimistic enough inputs, or has a bias toward warming.

    In mechanical engineering we use a cornucopia of models and computational programs to predict stress, strain, deflections, and thermal distribution in very complex structures and machines. We (in the engineering profession) understand that some of these models do NOT converge with the classical mathematical calculations let alone with reality, that they are approximations and care must be taken when they are used. An example: if you want to calculate the length of a diagonal of a one inch square you take the square root of the sum of the squares of the two sides, 1.414 inches. In finite element analysis you would move over one side then down to the other corner getting an answer of 2. Even as the length of the steps decreases, i.e. you move over ½ then down ½ then over ½ and finally down ½ you still get 2. If you integrate with the step size going to zero you still get 2 because the model cannot do the diagonal even though it looks like the diagonal.
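
    A minimal Python sketch of the staircase effect described above (purely illustrative, not taken from any actual finite element package):

        import math

        def staircase_length(n_steps: int) -> float:
            """Length of a stair-step path across a unit square built from
            n_steps horizontal and n_steps vertical moves of 1/n_steps each."""
            # n_steps * (1/n_steps) horizontal plus the same vertical = 2.0 always
            return n_steps * (1 / n_steps) + n_steps * (1 / n_steps)

        diagonal = math.sqrt(1**2 + 1**2)  # true diagonal of a unit square, ~1.414

        for n in (1, 2, 10, 1000):
            print(f"{n:>5} steps: path length = {staircase_length(n):.3f}")
        print(f"true diagonal = {diagonal:.3f}")

    However fine the steps, the stair-step path converges to the diagonal in shape but never in length, which is the sense in which a model can look like the real thing without converging to it.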

    I am looking forward to where you are going with Parts II and III.

    • Bob Murphy says:

      Bitter Clinger, you may be right. From my investigation into the “Working Group on the Social Cost of Carbon”–which is NOT the same thing as these charts coming from RealClimate.org–I came away thinking that they really did run the same model with the same inputs thousands of times, in order to get a distribution of outcomes. (I.e. so there was randomness built into the models.) Those models however were the Integrated Assessment Models created by economists; they are not the climate simulation models.
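
      To make the distinction concrete, here is a toy Monte Carlo sketch (assuming numpy; the trend and noise figures are invented for illustration, not drawn from any actual IAM or climate model). It runs the same stochastic model many times with the same inputs, and shows how a statistical 95% envelope differs from a min/max envelope of the kind Bitter Clinger describes:

          import numpy as np

          rng = np.random.default_rng(42)

          # Toy ensemble: the same 'model' (a fixed warming trend plus random
          # year-to-year noise) run 1,000 times with identical inputs.
          years = np.arange(30)
          runs = 0.02 * years + rng.normal(0, 0.1, size=(1000, years.size)).cumsum(axis=1)

          # Two different ways to draw an envelope around the ensemble:
          stat_lo, stat_hi = np.percentile(runs, [2.5, 97.5], axis=0)  # statistical 95%
          minmax_lo, minmax_hi = runs.min(axis=0), runs.max(axis=0)    # extreme runs

          # The min/max band is wider, so an observation hugging the bottom of a
          # min/max envelope means something different from hugging the bottom
          # of a genuine 95% confidence envelope.
          print(f"final-year 95% band:     [{stat_lo[-1]:+.2f}, {stat_hi[-1]:+.2f}]")
          print(f"final-year min/max band: [{minmax_lo[-1]:+.2f}, {minmax_hi[-1]:+.2f}]")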

  5. Matt M says:

    Another key difference between climate change and the carnival analogy is that, for approximately half of the population, all of the “negative effects” of climate change policy are things they actually wanted all along anyway. (On a personal note, this is the primary reason I am highly skeptical of everything having to do with climate change, because the people most strongly pushing it consider things that most of us would classify as “costs” to actually be “benefits”)

    Imagine if the carnival wager wasn’t “If you win, the guy pays you, but if you lose, you pay him” but rather “If you win, the guy pays you, but if you lose, he finds a random carnival participant other than you and steals the money from them.”

    Some people would still be opposed to that, because they’d find it to be violating their sense of morality. Others would look upon it and say, “So you’re telling me that even if I lose the bet, I don’t lose anything? Sounds great!”

  6. Capt. J Parker says:

    Great IER article Dr. Murphy. The carnival con man really was slick. His mastery of statistics allowed him to hide his con game in plain sight. It took me a little while (to my shame, since I think of myself as numerate in basic statistical hypothesis testing) but I think I now have a clear idea of what exactly the con man’s con really was.
    It was in two parts:

    Part A) The con man, quite cleverly, got us to agree that an appropriate Null Hypothesis is that the coin is a fair coin. This is the exact wrong Null Hypothesis to agree to. The appropriate Null Hypothesis, if you are trying to prove something, is to assume that what you are trying to prove is wrong. If you are trying to prove that your new drug cures cancer better than a placebo, then the appropriate Null Hypothesis is that the new drug is no better than the placebo. If you are trying to prove the coin is fair, the appropriate Null Hypothesis is that the coin is biased. If you are trying to prove that your climate models correctly predict the path of global average temperature, the appropriate Null Hypothesis is that the models don’t predict the path.
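
    For concreteness, a minimal Python sketch (assuming scipy is available) of what the con man’s chosen null implies: the two-sided 95% acceptance region for 25 flips of a fair coin. Any head count inside this band “passes” his test:

        from scipy.stats import binom

        n, p_fair = 25, 0.5  # 25 flips under the con man's null: the coin is fair

        # Head counts that would NOT trigger rejection at the 5% level (two-sided)
        lo = int(binom.ppf(0.025, n, p_fair))  # -> 8
        hi = int(binom.ppf(0.975, n, p_fair))  # -> 17
        print(f"fail to reject 'fair coin' for any head count in [{lo}, {hi}]")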

    Part B) This is the really sly part. The con man says “Let’s look at the data from my coin and compare it to the 95% confidence interval of a fair coin. This will allow us to reject or not reject the Null Hypothesis that the coin is fair.” So, together we look at the data and sure enough, there is no reason to reject the Hypothesis. If we protest that had we stopped the experiment at 12 flips of the coin we would have rejected the Null Hypothesis, the con man could quite justifiably say that 25 flips is a larger sample size and, all things equal, larger sample sizes give more accurate results. So far, everything the con man did here in part B is statistically correct, given our willingness to be talked into a Null Hypothesis that the coin is fair. Now for the con in part B. The con man says “Because the data lie within the 95% confidence interval for a fair coin, we do not reject the Null Hypothesis, AND THIS MEANS THAT THE COIN HAS BEEN SHOWN TO BE FAIR, STATISTICALLY.” This last bit in CAPS is totally bogus statistical reasoning by the con man. Failing to reject the Null Hypothesis in a statistical test does NOT mean the Null Hypothesis is true, as the con man would have us believe. Failing to reject the Null Hypothesis only means the experiment we ran did not generate enough evidence to prove that the coin is biased. The con man’s use of a 95% confidence interval sets a pretty high bar for proof that the coin is biased, one unlikely to be met if the coin is only biased by a small amount.
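
    The point about the high bar can be checked directly. A short sketch (again assuming scipy) of how often coins with various true heads-rates would land inside the fair-coin acceptance region computed above, i.e. how often the con man’s test fails to expose the bias:

        from scipy.stats import binom

        n, lo, hi = 25, 8, 17  # acceptance region for the fair-coin null, from above

        for p_true in (0.5, 0.4, 0.3):
            # Chance a coin with true heads-rate p_true produces a head count
            # inside the 'fair' band, so the test fails to expose the bias.
            escape = binom.cdf(hi, n, p_true) - binom.cdf(lo - 1, n, p_true)
            print(f"true p = {p_true}: passes as 'fair' {escape:.0%} of the time")
        # -> roughly 96%, 85%, and 49%: even a clearly biased coin usually passes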

    So, if Figure 3 in the IER piece really is an attempt by the figure’s authors to say “because the 95% spread of the model outputs encompasses the global average temperature measurements, the models have been statistically shown to be valid,” it really would be an outrageous error of statistical reasoning.

  7. Bitter Clinger says:

    Captain, you are misinterpreting the statistics. 95% is not a high bar, it is a low one. Think of the 100% confidence limits (got to be better, right?) where heads will show up between 0 and 25 times in a sample of 25. The correct way to read the numbers is to look at the Table of Cumulative Binomial Probabilities. With a defective rate (heads) of 50%, only 2.2 percent of the time will you get seven or fewer heads. You say (and this is why I got out my tables book), “Maybe the coin throws heads only 40% of the time.” This would at least make it an even bet. With a 40% defective rate you would expect to see seven or fewer 15% of the time; 85% of the time you would see eight or more. So maybe the defective rate is only 30%. Then you would expect to see seven or fewer 51% of the time, and eight or more 49% of the time. Twenty-five is a good-sized sample; since this is statistics, only Dr. Murphy knows for sure what he used for a rate. Maybe he will tell us.
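
    For anyone who wants to check these figures without a tables book, a quick sketch using scipy reproduces the quoted cumulative probabilities:

        from scipy.stats import binom

        # Chance of seeing seven or fewer heads in 25 flips, for several
        # true heads-rates, as read from a cumulative binomial table.
        for p in (0.5, 0.4, 0.3):
            print(f"P(heads <= 7 | n=25, p={p}) = {binom.cdf(7, 25, p):.1%}")
        # -> 2.2%, 15.4%, 51.2%, matching the table values cited above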

    • Bob Murphy says:

      I just made up the coin flip outcomes, to try to get it close to what the climate models looked like.

    • Capt. J Parker says:

      Bitter,
      I think you and I have the same understanding of the statistics. I said “The con man’s use of a 95% confidence interval sets a pretty high bar for proof that the coin is biased.” Proof that the coin is biased comes from rejecting the con man’s null hypothesis that the coin is fair. A statistical test where the null hypothesis is rejected only if the coin toss results fall outside the 95% confidence interval of a fair coin is a “high bar” for rejecting the null hypothesis and proving the coin is biased. The 95% confidence interval test is a low bar for finding no reason to reject the null hypothesis (which, despite the con man’s claim, doesn’t prove anything about the coin one way or another).
