30 Jun 2015
Bias in the Published Estimates of Social Cost of Carbon
This is a bit technical, but if you care about the climate change policy debate, you should try to get through it. I made it as easy as possible.
This is fairly standard publication bias, the existence of which is well established in fields such as medicine. Interestingly, the “climate community” seems completely oblivious to the effect of publication bias, either unaware of it or presuming itself immune to it.
It’s no trivial matter to account for its effects, but attempting to do so is necessary. This is the first such attempt I’m aware of in any climate-related research area.
The same authors have done a similar analysis of climate sensitivity too:
http://mpra.ub.uni-muenchen.de/64455/
Very interesting. I don’t quite get this bit: “If there were no bias, and if (say) the “true” SCC were $30/ton of CO2, then in the chart above we should see a bell curve centered at about $110 on the x-axis (because $30 x 3.67 = $110).”
Why should the precision of the result get better at the correct (accurate) value? Shouldn’t we see a horizontal line?
I learnt a new word recently – heteroscedasticity. Basically, something is homoscedastic if the error is independent of the magnitude of the X axis; that is, we get a horizontal line for the SE vs. the X axis. We are seeing an apparent heteroscedastic response – i.e. the deviation gets lower as the magnitude of X gets lower. Just wanted to use that word – you don’t get too many opportunities.
Just checking now – I think I will answer my own question shortly. Firing from the hip again.
Think I have it. The funnel plot is used in medical studies to indicate publication bias. In medical trials, larger studies should have better precision and better accuracy (be closer to the “true” result). As a consequence, the large trials cluster at the top of the plot near the correct answer. In the absence of bias, the other trials should scatter randomly above and below in something akin to a bell curve. Publication bias is revealed by a skewed distribution, indicating a non-random scatter.
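To make the mechanism concrete, here is a minimal Python sketch of selective reporting (all numbers are invented for illustration, not taken from the paper): every simulated study uses the same unbiased estimator, but results are censored unless they reach significance, and the surviving estimates skew high – more so at low precision, which is exactly the funnel asymmetry.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 30.0   # hypothetical "true" value of the quantity being estimated
N_STUDIES = 5000

# Each simulated study has its own standard error and an unbiased estimate.
se = rng.uniform(5.0, 60.0, N_STUDIES)
estimate = rng.normal(TRUE_EFFECT, se)

# Selective reporting: always publish if |t| > 1.96, otherwise only 20% of the time.
published = (np.abs(estimate / se) > 1.96) | (rng.random(N_STUDIES) < 0.2)

print(f"mean of all estimates:       {estimate.mean():6.1f}")
print(f"mean of published estimates: {estimate[published].mean():6.1f}")

# The upward bias grows with the standard error -- the asymmetric funnel.
for lo, hi in [(5, 20), (20, 40), (40, 60)]:
    band = published & (se >= lo) & (se < hi)
    print(f"  SE in [{lo}, {hi}): published mean = {estimate[band].mean():6.1f}")
```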
As wiki has it “If high precision studies really are different from low precision studies with respect to effect size (e.g., due to different populations examined) a funnel plot may give a wrong impression of publication bias.” I don’t know if we can really say that studies reporting a high precision for SCC are necessarily the most accurate. Given the complication of the models it seems plausible that precision of different models could vary in a systematic rather than random way, meaning that “high precision” studies could be different from “low precision” studies with respect to effect size. In part, as Bob has pointed out, the social cost of carbon is a philosophical rather than scientific problem, depending on how we discount the future, so to some extent there is not actually a “right” answer.
That said, the existence of publication bias as described would not come as a surprise to me – it seems quite reasonable. I doubt the accuracy of the quantitative adjustments, however.
Did you see my comment about this on FB? I’ll repeat it here:
Surely an increasing SCC spread is simply the mechanical result of using a damage function that is convex in temperature (an eminently sensible and standard assumption in all these models). Far from pointing to publication bias, the widening confidence intervals are thus exactly what I would expect given the model setup. A toy example:
Let’s assume that climate damages are simply the square of temperature, D = T^2. Now let’s say your model predicts that temperature lies between 0 and 2 degrees with uniform probability. The implied spread on damages is then (2^2 – 0^2 =) 4.
What happens if your model instead suggests a higher temperature prediction between 1 and 3 degrees? I.e. exactly the same uncertainty range, just shifted up by one. Well, now your damages spread increases to (3^2 – 1^2 =) 8!
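For what it’s worth, the arithmetic generalizes: for a temperature band of width w centered at m, the implied damage spread under D = T^2 is (m + w/2)^2 – (m – w/2)^2 = 2mw, so it grows linearly with the central temperature even though the temperature uncertainty is unchanged. A quick Python sketch:

```python
# Toy damage function D = T^2; same 2-degree uncertainty band, shifted upward.
def damage_spread(t_low, t_high):
    return t_high**2 - t_low**2

print(damage_spread(0, 2))  # 4: model predicting 0-2 degrees
print(damage_spread(1, 3))  # 8: same band shifted up one degree

# Spread = 2 * midpoint * width, so it rises linearly with the central estimate.
for mid in (1, 2, 3, 4):
    print(mid, damage_spread(mid - 1, mid + 1))
```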
Clearly the increase in this little example has nothing to do with publication bias and everything to do with the mechanics of the model setup. So, can you convince me that the very same thing isn’t happening with the published results?
Grant,
Yeah, I had the same thought.
You are describing heteroscedasticity! The example given in Wikipedia is measuring the altitude of a space rocket: at low altitudes you can measure precisely, but as it goes higher, precision decreases. The authors do discuss this – I am sure they have not just completely overlooked it. However, this is why I doubt the quantitative conclusions – there is no reason to think that the only factor affecting precision is the same factor that affects magnitude.
Reporting bias is only one way to explain the data.
The paper is here for anyone interested.
https://www.cerge-ei.cz/pdf/wp/Wp533.pdf
I think this is relevant to the current discussion:
“The three methods of detecting selective reporting introduced above are designed for regression estimates of the parameter in question and require the ratio of the point estimate to the standard error to be t-distributed.” They point out that SCC is not so distributed, but “In contrast, we can use the intuition behind the two methods based on the analysis of funnel plot asymmetry: small and large estimates with the same precision should have the same probability of being reported.”
There may be some other reason why large values with high precision or low values with low precision are not present in the data.
The results are heteroscedastic, agreed. But why?
In the rocket example you cite, heteroscedasticity increases as a function of the variable you wish to measure, altitude. Precision is reduced because of atmospheric distortion and such. Variables that have a small effect on the measurement initially see their effect increased at greater distance.
SCC is derived from a model, so there are no unmeasured variables (or to be exact, the model does not include unmeasured variables). Rather, the uncertainty comes in from the uncertainty of estimates of the carbon cycle, climate sensitivity, the projected damages caused, and discounting. If our model assumes, for example, that actual climate sensitivity is not known very precisely, then any estimates that include climate sensitivity are going to have lower precision.
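As a sketch of that point (with purely illustrative numbers, not the paper’s model): if a stylized SCC is convex in an uncertain input such as climate sensitivity, then draws centered on a higher sensitivity produce a wider SCC spread even when the input uncertainty is identical and nothing is selectively reported.

```python
import numpy as np

rng = np.random.default_rng(1)

def scc_draws(mean_sensitivity, n=100_000):
    """Stylized SCC, convex in climate sensitivity S: SCC = k * S^2.
    The scale factor and the uncertainty in S below are hypothetical."""
    s = rng.normal(mean_sensitivity, 0.75, n)   # same absolute uncertainty in S
    return 10.0 * np.clip(s, 0.0, None) ** 2    # toy $/ton scale

for mu in (1.5, 3.0, 4.5):
    d = scc_draws(mu)
    print(f"sensitivity {mu}: SCC mean = {d.mean():6.1f}, SD = {d.std():6.1f}")
# The SD grows with the mean: higher-SCC models also show wider dispersion,
# with no selective reporting involved.
```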
It just seems kind of fishy that, if your model assumes we’re doing a pretty good job of measuring all of the factors that contribute to SCC, the estimates tend to be on the lower end of the scale.
Publication bias is one way to obtain the data. There are others. Before we can conclude that publication bias is the explanation, we would need to rule out the others. From my reading of the paper it did not look as though the other ways to get this result had been ruled out, but it is quite complicated and I cannot be sure.
What is sure is that this is a different situation from a medical trial, where accurate studies should also be more precise, because larger samples improve both accuracy and precision. Here we have interlocking models, and the source of the dispersion is not clear. I am still not convinced that my initial response – that we should get a horizontal line – was wrong, but I am guessing really.