
Sunday, May 25, 2014

Andrew Gelman, WaPo: Richard Tol's gremlins and the more likely seriously detrimental economic impacts of higher global temperatures

The gremlins did it? Iffy statistics drive strong policy recommendations

Adam Marcus reports that an influential paper from 2009 on the economic effects of climate change was recently revised because of errors and omissions in the data. According to the paper’s author, Richard Tol:
Gremlins intervened in the preparation of my paper “The Economic Effects of Climate Change” . . . minus signs were dropped from the two impact estimates . . . [also there were] two overlooked estimates . . .
I’m not sure what is up with the gremlin, but in my own work I’ve introduced major errors into the analyses on occasion. Sometimes it happens when I’m copying numbers from a printed source into a typed document, or even when editing a document. So I could definitely see how minus signs could disappear.
Carmen Reinhart, professor of economics at Harvard Kennedy School. She was a professor at the University of Maryland in 2010, when this photograph was taken. (Andrew Harrer/Bloomberg News)
I’m reminded of the recent case of Reinhart and Rogoff who introduced major errors in data processing and analysis into an influential paper on macroeconomic policy. The errors were undiscovered for years.
In both these cases, I think the problem is not so much the errors — mistakes will happen, and it's understandable that researchers will be less likely to catch their errors if they go in the direction that supports their views — but rather that there are many fragile links in the chain that connects data to policy recommendations. This is one reason that many people are starting to recommend that the “paper trail” of the statistical analysis be more transparent, so that researchers (including me!) simply aren't able to make mistakes such as accidentally removing a minus sign in a computer file or losing a column of data in an Excel spreadsheet.
Kenneth Rogoff, professor of economics and public policy at Harvard. (Jacques Brinon/AP)
The other problem is that people who are caught out in their mistakes often go on and on about how the mistakes don't really alter their conclusions. For example, here's Richard Tol on the effects of his missing minus signs and new data (in addition to adding the overlooked estimates, he added five more recent estimates that had appeared after the 2009 paper had been written):
Although the numbers have changed, the conclusions have not. The difference between the new and old results is not statistically significant. There is no qualitative change either.
But then he also says:
The assessment of the impacts of profound climate change has been revised: We are now less pessimistic than we used to be.
Also this:
The original impact curve projects an impact of −15 (−7 to −33) percent of income for a 5°C warming, whereas the corrected and updated curve has −6 (−3 to −21) percent. This is relevant because the benefits of climate policy are correspondingly revised downwards.
I’m a little bit worried because this last sentence contains so many minus signs, but I’ll assume they’ve all been checked carefully.
In any case, I think Tol needs to get his story straight: in one place he said the qualitative conclusions did not change; in another place he points to some changes and says they are relevant for policy.
Okay, so all this got me wondering: What exactly was being claimed here? Here are the two graphs from Tol’s revised paper:
[Figure 1 from Tol's revised paper: impact estimates based on the corrected data and the overlooked studies, with the fitted curve.]
Figure 1 shows the estimates based on the corrected data (with the minus signs restored) and the overlooked previous studies. Indeed, the curves aren't very different, but . . . what is that big positive point at 1.0 degrees? This point seems to be (a) singlehandedly keeping the estimate above zero, and (b) driving a big curve in that fitted quadratic line. This point is credited to this 2002 paper by Tol. The list also includes a negative estimate of the impact of a 2.5 degree warming, also from Tol in 1995. Lots of research got done between 1995 and 2002, I assume — indeed, the 2002 paper reports “a new set of estimates” and “two methodological improvements,” so I'm surprised that the 1995 estimate was just kept in as is.
Here’s the point. In his 2002 paper, Tol found a positive economic effect of a small increase in temperature. But, assuming he still stands by his 1995 paper, he finds a big drop in economic performance as temperature rises still more: the estimated effect (compared to no warming) is plus 2.3 percent of gross domestic product for a 1 degree increase, and then drops to minus 1.9 percent for a 2.5 degree increase. So there’s some major nonlinearity in his model. This could well be correct, but I think it would make sense for him to explore what in his model is driving this result.
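Just to make that nonlinearity concrete, here is a quick back-of-the-envelope sketch (mine, not Tol's fitting procedure, which pools many estimates): force a quadratic impact curve with zero impact at zero warming through the two Tol numbers quoted above and see what it implies.

```python
import numpy as np

# Force y = a*x + b*x^2 (zero impact at zero warming) exactly through the two
# Tol estimates quoted above: +2.3% of GDP at 1.0 degrees, -1.9% at 2.5 degrees.
# This is an illustration only, not Tol's actual estimation procedure.
warming = np.array([1.0, 2.5])            # degrees C
impact = np.array([2.3, -1.9])            # percent of GDP

A = np.column_stack([warming, warming**2])
a, b = np.linalg.solve(A, impact)

print(f"a = {a:.2f}, b = {b:.2f}")                       # about 4.34 and -2.04
print(f"turning point: {-a / (2 * b):.2f} degrees C")    # about 1.06
print(f"implied impact at 5 degrees: {a*5 + b*25:.1f}% of GDP")  # about -29
```

Even this two-point toy version puts the turning point close to the 1.1 degrees that Tol reports for his full fit (quoted below), which gives a sense of how much of the curve's shape is carried by that single positive estimate.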
Okay, now here’s Tol’s Figure 2 which shows the estimates including the new studies:
[Figure 2 from Tol's revised paper: impact estimates including the new studies, with fitted curves.]
That point at (1, 2.5) remains, and now there's a new outlier at (3.2, minus 11.5). I don't know what the story is with these, but of course either of them could be correct; the whole point of this sort of meta-analysis is a recognition that different studies are using completely different methods with completely different sets of assumptions.
That quadratic curve
But I want to go back to one point, which is Tol’s remark that the revised estimate based on the new data “is relevant because the benefits of climate policy are correspondingly revised downwards.”
What’s going on here? The green curves in Figure 2 look reasonable in the sense that they go through the data, which really is all I have to work with here. The real weirdness comes in Figure 1, not so much with the curve going above 0 but with the steep declining slope which leads to huge negative impacts when extrapolating beyond the range of the points on the graph. This seems to be coming from the assumption that the curve is a quadratic (that is, a curve of the form y = ax – bx^2). But why must it be quadratic? Where’s that coming from? The answer (I’m pretty sure): nowhere at all. It’s just an extrapolation.
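To see how much work that quadratic assumption is doing, here is a rough sketch with made-up numbers (they stand in for a scatter of estimates between 1 and 3 degrees and are not Tol's data): fit both a quadratic and a straight line through the origin, then compare them inside and outside the range of the points.

```python
import numpy as np

# Illustrative estimates only (warming in degrees C, impact in % of GDP);
# these are stand-ins for a cloud of points like Tol's, not his actual data set.
x = np.array([1.0, 2.5, 2.5, 2.5, 2.5, 3.0])
y = np.array([2.3, -0.1, -1.5, -1.9, -4.8, -2.5])

# Two curves through the origin: quadratic y = a*x + b*x^2 and linear y = c*x.
(a, b), *_ = np.linalg.lstsq(np.column_stack([x, x**2]), y, rcond=None)
(c,), *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

for w in (2.5, 3.0, 5.0, 6.0):
    print(f"{w:.1f} C: quadratic {a*w + b*w**2:+.1f}%, linear {c*w:+.1f}% of GDP")
# Inside the range of the points the two curves stay in the same ballpark;
# at 5 and 6 degrees they diverge sharply, and nothing in the data decides
# between them.
```

The steep decline past the data is supplied by the functional form, not by any estimate out there.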
If you want to extrapolate, I think it would make more sense to extrapolate within the contexts of the individual reports rather than to draw a curve through the estimates obtained by different authors at different times.
Outliers
One problem which Tol didn’t note was the role of the changing minus signs in interpreting the estimates that were not garbled. In particular, his estimate of a big positive impact at 1 degree is a clear outlier in his analysis. Did he look into that in the original paper? I took a look, and here’s what he wrote, back in 2009:
Given that the studies in Table 1 use different methods, it is striking that the estimates are in broad agreement on a number of points—indeed, the uncertainty analysis displayed in Figure 1 reveals that no estimate is an obvious outlier.
In this way, a misclassification of a couple of points can affect the interpretation of a third point.
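One cheap way to make this kind of sensitivity visible is a leave-one-out check: refit the curve with each estimate dropped in turn and see how much the low-warming prediction moves. Here is a minimal sketch, again with illustrative numbers rather than Tol's actual table.

```python
import numpy as np

# Illustrative estimates (warming in degrees C, impact in % of GDP);
# one positive estimate at 1 degree, the rest negative, echoing Figure 1.
x = np.array([1.0, 2.5, 2.5, 2.5, 2.5, 3.0])
y = np.array([2.3, -0.1, -1.5, -1.9, -4.8, -2.5])

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x + b*x^2 through the origin."""
    coef, *_ = np.linalg.lstsq(np.column_stack([xs, xs**2]), ys, rcond=None)
    return coef  # array([a, b])

a, b = fit_quadratic(x, y)
print(f"all points: predicted impact at 1 C = {a + b:+.2f}% of GDP")

# Drop each point in turn and report the refitted prediction at 1 degree.
for i in range(len(x)):
    a_i, b_i = fit_quadratic(np.delete(x, i), np.delete(y, i))
    print(f"drop ({x[i]:.1f}, {y[i]:+.1f}): impact at 1 C = {a_i + b_i:+.2f}%")
# With these made-up numbers, dropping the single positive estimate flips the
# predicted impact at 1 degree from roughly +1.1% to roughly -0.8% of GDP.
```

That is exactly the kind of influence check that would flag the 1-degree point as doing a lot of work in the fit.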
Tol also wrote:
The fitted line in Figure 1 suggests that the turning point in terms of economic benefits occurs at about 1.1 degrees Celsius warming (with a standard deviation of 0.7 degrees Celsius).
This turning point has disappeared in his new Figure 2, so, again, I do think the new analysis has changed his conclusions in a real way.
P.S. Discussion of global warming has of course become politicized (see this post by Bob Ward for some background), and indeed Tol used the scare-word “shrill” in his 2009 article. So I should probably emphasize that in that paper he also wrote of “considerable uncertainty about the economic impact of climate change … negative surprises are more likely than positive ones. … The policy implication is that reduction of greenhouse gas emissions should err on the ambitious side.”

Comments:


alchemistfrombristol
"Scientists have known about global warming for decades. It's real. Let's move on to what we can do about it." http://clmtr.lt/c/HC10cc0cMJ
Richard Tol
@Andrew 
Fair points all, but errata are rather restricted in length and content. A quadratic curve fits well for the original data, but fails badly for the expanded data-set. So in a new paper, accepted earlier today for Computational Economics, I look at alternative curves, including non-parametric ones. The qualitative insights of the JEP paper are then restored. 
 
The initial positive impacts indeed largely hinge on Tol (2002), although Mendelsohn et al. (2000) still foresee net positive impacts at 2.5K. Both papers have stood the test of time. The initial positive impacts are driven by three factors: reduced costs of winter heating; reduced cold-related deaths; and carbon dioxide fertilization. These have been confirmed in later studies. 
 
Anyway, the initial benefits are sunk benefits, irrelevant for policy, so I don't understand why people get so excited. 
 
People who think that change is necessarily bad would be well-advised to read David Hume.
 
Finally, a third point: the Tol 2009 analysis cascades into the DICE (Nordhaus') model, which uses it as part of the calibration of the damage function. So you probably can move some of those other points down. 
 
Quicksand
Tom Curtis had a fine response on why a non-linear fit was appropriate, from And Then There's Physics: 
-------------------- 
In response to Eli's concerns, we know that somewhere between 5 and 15 C above preindustrial, welfare loss approaches 100%. No linear fit to the data, whether Tol 2002 is included or not, is compatible with that. Therefore the fit must at least be quadratic, and may be worse than that. 
 
If that is not enough, we also know that welfare loss at 0.6 C above preindustrial (i.e., 0 C above the 1986-2005 average) is 0% by definition; and that welfare loss at -0.6 C is likely to be greater than 0%. Further, it is likely very large at -6 C (glacial conditions). Again, these facts are inconsistent with a linear fit. 
--------------------- 
 
In this respect the quadratic is only parsimony, and yes, the implications of eliminating the two outliers have been discussed. Richard is not on board. 
As Eli wrote over at Retraction Watch a couple of days ago (May 21) 
 
What Tol is saying is that if one includes his outlier, even with corrections, the curve is rather banana-like, with a net positive benefit at modest warming. As Frank Ackerman has shown, this is a result of the curious way in which Tol's FUND model calculates agricultural effects; the positive benefit is large and completely different from every other study. 
 
If you do not include the Tol and Anthoff result, then you pretty much get a simple aX^2 fit which is negative everywhere, rather than what Tol shows (see And Then There's Physics for a long discussion of this). If you force the fit using Tol's data, then to encompass the other results the curve has to descend more rapidly past 3 C of warming, taking into account the other models, including the questionable Tol model. This means that Tol's fit predicts more damage at higher warming than the aX^2 fit. However, as the IPCC AR5 points out, Integrated Assessment Models (IAMs) past 3 C of warming are economic fiction, because the damage would be so great and from so many directions not included in the IAMs that the models lose validity. 
 
But what does Eli know, he's only a bunny. Perhaps the good Professor Tol will accept this better from thee.
