Wednesday, August 12, 2009

Gavin Schmidt, Real Climate: PETM Weirdness

Real Climate — Gavin Schmidt, 10 August 2009

The Paleocene-Eocene Thermal Maximum (PETM) was a very weird period around 55 million years ago. However, the press coverage and discussion of a recent paper on the subject was weirder still.

For those of you not familiar with this period of Earth’s history, the PETM is a singular event in the Cenozoic (the last 65 million years). It was the largest and most abrupt perturbation to the carbon cycle over that whole period, defined by an absolutely huge negative isotope spike (> 3 permil in 13C). Although there are smaller analogs later in the Eocene, the size of the carbon flux that must have been brought into the ocean/atmosphere carbon cycle in that one event is on a par with the entire present-day reserve of conventional fossil fuels. A really big number – but exactly how big?

The story starts off innocently enough with a new paper by Richard Zeebe and colleagues in Nature Geoscience that tackles exactly this question. They use a carbon cycle model, tuned to conditions in the Paleocene, to constrain the amount of carbon that must have come into the system to cause both the sharp isotopic spike and a very clear change in the “carbonate compensation depth” (CCD) – this is the depth at which carbonates dissolve in sea water (a function of the pH, pressure, total carbon amount, etc.). There is strong evidence that the CCD rose hundreds of meters over the PETM – causing clear dissolution events in shallower ocean sediment cores. What Zeebe et al. come up with is that around 3000 Gt carbon must have been added to the system – a significant increase on the original estimates of about half that much made a decade or so ago, though less than some high-end speculations.
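
To see why the isotopic signature of the carbon source matters so much to the mass estimate, a toy two-endmember mass balance is useful. The reservoir size and δ13C endmember values below are illustrative textbook-style assumptions, not the values used by Zeebe et al., whose carbon cycle model is far more sophisticated:

```python
def mass_needed(d_spike, d_source, reservoir=40000.0, d_reservoir=0.0):
    """Carbon mass (GtC) needed to shift the d13C of the ocean/atmosphere
    reservoir by d_spike permil, from a two-endmember mass balance:
        (M0*d0 + M*ds) / (M0 + M) = d0 + d_spike
    Solving for M gives M = M0 * d_spike / (ds - d0 - d_spike).
    The ~40,000 GtC reservoir and the endmember signatures are
    illustrative assumptions, not Zeebe et al.'s model values."""
    return reservoir * d_spike / (d_source - d_reservoir - d_spike)

# A -3 permil excursion from very depleted methane-hydrate carbon
# (d13C ~ -60 permil) needs much less mass ...
m_hydrate = mass_needed(-3.0, -60.0)   # ~2100 GtC

# ... than the same excursion from ordinary organic carbon (~ -22 permil):
m_organic = mass_needed(-3.0, -22.0)   # ~6300 GtC
```

The point of the sketch: the more isotopically depleted the source, the less carbon you need to produce the observed spike, which is why pinning down the source is inseparable from pinning down the total mass.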

Temperature changes at the same time as this huge carbon spike were large, too. Note that this is happening on a Paleocene background climate that we don’t fully understand either – the polar amplification in very warm paleo-climates is much larger than we’ve been able to explain using standard models. Estimates range from 5 to 9 °C warming (with some additional uncertainty due to potential problems with the proxy data) – smaller in the tropics than at higher latitudes.

Putting these two bits of evidence together is where it starts to get tricky.

First of all, how much does atmospheric CO2 rise if you add 3000 GtC to the system in a (geologically) short period of time? Zeebe et al. did this calculation and the answer is about 700 ppmv – quite a lot, eh? However, that is a perturbation to the Paleocene carbon cycle – which they assume has a base CO2 level of 1000 ppm, and so you only get a 70% increase – i.e., not even a doubling of CO2. And since the forcing that goes along with an increase in CO2 is logarithmic, it is the percent change in CO2 that matters rather than the absolute increase. The radiative forcing associated with that is about 2.6 W/m². Unfortunately, we don’t (yet) have very good estimates of background CO2 levels in the Paleocene. The proxies we do have suggest significantly higher values than today, but they aren’t precise. Levels could have been less than 1000 ppm, or even significantly more.
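
The logarithmic relationship can be sketched with the commonly used Myhre et al. (1998) approximation, ΔF = 5.35 ln(C/C0). Applying a fit derived for modern conditions to the Paleocene is an assumption, so treat the numbers as ballpark figures:

```python
import math

def co2_forcing(c_new, c_base, alpha=5.35):
    """Radiative forcing (W/m^2) from a CO2 change, using the standard
    logarithmic approximation dF = alpha * ln(C/C0). alpha = 5.35 is the
    Myhre et al. (1998) fit for modern conditions; using it for the
    Paleocene is an assumption."""
    return alpha * math.log(c_new / c_base)

# A 700 ppmv rise on an assumed 1000 ppmv Paleocene baseline is only
# a 70% increase:
f_petm = co2_forcing(1700, 1000)

# The same 700 ppmv added to a pre-industrial 280 ppmv baseline gives a
# much bigger forcing, because the ratio (not the absolute increase)
# is what matters:
f_modern = co2_forcing(980, 280)
```

With these assumptions `f_petm` comes out near the ~2.6 W/m² quoted above (the exact value depends on the coefficient used), while `f_modern` is more than twice as large for the identical absolute increase.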

If (and this is a key assumption that we’ll get to later) this was the only forcing associated with the PETM event, how much warmer would we expect the planet to get? One might be tempted to use the standard ‘Charney’ climate sensitivity (2-4.5 ºC per doubling of CO2) that is discussed so much in the IPCC reports. That would give you a mere 1.5-3 ºC warming, which appears inadequate. However, this is inappropriate for at least two reasons. First, the Charney sensitivity is a quite carefully defined metric that is used to compare a certain class of atmospheric models. It assumes that there are no other changes in atmospheric composition (aerosols, methane, ozone) and no changes in vegetation, ice sheets or ocean circulation. It is not the warming we expect if we just increase CO2 and let everything else adjust.

In fact, the concept we should be looking at is the Earth System Sensitivity (a usage I am trying to get more widely adopted) as we mentioned last year in our discussion of ‘Target CO2’. The point is that all of those factors left out of the Charney sensitivity are going to change, and we are interested in the response of the whole Earth System – not just an idealised little piece of it that happens to fit with what was included in GCMs in 1979.

Now for the Paleocene, it is unlikely that changes in ice sheets were very relevant (there weren’t any to speak of). But changes in vegetation, ozone, methane and aerosols (of various sorts) would certainly be expected. Estimates of the ESS taken from the Pliocene, or from the changes over the whole Cenozoic, imply that the ESS is likely to be larger than the Charney sensitivity since vegetation, ozone and methane feedbacks are all amplifying. I’m a co-author of an upcoming paper that suggests a value about 50% bigger, while Jim Hansen has suggested a value about twice as big as Charney. That would give you an expected range of temperature increases of 2-5 ºC (our estimate) or 3-6 ºC (Hansen) (note that the uncertainty bands are increasing here, but the ranges are starting to overlap with the observations). All of this assumes that there are no huge non-linearities in climate sensitivity in radically different climates – something we aren’t at all sure about either.
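
The arithmetic behind these ranges can be sketched as follows. The translation from a per-doubling sensitivity to a 70% CO2 increase uses the same logarithmic scaling as the forcing, and it assumes the temperature response scales linearly with forcing; the 1.5× and 2× ESS multipliers are the ones quoted above:

```python
import math

def warming_range(sens_low, sens_high, co2_ratio, ess_factor=1.0):
    """Warming range (deg C) for a given CO2 ratio, scaling a per-doubling
    sensitivity range by ln(ratio)/ln(2) and optionally amplifying it by
    an Earth System Sensitivity factor. Linear scaling of the response
    with forcing is an assumption."""
    scale = math.log(co2_ratio) / math.log(2) * ess_factor
    return sens_low * scale, sens_high * scale

ratio = 1.7  # a 70% CO2 increase (700 ppmv on a 1000 ppmv base)

charney = warming_range(2.0, 4.5, ratio)       # ~1.5-3.4 C: too small
ess_150 = warming_range(2.0, 4.5, ratio, 1.5)  # ~2.3-5.2 C
ess_200 = warming_range(2.0, 4.5, ratio, 2.0)  # ~3.1-6.9 C
```

Rounded, these roughly reproduce the 2-5 ºC and 3-6 ºC ranges quoted above; the exact endpoints depend on the base sensitivity range and the rounding applied.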

But let’s go back to the first key assumption – that CO2 forcing is the only direct impact of the PETM event. The source of all this carbon has to satisfy two key constraints – it must be from a very depleted biogenic source and it needs to be relatively accessible. The leading candidate for this is methane hydrate – a kind of methane ice that is found in cold conditions and under pressure on continental margins – often capping large deposits of methane gas itself. Our information about such deposits in the Paleocene is sketchy to say the least, but there are plenty of ideas as to why a large outgassing of these deposits might have occurred (tectonic uplift in the proto-Indian ocean, volcanic activity in the North Atlantic, switches in deep ocean temperature due to the closure of key gateways into the Arctic, etc.).

Putting aside the issue of the trigger though, we have the fascinating question of what happens to the methane that would be released in such a scenario. The standard assumption (used in the Zeebe et al. paper) is that the methane would oxidise (to CO2) relatively quickly, and so you don’t need to worry about the details. But work that Drew Shindell and I did a few years ago suggested that this might not quite be true. We found that atmospheric chemistry feedbacks in such a circumstance could increase the impact of methane releases by a factor of 4 or so. While this isn’t enough to sustain a high methane concentration for tens of thousands of years following an initial pulse, it might be enough to enhance the peak radiative forcing if the methane was being released continuously over a few thousand years. The increase in the case of a 3000-GtC pulse would be on the order of a couple of W/m² – for as long as the methane was being released. That would be a significant boost to the CO2-only forcing given above – and enough (at least for relatively short parts of the PETM) to bring the temperature and forcing estimates into line.

Of course, much of this is speculative given the difficulty in working out what actually happened 55 million years ago. The press response to the Zeebe et al. paper was, however, very predictable.

The problems probably started with the title of the paper “Carbon dioxide forcing alone insufficient to explain Palaeocene–Eocene Thermal Maximum warming” which on its own might have been unproblematic. However, it was paired with a press release from Rice University that was titled “Global warming: Our best guess is likely wrong,” containing the statement from Jerry Dickens that “There appears to be something fundamentally wrong with the way temperature and carbon are linked in climate models.”

Since the know-nothings agree one hundred per cent with these two last statements, it took no time at all for the press release to get passed along by Marc Morano, posted on Drudge, and declared the final nail in the coffin for ‘alarmist’ global warming science on WUWT (Andrew Freedman at WaPo has a good discussion of this). The fact that what was really being said was that climate sensitivity is probably larger than produced in standard climate models seemed to pass almost all of these people by (though a few of their more astute commenters did pick up on it). Regardless, the message went out that ‘climate models are wrong’ with the implicit sub-text that current global warming is nothing to worry about. Almost the exact opposite of the point the authors wanted to make (another press release from U. Hawaii was much better in that respect).

What might have been done differently?

First off, headlines and titles that simply confirm someone’s prior belief (even if that belief is completely at odds with the substance of the paper) are a really bad idea. Many people do not go beyond the headline – they read it, they agree with it, they move on. One should also avoid truisms. All ‘models’ are indeed wrong – they are models, not perfect representations of the real world. The real question is whether they are useful – what do they underestimate? overestimate? and are they sufficiently complete? Thus a much better, more specific title for the press release would have been “Global warming: Our best guess is likely too small” – and much less open to misinterpretation!

Secondly, a lot of the confusion is related to the use of the word ‘model’ itself. When people hear ‘climate model’, they generally think of the big ocean-atmosphere models run by GISS, NCAR or the Hadley Centre, etc., for the 20th Century climate and for future scenarios. The model used in Zeebe et al. was not one of these; instead it was a relatively sophisticated carbon cycle model that tracks the different elements of the carbon cycle, but not the changes in climate. The conclusions of the study related to the sensitivity of the climate used the standard range of sensitivities from IPCC TAR (1.5-4.5 ºC for a doubling of CO2), which have been constrained not by climate models but by observed climate changes. Thus nothing in the paper related to the commonly accepted ‘climate models’ at all, yet most of the commentary made the incorrect association.

To summarise, there is still a great deal of mystery about the PETM – the trigger, where the carbon came from and what happened to it – and the latest research hasn’t tied up all the many loose ends. Whether the solution lies in something ‘fundamental’ as Dickens surmises (possibly related to our basic inability to explain the latitudinal gradients in any of the very warm climates), or whether it’s a combination of a different forcing function combined with more inclusive ideas about climate sensitivity, is yet to be determined. However, we can all agree that it remains a tantalisingly relevant episode of Earth history.

Link: http://www.realclimate.org/index.php/archives/2009/08/petm-weirdness/