by Michael E. Mann, Scientific American, March 18, 2014 (see also April edition)
If the world continues to burn fossil fuels at the current rate, global warming will reach 2 degrees Celsius by 2036, crossing a threshold that many scientists think will hurt all aspects of human civilization: food, water, health, energy, the economy and national security. In my article "False Hope" in the April 2014 Scientific American, I reveal dramatic curves that show why the world will reach this temperature limit so quickly, and also why the recent slowdown in the rate of temperature increase, if it continues, will only buy us another 10 years.
These numbers come from calculations made by me and several colleagues. We plugged values of Earth’s “equilibrium climate sensitivity”—a common measure of the heating effect of greenhouse gases—into a so-called energy balance model. Scientists use the model to investigate possible climate scenarios.
You can try this exercise yourself. The text below explains the variables and steps involved. You can download the climate data here and the model code here. And you can compare your results with mine, which are here. You can also change the variables to see what other future scenarios are possible. One note: the model runs on MATLAB software, which can be obtained here.
We employed a simple zero-dimensional Energy Balance Model (“EBM”—see references 1 through 5 below) of the form
C dT/dt = S(1 - a)/4 + F_GHG - A - B T + w(t)
to model the forced response of the climate to estimated natural and anthropogenic radiative forcing.
T is the temperature of Earth’s surface (approximated as the surface of a 70-meter-deep mixed-layer ocean covering 70% of Earth’s surface area). C = 2.08 x 10^8 J K^-1 m^-2 is an effective heat capacity that accounts for the thermal inertia of the mixed-layer ocean, but does not allow for heat exchange with the deep ocean as in more elaborate “upwelling-diffusion models” (ref. 6). S ≈ 1370 W m^-2 is the solar constant, and a ≈ 0.3 is the effective surface albedo. F_GHG is the radiative forcing by greenhouse gases, and w(t) represents random weather effects, which was set to zero to analyze the pure radiatively forced response.
The linear “gray body” approximation (ref. 3) LW = A + B T was used to model outgoing longwave radiation in a way that accounts for the greenhouse effect. The choice A = 221.3 W m^-2 and B = 1.25 W m^-2 K^-1 yields a realistic preindustrial global mean temperature T = 14.8 °C and an equilibrium climate sensitivity (ECS) of ΔT_2xCO2 = 3.0 °C, consistent with midrange estimates by the Intergovernmental Panel on Climate Change (ref. 7). B can be varied to change the ECS of the EBM. For example, the higher value B = 1.5 W m^-2 K^-1 yields a more conservative ECS of ΔT_2xCO2 = 2.5 °C.
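To make these pieces concrete, here is a minimal MATLAB sketch of the EBM integrated with a forward-Euler step. It is not the code distributed at the link at the end of this post; the time step, spin-up period and the toy step forcing are my illustrative assumptions, but the constants are those quoted above.

% Minimal zero-dimensional EBM sketch (illustrative; not the distributed code).
% Integrates C dT/dt = S(1 - a)/4 + F_GHG - A - B T + w(t) with forward Euler, w = 0.
C = 2.08e8;      % effective heat capacity, J K^-1 m^-2
S = 1370;        % solar constant, W m^-2
a = 0.3;         % effective surface albedo
A = 221.3;       % gray-body offset, W m^-2
B = 1.25;        % gray-body slope, W m^-2 K^-1 (sets the ECS)

dt   = 86400;                % one-day time step in seconds (assumption)
nyrs = 250;                  % illustrative run length, years
T    = 14.8;                 % start near the preindustrial mean, deg C
for k = 1:nyrs*365
    tyr  = k*dt/(365*86400);
    Fghg = 3.7*(tyr > 100);  % toy forcing: an abrupt CO2 doubling after year 100
    dTdt = (S*(1 - a)/4 + Fghg - A - B*T)/C;
    T    = T + dTdt*dt;
end

ECS = 5.35*log(2)/B;         % equilibrium warming per CO2 doubling; ~3.0 K for B = 1.25
fprintf('Final T = %.2f C, implied ECS = %.2f K\n', T, ECS)

With the forcing set to zero the model sits at the preindustrial balance, T = (S(1 - a)/4 - A)/B ≈ 14.8 °C, and raising B to 1.5 W m^-2 K^-1 lowers the implied ECS to roughly 2.5 °C, matching the numbers quoted above.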
Energy Balance Model Simulations
Historical Simulations. The model was driven with estimated annual natural and anthropogenic forcing over the years A.D. 850 to 2012. Greenhouse radiative forcing was calculated using the approximation (ref. 8) F_GHG = 5.35 ln(CO2e/280), where 280 parts per million (ppm) is the preindustrial CO2 level and CO2e is the “equivalent” anthropogenic CO2. We used the CO2 data from ref. 9, scaled to give CO2e values 20 percent larger than CO2 alone (for example, in 2009 CO2 was 380 ppm whereas CO2e was estimated at 455 ppm). Northern Hemisphere anthropogenic tropospheric aerosol forcing was not available from ref. 9 and so was taken instead from ref. 2, with the amplitude increased by 5 percent to accommodate a slightly larger indirect effect than in ref. 2, and with a linear extrapolation of the original series (which ends in 1999) to extend it through 2012.
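As a quick check of that forcing expression, a short sketch (the 20 percent CO2-to-CO2e scaling and the 2009 value are taken from the text above; everything else is illustrative):

% Greenhouse radiative forcing from equivalent CO2 (expression of ref. 8).
co2  = 380;                  % measured CO2 in 2009, ppm (value quoted above)
co2e = 1.20*co2;             % "equivalent" CO2, ~20 percent larger than CO2 alone
Fghg = 5.35*log(co2e/280);   % W m^-2 relative to the 280 ppm preindustrial level
% gives co2e of about 456 ppm and a forcing of roughly 2.6 W m^-2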
Estimated past changes in solar irradiance were prescribed as a change in the solar constant S whereas forcing by volcanic aerosols was prescribed as a change in the surface albedo a. Solar and volcanic forcing were taken from the General Circulation Model (GCM) simulation of ref. 3 described in the section above, with the following modifications: (1) solar forcing was rescaled under the assumption of a 0.1 percent change from Maunder Minimum to present, more consistent with recent estimates (ref. 9); (2) volcanic forcing was applied as the mean of the latitudinally varying volcanic forcing of ref. 9; (3) values for both series were updated through 2012.
Future Projections. For the purpose of the “business as usual” future projections, we have linearly extrapolated the CO2 radiative forcing forward to 2100, based on the trend over the past decade (which is roughly equivalent, from a radiative-forcing standpoint, to a forward projection of the exponential historical trajectory of CO2 emissions). We assume constant solar output and no climatically significant future volcanic eruptions.
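One plausible way to implement that extrapolation, assuming the historical CO2 forcing sits in a vector f_co2 on annual years yr (both hypothetical names, not those of the distributed code), is sketched below.

% Extend CO2 radiative forcing to 2100 along the past decade's linear trend.
last10 = yr >= yr(end) - 9;                      % the most recent ten years
p      = polyfit(yr(last10), f_co2(last10), 1);  % straight-line fit of forcing vs. year
yr_fut = (yr(end)+1 : 2100)';                    % projection years
f_fut  = polyval(p, yr_fut);                     % "business as usual" forcing to 2100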
We have assumed that tropospheric aerosol forcing decreases exponentially from its current value with a time constant of 60 years. This gives a net anthropogenic forcing change from 2000 to 2100 of 3.5 W m^-2, roughly equivalent to the “RCP6” scenario of the Intergovernmental Panel on Climate Change’s Fifth Assessment Report, a future emissions scenario that assumes only modest efforts at mitigation.
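That assumption amounts to scaling the present-day aerosol forcing by a decaying exponential; a sketch using the same hypothetical variable names as above (f_aer holding the historical aerosol forcing):

% Anthropogenic aerosol forcing decaying exponentially with a 60-year time constant.
tau_aer   = 60;                                            % years
f_aer_fut = f_aer(end)*exp(-(yr_fut - yr(end))/tau_aer);   % f_aer(end) = current value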
Stabilization Scenarios. For the stabilization scenarios, we relax the CO2 concentration (with a 20-year time constant) to a maximum specified value at 2100. We considered cases in which the anthropogenic tropospheric aerosol burden is assumed either to (a) decrease exponentially from its current value with a time constant of 60 years, as in the future projections discussed in the previous section, or (b) remain constant at its current value.
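One plausible reading of that relaxation, sketched with an illustrative 450 ppm stabilization target (any target can be substituted), is below. Whether the 20 percent CO2e scaling carries into the future is my assumption here, not a statement about the distributed code.

% Relax CO2 toward a specified stabilization level with a 20-year time constant.
tau_co2 = 20;                            % years
co2_max = 450;                           % illustrative stabilization target, ppm
c       = co2_now;                       % CO2 at the end of the historical record, ppm (hypothetical name)
co2_fut = zeros(size(yr_fut));
for k = 1:numel(yr_fut)
    c = c + (co2_max - c)/tau_co2;       % one-year relaxation step toward the target
    co2_fut(k) = c;
end
f_ghg_fut = 5.35*log(1.2*co2_fut/280);   % forcing from the stabilized pathway (assumes the same CO2e scaling)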
Additional Details. Sensitivity analyses of the historical simulations were performed by Mann et al. (2012) (ref. 4) with respect to (i) the assumed equilibrium climate sensitivity (varied over ΔT_2xCO2 = 2-4 °C); (ii) the solar scaling (a 0.25 percent Maunder Minimum-to-present change in place of 0.1 percent); (iii) the volcanic aerosol loading estimates used; and (iv) the scaling of volcanic radiative forcing with respect to aerosol loading, to account for possible size-distribution effects. All of the alternative choices described above were found to yield qualitatively similar results.
MATLAB source code for the energy balance model, the data used in the calculations and the simulation results discussed in the Scientific American article are available at: www.meteo.psu.edu/holocene/public_html/supplements/EBMProjections
3 comments:
Over the past 20 years, 97.3% of GCMs have failed the observational test. It is therefore difficult for me to believe any dire prediction projected out twenty years. GCMs continue to be plagued with the most common of computer problems - GIGO.
You are completely wrong, of course. Even Hansen's simple model was quite good and fairly accurate.
Computer models are by definition imperfect.
In fact, tamino has a great new post out showing just how good Hansen's simple model was:
http://tamino.wordpress.com/2014/03/21/hansens-1988-predictions/