Thursday, July 9, 2009

Gavin Schmidt of Real Climate -- The Edge Interview of June 29, 2009

THE PHYSICS THAT WE KNOW [6.29.09]
A Conversation with Gavin Schmidt

Introduction

There is a simple way to produce a perfect model of our climate that will predict the weather with 100% accuracy. First, start with a universe that is exactly like ours; then wait 13 billion years.

But if you want something useful right now, if you want to construct a means of taking the knowledge that we have and use it to predict future climate, you build computer simulations. Your models are messy, complicated, in constant need of fine tuning, exacting and inexact at the same time. You're using the past to predict the future, extrapolating the very complicated from the very simple, and relying on an ever-changing data stream to inform the outcome.

Climatologist Gavin Schmidt explains: "How do you ask questions about expectations in the future? Obviously, you have to have things that are based on the physics that we know. You have to have things that are based on processes we can go and measure; it has to be based on our ability to understand the climate that we have now. Why do you get seasonal cycles? Why do you get storms? What controls the frequency of these events over a winter, over a longer period? What controls the frequency of, say, El Niño events in the tropical Pacific that have impacts on rainfall in California or in Peru or in Indonesia? How do you understand all of those things?"

"We approach this is in a very ambitious way."

"What we have decided, as a scientific endeavor, is to extrapolate as much as we can from our knowledge of the individual processes that we can measure: evaporation from the ocean, the formation of a cloud, rainfall coming from a cloud, changes in the wind patterns as a function of the pressure field, changes in the jet stream. What we have tried to do is encapsulate those small-scale processes, put them altogether, and see if we can predict the emerging properties of that fundamental complex system."

— Russell Weinberger

GAVIN SCHMIDT is a climatologist with NASA's Goddard Institute for Space Studies in New York, where he models past, present, and future climate. His essay "Why Hasn't Specialization Led To The Balkanization Of Science?" is included in What's Next? Dispatches on the Future of Science, edited by Max Brockman.

Gavin Schmidt's Edge Bio Page


THE PHYSICS THAT WE KNOW

[GAVIN SCHMIDT:] In terms of environmental problems, the key question that faces us now (and will face us for at least the next century) is to what extent the changes that we are making to the atmosphere — to the oceans, to the composition of the air — are going to impact things that matter. How are they going to impact sea level changes? How are they going to impact temperature changes? How are they going to impact rainfall and hydrological resources?

Everywhere you go you see societies based around certain expectations for what their climate is. How far do you build away from the shore? How do you design your agriculture? What kind of air conditioning system do you put in a building? All of these things depend on the expectations you have for what the temperature is going to be during the summertime, or how high a storm surge reaches when you have a northeasterly storm. All of these things require an expectation that has been built up over hundreds of years but that is now changing.

When you have expectations based on past information that are no longer going to be valid, or you suspect that they are no longer going to be valid, how do you come up with new expectations? How do you inform decisions that are being made now that will affect how people react to climate in 10, 20, 30, 50 years' time? We are building infrastructure now that has those kinds of lifetimes, and yet are we using our best estimate of what is going to happen in the future to inform those decisions? The answer is pretty much no. We know that is not being done.

So, how do you ask questions about expectations in the future? Obviously, you have to have things that are based on the physics that we know. You have to have things that are based on processes we can go and measure; it has to be based on our ability to understand the climate that we have now. Why do you get seasonal cycles? Why do you get storms? What controls the frequency of these events over a winter, over a longer period? What controls the frequency of, say, El Niño events in the tropical Pacific that have impacts on rainfall in California or in Peru or in Indonesia? How do you understand all of those things?

We approach this in a very ambitious way.

What we have decided, as a scientific endeavor, is to extrapolate as much as we can from our knowledge of the individual processes that we can measure: evaporation from the ocean, the formation of a cloud, rainfall coming from a cloud, changes in the wind patterns as a function of the pressure field, changes in the jet stream. What we have tried to do is encapsulate those small-scale processes, put them all together, and see if we can predict the emergent properties of that fundamentally complex system.

This is a very ambitious thing to attempt to do, because there is a lot of complexity, a lot of structure in the climate that is not a priori predictable from any small-scale process. The wet and dry seasons in the tropics come about because of the combination of the seasonal cycle of the Earth's orbit around the sun, changes in evaporation, changes in moist convection (the process that creates the big cumulus towers and thunderstorms), and water vapor transports driven by the moist convection and by the Hadley Cell that gets set up as a function of all those things. It's a very complex environment. I can't say how it would change if evaporation were a little bit different, or if the sensitivity of evaporation were a little bit different from what we understand now.

We have been quite successful at building these models on the basis of small-scale processes to produce large-scale simulations of the emergent properties of the climate system. We understand why we have a seasonal cycle; we understand why we have storms in the mid-latitudes; we understand what controls the ebb and flow of the seasonal sea ice distribution in the Arctic. We have good estimates for all the things that are going on. But we don't have perfect estimates. Instead, we have maybe 20 different groups around the world who have put together their best shot at what all those processes are, which ones are important and which ones are not, and they have each produced their own separate digital world, their own digital climate. They are all a little bit different and they all have slightly different sensitivities. So if I change one element in those models, for instance the amount of carbon dioxide in the atmosphere, they all react in slightly different ways.

In some respects they all act in very similar ways — for instance, when you put in more carbon dioxide, which is a greenhouse gas, it increases the opacity of the atmosphere and it warms up the surface. That is a universal feature of these models, and it is universal because it is based on very, very fundamental physics that you don't actually need a climate model to work out. But when it comes to aspects that are slightly more relevant — I mean, nobody lives in the global mean atmosphere, nobody has the global mean temperature as an important part of their expectations — things change. When it comes to something like rainfall in the American Southwest or rainfall in the Sahel or the monsoon system in India, it turns out that those different assumptions we made in building those models (the slightly different decisions about what was important and what wasn't) have a very important effect on the sensitivity of very complex elements of the climate.
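
To make "fundamental physics you don't need a climate model for" concrete: a textbook zero-dimensional energy balance already gives the sign and rough size of the greenhouse effect. The sketch below is a standard one-layer grey-atmosphere calculation, not anything from GISS; the emissivity values are illustrative assumptions.

    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0       # solar constant, W m^-2
    ALBEDO = 0.3      # planetary albedo

    def surface_temp(eps):
        """Equilibrium surface temperature for a one-layer grey atmosphere
        with infrared emissivity eps (eps is an illustrative assumption)."""
        absorbed = S0 * (1.0 - ALBEDO) / 4.0   # global-mean absorbed solar
        return (absorbed / (SIGMA * (1.0 - eps / 2.0))) ** 0.25

    print(round(surface_temp(0.0)))    # ~255 K: no greenhouse effect
    print(round(surface_temp(0.78)))   # ~288 K: close to the observed mean

Raising eps (more infrared opacity) always raises the surface temperature, which is why that global-mean result is common to every model.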

Some models suggest very strongly that the American Southwest will dry in a warming world; some models suggest that the Sahel will dry in a warming world. But other models suggest the exact opposite. Now, let's just imagine that the models have an equal pedigree in terms of the scientists who have worked on them and in terms of the papers that have been published — it's not quite true but it's a good working assumption. With these two models, you have two estimates — one says it's going to get wetter and one says it's going to get drier. What do you do? Is there anything that you can say at all? That is a really difficult question.

There are a couple of other issues that come up. It turns out that if you take the average of these 20 models, that average is a better model than any one of the 20 models. It has a better prediction of the seasonal cycle of rainfall; it has a better prediction of surface air temperatures; it has a better prediction of cloudiness. That is a little bit odd because these aren't random. You can't rely on the central limit theorem to demonstrate that that must be the case, because these aren't random samples. They are not 20 random samples of the space of all possible climate models. They have been tuned and they have been calibrated and they have been worked on for many years — everybody is trying to get the right answer.

In the same way that you can't make the average of wrong arithmetic more correct than the correct arithmetic, it's not obvious that the average climate model should be better than all of the other climate models. So, for example, if I wanted to know what 2+2 was and I just picked a set of random numbers, averaging all those random numbers is unlikely to give me four. Yet when you come to climate models, that is kind of what you get: you take all the climate models, which give you numbers between three and five, and the average is something very close to four. Obviously, it's not pure mathematics — it's physics, it's approximations, there is empirical tuning that goes on. But it's very odd that the average of all the models is better than any individual model.
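
A toy version of that puzzle is easy to set up. In the synthetic sketch below (all numbers invented for illustration), 20 "models" see a common truth through independent biases plus noise, and the multi-model mean beats even the best single member because the errors partly cancel. The catch, as described above, is that real models are tuned and far from independent, so this easy statistical explanation doesn't straightforwardly apply.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "truth": a climatological field of 100 grid points.
    truth = rng.normal(size=100)

    # 20 toy models: truth seen through a per-model systematic bias plus noise.
    n_models = 20
    biases = rng.normal(scale=0.5, size=(n_models, 1))
    models = truth + biases + rng.normal(scale=0.5, size=(n_models, 100))

    def rmse(field):
        return float(np.sqrt(np.mean((field - truth) ** 2)))

    best_single = min(rmse(m) for m in models)
    multi_model_mean = rmse(models.mean(axis=0))

    print(f"best single model RMSE: {best_single:.2f}")      # roughly 0.5
    print(f"multi-model mean RMSE:  {multi_model_mean:.2f}")  # roughly 0.15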

Does that mean that the average of all the models' predictions is better than any individual model's prediction? That doesn't follow either, because it may be that all the models contain errors which, for today's climate, average out when you bring them together. Who is to say what controls their sensitivity, since we know that in each model the sensitivity is controlled by slightly different elements?

You need to have some kind of evaluation. I don't like to use the word validation because it implies a kind of binary, true-false setup. But you need an evaluation; you need tests of the model's sensitivity compared to something in the real world that can give you some credibility that the model has the right sensitivity. That is very difficult. For instance, let's imagine that the models I want to pay attention to are the ones that get the best seasonal cycle of rainfall. So I rank the models, give them a score, and I get the top 10 models that come in with the best score for that metric. Then somebody else says, no, I think it's more important that they get the annual mean right, or the inter-annual variability — the variability from one year to another. Well, I could do that same ranking. It turns out that if I do that ranking for three different metrics — and there is nothing that says one metric is better than another — I end up with three completely different rankings. Not only are the rankings uncorrelated one to the other, depending on the metric, but the projections — the estimates that you get going into the future — turn out to be uncorrelated to the score as well. I get the same spread if I take the top 10 models over here as I had for the whole set. So there will still be some positive ones and some negative ones when I look, for instance, at projected rainfall in the American Southwest.
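
That situation is easy to mimic with synthetic numbers. In the sketch below (invented skill scores standing in for the three metrics; nothing here comes from real model output), the rankings produced by different metrics are essentially uncorrelated, so the top 10 under one metric is largely a different set of models under another:

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)

    # Invented skill scores for 20 models under 3 metrics (say, seasonal
    # cycle, annual mean, interannual variability), drawn independently
    # to mimic the situation described above.
    scores = rng.normal(size=(3, 20))
    names = ["seasonal cycle", "annual mean", "variability"]

    for i in range(3):
        for j in range(i + 1, 3):
            rho, _ = spearmanr(scores[i], scores[j])
            top_i = set(np.argsort(-scores[i])[:10])
            top_j = set(np.argsort(-scores[j])[:10])
            print(f"{names[i]} vs {names[j]}: rank corr {rho:+.2f}, "
                  f"top-10 overlap {len(top_i & top_j)}/10")
    # Rank correlations hover near zero and the top-10 overlap sits near
    # the chance level of 5/10: subsetting the ensemble by skill on one
    # metric barely constrains the others, or the future projections.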

That is a real problem. How do you deal with these models in an intelligent way? What can you bring to bear from the observational record, whether over the 20th century or longer (paleo-climate records or what have you)? How do you bring that information to bear to test whether the models have any predictive skill, any skill in their predictions? That is really what I spend all my time on: trying to find ways to constrain the models, to improve the Bayesian subjective probability that they are telling you anything of any use. It's not that we have been working in a complete vacuum for the last 30 years. These models are relatively mature and people have been thinking about these things since the beginning.

There are lots of examples in the current climate where you can demonstrate that the models have skill. Take the response to the Mount Pinatubo eruption in 1991. This was a big volcano in the Philippines. It put a huge amount of sulphur dioxide and sulphate aerosols into the atmosphere. They spread around the stratosphere and stayed there for about two to three years. These aerosols are reflective; they are white. So the sun comes in, hits these aerosols, and gets reflected out. It acted as a kind of sunshade over the planet and it caused the planet to cool. Our group (though this was before my time), before this cooling happened, did the calculations with their model at the time and said that the cooling would reach a maximum of about half a degree in about two years' time. Lo and behold, such a thing happened. If you go back — and we have lots and lots of information about what happened over that period: what happened to radiation at the top of the atmosphere, what happened to the winds that change as a function of the temperature gradients in the lower stratosphere, what happened to water vapor — we can see whether the models got the right answer for the right reason, and for the most part they do. So that was a good real prediction, made in real time, that could be tested in a short amount of time.

The problem with climate predictions and projections going out to 2030 and 2050 is that we don't anticipate that they can be tested in the way you can test a weather forecast. It takes about 20 years to evaluate them, because there is so much unforced variability in the system — the chaotic component of the climate system — which is not predictable beyond two weeks, even theoretically. That is something we can't really get a handle on. We can only look at the climate problem once we have had a long enough time for that chaotic noise to be washed out, so that we can see a forced signal that is significantly larger than the inter-annual or inter-decadal variability. That is a real problem, because society has demanded answers of us and isn't going to wait 20 years for us to update.

We did this 20 years ago, and the predictions that we made then have been more or less validated, given both the imperfections that we had then and the uncertainty in how we thought things were going to change in the future. So there is a track record that shows these models are realistic. But the questions that were asked 20 years ago were relatively simple compared to the questions being asked now. The issue of climate change has become tied into many other questions, such as biosphere degradation, habitat loss, over-development, inappropriate development, energy security, etc. All of these questions are much more immediate and acute than climate change as a whole, yet climate change impacts very strongly on how you might deal with a lot of them. Society is not willing just to wait for the scientists to say, "come back in 20 years and we will tell you whether our predictions were any good or not." It's tricky. People want answers, and they need those answers validated, but we have to do it in a way that is not the standard make-a-prediction-test-it, make-a-prediction-test-it loop. The time scales are just too long.

The thing that you have with climate, and really with any observational science as opposed to a laboratory science, is that you have history. Essentially you have 4.5 billion years of earth history, of which we know increasingly little the further you go back. But we do know a fair bit about how climate has changed in the past. We know about the ice ages 20,000 years ago. We know about oscillations in the ocean circulation that happened around 8,000 years ago. We know that 6,000 years ago the Sahara was much wetter than it is now. We have theories for why all of those things happened, based on our knowledge of planetary dynamics, how the orbit has changed in that time period, and how the de-glaciation (the melting of the big ice sheets from about 20,000 years ago to about 8,000 years ago) proceeded. We have clues about that in ice core records, in the continuing uplift of the land where the ice sheets used to be, and in the drainage pathways of the paleo great lakes that existed at that time. We can see where the beaches were. There are a lot of clues in the landscape, in the geology, in the soils, in the sea, in the mud, in the ice, in tree rings, in corals, that tell us how things changed in the past. But all of those clues are very indirect. They're not real thermometers. They're not rain gauges. They're not satellites. They are telling us things that are connected to climate but are not really the same as climate. Interpreting them has always been problematic, because they are often a function not of one particular thing that is changing the climate, but of maybe four or five different things, all changing in different ways at different times.

Over the last five years or so we have spent an enormous amount of effort making the climate models that we use much more complete. It used to be that we would have basically the atmospheric circulation and the water cycle — those are the key elements of the climate system. But there is a lot more going on. There is mineral dust. There are aerosols, and these aerosols interact with the clouds, they interact with radiation, they have interactions with atmospheric chemistry, they have interactions with air pollution and other kinds of emissions to produce ozone (also a greenhouse gas, but one that is generated within the atmosphere rather than being directly emitted). Those aerosols and those other elements of atmospheric composition make the whole problem much more complicated, and they add huge numbers of extra pathways that allow temperature changes or hydrological cycle changes or wind changes to interact with greenhouse gases and temperatures and the like.

The neat thing is that these same chemicals are also very closely related to the things we measure in ice cores and in mud in the bottom of the ocean. We can measure dust records in the ice cores, which tell us how much dust got to Greenland pretty much every year for the last 100,000 years. That is telling us something about where that dust was coming from. It tells us something about the atmospheric circulation, but it's one variable that depends on many different inputs. But because we have now included that in the climate models, we can now ask questions like, "given this hypothesis for why the climate changed at that point, does the simulated dust record that we would have gotten in our numerical virtual Greenland match up to what we actually see in the real Greenland?" Then we can go back and look at the climate changes and the ideas that we have had for why the climate changed in the past and evaluate how well the models do with really large changes in climate. That is potentially much more useful than testing the models against the seasonal cycles today because you are testing against a real climate change as opposed to a proxy for climate change.
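
As a toy illustration of that kind of model-proxy comparison (the function and both series are hypothetical; real comparisons use forward models of dust emission, transport, and deposition, plus age-model uncertainty), one might score a simulated Greenland dust series against the core like this:

    import numpy as np

    def dust_score(simulated, observed):
        """Correlate simulated vs. observed annual dust deposition.

        Both arrays are hypothetical annual series on a common timescale.
        Dust fluxes span orders of magnitude, so the logs are compared.
        """
        return float(np.corrcoef(np.log(simulated), np.log(observed))[0, 1])

    # e.g. dust_score(model_greenland_dust, ice_core_dust) near 1 would
    # support the hypothesized climate history; near 0 would count against it.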

Things that happen over the seasons are very different from things that happen due to an increase in carbon dioxide over time. They are very different physically; the time scales are different; you have different kinds of feedbacks. But if you go back into the past, you can see those same long-term feedback effects that will control what happens in the future, operating over a similar time period. The reason the Sahara was green 6,000 years ago is that we were a little bit closer to the sun during Northern Hemisphere summers, because of the way the orbit of the earth works. We're on an ellipse: there is a point where we are close to the sun and a point where we are far away from it. Right now we are closest to the sun in January; 6,000 years ago we were closest to the sun in August. August is Northern Hemisphere summer, so you get warmer summers. As you have warmer summers, the thermal equator moves to the north, and the rain bands, which tend to follow that thermal equator, go much further into the Sahara than they do today. The models show that same sensitivity, and that gives you some hope that they are actually telling us something realistic.
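
The size of that orbital effect is back-of-envelope arithmetic: solar flux scales as the inverse square of the Earth-sun distance, which varies with the orbital eccentricity. A minimal sketch, using today's eccentricity (6,000 years ago it was similar):

    # Flux swing between perihelion (closest approach) and aphelion,
    # using the present-day orbital eccentricity; flux goes as 1/r^2.
    e = 0.0167
    swing = ((1.0 + e) / (1.0 - e)) ** 2 - 1.0
    print(f"perihelion/aphelion flux swing: {swing:.1%}")  # about 6.9%
    # Today that boost arrives in January; roughly 6,000 years ago it
    # arrived in Northern Hemisphere summer, which is the extra summer
    # heating that pulled the rain bands north into the Sahara.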

The problem is that most of the modeling groups don't do those kinds of experiments. Right now we are in the midst of building a huge new database of model simulations that will be used for the next IPCC report. The IPCC (the Intergovernmental Panel on Climate Change) is an assessment body which goes around looking at everything in the scientific literature and coming up with an assessment of what it all means. The community of climate modelers knows when these things are coming up, and what they do a few years beforehand is put together a huge database of simulations that people can look at, so that by the time the IPCC comes along and says, "what is going on in the world of climate modeling?", there will be lots of information about all these different climate models. We are working to make sure that within these sets of simulations, people are running their models for paleo-climate simulations, so that we can do exactly what I was alluding to: can we rate the models based on how well they do in the paleo-climate? Does that tell us whether models with low sensitivity are better for the future, or is it going to be the ones with high sensitivity? We are going to have a metric that is much closer to what we think we need to make some kind of assessment of how credible the projections are going to be.

Freeman Dyson has made a critique of models. I don't know Freeman Dyson; I've met his children. He seems like a very smart person. He has done some very interesting physics. He seems like a guy I would like to know. Yet his statements about climate, climate models, climate modelers, and Jim Hansen in particular are not the statements you would expect a smart person to make. It's like Shakespeare writing a play and then pulling a quote from a penny dreadful sheet that he found in the street. It just seems very inconsistent that somebody who thinks so hard and is so smart about so many things says dumb things like, oh, climate modelers think that their models are real and can't see the real world. I paraphrase, but he said something very similar. It betrays a complete ignorance of climate modelers, of climate models, and of what climate science is all about. His statements about Jim Hansen were very similar.

Jim Hansen (my boss, so take what I say with a pinch of salt if you prefer) is nothing like the caricature that Dyson painted, and anybody who says that has never met him, has never read anything he has actually written, and is just responding to, I imagine, the kind of online simulacrum that you sometimes find with high-profile people, one that bears no relation to their real ideas or personality or expertise. I'm much less famous than Jim Hansen, but I sometimes see discussions about me and my opinions and my expertise that are so far removed from anything I would ever say or think that it is laughable. Yet Dyson, who is a smart person, seems to be reading something like that, as opposed to investigating and talking to these people himself. I find that puzzling.

Climate change is one of those scientific topics where people perceive that the science itself is imbued with some moral, political, or economic meaning. Just as with stem cell research or evolution or genetically modified food, people react very strongly to what they perceive the science implies, to the extent that they attack the science rather than discuss the more fundamental issue of what that science implies. It turns out that in many of these fields, people are much more wedded to their moral, political, tribal viewpoints than they are to the scientific method and scientific inquiry. This is not surprising, and it takes years to beat that into graduate students — it's not something that people are naturally going to come up with. Yet when you have great scientists who are able to apply their scientific thinking in many, many different fields, and when it comes to one particular issue they seem not to be thinking as critically, it's a surprise. Statements like Freeman Dyson's or Will Happer's or Kary Mullis's or Linus Pauling's toward the end of his life — these are very smart people who have been lauded for being very smart their entire lives. Sometimes they say things that are very strange.

I understand that Dyson is a bit of a contrarian and that part I don't find bothersome in the slightest. Anybody who is a good scientist has to have that contrarian streak. They have to be the kind of person who says, "oh yeah? Prove it." You have to be that way. You can't just go around agreeing with everyone if you are going to make a contribution. You have to listen to everything that is going on. You see where the weak points are, you see where the assumptions are, and you burrow down into assumptions and you say, "is that really justified?" Quite often you find that it isn't. When it isn't justified and it has important consequences, then you have made a contribution as a scientist.

An example: I was talking about paleo-climate before. A lot of the interpretation of paleo-climate rests on very weak assumptions, and my modest contributions to the field have been in tackling exactly those assumptions. So you have to have that contrarian streak. You have to have that questioning streak. It doesn't surprise me in the least that most good scientists have that attitude. But in the best scientists that attitude is also married with humility — maybe you don't know everything that is going on. You can come into a field and say, "these people seem to be making this assumption; how have they analyzed it?" Generally speaking, they have analyzed it to death. When you come into a new field, or when you comment on a field that isn't something you have grown up with over time, you have to come in with a humility that says, these people are smart as well; let me see how they have used their smarts. I didn't get that sense when Freeman Dyson was talking about climate change.


I started off with pure math, applied math, fluid mechanics, special relativity, that kind of stuff. What I saw as I went through my education was a very clear winnowing out — not between really smart people and not-so-smart people, but between people with different aesthetic senses for the kinds of problems they found interesting and useful. The way that worked out is that those interested in theoretical physics were the people who enjoyed finding a problem that is amenable to just being thought at. This is not trivial. There aren't that many problems like that, but when you have one that can be thought at and you can come up with the key insight, then you have something that really changes the world. Einstein thinking about special relativity is a good example. So is QED. Those problems have an aesthetic quality that is very attractive: it's not messy, it's not horribly complex (general relativity and the things that have come from it are very complex, of course, but they stem from relatively simple systems).

Then there are the kinds of problems that attract a different kind of thinker: really, really complex problems, such as the human body or an individual cell or the climate system or solar physics. These are subjects that don't fit into the same aesthetic that special relativity fits into. They demand, right from the beginning, that you deal with multiple conflicting and intersecting elements. They are horribly non-linear right from the word go; they are horribly complex. There is never going to be a theory of climate that somebody comes up with just by thinking about how the climate should work. People have tried, but they all fall pretty much at the first hurdle. It is, to use a phrase, irreducibly complex.

And you can't get away from that. You can't think that the climate is ever going to yield by just being thought about. It needs to be thought about and measured and analyzed, then thought about again and measured and analyzed again, and all of these disparate elements have to be brought together. The reason climate models have grown up to be as complicated and as complete as they are is not a lack of imagination on the part of the people using them. It's because that is the way the real world is, and that is the way the field has made progress. It hasn't made progress by people sitting in a room coming up with theories for how climate should work. It's made progress because people have made complex assumptions; they have built them into models of varying complexity, all the way up to the GCMs (the big climate models I was talking about earlier); and those have been tested against very complex data from satellites, from intense observation campaigns, from in-situ observations. At the same time, all these models are plagued with uncertainty and have the problem of changing measurements. The data collection is always improving and our understanding of different processes is always growing. But that doesn't make things simpler; it makes things more complex. It means you need to add another element to your model. It means you have to measure everything again and run the model again.

You have to be the kind of scientist who embraces complexity in order to make progress in this kind of field. There are a lot of scientists who do not have that approach to complexity — in particular, the kinds of physicists who have self-selected as people who want to deal with problems they find aesthetically pleasing. Climate change is not that kind of science.


Now, there are some very big questions that we face as climate scientists and some very specific problems that we need to approach. Let me give you an example of an analysis that I think will be very interesting.

Every individual storm in the mid-latitudes is different — each has a different shape, a different amount of rainfall, the clouds are different, etc. The ability of the climate model to reproduce exactly the same weather pattern that we have seen over time is just about zero. That is, trying to reproduce exactly what has happened in the right time sequence season by season, day by day, is something we are not going to be able to do.

But what we are interested in is what happens to the generic storm. There is enough similarity between one low-pressure system and another low-pressure system that if you put them all together, you would come up with a generic storm. It would have a lot of information that was common to all of those storms but not the information that was unique to any one storm that happened to be in one particular configuration.

There is a constellation of satellites called the A-Train that is run by NASA: five polar-orbiting satellites that fly in formation, so that there is about a 20-minute difference between the first one and the last one. They are flying in something like a train, and they are all pointing at pretty much the same point on the surface of the earth as they traverse it. They are measuring many, many different things: the temperature of the atmosphere, how many aerosols there are, how much sea ice there is, the amount of chlorophyll in the ocean below, the winds at the surface, etc. Every time they pass over, they will see a little bit of a storm. They will pass over a storm, and the next day they might pass over it again as it has shifted a little bit to the west.

Collectively, over the seven or so years that these satellites have been up there, they have seen many, many different storms with very similar characteristics. Wouldn't it be great if you could take that satellite data and make that composite storm, based on the weather models that tell us where the storms were? You would take a time-space map from the weather models that tracked the storms in a given area, and see when the satellites were passing over storms. Collect all of that data and you'd be able to come up with a statistically average storm for that area over that whole seven-year period.
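
A bare-bones sketch of that compositing step is below. Everything in it is invented for illustration (the grid size, the function name, the data layout); real A-Train processing involves swath geometry, instrument footprints, and much more care. The point is only the storm-relative binning that turns thousands of partial overpasses into one statistically average storm.

    import numpy as np

    GRID = 50           # storm-relative grid, GRID x GRID cells (assumed)
    HALF_WIDTH = 10.0   # degrees from storm centre to grid edge (assumed)
    composite = np.zeros((GRID, GRID))
    counts = np.zeros((GRID, GRID))

    def add_overpass(lats, lons, values, storm_lat, storm_lon):
        """Bin one overpass's samples into storm-centred coordinates."""
        iy = ((lats - storm_lat + HALF_WIDTH) / (2 * HALF_WIDTH) * GRID).astype(int)
        ix = ((lons - storm_lon + HALF_WIDTH) / (2 * HALF_WIDTH) * GRID).astype(int)
        ok = (iy >= 0) & (iy < GRID) & (ix >= 0) & (ix < GRID)
        np.add.at(composite, (iy[ok], ix[ok]), values[ok])
        np.add.at(counts, (iy[ok], ix[ok]), 1)

    # Loop add_overpass over every overpass of every tracked storm (storm
    # centres taken from the weather-model tracks), then average:
    #     mean_storm = composite / np.maximum(counts, 1)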

This would be a tremendously valuable tool that we could use to develop our models — to compare the actual average storm with the model's prediction. You would think that someone would have done this study, but nobody has. All of that satellite data is there (it's only a few petabytes); all of that model information is there. Why hasn't the study been done? The reason is that each individual data stream for each of those individual instruments on the satellites — even though they are all run by NASA — is in a different place, and the way the time/space information has been collated for each of those instruments is different. There is no single portal that would allow you to filter that data without downloading the entire data set to your hard drive and sorting it yourself. Of course you can't do that, because it's petabytes of data and there is no hard drive that holds petabytes of data.

The same is true for the models: all the models come in the same kind of box, but if you want the temperatures, you have to download the entire grid of temperatures, for every point, for every month, for the entire period. There is no intelligent filtering in situ, so, again, you have to download gigabytes and gigabytes of data in order to derive one small set of numbers — even supposing your network connection would allow you to download that much information in any kind of reasonable time period.

So that is the sort of problem we face. Using the satellite data to see how well the models predict an average storm, or the processes within it, is a completely sensible thing to be doing. But it's completely impossible: it would take, I imagine, something like 100 man-years to do it with the data set as it is currently configured.

But this doesn't require a huge leap in technology to solve. It is really just a processing problem. It's the kind of thing that falls between the cracks because it's not an interesting enough research problem for a computer scientist, but it's too large a task for a scientist to want to devote any time to. So you have this big grey area between the cool research on networks, computer science, and machine learning, and what is needed to approach very interesting scientific questions — an area far too large for any one individual or group of scientists to put together themselves.

It's just not getting any attention, and that, it seems to me, is going to be a defining quality of the information-rich 21st century: there will be a gap between what certain groups of people need and what the groups of people doing much cooler things are ready to supply. The question I am asking is, how do you fill that in? How do you get funding agencies to understand that this might be a little pedestrian but is absolutely fundamental?

Google had an initiative called Google Research Data Sets, where I spent a lot of time talking to people, giving them exactly this problem and challenging them to do something about it. They have the capacity. They have the know-how. It was a very interesting idea but, in the end, they canceled the whole project.

And this brings us to the question of "what do we do about it?" How should scientists get involved in policy? Lots of different scientists come to very different decisions about that. I think it's nice that different scientists come to different decisions and that there is a range of opinion about how strongly one should interact with the policy process.

Personally, I don't pretend to be an economist; I don't pretend to be a sociologist; I don't pretend to be an expert in environmental regulation. So I generally don't comment on whether a cap and trade system is better than a carbon tax system or whether or not it is better that it is being run by the EPA. I leave that kind of stuff for the people who focus on that much more specifically, and I'm pretty much willing to find the most interesting and objective of them and give them the benefit of the doubt.

It's clear that there are a lot of people who talk about politics who are neither interesting nor objective. When it comes to discussing what to do about climate change, it appears to be a fact of life that people will use the worst and least intelligent arguments to make political points. If they can do that by sounding pseudoscientific — by quoting a paper here or misrepresenting another scientist's work over there — then they will. That surprised me before I really looked into it. It no longer surprises me.

I don't advocate for political solutions. If I do advocate for something — and if you put your voice into the public sphere, it has to be to advocate for something; why would you do it otherwise? — my advocacy is much more towards having more intelligent discussions, which is completely naive and stupid, and I realize that.

Five years ago I was less of a public person — less of a public persona in climate science — than I am now. But at the time, the voices that were being heard discussing climate change were completely divorced from what people were actually coming up with in the science. The Wall Street Journal was featuring full-on attacks on scientists; Congress was filled with know-nothings; and, in the mainstream media, every time there was a story, you would have one of the five obligatory contrarians pop up and say, "oh no, everything is going to be fine."

It was just distortion upon distortion, and there was no advocacy from the community that was actually studying this. There was no public voice for the community. There were a few scientists who would step out occasionally — Steve Schneider is one. But there was no community push to correct the record or to inform people about what the science actually showed — what was certain, what was uncertain, and how uncertain it was. I started dabbling in public outreach: I started sending letters to the editor, the occasional op-ed; I talked to journalists. All to very little effect.

I found myself repeating myself, and then I asked why. Why isn't there a repository of all the answers to these questions, which are always the same questions, which come up time and time again and still do now? So I started thinking: how could you really improve the level of context? Can you provide people with resources that would allow them to assess an argument — not whether or not a given policy is the right one, but whether there is an argument to justify such a policy?

That is to say, some people on the policy side have decided — as an a priori assumption — that it's impossible to argue against somebody's argument without arguing against their conclusion. I reject that fundamentally. If people make a stupid argument in order to support any policy, whether I agree with that policy or not, it is still a stupid argument and they shouldn't use it. I think you can point out that it's a stupid argument without it reflecting on the actual policy outcome. There are good arguments and bad arguments for most good policies. If we can just have the good arguments for the different policies battling it out, and not have to worry about the stupid arguments, then we might make progress. Okay, so that is obviously naive, because when we are talking about politics, the idea that we can have more elevated conversations in this information-rich world may be little more than a pipe dream. But it's something that I think is worth striving for.

Over the last five years I have spent a lot of time building up resources, through the blog and elsewhere. We spend a lot of time building backgrounders for journalists, for staffers, and for science advisors of various ilks — resources that people can use to tell what is a good argument and what is a bad argument. And there has been a shift. There has been a shift in the media; there has been a shift in the majority of people who advise policymakers; there has been a shift in policymakers. So I think that this kind of effort — and not just by me but by other people who are equally concerned — has elevated the conversation somewhat.

This leads to maybe the final question that I think about, which is: how do you increase the signal-to-noise ratio in communication about complex issues? We battle with this on a small scale in our blog's comment threads. In unmoderated forums about climate change, it devolves immediately into "you're a Nazi; no, you're a fascist," blah, blah, blah. Any semblance of the idea that you could actually talk about what aerosols do to the hydrological cycle without it devolving into name-calling seems to be fantasy. It is very tiresome.

The problem is that the noise serves various people's purposes. It's not that the noise is accidental. A lot of the noise when it comes to climate is deliberate, because increasing the noise means you don't hear the signal, and if you don't hear the signal you can't do anything about it, and so everything just gets left alone. Increasing the level of noise is a deliberate political tactic. It's been used by all segments of the political spectrum for different problems. With the climate issue in the US (though not elsewhere), it's used by a particular segment of the political community in ways that are personally distressing. How do you deal with that? That is a question I'm always asking myself, and I haven't gotten an answer to that one.

Link to Real Climate: http://www.realclimate.org

Link: http://www.edge.org/3rd_culture/schmidt09/schmidt09_index.html
