We recently got a request from Tom Cole, a water quality researcher, to explain some of the issues in climate modelling as seen from his perspective as a fellow numerical modeller. His (slightly paraphrased) questions are the basis for this post, and hopefully the answers will provide some enlightenment for modellers and non-modellers alike!
(NB. The answers refer specifically to the GISS climate model, for which I have first-hand knowledge, but apply more generally to the other large-scale models too. Apologies in advance for some of the unavoidable technicalities…)
- What schemes are you using for solving the partial differential equations? Are they free of numerical errors?
A. Partial differential equations arise naturally from the equations of motion for the atmosphere and ocean. To solve the basic momentum and transport equations, you need to approximate them on a grid of some sort, with different groups using techniques ranging from standard Arakawa leap-frog schemes to more sophisticated semi-Lagrangian schemes. Transport of tracers (heat, water, trace gases etc.) is usually higher order and as non-diffusive as possible, since maintenance of gradients and tracer conservation are of the utmost importance. No scheme is completely free of numerical error, but the properties of wave propagation, tracer dispersion etc. generally compare well to observations in the real world. It should be pointed out, though, that the dynamics are only a small part of the physics included in the models.
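For readers who have never seen what a "leap-frog scheme" actually looks like, here is a minimal 1-D sketch of leapfrog advection with a Robert-Asselin time filter. It is purely illustrative (not GISS code); the function name, parameter values and initial condition are all made up.

```python
# Minimal sketch: leapfrog (centred-difference) advection of a tracer on a
# periodic 1-D grid, with a Robert-Asselin filter to damp the computational
# mode that the three-time-level scheme otherwise supports.
import numpy as np

def leapfrog_advection(q0, u, dx, dt, nsteps, eps=0.1):
    """Advect tracer q0 with constant speed u for nsteps time steps."""
    q_old = q0.copy()
    # First step: forward Euler to generate the second time level
    q_now = q0 - u * dt / (2 * dx) * (np.roll(q0, -1) - np.roll(q0, 1))
    for _ in range(nsteps - 1):
        # Leapfrog step: centred in both time and space
        q_new = q_old - u * dt / dx * (np.roll(q_now, -1) - np.roll(q_now, 1))
        # Robert-Asselin filter stops odd/even time levels from decoupling
        q_old = q_now + eps * (q_new - 2 * q_now + q_old)
        q_now = q_new
    return q_now

if __name__ == "__main__":
    n = 200
    x = np.linspace(0, 1, n, endpoint=False)
    q0 = np.exp(-200 * (x - 0.3) ** 2)   # Gaussian tracer blob
    dx, u = 1.0 / n, 1.0
    dt = 0.4 * dx / u                    # Courant number 0.4
    q = leapfrog_advection(q0, u, dx, dt, nsteps=int(0.5 / (u * dt)))
    print("max before/after:", q0.max(), q.max())  # note the dispersive ripples
```

The dispersive ripples this simple scheme produces are exactly why tracer transport in real models uses higher-order, less dispersive schemes, as noted above.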
- Have you made tests to determine if the model results depend on resolution? In other words, have you increased the detail sufficiently so that the results are no longer dependent upon the size of an individual grid box?
A. It is obviously impossible to formally prove this all the way down to the microphysical scale, but in the range where this can be tested, there doesn’t appear to be a large dependence of the important climate variables (such as climate sensitivity) on resolution. Some aspects of the solutions clearly improve at higher resolution (the definition of fronts in low pressure systems, for instance) while some aspects degrade (holding everything else constant). See Schmidt et al (in press) for an example where the atmospheric resolution was doubled in comparison to the standard run, to very little effect. For ocean models the situation is less clear, since most models used in climate runs still do not resolve the mesoscale eddy field (Gulf Stream rings and the like), and so the issue is still open. However, there is a fundamental difference between climate models, which must include more and more parameterised physics as the resolution decreases, and the solution of a simple set of equations that is fixed regardless of resolution. Since much of the physics takes place at scales significantly below the grid box scale (moist convective plumes, cloud condensation, etc.), those unresolved features must be parameterised. These parameterisations will change as you approach the scales of the real physics, and so will the model equations (since you can’t parameterise an effect and resolve it at the same time!). Thus climate models are designed to work at a specific scale (or small range of scales), and so cannot be expected to have the same convergence properties as a more pure problem.
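To make the "double the resolution and compare a diagnostic" exercise concrete, here is a toy convergence check. It uses simple 1-D diffusion as a stand-in model; none of the numbers or names correspond to an actual GCM experiment.

```python
# Toy illustration of a resolution test: run the same "model" at successively
# doubled resolutions and see whether a bulk diagnostic stops changing.
import numpy as np

def run_toy_model(n, total_time=0.05, kappa=1.0):
    """Diffuse an initial temperature spike on n grid boxes; return a bulk
    'climate diagnostic' (here just the domain maximum at the end)."""
    dx = 1.0 / n
    dt = 0.25 * dx ** 2 / kappa            # explicit stability limit
    x = (np.arange(n) + 0.5) * dx
    T = np.exp(-100 * (x - 0.5) ** 2)      # initial condition
    for _ in range(int(total_time / dt)):
        T += kappa * dt / dx ** 2 * (np.roll(T, 1) - 2 * T + np.roll(T, -1))
    return T.max()

for n in (32, 64, 128, 256):
    print(f"n = {n:4d}  diagnostic = {run_toy_model(n):.5f}")
# If the diagnostic stops changing as n doubles, the answer is not
# resolution-limited.  In a real GCM the comparison is complicated by the
# parameterisations, which themselves change with resolution.
```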
- What are the dominant external forcing functions?
A. The most basic external forcing is the distribution of sunlight at the top of the atmosphere, and this is known very accurately. There is some uncertainty in the mean solar irradiance (‘the solar non-constant’), but that uncertainty is small in terms of estimating the mean climate (though it is a problem for simulating climate change earlier than about 1950). Depending on the model configuration, the atmospheric composition of trace gases (CO2, CH4, O3 etc.) and aerosols (dust, sulphates, nitrates, black carbon (soot), organic carbon etc.) are also external inputs. For some of these there is indeed a great deal of uncertainty, especially in their evolution over time.
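As an illustration of why the sunlight part is so well constrained, here is a minimal sketch of the daily-mean insolation at the top of the atmosphere as a function of latitude and day of year. It assumes a circular orbit and a crude sinusoidal approximation for the solar declination; real models use the full orbital (Milankovitch) parameters, and the value of the solar constant used here is just a representative number.

```python
# Daily-mean top-of-atmosphere insolation for a circular orbit.
import numpy as np

S0 = 1361.0   # mean solar irradiance in W/m^2 (the uncertain quantity in the text)

def daily_mean_insolation(lat_deg, day_of_year):
    """Daily-mean TOA insolation (W/m^2) at a given latitude and day."""
    phi = np.radians(lat_deg)
    # Approximate solar declination (reaching +23.44 deg at the June solstice)
    delta = np.radians(23.44) * np.sin(2 * np.pi * (day_of_year - 81) / 365.0)
    # Hour angle of sunrise/sunset; the clip handles polar day and polar night
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)
    h0 = np.arccos(cos_h0)
    return (S0 / np.pi) * (h0 * np.sin(phi) * np.sin(delta)
                           + np.cos(phi) * np.cos(delta) * np.sin(h0))

# Example: several latitudes at the June solstice (day ~172)
for lat in (90, 45, 0, -45, -90):
    print(f"lat {lat:+3d}: {daily_mean_insolation(lat, 172):7.1f} W/m^2")
```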
- What are the sources of intrinsic variability?
A. Intrinsic variability occurs on all time scales – from the synoptic (‘weather’) to centennial scales (involving circulations of the deep ocean). The sources of this variability lie in the basic instabilities of the system that lead (for instance) to the mid-latitude storms, tropical convection, the ocean thermohaline circulation, the ENSO phenomenon in the Pacific, etc.
- How do errors in estimating the forcing functions, or in simulating the internal variability impact the results?
A. Good question. Uncertainties in the forcing functions can be tested, and that leads directly to uncertainties in simulations of past climate. Sometimes the uncertainties can be constrained (but not eliminated) by comparison of the modelled climate change to the observations, but often many different scenarios could be consistent with the observations given known uncertainties in (for instance) the model’s climate sensitivity. Errors in the simulation of internal variability have more subtle impacts on the results. Obviously, if a certain mode of variability is not very well simulated, changes in that mode through time are not likely to be of much use. Sometimes results are robust over a wide range of simulated variability, and in such cases the phenomena can be considered robust (see Santer et al, 2005 for an example in the tropical atmosphere). Thus the answer will depend on the circumstances, and it will affect some parts of the model more than others.
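To show schematically how an uncertain forcing maps onto an uncertain temperature response, here is a toy zero-dimensional energy balance model, dT/dt = (F - λT)/C, driven by a greenhouse-gas ramp plus an aerosol term whose magnitude is only assumed to be known to within a factor of two. All the numbers are illustrative and are not taken from any GCM or observational estimate.

```python
# Toy 0-D energy balance model: uncertainty in the (negative) aerosol forcing
# translates directly into a spread of simulated warming.
import numpy as np

C = 8.0e8          # effective heat capacity, J/m^2/K (roughly 200 m of ocean)
lam = 1.25         # feedback parameter, W/m^2/K (sensitivity ~3 K per CO2 doubling)
dt = 86400.0 * 30  # one-month time step, in seconds
nyears = 100

def run_ebm(aerosol_scale):
    """Integrate for nyears; return the final temperature anomaly (K)."""
    T = 0.0
    for step in range(nyears * 12):
        t_frac = step / (nyears * 12.0)
        F_ghg = 3.0 * t_frac                   # GHG forcing ramping to +3 W/m^2
        F_aer = -1.0 * aerosol_scale * t_frac  # uncertain aerosol offset
        T += dt * (F_ghg + F_aer - lam * T) / C
    return T

for scale in (0.5, 1.0, 2.0):   # "factor of two" uncertainty in the aerosols
    print(f"aerosol scale {scale:3.1f}:  dT after {nyears} yr = {run_ebm(scale):.2f} K")
```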
- During any model application (except when performed in a laboratory where all the forcing functions can be known and controlled), a modeler will always revisit the external forcing function data and see if varying them one way or the other results in better model predictions. At first glance, most “scientists” would cry “FOUL” and throw a yellow flag at you. I have had to do this a number of times, but, without fail, further investigation has shown that the data were indeed wrong, and providing better forcing function data resulted in better model predictions. Have you any example of the model forcing a revisit of the data that showed that the data were indeed not describing what was actually going on during a given time period? In other words, did the model say “you can’t get there from here” and thus point you in the right direction? This is powerful evidence about the utility of any given model, and is the only way to justify massaging of input data.
A. Original transient climate simulations in the 1970s just used the changes in CO2 as the forcing, and although that did OK, other forcings were already known to be important (volcanoes, other greenhouse gases, aerosols etc.). As these extra forcing terms have been added, the match to observations has improved. There are also many examples where a model result has helped discover problems in the observations that the model was being compared to, or has helped resolve seemingly contradictory observations. The MSU data is a good example (Santer et al, 2005), as is the difference between the isotope and borehole temperature reconstructions for Greenland ice cores (Werner et al, 2000), to give two very different examples. There are many others.
- The minimum amount of observed data that you have to reproduce in order to gain some confidence in your model is that you have to reproduce periods of time when temperatures are increasing and when they are decreasing. Have you queried the model as to what the dominant mechanism(s) is/are that caused the cooling? If so, is/are the mechanism(s) plausible? Can they be verified independently?
A. This isn’t much of a test. The models are pretty stable in the absence of forcing changes (although there is some centennial variability as noted above, related mostly to ocean circulation/sea ice interactions). The forcing factors that cause cooling involve increasing amounts of reflective aerosols, deforestation, decreasing greenhouse gases, more volcanic eruptions, etc. For periods such as the last ice age, increases in ice sheets are a big cooling factor, and more recently, the 1940s-1970s cooling is a combination of increasing aerosols, increased volcanic activity (particularly Mt. Agung in 1963) and a slight decline in solar forcing, overcoming a relatively slow growth in greenhouse gases. All of these things are physically plausible, and the verification lies in the prediction of ancillary changes (water vapour changes, circulation etc.) that were observed, but that aren’t specifically related to the global mean temperature.
- Have you tested the model against simplified analytical solutions? Are you able to accurately reproduce analytical results?
A. Unfortunately, analytical results are in very short supply in climate science. If there were an analytical solution for climate we wouldn’t need numerical models at all! Some individual components can be tested against standard solutions (i.e. idealised tracer distributions for the atmospheric dynamics, the radiation scheme against first-principle line-by-line calculations, etc.), but for the climate system as a whole, only numerical results exist. For the evaluation of those, you need to compare to real (but imperfect) observational data.
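Here is a sketch of the kind of component test meant above: advect a tracer once around a periodic 1-D domain, where the analytical answer is simply the initial field, and measure the error. This is illustrative only; real tests use idealised 2-D/3-D tracer distributions and line-by-line radiation calculations.

```python
# Component test against an exact solution: after one full circuit of a
# periodic domain, the advected tracer should match its initial state.
import numpy as np

def upwind_error_after_one_lap(n, courant=0.5):
    """First-order upwind advection for one circuit; return the L2 error."""
    x = (np.arange(n) + 0.5) / n
    q_exact = np.exp(-100 * (x - 0.5) ** 2)    # analytical solution after one lap
    q = q_exact.copy()
    nsteps = int(round(n / courant))            # one full circuit at this Courant number
    for _ in range(nsteps):
        q = q - courant * (q - np.roll(q, 1))   # upwind difference, flow speed > 0
    return np.sqrt(np.mean((q - q_exact) ** 2))

for n in (50, 100, 200):
    print(f"n = {n:3d}   L2 error after one lap = {upwind_error_after_one_lap(n):.4f}")
# The error shrinks with resolution, but the heavy damping of this first-order
# scheme is exactly why less diffusive schemes are preferred for tracers.
```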
- How do you address the issue that models cannot be used to predict the future? In other words, models can only predict what might happen under a given set of conditions, not what will happen in the future.
A. Exactly. This is what the IPCC scenario exercise is all about, and why the model simulations for the future are called projections, not predictions. No-one in this game ever thinks they are predicting the future, although it often gets translated that way in the popular press. We take assumptions that people have made about the future (and this is not restricted to IPCC) and see what consequences they would have for the climate. Sometimes, though, those assumed conditions eventually turn out to be quite close to reality, and so it is worth revisiting the old projections and evaluating the results. The simulation used by Hansen in his Senate testimony in 1988 is a good example, as are the projections of the impact of Mt Pinatubo made in 1991.
- In my opinion, Crichton’s most valid criticism of modeling work is that there is no independent study of model results by other investigators. How do you address this?
A. It might be valid if it were true, but it isn’t. For instance, for the next IPCC report, over 300 independent teams are analysing the model results from over 20 different models. These results have been organised and submitted by the individual modelling groups to a central repository where anyone can analyse them. The code for many models can also be downloaded and run on your home computer. Plus you have the multiple independent teams of modellers themselves. The modellers do their best, but they can’t evaluate every field or process by themselves, and so having these analyses done by outside teams is extremely helpful. Sometimes it points out problems, sometimes it shows an unanticipated good match to data (the second kind of result is more pleasing, of course!). There is a significant learning curve when you begin to deal with climate models (because they are complex), but assuming that this implies that the process is not open or ‘scientific’ is incorrect.
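As a flavour of what such an outside analysis looks like in practice, here is a minimal sketch of computing an area-weighted global-mean surface air temperature series from archived model output. The file name and variable names are hypothetical (CMIP-style conventions: a field called 'tas' in kelvin on a lat/lon grid); you would adapt them to whatever file you actually download.

```python
# Compute a global-mean temperature series from a (hypothetical) netCDF file
# of archived model output, weighting by grid-box area.
import numpy as np
import xarray as xr

ds = xr.open_dataset("tas_model_output.nc")   # hypothetical downloaded file

# Grid boxes shrink toward the poles, so weight each latitude by cos(lat)
weights = np.cos(np.deg2rad(ds["lat"]))

global_mean = ds["tas"].weighted(weights).mean(dim=("lat", "lon"))

# Annual means of the global-mean series, printed as a simple sanity check
print(global_mean.groupby("time.year").mean().to_series().head())
```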
- I have been working on the same code for over 27 years, and I can guarantee that it is not bug free. A debugger’s job is never done. How long has your code been in development?
A. The GISS code has a pedigree that goes back to the late 1970s – and some code still dates back to the original coding in 1981-1983 (it’s easy to recognise since it was coded in Fortran 66 for a punch-card reader). Most has been rewritten subsequently to more modern standards, and while we think we’ve found most of the important errors, we occasionally come across minor bugs. So far, in the code we used for the IPCC simulations last year, we have found three minor bugs that do not appear to have any noticeable impact on the results.
On a final note, an implicit background to these kinds of questions is often the perception that scientific concern about global warming is based wholly on these (imperfect) models. This is not the case. Theoretical physics and observed data provide plenty of evidence for the effect of greenhouse gases on climate. The models are used to fill out the details and to make robust quantitative projections, but they are not fundamental to the case for anthropogenic warming. They are great tools for looking at these problems though.
Gerald Machnee says
Re #20 – ((Did I hear right? NOAA’s and other models are predicting a very cold 2005-06 winter? Can anyone explain how this happened? What kind of model projects a very cold winter following a world wide very warm fall?))
I do not know where you heard that, but the following is from the NOAA site –
Oct. 12, 2005 – NOAA announced the 2005-2006 U.S. Winter Outlook today for the months December, January and February. NOAA forecasters expect warmer-than-normal temperatures in most of the U.S. The precipitation outlook is less certain, showing equal chances of above, near or below normal precipitation for much of the country.
“Even though the average temperature over the three-month winter season is forecast to be above normal in much of the country, there still will be bouts of winter weather with cold temperatures and frozen precipitation,” said retired Navy Vice Admiral Conrad C. Lautenbacher, Jr., Ph.D., undersecretary of commerce for oceans and atmosphere and NOAA administrator.
[Response: The original may have been the Met Office forecast… http://www.metoffice.gov.uk/corporate/pressoffice/2005/pr20051028b.html – but the NAO based and std models seem to disagree… – William]
Gerald Machnee says
RE #20 & #51 – NOAA’s forecast is for the USA, and the following is for Europe:
http://www.metoffice.gov.uk/corporate/pressoffice/2005/pr20051028b.html
So the two areas do not have to be similar.