New rule: When declaring that climate models are misleading in a high profile paper, maybe looking at some model output first would be a good idea.
[Read more…] about New rule for high profile papers
Climate modelling
Tropical tropospheric trends
Once more unto the breach, dear friends, once more!
Some old-timers will remember a series of ‘bombshell’ papers back in 2004 which were going to “knock the stuffing out” of the consensus position on climate change science (see here for example). Needless to say, nothing of the sort happened. The issue in two of those papers was whether satellite and radiosonde data were globally consistent with model simulations over the same time. Those papers claimed that they weren’t, but they did so based on a great deal of over-confidence in observational data accuracy (see here or here for how that turned out) and an insufficient appreciation of the statistics of trends over short time periods.
Well, the same authors (Douglass, Pearson and Singer, now joined by Christy) are back with a new (but necessarily more constrained) claim, but with the same over-confidence in observational accuracy and a similar lack of appreciation of short term statistics.
[Read more…] about Tropical tropospheric trends
A phenomenological sequel
Does climate sensitivity depend on the cause of the change?
Can a response to a forcing wait and then bounce up after a period of inertness?
Does the existence of an 11-year time-scale prove the existence of solar forcing?
Why does the amplitude of the secular response drop when a long-term trend is added?
[Read more…] about A phenomenological sequel
The certainty of uncertainty
A paper on climate sensitivity today in Science will no doubt see a great deal of press in the next few weeks. In “Why is climate sensitivity so unpredictable?”, Gerard Roe and Marcia Baker explore the origin of the range of climate sensitivities typically cited in the literature. In particular they seek to explain the characteristic shape of the distribution of estimated climate sensitivities. This distribution includes a long tail towards values much higher than the standard 2-4.5 degrees C change in temperature (for a doubling of CO2) commonly referred to.
In essence, what Roe and Baker show is that this characteristic shape arises from the non-linear relationship between the strength of climate feedbacks (f) and the resulting temperature response (deltaT), which is proportional to 1/(1-f). They show that this places a strong constraint on our ability to determine a specific “true” value of climate sensitivity, S. These results could well be taken to suggest that climate sensitivity is so uncertain as to be effectively unknowable. This would be quite wrong.
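To see where that long tail comes from, here is a minimal numerical sketch (the parameter values are my own assumptions for illustration, not Roe and Baker's): draw the feedback strength f from a symmetric Gaussian and push it through the 1/(1-f) amplification.

```python
import random

# Illustrative sketch (not the paper's actual numbers): a symmetric
# distribution in feedback strength f becomes a skewed distribution in
# sensitivity S because S is proportional to 1/(1-f).
random.seed(0)
lambda0 = 1.2            # reference (no-feedback) sensitivity in deg C; assumed
f_mean, f_sd = 0.65, 0.13  # assumed mean and spread of the total feedback f

samples = []
for _ in range(100_000):
    f = random.gauss(f_mean, f_sd)
    if f < 1.0:                       # f >= 1 would imply a runaway response
        samples.append(lambda0 / (1.0 - f))

samples.sort()
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
# The symmetric uncertainty in f produces a long right tail in S:
print(f"median S ~ {median:.1f} C, 95th percentile ~ {p95:.1f} C")
```

The asymmetry falls out of the algebra alone: shaving uncertainty off f barely moves the median but takes a long time to shrink the upper tail, which is exactly why a "true" value of S is so hard to pin down.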
Regional Climate Projections
Regional Climate Projections in the IPCC AR4
How does anthropogenic global warming (AGW) affect me? The answer to this question will perhaps be one of the most relevant concerns in the future, and is discussed in chapter 11 of the IPCC assessment report 4 (AR4) working group 1 (WG1) (the chapter also has some supplementary material). The problem of obtaining regional information from GCMs is not trivial, and has been discussed in a previous post here at RC and the IPCC third assessment report (TAR) also provided a good background on this topic.
The climate projections presented in the IPCC AR4 are from the latest set of coordinated GCM simulations, archived at the Program for Climate Model Diagnosis and Intercomparison (PCMDI). This is the most important new information that AR4 contains concerning the future projections. These climate model simulations (the multi-model data set, or just ‘MMD’) are often referred to as the AR4 simulations, but they are now officially being referred to as CMIP3.
One of the most challenging and uncertain aspects of present-day climate research is the prediction of a regional response to a global forcing. Although the science of regional climate projections has progressed significantly since the last IPCC report, slight displacements in circulation characteristics, systematic errors in energy/moisture transport, the coarse representation of ocean currents and processes, crude parameterisation of sub-grid and land-surface processes, and the overly simplified topography used in present-day climate models all make accurate and detailed analysis difficult.
I think that the authors of chapter 11 have overall done a very thorough job, although there are a few points which I believe could be improved. Chapter 11 of the IPCC AR4 working group I (WGI) divides the world into different continents or types of regions (e.g. ‘Small islands’ and ‘Polar regions’), and then discusses these separately. It provides a nice overview of the key climate characteristics for each region. Each section also provides a short round-up of the evaluations of the performance of the climate models, discussing their weaknesses in terms of reproducing regional and local climate characteristics.
Musings about models
With the blogosphere all a-flutter with discussions of hundredths of degrees adjustments to the surface temperature record, you probably missed a couple of actually interesting stories last week.
Tipping points
Oft-discussed and frequently abused, tipping points are very rarely actually defined. Tim Lenton does a good job in this recent article. A tipping ‘element’ for climate purposes is defined as
The parameters controlling the system can be transparently combined into a single control, and there exists a critical value of this control from which a small perturbation leads to a qualitative change in a crucial feature of the system, after some observation time.
and the examples that he thinks have the potential to be large scale tipping elements are: Arctic sea-ice, a reorganisation of the Atlantic thermohaline circulation, melt of the Greenland or West Antarctic Ice Sheets, dieback of the Amazon rainforest, a greening of the Sahara, Indian summer monsoon collapse, boreal forest dieback and ocean methane hydrates.
To that list, we’d probably add any number of ecosystems where small changes can have cascading effects – such as fisheries. It’s interesting to note that most of these elements include physics that modellers are least confident about – hydrology, ice sheets and vegetation dynamics.
Prediction vs. Projections
As we discussed recently in connection with climate ‘forecasting‘, the kinds of simulations used in AR4 are all ‘projections’ i.e. runs that attempt to estimate the forced response of the climate to emission changes, but that don’t attempt to estimate the trajectory of the unforced ‘weather’. As we mentioned briefly, that leads to a ‘sweet spot’ for forecasting of a couple of decades into the future where the initial condition uncertainty dies away, but the uncertainty in the emission scenario is not yet so large as to be dominating. Last week there was a paper by Smith and colleagues in Science that tried to fill in those early years, using a model that initialises the heat content from the upper ocean – with the idea that the structure of those anomalies control the ‘weather’ progression over the next few years.
They find that their initialisation makes a difference for about a decade, but that at longer timescales the results look like the standard projections (i.e. 0.2 to 0.3ºC per decade warming). One big caveat is that they aren’t able to predict El Niño events, and since those account for a great deal of the interannual global temperature anomaly, that is a limitation. Nonetheless, this is a good step forward and people should be looking out for whether their predictions – for a plateau until 2009 and then a big ramp up – materialise over the next few years.
Model ensembles as probabilities
A rather esoteric point of discussion concerning ‘Bayesian priors’ got a mainstream outing this week in the Economist. The very narrow point in question is to what extent model ensembles are probability distributions. i.e. if only 10% of models show a particular behaviour, does this mean that the likelihood of this happening is 10%?
The answer is no. The other 90% could all be missing some key piece of physics.
However, there has been a bit of confusion generated through the work of climateprediction.net – the multi-thousand member perturbed parameter ensembles that, notoriously, suggested that climate sensitivity could be as high as 11ºC in a paper a couple of years back. The very specific issue is whether the histograms generated through that process could be considered a probability distribution function or not. (‘Not’ is the correct answer).
The point in the Economist article is that one can demonstrate that very clearly by changing the variables you are perturbing (in the example they use an inverse). If you evenly sample X, or evenly sample 1/X (or any other function of X), you will get a different distribution of results. Then instead of (in one case) getting 10% of model runs to show behaviour X, now maybe 30% of models will. And all this is completely independent of any change to the physics.
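A hypothetical numerical illustration of that point (all numbers here are made up): the fraction of ensemble members exceeding a threshold changes depending on whether you sample X or 1/X uniformly, with no change at all to the underlying mapping.

```python
# Two parameter-sampling schemes for the same physics (the mapping X -> 1/X).
# The "probability" of exceeding a threshold depends entirely on which
# variable is sampled uniformly. Ranges and threshold are arbitrary choices.
n = 100_000
threshold = 4.0

# Scheme 1: sample X uniformly on [0.1, 1.0], then examine 1/X
frac1 = sum(1 for i in range(n)
            if 1.0 / (0.1 + 0.9 * i / n) > threshold) / n

# Scheme 2: sample 1/X uniformly on [1.0, 10.0] directly
frac2 = sum(1 for i in range(n)
            if (1.0 + 9.0 * i / n) > threshold) / n

print(f"P(1/X > {threshold}) sampling X uniformly:   {frac1:.2f}")
print(f"P(1/X > {threshold}) sampling 1/X uniformly: {frac2:.2f}")
```

Both schemes cover exactly the same range of model behaviour, yet the exceedance fractions differ by a factor of about four – which is why a histogram of ensemble results cannot, by itself, be read as a probability.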
My only complaint about the Economist piece is the conclusion that, because of this inherent ambiguity, dealing with it becomes a ‘logistical nightmare’ – that is incorrect. What should happen is that people stop thinking that counting finite samples of model ensembles can give a probability. Nothing else changes.
Green and Armstrong’s scientific forecast
There is a new critique of IPCC climate projections doing the rounds of the blogosphere from two ‘scientific forecasters’, Kesten Green and Scott Armstrong, who claim that since the IPCC projections are not ‘scientific forecasts’ they must perforce be wrong, and that a naive model of no future change is likely to be more accurate than any IPCC conclusion. This ignores the fact that IPCC projections have already proved themselves better than such a naive model, but their critique is novel enough to be worth a mention.
[Read more…] about Green and Armstrong’s scientific forecast
Why global climate models do not give a realistic description of the local climate
Global climate
Global climate statistics, such as the global mean temperature, provide good indicators as to how our global climate varies (e.g. see here). However, most people are not directly affected by global climate statistics. They care about the local climate; the temperature, rainfall and wind where they are. When you look at the impacts of a climate change or specific adaptations to a climate change, you often need to know how a global warming will affect the local climate.
Yet, whereas the global climate models (GCMs) tend to describe the global climate statistics reasonably well, they do not provide a representative description of the local climate. Regional climate models (RCMs) do a better job at representing climate on a smaller scale, but their spatial resolution is still fairly coarse compared to how the local climate may vary spatially in regions with complex terrain. This is not a general flaw of the climate models, but rather an inherent limitation of their resolution. I will try to explain why this is below.
Hansen’s 1988 projections
At Jim Hansen’s now famous congressional testimony given in the hot summer of 1988, he showed GISS model projections of continued global warming assuming further increases in human-produced greenhouse gases. This was one of the earliest transient climate model experiments and so rightly gets a fair bit of attention when the reliability of model projections is discussed. There have, however, been an awful lot of mis-statements over the years – some based on pure dishonesty, some based on simple confusion. Hansen himself (and, for full disclosure, my boss), revisited those simulations in a paper last year, where he showed a rather impressive match between the recently observed data and the model projections. But how impressive is this really? And what can be concluded from the subsequent years of observations?
[Read more…] about Hansen’s 1988 projections
Learning from a simple model
A lot of what gets discussed here in relation to the greenhouse effect is relatively simple, and yet can be confusing to the lay reader. A useful way of demonstrating that simplicity is to use a stripped down mathematical model that is complex enough to include some interesting physics, but simple enough so that you can just write down the answer. This is the staple of most textbooks on the subject, but there are questions that arise in discussions here that don’t ever get addressed in most textbooks. Yet simple models can be useful there too.
I’ll try and cover a few ‘greenhouse’ issues that come up in multiple contexts in the climate debate: why ‘radiative forcing’ works as a method for comparing different physical impacts on the climate, and why you can’t calculate climate sensitivity just by looking at the surface energy budget. There will be mathematics, but hopefully it won’t be too painful.
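As a taste of the kind of stripped-down model involved, here is the standard one-layer ‘grey atmosphere’ energy balance – a textbook staple rather than the specific model of the post, with an absorptivity value chosen purely for illustration.

```python
# One-layer grey-atmosphere energy balance (textbook sketch; parameter
# values are illustrative). The atmosphere absorbs a fraction eps of the
# upwelling longwave radiation and re-emits it, half upward and half down.
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2
albedo = 0.3      # planetary albedo

def surface_temp(eps):
    """Equilibrium surface temperature for longwave absorptivity eps.

    Balancing the surface and atmospheric layers gives
    sigma * Ts^4 = absorbed_solar / (1 - eps/2).
    """
    absorbed = (1 - albedo) * S / 4   # absorbed solar per unit surface area
    return (absorbed / (sigma * (1 - eps / 2))) ** 0.25

t_bare = surface_temp(0.0)   # no greenhouse: the familiar ~255 K
t_grey = surface_temp(0.8)   # partial longwave absorber: a warmer surface
print(f"no atmosphere: {t_bare:.0f} K, eps=0.8: {t_grey:.0f} K")
```

Even this toy version shows why the greenhouse effect warms the surface without any new energy input: the downwelling re-emission from the absorbing layer raises the equilibrium surface temperature, and you can write the answer down in one line.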
[Read more…] about Learning from a simple model