With the blogosphere all a-flutter with discussions of hundredths of degrees adjustments to the surface temperature record, you probably missed a couple of actually interesting stories last week.
Tipping points
Oft-discussed and frequently abused, tipping points are very rarely actually defined. Tim Lenton does a good job in this recent article. A tipping ‘element’ for climate purposes is defined as
The parameters controlling the system can be transparently combined into a single control, and there exists a critical value of this control from which a small perturbation leads to a qualitative change in a crucial feature of the system, after some observation time.
and the examples that he thinks have the potential to be large scale tipping elements are: Arctic sea-ice, a reorganisation of the Atlantic thermohaline circulation, melt of the Greenland or West Antarctic Ice Sheets, dieback of the Amazon rainforest, a greening of the Sahara, Indian summer monsoon collapse, boreal forest dieback and ocean methane hydrates.
To that list, we’d probably add any number of ecosystems where small changes can have cascading effects – such as fisheries. It’s interesting to note that most of these elements include physics that modellers are least confident about – hydrology, ice sheets and vegetation dynamics.
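To make the ‘critical value of a control’ idea concrete, here is a deliberately minimal toy – not a model of any of the systems above – in which the state sits in one well of a double-well potential and a single control slowly tilts the landscape. Below a critical value of the control the system barely responds; just past it, the lower state vanishes and the system jumps to a qualitatively different one.

```python
# Toy tipping element: dx/dt = c + x - x**3, where c is the single control.
# Below c ~ 0.385 there are two stable states; above it only one survives,
# so a small nudge of the control produces a qualitative jump in the state.
# This is a schematic sketch only, not any of the real systems listed above.

def equilibrium(c, x0=-1.0, dt=0.01, nsteps=20000):
    """Integrate dx/dt = c + x - x^3 from x0 until (near) steady state."""
    x = x0
    for _ in range(nsteps):
        x += dt * (c + x - x**3)
    return x

for c in (0.0, 0.2, 0.38, 0.39, 0.5):
    print(f"control c = {c:4.2f} -> equilibrium x = {equilibrium(c):+.2f}")
# The state tracks the lower (negative) branch until c crosses ~0.385, then
# jumps to the upper (positive) branch: the 'qualitative change in a crucial
# feature of the system' from the definition above.
```

The real tipping elements are of course vastly more complicated, but the structure – a smooth control with a threshold beyond which the state changes qualitatively – is the same.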
Predictions vs. projections
As we discussed recently in connection with climate ‘forecasting’, the kinds of simulations used in AR4 are all ‘projections’, i.e. runs that attempt to estimate the forced response of the climate to emission changes, but that don’t attempt to estimate the trajectory of the unforced ‘weather’. As we mentioned briefly, that leads to a ‘sweet spot’ for forecasting a couple of decades into the future, where the initial-condition uncertainty has died away but the uncertainty in the emission scenario is not yet large enough to dominate. Last week there was a paper by Smith and colleagues in Science that tried to fill in those early years, using a model that initialises the upper-ocean heat content – with the idea that the structure of those anomalies controls the progression of the ‘weather’ over the next few years.
They find that their initialisation makes a difference for about a decade, but that at longer timescales the results look like the standard projections (i.e. 0.2 to 0.3ºC warming per decade). One big caveat is that they aren’t able to predict El Niño events, and since those account for a great deal of the interannual variability in global temperature, that is a real limitation. Nonetheless, this is a good step forward, and people should be looking out for whether their predictions – a plateau until 2009 and then a big ramp-up – materialise over the next few years.
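As a purely schematic illustration of that forecasting ‘sweet spot’ (all the numbers below are invented for illustration – they are not taken from Smith et al. or the AR4 runs): if the initial-condition spread decays over a decade or so while the scenario spread grows steadily, the combined uncertainty is smallest a couple of decades out.

```python
# Schematic 'sweet spot' for climate forecasts: initial-condition uncertainty
# decays with lead time while emission-scenario uncertainty grows. All
# numbers here are illustrative guesses, not from Smith et al. or AR4.
import math

def total_spread(lead_years,
                 init_sigma=0.25,   # assumed initial-condition spread (degC)
                 init_efold=15.0,   # assumed e-folding time (years)
                 scen_rate=0.005):  # assumed scenario spread growth (degC/yr)
    init = init_sigma * math.exp(-lead_years / init_efold)
    scenario = scen_rate * lead_years
    return math.hypot(init, scenario)  # combine the two in quadrature

for lead in (1, 5, 10, 15, 20, 30, 50):
    print(f"{lead:3d} yr lead: combined spread ~ {total_spread(lead):.2f} degC")
# With these made-up numbers the combined spread is smallest at a lead of
# roughly 15-20 years: the window where projections are best constrained.
```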
Model ensembles as probabilities
A rather esoteric point of discussion concerning ‘Bayesian priors’ got a mainstream outing this week in the Economist. The very narrow point in question is to what extent model ensembles are probability distributions: if only 10% of models show a particular behaviour, does that mean the likelihood of it happening is 10%?
The answer is no. The other 90% could all be missing some key piece of physics.
However, a bit of confusion has been generated through the work of climateprediction.net – the multi-thousand-member perturbed-parameter ensembles that, notoriously, suggested in a paper a couple of years back that climate sensitivity could be as high as 11ºC. The very specific issue is whether the histograms generated through that process can be considered a probability distribution function or not. (‘Not’ is the correct answer.)
The point in the Economist article is that one can demonstrate this very clearly by changing how you sample the variables you are perturbing (in their example, by using the inverse of a parameter). If you sample X evenly, or sample 1/X evenly (or any other function of X), you will get a different distribution of results. So instead of, say, 10% of model runs showing a particular behaviour, maybe 30% now will. And all this is completely independent of any change to the physics.
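A toy sketch of how big the effect can be (the numbers and the one-parameter ‘model’ below are invented, not the actual climateprediction.net setup): count how many ensemble members exceed a fixed threshold when you sample a parameter X uniformly, versus sampling 1/X uniformly over the equivalent range.

```python
# The prior-dependence problem in a nutshell: sample a model parameter X
# uniformly, or sample 1/X uniformly over the equivalent range, and the
# fraction of ensemble members showing a given 'behaviour' changes, even
# though the model physics is identical. Purely illustrative numbers.
import random

random.seed(0)
N = 100_000

def model_response(x):
    """Stand-in for a model diagnostic that scales as 1/x (loosely analogous
    to climate sensitivity vs. feedback strength)."""
    return 1.2 / x

# Ensemble 1: sample X uniformly in [0.3, 1.2]
frac_uniform_x = sum(model_response(random.uniform(0.3, 1.2)) > 3.0
                     for _ in range(N)) / N

# Ensemble 2: sample 1/X uniformly over the equivalent range [1/1.2, 1/0.3]
frac_uniform_invx = sum(model_response(1.0 / random.uniform(1/1.2, 1/0.3)) > 3.0
                        for _ in range(N)) / N

print(f"uniform in X:   {frac_uniform_x:.0%} of runs exceed the threshold")
print(f"uniform in 1/X: {frac_uniform_invx:.0%} of runs exceed the threshold")
# Same model, same parameter range, different 'probabilities' -- so counting
# ensemble members does not, by itself, give a probability.
```

With these particular choices the fraction jumps from roughly 10% to roughly 30% – just the sort of shift described above – without touching the model at all.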
My only complaint about the Economist piece is the conclusion that, because of this inherent ambiguity, dealing with it becomes a ‘logistical nightmare’ – that is incorrect. What should happen is that people simply stop assuming that counting finite samples of model ensembles gives a probability. Nothing else changes.