Two and a half years ago, a paper was published in Nature purporting to be a real prediction of how global temperatures would develop, based on a method for initialising the ocean state using temperature observations (Keenlyside et al, 2008) (K08). In the subsequent period, this paper has been highly cited, very often in a misleading way by contrarians (for instance, Lindzen misrepresents it on a regular basis). But what of the paper’s actual claims? How are they holding up?
At the time K08 was published, we wrote two posts on the topic pointing out that a) the methodology was not very mature (and in our opinion, not likely to work), and b) that the temperature predictions being made (for the 10 year overlapping periods Nov 2000-Oct 2010, Nov 2005-Oct 2015 etc.), were very unlikely to come true. These critiques were framed as a bet to see whether the authors were serious about their predictions, similar in conception to other bets that have been offered on climate related matters. This offer was studiously ignored by the scientists involved, who may have thought the whole exercise was beneath them. Oh well.
However, with the publication of the October 2010 temperatures from HadCRUT, the first prediction period has now ended, and so the predictions can be assessed. Looking first at the global mean temperatures…
we can see clearly that while K08 projected 0.06ºC cooling, the temperature record from HadCRUT (which was the basis of the bet) shows 0.07ºC warming (using GISTEMP, it is 0.11ºC). As in K08 this refers to T(Nov 2000:Oct 2010) as compared to T(Nov 1994:Oct 2004). For reference, the IPCC AR4 ensemble gives 0.129±0.075ºC (1) (and a range of -0.07 to 0.30ºC related to internal variability in the simulations) (using full annual means).
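The K08 metric compares overlapping 120-month means: the average over Nov 2000-Oct 2010 minus the average over Nov 1994-Oct 2004. As a minimal sketch of how that difference is computed from a monthly anomaly series (the function name and the synthetic data are illustrative, not the authors' code):

```python
def decadal_difference(anomalies, start_a, start_b, months=120):
    """Mean of the 120-month window starting at index start_b
    minus the mean of the window starting at index start_a."""
    mean_a = sum(anomalies[start_a:start_a + months]) / months
    mean_b = sum(anomalies[start_b:start_b + months]) / months
    return mean_b - mean_a

# Synthetic series with a steady 0.001 degC/month warming trend;
# the two K08 windows are offset by 72 months (Nov 1994 vs Nov 2000).
series = [0.001 * i for i in range(200)]
print(decadal_difference(series, 0, 72))  # ~0.072 degC for this trend
```

Applied to the real HadCRUT monthly anomalies, this is the quantity for which K08 predicted -0.06&deg;C and the observations give +0.07&deg;C.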
More interestingly, we can look at the regional pattern. The K08 supplemental data showed their predicted anomaly along with anomalies from a free-running version of their model and the standard IPCC results, for the 2005-2015 period (which is half over) rather than the 2000-2010 period, but the patterns might be expected to be similar:
The anomalies are with respect to the average of all the decadal periods they looked at, which is roughly (though not exactly) equal to a 1955-2004 baseline. The actual temperature changes for 2000-2010, using GISTEMP for convenience, look like this:
It is striking to what extent they resemble the spatial pattern seen in the AR4 ensemble and the free-running version rather than the initialised forecast, though there are some correlations there too (for instance, west of the Antarctic Peninsula, related to the ozone-hole- and GHG-related increase in the Southern Annular Mode).
It is worth emphasising that the RC bet offer was not frivolously made, but reflected some very clear indications in the paper that the predictions would not come true (as explained in our second post). Specifically, their ‘free’ model run, without data assimilation, performed better in hindcasts when compared to observed data, i.e. the new assimilation technique degraded the model performance. Both of the model’s previous hindcasts that showed cooling turned out to be wrong. Since global warming took off in the 1970s, the observed data have never shown a cooling in their chosen metric (ten-year means spaced 5 years apart). Other climate models run for standard global warming scenarios only rarely show this level of cooling. On the other hand, there is a simple explanation for such a temporary cooling in a model: an artifact known as ‘coupling shock’ (e.g. Rahmstorf 1995), which arises when the ocean is switched over from a forced to a coupled mode of operation, something that has no counterpart in the real world.
The basic issue is that nudging surface temperatures in the North Atlantic closer to observed data would probably nudge the Atlantic overturning circulation in the wrong direction since changing the temperature without changing the salinity will give the opposite buoyancy forcing to what would be needed. The model indeed shows negative skill in the critical regions of the North Atlantic which are most affected by the overturning circulation. All this can be seen from the paper. Last but not least, by the time the paper was published three quarters of the 2000-2010 forecast period were over with no sign of the predicted cooling – barring an unprecedented massive temperature drop, the prediction was always very unlikely.
Was this then an “improved climate prediction“? The answer is clearly no.
So what can we conclude? First off, the basic idea of short term predictions using initialised ocean data is a priori a good one. Many groups around the world are exploring to what extent this is possible, and what techniques will be the most successful. However, before claiming that a new methodology is an improvement on other efforts and that it predicts a very counter-intuitive result, a lot of effort is required to demonstrate that it will work, even theoretically or in idealised circumstances. This can involve ‘perfect model’ experiments (where you test to see whether you can predict the evolution of a model simulation given only what we know about the real world), or hindcasts (as used by K08), and only where there is demonstrated skill is there any point in making a prediction for the real world. It is nonetheless important to try new methods, and even when they fail, lessons can be learned about how to improve things going forward.
It is perhaps inevitable that novel prediction methods that appear to ‘go against the mainstream’ are going to be higher profile than they warrant in retrospect – such is the way of the world. But scientists need to appreciate that these high profile statements will be taken and spread far more widely than they possibly anticipate. Thus it behoves them to be scrupulous in explaining the context, giving the caveats and making clear the experimental nature of any new result. This is undoubtedly hard, especially where there are people ready to twist anything to fit an anti-AGW agenda, but we should at least try.
Note, we asked Noel Keenlyside if he wanted to comment on our assessment of their prediction, and he declined to do so. We would still be happy to post any of his or his co-authors’ comments in response though.
Update Dec 2: The Stuttgarter Zeitung newspaper (in German) followed up on this and got the following comments from the authors:
Keenlyside:
“The forecast for global mean temperature which we published highlights the ability of natural variability to cause climate fluctuations on decadal scale, even on a global scale. I am still completely convinced that this is correct.”
Latif:
“I do not want to comment on this.”
Then an indirect quote: the fact that warming for 2000-2010 was greater than predicted in their study does not in itself speak against their study, and then
“You have to look at this long-term. I would not weigh a few years earlier or later too much.” But if the forecast turns out to be wrong by 2015, “I will be the last one to deny it”.
Brian Dodge says
“Brian, I am talking about record heat, and you counter with temperatures from January? We all know that the temperature increase in the latter 20th century was primarily due to increased winter and night-time readings.” Glad to see you admit that the global warming predictions of Svante Arrhenius made in 1896 have been confirmed.
When I did my analysis back in spring 2009 I did the same analysis for June 2008 as for January 2009; the mean interval to the previous high was 28 years, stdev 23 yr. Of those sites that had record highs in June 2008, there were 65 previous June record highs from 1997-2007, 26 from 1950-59, and only 6 from 1930-39.
The statements from your reference http://www.islandnet.com/~see/weather/almanac/arc2006/alm06jul.htm
“The heat began in the heartland in late June.”
“Mid-month provided the national peak in summer heat. July 12 through 14 recorded the hottest three-day period in US history …”
“But hot as it was, the average temperature for the US (in the 48 contiguous states) of 77.2°F (25.1°C) in July (2006) fell just shy of the record of 77.5°F (25.3°C) set in July 1936,” don’t support your statement that “Five of the ten hottest summers occurred in the 1930s.”
It is the same sort of cherrypicking claim as “it’s been cooling since 1998”.
FWIW even the denialist site icecap.us admits that it took “from the 1920s to the 1950s ” (49 years) to get six of their 10 warmest years, and only from 1990-2008 (18 years) to get four more – (49/6)/(18/4) = 1.8 times as frequent in the more recent period.
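The frequency comparison above can be checked with a few lines of arithmetic; the spans and record counts are those quoted from icecap.us in the comment, not independently verified here:

```python
# Records-per-year in each span, using the figures quoted above:
# six of the ten warmest years in a 49-year span (1920s-1950s),
# four in an 18-year span (1990-2008).
early_years, early_records = 49, 6
late_years, late_records = 18, 4

rate_early = early_records / early_years  # warm-record years per year, early span
rate_late = late_records / late_years     # warm-record years per year, recent span

ratio = rate_late / rate_early
print(round(ratio, 1))  # 1.8 -> warm records ~1.8x as frequent recently
```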
If you plug the phrase “five of the ten hottest summers” into google, you get only 3 hits, all from denialist websites, and two of them claim that they occurred in the 1800’s, not 1930’s.
It’s also amusing that you castigate Maya for only looking at the “number of Atlantic tropical storms in 2010”, when your reference is only about 2 months in 1936.
“The growth of population, demographic shifts to more storm-prone locations, the growth of wealth have collectively made the nation more vulnerable to climate extremes.”
I already took out the growth of wealth using your 400% inflation figure – leaving a fifteen-fold increase in losses.
Correcting for the fourfold increase in population (301 million in 2008 versus 76 million in 1950) leaves “only” a 3.8 fold increase in losses.
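The correction chain in the two sentences above can be laid out explicitly; the fifteen-fold loss figure and the population numbers are those quoted in the comment, not independently verified:

```python
# Inflation-adjusted rise in storm losses (from the comment), then
# scaled by population growth: 301 million (2008) vs 76 million (1950).
loss_ratio_real = 15.0              # fifteen-fold real (inflation-adjusted) increase
pop_2008, pop_1950 = 301e6, 76e6

pop_ratio = pop_2008 / pop_1950     # ~4-fold population growth
per_capita_loss_ratio = loss_ratio_real / pop_ratio

print(round(pop_ratio, 1))              # ~4.0
print(round(per_capita_loss_ratio, 1))  # ~3.8-fold increase per capita
```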
As for demographics, looking at the losses for 2009 from http://www.weather.gov/om/hazstats.shtml by state, and comparing the percent population growth(compared to the national average growth – we’ve already corrected for that) in the top 25 loss states (94% of the total losses), we find that the average population growth is only 52% of the population growth for the nation as a whole.
Only one state in the top 30, Florida, 12th most “storm prone location,” has a demographic shift to it (pop. growth 1.67 times the national average), and only accounts for 2.14% of the losses.
Returning to my original point that some “are paying disproportionately” for the externalization of FF emissions, the people in the states with the ten highest losses paid 4.6 times as much per capita as the average, and 125 times as much as in the least affected ten states.
Grabski says
Why is the GISS data so out of phase with UAH, RSS, even HADCRUT all showing temps falling in Q4 2010 while GISS shows a sharp rise?