One of the interesting things about being a scientist is seeing how unexpected observations can galvanize the community into looking at a problem in a different way. A good example of this is the unexpectedly low Arctic sea ice minimum in 2007 and the near-repeat in 2008. What was unexpected was not the long-term decline of summer ice (that has long been a robust prediction), but the size of the 2007 and 2008 decreases, which was much larger than any model had hinted at. This model-data mismatch raises a number of obvious questions – were the data reliable? are the models missing some key physics? is the comparison being done appropriately? – and some less obvious ones – to what extent is the summer sea ice minimum even predictable? what is the role of pre-conditioning from the previous year vs. the stochastic nature of the weather patterns in any particular summer?
The concentration of polar expertise on the last two questions has increased enormously over the past couple of years, and the summer minimum of 2009 will be a good test of some of the ideas that are being discussed. The point is that whether 2009 is or is not a record-setting or near-record setting minimum, the science behind what happens is going to be a lot more interesting than the September headline.
In the wake of the 2007 minimum, a lot of energy went into discussing what this meant for 2008. Had the Arctic moved into a different regime where such minima would become normal, or was this an outlier caused by exceptional weather patterns? Actually, this is a bit of a false dichotomy, since the two aren't mutually exclusive. Exceptional patterns of winds are always going to be the proximate cause of any extreme ice extent, but the regime provides the background upon which those patterns act. For instance, Nghiem et al showed the influence of wind patterns in moving a lot of thick ice out of the Arctic in early 2007, but also showed that similar patterns had not had the same impact in other years with higher background amounts of ice.
This ‘background’ influence implies that there might indeed be the possibility of forecasting the sea ice minimum a few months ahead of time. And anytime there is the potential to make and test predictions in seasonal forecasting, scientists usually jump at the chance. So it proved for 2008.
Some forecasting efforts were organised through the SEARCH group of polar researchers, and I am aware of at least two informal betting pools that were set up. Another group of forecasts can be found from the Arctic ice forecasting center at the University of Colorado. I personally don't think that a successful prediction of overall sea ice extent or area is that societally relevant in itself – those interested in open shipping lanes that might be commercially important need much more fine-grained information, for instance – but I think the predictions are interesting for improving understanding of Arctic processes themselves (and hopefully that improved understanding will eventually feed into the models and provide better tests and targets for their simulations).
What was particularly interesting about last year's forecasts was the vast range of forecasting strategies. Some were just expert guesstimates, some used linear regression on past data, and some were simply based on persistence, or persistence of the trend. In more mature forecasting endeavours, the methods tend to be more clustered around one or two proven strategies, but in this case the background work is still underway.
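To make the flavour of the simpler statistical strategies concrete, here is a minimal sketch in Python. The extent values are made up for illustration and are not the real observations:

```python
import numpy as np

# Hypothetical September minimum extents in M km2 (illustrative only).
years = np.arange(1998, 2008)
extent = np.array([6.6, 6.2, 6.3, 6.8, 6.0, 6.2, 6.1, 5.6, 5.9, 4.3])

# Persistence: forecast next year's minimum as the last observed minimum.
persistence_forecast = extent[-1]

# Persistence of the trend / linear regression: fit a line to the past
# record and extrapolate one year ahead.
slope, intercept = np.polyfit(years, extent, 1)
trend_forecast = slope * (years[-1] + 1) + intercept

print(f"persistence: {persistence_forecast:.1f} M km2")
print(f"trend extrapolation: {trend_forecast:.1f} M km2")
```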
Estimates made in June 2008 for the September minimum extent showed a wide range – from around 2.9 to 5.6 M km2. One of the lowest estimates assumed that the key criterion was the survivability of first year ice. If one took that to be a fixed percentage based on past behaviour, then because there was so much first year ice around in early 2008, the minimum would be very low (see also Drobot et al, 2008). This turned out not to be a great approach – much more first year ice survived than this method predicted. The key difference was the much greater amount of first year ice near the pole. Some of the higher values assumed a simple reversion to trend (i.e. extrapolation forward of the long-term trend to 2008).
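As an illustration of how such a survivability estimate works, here is a minimal sketch. The ice amounts and survival fraction below are invented for the example, not taken from any of the actual forecasts:

```python
# Hypothetical inputs, loosely in the spirit of the approach described above.
first_year_ice = 6.0      # M km2 of first-year ice in early 2008 (made up)
multi_year_ice = 2.0      # M km2 of thicker multi-year ice (made up)
survival_fraction = 0.25  # fixed fraction of first-year ice assumed to
                          # survive the melt season, fitted to past years (made up)

predicted_minimum = multi_year_ice + survival_fraction * first_year_ice
print(f"predicted September minimum: {predicted_minimum:.1f} M km2")  # -> 3.5 M km2
```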
Only a couple of the forecasts used physics-based models to make the prediction (for instance, Zhang et al, 2008). This is somewhat surprising until one realises how much work is needed to do this properly. You need real time data to initialise the models, you need to do multiple realisations to average over any sensitivity to the weather, and even then you might not get a range of values that was tight enough to provide useful information.
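Schematically, the ensemble step looks something like the sketch below. Here `run_ice_model` is a hypothetical stand-in for a real physics-based model (of the kind used by Zhang et al, 2008), with random scatter standing in for the sensitivity to summer weather:

```python
import numpy as np

def run_ice_model(initial_state, weather_seed):
    """Hypothetical stand-in: a real model would integrate ice dynamics
    and thermodynamics forward from the initialised June state."""
    rng = np.random.default_rng(weather_seed)
    base = 4.5                             # extent implied by initial conditions (M km2, made up)
    weather_effect = rng.normal(0.0, 0.4)  # scatter standing in for summer weather
    return base + weather_effect

# Average over many realisations to wash out sensitivity to the weather.
forecasts = np.array([run_ice_model("june_2008_state", seed) for seed in range(20)])
print(f"ensemble: {forecasts.mean():.1f} +/- {forecasts.std():.1f} M km2")
```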
So how did people do? The actual 2008 September minimum was 4.7 M km2, which was close to the median of the June forecasts (4.4 M km2) – and remember that the 2007 minimum was 4.3 M km2. However, the spread was quite wide. The best estimates used both numerical models and statistical predictors (for instance the amount of ice thicker than 1m). But have these approaches matured this time around?
In this year's June outlook, there is significantly more clustering around the median, and a smaller spread (3.2 to 5.0 M km2) than last year. As with last year, the lowest forecast is based on a low survivability criterion for first year ice, and I expect that this (as with last year) will not pan out – things have changed too much for previous decades' statistical fits on this metric to be applicable. However, the group with the low forecast have also put in a 'less aggressive' forecast (4.7 M km2) which is right at the median. That would equal last year's minimum, but not set a new record. It would still be well below the sea ice trend expected by the IPCC AR4 models (Stroeve et al, 2008).
There is an obvious excitement related to how this will pan out, but it’s important that the thrill of getting a prediction right doesn’t translate into actually wanting the situation to get worse. Arctic ice cover is not just a number, but rather a metric of a profound and disruptive change in an important ecosystem and element of the climate. While it doesn’t look at all likely, the best outcome would be for all the estimates to be too low.
manacker says
Anne van der Bom (840)
Sorry for the delay in responding; my first reply got lost.
You state that the war in Iraq cost the USA a trillion dollars. Would you say this was “taxpayer money well spent”? I certainly would not, based on the not very impressive result that has occurred (plus the many lives that were lost in the process).
Comparing one bad investment with another even worse one does not make it any better.
I do not live in the USA, but I doubt that the average US taxpayer would agree with your statement that the US taxpayer can afford this cost in order to reduce global warming by 0.05C by 2050.
Now as far as “sidestepping” is concerned, I am beginning to detect a “nit-pick” here, Anne.
I have said that man cannot change the climate. Spending $1 trillion to result in an immeasurable 0.05C reduction in warming is not “changing the climate”. I have seen no specific actionable proposals that will change the climate. Have you?
Now to taking 2050 rather than 2100 as the date for measuring the impact of the improvement: I firmly believe that we (including all the scientists, computer engineers, report writers and politicians of this world) have no notion what our climate will be in year 2100. There are just too many unknown factors and “outliers” out there that can move our climate projection into a completely different direction.
To say that the proposed US projects will theoretically reduce year 2100 warming by 0.08C or even 0.2C would be a joke. If we are in the middle of a new Maunder Minimum by then, and our globally and annually averaged land and sea surface temperature has sunk to 0.8C below the 2000 level as a result (rather than warming by two or three times this amount, as is currently being projected), we will not be worrying too much about what the result of these projects has been.
The coal-fired plants will probably not be in operation 90 years from now, as they will have been replaced as they wear out and more economical and environmentally acceptable alternatives become available. We don't even have a clue today what the most economical alternatives will be 90 years from now, do we? Will nuclear fusion be an economically and environmentally viable alternative by then? Will solar and wind power be able to overcome their inherent "on-line" disadvantage through new technology? Who knows?
Read Nassim Taleb’s book, “The Black Swan”, to see for yourself the folly of trying to make long-term predictions and why these inevitably fail.
And remember the long-range (60-year) forecast made back in 1860 that Manchester would be covered by two meters of horse manure by 1920, due to the rapidly expanding number of horse-drawn carriages and buggies.
Max
manacker says
Gavin,
Found the Forster and Collins study you cited.
http://www.springerlink.com/content/37eb1l5mfl20mb7k/
“Variability, both in the observed value and in the climate model's feedback parameter, between different ensemble members, suggests that the long-term water vapour feedback associated with global climate change could still be a factor of 2 or 3 different than the mean observed value found here and the model water vapour feedback could be quite different from this value; although a small water vapour feedback appears unlikely.”
This is not a statement of a robust correlation between the empirically observed values (relating to the post-Pinatubo cooling) and model results for warming.
But I still agree with you that there are studies out there that show such a correlation.
Max
Jim Bouldin says
Pulllleeeeeeeeeeeeeeeeeeeeeeeeeeez don’t feed the troll folks. Waste of perfectly good food.
Mark says
“To my statement that $1 trillion investment for 0.05°C theoretical warming averted is a poor return on investment…”
And what did you use to calculate the $1 trillion and 1/20th C warming aversion?
The black-hole calculator at arm's length???
Please show all your workings and assumptions for that value.
It certainly isn’t the one made by a REAL economist…
Mark says
“Lay out a specific actionable proposal, John, rather than hypothesizing about what “doing nothing” means and costs.”
Reduce energy use 30% in the short term in the first world.
Move 50% of power capacity to renewable sources by the mid term.
Move away from fossil fuels and non-renewable sources completely in the long term.
Kevin McKinney says
What Jim Bouldin said.
Martin Vermeer says
Manacker, I don't think 0.2% of US GDP to achieve a 0.05C warming reduction is a bad deal at all. Assuming your numbers are right, and we can extrapolate linearly, it will cost 4% of US GDP for every degree of warming prevented.
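Spelled out (a quick sketch using the numbers as stated above, which are your assumptions, not audited figures):

```python
# Linear extrapolation of cost per degree of warming avoided.
cost_fraction_of_gdp = 0.002  # 0.2% of US GDP (as stated in the comment)
warming_avoided = 0.05        # degrees C of warming avoided (as stated)
cost_per_degree = cost_fraction_of_gdp / warming_avoided
print(f"{cost_per_degree:.0%} of GDP per degree C avoided")  # -> 4%
```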
This is the kind of money nations are prepared to pay to avert an existential threat — rightly, without going bankrupt. Think defence budgets. This extrapolation agrees with those found in the AR4 WG3 report BTW, suggesting it's in the ball park.
That’s percentage of US GDP only. Of course the spending will have to be global… methinks you’re engaging in alarmism.
Andrew P says
Interesting that methane hasn't been increasing in concentration in the atmosphere…
Also interesting that Gavin (a RealClimate author) said of the June and July 2009 ice minimum predictions:
“The point is that whether 2009 is or is not a record-setting or near-record setting minimum, the science behind what happens is going to be a lot more interesting than the September headline.”
If 2009 was an interesting test of the science, I'm assuming all of his readers and he agree that the science failed miserably. The minimum was not 4.5 million sq km; it was 5.3 million sq km. The National Snow and Ice Data Center said ice had entered a “death spiral” in 2007. I'm not saying ice won't keep decreasing in the long term; it probably will (although the -PDO will probably induce La Niñas which help it stabilize in the next decade or two). But whatever new science the 2009 minimum was supposed to be testing, according to Gavin, must be invalidated or reevaluated now, at least, right? Either that, or Gavin was wrong to say the 2009 minimum constituted a test of that science.
I assume all of you have the intellectual honesty to pick one of the two. Is Gavin’s statement that the 2009 ice minimum constituted a test of new ice science
A) Correct, and that new science is partially or totally invalidated
B) False.
[Response: I think it safe to say that detailed estimates of the September minimum made in June still need some work. – gavin]