by Gavin Schmidt and Stefan Rahmstorf
Two stories this week, a paper in Nature (Stainforth et al., 2005) describing preliminary results of the climateprediction.net experiments, and the Meeting the Climate Challenge report from a high-level political group, have led to dramatic headlines. On the Nature paper, BBC online reported that “temperatures around the world could rise by as much as 11ºC”; on the latter report it headlined: “Climate crisis near ‘in 10 years’”. Does this mean there is new evidence that climate change is more serious than previously thought? We think not.
Both stories touch on the issue of uncertainty, in particular the uncertainty in the global climate sensitivity.
It is important to know roughly what the climate sensitivity of the planet is. There are a number of ways to estimate it, using climate models, data, or a combination of both. From the earliest experiments, model estimates have ranged from around 2 to 5°C (for 2xCO2). The most quoted range comes from the 1979 Charney report. There, two models were looked at (from Suki Manabe and Jim Hansen) which had sensitivities of 2 and 4°C, respectively. Jule Charney added half a degree of uncertainty at the low and high end, and thus the range became 1.5 to 4.5°C. This early range therefore stood on rather shaky ground. It has lasted for a surprisingly long time, with subsequent results neither challenging it nor narrowing it down further. Subsequent model estimates have pretty much fallen within those limits, though the actual range for the state-of-the-art models being analysed for the next IPCC report is 2.6 to 4.1°C. (Note that the range of climate sensitivity is not the same as the temperature range projected for 2100 (1.4 to 5.8°C), which also includes uncertainty in projected emissions. The uncertainty due purely to the climate sensitivity for any one scenario is around half that range.)
Attempts have also been made to constrain climate sensitivity from observations. Ideally, we would need a time when the climate was at an equilibrium state, with good estimates of the forcings that maintained that state and good data for the global mean temperature change. The 20th century has the best estimates of the global mean temperature changes, but the climate has not been in equilibrium (as shown by the increasing heat content of the oceans). Also, due to the multiplicity of anthropogenic and natural effects on the climate over this time (aerosols, land-use change, greenhouse gases, ozone changes, solar, volcanic etc.), it is difficult to accurately define the forcings. Thus estimates based purely on the modern period do not have enough precision to be useful. For instance, total forcings since 1850 are around 1.6+/-1 W/m2, the temperature change is around 0.7+/-0.1 °C, and the current rate of warming of the ocean (used to correct for the non-equilibrium conditions) is around 0.75 W/m2. Together, that implies a sensitivity of 0.8 +/- 1 °C/(W/m2), or 3.2+/-4°C for 2xCO2. More sophisticated methods of looking at the modern data don’t provide more of a constraint either (e.g. Forest et al., 2002; Knutti et al., 2002). (This large uncertainty is essentially due to the uncertainty in the aerosol forcing; it is also the main reason why the magnitude of global dimming has little or no implication for climate sensitivity).
What about paleo-climate? An early attempt to use the Vostok ice core data in a regression analysis (Lorius et al., 1990) resulted in a climate sensitivity of 3-4ºC. The best period for these purposes is the last glacial maximum. This was a relatively stable climate (for several thousand years, 20,000 years ago), and a period where we have reasonable estimates of the radiative forcing (albedo changes from ice sheets and vegetation changes, greenhouse gas concentrations (derived from ice cores) and an increase in the atmospheric dust load) and temperature changes. A reasonable estimate of the forcings is 6.6+/-1.5 W/m2 (roughly half from albedo changes, slightly less than half from greenhouse gases – CO2, CH4, N2O). The global temperature changes were around 5.5 +/-0.5°C (compared to pre-industrial climate). This estimate then gives 0.8 +/- 0.2°C/(W/m2), or ~3+/-1°C for 2xCO2. This is actually quite a strong constraint, as we will see.
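To make the arithmetic behind these two back-of-the-envelope estimates explicit, here is a minimal sketch in Python that propagates the quoted uncertainties through the ratio of temperature change to effective forcing. It assumes a standard ~3.7 W/m2 forcing for doubled CO2 (an assumption of this sketch, not a number given above); with that value the central estimates come out near the ~3°C quoted, and the contrast in the error bars (several degrees for the modern period versus under a degree for the LGM) is the point.

```python
# Back-of-the-envelope climate sensitivity estimates (a sketch of the
# calculations described above, not the authors' actual code).
# Uncertainties are combined in quadrature as relative errors.

F_2XCO2 = 3.7  # assumed forcing for doubled CO2 (W/m2), standard approximate value

def sensitivity(dT, dT_err, forcing, forcing_err):
    """Return sensitivity and its error, in degC/(W/m2) and degC per 2xCO2."""
    s = dT / forcing
    rel_err = ((dT_err / dT) ** 2 + (forcing_err / forcing) ** 2) ** 0.5
    s_err = s * rel_err
    return s, s_err, s * F_2XCO2, s_err * F_2XCO2

# Modern period: 1.6 +/- 1 W/m2 forcing minus ~0.75 W/m2 of ocean heat uptake,
# and 0.7 +/- 0.1 degC of warming.
print("Modern: S = %.2f +/- %.2f degC/(W/m2), i.e. %.1f +/- %.1f degC per 2xCO2"
      % sensitivity(0.7, 0.1, 1.6 - 0.75, 1.0))

# Last Glacial Maximum: 6.6 +/- 1.5 W/m2 forcing, 5.5 +/- 0.5 degC cooling.
print("LGM:    S = %.2f +/- %.2f degC/(W/m2), i.e. %.1f +/- %.1f degC per 2xCO2"
      % sensitivity(5.5, 0.5, 6.6, 1.5))
```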
With this background, what should one make of the climateprediction.net results? They show that the sensitivity to 2xCO2 of a large multi-model ensemble with different parameters ranges from 2 to 11°C. This shows that it is possible to construct models with rather extreme behavior – whether these are realistic is another matter. To test for this, the models must be compared with data. Stainforth et al. subject their resulting models to only very weak data constraints, namely data for the annual-mean present-day climate. Since this does not include any climatic variations (not even the seasonal cycle), let alone a test period with a different CO2 level, this test is unable to constrain the upper limit of the climate sensitivity range. The fact that even model versions with very high climate sensitivities pass their test does not show that the real world could have such high climate sensitivity; it merely shows that the test they use is not very selective. Our feeling is that once the validation becomes more comprehensive, most of the extremely high sensitivity examples will fail (particularly on the seasonal cycle, which tests for variations rather than just a mean).
A yet more stringent test for realistic climate sensitivity is the application of a model to a climate with different CO2 levels. Consider the implications for glacial climate of a sensitivity of twice the most likely value of 3°C, i.e. 6°C. This would imply that either the glacial forcings were only half what we thought, or that the temperature changes were twice what we infer. This would be extremely difficult to square with the paleo-data. Obviously the situation becomes even more untenable for larger values (>6°C). Hence, we feel that the most important result of the Stainforth et al. study is that the large majority of the models had climate sensitivities between 2ºC and 4ºC, giving additional support to the widely accepted range (Update: As mentioned in the follow-up post, this clustering is mainly a function of the sensitivity of the original model and the random nature of the perturbations). The fact that some of the models had much higher sensitivities should not be over-interpreted.
The ‘Meeting the Climate Challenge’ report tried to quantify what is meant by ‘dangerous’ interference in climate. All countries, including the US and Australia, have signed the Framework Convention on Climate Change, which obligates them to prevent ‘dangerous’ interference with the climate system. Actually quantifying what this means is rather tricky. For various reasons (some of them subjective) the report suggests that any global warming above 2°C (above pre-industrial levels) is likely to be increasingly dangerous. The issue is how one prevents such an outcome given the uncertainty in the climate sensitivity.
The analysis used in this report is based on a study by Baer and Athanasiou. They perform a probability calculation assuming that all climate sensitivities in the IPCC range are equally likely. This is a relatively conservative assumption (since it does not include the really high sensitivities that we argued above are ruled out by paleo-data). The results suggest that in order to avoid ‘dangerous’ climate change with a reasonable probability (>90%), the maximum forcing that could be allowed is around 2 W/m2 over pre-industrial levels. This corresponds to a CO2 level of around 400 ppm, assuming all other forcings were at pre-industrial levels. This limit is to some extent subjective, but it is similar to (though a little lower than) the level proposed by Jim Hansen.
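A toy version of that probability calculation is sketched below. It assumes a uniform sensitivity distribution over the 1.5 to 4.5°C IPCC range and ~3.7 W/m2 per CO2 doubling (both simplifying assumptions of this sketch; Baer and Athanasiou’s treatment differs, so their exact numbers will not be reproduced), and asks what fraction of that distribution keeps equilibrium warming below 2°C for a given sustained forcing.

```python
# Toy probability-of-staying-below-2-degC calculation (a sketch only; the
# Baer and Athanasiou study referenced above uses its own assumptions).

F_2XCO2 = 3.7             # assumed forcing for doubled CO2 (W/m2)
S_LOW, S_HIGH = 1.5, 4.5  # IPCC climate sensitivity range (degC per 2xCO2)

def prob_below_2deg(forcing):
    """Fraction of a uniform sensitivity distribution for which the
    equilibrium warming S * forcing / F_2XCO2 stays below 2 degC."""
    s_max = 2.0 * F_2XCO2 / forcing  # largest sensitivity keeping warming < 2 degC
    frac = (s_max - S_LOW) / (S_HIGH - S_LOW)
    return max(0.0, min(1.0, frac))

for f in (1.5, 2.0, 2.5, 3.0):
    print("Forcing %.1f W/m2: P(warming < 2 degC) = %3.0f%%" % (f, 100 * prob_below_2deg(f)))
```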
Note that this is not the same as simply reaching the 400 ppmv CO2 level (which is highly likely to happen over the next ten to fifteen years). The reason is that the other forcings (mostly aerosols) have collectively diminished the total forcing up to now; currently it is about 1.6 W/m2. Whether and when we reach 2 W/m2 total forcing is a function of the changes in many different forcings. CFCs are projected to decline in the future and CH4 is currently steady (and possibly could be reduced); however, aerosol growth rates are quite uncertain.
Is there a “point of no return” or “critical threshold” that will be crossed when the forcings exceed this level, as reported in some media? We don’t believe there is scientific evidence for this. However, as was pointed out at an international symposium on this topic last year in Beijing by Carlo Jaeger: setting a limit is a sensible way to collectively deal with a risk. A speed limit is a prime example. When we set a speed limit at 60 mph, there is no “critical threshold” there – nothing terrible happens if you go to 65 or 70 mph, say. But perhaps at 90 mph the fatalities would clearly exceed acceptable levels. Setting a limit to global warming at 2ºC above pre-industrial temperature is the official policy target of the European Union, and is probably a sensible limit in this sense. But, just like speed limits, it may be difficult to adhere to.
Uncertainty in climate sensitivity is not going to disappear any time soon, and should therefore be built into assessments of future climate. However, it is not a completely free variable, and the extremely high end values that have been discussed in media reports over the last couple of weeks are not scientifically credible.
Peter J. Wetzel says
I do have some questions about the uncertainty of the forcing during the Last Glacial Maximum (LGM, around 20,000 years ago). How certain is it that ice sheet mass balances were in equilibrium during that time? Back-of-the-envelope calculations show that the latent heat absorbed by the melting of ice after surges (e.g., the melting of >1500 years of ice accumulation during Dansgaard-Oeschger events — which seem to have happened in unison across the northern hemisphere, or the longer >5 ky Bond cycles) can significantly contribute to the global energy balance. The LGM is defined by ice extent, which would be greatest after a major collapse and surge of the ice domes. The cooling contributed by ice melt could reduce the implied sensitivity to CO2, possibly quite a lot depending on the assumptions used for melt rate.
[Response: The LGM is picked because it was the ~2000 yr period of maximum ice volume (as evidenced by the benthic O18 record). Thus the ice was neither growing nor melting significantly. Obviously things were not completely static over that period (since Milankovitch forcing continuously changes), but the ocean currents, greenhouse gas concentrations and temperatures seem to have been stable long enough to assume a rough radiative equilibrium. You can estimate how close to equilibrium the climate must have been over such a time period by assuming, for instance, that all the energy imbalance goes to melting ice: an imbalance of, say, 0.1 W/m2 sustained over 2000 years melts/grows approximately 18 m of ice. Thus if the change in ice volume is constrained to be less than that, that gives a corresponding constraint on the imbalance. As you can see, the sustained imbalance must have been a small number (and is well within the uncertainties in the calculation). – gavin]
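The arithmetic in that response is easy to check with standard values for the latent heat of fusion and the density of ice (assumed here, since they are not given above):

```python
# How much globally averaged ice melt does a 0.1 W/m2 imbalance sustained
# for 2000 years correspond to?  (Standard physical constants assumed; the
# result is the same order as the ~18 m quoted in the response.)

imbalance = 0.1               # W/m2, sustained energy imbalance
seconds = 2000 * 3.156e7      # seconds in 2000 years
energy = imbalance * seconds  # J per m2 of surface

latent_heat = 3.34e5  # J/kg, latent heat of fusion of ice
ice_density = 917.0   # kg/m3

melt_depth = energy / (latent_heat * ice_density)
print("Equivalent ice melt: %.0f m over 2000 years" % melt_depth)  # roughly 20 m
```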
Rick Watkins says
I have no knowledge of climate modeling, though after reading the observations above regarding the climate prediction project I have some general questions from the angle of those donating time and effort.
Given the limitations (weaknesses/flaws?) of the climate model and/or the handling of the data you mentioned, is this project serious climate modeling science? Are the results such that they can be used to advance climate modeling science? Are the limitations…etc. due to the fact that the project is in its early stages? Are those donating computer time, in good faith, wasting that time? Is it being squandered? Would/could this obvious mine of public interest and enthusiasm (not to mention their computer power!) be better exploited in alternative climate modeling projects?
I think these questions are relevant because I got the impression you were a bit dismissive about the entire project. Is that a fair and accurate appraisal?
[Response: No! As a project, this is very useful because it’s the kind of thing that could not have been done before, and that is always worthwhile in modelling. There will be further work done on these results, and there will be more interesting experiments performed. Eventually, we will get a better idea of how wide the spread of climate model parameterisations can be while still passing the stringent validation tests that state-of-the-art models must pass. My criticism mainly concerns how these preliminary results have been presented in the media. It was heavily implied that an 11 degree climate sensitivity is a real possibility, which in my opinion is not credible. While that makes for dramatic headlines, it does not contribute to a sensible discussion of the issues. So, I would encourage you to continue supporting the climateprediction.net effort and help speed along the more interesting results. – gavin]
imipak says
Three climate change related stories have reached the mainstream media (well, the BBC at any rate) in the last ten days. First the ‘International Climate Taskforce’ with their controversial claim about 400 ppm being a critical threshold; this seems to have been a largely political / policy driven report. Secondly, the climateprediction.net Nature paper discussed above, which relates to actual, published, science. Thirdly, the BBC is now reporting ( http://news.bbc.co.uk/1/hi/england/oxfordshire/4218441.stm ) a WWF-sponsored study, the source for which I have been unable to locate as yet. The BBC’s report emphasises (a) a 2°C temperature increase possible by 2026, and (b) the possible extinction of polar bears. (Whilst the latter is the obvious hook for the rest of the mass media, the specific trends / predictions / new model results remain vague until the actual paper’s available: can anyone offer insights into where it fits on the science <=> policy/politics continuum?) Oh, and the BBC Horizon ‘Global dimming’ story was broadcast a week ago, too, I believe. Interesting times, though the time needed to keep up with the science (as an interested layperson) is becoming rather overwhelming…
Mike Atkinson says
I think this is the WWF paper.
dave says
Please explain these numbers and “implies a sensitivity”:
I don’t understand the notation °C/W/m2. I had thought there was W/m2 and °C and some translation between them averaged over the surface area of the planet.
thanks
[Response: Sorry if that isn’t clear. The climate sensitivity is the proportionality constant between the forcing (in W/m2) and the temperature change (°C). Therefore the unit for the sensitivity is the number of degrees per unit of forcing, i.e. °C/(W/m2). So given a forcing of 0.85 W/m2 (= 1.6 W/m2 minus 0.75 W/m2 for the ocean heat content change), and a temperature change of 0.7°C, the sensitivity is 0.7/0.85 = ~0.8 °C/(W/m2) (leaving off the error bars for clarity). – gavin]
Ana Unruh Cohen says
Thank you for your treatment of the Meeting the Climate Challenge report and providing a link to the report. As the Associate Director for Environmental Policy at the Center for American Progress, I’ve spent the last week trying to explain to the media that the Taskforce did not say that climate collapse was 10 years away! The Taskforce’s goal was to make a set of policy recommendations that would provide fresh ideas and catalyze new actions to reduce the emission of greenhouse gases.
Given their current understanding of the science and its uncertainties, the Taskforce members feel that establishing an international goal of not exceeding a 2 degree C rise (over pre-industrial levels) in global average temperature is important for prompting action and as negotiations for the next round of climate accords for the period after the Kyoto Protocol (beyond 2012) begin. Although the report does say that achieving a concentration of 400ppm of carbon dioxide by 2100 gives the best chance (80% according to the Baer research; see footnote 14) of preventing a 2 degree C rise, the Taskforce members are well aware that other emission pathways are possible and that between now and 2100 our understanding of the climate system will improve. They intentionally did not set a concentration limit in order to provide flexibility as our understanding of the climate system improves.
Your website is invaluable. Keep up the good work.
dave says
Re: my question #6: Thanks. That was easy enough – I’m not feeling too science-challenged today! I didn’t know that data since 1850, as summarized, are essentially useless for estimating climate sensitivity given that the Earth’s radiative heat exchange is not in equilibrium over that period. Here’s a paper, Can we defuse The Global Warming Time Bomb? by James Hansen, which explains some of the physics details (see Box 1 in that paper, called Climate Forcings, Sensitivity, Response Time and Feedbacks, and Figure 4, which gives the numbers in W/m2 for the various forcings). This paper also discusses in some detail the paleoclimate results regarding the LGM that you cite.
I understand why you say that we needn’t take these very high estimates (11 degrees C) seriously. However, I see a lot of political danger in publishing these estimates. The qualifying factors for the estimates will be ignored by those with an axe to grind. And the great uncertainty range can be used to political advantage. I’m not so sure the BBC and Stainforth et al. have done us a big favor in this case, given that the peak of the probability density function (where most runs fall) is at about 3°C. For example, #7:
Of course, our chances at leveling out at 400 ppmv CO2 are indistinguishable from zero.
Finally, so how do I say “1°C” and get it to format correctly?
loveall says
If it is true that it is very high, the earth will be as hot as the sun.
John Finn says
Gavin
On climate sensitivity: as a ‘quick and dirty’ estimate, why can’t we use the Stefan-Boltzmann equation? In particular, use the first derivative (i.e. dE/dT), then calculate the sensitivity from the inverse (dT/dE), which gives
1/(4 x Sigma * T^3)
where Sigma = 5.67 x 10^-8 and T = 288 K (current earth temperature).
This gives a sensitivity of 0.18 deg C per W/m2, which obviously doesn’t agree with the 0.75 deg C per W/m2 – but does agree more with some modern-day observations, e.g. Pinatubo.
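The 0.18 figure is straightforward to reproduce (an arithmetic check only; as the response below explains, this no-feedback blackbody value is not the Earth’s actual climate sensitivity):

```python
# No-feedback blackbody sensitivity: dT/dE = 1 / (4 * sigma * T^3) at T = 288 K.
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m2 K^4)
T = 288.0        # K, approximate global mean surface temperature

sensitivity = 1.0 / (4.0 * sigma * T**3)
print("No-feedback sensitivity: %.2f degC per W/m2" % sensitivity)  # ~0.18
```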
[Response: The Earth is not a blackbody. Therefore theories that apply to blackbodies don’t work. The low sensitivity you quote is just as incredible as the high numbers discussed above (think about what it would imply for the glacial climate). The notion that the response to Pinatubo can determine equilibrium climate sensitivity is rather odd, and is not supported by evidence. Models with average sensitivities do a very good job simulating the Pinatubo eruption and aftermath (see Soden et al , 2002; Shindell et al, 2004). -gavin]
John Finn says
For instance, total forcings since 1850 are around 1.6+/-1 W/m2, the temperature change is around 0.7+/-0.1 °C
Gavin
As around half the temperature rise took place BEFORE the bulk of the increase in forcing, this seems like false logic.
[Response: The forcings have been increasing since 1850 (see here), and taking the longest period possible minimises the influence of intrinsic decadal variability in the climate system. But you seem to have missed the point entirely. The calculation is being done to show that it does not provide a good constraint. Looking at just the last 30 years would be even worse. – gavin]
DrMaggie says
Re #4: Judging from information at the WWF International website (www.panda.org), I believe the WWF report referred to by e.g. BBC is to be found via this link. The text linked to in #4 above is rather an abstract for a paper to be presented at the upcoming climate conference in the UK.
Scott Robertson says
If you can believe it, this link is from Fox News re:Shrinking Glaciers. Although it is an AP story, they still ran it, without contrarian opinion.
http://www.foxnews.com/story/0,2933,145824,00.html
Dave Frame says
Hi Gavin,
I enjoyed your discussion of the Stainforth et al. paper, and agree with a lot of what you have to say. These are preliminary results, and we have yet to apply tougher constraints. Claudio Piani is currently working on a paper which attempts to provide a measure of model skill compared to recent climate (this work is in parallel to the sorts of things David Sexton has been doing at the Hadley Centre for the QUMP experiment, and similar to some of the work that has been undertaken as part of CMIP-2). That may well rule out quite a few models (not just (or maybe even) the high sensitivity ones). We also have plans to use the results from this first phase of the experiment in a fully coupled ensemble which will simulate 1950-2000, which will be a significant step along the road. [Later this year, we hope.] This will allow us to compare models to data under realistic 20th century forcings (where realistic is an ensemble in itself…). That should provide quite a good constraint on the models.
As you say, paleoclimate simulations provide a constraint on this sort of experiment, and we are proposing a “paleo-prediction.net” ensemble with Paul Valdes in a follow-up experiment currently under peer review for funding. If I had to bet a fiver on it, I’d agree with you that the high sensitivities are unlikely, but until we’ve run the ensemble (accounting for uncertainties in the paleo record and cognisant of the possibility of a non-constant sensitivity between now and the reasonably distant past, amongst other things) I don’t think we’re in a position to actually rule them out. Interestingly, our results are actually pretty consistent with a lot of the recent literature on sensitivity: all studies comparing simple models with recent climate change (from Andronova and Schlesinger, 2001, onwards) find high sensitivities (more than 8K, say) are consistent (at the few-percent level) with the observed record unless they are ruled out a priori. Now we find general circulation models displaying such sensitivities that are not significantly less consistent with current climate observations than the standard models used by the IPCC.
As coordinator of climateprediction.net (and as an author on Stainforth et al.) I groaned at some of the media coverage (London’s Metro was particularly bad, and I was a bit embarrassed by the way Sky News edited an interview I did with them – I made a whole bunch of qualifications, none of which were aired). Where they got a chance to tell the full story, I think Dave Stainforth and Myles managed to get the message across reasonably well, but where the journos were clearly focused on the 11 degree angle of the story (which is a part of it…) things got a lot more untidy. Worst of all were the second-hand articles.
Anyway, thanks for the article, which adds a lot of context that mainstream media reports usually lack. Good luck with the site (which is an excellent idea) and thanks for thinking of us in your links! [If we can contribute anything to realclimate.org please just let us know – it’s a very worthwhile endeavour.]
Cheers,
Dave Frame
climateprediction.net coordinator
[Response: Thanks for your response. The Andronova study, like the Forest and Knutti papers, uses the instrumental period to try to constrain sensitivity, and so suffers from the same problems discussed above. I agree that a priori we can’t assume that the high-end simulations will fall by the wayside once more validation is done, but that is my hunch (based on model validation that we perform at GISS and my own experience with paleo-climate modelling). The media can be a difficult beast to control, and I’ve found (through much trial and error) that as well as telling them what you want them to say, you have to be extremely clear about what you don’t want them to say. I wish you luck in the next phases of the project, and look forward to seeing the results. – gavin]
John Finn says
The forcings have been increasing since 1850 (see here)..
I’ve seen there – and the net/cumulative/total forcings are zero/negative up until around the mid-1920s. But surely any increase then won’t have any effect for several years – because of the lag you’ve discussed earlier.
Tom Huntington says
It is my understanding that the uncertainties regarding climate sensitivity to a nominal 2xCO2 forcing are primarily a function of the uncertainties in (1) future atmospheric aerosol concentrations, both sulfate-type (cooling) and black carbon-type (warming), (2) feedbacks associated with aerosol effects on the properties of clouds (e.g. will cloud droplets become more reflective?), (3) changes in surface albedo of snow & ice due to changes in temperature and deposition of mineral and black carbon particulates, and last, but arguably most significantly, (4) the intensity of the positive feedback that comes from the inevitable (?) increase in the concentration of water vapor in the atmosphere as the atmosphere warms, as indicated by the Clausius-Clapeyron equation. Your analysis of the issue of sensitivity seems to be largely restricted to the effects of aerosols.
On a separate point, my understanding of the global dimming issue is that aerosols responsible for dimming were possibly masking a higher climate sensitivity to GHG increases than would have been inferred in the absence of this dimming, suggesting that further increases in GHGs without compensating aerosol increases would result in more warming than would have been predicted prior to the acknowledgement of the global dimming phenomenon.
[Response: You may be confusing two issues here. The 2xCO2 experiment is not in any sense a prediction. It is just an experiment done to estimate the climate sensitivity, and it is not affected by uncertainties in other historical forcings (like solar or aerosols). Note that the water vapour feedback in the 2xCO2 experiments is an important part of the response. Projections for the future, and indeed hindcasts for the 20th Century, do depend on the other forcings. It is the uncertainty in those forcings (particularly aerosols and their various direct and indirect effects) that prevents the historical period from constraining global climate sensitivity. The reason why global dimming really doesn’t have an implication for the climate sensitivity is precisely because of those uncertainties – look at the error bars on my back-of-the-envelope calculation in paragraph 3. – gavin]
John Finn says
A couple of questions on the following:
A reasonable estimate of the forcings is 6.6+/-1.5 W/m2 (roughly half from albedo changes, slightly less than half from greenhouse gases – CO2, CH4, N2O).
This implies a forcing of 3 W/m2 for albedo changes presumably due to additional ice/snow sheets. Is there a reference somewhere which explains how this is calculated? I understand that the current albedo of the earth is responsible for about 107 W/m2.
Also what forcing is assumed for the reduction of water vapour?
It would help if any response to this included current forcing for WV. I accept this may be variable – but some approximate range of values will do.
Also on a previous post regarding the application of the Stefan Boltzmann, you say
The Earth is not a blackbody. Therefore theories that apply to blackbodies don’t work.
I know the earth is not a true black body – but I thought the equation still held if an emissivity factor was included. As the earth’s emissivity is around 90% this will, admittedly, increase the sensitivity – but not substantially.
[Response: All forcings are calculated by changing the boundary conditions (in this case the distribution of glacial ice) and looking to see what the change in net radiation is while keeping everything else constant. Water vapour is a feedback, not a forcing (though since people keep failing to understand the distinction I will do a post on this topic at some point). S-B works only in the absence of feedbacks. The quantification of the feedbacks is the whole point of the exercise. – gavin]
Mike Atkinson says
It seems that the ice age climate constraining a 2xCO2 doubling Climate Sensitivity is dependent on the assumption that the sensitivity is linear in the entire range of CO2 values from ice age levels (much below present) to 2x preindustrial values.
You also seem to be apportioning ice age climate sensitivity among albedo, CO2 and atmospheric dust (ignoring other forcings) in a way that the errors in CO2 forcing and temperature change are correlated and then assuming that they are uncorrelated. I may be wrong about this, I’m still learning, and you may be justified in treating them as uncorrelated even if they are correlated.
Can other ice age forcings be ignored? In particular variations of the Solar constant (how I love variable constants!) and ozone.
[Response: All of these are good points. Actually the basic assumption is that climate sensitivity is roughly constant (not linear) for warming and cooling effects. This is not exactly the case in climate models, but it’s a reasonable approximation. We could argue about the exact size of the effect, but I doubt that it would exceed the error bars quoted. The LGM forcings include other well-mixed GHGs (CH4 and N2O) since they are constrained by ice core records. Other forcings (O3, solar, other aerosols) may play a role but are currently extremely poorly constrained (i.e. not at all). If subsequent investigation found that they were indeed important (which a priori they aren’t expected to be), then the calculation would need to be revised. It seems more likely that we have considered the biggest players. I don’t understand your point about errors in CO2 and T being correlated. The CO2 level comes from half a dozen different ice core analyses, while the temperature data come from marine sediments, pollen analyses, isotopes, corals etc. Why would the errors in the different proxies be correlated? – gavin]
[Response: also… re solar forcing and ice ages: the ice ages are fairly regular and match the timescales of orbital forcing. There is no known solar variation on this timescale. It would be odd if there just happened to be a solar cycle on exactly the 100 kyr timescale – William]
Joel Shore says
Just to follow-up on John Finn’s question (#10), if one puts in a rough value for the emissivity of the earth (whatever that might be), so one is no longer assuming it is a perfect blackbody, then does the resulting estimate for climate sensitivity correspond to what one would expect in the absence of any feedback effects?
I.e., does that provide a reasonable estimate of the direct effect of the forcing before feedbacks, or are there other reasons why it still too simplistic even for that?
[Response: In the absence of any feedbacks that is what you’d get. But feedbacks are the whole point – if you have any kind of climate system, you get feedbacks. The interesting question is how big they are, and you can’t get that assuming Stefan-Boltzmann. – gavin]
Mike Atkinson says
About CO2 sensitivity / temperature correlation. I meant that the attribution of the forcing between Ice albedo, CO2, etc. might be dependent on the temperature. For instance would the CO2 sensitivity in °C/(W/m2) be dependent on the temperature (whether a 5 or 6°C drop)?
[Response: Yes. Sensitivity is the ratio of the temperature to the forcings – so if the temperature estimate changed, so would the implied sensitivity. I built in an uncertainty of 0.5 °C in that, but you can easily do the math if you think I underestimate the error or got the mean wrong. – gavin]
Lynn Vincentnathan says
It is sort of reassuring that 11 degrees C is far-fetched. I say “sort of,” since 5.8 degrees (which you suggest as the more scientifically founded upper possibility) may be dangerous enough. Please correct my faulty understanding, but I have read (secondary sources) that 251 million years ago it is thought there was 6 degrees global warming (from natural causes), and that this triggered massive CO2 and CH4 releases, leading to runaway global warming, and massive extinction. Is this wrong? I understand 5.8 is a possibility, not a high probability, but if such warming were to happen, could this lead to runaway global warming? Or would that require something higher, or is that whole scenario so unlikely as to be dismissed outright?
[Response: The 5.8 degrees is not a climate sensitivity; it is a projection for 2100 using the largest likely sensitivity and the fastest growth in greenhouse gases. Many of the media reports also confused the two temperature ranges. The largest sensitivity currently for any of the state-of-the-art climate models is 4.1 deg C for 2xCO2, but it may be as high as 5 deg C. As far as information concerning the Permian-Triassic extinction event goes, it is all pretty speculative. The amount of available data is very sparse, and so while these kinds of ‘deep time’ paleoclimate questions are a lot of fun, the lack of detail means that they have limited implications for today’s climate. Runaway greenhouse warming can occur under really extreme conditions (Venus at present, Earth in maybe 5 billion years’ time when the sun becomes a red giant), but is not a possibility for the next hundred years. I find it rather an overused term in climate discussions. – gavin]
Peter J. Wetzel says
Just to quickly interject a little help for Gavin: The Stefan-Boltzmann argument applies to the “top of atmosphere” exchange of, and balance of, radiation. But all the really meaty “good stuff” regarding feedbacks and climate change happens between the top of the atmosphere and the surface (where we are attempting to define sensitivities).
How does a CO2 increase affect the water vapor exchange, the cloud amount and its incredibly complex feedbacks involving aerosols, precipitation efficiency, and the resultant radiation balance at the 1-2 meter height thermometer shelters where humanity defines its climate? How do the complex feedbacks change atmospheric circulation patterns, and the interaction of these patterns with changes in ice cap topography (e.g. at the LGM)? How do these feedbacks and the atmospheric circulation changes interact to affect the ocean circulation, including shifts in the location of deep water formation and cold water upwelling, the depth of the thermocline, etc. etc.?
The problem is mighty complex. Science is doing its best to grapple with all this complexity — to understand it, and to model it under the constraints of finite computational power. “Earth system modeling” is a valuable tool for testing the sensitivities. But fundamental research into understanding the processes that drive the “Earth system” is also still rapidly advancing.
The “best answer” science can provide today is sure to be superseded by a much “better answer” quickly. Hang loose and keep an eye on this site! :~)
P.S. A lot of research energy is being devoted to the study of methane clathrates — a huge source of greenhouse gases which could be released from the ocean if the thermocline (the buoyant stable layer of warm water which overlies the near-freezing deep ocean) dropped considerably in depth (due to GHG warming), or especially if the deep ocean waters were warmed by very, very extreme changes from the current climate, such that deep water temperatures no longer hovered within 4°C of freezing, but warmed to something like 18°C. It would seem that very drastic warming of the deep ocean is the only way that this source of methane would be released and trigger a “runaway” greenhouse warming.
dave says
Re #21 The End Permian Extinction
I think it is a mistake to casually dismiss recent findings on this extinction as “all pretty speculative”. More and more data are coming to light on this event. There are complete sediment records (especially in China) that have allowed much more thorough inspection of events at the Permo-Triassic boundary. Take a look at How to kill (almost) all life, from which I quote:
The oxygen isotope record does indeed indicate a rise of about 6 degrees C just before the worst happened. Global warming brought on by increased atmospheric CO2 from volcanism is thought to be the cause of this extinction. A theory about an impact at the time has little support. Furthermore, the large negative shift in the C13 isotope at the time can only be explained by the release of light carbon from methane clathrates (#22). A new paper by Ward et al. (Science, Jan 20, 2005) sheds more light on what happened. Oxygen levels dropped, the oceans became stratified, and a critical warming threshold was exceeded.
I’m not saying we’re on the verge of another extinction like the end Permian. However, with these big numbers being thrown around (for example 8K in Dave Frame’s post #14), it seems prudent to remember what may have happened 251 mya when over 90% of Earth’s species went extinct.
John Finn says
[Response: All forcings are calculated by changing the boundary conditions (in this case the distribution of glacial ice, and looking to see what the change in net radiation is while keeping everything else constant. Water vapour is a feedback, not a forcing (though since people keep failing to understand the distinction I will do a post on this topic at some point). S-B works only in the absence of feedbacks. The quantification of the feedbacks is the whole point of the exercise. – gavin]
Right. So can we take it from this that the climate forcing from feedbacks is far and away the most dominant factor – despite the fact that the feedbacks haven’t been accurately “quantified”? We can wait for the post on water vapour and the feedback effect for a response to this. Also, could you include a comment on the NASA report (around March 2004) by Minschwaner(?) which, from observations, suggested that “we may be over-estimating feedbacks”?
But a question for now.
The 0.5 deg C which is still “in the pipeline” – when is this going to become evident? Put it this way: if atmospheric levels of CO2 were fixed at today’s level (380 ppm) indefinitely, when would we see global temperatures 0.5 deg C higher than today?
John Finn says
Just to quickly interject a little help for Gavin: The Stefan-Boltzmann argument applies to the “top of atmosphere” exchange of, and balance of, radiation. But all the really meaty “good stuff” regarding feedbacks and climate change happens between the top of the atmosphere and the surface (where we are attempting to define sensitivities).
Peter
This is fair enough. But surely S-B can be applied at the surface as well. Around 240 W/m2 is emitted from the “top of the atmosphere” at around 255 K (as S-B confirms) – but around 390 W/m2 at 288 K is emitted from the surface. The difference is due, as you say, to the “meaty bit in the middle”. A rough calculation here would suggest a sensitivity of about 0.22 deg C. BUT after a long drawn-out process we’ve established — not just that feedbacks are not included (as Gavin seems to think I’ve misunderstood) – but the actual magnitude of the feedbacks.
These seem to be around 2-3 times the direct forcing from CO2 alone.
It would help if someone could confirm this.
Jeffrey Davis says
In reference to dave’s comment (23), Ward’s time frame for the Permian extinction is 10 million years. Human beings won’t be human beings in 10 million years regardless of CO2 levels.
Lynn Vincentnathan says
Re #21, thanks for the clarification. I guess what I meant by “runaway global warming” was, could a high end warming scenario (using the biggest likely sensitivity and fastest growth in GHGs), eventually trigger “natural” positive feedback loops of GHG emissions (not referring to people using their ACs more due to the warming & thus emitting more GHGs causing more warming), even if people reduced their GHG emissions, say by 90%, in which the emissions from nature & the warming would continue to increase for at least some time period (I understand we are too far from the sun to have a permanent and extreme “Venus effect,” unless the sun becomes a lot hotter/brighter). And I’m not necessarily looking at only 2100, but simply “in the future,” by 2200 or 2300, or whatever.
As mentioned in an earlier post, as a layperson, I am more interested in avoiding false negatives, so I don’t need 95, 90, or even 20% certainty to be thinking about this. For example, I might take an umbrella if only a 20% chance of rain has been predicted.
And regarding an article I read in 2004 about 2 years of acceleration in atmospheric CO2 concentrations. What if this were to become a trend and continue, would this indicate the positive feedback loops are becoming more prominent, and the negative feedback loops less prominent?
I know most of you on this site, both the questioners and experts, are way above my knowledge on the subject, but I do appreciate your efforts to explain this to a layperson like myself. I do talk about the subject, and I would not want to be way off track.
dave says
Re: dangerous interference
As of today, the “Avoiding Dangerous Climate Change” conference is in session at the Hadley Centre. Here is the program for the conference. Many of the papers being presented there are available online.
Peter J. Wetzel says
For John Finn:
The S-B relationship is fundamental to defining the climate of the Earth at every level of the atmosphere from the top to the surface. At the crux of climate models, you will find the integration of that equation through all atmospheric layers, accounting for the emission contributions of all the radiatively active matter in those layers.
You’ve used the S-B formula to calculate dQ/dT at typical terrestrial temperatures. Your calculation describes how much difference in infrared radiational heating, dQ, results from a given increment of temperature change, assuming emissivity and everything else remain fixed.
CO2 sensitivity is defined by doing the integration of the S-B equation through an ensemble of typical present-day atmospheric conditions twice, first with CO2 at 280ppm, then with it at 560ppm. Because CO2 makes the atmosphere more opaque to infrared radiation, and because the atmosphere gets colder as you get higher, the “effective radiation temperature” of the infrared radiation leaving the earth is made colder by increasing CO2 (fewer Watts per square meter of infrared radiation leave the top of the atmosphere). It is the reduced amount of radiation leaving the top of the atmosphere that changes the earth’s balance of heat, and therefore defines the “direct radiative forcing” caused by doubling CO2.
Now it should be obvious that this pair of calculations using a fixed, present-day atmosphere with 1xCO2 and 2xCO2 does not account for the feedbacks. If 4W/m2 less heat escapes the top of the atmosphere but the same amount of heat is still coming in from the sun, some physical change must occur in order to restore the energy balance. The most intuitively reasonable thing that could happen is that the atmosphere would warm up until 4 more W/m2 are emitted by the S-B law. But how will this warming be distributed through the atmosphere? This is where the complexity of the feedbacks begins.
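(As an aside on the ~4 W/m2 number: the widely used simplified expression for CO2 forcing from Myhre et al. (1998), ΔF ≈ 5.35 ln(C/C0) W/m2, gives essentially the same value for a doubling. A quick check:)

```python
import math

# Simplified CO2 radiative forcing (Myhre et al., 1998): delta_F = 5.35 * ln(C/C0) W/m2.
delta_F = 5.35 * math.log(2.0)  # doubling: C/C0 = 2
print("Direct forcing for 2xCO2: %.1f W/m2" % delta_F)  # ~3.7 W/m2
```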
The temperature sensitivity discussion is all about how the atmosphere, the oceans, the biosphere and the cryosphere adjust to this forcing change. The “degrees of freedom” and the range of time scales involved in the adjustment process are immense, and are not all well understood. And many things besides just temperature end up adjusting.
I apologize if this was all too basic. Maybe I missed the point.
But I’d like to inject one more facet into the discussion. The uncontrolled experiment which we are performing and attempting to observe and model involves many more anthropogenic forcings than just CO2. The infamous IPCC Figure 6.6 actually just skims the surface of the impacts humanity is imposing on Earth. The figure is infamous from my perspective because it conveys an impression of much more certainty than I believe a responsible “flagship” figure, featured prominently in the executive summary, should convey.
The big, big, HUGE uncertainty bar that you *do not* see there, belonging to the cloud feedbacks, is the most egregious issue. Beyond that, there are other major missing components, left out because they are the most uncertain, and certainly not because they can be expected to have little effect. Of these others, the most important are the second aerosol indirect effect and land-use change effects beyond albedo — especially the infrared emission effect. Both of these just happen to be very likely to be net cooling effects.
Pat N: Self-only says
I would like to see discussion about the most recent period of rapid global warming … leading to the Paleocene Eocene Thermal Maximum (PETM) about 55 million years ago … including differences and similarities to the climate projections for this century … and beyond.
There is evidence that a large area of dense forests “rapidly” became swamp … where the northern Great Plains now lie.
Evidence includes fossils of subtropical/tropical conditions existing 50 million years ago in the lakes of the area now known as southwest Wyoming, northeast Utah and northwest Colorado.
“Diplomystus” fossil fish from southwest Wyoming can be viewed here at realclimate by clicking my name (a hotlink) that follows…
dave says
Re: #26 and the end-Permian (#23)
I know this is a comment thread but the comment #26 distorts the science, so let’s get that right.
The period of study by Ward et al. is 10 million years. That is not the time-scale of the extinction. I haven’t gotten a look at that paper yet, although the title is “Abrupt and Gradual extinction….” The actual extinction took place at the boundary (sediment beds 25/26 in the paper I cited), synchronous with the Siberian trap volcanism (251.1 +/- 0.3 mya), on a scale which is 3 orders of magnitude less (counted in tens of thousands of years) than the 10 million years you cite. Measurements from 251 mya cannot be more precise. Look at the paper I cited in #23 and “Rapid and synchronous collapse of marine and terrestrial ecosystems during the end-Permian biotic crisis” by Richard J. Twitchett, abstract here. Sorry, not online. The time-frame given there is 10 to 60 kya. At this level of resolution, nothing further can be said about time-scales.
So, your comment does not affect mine at all. Obviously, what was happening at the end-Permian is very different than what is going on now. I merely said it is prudent to keep these kind of results in mind and would further add that the Earth’s systems may have surprises in store for us. If you require further references, I will be pleased to provide them.
Jan Hollan says
Regarding the correspondence of temperature and radiative (S-B) fluxes, I made
a scheme back in 1999, trying to visualize the problem. With newer numbers, it’s now available as warmin_ppf within http://amper.ped.muni.cz/gw/articles/ (there is the source PostScript I wrote, “easy” to edit, and pdf/png made from it). I left the current forcing at 3 W/m2, even if it may be too much (I did not want to change the greenhouse scheme warmin_en). The equivalent (bb) temperatures of downward atmospheric radiation are 1.8, 3.05 and 6.2 °C.
jenik
DrMaggie says
Re #30:
While it of course is very interesting to find out more about the climate in different parts of the world during ancient periods of Earth’s development, I feel that it might be quite important to remember that e.g. 55 million years BP, the distribution of land masses across the globe was quite different from what it is now. (Some interesting illustrations of continental drift can be found here.)
Could one of the experts comment on how this would affect the balance of the various climate forcings? I guess that e.g. the overall albedo could be affected, as well as the solar heating of the oceans and the land surfaces (continents were placed at different latitudes). Also the potential pathways of oceanic circulation patterns would be affected, and the presence or absence of large mountain ranges would impact on wind and precipitation patterns…
John Finn says
Re #29
Peter
Many thanks for your response. No need to apologise for being “too basic” – it helps to confirm my understanding.
My posts on S-B were intended to provoke discussion on issues similar to the ones you raised in your post.
Hopefully realclimate will do an article some time.
Thanks again
Tom Huntington says
For Peter Wetzel
If the second indirect effect of aerosols can be simplified as
“the development of precipitation and thus the cloud liquid-water path, lifetime of individual clouds and the consequent geographic extent of cloudiness”
quoted from the IPCC TAR
http://www.grida.no/climate/ipcc_tar/wg1/185.htm
and the evidence now points towards increasing evaporation (at least over the oceans) and precipitation (globally), can you explain how this is “very likely a net cooling effect” as described in Comment #29, when it could also be argued that this is consistent with an increase in lower atmospheric water vapor content (this particular feedback resulting in warming)? What is the best guess of the experts regarding the balance of the cooling versus warming effects of increasing clouds/water vapor?
Will increasing cloudiness necessarily result in net cooling? What is the balance of the cooling effects of reflectivity versus warming effects of insulation at night?
Can you suggest an updated revision to IPCC Figure 6.6 that reflect advances in understanding since its publication in 2001? I would also love to see a comparable figure that showed the feedbacks (like water vapor) in the same watts per square meter format.
[Response: The most up-to-date estimates that we have made to the forcings diagram are available: http://www.giss.nasa.gov/data/simodel/efficacy/ and in particular fig 28. -gavin]
Jeffrey Davis says
Responding to #31
My reference to the 10 million year time frame and not-being-human wasn’t because of extinction but because of evolution. 10 million years ago the hominids were very different from the homo sapiens sapiens that emerged around 40,000 years ago. Lashings of apologies for the confusion.
Pat N: Self-only says
Re: 33, 30
The continents 55 million years ago were in about the same places as they are now. The main difference was the gap between North America and South America. I think the effect of no gap on global climate is minor. As sea level rises again, perhaps the gap would reappear?
John Bolduc says
There is a paper posted on the program for the “Avoiding Dangerous Climate Change” conference currently taking place in England, titled “Why Delaying Climate Action is a Gamble” by Kallbekken & Rive of the CICERO Center for Int’l Climate & Envt Research. The paper is at http://www.stabilisation2005.com/programme.html on the Day 3 agenda. The paper appears to conclude that if we wait 20 years to begin reducing GHG emissions, assuming a modest amount of mitigation in the short term, we will have to reduce emissions at a 3 to 7 times greater rate than if we start now in order to keep warming to a 3 degree C increase around 2100. The authors note that it’s a gamble because if the political feasibility of acting doesn’t improve and the costs don’t decrease, it will be that much harder to take action. I’m wondering what other climate scientists think about making such a forecast? I assume the paper is not peer-reviewed yet.
Peter J. Wetzel says
Response to Tom Huntington, #35:
The second aerosol indirect effect is more likely to cause cooling than warming because, to the best current knowledge, high clouds are more likely to warm climate, whereas low clouds are more likely to cool. However high clouds are much less likely to produce precipitation than low clouds. Thus second aerosol indirect effects predominantly operate on low clouds, increasing the endurance of cloud liquid water in these clouds.
The second aerosol indirect effect can, *under some circumstances* increase cloud lifetimes. But the more general statement that I used, “increasing the endurance of cloud liquid water”, does not always translate into longer cloud lifetimes, particularly in the widespread areas of nearly overcast marine stratocumulus which dominate considerable areas of the globe. In the core of these areas, there is little hope of increasing cloud lifetime since the cloud cover is almost continuous already. It is only on the periphery of the large marine stratocumulus cloud banks where lifetime might hope to be extended. And then it is only on the downwind side of these cloud banks where clouds are dissipating (rather than on the side where they are forming), where the second effect might be expected to increase cloud extent due to increased cloud lifetime.
Increasing low cloud extent is expected to cool climate, based on our best understanding at present. Increasing the endurance of cloud liquid water within cloud of fixed extent will also cool, due to the increased albedo of such clouds.
You go on to interject the projected increase in atmospheric water vapor content into the discussion. The second aerosol indirect effect may play some role in increasing the atmospheric *total* water content — note that cloud droplets are liquid water, not water vapor. There has been discussion here about the potential for aerosols to increase the average residence time of atmospheric water.
However when you comment that: “this is consistent with an increase in lower atmospheric water vapor content (this particular feedback resulting in warming)” you are straying into an entirely different feedback. Whether consistent or not, the putative increase in water vapor due to GHG warming has a separate hypothesized cause (rooted simply in the Clausius-Clapeyron relationship). There is no compelling need to discuss the two effects as though they were inextricably linked (There are undoubtedly second order links (feedbacks) which connect the arguments, but the first order processes are easily separable.)
Therefore when you ask about the general effects of cloud feedbacks on climate, you have moved well beyond the scope of a discussion about aerosol second indirect effects. But to answer those questions, the results of climate model work reported in the
IPCC TAR (see Fig. 7.2) show huge differences among models in the overall cloud radiative forcing. Five models predict net cooling, five predict net warming due to cloud feedbacks. The uncertainty range for net radiation effects stretches from -1.1 to +2.9 W/m2. And this is one of the most notable missing uncertainties in the infamous IPCC TAR Fig 6.6, which I complained about in #29.
And along those lines, I’d like to end by complaining about the lack of uncertainty bars in the figure which Gavin presented in his response to your post. My complaint is not about the most probable values of the various forcings, direct and indirect, but about the failure, by error of omission, to convey a proper sense of the underlying uncertainty in those numbers.
Observer says
Thanks..
This one, day 2, is very instructive
http://www.stabilisation2005.com/day1/leemans.pdf
John Finn says
I’m sorry, guys, but it’s time to be brutal. I’ve had it with climate and weather predictions from computer models. Last Sunday, Hadley Weather Centre – the UK’s esteemed centre of excellence for climate research – began to issue reports through the media – about an icy arctic blast we could expect at the end of the week. This would all start on Friday (to-day) with heavy rain from the NW which would drag cold air in behind it. Every night this week – the story was the same – prepare for a real taste of winter. That is – until last night – when the “icy blast” became “temperatures drop back by one or two degrees”. It wouldn’t have been so bad but I was going to take Friday off work – but I thought better of it in light of the heavy rain/sleet/cold we could expect. I actually had lunch to-day sitting OUTSIDE in shirt sleeves. I would have been in the garden all day had it not been for the completely useless, incompetent jerks at Hadley.
Don’t bother going on about this being weather and not climate. I know the difference – but do the computer models? At the end of the day the models are just mathematical representations of climatic (or meteorological) processes as they are understood by the people who develop them. In any case, for the past 2 years (at least) Hadley have predicted the mean global temperature at the start of the year. Now this doesn’t involve giving the forecast for a given day or week – it’s supposed to be an indicator of the way climate is changing. They based their predictions on 1961-90 anomalies (as per CRU) so actual numbers will be a bit different to GISS. Anyway, for the past 15 years the CRU record has shown global temperatures to be between 0.3 and 0.6 deg above the 1961-90 average, so a reasonable guess might be the mid-point, i.e. 0.45 deg. Actually if you’d guessed 0.45 – you’d have been remarkably close – just 0.02 deg out in 2003 and spot on in 2004. So what about the Hadley computer model (the HAD….something … 3CM.. or other)? It managed to over-estimate 2003 by 0.08 deg and 2004 by 0.06 deg. Bearing in mind the likely range of the results, this was a woeful performance. Still, if we take the average and extrapolate over the next 100 years, they’re only about 7 degrees adrift.
Once again sorry – but you still don’t know anywhere near enough to be able to provide the slightest clue as to how climate might behave in the next 100, 50 – or even 10 years.
PS I’ve got Monday off now – so it’ll probably teem down then.
[Response: yes, you’ve confused weather and climate. Stating that you know you’ve done so doesn’t help your case, it just makes it worse. Ditto the forecasts of temperature for a year ahead – William]
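To spell out the arithmetic behind the “7 degrees adrift” figure above: it is simply the two quoted annual over-estimates averaged and then extrapolated linearly over a century. A minimal sketch, using only the numbers given in the comment and making no claim that annual forecast errors actually accumulate this way:

```python
# Arithmetic behind the "7 degrees adrift" remark: the two quoted annual
# over-estimates, averaged and extrapolated linearly over 100 years.
# (Illustrative only; it does not follow that annual errors accumulate.)
errors = {2003: 0.08, 2004: 0.06}                  # quoted over-estimates, deg C
mean_error = sum(errors.values()) / len(errors)    # 0.07 deg C per year
print(f"mean annual error: {mean_error:.2f} deg C")
print(f"linear extrapolation over 100 years: {mean_error * 100:.0f} deg C adrift")
```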
John Finn says
yes, you’ve confused weather and climate……. – William
No, I haven’t. I’m pointing out that both are predicted/projected using mathematical models – both are sensitive to small changes in initial conditions – neither can cope with unforeseen events (i.e. the ‘butterfly effect’) – and both are capable of being massively in error.
[Response: the difference is that climate *isn’t* sensitive to small perturbations in its initial conditions (or at least, it is believed to be so). As a rough analogy, weather prediction is an initial value problem; climate is a boundary value problem. The climate of climate models is provably not dependent on the initial conditions – William]
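To make that distinction concrete with a toy rather than an actual climate model: the sketch below integrates the standard Lorenz-63 system twice from initial states differing in the eighth decimal place. The individual trajectories soon diverge completely (the ‘weather’ problem), but the long-run statistics of the two runs (their ‘climate’) come out essentially the same. The equations and parameter values are the conventional textbook ones; nothing here comes from the post itself.

```python
# Toy illustration (Lorenz-63, standard parameters): individual trajectories
# are sensitive to initial conditions, but long-run statistics are not.

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system by one fourth-order Runge-Kutta step."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))

    k1 = f(state)
    k2 = f(shift(state, k1, dt / 2))
    k3 = f(shift(state, k2, dt / 2))
    k4 = f(shift(state, k3, dt))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def run(initial, n_steps=200000, dt=0.01):
    """Return the final state ('weather') and the long-run mean of z ('climate')."""
    state, z_total = initial, 0.0
    for _ in range(n_steps):
        state = lorenz_step(state, dt)
        z_total += state[2]
    return state, z_total / n_steps

# Two runs whose initial conditions differ only in the eighth decimal place.
final_a, climate_a = run((1.0, 1.0, 20.0))
final_b, climate_b = run((1.0, 1.0, 20.00000001))

print("final states (the 'weather') differ completely:")
print("  ", tuple(round(v, 2) for v in final_a))
print("  ", tuple(round(v, 2) for v in final_b))
print("long-run mean of z (the 'climate') is nearly identical:")
print("  ", round(climate_a, 2), "vs", round(climate_b, 2))
```

The point is only about the structure of the problem: the trajectory is an initial value problem, while the statistics of the attractor are set by the equations and their ‘boundary conditions’, which is the sense in which a model’s climate does not depend on where you start it.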
John Finn says
William
I notice that you are involved in climate modelling. I have a question about the Mt Pinatubo eruption.
The claim is/was that the cooling effect on the earth due to the eruption was predicted (or simulated) by the models. OK – I’ll buy that. My question is: how much of a cooling effect was there? I’ve seen 0.5 degrees quoted – but what does this mean? Is it for one day, one month, or the average for a year?
I understand that the Pinatubo effect is thought to be spread over 3 separate years, i.e. 1992-94. You must, therefore, be able to determine what the temperature anomalies (w.r.t. the 1951-80 mean, as per GISS) for those 3 years would have been if Pinatubo had not taken place. I mean, how else would you know what the cooling was if you didn’t know what the temperatures would have been? I’d be grateful for any information you have on that.
One final point, William. I think you ought to change the photo at your work page link. The family photo with the 2 kids is much better.
[Response: I am involved in modelling, but so is Gavin, and he has a much better idea about Pinatubo than me. http://maui.net/~jstark/nasa.html says the effect was predicted and observed at 0.3°C globally; that appears to be roughly consistent with the record: http://www.cru.uea.ac.uk/cru/info/warming/. As for the photo – work wouldn’t let me get away with it – William]
stephan harrison says
Response to John Finn (number 43). An incomplete analogy relates to the sea. I doubt whether any computer model can predict the movement of a grain of beach sand moved around by a breaking wave, since the Navier-Stokes equations are rather difficult to solve. But we can predict the tides a long way into the future because we understand the dominant forcings. Scale matters.
Eli Rabett says
WRT #45 and #46
See
Science, Vol. 296, Issue 5568, pp. 727-730, 26 April 2002
Global Cooling After the Eruption of Mount Pinatubo: A Test of Climate Feedback by Water Vapor
Brian J. Soden, Richard T. Wetherald, Georgiy L. Stenchikov and Alan Robock
The sensitivity of Earth’s climate to an external radiative forcing depends critically on the response of water vapor. We use the global cooling and drying of the atmosphere that was observed after the eruption of Mount Pinatubo to test model predictions of the climate feedback from water vapor. Here, we first highlight the success of the model in reproducing the observed drying after the volcanic eruption. Then, by comparing model simulations with and without water vapor feedback, we demonstrate the importance of the atmospheric drying in amplifying the temperature change and show that, without the strong positive feedback from water vapor, the model is unable to reproduce the observed cooling. These results provide quantitative evidence of the reliability of water vapor feedback in current climate models, which is crucial to their use for global warming projections.
Also
The Dust Settles on Water Vapor Feedback, Anthony D. Del Genio, Science 2002, 296: 665-666 (in Perspectives)
A search on Pinatubo in Science pulls up other interesting articles
Jeffrey Davis says
When you bake a raisin bread, you’ve got no idea where the individual raisins will end up, but at the end of an hour you still have a loaf of raisin bread. Every time.
You’re treating mistakes in calling the weather as cumulative. They aren’t.
John Finn says
When you bake a raisin bread, you’ve got no idea where the individual raisins will end up, but at the end of an hour you still have a loaf of raisin bread. Every time.
Yes – but you know how many raisins you put in. In other words, you have an accurate measurement of all the ingredients – not so with the weather … or climate – ask the modellers!
You’re treating mistakes in calling the weather as cumulative. They aren’t.
The weather forecast was a bit tongue-in-cheek – a wind-up – not intended to be taken too seriously. However, I don’t accept that the criticism of the annual predictions is invalid.
I’d be happy for someone to clarify this, but don’t the models run through a succession of time steps for a number of defined ‘columns’ of the atmosphere – so isn’t there a chance that a small initial error could accumulate?
[Response: As William pointed out, weather is an initial value problem where the chaotic nature of the dynamics causes slightly different initial conditions to lead to widely divergent solutions after only a few weeks. Climate is a boundary value problem that looks at the statistics of the weather and tries to average over all of the possible paths. The ‘number of raisins’ is the energy coming in from the sun. Annual or seasonal climate forecasts are difficult because they rely on the oceans behaving predictably – which sometimes they do (e.g. during ENSO events), but mostly they are part of the coupled climate system and so have ‘weather’ as well. This is why climate forecasts are mostly for much longer periods, over which those variations too can be averaged out. – gavin]
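A minimal sketch of the ‘averaging over all of the possible paths’ idea, using a made-up trend-plus-red-noise series rather than real model output (the trend, noise level and persistence below are arbitrary assumptions for illustration): no single realisation is predictable year by year, but the ensemble mean recovers the forced signal.

```python
# Toy ensemble: each realisation is a forced trend plus AR(1) "weather" noise.
# Individual runs are unpredictable; the ensemble mean tracks the forcing.
import random

def one_realisation(n_years=100, trend=0.02, noise_sd=0.15, persistence=0.6):
    """One synthetic temperature history (deg C anomaly by year index)."""
    series, weather = [], 0.0
    for year in range(n_years):
        weather = persistence * weather + random.gauss(0.0, noise_sd)
        series.append(trend * year + weather)
    return series

random.seed(0)
ensemble = [one_realisation() for _ in range(200)]

for year in (10, 50, 99):
    single = ensemble[0][year]
    ens_mean = sum(member[year] for member in ensemble) / len(ensemble)
    print(f"year {year:3d}: one run {single:+.2f}, "
          f"ensemble mean {ens_mean:+.2f}, forced signal {0.02 * year:+.2f}")
```

The same logic is why the response hedges on annual and seasonal forecasts: on those timescales you are still largely looking at one noisy path rather than an average over paths.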
Eli Rabett says
No one counts the raisins that you add to raisin bread. At best you have an estimate, assuming you were anal enough to count the number of raisins in a cup a few times. OTOH after the bread is baked you can observe the average raisin density and comment on how cheap or generous the baker was. While this does not lead to wildly divergent raisin bread (you would have to add a lot of yeast for that)……