by Gavin Schmidt and Stefan Rahmstorf (French translation by Thibault de Garidel and Gilles Delaygue)
Two pieces of work released this week, a paper published in Nature (Stainforth et al., 2005) describing preliminary results from the climateprediction.net experiment, and the report Meeting the Climate Challenge from a policy group, have led to alarmist headlines in the media. Of the Nature paper, BBC Online says that "global temperatures could rise by nearly 11°C"; of the second report, that a climate crisis is only 10 years away. [Translator's note: see also Le Monde: "Un réchauffement climatique de plus de 6ºC n'est plus à exclure"; Libération: "Prédire chez soi".] Does this mean that new evidence shows climate change to be more serious than previously estimated? We do not think so.
Both papers touch on the question of uncertainty, and in particular on the uncertainty in global climate sensitivity.
It is important to know roughly what the Earth's climate sensitivity is. There are different ways to determine it: using numerical climate models, using data, or using a combination of the two. From the earliest experiments, model estimates of the sensitivity have ranged from 2 to 5°C (for 2xCO2, a doubling of the CO2 concentration). The most frequently quoted range comes from the Charney report, published in 1979. In that report two models were used (by Suki Manabe and Jim Hansen), giving sensitivities of 2 and 4°C respectively. Jule Charney added half a degree of uncertainty at the low and high ends, turning this into the range of 1.5 to 4.5°C. The range therefore rested on a rather shaky foundation. This estimate of the sensitivity was used for a surprisingly long time; the results of later work neither changed it nor narrowed the uncertainty range. Later model-based estimates of the sensitivity mostly fall within these limits, although the latest generation of models for the forthcoming IPCC report gives a range of 2.6 to 4.1°C.
(Note that the range of the Earth's climate sensitivity is different from the projected temperature rise by 2100, 1.4 to 5.8°C, which also includes the uncertainty in carbon emissions from human activity. The uncertainty due purely to climate sensitivity, for any given emission scenario, accounts for roughly half of that range.)
Climate sensitivity has also been estimated from observations. The ideal situation for estimating it would be a period with the climate in equilibrium, good knowledge of the forcings maintaining that state, and good proxy estimates of the change in global mean temperature. The 20th century has the best estimates of global mean temperature change, but the climate has not been in equilibrium (as shown by the increase in ocean heat content). Moreover, because of the multiplicity of human and natural influences on climate during the 20th century (i.e. aerosols, land-use change, greenhouse gases, ozone, solar and volcanic forcings, etc.), it is difficult to pin down the relative contributions of the different forcings. Estimates based on the 20th century are therefore not precise enough to be useful. For instance, total forcings since 1850 are around 1.6±1 W/m2, the global temperature change is around 0.7±0.1°C, and the current rate of ocean warming (used to correct for the out-of-equilibrium conditions) is around ~0.75 W/m2. Together these imply a sensitivity of 0.8±1 °C/(W/m2), equivalent to 3.2±4°C for 2xCO2. More sophisticated analyses of the modern data do not provide tighter constraints (e.g. Forest et al., 2002; Knutti et al., 2002). (This large uncertainty is mainly due to the uncertainty in the aerosol forcing; it is also the main reason why the magnitude of 'global dimming' has little or no implication for climate sensitivity.)
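As a back-of-the-envelope check of these numbers (a minimal sketch only; the central values are those quoted above, and using ~4 W/m2 as the forcing for a CO2 doubling is our own assumption):

```python
# Rough instrumental-period sensitivity estimate, using the central values quoted above.
forcing_since_1850 = 1.6   # W/m2 (+/- 1)
ocean_heat_uptake = 0.75   # W/m2, corrects for the climate not being in equilibrium
temperature_change = 0.7   # degC (+/- 0.1)

realised_forcing = forcing_since_1850 - ocean_heat_uptake   # ~0.85 W/m2
sensitivity = temperature_change / realised_forcing         # ~0.8 degC per W/m2
print(round(sensitivity, 2))

# Expressed as warming for 2xCO2, assuming a doubling forcing of ~4 W/m2:
print(round(sensitivity * 4.0, 1))   # ~3.3 degC, but the +/-1 W/m2 forcing
                                     # uncertainty makes the error bar enormous
```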
What about paleoclimates? A regression analysis of data from the Vostok ice core (Lorius et al., 1990) gave an estimate of the climate sensitivity of 3-4°C. The best period for such estimates is the Last Glacial Maximum. This was a period during which the climate was relatively stable (for several thousand years, around 20,000 years ago), and one for which we have reliable estimates of the radiative forcing (albedo changes due to ice sheets and vegetation, greenhouse gas concentrations from the ice cores, and the increased atmospheric dust load) as well as of the temperature changes. A reasonable estimate of the forcings is 6.6±1.5 W/m2 (roughly half from albedo changes, slightly less than half from greenhouse gases – CO2, CH4, N2O). The global temperature change is estimated at about 5.5±0.5°C (relative to the pre-industrial climate). This gives a sensitivity of 0.8±0.2 °C/(W/m2), equivalent to ~3±1°C for 2xCO2. As we shall see, this is actually a very strong constraint.
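The same arithmetic for the glacial period, with a crude error propagation added (combining the relative errors in quadrature is our own simplification):

```python
import math

# LGM values quoted above (magnitudes, relative to pre-industrial).
lgm_forcing, lgm_forcing_err = 6.6, 1.5   # W/m2
lgm_cooling, lgm_cooling_err = 5.5, 0.5   # degC

sensitivity = lgm_cooling / lgm_forcing                     # ~0.83 degC per W/m2
rel_err = math.hypot(lgm_forcing_err / lgm_forcing,
                     lgm_cooling_err / lgm_cooling)         # ~25% relative error
print(round(sensitivity, 2), "+/-", round(sensitivity * rel_err, 2))

# For 2xCO2 (again assuming ~4 W/m2 for a doubling): ~3.3 +/- 0.8 degC,
# consistent with the ~3 +/- 1 degC quoted in the text.
print(round(sensitivity * 4.0, 1), "+/-", round(sensitivity * rel_err * 4.0, 1))
```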
Given this background, what should we make of the climateprediction.net results? They show that the 2xCO2 sensitivity across a large 'ensemble' of simulations, using many model versions with different parameter settings, ranges from 2 to 11°C. This demonstrates that it is possible to build models with extreme behaviour; whether those models are realistic is another question. To determine their realism, the models must be compared against data. Stainforth et al. compare their models against only very weak observational constraints, namely the annual means of the modern climate. Since this includes no climate variations at all (starting with the seasonal cycle), let alone a test period with a different CO2 concentration, this validation against data cannot constrain the upper limit of climate sensitivity. The fact that even model versions with very high climate sensitivities pass the validation test does not show that the Earth has such a high climate sensitivity; it merely shows that the validation test is not very selective. Our feeling is that once the validation is more complete, most of the extremely high sensitivity cases will fail (in particular on the seasonal cycle, which tests variations rather than just a mean).
An even more rigorous test of whether a climate sensitivity is realistic would be to apply a model to climates with different CO2 levels. Consider what a sensitivity twice the most likely value of 3°C, i.e. 6°C, would imply for the glacial climate. It would imply either that the glacial forcings were half of what we think, or that the temperature changes were double what we estimate. That would be extremely difficult to reconcile with the paleo-data. Obviously the situation becomes even harder to square with the paleo-data for still higher sensitivities (>6°C). We therefore consider the most important result of the Stainforth et al. study to be that the large majority of the models have a climate sensitivity between 2°C and 4°C, supporting the widely accepted range. The fact that some of the models show much higher sensitivities should not be over-interpreted.
The 'Meeting the Climate Challenge' report tried to quantify what is meant by 'dangerous' interference with the climate. All countries, including the US and Australia, have signed the Framework Convention on Climate Change, which commits them to preventing 'dangerous' interference with the climate system. Actually quantifying that threshold is rather tricky. For various reasons (some of them admittedly subjective), the authors propose that global warming of more than 2°C (relative to pre-industrial times) will become increasingly dangerous. The problem lies in limiting warming to that level given the uncertainties in climate sensitivity.
The analysis used in the report is based on a study by Baer and Athanasiou. They perform a probability calculation assuming that every sensitivity value within the IPCC range is equally likely. This is a relatively conservative assumption (since it does not include the very high sensitivities which, as we have argued, are ruled out by the paleo-data). Their results suggest that in order to avoid 'dangerous' climate change with a reasonable probability (>90%), the maximum allowable forcing is around 2 W/m2 above pre-industrial levels. This forcing corresponds to an atmospheric CO2 concentration of about 400 ppm, assuming all other forcings remain at their pre-industrial levels. The limit is to some degree subjective, but it is similar to (though slightly lower than) the level proposed by Jim Hansen.
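The correspondence between a ~2 W/m2 ceiling and ~400 ppm of CO2 can be checked with the standard simplified CO2 forcing expression (Myhre et al., 1998), assuming, as above, that all other forcings stay at their pre-industrial levels:

```python
import math

def co2_forcing(ppm, preindustrial_ppm=280.0):
    """Simplified expression for CO2 radiative forcing (Myhre et al., 1998), in W/m2."""
    return 5.35 * math.log(ppm / preindustrial_ppm)

print(round(co2_forcing(400.0), 2))   # ~1.91 W/m2, i.e. just under the ~2 W/m2 limit
```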
Note that this is not equivalent to simply reaching a CO2 level of 400 ppmv (which will very likely happen within 10 to 15 years). This is because the other forcings (chiefly aerosols) have so far collectively reduced the total forcing; the current level is about 1.6 W/m2. Whether and when we reach a total forcing of 2 W/m2 depends on how many different forcings evolve. CFCs are set to decline in the future and CH4 is currently stable (and could probably be reduced), but the growth rates of aerosols are quite uncertain.
Is there a "point of no return" or a "critical threshold" that will be crossed once the forcings exceed this level, as reported in some media? We do not believe there is any scientific basis for that idea. However, as Carlo Jaeger pointed out last year at an international symposium on the subject in Beijing, setting a limit is a sensible way of dealing collectively with a risk. A speed limit is a typical example. When we set a speed limit at 130 km/h, there is no "critical threshold"; nothing terrible happens if you drive at 140 or 150 km/h. But perhaps at 160 km/h fatalities would clearly exceed acceptable levels. Setting a limit on global warming of 2°C above the pre-industrial temperature is the official policy target of the European Union, and it is probably a sensible limit. But, as with speed limits, it may be hard to stick to.
The uncertainty in climate sensitivity will not go away any time soon, and it should therefore be factored into assessments of future climate. It is not, however, a completely free variable, and the extremely high values discussed in the media over the last two weeks are not scientifically credible.
Peter J. Wetzel says
I do have some question about the uncertainty of the forcing during the Last Glacial Maximum (LGM, around 20,000 years ago). How certain is it that ice sheet mass balances were in equilibrium during that time? Back-of-the-envelope calculations show that the latent heat absorbed by melting of ice after surges (e.g., the melting of >1500 years of ice accumulation during Dansgaard-Oeschger events — which seem to have happened in unison across the northern hemisphere, or the longer >5ky Bond cycles) can significantly contribute to the global energy balance. The LGM is defined by ice extent, which would be greatest after a major collapse and surge of the ice domes. The cooling contributed by ice melt could reduce the implied sensitivity to CO2, possibly quite a lot depending on the assumptions used for melt rate.
[Response: The reason the LGM is picked is that it was the ~2000-year period of maximum ice volume (as evidenced by the benthic O18 record). Thus the ice was neither growing nor melting significantly. Obviously things were not completely static over that period (since the Milankovitch forcing changes continuously), but the ocean currents, greenhouse gas concentrations and temperatures seem to have been stable long enough to assume a rough radiative equilibrium. You can estimate how close to equilibrium the climate must have been over such a period by assuming, for instance, that all of the energy imbalance goes into melting ice: an imbalance of, say, 0.1 W/m2 would over 2000 years melt (or grow) approximately 18 m of ice. Thus, if the change in ice volume is constrained to be less than that, a corresponding constraint follows for the imbalance. As you can see, the sustained imbalance must have been small (and is well within the uncertainties of the calculation). – gavin]
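(A quick numerical check of the ~18 m figure in the response, under the same simplifying assumption that the entire imbalance goes into melting ice:)

```python
# Sustained 0.1 W/m2 global-mean imbalance over 2000 years, all spent melting ice.
SECONDS_PER_YEAR = 3.156e7
LATENT_HEAT_OF_FUSION = 3.34e5   # J per kg of ice

energy = 0.1 * 2000 * SECONDS_PER_YEAR          # ~6.3e9 J per m2 of the Earth's surface
melted_mass = energy / LATENT_HEAT_OF_FUSION    # ~1.9e4 kg/m2
print(round(melted_mass / 1000.0, 1))           # ~18.9 m water-equivalent layer,
                                                # consistent with the ~18 m quoted
```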
Rick Watkins says
I have no knowledge of climate modeling, though after reading the observations above regarding the climate prediction project I have some general questions from the angle of those donating time and effort.
Given the limitations (weaknesses/flaws?) of the climate model and/or the handling of the data you mentioned, is this project serious climate modeling science? Are the results such that they can be used to advance climate modeling science? Are the limitations…etc. due to the fact that the project is in its early stages? Are those donating computer time, in good faith, wasting that time? Is it being squandered? Would/could this obvious mine of public interest and enthusiasm (not to mention their computer power!) be better exploited in alternative climate modeling projects?
I think these questions are relevant because I got the impression you were a bit dismissive about the entire project. Is that a fair and accurate appraisal?
[Response: No! As a project, this is very useful because it’s the kind of thing that could not have been done before, and that is always worthy in modelling. There will be further work done on these results, and there will be more interesting experiments performed. Eventually, we will get a better idea of how wide the spread of climate model parameterisations can be while still passing the stringent validation tests that state-of-the-art models must pass. My criticism mainly concerns how these preliminary results have been presented in the media. It was heavily implied that an 11 degree climate sensitivity is a real possibility, which in my opinion is not credible. While that makes for dramatic headlines, it does not contribute to a sensible discussion of the issues. So, I would encourage you to continue supporting the climateprediction.net effort and help speed along the more interesting results. – gavin]
imipak says
Three climate change related stories have reached the mainstream media (well, the BBC at any rate) in the last ten days. First the ‘International Climate Taskforce’ with their controversial claim about 400ppm being a critical threshold; this seems to have been a largely political / policy driven report. Secondly, the climateprediction.net Nature paper discussed above, which relates to actual, published, science. Thirdly, the BBC is now reporting ( http://news.bbc.co.uk/1/hi/england/oxfordshire/4218441.stm ) a WWF sponsored study, the source for which I have been unable to locate as yet. The BBC’s report emphasises (a) a 2°C temperature increase possible by 2026, and (b) the possible extinction of polar bears. (Whilst the latter is the obvious hook for the rest of the mass media, the specific trends / predictions / new model results remain vague until the actual paper’s available: can anyone offer insights into where it fits on the science <=> policy/politics continuum?) Oh, and the BBC Horizon ‘Global dimming’ story was broadcast a week ago, too, I believe. Interesting times, though the time needed to keep up with the science (as an interested layperson) is becoming rather overwhelming…
Mike Atkinson says
I think this is the WWF paper.
dave says
Please explain these numbers and “implies a sensitivity”:
I don’t understand the notation °C/W/m2. I had thought there was W/m2 and °C and some translation between them averaged over the surface area of the planet.
thanks
[Response: Sorry if that isn’t clear. The climate sensitivity is the proportionality constant between the forcings (in W/m2) and the temperature change (°C). Therefore the unit for the sensitivity is the number of degrees per unit of forcing, i.e. °C/(W/m2). So given a forcing (in this case 0.85 W/m2 = 1.6 W/m2 minus 0.75 W/m2 for the ocean heat content change) and a temperature change of 0.7°C, the sensitivity is 0.7/0.85 = ~0.8 °C/(W/m2) (leaving off the error bars for clarity). – gavin]
Ana Unruh Cohen says
Thank you for your treatment of the Meeting the Climate Challenge report and providing a link to the report. As the Associate Director for Environmental Policy at the Center for American Progress, I’ve spent the last week trying to explain to the media that the Taskforce did not say that climate collapse was 10 years away! The Taskforce’s goal was to make a set of policy recommendations that would provide fresh ideas and catalyze new actions to reduce the emission of greenhouse gases.
Given their current understanding of the science and its uncertainties, the Taskforce members feel that establishing an international goal of not exceeding a 2 degree C rise (over pre-industrial levels) in global average temperature is important for prompting action and as negotiations for the next round of climate accords for the period after the Kyoto Protocol (beyond 2012) begin. Although the report does say that achieving a concentration of 400ppm of carbon dioxide by 2100 gives the best chance (80% according to the Baer research; see footnote 14) of preventing a 2 degree C rise, the Taskforce members are well aware that other emission pathways are possible and that between now and 2100 our understanding of the climate system will improve. They intentionally did not set a concentration limit in order to provide flexibility as our understanding of the climate system improves.
Your website is invaluable. Keep up the good work.
dave says
Re: my question #6: Thanks. That was easy enough – I’m not feeling too science-challenged today! I didn’t know that data since 1850, as summarized, is essentially useless for estimating climate sensitivity given that the Earth’s radiative heat exchange is not in equilibrium over that period. Here’s a paper, Can we defuse The Global Warming Time Bomb? by James Hansen, which explains some of the physics details (see Box 1 in that paper, called Climate Forcings, Sensitivity, Response Time and Feedbacks, and Figure 4, which gives the numbers in W/m2 for the various forcings). This paper also discusses in some detail the paleoclimate results regarding the LGM that you cite.
I understand why you say that we needn’t take these very high estimates (11 degrees C) seriously. However, I see a lot of political danger in publishing these estimates. The qualifying factors for the estimates will be ignored by those with an axe to grind. And the great uncertainty range can be used to political advantage. I’m not so sure the BBC and Stainforth et al. have done us a big favor in this case, given that the peak of the probability density function (where most runs fall) is at about 3°C. For example, #7:
Of course, our chances of leveling out at 400 ppmv CO2 are indistinguishable from zero.
Finally, so how do I say “1°C” and get it to format correctly?
loveall says
If it really is that high, the Earth will be as hot as the Sun.
John Finn says
Gavin
On climate sensitivity: as a ‘quick and dirty’ estimate, why can’t we use the Stefan-Boltzmann equation? In particular, take the first derivative (i.e. dE/dT) and then calculate the sensitivity from the inverse (dT/dE), which gives
1 / (4 x Sigma x T^3)
where Sigma = 5.67 x 10^-8 and T = 288 K (current Earth temperature).
This gives a sensitivity of 0.18 deg C per W/m2 which obviously doesn’t agree with the 0.75 deg C per W/m2 – but does agree more with some modern day observations, e.g. Pinatubo.
[Response: The Earth is not a blackbody. Therefore theories that apply to blackbodies don’t work. The low sensitivity you quote is just as incredible as the high numbers discussed above (think about what it would imply for the glacial climate). The notion that the response to Pinatubo can determine equilibrium climate sensitivity is rather odd, and is not supported by evidence. Models with average sensitivities do a very good job simulating the Pinatubo eruption and its aftermath (see Soden et al., 2002; Shindell et al., 2004). -gavin]
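(For reference, the no-feedback arithmetic in the comment above works out as follows; as the response notes, it omits all feedbacks, which is the whole point of the disagreement:)

```python
# Grey-body, no-feedback sensitivity: differentiate E = sigma*T^4 to get dT/dE = 1/(4*sigma*T^3).
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0          # K, approximate global mean surface temperature

print(round(1.0 / (4.0 * SIGMA * T**3), 2))   # ~0.18 degC per W/m2, before any feedbacks
```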
John Finn says
For instance, total forcings since 1850 are around 1.6+/-1 W/m2, the temperature change is around 0.7+/-0.1 °C
Gavin
As around half the temperature rise took place BEFORE the bulk of the increase in forcing, this seems like false logic.
[Response: The forcings have been increasing since 1850 (see here), and taking the longest period possible minimises the influence of intrinsic decadal variability in the climate system. But you seem to have missed the point entirely. The calculation is being done to show that it does not provide a good constraint. Looking at just the last 30 years would be even worse. – gavin]
DrMaggie says
Re #4: Judging from information at the WWF International website (www.panda.org), I believe the WWF report referred to by e.g. BBC is to be found via this link. The text linked to in #4 above is rather an abstract for a paper to be presented at the upcoming climate conference in the UK.
Scott Robertson says
If you can believe it, this link is from Fox News re:Shrinking Glaciers. Although it is an AP story, they still ran it, without contrarian opinion.
http://www.foxnews.com/story/0,2933,145824,00.html
Dave Frame says
Hi Gavin,
I enjoyed your discussion of the Stainforth et al. paper, and agree with a lot of what you have to say. These are preliminary results, and we have yet to apply tougher constraints. Claudio Piani is currently working on a paper which attempts to provide a measure of model skill compared to recent climate (this work is in parallel to the sorts of things David Sexton has been doing at the Hadley Centre for the QUMP experiment, and similar to some of the work that has been undertaken as part of CMIP-2). That may well rule out quite a few models (not just (or maybe even) the high sensitivity ones). We also have plans to use the results from this first phase of the experiment in a fully coupled ensemble which will simulate 1950-2000, which will be a significant step along the road. [Later this year, we hope.] This will allow us to compare models to data under realistic 20th century forcings (where realistic is an ensemble in itself…). That should provide quite a good constraint on the models.
As you say, paleoclimate simulations provide a constraint on this sort of experiment, and we are proposing a “paleo-prediction.net” ensemble with Paul Valdes in a follow-up experiment currently under peer-review for funding. If I had to bet a fiver on it, I’d agree with you that the high sensitivities are unlikely, but until we’ve run the ensemble (accounting for uncertainties in the paleo record and cognisant of the possibilities of a non-constant sensitivity between now and the reasonably distant past, amongst other things) I don’t think we’re in a position to actually rule them out. Interestingly, our results are actually pretty consistent with a lot of the recent literature on sensitivity: all studies comparing simple models with recent climate change (from Andronova and Schlesinger, 2001, onwards) find that high sensitivities (more than 8 K, say) are consistent (at the few-percent level) with the observed record unless they are ruled out a priori. Now we find general circulation models displaying such sensitivities that are not significantly less consistent with current climate observations than the standard models used by the IPCC.
As coordinator of climateprediction.net (and as an author on Stainforth et al.) I groaned at some of the media coverage (London’s Metro was particularly bad, and I was a bit embarrassed by the way Sky News edited an interview I did with them – I made a whole bunch of qualifications, none of which were aired). Where they got a chance to tell the full story, I think Dave Stainforth and Myles managed to get the message across reasonably well, but where the journos were clearly focused on the 11 degree angle of the story (which is a part of it…) things got a lot more untidy. Worst of all were the second-hand articles.
Anyway, thanks for the article, which adds a lot of context that mainstream media reports usually lack. Good luck with the site (which is an excellent idea) and thanks for thinking of us in your links! [If we can contribute anything to realclimate.org please just let us know – it’s a very worthwhile endeavour.]
Cheers,
Dave Frame
climateprediction.net coordinator
[Response: Thanks for your response. The Andronova study, like the Forest and Knutti papers, uses the instrumental period to try and constrain sensitivity, and so suffers from the same problems discussed above. I agree that a priori we can’t assume that the high-end simulations will fall by the wayside once more validation is done, but that is my hunch (based on model validation that we perform at GISS and my own experience with paleo-climate modelling). The media can be a difficult beast to control, and I’ve found (through much trial and error) that as well as telling them what you want them to say, you have to be extremely clear about what you don’t want them to say. I wish you luck in the next phases of the project, and look forward to seeing the results. – gavin]
John Finn says
The forcings have been increasing since 1850 (see here)..
I’ve seen there – and the net/cumulative/total forcings are zero/negative up until around the mid-1920s. But any increase then won’t have any effect for several years surely – because of the lag you’ve discussed earlier.
Tom Huntington says
It is my understanding that the uncertainties regarding climate sensitivity to a nominal 2xCO2 forcing are primarily a function of the uncertainties in (1) future atmospheric aerosol concentrations, both sulfate-type (cooling) and black carbon-type (warming), (2) feedbacks associated with aerosol effects on the properties of clouds (e.g. will cloud droplets become more reflective?), (3) changes in surface albedo of snow & ice due to changes in temperature and deposition of mineral and black carbon particulates, and last, but arguably most significantly, (4) the intensity of the positive feedback that comes from the inevitable (?) increase in the concentration of water vapor in the atmosphere as the atmosphere warms, as indicated by the Clausius-Clapeyron equation. Your analysis of the issue of sensitivity seems to be largely restricted to the effects of aerosols.
On a separate point, my understanding of the global dimming issue is that aerosols responsible for dimming were possibly masking a higher climate sensitivity to GHG increases than would have been inferred in the absence of this dimming, suggesting that further increases in GHGs without compensating aerosol increases would result in more warming than would have been predicted prior to the acknowledgement of the global dimming phenomenon.
[Response: You may be confusing two issues here. The 2xCO2 experiment is not in any sense a prediction. It is just an experiment done to estimate the climate sensitivity, and it is not affected by uncertainties in other historical forcings (like solar or aerosols). Note that the water vapour feedback in the 2xCO2 experiments is an important part of the response. Projections for the future, and indeed hindcasts for the 20th century, do depend on the other forcings. It is the uncertainty in those forcings (particularly aerosols and their various direct and indirect effects) that prevents the historical period from constraining global climate sensitivity. The reason why global dimming really doesn’t have an implication for the climate sensitivity is precisely because of those uncertainties – look at the error bars on my back-of-the-envelope calculation in paragraph 3. – gavin]
John Finn says
A couple of questions on the following:
A reasonable estimate of the forcings is 6.6+/-1.5 W/m2 (roughly half from albedo changes, slightly less than half from greenhouse gases – CO2, CH4, N2O).
This implies a forcing of 3 W/m2 for albedo changes presumably due to additional ice/snow sheets. Is there a reference somewhere which explains how this is calculated? I understand that the current albedo of the earth is responsible for about 107 W/m2.
Also what forcing is assumed for the reduction of water vapour?
It would help if any response to this included current forcing for WV. I accept this may be variable – but some approximate range of values will do.
Also on a previous post regarding the application of the Stefan Boltzmann, you say
The Earth is not a blackbody. Therefore theories that apply to blackbodies don’t work.
I know the earth is not a true black body – but I thought the equation still held if an emissivity factor was included. As the earth’s emissivity is around 90% this will, admittedly, increase the sensitivity – but not substantially.
[Response: All forcings are calculated by changing the boundary conditions (in this case the distribution of glacial ice) and looking to see what the change in net radiation is while keeping everything else constant. Water vapour is a feedback, not a forcing (though since people keep failing to understand the distinction I will do a post on this topic at some point). S-B works only in the absence of feedbacks. The quantification of the feedbacks is the whole point of the exercise. – gavin]
Mike Atkinson says
It seems that the ice age climate constraining a 2xCO2 doubling Climate Sensitivity is dependent on the assumption that the sensitivity is linear in the entire range of CO2 values from ice age levels (much below present) to 2x preindustrial values.
You also seem to be apportioning ice age climate sensitivity among albedo, CO2 and atmospheric dust (ignoring other forcings) in a way that the errors in CO2 forcing and temperature change are correlated and then assuming that they are uncorrelated. I may be wrong about this, I’m still learning, and you may be justified in treating them as uncorrelated even if they are correlated.
Can other ice age forcings be ignored? In particular variations of the Solar constant (how I love variable constants!) and ozone.
[Response: All of these are good points. Actually the basic assumption is that the climate sensitivity is roughly constant (not linear) for warming and cooling effects. This is not exactly the case in climate models, but it’s a reasonable approximation. We could argue about the exact size of the effect, but I doubt that it would exceed the error bars quoted. The LGM forcings include the other well-mixed GHGs (CH4 and N2O) since they are constrained by ice core records. Other forcings (O3, solar, other aerosols) may play a role but are currently extremely poorly constrained (i.e. not at all). If subsequent investigation found that they were indeed important (which a priori they aren’t expected to be), then the calculation would need to be revised. It seems more likely that we have considered the biggest players. I don’t understand your point about errors in CO2 and T being correlated. The CO2 level comes from half a dozen different ice core analyses, while the temperature data come from marine sediments, pollen analyses, isotopes, corals etc. Why would the errors in the different proxies be correlated? – gavin]
[Response: also… re solar forcing and ice ages: the ice ages are fairly regular and match the timescales of orbital forcing. There is no known solar variation on this timescale. It would be odd if there just happened to be a solar cycle on exactly the 100 kyr timescale – William]
Joel Shore says
Just to follow-up on John Finn’s question (#10), if one puts in a rough value for the emissivity of the earth (whatever that might be), so one is no longer assuming it is a perfect blackbody, then does the resulting estimate for climate sensitivity correspond to what one would expect in the absence of any feedback effects?
I.e., does that provide a reasonable estimate of the direct effect of the forcing before feedbacks, or are there other reasons why it is still too simplistic even for that?
[Response: In the absence of any feedbacks that is what you’d get. But feedbacks are the whole point – if you have any kind of climate system, you get feedbacks. The interesting question is how big they are, and you can’t get that assuming Stefan-Boltzmann. – gavin]
Mike Atkinson says
About CO2 sensitivity / temperature correlation. I meant that the attribution of the forcing between Ice albedo, CO2, etc. might be dependent on the temperature. For instance would the CO2 sensitivity in °C/(W/m2) be dependent on the temperature (whether a 5 or 6°C drop)?
[Response: Yes. Sensitivity is the ratio of the temperature to the forcings – so if the temperature estimate changed, so would the implied sensitivity. I built in an uncertainty of 0.5 °C in that, but you can easily do the math if you think I underestimate the error or got the mean wrong. – gavin]
Lynn Vincentnathan says
It is sort of reassuring that 11 degrees C is far-fetched. I say “sort of,” since 5.8 degrees (which you suggest as the more scientifically founded upper possibility) may be dangerous enough. Please correct my faulty understanding, but I have read (secondary sources) that 251 million years ago it is thought there was 6 degrees global warming (from natural causes), and that this triggered massive CO2 and CH4 releases, leading to runaway global warming, and massive extinction. Is this wrong? I understand 5.8 is a possibility, not a high probability, but if such warming were to happen, could this lead to runaway global warming? Or would that require something higher, or is that whole scenario so unlikely as to be dismissed outright?
[Response: The 5.8 degrees is not a climate sensitivity; it is a projection for 2100 using the largest likely sensitivity and the fastest growth in greenhouse gases. Many of the media reports also confused the two temperature ranges. The biggest sensitivity currently for any of the state-of-the-art climate models is 4.1 deg C for 2xCO2, but it may be as high as 5 deg C. As far as information concerning the Permian-Triassic extinction event goes, it is all pretty speculative. The amount of available data is very sparse, and so while these kinds of ‘deep time’ paleoclimate questions are a lot of fun, the lack of detail means that they have limited implications for today’s climate. Runaway greenhouse warming can occur under really extreme conditions (Venus at present, Earth in maybe 5 billion years’ time when the sun becomes a red giant), but is not a possibility for the next hundred years. I find it rather an overused term in climate discussions. – gavin]
Peter J. Wetzel says
Just to quickly interject a little help for Gavin: the Stefan-Boltzmann argument applies to the “top of atmosphere” exchange and balance of radiation. But all the really meaty “good stuff” regarding feedbacks and climate change happens between the top of the atmosphere and the surface (where we are attempting to define sensitivities).
How does CO2 increase affect the water vapor exchange, the cloud amount and its incredibly complex feedbacks involving aerosols, precipitation efficiency, and the resultant radiation balance at the 1-2 meter height thermometer shelters where humanity defines his/her climate? How do the complex feedbacks change atmospheric circulation patterns, and the interaction of these patterns to changes in ice cap topography (e.g. at the LGM)? How do these feedbacks and the atmospheric circulation changes interact to affect the ocean circulation, including shifts in location of deep water formation and cold water upwelling, depth of the thermocline, etc. etc.?
The problem is mighty complex. Science is doing its best to grapple with all this complexity — to understand it, and to model it under the constraints of finite computational power. “Earth system modeling” is a valuable tool for testing the sensitivities. But fundamental research into understanding the processes that drive the “Earth system” is also still rapidly advancing.
The “best answer” science can provide today is sure to be superseded by a much “better answer” quickly. Hang loose and keep an eye on this site! :~)
P.S. A lot of reseach energy is being devoted to the study of Methane Clathrates — a huge source of greenhouse gases which could be released from the ocean if the thermocline (the buoyant stable layer of warm water which overlies the near-freezing deep ocean) dropped in depth considerably (due to GHG warming), or especially if the deep ocean waters were warmed by very, very extreme changes from the current climate, such that deep water temperatures no longer hovered within 4C of freezing, but warmed to something like 18C. It would seem to be required that very drastic warming of the deep ocean is the only way that this source of Methane would be released and trigger a “runaway” greenhouse warming.
dave says
Re #21 The End Permian Extinction
I think it is a mistake to casually dismiss recent findings on this extinction as “all pretty speculative”. More and more data is coming to light on this event. There are complete sediment records (especially in China) that have allowed much more thorough inspection of events at the Permo-Triassic boundary. Take a look at How to kill (almost) all life, from which I quote:
The oxygen isotope record does indeed indicate a rise of about 6 degrees C just before the worst happened. Global warming brought on by increased atmospheric CO2 from volcanism is thought to be the cause of this extinction. A theory about an impact at the time has little support. Furthermore, the large negative shift in the C13 isotope at the time can only be explained by the release of light carbon from methane clathrates (#22). A new paper by Ward et al. (Science, 20 January 2005) sheds more light on what happened. Oxygen levels dropped, the oceans became stratified, and a critical warming threshold was exceeded.
I’m not saying we’re on the verge of another extinction like the end Permian. However, with these big numbers being thrown around (for example 8K in Dave Frame’s post #14), it seems prudent to remember what may have happened 251 mya when over 90% of Earth’s species went extinct.
John Finn says
[Response: All forcings are calculated by changing the boundary conditions (in this case the distribution of glacial ice, and looking to see what the change in net radiation is while keeping everything else constant. Water vapour is a feedback, not a forcing (though since people keep failing to understand the distinction I will do a post on this topic at some point). S-B works only in the absence of feedbacks. The quantification of the feedbacks is the whole point of the exercise. – gavin]
Right. So can we take it from this that the climate forcing from feedbacks is far and away the most dominant factor – despite the fact that the feedbacks haven’t been accurately “quantified”? We can wait for the post on water vapour and the feedback effect for a response to this. Also, could you include a comment on the NASA report (around March 2004) by Minschwaner?? which, from observations, suggested that “we may be over-estimating feedbacks”.
But a question for now.
The 0.5 deg C which is still “in the pipeline” – when is this going to become evident? Put it this way: if atmospheric levels of CO2 were fixed at today’s level (380 ppm) indefinitely, when would we see global temperatures 0.5 deg C higher than today?
John Finn says
Just to quickly interject a little help for Gavin: the Stefan-Boltzmann argument applies to the “top of atmosphere” exchange and balance of radiation. But all the really meaty “good stuff” regarding feedbacks and climate change happens between the top of the atmosphere and the surface (where we are attempting to define sensitivities).
Peter
This is fair enough. But surely S-B can be applied at the surface as well. Around 240 W/m2 is emitted from the “top of the atmosphere” at around 255 K (as S-B confirms) – but around 390 W/m2 at 288 K is emitted from the surface. The difference is due, as you say, to the “meaty bit in the middle”. A rough calculation here would suggest a sensitivity of about 0.22 deg C. BUT after a long drawn-out process we’ve established not just that feedbacks are not included (as Gavin seems to think I’ve misunderstood) but the actual magnitude of the feedbacks.
These seem to be around 2-3 times the direct forcing from CO2 alone.
It would help if someone could confirm this.
Jeffrey Davis says
In reference to dave’s comment (23), Ward’s time frame for the Permian extinction is 10 million years. Human beings won’t be human beings in 10 million years regardless of CO2 levels.
Lynn Vincentnathan says
Re #21, thanks for the clarification. I guess what I meant by “runaway global warming” was, could a high end warming scenario (using the biggest likely sensitivity and fastest growth in GHGs), eventually trigger “natural” positive feedback loops of GHG emissions (not referring to people using their ACs more due to the warming & thus emitting more GHGs causing more warming), even if people reduced their GHG emissions, say by 90%, in which the emissions from nature & the warming would continue to increase for at least some time period (I understand we are too far from the sun to have a permanent and extreme “Venus effect,” unless the sun becomes a lot hotter/brighter). And I’m not necessarily looking at only 2100, but simply “in the future,” by 2200 or 2300, or whatever.
As mentioned in an earlier post, as a layperson, I am more interested in avoiding false negatives, so I don’t need 95, 90, or even 20% certainty to be thinking about this. For example, I might take an umbrella if only a 20% chance of rain has been predicted.
And regarding an article I read in 2004 about 2 years of acceleration in atmospheric CO2 concentrations. What if this were to become a trend and continue, would this indicate the positive feedback loops are becoming more prominent, and the negative feedback loops less prominent?
I know most of you on this site, both the questioners & experts, are way above my knowledge on the subject, but I do appreciate your efforts to explain this to layperson like myself. I do talk about the subject, and I would not want to be way off track.
dave says
Re: dangerous interference
As of today, the “Avoiding Dangerous Climate Change” conference is in session at the Hadley Centre. Here is the program for the conference. Many of the papers being presented there are available online.
Peter J. Wetzel says
For John Finn:
The S-B relationship is fundamental to defining the climate of the Earth at every level of the atmosphere from the top to the surface. At the crux of climate models, you will find the integration of that equation through all atmospheric layers, accounting for the emission contributions of all the radiatively active matter in those layers.
You’ve used the S-B formula to calculate dQ/dT at typical terrestrial temperatures. Your calculation describes how much difference in infrared radiational heating, dQ, results from a given increment of temperature change, assuming emissivity and everything else remain fixed.
CO2 sensitivity is defined by doing the integration of the S-B equation through an ensemble of typical present-day atmospheric conditions twice, first with CO2 at 280ppm, then with it at 560ppm. Because CO2 makes the atmosphere more opaque to infrared radiation, and because the atmosphere gets colder as you get higher, the “effective radiation temperature” of the infrared radiation leaving the earth is made colder by increasing CO2 (fewer Watts per square meter of infrared radiation leave the top of the atmosphere). It is the reduced amount of radiation leaving the top of the atmosphere that changes the earth’s balance of heat, and therefore defines the “direct radiative forcing” caused by doubling CO2.
Now it should be obvious that this pair of calculations using a fixed, present-day atmosphere with 1xCO2 and 2xCO2 does not account for the feedbacks. If 4W/m2 less heat escapes the top of the atmosphere but the same amount of heat is still coming in from the sun, some physical change must occur in order to restore the energy balance. The most intuitively reasonable thing that could happen is that the atmosphere would warm up until 4 more W/m2 are emitted by the S-B law. But how will this warming be distributed through the atmosphere? This is where the complexity of the feedbacks begins.
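(A minimal illustration of that last step, using typical textbook numbers rather than anything from the comment itself: ~4 W/m2 for a CO2 doubling and an effective emission temperature of ~255 K.)

```python
# Warming of the effective emission temperature needed to radiate away an extra
# ~4 W/m2 with everything else held fixed, i.e. the response before any feedbacks.
SIGMA = 5.67e-8   # W m^-2 K^-4
T_eff = 255.0     # K, effective emission temperature
delta_F = 4.0     # W/m2, approximate 2xCO2 forcing

print(round(delta_F / (4.0 * SIGMA * T_eff**3), 2))   # ~1.06 K before feedbacks
```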
The temperature sensitivity discussion is all about how the atmosphere, the oceans, the biosphere and the cryosphere adjust to this forcing change. The “degrees of freedom” and the range of time scales involved in the adjustment process are immense, and are not all well understood. And many things besides just temperature end up adjusting.
I apologize if this was all too basic. Maybe I missed the point.
But I’d like to inject one more facet of the discussion. The uncontrolled experiment which we are performing and attempting to observe and model involves many more anthropogenic forcings than just CO2. The infamous IPCC Figure 6.6 actually just skims the surface of the impacts humanity is imposing on Earth. The figure is infamous from my perspective because it conveys an impression of much more certainty than I believe a responsible “flagship” figure, featured prominently in the executive summary, should do.
The big, big, HUGE uncertainty bar that you *do not* see there, belonging to the cloud feedbacks, is the most egregious issue. Beyond that, there are other major missing components, left out because they are the most uncertain, and certainly not because they can be expected to have little effect. Of these others, the most important ones are the 2nd aerosol indirect effect, and land-use change effects beyond albedo — especially the infrared emission effect. Both of these just happen to be very likely to be net cooling effects.
Pat N: Self-only says
I would like to see discussion about the most recent period of rapid global warming … leading to the Paleocene Eocene Thermal Maximum (PETM) about 55 million years ago … including differences and similarities to the climate projections for this century … and beyond.
There is evidence that a large area of dense forests “rapidly” became swamp … where the northern Great Plains now lie.
Evidence includes fossils of subtropical/tropical conditions existing 50 million years ago in the lakes of the area now known as southwest Wyoming, northeast Utah and northwest Colorado.
“Diplomystus” fossil fish from southwest Wyoming can be viewed here at realclimate by clicking my name (a hotlink) that follows…
dave says
Re: #26 and the end-Permian (#23)
I know this is a comment thread but the comment #26 distorts the science, so let’s get that right.
The period of study by Ward et al. is 10 million years. That is not the time-scale of the extinction. I haven’t gotten a look at that paper yet, although the title is “Abrupt and Gradual extinction….” The actual extinction took place at the boundary (sediment beds 25/26 in the paper I cited) synchronous with the Siberian trap volcanism (251.1 +/- 0.3 mya), on a scale which is three orders of magnitude shorter (counted in tens of thousands of years) than the 10 million years you cite. Measurements from 251 mya cannot be more precise. Look at the paper I cited in #23 and “Rapid and synchronous collapse of marine and terrestrial ecosystems during the end-Permian biotic crisis” by Richard J. Twitchett, abstract here. Sorry, not online. The time-frame given there is 10 to 60 kyr. At this level of resolution, nothing further can be said about time-scales.
So, your comment does not affect mine at all. Obviously, what was happening at the end-Permian is very different than what is going on now. I merely said it is prudent to keep these kind of results in mind and would further add that the Earth’s systems may have surprises in store for us. If you require further references, I will be pleased to provide them.
Jan Hollan says
Regarding the correspondence of temperature and radiative (S-B) fluxes, I made
a scheme back in 1999, trying to visualize the problem. With newer numbers, it’s now available as warmin_ppf within http://amper.ped.muni.cz/gw/articles/ (there is the source PostScript I wrote, “easy” to edit, and pdf/png made from it). I left the current forcing at 3 W/m2, even if it may be too much (I did not want to change the greenhouse scheme warmin_en). The equivalent (bb) temperatures of downward atmospheric radiation are 1.8, 3.05 and 6.2 °C.
jenik
DrMaggie says
Re #30:
While it of course is very interesting to find out more about the climate in different parts of the world during ancient periods of Earth’s development, I feel that it might be quite important to remember that e.g. 55 million years BP, the distribution of land masses across the globe was quite different from what it is now. (Some interesting illustrations of continental drift can be found here.)
Could one of the experts comment on how this would affect the balance of the various climate forcings? I guess that e.g. the overall albedo could be affected, as well as the solar heating of the oceans and the land surfaces (continents were placed at different latitudes). Also the potential pathways of oceanic circulation patterns would be affected, and the presence or absence of large mountain ranges would impact on wind and precipitation patterns…
John Finn says
Re #29
Peter
Many thanks for your response. No need to apologise for being “too basic” – it helps to confirm my understanding.
My posts on S-B were intended to provoke discussion on issues similar to the ones you raised in your post.
Hopefully realclimate will do an article some time.
Thanks again
Tom Huntington says
For Peter Wetzl
If the second indirect effect of aerosols can be simplified as
“the development of precipitation and thus the cloud liquid-water path, lifetime of individual clouds and the consequent geographic extent of cloudiness”
quoted from the IPCC TAR
http://www.grida.no/climate/ipcc_tar/wg1/185.htm
and the evidence now points towards increasing evaporation (at least over the oceans) and precipitation (globally), can you explain how this is “very likely a net cooling effect” as described in Comment #29, when it could also be argued that this is consistent with an increase in lower atmospheric water vapor content (this particular feedback resulting in warming)? What is the best guess of the experts regarding the balance of the cooling versus warming effects of increasing clouds/water vapor?
Will increasing cloudiness necessarily result in net cooling? What is the balance of the cooling effects of reflectivity versus warming effects of insulation at night?
Can you suggest an updated revision to IPCC Figure 6.6 that reflect advances in understanding since its publication in 2001? I would also love to see a comparable figure that showed the feedbacks (like water vapor) in the same watts per square meter format.
[Response: The most up-to-date estimates that we have made to the forcings diagram are available: http://www.giss.nasa.gov/data/simodel/efficacy/ and in particular fig 28. -gavin]
Jeffrey Davis says
Responding to #31
My reference to the 10 million year time frame and not-being-human wasn’t because of extinction but because of evolution. 10 million years ago the hominids were very different from the homo sapiens sapiens that emerged around 40,000 years ago. Lashings of apologies for the confusion.
Pat N: Self-only says
Re: 33, 30
The continents 55 million years ago were in roughly the same places as they are now. The main difference was the gap between North America and South America. I think the effect of no gap on global climate is minor. As sea level rises again, perhaps the gap would reappear?
John Bolduc says
There is a paper posted on the program for the “Avoiding Dangerous Climate Change” conference currently taking place in England titled “Why Delaying Climate Action is a Gamble” by Kallbekken & Rive of the CICERO Center for Int’l Climate & Envt Research. The paper is at http://www.stabilisation2005.com/programme.html on the Day 3 agenda. The paper appears to conclude that if we wait 20 years to begin reducing GHG emissions, assuming a modest amount of mitigation in the short term, we will have to reduce emissions at a 3 to 7 times greater rate than if we start now in order to keep warming to a 3 degree C increase around 2100. The authors note that it’s a gamble because if the political feasibility of acting doesn’t improve and the costs don’t decrease, it will be that much harder to take action. I’m wondering what other climate scientists think about making such a forecast? I assume the paper is not peer reviewed yet.
Peter J. Wetzel says
Response to Tom Huntington, #35:
The second aerosol indirect effect is more likely to cause cooling than warming because, to the best current knowledge, high clouds are more likely to warm climate, whereas low clouds are more likely to cool. However high clouds are much less likely to produce precipitation than low clouds. Thus second aerosol indirect effects predominantly operate on low clouds, increasing the endurance of cloud liquid water in these clouds.
The second aerosol indirect effect can, *under some circumstances* increase cloud lifetimes. But the more general statement that I used, “increasing the endurance of cloud liquid water”, does not always translate into longer cloud lifetimes, particularly in the widespread areas of nearly overcast marine stratocumulus which dominate considerable areas of the globe. In the core of these areas, there is little hope of increasing cloud lifetime since the cloud cover is almost continuous already. It is only on the periphery of the large marine stratocumulus cloud banks where lifetime might hope to be extended. And then it is only on the downwind side of these cloud banks where clouds are dissipating (rather than on the side where they are forming), where the second effect might be expected to increase cloud extent due to increased cloud lifetime.
Increasing low cloud extent is expected to cool climate, based on our best understanding at present. Increasing the endurance of cloud liquid water within cloud of fixed extent will also cool, due to the increased albedo of such clouds.
You go on to interject the projected increase in atmospheric water vapor content into the discussion. The second aerosol indirect effect may play some role in increasing the atmospheric *total* water content — note that cloud droplets are liquid water, not water vapor. There has been discussion here about the potential for aerosols to increase the average residence time of atmospheric water.
However when you comment that: “this is consistent with an increase in lower atmospheric water vapor content (this particular feedback resulting in warming)” you are straying into an entirely different feedback. Whether consistent or not, the putative increase in water vapor due to GHG warming has a separate hypothesized cause (rooted simply in the Clausius-Clapeyron relationship). There is no compelling need to discuss the two effects as though they were inextricably linked (There are undoubtedly second order links (feedbacks) which connect the arguments, but the first order processes are easily separable.)
Therefore when you ask about the general effects of cloud feedbacks on climate, you have moved well beyond the scope of a discussion about aerosol second indirect effects. But to answer those questions, the results of climate model work reported in the
IPCC TAR (see Fig. 7.2) show huge differences among models in the overall cloud radiative forcing. Five models predict net cooling, five predict net warming due to cloud feedbacks. The uncertainty range for net radiation effects stretches from -1.1 to +2.9 W/m2. And this is one of the most notable missing uncertainties in the infamous IPCC TAR Fig 6.6, which I complained about in #29.
And along those lines, I’d like to end by complaining about the lack of uncertainty bars in the figure which Gavin presented in his response to your post. My complaint is not about the most probable value of the various forcings, direct and indirect, but about irresponsibly failing to convey, by omission, a proper sense of the underlying uncertainty in those numbers.
Observer says
Thanks..
This one, day 2, is very instructive
http://www.stabilisation2005.com/day1/leemans.pdf
John Finn says
I’m sorry, guys, but it’s time to be brutal. I’ve had it with climate and weather predictions from computer models. Last Sunday, Hadley Weather Centre – the UK’s esteemed centre of excellence for climate research – began to issue reports through the media about an icy arctic blast we could expect at the end of the week. This would all start on Friday (to-day) with heavy rain from the NW which would drag cold air in behind it. Every night this week the story was the same – prepare for a real taste of winter. That is, until last night, when the “icy blast” became “temperatures drop back by one or two degrees”. It wouldn’t have been so bad, but I was going to take Friday off work; I thought better of it in light of the heavy rain/sleet/cold we could expect. I actually had lunch to-day sitting OUTSIDE in shirt sleeves. I would have been in the garden all day had it not been for the completely useless, incompetent jerks at Hadley.
Don’t bother going on about this being weather and not climate. I know the difference – but do the computer models? At the end of the day the models are just mathematical representations of climatic (or meteorological) processes as they are understood by the people who develop them. In any case, for the past 2 years (at least) Hadley have predicted the mean global temperature at the start of the year. Now this doesn’t involve giving the forecast for a given day or week – it’s supposed to be an indicator of the way climate is changing. They based their predictions on 1961-90 anomalies (as per CRU) so actual numbers will be a bit different to GISS. Anyway, for the past 15 years the CRU record has shown global temperatures to be between 0.3 and 0.6 deg above the 1961-90 average, so a reasonable guess might be the mid point, i.e. 0.45 deg. Actually, if you’d guessed 0.45 you’d have been remarkably close – just 0.02 deg out in 2003 and spot on in 2004. So what about the Hadley computer model (the HAD….something … 3CM.. or other)? It managed to over-estimate 2003 by 0.08 deg and 2004 by 0.06 deg. Bearing in mind the likely range of the results, this was a woeful performance. Still, if we take the average and extrapolate over the next 100 years, they’re only about 7 degrees adrift.
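For the record, the “7 degrees adrift” quip only works if the two quoted annual errors are assumed to repeat and add up linearly every year for a century, which is not how independent annual errors (or climate projections) actually behave. A quick, purely illustrative check of that arithmetic:

```python
# Arithmetic behind the "7 degrees adrift" remark: it assumes the annual
# forecast error simply repeats and accumulates linearly for 100 years.
errors = [0.08, 0.06]                       # deg C, the two quoted annual errors
mean_error = sum(errors) / len(errors)      # 0.07 deg C
print(mean_error * 100)                     # 7.0 deg C, only under linear accumulation
```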
Once again sorry – but you still don’t know anywhere near enough to be able to provide the slightest clue as to how climate might behave in the next 100, 50 – or even 10 years.
PS I’ve got Monday off now – so it’ll probably teem it down then.
[Response: yes, you’ve confused weather and climate. Stating that you know you’ve done so doesn’t help your case, it just makes it worse. Ditto the forecasts of temperature for a year ahead – William]
John Finn says
yes, you’ve confused weather and climate……. – William
No I haven’t. I’m pointing out that both are predicted/projected using mathematical models – both are sensitive to small changes in initial conditions – neither can cope with unforeseen events (i.e. the ‘butterfly effect’) – and both are capable of being massively in error.
[Response: the difference is that climate *isn’t* sensitive to small perturbations in its initial conditions (or at least, it is believed not to be). As a rough analogy, weather prediction is an initial value problem; climate is a boundary value problem. The climate of climate models is provably not dependent on the initial conditions – William]
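A toy numerical illustration of this initial-value versus boundary-value distinction, using the Lorenz system as a stand-in for “weather” (this is only an analogy, not a climate model): two runs that start almost identically diverge completely within a short time, yet their long-run statistics agree closely.

```python
# Lorenz-system analogy: individual trajectories ("weather") are sensitive to
# initial conditions, but long-run averages ("climate") are not.
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def run(initial, n_steps=200_000):
    s = np.array(initial, dtype=float)
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        s = rk4_step(s)
        traj[i] = s
    return traj

a = run([1.0, 1.0, 1.0])
b = run([1.0, 1.0, 1.0 + 1e-6])        # tiny perturbation of the initial state

print(a[4000], b[4000])                # soon completely different ("weather")
print(a.mean(axis=0), b.mean(axis=0))  # long-run means nearly identical ("climate")
```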
John Finn says
William
I notice that you are involved in climate modelling. I have a question about the Mt Pinatubo eruption.
The claim is/was that the cooling effect on the earth due to the eruption was predicted (or simulated) by the models. OK – I’ll buy that. My question is – how much of a cooling effect was there? I’ve seen 0.5 degrees quoted – but what does this mean – is it for one day? one month? or the average for a year?
I understand that the Pinatubo effect is thought to be spread over 3 separate years, i.e. 1992-94. You must, therefore, be able to determine what the temperature anomalies (w.r.t. the 1951-80 mean, as per GISS) would have been for those 3 years if Pinatubo had not taken place. I mean, how else would you know what the cooling was if you didn’t know what the temperatures would have been? I’d be grateful for any information you have on that.
One final point, William. I think you ought to change the photo at your work page link. The family photo with the 2 kids is much better.
[Response: I am involved in modelling, but so is Gavin, and he has a much better idea about Pinatubo than me. http://maui.net/~jstark/nasa.html says the effect was predicted and observed at 0.3°C globally; that appears to be roughly consistent with the record: http://www.cru.uea.ac.uk/cru/info/warming/. As for the photo – work wouldn’t let me get away with it – William]
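On the “how would you know what temperatures would have been” question: the standard approach is to run the same model twice, with and without the volcanic aerosol forcing, and difference the two. A minimal sketch of that bookkeeping follows; all the anomaly values are made-up placeholders, not real model output.

```python
# Attributing Pinatubo cooling by differencing paired model runs.
# All anomaly values below are invented placeholders, not real model output.
anomaly_with_eruption = {1992: 0.10, 1993: 0.15, 1994: 0.25}   # deg C, hypothetical
anomaly_no_eruption   = {1992: 0.35, 1993: 0.40, 1994: 0.40}   # deg C, hypothetical

for year in (1992, 1993, 1994):
    cooling = anomaly_no_eruption[year] - anomaly_with_eruption[year]
    print(f"{year}: volcanic cooling of about {cooling:.2f} deg C")
```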
stephan harrison says
Response to John Finn (number 43). An incomplete analogy comes from the sea. I doubt whether any computer model can predict the movement of a grain of beach sand moved around by a breaking wave, since the Navier-Stokes equations are rather difficult to solve. But we can predict the tides a long way into the future because we understand the dominant forcings. Scale matters.
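A hedged sketch of why tidal prediction works so far ahead: sea level at a given port is well approximated by a sum of harmonic constituents whose frequencies are fixed by known astronomical forcings. The amplitudes and phases below are invented placeholders, not constants for any real port.

```python
# Tide prediction as a harmonic sum over known forcing frequencies.
# The constituent values are hypothetical, for illustration only.
import math

constituents = [            # (amplitude m, period h, phase rad) - hypothetical
    (1.00, 12.42, 0.0),     # semi-diurnal lunar (M2-like period)
    (0.45, 12.00, 1.2),     # semi-diurnal solar (S2-like period)
    (0.20, 25.82, 0.5),     # diurnal lunar (O1-like period)
]

def tide_height(t_hours):
    """Predicted sea-level anomaly (m) at time t, from the harmonic sum."""
    return sum(amp * math.cos(2 * math.pi * t_hours / period + phase)
               for amp, period, phase in constituents)

print(tide_height(24 * 365 * 10))   # still perfectly well defined a decade ahead
```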
Eli Rabett says
WRT #45 and #46
See
Science, Vol 296, Issue 5568, 727-730 , 26 April 2002
Global Cooling After the Eruption of Mount Pinatubo: A Test of Climate Feedback by Water Vapor
Brian J. Soden, Richard T. Wetherald, Georgiy L. Stenchikov and Alan Robock
The sensitivity of Earth’s climate to an external radiative forcing depends critically on the response of water vapor. We use the global cooling and drying of the atmosphere that was observed after the eruption of Mount Pinatubo to test model predictions of the climate feedback from water vapor. Here, we first highlight the success of the model in reproducing the observed drying after the volcanic eruption. Then, by comparing model simulations with and without water vapor feedback, we demonstrate the importance of the atmospheric drying in amplifying the temperature change and show that, without the strong positive feedback from water vapor, the model is unable to reproduce the observed cooling. These results provide quantitative evidence of the reliability of water vapor feedback in current climate models, which is crucial to their use for global warming projections.
Also
Anthony D. Del Genio, “The Dust Settles on Water Vapor Feedback”, Science 2002, 296: 665-666 (Perspectives).
A search on Pinatubo in Science pulls up other interesting articles
Jeffrey Davis says
When you bake a raisin bread, you’ve got no idea where the individual raisins will end up, but at the end of an hour you still have a loaf of raisin bread. Every time.
You’re treating errors in weather forecasting as cumulative. They aren’t.
John Finn says
When you bake a raisin bread, you’ve got no idea where the individual raisins will end up, but at the end of an hour you still have a loaf of raisin bread. Every time.
Yes – but you know how many raisins you put in. In other words you have an accurate measurement of all the ingredients – not so with the weather … or climate – ask the modellers!
You’re treating errors in weather forecasting as cumulative. They aren’t.
The weather forecast was a bit tongue-in-cheek – a wind-up – not intended to be taken too seriously. However, I don’t necessarily agree that the criticism of the annual predictions is invalid.
I’ll be happy for someone to clarify this, but don’t the models run through a succession of time cycles for a number of defined ‘columns’ of the atmosphere – so isn’t there a chance that a small initial error could accumulate?
[Response: As William pointed out, weather is an initial value problem where the chaotic nature of the dynamics causes slightly different initial conditions to lead to widely divergent solutions after only a few weeks. Climate is a boundary value problem that looks at the statistics of the weather and tries to average over all of the possible paths. The ‘number of raisins’ is the energy coming in from the sun. Annual or seasonal climate forecasts are difficult because they rely on the oceans behaving predictably – which sometimes they do (e.g. during ENSO events), but mostly they are part of the coupled climate system and so have ‘weather’ as well. This is why climate forecasts are mostly for much longer periods, over which those variations too can be averaged out. – gavin]
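A small synthetic illustration of “averaging over all of the possible paths”: every ensemble member has its own unpredictable weather, but the ensemble mean (or a long-period average) recovers the forced signal. The trend and noise levels below are arbitrary assumptions, purely for illustration.

```python
# Synthetic ensemble: forced trend + random "weather" in each member.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100)
forced = 0.02 * years                                  # assumed forced trend, deg C

# 20 members with the same forcing but different internal variability
members = [forced + 0.2 * np.cumsum(rng.normal(0.0, 0.05, years.size))
           for _ in range(20)]
ensemble_mean = np.mean(members, axis=0)

print(round(members[0][-1], 2))        # one member's final year: trend + weather noise
print(round(ensemble_mean[-1], 2))     # ensemble mean: close to the forced 1.98 deg C
```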
Eli Rabett says
No one counts the raisins that you add to raisin bread. At best you have an estimate, assuming you were anal enough to count the number of raisins in a cup a few times. OTOH after the bread is baked you can observe the average raisin density and comment on how cheap or generous the baker was. While this does not lead to wildly divergent raisin bread (you would have to add a lot of yeast for that)……