Guest post by Spencer R. Weart, American Institute of Physics (French translation by Jean-Denis Vauguet)
I often receive emails from scientifically trained people seeking a simple answer to how future global warming from greenhouse gas emissions is calculated. “What are the physics equations and the data on the gases needed to predict the temperature rise with confidence?” The question is all the more natural in that many public accounts of the greenhouse effect make it sound like fairly elementary physics. These people, typically senior engineers, get suspicious when the experts seem to evade their questions. Some try to work out an answer for themselves (Lord Monckton, for example) and complain that those very experts dismiss them, rejecting their fine logical edifices.
Prove that global warming is a real danger in a page or two of equations: this request from engineers seems reasonable enough, and it comes with a long history, one that shows how the construction of climate models ended up betraying the initial hope for a more or less simple answer.
The most direct way to calculate the Earth’s surface temperature would be to treat the atmosphere as a single uniform layer, like a pane of glass suspended above the surface (which is more or less what you see in the simplistic explanations of the greenhouse effect found here and there). But the equations for that kind of model give a temperature that is not even in the right ballpark for the warming. Nor can you work with a global average, because averaging inevitably destroys the important differences between heat transfer in a dense, warm, humid atmosphere and in a thin, cold, dry one. As early as the nineteenth century, physicists moved on to a one-dimensional model: assuming the atmosphere has the same vertical structure everywhere on the globe, they studied how radiation is transmitted or absorbed as it travels up or down through a column of air, from the Earth’s surface to the top of the atmosphere. This is the study of radiative transfer, a branch of theory as elegant as it is difficult. It describes how sunlight passes through each layer of the atmosphere down to the surface, how the thermal energy re-emitted by the surface heats those layers in turn, and how that energy is exchanged among the layers, reflected, or finally escapes to space.
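To make the “pane of glass” point concrete, here is a back-of-the-envelope sketch added for illustration (my own numbers, using assumed round values for the solar constant and the planetary albedo, not figures from the article). Balancing absorbed sunlight against Stefan-Boltzmann emission with no infrared-absorbing atmosphere at all gives T_e = [ S (1 - A) / (4 sigma) ]^(1/4) ≈ [ 1361 × 0.7 / (4 × 5.67×10^-8) ]^(1/4) K ≈ 255 K, or about -18 °C. Adding a single glass-pane layer that absorbs all the outgoing infrared raises the surface value to 2^(1/4) × 255 K ≈ 303 K, while the observed global mean is near 288 K, so the one-layer picture overshoots by some 15 degrees: useless for the degree-scale questions that matter here.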
When students learn physics, they are taught simple systems: a few powerful laws that give precise results, with a page or two of equations enough to cover the whole thing. Few professors emphasize, or even mention, that these systems are drawn from much larger and far less tractable families of problems. The one-dimensional model of the atmosphere, for example, cannot be solved in a single elegant page of mathematics. You have to divide the column of air into a set of layers and carry out the calculations, by hand or by computer, for each of them. To complicate matters, carbon dioxide and water vapor (the two main greenhouse gases) absorb and scatter differently depending on the wavelength of the radiation, so the calculations inevitably become repetitive, since the spectrum must also be broken into bands. A toy version of the layer bookkeeping is sketched below.
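For readers who want to see that bookkeeping, here is a deliberately toy sketch in Python (my own illustration, not code from the article; the SOLAR and ALBEDO values are assumed round numbers). It treats the atmosphere as N “grey” layers, each transparent to sunlight and fully opaque to infrared, in pure radiative equilibrium, and ignores everything (wavelength dependence, partial absorption, convection, clouds) that makes the real calculation so laborious.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0     # solar constant, W m^-2 (assumed round value)
ALBEDO = 0.3       # planetary albedo (assumed round value)

def surface_temperature(n_layers):
    """Surface temperature (K) under n fully absorbing infrared layers."""
    absorbed = SOLAR * (1.0 - ALBEDO) / 4.0      # globally averaged absorbed sunlight
    t_effective = (absorbed / SIGMA) ** 0.25     # bare-rock emission temperature, about 255 K
    return (n_layers + 1) ** 0.25 * t_effective  # each opaque layer adds one "blanket"

for n in range(4):
    print(n, "layers:", round(surface_temperature(n)), "K")

With these assumed inputs it prints roughly 255 K for no layers, 303 K for one (the pane-of-glass case above), 335 K for two, and so on; the closed-form shortcut behind it, T_surface = (N+1)^(1/4) × T_effective, is a standard textbook result for this grey-gas idealization, and it already hints at why no single global number can stand in for the real problem.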
Not until the 1950s did scientists have good data on infrared emission and absorption, and computers powerful enough to handle the huge amount of calculation required. Gilbert N. Plass used the data and computers to demonstrate that adding carbon dioxide to a column of air would raise the surface temperature; but nobody believed the value he calculated (2.5 degrees warmer if the CO2 level doubled). Critics pointed out that he had neglected a number of essential effects. To begin with, if the temperature starts to rise, the atmosphere will hold more water vapor, which produces its own greenhouse effect and drives the temperature up further. But at the same time, wouldn’t more water vapor also mean more clouds, perhaps shading the Earth like a parasol? Neither Plass nor anyone before him had tried to calculate the effect on cloud formation (for details and references, see this site).
Fritz Möller then attempted a novel calculation that took into account the increase of absolute humidity with temperature. If only he hadn’t! His calculations showed an enormous feedback (a positive feedback): in response to the added humidity, the water vapor duly produced its greenhouse effect and the temperature shot up… so much so that the model could deliver almost any large value at all! This strange result prompted Syukuro Manabe to develop a somewhat more realistic one-dimensional model. He introduced the effect of the rising air currents that carry heat up from the surface, something almost none of the earlier calculations had managed to take into account. It then became clear why the temperature in Möller’s estimate ran away: he had simply failed to allow for the fact that warm air would rise. Manabe also worked out a rough estimate of the effect of clouds. By 1967, in collaboration with Richard Wetherald, he was ready to make a prediction for the case where the CO2 level doubled. Their model likewise gave a positive temperature response, of about two degrees. This was almost certainly the first paper that made many scientists realize they might have to start thinking seriously about global warming. The numerical calculation became, so to speak, a “proof of principle.”
It would be rather awkward, though, to hand this Manabe-Wetherald paper to our engineer in search of a demonstration that global warming is a real problem: the paper gives only a brief overview of a long and complex set of calculations that took place, so to speak, behind the scenes. Besides, nobody at the time of its publication, or since, would have put much weight on the particular estimates it produced. Many factors were still missing from the model. For example, it was only in the 1970s that scientists realized they had to consider how smoke, dust, and the various other aerosols produced by human activity interact with radiation, and how those aerosols affect cloud formation. And so on, and so on.
The global warming problem is not a unique case; climate scientists have run into similar walls before. Consider the attempts to calculate the trade winds, a simple and essential feature of the atmosphere’s dynamics. For generations, theorists piled up ideas about the equations governing the behavior of a fluid and the transfer of heat on the surface of a rotating sphere, hoping to build an accurate description of our planet’s convection cells and winds in a few lines of equations. Or a few pages. Or a few dozen pages…? They failed again and again. Only with the advent of computers in the 1960s were people able to solve the problem, at the cost of several million numerical operations. So if somebody now asks us for an “explanation” of the trade winds, we can hold forth on a whole range of relevant topics (heating in the tropics, the rotation of the Earth, baroclinic instability), but when it comes to the details, to actual numbers, we can do no better than to bury our questioner under stacks of printouts from the innumerable calculations involved.
Mind you, I am not saying that we do not understand the greenhouse effect, or anything of that kind. We understand the basic physics behind it very well, and we can explain the essentials, in broad strokes and in a minute or so, to a non-scientific audience (let’s see… “greenhouse gases let the Sun’s light, at visible wavelengths, pass through to the Earth’s surface, which warms up. The surface re-emits that energy as infrared radiation, which the greenhouse gases absorb to a greater or lesser degree depending on the gas, warming the air. The air sends part of that energy back down to the surface, keeping it warmer than it would be if those gases were not there.”). A more technical explanation, still aimed at non-scientists, might run to a few paragraphs. But if you want to argue about reliable numbers (if you want to know whether rising greenhouse gas levels mean minor warming or a catastrophe) you suddenly have to take into account humidity, convection, aerosol pollution, and a whole set of other components of the climate system, all tied together in long chains of numerical computation.
Physics is rich in phenomena that are simple in appearance but cannot be calculated in simple terms. Global warming is like that. People may yearn for a short, clear way to predict how much warming we are likely to face. Alas, no such simple calculation exists. The actual temperature rise is an emergent property resulting from interactions among hundreds of factors. People who refuse to acknowledge that complexity should not be surprised when their demands for an easy calculation go unanswered.
Mark says
Gilbert’s figure of 2.5 is, I think, a very good starting point, though.
The question being asked is “how do greenhouse gases affect the Earth’s temperature?” And this one answers, “If you consider only CO2 (which we are responsible for), 2.5 degrees per doubling.”
Now if someone wants to gainsay that, they are no longer asking the question, are they? They are asking for an answer that they like.
Now you can just tell them “well, you do the maths then and we’ll look over the result, but by default this is what would happen, and you must take into account any changes that occur, whether they increase sensitivity or reduce it”. Or just ignore them and let them know that if they wanted a different answer, they should have asked a different question.
Rick Kennerly says
I’m trying to do some layman-level research on various climate-change models predicting climate winners and losers in North America, particularly related to areas to retire to in, say, 20 years. Where do you think I could find some understandable, middle-of-the-road graphics, etc.?
Andrew says
Thanks! This is a great (and eloquent) description of the incremental complexity required of models to go from the interactions between atoms and photons to those between Earth system components.
As someone who’s used to working with complex (= simple; compared to reality!) models of biogeochemistry, I’ve gotten so used to requiring simulations to explore behaviour that I no longer give it a second thought. This has given me an institutionalised mindset that, at times, blinkers my understanding of why others misunderstand. This article is a helpful, if obvious, reminder that not everyone else is in the same boat! ;-)
Best regards,
Andrew.
Spencer says
#3 Andrew: I’m trying to figure out whether your question is off-topic or not! To get reliable predictions for climate on regions as small as a US state will take more computing power and understanding of climate processes than we have at present. So far, experts are pretty sure that the Southwest from Calif. to Texas will have worse heat waves and drought (=forest fires), which may have started already. This is pretty robust; people have been predicting it for decades. Most other US regions aside from Alaska don’t seem too sensitive to near-term climate change… ironically, the country most responsible for the greenhouse gases in the atmosphere is going to feel the effects later than many innocent countries. We do expect everywhere the generalities of climate change: hotter summer nights and later frost, more variability such as more intense droughts and rainstorms. And don’t get beachfront property near high tide if you are looking more than a couple of decades ahead. All that was predictable from simple hand-waving considerations, and indeed has been predicted for half a century.
If you have access to Science magazine, see the report Aug. 15 p. 909 of an article on this in press in Geophysical Research Letters, with a nice graphic.
Jeffrey Davis says
The degree of complexity takes the issue into a region where people not intimate with the complexity are simply outsiders. Since there are consequences to the issue that are the opposite of “academic”, those who want to stir the world to action are on the horns of a dilemma. What they want in the way of response and what’s possible for them to communicate to the layman are at such a remove that it will be a miracle if anything actually happens to stem climate change. In an ideal world, “Trust us” coupled with a well-organized primer (like the IPCC reports) would be sufficient, but as we all know and as Fate would have it, there are Other Interests involved. And as Upton Sinclair heartrendingly pointed out, “It’s hard to get a man to understand something when his paycheck depends upon him not understanding it.” And there are lots and lots of paychecks involved.
Gsaun039 says
As an engineer, and one that has worked his entire career on issues involving energy and the environment, and most specifically on air/atmospheric issues, I take some exception to the characterization of “engineers.”
While I realized long ago that simplistic equations and approaches can get you nonsense as an answer, I now tell people who attempt this that only when they’ve gone through the process line by line, layer by layer will they have some sense of what is involved. As noted, many will simply give up realizing that they have not the scientific prowess or time to go through the exercise.
That said, I do often point out the most straightforward of the simple laws that govern the processes involved and note that the theory does not rely on correlations of measurements to create GHG theory. Rather, I point out that the correlations support the underlying physics and are not just some random correlations without some underpinning science to them. One need only go to such sites as http://www.venganza.org/about/open-letter/ and look at the relationship between global temperature and the number of pirates.
As an engineer attempting to deal with the consequences, I like to know what future I am planning for.
Ron Taylor says
Thanks, Spencer, for a very lucid explanation. Most senior engineers, however, should understand this level of complexity and the confidence one must have in solutions obtained by numerical integration using digital computers. Otherwise, we never could have launched a single missile or space vehicle.
Michael says
“Physics is rich in phenomena that are simple in appearance but cannot be calculated in simple terms. Global warming is like that. People may yearn for a short, clear way to predict how much warming we are likely to face. Alas, no such simple calculation exists. The actual temperature rise is an emergent property resulting from interactions among hundreds of factors. People who refuse to acknowledge that complexity should not be surprised when their demands for an easy calculation go unanswered.”
Ah! But then people who offer “simple” solutions to this complex problem shouldn’t be so surprised when they are asked to provide the simple proof, should they? It is a double-edged sword, this complexity. If you are asking for change, shouldn’t you be required to accurately predict what your change will produce? But apparently that isn’t possible.
Hank Roberts says
> If you are asking for change, shouldn’t you be required to accurately predict
> what your change will produce?
You’ve stated the precautionary principle. But as Dr. Weart points out, accurate prediction was available.
Maurizio Morabito says
Would it be possible to have an actual senior engineer present their (presumably, mainstream) views of anthropogenic climate change and of the use of models?
As a (senior? and scientifically trained) engineer myself, I can guess what Mr Weart is aiming at, but he’s still using a language that brings down no barrier. For example, a statement such as
“Gilbert N. Plass used the data and computers to demonstrate that adding carbon dioxide to a column of air would raise the surface temperature”
will and does definitely make people suspicious.
You see, I have seen dozens, and I am sure there are out there hundreds of thousands of designs that have been “demonstrated” in a computer only to fail miserably when put into practice.
In fact, one point that I don’t think Mr Weart realizes (and likely, it’s all part of the miscommunication) is that it’s the engineers that have to deal with the actual world out there, and all its complexity, starting from but having to go beyond what calculations (formulae and/or models) suggest.
It really is the job of engineers to understand the complexity of the real world, and to make things work within that complexity.
There is little point in arguing to your manager that, say, in the computer your revolutionary design of a car needs only 2 gallons per 100 miles, when the actual thing is measured as drinking much more than that.
The one rule common to all engineered systems is: the more stuff you put in, the higher the chances that something will go wrong. In Mr Weart’s case: the more factors that need to be made to interact using models and supercomputers to calculate “global warming”, the higher the chance that the computed answer won’t be the right one.
Therefore, rather than accusing engineers of looking for simple answers (likely, misunderstanding them), Mr Weart should try to bridge the gap.
An example of another scientific endeavor, apart from climate change, where extremely complex, newly developed modelling has been successfully applied as-is in an engineering project would definitely be a good starting point.
Hank Roberts says
http://www.timesonline.co.uk/tol/news/environment/article4690900.ece
Chris Dudley says
For those who want a “simple” place to start, the work of the 1920s on understanding the structures of stellar interiors is worth reviewing. A number of approximations, many developed by Eddington, made calculation tractable with the methods of those days, often a room full of women working as calculators. Much more detailed opacities are used now, and the atmospheres of stars above their photospheres also get detailed treatment. It turns out that the physics is not controversial, just complicated. It is a fun study, but give it a few months to start to get a grip on it.
Bryson Brown says
Michael (#8): Change of some kind or other is going to happen; it always does. It’s not a choice between ‘keeping things the same’ and ‘changing them’. Why do those who advocate a reduction in our GHG emissions owe you a ‘simple’ proof of what the consequences of their proposal would be, while you don’t owe them a simple proof that continuing our GHG emissions won’t have disastrous consequences? Are you foolish enough to just say, ‘well, there’s no disaster yet, so maybe it’s OK’? If so, consider this old joke: a man falling from the top of the Empire State Building falls past the 20th floor and someone yells at him, ‘How are you?’ He answers, ‘All right so far’… The evidence that serious problems are already developing, and that much worse are yet to come, is very strong.
crust says
Michael (#8), just because you can’t predict something in a page doesn’t mean you can’t “accurately predict” it.
Consider an example from pure mathematics: Most results that are at all deep haven’t (and likely can’t) be proven in one page or ten or in many cases even a hundred. But that doesn’t undermine the validity of the proof, say, of the solvability of groups of odd order (a famous example from 1963 of a very simple, very important result that took about 250 pages of very dense reasoning and hasn’t been materially condensed in the four decades since then).
Ray Ladbury says
Michael #8: It would appear that we have what is actually a pretty simple argument here, though based on complex diagnostics: The patient says, “Doctor, Doctor, it hurts when I raise CO2 levels.”
To which the Doctor replies: “Don’t do that.”
Once we have established a causal link between CO2 and climate (and by any reasonable standard, we have), if we remove the cause, we avoid the effect. It would be much more difficult to justify any sort of geoengineering approach without a huge level of effort by your standards, but reducing CO2 emissions is a no-miss solution.
Gösta Oscarsson says
Reading the paper I came to think of an experience I had, a (large) number of years ago, with what then was called micro simulation. A number of social scientists attached probabilities to each individual in a region as to, for instance, their demographic and economic behaviour, and made assumptions as to how they all would react to the behaviour of all the others. (Obviously very complicated calculations, far beyond the grasp of a simple central bureaucrat = me.) What made me suspicious (and here we have my link to the present paper) was that the scientists said that, after each computation, they checked the aggregate results against macro data on the region in question, and when they got “monstrous” and “unrealistic” descriptions of the future region they changed their assumptions until the result was “reasonable”. This “method” made me very doubtful. Do you see the parallel?
Mark says
Maurizio, #10.
OK, so please furnish us with an Earth Mk II so we can avoid having to use a mathematical simulation.
Or do you have any explanation as to WHY the simulations are incorrect?
Or are you just looking for an answer you’re happy with?
Doug H says
Great post! Does anyone know where one can obtain a fairly detailed layperson’s description of modern bio-geo-chemo-climate models that lists all the positive and negative feedbacks?
Jeffrey Davis says
“If you are asking for change, shouldn’t you be required to accurately predict what your change will produce? But apparently that isn’t possible.”
In this country, we’re terribly familiar with acting upon insufficient information about a potential catastrophe. Our current bill for that “1% possibility” will reach more than $1 trillion before the last veteran maimed by the war is laid to rest.
Compare the transparency of the science with the political shenanigans which prefaced the war. Yes, there’s complexity with the science of AGW, but complexity is a “mote” compared with the “beam” of the three-card monte game involved in manipulating intelligence.
Much of the change encouraged by scientists ought to dovetail nicely with the necessity brought about by Peak Oil and the good involved in extricating our political choices from the swamp of Mid Eastern grievances. The solution to all these things looks remarkably similar: burn a lot less carbon.
Ray Ladbury says
Maurizio Morabito, I would not attach great importance to Spencer’s use of “engineer” as questioner. He is merely taking an engineer as an example of an educated professional not acquainted with details of climate science. As indicated by debacles in the APS Forum on Physics and Society and elsewhere, one could equally take “physicist,” “statistician,” or even “meteorologist.”
The point stands that without concerted effort to understand details of the science, even an educated person will come to incorrect conclusions. Nobody is saying that the models reproduce reality, but rather that they give us insights into how the real world is working as long as we have prepared ourselves by decades of study.
Richard Pauli says
“You don’t need a weatherman to know which way the wind blows”
Is there a similar phrase for climatology?
Maybe, ‘You don’t need a climatologist to tell which way the heating goes’
Doesn’t quite have the rhythm and poetry of Bob Dylan. We all want elegant and simple statements that reveal truth about change.
Andrew says
#4: Oops. I don’t think I asked a question. I was just shamelessly offering praise. ;-)
I thought that your piece was a very clear articulation of why one can’t simply jump from a handful of first-order equations to a prediction of the consequences of a doubling of CO2. Aside from the plethora of processes that even complicated climate models exclude, there’s a big chunk of simulation necessary to tease out all of the emergent properties needed to answer even seemingly simple climate questions.
Anyway, great article.
Andrew.
austin says
I’d like to see the models take hurricanes into account. These are vast dissipative systems. But they may be the least of them.
Douglas says
Following up on #11. If modeling problems are so complex that they defy closed-form solution, a sound engineering response, in my experience, is to establish sufficient correlation of the model to reality before using it to project new scenarios. If it is not capable of predicting reality, the usual request is to go look for the missing variables or relationships until it does correlate sufficiently. 3 questions then:
(1) How well have climate models of 10-25 years ago predicted climate-scale change over the last 10-25 years?
(2) Do climate models contain the physical models/factors that recently resulted in the toggle of the PDO and resulting projections by some that we might see a delay in global warming for 10 years or so? And if so, why did they not predict it?
(3) If these factors are not present, what efforts are ongoing to address this?
Can you point to any papers or discussions on this topic if the answer is too involved to give here?
George Musser says
A question about the basic physics explanation:
Earth is in radiative equilibrium, at least over the long term. In a basic-physics explanation, is it better to describe the present warming trend as a transient response (as the system seeks out a new equilibrium at higher temperature) or as a steady-state response (as a new value of opacity enters into the radiative balance)?
George
(forgive me if this is a duplicate posting – I’m having trouble with the captcha)
John Farley says
Spencer Weart, it’s a good question: how can you calculate global warming mathematically?
Here’s my contribution: a link to an essay that I published recently for nonscientists, “The Case for Modern Anthropogenic Global Warming”. It’s a reply to the contrarian arguments advanced by Alexander Cockburn in 2007 on the CounterPunch website and in the pages of The Nation.
http://monthlyreview.org/080728farley.php
Because the essay was written for a nontechnical audience, all equations have been banished to endnotes.
In this essay, endnotes 12 and 15 give some simple calculations.
It IS possible to calculate, using the Stefan-Boltzmann equation, the temperature of a hypothetical Earth with no greenhouse gases in the atmosphere. It’s below the freezing point of water. This shows that the greenhouse effect is a BIG effect.
People are often surprised that global warming, caused by the enhanced greenhouse effect, is large enough to cause a problem, even though carbon dioxide is only present in the atmosphere at a level of hundreds of parts per million. Instead, the real surprise is that global warming is as small as it is!
-John Farley
Professor of Physics, UNLV
Jim Eager says
Re Michael @8: “If you are asking for change, shouldn’t you be required to accurately predict what your change will produce? But apparently that isn’t possible.”
And that impossibility is what makes it so risky: we know in general what will happen, but we don’t know exactly what will happen, which is why we can’t rule out the worst case scenario. Sounds like precisely a time to invoke the precautionary principle, no?
John Lang says
Since the GHG sensitivity estimates are completely based on the results of climate models, a method of testing how accurate the computer models are must be undertaken.
There is a clear (moral, social, economic and scientific) responsibility to do so given what is at stake.
[Response: That isn’t the case (sensitivity estimates are also made via paleo-climate and modern changes), and secondly, that testing is already being undertaken (read the IPCC AR4 report for an assessment of that work). – gavin]
anna says
as far as a simple proof goes, not being a scientist myself i’m pretty happy with ockham’s razor (though final year of high school physics is also pretty useful). i find it entirely plausible that if you pump 37% more of a potent and important GHG into the atmosphere, then you can expect it to have an impact.
as for engineers, i work with them regularly (civil and mechanical mainly) and they reliably and usefully design structures with a safety margin built in. so, if a beam will be reasonably expected to carry X load, it will actually be designed to carry say a 10X load. this is basically the “just to be on the safe side” factor, because complex systems are necessarily difficult to predict.
the important and entirely reasonable thing here though is that they design things on the safe side because the consequences of not doing so could be devastating.
and this is only for one beam on one floor of one building, we’re talking about the very planet that sustains us.
i’m not particularly worried about super accurate predictions, the stakes are just so high and human behaviour and the climate system so complex, i’d rather play it safe with my future (and my children’s, yours, your children’s, etc., etc.).
the quest for super accurate predictions actually reminds me of a Borges story where a people are so obsessed with making the perfect map they end up with a map as big as the land it maps. we can wait for perfectly accurate predictions but they will be useless because they will only ever be perfectly accurate *after* the event has been observed.
Ray Ladbury says
Jeffrey Davis says: “Compare the transparency of the science with the political shenanigans which prefaced the war.”
Oh, let’s not, shall we? I’d really like to hold science to a higher standard than politics–particularly the politics of a kleptocratic regime.
[Response: Agreed. The war is OT. – gavin]
Ray Ladbury says
Gösta Oscarsson says: “What made me suspicious (and here we have my link to the present paper) was that the scientists said that, after each computation, they checked the aggregate results against macro data on the region in question, and when they got “monstrous” and “unrealistic” descriptions of the future region they changed their assumptions until the result was “reasonable”. This “method” made me very doubtful. Do you see the parallel?”
Actually, not at all. Big difference between this and the dynamical approach used for climate modeling.
Jim Eager says
Re John Farley @26, your Monthly Review critique of Cockburn’s arguments is a thorough and valuable resource, but I fear your rebuttal of Contrarian Claim 3, the lag of CO2 increase behind temperature rise at the end of an ice age, has a serious deficiency in that it does not describe the subsequent role of CO2 as a feedback in adding more warmth, and thus the ability of CO2 to act as both a feedback in the case of deglaciation, and as a forcing when added directly to the atmosphere as at present.
Hank Roberts says
John Lang Says:
8 September 2008 at 1:44 PM
” Since the GHG sensitivity estimates are completely based on …” and then states his misunderstanding.
John, where did you get this mistaken idea? Did you read it somewhere? Could be you’re misreading a good source, or you’re being fooled by fake “facts” — ??
John Mashey says
“These people, typically senior engineers, get suspicious”.
Please, can we get deeper than “senior engineers”? That label really isn’t improving insight; if we want insight, we need to probe a lot deeper than just “senior engineers”.
Let me offer a speculation, although not yet a serious hypothesis:
1. SPECULATION
Amongst technically-trained people, and ignoring any economic/ideological leanings:
1) Some are used to having
a) Proofs
OR
b) Simple formulae
OR
c) Simulations that provide exact, correct answers, and must do so to be useful
d) And sometimes, exposure to simulations/models that they think should give good answers, but don’t.
2) Whereas others:
a) Are used to missing data, error bars,
b) Complex combinations of formulae
c) Models with varying resolutions, approximations, and that give probabilistic projections, often only via ensemble simulations.
d) Models that are “wrong”, but very useful.
My conjecture is that people in category 1) are much more likely to be disbelieving, whether in science, math, or engineering.
2. ANECDOTAL EXAMPLES:
1) In this thread, a well-educated scientist (Keith) was convinced that climate models couldn’t be useful, because he was used to models (protein-folding) where even a slight mismodel of the real world at one step caused final results to diverge wildly … just as a one-byte wrong change in source code can produce broken results.
See #197 where I explained this to him, and #233 where light dawned, and if you’re a glutton for detail: #66, #75, #89, #1230, #132, #145, #151, #166 for a sample.
2) See Discussion here, especially between John O’Connor & I. See #64 and #78. John is an EE who does software configuration management. When someone runs a rebuild of a large software system, everything must be *perfect*. There’s no such thing as “close”.
Also in that thread, Keith returned with some more comments (#137) and me with (#146), i.e., that protein-folding was about as far away from climate modeling as you could get.
3) Walter Manny is a Yale EE who teaches calculus in high school. He’s posted here occasionally (Ray may recall him :-), and participated in a long discussion at Deltoid, and has strong (contrarian) views. In many areas of high school/college math, there are proofs, methods known for centuries, and answers that are clearly right or wrong.
4) “moonsoonevans” at Deltoid, in #21 & #32 describes some reasons for his skepticism, #35 is where light dawns on me. He’s in financial services, had experienced many cases where computer simulations done by smart people didn’t yield the claimed benefits. In #35 I tried to explain the difference.
All this says that if one is talking with an open-minded technical person, one must understand where *they* are coming from, and be able to give appropriate examples and comparisons, because many people’s day-to-day experience with models and simulations might lead them to think climate scientists are nuts.
3. A FEW SPECIFIC DISCIPLINES & CONJECTURES
1) Electrical engineers (a *huge* group, of which only tiny fraction are here)
Many EEs these days do logic design, which requires (essentially) perfection, not just in the design, but (especially) in simulation.
Design + input =>(logic simulator) => results
At any step, the design may or may not be bug-free, but the simulator *must* predict the results that the real design would do given the input, exactly, bit for bit. Many test-cases have builtin self-checks, but the bottom line is that every test-case yields PASS or FAIL, and the simulator must be right.
Many people buy simulators (from folks like Cadence or Synopsys), and run thousands of computers day and night simulating millions of test-case inputs. But, with a million test-cases, they’re not looking for an ensemble that provides a distribution, they’re looking for the set of test-cases to cover all the important cases, and for EVERY one to pass, having been simulated correctly. This has some resemblance to the protein-folding problem mentioned above.
Now, at lower levels of timing and circuit design, it isn’t just ones and zeroes (there’s lots of analog waveforms, probabilistic timing issues, where one must guarantee enough margin, etc). When I’d tease my circuit designer friends “Give me honest ones and zeroes”, they’d bring in really ugly, glitchy HSPICE waveforms and say “so much for your ones and zeroes”. (This is more like the molecular “docking” problems that Keith’s colleagues mentioned.)
At these levels, people try to set up rules (“design rules”) so that logic designers can just act at the ones-and-zeroes level.
If one looks at EEs who worry about semiconductor manufacturing, they think hard about yields, failure attribution, and live with time-series. (Standard answer to “We got better yield this month, how do you think it looks?” was “Two points don’t make a trend.”)
2) Software engineers
Programs often have bugs, but even a bug-free program can fall apart if you change the wrong one byte of code, i.e., fragile. (I don’t recall the source, but the old saw goes something like: if skyscrapers were like software, the first woodpecker would knock them over.)
Configuration management / software rebuilds are fairly automated these days, and they must be correct. One cannot include the wrong version of code, or compile with incompatible options.
Performance engineering and benchmarking tend to be more probabilistic-oriented, and although a lot of people want to believe in one number (once the mythical “MIPS” rating), we’ve (mostly) fixed that over the last 20 years. Good performance engineers have always given relative performance ranges and said YMMV (Your Mileage May Vary).
3) Mechanical engineers
This, I expect, varies. In some cases, closed-form equations work pretty well. In other cases, one is using big structural dynamics and computational fluid dynamics codes to obtain “good-enough” approximations to reality before actually building something. For example, automobiles are extensively modeled via “crash codes”.
4) Petroleum engineers
It’s been a while, but certainly, people who do seismic analysis and reservoir modeling *start* with data from the real world, analyze it to make decisions, so ought to be a little more accustomed to probabilistic analyses.
5) Financial engineers (Google: financial engineering)
Not having physics to constrain simulations yields some wild results, although at least some people are very comfortable with risk, uncertainty, and ensemble projections. I especially like Sam Savage’s “Flaw of Averages”.
On the other hand, when Nobel Economists lose $B (LTCM), I’m not surprised there is skepticism about climate models.
4. CONCLUSIONS
That’s a speculative start. I do *not* think lumping a large group together as “senior engineers” helps progress, because I have at least anecdotal evidence that the sources of skepticism tend to be attached to the kinds of models and (maybe) simulations that someone does day-by-day. The problem is that many people tend to generalize from their discipline to others, and especially if they have trouble getting useful models, they tend to be suspicious of others’.
At one extreme, people have long-established mathematical proofs, and answers that are clearly right or wrong.
At the other extreme, people have to make decisions based on the best approximations they can get, and if their discipline has good-enough approximations, they tend to think one way, and if the approximations aren’t so good, they may think another about equations and climate models.
Jeffrey Davis says
re: 28
“given what is at stake.”
If the likelihood of a ruinous car crash were greater than 2.5% per year, the insurance companies wouldn’t let me in the door. By the time we get to a rise of 2 C, I think the issues of precaution and mitigation are going to be moot. One can’t divvy up probabilities exactly, but the IPCC pegs a rise in temps of between 1.5 and 4.5 C at the 95% level of certainty. Do you think you could get car insurance at that level of risk?
Fred Jorgensen says
Re 27: Jim Eager:
“Sounds like the time to invoke the precautionary principle.”
It depends on the cost of the precaution, the magnitude of adverse consequences, and the degree of certainty.
If, to bring down CO2 levels to 275 ppm or so, we need a global WWII type mobilization, rationing, and restriction of freedoms, then we better have one heck of a solid case!
mugwump says
John Farley, #26. I followed your link, and read your footnote 15 which is of particular interest to those (like myself) who are looking for methods to estimate climate sensitivity without using GCMs:
The problem with this argument is the glacial – interglacial temperature increase is accompanied by a large decrease in the amount of ice and snow covering the earth, and hence in the amount of sunlight reflected back towards space. Today there is relatively little ice left, and what there is is near the poles which contributes little to reflection (the poles are dark for half the year and for the other half the sun is on the horizon so they present very little “cross-section” to the sun). Thus, we would expect a much stronger positive feedback from glacial to interglacial than we would today from a similar increase in forcing.
[Response: That is already factored in since the ice sheets are imposed as a forcing in this kind of calculation. In general, you are probably correct – the existence of extensive snow/ice cover increases sensitivity, but for the ranges of climate change we expect in the next 100 or so years that does not seem to be a big effect. – gavin]
Jim Eager says
Re Fred Jorgensen @36: “It depends on the cost of the precaution, the magnitude of adverse consequences, and the degree of certainty.”
Fred, you left out the cost of not taking the precaution, and the magnitude of adverse consequences of not taking the precaution, yet you specifically set the bar at 275 ppm.
Now why would you do that?
Guenter Hess says
Dear Maurizio Morabito #10,
I think there is actually a more encouraging explanation to Spencer Weart’s observations, which also supports your statement.
Firstly, at our excellent universities we train scientists and engineers in the basics of their respective fields during their undergraduate studies.
Secondly, during their graduate studies we encourage them to go beyond the basics, discover new things, go beyond the current knowledge, and acquire new knowledge on their own. One way to do this is challenging hypotheses, theories, and assumptions of complex models, and I guess even sometimes the consensus.
That is how progress is made at the universities and in the six sigma breakthrough methodology in the industry.
Thirdly, in industry we also train them to look for simple rules and models that can be implemented with ease in a production process. This is sometimes necessary to make money.
Actually, I think the linear feedback model that was adopted by climate science is just a typical linearization you would apply in industry to model a more complex problem.
I also agree with you, many of them therefore understand the complexities of real world problems.
Therefore, I am very optimistic that with the help of all those excellently trained scientists and engineers mankind will survive the warming.
David B. Benson says
Fred Jorgensen (36) — Don’t need any of that, just 1–2% of the world’s gross product for 70–100 years. Probably it can be done for quite a bit less eventually. That shouldn’t stop us from starting now.
Chris Colose says
Spencer Weart, thanks for the great writeup.
Gavin,
concerning your last response to mugwump, I guess I assumed incorrectly that Milankovitch forcing was amplified by reflective and GHG feedbacks… but instead ice sheet responses are considered a forcing? I may just be getting tripped up on words rather than anything radiatively significant (I’m assuming that the “forcing” part is a reduction in surface albedo, though the semantics make no sense to me). How exactly are ice sheets to be considered a forcing, unless we consider freshening of the THC or something along those lines? I do agree with the comment, though, that under the same radiative forcing two Earths (one with ice, one with little ice) would be expected to show differences in sensitivity. But you’d have better insight into the actual quantitative aspects of that than me.
[Response: It’s the difference between the Charney sensitivity (fast feedbacks included, ice sheets, vegetation change not) and the Earth System Sensitivity (all feedbacks included). Most discussion has focused on the former, and that is the context for the LGM calculation. The ESS is different – and indeed if you try and calculate the ESS to orbital forcing you get extreme values (because the regionality/seasonality of the orbital forcing is so important for the ice sheets). – gavin]
Hank Roberts says
Doug H. wrote:
> Does anyone know where one can obtain a fairly
> detailed layperson’s description of modern
> bio-geo-chemo-climate models
Yes, the IPCC, or the Start Here links at the top of the page.
> that lists all the positive and negative feedbacks?
Nope, it’s not such a simple question that all the answers are completely known even for a single moment in time, and they will change over time.
Example (you can follow articles like this forward in time to see how the questions get worked on):
from twenty-one years ago (before ocean pH change was noticed as an issue):
http://www.see.ed.ac.uk/~shs/Global%20warming/Data%20sources/Charlson.pdf
Fred Jorgensen says
Re 38: Jim Eager:
The cost of not taking the precaution would be the ‘Adverse Consequences’ in the broadest term.
NASA’s Jim Hansen recently set the bar at 350 ppm, but there have been calls for pre-industrial, historically stable levels of 280 or so.
Re. 36: David B. Benson: But the world’s GDP isn’t the benchmark! China and India want US standard of living, with the premium for low CO2 technology paid by the developed world. Much more than 1-2% of our GDP, I would think! And we ‘may’ only have 10 years or so before the speculative ‘tipping point’! Looks like an enormous project, so we’d better NOT start down that path without a very high degree of certainty, since the cost of ‘precaution’ (insurance) could be used to solve more critical problems elsewhere.
John Lang says
Hank Roberts Says:
8 September 2008 at 2:24 PM
“John Lang Says:
8 September 2008 at 1:44 PM
” Since the GHG sensitivity estimates are completely based on …” and then states his misunderstanding.
John, where did you get this mistaken idea? Did you read it somewhere? Could be you’re misreading a good source, or you’re being fooled by fake “facts” — ??”
That is what the commentary by Spencer Weart says.
David B. Benson says
Fred Jorgensen (43) — Well, when I worked it out, 1–2% was enough to
(1) Deeply sequester, as biochar or torrified wood, enough carbon to offset the annual additions of excess carbon to the active carbon cycle;
(2) Just enough funding left to do the same with about 1% of the existing excess carbon; hence the need for 70–100 years.
There are many variations on the above theme; it could be combined with some CO2 CCS; some of the torrified wood could directly replace coal; etc.
The idea is simply to show that it can be done; we ought to start doing some of it right away. Moving away from fossil fuels will not alone suffice; just concrete production will surely continue to contribute about 0.4 GtC per year.
The Wonderer says
John Mashey: Let me offer another speculation. “Senior Engineers” and other Senior Technical Leaders who are worth their salt, think more broadly and have a knack for telling scat from shinola. The people to whom you refer are stuck in a very narrow line of thinking and are unable to see the broader picture.
If anyone out there thinks we engineers are a tough crowd, watch out for the bakers and steelworkers ;)
Richard Hill says
The linked presentation should appeal to engineers; it explains surface temperature in an understandable way using only gravity, radiation and convection. The GHG effect is proved unnecessary. It would be appreciated if someone could point out the error or provide a link to the rebuttal.
The Thermodynamic Atmosphere Effect
– explained stepwise
by Heinz Thieme
German Version: http://people.freenet.de/klima/indexAtmos.htm
Using a set of technical models of planets with and without an atmosphere, the reasons are explained for differences in surface temperature of the planet without an atmosphere compared with the temperature in the ground layer of atmosphere of the other planet. The differences are caused by thermodynamic characteristics of these gases, which cause the mass and the composition of the atmosphere, and the atmospheric pressure, which results from gravity.
Trying to avoid the spam filter, link to english version is at…
http://www.geo(largetowns).com/atmosco2/atmos.htm
[Response: Nonsense, I’m afraid. He assumes that the planet has an adiabatic lapse rate (heating with increasing pressure), a very specially chosen amount of mass and radiation only at the edge of the planet. This might work, but does not resemble the real world in any respect. – gavin]
captdallas2 says
Interesting, there is no simple answer. I am curious why you didn’t reference the brilliant math used (2 plus 4 divided by 2 equals 3 degrees) to determine climate sensitivity. Tsonis et al. have a paper on dynamical models that isn’t all that simple to understand. I do recommend it as a good read, though.
Chris McGrath says
Thanks for your clear explanation of some of the basic concepts Spencer. Your book is very clear and well written too.
One aspect of the “basic” physics that puzzles me is the lag time of CO2 from burning fossil fuels in the atmosphere.
Archer (2007: 123) suggests, “The bottom line is that about 15-30% of the CO2 released by burning fossil fuel will still be in the atmosphere in 1000 years, and 7% will remain after 100,000 years.”
Why is there “only” a lag of 800 years in the ice core record between the Earth coming out of a glacial period and the response of CO2?
Reference: Archer D (2007) “Global Warming: Understanding the Forecast” (Blackwell Publishing).
Kevin McKinney says
re #46: “That is what the commentary by Spencer Weart says.”
Not anywhere that I could see; there is a lot about what is needed to model the greenhouse effect, to be sure, but nowhere a statement about what other evidence may or may not exist.
For a bit more about the other evidence, see Gavin’s inline response to the original query at #28, with more at #37.