Guest commentary by Spencer R. Weart, American Institute of Physics
I often get emails from scientifically trained people who are looking for a straightforward calculation of the global warming that greenhouse gas emissions will bring. What are the physics equations and data on gases that predict just how far the temperature will rise? A natural question, when public expositions of the greenhouse effect usually present it as a matter of elementary physics. These people, typically senior engineers, get suspicious when experts seem to evade their question. Some try to work out the answer themselves (Lord Monckton for example) and complain that the experts dismiss their beautiful logic.
The engineers’ demand that the case for dangerous global warming be proved with a page or so of equations does sound reasonable, and it has a long history. The history reveals how the nature of the climate system inevitably betrays a lover of simple answers.
The simplest approach to calculating the Earth’s surface temperature would be to treat the atmosphere as a single uniform slab, like a pane of glass suspended above the surface (much as we see in elementary explanations of the “greenhouse” effect). But the equations do not yield a number for global warming that is even remotely plausible. You can’t work with an average, squashing together the way heat radiation goes through the dense, warm, humid lower atmosphere with the way it goes through the thin, cold, dry upper atmosphere. Already in the 19th century, physicists moved on to a “one-dimensional” model. That is, they pretended that the atmosphere was the same everywhere around the planet, and studied how radiation was transmitted or absorbed as it went up or down through a column of air stretching from ground level to the top of the atmosphere. This is the study of “radiative transfer,” an elegant and difficult branch of theory. You would figure how sunlight passed through each layer of the atmosphere to the surface, and how the heat energy that was radiated back up from the surface heated up each layer, and was shuttled back and forth among the layers, or escaped into space.
When students learn physics, they are taught about many simple systems that bow to the power of a few laws, yielding wonderfully precise answers: a page or so of equations and you’re done. Teachers rarely point out that these systems are plucked from a far larger set of systems that are mostly nowhere near so tractable. The one-dimensional atmospheric model can’t be solved with a page of mathematics. You have to divide the column of air into a set of levels, get out your pencil or computer, and calculate what happens at each level. Worse, carbon dioxide and water vapor (the two main greenhouse gases) absorb and scatter differently at different wavelengths. So you have to make the same long set of calculations repeatedly, once for each section of the radiation spectrum.
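To give a concrete (if cartoonish) feel for what "calculate what happens at each level" means, here is a minimal sketch of the textbook N-layer "grey atmosphere," assuming every layer is a blackbody in the infrared (it absorbs everything reaching it and emits equally up and down) while sunlight passes straight through. The solar constant and albedo are the usual round numbers; nothing here stands in for a real radiative-transfer code.

```python
# A minimal sketch, assuming the textbook "grey atmosphere" idealization:
# each layer absorbs all infrared reaching it and re-emits equally up and
# down, while sunlight passes straight through.  Real radiative-transfer
# codes repeat a far more detailed balance for every spectral band at
# every level; the numbers below are standard round values, not outputs
# of any published model.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2 (assumed round value)
ALBEDO = 0.3       # planetary albedo (assumed round value)

def effective_temperature():
    """Temperature at which the planet radiates away the sunlight it absorbs."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0   # averaged over the whole sphere
    return (absorbed / SIGMA) ** 0.25      # about 255 K

def surface_temperature(n_layers):
    """Surface temperature when the column is split into n_layers opaque slabs.

    Balancing the energy budget of every level gives the standard result
    T_surface = (n_layers + 1)**(1/4) * T_effective.
    """
    return (n_layers + 1) ** 0.25 * effective_temperature()

if __name__ == "__main__":
    for n in (0, 1, 2):
        print(f"{n} IR-opaque layer(s): surface about {surface_temperature(n):.0f} K")
```

Even this toy makes the point: the surface temperature falls out of a level-by-level energy balance rather than a single closed-form line, and the real calculation repeats a much more detailed balance for every section of the spectrum.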
It was not until the 1950s that scientists had both good data on the absorption of infrared radiation, and digital computers that could speed through the multitudinous calculations. Gilbert N. Plass used the data and computers to demonstrate that adding carbon dioxide to a column of air would raise the surface temperature. But nobody believed the precise number he calculated (2.5ºC of warming if the level of CO2 doubled). Critics pointed out that he had ignored a number of crucial effects. First of all, if global temperature started to rise, the atmosphere would contain more water vapor. Its own greenhouse effect would make for more warming. On the other hand, with more water vapor wouldn’t there be more clouds? And wouldn’t those shade the planet and make for less warming? Neither Plass nor anyone before him had tried to calculate changes in cloudiness. (For details and references see this history site.)
Fritz Möller followed up with a pioneering computation that took into account the increase of absolute humidity with temperature. Oops… his results showed a monstrous feedback. As the humidity rose, the water vapor would add its greenhouse effect, and the temperature might soar. The model could give an almost arbitrarily high temperature! This weird result stimulated Syukuro Manabe to develop a more realistic one-dimensional model. He included in his column of air the way convective updrafts carry heat up from the surface, a basic process that nearly every earlier calculation had failed to take into account. It was no wonder Möller’s surface had heated up without limit: his model had not used the fact that hot air would rise. Manabe also worked up a rough calculation for the effects of clouds. By 1967, in collaboration with Richard Wetherald, he was ready to see what might result from raising the level of CO2. Their model predicted that if the amount of CO2 doubled, global temperature would rise roughly two degrees C. This was probably the first paper to convince many scientists that they needed to think seriously about greenhouse warming. The computation was, so to speak, a “proof of principle.”
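To see in miniature why Möller's water-vapor feedback could run off to absurd temperatures while Manabe's stayed finite, here is a toy iteration. It is an illustration only, with invented gain values, not a reconstruction of either calculation: each increment of warming feeds back a fixed fraction g of itself through added water vapor.

```python
# A minimal sketch, not Moeller's or Manabe's actual calculation: an initial
# warming dT0 is amplified round after round by a water-vapour feedback of
# gain g.  The gain values below are illustrative assumptions only.
def warming_with_feedback(dT0, gain, rounds=200):
    """Total warming after repeatedly feeding a fraction `gain` of each
    increment back in.  Converges toward dT0 / (1 - gain) when gain < 1."""
    total, increment = 0.0, dT0
    for _ in range(rounds):
        total += increment
        increment *= gain
    return total

if __name__ == "__main__":
    for g in (0.3, 0.6, 1.05):   # the last one "runs away"
        print(f"gain {g}: total warming ~ {warming_with_feedback(1.0, g):.1f} C")
```

With g below one the total settles at dT0/(1 − g); push g to one or beyond, much as happened in Möller's purely radiative column, and the warming grows without limit.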
But it would do little good to present a copy of the Manabe-Wetherald paper to a senior engineer who demands a proof that global warming is a problem. The paper gives only a sketch of complex and lengthy computations that take place, so to speak, offstage. And nobody at the time or since would trust the paper’s numbers as a precise prediction. There were still too many important factors that the model did not include. For example, it was only in the 1970s that scientists realized they had to take into account how smoke, dust and other aerosols from human activity interact with radiation, and how the aerosols affect cloudiness as well. And so on and so forth.
The greenhouse problem was not the first time climatologists hit this wall. Consider, for example, attempts to calculate the trade winds, a simple and important feature of the atmosphere. For generations, theorists wrote down the basic equations for fluid flow and heat transfer on the surface of a rotating sphere, aiming to produce a precise description of our planet’s structure of convective cells and winds in a few lines of equations… or a few pages… or a few dozen pages. They always failed. It was only with the advent of powerful digital computers in the 1960s that people were able to solve the problem through millions of numerical computations. If someone asks for an “explanation” of the trade winds, we can wave our hands and talk about tropical heating, the rotation of the earth and baroclinic instability. But if we are pressed for details with actual numbers, we can do no more than dump a truckload of printouts showing all the arithmetic computations.
I’m not saying we don’t understand the greenhouse effect. We understand the basic physics just fine, and can explain it in a minute to a curious non-scientist. (Like this: greenhouse gases let sunlight through to the Earth’s surface, which gets warm; the surface sends infrared radiation back up, which is absorbed by the gases at various levels and warms up the air; the air radiates some of this energy back to the surface, keeping it warmer than it would be without the gases.) For a scientist, you can give a technical explanation in a few paragraphs. But if you want to get reliable numbers – if you want to know whether raising the level of greenhouse gases will bring a trivial warming or a catastrophe – you have to figure in humidity, convection, aerosol pollution, and a pile of other features of the climate system, all fitted together in lengthy computer runs.
Physics is rich in phenomena that are simple in appearance but cannot be calculated in simple terms. Global warming is like that. People may yearn for a short, clear way to predict how much warming we are likely to face. Alas, no such simple calculation exists. The actual temperature rise is an emergent property resulting from interactions among hundreds of factors. People who refuse to acknowledge that complexity should not be surprised when their demands for an easy calculation go unanswered.
Hank Roberts says
> the cooling effect of continental collision? Is he suggesting
> that runaway warming was prevented by this happy accident? And
> if so, does that not imply that that is what is necessarily in store
Too much confusion there to parse.
That was _not_ a geoengineering proposal by Hansen.
It was an example of one form of biogeochemical cycling, q.v.
A few minutes’ searching turned up a few hundred sources.
Here are two as examples.
As always read the footnotes, and after time passes look for citation by subsequent authors.
http://www.nature.com/nature/journal/v445/n7128/full/nature05516.html#B1
http://www.pnas.org/content/early/2008/09/22/0805382105.abstract
Published online before print September 22, 2008,
doi: 10.1073/pnas.0805382105
Abstract
India’s northward flight and collision with Asia was a major driver of global tectonics in the Cenozoic and, we argue, of atmospheric CO2 concentration (pCO2) and thus global climate. Subduction of Tethyan oceanic crust with a carpet of carbonate-rich pelagic sediments deposited during transit beneath the high-productivity equatorial belt resulted in a component flux of CO2 delivery to the atmosphere capable to maintain high pCO2 levels and warm climate conditions until the decarbonation factory shut down with the collision of Greater India with Asia at the Early Eocene climatic optimum at ≈50 Ma. At about this time, the India continent and the highly weatherable Deccan Traps drifted into the equatorial humid belt where uptake of CO2 by efficient silicate weathering further perturbed the delicate equilibrium between CO2 input to and removal from the atmosphere toward progressively lower pCO2 levels, thus marking the onset of a cooling trend over the Middle and Late Eocene that some suggest triggered the rapid expansion of Antarctic ice sheets at around the Eocene-Oligocene boundary.
Richard Sycamore says
#600
You’re begging the question. I was asking if temperature was running away at the time of that collision. I understand the putative “decarbonation” cooling mechanism. That is not the question. You assert that temperature was not running away. What’s the proof?
#601
As above. Hundreds of sources are not required to answer one simple question.
Hank Roberts says
> temperature was not running away. What’s the proof?
You’re here, typing. That’s the proof. Strong anthropic climatology.
We have seen excursions, within a livable range (water in all three states, vapor, ice, and liquid, has been present at all times).
Hank Roberts says
Liquid water (the “Goldilocks Question”):
Here is one of many sources:
Ch. 13 (Kasting): Runaway greenhouses and runaway glaciations: how stable is Earth’s climate?
http://books.google.com/books?hl=en&lr=&id=peOHcKtQxZAC&oi=fnd&pg=PA349&dq=did+runaway+climate+change+happen%3F&ots=AOSEU6uRBX&sig=TX7_x24sRKtABMgguwTDScRfLFk
That’s a link to the picture of the first page of Ch. 13 from:
Frontiers of Climate Modeling
By Jeffrey T. Kiehl, Veerabhadran Ramanathan
Cambridge University Press, 2006
ISBN 0521791324, 9780521791328
Hank Roberts says
Simpler way to answer this — look in Dr. Weart’s History.
Click the link provided below to search there for "runaway".
The first hit there explains what the word means; see the references.
http://www.aip.org/servlet/SearchClimate?collection=CLIMATE&queryText=runaway&SEARCH-97.x=0&SEARCH-97.y=0
Here’s a 1988 Sci. Am. reference as well:
http://www.geosc.psu.edu/~kasting/PersonalPage/Pdf/Scientific_American_88.pdf
Richard Sycamore says
I’m not seeking a definition, Hank. No dictionaries required. I’m asking if Earth’s climate was running away at the time of the continental collision. A reply would begin with, say, “affirmative” or “negative”, and then supply a reason. Have you read all the references you cite? What do they say on this question?
Lloyd Flack says
#602 Richard Sycamore,
There was no runaway change under way at the time, just amplifications of any changes in forcings. There was plenty of time for the climate to reach new equilibria after any change in carbon dioxide levels. Most of the feedbacks are very quick on a geological time scale but some are not so on ours.
Changes in radiation flux due to changes in water vapour and clouds occur in weeks. The consequent temperature changes can take decades.
Carbon dioxide can take centuries to reach new equilibria and can take tens of thousands of years to return completely to previous levels once a source has stopped emitting. On a geological scale this is quick.
Richard Sycamore says
#607
Thank you for an honest attempt to answer the question. This more or less agrees with the fictional model I have in mind. But the question is: what is the basis for this model? How do you know that the equilibrium point was rising prior to the continental collision, and that it was not the result of a runaway event across a tipping point? And how do you know that the equilibrium point would not have continued to rise were it not for continental collision? Are these just guesses? Or are they strong inferences from a megayear-long GCM run?
Hank Roberts says
> what is the basis for this model?
Wrong assumption. It’s not a model.
> Are these just guesses?
No.
> are they strong inferences from a megayear-long GCM run?
No.
> How do you know that the equilibrium point
Wrong assumption. Variation, within a livable (so far) range.
Yes, I read each of the pages I suggested to you, yesterday before I posted them. Three pages. You can do it too. I have confidence.
> What do they say on this question?
You can look it up.
Otherwise all you have is some guy on a blog as a source, and can go on arguing endlessly.
You wouldn’t like doing that, would you?
Richard Sycamore says
#609
The Sci. Am. paper dates to 1988. But Ruddiman’s hypothesis, cited by Hansen, is more recent than that. So rather than citing old science, why not just give me your modern-day synopsis? You can do it. I have confidence.
Hank Roberts says
https://www.realclimate.org/index.php?p=272
5 July 2006 Runaway tipping points of no return
Gavin’s several-paragraph introduction there sums it up well.
Read it all, don’t rely on snippets posted by some guy on a blog:
“People often conclude that the existence of positive feedbacks must imply ‘runaway’ effects i.e. the system spiralling out of control. However, while positive feedbacks are obviously necessary for such an effect, they do not by any means force that to happen. Even in simple systems, small positive feedbacks can lead to stable situations as long as the ‘gain’ factor is less than one (i.e. for every initial change in the quantity, the feedback change is less than the original one). A simple example leads to a geometric series … This series converges if |r|<1, and diverges (‘runs away’) otherwise. You can think of the Earth’s climate (unlike Venus’) as having an ‘r‘ less than one, i.e. no ‘runaway’ effects, but plenty of positive feedbacks.”
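Written out (standard algebra, not part of the quote), the series Gavin describes is:

```latex
\Delta T_{\mathrm{total}}
  = \Delta T_0 \,(1 + r + r^2 + r^3 + \cdots)
  = \frac{\Delta T_0}{1 - r}, \qquad |r| < 1 .
```

So a positive feedback with 0 < r < 1 amplifies an initial change ΔT0 by the finite factor 1/(1 − r); only r ≥ 1 actually "runs away."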
Richard Sycamore says
#611
Hank, your presumption that I need a primer is incorrect. My question is well-posed and quite specific; there is no need to cite glossaries or spurious textbook material. If you cannot answer it, that is fine; please let someone else try. Thanks.
Hank Roberts says
> no need …
I haven’t cited either a glossary or a textbook; either might help.
Best of luck, I’m done.
Lloyd Flack says
Gavin,
What is the story behind the non-launch of DSCOVR? Internal NASA politics, outside political pressure, or what?
Also, isn't another similar satellite in the L2 position required as well, to observe the night-time changes? After all, much of the trend comes from the reduction of the re-radiation of heat at night.
Hank Roberts says
Some source material:
http://mitchellanderson.blogspot.com/2008/03/revealed-bush-killed-dscovr-mission.html
(FOIA results)
http://www.scribd.com/doc/2318466/Scientis-Letters-Only
http://www.sciencemag.org/cgi/content/citation/311/5762/775c
L2 is
— about 4x as far as the Moon, and
— would look at Earth directly against the background of the Sun, not the best way to get consistent readings of Earth’s night side.
Hank Roberts says
Hey, good news! I’m wrong, smart people knew better, and plans exist:
L-1 and L-2 observatories for Earth science in the post-2010 era
Wiscombe, W.; Herman, J.; Valero, F.
Geoscience and Remote Sensing Symposium, 2002 (IGARSS '02), IEEE International
Volume 1, 2002, pp. 365–367
Digital Object Identifier 10.1109/IGARSS.2002.1025041
Summary: Twin observatories 1.5 million km from Earth along the Earth-Sun line offer revolutionary possibilities for Earth observation and scientific progress.
http://start1.jpl.nasa.gov/caseStudies/eao-l2.cfm
The Sun-Earth L2 point is a location from which a spacecraft would see Earth permanently eclipse the Sun, leaving only a thin ring of sunlight (called the solar annulus) surrounding the planet. It’s a relatively stable position, thanks to the combined gravitational effects of the Sun and Earth. As Earth orbits the Sun, a spacecraft at L2 would remain about 1.5 million km (about one million miles) above Earth’s nightside.
This configuration makes it a uniquely desirable place from which to study long-term changes in Earth’s atmosphere. Spectrometers stationed at L2 would enable scientists to analyze the atmosphere by observing its effect on the sunlight shining through it.
Our task was to develop a concept for a space telescope mission to make a detailed study, from the L2 vantage point, of the atmosphere’s constituents and dynamics over a span of 10 years. The hypothetical launch date was set at 2025 to 2030, to allow time for needed technology improvements. …
Rod B says
Hank, et al: How much better is the spectrum data from the L1/L2 satellites than regular orbiting satellites?
Ray Ladbury says
Rod and Hank,
On L1 and L2: first, the advantage is that the satellites have a mostly constant view of Earth rotating beneath them. I say mostly constant because the satellites actually orbit the L1/L2 point. The distance is also large enough that the satellites have a full view of the hemisphere they focus on. There is also the advantage that you don't get variations due to eclipses of the Sun (i.e. pretty constant illumination).
How good the spectral measurements are depends on the instruments, but the Lagrange points provide a good platform for instrument optimization. How good can they get? Well, the James Webb Space Telescope’s detectors will have 6 electron read noise (we hope)!
FWIW, DSCOVR/Triana is now out of mothballs. Not sure exactly what will be done to its instrument suite, but it looks like it might fly in some form or another.
Hank Roberts says
Rod, recall I’m just some guy on a blog and know nothing (grin), I’m just pointing to sources I find.
Short answer: a consistent location — a stable point of view — has been lacking, and will be invaluable in collecting satellite data on Earth's climate. Many open questions could have been resolved by now. See the various longer answers about the planned mission for Triana/DSCOVR; they are easy to find.
A shotgun search of Google here:
http://www.google.com/search?q=satellite+data+various+sets+pieced+together
Turns up, just as an example:
PMOD vs ACRIM (part 2) 7/27/07 …piecing the various satellite data series together to form a single continuous data set is a rather complex process.
tamino.wordpress.com/2007/07/27/pmod-vs-acrim-part-2/
Hank Roberts says
PS — an article worth reading in full, on topic:
http://www.geolsoc.org.uk/gsl/geoscientist/features/page2617.html
Excerpt below only the closing paragraphs; read the article to understand why they reach this conclusion.
——-
Current atmospheric carbon dioxide concentrations, at over 380ppm, already exceed the peak level of carbon dioxide during past interglacials that we have measured (from air bubbles trapped in ice cores by the European Project for Ice Coring in Antarctica (EPICA)) going back over 400,000 years. Consequently, even though the full climate impact of this greenhouse ‘climate forcing’ has not yet become manifest, it has been delivered.
When might we expect to see massive methane hydrate dissociation? This is a good question but (currently) a complete unknown. One thing we do know is that it is unlikely to be before the Earth warms in excess of the last glacial maximum; we may therefore have a few decades of complete safety left to us, at the very least. Second, given that oceanic mixing times are in the order of centuries, it may be we have a few centuries to go at the most. (Though remember: the first 300m or so of the ocean have already begun to warm – the fuse is lit.)
…
The IPCC 2007 Assessment skates around the early Eocene-analogue issues. Its chapter on palaeoclimates does have a subsection on the early Eocene and the chapter on the atmosphere does cover the present CIE due to fossil fuel release and deforestation. However otherwise the IPCC do not connect the two. The closest it comes to making a definitive statement on the Eocene event as an analogue is:
“Although there is still too much uncertainty in the data to derive a quantitative estimate of climate sensitivity from the PETM [Palaeocene Eocene Thermal Maximum], the event is a striking example of massive carbon release and related extreme climatic warming.”
Clearly the implications for future research are considerable. Research into the biosphere processes of the early Eocene (or the Toarcian, another CIE event) simply has not had anything like the multi-million pound investment that other areas of climate science have been afforded. If it is not in the literature then it is not in the IPCC assessments. However now that those few scientists who have worked on the Eocene (and Toarcian) have done enough to demonstrate that past CIE events very likely represent palaeoanalogues to the warming we are currently inducing, we now need to grapple with the detail.
All this will come as no surprise to Geoscientist readers, because last year some members of the Stratigraphy Commission flagged Eocene and Toarcian climate analogues as a priority for policy makers. Perhaps it is time to consider the IETM as ‘a striking example’ that warrants more than a brief subsection within an IPCC assessment chapter?
Hank Roberts says
Yeek!
http://www.nature.com/nature/journal/v427/n6970/abs/nature02172.html
Letters to Nature
Nature 427, 142-144 (8 January 2004) | doi:10.1038/nature02172; Received 16 June 2003; Accepted 5 November 2003
Critically pressured free-gas reservoirs below gas-hydrate provinces
Matthew J. Hornbach, Demian M. Saffer and W. Steven Holbrook
Department of Geology and Geophysics, University of Wyoming, Laramie, Wyoming 82071, USA
Correspondence to: Matthew J. Hornbach, Email: mhornbac@uwyo.edu
Palaeoceanographic data have been used to suggest that methane hydrates play a significant role in global climate change. The mechanism by which methane is released during periods of global warming is, however, poorly understood. In particular, the size and role of the free-gas zone below gas-hydrate provinces remain relatively unconstrained, largely because the base of the free-gas zone is not a phase boundary and has thus defied systematic description. Here we evaluate the possibility that the maximum thickness of an interconnected free-gas zone is mechanically regulated by valving caused by fault slip in overlying sediments. Our results suggest that a critical gas column exists below most hydrate provinces in basin settings, implying that these provinces are poised for mechanical failure and are therefore highly sensitive to changes in ambient conditions. We estimate that the global free-gas reservoir may contain from one-sixth to two-thirds of the total methane trapped in hydrate. If gas accumulations are critically thick along passive continental slopes, we calculate that a 5 °C temperature increase at the sea floor could result in a release of approx. 2,000 Gt of methane from the free-gas zone, offering a mechanism for rapid methane release during global warming events.
——-
Cited by ten more recent articles:
http://www.nature.com/cited/cited.html?doi=10.1038/nature02172
Hank Roberts says
Some of the citing articles are from 2008. Is the concept of “valving” as new to the professionals as it seems to be here?
I’d always had the notion that methane was expected to trickle out slowly with slow temperature change. But this seems … different.
jay says
Just a short question from an interested party (definitely a beginner) – did the temperature of the sun rise before the fairly recent switch of the poles? If the magnetic poles are meandering at a geologically rapid pace, would this have any effect on the temperature of the earth’s core?