A few weeks ago I was at a meeting in Cambridge that discussed how (or whether) paleo-climate information can reduce the known uncertainties in future climate simulations.
The uncertainties in the impacts of rising greenhouse gases on multiple systems are significant: the potential impact on ENSO or the overturning circulation in the North Atlantic, probable feedbacks on atmospheric composition (CO2, CH4, N2O, aerosols), the predictability of decadal climate change, global climate sensitivity itself, and perhaps most importantly, what will happen to ice sheets and regional rainfall in a warming climate.
The reason why paleo-climate information may be key in these cases is because all of these climate components have changed in the past. If we can understand why and how those changes occurred then, that might inform our projections of changes in the future. Unfortunately, the simplest use of the record – just going back to a point that had similar conditions to what we expect for the future – doesn’t work very well because there are no good analogs for the perturbations we are making. The world has never before seen such a rapid rise in greenhouse gases with the present-day configuration of the continents and with large amounts of polar ice. So more sophisticated approaches must be developed and this meeting was devoted to examining them.
The first point that can be made is a simple one. If something happened in the past, that means it’s possible! Thus evidence for past climate changes in ENSO, ice sheets and the carbon cycle (for instance) demonstrate quite clearly that these systems are indeed sensitive to external changes. Therefore, assuming that they can’t change in the future would be foolish. This is basic, but not really useful in a practical sense.
All future projections rely on models of some sort. Dominant in the climate issue are the large scale ocean-atmosphere GCMs that were discussed extensively in the latest IPCC report, but other kinds of simpler or more specialised or more conceptual models can also be used. The reason those other models are still useful is that the GCMs are not complete. That is, they do not contain all the possible interactions that we know from the paleo record and modern observations can occur. This is a second point – interactions seen in the record, say between carbon dioxide levels or dust amounts and Milankovitch forcing imply that there are mechanisms that connect them. Those mechanisms may be only imperfectly known, but the paleo-record does highlight the need to quantify these mechanisms for models to be more complete.
The third point, and possibly the most important, is that the paleo-record is useful for model evaluation. All episodes in climate history (in principle) should allow us to quantify how good the models are and how appropriate our hypotheses for past climate change are. It’s vital to note the connection though – models embody much data and many assumptions about how climate works, but for their climate to change you need a hypothesis – like a change in the Earth’s orbit, or volcanic activity, or solar changes etc. Comparing model simulations to observational data is then a test of the two factors together. Even if the hypothesis is that a change is due to intrinsic variability, a simulation designed to look for the magnitude of intrinsic changes (possibly due to multiple steady states or similar) is still a test of both the model and the hypothesis. If the test fails, it shows that one or other element (or both) must be lacking, or that the data may be incomplete or mis-interpreted. If it passes, then we have a self-consistent explanation of the observed change that may, however, not be unique (but it’s a good start!).
But what is the relevance of these tests? What can a successful model of the impacts of a change in the North Atlantic overturning circulation or a shift in the Earth’s orbit really do for future projections? This is where most of the attention is being directed. The key unknown is whether the skill of a model on a paleo-climate question is correlated to the magnitude of change in a scenario. If there is no correlation – i.e. the projections of the models that do well on the paleo-climate test span the same range as the models that did badly, then nothing much has been gained. If however, one could show that the models that did best, for instance at mid-Holocene rainfall changes, systematically gave a different projection, for instance, of greater changes in the Indian Monsoon under increasing GHGs, then we would have reason to weight the different model projections to come up with a revised assessment. Similarly, if an ice sheet model can’t match the rapid melt seen during the deglaciation, then its credibility in projecting future melt rates would/should be lessened.
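To make the weighting idea concrete, here is a minimal sketch (Python); the projections, the paleo skill scores and the inverse-error weighting scheme are all invented for illustration and are not taken from any real ensemble:

```python
# Hypothetical sketch of weighting model projections by paleo skill.
# All numbers and the weighting choice are invented for illustration.
import numpy as np

# Future warming projected by five hypothetical models (deg C)
projections = np.array([2.1, 2.8, 3.4, 4.0, 4.6])

# RMS error of each model against some paleo target (e.g. mid-Holocene
# rainfall); smaller error = better paleo skill.  Values are made up.
paleo_rmse = np.array([0.4, 0.9, 0.5, 1.5, 0.7])

# One simple (and debatable) choice: inverse-square-error weights
weights = 1.0 / paleo_rmse**2
weights /= weights.sum()

print(f"unweighted mean projection:     {projections.mean():.2f} C")
print(f"paleo-weighted mean projection: {np.sum(weights * projections):.2f} C")
```

Whether such a weighting is justified is exactly the open question: it only helps if paleo skill and future behaviour are actually correlated.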
Unfortunately, apart from a few coordinated experiments for the last glacial period and the mid-Holocene (i.e. PMIP) with models that don’t necessarily overlap with those in the AR4 archive, this database of model results and tests just doesn’t exist. Of course, individual models have looked at many different paleo-climate events ranging from the Little Ice Age to the Cretaceous, but this serves mainly as an advance scouting party to determine the lay of the land rather than a full road map. Thus we are faced with two problems – we do not yet know which paleo-climate events are likely to be most useful (though everyone has their ideas), and we do not have the databases that allow you to match the paleo simulations with the future projections.
In looking at the paleo record for useful model tests, there are two classes of problems: what happened at a specific time, or what the response is to a specific forcing or event. The first requires a full description of the different forcings at one time, the second a collection of data over many time periods associated with one forcing. An example of the first approach would be the last glacial maximum where the changes in orbit, greenhouse gases, dust, ice sheets and vegetation (at least) all need to be included. The second class is typified by looking for the response to volcanoes by lumping together all the years after big eruptions. Similar approaches could be developed in the first class for the mid-Pliocene, the 8.2 kyr event, the Eemian (last inter-glacial), early Holocene, the deglaciation, the early Eocene, the PETM, the Little Ice Age etc. and for the second class, orbital forcing, solar forcing, Dansgaard-Oeschger events, Heinrich events etc.
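For the second class, the workhorse is a superposed-epoch (composite) analysis: line up the records relative to each event and average. A minimal sketch on a purely synthetic series, with made-up eruption dates and an imposed cooling, just to show the mechanics:

```python
# Superposed-epoch composite of the years following large eruptions.
# The temperature series and eruption years are synthetic, for illustration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1000, 2000)
temps = rng.normal(0.0, 0.2, size=years.size)      # stand-in annual anomalies

eruption_years = [1258, 1452, 1600, 1815, 1883, 1991]
for y in eruption_years:                            # impose a known cooling signal
    i = y - years[0]
    temps[i:i + 3] -= np.array([0.4, 0.25, 0.1])

window = 5                                          # years after each event
composite = np.mean(
    [temps[y - years[0]: y - years[0] + window] for y in eruption_years], axis=0
)
print("composite post-eruption anomaly (C):", np.round(composite, 2))
```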
But there is still one element lacking. For most of these cases, our knowledge of changes at these times is fragmentary, spread over dozens to hundreds of papers and subject to multiple interpretations. In short, it’s a mess. The missing element is the work required to pull all of that together and produce a synthesis that can be easily compared to the models. That this synthesis is only rarely done underlines the difficulties involved. To be sure there are good examples – CLIMAP (and its recent update, MARGO) for the LGM ocean temperatures, the vegetation and precipitation databases for the mid-Holocene at PMIP, the spatially resolved temperature patterns over the last few hundred years from multiple proxies, etc. Each of these has been used very successfully in model-data comparisons and has been hugely influential inside and outside the paleo-community.
It may seem odd that this kind of study is not undertaken more often, but there are reasons. Most fundamentally it is because the tools and techniques required for doing good synthesis work are not the same as those for making measurements or for developing models. It could in fact be described as a new kind of science (though in essence it is not new at all) requiring, perhaps, a new kind of scientist. One who is at ease in dealing with the disparate sources of paleo-data and aware of the problems, and yet conscious of what is needed (and why) by modellers. Or additionally modellers who understand what the proxy data depends on and who can build that into the models themselves making for more direct model-data comparisons.
Should the paleo-community therefore increase the emphasis on synthesis and allocate more funds and positions accordingly? This is often a contentious issue since whenever people discuss the need for work to be done to integrate existing information, some will question whether the primacy of new data gathering is being threatened. This meeting was no exception. However, I am convinced that this debate isn’t the zero sum game implied by the argument. On the contrary, synthesising the information from a highly technical field and making it useful for others outside is a fundamental part of increasing respect for the field as a whole and actually increases the size of the pot available in the long term. Yet the lack of appropriately skilled people who can gain the respect of the data gatherers and deliver the ‘value added’ products to the modellers remains a serious obstacle.
Despite the problems and the undoubted challenges in bringing paleo-data/model comparisons up to a new level, it was heartening to see these issues tackled head on. The desire to turn throwaway lines in grant applications into real science was actually quite inspiring – so much so that I should probably stop writing blog posts and get on with it.
The above condensed version of the meeting is heavily influenced by conversations and talks there, particularly with Peter Huybers, Paul Valdes, Eric Wolff and Sandy Harrison among others.
Hank Roberts says
Barton, I don’t know which one of the many to suggest!
http://www.google.com/search?q=debunking+%22400+scientists%22+climate
Ray Ladbury says
Greg Smith, presuming you are actually serious about wanting to learn about climate modeling, you have a steep learning curve. Your criticisms to date don’t exactly demonstrate that you’ve mastered the basics. For this reason, I suggest you start with some systematic study of the basic physics. Ray Pierrehumbert has written a pretty darn good textbook with problems. Start here:
http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html
Shorter accounts can be found on this website, e.g. here
https://www.realclimate.org/index.php/archives/2007/08/the-co2-problem-in-6-easy-steps/
and here
https://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument/
If what you are looking at is why climate models have adopted a value for CO2 forcing of 3 degrees celsius per doubling, that is a bit more involved. There are more than 20 GCMs actively being used and they all use somewhat different methods for constraint. The fact that they agree as well as they do ought to tell you something. The fact that where they do not agree, all the uncertainty is on the high side ought to tell you more. If you are serious, I am afraid that you will have to learn the science.
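For orientation, the arithmetic behind numbers like that usually runs along these lines (a back-of-envelope sketch, not the output of any particular GCM): the commonly used simplified fit for CO2 forcing is dF = 5.35 ln(C/C0) W/m2, roughly 3.7 W/m2 per doubling, and the warming then depends on the sensitivity parameter the feedbacks give you:

```python
# Back-of-envelope: forcing per CO2 doubling and the resulting warming for a
# few assumed sensitivity parameters.  The lambda values are illustrative only.
import math

dF = 5.35 * math.log(2.0)            # ~3.7 W/m2 per doubling (simplified fit)
print(f"forcing per doubling: {dF:.2f} W/m2")

for lam in (0.3, 0.5, 0.8):          # assumed sensitivities, K per (W/m2)
    print(f"lambda = {lam} K/(W/m2) -> dT = {lam * dF:.1f} C per doubling")
```

The no-feedback value sits near the low end of that range; the ~3 C figure corresponds to the higher, feedback-inclusive sensitivities.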
As to dynamical models in general, do you understand the difference between them and statistical models?
pat n says
Re 187, Ferdinand,
As shown on the Eemian Vostok Ice Core plot … it’s clear that methane lagged temperature until about 129,000 years ago but that temperature, methane and CO2 peaked at about the same time (128,000 years ago). The peak CO2 concentration 128,000 years ago was near 290 ppm – much lower than current.
It’s also clear that things other than temperature were at work during the Eemian – thus I believe we cannot assume, as you said, that … “A temperature increase of 3°C will not give more than 100 ppbv more methane” …
Hank Roberts says
Barton, if you meant the newer “500 Scientists” bogus list from Heartland, that’s debunked here:
http://scienceblogs.com/deltoid/2008/05/i_must_be_psychic.php
Rod B says
Martin (153), I agree. My supposition that N2 (might) convert some of its translation energy to blackbody infrared was just a blue-sky thought. I had no reason to believe it.
Also, if a CO2 molecule emits (radiates) energy that it just absorbed into a precise quantized energy level, wouldn’t it radiate out exactly the same energy regardless of its environmental temperature? Collectively the sum emission from a large number of molecules would be less than the energy absorbed (and look “cooler”), but that’s because a bunch do not re-emit at all.
Ray (154), I still disagree with your distinctions, though not the concept. Blackbody broadband radiation characteristics are different from quantized line spectrum radiation. It is true that a molecule absorbs precise quantized energy packets (and re-emits similar ones) from a pool of broadband blackbody radiation and doesn’t know the difference. But the energies and genesis are not directly the same. On the other hand, I guess radiation is radiation.
You say, “…diatomic gasses such as N2 and O2 will only radiate as a result of a collisional interaction that alters their magnetic symmetry.” This is new to me and interesting. Does this happen often?
Paul Middents (155) says, “…What is problematic…?” Whether a molecule actually fills one of its energy pots from the available environmental radiation is a quantum mechanical process, and it might or might not.
Geoff Wexler (159) says, “…Matter can only radiate by jumping between quantum levels…”
Not true. All of your radio, TV, cellular, e.g. radiation does not happen that way. I’m with you on the rest of your post.
I have been tempted as a skeptic to make something out of the extremely minor amount of CO2 e.g. causing such major problems. But it is not valid. There are all kinds of things that rely on relatively minute amounts to accomplish big things: most catalyst aided reactions for instance; a teensy amount of poison can fell a 300lb guy in seconds. CO2 seems to act mainly as a shovel: scoop up the radiation, shovel it out to N2 or O2; a little shovel can handle tons of coal.
Chris N says
With my training in chemical engineering, I’m wondering if the climate is behaving like a distillation column. Like adding more steam (i.e., heat) to a reboiler, GGs absorb more of the heat that would have been lost to space. In a distillation column, when more steam is sent to the reboiler (i.e., a heat exchanger at the bottom of the tower), then more liquid is vaporized and sent up the tower. If unchecked, more of the higher boiling component (think water and ethanol as the two main components inside the tower) is sent up the tower. Eventually, the purity of the ethanol going overhead as the distillate decreases as the concentration of water increases (not good!). So, how does one compensate if steam to the reboiler (i.e., the source of heat to the system) cannot be cut back? Easy, you send some of the condensed overhead stream (called reflux) back down the tower as a liquid to cool it down. So, if more heat is being redirected back to the earth’s surface due to GGs, then it appears to me that the atmosphere over the oceans (such as in the SH) causes more water to be vaporized. But instead of producing a positive feedback, it condenses at some altitude as it loses heat. Eventually, this heat finds its way out to space. If all this happens very fast (i.e., it’s in equilibrium), I don’t see how “climate sensitivity” occurs.
So, to continue the analogy, the incoming irradiance plus the additional heat from GG absorption is like the steam to the reboiler, and the coldness of outer space is the overhead condenser. When more heat is applied (via GGs), all that means is more vaporization and condensation going on between the earth’s surface and outer space. This is exactly what happens in a tower when more steam and reflux is added. There is more vapor traffic up the tower due to additional heat added to the reboiler, and when distillate is sent down the tower to compensate, there is more liquid traffic going down the tower. Overall, this does tend to increase the temperature at the bottom of the tower (or the earth’s surface, if you will) and lower the temperature at the top of the tower (or the stratosphere, if you will). However, both temperatures reach a plateau. The temperature does not run away. In the example above, the overhead and bottoms temperatures of the distillation tower reach their respective boiling points at the pressure of the tower. So, if the tower was run at atmospheric pressure, the bottoms stream can never exceed 212 F (the bp of pure water) and the top stream can never exceed 176 F (the bp of the ethanol-water azeotrope, if I recall).
Maybe this is why we are seeing no temperature increases across the SH (mostly covered by oceans), and the highest temperature increases across the NH, particularly Asia (the largest land mass of all). In essence, there is not enough moisture to carry the heat into the atmosphere where it can “lose” it (via condensation) at higher altitudes. It also emphasizes the importance of convection, since it is convection that transfers heat up the tower.
Rod B says
All of the above, et al: I greatly appreciate the responses re atmospheres (gases) emitting blackbody-type radiation. They do, however, illuminate a significant dilemma. Part of Kirchhoff’s assertions (law) was the universality of blackbody radiation (which, incidentally, was known of before Planck solved the UV problem with his formula and quantizing), which said that all bodies radiate per their temperature regardless of the body’s form, shape, or substance. Planck continued this belief, and Einstein and others confirmed it, Einstein explicitly discussing gases, e.g., and going to the extreme and claiming a single atom/molecule (the extreme of a gas) has a characteristic radiation. However, the latter was never fully recognized by physicists (and is really hard to visualize, especially if the single atom’s radiation is continuous broadband!), and this was the initial chink in the armor.
Most (at least all of the few that I’ve seen, Prof. Kimberly Strong of University of Toronto, PHY 315 Course Notes, e.g.) atmospheric science textbooks develop energy flux equations and results by dividing an atmosphere into multiple layers and analyzing flux, absorption (with absorption constants and optical depth stuff) and energy transfer using Planck’s formulation of blackbody radiation. On the other hand it can be inferred that this is just a construct and might not reflect physical realities – it’s based on monochromatic formulations (Kirchhoff, Beer-Lambert, Schwarzschild, et al), e.g., and has to be integrated somehow to cover the full spectrum. But, back to the first hand, they do calculate Earth’s actual atmospheric absorption on a broadband Planck distribution, though the actual absorption is a very small portion of the incoming SW and LW radiation. The SW absorption process is materially different from LW IR absorption, however, and might not be a conflict.
One recent paper (Robitaille of Ohio SU, 2006 – citation for Hank’s appetite :-) ) explicitly refutes the universality stuff. It goes a step too far though, IMO, by stating only solids can generate blackbody radiation. The paper nimbly ignores stars for example, which until maybe white dwarf or neutron stage are anything but solid. But then of course a plethora of respected scientists and others, many posting here, refute a gas’ ability to radiate ala Planck, except in the case of very dense gases.
Continuing the back and forth, if gases (or equivalent) do not radiate ala blackbody, how is the Cosmic Background Radiation explained? (I’ve asked this here before to no avail.)
Another part of Kirchhoff’s law(s) does pose a problem, as some here have asserted: emissivity equals absorptivity. (NB! This is not the same as emission = absorption, which is not true.) If an atmosphere is emitting blackbody radiation it also must have the capability to equally absorb blackbody-type radiation, though this again is a monochromatic property strangely within a broadband concept that I’ll just ignore for now – it gives me a headache. This means that the selected atmospheric surface– tropopause for discussion here (a strange two-sided surface at that… but never mind) – in addition to radiating outward must absorb from below. Since the temp of the tropopause is less than the surface it must be a net blackbody absorber of the radiation emanating from the surface. This is a problem since N2 and O2 seem to have no atomic/molecular mechanism to absorb such radiation. Unless it absorbs it directly into translation energy modes, which to me is not intuitively obvious, or some other obscure process unique to blackbody broad spectrum radiation – really weird. I don’t really know. Anybody??
This is all relevant to my line of inquiry and skepticism from two angles. One, this physical process, critical to GW/AGW, ought to be pretty well-known and clear, but it doesn’t seem to be. Two, I’m still trying to account for the “massive” downward LW radiation of 324 watts, just 66 watts less than the surface’s IR emission and 129 watts more than is emitted from atmosphere and clouds and escaping into space. Clouds I understand can radiate a bunch – blackbody type too (?); 67 watts absorbed from SW insolation is a noticeable source, though I don’t know how that gets converted to down welling infrared. CO2 and H2O re-emissions seem woefully short, the vast majority of their absorbed IR going to heat N2 and O2 via collision. Maybe then, my thought, it is enhanced by atmospheric blackbody down welling. Though even if the atmosphere did radiate ala Planck, it doesn’t seem it would radiate much.
I do not recall ever seeing a spectral analysis of the “back draft” infrared LW radiation reaching the surface. Is there such a thing?
See what I’m curious about and getting at?
Rod B says
Get off it, dhogaza (184). Greg S. did not say “if we can’t understand everything, we understand nothing”. He said it is arrogant to claim 100% certain knowledge of a system that is less than certain. And who is anti-climate scientist??
Rod B says
Ray (186), the fundamental source of radiation is an accelerating charge. You can easily get that from ionized molecules or substances that otherwise have “free” electrons. The energy “states” of free/ionized electrons are continuous and have no quantized modes or levels. In some cases vibrating whole molecules can also display charge acceleration.
Barton Paul Levenson says
Rod B writes:
Yes, but who’s claiming that? It’s a straw-man argument.
Mike says
Ray Ladbury post #202, says “If what you are looking at is why climate models have adopted a value for CO2 forcing of 3 degrees celsius per doubling, that is a bit more involved.”
You could start with the IPCC report, section 8.6.3.2. There, they say that the basic Equilibrium Climate Sensitivity (ECS) is 1.2 (this compares OK with the 1.1 in Stephen E Schwartz’s paper http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf ). They then bump it up to 1.9 for “water vapor feedback”, and from there to 3.2 for “cloud feedback”. They also make it abundantly clear that they have no idea how clouds work (that permeates the whole IPCC report), and that the cloud “feedback” they use for ECS is a major uncertainty.
So it appears that the figure they use for ECS, 3.2 deg C per doubling of CO2, is thoroughly unreliable.
Ray Ladbury says
Mike, Actually, to say that the cloud feedback is unconstrained is inaccurate. Yes there are uncertainties, but it is extremely difficult to get a model that comes even close to paleoclimate data without such a feedback. What is more, if you look at the uncertainties, they are almost all on the high side rather than the low side. If you are hoping that somehow clouds or aerosols will bail us out and make the climate crisis go away, you are delusional.
Ray Ladbury says
Chris N., The problem with your argument is that there is still a significant amount of CO2 well above the cloudtops where almost all the water has condensed out. CO2 remains well mixed even into the stratosphere, so it adds to the greenhouse effect until adiabatic cooling peters out. The reason the Southern hemisphere has warmed less than the North does have to do with the preponderance of water in the South. However it has more to do with the oceans as a heat reservoir than it does with the radiative properties of cloud condensation.
Martin Vermeer says
Rod B #207:
What is the problem? The CO2 in the air absorbs indeed radiation coming from below (and from above, and from the side), and at the same time emits both upward and downward (and sideward) — but only for frequencies within the CO2 absorption/emission band, where the air is sufficiently opaque thanks to the presence of CO2.
For frequencies outside the CO2 absorption/emission band (ignoring water vapour and other GHG), nothing happens — the radiation flies off from the surface into space unimpeded, N2 and O2 molecules do nothing, just bounce with each other and the CO2 molecules to maintain local thermodynamic equilibrium. The air is transparent at these frequencies — this is how the desert at night cools so quickly.
…and note that this (the absorption-emission-absorption thing) happens not only at the tropopause, but throughout the troposphere [and indeed into the stratosphere for the core of the absorption band]. Every layer of air is a little cooler than the one below it, and a little warmer than the one above it. They are all emitting and absorbing to/from their neighbours, and the net effect is that heat slowly migrates upward. As I pointed out earlier, heat transport happens along (and requires) a temperature gradient.
I have some difficulty visualizing what your problem is, as these things have been obvious to me since my student days (actually I learned them for the Sun, not the Earth). I now realize that they are indeed difficult.
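For anyone who wants numbers to go with that layer picture, here is a deliberately over-simplified sketch: N perfectly absorbing (“grey”) layers in radiative equilibrium. Real air is only opaque inside the GHG bands, so this toy overstates the effect, but it does show each layer sitting warmer than the one above it, with the downwelling flux at the surface coming from the lowest, warmest layer:

```python
# Toy N-layer grey atmosphere in radiative equilibrium (illustration only).
# Layer k (k = 1 at the top) satisfies sigma*T_k^4 = k * sigma*T_e^4.
sigma = 5.67e-8                  # Stefan-Boltzmann constant, W m-2 K-4
T_e = 255.0                      # effective emitting temperature of Earth, K
N = 2                            # number of opaque layers (assumed)

for k in range(1, N + 1):
    T_k = T_e * k**0.25
    print(f"layer {k}: T = {T_k:.0f} K, emits {sigma * T_k**4:.0f} W/m2 up and down")

T_s = T_e * (N + 1)**0.25        # surface temperature in this toy model
print(f"surface:  T = {T_s:.0f} K")
```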
Alan K says
#189 “James Hansen’s earliest circulation model from the 1980s included a random volcanic eruption and the model prediction matched the short-term aerosol cooling effect of the 1991 Pinatubo eruption fairly closely”
did/does that model agree or disagree with the Keenlyside/Leibniz projections for a short term cooling?
Martin Vermeer says
Rod, you could do a lot worse than read up on Ray Pierrehumbert’s upcoming book. It’s not an easy read, but the fundamentals are very well explained. You would be interested in Chap. 3 (and also 2 and 4).
http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html
Ray Ladbury says
Rod,
First, it is true that free electrons can absorb and radiate at any frequency. However, except in metals and plasmas, free electrons are quite rare. Ionization is a high-energy process–with activation energies on the order of a few eV. If you do the math, at room temperature only a few thousand molecules per mole of gas are ionized. This compares with a few percent of CO2 molecules in the vibrational state corresponding to 15 micron absorption. The fact of the matter is that CO2 simply won’t radiate outside of its quantized energy transitions–there is no continuum of radiation corresponding to the blackbody curve. Instead, what you have is that proportion of photons in the absorption band coming into thermal equilibrium with the gas and being present in the amount expected if the phantom Planck curve for that temperature were present. And for the rest of the spectrum, the gas is transparent–it’s as if it weren’t there. That’s what is meant by grey-body. Now, multibody interactions can distort the absorption (and emission) somewhat, so you get some absorption in the tails of the absorption line, but this is feeble.
In a solid, of course, there are many more vibrational states, so you have transitions corresponding to more energies, and more of the black-body curve gets filled in. And for a metal, the free electrons fill the curve in pretty well. Even, so, there is no such thing as a perfect blackbody, because there is no such thing as a perfect absorber/emitter. Since photons are noninteracting, the only way the photon gas comes into thermal equilibrium is by being absorbed and emitted by surrounding matter.
Note: this is why you have different portions of the emission curve for Earth seeming to correspond to blackbodies at different temperatures. The temperature corresponds to the altitude at which an emitted photon has a snowball’s chance in hell of escaping without being reabsorbed by another gas molecule.
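To put rough numbers on “do the math” (a back-of-envelope comparison with assumed representative energies): the Boltzmann factor for the 15-micron CO2 bending mode at ~288 K is a few percent, while for ionisation energies of order 10 eV it is vanishingly small, which is why thermally generated free electrons play no role here:

```python
# Boltzmann factors at ~288 K: 15-micron vibrational excitation vs ionisation.
# E_vib from E(eV) ~ 1.24/lambda(um); E_ion is a representative assumed value.
import math

k_B = 8.617e-5                   # Boltzmann constant, eV/K
T = 288.0                        # K

E_vib = 1.24 / 15.0              # ~0.083 eV for a 15-micron quantum
E_ion = 13.8                     # eV, order-of-magnitude molecular ionisation energy

print(f"exp(-E_vib/kT) = {math.exp(-E_vib / (k_B * T)):.3f}   (a few percent)")
print(f"exp(-E_ion/kT) = {math.exp(-E_ion / (k_B * T)):.1e}  (negligible thermally)")
```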
As to absorption of IR by N2 and O2, see:
http://www.iop.org/EJ/article/0022-3700/10/3/018/jbv10i3p495.pdf?request-id=094497b6-2e62-4983-9806-5396039081a5
Basically, two like molecules have no dipole, so they don’t radiate when they rotate, vibrate, etc. However, when there’s a collision, you can induce a dipole, and then you can get absorption. Again, there’s an energy threshold, so it’s not a terribly common process.
Martin Vermeer says
Mike #211, I am a bit surprised, especially in the light of the well-known uncertainties in the cloud feedback acknowledged by the IPCC, at your confidence that ECS would be as little as 1.2°C. The water vapour amplification is pretty robust; there isn’t a whole lot of wiggle room there. To get from an ECS before clouds of 1.9 to one after clouds of 1.2, you need a negative cloud feedback of -60%. That’s a lot. And for all we know, if it really were that big it could just as well be positive. A positive feedback of +60% would produce an ECS of no less than 4.75°C (!).
The IPCC value of 3.2°C, which contrary to the above speculative ones has some science to back it up, would then correspond to +40%.
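For anyone wanting to check those numbers, they follow from the usual feedback-gain form ECS = ECS_before / (1 - f); this is my reconstruction of the arithmetic, not a quotation from the IPCC:

```python
# Feedback-gain arithmetic behind the numbers above (assumed gain form).
ecs_before_clouds = 1.9                      # deg C per doubling, before clouds

for f in (-0.6, 0.0, 0.4, 0.6):              # assumed cloud feedback fractions
    print(f"cloud feedback {f:+.0%}: ECS = {ecs_before_clouds / (1 - f):.2f} C")
```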
Rod B says
ps — and to say, as Martin does, that the only difference between line and blackbody radiation is simple emissivity, is torturous contortion. It’s blackbody if it has an emissivity of 1.0 at 548.35 nm wavelength and 0.0 everywhere else??!!??
Chris Colose says
Mike (211)
The climate sensitivity for 2x CO2 is 1.2 C without feedbacks (except for the T^4 feedback from Planck). We know with high confidence that the WV feedback is strongly positive. The Schwartz paper is interesting, but has been discussed here at RC, and probably is not a very good estimate because of using a “1-box” model of the climate system, whereby the oceans/atmosphere/land all seem to heat up in sync with each other. In fact the oceans do not heat up as quickly as land and a lot of the heat is “in the pipeline”, so if you level off changing conditions, you have to allow the system to return to equilibrium for a true climate sensitivity.
As for clouds, while there is considerable uncertainty in the magnitude (and sign), we have strong evidence against a substantial negative feedback, and so against a low climate sensitivity.
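To illustrate the one-box problem, here is a minimal two-box (mixed layer plus deep ocean) energy balance sketch. Every parameter is an assumed round number, chosen only to show that the transient surface warming sits well below the equilibrium value because heat is still flowing into the deep ocean:

```python
# Two-box energy balance model (illustrative parameters only).
lam = 1.2               # W m-2 K-1, feedback parameter (equilibrium ~ 3.7/1.2 C)
gamma = 0.7             # W m-2 K-1, surface-to-deep heat exchange coefficient
C_s, C_d = 8.0, 100.0   # heat capacities, W yr m-2 K-1 (mixed layer, deep ocean)
F = 3.7                 # W/m2, step forcing (roughly a CO2 doubling)

dt, years = 0.1, 200
T_s = T_d = 0.0
for _ in range(int(years / dt)):             # simple forward-Euler integration
    dT_s = (F - lam * T_s - gamma * (T_s - T_d)) / C_s
    dT_d = gamma * (T_s - T_d) / C_d
    T_s += dT_s * dt
    T_d += dT_d * dt

print(f"surface warming after {years} yr: {T_s:.2f} C")
print(f"equilibrium warming F/lambda:     {F / lam:.2f} C")
```

A one-box fit to the early part of such a run would infer a sensitivity well below the true equilibrium value, which is the point about the Schwartz estimate.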
Rod B says
Barton Paul Levenson (196) says, “Geoff Wexler writes: ‘But what about a more careful thought experiment, what would happen to our greenhouse if the oxygen and nitrogen were removed?’
The greenhouse effect would be a bit less due to the lower total pressure.”
Interesting thought question. Wouldn’t the greenhouse effect be considerably less because CO2 would easily approach saturation because it has no convenient way to relax its vibrational energy, which it mostly does through collisions with N2 and O2?
Lawrence McLean says
Re #206, Chris N
I am a Chemical Engineer as well; frankly, you should have more confidence in the climate scientists as represented by the contributors to this site and the IPCC.
I am an Australian, and I can tell you that your line: “Maybe this is why we are seeing no temperature increases across the SH…”, is way off the mark. Check out:
http://www.bom.gov.au/cgi-bin/silo/reg/cli_chg/timeseries.cgi?variable=tmean&region=aus&season=0112
Rod B says
Barton (210): “Yes, but who’s claiming that?” — I dunno; if the shoe fits it can be worn.
“It’s a straw-man argument.” — probably so. I don’t know why dhogaza hammered it, but I didn’t want to let it pass…
Ferdinand Engelbeen says
Re #188 Cobblyworld (and in part #203 pat n),
It is difficult to know the real world influence of CO2 on temperature if there is an overlap in (initial) temperature increase/decrease and CO2 increase/decrease, as is the case in most periods during the past near million years. But there is one period where CO2 didn’t follow temperature, the end of the Eemian (113-105 kyr BP). That period points to a low influence of CO2 on temperature.
Methane did follow temperature more closely than CO2 for all periods, including the end of the Eemian. Thus no solid evidence for the real influence of methane can be deduced from ice cores. There is physical evidence from spectral analyses that the roughly 400 ppbv increase of methane (at the start of the Eemian) would increase heat retention by about 0.4 W/m2, which gives an offset of about 0.12°C (without feedbacks). The total increase was about 9 to 11°C (depending on which temperature proxy is taken). Even with a lot of feedbacks, methane seems to be of not much help in the whole process.
Hansen summed up the forcing caused by the two main GHGs (CO2 and methane), which reached a change of ~3 W/m2. With an increase of 0.6°C at the surface, the increase in heat retention is completely offset.
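A quick check of the no-feedback arithmetic above: 0.4 W/m2 times the Planck-only response of roughly 0.3 K per W/m2 does give about 0.12 K. The simplified square-root forcing fit for methane (overlap term neglected, start and end concentrations assumed) lands in the same ballpark:

```python
# No-feedback response to the quoted methane forcing, plus a rough forcing
# estimate from the simplified square-root fit (inputs assumed; overlap ignored).
import math

dF_quoted = 0.4                    # W/m2, figure quoted above
lambda_planck = 0.3                # K per (W/m2), no-feedback (Planck) response
print(f"dT (no feedbacks) = {dF_quoted * lambda_planck:.2f} K")

M0, M1 = 350.0, 750.0              # ppbv, assumed values for a ~400 ppbv rise
dF_ch4 = 0.036 * (math.sqrt(M1) - math.sqrt(M0))
print(f"simplified CH4 forcing for a {M1 - M0:.0f} ppbv rise: {dF_ch4:.2f} W/m2")
```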
One can have a lot of discussion about the real effect of GHGs, but it seems to me that a small change in albedo over the ice age transitions (a few % more or less clouds e.g.) has more effect than the whole increase/decrease of GHGs.
Anyway, one can not compare the effect of a small increase/decrease in energy over the ice age transitions (when a lot of albedo changes occur) with the effect of the same increase/decrease at one of the more or less bistable situations: either an ice age (to colder ends) or an interglacial (to warmer ends).
The Eemian can be used as analogue of the effect of temperature on methane and CO2 levels in the pre-industrial past. For CO2, there was a quite good linear relationship with temperature of about 8 ppmv/°C. I haven’t calculated it for methane in the full Vostok ice core, but based on the Eemian graph, the relationship is about 40 ppbv/°C, with a maximum of around 700 ppbv at maximum temperature.
That was without human intervention. Nowadays we are at much higher levels of about 1800 ppbv, due to human influences, but these haven’t had much direct effect in the Arctic tundra or oceans. Only a future temperature effect would have an influence, comparable to what happened during the Eemian, which is minor compared to what humans now emit…
dhogaza says
Rod B:
He didn’t say so explicitly, but his strawman argument strongly implies it.
And, if you read Greg Smith’s posts carefully, I’m sure that even you can deduce that despite his initial claim, he is no scientist.
CobblyWorlds says
#211 Mike,
“thoroughly unreliable”?
What, like your assessment of how a 3degC peak of the PDF is calculated?
e.g.
Annan & Hargreaves: “Using multiple observationally-based constraints to estimate climate sensitivity.” Suggests a pdf centred on ~3degC.
Pre-print: http://www.jamstec.go.jp/frcgc/research/d5/jdannan/GRL_sensitivity.pdf
RC’s take on it: https://www.realclimate.org/index.php/archives/2006/03/climate-sensitivity-plus-a-change/
That study combines estimates constrained with observations. So if the “Iris Effect” were working in past events (Into the LGM/volcanic eruptions/20th Century), it’s implicitly included.
If you’re going to need to constrain unknowns in the models, using observations of the real world seems wise, as that’s what’s being modelled.
So it’s not “thoroughly unreliable”, it’s a best estimate with what is available. That is all.
By the way, for those trying to foster complacency from uncertainty, how do YOU demonstrate that a high (>4.5degC) sensitivity is unfeasible?
Hank Roberts says
> it appears
Only if you misread the report. Try Annan or Connolley on this.
Jim Galasyn says
Re the “400 scientists” list in 199:
Martin Vermeer says
Rod B #207:
Curiosity is a great thing… googling for “downwelling infrared spectrum” brought up http://lidar.ssec.wisc.edu/papers/dhd_thes/node3.htm with a nice picture.
I suggest you compare with Dave Archer’s simulator http://geosci.uchicago.edu/~archer/cgimodels/radiation.html
Enter: Sensor Altitude 0, Looking up, Midlatitude Summer for nearly the same spectrum :-)
Jim Galasyn says
More fun fallout from Listgate:
Martin Vermeer says
Yes, very good one. And one that cannot be answered by merely thinking it over. The concept of local thermodynamic equilibrium comes in: the population of the vibration levels, i.e., the percentage of CO2 molecules in a given excited state, depends only on the energy level of that state relative to the ground state [and possibly its degeneracy], and the temperature (which is a sensible concept only for LTE). It doesn’t make any difference how many friendly neighbourhood N2, O2 molecules are hanging around :-)
So your answer is no. Thinking conceptually like this isn’t easy and Ray L is much better at explaining these things.
Ray Ladbury says
Rod B., Nothing says that the collisional relaxation MUST occur with a collision with N2 or O2. It could be Ar, CO2, CH4… It’s just that with lower pressure, collisions are less frequent–as Barton indicated.
Rod B says
Martin (214), I was probably too obscure. The problem is mine, with my contention that the atmosphere radiates ala black body, continuous spectrum and all.
I couldn’t tell if your 2nd from last paragraph was referring to all atmosphere gases, or just CO2 and other GHGs per earlier in your post. If the latter I don’t think the absorption-re-emission chain transfers temperature, at least directly. If the former, that would imply atmospheric blackbody radiation as I contend. Though it’s going the wrong way to support the massive down welling; maybe the right way ala thermodynamics.
Alastair McDonald says
Re #192 where Greg Smith Says:
“Perhaps you or others in this forum could point me towards a site/papers that explain exactly and concisely what variables are in your “dynamic models” … ”
How about “Climate modelling uncertainty” from the BBC?
For more detail you could try this: “A Climate Modelling Primer” by McGuffie & Henderson-Sellers http://www.amazon.com/Climate-Modelling-Research-Developments-Climatology/dp/0471955582
Cheers, Alastair.
Rod B says
Martin (229): Thanks for the reference; it looks useful. I can’t figure out how Google knows to hide this stuff when I search ;-) . I need to pore over it, but at first glance and eyeballing the integration it looks like the received IR surface power falls significantly short of what the budget graphics call for – 324 watts. Any insight into this?
Alastair McDonald says
Re #230 where # Martin Vermeer Says:
8 May 2008 at 11:46 AM
Curiosity is a great thing… googling for “downwelling infrared spectrum” brought up http://lidar.ssec.wisc.edu/papers/dhd_thes/node3.htm with a nice picture.
I suggest you compare with Dave Archer’s simulator http://geosci.uchicago.edu/~archer/cgimodels/radiation.html
Enter: Sensor Altitude 0, Looking up, Midlatitude Summer for nearly the same spectrum :-)
It is a good match, but that is not surprising since both figures are calculated using similar computer codes: FASCODE and MODTRAN. :-)
Cheers, Alastair.
Chris N says
Moderator,
Please remove my prior post. It was done in haste with 3 small kids on my legs. Also, it was a little harsh towards the end. Please substitute it with the reply below.
Lawrence,
I’m afraid you are off the mark. The source of my data is the RSS satellite data, summarized below in C/period for 1979 till present:
NH minus Tropics (20 / 82.5): 0.0024 C/month, 0.288 C/decade
Tropics (-20 / 20): 0.0014 C/month, 0.168 C/decade
SH minus Tropics (-20 / -70): 0.0005 C/month, 0.060 C/decade
World (-70 / 82.5): 0.0015 C/month, 0.180 C/decade
Trends since Jan 1998 are shown below:
World (or global) trend is slightly negative at -0.036 C/decade
NH minus Tropics (20 / 82.5): 0.0003 C/month, 0.036 C/decade
Tropics (-20 / 20): -0.0004 C/month, -0.048 C/decade
SH minus Tropics (-20 / -70): -0.0008 C/month, -0.096 C/decade
Also, world (or global) trend between 1979 and Dec 1993 was 0.05 C/decade. So, what were the trends between Jan 1994 and Dec 1998 (inclusive of the 1998 peak for dramatic effect)?
World (-70 / 82.5): 0.008 C/month, 0.96 C/decade
NH minus Tropics (20 / 82.5): 0.0066 C/month, 0.792 C/decade
Tropics (-20 / 20): 0.0105 C/month, 1.260 C/decade
SH minus Tropics (-20 / -70): 0.0067 C/month, 0.804 C/decade
So basically ALL of the accrued warming since 1979 occurred in a 5 year span (1994 through 1998). That’s it! Let me point out that it was during this period that the divergence between NH and SH anomalies began, and, since 1998, has essentially held steady at approx. 0.4 delta C (or 0.5 C NH vs. 0.1 C SH).
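For reference, this is how such trends are normally computed from a monthly anomaly series: an ordinary least-squares slope, with the monthly slope multiplied by 120 to give a per-decade figure. A minimal sketch on synthetic data (not a reproduction of the RSS numbers):

```python
# Linear trend of a monthly anomaly series (synthetic data for illustration).
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(12 * 30)                                  # 30 years of months
anoms = 0.0015 * months + rng.normal(0, 0.15, months.size)   # trend + noise

slope = np.polyfit(months, anoms, 1)[0]                      # least-squares slope
print(f"trend: {slope:.4f} C/month = {slope * 120:.3f} C/decade")
```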
I recognize that climate modelers have done a lot of work, but the models are not satisfying to me until they can explain the above data. In fact, you have confirmed my observation that warming is occurring over land, not water!
I have said before that there could be many reasons for this besides CO2-driven climate change: land use changes, less aerosols, solar effects, etc. Please note that in my distillation example above, more heat and reflux added to the tower increases the bottom temperature and decreases the overhead temperature (as predicted by greenhouse gas theory, vis a vis surface and stratospheric temps). However, it all depends on starting conditions (or assumptions with regard to climate models). If the bottom purity of the tower is already 99%, then adding more heat to the reboiler to increase purity to 99.9% raises the bp temp only from 99 C to 100 C. The analogy for the climate is whether or not the climate is close to zero or negative climate sensitivity. So, I’m not denying the effects of CO2, but questioning the validity of the models, which in my opinion, haven’t convinced me at all that more CO2 equates to more than a small amount of temperature increase.
Chris N says
Just saw the thread regarding climate bets. Now, that’s interesting!
Mike says
CobblyWorlds #226
I read both the links you supplied. I confess to not having enough time to read them slowly and look for confirming info elsewhere.
However – correct me if I am wrong – my quick read of the papers seemed to say that the ECS was based on observed temperature changes vs observed CO2 concentrations, once other factors had been removed.
If I have understood this correctly, then it seems to be pretty bad. When you apply the ECS so reached, and add in the other known factors, then the result necessarily matches the observed temperature change, and it looks like you have an accurate working model.
But, if your initial assumptions are wrong – for example if some of the temperature change was caused by a natural factor that you haven’t taken into account – then your ECS factor is wrong.
Martin Vermeer says
Alastair #236: Oops, yes. Perhaps I should read the articles I link to…
Anyway, Figure 3 in http://lidar.ssec.wisc.edu/papers/dhd_thes/node5.htm seems to be a comparison with measured data :-)
Martin Vermeer says
Rod #233:
The radiative heat transport takes place only between GHG molecules, and almost exclusively (for Earth) within the absorption bands, not the continuum. However, due to LTE (local thermodynamic equilibrium) there is only one temperature at every level, in which all molecular species share.
(Note that convective heat transport is also important)
The net migration of heat is much smaller than the total amounts of heat sent up and down between levels. The “down” part of that, for those lower atmospheric layers that can radiate directly to the ground, represents the downwelling.
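A rough order-of-magnitude check on that downwelling number (illustrative values only, not the ones behind the budget diagram): air that is opaque in the GHG bands radiates downward at close to the temperature of the lowest layers, so an effective broadband emissivity below one times sigma*T^4 already lands near the ~324 W/m2 Rod is asking about:

```python
# Rough estimate of surface emission and back-radiation (assumed values).
sigma = 5.67e-8        # Stefan-Boltzmann constant, W m-2 K-4
T_surface = 288.0      # K, global-mean surface temperature
T_air = 284.0          # K, assumed effective emitting temperature of the low air
eps = 0.88             # assumed effective broadband emissivity of the air column

print(f"surface emission:         {sigma * T_surface**4:.0f} W/m2")   # ~390
print(f"estimated back-radiation: {eps * sigma * T_air**4:.0f} W/m2") # ~325
```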
Martin Vermeer says
Rod B #235
No… I haven’t studied this particular point at all. Don’t be tricked though by the units on the vertical axis, the graph has “…per steradian”, which seems to differ from Archer’s graphs.
CobblyWorlds says
#224 Ferdinand Engelbeen,
1)
You say:
“But there is one period where CO2 didn’t follow temperature, the end of the Eemian (113-105 kyr BP). That period points to a low influence of CO2 on temperature.”
As CO2 isn’t the only factor in global average temperature, it’s hardly surprising that you’ll get mismatches when focussing on the one variable on short timescales in a multi-variable situation. You could say the same about ice sheets/CO2/CH4 at ~10.4k yrs BP, where all three show little connection to deltaTs (corrected).
Because there are unknowns (known as well as unknown ones), Hansen’s approach as shown in CC & Trace Gasses (post 188 above) to model Temperature as a function of ice sheet albedo and CH4/CO2/N2O is much more likely to pay dividends and avoid incorrect conclusions. And it does so in the good general agreement between the simple model and observations as shown in figure 2c.
Given that the door of uncertainty swings both ways, clouds may be either a +ve or -ve feedback; neither of us knows for sure. But as I pointed out in post 226 above, a ballpark 3degC sensitivity looks reasonable, and that implicitly factors in cloud feedbacks.
2)
You also say:
“Methane did follow temperature more closely than CO2 for all periods, including the end of the Eemian.”
Which, as I previously stated, appears to tell us more about its sources and fate in the atmosphere than it does about methane’s impact per se.
3)
Finally you say:
“Nowadays we are at much higher levels of about 1800 ppbv, due to human influences, but these haven’t had much direct effect in the Arctic tundra or oceans. Only a future temperature effect would have an influence, comparable to what happened during the Eemian, which is minor compared to what humans now emit…”
a)
See my post #174 above, points 1 to 3 and the implicit 4th point presented as a question. Can you tell me where I’m wrong?
As I said above, we’re forcing things so fast that I really am not sure past analogues can reasonably be used.
b)
Are you sure we’re not seeing a direct effect on the Arctic?
e.g.1 Shindell 1999 “Northern Hemisphere winter climate response to greenhouse gas, ozone, solar and volcanic forcing” found GHG forcing caused a tendency towards +ve mode AO – Fram Strait outflushing.
e.g.2 Bitz & Roe 2004 “A Mechanism for the High Rate of Sea Ice Thinning in the Arctic Ocean.” Just one of mechanisms by which small forcings can be amplified by the Arctic ice cap.
http://www.atmos.washington.edu/~bitz/Bitz_and_Roe_2004.pdf
We see such amplification of an initial cause (Milankovitch cycles) in the ice ages. But the evidence clearly shows such cycles to be the most likely prime cause (e.g. Roe’s “In defence of Milankovitch.”). With only one instance of the current “experiment” (not several cycles), there remains more uncertainty as to the root driver of the Arctic ice cap’s ongoing demise than with regard to the ice ages. But it is not reasonable to claim human activity hasn’t “had much direct effect”, when there is evidence to show human activity may indeed be the prime mover of the changes we see in the Arctic.
That aside we have the coincidence of what is going on there now in light of the recent human caused global warming e.g:
“although the Arctic is still not as warm as it was during the Eemian interglacial 125,000 years ago [e.g., Andersen et al., 2004], the present rate of sea ice loss will likely push the system out of this natural envelope within a century…. There is no paleoclimatic evidence for a seasonally ice free Arctic during the last 800 millennia.”
Overpeck et al 2005, “Arctic System on Trajectory to New, Seasonally Ice-Free State” http://atoc.colorado.edu/~dcn/reprints/Overpeck_etal_EOS2005.pdf
It currently looks like “within a century” could be changed to “by 2020”. Especially when you consider that the seasonally ice free state is intrinsically linked to the fate of perennial ice, whose area is dropping off a cliff:
Nghiem et al 2007 “Rapid reduction of Arctic perennial sea ice.” figure 3. http://meteo.lcd.lu/globalwarming/Nghiem/rapid_reduction_of_Arctic_perennial_sea_ice.pdf
David donovan says
Re 236.
Alastair…you did not read carefully enough. Read the caption and the text at the bottom of the page again (Fig. 1 at lidar.ssec.wisc.edu.)
Figure 1: Illustration of atmospheric downwelling radiance relative to values derived from the Planck function for various temperatures. Also noted are the absorption regions for various atmospheric constituents. The spikes in the measured radiance, between 1400 and 1800 cm-1, are a result of water vapor absorption lines which become opaque within the instrument.
Note the reference to the “spikes in the measured radiance”….
Cheers,
Dave
Geoff Wexler says
Re #205
Rod B Says
“‘Matter can only radiate by jumping between quantum levels…’ Not true. All of your radio, TV, cellular, e.g. radiation does not happen that way”
I don’t completely agree with your correction and certainly not with your choice of example. This is off topic so I shall be brief. While it is possible to partially de-quantise the explanation of the special role of CO2 and H2O, it is not possible to do that with the difference between a copper aerial (analogous to CO2) and a failed one made from glass (analogous to molecular oxygen). No amount of simplifying can get away from the essential role of quantum mechanics in the latter. The reason is that there is no classical counterpart to the Pauli exclusion principle. Suppose you apply an alternating voltage to a piece of copper wire; it works by exciting the electrons into nearby vacant quantum levels, thus upsetting the previous balance between electrons moving parallel to the direction of the electric field and those moving the opposite way. When you apply the same voltage to an insulator, the electrons discover that all the nearby quantum levels are full, so nothing much happens unless the electric field is large enough to cause a breakdown involving a big quantum jump to some more empty quantum levels.
Alastair McDonald says
Re My #236.
Please ignore that post. I was confused because the graph was in the theory section of the paper, whereas I would have thought it fitted more appropriately in the results.
Rod,
How did you calculate the shortfall of 324 W?
Cheers, Alastair.
Geoff Wexler says
Re: #231
Back to Earth. I want to make sure that I have this summary right. In those regions which matter to the greenhouse effect (even high up), is it always the case that LTE is a good approximation and that the temperatures of CO2, H2O, and air (O2 and N2) at a particular point are equal?
Thanks for responses to my previous question and to the link in #217 which I shall look out for later.
Martin Vermeer says
Geoff Wexler #248:
At some point high up it will break down… I read that in the mesosphere even laser action may occur. But it applies at least in the troposphere and lower stratosphere, where most of the greenhouse effect lives.
Found this: http://rabett.blogspot.com/2007/03/what-is-local-thermodynamic-equilibrium.html
Ray Ladbury says
Geoff, I would say that LTE is a pretty good approximation, and yes, this would imply approximately equal temperatures for all constituents of air. I also don’t know if you saw my link to this story:
http://www.aip.org/pnu/2008/split/862-1.html
Also seems to be germane to some of your questions.