A few weeks ago I was at a meeting in Cambridge that discussed how (or whether) paleo-climate information can reduce the known uncertainties in future climate simulations.
The uncertainties in the impacts of rising greenhouse gases on multiple systems are significant: the potential impact on ENSO or the overturning circulation in the North Atlantic, probable feedbacks on atmospheric composition (CO2, CH4, N2O, aerosols), the predictability of decadal climate change, global climate sensitivity itself, and perhaps most importantly, what will happen to ice sheets and regional rainfall in a warming climate.
The reason why paleo-climate information may be key in these cases is because all of these climate components have changed in the past. If we can understand why and how those changes occurred then, that might inform our projections of changes in the future. Unfortunately, the simplest use of the record – just going back to a point that had similar conditions to what we expect for the future – doesn’t work very well because there are no good analogs for the perturbations we are making. The world has never before seen such a rapid rise in greenhouse gases with the present-day configuration of the continents and with large amounts of polar ice. So more sophisticated approaches must be developed and this meeting was devoted to examining them.
The first point that can be made is a simple one. If something happened in the past, that means it’s possible! Thus evidence for past climate changes in ENSO, ice sheets and the carbon cycle (for instance) demonstrate quite clearly that these systems are indeed sensitive to external changes. Therefore, assuming that they can’t change in the future would be foolish. This is basic, but not really useful in a practical sense.
All future projections rely on models of some sort. Dominant in the climate issue are the large scale ocean-atmosphere GCMs that were discussed extensively in the latest IPCC report, but other kinds of simpler or more specialised or more conceptual models can also be used. The reason those other models are still useful is that the GCMs are not complete. That is, they do not contain all the possible interactions that we know from the paleo record and modern observations can occur. This is a second point – interactions seen in the record, say between carbon dioxide levels or dust amounts and Milankovitch forcing imply that there are mechanisms that connect them. Those mechanisms may be only imperfectly known, but the paleo-record does highlight the need to quantify these mechanisms for models to be more complete.
The third point, and possibly the most important, is that the paleo-record is useful for model evaluation. All episodes in climate history (in principle) should allow us to quantify how good the models are and how appropriate our hypotheses for climate change in the past are. It’s vital to note the connection though – models embody much data and assumptions about how climate works, but for their climate to change you need a hypothesis – like a change in the Earth’s orbit, or volcanic activity, or solar changes etc. Comparing model simulations to observational data is then a test of the two factors together. Even if the hypothesis is that a change is due to intrinsic variability, a simulation of a model to look for the magnitude of intrinsic changes (possibly due to multiple steady states or similar) is still a test both of the model and the hypothesis. If the test fails, it shows that one or other element (or both) must be lacking, or that the data may be incomplete or mis-interpreted. If it passes, then we have a self-consistent explanation of the observed change that may, however, not be unique (but it’s a good start!).
But what is the relevance of these tests? What can a successful model of the impacts of a change in the North Atlantic overturning circulation or a shift in the Earth’s orbit really do for future projections? This is where most of the attention is being directed. The key unknown is whether the skill of a model on a paleo-climate question is correlated to the magnitude of change in a scenario. If there is no correlation – i.e. the projections of the models that do well on the paleo-climate test span the same range as the models that did badly, then nothing much has been gained. If however, one could show that the models that did best, for instance at mid-Holocene rainfall changes, systematically gave a different projection, for instance, of greater changes in the Indian Monsoon under increasing GHGs, then we would have reason to weight the different model projections to come up with a revised assessment. Similarly, if an ice sheet model can’t match the rapid melt seen during the deglaciation, then its credibility in projecting future melt rates would/should be lessened.
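The weighting idea can be sketched numerically. The following is a toy example only – the paleo misfits, projections, and the Gaussian weighting function are all made up for illustration, not taken from any real model archive:

```python
import math

# Hypothetical paleo-test misfits (model minus proxy, arbitrary units) and
# future projections (e.g. % change in monsoon rainfall) for five imaginary models.
paleo_errors = [0.2, 1.5, 0.4, 2.0, 0.8]
projections = [12.0, 5.0, 11.0, 4.0, 9.0]

# One arbitrary choice of skill weight: Gaussian in the paleo misfit.
sigma = 1.0
weights = [math.exp(-(e / sigma) ** 2) for e in paleo_errors]
total = sum(weights)
weights = [w / total for w in weights]

unweighted_mean = sum(projections) / len(projections)
weighted_mean = sum(w * p for w, p in zip(weights, projections))
# In this contrived example the paleo-skilful models project larger changes,
# so the weighted estimate shifts upward relative to the plain ensemble mean.
```

The point of the exercise is only that a weighted assessment differs from the raw ensemble mean precisely when paleo skill correlates with the projection – which is the key unknown discussed above.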
Unfortunately apart from a few coordinated experiments for the last glacial period and the mid-Holocene (i.e. PMIP) with models that don’t necessarily overlap with those in the AR4 archive, this database of model results and tests just doesn’t exist. Of course, individual models have looked at various paleo-climate events ranging from the Little Ice Age to the Cretaceous, but this serves mainly as an advance scouting party to determine the lay of the land rather than a full road map. Thus we are faced with two problems – we do not yet know which paleo-climate events are likely to be most useful (though everyone has their ideas), and we do not have the databases that allow you to match the paleo simulations with the future projections.
In looking at the paleo record for useful model tests, there are two classes of problems: what happened at a specific time, or what the response is to a specific forcing or event. The first requires a full description of the different forcings at one time, the second a collection of data over many time periods associated with one forcing. An example of the first approach would be the last glacial maximum where the changes in orbit, greenhouse gases, dust, ice sheets and vegetation (at least) all need to be included. The second class is typified by looking for the response to volcanoes by lumping together all the years after big eruptions. Similar approaches could be developed in the first class for the mid-Pliocene, the 8.2 kyr event, the Eemian (last inter-glacial), early Holocene, the deglaciation, the early Eocene, the PETM, the Little Ice Age etc. and for the second class, orbital forcing, solar forcing, Dansgaard-Oeschger events, Heinrich events etc.
But there is still one element lacking. For most of these cases, our knowledge of changes at these times is fragmentary, spread over dozens to hundreds of papers and subject to multiple interpretations. In short, it’s a mess. The missing element is the work required to pull all of that together and produce a synthesis that can be easily compared to the models. That this synthesis is only rarely done underlines the difficulties involved. To be sure there are good examples – CLIMAP (and its recent update, MARGO) for the LGM ocean temperatures, the vegetation and precipitation databases for the mid-Holocene at PMIP, the spatially resolved temperature patterns over the last few hundred years from multiple proxies, etc. Each of these has been used very successfully in model-data comparisons and has been hugely influential inside and outside the paleo-community.
It may seem odd that this kind of study is not undertaken more often, but there are reasons. Most fundamentally it is because the tools and techniques required for doing good synthesis work are not the same as those for making measurements or for developing models. It could in fact be described as a new kind of science (though in essence it is not new at all) requiring, perhaps, a new kind of scientist. One who is at ease in dealing with the disparate sources of paleo-data and aware of the problems, and yet conscious of what is needed (and why) by modellers. Or additionally modellers who understand what the proxy data depends on and who can build that into the models themselves making for more direct model-data comparisons.
Should the paleo-community therefore increase the emphasis on synthesis and allocate more funds and positions accordingly? This is often a contentious issue since whenever people discuss the need for work to be done to integrate existing information, some will question whether the primacy of new data gathering is being threatened. This meeting was no exception. However, I am convinced that this debate isn’t the zero sum game implied by the argument. On the contrary, synthesising the information from a highly technical field and making it useful for others outside is a fundamental part of increasing respect for the field as a whole and actually increases the size of the pot available in the long term. Yet the lack of appropriately skilled people who can gain the respect of the data gatherers and deliver the ‘value added’ products to the modellers remains a serious obstacle.
Despite the problems and the undoubted challenges in bringing paleo-data/model comparisons up to a new level, it was heartening to see these issues tackled head on. The desire to turn throwaway lines in grant applications into real science was actually quite inspiring – so much so that I should probably stop writing blog posts and get on with it.
The above condensed version of the meeting is heavily influenced by conversations and talks there, particularly with Peter Huybers, Paul Valdes, Eric Wolff and Sandy Harrison among others.
Hank Roberts says
Mike, it’s the “so” part of your statement following ‘logarithmic’ that ain’t so. Look this up, here and elsewhere. You wrote:
> so it is approaching the absurd to argue that
> AGGs will now overpower the natural forces.
Where do you get that conclusion? Why are you relying on whoever told you that? You’re being fooled. It’s a bogus claim, it’s an old, familiar, bogus claim.
Try putting the term in the search box, along with climate.
One response you’ll find here, I’ll quote in full as an example of how you can check your beliefs before you post them:
https://www.realclimate.org/index.php/archives/2006/10/attribution-of-20th-century-climate-change-to-cosub2sub/
“[Response:If the temperature-CO2 relation were as simple as Lubos suggests, all would indeed be simple. But it isn’t, as he knows full well. That the T-CO2 relation is approximately logarithmic is no surprise, its why future T increases tend to be approximately linear when CO2 increases exponentially – see for example http://www.grida.no/climate/ipcc_tar/wg1/fig9-5.htm – William]”
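The point in that response is easy to check numerically. A minimal sketch, using the standard simplified forcing fit F = 5.35 ln(C/C0) W/m² and an illustrative (not definitive) sensitivity parameter:

```python
import math

# Standard simplified forcing fit: F = 5.35 * ln(C/C0) W/m^2.
# lam is an illustrative sensitivity parameter (K per W/m^2), not a definitive value.
F0 = 5.35
lam = 0.8
C0 = 280.0  # pre-industrial CO2, ppm


def delta_T(C):
    """Equilibrium warming for CO2 concentration C (ppm)."""
    return lam * F0 * math.log(C / C0)


# Exponential CO2 growth (1% per year) yields a *linear* temperature trend:
temps = [delta_T(C0 * 1.01 ** t) for t in (0, 70, 140)]
```

With these numbers the warming per 70-year interval is identical in each interval – the logarithm of an exponential is linear in time, which is exactly why a log T–CO2 relation does not imply future forcing is negligible.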
Hank Roberts says
Rod B, I tried, but I assure you it can’t be done as you describe.
I created an empty universe. Okay, no mass, no velocity.
I created a single particle in that universe. Hmmm, how can we measure its mass or velocity in relation to ….. um …. nothing.
Try it yourself (grin).
Barton Paul Levenson says
Mike posts:
How much would be enough? How much allowance are they actually making? Can you put numbers to those two questions? If not, your question may be incoherent.
Ray Ladbury says
Rod B., OK, let’s think about equipartition. For a single molecule, we have 3 types of energy–potential energy (coulomb, vibrational, etc.), kinetic energy about the center of mass and kinetic energy of the center of mass. Kinetic energy about the center of mass and potential energy associated with a particular degree of freedom freely interchange. However, you can’t turn motion ABOUT the center of mass into motion OF the center of mass. Statistical mechanics and therefore thermodynamics really only work when you have a large assembly of molecules. How large? Well, it depends on how well you want thermo/stat. mech. to work–i.e. what sort of fluctuations you can tolerate. When you look at nonequilibrium stat mech, the fluctuations ARE the physics, but that is a much tougher nut to crack.
Ray Ladbury says
Mike, you are still handwaving. Take a look at the magnitudes of the effects you are talking about. Take a look at the time dependence of each. There’s a reason why Raypierre said, (to paraphrase)–solar effects go up and down… Temperature trends go up. And yes, the trend is still up.
Alastair McDonald says
Re #404 A single molecule has more than 3 types of energy.
“Roughly speaking, a molecular energy state, i.e. an eigenstate of the molecular Hamiltonian, is the sum of an electronic, vibrational, rotational, nuclear and translational component, such that:
E = E{electronic}+ E{vibrational} + E{rotational} + E{nuclear} + E{translational},
where E(electronic) is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule.”
See: http://en.wikipedia.org/wiki/Energy_level#Molecules
The temperature (measured with a thermometer) of a gas is directly related to the average of the E(translational) energy of the molecules. If there is only one molecule, then the average is simply that molecule’s value :-)
Moreover, the other energies can also be represented by temperatures, which in thermal equilibrium are all equal to the E(translational) temperature due to the Principle of Equipartition of Energy.
In that case, it does not make sense to talk of thermodynamic equilibrium of only a single molecule, since the equipartition of energy is achieved by collisions and one molecule cannot collide with itself. In fact even local thermodynamic equilibrium breaks down at high altitudes, where the gas is so rare that collisions no longer drive the other energy levels.
HTH,
Cheers, Alastair.
Alastair McDonald says
Re #380 where Barton Paul Levenson Says:
“Not really. Water vapor rains out quickly. The average molecule of water vapor stays in the atmosphere nine days.”
It is not the length of time that a molecule spends in the atmosphere which determines the greenhouse effect. It is the number of molecules in the atmosphere. The problem with the long atmospheric life time of CO2 molecules is not that they will cause more heating, but that even after we stop emitting them the heating will continue.
It is the short life time of water vapour which creates the danger of a runaway. Using your figures, the water in the atmosphere is replaced every nine days. That means that any alteration to the water cycle could be complete within nine days. Melting of the permafrost and the release of CO2 and CH4 takes decades at least, but a change in global humidity patterns with alterations in the greenhouse effect of water vapour and clouds could happen with a timescale of weeks rather than decades.
Rod B says
Alastair (369). To test your math (though mine is probably not that rigorous) I’ll start with about 30 w/m^2 reflecting off the surface. I would also use more like 0.75 (vs. 0.9) for Arctic albedo. Fresh snow is at best 0.8-0.85+; old snow is less; oddly ice is not that high, ~0.35. I would also use more like 0.15 (vs. 0.1) for the rest of the surface; there is a pile of surface 0.1-0.4; the ocean is probably less than 0.1, though there is vagueness and variability there that might play, particularly at high off the normal angle of incidence.
Taking your 14M Arctic area + 496M all else = 510M tells me that “all else” accounts for ~0.875×30= 26+ watts, the Arctic ~4 watts. This would say losing all of Arctic ice and snow would create at best a 4 watt, not 8 (though still nothing to sneeze at, I guess) equivalent forcing. Even this doesn’t account for the much higher albedo from the now uncovered Arctic waters, at least in the winter with its very high angle, nor the ocean’s albedo vagueness due to wave motion.
I don’t know if this is a quibble or not. And even if correct I don’t know if it changes Gavin’s “very important” assessment. But I’d be interested in your or Gavin’s thoughts.
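Rod’s area-weighted split can be checked directly. A back-of-envelope script using the same assumed numbers as above (albedos 0.75 and 0.15, areas 14 and 496 million km², ~30 W/m² of surface-reflected sunlight), ignoring insolation differences between the Arctic and elsewhere just as the original envelope calculation does:

```python
# Rod B's assumed inputs (not authoritative values):
arctic_area, rest_area = 14.0, 496.0    # 10^6 km^2
arctic_albedo, rest_albedo = 0.75, 0.15
total_reflected = 30.0                  # W/m^2, global mean surface-reflected flux

# Split the reflected flux in proportion to area * albedo:
arctic_weight = arctic_area * arctic_albedo
rest_weight = rest_area * rest_albedo
arctic_watts = total_reflected * arctic_weight / (arctic_weight + rest_weight)
rest_watts = total_reflected - arctic_watts
# arctic_watts comes out a little under 4 W/m^2, matching the comment's estimate.
```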
Rod B says
Hank, I’ve admitted before its velocity (or temperature) cannot be measured with anything much better than 100% error. But that doesn’t mean it doesn’t exist!
Ray Ladbury says
Alastair, I was trying to be quite general–electronic, nuclear and vibrational would all be associated with potential energy. My main point was that you can’t turn motion about the CM into motion of the CM without an external action.
Rod B says
Ray and Alastair: one point you gentlemen seem to agree on is that in normal circumstances a single molecule will not shift its energy around its own stores (DoFs) to accomplish equipartition; the only way it can get to the even-ness it desires is through collision with another molecule. Correct? Is there any remote goofy quantum mechanical thing where it might… once in a great while? (not that this matters, though…)
A related question: does the energy of the photon (hf) match one of the vibration energy levels of, say, CO2? I did a back of the envelope calc that told me the photon energy was much greater than vibration energy levels, which I assumed strongly incented the molecule to spread the energy around. (Ray, if not temperature, does a molecule have a soul? ;-) ) Though I couldn’t understand how/why it gets absorbed. Maybe my math was just off….
Mike says
Hank Roberts #401. The RealClimate link you posted is all about attribution within models and how complex it is.
My starting point is that there is “increasingly reliable evidence” for a substantial effect from solar variance [IPCC 1.4.3], and that this has been left out of the models. The evidence is or appears to be empirical evidence, so the issue of model complexity does not arise. I believe that I have provided enough links and information that the part of my assertion concerning the omission of the full effect of solar variance from the models is beyond dispute. ie, I believe that you could validly try to argue that it is too small to be concerned about (evidence would be needed), but not that it was not omitted.
My quotes from the Judith Lean (Woods and Lean) paper in #398 demonstrate that my view is very clearly supported. I don’t know about Woods, but Lean was a lead author for chapter 2 of the latest IPCC report. The paper is not an old paper, it appears to have been published in NASA’s ‘The Earth Observer’ Jul/Aug 2007 http://eospso.gsfc.nasa.gov/eos_observ/pdf/Jul_Aug07.pdf.
(The link I gave originally appeared not to contain a date)
What no-one can do at this moment is to quantify the missing mechanism. The empirical data does suggest that solar variation is several times more powerful than allowed for in the models. This was confirmed by BPL #279.
I argue that the models lack credibility if significant empirical evidence has been ignored purely on the grounds that it was not possible to model it. That is quite simply not a valid excuse – if this was traditional science, not a modelling exercise, can you imagine a scientist ever getting away with the statement “we didn’t understand that part, so we ignored it”?
[Response: Which models ignore this? (try reading Shindell et al, 2006 for instance). The implication that there ‘must’ be a missing mechanism is not clearly supported at all. Within the uncertainties in the solar forcing, climate sensitivity and reported impacts the model results fit. It would take substantial improvements in the accuracy of all three components to determine if there really is a mismatch. Finally, no physical model can include effects whose physical justification is unknown. How can they? If proponents of ‘missing mechanisms’ ever get round to quantitatively working out the implications of their mechanisms, they’ll be included and their implications assessed. Until then, they are just hand waving. – gavin]
Alastair McDonald says
Re #408 where Rod B wrote “I don’t know if this is a quibble or not. And even if correct I don’t know if it changes Gavin’s “very important” assessment. But I’d be interested in your or Gavin’s thoughts.”
[edit]
Suggesting that my figures are out by 100% is just a quibble! That is because of something that I have not yet told you. Only about 10% of the radiation from the surface escapes directly to space, so when more incoming solar radiation is absorbed, the surface temperature has to increase considerably in order to bring the incoming and outgoing radiation back into balance. This problem is made worse because the greenhouse effect increases with temperature, applying a positive feedback to the surface temperature.
It is this positive feedback which determines the new surface temperature, not the amount of the change in albedo.
Cheers, Alastair.
Alastair McDonald says
Re #410 & 411
I do not agree that a collision is required for a molecule to change state. All high-energy states have half-lives, and eventually an excited molecule will return to its rest state even without any external activation. In the Old Quantum Theory, emissions could be either stimulated (as the result of a collision with a photon) or spontaneous due to the half-life being exceeded. Since then it has been discovered that radiationless transitions can occur, so a vibrationally excited molecule will first relax into a rotationally excited molecule, and then its energy will collapse into translational energy.
Cheers, Alastair.
Philip Machanick says
#407 Alastair: why do you think the amount of water vapour would suddenly increase and keep on increasing? As soon as there’s a temperature dip, the holding capacity of the atmosphere drops. This happens overnight for example. If something else causes warming the average amount of water vapour would be expected to go up, amplifying the warming effect. If there is no external impulse, what would cause a runaway water-driven warming cycle as you are suggesting?
Alastair McDonald says
Re #411
Only the photons with the energy of the vibration will be absorbed, so the easiest way to calculate the energy of the vibration is to find the energy of the photon that caused it.
Otherwise you need to know the mass of the atoms and the strength of the atomic bonds. Where did you find those?
Cheers, Alastair.
Rod B says
Alastair (413), yes, I understand that, but it isn’t my line of inquiry, which is: Does the loss of Arctic ice and snow cover really decrease the albedo of the solar rays striking the area and presumably then increase the solar absorption to the large level that has been assumed and stated? My question (not really a contention – I’m just asking and wondering) stems from the maybe inappropriateness of 1) using averages (1/4 of incoming to spread it evenly around the globe, or evenly spread over an intercepting disk when it’s really stronger at the peak of the intercepted hemisphere and weaker at the edges – where the Arctic is), and 2) the persistence of some albedo when all the ice/snow is gone, by the reflection off the ocean water due to the angle of incidence or wave motion.
Alastair McDonald says
Re #415
Philip,
The temperature does dip after 12 hours in the tropics, but that is only after the concentration of water vapour has become so great that there is a daily tropical storm.
In the Arctic the day can last up to half a year. During the middle two months (seven nine-day periods) of summer the solar flux at the pole is equal to that at the winter tropic. What that means is that during the summer the Arctic will become subtropical. This is not as outlandish as it seems, because there is geological evidence that those conditions did exist there in the past. Fossils of crocodiles have been found on Ellesmere Island, and there are coal measures in Alaska which were formed when it was as close to the North Pole as it is now.
Although that may allow the Inuit to become vegans, it bodes ill for the Pacific Islanders, Bangladeshi, Floridians and New Yorkers, since the Greenland ice sheet is unlikely to last long in those warmer conditions.
Cheers, Alastair.
Rod B says
Alastair (416), Actually I don’t recall where I got that – turns out I didn’t. What I did was compare the energy of a 15 micrometer photon with the expected translational energy a la 1/2mv^2 at some rule-of-thumb temperature of the molecule (Ray, relax; just talking here) of around 300K. I’ve lost my envelope, but I recall the photon energy being much greater than the translational energy, which struck me as odd, though I have no science to question it myself. The photon matching a vibration level makes total sense.
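The envelope calculation is easy to redo with standard physical constants. Putting numbers on the comparison:

```python
# Energy of a 15 micron photon vs mean translational energy (3/2)kT at 300 K.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
k = 1.381e-23    # Boltzmann constant, J/K

E_photon = h * c / 15e-6      # ~1.3e-20 J
E_trans = 1.5 * k * 300.0     # ~6.2e-21 J
ratio = E_photon / E_trans    # ~2: the photon carries about twice the
                              # mean translational energy at 300 K
```

So the 15 µm quantum is larger than the mean translational energy, but only by about a factor of two, not by orders of magnitude.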
Ray Ladbury says
Rod, your math was off; the energy of the absorbed photon has to be very close to the energy difference of the excited and ground states or there is no absorption cross-section. Also, Alastair’s allusion to a nonradiative transition is not particularly germane here, as such transitions are rare – rotational states rarely have the same energy as a vibrational state. What he says about half-life is spot on, but the radiative half-life of CO2’s excited vibrational state is sufficiently long that collisional relaxation is more likely than radiative relaxation.
Alastair McDonald says
Re #417
In my post I asked (perhaps rather clumsily) what you meant by
but it was edited out. I am still not sure to which of Gavin’s very many important assessments you were referring. Rather than wait for your reply I will assume it was this one:
I still can’t work out what Gavin means. But if he means that the change in albedo will only be significant locally, then he has a point, but he is ignoring the tele-connections which drive the climate.
For instance over the last few days I have pointed out that:
1) A warmer Arctic will lead to a melting of the Greenland ice and a global rise in sea levels.
2) The warmer ocean will take longer to freeze in winter leading to less seasonal ice and an earlier summer melt.
3) The loss of the polar vortex will lead to less wind shear so there will be more hurricanes and less tornadoes.
4) Without the cold polar air flowing south and lifting the moist subtropical air there will be no mid-latitude clouds. Thus the subtropical deserts will move northwards and the tropical jungles will expand, provided they are not destroyed by man for agricultural use.
This disruption to the northern hemisphere is bound to alter the southern hemisphere climate, if only by changing the path of the ITCZ.
The atmosphere is like a water bed. If you push down on one spot it will pop up somewhere else. If you stop pushing down at the north pole with the polar vortex and have rising air there powered by natural convection, then that air has to descend somewhere. And those changes could all happen in a period of nine days!
Cheers, Alastair.
Phil. Felton says
Re #419
The energy of the photons absorbed by CO2 correspond to ~2kcal/mole so if that were converted to translational energy of those same molecules they would become rather hot! Since in our atmosphere they’re surrounded by a ‘bath’ of N2 & O2 molecules they share that energy with the whole atmosphere.
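The ~2 kcal/mol figure is straightforward to verify: it is just the 15 µm photon energy scaled up by Avogadro’s number.

```python
# One mole of 15 micron photons, in kJ and kcal.
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23   # standard constants

E_per_mole_J = h * c / 15e-6 * N_A         # ~8.0e3 J/mol
E_per_mole_kcal = E_per_mole_J / 4184.0    # ~1.9 kcal/mol
```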
Alastair McDonald says
Re #419, 420 & 422
Rod, your calculations were correct. In the troposphere the air is too cold to fully excite the 15um band. The technical term is to say that the band is “frozen out.”
Since the effective temperature of the Earth is 253 K, it seems strange that the IPCC mechanism – in which the outgoing longwave radiation originates in the upper troposphere, where that temperature occurs – can be correct!
Cheers, Alastair.
Ray Ladbury says
Alastair, even at 253 K, roughly 2% of molecules will have an energy equal to or greater than the energy of the vibrational state of CO2 corresponding to the 15 micron line. Even at 200 K, it’s ~0.8% of molecules having that energy. I don’t call that frozen out. Rather it’s what gives rise to the blackbody (or in this case greybody) spectrum.
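Those percentages follow from the Boltzmann factor exp(−E/kT) for the 15 µm (667 cm⁻¹) bending quantum of CO2, using standard constants:

```python
import math

# Boltzmann factor exp(-E/kT) for the 15 micron bending quantum of CO2.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # standard constants

E = h * c / 15e-6        # vibrational quantum, ~1.3e-20 J
theta = E / k            # characteristic temperature, ~960 K

frac_253 = math.exp(-theta / 253.0)  # a bit over 2%
frac_200 = math.exp(-theta / 200.0)  # ~0.8%
```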
Alastair McDonald says
Re #424
Ray,
Yes but …. If only 2% of the molecules are excited then the radiation will only have an intensity of 2% of B(T). The models seem to use 100% of B(T).
Can you explain how you calculate the 2%, please. It is higher than I expected, but lower than the evidence from the heat capacity of CO2. One answer to that might be that it is 2% of the 1000 collisions during the relaxation time, and so 20% of the molecules that are excited.
Cheers, Alastair.
Mike says
Gavin’s response to my #412 “no physical model can include effects whose physical justification is unknown. How can they?”.
Q.E.D. – Not modelled.
Rod B says
Ray (420), et al, thanks. Does this also say that a molecule cannot transfer rotational or vibrational energy into its own translational energy, which is not constrained by discrete, widely spaced energy bands/lines? I think this is what was stated earlier but I wished to verify. To re-summarize, this says a molecule with an absorbed photon (in say vibration) can relax in two and only two ways: 1) re-emit an equivalent energy photon, or 2) collide with another molecule, with the latter being far more likely in the troposphere. True?
Rod B says
Ray and Alastair: I’m confused. 2% may not be frozen out but it sure sounds like considerably less than open-armed welcome! Does this say that at 253K (about mid Troposphere) the quantum probabilities say that 2% of CO2 molecules will absorb a photon and 98% will not?
Rod B says
Alastair (421), Basically I was inquiring about the statement (paraphrasing) that if the Arctic ice (and snow) goes albedo goes to pot and all of that energy now gets absorbed. I basically asked Gavin if it really was that significant; he in essence said yes. I’m still trying to hone in on the degree with the question stemming from 1) the insolation hitting the Arctic surface (getting reflected or absorbed) is less than what one gets by using the model averages, and 2) the “lost” albedo will in fact not all be lost but some will remain by virtue of the water, angle of incidence, waves. (Though this gets a little messy in the summer with the reduced actual insolation going on all day or for very long days. On the other hand, in the winter there is zero reflection or absorption above the Circle whether there is ice and snow there or not.) I suspect this will still boil down to a level of degree.
BTW, I am not (yet?) questioning all of the other (non-albedo stuff) results that you summarize.
Alastair McDonald says
Re #428
Rod,
No, it says that only 2% of the molecules are excited and able to emit (Forget Kirchhoff’s Law e = a for the moment. It only applies if radiation is the sole flow of energy.) 100% of the CO2 molecules will still be able to absorb a photon.
But the constant volume specific heat of CO2 at 250K is 26.5 J/mol K. This implies that there are 3 translational, 2 rotational, and 0.9 vibrational degrees of freedom operating. In other words only 10% of the vibrational DoF is frozen out.
So how do we get from 2% to 80%?
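One way to reconcile the quoted heat capacity with partial excitation is the Einstein-oscillator model. A sketch, assuming a bending-mode characteristic temperature of about 960 K (the 667 cm⁻¹ fundamental) with twofold degeneracy, and neglecting the stretching modes as essentially unexcited at this temperature:

```python
import math

# Einstein-oscillator contribution to Cv per vibrational mode:
#   c(T) = R * x^2 * e^x / (e^x - 1)^2,  where x = theta / T.
R = 8.314       # gas constant, J/(mol K)
theta = 960.0   # K, bending mode (667 cm^-1); stretches (~1900-3400 K) neglected
T = 250.0       # K

x = theta / T
c_mode = R * x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2   # ~2.8 J/(mol K)

# Translation (3/2 R) + rotation (R, linear molecule) + two degenerate bending modes:
Cv = 2.5 * R + 2 * c_mode
```

This reproduces the quoted 26.5 J/mol K to within about 1%, with each bending mode carrying roughly a third of its classical contribution at 250 K.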
Ray Ladbury says
Rod and Alastair, the 2% comes from the Maxwell distribution–2% of which is above the vibrational energy. Not sure about the specific heat, but I’m not sure it is relevant.
Martin Vermeer says
#425, #414 Alastair:
No… it should be 100% if the gas is optically thick and absorbing. Which it very much is due to the resonance in the band.
The Planck curve B(T) and the Maxwell distribution are directly related.
Not without the help of another molecule, because of conservation of momentum (look at it in the rest frame of the molecule… where you gonna get the momentum to change the molecule’s translational motion?)
Martin Vermeer says
Alastair #430:
Actually, cf. http://www.wag.caltech.edu/home/jang/genchem/infrared.htm , the vibrational (or bending) mode at 15µm is doubly degenerate: in the molecule O=C=O the C can move both up-down and out-of-plane forward-backward. So, not 10% but 55% of the vibrational DoF (for this mode) is frozen out.
Alastair McDonald says
Rod,
If you are determined that global warming is not a threat, it is not possible for me to make a cast-iron case proving I am correct. This is because earth science is fractal and for every rule there is an exception. For instance, the rule that there is no albedo effect in the polar regions in winter, because night there lasts six months, can be contradicted by pointing out that there is weak sunshine near the Arctic Circle. So when I say there is an ice albedo effect which causes a positive feedback, that is a rule with the minor exceptions that you are pointing out.
Here are a couple of press releases which seem relevant to the questions you are asking.
Models Underestimate Loss of Arctic Sea Ice
http://nsidc.org/news/press/20070430_StroeveGRL.html
Arctic ice more vulnerable to sunny weather
http://www.agu.org/sci_soc/prrl/2008-13.html
http://www.ucar.edu/news/releases/2008/arcticice.jsp
HTH,
Cheers, Alastair.
Alastair McDonald says
Re #432
Thanks for the point about conservation of momentum. I had forgotten that.
However, although the atmosphere will behave as a black body in the CO2 band, while the optical depth of absorption will depend on the concentration of CO2, the optical depth of emission will depend on the percentage of molecules that are excited. In other words, Kirchhoff’s Law of e = a will not hold close to the Earth’s surface, as well as not holding at the top of the atmosphere, where LTE breaks down.
Cheers, Alastair.
Martin Vermeer says
#435 Alastair McDonald:
I beg to differ :-) . The absorptivity a(f) of a layer of air will depend on frequency f, the absolute concentration of CO2, and the thickness of the layer. The total emission coming from this same layer will be equal to a(f)*B(f), as Kirchhoff requires.
A thin layer will also be optically thin (a(f) << 1.0) and will emit less than Planck states. An optically thick layer (which in the core of the CO2 band need not be geometrically very thick!) will emit 100% of the Planck value. Proximity to the Earth’s surface has nothing to do with it. The percentage of excited molecules indeed controls emission, but it is itself tightly regulated through collisional transfer in accordance with LTE, leading to this result.
You are correct that when LTE breaks down (as it does in the thermosphere), all bets are off.
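The relation Martin describes, emission = a(f)·B(f,T) with emission approaching the full Planck value only as the layer becomes optically thick, can be sketched numerically (my own illustration; the optical depths are arbitrary):

```python
import math

H = 6.626e-34   # J s, Planck constant
K = 1.381e-23   # J/K, Boltzmann constant
C = 2.998e8     # m/s, speed of light

def planck(f, T):
    """Planck spectral radiance B(f, T), W m^-2 Hz^-1 sr^-1."""
    return (2 * H * f**3 / C**2) / (math.exp(H * f / (K * T)) - 1.0)

T = 250.0
f = C / 15e-6          # ~2e13 Hz, centre of the 15 um CO2 band
B = planck(f, T)

for tau in (0.01, 0.1, 1.0, 10.0):
    a = 1.0 - math.exp(-tau)       # absorptivity = emissivity (Kirchhoff)
    print(f"tau={tau:5.2f}  emission/B = {a:.3f}")
# emission/B climbs from ~tau (optically thin) toward 1.0 (optically thick)
```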
Alastair McDonald says
Martin,
You wrote:
I beg to agree :-) and differ :-(
The absorptivity a(f) of the lowest layer of air will depend on frequency f, the absolute concentration of CO2, and the thickness of the layer. The absorbed radiation will equal a(f)*B(f,T), where T is the surface temperature.
The total emissions will come from those molecules which are excited. Unexcited molecules cannot emit. Since much of the molecular excitation is frozen out, the emission from the layer will be less than B(f,T) where T is the air temperature. (For illustrative purposes we can assume that the surface and the lowest layer air temperatures are the same.)
This implies that the layer is receiving more radiation than it emits, and so warms. That is why you have to open the ventilation in a greenhouse on a sunny day. The air heats up.
Cheers, Alastair.
Ray Ladbury says
Martin, I think that what Alastair is saying is that at the boundaries you will have a net flux of energy in the form of IR photons, so right there you will not have LTE. Near the surface you will have more energy in the IR photon flux than would be expected at LTE (and for a blackbody), so the atmosphere must heat up in response. (Actually, this applies at all levels of an adiabatically cooling atmosphere, right?)
At the edge of the inky darkness of space, you have a cooling gradient, so there is a paucity of IR photons. Correct me if I’m wrong.
Martin Vermeer says
Re #437 Alastair:
Emission will be a(f)*B(f,T), as Kirchhoff says, where a(f) is the absorptivity of the layer. If you choose the layer thin enough, a(f) < 1.0. Bringing up the “freezing out” of excited states is particularly unfruitful. It is not the reason emission is less than B(f,T); on the contrary, the freezing-out effect is implicit in B(f,T), namely in its high-frequency turn-down.
This is a totally unrelated thing. That’s visible light, totally different f, and it gets absorbed by the ground and transferred to the air.
Yes, the layer also absorbs IR coming from above — and from the ground. But Kirchhoff stands. I still don’t get why you want to disagree with me :-)
Ray #438: Yes I disagree with you too :-) . Not with the net flux thing… sure there will be flux imbalances all the time; but, you know, it’s hard to create non-LTE conditions. It takes a well-equipped lab. It happens up at 100 km altitude, where the air is so thin that molecules can absorb photons and hang around in the excited state without meeting any other molecules for a non-trivial amount of time. Then you get population inversions and potentially even lasing action.
Within 99% of the atmosphere, the mean free path is so short that even in small air parcels, all energy states will reach equilibrium populations on a time scale short wrt their natural decay times. That means LTE. And that means Kirchhoff.
For clarity, be aware that LTE is defined only considering massive particles, not photons; a common misconception:
http://en.wikipedia.org/wiki/Thermodynamic_equilibrium#Local_thermodynamic_equilibrium
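Martin's timescale argument can be put in numbers (an order-of-magnitude sketch of mine; the ~1 s spontaneous-emission lifetime for the ν2 band is a typical textbook value, the rest is standard kinetic theory):

```python
import math

K = 1.381e-23    # J/K, Boltzmann constant
T = 250.0        # K
p = 1.0e5        # Pa, near-surface pressure
d = 3.7e-10      # m, effective collision diameter of air molecules (assumed)
m = 4.8e-26      # kg, mean mass of an air molecule (~29 u)

n = p / (K * T)                                  # number density
mfp = 1.0 / (math.sqrt(2) * math.pi * d**2 * n)  # mean free path
vbar = math.sqrt(8 * K * T / (math.pi * m))      # mean thermal speed
t_coll = mfp / vbar                              # time between collisions

t_rad = 1.0      # s, rough radiative lifetime of the CO2 nu2 excited state
print(f"collision time ~ {t_coll:.1e} s")
print(f"collisions per radiative decay ~ {t_rad / t_coll:.0e}")
```

An excited molecule near the surface suffers billions of collisions before it would radiate spontaneously, which is why the level populations stay pinned to the local temperature, i.e. LTE.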
Alastair McDonald says
Martin,
You write “I still don’t get why you want to disagree with me :-) ”
The greenhouse effect operates by the carbon dioxide and water vapour absorbing the thermal radiation from the Earth’s surface and so heating the air. This was first shown by Horace de Saussure, before the discovery of infrared radiation, using what is now called a hot box. That is what Fourier refers to in his pioneering paper, translated here by Raypierre. If the air in the atmosphere behaved as you described, that effect would not be possible.
So I am afraid that we must agree to differ :-(
Cheers, Alastair.
Ray Ladbury says
Martin, my language may have been imprecise–I’m considering equilibrium with the photon gas as well as the greenhouse.
Alastair–the radiation is not just from the surface, but also from the rest of the atmosphere.
Rod B says
my brain is starting to hurt… but I’ll catch up; this is great stuff!
Martin Vermeer says
#441 Ray: but by that concept the whole atmosphere is in disequilibrium: sunlight at 6000K passing through it everywhere!
#440 Alastair:
No, that’s a misconception… seems I’ve run out of easy ways to explain it… anybody else? These things are really well understood, but I don’t want to appeal to authority here.
#442: Rod: never stop learning :-)
Abbe Mac says
Re #441,
Ray, I agree that the thermal (infrared) radiation is both from the surface and from the atmosphere. What I am saying is that the intensity from the surface is greater than that from the atmosphere on a line-by-line basis. It is the IPCC who are ignoring the radiation from the surface, because they assume that the surface and the surface-air temperatures are the same, and that both the surface and the air emit as blackbody radiators.
But obviously I have a little more research to do before I can write my paper :-)
Cheers, Alastair
Rod B says
Alastair (434), I’m simply trying to get an accurate view of the increased solar absorption caused by the loss of Arctic ice and snow. It still might be significant, as Gavin stated, but I’m just trying to find out what it really is. The loosey-goosey ballpark gut estimates presented as gospel are a bit disconcerting, as is the little bit of glibness (and the contradictions!) even in the professional studies you referenced. My poor choice of the term “model” was badly misleading and didn’t help my question. I was not referring to climate models, just simple mathematical algorithms. Simply put, the standard process of taking the incoming insolation and dividing by 4 to spread it over the full surface of the globe, e.g., is not accurate when assessing absorption/reflection over a small region.
It’s generally stated like (paraphrasing) “Arctic ice reflects 90% of solar, and when the ice is gone the water absorbs 95–100% of the solar.” For starters, the angle off the normal at, say, 70 degrees N latitude (roughly the normal extent of Arctic ice/snow) varies from about 50 degrees to 90 degrees depending on the season. This says the regional insolation in watts/meter^2 has to be lowered by the cosine of that angle, or by a factor of from 1.0 to ~0.63. (Though the correct base has to be determined, which might mitigate this adjustment.) Then you have to use an accurate reflectivity for ice/snow albedo, which is likely more like 0.6+ average, not 0.9, and can be as low as 0.35 (and as high as 0.85+). Next, a more accurate reflectivity for water, which is close to zero for a high sun but in fact reaches near 0.4 at a solar elevation of 10 degrees (halfway down the ice extent in spring and fall) and ~0.15 at 20 degrees, covering all of the normal ice extent.
I don’t know if any of this is going to much change the assertions. My hunch is probably not — much. But I’m bothered with the significant lack of rigor that goes into these assertions. Makes me wonder: were they (the assertions) scientifically and mathematically assessed? Or did they just sound pretty good and satisfy gut feel (not to mention the party line)?
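Rod's water-reflectivity figures can be checked against the Fresnel equations for a flat water surface (my sketch; n ≈ 1.33 and unpolarized light assumed, waves and foam ignored):

```python
import math

N = 1.33   # refractive index of water (assumed)

def fresnel_unpolarized(elev_deg):
    """Reflectance of flat water for unpolarized light at a given solar elevation."""
    ti = math.radians(90.0 - elev_deg)          # incidence angle from the normal
    tt = math.asin(math.sin(ti) / N)            # Snell's law: refraction angle
    rs = ((math.cos(ti) - N * math.cos(tt)) / (math.cos(ti) + N * math.cos(tt))) ** 2
    rp = ((N * math.cos(ti) - math.cos(tt)) / (N * math.cos(ti) + math.cos(tt))) ** 2
    return 0.5 * (rs + rp)                      # average of the two polarizations

for elev in (5, 10, 20, 40, 90):
    print(f"sun {elev:2d} deg up: water reflectance = {fresnel_unpolarized(elev):.2f}")
# reflectance is ~0.02 for a high sun but rises steeply at grazing angles
```

This gives roughly 0.35 at 10 degrees elevation and 0.13 at 20 degrees, in the same ballpark as the figures quoted above.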
Rod B says
Taking just one bite at a time: I hadn’t thought of the momentum aspect either; interesting. This says (to repeat an earlier question) that within a single molecule there can never be an energy exchange between translation and vibration or rotation, in either direction. True? Problem: I could swear I have seen (but not read) studies that assessed just such transitions. Could you have these transitions and maintain momentum within a system of (many) molecules? Or is this idea just goofy? (Ray probably likes it :-) .)
Ray Ladbury says
Rod B., It all depends on the interactions, but basically you need to conserve: 1)Energy (including mass, of course), 2)Momentum, 3)Angular momentum, 4)Charge, 5)Other conserved quantum numbers (e.g. spin, electron number, baryon number…).
If you have a collection of many, many molecules, you can get some collective effects, but all these things are still conserved. If you bring in the weak nuclear force (which unifies with electromagnetism at some mass scale), you can violate conservation of electron number (e.g. electron goes to muon, etc.), but you always conserve energy, momentum, angular momentum (including spin) and charge. Rod, there are whole textbooks dedicated to what kinds of interactions are possible under what conditions and how often, but unless you’re a laser jock, it’s a handful of rules that govern transitions.
CobblyWorlds says
#445 Rod,
Like you I’ve been following the physics with interest (like you my brain hurts). But if I can interject on the ice issue…
Why do you seek such exactitude?
There are always going to be uncertainties. The ballpark figures I personally use for albedo feedback (80/20) still give a +60% gain in absorbed insolation; whether it’s 60% (80/20) or 80% (90/10), the message is the same: a substantial gain. I will have sourced that 80/20 from somewhere, and although I can’t remember where, I do recall choosing those figures precisely because the source was trying to make a conservative estimate. Despite my using 80/20, I wouldn’t waste time arguing with someone for using 90/10.
If we’re talking precision in this respect you’d have to be specific in both time and space (Lat/Long). For example that would allow even the factoring in of details like the angle between the Sun and the average direction of waves, not to mention weather (sea surface roughness).
Furthermore there are plenty of factors in what’s going on in the Arctic, from Arctic Oscillation related ice export through the Fram Strait (Zhang/Wallace) to the propensity of thicker ice to lose mass more rapidly than thin (Bitz/Roe). But if you want to see the impact of ice albedo feedback, check out the summer trend in extent as opposed to the other seasons: http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/seasonal.extent.1900-2007.jpg
I’m watching the coast off the Canadian Archipelago like a hawk right now, because I suspect a prominent role for mechanical failure in the final stages of the perennial ice. See, e.g., today’s QuikSCAT: http://manati.orbit.nesdis.noaa.gov/ice_image21/D08142.NHEIMSK.GIF
Check out this animated gif: http://ice-glaces.ec.gc.ca/content_contenu/SIE/Beaufort/ANIM-BE2007.gif
Pay particular attention to just before 9 January 2008. Ice is not strong under tension, so with the right ocean/wind forcing it can go fast when it has an edge to open sea. Now imagine an increase of just 50% in total available insolation in the summer in each of those cracks. Whatever the melt rate was there, it just jumped.
Yet, all that said, I’m not even confident enough to bet against William Connelly that this year’s minimum will beat last year’s. Year-to-year variation is the ice’s equivalent of “weather” in terms of the minima. But in the longer run, I’m pretty well convinced that the ice cap will move rapidly into a seasonally ice-free state within 10 years.
If you’re interested feel free to ask for the bracketed references. Otherwise I’m back to reading, and lurking.
Cobbly.
Barton Paul Levenson says
Rod, here’s a bit of computer code to simulate the ice-albedo feedback. The language is Just Basic, but it should be easy to translate to C or Fortran or whatever you prefer to use:
toRads = 3.14159265358979 / 180
topLat = 90
bottomLat = 60
top = topLat * toRads
bottom = bottomLat * toRads
icePart = sin(top) - sin(bottom)
groundPart = 1 – icePart
Aice = 0.6
Aground = 0.26
iceContrib = icePart * Aice
groundContrib = groundPart * Aground
A = iceContrib + groundContrib
F = (1366.1 / 4) * (1 – A)
Te = (F / 5.6704e-08) ^ 0.25
print
print topLat, using(“#.###”, icePart), Aice, using(“#.####”, iceContrib)
print bottomLat, using(“#.###”, groundPart), Aground, using(“#.####”, groundContrib)
print “Earth albedo:”, , using(“#.####”, A)
print “Te: “; using(“###.#”, Te)
end
If your ice and snow extend from 60 degrees North (or South) to 90 degrees N (S), and assuming albedo is 0.6 for polar caps but 0.26 everywhere else (I’ve conflated cloud albedo with ground albedo, of course), you get:
90 0.134 0.6 0.0804
60 0.866 0.26 0.2252
Earth albedo: 0.3056
Te: 254.3
If you then make your lower polar boundary 61 degrees, due to melting at the edges due to global warming, you then get:
90 0.125 0.6 0.0752
61 0.875 0.26 0.2274
Earth albedo: 0.3026
Te: 254.6
I.e., 1.1% of Earth’s surface has gotten darker, and the effective temperature of the Earth has gone up by 0.3 K.
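For readers who would rather run this in Python, here is a direct translation of Barton's program (same constants and logic; the function name and parameterization are mine):

```python
import math

def effective_temp(bottom_lat, a_ice=0.6, a_ground=0.26, top_lat=90.0, S=1366.1):
    """Earth's effective temperature with a polar cap from bottom_lat to top_lat."""
    # Fraction of a hemisphere's area poleward of bottom_lat
    ice_part = math.sin(math.radians(top_lat)) - math.sin(math.radians(bottom_lat))
    ground_part = 1.0 - ice_part
    A = ice_part * a_ice + ground_part * a_ground          # area-weighted albedo
    F = (S / 4.0) * (1.0 - A)                              # absorbed flux, W/m^2
    return (F / 5.6704e-08) ** 0.25                        # Stefan-Boltzmann law

print(f"cap to 60N: Te = {effective_temp(60.0):.1f} K")    # 254.3 K, as above
print(f"cap to 61N: Te = {effective_temp(61.0):.1f} K")    # 254.6 K
```

This reproduces the two cases printed above (254.3 K and 254.6 K), i.e. a ~0.3 K change from retreating the cap edge by one degree of latitude.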
Phil. Felton says
Re #448
Looking at today’s Quikscat it looks like there’s a large lead opened up along the shore up towards Ellesmere Island.