A few weeks ago I was at a meeting in Cambridge that discussed how (or whether) paleo-climate information can reduce the known uncertainties in future climate simulations.
The uncertainties in the impacts of rising greenhouse gases on multiple systems are significant: the potential impact on ENSO or the overturning circulation in the North Atlantic, probable feedbacks on atmospheric composition (CO2, CH4, N2O, aerosols), the predictability of decadal climate change, global climate sensitivity itself, and perhaps most importantly, what will happen to ice sheets and regional rainfall in a warming climate.
The reason why paleo-climate information may be key in these cases is because all of these climate components have changed in the past. If we can understand why and how those changes occurred then, that might inform our projections of changes in the future. Unfortunately, the simplest use of the record – just going back to a point that had similar conditions to what we expect for the future – doesn’t work very well because there are no good analogs for the perturbations we are making. The world has never before seen such a rapid rise in greenhouse gases with the present-day configuration of the continents and with large amounts of polar ice. So more sophisticated approaches must be developed and this meeting was devoted to examining them.
The first point that can be made is a simple one. If something happened in the past, that means it’s possible! Thus evidence for past climate changes in ENSO, ice sheets and the carbon cycle (for instance) demonstrates quite clearly that these systems are indeed sensitive to external changes. Therefore, assuming that they can’t change in the future would be foolish. This is basic, but not really useful in a practical sense.
All future projections rely on models of some sort. Dominant in the climate issue are the large scale ocean-atmosphere GCMs that were discussed extensively in the latest IPCC report, but other kinds of simpler or more specialised or more conceptual models can also be used. The reason those other models are still useful is that the GCMs are not complete. That is, they do not contain all the possible interactions that we know from the paleo record and modern observations can occur. This is a second point – interactions seen in the record, say between carbon dioxide levels or dust amounts and Milankovitch forcing, imply that there are mechanisms that connect them. Those mechanisms may be only imperfectly known, but the paleo-record does highlight the need to quantify these mechanisms for models to be more complete.
The third point, and possibly the most important, is that the paleo-record is useful for model evaluation. All episodes in climate history (in principle) should allow us to quantify how good the models are and how appropriate are our hypotheses for climate change in the past. It’s vital to note the connection though – models embody much data and assumptions about how climate works, but for their climate to change you need a hypothesis – like a change in the Earth’s orbit, or volcanic activity, or solar changes etc. Comparing model simulations to observational data is then a test of the two factors together. Even if the hypothesis is that a change is due to intrinsic variability, a simulation of a model to look for the magnitude of intrinsic changes (possibly due to multiple steady states or similar) is still a test both of the model and the hypothesis. If the test fails, it shows that one or other elements (or both) must be lacking or that the data may be incomplete or mis-interpreted. If it passes, then we have a self-consistent explanation of the observed change that may, however, not be unique (but it’s a good start!).
But what is the relevance of these tests? What can a successful model of the impacts of a change in the North Atlantic overturning circulation or a shift in the Earth’s orbit really do for future projections? This is where most of the attention is being directed. The key unknown is whether the skill of a model on a paleo-climate question is correlated to the magnitude of change in a scenario. If there is no correlation – i.e. the projections of the models that do well on the paleo-climate test span the same range as the models that did badly, then nothing much has been gained. If however, one could show that the models that did best, for instance at mid-Holocene rainfall changes, systematically gave a different projection, for instance, of greater changes in the Indian Monsoon under increasing GHGs, then we would have reason to weight the different model projections to come up with a revised assessment. Similarly, if an ice sheet model can’t match the rapid melt seen during the deglaciation, then its credibility in projecting future melt rates would/should be lessened.
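To make the weighting idea concrete, here is a minimal sketch (in Python, with entirely made-up numbers) of one possible scheme for weighting projections by a paleo skill score; it is an illustration of the logic above, not a method used by any modelling group.

```python
import numpy as np

# Hypothetical ensemble: each model's RMSE against a mid-Holocene rainfall
# reconstruction (lower = better) and its projected monsoon change (mm/day).
paleo_rmse = np.array([0.4, 0.9, 0.6, 1.5, 0.5])
projection = np.array([0.8, 0.3, 0.6, 0.2, 0.7])

# One of many possible weighting choices: weights decay exponentially with
# paleo error; the 0.5 scale factor is arbitrary.
weights = np.exp(-paleo_rmse / 0.5)
weights /= weights.sum()

print(f"Unweighted mean projection: {projection.mean():.2f} mm/day")
print(f"Paleo-skill-weighted mean:  {np.sum(weights * projection):.2f} mm/day")
# If the two differ noticeably, the paleo test carries information about the
# projection; if they do not, nothing much has been gained.
```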
Unfortunately apart from a few coordinated experiments for the last glacial period and the mid-Holocene (i.e. PMIP) with models that don’t necessarily overlap with those in the AR4 archive, this database of model results and tests just doesn’t exist. Of course, individual models have looked at many various paleo-climate events ranging from the Little Ice Age to the Cretaceous, but this serves mainly as an advance scouting party to determine the lay of the land rather than a full road map. Thus we are faced with two problems – we do not yet know which paleo-climate events are likely to be most useful (though everyone has their ideas), and we do not have the databases that allow you to match the paleo simulations with the future projections.
In looking at the paleo record for useful model tests, there are two classes of problems: what happened at a specific time, or what the response is to a specific forcing or event. The first requires a full description of the different forcings at one time, the second a collection of data over many time periods associated with one forcing. An example of the first approach would be the last glacial maximum where the changes in orbit, greenhouse gases, dust, ice sheets and vegetation (at least) all need to be included. The second class is typified by looking for the response to volcanoes by lumping together all the years after big eruptions. Similar approaches could be developed in the first class for the mid-Pliocene, the 8.2 kyr event, the Eemian (last inter-glacial), early Holocene, the deglaciation, the early Eocene, the PETM, the Little Ice Age etc. and for the second class, orbital forcing, solar forcing, Dansgaard-Oeschger events, Heinrich events etc.
But there is still one element lacking. For most of these cases, our knowledge of changes at these times is fragmentary, spread over dozens to hundreds of papers and subject to multiple interpretations. In short, it’s a mess. The missing element is the work required to pull all of that together and produce a synthesis that can be easily compared to the models. That this synthesis is only rarely done underlines the difficulties involved. To be sure, there are good examples – CLIMAP (and its recent update, MARGO) for the LGM ocean temperatures, the vegetation and precipitation databases for the mid-Holocene at PMIP, the spatially resolved temperature patterns over the last few hundred years from multiple proxies, etc. Each of these has been used very successfully in model-data comparisons and has been hugely influential inside and outside the paleo-community.
It may seem odd that this kind of study is not undertaken more often, but there are reasons. Most fundamentally it is because the tools and techniques required for doing good synthesis work are not the same as those for making measurements or for developing models. It could in fact be described as a new kind of science (though in essence it is not new at all) requiring, perhaps, a new kind of scientist: one who is at ease in dealing with the disparate sources of paleo-data and aware of the problems, and yet conscious of what is needed (and why) by modellers. Or, additionally, modellers who understand what the proxy data depend on and who can build that into the models themselves, making for more direct model-data comparisons.
Should the paleo-community therefore increase the emphasis on synthesis and allocate more funds and positions accordingly? This is often a contentious issue since whenever people discuss the need for work to be done to integrate existing information, some will question whether the primacy of new data gathering is being threatened. This meeting was no exception. However, I am convinced that this debate isn’t the zero sum game implied by the argument. On the contrary, synthesising the information from a highly technical field and making it useful for others outside is a fundamental part of increasing respect for the field as a whole and actually increases the size of the pot available in the long term. Yet the lack of appropriately skilled people who can gain the respect of the data gatherers and deliver the ‘value added’ products to the modellers remains a serious obstacle.
Despite the problems and the undoubted challenges in bringing paleo-data/model comparisons up to a new level, it was heartening to see these issues tackled head on. The desire to turn throwaway lines in grant applications into real science was actually quite inspiring – so much so that I should probably stop writing blog posts and get on with it.
The above condensed version of the meeting is heavily influenced by conversations and talks there, particularly with Peter Huybers, Paul Valdes, Eric Wolff and Sandy Harrison among others.
Rod B says
Alastair, the 324 watts of “re-emitted” LW radiation absorbed by the surface is taken right off the ubiquitous “tree and branch” graph/image of the Earth’s energy balance used, for example, by the IPCC.
Barton Paul Levenson says
Re 221 and
Well, I believe Hart (1978), following Elsasser (1960), related optical thickness to the square root of pressure, if that helps. (And the inverse fourth power of temperature.)
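Written out as a scaling relation, Barton’s recollection would read roughly as below; this is only a restatement of his hedged description, not a formula checked against Hart (1978) or Elsasser (1960):

$$\tau \;\propto\; \frac{p^{1/2}}{T^{4}}$$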
Alastair McDonald says
Re #251
Rob,
I guess that you mean the Back Radiation shown in the IPCC AR4 FAQ 1.1 Figure 1.
If you run David Archer’s model and change the following settings:
Ground T. Offset C to 4.8
Locality to Midlatitude Winter
Sensor Altitude to 0 and Looking up
they should give you conditions similar to those in the thesis Fig. 1. (Setting the Ground T. Offset to 4.8 makes the surface temperature 277 K, the same as for the thesis.)
Iout will be 218.8 W/m2, a bit short of 324 W/m2, but then you would not expect midlatitude winter to give the global average value. 277 K is about 4 C, or 39 F, so it was not a typical day!
HTH,
Cheers, Alastair.
Rod B says
Martin (241), This is a quickie but salient response while I digest the rest and other responses.
You said, “…The radiative heat transport takes place only between GHG molecules, and almost exclusively (for Earth) within the absorbtion bands, not the continuum. …”
I do not think this is true. Simply put, it has been stated here a number of times that after a CO2 (say) molecule absorbs a photon, usually into a quantum vibration state (which must be the start of the chain), the most likely next step, by far, is a collision with another molecule (by the numbers, most likely N2 or O2) that transfers the energy from the CO2 vibration to N2 (say) translation, possibly with a couple of uninteresting intermediate steps. It has been said that at low altitudes a collision is 1000+ times more likely to happen before the CO2 regurgitates its vibration bond energy as another radiated photon. This is the basic way the surface radiated energy gets transferred to atmospheric heat – in the common temperature meaning of the term.
Secondly, while we call the energy added to vibrational bonds “heat” and even refer to its “characteristic” temperature, it does not increase the [common] temperature of CO2, which is evidenced only through translation energy. Similarly, if it re-emits the “heat” (energy, really) as a photon, it does not cool. So the absorb-emit-absorb-emit chain through CO2 only will not add temperature to the atmosphere.
This has been the general (but not broad) consensus. If disagreers want to refute (again), I would welcome it with relish – I’m still trying to conclusively learn this stuff.
Mike says
CobblyWorlds #226
You say “Using multiple observationally-based constraints to estimate climate sensitivity.” and “That study combines estimates constrained with observations.”
I said in #239 this was pretty bad.
On reflection, it is worse than bad, it potentially destroys the IPCC case.
Let me explain.
If you theorise that CO2 warms the planet, and then in your model for CO2’s warming effect you constrain the model so that it matches actual temperature changes, then you cannot use that model (to the extent that it is so constrained) to demonstrate how much warming the CO2 gives. The reason is that you have used the required outcome as a basic input to the model.
In the case of clouds and ECS, from what you are saying, the modellers have done exactly this, and from the figures I have seen, the impact is large, not minor.
So, to the extent that they have used “constraint by observation”, the IPCC are NOT entitled to claim that the models show that the 20th century warming was caused by CO2.
They are only entitled to claim that IF the 20th century warming was caused by CO2 THEN more warming can be expected in future if CO2 levels continue to increase.
I have had a quick scan of the IPCC report, and there are about a dozen references to “constrained by observation” which look like they could be making the same error. I haven’t had time to go through them carefully.
PS. How do you get an indented quote into a post?
[Response: Use <blockquote> </blockquote> . However you would only have a point if the data to which the model was tuned was also the same as the data against which it was tested. It isn’t. Models are predominantly trained against climatology (i.e. the statistics of ‘modern’ climate – means, variability etc.) and not the forced changes (such as the 20th C trend or the response to Pinatubo or the LGM). Thus all of those tests are properly ‘out of sample’ and are valid for demonstrating utility. – gavin]
Ray Ladbury says
Rod, Note that Martin said “radiative heat transport”. Other molecules for the most part do not radiate. Second, you are sort of mixing the concepts of temperature and energy. Things are a bit complicated. Consider a mole of CO2 near absolute zero. We have 3 degrees of freedom, so E~1.5kT. Now heat things up near 0 degrees C. Most of the molecules still have 3 degrees of freedom, but a few now are sufficiently energetic that they can excite vibrational modes collisionally, so the proportionality between E and kT will now be just a wee bit more than 1.5, and it will increase until almost all of the molecules can be excited, at which point the proportionality will be 2.5, and so on. Now, it is true that exciting a vibrational mode of a molecule will not by itself increase temperature, but neither will accelerating a single molecule. Temperature is an intensive property.
CobblyWorlds says
#239 Mike,
You state:
“But, if your initial assumptions are wrong – for example if some of the temperature change was caused by a natural factor that you haven’t taken into account – then your ECS factor is wrong.”
Firstly, that’s one of the reasons why there’ll be a probability distribution function as seen in figure 1 of Annan/Hargreaves, as opposed to a precise figure. Some factors, such as aerosols, generally cause the uncertainty to tend to the higher side, as aerosols block incoming sunlight and cause relative “cooling” at the ground. Others like cloud response will cause a +/- uncertainty. Clouds could warm, as well as cool, although I understand a slight warming effect is expected based on current research (which I don’t follow in detail).
However as the Earth is suspended in a (near) vacuum, it cannot lose the bulk of its ‘heat’ by conduction or convection; it can only lose heat by radiation. So the bulk of balancing against incoming solar radiation has to happen due to albedo (how much sunlight is reflected back into space) and infra-red emission. To see this it’s best to start with a basic climatology course. For example see: www-paoc.mit.edu/labweb/notes/chap2.pdf There are 12 course modules in pdfs, whose addresses are of the form “www-paoc.mit.edu/labweb/notes/chapX.pdf” where X = 1 to 12.
I can also strongly recommend the resources under the “Start Here” and “Index” button at the top of each RealClimate page.
#255 Gavin, thanks for that reply.
Julian Flood says
Re 171 quote Some interesting point is that the d13C content of methane is slightly increasing. This may point to a change in source (either more human induced – like from rice paddies) or more fossil methane, but I have no figures seen (yet) to know the d13C content of different methane sources. Unfortunately, d13C measurements in methane only started at the moment that methane levels stabilised. unquote
I have seen (a construction which hides the fact that I have forgotten where) a prediction that soil methane emissions were suppressed by acid rain and that pollution controls would lead to an increase: it might be worth looking at the correctness of that prediction.
JF
Rod B says
Ray (250), looks like a fascinating link. Thanks.
Martin Vermeer says
Rod B #254: Ray L has already given a short answer, but let me try to be more concrete in addressing your misconceptions (which are getting fewer, but tougher and tougher to address).
This is true. Here we need a lecture on local thermodynamic equilibrium and degrees of freedom :-) Again, as I said earlier, it is not very fruitful to look at what’s happening to individual molecules and photons; the statistics are much simpler than these individual narratives, precisely because of LTE.
Every molecule can move in a number of different ways. These are called degrees of freedom. For a single atom (like argon), there are three such degrees of freedom, linear movement in the x,y and z directions. Now, as molecules collide, energy will shift back and forth between these three directions of motion. Write the total kinetic energy as
(1/2) m v² = (1/2) m (vx² + vy² + vz²).
You see the separate contributions of each velocity component vx, vy and vz.
Now, the total amount of kinetic energy in the x direction, on average per molecule under LTE, will be
(1/2) m ave(vx²) = (1/2) kT,
where T is the temperature. The same holds in the y and z directions, actually 1/2 kT is the energy per molecule per degree of freedom.
When you look at a whole mole instead of a single molecule, this makes macroscopic sense, but you have to multiply by Avogadro’s number and write R, the gas constant, instead of k, Boltzmann’s constant.
Now we have three co-ordinates x, y and z, i.e., three DoF. This means for a one-atomic gas that the heat content per molecule will be 3/2 kT, again for temperature T.
This is pretty much the classical definition of temperature: the amount of Joules that have been squeezed into every DoF available for the molecules to undergo random motions in. It is then easy to show theoretically that two bodies in contact at the same temperature will not undergo any net exchange of heat. (Those two “bodies” could be, e.g., the nitrogen and the CO2 in a given parcel of air.)
Now if you look at two- or three-atomic molecules, there are more possibilities for them to move randomly: they may rotate, or vibrate, or bonds in the molecule may bend. This adds DoFs. Each of them can again contain an amount 1/2 kT of energy, on average per molecule. (This relates directly to the specific heat of a gas, the way the no. of DoFs has been classically measured — there is a long and fascinating story there in the lead-up to quantum theory.)
The important thing to note here is now, that also these DoFs interchange energy all the time with the linear-motion ones and with each other. And the energy they contain is temperature too, in the classical sense. The equipartition of energy over the different DoF is achieved on a very short timescale.
(Imagine a two-atomic molecule being hit off-center. This would add — or remove — rotational energy to/from the molecule, the balance being the change in linear kinetic energy of the molecules. Same with vibrations. It’s really just like in classical mechanics. Determining the interaction cross-sections though requires quantum theory — irrelevant here. The beauty of LTE is that you don’t have to know such gory details)
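To put rough numbers on the equipartition picture Martin describes, here is a small illustrative Python sketch (not part of his comment) computing the average thermal energy per molecule and the implied molar heat capacity for an assumed number of active degrees of freedom:

```python
# Equipartition: E = (f/2) k T per molecule, Cv = (f/2) R per mole.
k = 1.380649e-23   # Boltzmann's constant, J/K
R = 8.314462618    # gas constant, J/(mol K)
T = 273.15         # temperature, K

cases = [
    ("monatomic gas (3 translational DoF)", 3),
    ("linear molecule with rotation (3 + 2 DoF)", 5),
    ("plus one fully active vibration (adds 2 DoF)", 7),
]
for name, f in cases:
    e_molecule = 0.5 * f * k * T   # average energy per molecule, J
    cv_molar = 0.5 * f * R         # constant-volume heat capacity, J/(mol K)
    print(f"{name}: {e_molecule:.2e} J/molecule, Cv = {cv_molar:.1f} J/(mol K)")
```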
A core misconception. The energy added to CO2 vibration becomes part of the total heat content of the parcel of air considered, as it will be immediately equipartitioned with the other molecules and their various DoF in the parcel. By definition then, it will drive up the parcel’s temperature T. Your distinction between “characteristic” and “common” temperature does not exist for LTE.
Same misconception in reverse.
It will do so if the amount emitted by an air parcel is less than that absorbed by the same parcel. Only in a stationary state that difference would be zero overall.
Honestly hope this helps.
Mike says
CobblyWorlds and Gavin: It makes no difference what modelling techniques the IPCC uses, and it doesn’t matter how much it thinks that it has used different data for tuning and testing. GIGO still applies, but can be indirect and therefore difficult to detect.
The basic problem is that there is always the underlying assumption that climate is driven by CO2 – not totally, but to the point where anything not attributable to something else will get attributed, directly or indirectly, to CO2 (man-made GGs).
This can obviously happen directly, but let me give you just two examples of where it has, or might have, come in indirectly.
1. The full effect of solar variance is acknowledged by the IPCC to be greater than would be expected by measuring the change in direct radiation. But they openly acknowledge that they don’t understand the mechanism so they can’t model it, and haven’t modelled it. Consequently, every instance of “constrained by observation” has the capacity to wrongly ascribe climate changes to GGs when they are actually caused by solar variation.
2. TS.6.4.2 in the IPCC report says : “Large uncertainties remain about how clouds might respond to global climate change”. This shows that the IPCC believes that cloud behaviour is driven by climate. Consequently, every instance of “constrained by observation” has the capacity to wrongly ascribe changes in clouds to some component of climate, and never realise that the driving was actually the other way round – that the component of climate in fact reacted to clouds.
I have only given two examples, but there could well be more. Every one of the references to “constrained by observation” would have to be checked out for possible invalid assumptions like these.
The end result is a set of models, all of which give a good match to recent climate measurements at the time of modelling (because the model has been made to fit observations), but give a poor match to climate from the date of modelling onwards. And this is what is happening now.
[Response: Just because you say so, doesn’t make it true. The response of the models to CO2 is a function of basic radiation physics combined with numerous feedbacks – it doesn’t change as a function of what you think happened to solar or aerosols or mysterious substance X. – gavin]
Hank Roberts says
Mike, where are you getting what you’re writing? What are you basing it on? I’ve been trying searches using phrases, and the only pages I come up with consistently are from scienceandpublicpolicy. You may be drawing from some other source, but would you care to tell us what you’re reading?
Geoff Wexler says
#237 Chris N Says
I think your table of trends might be confusing to some readers. Trends have been discussed in a lead article on RealClimate, briefly by Tamino in a comment on RealClimate and again on his own web site, and by Stoat. So wouldn’t it be better if your version was made to be consistent with theirs?
You wrote that ” trends since Jan 1998 are shown below:
World (or global) trend is slightly negative at -0.036 C/decade (RSS satellite)
Is this from a least squares fit? If so, what was the error estimate using simple stats?
Tamino’s comment in Realclimate came with +/- error estimates; thus
0.21 +/- 0.19 C/decade (from GISS GLB_TSST)
or 0.07 +/- 0.19 C/decade (from HADCRUT3)
for the same decade. The second was obviously useless because of the large error estimate. But Tamino (on his web site) discusses how nearly all estimates based on one decade are useless. He explains how interannual correlation increases the errors. That is hard to reconcile with your table of trends.
You then go on to discuss “the trends between Jan 1994 and Dec 1998” to show a “dramatic effect”, i.e. 0.792 C/decade.
It would have been clearer if you had not referred to these short term fluctuations as dramatic trends.
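As a rough illustration of the point that autocorrelation inflates the error bars on short-term trends, here is a hedged Python sketch using synthetic AR(1) noise (not the RSS or HadCRUT data) to compare a naive least-squares uncertainty with one adjusted via an effective sample size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "decade" of monthly anomalies: zero true trend, autocorrelated noise
n, phi = 120, 0.7
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = phi * noise[t - 1] + rng.normal(scale=0.1)
time = np.arange(n) / 120.0                    # time in decades

# Ordinary least-squares trend and its naive standard error
slope, intercept = np.polyfit(time, noise, 1)
resid = noise - (slope * time + intercept)
sxx = np.sum((time - time.mean()) ** 2)
se_naive = np.sqrt(resid.var(ddof=2) / sxx)

# Crude correction: effective sample size for lag-1 autocorrelation r1
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se_naive * np.sqrt(n / n_eff)

print(f"trend = {slope:+.3f} C/decade, naive 2-sigma = {2 * se_naive:.3f}, "
      f"autocorrelation-adjusted 2-sigma = {2 * se_adj:.3f}")
```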
Ray Ladbury says
Mike, You need to understand the difference between statistical modeling (e.g. fitting parametric models to data) and dynamical modeling. In the former, you are looking to estimate parametric values from some goodness of fit criterion–in effect saying how well can this model work under the best of circumstances. (Note: You can also ask how far off you can be for some confidence level.) The test of the model is the goodness of fit criterion, and you will often see several models compared or even averaged on that basis.
In dynamical modeling, you are defining the model based on the physics (or dynamics) of the phenomenon–in effect saying what physical processes are important. Now you may use data to estimate some of these processes, but the test for the model is to validate it against independent data. The two types of modeling are very different, and it sounds as if you are thinking more in terms of statistical modeling.
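A toy contrast of the two approaches described above, as a hedged Python sketch (the one-box energy balance model, the forcing ramp and the synthetic “observations” are all illustrative assumptions, nothing like a GCM): the statistical model gets its parameters from the data themselves, while the dynamical model is built from chosen physics and only afterwards compared with data.

```python
import numpy as np

years = np.arange(1900, 2001)
forcing = np.linspace(0.0, 2.0, years.size)   # assumed smooth forcing ramp, W/m^2

# Dynamical approach: a one-box energy balance model, C dT/dt = F - lam*T,
# with heat capacity C and feedback parameter lam chosen up front.
C, lam = 8.0, 1.2                             # W yr m^-2 K^-1 and W m^-2 K^-1
T = np.zeros(years.size)
for i in range(1, years.size):
    T[i] = T[i - 1] + (forcing[i - 1] - lam * T[i - 1]) / C   # 1-year step

# Synthetic "observations" (model output plus noise), purely for illustration
obs = T + np.random.default_rng(1).normal(scale=0.05, size=T.size)

# Statistical approach: fit a straight line; the trend comes from the data
slope, _ = np.polyfit(years, obs, 1)
print(f"statistical fit: {slope * 100:.2f} K per century")
print(f"dynamical model: {T[-1] - T[0]:.2f} K warming over 1900-2000")
```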
Chris N says
Geoff,
I agree with your points. As a point of fact, I used a least squares analysis of the RSS data. I suspect the error bars are greater than the trend itself (like your HADCRUT example above).
But the bigger issue is this: can the climate models (or current AGW theory) explain the microtrends (either by decade, or by hemisphere, or by land/water surface)? If you want to argue that these trends are not statistically significant, then fine. I won’t argue with you, but I don’t agree with the concept. I think you guys are missing stuff right under your noses with regard to aerosols, climate sensitivity by region, etc. The short-term trend data was only to show these microtrends.
Thanks for the clarification. If I post similar trend data in the future, I’ll qualify it with caveats.
Mike says
Gavin “Just because you say so, doesn’t make it true”. True. Nullius in verba. But equally, just because you dispute it, doesn’t make it false.
What I am saying is pretty basic stuff.
Hank Roberts #262 : I am not getting it from anywhere. Really! I haven’t seen scienceandpublicpolicy, but maybe I should look at it.
My education was in maths and science (a long time ago), followed by a career in computing. I didn’t doubt the “global warming” message until I read the IPCC report in order to learn more about it. I got a gut feeling, as I read it, that the body of the document didn’t support the findings, but it took me quite a long time to work out where. Obviously I have read papers and articles in other places, which have helped fill in some of the blanks, but I try to accept nothing if I can’t verify it at least in my own mind. I have tended to avoid the overtly partisan material on both sides.
The position I have reached – and it is always open to change as new information becomes available – is that the IPCC report badly overstates the impact of CO2 (chiefly because of the modelling reasons that I have explained) and understates the impact of natural factors (ditto). Exactly what the ratio should be, I know not, but the real world certainly would appear to be heavily tilted towards the natural factors on, probably, all time scales. Certainly the behaviour of the world’s climate over the last few years suggests that the natural factors are easily able to overpower the CO2 effect on a decadal sort of time-scale.
Just coming back to the modelling briefly : the output of the models gave a good match to the temperature increases of the last 15 or so years of the 20th century. Given what we know about El Nino, the non-modelling of solar variation, the major uncertainties about clouds and various other uncertainties, if the models reflected the known science faithfully, THEY WOULD BE VERY UNLIKELY TO GIVE A GOOD MATCH. The reason they did give a good match is almost certainly because of the “constraint by observation” modelling. Let me put it the other way round, and this may seem extremely paradoxical but please think about it carefully, it is a far more sensible statement than it might first appear to be : The fact that the models gave a good match to climate over that period tells you that it is extremely likely that there is something seriously wrong with the models!!
[Response: Brilliant logic. We should therefore pay attention to models in inverse correlation to how well they do. We should apply that to other walks of life as well – fly planes that have the worst crash records, abandon quantum mechanics and general relativity, buy stocks based on the size of a company’s losses etc. – gavin]
CobblyWorlds says
#261 Mike,
I think Ray has hit the nail on the head, this is not a case of fitting.
Just to illustrate Gavin and Ray’s points:
I posted this paper in another (strangely similar) discussion above: http://pubs.giss.nasa.gov/abstracts/2007/Hansen_etal_2.html
It’s a pdf file, download and check out figure 2c. What’s happening there is that the calculated and observed temperatures are plotted against each other, and agree well. But crucially for you is how that agreement was reached. Starting from section 2 “Climate Sensitivity”:
2a) They discuss the derivation of an equation giving the effective forcing for 3 GHGs (CO2/CH4/N2O).
2b) They discuss using sea level changes to calculate changes in global albedo due to ice sheet advance/recession.
2c) Then they show that when you combine the calculated temperature deviations using changes in GHGs and ice sheet albedo, the match with actual* global average temperature is good. *note: they also discuss what they use as a proxy for global average temperature (see note 2 page 6).
Note here that they have not “fitted” the Greenhouse Gas (GHG) impact; that has been derived previously and independently. There are points where the calculated temperature and proxy for global average temperature do not agree in terms of time and amplitude. Some of the reasons for such deviations are apparent from the discussions in parts 2a and 2b and elsewhere in the paper. The time domain mismatches are argued to be due to data resolution issues; were they reflecting real time-mismatches, the result would be physically unsustainable (e.g. 160degC temperature increases).
Both GHG and albedo changes must impact global average temperature because they affect the energy balance of the planet. GHGs affect IR “heat loss” to space, and albedo due to ice sheets affect how much incoming sunlight is reflected back to space. So it is physically reasonable that the evidence should point to these 2 primary factors as accounting for most of the global average temperature change in the ice-ages.
As to the degree those factors may affect future global temperature: now we don’t have those massive ice-sheets, so ice-albedo is a far less significant factor. But we’re still left with GHG sensitivity, and the implication that the equilibrium climate sensitivity is about 3degC for a doubling of CO2. The Hansen paper I refer to here is not the only source suggesting ~3degC for 2xCO2.
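To see how a sensitivity of about 3 degC per doubling turns forcings into temperature, here is a back-of-envelope Python sketch; the glacial-interglacial forcing values are rough, commonly quoted magnitudes inserted purely to show the arithmetic, not numbers taken from the Hansen et al. paper.

```python
# Equilibrium warming = sensitivity (K per W/m^2) * forcing (W/m^2).
f_2xco2 = 3.7                 # W/m^2, canonical forcing for a CO2 doubling
ecs = 3.0                     # K per doubling (the ~3 degC figure above)
sensitivity = ecs / f_2xco2   # about 0.8 K per (W/m^2)

ghg_forcing = 3.0             # W/m^2, rough glacial-to-interglacial GHG change
albedo_forcing = 3.5          # W/m^2, rough ice-sheet albedo change

dT = sensitivity * (ghg_forcing + albedo_forcing)
print(f"Implied glacial-interglacial warming: about {dT:.1f} K")
```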
Geoff Wexler says
Re: My previous comment #263.
Perhaps I should have described these short term estimates of rates of warming as being misleading rather than useless. Isn’t this quote from #237 another example of the dangers of using a telephoto lens when a wide angle one would have been appropriate?
“So basically ALL of the accrued warming since 1979 occurred in a 5 year span (1994 through 1998). That’s it!”
Ray Ladbury says
Chris N., First, global climate models are not intended to reflect short-term variability or regional effects. However, you can put in perturbations corresponding to short-term events and see how they evolve. When you do, you find that a lot depends on initial conditions, which are not certain. I agree that regional and short-term behavior can be interesting, but it is inherently more uncertain than true global climate, which depends mainly on energy balance. Realclimate had a piece on regional climate projections:
https://www.realclimate.org/index.php/archives/2007/08/regional-climate-projections/
[Response: Ray, you need to be a little careful here. GCMs do calculate regional and local changes and have plenty of short term variability. The issue is whether there is any predictability at those scales. The problem is that there is a lot of variability at these scales and so the forced signal (from GHGs) is hard to detect over short time periods. – gavin]
CobblyWorlds says
#266 Mike,
Wrong.
On the most basic of levels this should be apparent because nobody has “fitted by observation” to predict weather accurately on week/month timescales (at least not in the British climate they haven’t). If this alleged fitting works at long timescales, it should be better at short.
But you’re mainly wrong because climate is emergent order on long timescales.
I cannot predict the next toss of a coin at all. Not in the sense that I can say with any more skill than a random guess what the next toss will bring, heads or tails.
But that does not mean I cannot say with some certainty that after 100 tosses I will have near equal occurrences of heads and tails. Furthermore, I can assert with confidence that as my set of results grows larger – 1,000, 10,000, 100,000 and so on – the sets of occurrences tend closer to being equal, i.e. the statistics are predictable. If that fails, then I strongly suspect that the coin is biased, because the theory is general and sound. Likewise, with ever rising CO2, when there is a “global cooling” blip it tells me that some other factor is involved, as that theory is general and sound. (To be clear, CO2 is far from the only anthropogenic forcing, but it is the one that we can really play havoc with; we’re only about 1/10 of the way through the estimated available fossil fuels (mainly coal).)
Similarly weather models explore weather (much less than 30 years), climate models explore climate (greater than 30 years).
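A quick Python toy of the coin-toss point (nothing climate-specific is assumed here): single tosses stay unpredictable while the long-run statistic converges.

```python
import numpy as np

rng = np.random.default_rng(42)
for n in (10, 100, 1_000, 10_000, 100_000):
    tosses = rng.integers(0, 2, size=n)        # 0 = tails, 1 = heads
    print(f"{n:>7} tosses: fraction heads = {tosses.mean():.3f}")
# There is no skill in predicting any single toss, yet the statistics become
# ever more predictable as the sample grows – the weather vs climate analogy.
```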
Remember I recommended RC’s index? Well try this: https://www.realclimate.org/index.php/archives/2005/11/chaos-and-climate/
PS I’m also not a climate scientist, just a former ‘sceptic’ with an Electronics degree who got into climate science whilst examining his scepticism. When I say sceptic, actually I was misinformed with my critical faculties in neutral.
Hank Roberts says
> scienceandpublicpolicy, but maybe I should
Check the references. Safer to stick with science sites, lest you be easily led into believing PR that agrees with your preconceptions. http://www.sourcewatch.org/index.php?title=Science_and_Public_Policy_Institute
joel says
I have a question about climate models.
Has anybody modeled the climate for Earth assuming that the Earth’s axis is tilted 0 degrees? What would it look like? And how would the climate change as the Earth’s axis becomes more and more tilted?
Is there any model capable of doing that, that is, starting with a 0 degree tilt and then correctly getting our climate with its 23 degree tilt?
As a lark, this website was my first hit on google for:
Earth axis tilt
http://www.divulgence.net/
How did this site get to the top of the google hit list?
Joel
[Response: All the models use the tilt, precession and eccentricity of the orbit as input parameters, and they tend to get changed when doing paleo-climate experiments. I’m not sure that there have been many experiments with tilt=0 since I have no idea when that was likely to have been the case (if ever). You would expect reduced seasonality, reductions in polar ice for instance, and I’d anticipate significant differences in the ITCZ and monsoonal rains. But you should try it and see (EdGCM is probably your friend). – gavin]
Mike says
Gavin, I expected better of you. “fly planes that have the worst crash records” – that was, frankly, pathetic. I’m sorry, I really don’t want to use language like that here, but how else could I put it? You have maintained a high standard here, up to now.
What I asked you to do was “please think about it carefully, it is a far more sensible statement than it might first appear to be”. No matter how much you disagree with me, what I was saying could not in any way whatsoever lead to flying planes with bad crash records.
The point was that, with all the uncertainties acknowledged by the IPCC, with El Nino not represented in the models, with solar variation acknowledged as not being represented in the models, etc, etc, etc, it was actually UNLIKELY that the models would give a good fit to climate.
About the “30 years” bit, and weather versus climate (CobblyWorlds) : The “good fit” given by the models, as claimed by its proponents, is either for a period of about the last 15 years of the 20th century, or for the average slope over the whole 20th century. There are significant deviations in the middle, but these don’t go past 30 years or so. The most vocal case for global warming is based around the last 15 years of the 20th century, and it would seem that we can now agree that this is nonsense – a longer period is required.
If you look at the whole 20th century (the man-made CO2 thing really can’t apply over a much longer period than that), the graphs I have seen supporting the models give a 2-point fit (average slope is the same). But I would have found the models more credible if they had shown less slope – there was a net increase in solar activity over the 20th century, and it ended with a big El Nino (factors not represented in the models).
Now if you look at the 20th century vs solar variation, you see quite a good fit – certainly a much better fit. And then if you look at solar variation against long term climate, you still see a good fit – and this is over periods of really major climate changes that have given sea-level rises of 120m+, etc. Purely on an intuitive level, this tells you that there is something wrong if the models attribute all of the 20th century warming to AGGs. But you have to get past intuition, and work out where the error comes from. I have done that to my own satisfaction.
This next sentence is a level of argument that I have tried to avoid in the past, wanting to stick to essentials, but given the reference to “30 years” it seems I have to point it out : The proponents of the conventional view are claiming that the recent drop in temperature is caused by La Nina so we should ignore it as it has no long term implication. But I didn’t hear them say that about the previous El Nino.
The point is that we have to try to be even-handed.
There has been a fair amount of information given in response to my last 2 posts, and I haven’t gone through it yet. I will, but it will take me some time to do so.
[Response: Well I hate to further disappoint you, but your point is both ill-informed and illogical. First off, all the models have ENSO-like behaviour – what isn’t included is the exact sequencing of these events. Secondly, most include reasonable estimates – and some would say overestimates – of the solar changes since the 19th Century. The first issue implies that the models’ short term trends cannot be compared to the observations, but has little impact on longer time scales. The solar change is frankly in the noise of the mean forcing over the 20th Century (most of the uncertainty is related to aerosol trends, not solar). The net impact of these and other issues is obviously to degrade the match to the one realisation that we have of the real climate. However where is there any evidence that the fit is ‘too good’? Your whole point is predicated on nothing. As to whether anyone has blamed 1998’s high on El Nino – just look around – those statements are everywhere. – gavin]
Ray Ladbury says
Gavin, Thanks for the clarification. I realize there are regional predictions, my point was meant to say that is not the purpose of the GCMs–so it’s not surprising they aren’t tuned for that. I should have been clearer.
Ray Ladbury says
Mike, you have some serious misconceptions about how climate modeling is done–again dynamic vs statistical modeling.
As an example:
http://iri.columbia.edu/climate/ENSO/background/prediction.html
I have to agree with Hank. Where are you getting these crazy ideas from? Did you see the paper cited by CobblyWorlds in #226? If you remove any of those constraints–especially those from volcanic eruptions–the probability distribution becomes very asymmetric toward high forcing.
Mike says
What I am saying is so simple.
1. Stuff is missing.
2. The IPCC say the missing stuff is significant.
3. Because of the missing stuff, there should be a difference between model output and actual temperature.
4. There isn’t.
5. Therefore the model may be in error.
How much is missing?
Estimates of the net change in solar irradiance over the 20th century, that I can find, vary from about +0.12 W m-2 to +2 W m-2. The latter was from J Lean 2000 and has been challenged, so I’ll stick with +0.12. This tallies with the IPCC’s 0.12 from 1750 (TS.2.4), since 1900 and 1750 seem to have been about equal.
IPCC report, para 1.4.3
“The solar cycle variation in irradiance corresponds to an 11-year cycle in radiative forcing which varies by about 0.2 W m–2. There is increasingly reliable evidence of its influence on atmospheric temperatures and circulations, particularly in the higher atmosphere [refs]. Calculations with three-dimensional models [refs] suggest that the changes in solar radiation could cause surface temperature changes of the order of a few tenths of a degree celsius.”
11 peer-reviewed papers are referenced where I put [refs].
So we should be looking for the models to understate 20th century warming by approximately 60% (0.12 / 0.2) of “a few tenths of a degree Celsius”, plus an additional factor to upgrade the period of time from a part of an 11-year cycle up to something nearer equilibrium, less a very small amount that the IPCC actually allowed. Only approximately, because we don’t know if it’s linear, we know there are a lot of uncertainties, and we don’t know what other factors there are. But given that the 20th century warming was only about 0.7 deg C, this is obviously significant.
The models didn’t understate. They hit the 20th century warming spot on. Where did the discrepancy come from? I couldn’t find any other factors that offset it (that doesn’t mean they don’t exist), but I did find some “constrained by observation” places that did explain it.
Gavin : The statements about El Nino didn’t start appearing until after the ones about the (later) La Nina. But who said what doesn’t affect the science, so this is a diversion.
I still haven’t read the other posted links, but will try to find the time.
Mike says
I have read all the papers linked here in reply to my last two posts. Let me know if I have missed any. I’ll keep my comments as short as I can, so ask if you want me to expand.
“Using multiple observationally-based constraints to estimate climate sensitivity”, J. D. Annan and J. C. Hargreaves
from 3.1
“If the net forcing [AGGs vs aerosols] is small, then climate sensitivity would have to be very high to explain the observed warming.”
This shows that there is an assumption that all the temperature increase is caused by AGGs. That is the point I was making.
“Climate change and trace gases”, James Hansen et al.
This whole paper is a stretch. They go to extraordinary lengths to try to convince themselves that they can explain the paleo cycles. All the time, they assume that the CO2 is what is causing the warming. Consequently, the warming phases are quite easy for them – they just say there are strong feedbacks, some fast some slow to match the time lags. But the long cooling phases are a big problem.
This is an early paragraph, from page 1928 :
Figure 1a reveals remarkable correspondence of Vostok temperature and global GHG climate forcing. [I agree, it does] The temperature change appears to usually lead the gas changes by typically several hundred years, as discussed below and indicated in figure 1b. This suggests that warming climate causes a net release of these GHGs by the ocean, soils and biosphere. GHGs are thus a powerful amplifier of climate change, comparable to the surface albedo feedback, as quantified below. The GHGs, because they change almost simultaneously with the climate, are a major ‘cause’ of glacial-to-interglacial climate change, as shown below, even if, as seems likely, they slightly lag the climate change and thus are not the initial instigator of change.
Try reading that paragraph like this : The major cycles over geological time are caused by natural factors as yet not understood. Throughout these cycles, the atmospheric CO2 behaves exactly as expected. During warming, CO2 is released from the oceans etc, so atmospheric CO2 increases. Similarly, it decreases during cooling. There is a time-lag.
RealClimate “Climate sensitivity: Plus ça change…” 24 March 2006
This paper mercifully puts James Hansen et al out of their misery : “…generally speaking radiative forcing and climate sensitivity are useful constructs that apply to a subsystem of the climate and are valid only for restricted timescales – the atmosphere and upper ocean on multi-decadal periods.”. And “we can think about the forcings for the ice ages themselves. These are thought to be driven by the large regional changes in insolation”.
So the paleo temperature cycles are caused by changes in insolation, not CO2 – do you recognise in that anything that I have been saying? But think more – IF the AGW theory is right then it HAS to apply to natural CO2 during paleo cycles. If the paleo cycles don’t behave as per AGW theory then either the theory is wrong, or there are more powerful natural forces in operation. If there are more powerful natural forces in operation, then those forces – which are still unknown – must still exist today; i.e., they must have at least some influence. Therefore, it is plain wrong to assign all temperature change – other than from other known factors – to AGGs by using “constrained by observation” modelling techniques.
NB. In talking about sensitivity, the paper still assumes that CO2 is responsible.
RealClimate “Chaos and Climate” 4 November 2005.
Not relevant to this argument.
The paper on “Overview of the ENSO System” – statistical and dynamical models. Yes I understand that. It doesn’t affect my argument.
Ray Ladbury says
Mike, I’ll say it again. What you are missing is even the vaguest understanding of dynamical modeling. In dynamical models, you are restricted to the known physics. You keep harping on about “natural factors”. What natural factors? CO2 sensitivity is constrained by multiple lines of evidence to its current value. It is true that each line by itself is not inconsistent with a broad range of values. The thing is that the sensitivity has to explain all of them, and that is a fairly tight constraint. Also if you look at the individual constraints in Annan and Hargreaves, only the LGM constraint is not inconsistent with low values – and it is the most poorly known. Volcanic constraints are arguably the best known as they are based on real-time data, and they are crucial for narrowing the range of possible values.
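A schematic of what “the sensitivity has to explain all of them” means, as a hedged Python sketch; the individual constraint curves below are invented stand-ins, not the actual Annan and Hargreaves likelihoods, but they show how multiplying several broad constraints yields a much tighter combined estimate.

```python
import numpy as np

S = np.linspace(0.5, 10.0, 1000)      # candidate sensitivities, K per CO2 doubling

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Invented, deliberately broad stand-ins for individual lines of evidence
constraints = {
    "20th century": gaussian(S, 3.0, 2.0),
    "volcanic":     gaussian(S, 2.8, 1.2),
    "LGM":          gaussian(S, 2.5, 2.5),
}

combined = np.ones_like(S)
for curve in constraints.values():
    combined *= curve                  # sensitivity must satisfy every constraint

def spread(curve):
    p = curve / curve.sum()            # normalise to a discrete PDF
    mean = np.sum(p * S)
    return np.sqrt(np.sum(p * (S - mean) ** 2))

for name, curve in constraints.items():
    print(f"{name:>12}: peak {S[np.argmax(curve)]:.1f} K, spread {spread(curve):.1f} K")
print(f"{'combined':>12}: peak {S[np.argmax(combined)]:.1f} K, spread {spread(combined):.1f} K")
```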
You are falling into a classic crackpot mode of assuming that because you don’t understand something (something to which you’ve devoted perhaps a few months’ effort while climate scientists spend a career), it must be wrong.
Look, take the same tack you are taking right now and apply it to evolution or the Big Bang or relativity or quantum mechanics, and you’ll find you don’t understand those either with a few months’ effort. Does that make them wrong, too?
Barton Paul Levenson says
Re #276:
Mike, you are conflating energy flux densities (watts per square meter) with temperature changes (Kelvins). A change of 0.2 watts per square meter in radiative forcing corresponds to about 0.15 K of temperature change, not most of 0.7 K.
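The arithmetic behind that figure, written out as a worked equation (the sensitivity of roughly 0.8 K per W m−2, i.e. about 3 K per CO2 doubling, is an assumed round number, since the comment does not state one):

$$\Delta T \;\approx\; \lambda\,\Delta F \;\approx\; 0.8\ \mathrm{K\,(W\,m^{-2})^{-1}} \times 0.2\ \mathrm{W\,m^{-2}} \;\approx\; 0.16\ \mathrm{K}$$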
Lazlo says
Interesting. OK I’ll take you guys on as long as it’s a level playing field. I have been mightily offended by the attack on scientific rationalism exemplified by your AGW extremists. I am prepared to bet my own money on a cooling in the next decade. You have to be betting your own personal funds, not research money, OK?
[Response: This bet is specific to the Keenlyside et al authors. If you want to bet generally on global cooling (which we would not advise), please talk to Brian Schmidt or James Annan. They have appropriately constructed bets already worked out. – gavin]
Mike says
Ray Ladbury #278 : “In dynamical models, you are restricted to the known physics.”.
The “IPCC” models are basically dynamical models – they model the known physics. But there are uncertainties, and they have used some “closed loop” techniques to eliminate or reduce some of the uncertainty. This is a perfectly valid modelling technique in most circumstances. Its danger is that it can lead to invalid conclusions if an invalid assumption has been used. I contend that that is what has happened here. Please, read on …..
Barton Paul Levenson #279. “A change of 0.2 watts per square meter in radiative forcing corresponds to about 0.15 K of temperature change, not most of 0.7 K”. Many thanks. That is absolutely my point. 0.2 watts per sq m CANNOT POSSIBLY deliver “a few tenths of a degree Celsius”. Now read the IPCC paragraph again. It says there is “increasingly reliable evidence” that it does!!
From IPCC report, para 1.4.3
“The solar cycle variation in irradiance corresponds to an 11-year cycle in radiative forcing which varies by about 0.2 W m–2. There is increasingly reliable evidence of its influence on atmospheric temperatures and circulations, particularly in the higher atmosphere [refs]. Calculations with three-dimensional models [refs] suggest that the changes in solar radiation could cause surface temperature changes of the order of a few tenths of a degree celsius.”
And it does it in just a few years, not the extended period needed to approach equilibrium, so the effect is actually stronger than first appears.
Surely now you can all see that something significant is going on with solar variation, that is NOT built into the models. The observed effect of solar variation is much greater than its direct RF. There HAS TO BE a “feedback” mechanism that has not yet been identified.
This unidentified “feedback” mechanism invalidates the use of some of the “closed loop” techniques in the models, because its existence invalidates the underlying assumptions.
[Response: The IPCC reports quotes a result from the models and from that you claim that there is something missing in the models? I’m confused. – gavin]
Ray Ladbury says
Mike, you are positing a new, unknown mechanism on the basis of YOUR interpretation of an account in a summary report (not even the peer-reviewed studies themselves). And again, I’m not sure what you mean by a “closed loop” technique. That’s a very vague term, and whether there would be any spurious results from such a procedure would depend a great deal on the details.
However, the most serious argument against your proposal is that it is at odds with the data–especially that provided by volcanic eruptions, which do not correlate with solar cycle. Almost by themselves, these provide a constraint against values below 1.5 degrees C per doubling.
Hank Roberts says
Mike, you missed the key point in what you quoted:
> surface temperature changes
That’s the changes in the temperature on the surface, that’s all they’re talking about there.
Hank Roberts says
Mike, see also the extended discussion that ends
“… The five lines of evidence discussed above suggest that the lack of such secular variation undermines the circumstantial evidence for a ‘hidden’ source of irradiance variability and that there therefore also might be a floor in TSI, such that TSI during Grand Minima would simply be that observed at current solar minima.”
http://solarcycle24.forumco.com/topic~TOPIC_ID~4~whichpage~18.asp#top
There, search for
Leif Svalgaard’s post 05/11/2008 : 11:06:54
Mike says
“The IPCC reports quotes a result from the models and from that you claim that there is something missing in the models? I’m confused. – gavin”
The models used to estimate the actual effect of changes in solar radiation are using a different approach to the models as referenced in the IPCC report. The output from the former shows that there is something missing in the latter.
If you want to follow it up, this paragraph shows which papers to find in the references at the back of section 1 of the IPCC report :
“There is increasingly reliable evidence of its influence on atmospheric temperatures and circulations, particularly in the higher atmosphere (Reid, 1991; Brasseur, 1993; Balachandran and Rind, 1995; Haigh, 1996; Labitzke and van Loon, 1997; van Loon and Labitzke, 2000). Calculations with three-dimensional models (Wetherald and Manabe, 1975; Cubasch et al., 1997; Lean and Rind, 1998; Tett et al., 1999; Cubasch and Voss, 2000) suggest that the changes in solar radiation could cause surface temperature changes of the order of a few tenths of a degree celsius.”
Hank Roberts #283 : Surface temperature changes are exactly what the IPCC report deals with. From the Summary for Policymakers:
“Eleven of the last twelve years (1995–2006) rank among the 12 warmest years in the instrumental record of global surface temperature9 (since 1850). The updated 100-year linear trend (1906 to 2005) of 0.74°C [0.56°C to 0.92°C] is therefore larger than the corresponding trend for 1901 to 2000 given in the TAR of 0.6°C [0.4°C to 0.8°C]. The linear warming trend over the last 50 years (0.13°C [0.10°C to 0.16°C] per decade) is nearly twice that for the last 100 years. The total temperature increase from 1850–1899 to 2001–2005 is 0.76°C [0.57°C to 0.95°C].”
Hank Roberts #284 : I haven’t got access to that link. I’ll have to register etc. I’ll get back to you.
Mike says
Ray Ladbury #282 – I put “closed loop” in quotes because it is a term used in control models, and the “IPCC” models, although they share many of the same features, are not actually used for control.
All it means is that discrepancies between the outputs and observations are used to modify the inputs. In the case of the IPCC models, the inputs in question are some of the many parameters that are used to calculate results from the scientific formulae.
Hank Roberts says
Mike, put the names (e.g. Cubasch) in the Search box at the top of the page to see previous discussion of those papers.
It sounds like you’re trying to say that since the planet has warmed, but the sun hasn’t been shown to be responsible for the observed warming, there must be some hidden solar connection to make up the difference.
But the difference is explainable by CO2 increases.
If you want to argue against that, you also need to postulate a second hidden forcing, a negative one, to zero out the warming explainable from increasing CO2 in the models.
Or did I miss your point?
Mike says
Hank Roberts #284 : I did a search for “Leif Svalgaard” and it came up with a number of interesting-looking articles and papers. The first one has something in it for both of us:
http://environment.newscientist.com/article/mg19125691.100
It confirms what I have been saying about the influence of the Sun. It identifies two possible mechanisms, ultraviolet and cosmic rays. But it also has this:
“”The temperature of the Earth in the past few decades does not correlate with solar activity at all,” Solanki says. He estimates that solar activity is responsible for only 30 per cent, at most, of the warming since 1970. The rest must be the result of man-made greenhouse gases, and a crash in solar activity won’t do anything to get rid of them.”
Still, 30% is not trivial. And he has forgotten the El Nino. And a crash in solar activity resulting in a crash in temperature would cool the oceans which would then take up more CO2 … :) … no, I won’t go there!! (it probably wouldn’t be fast enough). Interesting that the bumping-up of ECS that I referred to many posts ago, from 1.9 to 3.2, is not a lot greater than this 30% (30% of 3.2 is about 1, 3.2-1.9 is 1.3).
I think this opens up a whole new range of thought.
Hank Roberts says
Well, new to you. Dig into it; I think you’ll find Leif’s posts at solarcycle24 suggest there’s nothing hidden away about this, and that it’s a formerly significant influence now getting lost in the CO2 change.
Ray Ladbury says
Mike, you are trying to get a more or less monotonic trend (modulo noise) from an oscillatory source term – show me a differential equation (with real coefficients) that does that. You really aren’t approaching this very systematically, and that is a sure way to go down a blind alley. People study this stuff for years. To think that you can come in and show that the professionals who have been working at this for decades are wrong is downright arrogant and foolhardy. I am not saying that you can’t understand what is going on – only that it is clear that you do not yet, and, what is more, you won’t without systematic study.
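To spell that point out with the simplest possible example (a linear relaxation model, offered purely as an illustration): take dT/dt = −λT + A·sin(ωt) with positive constants λ, A and ω. Once the transients die away, the solution is T(t) = A/√(λ² + ω²) · sin(ωt − φ), with tan φ = ω/λ. That is a bounded oscillation with zero long-term trend. To get a sustained trend out of such an equation you need a sustained, non-oscillatory change in the forcing term, not just a cycle.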
Rod B says
I’ve been away so this is belated. Sorry.
A quickie: Geoff (245), you say, “….Suppose you apply an alternating voltage to a piece of copper wire; it works by exciting the electrons into nearby vacant quantum levels…. ”
I may have totally misread what you meant, but that’s not at all how electric current flows. The electrons’ quantized energy levels have nothing to do with it — at least in most cases and all copper cases.
Al Tekhasski says
Gavin, your idea to use paleorecords to validate GCMs is hopeless. The reason is that paleorecords are integrals: convolutions of vastly multi-dimensional climate variables onto a few one-dimensional functions, with uncertain time resolution, contaminated with noise, and morphed over the ages. This process cannot be inverted mathematically; thousands of different climate models could produce identical (within their error margins) paleorecords.
Second, you continue to operate under the narrow assumption that the global climate is a system with only one attractor – a fixed point – and that all changes to global climate must have an external cause. Given our current knowledge about the dynamics of turbulence, of oceans, and of mantle convection, this assumption seems quite myopic.
Rod B says
Martin (260) and Ray, et al: Martin says, “…it is not very fruitful to look at what’s happening to individual molecules and photons; the statistics are much simpler than these individual narratives, precisely because of LTE…”
I disagree philosophically. Granted, it is more fruitful to assess the macro stuff with statistics, LTE, etc. But the foundation for all of this absorption and transport is at the atomic/molecular level. The precise function (within QM limits) of individual molecules and their pieces is critical to all of this happening, and, what bothers me, it’s not at all clear that there is knowledgeable agreement on how the fundamental atomic-level process works. Logically, this of course does not necessarily refute the macro theory. But it sure as hell doesn’t provide a resounding vote of confidence, either.
Martin, I think I agree with your 260 post, with probably a semantic difference (which is nonetheless important). I basically said that excited vibration and rotation states do not affect the “classic” temperature of a molecule (or, a bunch of them for Ray’s sake). You said/implied they do — “…vibration becomes part of the total heat content of the parcel of air considered, as it will be immediately equipartitioned…” Equipartitioning moves the vibration energy (no classic temp) to translation (classic temp) within the same molecule or to another molecule via collision. I would further agree that there is a strong inclination to equipartition out vibration energy as average molecules, as I’ve read, are highly unlikely to pick up “LTE” energy in vibrations unless the temperature approaches ~1000K. (NOT true for rotation, however.) It would also seem to me that CO2 picking up vibration energy from anything other than absorbed infrared photons, like collisions and the LTE push, is highly unlikely.
So, again (to somewhat disagree), a CO2 molecule either absorbing a quantized photon into vibration or emitting a similar photon, per se (NOTHING ELSE happening) has no effect on the temperature of the gaseous environment where the CO2 lives.
CO2: absorb-emit-absorb-emit-absorb-emit to space: no temperature change.
CO2: absorb-emit-absorb-crash: atmospheric temperature change.
I think I understand the degrees of freedom (well, as best as a layman can, I guess). I think this aligns with, and is analogous to, my thought. As the local temperature environment increases, E = (3/2)kT goes to E ≈ (5/2)kT goes to E ≈ (7/2)kT (in not a very precise fashion, à la QM) as the additional degrees of freedom come into play. This forms the basis for specific heat, and specifically for specific heat increasing as DoFs come into play, which amounts to adding (some) energy to a substance without increasing its temperature.
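For reference, the equipartition bookkeeping behind that (textbook numbers, nothing specific to any particular model): the average energy per molecule is E = (f/2)kT, so the molar specific heat at constant volume is cv = (f/2)R, roughly 12.5, 20.8 and 29.1 J/(mol·K) for f = 3, 5 and 7. Each extra active degree of freedom means more energy must be supplied for the same rise in (translational) temperature. And, as noted, real molecules switch their vibrational modes on gradually with temperature rather than in sharp steps.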
Phil. Felton says
Re #293
“I would further agree that there is a strong inclination to equipartition out vibration energy as average molecules, as I’ve read, are highly unlikely to pick up “LTE” energy in vibrations unless the temperature approaches ~1000K. (NOT true for rotation, however.) It would also seem to me that CO2 picking up vibration energy from anything other than absorbed infrared photons, like collisions and the LTE push, is highly unlikely.”
Somewhat unlikely, not ‘highly unlikely’: according to Boltzmann statistics, CO2 at room temperature has about 5% of its molecules at or above the first vibrational energy level.
CobblyWorlds says
Mike,
It really pays to learn first and then draw a conclusion, rather than establish your conclusion and then thrash about looking for evidence to support it.
PS, your post 277 is as wrong as just about everything else you’ve posted here. But you seem to have moved on so far…
Martin Vermeer says
Rod B #293:
For what it’s worth, you are right (provided you exclude that particular molecule from the body of gas you’re studying the temperature of)… but it’s worth very little :-)
CO2: crash-emit-absorb-emit to space: atmospheric temperature change (opposite direction, cooling)
The important thing, both classical and quantum, is time-reversal symmetry: a reaction happens just as easily in the reverse as in the forward direction. This is why LTE doesn’t need to be informed about interaction cross-sections. This is why absorptivity == emissivity. And absorb-crash == crash-emit.
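(In radiative-transfer terms that is just Kirchhoff’s law, which holds whenever the gas is in LTE: the emission coefficient is jν = αν·Bν(T), i.e. the absorption coefficient times the Planck function, so a gas that absorbs strongly at a given frequency also emits strongly there. Standard textbook material, stated here for reference rather than derived.)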
(I see you left out emit-to-earth-surface… )
Ray Ladbury says
Rod B., I do not know what “classic temperature” is. Temperature is defined as the partial derivative of energy with respect to entropy. Under some circumstances (e.g. for some gases at some temperatures), this reduces to the familiar average energy of (3/2)kT per molecule. However, you can’t say that this is generally true and define a “classic temperature” in terms of it; your definition will not be physically meaningful.
As to the likelihood of collisions exciting vibrational modes – do the math. At room temperature, about 4% of molecules have an energy (mostly kinetic) equal to or greater than the vibrationally excited state of CO2 corresponding to 15 microns. Even at 208 K, roughly 1% of molecules have this energy. That is why the blackbody curve at these temperatures peaks in the IR.
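For anyone who wants to check that, here is a back-of-the-envelope Python sketch; it treats the 15-micron bending mode as a single level at about 667 cm^-1 and ignores degeneracy and the full partition function, so treat the exact decimals loosely:

import math

h = 6.626e-34          # Planck constant, J s
c = 2.998e10           # speed of light, cm/s
k = 1.381e-23          # Boltzmann constant, J/K

wavenumber = 667.0     # cm^-1, CO2 bending mode (~15 microns)
E = h * c * wavenumber # energy of one vibrational quantum, J

for T in (295.0, 208.0):              # room temperature and a cold upper-troposphere value
    frac = math.exp(-E / (k * T))     # Boltzmann factor for the excited level
    print("T = %.0f K: fraction ~ %.3f" % (T, frac))

# Prints roughly 0.039 at 295 K and 0.010 at 208 K, i.e. the "about 4%" and
# "roughly 1%" figures quoted above.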
You basically have the idea down about how DoF relates to specific heat – it’s very precise for a given molecule. Now you have to look at how the Maxwell distribution for the gas as a whole comes into play.
Ray Ladbury says
Al Tekhasski, that is the difference between a scientist and a nonscientist. When confronted with a problem of daunting complexity, the layman will throw up his hands, while the scientist will try to find as many constraints and independent data sources as possible and come up with a way to solve the problem. And, perhaps surprisingly, it works most of the time. And when it does not work, it certainly informs you of the fact, because you get contradictions among your data.
Mike says
Hank Roberts #278. Yes, you did miss my point. You said “It sounds like you’re trying to say that since the planet has warmed, but the sun hasn’t been shown to be responsible for the observed warming, there must be some hidden solar connection to make up the difference. .. But the difference is explainable by CO2 increases. .. If you want to argue against that, you also need to postulate a second hidden forcing, a negative one, to zero out the warming explainable from increasing CO2 in the models.”
My point is that the models have taken away most of the Sun’s warming, by simply ignoring the evidence, and given it to the CO2 warming, by overestimating ECS. I have identified the places in the IPCC report where both occur.
Hank Roberts #289. According to the agreement required on registration, solarcycle24 won’t protect my personal information, so I’m not prepared to register. You’ll have to extract the relevant info and post it here.
Ray Ladbury #290. I have no idea what you are referring to by “a more or less monotonic trend (modulo noise) from an oscillatory source term”. I can’t see how it relates to the IPCC having ignored the effect of solar variability. Re the rest of your post: let’s stick to the subject.
I can provide no evidence (links, documents, etc.) for the following: I have spoken recently to four highly respected scientists in the climate field, and those conversations have strengthened my view that I am on the right track. One, a review author for one of the working groups of the latest IPCC report, did not dispute the idea that the Sun had a greater effect on climate than in the models, but simply said that there was no known mechanism, so they couldn’t model it. Another, after a long and interesting discussion of the topic, said that the next year or two would be interesting if sunspot activity stayed low for another six months.
Ray Ladbury says
Mike, think about this in a logical, linear fashion:
CO2 forcing is constrained by multiple (>5) independent lines of evidence. For what you are asserting to be true, all of these constraints would have to be wrong, all in the same way, and by about the same amount. How likely do you think that is?
You would also have to come up with a new mechanism for heating the planet that gave rise to a 30-year warming trend. And it would need to explain why the stratosphere is cooling, etc.