A couple of months ago, we discussed a short paper by Matthews and Weaver on the ‘climate change commitment’ – how much change we are going to see purely because of previous emissions. In my write-up, I contrasted the results in M&W (assuming zero CO2 emissions from now on) with a constant concentration scenario (roughly equivalent to an immediate cut of 70% in CO2 emissions). However, as a few people pointed out in the comments, this exclusive focus on CO2 is a little artificial.
I have elsewhere been a big advocate of paying attention to the multi-faceted nature of the anthropogenic emissions (including aerosols and radiatively and chemically active short-lived species), both because that gives a more useful assessment of what it is that we are doing that drives climate change, and also because it is vital information for judging the effectiveness of any proposed policy for a suite of public issues (climate, air pollution, public health etc.). Thus, I shouldn’t have neglected to include these other factors in discussions of the climate change commitment.
Luckily, some estimates do exist in the literature of what would happen if we ceased all human emissions of climatically important factors. One such estimate is from Hare and Meinshausen (2006), whose results are illustrated here:
The curve (1) is the result for zero emissions of all of the anthropogenic inputs (in this case, CO2, CH4, N2O, CFCs, SO2, CO, VOCs and NOx). The conclusion is that, in the absence of any human emissions, the expectation would be for quite a sharp warming with elevated temperatures lasting almost until 2050. The reason is that the reflective aerosols (sulphates) decrease in abundance very quickly and so their cooling effect is removed faster than the warming impact of the well-mixed GHGs disappears.
This calculation is done with a somewhat simplified model, and so it might be a little different with a more state-of-the-art ESM (for instance, including more aerosol species like black carbon and a more complete interaction between the chemistry and aerosol species), but the basic result is likely to be robust.
Obviously, this is not a realistic scenario for anything that could really happen, but it does illustrate a couple of points that are relevant for policy. Firstly, the full emissions profile of any particular activity or sector needs to be considered – exclusively focusing on CO2 might give a misleading picture of the climate impact. Secondly, timescales are important. The shorter the time horizon, the larger the impact of short-lived species (aerosols, ozone, etc.). However, the short-lived species provide both warming and cooling effects and the balance between them will vary depending on the activity. Good initial targets for policy measures to reduce emissions might therefore be those where both the short and long-lived components increase warming.
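The competition of timescales described above can be sketched with a toy energy-balance model. This is purely illustrative: the parameter values below are round numbers I have assumed for the sketch, not taken from Hare and Meinshausen (2006), and real CO2 decay is multi-timescale rather than a single exponential.

```python
import math

# Toy "zero emissions" commitment sketch: after emissions stop, reflective
# aerosols wash out within about a year, while well-mixed GHG forcing decays
# over roughly a century. All parameter values are assumed for illustration.

LAMBDA = 1.25   # climate feedback parameter, W/m^2/K (assumed)
C      = 8.0    # effective heat capacity, W*yr/m^2/K (assumed)
F_GHG  = 2.6    # GHG forcing at cutoff, W/m^2 (assumed)
F_AER  = -1.0   # aerosol forcing at cutoff, W/m^2 (assumed)
TAU_G  = 100.0  # e-folding time of GHG forcing, years (crude single timescale)
TAU_A  = 0.5    # e-folding time of aerosol forcing, years

def forcing(t):
    return F_GHG * math.exp(-t / TAU_G) + F_AER * math.exp(-t / TAU_A)

# One-box energy balance, C dT/dt = F - LAMBDA*T, Euler-stepped.
dt, years = 0.05, 200.0
T, t, history = 0.8, 0.0, []   # start below equilibrium with the net 1.6 W/m^2
while t < years:
    history.append(T)
    T += dt * (forcing(t) - LAMBDA * T) / C
    t += dt

peak = max(history)
print(f"T at cutoff: {history[0]:.2f} K, peak: {peak:.2f} K, "
      f"after {years:.0f} yr: {history[-1]:.2f} K")
```

Because the negative aerosol forcing disappears almost immediately while the GHG forcing lingers, the temperature in this sketch rises above its cutoff value before slowly relaxing back down – the qualitative behaviour of curve (1) above.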
Patrick 027 says
Re CFU – it seems we’ve all had reason to disagree with Rod in the past (the whole what-emits-blackbody-radiation-on-the-sun fiasco), but I think you might be reading into his comments something that isn’t intended. I interpreted his point to be that when heat is added to a material, not all the heat goes into translational kinetic energy; some goes into other forms, which don’t directly contribute to the temperature (at least for an ideal gas; I don’t know as much about condensed matter or plasmas offhand), thus increasing the heat capacity. But that is not to say that the energy can be sequestered away without affecting temperature, because the energy is always being exchanged among the different forms, including the form that is the molecular basis for temperature. Maintaining LTE, a given addition of heat will be apportioned in some way among these forms (which is not the same as saying the energy is locked into each form without equal and opposite exchanges occurring continually – it is that exchange which tends to drive the proportions to equilibrium values); the proportions are different for different materials, and thus the heat capacity per unit material is different for different materials.
Completely Fed Up says
Re #651. I posited something similar earlier (post #585), but Rod B doesn’t actually seem to be intimating that. He seems to be thinking that because one molecule is excited rotationally, that the energy stays there and never gets expressed as kinetic (temperature) energy.
Yet that energy is passed off to other molecules that may not have that excitation step available, and therefore an inelastic collision would result in more of the gas’s energy being expressed as motion.
Completely Fed Up says
Further to #649, the only AGW hypothesis is that CO2 levels have increased due to human activities.
THAT’S ALL.
If you want to disprove the AGW hypothesis, you need to show that our CO2 emissions are not causing the atmospheric concentration to increase.
Rod B says
CFU, it’s hard to grasp, I know, but I’m talking about non-equilibrium and unsatisfied equipartition (which is the normal situation). It could also be called transient conditions. You can ask any of your scientist (or even engineering) friends to explain that.
Rod B says
Patrick 027, a set-to perhaps, but not a fiasco! ;-)
Completely Fed Up says
“but I’m talking about non-equilibrium and unsatisfied equipartition ”
Then you’re not talking about temperature.
Except you’re using the word temperature to describe the thing that isn’t temperature.
This would be why “it’s hard to grasp”.
http://www.elizabethmapstone.co.uk/wow/argument.htm
Norman says
CM #650,
Your point on empirical testing would show that an incorrect assumption can lead to an incorrect conclusion, but the empirical data is still valid and was the same (rock and dust) regardless of the initial assumption or the final conclusion. Even if an experiment is poorly designed, the data collected from it is still valid.
Many intelligent people have responded to my questions and offered lots of thinking material. I still wonder if an empirical test has been run to demonstrate the layer effect and the conclusion that saturation of IR bands does not apply. IR spectra run on any absorbing material (and it would not matter which material; all should show the effect – absorb the IR energy, warm up, and then begin to emit detectable IR) should show this effect. As one scans an IR absorbing material, stop the scan at the primary absorbing band and allow the material to heat up; then, rather than transmittance going to zero, it should start to pick up again and reach a stable point based upon the configuration of the sample cell (how much of the emitted IR will reach the detector). After turning off the IR source, the detector should still pick up IR energy until the absorbing material has cooled enough that the sensor no longer detects it.
This test has probably been run; I just can’t find it. This is the type of information I would like to go along with the theories and ideas – an empirical test to demonstrate that, yes indeed, what we state is happening.
In our atmosphere, if 10 meters is the extinction pathlength of IR for the concentration of CO2, you would have several of the layers. In the IR spectrometer setting, maybe 10 different sample cells. Each can absorb the energy from the others to see how much the final cell will receive from the first.
Thanks!
Brian Dodge says
“How can a detector ever read 100% absorption of CO2?” Norman — 15 June 2010 @ 11:22 PM
Use a rotating mirror to shine the light through a reference cell, a sample cell, and to block the beam falling on a detector. Measure the amplified output of the detector. When the beam is being blocked, the output will be electronic noise, stray signals from IR from warm components, and DC offsets from the electronics. If the engineer (that would be me) did an excellent job of controlling the temperature of the detector and the rest of the instrument environment, and designing the amplifier chain from the detector to the A-to-D converter, you can average 100 measurements and find that the result fluctuates less than the 20th LSB on the 24 bit converter. Call that “zero”. Average another 100 measurements while the light is shining through the reference cell, filled with dry nitrogen, and call that “100% transmission”. With a little luck, and a lot of design work involving hair pulling, cursing, about 20 iterations of building new electronics, and better systems to flush from the instrument the ambient laboratory atmosphere (which the PI keeps changing by smoking cigars), that number will be 1 MSB, plus or minus less than one 20th LSB. Measure the light through the sample cell containing some concentration of CO2; if the number is the same as “zero”, is the transmitted light actually zero (100% absorption), or just less than the resolution of your instrument? Now dilute the sample 1000-fold and remeasure – you will likely get a reading, and you can calculate, using the Beer-Lambert law, what the attenuation was at the original concentration, with some error depending on how accurate your dilution was. Getting a million-fold (~2^20) dynamic range in a lab instrument is very difficult – your typical spectrophotometer will have a dynamic range of less than 10,000, so “100% absorption” = 99.99% absorption = 99.999% absorption.
Note that the Beer-Lambert law equation, T = I/I0 = 10^(-εlc), implies that transmittance can never equal zero, but can be pretty damn close.
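Brian’s dilute-and-rescale trick can be put in numbers. A minimal sketch, with an assumed (typical, not measured) instrument dynamic range and an assumed true absorbance:

```python
import math

# Sketch of the dynamic-range point: an instrument with finite dynamic range
# cannot distinguish "100% absorption" from "below the noise floor".
# All numbers are illustrative, not from any particular spectrophotometer.

DYNAMIC_RANGE = 10_000          # typical bench instrument (~4 absorbance units)
NOISE_FLOOR   = 1.0 / DYNAMIC_RANGE

def transmittance(absorbance):
    # Beer-Lambert: T = I/I0 = 10**(-A), with A = epsilon * path * conc
    return 10.0 ** (-absorbance)

A_true = 6.0                          # assumed absorbance of the strong sample
T_true = transmittance(A_true)        # 1e-6: far below instrument resolution
reads_as_zero = T_true < NOISE_FLOOR  # instrument reports "100% absorption"

# Dilute 1000-fold, measure, and scale back up: absorbance is linear in
# concentration, so A_original = 1000 * A_diluted.
A_diluted  = A_true / 1000.0
T_diluted  = transmittance(A_diluted)     # ~0.986, easily measurable
A_inferred = -math.log10(T_diluted) * 1000.0
```

The concentrated sample is indistinguishable from total absorption, while the diluted measurement recovers the true attenuation exactly (up to dilution error, here zero by construction).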
Doug Bostrom says
Brian, you forgot the moment when you discover you’ve been trying to fix the problem of the scope probe being set to 1x for some unknown period of time by replacing components on the test article when normally the probe is always on 10x so you didn’t think to check it. Also, the stupid little set-screws coming loose on the banana plugs…
FurryCatHerder says
CFU @ 645:
Right, and insulation — what you suggested — is described here:
A cave is cool because the temperature of the earth at the depths involved tends towards the mean temperature of the area. That thermal mass (earth) then acts as a heat sink which absorbs heat from the air in the cave, to the extent that it is warmer, or releases heat to the air in the cave, to the extent that it is cooler.
In the case of cave temperatures, you can’t say that it’s “insulation” keeping the cave cool because you can’t replace the “insulation” of the earth with an arbitrary R-value insulator — say, a vacuum.
Next time you mention “Insulation”, don’t respond with articles about heat conduction.
Rod B says
CFU, are you saying there must be equilibrium (at least LTE) and equipartition before there can be any temperature at all??
Patrick 027 says
Re my 641 –
“the exact shape of the curve depends on the way temperature varies with height, but knowing that the same set of optical properties are just translated by some wavelength interval outward from the center of the band, the shape of the curve from where CO2 starts to make a significant difference to where OLR becomes saturated stays the same.”
That’s actually based on an additional assumption. First, the potential forcing is assumed to be approximately constant over intervals of the same width as the spectral shift (SHIFT) in optical properties, at the wavelengths where the flux changes are significant. This can be achieved if
1. the Planck functions for the temperatures found within the relevant emission weighting functions are nearly constant over such an interval, for the wavelengths at which the changes in fluxes are significant. This will always be true for a sufficiently small shift, and it is a good first approximation for CO2 for the conditions being considered.
2. the other contributions to optical thickness in the relevant parts of the spectrum (such as water vapor and clouds) don’t vary much over an interval of size SHIFT in those parts of the spectrum – both in total amount and in spatial distribution (at any given horizontal location and time, and thus in global average effect for a particular climatic state).
It might be possible to play around with those conditions and keep the potential CO2 forcing nearly constant over SHIFT intervals and yet have a change in the shape of the OLR curve at the effective band edges. But as I said, this is an approximate approach. The bumps in the CO2 spectrum will have an effect anyway.
——————
Fig 1 of http://www.atmo.arizona.edu/students/courselinks/spring04/atmo451b/pdf/RadiationBudget.pdf
is where I got the OLR graph. Note the smoother curve is a Planck function (in terms of flux per unit area – this is pi * Planck function in terms of intensity) for surface temperature. A similar graph and related graphs are found at http://chriscolose.wordpress.com/2010/03/02/global-warming-mapsgraphs-2/ – but that is graphed over frequency instead of wavelength. You can work from either graph – just keep SHIFT in the same units (frequency or wavelength) as are used in the Planck function and OLR (per unit frequency or per unit wavelength), etc.
You can overlay Planck functions (in flux/area) for different temperatures to see where **saturation occurs for OLR. Where a radiant flux/area is at the Planck function value for some temperature, the flux/area has that brightness temperature. As optical thickness from absorption is increased, the brightness temperature of OLR can only approach a nonzero minimum, because the atmosphere’s temperature doesn’t go to absolute zero going upward. Knowing that the optical thickness of CO2 has a sharper peak than the OLR dip for CO2, one can infer that **saturation has occurred at the bottom of the dip, and one can find the brightness temperature for that saturation OLR value by finding a Planck function that fits it.
(PS if you don’t want to calculate new Planck functions, just take the one plotted, copy it and then stretch it or compress it. For a temperature change from T0 to T1, when graphed over wavelength, stretch over the wavelength axis by T0/T1 (if T1 > T0, that’s actually a compression), and then stretch along the flux/area axis by a factor (T1/T0)^5. Note the area under the curve changes by a factor (T1/T0)^4. If graphed over frequency, stretch along the frequency axis by a factor of T1/T0, and stretch along the flux/area axis by a factor of (T1/T0)^3 – note again the area changes by a factor (T1/T0)^4. In either case, keep the origin fixed, or else, shift the maximum of the curve in or out from the 0 of the spectrum by the same amount that the curve is stretched or compressed in that dimension (Wien’s displacement law).
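The stretch-and-rescale recipe in the PS above is an exact identity of the Planck function, and it is easy to verify numerically. A quick check, using CODATA constant values and an arbitrary pair of temperatures:

```python
import math

# Numerical check of the stretch-the-Planck-curve recipe: the curve for T1,
# graphed over wavelength, is the T0 curve stretched by T0/T1 along the
# wavelength axis and by (T1/T0)**5 along the flux axis.

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI (CODATA)

def planck_lambda(lam, T):
    # Spectral radiance per unit wavelength, W / (m^2 sr m)
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

T0, T1 = 255.0, 288.0
lam = 15e-6  # 15 microns, middle of the CO2 band

direct = planck_lambda(lam, T1)
# Stretching the wavelength axis by T0/T1 means: read the T0 curve at
# lam * T1/T0, then rescale the amplitude by (T1/T0)**5.
stretched = (T1 / T0) ** 5 * planck_lambda(lam * T1 / T0, T0)
```

The two values agree to machine precision, and integrating either curve over wavelength changes the area by the Stefan-Boltzmann factor (T1/T0)^4, as stated.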
The potential forcing (per unit interval of the spectrum) is the difference between OLR without CO2 and the Planck function for saturation brightness temperature.
The potential forcing at the tropopause level is between OLR absent CO2 and zero, because of the absence of any other optical thickness in the stratosphere at the relevant wavelengths.
(Note that it is possible, depending on variations in optical thickness and distribution with height, to encounter situations where the saturation brightness temperature would vary over wavelength.)
See absorption spectra of gases here (I presume this is in terms of optical thickness along a vertical path through the whole atmosphere. Note it is a logarithmic scale and over frequency).
http://www.atm.ox.ac.uk/group/mipas/atlas/index.html
Patrick 027 says
**saturation – saturation of OLR is reversed when optical thickness increases enough so that the warmer upper stratosphere becomes sufficiently optically thick (see the sharp peak in the middle of the OLR dip for CO2). This adds some additional amount to stratospheric cooling with increased CO2. If the stratosphere were isothermal, the saturation would not reverse.
Completely Fed Up says
“Heat energy can be transferred by conduction, convection, radiation or by actual movement of material from one location to another.”
Again, with the wrong basic ideas. Isn’t convection actual movement of material from one location to another?
When you’re complaining of others not knowing science, it would be a good idea to at least read about science yourself, FCH.
Ray Ladbury says
Rod B,
Technically, yes, there does have to be equilibrium (which includes equipartition) before intensive thermodynamic quantities (e.g. temperature, pressure, chemical potential) can be defined. Nonequilibrium systems tend to yield absurd results–e.g. negative temperature for inverted populations, which are hotter than infinite temperature!
LTE is an attempt to get around this for systems not too far from equilibrium or locally at equilibrium and quasi-isolated from the rest of the surroundings. Nonequilibrium thermodynamics/stat mech is still a frontier of physics.
Barton Paul Levenson says
Rod 661,
YES. LTE means, in effect, “you can measure the temperature.”
Geoff Wexler says
It depends how rigorous you want to be. The concepts of temperature and entropy can easily be extended to conditions slightly away from local thermodynamic equilibrium. You have to be much more careful if you are far away from local equilibrium.
The concept of entropy under conditions of non-equilibrium is of course essential, when thinking about transport problems and entropy production.
Also you may be able to think about the system as divided into two pieces, of which one has a clearly defined temperature. Consider some ordinary gases inside a black body enclosure mixed with photons sharing a common temperature. Fine; but the temperature of the photon gas becomes a half-useless concept in the case of radiation transfer, because different photons have been emitted by matter at different temperatures and are all mixed up. As pointed out by CFU et al., the same difficulty applies to highly excited gases made of matter.
Incidentally has anyone seen a good calculation of radiation transfer at very low pressures for which the LTE approx. might break down very badly?
Completely Fed Up says
“Incidentally has anyone seen a good calculation of radiation transfer at very low pressures for which the LTE approx. might break down very badly?”
The Sun’s corona is a good example. A million degrees, but very tenuous. So tenuous that many astrophysicists dispute the million degrees, because it’s not going to be an ideal gas.
The exosphere of earth is another one.
Completely Fed Up says
Rod B, you never answered my question: how do you get an ideal gas law out of a gas where molecules don’t collide?
Rod B says
It seems like you (Ray and BPL) are getting lost in the forest for all of the trees.
Since in any quantity of a gas you will always have a set of molecules (and at atmospheric temps, a very large set) that are not excited in, say, vibration, because of the Boltzmann distribution, that quantity of gas is not and won’t be in equipartition. So, I can’t measure the temperature? — maybe because it has no temperature???!!!?
Since LTE is a hypothetical convention, one can say temperature can only be measured (which, BTW, was not CFU’s contention, that being that temperature didn’t even exist — though he might have just misspoke) if you have LTE, by simply defining LTE as small as one wants. If I have two cubic micrometers of gas, one um^3 at 20 degrees, the other at 21 degrees, I can’t measure the temperature because my 2 um^3 is not at LTE?? If this is correct then theoretically one can never measure the temperature of any gas. Any amount of gas is not in exact LTE due to the Maxwell-Boltzmann distribution. Yet we take the temperature of a bunch of gas all the time… by simply taking a completely reasonable and accurate average of molecular energy — call it an LTE average. Ergo, taken to its natural extreme, LTE does not exist — ever — anywhere.
The hypothetical convention of LTE was intended to be helpful. Getting extremely picky and exact doesn’t help things.
Rod B says
CFU, “getting an ideal gas law out of a gas where molecules don’t collide” wasn’t answered because it has absolutely nothing to do with my comments, I never said/implied/hinted at such, and it’s inane. Answering your question is a complete waste of time — as this post is.
Gilles says
“Rod B, you never answered my question: how do you get an ideal gas law out of a gas where molecules don’t collide?”
Actually it’s enough that they get thermalized with an external bath; they need only collide with the walls. The ideal gas law actually deals with particles WITHOUT interactions – any interaction potential will cause a small departure from this law by application of the virial theorem, even if it helps thermalization.
[Response: What nonsense. – gavin]
Completely Fed Up says
Ideally, a scientist would know what an ideal gas is.
Some call themselves scientists and do not.
I think this shows the provenance of their training.
Completely Fed Up says
“I never said/implied/hinted at such,”
Yes you did.
“621
Rod B says:
15 June 2010 at 2:23 PM
CFU, I was talking only of the energy that went into rotation or vibration, not what if you then moved that energy somewhere else.”
Which was in response to:
#614:
““it seems to me that if the polyatomic molecules have higher specific heats because some of the energy is added to (unfrozen) rotation and vibration, that added energy that goes into rotation and vibration does not raise the temperature.”
It seems to me you’ve forgotten that that energy can be imparted to another molecule that isn’t excited rotationally, which can express it kinetically in movement. Which is temperature.”
Which means that, for a gas in which (as you suppose) there is no movement of energy between rovibrational and translational modes, the molecules cannot collide.
” and it’s inane.”
At least we agree there.
Completely Fed Up says
“Since in any quantity of a gas you will always have a set of molecules (and in atmospheric temps, a very large set) that are not excited in say vibration because of Boltzmann equation, that quantity of gas is not and won’t be in equipartition.”
And this is not temperature. This is quantisation. The relative occupancies of the various energetic states will accord with those proposed under Boltzmann’s equations.
But the fact of an occupancy is not an indicator of temperature.
“So, I can’t measure the temperature? — maybe because it has no temperature???!!!?”
Yes, any one molecule has no temperature. Any one energetic state being unoccupied is no indicator of temperature. You’re not measuring temperature but still calling it temperature.
We’re not seeing the forest because you’re busy pointing to all the pins you’ve thrown on the floor, calling them “trees”.
Patrick 027 says
Re 664 –
convection involves movement of material, but interestingly, one could consider differentiating between convection that redistributes heat with no net redistribution of mass or composition, versus that which redistributes mass and/or composition along with heat (i.e. convection of latent heat; or an extreme example, the movement of the Earth around the sun, ‘convecting’ the heat it has through space along with itself)… Just an interesting side-note. The latter case can easily be from lower to higher temperature (evaporative cooling can make a wet surface colder than the air temperature if RH < 100 %; the increase in entropy of the redistribution of water molecules makes up for the decrease in entropy by the increased temperature difference).
Patrick 027 says
Re 670 Rod B –
You can have LTE in approximation. You can also have an average temperature over heat capacity, or mass, or volume, or emission-weighting function, etc, where LTE is approximately satisfied for small pieces of the whole.
The equipartition of energy doesn’t refer to all molecules having the same energy; it’s more about the way energy is distributed among different forms (translational kinetic, modes of vibration and rotation, etc.). At LTE, the probability of occupying any particular microstate falls off exponentially with its energy (the Boltzmann factor), with an e-folding scale set by kT; for translational kinetic energy in three dimensions this gives the Maxwell-Boltzmann distribution, proportional to sqrt(E)*exp(-E/kT), with an average energy per particle of (3/2)kT.
Re 672 Gilles – perhaps you were thinking of the lack of certain kinds of interaction, such as attractive forces between molecules, and also, the nonzero volume that the molecules have.
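The LTE distribution of translational kinetic energy is easy to check by direct sampling: at equilibrium each velocity component is Gaussian with variance kT/m, and the mean kinetic energy per particle comes out to (3/2)kT. A minimal sketch in dimensionless units:

```python
import random

# Sanity check on the equilibrium distribution of translational kinetic
# energy: sample each velocity component from a Gaussian with variance
# kB*T/m and confirm the mean kinetic energy per particle is (3/2)*kB*T.
# Work in units where kB*T = 1 and m = 1.

random.seed(42)
N = 100_000

def kinetic_energy():
    vx, vy, vz = (random.gauss(0.0, 1.0) for _ in range(3))
    return 0.5 * (vx * vx + vy * vy + vz * vz)

energies = [kinetic_energy() for _ in range(N)]
mean_ke = sum(energies) / N   # expect ~1.5 in units of kB*T

# Any single microstate is still weighted by exp(-E/kB*T); the ~sqrt(E)
# density of states in 3-D is what shifts the most probable kinetic
# energy away from zero.
```

This is the statistical content behind assigning a single temperature to a parcel of gas: not that every molecule has the same energy, but that the energies follow this fixed distribution.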
Completely Fed Up says
“interestingly, one could consider differentiating between convection that redistributes heat with no net redistribution of mass or composition versus that which redistributes mass and/or composition along with heat (i.e. convection of latent heat;”
Isn’t that a false difference?
If I take a hot kettle from one vacuum-sealed unit to another vacuum-sealed unit, no temperature has moved, so how can you say that heat has been transported? I’ve moved a kettle.
This is not heat transfer.
It’s only heat transfer when that extra energy is passed off to something else and then becomes unrecoverable (entropic).
Patrick 027 says
Re Rod B, though not generally applicable to atmospheric radiation, an interesting example of quasi-LTE with a radiatively-important non-LTE aspect:
First:
A = 2*n^2 / (h^3 * c^2)
βe = A*E^2 / ( exp[E/(kB*T)] – 1 )
is a form of the Planck function, where it is in terms of photon number intensity per unit of the spectrum (spectrum in terms of photon energy)
from “The physics of solar cells By Jenny Nelson” (the parts available via Google books):
βe = A*E^2 / ( exp[(E-Δμ)/(kB*Ta)] – 1 )
Which is equal to the Planck Function value for E and T0:
βe = A*E^2 / ( exp[E/(kB*T0)] – 1 )
Where E/T0 = (E-Δμ)/Ta
T0 = Ta*E/(E-Δμ)
(E-Δμ)*T0 = E*Ta
βe (given in terms of Ta) is the photon intensity in equilibrium with two electronic bands in quasi-thermodynamic equilibrium with the material at temperature Ta (via phonons (not a spelling error), I think) so that each band has a population distribution fitting a Fermi distribution about a quasi-fermi level with a temperature Ta, but where the two bands are not in equilibrium with each other – their quasi-fermi levels are separated by Δμ.
I think this situation comes about when the electrons have been excited to one band from another, and are able to relax toward an LTE within each band by exchanging phonons (?) with the material they are in, which can happen rapidly – this brings the distribution within each band toward a fermi distribution with a temperature the same as the material – but because of the slower rate of interactions with photons, the total population is kept in one band or another, thus keeping the two bands out of equilibrium with each other (their fermi levels are different). (See the book I referenced above.)
I was able to determine from the fermi distribution that for the fraction of occupied states f1 and f2 at each of two energy levels E1 and E2 with the same fermi level and T = T0, f1 and f2 are the same if T = Ta and the quasi-fermi levels of the two bands are separated by Δμ (see below).
Thus, the brightness temperature (at a photon energy E) of radiation in equilibrium with two energy levels (separated by E) in two bands (via electron-hole pair generation and recombination) is that for which a fermi distribution could describe the electron and hole populations in each energy level if the two energy levels were in thermodynamic equilibrium with each other at that temperature (a common fermi distribution with a common fermi level). This makes sense because the rate of photon emission and absorption should have some relationship to the populations of electrons and holes (for two energy levels E1 and E2 where E2 > E1, emission should be related to the number of electrons in E2 and the number of holes in E1, and absorption should be related to the number of electrons in E1 and holes in E2 – however, there are some things about that relationship that I don’t understand).
——–
In a fermi distribution, where fx is the fraction f of states that are occupied at energy level Ex, and
μy is the (quasi) fermi level when the fermi distribution is for a temperature Ty:
Ex – μy must be proportional to Ty to keep constant fx.
Thus, to keep constant f1 while shifting μy and Ty, from y = 0 to y = 1:
(E1 – μ1)/(E1 – μ0) = T1/T0
(E1 – μ1) = T1/T0 * (E1 – μ0)
μ1 = E1*(1 – T1/T0) + μ0*T1/T0
Now, where f1 = f of E1 at both μ1,T1 and μ0,T0
and where f2 = f of E2 at both μ2,T2 and μ0,T0
and
Δμ = μ2-μ1
E = E2 – E1
μ1 = E1*(1 – T1/T0) + μ0*T1/T0
μ2 = E2*(1 – T2/T0) + μ0*T2/T0
Δμ
= μ2-μ1
= E2*(1 – T2/T0) – E1*(1 – T1/T0) + μ0*(T2-T1)/T0
Δμ = E + (E1*T1 – E2*T2)/T0 + μ0*(T2-T1)/T0
(E-Δμ)*T0 = E2*T2 – E1*T1 – μ0*(T2-T1)
(E-Δμ)*T0 = (E2-μ0)*T2 – (E1-μ0)*T1
(E-Δμ)*T0 = (E2-μ0)*T2 + (μ0-E1)*T1
When T2 = T1 = Ta,
(E-Δμ)*T0 = [(E2-μ0)+(μ0-E1)]*Ta = E*Ta
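The algebra above checks out numerically. A small sketch (kB = 1; the specific values of E1, E2, μ0, T0 and Ta are arbitrary test numbers, not from the solar-cell reference):

```python
import math

# Numerical check of the derivation: pick two energy levels with a common
# fermi level mu0 at temperature T0, construct quasi-fermi levels mu1, mu2
# at temperature Ta from the formulas above, then verify that (a) the
# occupancies f1 and f2 are unchanged and (b) (E - dmu)*T0 == E*Ta.

def fermi(E, mu, T):
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

E1, E2 = 1.0, 2.5          # arbitrary energy levels
mu0, T0, Ta = 1.5, 0.3, 0.25   # arbitrary common fermi level and temperatures
E = E2 - E1

mu1 = E1 * (1.0 - Ta / T0) + mu0 * Ta / T0
mu2 = E2 * (1.0 - Ta / T0) + mu0 * Ta / T0
dmu = mu2 - mu1

f1_before, f1_after = fermi(E1, mu0, T0), fermi(E1, mu1, Ta)
f2_before, f2_after = fermi(E2, mu0, T0), fermi(E2, mu2, Ta)

# Brightness-temperature relation: the exponent of the quasi-equilibrium
# Planck factor, (E - dmu)/Ta, equals E/T0, i.e. (E - dmu)*T0 = E*Ta.
lhs, rhs = (E - dmu) * T0, E * Ta
```

So the photon field in equilibrium with the two quasi-fermi populations at Ta has exactly the Planck spectrum (at photon energy E) of the fictitious common-fermi-level temperature T0, as claimed.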
Patrick 027 says
“f1 and f2 are the same” – not the same as each other, but each is held constant, if T0 and Ta are related by a function of E and Δμ.
Rod B says
CFU (674, 675, etc.), I have no clue what you are talking about in the context of my posts. Then again you have no clue what I’m saying — or maybe you’re just ignoring it for the fun, I dunno. This, by definition, can go nowhere.
Rod B says
Patrick 027 (677), you say, “…equipartition of energy… is… about the way energy is distributed among different forms (translational kinetic, modes of vibration and rotation, etc.)”
Absolutely. It also says that with equipartition, each mode’s degrees of freedom hold equal amounts of energy, (1/2)kT per degree of freedom. But equipartition is an idealized state. In my example very few molecules (it’s possible to be none, per Boltzmann’s equation) in a bunch of gas are actually in equipartition, which means the bunch of gas is not in equipartition. Nonetheless, there is a bona fide temperature of that gas. (Some treatises, while trying to be helpful, confusingly talk only of equipartition as among the three degrees of translational energy when discussing temperature/kinetic energy/M-B distribution. Might be good for explaining the basics but leaves much of the accurate detail out.)
For a contained system, if my gas moves toward equipartition, translation energy will be transferred to rotation and/or vibration. Its (thermal) temperature will likewise decrease.
I fully agree that you can have LTE in approximation and have an average temperature where LTE is approximately satisfied for small pieces of the whole. (In some cases “small pieces” might be pretty large.) I was responding to the comments: “there does have to be equilibrium (which includes equipartition) before… temperature… can be defined” (though Ray might not have meant it that strongly), and “LTE means, in effect, “you can measure the temperature.””
Patrick 027 says
Re 682 – well, the fact that some modes are quantized may distort the equipartition pattern, but whatever the case, if equilibrium is reached while all the microscopic interactions continue, the energy distributions among particles and modes, etc., correspond to what is characteristic of that material at that temperature under whatever conditions are there, so far as I know.
You can have quasi-LTE, and for example, the LTE we refer to as requiring emissivity (into a particular direction at a particular frequency at a particular polarization) to be equal to absorptivity (from a particular direction at a particular frequency at a particular polarization) is actually a quasi-TE because it can occur without chemical (or nuclear) thermodynamic equilibrium. The chemical (and nuclear) reaction rates are small enough (generally) that the disequilibrium between products and reactants doesn’t prevent the molecules, etc, from attaining a local thermodynamic equilibrium where all the matter (except photons which are interacting over larger, non-isothermal distances) fits the conditions for a single temperature. In the prior two comments I gave an example where a quasi-LTE could be reached where two populations of particles reached a quasi-LTE with another population via rapid exchanges of energy in one particular form, and within each population, complete LTE, but with a disequilibrium between two populations that has an important radiative effect; regarding interaction with photons, the two populations as a whole do not have a single temperature, but pairs of subsets of the populations have various effective temperatures different from the temperature of each of the populations. If you have a statistically-sufficient population size, you may be able to assign a temperature to a subset of particles – for example, there is a single brightness temperature for the photons reaching a location at some time from a direction with some frequency and some polarization, which is only determined by how many there are (for the given refractive index). You could have two populations with different temperatures sharing the same volume – that may be difficult with gases (?) 
because collisions between populations could occur if collisions within populations are allowed, but picture instead two metallic wire meshes, which are interwoven through each other but only conduct heat within themselves…
John E. Pearson says
672 Gavin said “what nonsense”.
There’s nothing wrong with Gilles’s claim that ideal gases don’t require interparticle collisions. If you write down the partition function for a bunch of non-interacting particles in a finite volume you obtain the ideal gas equation of state. If you use a potential with a hard-sphere repulsive core and weak long-range attraction you get the van der Waals equation of state.
http://en.wikipedia.org/wiki/Van_der_Waals_equation
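Pearson's point can be checked directly: for N non-interacting point particles the partition function is Z = V^N/(N! λ^(3N)), and P = -dF/dV = kB·T·d(ln Z)/dV gives N·kB·T/V, with no collision term appearing anywhere. A minimal numerical sketch (all values illustrative; the thermal wavelength λ is treated as a fixed constant):

```python
import math

# Partition function of N non-interacting (collisionless) classical
# particles in a box of volume V:  Z = V**N / (N! * lam**(3*N)).
# Pressure follows from P = -dF/dV = kB*T * d(ln Z)/dV.

kB = 1.380649e-23  # Boltzmann constant, J/K

def ln_Z(V, N, lam):
    # ln of the partition function; Stirling's approximation for ln(N!)
    return N*math.log(V) - (N*math.log(N) - N) - 3*N*math.log(lam)

def pressure(V, N, T, lam, dV=1e-6):
    # numerical derivative: P = kB*T * d(ln Z)/dV
    return kB*T*(ln_Z(V + dV, N, lam) - ln_Z(V - dV, N, lam))/(2*dV)

V, N, T, lam = 1.0, 1e23, 300.0, 1e-10  # illustrative values
print(pressure(V, N, T, lam), N*kB*T/V)  # the two agree: ideal gas law
```

Only the V-dependence of ln Z matters for the pressure, so the interaction-free V^N factor alone fixes P = N·kB·T/V.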
Ray Ladbury says
Rod B., you are misinterpreting equipartition. If a mode is highly energetic, it will likely not be populated at low temperature (freeze-out). That just means that its occupation number is much less than one – as one would expect from the appropriate distribution (Fermi-Dirac, Bose-Einstein or Maxwell-Boltzmann).
Why do you think we go to all the trouble to define LTE if it is not because of the difficulties of defining intensive thermodynamic quantities for nonequilibrium systems? In the greenhouse effect, you start with a system in LTE. Then it is illuminated by and absorbs a quantity of IR photons, taking it out of thermal equilibrium (e.g. there is much more energy in the vibrational modes than at thermal equilibrium). The system relaxes by sharing some energy with kinetic degrees of freedom. It then reaches equilibrium at a new, higher temperature.
Completely Fed Up says
Rod B, you have no clue about thermodynamics. This may be causing you some problems in comprehension.
FurryCatHerder says
CFU @ 664:
Oh, please.
You claimed that “insulation” is the solution to environmental control problems. Caves don’t stay cool because of “insulation”, nor can I cool my house by exchanging inside air at 78F and 40% RH with the current outside air at 78F and 84% RH. And as it cools overnight, that relative humidity is only going up. So, you’re simply wrong on two counts.
Yes, insulation can maintain a thermal gradient, but at the ever-higher R-values needed for winter, cooling demands rise during less-cool times of the year. There are roughly 40 megajoules of heat added to the inside of my house, just from occupants and “basic conveniences”, each and every day, plus gallons of water from respiration and perspiration, to say nothing of cooking, bathing, leaving the lid up on the toilet, doing laundry, and so forth.
What R-value would keep my house a comfortable 76 to 78F year round? The average temperature, based on the weather station I have, is about 72F (and confirmed by the temperature inside “Inner Space Caverns”, up the road in Georgetown). Winter average is about 45F, Summer average is about 89F, annual heat energy produced inside the house is 14.5 giga-joules. Use 4,000 square feet for the area of the walls and ceilings (assume no heat gain or loss through the slab).
I get 460 watts for the average thermal energy (humans plus stuff), 420 square meters for the surface area, interior temperature is 25C, exterior temperature is 22C. Delta T is 3C. R comes out to be about 3 (in SI units) or R-15 in old money. That’s =way= below anyone’s recommendation for insulation.
For “insulation” to work, the R-factor between the 72F average temperature somewhere in that gradient and the 77F interior has to remain constant, even though the distance between the points moves — which isn’t how insulation works.
Reworking for winter heating demand, exterior temperature is 7C, delta T is 18C. R comes out to be about 16.4, or R-90 in old money, which is well above what anyone recommends for buildings.
Using R-90 year ’round, well, it just doesn’t work — heat gain from outside is still present in the Summer, except I still have to remove all the internally produced heat. Easier with all that insulation? Oh, sure. But R-90? On top of the 40,000 BTUs that would have to be removed, I’m still gaining 15,000 BTUs per day from the environment — even with R-90 insulation.
And what about Spring and Fall, when I currently don’t have to run the A/C or heat all that much? What do you think R-90 insulation is going to do then? Force me to run A/C, dehumidify the outdoor air I bring in, or all sorts of other things, all of which require energy.
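FCH's back-of-envelope R-value arithmetic above can be reproduced in a few lines (the wattage, area, and temperature figures are taken from the comment; the factor 5.678 converting SI R-values to US customary "old money" R-values is a standard assumption):

```python
SI_TO_US = 5.678  # 1 K*m^2/W is about R-5.678 in US customary units

def si_r_value(area_m2, delta_t_k, power_w):
    # steady state: power = area * delta_T / R  =>  R = area * delta_T / power
    return area_m2 * delta_t_k / power_w

# Summer: 460 W internal load, 420 m^2 envelope, 3 K indoor-outdoor difference
r_summer = si_r_value(420, 3, 460)   # about 2.7 SI, i.e. roughly R-15
# Winter: same internal load, 18 K indoor-outdoor difference
r_winter = si_r_value(420, 18, 460)  # about 16.4 SI, i.e. roughly R-93
print(r_summer * SI_TO_US, r_winter * SI_TO_US)
```

The mismatch between the two answers is exactly the point being made: no single R-value balances a fixed internal load against both seasons.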
How about instead of being the Voice of Snarkiness, you try to actually contribute? Instead of making things up, or misunderstanding the science, how about you learn why massive amounts of insulation aren’t the correct solution?
There are well-understood building techniques that can have very low solar gain during high temperature months, while not adding to environmental control costs during Spring and Fall, and still allowing for solar gain and passive solar heating during cool temperature months. Using interior thermal mass to regulate temperature works well, but you still have some internally generated heat and humidity (respiration and perspiration) that must be removed. The use of a large thermal mass can smooth out the energy demand over the course of a day, which benefits the grid, which makes reducing our energy demands easier.
As I’ve proven here, real world solutions are far more complex than “insulation” or everyone going back to living in caves and running a dehumidifier 24/7.
Gilles says
Gavin and CFU: there is absolutely no place where collisions are required to derive the ideal gas law. The only assumption is thermodynamic equilibrium, or maximum entropy, and it can be achieved by interaction with an external bath (in practice, the walls). The Planck distribution IS an ideal gas, although a peculiar one (an ideal gas of bosons with zero chemical potential, where quantum effects are important). And of course there are no collisions between photons.
The trick is that the relaxation to thermodynamic equilibrium is usually ensured by collisions, but that is not a requirement of statistical physics – and paradoxically, the interactions giving rise to collisions cause DEPARTURES from the ideal gas law.
the point is raised by Patrick
“Re 672 Gilles – perhaps you were thinking of the lack of certain kinds of interaction, such as attractive forces between molecules, and also, the nonzero volume that the molecules have.”
True: the non-ideal terms of the van der Waals equation are caused by long-range interactions (the a/V^2 term) and finite volume (the V-b term). So the ideal-gas limit actually applies for ZERO-range interactions and ZERO volume. Now tell me: what can the collision cross-section of dimensionless particles with zero-range interaction be? Just by dimensional analysis, you’ll hardly find anything other than… zero.
Geoff Wexler says
re: #679
That’s a non-local example, so the L in LTE is a bit misleading in that case.
It’s also conceptually rather different from a typical transport problem, where the lack of thermodynamic equilibrium is brought about by the interaction between different regions. One way of defining a temperature and an entropy is to imagine a thought experiment in which this interaction is stopped by isolating a tiny region.
That puts the L into it. The region must not be too tiny, otherwise fluctuations will be significant. This thought experiment already involves an approximation, so why ‘double count’ by adding the prefix ‘quasi’?
Completely Fed Up says
“there is absolutely no place where collisions are required to derive ideal gas law”
Go on, derive those laws without them.
Then enjoy the Nobel Prize for outstanding new work in the world of Physics.
Completely Fed Up says
“You claimed that “insulation” is the solution to environmental control problems.”
No I didn’t.
YOU have done so, just now.
But not me.
What I’ve claimed is that insulation is a simple and effective solution to the overuse of energy by many in the first world, USians especially.
But if you want to make up completely new strawmen arguments, go ahead.
TL;DR.
Geoff Wexler says
Non equilbrium ‘temperatures’ in biology.
[This is off topic so shall be brief.]
Muscle contraction and photosynthesis are both interesting. For example, muscle is a remarkable engine. Its efficiency is, at first, hard to reconcile with Kelvin’s version of Carnot’s theorem. Asserting that the theorem, which assumes reversibility (based on equilibrium), is inapplicable is a cautious way out.
An alternative is to estimate an effective temperature of the heat source i.e. excited products of reacting ATP whose effective temperature is perhaps 16 times higher than body temperature.
[I’m not sure if there are any comparable artificial examples].
FurryCatHerder says
CFU @ 691:
Moving the goalposts much?
@ 604 — You suggested running a fan. The overnight low was 77F, the RH was in the 80’s. How are a FAN and any amount of insulation going to solve that problem?
@ 607 — You confused “thermal mass” for insulation. Insulation has no mass term, it’s surface area, watts, and a temperature differential. “Thermal mass” is specific heat times mass. You also seem to think that a 77F overnight low at 84% RH is “cool” enough to cool a house set for 78F at 45% RH.
@ 619 — You again confuse “insulation” for “thermal mass”. Thermal mass is measured in Joules / Degree K (or C, take your pick :) ), from the specific heat and the mass of the object. “Insulation” is measured in Degrees K * Meters^2 / Watt. One has a mass term, the other doesn’t. Joules and watts are at least related by “time”. I’m thinking you can’t go from meters^2 to kilograms all that easily …
Yeah, I want to refill my house with 77F air at 84% RH. What R-factor would keep inside air at 77F from being warmed by outside air with an average temperature of 86F, with an internal thermal load of 40,000 BTU per day? It’s IMPOSSIBLE. The gradient goes the wrong way for the amount of heat that has to get OUT.
@ 645 — You talk about temperature gradients, but “insulation” restricts heat flow based on a temperature differential. The temperature within the insulation changes at any given point based on the difference between the two sides. Therefore, the ability to “absorb” this heat changes — BY DEFINITION. Again, you’ve confused “thermal mass” with “insulation”.
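The dimensional distinction drawn above can also be seen in how the two quantities combine: a lumped thermal mass C (J/K) behind insulation of SI R-value R (K·m²/W) over envelope area A gives a decay time constant τ = C·R/A, in seconds. A minimal sketch with illustrative numbers (none taken from the comment):

```python
# Lumped "RC" model: thermal mass (J/K) and insulation (K*m^2/W) are
# different quantities, but together they set a thermal time constant.
thermal_mass = 5.0e7  # J/K, heat capacity of the interior (illustrative)
r_si = 3.0            # K*m^2/W, envelope R-value (illustrative)
area = 420.0          # m^2, envelope area

tau = thermal_mass * r_si / area  # (J/K)*(K*m^2/W)/m^2 = J/W = seconds
print(tau / 3600)  # time constant in hours
```

The units work out only because the two quantities are different things: neither alone has the dimensions of time.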
Rod B says
Ray Ladbury, I don’t think I am misinterpreting equipartition. I’ve described it pretty much as your #685. Simply it is an idealistic theoretical construct that describes molecular tendencies but quite often is not realized in practice. I was merely asserting that one can measure the temperature of a volume of gas where most, and maybe even all, molecules are not in equipartition.
Nor have I disagreed with your words on LTE in the same post. It certainly is a tool that helps physicists get through “the difficulties of defining intensive thermodynamic quantities for non equilibrium systems…” (Though it did not make those difficulties disappear from the physics.) Simply again I merely claimed a system that is not (maybe temporarily) in LTE has temperature, though I went a bit further. Using your example:
Step 1: you start with a system in LTE me: it has a measurable thermal temperature;
Step 2: it is illuminated by and absorbs a quantity of IR photons, taking it out of thermal equilibrium (e.g. there is much more energy in the vibrational modes than at thermal equilibrium) me: it still has a measurable thermal temperature; (and BTW — the bit further part — the temperature is the same as it was in Step 1.)
Step 3: The system relaxes by sharing some energy with kinetic degrees of freedom. It then reaches equilibrium at a new, higher temperature. me: it still has a measurable thermal temperature; and as you say the temperature is higher than in Steps 1 and 2.
Where was it we disagreed?
Ray Ladbury says
Rod,
What you need to remember is: 1) thermo really only applies to systems at or very near equilibrium, 2) equilibrium implies equipartition, 3) thermo applies only to a system with a very large number of particles.
If you try to apply the definition of temperature to a nonequilibrium system you get fairly absurd results – e.g. a negative temperature that is effectively hotter than positive infinity (as in a system with a population inversion). So temperature during step 2 may be somewhat problematic.
Gilles says
[edit – see this]
Bob (Sphaerica) says
694 (Rod B),
I’m not sure where you guys are trying to get to with this, but it’s been entertaining to read. Still, I think the basic flaw in your 1-2-3 is that chemistry/thermodynamics is never so neatly discrete; pretty much any collection of molecules is constantly so close to equilibrium that it’s not worth thinking about it any other way, at least not in the things being discussed here (i.e. not a chemical reaction with very different starting and ending compositions).
There’s no “hold it, we have to wait for the absorbed energy to spread out” moment.
Chemistry is based on unimaginable numbers of atoms. Avogadro’s number, the number of atoms in only 12 grams of carbon-12, is a paltry 6 x 10^23; one “chunk” of atmosphere dwarfs that. It is constantly bombarded with IR, and before a proton can blink at a photon, a collision has occurred to put things on the way to a proper equipartition of energy.
Point: It’s just not valid to think in such discrete terms, unless it’s a mere thought experiment for trying to work something out.
So, with that said… where exactly was this conversation going? Looking through the thread, I really couldn’t figure out what the premise and expected/contested conclusion were.
Geoff Wexler says
#695 Ray
I’m afraid that I have not had time to read your entire sub-thread, so the following just refers to the 2nd paragraph of your comment. On this occasion I must disagree, although it is partly subjective being about terminology.
The inversion to which you and CFU refer is metastable, and it is not absurd; it makes very little difference to imagine that it is stable. Having an upper bound to the density of states (in energy) may be unfamiliar but is straightforward in magnetism; it can only happen in quantum systems. There is thus a maximum amount of energy which can be dumped into such a system.
What is wrong with dumping in more than ‘half’ (an approximate half) of the maximum amount of energy? It is a perfectly well-framed physical problem. Statistical physics will tell you how the energy levels are occupied in the inversion in just the same way as in non-inversion. If you wish to ban temperature in the former case you will have to ban it in the latter as well. And of course the former has a negative Kelvin temperature. Bad luck, Kelvin; it was not the only ‘mistake’ he made.
In some cases the results can probably be checked experimentally. Although I might be ignorant over this, I have not heard of any respect in which the results violate the physical interpretation of thermodynamics (even Kelvin’s)
There is a problem with T = 0, but there was already; these problems with Kelvin’s scale are easily remedied.
Please see the end of this comment:
https://www.realclimate.org/index.php/archives/2010/06/climate-change-commitment-ii/comment-page-13/#comment-177870
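The negative-temperature bookkeeping described above is easy to make concrete for a two-level system: inverting the Boltzmann ratio n_upper/n_lower = exp(-ΔE/(kB·T)) for T gives a negative Kelvin temperature whenever the upper level is the more populated. A minimal sketch (the level spacing ΔE is an arbitrary illustrative value):

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K
dE = 1.0e-21       # two-level spacing in joules (illustrative)

def temperature_from_populations(n_upper, n_lower):
    # Boltzmann: n_upper/n_lower = exp(-dE/(kB*T))  =>  T = -dE/(kB*ln(ratio))
    return -dE / (kB * math.log(n_upper / n_lower))

print(temperature_from_populations(1.0, 3.0))  # normal population: T > 0
print(temperature_from_populations(3.0, 1.0))  # inversion: T < 0
```

Nothing pathological happens in the formula at the crossover; equal populations correspond to T approaching infinity, and inversion continues smoothly into negative Kelvin values.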
Patrick 027 says
Re Rod B. – I’m not sure about this, but in the case that LTE is disrupted by such energy fluxes, a system might develop several temperatures (?). Anyway, and I’m not saying that you’re saying otherwise, but the vast majority of the mass of the atmosphere for the vast majority of the time is able to remain near LTE even as such energy fluxes are absorbed and emitted. (PS either emission or absorption by itself could disrupt LTE).
Re FCH – Actually, underground temperature is less variable than surface temperature because of the combination of thermal mass (aka heat capacity) and insulation.
Re 689 Geoff Wexler – okay.
When I first became familiar with the concept of LTE, I knew it referred to an equilibrium distribution of energy among particles/molecules/atoms/electrons/lattice cells/etc. and the various forms it may take on a microscopic level. At some point it occurred to me that LTE, if only sufficient for the purpose of assigning a single temperature, is generally not really complete TE, as that would entail equilibrium of chemical and nuclear reactions as well. Of course, TE and the lack thereof, and entropy, etc., can be defined for a system where some forms of energy and entropy within the same volume are excluded from the system (nuclear, for example). Anyway, I wasn’t sure if the LTE sufficient for having a single (positive) temperature (?) should be considered LTE or quasi-LTE for that reason. But in this case, if it were quasi-LTE, emissivity = absorptivity (to and from the same direction, respectively, for the same frequency, polarization, location and time) would still tend to hold.
I forget now whether Nelson used the term quasi-TE or quasi-LTE, but it was local in the sense that the two populations of electrons were in the same volume. Each approximately fits a Fermi distribution (sounds like LTE), maintained by rapid phonon (I think) interactions with the crystal lattice, achieving a temperature that is the same as the crystal lattice’s (sounds like LTE), but with electron transfer between the populations occurring more slowly, so that a perturbation from LTE can persist; the combinations of electrons and holes between populations are not in equilibrium with radiation whose brightness temperature equals the temperature of the electrons within each population. Hence quasi-___. In order to have emissivity = absorptivity, the temperature with respect to emitting photons has to be an effective temperature relating the electron-hole populations.
Patrick 027 says
Re FCH
Both heat capacity and insulation contribute to the limited temperature variation underground.
(If CFU was thinking of this correctly, CFU wasn’t clear in his wording.)
F(z) = downward heat flux per unit area at depth z (z positive downward in this case)
T'(z) = perturbation from some constant temperature (constant over z and t).
k = thermal conductivity
C = heat capacity per unit volume
F(z) = -k*∂T’/∂z (eq1)
C*∂T’/∂t = -∂F/∂z = k*∂^2(T’)/∂z^2 (eq2)
Let
F = F0*sin(ω*t-s*z)*exp(-n*z)
T’ = T0*sin(ω*t-r*z)*exp(-m*z) (where T’ is a deviation from some basic state constant temperature)
∂T’/∂z = -r * T0*cos(ω*t-r*z)*exp(-m*z) – m * T0*sin(ω*t-r*z)*exp(-m*z)
∂^2(T’)/∂z^2
= -r^2 * T0*sin(ω*t-r*z)*exp(-m*z) + 2*r*m * T0*cos(ω*t-r*z)*exp(-m*z) + m^2 * T0*sin(ω*t-r*z)*exp(-m*z)
= (m^2-r^2) * T0*sin(ω*t-r*z)*exp(-m*z) + 2*r*m * T0*cos(ω*t-r*z)*exp(-m*z)
∂T’/∂t = ω * T0*cos(ω*t-r*z)*exp(-m*z)
(subs into eq2)
C*ω/k * T0*cos(ω*t-r*z)*exp(-m*z) = (m^2-r^2) * T0*sin(ω*t-r*z)*exp(-m*z) + 2*r*m * T0*cos(ω*t-r*z)*exp(-m*z)
first term on right-hand side must be zero (sine instead of cosine):
therefore
m^2 = r^2
therefore
C*ω/k * T0*cos(ω*t-m*z)*exp(-m*z) = 2*m^2 * T0*cos(ω*t-m*z)*exp(-m*z)
C*ω/k = 2*m^2
m = sqrt(C*ω/(2*k))
(subs into eq1)
F(z) = -k*∂T’/∂z
F0/(m*k*T0) * sin(ω*t-s*z)*exp(-n*z) = [cos(ω*t-m*z)+sin(ω*t-m*z)]*exp(-m*z)
m = n
F0/(m*k*T0) * sin(ω*t-s*z) = [cos(ω*t-m*z)+sin(ω*t-m*z)] = sqrt(2) * sin(ω*t-m*z+π/4)
F0/T0 = sqrt(2) * m*k = sqrt(C*k*ω)
-s*z = -m*z+π/4 (i.e. s = m up to a constant phase of π/4: the flux leads the temperature by an eighth of a cycle)
—————
T0/F0 = 1/sqrt(C*k*ω) = 1/[sqrt(2) * m*k]
m = sqrt(C*ω/(2*k))
F = F0*sin(ω*t-m*z+π/4)*exp(-m*z)
T’ = T0*sin(ω*t-m*z)*exp(-m*z)
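The closed-form solution above can be checked numerically: with m = sqrt(C*ω/(2*k)), the ansatz T’ = T0*sin(ω*t-m*z)*exp(-m*z) should satisfy eq2, and F = F0*sin(ω*t-m*z+π/4)*exp(-m*z) with F0/T0 = sqrt(C*k*ω) should satisfy eq1. A sketch with roughly soil-like parameter values (the numbers are illustrative assumptions, not taken from the comment):

```python
import math

# Roughly soil-like parameter values (illustrative assumptions)
k = 1.0                            # thermal conductivity, W/(m*K)
C = 2.0e6                          # heat capacity per unit volume, J/(m^3*K)
omega = 2*math.pi/(365.25*86400)   # annual forcing frequency, rad/s
T0 = 10.0                          # surface temperature amplitude, K

m = math.sqrt(C*omega/(2*k))       # inverse damping depth, 1/m
F0 = math.sqrt(C*k*omega)*T0       # flux amplitude, from F0/T0 = sqrt(C*k*omega)

def Tp(z, t):
    # temperature perturbation T' = T0*sin(w*t - m*z)*exp(-m*z)
    return T0*math.sin(omega*t - m*z)*math.exp(-m*z)

def F(z, t):
    # downward heat flux, leading T' by a phase of pi/4
    return F0*math.sin(omega*t - m*z + math.pi/4)*math.exp(-m*z)

z, t = 1.5, 4.0e6        # an arbitrary depth (m) and time (s)
dz, dt = 1e-3, 1.0       # finite-difference step sizes

# eq2: C*dT'/dt = k*d2T'/dz2, checked by central differences
lhs = C*(Tp(z, t + dt) - Tp(z, t - dt))/(2*dt)
rhs = k*(Tp(z + dz, t) - 2*Tp(z, t) + Tp(z - dz, t))/dz**2

# eq1: F = -k*dT'/dz, checked the same way
f_fd = -k*(Tp(z + dz, t) - Tp(z - dz, t))/(2*dz)

print(lhs, rhs)        # agree: the ansatz satisfies the heat equation
print(F(z, t), f_fd)   # agree: the flux formula is consistent with eq1
```

With these values the damping depth 1/m comes out near 2.2 m, which is why the annual surface temperature swing is so strongly muted a few metres underground, as in the cave discussion above.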