Another open thread. OT comments from the Amazon drying thread have been moved over. As usual, substantive comments only please and no abuse.
Reader Interactions
844 Responses to "Unforced variations 3"
Patrick 027says
Re 798 – “line broadening hasn’t got anything to do with emissivity (inelastic collisions do, if they occur while emission is going on), unless you’re talking emissivity per unit wavelength, ”
My understanding, though it possibly could be wrong, is that the total emission cross section per unit substance, integrated over the spectrum, is conserved by line broadening, and the same would be true for the cross section contributed by each line.
However, when lines are broadened less, the cross section gets piled into some wavelengths and reduced at others; in the absence of scattering, emissivity = 1 – exp(-optical thickness) = 1 – exp(-emission cross section per unit area), and the integral of emissivity over the spectrum is reduced when there is less line broadening. In other words, less broadening leaves cross sections piled up so that they hide each other to a larger extent: saturation is approached faster at some wavelengths while other wavelengths stay more transparent, and the overall effect is increased transparency for the same amount of substance.
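A minimal numerical sketch of that point (hypothetical line width and absorber amount, Python/numpy): the same spectrally integrated cross section gives a smaller band-integrated emissivity when it is piled into a narrow line than when it is spread by broadening.

import numpy as np

# Hypothetical illustration: the same total (spectrally integrated) absorption cross
# section, distributed either into a narrow (weakly broadened) or a wide (strongly
# broadened) Lorentzian line.  Band-integrated emissivity is larger in the broadened case.

nu = np.linspace(-10.0, 10.0, 20001)      # wavenumber offset from line center (arbitrary units)
dnu = nu[1] - nu[0]
column = 5.0                              # absorber amount (arbitrary units)

def lorentz(nu, width):
    """Lorentzian line shape, normalized to integrate to 1 over wavenumber."""
    return (width / np.pi) / (nu ** 2 + width ** 2)

for width in (0.05, 0.5):                 # weakly vs. strongly broadened
    tau = column * lorentz(nu, width)     # optical thickness at each wavenumber
    emissivity = 1.0 - np.exp(-tau)
    print(f"half-width {width}: band-integrated emissivity = {(emissivity.sum() * dnu):.2f}")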
Robertsays
Well, on the Colbert Report last night a meteorologist and a climatologist debated global warming… I sort of wish that they had a climatologist go on and talk about Harries et al. 2001 or Wang and Liang 2009 instead of repeating the constant “ice is melting”. The question was whether we are causing it, not whether the climate is warming… the meteorologist (one of the many who don’t support the AGW theory) sort of won the debate, though I think he would have been easy to refute, honestly.
Guy Callendar wrote in 1938: When radiation takes place from a thick layer of gas, the average depth within that layer from which the radiation comes will depend upon the density of the gas. Thus if the density of the atmospheric carbon dioxide is altered it will alter the altitude from which the sky radiation of this gas originates. An increase of carbon dioxide will lower the mean radiation focus, and because the temperature is higher near the surface the radiation is increased, without allowing for any increased absorption by a greater total thickness of the gas.
I commented: This interpretation is essentially the converse of the point (made years earlier by Nils Ekholm) that radiation escaping the atmosphere is controlled by the effective altitude of the radiating layer. In both interpretations, the increased infrared optical thickness moves the effective radiative focus along a temperature gradient: warmer near the surface in Callendar’s formulation, colder near the top of the atmosphere in the case of Ekholm’s. In both interpretations, this effect operates regardless of whether or not the principal wave band is “saturated,” that is, absorbs the maximum radiation possible.
So, a fair and accurate interpretation–especially that last sentence? Comments, anyone?
“Thus, [Bob_FJ wrote] if it is correct, that sceptical criticism would be defeated, but nevertheless, you argue against it! ”
Isn’t pointing out the PDO etc. effects arguing against it?
No; If you study this graphical compilation, (RE 782) the bottom graph is PDO and AMO combined showing SST’s generally cooling between 1940 and 1975 and correlating rather well with HADCRUT. The correlation is also good for the whole period shown from 1900. Thus, if it is correct, that sceptical criticism would be defeated.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Completely Fed Up Reur 790/p16:
Quickly, sorry, I misunderstood your reference to 1999:
You forgot 1998 which, amongst denialists was, in 1999, an outlier indicating NOTHING. Though later on, 1998 became the date that was the END OF WARMING.
But surely you have got that the wrong way around! Was it not a “warmist” argument that 1998 was an outlier? Surely, to say it was NOT an outlier would support the sceptical point of view! Perhaps you should also consider that as each year passes, additional data becomes available, and assessments on that cumulative data can thus evolve. (on both sides of the fence)
Patrick 027says
Re 803 Kevin McKinney –
The effect approaches saturation when the effective altitude has shifted to a point where farther shifts don’t make much of a difference, because of the limited temperature variation over the shorter remaining distances. For ‘backradiation’ (down to the surface), saturation is reached as the effective altitude approaches the surface. For the tropopause level, saturation is approached as the effective altitudes for upward emission from below and for downward emission from above both approach the tropopause. For outgoing LW radiation to space, there is potential for ‘saturation’ of a sort (I had described it as saturation but now I’m wondering if the term should apply; nonetheless it is what it is) whenever the effective altitude approaches a temperature minimum or maximum (assuming the maximum or minimum is broad enough relative to the emission weighting function for the temperature variation to have a significant effect, and the temperature variations in front are thin enough not to dominate, which is generally the case in the atmosphere: most of the mass is in the troposphere, most of the rest is in the stratosphere, most of the rest is in the mesosphere). One could consider ‘saturation’ of a sort to occur when, with increasing optical thickness, the effective altitude is in the vicinity of the tropopause; the effective altitude doesn’t get past the stratopause at nearly all wavelengths.
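A toy illustration of that saturation in temperature terms (all numbers assumed, not a real radiation code): take the effective emission altitude as the level where the overlying optical depth reaches 1, with an exponential absorber profile and a simple lapse rate capped by an isothermal layer.

import numpy as np

# Toy sketch (assumed numbers): the effective emission altitude is taken as the level
# where the overlying absorber optical depth reaches 1.  As the absorber keeps doubling,
# that altitude keeps rising, but once it sits in the nearly isothermal layer the
# emitting temperature stops changing much: the altitude-shifting effect 'saturates'.

H = 8.0                        # absorber scale height, km (assumed)
z_iso = 11.0                   # altitude above which temperature is taken constant, km (assumed)
T_sfc, lapse = 288.0, 6.5      # surface temperature (K) and lapse rate (K/km), assumed

def T(z):
    return T_sfc - lapse * min(z, z_iso)

def z_eff(tau_total):
    """Level where the optical depth measured down from the top equals 1 (exponential profile)."""
    return max(0.0, H * np.log(tau_total))   # overlying tau = tau_total * exp(-z/H) = 1

for tau_total in (0.5, 1, 2, 4, 8, 16, 32, 64):
    z = z_eff(tau_total)
    print(f"total optical depth {tau_total:5.1f}: z_eff = {z:5.1f} km, T(z_eff) = {T(z):5.1f} K")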
The CO2, and I think, H2O greenhouse effects are saturated at some wavelengths for the tropopause level, and at the surface. The water vapor effect can become saturated at the surface over all energetically-significant LW wavelengths when the concentration is high enough, but it still won’t be saturated at all wavelengths at the tropopause level (for anything similar to Earthly conditions, and of course an equilibrium climate’s tropopause must be high enough to avoid saturation in at least some part of the spectrum so that a net LW flux out can balance solar heating). In the vicinity (of wavelengths) of significant absorption by CO2, H2O vapor is nearly transparent in the stratosphere and so, I’d presume, in some part of the upper troposphere as well. Clouds will have effects, of course. For upward LW radiation at the tropopause, CO2 forms a plateau of elevated effective altitude, with slopes (generally down to the effective altitude of emission from water vapor, the surface, and any clouds); adding more CO2 raises the effective altitude at the slopes, pushing up new slope outside the plateau, and bringing existing sloping parts into the flat part of the plateau – effectively widening the plateau.
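And a similarly rough sketch of the widening plateau (an assumed exponential fall-off of optical depth away from band center, i.e. triangular in log(optical depth); not real CO2 spectroscopy): the interval where the band is saturated widens by a roughly fixed amount per doubling of the absorber, one crude way to see why the effect grows about logarithmically.

import numpy as np

# Rough sketch with an assumed band shape (not real CO2 spectroscopy): optical depth
# falls off exponentially away from band center, i.e. triangular in log(optical depth).
# The 'plateau' is taken as the interval where tau > 1; each doubling of the absorber
# widens it by about the same amount.

nu = np.linspace(-400.0, 400.0, 8001)   # wavenumber offset from band center, cm^-1
decay = 50.0                            # e-folding width of the band wings, cm^-1 (assumed)

def plateau_width(tau_center):
    tau = tau_center * np.exp(-np.abs(nu) / decay)
    inside = nu[tau > 1.0]
    return inside.max() - inside.min() if inside.size else 0.0

for tau_center in (10, 20, 40, 80, 160):
    print(f"center optical depth {tau_center:4d}: plateau width ~ {plateau_width(tau_center):6.1f} cm^-1")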
Patrick 027says
Re 802 – easy to refute –
perhaps via “we know CO2 has to cause some warming and that there are feedbacks, understood in terms of established physics, and the issue is just quantification within remaining uncertainty, whereas your ‘it’s the Atlantic, it’s the Pacific’ seems like a postulation without explanation of cause versus effect (and if the explanation is that the ocean circulation changes are heating the surface, then why isn’t the ocean cooling, … )”
Since weather forecasters seem to be the trusted celebrity faces of all things atmospheric, maybe RC should do something like “Are meteorologists different?”
Thanks, Patrick. By dint of very careful reading I can follow most of what you are saying.
It sounds as if the takeaway message for me would be that my summary statement is OK, but the statement about saturation is not so good. (That would seem to follow from your stating of the phenomenon of saturation in terms of the “shift along a temperature gradient.”)
Yes? If so, I’m going to strike the latter sentence!
Patrick 027says
Re 808 Kevin McKinney (re 805 re 804) –
I actually glossed over a key aspect of the sentence you were most interested in:
“In both interpretations, this effect operates regardless of whether or not the principal wave band is “saturated,” that is, absorbs the maximum radiation possible.”
“absorbing the maximum radiation possible” – if this is taken to mean a transmission approaching zero (optical thickness approaching infinity), then that is actually a different category of saturation.
I would consider striking the word ‘regardless’ from the sentence, at least with respect to radiative flux changes via effective altitude shifts, for the surface and at the tropopause, since the effect of altitude shifting can saturate in those cases (approach a limiting altitude).
But it is true that those types of saturations do not necessarily occur simultaneously, just as saturation at the surface (effective altitude approaching the surface) can occur either ‘before’ (especially for water vapor) or ‘after’ (high clouds) saturation at the tropopause (high clouds wouldn’t saturate the LW opacity for downward radiation from above, but they could saturate that for upward radiation from below if high enough, so it wouldn’t be complete saturation by clouds alone, assuming clouds don’t actually straddle the tropopause). (‘Before’ and ‘after’ referring to increasing LW optical thickness.)
For example, any sufficiently thick cloud at any height, or a sufficiently thick water vapor layer (which would be found closer to the surface), would nearly eliminate LW transparency through the atmosphere, but could still leave much room for effective altitude shifting by other agents, depending on the location of the cloud or water vapor. Generally, saturation of effective altitude shifting should lag saturation of eliminating transparency for a thick layer.
Patrick 027says
Re 791 Completely Fed Up – I had made a 1000-fold error, which I was specifically addressing in the comment you quoted. The NASA website says the average density of the photosphere is less than 1 millionth of a gram per cubic cm and that the photosphere is about 500 km thick. [Notation below: xE-y = x * 10^-y, with a capital E to avoid confusion with the number e.]
1E-6 g/cm3 = 1E-6 kg/L = 1E-3 kg/m3
1E-3 kg/m3 * 500 km = 500 kg/m2
So, assuming the 500 km is not too imprecise relative to how far below 1E-3 kg/m3 the density actually is, the photosphere has less than about 500 kg/m2.
Regarding pressure,
based on:
Earth surface g = 9.81 m/s2,
Earth’s radius = 6371 km,
Sun’s radius = 695,500 km,
Sun’s mass = 333,000 times Earth’s mass
(therefore, Sun’s gravity at surface (photosphere) is about 27.9 times Earth’s, or about 274 m/s2; this shouldn’t change significantly through the depth of the photosphere, as it is a very small fraction of the radius of the Sun and contains a very small fraction of the Sun’s mass.)
(Interesting aside: Solar mass is considerably more concentrated toward the center than Earth’s.)
The pressure at the bottom of the photosphere contributed by the weight of the photosphere would be a bit less than about 137 kPa, which is similar to Earth’s atmospheric pressure at sea level ( ~ 101 kPa average).
and:
R = 8.3143 J/(K mol) (well, close; I’ve seen some different values for the last digit or … two?)
H = R*T/(M*g), where H is the scale height (the height over which pressure decreases by a factor of e, assuming hydrostatic balance and constant temperature) and M is the molar mass
Using an isothermal approximation, and using the ‘nice round number’ of 1 g/mol for atomic H, with T = 5780, The pressure and density increase by a factor of 17.3 times from the top to the bottom of the photosphere for atomic H; if the density of a plasma (two particles for every ionized H atom) is half that of atomic H for the same temperature and pressure (I’m really not sure on that point, though**), then it would be a factor of 4.16. Thus, assuming whatever is above the photosphere is concentrated enough to be mostly under the same gravitational acceleration, the photosphere has about 3.16/4.16 to 16.3/17.3 of all the mass above the base of the photosphere – using an isothermal approximation. The inverses of those ratios are 1.32 and 1.06, which are the total mass above the base of the photosphere divided by the mass of the photosphere, in the hydrostatic approximation (obviously breaks down for solar wind, but I’d guess it might still work for a majority of the mass outside the photosphere) and assuming the mass above the photosphere is near enough to the photosphere for the same gravitational acceleration to apply (and therefore the same surface area – they are approx. inversely proportional with the vast majority of the mass below).
With the same assumptions (hydrostatic approx.) and the isothermal approximation, the pressure at the base of the photosphere is about less than between 1.06 and 1.32 times 137 kPa. Since the top of the photosphere is cooler, those ratios (1.06 and 1.32) should be overestimates, unless the temperature rises sharply enough above the photosphere (I haven’t mined the NASA website for all useful information).
But for what it’s worth, 137 kPa at the base of the photosphere and the isothermal approximation of 5780 K imply a density of 2.9 g/m3 at the base of the photosphere and 0.94 g/m3 photospheric average, for the case of atomic H; the first value would be half of that for the pure plasma (if I am not missing something relevant about plasmas, which is certainly possible), but the second value would be more than half for the pure plasma because the scale height would be larger (less variation of pressure and density with height). Anyway, these numbers are consistent with the order of magnitude of density implied by the website.
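For what it’s worth, here is the same arithmetic as a short Python sketch, using only the figures quoted above and the isothermal/hydrostatic approximations (a check of the arithmetic, not new data):

import numpy as np

# Inputs are the numbers quoted in the comment (NASA photosphere figures, isothermal
# and hydrostatic approximations).

g_sun = 9.81 * 333000.0 / (695500.0 / 6371.0) ** 2   # Sun/Earth gravity scaling, ~274 m/s^2
rho = 1e-3                                           # kg/m^3, photospheric density (upper bound)
depth = 500e3                                        # m, photosphere thickness
mass_per_area = rho * depth                          # ~500 kg/m^2
p_weight = mass_per_area * g_sun                     # pressure from the photosphere's own weight

R, T = 8.3143, 5780.0
print(f"surface gravity ~{g_sun:.0f} m/s^2; mass ~{mass_per_area:.0f} kg/m^2; "
      f"pressure from photosphere weight ~{p_weight / 1e3:.0f} kPa")

for label, M in (("atomic H", 1e-3), ("fully ionized H plasma", 0.5e-3)):  # molar mass, kg/mol
    H_scale = R * T / (M * g_sun)                    # isothermal scale height, m
    ratio = np.exp(depth / H_scale)                  # bottom/top pressure (and density) ratio
    print(f"{label}: scale height ~{H_scale / 1e3:.0f} km, bottom/top ratio ~{ratio:.1f}")

The printout reproduces the ~274 m/s2, ~500 kg/m2, ~137 kPa, 17.3 and 4.16 figures above.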
(Too OT? Well it is an application of some physics that is important to Earth’s atmosphere as well…)
Patrick 027says
“Anyway, these numbers are consistent with the order of magnitude of density implied by the website.”
Was that circular reasoning? The information used to calculate the pressure owing to the weight of the photosphere was independent of temperature and molecular mass; however, the knowledge that this would be of the same order of magnitude as the total pressure was based on the scale height, which was based on those variables. So of course the average density calculated from the gas laws will be of the same order of magnitude as the value used to find pressure (would be exactly the same if the weight of overlying material were included). So it was meaningless that the result was similar to the input (except as a check on my math).
Patrick 027says
Re BobFJ “Was it not a “warmist” argument that 1998 was an outlier?”
Arguing that 1998 was an outlier relative to average years makes some sense (though outlier might be taken to imply that it should be ignored; that would not make sense; it really did happen). A ‘warmist’ in late 1998 might find it convenient to see it as the start of a dramatic new stage in the anthropogenic warming process, but where is such a ‘warmist’ to be found? Maybe some members of the public? I wouldn’t be surprised. But I don’t think you’d find many among scientists, including those who approved of the IPCC reports, etc.
Of course, there are other ways of looking at 1998:
While a single data point, it might contribute to statistics showing a trend among years with strong El Ninos. Imagine, more generally, separating the climate record into separate records with similar ENSO, SAM, NAM, NAO, perhaps even QBO, etc., indices/phases, and studying trends among them, correlations among the different trends between the different sets of time intervals, and also trends in the density of each set of intervals over time.
(PS The distinction between signal and noise depends on what you’re looking for. Noise is not just something that can obscure part of a climate signal; it is also a part of climate, and so one might find signals in the noise (changes in frequency or amplitude or shape, etc.).)
Patrick, thanks for the further clarification. I think my second sentence needs to go in its entirety; it’s not really needed, and if the issue can’t be sufficiently clarified in a brief fashion, I’m better not raising it.
It’s been interesting, though, to learn a bit more about the different senses in which this apparently innocuous term “saturation” can be used. I wasn’t attaching the term to the altitude-shifting phenomenon. Is that a “normal” usage of the word?
In the Koch-Angstrom experiment, it was transmissivity that was at issue. But I thought that in that case, Koch was unable to drive transmissivity to zero? This issue goes to the heart of my struggles with the concept of saturation: what determines “the maximum possible” absorption? And when we say “maximum possible,” what circumstantial limits are required to obtain a valid description?
For those who missed it the first time, this is about my life-times-and-work article on Guy Callendar, the man who brought CO2 climate theory into the 20th century.
Re BobFJ “Was it not a “warmist” argument that 1998 was an outlier?”
Arguing that 1998 was an outlier relative to average years makes some sense (though outlier might be taken to imply that it should be ignored; that would not make sense; it really did happen).
The wording in my 805 on which you are commenting was a bit screwed up, but I agree with all you say. Whilst there may well be some noise within the 1998 temperature value, I can’t see any justification for treating it as an outlier because, as you say, it was a real event of proven consequences. Nevertheless, I have seen some studies where temperature plots are “corrected” by “removing” ENSO. (Including, I seem to remember, an Oz study by the CSIRO or BMO a few years ago.)
Patrick 027says
Re 811 myself – I made the mistake again! This time it was just a typo, though; I used the correct value (500 kg/m2) in calculating pressure…
Doug Bostromsays
2nd time’s a charm for CryoSat launch. Many interesting details here:
“It’s been interesting, though, to learn a bit more about the different senses in which this apparently innocuous term “saturation” can be used. I wasn’t attaching the term to the altitude-shifting phenomenon. Is that a “normal” usage of the word?”
I wouldn’t really know what the normal usage is, but given the concept of saturation, it should apply.
PS Of course, while the effective-altitude-shifting effect can saturate, in terms of forcing, as the effective altitude approaches the tropopause, the tropopause can itself shift in a climate change, so the saturation after forcing is applied to the present climate is not the same as what exists after the climate response. I’m not saying this is a big effect; I haven’t myself seen it quantified, and I’m guessing it’s small for a doubling of CO2, for example (I don’t think the tropopause rises so much as to halve the optical thickness of the CO2 in the stratosphere). But if a greenhouse gas with the same optical thickness over the whole LW range were added, the forcing would approach saturation (at the tropopause), yet at the same point the climate sensitivity for that forcing would increase, because farther additions of the gas would still have effects: the climate response would involve sufficient lifting of the tropopause to ‘unsaturate’ the LW opacity enough to have a net outward LW flux to balance the net inward SW flux (solar heating). (Perhaps the ultimate limit for greenhouse warming would be approached as temperatures got high enough in some sufficiently optically significant part of the climate system (not the thermosphere) for significant radiation to be emitted at the same wavelengths as solar heating.) This gets into how radiative forcing and climate sensitivity can be different between a forced change and the reverse change, with the change in equilibrium climate being of the same magnitude in the absence of hysteresis.
PS II net LW fluxes from air to air – point of interest:
(wherein emission and absorption are part of the LW opacity, as opposed to pure scattering):
What’s interesting to note about a gas such as CO2, with its approximately triangular-shaped absorption spectrum when plotted as log(optical thickness), is that to a first approximation, after the central part of the band is saturated with respect to the distances considered, the forcing at the tropopause, at the surface as conditions allow, and at the top of the atmosphere continues to change with increasing CO2, but changes in the direct air-to-air LW fluxes would be limited, except where there are overlaps with clouds or other gases. This is because the greatest net radiant fluxes occur when regions of different temperatures are optically thick enough (with enough of that being from absorption/emission, as opposed to just scattering (non-Raman/Compton)) to emit and absorb much of the fluxes that reach them, but also when the optical thickness over the distance from lower to higher temperatures is small enough for the radiation fluxes to reach across that distance. Higher opacity reduces the net LW fluxes by blocking photons from reaching across larger temperature variations, while small opacity (with less optical thickness from emission/absorption) means the regions of different temperature emit and absorb too little of the radiation passing between them. For CO2, once the central part of the band is saturated (for a given spatial distance), the intervals of intermediate LW optical thickness (for a given spatial distance) stay approximately the same width as CO2 is added. Aside from deviations from the approximation of the CO2 spectrum as triangular in log(optical thickness), and aside from variations in blackbody intensity as the intervals shift outward from the center of the band, the net LW fluxes (for temperature variations on a given spatial scale) from air to air would not change from absorption and emission by CO2 alone. But adding CO2 can change the net fluxes among layers of water vapor (where the absorption spectrum overlaps significantly – not so much in the stratosphere, or, I think, the upper troposphere), clouds, the surface, and space, or between any of those and clear dry air.
—-
“and if the issue can’t be sufficiently clarified in a brief fashion”
A very helpful concept is a weighting function. At any location (and time) P, for a given frequency, polarization (where important), and direction Q, the intensity of radiation coming from that direction can be attributed to a source that has some distribution over space. If only emission and absorption occur, the distribution lies along a single path (possibly curving due to refraction); if reflection occurs, it may lie along a bent or branching path; if scattering occurs, it may fill a volume of space. But where the emission cross section density approaches zero, the density of the source goes to zero, and the source is projected out of such areas into where the emission cross section density is found (in directions depending on scattering and reflection).
The emission weighting function** is a distribution that, when multiplied by the blackbody intensity as a function of local temperature (assuming LTE) and then integrated over space, equals the intensity of radiation reaching P from Q. In LTE (and as an approximation assuming conditions change slowly relative to photon travel times), the emission weighting function for radiation at P from Q is equal to the absorption weighting function at P for radiation going toward Q. The radiation at P from Q is emitted from one weighting function and absorbed by another, and the radiation in the opposite direction comes from the other weighting function and is absorbed by the first, with emission a function of temperature, so the net intensity depends on the temperatures of the pair of weighting functions. This can all be integrated over directions (see http://chriscolose.wordpress.com/2010/02/18/greenhouse-effect-revisited/#comment-2058 ) to give the fluxes and net flux per unit area across some defined surface at a location, and the weighting functions for that.
For a given density of absorption cross section, increasing the scattering cross section density concentrates the weighting functions toward P (and can make the weighting functions wrap around P and overlap); or, if the absorption cross section density is zero around P, scattering can project the weighting functions onto absorbing material on all sides (so they still can overlap). For a given density of scattering cross section, increasing the absorption cross section density also concentrates the weighting functions toward P. Thus, in general, increasing opacity brings the weighting functions closer to P, and if they are already largely concentrated into a space with some overall spatial tendency in temperature, it will bring the pairs of weighting functions closer to being isothermal, thus reducing net LW intensities and fluxes.
** I think a weighting function might (??) also be defined as just referring to emission of photons that reach a point without any scattering in between, but that’s not the relevant picture in this discussion.
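For concreteness, here is a minimal plane-parallel, absorption/emission-only sketch of the idea (one direction, no scattering, no refraction; all profiles assumed): the upward intensity at the top of the atmosphere is the blackbody intensity weighted by each layer’s emissivity times its transmission to space, plus a surface term.

import numpy as np

# Minimal plane-parallel, absorption/emission-only sketch (no scattering or refraction,
# one direction straight up; all profiles assumed).  The upward intensity at the top of
# the atmosphere is B(T) integrated against an emission weighting function.

z = np.linspace(0.0, 50e3, 501)                       # height, m
dz = z[1] - z[0]
T = 288.0 - 6.5e-3 * np.minimum(z, 11e3)              # idealized temperature profile, K
dtau = 5.0 * np.exp(-z / 8e3) / 8e3 * dz              # layer optical depths (total ~5, assumed)
tau_above = np.cumsum(dtau[::-1])[::-1] - dtau        # optical depth from each layer up to space
weight = np.exp(-tau_above) * (1.0 - np.exp(-dtau))   # emission weighting function

def planck(T, wn=65000.0):                            # B at 650 cm^-1 per unit wavenumber (SI)
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 2.0 * h * c**2 * wn**3 / np.expm1(h * c * wn / (k * T))

I_top = np.sum(weight * planck(T)) + np.exp(-dtau.sum()) * planck(T[0])   # + surface term
print("weighting function peaks near z =", z[np.argmax(weight)] / 1e3, "km")
print("upward intensity at the top:", I_top, "W m^-2 sr^-1 per m^-1")

Adding absorber (raising the total optical depth) pushes the peak of this weighting function upward, which is the effective-altitude shift discussed above.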
BobFJsays
CTG Reur rant @ 800:
So that would be a fail on reading comprehension, then. CFU is pointing out that in 1999 the denialist meme de jour was that 1998 should be ignored as an outlier. Once 2008 came along, though, suddenly 1998 became the “starting point of the plateau”. But then came 2009, and suddenly 11 year trends went out of fashion, and 12 became the new 11. Whoops, there go the goalposts again.
1) Yes, I misunderstood CFU, and have apologised to him for my mistake.
2) To back-up CFU’s and your claim; do you have any reference to confirm that in 1999, a sceptic or sceptics (or even a denialist to use your term) claimed that 1998 was an outlier?
3) So, as time goes by, and as new data evolves over a decade, sceptics are not allowed to revise their hypotheses or opinions? (but “warmists” are allowed to say whatever they like?)
Gosh, [concerning “correction” for ENSO] if only there were some sort of statistical technique that could do that sort of correction. Not just for 1998/1999/2000, but for all years. Sort of like averaging out the noise or something. Like this maybe. Now where’s your plateau?
4) Yes, it would be nice to be confident of any statistical inferences, but never mind, there have been several studies attempting to “correct” temperature records by “removing” ENSO, including this one.
5) So, where’s your plateau, you ask, CTG? You could perhaps consider this.
Oh, and by the way – most of your denialist friends have already given up on 1998 as the “turning point”, and talk about 2001 instead, as it avoids those inconvenient 1999/2000 low points. You guys really ought to get your story straight – all these contradictions just make it look as though you just don’t know what you’re talking about.
6) Well actually I’m aware of one sceptic that has talked of showing the decline from 2001, simply to appease objections from “warmists” that it is “unfair” to start at 1998 because it is an outlier. (Patrick 027 disputes that it is an outlier BTW)
7) 1998 is the El Nino high, and 1999 + 2000 are the “corrections” that are part of this ENSO loop. Thus 1999 + 2000 are NOT inconvenient as you assert.
BobFJsays
CTG, further my comment which is currently in moderation, located below the current 818:
I should add that I think that the suggestion of a 15-year plateau by some, and what Phil Jones has described as statistically insignificant warming, or something like that, is a bit of a stretch.
However, I believe that a plateau can clearly be seen starting at 1998, which, together with 1999 + 2000, is probably mostly a real part of the ENSO with a low noise contribution. The three years seem to be a compelling combination resulting from ocean turn-over.
The 1998 El Nino may have been an outlier in the sense that it was extreme relative to the average and median temperatures for a small-enough time period centered there, or an extreme if temperatures are plotted with a longer-term trend subtracted from the data (though in that perspective, I’m not sure it wouldn’t be surpassed by a few other years, depending on the length of time considered… – one thing I remember about 1998 was that that El Nino had three peaks, whereas no other recorded El Nino did, but how far back does such detailed recording go?).
But it wasn’t measurement error. It was real.
If this instance of internal variability was of a magnitude that has low-enough average frequency, then one could argue that including this data point (although with some cancelation by any subsequent ocean cooling attributed to it**) would skew the overall picture in some way for a time period that is not long enough; in that way real data can contribute to ‘measurement error’ in the sense of measuring the tendencies of the system (statistical uncertainties in identifying signals immersed in noise).
But so far as I know, the 1998 El Nino is not such a huge obstacle to figuring out the forced climate signal.
** I agree that in general, surface temperature changes caused by vertical heat redistributions in the oceans should tend to (in the absence of some idiosyncrasy) cause a net heat gain or loss via changes in LW emission, so that surface+troposphere temperature changes caused by such oceanic heat redistributions should tend to cause a climate response in the opposite direction. This effect will be weaker if the climate sensitivity is higher (in the absence of some consequential idiosyncrasy in the pattern of internal variability changes, relative to the pattern of a forced response). But when the oceanic circulation returns to an average condition (as it is likely to do if it is internal variability), the climate will tend to respond again to return to the same equilibrium, gaining back heat or losing the heat just gained.
Not having yet done the math, I wouldn’t expect the time-integrated cool anomaly of the response to such internal variability to be greater than the time-integrated warm anomaly that caused it (both of those anomalies being part of the internal variability).
Patrick 027says
“I wouldn’t expect the time-integrated cool anomaly of the response to such internal variability to be greater than the time-integrated warm anomaly ”
… unless there is some idiosyncrasy that makes it so.
Patrick 027says
RE my 819:
“A very helpful concept is a weighting function. At any location (and time) P, frequency, polarization (where important), and in any given direction Q,” … “The emission weighting function** is a distribution, that when multiplied by the black body intensity as a function of local temperature (assuming LTE) and then integrated over space, is equal to the intensity of radiation reaching P from Q. In LTE (and for an approximation assuming conditions change slowly relative to photon travel times), the emission weighting function for radiation at P from Q is equal to the absorption weighting function at P for radiation going toward Q. The radiation at P from Q is emitted from one weighting function and absorbed by another, and the radiation in the opposite direction comes from the other weighting function and is absorbed by the first, with emission as a function of temperature, so the net intensity depends on the temperatures of the pair of weighting functions. This can all be integrated over directions” … “to give the fluxes and net flux per unit area across some defined surface at a location and the weighting functions for that.”
1. This description assumes the real component of the index of refraction n = 1. Variations in n affect radiation in what can be considered two ways (that are linked):
A. They bend the paths the radiation takes, thus affecting the weighting functions. A weighting function will tend to be compressed into regions of higher n and dispersed out of regions of lower n, which is related to – and includes the effects of – total internal reflection, and is related to the following point:
B. Refraction compresses rays into a smaller solid angle (concentrating the intensity) as n increases, and does the opposite as n decreases (the intensity is proportional to the square of n, at least if n does not vary over direction at a given location (it might be the same even when n is not isotropic, but I’m not sure)). Related to that, actual blackbody intensity is a function of n, so that, in the absence of any optical thickness or partial reflection outside a blackbody, the intensity of blackbody radiation coming from different blackbodies at different n will be the same when it reaches the same n.
It isn’t necessary to consider the value of n over the weighting functions once the weighting functions are established; the change in blackbody emission and refraction are accounted for by the effect of refraction on the weighting function; so one can use the n=1 value of Ibb to first compute the intensities at P back and forth in direction Q, and then multiply that by n^2 for the value of n at P (or some other procedure if anisotropy in n affects the relationship?) to find the actual intensities, and then integrate over direction to find a flux per unit area across a surface at P; if n is isotropic then the integration can be done first and the results can be multiplied by n^2.
2. This is how weighting functions determine fluxes and intensities across a location. The locations that are part of one weighting function will also be part of another weighting function for the fluxes and intensities at a different location.
An alternative perspective can be gained by considering the density of the emission weighting function for the intensity reaching point P1 from direction Q1 at some frequency and polarization that comes from a point P2.
Further, one can consider the portion of that density that leaves P2 in some direction Q2 with some polarization – let’s call this E1.
And one can consider the emission weighting function density E2 for radiation going in the opposite direction from P1 in direction Q1 with the polarization considered at P1 and reaching P2 from Q2 with the polarization considered at P2. Let n1 and n2 be the real components of the index of refraction at P1 and P2. f(n) = n^2 if n is isotropic; it might be the same if n is anisotropic but I’m not sure.
For LTE, the relationship for emission in one direction and absorption from that direction requires that the intensity (per unit volume at P1 per unit volume at P2, thus in units of W/m2 per unit spectrum (and per unit of range of polarizations) per steradian (unit of solid angle) per unit volume^2) emitted from P2 and absorbed by P1 is E1*E2*Ibb(P2)*f(nx), and the intensity in the opposite direction absorbed at P2 is E1*E2*Ibb(P1)*f(nx), where Ibb(P) is the blackbody intensity (for index of refraction = 1) for conditions at P, and nx is the n at the point Px where the intensity is measured. Thus the net intensity per unit volume at P1 per unit volume at P2 is proportional to the difference between Ibb(P1) and Ibb(P2), and is positive in the direction from warmer to colder temperature.
This can be integrated over polarizations and directions at P2, then over polarizations and directions at P1, and then over a volume V2 at P2, and then over a volume V1 at P1 (or the same steps in any order), to give the fluxes between V1 and V2 and the net flux from V1 to V2 (units of W per unit frequency, not W/m2 or etc.) And then integrated over frequency, of course.
This perspective shows the net flux is from a higher to lower temperature for LTE conditions.
3. Some stuff I don’t know so much about:
It occurs to me that even without Raman and Compton scattering, and stimulated emission, fluorescence (non-LTE), etc., one could still have some Doppler shifting of photons scattering off of moving particles, so the frequencies would still be spread out a little between emission and absorption (though Doppler shifting of photons during scattering would require some transfer of energy between the photon and the scatterer, so this is really analogous to Compton scattering, although so far as I know Compton scattering itself is only between photons and electrons (or charged particles in general?)). It’s possible I could be completely wrong about this (Doppler shifting by non-Raman, non-Compton scattering), but just in case… Including all these effects (and any other relativistic effects) would change the mathematics. Fortunately these are relatively small effects in the Earth’s atmosphere (except maybe Doppler shifting during scattering might be on the order of the Doppler contribution to absorption and emission line broadening, though pressure broadening dominates line broadening for at least some portion of the atmosphere). I think it will still be the case that net radiant fluxes will be from higher to lower temperature – if the energy absorbed and then emitted without thermalization is still considered to be on its way… (and allowing different populations of particles in the same volume to have different temperatures). (Photons Doppler-shifted in scattering would presumably spread into different frequencies to the same extent that photons from those frequencies spread into the first frequency, so …)
BobFJsays
Patrick 027, thanks your 822/823
I’ve always wondered how ENSO can be measured or given indices that really mean anything because the dynamics of the SST’s are surely highly variable both spatially and temporally. Is a short high intensity phase more significant than a long duration lower intensity phase, and what about surface area etc? Other factors such as wind patterns in both hot and cold phases must also be important. I would also question the reliability of the data published by NOAA that go back some 60 years.
The thermodynamics are also extremely complex. Apart from spatial issues, the SST’s are diurnally more stable than the air T’s, and heat transfer rates via conductive advection/convection thus depend on the “weather” and time of day. There is also highly variable cooling from evaporation, which Trenberth gives as the greatest global average HEAT loss process. And, as you say, there is long-wave EMR. How pray can all this stuff be weighted or given a calibration to global average T’s in each hemisphere?
“El Niño/Southern Oscillation (ENSO) is the most important coupled ocean-atmosphere phenomenon to cause global climate variability on interannual time scales. Here we attempt to monitor ENSO by basing the Multivariate ENSO Index (MEI) on the six main observed variables over the tropical Pacific. These six variables are: sea-level pressure (P), zonal (U) and meridional (V) components of the surface wind, sea surface temperature (S), surface air temperature (A), and total cloudiness fraction of the sky (C). These observations have been collected and published in COADS for many years. The MEI is computed separately for each of twelve sliding bi-monthly seasons…”
Hmmmm…. That should keep some academics busy for a while! …. And going back 60 years?
David B Benson , Reur 783:
BobFJ (782) — Thanks, but I’ll stick with just the AMO since it seems the more important and picks up some of the PDO variation in any case.
Tamino is moving his household and otherwise professionally very busy.
Thanks for that, but maybe the above could be of interest to you? I’m beginning to understand your position. BTW, the graph I cited of AMO + PDO, I now think is crap, and perhaps you were being polite.
David B. Bensonsays
BobFJ (825) — Since I use decadal averages, ENSO disappears except to the extent it contributes to the AMO index.
And yes, I try to be polite, but to a small extent the PDO will also contribute to the AMO index and with R^2=0.991 and no significant autocorrelation, there isn’t anything left to explain.
Patrick 027says
Re BobFJ
I think one measure of ENSO is the difference in sea level pressure between two locations. Various other indices of low-frequency internal variability modes have been defined in that type of way.
Analysis to look for patterns in the fluctuations can identify ‘Empirical Orthogonal Functions’ (EOFs; I think that’s the term), which would describe the patterns more completely (I don’t know the details of this, but it makes intuitive sense that such modes could be identified).
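A minimal sketch of how such patterns can be extracted from data: stack anomaly maps into a (time x space) matrix and take an SVD; the leading right singular vectors are the spatial patterns (EOFs) and the corresponding left vectors give their time series. The data here are synthetic, purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
ntime, nspace = 600, 200                                  # e.g. months x grid points (made up)
pattern = np.sin(np.linspace(0.0, np.pi, nspace))         # a planted spatial "mode"
index = 2.0 * np.convolve(rng.standard_normal(ntime), np.ones(12) / 12.0, mode="same")  # slow index
field = np.outer(index, pattern) + 0.3 * rng.standard_normal((ntime, nspace))           # mode + noise

anom = field - field.mean(axis=0)                         # anomalies: remove the time mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)

print("variance explained by leading EOF:", round(float(explained[0]), 2))
print("|correlation| of EOF-1 with the planted pattern (sign is arbitrary):",
      round(abs(float(np.corrcoef(vt[0], pattern)[0, 1])), 2))

The real thing is done on observed anomaly fields (e.g. SST or sea level pressure), usually with area weighting, but the linear algebra is the same.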
– “Is a short high intensity phase more significant than a long duration lower intensity phase,”
Just to be clear, the cool anomaly that could be expected to follow a warm anomaly driven by oceanic heat rearrangement is not itself just the opposite phase of such a fluctuation. If an El Nino, by hiding cooler water beneath the surface, results in a surface+tropospheric temperature anomaly that, being higher than forced equilibrium, results in a radiative imbalance that leads to cooling, ultimately removing heat from the oceans, so that after the El Nino, the climate system has less heat than before and will tend to be cooler – this is not the same as a La Nina; the same process would, if the El Nino were permanent, remove heat from the climate system and tend to bring the global average surface temperature down; likewise, the global average surface+tropospheric temperature response to a permanent La Nina might fade over time as the climate system gains heat. (Absent internal variability or averaged over it in the longer term, a radiative disequilibrium decays via temperature change toward equilibrium, being proportional to exp(-time/(climate sensitivity*heat capacity)) – but the spatial and temporal pattern associated with a particular shape of internal variability makes it less than obvious that the same climate sensitivity and heat capacity would apply to any particular mode of internal variability as would apply to any particular externally-forced changes, though it might be a good first assumption until established otherwise (whether or not it has been established otherwise, I’m not sure)).
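A minimal single-reservoir sketch of that relaxation: a temperature anomaly away from forced equilibrium decays radiatively with an e-folding time of (climate sensitivity parameter) times (heat capacity). The numbers below are assumptions purely for illustration, not estimates of the real system.

import numpy as np

lam = 0.8                       # sensitivity parameter, K per (W/m^2)  (assumed)
C = 4.2e8                       # heat capacity, J m^-2 K^-1 (~100 m of ocean, assumed)
year = 3.15e7                   # seconds per year
tau_years = lam * C / year
print(f"relaxation time lam*C ~ {tau_years:.1f} years")

dt = 86400.0                    # one-day time step
T = 0.5                         # initial warm anomaly, K (e.g. left over from an El Nino-like event)
history = []
for _ in range(int(20 * year / dt)):         # integrate 20 years
    T -= (T / lam) / C * dt                  # excess outgoing radiation T/lam cools the reservoir
    history.append(T)

one_efold = history[int(tau_years * year / dt)]
print(f"anomaly after one e-folding time: {one_efold / 0.5:.3f} of its initial value (~1/e)")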
Patrick 027says
(PS some modes of internal variability might have intrinsically finite lifetimes (given constant everything else) – for example, QBO acts like an internal clock of the climate system, with a period generally over 2 years; it is driven by vertically propagating fluid-mechanic waves (in particular, equatorial Rossby-gravity and Kelvin waves), and the fluctuation continues even assuming the wave activity is produced at a constant rate.)
Patrick 027says
Re my 824 about that last part:
At full (local) thermodynamic equilibrium, the photons would be in thermodynamic equilibrium with the non-photons – with all non-photons, requiring those to be in thermodynamic equilibrium amongst themselves.
In such a condition, fluorescence and phosphorescence and photons produced by chemical reactions, Raman scattering, Compton scattering, and any other scattering where photon energy is not conserved, and stimulated emission of radiation, all leave the photon intensity at the blackbody value for the temperature of the matter; emissivity = absorptivity in total including all contributions to emission and absorption.
When only the non-photon matter is in (quasi-)LTE, so that the absorption and emission of photons (in total, or at particular frequencies and/or polarizations, or by particular energy transitions) are not balanced, then processes that contribute to photon energy before thermalization of absorbed photon energy, or that absorb before thermalization of the energy deficit left by emission, can allow absorptivity from a direction to be different from emissivity to a direction for the same frequency and polarization.
Can anyone explain or find a link showing water vapour to CO2 heat flux ratios between, say, mid latitudes and the Arctic?
I read a paper which shows this ratio in the Arctic being 1/3 CO2/H2O. I must presume that it is a simplification; the long-night Arctic has very little water vapour, so CO2 must take more prominence in heat flux during that period. On the other hand, CO2 takes less prominence at lower latitudes, yet I read CO2 can be as much as 30% while water vapour is close to 70%.
I am a bit perplexed by the apparent zonal similarities, but this must be due to simplifications. Surely CO2 carries a higher influence during the long Arctic night.
[Response: You can look at the spatial distribution of the forcings here: http://data.giss.nasa.gov/efficacy/ look at the ‘Fa’ results for instance. The smaller forcing for CO2 near the pole is because of the drop off in upwelling LW (which is much smaller at the poles than it is in the subtropics). – gavin]
Many thanks, Gavin. The adjusted forcing 5/4 * CO2 ratio between the Arctic and mid-lats is about 1/1.3, while the total precipitable water ratio between the Arctic and mid-lats is often 1/5. I am not coming close to confirming the 1/3 CO2/water vapour heat flux ratio for the Arctic. It seems I am missing something, but I am at a loss to know why it’s 30% when it should be a bit bigger.
Tom Ssays
Here is something interesting:
“Citizen Audit” verifies IPCC peer review claims. As has been stated by the moderators, Working Group I comes out looking very good. II and III not so much.
AR4                   % not peer-reviewed   # not peer-reviewed   # references
Report overall                 30                 5,587              18,531
Working Group 3                57                 2,307               4,033
Working Group 2                34                 2,849               8,272
Working Group 1                 7                   431               6,226
Before people have a bird, this does not say anything is wrong, but refutes the stated/implied claim that AR4 was totally peer reviewed.
David B. Bensonsays
One aspect of climatological and other physical data is the rather mysterious http://en.wikipedia.org/wiki/Pink_noise
observed in some spectral bands. Often, the lowest-frequency responses are quite flat, then have that 1/f noise at the middle frequencies, bending to http://en.wikipedia.org/wiki/Brownian_noise
at the highest frequencies.
The only mysterious part, at least for me, is the approximately 1/f noise in the midband, and some articles on 1/f noise seem to agree. The first thing to note is that the noise is 1/f in power and so 1/sqrt(f) in amplitude. The next thing to note is that there is often some spatial distribution aspect to the physical system, so that different responses occur both in time and space.
However, the Laplace transform system function for a leakage-free, noninductive infinite (linear) transmission line is proportional to
T(s) = 1/sqrt(s)
and with a forcing of cos(at) with angular frequency a, the response, once the transients are over, is proportional to
(1/sqrt(a))sin(at)
and hence the system has a pink noise spectrum when excited by white noise over the entire band in question.
So, rather approximately but still clearing some of the mystery of pink noise, the midband response is about as if it were a “perfect” coaxial cable, with analogous systems being
(1) conduction of heat through a slab of uniform thickness and
(2) diffusion of liquid in a homogeneous medium
both of which obviously are of climatological significance.
[Hope I didn’t make a mistake in the quite difficult determination of the inverse Laplace transform of (1/sqrt(s))(s/(s^2+a^2)).]
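A quick numerical check of the point (purely illustrative, Python/numpy): pass white noise through a filter whose amplitude response goes as 1/sqrt(f), the |H| of a 1/sqrt(s) system, and the output power spectrum falls off roughly as 1/f, i.e. pink noise.

import numpy as np

rng = np.random.default_rng(1)
n = 2**18
white = rng.standard_normal(n)

freqs = np.fft.rfftfreq(n, d=1.0)
gain = np.zeros_like(freqs)
gain[1:] = 1.0 / np.sqrt(freqs[1:])          # 1/sqrt(f) amplitude response; drop the DC term
pink = np.fft.irfft(np.fft.rfft(white) * gain, n)

psd = np.abs(np.fft.rfft(pink)) ** 2
band = (freqs > 1e-3) & (freqs < 1e-1)       # fit the mid-band, away from the ends
slope = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)[0]
print("fitted mid-band spectral slope ~", round(float(slope), 2), "(expect about -1 for 1/f power)")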
“Critics were claiming that the IPCC disobeyed its own rules — and I chimed in, assuming wrongly that it was IPCC process to only use peer-reviewed literature.
So mea culpa. I take it back. The IPCC lead authors and reviewers were in their rights to use “gray” literature so long as they made sure to follow the procedures outlined in the IPCC annex for such use. Of course, they should always double check to make sure the non-peer reviewed sources are in fact good science.”
Completely Fed Upsays
“Before people have a bird, this does not say anything is wrong, but refutes the stated/implied claim that AR4 was totally peer reviewed.”
Well, only because that claim is completely made up.
AR4 WGI is claimed to be totally peer reviewed.
Let’s have a look at the titles, shall we:
Working Group I Report “The Physical Science Basis”
Working Group II Report “Impacts, Adaptation and Vulnerability”
Working Group III Report “Mitigation of Climate Change”
Now, what sort of science journal would have papers from “mitigation scientists”??? Would the sorts of engineers and sociologists be putting things about the impacts of climate change in a science paper? After all, the DoD produced several reports for internal use on the subject and they would never be put in a science journal.
You’ve done a lot of research to find out how many papers have been reviewed in the AR4 process.
Please show where you did the research on the claim that the AR4 process is totally from peer review work.
ktkxbye
CMsays
Re: AR4 non-peer-reviewed sources (Tom S., at #832)
The Noconsensus blog has put 43 volunteers (presumably “skeptics”) to work for five weeks counting peer-reviewed references in the IPCC Fourth Assessment Report (AR4). Their results are similar to the loose figures some of us discussed on this site a month ago, but the net result of all their labors is to make the IPCC sources look more peer-reviewed than we did.
On an earlier thread (now closed) we discussed Andreas Bjurström’s count of references in the Third Assessment Report (TAR) (posted at Pielke’s), and I posted some results from a simple, error-prone Perl-scripted count of journal articles in AR4 (“Comments on IPCC errors: facts and spin”, comments #594, 596, 598).
Both Andreas and I only sought to distinguish journal and non-journal references, which is not identical with peer-reviewed and non-peer-reviewed. In the following, however, I will assume that it’s close enough a proxy that direct comparisons are meaningful. I will also assume that the noconsensus count is largely correct (i.e., that the procedures they adopted prevented their loud and explicit anti-IPCC stance from biasing the results too much).
– Noconsensus found 93% peer-reviewed sources in WG1, more than my 90-92%, and far more than the 84% Andreas counted in the TAR. (I particularly underestimated chapters 5,7, and 8.)
– Noconsensus found 66% peer-reviewed sources in WG2, toward the high end of my 61-67% estimate, and up from 59% in the TAR according to Andreas. I didn’t post results for single chapters except ch. 16 (which matched well), but the only chapter where noconsensus gives a value below my minimum estimate was in ch. 9.
– Noconsensus found 43% peer-reviewed sources in WG3, up from 36% in TAR according to Andreas. My own estimates were too uncertain to be useful (34-69%) but encompass the noconsensus results (for each chapter).
I hope I don’t need to add that I disagree with the spin Noconsensus puts on their findings, and that the interesting questions are: are the sources right; when is “gray” literature necessary and appropriate; how should it be quality controlled; and to what extent have parts of the formally “gray” literature undergone quality control equivalent in practice to the peer review of scientific journals.
Tom, I’m afraid that I can’t accept those numbers at face value. I have been fed so much hooha with a straight face for so long now that random “auditors” have no credibility whatsoever with me.
Just don’t trust them to tell the truth. Or even recognize it, for that matter.
Perhaps someone will take on the Augean task of auditing the auditors?
Ray Ladburysays
Tom S.,
It is perfectly reasonable for WG2 and WG3 to use so-called grey literature–in fact it is inevitable. There are few journals for looking at consequences of climate change and virtually none for looking at mitigation. These two groups of necessity must rely on government reports and analyses by NGOs. This is yet another red herring from Jeff Id.
The crucial thing for WG2 is to bound the potential consequences of the various threats they assess. The bound need not be the best–it need only exceed the actual consequences and be finite. One can always sharpen one’s pencil if that threat winds up driving risk. Grey literature is fine for this purpose.
For WG3, we are looking at mitigation and the costs and efficacy thereof. As this subject is still in its infancy, grey literature is inevitable.
Pssst! CFU — if you invite people to support bogus claims, they will try.
If you instead calmly point to information refuting their notions, others will be glad for your intervention to dismiss rather than encourage the nonsense.
Just as a reminder how upset people can get about _other_ science questions:
One of the commenters there, on the question whether birds are dinosaurs:
“… calling anyone who voices disagreement stupid, ignorant, and a menace to society does nothing to help your cause and makes you look less intelligent than the person you are arguing against. I say this because people might want to think about this before launching into a diatribe against other people, thereby reinforcing the common fallacy that there is little difference between religious and scientific hardliners.”
CMsays
Re: gray literature — just to add to what Ray said:
When the government wants reliable, up-to-date figures on how much different policies will cost, projections of how much the economy will grow, and so on, they get their stats and projections from the specialized national and international number-crunching agencies that exist for this purpose. In other words, they get them from what is called “gray literature” in IPCC talk.
Climate sceptics are claiming to be shocked that trillion-dollar economic decisions (as they like to put it) might not be based 100% on peer-reviewed journal articles. But trillion-dollar decisions are made all the time by, say, the U.S. government, balancing the federal budget and the national debt, based on “gray” reports from the Congressional Budget Office, the Department of the Treasury, the OECD, and so on. And you wouldn’t want it any other way. You really, really wouldn’t want the government to base economic policy only on what they can glean from journal articles written and refereed by ivory-tower professors. (Who would, in any case, probably be getting their data from … the CBO, the Treasury Dept., and so on.)
The IPCC reports are for policymakers, and the WG3 report is about policy options and about projecting how much carbon we’ll burn with various policy mixes under various scenarios for economic development, and so on. It’s not just that mitigation studies are in their infancy, it’s that they’re *inherently* reliant on “gray” sources.
Completely Fed Upsays
“If you instead calmly point to information refuting their notions, others will be glad for your intervention to dismiss rather than encourage the nonsense.”
Pssst. Hank.
Doesn’t work.
See “Zombie Argument”/”Rebunked”/et al.
Doug Bostromsays
CM says: 16 April 2010 at 2:37 PM
Slightly amplifying that excellent set of remarks regarding so-called grey literature that is actually analysis done by governmental units: we’d not only be foolish not to exploit all the number crunching done on behalf of the public by civil servants, but we’d probably also be annoyed if we found out all that effort was for naught. Why purchase a pair of reading glasses and then squint at a book because we refuse to wear our spectacles?
David B. Bensonsays
After some checking around the web regarding coaxial cable, yes, I did the integral properly in my just prior comment. From
Links between the Annual, Milankovitch, and Continuum of Temperature Variability
Peter Huybers & William Curry
we discover that between millennial scale and centennial scale the power spectrum of temperature data rolls off at around (1/f^1.6), and from centennial scale to decadal scale the roll-off is about (1/f^0.6). These are usually considered to be examples of http://www.scholarpedia.org/article/1/f_noise
Witgrensays
And even if 93% of the WG1 sources are peer reviewed, that doesn’t necessarily mean that 93% of the report is accurate and 7% isn’t – “gray” literature is by no means automatically incorrect, it just means it’s not peer-reviewed to the same standard as published scientific work.
Patrick 027 says
Re 798 – “line broadening hasn’t got anything to do with emissivity (inelastic collisions do, if they occur while emission is going on), unless you’re talking emissivity per unit wavelength, ”
My understanding, though it possibly could be wrong, is that the total emission cross section per unit substance, integrated over the spectrum, is conserved by line broadenning, and the same would be true for the cross section contributed by each line.
However, when lines are broadenned less, the cross section gets piled into some wavelengths and reduced at others; in the absence of scattering, emissivity = 1 – exp(-optical thickness) = 1 – (-emission cross section per unit area), and the integral over the spectrum is reduced by an absence of line broadenning (in other words, less line broadenning leaves cross sections piled up so that they hide each other to a larger extent, thus saturation is approached faster at some wavelengths while other wavelengths are more transparent, and the overall effect is increased transparency for the same amount of substance.
Robert says
Well, on the Colbert Report last night a meteorologist and a climatologist debated global warming… I sort of wish they had the climatologist go on and talk about Harries et al. 2001 or Wang and Liang 2009 instead of repeating the constant “ice is melting”. The question was whether we are causing it, not whether the climate is warming… the meteorologist (one of the many who don’t support the AGW theory) sort of won the debate, and I think he would have been easy to refute, honestly.
Patrick 027 says
“emissivity = 1 – exp(-optical thickness)” …
That emissivity being for a single direction
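A quick numerical illustration of the broadening point (a sketch with made-up numbers: a single Lorentzian line of fixed integrated strength and an arbitrary column amount, not a real line-by-line calculation):

```python
import numpy as np

# One absorption line: broadening changes the half-width gamma but not the
# integrated line strength S, so integrated optical thickness is conserved.
nu = np.linspace(-50.0, 50.0, 20001)   # wavenumber offset from line center (arbitrary units)
dnu = nu[1] - nu[0]
S, u = 1.0, 30.0                       # line strength and absorber amount (made-up values)

for gamma in (0.1, 1.0, 5.0):          # progressively more broadening
    phi = (gamma / np.pi) / (nu ** 2 + gamma ** 2)   # normalized Lorentzian line shape
    tau = S * u * phi                                # optical thickness per unit wavenumber
    emis = 1.0 - np.exp(-tau)                        # monochromatic emissivity, one direction
    print(f"gamma={gamma:4.1f}  integral(tau)={np.sum(tau) * dnu:5.1f}  "
          f"integral(1 - exp(-tau))={np.sum(emis) * dnu:5.1f}")
# integral(tau) stays roughly constant (small losses to far wings outside the window),
# while the band-integrated emissivity grows as the line is broadened.
```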
Kevin McKinney says
A question to the peanut gallery.
Guy Callendar wrote in 1938:
When radiation takes place from a thick layer of gas, the average depth within that layer from which the radiation comes will depend upon the density of the gas. Thus if the density of the atmospheric carbon dioxide is altered it will alter the altitude from which the sky radiation of this gas originates. An increase of carbon dioxide will lower the mean radiation focus, and because the temperature is higher near the surface the radiation is increased, without allowing for any increased absorption by a greater total thickness of the gas.
I commented:
This interpretation is essentially the converse of the point (made years earlier by Nils Ekholm) that radiation escaping the atmosphere is controlled by the effective altitude of the radiating layer. In both interpretations, the increased infrared optical thickness moves the effective radiative focus along a temperature gradient: warmer near the surface in Callendar’s formulation, colder near the top of the atmosphere in the case of Ekholm’s. In both interpretations, this effect operates regardless of whether or not the principal wave band is “saturated,” that is, absorbs the maximum radiation possible.
So, a fair and accurate interpretation–especially that last sentence? Comments, anyone?
The article’s up, but edits can still be made!
http://hubpages.com/hub/Global-Warming-Science-And-The-Wars
BobFJ says
Completely Fed Up Reur 788/p16:
No; If you study this graphical compilation, (RE 782) the bottom graph is PDO and AMO combined showing SST’s generally cooling between 1940 and 1975 and correlating rather well with HADCRUT. The correlation is also good for the whole period shown from 1900. Thus, if it is correct, that sceptical criticism would be defeated.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Completely Fed Up Reur 790/p16:
Quickly, sorry, I misunderstood your reference to 1999:
But surely you have got that the wrong way around! Was it not a “warmist” argument that 1998 was an outlier? Surely, to say it was NOT an outlier would support the sceptical point of view! Perhaps you should also consider that as each year passes, additional data becomes available, and assessments on that cumulative data can thus evolve. (on both sides of the fence)
Patrick 027 says
Re 803 Kevin McKinney –
The effect approaches saturation when the effective altitude has shifted to a point where further shifts don’t make much of a difference, because of limited temperature variation over the remaining distance. For ‘backradiation’ (down to the surface), saturation is reached as the effective altitude approaches the surface; for the tropopause level, saturation is approached as the effective altitudes for upward emission from below and for downward emission from above both approach the tropopause. For outgoing LW radiation to space, there is potential for ‘saturation’ of a sort (I had described it as a saturation, but now I’m wondering if the term should apply; nonetheless it is what it is) whenever the effective altitude approaches a temperature minimum or maximum, assuming the maximum or minimum is broad enough relative to the emission weighting function for the temperature variation to have a significant effect, and the temperature variations in front of it are thin enough not to dominate, which is generally the case in the atmosphere (most of the mass is in the troposphere, most of the rest is in the stratosphere, most of the rest is in the mesosphere). One could consider ‘saturation’ of a sort to occur when, with increasing optical thickness, the effective altitude is in the vicinity of the tropopause; the effective altitude doesn’t get past the stratopause at nearly all wavelengths.
The CO2, and I think, H2O greenhouse effects are saturated at some wavelengths for the tropopause level, and at the surface. The water vapor effect can become saturated at the surface over all energetically-significant LW wavelengths when the concentration is high enough, but it still won’t be saturated at all wavelengths at the tropopause level (for anything similar to Earthly conditions, and of course an equilibrium climate’s tropopause must be high enough to avoid saturation in at least some part of the spectrum so that a net LW flux out can balance solar heating). In the vicinity (of wavelengths) of significant absorption by CO2, H2O vapor is nearly transparent in the stratosphere and so, I’d presume, in some part of the upper troposphere as well. Clouds will have effects, of course. For upward LW radiation at the tropopause, CO2 forms a plateau of elevated effective altitude, with slopes (generally down to the effective altitude of emission from water vapor, the surface, and any clouds); adding more CO2 raises the effective altitude at the slopes, pushing up new slope outside the plateau and bringing existing sloping parts into the flat part of the plateau – effectively widening the plateau.
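To make the “effective emission altitude shifts until the temperature gradient runs out” idea concrete, here is a toy single-column, gray-absorber calculation. This is a sketch: the 6.5 K/km lapse rate to 12 km, the 210 K isothermal layer above, the 8 km absorber scale height, and the vertical-path-only treatment are all simplifying assumptions, not a real radiative transfer model:

```python
import numpy as np

def temperature(z_km):
    # Toy profile: 288 K surface, 6.5 K/km lapse rate to 12 km, isothermal 210 K above
    return np.where(z_km < 12.0, 288.0 - 6.5 * z_km, 210.0)

z = np.linspace(0.0, 60.0, 6001)          # height grid, km
H_abs = 8.0                               # absorber scale height, km (assumed)
sigma = 5.67e-8

def olr_and_z_eff(tau_total):
    tau = tau_total * np.exp(-z / H_abs)  # optical depth from height z up to space
    trans = np.exp(-tau)                  # transmittance from height z to space
    dtrans = np.diff(trans)               # layer emission weights for radiation reaching space
    z_mid = 0.5 * (z[:-1] + z[1:])
    B_mid = sigma * temperature(z_mid) ** 4          # flux-style blackbody stand-in
    olr = float(sigma * temperature(0.0) ** 4 * trans[0] + np.sum(B_mid * dtrans))
    weights = np.concatenate(([trans[0]], dtrans))   # surface term + layer terms
    heights = np.concatenate(([0.0], z_mid))
    z_eff = float(np.sum(weights * heights) / np.sum(weights))
    return olr, z_eff

prev = None
for tau_total in (0.5, 1, 2, 4, 8, 16, 32, 64):
    olr, z_eff = olr_and_z_eff(tau_total)
    note = "" if prev is None else f"  change per doubling: {olr - prev:+6.2f} W/m2"
    print(f"tau_total={tau_total:5.1f}  z_eff={z_eff:5.1f} km  OLR~{olr:6.1f} W/m2{note}")
    prev = olr
# The OLR change per doubling of the absorber shrinks once the effective emission
# altitude sits in the isothermal layer: the shift along a temperature gradient
# has saturated even though the optical thickness keeps growing.
```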
Patrick 027 says
Re 802 – easy to refute –
perhaps via “we know CO2 has to cause some warming and that there are feedbacks, understood in terms of established physics, and the issue is just quantification within remaining uncertainty, whereas your ‘it’s the Atlantic, it’s the Pacific’ seems like a postulation without explanation of cause versus effect (and if the explanation is that the ocean circulation changes are heating the surface, then why isn’t the ocean cooling, … )”
Any errors in that?
Radge Havers says
A while back RealClimate posted a piece called
Are geologists different?
Since weather forecasters seem to be the trusted celebrity faces of all things atmospheric, maybe RC should do something like “Are meteorologists different?”
Weather Forecasters on Global Warming
at dotearth.
Kevin McKinney says
Thanks, Patrick. By dint of very careful reading I can follow most of what you are saying.
It sounds as if the takeaway message for me would be that my summary statement is OK, but the statement about saturation is not so good. (That would seem to follow from your stating of the phenomenon of saturation in terms of the “shift along a temperature gradient.”)
Yes? If so, I’m going to strike the latter sentence!
Patrick 027 says
Re 808 Kevin McKinney (re 805 re 804) –
I actually glossed over a key aspect of the sentence you were most interested in:
“In both interpretations, this effect operates regardless of whether or not the principal wave band is “saturated,” that is, absorbs the maximum radiation possible.”
“absorbing the maximum radiation possible” – if this is taken to mean a transmission approaching zero (optical thickness approaching infinity), then that is actually a different category of saturation.
I would consider striking the word ‘regardless’ from the sentence, at least with respect to radiative flux changes via effective altitude shifts, for the surface and at the tropopause, since the effect of altitude shifting can saturate in those cases (approach a limiting altitude).
But it is true that those types of saturations do not necessarily occur simultaneously, just as saturation at the surface (effective altitude approaching the surface) can occur either ‘before’ (especially for water vapor) or ‘after’ (high clouds) saturation at the tropopause (high clouds wouldn’t saturate the LW opacity for downward radiation from above, but they could saturate that for upward radiation from below if high enough, so it wouldn’t be complete saturation by clouds alone, assuming clouds don’t actually straddle the tropopause). (‘Before’ and ‘after’ referring to increasing LW optical thickness.)
For example, any sufficiently thick cloud at any height, or sufficiently thick water vapor layer (would be found closer to the surface) would nearly eliminate LW transparency through the atmosphere, but could still leave much room for effective altitude shifting by other agents, depending on the location of the cloud or water vapor. Generally, saturation of effective altitude shifting should lag saturation of eliminating transparency for a thick layer.
Patrick 027 says
Re 791 Completely Fed Up – I had made a 1000-fold error, which I was specifically addressing in the comment you quoted. The NASA website says the average density of the photosphere is less than 1 millionth of a gram per cubic cm and that the photosphere is about 500 km thick. 1e-6 g/cm3 = 1e-6 kg/L = 1e-3 kg/m3 [where xe-y = x * 10^-y (it’s a standard notation… wait, maybe the e is supposed to be capitalized to avoid confusion with the number e; okay then:)]
1E-6 g/cm3 = 1E-6 kg/L = 1E-3 kg/m3
1E-3 kg/m3 * 500 km = 0.5 kg/m2
So, assuming the 500 km is not too imprecise relative to how far below 1e-3 kg/m3 the density actually is, the photosphere has less than about 0.5 kg/m2.
Regarding pressure,
based on:
Earth surface g = 9.81 m/s2,
Earth’s radius = 6371 km,
Sun’s radius = 695,500 km,
Sun’s mass = 333,000 times Earth’s mass
(therefore, Sun’s gravity at surface (photosphere) is about 27.9 times Earth’s, or about 274 m/s2; this shouldn’t change significantly through the depth of the photosphere, as it is a very small fraction of the radius of the Sun and contains a very small fraction of the Sun’s mass.)
(Interesting aside: Solar mass is considerably more concentrated toward the center than Earth’s.)
The pressure at the bottom of the photosphere contributed by the weight of the photosphere would be about less than 135 kPa, which is similar to Earth’s atmospheric pressure at sea level ( ~ 101 kPa average).
and:
R = 8.3143 J/(K mol) (well, close; I’ve seen some different values for the last digit or … two?)
H = R*T/(M*g), where H is the scale height (the height over which pressure decreases by a factor of e, assuming hydrostatic balance) and M is the molar mass
Using an isothermal approximation, and using the ‘nice round number’ of 1 g/mol for atomic H, with T = 5780, The pressure and density increase by a factor of 17.3 times from the top to the bottom of the photosphere for atomic H; if the density of a plasma (two particles for every ionized H atom) is half that of atomic H for the same temperature and pressure (I’m really not sure on that point, though**), then it would be a factor of 4.16. Thus, assuming whatever is above the photosphere is concentrated enough to be mostly under the same gravitational acceleration, the photosphere has about 3.16/4.16 to 16.3/17.3 of all the mass above the base of the photosphere – using an isothermal approximation. The inverses of those ratios are 1.32 and 1.06, which are the total mass above the base of the photosphere divided by the mass of the photosphere, in the hydrostatic approximation (obviously breaks down for solar wind, but I’d guess it might still work for a majority of the mass outside the photosphere) and assuming the mass above the photosphere is near enough to the photosphere for the same gravitational acceleration to apply (and therefore the same surface area – they are approx. inversely proportional with the vast majority of the mass below).
With the same assumptions (hydrostatic approx.) and the isothermal approximation, the pressure at the base of the photosphere is about less than between 1.06 and 1.32 times 137 kPa. Since the top of the photosphere is cooler, those ratios (1.06 and 1.32) should be overestimates, unless the temperature rises sharply enough above the photosphere (I haven’t mined the NASA website for all useful information).
But for what it’s worth, 137 kPa at the base of the photosphere and the isothermal approximation of 5780 K imply a density of 2.9 g/m3 at the base of the photosphere and 0.94 g/m3 photospheric average, for the case of atomic H; the first value would be half of that for the pure plasma (if I am not missing something relevant about plasmas, which is certainly possible), but the second value would be more than half for the pure plasma because the scale height would be larger (less variation of pressure and density with height). Anyway, these numbers are consistent with the order of magnitude of density implied by the website.
(Too OT? Well it is an application of some physics that is important to Earth’s atmosphere as well…)
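A quick numerical check of the arithmetic above (a sketch; it simply reuses the figures already quoted: roughly 500 kg/m2 of photosphere, 5780 K, and the isothermal hydrostatic approximation):

```python
import math

g_earth = 9.81          # m/s^2
r_earth = 6.371e6       # m
r_sun = 6.955e8         # m
mass_ratio = 333000.0   # Sun's mass / Earth's mass

# Solar surface gravity by scaling Earth's value
g_sun = g_earth * mass_ratio * (r_earth / r_sun) ** 2
print(f"g_sun ~ {g_sun:.0f} m/s2  ({g_sun / g_earth:.1f} times Earth's)")

# Column mass of the photosphere and the pressure contributed by its own weight
column_mass = 1e-3 * 500e3          # kg/m2: ~1e-3 kg/m3 average density over ~500 km
p_base = column_mass * g_sun
print(f"column mass ~ {column_mass:.0f} kg/m2, pressure from its weight ~ {p_base / 1e3:.0f} kPa")

# Isothermal scale heights and top-to-bottom pressure ratios across 500 km
R, T = 8.314, 5780.0
for molar_mass, label in ((1e-3, "atomic H"), (0.5e-3, "ionized H plasma (2 particles per atom)")):
    H = R * T / (molar_mass * g_sun)                 # scale height, m
    ratio = math.exp(500e3 / H)
    print(f"{label}: H ~ {H / 1e3:.0f} km, pressure ratio across photosphere ~ {ratio:.1f}")
# Prints roughly 274 m/s2, 137 kPa, and ratios of about 17.3 and 4.2,
# consistent with the figures quoted in the comment above.
```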
Patrick 027 says
“Anyway, these numbers are consistent with the order of magnitude of density implied by the website.”
Was that circular reasoning? The information used to calculate the pressure owing to the weight of the photosphere was independent of temperature and molecular mass; however, the knowledge that this would be of the same order of magnitude as the total pressure was based on the scale height, which was based on those variables. So of course the average density calculated from the gas laws will be of the same order of magnitude as the value used to find pressure (would be exactly the same if the weight of overlying material were included). So it was meaningless that the result was similar to the input (except as a check on my math).
Patrick 027 says
Re BobFJ “Was it not a “warmist” argument that 1998 was an outlier?”
Arguing that 1998 was an outlier relative to average years makes some sense (though outlier might be taken to imply that it should be ignored; that would not make sense; it really did happen). A ‘warmist’ in late 1998 might find it convenient to see it as the start of a dramatic new stage in the anthropogenic warming process, but where is such a ‘warmist’ to be found? Maybe some members of the public? I wouldn’t be surprised. But I don’t think you’d find many among scientists, including those who approved of the IPCC reports, etc.
Of course, there are other ways of looking at 1998:
While a single data point, it might contribute to statistics showing a trend among years with strong El Ninos. Imagine, more generally, separating the climate record into separate records with similar ENSO, SAM, NAM, NAO, perhaps even QBO, etc., indices/phases, and studying trends among them, correlations among the different trends between the different sets of time intervals, and also trends in the density of each set of intervals over time.
(PS The distinction between signal and noise depends on what you’re looking for. Noise is not just something that can obscure part of a climate signal; it is also a part of climate, and so one might find signals in the noise (changes in frequency or amplitude or shape, etc.).)
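As a sketch of what “separating the record by ENSO phase and looking at trends within each subset” could look like in practice (entirely synthetic data; the ±0.5 thresholds and the 0.012 K/yr underlying trend are arbitrary choices for illustration, not any published analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)

# Synthetic global-mean anomaly: a linear trend, an ENSO-linked term, and noise
enso = rng.normal(0.0, 1.0, years.size)              # stand-in ENSO index
temp = 0.012 * (years - years[0]) + 0.1 * enso + rng.normal(0.0, 0.08, years.size)

def trend_per_decade(x_years, y):
    return 10.0 * np.polyfit(x_years, y, 1)[0]

# Stratify years by ENSO phase and fit a trend within each subset
for label, mask in [("El Nino  (index > +0.5)", enso > 0.5),
                    ("neutral  (|index| <= 0.5)", np.abs(enso) <= 0.5),
                    ("La Nina  (index < -0.5)", enso < -0.5),
                    ("all years", np.ones_like(enso, dtype=bool))]:
    print(f"{label:28s} n={mask.sum():3d}  trend = {trend_per_decade(years[mask], temp[mask]):+.3f} K/decade")
# Each subset recovers roughly the same underlying trend, with the ENSO-linked
# scatter largely removed within each phase.
```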
Kevin McKinney says
Patrick, thanks for the further clarification. I think my second sentence needs to go in its entirety; it’s not really needed, and if the issue can’t be sufficiently clarified in a brief fashion, I’m better not raising it.
It’s been interesting, though, to learn a bit more about the different senses in which this apparently innocuous term “saturation” can be used. I wasn’t attaching the term to the altitude-shifting phenomenon. Is that a “normal” usage of the word?
In the Koch-Angstrom experiment, it was transmissivity that was at issue. But I thought that in that case, Koch was unable to drive transmissivity to zero? This issue goes to the heart of my struggles with the concept of saturation: what determines “the maximum possible” absorption? And when we say “maximum possible,” what circumstantial limits are required to obtain a valid description?
Kevin McKinney says
Edit done, Patrick; thanks again.
For those who missed it the first time, this is about my life-times-and-work article on Guy Callendar, the man who brought CO2 climate theory into the 20th century.
You can check it out here:
http://hubpages.com/hub/Global-Warming-Science-And-The-Wars
BobFJ says
Patrick 027 Reur 813:
The wording in my 805 on which you are commenting was a bit screwed-up, but I agree with all you say. Whilst there may well be some noise within the 1998 temperature value, I can’t see any justification for treating it as an outlier, because as you say, it was a real event of proven consequences. Nevertheless, I have seen some studies where temperature plots are “corrected” by “removing” ENSO. (Including, I seem to remember, an Oz study by the CSIRO or BMO a few years ago.)
Patrick 027 says
Re 811 myself – I made the mistake again! This time it was just a typo, though; I used the correct value (500 kg/m2) in calculating pressure…
Doug Bostrom says
2nd time a charm for CryoSat launch. Many interesting details here:
http://www.spaceflightnow.com/news/n1004/08cryosatlaunch/
(Launched from a classic Cold War silo)
Patrick 027 says
Re Kevin McKinney –
“It’s been interesting, though, to learn a bit more about the different senses in which this apparently innocuous term “saturation” can be used. I wasn’t attaching the term to the altitude-shifting phenomenon. Is that a “normal” usage of the word?”
I wouldn’t really know what the normal usage is, but given the concept of saturation, it should apply.
PS of course, while the effective altitude shifting effect can saturate as it approaches the tropopause, in terms of forcing, the tropopause can shift in a climate change, so the saturation after forcing is applied to a present climate is not the same as what exists after the climate response (I’m not saying this is a big effect, though; I haven’t myself seen it quantified; I’m guessing it’s small for a doubling of CO2, for example (I don’t think the tropopause rises so much as to halve the optical thickness of the CO2 in the stratosphere); but if a greenhouse gas with the same optical thickness over the whole LW range were added, the forcing would approach saturation (at the tropopause) but at the same point the climate sensitivity for that forcing would increase, because farther additions of the gas would still have effects, and the climate response would involve sufficient lifting of the tropopause to ‘unsaturate’ the LW opacity enough to have a net outward LW flux to balance the net inward SW flux (solar heating). (Perhaps the ultimate limit for greenhouse warming would be approached as temperatures get high enough in some sufficiently optically significant part of the climate system (not the thermosphere) for significant radiation to be emitted at the same wavelengths as solar heating.)) This gets into how radiative forcing and climate sensitivity can be different between a forced change and the reverse change, with the change in equilibrium climate being of the same magnitude in the absence of hysteresis.
PS II net LW fluxes from air to air – point of interest:
(wherein emission and absorption are part of the LW opacity, as opposed to pure scattering):
What’s interesting to note about a gas such as CO2, with it’s approximately triangular-shaped absorption spectrum when plotted in log(optical thickness)), is that to a first approximation, after the central part of the band is saturated with respect to distances considered, while forcing at the tropopause, and at the surface as conditions allow, and at the top of the atmosphere, continue to change with increasing CO2, changes in the direct air-to-air LW fluxes would be limited, except wherein there are overlaps with clouds or other gases. This is because the greatest net radiant fluxes occur when regions of different temperatures are optically thick enough (with enough of that being from absorption/emission, as opposed to just scattering (non-Raman/Compton)) to emit and absorb much of the fluxes that reach them, but also when the optical thickness over the distance from lower to higher temperatures is small enough for the radiation fluxes to reach across the distance. Higher opacity reduces the net LW fluxes by blocking photons from reaching across larger temperature variations, while small opacity (with less optical thickness from emission/absorption) makes the temperature variations more translarger temperature variations. For CO2, once the central part of the band is saturated (for a given spatial distance), the intervals of intermediate LW optical thickness (for a given spatial distance) stay approximately the same width as CO2 is added. Aside from deviations from the approximation of the CO2 spectrum as triangular in log(optical thickness), and aside from variations in blackbody intensity as the intervals shift outward from the center of the band, the net LW fluxes (for temperature variations on a given spatial scale) from air to air would not change from absorption and emission by CO2 alone. But adding CO2 can change the net fluxes among layers of water vapor (where the absorption spectrum overlaps significantly- not so much in the stratosphere, or, I think, the upper troposphere) and clouds and the surface and space or between any of those and clear dry air.
—-
“and if the issue can’t be sufficiently clarified in a brief fashion”
A very helpful concept is a weighting function. At any location (and time) P, frequency, polarization (where important), and in any given direction Q, the intensity of radiation coming from some direction can be attributed to a source that has some distribution over space (if only emission and absorption occur, then the distribution is along a single path (possibly curving due to refraction); if reflection occurs, it may be along a bent or branching path, if scattering occurs, it may fill a volume of space; but where emission cross section density approaches zero, the density of the source goes to zero, and the source is projected out of such areas into where the emission cross section density is found (in directions depending on scattering and reflection). The emission weighting function** is a distribution, that when multiplied by the black body intensity as a function of local temperature (assuming LTE) and then integrated over space, is equal to the intensity of radiation reaching P from Q. In LTE (and for an approximation assuming conditions change slowly relative to photon travel times), the emission weighting function for radiation at P from Q is equal to the absorption weighting function at P for radiation going toward Q. The radiation at P from Q is emitted from one weighting function and absorbed by another, and the radiation in the opposite direction comes from the other weighting function and is absorbed by the first, with emission as a function of temperature, so the net intensity depends on the temperatures of the pair of weighting functions. This can all be integrated over directions (See http://chriscolose.wordpress.com/2010/02/18/greenhouse-effect-revisited/#comment-2058 ) to give the fluxes and net flux per unit area across some defined surface at a location and the weighting functions for that. For a given density of absorption cross section, increasing the scattering cross section density concentrates the weighting functions toward P (and can make the weighting functions wrap around P and overlap), or if the absorption cross section density is zero around P, scattering can project the weighting functions to absorbing material on all sides (so they still can overlap); for a given density of scattering cross section density, increasing the absorption cross section density concentrates the weighting functions toward P. Thus in general, increasing opacity brings the weighting functions closer to P, and if already largely concentrated into a space with some overall spatial tendency in temperature, will bring the pairs of weighting functions closer to being isothermal, thus reducing net LW intensities and fluxes.
** I think a weighting function might (??) also be defined as referring just to emission of photons that reach a point without any scattering in between, but that’s not the relevant picture in this discussion.
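For the simplest possible case (a plane-parallel, purely absorbing, non-scattering medium viewed along one direction from P), the weighting-function idea reduces to a very short calculation; a sketch with made-up optical properties:

```python
import numpy as np

z = np.linspace(0.0, 20.0, 2001)     # distance from the observation point P (arbitrary units)
dz = z[1] - z[0]

# Looking out from P through an absorber of uniform cross-section density k,
# the intensity arriving at P is the integral of W(z) * B(T(z)) dz, with
# W(z) = k * exp(-k*z)  (plus any boundary term at the far side).
for k in (0.05, 0.2, 1.0):                   # increasing opacity
    W = k * np.exp(-k * z)                   # emission weighting function as seen from P
    mean_distance = np.sum(z * W) / np.sum(W)
    print(f"k={k:4.2f}  weighting-function mean distance from P = {mean_distance:5.2f}")
# Increasing opacity pulls the weighting function in toward P, so the pair of
# weighting functions on either side of a surface become closer to isothermal
# and the net intensity across that surface shrinks.
```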
BobFJ says
CTG Reur rant @ 800:
1) Yes, I misunderstood CFU, and have apologised to him for my mistake.
2) To back up CFU’s and your claim: do you have any reference to confirm that in 1999 a sceptic or sceptics (or even a denialist, to use your term) claimed that 1998 was an outlier?
3) So, as time goes by, and as new data evolves over a decade, sceptics are not allowed to revise their hypotheses or opinions? (but “warmists” are allowed to say whatever they like?)
4) Yes, it would be nice to be confident of any statistical inferences, but never mind, there have been several studies attempting to “correct” temperature records by “removing” ENSO, including this one.
5) So where’s your plateau, you (CTG) ask? You could perhaps consider this.
6) Well actually I’m aware of one sceptic that has talked of showing the decline from 2001, simply to appease objections from “warmists” that it is “unfair” to start at 1998 because it is an outlier. (Patrick 027 disputes that it is an outlier BTW)
7) 1998 is the El Nino high, and 1999 + 2000 are the “corrections” that are part of this ENSO loop. Thus 1999 + 2000 are NOT inconvenient as you assert.
BobFJ says
CTG, further to my comment which is currently in moderation, located below the current 818:
I should add that I think that the suggestion of a 15-year plateau by some, and what Phil Jones has described as statistically insignificant warming, or something like that, is a bit of a stretch.
However, I believe that a plateau can clearly be seen starting at 1998, which, together with 1999 + 2000, is probably mostly a real part of the ENSO cycle with a low noise contribution. The three years seem to be a compelling combination resulting from ocean turn-over.
Patrick 027: What do you think?
Patrick 027 says
Re 821 BobFJ –
On the ‘outlier’ issue:
I need to review the definition of outlier again:
http://en.wikipedia.org/wiki/Outlier
(at least the part I read seems about right).
The 1998 El Nino may have been an outlier in the sense that it was extreme relative to the average and median temperatures for a small-enough time period centered there, or an extreme if temperatures are plotted with a longer-term trend subtracted from the data (though in that perspective, I’m not sure if it wouldn’t be surpassed by a few other years depending on the length of time considered… – one thing I remember about 1998 was that El Nino had three peaks, whereas no other recorded El Nino did, but how far back does such detailed recording go?).
But it wasn’t measurement error. It was real.
If this instance of internal variability was of a magnitude that has low-enough average frequency, then one could argue that including this data point (although with some cancelation by any subsequent ocean cooling attributed to it**) would skew the overall picture in some way for a time period that is not long enough; in that way real data can contribute to ‘measurement error’ in the sense of measuring the tendencies of the system (statistical uncertainties in identifying signals immersed in noise).
But so far as I know, the 1998 El Nino is not such a huge obstacle to figuring out the forced climate signal.
** I agree that in general, surface temperature changes caused by vertical heat redistributions in the oceans should tend to (in the absence of some idiosyncrasy) cause a net heat gain or loss via changes in LW emission, so that surface+troposphere temperature changes caused by such oceanic heat redistributions should tend to cause a climate response in the opposite direction. This effect will be weaker if the climate sensitivity is higher (in the absence of some consequential idiosyncrasy in the pattern of internal variability changes, relative to the pattern of a forced response). But when the oceanic circulation returns to an average condition (as it is likely to do if it is internal variability), the climate will tend to respond again to return to the same equilibrium, gaining back heat or losing the heat just gained.
Not having yet done the math, I wouldn’t expect the time-integrated cool anomaly of the response to such internal variability to be greater than the time-integrated warm anomaly that caused it (both of those anomalies being part of the internal variability).
Patrick 027 says
“I wouldn’t expect the time-integrated cool anomaly of the response to such internal variability to be greater than the time-integrated warm anomaly ”
… unless there is some idiosyncrasy that makes it so.
Patrick 027 says
RE my 819:
“A very helpful concept is a weighting function. At any location (and time) P, frequency, polarization (where important), and in any given direction Q,” … “The emission weighting function** is a distribution, that when multiplied by the black body intensity as a function of local temperature (assuming LTE) and then integrated over space, is equal to the intensity of radiation reaching P from Q. In LTE (and for an approximation assuming conditions change slowly relative to photon travel times), the emission weighting function for radiation at P from Q is equal to the absorption weighting function at P for radiation going toward Q. The radiation at P from Q is emitted from one weighting function and absorbed by another, and the radiation in the opposite direction comes from the other weighting function and is absorbed by the first, with emission as a function of temperature, so the net intensity depends on the temperatures of the pair of weighting functions. This can all be integrated over directions” … “to give the fluxes and net flux per unit area across some defined surface at a location and the weighting functions for that.”
1. This description assumes the real component of the index of refraction n = 1. Variations in n affect radiation in what can be considered two ways (that are linked):
A. they bend the paths the radiation takes, thus affecting the weighting functions. a weighting function will tend to be compressed into regions of higher n and dispersed out of regions of lower n, which is related to – and includes the effects of – total internal reflection, and is related to the following point:
B. refraction compresses rays into a smaller solid angle (concentrating the intensity) as n increases, and does the opposite as n decreases (the intensity is proportional to the square of n, at least if n does not vary over direction at a given location (it might be the same even when n is not isotropic, but I’m not sure)). Related to that, actual blackbody intensity is a function of n, so that, in the absence of any optical thickness or partial reflection outside a blackbody, the intensity of blackbody radiation coming from different blackbodies at different n will be the same when it reaches the same n.
It isn’t necessary to consider the value of n over the weighting functions once the weighting functions are established; the change in blackbody emission and refraction are accounted for by the effect of refraction on the weighting function; so one can use the n=1 value of Ibb to first compute the intensities at P back and forth in direction Q, and then multiply that by n^2 for the value of n at P (or some other procedure if anisotropy in n affects the relationship?) to find the actual intensities, and then integrate over direction to find a flux per unit area across a surface at P; if n is isotropic then the integration can be done first and the results can be multiplied by n^2.
2. This is how weighting functions determine fluxes and intensities across a location. The locations that are part of one weighting function will also be part of another weighting function for the fluxes and intensities at a different location.
An alternative perspective can be gained by considering the density of the emission weighting function for the intensity reaching point P1 from direction Q1, at some frequency and polarization, that comes from a point P2.
Further, one can consider the portion of that density that leaves P2 in some direction Q2 with some polarization – let’s call this E1.
And one can consider the emission weighting function density E2 for radiation going in the opposite direction from P1 in direction Q1 with the polarization considered at P1 and reaching P2 from Q2 with the polarization considered at P2. Let n1 and n2 be the real components of the index of refraction at P1 and P2. f(n) = n^2 if n is isotropic; it might be the same if n is anisotropic but I’m not sure.
For LTE, the relationship for emission in one direction and absorption from that direction requires that the intensity (per unit volume at P1 per unit volume at P2, thus in units of W/m2 per unit spectrum (and per unit of range of polarizations) per steradian (unit of solid angle) per unit volume^2) emitted from P2 and absorbed by P1 is E1*E2*Ibb(P2)*f(nx), and the intensity in the opposite direction absorbed at P2 is E1*E2*Ibb(P1)*f(nx), where Ibb(P) is the blackbody intensity (for index of refraction = 1) for conditions at P, and nx is the n at the point Px where the intensity is measured. Thus the net intensity per unit volume at P1 per unit volume at P2 is proportional to the difference between Ibb(P1) and Ibb(P2), and is positive in the direction from warmer to colder temperature.
This can be integrated over polarizations and directions at P2, then over polarizations and directions at P1, and then over a volume V2 at P2, and then over a volume V1 at P1 (or the same steps in any order), to give the fluxes between V1 and V2 and the net flux from V1 to V2 (units of W per unit frequency, not W/m2 or etc.) And then integrated over frequency, of course.
This perspective shows the net flux is from a higher to lower temperature for LTE conditions.
3. Some stuff I don’t know so much about:
It occurs to me that even without Raman and Compton scattering, and stimulated emission, fluorescence (non-LTE), etc., one could still have some Doppler shifting of photons scattering off of moving particles, so the frequencies would still be spread out a little between emission and absorption (though Doppler shifting of photons during scattering would require some transfer of energy between the photon and the scatterer, so this is really analogous to Compton scattering, although so far as I know Compton scattering itself is only between photons and electrons (or charged particles in general?)). It’s possible I could be completely wrong about this, though (Doppler shifting by non-Raman, non-Compton scattering), but just in case… Including all these effects (and any other relativistic effects) would change the mathematics. Fortunately these are relatively small effects in the Earth’s atmosphere (except maybe Doppler shifting during scattering might be on the order of the Doppler contribution to absorption and emission line broadening, though pressure broadening dominates line broadening for at least some portion of the atmosphere). I think it will still be the case that net radiant fluxes will be from higher to lower temperature – if the energy absorbed and then emitted without thermalization is still considered to be on its way… (and allowing different populations of particles in the same volume to have different temperatures). (Photons Doppler-shifted in scattering would spread into different frequencies presumably to the same extent that photons from those frequencies spread into the first frequency, so …)
BobFJ says
Patrick 027, thanks your 822/823
I’ve always wondered how ENSO can be measured or given indices that really mean anything because the dynamics of the SST’s are surely highly variable both spatially and temporally. Is a short high intensity phase more significant than a long duration lower intensity phase, and what about surface area etc? Other factors such as wind patterns in both hot and cold phases must also be important. I would also question the reliability of the data published by NOAA that go back some 60 years.
The thermodynamics are also extremely complex. Apart from spatial issues, the SST’s are diurnally more stable than the air T’s, and heat transfer rates via conductive advection/convection thus depend on the “weather” and time of day. There is also highly variable cooling from evaporation, which Trenberth gives as the greatest global average HEAT loss process. And, as you say, there is long-wave EMR. How pray can all this stuff be weighted or given a calibration to global average T’s in each hemisphere?
This NOAA Multivariate ENSO index graph is interesting;
Here is some of the text qualifying it, my bold added:
“El Niño/Southern Oscillation (ENSO) is the most important coupled ocean-atmosphere phenomenon to cause global climate variability on interannual time scales. Here we attempt to monitor ENSO by basing the Multivariate ENSO Index (MEI) on the six main observed variables over the tropical Pacific. These six variables are: sea-level pressure (P), zonal (U) and meridional (V) components of the surface wind, sea surface temperature (S), surface air temperature (A), and total cloudiness fraction of the sky (C). These observations have been collected and published in COADS for many years. The MEI is computed separately for each of twelve sliding bi-monthly seasons…”
Hmmmm…. That should keep some academics busy for a while! …. And going back 60 years?
David B Benson , Reur 783:
Thanks for that, but maybe the above could be of interest to you? I’m beginning to understand your position. BTW, the graph I cited of AMO + PDO, I now think is crap, and perhaps you were being polite.
David B. Benson says
BobFJ (825) — Since I use decadal averages, ENSO disappears except to the extent it contributes to the AMO index.
And yes, I try to be polite, but to a small extent the PDO will also contribute to the AMO index and with R^2=0.991 and no significant autocorrelation, there isn’t anything left to explain.
Patrick 027 says
Re BobFJ
I think one measure of ENSO is the difference in sea level pressure between two locations. Various other indices of low-frequency internal variability modes have been defined in that type of way.
Analysis to look for patterns in the fluctuations can identify ‘Empirical Orthogonal Functions’ (I think that’s the term), which would describe patterns more completely (I don’t know the details of this, but it makes intuitive sense that such modes could be identified).
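For what it’s worth, that pattern-identification step is usually done with Empirical Orthogonal Functions (EOFs), which in practice is just an SVD of the anomaly field; a bare-bones sketch with random stand-in data (no real SST or pressure field is used here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_space = 600, 400                      # months x grid points (stand-in sizes)

# Fake anomaly field: one coherent, slowly varying pattern buried in noise
pattern = rng.normal(size=n_space)
pc_true = np.sin(np.linspace(0.0, 60.0, n_time))    # stand-in "index" of the mode
field = np.outer(pc_true, pattern) + rng.normal(scale=2.0, size=(n_time, n_space))

anom = field - field.mean(axis=0)               # remove the time-mean at each point
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

explained = s ** 2 / np.sum(s ** 2)
eof1 = Vt[0]                 # leading spatial pattern (EOF 1)
pc1 = U[:, 0] * s[0]         # its time series (principal component), an index of that mode
print("variance explained by leading modes:", np.round(explained[:3], 3))
# The sign of an EOF/PC pair is arbitrary, so compare via the absolute correlation
print("abs correlation of PC1 with the true index:",
      round(abs(float(np.corrcoef(pc1, pc_true)[0, 1])), 2))
```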
– “Is a short high intensity phase more significant than a long duration lower intensity phase,”
Just to be clear, the cool anomaly that could be expected to follow a warm anomaly driven by oceanic heat rearrangement is not itself just the opposite phase of such a fluctuation. If an El Nino, by hiding cooler water beneath the surface, results in a surface+tropospheric temperature anomaly that, being higher than forced equilibrium, results in a radiative imbalance that leads to cooling, ultimately removing heat from the oceans, so that after the El Nino the climate system has less heat than before and will tend to be cooler – this is not the same as a La Nina; the same process would, if the El Nino were permanent, remove heat from the climate system and tend to bring the global average surface temperature down; likewise, the global average surface+tropospheric temperature response to a permanent La Nina might fade over time as the climate system gains heat. (Absent internal variability, or averaged over it in the longer term, a radiative disequilibrium decays via temperature change toward equilibrium, being proportional to exp(-time/(climate sensitivity*heat capacity)) – but the spatial and temporal pattern associated with a particular shape of internal variability makes it less than obvious that the same climate sensitivity and heat capacity would apply to any particular mode of internal variability as would apply to any particular externally-forced changes, though it might be a good first assumption until established otherwise (whether or not it has been established otherwise, I’m not sure)).
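The exp(-time/(climate sensitivity*heat capacity)) statement can be made concrete with a one-box energy balance sketch; the numbers here (a sensitivity of 0.8 K per W/m2 and a 70 m ocean mixed layer) are illustrative assumptions of mine, not values taken from the comment:

```python
S = 0.8                      # climate sensitivity, K per (W/m2)   [assumed]
C = 4.2e6 * 70.0             # heat capacity of a 70 m mixed layer, J/(m2 K)   [assumed]
tau = S * C                  # e-folding time of a radiative disequilibrium, seconds
print("relaxation time ~ %.1f years" % (tau / 3.15e7))

# Relaxation of a 0.2 K warm anomaly created by an ocean heat rearrangement
# (no change in forcing), stepped forward with dT/dt = -T/(S*C)
dt = 86400.0 * 30            # one-month time steps
T = 0.2
for year in range(0, 11):
    print(f"year {year:2d}: anomaly = {T:+.3f} K")
    for _ in range(12):
        T += -T / (S * C) * dt
```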
Patrick 027 says
(PS some modes of internal variability might have intrinsically finite lifetimes (given constant everything else) – for example, QBO acts like an internal clock of the climate system, with a period generally over 2 years; it is driven by vertically propagating fluid-mechanic waves (in particular, equatorial Rossby-gravity and Kelvin waves), and the fluctuation continues even assuming the wave activity is produced at a constant rate.)
Patrick 027 says
Re my 824 about that last part:
At full (local) thermodynamic equilibrium, the photons would be in thermodynamic equilibrium with the non-photons – with all non-photons, requiring those to be in thermodynamic equilibrium amongst themselves.
In such a condition, fluorescence and phosphorescence and photons produced by chemical reactions, Raman scattering, Compton scattering, and any other scattering where photon energy is not conserved, and stimulated emission of radiation, all leave the photon intensity at the blackbody value for the temperature of the matter; emissivity = absorptivity in total including all contributions to emission and absorption.
When only the non-photon matter is in (quasi-)LTE, so that the absorption and emission of photons, in total or at particular frequencies (and/or polarizations) or by particular energy transitions, are not balanced, then processes that involve contribution to photon energy before thermalization of absorbed photon energy, or absorption before thermalization of the lack of energy that has been converted to photon energy, can allow absorptivity from a direction to be different from emissivity to a direction for the same frequency and polarization.
wayne davidson says
Can anyone explain or find a link showing water vapour to CO2 heat flux ratios between say , mid latitudes, and the Arctic?
I read a paper which shows this ratio in the Arctic being 1/3 CO2/H2O; I must presume that it is a simplification, since the long-night Arctic has very little water vapour, and CO2 must take more prominence in heat flux during that period. On the other hand, CO2 takes less prominence at lower latitudes, yet I read CO2 can be as much as 30% while water vapour is close to 70%.
I am a bit perplexed by the apparent zonal similarities. But this must be due to simplifications. Surely CO2 carries a greater influence during the Arctic long night.
[Response: You can look at the spatial distribution of the forcings here: http://data.giss.nasa.gov/efficacy/ look at the ‘Fa’ results for instance. The smaller forcing for CO2 near the pole is because of the drop off in upwelling LW (which is much smaller at the poles than it is in the subtropics). – gavin]
wayne davidson says
Much thanks Gavin. Adjusted forcing 5/4 * CO2 ratio between Arctic and mid-lats is about 1/1.3, while the total precipitable water ratio between the Arctic and mid-lats is often 1/5. I am not coming close to confirming the 1/3 CO2/water vapour heat flux ratio for the Arctic. It seems I am missing something, but I am at a loss to know why it’s 30% when it should be a bit bigger.
Tom S says
Here is something interesting:
“Citizen Audit” verifies IPCC peer review claims. As has been stated by the moderators, Working Group I comes out looking very good. II and III not so much.
Citizen Audit counts for AR4 (share and number of non-peer-reviewed references, out of total references):
Report overall: 30% not peer-reviewed (5,587 of 18,531 references)
Working Group 3: 57% not peer-reviewed (2,307 of 4,033 references)
Working Group 2: 34% not peer-reviewed (2,849 of 8,272 references)
Working Group 1: 7% not peer-reviewed (431 of 6,226 references)
http://www.noconsensus.org/ipcc-audit/findings-detailed.php
Before people have a bird, this does not say anything is wrong, but refutes the stated/implied claim that AR4 was totally peer reviewed.
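A quick check that the percentages and counts in the list above are internally consistent (the counts are simply copied from that list; nothing here validates the audit itself):

```python
counts = {                     # (non-peer-reviewed references, total references)
    "Report overall":  (5587, 18531),
    "Working Group 3": (2307, 4033),
    "Working Group 2": (2849, 8272),
    "Working Group 1": (431, 6226),
}
for name, (grey, total) in counts.items():
    print(f"{name:16s} {100.0 * grey / total:5.1f}% not peer-reviewed "
          f"({100.0 * (total - grey) / total:4.1f}% peer-reviewed)")
# Reproduces the quoted 30 / 57 / 34 / 7 percent figures to rounding.
```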
David B. Benson says
One aspect of climatological and other physical data is the rather mysterious
http://en.wikipedia.org/wiki/Pink_noise
observed in some spectral bands. Often, the lowest frequency responses are quite flat and then have that 1/f noise at the middle frequencies, bending to
http://en.wikipedia.org/wiki/Brownian_noise
at the highest frequencies.
The only mysterious part, at least for me, is the approximately 1/f noise in the midband, and some articles on 1/f noise seem to agree. The first thing to note is that the noise is 1/f in power and so 1/sqrt(f) in amplitude. The next thing to note is that there is often some spatial distribution aspect to the physical system, so that different responses occur both in time and space.
However, the Laplace transform system function for a leakage-free, noninductive infinite (linear) transmission line is proportional to
T(s) = 1/sqrt(s)
and with a forcing of cos(at) with angular frequency a, the steady-state response (once the transients have died away) is proportional to
(1/sqrt(a)) cos(at - pi/4)
so the amplitude goes as 1/sqrt(a) and the power as 1/a; hence the system has a pink noise spectrum when excited by white noise over the entire band in question.
So, rather approximately but still clearing some of the mystery of pink noise, the midband response is about as if it were a “perfect” coaxial cable, with analogous systems being
(1) conduction of heat through a slab of uniform thickness and
(2) diffusion of liquid in a homogeneous medium
both of which obviously are of climatological significance.
[Hope I didn’t make a mistake in the quite difficult determination of the inverse Laplace transform of (1/sqrt(s))(s/(s^2+a^2)).]
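A numerical version of the same point: filter white noise with an amplitude response proportional to 1/sqrt(f) (the frequency-domain analogue of the T(s) = 1/sqrt(s) transmission-line response above) and check that the power spectrum of the result falls off roughly as 1/f. Synthetic data only; this is just a consistency check on the argument:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2 ** 16
white = rng.normal(size=n)

# Shape the spectrum: multiply the FFT of white noise by |H(f)| = 1/sqrt(f)
spec = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1.0)
H = np.zeros_like(freqs)
H[1:] = 1.0 / np.sqrt(freqs[1:])          # leave the zero-frequency bin at zero
pink = np.fft.irfft(spec * H, n)

# Estimate the spectral slope of the result (should be close to -1 in power)
power = np.abs(np.fft.rfft(pink)) ** 2
mask = (freqs > 1e-3) & (freqs < 0.4)
slope = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)[0]
print("fitted power-spectrum slope ~", round(slope, 2))   # expect about -1
```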
Hank Roberts says
> refutes the stated/implied claim
As it’s been refuted for months, by the IPCC and those who looked it up.
Good summary here:
http://shewonk.wordpress.com/2010/01/26/mea-culpa-ipcc-and-gray-literature/
Brief quote:
“Critics were claiming that the IPCC disobeyed its own rules — and I chimed in, assuming wrongly that it was IPCC process to only use peer-reviewed literature.
So mea culpa. I take it back. The IPCC lead authors and reviewers were in their rights to use “gray” literature so long as they made sure to follow the procedures outlined in the IPCC annex for such use. Of course, they should always double check to make sure the non-peer reviewed sources are in fact good science.”
Completely Fed Up says
“Before people have a bird, this does not say anything is wrong, but refutes the stated/implied claim that AR4 was totally peer reviewed.”
Well, only because that claim is completely made up.
AR4 WGI is claimed to be totally peer reviewed.
Lets have a look at the titles, shall we:
Working Group I Report “The Physical Science Basis”
Working Group II Report “Impacts, Adaptation and Vulnerability”
Working Group III Report “Mitigation of Climate Change”
Now, what sort of science journal would have papers from “mitigation scientists”??? Would the sorts of engineers and sociologists who study the impacts of climate change be putting their findings in a science paper? After all, the DoD produced several reports for internal use on the subject and they would never be put in a science journal.
You’ve done a lot of research to find out how many papers have been reviewed in the AR4 process.
Please show where you did the research on the claim that the AR4 process is totally from peer review work.
ktkxbye
CM says
Re: AR4 non-peer-reviewed sources (Tom S., at #832)
The Noconsensus blog has put 43 volunteers (presumably “skeptics”) to work for five weeks counting peer-reviewed references in the IPCC Fourth Assessment Report (AR4). Their results are similar to the loose figures some of us discussed on this site a month ago, but the result of all their labors is to make the IPCC sources look more peer-reviewed than our estimates did.
On an earlier thread (now closed) we discussed Andreas Bjurström’s count of references in the Third Assessment Report (TAR) (posted at Pielke’s), and I posted some results from a simple, error-prone Perl-scripted count of journal articles in AR4 (“Comments on IPCC errors: facts and spin”, comments #594, 596, 598).
Both Andreas and I only sought to distinguish journal and non-journal references, which is not identical with peer-reviewed and non-peer-reviewed. In the following, however, I will assume that it’s close enough a proxy that direct comparisons are meaningful. I will also assume that the noconsensus count is largely correct (i.e., that the procedures they adopted prevented their loud and explicit anti-IPCC stance from biasing the results too much).
– Noconsensus found 93% peer-reviewed sources in WG1, more than my 90-92%, and far more than the 84% Andreas counted in the TAR. (I particularly underestimated chapters 5, 7, and 8.)
– Noconsensus found 66% peer-reviewed sources in WG2, toward the high end of my 61-67% estimate, and up from 59% in the TAR according to Andreas. I didn’t post results for single chapters except ch. 16 (which matched well), but the only chapter where noconsensus gives a value below my minimum estimate was ch. 9.
– Noconsensus found 43% peer-reviewed sources in WG3, up from 36% in TAR according to Andreas. My own estimates were too uncertain to be useful (34-69%) but encompass the noconsensus results (for each chapter).
I hope I don’t need to add that I disagree with the spin Noconsensus puts on their findings, and that the interesting questions are: are the sources right; when is “gray” literature necessary and appropriate; how should it be quality controlled; and to what extent have parts of the formally “gray” literature undergone quality control equivalent in practice to the peer review of scientific journals.
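For anyone curious what a “simple, error-prone” scripted count looks like, here is a rough Python analogue; the regex heuristic and the toy reference strings are mine (invented for illustration), not CM’s actual Perl script or real AR4 entries:

```python
import re

# Heuristic: treat a reference as "journal-like" if it contains a volume/page
# pattern such as ", 12, 345-367" or a DOI; everything else is counted as "other".
JOURNAL_HINT = re.compile(r"(,\s*\d{1,3}\s*(\(\d+\))?,\s*\d+)|(doi:)", re.IGNORECASE)

def classify(references):
    journal = sum(1 for ref in references if JOURNAL_HINT.search(ref))
    return journal, len(references) - journal

# Toy reference list (invented strings, for illustration only)
refs = [
    "Doe, J., and A. Smith, 2001: An example article. J. Example Sci., 12, 345-367.",
    "Example Agency, 2005: An example technical report. Agency Press.",
    "Roe, R., 2009: Another example article. Ex. Geophys. Lett., 36, L01001, doi:10.1000/example.",
    "Example NGO, 2006: An example assessment. NGO Publications, Example City.",
]
j, other = classify(refs)
print(f"journal-like: {j}, other: {other}  ({100.0 * j / len(refs):.0f}% journal-like)")
```

Real reference strings are much messier than this, which is exactly why such counts are a proxy for, and not a measurement of, peer review.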
Kevin McKinney says
#832–
Tom, I’m afraid that I can’t accept those numbers at face value. I have been fed so much hooha with a straight face for so long now that random “auditors” have no credibility whatsoever with me.
Just don’t trust them to tell the truth. Or even recognize it, for that matter.
Perhaps someone will take on the Augean task of auditing the auditors?
Ray Ladbury says
Tom S.,
It is perfectly reasonable for WG2 and WG3 to use so-called grey literature–in fact it is inevitable. There are few journals for looking at consequences of climate change and virtually none for looking at mitigation. These two groups of necessity must rely on government reports and analyses by NGOs. This is yet another red herring from Jeff Id.
The crucial thing for WG2 is to bound the potential consequences of the various threats they assess. The bound need not be the best–it need only exceed the actual consequences and be finite. One can always sharpen one’s pencil if that threat winds up driving risk. Grey literature is fine for this purpose.
For WG3, we are looking at mitigation and the costs and efficacy thereof. As this subject is still in its infancy, grey literature is inevitable.
Hank Roberts says
Pssst! CFU — if you invite people to support bogus claims, they will try.
If you instead calmly point to information refuting their notions, others will be glad for your intervention to dismiss rather than encourage the nonsense.
Just as a reminder how upset people can get about _other_ science questions:
http://scienceblogs.com/tetrapodzoology/2009/07/birds_cannot_be_dinosaurs.php#comment-1784640
One of the commenters there, on the question whether birds are dinosaurs:
“… calling anyone who voices disagreement stupid, ignorant, and a menace to society does nothing to help your cause and makes you look less intelligent than the person you are arguing against. I say this because people might want to think about this before launching into a diatribe against other people, thereby reinforcing the common fallacy that there is little difference between religious and scientific hardliners.”
CM says
Re: gray literature — just to add to what Ray said:
When the government wants reliable, up-to-date figures on how much different policies will cost, projections of how much the economy will grow, and so on, they get their stats and projections from the specialized national and international number-crunching agencies that exist for this purpose. In other words, they get them from what is called “gray literature” in IPCC talk.
Climate sceptics are claiming to be shocked that trillion-dollar economic decisions (as they like to put it) might not be based 100% on peer-reviewed journal articles. But trillion-dollar decisions are made all the time by, say, the U.S. government, balancing the federal budget and the national debt, based on “gray” reports from the Congressional Budget Office, the Department of the Treasury, the OECD, and so on. And you wouldn’t want it any other way. You really, really wouldn’t want the government to base economic policy only on what they can glean from journal articles written and refereed by ivory-tower professors. (Who would, in any case, probably be getting their data from … the CBO, the Treasury Dept., and so on.)
The IPCC reports are for policymakers, and the WG3 report is about policy options and about projecting how much carbon we’ll burn with various policy mixes under various scenarios for economic development, and so on. It’s not just that mitigation studies are in their infancy, it’s that they’re *inherently* reliant on “gray” sources.
Completely Fed Up says
“If you instead calmly point to information refuting their notions, others will be glad for your intervention to dismiss rather than encourage the nonsense.”
Pssst. Hank.
Doesn’t work.
See “Zombie Argument”/”Rebunked”/et al.
Doug Bostrom says
CM says: 16 April 2010 at 2:37 PM
Slightly amplifying that excellent set of remarks regarding so-called grey literature that is actually analysis done by governmental units: not only would we be foolish not to exploit all the number crunching done on behalf of the public by civil servants, but we’d probably also be annoyed if we found out all that effort was for naught. Why purchase a pair of reading glasses and then squint at a book because we refuse to wear our spectacles?
David B. Benson says
After some checking around the web regarding coaxial cable, yes, I did the integral properly in my just prior comment. From
Links between the Annual, Milankovitch, and Continuum of Temperature Variability
Peter Huybers & William Curry
we discover that between the millennial scale and the centennial scale the power spectrum of temperature data rolls off at around (1/f^1.6), and from the centennial scale to the decadal scale the roll-off is about (1/f^0.6). These are usually considered to be examples of
http://www.scholarpedia.org/article/1/f_noise
Witgren says
And even if 93% of the WG1 sources are peer reviewed, that doesn’t necessarily mean that 93% of the report is accurate and 7% isn’t – “gray” literature is by no means automatically incorrect, it just means it’s not peer-reviewed to the same standard as published scientific work.