Alert readers will have noticed the fewer-than-normal postings over the last couple of weeks. This is related mostly to pressures associated with real work (remember that we do have day jobs). In my case, it is because of the preparations for the next IPCC assessment and the need for our group to have a functioning and reasonably realistic climate model with which to start the new round of simulations. These all need to be up and running very quickly if we are going to make the early 2010 deadlines.
But, to be frank, there has been another reason. When we started this blog, there was a lot of ground to cover – how climate models worked, the difference between short term noise and long term signal, how the carbon cycle worked, connections between climate change and air quality, aerosol effects, the relevance of paleo-climate, the nature of rapid climate change etc. These things were/are fun to talk about and it was/is easy for us to share our enthusiasm for the science and, more importantly, the scientific process.
However, recently there has been more of a sense that the issues being discussed (in the media or online) have a bit of a groundhog day quality to them. The same nonsense, the same logical fallacies, the same confusions – all seem to be endlessly repeated. The same strawmen are being constructed and demolished as if they were part of a make-work scheme for the building industry attached to the stimulus proposal. Indeed, the enthusiastic recycling of talking points long thought to have been dead and buried has been given a huge boost by the publication of a new book by Ian Plimer who seems to have been collecting them for years. Given the number of simply made-up ‘facts’ in that tome, one soon realises that the concept of an objective reality against which one should measure claims and judge arguments is not something that is universally shared. This is troubling – and although there is certainly a role for some to point out the incoherence of such arguments (which in that case Tim Lambert and Ian Enting are doing very well), it isn’t something that requires much in the way of physical understanding or scientific background. (As an aside this is a good video description of the now-classic Dunning and Kruger papers on how the people who are most wrong are the least able to perceive it).
The Onion had a great piece last week that encapsulates the trajectory of these discussions very well. This will of course be familiar to anyone who has followed a comment thread too far into the weeds, and is one of the main reasons why people with actual, constructive things to add to a discourse get discouraged from wading into wikipedia, blogs or the media. One has to hope that there is the possibility of progress before one engages.
However there is still cause to engage – not out of the hope that the people who make idiotic statements can be educated – but because bystanders deserve to know where better information can be found. Still, it can sometimes be hard to find the enthusiasm. A case in point is a 100+ comment thread criticising my recent book in which it was clear that not a single critic had read a word of it (you can find the thread easily enough if you need to – it’s too stupid to link to). Not only had no-one read it, none of the commenters even seemed to think they needed to – most found it easier to imagine what was contained within and criticise that instead. It is vaguely amusing in a somewhat uncomfortable way.
Communicating with people who won’t open the book, read the blog post or watch the program because they already ‘know’ what must be in it, is tough and probably not worth one’s time. But communication in general is worthwhile and finding ways to get even a few people to turn the page and allow themselves to be engaged by what is actually a fantastic human and scientific story, is something worth a lot of our time.
Along those lines, Randy Olson (a scientist-turned-filmmaker-and-author) has a new book coming out called “Don’t Be Such a Scientist: Talking Substance in an Age of Style” which could potentially be a useful addition to that discussion. There is a nice post over at Chris Mooney’s blog here, though read Bob Grumbine’s comments as well. (For those of you unfamiliar with Bob’s name, he was one of the stalwarts of the Usenet sci.environment discussions back in the ‘old’ days, along with Michael Tobis, Eli Rabett and our own William Connolley. He too has his own blog now).
All of this is really just an introduction to these questions: What is it that you feel needs more explaining? What interesting bits of the science would you like to know more about? Is there really anything new under the contrarian sun that needs addressing? Let us know in the comments and we’ll take a look. Thanks.
dhogaza says
Twenty-five miles, actually
One hundred miles west of Wendover. You forget Wells, though, a mere 60 miles. Who can forget the rooftop of that old motel proclaiming “Donna’s Ranch”?
The Victorville Solar Plant’s footprint is about 7 square miles.
It would be a visual blight, for sure, but would provide power sufficient to light Wendover, Wells, Ely (something like 75 miles south of that photo), and Elko, and all of the local ranches combined.
An area of about 10,000 square miles.
BTW a nuke would be a visual blight, too … and you’d need water to run it.
Neither nearly as gross as Wendover, though.
manacker says
Jim Eaton
The Victorville 2 Hybrid Power Project (10 percent solar and 90 percent natural gas) is sort of like the 50/50 rabbit and horsemeat stew (1 rabbit to 1 horse).
Mark says
“Of course logarithms and exponentials are closely related, but there is a difference and it is usually necessary to know which is which.”
Aye, but it ISN’T a decreasing exponential, is it? And we can agree on that.
Deal?
Rod B says
Ray, you said (695),
after you told Peter Martin, “…the logarithmic approximation…gives reasonable (meaning maybe close enough but not numerically precise) results…”
all of which sounds like numerical precision (or maybe anything numerical) is not important as long as you have something qualitative. I wondered why this doesn’t also apply to the temperature increase as is commonly projected numerically at hundredths or at least tenths of degrees over the entire globe 10, 50, 100 years out — since this quantitative projection depends entirely on a precise quantitative projection of absorption about which you say, in essence, ‘who cares about that?’
As you know I for one have serious concerns with the numbers and precision used in forcing mathematics.
Mark says
“all of which sounds like numerical precision (or maybe anything numerical) is not important as long as you have something qualitative”
RodB, measuring temperature is ASSUMING a linear dependency on the expansion of alcohol in a tube to ambient temperature.
This is not true.
This is why thermometers are defined based on the temperature they are to measure to (this is called “metrology”). Within this range, they are assumed canonically true. OUTSIDE that range, not.
This would not be the case if the measured change were linearly dependent on temperature.
[edit]
(as is the filter here. I’ve put spaces in all words longer than 5 characters to get the bleeding thing to go in. What’s it complaining about?)
[Response: “ambien”-t. sorry about that. – gavin]
RichardC says
746 Peter said, ”
thickness of lagging (arb units).. 0, 1, 2, 3, 4
heat loss (arb units)…………… 16, 8, 4, 2, 1″
I may be thick, but if I have 1″ of insulation and it results in 8BTU of heat loss, I would expect 4″ of insulation to result in 2BTU of heat loss. Explain?
manacker says
Peter
An insulating material has a certain thermal conductivity.
The amount of heat transfer through this material is inversely proportional to:
a. The thickness
And directly proportional to:
b. The surface area
c. The temperature difference
Since b and c are constant, the heat transfer is inversely proportional to the thickness.
Maybe this is what you were saying, but it was not so clear.
Mark says
Flipping heck, Gavin.
Let’s hope spammers don’t start using climate in their spamming, eh?
Mind you, most of the spam I get is all random sections from books and no payload.
I wonder how they expect to sell anything when there’s no link, no product, just 30 plus random words whipped out in the email.
It’s not people buying stuff from it; it’s people buying the spammer’s work.
Patrick 027 says
Re 754 Rod B. – The terms linear and logarithmic describe a qualitative aspect – they describe the structure of an equation that can be used to approximate a real relationship over some range of values. The coefficients in that relationship can be further specified and the significance of the error in the approximation can be quantified.
As I understand it, more detailed parameterizations of radiant fluxes are used in climate models – these are also approximations, but they can be compared to the more precise line-by-line evaluations.
For a range of CO2 values that we are currently in, there is almost a 4 W/m2 (maybe around 3.7 W/m2 +/- something) tropopause level (after stratospheric adjustment) radiative forcing per doubling of CO2. I think this includes the relatively minor SW forcing.
—-
Peter Martin – did you see my response to you earlier?
BobFJ says
Ray Ladbury 748:
I must confess that I’ve not checked the literature lately, but my recollection is that band broadening is a minor consideration, and that it is an extremely complex matter in an air column in the atmosphere, as distinct from observations in a tube in the lab. Your NASA reference only serves to confirm that it is all too complicated to calculate realistically. Intuitively, I repeat/suggest that AOTBE, the contribution of band broadening alone might be about 1%. (= a guess)
You might be considering the possibility of “new” absorption lines appearing within the bands under some circumstances, but I would include the strengthening of existing weak lines under the total broadening effect, and of course, increased concentration of CO2 has this effect. In terms of the argument, it is the CO2 concentration consequence that should be considered, and not any change in atmospheric pressure or whatnot. There is a lot of other stuff going on such as the complicating water vapour absorption variation in the same bands, and I tossed-in a sort-of qualifier of AOTBE in my original 677 to imply this. (AOTBE= All other things being equal)
And, of course some things are more equal than others
PeterMartin says
RichardC,
You say “…..if I have 1″ of insulation and it results in 8BTU of heat loss, I would expect 4″ of insulation to result in 2BTU of heat loss. Explain?”
No, if you put 1″ of insulation over a hot surface the heat flow will, say, halve. Put on another 1″ and it halves again.
Another example would be the loss of a high-frequency signal in a coaxial cable. It halves (a 3dB loss) every N metres.
So yes, the answer is a geometric progression, which is exactly the same as saying that it is logarithmic, but you do use an exponential function rather than a logarithmic function for the calculation.
Ray,
The point I have been making is the opposite of what you suggest. I would like the mathematics to be precise as possible, and make some sense at the limits.
I’m just curious as to why the concentration of CO2 in the atmosphere isn’t analogous to the thickness of insulation we’ve discussed. BTW It wasn’t my choice of examples.
I’d just like to know over what region the logarithmic approximation holds. Someone suggested anything over 1ppmv of CO2. But I must say, I find it hard to believe that going from 1 to 2ppmv would have the same effect as from 280 to 560ppmv.
Hank Roberts says
Stefan, what happens next?
From this conference and its reports, to the upcoming December meeting?
BobFJ says
RichardC 756, I see that Peter Martin is persisting with his unique theory on heat transfer via conduction, which BTW seems to be a red herring morph away from his original “concerns” that were also shaky/puzzling to the audience here.
You wrote: “I may be thick, but if I have 1″ of insulation and it results in 8BTU of heat loss, I would expect 4″ of insulation to result in 2BTU of heat loss. Explain?”
I also see that in his 761, Pete persists in his argument, in which he is WRONG (and you are correct). Here is one example of the accepted explanation of the process: (scroll down to Conduction)
http://sol.sci.uop.edu/~jfalward/heattransfer/heattransfer.html
Here again is his allegation, to which I’ve added a third line according to the actual science
thickness of lagging (arb units): ................................ 0, 1, 2, 3, 4
heat loss according to Pete (arb units): ......................... 16, 8, 4, 2, 1?
heat loss according to the thermodynamics of conduction alone: ... -, 8, 4, 2.67, 2
(AOTBE, heat loss is inversely proportional to thickness of insulation)
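For anyone who wants to check the two rows above, here is a minimal Python sketch of both models, normalised so they agree at one unit of lagging; the starting values are just the arbitrary units from the table.

```python
# Minimal sketch contrasting the two heat-loss models tabulated above.
# Assumes fixed conductivity, area and temperature difference; both
# models are normalised to 8 units of loss at thickness d = 1.

def loss_conduction(d, q1=8.0):
    # Fourier conduction: Q proportional to 1/d. Undefined at d = 0,
    # where other transfer modes (radiation, convection) dominate.
    return q1 / d

def loss_geometric(d, q0=16.0):
    # Pete's model: loss halves for each added unit of lagging.
    return q0 * 0.5 ** d

for d in (1, 2, 3, 4):
    print(f"d={d}: conduction {loss_conduction(d):5.2f}   geometric {loss_geometric(d):5.2f}")
# d=1: 8.00 vs 8.00; d=2: 4.00 vs 4.00; d=3: 2.67 vs 2.00; d=4: 2.00 vs 1.00
```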
Patrick 027 says
Re 760 – are we discussing line broadening?
The basic shape of the CO2 spectrum regarding its most important effects (it absorbs in some other wavelength bands with generally less effect) is a multitude of absorption lines whose strength generally decays – I think roughly exponentially – away from 15 microns; thus, with any doubling of CO2, the wavelength interval encompassing lines surpassing some strength will widen outward from the 15 micron center. Without line broadening, those lines would correspond to absorption and emission over an infinitesimal fraction of the spectrum – thus overall the atmosphere would be nearly transparent.
There is some broadening due to quantum uncertainty. There is Doppler broadening, due to random molecular motions – this will be stronger at higher temperatures. There is pressure broadening, or collisional broadening, which is stronger at higher pressure. The strength of the lines also varies with temperature, but as far as I am aware, not so much as to be a significant feedback to climate change relative to the forcing. All these effects, or at least the line broadening effects, do vary significantly over vertical distance in the atmosphere, and it is necessary to take them into account when evaluating radiative fluxes in a precise manner, but at any given vertical level, so far as I know, the radiative feedbacks from line strength and line broadening changes are quite a bit smaller compared to the radiative forcing that would produce a climate change.
For CO2, line broadening reshapes the absorption spectrum from infinitesimal line widths to a series of peaks in absorption at line centers and minima in between lines, but with the general trend of decay outward from 15 microns applying to both the peaks and the minima in between. Thus, the width of the band of wavelengths in which some level of opacity is exceeded will have ‘fuzzy’ edges, where, going outward from 15 microns, there is some wavelength interval between the first minimum that falls below a level of opacity and the first line center that falls below the same level. But both the range of line centers exceeding some level of opacity and the range of minima exceeding the same level will widen by some amount with an increase in CO2 concentration.
In clear dry air, this widening means that increasing CO2 concentration blocks more radiation from the warmer surface from reaching space; the total wavelength interval in which more than some fraction of radiation from below is replaced by a generally smaller radiative flux from the generally cooler CO2 increases. Because significant air-to-air net radiant energy transfer depends both on sufficient absorption and emission within the air and on sufficient transmission across distances over significant temperature variations, net air-to-air radiant energy transfer tends to occur most in wavelength intervals of moderate or intermediate opacity. Once the central portion of the CO2 band is sufficiently saturated (so there is little air-to-air net transfer near 15 microns), the interval of intermediate opacities shifts position in the spectrum but does not change size much, so increasing CO2 beyond the point of significant saturation near 15 microns does not have much effect on net air-to-air radiant energy transfers.
When there are other agents in the air that can emit or absorb at the same wavelengths – such as water vapor and clouds – then the effects are altered – increasing CO2 can reduce net fluxes where water vapor or clouds contribute absorption and emission. Thus, additional CO2, even when the central portion of the band near 15 microns is saturated, will tend to decrease net radiant fluxes not just between the surface and space and from the surface to the air (when the temperature of the air is more similar to that of the surface going toward the surface) and from the air to space (except in the stratosphere and above, since the temperature is more different from space’s brightness temperature when going toward space within the stratosphere), but also between the air and clouds, between cloud layers, between clouds and the surface, between clouds and space, between humid air masses and other air masses, clouds, the surface, and space, etc.
At the same time, water vapor and clouds’ spectral overlap with CO2 reduces the effect of additional CO2 – if a cloud already blocks radiation from below and is higher up in the troposphere, and thus colder, additional CO2 just above the cloud at a similar temperature will not have much more effect on upward LW fluxes just above the cloud (LW radiation is at wavelengths longer than about 4 microns, whereas SW radiation is at shorter wavelengths; SW radiation is essentially all solar radiation, while LW radiation is mostly emitted by the Earth and atmosphere).
Because water vapor concentration increases downward within the troposphere, the reduction of the effect of CO2 by water vapor will be more significant at the surface; the CO2 in the upper troposphere hides the water vapor below it within a significant range of wavelengths around 15 microns.
Of course, all of this has to be weighted by the blackbody radiant intensity at each wavelength. Blackbody radiation intensity varies slowly enough relative to spectral features of CO2 such that the effect of doubling CO2 is qualitatively similar to what it would be if blackbody radiant intensity were independent of wavelength within the LW portion of the spectrum. For the range of temperatures found on the surface and within most of the atmosphere’s mass (not including the thermosphere – very little mass and very little opacity at most wavelengths), the wavelength of greatest blackbody radiant intensity per unit wavelength interval ranges from near 10 microns (which the short wavelength edge of significant CO2 absorption will go toward with increasing CO2) to near 15 microns (near the center of the dominant CO2 absorption band).
CO2 absorption is presently significant between about 12 and 18 microns. This range encompasses roughly 30 % of the radiant energy flux for temperatures typical of most of the surface, troposphere, stratosphere, and mesosphere.
Between the tropopause and space, water vapor is nearly transparent between roughly 7 microns and 25 to 30 microns, while CO2 absorption is still large near 15 microns. For the total atmosphere between the surface and space, water vapor opacity is small between about 8 and 12 microns, except in very humid air masses near the surface (where new absorption lines may emerge from interactions among water vapor molecules) – at a given relative humidity, water vapor feedback will reduce the transmissivity of the atmosphere to near zero in the whole LW portion of the spectrum with increasing temperatures – the effect (radiative feedback at the surface per unit temperature increase) is especially strong near 300 K; but additional CO2 will still reduce the net upward LW flux at the tropopause by blocking radiation from the humid air masses below.
Low level clouds have less effect on the tropopause level LW flux than upper level clouds because low level cloud tops tend to be more similar in temperature to the surface below and thus do not reduce the upward LW flux as much; their effect is also reduced by the CO2, H2O, and any clouds found at higher levels.
This all also depends on the vertical temperature profile. The general tendency is for temperature to decrease within the troposphere, in a pattern shaped by the moist adiabatic lapse rate (because radiative fluxes by themselves would make the lower atmosphere unstable to convection; convection couples the temperature at different positions so that they tend to shift together – with some exceptions and variations from that pattern, such as where regional conditions make the atmosphere stable to convection – high latitudes in winter, low-level inversions caused by strong surface cooling at night, etc.), and to increase in the stratosphere. But there are latitudinal, regional, seasonal, and diurnal variations in that pattern (and variations on timescales from day-to-day weather to ENSO, etc.). As I understand it, a climate model can numerically evaluate all those effects (and the effects of the spatial and temporal cloud and humidity variations, etc.).
Please see my other radiation comments above and references therein.
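Patrick’s “roughly 30%” figure above is easy to check with a short numeric integration of the Planck function. A sketch in Python, assuming a representative surface temperature of 288 K (my choice of value, not stated in the comment):

```python
import numpy as np

# Numeric check of the claim above: the 12-18 micron interval carries
# roughly 30% of blackbody emission at Earth-like temperatures.
# T = 288 K is an assumed, typical surface value.

h, c, k, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def planck_emissive_power(lam, T):
    # Hemispheric blackbody emissive power per unit wavelength, W m^-2 m^-1.
    return 2 * np.pi * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

T = 288.0
lam = np.linspace(12e-6, 18e-6, 2000)               # 12-18 microns, in metres
band = np.trapz(planck_emissive_power(lam, T), lam)
print(band / (sigma * T**4))                        # ~0.29 at 288 K
```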
PeterMartin says
Patrick,
I did see your and other responses. I’m not disagreeing, in essence, with what is generally accepted but I still don’t understand WHY there is a general reluctance to quantify the logarithmic relationship between CO2 and temperature.
However if it was explained like this I would have no problem:
A good (but not perfect) analogy for the absorption of IR radiation in the atmosphere would be the absorption of RF power in a coaxial cable, or the screw-in coaxial attenuators that can be added to the cable with the purpose of achieving a known power reduction: 1dB for 10% absorption (approx), 3dB for 50%, 6dB for 75%, 10dB for 90%, etc. Meaning that 10% is passed in transmission through a 10dB attenuator.
These are indeed logarithmic in transmission. Screwing three together, say 1dB + 3dB + 6dB = 10dB, means that, again, 10% of the power passes in transmission; 90% is absorbed.
But what about the absorbed power? Is that logarithmic too?
No it isn’t. Consider a single 0.5dB coaxial attenuator which absorbs approximately 5% of the incident power. If the power is quite high then the attenuator will become noticeably warmer. Then we add another one. That absorbs 5% of 95%, i.e. 4.75%. So it gets almost as warm too.
However, if we add another 4 x 0.5dB attenuators the total attenuation will be 3dB, meaning that the signal level has halved. Consequently the sixth attenuator will only warm by approximately half as much as the first one.
If 20 attenuators are connected in line the total attenuation will be 10dB and so the last ones will warm only very slightly.
So in effect there are three regions. A linear region for very low absorption. A logarithmic-looking region for intermediate absorption. And a saturated region for high absorption.
Now I’m not saying that the levels of CO2 are in any way saturated or even anywhere near saturation! I can also anticipate that some might consider this analogy to be an oversimplification. All analogies are. But, I would suggest that it is less of a simplification than the logarithmic assumption.
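A toy numeric version of this three-regime picture, assuming ideal attenuators (the transmission factor is computed exactly from the dB value rather than from the rounded percentages above):

```python
# Toy sketch of the attenuator-chain analogy: n identical attenuators
# in series, each transmitting a fraction t of the power it receives.

t = 10 ** (-0.5 / 10)   # a 0.5 dB attenuator transmits ~89% of incident power

for n in (1, 2, 6, 20, 100):
    absorbed = 1 - t ** n          # fraction of input absorbed by the chain
    print(f"{n:3d} attenuators ({0.5 * n:5.1f} dB): absorbed {absorbed:.3f}")

# Small n: absorption grows almost linearly (~0.11 per stage).
# Intermediate n: growth per stage tails off, log-like.
# Large n: essentially everything is absorbed -- the saturated regime.
```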
PeterMartin says
PS Oops! I just realised that 1dB is about 21% of power, and 0.5dB is in fact about 11% of power, not 5% as I stated in my previous post. But it doesn’t change the point of the argument.
BobFJ says
Patrick 027 Reur 764, what a magnificent post!
You did not have to convince me that absorption spectra and their variability/consequences/ interactions with other stuff in the atmosphere are made very complicated by a host of things. (not much like observations “made in a tube” in the lab).
I’ll be reading your post a few times yet, probably.
Just a couple of quick points I’d like to ask you:
1) You wrote towards the end: “…As I understand it, a climate model can numerically evaluate all those effects (and the effects of the spatial and temporal cloud and humidity variations, etc.)…”
I find this proposition to be mind-blowingly difficult, both in terms of the myriad types of variously combined calculations, spatially and temporally, and in terms of the vast amounts of data input required (some of which may not be fully understood). Do you feel relaxed about this consideration?
2) As I understand it, the context of the discussion WRT bands/lines has been; what happens when CO2 increases within a relevant earthly range. (I suggested 300 – 600 ppm). I suggested that weak CO2 lines would strengthen as a consequence of increased CO2 alone, but that the effect would be small, and hazarded a guess of ~1% increase in total absorption, AOTBE.
What do you think?
BTW, I’m new here, and have only read from around post #580 down, so far.
Chris Colose says
761, Peter Martin
You could play around with David Archer’s online MODTRAN model and examine the quantity “I_out” for various atmospheric contexts. Here are some sample runs in an atmosphere with just CO2 as a non-condensible trace greenhouse gas and conserving relative humidity in the U.S. Standard atmosphere (1972):
0 ppmv = 295.317 W m^-2
0.25 ppmv = 293.119 W m^-2
0.5 ppmv = 292.02 W m^-2
0.75 ppmv = 291.235 W m^-2
1 ppmv = 290.576 W m^-2
1.25 ppmv = 290.01 W m^-2
1.5 ppmv = 289.539 W m^-2
1.75 ppmv = 289.1 W m^-2
2 ppmv = 288.692 W m^-2
...
5 ppmv = 285.457 W m^-2
5.25 ppmv = 285.238 W m^-2
...
280 ppmv = 268.25 W m^-2
280.25 ppmv = 268.219 W m^-2
Clearly the magnitude of change diminishes rapidly at just a couple parts per million, and is essentially negligible at pre-industrial or modern concentrations. Accordingly, the radiative efficiency for CO2 (a small perturbation of +1 ppmv at a background of 385 ppmv) is roughly 0.0139 W m^-2 ppm^-1. This is why HFCs, CFCs and other such compounds which are virtually non-existent in the atmosphere are so much better at perturbing the radiative balance per incremental change, and why methane is often remarked to be “20 times more powerful than CO2.” Intrinsically, CO2 is actually a better greenhouse gas at terrestrial-like climates but exists in much higher background concentrations.
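A quick sketch checking two things in the table above: whether successive doublings at these tiny concentrations yet give equal changes in I_out, and where the quoted radiative efficiency lands if one assumes the 5.35 ln(C/Co) form discussed elsewhere in this thread:

```python
# I_out values copied from the MODTRAN runs above (W m^-2).
i_out = {0.25: 293.119, 0.5: 292.02, 1: 290.576, 2: 288.692}

for c in (0.25, 0.5, 1):
    print(f"{c} -> {2 * c} ppmv: delta I_out = {i_out[c] - i_out[2 * c]:.3f} W/m^2")
# The three doublings give 1.099, 1.444 and 1.884 W/m^2 -- not yet equal,
# so the log form has not taken hold at these very low concentrations.

# If dF = 5.35 ln(C/Co) holds near today's 385 ppmv, the efficiency is:
print(5.35 / 385)   # ~0.0139 W m^-2 ppmv^-1, matching the figure quoted above
```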
manacker says
Peter,
In your search for a non-logarithmic CO2 / temperature relation that could hold even at the extreme ends of the theoretical atmospheric CO2 content, you probably do not need to look any further than the relation proposed by Hansen et al. (1988), cited in the IPCC TAR:
http://www.grida.no/climate/ipcc_tar/wg1/222.htm
This formula gives a roughly logarithmic relation within the practical extreme limits, i.e. from pre-industrial 1750 (at 280 ppmv) to the highest level we are likely to ever see, even in the far distant future (1120 ppmv), when all fossil fuel reserves have been used up. It shows a very slightly lower warming from 280 to 560 ppmv than it does from 560 to 1120 ppmv (both around 1°C for 2xCO2). (See my post 723 for a plot of this equation.)
According to this formula a doubling of CO2 from 1 to 2 ppmv would add around 0.4°C, and a doubling from 0.5 to 1 ppmv adds only around 0.2°C, while a doubling from 140 to 280 ppmv adds 0.8°C.
At the extreme low end this formula (like any other) is probably suspect, but at least it avoids the “minus infinity” problem and it should work OK at the practical ranges we are ever going to see.
Possibly one of the other posters here might have more to say on the viability of the Hansen et al. formula versus the straight logarithmic equation used by Myhre et al. (or Lindzen or Kondratyev and Moskalenko, for that matter).
The practical difference is minimal in any case.
Max
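For reference, the two expressions side by side as a short Python sketch. The Hansen et al. (1988) coefficients below are transcribed from IPCC TAR Table 6.2 as best I can make them out, so treat them as an assumption and check against the link above:

```python
import math

def myhre(c, c0):
    # Myhre et al.: dF = 5.35 ln(C/C0), in W m^-2.
    return 5.35 * math.log(c / c0)

def g(c):
    # Hansen et al. (1988) functional form, as listed in IPCC TAR Table 6.2
    # (coefficients assumed correct; verify against the link above).
    return math.log(1 + 1.2 * c + 0.005 * c**2 + 1.4e-6 * c**3)

def hansen(c, c0):
    return 3.35 * (g(c) - g(c0))

for c0, c in ((1, 2), (140, 280), (280, 560), (560, 1120)):
    print(f"{c0:4d} -> {c:4d} ppmv: Myhre {myhre(c, c0):.2f}  Hansen {hansen(c, c0):.2f} W/m^2")
# Myhre gives 3.71 W/m^2 for every doubling; the Hansen form gives much
# less for doublings at very low C and stays finite as C -> 0.
```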
Ray Ladbury says
Rod B. says, “As you know I for one have serious concerns with the numbers and precision used in forcing mathematics.”
Yes, I also know that you don’t understand it. OK, Rod, it’s time to do your homework. I’m really serious. It’s very hard to take what you say seriously when it’s clear that you aren’t willing to wade through some math to understand WHY a logarithmic form is used.
First, look at the shape of the absorption lines in figure 4 of the reference I gave to BobFJ in #748. Note that the tails of the distribution do not go to zero nicely as would a Normal, lognormal or other well-behaved function. No matter how far you go out, the contribution looks non-negligible. It looks kind of like a Cauchy distribution, right?
OK, play around with a Cauchy distribution in Excel or some other spreadsheet or download R (it’s free)
http://mathworld.wolfram.com/CauchyDistribution.html
Note that no matter how far out you go, the contribution isn’t negligible. What this means is that the incremental energy for every molecule of CO2 keeps decreasing (e.g. the 400th ppmv contributes less than the 100th ppmv), but it doesn’t go to zero. That is, the CO2 contribution is nowhere near saturation. Agreed?
OK. Now we need to find a relatively simple mathematical form that describes this. Motl’s exponential is right out, because it saturates. A power law with exponent between 0 and 1 won’t work because the 400th ppmv would contribute more to the energy absorbed than the 100th ppmv. Right?
Your assignment: Go find a relatively simple form that meets the physics I’ve outlined above. Until you go through this exercise, I don’t see how even you can consider your opinions credible.
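One way the exercise can come out, sketched numerically under a big simplification: take a toy band of lines whose strengths decay exponentially away from the band centre (as Patrick describes above for CO2 near 15 microns) and watch the total absorption grow with the absorber amount u.

```python
import numpy as np

# Toy band model: line strength decays exponentially with distance x
# from the band centre. Total absorption over the band then grows
# roughly logarithmically with absorber amount u, without saturating:
# each doubling of u pushes the 'saturated' edge outward a fixed step.

x = np.linspace(-10, 10, 4001)        # distance from band centre, arb. units
strength = np.exp(-np.abs(x))         # exponentially decaying line strength

for u in (1, 2, 4, 8, 16, 32):        # u ~ absorber amount (CO2 path)
    absorbed = np.trapz(1 - np.exp(-u * strength), x)
    print(f"u = {u:3d}: total absorption = {absorbed:.3f}")
# Increments per doubling: 1.05, 1.30, 1.38, 1.39, 1.39 ... they settle
# toward 2*ln(2), i.e. absorption ~ a + b*ln(u): logarithmic, unsaturated.
```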
BobFJ says
Patrick 027, further to my 763, there is something else to add concerning Peter Martin’s naïve assertions to you in his 761 that included:
The problem with this assertion is that Pete is apparently confusing heat transfer via conduction with the other common forms, radiation and convection. The (rather distracting/poor) analogy to the ~log relationship of EMR absorption – originally raised by Mark, I think, to demonstrate a non-linear relationship – concerned the effect of lagging on conductive heat loss from a hot water pipe.
Returning to the quote above, Pete’s imagined hot surface losing 16 units of heat cannot be losing it through the insulation, because there is no insulation, it being specified as 0 thick. Thus, the first bit of information in context in his little table is at 1 unit thick and 8 units of heat loss, and we have no idea what is happening between 0 and ~1 thick. If the ambient surroundings of the bare hot surface are a gas, then heat loss will be via radiation, convection, and conduction into the ambient fluid itself (not the insulation of thickness 0). If it is in a vacuum, it would be by radiation only. As insulation is progressively added, there will be some conductive loss through the insulation.
In practice of course, there is no such thing as a pure conductive loss, because the process is imperfect. (other heat losses). However, there is no specific relationship between the primary conductive loss through the insulation, and the secondary heat losses. For instance, at the termination of the insulation, there will be net radiative losses, but only at the termination and not throughout it, etc, etc.
BobFJ says
Patrick 027,
I think Peter Martin’s 765 & 766 are addressed to you.
I don’t know if you are familiar with his tactics in debate, but I have experienced them elsewhere for a long time, and his ploy above of introducing red herrings that provoke responses is just one of his methods.
I think of it as:
Yet
Again
Waffling
Nonsense
BobFJ says
Patrick 027 & RichardC;
Whoops, sorry, my 771 was intended to have been for RichardC
manacker says
Peter Martin
To keep the discussion on how temperature increases with increased atmospheric CO2 within reasonable bounds, you should probably limit it to an upper maximum of around 1100 ppmv (or around 4x the “pre-industrial” level of 280 ppmv).
It is not possible for human CO2 emissions from all known and optimistically estimated fossil fuels on this planet to get us above that level, so, barring a major natural disaster such as has been suggested for the PETM, there is no way we will ever see 1100+ ppmv CO2 in our atmosphere.
A formula that works between 280 and 1100 ppmv is really all that is needed in practice (other than just as a matter of scientific curiosity).
Max
Hank Roberts says
> barring a major natural disaster as has been suggested for the PETM
Don’t you have to bar _all_ natural feedbacks, to get such a low number?
You’re giving us the change in temperature from instantaneously doubling the number of CO2 molecules in the atmosphere, without any other changes that we know happen, if I read your chart correctly.
Ike Solem says
You know, I posted this back around #444, and for some reason it never showed up – re: Mike’s comments on AMO and PDO:
P.S. Mike, I did read your comments on how models might generate an AMO, but those models are probably not treating the so-called meridional overturning circulation correctly. See the following on the latest data from the Atlantic, which seems to put the notion of an “Atlantic conveyor belt” to rest:
ScienceDaily (May 14, 2009)
That’s the problem with the ocean component of models: insufficient data and too many assumptions by climate modelers. This appears to be changing, i.e. the recent decade-scale predictions based on more accurate ocean data:
Smith et al. 2007 Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model (pdf)
and, see this:
It seems clear that El Ninos are examples of ocean weather, just as hurricanes and mid-latitude fronts are examples of atmospheric weather. The ocean does not seem to be any less chaotic than the atmosphere, just much slower in its movements. If AMOs or PDOs exist (which is questionable), they must behave similarly.
Finally, if there is a mechanism for the AMO, why can’t it be described simply?
Please refer to Delworth and Mann (2000), Knight et al (2005), and the numerous references therein which describe in some detail the mechanisms… (at least, in the world of the climate models). – mike
The explanation in the paper for observed historical SST variations no longer holds much weight:
“The model variability involves fluctuations in the intensity of the thermohaline circulation in the North Atlantic” – Delworth & Mann 2000
That was also supposed to lead to a reduced ‘thermohaline circulation’ due to freshening water, etc. However, see the new data – the SST link is hardly so clear.
Also, (and as the paper notes), the historical records are far too short to apply time-series analysis (a 70-year cycle from 100 years of data???) with any degree of confidence – leaving what? Not very much – unless the stated solar cycle explanation is the driving force, which also seems unlikely. Yes, I actually read papers – something most journalists seem to avoid.
All in all, the existence of predictable periodic multidecadal oscillations in the world’s oceans remains highly questionable, and poor use of time series data analysis approaches plus oversimplified ocean circulation models is probably the cause. “Phase-locking” (as per the PDO claims) is even more unlikely – even the relatively well-understood El Nino ‘cycle’ displays sensitive dependence and low predictability, and that’s with a robust mechanism that explains just how an El Nino develops.
Hank Roberts says
Spinning still a problem:
“This week, the CBO ran the numbers on the Democratic cap-and-trade, and in the process, discredited the Republican talking points on the proposal.”
http://www.washingtonmonthly.com/archives/individual/2009_06/018764.php
Rod B says
Patrick 027 (759), I agree with that — that the relationship between CO2 concentration and forcing is linear-like sometimes and log-like other times. But Ray implied that even this distinction is not important (though I think he doesn’t really think this — probably just came out wrong.) My personal concern/question stems from taking this general qualitative form and then adding some very precise numbers to the equation and then maintaining those precise coefficients/exponents through a very large range of concentrations and situations.
Hank Roberts says
> 778 My personal concern/question … taking this … adding
> some very precise numbers … maintaining those … through a
> very large range of concentrations and situations.
Then don’t do that! Focus on the real world range of possibilities.
Give Chris Colose’s link to anyone who gets so confused.
Rod B says
manacker (769), et al: Now I’m really confused. It is commonly known that the primary forcing math used is a precise log relationship and the forcing-to-temp math is linear. This says that any doubling of CO2 will result in the same temperature increase no matter the base. How does this fit??
Jim Galasyn says
What’s all this about EPA suppressing some economist’s opinions of its “endangerment finding”?
Is the EPA suppressing or withholding information on global warming?
Deniers are going ape.
Rod B says
Ray, you’re still missing my point. I have no real quarrel with the log form relationship between CO2 concentration and forcing. My questioning is twofold: 1) I’m not totally swayed with the precise mathematics, i.e. 5.35*ln[C/Co]; I don’t fuss much at current and recent concentrations (though it has not always had this coefficient) even though there certainly is a range of possibilities. (One has to go with a single number in modeling so long as it seems reasonably accurate — I got no problem with this, either.) Whether the coefficient holds or is even close at different concentrations (600ppm? 800ppm? 1000ppm? 10,000ppm?) is problematical and not known with certainty — the self-confidence of some scientists notwithstanding, though it’s probably close at the near-end range. 2) I don’t know, and neither does anyone else know for certain, when the coefficient or even the log form changes. Is it A*ln[C/Co] at 10,000ppm? How do you know?
Hank Roberts says
RodB, you’re missing the point. Heck, you’re missing the broad side of the barn. This isn’t a Platonic ideal. It’s a planet. You can’t change just one thing, holding everything else constant. Pick any meaningful question you can ask about the environment and push it far enough and something else will surprise you.
That wouldn’t be a reason to delay doing the obvious, though.
Hank Roberts says
Hey, Jim Galasyn, good pointer, everyone should read this story you just pointed us to:
http://www.examiner.com/x-9111-SF-Environmental-Policy-Examiner~y2009m6d24-Is-the-EPA-suppressing-or-withholding-information-on-global-warming
The update reveals the devastating ability of a large organization* to lie to the news organizations, and how they mistakenly repeated the falsehoods in public.
This is how they spin the facts about reports on climate change.
I trust we’ll see this widely reported.
The Examiner took an hour or so to correct the error.
Let us know if you find CEI correcting the original bogus report.
______________________________________________________
* Competitive Enterprise Institute (CEI), caught lying
Mark says
“1) I’m not totally swayed with the precise mathematics, i.e. 5.35*ln[C/Co]; ”
No, you wouldn’t. It doesn’t say what you want it to say.
“Whether the coefficient holds or is even close at different concentrations (600ppm? 800ppm? 1000ppm? 10,000ppm?) is problematical and not known with certainty —”
That it holds at 100-600 IS.
Now if it gets to 600, we’re boned, so who freaking’ cares if it breaks down after that?
And how do you know it doesn’t hold at some level with certainty? It’s not like we don’t have computers to do the maths VERY quickly for us.
Or doesn’t maths work when computers do it?
“2) I don’t know, and neither does anyone else know for certain, when the coefficient or even the log form changes. Is it A*ln[C/Co] at 10,000ppm? How do you know?”
Well, what will have melted by then? All of the ice? 20m increase in water levels. The summer in New York (if you float on the water) will be what? 60C? More?
And you can’t think that “it goes up until you get to 1000ppm and then adding more CO2 *cools the planet*!!!”.
So since it’s already going to be ball-boilingly hot and there won’t be any land where most of our biggest cities exist and the centre of continents like the US will be dustbowls, how much do you think we should care about the precise mathematical concordance between temperature and CO2 concentrations?
Hank Roberts says
The goal in trolling is to drop a pithy item into the thread — crafted to elicit longer posts from others that further digress from the topic of discussion. The more long off-topic replies, the better you did.
PeterMartin says
Ray,
You say “Motl’s equation fundamentally misunderstands the physics – or – it is deliberately misleading. You pick.” I’d go for deliberately misleading!
Motl is a smart enough guy. However, the problem that I have with his equation is not so much its form as the way he’s fiddled the constants used in it (1.5 & 200), without any real justification, to try to prove the saturation effect will prevent temperatures rising any higher than 1.5degC.
However if you change the constants in his equation to something more reasonable, the graph comes out to be virtually identical to Hansen’s 1988 plot over the range of values 70ppmv to 560ppmv. Incidentally, one is for climate forcing and one is for temperature; I have made the conversion shown in the link below:
http://farm3.static.flickr.com/2562/3661605904_2bbcb70f32_o.png
This form of equation may not be valid for the reasons you suggest at higher levels of CO2, but I’m looking to find a reasonable value of the y-axis intercept (where the equations using log approximations are not valid either). This is effectively the temperature the Earth would be with no CO2 present in the atmosphere. I don’t believe the GHE would disappear completely. Water vapour would still have an effect.
It’s not just idle curiosity. It would enable us to answer the question of how much CO2 contributes to the natural GHE. If we can say that this is about 8 degC, or 25% of the natural GHE of 33degC, the figure of 3-4 degs for a doubling of this level sounds much more plausible than the less than 1 degC that people like Lindzen would have us believe.
Hank Roberts says
Are you asking for a calculation for a hypothetical sphere of rock the size and location of Earth, as surrounded by various atmospheres?
That omits the climate. You might get the simple equation you’re describing (or it may have been done).
Or are you asking about climate sensitivity?
http://www.sciencemag.org/cgi/content/abstract/308/5727/1431
Kevin McKinney says
On another front wrt 10,000 ppm of CO2, we’re getting into the realm of direct toxicity at about that point:
“Due to the health risks associated with carbon dioxide exposure, the U.S. Occupational Safety and Health Administration says that average exposure for healthy adults during an eight-hour work day should not exceed 5,000 ppm (0.5%).”
10,000 ppm causes discomfort in about 20% of subjects, and more frequently drowsiness.
(Wiki)
Jim Galasyn says
Hank, I’m impressed at how you got that horizontal rule in your post!
Hank Roberts says
________ it’s just the underscore character, repeated _______
Patrick 027 says
Re 782 – Rod, the calculations can be done to whatever degree of precision is worthwhile given access to computer power and the necessity of a degree of accuracy, etc. The logarithmic relationship is an approximation that can be plugged into ‘toy’ models. Climate models don’t, so far as I know, use a precalculated tropopause-level radiative forcing for a given CO2 level; as I understand it, they use a parameterized simplification of radiation to calculate radiative fluxes on the scale of the grid spacing – these parameterizations can be checked against the more precise line-by-line calculations.
Sometimes the net result of very complex behavior can be described accurately in a greatly simplified description.
Re 787 Peter Martin – if you took all the CO2 out of the atmosphere, the cooling would pull most of the water vapor out of the atmosphere as well.
Rod B says
Hank (777), ’twasn’t clear: is spinning still a problem with Republicans or with the Washington Monthly? ;-) I’d hold off on the hats and horns until the details and assumptions are checked and the economic dust settles.
Rod B says
Hank (783), and your point is what?? This whole business is loosey-goosey (like a planet, not a Platonic ideal) but we take your answers as unassailably accurate? Or are you saying the BAU scenario can just as easily drop the temp by a couple of degrees by 2100, since all is loose? Or is the IPCC projection right on (with of course the 90-95% thingy) despite all of the “flaky” stuff that went into it?
Is 5.35*ln[C/Co] the correct direct forcing math (period) for CO2 or not? What’s your view? Yes or No?
dhogaza says
Hank, you forgot to say, “on the eve of the big vote tomorrow (friday in case you don’t read this tonight, Thursday)”.
They’re not caught, truly and fairly, unless the bill passes tomorrow.
If it doesn’t, they can be crucified in every way imaginable but won’t care. This is just a calculated ploy to give cover to a couple of votes that they’re hoping will be enough to sink the bill.
PeterMartin says
Patrick,
” if you took all the CO2 out of the atmoosphere, the cooling would pull most of the water vapor out of the atmosphere as well.”
How much is “most”? And, again, the effect would be highly non-linear so how much of the GHE would be left? Would it produce a snowball earth? Or would the ocean still be ice free in tropical regions?
manacker says
RodB (780)
You ask:
“It is commonly known that the primary forcing math used is a precise log relationship and the forcing-to-temp math is linear. This says that any doubling of CO2 will result in the same temperature increase no matter the base. How does this fit??”
To answer your question: It fits fine for me, both theoretically and practically. I have no problem with the logarithmic function as proposed by Myhre et al., particularly within the practical limits we are ever likely to see on this planet.
But Peter Martin is desperately looking for a CO2 / temperature relationship that does not follow the logarithmic relationship (which says that any doubling of CO2 will result in the same temperature increase no matter the base) because he, personally, cannot accept that a theoretical 2xCO2 from 17.5 to 35 ppmv (for example) could have the same temperature impact as a possible future 2xCO2 from 280 to 560 ppmv. It appears that he believes that the logarithmic function is part of a conspiracy by Lindzen, Spencer and others to downplay future AGW.
So I proposed to Peter that he look at the somewhat modified Hansen et al. (1988) equation cited in the IPCC TAR. Within the range of 280 to 1120 ppmv this gives essentially the same result as the straight logarithmic function of Myhre et al., but it avoids the “minus infinity” problem at very, very low (hypothetical) CO2 levels, with which Peter has been struggling.
It still doesn’t solve Peter’s problem, though.
Max
manacker says
Hank Roberts
To your #775. You wrote:
“Don’t you have to bar _all_ natural feedbacks, to get such a low number?
You’re giving us the change in temperature from instantaneously doubling the number of CO2 molecules in the atmosphere, without any other changes that we know happen, if I read your chart correctly.”
There are two points here.
First is the maximum level of atmospheric CO2 we can ever expect to see from anthropogenic sources. This appears to be around 1,000 ppmv or barely 4x the “natural” pre-industrial level of 280 ppmv. That’s all there is out there. Only a natural disaster, such as that which has been suggested for the PETM, could change this significantly.
The second point is the amount of temperature increase we could reasonably expect to see from this all-time maximum CO2 level.
If we stick with the no feedback equilibrium forcings according to IPCC TAR (Myhre et al., Shi and Hansen et al.), we see that this all-time temperature increase is between 1.7 and 1.9°C.
All the rest is suggested positive feedbacks from water vapor, clouds and surface albedo, minus the negative lapse rate feedback.
As these are all anything but certain based on the empirical evidence supported by actual physical observations, we can probably ignore them for now.
As it looks at this point, observed water vapor feedback is a fraction of what would theoretically occur if relative humidity remained constant as assumed by the models and clouds appear to actually exert a fairly strong net negative feedback instead of a strong positive feedback as assumed by the models.
So this tells us that the combustion of all the fossil fuels on this planet will result in a theoretical greenhouse warming of under 2°C (excluding the suggested increase due to net “positive” feedbacks).
[Response: Nonsense. We can “ignore feedbacks”? Maybe if we lived on the moon. – gavin]
manacker says
Hank Roberts
Some back-up numbers.
How much CO2 is there in all the world’s fossil fuels?
How high would atmospheric CO2 concentration be when all fossil fuels have been consumed?
1. World oil reserves
1,317 billion bbl proven reserves (Oil and Gas Journal, 2007)
530 billion bbl all optimistically estimated sources other than oil shale
2,500 billion bbl worldwide oil shale
4,347 billion bbl
= 569 billion mt
(1 mt = 7.64 bbl)
Current consumption = 75 million bbl/day
Equals 159 years at current consumption level (Note: Consumption is projected to increase short-term and then decline, as reserves begin to dwindle and costs increase.)
2. World coal reserves
840 billion mt recoverable reserves (U.S. Energy Information Administration)
660 billion mt optimistically assumed recoverable new finds
1,500 billion mt
Current consumption = 6.2 billion mt/year
Equals 242 years at current consumption level
3. World natural gas reserves
176 trillion cubic meters proven reserves 2007 (O+GJ)
200 trillion cubic meters optimistically assumed recoverable new finds
376 trillion cubic meters
Current consumption = 2.8 trillion cubic meters /year
Equals 134 years at current consumption level
4. CO2 generated from oil
Oil = 85% carbon
1 mt oil generates 0.85 mt C or 0.85 * 44 / 12 = 3.12 mt CO2
But 25% of oil is used for non-combustion (plastics, petrochemicals, etc.)
0.75 * 569 * 3.12 = 1,330 GtCO2
5. CO2 generated from coal
Coal = 91% carbon
1 mt coal generates 0.91 mt C or 0.91 * 44 / 12 = 3.34 mt CO2
(assume all of coal is used for combustion)
Equals 1,500* 3.34 = 5,005 Gt CO2
6. CO2 generated from natural gas
1 cubic meter of methane generates 2.0 kg CO2
But 20% of natural gas is used for non-combustion (fertilizers, petrochemicals, etc.)
0.8 * 376 * 2 = 603 Gt CO2
7. Total, all fossil fuels generate: 1,330 + 5,005 + 603 = 6,938 Gt CO2
8. Mass of atmosphere = 5,140,000 Gt
If 100% of emitted CO2 stays in atmosphere:
1,000,000 * 6,938 / 5,140,000 = 1,350 ppm (mass)
But currently less than 60% stays in atmosphere. Assume in future this increases to 70% of total = 945 ppm (mass) = 622 ppmv additional CO2
Today’s CO2 level = 385 ppmv
9. Long-term future level (when all fossil fuels have been used up) = 385 + 622 = 1,007 ppmv.
That’s it, folks!
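The arithmetic above, reduced to a short script so each step can be re-run. All inputs are the figures quoted in this comment (not independently verified; note the dissenting inline response further down), and the ppm(mass)-to-ppmv conversion via the molar-mass ratio is my reading of the step used above:

```python
import math

# Reserve figures and conversion factors as quoted in this comment; the
# inputs are the commenter's own assumptions, not vetted numbers.
OIL_GT, COAL_GT, GAS_TCM = 569.0, 1500.0, 376.0
ATM_MASS_GT = 5.14e6                       # mass of the atmosphere, Gt

co2_oil  = 0.75 * OIL_GT * 0.85 * 44 / 12  # 25% of oil to non-combustion use
co2_coal = COAL_GT * 0.91 * 44 / 12        # all coal assumed burned
co2_gas  = 0.80 * GAS_TCM * 2.0            # 2 kg CO2/m^3, 20% non-combustion
total = co2_oil + co2_coal + co2_gas       # ~6,940 Gt CO2

ppm_mass = 0.70 * total / ATM_MASS_GT * 1e6   # 70% assumed to stay airborne
ppmv = ppm_mass * 28.97 / 44                  # mass ppm -> volume ppm (molar masses)
print(f"{total:.0f} Gt CO2 -> +{ppmv:.0f} ppmv -> {385 + ppmv:.0f} ppmv final")

# Myhre et al. forcing at that level, relative to pre-industrial 280 ppmv:
print(f"{5.35 * math.log((385 + ppmv) / 280):.2f} W/m^2")   # ~6.85
```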
So what does this mean as far as temperature is concerned?
Whether we use the IPCC TAR equation proposed by Myhre et al., by Shi or by Hansen et al. doesn’t make much difference. The total calculated anthropogenic temperature increase from pre-industrial year 1750 (no net positive or negative impact from feedbacks) is:
1.7°C Myhre et al.
1.8°C Shi
1.9°C Hansen et al.
All three equations agree that at equilibrium we have already seen 0.4°C of this warming (at 385 ppmv CO2 today), so this leaves us the following all-time additional warming from CO2 at equilibrium:
1.3°C Myhre et al.
1.4°C Shi
1.5°C Hansen et al.
Anybody have any better numbers?
Max
[Response: Huh? Add in methane hydrates, oil shale and tar sands – easily doubles these numbers. And your calculation of the temperature change is way off. 1000ppm gives a forcing of 6.8 W/m2, and an expected temperature rise of ~3.5 to 7 deg C. -gavin]
Mark says
“2,500 billion bbl worldwide oil shale
4,347 billion bbl
= 569 billion mt
(1 mt = 7.64 bbl)
Current consumption = 75 million bbl/day”
And in refining that oil shale, how much oil gets used up to power the refining?
Or in dealing with the shale itself (which you don’t want in your oil when refining)?