Guest commentary by Spencer R. Weart, American Institute of Physics
I often get emails from scientifically trained people who are looking for a straightforward calculation of the global warming that greenhouse gas emissions will bring. What are the physics equations and data on gases that predict just how far the temperature will rise? A natural question, when public expositions of the greenhouse effect usually present it as a matter of elementary physics. These people, typically senior engineers, get suspicious when experts seem to evade their question. Some try to work out the answer themselves (Lord Monckton for example) and complain that the experts dismiss their beautiful logic.
The engineers’ demand that the case for dangerous global warming be proved with a page or so of equations does sound reasonable, and it has a long history. The history reveals how the nature of the climate system inevitably betrays a lover of simple answers.
The simplest approach to calculating the Earth’s surface temperature would be to treat the atmosphere as a single uniform slab, like a pane of glass suspended above the surface (much as we see in elementary explanations of the “greenhouse” effect). But the equations do not yield a number for global warming that is even remotely plausible. You can’t work with an average, squashing together the way heat radiation goes through the dense, warm, humid lower atmosphere with the way it goes through the thin, cold, dry upper atmosphere. Already in the 19th century, physicists moved on to a “one-dimensional” model. That is, they pretended that the atmosphere was the same everywhere around the planet, and studied how radiation was transmitted or absorbed as it went up or down through a column of air stretching from ground level to the top of the atmosphere. This is the study of “radiative transfer,” an elegant and difficult branch of theory. You would figure how sunlight passed through each layer of the atmosphere to the surface, and how the heat energy that was radiated back up from the surface heated up each layer, and was shuttled back and forth among the layers, or escaped into space.
When students learn physics, they are taught about many simple systems that bow to the power of a few laws, yielding wonderfully precise answers: a page or so of equations and you’re done. Teachers rarely point out that these systems are plucked from a far larger set of systems that are mostly nowhere near so tractable. The one-dimensional atmospheric model can’t be solved with a page of mathematics. You have to divide the column of air into a set of levels, get out your pencil or computer, and calculate what happens at each level. Worse, carbon dioxide and water vapor (the two main greenhouse gases) absorb and scatter differently at different wavelengths. So you have to make the same long set of calculations repeatedly, once for each section of the radiation spectrum.
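A toy version of the layered-column idea can be sketched in a few lines. This is a cartoon of the "one-dimensional" reasoning, not a real radiative-transfer code: it assumes a gray atmosphere whose layers absorb all infrared and are transparent to sunlight, and the constants and function name below are my own illustrative choices.

```python
# Toy N-layer "gray" atmosphere: each layer absorbs all infrared but lets
# sunlight through. Real models must repeat a far more detailed version of
# this at many wavelengths for each gas.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_ABSORBED = 240.0  # globally averaged absorbed sunlight, W m^-2

def surface_temperature(n_layers: int) -> float:
    """Equilibrium surface temperature (K) with n fully absorbing layers.

    Energy balance of the stacked layers gives T_surface = T_e * (n+1)**0.25,
    where T_e is the effective emission temperature with no atmosphere.
    """
    t_effective = (SOLAR_ABSORBED / SIGMA) ** 0.25   # about 255 K
    return t_effective * (n_layers + 1) ** 0.25

for n in range(3):
    print(f"{n} layers: {surface_temperature(n):.0f} K")
```

Even this cartoon shows why the single-slab picture fails: the answer depends entirely on how the column is divided, which is exactly what real radiative transfer must compute level by level and wavelength by wavelength.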
It was not until the 1950s that scientists had both good data on the absorption of infrared radiation, and digital computers that could speed through the multitudinous calculations. Gilbert N. Plass used the data and computers to demonstrate that adding carbon dioxide to a column of air would raise the surface temperature. But nobody believed the precise number he calculated (2.5ºC of warming if the level of CO2 doubled). Critics pointed out that he had ignored a number of crucial effects. First of all, if global temperature started to rise, the atmosphere would contain more water vapor. Its own greenhouse effect would make for more warming. On the other hand, with more water vapor wouldn’t there be more clouds? And wouldn’t those shade the planet and make for less warming? Neither Plass nor anyone before him had tried to calculate changes in cloudiness. (For details and references see this history site.)
Fritz Möller followed up with a pioneering computation that took into account the increase of absolute humidity with temperature. Oops… his results showed a monstrous feedback. As the humidity rose, the water vapor would add its greenhouse effect, and the temperature might soar. The model could give an almost arbitrarily high temperature! This weird result stimulated Syukuro Manabe to develop a more realistic one-dimensional model. He included in his column of air the way convective updrafts carry heat up from the surface, a basic process that nearly every earlier calculation had failed to take into account. It was no wonder Möller’s surface had heated up without limit: his model had not used the fact that hot air would rise. Manabe also worked up a rough calculation for the effects of clouds. By 1967, in collaboration with Richard Wetherald, he was ready to see what might result from raising the level of CO2. Their model predicted that if the amount of CO2 doubled, global temperature would rise roughly two degrees C. This was probably the first paper to convince many scientists that they needed to think seriously about greenhouse warming. The computation was, so to speak, a “proof of principle.”
But it would do little good to present a copy of the Manabe-Wetherald paper to a senior engineer who demands a proof that global warming is a problem. The paper gives only a sketch of complex and lengthy computations that take place, so to speak, offstage. And nobody at the time or since would trust the paper’s numbers as a precise prediction. There were still too many important factors that the model did not include. For example, it was only in the 1970s that scientists realized they had to take into account how smoke, dust and other aerosols from human activity interact with radiation, and how the aerosols affect cloudiness as well. And so on and so forth.
The greenhouse problem was not the first time climatologists hit this wall. Consider, for example, attempts to calculate the trade winds, a simple and important feature of the atmosphere. For generations, theorists wrote down the basic equations for fluid flow and heat transfer on the surface of a rotating sphere, aiming to produce a precise description of our planet’s structure of convective cells and winds in a few lines of equations… or a few pages… or a few dozen pages. They always failed. It was only with the advent of powerful digital computers in the 1960s that people were able to solve the problem through millions of numerical computations. If someone asks for an “explanation” of the trade winds, we can wave our hands and talk about tropical heating, the rotation of the earth and baroclinic instability. But if we are pressed for details with actual numbers, we can do no more than dump a truckload of printouts showing all the arithmetic computations.
I’m not saying we don’t understand the greenhouse effect. We understand the basic physics just fine, and can explain it in a minute to a curious non-scientist. (Like this: greenhouse gases let sunlight through to the Earth’s surface, which gets warm; the surface sends infrared radiation back up, which is absorbed by the gases at various levels and warms up the air; the air radiates some of this energy back to the surface, keeping it warmer than it would be without the gases.) For a scientist, you can give a technical explanation in a few paragraphs. But if you want to get reliable numbers – if you want to know whether raising the level of greenhouse gases will bring a trivial warming or a catastrophe – you have to figure in humidity, convection, aerosol pollution, and a pile of other features of the climate system, all fitted together in lengthy computer runs.
Physics is rich in phenomena that are simple in appearance but cannot be calculated in simple terms. Global warming is like that. People may yearn for a short, clear way to predict how much warming we are likely to face. Alas, no such simple calculation exists. The actual temperature rise is an emergent property resulting from interactions among hundreds of factors. People who refuse to acknowledge that complexity should not be surprised when their demands for an easy calculation go unanswered.
TEBB says
I’m back catching up on reading comments today and as a layman looking at this entire thread or whatever it is called, this comment by Mashey seems to hit the nail on the head as to what I see happening here from a human behavior standpoint:
“…good interdisciplinary work means building bridges from one part to another, not rushing over to another piece with dynamite and trying to blow it up.”
It seems Mugwumps and others are strong in their own respective fields and making the mistake of rushing over to climatology with dynamite, instead of taking the time to learn climatology.
Spencer says
Apologies to all engineers! I only meant to report an observation: of those who emailed me complaining that they couldn’t find a simple scientific calculation of the greenhouse effect on the Web, a surprisingly high fraction identified themselves as engineers, and senior in the sense of “experienced,” “retired” or the like. This is a small sample and certainly not characteristic of the vast majority of engineers.
mugwump says
RE #143:
Nick Gotts, my original estimate was from eyeballing the graphs in Roe and Baker. I actually calculated the answer at #141. There’s no way to justify Gavin’s interpretation that > 4.5C and < 2C are both unlikely under their model: > 4.5C has a probability of 0.4; < 2C has a probability of 0.0027.
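For readers following along, the shape of this calculation can be sketched as follows. The parameters here (L0, F_MEAN, F_SD) are assumed placeholders for illustration, not Roe and Baker's published numbers, so the printed probabilities will differ from the figures quoted above; the point is only the asymmetry of the two tails.

```python
# Sensitivity S = L0 / (1 - f), with feedback factor f ~ Normal(F_MEAN, F_SD)
# truncated to [0, 1). All parameter values are illustrative assumptions.
from math import erf, sqrt

L0 = 1.2                    # no-feedback sensitivity, deg C per doubling (assumed)
F_MEAN, F_SD = 0.65, 0.13   # assumed Gaussian parameters for f

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_f_below(f: float) -> float:
    """P(f < value) for the Gaussian on f, renormalised to [0, 1)."""
    lo, hi = norm_cdf((0 - F_MEAN) / F_SD), norm_cdf((1 - F_MEAN) / F_SD)
    return (norm_cdf((f - F_MEAN) / F_SD) - lo) / (hi - lo)

# S > 4.5  <=>  f > 1 - L0/4.5 ;   S < 2  <=>  f < 1 - L0/2
p_high = 1.0 - prob_f_below(1.0 - L0 / 4.5)
p_low = prob_f_below(1.0 - L0 / 2.0)
print(f"P(S > 4.5 C) = {p_high:.3f}, P(S < 2 C) = {p_low:.4f}")
```

Whatever the exact parameters, a Gaussian on f mapped through 1/(1-f) makes the high-sensitivity tail orders of magnitude heavier than the low one.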
mugwump says
Gavin #146:
Sure, which is why I said “Why do they revise the models? Because they don’t explain all the data (including climate time series, but also all other kinds of data).” The “all other kinds of data” I was referring to includes the single processes.
As above, I am not claiming that is exclusively what is done.
Come on gavin, you’re not seriously claiming that you don’t have a fraction of an eye on temperature trends when you’re modifying the models? If a model change produces a cooling trend over the 20th century, you never take a second look?
[note, there is absolutely nothing wrong with doing this. It would be wrong not to use all the available data to verify the models.]
[Response: But the warming over the 20th Century is driven by the forcings, not the model. If suddenly CO2 in the model ceased to absorb IR, then obviously we’d examine what changed. But that would radically affect present day climate independently of whatever implication it had for the trend. Almost all of the changes described in the paper would presumably affect the sensitivity, but we generally don’t even test that after every little change. The changes are driven by fidelity to the process, and the emergent properties of the present day climatology (strength of the Hadley circulation, percent ice cover, cloud radiative forcing). Really, the models are not tuned to the trend. – gavin]
Jim Eager says
Greg, that was the funniest parody rant I’ve read in a long time.
What, you were serious?
Ray Ladbury says
Mugwump, You seem to be confused as to the type of modeling that is going on here. Climate models are not statistical, but dynamical. A statistical model’s parameters are fit to the data to achieve the best agreement. Parameters in dynamical models are determined independently, and then the performance of the model is judged by how well it reproduces the trends seen in the data. If you do not understand this distinction, you will reach incorrect conclusions about the reliability of the process.
The reason the distribution over possible sensitivities is asymmetric is because the data are more forgiving of a high sensitivity (>4.5 degrees per doubling) than of a low one (
Mark says
Re #144
“Nowhere in the distribution has a probability of 1. The expectation is the integral of the climate sensitivity multiplied by the probability density function (pdf). The pdf stays well defined, but the climate sensitivity diverges as 1 / (1-f) as f approaches 1. It’s what those in the theoretical physics world call a “logarithmic divergence”.”
As f approaches 1. Never gets there. Hence your log 0 is incorrect.
[edit]
Mark says
Further to Nick at #143.
Consider a distribution that has 3.5 as its mean: the dice roll of a fair d6. Flat, and it goes no further than 6.
Now take a Poisson distribution. It may also have a 3.5 mean, but it is skewed, with a fatter right tail.
Now when it comes to CO2 sensitivity, it cannot be negative. There’s very much an absolute minimum. What’s the maximum? Well, unless you model it, infinite, really. So the mode, median and mean are all different, and each tells you something.
If CO2 sensitivity is low, then our actions have a smaller but still detrimental effect. If it is high, the detrimental effects happen quicker and are more widespread.
So the “harm” is not linear as sensitivity goes up but goes up exponentially.
So, instead of looking for the minimum change, you’d be better off looking for the maximum harm.
Much like they do in civil engineering. They don’t build bridges with a 10% margin for error, they work out what the WORST that could happen and then double the protection against THAT.
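The point about means versus tails can be put in numbers. The two distributions and the cubic damage function below are purely illustrative assumptions, chosen only so that both have the same mean:

```python
# Two sensitivity distributions with the same mean can imply very different
# expected harm when harm grows faster than linearly with sensitivity.
import random

random.seed(0)
N = 100_000
# Symmetric: uniform draws around a mean of 3.5 (like an idealised die)
symmetric = [random.uniform(1.0, 6.0) for _ in range(N)]
# Skewed: most mass low, occasional large values, same mean of 3.5
skewed = [random.expovariate(1.0 / 3.5) for _ in range(N)]

def harm(s):
    return s ** 3   # assumed convex (faster-than-linear) damage function

for name, draws in (("symmetric", symmetric), ("fat-tailed", skewed)):
    mean_s = sum(draws) / N
    mean_harm = sum(harm(s) for s in draws) / N
    print(f"{name}: mean sensitivity {mean_s:.2f}, expected harm {mean_harm:.1f}")
```

With identical means, the fat-tailed distribution yields several times the expected harm, which is why risk assessment looks at the tail, not just the central estimate.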
mugwump says
Maybe we just have a terminology difference.
Way upthread I said: “Now, GCM modelers don’t necessarily estimate the free parameters by directly inferring them from model behaviour, but they certainly adjust the parameters to get the model output to “fit” the 20th century instrumental record”
By “parameters” I mean any input to the model for which a value is not already precisely determined. That includes uncertain physical constants, but also boundary conditions such as aerosol forcing at each point in time. Note also that “adjusting the parameters to fit the 20th century record” does not have to involve heavy duty parameter tweaking and re-estimation. It can be as benign as picking one parameter value over another because it gives a roughly more plausible shape to the model outputs.
The question of whether models are tuned (at least to some extent) to the 20th century record seems to have been substantiated by Kiehl, which Timo Hämeranta linked to in #78 above [I had read this previously]:
Kiehl, Jeffrey T., 2007. Twentieth century climate model response and climate sensitivity. Geophys. Res. Lett., 34, L22710, doi:10.1029/2007GL031383, November 28, 2007
Maybe GISS does not suffer from this, but it would be very difficult not to.
mugwump says
RE Mark #157:
You can write it as a limit if you want. You still get no upper bound on the expected climate sensitivity from Roe and Baker’s model.
Now, you could argue that allowing f to range all the way to 1 is unphysical, and it should be limited to some number smaller than 1. But that’s modifying the gaussian distribution on f. If you go down that road you also have to address the extreme skewness in the distribution on climate sensitivity induced by a Gaussian distribution on f. It’s just not plausible, so the distribution on f should be very different from gaussian. But once you do that you pretty much invalidate the whole paper.
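The divergence being argued over here is easy to check numerically: truncate the Gaussian on f at some f_max < 1 and watch the expected sensitivity keep growing as f_max approaches 1. The parameter values below are assumed for illustration only.

```python
# If f is Gaussian on [0, 1), E[S] = E[L0/(1-f)] grows without bound as the
# upper cutoff on f approaches 1 (a logarithmic divergence).
from math import exp, pi, sqrt

L0, F_MEAN, F_SD = 1.2, 0.65, 0.13   # assumed, for illustration

def normal_pdf(x):
    return exp(-0.5 * ((x - F_MEAN) / F_SD) ** 2) / (F_SD * sqrt(2 * pi))

def expected_sensitivity(f_max, steps=50_000):
    """E[S] with f truncated to [0, f_max], via midpoint integration."""
    df = f_max / steps
    num = den = 0.0
    for i in range(steps):
        f = (i + 0.5) * df
        w = normal_pdf(f) * df
        num += w * L0 / (1.0 - f)
        den += w
    return num / den

for cutoff in (0.9, 0.99, 0.999, 0.9999):
    print(f"f_max = {cutoff}: E[S] = {expected_sensitivity(cutoff):.2f} C")
```

Each extra decimal place in the cutoff adds roughly the same increment to E[S], the numerical signature of a logarithmic divergence.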
Paul Melanson says
RE: #145
And if I hear one more fool hand-wave the issue away with some comment about ’solar and wind’, I will lose it.
Many people post comments here to the effect that we can’t move away from carbon-intensive energy generation. Sometimes it’s as a taunt, such as “why do you believe climate models but not economic ones.” I’ve been asking for their references, so far without success. Your statement is along these same lines, e.g. “abundant fossil fuels underpin our entire civilisation and your entire way of life” implies that we can’t change. Could you provide a reference so that I and others don’t think you’re the one “hand waving the issue away?” Thanks!
Thomas Hunter says
A friend of mine pointed me to this. Nice explanation of the complexity of the issue from Spencer. Not only is there carbon dioxide, we also have methane, nitrous oxide and the halocarbons. And then we throw in sun, wind, wind patterns, lapse rates, water vapor feedbacks, conduction, convection, chemical reactions and all the kinetic interactions with the gases that do not absorb infrared. Toss in land-use changes and assorted waste heat, aerosols and particulates on the ground, cloud behavior and the like, and it is quite a complicated mix. I would certainly expect that if energy in is balanced by energy out roughly over some time scale, after taking into account the heat storage capacity of the oceans et al, then since both the satellite and land/sea anomalies show an increased trend in the lower troposphere, that would mean something else would have to show a decreased trend, as the stratosphere does.
What perplexes me is that there’s any conversation relating to the role the above listed greenhouse gases play. The IPCC in AR4 is quite clear that they estimate the four main well mixed gases provide a total of up to 2.8 Wm^-2. There seems little to argue about over that figure.
We also know that our satellite and land/sea anomaly trends are up, and the evidence suggests that the greenhouse gases contribute some net percentage to this trend rise. The IPCC specifically targets the burning of ‘fossil fuels and land use changes’ to this observed rise in the near surface numbers since the industrial era and the associated increases in population, urbanization, and technologization.
So rather than ask “What does doubling CO2 do?” the real two questions are perhaps “What does it take in total to make that radiative forcing number double?” and “What is the net effect in the system of raising the total from 2.8 to 5.6?”
It seems the verbal sparring over the lag of the oceans, the appropriateness of the models, and the other factors involved is rather a distraction. It seems a common starting point would be that it’s unclear what another 1 or 2 or 5 or 20 Wm^-2 will do in reality, or what actual effect limiting anthropogenic greenhouse gas emissions will have on the enhanced greenhouse effect. Then it can be worked from there to specifics. The first part of finding out where you’re going is to determine where it is you’re at.
Ray Ladbury says
Mugwump, where in that quote do you get any indication that sensitivity is fit to optimize agreement with the data? I see only discussion of uncertainties. Yes, you can get different values for forcings and for sensitivity–hence the range 2 K-4.5 K per doubling–to be consistent with trends. And different models produce slightly different results.
It is clear that you really don’t understand how the modeling is done. Why not read some of the papers Gavin has recommended and learn it. At least then you wouldn’t be arguing against straw men of your own devising.
On another note: You’ve said that a major factor in your opposition to the consensus science was the way “environmentalists” use it to promote their agenda. Well, by rejecting the science, aren’t you yielding it to them to do with as they will? Put another way, if Al Gore had been standing alongside John McCain or Jim Baker or even Pat Robertson, do you think he would have an Oscar and a Nobel Peace Prize now?
Ray Ladbury says
Mugwump, There’s nothing magical about the Normal distribution other than its role in the Central Limit theorem–and here we are not concerned with central tendencies. Bayesian methods offer one way of dealing with the high-end tail. See for example:
http://www.jamstec.go.jp/frcgc/research/d5/jdannan/prob.pdf
I’ve pointed out in the past that the choice of Prior is arbitrary, and that we are still in the realm where the results are heavily influenced by the choice of Prior. However, you can’t blame the models–the data are the data. All you can do is try to gather more.
David B. Benson says
Greg (145) — There are already harms from the piddley global warming so far. As a good engineer, you will want to discover what these have been.
[Captcha agrees, stating “in concerning”.]
Dave Andrews says
Gavin,
Apropos reading tea leaves how does “Model inadequacy is a serious barrier to the interpretation of model results for decision support” count?
One of many, as I am sure you know, of the critiques of climate modelling in Stainforth et al, Phil. Trans.R. Soc. A (2007) 365, 2145-2161
[Response: For what? Some decision-makers want model output at the grid box level to be meaningful – but clearly, models are inadequate for that purpose. It does not mean that they are inadequate for all purposes. – gavin]
Al Tekhasski says
Mark (#135) said:
“the height above sea level of one optical depth at that wavelength is a lot higher. Higher = cooler. Cooler = less re-radiation and so more entrapment of that energy within the system.”
Yes, this is the main idea of AGW in general: higher = cooler = less OLR. But is this concept really true when you look into the details? The reason for my question was to confirm that the official AGW forcing comes from sideband broadening that covers about 7-14 cm-1 of the IR band. At the same time, there is a 150 cm-1 wide range of the CO2 absorption band where “it is saturated”. But is it really “saturated”, and does it therefore contribute no more? In fact, the emission boundary in the 700 cm-1 band extends above the tropopause, into the stratosphere, where higher = warmer = more OLR! As a result it is supposed to cause stratospheric cooling, a well admitted effect. Shouldn’t the cooling over a 10x-20x wider spectral window at least counterbalance the effect of mid-troposphere sideband forcing? That would easily explain the lack of observed “warming”. Of course, there are thousands of climate researchers around the world; they should not make this kind of omission, right?
mugwump says
RE Ray Ladbury #163:
I assume you are referring to the Kiehl reference. I am not claiming sensitivity is optimized. Sensitivity is hard to control manually. It’s aerosol forcing that is optimized.
Kiehl shows that the different climate models exhibit a wide range of sensitivities, but nevertheless successfully track the 20th century temperature record. How can they do that? By manually adjusting the aerosol forcings (which are up for grabs within a wide range). Whence Kiehl’s remark:
J. Althauser says
spencer –
too bad such explanations didn’t stick with G. H. W. Bush’s advisor John Sununu. As an engineer he ‘played around with’ the results of a climate model and convinced himself (and allies in the administration) that the world needn’t be too concerned. The issues he brought up almost 20 years ago are still being promulgated by confusionists.
The Saga of My Greenhouse Effect on the White House
http://www.ncar.ucar.edu/archive/documents/manuscript/481_1990_greenhouse_saga.pdf
mugwump says
RE #163 Ray Ladbury:
I don’t believe in this idea of “consensus science”. Is Roe and Baker, pretty much demolished in this thread, part of the “consensus science”?
I am simply opposed to bad science, of whatever flavour (and yes, there is plenty of shockingly bad science from some prominent skeptics too).
There does appear to be a strong selection bias in much of the literature that, in my opinion, is driven by a fairly pervasive environmentalist element amongst portions of the academic community. For example, had Roe and Baker been supportive of the skeptical view, I suspect it would have been scrutinized far more carefully by the realclimate folks (they would have no doubt eviscerated it using similar arguments to those I have put forward).
Of course not. Hollywood is wall-to-wall Democrat. And the Nobel peace prize committee ain’t exactly a right-wing think tank. What’s your point?
Lawrence Brown says
“But the integral of 1 / (1-f) is -log(1-f). So the integral from 0 to 1 of 1 / (1-f) is -log (1-1) + log (1-0) = -log(0) = infinity.”
Time to lighten up-
Also the(indefinite) integral of d(cabin)/cabin = Natural log cabin+C.
mugwump says
Re #164 Ray Ladbury:
Well, the whole point of the Roe and Baker paper is to show a normally distributed feedback factor f yields a very long-tailed climate sensitivity distribution. That’s fine, but such a long-tailed distribution clearly does not represent our true uncertainty concerning the climate sensitivity, so it’s unclear what the point of the exercise is.
Hank Roberts says
> pervasive environmentalist element
Look, there are nitwits lacking science education on all spokes of the political wheel, some of them very far out from the center, very firm in their faith in what they believe. So what?
If you come from that kind of position, and wish to discuss the world with people in the sciences — for example issues about ecology — you aren’t even talking the same language.
Regardless of what your political position is, if you haven’t learned the science, you’re uneducated about the science and your faith and beliefs are just political positions.
Avoid the “ists” and the “ians” and the “ites” — and learn from the “ologists” — and you’ll’ve made a start.
Education. Not required for politics. Required for civilization, on the longer term, though.
dhogaza says
As always, Mugwump conquers in his own mind. His genius is special, because he only demolishes mainstream science in the blogosphere, not in the scientific arena, where it counts.
He’s also proven that he knows more about GCMs than those who write them. I’m very, very, impressed.
Sorry, mugwump, you live in a fantasy world.
Ike Solem says
It’s always worth remembering that the only reason climate denialism has persisted so long is the low level of scientific education among both U.S. media reporters and the general public, by the way.
It’s also worth noting that the basic notion was pretty solid back in 1978, and the general predictions have only been slightly modified since then.
Try a 1974 paper, for example. Manabe & Wetherald, Effects of Doubling the CO2 Concentration on the climate of a General Circulation Model.
Guess it was just honest confusion on the part of the media for the past 30 years… hardly. It was deliberate propaganda aimed at preventing a large-scale shift away from fossil fuels and towards renewable energy, and it is still going on today.
Ray Ladbury says
Mugwump, Aerosols adjusted by hand? No. However, there is considerable uncertainty in aerosol forcing, so a wide range is tolerated. One can equally look at this as a matter of skill of the model in predicting the right aerosol forcing.
My point about Gore is that without his climate schtick, he’d be just another washed up politician. Because folks on the right have either rejected good science or refused to stand up and call for action upon that science, Al Gore has been left alone on that bully pulpit. The result: An Oscar, a Nobel Peace Prize and about 8% odds according to Vegas bookies that he’ll one day be president.
Likewise, climate change has gained increasing acceptance–hell both major party candidates say it needs to be addressed, as well as folks from Al Sharpton to Pat Robertson. By continuing to reject the science–which it is clear you have not yet understood–you are merely leaving one more seat on the right of the negotiating table unoccupied. The left side is pretty crowded.
So, my suggestion, while you are here, take some time and learn something about the science. You may not be convinced, but at least your arguments will be more on point and hopefully you will see why the science has convinced intelligent, educated folks in the political center and even on the right, not just the left.
Peter Williams says
Thanks for the post.
My background is astrophysics which, like atmospheric physics, often relies heavily on numerics. I’m always amazed how physicists expect complex phenomena to be reducible to simple back-of-the-envelope calculations. I see the same thing all the time in my current job, where I work mostly with engineers who are always suspicious of an answer that can’t be spit out in a few lines of math. Sometimes this is possible; sometimes it’s not.
While I myself am quite adept at back-of-the-envelope gymnastics, I recognize that there are some problems for which you quite simply cannot give even the roughest quantitative estimate using such methods. I guess the expected warming from doubling CO2 is one of those.
Patrick Caldon says
Dan Hughes,
I had exactly the same question as you did a few months ago. I found it very helpful to:
– buy a couple of textbooks on the subject
– read them carefully
– when reading code read the text in concert
If you include the attached references the documentation for the stuff I’ve read is more than adequate.
mugwump says
RE Ray Ladbury #175:
No, one cannot. The models use a wide range of differing values for the aerosol forcing. They can’t all be correct.
A lot of it is not “good” science. Calls for action are more often than not based upon the more extreme and unjustifiable projections of climate sensitivity such as in the Roe and Baker paper.
I’ll take those odds. He’ll never be president.
The reasons for Al Gore’s Oscar and Nobel Peace Prize are a matter of opinion. I’ve given mine. You clearly differ. It’s not something that particularly concerns me.
Puhlease. I understand the science just fine. I have read a ton of the original papers. [edit]
[Response: Dealing with uncertainties in the forcings is necessary – otherwise you can’t deal with the true spread of uncertainties. However, the 20th C trends range from 0.3 to 1.1 deg C in the different AR4 models, so the idea that they were all tuned to get exactly the right answer is just wrong. For those that had trends about the right level, the range of aerosol forcings, sensitivity and internal variability provide a consistent simulation of what might have happened. – gavin]
Martin Vermeer says
Al Tekhasski #167:
You can look this up, as Hank would say. Go to David Archer’s on-line model:
http://forecast.uchicago.edu/Projects/modtran.html
Remove, for simplicity, everything but CO2. See the little “counter-peak” in the middle of the 600-800 cm^-1 band. That’s stratospheric cooling. High CO2, radiatively cooling the surrounding air.
Double CO2. See the “flanks” of the band move outward. That’s the tropospheric greenhouse effect. See also how the “counter-peak” strengthens. But it’s still a lot smaller than the movement of the flanks.
But you’re definitely on the right track.
Right…
Mark says
Al #167:
“But does it really “saturated” and therefore does not contribute anymore?”
But the saturation layer is thicker.
Like putting another blanket on your bed: the lower blanket blocks the same way as it did on its own. So why are you warmer?
Your 2 degrees per doubling is a good *approximation*, and you DID say “order of magnitude”, not “accurate”, so why do you say it can’t be 4? That’s within your “order of magnitude”. If you wanted “within +/- 10%” then your process was inaccurate. And one reason is that the thicker layer blocks more, so ignoring it completely is incorrect and inaccurate.
So which were you looking for? Order of Magnitude or Accurate?
Mark says
mugwump, #160:
“You can write it as a limit if you want. You still get no upper bound on the expected climate sensitivity from Roe and Baker’s model.”
Only under Richard’s assumption of how to model it.
And you can’t have infinities; they cause a lot of problems. But the demand from R is that the weighting results in an integral of 1/(1-f) when f goes to 1. Why?
“Roe and Baker (RB) model the “total feedback factor” f as a gaussian. Climate sensitivity is proportional to 1 / (1-f). Obviously, we should bound f to be between 0 and 1, which a gaussian doesn’t do, but we can just truncate and rescale to get a valid probability density function for f, which will still have the functional form of a gaussian for f between 0 and 1.”
So RB has a factor F as a gaussian, but bounding f to be 0-1 is not gaussian. A demand brought in for no apparent reason. Why can it not be between 0.1 and 0.4? No infinities there. After all, it isn’t a Gaussian for your purposes either.
The proportionality only works if you normalise it, but that requires you to divide the values by the area under the curve. Your and Richard’s requirement gives a divisor of infinity. So it only has value at f=1. Everywhere else it is zero.
Given that there are so many issues with what you brought forward as “truth”, where are the resolutions to this?
Heck, ignore all that and what does this problem (if it did exist as you portray it) have to do with climatology and AGW? Nowt. The models don’t use f=1. So it breaking down at f=1 is irrelevant.
Harmonic oscillation has a pendulum being forced back toward vertical by a gravitational restoring force taken to be proportional to the angular offset from vertical.
That linearity is assumed, and all the maths uses the approximation. However, the force is actually proportional to the sine of the angle, whose series expansion contains higher-order terms, so the linear form is inaccurate. It is accepted because the angle is taken to be so near zero that the linear approximation is good. Take theta out to 50 degrees and the motion is no longer harmonic. Does this mean we cannot use harmonic motion to describe a pendulum's movement? No. We still have pendulum clocks. We just keep them swinging through small angles.
Same here.
Within a range of accepted sensitivities, it will approximate to 1/(1-f). Outside this range, this ratio will become less and less accurate because the 1/(1-f) is an approximation.
Yet you demand we take this equation to areas it doesn’t hold and surprise surprise, we get an answer that is incorrect and invalid.
Whoop de do.
I guess approximating a normal distribution to a selection of 10,000 respondents must be dropped because it breaks down when you have 3 respondents.
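That small-angle point is easy to check directly (a quick sketch; the angles are just illustrative):

```python
import math

# The harmonic treatment replaces the true restoring force (proportional to
# sin(theta)) with theta itself. The relative error shows where that holds.
for deg in (1, 5, 20, 50):
    theta = math.radians(deg)
    rel_err = (theta - math.sin(theta)) / math.sin(theta)
    print(f"{deg:3d} deg: linearisation error {rel_err:.2%}")
```

At a few degrees the error is a small fraction of a percent; at 50 degrees it is around 14%. That is the whole point: the approximation is excellent inside its intended range and degrades outside it.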
Barton Paul Levenson says
rutger posts:
If you actually do the matrix math that governs Milankovic cycles, you find that the next significant stades are at 20,000 and 50,000 years from now. Ice ages can wait; right now we have to deal with global warming.
Barton Paul Levenson says
greg posts:
No kidding, really? I’m glad you told me that. I didn’t know.
Another fact I just wasn’t aware of!
Proof is for formal logic or mathematics, not science. Science can only disprove things.
Gosh, we wouldn’t want that to happen.
Interesting question. If we had built our present civilization on solar and wind and biomass energy, history might well have been a little easier on great numbers of people. For example, if Londoners weren't burning coal to heat their homes in the period 1750-1875, a lot fewer little kids would have been enslaved as chimney sweeps, to work in soot for no pay, get scrotal cancer, and be kicked out when they got too big to crawl through the chimneys. For that matter, we wouldn't have had the mass deaths of the 1952 London inversion, or the deaths in Donora, PA in 1948. Interesting speculation indeed.
I think some of the posters here may be girls, greg. Better have something available to deal with the cooties!
Personally, I have faith in God. It’s hard to work up much enthusiasm for faith in the species that brought us Auschwitz, Kolyma, “rape camps,” slavery, child molesting and Darfur.
And over anyone who gets in our way, eh?
It has already turned out to be a problem. Ask the Australians. And we know how to solve it — switch to renewable sources of energy.
Barton Paul Levenson says
Ray Ladbury posts:
Point of information — Pat Robertson accepts global warming and urged his congregation to act on it. He has even appeared in an ad on the subject with Al Sharpton, of all people.
Dave Andrews says
# 173, Ike,
Nice quote. What about this one?
“Manabe is a man who feels pretty sure of what he knows but who also keenly feels the limits to knowledge. Both aspects of his intellectual character contribute to another irony – his skepticism about just how serious a problem global warming really is. Though Manabe was the first to treat the greenhouse effect just right, and though he is credited by lifelong colleagues with always having kept his eye sharply focused on the key roles of the greenhouse gases in climate, he does not exactly share their acute concern about where rising carbon dioxide levels are taking us.”
From William Sweet, ‘Kicking the Carbon Habit’, Columbia University Press, 2006, p108, (emphasis added)
Mary Hinge says
Off topic, but I thought you would like to hear what Jack Koenig, of the anti-AGW group Mysterious Climate Project, has to say about evolution by natural selection. I don’t know how you can hold a scientific debate with someone who rejects one of the firmest theories we have!
“Could you please list some of the evidence? As far as I knew, evolution is still just a theory, a theory which may have substance on one hand, and not on another. You also have to remember the evolution theory was hammered into everyones’ skulls by the Lamestream Media (ABCNNBCBS) in much the same manner as they are hammering AGW into everyones’ skulls.”
The whole of this link from ‘Watts up with that’ is here http://wattsupwiththat.wordpress.com/2008/09/08/uah-global-temperature-dips-in-august/#comment-38350
Enjoy!
mugwump says
RE Mark #182:
Mark, the climate sensitivity is proportional to 1 / (1-f). Negative f would imply negative feedback, which is pretty unlikely. [By negative feedback I mean that the temperature rise from increasing CO2 is less than what you’d get if there was no feedback]. Hence the lower bound on f of 0. [NB: you don’t actually need this lower bound for the calculations I gave because the tail of Roe and Baker’s distribution for f less than zero is negligible]
For the upper bound, climate sensitivity increases to infinity as f approaches 1. If you allow f to go above 1, the sign flips and you get a sensitivity of -infinity ranging up to zero as f increases from 1 to infinity. Negative values of sensitivity are even more unphysical than negative feedback, since they imply temperature actually decreases as CO2 increases. Hence the upper bound of 1.
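To make that concrete, here is a quick numerical sketch (the Gaussian parameters are illustrative, not Roe and Baker's actual fit): truncating the Gaussian for f to [0, 1] and renormalising gives a perfectly good pdf, but the expectation of 1/(1-f) still grows without bound as the upper integration cutoff approaches 1.

```python
import math

MU, SIGMA = 0.65, 0.13   # illustrative mean/width for the feedback factor f

def pdf(f):
    """Gaussian in f, truncated to [0, 1] and renormalised to unit area."""
    z = 0.5 * (math.erf((1.0 - MU) / (SIGMA * math.sqrt(2.0)))
               - math.erf((0.0 - MU) / (SIGMA * math.sqrt(2.0))))
    g = math.exp(-0.5 * ((f - MU) / SIGMA) ** 2) / (SIGMA * math.sqrt(2.0 * math.pi))
    return g / z

def expected_sensitivity_factor(eps, n=200_000):
    """Trapezoid estimate of E[1/(1-f)] over [0, 1 - eps]."""
    a, b = 0.0, 1.0 - eps
    h = (b - a) / n
    total = 0.5 * (pdf(a) / (1.0 - a) + pdf(b) / (1.0 - b))
    for i in range(1, n):
        f = a + i * h
        total += pdf(f) / (1.0 - f)
    return total * h

for eps in (1e-2, 1e-4, 1e-6):
    print(f"cutoff 1 - {eps:g}: E[1/(1-f)] ~ {expected_sensitivity_factor(eps):.2f}")
```

The estimate keeps climbing roughly logarithmically as eps shrinks; that is the "no upper bound on expected sensitivity" point. Note the pdf itself is perfectly normalisable, so the divergence is in the expectation, not the distribution.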
Mark, the issues are in your head. Read the paper, and ask yourself honestly if you really understand it. [edit]
Ray Ladbury says
Barton, Yeah, I know about Robertson and the video with Sharpton. However, Gore is still the “lone crusader,” and this has helped Al even as it’s hurt climate science (Al is a whipping boy for the political right–they won’t even agree the Sun rises in the East if he asserts it.). Had anyone on the right been up there with Al, it would have de-emphasized him and made the reality of climate change more acceptable to the right. As it is, they will oppose it to their dying day.
Fred Staples says
The comment about Ångström’s saturation experiment (70) came from Raypierre (Uncertainty, noise and the art of model-data comparison), not you Gavin. Sorry.
“On the other hand, the carbonate chemistry used by R&S is standard high-school stuff, and there’s a good chance it would have been discovered much earlier if people had paid more attention to Arrhenius, and if Ångström hadn’t set back the field by a highly publicized but botched experiment. –raypierre”
However, to reject Ångström’s results out of hand you should repeat the experiment. Here are the results from a crude high-school experiment reported on the internet. Air, and 100% CO2, both at 22 degrees C, were subjected to identical infra-red irradiation.
After 5 minutes, the CO2 had warmed 15 degrees C and the air 10 degrees C.
But in successive 5-minute intervals the temperature increases were:

  Interval    Air         100% CO2
  2nd         3 degrees   3 degrees
  3rd         5 degrees   5 degrees
  4th         4 degrees   5 degrees

The experimentalist thought that this demonstrated that CO2 acted as a greenhouse gas. I think it demonstrated that Ångström’s saturation effect was correct.
Has this experiment ever been done properly, and for more than 20 minutes?
[Response: I told you before – the HITRAN database is based on much more sophisticated measurements of these effects for individual lines of the spectrum and at multitudes of different temperatures, pressures and mixtures of gases. Experiments in high school are all very interesting, but hardly what NIST and others spend their time doing. – gavin]
As for your second comment, the second law of thermodynamics is a statement about the difference between energy and heat. Energy can transfer from a colder body – heat can’t. If that were made clear in all comments, (the colder atmosphere cannot warm the earth directly) simplistic explanations of AGW would not be repeated.
[Response: This is nonsense. Downwelling LW radiation from the atmosphere can be measured even by high schoolers. But apparently this doesn’t count in the energy budget of the surface? Energy=heat in this context. – gavin]
You reject Essenhigh’s comments because he neglects the effect of absorption in the upper atmosphere. This is the point of Barton’s piece (62) explaining surface warming via relative absorption in the upper atmosphere. If you express the surface temperature increase (via Stefan-Boltzmann) as a function of the upper atmosphere absorption relative to the lower atmosphere, the effect is significant only if the upper layer absorption is more than 10% of the lower.
But the H2O absorption is absent (at least a factor of 10) and the pressure is lower by a further order of magnitude. At a relative absorption of .01, the effect of doubling the CO2 in the upper atmosphere is negligible.
[Response: No, it just isn’t. (And for these purposes, the upper atmosphere is 5 to 6 km above the surface). We are not talking about the stratosphere. – gavin]
As for the CO2/H2O spectral overlap, Chuck Wiese (on Dr Pielke’s web site) used HITRAN to calculate the impact of doubling CO2. His results gave an increased flux of about 2 watts per square metre, half of which radiates down. The temperature increase to compensate (via Stefan-Boltzmann) is 0.16 degrees C.
[Response: I have no idea what he’s done, but it’s wrong. The forcing at the tropopause (after stratospheric adjustment) from 2xCO2 is around 4 W/m2 using HITRAN (Myhre et al, 1998, Collins et al 2006 etc.). When Wiese gets his result past peer review, let me know. – gavin]
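For what it's worth, the conversion from a forcing to a no-feedback temperature change via Stefan-Boltzmann is easy to check (a sketch using textbook values; the disagreement above is over the size of the forcing, not this arithmetic):

```python
SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_response(dF, T):
    """Linearised Stefan-Boltzmann: dF = 4*sigma*T^3*dT, so dT = dF/(4*sigma*T^3)."""
    return dF / (4.0 * SIGMA_SB * T ** 3)

# ~1.1 K for the standard ~4 W/m^2 2xCO2 forcing at the 255 K emission temperature
print(planck_response(4.0, 255.0))
# ~0.18 K per 1 W/m^2 at the 288 K surface temperature, close to the 0.16 C above
print(planck_response(1.0, 288.0))
```

So the two numbers differ mainly because of the assumed forcing (1 vs 4 W/m^2), before any feedbacks are considered.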
You do not comment on the UAH lower and mid troposphere temperature trends, which, since 1978, are obviously not compatible with the “higher is colder” AGW theory.
[Response: Huh? higher is colder is from the adiabat which has nothing to do with trends since 1978. – gavin]
As for the Little Ice Age, it was not visible in the original hockey-stick (George Monbiot abolished both it and the Medieval Warm Period in a contemporary Guardian article). Now it is visible, and it gets a mention in the latest post – “Progress in reconstructing climate in recent millennia”.
[Response: Huh? (again!). What on earth are you talking about? The cooler period in the 17th-18th century (and also in the 15th) is visible in all reconstructions. I’ve even written papers about it. – gavin]
The non-scientific community is heavily influenced by the certainty expressed on this web-site. Is it justified?
[Response: The only certainty is that the number of people who keep posting completely rubbish half-digested contrarian nonsense seems to be a constant. – gavin]
mugwump says
RE gavin #179:
I never said they were all tuned to get exactly the right answer. In fact, I was quite careful in #159 to point out that the process does not even have to involve explicit tuning:
If, as you claim, modelers choose aerosol forcings without a view to the impact on 20th century fidelity, then you’d expect no correlation between the choice of aerosol forcing and model climate sensitivity. OTOH, you’d expect an inverse relationship if there is any feedback from 20th century fidelity to aerosol choice in the modeling process. Kiehl shows there is a strong inverse relationship.
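A toy version of that argument (the numbers are purely illustrative, not Kiehl's data): if every model must reproduce roughly the same observed 20th-century warming, then dT ≈ S × (F_ghg + F_aer) forces the chosen aerosol forcing F_aer to vary inversely with the model's sensitivity parameter S.

```python
# Hypothetical ensemble: observed warming and GHG forcing held fixed,
# aerosol forcing back-solved so each model matches the record.
dT_obs, F_ghg = 0.7, 2.4                     # K and W/m^2, illustrative only
S = [0.4, 0.5, 0.6, 0.8, 1.0, 1.2]           # sensitivity parameter, K/(W/m^2)
F_aer = [dT_obs / s - F_ghg for s in S]      # implied aerosol forcing, W/m^2

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(pearson(S, F_aer))  # strongly negative, close to -1
```

Any feedback from 20th-century fidelity into the aerosol choice produces this kind of anti-correlation; no feedback at all should produce none.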
Ray Ladbury says
Mugwump, re our conversation:
RL: One can equally look at this as a matter of skill of the model in predicting the right aerosol forcing.
M: No, one cannot. The models use a wide range of differing values for the aerosol forcing. They can’t all be correct.
Actually, one can. If the model has to assume an improbable value for aerosol forcing to achieve agreement, it has less skill.
M: Puhlease. I understand the science just fine. I have read a ton of the original papers. [edit]
Uh, given that you don’t seem to understand the difference between statistical modeling and dynamical modeling, I would beg to differ. Look, this is not simple science. I have a PhD in physics and had to devote a year of my spare time to understand the basics of the subject. Have you, for instance, perused Raypierre’s book?
mugwump says
Come on Ray. That’s a false dichotomy. GCMs are a combination of both, whichever way you cut it.
There’s no way of saying this without sounding obnoxious, but since you showed me yours, I’ll show you mine. I have a PhD in mathematical statistics and computer science. I did two years of a PhD in theoretical physics before I got interested in my present field and switched topics. I was the top undergraduate student in theoretical physics and pure mathematics at my university, and did a third undergraduate major in computer science. I worked as an academic for 5 years before moving into industry, and published about 30 peer-reviewed papers with an H-index of 18.
This is actually pretty simple science compared to my first loves: quantum field theory and general relativity. That’s not to say modeling the climate is easy. Or answering questions like “what is climate sensitivity” is easy. But the basic tools are familiar to anyone with a strong background in physics, statistics, mathematics and computer science.
pete best says
[Response: The only certainty is that the number of people who keep posting completely rubbish half-digested contrarian nonsense seems to be a constant. – gavin]
Gavin – you have the patience of a saint, but most people would have given up around 50 posts ago. You must be pulling your hair out (that’s if you have any left, of course ;) )
Barton Paul Levenson says
mugwump writes:
Then you don’t believe in science. Modern science runs on peer review and the scientific consensus. And it is so enormously productive we’d be crazy to change it.
veritas36 says
Do the models have any prediction for NH?
We had torrential, lengthy rainstorms all summer. Driving on a freeway all I could see was a flickering tail light in front of me. It was not possible to change lanes. This is not traditional NH rain, and was not necessarily associated with a thunderstorm. The amount of rainfall (and winter snowfall) is up about 50% this year.
I know that global warming predicts in general more intense, extreme rainfall. Has anyone looked at satellite data to see that this is happening? Do the models predict NH might get this? What can you say about what’s happening here, if anything?
Thanks.
Rod B says
Mark, a clarification: one does not linearly continue to get warmer as more and more blankets are added to your bed. At some point adding another blanket will not make any discernible difference.
[Response: But you only need look to Venus to know that this is a long way off. – gavin]
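The blanket picture can be put in numbers with the simplified CO2 forcing expression from Myhre et al. (1998), cited above: dF = 5.35 ln(C/C0) W/m^2. Each doubling adds the same ~3.7 W/m^2, so the per-ppm effect diminishes but never reaches zero (a sketch):

```python
import math

def co2_forcing(c, c0=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998): 5.35*ln(C/C0), W/m^2."""
    return 5.35 * math.log(c / c0)

print(co2_forcing(560.0))    # first doubling:  ~3.71 W/m^2
print(co2_forcing(1120.0))   # second doubling: ~7.42 W/m^2 (same increment again)
```

Diminishing returns per added molecule, but no hard ceiling; hence gavin's point about Venus.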
Ray Ladbury says
Mugwump, great, that answers part of my question: you are educated and intelligent. Now to the other part: have you read Raypierre’s text or some equivalent?
Would you expect to understand QED or GR with only a brief effort? Would you expect to catch Steve Weinberg in an error, for instance? Do you think statistical mechanics (which is closer to what we have in climate science) is any easier than these subjects?
Barton, brings up a good point: Perhaps you would tell us what you mean by “consensus science”.
walter pearce says
Mugwump:
How many angels can you fit on this pin? For one whose skepticism is based largely on the behavior of others (see your comment 54) you’ve become rather vehement yourself.
Given the complexities (the subject of the original post), do you not find the temperature trends over the last two decades not only to be persuasive indicators of the models’ validity, but of a certain utility as well? Or am I missing your point?
Rod B says
This is kinda related to part of Fred’s and Gavin’s, et al discussion, but something that quietly disturbs me (from a knowledge perspective). Is all of the downwelling IR radiation emitted from the internal energy stores of greenhouse gases? The vast majority? (That standard IPCC Radiation Budget image implies that it does.) If this is the case would this not be an example of energy transfer but not heat (per se) transfer, even though the absorbed downwelling radiation energy turns into heat?
[Response: Downwelling LW (also called IR or thermal radiation!) comes from clouds, water vapour, trace GHGs and aerosols. Whatever you call it, it still occurs and does not violate any of thermodynamics. – gavin]
On the other hand, if the downwelling radiation comes from non-GHGs doesn’t this emission decrease heat (temperature) of the emitting gas? Wouldn’t this mean that heat (along with energy…) is being transferred from a low temperature to a higher temperature in violation of Thermodynamics? Is the answer here (if there need be an answer) that one can (theoretically) find a certain portion of an atmosphere layer (a “slice of the Boltzmann distribution” if you will) with a higher temperature (heat content) than the layer as a whole?
[The basic question I have is where does all of that downwelling IR radiation (~85% of the surface IR emission) come from and how does it manage to get back to the surface.]