Guest commentary by Spencer R. Weart, American Institute of Physics
I often get emails from scientifically trained people who are looking for a straightforward calculation of the global warming that greenhouse gas emissions will bring. What are the physics equations and data on gases that predict just how far the temperature will rise? A natural question, when public expositions of the greenhouse effect usually present it as a matter of elementary physics. These people, typically senior engineers, get suspicious when experts seem to evade their question. Some try to work out the answer themselves (Lord Monckton for example) and complain that the experts dismiss their beautiful logic.
The engineers’ demand that the case for dangerous global warming be proved with a page or so of equations does sound reasonable, and it has a long history. The history reveals how the nature of the climate system inevitably betrays a lover of simple answers.
The simplest approach to calculating the Earth’s surface temperature would be to treat the atmosphere as a single uniform slab, like a pane of glass suspended above the surface (much as we see in elementary explanations of the “greenhouse” effect). But the equations do not yield a number for global warming that is even remotely plausible. You can’t work with an average, squashing together the way heat radiation goes through the dense, warm, humid lower atmosphere with the way it goes through the thin, cold, dry upper atmosphere. Already in the 19th century, physicists moved on to a “one-dimensional” model. That is, they pretended that the atmosphere was the same everywhere around the planet, and studied how radiation was transmitted or absorbed as it went up or down through a column of air stretching from ground level to the top of the atmosphere. This is the study of “radiative transfer,” an elegant and difficult branch of theory. You would figure how sunlight passed through each layer of the atmosphere to the surface, and how the heat energy that was radiated back up from the surface heated up each layer, and was shuttled back and forth among the layers, or escaped into space.
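To see just how crude the single-slab picture is, here is the textbook back-of-envelope version (a standard exercise, assuming an albedo of about 0.3 and a solar constant of about 1361 W/m²; not a calculation from the history recounted here): balance absorbed sunlight against emitted infrared for a planet whose atmosphere is one perfectly infrared-absorbing pane,

\[
\sigma T_e^{4} = \frac{(1-A)\,S}{4}\;\Rightarrow\; T_e \approx 255\ \mathrm{K},
\qquad
\sigma T_s^{4} = 2\,\sigma T_e^{4}\;\Rightarrow\; T_s = 2^{1/4}\,T_e \approx 303\ \mathrm{K}.
\]

The observed mean surface temperature is about 288 K, and worse, the fully absorbing slab has no handle at all on how much CO2 the air actually contains; giving the slab a partial absorptivity only pushes the problem back to choosing that number, which is exactly what requires the layer-by-layer spectroscopy described next.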
When students learn physics, they are taught about many simple systems that bow to the power of a few laws, yielding wonderfully precise answers: a page or so of equations and you’re done. Teachers rarely point out that these systems are plucked from a far larger set of systems that are mostly nowhere near so tractable. The one-dimensional atmospheric model can’t be solved with a page of mathematics. You have to divide the column of air into a set of levels, get out your pencil or computer, and calculate what happens at each level. Worse, carbon dioxide and water vapor (the two main greenhouse gases) absorb and scatter differently at different wavelengths. So you have to make the same long set of calculations repeatedly, once for each section of the radiation spectrum.
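As a purely illustrative sketch of what "divide the column into a set of levels" means in its very crudest form, here is a toy grey-gas calculation in Python (a toy example only, not a reconstruction of anyone's historical computation; a real calculation repeats this level-by-level bookkeeping for every section of the spectrum using measured absorption data for CO2 and water vapor):

```python
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0           # solar constant, W m^-2 (illustrative value)
ALBEDO = 0.30         # assumed planetary albedo

T_EFF = ((1.0 - ALBEDO) * S0 / 4.0 / SIGMA) ** 0.25   # effective emission temperature, ~255 K

def grey_column(n_layers):
    """Radiative equilibrium of n perfectly IR-absorbing, isothermal layers.

    Under this deliberately crude assumption, layer i counted from the top obeys
    T_i^4 = i * T_EFF^4 and the surface obeys T_s^4 = (n + 1) * T_EFF^4.
    Returns (layer temperatures top to bottom, surface temperature).
    """
    layers = [(i * T_EFF ** 4) ** 0.25 for i in range(1, n_layers + 1)]
    surface = ((n_layers + 1) * T_EFF ** 4) ** 0.25
    return layers, surface

for n in (0, 1, 2, 3):
    _, t_surf = grey_column(n)
    print(f"{n} opaque layers -> surface temperature {t_surf:5.1f} K")
```

Each extra opaque layer jacks the surface up by tens of degrees, which is obviously not how an extra few parts per million of CO2 behaves; getting a believable number means replacing "opaque layer" with real, wavelength-dependent absorption at every level, and that is where the pencil gives way to the computer.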
It was not until the 1950s that scientists had both good data on the absorption of infrared radiation, and digital computers that could speed through the multitudinous calculations. Gilbert N. Plass used the data and computers to demonstrate that adding carbon dioxide to a column of air would raise the surface temperature. But nobody believed the precise number he calculated (2.5ºC of warming if the level of CO2 doubled). Critics pointed out that he had ignored a number of crucial effects. First of all, if global temperature started to rise, the atmosphere would contain more water vapor. Its own greenhouse effect would make for more warming. On the other hand, with more water vapor wouldn’t there be more clouds? And wouldn’t those shade the planet and make for less warming? Neither Plass nor anyone before him had tried to calculate changes in cloudiness. (For details and references see this history site.)
Fritz Möller followed up with a pioneering computation that took into account the increase of absolute humidity with temperature. Oops… his results showed a monstrous feedback. As the humidity rose, the water vapor would add its greenhouse effect, and the temperature might soar. The model could give an almost arbitrarily high temperature! This weird result stimulated Syukuro Manabe to develop a more realistic one-dimensional model. He included in his column of air the way convective updrafts carry heat up from the surface, a basic process that nearly every earlier calculation had failed to take into account. It was no wonder Möller’s surface had heated up without limit: his model had not used the fact that hot air would rise. Manabe also worked up a rough calculation for the effects of clouds. By 1967, in collaboration with Richard Wetherald, he was ready to see what might result from raising the level of CO2. Their model predicted that if the amount of CO2 doubled, global temperature would rise roughly two degrees C. This was probably the first paper to convince many scientists that they needed to think seriously about greenhouse warming. The computation was, so to speak, a “proof of principle.”
But it would do little good to present a copy of the Manabe-Wetherald paper to a senior engineer who demands a proof that global warming is a problem. The paper gives only a sketch of complex and lengthy computations that take place, so to speak, offstage. And nobody at the time or since would trust the paper’s numbers as a precise prediction. There were still too many important factors that the model did not include. For example, it was only in the 1970s that scientists realized they had to take into account how smoke, dust and other aerosols from human activity interact with radiation, and how the aerosols affect cloudiness as well. And so on and so forth.
The greenhouse problem was not the first time climatologists hit this wall. Consider, for example, attempts to calculate the trade winds, a simple and important feature of the atmosphere. For generations, theorists wrote down the basic equations for fluid flow and heat transfer on the surface of a rotating sphere, aiming to produce a precise description of our planet’s structure of convective cells and winds in a few lines of equations… or a few pages… or a few dozen pages. They always failed. It was only with the advent of powerful digital computers in the 1960s that people were able to solve the problem through millions of numerical computations. If someone asks for an “explanation” of the trade winds, we can wave our hands and talk about tropical heating, the rotation of the earth and baroclinic instability. But if we are pressed for details with actual numbers, we can do no more than dump a truckload of printouts showing all the arithmetic computations.
I’m not saying we don’t understand the greenhouse effect. We understand the basic physics just fine, and can explain it in a minute to a curious non-scientist. (Like this: greenhouse gases let sunlight through to the Earth’s surface, which gets warm; the surface sends infrared radiation back up, which is absorbed by the gases at various levels and warms up the air; the air radiates some of this energy back to the surface, keeping it warmer than it would be without the gases.) For a scientist, you can give a technical explanation in a few paragraphs. But if you want to get reliable numbers – if you want to know whether raising the level of greenhouse gases will bring a trivial warming or a catastrophe – you have to figure in humidity, convection, aerosol pollution, and a pile of other features of the climate system, all fitted together in lengthy computer runs.
Physics is rich in phenomena that are simple in appearance but cannot be calculated in simple terms. Global warming is like that. People may yearn for a short, clear way to predict how much warming we are likely to face. Alas, no such simple calculation exists. The actual temperature rise is an emergent property resulting from interactions among hundreds of factors. People who refuse to acknowledge that complexity should not be surprised when their demands for an easy calculation go unanswered.
Hank Roberts says
veritas36, I pasted a chunk of your question in here; click the link:
http://scholar.google.com/scholar?q=more+intense%2C+extreme+rainfall.+Has+anyone+looked+at+satellite+data
Rod B says
Gavin, if one added one more blanket to the 300-400,000 (relative Venus to Earth CO2??) already on, how much warmer would one get? Are you already more than 98.6? How many blankets did that take? If you double CO2 on Venus, what would the temperature be? (Serious curiosity question.) As an aside, how did Venus, at 0.85 the mass of Earth, get about 100bars of pure CO2 anyway?
Thomas Hunter says
Gavin; Venus? If I remember correctly, the atmosphere of Venus in the lowest 2.5 kilometers is all supercritical fluid (and gets very little insolation anyway), and various other gases are in that state at various altitudes up to 45 kilometers, mixed with the rest in some manner. And there are sulphur clouds from something like 60-90 kilometers that block 60% of the sun, and the planet has no magnetic field and no tectonic plates. Not so much like putting on 20 blankets and having the 21st give no additional help at reducing convection.
BP Levenson: “If we had built our present civilization on solar and wind and biomass energy, history might well have been a little easier on great numbers of people.”
If pigs had wings, they wouldn’t bump their butt on the ground when they skip down the street.
Perhaps you need to re-read the history of the world since the fall of the Roman Empire, from around 200 to 500 AD. Pay particular attention to 476-1000 and 1000-1500. Follow that up with a short review of the technology available around 1700 and steam engines, 1800-1900 with railroads, and the 1930s with diesel-electric locomotives. Then take a look at China in 300 AD, the Middle East in 800, Western Europe in 1100, Nova Scotia in 1846, and various other developments centering around petroleum, coal and the internal combustion engine in the latter part of the 1800s and early 1900s. Don’t forget to include a deep look into the materials and technology available from 1806 on, after de Rivaz designed and implemented the first internal combustion engine, which, interestingly enough, ran on hydrogen and oxygen.
Here’s a little jumpstart:
http://inventors.about.com/library/weekly/aacarsgasa.htm
Thomas Hunter says
Rod B: “As an aside, how did Venus, at 0.85 the mass of Earth, get about 100bars of pure CO2 anyway?”
I am not sure whether the lack of its own internal magnetic field allowing the solar wind to blow away lighter elements like hydrogen, the basic absence of water vapor, and the basic lack of non-IR-reactive substances in the atmosphere have much to do with it. But as I mentioned, the inability of Venus to lose heat due to sulphur clouds high up in the extensive troposphere, no tectonic plate activity, and plenty of volcanoes et al., should account for all that heat and pressure. The planet doesn’t reach semi-Earthlike conditions until about 50 kilometers up. There is also the lack of axial tilt, and the 240+ days each side faces the sun. And no oceans or moon.
Quite a curious sister, no?
[Response: It is, but not for the reasons you state. It is not hot because of volcanism, but because of CO2, as Sagan suggested 30 years ago and as was borne out by the Pioneer and later probes. – gavin]
Chris Maddigan says
This may be a naive question but I seem to be getting mixed messages on this from all over the place (including here).
Are the models actually published? Can one inspect the actual equations? Are the methods by which each equation is derived documented? Are the methods used to determine each parameter documented? Are these all public? Or are the models just black boxes with the inner workings only known to their developers?
[Response: Some models are completely open (NCAR CCSM, GISS ModelE, EdGCM etc.), others are not for a variety of reasons. All of the results are available either through PCMDI, or through secondary gateways like Climate Explorer. Documentation for each model varies in quality (NCAR’s is probably the best). – gavin]
japes says
Please can we have mugwump and Ray Ladbury debating on television.
Now!
Lloyd Flack says
Are people who express doubts about the possibility of properly modeling climate trends concerned about the mathematical models themselves, or about the computer programs that evaluate them?
If it is the mathematical models themselves, then I would expect that their concerns about misspecification are misplaced. These sorts of physical models are likely to be fairly robust. Slight misspecification should not lead to huge errors in a system so constrained by physical laws and where effects are the sum of many simultaneous effects. We can do simple ballpark estimates of the effect of the greenhouse gases by themselves on global average temperatures. These simple estimates are not that far from what comes out of the more elaborate GCMs. What probably increases CO2 sensitivity by approximately half an order of magnitude are the cloud cover effects. We need the GCMs for these, and even then our estimates are not precise. But physics does say there has to be a temperature increase, and unless we have a so far undemonstrated negative feedback, this temperature increase will be enough for us to be worried.
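For readers who want to see the arithmetic behind that kind of ballpark estimate, here is the standard back-of-envelope version (purely illustrative, assuming the conventional ~3.7 W/m² forcing for doubled CO2 and an effective emission temperature of about 255 K):

\[
\Delta T_{\text{no feedback}} \;\approx\; \frac{\Delta F}{4\sigma T_e^{3}} \;\approx\; \frac{3.7\ \mathrm{W\,m^{-2}}}{4 \times (5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}) \times (255\ \mathrm{K})^{3}} \;\approx\; 1\ \mathrm{K}.
\]

Water vapour, lapse rate, ice and cloud feedbacks are what turn this roughly 1 K Planck response into the larger sensitivities discussed in this thread, and the cloud term is the one the GCMs are needed for.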
Generally we know that a scientific model is on the right path when it is useful in explaining phenomena other than what it was created to explain, when it becomes part of a coherent overall story. We trust it more when it becomes useful as a framework suggesting further avenues of investigation. Current climate models are useful in helping to understand the paleoclimatic record. A high CO2 sensitivity explains so much that has happened in the past.
Greenhouse skeptics seem mostly to focus on so far unexplained details, prematurely claiming that they cannot be explained by an extension of current models. Their arguments remind me of similar ones from creationists: because we don’t know everything, we must know nothing. Scientific hypotheses that explain most of what is going on better than the alternatives get tinkered with to explain the discordant data rather than simply thrown out. A useful theory will usually be supported by multiple lines of evidence and is confirmed by having the alternatives disproved rather than being directly proved itself. Theories like evolution are confirmed by the convergence of evidence. I think this applies to high greenhouse gas sensitivities too.
Now, if the concern is about the computer programs, is it about the low-level mathematical routines or about the way these routines are put together into a program? I wouldn’t worry about the low-level mathematical routines. These will be things like the routines for solving partial differential equations. These routines can be analyzed and tested thoroughly. Mistakes are not likely here, and if they were made, the ubiquity of these routines means the results would usually be spectacular and unmistakable.
Could mistakes be made in assembling these low-level modules into a program, so that the program does not actually calculate the outcome of the model it is intended to? Possibly, but if that occurred we would expect different errors in each model implementation; we would not expect different people programming different models to come up with results of the same order of magnitude, and you would not expect a coherent picture to emerge from these simulations unless you were on the right track. All these models are simplifications, but they are adequate approximations for our purposes. There are many adequate ways to model the same phenomena that will give similar results.
Also, these are programs that give results for all the intermediate stages when we are simulating what will happen over a long time period. If they were not actually modeling what they are meant to be modeling, this would usually become apparent early in the simulation. The programs that break down drastically are usually the ones that produce a few pieces of output only after a long run and give no output before that; in such programs, problems can bubble along out of sight.
Jim Eager says
Re Thomas Hunter @203: “Perhaps you need to re-read the history of the world since the fall of the Roman Empire…”
Perhaps not. If it wasn’t clear to you, Barton’s quoted comment [from 184] was only a rhetorical device. Do you need a jumpstart on rhetoric?
Captcha: goes gasoline
Hank Roberts says
> not hot because of volcanism, but because of CO2
And the CO2 did not get tied up in carbonate minerals instead because hydrogen (and water) were lost early on?
http://dx.doi.org/10.1006/icar.1997.5677
Hank Roberts says
> emitted from the internal energy stores of greenhouse gases
Rod, no. We’ve gone ’round this many, many times.
Greenhouse gases can absorb and emit infrared photons.
All gases can transfer heat by collisions.
The average temperature is evened out by collisions between GHGs and non-GHGs very, very quickly.
Heat is added to or removed from the surrounding air radiatively only by infrared photons, and only those molecules that can interact with infrared photons can absorb and emit them.
Think of the heat as a baseball. It’s tossed onto the field at the beginning of play, it gets moved around among all the players on the field, but only the batter can knock it out of the park, removing it from play.
Er, except, for the analogy in which the batter is the GHG, postulate that only the batter can also catch the next baseball when it’s tossed onto the field.
This is why they use calculus instead of poetry in the major leagues of climatology.
mugwump says
Ray Ladbury #198:
It’s been a long time since I read an entire textbook. They tend to be aimed at undergraduates or graduate students and are rather slow-going. That said, since Raypierre’s text is online I will give it a whirl. I usually read the original papers and if there’s some background I don’t have I google it. If that doesn’t yield an answer I go over to google books and query the textbooks there.
I have probably read around 70 climate science papers this way.
Of course not. It took quite a concerted effort as an undergraduate for me to understand them. But there’s a hierarchy in the hard sciences. Once you can do QFT, GR, and harder areas of Mathematics, the rest is pretty easy.
In terms of the abstractions involved, yes. But I don’t think statistical mechanics is close to climate science. Thermodynamics and fluid mechanics are.
I mean that creature that is invoked to justify all kinds of crazy alarmist nonsense.
[Response: Then discuss what you term the ‘alarmist’ nonsense, not the complete red herring of consensus. By avoiding generalities and focusing on specifics, you’ll find a great deal more willingness to engage. -gavin]
mugwump says
RE gavin #211:
What, like you? Eg, #190: “[Response: The only certainty is that the number of people who keep posting completely rubbish half-digested contrarian nonsense seems to be a constant. – gavin]”
Besides, I do focus on specifics. Apart from my original generic remarks at #54, the rest of this conversation – aside from the idle banter with Ray concerning his unrequited love affair with Al Gore – has mostly been about the specific problems with Roe and Baker.
[And I did try to bury the “GCMs are not statistical models” chestnut, since I think it is just a matter of terminology, but people (including you) kept bringing it up]
[Response: I know. My comment was a reminder. However, while Fred Staples is just regurgitating nonsense, your comments are of a much more reflective nature. If you avoid the bad habits of the trolls, you’ll get a better response and less of a knee-jerk response. That’s all I was trying to say. – gavin]
Ray Ladbury says
Mugwump,
I have a pretty good book on foundations of stat mech that you might enjoy–the issues are subtle indeed. Try “Physics and Chance,” by Lawrence Sklar. I would dispute that being conversant with one particular field of physics provides adequate preparation for others. You still need to put in the time to understand the basics of a field.
I’m afraid I agree with Gavin–your definition of consensus science is both vague and pejorative. How about I try a definition: we have consensus on an issue/fact/theory when it becomes indispensable to future progress in the field–that is, when you cannot increase understanding in the field without the concept and nobody publishes without it. By that definition, there is consensus that climate sensitivity is greater than 2 degrees C per doubling.
Ray Ladbury says
Mugwump, FWIW, I’m not a big Al Gore fan. I thought he ran a lousy campaign. He owes you much more thanks than me. Folks like you have let him have the bully pulpit all to himself.
mugwump says
Ray,
Actually, the opposite is true: demanding that everyone agree that sensitivity is at least 2C guarantees understanding won’t increase. For example, you easily get a sensitivity less than 2C if you take the weaker end of aerosol forcings, and the stronger values for negative cloud feedbacks, all within the range of plausible values given our current best knowledge. A dictate of 2C requires the plausible ranges to be narrowed by fiat, not by science.
[Response: No. Because there are other constraints – in particular, the LGM. Explain that away using a sensitivity of less than 2 C and then we can talk. – gavin]
Hugh Laue says
To Gavin, David Benson re my posts on 9th Sept (#102 and 108): Thanks for your patient responses. I followed your leads, including reading the IPCC tech summary report, plus Spencer’s explanations, including how CO2 works etc. I am satisfied that global warming is real, that CO2 is the major cause, and that the models are based on sound science. Continue your patient responses focussed on the science (there are likely to be other newbies like myself just starting to try and get to grips with the subject) and you will win those who have eyes to see and ears to hear. Good work. Question now is what action to take – and carbon credit trading is unfortunately not working the way it was intended; too open to abuse and impossible to monitor realistically. But I guess that debate is for some other site.
mugwump says
We’ll see. I’ve not had time to look into the sensitivity estimates from the LGM yet.
However, given the uncertainties associated with deriving sensitivity from GCMs, I have recently been looking at much simpler models, such as those used by Douglass and Knox and Schwartz. Those models point to lower estimates of climate sensitivity – e.g., Schwartz comes up with 1.9K ± 1K, and I think without the fat tail on the upper end (I just read it in work breaks today so I haven’t had time to verify the upper tail yet).
[Actually, Douglass and Knox addresses volcanic sensitivity rather than CO2 sensitivity, but their approach is similar to Schwartz and is interesting for the way in which it directly tackles the problem without relying on heavily parameterized models].
[Response: Neither of these approaches is robust and neither correctly diagnose the sensitivity of models with known sensitivity (ranging from simple energy balance models to GCMs). Both were heavily commented upon for just those reasons. It’s worth reading those comments before going too deep. For the LGM discussion, start here. – gavin]
David B. Benson says
Hugh Laue (216) — What to do properly belongs on other sites. ClimateProgress is one such.
But as general advice for everyone, plant more biomass! Trees, perennials, annuals, flowers, foods, whatever. Plant lots; we are going to need it.
Ray Ladbury says
Mugwump, If your simplification strategy involves ignoring data (e.g. the LGM and other paleoclimate data), your chances of constructing a model with low sensitivity increase substantially. I would not expect it to resemble Earth, but you can try since you do not seem the type to learn from the experiences of others. Positing a model that is too simple (e.g. Schwartz) can work to yield non-Earth-like models as well,
https://www.realclimate.org/index.php/archives/2007/09/climate-insensitivity/
as can under-constrained/overfit models (e.g. M&M2007)
https://www.realclimate.org/index.php/archives/2007/12/are-temperature-trends-affected-by-economic-activity-ii/langswitch_lang/zh
As to the consensus, where has anyone imposed by fiat or any other method that climate scientists only consider sensitivity more than 2 degrees per doubling? Climate models have such sensitivities because they work, while those lower do not. No heavy-handed tactics. No coercion. Not even a vote. Just scientists motivated by self-interest to advance their careers and curiosity to understand their subject matter. That’s how consensus works. A marketplace of ideas.
mugwump says
I have read the Robock and Wigley comments on the Douglass and Knox (DK) paper, plus DK’s response. I think DK are correct: a significant contribution from ocean lag is not justified by the physics or the data. I had a discussion with James Annan on his blog about this, in the context of his 3C estimate. He deleted my final comment which I thought very much clarified the issue. Anyway, I have not seen a convincing counter to the DK paper.
Judging from the writing, the Schwartz paper I linked to was written after at least one round of comments, but I will read those comments as well.
[BTW, papers that deviate from the consensus position tend to be heavily commented upon. Whereas much worse papers that go along with the consensus (eg Roe and Baker) seem to get the kid gloves treatment. That’s part of the selection bias I was talking about.]
[Response: My mistake with the Schwartz paper. However, it still suffers from the same flaws as the original one – if given output from a model that is exactly the same as the model he is using, his methodology is not robust, i.e. it does not derive the coefficients (i.e. the timescale or the sensitivity) in a statistically unbiased way. The criticisms of DK by Wigley and Robock, in contrast to your claim, completely undermine their analysis. If a method doesn’t even work in a simpler situation where you know the answer, why do you think it will work in the real world?
As for why bad papers on one side of the fence get more comments than bad papers on the other, it’s based on impact. Most bad papers that agree with the vast majority of the work won’t have any impact and they will sink quickly into obscurity. Bad papers that contradict the majority of the work stand out much more strongly, and that can activate people to bother to write a proper reply. This is a non-trivial undertaking that is largely a waste of time in terms of career progression or community appreciation, and yet is one of those necessary community services we are all expected to do without reward. This is not particularly mysterious. – gavin]
Richard Pauli says
RealClimate comments section is a wonderful gathering of engaged scientists sometimes meeting motivated skeptics, and even a few professional deniers.
I still cannot fully appreciate the enormity of the AGW problem; to me the ramifications just work to stress the mind and hinder thinking.
I suspect that those suffering in their denial are desperately seeking validation for any flaw that could possibly support and nurture their disbelief. Hoping that chaotic systems will somehow accept a human bias.
Thank you for your civil persistence.
mugwump says
I don’t see where they offered a “known situation” independent of more complicated modeling. They did claim a peak 2 W/m2 ocean flux during Pinatubo from data (Levitus), but I don’t see where they got that number. The best I could do from the data in Levitus was to calculate the average flux over the past 50 years (or whatever the length of the data series is), and I got 0.2 W/m2. DK say they get 1 W/m2 – I assume they mean during Pinatubo. In reality we need the flux due to the change in temperature from the eruption, which would require more than just reading from a graph, and DK’s derivation from first principles seems sound.
That doesn’t necessarily invalidate the method. Sometimes biased estimators are better (they can have lower mean squared error). E.g., the James-Stein shrinkage estimator for the mean of a multi-dimensional Gaussian in more than two dimensions is biased, yet it beats the unbiased sample mean; the naive divide-by-n formula for sample variance is biased too. Nevertheless, it’s an interesting point. And right up my alley. I will look into it. Thanks.
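For anyone who hasn’t met the James-Stein result, a quick Monte Carlo sketch (purely illustrative, with arbitrary choices of dimension and true mean; nothing to do with DK’s analysis or any of the papers discussed here) shows the biased shrinkage estimator beating the plain sample estimate in mean squared error once the dimension exceeds two:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_trials = 10, 20000            # dimension (>= 3) and number of Monte Carlo trials
theta = np.full(d, 0.5)            # hypothetical true mean, chosen only for illustration

# One observation x ~ N(theta, I_d) per trial.
x = rng.normal(loc=theta, scale=1.0, size=(n_trials, d))

mle = x                                                       # the usual unbiased estimate
shrink = 1.0 - (d - 2) / np.sum(x**2, axis=1, keepdims=True)  # James-Stein shrinkage factor
js = shrink * x                                               # biased, shrunk toward zero

mse_mle = np.mean(np.sum((mle - theta) ** 2, axis=1))
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(f"mean squared error, sample estimate : {mse_mle:.2f}")  # close to d
print(f"mean squared error, James-Stein     : {mse_js:.2f}")   # reliably smaller for d >= 3
```

The point is only that "biased" does not automatically mean "worse"; whether Schwartz’s particular estimator is a useful one is a separate question.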
John Lang says
We know that the measured temperature increase has not kept pace with the predictions expected from a climate sensitivity of 2.0C to 4.5C per doubling.
An increase of 0.7C since CO2 started increasing points to a figure of 1.5C or so. The deep paleoclimate temperature and CO2 estimates similarly point to a sensitivity figure of 1.5C or so.
Gavin states that there is a lag in the temperature response, that the oceans are absorbing some (half perhaps) of this temperature response.
How can we tell that the oceans ARE absorbing some of the expected increase? Why would we not expect this to be a permanent storage mechanism? How long is the lag before the oceans permanently release this energy, so that the surface and the atmosphere heat up according to the sensitivity range of 2.0C to 4.5C per doubling?
[Response: The warming in the oceans was demonstrated most recently by Domingues et al., 2008 (discussed here). The oceans aren’t going to release any of that energy. Instead, they just need time to warm up. Once they are warm the atmosphere will warm until the radiation is in balance again (of course, all these things are happening together). But the heat going into the ocean (really the deep ocean) just keeps the surface cooler than it would otherwise be. – gavin ]
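A minimal one-box sketch of the lag gavin describes (toy parameter values chosen only for illustration, with a single well-mixed ocean layer; real models treat the mixed layer and the deep ocean separately, but the qualitative point about delay is the same):

```python
# One-box energy balance: C dT/dt = F - lambda * T  (an illustrative toy, not a GCM)
rho_w, c_w = 1025.0, 3990.0     # sea water density (kg/m^3) and specific heat (J/kg/K)
depth = 150.0                   # assumed effective mixed-layer depth, metres
C = rho_w * c_w * depth * 0.7   # heat capacity per m^2 of Earth (~70% ocean cover), J/m^2/K
F = 3.7                         # forcing from doubled CO2, W/m^2
lam = 1.23                      # assumed feedback parameter, W/m^2/K (=> ~3 K equilibrium)

dt = 86400.0 * 30               # one-month timestep, seconds
T, temps = 0.0, []
for month in range(12 * 200):   # integrate 200 years with simple forward Euler
    T += dt * (F - lam * T) / C
    temps.append(T)

print(f"equilibrium warming   : {F / lam:.2f} K")
print(f"warming after 20 yr   : {temps[12 * 20 - 1]:.2f} K")
print(f"warming after 200 yr  : {temps[-1]:.2f} K")
# The ocean's heat capacity sets an e-folding time of roughly a decade here, so the
# surface approaches, but does not instantly reach, the equilibrium warming.
```

With a deep-ocean term added, the approach to equilibrium stretches out over centuries, which is the sense in which heat going into the ocean keeps the surface cooler for a while than it will eventually be.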
mugwump says
No one has, Ray. But you apparently wanted to. There are plenty of respected scientists who do not buy into your consensus.
As for the motivation behind using a simpler model (actually, I would call them “direct approaches” – the idea is to finesse the use of models altogether): it’s clear we’re not going to get a good bound on climate sensitivity from GCMs anytime soon. The current best guess is not really any different from Charney’s. So it makes sense to look for other ways to constrain the sensitivity. You might want to read the Schwartz paper I linked to above. He puts the case very well.
Jonick Radge says
Richard Pauli, # 221.
That is a beautiful comment.
You prompted me to muse on some of the less eloquent outpourings here and therefore on what a powerful intoxicant grandiosity can be. It has no doubt been a steadfast driver for more than a few students throughout their school years (and beyond if you believe the claims). You also reminded me that barring some organic disability, social grace and self-examination are forms of intelligence that can be developed with a little effort without regard to age or background.
Patrick 027 says
I was looking for errors in:
http://www.weatherquestions.com/Climate-Sensitivity-Holy-Grail.htm
Did I do a good job here?: http://blogs.abcnews.com/scienceandsociety/2008/09/nature-is-not-a.html?cid=130321742#comment-130321742
(start just after: “Sep 11, 2008 2:47:53 PM”)
Are there other points to be made about that?
Is there a good example where the kind of data/comparison Spencer is using is used correctly?
Thomas Bleakney says
It would be great if you folks would help me explain the flaws in skeptic Richard Lindzen’s paper “Taking Greenhouse Warming Seriously” which can be found here: http://www-eaps.mit.edu/faculty/lindzen/230_TakingGr.pdf
He uses 4 leading GCMs to calculate the equilibrium temperature distribution in the atmosphere with respect to both latitude and altitude that would result from 2X CO2 increase. The temperature peaks at the equator at an altitude corresponding to about 200 mbar. He then compares the difference between this peak and the surface temperature rise to the weak temperature change with altitude that has been actually observed. He then somehow concludes from this that only 1/3 of observed GW can be attributed to the CO2 rise. He states:
“Contrary to the iconic statement of the latest IPCC Summary for
Policymakers, this is only on the order of a third of the observed trend at the
surface, and suggests a warming of about 0.4° over a century. It should be added
that this is a bound more than an estimate.”
I know there was trouble at first with this altitude data but he seems to be using the corrected data. How can he say it is a bound?
[Response: With all due respect to Prof. Lindzen, that argument is a crock. He knows full well that the expected amplification with height is expected purely from the moist adiabat responding to any temperature change at the surface (as we have discussed). He should also be aware that the tropospheric temperature records in the tropics are rather uncertain, and different methodologies give very widely varying trends – the most recent of which match the expected pattern rather well. – gavin]
Barton Paul Levenson says
Rod writes:
Venus is believed to have undergone a runaway greenhouse effect early in its geological history. The carbon dioxide that, on Earth, is in carbonate rocks, is in the atmosphere on Venus. Add our carbonate rock CO2 to the atmosphere and we’d have something like 60 bars of it.
According to Bullock and Grinspoon, who I think are the foremost Venus atmosphere modelers at the moment, Venus may undergo periods where the surface temperature is up to 900 K, not the present 735 K.
Barton Paul Levenson says
Thomas Hunter writes:
Perhaps you need to stop assuming that people who disagree with you haven’t studied the subject.
Barton Paul Levenson says
mugwump writes:
There are probably geologists who still don’t accept continental drift, and astronomers who still don’t accept the Big Bang. You can always find someone to agree with any position; what matters is how seriously they are taken by their colleagues.
Ray Ladbury says
Mugwump: “There are plenty of respected scientists that do not buy into your consensus.”
And what have they published in peer-reviewed journals of late? How many times have they been cited by others? Answer to both: not much. Voila, scientific consensus.
As to Schwartz, I did read the paper when it came out. I was unimpressed, since it greatly underestimates the complexity of warming in the oceans. At most it represents a lower bound on the sensitivity. Did it ever occur to you that the reason the central tendency of the “guess” of climate sensitivity is not changing is that it could be right?
mugwump says
RE #230:
I see a surprising number of papers published with lower sensitivities, given the consensus you speak of. Definitely in the minority, but non-negligible.
Seriously, there may be problems with Schwartz, and other papers of this ilk, but they are a worthy attempt to measure sensitivity without relying on models, and hence obtain a less uncertain answer.
If you’re interested, check out Nicola Scafetta’s comment on Schwartz original paper, which has been absorbed into Schwartz’ latest version that I linked to above. It seems to show clear justification for 2 lags in the climate – one of a few months and one of around 8 years. I say “seems” because I am curious how sensitive the autocorrelation method is – I sure can’t see those lags by eyeballing the data so I am suspicious of this approach. I’ve been in this game too long not to trust my lyin’ eyes before I trust statistics or models.
The unchanging mean is not the problem. It’s the unchanging uncertainty. The range of plausible values has barely narrowed in 30 years, leading to recent suggestions that we not even bother trying to estimate climate sensitivity (Allen and Frame).
The policy implications of climate sensitivity and the mixing time are enormous. If it is 2C or less, or very slowly mixing (so it takes us many centuries to equilibrate), we probably have very little to worry about. If it is 4C or more and rapidly mixing (so we’ll see most of the warming in the next 100 years or so), we probably have a lot to worry about. Rather than throwing in the towel, we should be examining all available avenues for better bounding the climate sensitivity.
Ray Ladbury says
Mugwump, you are dodging the second part of my criterion: how often do these papers get cited in subsequent published work? That is an excellent measure of the degree to which they advance the state of understanding. The problem with the approaches of Schwartz, Douglass, etc. is that they do not provide a way forward; their focus is so narrow that they make it virtually impossible to explain paleoclimatic data.
What avenues are you suggesting that you think aren’t being pursued? The problem is that low sensitivity doesn’t explain the data given the known physics. How do we approach this without ignoring the physics?
Hank Roberts says
Ya know, Mug, you can look up these tired old talking points rather than raising each one as though you’d just thought of it for yourself or read it at the other place — use the search box at the top of the page.
If you’d make the effort to read the prior discussion and give some sense that you’re not just posting what seem like clever opposition statements without bothering to read up on them, you could do better.
Try Eric Raymond’s recommended approach? You’ll feel smarter immediately if you’ve made more effort before posting old ideas:
How To Ask Questions The Smart Way:
http://www.catb.org/~esr/faqs/smart-questions.html
mugwump says
Ray, they get cited. That’s how I find them.
They do provide a way forward: it’s just a different way of looking at the problem. Maybe it will turn out to be a dead-end, but after 30 years of failing to narrow the uncertainty, it ain’t like GCMs are exactly pointing the way forwards either.
Why are you so desperate to rule out this novel avenue of attack? Why not just see where it leads? I am not suggesting we drop GCMs to focus on direct approaches to estimating the sensitivity.
I am suggesting that if we just want to know what climate sensitivity is, maybe there’s an easier way to get that than modeling the entire climate. Don’t solve a harder problem than you need to. [I am not claiming any novelty here – obviously people have been thinking along these lines for 100 years or more.]
Dan Hughes says
RE: #178, 205
Patrick, I have books and I have papers; several books and many many papers. To know what makes up a specific model we need the continuous equations for that model. The same goes for the discrete approximations, numerical solution methods, and the actual coding.
Not even addressing the parameterizations of sub-grid processes, it is highly unlikely that any of the codes utilize the basic fundamental equations of fluid motions; let’s call them the Navier-Stokes equations. For one thing, the very difficult problem of turbulence must be addressed. For another, the spatial resolution used in the calculations cannot begin to resolve the gradients of driving potentials for mass, momentum, and energy exchanges both within the solution domain and at its boundaries. For a third, the extremely difficult issues associated with multi-phase flows must be addressed. The list goes on and on.
Thus all the codes utilize model equations developed from the basic equations by adopting idealizations, appropriate assumptions, and associated approximations. Simply pointing to a book written about the Navier-Stokes equations as a source of information for what is used in a specific model and code is not a correct specification of the answer.
Let’s take as an example the momentum balance model for the vertical (radial) direction for the atmosphere. The number of possible formulations for this single equation is somewhat large. Consider the following possibilities:
(1) no equation at all
(2) an equation expressing the hydro-static balance between the pressure gradient and the gravitational force (written out after this list)
(3) a form in which only a few terms have been dropped from the fundamental formulation
(4) the complete un-modified form of the fundamental statement of the momentum balance
(5) various modifications applied to the above (as applicable) to include different approaches to modeling of turbulence.
And this list is only a zeroth-order cut and I’m sure many others can be listed.
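For readers following along, items (2) and (4) are often written as follows (generic notation, given only as orientation; this is not a claim about the exact form used in any particular model or code):

\[
\text{(2)}\quad \frac{\partial p}{\partial z} = -\rho g,
\qquad\qquad
\text{(4)}\quad \rho\,\frac{Dw}{Dt} = -\frac{\partial p}{\partial z} - \rho g + F_z,
\]

where \(w\) is the vertical velocity, \(D/Dt\) the material derivative, and \(F_z\) stands for the viscous and other non-pressure, non-gravity contributions that options (3) and (5) drop, retain, or parameterize in various ways.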
Why are the actual continuous equations so important, beyond providing an indication of what phenomena and processes can and cannot be described? The system of PDEs plus ODEs contains critical information relative to the characteristics of the model equations, well-posed or ill-posed, boundary condition specifications (where, how many, and what), propagation of information within the solution domain and at the boundaries, and the proper approach for solving the discrete approximations to the continuous equations. Ad hoc specification of boundary conditions based solely on the discrete approximations is a well-known source of difficulties in numerical solution methods, for one example. The correct representation of discrete approximations for integrals of div and curl in non-orthogonal coordinate systems is another. Some model equation systems for the basic hydrodynamics of atmospheric fluid flows are known to not be well-posed, as another. There are many other critical aspects that are set by the system of continuous equations.
I and several others have attempted to find in the published literature a summary of the actual final form of the continuous equations used in, for example, the GISS/NASA ModelE model and code. None of us have been successful. We have been directed to the papers cited on the ModelE Web pages. The information is not in those papers. Recently, Gavin Schmidt directed me to a paper from 1983. The vertical-direction momentum balance model in that paper is the hydro-static version listed as (2) above.
So, somewhere between papers published in 1983 and those on the ModelE Web pages published in 2006 there might be a specific statement of the equation for the vertical momentum balance model that is actually used in the GISS/NASA ModelE code. We have been told several times that it’s there, yet we can’t be directed to the specific paper, page number, and equation.
Several people have attempted to find that specific statement and none have been successful. I hope you will accept a challenge and start a search for that equation.
mugwump says
Ya know Hanky babes, when someone starts a post with a deliberately dismissive diminutive (I wish I could say I thought of that wonderful alliteration, but it belongs to William Connolley) of my nom de plume, followed by an ill-informed rant, I usually don’t bother responding.
neil pelkey says
“To get reliable predictions for climate on regions as small as a US state will take more computing power and understanding of climate processes than we have at present.” Now if we hadn’t spent all that money on the super collider and studying grizzly bears in Montana, we could get to working on this.
Jeff Davis is correct on this problem, however. When we start by telling the common person and non-climate scientists (~99.9999% of voters) that they are too ill-informed to understand the problem, but they need to give up their job (or hope of being the first Hawaiian to become POTUS), then it is back to drill, drill, drill, drive, drive, burn, burn, burn.
Hank Roberts says
> the unchanging uncertainty
How many models have incorporated plankton ecology, so far?
We know many areas in which we don’t yet know the magnitude of the forcing, and others like plankton where we barely have begun to know how a complicated part of the system has been working, let alone how it may be changing. That’s known uncertainty and it’s not going to change until the research is done.
Of course this uncertainty remains uncertain. This is not evidence of a problem. This is how science works.
______________
ReCaptcha: is continuous
Hank Roberts says
One for Gavin et al., relevant to simple answers —
while looking up this earlier article here
https://www.realclimate.org/index.php?p=160
I came across this current presentation:
http://www.debate-central.org/ask/corinne-le-quere
The sponsor is
http://www.sourcewatch.org/index.php?title=National_Center_for_Policy_Analysis The site is meant to provide information to high school debaters.
I wonder how asking climate scientists questions via this site works.
Count Iblis says
Analogous to renormalization group methods used in statistical mechanics, isn’t it possible to somehow “integrate out” all these hundreds of effects that are important but in which you are not interested, and obtain a model in which you only have the physical quantities of interest (like the average temperature) plus a whole bunch of effective constants that do not exist at the level of the fundamental equations?
Thomas Hunter says
Ah, Gavin, yes. I merely mentioned the volcanoes because they are a factor, not because it was a large or main factor. Of course, it is the CO2, although I myself would put it more as the CO2 being unable to escape or be absorbed. On a related note, do you have any opinion on how CO2 and the rest behave as a supercritical fluid in the bottom 5% or so of the roughly 60 km Venusian troposphere?
I am aware that it was a rhetorical device, Jim. A very impossible what-if line of questioning not really worth considering. Might as well ask what if the Roman Empire had never fallen, then taken over the entire planet, destroyed all civilizations not going along with them using neutron and particle-beam weapons, and then implemented a commune-like utopia for the survivors. As in something like they’d developed and built supercomputers by 900 and cold fusion devices by 1200. Make up some other stuff about the economics, politics, religion, social structures and history along the way.
I wasn’t aware you were disagreeing with me or I with you, Barton. Nor am I accusing you of being unaware of the subject. I’m simply saying, history being what it is, we have the current level of technology and sources of fuels that we have. Sure, it would be nice to have giant satellite mirror thingys beaming all our energy supplies from space, super-efficient wind turbines, hydrogen and fuel cell cars and the like. Unfortunately, none of this stuff was able to be developed, implemented and refined a few hundred years ago. I am optimistic that current solar technologies will continue to come down in cost and new technologies will be developed, much like they are and have been doing, seemingly hand in hand with computer and TV flat panels.
John Mashey says
re: #230 BPL
Hmmm, are there any serious geologists who don’t accept continental drift?
(As you say, there must be, I just don’t know any names offhand.)
On the Big Bang, for sure: Hoyle, and to this day Halton Arp comes to mind, i.e. an astronomer who has done great observational work but got fixated on an idea decades ago and won’t let go, no matter what evidence piles up.
If someone got exposed to this, and didn’t have the background to sort it out, they’d easily get convinced by this Caltech PhD that science is really broken, and almost everybody else is wrong. If you can find a copy of “Seeing Red” in a library, try it out, as a good exercise in critical thinking, and the photos are beautiful.
Obviously, for climate parallels, this is more like “It’s all cosmic rays” than like “Anything but CO2”, i.e., someone seems to have a strong belief in a particular idea that happens to conflict with the mainstream, as opposed to not wanting the mainstream, and being happy to take anything else.
Lawrence McLean says
Is there any long term data regarding the behavior of the night sky radiation effect?
For those who do not know, the night sky radiation effect describes the process when a surface exposed to the night sky cools below the surrounding temperature.
In itself, a trend towards reduction in the temperature differential would be pretty close to proof for increases in the Greenhouse effect.
Craig Allen says
Mugwamp:
If the last ten or twenty years of climate that we have experienced in Australia is truly indicative of what the current degree of warming gives us, then there is indeed a great deal to worry about.
Southern Australian agricultural and natural systems are in a dire situation due to the ongoing drought conditions, and our scientists predict little reprieve.
Ray Ladbury says
John Mashey and BPL, I had a prof. in grad school (fairly well known for his work in E&M) who didn’t believe in quarks and much of modern particle physics–he had his own theories. The department made it clear that he was to stick to the curriculum, although occasionally, he’d give a “special” lecture on his own special brand of “particle physics”. In all the time I followed goings on in the department, he attracted one student, who subsequently went nowhere. Nobody was interested in what he produced because it didn’t advance the state of understanding–that’s consensus at work.
mugwump says
I had an undergrad prof who designed “aura” detectors. No kidding.
That’s tenure at work.
I have a serious question for gavin. What’s the justification for linear climate sensitivity? By that I mean: if sensitivity from LGM to the Holocene is, say, 1K/W/m2, would we expect it to be the same today even though the Earth looks very different (it’s a lot hotter and darker for starters, with very different precipitation patterns). Or are the differences from such changes small enough that we can assume linearity?
Rod B says
Hank (210), you are using the term “heat” too loosely. “Heat” is pretty much a nebulous term and can mean different things in different contexts and by different folks. I’m using it in the context that heat does not equal energy (meaning not precisely the same thing). Photons add energy to the (GH)gasses. The atmosphere gets “heated” when that absorbed radiative energy gets collisionally transferred. This aside, I think I understand and agree with all you say in 210.
Rod B says
Ray (213), a quick observation: I think your definition of consensus is way too complicated, and I don’t think consensus entails a prohibition on any questioning.
Rod B says
Barton, I’m getting OT but I appreciate the helpful info re Venus. Why didn’t Venus’ CO2 get sequestered into carbonate rock?