Once more unto the breach, dear friends, once more!
Some old-timers will remember a series of ‘bombshell’ papers back in 2004 which were going to “knock the stuffing out” of the consensus position on climate change science (see here for example). Needless to say, nothing of the sort happened. The issue in two of those papers was whether satellite and radiosonde data were globally consistent with model simulations over the same time. Those papers claimed that they weren’t, but they did so based on a great deal of over-confidence in observational data accuracy (see here or here for how that turned out) and an insufficient appreciation of the statistics of trends over short time periods.
Well, the same authors (Douglass, Pearson and Singer, now joined by Christy) are back with a new (but necessarily more constrained) claim, but with the same over-confidence in observational accuracy and a similar lack of appreciation of short term statistics.
Previously, the claim was that satellites (in particular the MSU 2LT record produced by UAH) showed a global cooling that was not apparent in the surface temperatures or model runs. That disappeared with a longer record and some important corrections to the processing. Now the claim has been greatly restricted in scope and concerns only the tropics, and the rate of warming in the troposphere (rather than the fact of warming itself, which is now undisputed).
The basis of the issue is that models produce an enhanced warming in the tropical troposphere when there is warming at the surface. This is true enough. Whether the warming is from greenhouse gases, El Niños, or solar forcing, trends aloft are enhanced. For instance, the GISS model equilibrium runs with 2xCO2 or a 2% increase in solar forcing both show a maximum between 20N and 20S at around 300mb (10 km):
The first thing to note about the two pictures is how similar they are. They both have the same enhancement in the tropics and similar amplification in the Arctic. They differ most clearly in the stratosphere (the part above 100mb), where CO2 causes cooling while solar causes warming. It’s important to note, however, that these are long-term equilibrium results and therefore don’t tell you anything about the signal-to-noise ratio for any particular time period or with any particular forcings.
If the pictures are very similar despite the different forcings, that implies that the pattern really has nothing to do with greenhouse gas changes, but is a more fundamental response to warming (however caused). Indeed, there is a clear physical reason why this is the case – the increase in water vapour as surface air temperature rises causes a change in the moist-adiabatic lapse rate (the decrease of temperature with height) such that the surface-to-mid-tropospheric gradient decreases with increasing temperature (i.e. it warms faster aloft). This is seen in many observations and over many timescales, and is not something unique to climate models.
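For those who want to see the physics in numbers, here is a minimal Python sketch of the point just made – standard textbook constants and the usual Clausius-Clapeyron approximation, not code from any GCM – showing that the moist-adiabatic lapse rate decreases as the surface warms, which is exactly the amplification aloft described above:

```python
# Minimal sketch: the moist-adiabatic lapse rate shrinks as temperature
# rises, so air aloft warms *more* than the surface. Textbook constants.
import math

g, cp = 9.81, 1004.0          # gravity (m/s2), dry-air heat capacity (J/kg/K)
Rd, Rv = 287.0, 461.5         # gas constants for dry air and water vapour
Lv, eps = 2.5e6, 0.622        # latent heat of vaporisation (J/kg), Rd/Rv

def saturation_mixing_ratio(T, p=1.0e5):
    """kg of water per kg of dry air at saturation (Clausius-Clapeyron)."""
    es = 611.0 * math.exp((Lv / Rv) * (1.0 / 273.15 - 1.0 / T))
    return eps * es / (p - es)

def moist_lapse_rate(T, p=1.0e5):
    """Saturated-adiabatic lapse rate (K/m) at temperature T, pressure p."""
    rs = saturation_mixing_ratio(T, p)
    return g * (1.0 + Lv * rs / (Rd * T)) / (cp + Lv**2 * rs * eps / (Rd * T**2))

for Ts in (299.0, 300.0, 301.0):
    print(f"Ts = {Ts:.0f} K -> moist lapse rate = {1e3 * moist_lapse_rate(Ts):.2f} K/km")
# The lapse rate drops as the surface warms, so the surface-to-mid-troposphere
# gradient weakens: the 'amplification' aloft is just moist thermodynamics.
```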
If this is what should be expected over a long time period, what should be expected on the short time-scale available for comparison to the satellite or radiosonde records? This period, 1979 to present, has seen a fair bit of warming, but also a number of big El Niño events and volcanic eruptions which clearly add noise to any potential signal. In comparing the real world with models, these sources of additional variability must be taken into account. It’s straightforward for the volcanic signal, since many simulations of the 20th century done in support of the IPCC report included volcanic forcing. However, the occurrence of El Niño events in any model simulation is uncorrelated with their occurrence in the real world and so special care is needed to estimate their impact.
Additionally, it’s important to make a good estimate of the uncertainty in the observations. This is not simply the uncertainty in estimating the linear trend, but the more systematic uncertainty due to processing problems, drifts and other biases. One estimate of that error for the MSU 2 product (a weighted average of tropospheric+lower stratospheric trends) is that two different groups (UAH and RSS) come up with a range of tropical trends of 0.048 to 0.133 °C/decade – a much larger difference than the simple uncertainty in the trend. In the radiosonde records, there is additional uncertainty due to adjustments to correct for various biases. This is an ongoing project (see RAOBCORE for instance).
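To get a feel for just the trend-fitting part of that uncertainty, here is a rough sketch with entirely synthetic data – an assumed trend plus AR(1) noise as a crude stand-in for ENSO-ish variability, not the real MSU series:

```python
# Sketch of a naive linear-trend error bar on synthetic 'tropical' data.
# The trend value, noise level and AR(1) coefficient are all made up.
import numpy as np

rng = np.random.default_rng(0)
n = 29 * 12                          # monthly anomalies, 1979-2007
t = np.arange(n) / 12.0              # time in years

eps = np.zeros(n)                    # AR(1) noise, crude ENSO stand-in
for i in range(1, n):
    eps[i] = 0.8 * eps[i - 1] + rng.normal(0.0, 0.2)
y = 0.015 * t + eps                  # an assumed 0.15 degC/decade trend

A = np.column_stack([t, np.ones(n)])
(slope, _), res, *_ = np.linalg.lstsq(A, y, rcond=None)
se = np.sqrt(res[0] / (n - 2)) / np.sqrt(((t - t.mean()) ** 2).sum())
print(f"trend = {10 * slope:.3f} +/- {10 * 2 * se:.3f} degC/dec (2-sigma, naive)")
# Even this white-noise error bar is a sizeable fraction of the trend.
# Accounting for the autocorrelation (far fewer effective degrees of
# freedom) widens it toward the roughly 0.1 degC/dec mentioned below,
# and the systematic UAH-vs-RSS offset comes on top of either estimate.
```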
So what do Douglass et al come up with?
Superficially it seems clear that there is a separation between the models and the observations, but let’s look more closely….
First, note that the observations aren’t shown with any uncertainty at all, not even the uncertainty in defining a linear trend (roughly 0.1°C/dec). Secondly, the offsets between UAH, RSS and UMD should define the minimum systematic uncertainty in the satellite observations, which therefore would overlap with the model ‘uncertainty’. The sharp-eyed among you will notice that the satellite estimates (even UAH – Correction: the UAH trends are consistent; see comments) – which are basically weighted means of the vertical temperature profiles – are also apparently inconsistent with the selected radiosonde estimates (you can’t get a weighted mean trend larger than any of the individual level trends!).
It turns out that the radiosonde data used in this paper (version 1.2 of the RAOBCORE data) does not have the full set of adjustments. Subsequent to that dataset being put together (Haimberger, 2007), two newer versions have been developed (v1.3 and v1.4) which do a better, but still not perfect, job, and additionally have much larger amplification with height. For instance, look at version 1.4:
The authors of Douglass et al were given this last version along with the one they used, yet they chose to show only the first (the one with the smallest tropical trend), without any additional comment, even though they knew the newer versions would make their results less clear.
But more egregious by far is the calculation of the model uncertainty itself. Their description of that calculation is as follows:
For the models, we calculate the mean, standard deviation (sigma), and estimate of the uncertainty of the mean (sigma_SE) of the predictions of the trends at various altitude levels. We assume that sigma_SE and standard deviation are related by sigma_SE = sigma/sqrt(N – 1), where N = 22 is the number of independent models. ….. Thus, in a repeat of the 22-model computational runs one would expect that a new mean that would lie between these limits with 95% probability.
The interpretation of this is a little unclear (what exactly does the sigma refer to?), but the most likely interpretation, and the one borne out by looking at their Table IIa, is that sigma is calculated as the standard deviation of the model trends. In that case, the formula given defines the uncertainty on the estimate of the mean – i.e. how well we know what the average trend really is. But it only takes a moment to realise why that is irrelevant. Imagine there were thousands of simulations drawn from the same distribution: our estimate of the mean trend would get sharper and sharper as N increased, yet the chance that any one realisation would fall within those error bars would become smaller and smaller. Instead, the key standard deviation is simply sigma itself. That defines the likelihood that one realisation (i.e. the real world) is conceivably drawn from the distribution defined by the models.
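A toy Monte Carlo makes the distinction concrete (the mean and spread here are arbitrary illustrative numbers, of roughly the magnitude discussed in this post):

```python
# Toy Monte Carlo: the standard error of the *mean* trend shrinks with N,
# but any single realisation (e.g. the real world) becomes *less* likely
# to sit inside those shrinking error bars, not more.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.2, 0.1      # illustrative mean trend and inter-run spread

for N in (22, 100, 1000):
    ensembles = rng.normal(mu, sigma, size=(20000, N))
    se = ensembles.mean(axis=1).std()            # ~ sigma / sqrt(N)
    single = rng.normal(mu, sigma, 20000)        # one fresh realisation each
    p_inside = np.mean(np.abs(single - mu) < 2.0 * se)
    print(f"N = {N:4d}: 2*SE(mean) = {2*se:.4f}, "
          f"P(single run within mean's 2-sigma bars) = {p_inside:.2f}")
# As N grows the bars tighten but the probability falls: the right
# yardstick for comparing one realisation to the ensemble is sigma itself.
```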
To make this even clearer, a 49-run subset (from 18 models) of the 67 model runs in Douglass et al was used by Santer et al (2005). This subset only used the runs that included volcanic forcing and stratospheric ozone depletion – the most appropriate selection for this kind of comparison. The trends in T2LT can be used as an example. I calculated the 1979-1999 trends (as done by Douglass et al) for each of the individual simulations. The values range from -0.07 to 0.426 °C/dec, with a mean trend of 0.185 °C/dec and a standard deviation of 0.113 °C/dec. That spread comes not predominantly from uncertain physics, but from the unforced ‘weather’ noise in each realisation.
From their formula the Douglass et al 2 sigma uncertainty would be 2*0.113/sqrt(17) = 0.06 °C/dec. Yet the 10 to 90 percentile for the trends among the models is 0.036–0.35 °C/dec – a much larger range (+/- 0.19 °C/dec) – and one, needless to say, that encompasses all the observational estimates. This figure illustrates the point clearly:
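As a back-of-envelope check – not a formal significance test – plug in the numbers just quoted:

```python
# Rough check: how far are the observed tropical satellite trends from
# the model-ensemble mean, in units of the inter-run spread sigma?
# (The 0.048-0.133 range quoted earlier is for MSU channel 2 while the
# model spread is for T2LT -- close enough for a scale comparison only.)
mean, sigma = 0.185, 0.113            # degC/dec, from the 49-run subset
for obs in (0.048, 0.133):            # the UAH and RSS tropical trends
    z = (mean - obs) / sigma
    print(f"obs = {obs:.3f} degC/dec -> {z:.2f} sigma below the model mean")
# Both sit well within ~1.3 sigma of the mean -- unremarkable draws from
# the model distribution once the right spread is used.
```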
What happens to Douglass et al’s figure if you incorporate the updated radiosonde estimates and a reasonable range of uncertainty for the models? This should be done properly (and could be), but assuming that neither the slight difference in period for the RAOBCORE v1.4 data nor the selection of model runs with volcanic forcings is important, then using the standard deviations in their Table IIa you’d end up with something like this:
Not quite so impressive.
To be sure, this isn’t a demonstration that the tropical trends in the model simulations or the data are perfectly matched – there remain multiple issues with moist convection parameterisations, the Madden-Julian oscillation, ENSO, the ‘double ITCZ’ problem, biases, drifts etc. Nor does it show that RAOBCORE v1.4 is necessarily better than v1.2. But it is a demonstration that there is no clear model-data discrepancy in tropical tropospheric trends once you take the systematic uncertainties in data and models seriously. Funnily enough, this is exactly the conclusion reached by a much better paper by P. Thorne and colleagues. Douglass et al’s claim to the contrary is simply unsupportable.
steven caskey says
Ray Ladbury Says:
30 April 2008 at 2:10 PM
Steven, you have to look at whether the data support a particular sensitivity. A sensitivity of 1 or even 1.5 causes way more problems for many different datasets than does a sensitivity of 3 or even 3.5. The probability distribution on sensitivity is very asymmetric: 1.0 is extremely improbable given the data, while 4.5 cannot be ruled out. Almost all of the uncertainty is on the high side.
it isn’t as important what direction the sensitivity goes as it is the range the sensitivity can be accurately predicted to, at least from a scientific standpoint. the narrower the range, the more precise the knowledge. my point was that the range is so wide that the very people who are termed deniers could be closer to the truth, if the truth turned out to be in the low end of the range, than those that are predicting the high end of the range. if the data does not support the low end of the range then perhaps a good starting point would be to support the raising of the low end of the range.
Ray Ladbury says
Steven, you are misunderstanding the situation: The probability to the left of 1.5 is near zero. The probability to the right of 4.5 is not negligible. This is extremely important both from the point of view of science and risk assessment. Look up “thick-tailed distribution” and the term skew as it applies to probability.
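A toy numerical sketch of Ray’s point – the lognormal shape and the parameters here are invented purely to illustrate the asymmetry, and are not the published sensitivity pdf:

```python
# Invented example of a right-skewed sensitivity distribution: a
# lognormal with median 3 degC. Only the *shape* of the asymmetry
# matters here, not these particular made-up numbers.
from math import erf, log, sqrt

def lognormal_cdf(x, median=3.0, shape=0.4):
    return 0.5 * (1.0 + erf((log(x) - log(median)) / (shape * sqrt(2.0))))

print(f"P(sensitivity < 1.5 degC) ~ {lognormal_cdf(1.5):.2f}")        # small
print(f"P(sensitivity > 4.5 degC) ~ {1.0 - lognormal_cdf(4.5):.2f}")  # not small
# The left tail below 1.5 is a few percent; the right tail above 4.5 is
# several times larger -- the asymmetry Ray describes.
```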
steven caskey says
Ray Ladbury Says:
30 April 2008 at 6:24 PM
Steven, you are misunderstanding the situation: The probability to the left of 1.5 is near zero. The probability to the right of 4.5 is not negligible. This is extremely important both from the point of view of science and risk assessment. Look up “thick-tailed distribution” and the term skew as it applies to probability.
no, I understand exactly what you are saying, I think… not that I know the exact statistics, but let’s say you come up with a 1% chance of 1.5 but a 15% chance of 4.5… wouldn’t this be what you mean?
steven caskey says
let me rephrase that. if I know what a skewed bell curve looks like then that is what I would be looking at when viewing the climate sensitivity probabilities, correct?
Hank Roberts says
http://tamino.wordpress.com/2007/10/27/uncertain-sensitivity/
http://tamino.files.wordpress.com/2007/10/probs.jpg
(see the discussion, this is just one image from the thread)
Chris says
Regarding, “Now the claim has been greatly restricted in scope and concerns only the tropics, and the rate of warming in the troposphere (rather than the fact of warming itself, which is now undisputed).”
Correct, the rate of warming is not in dispute – it’s decidedly negative and will likely continue that way for some time to come! The warming trend observed in the 1990’s has more to do with fewer aerosols in the atmosphere and a more active sun. This is a better explanation than CO2-driven climate change for the following observed phenomena: cooler stratosphere and warmer surface temperatures (particularly areas over land in the northern hemisphere, such as across Asia where the former Soviet Union dissolved and China has modernized). What happens to the impact of CO2 in the climate models if the assumed amount of aerosols is cut in half, or more? Plus, throw in some minimal mechanism for a more active sun. How much of the CO2-driven temperature rise is eliminated once these assumptions are made? It’s my understanding that the role of CO2 has increased over the years primarily to account for more aerosols assumed in the models. Why? Just to prop up CO2? In fact, I have seen two published articles (posted on Atmoz and WUWT) that suggest the opposite: the air is cleaner today than 30 years ago. Yet, the climate models assume otherwise. Go figure.
[Response: Please do. The impact of CO2 is independent of any other forcing. -gavin]
steven caskey says
Hank Roberts Says:
30 April 2008 at 11:35 PM
http://tamino.wordpress.com/2007/10/27/uncertain-sensitivity/
http://tamino.files.wordpress.com/2007/10/probs.jpg
(see the discussion, this is just one image from the thread)
thank you, the skew was larger than I had anticipated, as was the tail, but the basic shape I did recall correctly
steven caskey says
Hank Roberts Says:
30 April 2008 at 11:35 PM
http://tamino.wordpress.com/2007/10/27/uncertain-sensitivity/
http://tamino.files.wordpress.com/2007/10/probs.jpg
(see the discussion, this is just one image from the thread)
I am still a bit confused by where the graphs came from. from what I read it seems they were produced by the person who runs the open mind blog? I tried looking for a chart that looked similar in the ipcc report but was unable to find one. it may be there and I will attempt another look later.
Ray Ladbury says
Chris,
You, like so many other skeptics, have fallen victim to what I call the Chinese menu fallacy: you assume that if you can just find other causes for warming that aren’t in the current models, the whole nasty business with CO2 being a greenhouse gas will go away. It won’t. Greenhouse forcing is not an adjustable parameter – it is fixed, and pretty narrowly, with independent data. So are the other forcings – with varying success. The parameters that are poorly fixed are aerosols and clouds. Find a new forcer, and that’s where the give can occur in the models. No one is propping anything up. They are merely doing what the science tells them to do.
Ray Ladbury says
Steven, given the shape of the curve, do you see why I am saying that the climate studies (which take ~3 deg/doubling) have been conservative? The cost of climate change to civilization probably rises exponentially with increasing temperature, so in reality the risk (cost times probability) is dominated by that thick tail on the right side. The evidence says the denialists are almost certainly wrong, and it cannot rule out the scenarios of the alarmists like Lovelock and Hansen. In this sense, “alarmist” is not a pejorative. If sensitivity is 6 degrees per doubling, alarm is the only appropriate reaction.
steven caskey says
Ray Ladbury Says:
1 May 2008 at 8:46 AM
Steven, given the shape of the curve, do you see why I am saying that the climate studies (which take ~3 deg/doubling) have been conservative? The cost of climate change to civilization probably rises exponentially with increasing temperature, so in reality the risk (cost times probability) is dominated by that thick tail on the right side. The evidence says the denialists are almost certainly wrong, and it cannot rule out the scenarios of the alarmists like Lovelock and Hansen. In this sense, “alarmist” is not a pejorative. If sensitivity is 6 degrees per doubling, alarm is the only appropriate reaction.
I see what you’re saying according to the graph I looked at, however I have only seen it on a blog so far. but for a minute let’s say that the graph is correct as presented, for theoretical purposes. it would greatly favor the higher levels of climate sensitivity ranges, but it would have an incredibly large range for climate sensitivity of nearly 8C. it appears that most of the uncertainties between 1.5C and 2C have either been eliminated or discounted by the person that made the graph, and the next logical step would indeed be to try to eliminate more uncertainties, which would in all probability eliminate the tail of the graph at the high end. of course this is speculation, and attempting to further refine the numbers could cause shifts in either direction
steven caskey says
Ray Ladbury Says:
1 May 2008 at 8:46 AM and it cannot rule out the scenarios of the alarmists like Lovelock and Hansen. In this sense, “alarmist” is not a pejorative. If sensitivity is 6 degrees per doubling, alarm is the only appropriate reaction
I haven’t read anything about what lovelock has said but I do have a familiarity with what hansen has done, and that is that he has taken a climate sensitivity of 3C to be the immediate effect and then an additional 3C as a long range effect. this would be based on things such as less ice and snow, methane release, and darker vegetation further north. this hypothesis may seem very ominous on its face but actually it may be exactly the opposite, since then one would have to go back and apply the same long range forcings to the increases in solar radiation, which would decrease the amount of unaccounted-for temperature change currently being placed on co2. having just read the response from real climate to his paper and some of the postings there, it was also pointed out to me that some of these long range responses are also limited by their nature and that the long range response would actually be less with time. ie there is a limited amount of ground that could be uncovered and a limited amount of methane that could be released. I don’t have the skill to figure out what exactly applying this hypothesis to earlier solar forcing would result in, but should it ever be done, or if it has been done, I would find it of interest. some of this I said in an earlier post. I apologize in advance for some of this being repetitive.
Chris Colose says
On Miskolczi and Kirchhoff’s law: Kirchhoff’s law means absorptivity and emissivity must be equal *only when considering the same frequency.* Earth’s visible absorptivity is 0.7, but the emissivity is not 0.7 – just the visible emissivity is.
Ray Ladbury says
Steven, Climate models are not Chinese menus. The sensitivities of CO2, solar irradiance, etc. are determined by independent data–not by fitting to the temperature rise. If there is an unaccounted for forcer, CO2 forcing, which is tightly constrained, will not change. Less well constrained forcers, such as aerosols, clouds, etc. have some give–not CO2.
steven caskey says
Ray Ladbury Says:
1 May 2008 at 5:25 PM
Steven, Climate models are not Chinese menus. The sensitivities of CO2, solar irradiance, etc. are determined by independent data–not by fitting to the temperature rise. If there is an unaccounted for forcer, CO2 forcing, which is tightly constrained, will not change
I agree there is a set forcing from co2…I believe it has a range from .8 to 1.2 from what I read although I may be marginally off. what I am discussing is the feedback mechanisms which are not set nor are they well understood as fully exemplified by the ranges in climate sensitivities. is this not correct or am I missing something?
steven caskey says
as an example to further the point: in the 1990 estimate the climate sensitivity was judged most likely to be 2.5C, but this was raised to a 3.0C sensitivity based upon the increase in temperature as opposed to the increase in known forcing. now if the additional warming was due to previous solar long term forcing that hasn’t yet been included in the equation, then the sensitivity may well still be 2.5C as a best-bet conclusion. of course I have no idea what the outcome of research into long term solar forcing would conclude, so this is just a possible example, but one I would think worth taking a look at, especially when long term forcing is considered by some to be equal to the short term forcing and if so could make a considerable difference
Chris N says
Gavin and Ray,
Is “climate sensitivity” to CO2 an independent variable? The following graph shows aerosols have two effects: one direct and one indirect (via albedo).
http://en.wikipedia.org/wiki/Image:Radiative-forcings.svg
I assume that is the case for CO2. Although CO2 sensitivity is not a forcing component in the strict sense, it is one nonetheless via the embedded formulas in the climate models. How can you say CO2 is an independent variable when it has a supposedly indirect effect (via climate sensitivity)? Does not climate sensitivity depend on other variables? Also, you didn’t answer my question: why do climate models assume there are more aerosols today than in past decades (please see graph below)?
http://en.wikipedia.org/wiki/Image:Climate_Change_Attribution.png
Further, what are the accuracies of the models if the assumed amount of aerosols are cut in half? I can only assume they wouldn’t be accurate at all. So, one would either conclude that aerosols are “propping” up the role of CO2, or that the models are not accurate at all. You can choose the best description. I contend that my hypothesis (fewer aerosols, more active sun) provides a better explanation for the observed results (cooler stratosphere, warmer land surface temperatures) than any climate model. Until you guys provide better results, your credibility appears ill-founded at best.
Finally, I see two Chrises on this site of late. I’m Chris N.
Hank Roberts says
Remember, if you think you might be retyping a FAQ, try typing it into the Search box at the top of the page. You may find you’re right.
Also the “Start Here” link at the top of the page is handy.
steven caskey says
the paper by scafetta & west and the reaction to it on real climate are a great help to what I was saying about the limits on long term climate sensitivity. since you do have to treat the two forcings of solar and co2 in almost exactly the same manner, extreme predictions on long term climate sensitivity are not practical. by going back and applying these forcings to solar forcing you can limit these feedback mechanisms by comparing to what is currently happening and what has happened in the past, and by doing so hopefully eliminate some of the uncertainties that are creating the long tail at the high end of the possible climate sensitivity ranges
Ray Ladbury says
Chris N., I’m not sure what you are talking about. Where do you get your information that climate models are assuming more aerosols today than in the past? That is a rather vague accusation that sounds as if it is taken out of context.
Now how, pray, does your model account for a cooler stratosphere? And it certainly doesn’t account for the fact that there is more warming in night-time temperatures than day-time temperatures, or any of a number of other trends. It sure makes the problem easier when you only pick a subset of the trends to fit. I think it is your understanding of the models that is ill-founded.
Ray Ladbury says
Steven, I think you will find that the sensitivity was raised because independent data favored the new value. Raising the sensitivity in response to temperature trends would be ill advised, because you then could not use temperature trends as validation for the models. Sensitivity is not an adjustable parameter in the models. It may vary over time, but only as new data come in to support different sensitivities.
Hank Roberts says
Steven (aside: the shift key would really help, if you find it easy to use, both to make quotation marks before and after quoted material, and to make capital letters that help indicate when you start sentences. Paragraphs also help organize thought, as others have mentioned, for those of us with older eyes. If you can’t do that easily I understand, but if you can it’d be a kindness.)
You wrote: “treat the two forcings of solar and co2 in almost exactly the same manner” — I’m not sure who you’re talking about doing this, the modelers? the politicians?
— we don’t have control over the sun. Solar input is only changing by about one watt out of thirteen hundred watts per square meter, not a whole lot.
— we do have control over CO2, which we’re doubling on by far the shortest time span ever in Earth’s history.
I’m trying to figure out what point you are trying to make. Can you make it explicitly?
Jeffrey Davis says
we do have control over CO2
Imagine a world in which solar output was going up, CO2 concentrations were static, and the atmosphere was warming at an alarming rate. Would fatalism be the order of the day or would someone hit upon the happy idea of reducing CO2 concentrations in the atmosphere as a way of mitigating the effect of the increasing solar energy?
steven caskey says
“Hank Roberts Says:
2 May 2008 at 9:18 AM I’m trying to figure out what point you are trying to make. Can you make it explicitly?”
I will try.
[current forcing today / future long term feedbacks] = [forcing from the past / current long term feedbacks]
The larger you predict the future long term feedback mechanisms the larger the long term feedback mechanisms from the past must be influencing the climate today
current total forcing = [forcings x immediate feedback] + [past forcings x long term feedback]
The higher the value of the long term feedback the smaller the value of the immediate feedback must be.
I know I have oversimplified to an incredible degree and this is not up for peer review, so please don’t be too harsh, but I hope it makes my frame of thinking a little clearer.
Leif Svalgaard says
Jeffrey Davis:
Imagine a world in which solar output was going down, CO2 concentrations were up, and the atmosphere was cooling. Would fatalism be the order of the day or would someone hit upon the happy idea of increasing further CO2 concentrations in the atmosphere as a way of mitigating the effect of the decreasing solar energy?
[Response: You’d be much better off with SF6 or some of the HFCs – cheap, inert and with GWPs many times that of CO2. – gavin]
Ray Ladbury says
Steven, OK, so what was the past forcing you posit is still reverberating today? Changes in insolation have been tiny. Other influences have been short-term and inconsistent (some positive, some negative). Did you ever study differential equations? Think about how the time dependences of the homogeneous and particular solution have to be related to see a consistent, monotonic effect.
To paraphrase raypierre–the sun goes up and down and up and down, and temperature (trend) goes up. Look, it comes down to this: the energy has to come from somewhere. Where do you think it’s coming from?
In any case, the fallacy of your argument is that somehow CO2 forcing is determined from current forcing. It isn’t. It is determined from things like paleoclimate, past response of the atmosphere to perturbations, and so on. They are saying, “the sensitivity has to be x, because in the dim and dark past we saw y.” So unless you can produce y with a much smaller sensitivity in the dark and distant past, CO2 sensitivity in the models won’t be affected. CO2 sensitivity in the models is not a fitting parameter. It is fixed by prior information.
steven caskey says
Ray Ladbury
My point wasn’t that long term feedback was a significant cause of the current climate. It was that if it isn’t, and it seems obvious that you believe it not to be, then there is no reason to believe it will be in the future.
Ray Ladbury says
Steven–the problem is that in the past we didn’t have a rapidly increasing driver that would have effects that persist for hundreds of years, and the system’s response to large perturbations may be quite different from the response to small perturbations. Past perturbations were not sufficient to melt the ice caps. This one might be. In the past, permafrost stayed frozen; now it is melting and releasing CO2. In the past, the ocean remained a net sink for CO2, but now its ability to absorb is diminishing. Believe me, I have looked for warm fuzzies to convince me that we don’t have to worry about that thick positive tail. I haven’t found them.
Chris Colose says
Chris N,
I’m not sure why you are challenging the credibility of Gavin (a highly published and renowned researcher) or anyone else when your questions and assumptions make little sense (how does an increase in the sun lead to strat cooling from a radiative viewpoint? What is “what are the accuracies of the models if the assumed amount of aerosols are cut in half?” supposed to mean?).
Your questions on forcings and sensitivity seem very ill-posed or confused. Adding CO2 is a climate forcing, not a “sensitivity.” The sensitivity tells you how much the climate changes from x amount of forcing. A climate with a very high sensitivity will change a lot from x forcing, and a climate with low sensitivity will change very little for the same x forcing. For example, if the radiative forcing from some increase in CO2 is 2 W/m2 and the climate sensitivity is 0.75 Kelvin per W/m2, then adding that amount of CO2 will give a 1.5 K increase.
C
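Chris’s last example can be restated in one line of code (the 2 W/m2 and 0.75 K per W/m2 figures are his illustrative numbers, not measurements):

```python
# Equilibrium warming = climate sensitivity x radiative forcing.
sensitivity = 0.75   # K per (W/m2), illustrative value from the comment above
forcing = 2.0        # W/m2, an assumed CO2 forcing
print(f"warming = {sensitivity * forcing:.1f} K")   # -> 1.5 K
```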
Hank Roberts says
Leif, you write
> would someone hit upon the happy idea of increasing further CO2
Only someone who hated fish. And plankton. You know this.
Jeffrey Davis says
My tone AND math were off. Lashings of apologies. I’d intended a clever rebuttal to the fatalism inherent in the “nothing we can do” position of the idea that solar increases were responsible for AGW. Well, of course we could and would do something. Like tinker with CO2 concentrations. Maybe. But mentally I’d switched the sign. Up was down. Etc. Hard to explain. Like calling your best friend the wrong name.
I’m getting old.
Chris N says
We appear to be talking past one another. My comments are two distinct but somewhat related points. First point: fewer aerosols in the stratosphere will cause it to be cooler, all else equal, than a time when more aerosols are in the stratosphere. Thus, the present trend of a cooler stratosphere is due to less particulate matter reaching the stratosphere today than 20 or 30 years ago. Regarding the models themselves, would they still be “accurate” if the aerosol forcing is cut in half? It appears to me that the aerosol forcing has been inflated in order to get the CO2-driven models (with their inherent climate sensitivities) to reasonably match surface temperature trends. According to this graph, it appears that aerosol forcing in the models has been increasing over the years, not decreasing.
http://en.wikipedia.org/wiki/Image:Climate_Change_Attribution.png
If you think I don’t know what I’m talking about, please explain the graph above.
[Response: I’m not sure what the relation of the first part of your claim is to the second part. Volcanic aerosols reach the stratosphere and cause a warming there, but don’t last very long. The anthropogenic aerosol increase in the graph you linked is mostly in the troposphere, and therefore doesn’t have nearly as much effect on the stratosphere. Aerosol forcing has been increasing, but not nearly as much as CO2 forcing in recent years, which is why the CO2 forcing is winning out and we are seeing strong warming. Some models assume less aerosol forcing, some more, which is why a range of models with different climate sensitivity can still be compatible with historical instrumental climate records. If you think there is some other feedback mechanism that could yield lower climate sensitivity than the IPCC range, and fit the temperature record with lower aerosol forcing than assumed in the range covered by IPCC, please turn that into a quantitative model and show me. Nobody has done that. Not with cosmic rays, not with solar forcing, not with fanciful “iris” cloud feedback, not with nothin’. It’s not to say it couldn’t possibly be done, but nobody’s done it, which leads me to think the proponents of low climate sensitivity are not serious about seeing whether their ideas work when turned into hard, cold numbers. –raypierre]
Martin Vermeer says
Jeffrey Davis #181, …but your valid point is that natural disasters may be as bad as self-inflicted ones, and their mitigation just as legitimate. The fact that a very damaging development is “not our fault” — well, AGW is, but think asteroid impact or whatever — is no reason to just suffer it.
Fatalism is a curse, and 100% self-inflicted.
Hank Roberts says
Steven, #174
Are these equations meant to represent a theory you have? Or do they come from some source you can cite?
It looks as though you are assuming that present conditions are equal to past conditions.
Spencer Weart goes through the science done to test that assumption, in considerable detail.
steven caskey says
“Hank Roberts Says:
5 May 2008 at 6:26 PM
Steven, #174
Are these equations meant to represent a theory you have? Or do they come from some source you can cite?”
They were just equations I made up to try to make my line of reasoning a bit clearer. The point I was trying to make isn’t that things are the same as they were, but rather that the long term climate sensitivity shouldn’t change that much. Thus if you predict a high long term sensitivity for current forcings, it would make sense to go back to past forcings, use similar climate sensitivities for those, and see how they should be affecting today’s climate. I understand I grossly oversimplified, but it was the best way I could think of to show my line of thought. I will make it a point to read what Spencer Weart has done.
Ted Nation says
In #117 above I sought a response to a paper for which Roy Spencer was the lead author. It now appears to have also included Christy as a co-author, and claimed a response of tropical cirrus clouds that should require modellers to lower the sensitivity value by as much as 75%. Now Spencer is claiming that the climate science community is ignoring their results. (See “The Sloppy Science of Global Warming”, posted March 20, 2008 on “Energy Tribune”.)
“By analyzing six years of data from a variety of satellites and satellite sensors, we found that when the tropical atmosphere heats up due to enhanced rainfall activity, the rain systems there produce less cirrus cloudiness, allowing more infrared energy to escape to space. The combination of enhanced solar reflection and infrared cooling by the rain systems was so strong that, if such a mechanism is acting upon the warming tendency from increasing carbon dioxide, it will reduce manmade global warming by the end of this century to a small fraction of a degree. Our results suggest a “low sensitivity” for the climate system.
What, you might wonder, has been the media and science community response to our work? Absolute silence. No doubt the few scientists who are aware of it consider it interesting, but not relevant to global warming. You see, only the evidence that supports the theory of manmade global warming is relevant these days.”
The paper in question appears to be:
Article title: Cloud and radiation budget changes associated with tropical intraseasonal oscillations
Published in: GEOPHYSICAL RESEARCH LETTERS in August, 2007.
I don’t put a great deal of stock in what Spencer and Christy do but I would like to see some authoritative response to this paper.
Ray Ladbury says
Spencer and Christy don’t have a great track record when it comes to producing results that are accurate the first time round. In part, that is likely due to the difficulty of hammering the satellite measurements into order. However, his insistence on doing science by press release is inexcusable. If his paper has merit, that will come out, but to claim to have overturned climate science and complain that you aren’t getting the attention you deserve is kind of sad, really.
David B. Benson says
Ted Nation (186) — I’m certainly no authority, but the global temperature may have been hotter in the mid-Holocene than now, and was certainly hotter than the global temperature in the 1950s. The Eemian interglacial (termination 2) is thought to have been quite a bit warmer than that, with termination 3 even hotter.
So Spencer’s iris effect, if adequately proven to actually exist, does not appear to keep temperatures from rising a substantial amount more. At best, IMHO, this could only lower climate sensitivity most modestly, say from 3.0 K to 2.9 K.
Ted Nation says
Thank you for responding to the Spencer, Christy, et al. paper, but I’m looking for something authoritative. This paper is out there bouncing around among skeptics and deniers without rebuttal. (The latest is the Australian, Jennifer Marohasy.) I thought this kind of thing was partially what Realclimate was set up to respond to. I’m familiar with the long dispute regarding satellite temperature data and how it was resolved, with Christy forced to acknowledge errors in his data. However, while the errors remained unrevealed, others marshalled the evidence on the other side. I realize that it may be some time before independent analysis is done on the data from the new satellite, but an authoritative listing of catradictory evidence is called for.
David B. Benson says
Ted Nation (189) wrote “but an authoritative listing of catradictory[sic] evidence is called for.” I’m not sure what you want. The evidence from ice cores can readily be obtained from the NOAA Paleoclimatology web site. The analysis of the Vostok ice core by Petit et al. has been converted in graphical form for a Wikipedia page:
http://en.wikipedia.org/wiki/Image:Vostok-ice-core-petit.png
where termination 2 is about 125 kya, termination 3 is about 240 kya and termination 4 is about 325 kya. All three show higher temperatures than at present.
Christopher Hogan says
Your point about the error bars is correct. But I’m not sure it’s clear to the typical reader. I posted a version of the text below on another board and got the comment that it was a lucid explanation of the statistical issue. I thought it might be helpful to try to post it here.
The Douglass et al. error bars tell you that you have a fairly precise estimate of average prediction, but they do not tell you that you have a very precise prediction. That’s the conceptual mistake they made — they confused the accuracy of their estimate of the average prediction with the accuracy of the prediction itself.
A simple example can make this clear. If I ask 1000 economists to predict the average rate of inflation in the year 2100, and take the average and standard deviation of those predictions, what I’ll get is a fairly precise estimate of the average prediction. What I most assuredly do not have is a very precise prediction. In fact, it’s still just a guess. I should have no expectation that the actual inflation rate in that year will be close to that prediction. And if I then asked 100,000 economists, I’d get ten times more precision in my estimate of the average prediction. But the prediction itself would be no more accurate than the first one.
To recap: they mistook the accuracy with which they estimated the mean prediction, for the accuracy of the prediction itself. That’s like saying that if you ask 100x as many economists, you’ll get a 10x improvement in the accuracy of your economic forecast. Nope. You’ll get a 10-fold improvement in your estimate of what the average economist thinks, that’s all.
Aaron says
There’s a blog entry at climate-skeptic.com that claims to poke holes in this post.
http://www.climate-skeptic.com/2009/01/can-you-have-a-consensus-if-no-one-agrees-what-the-consensus-is.html
Is there any truth to it?
[Response: It is a little confused. The point is that the supposed absence of a hot spot is a much more fundamental problem for atmospheric physics than it is a problem for greenhouse gases – specifically, the moist adiabat is fundamental to all theories of moist convection in the tropics (this is the temperature gradient that results from lifting up parcels of moist air). That gradient, because of the temperature/water vapour saturation relationship, always decreases as the surface temperature increases (thus leading to enhanced warming aloft). This is such a fundamental issue – one that long predates climate modeling or worrying about greenhouse gases – that for it to be wrong would overturn maybe a century of meteorology. Thus it is highly unlikely to be wrong, and the problem is much more likely to be in the observations. Having said that, the follow-on post from this (here) demonstrates that there may well be a hot spot in any case. – gavin]
David B. Benson says
I wrote in comment #190 “at present”. In this context “present” is the year 1950 CE.
cce says
RE: 192
I posted this on “another site”, but no one there found it particularly interesting, given certain ideological beliefs that no such hotspot exists:
“Warming patterns are consistent with model predictions except for small discrepancies close to the tropopause. Our findings are inconsistent with the trends derived from radiosonde temperature datasets and from NCEP reanalyses of temperature and wind fields. The agreement with models increases confidence in current model-based predictions of future climate change.”
http://www.nature.com/ngeo/journal/v1/n6/abs/ngeo208.html
“Insofar as the vertical distributions shown in Fig. 3 are very close to moist adiabatic, as for example predicted by GCMs (Fig. 6), this suggests a systematic bias in at least one MSU channel that has not been fully removed by either group [RSS & UAH].”
http://earth.geology.yale.edu/~sherwood/sondeanal.pdf
“The observations at the surface and in the troposphere are consistent with climate model simulations. At middle and high latitudes in the Northern Hemisphere, the zonally averaged temperature at the surface increased faster than in the troposphere while at low latitudes of both hemispheres the temperature increased more slowly at the surface than in the troposphere.”
http://www.atmos.umd.edu/~kostya/Pdf/VinnikovEtAlTempTrends2005JD006392.pdf
“In the tropical upper troposphere, where the predicted amplification of surface trends is largest, there is no significant discrepancy between trends from RICH–RAOBCORE version 1.4 and the range of temperature trends from climate models. This result directly contradicts the conclusions of a recent paper by Douglass et al. (2007).”
http://ams.allenpress.com/archive/1520-0442/21/18/pdf/i1520-0442-21-18-4587.pdf
Also, it’s always worth pointing out that the satellite “channels” do not represent the actual temperature trends at those altitudes, but the trends of huge swaths of atmosphere that include the stratosphere to various degrees (except for TLT). The “channel” that is centered on the “hotspot” (RSS TTS — only reliable since 1987) is half troposphere and half stratosphere, a fact that is seldom (never) pointed out by people pushing this grab bag of nonsense.
Aaron says
“Having said that, the follow on post from this (here) demonstrates that there may well be a hot spot in any case. – gavin”
Thank you. :)