Once more unto the breach, dear friends, once more!
Some old-timers will remember a series of ‘bombshell’ papers back in 2004 which were going to “knock the stuffing out” of the consensus position on climate change science (see here for example). Needless to say, nothing of the sort happened. The issue in two of those papers was whether satellite and radiosonde data were globally consistent with model simulations over the same time. Those papers claimed that they weren’t, but they did so based on a great deal of over-confidence in observational data accuracy (see here or here for how that turned out) and an insufficient appreciation of the statistics of trends over short time periods.
Well, the same authors (Douglass, Pearson and Singer, now joined by Christy) are back with a new (but necessarily more constrained) claim, but with the same over-confidence in observational accuracy and a similar lack of appreciation of short term statistics.
Previously, the claim was that satellites (in particular the MSU 2LT record produced by UAH) showed a global cooling that was not apparent in the surface temperatures or model runs. That disappeared with a longer record and some important corrections to the processing. Now the claim has been greatly restricted in scope and concerns only the tropics, and the rate of warming in the troposphere (rather than the fact of warming itself, which is now undisputed).
The basis of the issue is that models produce an enhanced warming in the tropical troposphere when there is warming at the surface. This is true enough. Whether the warming is from greenhouse gases, El Niños, or solar forcing, trends aloft are enhanced. For instance, the GISS model equilibrium runs with 2xCO2 or a 2% increase in solar forcing both show a maximum between 20N and 20S at around 300mb (10 km):
The first thing to note about the two pictures is how similar they are. They both have the same enhancement in the tropics and similar amplification in the Arctic. They differ most clearly in the stratosphere (the part above 100mb), where CO2 causes cooling while solar causes warming. It’s important to note, however, that these are long-term equilibrium results and therefore don’t tell you anything about the signal-to-noise ratio for any particular time period or with any particular forcings.
If the pictures are very similar despite the different forcings, that implies that the pattern really has nothing to do with greenhouse gas changes, but is a more fundamental response to warming (however caused). Indeed, there is a clear physical reason why this is the case – the increase in water vapour as surface air temperature rises causes a change in the moist-adiabatic lapse rate (the decrease of temperature with height) such that the surface-to-mid-troposphere gradient decreases with increasing temperature (i.e. it warms faster aloft). This is seen in many observations and over many timescales, and is not something unique to climate models.
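For the numerically inclined, here is a minimal sketch of that argument, using standard textbook constants and a Tetens formula for saturation vapour pressure – an illustration only, not any GCM’s actual moist physics. Follow a moist adiabat up from the tropical surface, once for a given surface temperature and once for a surface 1 K warmer, and the warming aloft exceeds the warming at the surface:

```python
import numpy as np

# Illustrative only: textbook constants and a crude scale-height pressure,
# not any model's moist physics.
g, Rd, cp = 9.81, 287.0, 1004.0   # gravity (m/s2), dry gas constant, heat capacity
Lv, eps = 2.5e6, 0.622            # latent heat of vaporisation (J/kg), Mw/Md

def moist_lapse_rate(T, p):
    """Saturated adiabatic lapse rate (K/m) at temperature T (K), pressure p (Pa)."""
    es = 611.0 * np.exp(17.67 * (T - 273.15) / (T - 29.65))  # Tetens formula
    rs = eps * es / (p - es)                                  # saturation mixing ratio
    return g * (1.0 + Lv * rs / (Rd * T)) / (cp + Lv**2 * rs * eps / (Rd * T**2))

def moist_profile(T_sfc, dz=100.0, z_top=12000.0):
    """Integrate temperature upward along the moist adiabat from T_sfc (K)."""
    z = np.arange(0.0, z_top + dz, dz)
    T = np.empty_like(z)
    T[0] = T_sfc
    for i in range(1, len(z)):
        p = 101325.0 * np.exp(-z[i - 1] / 7500.0)  # crude scale-height pressure
        T[i] = T[i - 1] - moist_lapse_rate(T[i - 1], p) * dz
    return z, T

z, T0 = moist_profile(300.0)  # a tropical-ish surface temperature
_, T1 = moist_profile(301.0)  # same column, surface 1 K warmer
print((T1 - T0)[100])         # difference at ~10 km (100 m steps): > 1 K
```

The exact amplification factor depends on the constants chosen, but the sign – more warming aloft than at the surface – falls straight out of the thermodynamics of moist ascent.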
If this is what should be expected over a long time period, what should be expected on the short time-scale available for comparison to the satellite or radiosonde records? This period, 1979 to present, has seen a fair bit of warming, but also a number of big El Niño events and volcanic eruptions which clearly add noise to any potential signal. In comparing the real world with models, these sources of additional variability must be taken into account. It’s straightforward for the volcanic signal, since many simulations of the 20th century done in support of the IPCC report included volcanic forcing. However, the occurrence of El Niño events in any model simulation is uncorrelated with their occurrence in the real world and so special care is needed to estimate their impact.
Additionally, it’s important to make a good estimate of the uncertainty in the observations. This is not simply the uncertainty in estimating the linear trend, but the more systematic uncertainty due to processing problems, drifts and other biases. One estimate of that error for the MSU 2 product (a weighted average of tropospheric+lower stratospheric trends) is that two different groups (UAH and RSS) come up with a range of tropical trends of 0.048 to 0.133 °C/decade – a much larger difference than the simple uncertainty in the trend. In the radiosonde records, there is additional uncertainty due to adjustments to correct for various biases. This is an ongoing project (see RAOBCORE for instance).
So what do Douglass et al come up with?
Superficially it seems clear that there is a separation between the models and the observations, but let’s look more closely….
First, note that the observations aren’t shown with any uncertainty at all, not even the uncertainty in defining a linear trend (roughly 0.1°C/dec). Secondly, the offsets between UAH, RSS and UMD should define the minimum systematic uncertainty in the satellite observations, which therefore would overlap with the model ‘uncertainty’. The sharp-eyed among you will notice that the satellite estimates (even UAH – Correction: the UAH trends are consistent (see comments)) – which are basically weighted means of the vertical temperature profiles – are also apparently inconsistent with the selected radiosonde estimates (you can’t get a weighted mean trend larger than any of the individual level trends!).
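As a hedged back-of-envelope illustration of where a number like 0.1°C/dec comes from (synthetic AR(1) noise with assumed parameters, not the actual MSU series): fit a trend to about 28 years of autocorrelated monthly anomalies, then inflate the naive standard error to account for the reduced effective sample size.

```python
import numpy as np

# Synthetic example: assumed noise level and autocorrelation, not real MSU data.
rng = np.random.default_rng(0)
n, rho, sig = 336, 0.85, 0.25       # months (1979-2006), lag-1 autocorr, noise std
t = np.arange(n) / 120.0            # time in decades

noise = np.zeros(n)                 # AR(1) noise with stationary std = sig
for i in range(1, n):
    noise[i] = rho * noise[i - 1] + rng.normal(0.0, sig * np.sqrt(1 - rho**2))
y = 0.15 * t + noise                # an arbitrary 0.15 °C/dec underlying trend

coef = np.polyfit(t, y, 1)
resid = y - np.polyval(coef, t)
se = resid.std(ddof=2) / np.sqrt(((t - t.mean())**2).sum())  # naive OLS error
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
se *= np.sqrt((1 + r1) / (1 - r1))  # effective-sample-size inflation
print(f"trend = {coef[0]:.2f} +/- {2 * se:.2f} °C/dec (2-sigma)")  # +/- near 0.1
```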
It turns out that the radiosonde data used in this paper (version 1.2 of the RAOBCORE data) does not have the full set of adjustments. Subsequent to that dataset being put together (Haimberger, 2007), two newer versions have been developed (v1.3 and v1.4) which do a better, but still not perfect, job, and additionally have much larger amplification with height. For instance, look at version 1.4:
The authors of Douglass et al were given this last version along with the one they used, yet they chose to show only the first (the one with the smallest tropical trend), without any additional comment, even though they knew their results would be less clear with the newer data.
But more egregious by far is the calculation of the model uncertainty itself. Their description of that calculation is as follows:
For the models, we calculate the mean, standard deviation (sigma), and estimate of the uncertainty of the mean (sigma_SE) of the predictions of the trends at various altitude levels. We assume that sigma_SE and standard deviation are related by sigma_SE = sigma/sqrt(N – 1), where N = 22 is the number of independent models. ….. Thus, in a repeat of the 22-model computational runs one would expect that a new mean that would lie between these limits with 95% probability.
The interpretation of this is a little unclear (what exactly does the sigma refer to?), but the most likely interpretation, and the one borne out by looking at their Table IIa, is that sigma is calculated as the standard deviation of the model trends. In that case, the formula given defines the uncertainty on the estimate of the mean – i.e. how well we know what the average trend really is. But it only takes a moment to realise why that is irrelevant. Imagine there were thousands of simulations drawn from the same distribution; then our estimate of the mean trend would get sharper and sharper as N increased. However, the chance that any one realisation would fall within those error bars would become smaller and smaller. Instead, the key standard deviation is simply sigma itself. That defines the likelihood that one realisation (i.e. the real world) is conceivably drawn from the distribution defined by the models.
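A toy simulation makes the distinction concrete. It assumes normally distributed trends with the mean and standard deviation quoted in the next paragraph; as N grows, sigma_SE shrinks, but the probability that any single new realisation lands inside ±2 sigma_SE bars collapses:

```python
import numpy as np

# sigma_SE measures how well the *mean* is known, not whether any one
# realisation should fall inside the bars. Normal trends assumed
# (mean/sd taken from the numbers quoted below).
rng = np.random.default_rng(1)
mu, sigma = 0.185, 0.113                               # °C/dec
for N in (22, 100, 10000):
    trends = rng.normal(mu, sigma, size=N)             # N "model" trends
    half = 2 * trends.std(ddof=1) / np.sqrt(N - 1)     # Douglass et al. 2*sigma_SE
    single = rng.normal(mu, sigma, size=100_000)       # fresh single realisations
    p = np.mean(np.abs(single - trends.mean()) < half)
    print(f"N={N:5d}  2*sigma_SE={half:.3f} °C/dec  P(one run inside)={p:.2f}")
```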
To make this even clearer, a 49-run subset (from 18 models) of the 67 model runs in Douglass et al was used by Santer et al (2005). This subset only used the runs that included volcanic forcing and stratospheric ozone depletion – the most appropriate selection for this kind of comparison. The trends in T2LT can be used as an example. I calculated the 1979-1999 trends (as done by Douglass et al) for each of the individual simulations. The values range from -0.07 to 0.426 °C/dec, with a mean trend of 0.185 °C/dec and a standard deviation of 0.113 °C/dec. That spread comes not predominantly from uncertain physics, but from the unforced noise in each realisation.
From their formula the Douglass et al 2 sigma uncertainty would be 2*0.113/sqrt(17) = 0.06 °C/dec. Yet the 10 to 90 percentile for the trends among the models is 0.036–0.35 °C/dec – a much larger range (+/- 0.19 °C/dec) – and one, needless to say, that encompasses all the observational estimates. This figure illustrates the point clearly:
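The arithmetic can be checked in a couple of lines from the quoted numbers alone:

```python
import numpy as np

sigma, n_models = 0.113, 18                # sd of model trends (°C/dec), N models
print(2 * sigma / np.sqrt(n_models - 1))   # Douglass et al. 2*sigma_SE: ~0.055
print(2 * sigma)                           # spread relevant to one realisation: ~0.23
```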
What happens to Douglass’ figure if you incorporate the updated radiosonde estimates and a reasonable range of uncertainty for the models? This should be done properly (and could be), but assuming that the slight difference in period for the RAOBCORE v1.4 data and the selection of model runs with volcanic forcings aren’t important, then using the standard deviations in their Table IIa you’d end up with something like this:
Not quite so impressive.
To be sure, this isn’t a demonstration that the tropical trends in the model simulations or the data are perfectly matched – there remain multiple issues with moist convection parameterisations, the Madden-Julian oscillation, ENSO, the ‘double ITCZ’ problem, biases, drifts etc. Nor does it show that RAOBCORE v1.4 is necessarily better than v1.2. But it is a demonstration that there is no clear model-data discrepancy in tropical tropospheric trends once you take the systematic uncertainties in data and models seriously. Funnily enough, this is exactly the conclusion reached by a much better paper by P. Thorne and colleagues. Douglass et al’s claim to the contrary is simply unsupportable.
Lynn Vincentnathan says
According to the article, the authors conclude that “carbon dioxide and other greenhouse gases make only a negligible contribution to climate warming.”
If that were actually the case, then the implication would be that we would have to reduce our GHGs all the more in hopes of reducing the warming trend at least a little. I mean we can’t very well turn down the sun, or halt cosmic rays, or whatever. We have to do what we can do. And in this case every effort would have to be on reducing that tiny portion of our contributions to GW in hope of avoiding a runaway scenario and much worse harm….since it seems the harms increase almost exponentially with the warming. Every little bit of reduction in warming would help tremendously, and might be that straw lifted from the camel’s back just in time.
I guess, if there are still people on earth in billions of years when the sun does start getting much hotter, they will be struggling to do whatever is in their power to do to reduce the warming and its harms. Where there’s life, there’s hope, and I imagine future people (if not this generation) would struggle even more to do whatever is in their power to keep life going. The mantra might be, “Johnny, the sun is causing us a lot of harm and danger, and we don’t want to add anything to that, so be a good boy and turn off that light not in use!”
SecularAnimist says
A commenter on a political blog site has posted the following quote which he attributes to the “lead author” of this paper:
The first sentence of that statement does seem to more or less accurately describe the paper’s contention, although it seems it would be more accurate to say “does not show the characteristic fingerprint associated with the predictions of the 22 models examined in the study“.
However, the second sentence asserting the “inescapable conclusion … that observed increases in carbon dioxide and other greenhouse gases make only a negligible contribution to climate warming” seems to go far beyond what the paper purports to have demonstrated, to the point of seriously misrepresenting the paper’s conclusions.
Does anyone know whether the above quoted statement was actually made by the lead author of the paper? If so, is it a justifiable characterization of the actual conclusions of the paper? (Aside from whether those conclusions are scientifically sound.)
VirgilM says
Re: Stratospheric Cooling
I found a paper in the Journal Science called “Anthropogenic and Natural Influences in the Evolution of Lower Stratospheric Cooling” V. Ramaswamy, et al. Science 311, 1138 (2006) DOI: 10.1126/science.1122587
In this paper is the following quote: “In Fig 3B, the comparison of WmggO3 and Wmgg shows that the overall lower stratospheric temperature decline is driven primarily by the depletion of ozone, and to a lesser extent by the increase in well-mixed greenhouse gases”
This paper looked at lower stratospheric temperatures between 1979 and 2003 and tried to model those temperatures using various natural and anthropogenic forcings. I must note that this conflicts with Gavin’s assertion that greenhouse gases are most responsible for stratospheric cooling during the last few decades. Gavin cannot ignore the results of the paper that I referenced.
Gavin said: “The big difference between solar and CO2 forcing is in the stratosphere – where CO2 causes cooling – just as is seen in the real world.” The problem with this statement is that the stratosphere cooled because of O3 depletion, NOT CO2. So you can’t really look at the stratosphere to see if most of the tropospheric warming is due to solar or GHG forcings.
[Response: You confuse the lower stratosphere with the whole stratosphere. MSU4 is mostly a lower stratospheric signal, and indeed, the trends there are mostly associated with ozone depletion. But higher up, it’s more related to CO2 (see http://www.atmosphere.mpg.de/enid/20c.html for instance, fig 4). -gavin]
Andre says
pertaining RAOBCORE I read:
“It turns out that the radiosonde data used in this paper (version 1.2 of the RAOBCORE data) does not have the full set of adjustments. Subsequent to that dataset being put together (Haimberger, 2007), two newer versions have been developed (v1.3 and v1.4) which do a better, but still not perfect, job”
and
“Nor does it show that RAOBCORE v1.4 is necessarily better than v1.2.”
and in the response to comment #42
” With respect to RAOBCORE, I don’t have a position on which analysis is best”
Care to explain?
[Response: The researchers on RAOBCORE presumably think that v1.4 is better (otherwise why put it out?). I do not have enough knowledge on radiosonde analyses to be able to render a judgment. The main point is that the systematic uncertainty is much larger than portrayed in the Douglass et al paper and the authors knew that before their paper was published. Whether one is better or not, shouldn’t they have at least mentioned it? – gavin]
Ian Rae says
“The main point is that the systematic uncertainty is much larger than portrayed in the Douglass et al paper and the authors knew that before their paper was published.”
Yes they should have mentioned it. But the point of the paper is to show that observational data doesn’t match model data in certain parts of the troposphere. Showing that one dataset, v1.4, overlaps the models is a pretty weak rebuttal. Why don’t the other obs datasets overlap?
This tendency to keep responding to criticisms with the latest data (v1.4 was published this year) reinforces more than anything how unsettled the science still is. And that progress is being made…
[Response: You missed the main criticism – the uncertainty on the model runs is completely wrong. The correct ones overlap even the earlier versions of the radiosondes. – gavin]
Ian Rae says
The graphic labeled “Not quite so impressive” shows only RAOBCORE v1.4, happily up by the mean of the model data. The earlier datasets are not shown in the graphic, but yes they would overlap, down near a trend value of 0.1. So it would appear that in order to make things overlap you need models that predict global warming of less than 0.1 deg/decade! Yes, the upper error bar is around 0.5, but since the obs datasets are down at the lower bound, it’s kind of reassuring.
Also, I thought error bounds meant any value within those bounds was equally likely. And since (peering at the diagram) around 80% of the model value error range is above the obs data, the question of a discrepancy remains statistically likely.
Ray Ladbury says
Ian Rae, you seem to think that uncertainty in one aspect of climate science means that it is all uncertain – not true. CO2 forcing is well known. Radiosonde measurements are quite difficult (think about how they are made). There’s lots of noise in the data. Also, your idea that any value within error bounds is equally likely isn’t right – errors are often assumed to be normally distributed about the mean, although this, too, is just a convenient approximation.
Terry says
So what you are saying is that the confidence intervals around the model predictions are so large that they are essentially unfalsifiable?
[Response: for this metric, and taking into account the errors on the observations and the shortness of the interval, yes. With different metrics, different time periods, or different observations, it’s a different story. – gavin]
David Young says
As a non professional scientist I would like to know why, if this new paper is so bad, that it passed peer review. Isn’t peer review supposed to weed out bad papers?
Nick Gotts says
Re #50 [David Young] David, I think of peer review as a kind of spam filter – it gets rid of most of the spam, but some slips through, and occasionally emails you would really have wanted get blocked – i.e. a really good paper gets rejected – I know this has happened to mine ;-).
Gavin (no, not that one, a different one) says
#49 “So what you are saying is that the confidence intervals around the model predictions are so large that they are essentially unfalsifiable?”
No, the models are falsifiable (in the sense of Popper); they just have not been falsified by this particular set of observational data. This is because the data are consistent with the models.
However the models are biased, on average they over-predict. This is undoubtedly a concern already known to the modellers.
#50 “As a non professional scientist I would like to know why, if this new paper is so bad, that it passed peer review. Isn’t peer review supposed to weed out bad papers?”
No process involving human beings is going to be perfect; bad papers do get through peer review sometimes. Fortunately, replies to the article appear in the journal and the original authors may submit a rebuttal. This means that science is able to recover from errors in the review process. This system has worked well in the past, and no doubt will sift out the truth in this case.
henning says
@Lynn – 44
No. What it would mean (if it were true) is that all of the models and our understanding of what actually drives climate change would be wrong. Since the models used in AR4 depend on a certain GHG effect, none of their projections (temperature, precipitation, sea-level, sea-ice etc.) would have any confidence at all. The entire game would start from scratch – and nobody would spend huge amounts of money on reducing GHGs when the effect would be lost in the noise of other, yet unknown or heavily underestimated, forcings.
Let me play the devil’s advocate for a second and string the evidence against CO2 together:
CO2 levels are rising, but the climate sensitivity is vastly overestimated by the IPCC due to overestimated positive feedbacks. In truth, the feedbacks cancel each other out (Lindzen et al.) and the resulting sensitivity is much smaller (Schwartz). The surface temperature record is contaminated (McKitrick) and most of the observed warming is in fact land-use and not GHG. If it were GHG, the radiative forcing should show higher trends in the mid troposphere, which it doesn’t (Douglass et al.). Lower temperatures in the stratosphere are caused by ozone depletion and the Arctic is just affected by cyclic changes in wind and sea currents. So you see – nothing to worry about, especially since Loehle just showed that it was like this a mere millennium ago. And guess what – this is well within the modelled range, according to Gavin Schmidt.
;-) Just kidding.
Ray Ladbury says
David Young #50, Peer review fulfills many functions–weeding out papers that are clearly incorrect, yes, but also improving papers that are flawed. And even if a paper is not 100% correct, a reviewer may decide that it would be of sufficient interest to the general community to be published. Remember, the intended audience are experts, not laymen. The assumption is that experts can read and discuss the paper and reach a conclusion as to its merits. The ultimate test is whether the paper is cited in future work. Peer review is a floor, not an absolute judgement.
Timothy Chase says
RAOBCORE: Still a few kinks…
The people at RAOBCORE believe that the 1.4 is definitely better than the earlier versions. However, I wouldn’t get too attached to 1.4 as of yet. They believe there is an issue endemic to all versions which will be fixed in the next. No doubt RAOBCORE will be a nice tool once it is done, but at the moment it looks like it has a few kinks.
I think I would avoid using this in the tropics for the time being, and unfortunately I can’t tell when 1.5 is coming out.
If I understand the problem correctly, their product is designed to identify by means of empirically established measurements how the climate system behaves according to certain metrics under near equilibrium conditions. As such they have to take the measurements under near equilibrium conditions. However, there was a strong weather system passing through at the time and they took their measurements anyway. An actual case of GIGO — in a product being used to “test” the models.
Paul says
Gavin,
Re 43. So you are saying there is insufficient (statistical) confidence in the model output (per the confidence interval quoted) and the observational data (illustrated by the very large revision to RAOBCORE), as a joint probability distribution, to make any claims about the efficacy of the model output?
[Response: Efficacy? If you mean to say that this data and this comparison are not very useful in characterising model skill, then the answer is yes, it is not (yet) useful. – gavin]
Richard Sycamore says
Gavin, will you be publishing the content of this post in the form of a rebuttal? And would such a rebuttal be subject to the same level of peer review as the original article? I encourage you to do so. It is good to see some attention being paid to confidence levels around the model predictions (red triangle series in the last graph: they’re all over the map). I can think of many studies in climatology where a match between two time-series patterns would be “not so impressive” if the authors were to correctly calculate robust confidence intervals on the two data series.
Gsaun says
I was forwarded what looked like a news report that made a lot of really interesting statements. It was also interesting how it swept through the denialist blogosphere and was replicated over and over again.
Tracking it back, I found that it was a press release from Singer’s site. What is interesting is that almost none of the press release is relevant to the paper other than to make a passing reference to a paper that was accepted.
Apparently, getting a paper into a peer-reviewed journal then entitles you to make other claims that you can’t make easily in something that is peer reviewed.
In my reply, I made some simpler comments for my correspondent and the people who might eventually read the back and forth. Since no one has posted it above, I post it here for your viewing. Anchor yourself so you don’t go into spin mode.
Climate scientists at the University of Rochester, the University of Alabama, and the University of Virginia report that observed patterns of temperature changes (‘fingerprints’) over the last thirty years are not in accord with what greenhouse models predict and can better be explained by natural factors, such as solar variability. Therefore, climate change is ‘unstoppable’ and cannot be affected or modified by controlling the emission of greenhouse gases, such as CO2, as is proposed in current legislation.
These results are in conflict with the conclusions of the United Nations Intergovernmental Panel on Climate Change (IPCC) and also with some recent research publications based on essentially the same data. However, they are supported by the results of the US-sponsored Climate Change Science Program (CCSP).
The report is published in the December 2007 issue of the International Journal of Climatology of the Royal Meteorological Society [DOI: 10.1002/joc.1651]. The authors are Prof. David H. Douglass (Univ. of Rochester), Prof. John R. Christy (Univ. of Alabama), Benjamin D. Pearson (graduate student), and Prof. S. Fred Singer (Univ. of Virginia).
The fundamental question is whether the observed warming is natural or anthropogenic (human-caused). Lead author David Douglass said: “The observed pattern of warming, comparing surface and atmospheric temperature trends, does not show the characteristic fingerprint associated with greenhouse warming. The inescapable conclusion is that the human contribution is not significant and that observed increases in carbon dioxide and other greenhouse gases make only a negligible contribution to climate warming.”
Co-author John Christy said: “Satellite data and independent balloon data agree that atmospheric warming trends do not exceed those of the surface. Greenhouse models, on the other hand, demand that atmospheric trend values be 2-3 times greater. We have good reason, therefore, to believe that current climate models greatly overestimate the effects of greenhouse gases. Satellite observations suggest that GH models ignore negative feedbacks, produced by clouds and by water vapor, that diminish the warming effects of carbon dioxide.”
Co-author S. Fred Singer said: “The current warming trend is simply part of a natural cycle of climate warming and cooling that has been seen in ice cores, deep-sea sediments, stalagmites, etc., and published in hundreds of papers in peer-reviewed journals. The mechanism for producing such cyclical climate changes is still under discussion; but they are most likely caused by variations in the solar wind and associated magnetic fields that affect the flux of cosmic rays incident on the earth’s atmosphere. In turn, such cosmic rays are believed to influence cloudiness and thereby control the amount of sunlight reaching the earth’s surface-and thus the climate.” Our research demonstrates that the ongoing rise of atmospheric CO2 has only a minor influence on climate change. We must conclude, therefore, that attempts to control CO2 emissions are ineffective and pointless. – but very costly.”
gough says
For some who have heard Prof. Douglass’ talks over the years in Rochester, this is a moment rich in drama. In his talks, there is always a heavy dose of anti-Gore sarcasm, and the belittling of climate scientists who predict anything more than the mildest consequences of global warming. He shows a slide with a vicious circle, in which predictions of significant consequences generate research funding, which in turn causes researchers to predict even more dire consequences. GCMs are deemed to be wrong because they are too complicated, while using a grid that is too coarse, and couldn’t possibly take account of all the physics correctly. One is struck by Prof. Douglass’ continued level of certainty, which hasn’t wavered over the years. Even after the claims of Christy and Spencer, which he had trumpeted, were shown to be erroneous, he didn’t waver. When someone asked, “What about the melting glaciers?” he responded that all the attention goes to glaciers that are shrinking. We don’t hear about the glaciers that are growing.
The tone of the recent press release is no surprise, but it is a severe disappointment that the press conference (mentioned in #1) was canceled. A future playwright or composer of opera might have obtained some excellent material.
Steve says
Because the results between the v1.2 and v1.4 datasets were so different, I actually emailed one of the DCPS authors asking them to justify their dataset selection. From that explanation, I believe the v1.2 dataset to be the more accurate and the results based on that dataset to be more believable.
One can be sure that if the empirical data had overlapped the model, there would be little/no discussion from modellers about error bars. In true protect-the-model form, it is asserted that if error bars had been added to the measurements that measurement and model envelopes would overlap and, assuming the best case, show that the models are acceptable. It would also have shown, though, that the models could be even worse.
I am a firm believer in model development. With respect to the atmosphere, though, they have still not risen to the level of trustworthiness for future climate prediction/projection.
[Response: Well that’s nice. Perhaps you’d like to share their explanation which curiously is not to be found in the paper itself? People wouldn’t be criticising the calculation of the error bars if it had been done properly… – gavin]
Timothy Chase says
Steve (#58) wrote:
I think the following is worth quoting at this point:
Looks like they would recommend using version 1.4. It also looks like they are having a problem with the tropics.
It might also be worth looking at what the producer of a competing product has to say regarding their own product:
Climate models do quite well — as measured by a variety of metrics in many different contexts. Radiosondes? Looks like there is still a substantial amount of work to be done — as indicated by the “Caution” labels.
Gavin (no, not that one, a different one) says
“The interpretation of this is a little unclear (what exactly does the sigma refer to?), but the most likely interpretation, and the one borne out by looking at their Table IIa, is that sigma is calculated as the standard deviation of the model trends.”
Does this mean that sigma is the standard deviation for the mean trend for each model over several realisations (i.e. it is the standard deviation of 22 numbers rather than 67)? If this is the case it may have artificially reduced the width of the error bars even further as the prior averaging over realizations will have reduced the variance to some extent. I would have thought the standard deviation over the 67 realisations would be a fairer estimate of the model uncertainty.
[Response: Agreed. – gavin (yes, that one) ]
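A quick illustration of that point with made-up numbers, assuming for simplicity that the run-to-run spread within a model matches the spread across models:

```python
import numpy as np

# Synthetic numbers: averaging each model's realisations before taking the
# standard deviation across models understates the run-to-run spread.
rng = np.random.default_rng(2)
runs = rng.normal(0.185, 0.113, size=(22, 3))  # 22 "models" x 3 realisations
print(runs.std(ddof=1))                        # spread over all 66 runs: ~0.11
print(runs.mean(axis=1).std(ddof=1))           # spread of 22 model means: ~0.07
```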
Lynn Vincentnathan says
RE #53 & 44. Henning, I never conceded any of those other points — the idea was that our GHG emissions were only playing a minor role in the warming (& I didn’t actually concede that either).
However, if we pose ALL those other unfounded contrarian bizarro-science points, it’s all the easier to shoot them down (re policy implications):
Even if GW and AGW are not happening, we still drastically need to pursue all the measures that reduce GHG emissions – energy/resource efficiency, conservation & alt energy – not only because this will reduce many other environmental harms, lessen dependence on foreign oil and slow the depletion of finite resources, but also because it just makes eminent economic sense.
For instance, we have reduced our GHG emissions (over our 1990 emissions) by two-thirds cost-effectively, without lowering our living standard (increasing it actually). And we could reduce more cost-effectively. Since we moved down to Texas to get on Green Mountain’s 100% wind energy (which also saves us money), I haven’t bought a bicycle. So once I get one, I can cycle the 2 miles to work rather than drive, and I’m sure improve my health & stress levels in the process. I also understand that cycling and walking help reduce crime, save money on road repair (by offsetting car driving), and create a more friendly community, etc.
Reducing GHG emissions is a win-win-win-win-win-win situation. And if perchance the skeptics are correct and humans are not causing GW, and GW isn’t even happening — then our belief in the scientific facts that it is happening & our serious mitigation response to it would turn out to be the best thing that ever happened to us – the false positive bonanza.
However, if we’re talking false negative & a do-nothing to mitigate approach, we’re really in for hell on earth.
Russell Seitz says
Gavin:
Repeating your assertion (#2) that the National Press Club press conference had been canceled at DSB brought a reply from Singer saying he held it, though it is not on the NPC events calendar – maybe he bought somebody a waffle?
Hank Roberts says
Apparently he talked to someone at a meeting sometime, if you read way down in this undated story, at least that’s how they describe it.
http://afp.google.com/article/ALeqM5jnsW1wNezDB_oKpYLA5npFOC03Dg
That John fellow at Alabama has his name spelled a new way in this article, I notice, suggesting the fact checker’s off today.
No one signed it; Google News got it from Agence France Presse (AFP).
henning says
Just to get this in line: let’s assume that the observed trends are indeed correct – wouldn’t that mean that Douglass et al.’s conclusions are correct, too? Greenhouse forcing should produce trends in the troposphere higher than surface trends, as I understand it. If these trends were lower, the entire GHG theory would fall, right?
[Response: No. The expected amplification has nothing to do with GHGs being the cause. If there were any real and clear differences (which there are not), it would imply either problems in the observing systems or with our ideas about moist convection. – gavin]
Lynn Vincentnathan says
RE #62 & “No. The expected amplification has nothing to do with GHGs being the cause. If there were any real and clear differences (which there are not), it would imply either problems in the observing systems or with our ideas about moist convection.”
Gavin, you mean even if they were right and there was a significant difference between reality (assuming the obs are capturing that – which apparently they are not doing exactly) and the models, it would just mean the models need tweaking and the underlying processes rethought a bit.
IOW, this is pretty much ado about nothing (for those interested in the macro-issues of whether or not GW is happening and whether it’s caused by our GHGs)?
And the denialists main gist in their article and palaver around the web was to knock the models?
Well, I have an answer to that attack as well: Okay, models do not perfectly replicate reality, and perhaps the models might be overestimating the problem; but then again the models might be underestimating the problem and we could be in much hotter water than we thought.
Lynn Vincentnathan says
RE #66, Hank, that is shocking that Agence France Presse would run such a story. I think they have been a fairly good source of GW news stories. Surely they know the few people they interviewed are in the extreme minority of climate scientists, and that their ideas have been for the most part debunked.
I know the media in general, especially here in the U.S., have been very bad in their GW coverage, and have given us the “silent treatment” on GW, and when they broke their silence, wrongly used the “balanced, pro-con” format (which is good for opinion issues, but not for science).
But I thought the news services have been somewhat better on GW, and that it was the newspapers and TV news that refused to pick up the stories the news services offered them.
Now I have to rethink news services (I’m writing a paper on GW and the media). I guess no one can be trusted in this, except the vast majority of climate scientists who say AGW is real.
I’m wondering if the recent sale of AFP had anything to do with this – see http://www.reuters.com/article/technology-media-telco-SP/idUSL1556429420071217
Steve says
#64 Gavin, I’m not a go-between for you and other researchers. You have their email addresses. Ask them yourself! The whole idea behind scientific progress is for the researchers themselves to correspond with one another to resolve differences in results. If you do not have a good enough relationship with those researchers to air honest differences, then you have basically created an inbred network of colleagues who do not review your work with a critical eye but simply rubber-stamp it [edit]
[Response: Asking for clarification of your statements as opposed to simply taking your word for something someone may have said seems appropriate. Rest assured that there is plenty of communication going on between those researchers who are actively working on this. My comments here have focussed only on two issues which do not require any expertise to assess – the incorrect calculation of the model uncertainty and the complete lack of discussion of the actual observational uncertainty. – gavin]
Steve says
#73 Gavin, I understand that no one should simply take my word for anything. However, I did not feel I had the right to post statements from a private communication. The implication was that these researchers are available to any interested party with questions.
In a follow-up email, though, I received a full explanation about the model uncertainty calculations and observational uncertainty (and I did not even ask for them). If you really want to know, all you have to do is email them.
Arno says
Excerpt from the article above comparing the two graphics, which show little difference:
“If the pictures are very similar despite the different forcings that implies that the pattern really has nothing to do with greenhouse gas changes, but is a more fundamental response to warming (however caused).”
It seems that the IPCC thinks differently and supports Douglass et al’s point of view:
“Figure 9.1. Zonal mean atmospheric temperature change from 1890 to 1999 (°C per century) as simulated by the PCM model from (a) solar forcing, (b) volcanoes, (c) well-mixed greenhouse gases, (d) tropospheric and stratospheric ozone changes, (e) direct sulphate aerosol forcing and (f) the sum of all forcings.”
source:
Chapter 9 Understanding and Attributing Climate Change, page 675
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf
It is clearly seen that, as simulated by the PCM model, different forcings generate different warming patterns in tropical atmospheric temperatures.
What do you think about this?
Hank Roberts says
Steve, ask them to come here or give you permission to quote them. No need to complicate this by asking other people to do it. You have the info, they have the right to say you can post it. Go for it!
Timothy Chase says
Steve (#79) wrote:
Steve,
If you are in contact with the authors of Douglass, Pearson, Singer and Christy, perhaps you could invite them to explain:
(1) why they chose to use an older version 1.2 of RAOBCORE when the current version is 1.4;
(2) why they think that 1.2 is superior to 1.4 even though the producers regard 1.4 as having substantial improvements over 1.2 and 1.3;
(3) why they chose not to even acknowledge the existence of versions 1.3 and 1.4 in their paper;
(4) how they “calculated” model uncertainties; and,
(5) why they chose to omit any acknowledgement of observational uncertainties inherent in the RAOBCORE radiosonde product.
I am also wondering whether they are aware of the fact that RAOBCORE and HADAT carry prominent cautionary notes regarding their use and RAOBCORE specifically notes that all versions of their product have significant problems in the tropics.
Personally, to me this seems more like the “oversight” of using earlier versions of UAH — which had a variety of technical issues, not the least of which involved the difference between night and day. Not that I mean to compare RAOBCORE to UAH.
Steve says
#82 Tim, As I said in an earlier post, I have no desire to be the go-between for discussions. And I will not post the contents of non-public correspondences. That is a violation of trust. I did indeed inform them of this thread. My guess is that they will not post because of the sometimes uncivil responses that occur (as they also do on anti-AGW sites). Respectfully, you have a keyboard and the email addresses are public knowledge. If you are burning to know the answers, please send an email. If you get permission to post the response, I would love to see the civil exchange that develops.
Russell Seitz says
Re 1. Thanks for the update – who claimed it was canceled?
[Response: – gavin]
Robert Reynolds says
As a geologist, having read and only partially understood the climate based terminology, I see a science in termoil with neither side with the ammunition to do each other in. I feel that this controversy could not be more ill-timed. With barbarians at our gates, determined to destroy western civilization, we are not in any position to throw many trillions of dollars and Euro’s into sequestering CO2. I personally feel we are in the 5th interglacial warm spell of the Pleistocene and anything that prolongs it is better than returning to another epoch of glaciation. How could it exceed the warm climate of the (175 my) Mesozoic Era? This was a good era for land based organisms. Primitive mammals, birds and flowering plants appeared. Mammals remained small because of dinosaur predation. Extensive forests florished as shown by the coal seams of the Mesa Verde fm and the very thick Paleocene coals of Wyoming. I think the problem of serious over-population and the resultant strain on natural resources and agricultural land will be our undoing in the struggle for survival, expedited by widespread nuclear technology.
Hank Roberts says
> Barbarians at The Gate
Excellent book, I recommend it. Quite cautionary for our time too.
http://books.google.com/books?id=3tDDlEFq1fAC&printsec=frontcover&dq=barbarians&lr=
Fred Staples says
Although it is interesting, I doubt if this discussion about troposphere temperatures relative to the surface will resolve the argument about the impact of greenhouse gases on global temperatures.
This issue will not be resolved until the James Hansen prediction from the seventies has been either confirmed or rejected.
Writing in 1978, he predicted that, “if the abundance of the greenhouse gasses continue to increase with at least the rate of the 1970s, their impact on global temperature may soon rise above the noise level”. For significance, he was looking for 0.4 degrees centigrade increase.
There is nothing so powerful as a successful prediction.
Without the significant increase in temperatures from 1978 to 1998, no-one (politicians, journalists and peace prize committees) would have taken the AGW argument seriously. Whether an increase of 0.91 degrees C could really have resulted from an increase in CO2 concentrations from 335ppm to 366ppm in twenty years is another matter altogether.
It is what happened next that will be decisive. Starting from the end of 2007, how far back must we go before the temperature trend again differs significantly from zero? Against a straightforward F test, the increase in the UK data (a close proxy for the Northern Hemisphere, Ray) is significant at the 5% level in 1993, and not afterwards. And if we go back to the two warm years at 1989 and 1990, the increase from 1989 is only just significant at the ten per cent level.
The CO2 increase since 1989 is almost the same as the Hansen increase.
If these trends continue, and global temperatures do not rise over the next five years, the clamour from the sceptics will be deafening. Will it then be possible to construct a defence of AGW?
guthrie says
Robert – what sea level was present 175 million years ago? Secondly, what corals were around? You cannot compare conditions many millions of years ago to what we may experience very soon (in geological scales) because it is the rate of change that matters, not the precise end point.
Climate change also puts pressure on farming because of altered rainfall patterns, salinisation as sea levels rise, increased CO2 levels changing plants’ respiration, and some other reasons which I cannot recall. It is more likely in my opinion that climate change, population growth, resource use and ecosystem destruction together would cause major problems.
Barton Paul Levenson says
Robert Reynolds posts:
[[As a geologist, having read and only partially understood the climate based terminology, I see a science in termoil ]]
“Turmoil.”
[[with neither side with the ammunition to do each other in.]]
Then you clearly don’t know much about the subject.
[[ I feel that this controversy could not be more ill-timed. With barbarians at our gates, determined to destroy western civilization, we are not in any position to throw many trillions of dollars and Euro’s into sequestering CO2. I personally feel we are in the 5th interglacial warm spell of the Pleistocene and anything that prolongs it is better than returning to another epoch of glaciation.]]
A real geologist would know we weren’t due for an ice age for another 20,000-50,000 years, even without global warming.
[[ How could it exceed the warm climate of the (175 my) Mesozoic Era? This was a good era for land based organisms.]]
Doesn’t mean it would be good for us, or that the transition would be smooth or even survivable. Lava becomes great soil, but you don’t want to be there when it comes out of the volcano.
[[ Primitive mammals, birds and flowering plants appeared. Mammals remained small because of dinosaur predation. Extensive forests florished as shown by the coal seams of the Mesa Verde fm and the very thick Paleocene coals of Wyoming.]]
A geologist would know that the Paleocene was not during the Mesozoic.
[[ I think the problem of serious over-population and the resultant strain on natural resources and agricultural land will be our undoing in the struggle for survival, expedited by widespread nuclear technology.]]
Could be.
Ray Ladbury says
Fred Staples #87 said: “If these trends continue, and global temperatures do not rise over the next five years, the clamour from the sceptics will be deafening. Will it then be possible to construct a defence of AGW?”
Actually, since there are few skeptics who even understand climate science, let alone publish in refereed journals, they can scream as loudly as they want. Five years is a very short time to look for climatic trends – I certainly wouldn’t recommend allowing a 5 year trend to overrule the evidence emerging from a 20 year trend or a 150 year trend. I would also think that physics ought to play a role – physics says we’ll keep warming in the long term.
Fred Staples says
I’m sorry, Ray, if my comment (89) was not clear. The trend over the last six years is downward, but that is far too short a period to mean anything.
Before 1998 the temperature trend was upward – not just increasing but increasing significantly in the F-test sense relative to the random variation in the signal.
However, from 1994 to 2007, 13 years, the upward trend is not significantly different from zero. There is one chance in six that the observed trend arose by accident.
If we then go back 5 more years to 1989, 18 years, we find an annual average temperature of 10.5 degrees against 10.42 this year. The trend temperature is still upward over the entire 18 years because three of the next four years (’91/92/93) were cold (below pre-Hansen levels, actually) but the upward trend is again not statistically significant.
If the temperatures fluctuate about current levels over the next 5 years to 2012 we will then have a total of 23 years without a significantly increasing trend. It is this period that will, I suspect, make AGW indefensible to the non-scientific establishment.
Russell Seitz says
if these trends continue, and global temperatures do not rise over the next five years, the clamour from the sceptics will be deafening. Will it then be possible to construct a defence of AGW?”
Comment by Fred Staples — 18 December 2007 @ 7:34 AM
As this question is literally rhetorical – it concerns the quality of both sides’ rhetoric – the answer may depend on the degree to which proponents of models refrain from giving them much to clamor about. The rhetoric of motives has already severely afflicted one side, but that will afford no protection to the other if its commitment to scientific candor should falter, or if it lets the semiotic abuse of models as a tool for the advancement of environmental or economic agendas get out of hand.
Ray Ladbury says
Fred Staples #91, It is interesting that you are more interested in what the “non-scientific establishment” thinks than what the scientists think, is it not? Wouldn’t one think that the experts would have a better appreciation for what is going on than the non-scientists? And even if your contention of cooling were correct (it is not), if there were a good reason for the cooling (e.g. decreased insolation, increased aerosols from Chinese coal-burning power plants, etc.) that would certainly not mean we are out of the soup. Thanks, Fred, but I’ll stick with physics.
Fred Staples says
We have discussed the physics at great length, Ray. We agreed, I think, that the two plausible explanations for AGW (inhibited surface radiation and “higher is colder”) both require the troposphere temperature to increase more than the surface temperature. That issue is the subject of this thread.
It is a simple matter of record that AGW would not have been taken seriously had it not been for the warming from 1978 onwards, predicted by James Hansen. Temperatures had declined from the previous peak in the thirties.
The CO2 Science web site provides us with an F-test for two sets of data:
Angell, J.K. 1999. Global, hemispheric, and zonal temperature deviations derived from radiosonde records. In: Trends: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, TN, USA.
And
The Global Historical Climatology Network (GHCN)
This provides a direct comparison of the troposphere data and the surface data, measured independently, from 1978 onwards.
For the surface data we obtain an increase of 0.8 degrees C, with an F value of 55 for 27 degrees of freedom – absolutely significant.
For the troposphere data we have a trend not significantly different from zero, F = 0.16 for 24 degrees of freedom (to year 2004).
How, Ray, in the name of physics, can you explain those results? As I am sure you know, the F-test is testing that data against its inherent variability and its measurement error, combined. The surface temperature has increased; the troposphere temperature has not.
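For readers who want to reproduce this kind of test: for a single linear trend, the F statistic on (1, n-2) degrees of freedom is just the square of the t statistic on the slope. A minimal sketch on synthetic annual data follows (not the GHCN or Angell series; it also treats residuals as independent, which flatters the significance of short trends):

```python
import numpy as np
from scipy import stats

def trend_f_test(y):
    """F-test for a linear trend in an annual series (independent residuals assumed)."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    fit = slope * t + intercept
    ssr = ((fit - y.mean())**2).sum()               # variance explained by the trend
    sse = ((y - fit)**2).sum()                      # residual variance
    F = ssr / (sse / (len(y) - 2))
    return slope, F, stats.f.sf(F, 1, len(y) - 2)   # slope, F, p-value

rng = np.random.default_rng(3)
n = 28                                              # roughly the 1978-2005 window
y = 0.02 * np.arange(n) + rng.normal(0.0, 0.15, n)  # 0.2 °C/dec trend plus noise
print(trend_f_test(y))
```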
Hank Roberts says
http://www.nap.edu/openbook.php?record_id=9755&page=21
“..the range of these trend estimates is determined by applying different trend algorithms to the different versions of the surface and tropospheric data sets. Further discussion of the uncertainties inherent in these estimates is provided in chapters 6–9.”
http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php#
Hank Roberts says
Fred, that information you quote from the CO2science website – do you know what’s been published since that 1999 article, following up the Christy et al. work?
Check the references and follow the cites forward.
Richard Sycamore says
“Trends, trends, trends”. Refocus. The substantive issue here is whether or not you have a divergence problem. If the temperature trend continues to fail to rise in lock-step with rising CO2 you have a problem; your CO2 sensitivity estimate is dropping. The real question is how much confidence you have in this estimated parameter. If you are confident, then you know that the long-term trend will pick up, despite what it appears to do in the short run. My advice is to forget about data ‘trends’ (very flaky index prone to abuse by spinmeisters) and focus on model parameter estimates. After all, “attribution is fundamentally a modelling exercise”.
Ray Ladbury says
Fred, isn’t it funny how CO2“Science” has managed to fall 8 years behind on their estimates of tropospheric warming. My, the time just gets away, doesn’t it? I believe there have since been a couple of adjustments – ever upward – in tropospheric warming estimates. There may be more to come. It is not hard to understand this. Measuring tropospheric temperatures is a difficult enterprise, and the troposphere is a turbulent place where energy is transported rapidly. So between uncertainties and the difficulty of modeling energy transport in the troposphere, I’m not too concerned about the somewhat lower than predicted warming of the troposphere. Part of physics is understanding when errors preclude definitive statements, and in the face of the overwhelming evidence for anthropogenic causation from other lines, it is hard to get too overwrought.
But it is physics that tells us that if additional CO2 absorbs more IR, then the planet must warm, and physics tells us that CO2 has to absorb more IR. And it has found no other mechanism that can account for the warming we see. Like I say, physics been bery, bery good to me. I’ll stick with it.
Ray Ladbury says
Oh, Fred, you can read about this issue more here:
https://www.realclimate.org/index.php/archives/2005/08/the-tropical-lapse-rate-quandary/
and here:
https://www.realclimate.org/index.php?p=170
Isn’t it odd, that people are willing to go to all this trouble trying to discredit all the sites that measure surface temperature, and yet they take the much more fraught radiosonde measurements as gospel. Go figure.
Pekka Kostamo says
Re 93: “It is interesting that you are more interested in what the “non-scientific establishment” thinks than what the scientists think, is it not?”
This is in fact a crucial issue. The “non-scientific establishment” calls the shots in the political and economic arenas. They control the resources.
The denialists are not interested in advancing science, not at all. Their only goal is to create such an uncertainty that the required political and economic decisions are not made. In this they have been rather successful.
However, there is a growing body of “non-scientific” parties that see climate change as an opportunity. Early adoption of new scientific findings has always given an edge in the competitive marketplace.
By the way, about denialist services… how come, always and anywhere (in any language and in any major media), if you mention climate change, within the hour two to six denialists pop up to re-circulate the same discredited opinions? It looks like a network of “service centers”, with some underpaid and overworked youths (or retirees) copy-pasting these preset opinions and factoids under various real or assumed names. They do not have very much traction nowadays, but as loyal employees they carry on regardless.