Once more unto the breach, dear friends, once more!
Some old-timers will remember a series of ‘bombshell’ papers back in 2004 which were going to “knock the stuffing out” of the consensus position on climate change science (see here for example). Needless to say, nothing of the sort happened. The issue in two of those papers was whether satellite and radiosonde data were globally consistent with model simulations over the same time. Those papers claimed that they weren’t, but they did so based on a great deal of over-confidence in observational data accuracy (see here or here for how that turned out) and an insufficient appreciation of the statistics of trends over short time periods.
Well, the same authors (Douglass, Pearson and Singer, now joined by Christy) are back with a new (but necessarily more constrained) claim, but with the same over-confidence in observational accuracy and a similar lack of appreciation of short term statistics.
Previously, the claim was that satellites (in particular the MSU 2LT record produced by UAH) showed a global cooling that was not apparent in the surface temperatures or model runs. That disappeared with a longer record and some important corrections to the processing. Now the claim has been greatly restricted in scope and concerns only the tropics, and the rate of warming in the troposphere (rather than the fact of warming itself, which is now undisputed).
The basis of the issue is that models produce an enhanced warming in the tropical troposphere when there is warming at the surface. This is true enough. Whether the warming is from greenhouse gases, El Niños, or solar forcing, trends aloft are enhanced. For instance, the GISS model equilibrium runs with 2xCO2 or a 2% increase in solar forcing both show a maximum between 20N and 20S at around 300mb (10 km):
The first thing to note about the two pictures is how similar they are. They both have the same enhancement in the tropics and similar amplification in the Arctic. They differ most clearly in the stratosphere (the part above 100mb) where CO2 causes cooling while solar causes warming. It’s important to note however, that these are long-term equilibrium results and therefore don’t tell you anything about the signal-to-noise ratio for any particular time period or with any particular forcings.
If the pictures are very similar despite the different forcings that implies that the pattern really has nothing to do with greenhouse gas changes, but is a more fundamental response to warming (however caused). Indeed, there is a clear physical reason why this is the case – the increase in water vapour as surface air temperature rises causes a change in the moist-adiabatic lapse rate (the decrease of temperature with height) such that the surface to mid-tropospheric gradient decreases with increasing temperature (i.e. it warms faster aloft). This is something seen in many observations and over many timescales, and is not something unique to climate models.
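For readers who want to see the lapse-rate argument in numbers, here is a minimal sketch (an illustration of the standard moist-adiabat approximation, not code from any GCM; the constants and the one-degree perturbation are my own choices). It marches a saturated parcel upward from two surface temperatures one degree apart and prints how much larger the warming is aloft:

```python
import math

# Physical constants (SI units)
g, cp, Rd, Rv, Lv = 9.81, 1004.0, 287.0, 461.5, 2.5e6
eps = Rd / Rv

def saturation_mixing_ratio(T, p):
    """Clausius-Clapeyron estimate of saturation vapour pressure (Pa),
    converted to a saturation mixing ratio at total pressure p (Pa)."""
    es = 611.0 * math.exp(Lv / Rv * (1.0 / 273.15 - 1.0 / T))
    return eps * es / (p - es)

def moist_lapse_rate(T, p):
    """Standard approximate form of the saturated-adiabatic lapse rate (K/m)."""
    rs = saturation_mixing_ratio(T, p)
    return g * (1.0 + Lv * rs / (Rd * T)) / (cp + Lv**2 * rs * eps / (Rd * T**2))

def profile(T_surf, p_surf=1.0e5, dz=100.0, z_top=12000.0):
    """March a saturated parcel upward, tracking temperature and
    hydrostatic pressure every dz metres."""
    T, p, z, out = T_surf, p_surf, 0.0, {}
    while z <= z_top:
        out[round(z)] = T
        T -= moist_lapse_rate(T, p) * dz
        p *= math.exp(-g * dz / (Rd * T))
        z += dz
    return out

cold, warm = profile(300.0), profile(301.0)
for z in (0, 5000, 10000):
    print(f"z = {z:5d} m: {warm[z] - cold[z]:.2f} K of warming per 1 K at the surface")
```

The warming near 10 km comes out well above the 1 K imposed at the surface, which is the amplification the model figures show.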
If this is what should be expected over a long time period, what should be expected on the short time-scale available for comparison to the satellite or radiosonde records? This period, 1979 to present, has seen a fair bit of warming, but also a number of big El Niño events and volcanic eruptions which clearly add noise to any potential signal. In comparing the real world with models, these sources of additional variability must be taken into account. It’s straightforward for the volcanic signal, since many simulations of the 20th century done in support of the IPCC report included volcanic forcing. However, the occurrence of El Niño events in any model simulation is uncorrelated with their occurrence in the real world and so special care is needed to estimate their impact.
Additionally, it’s important to make a good estimate of the uncertainty in the observations. This is not simply the uncertainty in estimating the linear trend, but the more systematic uncertainty due to processing problems, drifts and other biases. One estimate of that error for the MSU 2 product (a weighted average of tropospheric+lower stratospheric trends) is that two different groups (UAH and RSS) come up with a range of tropical trends of 0.048 to 0.133 °C/decade – a much larger difference than the simple uncertainty in the trend. In the radiosonde records, there is additional uncertainty due to adjustments to correct for various biases. This is an ongoing project (see RAOBCORE for instance).
So what do Douglass et al come up with?
Superficially it seems clear that there is a separation between the models and the observations, but let’s look more closely….
First, note that the observations aren’t shown with any uncertainty at all, not even the uncertainty in defining a linear trend (roughly 0.1°C/dec). Secondly, the offsets between UAH, RSS and UMD should define the minimum systematic uncertainty in the satellite observations, which therefore would overlap with the model ‘uncertainty’. The sharp-eyed among you will notice that the satellite estimates (even UAH [Correction: the UAH trends are consistent; see comments]) – which are basically weighted means of the vertical temperature profiles – are also apparently inconsistent with the selected radiosonde estimates (you can’t get a weighted mean trend larger than any of the individual level trends!).
It turns out that the radiosonde data used in this paper (version 1.2 of the RAOBCORE data) does not have the full set of adjustments. Subsequent to that dataset being put together (Haimberger, 2007), two newer versions have been developed (v1.3 and v1.4) which do a better, but still not perfect, job, and additionally have much larger amplification with height. For instance, look at version 1.4:
The authors of Douglass et al were given this last version along with the one they used, yet they only decided to show the first (the one with the smallest tropical trend) without any additional comment even though they knew their results would be less clear.
But more egregious by far is the calculation of the model uncertainty itself. Their description of that calculation is as follows:
For the models, we calculate the mean, standard deviation (sigma), and estimate of the uncertainty of the mean (sigma_SE) of the predictions of the trends at various altitude levels. We assume that sigma_SE and standard deviation are related by sigma_SE = sigma/sqrt(N – 1), where N = 22 is the number of independent models. ….. Thus, in a repeat of the 22-model computational runs one would expect that a new mean that would lie between these limits with 95% probability.
The interpretation of this is a little unclear (what exactly does the sigma refer to?), but the most likely interpretation, and the one borne out by looking at their Table IIa, is that sigma is calculated as the standard deviation of the model trends. In that case, the formula given defines the uncertainty on the estimate of the mean – i.e. how well we know what the average trend really is. But it only takes a moment to realise why that is irrelevant. Imagine there were thousands of simulations drawn from the same distribution; then our estimate of the mean trend would get sharper and sharper as N increased. However, the chances that any one realisation would be within those error bars would become smaller and smaller. Instead, the key standard deviation is simply sigma itself. That defines the likelihood that one realisation (i.e. the real world) is conceivably drawn from the distribution defined by the models.
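A toy Monte Carlo makes the distinction concrete (my own sketch; the mean and spread are borrowed from the T2LT numbers discussed below). The standard error of the mean shrinks as N grows, while the chance that any single realisation falls within +/- 2 sigma_SE of the ensemble mean shrinks along with it:

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean, sigma = 0.185, 0.113   # deg C/dec, cf. the T2LT numbers below

for N in (22, 100, 1000):
    trends = rng.normal(true_mean, sigma, size=N)
    se = trends.std(ddof=1) / np.sqrt(N - 1)      # the Douglass et al formula
    # How often does one new realisation (think: the real world) land
    # within +/- 2*SE of the ensemble mean?
    new = rng.normal(true_mean, sigma, size=100_000)
    coverage = (np.abs(new - trends.mean()) < 2 * se).mean()
    print(f"N={N:4d}: 2*sigma_SE = {2*se:.3f} deg C/dec, "
          f"single-realisation coverage = {coverage:.0%}")
```

As N rises, the ‘uncertainty’ computed their way goes to zero, but the fraction of individual realisations it captures collapses, which is exactly the problem.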
To make this even clearer, a 49-run subset (from 18 models) of the 67 model runs in Douglass et al was used by Santer et al (2005). This subset only used the runs that included volcanic forcing and stratospheric ozone depletion – the most appropriate selection for this kind of comparison. The trends in T2LT can be used as an example. I calculated the 1979-1999 trends (as done by Douglass et al) for each of the individual simulations. The values range from -0.07 to 0.426 °C/dec, with a mean trend of 0.185 °C/dec and a standard deviation of 0.113 °C/dec. That spread is not predominantly from uncertain physics, but from the unforced noise in each realisation.
From their formula the Douglass et al 2 sigma uncertainty would be 2*0.113/sqrt(17) ≈ 0.05 °C/dec. Yet the 10 to 90 percentile for the trends among the models is 0.036–0.35 °C/dec – a much larger range (roughly +/- 0.16 °C/dec) – and one, needless to say, that encompasses all the observational estimates. This figure illustrates the point clearly:
What happens to Douglass’ figure if you incorporate the updated radiosonde estimates and a reasonable range of uncertainty for the models? This should be done properly (and could be) but assuming the slight difference in period for the RAOBCORE v1.4 data or the selection of model runs because of volcanic forcings aren’t important, then using the standard deviations in their Table IIa you’d end up with something like this:
Not quite so impressive.
To be sure, this isn’t a demonstration that the tropical trends in the model simulations or the data are perfectly matched – there remain multiple issues with moist convection parameterisations, the Madden-Julian oscillation, ENSO, the ‘double ITCZ’ problem, biases, drifts etc. Nor does it show that RAOBCORE v1.4 is necessarily better than v1.2. But it is a demonstration that there is no clear model-data discrepancy in tropical tropospheric trends once you take the systematic uncertainties in data and models seriously. Funnily enough, this is exactly the conclusion reached by a much better paper by P. Thorne and colleagues. Douglass et al’s claim to the contrary is simply unsupportable.
Ray Ladbury says
Pekka, re: denialist services, Interesting idea. One study I’d like to see is a correlation between denialism and latitude and/or proximity to near-sea-level locations. I suspect that there are a lot more denialists at northern latitudes, simply because some northern climes may even benefit from climate change. Likewise, there is less incentive to worry about rising sea levels if you occupy the physical (though not the moral) high ground. This is what Roger Waters has called “The bravery of being out of range.”
So, if we wanted to visit the denialist services mothership, we should probably look on a hill somewhere near the arctic circle.
Hank Roberts says
Richard Sycamore, you’re going on at length on your subject in the other thread: https://www.realclimate.org/index.php/archives/2007/12/live-almost-from-agu%e2%80%93dispatch-3/#comment-77526
Please don’t distract Fred right now; he’s asked about trends based on what he read elsewhere, it’s an important question, and he’s been given pointers to how to do his own skeptical reading and get updated info.
You’re getting attention in the other thread.
Russell Seitz says
re 101:
“if we wanted to visit the denialist services mothership, we should probably look on a hill somewhere near the arctic circle.”
Thin metaphorical ice Ray.
Had Mjoes not hailed from 70 north (Tromso advertises itself as “The Northernmost University”, where the sun shines sideways if at all and the IR optical depth to sunward is as deep as the gloom in a Bergman movie), his Nobel priorities might have been otherwise.
Vincent Gray says
This argument is not about a PREDICTION. It is about a SIMULATION. There is so much variability between models and between data collections that it is not surprising that some models can be found which simulate some data.
This does not prove the correctness of the models, because of the well-known (but little accepted) maxim that a correlation, however convincing, does not prove cause and effect.
No model has ever convincingly predicted future climate. Global temperatures, however measured, have been relatively unchanged for some eight years, in violation of all model PROJECTIONS. Until models can be shown to be successful in prediction, why should anybody believe in them?
Hank Roberts says
But, Vincent, can you cite any source to support any of what you write above? I can understand you saying you believe it. But I’ll be surprised if you can show anyone else has published research supporting what you believe. Please provide your evidence that my hypothesis about this is wrong by giving cites — that’s how science works, after all.
Hansen’s Scenario C looks very good so far, after 20 years. https://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/
Eight years is insufficient data to reliably demonstrate a trend (or lack of one) against the noise level in climate. Didn’t I find this for you earlier? Has someone told you different? Who? Where?
William Connolley gives you the information to be appropriately skeptical about what people tell you, and points out how you can download the data set and do your own statistics to test what you’re being told and shows you what you will get using standard tests of significance on one sample data set, and comments:
“15 year trends are pretty well all sig and all about the same; that about 1/2 the 10 year trends are sig; and that very few of the 5 year trends are sig. From which the motto is: 5 year trends are not useful with this level of natural variability. They tell you nothing about the long-term change.”
http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php#
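You can reproduce the flavour of that exercise with synthetic data (a sketch with invented trend and noise values, not Connolley’s actual calculation): impose a known long-term trend, add interannual noise, and count how often short windows give a statistically significant trend.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trend = 0.018        # deg C/yr, roughly the recent long-term rate (assumed)
noise_sd = 0.1       # deg C of interannual noise (assumed)
years = np.arange(100)
n_trials = 2000

for window in (5, 10, 15):
    n_sig = 0
    for _ in range(n_trials):
        series = trend * years + rng.normal(0.0, noise_sd, years.size)
        start = rng.integers(0, years.size - window)
        seg = series[start:start + window]
        if stats.linregress(np.arange(window), seg).pvalue < 0.05:
            n_sig += 1
    print(f"{window:2d}-year trends significant in {n_sig / n_trials:.0%} of trials")
```

With these numbers nearly all 15-year windows come out significant, only a fraction of the 10-year ones do, and almost none of the 5-year ones, which is the same qualitative message.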
So, even Hansen’s 20 year old model has been quite good at predicting (and over a long enough period of years for the trends to be statistically interesting). Does having a factual basis to believe this change what you believe? Do facts make a difference?
Barton Paul Levenson says
Dr. Gray writes:
[[No model has ever convincingly predicted future climate. Global temperatures, however measured, have been relatively unchanged for some eight years, in violation of all model PROJECTIONS. Until models can be shown to be successful in prediction, why should anybody believe in them?]]
Climate models successfully predicted that the climate would warm, that the stratosphere would cool, that the poles would warm more than the equator, that nights would warm more than days, and they predicted quantitatively how much the Earth would cool after the eruption of Mount Pinatubo. What else do you want?
Timothy Chase says
Barton Paul Levenson (#106) wrote:
I would include the expansion of the Hadley cells, the rise of the tropopause, the super greenhouse effect in the tropics, and I understand they do quite well against ocean circulation. They have also predicted that the range of hurricanes and cyclones would expand — just about a year before Catarina showed up off the coast of Brazil, I believe. They are also used to understand paleoclimates, tested using hindcasting, etc..
And we should keep in mind the fact that they aren’t based upon correlations and aren’t instances of line-fitting. They are built upon physics. Radiation transfer theory, thermodynamics, fluid dynamics, etc.. They don’t tinker with the model each time to make it fit the phenomena they are trying to model. With that sort of curve-fitting, tightening the fit in one area would loosen the fit in others. They might improve the physics, but because it is actual physics, tightening the fit in one area almost inevitably means tightening the fit in numerous others.
*
Vincent,
The trend in the global average temperature for the past few years is flat only if you lop off the Arctic. And DePreSys did well at forecasting that, the recent short-lived El Nino and the La Nina. Moreover, it tells us that temperatures will remain flat for 2008, but with the coming of the next El Nino (some time around December of 2008, I presume), temperatures will begin to climb again. Or so they project.
If you want the models to realistically model natural variability with the hot El Ninos, cool La Ninas and all, you have to initialize the models with real-world data. But that means taking measurements. Plenty of measurements. And we are getting into this now.
Fred Staples says
Did this post get lost?
Thank you, Hank. The links in 99 are very interesting.
Suppose I concede immediately all the contentious points relating to the radio-sonde data. This means that the people responsible did not suspect that the sun was warming their instruments, that they could not apply the necessary correction retrospectively, and that when the instruments were modified the reduction in the systematic error compensated perfectly for a rise in tropospheric temperature between 1978 and 2000, which it completely masked. We must also accept that the absence of an upward trend since 2000 cannot be used as evidence for a zero increase because “the other errors are, unfortunately, not as easy to quantify as the solar heating error. It is not clear what direction they may have pushed trends”.
The UAH story is familiar, and their data must be the most analysed and corrected data in the field. Christy and Spencer seem to have accepted all the corrections with good grace, and the fit of their data to the RSS data is almost perfect.
I have consequently repeated my analysis using the UAH data.
First, there was no significant increase in lower troposphere temperatures between 1978 and 1996, and no increase at all to the end of 1995.
Overall, from 1978 to 2007 the increase is significant, and the 95% confidence limits range from 0.35 to 0.47 degrees C.
For the surface data we have an increase of 0.8 degrees C, with an F value of 55 for 27 degrees of freedom – absolutely significant.
So, the increase in tropospheric temperature over the crucial “Hansen” period is half that of the surface temperature. Your link states that it should be from 1.0 to 1.8 times the surface temperature increase. The anomaly remains.
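For anyone wanting to rerun this sort of test, here is a sketch of the standard least-squares calculation (the data here are synthetic placeholders, not the UAH series, and the assumption of independent residuals flatters the significance of short records):

```python
import numpy as np
from scipy import stats

def trend_with_ci(years, anomalies, alpha=0.05):
    """OLS trend, the F statistic, and a (1 - alpha) confidence interval
    on the total change over the period (independent-residual assumption)."""
    res = stats.linregress(years, anomalies)
    span = years[-1] - years[0]
    F = (res.slope / res.stderr) ** 2        # for simple regression, F = t^2
    tcrit = stats.t.ppf(1.0 - alpha / 2.0, len(years) - 2)
    return res.slope * span, tcrit * res.stderr * span, F

# Synthetic series shaped like the numbers under discussion:
years = np.arange(1978, 2008)
anoms = 0.014 * (years - years[0]) \
        + np.random.default_rng(1).normal(0.0, 0.12, years.size)
change, half_width, F = trend_with_ci(years, anoms)
print(f"total change = {change:.2f} +/- {half_width:.2f} deg C, F = {F:.0f}")
```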
I will not suggest that we use Occam’s razor to resolve the dilemma, nor do I claim that this data disproves the AGW theory. What I do claim is that the CO2 global warming theory is nothing like as certain as its proponents suggest. If it were only a matter of scientific debate, it would not matter – time and further work would resolve the issues (between five and ten years on current trends, in my opinion).
But journalists and political leaders cannot judge the merits of the case – they accept the “scientific consensus” and campaign for policies which may well prove to be foolish and even dangerous. Take, for example, the UK Environment Secretary. He is quoted as claiming that UK temperatures since the seventies have risen by about one degree, which they have, and that this establishes the case for AGW.
He omits to say that the same temperatures fell by one third of a degree between the fifties and the seventies, that 1949 was the second warmest year in the record, and that there has been no significant increase in UK temperatures since 1989.
Ray Ladbury says
Fred, What it shows is that either
1) there are additional sources of error in the measurements – a very likely possibility if you know anything about either the satellite or the balloon measurements,
2) the models are not complete – also quite likely, or
3) both of the above.
None of this in any way invalidates the robust conclusion that the current warming is due to anthropogenic CO2–for which we have mountains of evidence. In science, you have to go with the preponderance of evidence, and it is very unusual for any single piece of contrary evidence to invalidate a theory, especially if there are plausible alternative explanations–as there are in abundance here.
Marco Feindler says
Hi, I would have liked to see more depth here!
jbleth says
UAH is about to make a correction; it will lower their data by about 0.2 degrees for the last three months: http://vortex.nsstc.uah.edu/data/msu/t2lt/readme.19Dec2007
Timothy Chase says
Just a quick question….
The tropics seem to be a favorite topic among contrarians at this point – although I suspect you are already familiar with much of this. It is part of what the “Iris Effect” is supposed to work off of. And interestingly enough, there does seem to be a reduction in cloud cover, particularly in the 20N-20S band. It reduces the outgoing reflected shortwave at the top of the atmosphere, but increases the outgoing longwave by the same amount, so that the net effect of reduced cloud cover is neither to increase the temperature (as some might have it, in order to provide an alternative “explanation” to the greenhouse effect for the current warming trend) nor to reduce temperature, as Lindzen and Christy would have it.
And of course without the increased opacity of the atmosphere to longwave, it would be difficult to explain how the increase in TOA outgoing longwave just manages to keep up with the reduction in TOA outgoing reflected shortwave – given the fact that the tropical SST has increased over the same period (1980s to present). Additionally, as you make clear in the post, this is an issue that has nothing actually to do with the forcing which is driving the warming trend.
*
Nevertheless, there is the issue of reduced cloud cover, a trend which, as I understand it, many models have difficulty with. And from what I understand, this pertains to the parameterization of moist air convection, which is necessary because the resolution of models is too granular to capture the process. How is the GISS Model E performing in this area at this point? Is it capturing the reduction in cloud cover, and do we have some understanding of the process involved?
For what may at first seem like a bizarrely different topic, it is my understanding that Wieslaw Maslowski is doing a better job of capturing the oceanic advection in the Arctic which is melting the ice from below, and thus of explaining the trend in Arctic sea-ice, using a higher resolution model. Their model predicts 2013 (for a largely ice-free Arctic summer) without taking into account the data from either 2005 or 2007. So it would seem that the biggest problem with forecasts is with modeling convection in both the atmosphere and the ocean, caused by the complexity of the process, which simply overwhelms computer resources.
Would there be ways of applying neural networks in the area of parameterization, or perhaps of adjusting the parameterization based upon flow conditions, or perhaps of dynamically adjusting the spatial or temporal grid so that it becomes finer when calculations require it, like with the fractal compression of digital photographs?
No doubt some people are already looking into things like this, but I was just wondering what sort of things are being attempted.
Phillip Duncan says
I have a background that includes some computer modeling/simulation of complex non-linear systems… though no knowledge of climate and atmospheric dynamics other than what I’ve picked up recently, mostly off this site.
Can someone point me to some resources that might bring me up to speed with the modeling techniques used? The previous post kind of piqued my curiosity.
I read statements that “curve fitting” is not used to tweak the models. I obviously understand that the models are actually simulating real physical phenomena rather than trying to derive abstract functions that map inputs to outputs through a learning or “curve fitting” process. However, are there no parameters internal to the models that are derived through adaptive techniques trying to achieve a fit for historical data?
When Timothy Chase said “applying neural networks in the area of parameterization” which parameters are being referenced… grid sizes, time steps or parameters that are internal to the physical processes being modeled?
Many Thanks in advance.
Timothy Chase says
Phillip Duncan (#113) wrote:
Well, I am afraid that I won’t be able to help you much in this area in terms of understanding the actual parameterizations which are used – although I can point you off in one direction or another, which may be helpful up to a point. But when I speak of using neural networks, or of a dynamic model resolution where the local resolution might be adjusted automatically wherever some calculation indicates that a lower resolution is more likely to affect the results (e.g., where windspeeds and turbulence become greater), this isn’t necessarily something which current models are capable of. It may not even be a realistic suggestion, but given what limited knowledge I have, it seems reasonable at least.
*
With respect to the difference between curve-fitting and the sort of parameterization which is made use of in climate models, the distinction is quite important – and relatively easy to make – so I hope you don’t mind if I explain it a little first for the benefit of those who may be less knowledgeable than yourself. Models use parameterizations because they are necessarily limited (in one form or another) to finite difference calculations.
There will exist individual cells, perhaps a degree in latitude and a degree in longitude. These cells will be of a certain finite height, such that the atmosphere will be broken into layers – with perhaps the troposphere and stratosphere sharing a total of forty atmospheric layers. Likewise, calculations will be performed in sweeps such that the entire state of the climate system for a given run is calculated perhaps every ten minutes in model time.
Now physics provides the foundation for these calculations, but as we are speaking of finite differences, the calculations will tend to have problems with turbulent flow due to moist air convection, for example. When you have flow which is particularly turbulent, such as around the Polar Vortex, cell-by-cell calculation based on finite differences will lack the means to tell how, for example, the momentum, mass, moisture and heat leaving the cell will be split up and transferred to the neighboring cells. To handle this, you need some form of parameterization. Standard stuff as far as modeling is concerned, I would presume.
Parameterization is a form of curve-fitting. But it is local curve-fitting in which one is concerned with local conditions, local chemistry and local physics — backed up by the study of local phenomena, e.g., what you are able to get out of labs or in field studies. It is not curve-fitting which adjusts the models to specifically replicate the trend in the global average temperature or other aggregate and normalized measures of the climate system.
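As a deliberately schematic illustration of that two-step pattern (every number here is invented), a parameterization is fit once to local process data and then applied cell by cell inside the model, with no further tuning against global temperature records:

```python
import numpy as np

# Step 1 (offline, done once): fit a bulk formula to "process data",
# e.g. measurements of a sub-grid heat flux against a resolved gradient.
# This is the local curve fit.
gradient_obs = np.array([0.5, 1.0, 2.0, 3.0, 4.0])   # K per layer (invented)
flux_obs = np.array([1.1, 2.0, 4.2, 5.9, 8.1])       # W/m^2 (invented)
k_eddy = np.polyfit(gradient_obs, flux_obs, 1)[0]    # fitted bulk coefficient

# Step 2 (inside the model, every timestep): apply the fitted formula
# in each grid cell, wherever the resolved state calls for it.
def subgrid_heat_flux(temperature_column):
    """Parameterized flux between adjacent layers of one grid cell."""
    return k_eddy * np.diff(temperature_column)

column = np.array([300.0, 295.0, 289.0, 282.0])      # a toy 4-layer cell
print(subgrid_heat_flux(column))
```

Nothing in step 2 looks at the global mean temperature record; that is the sense in which parameterization is not global curve-fitting.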
*
To give an example, with the most recent NASA GISS model, they are beginning to take into account elements of the carbon cycle. So for example, if you wish to take into account how plants will respond to increases in temperatures, you need a representative species and you need studies which show how members of those representative species will respond to a specific temperature, level of carbon dioxide and perhaps tropospheric ozone. The data you get from such studies are then parameterized, providing us with a set of equations which may be applied at the cell-level at each increment of model time.
In any case, if you would like to look at the models themselves and even examine their code, they are available – although it may take some digging to find whatever it might be that you are specifically looking for. The datasets which they are given in terms of the levels of specific greenhouse gases, aerosols, solar irradiance and the like are available. There is extensive literature on how the levels of the various of these quantities are estimated based upon empirical studies (e.g., gas bubbles which trap aerosols from earlier in this century in places like Greenland), etc..
*
Anyway, this would probably be a good place to look for much of the information you may be interested in:
The same webpage also includes technical articles detailing the changes which went into the most recent model.
PS
My apologies for not responding a little earlier, but I wanted to give someone else more knowledgeable than myself the opportunity to respond first.
Jim Prall says
Re #113 Philip Duncan:
A good, very readable introduction to climate physics and the basics of climate modeling is Ray Pierrehumbert’s draft textbook, available online at:
http://tinyurl.com/2n7sr4
I started referring to this text while attending an undergrad course on “radiation in planetary atmospheres” that focused on the optical properties of the atmosphere – the absorption and emission aspects of the greenhouse effect. I’d already done one undergrad intro to climatology, with a quick overview of the equations of atmospheric motion, convection, vorticity, etc., but there were no programming tasks in that course.
I haven’t read all of the draft textbook yet, but it appears to cover both these aspects and to introduce how to go about building computer models that incorporate these sets of physical laws. I found the prose very readable and progressive, explaining the steps along the route.
Philip Machanick says
Sorry if this is OT, but water vapour gets a few mentions here and comments on the 2005 post https://www.realclimate.org/index.php/archives/2005/04/water-vapour-feedback-or-forcing/ are closed – but I found another reference to the Lindzen claim that at least 98% of the greenhouse effect is water vapour:
http://www.downbound.com/Greenhouse_Effect_s/322.htm
The link in this article is dead but here’s another copy: http://eaps.mit.edu/faculty/lindzen/153_Regulation.pdf or http://www.cato.org/pubs/regulation/regv15n2/reg15n2g.html
This seems to me the most likely source for the denialist propaganda machine.
Hope this is of interest to collectors of denial memorabilia. There is a claim now doing the rounds that H_2O is only 95% of the effect so, in another couple of decades, they will be in the mainstream (which will probably have flooded their houses by then …).
Ted Nation says
I just ran into an article regarding a Roy Spencer et al. August 2007 publication claiming that satellite observation of tropical cirrus clouds called into question the manner in which climate models treated them. Using a Google Scholar search I tried to find some discussion or follow-up study of their claims, but found nothing. I did, however, find the following in the Wikipedia entry under Roy Spencer:
“In August, 2007, Spencer published an article in Geophysical Research Letters calling into question a key component of global warming theory which may change the way climate models are programmed to run. [2] Global warming theory predicts a number of positive feedbacks which will accelerate the warming. One of the proposed feedbacks is an increase in high-level, heat trapping clouds. Spencer’s observations in the tropics actually found a strong negative feedback. This observation was unexpected and gives support to Richard Lindzen’s “infrared iris” hypothesis of climate stabilization. “To give an idea of how strong this enhanced cooling mechanism is, if it was operating on global warming, it would reduce estimates of future warming by over 75 percent,” Spencer said. “The big question that no one can answer right now is whether this enhanced cooling mechanism applies to global warming.”
Is this another example of jumping to conclusions from early observational result or is there really something to this? Why hasn’t it been discussed on Realclimate? Does the Wikipedia entry need some editing?
John Mathon says
Lynn Vincentnathan Says: In this case even if they were correct and the models failed to predict or match reality (which, acc to this post has not been adequately established, bec we’re still in overlapping data and model confidence intervals)…In this case, the vast preponderance of evidence and theory (such as long established basic physics) is on the side of AGW, so there would have to be a serious paradigm shift based on some new physics, a cooling trend (with increasing GHG levels and decreasing aerosol effect), and that they had failed to detect the extreme increase in solar irradiance to dislodge AGW theory.
The problem with this argument is that
1) there is lots of physics and effects we don’t understand in the climate system. The sheer fact that the models and scientists cannot readily explain the last 10 years is prima facie evidence of this.
2) if the AGW is less than 2 degrees C per century then the AGW proponents have lost the political argument because all the damage from GW is supposed to come from this level of heating. Therefore, AGW arguers really only have to argue that the rate of heating will be less than the 2 degrees.
3) The trend over the last 30 years of heating is about 0.33 degrees over 30 years. (remember, prior to that trend the earth was cooling for 30 years) At that rate the next 90 years will see about 1 degree of heating, unless we get another acceleration of heating like the 1998 El Nino repeatedly occurring.
4) The forcing value for CO2 is still highly unknown and subject to wide variation. The values for all the other forcings are computed by the “inverse method”. This means that the models are fitted to the data. Therefore using past data to compare with the models is self-congratulatory and circular. The only thing that matters from a modeling perspective and a science perspective is what has happened since the models predicted the future. The score there is very bad for the models. They have completely failed to predict the recent 10 years of climate.
5) The more accurate we make the models of climate for past data the more stringent it puts error bars around the current predictions. The fact that AGW enthusiasts keep touting the accuracy of their models actually works against you. The models and the current data appear to be so out of whack now that there is only a 5% probability that AGW is correct.
[Response: This is garbage. Both in how you describe how modelling is done, and in your assessment of its skill. Short period comparisons are bogus because of the huge influence of short term weather events. ’10 years of climate’ doesn’t even make sense. And for longer term tests, the models do fine (see here for instance). – gavin]
Ray Ladbury says
John Mathon, I can see that you didn’t bother to read the post above before posting.
1) Climate and weather are different. Climate is long term. Weather is anything on a scale of a couple of decades or less.
2) Look at the papers by Hansen et al. There is probably a lot of warming “in the pipe” that has not happened yet. We are a long way from a new equilibrium, and CO2 already in the air will keep warming things until we reach one.
3) See 2 above.
4) Not true. The sensitivity to CO2 is well established to be around 3 degrees per doubling, and most of the uncertainty (hence most of the risk) is on the high side.
5) Read the article. It deals with the errors the authors made in calculating confidence intervals.
There is plenty of real science here if you are interested. Or you can stay ignorant. Here are the pearls. Decide what you are.
Alastair McDonald says
Re #118 Lynn is correct that a new paradigm is needed, but new paradigms are fiercely resisted. See Gavin’s response to such a suggestion.
As Gavin says, the models do fine with the present paradigm. But that paradigm says that the greenhouse effect can be equated to solar forcing at the top of the atmosphere. Then when a volcanic eruption alters that solar forcing and gives results that match the models, it is claimed that the models have reproduced greenhouse forcing. But they have not. They have reproduced solar forcing. It has not been proved that the greenhouse and solar forcing are equivalent.
And in fact they are not! The results from the MSUs and radiosondes have shown that. Solar radiation produces diurnal forcing, but “fixed” greenhouse gases produce decadal forcing. That is why there is still a question mark over the tropical lapse rate problem.
The optically thick greenhouse bands are saturated by definition. So the greenhouse effect does not work through Arrhenius’ scheme of radiation being blocked, as pointed out by Knut Angstrom. It operates by Tyndall’s scheme of the air near the surface being warmed by absorption. Fourier was describing Saussure’s hot box, not the glass of an Arrhenius hot house!
It is the CO2 adjacent to the ice that absorbs most of the radiation, which warms the air most, and that melts the ice.
Cheers, Alastair.
steven caskey says
perhaps I think of things in too simplistic a way… but if the major cause of feedback is water vapor, and co2 is much more important than solar radiation in determining feedback, then on a nice bright summer’s day a bowl of water sitting directly in the sun should evaporate at approximately the same rate as a bowl of water that is placed in the shade. has anyone already measured evaporation rates in such a manner?
steven caskey says
“and for longer term tests the models do just fine” – I guess I don’t see this. the 1991 ipcc “best guess” prediction is .2C off in just 15 years; extrapolated over a century that is more than 1C off, and it was using a climate sensitivity of 2.5. the current “best guess” is a climate sensitivity of 3 and is likely to be even further off in the long run than the 1991 “best guess”, especially since it now has several years of temperature catching up to do
[Response: Any projection is a function of projected emissions and a climate model. The difference between the 1990 estimate you are referring to and later ones was the emissions projection, not the model (since, as you note, best estimate climate sensitivity has increased slightly). That’s why we use multiple scenarios, and why the 1988 projections from Hansen’s paper have stood up so well. And I think I’ve mentioned on numerous occasions the folly of looking at short period trends in a noisy system….. – gavin]
Ray Ladbury says
Steven Caskey, your way of thinking about the matter is not simplistic, but wrong. It is climate CHANGE. Of course the Sun is still the dominant source of energy coming into the climate system, but it is not CHANGING very much. What is changing is how much IR radiation greenhouse gases allow to escape from the atmosphere. Look at it this way:
You have a bathtub with the water tap on and the drain open. The flow is such that the water level is constant in the tub: water in from the tap=water out the drain. Now block off half the drain. The source and flow of the water is still the same (the tap), but now less is escaping, so pretty soon, somebody will have a mess to clean up. We’re trying to avoid that mess.
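For the numerically inclined, Ray’s bathtub maps directly onto the simplest zero-dimensional energy-balance model, C dT/dt = F - lambda*T. A sketch with illustrative parameter values (not taken from any particular paper):

```python
C = 8.0     # effective heat capacity, W yr m^-2 K^-1 (ocean mixed layer, assumed)
lam = 1.25  # feedback parameter, W m^-2 K^-1 (~3 K per CO2 doubling)
F = 3.7     # step forcing, W m^-2 (roughly a CO2 doubling)

dt, T, history = 0.1, 0.0, []
for _ in range(int(100 / dt)):           # integrate forward 100 years
    T += dt * (F - lam * T) / C          # blocked drain vs. outflow
    history.append(T)

print(f"after  10 yr: {history[int(10 / dt) - 1]:.2f} K")
print(f"after 100 yr: {history[-1]:.2f} K")
print(f"equilibrium F/lambda = {F / lam:.2f} K (the warming still 'in the pipe')")
```

The heat capacity is the volume of the tub: it sets how long the level takes to settle, not where it ends up.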
steven caskey says
well now I have a real conundrum, because if I take some of the arguments for feedback and thermodynamic principles as explained by some of those who support the AGW theory, and I apply them to the solar forcing we know happened, I find it very easy to believe co2 has very little influence at all
the oceans are hiding the real effect of global warming and the full impact won’t be felt until later. so if the oceans are supposed to be hiding the impact of AGW, couldn’t they also have hidden the effects of solar warming, and couldn’t the parting of ways between solar cycles and temperatures in about 1980 be a residual effect? also there are indications that the oceans may actually be cooling now
another musing I found was that there could be long term effects of AGW that would add another 3C to the climate sensitivity of co2. these would include things such as trees growing further north, less snow and ice to reflect sunlight, and the increased release of methane from frozen tundra. now if there is in fact a long term feedback from global warming, and it does make sense at face value, and you applied that to the solar induced warming of the early 20th century… how much temperature change is left to explain?
please tell me which arguments by those supporting AGW I should ignore as invalid or why there should be a different reaction in feedback mechanisms between solar and co2 forcing…thanks!
[Response: There’s no difference in most feedback effects between solar and CO2 (the differences that do exist mostly refer to stratospheric changes and their consequences). In a warming situation from either cause, ocean heating will slow the response simply due to its heat capacity, but in neither case does it ‘hide’ the response. There are long-term feedbacks as well (ice sheets, vegetation shifts), but these have not changed significantly (for your purposes) over the last century and so cannot explain current trends. Whatever the sensitivity is, you still have solar forcing that is (at best) 5 times smaller than the GHG forcing and which is still insufficient to explain current trends – even assuming that the long term sensitivity was valid on multi-decadal time scales (which it isn’t). – gavin]
Ray Ladbury says
Steven Caskey, Remember, we are looking at CHANGES in the forcers. The CHANGE in insolation is tiny, so unless you can figure out a feedback that operates on solar radiation and NOT on CO2, that’s a nonstarter. It’s not a matter of ignoring arguments. It’s knowing relative magnitudes and the basic science. Look at the START HERE section and commence your education.
steven caskey says
Dear Mr Ladbury, perhaps you didn’t read what I said, but in the world I live in it is important how much solar radiation is absorbed by the earth, not just how much is being produced by the sun. so unless you can show me a study that eliminates the loss of ice, the decrease in the amount of snow, and the growing of trees further north as factors in the amount of radiation actually absorbed by the earth, or you can show that it wasn’t solar radiation that was the primary driver for at least the first half of the 20th century, or you can show me that these things did not happen in the first half of the 20th century, or you can show that the effect of a driving force should not be doubled in the long term because of these factors, as proposed by some of those that support the AGW hypothesis, then to dismiss me as ignorant is rather arrogant on your part, is it not?
Hank Roberts says
Steve, try the ‘Start Here’ link at the top of the page, and the first link under Science at the right side.
Most of us here are readers — we’re not servants or waiters to whom you can address your demands for an education.
Horse, water, drink.
—————–
On the topic, this may be helpful
http://www.agu.org/pubs/crossref/2008/2008GL033454.shtml
steven caskey says
sorry for the tone of my last response, I just find it a bit frustrating to discuss an issue with people that seem to have such closed minds. now, if I had said Hansen says that the climate sensitivity of co2 is 3C in the short term but you have to add another 3C to the climate sensitivity for the long term effects, everyone would be nodding in agreement… so I would contend that either this is not correct, or you would have to apply the same effects to solar forcing. is this such an uneducated statement that I should be dismissed? this isn’t my field and I readily admit it, but that doesn’t mean I can’t use some basic logic. as far as my comments about the oceans hiding the heat, I could have said thermal inertia and gone on to explain, but I guess I didn’t realize that this wasn’t just a blog for us ignorant people
Hank Roberts says
Try citing your sources for your beliefs. It really does help when you say why you believe something, where you read it, and why you trust the source.
Eric Raymond’s article on how to ask questions the smart way on the Internet is addressed to computer questions, but it is quite good as general advice for newcomers who want to attract helpful responses.
David B. Benson says
steven caskey (128) wrote “… if I had said Hansen says that the climate sensitivity of co2 is 3C in the short term but you have to add another 3C to the climate sensitivity for the long term effects everyone would be nodding in agreement.”
(1) Read Hansen et al. more carefully. I think you will find that the hypothesis relates to the total radiative forcing, assumed in the future to be dominated by CO2.
(2) It is certainly the case that this remains a speculative hypothesis, but one with some logic for it. Not everybody agrees.
steven caskey says
Hank Roberts Says:
27 April 2008 at 12:56 PM
Try citing your sources for your beliefs. It really does help when you say why you believe something, where you read it, and why you trust the source.
I’m not sure what I believe nor what sources to trust; if I was, this would not be a page I’d be interested in, since I would already have all the answers. As an example, I was recently pointed to a study conducted by Harries on the change of ghg’s effects on radiation over time. The change on the co2 part of the spectrum was very small although statistically significant, but the change in the methane area of the spectrum was much more pronounced. From my untrained perspective this would seem to indicate a greater contribution by far to the ghg effect from methane than that from co2, and yet it was used as evidence that co2 was the significant driver.

Then there are controversies regarding how the troposphere should be reacting, and if that isn’t bad enough there are also controversies over how the troposphere is reacting. Then you have controversies over how important sea ice and snowfall are as far as an albedo effect. The controversies go on and on, and yet the science is being declared by so many as settled, to the point where we are trying to completely change our economy and scaring our kids with dire predictions of the future.

Am I concerned that there may be actual consequences of adding so much co2 to the atmosphere? Of course I am; too many people believe this to dismiss it as mythical. But I am also concerned that we are raising false alarms that will tarnish the credibility of climatologists, to the point where, when we do have a better grasp of the science and there is an actual climate emergency on the horizon, people will merely say sure, oh yes, another emergency, haha. If solar cycle 24 turns out to be a weak cycle and this actually causes the temperature to go down, then how will we ever convince people that we need to prepare for solar cycle 25, which is predicted by NASA to be incredibly weak? Are we that sure of the driving forces that we can ignore these possibilities?

thank you all for letting me post on your page, and thank you gavin for your responses; they were much appreciated and allowed me to find my mistake in my interpretation of the ipcc predictions
Hank Roberts says
> I was recently pointed to a study …
> Then there are controversies …
> Then you have controversies …
> the science is being declared by so many as settled …
> we are trying to completely change our economy …
> If solar cycle 24 turns out to be a weak cycle
> and this actually causes the temperature to go down …
> solar cycle 25 that is predicted by NASA to be incredibly weak …
See, if you don’t have a source for what you believe, it looks like these are things you read on a blog somewhere.
Or you’re playing the climate change bingo game and have a winner.
When someone makes a claim about some science published, ask:
Did you get a cite to the study?
Tell us where you learned this?
Tell us what you looked up?
Because people make up all sorts of stuff, misunderstand or misinterpret what they read, or tell only part of a story to emphasize a talking or arguing or PR point they want to make.
Watch out even if you get a source for the PR sites that identify themselves as providing “advocacy science” instead of peer reviewed science.
Hank Roberts says
Steven, here’s what I find looking for your “Harries” reference with the information you provide. Can you clarify what you read?
http://scholar.google.com/scholar?as_q=+co2+methane+climate&num=100&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=Harries&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en&lr=&newwindow=1&safe=off
Ray Ladbury says
Steve Caskey, This is the reason why scientists study for ~10 years to get a PhD, and then work for about 5 years as a postdoc and then publish for a couple of decades before they really become influential in a field. It really does take that long to understand the relative importance of different effects, which researchers are credible, etc. As a layman, your best bet is to look at peer-reviewed literature that has been accepted by the experts. Realclimate is an invaluable resource in this regard.
Look at relative magnitudes of different effects. Look at how long they persist. Anthropogenic CO2 is and remains the 400 pound gorilla even if we have a dip in solar activity.
steven caskey says
yes I understand the importance of a study being peer reviewed, just as in my profession, where they also refer to me as doctor, so I very seldom bother to go to blogs, and when I do I am more interested in what their references are than in what they have to say. I am referring to such things as the recent study of the temperature of the troposphere which was peer reviewed and published in the dec 2007 royal meteorological journal by christy and others. there is also a paper that was peer reviewed by a hungarian scientist, whose name escapes me now, who worked out that the possible climate change due to co2 in a finite atmosphere was considerably less than being projected. as far as the work under harries, I don’t recall if that was peer reviewed or not, and I have to head to work now, but I do recall the significant difference in the radiation windows between co2 and methane and the comparison of the graphs of the change in radiation in these windows from ~1970 to 2003. I will try to find time to take a closer look at it later
[Response: The Hungarian study you are talking about is by Miskolczi. It appeared in an obscure Hungarian weather journal, and having looked at it myself, the standards of peer review for that journal can’t be very good. You have to look at the journal and its standards in evaluating work. In this case, several of us had a look at the paper, and it’s clear that the author made serious and elementary errors concerning the application of Kirchhoff’s law and the virial theorem. This paper isn’t important enough to address in a peer-reviewed comment, but I have some Bowdoin undergrads working on a write-up of the problems in the paper, and that will ultimately be posted on RC when they’re done. As for the Christy study, that’s more or less a broad-brush review of temperature trends gotten by various groups. Just what is it that you see in that study that would cause you to doubt the seriousness of AGW as a problem? It’s still true that nobody can get these temperature trends from a physical model that leaves out the influence of CO2, and it’s still true that the trends are compatible with predictions of models that have equilibrium climate sensitivities from 1.5C to 4C. The data does not in any way support or demand low sensitivity to CO2. It doesn’t prove high sensitivity, either, which is why we are stuck making policy in the face of uncertainty. –raypierre]
steven caskey says
there seems to be no doubt that co2 is a ghg and affects climate as such. the discussion seems to be centered around the climate sensitivity and the degree of this influence. the study by christy using the raw data would indicate a low sensitivity; however, as I have read before and refreshed my memory on today on your pages, the margin of error of the data could cause the sensitivity to be much higher. what this study does do, from my perspective, is show that our ability to measure such things appears to be insufficient to draw firm conclusions one way or the other. thank you for reminding me of Miskolczi; that is in fact the study I was referring to, and I will make it a point to read your critique of his work when it comes out.
See, if you don’t have a source for what you believe, it looks like these are things you read on a blog somewhere
the comments on solar cycles 24 and 25 were based on information I read about nasa’s predictions on their home page. there appears to be about a 50/50 split on what magnitude solar cycle 24 will be, while the prediction of a very weak solar cycle 25 was made by hathaway at nasa, and there are similar predictions made by other scientists about solar cycle 25 – russian solar physicists whose names I can research if you are interested. I must admit as I read I pay too little attention to names, since I am not familiar enough with the personalities to draw any conclusions from who does the work.
[Response: I am not aware of any study by Christy that would support a climate sensitivity appreciably lower than what is given in the IPCC range. Could you be more precise about just what kind of result or argument you are quoting? Steve Schwartz did have a paper in JGR which claimed the data supported a low climate sensitivity, but as Gavin pointed out in his RC post on that paper, Schwartz’s analysis was based on invalid methods. That critique will have to work its way into a regular journal article someday, but meanwhile, you can read the reasoning here on RC. We do indeed have a range of possible climate sensitivities. The 20th/21st century data does not strongly constrain what sensitivity is the right one, though study of the Eocene climate and the ice-age climate would tend to argue against the lowest end of the IPCC range, though not definitively at this point. What is relevant for policymakers, though, is that nothing we know at present rules out the high end of the IPCC range, or even beyond that. That is important because big damages come at the high end, so they figure importantly in the expected damage, even if they have low (or unquantified) probability. –raypierre]
Ray Ladbury says
Raypierre, your comment raises an important point: The risk cannot be bounded at a reasonable level because of the thick high-side tail on the probability distribution of sensitivities. It would seem to me that anything we could do to reduce uncertainty on the high side would pay serious dividends on the policy side. If we can rule out such high sensitivities, and if unforeseen feedbacks (e.g. outgassing of CO2 by melting permafrost, the oceans, etc.) are not as severe as feared, we might be better able to develop coherent mitigations. These are big ifs, but without progress on this front, the mitigation problem becomes a bit of a Gordian knot.
Hank Roberts says
> , if you don’t have a source for what you believe,
And if you don’t have a publication record in the science about which you’re commenting!
> it looks like these are things you read on a
> blog somewhere
A dentist opining about dentistry, and a climatologist opining about climatology, have some trust established in their own areas of knowledge. Readers will expect they’ve got a basis for opinions, in their fields. Trust goes both ways between writers and readers.
steven caskey says
Trust goes both ways between writers and readers
I have no doubt that scientists on both sides of this issue have full faith in what they say and their interpretation of the results. there may be some exceptions of course but I believe the vast majority of the people involved are both serious and convinced in their beliefs
the christy report which stated that the troposphere was not warming as fast as expected in agw models is where I got the interpretation of lower climate sensitivity, although it is certainly possible that I may have read someone else’s interpretation of it and that may have led me in that direction. I did read the response on this web site, and I believe I may have read it before, about the same time I read the christy paper, and noted the main complaint being the margin of error in the temperature readings and the type of data base they chose. is it not a logical assumption that if the troposphere is not warming faster than the surface, or is warming faster but by an amount less than that predicted by models, that this would be a reflection on climate sensitivity? this is not meant as a rhetorical question either; I am open minded to being pointed in the right direction should I have taken a wrong turn.
It would seem to me that anything we could do to reduce uncertainty on the high side would pay serious dividends on the policy side
I think reducing uncertainties is an excellent idea
Hank Roberts says
> both sides
> full faith in what they say and their interpretation
I think that’s another mistake
I don’t think science works by taking sides, nor by faith, nor by certainty. PR, however, certainly does. It’s easy to think you’re reading scientific work and find you’re actually reading a political or business PR site instead.
Try this — pasting words taken directly from your statement above into Google, like these:
http://www.google.com/search?num=100&hl=en&newwindow=1&safe=off&client=firefox-a&rls=org.mozilla%3Aen-US%3Aofficial&hs=NZO&q=if+the+troposphere+is+not+warming+faster+then+the+surface+or+is+warming+faster+but+by+an+amount+less+then+those+predicted+by+models+that+this+would+be+a+reflection+on+climate+sensitivity%3F&btnG=Search
Look at the hits that come up in the first few pages.
The top couple are from Wikipedia. Of the rest, some are from science sites; some from PR or “science advocacy” sites.
Compare what they’re saying.
I think you’ll agree you see claims there are “two sides” and “faith” and “certainty” — but not from the science sites. Those are the opinion/PR/argument words.
Then try the same exercise but in Google Scholar, for a contrast.
steven caskey says
I don’t think science works by taking sides, nor by faith, nor by certainty. PR, however, certainly does. It’s easy to think you’re reading scientific work and find you’re actually reading a political or business PR site instead.
I’m sure I’ve read plenty of opinion pieces also. But if I believed everything I read I would be talking about big oil or grants for studies, depending on whose side I was taking. I find my opinion to be that of a small minority, and take that as some anecdotal evidence that I am not unduly influenced by others’ opinions. also note my answer was in response to your sentence involving the word trust, and to me it seems that the words trust, faith and belief all involve the same basic idea and would be more difficult to answer without their use
steven caskey says
as far as reducing uncertainties goes I will note that I did not say for PR reasons although that was the context it was brought up in. to me this means another look at the possibilities and a closer look at the probable consequences. a better understanding of the science would allow such a reduction in uncertainties would it not? I doubt that anyone would allow such a reduction if the data did not support it.
steven caskey says
I would agree that there shouldn’t be sides, and that all the scientists in the field should be working together to iron out their differences and prove to each other that their interpretation of the data is the correct one. to say this is happening and that there aren’t in fact sides to this issue is to ignore reality for the sake of argument
Ray Ladbury says
Steven Caskey, A scientist should do his speaking on scientific matters in peer-reviewed journals and at conferences along with other experts. Any other forum is ex cathedra. Since denialists have utterly failed to publish any credible theories for the current warming epoch, their opposition cannot be considered scientific. There is only one side to the science here.
In the interactions of scientists with politicians and the public, the objective should be education–the public needs to understand the likely consequences of the science and the possibilities for mitigation.
Outside of these venues, the scientist is a private citizen with preferences for one strategy or another, but with no special authority.
There simply is no credible science that challenges anthropogenic causation of the current warming epoch. We know also that warming will have adverse consequences, that the climate system has positive feedbacks that could take the situation completely out of our control and that there are still considerable uncertainties in where these “tipping points” lie. Most of the uncertainty is on the high side of the risk equation. To date the scientists have been quite conservative.
steven caskey says
Since denialists have utterly failed to publish any credible theories for the current warming epoch, their opposition cannot be considered scientific
I would contend that in order to deny a theory/hypothesis one would not have to replace it with a different hypothesis/theory, but rather merely prove the current theory/hypothesis flawed. I am not saying this has been done and I’m not saying it can be done; I am merely disagreeing with the level of responsibility to which those who disagree should be held. I do agree peer reviewed papers are the best way to discuss an issue as far as formal results go, but on a personal note I would be distressed if differences in my field led to such division as there appears to be in yours.
Hank Roberts says
Steven, can you put quotation marks around words that you’ve copied and pasted? I find I can’t tell if you’re echoing other people’s words in order to comment, or because you agree. Showing direct quotes, and cites, are very helpful tools for making clear whose words you’re using.
Ray Ladbury says
Steven Caskey, Scientific evidence is best judged in terms of how well a theory fits the observations. A theory may be incomplete and do a less than stellar job of fitting the data and still be correct. It is much easier to use comparative measures (e.g. likelihood ratio, AIC, BIC, DIC) to judge the goodness of a theory.
A theory is only rarely disproved by finding a single piece of data so wildly at odds with the theory that the advocates of the theory just throw up their hands. Rather, the theory will be modified somewhat to account for the new data. If this results in a more complicated theory, the various comparative metrics need to be looked at for the modified theory vs. all others.
The hypothesis that CO2 produced by human activity is largely responsible for the current warming epoch does not currently have any credible alternative. Most of the other ideas don’t even merit the term hypothesis, as they lack detailed physical mechanisms.
steven caskey says
Hank Roberts Says:
30 April 2008 at 9:38 AM
Steven, can you put quotation marks around words that you’ve copied and pasted? I find I can’t tell if you’re echoing other people’s words in order to comment, or because you agree.
certainly I will sorry about that
steven caskey says
Ray Ladbury Says:
30 April 2008 at 10:11 AM
A theory is only rarely disproved by finding a single piece of data so wildly at odds with the theory that the advocates of the theory just throw up their hands. Rather, the theory will be modified somewhat to account for the new data
I would add to this that it would be especially difficult when the actual theory isn’t disputed, but rather the magnitude of the positive and negative feedbacks, in a very complex environment where it isn’t possible to keep all other factors constant in order to test the outcome of changing one variable. it is the modifications to the theory that are in dispute, and there is actually less difference between those who believe the lower range of the ipcc predictions at ~1.5 climate sensitivity to be true and the people termed deniers at ~1.0 climate sensitivity than there is between those who believe the lower end of the ipcc predictions at ~1.5 climate sensitivity to be true and those that believe the upper end of the ipcc predictions at ~4.5 climate sensitivity to be true. what that tells me is there is a serious need to reduce uncertainties.
Ray Ladbury says
Steven, you have to look at whether the data support a particular sensitivity. A sensitivity of 1 or even 1.5 causes way more problems for many different datasets than does a sensitivity of 3 or even 3.5. The probability distribution on sensitivity is very asymmetric: 1.0 is extremely improbable given the data, while 4.5 cannot be ruled out. Almost all of the uncertainty is on the high side.