Gavin Schmidt and Stefan Rahmstorf
John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year’s numbers imply that ‘Global Warming has stopped’ or that it is ‘taking a break’ (Uli Kulke, Die Welt). However, as most of our readers will realise, these comparisons are flawed since they basically compare long term climate change to short term weather variability.
This becomes immediately clear when looking at the following graph:
The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well), while the blue lines are 8-year trend lines – one for each 8-year period of data in the graph. What it shows is exactly what anyone should expect: the trends over such short periods are variable; sometimes small, sometimes large, sometimes negative – depending on which year you start with. The mean of all the 8 year trends is close to the long term trend (0.19ºC/decade), but the standard deviation is almost as large (0.17ºC/decade), implying that a trend would have to be either >0.5ºC/decade or much more negative (< -0.2ºC/decade) for it to obviously fall outside the distribution. Thus comparing short trends has very little power to distinguish between alternate expectations.
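For readers who want to reproduce this kind of figure, here is a minimal sketch in Python (using a synthetic annual series with an imposed 0.19ºC/decade trend as a stand-in for GISTEMP; the noise level is illustrative, not fitted to the data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual global-mean anomalies: an imposed 0.19 ºC/decade trend
# plus interannual 'weather' noise (values illustrative, not GISTEMP).
years = np.arange(1975, 2008)
temps = 0.019 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

# Least-squares trend over every 8-year window, converted to ºC/decade.
window = 8
trends = np.array([
    np.polyfit(years[i:i + window], temps[i:i + window], 1)[0] * 10
    for i in range(years.size - window + 1)
])

long_term = np.polyfit(years, temps, 1)[0] * 10
print(f"long-term trend: {long_term:.2f} ºC/decade")
print(f"8-year trends:   mean {trends.mean():.2f}, std {trends.std():.2f} ºC/decade")
```

The point of the exercise is simply that the mean of the short trends recovers the long-term trend, while their spread is large enough to include both negative and strongly positive values.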
So, it should be clear that short term comparisons are misguided, but the reasons why, and what should be done instead, are worth exploring.
The first point to make (and indeed the first point we always make) is that the climate system has enormous amounts of variability on day-to-day, month-to-month, year-to-year and decade-to-decade periods. Much of this variability (once you account for the diurnal cycle and the seasons) is apparently chaotic and unrelated to any external factor – it is the weather. Some aspects of weather are predictable – the location of mid-latitude storms a few days in advance, the progression of an El Niño event a few months in advance etc, but predictability quickly evaporates due to the extreme sensitivity of the weather to the unavoidable uncertainty in the initial conditions. So for most intents and purposes, the weather component can be thought of as random.
If you are interested in the forced component of the climate – and many people are – then you need to assess the size of an expected forced signal relative to the unforced weather ‘noise’. Without this, the significance of any observed change is impossible to determine. The signal to noise ratio is actually very sensitive to exactly what climate record (or ‘metric’) you are looking at, and so whether a signal can be clearly seen will vary enormously across different aspects of the climate.
An obvious example is looking at the temperature anomaly in a single temperature station. The standard deviation in New York City for a monthly mean anomaly is around 2.5ºC, for the annual mean it is around 0.6ºC, while for the global mean anomaly it is around 0.2ºC. So the longer the averaging time-period and the wider the spatial average, the smaller the weather noise and the greater chance to detect any particular signal.
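A one-line way to see why averaging helps (illustrative only; real monthly anomalies are autocorrelated, so the actual reduction is smaller than the ideal 1/√N):

```python
import numpy as np

monthly_sigma = 2.5                           # single-station monthly anomaly std (ºC)
annual_sigma = monthly_sigma / np.sqrt(12)    # ~0.7 ºC if months were independent
print(round(annual_sigma, 2))
# The observed single-station annual value (~0.6 ºC) and the global annual
# value (~0.2 ºC) reflect the same effect: more roughly-independent samples
# in time or space means less weather noise relative to a given forced signal.
```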
In the real world, there are other sources of uncertainty which add to the ‘noise’ part of this discussion. First of all there is the uncertainty that any particular climate metric is actually representing what it claims to be. This can be due to sparse sampling or it can relate to the procedure by which the raw data is put together. It can either be random or systematic and there are a couple of good examples of this in the various surface or near-surface temperature records.
Sampling biases are easy to see in the difference between the GISTEMP surface temperature data product (which extrapolates over the Arctic region) and the HADCRUT3v product which assumes that Arctic temperature anomalies don’t extend past the land. These are both defendable choices, but when calculating global mean anomalies in a situation where the Arctic is warming up rapidly, there is an obvious offset between the two records (and indeed GISTEMP has been trending higher). However, the long term trends are very similar.
A more systematic bias is seen in the differences between the RSS and UAH versions of the MSU-LT (lower troposphere) satellite temperature record. Both groups are nominally trying to estimate the same thing from the same data, but because of assumptions and methods used in tying together the different satellites involved, there can be large differences in trends. Given that we only have two examples of this metric, the true systematic uncertainty is clearly larger than simply the difference between them.
What we are really after is how to evaluate our understanding of what’s driving climate change as encapsulated in models of the climate system. Those models though can be as simple as an extrapolated trend, or as complex as a state-of-the-art GCM. Whatever the source of an estimate of what ‘should’ be happening, there are three issues that need to be addressed:
- Firstly, are the drivers changing as we expected? It’s all very well to predict that a pedestrian will likely be knocked over if they step into the path of a truck, but the prediction can only be validated if they actually step off the curb! In the climate case, we need to know how well we estimated forcings (greenhouse gases, volcanic effects, aerosols, solar etc.) in the projections.
- Secondly, what is the uncertainty in that prediction given a particular forcing? For instance, how often is our poor pedestrian saved because the truck manages to swerve out of the way? For temperature changes this is equivalent to the uncertainty in the long-term projected trends. This uncertainty depends on climate sensitivity, the length of time and the size of the unforced variability.
- Thirdly, we need to compare like with like and be careful about what questions are really being asked. This has become easier with the archive of model simulations for the 20th Century (but more about this in a future post).
It’s worthwhile expanding on the third point since it is often the one that trips people up. In model projections, it is now standard practice to do a number of different simulations that have different initial conditions in order to span the range of possible weather states. Any individual simulation will have the same forced climate change, but will have a different realisation of the unforced noise. By averaging over the runs, the noise (which is uncorrelated from one run to another) averages out, and what is left is an estimate of the forced signal and its uncertainty. This is somewhat analogous to the averaging of all the short trends in the figure above, and as there, you can often get a very good estimate of the forced change (or long term mean).
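A toy version of this (synthetic numbers, not actual model output) shows how the uncorrelated noise averages out of an ensemble:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2031)
forced = 0.02 * (years - years[0])     # common forced signal, ºC
n_runs = 10

# Each 'run' is the same forced signal plus its own uncorrelated weather noise.
runs = forced + rng.normal(0.0, 0.1, size=(n_runs, years.size))
ensemble_mean = runs.mean(axis=0)

# Averaging reduces the noise by roughly 1/sqrt(n_runs), leaving the forced change.
print(round(np.std(runs - forced), 3))            # ~0.1  (single-run noise)
print(round(np.std(ensemble_mean - forced), 3))   # ~0.03 (noise in the ensemble mean)
```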
Problems can occur though if the estimate of the forced change is compared directly to the real trend in order to see if they are consistent. You need to remember that the real world consists of both a (potentially) forced trend and a random weather component. This was an issue with the recent Douglass et al paper, where they claimed the observations were outside the mean model tropospheric trend and its uncertainty. They confused the uncertainty in how well we can estimate the forced signal (the mean of all the models) with the distribution of trends+noise.
This might seem confusing, but a dice-throwing analogy might be useful. If you have a bunch of normal dice (‘models’) then the mean point value is 3.5 with a standard deviation of ~1.7. Thus, the mean over 100 throws will have a distribution of 3.5 +/- 0.17, which means you’ll get a pretty good estimate. To assess whether another die is loaded it is not enough to compare just one throw of that die. For instance, if you threw a 5, that falls well outside the uncertainty of the mean derived from the 100 previous throws, but it is clearly within the expected distribution of individual throws.
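The analogy is easy to simulate (a throwaway sketch, just to keep the two different uncertainties separate):

```python
import numpy as np

rng = np.random.default_rng(2)

throws = rng.integers(1, 7, size=100)             # 100 throws of fair dice ('models')
mean_estimate = throws.mean()                     # close to 3.5
sem = throws.std(ddof=1) / np.sqrt(throws.size)   # ~0.17, uncertainty of the mean

print(f"estimated mean: {mean_estimate:.2f} +/- {sem:.2f}")
print(f"spread of individual throws: {throws.std(ddof=1):.2f}")   # ~1.7
# A single new throw of 5 lies far outside the uncertainty of the mean,
# yet comfortably inside the spread of individual throws, so on its own
# it says nothing about whether the new die is loaded.
```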
Bringing it back to climate models, there can be strong agreement that 0.2ºC/dec is the expected value for the current forced trend, but comparing the actual trend simply to that number plus or minus the uncertainty in its value is incorrect. This is what is implicitly being done in the figure on Tierney’s post.
If that isn’t the right way to do it, what is a better way? Well, if you start to take longer trends, then the uncertainty in the trend estimate approaches the uncertainty in the expected trend, at which point it becomes meaningful to compare them since the ‘weather’ component has been averaged out. In the global surface temperature record, that happens for trends longer than about 15 years, but for smaller areas with higher noise levels (like Antarctica), the time period can be many decades.
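A rough way to see where that ~15-year figure comes from (a sketch using the standard least-squares formula with an assumed 0.1ºC of independent interannual noise; autocorrelation, ignored here, would lengthen the required period somewhat):

```python
import numpy as np

def trend_2sigma(n_years, noise_sigma=0.1):
    """2-sigma uncertainty (ºC/decade) of a least-squares trend fitted to
    n_years of annual data with independent noise of std noise_sigma (ºC)."""
    t = np.arange(n_years)
    se_slope = noise_sigma / np.sqrt(np.sum((t - t.mean()) ** 2))  # ºC/yr
    return 2 * se_slope * 10

for n in (8, 15, 30):
    print(n, round(trend_2sigma(n), 2))
# ~0.3 ºC/decade at 8 years, ~0.12 at 15 years, ~0.04 at 30 years:
# only at ~15 years and beyond does the trend uncertainty shrink to the
# size of the expected ~0.2 ºC/decade forced signal.
```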
Are people going back to the earliest projections and assessing how good they are? Yes. We’ve done so here for Hansen’s 1988 projections, Stefan and colleagues did it for CO2, temperature and sea level projections from IPCC TAR (Rahmstorf et al, 2007), and IPCC themselves did so in Fig 1.1 of AR4 Chapter 1. Each of these analyses shows that the longer term temperature trends are indeed what is expected. Sea level rise, on the other hand, appears to be under-estimated by the models for reasons that are as yet unclear.
Finally, this subject appears to have been raised from the expectation that some short term weather event over the next few years will definitively prove that either anthropogenic global warming is a problem or it isn’t. As the above discussion should have made clear this is not the right question to ask. Instead, the question should be, are there analyses that will be made over the next few years that will improve the evaluation of climate models? There the answer is likely to be yes. There will be better estimates of long term trends in precipitation, cloudiness, winds, storm intensity, ice thickness, glacial retreat, ocean warming etc. We have expectations of what those trends should be, but in many cases the ‘noise’ is still too large for those metrics to be a useful constraint. As time goes on, the noise in ever-longer trends diminishes, and what gets revealed then will determine how well we understand what’s happening.
Update: We are pleased to see such large interest in our post. Several readers asked for additional graphs. Here they are:
– UK Met Office data (instead of GISS data) with 8-year trend lines
– GISS data with 7-year trend lines (instead of 8-year).
– GISS data with 15-year trend lines
These graphs illustrate that the 8-year trends in the UK Met Office data are of course just as noisy as in the GISS data; that 7-year trend lines are of course even noisier than 8-year trend lines; and that things start to stabilise (trends getting statistically robust) when 15-year averaging is used. This illustrates the key point we were trying to make: looking at only 8 years of data is looking primarily at the “noise” of interannual variability rather than at the forced long-term trend. This makes as much sense as analysing the temperature observations from 10-17 April to check whether it really gets warmer during spring.
And here is an update of the comparison of global temperature data with the IPCC TAR projections (Rahmstorf et al., Science 2007) with the 2007 values added in (for caption see that paper). With both data sets the observed long-term trends are still running in the upper half of the range that IPCC projected.
Mike Tabony says
First, thanks Gavin for your many hours of dedication to this site. This site is equal to or better than a university course in climate change. Thanks to all the other contributors as well.
Relating to this article, I really like the comments by Aaron Lewis, # 190, and Wayne Davidson, #276. Mr. Lewis’ idea, letting the plants do all the integration for us appeals to my common sense. I might just throw in the rest of the biological world. I’ve been wondering why the robins are hanging around my house in central VA in the middle of the winter. As a child I can remember them being out on the southeastern tip of Louisiana. And, I must agree wholeheartedly with Mr. Davidson, the lines on a graph are one thing and much of the Arctic Ocean, including the Northwest Passage, clear enough for normal ship traffic for almost a month last summer is quite another. Look around your world; climate change is everywhere.
To get to the actual graph at the beginning of this article, it reminds me of many graphs I’ve seen of stock and commodity prices. Price charts are filled with the noise of millions of investors deciding what’s best for them. However, the prices can show step-like advances like this graph does. The prices will increase and meet a determined group of sellers who must be satisfied before they can advance further. The increasing temperature deviation portrayed in the graph can only proceed when the environment (the atmosphere, oceans, biosphere, etc.) stores the heat and reaches equilibrium with the previous advance.
A further examination of the graph does show the bottoms of the troughs to be tending upwards as one might see in the beginnings of what could be an exponential increase in the rate of advance. Could this imply that the storage mediums referred to above are reaching capacity?
Finally, two of the troughs in the graph seem to be well linked to the eruptions of two volcanoes and probably the effects of the shading particulates they added to the atmosphere. What do the models say the temperature deviation might now be without those eruptions? I suspect it would already be above .5 degrees C.
Rod B says
Fred, I’ll address a part of your post 295. You’re bouncing around a point that is part of my skepticism (but I’m leaving that alone for now), and something that was an involved discussion here a little while back. The moderate explanation of radiation transfer is that outgoing infrared radiation is absorbed intramolecularly by matching up frequencies with discrete energy levels in the rotations and vibrations of the molecular bonds. This is distinct from, but analogous to, the quantization of electron energy levels. Only greenhouse gases have the molecular/atomic layout required for this absorption. This absorption does not heat the molecule, though there was less than unanimous agreement with this.
The molecule relieves itself by either re-radiating out — up or down, or transferring the absorbed energy to translation of the molecule (which does heat the molecule — though there was considerable discussion over whether a single molecule can exhibit temperature) through a process of equipartition, or by direct transfer to another molecule’s translation energy (kinetic — and atmospheric temperature increase) via a collision. At least at lower altitudes, the latter seems to predominate, given the likelihood and frequency of molecular collisions; it also allows energy (heat) transfer to N2 and O2, et al, the non-greenhouse gases.
That initial (equivalent) photon’s energy follows a very tortured path until it either is radiated away from the top of the atmosphere, returns to be absorbed by the surface, or spends a while in the atmosphere.
Donald Dresser says
Folks,
Does anyone have any comments on how to reconcile the recent JPL/NASA reports on Antarctica (see http://www.washingtonpost.com/wp-dyn/content/article/2008/01/13/AR2008011302753.html?) with the University of Illinois – Urbana data on southern hemisphere ice accumulation (see http://arctic.atmos.uiuc.edu/cryosphere – at the bottom of the page)?
Thanks
David B. Benson says
Richard Tew (286) — Carbon capture and sequestration (CCS) is being seriously proposed as a way to continue to burn coal without adding carbon to the active carbon cycle. If biomass, such as biocoal, is burned instead of fossil coal, the result of CCS-firing is carbon-negative.
Dr. James Hansen has, I believe, proposed that a concentration of carbon dioxide in the atmosphere of between 300 and 350 ppm is necessary to preserve arctic ice. From today’s 385 ppm, about 182 billion tonnes of carbon need to be removed from the active carbon cycle to reach the maximum of his estimated range. I propose 315 ppm, simply on the grounds this was the concentration enjoyed in the 1950s. To reach this requires removing about 350 billion tonnes of carbon from the active carbon cycle. Either way, I opine that we owe it to future generations to get started right away.
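One way to check the arithmetic (hedged: the 2.13 GtC-per-ppm conversion is standard, but the ~45% factor, i.e. the fraction of removed carbon that stays out of the atmosphere once the ocean and land re-equilibrate, is my own assumption, chosen because it roughly reproduces the figures above):

```python
# Rough consistency check (hypothetical reasoning; the 0.45 factor is an
# assumed 'effective airborne fraction' for removals, not a stated value).
GTC_PER_PPM = 2.13          # GtC per ppm of atmospheric CO2 (standard conversion)
effective_fraction = 0.45   # assumed; ocean and land give some carbon back

def ppm_after_removal(start_ppm, removed_gtc):
    return start_ppm - removed_gtc * effective_fraction / GTC_PER_PPM

print(ppm_after_removal(385, 182))   # ~347 ppm, close to the 350 target
print(ppm_after_removal(385, 350))   # ~311 ppm, close to the 315 target
```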
Ray Ladbury says
Fred Staples,
OK, let’s start simple. Does log(n) go to infinity as n goes to infinity? Since I assume you know this, HOW CAN YOU CALL THAT SATURATION?
And, great, I’m all for learning about nuclear physics. However, don’t you think the physics might be just a wee bit different when we’re trying to understand the energy balance of the atmosphere?
For instance, your distinction between electronic excitation and vibrational excitation? The vibrational excited states and the electronic excited states are both quantum mechanical states. They absorb energy and they relax. Yes, they have different timescales, and that’s important for understanding how the greenhouse effect works. However, a CO2 vibrational state can relax collisionally or radiatively.
I also note that you seem to want to really simplify atmospheric structure. It is not a single, homogeneous, isothermal layer. I really do not understand why you are so resistant to learning the physics behind our current understanding of climate. Without some sort of model, your statistical analyses are no more useful than Sudoku.
Hank Roberts says
Donald Dresser —
To reconcile the two links, read the first one down to the words “on land.”
And read the second one down to the words “sea ice.”
Does that make the difference clear?
Kevin Stanley says
Donald Dresser-
I’m not sure I’m looking at the same info on the Cryosphere Today site as you, but all the pictures and charts at the bottom of the page you linked deal with sea ice extent–the area over water covered with ice, however thick or thin–whereas at a glance the Washington Post article seems to be dealing with the mass of the large ice sheets attached to land. So if I’m correct about the things you’re comparing, there’s not really anything to reconcile. The skin of ice over the water surrounding Antarctica has recently covered a somewhat larger area than normal. At the same time, the ice sheets (anchored to land) are losing mass–melting and crumbling off into the ocean around the edge of the Western Antarctic Ice sheet. No conflict. In fact, it seems to me like the ice breaking off of the ice sheet could even possibly contribute to the sea ice extent. But I may just not know what I’m talking about.
Hank Roberts says
Donald Dresser — try the transcript of the online conversation with Dr. Rignot, here. I’ve excerpted a bit that is relevant to the topic of this thread, how good models are, how good they were 30 years ago, and how they’re changing to allow better decisions:
http://www.washingtonpost.com/wp-dyn/content/discussion/2008/01/13/DI2008011301886.html?sid=ST2008011400051
for example, just an excerpt:
—–excerpt—–
Cary, N.C.: What is the source of the claim from global warming skeptics that Antarctic ice is growing, not shrinking, despite the collapse of the Larsen ice shelves?
Eric Rignot: Climate models have been predicting climate warming would increase precipitation in polar regions (because of enhanced evaporation on the oceans), which has indeed been the case in a few places (e.g. Antarctic Peninsula), but the effect is very modest. Since there is no melt in Antarctica and these models ignored the influence of glaciers, Antarctica could only grow. Reality shows otherwise. Reality shows that glaciers speed up and drive the ice sheet mass budget. This is a major shortcoming of models which we will now try to improve.
Models predicted a loss of Antarctic mass only after a warming of 4-5 degree Celsius. We are obviously there much sooner than expected.
…
Eric Rignot: … I am a bit more optimistic. I think the scientific community is coming to grasp with the Earth climate system. Slowly but surely. We are not doing very well in terms of predicting the future of ice sheets, but we are now gathering important information which will help improve the next generation of models. Global climate models were not doing very well 30 years ago. They came a long way and are now becoming more reliable. I remain optimistic that in years to come, not decades, we will produce more realistic predictions of the evolution of Greenland and Antarctica. We are learning tremendously right now about the dynamics of these systems. An unfortunate byproduct of changing the climate of the Earth so rapidly …
_______________________
Silver Spring, Md.: If Antarctic ice loss is accelerating – particularly in West Antarctica – as your work shows … what do you think odds are it will collapse this century?
Eric Rignot: That is a very difficult question. I think a collapse of West Antarctica in the next 100-200 yr is now a concept that is back on the table; it was not on the table anymore 10 years ago; it was first put on the table in the early 1970s. But even if the ice sheet does not collapse, a loss of a significant portion of Antarctica and Greenland could raise sea level 1-2 m in the next century, and I think this is already something to worry about.
—- end excerpt—-
NOTE, some ellipses added, some in original, see source.
lgl says
#298
So, the stratospheric temperature stays quite stable for several years. Then there’s a large volcanic eruption. After a couple of years the temp stabilizes at a lower level (or even increases), and then it all happens once more. What does that tell us about the cause of the temperature drop? The slow and steady increase of CO2?
Hank Roberts says
lgl, what’s your source for believing “So …” is a correct description? Why do you consider your source reliable?
Timothy Chase says
lgl (#309) wrote:
To me at least (and I don’t have any formal background) it suggests that the stratosphere is being affected by the climate system which exists in quasi-stable regimes, but as CO2 continuously increases, that system reaches a tipping point of sorts where the climate system reorganizes itself in response to certain changes — such as the rise or fall in levels of energy in various “pools,” determining how energy flows through the climate system. The reorganization probably involves teleconnections between climate modes (atmosphere-ocean oscillations, such as the El Nino-Southern Oscillation, the Arctic Oscillation/North Atlantic Oscillation, Indian Ocean Dipole, Pacific Decadal Oscillation, etc.) where teleconnections are formed and broken.
The reorganization will involve changes in the tendency for a given oscillation to be in one state or another, as well as the likely duration and strength of those states. Such reorganization is supposed to be common in chaotic systems. According to some of the literature I have been running across, there would appear to be a small-world network of teleconnections between the oscillations. (I’ll share titles a little later once I have had the chance to read through the essays once and can intelligently say what they are about.)
Given the length of time between steps, I would guess that a large part of what is going on at root involves changes to a hierarchically-organized ocean circulation at different levels and scales and over different regions within the hierarchy — but that is just a guess on my part. The stratosphere is further removed from the ocean, therefore it shouldn’t be that surprising that its behavior is more step-like.
Rod B says
Ray (305) asserts “log(n) go(es) to infinity as n goes to infinity…” This has very little relevance to the practicalities of the physics.
Timothy Chase says
Re lgl #309
Simpler Thought on stratosphere trend, volcanoes….
Aerosols in the upper stratosphere reduce the amount of energy entering the climate system. This gives the surface a chance to cool and simultaneously carbon dioxide a chance to build up. A cooler surface means less thermal radiation. The additional carbon dioxide renders the troposphere more opaque to radiation, meaning that the surface will have to warm up more to reach the point that it is emitting sufficient radiation to bring the temperature of the stratosphere back up. Meanwhile, more carbon dioxide is building up in the stratosphere which has the net effect of cooling it since it is above the effective radiating layer.
However, if you check:
http://hadobs.metoffice.com/hadat/images/update_images/global_upper_air.png
You will see that there is a downward trend early on before the first three eruptions, a shallower downward trend after the first, a slight upward after the second, and a long flat after the third. Probably has something to do with how strong the blow to the climate system was as the result of each eruption and the state the climate system was in prior to that eruption.
So there are trends (up, down, neutral) between volcanic disturbances.
Mike says
In the global climate models, what forcings would cause a reduction in temperature and CO2? Would GCMs predict another ice age if you could turn off the human CO2 contribution in the models and had the computing power to simulate the next 50,000 years?
Lawrence Coleman says
Is there any quantifiable means to determine how much of a role air travel and the booming aviation industry has played and will play in climate change? With the numbers of passenger aircraft and air-freight transport expected to jump hugely in the coming years, is there any way to gauge for sure the impact of the millions of tonnes of CO2 being pumped directly into the upper atmosphere at 30-37,000 ft? Does the latest computer modelling take this into account? This may also be a silly question, but is there technology available to filter and trap CO2/carbon when it leaves the aircraft’s engines? Maybe if the fuel was burnt more completely there would be less pollution; I’m guessing more refined, higher-octane jet fuel with a very narrow explosion band may help in the medium term. It’s pretty obvious that solar power is next to useless on aircraft; nuclear could be an option? Hydrogen fuel cells are too bulky, costly and dangerous. LPG likewise. Nuclear-powered aircraft seem the best bet as far as I can see, as long as stringent safety measures are implemented, especially regarding the possibility of crash landings. What do you guys think??
Petro says
Jim Galasyn reported at 300:
“Beaufort Sea ice pack fracture
A massive fracture of the Beaufort Sea ice pack as well as other interesting features are observed this winter in the Arctic. A new page under “Specific Ice Events” in the Education Corner shows this with satellite imagery and animations.”
Some of the collapsed ice pack is perennial ice:
http://nsidc.org/news/press/2007_seaiceminimum/images/20070910_perennial.png
The implication for next summer’s Arctic sea ice area is evident.
Martin Vermeer says
Oh, but it has. What Ray means to say — in science talk — is that, as n gets bigger and bigger, also log(n) goes on growing; the growth gets less and less, but it never stops, and what’s more, there is no limiting value that it will never exceed.
Saturation would be if there were a point at which adding more CO2 would cease to have any effect; or (weaker), that log(n) (or whatever) would approach to a finite limiting value rather than infinity. Both these behaviours would warrant being called ‘saturation’. Neither occurs, not for the log function and not for the greenhouse effect either.
Calling log behaviour ‘saturation’ is misleading at best. And at the present point in time the issue is irrelevant, as the delta of CO2 concentration is still so small that
log(c/c0) ≈ (c-c0)/c0
to rough approximation, i.e., near-linear behaviour.
Or was that what you meant by ‘very little relevance’? Then I agree. Here’s to hoping that it never becomes (very) relevant.
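To put numbers on the near-linearity (a sketch using the widely quoted simplified CO2 forcing expression ΔF = 5.35 ln(c/c0) W/m2, included here purely for illustration):

```python
import numpy as np

c0 = 280.0   # pre-industrial CO2, ppm

def forcing_log(c):
    return 5.35 * np.log(c / c0)     # W/m2, simplified log expression

def forcing_linear(c):
    return 5.35 * (c - c0) / c0      # first-order (linear) Taylor term

for c in (385, 560):
    print(c, round(forcing_log(c), 2), round(forcing_linear(c), 2))
# At ~385 ppm the linear term overshoots the log form by roughly 20%;
# at a doubling (560 ppm) the gap is larger still, but nothing 'saturates':
# each further increment of CO2 still adds forcing.
```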
lgl says
#310
The links in #298 and #313, which show roughly the same thing as the Technical Summary. When several measurements show the same trend I usually consider them reliable. And it does not surprise me that these large eruptions have severe impact on the climate system.
lgl says
#313
The trends are: increase between 1973 and El Chichon, increase between 1985 and Pinatubo, and flat after 1995. And the temp steps down about 0.5 degrees both times.
The models don’t show this behaviour so they must have missed something.
Barton Paul Levenson says
[[In the global climate models, what forcings would cause a reduction in temperature and CO2? Would GCMs predict another ice age if you could turn off the human CO2 contribution in the models and had the computing power to simulate the next 50,000 years?]]
They would if you added the physics of the Milankovic cycles. At the moment GCMs don’t generally include those, since they operate on a time scale of tens of thousands of years, and most GCMs simulate a hundred years or so at a time at most.
Craig allen says
Re 311 Timothy Chase :
I’ve also been wondering about the chaotic aspects of the climate system and the stability of the various oscillations etc. Does anyone know if there is a possibility that at some level of greenhouse forcing there will be a major shift such that we see something completely novel appear? For example, the complete disappearance of one of the existing oscillations and the emergence of a new one, or a major alteration to the route of one or more of the oceanic currents. Does anything this radical emerge from any of the climate models?
B Buckner says
Re:311 Tim Chase
The “lower strat” temp data you reference includes measurement of a significant portion of the upper troposphere, which is warming. The plots are deceptive.
Jim Cripwell says
I have come across another reference, namely
http://www.moderna-myter.se/2008/01/hur-stmmer-ipccs-prognoser-med.html
This is in Swedish, and for those, like me, who do not understand the language, if you click just under the graph, you get a larger version. No comment.
[Response: This is Pielke’s graph and the IPCC90/92 numbers are exaggerated by about 25% from what is actually in the 1992 report. Projections from 1990 and 1992 are still higher than they were in the 95 and 2001 reports, mainly because of expectations of continued CFC and methane growth. However, they are still within the 2-sigma of the derived trends in the observations. – gavin]
Hank Roberts says
Tim Chase and lgl are writing about very short term ‘trends’ between volcanic events, but to declare a trend you need some statistical work, if I understand the word correctly. Not just a picture.
Numbers, anyone? Where has this been published?
Rod B says
Martin (317), what I meant was that the ln(n) function might or might not apply throughout the range of concentration, and it is presumptuous, and irrelevant, to say that CO2 will continue to absorb more radiation until its concentration is infinite. [Though the idea would make moot the thorny issue of LN to the (about) 5th power of concentration ratios…]
Hank Roberts says
Tangential, for those in N. America — watch for spring. Got robins?
“… count for today’s song map was many fewer than for the other two maps, right? This shows that robins are not yet defending their breeding territory. Robins returning to breed and nest are indeed signs of spring. That’s why increases of reports for the “Song” map will be the clearest pattern we expect to see as we track this spring’s robin migration. …”
Redwing blackbird? tulip? hummingbird? This is meant for kids, but ‘grownups’ who haven’t quit paying attention could use this as well.
http://www.learner.org/jnorth/
Ray Ladbury says
Re 312: Rod, think about it. The fact that log(n) continues to rise and diverges as n–>infinity is precisely the point, since it means that adding more CO2 continues to have an effect. If you look at the Taylor expansion, for small changes in concentration, the logarithmic term grows nearly linearly.
SecularAnimist says
Ray Ladbury wrote: “It appears that the inhabitants of the denialosphere are falling into the same trap as the creationists – trying to find one single devastating observation or experiment that will falsify anthropogenic causation of climate change.”
To me that is the most striking thing about the folks I would call “honest” denialists — “honest” in that they genuinely believe what they are saying, as opposed to some who are obviously disseminating fossil fuel industry propaganda.
Despite, in most cases, having little or no real knowledge, background, or education in climate science, they seem prepared to conclude that they have discovered the simple and obvious flaw in “the anthropogenic hypothesis” that completely demolishes the whole notion of anthropogenic climate change, the simple and obvious flaw that has somehow escaped the attention of hundreds of climate scientists who have been studying the matter for decades.
Regarding the “modelers vs. empiricists” categorization, at present it is the empiricists who have every reason to sound the loudest and most urgent alarms about anthropogenic global warming, since its empirically observable effects are far outpacing the predictions of the modelers.
Jim Cripwell says
Ref 323. Thanks for the comment, Gavin, but I was not concerned with the IPCC predictions. Rather I was concerned with the correlation between the different data bases. In 213, you referred me to Vose et al (2005), which was completed by 2005. However, the Pielke graph shows good correlation between the different data sets before 2005, but it is in the last 2 years that this correlation seems to have broken down. This is why I query whether a study done in 2007 only using NASA/GISS is a proper scientific way to proceed.
[Response: The offset in one particular year doesn’t noticeably affect the correlation. On any timescale and for any period I’d be surprised if it was less than 0.95. The difference in the last couple of years is almost all due to treatment of the Arctic. I’ll post something on this next week. – gavin]
Timothy Chase says
B Buckner (#322) wrote:
The plot was in 313, and I had given:
Global lower stratospheric anomalies from Jan 1958 to Nov 2007
http://hadobs.metoffice.com/hadat/images/update_images/global_upper_air.png
… which shows HadAT2, UAH and RSS. I assume they are all using channel 4 (given how recent the plot is) with differences due to methodologies, but they show what appears to be good agreement on this trend at least.
In any case, channel 4 is presumably almost all stratosphere:
… and as such it should show very little contamination from the troposphere.
*
The principal cause of stratospheric cooling
(Wikipedia in need of correction?)
However, there is also the question of what is principally driving stratospheric temperature anomaly trends: depletion of ozone or higher levels of carbon dioxide?
According to:
Figure 8.20 d Stratosphere
Observed global annual mean surface, tropospheric and stratospheric temperature changes and GISS GCM simulations as the successive radiative forcings are cumulatively added one by one.
… at the bottom of:
Climate Change 2001:
Working Group I: The Scientific Basis
http://www.grida.no/climate/ipcc_tar/wg1/fig8-20.htm
… which gets repeated in:
… it is principally ozone depletion (where ozone absorbs ultraviolet directly from sunlight) rather than higher levels of greenhouse gases.
However, and this may be a new result, according to:
Thursday, November 09, 2006
Stratospheric cooling rears its ugly head….
http://rabett.blogspot.com/2006/11/stratospheric-cooling-rears-its-ugly.html
… the principal cause of stratospheric cooling is higher levels of greenhouse gases, and the effects of ozone on such cooling are much less significant.
[Response: It all depends on where you are. lower strat is principally ozone-depletion related (the majority of the MSU4 trend), while mid and upper strat (and mesosphere as well) is mainly CO2. – gavin]
Rod B says
Ray (327), I fully understand the mathematics. My assertion (325) is simply that it is not clear, obvious or proven that the mathematical formula, ln(nb), matches the physics of concentrations from zero to infinity. To positively claim it applies to n=infinity, or even n=50, or maybe 30, or maybe…(???) as a proof of absorption, IMHO, is not relevant or helpful, and might be specious. It seems you would have a relevant argument (right or wrong I don’t know, but it’s scientifically arguable) if you kept concentrations around realistic maximum possibilities (10, 20, ??) and not try to prove your case with the ridiculous (or sublime, take your pick) infinity… or million.
Timothy Chase says
Hank Roberts (#324) wrote:
lgl’s criticism is based on short-term statistical trends. I haven’t any criticism – only questions. I wouldn’t base a criticism of the models on a short-term statistical trend, and in the case of satellite records of stratospheric and tropospheric trends, the long-term trends may be even more problematic.
With respect to your call for numbers, I am not sure that “the numbers” would be a great deal of use at this point as they would be showing data joined together from multiple satellites, corrected for satellite drift, etc. In any case, in the past, when actual conflict existed between the models and the satellite data, the problem lay with the satellite data, not the models. John Christy’s difficulty with being able to tell the difference between day and night is one very good example.
However, while there are problems with model projections, (Hansen mentions cloud cover and consequently temperatures along the west coasts of continents below), in terms of lower stratosphere temperature anomaly and nearly all other global trends, the current NASA GISS Model E seems to be doing quite well.
Please see page 672 (pdf page 12) of:
J. Hansen et al. 2007, Climate simulations for 1880-2003 with GISS modelE, Clim. Dynam., 29, 661-696, doi:10.1007/s00382-007-0255-8.
http://pubs.giss.nasa.gov/abstracts/2007/Hansen_etal_3.html
lgl says
#330
If most of the 0.5 ºC drop in lower stratosphere is caused by ozone depletion, then how much more SW (in W/m2) has reached the troposphere and surface?
[Response: a little (maybe tenths of a W/m2). But ozone depletion is a net cooling factor since it also has an impact on LW absorption. gavin]
Roger Pielke. Jr. says
Gavin-
Can you show your work to support your claim that the 1990 IPCC projection is “still within the 2-sigma of the derived trends in the observations”?
The 1990 IPCC projection can be found here:
http://sciencepolicy.colorado.edu/prometheus/archives/climate_change/001321updated_chart_ipcc_.html
1990 IPCC predicted trend = 0.33 ºC/decade
Range of trend in 4 observational datasets (using GISS L/O) 1990-2007: 0.20-0.22 ºC/decade
In order for the IPCC trend to be within the 2-sigma of the observed requires that error to be > +/-0.13, which seems unreasonable given that the 2 sigma error in the 1979-2004 (only 7 years longer) is +/- 0.04 for the surface datasets and 0.08 for the satellites.
What is the 2 sigma of the trends in observations 1990-2007?
[Response: To be clear, the graph that was linked to and that I was discussing was your original attempt at discussing the 1992 supplement forecast. The numbers on that graph showed 0.6 deg C for 1990 to 2007, which even you subsequently acknowledged (but never graphically showed) should have been 0.45 deg C. That implies a trend of 0.26 degC/dec. The trends in the two GISS indices over the same period are 0.22 +/- 0.09 degC/dec and 0.26 +/- 0.11 degC/dec (2-sigma confidence, with no adjustment for possible auto-correlation, using standard Numerical Recipes methods). You could play around with the uncertainties a little or pick different datasets, but the basic result is the same. Given the uncertainty in the trends because of interannual variability and the imprecision of the data, the 1992 forecast trends are within observational uncertainty.
Now, for 1990 it is a slightly different matter. I would estimate the 1990 forecast is closer to 0.5 for 2007 (thus 0.29 degC/dec) than the 0.57 you graphed. It would then still be within the error bounds calculated above. For your estimate of the trend (and both of us are just reading off the graph here so third party replication might be useful at this point) it would fall outside for one index, but not the other. Making different assumptions about the structure of the uncertainties in that case could make a further difference in how it should be called. Since I wasn’t discussing this in the comment above, I’m happy whichever way it goes.
For the record, your comments about this exchange in other forums are singularly unprofessional and rather disappointing. My only suggestions have been that a) you read from a graph correctly before making comparisons, and b) calculate error bars (as much as can be done) before pronouncing on significance. These might strike some as simple basic procedures. You appear to think they are optional. I’m not much worried that my professional reputation will suffer because of our apparent disagreement about that.
One final note. I find the mode of discourse that you have engaged in – serial misquotation with multiple ad homs in various parts of the web with different messages for different audiences – unproductive and unpleasant. I am not the slightest bit interested in continuing it. – gavin]
Phil. Felton says
Re #331
“My assertion (325) is simply that it is not clear, obvious or proven that the mathematical formula, ln(nb), matches the physics of concentrations from zero to infinity.”
Actually it’s easily proven that it does not!
The ln() relationship arises because the absorption lines in the CO2 spectrum become saturated at their centers, and therefore the dependence on concentration is due to the non-saturated ‘wings’ of the broadened lines, yielding a ln-dependence. This applies for the range of concentrations experienced in our atmosphere, not at very low concentrations nor at extremely high ones, I suspect.
lgl says
#332
> in terms of lower stratosphere temperature anomaly and nearly all other global trends, the current NASA GISS Model E seems to be doing quite well.
Can’t agree with you there. The model shows a decrease between the eruptions, and the total drop between 1958 and 2000 is around 0.8 ºC while the observations you linked to show close to a 1.5 ºC drop.
Assuming I am comparing apple to apple?
Ray Ladbury says
Rod, So, since you don’t believe in physics, what do you propose we use? The logarithmic dependence is a direct consequence of the fact that we are having greatest effect (near Earth) in the wings of the absorption line. As the wings are quite broad, there is no reason to expect that CO2 will magically stop absorbing IR at some concentration.
As most of Earth’s carbon is bound up in rocks, for the range of possible concentrations of CO2 in the atmosphere, saturation quite simply will not occur.
Roger Pielke. Jr. says
Gavin- Thanks for reporting back. Thanks for observing that the trends that I reported for the 1990 IPCC are outside of the 2 sigma range. By picking your own smaller trend (rather than the one I reported) you came up with a slightly different answer. I see that you call for error bars when convenient (now) and ignore them when they are not (your comparison of Hansen’s 1988 projections with data). Of course, you well know that in my posts on this I had not claimed anything about statistical relationships of observed and predicted, simply presented the central tendencies.
On your complaints, we can both feel misrepresented I suppose, and that is why it is best to present analyses when making claims, especially when representing someone else’s work. So choose not to engage if you must, but conversations among people that disagree can lead to constructive learning.
[Response: Hopefully the constructive learning you are doing involves checking your facts. My description of the Hansen et al trends most certainly did include discussions of the error bars in the observations and in the model simulations:
As I have stated repeatedly, discussion of differences without discussion of uncertainties is misleading, and no verification of a forecast can be done without it. You can ‘report’ what you like, but don’t be surprised when people who bother to look up the original references correct you. I have neither mis-stated any fact, nor misrepresented your claims in any way. Your continued claims to the contrary, and your doing the precise same thing you accuse me of, are a poor reflection on your professional integrity. – gavin]
Bryan S says
Gavin,
As a follow up to our discussion earlier this week, I have been considering your suggestion that ENSO significantly alters the global heat budget on annual or longer time scales.
The magnitude of natural tropical variability (specifically ENSO), and its effect on the earth’s heat budget over annual to decadal periods, seems an important issue in climate science. To probe such a question further, I would like to ascertain some basic information. Firstly, what is the net radiative flux (in W/m2, then converted to Joules) needed to raise the temperature of the troposphere (entire global integral) 0.2 degrees C (or whatever is the best number attributed to the 1998 Super El Nino) over the relevant one year span? Over this same period, when considering the upper 750 m of the ocean, how does this quantity of atmospheric heating compare to the change in the ocean heat content observed for this same time period (measured in Joules or converted to W/m2) in Lyman et al. 2006 (Figure 1)? From this data, are we then able to estimate the fraction of the observed change in the global heat budget that was directly attributable to the El Nino event (since the heat content anomaly in the atmosphere was certainly correlated to the large 1998 El Nino)? Once this fraction is determined, it seems we could better speculate on the validity of the argument by Ellis et al. (1978) that even over annual periods of time, the change in ocean heat content is a proxy for the top of atmosphere radiative imbalance (which Roger Pielke Sr. asserts approximately equals the mean non-equilibrium radiative forcing plus feedbacks). I have e-mailed Roger, and also asked him to comment on this issue, which he indicated he would do on his weblog next week.
I hope my question is well posed, and that this would be a sound methodology to address it (I understand the error bars on ocean heat content limit the precision of this comparison, but it seems important to understand if the numbers are even of the same order of magnitude).
Thanks again for the continuing education.
[Response: Good questions. It will take me a little while to track down some answers, but I think it should be do-able. You could make a stab at it yourself by looking at the differences in the ‘Energy into Ground’ (+ve down) and ‘Net radiation TOA’ (+ve up) in one of the AMIP-style model runs we have archived. Look at the anomalies month by month (compared to some baseline) from mid-1997 to mid-1998. The numbers are ensemble means, and so you should be looking predominantly at the El Nino impact alone. You can compare that to the evolution of the surface temperature anomaly. I’ll report back later with my analysis. – gavin]
[Response: Update. Both ‘Energy into Ground’ and ‘Net radiation TOA’ are +ve down. Time series (including zonal means) are found here: http://data.giss.nasa.gov/modelE/transient/Rc_jt.2.01.html – gavin]
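A crude back-of-envelope for the first part of Bryan’s question (my own sketch, not Gavin’s model analysis: it counts only the sensible heat of dry air and takes the troposphere as roughly 80% of atmospheric mass):

```python
# Flux needed to warm the troposphere 0.2 ºC in one year
# (sensible heat of dry air only; troposphere taken as ~80% of atmospheric mass).
M_ATM = 5.1e18            # total atmospheric mass, kg
CP = 1004.0               # specific heat of air, J/(kg K)
AREA = 5.1e14             # Earth's surface area, m2
SECONDS_PER_YEAR = 3.15e7

energy = 0.8 * M_ATM * CP * 0.2            # ~8e20 J
flux = energy / AREA / SECONDS_PER_YEAR    # ~0.05 W/m2
print(f"{energy:.1e} J  ~  {flux:.2f} W/m2")
```

Even on this rough accounting the atmospheric term is small, which is consistent with the ocean dominating the heat-content side of the budget.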
Hank Roberts says
> in my posts on this I had not claimed anything
> about statistical relationships
> of observed and predicted, simply presented
> the central tendencies ….
When you “present” a “central tendency” you thereby make a statistical claim about the data, as I understand the term. No?
http://www.google.com/search?q=define%3Acentral+tendency
> it is best to present analyses when making claims …
At least, cite the published analysis you base your claims on.
http://imgs.xkcd.com/comics/wikipedian_protester.png
http://forums.xkcd.com/viewtopic.php?f=7&t=7301
B Buckner says
Timothy,
While I have a lot of respect for Tamino, if you go to the source:
http://www.remss.com/msu/msu_browse.html
you will find that MSU channel 4 covers a range of altitudes between 10 km and 30 km, and is centered at an altitude of about 17 km. The troposphere/stratosphere boundary is at 17 km at the equator, and at 8 km at the poles, so clearly channel 4 (lower stratosphere) samples a significant portion of the troposphere. Further, as shown on Figure 9.1(f) in AR4, global warming of the troposphere is expected to warm the air to an altitude of about 17 km, with a sharp drop off and cooling after that. So the MSU channel 4 sampling below 17 km is measuring warming air, while above that cooling air.
Hank Roberts says
Mr. Buckner, when I look for papers citing the MSU channels, I always find mention of how the raw satellite data is adjusted to deal with these questions; I don’t find anything as simple as you make it sound, however, in your last sentence there (8:11pm posting).
Here for example there’s an extended discussion of the many factors considered to make use of the raw satellite data, including those you mention and many more. There’s a lot of work done;
http://climate.envsci.rutgers.edu/pdf/TrendsJGRrevised3InPress.pdf
Vinnikov et al.
Temperature Trends at the Surface and in the Troposphere
J. Geophys. Res, 2006 – climate.envsci.rutgers.edu
“… radiation measured by MSU channel 2 results from changes in surface temperature and another 10% from the atmosphere above 180 hPa….”
You’ll find that paper mentioned at RC and elsewhere, q.v. for discussion. But see the original full paper for quite a few detailed paragraphs addressing the issues involved in making sense of the many different instruments. They do work at this.
gavin says
Bryan,
I did a preliminary analysis from the sources I linked to. From early 1997 to mid 1998, the simulated atmosphere accumulated anomalous heat at about 0.2 W/m2 on average due to the El Nino event. This corresponded to an increase of surface air temperature (in these simulations) of around 0.3 to 0.4 deg C. Over that same time, the model ocean was losing heat at approximately 0.7 W/m2. The heat out at the TOA was thus ~0.5 W/m2. Now whether this is a good approximation to reality is unclear – my sense is that it is going to be sensitive to the details of the cloud feedbacks in the tropics, but assuming for the time being that this is reasonable, it implies that the interannual variability in OHC is certainly on the same order as the long term TOA imbalance, and that the factor between TOA imbalance and OHC might vary between 0.8 and 1.2 on interannual timescales. One would therefore expect to see a plateau or even a dip in OHC growth through this period in the real world.
These kinds of model simulation do have a couple of disadvantages though that might be apropos. In mid-latitudes where SST variability is driven mainly by the atmosphere, starting off with the SST makes the fluxes go the wrong way. You could alleviate this by doing the analysis in the tropics alone – might even be easier to compare with data on clouds or rainfall too. But this is probably ok for a first cut.
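Writing out the bookkeeping in that paragraph (just restating the numbers above; signs follow the later correction that both terms are positive downward):

```python
# Anomalous 1997/98 fluxes from the comment above, global means in W/m2
# (positive = energy gained by that component).
atmosphere_gain = +0.2
ocean_gain = -0.7            # the model ocean was losing heat

toa_net = atmosphere_gain + ocean_gain
print(toa_net)               # ~ -0.5, i.e. about 0.5 W/m2 leaving at the TOA
```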
Steve Reynolds says
gavin> Over that same time, the model ocean was losing heat at approximately 0.7 W/m2. The heat out at the TOA was thus ~0.5 W/m2.
Was the area the same for ocean and atmosphere?
[Response: These are all true global means. – gavin]
Bryan S says
Gavin,
With more detailed analysis, this definitely needs to be published!
I need to think through the ramifications of this conclusion a little before commenting on details, and it’s getting late, and my family is calling. I look forward to continuing this informative discussion. Thanks for staying up late!
Rod B says
Phil (335), I concur. Ray (337) says, “As the wings are quite broad, there is no reason to expect that CO2 will magically stop absorbing IR at some concentration.”
How broad would you guess are the wings? There is likewise absolutely no reason to expect that it won’t stop. Given the quantization of the absorption it makes logical sense that it would stop absorbing long before infinity.
“….saturation quite simply will not occur….” My implied suggestion exactly: give it up. You have nothing to gain/prove but a small amount to lose.
Hank Roberts says
> There is likewise absolutely no reason to expect that it won’t stop.
Which “it” are you talking about now?
Chris Colose says
Rod,
see Eli Rabett’s page here. Also, if you go to Ray Pierrehumbert and Spencer Weart’s saturated gassy argument/part 2 pages you can see that Venus is not even “saturated” and more and more CO2 will cause warming well beyond the realm of what is practical for policy-making. This is going nowhere unless you want to discuss the details for academic reasons, but the idea in the blogosphere that “CO2 will be saturated soon so there is not much to worry about” is nonsense.
Richard Sycamore says
#340 “When you “present” a “central tendency” you thereby make a statistical claim about the data, as I understand the term. No?”
To “present” a central tendency is to make a statistical “claim”, yes. A single sample mean is an unbiased estimate of a population mean; it makes a statement.
However – to address the case RPJ was referring to – it is possible to present a set of central tendencies without drawing any inference as to whether the central tendencies are the same or different. It is a matter of descriptive vs inferential statistics. To move from a description of central tendency to an inference about a set of tendencies you need to estimate the uncertainties in central tendency. No estimate of uncertainty, no inference.
But – a brief digression – operating in descriptive mode rather than inferential mode is really a kind of dodge. What the author is essentially doing is letting the reader assume the risk of making a bad inference, thus shifting the liability from author to reader. In purely scientific circles this doesn’t really matter, as every scientist knows the literature is “buyer beware”. But it can be a problem for policy makers who aren’t aware of this cultural practice. They are often forced to, sometimes unknowingly, take on credibility risk that the scientists have chosen to waive.
Ike Solem says
It is interesting to see what aspects of the IPCC report were selected by Pielke and Tierney for discussion. They seem to shy away from the IPCC sea-level rise predictions discussed by Rahmstorf et al.
I’d like to know why Dr. Pielke is focusing only on one subset of IPCC predictions, considering that Dr. Pielke calls the surface temp data “a feast for cherrypickers”, and then goes on to say “Rather than select among predictions, why not verify them all?”
[Response: In fairness, Pielke discussed sea level rise here. -gavin]