Gavin Schmidt and Stefan Rahmstorf
John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year’s numbers imply that ‘Global Warming has stopped’ or that it is ‘taking a break’ (Uli Kulke, Die Welt). However, as most of our readers will realise, these comparisons are flawed since they basically compare long term climate change to short term weather variability.
This becomes immediately clear when looking at the following graph:
The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well), while the blue lines are 8-year trend lines – one for each 8-year period of data in the graph. What it shows is exactly what anyone should expect: the trends over such short periods are variable; sometimes small, sometimes large, sometimes negative – depending on which year you start with. The mean of all the 8 year trends is close to the long term trend (0.19ºC/decade), but the standard deviation is almost as large (0.17ºC/decade), implying that a trend would have to be either >0.5ºC/decade or much more negative (< -0.2ºC/decade) for it to obviously fall outside the distribution. Thus comparing short trends has very little power to distinguish between alternate expectations.
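The effect is easy to reproduce with synthetic data. Below is a minimal sketch (Python) that builds an artificial annual series with a fixed 0.19ºC/decade trend plus white ‘weather’ noise and fits every 8-year window; the noise level and period are illustrative assumptions, not the actual GISTEMP residuals, so the numbers will only roughly match the figure.

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1975, 2008)                  # arbitrary illustrative period
trend = 0.019                                  # degC per year (~0.19 degC/decade)
noise_sd = 0.1                                 # assumed interannual 'weather' noise, degC
temps = trend * (years - years[0]) + rng.normal(0, noise_sd, years.size)

window = 8
slopes = []
for i in range(years.size - window + 1):
    x, y = years[i:i + window], temps[i:i + window]
    slopes.append(np.polyfit(x, y, 1)[0] * 10)  # OLS slope, degC per decade

slopes = np.array(slopes)
print(f"mean of 8-year trends: {slopes.mean():.2f} degC/decade")
print(f"std of 8-year trends:  {slopes.std():.2f} degC/decade")
```

With these assumptions the spread of the 8-year trends comes out comparable to the long-term trend itself, which is exactly the point of the figure.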
So, it should be clear that short term comparisons are misguided, but the reasons why, and what should be done instead, are worth exploring.
The first point to make (and indeed the first point we always make) is that the climate system has enormous amounts of variability on day-to-day, month-to-month, year-to-year and decade-to-decade periods. Much of this variability (once you account for the diurnal cycle and the seasons) is apparently chaotic and unrelated to any external factor – it is the weather. Some aspects of weather are predictable – the location of mid-latitude storms a few days in advance, the progression of an El Niño event a few months in advance etc, but predictability quickly evaporates due to the extreme sensitivity of the weather to the unavoidable uncertainty in the initial conditions. So for most intents and purposes, the weather component can be thought of as random.
If you are interested in the forced component of the climate – and many people are – then you need to assess the size of an expected forced signal relative to the unforced weather ‘noise’. Without this, the significance of any observed change is impossible to determine. The signal to noise ratio is actually very sensitive to exactly what climate record (or ‘metric’) you are looking at, and so whether a signal can be clearly seen will vary enormously across different aspects of the climate.
An obvious example is looking at the temperature anomaly in a single temperature station. The standard deviation in New York City for a monthly mean anomaly is around 2.5ºC, for the annual mean it is around 0.6ºC, while for the global mean anomaly it is around 0.2ºC. So the longer the averaging time-period and the wider the spatial average, the smaller the weather noise and the greater chance to detect any particular signal.
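A back-of-the-envelope sketch of why averaging helps, under the idealised assumption of uncorrelated noise (real monthly anomalies are autocorrelated, so the actual reduction is somewhat weaker than 1/√N, and the number of ‘independent regions’ below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

monthly_sd = 2.5                                # degC, single-station monthly anomaly noise
n_years, n_regions = 2000, 25                   # illustrative numbers only
monthly = rng.normal(0, monthly_sd, (n_years, 12))

annual = monthly.mean(axis=1)                   # time-averaging over 12 months
print(f"annual-mean noise: {annual.std():.2f} degC")          # ~2.5/sqrt(12)

# spatial averaging over (assumed independent) regions shrinks it further
regional = rng.normal(0, annual.std(), (n_years, n_regions))
print(f"wide-area mean noise: {regional.mean(axis=1).std():.2f} degC")
```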
In the real world, there are other sources of uncertainty which add to the ‘noise’ part of this discussion. First of all there is the uncertainty that any particular climate metric is actually representing what it claims to be. This can be due to sparse sampling or it can relate to the procedure by which the raw data is put together. It can either be random or systematic and there are a couple of good examples of this in the various surface or near-surface temperature records.
Sampling biases are easy to see in the difference between the GISTEMP surface temperature data product (which extrapolates over the Arctic region) and the HADCRUT3v product which assumes that Arctic temperature anomalies don’t extend past the land. These are both defendable choices, but when calculating global mean anomalies in a situation where the Arctic is warming up rapidly, there is an obvious offset between the two records (and indeed GISTEMP has been trending higher). However, the long term trends are very similar.
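A toy illustration of how a coverage choice alone can offset two global means (this is not either group’s actual gridding or infilling method; the anomaly pattern and the 65ºN cutoff are invented for the example):

```python
import numpy as np

lats = np.arange(-87.5, 90.0, 5.0)              # 5-degree latitude bands
w = np.cos(np.radians(lats))                    # area weights

anom = np.full(lats.size, 0.4)                  # hypothetical uniform warming, degC
anom[lats > 65] += 1.5                          # hypothetical Arctic amplification

with_arctic = np.average(anom, weights=w)
without_arctic = np.average(anom[lats <= 65], weights=w[lats <= 65])

print(f"global mean, Arctic included:  {with_arctic:.3f} degC")
print(f"global mean, Arctic left out:  {without_arctic:.3f} degC")
```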
A more systematic bias is seen in the differences between the RSS and UAH versions of the MSU-LT (lower troposphere) satellite temperature record. Both groups are nominally trying to estimate the same thing from the same data, but because of assumptions and methods used in tying together the different satellites involved, there can be large differences in trends. Given that we only have two examples of this metric, the true systematic uncertainty is clearly larger than simply the difference between them.
What we are really after is how to evaluate our understanding of what’s driving climate change as encapsulated in models of the climate system. Those models though can be as simple as an extrapolated trend, or as complex as a state-of-the-art GCM. Whatever the source of an estimate of what ‘should’ be happening, there are three issues that need to be addressed:
- Firstly, are the drivers changing as we expected? It’s all very well to predict that a pedestrian will likely be knocked over if they step into the path of a truck, but the prediction can only be validated if they actually step off the curb! In the climate case, we need to know how well we estimated forcings (greenhouse gases, volcanic effects, aerosols, solar etc.) in the projections.
- Secondly, what is the uncertainty in that prediction given a particular forcing? For instance, how often is our poor pedestrian saved because the truck manages to swerve out of the way? For temperature changes this is equivalent to the uncertainty in the long-term projected trends. This uncertainty depends on climate sensitivity, the length of time and the size of the unforced variability.
- Thirdly, we need to compare like with like and be careful about what questions are really being asked. This has become easier with the archive of model simulations for the 20th Century (but more about this in a future post).
It’s worthwhile expanding on the third point since it is often the one that trips people up. In model projections, it is now standard practice to do a number of different simulations that have different initial conditions in order to span the range of possible weather states. Any individual simulation will have the same forced climate change, but will have a different realisation of the unforced noise. By averaging over the runs, the noise (which is uncorrelated from one run to another) averages out, and what is left is an estimate of the forced signal and its uncertainty. This is somewhat analogous to the averaging of all the short trends in the figure above, and as there, you can often get a very good estimate of the forced change (or long term mean).
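A minimal sketch of that ensemble logic, with an assumed linear forced signal and white noise standing in for the weather (real unforced variability is of course not white, but the cancellation argument is the same):

```python
import numpy as np

rng = np.random.default_rng(2)

years = np.arange(31)
forced = 0.02 * years                            # assumed forced signal, degC per year
noise_sd = 0.1                                   # assumed unforced interannual noise, degC
n_runs = 20                                      # hypothetical ensemble size

runs = forced + rng.normal(0, noise_sd, (n_runs, years.size))  # same forcing, different weather
ens_mean = runs.mean(axis=0)

print(f"rms departure of one run from the forced signal:    {(runs[0] - forced).std():.3f} degC")
print(f"rms departure of the ensemble mean from the signal: {(ens_mean - forced).std():.3f} degC")
# the second is roughly 1/sqrt(20) of the first, because the noise is uncorrelated across runs
```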
Problems can occur though if the estimate of the forced change is compared directly to the real trend in order to see if they are consistent. You need to remember that the real world consists of both a (potentially) forced trend and a random weather component. This was an issue with the recent Douglass et al paper, where they claimed the observations were outside the mean model tropospheric trend and its uncertainty. They confused the uncertainty in how well we can estimate the forced signal (the mean of all the models) with the distribution of trends+noise.
This might seem confusing, but a dice-throwing analogy might be useful. If you have a bunch of normal dice (‘models’) then the mean point value is 3.5 with a standard deviation of ~1.7. Thus, the mean over 100 throws will have a distribution of 3.5 +/- 0.17, which means you’ll get a pretty good estimate. But to assess whether another die is loaded it is not enough to compare just one throw of that die against this estimate. For instance, if you threw a 5, that is significantly outside the mean value derived from the 100 previous throws, but it is clearly within the distribution expected for a single throw.
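For the sceptical reader, the dice analogy takes about ten lines to check numerically; the die and the number of throws are just the analogy’s ingredients, nothing climatic:

```python
import numpy as np

rng = np.random.default_rng(3)

throws = rng.integers(1, 7, 100)                     # 100 throws of a fair die
mean = throws.mean()
spread = throws.std(ddof=1)                          # spread of individual throws (~1.7)
sem = spread / np.sqrt(throws.size)                  # uncertainty of the estimated mean (~0.17)
print(f"estimated mean: {mean:.2f} +/- {sem:.2f}")

new_throw = 5
print(f"distance from mean in units of the mean's uncertainty:  {(new_throw - mean) / sem:.1f}")
print(f"distance from mean in units of the single-throw spread: {(new_throw - mean) / spread:.1f}")
# the first looks wildly 'significant'; the second shows the throw is entirely unremarkable
```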
Bringing it back to climate models, there can be strong agreement that 0.2ºC/dec is the expected value for the current forced trend, but comparing the actual trend simply to that number plus or minus the uncertainty in its value is incorrect. This is what is implicitly being done in the figure on Tierney’s post.
If that isn’t the right way to do it, what is a better way? Well, if you start to take longer trends, then the uncertainty in the trend estimate approaches the uncertainty in the expected trend, at which point it becomes meaningful to compare them since the ‘weather’ component has been averaged out. In the global surface temperature record, that happens for trends longer than about 15 years, but for smaller areas with higher noise levels (like Antarctica), the time period can be many decades.
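A Monte Carlo sketch of how the trend uncertainty shrinks with record length, again assuming a fixed forced trend plus white annual noise (real noise has some persistence, so actual detection times are somewhat longer):

```python
import numpy as np

rng = np.random.default_rng(4)

trend = 0.019          # assumed forced trend, degC per year
noise_sd = 0.1         # assumed annual 'weather' noise, degC
n_trials = 5000

for length in (8, 15, 30):
    x = np.arange(length)
    slopes = np.array([np.polyfit(x, trend * x + rng.normal(0, noise_sd, length), 1)[0]
                       for _ in range(n_trials)]) * 10        # degC per decade
    print(f"{length:2d}-year trends: {slopes.mean():.2f} +/- {slopes.std():.2f} degC/decade")
```

With these assumptions the spread drops from roughly 0.15ºC/decade for 8-year windows to well under 0.1ºC/decade beyond about 15 years, which is where comparing an observed trend to the expected one starts to be meaningful.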
Are people going back to the earliest projections and assessing how good they are? Yes. We’ve done so here for Hansen’s 1988 projections, Stefan and colleagues did it for CO2, temperature and sea level projections from IPCC TAR (Rahmstorf et al, 2007), and IPCC themselves did so in Fig 1.1 of AR4 Chapter 1. Each of these analyses shows that the longer term temperature trends are indeed what is expected. Sea level rise, on the other hand, appears to be under-estimated by the models for reasons that are as yet unclear.
Finally, this subject appears to have been raised from the expectation that some short term weather event over the next few years will definitively prove that either anthropogenic global warming is a problem or it isn’t. As the above discussion should have made clear this is not the right question to ask. Instead, the question should be, are there analyses that will be made over the next few years that will improve the evaluation of climate models? There the answer is likely to be yes. There will be better estimates of long term trends in precipitation, cloudiness, winds, storm intensity, ice thickness, glacial retreat, ocean warming etc. We have expectations of what those trends should be, but in many cases the ‘noise’ is still too large for those metrics to be a useful constraint. As time goes on, the noise in ever-longer trends diminishes, and what gets revealed then will determine how well we understand what’s happening.
Update: We are pleased to see such large interest in our post. Several readers asked for additional graphs. Here they are:
– UK Met Office data (instead of GISS data) with 8-year trend lines
– GISS data with 7-year trend lines (instead of 8-year).
– GISS data with 15-year trend lines
These graphs illustrate that the 8-year trends in the UK Met Office data are of course just as noisy as in the GISS data; that 7-year trend lines are of course even noisier than 8-year trend lines; and that things start to stabilise (trends getting statistically robust) when 15-year averaging is used. This illustrates the key point we were trying to make: looking at only 8 years of data is looking primarily at the “noise” of interannual variability rather than at the forced long-term trend. This makes as much sense as analysing the temperature observations from 10-17 April to check whether it really gets warmer during spring.
And here is an update of the comparison of global temperature data with the IPCC TAR projections (Rahmstorf et al., Science 2007) with the 2007 values added in (for caption see that paper). With both data sets the observed long-term trends are still running in the upper half of the range that IPCC projected.
Timothy Chase says
B Buckner (#341) wrote:
Tamino isn’t the only one who seems to be making that “mistake”:
Likewise, I notice that the material you link to:
http://www.remss.com/msu/msu_browse.html
… refers to TLS as lower stratosphere and makes no mention of it as including troposphere.
However, you are right that the tropopause (the boundary between the troposphere and the stratosphere) extends to a height of 18 km over the equator.
Please see:
Moreover, it has been pointed out that the tropopause is rising as the result of global warming.
Please see:
… and:
The variability of the tropopause height gets me to thinking: is it possible that the height at which the channel 4 measurements begin is itself variable?
Looking around, I find:
Now this suggests that it is going off of air pressure. Checking for the air pressure of the stratosphere at the equator, I find:
So that can’t be it. Judging from what I can see, you are right that there is some contamination from the troposphere. But how significant is it? We know that the rate at which temperatures are rising near the equator is considerably smaller than at the higher latitudes, at least at the surface. I suspect you will find the same thing in terms of tropospheric warming. As such, there will probably be very little contamination due to tropospheric trends.
But this wouldn’t explain why Fu et al (2004) feel comfortable stating that it “records only stratospheric temperatures.” Perhaps Hank Roberts is right, and postprocessing in some form eliminates the contamination, or perhaps they were being somewhat imprecise. In either case, I don’t see much concern for the issue in the literature, but it would be nice to know.
Barton Paul Levenson says
Bryan S writes:
[[Firstly, what is the net radiative flux (in W/m2, then converted to Joules) needed to raise the temperature of the troposphere (entire global integral) 0.2 degrees C** (or whatever is the best number attributed to the 1998 Super El Nino) over the relevant one year span?]]
The atmosphere has a mass of 5.136 x 10^18 kilograms according to Walker (1977, p. 20); the fourth digit may have been revised since then but I can’t think of any specific sources offhand. The troposphere is usually taken to have 80% of the mass of the atmosphere, which would give it m = 4.109 x 10^18 kg. Dry air has a specific heat at constant pressure (yes, I know) of 1,004 Joules per kg per K, and the atmosphere is mostly dry, water vapor constituting around 0.4% by volume on average. Make it 1,010 J/kg/K for the whole troposphere. Change in temperature is dT = dH / (m cp), so dH = dT m cp and a 0.2 K change requires 8.3 x 10^20 J. A year is 365.2422 * 86400 = 3.156 x 10^7 seconds, so the power required is 2.630 x 10^13 watts. The Earth’s area is about 5.1007 x 10^14 square meters, so the required flux density is about 0.052 watts per square meter.
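For anyone who wants to check the arithmetic, the same numbers in a few lines (using the figures quoted in the comment above, not independently sourced values):

```python
m_trop = 0.80 * 5.136e18        # kg, troposphere taken as 80% of the atmosphere's mass
cp = 1010.0                      # J/kg/K, value used above for slightly moist air
dT = 0.2                         # K, the warming in question

dH = m_trop * cp * dT            # energy required, ~8.3e20 J
year = 365.2422 * 86400          # seconds in a year, ~3.156e7 s
power = dH / year                # ~2.6e13 W
flux = power / 5.1007e14         # spread over the Earth's surface, ~0.05 W/m^2
print(dH, power, flux)
```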
Barton Paul Levenson says
Rod B posts:
[[Ray (337) says, “As the wings are quite broad, there is no reason to expect that CO2 will magically stop absorbing IR at some concentration.”
How broad would you guess are the wings? There is likewise absolutely no reason to expect that it won’t stop. Given the quantization of the absorption it makes logical sense that it would stop absorbing long before infinity.]]
Venus has 89 bars of carbon dioxide (965,000 ppmv at 92.1 bars pressure) and a mean global annual surface temperature of 735.3 K (both figures from Seiff et al.’s 1986 standard atmosphere for Venus). The simulations of Bullock (1997) and others indicate that in times of intense volcanism, the surface temperature on Venus can go to 900 K. So even at that level it isn’t saturated. Whether some theoretical saturation limit exists or not is irrelevant to climatology; in practice, more CO2 means more greenhouse effect.
B Buckner says
Hank
I understand very smart people understand the issues and work hard to provide representative data. Channel 4 measures a certain layer of the atmosphere. The data needs to be processed to reflect the targeted layer. Complications arise because the targeted layer boundary varies with latitude, pressure and time, and there are huge temperature gradients at the boundary. It is not possible with this technology to get it right and perfectly separate the layers. That is why I said the lower stratosphere data is contaminated with warm troposphere data. If the lower stratosphere MSU satellite temp data is accurate, then there is a problem with global warming theory because the lower stratosphere has not cooled since 1993. I don’t think that is the case and the constant lower strat temps are more likely to be the result of a warming upper troposphere and a rising boundary between the troposphere and stratosphere.
Rod B says
Bryan and Gavin, I find your discourse very interesting and informative. Same goes for the Buckner, Roberts, Chase, et al discussion. Happy to listen and learn. Thanks.
Rod B says
> Which “it” are you talking about now?
CO2 absorbing infrared radiation.
Rod B says
per Chris (348): “…the ideas in the blogosphere that “CO2 will be saturated soon so there is not much to worry about” is nonsense…”
I never said, implied or hinted as such. I did imply that a concentration ratio of 20 – 30 times, compared to ~ 2x today, is possibly arguable for continuing absorption. But a million times, let alone infinite, is way beyond the realm of any known physics justifying the ln(n) formula.
“…This is going nowhere unless you want to discuss the details for academic reasons…”
You have a good point there!
Donald Dresser says
Hank Roberts and Kevin Stanley, thanks for pointing out what I had missed regarding the Antarctic ice reports.
Timothy Chase says
B Buckner (#354) wrote:
In response to my question in 330:
gavin wrote:
Ozone depletion has gone flat and may even be recovering a bit.
Ergo, there isn’t that much reason to expect the lower stratosphere to continue cooling. It could even warm up a little if ozone recovers appreciably. However, the middle and upper stratosphere should continue to cool.
lgl says
#354
If this http://www.remss.com/msu/msu_data_description.html#figures is correct, the top of the troposphere isn’t warming either (0.03 K/dec).
Ray Ladbury says
Rod B., your comments represent a misconception about spectral lines. They are not well-behaved distributions.
You can read about them here:
http://ads.harvard.edu/books/1989fsa..book/AbookC14.pdf
and here:
http://en.wikipedia.org/wiki/Voigt_profile
Result: Add more CO2 and you will absorb more IR. Absorb all the IR, and the gas will emit more, which can be absorbed higher up.
Timothy Chase says
B Buckner and lgl,
Apparently the trend in lower stratosphere temperature anomaly is pretty much what theory says it should be:
Ramaswamy et al., Anthropogenic and Natural Influences in the Evolution of Lower Stratospheric Cooling, Science 24 February 2006: Vol. 311. no. 5764, pp. 1138 – 1141, DOI: 10.1126/science.1122587
http://www.gfdl.noaa.gov/reference/bibliography/2006/vr0601.pdf
Roger Pielke. Jr. says
Gavin- Thanks much for the link in #350
Richard Sycamore- Thanks for stating what seemed fairly obvious.
The issue that has me confused about Gavin’s complaints about my efforts to look at past IPCC forecasts is that the 1995, 2001, and 2007 forecasts and observations are clearly and unambiguously right on top of one another. One can certainly do an uncertainty analysis, but, in this case, it seems redundantly unnecessary. As I characterized the IPCC forecasts in my post the forecasts are “spot on.”
Now 1990 is a little different. And whether you like Gavin’s 0.29/decade or my 0.33/decade, it also seems logically obvious that over the long term the observations (however the future evolves) cannot simultaneously be consistent with IPCC 1990 and IPCC 1995/2001/2007. No uncertainty analysis is needed to reach this conclusion either. Now it might be that the obs fall outside the 2 sigma uncertainty 1990-2007 (using my trend) or 1990-2008 (using Gavin’s trend + UKMET 2008 forecast) or some other period, but over the long term not all IPCC forecasts can be equally accurate, and this says nothing about the truth value of climate science or the integrity of the IPCC. It does however say something about the details of predictive capabilities from different points in time, and how climate forecasts have evolved, which seems to be worth doing (even the IPCC does it!).
[Response: My complaints were nothing to do with whether verification is worthwhile or not. We all agree it is. They had to do with presentation and accuracy in what was being discussed. You might interpret criticism of specifics as a criticism of approach but that does not logically follow and probably explains your confusion. Our conversations would be much more productive if you simply responded to actual comments rather than reflexively assuming some imagined agenda. For a start, how about apologising for mischaracterising (#338) my discussion of the Hansen et al simulations? – gavin]
lgl says
#362
Thanks Timothy, but again I disagree.
Still they only manage to get a flat curve between the eruptions, and
“Although the overall trend in temperature has been modeled previously (5, 9, 10), the steplike structure and the evolution of the cooling pattern in the observed global temperature time series has not been explained in terms of specific physical causes, whether these be external forcing and/or internal variability of the climate system. Thus, attribution of the unusual cooling features observed during the 1980s and 1990s has yet to be addressed, along with potential implications for the future.”
Not very convincing. And when they write things like:
“The decadal-scale temperature decline that is dominated by stratospheric ozone depletion is very likely unprecedented in the historical evolution of the lower stratospheric thermal state”…
How can they possibly know?
[Response: Err… because ozone-depleting CFCs are completely man made and have never existed in the Earth’s atmosphere before? – gavin]
lgl says
#364
But there has been far more severe volcanic activity through history than in the last decades. How can they know the temp drop after a VEI 7 or 8 volcano?
Fred Staples says
As far as the log(n) formula is concerned, Ray (305), it was introduced (empirically) by Arrhenius and modified by Hansen. The Climate Audit is tracking down sources, and the Reference Frame has a sketch of a deduction. It is not supposed to work indefinitely, or at very low concentrations. Angstrom (Weart and Pierrehumbert, A Saturated Gassy Argument, here) thought the gas would be completely saturated beyond quite modest levels. Your contributors did not agree.
However, over the range where the increase in temperature continues, the point is the doubling time. On the Arrhenius version, doubling the concentration makes the temperature increment proportional to log(2), which is constant.
If a CO2 concentration increase from 280 to 560 increases global temperatures by 1 degree in 100 years, 800 years will be required to raise temperatures by 4 degrees at a concentration of 4,480 ppm. Is CO2 toxic at that level? On the issue of the excitation mode, I think (and will look for sources) that the consequences of electron excitation and intra-atomic excitation are different. The first will re-radiate, at the same temperature and frequency, allowing surface radiation to continue upwards and travel backwards; the second will dissipate heat into N2/O2 as Rod B (302) says.
If most (even much) of the absorbed energy does increase the N2/O2 temperature, how does that energy leave the atmosphere, and what happens to the back-radiation argument?
The Saturated Gassy Post says:
“Or it (the excited molecule) may transfer the energy into velocity in collisions with other air molecules, so that the layer of air where it sits gets warmer. The layer of air radiates some of the energy it has absorbed back toward the ground, and some upwards to higher layers. As you go higher, the atmosphere gets thinner and colder. Eventually the energy reaches a layer so thin that radiation can escape into space.”
Which other air molecules? Can N2/O2 absorb and radiate?
[Response: You don’t need the other air molecules. It’s still the greenhouse gases (CO2, CH4, H2O, etc.) that are doing the radiating — it’s just that they’re doing the radiating at a level where the atmosphere is optically thin enough for the radiation to get out. Note that this remark applies to only one part of the counter-argument to the “saturation” claim (the “thinning and cooling” part that shows that saturation as understood by Angstrom wouldn’t negate the greenhouse effect increase even for a grey gas saturated at sea level). The other part, which is probably the most important part for CO2 on Earth, deals with the wings of the CO2 absorption spectrum, where CO2 absorption is weak and needs greater concentrations to be saturated. Always more absorption waiting in the wings! (until you approach Venus conditions). By the way N2 can become a greenhouse gas for sufficiently dense atmospheres. It’s the dominant greenhouse gas on Titan. Not a factor on Earth, though. –raypierre]
pat n says
We continue to run into skeptical arguments dealing with ground level global temperature noise in the 1920s-1940s period.
For example, a Facebook blogger wrote:
… how does one explain the earth’s rise in temperature from the early 1900’s to 1940 when HUMAN production of CO2 was significantly lower in relative terms to today’s production… Wouldn’t it make sense that temperatures should have fallen then, not risen? …
I assume there is no one explanation which is supported by scientific evidence.
Aerosol concentration changes fail to be convincing.
FYI – My reply to the Facebook blogger:
— I have an explanation for the apparent cooling period from 1940-1975 which differs from the possible explanation given by many mainstream scientists (aerosols). My explanation is that the 1920s-1940s was a warm bubble, a result of less condensation and cloud cover in the 20s and early 30s, followed by higher intensity El Nino conditions in mid 30s-40s.
[Response: I’m glad you pointed out that this nonexistent problem is still making the rounds, but I don’t buy your explanation. Either the current or previous IPCC report has many citations regarding the causes of the earlier parts of the warming. CO2 still plays a role, but basically the problem is that the warming is still small enough then that it’s not beyond natural variability. I suppose some of your claims about clouds and ENSO could be included under “natural variability.” And of course, it’s well established by now that the main reason for the interruption of the warming is aerosols, notwithstanding that there is still considerable uncertainty regarding the secondary aerosol effect on clouds. –raypierre]
—
Martin Vermeer says
Rod,
obviously logarithmic behaviour breaks down for very low CO2 concentrations (otherwise an almost CO2-free atmosphere would exhibit a negative greenhouse effect). What happens at the very high end is anybody’s guess. But for the range we are actually looking at — say, pre-industrial to twice pre-industrial — do you have any reason to believe the models, which robustly predict log behaviour based on well established physics, have got it wrong?
Ray’s statement is certainly relevant as a counter-statement to the claim of saturation within the range of concentrations we are actually talking about, which IIRC is what started this discussion.
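For readers who want to see why ‘logarithmic’ and ‘saturated’ are different claims, here is a sketch using the simplified CO2 forcing fit of Myhre et al. (1998), ΔF ≈ 5.35 ln(C/C0) W/m²; the constant is that fit’s value, and the expression is only meant for roughly the range Martin describes, not for very low or Venus-like concentrations:

```python
import numpy as np

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing fit (Myhre et al. 1998), in W/m^2."""
    return 5.35 * np.log(c_ppm / c0_ppm)

for c in (280, 560, 1120, 2240):
    print(f"{c:4d} ppm: {co2_forcing(c):5.2f} W/m^2")
# Each doubling adds the same 5.35*ln(2) ~ 3.7 W/m^2: the response flattens per ppm added,
# but it never stops increasing, which is the point about the absorption wings.
```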
Roger Pielke. Jr. says
Gavin- We could take this aggrieved academic show on the road – Dr. Abbott and Prof. Costello ;-)
To recap:
1. You complained that I posted up a graph without displaying proper uncertainty ranges
2. I replied that I had discussed them in the text.
3. I pointed to a graph that you posted without uncertainty ranges.
4. You replied that you had discussed them in the text.
Anyway, the substantive discussion of uncertainties I presented in #363 should make my views on this completely unambiguous.
Shall we go around again? Or is everything quite clear by now? ;-)
Martin Vermeer says
Lawrence,
you awoke the technology nut in me.
Why burn carbon at all? Burn hydrogen.
Not if beamed down from space by laser beam (seriously!)
Why allow them to land? Visit them in the air by smaller shuttle aircraft.
(I know, I know. Perhaps not :-)
Timothy Chase says
lgl (#364) wrote:
You are quoting from the opening paragraph. It states what hasn’t been achieved — until now. And they manage to get basically the same curve (with envelope) in the simulation as we get from observation. The progressive flattening of the curve between eruptions has to do with the drop in the rate of ozone depletion. Just as I suggested would be the case in 359:
The reason is that the principal cause of cooling in the lower stratosphere is the depletion of ozone, not higher levels of greenhouse gases. Rising levels of carbon dioxide are the principal cause of cooling in the middle and upper stratosphere.
Their case is made a bit more impressive with the temporal evolution diagrams showing temperature anomaly by latitude and year, even though the Quasi-Biennial Oscillation was left out of their model.
*
Going back to your previous post…
lgl (#360) wrote:
TTS?
That isn’t “top of troposphere.”
David B. Benson says
lgl (365) — Greenland ice core records show that, for example, the aerosols (sulfates) in the atmosphere due to the Mt Toba super-eruption about 74–71 kya disappeared in 3–6 years. Not decadal scale.
Hank Roberts says
lgl, Gavin’s exactly correct in answering your question about the historical record on ozone depletion
> likely unprecedented in the historical evolution
Catalysis by chlorofluorocarbons; in addition, greenhouse-driven cooling of the stratosphere has reached temperatures at which the ice clouds that enhance ozone destruction form more often, which is delaying the recovery.
http://www.usgcrp.gov/usgcrp/ProgramElements/plans/atmosphereplans.htm
Prehistory offers one other mechanism that depletes the ozone layer, but if it had happened recently we wouldn’t be here to notice it:
http://journals.cambridge.org/abstract_S1473550404001910
The third possibility — the one Crutzen pointed out in his Nobel Prize lecture: if industry had chosen bromine rather than chlorine to create halogenated fluorocarbons, the ozone layer would have been gone before we had a clue we were causing the problem.
See references here:
http://academics.smcvt.edu/chemistry/FACULTY/jvh_publications/1995_CrutzenRowlandMollina.pdf
Blind luck, on that one, the economics were apparently close to 50-50 between choosing chlorine or bromine as a feedstock.
Who knew?
lgl says
#370
TTS is at 10 km. That’s above the top of the polar troposphere and below the top of the tropical troposphere. I assumed that if there is almost no warming at 10 km there’s probably even less at the top of the tropical troposphere, but maybe I’m wrong.
lgl says
#371
But how much ozone disappeared?
lgl says
#370
Timothy
Why is the temp drop between 1960 and 1975 as large as between 1990 and 2005?
Rod B says
Martin (368) says, “….do you have any reason to believe the models, which robustly predict log behaviour based on well established physics, have got it wrong?…”
To be truthful this is an area where I have doubts, but I’m a long way from claiming with any credibility that it’s wrong. But it wasn’t my (simple?) point which explicitly questioned the ln(n) relationship at the extremes, in particular when n = infinity. I am not quarreling with the claim, “within the range of concentrations we are actually talking about…”
Eli Rabett says
#366 Also the various isotopic combinations, e.g. 18O, 13C, etc.
Eli Rabett says
#370. Increasing ghg concentrations are more responsible for stratospheric cooling than ozone depletion. See also Uhreck’s page referenced at the link
Martin Vermeer says
Re: #377: Rod, then we agree. And I assume Ray agrees too. If you look back at his post #305, you see that he never claims that the log(n) relationship holds valid to infinity, but rather that Fred Staples’ earlier claim that logarithmic behaviour is the same as saturation, is simply not true. I’m surprised you missed that logic.
Hank Roberts says
# lgl Says:
> But how much ozone disappeared?
http://journals.royalsociety.org/content/501842p67034t582/
Review. Stratospheric ozone depletion
Issue Volume 361, Number 1469 / May 29, 2006
Pages 769-790
DOI 10.1098/rstb.2005.1783
Author F. Sherwood Rowland
http://ozone.unep.org/Frequently_Asked_Questions/
http://www.pnas.org/cgi/content/abstract/104/2/445
Chris Colose says
The earlier century warming had a lot to do with anthropogenic forcing, solar forcing, a lack of volcanic activity, and some internal variability. From my impression of most people on facebook, they seem to assume that *we* think CO2 is the only thing relevant to climate, so if the sun goes down by 5%, CO2 increases, and it cools, our theory is falsified.
By the way, raypierre, when exactly does N2 become able to absorb/emit IR, I didn’t know that, I didn’t even think it could ever do that??
Eli Rabett says
#382 Chris, for all practical purposes N2 does not absorb or emit IR; however, if you REALLY want to get picky then you have to start by including collision induced absorptions (the collision complex formed by two molecules in the act of colliding) and quadrupole moments (the normal interaction of molecules with light is due to electric dipole interactions between the photons and the molecules). You can find the quadrupole and magnetic dipole terms discussed in books on electromagnetic theory (Jackson is the default). The collision induced stuff is so far in the weeds that you have to go to the primary literature.
For nitrogen see Demoulin, Farmer, Rinsland and Zander, JGR 96, 13003 (1991). The optical depth from the top of the atmosphere to the top of the Alps is about 10% absorption. O2 has both quadrupole and magnetic dipole lines which are about as strong (Balasubramanian, D’Cunha, Rao, JMS 144, 374 (1990)). Also Rinsland et al., JQSRT 48, 693 (1992). Both are minor but real. The amount of energy deposited into any small region of the atmosphere is zilch, so this is one of those things you can stump an expert on, but in the long run it doesn’t matter much.
Timothy Chase says
Eli Rabett (#378) wrote:
Eli, I had raised this very issue (whether stratospheric cooling was principally due to ozone depletion or rising levels of carbon dioxide) back in #330, giving links to the IPCC (2001) and Wikipedia, where claims are made that it is principally due to ozone depletion, and to your “Stratospheric Cooling Rears its Ugly Head,” where you claim that it is principally due to rising levels of carbon dioxide, and Gavin said:
I believe he had said something to this effect a while back, but I recalled it only on the second time through. Anyway, the fact that ozone is more important in the lower stratosphere helps to make sense of the trend in lower stratosphere anomalies, which have behaved pretty much as models would project. (See #362.)
But in any case, looking at your chart (I don’t have the air pressures down, but going off of the falling air pressure with increasing height), I believe you may be capturing the switch. In the lower stratosphere, ozone depletion appears to dominate. In the middle and upper stratosphere, rising GHGs dominate. Might have preferred three curves, though: one for GHG only, one for ozone only, and one for both. You have the last two, but not the first.
Rod B says
Martin (380). Ray said, “…Does log(n) go to infinity as n goes to infinity? Since I assume you know this, HOW CAN YOU CALL THAT SATURATION?..”
I dunno. Maybe I misread that. If so I apologize.
Timothy Chase says
lgl (#374) wrote:
Actually the “Relative Weighting Function” diagram shows a peak at 10 km, but wings going as far down as 0 km and as far up as nearly 35 km. But being above the troposphere in the polar regions means that it is getting some of the stratosphere. Therefore it is measuring both the stratosphere and troposphere — and just enough of both that their trends nearly cancel.
Chris Colose says
I bought the article for temporary access, but am having technical problems; hopefully I will read it after contacting them.
The technical chemistry/spectrometry is probably a bit above me. My question was mainly where the “atmosphere’s density” comes in as far as allowing N2 to act as a strong greenhouse gas on Titan, and any other differences between Titan and Earth in this regard – chris
Martin Vermeer says
Rod (#385): the saturation ‘meme’ is alive and kicking in denialist circles a good half century after dying a natural death in the scientific community. What appeared to be your attempt at deflecting attention from Ray’s debunking of it looked so straight out of the [edit] Deniers’ Handbook (if you can’t win the debate, add confusion) that I reacted strongly. Pardon my thin skin.
Ike Solem says
Gavin, thanks for the link to Dr. Pielke’s review of IPCC estimates of sea level rise vs. actual measurements. His summary is a little strange, however:
Well, there’s actually a very large literature on sea level rise. There are two major components – the thermal expansion of the oceans as warming proceeds, and the contribution from melting ice sheets. Here are some examples:
1) Willis, J. K., D. Roemmich, and B. Cornuelle (2004), Interannual variability in upper ocean heat content, temperature, and thermosteric expansion on global scales, J. Geophys. Res., 109.
This paper, which is all about how to combine satellite altimetry data with in situ measurements to get good estimates of upper ocean heat content, has some interesting observations that apply to some discussions on this thread:
2) Miller L. and Douglas, B.C. (2004) Mass and volume contributions to twentieth-century global sea level rise, Nature (428).
This paper also attempts to sort out the components of global sea level rise:
Concerning Dr. Pielke’s attacks on RealClimate as “spin”, we can also look at
3) Rahmstorf, S. (2007) “A Semi-Empirical Approach to Projecting Future Sea-Level Rise”, Science, 315.
There’s also this point, which is worth thinking about:
So, where is the spin and lack of comprehensive analysis that Dr. Pielke is upset about? Given the warming trends in the polar regions, it seems clear that Greenland and West Antarctica will continue to lose mass to the oceans over the next century at an ever-increasing rate. Exactly how long it will take to reach equilibrium is very uncertain, however.
However, the reports are pointing towards very rapid increases in the rate of melting. See Greenland’s Ice Melt Grew by 250 Percent, Satellites Show, National Geographic News, Sept 2006 as well as Greenland Ice Sheet Is Melting Faster, Study Says, National Geographic News, Aug 2006. Those are news reports on two papers published in Science and Nature that rely on the GRACE (Gravity Recovery and Climate Experiments) satellites for direct ice mass measurements.
Finally – whether or not the rate of sea level rise over the next 100 years (a rather arbitrary time frame) is “catastrophic” or not depends partly on how people respond to it. Hurricane Katrina, for example, was not intrinsically catastrophic – that was a result of poor disaster planning and a failure to maintain the levees (perhaps they should have asked the Dutch engineers for advice?).
The best advice for policy makers would be three-fold: halt the use of fossil fuel combustion for energy, implement a massive global renewable energy infrastructure program, and prepare plans for the worst-case scenarios of the unavoidable effects of global warming that are already in the pipeline.
Ray Ladbury says
Fred,
Yes, the log(n) dependence was derived empirically. That does not diminish the fact that there are physical reasons for it–the thick tails of the absorption spectrum.
As to the relaxation of the excited state–if it can absorb radiation, it can emit radiation, right? It can also relax collisionally–and this is what imparts energy to N2/O2. It’s a question of the proportion in each class, and that depends on the density of the air in the vicinity of the molecule. Keep in mind, also, that collisional excitation is possible, even at relatively low temperatures. This is how energy leaves the climate system in the ghg absorption bands, after all.
Barton Paul Levenson says
Fred Staples posts:
[[If a CO2 concentration increase from 280 to 560 increases global temperatures by 1 degree in 100 years]]
Try 3 degrees.
Barton Paul Levenson says
lgl posts:
[[Why is the temp drop between 1960 and 1975 as large as between 1990 and 2005?]]
The mean global annual surface temperature was flat between 1960 and 1975 and rose from 1990 to 2005. Where are you getting your figures?
Barton Paul Levenson says
Chris Colose writes:
[[The technical chemistry/spectrometry is probably a bit above me. My question was mainly where the “atmosphere’s density” comes in as far as allowing N2 to act as a strong greenhouse gas on Titan, and any other differences between Titan and Earth in this regard – chris]]
The surface air pressure on Titan is 146,700 pascals as opposed to 101,325 at sea level on Earth, but the main difference is the temperature — the surface temperature on Titan is 94.5 K, compared to 287 or 288 on Earth. In those conditions, nitrogen becomes a greenhouse gas (though methane is also important in the Titan greenhouse effect). If you want to google the name McKay along with Titan, several of Chris McKay’s articles on the subject are free on the web.
B Buckner says
Tim and Eli,
Figure 9.1(c)&(d) in AR4 graphically portrays the separate modeled effects of GHG and ozone respectively, for the period between 1890 and 1999. Have a look. The ozone cooling extends to a slightly lower height, but it is hard to see that ozone dominates cooling in the lower strat as indicated by Gavin.
Barton Paul Levenson says
My web site got deindexed by some hacker, and no longer shows up in web searches. I remember this happened to RealClimate a while back. How did you fix the problem?
[Response: There’s a revalidation process in Google tools somewhere, it can take a while though. – gavin]
Lynn Vincentnathan says
Instead of dithering about the present yearly ups & downs in the overall warming pattern, how about setting our sights on what might happen if we reach 6 degrees by the end of this century, or in 2 centuries or more.
SIX DEGREES by Mark Lynas will finally be released here in the U.S. on Jan 22, and Amazon is giving a 5% discount for preorders (I already got mine thru Amazon.co.uk, which was a lot more expensive):
http://www.amazon.com/Six-Degrees-Future-Hotter-Planet/dp/142620213X/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1200928287&sr=1-1#productPromotions
For RC’s discussion of the book, see: https://www.realclimate.org/index.php/archives/2007/11/six-degrees/langswitch_lang/im
lgl says
#392
We were discussing the lower stratospheric temperature.
Hank Roberts says
> 9.1(c)&(d) in AR4
I can see it — the blue in (c), compared to (d).
Did you look at the 2 “sources for further information” listed in the caption? (I’ve quoted the caption below.)
———
For convenience of others wanting to follow, this is a link to the whole chapter: http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf
Caption: Figure 9.1. Zonal mean atmospheric temperature change from 1890 to 1999 (°C per century) as simulated by the PCM model from (a) solar forcing, (b) volcanoes, (c) well-mixed greenhouse gases, (d) tropospheric and stratospheric ozone changes, (e) direct sulphate aerosol forcing and (f) the sum of all forcings. Plot is from 1,000 hPa to 10 hPa (shown on left scale) and from 0 km to 30 km (shown on right). See Appendix 9.C for additional information. Based on Santer et al. (2003a).
Santer, B.D., et al., 2003a: Contributions of anthropogenic and natural forcing to recent tropopause height changes. Science, 301, 479–483
Don’t miss this followup, Santer et al.’s response to Pielke et al.’s skeptical comment on that article; the response, available as full text online, reads as good clear hard scientific argument.
http://www.sciencemag.org/cgi/content/full/303/5665/1771c
(Good illustration of why you should always read the footnote, look up the article, and look at subsequent citations and comments and followups — science grows like a plant, finding new resources and growing in all directions. Don’t focus on a ‘founder’ or ‘origin’ with science — focus on where it is now and how it’s growing.)
This review also looks quite helpful, and is more recent:
UCRL-JRNL-209353
http://www.osti.gov/energycitations/servlets/purl/15017395-itn0X8/native/
http://www.osti.gov/energycitations/product.biblio.jsp?osti_id=15017395
Detecting and Attributing External Influences on the Climate System: A Review of Recent Advances
T. Barnett, F. Zwiers, G. Hegerl, M. Allen, T. Crowley, N. Gillett, K. Hasselmann, P. Jones, B. Santer, R. Schnur, P. Stott, K. Taylor, S. Tett
February 2, 2005 Journal of Climate
It begins:
“The “International Ad Hoc Detection group” (IDAG) is a group of spe-ci-a-lists* on climate change detection, who have been collaborating on assessing and reducing uncertainties in the detection of climate change since 1995. Early results from the group were contributed to the IPCC Second Assessment Report (SAR; IPCC 1996). Additional results were reported by Barnett et al. (1999) and contributed to the IPCC Third Assessment Report (TAR; IPCC 2001). The weight of evidence that humans have influenced the course of climate during the past century has accumulated rapidly since the inception of the IDAG. While little evidence was reported on a detectable anthropogenic influence on climate in IPCC (1990), a ‘discernible’ human influence was reported in the SAR, and the TAR concluded that ‘most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations’. The evidence has continued to accumulate since the TAR. This paper reviews some of that evidence, and refers to earlier work only where necessary to provide context…..”
—–
* Thanks to the WordPress software, some words must now be hyphenated to be allowed.
pat n says
What you called (Response #367) a nonexistent problem (i.e. the 1920s-1970s up and down in global temperatures) in fact continues to be used by many skeptics to push a falsehood that global warming is just cyclical and non-anthropogenic.
The problem still exists.
1890-2007 graphs of temperatures at U.S. climate stations in the Midwest and Great Plains show spikes in annual temperatures in 1931 and in 1921 which appear well beyond natural variability at many climate stations.
The dust-bowl dry conditions of the early 1930s are well documented as extreme in comparison to all other droughts of record in the U.S. explained by El Nino or other factors.
http://picasaweb.google.com/npatnew/TemperaturesAtUSClimateStations
—
Bryan S says
Gavin, here are my thoughts on the analysis you performed related to the question I posed.
Willis (2004) and Lyman et al. (2006) observe an ocean heat content increase between early 1997 and mid 1998 of around (eyeballing) 18 zettajoules (+/- 20 zettajoules). This is equivalent to a net surface energy flux of around 0.89 W/m2 (gained into the ocean). Now, from your analysis, the model runs suggest that the atmosphere gained approximately 0.2 W/m2 during this same period (due to El Nino), and that the simulated net (upward) heat flux from the ocean was 0.7 W/m2 (cooling the ocean the same amount), leaving a loss of 0.5 W/m2 to space (ocean heat loss minus atmospheric storage).
Now, if we accept for the sake of argument that the Willis (2004) stated heat gain is accurate (it has a significant error bar), this suggests that there is a difference between shortwave reaching the ocean surface and the modeled loss of heat to space, equal to 1.39 W/m2 over this period, contributing to the observed net 0.89 W/m2 gain of heat into the ocean. Now this difference might be attributed to changes in cloud feedback over the tropics, but this is a rather large variance.
The 0.2 W/m2 heating of the atmosphere *directly* due to the 1998 Super El Nino thus contributed an estimated 12.6% (0.2 W/m2 / 1.59 W/m2) to the sum of all the atmospheric processes leading to the actual TOA radiative imbalance. Considering a graph of ocean heat content, it is now apparent to me why one cannot easily correlate OHCA with ENSO, because there is a bunch more going on in the system that is governing the TOA radiative imbalance.
I therefore suggest that this analysis has some important implications. Firstly, even a very large El Nino (1998 event) will not have a dramatic direct effect on the TOA radiative imbalance over annual periods. The larger, indirect effect likely comes through changes in cloud and precipitation feedbacks in the tropics (and these may take time to adjust). Maybe this is a good reason to pay close attention to what Roy Spencer is working on in trying to observe some of these cloud feedbacks more closely.
Based on this thinking, I see no reason not to stick to my guns when suggesting that even over annual spans, the change in ocean heat content is a good proxy (within the limitations of measurement accuracy) for the TOA radiative imbalance, which is due mainly to the sum of the non-equilibrium radiative forcings+feedbacks, following Ellis et al. (1978) and Pielke (2003).
[Response: Be careful here. The numbers I gave were for a simulation that just had an El Nino event – but with no other forcings. Therefore the difference between that and Willis et al is much more than just in the cloud feedbacks. The estimate of the 0.2 W/m2 gain by the atmosphere over that period is probably reasonable, so you could infer a 1.09 W/m2 TOA imbalance – if the short term OHC numbers are correct. However, the Chen et al paper show a clear increase in outward LW in the ERBE data during the 1987/88 El Nino. This might be clearer if you could analyse just the tropical oceans to see if the tropics lost heat that was then taken up in mid to high latitudes. – gavin]
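As a unit-conversion check on the heat-content numbers above (a sketch only: the interval length and the choice of global versus ocean-only area are assumptions, which is why the result brackets rather than reproduces the 0.89 W/m² quoted in the comment):

```python
d_ohc = 18e21                   # J, eyeballed ocean heat content change, early 1997 to mid 1998
dt = 1.5 * 3.156e7              # s, assumed ~1.5-year interval
area_globe = 5.1e14             # m^2, whole Earth
area_ocean = 3.6e14             # m^2, oceans only

print(f"implied flux over the globe:  {d_ohc / dt / area_globe:.2f} W/m^2")   # ~0.75
print(f"implied flux over the oceans: {d_ohc / dt / area_ocean:.2f} W/m^2")   # ~1.05
```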