Gavin Schmidt and Stefan Rahmstorf
John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year’s numbers imply that ‘Global Warming has stopped’ or that it is ‘taking a break’ (Uli Kulke, Die Welt). However, as most of our readers will realise, these comparisons are flawed since they basically compare long term climate change to short term weather variability.
This becomes immediately clear when looking at the following graph:
The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well), while the blue lines are 8-year trend lines – one for each 8-year period of data in the graph. What it shows is exactly what anyone should expect: the trends over such short periods are variable; sometimes small, sometimes large, sometimes negative – depending on which year you start with. The mean of all the 8 year trends is close to the long term trend (0.19ºC/decade), but the standard deviation is almost as large (0.17ºC/decade), implying that a trend would have to be either >0.5ºC/decade or much more negative (< -0.2ºC/decade) for it to obviously fall outside the distribution. Thus comparing short trends has very little power to distinguish between alternate expectations.
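For readers who want to play with this themselves, here is a minimal sketch of the calculation (hypothetical code, using a synthetic series as a stand-in for the GISTEMP annual means; substitute the real anomalies to reproduce the figure):

    import numpy as np

    # Synthetic stand-in for the annual global-mean anomalies (deg C);
    # replace with the actual GISTEMP annual values to reproduce the figure.
    years = np.arange(1975, 2008)
    rng = np.random.default_rng(0)
    temps = 0.019 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

    window = 8
    trends = []
    for i in range(years.size - window + 1):
        # least-squares slope over each 8-year window, converted to deg C/decade
        slope = np.polyfit(years[i:i + window], temps[i:i + window], 1)[0]
        trends.append(10 * slope)

    trends = np.array(trends)
    print(trends.mean(), trends.std())  # the mean sits near the long-term trend; the spread is nearly as large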
So, it should be clear that short term comparisons are misguided, but the reasons why, and what should be done instead, are worth exploring.
The first point to make (and indeed the first point we always make) is that the climate system has enormous amounts of variability on day-to-day, month-to-month, year-to-year and decade-to-decade periods. Much of this variability (once you account for the diurnal cycle and the seasons) is apparently chaotic and unrelated to any external factor – it is the weather. Some aspects of weather are predictable – the location of mid-latitude storms a few days in advance, the progression of an El Niño event a few months in advance etc, but predictability quickly evaporates due to the extreme sensitivity of the weather to the unavoidable uncertainty in the initial conditions. So for most intents and purposes, the weather component can be thought of as random.
If you are interested in the forced component of the climate – and many people are – then you need to assess the size of an expected forced signal relative to the unforced weather ‘noise’. Without this, the significance of any observed change is impossible to determine. The signal to noise ratio is actually very sensitive to exactly what climate record (or ‘metric’) you are looking at, and so whether a signal can be clearly seen will vary enormously across different aspects of the climate.
An obvious example is looking at the temperature anomaly in a single temperature station. The standard deviation in New York City for a monthly mean anomaly is around 2.5ºC, for the annual mean it is around 0.6ºC, while for the global mean anomaly it is around 0.2ºC. So the longer the averaging time-period and the wider the spatial average, the smaller the weather noise and the greater chance to detect any particular signal.
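The effect of the averaging can be seen with a toy calculation (entirely synthetic numbers, assuming uncorrelated monthly noise, so the reduction is the idealised 1/sqrt(12); real station data are autocorrelated and reduce somewhat less):

    import numpy as np

    rng = np.random.default_rng(1)
    # 100 years of synthetic monthly anomalies for a single station, ~2.5 deg C monthly noise
    monthly = rng.normal(0.0, 2.5, (100, 12))
    annual = monthly.mean(axis=1)           # annual means for the same station
    print(monthly.std(), annual.std())      # roughly 2.5 and 2.5/sqrt(12) ~ 0.7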
In the real world, there are other sources of uncertainty which add to the ‘noise’ part of this discussion. First of all there is the uncertainty that any particular climate metric is actually representing what it claims to be. This can be due to sparse sampling or it can relate to the procedure by which the raw data is put together. It can either be random or systematic and there are a couple of good examples of this in the various surface or near-surface temperature records.
Sampling biases are easy to see in the difference between the GISTEMP surface temperature data product (which extrapolates over the Arctic region) and the HADCRUT3v product which assumes that Arctic temperature anomalies don’t extend past the land. These are both defendable choices, but when calculating global mean anomalies in a situation where the Arctic is warming up rapidly, there is an obvious offset between the two records (and indeed GISTEMP has been trending higher). However, the long term trends are very similar.
A more systematic bias is seen in the differences between the RSS and UAH versions of the MSU-LT (lower troposphere) satellite temperature record. Both groups are nominally trying to estimate the same thing from the same data, but because of assumptions and methods used in tying together the different satellites involved, there can be large differences in trends. Given that we only have two examples of this metric, the true systematic uncertainty is clearly larger than simply the difference between them.
What we are really after is how to evaluate our understanding of what’s driving climate change as encapsulated in models of the climate system. Those models though can be as simple as an extrapolated trend, or as complex as a state-of-the-art GCM. Whatever the source of an estimate of what ‘should’ be happening, there are three issues that need to be addressed:
- Firstly, are the drivers changing as we expected? It’s all very well to predict that a pedestrian will likely be knocked over if they step into the path of a truck, but the prediction can only be validated if they actually step off the curb! In the climate case, we need to know how well we estimated forcings (greenhouse gases, volcanic effects, aerosols, solar etc.) in the projections.
- Secondly, what is the uncertainty in that prediction given a particular forcing? For instance, how often is our poor pedestrian saved because the truck manages to swerve out of the way? For temperature changes this is equivalent to the uncertainty in the long-term projected trends. This uncertainty depends on climate sensitivity, the length of time and the size of the unforced variability.
- Thirdly, we need to compare like with like and be careful about what questions are really being asked. This has become easier with the archive of model simulations for the 20th Century (but more about this in a future post).
It’s worthwhile expanding on the third point since it is often the one that trips people up. In model projections, it is now standard practice to do a number of different simulations that have different initial conditions in order to span the range of possible weather states. Any individual simulation will have the same forced climate change, but will have a different realisation of the unforced noise. By averaging over the runs, the noise (which is uncorrelated from one run to another) averages out, and what is left is an estimate of the forced signal and its uncertainty. This is somewhat analogous to the averaging of all the short trends in the figure above, and as there, you can often get a very good estimate of the forced change (or long term mean).
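A toy version of the ensemble argument (hypothetical code, not output from any actual GCM archive): each ‘run’ shares the same forced trend but has its own realisation of the noise, and the spread of individual-run trends is much larger than the error in the ensemble-mean trend.

    import numpy as np

    rng = np.random.default_rng(2)
    years = np.arange(30)
    forced = 0.02 * years                                   # common forced signal, 0.2 deg C/decade
    runs = forced + rng.normal(0.0, 0.1, (20, years.size))  # 20 runs with independent 'weather'

    run_trends = [10 * np.polyfit(years, r, 1)[0] for r in runs]
    ens_trend = 10 * np.polyfit(years, runs.mean(axis=0), 1)[0]
    print(np.mean(run_trends), np.std(run_trends))  # individual runs scatter around 0.2
    print(ens_trend)                                # the ensemble mean recovers ~0.2 much more tightly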
Problems can occur though if the estimate of the forced change is compared directly to the real trend in order to see if they are consistent. You need to remember that the real world consists of both a (potentially) forced trend and a random weather component. This was an issue with the recent Douglass et al paper, where they claimed the observations were outside the mean model tropospheric trend and its uncertainty. They confused the uncertainty in how well we can estimate the forced signal (the mean of all the models) with the distribution of trends+noise.
This might seem confusing, but a dice-throwing analogy might be useful. If you have a bunch of normal dice (‘models’) then the mean point value is 3.5 with a standard deviation of ~1.7. Thus, the mean over 100 throws will have a distribution of 3.5 +/- 0.17 which means you’ll get a pretty good estimate. To assess whether another die is loaded it is not enough to just compare one throw of that die. For instance, if you threw a 5, that is significantly outside the expected value derived from the 100 previous throws, but it is clearly within the expected distribution.
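The analogy is easy to check numerically (a throwaway simulation, nothing more):

    import numpy as np

    rng = np.random.default_rng(3)
    throws = rng.integers(1, 7, 100)       # 100 throws of fair dice ('the models')
    print(throws.mean(), throws.std())     # ~3.5 and ~1.7
    print(throws.std() / np.sqrt(100))     # ~0.17: the uncertainty of the *mean*, not of a single throw
    # A new throw of 5 lies far outside 3.5 +/- 0.17 but well inside the
    # 3.5 +/- 1.7 spread of individual throws - so it says nothing about loading.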
Bringing it back to climate models, there can be strong agreement that 0.2ºC/dec is the expected value for the current forced trend, but comparing the actual trend simply to that number plus or minus the uncertainty in its value is incorrect. This is what is implicitly being done in the figure on Tierney’s post.
If that isn’t the right way to do it, what is a better way? Well, if you start to take longer trends, then the uncertainty in the trend estimate approaches the uncertainty in the expected trend, at which point it becomes meaningful to compare them since the ‘weather’ component has been averaged out. In the global surface temperature record, that happens for trends longer than about 15 years, but for smaller areas with higher noise levels (like Antarctica), the time period can be many decades.
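The same kind of calculation as in the figure above, repeated for different window lengths, shows the spread of trends collapsing as the window grows (again a synthetic stand-in series; the ~15-year figure quoted here comes from the real record, not from this toy):

    import numpy as np

    rng = np.random.default_rng(4)
    years = np.arange(1880, 2008)
    temps = 0.006 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)  # synthetic stand-in

    for window in (8, 15, 30):
        trends = [10 * np.polyfit(years[i:i + window], temps[i:i + window], 1)[0]
                  for i in range(years.size - window + 1)]
        print(window, round(float(np.std(trends)), 3))  # spread of trends shrinks with window length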
Are people going back to the earliest projections and assessing how good they are? Yes. We’ve done so here for Hansen’s 1988 projections, Stefan and colleagues did it for CO2, temperature and sea level projections from IPCC TAR (Rahmstorf et al, 2007), and IPCC themselves did so in Fig 1.1 of AR4 Chapter 1. Each of these analyses shows that the longer term temperature trends are indeed what is expected. Sea level rise, on the other hand, appears to be under-estimated by the models for reasons that are as yet unclear.
Finally, this subject appears to have been raised from the expectation that some short term weather event over the next few years will definitively prove that either anthropogenic global warming is a problem or it isn’t. As the above discussion should have made clear this is not the right question to ask. Instead, the question should be, are there analyses that will be made over the next few years that will improve the evaluation of climate models? There the answer is likely to be yes. There will be better estimates of long term trends in precipitation, cloudiness, winds, storm intensity, ice thickness, glacial retreat, ocean warming etc. We have expectations of what those trends should be, but in many cases the ‘noise’ is still too large for those metrics to be a useful constraint. As time goes on, the noise in ever-longer trends diminishes, and what gets revealed then will determine how well we understand what’s happening.
Update: We are pleased to see such large interest in our post. Several readers asked for additional graphs. Here they are:
– UK Met Office data (instead of GISS data) with 8-year trend lines
– GISS data with 7-year trend lines (instead of 8-year).
– GISS data with 15-year trend lines
These graphs illustrate that the 8-year trends in the UK Met Office data are of course just as noisy as in the GISS data; that 7-year trend lines are of course even noisier than 8-year trend lines; and that things start to stabilise (trends getting statistically robust) when 15-year averaging is used. This illustrates the key point we were trying to make: looking at only 8 years of data is looking primarily at the “noise” of interannual variability rather than at the forced long-term trend. This makes as much sense as analysing the temperature observations from 10-17 April to check whether it really gets warmer during spring.
And here is an update of the comparison of global temperature data with the IPCC TAR projections (Rahmstorf et al., Science 2007) with the 2007 values added in (for caption see that paper). With both data sets the observed long-term trends are still running in the upper half of the range that IPCC projected.
Russell Seitz says
Both 389 and the Pielke essay it critiques make it clear that despite the cost and weight, hard copy distribution of AR4 is vital, because a lot of needless controversy is arising from the difficulty of fast parallel access and the lack of a good online index.
This recalls earlier problems arising from multivolume studies like Scope-ENUWAR and many other large reports – sometimes the pseudo-linear narrative form of online single (or at best double) screen access gets in the way of discovering what lies only a few flippable pages away in the hard copy – if you have it.
The more tedious the read, the better the beach you need to read it on.
Bryan S says
Re: #389 “Figure 11a shows no sharp rise in thermosteric expansion at the onset of the ENSO event. This implies that during an El Nino event, large amounts of heat are redistributed within the ocean, but little heat is lost or gained in the global average.” (Willis et al, 2004)
Thanks Ike for helping support my case to Gavin!
lgl says
#373
Hank
Where is the historic ozone record?
Is there a proxy somewhere?
Rod B says
Martin (388), you’re pardoned. I would suggest being a little less Pavlovian, which I think caused many to attack me for something I didn’t say… maybe.
[edit]
Timothy Chase says
B Buckner (#394) wrote:
Perhaps the biggest thing that I notice with respect to the “well-mixed greenhouse gases” figure c of that diagram is that there is both warming and cooling between 75-100 hPa with respect to the greenhouse gases. To the extent that the warming and cooling are evenly balanced in the lower stratosphere, they will tend to cancel as far as the overall effect is concerned.
Moreover, the temperature change due to WMGHGs includes carbon dioxide, methane and nitrous oxide and would have been from 1880 forward. According to Hansen, carbon dioxide has played less of a role in earlier twentieth century warming, with methane playing more of a role — as methane levels flattened in the later part of the twentieth century. (See for example Hansen et al, Climate Change and Trace Gases, Phil. Trans. R. Soc. A (2007) 365, 1925–1954.)
With respect to ozone vs. WMGHGs, it would be understandable if Gavin were to confine his analysis to the period of satellite observation. Assuming a constant geometric increase in carbon dioxide, the rate of additional forcing due to carbon dioxide would be constant from 1880 forward. The effects of CFCs and ozone depletion would have been after their introduction in the 1930s, and thus would have largely missed the period of early twentieth century warming.
And likewise,
Ramaswamy et al., Anthropogenic and Natural Influences in the Evolution of Lower Stratospheric Cooling, Science 24 February 2006: Vol. 311. no. 5764, pp. 1138 – 1141, DOI: 10.1126/science.1122587
http://www.gfdl.noaa.gov/reference/bibliography/2006/vr0601.pdf
… shows a flattening in recent years and virtually replicates what has been seen in satellite observations of lower stratospheric cooling, despite the fact that they did not include the Quasi-Biennial Oscillation in their calculations. The flattening doesn’t make sense under the assumption that carbon dioxide dominates the lower stratosphere since carbon dioxide has continued to rise geometrically, but it does make sense in terms of ozone depletion since that has flattened. Still, I wish they would split out the different greenhouse gases rather than putting them all into the category of WMGHGs.
Hank Roberts says
lgl Says: Where is the historic ozone record?
Pasting your question into the search box for your convenience:
http://www.google.com/search?q=Where+is+the+historic+ozone+record%3F
Clicking on the most likely result in the first screenful:
http://ozonewatch.gsfc.nasa.gov/
Your second question into the search box for your convenience:
http://www.google.com/search?q=proxy+for+stratospheric+ozone%3F
I’d suggest repeating the searches in Google Scholar, and following the ‘related’ and ‘cited’ information forward in time.
Google has no ‘wisdom’ option as Coby reminds us.
Reading will be necessary.
I’d suggest adding -SEPP to your search to reduce the bogosity level, but such decisions are yours to make.
Are you okay taking it from there?
lgl says
Thanks Hank,
The problem is that 999 of 1000 hits deal with the polar regions, which is not that interesting, and yes I have tried adding things like ‘low latitude’.
But never mind, I will probably find something in a few days :-)
Timothy Chase says
lgl (#402) wrote:
Checking the article Hank gave us in 381, we have been measuring total column ozone since the 1920s:
(Anything in pub med more than six months old is open access, I believe. I found it invaluable in my evo days.)
That would be before the introduction of CFCs in the 1930s, although there should have been some depletion before CFCs due to stratospheric water vapor as the result of increased levels of methane, but I believe that would have been comparatively small.
Hank Roberts says
lgl Says: “… polar region which is not that interesting, and yes I have tried adding things like ‘low latitude’.”
+historic +ozone +record -polar
gets you, for example, this within the first page of hits, just as an example:
Atmos. Chem. Phys. Discuss., 5, 10925–10946, 2005
http://www.atmos-chem-phys.org/acpd/5/10925/
SRef-ID: 1680-7375/acpd/2005-5-10925
Detection and measurement of total ozone from stellar spectra:
Paper 2. Historic data from 1935–1942
Abstract
Atmospheric ozone columns are derived from historic stellar spectra observed between 1935 and 1942 at Mount Wilson Observatory, California. Comparisons with contemporary measurements in the Arosa database show a generally close correspondence. The results of the analysis indicate that astronomy’s archives command considerable potential for investigating the natural levels of ozone and its variability during the decades prior to anthropogenic interference.
http://www.atmos-chem-phys-discuss.net/5/10925/2005/acpd-5-10925-2005.pdf
Using Google,
–“search within” at the bottom of the first page will let you refine.
–The minus sign is your friend.
Google Help : Advanced Search
Once you know the basics of Google search, you might want to try Advanced Search, which offers numerous options for making your searches more precise and
http://www.google.com/help/refinesearch.html
Look at the actual search string created by using Google’s page of fill-in-boxes ‘Advanced Search’ page and you can figure out how to type them in yourself.
I’d expect there’s a description somewhere of how to use Google as a command line tool without all the cute boxes to fill in; anyone got a pointer to that?
Bryan S says
Re #400: “Therefore the difference between that and Willis et al is much more than just in the cloud feedbacks”.
Gavin, you are right, this difference involves the complete scope of the external forcings and other weather processes combined. I think my central point still holds, however, that it would be difficult to change the sign of the TOA imbalance (cause the total heat budget to experience a net loss to space), even with a very strong El Nino. Maybe some modeling experiments need to be done, to see what forced changes in the tropics would allow such a scenario to occur, given the current TOA non-equilibrium external forcings? It certainly seems probable, in any case, that if one considered the entire ENSO cycle, the percentage of the TOA imbalance attributable directly to ENSO would likely approach zero. It also seems to me that if..if..if it turns out that we observe multiple years of very little heat accumulation in the system (or even small net losses to space), or a great deal of gain for that matter, it is important to ascertain whether the models are adequately simulating this variability. I do want to thank you for engaging in this very interesting conversation and wish you the best in your research.
Andrew says
#15: R. Pielke Jr.: “Models of open systems cannot in principle be “validated””
Do you believe this to be generally true? Without qualification, it’s obviously false. I happen to be boiling some water on my stove at the moment. I’m completely confident that I could successfully validate the open-system model for it that I would write down from my undergraduate transport textbook. More complex models of open systems are easy to find in engineering – such as just about any combustion engine, or just about everything in biology.
What did you really mean?
[Response: He’s referring to a piece by Naomi Oreskes. It’s (IMO) a rather semantic distinction, that derives from the lack of absolute proof in the real world. If you define validation as ‘proving true’, then it’s impossible, but if you define it as ‘proving useful’, there is no problem. The latter is what everyone is really doing, whatever the word is that is being used. – gavin]
Steve Bloom says
Re #400 (Bryan S.): “Maybe this is a good reason to pay close attention to what Roy Spencer is working on in trying to observe some of these cloud feedbacks more closely.”
My goodness your biases just pop out all over. There are lots of rather more credible people working on this stuff. Of course it can be safely predicted that you’ll like Spencer’s results better.
Also, regarding ocean heat content, how reliable are the numbers as long as the deep ocean data (or, perhaps more to the point in the short term, data relating to heat transfer to and from the deep oceans) are missing?
Richard Sycamore says
Dr. Schmidt, is this your definition of “validation” – proving useful, as opposed to proving true?
[Response: In some sense I suppose. Useful is always a matter of degree, therefore the binary distinction implied by validated/invalidated is not really relevant. But if a model has been shown to be useful in predicting some aspect of the real world, it could be considered valid. This is all moot though – I hardly ever use the word. – gavin]
David B. Benson says
Andrew (411) [and response] — If people would use the Bayesian terminology of confirmation and disconfirmation rather than the quite confusing (in this context) terms of ‘proof’ and ‘disproof’, much confusion would be avoided.
The Oreskes et al. abstract does use ‘confirmation’, but rather disparagingly, which is unfortunate. After all, if the weight of the evidence gives odds in favor of hypothesis H against hypothesis K of 100:1, which would you bet on?
I would go further than Gavin in complaining about that abstract. It seems to treat V&V as absolute, which is not true in any practice outside of mathematics and certain rather special parts of computer science, AFAIK.
Hank Roberts says
A broker cannot, in principle, be proved honest.
Some brokers can be proved useful.
pat n says
Re #400:
The larger indirect effect of years with strong El Nino conditions on global annual-average temperatures is likely due mainly to increases in low-level moisture in mid N.H. latitudes in winter, and not likely a result of changes in the tropics’ cloud and precipitation feedbacks.
Evidence at:
http://www.mnforsustain.org/climate_snowmelt_dewpoints_minnesota_neuman.htm
Ray Ladbury says
Richard Sycamore, in a scientific context, a useful model or hypothesis is one that yields reliably true predictions. Do you have a better suggestion for how to define “true”? If you are holding out for Truth, might I suggest theology.
Bryan S says
Re #112: Steve Bloom,
If you see your brother standing by the road
With a heavy load from the seeds he’s sowed
And if you see your sister falling by the way
Just stop and say you’re going the wrong way
You got to try a little kindness
Yes show a little kindness
Just shine your light for everyone to see
And if you try a little kindness
Then you’ll overlook the blindness
Of narrow-minded people on the narrow-minded streets
(Glen Campbell, 1969?)
Roger Pielke. Jr. says
nos. 413 and 411
The Oreskes piece that Gavin dismisses as dealing in “semantic distinctions” is a widely cited and highly influential piece from Science that anyone interested in the role of models in decision making should read. There is a large and valuable literature on models in public policy.
Oreskes et al. conclude their paper with the following:
“Finally, we must admit that a model may confirm our biases and support incorrect intuitions. Therefore, models are most useful when they are used to challenge existing formulations, rather than to validate or verify them. Any scientist who is asked to use a model to verify or validate a predetermined result should be suspicious.”
We can ask hard questions of widely accepted beliefs. It is when such questions are unwelcome or summarily dismissed that science suffers. In the end, asking such questions will make our knowledge that much more rigorous, and useful, even if the questions themselves are politically uncomfortable, or challenging to the purveyors of quantitative models.
[Response: There is a world of difference between asking ‘hard questions’ and using appropriate means to answer them. But sure, read the Oreskes piece, it’s interesting. – gavin]
Mark Bahner says
“And of course, it’s well established by now that the main reason for the interruption of the warming is aerosols, notwithstanding that there is still considerable uncertainty regarding the secondary aerosol effect on clouds.”
How can it be “well established,” given these facts:
1) Worldwide emissions of sulfate aerosols continued to rise well past the mid-1970s (to approximately the mid-1990s by most analyses),
2) The rise in temperatures in the last decades has been mainly in the northern hemisphere, even though the concentrations of sulfate aerosols are much higher in the northern hemisphere, and
3) As you yourself note, the IPCC classifies level of scientific understanding of the cloud albedo cooling effect of aerosols as “very low.”
Hank Roberts says
Well, yeah.
Mercury
Lead
Tobacco
Chlorofluorocarbons
Fossil fuel
‘Business as usual until enough damage is proven’ doesn’t work.
Models surprise us. They allow understanding in time to take precautionary policy steps, before the damage builds up.
What’s epidemiology? Modeling.
That’s why the science is attacked so hard by lobbyists.
lucia says
This University of Florida web page has the definition of validation I’m used to:
In contrast, the definition of validation, “a demonstration that a model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model,” refers to its performance (Rykiel, 1996). Validation compares simulated system output with real system observations using data not used in model development.
Defined this way, you can’t separate a judgment about whether or not any code or model is valid from the intended application of the model.
With regard to validating GCMs (since that’s the topic here) one must ask whether the intended application is
a) to obtain qualitative results that guide thinking? Or
b) to predict a metric like the future annual average mean surface temperature for the next 10 years within 1% of the true value with some high degree of confidence? or
c) Something in between.
Obtaining validation for the former (a) is rather easier than the latter. It’s also rather important to make sure that we don’t insist we’ve achieved validations of type (b) simply because we’ve achieved (a). It’s equally bad to decide that a model is utterly invalid for any and all purposes because we haven’t been able to validate to very stringent quantitative standards.
There is valid and there is valid. Forgetting which type of valid we mean results in the logical fallacy of equivocation.
Chris Colose says
Roger Pielke,
based on this thread, I think you are much better off asking questions instead of going on kamikaze missions against gavin, RC, AGW, models, or whatever else is on the list for that day. I am fully aware that the blogosphere loves viewing these arena fights between you, McIntyre, gavin, and the other well known climate blogs out there, but in reality it is not very productive, nor is it educating the people who have a dispassionate consideration of the issues. Maybe if ESPN wants to fund blog matches, then we can discuss, but it would be nice to learn things as well. From my impression of the blogs throughout the internet, the only comments I’ve seen from people without much understanding of the issues are “Oh, gavin got caught lying by Roger” or “Oh, Roger got destroyed by gavin.” Very cute. Maybe these people would like to read some actual science, and not bickering, and no offense, but I do not see gavin doing the bickering and cherry-picking and misrepresenting.
You have the credentials to teach people here, at prometheus, and other places (including me). However, no one is going to take you seriously if you just complain about everything that climate science has to offer, document what you don’t like in your blog, turn off comments, then come over here, and start telling an expert modeller how crappy his profession is after throwing out misrepresentations of what he or Hansen or others said.
I know that scientists have gotten deep into technical research, but maybe we need to revisit Chapter 1 of our grade school books, on the scientific method. The harsh reality is that we don’t have an “extra” experiment earth, and we don’t have a time machine. If models are going to be used, they need to have a certain degree of “usefulness” (e.g. predicting the past, predicting the present). Explanatory and predictive power is actually important here. You can’t “prove” beyond a shadow of a doubt that a model run at 2x CO2 will reproduce exactly what will happen. Even when you get 2x CO2, and observations match models, you can’t “prove” the model was dead on, because the model could have missed two things entirely which offset each other, so there was no real difference.
Richard Sycamore says
#422 That’s the definition I typically go by: “satisfactory range of accuracy”. Incorrect models can be useful. They just aren’t valid in the sense of having satisfactory predictive skill. I hope the GCMs are more than that! I hope there’s more science than art to the way they are tested!
Fair weather cyclist says
The words “strong el nino” appear three times in this thread. Can anyone tell me how the strength of an el nino (and/or la nina) is defined and measured? One paragraph and a citation would be nice, but any and all help gratefully received.
oyvinds says
#420 I am wondering how you can claim that most analyses say that emissions of sulphur dioxide increased from the mid-1970s to 1995.
EPA estimates for the US: 1970: 28 000 tonnes; 1980: 22 000 tonnes; 1990: 21 000 tonnes; 1996: 17 000 tonnes.
Estimates from Europe (EMEP): 1980: 59 000 tonnes; 1990: 42 000 tonnes; 1995: 31 000 tonnes.
The European emission numbers also include around 3 000 tonnes from volcanic and oceanic sources.
It is possible that there has been an increase from 2000 –> present day caused by an increase in Asian emissions.
Fair weather cyclist says
oyvinds, not to mention China and India.
Douglas Wise says
#366.
Fred Staples questions the validity of the logn2 formula for assessing the surface temperature impact of escalating atmospheric CO2 levels and asks for an explanation of the mechanism by which radiation escapes from the top of the atmosphere to space. In a partial response, Ray Pierrehumbert suggests that there are two counter-arguments to the “saturation” claim – the first is “thinning and cooling” at high level and the second relates to the lack of saturation in the wings of the CO2 absorption bands (at any level). He cites the second as being the more important.
Might I ask for clarification?
1) What proportion of total warming does Raypierre attribute to the “more important” wings and has his assessment taken into account the overlapping absorption spectra of CO2 and water vapour in these wings?
2) At high levels of the atmosphere, where water vapour is largely absent, radiation escape to space must presumably depend upon the presence of CO2. Is it not possible, therefore, to speculate that adding extra CO2 from an initial low concentration could initially facilitate rather than interfere with radiation escape to space?
3) Could it not follow, therefore, that the logn2 formula may be valid for the lower layers but not for the increasing CO2 concentrations of the upper layers?
4) An excited CO2 molecule can apparently lose its energy by heating the surrounding air or by emitting a photon. Can the photon emitted ever be of longer wavelength than that absorbed? If so, the longer wavelength photon could readily reach space without further ghg interference if emitted at a level at which water vapour was absent.
5) I understood that heat rises while radiative energy can travel in any direction. However, wouldn’t its net direction of travel be upwards if the layer below is very much more opaque than that above?
6) How does energy high in the atmosphere ever get back down to the surface?
7) Do atmospheric vortex engines, tornadoes etc, which deposit surface energy high in the atmosphere have other than temporary surface cooling effects if the energy so deposited can only exit the system by being translated into radiative energy, courtesy of a finite number of CO2 molecules?
These are questions from a non physicist trying hard to understand.
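[For reference, the logarithmic formula being discussed above is presumably the standard simplified forcing expression (Myhre et al. 1998), which in LaTeX form reads:

    \Delta F \approx 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},
    \qquad \Delta F_{2\times\mathrm{CO_2}} \approx 5.35\,\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}

i.e. each doubling of CO2 adds roughly the same forcing, which is why the response is usually quoted ‘per doubling’.]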
Roger Pielke. Jr. says
Chris (No. 423)- How to respond on blogs when one’s work is misrepresented always requires choices (e.g., let the misrepresentation stand, or perhaps look like a complainer). For example, in your comment you obviously have me confused with someone else, as we’ve never turned our blog comments off. So if you have accurate, substantive critiques, please do share them, and I’ll be happy to engage. But do get your facts right first.
Alastair McDonald says
Re #425 Where “Fair weather cyclist” asks about strong El Ninos. They are described here: http://ggweather.com/enso/oni.htm
Note that eyeballing the charts shows that there is a trend for El Ninos to get stronger and La Ninas to get weaker.
Hank Roberts says
I’ll try, Douglas, using your numbers
1) That’s from the radiation physics references, not Ray’s work.
2) CO2 is well mixed; when added it increases all through the atmosphere, not just at the top.
3) Therefore, no.
4) That’s in the radiation physics references; no, doesn’t follow.
5) Opaque absorbs and re-emits and absorbs again, so, no.
6) Same as on its way in, as radiation, plus getting absorbed.
7) No; moving air rearranges the heat.
Those are amateur answers from reading here, and not meant as more than doggerel, the real answers take advanced coursework and I’m relying on having read what’s written for nonphysicist.
Ray Ladbury says
Douglas Wise–the main thing to do is keep in mind where the energy is coming from, and in the IR, nearly all of it is coming from Earth’s surface or from the lower, warmer layers of the atmosphere. Re 1) the only way adding more CO2 could have zero effect is if there were no photons left in the absorption band of CO2. Remember, water radiates, too, and up to the point where water peters out, most heat is transported via convection, not radiation.
2)Remember the direction of NET energy flux. Since more energy is coming from below, the net effect will be warming.
3)Nope.
4)Absorption and emission are completely symmetric. If a molecule can emit a photon of a given wavelength, it can also absorb it.
5)What do you think happens when the photon encounters an opaque layer? It gets absorbed, right? That means it stays in the system.
6)Mainly backradiation and convection (i.e. the air is warmer when it gets back down to the surface than it would be w/o extra ghgs, so it transports less energy away from the surface)
7)Transporting energy to the upper troposphere increases temperature, and so radiation; it promotes energy loss but the net effect depends on the ghgs above the level where you dump the energy.
Karsten J says
Question: how will it be possible to detect *in time* abrupt climatic changes, if they arise faster than defined by the statistical science?
After all, the definition of climate as a thirty-year average is completely manmade. Nature doesn’t care how we define climate. Records from ice cores suggest big climatic changes can actually occur very fast indeed. Last year’s dramatic melting of the Arctic sea-ice may indicate that we are at a tipping point for the global climate system. The same goes for the recent data about dramatic melting of the Greenland ice sheet and the dramatically accelerating flow in ice streams and correspondingly increasing outflow of icebergs from West Antarctica (NASA).
Fair weather cyclist says
Alastair, thanks for that page: an interesting chart.
It leads me to a worry though. Strong/weak ninos/ninas are frequently invoked to explain short term fluctuations in global mean temperature. Since the definition of strength is based wholly on temperature (admittedly in a defined area) is there an element of circularity or tautology in such statements?
Hank Roberts says
Pielke Sr. turned comments off entirely at his blog.
http://forums.myspace.com/p/3689737/36189919.aspx?fuseaction=forums.viewpost#36189919
Pielke Jr. turned his blog off for a while. He’s back now.
This may help:
http://scienceblogs.com/stoat/2006/07/characterising_pielke_jr.php
“RP Jr seems to find himself frequently mischaracterised, most recently by the AZ Daily Star. But how can this be? With language so precise, what room for misunderstanding could there be? Well….”
Chris Colose says
Dr. Pielke
I did get you confused with RP Sr.’s original blog, sorry. Where has RC misrepresented your work?
This was really not the emphasis of my post. More productive exchange amongst the experts, in places where a lot of “laymen” read, would be encouraging.
Chris Colose says
433 Karsten
The NAS defined abrupt climate change as “…an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause” or for a more policy setting friendly definition, “an abrupt change is one that takes place so rapidly and unexpectedly that human or natural systems have difficulty adapting to it.”
Hope that helps
Mark Bahner says
Comment #426, regarding my comment in #420, asks,
I guess I was mainly going from the IPCC Fourth Assessment Report Figure 10.26, which shows (to my keen eyes ;-)) the following worldwide SO2 emissions in TgS/yr:
1975: 58
1980: 63
1985: 67
1990: 71
1995: 71
2000: 70
Are you saying that the IPCC Figure 10.26 is wrong, and that worldwide sulfur dioxide emissions did not continuing increasing beyond 1975?
Barton Paul Levenson says
Can someone help me with a physical chemistry problem? I’m confused by the contradiction between the physical and the chemical definitions of atmospheric pressure.

Pressure from the whole atmosphere is

P = (M / A) g

where M is the mass of the atmosphere, A the area of the Earth, and g gravity. For Earth, M = 5.136e18 kg, A = 5.1007e14 m2, and g = 9.80665 m s-2 give 98,745 pascals for surface pressure, which is close to the 101,325 Pa of the standard atmosphere. The discrepancy is because the USSA uses sea-level pressure and almost all land is above sea level, thus land takes up a little of the atmosphere’s space. Calculating the other way, the atmosphere should have a mass of 5.27e18 kg. Not much of a difference.

The volume fraction of CO2 is 384 ppm. Using a carbon dioxide molecular weight of 44.0096 AMUs and a dry air figure of 28.9644, I get a mass fraction for CO2 of 0.000583. That means there’s 2.99e15 kg of CO2 floating around up there.

Now, Dalton’s law of partial pressures indicates the CO2 partial pressure should be 0.000384 * 101325 or 38.9 pascals. But when I put 2.99e15 into the equation above for atmospheric pressure, assuming only the carbon dioxide were present, I get 57.5 Pa. Why don’t the two answers match? I assume there’s some simple answer I’m missing here.
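A quick script reproducing the two calculations exactly as stated (just putting them side by side; note that the factor separating them works out to the molecular-weight ratio 44.0096/28.9644 scaled by the 98,745/101,325 difference already mentioned above):

    # Dalton's law: partial pressure follows the *volume* (mole) fraction
    p_dalton = 384e-6 * 101325                    # ~38.9 Pa

    # Pressure from the mass of CO2 alone: mass fraction = volume fraction * (44.0096/28.9644)
    M_atm, A, g = 5.136e18, 5.1007e14, 9.80665
    m_co2 = 384e-6 * (44.0096 / 28.9644) * M_atm  # ~2.99e15 kg
    p_mass = m_co2 * g / A                        # ~57.5 Pa

    print(p_dalton, p_mass, p_mass / p_dalton)    # ratio ~1.48 = (44.0096/28.9644) * (98745/101325)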
Roger Pielke. Jr. says
Chris (no. 437)-
Thanks. I won’t revisit the discussions with Gavin, as they are here and on our blog for anyone who is interested/nothing-better-to-do …
But, as an example of RC’s distasteful tactics, have a look at no. 435 on this thread. I ignore 99.99% of this stuff, but why in the world would RC allow such an off-topic and incorrect comment referring to a specific individual? It is certainly not the first of this genre here.
It is distasteful because: (A) We never turned our blog off, and (B) it provides a link to the well-worn but completely incorrect allegations by former RC participant William Connolley that I am a climate skeptic of some sort (I am not).
As I have said before, enough of this behavior occasionally leads to some responses. I probably should ignore 100%;-) But please join the conversation on our blog if you’d like, all are welcome, especially when we get our interface updated to 2007 software!
[Response: The authors of comments are solely responsible for those comments, just as they are on your blog. Regardless, when asked to point out anywhere where RC has misrepresented your opinions, you punted. That is probably the clearest statement you’ve made on this thread. – gavin]
Rod B says
Douglas (428), this is one of the couple or so areas of climatology that is the basis of my skeptical questioning. Nonetheless, I’ll try to answer, or maybe shed some light on, some of your questions from an unbiased perspective. I need to do this quick before the smarter climatologists/physicists weigh in with their views (and maybe corrections) — though I’m already behind Hank.
1) I can’t answer. The degree of absorption of IR “in the wings” is one of my biggest questions.
2) It does require greenhouse gases like CO2 to accomplish the exiting radiation since other gases radiate very little ala black/graybody type (though some will dispute even that). But it’s not obvious that adding more CO2 will increase significantly the exiting radiation. Re-radiation that heads upward now has a greater chance of being intercepted by another CO2 molecule before leaving the atmosphere. Though not significant, outward radiation might go up a little — I just don’t know.
3) I would think the log(n) (2???) formula would be more likely in the upper atmosphere if the concentration of CO2 increased there, as opposed to maybe linear which is likely the relationship at very low concentrations. I contend log(n) morphs into something else at very high concentrations but this wouldn’t apply in your upper atmosphere case.
4) the photon is emitted at the same frequency it was absorbed (but see below), but this might be purely academic: because of the doppler effect, another molecule seeing the re-emission could easily see a different frequency, which it might not absorb and let pass upward. But then the radiation can still hit a molecule that has no doppler effect. It gets even messier because the initial absorption of the surface radiation can also be affected by doppler.
5) Yes and no. Mathematically the probability of radiating up or down is equal. The radiating molecule has no idea what the concentration is in any direction. If theoretically it’s fully opaque downward, then the radiation will just return to the surface, which a pile of radiation does.
6) Energy high up gets back to the surface predominately through IR radiation.
7) Vortices will put energy “high” in the atmosphere. There it will stay until transferring via collision to GHGs (which can transfer energy from its translation to vibration/rotation — the radiating type) — whenever. The GHG can then radiate either up or down. Since it is starting higher in the atmosphere it could have a higher probability of escaping. I just don’t know if that would be significant, though.
These are answers from a non physicist trying hard to understand…
ps Oops! Damn! Just noticed Ray beat me, too.
Douglas Wise says
Thanks Hank and Ray Ladbury for your responses to my post #428.
Unfortunately, your explanations appear to contradict my understanding of Ray Pierrehumbert’s assertion that a beam of 15 micron IR directed upwards could be back radiated but could not penetrate the currently saturated layer of lower atmosphere. (Probably a fault in my logic circuits). As one can identify 15 micron IR from space, I had assumed it must therefore have been generated above the saturated layer by interaction between CO2 (no water vapour up there) and non-radiative energy taken up by convection. This led me to the thought that CO2 acts as a key in the upper atmosphere that unlocks the door to space and lets energy out (a sort of IR radiation synthesiser). I accept, of course, that its primary role, particularly at lower levels, is that of IR blocker or downward reflector. At higher levels, saturation below would prevent downward reflection reaching the surface.
Ray Ladbury speaks of symmetry with respect to absorption and emission. He states that, if a molecule can absorb at a particular wavelength, it can also emit at the same wavelength. Maybe it can, but can it also emit at longer wavelengths if some of the energy has been lost in prior collisions?
Finally, if CO2 is having significant effects at high levels of the atmosphere, one should expect that the effective radiating level of 15 micron IR would be rising (brightness temperature falling). Is this occurring? I have asked this before but no-one has answered. It would, to me, be compelling direct experimental evidence to put before the sceptics.
Roger Pielke. Jr. says
My goodness, here we go again. OK, Gavin, now I’ll respond to your provocations. You write, “when asked to point out anywhere where RC has misrepresented your opinions, you punted. That is probably the clearest statement you’ve made on this thread.”
Your mischaracterization of my work starts with the very first sentence in this post: “John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007.”
I can’t speak for John Tierney, but I never made any effort to validate or falsify IPCC projections over 2000-2007. Instead I summarized the IPCC predictions along with data, making clear that no significance could be drawn at this time. In fact, I wrote, “I assume that many climate scientists will say that there is no significance to what has happened since 2000, and perhaps emphasize that predictions of global temperature are more certain in the longer term than shorter term.” Funny how that quote didn’t make it into this post when characterizing my efforts as “misguided.”
I began with the IPCC AR4 as a first step in working back to IPCC 1990, where something could be said with a bit more certainty.
So with this post you created a strawman argument to knock down. And rather than simply letting me have my say and moving on, you continue to provoke.
[Response: Because you continue to misread what is said. First off this post is not just about you. Nowhere did I say you personally were ‘misguided’ – I said specifically, that “short term comparisons” were – a position I wholeheartedly stand by. And my only characterisation of the Tierney post (your post was not linked) was that validation/verification/falsification was ‘discussed’ – which it most certainly was. Tierney specifically asked for “any thoughts on how to interpret the results on Dr. Pielke’s graph”. I gave mine in the comment thread – which included concluding that it was misleading for the reasons stated above. You can either disagree or agree with my criticism. But it is not, and was not, a personal attack on you – despite what you appear to have concluded. However, I am not going to let your responses here and elsewhere, which have unquestionably misrepresented my statements, just go without comment. Feel free to move on. – gavin]
Ray Ladbury says
Douglas, you’re assuming that the only way energy gets transported is via radiation. We have a whole atmosphere that transports energy along with it in its motions. Also, if you’ve got a gas in the 200-300 K temperature range, it will emit in the IR if it can. You have IR photons emitted in all directions at all levels of the atmosphere. It’s what happens to them that varies. And when you add more CO2, fewer escape. Yes, the fact that CO2 can emit in the IR means that it will emit some photons high in the atmosphere that will escape. However, the fact that you have a higher flux from the warmer regions below means that NET more photons get absorbed than get emitted. After all, think what would happen if the CO2 were not in the atmosphere. IR in that wavelength range would simply escape unimpeded through the atmosphere.
WRT emission and absorption, remember that harmonic oscillator energy levels are quantized. If an excited molecule is going to lose its excitation, it loses all of it. Conceivably, you might distort the molecule slightly via collision, etc. during relaxation, but that energy would go into kinetic energy imparted during the collision. The molecule is either excited or it is not. No in-between energy states.
Your question about what is happening to emission in the CO2 band could have been answered definitively by the DISCOVR satellite. Instead, DISCOVR sits in mothballs–fully built and ready for launch–somewhere here in Greenbelt, MD. Somebody in Congress decided they didn’t want to know the answer.
Rod B says
Douglas, one partial insight. I assume it’s theoretically possible for a molecule to absorb energy into a higher vibrational level, lose some of that to translation, end up at a lower vibrational energy level, then radiate that out… at a longer wavelength. The propensity and probability of this I have to leave for more knowledgeable folks.
I also might have misconstrued “opaque” from your earlier post and my subsequent answer. I need to revisit it. But as it ties in with #443, I don’t understand the 1st para. If back radiated re-emission is running into a layer of radiation saturation (at that freq), it won’t be re-absorbed but continue to the surface. Radiation at 15 microns escaping the atmosphere is certainly from CO2 emitting radiation outward in a low enough concentration that it won’t hit another CO2 molecule on the way. The CO2 molecule could have gotten the energy initially from re-emission of other CO2 or from collision exchange with non-GHGs.
Your statement, “….that its primary role, particularly at lower levels, is that of IR blocker or downward reflector…” I think would more properly read: “…primary role…IR blocker or absorber, then re-emitting either downward or upward, or losing it through collision…”.
But I’m not sure I got your question correctly.
Hank Roberts says
Dr. Pielke, my apology for the wording; I was thinking of your ‘Spring Break’ in March 2007 — at the time it was a noticeable pause, but in hindsight a minor one in the usually daily weblogging.
I’m just another reader, and I do urge you to be less, well, political; you’ve said you’re not a climatologist, but you seem easily upset by climatology, and all the rest of us see is the upset sometimes, not the reasoning.
Douglas – not ‘reflector’ — greenhouse gases absorb photons, their energy (vibration, rotation, speed) increases; they transfer energy by collisions, and they emit photons. Can’t help beyond that note on terms.
Alastair McDonald says
Re #436 Where Fair weather cyclist writes:
“Strong/weak ninos/ninas are frequently invoked to explain short term fluctuations in global mean temperature. Since the definition of strength is based wholly on temperature (admittedly in a defined area), is there an element of circularity or tautology in such statements?”
As you note, the definition of an El Nino is based on local sea surface temperatures in the Pacific. When one occurs, global temperatures rise. In fact it has been argued by sceptics that global warming has stopped because the maximum recorded temperature was in 1998. That was the year of the last strong El Nino. When we get the next strong El Nino it is most likely that we will have yet another record in global temperatures.
If that is a tautology then it is true.
Since strong El Ninos seem to happen roughly every sixteen years, then we can expect the next one within six years. The result could be disruption to global agriculture with a possibility of famine.
Chris Colose says
Douglas,
By making the lower levels more opaque to infrared, you then move the “effective radiating level” to lower pressures (higher altitudes), and that will produce a gradient such that all levels below P(erl) will warm insofar as T(s) > T(erl), i.e., you have a lapse rate. raypierre’s article emphasizes the importance of spectral dependence, i.e., the spectrum of upwelling and emitted radiation. When you add CO2 to the atmosphere, you increase both absorption and emission (and the stratosphere cools because the increase of emission outweighs the increase of absorption), but in the much “larger” troposphere, there is a lot more of the atmosphere above you to radiate downward. In that sense, you do get radiation emitted to space, but overall more emission downward as the planet moves closer to equilibrium.
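Schematically (the standard single-column, fixed-lapse-rate sketch, not any particular model): with an effective radiating level at height z_e and lapse rate Gamma,

    \mathrm{OLR} = \sigma T_{e}^{4}, \qquad T_{s} \approx T_{e} + \Gamma\, z_{e}

so if added CO2 pushes z_e upward while T_e stays pinned by the absorbed solar flux, the surface temperature T_s has to rise.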
Rod B says
Douglas, re Ray’s 445: I agree with his first paragraph. I disagree (to an academic extent) with his 2nd paragraph. Molecules have many vibrational energy levels, not just on and off. But, as I implied earlier, I’m not knowledgeable of their propensity or capability to, say, absorb into and only into level 5 (with 1-4 empty) and then lose some to kinetic and drop to, say, level 3, leaving 1, 2, 4, 5 empty, then emitting from level 3. Maybe it can’t do dat, which would make my assertion moot. (Plus I’m a little uncomfortable saying Ray is incorrect when it comes to radiation… )