Gavin Schmidt and Stefan Rahmstorf
John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year’s numbers imply that ‘Global Warming has stopped’ or that it is ‘taking a break’ (Uli Kulke, Die Welt). However, as most of our readers will realise, these comparisons are flawed since they basically compare long term climate change to short term weather variability.
This becomes immediately clear when looking at the following graph:
The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well), while the blue lines are 8-year trend lines – one for each 8-year period of data in the graph. What it shows is exactly what anyone should expect: the trends over such short periods are variable; sometimes small, sometimes large, sometimes negative – depending on which year you start with. The mean of all the 8-year trends is close to the long term trend (0.19ºC/decade), but the standard deviation is almost as large (0.17ºC/decade), implying that a trend would have to be either >0.5ºC/decade or much more negative (< -0.2ºC/decade) for it to obviously fall outside the distribution. Thus comparing short trends has very little power to distinguish between alternate expectations.
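For readers who want to reproduce this, here is a minimal sketch in Python. The file name and column layout are placeholders; any annual global-mean anomaly series will do.

```python
# Sketch: distribution of 8-year trends in an annual global-mean anomaly series.
# "annual_anomaly.txt" is a placeholder file: two columns, year and anomaly (deg C).
import numpy as np

data = np.loadtxt("annual_anomaly.txt")
years, temp = data[:, 0], data[:, 1]

window = 8
trends = []
for i in range(len(years) - window + 1):
    slope = np.polyfit(years[i:i + window], temp[i:i + window], 1)[0]
    trends.append(10 * slope)                 # convert deg C/yr to deg C/decade

trends = np.array(trends)
print(f"mean 8-yr trend: {trends.mean():.2f} C/decade")
print(f"std dev of 8-yr trends: {trends.std(ddof=1):.2f} C/decade")
```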
So, it should be clear that short term comparisons are misguided, but the reasons why, and what should be done instead, are worth exploring.
The first point to make (and indeed the first point we always make) is that the climate system has enormous amounts of variability on day-to-day, month-to-month, year-to-year and decade-to-decade periods. Much of this variability (once you account for the diurnal cycle and the seasons) is apparently chaotic and unrelated to any external factor – it is the weather. Some aspects of weather are predictable – the location of mid-latitude storms a few days in advance, the progression of an El Niño event a few months in advance etc, but predictability quickly evaporates due to the extreme sensitivity of the weather to the unavoidable uncertainty in the initial conditions. So for most intents and purposes, the weather component can be thought of as random.
If you are interested in the forced component of the climate – and many people are – then you need to assess the size of an expected forced signal relative to the unforced weather ‘noise’. Without this, the significance of any observed change is impossible to determine. The signal to noise ratio is actually very sensitive to exactly what climate record (or ‘metric’) you are looking at, and so whether a signal can be clearly seen will vary enormously across different aspects of the climate.
An obvious example is looking at the temperature anomaly in a single temperature station. The standard deviation in New York City for a monthly mean anomaly is around 2.5ºC, for the annual mean it is around 0.6ºC, while for the global mean anomaly it is around 0.2ºC. So the longer the averaging time-period and the wider the spatial average, the smaller the weather noise and the greater chance to detect any particular signal.
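The arithmetic behind this is just the familiar shrinking of noise under averaging. A toy sketch (it treats months as independent, which real weather is not, so the numbers are only indicative of the effect):

```python
# Toy illustration: averaging reduces the spread of the weather noise.
import numpy as np

rng = np.random.default_rng(0)
monthly_sd = 2.5                                  # deg C, single-station monthly anomaly (assumed)
monthly = rng.normal(0.0, monthly_sd, size=(10000, 12))

annual = monthly.mean(axis=1)                     # 12-month averages
print(f"monthly spread: {monthly.std():.2f} C")
print(f"annual-mean spread: {annual.std():.2f} C")  # ~ 2.5 / sqrt(12) ~ 0.7 C
```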
In the real world, there are other sources of uncertainty which add to the ‘noise’ part of this discussion. First of all there is the uncertainty that any particular climate metric is actually representing what it claims to be. This can be due to sparse sampling or it can relate to the procedure by which the raw data is put together. It can either be random or systematic and there are a couple of good examples of this in the various surface or near-surface temperature records.
Sampling biases are easy to see in the difference between the GISTEMP surface temperature data product (which extrapolates over the Arctic region) and the HADCRUT3v product which assumes that Arctic temperature anomalies don’t extend past the land. These are both defendable choices, but when calculating global mean anomalies in a situation where the Arctic is warming up rapidly, there is an obvious offset between the two records (and indeed GISTEMP has been trending higher). However, the long term trends are very similar.
A more systematic bias is seen in the differences between the RSS and UAH versions of the MSU-LT (lower troposphere) satellite temperature record. Both groups are nominally trying to estimate the same thing from the same data, but because of assumptions and methods used in tying together the different satellites involved, there can be large differences in trends. Given that we only have two examples of this metric, the true systematic uncertainty is clearly larger than simply the difference between them.
What we are really after is how to evaluate our understanding of what’s driving climate change as encapsulated in models of the climate system. Those models though can be as simple as an extrapolated trend, or as complex as a state-of-the-art GCM. Whatever the source of an estimate of what ‘should’ be happening, there are three issues that need to be addressed:
- Firstly, are the drivers changing as we expected? It’s all very well to predict that a pedestrian will likely be knocked over if they step into the path of a truck, but the prediction can only be validated if they actually step off the curb! In the climate case, we need to know how well we estimated forcings (greenhouse gases, volcanic effects, aerosols, solar etc.) in the projections.
- Secondly, what is the uncertainty in that prediction given a particular forcing? For instance, how often is our poor pedestrian saved because the truck manages to swerve out of the way? For temperature changes this is equivalent to the uncertainty in the long-term projected trends. This uncertainty depends on climate sensitivity, the length of time and the size of the unforced variability.
- Thirdly, we need to compare like with like and be careful about what questions are really being asked. This has become easier with the archive of model simulations for the 20th Century (but more about this in a future post).
It’s worthwhile expanding on the third point since it is often the one that trips people up. In model projections, it is now standard practice to do a number of different simulations that have different initial conditions in order to span the range of possible weather states. Any individual simulation will have the same forced climate change, but will have a different realisation of the unforced noise. By averaging over the runs, the noise (which is uncorrelated from one run to another) averages out, and what is left is an estimate of the forced signal and its uncertainty. This is somewhat analogous to the averaging of all the short trends in the figure above, and as there, you can often get a very good estimate of the forced change (or long term mean).
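A schematic of what that ensemble averaging does (the numbers below are invented for illustration; this is not output from any actual GCM):

```python
# Each simulated "run" shares the same forced trend but has its own weather noise;
# averaging across runs leaves an estimate of the forced signal.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2030)
forced = 0.02 * (years - years[0])                           # assumed 0.2 C/decade forced trend
runs = forced + rng.normal(0.0, 0.1, size=(20, years.size))  # 20 runs with independent noise

one_run_trend = 10 * np.polyfit(years, runs[0], 1)[0]
mean_trend = 10 * np.polyfit(years, runs.mean(axis=0), 1)[0]
print(f"single run: {one_run_trend:.2f} C/decade; ensemble mean: {mean_trend:.2f} C/decade")
```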
Problems can occur though if the estimate of the forced change is compared directly to the real trend in order to see if they are consistent. You need to remember that the real world consists of both a (potentially) forced trend but also a random weather component. This was an issue with the recent Douglass et al paper, where they claimed the observations were outside the mean model tropospheric trend and its uncertainty. They confused the uncertainty in how well we can estimate the forced signal (the mean of all the models) with the distribution of trends+noise.
This might seem confusing, but a dice-throwing analogy might be useful. If you have a bunch of normal dice (‘models’) then the mean point value is 3.5 with a standard deviation of ~1.7. Thus, the mean over 100 throws will have a distribution of 3.5 +/- 0.17 which means you’ll get a pretty good estimate. To assess whether another die is loaded it is not enough to just compare one throw of that die. For instance, if you threw a 5, that is significantly outside the expected value derived from the 100 previous throws, but it is clearly within the expected distribution.
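The analogy is easy to check numerically (a quick sketch):

```python
# The mean of 100 throws is tightly constrained; a single throw is not.
import numpy as np

rng = np.random.default_rng(2)
means = rng.integers(1, 7, size=(100000, 100)).mean(axis=1)
print(f"mean of 100 throws: {means.mean():.2f} +/- {means.std():.2f}")    # ~3.50 +/- 0.17

single = rng.integers(1, 7, size=100000)
print(f"single throw:       {single.mean():.2f} +/- {single.std():.2f}")  # ~3.50 +/- 1.71
```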
Bringing it back to climate models, there can be strong agreement that 0.2ºC/dec is the expected value for the current forced trend, but comparing the actual trend simply to that number plus or minus the uncertainty in its value is incorrect. This is what is implicitly being done in the figure on Tierney’s post.
If that isn’t the right way to do it, what is a better way? Well, if you start to take longer trends, then the uncertainty in the trend estimate approaches the uncertainty in the expected trend, at which point it becomes meaningful to compare them since the ‘weather’ component has been averaged out. In the global surface temperature record, that happens for trends longer than about 15 years, but for smaller areas with higher noise levels (like Antarctica), the time period can be many decades.
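A rough way to see why ~15 years is the threshold for global surface temperature is to ask how the weather-induced spread of a fitted trend shrinks as the record lengthens. A pure white-noise sketch (real interannual variability is somewhat autocorrelated, so the true threshold is a little longer than this suggests):

```python
# Spread of fitted trends due to interannual noise, as a function of record length.
import numpy as np

rng = np.random.default_rng(3)
noise_sd = 0.1                                     # deg C of interannual noise, assumed
for n_years in (8, 15, 30):
    years = np.arange(n_years)
    slopes = [10 * np.polyfit(years, rng.normal(0, noise_sd, n_years), 1)[0]
              for _ in range(5000)]
    print(f"{n_years:2d}-yr record: trend noise +/- {np.std(slopes):.2f} C/decade")
```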
Are people going back to the earliest projections and assessing how good they are? Yes. We’ve done so here for Hansen’s 1988 projections, Stefan and colleagues did it for CO2, temperature and sea level projections from IPCC TAR (Rahmstorf et al, 2007), and IPCC themselves did so in Fig 1.1 of AR4 Chapter 1. Each of these analyses show that the longer term temperature trends are indeed what is expected. Sea level rise, on the other hand, appears to be under-estimated by the models for reasons that are as yet unclear.
Finally, this subject appears to have been raised from the expectation that some short term weather event over the next few years will definitively prove that either anthropogenic global warming is a problem or it isn’t. As the above discussion should have made clear this is not the right question to ask. Instead, the question should be, are there analyses that will be made over the next few years that will improve the evaluation of climate models? There the answer is likely to be yes. There will be better estimates of long term trends in precipitation, cloudiness, winds, storm intensity, ice thickness, glacial retreat, ocean warming etc. We have expectations of what those trends should be, but in many cases the ‘noise’ is still too large for those metrics to be a useful constraint. As time goes on, the noise in ever-longer trends diminishes, and what gets revealed then will determine how well we understand what’s happening.
Update: We are pleased to see such large interest in our post. Several readers asked for additional graphs. Here they are:
– UK Met Office data (instead of GISS data) with 8-year trend lines
– GISS data with 7-year trend lines (instead of 8-year).
– GISS data with 15-year trend lines
These graphs illustrate that the 8-year trends in the UK Met Office data are of course just as noisy as in the GISS data; that 7-year trend lines are of course even noisier than 8-year trend lines; and that things start to stabilise (trends getting statistically robust) when 15-year averaging is used. This illustrates the key point we were trying to make: looking at only 8 years of data is looking primarily at the “noise” of interannual variability rather than at the forced long-term trend. This makes as much sense as analysing the temperature observations from 10-17 April to check whether it really gets warmer during spring.
And here is an update of the comparison of global temperature data with the IPCC TAR projections (Rahmstorf et al., Science 2007) with the 2007 values added in (for caption see that paper). With both data sets the observed long-term trends are still running in the upper half of the range that IPCC projected.
Werner Wintels says
1. Any time series showing less than a decade of data is not going to show trends that can be considered climatological, be it for temperature or any other variable. Anyone using such a graph to make claims about climate trends, including to verify GCM predictions, is making a misleading argument. The graph shown in the Tierney/Pielke blog is a good example of misleading data representation. If Pielke’s intent in showing the chart was to demonstrate, through an example of bad analysis, that one cannot make statements about temperature trends this way, more power to him. He would have made his point much clearer by including a chart like Gavin’s above.
Gavin’s 30-year graph is a better summary of significant trends in global temperatures than the Pielke graph. Most readers will be able to discern from Gavin’s graph that the year-to-year variability is too high to draw any real conclusions about temperature from a time series of less than 10 years. Anyone who has ever analyzed a graph can clearly see this.
2. GCM’s are not designed to simulate what will happen 7 years from now. They are designed to tell us what the average conditions will be like on timescales of 10-100 years. One needs to display at least 20-30 years of data to draw statistically significant conclusions about a climate trend and whether a model has been useful in predicting it.
A “spot check” can still be done for the 2001-2007 climate forecast, though: a) you could compare the predicted average temperature over the 7-year forecast with the actual mean temperature over that 7-year period; if one wants to look for trends in the data, one could compare that 7-year mean to the mean of the seven years before 2001. b) one could test for robustness and plot a time series of the mean temperature for 2001-2002, 2001-2003, 2001-2004 and compare it to predicted temperatures; c) you could see whether the highly variable real temperature calculated above converges to the smooth predicted temperature curve in time.
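A sketch of the check in (b), using placeholder numbers rather than real data (both the anomalies and the projection below are invented for illustration):

```python
# Cumulative means of an observed series starting in 2001, compared to a
# hypothetical projected mean over the same period.
import numpy as np

obs = {2001: 0.54, 2002: 0.63, 2003: 0.62, 2004: 0.54,   # placeholder anomalies (deg C)
       2005: 0.68, 2006: 0.61, 2007: 0.62}
projected = 0.60                                          # placeholder projection (deg C)

years = sorted(obs)
for end in years[1:]:
    mean = np.mean([obs[y] for y in years if y <= end])
    print(f"2001-{end}: observed mean {mean:.2f} C vs projected {projected:.2f} C")
```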
3. An honest appraisal of any time series of global surface temperatures ending in 2007 or 2008 will also note that we are currently experiencing a La Nina in the equatorial Pacific and have been since about October. It is predicted to last until May or June. This means that 2/3 of the equatorial Pacific is 1 to 3 degrees cooler than normal, or 2 to 6 degrees C cooler than during an El Nino. Because the equatorial Pacific is such a huge chunk of the Earth (check any globe to see how huge), it will almost certainly make global temperatures for 2007 and 2008 cooler than most previous years (even if all Arctic sea ice disappears). It will be interesting to see if this Cold phase of the ENSO cycle will be associated with cooler or warmer global temperature than past La Nina events.
P.S. Please see following websites for La Nina data and predictions:
http://www.cpc.noaa.gov/products/analysis_monitoring/lanina/
http://iri.columbia.edu/climate/ENSO/currentinfo/QuickLook.html
Ike Solem says
RE#249,
“General Question about temperature Measurements:
First: If global measured temps fall in a given year, do we believe that the atmosphere actually lost heat over that time period? Or is some variation caused by measurement error? (ie, the heat is still around, just not where we happen to measure it).”
One of the main variables there is the latent heat – the energy stored in the atmosphere due to the evaporation of water. A cubic meter of dry air at a given temperature will hold less energy than a cubic meter of air that is saturated with water vapor. You can sometimes feel this energy being released right before a rainstorm, as water vapor condenses to water droplets and releases heat (or as water droplets freeze to ice crystals).
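To put a rough number on that latent-heat store, a back-of-the-envelope sketch using the Tetens approximation (the values are approximate):

```python
# Latent energy carried by the water vapour in one cubic metre of saturated air.
import math

T = 25.0                      # deg C, assumed air temperature
L = 2.45e6                    # J/kg, latent heat of vaporisation
Rv = 461.5                    # J/(kg K), gas constant for water vapour

e_sat = 610.78 * math.exp(17.27 * T / (T + 237.3))   # saturation vapour pressure, Pa (Tetens)
rho_v = e_sat / (Rv * (T + 273.15))                   # vapour density, kg/m^3
print(f"latent energy: {L * rho_v / 1000:.0f} kJ per m^3 of saturated air at {T:.0f} C")
# Dry air at the same temperature carries none of this latent energy.
```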
That’s a route to warm the polar regions – warmer sea surface temperatures at the equator lead to more water vapor in the atmosphere, and when that vapor condenses back to ice and water in polar regions, heat is released.
Water vapor also plays a direct role in global warming, since it absorbs infrared radiation and acts as an atmospheric blanket. For more, see WATER VAPOR FEEDBACK AND GLOBAL WARMING, Held & Soden 2000
Regarding the temperature measurements, try Top 11 Warmest Years On Record Have All Been In Last 13 Years, Dec 2007
Timothy Chase says
Natural Variability
La Nina Still Going in January 2008
http://earthobservatory.nasa.gov/Newsroom/NewImages/images.php3?img_id=17894
Taken January 14, 2008
Includes a kml file…
The Dawn of a New Solar Cycle
http://earthobservatory.nasa.gov/Newsroom/NewImages/images.php3?img_id=17895
Cycle 24
Includes two QuickTime movies from Jan 1 to Jan 14 2008
Putting things into context…
Discussion of 2007 GISS global temperature analysis is posted at Solar and Southern Oscillations: what it means for us to tie for second with a year (1998) that had a strong El Nino when this year we have a La Nina and a cool solar year.
GISS 2007 Temperature Analysis
http://www.columbia.edu/~jeh1/mailings/20080114_GISTEMP.pdf
Fred Staples says
I am not a denialist, Ray, (198), but I do think the level of certainty expressed on this site is unjustifiable.
There is always room for scepticism over scientific hypotheses. Do you remember Wigner energy? That minor oversight in the Physics of nuclear reactors (probably the best researched theory ever) might have caused the evacuation of the Lake District and the end of nuclear power station development. For that matter, do you remember Arrhenius’s views on CO2 concentrations (without feedbacks) as an explanation of the Ice ages?
I am sure you are familiar with the spectroscopy at, for example:
http://www.iitap.iastate.edu/gccourse/forcing/images/image7.gif
The dominance of H2O, per molecule, in the infra-red absorption region is obvious, and the concentrations of CO2 and H2O are about 380 ppm and 2000 ppm respectively. I am not saying that the CO2 perturbation models are wrong – just that they need a good deal of unqualified experimental verification before they are unequivocally accepted.
Finally, what is the most politically influential metric predicted by the models? Right or wrong it is surely the global average temperatures, and the most influential prediction is Hansen’s in 1988, because it has had time to be tested. His A, B, and C predictions diverged after year 2000, and as far as CO2 emissions are concerned we are firmly on the B line.
From 2000, his B line predicted temperature increase (fig 3, 5 year running mean) is 0.4 degrees C in 10 years and 0.6 degrees in 20 years.
My flat line regression ( 161, 170 ) from 1997 gives a (not significant) increase of 0.065 degrees in 10 years. The standard error on that slope is 0.055 degrees per decade, very high precisely because (251) the time interval is short and the data variable. However, we can use the standard error to ask for the odds against the real slope being as high as the Hansen prediction.
At plus 3 standard errors (minus is equally likely), the slope would be 0.23 degrees per decade. In other words the odds are far more than 1000 to 1 against the Hansen increase being the real world figure.
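The arithmetic in that comparison, written out (a sketch only; it takes the slope and standard error quoted above at face value and assumes uncorrelated residuals, which short, noisy records generally violate, so the resulting odds should be treated with caution):

```python
# How many standard errors separate the fitted slope from the projected trend?
import math

slope, se = 0.065, 0.055     # deg C/decade: fitted slope and standard error from the comment
projection = 0.40            # deg C/decade: the Hansen scenario-B figure quoted above

z = (projection - slope) / se
p = 0.5 * math.erfc(z / math.sqrt(2))     # one-sided normal tail probability
print(f"projection lies {z:.1f} standard errors above the fit (one-sided p ~ {p:.1e})")
```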
Time will resolve this – over the years to 2030 the CO2 signal (over and above aerosols, solar influences and random “noise”, as Hansen says) will either appear or it won’t.
Can we really say that we are certain of the CO2 influence today?
AZ says
RE: 248 Gavin Schmidt-vs. Roger Pielke
I don’t think it’s fair to denigrate Dr. Pielke’s research on this basis. I’m not an expert on these matters by any stretch. But I happen to be reading “Storm World” at the recommendation of this site and although it centers on the controversy surrounding the global warming-hurricane connection, it gives plenty of insight into the different scientific camps on the various sides of the global warming issue. Correct me if I’m wrong but Dr. Pielke seems clearly identified with the empiricist school of thought whereas Dr. Schmidt falls within the climatologist/climate modelling camp. Although denialists have found it politically expedient to identify with the empiricists, in fairness their research is nonetheless valid and useful and should not immediately be discounted as ideologically driven. Is that a fair assessment?
Gaelan Clark says
Thank you to Walt, Jim, and Chris for your replies; they are most useful.
Also, #251, your point and explanation are very enlightening to a mind that has not quite wrapped itself around these perplexing concepts–thank you very much.
Ray, I would be more than happy to share my background with you-History, Political Science at USF–entered into a number of small business ventures–now into sustainable oil palm plantations in Mexico.
I am grateful to you, indeed everyone else who has as well, for your willing help in providing a mass of research info for me to start my knowledge quest.
—The last few comments have been especially helpful.
Gaelan Clark says
#247–I just looked at your link, and one thing strikes me as peculiar–Land Use seems to have a net cooling effect. How could this possibly be true?
You take a field, bulldoze it–taking up all of the existing vegetation–and then put a parking lot on top of it—you have stopped all of the natural evapotranspiration and are now retaining heat from the surface that you have just laid—–I know this is oversimplification, but aren’t the basic precepts correct in what I have just laid out?—If so, how does one justify the graph from the IPCC??
—Ray–fantastic link to MIT, I have downloaded a few classes already!!!
[Response: You’ve forgotten about albedo. – mike]
Lynn Vincentnathan says
RE #255 & “empiricists camp” v. “modelling camp”
As an anthropologist teaching Expressive Culture this semester, I have to point out (as I did to my students yesterday) that no one sees or experiences the real reality, except through cultural-tinted glasses. It’s all models. Every word is a symbol. When we take the “temperature” with a thermometer, it’s a model. But my prof some 30 yrs ago assured us students that there is a real reality, after we got a bit worried about the issue. And I passed that assurance on to my students yesterday.
I guess a good test of whether our models are serviceable models of that real reality (as it pertains to important aspects of our lives) is whether or not they can predict the future to some extent. And we’ve come a long way since crystal balls and soothsayers.
Reality — that thing we can’t really know except through our cultural/model lenses — seems to have a way of biting back, letting us know it really is there.
I’ll stick with the analyses of those using science-based models over the charlatan soothsayers and snake oil salespersons.
Ray Ladbury says
Fred Staples asks: “Can we really say that we are certain of the CO2 influence today?”
I’m glad you asked that question Fred, as the answer is an unequivocal “YES”. The reason is in part because the data support CO2 as the cause of the warming–both the qualitative aspects and the quantitative results. With a 10 year trend, you are likely to fall victim to noise, and your analysis is ignoring even the known sources of noise–ENSO and volcanic eruptions. Fred, I will readily concede that there are a lot of things we don’t know about climate. The effects of adding CO2 don’t fall into this class. If you believe CO2 contributes to the greenhouse effect at 280 ppmv, then there is no reason to assume its effects will cease at 560 ppmv. And if it is the greenhouse effect you are questioning, then throw out all of climate science. And given the remarkable success climate scientists have had, there’s no reason to assume the models are dramatically wrong.
Look, I realize you’re having fun doing fits to data, but I really think your time would be better spent teaching yourself a little bit about the physics of climate–e.g. that it is not just a matter of the amount of ghg in the atmosphere, but also WHERE it is.
Yes, there is always room for skepticism in science, but not every proposition is equally deserving of skepticism. I’m a lot more likely to question whether quarks are truly fundamental than I am to question conservation of energy. I’m a lot more likely to question whether we understand the effects of aerosols and clouds than I am to think we’re out to sea on insolation and greenhouse gasses. Indiscriminate skepticism is not productive. It merely distracts you from areas where your skepticism would be more profitable.
bigcitylib says
#248
“Note: Industry has recently bought a few lesser known journals which try to legitimize their “global warming is false” idea…your librarian should be able to tell you which is which.”
Are you using “bought” strictly here? Or metaphorically?
Hank Roberts says
Fred, Pielke quotes McKitrick as writing about
“… 2 flat intervals interrupted by step-like changes associated with big volcanoes….”
Any relation to your calculations, or unrelated?
Barton Paul Levenson says
Fred Staples writes:
[[do you remember Arrhenius’s views on CO2 concentrations (without feedbacks) as an explanation of the Ice ages?]]
Yes, and to a large extent he was right, since the solar energy distribution changes we now believe were the immediate cause of the ice ages were amplified greatly by the CO2 feedback. BTW, his model did take water vapor feedback into account.
[[The dominance of H20, per molecule, in the infra-red absorption region is obvious, and the concentrations of CO2 and H2O are about 380 ppm and 2000 ppm respectively.]]
Right. Nonetheless, CO2 is important in global warming and H2O is less so. The reasons for this are:
1. Water vapor has a very shallow scale height (about 1.8 km compared to 7 km for the troposphere in general), so it peters out quickly with altitude. CO2, on the other hand, is well-mixed.
2. An average molecule of water vapor stays in the air about nine days. An average molecule of carbon dioxide stays in the air 200 years. We can’t affect water vapor very much whatever we do; it rains out or evaporates up too quickly. That’s why CO2 is treated as a forcing in climate models and H2O is treated as a feedback.
[[ I am not saying that the CO2 perturbation models are wrong – just that they need a good deal of unqualified experimental verification before they are unequivocally accepted.]]
Lab work by John Tyndall proved that CO2 was a greenhouse gas in 1859. We had a good idea of the line structure by the 1950s and have now mapped thousands of individual CO2 lines, so we have a good idea how it affects radiative transfer.
[[over the years to 2030 the CO2 signal (over and above aerosols, solar influences, random “noise” and aerosols, as Hansen says) will either appear or it won’t.]]
It already had by 2001 or so.
[[Can we really say that we are certain of the CO2 influence today?]]
Yeah, pretty much.
Hank Roberts says
Empiricists, modelers, and croupiers:
http://julesandjames.blogspot.com/2008/01/would-you-bet-on-satellite-record.html
Gaelan Clark says
Mike–have not forgotten–never considered–still learning, and thank you for that.
But, I still don’t get it, because what I am inferring is that the urban areas are heating up because of land-use change from arable land to concrete jungle. Another peculiarity from the surface temperature constructions is that a number of the weather stations were once in open vegetated land, and are now surrounded by parking lots, air-conditioning vents, or airport runways, etc., etc.–using these temperature readings would surely show an increase that is neither natural nor “model” AGW—while it certainly is anthropogenic, it should not be used to show that CO2 is causing the temp spikes.
Please, I know this is sophomoric to you, but in my circle of friends we discuss this quite a bit—not to any scientific degree, but discuss nonetheless, and it helps to know what the answers are, in “sophomoric” terms.
Tim McDermott says
AZ (255),
Why do you think that modelers are not empiricists? You realize, don’t you, that all science that uses mathematics is modeling? F=ma is a model. F=Gm1m2/r^2 is a model, useful but not “true.”
I’m a programmer who has done some modeling/simulation. I have no direct knowledge of what is in climate models, but after hanging out here for a while, my guess is that they involve hundred(s) of differential equations, a lot of which are involved in circular relationships. If that is the case, how does a non-modeling “empiricist” contribute? Such systems of equations are impossible to solve analytically.
How does anyone not working with a model have anything to say?
Ray Ladbury says
Gaelan, you can read about this on this site–among other places, here
https://www.realclimate.org/index.php/archives/2007/07/no-man-is-an-urban-heat-island/langswitch_lang/in
Basically, though, yes development around a station will affect its temperature. However, there are lots of stations nearby. As a result, if you see one station warming when the others are not, you can not only see that it may be in error, but tell roughly how much. The algorithms all look at spatial and temporal averages and filtering.
You don’t want to throw out urban stations, as they still provide data, and give you info about urban heat island effects as well.
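A toy sketch of the neighbour-comparison idea (synthetic data; this is not the actual GISTEMP adjustment algorithm):

```python
# Flag a station whose trend departs strongly from the mean trend of nearby stations.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1970, 2008)

def trend(series):
    return 10 * np.polyfit(years, series, 1)[0]       # deg C per decade

regional = 0.015 * (years - years[0])                 # shared regional warming, 0.15 C/decade (assumed)
rural = [regional + rng.normal(0, 0.2, years.size) for _ in range(5)]
urban = regional + 0.010 * (years - years[0]) + rng.normal(0, 0.2, years.size)  # extra local warming

excess = trend(urban) - np.mean([trend(s) for s in rural])
print(f"urban trend exceeds rural-mean trend by {excess:.2f} C/decade (candidate UHI bias)")
```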
Ike Solem says
Gaelan,
There’s another factor at play: agriculture and irrigation. See Irrigation may not cool the globe in the future, LLNL:
The most recent IPCC report estimated that about 1/5 of the observed warming was due to deforestation and other land-use changes, although such estimates are very difficult to do. The problem is that the biosphere has multiple effects on climate – for example, tropical forests may have a greater role in moderating the global climate than northern temperate forests do. See Planting temperate forests no solution to global warming.
steven mosher says
re 266. Really Ray,
” there are lots of stations nearby”
How many stations are there in Brazil? How many urban and how many rural? And compare that to the number of stations in the US. Just for grins.
Ray Ladbury says
Steven, It’s called “GLOBAL CLIMATE CHANGE” for a reason. Noise in datasets really isn’t a new or impossible problem.
AZ says
Re: 258, 265 modelers vs. empiricists
I’m not trying to make some big philosophical point here. It’s a distinction made by Mooney in “Storm World” and one that I’ve personally encountered in my research (in hydrological modeling circles). There are just some scientists (some “empiricists” but not all), who simply bristle at any sort of computer modeling or at least are innately “skeptical” of computer modeling results.
Marcus says
#257: Gaelan, the majority of land-use change is in agriculture, so I imagine the cooling albedo change comes from cutting down dark forests and replacing them with amber waves of grain. Also, desertification, where it happens.
But yes, where you pave parking lots and such, you do expect land use to cause warming.
(trees and other plants are a little odd, in that they have opposing local effects: dark leaves absorb heat, but transpiration transforms heat into latent heat which effectively cools the immediate surroundings but doesn’t change the net heat in the atmosphere. I think.)
Jim Eager says
Re Gaelan @ 264: Gaelan, as Ray pointed out, the urban heat island effect has been discussed here, repeatedly, and as has been stated here numerous times, known noisy or dirty data is still useful data, IF it’s KNOWN to be noisy or dirty. It is then easy to detect and to filter out the induced bias during processing of the data. Once the bias is identified and removed, the same anomalous trends shown by neighboring rural stations can be detected in the urban data. The biased urban stations are kept precisely because they have long term continuous data records. To simply throw them out removes a valid long term data set. To replace them with new, unbiased sites means that you have no replacement calibrated data set from the new sites for many, many years. There is no fatal problem to begin with, and the proposed “solutions” would definitely be negative.
Fred Staples says
I replied to your comment before, Hank, but it was – what is the word I want – moderated.
[edit. once we’ll allow, twice gets you banned. last warning]
As for the Physics, Ray and Barton, we have debated ghg’s before, and I look forward to doing so again on a suitable post. I certainly do not claim to have a coherent view of climate science, but I would be interested in your comments on the following assertions:
I do not believe that we are dealing with anything other than the resonant absorption of electro-magnetic energy in the infra-red regions of the spectrum. Electron shell excitation states leading to quantum radiation will not be involved. Tri-atomic molecules will absorb this radiation as kinetic energy much more efficiently than di-atomic molecules, (and promptly dissipate their temperature increase within the atmosphere) but that does not mean that N2 and O2 will not warm directly at all.
I find the CO2 radiation saturation argument originally demonstrated by Angstrom (and the resulting logn relationship to concentration) convincing. Whatever CO2 and H2O do to warm the surface, they will do it at low altitude and low concentrations.
I agree with everyone else that the lapse rate is crucial, and that it can be derived from ideal gas equations. Without it, the only plausible explanation for AGW, “higher is colder”, would not be tenable.
Most important, I agree with Wood’s 1909 comments on the low importance of the back-radiative effects in comparison with straightforward thermal insulation. There must be a back-radiative effect on surface warming, but if it is not significant in a glass (as opposed to a rock-salt) greenhouse, why is it so dominant in the atmosphere?
Have you read S D Silverstein’s 1976 paper based on Wood’s experiments, which I found excellent value for 10 dollars?
Ray Ladbury says
AZ, the power of science is that it combines empiricism with modeling–models are constrained by data and in turn tell you what data are important. The reality of anthropogenic causation of climate change does not in any way depend on complex computer modeling. It is known physics, and whether you do it with a computer or pen and paper should not matter. Like it or not, models are how we understand the world. They are not reality, but they distill the important elements of reality and make them understandable to us. If they are objecting to models, they are objecting to science.
Jim Eager says
Re AZ @ 270: “There are just some scientists (some “empiricists” but not all), who simply bristle at any sort of computer modeling or at least are innately “skeptical” of computer modeling results.”
And there are some modelers who dismiss empiricists as “stamp collectors” and “curve fitters,” also from Chris’ book. Such dismissals from either camp are not helpful. Developing models without comparing the model functions and results to observed processes and empirical data is pointless, just as is graphing empirical data and looking for fits and correlations without developing an understanding of the underlying physical system, aka a model of what is going on and how it works.
wayne davidson says
There is certainly some angst out there while comparing one global temperature trend with another. All these measurement methods must be put in perspective with actual world-class benchmarks. When was the last time a ship from North Central Russia crossed the sea in a straight line to Alaska?
Go beyond present satellite data temporal limitations, and it’s clear that at no other time in history was this ocean so open. The correct temperature trend chart must reflect a direct decrease in sea ice volume (not only surface extent) as worldwide temperature steadily increases. If a chart shows no recent Northern Hemisphere temperature increase, it is likely not representing the entire area.
Barton Paul Levenson says
Gaelan: If the urban heat island effect was seriously affecting urban temperature stations, wouldn’t they show a significant difference from rural stations?
[Response: Actually they do, at least in the US. That’s why the GISTEMP analysis corrects the urban trends to match the rural ones. The issue is not the existence of UHI (this is acknowledged by everyone), but its remaining influence on the large scale averages. – gavin]
Hank Roberts says
> empiricists
Discussed here previously, this will help understand what the author was talking about I think:
RealClimate » Storm World: A Review
Note: Chris Mooney has provided us with an early copy of Storm World and we’re reviewing …
https://www.realclimate.org/index.php/archives/2007/06/storm-world/
“… Mooney also traces their respective work back to two different historical schools of thought in the atmospheric science community. On one side are the data-driven empiricists, such as Redfield, Loomis,and Riehl and on the other side the theorists such as Espy, Ferrel and Charney. Gray naturally follows in the tradition of the first group (his Ph.D adviser was Riehl who is sometimes credited as the father of the field of tropical meteorology). Emanuel, a student of Charney, follows in the tradition of the great theorists in atmospheric science. Of course its not quite that simple (and Mooney acknowledges as much)….”
I’d also recommend revisiting Spencer Weart’s AIP History (first link under Science, right side of web page) for a reminder of how meteorology was done up until the very recent development of large computers, and how much that has changed what it’s possible to learn. And how incredibly _fast_ it’s changed.
I did my first statistics coursework using a Frieden mechanical “Automatic Calculator” — it’s the very first piece of equipment pictured here:
http://www.rdrop.com/~jimw/jcgm-vcfii.shtml
My college had one computer — an IBM 1620. http://www.computerhistory.org/projects/ibm_1620/
How should we be evaluating the climate work done using tools like those, around 1970?
How should we be evaluating the climate work done around 1980? 1990? 2000?
Ask a climatologist, not a political scientist.
Pielke writes above in #54 replying to Gavin:
“You write ‘Once you include an adjustment for the too-large forcings’ — sorry but in conducting a verification you are not allowed to go back and change inputs/assumptions that were made at the time based on what you learn afterwards, that is cheating. Predicting what happened after 1990 from the perspective of 2007 is easy;-)”
This is nonsense. It assumes no progress in the models or technology, and the biggest question for political decisions is exactly whether the models are improving.
The straightforward test is to do exactly what Pielke calls “cheating” — take the data as it was then, apply today’s model.
It’s not “cheating” to take the data used in the 1980s, and run it with today’s models.
Claiming the 1980s work wasn’t reliable, by comparing short term to long term work improperly, then calling that “validation” and suggesting that contemporary work can’t be better, is serving fudge with fudge frosting, seems to me.
But of course I’m not a climatologist, nor is Pielke, nor is Tierney. So I’ll listen to the climatologists. I know the political need to discount their warnings. I don’t trust that at all.
Empiricists use statistics. Statisticians use computation and models, nowadays.
Lynn Vincentnathan says
#270, maybe that’s because even though the weather models have become pretty good in describing current weather conditions and making short-term future predictions, they still are not highly accurate more than a week into the future. So I imagine those concerned with warning people about hurricanes would be reluctant to say that hurricane brewing in the Atlantic is definitely not going to hit us over here in S. Texas.
OTOH, climate (which is weather at the macro-statistics-level) is more stable, so that they can even print atlases describing the regional climates that in the past (before GW) held up for decades or more.
So if the climate changes, even a little, that’s a really big thing compared to daily weather changes. And it’s so complex, with so many variables that a model that includes these major variables (some causing the climate to warm, some causing it to cool, etc) is much better than simply looking at 2 variables (say, GHGs & T). It helps to explain those ups and downs better (see above the volcano effects and el nino effects in the graph).
What the modellers do (I think) is use all the relevant empirical data they can and see if they not only match fairly closely climate stats that have actually happened in the past, but also are based on well-established principles of science (like laws of thermodynamics, etc), then forward these models into the future to see what might happen. I believe the models are frequently tested against actual empirical data as it becomes available each year. That’s my impression.
So future modelling is an extrapolation from past empirical data and general scientific principles. I can understand that some scientists, due to the scientific cautionary mode of avoiding false positives, might feel funny about saying anything at all about the future. But people like me want to know. Hindsight is always 20/20 & safe.
Aside from using some type of model, I can’t think of how we’d talk about the future.
An analogy re this seemingly halting or slowing of the warming over a few years, might be earthquake dynamics. Just because Calif has not had an earthquake for a few years does not mean earthquakes have ended. We know (through hard working scientists who’ve told us) that there are tectonic activities going on underfoot and pressures moving plates in different directions, but that they sort of snag up for a long time (? due to earth friction ?), then burst forth moving in spurts (which are the quakes).
So we have a pretty good idea that GHGs have kept the earth a lot warmer than it would have been without them. And we’ve gotten preliminary data that our human additions of GHGs have warmed the earth a bit, as expected, as predicted by the models, once important variables, like the aerosol and albedo effects, were included.
How else would a person make climate projections into the future, without the use of mathematical equations based on past empirical observations and general scientific principles, and without taking all the variables into account (i.e., without our current climate models)? Educated insight? Crystal ball? Simply saying such-and-such?
Saying GW has stopped is making a forecast generated from some model or other, just as saying GW is continuing is based on models. We need to make those skeptics’ models more transparent and compare them to the climate science models. I wonder if they include as many variables and detailed empirical observations as the models used by climate scientists do.
Ray Ladbury says
So, Fred, Have you read anything from the current millenium? How about from the last half of the last century? And wherever did you get the idea that logarithmic dependence leads to saturation?
You claim that greenhouse gasses act only near the surface. Pray, what magic stops them from acting high in the atmosphere? How does a CO2 molecule know where it is in the atmosphere and how does it know where the photon came from? Is it your contention that excited molecules in the mid-troposphere never decay radiatively? That there are no photons in the CO2 band once you reach 10 km or so altitude? As I have said, you really owe it to yourself to learn the physics.
Hank Roberts says
The ‘thirty meters from the surface’ idea keeps popping up and I’d always wondered where it came from. Eli notes this: http://rabett.blogspot.com/2008/01/if-you-dont-remember-past-you-will.html
Phil Scadden says
#273. Fred, I found http://www.aip.org/history/climate/co2.htm a useful starting point for finding out what was wrong. Go into the literature from there.
Steve Reynolds says
gavin> That’s why the GISTEMP analysis corrects the urban trends to match the rural ones. The issue is not the existence of UHI (this is acknowledged by everyone), but it’s remaining influence on the large scale averages.
I appreciate how that works in the U.S. and a few other places with a high density of measurement sites, but how does the UHI correction work where only urban sites exist?
Steve Bloom says
OT, but many here will be very interested to see this abstract (posted just now by solar physicist Leif Svalgaard over at CA):
“This is an abstract for an upcoming meeting:
“‘SORCE’s Past, Present, and Future Role in Earth Science Research, Science Meeting 2008
La Posada de Santa Fe Resort & Spa, Santa Fe, New Mexico, February 5-7, 2008 :
“‘Fire vs Fire: Do Volcanoes or Solar Variability Contribute More to Past Climate Change?
Thomas Crowley [thomas.crowley@ed.ac.uk] and Gabriele Hegerl, School of Geosciences, The University of Edinburgh, Scotland.
“‘Geologists in particular are quick to ascribe past centennial scale climate changes to solar variability. But successively refined records of volcanism from ice core studies suggest that pulses of volcanism explain more decadal temperature variance than can be linearly linked to cosmogenic isotope variations. Formal statistical detection and attribution studies arrive at the same conclusion. However, there still seems to be some (literally) wiggle room for perhaps a small contribution from solar. An example will be given from a 2000 year northern hemisphere temperature reconstruction that suggests (at least at the time of writing this abstract) that there may be a moderately significant solar linkage at ~200 year period.
“‘Given time, a somewhat disconcerting apparent correlation between pulses of volcanism with the Dalton, Maunder, and Sporer Minima will be discussed. Given the unlikely physically significant correlations between the two, the possibility will be explored that cosmogenic records may have an uncorrected overprint from volcanically driven climate change. Provisional summary judgement: solar may be at best marginally significant on the multidecadal to centennial time scale.’
“‘My [Leif’s] comment: 10Be is deposited by adhering to stratospheric aerosols which then drift down and rain out. The amount of aerosols in the stratosphere is controlled mainly by volcanic eruptions. There were such strong eruptions in 1693 (Hekla on Iceland, having large effect on nearby Greenland), 1766 (Hekla), 1809 (see Dai JGR 96, 1991), 1814 (Mayon), 1815 (Tambora), 1883 (Krakatoa).”
My comment is to wonder why this is coming up *now*. Dozens if not hundreds of researchers, including an RC co-author or two, must have looked at this exact issue and not found anything. Presumably Crowley and Hegerl have some new angle, but what? Of course these results haven’t even been published yet and will require confirmation, but C+H are very highly respected researchers; I wouldn’t be surprised if the abstract alone (has anything else been circulated?) is enough to trigger a bit of a scramble to re-examine this issue.
But to the extent this creates consternation among the paleoclimatologists, just imagine how the solarphiles will react. :)
Finally, even if Mike is kicking himself for not spotting this, it appears he may have cause to be pleased since volcanoes with little or no solar would seem to point toward a flattish global reconstruction.
[Response: The discussion in the abstract is interesting, but I don’t see where it has any relevance to resolving any of the key outstanding issues, for several reasons. Solar reconstructions back to AD 1610 are based on sunspot data, not the cosmogenic isotopes. Solar reconstructions such as those developed by Crowley and others simply splice the longer-term Be10 or C14 records onto the sunspot-based estimates to extend the estimates of solar forcing back in time prior to the early 17th century. So the amplitude scale of the solar forcing is set by the calibration of sunspot data against modern satellite irradiance measurements, not by the isotope data. Note that the primary discrepancies between various proxy-based Northern Hemisphere temperature reconstructions (see e.g. the wikipedia comparison) are actually during the 17th century, when solar reconstructions are independent of the isotope data anyway. Finally, in modeling studies, regardless of which of the various alternative longer-term solar reconstructions are used (see e.g. the comparison in Jones and Mann (2004)), solar forcing is always secondary relative to volcanic forcing in terms of its contribution to the long-term temperature trends. In all simulations, the main reason for the moderate observed hemispheric “Medieval Warm Period” is low explosive volcanic activity, and the main reason for the moderate observed hemispheric “Little Ice Age” is high explosive volcanic activity. Solar forcing is simply much smaller than volcanic forcing even when averaged on the centennial timescales of interest. This is even more true in the most recent work. In much of the current modeling work, solar irradiance estimates have been even further down-sized relative to the earlier (e.g. Lean et al ’95) estimates used in earlier simulations. This is due to the fact that the larger amplitude previous estimates relied on an additional low-frequency calibration based on Baliunas’ work on ‘sun-like’ stars, which is now believed to be invalid. So the Crowley and Hegerl abstract is sort of interesting, and they may well be right–but it doesn’t really matter much either way, at least not in this context. Sorry :( – mike]
Matt says
http://ross.mckitrick.googlepages.com/Letter.to.policymaker.pdf
Someone please review this partial critique of the IPCC report. In particular, this statement:
“About half the energy at the surface leaves through
infrared radiation, and the other half is removed by the fluid dynamics of the atmosphere: convection,
turbulence, wind, evaporation, and so forth.”
This statement seems inconsequential, and misleading, since the globe can only emit energy by radiation or particle emission.
[Response: The statement is approximately true, but like many things emanating from that source, misleading. It refers to the energy balance of the surface of the planet, which is indeed affected by turbulent fluid heat transfer as well as radiation. These transfers are all properly accounted for in general circulation models, so there’s no sense in which this statement can be considered a “criticism” of the IPCC. As you note, the only way heat can leave the planet is through radiation (actually particle transfers are insignificant as an energy loss mechanism). It is the top of atmosphere radiation budget, rather than the surface radiation budget, that is in fact the prime determinant of climate. –raypierre]
Richard Tew says
I wonder if anyone has done the numbers regarding the total amount of hydrocarbons liberated from the ground vs the amount of carbon now in the atmosphere. All the voices talking about bio-fuels that permit carbon recycling don’t seem to touch the point that hydrocarbon mining has brought into the biosphere carbon that will stay there (it seems) until some geological time-scale process returns it to the earth. Until that occurs, the greenhouse effect gets amplified by all the carbon that’s been liberated in the past several hundred years. Temperature profiles may be only one indicator for all that extra carbon now in circulation as there are carbon repositories besides the atmosphere. Are any others showing shifts similar to those appearing on global temperature charts (eg, the acidity of waters both fresh and marine)?
lucia says
Gavin,
In the Hansen ’88 projections, you provided a data file with the A, B, & C forcings used. I’ve plotted those and don’t see the effect of ‘scenario volcanic eruptions’. I gather from the text, you were providing the non-volcanic bits.
Since everyone is discussing these elsewhere… do you have the forcings as actually run. (That is, assuming I am interpreting what’s in those files correctly?)
Thanks in advance.
Steve Bloom says
Re #283 response: Thanks for that thorough answer, Mike.
Reading over those links and the relevant portions of AR4 WG1 Chapter 6, I see that I wasn’t quite up with the times. I had known about the general trend toward reducing estimates of solar forcing, but I hadn’t known the part about the pre-1600 solar forcing estimates being so dependent on the post-1600 sunspot counts or that the MWP was already thought to be explicable mainly by reduced volcanic explosiveness. That would certainly explain Leif’s interest, since as I’m sure you know his latest proposed solar forcing reconstruction for that period implies that it’s small indeed (as in about the same amplitude as the 11-year solar cycle). It does appear that C+H’s work is nice confirmation for this since it seems to be a direct argument that the volcanoes can explain everything. Summing up, it sounds as if the solar irradiance trends are in the process of being demoted from small to nearly insignificant.
IIRC it’s been said here recently that solar forcing is still thought to explain a good part of the early 20th century warming, but it sounds as if a further downward adjustment of the solar component isn’t much of an issue.
Re Baliunas, I hadn’t known that her work had ever had that much influence. It does help explain a lot of her subsequent behavior.
BTW, I’m not so disappointed if it’s only the solarphiles who are consternated, so no apologies necessary! ;)
Ric Merritt says
Mr McKitrick (see #284) is an economist. My confidence in his judgment of the risks from climate change is low. In all his bobbing and weaving and posturing about the risk, he never even hints that some climate changes might be both baleful and dreadfully hard to reverse if we wait too long to start. However, after a quick scan of the linked document, I think his thrust in structuring economic incentives to control emissions makes a lot of sense. I wish he would put aside his climate science opinions and participate vigorously in the debate about how to reduce emissions. A waste of expertise. His point that mechanisms should be kept flexible over the coming decades to react to new data seems obvious to me, yet I see it made almost nowhere. Instead we encourage our politicians to throw around targets for 2050 which may easily be off in either direction, and are discouraging to contemplate from where we sit today. Ironically, I think the desirability, and obvious possibility, of reacting flexibly over the decades is itself one of the strongest arguments for starting now. If McKitrick, or Lindzen, or Fred Singer (please stifle those eye rolls, I said IF) turns out to be pretty close to right, we’ll know soon enough to correct course economically, with no long-term damage done, contrary to the loud howls we hear from defenders of BAU. But if we sit around and wait, as McKitrick would have us do, and some of Hansen’s more worrisome projections are right, we’re truly screwed.
Jon Gradie says
Volcanoes: Are there any publications which mention or discuss the impact (if any) of the Kilauea eruption (ongoing for 20+ years)? I am curious if this constant low-altitude injection of sulfate aerosols has any regional climatic impact (I been living here for 20+ years … I just take it for granted).
Great stuff, great discussions, great site!
Hank Roberts says
Richard — yes.
The “Start Here” link at the top of the page will help you with your questions. So will the first link under Science in the right column, to the AIP History website.
Bill Tarver says
“Sea level rise, on the other hand, appears to be under-estimated by the models for reasons that are as yet unclear.”
When are you guys going to issue a revised sea level forecast? It seems to me you have vastly underestimated the true position and that we can already see the beginnings of an exponential increase. Hansen’s forecast of ~5m by the end of the century, while brutal, looks more realistic. There still isn’t any real sense of urgency about the problem, or the political will to face up to it. People see your upper limit forecast of 60cm, shrug, and think it’s not too bad.
Urs Neu says
Re 251, 253
If looking at ENSO influence on global temperature, keep in mind the roughly half year time lag. That means that 2007 is mainly influenced by a moderate El Nino (corresponding to an influence of about plus 0.05 to 0.07 K of global temperature), while the influence of the current La Nina will fully hit 2008 (last six ENSO months of 2007 and first six of 2008), corresponding to about minus 0.03 to 0.07 K to global temperature.
Comparing 2007 to 1998: the ENSO influence is positive for both, with about plus 0.05 to 0.07 K and plus 0.20 to 0.25 K respectively.
For 2008 besides La Nina there is a minimum of the solar cycle, where we are not quite sure of the magnitude of the effect and if the roughly 10y quasi-oscillation we find in the global temperature data is really due to solar variability or just an internal oscillation of the climate system. The influence of this oscillation, be it due to the solar cycle or not, is about 0.1 K for the full cycle, i.e. a departure of about 0.05K from the average at minimum stage. Together this would give roughly minus 0.1K departure from the trend line in 2008. In addition, there will be a considerable contribution of stochastic natural variability. Nevertheless I think it is quite likely that global-warming-has-stopped claims will not recede this year. We’ll probably have to wait for 2009 and 2010 without El Nino and increasing solar radiation (or whatever the cycle is due to). In agreement with Smith et al., BTW.
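A minimal sketch of applying that roughly half-year lag (the file names, column layouts and regression form are assumptions for illustration only):

```python
# Regress annual global-mean anomalies on a Nino3.4 index averaged over
# July of the previous year through June of the current year (~6-month lag).
import numpy as np

t = np.loadtxt("annual_anomaly.txt")        # placeholder: columns year, anomaly (deg C)
nino = np.loadtxt("monthly_nino34.txt")     # placeholder: columns year, month, index

enso = []
for y in t[:, 0]:
    sel = ((nino[:, 0] == y - 1) & (nino[:, 1] >= 7)) | \
          ((nino[:, 0] == y) & (nino[:, 1] <= 6))
    enso.append(nino[sel, 2].mean() if sel.sum() == 12 else np.nan)
enso = np.array(enso)

ok = ~np.isnan(enso)
coef = np.polyfit(enso[ok], t[ok, 1], 1)[0]
print(f"~{coef:.3f} C of global anomaly per unit of lagged Nino3.4 index")
```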
Barton Paul Levenson says
Fred Staples writes:
[[I find the CO2 radiation saturation argument originally demonstrated by Angstrom (and the resulting logn relationship to concentration) convincing. Whatever CO2 and H2O do to warm the surface, they will do it at low altitude and low concentrations.]]
That argument was shown to be wrong back in the 1940s. Yes, the infrared radiation from the surface is absorbed fairly low down. No, that doesn’t mean that absorption higher up doesn’t count. If the upper layers absorb more from the lower layers, they will heat up. And radiate more. And heat up the lower levels, which in turn will heat up the ground. Absorption at all levels counts, even if the lowest level absorbs 100% of the radiation from the ground.
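A toy way to see why absorption higher up still matters, even when the lowest layer is already opaque, is the textbook grey-layer sketch below (a pedagogical illustration, not a real radiative transfer calculation):

```python
# Each fully absorbing layer in radiative equilibrium raises the surface
# emission by one extra unit of the absorbed solar flux.
sigma = 5.67e-8            # W/(m^2 K^4), Stefan-Boltzmann constant
solar = 240.0              # W/m^2, absorbed solar flux (global mean, assumed)

for n_layers in (0, 1, 2):
    surface_flux = (n_layers + 1) * solar
    T_surf = (surface_flux / sigma) ** 0.25
    print(f"{n_layers} opaque layers -> surface temperature {T_surf:.0f} K")
```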
Fred Staples says
Very well, (273), without the joke, Hank, (197/259) there is no connection. Presumably he (McKitrick) can see what anyone who looks at the satellite data can see.
The logn rule arises from saturation, Ray, (280) not the other way around. Each additional CO2 molecule added to the atmosphere finds less surface radiation to absorb.
Reactor Physics, Ray, (280) depends on twentieth century Physics, and it is very much on the particle side of the wave/particle duality. Nuclear fission requires the absorption of neutrons to break down the strong nuclear force.
Does Climate Science have much to do with this? The great classical physicists, Maxwell, Stefan, Boltzmann et al, (on which much of climate science depends) knew nothing about atomic structure.
How, Ray, does a CO2 molecule absorb surface heat – electron shell excitation (quantum physics), or increased inter-atomic vibration (kinetic energy)? How is the increased energy dissipated (not lost)?
At the tropopause we have a dry N2/O2 atmosphere, with CO2 at 380 ppm, 0.04%, at a temperature about 255 degrees K. Beyond it, eventually, is space at 3 degrees K, close to absolute zero.
How is that atmospheric energy lost to space?
Simple radiative models (using Stefan-Boltzmann) are easy to construct. Complex models, I am sure, are incredibly difficult. Their validity can be judged only against their predictions, and given the inherent variability and short time scales this can only be done statistically. Hence the “fun” (259) of “line fitting” and statistical testing.
Incidentally, whether or not two data sets (surface and troposhere temperatures, for example) are statistically different has nothing to do with error bars. We have to calculate the probability that they are not part of the same set of data.
P. Lewis says
Re Richard Tew #285
Ocean acidification
Hank Roberts says
Fred, if you have read all the way through Weart’s book and the ‘What Angstrom Didn’t Know’ comments threads, you know as much as we your fellow readers here know. For those of us who don’t do quantum math, we have to rely on those who do. (So does all modern electronics; it works whether we believe in it or not.)
All the radiation physics work is in that area. And it works.
But you are asking the same questions Weart answers, and that people worked very hard to answer, in those two comment threads, as though you were unaware of them. Please reread them, or read them.
Please read them; it will save people vast amounts of retyping, and this is off topic anyway.
Philippe Chantreau says
Fred, this link has interesting elements on some of what you’re asking (how some heat is released to space from the stratosphere):
http://www.atmosphere.mpg.de/enid/2__Ozone/-_Cooling_nd.html
I found it more to the point than the RC discussion in this subject, although it is probably oversimplified for the Gavin types.
Clough and Iacono have a lot of good work on that subject; here are other names that you can look up to find info on stratospheric radiative processes: Santer, Ramaswamy, Schwarzkopf, Cess. All that info is out there; if you use the right words to search, you don’t even need Scholar, you’ll find the articles. There is no reason to be continuously asking others to spoon-feed you, as Hank has justly pointed out. Furthermore, whatever the quantum mechanics may be, there is an abundance of observational data for the middle and high troposphere, which you can find with CERES, ARM and ERBE (pick which is most relevant).
marguerite manteau-rao says
Aren’t you just saying that one should look at long term trends, not just a few years? (what may seem like a long time to us, eight years is nothing in terms of climate history) And that when one does so, the evidence for global warming is irrefutable?
http://lamarguerite.wordpress.com
‘It’s All About Green Psychology’
Jim Galasyn says
Meanwhile, in the Arctic: