Gavin Schmidt and Stefan Rahmstorf
John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year’s numbers imply that ‘Global Warming has stopped’ or that it is ‘taking a break’ (Uli Kulke, Die Welt). However, as most of our readers will realise, these comparisons are flawed since they basically compare long term climate change to short term weather variability.
This becomes immediately clear when looking at the following graph:
The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well), while the blue lines are 8-year trend lines – one for each 8-year period of data in the graph. What it shows is exactly what anyone should expect: the trends over such short periods are variable; sometimes small, sometimes large, sometimes negative – depending on which year you start with. The mean of all the 8-year trends is close to the long term trend (0.19ºC/decade), but the standard deviation is almost as large (0.17ºC/decade), implying that a trend would have to be either >0.5ºC/decade or much more negative (< -0.2ºC/decade) for it to obviously fall outside the distribution. Thus comparing short trends has very little power to distinguish between alternate expectations.
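For readers who want to reproduce the spread of short trends themselves, here is a minimal sketch in Python. The series below is synthetic and purely illustrative (a 0.19ºC/decade trend plus arbitrary noise) and simply stands in for an annual global-mean anomaly record such as GISTEMP:

```python
import numpy as np

def short_trends(years, anom, window=8):
    """OLS trend (in degC/decade) for every `window`-year segment of the record."""
    trends = []
    for i in range(len(years) - window + 1):
        slope = np.polyfit(years[i:i + window], anom[i:i + window], 1)[0]
        trends.append(slope * 10.0)  # per-year slope -> per-decade
    return np.array(trends)

# Synthetic stand-in for an annual global-mean anomaly record:
# a 0.19 degC/decade forced trend plus interannual "weather" noise.
rng = np.random.default_rng(0)
years = np.arange(1975, 2008)
anom = 0.019 * (years - years[0]) + rng.normal(0.0, 0.1, len(years))

t8 = short_trends(years, anom)
print(t8.mean(), t8.std())  # mean is near the imposed trend; the spread is large
```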
So, it should be clear that short term comparisons are misguided, but the reasons why, and what should be done instead, are worth exploring.
The first point to make (and indeed the first point we always make) is that the climate system has enormous amounts of variability on day-to-day, month-to-month, year-to-year and decade-to-decade periods. Much of this variability (once you account for the diurnal cycle and the seasons) is apparently chaotic and unrelated to any external factor – it is the weather. Some aspects of weather are predictable – the location of mid-latitude storms a few days in advance, the progression of an El Niño event a few months in advance etc, but predictability quickly evaporates due to the extreme sensitivity of the weather to the unavoidable uncertainty in the initial conditions. So for most intents and purposes, the weather component can be thought of as random.
If you are interested in the forced component of the climate – and many people are – then you need to assess the size of an expected forced signal relative to the unforced weather ‘noise’. Without this, the significance of any observed change is impossible to determine. The signal to noise ratio is actually very sensitive to exactly what climate record (or ‘metric’) you are looking at, and so whether a signal can be clearly seen will vary enormously across different aspects of the climate.
An obvious example is looking at the temperature anomaly in a single temperature station. The standard deviation in New York City for a monthly mean anomaly is around 2.5ºC, for the annual mean it is around 0.6ºC, while for the global mean anomaly it is around 0.2ºC. So the longer the averaging time-period and the wider the spatial average, the smaller the weather noise and the greater chance to detect any particular signal.
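The way averaging beats down weather noise can be illustrated with a toy calculation. This assumes uncorrelated monthly anomalies, which real station data are not, so the numbers are only indicative of the 1/sqrt(N) tendency, not of any actual record:

```python
import numpy as np

rng = np.random.default_rng(1)
monthly = rng.normal(0.0, 2.5, size=(10000, 12))  # synthetic monthly anomalies, sd 2.5 degC
annual = monthly.mean(axis=1)                     # annual means of the same "station"

print(monthly.std())  # ~2.5 degC: single-month noise
print(annual.std())   # ~0.7 degC: averaging 12 months shrinks the noise by ~1/sqrt(12)
```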
In the real world, there are other sources of uncertainty which add to the ‘noise’ part of this discussion. First of all there is the uncertainty that any particular climate metric is actually representing what it claims to be. This can be due to sparse sampling or it can relate to the procedure by which the raw data is put together. It can either be random or systematic and there are a couple of good examples of this in the various surface or near-surface temperature records.
Sampling biases are easy to see in the difference between the GISTEMP surface temperature data product (which extrapolates over the Arctic region) and the HADCRUT3v product which assumes that Arctic temperature anomalies don’t extend past the land. These are both defendable choices, but when calculating global mean anomalies in a situation where the Arctic is warming up rapidly, there is an obvious offset between the two records (and indeed GISTEMP has been trending higher). However, the long term trends are very similar.
A more systematic bias is seen in the differences between the RSS and UAH versions of the MSU-LT (lower troposphere) satellite temperature record. Both groups are nominally trying to estimate the same thing from the same data, but because of assumptions and methods used in tying together the different satellites involved, there can be large differences in trends. Given that we only have two examples of this metric, the true systematic uncertainty is clearly larger than simply the difference between them.
What we are really after is how to evaluate our understanding of what’s driving climate change as encapsulated in models of the climate system. Those models though can be as simple as an extrapolated trend, or as complex as a state-of-the-art GCM. Whatever the source of an estimate of what ‘should’ be happening, there are three issues that need to be addressed:
- Firstly, are the drivers changing as we expected? It’s all very well to predict that a pedestrian will likely be knocked over if they step into the path of a truck, but the prediction can only be validated if they actually step off the curb! In the climate case, we need to know how well we estimated forcings (greenhouse gases, volcanic effects, aerosols, solar etc.) in the projections.
- Secondly, what is the uncertainty in that prediction given a particular forcing? For instance, how often is our poor pedestrian saved because the truck manages to swerve out of the way? For temperature changes this is equivalent to the uncertainty in the long-term projected trends. This uncertainty depends on climate sensitivity, the length of time and the size of the unforced variability.
- Thirdly, we need to compare like with like and be careful about what questions are really being asked. This has become easier with the archive of model simulations for the 20th Century (but more about this in a future post).
It’s worthwhile expanding on the third point since it is often the one that trips people up. In model projections, it is now standard practice to do a number of different simulations that have different initial conditions in order to span the range of possible weather states. Any individual simulation will have the same forced climate change, but will have a different realisation of the unforced noise. By averaging over the runs, the noise (which is uncorrelated from one run to another) averages out, and what is left is an estimate of the forced signal and its uncertainty. This is somewhat analogous to the averaging of all the short trends in the figure above, and as there, you can often get a very good estimate of the forced change (or long term mean).
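As an illustration of how ensemble averaging separates the forced signal from the noise, here is a toy sketch. It is not output from any real GCM ensemble; the trend and noise amplitudes are arbitrary assumptions chosen only to make the point:

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_years = 20, 30
forced = 0.02 * np.arange(n_years)                    # common forced trend (degC per year)
noise = rng.normal(0.0, 0.1, size=(n_runs, n_years))  # uncorrelated "weather" in each run
runs = forced + noise                                  # each run: same forcing, different weather

ensemble_mean = runs.mean(axis=0)          # noise averages out across runs
spread = runs.std(axis=0)                  # spread of individual realisations (signal + noise)
sem = spread / np.sqrt(n_runs)             # uncertainty of the forced-signal estimate itself

# Any single realisation (e.g. the real world) should be compared against `spread`,
# not against `sem` -- which is the point of the paragraphs that follow.
```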
Problems can occur though if the estimate of the forced change is compared directly to the real trend in order to see if they are consistent. You need to remember that the real world consists of both a (potentially) forced trend but also a random weather component. This was an issue with the recent Douglass et al paper, where they claimed the observations were outside the mean model tropospheric trend and its uncertainty. They confused the uncertainty in how well we can estimate the forced signal (the mean of all the models) with the distribution of trends+noise.
This might seem confusing, but a dice-throwing analogy might be useful. If you have a bunch of normal dice (‘models’) then the mean point value is 3.5 with a standard deviation of ~1.7. Thus, the mean over 100 throws will have a distribution of 3.5 +/- 0.17, which means you’ll get a pretty good estimate. To assess whether another die is loaded it is not enough to just compare one throw of that die. For instance, if you threw a 5, that is significantly outside the expected value derived from the 100 previous throws, but it is clearly within the expected distribution.
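The dice numbers are easy to verify with a quick simulation (illustrative only; the specific experiment sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
throws = rng.integers(1, 7, size=(100000, 100))   # many experiments of 100 throws each

print(throws[0].std())             # ~1.7: spread of individual throws
print(throws.mean(axis=1).std())   # ~0.17: spread of the mean over 100 throws

# A new throw of 5 is far outside 3.5 +/- 0.17 (the uncertainty of the mean),
# yet entirely unremarkable within the 1-6 spread of single throws.
```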
Bringing it back to climate models, there can be strong agreement that 0.2ºC/dec is the expected value for the current forced trend, but comparing the actual trend simply to that number plus or minus the uncertainty in its value is incorrect. This is what is implicitly being done in the figure on Tierney’s post.
If that isn’t the right way to do it, what is a better way? Well, if you start to take longer trends, then the uncertainty in the trend estimate approaches the uncertainty in the expected trend, at which point it becomes meaningful to compare them since the ‘weather’ component has been averaged out. In the global surface temperature record, that happens for trends longer than about 15 years, but for smaller areas with higher noise levels (like Antarctica), the time period can be many decades.
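The way the noise contribution to a trend estimate shrinks with record length can also be seen in a toy calculation. White noise is assumed here, whereas real interannual variability is autocorrelated, so the actual crossover near 15 years in the global record will differ; the noise amplitude below is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 0.1  # assumed interannual noise in degC; purely illustrative

for n_years in (8, 15, 30):
    trends = [np.polyfit(np.arange(n_years),
                         rng.normal(0.0, sigma, n_years), 1)[0] * 10
              for _ in range(5000)]
    # Spread (degC/decade) of trends produced by noise alone, with zero true trend:
    print(n_years, np.std(trends))
```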
Are people going back to the earliest projections and assessing how good they are? Yes. We’ve done so here for Hansen’s 1988 projections, Stefan and colleagues did it for CO2, temperature and sea level projections from IPCC TAR (Rahmstorf et al, 2007), and IPCC themselves did so in Fig 1.1 of AR4 Chapter 1. Each of these analyses shows that the longer term temperature trends are indeed what is expected. Sea level rise, on the other hand, appears to be under-estimated by the models for reasons that are as yet unclear.
Finally, this subject appears to have been raised from the expectation that some short term weather event over the next few years will definitively prove that either anthropogenic global warming is a problem or it isn’t. As the above discussion should have made clear, this is not the right question to ask. Instead, the question should be, are there analyses that will be made over the next few years that will improve the evaluation of climate models? There the answer is likely to be yes. There will be better estimates of long term trends in precipitation, cloudiness, winds, storm intensity, ice thickness, glacial retreat, ocean warming etc. We have expectations of what those trends should be, but in many cases the ‘noise’ is still too large for those metrics to be a useful constraint. As time goes on, the noise in ever-longer trends diminishes, and what gets revealed then will determine how well we understand what’s happening.
Update: We are pleased to see such large interest in our post. Several readers asked for additional graphs. Here they are:
– UK Met Office data (instead of GISS data) with 8-year trend lines
– GISS data with 7-year trend lines (instead of 8-year).
– GISS data with 15-year trend lines
These graphs illustrate that the 8-year trends in the UK Met Office data are of course just as noisy as in the GISS data; that 7-year trend lines are of course even noisier than 8-year trend lines; and that things start to stabilise (trends getting statistically robust) when 15-year averaging is used. This illustrates the key point we were trying to make: looking at only 8 years of data is looking primarily at the “noise” of interannual variability rather than at the forced long-term trend. This makes as much sense as analysing the temperature observations from 10-17 April to check whether it really gets warmer during spring.
And here is an update of the comparison of global temperature data with the IPCC TAR projections (Rahmstorf et al., Science 2007) with the 2007 values added in (for caption see that paper). With both data sets the observed long-term trends are still running in the upper half of the range that IPCC projected.
Bob North says
Regarding the reports of increased Antarctic and Greenland glacier melting and the record low area of summer sea ice in the Arctic, my question is: To what extent could deposition of particulate emissions (such as from diesel exhaust) on the ice surface contribute to the increased melting by decreasing the ice’s albedo and increasing the retention of heat? This could in part explain the increased ice melt when an initial review suggests that 2007 temperatures were not dramatically higher than in the recent past. Not CO2, but still another anthropogenic contribution to warming. Have the impacts from this been investigated or calculated?
Bob North
Bryan S says
Gavin, Now that we are on the same page, I will ask my original question again.
Q: What is the magnitude (an average ballpark number) of annual variability (in W/m2) of system heat content changes (the whole enchilada) from AOGCM output? Or another way: Is the magnitude of *interannual* variability in models close to that observed from past changes in OHC (the entire volume integral)? Or another way: Do models underestimate annual variability of heat content changes? Thanks in advance for the answer.
Just another point, however, on variability. The problem of global measurement (obtaining signal in a noisy system) is not substantially different from the challenge of obtaining average values of the weather, but the results from the ocean are potentially more profound. The thing that I think most people don’t realize, though, is that nearly all of the actual “global warming or cooling” of the earth system is taking place in the ocean. Surface temperature or SST are really just 2-D proxies for these 3-D heat changes. There are really a bunch of interesting questions which emerge from this, including how quickly and deeply heat is mixed into the ocean, and how quickly this ocean heating is realized in the atmosphere. Urs Neu (#200) is most certainly right that ENSO has a profound effect on global surface temperature, but I am interested in how much it really affects the global ocean heat content integral (which will determine the amount of “global warming in the pipeline”).
[Response: It’s unclear whether the models underestimate the interannual variability (AchutaRao et al, 2007). It’s possible that they do, but without better data on what that variability really is, it will remain ambiguous. Unforced decadal variability is even more uncertain. Sampling issues have definitely increased the variability in the data, though maybe that is now getting better. You liken the annual average in OHC to that of SAT, but you need to consider the size of the variability relative to the signal. From a range of -80 to 80 W/m2 locally, we are looking for a < 1 W/m2 signal (almost two orders of magnitude smaller). For SAT, the sd in one location interannually is a couple of degrees, say 5 deg C for a range (I haven’t checked). The signal we are looking for there is a couple of tenths, which gives a signal to noise ratio ~8 times as large. Plus the SAT variability has large spatial scales and is much better sampled, etc. I thought I answered your question above though: the interannual heat content changes of the whole system are on the order of a few tenths of W/m2. – gavin]
weather tis better... says
Bob North – “To what extent could deposition of particulate emissions (such as from diesel exhaust) on the ice surface contribute to the increased melting by decreasing the ice’s albedo and increasing the retention of heat.”
As I understand the recent findings of ice melt in the western Antarctic, the researchers speculate it is being caused by deep ocean currents, not surface causes.
Hank Roberts says
Good to remember how computers and information have improved. Imagine what the IPCC could have done 20 years ago if current technology had been available to the researchers, eh?
This news article
http://www.sciencedaily.com/releases/2007/12/071211233433.htm
mentions that military as well as civilian satellites are now being used to track these issues. That’s something I’d hoped but hadn’t seen mentioned before.
——excerpt——-
“… University of Colorado at Boulder climate scientist … Konrad Steffen, director of the Cooperative Institute for Research in Environmental Sciences…. presentation … American Geophysical Union … Dec. 10 to Dec. 14 [2007]…. used data from the Defense Meteorology Satellite Program’s Special Sensor Microwave Imager aboard several military and weather satellites ….
Steffen maintains an extensive climate-monitoring network of 22 stations on the Greenland ice sheet known as the Greenland Climate Network, transmitting hourly data via satellites to CU-Boulder to study ice-sheet processes.
—–end excerpt——-
I’d guess the ‘hourly data’ is for managing military assets. Could ice sheet changes be happening that fast? Icequakes, maybe?
Bryan S says
Gavin, thanks for the really informative discussion.
Barton Paul Levenson says
Where can I find a good annual time series for global sulfate and/or black carbon aerosols? The only thing I’ve managed to find on-line so far is Lamb’s Dust Veil Index, and that only goes up to 1983.
Walt Bennett says
Re: #174, #184
Jim,
I am driving myself crazy trying to figure out where Hadley’s SSTs come from. It seems that they are using NOAA satellite data; do you concur?
I have been to the NOAA site and I cannot find any data for global SSTs.
What I really want to know is, what are the sources and methods for determining SST for both Hadley and GISS? Is one (Hadley) using satellite and the other (GISS) using buoys and ships? Perhaps I am speaking nonsense, but part of the problem is that I don’t do this for a living, don’t have unlimited time to track things down, don’t really have the expertise to make sense of the data (though I do try), and of course it is easy to get overwhelmed by the sheer volume of sites, articles, links, datasets and so forth.
The reason I used the word “stark” (which you endorsed) is because, to my eyes, we are talking about a difference in sign. GISS sees a plus sign, Hadley sees a negative sign. To the layman, the next question is “Huh?”
I also repeat my question: Is it valid to extrapolate a global temperature which is dominated by SSTs, which ends up burying the significant increases in land temperature?
If we look at a graph which contains only land temperatures, we see a result which more closely resembles observed changes. I know of some people who categorically refute that humans can detect changes of “tenths of a degree”, and I agree. However, the change on land is much greater than that, especially here in the U.S. northeast, and I am confident that I can detect a general warming trend in the last ten years, not to mention almost completely different seasons than 30 to 40 years ago.
I know that water is 70% of the surface. This means that when we take “averages”, land gets 3/10 the weight of water. Is that the correct way to look at the rise in temps? Isn’t the increase of land temps more accurate in terms of what’s actually happening to the planet?
It seems to me that land and oceans have substantially different properties. A rise of 0.1ºC in overall ocean temps would be a much bigger deal than a similar rise on land.
I would appreciate if Jim or anybody would take over my feeble attempt at making a point, and put it in clearer terms. I know that I am botching this, but I swear there is a valid question in here somewhere.
Hank Roberts says
Walt, check these:
http://www.google.com/search?q=%2Bhadley+%2B%22sea+surface+temperature%22+%2Binstrument
You can improve on the search after reading the first few pages of hits to this approximate search; lots of clues therein already.
Timothy Chase says
Re: Walt Bennett 227
GISS and Hadley temperatures are different in a number of respects. Data sources? Sure — up to a point, but they share much of the same data as well. Base periods against which you calculate anomalies are another place where they differ. One could be positive, the other negative and yet both be showing the same exact thing.
But then there is a difference in terms of methodology — that is how they interpolate to fill in the gaps.
Hadley interpolates to a lesser extent than GISS, considering only neighboring cells (if I remember correctly — from a couple of days ago). In contrast, GISS relies upon the fact that while temperatures aren’t that well correlated over great distances, temperature anomalies are — up to 1000 km, if I remember correctly — with more weight given to a measurement the closer it is to the gap that you are trying to fill in.
As such, Hadley has less coverage, and little above 70 N, but GISS “covers” a fair amount above 70 N. However, Hadley has been extending its coverage over the past several decades.
They are actually both quite good and generally well-correlated, at least within the range of uncertainty. But both are going beyond the data using somewhat different methodologies and therefore do not give quite the same view of the world.
Anyway, here are links to both where you may find out a little more…
Hadley
http://www.cru.uea.ac.uk/cru/data/temperature
GISS
http://data.giss.nasa.gov/gistemp
Walt Bennett says
Re: #208,
Hank,
Thank you, and I will make still more time to track the answer down.
However, I have had time to figure out the question I am trying to ask.
Let’s turn the planet into a 2D grid, and report a temperature anomaly for each box in the grid, based either on direct measurement or extrapolation. We know that 70% of those boxes will be over water, and 30% will be over land. Now, my question is this: what do we do to these numbers in order to determine a global anomaly? Do we simply add them up and divide by the total, giving equal weight to each anomaly whether it occurs over land or over water? Or is there some sort of weighting done to account for the greater heat capacity of water?
Hank Roberts says
Walt, that sounds like the kind of problem that was being faced in the 1980s. A search for examples turned this up; it is just one place where people stated similar problems. I think you’d find the pictures revealing: they look at the globe and the oceans and how to lay a useful grid over the sphere, for example:
http://igitur-archive.library.uu.nl/phys/2001-0924-153925/hpcn.pdf
“The state of the art ocean models are based on logically rectangular grids which makes it difficult to fit the complex ocean boundaries. In this paper we demonstrate the use of unstructured triangular grids for solving a barotropic ocean model in spherical geometry with realistic continental boundaries …….resembles 2D grid … tested on a cluster of workstations and on an IBM SP-2….”
Of course the computer in your Walkman is probably more sophisticated than the ones they were using. But it still isn’t as simple as you’re trying to make it, I think.
Walt Bennett says
Re: #211,
Hank,
Simple is all I can offer, my friend :-)
And I do believe that I was, once again, less than clear. My question has nothing to do with models. On the 2D grid we write down actual measured anomalies in each box, averaged over a year or a month or whatever time period we are evaluating. My question is, how do we derive the average global anomaly from those numbers? Do we add them together and divide by the number of boxes, or are the boxes over water given a different weight than the boxes over land?
[Response: If the boxes are complete, each box is weighted proportional to its area, ocean and land boxes alike. This is an estimate of temperatures, not heat content, and so no weighting by heat capacity is required. There is a slight twist due to the incomplete and unequal coverage of the hemispheres. Some groups calculate the average NH and SH numbers separately, and then take the average to get the global mean. Since the SH has less coverage, that weights SH boxes slightly more strongly. – gavin]
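For readers wondering what the area weighting described in the response above looks like in practice, here is a minimal sketch. It is purely illustrative and is not the actual GISS or Hadley code; the grid layout, missing-box handling and hemisphere split are all simplified assumptions:

```python
import numpy as np

def global_mean_anomaly(anom, lats):
    """Area-weighted mean of a lat x lon anomaly grid; boxes are weighted by
    cos(latitude), i.e. proportional to their area.  Empty boxes are NaN and ignored."""
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(anom)
    w = np.where(np.isnan(anom), 0.0, w)
    return np.nansum(anom * w) / w.sum()

def global_mean_by_hemisphere(anom, lats):
    """Variant mentioned in the response: average NH and SH separately, then average."""
    nh = lats >= 0.0
    return 0.5 * (global_mean_anomaly(anom[nh], lats[nh]) +
                  global_mean_anomaly(anom[~nh], lats[~nh]))
```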
Jim Cripwell says
Ref 207. Walt. My understanding of the measurement of average global temperature anomalies is largely second hand. I simply cannot answer your questions. I have picked up bits and pieces of information from several different places. I have written to Environment Canada, asking which data set is “best”, but received a “political” answer, not a scientific one. I am going to try again in a few weeks, when I have all of the 2007 data. I believe that the three, NASA/GISS, HAD/CRU and NCDC/NOAA, use essentially the same data, collected by various means from all over the world. The RSS/MSU is different as it only uses the data from its own sensors. The three “massage” the data in different ways. The broad outline as to how each one does it has been explained. However, the “devil is in the details”, i.e. the computer codes as to precisely what is done to the data. [edit]
[Response: The computer code for NASA/GISS is available at http://data.giss.nasa.gov/gistemp/sources/ and the different methodologies were compared in Vose et al (2005) – gavin]
Timothy Chase says
Walt Bennett (210) wrote:
The central question is: what are you measuring?
If you were trying to measure heat content, then it would make sense to take into account heat capacity, but you aren’t, so you don’t. When you are measuring the global average temperature (or, to be more precise, the global average temperature anomaly), to a first approximation it doesn’t matter whether the temperatures being measured are over land or water.
If you could arrive at a continuous function such that each point on the globe had a specific numerical value, you would integrate over the surface, then divide by the total surface for an instantaneous value, or you would integrate over surface and time, then divide by the total surface and divide by the length of the period for which you are constructing the average. But that is only in principle since we have only a finite number of measurements.
To the extent that you use interpolation, preferably on the basis of simple rules which involve the least number of assumptions, you may take into account the difference between land and ocean, but only for the sake of estimating the missing data. NASA makes use of this a bit more than Hadley as we know that temperature anomalies are strongly correlated over relatively large distances. Hadley takes a more conservative approach.
Incidentally, if you are trying to calculate normalized measures, it is generally best to stay away from weighted averaging. Oftentimes you simply won’t know what to weight by.
I will use an example from my own experience in the study of highway performance. You should calculate everything in terms of aggregate measures such as vehicle*miles and vehicle*hours, then divide at the end to get your normalized measure e.g., average speed = vehicle miles traveled divided by vehicle hours of travel.
However, the state department of transportation I worked for was doing weighted averages by distance traveled. As such, using a single car that traveled 50 mph for 1 hour then 10 mph for 5 hours, they came up with 30 mph as the average speed since the length of each leg of the journey was the same. But the total miles were 100 and the time of travel was 6 hours, which means that the actual average speed was 16.67 mph over the entire length of the journey.
Now if you had weighted by hours of travel instead of miles of travel, you would arrive at the right answer. But the problem is you still wouldn’t know that weighting by time of travel was the right thing to do — unless you performed the calculation without the use of weighted averaging. So avoid the weighted averages. Chances are you will even improve the performance of whatever program you are writing to do the calculations.
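A quick numerical check of the example above (illustrative only; the leg speeds and durations are just the ones quoted in the comment):

```python
# Each leg: (speed in mph, hours driven).
legs = [(50.0, 1.0), (10.0, 5.0)]

# Aggregate first, divide last (the approach recommended above):
miles = sum(v * t for v, t in legs)   # 100 vehicle-miles
hours = sum(t for _, t in legs)       # 6 vehicle-hours
print(miles / hours)                  # ~16.67 mph, the true average speed

# Distance-weighted average of the leg speeds (the method being criticised):
dists = [v * t for v, t in legs]      # 50 miles per leg
print(sum(v * d for (v, _), d in zip(legs, dists)) / sum(dists))  # 30.0 mph
```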
Jeff Phillips says
Thanks for the intellectual debate. I’m positive that almost everyone who has commented is much smarter than I am. I have no scientific background, but without the time to read each comment I do have a question… Maybe you covered it and I missed it. It has a few parts to it.
How much oxygen producing vegetation (Global %) has been eliminated in the last 150 years? In the same time-frame, how much CO2 has been added to the atmosphere? How much heat trapping concrete/asphalt has replaced the vegetation? How many rooftops world-wide with heat trapping materials have been added since the dawn of the 20th century? Finally, does any of this matter to your models?
Or is this just a dumb question?
You don’t have to be polite. If it’s dumb tell me.
Hank Roberts says
Jeff, one place to start — where, and how, sunlight becomes life on the planet is the basis for it all. Much else there if you back up the directory tree and search on the terms you’ll find here; this helps put us and our part into proportion. We’ve really just slightly overbalanced a very large system, so far, and are wondering how far it can be pushed ….
http://www.globalchange.umich.edu/globalchange1/current/lectures/kling/energyflow/energyflow.html
Jeff Phillips says
Pushed until what?
Ray Ladbury says
Walt, I think you have fallen victim to the old dictum: A man with one watch always knows what time it is; a man with two is never sure. Of course, to a scientist, 2 watches working independently are better than one, as you can take an average and probably get closer to the actual time. 3 are better still, as you can start to estimate your errors. Given polar amplification, it is not surprising that Hadley’s estimates might undershoot actual warming. This is of course why Hadley and UAH are the models of choice in the denialosphere. However, the fact is that any reasonable analysis shows increasing temperature, regardless of what dataset you use. The watches are still running forward. Physics still works.
Hank Roberts says
> Pushed until what?
Jeff, for that, the Start Here link at the top of the page will get you started. Look at the scenarios at the IPCC too. I’m just another reader here (well, more talkative than many). Earlier topics here at RC will give you a lot of possible answers.
From the link I gave you:
“… What fraction of the terrestrial NPP do humans use, or, “appropriate”? It turns out to be a surprisingly large fraction. Let’s use our knowledge of ecological energetics to examine this very important issue. (Why NPP? Because only the energy “left over” from plant metabolic needs is available to nourish the consumers and decomposers on Earth.)….”
Rod B says
Just for the record, Ray (198). I see nothing wrong, scientifically, with us skeptics (or anyone) pointing out and pursuing what might look like chinks in the AGW armor. Some might even lead to an AHA! moment. Or at the least it could aid our understanding. And the incessant litany of the science being irrefutable is no deterrence. I would, however, agree with your criticism of my pals (??) who automatically proclaim that every perceived chink is, in fact, complete refutation of the theory. That’s silly and the criticism is rightly deserved.
Denialosphere???
Alan K says
#218
“Given polar amplification, it is not surprising that Hadley’s estimates might undershoot actual warming.”
Have you mentioned this to them? How can we have a model “out there” that consistently and transparently gets it wrong? Is this an oil-company-funded model? I thought models were inviolate?
In fact, I will ask them this myself; apologies in advance if they respond with requests for further reasons they are wrong as these requests will be passed directly on to this forum.
Ray Ladbury says
Rod, by all means if you see a shortcoming, point it out. What I condemn is the inability of many so-called skeptics to see results in the full context of the science. Basically, if you believe in the basic science of the greenhouse effect (and it is well established) there is no reason to assume it changes magically when CO2 reaches 280 ppmv or 560 ppmv and perhaps even >1000 ppmv. If that is so, then for the anthropogenic theory to be wrong, we would have to be missing a very significant piece of physics from the models. Even then, we would not necessarily be out of the woods, as the effects of CO2 can persist for thousands of years, and might well outlast any negative feedback, etc. we were to discover. It would be a reprieve, but not a pardon. We also have zero evidence of such missing physics. Indeed, the models perform quite well, suggesting we’re pretty close on the most important forcings.
The alternative is that we have greenhouse forcing all wrong. It can’t be just a little wrong, because it is independently constrained by many separate lines of evidence. If that is true, then our entire theory of the climate would have to be scrapped. This idea that a little twiddling around the edges will make the whole problem go away is simply wrong. Basically, anthropogenic causation of the current warming epoch is a simple consequence of the known physics of climate science. Saying it is wrong has consequences as serious as saying evolution is wrong.
Ray Ladbury says
Alan K, To quote George Box: “All models are wrong. Some models are useful.” Climate skeptics love the first part of the quote, but would prefer to forget the second part. Hadley and GISS take different approaches to the same problem–the paucity of measurements in polar regions. GISS uses extrapolation–itself prone to errors. Hadley just says, “We don’t know.” (Yes, Gavin et al., I know this is a gross simplification and welcome corrections, chastisement, etc.) Both are wrong. Both are useful. Independent of Hadley or GISS, we KNOW the polar regions are warming dramatically. What matters is that regardless of which dataset you are using, you get the same answer: We’re still warming. You may get slightly different answers, but the slope is still positive. The fact that you get the same answer with such independent and different analyses is itself a confirmation of the robustness of the result. Unlike the skeptics, we don’t have to cherrypick the dataset, period or analysis method to sculpt a particular fore-ordained conclusion. Real science is nice that way.
henry says
Where would I find a list (lat/lon) of the stations GISS uses to extrapolate the Arctic temps?
I’m wanting to see the amount of area covered by this “extrapolation”. Someone should be able to place a 1200km circle around these stations and show the area covered by GISS.
When I go to the GISS homesite, it says the data and programs can be downloaded, but can only be viewed or used if:
“The subdirectories contain software which you will need to compile, and in some cases install at particular locations on your computer. These include FORTRAN programs and C extensions to Python programs. Some Python programs make use of Berkeley DB files.”
[Response: I don’t think you need to do it all yourself to get the answers you want. Try going to the global maps page: http://data.giss.nasa.gov/gistemp/maps/ and look at the various options. If you choose the GISS analysis + HadI/Reyn SST you’ll see the Arctic area filled in from extrapolation of the met stations. If you select ‘none’ for the land analysis, you’ll see how much of the ocean area was filled in. – gavin]
Walt Bennett says
Re: #212 inline,
Gavin,
Perhaps I should be reading something on this subject so I am not burdening anybody here to teach me something they learned 20 years ago in college, so feel free to point me in a direction and shoo me away :-)
You have zeroed in on my question, sir. It does not seem intuitively correct to me, to take a surface temperature over open ocean and compare it to a surface temperature over land. I’m not sure exactly what properties would be different, but heat absorption seems to be a property which would be different and which would affect surface temp.
In other words, if I have the sun beating down on 1000 square miles of land, and the same amount of sun beating down on 1000 square miles of ocean, and of course atmosphere is identical in both cases, I would expect, over time, that the land would warm more than the ocean would, because the ocean will absorb and redistribute the heat more efficiently than land will.
I’m sure that there are other factors which are involved, which is why I suggest that you might want to recommend some reading I could do on a subject which I am sure has been well explored by others.
What this comes down to is, your answer was what I expected, based on the effect SSTs have on global average. However, it just doesn’t make sense to me that the properties are the same for each box.
Thanks for helping me understand this better.
[Response: Actually, there have been several decades of research on precisely this question. The predominant mechanism determining the appropriate averaging length scale is the horizontal mixing by large-scale atmospheric motion, and this is largely similar over land and over ocean. This leads to spatial correlation scales of between 1500 and 2000 km in annual mean surface temperature. You can find some discussion of this in Mann and Park (1994) [Mann, M.E., Park, J., Global scale modes of surface temperature variability on interannual to century time scales, Journal of Geophysical Research, 99, 25819-25833, 1994] (available as pdf here). Note the discussion on page 3, and in particular the references to decades of earlier related work by Livezey, Madden, North, and others. It should be noted that there is some regional variation. See e.g. Briffa and Jones (1993) [Briffa, K. R. and P. D. Jones (1993) “Global surface air temperature variations during the twentieth century: Part 2, implications for large-scale high-frequency palaeoclimatic studies.” The Holocene 3(1): 77-88], the abstract of which is as follows:
This paper is the second of a two-part series analysing details of regional, hemispheric and global temperature change during the twentieth century. Based on the grid box data described in Part I we present global maps of the strength of regional temperature coherence measured in terms of the correlation decay length for both annual and seasonal mean data. Correlation decay lengths are generally higher for annual rather than seasonal data; higher in the Southern compared to the Northern Hemisphere; and consistently higher over the oceans, particularly the Indian and central north Pacific oceans. Spatial coherence is relatively low in all seasons over the mid to high latitudes of the Northern Hemisphere and especially low in summer over the northern North Atlantic region. We also describe selected regional temperature series and examine the similarities between these and hemispheric mean data, placing emphasis on the nature of the relationships in different seasons…. See also Weber et al (1995) [Weber, R. O. and R. A. Madden (1995). “Optimal Averaging for the Determination of Global Mean Temperature: Experiments with Model Data.” Journal of Climate 8: 418-430], abstract of which follows:
Optimal averaging is a method to estimate some area mean of datasets with imperfect spatial sampling. The accuracy of the method is tested by application to time series of January temperature fields simulated by the NCAR Community Climate Model. Some restrictions to the application of optimal averaging are given. It is demonstrated that the proper choice of a spatial correlation model is crucial. It is shown that the optimal averaging procedures provide a better approximation to the true mean of a region than simple area-weight averaging does. The inclusion of measurement errors of realistic size at each observation location hardly changes the value of the optimal average nor does it substantially alter the sampling error of the optimal average.
There is much more to read on all of this in the references contained within the papers cited above. -mike]
Walt Bennett says
General note:
We seem to be confusing models and measurements. My line of inquiry deals only with measurements, the differences between them, and how they are applied.
Imran says
#185
Figen – I’m not sure there is any need to be so defensive about this. I agree with Jim and Walt about this. When looking at different data sets I also get different results. I also have done the calculations and made other graphs and can clearly get to different conclusions. If you use the Hadley data you can see basically flat global temperatures for 7 years, and if you look at the Hadley sea data the global average SS temperatures are definitively in decline. Additionally, if you plot against the IPCC 2001 predictions it’s transparent that their predictions were overestimated in the short term. I would love to post this graph here – I have asked for an e-mail address to send it to (see #157) but no reply. Just tell me how to post it or where to send it. It’s not about proving anything – it’s about having the intellectual curiosity to try and understand these differences.
Hank Roberts says
Notice how hard it is to stay on topic? Wonder why?
Walter Pearce says
Off topic, congratulations to Gavin on a terrific performance on today’s Diane Rehm show. As a member of the lay audience I felt I got a clear sense of what we know, what we don’t know, the magnitude of the task ahead and the importance of starting now.
Alan K says
#222
“Notice how hard it is to stay on topic? Wonder why?”
good point Hank – the original post concerns model projections. Can everyone pls try to stay focused.
B Buckner says
Walt,
In following the links provided by Tim Chase above, I was surprised to learn that both GISS and Hadcru use the sea surface temperatures to obtain the anomalies; that is, the change in the temperature of the water itself is used to obtain the anomaly, not the temperature of the air immediately above the surface of the water.
Ray Ladbury says
Walt, measurements don’t exist independent of models. If I take a measurement, I want to know what kinds of random and systematic errors it may be subject to. Right there, you are already talking about modeling.
Arch Stanton says
More off topic (point of order):
It seems to me the topic concerns the interface between models and measurements.
Ike Solem says
If self-styled “skeptics” [edit] were really interested in getting accurate data about the Earth’s climate system, they’d be advocating for increased data collection. For example, they’d be pushing hard for the Deep Space Observatory.
See Why did NASA kill a climate change project? IHT 2006
They’d also be advocating for a much larger ocean temperature and current monitoring program based on direct measurements. For example, see Call For Network To Monitor Southern Ocean Current, 2007, ScienceDaily
There is now a moored North Atlantic monitoring system, which has already revealed that there are large variations in ocean circulation.
Instead, “skeptics” troll through existing datasets, looking for time periods that they can use to promote their pre-determined conclusion – “global warming is minimal, is not caused by human use of fossil fuels, and will be a good thing anyway.”
[edit – please no personal remarks]
Timothy Chase says
Re: Jim Galasyn (191) on the Ice Sheets on the West Antarctic Peninsula
There is a discussion here with Eric Rignot…
Transcript
Science: Climate Change Impact on Antarctica
Marc Kaufman and Eric Rignot
Washington Post Staff Writer and NASA Scientist
Monday, January 14, 2008; 12:00 PM
http://www.washingtonpost.com/wp-dyn/content/discussion/2008/01/13/DI2008011301886.html
Incidentally, most of the stories I have seen on this are talking about the ice loss as if it’s just glaciers and don’t seem to understand that it’s ice sheets that are becoming unstable.
Gaelan Clark says
In trying to understand the models that the IPCC uses in their assessments and predictions–am I understanding this properly?…the models assume a baseline on all values, and holding only those values constant, they then force into the models increased CO2 and then out comes the result—temperature increases….
But, the models are predicting up to 100 years into the future—do these models hold all of these values constant over this 100 year span while only CO2 remains the “un”-constant???—When in the history of this planet has the atmosphere remained constant for up to 100 years?
[Response: Over the 20th Century, the models assume up to 14 or so different things changing – not just CO2 (but CH4, aerosols, ozone, volcanoes, land use, solar, etc etc.). CO2 is big part, but it is not the only thing going on. Similarly, future scenarios adjust all the GHGs and aerosols and ozone precursors etc. They don’t make assumptions about solar since there is no predictability for what that will be, and although some experiments throw in a few volcanoes now and again, that too is unpredictable. What would you have the modellers do instead? – gavin]
Gaelan Clark says
Gavin, Thank you for your reply.
I have no idea what I would have the modellers do, and I do not presume to have any answers… BUT, I will tell you what I would have the modellers, and the IPCC, do: stop scaring the public with the top end of the range of possible outcomes of increased CO2 scenarios, and stop saying that the science is settled.
Even without your PhD I can see that there are many questions still to be resolved–for instance as you posit–Solar Radiance-a HUGE “what if”-, and possibly others that are not being discussed here—again, I don’t know–but I do know that you do.
—Quick question, not to nit-pick, what do you mean by “…14 or so different things changing – not just CO2 (but CH4, aerosols, ozone, volcanoes, land use, solar, etc etc.).”…furthering…”They don’t make assumptions about solar…”??
[Response: Where have you read that I have stated that the ‘science is settled’? If I thought that, I wouldn’t still be a scientist. That kind of statement is instead a strawman characterisation of what scientists are saying, and at maximum reflects only the basic consensus and certainly not 90% of what it is that scientists are actually doing. For many purposes the outstanding points of contention are not relevant to most people – which is why 90% of papers on climate don’t get a press release, but there is a lot that is known, and to make that clear is completely appropriate. To answer your last question about solar in future simulations, the assumption is that there will be no change in the long term irradiance. That isn’t likely to be correct but there is no good reason to think it will be higher or lower in the future. If we get better predictions, then we’ll use them – But given current understanding, no reasonable changes in solar are likely to change the underlying prognosis. – gavin]
Walt Bennett says
Re: #231,
That is interesting. Mike was kind enough to lob copious references at me, so now I have some homework to do.
Why I seem intent on delving into something that will, in all likelihood, turn out to be well investigated…I can’t answer that.
Except to say that so far, I don’t know enough to say that my questions have been answered. So, I will see what I learn from Mike’s references. Thanks Mike, for taking the time.
With regard to the folks who consider it their duty to keep a thread “on-topic” I will ask, who are we hurting by exploring tangentially related questions, and is not the quest for knowledge slightly more important than sticking to a specific agenda? This is a blog, after all, not a 90 minute symposium.
JCH says
A hiccup in the “it’s not happening” Denialosphere?
http://www.forbes.com/fdc/welcome_mjx.shtml
Marcus says
Gaelan: If you look at the variability of atmospheric concentrations over the past several thousand years, it is tiny compared to the changes in the past hundred.
If you look at the variation in solar forcing (as well as we understand it) over the past several thousand years, it is small compared to the changes we expect due to anthropogenic forcings over the next hundred years.
If you look at volcanoes over the past couple hundred years, while they make a difference not too much smaller than anthropogenic changes, those differences last only a year or two.
So, effectively, the assumption that the natural forcing remains constant is a fairly good one.
(note that some models do try to account for the interactions between changing atmosphere, climate, and precipitation and ecosystems and oceans. This is hard, though: for example, in ecosystem modeling, most work shows the biosphere taking up a lot more carbon due to carbon fertilization: however, more recently, modelers have begun to realize the importance of nitrogen limitation in carbon rich futures… and precipitation is very important for ecosystems, and poorly predicted… and the effects of temperature change on ocean mixing is still not well understood…)
Walt Bennett says
Re: #237
Gaelen,
I think it’s great that you have come to RC and that you want to participate in this incredibly important discussion. You seem to understand one thing very well: Gavin and his peers know a lot more about this stuff than we do.
I want to ask you, though: why should they spend their time defending their motives? Don’t we want them busy doing the work? Aren’t we grateful enough that they spend the time answering our science questions?
I predict that when you ask somebody about their motives, that person will defend those motives. So, why not skip that and make up your mind based on the content of the information Gavin, Mike, Stefan and others post here.
Isn’t it great that we are on a first-name basis with the who’s who of climate science?
Let’s not take too liberal advantage of that, and ask them to spend time defending their practices. As I said, when we have science questions, they answer them.
That is a wonderful thing, and I hope you agree.
Just my two cents…
Daniel C. Goodwin says
Re: 234, “trolling through datasets”
Of course I agree that every kind of data collection investment is urgently imperative, but on the subject of trolling through existing datasets, and also somewhat in the original spirit of “comparing like with like” – here is a data exercise any child could perform:
In the global surface temperature dataset referenced by the map-maker at http://data.giss.nasa.gov/gistemp/maps/ there are four previous Decembers which scored a global monthly mean anomaly around +0.39 (that of Dec 2007): 1939, 1979, 1990 and 2002. Visual data is extremely powerful – I like the polar projections, as the epicenters of our situation would appear to be the poles. The dynamics of the zonal mean line are also very interesting. It’s a striking progression, and you can draw your own conclusions from a dataset which has weathered extreme scrutiny.
Gaelan Clark says
#240–Thank you very much for your reply. I now understand the modelling concept much better.
#241–I have never presumed to question anyone’s “motives”, yet their “practices” are indeed a fundamental part of the “science.” And I am trying feverishly to understand the content, but my lack of physics (and any number of other science classes) relegates me to the sidelines until I get the courage to ask a question–obviously I am asking because I want to expand my limited knowledge.
Dr. Schmidt has been kind enough to deliver to me and others answers to our questions, albeit in a way that I sometimes do not understand, so I take lessons from the references and learn more through that.
Again–I do not care about motive, but the application of the practice is of great concern–for instance, how does one extrapolate temperature from bristlecone pines?–Further, by this temperature proxy–that may or may not be correct–how does one say that current warming is unprecedented?
[edit]
I simply see no motive from any scientists, though, that would make such inferences–I would like very much to know their practice of science that leads them to this answer. Of that, I have found no real proofs; I am still searching, and hopefully can be pointed in the right direction, possibly by you.
Thank you in advance.
Gaelan Clark says
Thank you Dr. Schmidt, I really do appreciate your time.
And, I am just parroting the news on the “science is settled” thing; I never meant to imply that you said such a thing.
There are so many like myself, that want to know more about the science but are limited by our own choice of class load during our school years.
I am looking into taking some basic science classes at my alma mater, USF–Sun and Fun in Tampa, FL—do you have any suggestions for me that would help me understand the basic science behind what you are doing?
[Response: Not specifically, but many schools do a 201 level intro to climate or atmospheric science – those are usually a good start (though you’ll need to go further to really get a handle on the physics or modelling aspects). If you want to self study working though ‘A climate modelling primer’ by McGuffie and Henderson-Sellers or Houghton’s Physics of Atmospheres would be helpful. – gavin]
Walt Bennett says
Re: 243
Gaelen,
Excellent question and indeed the basis for my own interest.
I will point you in a direction:
The Discovery of Global Warming
http://www.aip.org/history/climate/index.html
This is a very comprehensive history of the research that went into current understanding of the greenhouse effect in general and AGW in particular. It is written at a level for a layman to understand, and it is well-referenced.
I hope you take my comment in the spirit intended, that we do not want to abuse the great opportunity we have to sit this close to scientists doing science.
Jim Eager says
Re Gaelan Clark @ 237: “for instance as you posit–Solar Radiance-a HUGE “what if”
Is it? A huge variable, I mean. Solar radiation falling on Earth’s surface, or insolation, is of course the source of 99.9% of Earth’s energy budget, so solar insolation itself is huge. And the amount of insolation does vary by both predictable means and unpredictable means. The predictable means include a very slowly increasing solar constant, and very slow Milankovitch orbital and rotational cycles. Neither is appreciably in play on a decadal or even century scale, so they can be discounted for short term modeling. There may or may not be other long-term periodic variations in solar radiance, but until their existence and effect are proven and quantified there is no way to incorporate their impact in any model.
Unpredictable means include sunspot cycle, which is at least semi-regular, so we have a known range of variation, aerosol injections from periodic volcanic eruptions, which are included in some models, and cloud distribution, which is not yet fully understood in terms of its net forcing. So how large are these unpredictable variations and how do they compare to known greenhouse gas and other forcings? Can we legitimately call any of them “huge?”
Chris Colose says
Gaelan,
In addition to what gavin has written, you also need to be reasonable on how much natural variability will change. The increased secular trend in solar irradiance from 1900-1950 or so was actually rather high compared to the Holocene, and yet is considerably smaller than the radiative forcing for CO2 (as shown in this graph- http://www.greenfacts.org/en/climate-change-ar4/images/figure-spm-2-p4.jpg ). Most likely, solar will decrease a bit, but that will be trivial next to rising GHG’s… there is no simulation of a coupled ocean-atmosphere-ice phenomenon that is going to suddenly spurt out a change like that of 2x CO2 in Holocene-like conditions. Now if you want to argue that solar will decrease by a few percent, then we can discuss, but I don’t really think anyone is going to work on wishful thinking. Volcanoes, El Niño and such things generally work on short timescales, and so the signal as we approach 2x to 3x CO2 is still going to be there.
Projection modelling is not making a ‘prediction,’ it is making a ‘projection.’ Obviously Gavin cannot predict that a New York sized asteroid might hit Earth in 2015 and substantially influence climate on a long time scale. In this instance, it probably wouldn’t be too relevant what the IPCC has to say right now. But the models say that if we reach 2x CO2, and all other things are equal (except feedbacks), you’ll get ~ 3 C of warming. If all other things aren’t equal (but, as he noted, there are scientific and socio-economic projections for those things as well, along with different “emission scenarios” which we can control), then you need to factor in those effects, but unless you get a huge solar dimming, or the asteroid, CO2 is very likely to be the predominant forcing agent over this century.
Richard Ordway says
Roger Pielke Jr. says:
11 January 2008 at 10:53 AM
Gavin Schmidt vs. Roger Pielke
For readers who think “two equal scientists are bickering here” … please look at the peer-reviewed published data which is analyzed by the world-wide scientific community… and is in your public library for your pleasure… i.e. Nature Journal, Science Journal, etc.
Roger Pielke’s evidence does not stand up under world-wide peer review scrutiny… please look it up for yourselves… with the help of the librarian if you wish.
Note: Industry has recently bought a few lesser known journals which try to legitimize their “global warming is false” idea…your librarian should be able to tell you which is which.
erikG says
General Question about temperature Measurements:
First: If global measured temps fall in a given year, do we believe that the atmosphere actually lost heat over that time period? Or is some variation caused by measurement error? (i.e., the heat is still around, just not where we happen to measure it).
Thanks to anyone who has the knowledge and the time to answer.
-Erik
[Response: For the most part the atmosphere will have lost energy over that period. – gavin]
Ray Ladbury says
Gaelan, Are you familiar with the open courseware project at MIT? Their goal is to put course materials on-line for every course they teach. Here’s the website:
http://ocw.mit.edu/OcwWeb/web/home/home/index.htm
and a more specific link that looked good:
http://ocw.mit.edu/OcwWeb/Earth--Atmospheric--and-Planetary-Sciences/12-301Fall-2006/CourseHome/index.htm
Also, the course that Hank linked to above:
http://www.globalchange.umich.edu/globalchange1/current/lectures/kling/energyflow/energyflow.html
I actually worked with George Kling (the prof) writing up his research on Lake Nyos, the volcanic lake in Cameroon that belched out a bunch of CO2 and suffocated several villages. He’s a good guy.
If you can tell me your background, maybe I can come up with other resources. After getting my PhD in physics, I never wanted to take a class again. I prefer to learn on my own.