Gavin Schmidt and Stefan Rahmstorf
John Tierney and Roger Pielke Jr. have recently discussed attempts to validate (or falsify) IPCC projections of global temperature change over the period 2000-2007. Others have attempted to show that last year’s numbers imply that ‘Global Warming has stopped’ or that it is ‘taking a break’ (Uli Kulke, Die Welt). However, as most of our readers will realise, these comparisons are flawed since they basically compare long-term climate change to short-term weather variability.
This becomes immediately clear when looking at the following graph:
The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well), while the blue lines are 8-year trend lines – one for each 8-year period of data in the graph. What it shows is exactly what anyone should expect: the trends over such short periods are variable; sometimes small, sometimes large, sometimes negative – depending on which year you start with. The mean of all the 8 year trends is close to the long term trend (0.19ºC/decade), but the standard deviation is almost as large (0.17ºC/decade), implying that a trend would have to be either >0.5ºC/decade or much more negative (< -0.2ºC/decade) for it to obviously fall outside the distribution. Thus comparing short trends has very little power to distinguish between alternate expectations.
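As a rough illustration, the calculation behind the figure can be sketched in a few lines. This is a minimal sketch using a synthetic stand-in series with a similar trend and noise level, not the actual GISTEMP file:

```python
import numpy as np

# Synthetic stand-in for annual global-mean anomalies (deg C); the real
# GISTEMP series would be substituted here. 0.019 C/yr forced trend + noise.
rng = np.random.default_rng(0)
years = np.arange(1975, 2008)
anoms = 0.019 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

window = 8
trends = []
for i in range(years.size - window + 1):
    slope = np.polyfit(years[i:i + window], anoms[i:i + window], 1)[0]
    trends.append(slope * 10.0)  # deg C/yr -> deg C/decade

trends = np.array(trends)
print(f"mean 8-yr trend: {trends.mean():+.2f} C/decade")
print(f"std of 8-yr trends: {trends.std():.2f} C/decade")
```

The printed spread of 8-year trends is comparable to the 0.17ºC/decade quoted above, which is the point: over 8 years the ‘weather’ term dominates the trend estimate.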
So, it should be clear that short term comparisons are misguided, but the reasons why, and what should be done instead, are worth exploring.
The first point to make (and indeed the first point we always make) is that the climate system has enormous amounts of variability on day-to-day, month-to-month, year-to-year and decade-to-decade periods. Much of this variability (once you account for the diurnal cycle and the seasons) is apparently chaotic and unrelated to any external factor – it is the weather. Some aspects of weather are predictable – the location of mid-latitude storms a few days in advance, the progression of an El Niño event a few months in advance, etc. – but predictability quickly evaporates due to the extreme sensitivity of the weather to the unavoidable uncertainty in the initial conditions. So for most intents and purposes, the weather component can be thought of as random.
If you are interested in the forced component of the climate – and many people are – then you need to assess the size of an expected forced signal relative to the unforced weather ‘noise’. Without this, the significance of any observed change is impossible to determine. The signal-to-noise ratio is actually very sensitive to exactly what climate record (or ‘metric’) you are looking at, and so whether a signal can be clearly seen will vary enormously across different aspects of the climate.
An obvious example is looking at the temperature anomaly in a single temperature station. The standard deviation in New York City for a monthly mean anomaly is around 2.5ºC, for the annual mean it is around 0.6ºC, while for the global mean anomaly it is around 0.2ºC. So the longer the averaging time-period and the wider the spatial average, the smaller the weather noise and the greater chance to detect any particular signal.
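A toy simulation shows the averaging effect. If monthly anomalies were independent, the annual mean’s standard deviation would shrink by a factor of √12; real anomalies are autocorrelated, so the actual reduction differs, but the principle is the same:

```python
import numpy as np

rng = np.random.default_rng(1)
# 10,000 synthetic 'years', each with 12 independent monthly anomalies of
# sigma = 2.5 C (the single-station monthly value quoted above).
monthly = rng.normal(0.0, 2.5, size=(10_000, 12))
annual = monthly.mean(axis=1)

print(f"monthly anomaly std: {monthly.std():.2f} C")   # ~2.5
print(f"annual-mean std:     {annual.std():.2f} C")    # ~2.5/sqrt(12) ~ 0.72
```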
In the real world, there are other sources of uncertainty which add to the ‘noise’ part of this discussion. First of all there is the uncertainty that any particular climate metric is actually representing what it claims to be. This can be due to sparse sampling or it can relate to the procedure by which the raw data is put together. It can either be random or systematic and there are a couple of good examples of this in the various surface or near-surface temperature records.
Sampling biases are easy to see in the difference between the GISTEMP surface temperature data product (which extrapolates over the Arctic region) and the HADCRUT3v product which assumes that Arctic temperature anomalies don’t extend past the land. These are both defendable choices, but when calculating global mean anomalies in a situation where the Arctic is warming up rapidly, there is an obvious offset between the two records (and indeed GISTEMP has been trending higher). However, the long term trends are very similar.
A more systematic bias is seen in the differences between the RSS and UAH versions of the MSU-LT (lower troposphere) satellite temperature record. Both groups are nominally trying to estimate the same thing from the same data, but because of assumptions and methods used in tying together the different satellites involved, there can be large differences in trends. Given that we only have two examples of this metric, the true systematic uncertainty is clearly larger than simply the difference between them.
What we are really after is how to evaluate our understanding of what’s driving climate change as encapsulated in models of the climate system. Those models though can be as simple as an extrapolated trend, or as complex as a state-of-the-art GCM. Whatever the source of an estimate of what ‘should’ be happening, there are three issues that need to be addressed:
- Firstly, are the drivers changing as we expected? It’s all very well to predict that a pedestrian will likely be knocked over if they step into the path of a truck, but the prediction can only be validated if they actually step off the curb! In the climate case, we need to know how well we estimated forcings (greenhouse gases, volcanic effects, aerosols, solar etc.) in the projections.
- Secondly, what is the uncertainty in that prediction given a particular forcing? For instance, how often is our poor pedestrian saved because the truck manages to swerve out of the way? For temperature changes this is equivalent to the uncertainty in the long-term projected trends. This uncertainty depends on climate sensitivity, the length of time and the size of the unforced variability.
- Thirdly, we need to compare like with like and be careful about what questions are really being asked. This has become easier with the archive of model simulations for the 20th Century (but more about this in a future post).
It’s worthwhile expanding on the third point since it is often the one that trips people up. In model projections, it is now standard practice to do a number of different simulations that have different initial conditions in order to span the range of possible weather states. Any individual simulation will have the same forced climate change, but will have a different realisation of the unforced noise. By averaging over the runs, the noise (which is uncorrelated from one run to another) averages out, and what is left is an estimate of the forced signal and its uncertainty. This is somewhat analogous to the averaging of all the short trends in the figure above, and as there, you can often get a very good estimate of the forced change (or long term mean).
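Here is a minimal sketch of that ensemble logic, using a toy model (a prescribed linear forced trend plus AR(1) ‘weather’, not output from any actual GCM). Individual runs show a wide spread of trends, while the ensemble-mean trend recovers the forced signal much more tightly:

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_years = 20, 30
years = np.arange(n_years)
forced = 0.02  # deg C/yr, identical in every run

runs = np.empty((n_runs, n_years))
for r in range(n_runs):
    noise = np.zeros(n_years)
    for t in range(1, n_years):
        noise[t] = 0.5 * noise[t - 1] + rng.normal(0.0, 0.1)
    runs[r] = forced * years + noise  # same forcing, different 'weather'

run_trends = np.array([np.polyfit(years, run, 1)[0] for run in runs])
ens_trend = np.polyfit(years, runs.mean(axis=0), 1)[0]

print(f"single-run trends: {run_trends.mean():.3f} +/- {run_trends.std():.3f} C/yr")
print(f"ensemble-mean trend: {ens_trend:.3f} C/yr")
```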
Problems can occur though if the estimate of the forced change is compared directly to the real trend in order to see if they are consistent. You need to remember that the real world consists of both a (potentially) forced trend but also a random weather component. This was an issue with the recent Douglass et al paper, where they claimed the observations were outside the mean model tropospheric trend and its uncertainty. They confused the uncertainty in how well we can estimate the forced signal (the mean of all the models) with the distribution of trends+noise.
This might seem confusing, but a dice-throwing analogy might be useful. If you have a bunch of normal dice (‘models’) then the mean point value is 3.5 with a standard deviation of ~1.7. Thus, the mean over 100 throws will have a distribution of 3.5 +/- 0.17 which means you’ll get a pretty good estimate. To assess whether another die is loaded it is not enough to just compare one throw of that die. For instance, if you threw a 5, that is significantly outside the expected value derived from the 100 previous throws, but it is clearly within the expected distribution of single throws.
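The distinction is easy to check numerically, e.g.:

```python
import numpy as np

rng = np.random.default_rng(3)
throws = rng.integers(1, 7, size=(100_000, 100))  # 100,000 experiments of 100 throws
means = throws.mean(axis=1)

print(f"single-throw std:       {throws.std():.2f}")   # ~1.71 = sqrt(35/12)
print(f"std of 100-throw means: {means.std():.3f}")    # ~0.17

# A throw of 5 is only ~0.9 sigma above 3.5 in the single-throw distribution
# (unremarkable), but ~9 sigma above 3.5 in the distribution of 100-throw
# means. Comparing an individual realisation against the uncertainty of the
# mean is exactly the confusion described above.
```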
Bringing it back to climate models, there can be strong agreement that 0.2ºC/dec is the expected value for the current forced trend, but comparing the actual trend simply to that number plus or minus the uncertainty in its value is incorrect. This is what is implicitly being done in the figure on Tierney’s post.
If that isn’t the right way to do it, what is a better way? Well, if you start to take longer trends, then the uncertainty in the trend estimate approaches the uncertainty in the expected trend, at which point it becomes meaningful to compare them since the ‘weather’ component has been averaged out. In the global surface temperature record, that happens for trends longer than about 15 years, but for smaller areas with higher noise levels (like Antarctica), the time period can be many decades.
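The reason is visible in the standard error of a least-squares trend, which for white noise of standard deviation σ is σ/√(n(n²−1)/12) and so falls off as n^(−3/2) with record length n. Here is a sketch assuming interannual noise of 0.1ºC; real annual data are autocorrelated, which pushes the crossover out to roughly the 15 years quoted above:

```python
import numpy as np

sigma = 0.1    # assumed interannual noise in annual global means (deg C)
signal = 0.19  # expected forced trend (deg C/decade)

for n in (8, 10, 15, 20, 30):
    sxx = n * (n**2 - 1) / 12.0   # sum of squared deviations of the years
    se = sigma / np.sqrt(sxx)     # OLS slope standard error (white noise)
    print(f"{n:2d} yr: 2*se = {2 * se * 10:.2f} C/decade  (signal = {signal} C/decade)")
```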
Are people going back to the earliest projections and assessing how good they are? Yes. We’ve done so here for Hansen’s 1988 projections, Stefan and colleagues did it for CO2, temperature and sea level projections from the IPCC TAR (Rahmstorf et al, 2007), and the IPCC itself did so in Fig 1.1 of AR4 Chapter 1. Each of these analyses shows that the longer-term temperature trends are indeed what is expected. Sea level rise, on the other hand, appears to be under-estimated by the models for reasons that are as yet unclear.
Finally, this subject appears to have been raised from the expectation that some short-term weather event over the next few years will definitively prove that either anthropogenic global warming is a problem or it isn’t. As the above discussion should have made clear, this is not the right question to ask. Instead, the question should be, are there analyses that will be made over the next few years that will improve the evaluation of climate models? There the answer is likely to be yes. There will be better estimates of long term trends in precipitation, cloudiness, winds, storm intensity, ice thickness, glacial retreat, ocean warming etc. We have expectations of what those trends should be, but in many cases the ‘noise’ is still too large for those metrics to be a useful constraint. As time goes on, the noise in ever-longer trends diminishes, and what gets revealed then will determine how well we understand what’s happening.
Update: We are pleased to see such large interest in our post. Several readers asked for additional graphs. Here they are:
– UK Met Office data (instead of GISS data) with 8-year trend lines
– GISS data with 7-year trend lines (instead of 8-year).
– GISS data with 15-year trend lines
These graphs illustrate that the 8-year trends in the UK Met Office data are of course just as noisy as in the GISS data; that 7-year trend lines are of course even noisier than 8-year trend lines; and that things start to stabilise (trends getting statistically robust) when 15-year averaging is used. This illustrates the key point we were trying to make: looking at only 8 years of data is looking primarily at the “noise” of interannual variability rather than at the forced long-term trend. This makes as much sense as analysing the temperature observations from 10-17 April to check whether it really gets warmer during spring.
And here is an update of the comparison of global temperature data with the IPCC TAR projections (Rahmstorf et al., Science 2007) with the 2007 values added in (for caption see that paper). With both data sets the observed long-term trends are still running in the upper half of the range that IPCC projected.
Dan Hughes says
Along the lines of #144 above: what testable hypotheses are at the foundations of AGW with respect to the theoretical basis, mathematical models, numerical solution methods, and validation of the applications of interest? With respect to the latter, applications whose results might impact the health and safety of the public, via changes in public policy on energy generation and consumption, are of special interest.
Thanks
Hank Roberts says
Dan, looking at your ‘auditblogs’ page, I’d suggest the place to begin is the assumption that “foundations” exist for scientific work. Literally, that means some original work that, if shaken, will cause later work to collapse.
Are you looking for something like the assumption that the Earth was the center of the universe, which when removed left the whole complicated structure of epicycles up in the air?
On your website you are questioning the Navier-Stokes equation — that’s a description, not a foundation. Like Newton’s gravity, another description that worked well enough for a long time and is still sufficient for construction work if not for rocket science.
This sort of extension and improvement is common; the early work is usually superseded in science, e.g.:
V. Christianto and F. Smarandache, “An Exact Mapping from Navier-Stokes Equation to Schrodinger Equation via Riccati Equation” [PDF], Progress in Physics, 2008 – ptep-online.com
Are you really looking for a “foundation” that can’t be shaken without changing everything we know now? It seems improbable to me.
John N-G says
#124 Bryan S says: A steak dinner at your favorite steakhouse if the OHC gain for 2004, 2005, 2006, and 2007 turns out to be more than “statistically insignificant” in any refereed paper on the issue.
Given the expected year-to-year natural variability, inaccuracies in measurements, and sampling errors, it seems plausible that published OHC gains that exactly matched the projections would nevertheless be “statistically insignificant”. If so, the bet is meaningless and we’re back to the original topic of this post.
It seems a common misconception in general that observed trends must either be insignificant or in agreement with projections. In the short term, they can (and usually will) be both.
Fair weather cyclist says
A fact about the NH cannot refute a statement pertaining to a global average.
Nick Gotts says
Re #152 (Hank Roberts) With regard to scientific foundations, I think there are beliefs which could be undermined by empirical evidence, and which are essential supports for all scientific endeavour, but whether any of these beliefs are themselves part of science, I’m not sure. One such belief is that there is no superhuman agency manipulating the evidence we gather. Suppose we were to discover that an alien intelligence far beyond our own had been monitoring us for the last million years, and had in some cases intervened to change the results of experiments? Timothy, as unofficial philosopher-in-residence, what’s your view on this question of foundations?
Barton Paul Levenson says
Walt Bennett writes:
[[The point, if there was one, to my question was, is it possible that we are getting it wrong with regard to SST cooling? Hadley’s own graph shows that the ocean tends to warm when land warms. Why in the last two years has that not been the case? ]]
What makes you think two years is a meaningful sample size?
Imran says
#147 : Gavin, Stefan – thanks for your comments – and I take the good point about alarmist vs. alarming. One observation: your two responses are slightly contradictory – if the data is in line with projections (as per Gavin’s comment), why do we need to invoke the seasonal weather variability analogy (as per Stefan’s comment)? Good analogy by the way.
I have made some plots of the HADCRUT global data vs. the IPCC 2001 predictions and I would like to share them with you for your opinion if possible – personally I really struggle to see how the 2001 predictions can be considered anything other than an overestimation in the short term. Have you got an e-mail address I could send to? Thanks.
Barton Paul Levenson says
After losing one post, I painstakingly retyped it, only to have it rejected as spam — with, of course, no way to tell WHAT IN IT constituted the “spam.” This is getting very annoying. There’s no way to tell how many people have quit trying to post here because of this sort of thing.
Antti says
Besides the prediction that “average global temperature will increase by 0.2 C / decade”, isn’t there any other way to test how good the climate models are at modelling the CO2 effects? Is there any way to measure, e.g., how atmospheric radiation changes with yearly changes in the CO2 concentration?
Timo Hämeranta says
All, about the Ocean Heat Content (OHC), please see the following new study:
van der Swaluw, E., S. S. Drijfhout, and W. Hazeleger, 2007. Bjerknes Compensation at High Northern Latitudes: The Ocean Forcing the Atmosphere. Journal of Climate Vol. 20, No 24, pp. 6023–6032, December 2007, preprint online http://www.knmi.nl/publications/fulltexts/vdswaluw.pdf
For those of you who don’t know how OHC is estimated I copy a bit:
“1. Introduction
The heat transport from the equator to the poles, through the atmosphere and the ocean, contributes to the maintenance of the quasi-equilibrium heat budget on earth. The total meridional heat flux can be calculated by integrating the observed net radiative fluxes
(=the difference between the absorbed short-wave minus the emitted long-wave radiative flux) at the top of the atmosphere (TOA). In order to split up the total heat transport into its two components, one generally estimates the atmospheric heat transport from atmospheric observations and attributes the residual to the oceanic heat transport. Most studies use this indirect way of estimating the oceanic heat transport (however, see Wunsch (2005) for a reversed approach), since direct estimates from oceanic observations are sparse….”
Fred Staples says
What a pleasure, Hank, (142) to write about a topic that I know something about. Nuclear Reactors are (or were in my day) controlled by statistical inference (you can’t measure the temperatures everywhere).
The F-test is the simplest form of analysis of variance (ANOVA), which is the basis for much of statistical testing. Any set of data will have a mean and a variance, computed from the sum of the squares of the differences from the mean.
If you believe that your data, temperature in this case, is increasing with time, you can substitute a regression line for the average, re-calculate the variance about the line, and see how much of the original variance the line explains. Taking into account the number of observations (relatively few observations need a very tight fit to be significant, relatively many can be more scattered), the F value is the ratio of the “explained” variance to the remaining variance.
Assuming that the data is normally distributed, the tables of F values give the probability that the line has arisen by chance. To accept the trend line, and consequently to look for a physical explanation, you usually ask for a chance probability of less than 1 in twenty (p less than 0.05). If you want a reference from my bookshelf, Hank, try Chapter 3 of “Using Multivariate Statistics” by Tabachnick and Fidell, or the more descriptive “Introductory Statistics for Business and Economics” by T. and R. Wonnacott, page 484.
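A minimal sketch of the single-predictor F-test described here, run on synthetic monthly anomalies rather than the actual UAH series (note that the test assumes independent residuals; autocorrelated monthly temperature data violate this and inflate apparent significance):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
months = np.arange(240)  # 20 years of synthetic monthly anomalies
temps = 0.0012 * months + rng.normal(0.0, 0.2, months.size)

slope, intercept = np.polyfit(months, temps, 1)
fitted = slope * months + intercept

ss_total = np.sum((temps - temps.mean()) ** 2)
ss_resid = np.sum((temps - fitted) ** 2)
df_model, df_resid = 1, months.size - 2
F = ((ss_total - ss_resid) / df_model) / (ss_resid / df_resid)
p = stats.f.sf(F, df_model, df_resid)  # chance probability of the fit

print(f"trend: {slope * 120:.2f} C/decade, F = {F:.1f}, p = {p:.2g}")
```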
The UAH data is from http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt.
Am I taking out the 1998 peak, Hank (143)? Of course not. For a first approach you must use all the data, and you must be aware of two possible errors: trends which appear significant, which are not, and trends which do not appear significant, which are.
So, to summarise my results:
Overall, from 1978 until today, the trend line of 1.43 degrees centigrade per century is significant. The probability of that trend arising by chance is infinitesimal.
But, there was no significant increase from 1978 until December 1995, and there has been no significant increase from July 1997 to date.
Between the two trend line means, there was a step increase of 0.28 degrees centigrade in just 19 months.
The questions I put from this analysis are simple. Would the general public, politicians, and journalists accept the AGW argument if that step had not appeared?
And if that step was crucial, what caused it?
[Response: Yet the trend from 1978 to today gives a difference of 0.43 deg C. Your claim that a flat line, a short 0.28 jump and then another flat line is a better fit is not true. In fact it simply demonstrates that fitting trends to short post hoc picked periods is misleading. And that doesn’t even deal with the impossibility of coming up with a mechanism for such a strange series of events. This is just nonsense. – gavin]
Imran says
#128
Walt – a good point indeed. Keep asking the questions about the ocean because this is the key. There are serious problems with the heat transfer (from atmosphere to water) models, and there are a few points which, if you think about them analytically, will bring you to very different conclusions about what’s warming what.
1) The ocean contains ~1000 times the energy of the atmosphere and water has a specific heat capacity 4 times greater than air (i.e. it takes 4 times the energy to raise the T of a kg of water compared to 1 kg of air by 1K). How much impact would we expect to see in the oceans from a 1 deg C rise in air temperature? I like to think of the analogy of putting a fan heater in the kitchen and looking for a temperature rise in the center of the Aga.
2) Anecdotally we all know that oceans are a major driver of air temperature (Gulf stream, El Nino, La Nina etc). Not the other way round.
3) The sea level rise since 1850 has been steady and relentless (20 cm in the 20th century), mostly put down to ‘thermal expansion’ … but an analysis of the distribution of water temperature (vertically and latitudinally) and an understanding of the variable thermal coefficient of expansion of water (which is not constant with T, or even linear) will tell you that this thermal expansion is far more than can ever be explained by the observed atmospheric T rise – even if it was assumed to transfer immediately into the water without a time lag.
Think through this and you will get a different picture of the interface between ocean and air temperature – one which much more elegantly fits the observations of sea and land temperature differences.
Lawrence Coleman says
Very interesting graphs, the blue lines are getting straighter as well, meaning less annual variability and a more and more steady climb to the top, wherever that may be?? I’ve been studying an old edition of Encyclopaedia Britannica (1969 edition) on Greenland and would like to share a few observations I made. 1: the mean temp of central Greenland in winter is usually -50C to -55C, with the temp occasionally falling to -67C. The temp in the central regions in summer however reaches -3C..that was then in 1969, so now it should be around -1C to 0C. 2: The majority of snow and rain falls in the southern half. Which means that any loss of ice in the summer through moraines in the northern half would not get replenished at the same rate as it is lost, due to decreased snow. They mentioned that even in 1969 there were clear signs of glacial retreat in the south-west quadrant. But my concerns about sea level rise were slightly allayed by the extreme cold and depth of the vast majority of the ice sheet. Even with worst-case temp predictions it would still take a heck of a long time to melt enough of Greenland’s ice to make a significant change to global sea level. This is extremely hard and relatively homogenous compacted ice that for most of the year is kept at a frigid -55C..that ain’t gonna melt in a hurry! Even with the speed of the glaciers increasing..in 1969 it was about 1 inch/year..and the fact that once these ice shelves are moving they will take the pack ice from the higher altitudes up to 7000ft with them..it is still going to take a long long time. I could be wrong, but I’m not sure that in my lifetime I will be witness to much of a rising ocean at all. In regards to the other climatic effects of ACC, I very much believe that they will become more and more obvious in the years coming…just not sea level rise..yet.
Walt Bennett says
Re: #156,
Barton,
You too miss my point, if there was one: These are *measurements* we are talking about, not *projections*.
One day is enough of a sample size if you are two different organizations measuring the same thing.
Bryan S says
Dr. Nielsen-Gammon, Howdy from a fellow Texican. In my comment, I was not referencing the OHC to any projection, only saying that according to what I am hearing, there has been no statistically significant gain of heat into the climate system over the last 4 years. I think the change in Joules will likely be very small. Obviously, this finding is not in a refereed journal yet, so we will have to wait for confirmation. Thus I put forth a friendly wager. I take it you have learned not to bet on Aggie football games!
It seems to me that the system heat content changes (not weather) over even annual time scales are very interesting, since they are a direct metric for the TOA radiative imbalance over that same period. If the equilibrated sum of radiative forcings+feedbacks can cause a TOA negative radiative imbalance for even short annual periods of time, this would seem of fundamental importance to understand whether or not this observed variability is being properly handled in the models. No?
I will also make a suggestion, then ask a question to you and Gavin concerning ENSO and its effect on global annual ocean heat content changes. I *think* (very dangerous) that Gavin’s statement that ENSO significantly affects the heat content of the oceans is inadequate at best. Certainly, in a direct way, the heat needed to increase the temperature of the entire atmospheric volume a significant amount is not even a drop in the bucket in terms of the magnitude of most annual variability in ocean heat content that is represented in the time series graphs of OHCA (Joules). This is because of the insignificantly small heat storage capacity of the atmosphere compared to the upper ocean. For the entire averaged atmospheric volume, I understand there is no significant long-term trend in heat content increase (ie warming troposphere vs cooling stratosphere). I do however understand why the changing temperature of the troposphere due to ENSO will have an effect on the way radiation is processed (ie latent and sensible heat fluxes from the ocean surface + short-term feedbacks), but if this made a significant effect on the total heat content of the ocean, wouldn’t we expect to be able to correlate OHCA time series with ENSO? Just eyeballing, I see not even a clue of ENSO. It would seem to me that it is the TOA radiative imbalance that is approximately equal to the summation of all the equilibrated radiative forcings+feedbacks taking place below (and really THE SUM OF ALL THE WEATHER PROCESSES), and all these are approximately equal to the changes in ocean heat content. This is why such a metric is so important, because it cuts through a bunch of complex processes and sums them all up. When I first read Roger Pielke Sr.’s paper on this subject, I thought it was bull-ony, but the more I have read, the more it makes sense to me.
Barton Paul Levenson says
Walt Bennett posts:
[[You too miss my point, if there was one: These are *measurements* we are talking about, not *projections*.
One day is enough of a sample size if you are two different organizations measuring the same thing.]]
I got your point. I just think your point is wrong. A sample size of two isn’t enough no matter how careful the measurements, especially in a case like climate where so many different factors are involved. The oceans can still be warming on a long-term trend even if they seem to be cooling for a couple of years.
Walt Bennett says
Re: #144,
Donald wrote “What observations would falsify your understanding of global climate change?”
Donald,
Having been coming here for going on two years now as a layman seeking information, I can assure you that the makers of this board are fully invested in AGW theory. There is no serious doubt in their minds that rising CO2 leads, inevitably, to higher temperatures, which in turn will lead to various changes in the climate system. These changes include some places being wetter, some being drier, and most of the planet being much warmer, which will affect native plant and animal species.
However, your question is craftily worded. The “understanding” of AGW is a slippery toad. I for one have come to strongly suspect that models, and thus projections, underestimate the effects of acceleration, both in terms of rising temperature and the melting of previously permanent ice.
I believe it is fair to say that our understanding of AGW, both for scientist and layman, is still developing.
IPY (International Polar Year) is underway and will yield not only new information about changes in the status of the coldest parts of the planet, but will install permanent capability to continue the monitoring.
A brand new study shows that west Antarctica is losing ice, causing the continent as a whole to lose ice, and that the rate is accelerating. These sorts of findings are above and beyond what models have so far been capable of predicting, and in fact IPCC AR4 simply punted when it came to projecting how much ice will be lost from Antarctica and Greenland in the 21st century.
That’s an amazing omission, especially considering that it will take far less than a century to feel the effects of the melt.
I have read that IPCC now wants to turn its attention to these changes and produce a new report.
So, what you are seeing is science trying to keep up with what the planet is telling us. Our “understanding” is clearly evolving.
Walt Bennett says
Re: #167,
If you are comfortable with that “analysis”, so be it.
I am looking at NASA and Hadley over two years, one of whom says oceans are warming and the other of whom says the oceans are cooling.
All I have been seeking is any sort of explanation whereby both results could be considered “valid”.
And to think I am an AGWer, and cannot get a straight answer to a simple question. It makes me understand why skeptics get so frustrated. Are we so defensive slash parochial that we cannot take a step back and try to make sense of confounding, conflicting studies?
I appreciate Gavin’s suggestion to check out the spatial patterns. I’d be even more grateful if climate scientists would do it. Shouldn’t such disparities at least pique their interest?
Dan Hughes says
re#152
I did not ask about the foundations of science. I asked about ‘foundations of AGW’. Plus, if you are referring to this post, there is nothing in it that questions the Navier-Stokes equations. It is a question about very specialized applications of those equations to situations for which they were not derived. Have you, BTW, counted the number of unknowns and number of equations at an interface and determined under what conditions a well-posed problem is set?
Your response has become so typical here at RC and several other so-called science blogs. Attempts at diversion from the questions asked, along with presumptions of motive, are just about all that many people asking for information can expect. And most importantly, not a single word devoted to providing the information asked for.
Testable hypotheses are very significant aspects of all of science, engineering, and all technical issues in general. How about listing a couple and opening them up for open discussion? Otherwise all I can do is assume that there aren’t any. That leads to the single conclusion that AGW is not science based.
[Response: Oh please. Possibly you might want to think about the perceptions people have when the fact that we don’t respond to every ill-posed question is immediately interpreted as proof that AGW is not science. Playing ‘gotcha’ with this kind of trick is tiresome. If you want answers to questions, then ask something specific. ‘AGW’ is not a fundamental theory in and of itself, it is the conclusion of many different lines of evidence and basic physics. There aren’t going to be any major revisions of HITRAN or radiative transfer theory that will make any difference to forcing related to GHG increases, but there’s plenty of uncertainty in cloud feedbacks or aerosol-cloud interactions or the impact of unresolved ocean dynamics when it comes to the climate response. Come up with a specific hypothesis that you think we should be testing and we can discuss. – gavin]
Fred Staples says
No Gavin,(161) I am not claiming that a three part line is a better fit. That would be data-mining.
I have analysed only two sets of data which I have assumed to be independent.
The first, from 1978 to 1995 is typical of the long-term temperature record; it is variable but flat. It has a mean value and a variance, and will serve as a base-line.
It is perfectly legitimate to take any two sets of data and to test whether their means are significantly different, taking into account their variances and their numbers of observations. I wish to test data from July 1997 to date, to see if it is significantly different from the base-line. The key word is “significantly”.
The easiest way to do this is to T-test the difference in the means. We are asking if the variance about the separate means is significantly different from the variance about their common mean. It is a legitimate question which we could ask about any two data sets in the record – which, in the UK goes back to 1684.
The answer is that the variances of the two samples of data are similar, the difference between their means is 0.28 degrees centigrade, and the probability of that difference arising by chance is infinitesimal.
The interval, in time, between the two sets of data could have been anything we chose, but it was, in fact, just 19 months.
So it is a fair question, not nonsense, to ask what caused that change in temperature. If the two data sets had been twenty years apart, you might reply that it was the CO2 concentration. But 19 months?
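A minimal sketch of the two-sample t-test described in this comment, using synthetic stand-ins for the two periods (the 0.28ºC offset and the noise level are illustrative numbers only); the caveat in the code comment reflects Gavin’s earlier response:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Synthetic stand-ins for the two periods (monthly anomalies, deg C).
period1 = rng.normal(0.00, 0.15, 210)  # ~1978-1995
period2 = rng.normal(0.28, 0.15, 126)  # ~1997-2007

t_stat, p = stats.ttest_ind(period1, period2)
print(f"difference in means: {period2.mean() - period1.mean():.2f} C, p = {p:.2g}")
# Caveat: a significant difference between the means of two hand-picked
# segments does not show the process is a step; a steady underlying trend
# through both periods produces exactly the same offset.
```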
Hank Roberts says
Walt, first, please cite your sources. I recognize the Antarctic melt as just mentioned in EurekaAlert; others may not though.
Second, you’re making a huge overbroad statement. Yes, you’re looking at two numbers and saying, lo, they differ. And you’re leaping to the conclusion that each number represents the entire world ocean, and thus thinking the agencies behind the numbers must mean that, lo, the entire world ocean is described by the one number from that agency, and OMG, the single number from one doesn’t match the single number from the other, so one has to be wrong.
Look at the sources for each agency’s work. They pull data from different devices in different locations in different ways and use different models with different assumptions to handle them.
There are dozens if not hundreds of different agencies and climate models, and a vast number of sources of information each with a huge background of knowledge about variability and reliability over time.
This is, after all, the world we’re trying to describe. You recall the blind men and the elephant fable? Suppose the blind men were living ON the elephant and trying to describe it ….
Both results are valid because each number you’re looking at is merely a summary representation for the press of a huge amount of information that has very fine grain detail behind it.
Look at the details — which are published, and easily available. Look at the maps from each agency of temperatures.
Really, this is becoming silly.
Dan Hughes says
re: Gavin at #169
Gavin, I was addressing Hank’s response. RC has yet to respond. Other than to prove once again exactly what I said, ‘… along with presumptions of motive …’. It was not intended to be a ‘trick question’. May I ask exactly which aspect of the question made it into a trick question having some hidden motive?
Your recent contributions here are also very enlightening.
Walt Bennett says
Re: #171,
Hank,
if it’s becoming so silly, then disengage.
So you are comparing Hadley and NASA-GISS to two blind men trying to describe an elephant.
That’ll give the skeptics comfort.
OF COURSE THEY ARE USING DIFFERENT DATA!
Why are the results so starkly different, and why is it considered unimportant to know that answer?
Jim Cripwell says
Ref 173. Could I add my endorsement of what Walt has written namely “Why are the results so starkly different, and why is it considered unimportant to know that answer?” This particular discussion has skirted around this issue from the beginning; carefully never discussing this real issue, and never providing any sort of an answer. And if the analogy of the two blind men is valid, where can I find a description of what the elephant actually looks like?
[Response: The results aren’t ‘starkly different’ no matter how many times someone says they are. The differences there are, as has been stated many times, are mainly related to treatment of the Arctic. Look at the spatial results and see for yourself. – gavin]
Hank Roberts says
Fred, Pielke quotes McKitrick as writing about
“… 2 flat intervals interrupted by step-like changes associated with big volcanoes….”
Any relation?
Hank Roberts says
Walt, I’m pointing out that the fable is about everyone:
http://en.wikipedia.org/wiki/Blind_Men_and_an_Elephant
Ian Forrester says
There has been quite a bit of discussion concerning ocean temperatures. Are people referring to the 2006 paper by Lyman et al. which shows that oceans are cooling? If so they may want to check out the correction published in 2007 by Lyman et al. (Correction to “Recent Cooling of the Upper Ocean”).
Here is a quote from the correction:
“Although Lyman et al. [2006] carefully estimated sampling errors, they did not investigate potential biases among different instrument types. One such bias has been identified in a subset of Argo float profiles.
This error will ultimately be corrected. However, until corrections have been made these data can be easily excluded from OHCA estimates (see http://www.argo.ucsd.edu/ for more details). Another bias was caused by eXpendable BathyThermograph (XBT) data that are systematically warm compared to other instruments [Gouretski and Koltermann, 2007]. Both biases appear to have contributed equally to the spurious cooling”.
Is this the reason for conflicting data?
The correction can be found at:
http://www.pmel.noaa.gov/people/lyman/Pdf/heat_2006.pdf
Barton Paul Levenson says
Walt Bennett posts:
[[Why are the results so starkly different, and why is it considered unimportant to know that answer?]]
Because they are only “starkly different” in your mind. If they go on being starkly different for 30 years, you’d have a case. As is, there’s nothing much to investigate.
lgl says
#177
Ian,
But isn’t there still a cooling, after ‘excluding profiling floats (gray line)’, page 10?
henry says
[The red line is the annual global-mean GISTEMP temperature record (though any other data set would do just as well), while the blue lines are 8-year trend lines – one for each 8-year period of data in the graph. What it shows is exactly what anyone should expect: the trends over such short periods are variable; sometimes small, sometimes large, sometimes negative – depending on which year you start with.]
Since a 30-year period is considered the “standard” reporting period, show 30-year trend data for each 30-year period in the same chart (GISS).
lgl says
#170
Fred
You should choose 1983 as your starting point instead. All warming at lower latitudes between 1950 and 1983 is probably a result of ENSO, mostly the shift in 1977.
There is almost no warming between 1983 and 1997.
Bob Ward says
‘New Statesman’ magazine has published a rebuttal to the article by David Whitehouse that is mentioned at the start of this RC post: http://www.newstatesman.com/200801140011
It cites the RC post – well done on helping combat this outbreak of dodgy statistical analysis!
lgl says
#170
Fred,
http://virakkraft.com/ENSO-temp.ppt
Jim Cripwell says
In 174 Gavin writes ” The results aren’t ’starkly different’ no matter how many times someone says they are.” Fair enough. Then why doesn’t your presentation show the same sort of graphs calculated for all the different time/temperature data sets (HAD/CRU, RSS/MSU and NCDC/NOAA)? It is surely a trivial matter to repeat the calculations, and reproduce 4 graphs instead of only one. Why select NASA/GISS in the first place, when so many people believe it is different from the other three? Why not use RSS/MSU? Incidentally I have done my own calculations, and I challenge the idea that the “results are not starkly different”. They are, indeed, starkly different. And it is easy to prove I am wrong by simply doing the calculations with all data sets, as I have suggested, and showing the results.
Figen Mekik says
I don’t mean to be short, but if the results, as you say Jim Cripwell, are so starkly different and you know this by your own calculations, why don’t you prove the climate scientists wrong by sharing your calculations? It’s an open forum, but why ask them to do all the work? If you have evidence they are overlooking, show it, prove it.
guthrie says
Jim – why would you use RSS/MSU? Can you state your reasons why you would prefer that over the surface temperature record?
Fred Staples- I am not a statistician, but I can almost make sense of your words, they seem quite clear. However, I cannot quite get my head round what you are claiming. Are you saying that there was a 0.28C jump in temperature between some time in 1995 and 1997?
guthrie says
Walt Bennett- Have you thought of asking Hadley themselves? I’m sure they would be helpful, just don’t expect an answer in 48hrs.
John Nielsen-Gammon says
#165 Bryan S, you miss my point. If I believe the IPCC projections (and I do), and if I believe that the changes in OHC consistent with those projections are too small to rise above the statistical uncertainty caused by measurement constraints and the like over the next few years (and I do), why would I bet otherwise? It’s much more lucrative to bet against the prevailing mood of Aggie supporters.
I agree with RP Sr. that total heat content is the ideal metric, and I agree with Gavin that scientists haven’t yet demonstrated the ability to measure it sufficiently accurately to distinguish among projections.
Bryan S says
Gavin, Johnson et al. (2007) observed the change in the global ocean heat integral between 2005-2006 and concluded this quantity was equivalent to a net surface flux of 80 W/m2. By looking at some time series graphs (Lyman, 2006), it seems likely that this is dwarfed by changes observed in some previous years. Such a magnitude of variability must be driven by the sum total of all the atmospheric and oceanic processes + any changes in incoming shortwave reaching TOA. It would seem important to check model output against these observations.
Q: What is the magnitude (an average ballpark number) of annual variability (in W/m2) of system heat content changes from AOGCM output? Or another way: is the magnitude of interannual variability in models close to that observed from past changes in OHC? Thanks.
[Response: You might want to look at that paper again. The idea that the net annual mean surface flux into the ocean could be anything like 80 W/m2 is so far out that I have to assume you’ve misinterpreted something. The variations from year to year in net heat flux into the ocean will be on the order of a few tenths of a W/m2 – with maybe some multi-W/m2 peaks related to volcanoes or extreme tropical variability. – gavin]
Aaron Lewis says
Frank,
How many data points did you need to characterize a reactor system? Try scaling that up and calculate how many points you will need to characterize a climate system’s behavior. Nineteen months is weather. For statistical purposes, it is random, chaotic, and a whopping big population to sample. Your sampling approach does not recognize the diverse time scales of the forcings acting on the subject population. Poor sampling can result in poor results from otherwise correct statistical methods.
It is easy to prove that we have a different climate than we had when I was a senior scientist at “Hanford.” However, it is in the nature of the system, that we can’t demonstrate statistically that recent weather is ongoing climate change. We can show it in other ways.
For recent climate change, I rely on the flowering plants in my yard. They integrate climate change better than the NWS. I might question NWS data, but I never argue with my apple tree. My apple tree says things are getting warmer year by year, and the native daffodils nod in agreement. (According to John Muir’s journal, those daffodils should wait until March to bloom.) That tells me that today my soil is warmer than it was 8 or 5 years ago when I planted the beds. Soil temperature is an integrating measurement. It integrates highs, lows, means, hours of sun, cloud cover, radiation – everything. It is confirmed by my hyacinths, freesia, and tulips sprouting 2 months earlier over the last 4 years. These bulbs integrate soil temperature. Not just a probe reading now and then, but a true integration across nights and days for weeks on end.
I can look out my window and see 20 examples of unusual plant behavior caused by the unusually warm weather that we have had for the last few years. The only thing that gives me hope for our nearby commercial growers is the honeybees dancing from blossom to blossom. Honeybees in January? There were no honeybees around here out collecting nectar 5 years ago or 10 or 50 years ago. I have talked to the old beekeepers in the area.
Now, the plants and bees are responding to small changes in the weather. When I run statistics on the local weather data, the changes come up as statistically insignificant. For example, this year is a bit colder than some recent years ( http://fruitsandnuts.ucdavis.edu/chillcalc/chilldatachoose.cfm?Station=170&type=chill ), and yet, more and more of my plants are blooming earlier. It is easier to see the problem by looking at the plants than by looking at the National Weather Service or chill hour data. My point is that the weather and climate data that we collect may not reflect the stress that global warming causes on ecosystems and agriculture. See for example: http://www.springerlink.com/content/b46jr4570r7v05k7/
There is some background at : http://www.cfbf.com/agalert/AgAlertStory.cfm?ID=512&ck=10A7CDD970FE135CF4F7BB55C0E3B59F
Jim Galasyn says
Meanwhile, in Antarctica:
Hank Roberts says
Bryan S, which article? This one?
http://209.85.173.104/search?q=cache:BE0n4CmQYg8J:oceans.pmel.noaa.gov/Pdf/heat_2006.pdf+Johnson+et+al.+NOAA/Pacific+Marine+Environmental+Laboratory
Count Iblis says
Shouldn’t there be an increasing trend in the standard deviations due to global warming causing more extreme weather?
[Response: I’m not aware of any such demonstrated trend in monthly or annual mean anomalies. When discussing extremes in the context of climate change, one has to be very specific about what extremes you are talking about. General statements about ‘more extreme weather’ are not supported by either theory or observation. – gavin]
Hank Roberts says
Gavin, thank you for the updated and new charts. Anyone who’s missed them, look again at the main article and you’ll see the update with new links.
Mark Bahner says
Hi Fred,
I’m not sure RC will post this, but…
I’ll summarize what I think you’re saying:
Your analysis concludes that temperature was approximately stable from 1978 to 1995, then there was a step change of 0.28 deg C from 1995 to 1997, and that from 1997 to 2007, there was no significant increase in temperature. You wonder how this “step change” is compatible with AGW theory.
I think the reason your analysis seems to show a “step change” is that you include the temperature data following the major volcanoes of El Chichón in 1982 and Mt. Pinatubo in 1991. Mt. Pinatubo was a particularly big eruption.
Mark
tom s says
Gavin, can you tell me what the temperature anomaly for the USA will be for this summer to within .10C please? Also, are you skeptical at all about surface temperature reconstructions that represent the entire globe? What kind of margin of error is there in such reconstructions?
[Response: 1) No. Seasonal prediction is a) not what I do, and b) not a very mature field. 2) For the instrumental record the sampling errors are on the order of 0.1 deg C in the annual mean. Issues with the network were discussed exhaustively here. – gavin]
tom s says
re: 191
Yup, really melting down there…any day now she’ll be slippin’ into the sea…(sigh)
Ray Ladbury says
It appears that the inhabitants of the denialosphere are falling into the same trap as the creationists–trying to find one single devastating observation or experiment that will falsify anthropogenic causation of climate change. Their search is futile for the same reasons as well–the fact that support for anthropogenic causation, like that for evolution, does not rest on any single line of evidence, but rather is extremely broadly based. The science behind the theory is well established and understood–and there is no reason why (or evidence to suggest) it should depend on whether CO2 concentrations are 280 ppmv or 560 ppmv. This physics is so interwoven into our understanding of both modern and paleoclimate that if we were to see behavior very different from that expected, it would have to mean:
1) that there is some sort of effect (e.g. a negative feedback) not in the models (in which case, given the persistence of CO2 in the environment, we’d still have a problem when this petered out),
-or-
2) our entire understanding of the climate would have to be scrapped and rebuilt from scratch–and if this were true, it’s unlikely the models would do as good a job as they do.
There is a lot we don’t understand about climate–but the effect of adding CO2 to the atmosphere doesn’t fall into the class of things we don’t understand.
Bryan S says
Gavin, I think you have misinterpreted what was written. Below, I cut and paste the statement from the Johnson et al. 2007 paper. I am interested if models show this magnitude of annual variability in heat storage.
The difference in combined OHCA maps between 2006 and 2005 (Fig. 3.4) illustrates the large year-to-year variability in ocean heat storage, with changes reaching or exceeding the equivalent of an 80 W m–2 magnitude surface flux.
Reference
Johnson, G. C., J. M. Lyman, and J. K. Willis, 2007, Global Oceans: Heat Content. In State of the Climate in 2006, A. Arguez, Ed., Bulletin of the American Meteorological Society, 88, 6, S31-S33.
[Response: You’re right, I misunderstood. The -80 to 80 W/m2 range is for the local values, and from the figures, the biggest changes look to be related to the switch to El Nino in 2006. This figure is interesting though, because the net heat uptake is the integral of all those points. The net value, which isn’t given in that article, will be less than 1 W/m2, and so there is a huge amount of variability that needs to be averaged out. That makes any one year’s value rather uncertain, and thus longer term trends are more reliable. – gavin]
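For readers wanting to connect heat-content changes (Joules) to the flux numbers being discussed (W/m2), a back-of-envelope conversion over the Earth’s surface area looks like this (the 1e22 J input is an illustrative magnitude, not a measured value):

```python
# Convert a change in ocean heat content (Joules) to the equivalent
# global-mean surface flux (W/m2).
SECONDS_PER_YEAR = 3.156e7
EARTH_SURFACE_M2 = 5.1e14  # total surface area of the Earth

def ohc_to_flux(delta_joules: float, years: float = 1.0) -> float:
    """Equivalent global-mean flux for a heat-content change over `years`."""
    return delta_joules / (years * SECONDS_PER_YEAR * EARTH_SURFACE_M2)

print(f"{ohc_to_flux(1e22):.2f} W/m2")  # ~0.6 W/m2 for 1e22 J in one year
```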
Urs Neu says
As mentioned in the post, one of the factors adding to the noise in the global temperature record is El Nino. I just had a look at the order of magnitude of its influence on the trends discussed here.
Since best-fit estimates suggest an alteration of global temperature through ENSO by about 0.1 times the MEI El Nino Index (www.cdc.noaa.gov/people/klaus.wolter/MEI/) with a 5-6 month time lag, one can estimate the influence on global temperature trends by looking at the MEI index trends. This shows that there is, for example, a negative MEI trend corresponding to about -0.02 to -0.04 K per decade over the last 10 years.
There seems to have been a slowdown of global warming by that amount due to the negative ENSO trend over that period. Over the last five years the ENSO trend corresponds to a cooling of about -0.06 K per decade.
Thus ENSO is not only important for interannual variations but considerably influences trends up to at least 10-year periods. Of course, the longer the period investigated, the smaller the ENSO-induced trend will be.
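A sketch of the trend adjustment described in this comment, using a random stand-in for the MEI index and the 0.1 scaling quoted above (with real data the index would be lagged by 5-6 months before regressing):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 120                        # 10 years of monthly data
t = np.arange(n)
mei = rng.normal(0.0, 1.0, n)  # random stand-in for the (lagged) MEI index

# Synthetic temperature: forced trend + 0.1 * MEI (the scaling quoted above)
# + weather noise.
temp = 0.0015 * t + 0.1 * mei + rng.normal(0.0, 0.1, n)

raw = np.polyfit(t, temp, 1)[0] * 120               # deg C per decade
adj = np.polyfit(t, temp - 0.1 * mei, 1)[0] * 120   # ENSO component removed
print(f"raw trend: {raw:.3f} C/decade, ENSO-removed: {adj:.3f} C/decade")
```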