Preliminary calculations* show that surface temperatures** averaged over the globe in 2004 were the fourth highest (and the past decade was the warmest) since measurements began in 1861. (There are measurements at some sites before 1861, but this date is generally chosen as the earliest time by which the network of measurements is dense enough to make a global average meaningful.) 2004 was slightly cooler than 2003, 2002 and 1998, with the average world temperature exceeding the 30-year average (1961-1990) by 0.44°C. 1998 remains the warmest year, when surface temperatures averaged +0.54°C above the same 30-year mean. October 2004 was the warmest October on record. Sea-ice extent in the Arctic remains well below the long-term average: in September 2004 it was about 13% less than the 1973-2003 average. Satellite information suggests a general decline in Arctic sea-ice extent of about 8% over the last two and a half decades.
For further details, see the WMO Web site: go to “News” and look for Press Release 718.
You can also check the NASA-GISS news report on 2004.
_______________
*Following established practice, WMO’s global temperature analyses are based on two different datasets. One is the combined dataset maintained by the Hadley Centre of the Met Office, UK, and the Climatic Research Unit, University of East Anglia, UK; the other is maintained by the US Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA). Results from the two datasets are comparable: both indicate that 2004 is likely to be the fourth warmest year globally.
**Change of “temperatures” to “surface temperature” made, and hyperlink added 1/23/05.
Pat Neuman says
Combined land and ocean temperatures, as shown on the NOAA NCDC websites,
http://www.ncdc.noaa.gov/img/climate/research/2004/ann/glob_jan-dec_pg.gif
http://www.ncdc.noaa.gov/oa/climate/research/2004/ann/global.html#Gtemp
seem confusing. Thus I show a plot of just global land air temperatures (1880 to present, as 10-year moving averages) at my P&C_Articles yahoogroup.
http://groups.yahoo.com/group/Paleontology_and_Climate_Articles/
Do any others here think it is more informative to view globally averaged “land” and “ocean” temperatures separately, rather than combined?
Dan Hughes says
Why are the several “baseline references” different? The main entry above uses 1961-1990, NASA-GISS uses 1951-1980, and the P&C_Articles group uses a 10-year running average (and plots the temperatures).
As to the question in comment #1: are the ocean temperatures the temperature of the water, or of the air above the water? If they are water temperatures, on what basis are they averaged with air temperatures? Or why not use temperatures of the soil (or other materials on the surface) instead of the air above the soil, and average those with the ocean water temperatures?
The plot of the New York City, US mean and “Global” mean at the NASA-GISS link is interesting and kind of makes one think about exactly what the meaning of a “Global” mean temperature is.
Thanks for any info.
[Response: Just different arbitrary conventions. Note that no-one is plotting the ‘global mean temperature’ – they are plotting the global mean anomaly (with respect to their baseline). See the explanation why here – gavin]
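To make the baseline point concrete, here is a minimal sketch (Python, with made-up numbers rather than the real CRU or GISS series) showing that switching between the 1961-1990 and 1951-1980 conventions shifts every anomaly by the same constant, so trends and rankings are unaffected:

```python
import numpy as np

years = np.arange(1950, 2005)
rng = np.random.default_rng(0)
# Hypothetical absolute annual means in deg C: trend plus noise (not real data).
temps = 14.0 + 0.007 * (years - 1950) + rng.normal(0, 0.1, years.size)

def anomalies(t, yrs, base_start, base_end):
    """Anomaly = temperature minus the mean over the chosen base period."""
    base = t[(yrs >= base_start) & (yrs <= base_end)].mean()
    return t - base

a_cru = anomalies(temps, years, 1961, 1990)   # WMO/CRU-style baseline
a_giss = anomalies(temps, years, 1951, 1980)  # GISS-style baseline

# The two series differ only by a constant offset, so year-to-year
# differences (and hence any trend) are identical.
print(np.allclose(np.diff(a_cru), np.diff(a_giss)))  # True
```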
dave says
About the temperature trend itself, there’s little to say. So, I would like to relate it to some recently published consequences.
It appears that a recent study done by NCAR (Drought’s Growing Reach) shows a drying trend over much of the globe during the last 30 years or so.
You can find the presented paper here. There appears to be less and less doubt about the consequences of this warming. Also, on a personal note, since I live in Boulder, Colorado: this paper shows that the Western US does not share the drying trend seen in other regions of the world over the last 30 years or so. However, it has been abnormally dry in the West for about 7 years now. This is not reflected in the 30-year study, since the ’70s and ’80s were a bit above average for precipitation. In fact, it was partly because of this that the Front Range here in Colorado (Denver, Colorado Springs, etc.) grew so much, stretching our water supplies to the limit. The normal dry conditions were not evident on a short time scale, and now it is getting drier.
I wonder what the increase in global mean surface temperature is for the decade 1994 to the end of 2004 (thus not counting Pinatubo), as compared to the longer-term trend since 1880 or so. The increase for this decade alone appears to be considerably higher than the recent trend (about 0.17°C per decade, as detailed in Pat’s (#1) second link above, incorporating Fu et al. for adjusted satellite measurements).
James B. Shearer says
Since the average global temperature fell for the second year in a row, the title of this post is wrong.
David Ball says
“October 2004 was the warmest October on record.”
Comments like this aren’t very helpful. A single warm month in climate terms does not a significant trend make.
Re: #4 Neither does a fractional year-to-year decrease. A 2 year trend? Come on. You (#4) are being a bit pedantic.
dave says
I am perplexed. No data are cited in comment #4, I cannot confirm it, and it is contradicted here. Either
1) it is not worth responding to
or
2) it is correct (on a year by year basis)
Which is it? On the other hand, even if it is correct on a yearly basis, it does not account for the fact that all the warmest years on record (since 1861?) have occurred in the last 15 years. So, what are the odds of that if nothing unusual is happening?
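As a rough illustration of that question, one can do a back-of-envelope calculation: if annual rankings were exchangeable (i.e. no trend, and, unrealistically, no year-to-year autocorrelation), the chance that the k warmest years of an n-year record all fall in the most recent 15 years is C(15, k)/C(n, k). A sketch, not a proper significance test:

```python
from math import comb

n = 144      # years of record, 1861-2004
window = 15  # "the last 15 years"
k = 10       # suppose the 10 warmest years all fall in that window

# Probability under the naive exchangeable-rankings null hypothesis.
p = comb(window, k) / comb(n, k)
print(f"p = {p:.2e}")  # on the order of 1e-12: vanishingly small
```

Autocorrelation in the real series makes warm years cluster, so the true odds are less extreme than this, but the qualitative point survives.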
John Finn says
“On the other hand, even if it is correct on a yearly basis, it does not account for the fact that all the warmest years on record (since 1861?) have occurred in the last 15 years. So, what are the odds of that if nothing unusual is happening?”
I suppose it depends on whether you believe that recent warming is a recovery from an earlier cold spell or not. If you do then I suppose you would expect temperatures for recent years to be warmer than those for previous years.
Also, regarding your earlier point: 2002 and 2003 (as well as 1998), for what it’s worth, do appear to be warmer than 2004 according to the CRU and GISS temperature records. However, according to UAH-derived satellite data, 2004 ranks only 10th out of 27 (since 1979). Again, this comes down to personal preference about what you believe: a record that provides accurate, even measurement across the whole globe, or a record that fails to adequately represent at least 70% of the earth’s surface.
[Response: Except that an extremely small change in how the different satellites are strung together makes a big difference (RSS), and so the systematic uncertainty in the global MSU numbers is larger than you might think. – gavin]
WatchfulBabbler says
I’d argue (1), not worth responding to. Dr. Shearer is certainly worthy of attention, but such a statement is absurd. Yes, global temperatures *are* continuing to rise; yes, 2004 was slightly less warm than the extraordinarily warm previous years. These are no more mutually inconsistent than “IBM’s stock dipped slightly today, but continues an overall upward climb.”
DrMaggie says
Regarding possible data sources for the statement in comment #4 above, it seems to me that a look at the figure Global temperatures in 2004 at the NCDC website indeed suggests that the global land surface mean temperature anomaly values for 2002-2004 show a falling trend. (Anomalies with respect to the 1880-2003 mean, if I understand correctly.) Of course this short-term behavior doesn’t turn around the long-term positive trend illustrated by e.g. the GISS findings (link in the main post). I can, however, see that the figure could be potentially misleading to someone who doesn’t have information about what comes out from analyzing the data series with that long-term behavior in mind.
dave says
Re #8,9:
Very strong El Nino in 1998. Strongest on record I believe.
Re #7:
What does “recovery” mean? What’s the climate (external) forcing or (internal) feedback causing this so-called recovery?
Peter J. Wetzel says
Imagine for a moment the common-sense neutral observer’s view of the histrionics that have been displayed here regarding a single year of new data and its effect on our view of climate change.
By definition, one year’s average weather influences the assessment of climate by only a few percent at most.
Imagine that this discussion is taking place in 1944. Climate has been warming at an accelerated rate for decades. The aggressive adherents to a dogmatic view that climate is warming are basking in their glory.
Now jump ahead three decades and imagine how mightily their aggressive opponents gloat, with delighted finger-pointing, as they reveal the 1975 annual global temperature result.
Which side appears more foolish? I wouldn’t venture to say. But those who remained centered, and held a moderate, common-sense position surely were most successful at preserving their dignity.
Both extremes in this histrionic debate need to take a deep breath and relax. The current climate trend is defined by the trend in 30 year temperature, or longer. This is by mutually accepted definition. The most recent thirty year climate trend that we possess is now centered on the year 1989. The climate trend after 1989 is unknown and unknowable. Period. You’ll just have to wait another 15 years, minimum, before you know what the indisputable climate trend is for 2004. If your climate trend horizon is even broader (and broadening one’s horizons seems to me a good thing), then you’ll have to wait even longer.
— Pete Wetzel, Ph.D., Research Meteorologist at NASA Goddard Space Flight Center, specializing in parameterizing the interactions between the land surface and the atmosphere for global climate, regional mesoscale, and local cloud-resolving numerical weather prediction models.
[Response: Your comments point up the need for a proper attribution analysis of climate change, rather than just poking at trends. Oddly enough, people have thought of that before, which is why chapter 12 of the TAR is about detection and attribution. Now… how come you didn’t know that already? You must have read the TAR – William]
[Response: I’m not sure what point you are trying to make here, but if you feel that you can only assess whether temperatures are changing by looking at 30-year averages, consider the following global mean temperature anomalies (in degrees C, relative to the 1961-90 reference period):
1885-1914: -0.35; 1915-1944: -0.18; 1945-1974: -0.07; 1975-2004: +0.21.
If it walks like a duck, looks like a duck and quacks like a duck….. it’s a duck -Ray]
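For readers who want to reproduce the flavour of that arithmetic, a minimal sketch, with a synthetic anomaly series standing in for the real CRU data (available from the links above):

```python
import numpy as np

years = np.arange(1885, 2005)
rng = np.random.default_rng(1)
# Synthetic stand-in for the global annual anomaly series (deg C).
anoms = -0.4 + 0.005 * (years - 1885) + rng.normal(0, 0.12, years.size)

def block_mean(start, end):
    """Mean anomaly over an inclusive block of years."""
    mask = (years >= start) & (years <= end)
    return anoms[mask].mean()

# With the real series these come out near -0.35, -0.18, -0.07 and +0.21.
for start in (1885, 1915, 1945, 1975):
    print(f"{start}-{start + 29}: {block_mean(start, start + 29):+.2f}")
```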
*******************
Now, having said my say regarding the general debate, I have an interesting skeptic’s comment as well. Science thrives on skepticism. We forfeit our credibility as scientists when we cease to question our own dogma.
What if (and this is purely theoretical, with no attempt to cite any actual observations) those who are assessing climate are defining daily mean temperature in a way that is inadequate to capture the true daily mean?
Mean temperature, for the land data, is dominated by a convention which defines the mean as the average of just two numbers: the daily instantaneous extreme maximum and the daily instantaneous extreme minimum. This definition disregards all other temperature observations through the day, and considers only the two most atypical values that occur each day.
What if the trends of the intermediate temperatures, the most commonly occurring temperatures *between* the maximum and minimum observed each day, differ from the trends of the extremes? For example, consider a tempestuous and unsettled spring day in which there is just one brief 10-minute break of sunshine with calm winds, during which the temperature soars 5 degrees C for a few minutes, while the rest of the day remains blustery and cold. Imagine that an increase in carbon dioxide alters the cloud properties just a little: enough to either prevent that brief break in the clouds, or to create several additional breaks, one of which would be likely to occur at the time of peak heating for the day. The impact on the maximum temperature caused by this change could be large, yet the impact on the true, minute-by-minute mean temperature might be almost insignificant. If this type of cloudy day with infrequent sunny breaks makes up a significant part of the average climate (and it does in many locations dominated by stratus and stratocumulus cloud cover), then the climate trend as assessed using the daily maximum and minimum could be drastically different from the climate trend as assessed by the average of, say, 24 hourly temperature measurements.
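It is easy to put numbers on this thought experiment. A toy calculation (an assumed, idealized temperature curve, not observations) shows how a 10-minute spike moves (Tmax + Tmin)/2 far more than the true 24-hour mean:

```python
import numpy as np

minutes = np.arange(24 * 60)
# A bland, blustery day: a small smooth diurnal cycle around 5 deg C.
temp = 5.0 + 1.0 * np.sin(2 * np.pi * (minutes - 9 * 60) / (24 * 60))

# A ten-minute sunny break near noon: the temperature soars 5 deg C.
spike = temp.copy()
spike[12 * 60:12 * 60 + 10] += 5.0

# The true (minute-by-minute) mean barely moves: 5 deg * 10/1440 ~ 0.03 deg C.
print(spike.mean() - temp.mean())

# The max/min-based "daily mean" jumps by more than 2 deg C.
print((spike.max() + spike.min()) / 2 - (temp.max() + temp.min()) / 2)
```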
In general, if the shape of the daily temperature cycle is changed by increasing greenhouse gases (and there are ample theoretical arguments to suggest that it could be: plenty of completely untested hypotheses which are soundly based in accepted physics), then the trend of that shape (the trend of the true 24-hour average temperature) may bear no resemblance at all to the trend in the average of the daily maximum and minimum.
[Response: In the late 19th and early 20th century, when meteorological networks were expanding and government agencies were trying to standardise observations, a lot of thought was given to how best to characterise the daily cycle of temperature. Many papers were published about this and eventually the (Max + Min)/2 convention was agreed upon as a simple but effective way of computing the daily average. More recently, studies have looked at daily minima, and daily maxima and how they have changed over time, and how this may have changed the daily (diurnal) cycle. In many places, this has changed due to an increase in nocturnal cloudiness, for example. Such effects are of course rolled into the global mean. Your hypothesis that the record of global mean temperatures might have been affected by the odd warm hour on a spring day here and there has a very low probability of being correct, given the vast amount of data that goes into the global mean, from stations in all parts of the world (from the fully dark Antarctic winter days to the fully illuminated Arctic summers, desert and equatorial forest sites etc etc). However, I encourage you to investigate this using all the hourly data sets that you can lay your hands on, and report back to realclimate.org -Ray]
Eli Rabett says
It strikes me that Dr. Wetzel is throwing spaghetti against the wall here and observing what sticks. As to his first comment, it is a recipe for never doing anything, because the future is always 15 years away. Whether he agrees or not, the best expert judgement is collected in the IPCC reports, which say that increasing greenhouse gas concentrations will lead to a warming world (WG1), discuss possible effects and find them on balance negative (WG2), and finally discuss mitigation and costs (WG3).
I would argue that the sensible middle-of-the-road position eight years ago was to start taking no-cost or negative-cost actions limiting greenhouse gas emissions. Unfortunately many argued for no action; among the arguments put forth at that time was “wait and see”. Today, when both the observational and theoretical evidence for man-made climate changes with negative effects is stronger, Dr. Wetzel wants to wait and see. I would argue that, having waited and seen, the middle-of-the-road position is now to start taking actions that have some cost, given the extremely high cost of consequences at the high end of the reasonable estimates. That is usually called insurance, and it is something that people are advised to take out. As with long-term care insurance, the longer you wait, the more expensive it is, until you can’t afford it, you are in the most miserable of conditions, and nothing helps.
As to the other issue he raises, it strikes me as a white paper for funding a research project, not an argument against current understanding. OTOH, I suspect that the current method was not chosen at random, arbitrary though it is, and it might be worthwhile first reading up on why that method was picked and the statistical arguments for it.
Steve Reuland says
“Imagine for a moment the common-sense neutral observer’s view of the histrionics that have been displayed here regarding a single year of new data and its effect on our view of climate change.”
I don’t think a common-sense neutral observer would have detected any histrionics. The post itself consists of nothing more than a sober statement of facts. How is that histrionics?
“The most recent thirty year climate trend that we possess is now centered on the year 1989. The climate trend after 1989 is unknown and unknowable. Period. You’ll just have to wait another 15 years, minimum, before you know what the indisputable climate trend is for 2004.”
I’m not an expert on this subject, but I don’t believe anyone claimed that 2004 by itself represents some sort of trend; rather, 2004 is part of a trend that can be seen over the last 30 years (or whatever arbitrary time frame you wish to select). The way you’re putting it, we could never know whether there was a trend; we’d always be 15 years behind.
Peter J. Wetzel says
I apologize if I’ve been inadequate in my communication skills. Precise communication, particularly when it is intended to impart views not currently in vogue, and particularly to a potentially hostile audience, can be horribly difficult. (People with a dogma to defend seem to be desperately lacking in receptiveness to ideas which draw upon the value of both sides of an argument, ideas I truly intended to be objective and neutral.) So I hope to clarify:
I find the original post by Ray exemplary. Facts are presented without too much judgemental commentary. 2004 was the fourth warmest year since 1861. This result is a combination of land data, using stations where the only measurements recorded are those of the maximum and minimum daily temperature, and ocean data which are probably much more representative of the true daily mean. (My purely theoretical caveats regarding the unrepresentativeness of the daily extremes apply primarily to the land record, and are not intended to dispute the overall result. They are only cautionary. If the land record is suspect, then look only at the ocean record: you still see the same trend, just not as extreme in its magnitude.)
The histrionics that I perceive were initiated by #4, and several folk took the bait. To repeat, I do not see histrionics in Ray’s original posting.
But sadly, histrionics seem to have been revived in response to my #11. Please understand the true meaning of a climate trend: a new year of data advances our ability to characterize the trend by 3.33% or less, assuming that the shortest acceptable period required to define “climate” is 30 years. So the 2004 data allow us to define a climate trend which is centered on the year 1989. Please show me where I made any recommendation regarding action, or lack thereof, to be taken based on this new result.
I’m sorry if I am being a little testy, but I find the posted interpretation of my statement (“You’ll just have to wait another 15 years, minimum, before you know what the indisputable climate trend is for 2004.”) to be truly paranoid and defensive. The simple fact is that the 30-year climate trend centered on the year 2004 will not be known for 15 more years. Period. Folk who read more into that statement did so because they viewed it through an amazingly defensive and dogmatic personal filter. So let me clarify:
If you take the new 2004 annual data on surface temperature and include it in an average for 30 years, you have the most recent estimate of current climate, which is centered on 1989. If you then compare that trend with the trend centered on 1988 and previous years, you find an indisputable trend of climate warming.
If you do the same for 31-year averages, 32-year averages, 33-year averages, and so on through at least 70-year averages, you continue to find an indisputable trend of climate warming, even if you dismiss the land data as flawed because of the use of daily extremes rather than a more robust indication of the daily mean.
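A small sketch of this robustness claim, run on a synthetic anomaly series (a linear trend plus noise, standing in for the real record): the smoothed series ends higher than it starts for every averaging window tried.

```python
import numpy as np

def rolling_mean(x, window):
    """Trailing moving average; returns len(x) - window + 1 values."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

years = np.arange(1880, 2005)
rng = np.random.default_rng(2)
anoms = -0.3 + 0.006 * (years - 1880) + rng.normal(0, 0.1, years.size)  # synthetic

for w in (25, 30, 50, 70):
    smoothed = rolling_mean(anoms, w)
    print(w, round(smoothed[-1] - smoothed[0], 2))  # positive for every window
```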
So those skeptics who would deny Ray the right to title this post “Global temperatures continue to rise” have only a faint semantic defense — and I think it is a weak one, given that this site is “REALclimate.org” — the title might have indicated that Ray was speaking in a 30 to 100 year climatological sense specifically. Hardly necessary in my humble opinion.
Now, William admonished me to address the issue of attribution. Does a statement of observed trend (Ray’s original post) somehow require discussion of its cause? I think not. I have extensive opinions regarding attribution, but I view this particular topic as oriented toward the mechanistic, not the causative. I am happy to discuss attribution at length (and I will), but I don’t feel that this particular topic is the place for that. And I find it quite revealing of potential bias (i.e. closed-minded dogma) that the attribution issue would be artificially injected here. The fact that the current consensus of coupled GCM studies and other process studies strongly suggests an anthropogenic cause for the current (1989-centered) climate trend cannot and should not bully the objective science of defining observed climate trends into hypothesizing and projecting the 1989-centered climate forward in time even one year. Sorry. Let’s keep our debates cleanly and clearly defined, please.
[Response: the issue of attribution is central to the very question you raise: what would have been the correct response to the trends then visible in 1945 (warming), or 1975 (mild cooling), or now (warming)? Now, the answer is easy: we have attribution analyses that all show a dominant anthropogenic component. In 1975 the answer is available: they didn’t know enough about the functioning of the climate system to interpret the trends. That’s what they said at the time, and they were correct (in fact they didn’t really have global data either, or it was only just coming onstream then). In 1945 they didn’t have global or even hemispheric data, so presumably no-one even asked the question. But if they had, then the 1975 answer would have been correct.
As to projecting the trends forward… since we now *do* have a good understanding of the causes of the current trends, we can attempt to predict/project. GCMs are used for the purpose because they embody most of the knowledge that can be crammed into them. But (given what we know about the way simpler models can be fitted to GCMs, and likely future CO2 emissions) it *is* possible to use much simpler models. I’m not sure anybody does this, but simply extrapolating current trends for the next decade (and remembering natural variability) is quite defensible *now* – William]
David Ball says
Re # 11
“Both extremes in this histrionic debate need to take a deep breath and relax. The current climate trend is defined by the trend in 30 year temperature, or longer. This is by mutually accepted definition. The most recent thirty year climate trend that we possess is now centered on the year 1989. The climate trend after 1989 is unknown and unknowable. Period.”
Keep in mind that the extremes got where they are by selectively ignoring the evidence. The sky is not falling but the problem is serious and “don’t worry, be happy” isn’t an option.
As for defining trends: the mutually accepted definition is the 30-year trend, but it is still arbitrary. There is nothing holy about it over, say, a 25-year trend or a 50-year one. It is convention. What you appear to be suggesting, and please correct me if I’m wrong, is that one measure of the climate trend is the running 30-year average. Using that, and it is a conservative way of measuring the trend, you are correct. It is not, however, the only way to do a trend analysis, and to suggest that the trend after 1989 is “unknowable” is a bit of an overstatement.
Steve Reuland says
Dr. Wetzel, in your last post I counted three separate accusations of dogmatism, none of which I believe were warranted. That’s not the behavior expected of someone who is neutral and objective. I’m not accusing you of being biased (as you seem to have accused everyone else), but you doth protest too much.
Peter J. Wetzel says
William, I really don’t have any major argument with anything you said in your response in #14. But once again your focus is simply different from mine: you imply that a data analysis and a response are somehow inseparable. You ask “what would have been the correct response?” The point I want to make is that your use of the word “correct” there opens a highly political Pandora’s box. Yet any response to the data before us, whether correct for third-world emerging nations or for energy-glutton nations, whether correct for environmentally responsible individuals or for greedy “me first” individuals, can be and should be separated from the process of collecting and analyzing the data that define the global temperature trend.
Ideally, the scientists who collect and analyze these important data really ought to be double-blind (yes, thank you Michael Crichton for this remark, despite couching it in such biased drivel that no one will take it seriously); and without doubt, in order to preserve the integrity of the complex analysis that must be applied, these scientists should be maximally shielded, if not entirely separated, from the process of determining a societal response.
Well, now that I think about it, I do have another quibble with one word you used: You said that current attribution analysis shows a “dominant” anthropogenic component. Personally I feel that word “dominant” is much too strong. I would prefer to use the word “emerging”, i.e. just recently becoming apparent above our incompletely understood assessment of natural variability.
To #15: I tried to clarify in #14 that I consider there to be a consensus that *at least* 30 years of data are required to define “climate”, but that more is better. I certainly won’t quibble if you want to use 25 years as the minimum; such a number allows us to use more and more satellite data to begin to do legitimate climate analysis. However, there must be a lower limit to the number of years. Mathematicians have methodologies to project trends. Of course one can characterize the trend beyond 1989, but does such a characterization represent a description of “climate”, based on the consensus that climate ought to consist of roughly equally weighted effects of the weather over at least 25 or 30 years? Or does it represent an “impatient” man’s *projection* of climate, one which emphasizes fewer years than it should, or weights more recent years more heavily than it should?
I guess another way to approach this debate is to ask the question: does our present knowledge allow us to ascribe a certain amount of “momentum” or “inertia” to the current climate trend? If we observe a train accelerating toward us, currently traveling at 300 kph, and we are 1 mm from the front of the train, we could probably project that it will hit us. Applying that simple analogy to climate trends requires us to think clearly about how we define the “target”: what is it that we are afraid the runaway climate train will “hit”? If the target is very nearby (i.e. a climatologically sustained two-standard-deviation anomaly above the GCM-defined natural variability), we might be able to project with confidence. If the target is the submerging of the Maldives, or Bangladesh, or Venice, sufficient to make them uninhabitable, then our projections become a little less certain. When I invoked 1944 and 1975 as climate turning points which could potentially be repeated today (they are at least demonstrated to be possible), I was trying to address the degree of uncertainty that could exist in our projections. Could another turn toward cooling, after 30 years of warming (defined by short-term smoothing of the global annual trend), have begun in 1998 or 2002? How confident are we that this won’t happen?
Peter J. Wetzel says
To #16:
I accept your reprimand, and I owe this site a profuse apology.
Referring to the Comment Policy, I accept that my “dogma” comments come dangerously close to violating the prohibition of ad hominem comments.
This site is a precious resource. I vow to do my best, in the future, to avoid even the appearance of tainting the decorum of this site. My miserably inadequate defense is that I’m new to this “blogging” universe, where the blog owners have a right to enforce their own strict standards; but I am familiar with, and am a refugee from, the sort of very “loosely” moderated discussion forums wherein ad hominem attacks are perhaps not universally acceptable, but certainly seem to be tacitly tolerated.
Peter J. Wetzel says
In response to #11, Ray said:
“In the late 19th and early 20th century, when meteorological networks were expanding and government agencies were trying to standardise observations, a lot of thought was given to how best characterise the daily cycle of temperature. Many papers were published about this and eventually the (Max + Min)/2 convention was agreed upon as a simple but effective way of computing the daily average.”
Ray, I think a strong incentive for using this convention is simply the fact that there are *so* many more stations where the observer visited the shelter just once per day, recorded the maximum and minimum, and then reset the thermometers. In the days before electronic automation, this was much less labor-intensive than at the few stations (such as those at manned airfields) where hourly temperature data were kept. Frankly, the (Max + Min)/2 data are simply tolerated. Research has indeed shown that this methodology provides a *reasonable* estimate of the daily mean, and the consequences of rejecting all these stations were dire: it would mean much of the world would be “data sparse”.
But quite a few papers have also been published discussing the distortions to the daily mean caused by the choice of the once-daily time of observation. It has been found that using midnight is quite a bit better than using a morning (e.g. 7 AM) or an evening (7 PM) observation time. 7 PM is close to the likely time of the maximum, so if that is the recording time, extreme maxima tend to appear in the record on both of the days adjoining the observation time. Likewise, when a 7 AM time is chosen, prominent minima are more likely to be counted on two days.
Unfortunately, most of the volunteer observers who take (or took) these observations chose to make their once-daily observation during daylight, or at best around 10 PM, when they were normally awake, so many recording times ended up being either morning or evening. The result: the temperature record we use to define climate contains a random mix of records with distinctly different characteristics; the ones with a 7 PM observing time tend to produce warmer climate estimates, and the ones with a 7 AM observing time tend to run colder.
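This effect is easy to demonstrate in simulation. A toy model (synthetic hourly temperatures: a fixed diurnal cycle peaking in late afternoon plus day-to-day weather noise, not a real station record) shows the evening reset running warm and the morning reset running cold relative to the true mean:

```python
import numpy as np

rng = np.random.default_rng(3)
days = 2000
hours = np.arange(days * 24)
# Diurnal cycle with a late-afternoon peak and a dawn minimum, plus
# day-to-day variability held constant within each calendar day.
temp = (10.0
        + 5.0 * np.sin(2 * np.pi * ((hours % 24) - 11) / 24)
        + np.repeat(rng.normal(0, 3, days), 24))

def tob_mean(series, reset_hour):
    """Long-run mean of (max+min)/2 over 24 h windows starting at reset_hour."""
    usable = (series.size - reset_hour) // 24 * 24
    windows = series[reset_hour:reset_hour + usable].reshape(-1, 24)
    return ((windows.max(axis=1) + windows.min(axis=1)) / 2).mean()

print("true mean:", round(temp.mean(), 2))
for reset, label in [(0, "midnight"), (7, "7 AM"), (19, "7 PM")]:
    # Warm afternoons straddle the 7 PM window boundary and get counted
    # in two windows' maxima; cold dawns do the same for the 7 AM minima.
    print(label, round(tob_mean(temp, reset), 2))
```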
“More recently, studies have looked at daily minima, and daily maxima and how they have changed over time, and how this may have changed the daily (diurnal) cycle. In many places, this has changed due to an increase in nocturnal cloudiness, for example. Such effects are of course rolled into the global mean.”
And so as a result, the 7AM recording times would show much more of the effect of an increased minimum due to increased nocturnal cloudiness, whereas the 7PM recording times would de-emphasize this effect.
Offhand I’m not sure of this, but there may have been (and there should be) attempts to standardize the data in order to eliminate the effects of recording time in a “corrected” climate record. Unfortunately, this sort of correction doesn’t always improve the data. I’m familiar with the corrections made to eliminate the effects of urbanization: the original observations are “massaged” using automated computer routines which are applied worldwide. The result is a set of artificial “observations” which are the best effort of well-meaning scientists to “standardize” huge volumes of data, without the human resources to do even sanity checks on all the individual station results. As a result, there are stations (I’ve looked at Barrow, Alaska, for example, in the NOAA GHCN data) where the “adjustments” produce data with statistical characteristics that are *so* obviously inaccurate that they are useless for any reasonable analysis.
The problem faced by climatologists who do this sort of analysis is a lack of financial and manpower support to adequately quality-control the vast amounts of individual station data. A herculean and very laudable effort is being made, but the resources that institutions such as NOAA have available are simply overwhelmed by all the tasks required once the decision is made to move beyond the *raw* data and begin to massage it for the purpose of “standardization”.
I put forth all this detail in an effort to give the informed lay reader a somewhat better understanding of the deep inner workings of climate analysis. Believe me, the people doing this work are dedicated, well-meaning, and very hard-working. And my intent is not to arbitrarily sow “seeds of doubt”. Rather, I hope to help the reader gain a more robust understanding of the full scope of the “uncertainty” which often disappears when climate results are summarized for the general public.
This “uncertainty” should not paralyze the process of climate evaluation. Ideally, scientists need to be fully aware of all these issues, and the competent interdisciplinary climate scientist takes them into account when providing conclusions about the REAL CLIMATE trends. But there are also many narrowly focused specialists who do not take the time to delve into these uncertainty issues in depth. These competent specialists rely on *summarized* results which have been “sanitized” simply for the sake of brevity. It is horribly difficult to recognize whether a scientist reporting climate results in his or her specialty has a grasp of all the uncertainties in disciplines beyond that specialty. The overall result of having good, competent scientists who cannot keep abreast of the wide range of uncertainties underlying some of their assumptions is a tendency to present results in a way that accepts the more remote findings of other disciplines as if they were “absolute”.
The upshot of this rambling: I believe that there is a “bias toward certainty” — an unjustified momentum which leads well-meaning scientists to accept more than they should, when it lies outside of their area of expertise.
But I digress.
“Your hypothesis that the record of global mean temperatures might have been affected by the odd warm hour on a spring day here and there has a very low probability of being correct, given the vast amount of data that goes into the global mean, from stations in all parts of the world (from the fully dark Antarctic winter days to the fully illuminated Arctic summers, desert and equatorial forest sites etc etc). However, I encourage you to investigate this using all the hourly data sets that you can lay your hands on, and report back to realclimate.org” -Ray
Ray, the fraction of the Earth dominated by broken stratus and stratocumulus is not trivial. You happened to mention Arctic summers: this is one of those regimes in which breaks in cloudiness are rare but define the maxima when they occur. Similarly, breaks in cloudiness in the Arctic winter produce intense temperature drops. And I can propose many more robust hypotheses. The problem, as I’ve suggested above, is that the manpower and funding available to investigate all the fruitful avenues for improving our understanding of climate are woefully inadequate compared to the tasks which we scientists have on our wish lists.
Chris Randles says
Is it possible to adjust the global mean temperature according to the number of sunspots in the year?
Adjusting for El Nino might be difficult, in that global warming may change the frequency of El Nino, so such an adjustment might remove some of the signal as well as some of the noise. Nevertheless, roughly what is the size of such adjustments, and would they make 2004 appear much nearer to being the warmest year?
I realise that too much noise reduction can mask changes in trends, and that if the adjusted temperatures show falls for three or four years, this is not enough to establish a new trend. Even so, I think such information would be useful to put the difference between the 0.54 degrees and the 0.44 degrees mentioned in the article into context. Also, to help with context, what are the average and standard deviation of the change in temperature from one year to the next?
Is there anything else that can or should be adjusted for? (I would rather avoid adjusting for things like aerosols which could have anthropogenic cause.)
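Such adjustments are usually done by regression. A hedged sketch of the idea (the solar and ENSO indices below are synthetic placeholders, and a serious analysis would worry about lags and about the very signal-removal problem raised above); it also prints the year-to-year statistics asked about:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
solar = np.sin(2 * np.pi * np.arange(n) / 11)  # stand-in for an 11-year solar cycle
enso = rng.normal(0, 1, n)                     # stand-in for an ENSO index
anoms = 0.01 * np.arange(n) + 0.05 * solar + 0.10 * enso + rng.normal(0, 0.05, n)

# Least-squares fit: anoms ~ const + b1*solar + b2*enso
X = np.column_stack([np.ones(n), solar, enso])
coef, *_ = np.linalg.lstsq(X, anoms, rcond=None)
adjusted = anoms - X[:, 1:] @ coef[1:]  # subtract the solar and ENSO parts only

changes = np.diff(anoms)
print("mean year-to-year change:", round(changes.mean(), 3))
print("std of year-to-year change:", round(changes.std(ddof=1), 3))
```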
Dan Allan says
Dr. Wetzel stated, in post 14:
“A new year of data advances our ability to characterize the trend by 3.33% or less, assuming that the shortest acceptable period required to define “climate” is 30 years.”
Several posts have either left this statement unchallenged or have reinforced it. And perhaps, in general, it is a useful guideline. But if we take the argument to its extreme, we can see its erroneousness. Imagine that in the years 2005 through 2010 the mean temperature rises 6 degrees. Obviously nobody expects this; the result would be shocking. It would also, I believe, be extremely meaningful. And yet, using Dr. Wetzel’s 30-year benchmark, it would be considered “meaningless” until we had more data. Clearly one must factor in both the duration and the magnitude of an anomaly before deciding whether it is meaningful. If it is true that temperatures are now higher than at any time in the last 1000 years, and this condition has persisted even a decade, one must consider it to be, if not “proof” of anthropogenic warming, at least a bit suspicious.
Dr. Wetzel urges us to “challenge accepted dogma.” I propose doing so, by challenging what he states as accepted dogma: that any time series shorter than 25 or 30 years is meaningless.
Dan A
(Great website, btw).
[Response: the “30 year” mean is the traditional defn of climate, and it was OK for traditional uses. A more modern view would be that climate is the statistics of weather. This encompasses both means and variations: ENSO is then part of “climate”, although it disappears from long-term means. See also the IPCC glossary for climate. In your example, a sudden jump in T would be a change to the statistics, I suppose – William]
Dan Allan says
William,
Thanks for the reply, although I wish I knew whether you fundamentally agreed or disagreed with my post. Let me try again, briefly, I promise. It seemed to me, unless I misunderstood, that Dr. Wetzel argued that our recent warm years should not be considered “meaningful” yet, because the rolling 30-year mean temperature is not yet that extreme. For instance, in post 11 he stated, “The most recent thirty year climate trend that we possess is now centered on the year 1989. The climate trend after 1989 is unknown and unknowable. Period.” I have to disagree with this, and was trying to point out that even a very short period, a few years of data, would have to be considered meaningful (i.e., not explainable by ENSO) if the temperature anomaly were sufficiently large. Even if we accept Dr. Wetzel’s 30-year minimum, we have to ask ourselves: how cold would the mean temperature have to be over the next 15 years for the climate trend centered on 2004 to be one of downward temperatures? Again, the magnitude of the temperature anomaly, not just its duration, must be relevant in assessing whether there is a trend.
I hope this makes a small bit of sense.
Thanks.
Dan
[Response: Sorry if I’m being opaque. If the temperature anomaly were large enough, then a single year’s anomaly would be significant. If 2005 were +2°C warmer than 2004, that by itself would be highly significant, because it would be an extreme outlier that does not fit the existing statistics (e.g. see the graph from CRU; in fact, looking at that graph, even +0.5°C would do). But a signal that extreme is unlikely. OTOH (sorry) there is also the question of the 2003 European heatwave. That was a very extreme signal, far outside the previous variation, yet it is also rather hard to fit into standard GW theory, because it would be extreme even under the warming to be expected in the next few decades (I think). I also think that refusing to look at periods shorter than 30 years, as PW appears to be saying, is unjustifiable (and I also think he has it wrong, even from his own defn: the traditional defn of *climate* is a 30-y mean, so to derive trends from this point of view you would have to have two 30-y periods) – William]
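The “outlier” test described in that response can be made concrete in a few lines: fit a trend to the recent record, look at the residual scatter, and ask how many standard deviations a hypothetical jump would sit above the trend line (a synthetic series is used here in place of the real CRU data):

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1961, 2005)
# Synthetic anomalies: modest trend plus ~0.1 deg C interannual scatter.
anoms = -0.2 + 0.015 * (years - 1961) + rng.normal(0, 0.1, years.size)

slope, intercept = np.polyfit(years, anoms, 1)
resid = anoms - (slope * years + intercept)
sigma = resid.std(ddof=2)

# A hypothetical 2005 that jumps +0.5 deg C above the 2004 value.
hypothetical = anoms[-1] + 0.5
expected = slope * 2005 + intercept
print("z-score:", round((hypothetical - expected) / sigma, 1))  # several sigma
```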
Mike Mayson says
Given the uncertainties and compromises surrounding temperature measurement and the definition of a “global average” I wonder if temperature is the best indicator of climate trend.
After all, a temperature reading is an indication of a property of a very small volume of material.
It represents a property of a larger volume only by inference.
Would it not be better to focus on planetary heat fluxes?
I am aware that there are a number of satellite experiments in progress and planned to look at this and I would be very interested if someone could point me to a summary of what has been learnt to date.
[Response: Note that the numbers we are talking about are global average temperature anomalies (not absolute temperatures). There are very good reasons for that, as outlined in Hansen’s description. Temperature anomalies are relatively easy to measure compared to changes in planetary heat fluxes; however, some such measurements have been made, e.g. Harries et al (2001). – gavin]
Peter J. Wetzel says
We can debate definitions. I hope we can settle on a consensus which specifies what we collectively define as “climate” and what we define as “climate change”. But I do not understand how an entirely arbitrary choice of definition can be labeled “justifiable” or “unjustifiable”.
As a scientist, I must adhere to the “scientific method”. The choice of definition is made during the formation of the hypothesis, and is not modified “ad hoc” as the data are evaluated, nor as new data arrive.
The hypothesis here is painfully simple: “Man affects climate”. The definitional issues: Define “affect”. Define “climate”.
Yes, I choose a traditional and rather conservative definition of “climate”. My personal choice is to accept, as climate change, a 30-year (or longer) average which is *at least* two standard errors above a long-term baseline (e.g. 1861 to 1960, or perhaps AD 1 to AD 1850). The standard error involves both natural variability (including variability that is not well understood because it operates on long time scales, and therefore has not been observed during the period of modern instrumentation) and measurement error (or error/uncertainty in the proxies).
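For concreteness, a minimal sketch of the stated criterion (synthetic data; note that the naive standard error below ignores autocorrelation, which any real version of this test would have to handle):

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1861, 2005)
# Synthetic anomalies: flat until 1975, then a modest warming trend.
anoms = np.where(years < 1975, 0.0, 0.01 * (years - 1975)) + rng.normal(0, 0.12, years.size)

base = anoms[years <= 1960]     # long baseline, 1861-1960
recent = anoms[years >= 1975]   # latest 30-year block

diff = recent.mean() - base.mean()
se = np.sqrt(base.var(ddof=1) / base.size + recent.var(ddof=1) / recent.size)
print("difference:", round(diff, 3), "2*SE:", round(2 * se, 3))
print("criterion met:", diff > 2 * se)
```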
Once this criterion for climate change has been met (and I contend that, at best, it has only barely been met), we can begin to talk about the further uncertainties involved in attribution.
I stand firm in my contention that a very traditional definition of “climate” serves us well in this debate. I contend that it rests on the most solid of scientific foundations: it is a blind choice. The 30-year definition was established long before the current global warming debate began. Those who choose their definitions a posteriori run the risk of being accused of pre-ordaining their desired conclusions.