Observant readers will have noticed a renewed assault upon the meteorological station data that underpin some conclusions about recent warming trends. Curiously enough, it comes just as the IPCC AR4 report declared that the recent warming trends are “unequivocal”, and when even Richard Lindzen has accepted that the globe has in fact warmed over the last century.
The new focus of attention is the placement of the temperature sensors and other potential ‘micro-site’ effects that might influence the readings. There is a possibility that these effects may change over time, introducing artifacts or jumps into the record. This is slightly different from the more often discussed ‘Urban Heat Island’ effect, which is a function of the wider area (and so could be present even in a perfectly set up urban station). UHI effects will generally lead to long term trends in an affected station (relative to a rural counterpart), whereas micro-site changes could lead to jumps in the record (of any sign) – some of which can be very difficult to detect in the data after the fact.
There is nothing wrong with increasing the meta-data for observing stations (unless it leads to harassment of volunteers). However, in the newfound enthusiasm for digital photography, many of the participants in this effort seem to have leaped to some very dubious conclusions that appear to be rooted in fundamental misunderstandings of the state of the science. Let’s examine some of those apparent assumptions:
Mistaken Assumption No. 1: Mainstream science doesn’t believe there are urban heat islands….
This is simply false. UHI effects have been documented in city environments worldwide and show that as cities become increasingly urbanised, increasing energy use, reductions in surface water (and evaporation) and increased concrete etc. tend to lead to warmer conditions than in nearby more rural areas. This is uncontroversial. However, the actual claim of the IPCC is that the effects of urban heat islands are likely small in the gridded temperature products (such as those produced by GISS and the Climate Research Unit (CRU)) because of efforts to correct for those biases. For instance, GISTEMP uses satellite-derived night light observations to classify stations as rural or urban and corrects the urban stations so that they match the trends from the rural stations before gridding the data. Other techniques (such as correcting for population growth) have also been used.
How much UHI contamination remains in the global mean temperatures has been tested in papers such as Parker (2005, 2006), which found there was no effective difference in global trends if one segregates the data between windy and calm days. This makes sense because UHI effects are stronger on calm days (when there is less mixing with the wider environment), and so if an increasing UHI effect were changing the trend, one would expect stronger trends on calm days, and that is not seen. Another convincing argument is that the regional trends seen simply do not resemble patterns of urbanisation, with the largest trends in the sparsely populated higher latitudes.
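As an illustration of the logic behind that test (not Parker’s actual code or data; the station record, wind speeds and the calm/windy threshold below are all synthetic and invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 50-year daily record: a 0.02 C/yr trend plus weather noise.
years = np.arange(50).repeat(365) + np.tile(np.arange(365) / 365.0, 50)
temps = 0.02 * years + rng.normal(0.0, 2.0, years.size)

# Hypothetical wind speeds; "calm" here simply means the lowest tercile.
wind = rng.gamma(shape=2.0, scale=3.0, size=years.size)
calm = wind < np.quantile(wind, 1.0 / 3.0)

def trend(x, y):
    """Least-squares slope of y against x, in C per year."""
    return np.polyfit(x, y, 1)[0]

print("trend on calm days :", round(trend(years[calm], temps[calm]), 4))
print("trend on windy days:", round(trend(years[~calm], temps[~calm]), 4))
# A growing UHI contamination would show up as a larger calm-day trend;
# here (no UHI added) the two slopes agree to within the noise.
```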
Mistaken Assumption No. 2: … and thinks that all station data are perfect.
This too is wrong. Since scientists started thinking about climate trends, concerns have been raised about the continuity of records – whether they are met. stations, satellites or ocean probes. The danger of mistakenly interpreting jumps due to measurement discontinuities as climate trends is well known. Some of the discontinuities (which can be of either sign) in weather records can be detected using jump point analyses (for instance in the new version of the NOAA product); others can be adjusted using known information (such as biases introduced by changes in the time of observation or by moving a station). However, there are undoubtedly undetected jumps remaining in the records, but without the meta-data or an overlap with a nearby unaffected station to compare to, these changes are unlikely to be fixable. To assess how much of a difference they make though, NCDC has set up a reference network which is much more closely monitored than the volunteer network, to see whether the large scale changes from this network and from the other stations match. Any mismatch will indicate the likely magnitude of differences due to undetected changes.
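To give a flavour of how a single artificial jump can be picked out of a noisy series, here is a minimal sketch using a simple t-statistic scan over candidate break years on synthetic data (the operational NOAA homogenisation is far more sophisticated and, among other things, compares neighbouring stations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual anomalies with an artificial +0.5 C jump at year 60
# (e.g. a station move), on top of noise and a small real trend.
n = 100
series = 0.005 * np.arange(n) + rng.normal(0.0, 0.2, n)
series[60:] += 0.5

def best_breakpoint(x, min_seg=10):
    """Scan candidate break years; return the one maximising the
    two-sample t-statistic between the 'before' and 'after' means."""
    best_k, best_t = None, 0.0
    for k in range(min_seg, len(x) - min_seg):
        a, b = x[:k], x[k:]
        se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
        t = abs(b.mean() - a.mean()) / se
        if t > best_t:
            best_k, best_t = k, t
    return best_k, best_t

k, t = best_breakpoint(series)
print(f"most likely breakpoint at year {k}, t-statistic {t:.1f}")
```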
It’s worth noting that these kinds of comparisons work because of the large distances over which the monthly temperature anomalies correlate. That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in New Jersey, say, also had a similar anomaly. You can see this clearly in the monthly anomaly plots or by looking at how well individual stations correlate. It is also worth reading “The Elusive Absolute Surface Temperature” to understand why we care about the anomalies rather than the absolute values.
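A toy version of that correlation argument, with two invented stations that share a regional monthly signal plus their own local weather noise (no real data involved):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical stations a few hundred km apart: both see the same
# regional monthly anomaly, plus independent local noise.
months = 600  # 50 years of monthly anomalies
regional = rng.normal(0.0, 1.0, months)
station_a = regional + rng.normal(0.0, 0.5, months)
station_b = regional + rng.normal(0.0, 0.5, months)

r = np.corrcoef(station_a, station_b)[0, 1]
print(f"correlation of monthly anomalies: {r:.2f}")
# With a shared-signal variance of 1.0 and local-noise variance of 0.25,
# the expected correlation is 1.0 / (1.0 + 0.25) = 0.8.
```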
Mistaken Assumption No. 3: CRU and GISS have something to do with the collection of data by the National Weather Services (NWSs)
Two of the global mean surface temperature products are produced outside of any National Weather Service. These are the products from CRU in the UK and NASA GISS in New York. Both CRU and GISS produce gridded products, using different methodologies, starting from raw data from NWSs around the world. CRU has direct links with many of them, while GISS gets the data from NOAA (who also produce their own gridded product). There are about three people involved in doing the GISTEMP analysis and they spend a couple of days a month on it. The idea that they are in any position to personally monitor the health of the observing network is laughable. That is, quite rightly, the responsibility of the National Weather Services who generally treat this duty very seriously. The purpose of the CRU and GISS efforts is to produce large scale data as best they can from the imperfect source material.
Mistaken Assumption No. 4: Global mean trends are simple averages of all weather stations
As discussed above, each of the groups making gridded products goes to a lot of trouble to eliminate problems (such as UHI) or jumps in the records, so the global means you see are not simple means of all data (this NCDC page explains some of the issues in their analysis). The methodology of the GISS effort is described in a number of papers – particularly Hansen et al 1999 and 2001.
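As a purely schematic illustration of why a gridded, area-weighted anomaly average is not the same as a naive average over all stations (the cell anomalies, station counts and areas below are invented):

```python
import numpy as np

# Toy example: three grid cells, unevenly sampled. Cell A has 8 stations
# (densely observed, small anomaly); cells B and C have one station each.
# Hypothetical monthly anomalies in deg C:
cell_anomalies = {
    "A": [0.1, 0.2, 0.15, 0.1, 0.2, 0.1, 0.15, 0.2],
    "B": [0.8],
    "C": [0.7],
}
# Hypothetical cell areas, as fractions of the domain:
cell_area = {"A": 0.2, "B": 0.4, "C": 0.4}

all_stations = [v for vals in cell_anomalies.values() for v in vals]
simple_mean = sum(all_stations) / len(all_stations)

# Grid first (average the stations within each cell), then area-weight:
gridded_mean = sum(cell_area[c] * np.mean(v) for c, v in cell_anomalies.items())

print(f"simple mean of all stations: {simple_mean:.2f} C")   # 0.27
print(f"area-weighted gridded mean : {gridded_mean:.2f} C")  # 0.63
# The simple mean is dragged towards the heavily sampled cell;
# the gridded mean is not.
```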
Mistaken Assumption No. 5: Finding problems with individual station data somehow affects climate model projections.
The idea apparently persists that climate models are somehow built on the surface temperature records, and that any adjustment to those records will change the model projections for the future. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. A statistical model of temperature might for instance calculate a match between known forcings and the station data and then attempt to make a forecast based on the change in projected forcings. In such a case, the projection would be affected by any adjustment to the training data. However, the climate models used in the IPCC forecasts are not statistical, but are physical in nature. They are self-consistent descriptions of the whole system whose inputs are only the boundary conditions and the changes in external forces (such as the solar constant, the orbit, or greenhouse gases). They do not assimilate the surface data, nor are they initialised from it. Instead, the model results for, say, the mean climate, or the change in recent decades or the seasonal cycle or response to El Niño events, are compared to the equivalent analyses in the gridded observations. Mismatches can help identify problems in the models, and are used to track improvements to the model physics. However, it is generally not possible to ‘tune’ the models to fit very specific bits of the surface data, and the evidence for that is the remaining (significant) offsets in average surface temperatures in the observations and the models. There is also no attempt to tweak the models in order to get better matches to regional trends in temperature.
Mistaken Assumption No. 6: If only enough problems can be found, global warming will go away
This is really two mistaken assumptions in one: that there is so little redundancy that throwing out a few dodgy met. stations will seriously affect the mean, and that evidence for global warming is exclusively tied to the land station data. Neither of those things is true. It has been estimated that the mean anomaly in the Northern Hemisphere at the monthly scale only has around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations – many times more than would be theoretically necessary. The second error is obvious from the fact that the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc., so even if all met stations were contaminated (which they aren’t), global warming would still be “unequivocal”. Since many of the participants in the latest effort appear to really want this assumption to be true, pointing out that it doesn’t really follow might be a disincentive, but hopefully they won’t let that detail damp their enthusiasm…
What then is the benefit of this effort? As stated above, more information is always useful, but knowing what to do about potentially problematic sitings is tricky. One would really like to know when a problem first arose, for instance – something that isn’t clear from a photograph taken today. If the station is moved now, there will be another potential artifact in the record. An argument could certainly be made that continuity of a series is more important for long term monitoring. A more convincing comparison though will be of the existing network with the (since 2001) Climate Reference Network from NCDC. However, that probably isn’t as much fun as driving around the country taking snapshots.
Jeffrey Davis says
Re: 299
because he goes against the grain he’s all of a sudden a hack of some sort
Irrelevant. Science is science. Hackery is hackery. Nobody’s a scientist all the time.
FurryCatHerder says
Re 295 —
There was someone on that thread who made a wager, and I was more than happy to accept. I don’t recall being contacted by said other party to formalize the wager with any sort of written terms and escrow deposit.
On the short term, someone betting “for” AGW to show its face is betting against any natural events that would depress the climate in the near term, simply because natural events produce greater short term climate change. On the long term, someone betting “for” AGW to show its face is betting against everyone deciding the threat is real and acting on it. In short, the more obvious the threat the less likely the person taking the “for” position is to actually win. Even if one believes in AGW, the area under the probability curve is dominated by a combination of “not going to happen” and “people react and it doesn’t happen”. My intuition tells me that a central “up a little, down a little” position poses the greatest financial risk to someone wagering against AGW.
Given the amount of focus being placed on “Climate Change”, I’d wager against any long term climate change and would have to be given odds to take a 20 or 30 year wager. I’ll likely be dead in 50 years, so 40 and 50 year wagers are straight out as I don’t plan to be well into my 80s and wondering if I’ve won or lost ;)
Mark A. York says
“Those few pictures indicate that there is possible bias at many stations and the bias appears to be due to urbanization which has all the biases being warming biases.”
This is a classic false cause fallacy. In my government work I’ve run into these weather stations in all sorts of locations. I place thermographs in streams to get long term profiles, so perhaps they are all biased too by this logical flight of fancy? I doubt it.
“This means that if you do not know and adjust for the bias, then your process could result in a temperature signal that is higher and changing upwards faster than it actually is.”
Every measure there is says the opposite when adjusted for every possible bias. That indicates it isn’t and separates wishful thinking from real science. I’ve done field biology long enough to know that much.
ray ladbury says
Vernon, now let me get this straight:
You are actually advocating as helpful sending a bunch of completely untrained individuals with
1)zero understanding of data networks,
2)zero understanding of data processing,
3)zero understanding of the history of the network,
4)zero understanding of the types of errors that are significant,
5)zero understanding of the signal,
6)zero understanding of the noise
And you want to do this when there is
1)zero evidence of any bias in the system
2)the results are consistent with every other line of inquiry
3)the data analysis is being done by skillful professionals who know how to deal with the errors introduced by site variability and degradation
4)the vast majority of the stations in the network are not affected by the biases you allege are there (unless you think polar bears and groundhogs are barbecuing and building parking lots).
And you propose to give your volunteers no training in how data are analyzed, so that they think every little variation from ideality that they find is the smoking gun that disproves that the globe is warming. Hell, you’re not even going to verify that they can balance their own checkbook for fear of compromising their “objectivity”. That about got it?
[edit – please keep personal issues out of this discussion]
Hank Roberts says
Geez, the guy’s been queried by a real climate scientist — and he went quiet, two weeks ago.
http://julesandjames.blogspot.com/2007/06/more-on-20000-bet.html
Harold Pierce Jr says
RE #170
One major objective of my project is to determine the range of natural temperature variation at a weather station by reducing the number of factors that affect temperature to as few as possible. For example, by choosing the daily minimum temperature, the effects of clouds on sunlight are eliminated. In general, the daily minimum temperature occurs just before sunrise when the winds are calm and the land quiescent. This is why the white-crowned sparrow calls early in the morning, for its call can travel quite a distance without distortion or interference from background noise.
Oil on the oceans should be looked into. Not only do ships deposit huge amounts of oil by discharging bilge water, there are enormous quantities that are flushed from the streets of coastal cities. Every parking lot is just splattered with oil and this eventually ends up in the rivers, lakes and oceans. Oil is not biodegarable although it is slowly oxidized by air.
Steve Bloom says
Re #287: Just to note that the concluding passage from that Science Daily article —
“The dating of dust particles also showed that it has been at least 450,000 years ago since the area of the DYE-3 drilling, in the southern part of Greenland, was ice-free.
“That signifies that there was ice there during the Eemian interglacial period 125,000 years ago. It means that although we are now confronted with global warming, the whole ice sheet will not melt and bring about the tremendous sea-level rises which have been the subject of so much discussion.”
— isn’t supported by the paper. As noted elsewhere above, the dating is not certain with regard to the Eemian. Even if it were, though, the final conclusion about not having to worry about “tremendous” sea-level rise is completely unsupported since the research affirms that more or less complete melting did occur during prior interglacials. Aside from suggesting an interesting line of research as to what the differences are between the interglacials that might result in differential melting of the Greenland Ice Sheet, these results should only cause us to be more concerned about a repeat of circumstances under which the Eemian sea-level rise would have come primarily from the West Antarctic Ice Sheet. IOW, there’s no comfort at all here for denialists.
This view is consistent with the press release from Eurekalert.
Re #304: Ray, it’s worth repeating that this surface record documentation business is just the latest chew toy for the “audit” crowd now that the hockey stick has become boring. Once this is over with, it’ll be on to something else.
Some of them are capable of sounding reasonable, but IMHO Mankoff ought to search the surfacestations site for comments made about Jim Hansen prior to firming up plans to work with them.
Steve Bloom says
The Greenland and Antarctic ice sheets got some extensive news coverage this last week, BTW. (Note to Hank that the Antarctic one includes a juicy ANDRILL teaser.)
Matei Georgescu says
Re#304
Snapping up 4 high-quality images from pre-determined locations requires what sort of statistical and mathematical training?
Re#299
Hardly irrelevant – if you attack a scientist’s methodology, state what part of it constitutes revision and how it should be revised.
For example, read my initial post on this thread regarding the invalid argument made by Parker (2006) and why this paper should not be cited as evidence of a lack of UHI signal on large scales – I don’t simply state that “he means nothing more nor less than something which conforms with his personal and oftentimes ambiguously-stated methodology”. I actually critique his methodology – please separate that from personal attack, which has no business here.
Hank Roberts says
Harold Pierce — please, try checking your beliefs; Google Scholar will usually give you useful info if you read only the abstracts on the first page of hits. Spelling counts in finding answers; this for example:
17α(H)-21β(H)-Hopane as a conserved internal marker for estimating the biodegradation of crude oil
RC Prince, DL Elmendorf, JR Lute, CS Hsu, CE Haith et al. – Environmental Science & Technology, 1994 – pubs.acs.org
“… Introduction: The majority of the components of crude oil are biodegradable (1-4), but quantifying biodegradation in the field has proven to be a challenge. …”
DocMartyn says
Modeling your filtering process; will you put your money where your mouth is?
Let us say that a group of skeptics were to be assigned a virtual weather station. They are given a temperature data set which has a clear temperature trend. There are 40-50 such stations, each station has a nominal 100 years of history (but it may be less).
Each skeptic is allowed to introduce a number (randomly generated) of random events in their station, during its whole history; as in a move, as in a gap of x number of years, as in a UHI effect or even a barbeque next to the instruments.
We will have a strict way that data is allowed to be altered, generally agreed before hand. The actual position of each station in a grid will be chosen at random. We will have the golden envelope that contains the “real” data set and hand the “noised” data to you people.
Could you find the underlying temperature trend?
What will your error bars be?
I could get 40-50 people who will each handle one site and one site alone. We could easily have someone make up a “real” temperature data-set, who would not be in contact with either group.
Are you prepared to wager on the outcome?
Timothy Chase says
Matei Georgescu (#298) wrote:
When I say “personal methodology,” I mean essentially that there are methodologies which are used in science for the purpose of forecasting, then there is the methodology which he himself originated and promotes. When I say “ambiguous,” I mean that his methodology is stated in terms of “principles” which at least appear mutually contradictory.
Please see:
His actual field is “marketing,” and he is Professor of Marketing at the Wharton Business School, University of Pennsylvania. In the forecasting of natural phenomena as opposed to market forecasting, I believe he has as much expertise as Bill Dembski in evolutionary biology. Judging from the Kos article above, his greatest skill is self-promotion.
However, “hack” is not the word that I would use to describe him. Hack generally implies lack of skill, and this guy has all the skill of your typical pool hustler or card shark. As for his articles being peer reviewed, plenty of articles in deconstructionism can make the same claim.
Ian Forrester says
Re #306: “Oil is not biodegarable although it is slowly oxidized by air”.
It may not be “biodegarable” but it is definitely biodegradable. I’ve conducted many lab scale and field scale projects on biodegradation of crude oil and refined products over the past 30 years.
Ian Forrester
Chuck Booth says
Re 306 “Oil is not biodegarable [sic]”
Of course it is biodegradable – there are plenty of oil-degrading bacteria and fungi, etc. in the ocean. Here is a free electronic book on the subject from the U.S. National Academies of Sciences:
http://books.nap.edu/openbook.php?record_id=314&page=270
Paul G says
For those who are interested, NOAA has restored access to surface temperature site data and, despite the misgivings of many people here, photographing of sites has resumed.
Hank Roberts says
DocMartyn, you’re restating the same PR talking point that’s failed above at length. Climate isn’t weather; this is a small signal detectable with a large number of stations.
http://julesandjames.blogspot.com/2007/06/20000-climate-forecast-bet.html
John Mashey says
re: #118 Jim Cripwell
re: #302 FCH
Are either of you up for a Long Bet, ending say around 2020? (I think I might still be around then; see http://www.longbets.org for the mechanism.)
If I read Jim’s post right, it sounds like he may believe the Abdusmatov-like theory (i.e., like “Climate Skeptic?”) that tells us to expect cooling soon. (Is that correct?) I tried to offer CS a Long Bet, but he seemed to disappear shortly thereafter.
FCH: I’d hate to bet against you, but you seem (I’m not sure?) to bet either that the world will not naturally get warmer, or that humans will decide to do something quickly enough. I wish it were otherwise, but since I believe that physics says CO2 will stay a warming influence for a long time, even if we stopped emitting any CO2 tomorrow, I’d guess that a 3 (or preferably 5) year average for 2020 will not be lower than that of 2009 (that’s picked to be 11 years apart to match solar cycles). I’d even take my chances with volcanoes and ENSOs.
Overall on chasing USHCN stations:
according to the CIA Factbook:
510.1 M sq km = Earth total surface area
148.9 M sq km = Earth land surface area
9.2 M sq km = USA land area (1.8% of total surface), of which
-1.5 M sq km = Alaska (other sources); Hawaii is small
7.7 M sq km = rest of the US (1.5% of total surface), since I don’t believe many stations are subject to serious UHI in Alaska.
There could be a substantial amount of uncorrected UHI … and it still wouldn’t matter on the world scale.
In one of my old roles as a computer performance guy [I was one of the architects for SGI Origin/Altix supercomputers with which Gavin would be familiar], I’d have had STRONG words with anybody who:
– started with 100 long-established benchmarks (probably a 3X over-sample) that together yielded a performance number,
– took 1-2 of those, whose results were not outliers, but were in the middle of the distribution, and which had often been scrutinized by experts
– and then proposed to spend a lot of time analyzing those 1-2 to death.
My main worry about UHI isn’t the measurement, it’s the effect on the US Southeast/Southwest especially:
hotter -> run air-conditioners more -> exhaust hotter air outside ->
hotter … chew up more power to run air-conditioners … -> burn more coal ->
fire up least efficient plants for peak electricity usage.
http://communicate.aag.org/eseries/aag_org/program/AbstractDetail.cfm?AbstractID=12661
says Phoenix, AZ already has a UHI of 6C … which still doesn’t make any difference to the overall numbers, but has strong local effects. I was there ~15 years ago in the summer, and it was already ferocious. I’d guess that actions to ameliorate the UHI effect (more trees, rooftop gardens, better building techniques) will prove to be good investments in many places.
ray ladbury says
DocMartyn, Let me get this straight. You’re talking 40-50 randomly selected stations–GLOBALLY. What is the range on the number of events, and the duration (or is that random, too)? Are the stations randomly selected? Is there any restriction on the type of error that can be introduced (i.e. does it have to be of a type that could be found in nature)?
If such a proposition were available, I would strongly consider a piece of that action. But before taking your money, as an honest man, I would have to ask you to consider the wager you are making:
You are saying that by introducing some sort of noise some random number of times at 40-50 different stations out of a network with thousands? tens of thousands? of stations, you could significantly alter a GLOBAL trend. OK, let’s say the stations are randomly selected. The chance of any two neighboring stations being affected is very small, of 3 neighboring stations being affected minuscule, and so on. What if I choose to compare any station’s reading to the 5 nearest it? OK, now say that YOU get to choose the stations. Be careful. If you choose them too close together, the trend you introduce will be local, not global. Now let’s consider the type of noise you introduce. If the noise is large, or it varies in some significant manner from natural noise, GOTCHA. All I have to do is drop that station for all time, or downweight it to insignificance. And if your noise looks like a trend I might see in nature, it probably won’t significantly affect the Global results. So, given that I’ll either be able to identify your sabotage or that it won’t significantly affect the global trend, and given that you’re tampering with maybe a percent of the number of stations in the network, yeah, I like those odds. Still interested.
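A rough sketch of the nearest-neighbour screening Ray describes, with everything invented for illustration: 500 fictitious stations on a plane, one month of anomalies, and ten stations deliberately offset by 2 C.

```python
import numpy as np

rng = np.random.default_rng(3)

# 500 hypothetical stations scattered over a 1000 x 1000 km region, all
# sampling the same regional anomaly plus local noise; ten of them are
# "sabotaged" with a +2 C offset.
n = 500
xy = rng.uniform(0, 1000, size=(n, 2))     # station coordinates, km
anomaly = rng.normal(0.5, 0.3, n)          # this month's anomalies, deg C
bad = rng.choice(n, size=10, replace=False)
anomaly[bad] += 2.0

def neighbor_flags(xy, anomaly, k=5, threshold=1.0):
    """Flag stations whose anomaly deviates from the median of their
    k nearest neighbours by more than `threshold` degrees."""
    flags = []
    for i in range(len(anomaly)):
        d = np.hypot(*(xy - xy[i]).T)
        nearest = np.argsort(d)[1:k + 1]   # skip the station itself
        if abs(anomaly[i] - np.median(anomaly[nearest])) > threshold:
            flags.append(i)
    return flags

print("injected bad stations:     ", sorted(bad.tolist()))
print("flagged by neighbour check:", sorted(neighbor_flags(xy, anomaly)))
```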
Steve Reynolds says
304 ray ladbury…> so that they think every little variation from ideality that they find is the smoking gun that disproves that the globe is warming.
Why do so many appear to make a straw man of this?
Yes, it is very unlikely that AGW will be disproved by auditing temperature records, but don’t we want to have the most accurate data possible? Resistance to transparency only helps to make a denier case that there is something to hide.
Also, this is mostly not a qualitative argument about whether AGW exists. If the temperature record is in error by 0.1C or so, maybe a climate sensitivity of 2C is more likely than 3C (I know there are other methods of estimating sensitivity, but there is considerable uncertainty).
Edo River says
Can someone tell me what an “undocumented changepoint”, found in the USHCN Version 2 Serial Monthly Dataset document referred to in the Assumption No. 2 link, is?
Alex says
Very interesting thread… keep the debate going guys, there’s still a long way to go before we find out if the arctic will melt. It’s very interesting to see even on a pro-global-warming site such a mix of varying viewpoints on climate change; personally I feel the debate is essential to preserving good science. I also think it’s important to keep from overwhelming the general public with a certain bias before a general scientific consensus is within reach.
Until this issue is as widely accepted as tectonic plates, the debate is far from over. So please, do try to interpret scientific evidence with one agenda in mind, that is, science.
Philip Machanick says
The Australian Broadcasting Corporation is showing the climate change swindle show soon. In preparation, The Australian ran a piece, Hostages to a hoax, by Martin Durkin, who made the show — it features a pair of graphs from Willie Soon’s Geophysical Research Letters 2005 (vol. 32, 27 Aug, L16712) paper in the print edition (Fig. 1 from http://ff.org/centers/csspp/library/co2weekly/20060406/20060406_11.pdf — took me a bit of digging to find it, since Durkin only cited the journal name, volume and year).
I don’t think I’ve yet seen a critical dissection of this particular paper but it strikes me as odd that he can get away with doing a 125-year correlation-based comparison of 2 isolated variables vs. temperature.
How many astrophysicists, I wonder, have published papers funded by the American Petroleum Institute and Exxon-Mobil?
A couple more questions … why does he have to use his own carefully massaged temperature measure when there are other accepted measures around? How significant an effect can you expect with solar energy per area varying by less than 0.3%? If he must treat the 2 variables in isolation, why does he compare them over a long-term range, when the CO2 is not increasing as steeply as today? Without doing the stats, if you eyeball the graphs, the CO2-temp trend looks like a much better fit post-1960, when CO2 started to increase more significantly.
Informed comment would be much appreciated.
Timothy Chase says
Steve Reynolds (#319) wrote:
The figure of roughly 2.9 C comes from the paleoclimate studies for the past 400,000 years. I don’t know of anyone who would be trying to estimate climate sensitivity on the basis of present day temperature records. For one thing, it just wouldn’t make much sense: climate sensitivity isn’t just the temperature change which has occured as the result of the rise in CO2 levels – it is also whatever temperature change is still in the pipeline until the climate system finally re-achieves a quasi-equilibrium.
But there is one thing to keep in mind, something which I personally regard as a great deal more important than any Urban Heat Island effect: the climate sensitivity isn’t what temperature increase will result in the long-run from the carbon dioxide which is in the system at present. The climate sensitivity is ultimately a question of how much the temperature increases relative to the increase in carbon dioxide once the new equilibria for both are achieved. The further this goes, the more positive feedback we are going to be seeing as the result of the carbon cycle.
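To make the distinction between realised and equilibrium warming concrete, here is a back-of-the-envelope sketch; the logarithmic forcing relation and all the numbers below are standard textbook approximations assumed for illustration, not taken from the comment above.

```python
import math

S = 3.0                 # assumed equilibrium sensitivity, deg C per CO2 doubling
C0, C = 280.0, 383.0    # approximate pre-industrial and ~2007 CO2, ppm

# Equilibrium warming for the CO2 increase so far, if S is right:
delta_T_eq = S * math.log(C / C0) / math.log(2.0)
print(f"equilibrium warming for {C0:.0f} -> {C:.0f} ppm: {delta_T_eq:.1f} C")
# Roughly 1.4 C at equilibrium, versus ~0.7-0.8 C realised so far: the
# difference is the warming still "in the pipeline" referred to above,
# before any carbon-cycle feedbacks are counted.
```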
Recently we discovered that the Southern Ocean has been losing its ability to absorb carbon dioxide. Likewise it appears that plants are losing their ability to take up as much carbon dioxide as they have been in the past – at least during times of heat and drought stress. And now thawing permafrost is releasing methane in the Arctic and Sub-Arctic regions. Then there is the wildcard of shallow water methane hydrates.
Many of the feedbacks which are kicking in from the carbon cycle are largely a function of temperature. At some point, it is quite possible that the ocean will become a net emitter of carbon dioxide. And even if we were to stop emitting carbon dioxide right now, we would still have a fair amount of warming in the pipeline: temperatures would continue to rise substantially for the next fifty years.
But somehow I doubt that we will even be reducing our net emissions within the next few years. We will probably be fairly lucky if we see them start to fall twenty years from now.
nicolas L. says
re 321, Alex
Don’t get this wrong but… the Arctic is already melting.
http://www.msnbc.msn.com/id/9053898/
“Average arctic temperatures increased at almost twice the global average rate in the past 100 years. Arctic temperatures have high decadal variability, and a warm period was also observed from 1925 to 1945.”
“Satellite data since 1978 show that annual average arctic sea ice extent has shrunk by 2.7 (2.1 to 3.3)% per decade, with larger decreases in summer of 7.4 (5.0 to 9.8)% per decade”
http://ipcc-wg1.ucar.edu/wg1/Report/AR4WG1_Pub_SPM-v2.pdf
Secondly, I think this issue is as widely accepted in the climate science community as tectonics is accepted by geologists (and I’m sure I could find one or two of them to tell you tectonics is just a flat wrong theory :) ). You said it, no one should interpret scientific evidence with an agenda in mind…
Steve Jewson says
Some responses to people who replied to my original posting.
Thanks for your replies.
Gavin:
Actually I think realclimate.org is a great place to educate people wrt the issue that most state-funded surface weather observations are kept proprietary. Anyone who cares about society’s response to climate change should care about this issue, and the more people who know about it the better. I agree that the governments responsible should also be targeted directly.
The impact of not being able to get data is that many groups of people who need to start responding to climate change can’t do so as effectively as they would otherwise be able to because they can’t quantify their exposures and their risks. That’s bad news for all of us.
Wrt your comment about NOAA data: note that much of the ‘free’ data from NOAA actually comes with legal restrictions, because of the infamous WMO resolution 40. For a lot of the international data NOAA provides, it wouldn’t be legal for me to use it.
My apologies to Norway and NZ for not mentioning that they do make their data available. All credit to them. I’ve dealt with the UK, France, Germany, Holland, Spain, Italy, Belgium, Luxembourg, Sweden, Finland, Denmark, Australia, Japan, Austria, India and Greece. They are all very nice people, but their data is very expensive (at least last time I checked), and this really limits the extent to which it is usable by the people who need to be looking at it.
I agree that *very* large scale climate questions can be addressed using the free data. But I’m interested in smaller scales. How are the patterns of rainfall in the UK being affected by climate change? What are the patterns of temperature change in France? Are there more thunderstorms in Italy? Are typhoon winds in Japan changing? You can’t answer these kinds of questions with the free data.
Tamino:
I agree that you can find a lot of temperature series on line. But it’s a tiny fraction of what’s being measured, and the resolution just isn’t there for the kinds of questions I’ve listed above.
Ray Ladbury:
Sorry, I don’t quite understand your point. I’m a professional meteorologist, and I spend most of my time analysing weather data. I write papers and books on the subject. In my humble opinion of myself, I am qualified to analyse weather data and make sense of it. I want the data to be available so that I can reproduce, check, confirm or refute, and extend, what the state funded researchers are doing. And so that other suitably qualified people can do the same. I wouldn’t deny that it’s hard work. I don’t have any plans to download the human genome. I wouldn’t have the faintest idea what to do with it.
Hank Roberts:
My comments about the availability of climate data are based on working in a group of people that has contacted many of the NWSs in the world to try and get their data. If there is any other group in the world who has spent as much time as we have contacting the NWSs to get data, I’d like to hear from them and know what their experiences were. Anecdotally, I talk to a lot of applied meteorologists. They all have the same frustration.
My (rather poor) joke on this subject: for companies in London, UK, it’s easier to get weather data for London, Texas, than it is to get it for their own town.
Pekka Kostamo says
Having seen a number (maybe a hundred or so) of official temperature observation stations over 40 years of time and on all continents (except Antarctica), I might consider favourably any study results that would report about a 0.2 degree underestimate of the global warming in the actual observations.
The reason being that the personnel training and equipment service, maintenance and replacement improvements have substantially reduced the “raw data” temperature measurement errors, slowly but surely. The old and grimy wooden thermometer screens with flaking white paint have gradually become things of the past. The solar radiation heating errors are consequently much less than they used to be.
Another comment I have is that the function of “climate observation” has undergone a profound change over the years. It used to be a local interest. I.e. in the U.S. the “State Climatologist” office system was established to provide climate guidance to the local businesses and the public related to things like which crops and varieties it would be possible to grow in which localities, or on the flooding probabilities of proposed rainwater drains, etc. This did not require 0.1 degC measurement accuracy, and the observation stations were equipped and maintained accordingly with lowest cost instruments meeting those actual needs. This was of course entirely right, local climates in the U.S. show a wide range and the needs were correctly understood.
“Global climate” was then a specialized and narrow academic discipline that was not much of a consideration. It has only recently become an operationally critical main stream interest.
Higher accuracy measurements were required by the weather forecasting services. In that application, a station’s 0.2 degC bias would show up on the national and regional weather maps. Consequently much more was invested in the equipment and training of people working on the (fewer) synoptic stations, in systematic re-calibration of sensors, maintenance of thermometer screens etc.
Various constructions of thermometer screens have been tested in Europe (Netherlands and Norway). In those well controlled and well maintained circumstances impacts are still significant.
http://www.dwd.de/EUMETNET/Berichte/TECO98temp.pdf
U.S. is unique in having a quite separate local climate observation organization. In all other countries climate observation has been an integral part of the national weather service.
Global climate analysis is bound to live with the quantity and quality of past observations, made to the requirements and specifications of other services – inadequate as they may be. As has been said, luckily it is not critical to the science aspect, as the observation statistics are just a diagnostic tool, not a primary input.
As practical advice based on experience, I do propose the following: If a new observation does not come close to a prediction by an established old theory, check carefully the observation. The large errors are very likely found there. Small differences may be on either side.
As to the photo mission, much more useful would be to collect old photos taken at the stations, of which there certainly are great numbers in the family albums.
Sam says
RE: T Chase #323
“Recently we discovered that the Southern Ocean has been losing its ability to absorb carbon dioxide.” The oceans are not losing their ability to absorb carbon dioxide, they are just not increasing the absorption. This has been attributed to changes in weather (winds), which are not necessarily permanent.
“Likewise it appears that plants are losing their ability to take up as much carbon dioxide as they have been in the past – at least during times of heat and drought stress.” Really? Losing their ability? Plants thrive in a high CO2 environment, and perform particularly well with respect to drought as they evapotranspire much more efficiently. This efficient use of water overcomes the effect of heat.
“And now thawing permafrost is releasing methane in the Arctic and Sub-Arctic regions.” Permafrost is melting, but why are methane concentrations in the atmosphere leveling off and trending downward?
The case for global warming is strong. Why is there a need to stretch the data and cheer lead for disaster?
tamino says
Re: #322 (Phillip Machanick)
One of the big problems with the graph is that the solar data (total solar irradiance, or TSI) is not really right. Durkin, in his piece in The Australian, claims that
But this is just plain wrong. Note that the solar data graphed go back to about 1875; NASA and NOAA have only been measuring TSI since about 1978. In fact the TSI data in Soon’s paper come from a reconstruction, based on proxy data, by Hoyt & Schatten (1993, updated later). But the satellite measurements (the data actually from NASA and NOAA!) distinctly contradict the proxy reconstruction of Hoyt & Schatten. If you want to use a proxy reconstruction for data prior to 1978, the “gold standard” these days seems to be Lean (2000, later updated), which matches the satellite observations quite well during their period of overlap.
It seems to me that what Soon did was to search for some temperature dataset, somewhere, that would match the TSI data he was using. There are certainly enough regions of the earth that one could likely find such a match, whether there is a causal relationship or not! Now that we have better TSI data, the “match” isn’t nearly so good.
You can find more info on TSI in this post and this post on my blog.
Vernon says
Re: 304 Ray, far be it from me to imply you could be wrong, but since I know so little… please explain how you can adjust for a sampling bias without knowing what it is first. I did a review and could not find this.
I also never discussed how to do a study of possible bias in surface stations.
Hank Roberts says
Vernon, do you know what “sampling bias” is?
” Sampling bias can occur any time your sample is not a random sample. If it is not random, some individuals are more likely than others to be chosen (more subtly, some combinations of individuals are more likely to be chosen together). Sampling bias occurs whenever those more likely differ in their distribution of one or more of the measured variables from those less likely.”
http://cs.fairfield.edu/~sawin/Stats/Notes/sampling.html
We are not even starting with weather stations randomly distributed across the planet. You’re assuming and stating as a fact your belief that the people running the instruments don’t know as much as you do. You may want to learn more about how they are using the system before deciding for sure that they are wrong.
Ray Ladbury says
OK, Vernon, work through this with me. You claim there is a sampling bias in the data. How would we find it and characterize it? Well, first, “sampling bias” isn’t very specific. The sampling bias could be one that crept in over time (e.g. through urbanization), or it could have existed from the beginning, or it could be a combination of both. (Can you think of any other possibilities?) No matter. We have data going back 100 years or more from some stations, and we have multiple stations near each station that we can cross compare. We can even compare stations far from each other but with similar microclimates (latitude, altitude, geographic setting, weather…). We can compare stations that have different microclimates that vary from each other in well understood ways. We can look at stations and their neighbors where one station is known to have urbanized and compare them to similar stations where all remain rural.
Now the signal we are looking for is gradual AND global. If we see a sudden temporary change, we probably just drop that reading. If we see a sudden permanent change, we probably downweight that station in future analyses. And we can also see if nearby stations are similarly affected. If they are, that’s a tipoff that maybe we actually need to look at that little region. It still won’t produce a global signal, but something odd is going on there and we might need more info to understand how to treat it. Or if we don’t have a grad student we want to expose to poison ivy, we can just drop that little cluster–hell, we’re oversampled by ~100x anyway.
Now, I know I’ve made this sound easy. It’s actually a lot of hard work, but it is mostly straightforward, and anyone who has worked with a large geo information network is going to understand how this works. Still the thing that makes it straightforward is the fact that the signal you are looking for is gradual and global, while the errors will tend to be local and often more rapid. It also doesn’t hurt that the biggest signals are in polar high-latitude regions that are still largely unurbanized.
I don’t disagree that as you move from global climate models to regional and local models (e.g. what does climate change mean for the poor souls in Vegas who are enduring 120 degree temperatures and haven’t seen rain this year?), these effects may be important. For the issue of Global climate change, they’re a fart in a windstorm.
Vernon says
RE 331: Ray, you still have not answered the root question: how do you adjust for a sampling bias when you do not know what it is? Your answer is that there is no bias, but by definition you cannot adjust for a bias until you have detected it and understood it. All I am saying is that the pictures shown indicate that there is poor siting at many stations that could introduce a bias, but we will not know until the stations are inspected. So I have to ask, how are you going to determine what the bias is, or even if one exists, without a study of the stations, and why is doing a study to determine whether there is a bias or not something that you have to be so against? When is collecting better data wrong?
Timothy Chase says
Sam (#327) wrote:
From what I can see, the amount of methane that we are releasing has leveled off, but it is not declining as of yet, at least not as of 2005.
Please see:
… and more recently, I believe the amount of methane being released from the permafrost has actually increased.
While we have managed to reduce our own methane emissions and it has leveled off as of 2005, it is a factor and it is something we need to take seriously. While I most certainly do not expect a catastrophic release of methane from the permafrost, we have good reason to believe that its release will be increasing in the coming years. And although it has a half-life of only 40 years, even once it decays, it leaves behind an equal amount of carbon dioxide which will remain in the atmosphere much longer.
Please see:
My concern is that we still appear to lack the political will to do something about our own emissions of carbon dioxide, and once permafrost thaws, unlike our emissions, it is something we cannot control. As for shallow water methane hydrates, these too are something which we cannot control – once they become a factor, but they pose a more distant threat.
Chuck Booth says
Re 324 “I’m sure I could find one or two of them to tell you tectonics is just a flat wrong theory”
Oh, yes, there are a few alternative “theories,” some of which have been proposed by geologists:
http://users.indigo.net.au/don/links.html
http://64.233.167.104/search?q=cache:_bOi346yXgsJ:www.ncgt.org/aboutNCGT/aboutNCGT.pdf+plate+tectonics+alternative&hl=en&ct=clnk&cd=9&gl=us&client=safari
Ray Ladbury says
Hank, to be fair, we are sampling temperature at various points around the globe. Perhaps what Vernon is alleging is that the distribution of weather stations around the globe is nonuniform and so would give rise to a sampling bias. However, the answer to this is the same as the answer that I gave: the data will tell you.
Vernon, are you aware of the statistical analysis technique of bootstrapping? It is a technique for looking (among other things) at the dependence of your result on a subset of your data. Now, bootstrapping in a complicated geo-network like the meteorological network can be performed in a variety of ways. You can remove single stations and recompute your result–this tells you if a single station or indeed several stations may introduce a bias. You can remove local clusters of stations–singly or in combination. This tells you where you might have had large-scale local changes. You could remove regions and see if your signal is still robust over the rest of the planet.
Other things you could do:
1)Divide your data in half randomly. Compute your result with each half and see if they agree.
2)Compare your results to multiple other results to see if they are consistent.
Again, Vernon, the signal you are looking at is global. You have to find a problem not just in New York, but in Timbuktu and Novosibirsk–or even more likely, you have to find multiple problems that give rise to a comparable effect in >33% or so of the stations in the network. Even more improbably, it would have to have roughly the same time and spatial dependence as your signal. Do you really think you’ll find anything of that order of magnitude? Do you really think that anything that huge would have escaped notice?
Now notice that I am not saying “Don’t look.” I’m saying “Look where you are most likely to find any issue that exists–in the data,”–and that has already been done with astounding thoroughness.
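To make the split-half and jackknife-style checks Ray mentions concrete, here is a minimal sketch on a wholly synthetic network (1000 fictitious stations sharing one trend plus independent noise; not any group’s operational code):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical network: 1000 stations, 50 annual anomalies each, all
# sharing a 0.02 C/yr "global" trend plus independent station noise.
n_stations, n_years = 1000, 50
years = np.arange(n_years)
data = 0.02 * years + rng.normal(0.0, 0.5, (n_stations, n_years))

def network_trend(stations):
    """Trend (C/yr) of the network-mean annual anomaly."""
    return np.polyfit(years, stations.mean(axis=0), 1)[0]

print("full network trend:", round(network_trend(data), 4))

# Split-half test: two random halves of the network should agree.
idx = rng.permutation(n_stations)
print("half A:", round(network_trend(data[idx[:500]]), 4),
      " half B:", round(network_trend(data[idx[500:]]), 4))

# Jackknife-style test: drop 50 random stations at a time and see how
# much the recovered trend moves.
trends = [network_trend(np.delete(data,
                                  rng.choice(n_stations, 50, replace=False),
                                  axis=0))
          for _ in range(100)]
print("spread of trends after dropping 50 stations:",
      round(float(np.std(trends)), 5), "C/yr")
```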
John Mashey says
re: #322 Phil
Hi Phil!
Since OT, short, see:
http://en.wikipedia.org/wiki/The_Great_Global_Warming_Swindle
which goes through the GGWS controversy, including pointers to reviews.
Sam says
RE:333 Tim: thanks for the response on the methane issue. Before getting too pessimistic, please consider that most of the permafrost will remain frozen (only the surface and southern areas will begin melting), and that when it melts, vegetation will grow and with it carbon will begin to accumulate in the ground again. This is, after all, how the methane got there in the first place.
Vernon says
RE: 335 Ray, how do we know how many stations are badly sited? Is that not the whole issue? You’re going with feelings that it is not likely that enough stations have a problem, but you don’t have the evidence to back that assertion.
Now do I think that could escape notice, why not, based on the pictures I have seen so far, it appears that many of the stations presented have problems. I do not know if the pictures represent a valid sample of the network but I would like to see the evidence so we can know, not feel that it is correct.
Once again you ask if I feel on whether this could escape notice and I have to say again, I don’t want to feel it, I want to see the empirical evidence.
Somehow I do not think that science is based on feelings but rather on empirical evidence.
Doug Watts says
I call the temp. quibbling disingenuous because it is analogous in structure and intent to treating a few inaccurate electrical meters as evidence that electricity may not exist.
It’s that transparent and phony.
Hank Roberts says
Vernon, did you reread #20?
Are you saying that you feel, based on looking at those pictures, that the temperature readings from the boxes in the pictures must be biased, and must be biased to give too high a reading?
If you put a comparable box nearby, with fresh paint, or cleaned screens, or positioned outside of the shadow/depression/parking lot/heating exhaust vent, in the picture, would you expect the thermometer result from the nearby box to be different enough in its reading to make a difference?
How would you tell, if not by making the comparisons already described?
Would you want to look at readings from the individual thermometers in the two boxes side by side, and decide if they were different?
How would you decide?
Dan says
re: 338. This entire point is a rotten red herring and quite disingenuous with respect to global warming. As has been pointed out numerous times and continually conveniently ignored by the skeptics/denialists, the surface global temperature stations are a very small subset of the larger data set (tree rings, glacier melt, satellite measurements, etc.) indicating the temperature trend. Yet skeptic/denialists keep repeating (and inflating) the red herring issue as if somehow that makes it more important. It is denialist tunnelvision of the worst kind.
Steve Reynolds says
323 Timothy Chase> The figure of roughly 2.9 C comes from the paleoclimate studies for the past 400,000 years. I don’t know of anyone who would be trying to estimate climate sensitivity on the basis of present day temperature records. For one thing, it just wouldn’t make much sense: climate sensitivity isn’t just the temperature change which has occured as the result of the rise in CO2 levels – it is also whatever temperature change is still in the pipeline until the climate system finally re-achieves a quasi-equilibrium.
You are mistaken. 20th century warming (corrected for your objections, I’m sure) and volcanic cooling are also used. See:
http://www.jamstec.go.jp/frcgc/research/d5/jdannan/GRL_sensitivity.pdf
for a brief discussion of the various methods and their sensitivity spreads (see figure 1). Paleoclimate appears to show the smallest sensitivity (peaked around 2.6C).
Also very good discussion at the author’s blog:
http://julesandjames.blogspot.com/2006/03/climate-sensitivity-is-3c.html
Timothy Chase says
Steve Reynolds (#342) wrote:
Actually the figures from the paleoclimate seem to center around 2.8 C for the past 420 million years, not the 2.9 that I gave or the 2.6 that you gave. And this agrees well with the essay you pointed to which gives 2.9 C. In any case, the argument regarding the long-term nature of carbon emission climate sensitivity wouldn’t apply to the recent volcanic aerosols from Mt Pinatubo as they were essentially cleared within about three years – if I remember correctly.
Then again, Urban Heat Island effects would be irrelevant to an estimate based upon Mt. Pinatubo – unless of course people started up their barbeques at just the right moments. It is the delta which is important. So the 3 C from Pinatubo would seem less suspect – if one were worried about the Heat Island effect.
As for the smallest and the largest sensitivity, they come from the joint use of observation and models – not observation alone, and it may very well have been something similar to what was done with Pinatubo and therefore largely independent of anything that would have been affected by the Urban Heat Island effect. I don’t know, as the paper which you cite doesn’t say. But it could be anything from 1.5-6 C with a best guess of something around 3 C. The paleoclimate estimate of 2.8 C is above the lower estimate of 1.5.
In any case, we have several largely independent lines of inquiry suggesting something between 2.8-3.0 C. I tend to think that if a conclusion is justified by multiple independent lines of investigation, the justification which it receives is far greater than that which it would receive from any one given line of investigation in isolation. Likewise a range of 2.8 – 3.0 C seems preferable to 1.5 – 6.0 C, at least if one is interested in narrowing the uncertainty.
As for my original point, sure, they could try to use trends to calculate the sensitivity (given the appropriate mathematical methods), and clearly they have tried. However, in my view the paleoclimatological records are far more likely to give you a narrower range of uncertainty. And that would have been both the appropriate and correct way of stating my point.
DocMartyn says
The wager is for a grid. You get station pseudo-data, and I have the real pseudo-data and the noised pseudo-data. All you have to do is throw away the noise. Could you get the same average in the grid using the noisy pseudo-data as with the real?
That is the question: pseudo-data in a grid. We test your ability to get rid of the noise. Can you do it?
Timothy Chase says
Anyway, if someone is interested in how the Urban Heat Island effect “distorts” temperature trends, there is relevant literature.
Here is the abstract from one:
Rod B says
“the surface global temperature stations are a very small subset of the larger data set (tree rings, glacier melt, satellite measurements, etc.)”
So you’re going to show up us skeptics’ prejudice against the accuracy of temperature measurements (to the tenth of a degree, no less) by counting and measuring tree rings??!!? Around the globe???!?
Hank Roberts says
> We test your ability to get rid of the noise. Can you do it?
The “you” you’re trying to reach would be the data analysts who produce the reports from the weather stations — I’m a reader here just as you are.
You seem to be asking how many stations it takes to derive a small global signal, buried in larger annual variability? If so, the answer from the previous discussion seems to be “about a third of them” — the network is 3x as large as needed to be able to detect a signal.
Why not try this for yourself and show your result? You could use the data set pointed to over at Stoat,
http://scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php
there’s a link from which you can download the original data set — get it all, fudge some of it and run the statistics, and see for yourself how much of a deviation you’d have to introduce before you affected the trend.
If you’re asking whether it’s possible to spit in the pool and then pull back out the spit, the answer would be no. Silly question. If you’re asking whether spitting in the pool is going to cause it to fail the microbiological tests for clean water, the answer would depend on the quantity of spit and its contents and how diluted it becomes.
Seems the first question would be if _you_ can introduce enough bogus information into a data set to affect the trend, and so find out how sensitive it is, eh?
Stoat suggests: “Pick up the HadCRU temperature series from here. Compute 5, 10 and 15 year trends running along the data since 1970” http://www.cru.uea.ac.uk/cru/data/temperature/
And he shows you his result, http://scienceblogs.com/stoat/upload/2007/05/5-year-trends.png
So you can do that, then alter the data set and do the math again. Tell us how much you have to change the data to change the trends.
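For anyone who wants to try the exercise, here is a sketch of the running-trend calculation Stoat describes; the annual series below is synthetic and merely stands in for the downloaded HadCRU values (whose file format is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for the annual-mean anomalies since 1970: a 0.018 C/yr trend
# plus interannual noise. Replace `anom` with the real HadCRU numbers.
years = np.arange(1970, 2007)
anom = 0.018 * (years - 1970) + rng.normal(0.0, 0.1, years.size)

def running_trends(years, anom, window):
    """Least-squares slope (C/decade) over every `window`-year span."""
    return [10 * np.polyfit(years[i:i + window], anom[i:i + window], 1)[0]
            for i in range(len(years) - window + 1)]

for w in (5, 10, 15):
    slopes = running_trends(years, anom, w)
    print(f"{w:2d}-yr trends: min {min(slopes):+.2f}, max {max(slopes):+.2f} C/decade")

# To repeat the experiment DocMartyn proposes, perturb a few values
# (e.g. anom[5:8] += 0.5) and rerun: the 5-year trends jump around,
# while the 15-year trends barely move.
```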
ray ladbury says
Re 344. How many stations in the grid vs. how many stations with noise? Are the stations distributed over the entire planet or a reasonable approximation thereof? How close does the reconstruction have to be to the pseudodata?
Timothy Chase says
Here is another abstract – and the article if one is interested. It is from 2003.
Please see:
DocMartyn says
How many stations in the grid vs. how many stations with noise?
All stations will have noise. What type of noise depends on many things, things you would not know about, but it will be “realistic”.
Are the stations distributed over the entire planet or a reasonable approximation thereof?
I don’t want you to do a whole planet, just a grid; I will make it square if you like. All you have to do is to find the underlying signal. How you do it I don’t care. If you are given noisy pseudo-data for 40-50 stations, over a 100 year period, can you get rid of the noise?
How close does the reconstruction have to be to the pseudodata?
What do you think is possible? Let us say that we combine the pseudo-temperature set for your chosen sites and plot it against the same sites’ “real” pseudo-data. What will be the significance and what will be the R2 value?
Just how good do you think your programs are?
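For what it is worth, here is a minimal sketch of the exercise DocMartyn describes, with the station count, noise level and jump sizes all invented; it is not anyone’s operational method, just an illustration that a common trend survives random station-level artifacts.

```python
import numpy as np

rng = np.random.default_rng(6)

# 50 pseudo-stations over 100 years sharing one hidden trend; each gets
# 1-3 artificial step changes (moves, UHI onset, a barbeque...) of
# random sign and timing, plus ordinary weather noise.
n_sta, n_yr = 50, 100
years = np.arange(n_yr)
true_trend = 0.007                              # C/yr, the hidden signal
data = true_trend * years + rng.normal(0.0, 0.3, (n_sta, n_yr))
for s in range(n_sta):
    for _ in range(rng.integers(1, 4)):
        data[s, rng.integers(10, n_yr):] += rng.normal(0.0, 0.5)

# Crude recovery: express each station as an anomaly from its own mean,
# then take the median across stations year by year.
anoms = data - data.mean(axis=1, keepdims=True)
recovered = np.median(anoms, axis=0)

slope, intercept = np.polyfit(years, recovered, 1)
resid = recovered - (slope * years + intercept)
stderr = np.std(resid, ddof=2) / np.sqrt(np.sum((years - years.mean()) ** 2))
print(f"true trend     : {true_trend:.4f} C/yr")
print(f"recovered trend: {slope:.4f} +/- {stderr:.4f} C/yr "
      "(naive error bar, residual autocorrelation ignored)")
# The random station-level jumps largely cancel across 50 stations; the
# residual scatter is what sets the error bar DocMartyn asks about.
```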