Anybody expecting earthshaking news from Berkeley, now that the Berkeley Earth Surface Temperature group led by Richard Muller has released its results, had to be content with a barely perceptible quiver. As far as the basic science goes, the results could not have been less surprising if the press release had said “Man Finds Sun Rises At Dawn.” This must have been something of a disappointment for anyone hoping for a different outcome.
For those not familiar with it, the purpose of Berkeley Earth was to create a new, independent compilation and assessment of global land surface temperature trends using new statistical methods and a wider range of source data. Expectations that the work would put teeth in accusations against CRU and GISTEMP led to a lot of early press, and an invitation to Muller to testify before Congress. However, the big news this week (e.g. this article by the BBC’s Richard Black) is that there is no discernible difference between the new results and those of CRU.
Muller says that “the biggest surprise was that the new results agreed so closely with the warming values published previously by other teams in the US and the UK.” We find this very statement surprising. As we showed two years ago, any of various simple statistical analyses of the freely available data at the time showed that it was very, very unlikely that the results would change.
The basic fact of warming is supported by a huge array of complementary data (ocean warming, ice melting, phenology etc). And shouldn’t it have helped reduce the element of surprise that a National Academy of Sciences study already concluded that the warming seen in the surface station record was “undoubtedly real,” that Menne et al showed that highly touted station siting issues did not in fact compromise the record, that the satellite record agrees with the surface record in every important respect (see Fig. 7 here), and that numerous independent studies (many of them by amateurs) also confirmed the warming trend?
If the Berkeley results are newsworthy, it is only because Muller had been perceived as an outsider (driven in part by trash-talking about other scientists), and has taken money from the infamous Koch brothers. People acting against expectation (“Man bites dog”) is always better news than the converse, something that Muller’s PR effort has exploited to the max. It does take some integrity for Muller and his team to admit getting the same answer as those they had criticized, despite their preconceptions and the preconceptions of their funders. And we are pleased to see Muller’s statement that “This confirms that these studies were done carefully and that potential biases identified by climate change sceptics did not seriously affect their conclusions.” It’s far from the overdue apology that Phil Jones (of CRU) deserves from his critics, but it’s a start.
But Muller’s framing of the Berkeley results is still odd. His statement that, had they found no warming trend, this would have “ruled out anthropogenic global warming” is true only in a narrow technical sense: a lack of observed warming would not have implied that we should not worry about human drivers of climate change. Nor would it have overturned over a century of firmly established radiative transfer and thermodynamics, or the basic chemistry which led Bolin and Eriksson (reprinted here) to predict in 1959 that fossil fuel burning would cause a significant increase in CO2 — long before the results of Keeling’s famous Mauna Loa observations were in. As a physicist, Muller knows that the reason for concern about increasing CO2 comes from the basic physics and chemistry, which was elucidated long before the warming trend was actually observable.
In a talk at AGU last Fall, Naomi Oreskes criticized the climate science community for being reluctant to take credit for their many successful predictions, so here we are shouting it from the rooftops: The warming trend is something that climate physicists saw coming many decades before it was observed. The reason for interest in the details of the observed trend is to get a better idea of the things we don’t know the magnitude of (e.g. cloud feedbacks), not as a test of the basic theory. If we didn’t know about the CO2-climate connection from physics, then no observation of a warming trend, however accurate, would by itself tell us that anthropogenic global warming is “real,” or (more importantly) that it is going to persist and probably increase.
Muller’s other comments do very little to shed light on climate change, and continue to consist largely of putting down the work of others. “For Richard Muller,” writes Richard Black, “this free circulation also marks a return to how science should be done,” the clear insinuation being that CRU, GISS, and NOAA had all been doing something else. Whatever that “something else” is supposed to be completely eludes us, given that these groups all along have been publishing results in the peer-reviewed literature using methods that proved easy to reproduce using easily available data (and in the GISTEMP case, complete code). In one sense, though, we do agree with Muller’s quote: nobody has stolen his private emails and spun them out of context to make his research look bad.
Laudably, Muller’s group have submitted their research to peer-reviewed journals, and the submitted drafts are available on their website. Amidst a number of verifications of already well-established results on the fidelity of the surface station trends, they also claim to have discovered something new. In their paper Decadal Variations in the Global Atmospheric Land Temperatures, they find that the largest contributor to global average temperature variability on short (2-5 year) timescales is not the El Nino-Southern Oscillation (ENSO) (as everyone else believes), but is actually the Atlantic Multidecadal Oscillation (AMO). This is pretty esoteric stuff, but it would actually be quite interesting if it were true — though we hasten to add that even if true it would have no significant bearing on the interpretation of long term temperature trends. Before anyone gets too excited though, they should take note that the basis for this argument is that the correlation between the global average temperature and a time series that represents the AMO is higher than for one that represents ENSO. But what time series are used? According to the submitted paper, they “fit each record [ENSO and AMO time series] separately to 5th order polynomials using a linear least-squares regression; we subtracted the respective fits… This procedure effectively removes slow changes such as global warming and the ~70 year cycle of the AMO, and gives each record zero mean.” Beyond the obvious fact that if one removes the low frequencies, then we’re really not talking about the AMO anymore (the “M” in “AMO” stands for “Multidecadal”), one has to be rather cautious about this sort of data analysis. Without getting into the nitty-gritty technical details here, suffice it to say that Muller & Co. are proposing a new understanding of global temperature variability, and their statistical approach is — at the very least — poorly described. There is a large literature on how to do this sort of thing, not to mention previous work on the AMO and its relationship to global temperatures (e.g. this or Mann and Park (1999) (pdf), among many others), which the Berkeley group does not cite.
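To make that last criticism concrete, here is a minimal sketch in Python, using synthetic placeholder series (ours, not the Berkeley group’s code or data), of the detrending procedure the paper describes: fit a 5th-order polynomial by linear least squares, subtract it, and correlate the residuals. Note that the polynomial fit itself removes the multidecadal component before any comparison is made.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series, 1950-2010: stand-ins for a global land
# temperature record and an AMO-like index (placeholders, not real data).
t = 1950.0 + np.arange(732) / 12.0
temperature = (0.01 * (t - 1950)                      # long-term warming
               + 0.10 * np.sin(2 * np.pi * t / 70.0)  # ~70-yr component
               + rng.normal(0.0, 0.10, t.size))       # interannual noise
amo_index = 0.2 * np.sin(2 * np.pi * t / 70.0) + rng.normal(0.0, 0.10, t.size)

def detrend_poly(series, time, degree=5):
    """Fit a polynomial of the given degree by linear least squares and
    return the residuals -- the step quoted from the submitted paper."""
    coeffs = np.polyfit(time, series, degree)
    return series - np.polyval(coeffs, time)

temp_resid = detrend_poly(temperature, t)
amo_resid = detrend_poly(amo_index, t)

# The 5th-order fit has already absorbed the ~70-year component, so the
# residuals being correlated here no longer contain the "M" in "AMO".
r = np.corrcoef(temp_resid, amo_resid)[0, 1]
print(f"correlation of detrended residuals: r = {r:.2f}")
```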
Overall, we are underwhelmed by the quality of the Berkeley effort so far — with the exception of the efforts made by Robert Rohde on the dataset agglomeration and the statistical approach. And we remain greatly disappointed by Muller’s public communications (e.g. his WSJ op-ed) which appear far more focused on raising his profile than enlightening the public about the state of the science.
It will be very interesting to see what happens to these papers as they go through peer review. No doubt, they will improve: that’s one of the benefits of the peer review process (suddenly popular again!). In the meanwhile, Muller & Co. have a long way to go before they can claim to be the best (as opposed to just the BEST). By launching his BEST project, Muller has no doubt ensured a place for himself in shaping the narrative on climate change science, but it remains to be seen to what extent he is going to contribute to the science of climate change.
Bart Verheggen says
Nick Stokes writes about the trend estimates of the last decade (http://moyhu.blogspot.com/2011/10/gwpf-is-wrong-warming-has-not-stopped.html ):
So I checked the BEST data.txt to see why these month data had such large error bars, and were so out of line. It turns out that all the data they have for those months is from 47 Antarctic stations. By contrast, in March 2010 they have 14488 stations.
I.e. Gavin was right in his 131 reply: “I imagine it is related to a very limited data coverage in their 2010 collation.”
P. Puusa says
#149, Jim says:
Everybody please knock it off with the off topic pontifications about crop yields
Too bad. There was some real gold there. I believe it started with a comment noting the BEST study showing a 2°C warming since the 1810s.
[Response: That isn’t a very sensible framing. The spatial coverage in the 1810s is not sufficient to give a good global coverage, so any trends from then are highly uncertain. Secondly, the 1810s were affected greatly by the eruption of Tambora in 1815 and another big eruption in 1809 – these caused widespread crop failures (1816 was the ‘year without a summer’ – Henry Stommel wrote a great book on this). – gavin]
[Response: People are more than welcome to discuss the effect of climate on crop yields in the new open thread as long as they actually stick to the topic, provide legitimate support for their statements that others can check, and steer clear of insults. Believe me, I’m as interested in the topic as anyone.–Jim]
JCH says
Bart says @151
Several commenters on various blogs seem to agree the data coverage on the last two months is a problem.
My question is, why did BEST include them? Am I wrong in believing additional data will melt that little icicle hanging off the end of their current graph?
If there can be a drill-down discussion on crop yields, why can’t there be a discussion about Muller’s characterization of CRU as an outlier? I’ve seen a lot of complaining about HadCRUT running cold. He’s told Judy that, apples to apples, GisTemp is closest to BEST. I remember in Kyle Swanson’s article on “the shift” he made a comment about the shift not being very noticeable in GisTemp, and it appears to have now disappeared. So whatever the slight differences are between the two series, it appears they can make a substantive difference in the science.
It seems whenever I see an attempt to claim AGW has stopped, it’s bolstered with a WfT graph of HadCRUT data that shows a downward trend between arbitrary dates in the last 11 years. I switch their graphs from HadCRUT to GisTemp, and the downward trend usually changes to at least a flat trend, and often an upward one.
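JCH’s point about arbitrary dates is easy to demonstrate. Here is a minimal sketch with synthetic data (a fixed warming rate plus noise, standing in for HadCRUT or GISTEMP; all numbers are made up for illustration): fit a least-squares trend from every possible start year late in the record and watch the short-window slopes wander.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly anomalies: a steady 0.017 C/yr warming trend plus
# weather/ENSO-scale noise (illustrative values, not a real index).
years = np.arange(1990, 2012, 1.0 / 12.0)
anomaly = 0.017 * (years - 1990) + rng.normal(0.0, 0.25, years.size)

# Trend from each start year in the last decade to the end of the record.
for start in range(2000, 2010):
    mask = years >= start
    slope = np.polyfit(years[mask], anomaly[mask], 1)[0]
    print(f"{start}-2011 trend: {slope * 10:+.2f} C/decade")

# Over windows this short the fitted slope is dominated by noise, so it
# can flatten or even flip sign with the start date while the underlying
# trend never changes.
```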
John P. Reisman (OSS Foundation) says
#153 JCH
If I recall properly, and anyone please feel free to correct me, HadCRUT was missing some Arctic data covering the region of strongest NH amplification, thus it would run a bit colder.
JCH says
#154 John P. Reisman
Well, is its running colder grounds for saying it’s wrong? Wouldn’t what you are saying mean that the series is less than global?
Kevin McKinney says
#155–“Wouldn’t what you are saying mean the the series is less than global?”
In a sense, yes. In essence, the least-well instrumented portions of the globe get left out.
John P. Reisman (OSS Foundation) says
#155 JCH
Maybe you should read this:
http://ossfoundation.us/projects/environment/global-warming/myths/models-can-be-wrong
John P. Reisman (OSS Foundation) says
I added this to the October Leading Edge as well:
The Judith Curry Mistake
By John P. Reisman Oct. 31, 2011
Dr. Judith Curry continues to misinterpret long-term climate trends by focusing on time periods that are too short to be relevant. She has been informed by many highly competent scientists, apparently far better qualified than she is, about how to separate short-term natural variation from the human change signal driven by increased radiative forcing. The key to relevant context for examining the data/trend is time. Generally you need at least three decades of change, combined with attribution of both human and natural factors, in order to see (separate) the significance of the human signal against natural variation.
She apparently continues to ignore this reality and to point to data segments that are too short to separate the natural variation from the human-influenced trend signal. Why would a scientist continue to ignore these well-known realities? Let us consider the possibilities:
* Curry’s view is subject to confirmation bias
* Possibly there is some as yet unseen special interest influence
* Curry has become a victim of her own tribe mentality problem/hypothesis
* Dr. Curry is not sufficiently knowledgeable in the field of climate science to express a competently informed view
It is possible, if not likely, that one or more of these factors are in play in Dr. Curry’s continued focus on irrelevance. In any case, she exemplifies inadequate interpretation of the available evidence.
http://ossfoundation.us/projects/environment/global-warming/summary-docs/leading-edge/2011/oct-the-leading-edge
Terry says
From a fairly broad perspective, isn’t the fact that, in the US, about 33% of the temperature stations show cooling when they are often close to ones that show warming, of some serious concern? Working in the commercial world, I would not like to risk my clients’ money if a decision were to be based on such records, unless there was good reason to dismiss one or other of the subsets. It seems to me that answering this question ought to be a priority instead of taking an “average” and treating it as a true representation of fact. Positive and negative anomalies in the same area cannot both be right unless there is good reason, in which case you ditch the one that is caused by other factors and the discrepancy disappears. I would like to know why this important point seems to be largely ignored in favour of an “average” that clearly can’t be right.
Philip Machanick says
Terry #159: what’s your source for this? How do you know this effect exists and is ignored?
Is anyone else seeing a glitch where Tamino’s blog doesn’t display content?
John P. Reisman (OSS Foundation) says
#159 Terry
Can you point me to the source of your claims? Did you come by this information by reading scientific papers or a blog post on someone’s web site?
1. What is the context (remember, it’s called global warming not one region warming)?
2. Was it peer reviewed?
3. Did the claim survive peer response?
JCH says
Philip at 160
Look at page 10, figure 4.
tamino says
Re: #159 (Terry)
Suppose you have a class with 30 school children. One day you measure the height of each. Two weeks later you measure them again. You find that 1/3 of the kids showed a lower height while 2/3 showed greater height.
You suspect that the large variation in height differences is due to the kids slouching, wearing different shoes, or having had a different breakfast the morning before the measurements were taken. Perhaps you’re right.
But you also notice that not only is the average greater, the difference is statistically significant. The conclusion: these children are still growing. It’s the average that’s meaningful, because the process of averaging reduces the inherently large uncertainty level.
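A minimal numerical version of the classroom example, assuming (illustrative numbers only) 0.5 cm of true growth against 1 cm of measurement scatter:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# 30 children, all actually growing 0.5 cm over two weeks, measured with
# ~1 cm of "slouching/shoes/breakfast" scatter.
true_growth = 0.5                         # cm
observed_change = true_growth + rng.normal(0.0, 1.0, 30)

print("fraction measured as 'shrinking':", np.mean(observed_change < 0))

# One-sample t-test of the class average against zero change.
t_stat, p_value = stats.ttest_1samp(observed_change, 0.0)
print(f"mean change = {observed_change.mean():.2f} cm, p = {p_value:.3f}")

# Averaging shrinks the uncertainty of the mean by sqrt(N), which is why
# the average can be clearly significant even though roughly a third of
# the individual measurements come out negative.
```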
Hank Roberts says
> Working in the commercial world, I would not like to risk my client money
What’s the risk if the science is correct?
How do you balance risk that gets worse over time, against cost now?
Do you discount the future cost the further in the future it will happen?
Terry says
Re # 163 Tamino
Yes, I’m pretty conversant with the concept of averages and gridding. My point is that if the stations are sampling the same well-mixed atmosphere then they ought to at least show the same “trends”. Your point about averages of the sample is taken, and I would expect that to apply to the temperatures themselves but not the trend, even if they are using different instruments and have different local characteristics. The trends ought to be at least similar, certainly not opposite in polarity.
[Response: Please provide some specifics of locations and data]
Hank Roberts says
Terry, trend over what length of time?
t_p_hamilton says
Terry’s original: “From a fairly broad perspective, isn’t the fact that, in the US, about 33% of the temperature stations show cooling when they are often close to ones that show warming, of some serious concern? Working in the commercial world, I would not like to risk my clients’ money if a decision were to be based on such records, unless there was good reason to dismiss one or other of the subsets.”
If I were asked to invest on such sketchy information, no way would I do so. How about 12 stations in the same region of the US, where a third show a long-term trend (say, the 1960s decade average compared to the 2000s) opposite to that of the other 8?
Terry says
RE #166. According to BEST, 70 years.
CM says
#160, #161, #166 re: Terry’s source for “in the US, about 33% of the temperature stations show cooling”
— is obviously the BEST papers. The Muller et al. BEST paper on station quality in the U.S. (PDF) says:
On Hank’s question “over what length of time”, the Wickham paper (PDF) is the more helpful.
From the following histogram and discussion, it’s clear that this includes stations with records of less than 10 years, and the trends of these stations are, unsurprisingly, all over the place. But they also have a map showing warming and cooling stations of over 70 years duration, and again the ratio is said to be 2:1.
tamino says
Re: #165 (Terry)
Your claim about trends being different from averages — in the statistical sense being considered — is simply mistaken.
The trends (1/3 negative, 2/3 positive) show large variation due to both measurement error and natural fluctuations. Because of this, all those trend estimates have uncertainty levels. Because the noise level at a single location is so large, the uncertainty in the trend estimate at a single location will be large.
Your argument comes down to “there’s noise in the data.” We knew that. Without averaging, the noise is big enough to cause you concern. With averaging, we achieve statistical significance. So to answer your original question, the variation in the data (and in local trends) is not ignored, and your claim that the average “cant be right” is false.
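To put numbers on this, here is a sketch with made-up values: 500 stations all sharing one true trend of 0.02 °C/yr, each buried in its own local noise (assumed independent here, which real stations are not). Roughly a third of the individually fitted trends come out negative, yet the trend of the network average is unambiguous.

```python
import numpy as np

rng = np.random.default_rng(3)

n_stations, n_years = 500, 30
years = np.arange(n_years)
true_trend = 0.02                 # C/yr, shared by every station

# Each station sees the same trend buried in its own local noise
# (2 C standard deviation -- illustrative, and assumed independent).
data = true_trend * years + rng.normal(0.0, 2.0, (n_stations, n_years))

station_trends = np.array([np.polyfit(years, s, 1)[0] for s in data])
print("fraction of stations with a negative fitted trend:",
      np.mean(station_trends < 0))

# Averaging first, then fitting, beats the station-level noise down.
mean_trend = np.polyfit(years, data.mean(axis=0), 1)[0]
print(f"trend of the network average: {mean_trend:.3f} C/yr")
```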
ldavidcooke says
RE:165
Hey Terry,
Yes, I saw cases in the USHCN in which a group of stations showed lower averages near stations that were trending upward; however, not 33%, though there could have been a couple of instances.
Overall, it was due to local geographic topography (sampling resolution). If you were to mark those sites and perform a windrose, dewpoint and RH analysis, the localization was pretty clear. As others have said, averaging “hides” outliers from both sides of the mean.
Cheers!
Dave Cooke
Terry says
Re #170 Tamino
But going back to your analogy with the classroom: if it showed that a subset of the children were shrinking, would that not raise an alarm that there was something wrong, since they would all be expected to have positive gains in height with time? If the shrinking was statistically significant and not considered “noise”, you would conclude that there was something wrong with some of the data. Going back to the temperature record, the same applies. If the negative trends are significant then either they are not measuring the same well-mixed boundary layer (due to other influences) as the positive trends, or the reverse is true. One of the subsets therefore does not belong in the average.
[Response: Until you provide the specifics of exactly the data you are talking about, this generalized discussion is pointless.–Jim]
MMM says
Terry: Let’s use the march to summer as an example. If I compare April 3rd to April 2nd around the Northern Hemisphere, some subset of stations will have become warmer, and some will have become colder. The number of warming stations will likely outweigh the number of cooling stations. If I look at a longer time period (April 30th compared to April 2nd, for example), the proportion of warming stations will increase.
Do you mean to say that because some stations on April 3rd have cooled since April 2nd, that you would doubt that the Northern Hemisphere was warming?
Romain says
Tamino,
“With averaging, we achieve statistical significance. So to answer your original question, the variation in the data (and in local trends) is not ignored, and your claim that the average “cant be right” is false.”
Question from a novice: how is this spatial noise – or poor spatial correlation – taken into account when estimating the uncertainty on the average temperature?
Alexander Harvey says
Eric,
Thanks for your response to #75.
FWIW, I suspect that those who say least, prior to a determination that the method used in generating the temperature series is a positive contribution, will serve themselves well.
As it stands, the key paper could be a train wreck; if it is, the other papers will be but more wreckage.
If there be serious flaws, the attention that the authors have garnered for it in the media will have done much to muddy the waters.
Supporting evidence that is both flawed and heralded may leave a taint on that which it supports.
I can have no real idea how this paper will fare, and would be delighted for it to be a useful contribution, but as of now it gives me indigestion, a nasty feeling that it will not stand a lot of scrutiny. If that be the case, the fact that it largely confirms what was already known is, I think, unhelpful. I do wish that people would be careful with this stuff, for it is important; part of that care might well extend to passing review prior to publication, press release, and fuelling debate.
This has many ingredients that could lead many to be seen to have played the impromptu ass rather than kept their counsel.
Alex
Pete Dunkelberg says
Muller is talking.
Philip Machanick says
#176 Pete Dunkelberg: Muller may be talking but he’s not making sense. He more or less accepts the problem, but says we must do nothing about it because if the US acts and neither India nor China follows, the problem isn’t solved. What was all that in the past about the holdouts from Kyoto, the USA and Australia, being the excuse for the rest of the world to do nothing? Does the word “leadership” mean nothing? His contribution is looking increasingly confused and purposeless. He’s now moved on to justifying criticism of climate scientists for endorsing Al Gore. This is the same person who endorses Anthony Watts, who has published far more confusing and wrong information than Gore.
#162 JCH: there’s a discussion going on here, so it’s better to post the relevant facts here. We can’t post comments into a PDF anyway :( In any case Fig 4 on p 10 doesn’t have the resolution to support the claims made, nor does it contain the local climate factors to judge whether a red and a blue dot are in comparable areas (different sides of a hill for example).
Stephan says
Terry (#165, #172): I understand your point and I think it is a fair question. But: You say “well-mixed atmosphere”, but exactly that assumption is the problem. The boundary layer is not well-mixed enough to assume uniform temperature over short distances. If you go around with a thermometer in your neighbourhood you’ll notice quite large differences. A site’s microclimate is influenced by vegetation, exposure to wind and other such small-scale factors.
So, to reply to your original observation: It may well happen that one station gets warmer and one a few kilometers away gets colder, for example because the land use around it changes (from agriculture to growing forests or similar, but often it’s perhaps not even an obvious change). This is why we actually can expect the trends from individual stations to vary a bit, and this is especially true for stations with a short timeseries. So the fact that a proportion of the stations don’t show warming does not in itself mean much. We have to look at the overall picture.
Pete Dunkelberg says
>Muller may be talking but he’s not making sense.
Clearly. ;)
Terry says
RE Stephan #178
Yes, I completely agree with you that station-specific environments are likely the cause. And that is exactly my point. The positive anomaly sites are not measuring the same thing as the negative ones. And while my background is physics and not statistics, I cannot see how it is valid to clump them together in the belief that they will all average out to give a meaningful representation of the “true” atmospheric trends. Which is exactly what is currently done.
Ray Ladbury says
Terry, until you provide specifics, neither we nor you have any idea what you are talking about.
However, keep in mind that we are talking about long time series of data for a number of stations that oversamples the planet by roughly a factor of 4. A station that departs systematically from its past behavior or the behavior of surrounding stations will arouse suspicion, and it will generally be easy to correct the data for the systematic errors.
The prospect of dealing with 100-year time series of a system oversampled by a factor of 4 is enough to make me salivate. I have to make decisions regarding billion dollar satellites with far less data.
Romain says
Ray Ladbury, and others,
Forgive my ignorance once again, but I am puzzled by this.
“However, keep in mind that we are talking about long time series of data for a number of stations that oversamples the planet by roughly a factor of 4”
I believe this is somehow related to this paragraph from the first BEST paper (page 23, in section 11, Spatial Uncertainty):
“Ideally F(x, t) would be identically 1 during the target interval 1960 ≤ t ≤ 2000 used as a calibration standard, which would imply that τ(t1, t2) = 0, via equation [21]. However, in practice these late time fields are only 90-98% complete. As a result, σ_spatial(t) computed via this process will tend to slightly underestimate the uncertainty at late times.”
People seem to be sure the spatial sampling is more than enough (Ray Ladbury even says 4 times oversampled) to accurately capture the average temperature, at least for the last 40 years.
How do we know that? Is there a study out there that I missed (I am thinking of something about the temperature spatial autocorrelation for different climate patterns)? Or is there something I don’t understand? Thank you for your help.
Ray Ladbury says
Romain, for one thing you can see it in the record. Most nearby stations march in lockstep. Remember, what we are interested in here are global, longterm trends. That’s a very forgiving problem. Also, think about the sorts of changes you are likely to see as a result of introducing some source of systematic error into the data–it’s quite unlikely to look like the signal you are looking for.
All of this was discussed during Tony “Micro” Watts’ station project. I said then, as I say now: you have to understand the data, the errors and the processing to ascertain whether an error at a station will have any effect at all on the product.
ldavidcooke says
Re:182
Hey Romain,
The 4-times sampling standard can likely trace its beginnings to the work of a Bell Labs scientist back around 1960-64. The issue was how to digitally sample changes in an analog signal. It was found that for a sinusoidal signal, with 4 samples at the highest signal frequency rate, it should be possible to replicate the analog signal. This only applies to signals created with a sine wave form (though mixing signals of different frequencies, or heterodyning, can create non-sine-wave results).
Consider that if the daily temperature change were a sinusoidal signal, sampled 4 times daily, you should be able to replicate the change in temperature for that day. Similarly, if you were to sample 4 sites simultaneously for temperature, the ability to replicate that locality’s temperature for that day should be possible with a high degree of accuracy.
(Note: For non-sine-wave signals you can replicate them, though with varying resolution, based on the measure of the phase angle or rate of change. You then determine the acceptable level of reproduction accuracy and select the sample frequency that is appropriate for the signal you are representing. In the mid-to-late ’70s this technology was extended beyond sine-wave signals, and standard digital sampling was changed to 2 times the signal rate.)
As you are likely aware, because temperature does not vary evenly about a null value, it is not truly sinusoidal. However, if you track the phase angle or rate of change for a given time period, you should be able to replicate the temperature record accurately with just two daily samples, though this technique may only be applied when attempting to replicate a locality if you also indicate the daily differences, or slope of change, between sampling sites (to my knowledge this method is not used by any historic record system).
An extreme example is measuring the pulse width of a square wave: the accuracy of its reproduction depends not only on the sample rate but also on the switching speed of the device creating the signal; at high speeds, reproduction accuracy is limited by the slew rate of the device. (Modeling patterns with computer systems has a similar issue, wherein the rate of change that can be captured is limited by the rate at which the processor can calculate change.)
Cheers!
Dave Cooke
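For what it’s worth, the textbook version of the result invoked above is the Nyquist/Shannon sampling theorem, which requires a minimum of 2 samples per cycle for a band-limited signal; 4 per cycle, as described, gives a comfortable margin. A minimal sketch of reconstructing a sampled sinusoid, with made-up rates and no connection to the BEST analysis itself:

```python
import numpy as np

freq = 1.0                   # one cycle per "day"
fs = 4.0                     # 4 samples per cycle, comfortably above Nyquist
n = np.arange(40)            # 10 "days" of samples
samples = np.sin(2 * np.pi * freq * n / fs)

def sinc_reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation from uniform samples; exact for a
    band-limited signal, up to truncation error from the finite record."""
    k = np.arange(samples.size)
    return np.sum(samples * np.sinc(fs * t - k))

t = 2.3                      # an instant between samples
print("reconstructed:", sinc_reconstruct(t, samples, fs))
print("true value:   ", np.sin(2 * np.pi * freq * t))
```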
caerbannog says
Romain, when I was playing around with the GHCN data (with my little “hand-rolled” program — see my previous posts here), I tried a little informal experiment to test the “redundancy” of the GHCN surface temperature network.
Here’s a plot of my “experimental results”: http://img30.imageshack.us/img30/782/gissvs1of10ghcnstations.jpg
The plot shows an ensemble of global-average temperature time-series scans, where each temperature time-series was computed from a random “1 out of 10” selection of GHCN land temperature stations. For each run, a random-number generator was used to select the stations with a 1 out of 10 probability for each random-number “trial”. Basically, I recomputed global-average temperature results a bunch of times while throwing out 90 percent of the GHCN data at random each time. I plotted the first 10 random runs generated by my program so that nobody could accuse me of “cherry picking” an individual good result.
The GHCN stations were selected completely randomly, with no attempt to maintain uniform global coverage.
To provide a basis for comparison, the official NASA/GISS land-station results are plotted along with my random “1 out of 10 station” results. For clarity, the NASA/GISS temperatures are plotted as the foreground (red) scan, as indicated by the legend. (The legend labels for the other scans are just cryptic data labels generated by the program that I wrote — for those who are curious, they contain information like the data file name and run number plus a bit of other info about the processing options.)
What you can clearly see is that all of the “1 out of 10” results agree reasonably well with the NASA results. If anything, they tend to show a bit more warming than the NASA results do, probably because throwing out so much data tends to increase the overweighting of the Northern Hemisphere data (remember that there are more temperature stations in the NH than in the SH, and that the NH has been warming faster than the SH). Anyway, so much for the idea of NASA “cooking the books” to exaggerate the global-warming trend.
The take-home message here? The global surface temperature record really is quite redundant and robust.
Note: I was able to do all of the above with nothing more than publicly-available raw data, documentation, and free, open-source software tools. Didn’t have to file so much as a single FOI request.
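For anyone wanting to reproduce the spirit of that experiment without the GHCN plumbing, here is a stripped-down sketch. The station array is synthetic (a shared warming signal plus independent station noise, with made-up values); the real runs of course used the GHCN files in place of it.

```python
import numpy as np

rng = np.random.default_rng(4)

def global_average(stations):
    """Crude global mean of station anomaly series (no gridding or area
    weighting, in keeping with the deliberately simple experiment)."""
    return np.nanmean(stations, axis=0)

# One row per station, one column per month (synthetic placeholder data).
n_stations, n_months = 6000, 1320
shared_signal = np.linspace(-0.3, 0.7, n_months)        # common warming
stations = shared_signal + rng.normal(0.0, 1.5, (n_stations, n_months))

full = global_average(stations)
for run in range(10):
    keep = rng.random(n_stations) < 0.1                 # random 1-in-10 subset
    subset = global_average(stations[keep])
    rms_diff = np.sqrt(np.mean((subset - full) ** 2))
    print(f"run {run}: {keep.sum():4d} stations, RMS diff {rms_diff:.3f} C")

# With ~600 of 6000 stations the subset averages track the full-network
# average closely -- the redundancy described above.
```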
Pete Dunkelberg says
Shorter Muller: Oh snap! even arithmetic has a liberal bias!
GlenFergus says
At the month-to-month scale, the Berkeley temps (from here) are vastly more noisy than any of the other estimates, including the satellite measures. Anyone know why?
Berkeley plotted in brown: 1850-2011, 1970-2011
As well, over the last decade or so Berkeley appears to be running substantially higher than the other estimates. Oddly, that doesn’t seem to be reflected in their published charts.
GlenFergus says
For some quantification of inter-month noise, the standard errors to the post-1970 linear monthly trends are:
NASA GISS ………………. 0.18°C
HadCRUT3v ……………… 0.12°C
NOAA NCDC …………….. 0.12°C
Berkeley ………………… 0.36°C
NCEP/NCAR reanalysis … 0.19°C
RSS LT v3.3 ……………… 0.17°C
UAH LT v5.4 ……………… 0.18°C
Berkeley monthlies over the last 4 decades appear to be nearly twice as noisy as the next highest estimate. I suggest that the high Berkeley inter-month noise is unlikely to be physical, because it is inconsistent with six other reputable estimates encompassing three largely independent approaches. Presumably it is an artifact of their data processing method.
At least at the monthly scale, the Berkeley results should be considered unreliable, in my view.
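For readers who want to check numbers like these themselves, here is a sketch of the statistic presumably being quoted: the standard deviation of the residuals about a linear trend fitted to the post-1970 monthlies. The series below is a synthetic stand-in; point the function at any of the real monthly indexes to redo the comparison.

```python
import numpy as np

def residual_std_error(time, temps):
    """Standard deviation of residuals about a fitted linear trend --
    the inter-month 'noise' statistic tabulated above."""
    slope, intercept = np.polyfit(time, temps, 1)
    residuals = temps - (slope * time + intercept)
    return residuals.std(ddof=2)   # two fitted parameters

# Synthetic monthly anomalies standing in for a real post-1970 index.
rng = np.random.default_rng(5)
months = np.arange(1970, 2011, 1.0 / 12.0)
temps = 0.017 * (months - 1970) + rng.normal(0.0, 0.15, months.size)
print(f"residual standard error: {residual_std_error(months, temps):.2f} C")
```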
Hank Roberts says
> an artifact of their data processing method
More likely a fact of their data, don’t you think?
They used the widest possible collection of source material, from different sites and lengths of time.
GlenFergus says
The “widest possible coverage” is provided by the satellite products, or, less satisfactorily, by the reanalysis products. Those show much lower variability. Whatever they have done appears to have introduced spurious inter-month noise.
Martin Vermeer says
GlenFergus, I’m probably being dumb, but are you comparing BEST land-only to other land-ocean temps?
Hank Roberts says
Glen, if you don’t like “widest” try different words: their stated approach was to find, collect, and come up with statistics to combine in usable fashion all of the various available data collections.
You’re complaining about what was described in the main post above: “efforts made by Robert Rohde on the dataset agglomeration and the statistical approach” — because you don’t like the result.
You could instead perhaps write up what you understand about the result? Perhaps you have a contribution to make here, rather than a complaint?
tamino says
Re: #188 (GlenFergus)
I think Martin Vermeer (#191) is right — you’ve compared Berkeley land-only to other land+ocean indexes. If you used all land-only series the comparison would be quite different.
Susan Anderson says
inter-month noise?!! c*** on a crutch! We’ve seen a good bit of real inter-month weather noise lately and it is likely to break the world’s banks – what’s left after the banksters have done their hit and run stuff.
pardon my “french” but even a sense of humor is not enough to get through this nonsense sometimes.
Yeah, I know, I’m talking about reality and its timeframe doesn’t work with physics and statistics properly, but that’s what we live with – noisy or not, it’s physically challenging. I’ll get over myself, but not just yet.
KR says
Romain @ 182 – Regarding oversampling:
One excellent reference for this is Hansen & Lebedeff 1987 (http://www.snolab.ca/public/JournalClub/1987_Hansen_Lebedeff.pdf), in particular Fig. 3 showing strong correlation of temperature anomalies over varying (and large) distances – a fairly small sampling provides excellent coverage.
We seem to have plenty of data points to determine the trends.
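For the curious, the Hansen & Lebedeff style correlation-versus-distance analysis is straightforward to sketch. The helper below is hypothetical (our illustration, not code from the paper): it correlates every pair of station anomaly series and bins the results by great-circle separation, which is how plots like their Fig. 3 are built.

```python
import numpy as np

def correlation_vs_distance(lats, lons, anomalies, bin_edges_km):
    """Correlate annual anomaly series for every station pair and bin the
    correlations by great-circle separation (Hansen & Lebedeff Fig. 3
    style). `anomalies` has shape (n_stations, n_years)."""
    R = 6371.0                                   # Earth radius, km
    phi, lam = np.radians(lats), np.radians(lons)
    dists, corrs = [], []
    n = len(lats)
    for i in range(n):
        for j in range(i + 1, n):
            # Spherical law of cosines for the great-circle distance.
            c = (np.sin(phi[i]) * np.sin(phi[j])
                 + np.cos(phi[i]) * np.cos(phi[j]) * np.cos(lam[i] - lam[j]))
            dists.append(R * np.arccos(np.clip(c, -1.0, 1.0)))
            corrs.append(np.corrcoef(anomalies[i], anomalies[j])[0, 1])
    dists, corrs = np.array(dists), np.array(corrs)
    idx = np.digitize(dists, bin_edges_km)
    # Mean pairwise correlation within each distance bin.
    return [corrs[idx == k].mean() for k in range(1, len(bin_edges_km))]

# Usage (with real station data loaded elsewhere):
#   correlation_vs_distance(lats, lons, anoms, np.arange(0, 6000, 500))
```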
GlenFergus says
Martin et al, thanks, that appears to be the explanation.
Martin Vermeer says
GlenFergus, don’t sweat it — doing actual numerics yourself puts you ahead of 99% of the pack!
Romain says
Thank you all for your answers…
ldavidcooke, the (over)sampling and uncertainty issues I was talking about are SPATIAL issues. Your answer is about TIME series.
Ray Ladbury, caerbannog, KR, thank you for your answers. I still have to digest it but meanwhile:
“Romain, for one thing you can see it in the record. Most nearby stations march in lockstep.”
Well this is the thing. And this is what I had in mind from previous discussions: a fairly large spatial correlation, like the 800-1200 km that can be found in the Hansen 1987 paper.
But then there is this figure 4 on page 10 of the UHI BEST paper. You can see very close stations (in California for example) showing different trends… so clearly in California the spatial correlation is poor, no? I’m just having some difficulty reconciling everything… a hint?
Hank Roberts says
> clearly in California, the spatial correlation is poor, no?
No. You know those California winery ads? They’re all about the local differences — microclimate differences — between areas. You can go a mile and be in a very different microclimate.
You’re looking at a picture of the entire USA with little blue and red dots.
Are those unadjusted temperatures, do you know?
Because that makes quite a difference. Here’s an example of the difference and how it can be spun if it’s not explained:
http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php
Ray Ladbury says
Romain, ever been to California? There is tremendous variability in terrain. Remember, what matters are the trends, and those tend to track.