Another week, another ado over nothing.
Last Saturday, Steve McIntyre wrote an email to NASA GISS pointing out that for some North American stations in the GISTEMP analysis, there was an odd jump in going from 1999 to 2000. On Monday, the people who work on the temperature analysis (not me), looked into it and found that this coincided with the switch between two sources of US temperature data. There had been a faulty assumption that these two sources matched, but that turned out not to be the case. There were in fact a number of small offsets (of both sign) between the same stations in the two different data sets. The obvious fix was to make an adjustment based on a period of overlap so that these offsets disappear.
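Conceptually, the fix is just a constant offset estimated over the period of overlap. A minimal sketch of that kind of adjustment (hypothetical station values and a hypothetical merge, not the actual GISTEMP code):

# Two sources report the same station: estimate their mean offset over
# the years both cover, then shift the second source so the records line up.
source_a = {1995: 11.2, 1996: 11.5, 1997: 11.9, 1998: 12.4, 1999: 12.1}
source_b = {1995: 11.0, 1996: 11.3, 1997: 11.7, 1998: 12.2, 1999: 11.9,
            2000: 12.0, 2001: 12.3}
overlap = sorted(set(source_a) & set(source_b))
offset = sum(source_a[y] - source_b[y] for y in overlap) / len(overlap)
merged = dict(source_a)
merged.update({y: t + offset for y, t in source_b.items() if y not in source_a})
print(round(offset, 2), merged)  # offset of about 0.2 here; post-1999 values are shifted to match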
This was duly done by Tuesday: an email thanking McIntyre was sent, and the data analysis (which had been due in any case for the processing of the July numbers) was updated accordingly, along with an acknowledgment to McIntyre and an update to the description of the methodology.
The net effect of the change was to reduce mean US anomalies by about 0.15 ºC for the years 2000-2006. There were some very minor knock on effects in earlier years due to the GISTEMP adjustments for rural vs. urban trends. In the global or hemispheric mean, the differences were imperceptible (since the US is only a small fraction of the global area).
There were however some very minor re-arrangements in the various rankings (see data [As it existed in Sep 2007]). Specifically, where 1998 (1.24 ºC anomaly compared to 1951-1980) had previously just beaten out 1934 (1.23 ºC) for the top US year, it now just misses: 1934 1.25ºC vs. 1998 1.23ºC. None of these differences are statistically significant. Indeed in the 2001 paper describing the GISTEMP methodology (which was prior to this particular error being introduced), it says:
The U.S. annual (January-December) mean temperature is slightly warmer in 1934 than in 1998 in the GISS analysis (Plate 6). This contrasts with the USHCN data, which has 1998 as the warmest year in the century. In both cases the difference between 1934 and 1998 mean temperatures is a few hundredths of a degree. The main reason that 1998 is relatively cooler in the GISS analysis is its larger adjustment for urban warming. In comparing temperatures of years separated by 60 or 70 years the uncertainties in various adjustments (urban warming, station history adjustments, etc.) lead to an uncertainty of at least 0.1°C. Thus it is not possible to declare a record U.S. temperature with confidence until a result is obtained that exceeds the temperature of 1934 by more than 0.1°C.
More importantly for climate purposes, the longer term US averages have not changed rank. 2002-2006 (at 0.66 ºC) is still warmer than 1930-1934 (0.63 ºC – the largest value in the early part of the century) (though both are below 1998-2002 at 0.79 ºC). (The previous version – up to 2005 – can be seen here).
In the global mean, 2005 remains the warmest (as in the NCDC analysis). CRU has 1998 as the warmest year but there are differences in methodology, particularly concerning the Arctic (extrapolated in GISTEMP, not included in CRU) which is a big part of recent global warmth. No recent IPCC statements or conclusions are affected in the slightest.
Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).
However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake. This led to some truly death-defying leaping to conclusions when this issue hit the blogosphere. One of the worst examples (but there are others) was the ‘Opinionator’ at the New York Times (oh dear). He managed to confuse the global means with the continental US numbers, he made up a story about McIntyre having ‘always puzzled about some gaps’ (what?), declared that the error had ‘played havoc’ with the numbers, and quoted another blogger saying that the ‘astounding’ numbers had been ‘silently released’. None of these statements are true. Among other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.
But hey, maybe the Arctic will get the memo.
Jeffrey Davis says
Still, Peak Oil people are seen as doommongers, much like environmentalists, and their thinking is not yet quite mainstream.
Favorite Far Side cartoon:
Two fish are outside their small fishbowl watching a fire consume their fishbowl castle. One says to the other, “Thank heavens we made it out in time. Of course, now we’re equally screwed.”
John Mashey says
re: #301
Peak Oil
ASPO-USA will be held in Houston in October. You can check who’s speaking and decide whether they actually know anything, or whether they are random doommongers.
(ASPO = Association for the Study of Peak Oil & Gas),
http://www.aspo-usa.com/
Alternatively, look at
http://www.lastoilshock.com/
and get Strahan’s book (Amazon Canada or UK, not USA).
Peak = 2015 +/- 5 years is the consistent estimate. Personally, I’m planning for $10/gal gas here within 10 years.
Peak Oil & Global Warming are rather tightly coupled. If we do our best to burn up oil fast, we not only increase CO2, but when the gas gets really expensive, we get to have a World Depression that makes it hard to invest very quickly in replacements, and will almost certainly mean we’ll be burning a lot of coal before we figure out how to sequester it.
Maybe we can avoid this; younger folks will get to see it firsthand!
John Tofflemire says
Tamino says (#280):
“The “denialist propoganda” to which I refer is the claim that the globe was cooling for 30+ years mid-century. The impression which is intended is that the planet cooled, and kept cooling, for three decades. It just ain’t so.”
In #256 Tamino said:
“It’s more correct to say that it cooled from about 1944 to 1951 (7 years), then levelled off for 24 years.”
In fact, one could claim, with a reasonable degree of validity, that it is “denialist propoganda” that the earth warmed, and kept warming, for the 30+ year period from 1976 to 2006. For example, the average global temperature anomaly in 1976 was -.1182 degrees Celsius (using the NOAA data). In 1981 the average global temperature anomaly was .2392 degrees Celsius, an increase of .3574 degrees Celsius in just 5 years. In 1996, the average global temperature anomaly was .2564 degrees, just .0172 degrees higher than 15 years earlier. In 1998, just two years later, the average global temperature anomaly was .5772 degrees Celsius. In comparison, the average global temperature anomaly in 2006 was .5391 degrees Celsius, or .0381 degrees Celsius cooler than in 1998!
Thus, one could claim that the earth’s temperature rose rapidly between 1976 and 1981, then essentially “fluctuated” between 1981 and 2006, except for the rapid increase between 1996 and 1998. Under such logic one could further claim that the earth has not “warmed, and kept warming for three decades”, to paraphrase Tamino’s statement noted above.
Of course, such a claim is sheer nonsense. The earth has been in a warming period since 1976 and to deny that is to deny reality. That is, temperatures have, on average, risen and have, on average, tended to remain at elevated levels. Similarly, between 1944 and 1976, temperatures, on average, fell and, on average, tended to remain at suppressed levels. The difference is that the total temperature increase in the warming period since 1976 has been much greater than the total temperature decrease that took place in the 1944 to 1976 cooling period. That is also reality.
Now, Hal Jones notes (#290) that:
“#276. John T. #280 tamino I suppose it depends if you’re talking about constantly cooling every year versus the general trend.”
In fact, between 1976 and 2006, in 17 years the year-on-year change in the average global temperature anomaly was positive and in 13 years the year-on-year change in the average g.t.a. was negative. In other words, average temperatures fell nearly half the time year-on-year during a warming period! The earth’s temperature fluctuates up and down whether the period is warming or cooling. It NEVER constantly warms or cools!
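To make the point concrete, here is a toy calculation (made-up anomalies with a built-in warming trend, not the NOAA series): a series can have an unmistakable upward trend even while nearly half of the year-on-year changes are negative.

import random
random.seed(0)
# Made-up anomalies: a steady 0.015 C/yr underlying trend plus noise of
# roughly the size of real year-to-year variability.
years = list(range(1976, 2007))
temps = [0.015 * (y - 1976) + random.gauss(0.0, 0.1) for y in years]
# Ordinary least-squares slope of the whole series.
n = len(years)
xm, ym = sum(years) / n, sum(temps) / n
slope = sum((x - xm) * (t - ym) for x, t in zip(years, temps)) / sum((x - xm) ** 2 for x in years)
drops = sum(1 for a, b in zip(temps, temps[1:]) if b < a)
print("trend: %.3f C/yr, year-on-year drops: %d of %d" % (slope, drops, n - 1))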
Why is all of this important and why is this thread not “much ado about nothing”? The U.K.-based Climate Research Unit last week forecast that the average global temperature anomaly in 2014 will be .3 degrees Celsius higher than in 2004. Since the NOAA average g.t.a. in 2004 was .5344 degrees Celsius, the CRU is forecasting that the NOAA g.t.a. in 2014 will be .8344 degrees Celsius (+/- some confidence interval). There is a lot riding on this forecast for, if it is correct, then it would be impossible for anyone but the delusional to argue that AGW is real. On the other hand, if the actual average g.t.a. is significantly lower than this figure (crossing our fingers that we don’t have an accursed major volcanic eruption to screw things up), the AGW theory as currently held will very much be in doubt. Thus, it is crucial that the temperature measurements taken over the next seven years be as accurate as possible. Steve McIntyre has done everyone a favor (most of all AGW proponents) here in this regard.
John Tofflemire says
Sorry! I meant to say: “it would be impossible for anyone but the delusional to argue that AGW is false”
Chris says
RE #3
Nick, you are referring to Ramanathan, V. et al (2007). That is not the conclusion and the study has no bearing on the 1940-70 period.
Chris says
As Gavin has continued to point out, this study indeed means nothing. See the graph on global temperatures which will not change.
http://www.epa.gov/climatechange/images/triad-pg.gif
The U.S. map gives you some feel for how variable climate change can be regionally and locally over long periods of time
http://www.epa.gov/climatechange/images/tempanom.gif
Bill Nadeau says
#306
And what assurance do you have the global temperature will not change?
Lawrence Brown says
Re 288, Lynn says:”Well, see the scientists want to do as much as they can now with whatever evidence they have, before what actually happens during the 21st century has a chance to validate or invalidate their models. That’s because by 2100 there may not be any scientists left.”
This will be more than compensated for by the fact that there won’t be any economic forecasters left either. There’s a lot of angst about the future of civilization as we approach the point of no return. Nobody wants to see our progeny as nomads roaming the Arctic on a subsistence level. So in that light, when our backs are to the wall, which they soon will be, the skeptics and their “uncertainty” arguments will be drowned out by reality.
BTW, the source code for ModelE is readily available online. See Gavin’s response in #211. He may have been playing cat and mouse for a while and then concluded that some of us were too challenged to find it (like yours truly).
dallas tisdale says
In this highly polarized debate, I feel that everyone should remember that the vast majority do not deny that man has an impact on this planet. The question is how much, and how soon. That is not irrational.
The label denier is a bit much. Skeptical is better. While the correction of the US historical data does not indicate cooling, it does indicate warming without acceleration. That would be a good thing. While that may apply to only 2% of the earth’s surface, it is an indication that new technology to reduce emissions and improve efficiency may be implemented before the tipping point, with less financial impact.
If you are absolutely certain that all the data is 95% plus correct, unwilling to fine tune the data with other statistical methods, unwilling to bear peer review from less than cliquish peers, then the science is settled. Some are not that certain.
As to the request for code: it should not be required unless someone has built their own model based on the provided algorithms and data and found significant differences. If the results cannot be replicated from the provided information by a competent source, then a comparison of code may be in order. Is that unreasonable?
Since the majority of the science now boils down to statistical analysis, should not a variety of statistical methodology be used to validate the models?
I am not a climatologist; I only studied engineering, so the only thing I can offer is the KISS rule. If it is a statistical problem, get a statistician.
Hank Roberts says
Bill, I think Chris means that you won’t be able to see the difference in the chart — before and after this particular correction being discussed here is made. The picture — at that scale — won’t change visibly. Chris, izzat what you meant?
cce says
Quick question.
1998 was originally calculated to be warmer than 1934, and was so until at least 2001. By this year, it was calculated to be warmer, which the recent correction reversed. Does anyone know when 1998 surpassed 1934 in the NASA calculations? I just read a story where it was said that 1998 had “long been believed” to be warmer than ’34 but that’s a bit of an exaggeration.
cce says
Sorry that should be “1998 WASN’T originally calculated to be warmer than 1934 and WASN’T so until at least 2001”
Steve McIntyre says
#311. You are incorrect to say that “1998 was originally calculated to be warmer than 1934” – I presume we’re talking U.S. here. 1934 was originally almost 0.6 deg C warmer than 1998 (Hansen et al 1999) and NASA 1999 news release. In the next two years, 1998 gained 0.6 deg C on 1934.
Contrary to one of Gavin’s posts, the time-of-observation bias adjustment was included in Hansen et al 1999.
[Response: Not so. Read the abstract of Hansen et al (2001): “Changes in the GISS analysis subsequent to the documentation by Hansen et al. [1999] are as follows: (1) incorporation of corrections for time-of-observation bias and station history adjustments in the United States based on Easterling et al. [1996a]”. – gavin]
The dramatic increase in 1998 relative to 1934 appears to originate in Karl’s “station history adjustment”, which was added to NASA calculations between 1999 and 2001, with dramatic results. [edit]
[Response: Also not true. The Plate 2 in Hansen et al (2001) clearly shows that the effect of the TOBS adjustment between the 1930s and 1990s is larger than that of the station history adjustment (both of which are significant however). – gavin]
John Mashey says
re: #270
Well, if not weekly, let us know what happens sometime; most of the people who’ve argued about this have just seemed to disappear.
====
re: SETI@home, etc
1) Here’s a good list of such projects:
http://en.wikipedia.org/wiki/List_of_distributed_computing_projects, of which one is:
2) http://en.wikipedia.org/wiki/Climateprediction.net
3) Distributed PCs can work well for certain kinds of work, of which SETI@home, finding primes, etc are good examples. Note that 2) is an ensemble project: that is, each PC runs a completely separate simulation that will fit there, and people look at the ensemble results. [Of course, if someone really doubts a simulation code, running a bunch of instances on different machines won’t remove their doubt … in fact, it would increase the need to check each run. :-)]
Algorithms that work well this way usually have the following characteristics:
A) There are a huge number of independent tasks.
B) Ideally, as a machine becomes idle, it gets the next task (hopefully small), spends a lot of time computing, and then hands back a short answer [ideally, YES or NO], i.e., the work has a HIGH compute:communication ratio.
(A lot of particle physics people have used workstation/PC farms to look for interesting events (which hardly ever happen). Each system takes an event, spends a lot of time looking for interesting patterns, and then returns YES or NO. SETI@home is similar to this, as are various other kinds of problems. Somewhat similar have been renderfarms at special-effects shops; I don’t know current times, but in olden days a system might run 2 hours to generate 1 frame of a movie.)
C) If you get a result back from a machine that you don’t own, whose software environment might be unknown, you can VERIFY interesting results easily. Hence, if some PC doing SETI@home says “YES”, you can easily check that by rerunning the algorithm. [If a PC somehow uses bad code and misses the little green men, you may not notice.] Of course, if you are dealing with PCs you don’t trust, you can at least send the same inputs to multiple machines and compare.
D) Unfortunately, none of this helps much to parallelize a single run of physics-based time-stepped gridded algorithms across multiple distributed PCs, especially with dense 3D grids.
I don’t think any of the projects in the above are of this type. If you know of a project that is (seriously) doing this, please post.
People can and do make serious CFD or FE codes work on Linux clusters, for example, but it certainly takes work, and people usually use fast Ethernet, Myrinet, IB, etc, and dedicated machines to minimize latency.
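To illustrate the high compute:communication pattern in B) and C) above, here is a toy sketch (Python, with made-up “work”; not the code of any real project): each worker takes an independent task, crunches for a while, and hands back only a tiny answer that is cheap to re-check.

from multiprocessing import Pool

def looks_interesting(task_id):
    # Stand-in for a long, independent computation; the only thing
    # communicated back is the task id and a tiny yes/no answer.
    total = sum((task_id * i) % 97 for i in range(100_000))
    return task_id, total % 7 == 0

if __name__ == "__main__":
    with Pool(4) as pool:                       # 4 workers, 100 independent tasks
        results = pool.map(looks_interesting, range(100))
    hits = [tid for tid, yes in results if yes]
    # Any "yes" can be verified cheaply by rerunning just that one task.
    print(hits)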
(more later, relatives visiting for a couple days).
Alan K says
#279 oh and “Ah, now the unobjective statement behind your question becomes obvious. “WE”, the voters do not see that at all. Check the various polls re: the need for action on GHGs.”
I’m sure every poll says there is a need for action on GHG, the same polls probably say people will vote for higher taxes to help the poor and would be happy to donate 10% of their income to charity. The reality, meanwhile, is that people actually vote with their feet.
eg. the first one off the top of google
http://www.oag.com/oag/website/com/OAG+Data/News/Press+Room/Press+Releases+2006/Global+Warming+fears+fail+to+dampen+demand+for+air+travel+0910064
Jake Ruseby says
I would imagine that the uncertainties on these numbers would mean that any reordering of the rankings would be statistically meaningless, as you point out. What are the uncertainties on these numbers? Without them I’m not even sure how to interpret what I’m looking at. This also means that there shouldn’t have been any hoo-ha when 1998 and 2006 turned out to be the warmest years.
DWPittelli says
Gavin,
As you say, “The algorithms, issues and choices are outlined in excruciating detail in the relevant papers” and “The error on the global mean anomaly is estimated to be around 0.1 deg C now.”
However, to me the fact that no one caught a 0.15 deg C error in the US data for several years shows that the level of transparency has been inadequate to the goal of achieving accuracy. It would certainly be easier for organizations providing “corrected” data to include all the corrections in their database than for outside researchers to hunt through various articles and from them make guesses about exactly when and how various corrections have been made to each piece of data. Until this level of transparency has been achieved, I think it is reasonable for people to be skeptical of claims that any area’s temperature record is accurate to 0.1 deg C.
[Response: The error in the global mean anomaly is around 0.1 deg as I said. The fact that a revision 0.15 deg C in the US made no appreciable difference to the global mean implies that local area errors need to be much larger to have a significant impact. Note that no claim is made that individual stations data is correct to 0.1 deg C – the low error for the global mean comes from the power of large numbers and the large scale averaging that goes on. – gavin]
DWPittelli says
Yes Gavin, I understand that the US is only 2% of the world’s area and so the 0.15 C in the US is trivial globally. However, no one noticed a 0.15 C error in the US data for several years, and I am skeptical that data in most of the rest of the world is under much better scrutiny.
caerbannog says
Given the amount of noise in the data, it is not surprising that the 0.15 C error wasn’t detected immediately. Year to year variation in the USA’s mean temperature is far greater than 0.15 deg. So the error would likely not have been spotted by *anyone* until several subsequent years of data were collected and analyzed.
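Here’s a toy illustration of the point (made-up numbers of roughly the right size, not the real USHCN series): an offset of 0.15 deg buried in year-to-year swings several times larger does not jump out of the raw numbers.

import random
random.seed(2)
# Made-up US-mean anomalies: year-to-year scatter of about 0.5 deg C,
# with an artificial +0.15 deg offset tacked on from 2000 onward.
years = list(range(1990, 2007))
temps = [random.gauss(0.0, 0.5) + (0.15 if y >= 2000 else 0.0) for y in years]
before = [t for y, t in zip(years, temps) if y < 2000]
after = [t for y, t in zip(years, temps) if y >= 2000]
print(round(sum(before) / len(before), 2), round(sum(after) / len(after), 2))
# The built-in 0.15 shift is smaller than the typical year-to-year swing,
# so it takes many years of data (or a direct comparison of the two
# sources over their overlap) to pin it down.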
And given that skeptics have had full access to both the corrected and raw data for years, and given that there has been plenty of funding available for skeptics’ activities, the fact that this 0.15 boo-boo is all the skeptics have to show for their efforts is an indication that the data overall are pretty robust. If there *were* serious problems with the data, a couple of dedicated analysts with a few 10’s of K of funding from Exxon-Mobil would have been able to uncover them. Exxon and others would have been throwing money at research and analysis instead of paying to have puff-pieces published in the National Enq^H^H^HReview, WSJ, and other partisan publications.
Tim McDermott says
Dallas Tisdale said: Since the majority of the science now boils down to statistical analysis, should not a variety of statistical methodology be used to validate the models?
The adjustments to historic temperatures are not “the models.” Climate models are generally constructed from first principles of physics and chemistry. They are independent of historical data except that the skill of the model is judged, in part, by its ability to reproduce historical trends. Since there are several independent temperature series that all agree to the level of accuracy needed to evaluate models, there seem to be better places to spend the science budget.
J.S. McIntyre says
re 309 –
Some thoughts from a non-scientist type who has been watching this unfold and come to some conclusions accordingly regarding labels such as denialism, skepticism and the so-called “debate”. I don’t comment on this site too often; I don’t have the technical background to hold my own, for one thing. I, like many people in my situation, have to rely upon what I can read in reports and science popularizations. Often, it comes down to observing little things like tactics, asking questions about motives, and understanding the basics of how science works across all fields. In that regard I am qualified to comment.
Simply put, this is a debate, yes, but only in the rhetorical sense, IMHO. As a scientific debate, I believe the facts supporting GW so far outweigh the arguments against GW that to call it a debate would be, at best, silly. There is just too much data, from far too many sources, to suggest that there is any legitimacy to the so-called AGW “Skeptics” position.
In fact, from all appearances, what is really occurring is what David Michaels, professor at the George Washington University School of Public Health, correctly characterized as manufactured uncertainty:
http://www.ajph.org/cgi/content/abstract/95/S1/S39
See also:
http://www.ucsusa.org/news/press_release/ExxonMobil-GlobalWarming-tobacco.html
http://thepumphandle.wordpress.com/2007/01/11/exxonmobil-says-it-will-stop-manufacturing-uncertainty-%E2%80%93-who-is-next/
http://www.motherjones.com/news/feature/2005/05/some_like_it_hot.html
Peer review is another indicator. If there were a real scientific debate, that is where I would expect to see it in play. But this does not seem to be the case:
http://www.sciencemag.org/cgi/content/full/306/5702/1686
It also helps to understand the expertise of the people involved. Paralleling the so-called Evolution-Creationism debate, the majority of these folks criticizing GW science have no background in the science they seek to criticize, do no research of substance into the phenomena but instead engage in often spurious critiques of the science, much like Intelligent Design’s Discovery Institute, which does no research, just publishes books and op-ed pieces while funding efforts to undermine school policy in places like Kansas and Pennsylvania. Even from a casual perspective, this has to raise questions as to the legitimacy of the critiques they offer of the people who actually studied and work within the field of climatology.
A great dissection of how far apart the methodology of the so-called “skeptics” is from that of the climatologists can be found here:
https://www.realclimate.org/index.php?p=74
While on the surface this link is a discussion of sci-fi author Michael Crichton’s cherry-picking of data to support his work of fiction, it is also an excellent snapshot of the types of arguments and tactics employed by the denialists over the past few years. Far too often, we see denialists take bits and pieces of data out of context and attack the resulting straw man they create (like the attempt to discredit Mann et al’s Hockey Stick), creating an illusion of a debate, of doubt far in excess of the actual uncertainty, that is at best disingenuous and, if I may be so bold, possibly criminal, particularly if the long-term effects of AGW end up being as dire as the middle-of-the-road projections suggest.
In light of this, to suggest that the GW “Skeptic” crowd is truly Skeptical in the sense of how skepticism is employed in science is, at best, quite a stretch:
http://www.skeptic.com/about_us/discover_skepticism.html
Or are you seriously willing to suggest that people like Crichton or the makers of “The Great Global Warming Swindle” …
https://www.realclimate.org/index.php/archives/2007/03/swindled/
…represent a skeptical approach to the issue? To suggest that this current issue, which more and more appears to be the tempest-in-a-teapot the lead article makes it out to be, somehow refutes global warming (as many of the folks in the Denialist camp appear to want people to believe)?
I could go on, but I think you get where I’m coming from. It’s not any one thing that causes problems for the AGW “Skeptic” crowd; it is the overwhelmingly obvious pattern of tactics and less-than-forthright behaviors that causes problems for them with anyone who takes the time to watch and research their methodology in action over time. They are not doing any science of significance; they are not offering anything new to give the science something to chew on. Instead, they are sowing uncertainty, often playing fast and loose with the facts. People like Steve McIntyre can keep playing this game, doing their part in enabling behavior in the general population classically referred to as ‘Fiddling While Rome Burns’, and they will likely succeed in the continual seeding of doubt, at least for a time.
But whatever it is you think they are doing, it is apparently not about engaging in legitimate skepticism.
Timothy Chase says
DWPittelli (#318) wrote:
If you just think in terms of the normal distribution, and assume thermometers can only measure with an accuracy of a degree, then the larger the number of thermometers, the more accurate the average. A large enough number of thermometers can make the uncertainty regarding the average temperature arbitrarily small, since the uncertainty of the average is proportional to the inverse of the square root of the number of thermometers.
Statistics 101.
I assume that those who have actively participated in this debate for a while will be aware of this.
Others might want to check out the following for a look at this from a refreshing perspective…
The Power of Large Numbers
July 5th, 2007
http://tamino.wordpress.com/2007/07/05/the-power-of-large-numbers
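Or, to watch the 1/sqrt(N) behavior yourself, here is a quick Monte Carlo sketch (simulated thermometers, each with a full degree of random error; not real station data):

import random
random.seed(1)
true_value = 0.5   # a pretend "true" anomaly, in degrees C

def measured_average(n_thermometers):
    # Each simulated thermometer reads the true value plus ~1 degree of error.
    readings = [true_value + random.gauss(0.0, 1.0) for _ in range(n_thermometers)]
    return sum(readings) / len(readings)

for n in (1, 100, 10000):
    errors = [abs(measured_average(n) - true_value) for _ in range(200)]
    print(n, round(sum(errors) / len(errors), 4))  # typical error shrinks roughly as 1/sqrt(n)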
PS
I might also point out that this is a special case of the principle that a conclusion which receives justification from multiple independent lines of investigation is often capable of a degree of justification far greater than that which it would receive from any one line of investigation considered in isolation. The evidence from surface stations, satellite measurements of the troposphere, sea surface temperatures, etc. all add up, and much more quickly than someone just off the street might think.
Timothy Chase says
Incidentally, while I most certainly don’t mean to suggest that it is common, no doubt there are some who believe or at least suspect that climatologists are deliberately manipulating the numbers in order to make it appear as if temperatures are going up or going up more rapidly than they actually are.
Time to retire that belief. As the result of the recalculation, 1998 went down, not up.
Additionally, for those who believe that climatologists simply aren’t concerned with accuracy, it is worth keeping in mind that the recalculation was part of a deliberate and systematic attempt to improve accuracy. It succeeded.
And looking at the chart for the global average temperature trend, there really isn’t room for much doubt with regard to the direction of the trend, or, for that matter, the approximate magnitude of the trend.
Barton Paul Levenson says
[[ Silly dim old skeptics can’t quite believe how computer models are able to model something as complex as the weather. Out to 100 years in the future.]]
You have weather confused with climate. Weather is chaotic and can’t be predicted beyond about five days. Climate is a long-term regional average (formally, 30 years or more), and is deterministic. An example to distinguish the two: I don’t know what the temperature will be tomorrow in Cairo, Egypt. But it’s a safe bet that it will be higher than in Stockholm, Sweden.
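A toy illustration of the difference, using the logistic map as a stand-in for chaotic “weather” (an analogy only, not a climate model): nearby starting points diverge quickly, but their long-run averages agree closely.

def run(x0, steps=20000, r=3.9):
    # Logistic map in its chaotic regime; a cartoon of "weather".
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = run(0.600000)
b = run(0.600001)                        # a tiny difference in the starting state
print(a[50], b[50])                      # the "weather" soon disagrees completely
print(sum(a) / len(a), sum(b) / len(b))  # the long-run averages ("climate") nearly coincide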
John Mashey says
re: #281 FCH
I must run off, and I’ll come back to this, but FCH: think about why your analogy doesn’t fit very well.
– Open source is useful and cost-effective for some kinds of software.
– It isn’t for others, i.e., the return on the effort to make it widely open, document it appropriately, respond to bug reports, etc. isn’t worth it.
In particular, UNIX/Linux source:
– was/is used by programmers who typically want to use the resulting programs often, for real work, and as needed, make modifications to make them do additional work, or write related systems software
– provided a way to get a large mass of widely-useful software onto some hardware platform or other
– was/is of direct use to software engineers with the relevant expertise and motivation to make it work, with or without any help
– is structured as large collections of usually-small, relatively-independent modules, of which many have purposes and code easily accessible to a high school student with no special background. Anybody who can read and compile C can add a new flag to a lot of commands.
– UNIX: Ken & Dennis & Brian & co were happy to make source available for *lots* of things … but had they been told they had to make everything they ever did available, and that their code would be assessed for its quality, and that various random people would be suggesting changes they needed to consider … that would have been the end of that. As it happens, over time, some people got to be considered as people who might actually have useful suggestions, but only because we proved over some years that we understood what was going on and were actually useful. Without naming names, some people were appalled at some of the “improvements” that were done elsewhere…
It is a labor of love for Linus and his lieutenants to do what they do.
Let us also reflect that GCC (a fine piece of software), in the last few years, has finally started to acquire global optimizations of the sort found in {MIPS, HP, IBM, etc} compilers 20 years ago. Likewise, things like XFS for Linux didn’t happen because a few people were looking at filesystem code and decided they’d like to play at doing better :-)
Now, if there were a UNIX/Linux systems programmer-sized community of people whose daily work is climate-simulation&analysis, who have both the relevant scientific backgrounds and programming skills and motivation to knowledgably examine source code and help improve it, that would be nice.
[One of these years, I’ll have to do an essay that backtracks the 60-year history of open-source, and look at why it works well where it does, and not where it doesn’t … but not here :-)]
Dave Blair says
#324, the problem is that you can’t just average climates. There are just too many variables. The climate on a Caribbean island is much too different from that in the Canadian Rocky Mountains. Regional and microclimates are important, as are variance, wind, humidity, clouds, etc. etc.
dallas tisdale says
Ref: 320
I understand. The historical data and proxy data may benefit from further statistical analysis; I meant the data used to determine the skill of the models. Sorry if “validate” was a poor word.
As far as the expense, given the urgency of the situation?
It is obvious that the Earth is warming and obvious that some portion of that warming is anthropogenic. I am confident that the business as usual mentality is changing, a change for the better. I just don’t share 95% confidence in the estimates of the rate of global warming.
Neil B. says
Over at Number Watch they have a chart showing the ever increasing “Difference Between Raw and Final USHN Data Sets” as well as photos of temperature sensors near outdoor air-conditioning fans, etc. They claim the first graph shows the adjusting factors are partly responsible for the warming trends, and their second complaint is that temperature stations aren’t monitored and corrected well enough (or is the first the answer to the second?) Comments please, ty.
[Response: The adjustments to the US data are not responsible for global trends. Continental trends on every continent (except Antarctica) show similar patterns. The adjustments certainly do matter, and the largest one is related to changes in the time of day people in the US took measurements, other ones deal with station moves and instrument biases. Are they claiming that known problems shouldn’t be corrected for? – gavin]
Timothy Chase says
Dave Blair (#326) wrote:
Actually you can – if you perform multiple runs with slightly different initial conditions. It gives you the spread – and it gets more accurate the higher the resolution.
With the NEC Earth Simulator from a few years back we were performing 32 trillion floating point calculations per second, and things have improved since then. Hadley is now initializing its models with empirical observations from consecutive days, and in runs started from measurements in past years they have found surprising accuracy in their projections over the near-term scale of a decade.
And that is with models grounded in physics, not some attempt to fit the model to the data. Climate models do the former, not the latter.
Lynn Vincentnathan says
#316 & This also means that there shouldn’t have been any hoo-ha when 1998 and 2006 turned out to be the warmest years [in the U.S.].
I don’t think there’s been any hoo-ha over which year in the U.S. recorded the highest temps. Even in years when the entire world records the highest temp (which is more pertinent to the concept of “global” warming), there’s no hoo-ha on any channel I watch. (Until last week I didn’t have cable, so maybe the science channels mentioned it.)
The sad truth is that the well-oiled media just don’t mention global warming much at all (and they talk about solutions even less), with only a slight pick up after Katrina (or A.K.), followed by a sharp deceleration. The public brainwaves have been flatlined by the media on global warming.
The only thing that might grab media and public attention is very very severe global warming disasters, one upon another (like the equivalent impact of a Katrina happening every month … and assuming we are not at war at the time) … which means we will have already passed the runaway tipping point of no return by the time people take global warming seriously enough to start doing something about it.
I sincerely hope I’m wrong.
Which brings me to the best GW policy IMHO: Hope for the best, and expect (& try to avert) the worst.
cce says
Re: 313 “You are incorrect to say that ‘1998 was originally calculated to be warmer than 1934′”
That’s why I corrected it in the post immediately following.
Just to reiterate,
1998 (US) was originally calculated to be cooler than 1934.
This remained so until at least 2001.
By 2006, it was recalculated to be slightly warmer (that is, it had a “bigger number”)
Now, it is slightly cooler again.
Does anyone know when 1998 “surpassed” 1934, since skeptical sites seem to be making a big deal about how “long” 1998 was on top.
“In the next two years, 1998 gained 0.6 deg C on 1934.”
Certainly you mean 0.06 degrees.
[Response: Actually it’s larger than that – about 0.3 deg C. The difference that the TOBS adjustment and station history moves made in the US ranking was quite large. For the 1930’s to 1990’s there is about 0.2 deg C for TOBS bias correction, 0.1 deg C for station history adjustment, 0.02 for instrument changes, -0.03 for urban effects as applied to the USHCN raw data – all of this is described in the 2001 paper (see plate 6). McI’s claim of 0.6 is probably from a misreading of figure 6 in the 1999 paper which used the convention of Dec-Nov ‘years’ rather than the more normal Jan-Dec average. The appropriate comparison is Figure A2 (d) (1999) and Plate 6(c) (2001). – gavin]
barkerplace says
You are reported in the Daily Telegraph as saying that global warming is “a global phenomena”. I hope the hack got it wrong and you in fact called it “a global phenomenon”, which it is BTW.
[Response: Possibly I can blame the phone connection…. – gavin]
Lynn Vincentnathan says
#327 & I just don’t share 95% confidence in the estimates of the rate of global warming.
It does seem that the rate is fairly slow in lay (though perhaps not geological) terms. And we seem to be only at the beginning of seeing the GW trend come out of the noise. I think the first studies to reach .05 significance on AGW were in 1995. So it may be too early to tell how fast the warming might speed up, or stay steady, or decelerate. That’s why they have a big range in projected scenarios (involving variations in sensitivity and GHG emissions).
My thinking has been that scientists tend to underestimate the problem, since they can only work with quantifiables. There are many factors that are thought to have impact, but are not (easily) quantifiable. Like the melting of the ice sheets & mechanics of their disintegration.
Now using my kitchen physics, when I defrost my Sunfrost frig (www.sunfrost.com) everything looks very stable, then after a long time lag (kitchen, not geological, time), the top ice sheet just breaks off KA-BOOM. Something like catastrophe theory in mathematics (which I know very little about) might be more applicable to cryosphere dynamics than linear algebra.
Then, of course, when the world’s ice and snow start vanishing en masse, that leaves dark land and sea to absorb more heat.
Other factors include a warming world releasing nature’s stores of GHGs — as in ocean hydrates and permafrost. Again, I guess it’s hard to quantify what these rates will be. But I imagine there will be threshold points for this — the point at which ice melts, say, at various levels of the underground or sea (tho sea currents impact this too) — and recent studies find some ocean methane hydrates at shallower levels than previously thought, and other studies indicate stored permafrost carbon going a lot deeper than previously thought. Undersea landslides are also a factor.
And then at a certain point of land/vegetation desiccation (warming causes more WV to be held in the atmosphere, taking it out of the land & plants), and fiercer wind storms, we can expect greater forest and brush fires. Those not only reduce the CO2-sequestering plant life, but also release CO2 into the atmosphere. And again, I imagine that such events would be hard to quantify, though I’m sure scientists are busy working on that.
There’s just a whole lot left out of the codes and equations and calculations that might not only indicate increases in the warming, but also accelerations of it — like some wild domino effect (I remember the nuclear fission demonstration as a kid with mousetraps and ping-pong balls).
I recently read on ClimateArk.org that British scientists have predicted the temps will level off for 2 years, then after 2009 rise sharply, and all hell is about to break loose. See:
http://www.climateark.org/shared/reader/welcome.aspx?linkid=81736 and
http://www.climateark.org/shared/reader/welcome.aspx?linkid=81891
But, again, does their “code” contain all these hard-to-quantify factors? Are they being too scientifically cautious with “fudge factors”, if they included them at all? Maybe it’ll even be worse than they suggest, but I sure HOPE (Help Our Planet Earth) that they are wrong and we luck out.
nanny_govt_sucks says
Gavin said:
Even if there is no indication of anomalous data in the rural station?
[Response: The GISS analysis is done to get the best regional trend, not the most exact local trend. Once again, read the reference: “the smoothing introduced in homogeneity-adjusted data may make the result less appropriate than the unadjusted data for local studies”. If you want to know exactly what happened at one station, look at that one station. If you want to know what happened regionally, you are better off averaging over different stations in the same region. The GISS adjusted data is *not* the best estimate of what happens at a locality, but the raw material for the regional maps. Think of an example of a region with two rural stations – one has a trend of 0.15 deg/dec, the other has a trend of 0.25 deg/dec – the regional average is 0.2 deg/dec, and in the GISS processing, both stations will have the trend set to 0.2 deg/dec prior to the gridding for the maps. Different adjustments for different purposes. – gavin]
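(A minimal numerical sketch of the two-station example in the response above, with hypothetical trends rather than real GISS station data:)

# Two rural stations in one region, with different local trends (deg C/decade).
station_trends = {"rural_A": 0.15, "rural_B": 0.25}
# The regional trend is their average ...
regional_trend = sum(station_trends.values()) / len(station_trends)
# ... and for the regional maps each station is set to that regional value
# before gridding, so the adjusted series is deliberately not the best
# estimate of what happened at either individual site.
adjusted = {name: regional_trend for name in station_trends}
print(regional_trend, adjusted)   # 0.2 deg/decade for both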
steven mosher says
Gavin,
In reading Hansen 2001 I Came across this:
“The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments. Ocean temperature data for the same period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior of 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted)”
Now, the elimination of this data on the presumption of contamination doesn’t lead to much of a change. That is not my issue. My issue is the documentation of the process. Since the “other sites” in the region are not listed, I can’t really duplicate the analysis.
But, I tried. So I looked at Lake Spaulding and sites within 70 km or so. I had no idea what kind of comparison radius was used around each site to do the checks. Anyway, if you compare Lake Spaulding with Tahoe City, Colfax, and Nevada City you can see that Lake Spaulding has a cooling trend from 1914-1931 (not 1927) that differs from these other stations. Then Lake Spaulding starts correlating with nearby stations in a more regular fashion. The problem is, I’m curious what objective statistical method was used to judge homogeneity. The text doesn’t say. Related question:
When you ingest data from USHCN, do you just ingest monthly means, or do you pull in daily detail data (tmax, tmin)?
Second, I looked at Crater Lake NPS HQ (which is a very cold place relative to its surrounding sites). Which sites did Hansen 2001 compare Crater Lake to? The two I glanced at showed no trend differences with Crater Lake; they were warmer than the Crater Lake station. In fact the difference in bias (there was no trend difference) between Crater Lake and the other station was fully accounted for by altitude differences (using Hansen’s 6 C per km). Crater Lake just happens to be an isolated, snowy, cold place. But since the warming trend is what matters (hey, the Arctic is cold), I was wondering if I could get a couple of pointers before I plunge into Crater Lake in any depth.
Can I get a pointer to the EXACT test used for homogeneity, and a pointer to the exact list of stations that Crater Lake NPS HQ was compared to (or a radius from its location)?
Dave Blair says
#329, Timothy, weather prediction is also based on physics and has some large computing power running its models too. The prediction for this afternoon’s weather will probably be pretty accurate, but 2-3 weeks from now? Same for climate predictions: you say they are accurate for the next decade, and maybe so, time will tell, but when we hear the predictions for 50 or 100 years from now, that’s when you wonder why all the movie stars and politicians are getting involved.
Alexandre says
Two questions. Beginner´s questions, actually:
– Is there any chance of mankind quitting fossil fuels before the planet runs out of them? (Unlikely in my view, but I would like to hear more educated guesses.)
– What levels of CO2 would we reach if all that buried carbon is released to the atmosphere? (and what would that mean in terms of GW?)
DavidU says
#335
Well the obvious question here is: Have you sent an email to the authors of the paper you discuss?
Unless they are retired or dead, the authors of a paper are normally the best people to ask. Unless you have asked them, you will rapidly become suspected of preferring to make a lot of noise rather than actually getting answers.
Phil Scadden says
In another argument (sci.physics.foundations), comment was supplied on http://www.numberwatch.co.uk/manmade.htm
What I found interesting here was that the website was showing a substantially different graph of global satellite temperature from the RSS and UAH curves. Can anyone enlighten me as to where their graph may have come from?
Also in that context, http://www.dailytech.com/Blogger+Finds+Y2K+Bug+in+NASA+Climate+Data/article8383.htm
claims “Hansen refused to provide McKintyre with the algorithm used to generate graph data, so McKintyre reverse-engineered it. The result appeared to be a Y2K bug in the handling of the raw data.”
Now, given that the algorithms seem to be published, I find the claim weird. Anyone know the back story here?
Hank Roberts says
Dave Blair, your belief that studies of weather and climate are the same is one that’s frequently asserted; it’s been answered elsewhere repeatedly; response would be off-topic here.
You could find that clarified in the basic info at
https://www.realclimate.org/index.php/archives/2007/05/start-here/
Barton Paul Levenson says
[[#324, the problem is that you can’t just average climates. There are just to many variables. The climate on a Caribean island is much too different than in the Canadian Rocky Mountains. Regional and microclimates are important as is variance, wind, humidity, clouds, etc etc.]]
Hit the wrong button. Apologies if this message shows up twice.
The things you cite — wind, humidity, clouds — certainly affect the local temperature. But the temperature itself always measures the same thing — the heat content of the surface or the low atmosphere. It’s perfectly valid to average the temperatures of different regions, since it’s the same sort of thing being measured.
Barton Paul Levenson says
[[ What levels of CO2 would we reach if all that buried carbon is released to the atmosphere? (and what would that mean in terms of GW?)]]
We have at least enough coal and oil to quadruple the CO2 in the atmosphere, which would most likely raise global temperature about 5 K. We’d probably lose the polar ice caps; summer at each pole would always be enough to melt all the ice. A lot of the present coastal cities would be under water, including New York, Miami, Houston. The entire country of Bangladesh would be submerged.
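For what it’s worth, the rough arithmetic behind that figure (my own assumptions, not necessarily the exact numbers everyone would use): quadrupling CO2 is two doublings, and at roughly 2.5 K of warming per doubling of CO2, 2 x 2.5 K ≈ 5 K.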
Timothy Chase says
RE Steven Mosher (#335)
As we have both noticed, Gavin tries to keep the door open for everyone. But I would like it if we could keep this polite.
I believe it would be in everyone’s interest.
Of course I realize that you are making an effort in this way at present. But in light of what’s happened before, I think you can understand and perhaps even share my concern.
catman306 says
If denialists, instead of giving climate scientists nonsense arguments, would go outside and plant some trees, perhaps we would survive a few more centuries.
http://environment.newscientist.com/article/dn12496-forget-biofuels–burn-oil-and-plant-forests-instead.html
Hank Roberts says
Phil, look up the ‘Global Warming Swindle’ stuff, I think the first graph may be from that program; it’s being posted about on a lot of skeptic discussions but with no attribution that I can find for it.
Sparrow (in the coal mine) says
Because a) the raw data are publicly available and b) papers are supposed to contain enough detail to allow others to repeat the analysis. If a paper says ‘we then add A and B’, you don’t need code that has “C=A+B”. – gavin
And this is why so many serious scientists have trouble dealing with the public. This may be true and it’s probably even ideal in the academic and scientific world. But a refusal to release the code in the political arena is suicide. You went through this with Mann’s code, so why repeat the same mess? Are those lessons so easily forgotten? Heck, the refusal to release the code was discussed in the halls of Congress almost ten years after the fact!!!! Is it really so easy to forget? The vast majority of people are going to assume you are hiding something when you refuse to release the code. A large portion of the legitimate skeptics will view the paper as an attempt to confuse and stall those who are trying to replicate your work. If you have a legitimate reason not to release the code, then say what it is. Squeezing extra papers from software is a legitimate reason. Release part of the code if you have to. Refusing to release the code simply because you feel like keeping it secret is political suicide.
catman306 says
Sparrow’s points explain why so much money has been spent to encourage denialists: more pressure on climate scientists, lessened probability that real steps might be taken to actually do something about global warming, steps that will in all likelihood cost somebody money. We have denialists because their existence helps save somebody money.
Hank Roberts says
Phil, that “28 years” graph is found in a Powerpoint file written by a David Archibald for the Lavoisier Group – slide #1, but he gives no source for it or much else. Without a source, it’s just argument.
John Wegner says
The skeptics are troublesome for GISS and climate scientists everywhere.
Solution: Transparency.
It is as simple as that.
[Response: We publish hundreds of papers a year from GISS alone. We have more data, code and model output online than any comparable institution, we have a number of public scientists who comment on the science and the problems to most people and institutions who care to ask. And yet, the demand is always for more transparency. This is not a demand that will ever be satisfied since there will always be more done by the scientists than ever makes it into papers or products. My comments above stand – independent replication from published descriptions – the algorithms in English, rather than code – are more valuable to everyone concerned than dumps of impenetrable and undocumented code. – gavin]
Hal P. Jones says
Justin, I think the two go together (#294), the method and the calculations to implement the method’s goals. The code purports to do something. If you have the specific code used, you can analyze it and ensure that it accomplishes what it’s supposed to be doing. Um, if I write a program to generate random numbers, I can run that program over a period of time and validate that the numbers are random. I am trying not to go into anything OT here…
Thanks for the idea! I can see why some would have problems with those issues, Timothy (#297). But let me cover those issues from what I see happening. 1. I don’t know enough about it to know if anyone did or didn’t do anything to Dr. Mann’s graphs. But the point of that is you want an outsider, who is an expert at a discipline, to examine the scientific validity of something from their angle. How good he is at it or not, like I said, is not my area and a different issue anyway. 2. He’s not trying to do anything with the stations. Photographic documentation is part of the standards, and it lets you go relook and see how things are later without having to go back. Plus some aren’t sited well. 3. That may be. I think we have to wait and see once every one of them is looked at, so everything can be analyzed. I am only interested in having the best data available. No matter which way it goes. 4, 5. I don’t get that out of it.
Let’s see. It’s not his project, he doesn’t have control over who goes to what site where. Sure, perhaps some of it is a little over-enthusiastic, but maybe just because it’s interesting and exciting. Sure, instead of “good” it’s maybe better to use “meeting siting standards and minimizing adjustments”, and not “bad” but “not meeting siting standards and needs a lot of adjustments” would be more neutral, but it gets tedious to read that. However, if 7 people get 7 sites and 6 are “bad” and 1 is “good” and he compares them to see what effect that may have, I don’t consider that “cherry picking”; it’s just what’s there. I can’t fault anyone for focusing on the one issue of the sites and possible effects and not bringing in glaciers, sea ice, ocean temperatures, carbon dioxide and methane levels, albedo and the rest into every discussion and aspect of this.
But Pete (#298), some of this assumes that solar, wind, hydrogen, hydro, nuclear, clean coal, and the like won’t become more viable and less expensive in the future, that no new oil discoveries of any import will ever happen, and that the oil in shale won’t at some point become comparable in cost to the oil in the ground. Who knows, perhaps those with untapped liquid oil are buying it from others to save their own? I think there’s a reason it’s called “black gold”.
Now you’ve lost me Timothy (#299). Why wouldn’t an oil company (or any industry) fund endeavors to protect its interests? Governments and schools do it all the time. If I’m doing experiments that could turn out to be helpful to some entity, why wouldn’t they fund me? They’re going to fund somebody, even if it’s those trying to find new sources of oil, doing R&D into related fields (solar panels for example), and so on. I don’t know about you, but I like having gasoline, electricity, available food and the like. It’s just economics. Think about this: when the government of the United States, which makes money off every bit of gasoline sold in the entire country, starts complaining about oil company profits (which it benefits from, every single one), why are its motivations any less suspect than those of, say, Chevron/Texaco?
You are misunderstanding me I believe, from what you said in (#303) John. I’m not making any comment on the validity, scope, or importance of the trend from year x to year y or how somebody interprets it. Or the validity, scope, or importance of the measurements themselves or how somebody interprets it. I’m just saying if you go year to year you can say one thing, if you pick specific periods or specific lengths of time, you can say other things. I totally agree that we need to know as much as possible exactly what’s going on, and as accurately as possible. That the trend of the measurements shows we are warming over time is not really up for debate, anyone can chart it at NOAA.
Speak for yourself Lawrence (#308), I certainly want to see our progeny as nomads roaming the Arctic on a subsistence level…. But that’s beside the point; it’s always been easy to get ModelE, I believe.
Well cce, it’s like this (#311): the difference is not great enough for it to matter between the two years, nor is the effect upon the global numbers even if it were. But the point is that finding and correcting errors is the goal of many people, regardless of what “they’re trying to do” or what others think “they’re trying to do”. On “either side”.
Jake, it’s all about the trend dude. (#316)
I don’t think this is the same conversation going on in (#317), since I agree with both DWPittelli’s comment and Gavin’s response.
You’ve lost me also, caerbannog (#319). Your first paragraph is fine; the second is just, just, just… not very helpful in supporting the argument in the first.
J.S. McIntyre (#319). I see it all the time from everyone. Nobody has a lock on the rhetoric about this subject.
In an issue such as this, there’s going to be a very very wide range of people that believe a whole lot of things(#323). Matters of opinion are always such.
(#324), I don’t think that’s a very good analogy Barton. “Climate” has nothing to do with saying “It will be cold in the Arctic and Antarctic this December” or “Water boils at 100C”. (Comparing Egypt and Sweden on the same day.) Nah, I don’t have a better one. But your comment in (#341) bears a bit of explanation: “the temperature” itself always measures the same thing, namely the temperature of the material you are measuring, at the place you are measuring it, with the device you’re using to measure it.
I think that rather than how you phrased it Dallas (#327), it is obvious the anomaly is trending up, and if the measurements are stable as to the anomaly, then we are warmer “now” than we were “then”.
Alexandre, in #337 most of those two questions are answered in my posts, I think. The short answer is that nobody knows the answer to either. But we can make projections, some of which will be true and others not, to various levels of accuracy and margin of error. That’s what everyone is discussing all the time. All that you can get is the opinions of those who have looked into various aspects of this.