This is a continuation of the last thread, which was getting a little unwieldy. The emails cover a 13-year period in which many things happened, and very few people are up to speed on some of the long-buried issues. So to save some time, I’ve pulled a few bits out of the comment thread that shed light on context missing from the discussion of various emails.
- Trenberth: You need to read his recent paper on quantifying the current changes in the Earth’s energy budget to realise why he is concerned about our current inability to track small year-to-year variations in the radiative fluxes.
- Wigley: The concern with sea surface temperatures in the 1940s stems from the paper by Thompson et al (2007) which identified a spurious discontinuity in ocean temperatures. The impact of this has not yet been fully corrected for in the HadSST data set, but people still want to assess what impact it might have on any work that used the original data.
- Climate Research and peer-review: You should read about the issues from the editors (Clare Goodess, Hans von Storch) who resigned because of a breakdown of the peer-review process at that journal, which came to light with the particularly egregious (and well-publicised) paper by Soon and Baliunas (2003). The publisher’s assessment is here.
Update: Pulling out some of the common points being raised in the comments.
- HARRY_read_me.txt. This is a 4-year-long work log of Ian (Harry) Harris, who was working to upgrade the documentation, metadata and databases associated with the legacy CRU TS 2.1 product, which is not the same as the HadCRUT data (see Mitchell and Jones, 2003 for details). The CRU TS 3.0 is available now (via ClimateExplorer, for instance), so presumably the database problems got fixed. Anyone who has ever worked on constructing a database from dozens of individual, sometimes contradictory and inconsistently formatted datasets will share his evident frustration with how tedious that can be.
- “Redefine the peer-reviewed literature!”. Nobody actually gets to do that, and both papers discussed in that comment – McKitrick and Michaels (2004) and Kalnay and Cai (2003) – were cited and discussed in Chapter 3 of the IPCC AR4 report. As an aside, neither has stood the test of time.
- “Declines” in the MXD record. This decline was written up in Nature in 1998, where the authors suggested not using the post-1960 data. Their actual programs (in IDL script), unsurprisingly, warn against using post-1960 data. Added: Note that the ‘hide the decline’ comment was made in 1999 – 10 years ago – and has no connection whatsoever to more recent instrumental records.
- CRU data accessibility. From the date of the first FOI request to CRU (in 2007), it has been made abundantly clear that the main impediment to releasing the whole CRU archive is the small percentage of it that was given to CRU on the understanding it wouldn’t be passed on to third parties. Those restrictions are in place because of the originating organisations (the various National Met. Services around the world) and are not CRU’s to break. As of Nov 13, the umpteenth FOI request for the same data met with exactly the same response. This is an unfortunate situation, and pressure should be brought to bear on the National Met. Services to release CRU from that obligation. It is not, however, the fault of CRU. The vast majority of the data in the HadCRU records is publicly available from GHCN (v2.mean.Z).
- Suggestions that FOI-related material be deleted … are ill-advised even if not carried out. What is and is not responsive and deliverable to an FOI request is however a subject that it is very appropriate to discuss.
- Fudge factors (update): IDL code in some of the attached files calculates and applies an artificial ‘fudge factor’ to the MXD proxies to artificially eliminate the ‘divergence pattern’. This was done for a set of experiments reported in this submitted 2004 draft by Osborn and colleagues, which was never published. Section 4.3 explains the rationale very clearly: it was to test the sensitivity of the calibration of the MXD proxies should the divergence end up being anthropogenic. It has nothing to do with any temperature record, has not been used in any published reconstruction, and is not the source of any hockey-stick blade anywhere.
Further update: This comment from Halldór Björnsson of the Icelandic Met. Service goes right to the heart of the accessibility issue:
Re: CRU data accessibility.
National Meteorological Services (NMSs) have different rules on data exchange. The World Meteorological Organization (WMO) organizes the exchange of “basic data”, i.e. data that are needed for weather forecasts. For details on these see WMO resolution number 40 (see http://bit.ly/8jOjX1).
This document acknowledges that WMO member states can place restrictions on the dissemination of data to third parties “for reasons such as national laws or costs of production”. These restrictions are only supposed to apply to commercial use, the research and education community is supposed to have free access to all the data.
Now, for researchers this sounds open and fine. In practice it hasn’t proved to be so.
Most NMSs can also distribute all sorts of data that are classified as “additional data and products”, and restrictions can be placed on these. Such special data and products can range from regular weather data from a specific station to maps of rain intensity based on satellite and radar data. Many nations do place restrictions on such data (see the link for additional data on the above WMO-40 webpage for details).
The reasons for restricting access are often commercial: NMSs are often required by law to derive substantial income from commercial sources. In other cases it can be for national security reasons, but in many cases (in my experience) the reason simply seems to be “because we can”.
What has this got to do with CRU? The data that CRU needs for their database comes from entities that restrict access to much of their data. And even better, since the UK has submitted an exception for additional data, some nations that otherwise would provide data without question will not provide data to the UK. I know this from experience, since my nation (Iceland) did send in such conditions and for years I had problems getting certain data from the US.
The ideal that all data should be free and open is unfortunately not adhered to by a large portion of the meteorological community. Probably only a small portion of the CRU data is “locked”, but the end effect is that all their data becomes closed. It is not their fault, and I am sure that they dislike these restrictions as much as any other researcher who has tried to get access to all data from stations in region X in country Y.
These restrictions end up wasting resources and hurting everyone. The research community (CRU included) and the public are the victims. If you don’t like it, write to your NMSs and urge them to open all their data.
I can update (further) this if there is demand. Please let me know in the comments, which, as always, should be substantive, non-insulting and on topic.
Comments continue here.
Gaz says
That NZ stuff. See
http://hot-topic.co.nz/nz-sceptics-lie-about-temp-records-try-to-smear-top-scientist/
Jimbo says
“….We both know that Svensmark is right. For the sake of your discipline please stand up like a man and admit that you were wrong….” “…End this charade while “Hopenhagen” can still be easily averted….”
Comment by FergalR — 25 November 2009 @ 5:25 PM
[Response: Thanks! But no thanks. – gavin]
We will know soon enough after the CLOUD EXPERIMENT testing the Cosmic Ray Theory of Svensmark.
http://public.web.cern.ch/public/en/Research/CLOUD-en.html
Real science at work. Real observations whatever the results. [edit]
[Response: Well, the theory is from Ney and Dickinson decades ago, and as Bart described a few months ago, the CLOUD experiment has a pretty tall mountain to climb. We’ll see. – gavin]
Giorgio Gilestro says
You may want to add the link in #701 to the front page post too. The combination of the flat line + word “adjusted” is making people go crazy.
I also would like to thank the people here at RC. I am a neurobiologist, and in the last week I found myself arguing a lot with people, trying to explain how things are. RC has been a fantastic source of information. People talk a lot about data release policies: I wish they would talk about the openness and availability shown here at RC. I know of nothing even remotely similar in other disciplines of science.
Moira Kemp says
Regarding the National Meteorological Services, I was wondering if there is an international organization – and there is:
http://www.wmo.int/
They have expressed concerns about commercialization and data sharing here –
http://www.wmo.int/pages/about/Resolution40_en.html
This is obviously a bigger issue than can be dealt with by individual scientists, but maybe bringing it into the public domain might help in getting some of those issues resolved.
bielie says
1) I am sorry, but I cannot believe that not making data public because it was obtained from meteorology services around the globe is a good excuse, for the following reason: most weather services are involved with predicting weather. Any weather data loses its commercial value as soon as it is history. After that the data has purely academic value. Reading through HARRY_read_me, a more obvious reason is that the data was hopelessly corrupted/in disarray. Phil states in his emails: “We will hide behind…” and “Delete it rather than handing it over.” If the excuse were true he would rather have said: “I forwarded your request to such and such for release of the data into the public domain…”
2) Peer review: Your assessment of the quality of papers that were rejected cannot be believed, and with this I am not implying that you are wrong; I simply state that IF your institution did subvert the peer review system, you would definitely call any paper that was not passed hopelessly inadequate. Even if the staff of the publication in question resigned, it could still only mean that they are true believers or caved in under pressure. You have to do better than that. While you are free to read, recommend and publish in any journal you like, the email suggests much more than that: it suggests malevolent punishment of dissent, especially in the context of (historical) statements by non-AGW climatologists.
3) I am a scientist, but not a climatologist. Correct me if my understanding of this is wrong: using tree ring data is OK pre-1960 but it becomes unreliable post-1960. The reason could be a) all the tree ring data is wrong, and the reason it looks right pre-1960 is because the thermometer measurements were not that accurate. I mean, how many weather stations were there in the 19th century and how accurate were they? b) The tree ring data is correct but the surface data post-1960 is wrong, maybe because of urban heat islands? In both cases the data proves nothing. What is the correct answer? c) The trees suddenly started growing differently due to an unknown (anthropogenic – gasp) factor. This is seriously stretching it. You have to do better than that.
4) It is obvious that your models did not predict the current cooling spell. We always say the most exact scientific instrument is the retrospectoscope. So it may be correct to say that it is natural cooling superimposed on global warming. BUT it still leaves serious doubt about the validity of your modeling software. It does not inspire confidence at all.
[edit – stick to science]
[Response: 1) It’s clear that Jones was frustrated – something to do with the ~100 FOI requests for data that isn’t his to give perhaps. But nonetheless the facts remain. You are incorrect about the data losing value outside of the weather forecast time-period. If that was true, there would be no problem, but in fact, small-scale localised information on climatology, variability, extremes etc. (which require timeseries) is actually very valuable to businesses, planners and the like. 2) Peer-review. You have got this completely wrong. None of the emails were complaining about papers that *didn’t* make it through peer review – but instead the duff papers that did. And they were right to be worried about the possible breakdown in the system. 3) Tree ring data comes in many forms. The post-1960 divergence problem is a real issue for some kinds of proxies in some areas. It is not a blanket statement about all tree rings. There may be some other anthropogenic factor at work, an analysis artifact, or it might simply be how those trees respond – it remains to be seen what the conclusion is. In the meantime, feel free not to pay any attention to those tree ring records that show this. 4) the current cooling spell is no such thing, and the spread of the models over that same time shows that similar excursions happen all the time. There certainly were models that had 1998-2009 trends that were lower than the observations. – gavin]
Eli Rabett says
WRT the CLOUD experiment, not only do they have a tall hill to climb, they have dug themselves into a very large and easily predicted hole. Wall effects (what happens when molecules and aerosols hit the wall of the apparatus) and outgassing have long bedeviled attempts to study slow processes such as cloud nucleation; they are also very well known, but appear to have surprised the folk at CERN. Why they didn’t spot this in proof-of-concept experiments on a smaller and less expensive apparatus is hard to say (actually it’s easy:).
Thomas Lee Elifritz says
Mr. Miller (#604), fine then, if that’s what you think: who are these scientists, specifically, by name, and how exactly did they ‘pervert objectivity’, with references, in proper context, of course.
I need a good laugh on a holiday, at your expense. I’ve even brewed up a nice hot cup of tea awaiting your astute response. Science is self correcting, because the methods it uses vary widely and ‘evolve’. What you are doing isn’t science, it’s ‘propaganda’. Now I’m giving you a chance here to use credible modern scientific methods to bolster your case with actual evidence. This is your big chance, go for it.
Sloop says
Reply to #664 by a layman no longer lurking
The irony of your comment for me is that in my work I nearly always advocate for environmental data accessibility, no matter whether the data are generated by public or private organizations. As an aside, it is not true at all, however, that just because data generation is publicly funded the data should be released for all to see. Yes, I agree in principle that this should be the case for environmental data, but not for many other types, which could for example have homeland security implications or contribute to the problem of international corporate espionage.
There is a political dimension to the met data restrictions. If governments don’t adequately fund meteorological systems, including met data collection, analysis, and storage, then meteorological organizations are forced to find ways to commercialize their work and assets in order to keep functioning.
Alors, this problem of met data restriction arises out of the need to protect the marketing of the very data required to keep the met systems going! As governments we have made our own bed, and now the science we desperately need conducted and reported is being impeded for political reasons. An example of how the consequences of government policy decisions are rarely fully characterized in advance.
Furthermore, many reactionary politicians know that if they can squeeze the monitoring and science institutions, they have a better chance of maintaining the status quo. This goes on constantly across all issues of public health and safety that governments must address.
What’s unique about this area of science and policy is the unprecedented implications of climatology for our collective future. Arguably, not even the science and engineering that produced and maintains nuclear weapons systems has had the impact on human economies and society that this science is having and will have.
I’m taking a break for Thanksgiving. I encourage everyone to find space for repose, meditation, and restoration; I’m beginning to think that we’re at an important turning point and we’ll need all our strength and wits in the coming months.
Sloop
dhogaza says
It’s not your data. It’s not for you – or CRU, or Steve McIntyre – to decide whether or not the weather data has lost its commercial value.
It is solely in the hands of the service that owns the data.
You may look at an old car sitting on concrete blocks in my yard and decide, “oh, it’s worthless, I’ll take it!” but sorry, I get to make that determination, and I’ll charge you with theft whether or not you think it’s reasonable.
Don’t like the fact that you don’t get to decide what is or is not of value? Tough. This thinking that denialists get to decide what is, and what is not, proprietary is tiresome crap.
RaymondT says
Gavin, in reference to my previous message @573 RaymondT (25 November 2009 at 12:30 PM) and your response:
[Previous Response: All of our discretisations use a flux form for solving the equations, thus the conservation constraints are built in. There is no mass balance error in the sense you describe. – gavin]
All numerical solutions to partial differential equations involve a numerical error. Suppose that you start your numerical simulations in 2000 and that there are x moles of water in the atmosphere. Suppose that during the run y moles of water were transferred to the atmosphere from all other sources apart from the atmosphere over a period of 100 years. Because of numerical errors, the total number of moles of water in the atmosphere will not be exactly x + y in 2100. What is the numerical error in calculating the number of moles of water in the atmosphere at 2100?
[Response: Machine accuracy (a few parts in 10^15). We spend a lot of time making sure that there are no mass conservation errors. The numerical errors that are an inevitable part of solving PDEs on a grid are all in the magnitude of the fluxes, not in how the fluxes are applied. Think of it as if we are discretising the volume integral of the conservation equations and using the divergence theorem to relate the conservation solely to the boundary terms in every grid box. – gavin]
Ron R. says
“Which of the gratuitous assertions made in the quote by your guru, Peter Raven (other than the fact that the earth is overpopulated), can stand the test of verification to be anything other than pure hyperbole!”
Read the book Raven is introducing.
http://atlas.aaas.org/
“He claims that mankind has altered the composition of the atmosphere ‘profoundly’. Well, I guess that if increasing the concentration of a trace gas by a few hundredths of a percent is profound then he’s correct.”
And continuing to alter it: http://co2now.org. (note: Real Climate could probably use something like this on the homepage. Here’s the code http://co2now.org/index.php?option=com_content&task=blogcategory&id=29&Itemid=33.)
“Come on! A quarter of the topsoil, a fifth of the agricultural land? I’d like to see the basis for the calculations that generated those numbers. His claims about the loss of a ‘major proportion’ of forests and increases in species extinction by ’several hundred times’ sound like they come from the hysterical folks at WWF or NRDC.”
See http://earthtrends.wri.org/features/view_feature.php?theme=7&fid=34. Also see the book Raven is introducing. Lots of work went into it :-)
“By the way, did you intentionally leave out the part about his eugenic solutions to these problems?”
Really? I assume you have a link to that?
Pete Ridley says
Gavin, what’s your reaction to this comment (Note 1) – if the graph (which rather flattens the top of the hockey stick from 1970 – 2000) doesn’t post you can see it at the reference.
QUOTE:
Hiding the Decline: Part 1 – The Adventure Begins
From the CRU code file osborn-tree6/briffa_sep98_d.pro , used to prepare a graph purported to be of Northern Hemisphere temperatures and reconstructions.
;
; Apply a VERY ARTIFICAL correction for decline!!
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
;
yearlyadj=interpol(valadj,yrloc,timey)
This, people, is blatant data-cooking, with no pretense otherwise. It flattens a period of warm temperatures in the 1930s — see those negative coefficients? Then, later on, it applies a positive multiplier so you get a nice dramatic hockey stick at the end of the century.
All you apologists weakly protesting that this is research business as usual and there are plausible explanations for everything in the emails? Sackcloth and ashes time for you. This isn’t just a smoking gun, it’s a siege cannon with the barrel still hot.
UPDATE2: Now the data is 0.75 scaled. I think I interpreted the yrloc entry incorrectly last time, introducing an off-by-one. The 1400 point (same as the 1904) is omitted as it confuses gnuplot. These are details; the basic hockey-stick shape is unaltered.
UNQUOTE.
Note 1) see http://esr.ibiblio.org/?p=1447
[Response: Way overblown. Everyone knows (since it was published in Nature) that there is a problem with the MXD proxy post-1960. It makes perfect sense that the scientists involved would try a number of things to attempt to correct for the problem. The code highlighted is structured to produce two plots. The first is entitled “Northern Hemisphere temperatures, MXD and corrected MXD” and the second is “Northern Hemisphere temperatures and MXD reconstruction”. In the code as available, the “corrected MXD” plot and the artificial correction are commented out (note the ‘;’ on the line
;filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
. Instead the program will quite happily plot the temperatures and MXD reconstruction with no correction. As for where the correction comes from, it appears to be based on a PCA technique described in briffa_sep98_decline1.pro and briffa_sep98_decline2.pro: I guess the initial reason to do this would be to see if there is a spatial pattern to the divergence that might reveal something about its cause. The weighting of that pattern (the ‘yearlyadj’ PC weights) could be used to correct for the decline, but I’m not sure what use that would be. More importantly, I have no idea if that was used in a paper (I have no access from home), but since the graph would have read “Corrected MXD”, I don’t see how anyone would have been misled. It certainly has nothing to do with Jones’ comment or the 1999 WMO plot, nor the published data. This is just malicious cherry-picking. – gavin]
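For readers without IDL, here is a rough Python transcription of the quoted snippet (a sketch of my own, not CRU code): findgen(19)*5.+1904 generates the years 1904, 1909, …, 1994, and IDL’s interpol() is linear interpolation, like numpy’s interp. The span of timey is an assumption, since it is defined elsewhere in the original program.

```python
import numpy as np

# Grid points where the adjustment is specified: 1400, then 1904..1994 in 5s.
yrloc = np.concatenate(([1400.0], np.arange(19) * 5.0 + 1904))
# The 'fudge factor' values, scaled by 0.75 as in the IDL.
valadj = np.array([0, 0, 0, 0, 0, -0.1, -0.25, -0.3, 0, -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75
assert len(yrloc) == len(valadj)        # the IDL 'Oooops!' check

timey = np.arange(1400, 1995)           # assumed time axis of the series
yearlyadj = np.interp(timey, yrloc, valadj)
print(yearlyadj[-5:])                   # ~1.95 for the early 1990s
```

The printed values match the “Fudge Factor*.75” column that Mark Sawusch tabulates further down the thread.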
Geoff Wexler says
We need more transparency; much more!
And I am not referring to the public sector. For example, who really understands the information discussed, e.g., by Tim Chase here?
https://www.realclimate.org/index.php/archives/2009/10/climate-cover-up-a-brief-review/comment-page-7/#comment-139467
and in subsequent comments by him on the same thread. This sort of secret funding is subverting both science and democratic decision making. Seeing the tip of an iceberg is insufficient. Of course it is expensive running an outfit which requires thousands of lobbyists :
http://www.publicintegrity.org/investigations/global_climate_change_lobby/key-findings/
and of course many of these lobbyists may be nice, honest people correctly reporting the science, but if only we could use an extended FOI act to find out what advice these lobbyists are giving and how much they get paid in return. That’s not all. Just as there is some concern about what happens to streams which appear to disappear into moulins at the top of glaciers, so also we need to know what happens to the rest of these funds as they trickle down to the astroturfers.
The Enemy says
It seems to me that if there is a conspiracy afoot to create the impression that Earth is getting hotter then scientists are shooting themselves in the foot with all these graphs showing a basically flat temperature trend for the past decade. Some conspiracy.
Brian Dodge says
Has anyone else noticed that the people who “…will not trust the science until the process is 100% transparent.” are the same right-wing ideologues who demand government functions pay their own way or become privatized, thus guaranteeing profit-driven obfuscation? If you dingbats got your way, and all the data became IP for sale (at a price you can’t afford*), so the science couldn’t be “100% transparent”, how would you suggest making policy decisions? Flipping a coin? Rolling dice? Privatizing that function as well, and contracting Exxon/Mobil/Heartland/CEI Policymakers Inc. to decide for us?
*I worked for a non-profit institution which purchased a complicated piece of patient care equipment that I was going to maintain. Our purchasing department ordered a $250 service manual, and the company said that it could only be supplied after I took their $2500 training course. The legal beagles at our purchasing department said that they were obligated under FDA regs to provide a service manual, here’s our purchase order, we expect to see the manual ASAP. We then received a letter from the company’s legal department(CC’d to all customers) saying that due to a clerical error the pricing on service manuals was incorrect; that the price of the first copy of a service manual was $2500, and included one free training course[which it was strongly recommended customers take since patient safety was involved]; and that additional hard copies of the basic service manual would be provided at a discount to $250. The non-disclosure agreement that I had to sign to take the course prohibited me from publishing or revealing to any third parties any research results or knowledge obtained from use of their equipment or contained in the service manual or from the training course without the company’s prior consent. I was probably required to sign personally in addition to whatever blanket agreement had been made by some “responsible party”(head of purchasing?) because they were paranoid about me stealing trade secrets while at their factory for training.
Jeffrey Byrd says
Gavin: First off, I’d like to commend you on the professionalism you’ve shown in addressing these issues. I for one would have lost any sense of civility a long time ago. One statement you made in the last thread I must, however, take issue with. You stated something to the effect that $30 per person is a small price to save the planet. As history has shown over the course of many millions of years, it is the life on the planet that needs to be saved, not the planet itself. The planet has survived many mass extinction events, only to spring back to life time and time again. Humanity along with other species may not be so lucky this time around. In short, if anything needs to be “saved”, it is us.
With that said, I think it is fantastic that we have so many scientific minds working on the issue of climate change and the environment. With only a handful of people (many part-timers) watching the skies for cosmic debris that could wipe out life on this planet in the blink of an eye, it is refreshing to have at least one life-threatening issue being given the attention it well deserves. Skeptics should really be appreciative that someone’s got their back on this one.
I find it immensely humorous that people think there is hidden or secret data lurking about, being held by the prophets of global warming just to hide their hoax and enable them to carry out their evil and nefarious plans while getting filthy rich in the process. This issue is real. The data is real, and guess what? ANYONE can get access to the data, the code, etc. – most everything but the few datasets that require purchasing. But hey, you can fork over some coin for that as well. Nobody is hiding anything. These are scientists, doing what they do. Nothing more. Any assumption of wrongdoing in those illegally obtained emails is unfounded. It is nothing more than what goes on in the real world of science. The computer or mobile device you are reading this on was brought to you proudly by acts of science. Give some credit to these people that work tirelessly on environmental issues that may someday save you, your children’s and/or your grandchildren’s future on this planet.
There are many more interesting conspiracy theories out there that need more attention … did you hear NASA uncovered an ancient base on the moon when they were searching for water? And I read it on the internet, so it has to be true …
Happy Thanksgiving for those in the US.
Bart Verheggen says
Jimbo (702),
It always surprises me when people think that the CERN experiment will give the final answer to the cosmic ray – climate question. Granted, it’s an important and noteworthy experiment, but surely it won’t provide the last word on it. Its results will still have to be combined with what’s already known about the various topics at hand (see https://www.realclimate.org/index.php/archives/2009/04/aerosol-effects-and-climate-part-ii-the-role-of-nucleation-and-cosmic-rays/) and withstand the test of time, e.g. through replication efforts with different means.
Since you’re so eager about the CERN results, I’m sure you have seen the recent paper discussing the initial results: http://www.atmos-chem-phys-discuss.net/9/18235/2009/acpd-9-18235-2009.html The results are rather inconclusive one could say: Some experiments suggest a role of ion induced nucleation; many don’t. As Eli mentioned, the initial set-up suffered from unexpected wall effects. Though this was not, as he suggested, because the people involved didn’t realize the importance of wall effects. The CLOUD collaboration includes many atmospheric scientists with plenty of experience in smog chamber research. I took part in the initial stages of the CLOUD project, and am very well aware of the importance of wall losses (it’s the topic I have often stressed in aerosol smog chamber research, see e.g. http://www.atmos-chem-phys.net/6/2927/2006/acp-6-2927-2006.html)
The chamber used in the initial CLOUD experiment, though, may have been contaminated by compounds encountered in previous use, which may have affected some of the results, causing some of the wall effects that Eli referred to. As described in the paper, several experiments conducted towards the end gave better results, and some of them suggested a role of ions in aerosol nucleation. Hardly earth-shattering, and still a far stretch from proving that GCR substantially influence climate (as explained in the first link).
Ron R. says
You know, people really need to see this for what it is: an unethical and politically motivated hit orchestrated just before Copenhagen. The oil companies must really be desperate, but considering their tactics in the past this should really come as no surprise.
As to the lawsuit against Gavin and the continuing Inhofe/M&M/BigOil/KingCoal harassment of climate scientists, it’s really starting to border on, well, I’ll say it, persecution imho.
What gets me is the stooges that post here and elsewhere who knowingly or unknowingly are being used as pawns by the aforementioned gang. This is just another example of people going against their own best interests. Kind of like the teapartiers, many of whom are dirt poor, raging against efforts to make health care more affordable, even using the insurance industry’s own talking points. Meanwhile many are going bankrupt and even homeless due to soaring medical costs.
http://www.webmd.com/news/20050202/medical-bills-can-lead-to-bankruptcy
About the occasional filling in of tiny bits of missing data to get a whole picture, which seems to be such a sticking point for some: correct me if I’m wrong, but I kind of look at it like this – the world, including climate science, is a big jigsaw puzzle. Thousands of pieces. Sometimes, though, a piece is missing or defective. Yet because the rest of the puzzle gives a clear picture of what’s going on, it’s fairly easy to know what’s missing.
Kind of like these puzzles:
http://bltchemistry.com/wp-content/puzzle_bush.jpg
http://www.planetdan.net/pics/misc/puzzle_dick.jpg
bielie says
Read the rest of my sentence: they did not act (as is clear in the emails) as if they wanted to comply. Phil did not say “let’s work out a way to get this crucial data, on which the future of the planet depends, out into the public domain.”
[edit – don’t put words into people’ mouths]
ccpo says
2. Rightly or wrongly, can you see why these e-mails actually put off neutral observers?
That’s the point of all this, isn’t it? It is a fairly well established fact that people respond to emotional appeals much more readily than to facts. The process all along has been to do nothing more than create doubt, and thus inaction. That’s all they want and all they need.
The scientists and the activists all need to bear the above in mind. Appeals to intellect do not, and will not, work short of absolute proof, i.e., massive disruption. The situation itself proves this. To wit, it is virtually impossible to look at the scientific evidence and conclude either that there is no warming or that the warming measured is not mostly anthropogenic. Just knowing the basics of the carbon cycle Gavin posted above is enough. So why isn’t it enough?
Ideology and, in too many cases, cash, pure and simple.
I would like to specifically ask Gavin, et al., to respond to the following:
I am fond of pointing out – because I, too, prefer logic to hyperbole – that there is no paper I am aware of that has been through peer review, publication, and post-publication review that calls into question any aspect of the essentials of the science supporting a strong anthropogenic component to climate change, at this time warming.
(Note this is not meant to imply every detail of climate and climate science is understood and settled.)
To your knowledge, is this accurate?
Cheers
RaymondT says
Gavin, in response to my last email:
“All numerical solutions to partial differential equations involve a numerical error. Suppose that you start your numerical simulations in 2000 and that there are x moles of water in the atmosphere. Suppose that during the run y moles of water were transferred to the atmosphere from all other sources apart from the atmosphere over a period of 100 years. Because of numerical errors, the total number of moles of water in the atmosphere will not be exactly x + y in 2100. What is the numerical error in calculating the number of moles of water in the atmosphere at 2100?” and your [Response: Machine accuracy (a few parts in 10^15). We spend a lot of time making sure that there are no mass conservation errors. The numerical errors that are an inevitable part of solving PDEs on a grid are all in the magnitude of the fluxes, not in how the fluxes are applied. Think of it as if we are discretising the volume integral of the conservation equations and using the divergence theorem to relate the conservation solely to the boundary terms in every grid box. – gavin]
The method which you use to correct for the mass conservation error may not be physical. If you have to add a flux correction to each block you are not solving the same differential equation.
[Response: We discretise the flux form of the equation, not the straight PDE. It’s the difference between solving dT/dt = dF/dx by updating T_i_new = T_i_old + dt*(dF/dx(x_i)), or integrating over each grid cell and solving int(dT/dt)_i = (T_new – T_old)/dt = F_(i+1/2) – F_(i-1/2). They are the same equation, but one discretisation preserves mass balance automatically, while the other one doesn’t. – gavin]
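To make the distinction concrete, here is a toy sketch of my own (not GISS model code, and with an assumed setup: a periodic 1-D domain, upwind fluxes, a prescribed varying wind). In the flux form every interior flux is added to one cell and subtracted from its neighbour, so the domain total telescopes away and is conserved to machine roundoff, exactly as described above.

```python
import numpy as np

# Toy 1-D finite-volume advection: dT/dt = -d(uT)/dx on a periodic grid.
nx, dx, dt = 200, 1.0, 0.4
x = np.arange(nx) * dx
u = 1.0 + 0.5 * np.sin(2 * np.pi * x / (nx * dx))  # wind, always > 0
T = np.exp(-0.5 * ((x - 100.0) / 10.0) ** 2)       # initial tracer blob
total0 = T.sum()

for _ in range(5000):
    F = u * T                                  # upwind flux leaving cell i
    T = T + (dt / dx) * (np.roll(F, 1) - F)    # flux in minus flux out

# Each flux enters the sum once with each sign, so conservation is automatic:
print(abs(T.sum() - total0) / total0)          # ~1e-14, i.e. machine level
```

A pointwise discretisation of dF/dx carries no such telescoping guarantee, which is the distinction drawn in the response above.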
Eli Rabett says
It is useful to remember in discussions of met data that quite often the weather forecasters’ offices are in a Department of Commerce (US, for example) or Agriculture, which are commercially, not scientifically, oriented. This has always been a tension for NOAA (as well as NIST).
Rod B says
Ray Ladbury (695), in a surprising sort of way (and without the vitriol), you have a good point. It seems to me that most of the folks demanding all of the reams of raw data couldn’t do much with it if they got it.
Though, for the record and to reiterate an earlier point, just because I don’t have my own multi-million dollar grant, a large supercomputer, and haven’t coded my own GCM doesn’t mean I’m not allowed to question some of the science.
Chuck L says
DOGGONE IT!!! DO YOU PEOPLE ABSOTIVELY, POSILUTELY, HAVE TO BE SO FREAKIN’ DRY? YOU NEED TO GET OUT HERE AND FIGHT!
Eli Rabett says
Ahem. We each do our part.
bielie says
[edit – as above]
Most local weather data is in the public domain anyway. Farmers are fed weather data by government agencies worldwide. Shipping companies get weather data for free from the same agencies. To use Phil’s own term: You are “hiding behind…” whatever. Read my other post.
[Response: See Bjornsson’s statement above. – gavin]
You miss my point. If the allegations are true that the peer review system was subverted by your group, your response would be exactly the same as if the papers were really worthless and the review system was subverted by the skeptics. You cannot win the argument by saying “those papers were crap anyway”.
The whole purpose of peer review is that there should be diversity of views. That is why scientific papers are reviewed by panels, and not single persons. Believe me, I have experience of academic rivalry, and of egos trying to prevent other people from publishing. An intact peer review system prevents it. If a specific editor who reviewed an article felt it was worth publishing, it is completely unethical to boycott that editor. The paper, if published, will rise or fall on its own scientific merit. And remember, peers are not always right. Much of the peer-reviewed literature in my field (medicine) is crap (duff) anyway. But the system has to publish some crap so that the true breakout science can rise to the top. Think Helicobacter pylori! The fact is: this was unethical interference in the system. You are guilty of exactly the crime you charged that editor with. If you cannot see that, you are ethically challenged.
[Response: Ridiculous. Peer-review does not demand that bad papers be published so that good stuff stands out. The good stuff stands out anyway. And how, precisely, have I interfered with the peer review system? (Cite please?). Discussing whether journals have good peer review systems is a perfectly valid discussion, and journals’ reputations are part of why you decide to publish there or not. Take E&E – no peer-review to speak of, terrible reputation, => very few serious submissions. That is not ‘unethical’, it is the scientific marketplace. – gavin]
I promise you the trees did not decide on New Year’s Eve 1959 to start growing in a different way. I submit that the answer is either a) or b) as I suggested. (Hey, at least I suggest a solution, unlike the CRU!) Both explanations have serious implications for AGW. The problem is, you cannot see it because you are too emotionally invested in the outcome, so you have to “hide the decline”.
[Response: Again, “I” have done nothing. And since I have never been an author on any paleo-reconstruction, I don’t have a personal stake in this in any case. But you are wrong in every particular. AGW does not need trees to be demonstrated. 90% of the detection and attribution on this issue is done with 20th C instrumental data. How many trees are needed to show stratospheric cooling? – gavin]
Then why is it a “Travesty” that you can’t explain it?
[Response: Read Trenberth’s paper linked above. – gavin]
dhogaza says
The FOI rejection explicitly states that CRU is working to amend distribution agreements so they can release the 2% or so of raw data that they’re currently unable to provide.
This refutes your claim.
Dendrite says
A few low-key, non-technical observations on UK laboratory customs and word use from a biologist who worked for many years in a neuroscience lab, analysing noisy data sets and measuring awkward structures:
1. ‘Tricks’. Any new piece of maths or computer code that made a procedure faster or more accurate was referred to as a ‘trick’. A ‘neat trick’ or a ‘clever trick’ referred to a particularly elegant or admirable piece of work.
‘Trick’ could also mean a crucial, make-or-break step in a technique, as in: ‘For getting a good seal, the trick is to keep the tip of the micropipette absolutely clean.’
2.’Fudge factors’. Our work required the use of several correction factors to overcome unavoidable and well-documented errors in the original measurements. These had technical names, of course, but in lab discussions (and in remarks or comments in computer code) they were often referred to as ‘fudge factors’, ‘fiddle factors’, ‘wiggle or wriggle factors’.
I actively encouraged the use of such terms, for several reasons. They fostered an ethos in the lab of self-criticism and self-scepticism which I regard as entirely healthy. In computer code, they drew the attention of new or unfamiliar users to steps in the analysis that warranted close scrutiny. I felt it was good to wear our problems and weaknesses on our sleeves.
Would these expressions be embarrassing in the hands of an unscrupulous hacker or rival – undoubtedly yes. Were they evidence of falsification of data – absolutely not. On the contrary, they were intended to make sure that the assumptions and corrections in our analyses were never concealed, forgotten or overlooked.
I’d like to think that if I had intended falsification, I wouldn’t have been so dim as to advertise it quite so clearly.
Dendrite.
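A minimal sketch of the kind of conspicuous labelling Dendrite describes (my own illustration with hypothetical names and a made-up correction value, not code from any actual lab):

```python
# 'Fudge factor': corrects for well-documented tissue shrinkage during
# fixation. The value 1.18 is hypothetical, for illustration only.
# Named loudly so that new users scrutinise it rather than overlook it.
SHRINKAGE_FUDGE_FACTOR = 1.18

def corrected_length_um(measured_um: float) -> float:
    """Apply the shrinkage fudge factor to a raw length measurement (um)."""
    return measured_um * SHRINKAGE_FUDGE_FACTOR

print(corrected_length_um(100.0))  # 118.0
```

The point is the same one Dendrite makes: the correction is advertised in the name and the comment, not buried.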
simon abingdon says
When I was a computer programmer (of commercial – not scientific – applications) in the 1960s it was gradually realised that use of the “GO TO” instruction led to “spaghetti” code that by dint of its impenetrability was virtually impossible to debug thoroughly. The correct working of such programs could not be assured other than in response to necessarily limited sets of specimen data. See “Go To Statement Considered Harmful” by Edsger W. Dijkstra (Communications of the ACM, Vol. 11, No. 3, March 1968, pp. 147-148).
Gavin, can you assure me that times have moved on similarly in the scientific community and that GCM programs are properly structured into hierarchies of “IF THEN ELSE” and “DO UNTIL” subroutines, each properly annotated in terms of its input, output and function?
Since we’re talking about programming practice that has been accepted for 40 years I’m sure my misgivings must be groundless. I should still like your reassurance though.
simon abingdon
[Response: It’s a work in progress. – gavin]
EL says
636 – “I don’t know what to make of the code he is analyzing, and I’d welcome a response from someone at RC.”
His position is indefensible. He has no idea of the purpose or usage of the code, nor does he have any clue about how the results were presented. For all he knows, the code was used to create an artistic cover for a book.
Computer scientists and applied computer scientists (programmers) are two very different animals. Many programmers are autodidacts, and the rest mostly come from applied computer science institutions. Programmers tend to confuse mathematical models with their idea of programming. They think climate scientists just sit down and create climate models on computers as they would a video game.
As for his comments on peer review, computer science has yet to establish peer review (though there are quite a few trying to push it). In fact, most research in computer science is published through conferences.
Hank Roberts says
FHSIV says: 26 November 2009 at 12:13 AM
> his eugenic solution
You’re likely confusing birth control, or population control, or evolution, or biodiversity with “eugenics” — there’s a long sorry history of confusing them:
E.g. look at http://www.google.com/search?q=Darwin+eugenics+%22Peter+Raven%22
Hank Roberts says
> FOI requests
For context, is there anywhere a list of the FOI requests (and the content to see if they were different or overlapping) that were filed asking for CRU information? It’d help to have the requests, dates, who made the requests, and the responses. Then people could judge their worth.
Eli Rabett says
Bart, to be honest, I have real trouble believing a lot of smog chamber experiments on the slow end of things. The interaction of the aerosols with the walls has to be a lot more chemically and physically complex than what goes on in the center of the chamber and you can’t keep the aerosols away from the wall. A lot of what is claimed in the literature for limiting these effects comes down to magic. YMMV but think of the definition of an ideal solution, one in which the solute (the chamber) has the same interaction with the solution (the aerosols) as the solution has with itself. You can get close, but differences accumulate over time and almost by definition CLOUD will be looking at long time issues.
Even worse, could you explain to Eli what the “new” physics/chemistry coming out of this experiment will be that depends on using high energy beams. Everyone knows that CR ionize the molecules in the atmosphere. We even have a good idea of the ionization efficiency as a function of the energy and identity of the CR and the atoms. We even (should, this seems clear although I’m not an expert here) have measurements of the types of ions formed by CR, and the various collision processes that take place until they are thermalized. This is all short-time stuff. So how will the growth process differ for cosmic-ray-formed aerosols and others formed by ionization, such as by lightning? If the answer is no difference, CLOUD is a political boondoggle. I’ve read the CLOUD proposals and progress reports and see nothing about such issues beyond simple handwaving.
Ray Ladbury says
Rod B., I would not dream of taking away anyone’s right to question the science, but is it too much to ask that they have even a nodding acquaintance with the facts, evidence and scientific methodology first?
In these 2 threads we have seen people question:
1) whether the planet is warming, despite 4 separate global temperature reconstructions based on very different methods that all show warming, despite trillions of tons of ice lost globally, despite tons of phenological evidence… all showing warming.
2) whether the warming is anomalous in history, despite dozens of independent paleoclimate reconstructions, despite receding glaciers, despite borehole records, stalactite studies, and even a historical record showing earlier and earlier dates of first bloom of cherry blossoms on Mt. Fuji.
3) whether CO2 can account for the warming, despite the fact that we know CO2 is a greenhouse gas, despite simultaneous tropospheric warming and stratospheric cooling, and despite about 10 separate lines of evidence all favoring a CO2 sensitivity around 3 degrees per doubling.
4) whether we are responsible for the increased CO2, despite the fact that humans have produced more than enough CO2 to account for the increase and despite the fact that isotopic changes in atmospheric carbon show the increase must have come from a fossil source.
If people are ignorant of this and thousands of other pieces of evidence, or if they refuse to consider it, it is hard for me to see how they can contribute much value to a scientific discussion. If they cannot be persuaded by all of this evidence that anthropogenic causation is a reasonable conclusion, it is difficult to believe that any amount of evidence would persuade them.
As for myself, all I would require to abandon anthropogenic causation would be a theory that explained the above evidence at least as well as the current consensus theory and did not imply that adding CO2 would cause significant warming. Still waitin’.
RaymondT says
Gavin, suppose that you set all your feedbacks to zero and assume that the solar heat flux is constant: if you ran your model for 500 or 1,000 years, would your temperatures drift?
[Response: You can’t set your feedbacks to zero; they are fundamental to the whole thing (how can I turn off evaporation and still have a reasonable climate?). But you don’t need to. If you run the models out with no change in solar flux, or CO2 or whatever, then yes, the temperatures are stable (in a statistical sense). 500 years is about the minimum time you need to equilibrate the deep ocean, but residual drift after that is very small. – gavin]
Heather says
Thank you, RC, for the excellent debunking of what was posted. Your eloquence takes all the steam out of the hackers’ allegations, at least for anyone equipped with simple common sense and reason.
I have a challenge for the hackers: why don’t you hack into the proceedings of the Catholic Church while the archbishops and cardinals debated how to handle the pedophilia issue, and post all that private correspondence? (Remember to include notes in personnel files regarding transfers of pedophilic priests). And then, when you’ve done that, please do the same with our Congressional representatives and their staff. Please be sure to cherry-pick the emails and limit what you post to the most potentially damning items.
Phil M says
Re:
http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-6-10.jpg
– it’s good that the ‘hide the decline’ algorithm is no longer used…
– data is data, let it speak for itself…
– since this thread isn’t about how great the HS is, I’ll leave it there
– other than to say, it certainly looks to me like the proxies have real trouble in tracking temps above a certain threshold, which happens to be the 0.0C line…
– so it would be difficult to say much about the MWP from these plots other than place it at 1000 years ago…
– but that is OT for this thread…
John says
Regarding the hockey stick graph, am I reading it incorrectly, or do the unbelievably wide confidence intervals surrounding the reconstructed temps going back more than a few hundred years render that portion of the analysis nearly invalid, in that at any given time the true value could range from the lines provided (below current temps) to at or above them (or well below)?
I.e., the further back you go, the more the data becomes subject to the possibility that it may vary by what looks to be several standard deviations from the mean, and therefore its comparison to temps from this and the last century is almost meaningless?
I would appreciate a better explanation of the display and significance of those confidence intervals.
Bart Verheggen says
Eli,
The nucleation studies which are one of the aims of CLOUD are short-time experiments. I am aware of long-time experiments into aerosol ageing processes that look much more into chemical transformation over time. Wall loss is absolutely important there, and it potentially interferes with a lot of interpretations. For chemical characterization, however, it may be less important. For those issues where it is important we just have to account for it as best we can. Not much different in principle from accounting for the meteorology as best we can when interpreting atmospheric measurements. Actually, wall losses are simple in comparison, one could argue (not to trivialize things, though).
As to new physics: Since GCR are the primary source of ionization in the atmosphere, and since the mechanism and contribution of ion induced nucleation is still very uncertain, there is a lot of value in these kind of experiments even apart from any climate related issues. (That’s actually how I got involved; I later found out about the climate connection hypothesis.) I am more skeptical about the prospects of a smog chamber study elucidating the whole chain of climate relevant processes; the nucleation is perhaps the ‘easy’ part of that whole chain.
David Gordon says
Gavin, thanks for your last answer, I am still following up on it.
If I may make a suggestion, were I you I would bar further ad hominem types of posts on this board, they make this site look very political and not scientific in nature. Name calling doesn’t help anybody who is trying to figure out what is going on; rather it gets in the way and dilutes the quality conversations that are conducted in between. This goes for name-calling on both sides – ‘deniers’ and ‘alarmists’, ‘right wing’ and ‘left wing’, and so on and so forth. If I wanted to see political poo-tossing there are many places where that can be found; but there are few where serious discussion of the science can be found.
iain says
Given that many (not absolutely all) AGW skeptics seem to be even more adept at malicious misinterpretation of the English language than they claim “the team” to be at misinterpretation/massaging of the statistics (and that’s even without the lunatic fringe), may I take the opportunity of that thanksgiving thing on your side of the water to echo the profound thanks of many others for the calm and informative moderation and editing of these hack-related comment strings. That’s not the same thing as saying there are no problems in the hacked e-mails to be considered. Just that anyone with a low BS-tolerance level owes you a debt of gratitude for not allowing this thing to turn into the usual exchange of ignorance-fueled hot air and agenda-generated verbal toxins from either side. Thanks.
SecularAnimist says
What we are seeing here is the fruit of a generation-long, multi-multi-million dollar media campaign to create an audience of “Ditto-Heads” who have been systematically brainwashed to unquestioningly believe whatever they are told by the so-called “right wing” media and to regard all other sources of information as not only suspect, biased and untrustworthy, but as promulgators of deliberate, malicious, un-American “liberal” lies.
Once created, this symbiotic system of the “right-wing” media and its audience of gullible “Ditto-Head” mental slaves, can be readily enlisted by wealthy, powerful corporate interests who need a “mass movement” to advance or obstruct political agendas to their benefit. Whatever message they wish to transmit to the Ditto-Head grassroots — for example, denial of the reality of anthropogenic global warming and “Swift Boat” like attacks on climate scientists — they simply brand it “conservative” and broadcast it through the “conservative” media, and all the Ditto-Heads obediently and unquestioningly take it up.
There is no “ideological” basis for denial of AGW. There is only the naked greed of those who stand to rake in trillions of dollars in profit from the continued business-as-usual consumption of fossil fuels and are determined to do so at any cost to humanity and life on Earth. The “ideology” and the laughably misnamed “skepticism” is just corporate-sponsored, scripted, focus-group-tested fodder for the Ditto-Heads.
simon abingdon says
#728 [Response: It’s a work in progress. – gavin]
Sorry Gavin, it won’t wash. You have to structure your programs properly from the outset. You can’t incorporate good practice retrospectively.
Every “spaghetti” program has untold errors waiting to rear their ugly heads (or remain forever unsuspected while erroneous results are acted upon with misplaced confidence).
[Response: Thanks! I’d never have thought of that. All legacy code should be tossed out immediately, and then everyone can wait 10 years while we start again from scratch. Brilliant! – gavin]
Anand Rajan KD says
“I guess I find it utterly ironic, Anand, that we would be beside ourselves with glee if we found the tiniest bit of protolife on another, otherwise sterile planet, while here on earth we are rapidly causing the decimation of whole ecosystems. Forests and species turned into mere numbers in some fatcat’s Swiss bank account, ground up on a one-way trip to the dump.”
Why would you be happy if there were life on other planets? A common misconception is to view humankind as the custodian of all life on earth. Why not look at life for what it is – just a special case of death? There is nothing special or wondrous about life and life forms; it’s just chemical reactions.
Don’t be so happy with life. It is life that caused the warming in the first place.
If you want to cool, do your bit and move on. Drive a Prius or something. Remember every day you breathe and move, “you have just dined, and however scrupulously the slaughter-house is concealed in the graceful distance of miles, there is complicity,…”
I join in the abhorrence of any such callous and ignorant attitude. It is a sad state of affairs that reason should be abandoned to such an extent of inconsideration in discourse on such matters of import.
How have any of my comments shown ignorance? I asked a simple question as to why the climatologists wouldn’t want the globe to warm up. I got the answer that they have data and knowledge that predict it, and that it is all going to be a really bad thing to happen.
I said three things:
1) It is not for you [climate scientist] to declare what might happen in the future to be *bad* for humanity. Much less for other species.
2) It is most certainly not a wise thing to declare that, given the fact that you are generating the data that helps you say that.
3) Because you don’t want the globe to warm up, there is a possibility that you might have cooked the data to suggest warming so as to support action to prevent it.
George Monbiot is absolutely right. If you are a climatologist you have to remember that big oil has all the money; you only have your integrity. Leaving out all the banter in the emails (which contains lots of subtle incriminating details which we all would love to gloss over), the desire to delete data to avoid FOI requests is by itself enough to ask for Phil Jones’ head. It is not to save his career because he’s such a nice guy and all that, but to save the reputation of climate science. The fact that many of the AGW proponents and scientists have not stepped up and asked for the same indicates the incredible naiveté of the field. And their herd mentality.
Could it be because climatology is a young field, and this is its first major challenge?
You guys think your reputations are unquestionable because you are ‘working to save the earth’? Looks like it is not just the polar bears that are standing on thin ice.
- “But Nature is no sentimentalist -…we must see that the world is rough and surly, and will not mind drowning a man or a woman…”
- “But these shocks and ruins are less destructive to us than the stealthy power of other laws which act on us daily. An expense of ends to means is fate;-organization tyrannizing over character.”
-Emerson
Gavin, please do not delete. I believe I should be allowed to respond to posts that address me.
simon abingdon says
#742 [Response: Thanks! I’d never have thought of that. All legacy code should be tossed out immediately, and then everyone can wait 10 years while we start again from scratch. Brilliant! – gavin]
Sorry if I touched a raw nerve.
simon
Phil M says
Response: Thanks! I’d never have thought of that. All legacy code should be tossed out immediately, and then everyone can wait 10 years while we start again from scratch. Brilliant! – gavin]
– good idea!
– move over to C++ !
– fortran always looks so ugly!
:o)
manacker says
Forget it Heather (735).
Trying to slam the Catholic Church to get the eyes off of this scandal won’t work, although they probably have plenty of skeletons in the closet, as well.
The only thing that will work now is a few mea culpas and total transparency.
Denial and arrogance will only make things worse.
Max
dhogaza says
Bull. As a professional software engineer with 40 years of experience, I’ll stand by that one-word evaluation.
Poorly-structured code is harder to read, harder to debug, harder to maintain but none of this makes it *impossible* for poorly-structured code to work properly.
Mark Sawusch says
correction to the first question asked in my post #662:
1. Why didn’t the papers reveal the “fudge factor”(s) – not my words, the programmer’s – used to produce the calibration of tree ring density? They happen to show an almost exponential rise post-1939 (granted, level with the highest “fudge factor” between 1979-1999). I find no justification or explanation of this.
I made at least 3 errors interpreting the code in briffa_sep98_d.pro per my original post (only part of which was allowed to be posted here):
Errors 1 & 2: the program data sets actually appear to start in 1400, with the first containing all data 1400-1903. So here are the 19 remaining consecutive data subsets by first year, with the “fudge factors” and the “fudge factors” × 0.75, which is really what was used:
Year    Fudge factor    Fudge factor * 0.75
1400     0.0              0.0
1904     0.0              0.0
1909     0.0              0.0
1914     0.0              0.0
1919     0.0              0.0
1924    -0.1             -0.075
1929    -0.25            -0.1875
1934    -0.3             -0.225
1939     0.0              0.0
1944    -0.1             -0.075
1949     0.3              0.225
1954     0.8              0.6
1959     1.2              0.9
1964     1.7              1.275
1969     2.5              1.875
1974     2.6              1.95
1979     2.6              1.95
1984     2.6              1.95
1989     2.6              1.95
1994     2.6              1.95
3. The program does print only the “Northern Hemisphere MXD” without the “VERY ARTIFICIAL CORRECTION FOR THE DECLINE.” The command to plot the “fudged” data is commented out. But again, referring to the text in the second paper admitting that this is a “rather” “ad hoc” approach, I ask my question #3: although the 2 papers only mention “adjusting” the data for purposes of obtaining a “final calibration”, is it not true that when a “VERY ARTIFICIAL” adjustment is used to create a “VERY ARTIFICIAL” calibration, and this “VERY ARTIFICIAL” calibration is then applied to the raw data, what you end up with is “VERY ARTIFICIAL” data? I.e., GIGO? Just asking.
RaymondT says
Gavin, thank you for your answers. Another question I have is on the variability of the temperature and precipitation when you run the models. I understand that in climate models you are mostly interested in average behaviours. When you run the climate models, what is the maximum temperature or precipitation that you can reach? Do you get into situations where you have to reject a numerical solution because it leads to temperatures which are too high or too low?
[Response: Not in the standard models. Sometimes if you are trying something new, you can put in bugs that cause problems, but obviously that is non-physical, and goes away when you fix the bug. – gavin]