Guest commentary by Tamino
Update: Another review of the book has been published by Alistair McIntosh in the Scottish Review of Books (scroll down about 25% through the page to find McIntosh’s review)
Update #2 (8/19/10): The Guardian has now weighed in as well.
If you don’t know much about climate science, or about the details of the controversy over the “hockey stick,” then A. W. Montford’s book The Hockey Stick Illusion: Climategate and the Corruption of Science might persuade you that not only the hockey stick, but all of modern climate science, is a fraud perpetrated by a massive conspiracy of climate scientists and politicians, in order to guarantee an unending supply of research funding and political power. That idea gets planted early, in the 6th paragraph of chapter 1.
The chief focus is the original hockey stick, a reconstruction of past temperature for the northern hemisphere covering the last 600 years by Mike Mann, Ray Bradley, and Malcolm Hughes (1998, Nature, 392, 779, doi:10.1038/33859, available here), hereafter called “MBH98” (the reconstruction was later extended back to a thousand years by Mann et al., 1999, or “MBH99”). The reconstruction was based on proxy data, most of which are not direct temperature measurements but may be indicative of temperature. To piece together past temperature, MBH98 estimated the relationships between the proxies and observed temperatures in the 20th century, checked the validity of the relationships using observed temperatures in the latter half of the 19th century, then used the relationships to estimate temperatures as far back as 1400. The reconstruction all the way back to the year 1400 used 22 proxy data series, although some of the 22 were combinations of larger numbers of proxy series by a method known as “principal components analysis” (hereafter called “PCA”–see here). For later centuries, even more proxy series were used. The result was that temperatures had risen rapidly in the 20th century compared to the preceding 5 centuries. The sharp “blade” of 20th-century rise compared to the flat “handle” of the 15th-19th centuries was reminiscent of a “hockey stick” — giving rise to the name for this temperature history.
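To make the calibrate/verify/reconstruct procedure concrete, here is a minimal sketch of the general logic described above. It is not the actual MBH98 algorithm (which was a considerably more elaborate climate-field reconstruction); the array names, the use of simple least squares, and the RE verification score are simplifying assumptions for illustration only.

```python
import numpy as np

def calibrate_verify_reconstruct(proxies, temps_obs, n_calib, n_verify):
    """
    proxies:   (n_years, n_proxies) proxy matrix, oldest year first
    temps_obs: observed temperatures for the final n_verify + n_calib years
               (verification interval first, then calibration interval)
    """
    n_years = proxies.shape[0]
    X = np.column_stack([np.ones(n_years), proxies])      # add an intercept

    # 1. Calibrate: fit temperature on the proxies over the calibration interval.
    beta, *_ = np.linalg.lstsq(X[-n_calib:], temps_obs[-n_calib:], rcond=None)

    # 2. Verify: check skill on the held-back (earlier) verification interval,
    #    here with an RE ("reduction of error") score relative to a forecast
    #    of the calibration-period mean.
    pred = X[-(n_calib + n_verify):-n_calib] @ beta
    obs = temps_obs[:n_verify]
    RE = 1.0 - np.sum((obs - pred) ** 2) / np.sum(
        (obs - temps_obs[-n_calib:].mean()) ** 2)

    # 3. Reconstruct: apply the fitted relationship to the whole proxy record.
    return X @ beta, RE
```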
But if you do know something about climate science and the politically motivated controversy around it, you might be able to see that reality is the opposite of the way Montford paints it. In fact Montford goes so far over the top that if you’re a knowledgeable and thoughtful reader, it eventually dawns on you that the real goal of those whose story Montford tells is not to understand past climate, it’s to destroy the hockey stick by any means necessary.
Montford’s hero is Steve McIntyre, portrayed as a tireless, selfless, unimpeachable seeker of truth whose only character flaw is that he’s just too polite. McIntyre, so the story goes, is looking for answers from only the purest motives but uncovers a web of deceit designed to affirm foregone conclusions whether they’re so or not — that humankind is creating dangerous climate change, the likes of which hasn’t been seen for at least a thousand or two years. McIntyre and his collaborator Ross McKitrick made it their mission to get rid of anything resembling a hockey stick in the MBH98 (and any other) reconstruction of past temperature.
Principal Components
For instance: one of the proxy series used as far back as the year 1400 was NOAMERPC1, the 1st “principal component” (PC1) used to represent patterns in a series of 70 tree-ring data sets from North America; this proxy series strongly resembles a hockey stick. McIntyre & McKitrick (hereafter called “MM”) claimed that the PCA used by MBH98 wasn’t valid because they had used a different “centering” convention than is customary. It’s customary to subtract the average value from each data series as the first step of computing PCA, but MBH98 had subtracted the average value during the 20th century. When MM applied PCA to the North American tree-ring series but centered the data in the usual way, then retained 2 PC series just as MBH98 had, lo and behold — the hockey-stick-shaped PC wasn’t among them! One hockey stick gone.
Or so they claimed. In fact the hockey-stick shaped PC was still there, but it was no longer the strongest PC (PC1), it was now only 4th-strongest (PC4). This raises the question, how many PCs should be included from such an analysis? MBH98 had originally included two PC series from this analysis because that’s the number indicated by a standard “selection rule” for PC analysis (read about it here).
MM used the standard centering convention, but applied no selection rule — they just imitated MBH98 by including 2 PC series, and since the hockey stick wasn’t one of those 2, that was good enough for them. But applying the standard selection rules to the PCA analysis of MM indicates that you should include five PC series, and the hockey-stick shaped PC is among them (at #4). Whether you use the MBH98 non-standard centering, or standard centering, the hockey-stick shaped PC must still be included in the analysis.
It was also pointed out (by Peter Huybers) that MM hadn’t applied “standard” PCA either. They used a standard centering but hadn’t normalized the data series. The 2 PC series that were #1 and #2 in the analysis of MBH98 became #2 and #1 with normalized PCA, and both should unquestionably be included by standard selection rules. Again, whether you use MBH non-standard centering, MM standard centering without normalization, or fully “standard” centering and normalization, the hockey-stick shaped PC must still be included in the analysis.
In reply, MM complained that the MBH98 PC1 (the hockey-stick shaped one) wasn’t PC1 in the completely standard analysis, that normalization wasn’t required for the analysis, and that “Preisendorfer’s rule N” (the selection rule used by MBH98) wasn’t the “industry standard” MBH claimed it to be. Montford even goes so far as to rattle off a list of potential selection rules referred to in the scientific literature, to give the impression that the MBH98 choice isn’t “automatic,” but the salient point which emerges from such a list is that MM never used any selection rules — at least, none that are published in the literature.
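For readers who want to see the mechanics for themselves, here is a toy sketch of the two centering conventions and of a variance-based retention rule. It uses synthetic data and is not the MBH98, MM, or Huybers code; the network size, the noise model, and the 50%-of-variance rule (a stand-in for Preisendorfer's Rule N) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_series = 581, 70                  # 1400-1980, 70 synthetic series
signal = np.r_[np.zeros(n_years - 100), np.linspace(0.0, 1.0, 100)]

# Synthetic network: AR(1) noise everywhere, plus a late-rising "hockey stick"
# signal in a subset of the series.
X = np.empty((n_years, n_series))
for j in range(n_series):
    noise = np.zeros(n_years)
    for i in range(1, n_years):
        noise[i] = 0.4 * noise[i - 1] + rng.normal()
    X[:, j] = (1.0 if j < 15 else 0.0) * signal + noise

def pca(data, center="full", normalize=True):
    """SVD-based PCA with full-period or calibration-period (last 79 yr) centering."""
    ref = data[-79:] if center == "calibration" else data
    Z = data - ref.mean(axis=0)
    if normalize:
        Z = Z / ref.std(axis=0, ddof=1)
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    return U * s, s**2 / np.sum(s**2)        # PC time series, variance fractions

for conv in ("calibration", "full"):
    pcs, frac = pca(X, center=conv)
    # Toy retention rule: keep PCs until half the variance is retained
    # (Preisendorfer's Rule N instead compares eigenvalues against noise).
    keep = int(np.searchsorted(np.cumsum(frac), 0.5)) + 1
    corr = [abs(np.corrcoef(pcs[:, k], signal)[0, 1]) for k in range(keep)]
    print(f"{conv} centering: retain {keep} PCs; "
          f"signal loads most strongly on PC{int(np.argmax(corr)) + 1}")
```

The point of the exercise is not the particular numbers but the procedure: whichever centering you choose, the retention decision has to come from a stated rule applied to the variance spectrum, not from eyeballing which PCs you would rather keep.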
The truth is that whichever version of PCA you use, the hockey-stick shaped PC is one of the statistically significant patterns. There’s a reason for that: the hockey-stick shaped pattern is in the data, and it’s not just noise, it’s signal. Montford’s book makes it obvious that MM actually do have a selection rule of their own devising: if it looks like a hockey stick, get rid of it.
The PCA dispute is a prime example of a recurring McIntyre/Montford theme: that the hockey stick depends critically on some element or factor, and when that’s taken away the whole structure collapses. The implication that the hockey stick depends on the centering convention used in the MBH98 PCA analysis makes a very persuasive “Aha — gotcha!” argument. Too bad it’s just not true.
Different, yes. Completely, no.
As another example, Montford makes the claim that if you eliminate just two of the proxies used for the MBH98 reconstruction since 1400, the Stahle and NOAMER PC1 series, “you got a completely different result — the Medieval Warm Period magically reappeared and suddenly the modern warming didn’t look quite so frightening.” That argument is sure to sell to those who haven’t tried the computation for themselves. But I have. I computed my own reconstructions by multiple regression, first using all 22 proxy series in the original MBH98 analysis, then excluding the Stahle and NOAMER PC1 series. Here’s the result with all 22 proxies (the thick line is a 10-year moving average):
Here it is with just 20 proxies:
Finally, here are the 10-year moving averages for both cases, and for the instrumental record:
Certainly the result is different — how could it not be, using different data? — but calling it “completely different” is just plain wrong. Yes, the pre-20th century is warmer with the 15th century a wee bit warmer still — but again, how could it not be when eliminating two hand-picked proxy series for the sole purpose of denying the unprecedented nature of modern warming? Yet even allowing this cherry-picking of proxies is still not enough to accomplish McIntyre’s purpose; preceding centuries still don’t come close to the late-20th century warming. In spite of Montford’s claims, it’s still a hockey stick.
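For anyone who wants to repeat this kind of comparison, here is a short sketch of the "drop two proxies and compare" step, reusing the calibrate_verify_reconstruct function sketched earlier in the post. The column indices for the Stahle and NOAMER PC1 series are placeholders, and the 10-year smoothing mirrors the thick lines in the figures.

```python
import numpy as np
# Assumes calibrate_verify_reconstruct() from the earlier sketch is in scope.

def compare_without(proxies, temps_obs, n_calib, n_verify, drop_cols):
    """Reconstruct with the full network and again with selected proxies removed."""
    full, _ = calibrate_verify_reconstruct(proxies, temps_obs, n_calib, n_verify)
    reduced, _ = calibrate_verify_reconstruct(
        np.delete(proxies, drop_cols, axis=1), temps_obs, n_calib, n_verify)
    return full, reduced

def moving_average(x, window=10):
    """Simple 10-year moving average for plotting, as in the figures above."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# e.g.  all22, just20 = compare_without(proxies, temps_obs, 79, 48,
#                                       drop_cols=[i_stahle, i_noamer_pc1])
```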
Beyond Reason
Another of McIntyre’s targets was the Gaspe series, referred to in the MBH98 data as “treeline-11.” It just might be the most hockey-stick shaped proxy of all. This particular series doesn’t extend all the way back to the year 1400, it doesn’t start until 1404, so MBH98 had extended the series back four years by persistence — taking the earliest value and repeating it for the preceding four years. This is not at all an unusual practice, and — let’s face facts folks — extending 4 years out of a nearly 600-year record on one out of 22 proxies isn’t going to change things much. But McIntyre objected that the entire Gaspe series had to be eliminated because it didn’t extend all the way back to 1400. This argument is downright ludicrous — what it really tells us is that McIntyre & McKitrick are less interested in reconstructing past temperature than in killing anything that looks like a hockey stick.
McIntyre also objected that other series had been filled in by persistence, not on the early end but on the late end, to bring them up to the year 1980 (the last year of the MBH98 reconstruction). Again, this is not a reasonable argument. Mann responded by simply computing the reconstruction you get if you start at 1404 and end at 1972 so you don’t have to do any infilling at all. The result: a hockey stick.
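In code, infilling by persistence is about as simple as an operation gets. A trivial sketch (the values shown are stand-ins, not the actual Gaspe data):

```python
import numpy as np

def extend_by_persistence(series, n_before=0, n_after=0):
    """Repeat the earliest value n_before times and the latest value n_after times."""
    return np.concatenate([np.full(n_before, series[0]),
                           series,
                           np.full(n_after, series[-1])])

gaspe_from_1404 = np.array([0.3, 0.1, -0.2, 0.4])     # stand-in values, 1404-1407
gaspe_from_1400 = extend_by_persistence(gaspe_from_1404, n_before=4)
```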
Again, we have another example of Montford implying that some single element is both faulty and crucial. Without nonstandard PCA the hockey stick falls apart! Without the Stahle and NOAMER PC1 data series the hockey stick falls apart! Without the Gaspe series the hockey stick falls apart! Without bristlecone pine tree rings the hockey stick falls apart! It’s all very persuasive, especially to the conspiracy-minded, but the truth is that the hockey stick depends on none of these elements. You get a hockey stick with standard PCA, in fact you get a hockey stick using no PCA at all. Remove the NOAMER PC1 and Stahle series, you’re left with a hockey stick. Remove the Gaspe series, it’s still a hockey stick.
As a great deal of other research has shown, you can even reconstruct past temperature without bristlecone pine tree rings, or without any tree ring data at all, resulting in: a hockey stick. It also shows, consistently, that nobody is trying to “get rid of the medieval warm period” or “flatten out the little ice age” since those are features of all reconstructions of the last 1000 to 2000 years. What paleoclimate researchers are trying to do is make objective estimates of how warm and how cold those past centuries were. The consistent answer is, not as warm as the last century and not nearly as warm as right now.
The hockey stick is so thoroughly imprinted on the actual data that what’s truly impressive is how many things you have to get rid of to eliminate it. There’s a scientific term for results which are so strong and so resistant to changes in data and methods: robust.
Cynical Indeed
Montford doesn’t just criticize hockey-stick shaped proxies, he bends over backwards to level every criticism conceivable. For instance, one of the proxy series was estimated summer temperature in central England taken from an earlier study by Bradley and Jones (1993, the Holocene, 3, 367-376). It’s true that a better choice for central England would have been the central England temperature time series (CETR), which is an instrumental record covering the full year rather than just summertime. The CETR also shows a stronger hockey-stick shape than the central England series used by MBH98, in part because it includes earlier data (from the late 17th century) than the Bradley and Jones dataset. Yet Montford sees fit to criticize their choice, saying “Cynical observers might, however, have noticed that the late seventeenth century numbers for CETR were distinctly cold, so the effect of this truncation may well have been to flatten out the little ice age.”
In effect, even when MBH98 used data which weakens the difference between modern warmth and preceding centuries, they’re criticized for it. Cynical indeed.
Face-Palm
The willingness of Montford and McIntyre to level any criticism which might discredit the hockey stick just might reach its zenith in a criticism which Montford repeats, but which is so nonsensical that one can hardly resist the proverbial “face-palm.” Montford more than once complains that hockey-stick shaped proxies dominate climate reconstructions — unfairly, he implies — because they correlate well to temperature.
Duh.
Guilty
Criticism of MBH98 isn’t restricted to claims of incorrect data and analysis; Montford and McIntyre also see deliberate deception everywhere they look. This is almost comically illustrated by Montford’s comments about an email from Malcolm Hughes to Mike Mann (emphasis added by Montford):
Mike — the only one of the new S.American chronologies I just sent you that already appears in the ITRDB sets you already have is [ARGE030]. You should remove this from the two ITRDB data sets, as the new version should be different (and better for our purposes).
Cheers,
Malcolm
Here’s what Montford has to say:
It was possible that there was an innocent explanation for the use of the expression “better for our purposes”, but McIntyre can hardly be blamed for wondering exactly what “purposes” the Hockey Stick authors were pursuing. A cynic might be concerned that the phrase actually had something to do with “getting rid of the Medieval Warm Period”. And if Hughes meant “more reliable”, why hadn’t he just said so?
This is nothing more than quote-mining, in order to interpret an entirely innocent turn of phrase in the most nefarious way possible. It says a great deal more about the motives and honesty of Montford and McIntyre, than about Mann, Bradley, and Hughes. The idea that MM’s so-called “correction” of MBH98 “restored the MWP” constitutes a particularly popular meme in contrarian circles, despite the fact that it is quite self-evidently nonsense: MBH98 only went back to AD 1400, while the MWP, by nearly all definitions found in the professional literature, ended at least a century earlier! Such internal contradictions in logic appear to be no impediment, however, to Montford and his ilk.
Conspiracies Everywhere
Montford also goes to great lengths to accuse a host of researchers, bloggers, and others of attempting to suppress the truth and issue personal attacks on McIntyre. The “enemies list” includes RealClimate itself, claimed to be a politically motivated mouthpiece for “Environmental Media Services,” described as a “pivotal organization in the green movement” run by David Fenton, called “one of the most influential PR people of the 20th century.” Also implicated are William Connolley for criticizing McIntyre on sci.environment and James Annan for criticizing McIntyre and McKitrick. In a telling episode of conspiracy theorizing, we are told that their “ideas had been picked up and propagated across the left-wing blogosphere.” Further conspirators, we are informed, include Brad DeLong and Tim Lambert. And of course one mustn’t omit the principal voice of RealClimate, Gavin Schmidt.
Perhaps I should feel personally honored to be included on Montford’s list of co-conspirators, because yours truly is also mentioned. According to Montford’s typical sloppy research I have styled myself as “Mann’s Bulldog.” I’ve never done so, although I find such an appellation flattering; I just hope Jim Hansen doesn’t feel slighted by the mistaken reference.
The conspiracy doesn’t end with the hockey team, climate researchers, and bloggers. It includes the editorial staff of any journal which didn’t bend over to accommodate McIntyre, including Nature and GRL which are accused of interfering with, delaying, and obstructing McIntyre’s publications.
Spy Story
The book concludes with speculation about the underhanded meaning of the emails stolen from the Climate Research Unit (CRU) in the U.K. It’s really just the same quote-mining and misinterpretation we’ve heard from many quarters of the so-called “skeptics.” Although the book came out very shortly after the CRU hack, with hardly sufficient time to investigate the truth, the temptation to use the emails for propaganda purposes was irresistible. Montford indulges in every damning speculation he can get his hands on.
Since that time, investigation has been conducted, both into the conduct of the researchers at CRU (especially Phil Jones) and Mike Mann (the leader of the “hockey team”). Certainly some unkind words were said in private emails, but the result of both investigations is clear: climate researchers have been cleared of any wrongdoing in their research and scientific conduct. Thank goodness some of those who bought in to the false accusations, like Andy Revkin and George Monbiot, have seen fit actually to apologize for doing so. Perhaps they realize that one can’t get at the truth simply by reading people’s private emails.
Montford certainly spins a tale of suspense, conflict, and lively action, intertwining conspiracy and covert skullduggery, politics and big money, into a narrative worthy of the best spy thrillers. I’m not qualified to compare Montford’s writing skill to that of such a widely-read author as, say, Michael Crichton, but I do know they share this in common: they’re both skilled fiction writers.
The only corruption of science in the “hockey stick” is in the minds of McIntyre and Montford. They were looking for corruption, and they found it. Someone looking for actual science would have found it as well.
trrll says
Considering the involvement in the anti-global warming community of industry shills with a proven past involvement in industry funded conspiracies to cast doubt upon valid science (see e.g. “Merchants of Doubt” or
http://www.desmogblog.com/sites/beta.desmogblog.com/files/plagiarism.conspiracies.felonies.v1.0.pdf), not to mention the criminal break-in to CRU’s email, I think the existence of a conspiracy against climate science is pretty well established. Given McIntyre’s long history of promoting misleading claims that appear designed to cast doubt upon climate science, along with the fact that he does not seem stupid enough to actually believe what he says, not to mention the role of his web site in disseminating the stolen emails, it is pretty reasonable to suspect that he is part of the same gang.
[edit – I get it, but let’s not go there. It’s a little tedious]
Steve Bloom says
Our friend Judy seems to be making a habit of asserting things and then failing to back them up, as with her offer to discuss the details of Montford’s book once a thorough review had been undertaken.
Similarly, she seems to be making a habit of saying things that don’t make sense at all, as Eli excerpts from Steve Schneider’s last interview (with Rick Piltz).
MapleLeaf says
[edit – no personal attacks please]
Steve Bloom says
#137: “[Response: Do you mean Montford? Monckton is a whole other kettle of fish. – gavin]”
It’s getting so hard keeping the MMs straight. Maybe color one set blue?
MapleLeaf says
I very rarely do this, but today I am forced to use caps.
Andy Revkin and Judith Curry. HAVE YOU ACTUALLY READ WHAT TAMINO POSTED?! Please, both of you, LOOK CAREFULLY at Fig. 4 above and reflect. Andy Revkin, please, for the love of all that is ethical and moral, do your job and report on the science and the truth rather than making straw-man arguments, and stop kowtowing to the likes of Nigel Persaud.
Judith Curry… you too are showing your true colours by arguing straw men. Have you looked at Fig. 4 above? Have you?! To say that I am incredibly disappointed by your recent antics would be an incredible understatement… and FWIW, I am a scientist.
MapleLeaf says
Tamino @150, he shoots, he scores, right through the five-hole!
Thank you Tamino, you exemplify the difference between a true statistician and a wannabe statistician with nefarious motives.
jyyh says
I’m not bogged down to see other finns here, this is quite an accurate place. Often the writers here incorporate the recent findings in their reports, thus one may get a fuller picture of aspects of climate change than in the main stream political journals. It’s very hard to keep up with even some of the scientific journals, them having so much text. Doubly so as I’m not in contact with the university currently. It is very nice to have articles that compress this info, as there’s been quite a long time I’ve actually taken courses in alma mater. Your efforts are appreciated, this is how science journalism could work. Thank you and also Tamino for letting me keep somewhat up (can’t get everything) in this knowledge-based enterprise called natural science. The implications of this thing called climate change are worrying enough without having to mess with boggy politics, though I’ve thought taking some steps in that marshy landscape fully knowing the popularity contest that is politics is hardly won by sticking on actual science… cutting the science short attracts nitpickers and their goodfellas, but drying the message of the long articles in scientific journals is too tiresome to most voters.
MapleLeaf says
Re #153,
Sorry, over-zealous. Not sure how to state this diplomatically, but I’ll try. Readers, please keep in mind that McIntyre went to great lengths (e.g., using the moniker “Nigel Persaud”) to attack MBH98 while simultaneously defending/promoting M&M 2005 (i.e., himself).
McIntyre repeatedly using a pseudonym (Nigel Persaud) in a public forum is relevant; it does in fact call into question his true intentions and his ethics, and also calls into question his claim in CanWest newspapers that “Everything that I’ve done in this, I’ve done in good faith”. That is not an ad hominem, but an example showing that McI’s actions are not consistent with what he claims.
Now Andy Revkin, described above IS a REALLY good story for you to investigate further and write up. Are you up for it?
James McDonald says
trrll, astute observation. My summary of that phenomenon is that for them authorities matter, not facts. To them, facts are simply one tool among many to prop up or tear down an authority, so Gore’s heating bill, or random irrelevant comments by Darwin assume equal or greater significance for them. As they say, all is fair in love and war, and for them it is a war of authorities, so deceitful tactics are perfectly acceptable as long as they are effective. (Also as they say, know thine enemy, and don’t assume they are playing by the same rules or for the same goal.)
Lawrence Coleman says
Could any of you guys help me? I’ve been trying to find what percentage of additional evaporation there is now over the world’s oceans at the current increased temp of 0.7C over the mean. I know it’s a complicated question, i.e. wave heights, relative temps of various oceans, relative temps of the air, rel. humidity and air pressures etc. etc. Has anyone got a good guesstimate overall, or maybe broken down into the various tropics? In other words, how much more additional water vapour is there in the atmosphere now compared to 100 yrs ago?
Thanks guys.
[Response: That’s hard to measure, but models suggest that it is something like a 2% global increase for a degree of warming. But this might well be affected by aerosol changes more than temperature is, and of course, the distribution will not be uniform. – gavin]
Martin Vermeer says
tamino #150:
Eh, didn’t it lose skill for 1400-1499 in this case? What did you do differently from Wahl & Ammann scenario 1?
(This is why one doesn’t remove all the potentially suspect data simultaneously — remove enough data and the early part of any reconstruction will be all over the place, but in the absence of skill it means nothing.)
Edward Greisch says
87 Scott Slaba: Thanks for the link on your web site to http://oilmoney.priceofoil.org/
89 D. Robinson: So where are they getting their money? Is Montford getting rich on book sales?
[Response: As a published author, I very much doubt it. But do not discount the propensity of people to disinform for free. – gavin]
Laws of Nature says
Re #83 [Response: The test of whether this is useful is whether you have some predictability in the validation interval, and whether the basic patterns hold up when you add more data, change the method, hold back some data etc. And they are. – gavin]
Re #97 [Response: You are very confused I’m afraid. Quantitative methods actually come up with numbers that can be checked by anyone. Your ‘99%’ is just pulled out of your .a**e. Look, the charge that the HS is simply a statistical artefact is just wrong. [..]– gavin]
Well, you seem correct that a hockey stick is still in the data when you remove the bristlecone pines, but my point was and is that by removing these non-temperature proxies, the robustness of the result decreases, and that seems to contradict your statement Re #83.
These proxies don’t belong in any temperature reconstruction, because they are wrong; they don’t respond to temperature (at least not alone).
To argue, like Tamino, “well, we already removed this proxy, why should we remove all proxies which contain a hockey stick” is flawed as well.
Perhaps he should focus on proxies which don’t have issues and then perform a correct analysis without prejudgement – that sounds more like a scientific approach.
[Response: Funny! But you have the history all wrong. The attempts to make multi-proxy reconstructions arose exactly from this desire. Yet people for some reason do not like the results – thus we have seen attempt after attempt to discredit each element in the mix – with the result that proxies are attacked in direct proportion to their 20th Century rise rather than anything intrinsic. This isn’t to say that every proxy is perfect – they are not, indeed, they are all imperfect (that’s why they are ‘proxies’!). And being imperfect, there is always a reason to pick on one you don’t like. But regardless of the reasons you can take away many of the individual proxies and the basic picture remains unchanged – but of course, the more you remove the less information you have, and eventually, you don’t have any information at all. This end point is a clear aim of some commentators. – gavin]
John P. Reisman (OSS Foundation) says
I can’t believe that we are still discussing the pathetic attempts by those that are irretrievably ignorant and still trying to debunk the hockey stick.
I can’t believe that I am still updating the Hockey stick controversy page
http://www.ossfoundation.us/projects/environment/global-warming/myths/the-hockey-stick
I can’t believe that people still don’t understand that models are never perfect, but they can be illustrative, and in the case of the hockey stick the results are robust due to the fact that the same general results are found in multiple reconstructions, including the reconstructions incorporating the McIntyre/McKitrick corrections.
AND THE HOCKEY STICK STILL REMAINS EVEN WHEN YOU DON’T USE TREE RING DATA!!!!!!!!
http://www.ossfoundation.us/projects/environment/global-warming/myths/models-can-be-wrong
Models are not perfect but we still get on airplanes.
Models are not perfect and yet we still drive in cars.
Models are not perfect and we still get on trains.
Spacecraft still fly.
Missiles can still hit targets.
Infrared is still blocked by CO2.
Stuff still happens.
Communities still plan.
Economies are still functioning, kinda, sorta, to the degree of their capacity based on resources available and demand.
Banks still rely on the Basel Convention and the world banks to manipulate the monetary system.
Models are not perfect. But the human race seems to at least be functioning with relative capacity, even though we rely on models every day of our lives.
—
A Climate Minute: The Natural Cycle – The Greenhouse Effect – History of Climate Science – Arctic Ice Melt
‘Fee & Dividend’ Our best chance for a better future – climatelobby.com
Learn the Issue & Sign the Petition
Jean S says
Tamino: “I thought you wanted to get the facts straight. It is described in MBH98, Methods Section, sub-section “Calibration,” 3rd paragraph, lines 6-11, which states:”
Yes, that is my intention. I’m not sure what is yours. As you well know (it’s even in the name of the subsection!) the passage you quoted describes part of the MBH9X _calibration algorithm_, which is a completely different matter from the tree ring PC selection under discussion. The tree ring selection is described in MBH98 (first paragraph of the second column on the first page) simply as
“Certain densely sampled regional dendroclimatic data sets have been represented in the network by a smaller number of leading principal components (typically 3–11 depending on the spatial extent and size of the data set). This form of representation ensures a reasonably homogeneous spatial sampling in the multiproxy network (112 indicators back to 1820).”
I have no clue how “Rule N” is supposed to take into account “the spatial extent and size of the data set”.
Tamino: “Didn’t you just say you wanted to get the facts straight? The code is (and has been for five years) available at
http://www.meteo.psu.edu/~mann/shared/research/MANNETAL98/METHODS/multiproxy.f
It’s not that hard to identify the section preceded by these comment lines:”
Again, I do not know what your intention is. The quoted selection is a _part_ of the full MBH98 code, which has been available only due to the pressure of the congressional hearings at the time, and it describes the same calibration procedure you referred to earlier. (A lot of?) actual MBH9X tree ring PCA code is available in the CRU files; nothing about Preisendorfer there.
Tamino: “Did you miss the part about getting a hockey stick with no PCA at all?”
Did you miss the part saying MBH9X PC1 is nothing but masked bristlecones? Garbage in, garbage out.
Tamino: “You don’t flatter yourself repeating this argument. The NoAmer ITRDB tree ring series are not in common units — they’re not in any units at all.”
C’mon. They are all describing the same thing – tree ring growth – so there is no reason to use correlation PCA. If something should be done to that dataset, I would suggest a log transform, as the tree ring indices are multiplicative rather than additive, but that is a different story.
Tamino: “Normalization is the right thing to do. You really fumbled this one.”
No, it is not. You would get F on this in my class.
Tamino: “Missing 4 years out of 580 is not a valid reason to require removal of the Gaspe series.”
Where did you get the idea that the Gaspe series was left out of the last 530 years of the reconstruction? It is used in every step starting AD1450. But unlike every other series in the MBH9X dataset, the Gaspe series was not used starting at the first step _after_ the beginning of the series (i.e., AD1450). Instead an ad-hoc extension was applied to it in order to get it included in a previous (i.e., AD1400) step.
Tamino: “they got the right answer” “We’ll be interested in your answer to the question: if their work is so horribly wrong, how did they get the right answer?”
This is what sets us apart. Scientists do not have predetermined idea what is the “right answer”. They only have hypothesis which should be backed up by evidence. In my view MBH9X does not provide any evidence for _any_ hypothesis concerning past temperatures. And BTW, even a broken clock is right twice a day.
[Response: Jean S first says “the use of “Preisendorfer Rule N” is not described in MBH98 (or related literature).” He’s proved wrong.
Rather than just admit he was wrong, he changes the subject to the “_calibration algorithm_ which completely different matter than the tree ring PC selection.”
It’s easy to verify that for the NoAmer ITRDB data, you get the same result using either Preisendorfer or a simpler ad hoc rule (e.g. just keep PCs retaining the leading 50% of data variance). Which goes to the point: that MM are wrong to keep only 2 PCs from their fully centered (but not normalized) PCA. Again, their selection rule was obvious: if it looks like a hockey stick, get rid of it.
First he says “Tree ring series are already in common units.” He’s proved wrong.
They’re dimensionless, each being scaled by its own mean, hence the mean of each series will profoundly affect the size of the variation — exactly what’s to be avoided. AND they’re not even describing the same thing, some are ring width data while others are ring density! Even if they weren’t dimensionless they couldn’t be in the same units.
So he says, “C’mon. They are all describing the same thing – treering growth.” He follows this lame excuse with a lame suggestion about a log-transform, probably because his foolishness is so obvious for all the world to see, that he hopes offering some technical suggestion will make him look smart.
Does he really expect us to believe that tree-ring width and tree-ring density are “in common units”? That scaling them by the mean value doesn’t affect the variation? That he can excuse the faulty MM PCA procedure just by calling it all “treering growth”?
There’s a limit to how much attention should be given to those who invent one after another outlandish delusion to rationalize their refusal to admit the truth. Comments like those from Jean S contribute only one thing to the discussion of climate: a waste of time. – tamino]
ThinkingScientist says
RE: #149 caerbannog says
“If you generated a big ensemble of time-series of MM2005c “noise” and then computed a straight average, is it possible that a “hockey-stick” might emerge?”
No. The simulations are stationary so they would tend to a mean of zero over many simulation runs.
Martin Vermeer says
hveerten #95,
http://www.cce-review.org/evidence/Vermeer.pdf
;-)
Judith Curry says
Although I am very busy at the moment trying to complete a paper before leaving on travel, my original drive-by is admittedly insufficient, so I am taking a few moments to clarify the weaknesses in Tamino’s review. Note, this is off the top of my head, I don’t have the HSI book with me.
First, Montford’s book clarifies three weaknesses in the paleoreconstructions, from MBH 98/99 through Mann et al 08. These include problems with tree rings, the centered PCA analysis, and the R2 issue.
[Response: Really? This is it? The PCA analysis is completely moot as has been shown in the literature Wahl and Amman (2007) and von Storch et al (2005) and above. And you think this is a big issue in 2010? Please. The ‘R2’ issue similarly – the NAS Chapter 9 deals with the issues there very clearly. The basic point is that when you get to the relatively sparse networks further back, the reconstructions don’t have fidelity at the year-to-year variability. If that is something you care about (i.e. whether 1237 was warmer or cooler than 1238), then you are out of luck. If instead you are interested in whether the 13th Century was cooler than the 12th C, it’s not the right metric to be using. And finally, ‘tree rings’? A whole community is just dismissed in your mind? The community that actually pioneered community-wide data sharing in climate science? A community moreover in which the literature has openly dealt with the many issues that arise in dealing with the nature of trees and tree rings – they are the ‘problem’? Again, really?
The points are even more bizarre when you actually look at the latest work that shows that reconstructions without tree rings or off-centre PCA give good reconstructions back centuries and that they aren’t grossly different to the ones using tree rings. What more do you want? – gavin]
The tree ring issue is admittedly murky, but unless the dendro community becomes more objective in its analysis, tree rings will become irrelevant. The centered PCA and R2 issues are much more straightforward. The centered PCA is bad statistics, and just because no single significance test is objectively the best in all circumstances does not mean that you can cherry pick significance tests until you find one you like and ignore R2.
[Response: This is simply insulting. You have absolutely no evidence that this was the case. The RE/CE statistics are perfectly fine at describing what the authors thought were relevant and have a long history in that field (Fritts, 1976) and as we have seen the PCA issue is moot. The idea that people went looking for ‘bad statistics’ to fix their results is without merit whatsoever. Please withdraw that claim.]
The key points of Montford’s book that Tamino ignores are:
1. The high level of confidence ascribed to the hockey stick inferences in the IPCC TAR, based upon two very recent papers (MBH) that, while provocative and innovative, used new methods and found results that were counter to the prevailing views. Plus the iconic status that the hockey stick achieved in the TAR and Al Gore’s movie.
[Response: You are misreading the IPCC reports. The relevant claims in the SPM and Chapter 2 in TAR were that ‘the increase in temperature in the 20th century is likely to have been the largest of any century during the past 1,000 years. It is also likely that, in the Northern Hemisphere, the 1990s was the warmest decade and 1998 the warmest year’. “Likely” in TAR speak was 66%-90% chance, thus better than 2 in 3, but not as good as 9 in 10. Your characterisation of ‘prevailing views’ is simply wrong – the paleo community had long been aware that the medieval period had been very heterogenous (Hughes and Diaz 1994 for instance) and that the peaks did not line up in different records. ‘Likely’ was the appropriate distinction for the 20th C warming being greater than any century-scale warming in 1000 years, since there wasn’t (and isn’t) any evidence to the contrary and plenty in support. The only issue that one could reasonably have is the statement about 1998 or the 1990s. Those claims were based on the fact that 1998 was by far the warmest year in the warmest decade in the instrumental record, but without direct evidence that other very warm years in perhaps not quite as warm decades did not match or exceed it. Thus I would have been happier if that part of the statement had been downgraded to ‘more likely than not’.
In AR4, the relevant statement was: Average Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years and likely the highest in at least the past 1,300 years.. Thus the statement for the last 500 years has been strengthened (which is appropriate given the increase in multiple lines of evidence for that period), and the longer term statement has been lengthened to 1300 years at the same level of confidence as before. Again a reasonable and supportable position. The differences are in the characterisation of the 20C rate of warming, and mainly the highlighting of a specific year in a millennial context. Instead, there is Eleven of the last twelve years (1995–2006) rank among the 12 warmest years in the instrumental record of global surface temperature (since 1850), indicating a move towards a (correct) realisation that the relative warmth of individual years are harder to assess. In toto, I do not see this as a significant downgrading of the conclusions – you may disagree, but this is not the stuff of conspiracy theories.
In terms of ‘iconic’ status, showing the results in the SPM seems fair enough, but MBH are not to blame for how images get used or discussed in the media. At all times when the authors themselves were interviewed I have yet to see any statements that were not justified. And as for the AIT, the hockey stick only got a brief mention, and that was by mistake (he used the wrong panel from a Lonnie Thompson paper). This is irrelevant.]
2. The extreme difficulties that Steve McIntyre had in reproducing the MBH results. Any argument that defends these difficulties by saying that Steve McIntyre is incompetent or lacking in persistence is just plain counter to the evidence that Montford provides. Science needs to be reproducible. Period. And authors need to provide all of the data and metadata needed to reproduce the results, not just draft or incomplete datasets
[Response: Science is reproducible and this science was. Mann et al did not generate the underlying data themselves, they got it from public archives and from asking colleagues – and that was made public when the previously unpublished work was published. Wahl and Ammann replicated the code (as did McIntyre). There were minor errors in the data listing at Nature, but that was fixed when it was pointed out. Scientists are not obligated to hand-hold people trying to reproduce their results, especially when they have already gone public with a farrago of misstatements in non-peer-reviewed papers (try actually reading MM2003). However, you are making a big error in characterising the culture that existed in 1998. I guarantee I will not find complete public archives for every climate paper that appeared in Nature that year – are none of those papers ‘science’? Nonsense. Replication is not about repetition- it’s about finding new ways to address the same problem. Two ice cores are better than two teams measuring the same one.]
3. The NAS North et al. report found that the MBH conclusions and “likely” and “very likely” conclusions in the IPCC TAR report were unsupported at those confidence levels. How the hockey team interpreted the North NAS report as vindicating MBH seems strange indeed.
[Response: This is simply not true. There are no ‘very likely’ conclusions in the relevant sections of TAR (I quoted them above). The only thing they pointed out was in regards to the relative warmth of 1998 and the 1990s in the millennial context which I agree with. They did state with a ‘high level of confidence that global mean surface temperature was higher during the last few decades of the 20th century than during any comparable period during the preceding four centuries‘ – this is equivalent to the strengthening of the statements made in AR4 concerning the last 500 years. They went on to say that the ‘committee finds it plausible that the Northern Hemisphere was warmer during the last few decades of the 20th century than during any comparable period over the preceding millennium‘ – and in further questions, clarified that plausible was equivalent to ‘likely’ in IPCC-speak (i.e. less confidence than the statement about the last 500 years). The statement about 1998 and the 1990s was that “Even less confidence can be placed in the original conclusions by Mann et al. (1999) that “the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium” because the uncertainties inherent in temperature reconstructions for individual years and decades are larger than those for longer time periods and because not all of the available proxies record temperature information on such short timescales” which is true enough. Of course, now it is likely that the 2000s were the warmest decade.]
4. A direct consequence of the North NAS report is that the conclusions in the IPCC AR4 essentially retracted much of what was in the IPCC TAR regarding the paleo reconstructions. This is the only instance that I know of where the IPCC has reduced a confidence level or simply left out a conclusion that was in a previous IPCC report. This is discussed in the CRU emails.
[Response: Again, this is not true. AR4 did in no way ‘essentially retract’ much of what said in TAR – for anything substantial concerning the nature of late 20th C warmth the conclusions both in the NAS report and the AR4 report strengthened the TAR conclusions (see the statements above). Perhaps you think that the ‘essential’ thing is the position of 1998 as the single warmest year? Well, in that case I strongly disagree, this is not ‘essential’ in anything very much. Much more important in actually understanding the climate are the relations between forcings and responses both globally and spatially over this period, and none of that relies on rankings of individual years. And as for IPCC changing conclusions this has happened many times – Lindzen used to point to statements about upper tropospheric water vapour for instance that became less confident from the 1990, 1995 and 2001 reports, similarly uncertainty in aerosol indirect effects has clearly grown over time. ]
5. Even with this drawback in the AR4 conclusions and confidence level, somehow what was left was judged to hinge on the unpublished Wahl/Amman papers, one of which was having difficulty surviving peer review in GRL for a period of several years, and was finally pushed through quickly by Steve Schneider in Climate Change. IPCC deadlines were violated, and peer review in the context of the papers publication in Climate Change was a joke (all of this is described in the CRU emails). So all of these shenanigans to get these papers into the IPCC, papers that some have judged to have more methodological problems than the original MBH papers, have seriously degraded trust in the IPCC consensus, once this was illuminated in the CRU emails.
[Response: This is nonsense. The conclusions in the Wahl and Amman papers, and their published code had been public since 2005 so there was no doubt about their results. Steve Schneider was exceptional in many ways, but his journal is not the speediest in terms of turnaround of manuscripts. Weird editorial decisions with respect to the responses to the MM05 GRL paper also did not help. But the authors of the IPCC chapter knew full well that the their statement in the first draft about MM05 was not right – there weren’t any unanswered questions about the impact of PCA centering on the results of MBH98. The WA07 paper was accepted in time for this to be cited (and it was an IPCC-wide decision to decide on the cutoffs, not Keith Briffa’s) and it was (no IPCC deadlines were violated). If it hadn’t been it would not have been the end of the world and I don’t see how anything subsequently would have changed. McIntyre has had 5 years to write a comment or a new paper on the subject and he hasn’t. As for Briffa talking to Wahl during the final drafting stage, I see nothing problematic with that in the slightest. The idea put forward by McIntyre and Montford that IPCC authors are supposed to sit in purdah while writing the reports has absolutely no basis in fact or in practice. Many people were talked to and many people made suggestions where their expertise was required. The fact is that the AR4 statements in the final version were more correct than in the first draft and that is something people should be happy about.]
6. The dependence of the various proxy reconstructions used in the AR4 on essentially the same datasets is described, it is difficult to judge these reconstructions as independent.
[Response: Long well-resolved paleo records are rare – I doubt that is a surprise to anyone. Should people not use what has been published to get the best characterisation of past climate change? Methods can be independent though, and since your earlier comments seem to revolve around methods, I don’t quite get what point you are making.]
7. The Mann et al. 2008, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still do not include a single reconstruction that is free of questioned tree rings and centered PCA.
[Response: Absolutely untrue in all respects. No, really, have you even read these papers? There is no PCA data reduction step used in that paper at all. And this figure shows the difference between reconstructions without any tree ring data (dark and light blue) compared to the full reconstruction (black). (This is a modified figure from the SI in Mann et al (2008) to show the impact of removing 7 questionable proxies and tree ring data together). In addition, there are many papers that deal with issues raised by MM – Huybers (2005), von Storch et al, (2005), Rutherford et al (2005), Wahl and Amman (2007), Amman and Wahl (2007), Berger (2006) etc.
Judith, I implore you to do some work for yourself instead of just repeating things you read in blogs. (Hint, not everything on the Internet is reliable). ]
8. The divergence problem is clearly explained, including how the graphs in the IPCC report were misleading, and how the splicing of the historical records with the paleo records is misleading. I.e., the trick to hide the decline. Why should we have confidence in paleoproxies that show a temperate decrease in recent decades, in contrast to historical measurements?
[Response: The divergence problem is well known. And I absolutely disagree that the IPCC graphs are ‘misleading’. How perchance were you misled? The picture on the 1999 WMO report cover has nothing to do with IPCC, and frankly was completely unknown until November last year. Yet an incomplete caption on a report that no-one knew about is the biggest scandal in climate science? Get real. I’m with Muir Russell on this one. There is nothing wrong per se in splicing records together to get a continuous series – for instance I have just done the exact same thing in creating a series of solar forcing functions for climate model runs – but these things should be clearly explained. The divergence issue is predominantly an issue for the tree ring density measurements (Briffa et al), and while there is some reason to think that is a unique phenomena, it remains unresolved. So, feel free to ignore the Briffa et al curve if you want. This is not a general issue and doesn’t affect the MBH and Mann et al 2008 conclusions at all. ]
9. Finally, Montford asks the question as to why the scientists and the IPCC promoted the hockey stick at such a high confidence level so prematurely, and why such extraordinary efforts were made to defend it when it arguably isn’t a critical piece of the climate puzzle, rather than to learn from outside statisticians and do a credible error analysis on the data and the inferences.
[Response: Oh please. Why didn’t the first multi-proxy paper deal with all issues and try all methodologies and come to all the conclusions? Because that is not the way science works. People try new things, issues arise, issues are dealt with and a more sophisticated understanding emerges. Some data is used, more data is gathered and more complete pictures arise. No single paper is ever perfect – and I’m sure if any of your papers (or mine for that matter) got the attention that has been payed to MBH98 there’d be all sorts of potential issues as well. But you are again overstating the conclusions of those early papers, and there have been no extraordinary efforts to defend them. It is quite the contrary, there have been many and multiple extraordinary attempts to discredit them (unless you think Congressional review is ‘ordinary’). No-one is against efforts to learn from outside statisticians, that is just a strawman. People are against politically-driven hack jobs purporting to be analyses but that don’t even bother to work out what the consequences of any different choices might be. All of the data in Mann et al (2008) is online, as is all the code – where are the outside statisticians who are clamouring to have their ideas heard? They are welcome to try and do a better job. ]
I’ve probably missed a few things, but those are the key points raised in the book that have stuck with me. I’ve tried to follow the debate by reading the journal articles and posts at both RC and CA. I was very frustrated in trying to sort all this out. Montford’s book sorted everything out into coherent, well argued and well documented arguments. There is a certain element of spin, so I wanted to see what RC had to say about all this. On the RC side, we have the outdated Dummies Guide to the Hockey Stick and Tamino’s review, plus the snarky replies to serious posters that include statistician Jean S. You need to do better than this to counter Montford’s book. Failing to do so will just push more people into the Montford/McIntyre corner of the ring. And how and why this issue has become so contentious and stayed so contentious is a serious issue in the field of climate science.
[Response: The reason this has become ‘contentious’ has nothing to do the MBH and everything to do with people not wanting climate change to be a problem. Icons that arise for whatever reason attract iconoclasts. Noise in the blogosphere does not correlate to seriousness in climate science. As your comments make abundantly clear, you have very little knowledge on this issue and have done no independent investigation of the wild claims being made. Yet the more smoke there is, the more you appear to want to blame MBH for the fire. A ‘certain amount of spin’? Seeing conspiracies everywhere you look is not ‘spin’, it is paranoia. Real scientific controversies get resolved in the literature for the people who actually care about getting things right. For those that don’t, continued repetition of long debunked talking points seems to be their only tactic. I, for one, am pretty tired of that and heartily bored of pointing them out.
The fact of the matter is that we are far beyond the point where people need to either s*** or get off the pot. Continuing to whine about what selection rules were used in a PCA analysis 12 years ago without coming up with any constructive alternative, continuing to complain about a centering convention that makes no difference whatsoever, continuing to moan about error analyses being inadequate without doing a single stitch of work to improve them… enough, already! Science moves forward because people do actual work. Nothing happens when people just sit in a room and [edit] complain about the state the world. The people who are actually publishing in this field are doing all of the things you seem to think are being ignored, while the people whose work you are reading are doing nothing but complain about how they are being ignored. I’m very confident about which group will make the most progress in future. – gavin]
Leonard Evens says
This is a bit off topic, but it is at least tangentially related.
I am reading—actually listening to readings with an mp3 player—“The Decline and Fall of the Roman Empire” by Edward Gibbon. It was written in the period 1776 to 1789. In it, Gibbon states that Europe was much colder than at (his) present during the time of his history, presumably the first half of the first millennium CE. He notes that of course there were no thermometers available at the time but that indirect evidence suggests it was colder. He conjectures that the cause was the extensive forests, which were later cut down and replaced by cultivated land.
Does modern research confirm any of this?
Jeffrey Davis says
One of the problems that journalists like Revkin have never overcome is a factor I call “map distortion”. A map will use a dot and a label for a locale that might be a couple of orders of magnitude smaller than the dot and the label. It gives the illusion that “East Nowhere” is about the same size as “Chicago.” Both show up on the map as a dot and a label. Journalists, allegedly under the thrall of “fairness”, give all the “dots” and “labels” the same weight in their writing.
A career of that kind of thing and the careful reader can see that what the journalist is actually in thrall to is a paycheck.
D. Robinson says
Re Gavin’s response to #127:
“Perhaps we’d make more progress if you told me what key question you think all this affects? – gavin”
I appreciate your responses. My point is that considering a miserable r2, a widespread divergence problem in tree rings, the unintended use of Tiljander and the fact that W & A had to make up their own statistical validation [RE], I would expect that an Oxford-educated math expert might not expend so much energy defending the hockey stick. Especially if it doesn’t matter because of the reams of other data.
[Response: If I had a choice I wouldn’t expend a single further electron on this subject, but then we’d get accused of ‘not dealing with serious issues arising’. I don’t think this is the most important thing in the world and hopefully this will be the last thread on this for a while. But your comment typifies the pointlessness of this (sorry). You are caught up in technicalities that just aren’t relevant for anything interesting. WA07 did not invent ‘RE’ – that was discussed in the NAS report (along with the r2 issue, page 94 I think) and dates back to at least Fritts (1976) in this context, and it is a useful metric for how well a reconstruction does in the verification interval. The Tiljander stuff is moot since the Mann et al (2008) paper showed both with and without and found no material difference. The divergence problem is more interesting, but only matters for a small subset of the proxies and however it is resolved it won’t make much difference to the broader conclusions. But all of these things you mention are means to an end (the end being a better understanding of the climate), not the ends themselves. The big picture stuff is made up of lots more than this and so the implications – however things get resolved in these small details – are going to be small. Thus, really, why do you care? – gavin]
[edit – speculating about funding is boring and OT]
trrll says
Re: Laws of Nature #163
The problem with proxies is that they are indirect measures that are influenced by factors other than temperature alone. So you pool a whole bunch of different proxies, and you hope that the errors more or less cancel out.
It is a very risky practice to start deleting data once you’ve looked at the outcome, because no data is perfect, and it is human nature to be more suspicious of data that leads to a conclusion that you don’t like, and it is easy to rationalize deleting it–and that way lies self-deception. On the other hand, you do want to know if some single dataset is dominating the conclusion (because the whole point of pooling a lot of data sources is to avoid that), so scientists will frequently engage in sensitivity analysis, deleting one dataset or another, and checking to see whether the conclusion is altered meaningfully. You want to know that the conclusion is “robust,” not hanging on one particular bit of data that might be wrong. The point of Tamino’s article is that the hockey stick has been subjected to this over and over, and it stands up.
And it pretty much has to, because the blade is in the instrumental record, which is the most reliable dataset of the whole bunch.
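For concreteness, the leave-one-out check described above can be sketched in a few lines of Python. The proxy matrix and the stand-in “reconstruction” below are hypothetical placeholders, not the MBH98 data or method; only the shape of the test is the point.

```python
# A minimal sketch of a leave-one-out sensitivity check, with hypothetical
# data and a stand-in "reconstruction" (a simple average of standardized
# series). This is not the MBH98 method, just the shape of the test.
import numpy as np

def reconstruct(proxies):
    z = (proxies - proxies.mean(axis=0)) / proxies.std(axis=0)
    return z.mean(axis=1)

rng = np.random.default_rng(0)
n_years, n_proxies = 600, 22
proxies = rng.normal(size=(n_years, n_proxies))   # placeholder proxy series

full = reconstruct(proxies)
for i in range(n_proxies):
    reduced = reconstruct(np.delete(proxies, i, axis=1))
    rms_change = np.sqrt(np.mean((full - reduced) ** 2))
    # A robust result changes little when any single series is dropped.
    print(f"drop series {i:2d}: RMS change = {rms_change:.3f}")
```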
Of course, one could decide that proxies are too risky to use and throw out everything before the instrumental record–which would give you a hockey stick with a very short handle. But the critics don’t want to do this either, because their underlying hypothesis, rarely clearly articulated (probably because it sounds really shaky when you lay it out), is:
a) There is some mechanism, not included in current models, which limits the ability of CO2 to produce warming. The fact that the modern increase in temperatures matches what was predicted from CO2 increase decades ago (and is still predicted in modern models) is an unfortunate coincidence.
b) There is some other mechanism of producing global warming that has been active in the past, but occurs by a mechanism that is not included in current models, and which doesn’t have anything to do with CO2, and this, rather than CO2, is responsible for the warming seen in the instrumental record (and whatever that mechanism is, it is temporary and will go away by itself Real Soon Now).
So the “skeptics” need proxies, because they want to believe that it has been this warm before (for some reason that the current models can’t predict), and it went away of its own accord. Indeed, one often sees skeptics clutching at “proxies” that are far, far more shaky than precise measures of tree growth, such as medieval vineyards.
Didactylos says
Gavin: no, I really meant Monckton. He has waxed lyrical on the subject of hockey sticks and red noise. ThinkingScientist’s logic may be a little more technical, but it’s roughly the same misunderstanding.
ThinkingScientist said: “No. The simulations are stationary so they would tend to a mean of zero over many simulation runs.” What more needs to be added to this? Hasn’t he answered all his own questions?
Geoff Wexler says
Consider: ‘Dire Predictions’ by Mann & Kump p.81 “The proxy temp. estimates match the model simulations well.. with climate sens. to 2 XCO2 of 2-3 degs.C”
Suppose it was warmer in the past, would the current models really be unable to predict that?
According to RC, the uncertainty in the amount of aerosol cooling makes the twentieth century warming (the blade) a rather dodgy way of estimating the clim. sens. That is consistent with adjusting the estimated aerosol cooling for the 20th century upwards a bit. Now put in a greater climate sensitivity than the 2-3 degs.C. Result: a hockey stick with a much wavier handle and a less dramatic looking blade. No change in models.
But look: The predictions for the future would be more dire than before especially as the aerosol cooling would not be expected to keep up.
Laws of Nature says
Re #169 Thank you for your post! I started to wonder if I really cannot be understood (Unlike Gavin’s, your comments are spot on and productive)! You say that it is dangerous to “after-screen” your proxies in order to modify the result in a way, but an imperfect “pre-screening” is wrong in very much the same way. For example, just google for a few pictures of a “bristlecone pine”; I think just by eye you can see that the environment must have a big impact on the growth! Later publications seem to have better proxies (and hockey sticks), but discussing this one as a milestone simply strikes me as very odd.
[Response: It’s precisely because bristlecone pines respond to their environment that they are useful. Not sure what your point is. – gavin]
Your comment also deals with another question I have had for a long time (not completely on topic, I am afraid): If a CO2 doubling provokes 3.7 W/m^2 additional forcing and that leads (with feedbacks and so on) to about 3 K temperature increase, how much temperature increase from the beginning of the instrumental record till now should we expect? (IPCC states 1.7 W/m^2 CO2 forcing till now)
[Response: Actual temperature increase is a function of the total forcing, not just CO2 (which, coincidentally, is around 1.7 W/m2 but with error bars of +/-1 W/m2 or so), the climate sensitivity and the thermal inertia in the system. With the best estimates of all of these things, we should have seen somewhere between 0.6 and 1 deg C warming by now. – gavin]
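As a rough rendering of the arithmetic in the response above: equilibrium warming scales as sensitivity times the ratio of the forcing to the doubling forcing, and thermal inertia means only part of that is realized so far. The 60% “realized fraction” below is an illustrative assumption, not a model output.

```python
# Back-of-envelope version of the numbers quoted above. The 0.6 "realized
# fraction" stands in for ocean thermal inertia and is an assumption; the
# forcing estimate itself carries error bars of roughly +/- 1 W/m^2.
F_2x = 3.7          # W/m^2 forcing from a CO2 doubling
S_2x = 3.0          # K equilibrium warming per doubling (mid-range value)
dF = 1.7            # W/m^2 approximate net forcing to date

dT_equilibrium = S_2x * dF / F_2x      # ~1.4 K if this forcing were held fixed forever
dT_so_far = 0.6 * dT_equilibrium       # ~0.8 K, within the 0.6-1.0 K range quoted above

print(f"equilibrium: {dT_equilibrium:.2f} K, realized so far: ~{dT_so_far:.1f} K")
```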
Brian Dodge says
[edit – further funding-related discussions are OT]
Rattus Norvegicus says
About the bristlecones. It seems to me that Salzer et al. (PNAS, 2009) did a pretty good job of showing that bristlecones are indeed responding to temperature. So although the NRC panel said 3 years ago that bristlecones should be avoided, it seems as though they were actually pretty fair proxies all along. So why does McIntyre, as told to Andrew Montford, continue to insist that they are not proxies for temperature at all? Because they show a hockey stick. Salzer did show that there may have been a problem with the standardization used by Graybill on the strip-bark samples, but the newly developed chronologies still show a hockey stick. But as Tamino points out, the McIntyre method of doing a sensitivity analysis is to eliminate all of the data he does not like. In his book Montford has a section which deals with McIntyre’s analysis of other proxies. It turns out that in McIntyre’s opinion any proxy which tends to show a 20th century increase is invalid. This is the McIntyre method: throw out all the data he doesn’t like. He used the same technique in his rather amateurish “analysis” of Briffa’s Yamal chronology, which was ripped a new one both in Briffa’s reply posted at the CRU website and in his comments to the Muir Russell committee (in considerably less polite language).
BTW, the discussion of why RE is to be preferred over r^2 in Wahl and Ammann was excellent, and really ripped on Steve who clearly just doesn’t get it.
ThinkingScientist says
RE: #170 Didactylos
If you want to understand more about stationary stochastic processes I would recommend that you consult a suitable undergraduate text. I would recommend “An Introduction to Applied Geostatistics” by Isaaks and Srivastava. You can find it on Amazon. It is very readable. Chapter 9 on Random Function Models, starting on p196, would be a good starting point.
Concerning the idea of a “Climate Signal” proposed by WahlAmman2007 and also suggested by Gavin, this would not give rise to a hockey stick if the sequence is simulated as a stationary process. It is the phase spectrum that would cause this effect to happen if it were non-uniform and systematic. But the simulations of MM2005 are stationary and random, corresponding to uniform PDF for the phase distribution. This means that in some simulations the climate signal might be found at the start of a sequence, the end of a sequence, the middle of the sequence etc on different runs. On average they would cancel out. But unfortunately the MBH98 algorithm manages to find the hockey stick at the end of the signal notwithstanding it being a stationary stochastic series.
It’s quite important to note that WahlAmman2007 assert that the simulation of MM2005 contains the “climate signal” and hypothesise that this invalidates the result, but they do not offer an example or reference to support this statement. If I had reviewed that paper I would not have allowed that comment through without substantiation – it is pure speculation.
[Response: No it isn’t. It is in fact trivially true as we discussed yesterday. An autoregressive non-climatic process plus a non-stationary long term signal and a stationary component of auto-regressive ‘weather’ will not have the same sample auto-correlation as the auto-regressive non-climatic process you started with even with an infinite time-series, let alone a finite length one. – gavin]
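For readers following this exchange, a phase-randomized stationary surrogate of the kind being described can be sketched as below. The input series is a hypothetical placeholder and this is not MM2005’s actual code; it only illustrates the construction under discussion.

```python
# A minimal sketch of a phase-randomized stationary surrogate: keep a
# series' amplitude spectrum, draw the phases uniformly at random. The
# input series is a hypothetical placeholder; this is not MM2005's code.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=581)                      # placeholder "proxy" series

spec = np.fft.rfft(x - x.mean())
amps = np.abs(spec)
phases = rng.uniform(0.0, 2.0 * np.pi, size=amps.size)
phases[0] = 0.0                               # keep the mean (DC) term real
if x.size % 2 == 0:
    phases[-1] = 0.0                          # keep the Nyquist term real for even lengths

surrogate = np.fft.irfft(amps * np.exp(1j * phases), n=x.size)
# The surrogate shares the power spectrum (hence the autocorrelation) of x,
# but any feature is equally likely to land anywhere in time, so averaged
# over many draws the surrogates are flat.
```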
ThinkingScientist says
RE: #177
[Response: No it isn’t. It is in fact trivially true as we discussed yesterday. An autoregressive non-climatic process plus a non-stationary long term signal and a stationary component of auto-regressive ‘weather’ will not have the same sample auto-correlation as the auto-regressive non-climatic process you started with even with an infinite time-series, let alone a finite length one. – gavin]
Your answer makes no sense. You define the signal as
An autoregressive non-climatic process +
a non-stationary long term signal +
a stationary component of auto-regressive ‘weather’
Discounting the last one, which of the other two is the AGW signature? The autoregressive non-climatic process or the non-stationary long term signal?
[Response: Sorry, I thought this was clear to you. No-one is interested in the non-climatic processes (at least in this context) – these involve residual age-related growth trends, disturbances, diffusion (in ice cores perhaps), bioturbation (in ocean or lake sediments) etc. These are the elements that might make one set of proxies, or an individual proxy record a signal that is not related to the climate. The long-term non-stationary part is of interest – that is the part that is related to some external driver (CO2, solar, volcanoes, orbital forcing etc.). This imposes auto-correlations on the proxy signal because that exists in the drivers. The last part is also of interest – it is the internal variability of the climate system and should be at least regionally coherent. But its time/space structure is also complicated – variations in the North Atlantic causing temperature changes in Europe for instance will have auto-correlation too. The last two components are what we want to derive (though the distinction between the two is hard to define). So when you are testing methods against noise, you are modelling the impact of the non-climatic processes only and I guarantee that they are not best modelled as a red-noise process with the sample auto-correlation from real proxies. – gavin]
Tom Scharf says
This article would read better if the tone was a little less hostile, but at least directly addresses M&M which is a breath of fresh air here.
MM meticulously and publicly documented exactly what they did, and stated exactly why they thought the MBH analysis was incorrect. CA does show a compelling story for errors in the original calculations for anyone with an engineering degree.
Some of this article has a “these are not the droids you are looking for” feel to it though. “Non-standard PCA” – No, the PCA analysis was (innocently) wrong, “the hockey stick is still there no matter what you do” – No, the claim that there is “unprecedented warming” becomes much more hazy when the normal PCA is done, etc.
[Response: Not true. The anomalous 20th Century is seen regardless of PCA, regardless of tree rings, etc. Look at the glaciers or the boreholes for instance. – gavin]
It really does not dispute any of CA’s technical results, it simply dismisses/disagrees with the conclusions, which is a fair argument that involves opinion.
[Response: You misunderstand, the ‘issues’ raised have no actual consequences and so scientists end up with the same conclusions. McIntyre and Montford then repeat the same issues and same points, and complain when the scientific conclusions don’t change even when its been shown that those issues and points are not material. Then they start complaining about the process because they aren’t making headway on the science. This isn’t a ‘matter of opinion’, nor is it a ‘fair argument’. Please point out to me somewhere where McIntyre has acknowledged that the PCA centering issue is moot. Or that the farrago of complaints in MM03 were not justified? – gavin]
I invite anyone to look at each separate proxy time domain series involved in the PCA by itself, one at a time. It is pretty intuitively clear for anyone who has a signal processing background that if there is a common signal in there, it is very well hidden and would need to be tortured out. If you simply averaged these signals you would not get a HS. You could probably use 10 signal extraction techniques and get 10 significantly different answers.
[Response: And yet all the sensible methods people have tried don’t give random results. All the data is online – 1209 proxies in the Mann et al (2008) for instance – process away! – gavin]
The simple fact that one has to resort to PCA in the first place shows that one is struggling to find any common signal here. CA clearly showed that the HS output was very dependent on one or two series and the rest of the series were heavily discounted. Yes, it is mathematically shocking that removing 2 out of 22 series changes the output that much, it shows the data is quite non-uniform.
[Response: You mistake what the PCA method was used for here. There was a concentration of data from N. America which in any simple average would be overweighted. Thus the PCA was to reduce the number of series to a handful of patterns that would be regionally representative. Different methods deal with that potential problem differently, but the use of PCA in this case has no implication for the struggle to find a ‘common signal’ – precisely the reverse. – gavin]
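A hedged sketch of the dimension-reduction use of PCA described in the response above follows; the data are synthetic and the centering is the conventional full-period kind, so this is not the MBH98 implementation, only the general idea.

```python
# A minimal sketch of using PCA to condense a dense regional network of
# series into a few representative patterns. The data are synthetic and the
# centering is the conventional full-period kind; this is not MBH98's code.
import numpy as np

rng = np.random.default_rng(2)
n_years, n_series = 581, 70
regional_signal = np.cumsum(rng.normal(size=n_years)) * 0.02
data = regional_signal[:, None] + rng.normal(size=(n_years, n_series))

centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pcs = U[:, :2] * s[:2]                      # retain the first two patterns

explained = (s ** 2 / np.sum(s ** 2))[:2]
print("fraction of variance in PC1, PC2:", np.round(explained, 3))
# Only these few PCs (not all 70 series) would then enter a hemispheric
# reconstruction, so the dense regional network is not over-weighted.
```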
OK, so I believe the proxy reconstruction is unreliable, so what? It means almost nothing. What really matters is how well we can predict the future climate, and whether it constitutes an immediate threat. Prediction skill of the climate models is the issue that matters.
Training (or tuning) climate models against highly questionable proxy reconstructions will self correct with how well they score in prediction skill. Time will tell.
[Response: That isn’t what this data is used for. See Schmidt (2010) for some more discussion of this exact point. – gavin]
Ike Solem says
You have to wonder how McIntyre would have handled an effort to discredit other kinds of estimates of geological temperature histories – an important topic in oil discovery and exploration, Mr. McIntyre’s area of claimed expertise.
For those who don’t know, the maximum temperature that an oil source rock was exposed to after burial is a key piece of information for oil companies:
How do you answer that question? Via proxies – the same as with the climate record. Now, if McIntyre wanted for some reason to discredit these proxy methods or cast doubt on their conclusions – who knows why – what would he do?
Let’s first consider some specific proxies used in oil exploration:
1) The ratio of peat to lignite to bitumen to anthracite in rocks, even in trace amounts, is one way of guessing the temperature. (Similarly, in climate science, if you can date a relic peat bog to a given date, you know it was glacier-free at the time). However, marine deposits lack such vegetation.
2) Pollen grains survive some of the harshest conditions, and as the rocks get hotter, they only turn brown. If you come upon an exposed outcrop with dark brown pollen grains in it, you can thus estimate the maximum burial temperature. (In climate, the distribution of pollen reveals temperature and precipitation trends due to the ecological effects – the Younger Dryas cold period, 12,800-11,500 years ago, is named after a flower whose pollen was widespread at the time).
3) Fossil teeth are made of durable enamel, and like pollen, some types are widespread – “conodonts” in particular have been used to trace temperatures. Light brown is associated with oil – dark brown with natural gas. (Isotopic analysis of the oxygen in tooth enamel has also been used to study climate swings, including the onset of the Little Ice Age).
If McIntyre wanted to discredit these methods and their conclusions, he could attack specific studies – for example, the original lab work with conodonts, heating them to see how they changed, was done in open air – so the conditions were unrealistic, it must all be rubbish. Don’t trust those consultants who tell you they can find oil with old teeth! Don’t fire me and hire them, more specifically…
However, attacking methods can get you bogged down in detail – a disaster for anyone wishing to cast doubt on the conclusions. Details get the audience thinking – it’s better to drown them in gibberish. Hence, it might be better to attack all the methods together using statistical arguments based on datasets constructed for maximum ambiguity – right, Mr. McIntyre? Don’t vet drilling records for quality before including them, in other words.
There are lots of tricks for doing this kind of thing, and Mr. McIntyre probably knows most of them. For example, let’s say a chemical manufacturer wants to downplay the effects of a toxic process on its employees. They hire a consultant to study the issue – who conducts a medical survey of all 10,000 employees, and concludes that only 0.1% have any issues, well below the normal background incidence of the reported diseases/effects. What did we forget to mention? Yes, only 50 people actually were directly exposed to the process – 20% is a bit different from 0.1%, isn’t it?
That’s the kind of statistical gibberish that McIntyre is schooled in – and it can be used to cast doubt on any scientific issue whatsoever, as long as someone is willing to pay for the effort.
In this case, the Canadian tar sand consortium is probably the real interest backing the fraudulently dishonest claims of Mr. McIntyre – but it’s not just climate science, they are also claiming that they can use carbon capture and sequestration to clean up tar sand emissions – and that’s a claim that the U.S. State Department is using to justify their refusal to allow EPA to conduct a permitting process for any Canadian tar sand pipelines to the U.S. – yet another example of the quixotically bipolar U.S. government policy on climate and energy.
ZT says
Is there any botanical evidence that cedars respond linearly and positively to warmer temperatures? (The Gaspé series is a cedar chronology).
SteveF says
Via Stoat (via Hans von Storch) comes a new paper by Jason Smerdon that will, as Stoat notes, probably be getting a lot of coverage soon. It points out a number of problems in recent Mann et al pseudoproxy papers. Paper here:
http://www.ldeo.columbia.edu/~jsmerdon/papers/2010b_jclim_smerdonetal.pdf
Any initial thoughts?
[Response: There is a comment submitted already. But this is off-topic here, take the discussion to Stoat. – gavin]
SteveF says
Thanks Gavin. Have mentioned the comment over at Stoat.
Judith Curry says
Gavin, the post I made in #167 was a summary of Montford’s book as closely as I can remember it, sort of a review. I did not particularly bring my personal opinions into this, other than the framing of Montford’s points. So asking me to retract a point made in a book in a review of that book is, well, pointless. your attempt to rebut my points are full of logical fallacies and arguing at points i didn’t make. As a result, Montford’s theses look even more convincing. Once you’re in a hole, you can try to climb out or keep digging. Well, keep digging, Gavin. My final words: read the book.
[Response: Thanks for passing by. In future I will simply assume you are a conduit for untrue statements rather than their originator. And if we are offering advice, might I suggest that you actually engage your critical faculties before demanding that others waste their time rebutting nonsense. I, for one, have much better things to do. – gavin]
ThinkingScientist says
RE: #179
Gavin,
The discussion we have been having concerns the assertion by WahlAmman2007 that MM2005 simulations contain a “climate signal” and therefore their tests are invalid. MM2005 took the power spectrum of the proxies as the basis for stationary simulations and concluded that the MBH98 algorithm is not robust and in fact finds a hockey stick when it should not. You have just stated in your reply to my post #179 that:
“So when you are testing methods against noise, you are modelling the impact of the non-climatic processes only”
But WahlAmman2007 criticise MM2005 for including the climate signal, not for only modeling a noise term. You have just described the climatic part as non-stationary but MM2005 specifically model a stationary process. You cannot be correct in your argument on either point in order to refute MM2005 using WahlAmman2007.
[Response: We are still talking at cross purposes. If I create a red-noise time series I need an auto-correlation coefficient or ARMA parameters etc. If I take those coefficients from a real world proxy, I am including auto-correlations that arose not just from the non-climatic noise, but also from the auto-correlations in the climate system. Indeed, the auto-correlations will be inflated over what you would have if you just knew what the non-climatic noise was. Thus when you do a test you will not be testing the method against the unwanted non-climatic noise alone. The redder the noise, the worse these methods will behave, so of course it matters. – gavin]
You also state:
“No-one is interested in the non-climatic processes (at least in this context) – these involve residual age-related growth trends, disturbances, diffusion (in ice cores perhaps), bioturbation (in ocean or lake sediments) etc. These are the elements that might make one set of proxies, or an individual proxy record a signal that is not related to the climate.”
Ok, but of course we want to include these in the power spectrum because they are present in the proxies – the MBH98 algorithm being tested must not find a hockey stick in the presence of these confounding factors. These are included in the MM2005 simulations. But of course MM2005 include more than this – it’s not just “noise” as you describe it.
You then state:
“The long-term non-stationary part is of interest – that is the part that is related to some external driver (CO2, solar, volcanoes, orbital forcing etc.). This imposes auto-correlations on the proxy signal because that exists in the drivers.”
I think you are unclear as to what is a stationary and non-stationary process. A volcano is a very short term process – a transient and is effectively a localized (in time) noise burst. Orbital forcing is a stationary but periodic process, not non-stationary as you say.
[Response: I’m not going to argue about terminology, but trends caused by external drivers are not representable as stationary random noise. They none the less increase the auto-correlation in the sample. Call it what you want. Note that orbital forcing over the periods covered by these reconstructions is made up of monotonic trends. – gavin]
With your statement, referring to non-stationary climatic signals:
“This imposes auto-correlations on the proxy signal because that exists in the drivers”
You are conflating non-stationarity and auto-correlation. They are quite different. A linearly increasing function of time would be described as first-order non-stationary but it would not have an autocorrelation structure. A power spectrum such as used by MM2005, after adding random phase, has an autocorrelation structure but is deliberately modelled as stationary. By your definition it therefore does not include the “climate signal”, as you have stated this is non-stationary.
[Response: If I calculate the sample auto-correlation in a ‘trend+noise’ series it’s higher than if I calculate it just from the ‘noise’. That’s all I am saying. – gavin]
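That statement is easy to check numerically. A small hedged illustration (the AR(1) coefficient and trend size are arbitrary choices for the demonstration, not fits to any proxy):

```python
# A small numerical illustration of the point above: the sample lag-1
# autocorrelation of 'trend + noise' exceeds that of the noise alone.
# The AR(1) coefficient and trend amplitude are arbitrary choices.
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

rng = np.random.default_rng(3)
n, phi = 600, 0.3
noise = np.zeros(n)
for t in range(1, n):                 # AR(1) "non-climatic" noise
    noise[t] = phi * noise[t - 1] + rng.normal()

trend = np.linspace(0.0, 3.0, n)      # a deterministic forced trend

print("noise alone:  ", round(lag1_autocorr(noise), 3))          # near phi
print("trend + noise:", round(lag1_autocorr(trend + noise), 3))  # noticeably higher
```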
MM2005 model a stationary stochastic process but the MBH98 algorithm somehow detects what you describe as a non-stationary forcing attributed to CO2. Let me say that again so we can be clear:
From a stationary stochastic process the MBH98 algorithm detects a non-stationary signal that is then attributed to CO2 forcing (the “hockey stick”). I think that clearly states the problem with the algorithm of MBH98: just how does it do that and still get described as robust?
[Response: Attribution is a completely different issue, and is for another day. Why the climate signal is the way it is requires a whole other set of machinery and is not related to picking out the climate signal itself. – gavin]
simon abingdon says
#168 Judith Curry
Judith 82, gavin 216. Go gavin!
dhogaza says
Tom Scharf:
“Training (or tuning) climate models against highly questionable proxy reconstructions…”
People who don’t understand how GCMs work should
1. Avoid embarrassing themselves in public
2. Ponder whether being so confidently wrong on such an important point will lead people to wonder if you’re equally wrong in your other assertions.
Michael Ashley says
This thread is one of the best I have read at RealClimate for some time. Snipping off-topic remarks greatly increases the signal-to-noise. Let’s have more of that.
Also, huge kudos to Gavin and Tamino for continually going in and correcting the same old arguments.
The most interesting post for me has been Judith Curry’s #168 where she finally makes some specific and testable remarks (although prefaced with “I am very busy at the moment” and “this is off the top of my head”, which gives plausible deniability).
Let’s home in on Judith’s paragraph 7 where she says, “The Mann et al. 2008, which purports to address all the issues raised by MM and produce a range of different reconstructions using different methodologies, still do not include a single reconstruction that is free of questioned tree rings and centered PCA.”
Gavin replies, “Absolutely untrue in all respects.”
Can we have a clear comment from Judith on this point? Does she acknowledge that her paragraph 7 is wrong? Or can she show that Gavin’s response is incorrect?
dhogaza says
I encourage people to read Judith Curry’s comment #185 two or three times.
My jaw drops closer to the floor on each re-reading.
Let’s see … since Dr. Curry was only regurgitating Montford’s arguments without stating that she agrees with them (bad news for Judith – the internets are all hooked up together, and you *have* supported his points elsewhere), Gavin’s responses are invalid because he addressed them to Judith rather than Montford.
Therefore Montford’s argument is strengthened. This is an ad hom argument – Judith attacks Gavin’s style, not substance, and without addressing a single factual point made by Gavin, claims victory for Montford as a result.
Meanwhile, though she insists she’s not stating whether or not she agrees with Montford’s points, she tells people “read the book”. Why, Judy, unless you think Montford’s claims are true?
Meanwhile my response to Judy … read your point #7. Then read Mann ’08. Then re-read your point #7 and, if it’s still not clear to you why Montford’s lying, repeat until it sinks in.
Thank you.
David B. Benson says
simon abingdon (186) — As much as 82?
Biased umpire, methinks.
Hank Roberts says
> summary of Montford’s book … not … my personal opinions
Shorter: There must be a pony in there somewhere! You do the shoveling.
Peter Webster says
Gavin
Irrespective of the issues raised, I think that yours was the rudest response to an alternative opinion that I have come across. Could I suggest a cold shower? To refer to someone as “bitching” is a very poor choice of words, you should know that. It is difficult to get to the substance of your response given your emotion. Do you not like contrary views?
Very disappointed. You have done better but this was a low point.
PW
[Response: Contrary views are fine, but really Peter, this is not a new issue, nor is interesting in scientific sense. Untruths, insinuations, accusations, and conspiracy mongering are not part of what scientific discussions should be about. Judith can spend time on that if she likes, but forgive me if I think it is huge waste of time. The comment about ‘bitching’ was not directed at Judith and I sincerely apologise if that was the impression given. I have amended the post accordingly. – gavin]
mike roddy says
Secular Animist’s post #39 is an important one, and points out something that many of us have talked about for months now:
Criminals hacked private emails, and managed to deflect any blame or curiosity away from themselves through members of the media eager to find a scientific “scandal”. The break-ins – including the aborted one at the University of Victoria – have been weakly investigated, and little pressure has been applied by the press to encourage prosecutors to determine the responsible parties.
Scientists’ response has been to commission reports to clear CRU of scientific wrongdoing, without realizing that these reports are ignored by the public, most of whom are still fixated on the “scandal” through corporate enabled organs of the press.
This entire affair should have produced much more fighting spirit from scientists than mere data vindication, which most of us here knew would happen anyway. Intimidation of scientists, and twisting of their results, is an extremely dangerous indication of totalitarian tendencies. Scientist victims of slander and misdirection should have responded far more aggressively in public fora, and demanded media and police accountability. Baseless assaults on scientists are one of many trends that indicate our precious democracy may be slipping away. If scientists don’t step up to defend their freedom of expression, they will wake up one day to find it disappearing.
And of course Andy’s bringing up the notion of complete public access to all scientists’ emails is ludicrous, and implies once again that it is the scientists who are to blame. As Hank and many others here have pointed out, this is an indefensible position to take.
Michael Ashley says
I am gobsmacked by the audacity of Judith Curry’s #185.
First, in #74 she claims that Tamino’s review has “numerous factual errors and misrepresentations, failure to address many of the main points of the book”. And then when her attempt to back up this statement in #168 is torn to shreds by Gavin she resorts to claiming that these weren’t “particularly” her “personal opinions”, but were just a framing of Montford’s points.
What a wimpy, pathetic backdown. Sorry to be so blunt, Judith, but when you make a claim that Tamino’s review has “numerous factual errors and misrepresentations” it behooves you to actually list the errors and defend your point of view. Don’t just make vague allegations and run away when challenged.
Unless you return and clarify your accusations, your credibility in the debate has now reached zero.
Didactylos says
Judith Curry has stopped pretending to be neutral, I see. All to the good! Concern trolls are always a pain.
“your attempt to rebut my points are full of logical fallacies and arguing at points i didn’t make. As a result, Montford’s theses look even more convincing.”
Logical fail! And she’s not even attempting to explain how Tamino or Gavin are wrong in any way whatsoever. [edit – please stay polite]
dhogaza says
Peter Webster:
This reader understands that false statements are not “contrary views”. They’re simply … false.
And serious claims of malfeasance such as those made by Montford should not be regurgitated by anyone who wants to be treated as a credible source unless they’ve taken the time to confirm whether or not Montford is telling the truth.
dhogaza says
And note, as predicted, it was a drive-by. There’s no evidence whatsoever that he’s read any of the responses pointing out that he’s wrong on the law, and I am certain that he’ll repeat this falsehood in the future.
John Mashey says
re: #168, item 6 (proxy independence)
I would guess that this comment goes back to:
a) McIntyre directly talking to Montford OR
b) The Wegman Report, specifically Figure 5.8 on p.46 (p.45 of the PDF), which may possibly have come from McIntyre.
That graph has many oddities, especially within the surrounding context, including the oddity of being part of a section ostensibly devoted to social network analysis, which is not the same topic.
1) First, the Wegman Panel (WP) knowledge of proxies was such that it mostly cut-and-pasted Bradley (1999), and made mistakes doing so. See #94 in this thread.
2) pp.67-92 of the WR have summaries of 16 “important” papers plus Mann’s dissertations. Of the total words in those 16 summaries, ~51% are cut-and-paste, in order, part of the total ~79% that bear “striking similarity” to the text of the articles they are summarizing, but also introducing many errors, meaning changes and biases. [The backup documentation for all this will appear fairly soon, with word-by-word highlighting of that entire section, which tends to make the changes in/near big blocks of cut-and-paste leap off the page.]
3) Nevertheless, Figure 5.8 shows an exhaustive knowledge of proxies, because to get that chart, one needs to have identified every proxy, and sorted out the different names used in different papers. Some of these are not instantly obvious. For example, the WR references a Jasper proxy, whereas one of the DWJ06 proxies is called Icefields. I happened to recognize that, since I’ve driven the Icefields Parkway on the way to Jasper … Anyway, it takes a *lot* of work to go through these papers and sort them out.
4) In the 12 headings of Figure 5.8:
5 are either summarized or seriously commented on.
1 (DWJ06) only appears on pp.46-47, with minimal comment.
5 are referenced only in passing on p.28 (list copied from Mann et al (2005))
1 (Briffa00) seems mis-referenced.
5) ClimateAudit of course has at least 24 posts on these proxies through early July 2006. Hence, this does raise the question: did the WP:
a) Do all this work themselves by reading the original papers?
b) Read Mcintyre’s posts, but not cite any of them?
c) Simply get the chart from McIntyre, again without citation?
I cannot know, of course, but some people do know…
6) But in any case, it does not matter, because the chart is misleading.
a) (Minor) The use of heavy black boxes is slightly unusual. Most people would just do a spreadsheet with X’s. It is slightly less work, but the black boxes are stronger visually.
b) (Major) The caption of Figure 5.8 includes:
“Indeed, the matrix outlined in Figure 5.8 illustrates the proxies that are used more than one time in twelve major temperature reconstruction papers. The black boxes indicate that the proxy was used in a given paper. It is clear that many of the proxies are re-used in most of the papers. It is not surprising that the papers would obtain similar results ”
“It is clear that many of the proxies are re-used in most of the papers” seems misleading. It is certainly not clear. At best, it seems strange, imprecise language for statisticians. Of 12 papers, “most” means at least 7, but only 9 of 43 proxies are used 7 or more times. Is 9/43 “many”? Most of the 43 proxies (22) are used in 2-3 studies.
BUT WORSE: note the careful wording “proxies used more than once”.
Of course, display of proxies used only once would weaken the argument, would it not? So, how many of those are there? They do not say. This is odd, because the algorithm for finding proxies used more than once (a minimal counting sketch appears after this comment) is:
a) Make a list of all proxies (after sorting out names).
b) Fill in the matrix of proxy vs study.
c) Only after the last study is done do you know for sure.
d) That gives you the list in Figure 5.8, with 43 proxies.
e) It leaves you X proxies used only once.
It was easy enough to check each paper for total proxy count, and subtract from that the number found in Figure 5.8, giving the number of single-use proxies. Guess what, they totaled 44 proxies. WR FIGURE 5.8 OMITTED MORE THAN HALF OF THE RELEVANT DATA. SPECIFICALLY THE HALF THAT WOULD HAVE ARGUED THE STRONGEST AGAINST THE CLAIM.
7) And of course, even that doesn’t matter much (as per Gavin’s comments). Good datasets are good datasets, and the world is full of overlapping studies that re-use them. “Independence” is hardly a binary term, and I cannot imagine anyone serious arguing that
“No science study is valid unless it uses completely distinct data from any previous study.”
Of course, if someone with serious field-knowledge wants to make the case, in credible peer-reviewed journals, that the frequently-used proxies are all wrong for some reason, and have that stand up, then that’s fine. But that doesn’t apply here.
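For concreteness, the counting exercise described in point 6 amounts to the following. The incidence matrix here is random placeholder data, not the actual contents of Figure 5.8.

```python
# A minimal sketch of the proxy-counting exercise in point 6: given a
# proxy-by-study incidence matrix, tally proxies used more than once versus
# exactly once. The matrix here is random placeholder data, not Figure 5.8.
import numpy as np

rng = np.random.default_rng(4)
n_proxies, n_studies = 87, 12
used = rng.random((n_proxies, n_studies)) < 0.2   # True if proxy i appears in study j

counts = used.sum(axis=1)                          # number of studies using each proxy
print("used more than once:", int((counts > 1).sum()))    # what Figure 5.8 tabulates
print("used exactly once:  ", int((counts == 1).sum()))   # what it omits
print("used in 7+ of 12:   ", int((counts >= 7).sum()))
```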
Doug Bostrom says
Is there any way of knowing that Judith Curry actually authored the comment of 24 July 2010 at 7:43 AM? The author’s point #7 is so recklessly incorrect that I have a hard time believing somebody with a such a generally good reputation would commit a careless error of that type in public.
[edit – it is she. Further speculation is OT]