What makes science different from politics?
That’s not the start of a joke, but it is a good jumping-off point for a discussion of the latest publication on paleo-reconstructions of the last couple of millennia. As has been relatively widely reported, Mike Mann and colleagues (including Ray Bradley and Malcolm Hughes) have a new paper out in PNAS with an update of their previous work. And this is where the question posed above comes in: the difference is that, with time, scientists can actually make progress on problems; they don’t just get stuck in an endless back-and-forth of the same talking points.
We discussed what would be required in an update of these millennial reconstructions a few months back, and the main principles remain true now. You need proxies that are a) well-dated, b) have some fidelity to a climate variable of interest, c) have been calibrated against that variable (or variables), d) have been composited together somehow, and e) yield a composite that has been validated against the instrumental record.
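As a concrete illustration of steps c) and e), here is a minimal sketch of calibrating a single synthetic proxy against an instrumental target and then checking its skill over a withheld validation interval. Everything here (the series, the split dates, the simple least-squares fit) is an illustrative assumption rather than the paper's actual procedure; the reduction-of-error (RE) score is one validation statistic commonly used in this literature.

```python
# Minimal calibrate-then-validate sketch with synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 1996)
temp = 0.004 * (years - 1850) + 0.1 * rng.standard_normal(years.size)   # instrumental target
proxy = 2.0 * temp + 0.2 * rng.standard_normal(years.size)              # synthetic proxy record

cal = years >= 1900   # calibration interval
val = ~cal            # withheld validation interval

# Calibrate: regress the target on the proxy over the calibration interval
slope, intercept = np.polyfit(proxy[cal], temp[cal], 1)
recon = slope * proxy + intercept

# Validate: reduction-of-error (RE) skill relative to the calibration-period mean
resid = temp[val] - recon[val]
re = 1.0 - np.sum(resid**2) / np.sum((temp[val] - temp[cal].mean())**2)
print(f"validation RE = {re:.2f}")   # RE > 0 indicates skill beyond climatology
```

A real reconstruction would repeat this kind of check across many proxies and reconstruction steps, and would typically report additional statistics such as CE and r² alongside RE.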
The number of well-dated proxies used in the latest paper is significantly greater than what was available a decade ago: 1209 back to 1800; 460 back to 1600; 59 back to 1000 AD; 36 back to 500 AD and 19 back to 1 BC (all data and code are available here). This is compared with 400 or so in MBH99, of which only 14 went back to 1000 AD. The increase in data availability is a pretty remarkable testament to the increased attention that the paleo-community has started to pay to the recent past – in part, no doubt, because of the higher profile this kind of reconstruction has achieved. The individual data-gatherers involved should be applauded by all.
The increase in proxy records allows a whole bunch of new things to be done. First off, the importance of tree rings can be tested more robustly. With the original MBH98 proxies, there was only enough other data to go back to 1760 if you left out the tree rings. The match was pretty good over multi-decadal periods, but the interannual variability was much larger without tree rings. Now, though, the Northern Hemisphere land temperature reconstructions without tree rings can go back to 1500 AD or 1000 AD, depending on which of two methodologies is used. For the NH land and ocean target, it’s even possible to get a coherent non-tree-ring reconstruction back to 700 AD! As before, there are some differences (notably in the 17th Century, where the tree rings indicate colder temperatures), but the recent warming is anomalous regardless.
Secondly, you can screen records and pick targets more finely: do you want only records that match local temperatures? Done. You want to get a handle on global and southern hemisphere means as well as the northern hemisphere? Done. Other screens could easily be implemented.
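To make the screening idea concrete, here is a hedged sketch of what a simple correlation screen might look like: keep only those proxy records whose correlation with their local instrumental temperature over the calibration period clears a threshold. The data structures, the threshold, and the treatment of sign are illustrative assumptions, not the paper's exact criteria.

```python
# Sketch of a correlation screen against local instrumental temperature (illustrative).
import numpy as np

def screen_proxies(proxies, local_temps, r_min=0.3):
    """proxies and local_temps: dicts mapping a site id to equal-length 1-D arrays
    covering the calibration period. Returns only the records that pass the screen."""
    passed = {}
    for site, series in proxies.items():
        r = np.corrcoef(series, local_temps[site])[0, 1]
        if abs(r) >= r_min:   # keep proxies with a usable local temperature signal
            passed[site] = series
    return passed
```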
The two methodologies themselves span the range of approaches that people have used. ‘Composite and scale’ (CPS) is perhaps the simplest method – it is basically an average of all the temperature proxies, scaled to the target time series. The other method is denoted ‘Error-in-variables’ (EIV) in this paper, but is really a simplified application of the RegEM climate field reconstruction method used in a couple of more recent papers. It is essentially a fancy multiple regression against the target time series that can incorporate non-local proxies as well. The point of using two methods is to demonstrate what is, and what is not, robust, and to give an idea of the structural uncertainty in these estimates – something not easily calculated using standard statistics. That uncertainty is clearly larger as you go back in time, and larger still for the southern hemisphere.
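To give a flavour of how simple CPS really is, here is a bare-bones sketch: standardize each proxy over the calibration window, average the standardized series, and rescale the composite to the mean and variance of the instrumental target over that window. Area weighting, smoothing, and error estimation are all omitted; this is an illustration of the idea, not the paper's code.

```python
# Bare-bones 'composite and scale' (CPS) sketch (illustrative assumptions throughout).
import numpy as np

def cps(proxy_matrix, target, cal):
    """proxy_matrix: (n_years, n_proxies) array of proxy values;
    target: (n_years,) instrumental series (only calibration-period values are used);
    cal: boolean mask selecting the calibration window."""
    # Standardize each proxy over the calibration window, then composite
    z = (proxy_matrix - proxy_matrix[cal].mean(axis=0)) / proxy_matrix[cal].std(axis=0)
    composite = z.mean(axis=1)
    # Rescale the composite to the target's calibration-period mean and variance
    scaled = (composite - composite[cal].mean()) / composite[cal].std()
    return scaled * target[cal].std() + target[cal].mean()
```

The EIV approach is considerably more involved (an iterative, regularized regression over the joint proxy/instrumental covariance, as in RegEM), which is why no comparable one-liner is sketched here.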
Other improvements over previous work are that more proxy data sets extend past 1980, so calibration up to 1995 is possible. That allows more of the recent trends to feed into the calibration and highlights the so-called divergence problem in some (but not all) recent tree-ring records. That divergence is significantly lessened without tree rings or when using the EIV method.
Figure: Spaghetti plot of the new reconstructions over (a) the last 1800 years and (b) the last 1000 years, along with selected older ones for comparison.
So what does it all mean? First off, this paper (like MBH98 before it) is not an attribution study. That means that the reasons for any of the ups-and-downs in the records are not demonstrated by these papers alone. Attribution of the recent trends (as discussed in IPCC AR4) to anthropogenic effects has mostly focussed on the last 150 years and did not use any paleo-data. Nonetheless, there have been a couple of key studies that have used this kind of data along with simple energy balance models (Crowley, 2000; Hegerl et al, 2006 for instance) and it will be interesting to see if this new reconstruction will make any difference to their conclusions.
Secondly, in comparison with previous reconstructions, the current analysis does not provide many surprises. Medieval times are warmer than the Little Ice Age as before, and a little warmer using the EIV method than was the case in MBH99. The differences in the 11th Century are on the order of a couple of tenths of a degree – well within the published error bars in IPCC TAR though. Interestingly, there are quite rapid and strong drops in temperature near 1100 AD and around 1350 AD which may make interesting case studies for attribution to solar or volcanic forcings in future. Overall, there are a few more wiggles than before, but basically nothing much has changed. (Though one should always be aware of the maxim that one person’s noise is another person’s signal).
Finally, while the headline numbers ‘likely warmest since XXXX’ are of some contextual value, they aren’t the real point of this kind of study. Most of the interesting work – looking for patterns associated with solar forcing, say – will begin when the spatial patterns of temperature change can be discerned – and that is still a work in progress.
So, onto the inevitable discussion! One test of whether that discussion is more political than scientific will be the extent to which people acknowledge the progress that has been made. Repetitions of tired and oft-debunked one-liners will be telling!
Kevin McKinney says
Ask (Hank) and ye shall receive. A different picture than the last one I encountered–thank you!
for4zim says
#91 Muff Potter: Kehl is a well-known denialist, and his pages are a preferred choice for his ilk because of the appearance of a serious collection of sources and data. A closer look reveals that the collection is very one-sided (“Mann’s hockey-stick was found to be false”, “Moberg et al and Loehle provide a correct impression”).
Nathan Kurz says
#100 Dave
Thanks for your responses, Dave. I agree that a presumption of bad faith is an almost insurmountable impediment, and one needs to remove this obstacle if one hopes to start communicating again.
Gavin, I realize it’s not your responsibility to patrol the skeptic hordes, but could you offer a quick summary of how the data set has been updated and where these changes are recorded? Is there a “readme” file somewhere of the sort that Dave refers to? I think (hope?) that McIntyre would happily “move on” and apologize after a clear statement that you were acting in good faith. It’s sad that it’s necessary to make such statements, but I think it is worth it if it helps people to concentrate on the science rather than the accusations.
[Response: What is the point? The presumption will be that I’ve just made something up and even if I didn’t, I’m a bad person in any case. I have no interest in communicating with people whose first and only instinct is to impugn my motives and honesty the minute they can’t work something out (and this goes back a long way). Well, tough. You guys worked it out already, and I have absolutely nothing to add. If McIntyre was half the gentleman he claimed to be, we’d all be twice as happy. – gavin]
John Mashey says
re: 103 Nathan
Are you familiar with the Data Quality Act, and what it was really intended to do?
Put another way, there’s a line between asking questions, poking at data, looking for code, to do science as normal science …
and engaging in activities designed to use up scientists’ time so they *can’t* do science. The Data Quality Act falls in the latter category… that’s what it was for, and certain people follow that strategy as well.
[Chris Mooney’s “The Republican War on Science” and David Michaels’ “Doubt is Their Product” are good sources on the DQA.]
Hank Roberts says
Those who believe in demons _do_ see them. Look at the result — stuff like the Data Quality Act:
http://scholar.google.com/scholar?q=%22data+quality+act%22
Philip Machanick says
Several people have asked whether any other continent suffered an indigenous die-off similar to that in the Americas (or have claimed that it happened nowhere else).
Australia did, with estimates of at least 50% of the population being wiped out by smallpox and other imported causes. The actual numbers are not accurately known, and the range of estimates is wide (see e.g. Wikipedia).
There were also select indigenous groups in South Africa that were almost wiped out by smallpox.
These were all relatively isolated populations without exposure to domestic animals that may have co-evolved diseases and immunities with people.
RichardC says
I’d like to see two things – first, a summation graph with perhaps 95% error ranges (ie, a three-line graph), and second, the same thing for the entire planet. Anyone have anything like that?
Barton Paul Levenson says
Philip, I didn’t say die-offs due to smallpox didn’t happen anywhere else. Kindly don’t put words in my mouth. What I said was that the 90% figure strikes me as very unlikely — and your citing 50% elsewhere does nothing to contradict that; in fact it strengthens my argument.
Mango says
Any chance you could update your article on the Hockey Stick for Dummies to include the latest findings?
dhogaza says
Not really. In most of the US, native population densities were higher than in Australia, which has a much higher percentage of desert than the US. I base this on the fact that desert population densities of Native Americans here in the US were much lower than elsewhere, and I see no reason for it to have been different in aboriginal Australia (indeed, it’s still true today, in both countries).
Lower population densities make it more difficult for disease to spread.
Regarding smallpox, I think one can replace “may have” with “must have”. After all, the world’s first vaccine came about because an astute Englishman noticed that dairy workers had a lower incidence of smallpox than typical populations, and connected this with exposure to cowpox. Intentional vaccination with live cowpox was then introduced to successfully combat smallpox.
Manny, in Moncton says
Did you censor my question?
Three days ago, I asked why I could not reproduce the CRU red line with the data from the CRU itself. Where is it?
I have another question: how can the red line, smoothed over 40 years, reach 2006? Shouldn’t it stop at 1986?
[Response: Your question was answered way above (the target was CRU NH Land) and how the smoothing was done was explained in the paper (see Mann (2008)). – gavin]
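For readers puzzled by the same point: a centered 40-year smooth can be carried right to the end of a record if the series is padded or otherwise constrained at the boundary before filtering. The toy sketch below uses simple reflection padding, which is just one illustrative choice and not necessarily the boundary constraint actually used in Mann (2008).

```python
# Toy illustration: a centered running mean that reaches the ends of the series
# by reflecting the data about its endpoints (one possible boundary choice).
import numpy as np

def smooth_to_end(x, window=41):
    half = window // 2
    padded = np.concatenate([x[half:0:-1], x, x[-2:-half - 2:-1]])  # reflect both ends
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")  # same length as x
```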
R James says
I’d like to see this graph with an additional 2,000 years of history on it. I’d also like to see a global plot, rather than just the northern hemisphere. As it stands, it doesn’t show the complete picture. I believe it would put things better into perspective, and further show that the current pattern is nothing unusual.
[Response: The global picture is very similar, but with a little more noise due to less data availability in the south, and we’d all like to see another 2000 years – unfortunately the analogous data is just too sparse. – gavin]
Philip Machanick says
Barton, I don’t know why you think I’m attacking you. I was just adding some additional data.
The 50% figure I quoted is an absolute minimum. The more likely range is significantly higher, but I only have time to look this up on Wikipedia, where the article is currently a mess. There is evidence that the Aboriginal die-off in Australia could have been as high as 90%. Statistics from that era are poor, as Aboriginal Australians were officially classified as fauna until the 1960s (if you can believe that). We are pretty sure of such numbers from isolated populations like the Khoi people in South Africa, who were similarly isolated from diseases that had migrated to Eurasians from domesticated animals, so it is plausible that it could happen on a continental scale.
dp says
Interesting about that drop around 1350. Scientists should look at historical records. They state that temperature drops around then caused famines and left the population of Europe weakened when the Black Death struck. In England the fall in temperatures at the time is attested to in monastic chronicles.
llewelly says
Brian Fagan covers this in two of his books – I think The Long Summer: How Climate Changed Civilization and The Little Ice Age: How Climate Made History.
Both highly recommended, wonderful discourses on the connections between climate and history.
M. Potter says
#112
Since the spaghettis start at about 200 A.D., what about the time period sometimes called the “Roman Optimum” (200 B.C. to A.D. 400)? Any chance to get upper and lower bounds for global or NH temperature? And I dare to ask, what about the so-called Holocene climatic optimum (9000–5000 BC)?
[Response: These are collections of data that are either annually or decadally resolved and well-dated. Unfortunately that kind of data gets rarer as you go back in time, and so whether the Roman Optimum is real and what its spatial extent was will remain uncertain for some while. On longer time scales (multi-millennia) such dating accuracy isn’t as important, and so coarser and less well-dated proxies are useful. There is a review paper on exactly this from Heinz Wanner and colleagues in press at the moment. – gavin]
Briso says
>Now a really stupid question. It looks to me like the only lines which go above the early highs generated from proxy data (~960 AD) are the instrumental record data. Does this not show that the proxy data suggests warmer times in the past than during the more recent proxy period? Comparing that to instrumental data is apples and oranges, no?
>>[Response: No. The proxies are calibrated to the instrumental target just so that they will be comparable. – gavin]
I’ve been looking at the paper again and trying to understand it. First, an important quote in the context of the AGW issue.
“Because this conclusion extends to the past 1,300 years for EIV reconstructions withholding all tree-ring data, and because non-tree-ring proxy records are generally treated in the literature as being free of limitations in recording millennial scale variability(11), the conclusion that recent NH warmth likely** exceeds that of at least the past 1,300 years thus appears reasonably robust. For the CPS (EIV) reconstructions, the instrumental warmth breaches the upper 95% confidence limits of the reconstructions beginning with the decade centered at 1997 (2001).”
Further down on the same page (italics added by me):
“Peak Medieval warmth (from roughly A.D. 950-1100) is more pronounced in the EIV reconstructions (particularly for the land-only reconstruction) than in the CPS reconstructions (Fig. 3). The EIV land-only reconstruction, in fact, indicates markedly more sustained periods of warmer NH land temperatures from A.D. 700 to the mid-fifteenth century than previous published reconstructions. Peak multidecadal warmth centered at A.D. 960 (representing average conditions over A.D. 940–980) in this case corresponds approximately to 1980 levels (representing average conditions over 1960–2000). However, as noted earlier, the most recent decadal warmth exceeds the peak reconstructed decadal warmth, taking into account the uncertainties in the reconstructions.”
OK, some questions.
1. Does the EIV reconstruction represent a forty year moving average as suggested by the part I italicized?
2. Does the instrumental record shown on the graph in Fig 3 represent a forty-year moving average? I say no, because such a plot would have an end point in 1987. It looks like a five year moving average perhaps?
3. It is true that the upper 95% confidence level of the peak warmth centered at A.D.960 of the EIV land-only reconstruction is approximately 0.4. I assume that this means that peak of the five year average temperature at that time would have been considerably higher?
4. Is it not true that whatever the red line in figure 3 is, it is an apple being compared to a pear?
5. “Peak multidecadal warmth centered at A.D. 960 (representing average conditions over A.D. 940–980) in this case corresponds approximately to 1980 levels (representing average conditions over 1960–2000).” Corresponds approximately? Shouldn’t that be exceeds significantly? If my figures are right, 1960-2000 HadCrut NH 40 year average – 0.06 (98-08 app 0.17), 960 PMW central – app 0.25, 960 PMW upper 95% – 0.4?
6. Does this paper really show that “recent NH warmth likely** exceeds that of at least the past 1,300 years”?
Briso says
In point 5 of my previous post I should have written “(68-08 app 0.17)”. Sorry about that.
Barton Paul Levenson says
Briso writes:
Yes.