Guest commentary by Darrell Kaufman (N. Arizona U.)
In a major step forward in proxy data synthesis, the PAst Global Changes (PAGES) 2k Consortium has just published a suite of continental-scale reconstructions of temperature for the past two millennia in Nature Geoscience. More information about the study and its implications is available in the FAQ on the PAGES website, and the datasets themselves are available at NOAA Paleoclimate.
The main conclusion of the study is that the most coherent feature in nearly all of the regional temperature reconstructions is a long-term cooling trend, which ended late in the 19th century and was followed by a warming trend in the 20th century. The 20th century in the reconstructions ranks as the warmest or nearly the warmest century in all regions except Antarctica. During the last 30-year period in the reconstructions (1971-2000 CE), the average reconstructed temperature among all of the regions was likely higher than anytime in at least ~1400 years. Interestingly, temperatures did not fluctuate uniformly among all regions at multi-decadal to centennial scales. For example, there were no globally synchronous multi-decadal warm or cold intervals that define a worldwide Medieval Warm Period or Little Ice Age. Cool 30-year periods between the years 830 and 1910 CE were particularly pronounced during times of weak solar activity and strong tropical volcanic eruptions, especially when both phenomena occurred simultaneously.
Figure: Thirty-year mean relative temperatures for the seven PAGES 2k continental-scale regions arranged vertically from north to south.
The origin of the ‘PAGES 2k Network’ and its activities can be found here; the network consists of nearly 80 individual collaborators. The Consortium’s collection of local expertise and proxy records was transformed into a synthesis by a smaller team of lead authors, but the large author list recognizes that the expertise of the wider team was essential in increasing the range of data used and in interpreting it.
In addition to the background available at the FAQ, I think it is important to also highlight some aspects of the analytical procedures behind the study and the vital contributions of three young co-authors.
The benefit of the ‘regions-up’ approach embodied in the PAGES 2k Consortium is that it made it easy to take advantage of local expertise and to include a large amount of new data that would have been more difficult to assemble for a centralized global reconstruction. However, being decentralized, the groups in different regions opted for different methodologies for building their default reconstructions. While justifiable, this does raise a question about the impact that different methodologies would have. To address this, the synthesis team (ably led by Nicholas McKay) applied three particular reconstruction methods to all of the regions, as well as looking at basic area-averaged and weighted composites. McKay further analyzed the site-level records individually and without many of the assumptions that underlie the regional temperature reconstructions. These results show that the long-term cooling trend and recent warming are dominant features of the dataset however you analyze it. A sizable fraction of the records do not conform to the continental averages, however, highlighting the spatial variability and/or the noise level in specific proxies.
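To make the “area-averaged and weighted composite” step concrete, here is a minimal sketch of a weighted proxy composite (purely illustrative; the function and array names are placeholders, and this is not the code used in the synthesis):

```python
import numpy as np

def weighted_composite(records, weights):
    """Illustrative weighted proxy composite.

    records : 2-D array, years x proxy records, each record already
              standardized (np.nan where a record has no value)
    weights : non-negative weights, e.g. proportional to the area
              each record is taken to represent
    """
    masked = np.ma.masked_invalid(records)
    # weighted mean over whichever records have data in each year
    composite = np.ma.average(masked, axis=1, weights=np.asarray(weights, float))
    return composite.filled(np.nan)
```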
One of the new procedures used to reconstruct temperature is an approach developed by Sami Hanhijärvi (U. Helsinki), which was also recently applied to the North Atlantic region. The method (PaiCo) relies on pairwise comparisons to arrive at a time series that integrates records with differing temporal resolutions and relaxes assumptions about the relation between the proxy series and temperature. Hanhijärvi applied this procedure to the proxy data from each of the continental-scale regions and found that reconstructions using different approaches are similar and generally support the primary conclusions of the study.
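To give a flavour of what “pairwise comparisons” means here, the toy sketch below tallies, for every pair of years that a record covers, which year that record says was warmer, and turns the accumulated votes into a unitless index. The real PaiCo algorithm is considerably more sophisticated (it handles differing temporal resolutions and derives the series by optimisation), so treat this only as an illustration of the basic idea, with made-up names throughout:

```python
import numpy as np

def pairwise_index(proxies):
    """Toy pairwise-comparison composite (not the actual PaiCo algorithm).

    proxies : list of 1-D arrays on a common annual time axis,
              with np.nan where a record has no value.
    Returns a standardized, unitless warmth index.
    """
    n_years = len(proxies[0])
    votes = np.zeros((n_years, n_years))
    counts = np.zeros((n_years, n_years))
    for rec in proxies:
        idx = np.where(~np.isnan(rec))[0]
        for a in idx:
            for b in idx[idx > a]:
                votes[a, b] += np.sign(rec[b] - rec[a])   # +1 if year b warmer
                counts[a, b] += 1
    # average vote that year b was warmer than year a, where any record covers both
    mean_vote = np.divide(votes, counts, out=np.zeros_like(votes), where=counts > 0)
    # net "warmth score": wins against earlier years minus losses to later years
    score = mean_vote.sum(axis=0) - mean_vote.sum(axis=1)
    return (score - score.mean()) / score.std()
```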
Regions where this study helps clarify the temperature history are mainly in the Southern Hemisphere. We include new and updated temperature reconstructions from Antarctica, Australasia and South America. The proxy records from these three regions come from many sources, ranging from glacier ice to trees and from lake sediment to corals. Raphael Neukom (Swiss Federal Research Institute WSL and University of Bern) played a key role in the analyses across the Southern Hemisphere. He used principal components regression (Australasia), a scaled composite (Antarctica), and an integration of these two approaches (South America) to create the time series of annual temperature change.
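A “scaled composite” is broadly the composite-plus-scale idea: average the records, then rescale that average to match the instrumental record over a calibration interval. A minimal sketch follows (illustrative only; the simple averaging and variable names are placeholders, not the specific procedure used for Antarctica):

```python
import numpy as np

def composite_plus_scale(proxies, instrumental, calib):
    """Minimal composite-plus-scale sketch (illustrative only).

    proxies      : 2-D array, years x records, standardized proxy values
                   (np.nan where missing)
    instrumental : 1-D array of instrumental temperatures on the same annual axis
    calib        : boolean mask selecting the calibration years
    """
    composite = np.nanmean(proxies, axis=1)        # simple average of records
    c_cal, t_cal = composite[calib], instrumental[calib]
    scale = np.nanstd(t_cal) / np.nanstd(c_cal)    # match variance over calibration
    return (composite - np.nanmean(c_cal)) * scale + np.nanmean(t_cal)
```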
Inevitably, assembling such a large and diverse dataset involves many judgement calls. The PAGES-2k consortium has tried to assess the impact of these structural decisions by using multiple methods, but we hope that this synthesis is really just the start of a more detailed analysis of regional temperature trends and we welcome constructive suggestions for improvements.
References
- PAGES 2k Consortium, "Continental-scale temperature variability during the past two millennia", Nature Geoscience, vol. 6, pp. 339-346, 2013. http://dx.doi.org/10.1038/ngeo1797
- S. Hanhijärvi, M.P. Tingley, and A. Korhola, "Pairwise comparisons to reconstruct mean temperature in the Arctic Atlantic Region over the last 2,000 years", Climate Dynamics, vol. 41, pp. 2039-2060, 2013. http://dx.doi.org/10.1007/s00382-013-1701-4
mike says
A discussion of some key related findings from our new paper in Journal of Climate [“Separating forced from chaotic climate variability over the past millennium”] can be found at my Facebook page
Dumb Scientist says
Fascinating article, but the FAQ and NOAA Paleoclimate and “found here” links don’t work.
[Response: Sorry – should be fixed now. – gavin]
Marco says
The first link (“published”) links back to this same page rather than the Nature Geoscience paper.
[Response: Direct link is here: http://www.nature.com/ngeo/journal/vaop/ncurrent/abs/ngeo1797.html , doi links are sometimes a little slower, but should be online soon. – gavin]
Dumb Scientist says
Yes, but Gavin just noted that the link isn’t live yet. I’m sure we’ll be the first to know when it is.
P.S. I’m watching Thin Ice now and it’s awesome so far. Congrats! Chasing Ice was also on TV last night, and so is available elsewhere too.
Nick Stokes says
I have posted a gadget that lets you interactively show plots of the individual proxy histories against a spaghetti plot background. It also shows various metadata from the archived material.
Louise Newman says
Great work – a huge amount of effort from all teams but great to see the results!
Frank says
Von Storch reported that an early method for creating reconstructions produced reconstructions with reduced variability when applied to pseudoproxies containing noise. Do we have any idea if this new PaiCo methodology is capable of reconstructing the true dynamic range of climate variability?
[Response: The Von Storch claims were wildly overstated in the first place. See e.g. this piece in Science by Wahl et al. Subsequent reconstruction work using RegEM with TTLS regularization is quite resistant to losses of low-frequency variance. See e.g. the various papers by our group over the past 5+ years here. And of course, I discuss all of this in my book “The Hockey Stick and the Climate Wars”. – mike]
Jens Raunsø Jensen says
Dr Kaufman: According to Fig. S1 in the supplementary information on the PAGES website referred to above, the PAGES 2k reconstructed temperature consistently overestimates global temperature in more recent decades by, say, 0.1 C. How has this been taken into account when concluding that “.. the average reconstructed temperature among all of the regions was likely higher than anytime in at least ~1400 years.”? (Sorry, but I cannot read the paper from my current location.)
OBothe says
With respect to Frank’s question in #7: The PaiCo paper states:
When SNR is small, the noise dominates the pseudo-proxy records and, therefore, the low-frequency variability is underestimated while high-frequency variability is overestimated. This is expected from any method since, in high-noise cases, the noise ‘‘overwrites’’ the information about the target in the pseudo-proxies and, therefore, no method can recover the low-frequency variability. When there is little noise, the difference in power spectrums is much smaller. It seems that the error of PaiCo can be decomposed to a slight overestimation of the millennial to centennial scale variability and a slight underestimation of the centennial to decadal scale variability. However, as shown in Fig. 4, the errors in the reconstructions are among the smallest of any reconstruction method in many of the tested settings.
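For anyone unfamiliar with the pseudoproxy jargon in that passage: a pseudoproxy is a known “target” series (typically from a climate model run) with noise added at a chosen signal-to-noise ratio, so a reconstruction method can be tested against a truth that is known. A toy sketch, with made-up names and SNR assumed here to mean the ratio of signal to noise standard deviations:

```python
import numpy as np

def make_pseudoproxy(target, snr, seed=0):
    """Add white noise to a known target series at a chosen signal-to-noise
    ratio (defined here as the ratio of signal to noise standard deviations)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=np.shape(target))
    noise *= np.std(target) / (snr * np.std(noise))
    return np.asarray(target) + noise
```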
Watcher says
Nick Stokes has done a real service with his “Active Viewer” gadgets, most recently for the PAGES-2k datasets. As someone trained as a physical scientist who has taken lots and lots of data in my time, I can’t help but be struck by the variability of each and every one of these proxies.
To my eye, flipping between the proxies from a given region shows there to be lots of noise both within a given series and between any pair of series. This is very evident for the various ‘CAN composite’ series, which from Nick’s map appear to be geographically close together. When I spend some time flipping between these it is really impossible for me to say with any confidence that they are measures of the same thing – whatever that thing might be – and that if they are, then the signal-to-noise ratio is minuscule.
If I had a student who brought data like this to me along with a statistical analysis I would send him back to the lab to get better data. Paleoclimate studies can’t do this, of course, but I’m wondering if it would at least be worthwhile to calculate correlations between proxies absent any underlying assumptions in order to assess whether the data has any value.
For example, wouldn’t one expect all the geographically close ‘CAN composite’ series to be more similar to each other than they are to, say central Asia? Shouldn’t it be a requirement that this be true before going ahead and throwing it into the mix? Otherwise it seems to me that garbage is being mixed with what is already a very small signal and I can’t see any good coming of that.
If I had to give an overall impression of the state of progress of paleoclimate reconstructions over the past 15 years I would say that it largely consists of taking more and more data and dumping it into ever more elaborate statistical procedures in the hope that a silk purse will emerge. It’s not at all clear to me that we wouldn’t be better off going back a step and trying to improve the data.
[Response: And what specific suggestions for “improving the data” do you have in mind? It is worth noting that in my experience, the correlation among weather station data is no higher than among similarly spaced paleoclimate data. To me, this says that the paleoclimate data quality is just fine – and that the noise is in regional vs. global climate, not in the quality of the proxies themselves. In short, we need more data, not “better” data (though better is fine, obviously). Getting more data, of course, is exactly what the authors of the PAGES 2k paper are trying to do. –eric]
Chris Goodall says
Darrell Kaufman says ‘For example, there were no globally synchronous multi-decadal warm or cold intervals that define a worldwide Medieval Warm Period or Little Ice Age’.
To a naive observer, this isn’t visually obvious. The dominant color at the year 1000 is at the warm end of the spectrum. The dominant color at the year 1650 is very much at the cold end. The temperature anomalies may not be global, in Dr Kaufman’s words, but certainly look very widespread indeed.
Am I misunderstanding the chart?
Chris Goodall
Lynn Vincentnathan says
Very meticulous and great work of a dedicated huge team.
If the image were of Vegas slot machine results, it looks like we’re about to hit the big jackpot.
Hank Roberts says
Linking’s a bit off still
— for readers if you don’t get one to work,
compare what you see hovering the mouse pointer over a link,
vs.
what you see highlighting the text containing the link and using “view source” in your browser.
Some links in text jump to footnotes at bottom of main post, which is fine.
Some links that are meant to go to another web site get “realclimate” stuck on the beginning, so although the correct URL is included it isn’t working.
I emailed a screenshot.
Ray Ladbury says
Chris Goodall,
You seem to have fallen into the same trap as the denialists–thinking that “close” counts as simultaneous, or that “a little warm” is as good as much warmer. Yes, it is true that there were several warm areas around the end of 1st millennium CE. However, the trend was not global, and the warming was nowhere near as strong as what we have seen recently.
Hank Roberts says
Ray, I think Chris Goodall’s point is the colors in the picture give a “warm” impression, and that the naive reader can be misled to “just look at the picture” — which isn’t clearly captioned.
Pointing out that it’s easy for a reader to be fooled isn’t necessarily a sign that he was fooled — it’s a caution to the editor and illustrator tho’.
the Abstract does say
“… The most coherent feature … is a long-term cooling trend, which ended late in the nineteenth century…. There were no globally synchronous multi-decadal warm or cold intervals that define a worldwide Medieval Warm Period or Little Ice Age ….”
But — lacking access to the full article text — I can’t be sure what those various colors represent. Does the same color for each proxy over the whole time span mean
— the actual temperature at that location (however interpreted)?
— the anomaly from a baseline for that proxy in that location, up to that time?
It’s probably easy to understand once you understand it, but it can be hard to get a picture that -by-itself- conveys what’s known.
I’d love to see what Robert Rhode would do with that if he gets globalwarmingart updated.
Hank Roberts says
PS — the paper’s data files are online at the Nature link, and there’s an extensive Supplement available to anyone who wants to look:
statistics, many more charts
This does help a lot figuring out what the single picture posted above means.
Non-Scientist says
Only partly on topic:
Was there not a peak global average temp around CE 1000, as reconstructed, equal to or a little warmer than today?
I thought the issue we have now is that we have sudden, unnatural warming and are headed to temperatures higher than those in the last 10K years due to total GHGs and cascade effects (such as darker poles due to melting ice).
I’ve seen numerous charts of historical temps and find the comparison of today with yrs around 1000 not entirely clear.
Hank Roberts says
> Non-Scientist
Did you look at the supplemental online info?
Look at the pictures there at least — look at
Figure S2 Proxy temperature reconstructions for the seven regions of the PAGES 2k Network.
Looking at that, it appears that yes, there was not a peak where you thought you heard about one.
Where did you hear the idea that such existed? There are local ones like the one in Europe most people know about, but that’s not global.
[Response: This is a common misconception! I have never seen any evidence to support a “global” MWP, but you hear it all the time, and it is definitely taken as fact in much of the older literature. But that was in the absence of evidence. We now have lots of evidence and yet no global MWP, the PAGES 2k paper just being one of the more recent looks at this. –eric]
Mal Adapted says
Non-Scientist:
Is this the information you’re looking for?
Watcher says
Re: #10:
By improving the data I’m suggesting attempting to screen out proxies which only (or mainly) contribute noise. Take for example S. America CAN Composite 11 and 31. An admittedly quick and “unscientific” visual comparison indicates a poor correlation between them. Yet they are separated by only roughly 1 degree. That’s roughly the distance between, say, New York and Washington, D.C. Now, clearly the weather in NYC and WDC is not going to be the same, but I would venture a guess that a cold year in NYC would more often be accompanied by a cold year in WDC than a cold year in, say, Moscow.
My thought was that if one knew what to expect then one might obtain a measure of the quality of the proxy series, and that having a number of closely spaced proxies might allow one to screen out data a priori which hurt rather than helped the analysis.
I guess I should frame it as a question: has anyone ever looked at something like this? Specifically, how does the correlation between proxies spaced by X km compare with the correlation between weather stations spaced by X km? Control for meteorological factors such as altitude, proximity to oceans, etc. would obviously have to be considered.
dhogaza says
“has anyone ever looked at something like this? Specifically, how does the correlation between proxies spaced by X km compare with the correlation between weather stations spaced by X km? Control for meteorological factors such as altitude, proximity to oceans, etc. would obviously have to be considered.”
Since you are a self-proclaimed physical scientist, why aren’t you searching the literature yourself, rather than trying to cast doubt without bothering?
“Goes to motive, yer honor”.
John Mashey says
Non-scientist:
The persistent idea that there was a “Big MWP”, i.e., a global MWP warmer than today, was essentially manufactured in 2005 by wide propagation of Fig 7.1(c) from IPCC (1990). Basically, it was a falsehood, but well publicized. For the history, see the Wiki Talk page. This really got going in March 2005, helped along a few months later by the Wall Street Journal.
I’ve found at least 7 versions of the IPCC (1990) graph on p. 202, none of which are the exact image, none of which gave any idea of the caveats around it, none of which admitted that the graph was heavily deprecated within a few years as unusable, and some instances of which claimed it was from 1995, in 3 instances specifically claiming it was Figure 22, which is nonexistent.
Philip Machanick says
The first link on the word “published” points to this page (to #ITEM-15406-0).
[Response: Clicking on that will point you to the full reference at the bottom of the post, which then has a link to the published paper here –eric]
On MWP: there’s a contrarian site whose URL I forget (co2science is in the name so you can find it easily) that has amassed papers purporting to show that there is a conspiracy to suppress the MWP. That they found so many papers makes for an odd suppression conspiracy. I selected the papers they claimed had the most reliable data sets, and the dates don’t line up. So it’s not a huge surprise that someone doing good science found the same thing.
If contrarians really were sceptics, they would pick this sort of thing up themselves rather than go to enormous lengths to accumulate data that undermines their case, then persist with their claims.
Watcher says
Re: #21
“Since you are a self-proclaimed physical scientist, why aren’t you searching the literature yourself, rather thn trying to cast doubt without bothering?”
Quite simply because I don’t get paid for it and don’t have the inclination to do so.
If there weren’t a problem with signal-to-noise ratio in paleo data then the extraordinary statistical efforts in each and every study I’ve looked at would be unnecessary. If this sort of study hasn’t been done already I think it would be effort well spent for someone in the field, probably worth a paper or two in its own right, and should contribute to improving confidence in the resulting reconstructions.
In my field I have the luxury of being able to integrate experiments for 24 or even 48 hours to overcome poor SNRs, but do so only as a last resort after trying everything I can think of to get a “clean” signal. All I’m doing here is suggesting something that occurred to me that might help approach that goal in paleo studies.
As for casting doubt: even though that was not my intent in this case I must insist that is the essence of a scientist’s job. No theory has ever been proved “right”. The best we can do is to come up with hypotheses which don’t appear to be wrong. If we — or better still others — work hard enough to disprove them and still can’t, then we get to call them theories.
Hank Roberts says
CO2science … climate misinformation
Ray Ladbury says
Watcher,
The problem is that if you use only “clean” data, you cannot say very much. It’s been done. Long ago. We are now in a situation where we are looking back thousands of years–those datasets will of necessity be noisy. That does not, however, make them worthless. They still contain information–signal–and it is information we cannot get any other way.
As we say in physics–all the easy problems have been solved, and they were solved before they were easy.
Paul S says
A question (or a few) about the ‘Sign relation’ attribute in the proxy metadata: This is presumably supposed to represent the relationship between the proxy data and temperature, such that a negative sign relation would mean higher proxy values indicate lower temperature?
If that is the case, has this sign relation been pre-applied to standardise the proxy data stored in the spreadsheets? I ask in relation to the Canadian tree-ring proxies, having been pointed in that direction by Watcher, across which there is a robust low signal in the early 1800s, coincident with the 1815 Tambora eruption. However, some of these proxies are listed with a negative sign relation. Does that mean the low will produce a warm spike rather than a cold spike when calibrated for temperature? Or am I barking up the wrong… well, you know?
John Mashey says
Ray Bradley’s Paleoclimatology (1999) is still pretty useful for an introduction to the challenges of paleoclimate reconstructions and ways of dealing with them. (That’s the book plagiarized and then falsified in the Wegman Report in 2006.)
I’m lucky to have known or worked with many *good* scientists, most of whom refrain from disparaging an unfamiliar field before first learning enough about it for their comments to be relevant. Anonymous Dunning-Kruger Effect is alive and well.
Watcher says
Re 26:
“The problem is that if you use only “clean” data, you cannot say very much. It’s been done. Long ago.”
Well, I can’t say I’ve ever seen a paleo study that uses what I would call “clean” data, though I admit to being a dilettante in the area. What was the source of the data and how was it known to be clean?
Getting back to my original point, which I should probably soften. Rather than checking whether the data “has any value”, maybe I should say it might give an estimate of how much value it does have. Because the discussions here have forced me to think this through a bit (funny how that works!): if one takes several years’ worth of data from weather stations spaced some distance apart, some measure of correlation between their annual average temps can be calculated, the more years the better. A correlation can also be calculated from proxies located a similar distance apart. In the best case they would be the same number, but in reality the proxies will almost certainly be worse. Just how much worse should be a measure of the noise in their signal. Have there been studies like this?
Perhaps close-proxy correlations could be used to generate weighting factors when used in a reconstruction, or perhaps to provide a threshold criterion for whether they should be included.
Watcher says
Re 28:
I’m sorry but what direction does this come from?
I’m lucky to have known or worked with many *good* scientists, none of whom are afraid of asking “dumb” questions in order to familiarise themselves with an unfamiliar field.
Hank Roberts says
> Watcher
he said
disparaging … before first learning
Dumb questions are part of first learning.
Disparaging while first questioning?
Not so useful.
Hank Roberts says
disparaging climatology is about what you’d expect. See any productive first-learning going on there?
Ray Ladbury says
Watcher,
Principal component analysis and the other techniques do basically what you are talking about. Remember, though, you are doing a reconstruction over thousands of years. Sometimes the reason a problem is hard is because it is inherently difficult.
And there is a big difference between asking dumb questions to familiarize yourself with a new field and implying that those in the field don’t know what they are talking about. In the words of St. Patrick, “Oh, Lord, let my words be sweet and gentle today, for tomorrow I may have to eat them.”
EFS_Junior says
Watcher,
Not an expert by any means, but …
If I’m not mistaken, most of the paleo datasets have multiple samples taken in close proximity to each other (spatially). Marine samples immediately come to mind. In that way, you get some idea of the statistical uncertainty for each proxy location. In the case of EPICA, two ice cores were eventually obtained.
Spatial autocorrelation works best at the high end of the frequency spectrum (hours, days, months or years), but most (if not all) paleo data is essentially a low pass filter, since the measurements are not instantaneous by the very nature of proxy deposition (e. g. air pocket close out depth for ice cores (snow -> firn -> ice), diffusion, etc.).
It’s not like those doing paleo studies have overlooked the obvious (e. g. are the proxy data similar to each other at the locations where those specific samples were taken).
Watcher says
Re 34:
OK, at great personal risk because there seem to be some thin skins around here….
I originally referred specifically to datasets taken in close proximity, which to my inexpert eye did not look very similar at all. Dr. Steig responded with
“in my experience, the correlation among weather station data is no higher than among similarly spaced paleoclimate data”
In retrospect I suppose this could mean that a) neither proxies nor weather stations correlate well, or b) that they both do. Given my data pair example and the way he phrased it I took his comment to indicate the former sense is what was meant.
I find it surprising that closely spaced weather stations do not correlate well. I frequently do short-hop travel between cities with roughly a 1 degree lat/lon spacing, and I find the correlation pretty good as long as a day or two is allowed for weather systems to travel. A cursory scan of the weather channel leads me to the same conclusion. I would expect the effects of a few days lag would disappear over the annual or greater time scales of proxy data.
I’m sorry if I sound pig-headed and acknowledge that my anecdotal experience can hardly be considered definitive. If it is the case that this issue has been dealt with in the literature, then perhaps someone could point me to an appropriate reference and we’ll call it a day. If not, I still think it would be great use of a grad student’s time to at least explore the area.
Marco says
Philip @23:
I actually have repeatedly asked pseudoskeptics pointing to the CO2science website to make such a graph. Usually silence ensues, because they realize that suppressed knowledge can’t be hiding in plain sight.
A website devoted to debunking the nonsense of Joanne Nova (Australian website) once showed a few examples of the lengths to which the Idsos go to misinform their readers:
http://itsnotnova.wordpress.com/2012/09/03/novas-warm-period/
MARodger says
Re Response @36.
Contrarians find it difficult to argue against AGW when faced with even a single hockey stick, which is why they try so hard to break any they come across. One attack-route is to harness the good old IPCC FAR figure 7.1c as their evidence and to claim the LIA & MWP have been wrongly airbrushed from the record. Thus it can be argued that contrarians began ‘manufacturing’ the MWP after the 2001 IPCC TAR gave the Mann et al 1999 graphic widespread publicity. The use of fig 7.1c is usually pretty childish stuff, as this debunking of William Happer demonstrates.
Hank Roberts says
Looked at this?
Tree-Ring Research 69(1):3-13. 2013
doi: http://dx.doi.org/10.3959/1536-1098-69.1.3
KNMI Climate Explorer: A Web-Based Research Tool for High-Resolution Paleoclimatology
Hank Roberts says
http://www.treeringsociety.org/TRBTRR/TrouetVanOldenborghTRR69-1SupplementaryMaterial.pdf
Watcher says
It looks like the BEST folks had a look at correlations between weather stations. A summary graph can be found on page 17 of the BEST Methods paper’s appendix. Whether the time scales they used would be appropriate for a proxy evaluation I can’t say, but they show correlations of 0.5 or so for 1000 km separations.
If proxies 1000km apart are perfect thermometers we should expect a correlation similar to weather stations as calculated on an appropriate timescale. Unless I’m missing something this offers an independent means of assessing the quality of closely grouped proxies.
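For anyone who wants to experiment, a minimal sketch of the kind of correlation-versus-distance calculation described in the BEST appendix might look like the following (the 10-year overlap threshold follows BEST’s stated procedure, but the inputs, names and distance binning here are all made up):

```python
import numpy as np
from itertools import combinations

def correlation_vs_distance(series, lat, lon, bins_km):
    """Mean pairwise correlation of annual series, binned by separation distance.

    series   : 2-D array, years x sites, annual anomalies (np.nan = missing)
    lat, lon : site coordinates in degrees
    bins_km  : 1-D array of distance bin edges in km
    """
    R = 6371.0  # Earth radius, km
    phi, lam = np.radians(lat), np.radians(lon)
    sums = np.zeros(len(bins_km) - 1)
    counts = np.zeros(len(bins_km) - 1)
    for i, j in combinations(range(series.shape[1]), 2):
        ok = ~np.isnan(series[:, i]) & ~np.isnan(series[:, j])
        if ok.sum() < 10:            # require at least 10 overlapping years
            continue
        r = np.corrcoef(series[ok, i], series[ok, j])[0, 1]
        # great-circle distance between the two sites
        d = R * np.arccos(np.clip(
            np.sin(phi[i]) * np.sin(phi[j]) +
            np.cos(phi[i]) * np.cos(phi[j]) * np.cos(lam[i] - lam[j]), -1, 1))
        k = np.searchsorted(bins_km, d) - 1
        if 0 <= k < len(sums):
            sums[k] += r
            counts[k] += 1
    return np.where(counts > 0, sums / counts, np.nan)
```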
flxible says
“If proxies 1000km apart are …” at different elevations, or rural vs urban, or on hilltops vs small valleys …. one wouldn’t expect perfect correlation [thermometers or proxies], even if on the same latitude line. What exactly are you questioning, Watcher? The scientists studying temperature records and reconstructions have a pretty good handle on the complexities of their field.
Dave123 says
Watcher- are you talking about correlation in the anomalies (differences from baseline) or the actual temperatures… teleconnection (the expected correlation of stations to each other in space) to my understanding is more for anomalies, not for temperatures.
Watcher says
Re: 41
“at different elevations, or rural vs urban”
Quite correct, which is why back in #20 I mentioned “Control for meteorological factors such as altitude, proximity to oceans, etc. would obviously have to be considered”. In the BEST plot they say they used random pairs of stations, which is probably why the scatter is so large. I would venture a guess that if one did correct for conditions the correlation would be even stronger.
I mean, think about it: when you check the weather on TV they show the fronts moving one way or another and they know if it will be warmer or colder tomorrow at your location by what happened in the neighboring state today.
Re: 42
“more for anomalies, not for temperatures”
Correlation between stations, period. Weather stations measure temperature, not anomalies (i.e. we’re tossing out all the data on humidity, pressure, etc) so that’s what you need to assess.
Hank Roberts says
> I would venture a guess that if one did correct for conditions …
Watcher — What do you think they did?
Why do you think that?
What’s your source for your belief about the BEST procedure?
Here’s what I found with a quick search:
http://www.physicstoday.org/resource/1/phtoad/v66/i4/p17_s1
Earth’s land surface temperature trends:
A new approach confirms previous results
Barbara Goss Levi
Physics Today / Volume 66 / Issue 4 / April 2013, page 17
http://dx.doi.org/10.1063/PT.3.1936
“… The newcomers to the task looked at many more weather stations and used a geostatistics technique to adjust for data discontinuities….”
“… write the temperature measurement for a given place and time as the sum of four terms: an average global temperature Tavg; the positional variation caused by latitude or elevation; the measurement bias, or offset variable; and the temperature associated with local weather….”
Watcher says
Re 44:
The caption to the figure itself.
“Figure 1: Mean correlation versus distance curve constructed from 500,000 pair-wise comparisons of station temperature records. Each station pair was selected at random, and the measured correlation was calculated after removing seasonality and with the requirement that they have at least 10 years of overlapping data.”
Seems clear enough. I don’t know how to post figures so the best I can do is direct you to page 17 of the link in #40.
I’ve made no comment at all about their reconstruction, which has nothing to do with proxies. I simply happened across their station correlation data which I presented as support for my hypothesis that closely spaced weather stations should be better correlated than more distant ones.
Non-Scientist says
> Hank and Mal Adapted:
That is the info I needed, especially the link to the popularized form on SkS. I can’t follow a full paper, due in some part to time but mostly to health limitations. I like to know where to find it tho.
The notion of a medieval warm period is so common I thought it was generally accepted. The term is used on Skeptical Science in the link but I could have picked it up from elsewhere. I’ll hazard a guess you could find it in Scientific American if you go back a ways.
I’m having trouble reading the graphs in that link though: in Mann 2008 it looks like a 2/10ths excursion around yrs 800 and 1000; in the later PAGES it looks smaller, about 0.1 C, and only around yr 800. I am not sure which figure to use the next time someone tells me “herr, it wuz hotter when the Vikings xxx”.
David B. Benson says
Non-Scientist @46 — How about the James Ross Island ice core proxy?
Hank Roberts says
> medieval warm period is so common I thought it was generally accepted …
It was, for a while, for the places it described.
But: no Medieval Period in South America, or Asia, or …
They have their own historical eras, and climate records.
You get the point.
The Medieval Period was European, and not all of that, limited in extent geographically as well as across time. Some locations thereabouts were warmer.
When those were the only data points they had, that’s all they talked about.
That was then, there.
Hank Roberts says
> I’m having trouble reading the graphs in that link … 2/10th … .1 c
Don’t try to get numbers off of pictures. The data is there, look for it, you want the numbers used to create the picture.
Trying to get quotable, citable, reliable numbers off a picture is fraught.
Ray Ladbury says
Nonscientist,
Actually, the denialists are playing to prejudices of those of European descent. There was definitely a warm period in Europe in this era, and since European history is all most people will have any familiarity with–even most historians–it is easy to get people to view it as global.
Denialists always hope our native stupidity is transferable to the particular stupid thing they want us to believe.