Hank Roberts says
http://www.cbsnews.com/news/hotter-weirder-how-climate-change-has-changed-earth/
… according to the Centre for Research on the Epidemiology of Disasters in Belgium, which tracks the world’s catastrophes.
While climate-related, not all can be blamed on man-made warming or climate change. Still, extreme weather has noticeably increased over the years, says Debby Sapir, who runs the center and its database. From 1983 to 1992 the world averaged 147 climate, water and weather disasters each year. Over the past 10 years, that number has jumped to an average 306 a year.
In the United States, an index of climate extremes – hot and cold, wet and dry – kept by the National Oceanic and Atmospheric Administration has jumped 30 percent from 1992 to 2013, not counting hurricanes, based on 10-year averages.
Hank Roberts says
http://www.washingtonpost.com/opinions/tom-toles-goes-green/2011/03/31/AFD04K0D_gallery.html
Hank Roberts says
Mass loss of the Amundsen Sea Embayment of West Antarctica from four independent techniques
http://onlinelibrary.wiley.com/doi/10.1002/2014GL061940/abstract
DOI: 10.1002/2014GL061940
http://www.readcube.com/articles/10.1002%2F2014GL061940
… four methods agree in terms of mass loss and acceleration in loss at the regional scale. Over 1992–2013, the mass loss is 83±5 Gt/yr with an acceleration of 6.1±0.7 Gt/yr². During the common period 2003–2009, the mass loss is 84±10 Gt/yr with an acceleration of 16.3±5.6 Gt/yr², nearly three times the acceleration over 1992–2013. Over 2003–2011, the mass loss is 102±10 Gt/yr with an acceleration of 15.7±4.0 Gt/yr².
Killian says
Remember all those times I’ve said time is short? Well, once again we find sensitivity to change to be more dynamic than previously believed, at least by some.
;-)
http://phys.org/news/2014-12-co2-effects-felt-decade-emitted.html
Pete Best says
Re #399 Hello – I am wondering what the context of mitigating climate emissions is in human terms. Anyone can ignore human nature (motives, goals, greed, wealth, well-being, etc.) and assume that governments and institutions will act in humanity’s best interests, but seeing as little has been achieved since the first global summit in 1992 (emissions have risen by 17 billion tonnes since then), you can see why a lot of informed people are more than a little cynical. Now come Peru (now) and Paris in 2015, and cuts have to be deeper and longer lasting, but in some countries the politics is far from settled (the USA, Canada and Australia have outright denial on the subject, and other countries too, I am sure), and hence progress is far too slow. Even if binding agreements are reached, they mean a massive shake-up of the marketplace that threatens the status quo of power and influence, which is lobbied against hard, and hence radical change is hard to come by.
The biggest issue, though, is lifestyle change. Up until now the whole pitch for mitigating carbon emissions has been that your way of life will not change, only the means of getting your energy will. Anyone who looks into this deeply will see that it is very hard to implement: an insane number of onshore and offshore wind turbines, coupled with a whole lot of solar and related solar technologies, will go some way but won’t stop 2C and beyond. Humans act mainly in their own best interests, and the future everyone here is asking for will not come about easily, and possibly not from international agreements either.
MARodger says
The Central England Temperature record (HadCET) now requires the remainder of December to be colder than the coldest December on record (−0.8ºC in 1890) to prevent 2014 from taking the “hottest year on record” title.
The University of Reading felt confident enough with this to report a 75% chance of the warmest year on record. (I think the 75% must be the bookie’s odds, not the mathematical ones.) Talk of ‘warmest year’ sparked off the BBC reporting not just the potentially record temperature for 2014 but also the weird weather the UK has had:-
“A record wet winter with floods to match. Then the driest ever September, followed by the Halloween heat – the warmest such day on record”
Unfortunately there is a rather propagandist feel to this messaging which would be all undone by a single cold year, either in the near future or even recent chilly exemplars. 2010 for instance, was a chilly year with the second coldest December on record.
DP says
Re 6 I understood 2010 was the coldest December ever in the UK. Looks like this year will take the warmest-year record, though it hasn’t felt like that. Perhaps because of all the rain.
Russell says
With the arrival of the Anthropocene, people are beginning to wonder:
What ever became of the Theocene and the Titanocene?
Hank Roberts says
Victor may find these reassuring, to make the point that what he’s discovered about statistics and charts has, in fact, been well known by, er, those who know about it:
http://www.phdcomics.com/comics/archive/phd051809s.gif
http://www.nukees.com/comics/nukees20140912.gif
Radge Havers says
“Well, first of all it’s an example of how a logical evaluation can cut more deeply than a merely statistical analysis.”
Sophistry. Analysis is good and logical or it is not. There’s not some über-logic that exists apart from statistics by which it’s judged.
You know, the kind of rhetorical glibness that ‘debaters’ use to manipulate audiences and ‘win’ arguments is a damned piss-poor cousin to actual scientific analysis.
D F T Ego-tripping T
MARodger says
DP @7.
The reason 2010 was the coldest December on record for the UK is that the Met Office data only goes back to 1910. HadCET of course goes back to 1772, so the frosty December of 1890 features in CET but not in the UK record.
MARodger says
Paul Berberich @410 (November Unforced Variations).
Can you explain why you consider that the fluctuations in global average temperature of past decades have any bearing on future fluctuations in global average temperature? Bear in mind that the dip in temperatures between 1900 and 1920 is considered to be the result of large volcanic eruptions, which are effectively random events. The levelling-off of temperature in the 1940s and 1950s is considered the result of large increases in sulphur emissions; such increases have occurred only the once. So how can the outcomes of random or unrepeated events provide any meaningful way of projecting future outcomes? Indeed, why do you consider that the work of the IPCC can be bettered by such a simplistic approach?
Chris Dudley says
Paul (#380 November),
The absolute.nc file should not be mixed with GISS data. If you derive the equivalent from the GISS data using the method they have published, then you should be OK, and it would be useful to know the monthly offsets for the GISS anomaly data, if only to know which July or August happens to be the hottest month thus far. But mixing the two data sets before comparing them is methodologically suspect.
Mal Adapted says
MARodger:
“Unfortunately there is a rather propagandist feel to this messaging which would be all undone by a single cold year, either in the near future or even recent chilly exemplars. 2010 for instance, was a chilly year with the second coldest December on record.”
I understand your point, but the escalator is the best argument against it. That makes it clear, even without statistics, that strings of cold months or years can’t obscure the long-term warming trend.
doug says
Best hopes for CDR (carbon dioxide removal)?
IF the world got behind this and spent on R&D commensurate with the problem, where should the $ go?
Sorry, old topic for many, gone over. Appreciate genuine responses. Thx.
sidd says
Sutterley (2014) (thanks Mr. Roberts) DOI: 10.1002/2014GL061940
1) remarkable agreement between all four methods
2) exhibits GRACE data thru summer 2014
3) tripling of the acceleration is consistent with a faster-than-quadratic mass loss function. That is, instead of mass loss best described by
m = a - b*t - c*t^2
with c tripling over twenty years, perhaps there is another term:
m = a - b*t - c*t^2 - d*t^3
We should know in a decade or two. The agreement is remarkable; perhaps we can soon distinguish whether higher-order terms are kicking in. Hansen played with fitting higher polynomials to Greenland, but I didn’t quite believe him.
sidd
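For concreteness, here is a minimal sketch of the comparison sidd describes: fit quadratic versus cubic mass-loss curves and ask whether the extra term earns its keep via a crude AIC. The series is synthetic, built from the 83 Gt/yr rate and 6.1 Gt/yr² acceleration quoted in the abstract above plus invented noise; it is not the actual Sutterley et al. data.
```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 22.0)                      # years since 1992
mass = -(83.0 * t + 0.5 * 6.1 * t**2)         # Gt: rate ~83 Gt/yr, accel ~6.1 Gt/yr^2
mass += rng.normal(0.0, 30.0, t.size)         # invented observational noise

for degree in (2, 3):                         # quadratic vs cubic, as in the comment
    coeffs = np.polyfit(t, mass, degree)
    rss = np.sum((mass - np.polyval(coeffs, t)) ** 2)
    n, k = t.size, degree + 1
    aic = n * np.log(rss / n) + 2 * k         # crude AIC penalizing the extra term
    print(f"degree {degree}: AIC = {aic:.1f}")
```
With only two decades of noisy data the cubic term will typically not win on AIC even when present, which is sidd’s “we should know in a decade or two” in code form.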
Victor says
(Looks like posting on the old Open Thread has been closed, so I’ll continue here.)
As promised, I will now deal with the famous lake effect snow graph: http://publicradio1.wpengine.netdna-cdn.com/updraft/files/2014/11/lake-effect-snow-trends.png It’s the one on top.
To make myself perfectly clear, my point is not to dispute the overall findings of the paper where the graph originated, but to question the claim that this graph represents what the authors claim it represents: a trend toward ever-greater lake effect snowfall during the period 1931 through 2001.
We see a total of 70 data points, each associated with one of the years between 1931 and 2001, making each point easily identifiable by year. We also see a trend line, beginning just above -1 and ending under +1.
The first point I’d like to make is that the trend line, presumably generated by a linear regression algorithm, is reductive. I hate using that term, because of its unfortunate association with post-modernist dogma — there’s nothing wrong per se with an analysis that’s reductive. In fact one could say (in spite of postmodernist dogma) that it’s a feature, not a bug. After all, the purpose of statistics is to get beyond all the details to what underlies those details, i.e., to separate what’s essential from what’s mere “noise.”
But there’s no getting around it: the algorithm reduces a fairly rich dataset to a single straight line. And the question we must ask is: does this reduction add to or subtract from our understanding of what the data means (if anything)? What concerns me most is that all too often climate scientists are content to accept their reductive statistical result as itself the equivalent of the underlying meaning, rather than a useful tool in the determination of same. This is what I think happened to the authors of the paper in question.
Let’s examine the data in some detail, point by point. Beginning with 1931 through 1943, we see a very clear zigzag trend upward, with each high point and each low point higher than the previous, with only a single exception, in 1941. From 1944 through 1969 the picture becomes more complicated. Nevertheless, if we stick with the highest points, in the years 45, 47, 59 and 71, we can discern an upward trend, which is in fact reinforced by the low points from 49 through 73, each higher than the previous. Based on this analysis, it’s probably safe to say that, yes, it does look like we have a more or less clear upward trend from 1931 through 1973.
From 1973 to 2001, however, the picture changes. One might even be able to argue for a downward trend if one concentrates on the high points from 1971 through 1985, with each high point lower than the previous, reinforced by the low points at 73, 80 and 83, also each lower than the previous — but generally speaking there doesn’t seem to be much of a trend at all from the years 1971 to 2000. Or, if you prefer, 1973 through 2000.
Of course the final datapoint of 2001, much higher than any of the others, is not part of any trend, but clearly an outlier. It’s especially difficult to see any sort of upward trend in this second segment of the graph since the high point reached in 96 is identical to that reached in 71. Since these are the highest points in the entire dataset until we reach the outlier of 2001, it’s especially difficult to see how anyone could argue for an upward trend during that period.
Of course, if we ignore all the details and simply compare the starting point in 1931 with the ending point in 2001 then we definitely see an increase in snowfall, yes. But that’s not what was claimed. What was claimed was a steady trend upward during the entire course of the period under study. And as we’ve seen there was no such overall trend. That should be obvious. And if anyone here would like to try, I invite you to perform a linear regression on that latter segment of the dataset, which I feel sure will reveal no trend. But who knows? Why not give it a try?
I would now like to quote a key passage from the cautionary tale told in my previous post (see Unforced Variations, November edition, #409): “In order to meaningfully argue that one has discovered a universal, one needs to determine not only a clear correlation based on a worldwide sample, but to also demonstrate that the same correlation can be found in each and every region represented in that sample.” Which can be rephrased in the present context as follows: “In order to meaningfully argue that one has discovered an overall trend, one needs to determine not only a clear trend based on a statistical analysis of the entire sample, but to also demonstrate that the same trend can be found in each and every significant segment represented in that sample.”
Enuf said.
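Since Victor explicitly invites readers to run the regression themselves, here is a hedged sketch of that exercise. The series below is a synthetic stand-in (a weak trend plus noise), not the digitized Burnett et al. (2003) data, so it only illustrates the mechanics of comparing the full record against the 1971–2000 sub-segment.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1931, 2002)
# stand-in standardized snowfall anomalies: weak upward trend plus noise
snow = 0.02 * (years - 1966) + rng.normal(0.0, 0.5, years.size)

def trend(lo, hi):
    """Ordinary least-squares trend over the inclusive year range [lo, hi]."""
    mask = (years >= lo) & (years <= hi)
    res = stats.linregress(years[mask], snow[mask])
    print(f"{lo}-{hi}: slope = {res.slope:+.4f}/yr, p = {res.pvalue:.3f}")

trend(1931, 2001)   # the full record
trend(1971, 2000)   # the sub-segment said to show "no trend"
```
The general point, anticipating the replies below, is that a short noisy sub-segment can easily fail significance even when the full record’s trend is real.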
Hank Roberts says
> where should the $ go?
To climate modeling (grin).
Oh, did you mean a political decision? Opinions vary. I’d suggest you read the links in the right sidebar on every RC page for the RC site hosts’ opinions.
I’d add my personal favorite thinkers on the questions that go beyond the science that RC focuses on: ecoequity.org
As they say, “worth reading even if you think that we’re doomed”.
Hank Roberts says
I swear, last time I reply to Victor:
You’ve failed to link to the original paper, and failed to note that quite a few subsequent papers citing it point out issues with the analysis therein.
It’s always considered polite, at least, to cite the source and note that your insights, while interesting, are not new and have been published by others previously.
Kevin McKinney says
#17–Here’s a quote from Mike Mann that I think is apposite here. It’s not about trends, but it is about reductive approaches and what is interesting (and, perhaps, to whom):
…we focused on what we felt was most scientifically interesting, for example that we recovered an unusual pattern for the 1816 “year without a summer” that indicated a very cold Eurasia and lower than average temperatures in North America (observations that are independently confirmed by historical accounts), but a warmer than usual Middle East and Labrador (who knew?). Or that we had independently affirmed anecdotal accounts that there was a whopper of an El Niño event in 1791—a year that, according to our reconstruction, also happened to be a comparative scorcher for Europe and a large part of North America…we did the least scientifically interesting thing one could possibly do with these rich spatial patterns: We averaged them to obtain a single number for each year: the Northern Hemisphere average temperature.
The result, of course, was the original ‘hockey stick.’
For those who don’t already know that story, I summarized it here, from Dr. Mann’s memoir of a couple of years ago:
http://hubpages.com/hub/Michael-Manns-The-Hockey-Stick-And-The-Climate-Wars-A-Summary-Review
Kevin McKinney says
#17–part the second.
“In order to meaningfully argue that one has discovered an overall trend, one needs to determine not only a clear trend based on a statistical analysis of the entire sample, but to also demonstrate that the same trend can be found in each and every significant segment represented in that sample.”
Ah, but who decides what is, or is not, to be considered ‘significant’? I’d plump for the statistically expert, who will use appropriate standards and procedures in order to avoid over- (or under-) interpretation.
Hank Roberts says
Worth reviewing, for anyone who missed this basic material — find by searching ‘oogle for: grumbine detecting trends
More Grumbine Science: Results on deciding trends
moregrumbinescience.blogspot.com/…/results-on-deciding-trends.html
Jan 5, 2009 – You need 20-30 years of data to define a climate trend in global mean ….. We find that detection of climate change-driven trends in the satellite …
More Grumbine Science: Climate Change Detection
moregrumbinescience.blogspot.com/2008/ …
Aug 12, 2008 – In doing climate change detection professionally, these sorts of analyses … significant trend in data you know should be random and trendless.
More Grumbine Science: How not to compute trends
moregrumbinescience.blogspot.com/…/how-not-to-compute-trends.html
Jul 26, 2011 – The method used to claim that there’s a cooling trend, or no warming ….. MoreGrumbineScience …. Detecting Crap on the Internet.
Mike Roberts says
The link by Killian points to a paper by Ricke and Caldeira that suggests warming from a CO2 emission peaks within about 10 years and then declines slowly for centuries, as the CO2 is taken up by the environment. This seems to contradict the climate response function shown in Hansen et al (2011), Earth’s energy imbalance and implications, which shows a rapid early response but reaching only 60% after about 40 years and still rising after centuries.
The Ricke and Caldeira paper shows a hypothetical situation, but I wonder whether Hansen’s earlier result is really incompatible with it. Comments?
Edward Greisch says
Problem with reCaptcha
17 Victor: Back in the 1960s, it snowed 108 inches [9 feet] in 1 day in Rochester, N.Y.
Lake effect snow doesn’t happen mostly near the lakes. My home town [Olean] is 75 miles south of Buffalo. It snows 3 times as much there as in Buffalo.
No reference for the following: I heard that the lake effect forms an image of each lake, centered off the lake by a larger number of miles than you would think. So the southern border of New York state would be the center of the Lake Erie image, and the Lake Ontario image would touch Vermont. Vermont is or was a ski area and is more mountainous. It snows a lot deep into Pennsylvania. The images are southeast of the lakes.
It snows more in Pittsburgh PA than it does in Iowa. Or at least it did in the 1960s. Cattaraugus county N.Y. now gets less than 100 inches per year rather than the 450 inches per year that it used to get. Allegheny county, the next county east, gets a lot less snow.
So another source of error is that publicradio1 doesn’t know where the snow belts are. I don’t have a snow belt map either. Like all TV news, they avoid reporting on places that get the really big snowfalls. What if the snow belts moved? Olean is in the Allegheny plateau. Bradford, PA, is noticeably warmer and sunnier, but not far from Olean. The geography changes a lot over the supposed images.
6 feet of snow in a day in the snow belt is not so radical. It is just bad journalism. Did they keep a constant location or did they report maximum snow over the whole lake effect area? Another problem: When the lakes freeze over, it stops snowing and gets cold, like 40 below in 1936 in Olean. Colder years have less snow.
pete best says
Re #18 – I have read that link and it all reads well, but 1.5C will do enough damage, and hence Paris 2015 does not change anything from what I can see. 20 years ago the world woke up to climate change but fell asleep at the wheel, and now we are sleepwalking into a future which is not equity-based but continues as it is, only with less carbon being burnt to power us, though how much less is anyone’s guess.
It does not add up politically that, just because we are heading for a very uncertain future, the status quo is going to change and all of a sudden everyone is going to say we need to do something that changes everything. It’s far more likely that we do something but nothing much will change.
MARodger says
Victor the troll @17 is entirely wrong to suggest it is “especially difficult to see any sort of upward trend in this second segment of the graph.” Simply, a regression on that data yields positive slopes whichever start year is chosen. Even the 1973–2000 regression is positive. Further, the linear regression line of the full data tracks the 10-year rolling average very well. I would suggest the “difficulty” that Victor the troll talks of is but a product of his own wanton ignorance.
Kevin McKinney says
I see the UAH anomaly for November is now up; the month was the second warmest in the record, at 0.33C. (The warmest November ever was 2009, at 0.39.)
Eric Swanson says
With all that’s been said and written about global warming and its effects on climate for more than 20 years, there’s still strong skepticism in the US public mind, a result of propaganda efforts by the denialist camp. It’s quite clear that these disinformation efforts have been successful because at present there’s little sense of the change, which amounts to less than 1C since 1900. In the USA, we live with daily temperature swings that may amount to 20C, and perhaps 70C between summer highs and winter lows. Worse, our standard for temperature isn’t Celsius; we use Fahrenheit, which has smaller “degrees” and a larger yearly range. Perhaps our predicament should be presented differently to the public, if there’s to be any hope.
Here’s an idea for consideration. Instead of focusing on the small changes in a temperature variable, let’s use percent change in absolute temperature instead. That works with either measurement system if absolute temperature is the measure. The Earth has an average temperature of about 16C, which converts to 289 K. The much talked-about 2 K limit amounts to less than 1% of that. How does this level of change relate to previous changes? During most of the past 120,000 years, the Earth was in Ice Age conditions, and paleoclimate data point to a climate around 5 K cooler than 1900. Back then, civilization as we know it today could not have existed in large areas of Europe or the Eastern US, and that was less than 2% colder. Were the Earth to experience a 5% cooling, the oceans would freeze solid because of the land/ocean snow/ice albedo feedback.
If humans continue to dump more greenhouse gases into the atmosphere and the temperature were to increase by 5%, we would find that much of the Earth’s land area would experience dew point temperatures above the 35C (95F) threshold for heat stroke. With such high dew point temperatures, outdoor activities would be impossible during daylight hours. This situation already occurs in some locations, such as in the cane fields of Central America, where workers toiling to cut the cane must consume large quantities of water, up to 5 liters a day, just to stay cool and even then must quit working around noon as the heat becomes unbearable. Even a 2% warming would push many locations in the tropics beyond this deadly threshold, stopping many outdoor activities during daylight hours.
Our economic and political system is dominated by money and finance considerations. These guys routinely think in terms of percentages and ratios, not absolute values. Perhaps a climate presentation based on percentage change would prove to be convincing in the minds of our so-called “leaders”. There are real limits to growth and climate is one of them…
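Eric’s percentage framing is simple arithmetic; here is a worked version of the conversion, using his 16C baseline and the 2 K and 5 K changes from the comment:
```python
baseline_k = 16.0 + 273.15                      # ~289 K, the stated global average
for delta_k, label in [(2.0, "2 K policy limit"),
                       (5.0, "ice-age cooling or deep-warming case")]:
    pct = 100.0 * delta_k / baseline_k
    print(f"{label}: {delta_k:.0f} K is {pct:.2f}% of {baseline_k:.0f} K")
# 2 K -> ~0.69% ("less than 1%"); 5 K -> ~1.73% ("less than 2%")
```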
Hank Roberts says
‘The only thing we learn from history is that we learn nothing from history.’
— Georg Wilhelm Friedrich Hegel —
For those interested, a bit from history found via Scholar search:
“The Cartographic Discovery of the Great Lakes Snow Belts”
http://link.springer.com/chapter/10.1007/978-3-642-33317-0_14#page-1
“Review of Lake Effect: Tales of Large Lakes, Arctic Winds, and Recurrent Snows”
By Mark Monmonier. Syracuse University Press, 2012, ISBN: 978-0-8156-1004-5
http://cartoperspectives.org/index.php/journal/article/view/cp77-hickey/1331
Ray Ladbury says
Victor,
There are many reasons why one would be interested in a linear trend even when the signal is not strictly linear. The linear term is the first that will emerge from a noisy signal over the long term–that is, the linear term tells you about the dominant forcings over the long term.
By all means, the behavior in the interim that deviates from linear (the wiggles) may also be interesting, but that is a different analysis. Do not blithely assume that an analysis must be wrong just because you don’t understand it.
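A sketch of Ray’s point that the linear term emerges first as the record lengthens. The 0.017C/yr trend and 0.1C noise level are illustrative assumptions, not fitted values; with white noise like this the trend becomes significant within a couple of decades, while realistic autocorrelated climate noise stretches that to the 20–30 years the Grumbine posts above cite.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
trend_per_yr, noise_sd = 0.017, 0.1            # assumed values, deg C

for n_years in (10, 20, 30, 40):
    t = np.arange(n_years)
    y = trend_per_yr * t + rng.normal(0.0, noise_sd, n_years)
    res = stats.linregress(t, y)               # test whether the slope is detectable yet
    print(f"{n_years:2d} yr: slope = {res.slope:.4f}, p = {res.pvalue:.3f}")
```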
Hank Roberts says
History and future history, quoting from
http://thischangeseverything.org/no-time-to-spare/
The novelist Nathaniel Rich, in his New York Times review of This Changes Everything, struck an interesting note when he compared it to The Collapse of Western Civilization, a grim “view from the future” just published by Naomi Oreskes and Erik M. Conway. Although Rich generally approves of Klein’s “robust new polemic,” he’s less sure about her optimism. He grants her the movement’s progress, but differs when it comes to its adequacy in the face of the danger. Where Klein sees that danger as reason for an all-hands-on-deck mobilization, Rich recalls her admission that it will be extremely difficult to restrict the rise in global temperatures to an average of 4°C. Given that four degrees “is the premise for the nightmarish future described by Dr. Oreskes and Dr. Conway,” he concludes that The Collapse of Western Civilization “appears to be the book that is furthest from fiction.”
The point? Only that The New York Times, a bastion of realist moderation, chose to grace This Changes Everything with a reviewer who believes that the collapse of civilization is more likely than the transformational renewal that is the keystone of Klein’s book.
Icarus62 says
We’ve been emitting CO₂ in ever-increasing quantities for many decades and the natural world has been absorbing some, but not all of that CO₂ – roughly half of our annual emissions end up in the land and oceans, rather than the atmosphere. So, the quantity absorbed per year increases, but the fraction of our annual emissions absorbed remains much the same. I presume this is mainly because the partial pressure of CO₂ in the atmosphere is now substantially larger, so it is absorbed by the natural world at a higher rate.
What happens if, instead of our emissions increasing every year, they start to decline every year? Will the quantity absorbed by the natural world continue to increase, or stay the same, or will it continue to be roughly half of our now-smaller annual emissions?
Another way of looking at it: If we emit 32 billion tons annually now, and 16 billion tons is absorbed by the land and oceans, what happens if we suddenly cut our emissions to 16 billion tons? Does all 16 billion tons get absorbed by the natural world or only 8 billion tons?
I know that global warming is dictated by atmospheric concentration, not by rate of emission, but I’m wondering how easy it’s going to be to stop that concentration rising. Do we have to stop 100% of CO₂ emissions to stop it rising, or less than that? If we made a huge effort and reduced emissions by 75%, would atmospheric concentration start to fall, or just continue increasing at a slower pace? Also, how long will it be before the warming world starts adding to our carbon emissions, rather than offsetting some of them?
Cheers!
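One way to make Icarus62’s question concrete is a one-box toy model in which uptake is proportional to the CO₂ excess over a pre-industrial baseline. This is a sketch under strong simplifying assumptions (the real carbon cycle has multiple reservoirs and timescales); the rate constant is tuned so that present uptake is roughly half of present emissions, matching the airborne fraction described in the comment.
```python
GTC_PER_PPM = 2.12                 # approximate conversion, GtC per ppm CO2

def step(conc_ppm, emissions_gtc, k=0.017, baseline_ppm=280.0):
    """Advance one year; uptake proportional to excess over the baseline."""
    uptake_gtc = k * (conc_ppm - baseline_ppm) * GTC_PER_PPM
    return conc_ppm + (emissions_gtc - uptake_gtc) / GTC_PER_PPM

conc = 400.0                       # assumed current concentration, ppm
emissions = 8.7 / 2                # ~32 Gt CO2/yr is ~8.7 GtC/yr, suddenly halved
for year in range(30):
    conc = step(conc, emissions)
print(f"after 30 yr of halved emissions: {conc:.1f} ppm")
```
In this toy model, halving emissions leaves the concentration roughly flat at first, because uptake depends on the accumulated excess rather than on this year’s emissions; concentration only falls once emissions drop below the uptake that excess sustains.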
Paul Berberich says
#12 MA Rodger,
I have tested my analysis by fitting the data 1850–1954 and comparing the resulting forecast for 1955–2014 with the published data. They agree within ±0.15°C. My problem is that this good agreement is only obtained when I use the gridded data set of HadCRUT4 and interpolate and extrapolate the missing segments. The global data set of HadCRUT4 leads to unacceptable errors. Note that the data coverage of the historical data is poor; for instance, in 1910 the coverage is 56%.
Hank Roberts says
Question for any of the Contributors of RC, and other climate scientists — of the many different views into the Lima, Peru meetings, is there anyone or anything in particular you’re watching? Or anyone ‘blogging’ the meeting you can recommend?
As I said earlier I’m watching ecoequity, but that’s slow, with high-level updates usually posted after things have been decided.
The Lima meetings have a LOT of windows available.
Also worth a look, from Bloomberg News:
http://www.bloomberg.com/news/2014-12-03/2014-is-likely-to-be-the-earth-s-hottest-year-ever-why-it-doesn-t-matter-.html
vukcevic says
November CET
Daily max temperatures are back to the 20-year average, while daily minima are still about 1 degree C above the 20-year average. http://www.vukcevic.talktalk.net/CET-dMm.htm
The AGW advocates should learn to like or love the CO2 gas, and if it was responsible for a part of the rising temperatures since the end of the 19th century, even more so. In my view global warming (along with the advances in technology and medicine) is the best thing that happened to humanity during the last 100+ years.
If the NOAA numbers for the global land temperatures are accurate, then the current global warming (since 1900) has been even greater and more beneficial than assumed. According to my calculation NOAA underestimates the temperature rise by about 0.2C, whereby natural variability appears to be responsible for about ±0.4C (0.8C peak to peak). Whether the CO2 gas or some other factor is responsible for the rest, I wouldn’t be able to say; the correlation with the CO2 concentration is R² = 0.6877, but of course there is always the possibility that the CO2 increase is a direct consequence of the rising temperatures.
Warren Hoskins says
As soon as I saw this chapter on historians considering climate change, I thought it belonged where readers of RealClimate would see it: http://www.publicbooks.org//nonfiction/changing-climates-of-history
James Newberry says
Hansen said that it is not so much the ice sheet melt rate at a particular moment that is most important, but the rate of doubling of that rate. I thought that a doubling rate of ten years would be crazy quick. Recent scientific observations seem to indicate we are moving beyond crazy.
If the trend continues we won’t need to worry about thermonuclear holocaust because we will have created one for the whole planet, based on the sun’s fusion radiation and industrial society’s carbonic acid gas (aka carbon dioxide). Welcome to the global gas chamber. Humanity’s seaports, and all coastal regions, are in the cross-hairs.
Meow says
@whatever, text:
“In order to meaningfully argue that one has discovered an overall trend, one needs to determine not only a clear trend based on a statistical analysis of the entire sample, but to also demonstrate that the same trend can be found in each and every significant segment represented in that sample.”
Please look up the statistical definition of “trend”. In the meantime, imagine the sawtooth function y = trunc(x/10) - (((x + 10) mod 10)/10), in which each 10-point segment (e.g., x = [0..9], [10..19]) has a *negative* trend, but the function has an overall positive trend with slope 0.1.
Your manner of analysis is not doing climate-change “skepticism” any favors.
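Meow’s sawtooth can be checked directly; this sketch evaluates the function exactly as written and regresses both a single segment and the whole series:
```python
import math
from scipy import stats

xs = list(range(100))
ys = [math.trunc(x / 10) - (((x + 10) % 10) / 10) for x in xs]

print(f"one segment:  slope = {stats.linregress(xs[:10], ys[:10]).slope:+.2f}")
print(f"whole series: slope = {stats.linregress(xs, ys).slope:+.2f}")
# each 10-point segment slopes down at -0.10, yet the series slopes up at about +0.10
```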
The graph in question has been displayed recently in media articles claiming that the Buffalo area lake-effect storm was caused by global warming. The article by Ms. Rayne that I cited was offered as a refutation. Why not simply admit she was right and that other climate scientists agree that the graph is misleading? All I’ve done is expand on her reading, using the graph as an example of how the statistics can mislead.
sidd says
Hansen is sometimes uncomfortably prescient. Let us take that line of thought a little further. The subject paper (Sutterley, 2014, doi:10.1002/2014GL061940) shows mass wastage tripling for the Amundsen Sea Embayment (ASE) in two decades. If, à la Hansen, we say this is exponential (a very strong claim), then we get a tau of 18-odd years and a doubling time of twelve.
Greenland by comparison, from Enderlin(2014) doi:10.1002/2013GL059010 shows a slightly faster doubling time. And Greenland is melting about twice as fast as ASE today. Also it’s getting blacker.
The mechanisms in the two cases are different: Greenland is dominated by surface mass imbalance, ASE by warm water eating the base.
Nobody seems to wanna talk about East Antarctica …
I think that within a decade, realization of real estate valuation loss in expensive coastal playgrounds of the Great and the Mighty will ensure the demise of fossil fuels quicker than attempts from ecological equity. But we persevere, nonetheless.
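For anyone checking sidd’s numbers, the arithmetic behind “a tau of 18 odd years, and a doubling time of twelve” is just exponential-growth bookkeeping:
```python
import math

tau = 20.0 / math.log(3.0)        # e-folding time if the rate tripled in 20 yr
doubling = tau * math.log(2.0)    # corresponding doubling time
print(f"tau = {tau:.1f} yr, doubling time = {doubling:.1f} yr")
# -> tau = 18.2 yr, doubling time = 12.6 yr
```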
1973 was a low point. Try 1971 instead. (And by the way, that’s NOT a cherry pick. It’s a nit pick.) :-)
If you start with 1973 and go through 2000 there is simply no upward trend at all. Sorry if you don’t want to see that, but it’s way too obvious to dispute.
Also see #19. Evidently I’m not alone in my assessment.
So you’re saying that the lack of any upward trend in the years 1971 through 2000 in the graph in question doesn’t matter? All that matters is the result of a linear regression algorithm? You’d prefer to believe that just because a “trend” can be defined purely in statistical terms, that any such statistical result has to therefore represent an actual trend? That comes very close to being a tautology.
What I see in that graph is a rather clear upward trend about 2/3 of the way through, followed by a period of no upward trend for the following 1/3. That should be obvious simply through a close examination of the data points. The statistical result is clearly due to the strength of the initial trend, which overwhelms the shorter period of no trend. That would be the case with all sorts of datasets where a significant segment does not conform to the larger segment containing the strong trend. And I’m not referring to “noisy” data containing lots of intermittent points that don’t fit any trend, I’m referring to a continuous block of data that doesn’t conform. Due to the reductive nature of the standard algorithm such distortions are bound to happen. So what you are saying is that you don’t care, you’d prefer to go with what the statistics tells you regardless.
That’s the very definition of reductive thinking. I’m no postmodernist, but even so that’s clearly NOT the way to evaluate data.
Paul Berberich @33.
You describe what I would call an intrinsic analysis, an analysis which is concerned solely with the data-set and not with what that data-set represents. Using intrinsic analysis for projections as you do requires you (or anyone else) to be very, very disciplined in your approach.
Firstly, you do need to at the very least address the extrinsic source of your data to prevent the accusation I made @12 that you are fitting regularity onto a set of random and one-off events. That accusation remains unanswered by you.
Secondly, you say @33 that you “tested (your) analysis by fitting the data 1850-1954 and comparing the resulting forecast 1955-2014 with the data published.” I would suggest that this is not entirely correct. What you tested was the fit of your analysis, that is, the values of the five constants you use. I say this because I cannot see how the model you defined as the representation of global average temperature can be derived from the data of 1850-1954 alone. Your approach only makes sense if you can demonstrate that the form of your function T(x) = A + Bx + Cx^2 + D sin(E + Fx) is the logical outcome of the 1850-1954 data. That is, why would your model be any more correct than, say, T(x) = A + Bx + D sin(E + Fx)?
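To illustrate MARodger’s model-form objection, here is a sketch that fits Berberich’s stated functional form to an invented anomaly series. The data, initial guesses, and constants are all assumptions, not his actual values; the point is only that a form with this many free parameters will fit almost any smooth century-scale series, which is why the form itself needs independent justification.
```python
import numpy as np
from scipy.optimize import curve_fit

def T(x, A, B, C, D, E, F):
    """Berberich's stated form: T(x) = A + Bx + Cx^2 + D sin(E + Fx)."""
    return A + B * x + C * x**2 + D * np.sin(E + F * x)

rng = np.random.default_rng(3)
years = np.arange(1850, 1955, dtype=float)
x = years - 1900.0                               # centered to keep x^2 well-conditioned
y = 0.004 * x + rng.normal(0.0, 0.1, x.size)     # invented "anomaly" series

p0 = [0.0, 0.004, 0.0, 0.1, 0.0, 2 * np.pi / 60]  # guess, incl. a ~60 yr cycle
popt, _ = curve_fit(T, x, y, p0=p0, maxfev=20000)
print("fitted A, B, C, D, E, F:", np.round(popt, 4))
```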
Of course, that would beg the questions as to why the isotopic signature of atmospheric CO2 looks just as would be expected if it were due to fossil fuel emissions, and just what happens to the other half or so of the carbon humans are known to emit to the atmosphere.
But hey, if we’re going to imagine that global warming is the best thing for humanity since, well, ever, then why strain at a couple of logical ‘gnats’?
Of course, I am not in the slightest bothered which years Victor the Troll chooses to cherry-pick. As his words are cheap, it is a somewhat ephemeral situation. He did say @17 “Or, if you prefer, 1973 through 2000.” Ditto @40, but @41 he now insists it is 1971–2000 or nothing. I think he may have better luck choosing 1971–83, as that range almost gives a statistically significant negative trend. The 1971–2000 cherry-pick, however, presents a central regression that is negative but also, by some way, not statistically significant: −0.0088 ±0.031 (2 sd), giving a 31% chance of an upward trend. So to state that “there is simply no upward trend at all” is also incorrect.
And, given the noisy data being analysed, I fail to see why anybody would take the lake-effect snowfall regression from Burnett et al (2003) and dispute its veracity in this way. The regression is fine. So now it is about cherry-picking start and end dates in an attempt to keep up the attack. I am mystified. Does anybody have an inkling of what the Troll is attempting? Or why?
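For readers wondering where a figure like “31% chance of an upward trend” comes from: given a regression slope and its standard error, the probability that the true slope is positive is a tail probability. A sketch using MARodger’s quoted numbers and a normal approximation (he may have used a t distribution, hence the small difference):
```python
from scipy.stats import norm

slope, two_sd = -0.0088, 0.031
se = two_sd / 2.0                                   # the quoted interval is 2 sd
p_up = norm.sf(0.0, loc=slope, scale=se)            # P(true slope > 0)
print(f"P(upward trend) = {p_up:.0%}")              # ~29% under the normal
```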
Can’t fault their basic concept of fairness, but I sure hope they temper it with a dash or three of expediency and pragmatism, and that their forthcoming emissions targets will be both real emissions targets AND, as promised, ‘aggressive.’
Steve Fish says
Victor, you apparently believe that your inexpert postmodernist accusations have some support because you say “Evidently I’m not alone in my assessment.” Perhaps you can provide references to expert criticism of the Burnett et al. (2003) research article. Google tells me that there are on the order of tens of published, peer-reviewed research articles on lake effect snow in 2014 alone, so you should have no problem finding expert criticism. No?
Steve
Radge Havers says
MARodger, trolls attempt to get attention, wind people up, puff themselves up at others’ expense, and make themselves alpha trolls in the eyes of other trolls. They may have other agendas, but generally it all just boils down to being a troll. They do it because they’re messed up.
Notice how hard it works to pull Hank back in after he indicated that he was losing interest.
Meow says
@5 Dec 2014 @ 1:28 AM: The statistical definition of “trend” is meant to help us avoid deceiving ourselves. It is a mostly-objective way of deciding (probabilistically, of course) whether a given change in a random variable is due to chance. If you abandon that definition in favor of eyeballing, you’re no longer practicing statistics, which means you’ve abandoned the best tool we have to make sense of data series.
Hank Roberts says
http://www.cbsnews.com/news/hotter-weirder-how-climate-change-has-changed-earth/
Hank Roberts says
http://www.washingtonpost.com/opinions/tom-toles-goes-green/2011/03/31/AFD04K0D_gallery.html
Hank Roberts says
Mass loss of the Amundsen Sea Embayment of West Antarctica from four independent techniques
http://onlinelibrary.wiley.com/doi/10.1002/2014GL061940/abstract
DOI: 10.1002/2014GL061940
http://www.readcube.com/articles/10.1002%2F2014GL061940
Killian says
Remember all those times I’ve said time is short? Well, once again we find sensitivity to change to be more dynamic than previously believed, at least by some.
;-)
http://phys.org/news/2014-12-co2-effects-felt-decade-emitted.html
Pete Best says
Re #399 Hello – I am wondering what is the context of mitigating climate emissions in human terms. I think that anyone can ignore human nature (their motives, goals, greed, wealth, well being etc) and think that governments and institutions will act in humanities best interest but seeing as little has been achieved thus far since the first global summit in 1992 (emissions have risen by 17 billion tones since then) that you can see why a lot of informed people are more than a little cynical. Now comes Peru (now) and Paris in 2015 and cuts have to be deeper and longer lasting but in some countries the politics is far from settled (USA, Canada, Australia have outright denial on the subject and other countries to I am sure) and hence progress if far to slow and even if binding agreements are agreed it a massive shake up in the market place that threatens to change the status quo of power and influence which is lobbied hard against and hence radical change is hard to come by.
The biggest issue though is life style change as up until now the whole process of mitigating carbon emissions has been to not worry about your way of life as that will not change only the means of getting your energy will. Anyone who looks into this deeply will see that this is very hard to implement and a insane number of onshore and offshore wind turbines couple with a whole lot of solar and related solar technologies will go someway but wont stop 2C and beyond. Humans act in their own best interests mainly and the future everyone is asking for here will not come about easily and possibly not from international agreements either.
MARodger says
The Central England temperatures HadCET now has to have the remainder of December colder than the coldest December on record (-0.8ºC in 1890) to prevent 2014 from taking “hottest year on record” title.
The University of Reading felt confident enough with this to report a 75% chance of the warmest year on record. (I think the 75% must be the bookie’s odds, not the mathematical ones.) Talk of ‘warmest year’ sparked off the BBC reporting not just the potentially record temperature for 2014 but also the weird weather the UK has had:-
Unfortunately there is a rather propagandist feel to this messaging which would be all undone by a single cold year, either in the near future or even recent chilly exemplars. 2010 for instance, was a chilly year with the second coldest December on record.
DP says
Re 6 I understood 2010 was the coldest December ever in the UK. Looks like this year will break the record though it hasn’t felt like that. Perhaps because of all the rain.
Russell says
With the arival of the Anthropocene, people are beginning to wonder:
What ever became of the Theocene and the Titanocene ?
Hank Roberts says
Victor may find these reassuring, to make the point that what he’s discovered about statistics and charts has, in fact, been well known by, er, those who know about it:
http://www.phdcomics.com/comics/archive/phd051809s.gif
http://www.nukees.com/comics/nukees20140912.gif
Radge Havers says
Sophistry. Analysis is good and logical or it is not. There’s not some über logic that exists apart from statistics by which it’s judged.
You know, the kind of rhetorical glibness that ‘debators’ use to manipulate audiences and ‘win’ arguments is a damned piss poor cousin to actual scientific analysis.
D F T Ego-tripping T
MARodger says
DP @7.
The reason 2010 was the coldest December on record for the UK is that the Met Office data is only given back to 1910. HadCET of course goes back to 1772 so the frosty December of 1890 features on CET but not on the UK record.
MARodger says
Paul Berberich @410(November Unforced Variations).
Can you explain why you consider the fluctuations in global average temperature of past decades have any bearing on future fluctuations in global average temperature. Bear in mind that the dip in temperatures between 1900 and 1920 is considered to be the result of large volcanic eruptions which are effectively random events. The levelling off of temperature in the 1940s & 1950s is considered the result of large increases in sulphur emissions. Such increases have only occurred the once. So how can the outcomes from random events or unrepeated events provide any meaningful way of projecting future outcomes? Indeed, why do you consider that the work of the IPCC can be bettered by such a simplistic approach?
Chris Dudley says
Paul (#380 November),
The absolute.nc filer should not be mixed with GISS data. If you derive the equivalent from the GISS data using the method they have published, then you should be OK, and it would be useful to know the monthly offsets for the GISS anomaly data if only to know which July or August happens to be the hottest month thus far. But mixing the two data sets before comparing them is methodologically suspect.
Mal Adapted says
MARodger:
I understand your point, but the escalator is the best argument against it. That makes it clear, even without statistics, that strings of cold months or years can’t obscure the long-term warming trend.
doug says
Best hopes for CDR?
IF the world got behind this and spent, commensurate with the problem, (R & D) where should the $ go?
Sorry, old topic for many, gone over. Appreciate genuine responses. Thx.
sidd says
Sutterley(2014) (thanks Mr. Roberts) DOI:10.1002/2014GL061940
1)remarkable agreement between all four methods
2)exhibits GRACE data thru summer 2014
3)tripling of the acceleration is consistent with a faster than quadratic mass loss function. that is, instead of mass loss best described by
m=a-b*t-c*t^2 with c tripling over twenty years, perhaps there is another term
m=a-b*t-c*t^2-d*t^3
we should know in a decade or two. The agreement is remarkable, perhaps we can soon distinguish if there exist higher order terms kicking in. Hansen played with fitting higher polynomials to Greenland, but i didn’t quite believe him.
sidd
Victor says
(Looks like posting on the old Open Thread has been closed, so I’ll continue here.)
As promised, I will now deal with the famous lake effect snow graph: http://publicradio1.wpengine.netdna-cdn.com/updraft/files/2014/11/lake-effect-snow-trends.png It’s the one on top.
To make myself perfectly clear, my point is not to dispute the overall findings of the paper where the graph originated, but to question the claim that this graph represents what the authors claim it represents: a trend toward ever-greater lake effect snowfall during the period 1931 through 2001.
We see a total of 70 data points, each associated with one of the years between 1931 and 2001, making each point easily identifiable by year. We also see a trend line, beginning just above -1 and ending under +1.
The first point I’d like to make is that the trend line, presumably generated by a linear regression algorithm, is reductive. I hate using that term, because of its unfortunate association with post-modernist dogma — there’s nothing wrong per se with an analysis that’s reductive. In fact one could say (in spite of postmodernist dogma) that it’s a feature, not a bug. After all, the purpose of statistics is to get beyond all the details to what underlies those details, i.e., to separate what’s essential from what’s mere “noise.”
But there’s no getting around it, the algorithm reduces a fairly rich dataset to a single straight line. And the question we must ask is: does this reduction add to or subtract from our understanding of what the data means (if anything). What concerns me most is that all too often climate scientists are content to accept their reductive statistical result as itself the equivalent of the underlying meaning, rather than a useful tool in the determination of same. This is what I think happened to the authors of the paper in question.
Let’s examine the data in some detail, point by point. Beginning with 1931 through 1943, we see a very clear zigzag trend upward, with each high point and each low point higher than the previous, with only a single exception, in 1941. From 1944 through 1969 the picture becomes more complicated. Nevertheless, if we stick with the highest points, in the years 45, 47, 59 and 71, we can discern an upward trend, which is in fact reinforced by the low points from 49 through 73, each higher than the previous. Based on this analysis, it’s probably safe to say that, yes, it does look like we have a more or less clear upward trend from 1931 through 1973.
From 1973 to 2001, however, the picture changes. One might even be able to argue for a downward trend if one concentrates on the high points from 1971 through 1985, with each high point lower than the previous, reinforced by the low points at 73, 80 and 83, also each lower than the previous — but generally speaking there doesn’t seem to be much of a trend at all from the years 1971 to 2000. Or, if you prefer, 1973 through 2000.
Of course the final datapoint of 2001, much higher than any of the others, is not part of any trend, but clearly an outlier. It’s especially difficult to see any sort of upward trend in this second segment of the graph since the high point reached in 96 is identical to that reached in 71. Since these are the highest points in the entire dataset until we reach the outlier of 2001, it’s especially difficult to see how anyone could argue for an upward trend during that period.
Of course, if we ignore all the details and simply compare the starting point in 1931 with the ending point in 2001 then we definitely see an increase in snowfall, yes. But that’s not what was claimed. What was claimed was a steady trend upward during the entire course of the period under study. And as we’ve seen there was no such overall trend. That should be obvious. And if anyone here would like to try, I invite you to perform a linear regression on that latter segment of the dataset, which I feel sure will reveal no trend. But who knows? Why not give it a try?
I would now like to quote a key passage from the cautionary tale told in my previous post (see Unforced Variations, November edition, #409): “In order to meaningfully argue that one has discovered a universal, one needs to determine not only a clear correlation based on a worldwide sample, but to also demonstrate that the same correlation can be found in each and every region represented in that sample.” Which can be rephrased in the present context as follows: “In order to meaningfully argue that one has discovered an overall trend, one needs to determine not only a clear trend based on a statistical analysis of the entire sample, but to also demonstrate that the same trend can be found in each and every significant segment represented in that sample.”
Enuf said.
Hank Roberts says
> where should the $ go?
To climate modeling (grin).
Oh, did you mean a political decision? Opinions vary. I’d suggest you
read the links in the right sidebar on every RC page for the RC site hosts’ opinions.
I’d add my personal favorite thinkers on the questions that go beyond the science that RC focuses on: ecoequity.org
As they say “worth reading even if you think that we’re doomed”
Hank Roberts says
I swear, last time I reply to Victor:
You’ve failed to link to the original paper, and failed to note that quite a few subsequent papers citing it point out issues with the analysis therein.
It’s always considered polite, at least, to cite the source and note that your insights, while interesting, are not new and have been published by others previously.
Kevin McKinney says
#17–Here’s a quote from Mike Mann that I think is apposite here. It’s not about trends, but it is about reductive approaches and what is interesting (and, perhaps, to whom):
The result, of course, was the original ‘hockey stick.’
For those who don’t already know that story, I summarized it here, from Dr. Mann’s memoir of a couple of years ago:
http://hubpages.com/hub/Michael-Manns-The-Hockey-Stick-And-The-Climate-Wars-A-Summary-Review
Kevin McKinney says
#17–part the second.
“In order to meaningfully argue that one has discovered an overall trend, one needs to determine not only a clear trend based on a statistical analysis of the entire sample, but to also demonstrate that the same trend can be found in each and every significant segment represented in that sample.”
– See more at: https://www.realclimate.org/index.php/archives/2014/12/unforced-variations-dec-2014/comment-page-1/#comment-619656
Ah, but who decides what is, or is not, to be considered ‘significant’? I’d plump for the statistically expert, who will use appropriate standards and procedures in order to avoid over- (or under-) interpretation.
Hank Roberts says
Worth reviewing, for anyone who missed this basic material — find by searching ‘oogle for: grumbine detecting trends
Mike Roberts says
The link by Killian points to a paper by Ricke and Caldeira that suggests warming from a CO2 emission peaks within about 10 years and then declines slowly for centuries, as the CO2 is taken up by the environment. This seems to contradict the climate response function shown in Hansen et al (2011), Earth’s energy imbalance and implications, which shows a rapid early response but reaching only 60% after about 40 years and still rising after centuries.
The Ricke and Caldeira paper shows a hypothetical situation but I wonder if Hansen’s earlier result is not incompatible. Comments?
Edward Greisch says
Problem wit reCaptcha
17 Victor: Back in the 1960s, it snowed 108 inches [9 feet] in 1 day in Rochester, N.Y.
Lake effect snow doesn’t happen mostly near the lakes. My home town [Olean] is 75 miles south of Buffalo. It snows 3 times as much there as in Buffalo.
No reference to the following: I heard that the lake effect forms an image of each lake centered off of the lake by some larger than you thought number of miles. So the southern border of New York state would be the center of the Lake Erie image and the Lake Ontario image would touch Vermont. Vermont is or was a ski area and is more mountainous. It snows a lot deep into Pennsylvania. The images are southeast of the lakes.
It snows more in Pittsburgh PA than it does in Iowa. Or at least it did in the 1960s. Cattaraugus county N.Y. now gets less than 100 inches per year rather than the 450 inches per year that it used to get. Allegheny county, the next county east, gets a lot less snow.
So another source of error is that publicradio1 doesn’t know where the snow belts are. I don’t have a snow belt map either. Like all TV news, they avoid reporting on places that get the really big snowfalls. What if the snow belts moved? Olean is in the Allegheny plateau. Bradford, PA, is noticeably warmer and sunnier, but not far from Olean. The geography changes a lot over the supposed images.
6 feet of snow in a day in the snow belt is not so radical. It is just bad journalism. Did they keep a constant location or did they report maximum snow over the whole lake effect area? Another problem: When the lakes freeze over, it stops snowing and gets cold, like 40 below in 1936 in Olean. Colder years have less snow.
pete best says
Re #18 – that link has been read by myself and it all reads well but 1.5C will do enough damage and hence Paris 2015 does not change anything from what I can see. 20 years ago the world work up to climate change but fell asleep at the wheel and now are sleep walking to a future which is not equity based but continues as it is only with less carbon being burnt to power us but how much less is anyone’s guess.
I does not add up politically that just because we are heading for a very uncertain future the status quo is going to change and that all of a sudden everyone is going to say we need to do something which changes everything, its far more likely that we do something but nothing much will change.
MARodger says
Victor the troll @17 is entirely wrong to suggest it is ” especially difficult to see any sort of upward trend in this second segment of the graph.” Simply, a regression on that data yields positive slopes which ever start year is chosen. Even the 1973-2000 rergression is positive. Further, the linear regression line of the full data tracks very well the 10-year rolling average. I would suggest the “difficulty” that Victor the troll talks of is but a product of his own wanton ignorance.
Kevin McKinney says
I see the UAH anomaly for November is now up; the month was the second warmest in the record, at 0.33C. (The warmest November ever was 2009, at 0.39.)
Eric Swanson says
With all that’s been said and written about global warming and it’s effects on climate for more than 20 years, there’s a strong skepticism in the US public mind as a result of propaganda efforts by the denialist camp. It’s quite clear that these disinformation efforts have been successful because at present there’s little sense of the change, which amounts to less than 1C since 1900. In the USA, we live with daily temperature changes which may amount to 20C a day and perhaps 70C between summer highs and winter lows. Worse, our standard for temperature isn’t Celsius, instead we use Fahrenheit, which has smaller “degrees” and a larger yearly range. Perhaps, our predicament should be presented differently to the public, if there’s to be any hope.
Here’s an idea for consideration. Instead of focusing on the small changes in a temperature variable, let’s use percent change in absolute temperature instead. That works with either measurement system if absolute temperature is the measure. The Earth has an average temperature of about 16C, which converts to 289K. The much talked about 2K limit amounts to less than 1% of that. How does this level of change relate to previous changes? During most of the past 120,000 years, the Earth was in Ice Age conditions and paleoclimate data points to a climate which was around 5k cooler than 1900. Back then, civilization as we know it today could not have existed in large areas of Europe or the Eastern US and that was less than 2% colder. Were the Earth to experience a 5% cooling, the oceans would become frozen solid because of the land/ocean snow/ice albedo feedback.
If humans continue to dump more greenhouse gases into the atmosphere and the temperature were to increase by 5%, we would find that much of the Earth’s land area would experience dew point temperatures above the 35C (95F) threshold for heat stroke. With such high dew point temperatures, outdoor activities would be impossible during daylight hours. This situation already occurs in some locations, such as in the cane fields of Central America, where workers toiling to cut the cane must consume large quantities of water, up to 5 liters a day, just to stay cool and even then must quit working around noon as the heat becomes unbearable. Even a 2% warming would push many locations in the tropics beyond this deadly threshold, stopping many outdoor activities during daylight hours.
Our economic and political system is dominated by money and finance considerations. These guys routinely think in terms of percentages and ratios, not absolute values. Perhaps a climate presentation based on percentage change would prove to be convincing in the minds of our so-called “leaders”. There are real limits to growth and climate is one of them…
Hank Roberts says
‘The only thing we learn from history is that we learn nothing from history.’
— Georg Wilhelm Friedrich Hegel —
For those interested, a bit from history found via Scholar search:
“The Cartographic Discovery of the Great Lakes Snow Belts”
http://link.springer.com/chapter/10.1007/978-3-642-33317-0_14#page-1
“Review of Lake Effect: Tales of Large Lakes, Arctic Winds, and Recurrent Snows”
By Mark Monmonier.
Syracuse University Press, 2012, ISBN: 978-0-8156-1004-5
http://cartoperspectives.org/index.php/journal/article/view/cp77-hickey/1331
Ray Ladbury says
Victor,
There are many reasons why one would be interested in a linear trend even when the signal is not strictly linear. The linear term is the first that will emerge from a noisy signal over the long term–that is, the linear term tells you about the dominant forcings over the long term.
By all means, the behavior in the interim that deviates from linear (the wiggles) may also be interesting, but that is a different analysis. Do not merely blithely assume that when you don’t understand an analysis that it must be wrong.
Hank Roberts says
History and future history, quoting from
http://thischangeseverything.org/no-time-to-spare/
Icarus62 says
We’ve been emitting CO₂ in ever-increasing quantities for many decades and the natural world has been absorbing some, but not all of that CO₂ – roughly half of our annual emissions end up in the land and oceans, rather than the atmosphere. So, the quantity absorbed per year increases, but the fraction of our annual emissions absorbed remains much the same. I presume this is mainly because the partial pressure of CO₂ in the atmosphere is now substantially larger, so it is absorbed by the natural world at a higher rate.
What happens if, instead of our emissions increasing every year, they start to decline every year? Will the quantity absorbed by the natural world continue to increase, or stay the same, or will it continue to be roughly half of our now-smaller annual emissions?
Another way of looking at it: If we emit 32 billion tons annually now, and 16 billion tons is absorbed by the land and oceans, what happens if we suddenly cut our emissions to 16 billion tons? Does all 16 billion tons get absorbed by the natural world or only 8 billion tons?
I know that global warming is dictated by atmospheric concentration, not by rate of emission, but I’m wondering how easy it’s going to be to stop that concentration rising. Do we have to stop 100% of CO₂ emissions to stop it rising, or less than that? If we made a huge effort and reduced emissions by 75%, would atmospheric concentration start to fall, or just continue increasing at a slower pace? Also, how long will it be before the warming world starts adding to our carbon emissions, rather than offsetting some of them?
Cheers!
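One way to frame Icarus62’s question: if uptake scales with the CO₂ excess above pre-industrial rather than with that year’s emissions, then halving emissions roughly stabilizes the concentration rather than halving the uptake. Below is a deliberately crude one-box sketch; the uptake-rate constant is an assumption tuned so that about half of today’s ~32 Gt CO₂/yr is absorbed, and the real carbon cycle has multiple reservoirs and timescales.

```python
# Toy one-box carbon model: ocean/land uptake scales with the CO2 excess above
# pre-industrial, not with that year's emissions. All numbers are illustrative.
GTC_PER_PPM = 2.12            # ~2.12 GtC of atmospheric carbon per ppm of CO2
GTC_PER_GTCO2 = 12.0 / 44.0   # carbon mass fraction of CO2
UPTAKE_RATE = 0.017           # assumed: ~half of 32 GtCO2/yr absorbed at 400 ppm

def step(conc_ppm, emissions_gtco2, preindustrial=280.0):
    added = emissions_gtco2 * GTC_PER_GTCO2 / GTC_PER_PPM   # ppm added this year
    uptake = UPTAKE_RATE * (conc_ppm - preindustrial)       # ppm absorbed this year
    return conc_ppm + added - uptake

conc = 400.0
for year in range(1, 11):                 # halve emissions to 16 GtCO2/yr
    conc = step(conc, emissions_gtco2=16.0)
    print(f"year {year:2d}: {conc:6.2f} ppm")  # concentration roughly stabilizes
```

In this toy model a 50% cut holds the concentration nearly flat, because uptake keeps pace with the excess; the concentration only falls once emissions drop below what the excess-driven uptake removes.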
Paul Berberich says
#12 MA Rodger,
I have tested my analysis by fitting the data for 1850-1954 and comparing the resulting forecast for 1955-2014 with the published data. They agree within ±0.15°C. My problem is that this good agreement is only obtained when I use the gridded HadCRUT4 data set and interpolate and extrapolate the missing segments. The global HadCRUT4 data set leads to unacceptable errors. Note that the coverage of the historical data is poor; in 1910, for instance, it is 56%.
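For readers wanting to reproduce the flavor of that test, here is a minimal hold-out sketch: fit on the early period, forecast the rest, compare. The toy series and the simple quadratic fit are stand-ins, not Paul’s model or the HadCRUT4 data; only the procedure is the point.

```python
# Sketch of the hold-out test described above: fit on 1850-1954, forecast
# 1955-2014, compare. Toy series, not real data.
import numpy as np

years = np.arange(1850, 2015)
rng = np.random.default_rng(1)
series = 0.005 * (years - 1900) + rng.normal(0, 0.1, years.size)  # toy anomaly

train = years <= 1954
coeffs = np.polyfit(years[train], series[train], 2)   # hindcast fit
forecast = np.polyval(coeffs, years[~train])

err = np.abs(series[~train] - forecast)
print(f"forecast agrees with the held-out 'data' within +/- {err.max():.2f} C")
```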
Hank Roberts says
Question for any of the Contributors of RC, and other climate scientists: of the many different views into the Lima, Peru meetings, is there anyone or anything in particular you’re watching? Or anyone ‘blogging’ the meeting you can recommend?
As I said earlier, I’m watching ecoequity, but that’s slow, with high-level updates usually posted after things have been decided.
The Lima meetings have a LOT of windows available.
——-
Also worth a look, from Bloomberg News:
http://www.bloomberg.com/news/2014-12-03/2014-is-likely-to-be-the-earth-s-hottest-year-ever-why-it-doesn-t-matter-.html
vukcevic says
November CET
daily max temperatures are back to the 20-year average, while daily minima are still about 1°C above it.
http://www.vukcevic.talktalk.net/CET-dMm.htm
The AGW advocates should learn to like, or even love, the CO2 gas; if it was responsible for part of the rising temperatures since the end of the 19th century, even more so. In my view global warming (along with the advances in technology and medicine) is the best thing that has happened to humanity during the last 100+ years.
If the NOAA numbers for the global land temperatures are accurate, then the current global warming (since 1900) has been even greater, and more beneficial, than assumed. According to my calculation NOAA underestimates the temperature rise by about 0.2C, with natural variability apparently responsible for about +/- 0.4C (0.8C peak to peak). Whether the CO2 gas or some other factor is responsible for the rest, I wouldn’t be able to say; the coefficient of determination with the CO2 concentration is R2 = 0.6877, but of course there is always the possibility that the CO2 increase is a direct consequence of the rising temperatures.
Warren Hoskins says
As soon as I saw this chapter on historians considering climate change, I thought it belonged where readers of RealClimate would see it: http://www.publicbooks.org//nonfiction/changing-climates-of-history
James Newberry says
Hansen said that it is not so much the ice-sheet melt rate at a particular moment that is most important, but the rate at which that rate doubles. I thought that a doubling time of ten years would be crazy quick. Recent scientific observations seem to indicate we are moving beyond crazy.
If the trend continues we won’t need to worry about thermonuclear holocaust because we will have created one for the whole planet, based on the sun’s fusion radiation and industrial society’s carbonic acid gas (aka carbon dioxide). Welcome to the global gas chamber. Humanity’s seaports, and all coastal regions, are in the cross-hairs.
Meow says
@whatever, text:
Please look up the statistical definition of “trend”. In the meantime, imagine the sawtooth function y = trunc(x/10) - (((x + 10) mod 10)/10), in which each 10-point segment (e.g., x = [0..9], [10..19]) has a *negative* trend, but the function as a whole has a positive trend of about 0.1 per unit of x.
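The sawtooth is easy to verify numerically; a short sketch using the function exactly as given:

```python
# Meow's sawtooth: every 10-point segment trends down, yet the overall trend is up.
import numpy as np

x = np.arange(50)
y = np.trunc(x / 10) - (((x + 10) % 10) / 10)

seg = np.polyfit(x[:10], y[:10], 1)[0]      # same slope for every 10-point segment
overall = np.polyfit(x, y, 1)[0]
print(f"within-segment slope: {seg:+.2f}, overall OLS slope: {overall:+.2f}")
# -> within-segment slope: -0.10, overall OLS slope: +0.09
```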
Your manner of analysis is not doing climate-change “skepticism” any favors.
Victor says
#19 Hank Roberts
Hmmmm. Talk about a left-handed compliment! :-)
Why not just come out and say I was right?
The graph in question has been displayed recently in media articles claiming that the Buffalo-area lake-effect storm was caused by global warming. The article by Ms. Rayne that I cited was offered as a refutation. Why not simply admit she was right, and that other climate scientists agree the graph is misleading? All I’ve done is expand on her reading, using the graph as an example of how statistics can mislead.
sidd says
Hansen is sometimes uncomfortably prescient. Let us take that line of thought a little further. The subject paper (Sutterley 2014, doi:10.1002/2014GL061940) shows mass waste tripling in the Amundsen Sea Embayment (ASE) in two decades. If, à la Hansen, we say this is exponential (a very strong claim), then we get a tau of 18-odd years and a doubling time of about twelve.
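The arithmetic behind those numbers, assuming pure exponential growth:

```python
# If ASE mass loss grows as exp(t/tau) and tripled in 20 years, exp(20/tau) = 3.
import math

tau = 20 / math.log(3)             # e-folding time
doubling = tau * math.log(2)       # doubling time
print(f"tau = {tau:.1f} yr, doubling time = {doubling:.1f} yr")  # 18.2 and 12.6
```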
Greenland, by comparison, shows a slightly faster doubling time (Enderlin 2014, doi:10.1002/2013GL059010). And Greenland is melting about twice as fast as the ASE today. Also, it’s getting blacker.
The mechanisms in the two cases are different: Greenland is dominated by surface mass imbalance, the ASE by warm water eating at the base.
Nobody seems to wanna talk about East Antarctica …
I think that within a decade, the realization of real-estate valuation losses in the expensive coastal playgrounds of the Great and the Mighty will ensure the demise of fossil fuels more quickly than appeals to ecological equity will. But we persevere, nonetheless.
sidd
Victor says
#26 MARodger
“Even the 1973-2000 regression is positive.”
1973 was a low point. Try 1971 instead. (And by the way, that’s NOT a cherry pick. It’s a nit pick.) :-)
If you start with 1973 and go through 2000 there is simply no upward trend at all. Sorry if you don’t want to see that, but it’s way too obvious to dispute.
Also see #19. Evidently I’m not alone in my assessment.
Also, see
Victor says
Oops. The sentence in my last post should read: “If you start with 1971 and go through 2000 there is simply no upward trend at all.”
Victor says
#38 Meow
So you’re saying that the lack of any upward trend in the years 1971 through 2000 in the graph in question doesn’t matter? All that matters is the result of a linear regression algorithm? You’d prefer to believe that, just because a “trend” can be defined purely in statistical terms, any such statistical result must therefore represent an actual trend? That comes very close to being a tautology.
What I see in that graph is a rather clear upward trend about two-thirds of the way through, followed by a period of no upward trend for the final third. That should be obvious from a close examination of the data points. The statistical result is clearly due to the strength of the initial trend, which overwhelms the shorter period of no trend. That would be the case with all sorts of datasets in which a significant segment does not conform to the larger segment containing the strong trend. And I’m not referring to “noisy” data containing lots of intermittent points that fit no trend; I’m referring to a continuous block of data that doesn’t conform. Given the reductive nature of the standard algorithm, such distortions are bound to happen. So what you are saying is that you don’t care; you’d prefer to go with what the statistics tell you regardless.
That’s the very definition of reductive thinking. I’m no postmodernist, but even so that’s clearly NOT the way to evaluate data.
MARodger says
Paul Berberich @33.
You describe what I would call an intrinsic analysis: an analysis concerned solely with the data set and not with what that data set represents. Using intrinsic analysis for projections, as you do, requires you (or anyone else) to be very, very disciplined in your approach.
Firstly, you do need, at the very least, to address the extrinsic source of your data, to counter the accusation I made @12 that you are fitting regularity onto a set of random, one-off events. That accusation remains unanswered by you.
Secondly, you say @33 that you “tested (your) analysis by fitting the data for 1850-1954 and comparing the resulting forecast for 1955-2014 with the published data.” I would suggest that this is not entirely correct. What you tested was the fit of your analysis, that is, the values of the five constants you use. I say this because I cannot see how the model you defined as the representation of global average temperature can be derived from the 1850-1954 data alone. Your approach only makes sense if you can demonstrate that the form of your function T(x) = A + Bx + Cx^2 + D sin(E + Fx) is the logical outcome of the 1850-1954 data. That is, why would your model be any more correct than, say, T(x) = A + Bx + D sin(E + Fx)?
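A sketch of why the functional form matters (synthetic data; the phase and the ~65-year period of the oscillation are fixed here purely to keep the fit linear): the two candidate models match the training interval almost equally well, yet their extrapolations diverge, which is exactly why in-sample agreement alone cannot justify the extra quadratic term.

```python
# Two models fit a training period about equally well yet diverge when
# extrapolated. Synthetic, illustrative data only.
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(105, dtype=float)                       # stand-in for 1850-1954
osc = np.sin(0.5 + 2 * np.pi * x / 65)
y = 0.003 * x + 1.5e-5 * x**2 + 0.1 * osc + rng.normal(0, 0.08, x.size)

# Design matrices: with and without the quadratic term.
X_quad = np.column_stack([np.ones_like(x), x, x**2, osc])
X_lin = np.column_stack([np.ones_like(x), x, osc])
b_quad, *_ = np.linalg.lstsq(X_quad, y, rcond=None)
b_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)

rms_q = np.sqrt(np.mean((X_quad @ b_quad - y) ** 2))
rms_l = np.sqrt(np.mean((X_lin @ b_lin - y) ** 2))

xf = np.arange(105, 165, dtype=float)                 # stand-in for 1955-2014
oscf = np.sin(0.5 + 2 * np.pi * xf / 65)
pred_q = np.column_stack([np.ones_like(xf), xf, xf**2, oscf]) @ b_quad
pred_l = np.column_stack([np.ones_like(xf), xf, oscf]) @ b_lin

print(f"training RMS: quadratic {rms_q:.3f} vs linear {rms_l:.3f}")
print(f"divergence at the far end of the forecast: {pred_q[-1] - pred_l[-1]:+.2f}")
```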
Kevin McKinney says
#35–“…of course there is always the possibility that the CO2 increase is a direct consequence of the rising temperatures.”
Of course, that would raise the questions of why the isotopic signature of atmospheric CO2 looks just as would be expected if it were due to fossil fuel emissions, and of just what happens to the other half or so of the carbon humans are known to emit into the atmosphere.
But hey, if we’re going to imagine that global warming is the best thing for humanity since, well, ever, then why strain at a couple of logical ‘gnats’?
MARodger says
Of course, I am not in the slightest bothered which years Victor the Troll chooses to cherry-pick. As his words are cheap, it is a somewhat ephemeral situation. He did say @17 “Or, if you prefer, 1973 through 2000.” Ditto @40, but @41 he now insists it is 1971-2000 or nothing. I think he may have better luck choosing 1971-83, as that range almost gives a statistically significant negative trend. The 1971-2000 cherry-pick, however, yields a central regression that is negative but by some way not statistically significant: -0.0088 ±0.031 (2 sd), giving a 31% chance of an upward trend. So to state that “there is simply no upward trend at all” is also incorrect.
And, given the noisy data being analysed, I fail to see why anybody would take the lake-effect snowfall regression from Burnett et al. (2003) and dispute its veracity in this way. The regression is fine. So now it is about cherry-picking start and end dates in an attempt to keep up the attack. I am mystified. Does anybody have an inkling of what the Troll is attempting? Or why?
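For anyone wanting to check that chance-of-an-upward-trend figure: treating the fitted slope as Normal with the quoted 2 sd width gives roughly the same answer (a plain Normal yields about 29%; the small difference from the 31% quoted is likely down to the distribution used).

```python
# Slope -0.0088 with a 2-sd width of 0.031, treated as Normal.
from scipy.stats import norm

slope, two_sd = -0.0088, 0.031
p_up = 1 - norm.cdf(0.0, loc=slope, scale=two_sd / 2)
print(f"chance the true 1971-2000 trend is upward: {p_up:.0%}")  # ~29%
```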
Kevin McKinney says
Looks like India is trying the ‘carrot and stick’ approach at Lima, too:
http://www.thehindubusinessline.com/economy/take-responsibility-india-to-tell-developed-countries-in-climate-conference/article6665445.ece
Can’t fault their basic concept of fairness, but I sure hope they temper it with a dash or three of expediency and pragmatism, and that their forthcoming emissions targets will be both real AND, as promised, ‘aggressive.’
Steve Fish says
Victor, you apparently believe that your inexpert postmodernist accusations have some support because you say “Evidently I’m not alone in my assessment.” Perhaps you can provide references to expert criticism of the Burnett et al. (2003) research article. Google tells me there are on the order of tens of published, peer-reviewed research articles on lake-effect snow in 2014 alone, so you should have no problem finding expert criticism. No?
Steve
Radge Havers says
MARodger, trolls attempt to get attention, wind people up, puff themselves up at others’ expense, and make themselves alpha trolls in the eyes of other trolls. They may have other agendas, but generally it all just boils down to being a troll. They do it because they’re messed up.
Notice how hard it works to pull Hank back in after Hank indicated that he was losing interest.
Meow says
@5 Dec 2014 @ 1:28 AM: The statistical definition of “trend” is meant to help us avoid deceiving ourselves. It is a mostly objective way of deciding (probabilistically, of course) whether a given change in a random variable is due to chance. If you abandon that definition in favor of eyeballing, you’re no longer practicing statistics, which means you’ve abandoned the best tool we have for making sense of data series.
For more about deciding whether a trend exists in a series, please see http://tamino.wordpress.com/2011/07/16/trend-and-noise/ and http://tamino.wordpress.com/2012/02/07/trend-and-uncertainty/ .