Our favorite contrarian, the potty peer Christopher Monckton, has been indulging in a little aristocratic artifice again. Not one to be constrained by mere facts or observable reality, he has launched a sally against Andy Revkin for reporting the shocking news that past industry disinformation campaigns were not sincere explorations of the true uncertainties in climate science.
The letter he has written to the NY Times public editor, with its liberal sprinkling of his usual pomposity, has at its heart the following graph:
Among other issues, it is quite amusing that Monckton apparently thinks that:
- trends from January 2002 are relevant to a complaint about a story discussing a 1995 report,
- someone might be fooled by the cherry-picked January 2002 start date,
- no-one would notice that he has just made up the IPCC projection curves.
The last is even more amusing because he was caught out making stuff up on a slightly different figure just a few weeks ago.
To see the extent of this chicanery, one need only plot the actual IPCC projections against the observations. This can be done a number of ways; first, by plotting the observational data and the models used by the IPCC with a common baseline of 1980-1999 temperatures, as done in the 2007 report (note that the model output is for the annual mean; monthly variance would be larger):
These show clearly that 2002-2009 is way too short a period for the trends to be meaningful and that Monckton’s estimate of what the IPCC projects for the current period is woefully wrong. Not just wrong, fake.
Even if one assumes that the baseline should be the year 2002, making no allowance for internal variability (which makes no sense whatsoever), one gets the following graph:
– still nothing like Monckton showed. Instead, he appears to have derived his ‘projections’ by drawing a line from 2002 to a selection of real projections in 2100 and ignoring the fact that the actual projections accelerate as time goes on, and thus strongly over-estimating the projected changes that are expected now (see here).
Lest this be thought a mere aberration or a slip of his quill, it turns out he has previously faked the data on projections of CO2 as well. This graph is from a recent presentation of his, compared to the actual projections:
How can this be described except as fake?
Apart from this nonsense, is there anything to Monckton’s complaint about Revkin’s story? Sadly, no. Once one cuts out the paranoid hints about dark conspiracies between “prejudiced campaigners”, Al Gore and the New York Times editors, the only point he appears to make is that this passage from the scientific advice somehow redeems the industry lobbyists who ignored it:
The scientific basis for the Greenhouse Effect and the potential for a human impact on climate is based on well-established scientific fact, and should not be denied. While, in theory, human activities have the potential to result in net cooling, a concern about 25 years ago, the current balance between greenhouse gas emissions and the emissions of particulates and particulate-formers is such that essentially all of today’s concern is about net warming. However, as will be discussed below, it is still not possible to accurately predict the magnitude (if any), timing or impact of climate change as a result of the increase in greenhouse gas concentrations. Also, because of the complex, possibly chaotic, nature of the climate system, it may never be possible to accurately predict future climate or to estimate the impact of increased greenhouse gas concentrations.
This is a curious claim, since the passage is pretty much mainstream. For instance, in the IPCC Second Assessment Report (1995) (p528):
Complex systems often allow deterministic predictability of some characteristics … yet do not permit skilful forecasts of other phenomena …
or even more clearly in IPCC TAR (2001):
In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states….
Much more central to the point Revkin was making was the deletion of the sections dealing with how weak the standard contrarian arguments were – arguments that GCC publications continued to use for years afterward (and indeed arguments that Monckton is still using) (see this amendment to the original story).
Monckton’s ironic pièce de résistance, though, is the fact that he entitled his letter “Deliberate Misrepresentation” – possibly the only true statement in it.
Mark says
re 299.
How about “It’s been a really hot summer this year”?
For decades, centuries, even millennia, it’s been used.
It also shows how with climate one swallow does not a summer make.
Mark says
re 298, why did you use a cutoff of frequencies below 1 year? That still includes weather.
Why not use a lowess filter on it and look at the graph?
Or scale the log (CO2) growth and fit it to the best fit of the graph and see how much residual there is and whether they can be explained?
In fact, why did you pick FFT as an analysis at all? You also have not shown what physical process your “analysis” is indicating. As many have said, correlation is not causation. They also miss that causation implies correlation.
Where is your causation that your “analysis” finds?
Mark says
Theo, what causes your cheeks to wobble as you fall down?
Does not knowing what each wobble you feel or each flap and snap of your shirt means allow you to conclude that the ground getting closer isn’t really a problem? That because we cannot explain why your coat snapped just THEN, we cannot say you will go SPLAT! on the ground?
Barton Paul Levenson says
Son of Mulder writes:
Try using moving averages. 5-year, 10-year, 20-year, 40-year. See what happens to the curve.
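A minimal sketch of that exercise, in Python rather than a spreadsheet, with a synthetic stand-in series (the trend and noise values here are illustrative only; the real annual values would be substituted):

```python
import numpy as np

def moving_average(x, window):
    """Centered running mean; the output is (window - 1) samples shorter than x."""
    return np.convolve(x, np.ones(window) / window, mode='valid')

# Synthetic stand-in for an annual temperature series: a small trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1850, 2009)
temps = 0.005 * (years - 1850) + rng.normal(0.0, 0.2, years.size)

for window in (5, 10, 20, 40):
    smoothed = moving_average(temps, window)
    # The longer the window, the smaller the year-to-year wiggles that survive.
    print(f"{window:2d}-yr window: spread of smoothed curve = {smoothed.std():.3f}")
```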
J. Bob says
#301 Mark
The term “mizu no kokoro” is Japanese, meaning “a mind like water”. Calm water like a calm mind reflects reality.
What I was referring to, in my reply to Mike, was using finer time increments of one month, rather than the one-year increments in the HadCET data I initially used. This would reduce the “leakage” or discontinuities at the end points. And yes, I have checked it out with some simple low-pass filters in the 0.01-0.025 freq. region. However, first I have to put together a program for calculating coefficients for higher-order recursive filters. The problem with these filters is that they introduce a phase, or time, delay. This is where Fourier convolution helps. With a conventional low-pass filter, the current time period is cut off. Using a Chebyshev low-pass filter, which gives a better cut-off, I have a better comparison between the shapes of the FFT and low-pass filters, which I have done on a preliminary basis. Anyone who does not cross-check his work with other tools is generally asking for trouble.
Another point is that convolution methods, like any good tool, work in many diverse areas. Analyzing temperature, stock markets, or surface profiles, in 1-D or 2-D (i.e. images), they work the same.
As far as CO2 goes, that is not what I am looking at. This is what I am looking at: “is the temperature on the earth going up, and if so, is this part of a natural cycle, or of some other cause not explained?” To date, for me, with the peaking or slight downtrend of a ~50-year cycle, it looks like part of a natural cycle.
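J. Bob mentions above that he still has to write a program to calculate coefficients for higher-order recursive filters. For what it is worth, scipy already provides that design step; this sketch only shows the coefficient calculation for 2- and 4-pole Chebyshev low-pass filters in the 0.01-0.025 cycles/year region he describes, and is not his spreadsheet code:

```python
from scipy import signal

# Annual sampling (1 sample per year) puts the Nyquist frequency at 0.5 cycles/yr.
nyquist = 0.5
for cutoff in (0.01, 0.025):          # the 0.01-0.025 cycles/yr region mentioned above
    for order in (2, 4):              # 2- and 4-pole Chebyshev type-I low-pass designs
        b, a = signal.cheby1(order, 0.5, cutoff / nyquist, btype='low')
        print(f"order {order}, cutoff {cutoff} cycles/yr: b={b}, a={a}")
```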
TrueSceptic says
Readers here will be interested to know that Monckton has responded over at Deltoid. Two excerpts:
“It would be unwise to rely on “Real” Climate on any scientific matter: Schmidt, the blogger, has a substantial financial vested interest in promoting and exaggerating the “global warming” scare. A refutation of Schmidt’s latest less-than-temperate, less-than-accurate posting will appear shortly at http://www.scienceandpublicpolicy.org.”
“Can it be, perhaps, that those who – like puir wee Schmidt at NASA, or puir wee Lambert at Deltoid – do not have the technical competence or scientific integrity to address in a balanced and reasoned manner the scientific questions I raise find it easier to argue dishonestly ad hominem than honestly ad rem? Magna est veritas, et praevalet. – Monckton of Brenchley”
The phrase “beggars belief” seems somewhat inadequate, doesn’t it?
http://scienceblogs.com/deltoid/2009/05/monckton_caught_making_things.php#comment-1618865
[Response: Ha! Pottier and pottier…. – gavin]
dhogaza says
C’mon, we’re still waiting for you to have a chat with tamino, a professional at analyzing time series, over at Open Mind.
Why don’t you?
If you’ve actually succeeded in overturning the work of thousands of climate scientists, don’t you want to begin convincing professional statisticians and the like, and to begin writing this up for publication in a major scientific journal?
Fame awaits you …
J. Bob says
#305 Beginning should read: #302 – Mark
#307 – Why? He shows temps increasing, and over the past ~8 years they have been going down, or holding. So which fits the data better? Besides, even using moving averages introduces phase, or time, delays.
As far as “debating” him, I will if I have time; however, spring has FINALLY arrived, and I have too many outdoor things to do. Oh, how do you know I’m not a professional? Besides, I’m much too humble to overturn the thoughts of all those people; they have to make up their own minds.
James says
J. Bob Says (9 May 2009 at 9:29):
“As far as CO2, that is not what I am looking at.”
Maybe you should? It seems that you are working backwards: see if there has been an effect, and only then think about the cause. That may be a useful tool for teasing out explanations for past behavior, but for predicting the future, it’s usually more useful to start at the beginning – with known physics – and work forwards.
dhogaza says
It depends on whether you’re trying to show the wiggles, or the underlying trend. It’s the trend we’re concerned about, not whether the current (or recently ended) La Niña has caused energy within the system to be shuffled around in a way that causes atmospheric temps to flatten, just as the next El Niño will cause warming beyond the underlying trend. We know that we’ll have future La Niña and El Niño (and other) events. We know we’re in a solar minimum. You, and Spencer with his 4th degree polynomial fit, and others doing similar things all have a common goal of making short-term noise appear to “refute” CO2-forced warming. You’ll drop it just as soon as the next El Niño awakens, never to be heard from again, I’m sure. Because the logical conclusion would have to become “oh, now it’s warming 2x as fast as the IPCC projections suggest!” and that would run counter to your agenda … all of a sudden short-term “trends” will be too short-term to be useful.
That’s what you said a few weeks ago when you (temporarily) terminated your drive-by “oh, look, I proved it’s natural cycles!” “proof”. “Bye, I’m leaving, too busy, ta-ta!”
You’re an engineer, not a statistician or scientist.
CTG says
J. Bob – so you are using a technique that will find cycles in just about any data you throw at it, and you found a cycle? That’s… interesting.
But what is your hypothesis that led you to assume a cyclical nature to the data? Did you have an a priori reason to suspect a 50-year cycle based on a hypothesis you were testing? Or did you just tweak the parameters until a cycle appeared?
More importantly, what was the null hypothesis you were testing against? “The data is not cyclical” or “The data shows random variability”? This is important, because the first one implies that you already have some reason to suspect that the data is not random.
If you take the second null hypothesis, then you certainly would not start with FFT as your analysis technique. Looking at rolling averages gives you a pretty strong hint that there is a linear trend evident, so simple regression is enough in this case – and that most certainly gives a significant result with which we can reject the null hypothesis that the data shows random variability. We can then show correlation with CO2, because we have a hypothesis that suggests we should see a correlation between the temperature increase and the CO2 increase. And whaddya know – the correlation between temperature and CO2 is significant.
Any other analysis therefore cannot exclude CO2, because we have shown:
a) CO2 has been increasing over the last 150 years
b) The global temperature has been increasing over the last 150 years
c) Our knowledge of CO2 says that if a) is happening, there should be a strong correlation between a) and b), which there is
You are assuming that there is an unknown variant driving the temperature record. However, there is a known variant – CO2, so a univariate analysis of the temperature records is just not appropriate. I suppose you could do a multivariate analysis, so that you could try and analyse the variance that is not due to CO2 – but then, that’s not what you are trying to prove, is it?
Oh, and as for temperatures going down in the last 8 years, 2005 and 2007 were both warmer than 1998, so how exactly are temperatures going down?
Ray Ladbury says
Jbob, Just wondering. Do you think you are the only person to get a “Fun with Fourier Transforms” kit for Christmas and apply it to climate data? There are all sorts of peaks in there, but unfortunately there’s a dearth of physics. It’s pretty easy to cherrypick a couple of peaks that have the right frequency and explain a single trend. What do your Fourier transforms tell you about the cooling of the stratosphere or the fact that last frost dates are about 3 weeks earlier than they were a few decades ago?
Also, do you really think you’ve learned all there is to know about statistics from a fricking Six-Sigma class or two?
A good engineer is always looking to add new tools to his toolbox. The shelves for physics and statistics in yours seem pretty spare. But then to a man with only a hammer in his toolbox, everything looks like a nail.
You claim things are cooling–and yet every year this decade has been one of the 10 warmest. Funny definition of cooling.
You don’t go to Open Mind to “debate”. We’ve got enough Master “debaters” there already. Try going there to learn something.
Son of Mulder says
In # 294 Ray Ladbury said “Son of Mulder, since climate consists of longterm trends, and an El Nino lasts on order of a year, you would be incorrect in assigning El Nino to climate. Systematic changes of ENSO over time would qualify. Think trends over time, not events.”
Please read what I said… “Mark then asked “And how about “an unusually strong El Nino”?”. I’d say that wasn’t noise but just an observable in the climate system.”
I didn’t assign El Nino to climate, I said it was an observable in the climate system. Consider: how can you talk of ENSO unless there are some observables that it creates, e.g. El Nino, La Nina?
Then Mark said in #297 ” No, that would be because your thermometer is not global in size. Therefore you are not measuring the earth’s average temperature with it in your back garden.”
Precisely, did you not twig I was using a reductio ad absurdum? Nor are the methods used historically to calculate global average temperature, global in size. They are a patchwork extrapolating readings and knitting together an ‘approximate measure of global average temperature’ which is not identical to the real thing which would be impractical to measure. The difference between real and any attempted measure is noise introduced by the measuring system.
Also in #297 Mark said “When you quote someone, do you ever read what you quote first to make sure you don’t look like a dork?”
I do, you clearly don’t… and I’m much more polite as well.
Barton Paul Levenson says
Son of Mulder writes:
Do you understand what sampling is and that there are ways to do it reliably?
dhogaza says
Shorter Son of Mulder: “we can’t measure anything with exact precision, therefore no useful measurements can be made”.
Just another spin on the “we don’t know everything, therefore we know nothing” denialist meme.
J. Bob says
This was initially undertaken to evaluate the long term temperature trends, and if there is a basis that the recent news about global warming is a recent event, or part of long term natural cycle.
Since the raw yearly temperature data has many short-term fluctuations and disturbances, mathematical methods are used to reduce these short-term fluctuations, or “noise”. That is, reduce the “noise” to see what underlying trends or information is present. It means looking at lower-frequency trends, say over 30-40 years, to indicate climate in the past and future. A common way to do this is to introduce what is called a low-pass filter. Meaning, it takes the raw data and reduces the amplitude of the higher frequencies (noise), while preserving the information in the lower frequencies. Three common methods of doing this were evaluated, each with its own characteristics. They are:
Moving averages filtering
Classical recursive filtering
Fourier Convolution filtering
The 350+ year temperature history from Central England was used, referred to in this write-up as the HadCET data set. This is the longest continuous data set, and includes both monthly and averaged yearly data, from 1659 to the present. Yearly data was chosen for ease of use. The first step was to establish a long-term trend line, using a linear equation, to aid in analysis. This is defined as T_linear. The line was computed to minimize the mean error between the actual temperature, T_act, and the linear estimate, T_linear.
T_linear = 8.69 + 0.003*(Yr – 1659), 1659 ≤ Yr ≤ 2008
tamino says
J. Bob, it seems to me you’ve been doing some reckless curve-fitting. Take a look at this.
Son of Mulder says
Barton Paul Levenson asks:
“Do you understand what sampling is and that there are ways to do it reliably?”
Yes, and that it is not ‘reliably’ but ‘as reliably as possible’. So inevitably the measuring system generates noise (which may be biased or unbiased depending on how good the correction processes are or can be) in the global record.
I’ll turn around what I’m probing to come at it from a more direct route.
With all the various oscillations ENSO, PDO, NAO …. and the temperature measurement processes that have been used over the years, how much of the ‘natural variation in global average temperature’ is due to real change in global average temperature and how much is due to the position of the fixed stations and the process to calculate the global average given the changes in temperature distribution because of the oscillations?
It’s easy to envision a situation where, given a change in the distribution of warm areas that may arise during an oscillation, they register on more ground stations, thus creating an apparent increase in temperature whereas the reality may be that the real average hasn’t changed … equally true for colder areas when the apparent temperature falls.
To me, such effects are noise in the measuring system not the climate.
[Response: This is exactly what the error bars on the global and regional average temperature series shown in the IPCC report, and the underlying papers on which they are based (e.g. Brohan et al., 2006), represent. They represent (in large part) the uncertainties due to incomplete and time-variable sampling (which in general become larger back in time as the unsampled regions increase in extent), though there are some other components of uncertainty included as well. The warming trends are highly significant even when these uncertainties are taken into account. Note that alternative “frozen grid” estimates (where only those stations which are available over the full period are used) show essentially the same behavior. This is a good place to start (in particular, the publications cited there) if you’re looking to learn more about the surface temperature record, its uncertainties, and how they are estimated. -mike]
J. Bob says
#317 tamino
I just posted the rest of the article, hope it all gets in. Read it, and then we can talk.
J. Bob says
#317 tamino – it was cut off by the less-than-or-equal sign, here is the thing again.
This was initially undertaken to evaluate the long term temperature trends, and if there is a basis that the recent news about global warming is a recent event, or part of long term natural cycle.
Since the raw yearly temperature data has many short-term fluctuations and disturbances, mathematical methods are used to reduce these short-term fluctuations, or “noise”. That is, reduce the “noise” to see what underlying trends or information is present. It means looking at lower-frequency trends, say over 30-40 years, to indicate climate in the past and future. A common way to do this is to introduce what is called a low-pass filter. Meaning, it takes the raw data and reduces the amplitude of the higher frequencies (noise), while preserving the information in the lower frequencies. Three common methods of doing this were evaluated, each with its own characteristics. They are:
Moving averages filtering
Classical recursive filtering
Fourier Convolution filtering
The 350+ year temperature history from Central England was used, referred to in this write-up as the HadCET data set. This is the longest continuous data set, and includes both monthly and averaged yearly data, from 1659 to the present. Yearly data was chosen for ease of use. The first step was to establish a long-term trend line, using a linear equation, to aid in analysis. This is defined as T_linear. The line was computed to minimize the mean error between the actual temperature, T_act, and the linear estimate, T_linear.
T_linear = 8.69 + 0.003*(Yr – 1659), Yr from 1659 to 2008
This is shown in Figure T_est_10, referenced below.
http://www.imagenerd.com/uploads/t_est_10-0EYxV.gif
The error between the actual and linear estimation becomes the basis of filtering and comparison. From figure T_est_10, significant shorter-term fluctuations are seen. What we are looking for are the longer-term trends buried in this seemingly chaotic signal.
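The trend-line step described above is an ordinary least-squares fit. A minimal sketch, using a synthetic stand-in for the 350 annual HadCET values (the quoted coefficients 8.69 and 0.003 come from the real series, not from this toy data):

```python
import numpy as np

# Synthetic stand-in for the 350 annual HadCET values, 1659-2008
# (built around the quoted trend purely so the sketch is self-contained).
rng = np.random.default_rng(0)
yr = np.arange(1659, 2009)
t_act = 8.69 + 0.003 * (yr - 1659) + rng.normal(0.0, 0.5, yr.size)

slope, intercept = np.polyfit(yr - 1659, t_act, 1)   # least-squares straight line
t_linear = intercept + slope * (yr - 1659)           # cf. 8.69 + 0.003*(Yr - 1659)
resid = t_act - t_linear                             # the residual that gets filtered below
```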
MOVING AVERAGE FILTERING
The first method used a 40-year moving-average low-pass filter. The plot point was assumed to be at the center of the sample. Fig. T_est_11, referenced below, is the result. It shows the moving-average plot point beginning 20 years after the initial data (1679) and terminating 20 years before the end of the data (1988). Unfortunately this does not include the current time, the period of most interest. However, the longer-term frequencies start to show more clearly than in Fig. T_est_10.
http://www.imagenerd.com/uploads/t_est_11-1PVkA.gif
CLASSIC RECURSIVE DIGITAL FILTERING
The second method used more classic filtering, using what are called recursive equations to remove the noise and short-period variations. In this case 2- and 4-pole Chebyshev filters were used. This was to maximize attenuation, cutting off higher-frequency noise or disturbances. The only difference is that the 4-pole filter is more complicated and provides more reduction of the higher frequencies. The cut-off point was set at 0.025 cycles/year, so that variations with periods shorter than 40 years were attenuated, or reduced. Fig. T_est_12, referenced below, shows that the 4-pole filter reduces the higher frequencies better than the 2-pole. But this added filtering comes at the cost of distorting the filtered signal in time. This is referred to as phase change or delay. The more poles, the more noise reduction and the more phase, or time, delay. This is particularly noticeable in the 1720-1750 time period, where the 4-pole filter reacts more slowly to the temperature drop after 1740. This also causes a problem in evaluating current temperature trends: it tells us virtually nothing about what is going on now.
http://www.imagenerd.com/uploads/t_est_12-MRJNT.gif
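The phase, or time, delay described above comes from running the recursive filter in one direction only. One standard workaround (not what the write-up uses, just a hedged alternative) is a forward-backward pass such as scipy's filtfilt; a sketch on a synthetic detrended series:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
resid = rng.normal(0.0, 0.5, 350)           # stand-in for the detrended annual series

# 4-pole Chebyshev type-I low-pass, 0.025 cycles/yr cutoff (Nyquist = 0.5 cycles/yr).
b, a = signal.cheby1(4, 0.5, 0.025 / 0.5, btype='low')

one_pass = signal.lfilter(b, a, resid)      # single causal pass: smooth but shifted in time
zero_phase = signal.filtfilt(b, a, resid)   # forward-backward pass: no net phase delay,
                                            # at the price of using "future" samples
```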
CONVOLUTIONAL FILTERING
The third method is convolution, using the Fourier transform. Since the fast Fourier transform is used, it is referred to as FFT convolution. In this case, the temperature data is transformed to the frequency domain via the FFT. The selected frequencies are removed using a mask or “kernel”. The inverse FFT is then used to transform back to the time domain, minus the selected frequencies. In this case, all frequencies above 0.025 cycles/year were eliminated. Figure T_est_13, referenced below, shows this and compares it to the recursive filter and moving-average methods. The FFT convolution removes the higher frequencies, but not at the cost of a phase, or time, delay. It is also smoother than the moving average, without the short-term “jumps”. It also shows data in the last two decades, which is where we are now.
http://www.imagenerd.com/uploads/t_est_13-ggCU5.gif
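A minimal sketch of the FFT-convolution step as described: transform, zero every bin above 0.025 cycles/year, and transform back. The series here is a synthetic stand-in for the detrended annual data:

```python
import numpy as np

rng = np.random.default_rng(0)
resid = rng.normal(0.0, 0.5, 350)           # stand-in for the detrended annual series

spec = np.fft.rfft(resid)
freq = np.fft.rfftfreq(resid.size, d=1.0)   # frequencies in cycles per year
spec[freq > 0.025] = 0.0                    # the "mask": drop periods shorter than 40 years
smooth = np.fft.irfft(spec, n=resid.size)   # back to the time domain, with no phase delay
```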
Figure T_est_14, referenced below, is a comparison between just the moving-average and FFT convolution methods. While they are similar, the FFT has a smoother curve and highlights the periodic changes more. Again it shows the last 20 years in time. In this case, it shows a peaking and downtrend around and after the year 2000.
http://www.imagenerd.com/uploads/t_est_14-QDLGD.gif
Is there a basis for this downward trend in the FFT method? Looking at the composite global temperature, put out by http://www.climate4you.com/
and shown in Fig T_est_15, it sure looks like it. This is important in that it is a composite of both surface and remote sensing systems such as GISS, RSS, UAH, etc. While they vary, they seem to follow the same trend.
http://www.imagenerd.com/uploads/t_est_15-nrLTG.gif
The FFT curve from 1980- 2008 was then superimposed on the composite plot, using the 1979 point as a reference. It might be off a little, but the shape is interesting. There does seem to be a strong correlation of a downward trend. The most interesting thing is that they seem to peak at the same time, and follow each other.
http://www.imagenerd.com/uploads/t_est_16-NFyvl.gif
Now some have asked that I back up my comments from above posts. Well, here it is. These methods are well documented in various signal-conditioning and imaging handbooks. Hence if anyone would argue these points, I would suggest they do the analysis THEMSELVES!!! NO references to anything such as this paper or that article, just use basic handbooks or textbooks. In fact I’ll give you a couple of hints: “Scientist and Engineer’s Guide to Digital Signal Processing” by Steve Smith at http://www.dspguide.com, or “Handbook of Astronomical Image Processing” by Berry and Burnell. This was all done on an EXCEL spreadsheet, using the VB Macro option, in about 2-3 pages of code, so no expensive software is needed.
tamino says
J. Bob,
Did you read my post? You really should.
When you say, “the FFT has a smoother curve and highlights the periodic changes more,” you fail to comprehend that just because you can model, and smooth, data with periodic model functions, that does not establish periodic, pseudoperiodic, quasiperiodic, or otherwise anywhere near cyclic behavior in the signal. You can model any signal at all with enough degrees of freedom — including an elephant. That doesn’t make it cyclic. You have given us zero evidence (none at all) that any of the changes are periodic — you’ve just matched them up with periodic functions and then started talking about “highlights the periodic changes” as though there really are periodic changes. Anybody can do that, to any data whatever — it’s reckless curve-fitting.
Your FFT curve uses sinusoidal model functions, which are bounded and can only reach their extremes at the same points at which they “flatten out.” This is a property of the functions you’re using for smoothing, which is especially prevalent at the ends of the time span — including now. That’s one of the reasons you’re so convinced there’s a peaking and down trend.
If you used a linear fit only, then you couldn’t possibly get a peaking and down trend. You can probably tell that’s not because there isn’t one … it’s because upward-pointing straight lines have no choice, they simply can’t turn downward. Well, sinusoids have no choice either, they must turn downward. It’s a property of the model functions, not of the physical signal.
And nowhere have you addressed the statistics of which “trend” components are significant or even possibly so and which aren’t. You use phrases like “it shows” and “they seem to follow” as though they are reliable conclusions, but utterly ignore all smoothed estimates (including your own!) that don’t show a peak and downward trend. Drawing grand conclusions based on visual inspection of a graph from some smoothing scheme is a recipe for statistical disaster.
You haven’t “backed up your comments” at all. You’ve just shown that with enough time and enough filters, you can make a graph look like whatever you want it to.
Please don’t be so arrogant as to object to references to “this paper or that article, just use basic handbooks or text books,” as though your reading of a few basic handbooks qualifies you to draw expert conclusions, and suggest I should educate myself with them. There’s a helluva lot more to the statistical analysis of time series than is conveyed in a few basic handbooks; the most advanced stuff is in the peer-reviewed journals. I’ve authored a few of those papers. But then, I’ve been doing this for over 20 years.
You are yet another commenter (yes, there are lots) who finds some interesting analysis tools and thinks he’s smart enough to know better than those who have spent a lifetime working on this kind of analysis. Abandon that belief, and you’ll show us how smart you really are.
You should read my post; it’s here.
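tamino's point that the end-of-record rollover comes from the smoothing functions rather than the data can be seen with a toy example: a series that is nothing but an upward trend plus noise, smoothed with the same hard FFT low-pass, still bends back toward the record mean over its final decades, because the transform treats the record as periodic. This is only an illustration, not tamino's own analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
yr = np.arange(1659, 2009)
temp = 8.69 + 0.003 * (yr - 1659) + rng.normal(0.0, 0.5, yr.size)   # trend plus noise only

spec = np.fft.rfft(temp)
freq = np.fft.rfftfreq(yr.size, d=1.0)
spec[freq > 0.025] = 0.0                    # the same hard 40-year low-pass as above
smooth = np.fft.irfft(spec, n=yr.size)

# The underlying signal never turns down, but the band-limited, implicitly periodic
# reconstruction has to glide back toward the start value, so its last ~20 years droop.
print(smooth[-20:])
```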
Dan says
re: 320. “The 350+ year temperature history from Central England was used, referred to in this write-up as the HadCET data set.”
Hello?? That’s not a *global* temperature set! The issue is *global* average temperatures, not the CET. Try using HADCRU or GISS. Good grief, the denialists are really desperate now, cherry-picking not only years but now data-sets.
Mark says
re 320, well you don’t do a Fourier transform and remove monthly figures to get a ***long term*** temperature trend.
To do that, take the monthly average temperature and for each month in a decade take the average of that month’s average over that decade.
What you did doesn’t produce what you’re looking for.
J. Bob says
#321 Tamino – Thank you for your kind words.
Did you ever hear of “windowing”, to reduce the end point problem? I’ll go with my notes from Cooley and Tukey’s presentation, in late 66.
Will get back to you later, I have some fishing to do, assuming I don’t freeze to death.
walter crain says
wow, guys thanks for the discussion. i’m trying to keep up…
Igor Samoylenko says
J. Bob said:
tamino replied:
Or, in other words, you really need to know what you are doing. Just because anyone can draw a few graphs in Excel and some can even write some VBA code to go with it does not mean these graphs will convey anything meaningful.
Curious says
#324 J. Bob:
Assuming that’s sarcasm, I fully support tamino’s tone. No kindness toward misrepresentation, please, it may look as if there was something worth discussing.
Besides, asking for kindness after giving “a couple of hints” (#320) for tamino’s education is also a funny request.
Thank you, tamino and RC, for devoting your time to the exhausting task of debunking the never-ending skeptical set-ups.
Hank Roberts says
Three hits for “did you ever hear of”
http://www.google.com/search?q=%22windowing%22+%22endpoint+problem%22
J. Bob says
#328
Hank- Try “FFT windowing”, 900K plus hits
John Mashey says
Since this thread is on Monckton, here is an April 28, 2009 talk @ Texas A&M for the Young Conservatives of Texas.
This was sponsored by CFACT, which now has a nice green website.
Monckton pleads hard for the poor of the world, who should “burn more CO2” (really, said several times), like rich countries. Bad-science people banned DDT and didn’t quarantine HIV folks, and now they’re doing climate, which will hurt the poor.
01:20:00, a student says he agrees with everything Monckton said there, and asks for advice in convincing a person “knowledgable in atmospheric chemistry and a true believer”. Advice: call them a global cooling denier, tell him the evidence is all for natural causes. Try to make these people look at the science.
01:27:55 great barrier reef? temperatures not rising, but how about Ocean acidification?
A: we don’t really know, because the ocean is alkaline; it would need a very, very big change before it is acid. The Cambrian period had lots of CO2, but corals were OK. Likewise the Triassic: corals OK. Ocean acidification cannot be a problem.
Whole thing is 01:35 long.
Son of Mulder says
Mike, thanks for your help with my comment #318. Further, I’ve never understood why actuals are compared to a family of model results where each set of models has different inputs (e.g. growth rate of CO2), as only one such slice of models could possibly match the real inputs. I assume there has been a model (or models) created whose input variables best match the actual CO2 input etc., say over the last 150 years. I’d not expect such a model to match the actual temperature track, but I’d expect its track to have similar characteristics to the actual track, e.g. (my candidates, say) frequency of maxima and minima; ratio of the number of growth years from min to max vs. shrink years from max to min; growth peak to peak and min to min; rolling best-fit gradient change rate. Call it ‘a best model’. Is there such a ‘best model’, or papers on such a methodology? Thanks
Tom P says
Monckton has hit, or rather flailed, back:
http://scienceandpublicpolicy.org/images/stories/papers/commentaries/chuck_yet_again_schmidt.pdf
The discussion of the science is up to his usual standards. He accuses Gavin of “tampering” with his plots. This sounds rather serious until he makes clear that the tampering in this case consists of not including Monckton’s url or original caption.
I especially like his statement “[t]here was a sharp phase-transition in the direction of global cooling late in 2001.” Exactly which two phases of the global climate Monckton had in mind is left as an exercise for the reader.
He is also most unhappy that he is not considered a member of the House of Lords:
“One of the many write-in comments on Schmidt’s blog complains that I refer to myself as a member of the Upper House of the United Kingdom legislature. So I do, for I am officially recognized by the House as the legitimate successor to my late father, the Second Viscount Monckton of Brenchley, though, since 1999, most hereditary peers, including me, do not have the right to sit or vote in the House.”
Section 1 of the 1999 House of Lords Act rather undermines his pleading:
“Exclusion of hereditary peers: No-one shall be a member of the House of Lords by virtue of a hereditary peerage.”
Monckton can imagine that he is a member of the House of Lords as much as he might imagine himself to be a scientist. Both have equal validity.
[Response: I particularly liked the fact that he excised a link to the actual IPCC projections of CO2 changes as being ‘ad hominem’ – somewhat undermining his claim of proficiency in classical languages. Perhaps I should respond to him: “Bene, cum Latine nescias, nolo manus meas in te maculare”. – gavin]
Mark says
re 329, that’s FFT windowing. Not the windowing endpoint problem.
You still haven’t shown that your analysis is of any explicative power at all yet.
J. Bob says
Walter, welcome back. But don’t stand too close, collateral damage.
Tamino – Yes, I read your reference. A little weak in the FFT convolution part. However, let’s get back to your post #321. Where did I ever say I was modeling anything? I was simply using the convolution method to filter out the short-term variations, and get at the underlying longer-term signals. Now I think we both realize that nothing in nature is perfectly periodic, except in the math world. Even time standards vary, depending on the precision used.
Your comment about statistical modeling reflected something I noted while doing a couple of tours of duty with the medical-device and pharmaceutical people. One was the popularity of the “How to Lie with Statistics” book (and that after 30+ years), and the other was that the more complex the statistical model, the greater the distance from reality. Also how the model was attracted to the sponsor, or agency, providing the money.
And yes, I am also aware the FFT uses a sinusoid as a reference. I could have chosen another reference, such as a “saw tooth” (“Slant”) or trapezoidal one, which are continuous (non-differentiable) Weierstrass transforms, but the FFT is pretty standard. And remember, I’m re-constructing the original data, minus the higher-frequency components, just like the Butterworth filter you used in the item you mentioned. Remember, the Butterworth uses a sinusoid as a way of testing its amplitude and phase characteristics.
So what it really comes down to is that:
1- You don’t like my opinion.
2- You don’t like the method (FFT) that gives some rational to the opinion.
Where is the spirit of Voltaire, and Zola? You make a point in your reference Dangerous Curves about the boundary problems or end points. These were recognized decades ago by Blackman & Tukey in “The Measurement of Power Spectra”. This was also stressed when Cooley & Tukey were flown up to help implement the FFT in our data acquisition/analysis real-time hybrid computers, when their paper first came out. I guess it was late ’65, not ’66 as I first mentioned. The boundary problem, we called it “leakage”, was pretty much solved by a Hamming or Hanning “window”, although there are others, depending on the data. There were also “empirical” methods that, while not mathematically correct, would allow ways of reducing or removing the problem. One of the simplest was the use of a “trend” line and “padding”.
My comments about just using references were to keep the “he said, they said” out of it. I am well aware of the “peer reviewed” paper mill of “back scratching” and “grant grabbing”. Yes, there are very good papers out there, but there is also a lot of clutter.
So Walter, there was a reason for that trend line we talked about a couple of months ago, when I mentioned “aiding in analysis”. That was a fun discussion. Maybe we can have some more, and I’ll bring you from the dark to the “skeptic” side of the “Force”.
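For readers who have not met the “leakage” fixes J. Bob refers to above, here is a minimal sketch of the two standard moves: tapering with a Hanning window before estimating a spectrum, and padding the record before filtering. It is illustrative only, not his spreadsheet code:

```python
import numpy as np

rng = np.random.default_rng(0)
resid = rng.normal(0.0, 0.5, 350)                 # stand-in for the detrended annual series

# 1) Taper the ends before transforming, so the implicit wrap-around at the record
#    boundaries does not leak power across frequencies (for spectrum estimation).
spectrum = np.abs(np.fft.rfft(resid * np.hanning(resid.size))) ** 2

# 2) For smoothing, pad the record (here by reflection), filter, then discard the pads.
padded = np.concatenate([resid[::-1], resid, resid[::-1]])
spec = np.fft.rfft(padded)
spec[np.fft.rfftfreq(padded.size, d=1.0) > 0.025] = 0.0
smooth = np.fft.irfft(spec, n=padded.size)[resid.size:2 * resid.size]
```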
Mark says
“Where did I ever say I was modeling anything? I was simply using the Convolution method to filter out the short term variations, and get at the underlying longer term signals.”
Uh, what do you think that convolution is? It’s modelling the changes as if they are superposition of sine waves.
That is a model.
Your model doesn’t have any predictive ability and you have no idea what it is telling you apart from, if you look at the right numbers, you can convince yourself that the line is going down.
David B. Benson says
J. Bob — Recent global temperatures are due in part to ABC aerosols and the extended solar minimum. Despite this, 2008 CE was tenth warmest on record. The last time there was such an extended solar minimum was in 1913 CE. What rank in the record was 1913 CE? Notice the distinct upward trend in the entire record?
dhogaza says
Voltaire infamously would defend your right to make foolish statements to the death.
And afterwards, treat you with all the derision and scorn he could muster. Which was considerable.
Perhaps Walter will be fooled by your bogus “analysis”. Science will march on, unimpressed.
dhogaza says
So maybe we should stick to linear regression for this time series rather than work our way through more complicated functions until finding one that lets us say “it’s cooling”? We have a theoretical basis for predicting a (near) linear response to the next few doublings of CO2, imposed on the general noisy meanderings of climate, but none for it having no effect, after all …
Is that what you’re saying?
Jack Fulcher says
I’ve never seen such sophistry. You call yourselves academics? Cherry picking data is a common practice in the physical and biological sciences, and if I tried some of the things you folks get away with I’d be drummed out of the economics profession. Your regressions are poorly specified; you pick your explanatory variables more based on their t-scores than on the logic of their inclusion a priori; and you choose the time to cover and the frequency of the observations based on which combination most supports your predetermined conclusions. This goes for all sides of the present issue. One of my colleagues thinks you just get lazy because no one holds your feet to the fire when you submit for publication.
I’m just in a bad mood. Go back to your self talk, kids.
[Response: Since no one is doing any regression analysis in this post, your grumpiness seems a little misplaced. whatever. -gavin]
J. Bob says
#335
Your definition of convolution is too restrictive; try this:
A convolution is an integral that expresses the amount of overlap of one function g() as it is shifted over another function f(). It therefore “blends” one function with another.
http://mathworld.wolfram.com/Convolution.html
In addition to the Fourier transform, convolution is used with the Laplace transform and Z-transform in control systems. This forms the basis of control system analysis.
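The reason the FFT enters the filtering at all is the convolution theorem: convolution in the time domain is multiplication in the frequency domain. A tiny numerical check (illustrative only):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.25, 0.50, 0.25])                  # a small smoothing kernel

direct = np.convolve(f, g)                        # time-domain (linear) convolution

n = f.size + g.size - 1                           # length of the full linear convolution
via_fft = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

print(np.allclose(direct, via_fft))               # True
```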
#337
Google search:
“FFT” – 7.5 mil hits
“FFT windowing” – 900K+ hits
“FFT filter” – 200K+ hits
If I don’t know what I’m doing, I must have a lot of company.
First let’s see IF we have global warming, then try to find the reason.
Here’s a bit of information. I took the raw 350+ year temperature data (minus the linear trend line) and transferred it into the frequency domain. Without any “mask” or “kernel”, it was transformed back to the time domain. The average raw error was about 0.4-0.6 degrees about the trend line. NO error was seen between the raw and FFT-reconstructed data out to four decimal places. If there was any failure in the method, it would show up; it didn’t. Not too bad for a simple EXCEL-VB program.
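The round-trip check described above is easy to reproduce; any real-valued series stands in for the detrended CET data:

```python
import numpy as np

rng = np.random.default_rng(1)
resid = rng.normal(0.0, 0.5, 350)                 # stand-in for the detrended annual data

back = np.fft.irfft(np.fft.rfft(resid), n=resid.size)   # forward then inverse, no mask

print(np.max(np.abs(back - resid)))               # ~1e-15: exact to machine precision
```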
Jack Fulcher says
“[Response: Since no one is doing any regession analysis in this post, your grumpiness seems a little misplaced. whatever. -gavin]”
This is what I’m talking about. Regressions are mentioned in at least comments 131, 162, 311, and 338. However, this sloppiness aside, I’m talking about the work I’ve seen in your refereed journal articles. I’m sure that those here who understand statistical analysis are often chagrined to see what slips past the referees.
[Response: Nope. Still none the wiser. My publications are here – I’d be happy to discuss any regressions you think I’ve done wrong. – gavin]
Ian says
Jack Fulcher, what in the world are you talking about? Out of curiosity, could you give some specifics behind your claims?
Lack of cherry picking in econ – I work with lots of academic economists, and that’s a new one on me!
Regressions poorly specified – what regressions?
Variables chosen on t-scores – again, what regressions and t-scores?
Cherry pick observations – specifically…?
Easy publication process – pure speculation on your colleague’s part.
Roger Godby says
Mark,
I used to smoke about a half pack a day. Then one day I decided to quit and I did; another member of my family was a heavier smoker but quit cold turkey the same way. People respond differently to nicotine. Ultimately the decision of whether to use it should be based on informed personal choice (though it probably won’t be, but so what?), not what some politician or wonk decides. The occasional tale from the UK of patient X who is a smoker being denied care or placed further down on the waiting list begins to smell like eugenics.
But anyway, I could use some global warming, and so could my plants. It’s been both warmer and colder in the past. By the way, how old is Lord Monckton? Can he recall when the Thames froze (and it did)? ;-)
Mark says
343 and why do so many people in the UK say that they will NEVER go to the pub like they used to because they can’t smoke there?
If they can’t not smoke for a few hours, doesn’t that prove it is addictive in the extreme?
Two more points:
1) Someone survived a 2 MILE fall out of an airplane during WW2. Does this mean falling 2 miles straight down isn’t going to kill you?
2) People have quit heroin cold turkey. Where do you think the term comes from??? Does this mean heroin isn’t addictive?
Mark says
re 340, so what are you mapping over the domain? A sine curve is not “a thing” it’s a mathematical construct. So what are you testing fits against the temperature curve in your convolving?
fustian says
tamino: In post 321, it seems that you are trying to suggest that a superposition of sinusoids must itself be sinusoidal. In particular, I believe you are suggesting that you cannot represent a monotonically increasing function as a superposition of sinusoids since a sinusoid must inevitably “turn downward”.
This is simply untrue.
What I believe you are failing to account for is that at any given point the superposition of sinusoids can account for equal numbers of ones turning up as are turning down. This flexibility allows a superposition of sinusoids to exactly match any curve shape.
Apologies if I have misunderstood your post.
Doug Bostrom says
#334 “J. Bob”:
Name-dropping? You appear to be implying a sort of collegial relationship with C&T, or even to have hired them as day labor.
To wit:
“This was also stressed when Cooley & Tukey, were flown up to help implement the FFT in our Data Acquisition/Analysis real time hybrid computers . This was when their paper first came out.”
Ah, so a project you were involved with spawned their seminal paper? Having artfully conveyed that impression, don’t you think it would be appropriate (and if you’re telling the truth perfectly safe) to come out from under the bed so those whose work you’re judging can take a look at your own professional output and form some assessment of whether they should waste a nanosecond reading what you write here? You seem to have a rather sneering attitude (“grant-grabbing”, “back-scratching”) toward the peer review process. Perhaps you’ve not opted into it, or you tried and failed and now are embittered?? You present like the classic jumped-up crackpot many academics are familiar with but maybe you’re just making the wrong impression. How are we to know?
Here are your credentials as they appear on this site, apart from the lard about Cooley and Tukey:
“You comments about flow transition brought back old memories to Cloudcroft NM, above the White Sands Missile Range. Over a few beers, a colleague and I were going over equations implementation in the range computers. This was for real-time trajectory and impact points, of missiles incoming to the range at re-entry rates. These were also used to direct the electro-optical instrumentation. In passing, my colleague mentioned that he received an award for developing a ultra low noise propeller for under sea vessels. The rest of the evening was spent discussing flow separation and pressure gradients at the surface of the propeller, and effects on turbulence and noise.”
It’s an entertaining ramble, and a very dramatic and posturing way to establish yourself. You strike a fine figure indeed but the story also reeks faintly of bulls__t. Your awkwardly injected anecdote seems crafted to establish you as a “rocket scientist” while not actually saying so in words we’d find in a CV, such as “Here is where I was educated, here is where I’ve worked, here is what I’ve published.”
Or maybe you’ve previously identified yourself on this site in some other posting? Some of us potentially gullible types need a refresher if that’s the case lest we be led astray by a smooth talking Google Pilot.
Gabriel Hanna says
My own field is physics; IR absorption by carbon dioxide is a part of what I study. And it is pretty obvious, all else being equal, increasing carbon dioxide concentration in the atmosphere has to lead to higher temperatures in the long run.
That being said, there are a couple of things worth saying about some of the arguments made here in favor of the mainstream position on AGW.
First, saying your opponents are funded by oil companies is nothing more than ad hominem. If I sell solar power generators, does that mean you should doubt me when I say the sun is going to last for 5 billion more years? A statement is true or not regardless of the source of income of the speaker. This is all rather trivial.
This ad hominem applies equally well to proponents of AGW, and not just because climate scientists are getting money from the government, but because oil companies are also funding their side of the debate. For example, the denialist shills at the Sierra Club (http://www.sierraclub.org/sierra/pickyourpoison/) give some information on what oil companies are sponsoring. You can find a little more here (http://www.worldwatch.org/node/5934). The Wall Street Journal approaches this from a different side.
(http://www.opinionjournal.com/weekend/hottopic/?id=110009740)
Now because a lot of you are invested in this ad hominem I know you are just going to call this “greenwashing”. But of course these companies have many legitimate reasons to invest in both sides, just as corporations in general give money to both parties in Congress. They may seriously believe that carbon-neutral energy is the wave of the future or whatever, but an “evil” motive that I can suggest, as does the Wall Street Journal article, for funding environmental science and activism is simple rent-seeking. These companies stand to make billions from carbon taxes and cap-and-trade, because they are going to get first crack at a property right that won’t exist until created by Congress. And this of course has no bearing whatever on the scientific status of AGW.
Secondly, it is worthwhile to think carefully about computer modeling, as there is always the danger that the models will be treated as magic boxes which produce science. I know that there are complex systems that are difficult to treat any other way. But the history of climate modelling has been (to oversimplify) make a model, watch it come out wrong, and tweak something until it starts to look right, if the model doesn’t crash, according to [edit] the American Institute of Physics (http://www.aip.org/history/climate/GCM.htm).
Of course this doesn’t mean that we don’t know anything at all about climate science. Computer models have vastly improved our understanding about the effects of people on climate.
Thirdly, there is no computer model in the world that can tell you how many billions are worth spending to avoid the consequences, if they can be avoided at all, and even if they were known with certainty, this is not a scientific question but an economic or moral one.
I’ve seen estimates for how many people global warming is going to kill over the next hundred years. But there is something much worse than global warming, at least in terms of the number of people killed by it, and that is poverty. Simply the lack of clean water alone kills 2.2 million annually, the vast majority children under 5 (http://en.wikipedia.org/wiki/Drinking_water). And there are lots of ways poor people degrade the environment without burning coal, such as overgrazing (as the Sahara and Gobi deserts demonstrate).
While there are plenty of people who, say, oppose the Kyoto treaty because they don’t believe in the science, it is an intellectually respectable position to believe in the science and reject the treaty. Just because you agree with me on the nature of the problem, it doesn’t mean we agree about the effectiveness or desirability of the proposed solution.
But feel free to lump me in with the ID crowd and accuse me of getting money from oil companies. (I have, after all, worked under grants from the “Petroleum Research Foundation”–do you need to look further :) ?)
Gabriel Hanna says
P.S. @ Mark: A Fourier transform isn’t a “model”. Any square-integrable function can be represented by a superposition of sines and cosines; that’s what a Hilbert space IS. J. Bob is just representing the time series in a different, but mathematically equivalent, way (except for the endpoints, I guess). You are right that applying the FFT has no explicatory power; it’s merely descriptive. But it is more useful than Molière’s “dormitive property”. You occasionally see patterns emerge. :)
walter crain says
j.bob,
i’m impressed by your stick-to-it-iveness. i have to honestly say i’m also impressed by tamino’s analysis of your analysis, though much of the statistics talk is over my head i admit. i think tamino is such a great explainer of things that when he explains it i understand it for just a moment.