It’s long been known that El Niño variability affects global mean temperature anomalies. 1998 was so warm in part because of the big El Niño event over the winter of 1997-1998, which directly warmed a large part of the Pacific and indirectly warmed (via the large increase in water vapour) an even larger region. The opposite effect was seen with the La Niña event this last winter. Since the variability associated with these events is large compared to the expected global warming trend over just a few years, the underlying trend might be seen more clearly if the El Niño events (more generally, the El Niño–Southern Oscillation, ENSO) were taken out of the way. There is no perfect way to do this – but there are a couple of reasonable approaches.
In particular, the Thompson et al (2008) paper (discussed here) used a neat way to extract the ENSO signal from the SST data, building a simple physical model for how the tropical Pacific anomalies affect the mean. Thompson kindly applied the same approach to the HadCRUT3v data (pictured below), and I adapted it for the GISTEMP data as well. This might not be ideal, but it’s not too bad:
(Each line has been re-adjusted so that it has a mean of zero over the period 1961-1990).
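For readers who want to experiment, the general flavour of this kind of correction can be sketched in a few lines: regress the global mean anomaly on a lagged ENSO index (e.g. Niño 3.4) and subtract the fitted component. This is a simplified stand-in for Thompson’s physically based model, not the method itself, and the file names, layout and 3-month lag below are all assumptions:

```python
import numpy as np

# Sketch: remove a linear, lagged ENSO signal from a global mean series.
# A simplified stand-in for Thompson et al's physical model; file names
# and the 3-month lag are assumptions.
temp = np.loadtxt("gistemp_monthly.txt")  # hypothetical: monthly anomalies (ºC)
nino = np.loadtxt("nino34_monthly.txt")   # hypothetical: Niño 3.4 index (ºC)

lag = 3                      # months by which global T lags the tropical Pacific
t, n = temp[lag:], nino[:-lag]

a, b = np.polyfit(n, t, 1)   # least-squares fit t = a*n + b
t_enso_removed = t - a * n   # subtract the ENSO-congruent part

print(f"ENSO regression coefficient: {a:.3f} ºC per ºC of Niño 3.4")
```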
The basic picture over the long term doesn’t change. The trends over the last 30 years remain, though the interannual variability is slightly reduced (as you’d expect). The magnitude of the adjustment varies between +/-0.25ºC. You can more clearly see the impacts of the volcanoes (Agung: 1963, El Chichon: 1982, Pinatubo: 1991). Over the short term, though, it does make a difference. Notably, the extreme warmth in 1998 is somewhat subdued, as is last winter’s coolness. The warmest year designation (now in the absence of a strong El Niño) is more clearly seen to be 2005 (in GISTEMP) or either 2005 or 2001 (in HadCRUT3v). This last decade is still the warmest decade in the record, and the top 8 or 10 years (depending on the data source) all fall within the last 10 years!
Despite our advice, people still insist that short term trends are meaningful, so, to keep them happy, here they are: standard linear regression trends in the ENSO-corrected annual means are all positive since 1998 (though not significantly so). These are slightly more meaningful than the trends in the uncorrected versions, but not by much – as usual, correcting for auto-correlation would expand the error bars further.
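The auto-correlation point is easy to make concrete. A standard (if crude) fix is to inflate the trend’s standard error using an AR(1) effective sample size; the sketch below applies it to the 1998–2007 Hadley annual anomalies quoted in a comment further down, and is illustrative rather than the exact procedure behind the numbers above:

```python
import numpy as np
from scipy import stats

def trend_with_ar1_correction(y):
    """OLS trend of an annual series, with the standard error inflated
    for lag-1 autocorrelation via n_eff = n * (1 - r1) / (1 + r1)."""
    n = len(y)
    x = np.arange(n)
    res = stats.linregress(x, y)
    resid = y - (res.intercept + res.slope * x)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    n_eff = max(n * (1 - r1) / (1 + r1), 2.0)      # effective sample size
    return res.slope, res.stderr * np.sqrt(n / n_eff)

# Hadley annual anomalies 1998-2007 (from the table in the comments below)
y = np.array([0.546, 0.296, 0.270, 0.409, 0.464, 0.473, 0.447, 0.482, 0.422, 0.402])
slope, se = trend_with_ar1_correction(y)
print(f"trend = {slope:+.4f} ± {2*se:.4f} ºC/yr (2-sigma)")
```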
The differences in the two products (HadCRUT3v and GISTEMP) are mostly a function of coverage and extrapolation procedures where there is an absence of data. Since one of those areas with no station coverage is the Arctic Ocean (which, as you know, has been warming up somewhat), that introduces a growing difference between the products. HadCRUT3v does not extrapolate past the coast, while GISTEMP extrapolates from the circum-Arctic stations – the former implies that the Arctic is warming at the same rate as the rest of the globe, while the latter assumes that the Arctic is warming as fast as the highest measured latitudes. Both assumptions might be wrong of course, but a good test will come from the Arctic Buoy data once they have been processed up to the present and a specific Arctic Ocean product is made. There are some seasonal issues as well (spring Arctic trends are much stronger than the summer trends, since it is very hard to go significantly above 0ºC while there is any ice left).
Update: A similar analysis (with similar conclusions) was published by Fawcett (2008) (p141).
The ENSO-corrected data can be downloaded here. Note that because the correction is not necessarily zero for the respective baselines, each time series needs to be independently normalised to get a common baseline.
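The normalisation step itself is trivial; a minimal sketch (the file name and layout are assumptions):

```python
import numpy as np

# Minimal sketch: re-baseline a series so its 1961-1990 mean is zero.
# The file name and (year, anomaly) layout are assumptions.
years, anom = np.loadtxt("enso_corrected.txt", unpack=True)
base = (years >= 1961) & (years <= 1990)
anom = anom - anom[base].mean()
```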
Ray Ladbury says
Thomas says, “The only difficulty is in working out the relevant MHD equations.”
Do you realize how many plasma physicists’ heads you just made spin 1080 degrees with that statement? The real question is how well the dynamics need to be understood to reproduce the relevant physics.
Allen says
In using the FFT to compare global temperature oscillations to sunspot number oscillations, I found that some temperature data sets showed stronger correspondence than others.
Using long term temperature data from a single measurement station (virtually raw data) gave a strong temperature-sun relationship (in the few cases I tried).
Using data “averages” from many stations reduced the correspondence.
GISS and HadCRUT3 “data” is apparently heavily averaged. I have read that GISS “raw” station data contains over 20 percent interpolations (i.e. over 20 percent of the data points are effectively “made up”). Then, each point from each temperature station is corrected by making a “weighted average” with points from nearby stations (depending on their distance — sometimes up to 1200km). Then, data from all stations is combined using weighting factors to get global temperatures.
By the time one gets to the final sequence of GISS Temperature vs Month or Year (that we see published), one is dealing with an average of an average of an average … to put it simplistically. I have not read about the HadCRUT3 methodology; but, I assume it has similarities.
Anyhow, I question the validity of FFT analysis of the final GISS and HadCRUT3 temperature anomaly products — because they have been so “averaged” as to be suspect for that purpose. I would expect such “data” to show less correspondence with sunspots than raw data would show.
As a beginner in climate science (but not in science), I will continue to study these issues with an open mind.
Nice site. I have it bookmarked.
Barton Paul Levenson says
FurryCatHerder writes:
It doesn’t; they undoubtedly had an effect. But the sun is not driving the present global warming, because there has been no clear trend in sunlight for 50 years.
Barton Paul Levenson says
Here is how we know ten years of climate data are not significant:
Year Anom Slope p
1988 0.180 0.020 0.000 *
1989 0.103 0.021 0.000 *
1990 0.254 0.020 0.000 *
1991 0.212 0.023 0.000 *
1992 0.061 0.025 0.000 *
1993 0.105 0.022 0.002 *
1994 0.171 0.019 0.011 *
1995 0.275 0.016 0.044 *
1996 0.137 0.016 0.092
1997 0.351 0.007 0.424
1998 0.546 0.005 0.643
1999 0.296 0.017 0.084
2000 0.270 0.012 0.279
2001 0.409 -0.003 0.618
2002 0.464 -0.012 0.095
2003 0.473 -0.017 0.116
2004 0.447 -0.020 0.270
2005 0.482 -0.040 0.179
2006 0.422 -0.020 0.000 **
2007 0.402
The first column is the year, the second column is the Hadley Centre temperature anomaly. The column labeled “slope” gives the coefficient of the time term (K yr-1) of a regression starting with the year on the left and ending with 2007. The p column measures significance — trends with p < 0.05 are marked with an asterisk.
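For anyone who wants to reproduce this sort of table, here is a sketch (not Barton’s actual script; the input file is hypothetical):

```python
import numpy as np
from scipy import stats

# Sketch: for each start year, regress annual anomalies on year through
# 2007 and report slope and p-value, as in the table above. The input
# file (year and anomaly columns) is hypothetical.
years, anom = np.loadtxt("hadcrut3v_annual.txt", unpack=True)

for start in range(1988, 2007):
    m = (years >= start) & (years <= 2007)
    res = stats.linregress(years[m], anom[m])
    flag = " *" if res.pvalue < 0.05 else ""
    print(f"{start} {res.slope:+.3f} {res.pvalue:.3f}{flag}")
```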
Nonlinear guy says
Why not continue the job and “filter out” PDO, NAO and, while we are at it, AMO etc. etc. This complete removal of “natural variability” (as if it were observation noise) would surely give us an unbiased estimate of the “global warming trend”, or would it……….?
[Response: ENSO is relatively easy to characterise and the result is pretty insensitive to the details. For these other patterns, there is more ambiguity. The other way to do it is to take the full spatial patterns and do an EOF analysis (or similar) – the first or second mode is likely to be ENSO related, and the other one is a trend and other modes will pop out further down. – gavin]
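A bare-bones version of the EOF calculation gavin describes, via SVD of a (time × space) anomaly matrix; the input array and its preparation (gridding, area weighting) are assumed rather than shown:

```python
import numpy as np

# Sketch of an EOF analysis: SVD of a (time x space) anomaly matrix.
# 'field' is assumed to hold area-weighted monthly SST anomalies
# flattened over grid points; building it from real data is omitted.
field = np.load("sst_anomalies.npy")      # hypothetical, shape (ntime, nspace)
field = field - field.mean(axis=0)        # remove the time mean at each point

u, s, vt = np.linalg.svd(field, full_matrices=False)
pcs = u * s                               # principal component time series
eofs = vt                                 # spatial patterns
print("variance fraction of leading modes:", (s**2 / (s**2).sum())[:3])
# The leading mode (or the second) is typically ENSO-related; a
# trend-like pattern shows up among the first few as well.
```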
Steve Reynolds says
David B. Benson: The evidence is that the GCMs, being based on physics, model climate and paleoclimate rather well. I have no idea what else you require?
I would like to see this evidence that models just based on known physics ‘model climate and paleoclimate rather well’. My understanding is that the accuracy of fit to data depends on a number of adjustable parameters, especially to get the key feedback effects of water vapor and clouds.
[Response: Your understanding is faulty. Models are compared primarily to the current climatology and all of the adjusting goes into getting the mean climate/seasonal cycle etc. correct. The response of that model to volcanic forcings, the last ice age, changes in orbital parameters etc. are all ‘out-of-sample’ tests that are not fixed by adjusting parameters. You can show quite easily that without water-vapour feedbacks (for instance), you cannot get a good match to volcanic forcings and responses in the real world (Soden et al, 2005), or to ENSO, or to the long term trends. Cloud responses are more uncertain and that feeds in to the uncertainty in overall climate sensitivity – but the range in the AR4 models (2.1 to 4.5 deg C for 2xCO2) can’t yet be constrained by paleo-climate results which have their own uncertainties. – gavin]
Lowell says
I note that one of the strongest El Nino years appears to be 1878.
The 1878 average temp appears to be higher, in fact, than the current temp (although probably less than 1998.)
According to HadCRUT3, the 1878 anomaly peaked at +0.364C versus the current May 2008 anomaly of +0.278C.
To assess variability, one needs to go farther back than just the last 11 years, or the last 58 years (1950 was actually a very cold period with 1944 being very close to 2008 temps) or even the last 100 years (1878 was warmer than today).
http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/monthly
Nonlinear guy says
Regardless of the method(s) employed, this “linear” filtering approach still seems a little bit odd to me, as if we are trying to empty the AC power spectrum (killing the modes which probably all interact/teleconnect), in search of the “DC gain”, where actually without these modes (dynamics of Mother Earth) the gain itself wouldn’t even exist (“the mean is meaningless”).
John P. Reisman (The Centrist Party) says
#107 Lowell
I’m not sure what you are trying to say. You are discussing a peak anomaly in 1878 vs. the current May anomaly of 0.278C.
You know what they say… anomalies happen. But then you go on to say that “1878 was warmer than today”?
The Met Office Hadley Centre’s site (where you picked your piece of data) does not agree with your statement:
http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/
GMT is a big picture. Is there any reason behind your representation? Or, in other words, why are you saying that 1878 was warmer than today? Or are you referring to a more limited scope analysis not related to global mean temperature?
Doug Bostrom says
#102 Allen:
“I have read that GISS “raw” station data contains over 20 percent interpolations (i.e. over 20 percent of the data points are effectively “made up”). Then, each point from each temperature station is corrected by making a “weighted average” with points from nearby stations (depending on their distance — sometimes up to 1200km). Then, data from all stations is combined using weighting factors to get global temperatures.”
I’m curious to know where you read that?
John Lederer says
Couldn’t ENSO be thought of as a heat transporter? I have not seen evidence that an ENSO oscillation significantly changes the heat absorption or radiation of the earth. So the heat in an El Niño had to come from somewhere, and the reduction in heat of a La Niña has to be balanced by an increase in heat elsewhere.
If so shouldn’t ENSO oscillations be reflected in geography or time by declines or increases in heat energy elsewhere? Presumably a perfect “global average temperature” would reflect this.
Or am I overlooking something?
[Response: Not really. ENSO changes the cloud cover and water vapour amounts and so you would expect it to affect the Top-of-the-atmosphere radiation balance which changes the overall amount of heat in the system. Indeed, some of the radiation measurements support this. You also need to think about the net heat flux into the ocean (but that is less constrained). – gavin]
David B. Benson says
Ray Ladbury (94) — I believe that IPCC AR4 states a range for climate sensitivity of 2–4.5 K (66%) with 3 K most likely. So there the error bars are -1 K and +1.5 K. Annan & Hargreaves give a means to narrow this range somewhat and I think they give 2.8 K as most likely.
And so it goes. Narrowing the error bars is actually going to be quite, quite difficult. The best I can think of is to do a corrected version of Arrhenius’s technique, using known physics up through 1977 CE to establish an estimated mean and variance. I’ve been informed that the estimated mean will be around 3 K. Then use the Annan & Hargreaves Bayesian method, with good observations conducted post 1977 CE, to sharpen the posterior. Worth doing, but not by me.
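A toy version of the Bayesian sharpening Benson has in mind, with a Gaussian prior and likelihood (all numbers invented for illustration; Annan & Hargreaves use far more careful constraints):

```python
# Toy Bayesian update for climate sensitivity (K per CO2 doubling).
# Prior and observation numbers are invented for illustration only.
mu0, var0 = 3.0, 1.0**2      # prior from pre-1977 physics (assumed)
obs, var_o = 2.8, 0.8**2     # post-1977 observational constraint (assumed)

# Conjugate normal-normal update
var_p = 1.0 / (1.0/var0 + 1.0/var_o)
mu_p = var_p * (mu0/var0 + obs/var_o)
print(f"posterior: {mu_p:.2f} ± {var_p**0.5:.2f} K (1-sigma)")
```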
P. Lewis says
Lowell said
The annual mean for 1878 (a very strong El Niño phase) was 0.023, while the 5-month average for 2008 (a La Niña phase) is currently 0.241.

1878 (Jan–Dec): 0.155 0.364 0.293 0.309 -0.117 -0.010 -0.053 -0.044 -0.022 -0.120 -0.138 -0.335
Av. = 0.023

2008 (Jan–May): 0.053 0.192 0.430 0.254 0.278
Av. = 0.241

And the June–May averages:
1877/8 = 0.1
2007/8 = 0.3
Martin Vermeer says
Re #107 Lowell:
Joking, right?
The extreme for 1878, for Feb, was 0.364, higher than May 2008 (0.278), but lower than March 2008 (0.430). And 2008 isn’t over yet. Also lower than Jan 2007 (0.632) and a heck of a lot lower than Feb 1998 (0.749).
Martin Vermeer says
#102 Allen:
That’s a red flag: the noisier the data, the better the correspondence… I suggest you try your method on generated random data with realistic statistics :-)
About the reductions/averagings applied to met stations, you’re sort-of right but also confused. The GIStemp site contains some good articles by Hansen et al. on this. Also Tamino’s site contains at least one excellent post on this. I have studied the GIStemp method and understand it; so can you. You owe it to yourself if you seriously suspect that “making things up” is part of the game…
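One way to act on the random-data suggestion above, as a sketch: generate AR(1) “red” noise with station-like persistence (the lag-1 value of 0.7 is an assumption) and look at which spectral peaks appear purely by chance:

```python
import numpy as np

# Sketch: spectrum of AR(1) red noise. Apparent "periods" arise by
# chance; a real periodicity must stand out against this null.
rng = np.random.default_rng(0)
n, phi = 1200, 0.7                 # 100 years of monthly data; assumed lag-1 autocorr
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = phi * x[i-1] + rng.standard_normal()

spec = np.abs(np.fft.rfft(x - x.mean()))**2
freqs = np.fft.rfftfreq(n, d=1/12.0)        # cycles per year
top = freqs[1:][np.argsort(spec[1:])[-5:]]  # five strongest nonzero frequencies
print("strongest 'periods' (yr) in pure noise:", np.sort(1.0/top))
```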
Steve Reynolds says
gavin: You can show quite easily that without water-vapour feedbacks (for instance), you cannot get a good match to volcanic forcings and responses in the real world (Soden et al, 2005)…
Thanks, the Soden paper is very interesting and I agree shows modeled water vapor effects consistent with real data.
But this evidence still shows models are not based solely on known physics; they are at least adjusted based on climatology data that has its own accuracy limitations.
[Response: Not really. Soden did not adjust his model based on this comparison. But of course data is used to build the models in general. How could it not be? – gavin]
John Lederer says
That is actually what bothers me about adjusting for El Nino or La Nina.
In each case heat is “borrowed from” or “lent to” the subsurface ocean. Presumably there is some slower process by which the heat is brought back into some sort of equilibrium of geography and thermocline.
If we compensate for the relatively rapid heat transfer of an El Nino but not for the slower return to equilibrium, aren’t we likely to see a climate picture that is unduly influenced by those gradual returns to “normal”? They will be read as “climate trend” while the rapid transfers of an ENSO event will be read as an “unusual” event to be compensated for.
David B. Benson says
Steve Reynolds (116) — You may care to read “Estimating Climate Sensitivity: Report of a Workshop (2003)”:
http://books.nap.edu/openbook.php?record_id=10787&page=7
Matti Virtanen says
Would it be possible to filter out the AGW signal and show us the global temperature development for the past 10, 20 or 50 years in the natural world without humans?
Tom Bolger says
You agree that ENSO has an effect on global temperature.
Have you checked what happens to the R^2 of CO2 vs global temperature if ENSO effects are allowed for?
The R^2 should improve if CO2 is driving Temperature. Does it?
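The check Tom asks about is straightforward to sketch (file names are hypothetical; using log CO2 as the regressor reflects the roughly logarithmic forcing):

```python
import numpy as np
from scipy import stats

# Sketch: does R^2 of a CO2-vs-temperature regression improve with the
# ENSO-corrected series? File names and layouts are hypothetical.
co2 = np.loadtxt("co2_annual.txt")             # ppm
t_raw = np.loadtxt("temp_annual.txt")          # ºC anomaly
t_adj = np.loadtxt("temp_enso_adjusted.txt")   # ºC anomaly

x = np.log(co2)  # forcing is roughly logarithmic in CO2
for label, y in (("raw", t_raw), ("ENSO-adjusted", t_adj)):
    res = stats.linregress(x, y)
    print(f"{label}: R^2 = {res.rvalue**2:.3f}")
```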
Steve Reynolds says
gavin: But of course data is used to build the models in general. How could it not be?
I agree, but some here seem to think GCMs are constructed using _only_ known physics (such as quantum mechanics and thermodynamics), and are therefore not susceptible to any of the uncertainty of climatology data (such as UHI and bucket corrections).
[Response: But that is correct. The issue of UHI does not come into the construction of the models. UHI or buckets or across-satellite calibrations affect estimates of the long term trends. Those trends are the test data for the models, not the input data. Read Schmidt et al, 2006 – not because it’s my paper, but because it shows you what goes into tuning the models – there is no long term trend data used at all. – gavin]
Mark says
Matti #119:
Well, that’s the models produced without CO2. There’s a lot else we do to change things and this isn’t easy to quantify: land clearance, overfishing, algal blooms (fertilisers being dumped), etc.
It’s been done and that’s how they know that humans have done most of the damage. Because even by tweaking things to be most generous, about 1/3 of the heating change can be made to fit the “no human CO2” scenario without putting something OBVIOUSLY wrong in there (like, say, trees outputting 100x the ozone we see in measurements today).
That isn’t what you asked for, but the result is the same as far as climate is concerned.
David B. Benson says
Matti Virtanen (119) — I believe there is an IPCC AR4 FAQ page which does that. Check the links in the Science section of the sidebar.
Steve Reynolds says
David B. Benson: You may care to read “Estimating Climate Sensitivity: Report of a Workshop (2003)”
Thanks; interesting info, but as to be expected, not very conclusive.
Allen says
#110 Doug Bostrom,
I spent 30 minutes looking for the succinct “illustrated” version. I just saw it in the last few days — but, that still means scores of potential links. I will try some more as I am now interested in saving those pages.
#115 Martin Vermeer,
Thanks for the FFT tip :) For what it’s worth, I did see the same “fingerprint” frequencies in all the temperature data sets. However, they became less pronounced in the more heavily “reduced” data sets. Since they were “always there” and happened to be the same ones in the sunspot data, and the ones in the sunspot data were the same ones noted in the literature as having a physical basis in solar phenomena, I assumed they probably were physical reality in the temperature data (doesn’t mean they were though).
I don’t think the GISS staff is making things up for nefarious purposes. Heck, I’m a retired NASA guy myself — we didn’t make things up; but, we made mistakes often enough. I imagine the methodology was intended to meet a perceived need. However, as “noisy data” raised a flag with you — data with so many interpolated points and weighting adjustments (some of which seem unjustified at this stage of my ignorance) raises a flag with me.
Anyhow, as you suggest, I’ll first work to understand the GISS process well enough to have an opinion on the method’s validity relative to its intended purpose (and other possible uses). If I remain concerned, there is a remote chance I’ll download the “raw” temperature data (millions of points, I presume) and try my own reduction methodology — for the enjoyment and self-edification.
Ray Ladbury says
Steve Reynolds, Any physical constant is determined from data–whether it is CO2 sensitivity or the gravitational constant. Science is empirical. You don’t get anything for free. However, once you’ve determined your constant, the degree to which your model reproduces behavior in the real world provides validation–and if the validation is strong, both the data used in fitting AND the validation result support your value for the constant.
It’s pretty hard to find strong support for a low sensitivity.
Allen says
#110 Doug Bostrom,
In answer to your query:
Don’t know this site’s policy on linking articles. So, I’ll just say google “How much Estimation is too much Estimation” (Yahoo search works too). This gives an overview. Other articles at the site go into some depth regarding the details. Despite the title, the article is not too negative. It merely raises the questions I (or any interested party) might ask.
[Response: “It merely raises questions” – hmm… the fact that not all data comes in on time and is not collected by NASA at all, isn’t worth a mention I suppose? No, it’s easier just to insinuate. – gavin]
Tilo Reber says
Being a computer scientist and not a statistician I decided to hack out my own ENSO compensation back in May, here:
http://reallyrealclimate.blogspot.com/2008/05/ten-year-hadcrut3-enso-effects.html
The divergence between Gavin’s method and my own for the period beginning in 1998 is 0.029C. So they are very close. I used HadCrut3 data, not HadCrut3v.
In any case, I plotted Gavin’s HadCrut3v data and his ENSO adjusted HadCrut3v data together, beginning in 1998, here:
http://reallyrealclimate.blogspot.com/2008/07/gavin-schmidt-enso-adjustment-for.html
The first thing that we can see is that there is very little divergence between the two: 0.0163C for the 125-month period. The unadjusted HadCrut3v data fell by 0.00375C over that interval and the adjusted data rose by 0.0125C.
So then comes the next question. If the decadal warming trend caused by CO2 is 0.2C, and if ENSO is now adjusted for, then where is the other 0.187C of temperature rise? If we are going to attribute the flat trend to elements of natural variation, and if we have already accounted for ENSO, then to what elements of natural variation can we attribute the flat trend for the last decade?
FurryCatHerder says
In re 103:
I guess I’m confused, because I have looked at the correlation between the aa index and global temperatures and there does appear to be one. So other than saying “Hogwash!” or something similar, how about pointing me at something which explains why there is no correlation.
Timothy Chase says
Allen (#127) wrote:
Here is something I find particularly interesting:
Global Temperature from GISS, NCDC, HadCRU
January 24, 2008
http://tamino.wordpress.com/2008/01/24/giss-ncdc-hadcru/
Particularly the second chart. It shows that once you adjust for different base periods, NASA GISS, NOAA NCDC and Hadley HadCRU are virtually identical. Some deviation in the very early part — due to the sparseness of data, I would presume, and some deviation around the end — given their different treatment of the Arctic — as Hadley basically omits everything beyond the land whereas NASA interpolates. And it isn’t like this is the only methodological difference between GISS and HadCRU. (I don’t know as much about NCDC — so I will leave it out at this point.) But despite the differences in methodology, the results are nearly identical.
This I submit constitutes evidence. But evidence for what?
In my view, for the fact that both methodologies are reality-based — that while they are different, the methodologies each have a basis in reality for doing things the way they do, that they are both rational in how they deal with the fact that we have incomplete information. And as such, both work rather well in estimating average temperature anomalies, adhering to a reality that exists independently of each — like Cartesian and polar coordinate systems applied to a nearly flat plane.
But I suspect that you would view their near-agreement as evidence for something else. And then the interesting question becomes whether there could ever be any evidence that would make you think otherwise — rather than interpret as further support for your view.
PS
You will find some rather interesting reviews of some of the articles you are probably familiar with at the site above.
Mark says
Ray, #126
In a “debate” about the LHC on El Reg, one Anon Coward was insisting there was a real danger with it because, among other reasons, Planck’s constant was an estimate. Why was it an estimate? Because the statement of the constant had error bars.
Good joined up thinking there.
Maybe that AC was Steve?
Martin Vermeer says
Allen #125: go for it. I assume you are aware that the GIStemp software is freely downloadable. As is an alternative package called Freetemp by a guy called Van Vliet. Who was in a bit the same situation as you some years ago (when GIStemp wasn’t yet released) and decided to find out for himself. Which he very much did :-)
Matti Virtanen says
Re 123: Thanks, I forgot those graphs. But they are too general for my purposes, and stop at 2000. I wonder why the IPCC in 2006 did not include five more years of temperature data. It will be interesting to see where they end in the 2014 report if the present (post 1998) trend continues.
Barton Paul Levenson says
Tilo Reber posts:
This is, in fact, something he posts on every climate-related blog he can get to. He simply doesn’t understand the facts that A) the trend is not flat, and B) you can’t tell the trend from ten or eleven years of data, as I showed above.
Half my post got deleted because I used a less-than sign where I should have used &lt;. Could someone correct my post, please?
Allen says
129 Timothy Chase,
“…But I suspect that you would view their near-agreement as evidence for something else. And then the interesting question becomes whether there could ever be any evidence that would make you think otherwise — rather than interpret as further support for your view.”
All I questioned was “… the validity of FFT analysis of the final GISS and HadCRUT3 temperature anomaly products …”
It was all about my back of the envelope FFT analysis — nothing more.
I’m new to this climatology and AGW. I have been paying absolutely no attention up until, maybe, three weeks ago when for some already forgotten reason I happened upon a “climate site” and became interested. Sure, I knew people were “concerned” about global warming — but, for me, it was not something I thought about. Considering how consuming it seems to many on the sites, that may be hard to fathom.
One thing obvious from the start — this is an emotionally charged subject. People easily read ulterior motives into simple discourse and are really quick to write people off as being in one “camp” or another.
Personally, at my current level of ignorance, here is what I believe: Global warming has obviously been happening over the last 100+ years. Atmospheric CO2 has been rising to “record levels” relative to the recent past. AGW is likely happening but is still based on “circumstantial evidence”. The AGW rate and the influence of “natural factors” have not been finally determined (my area of current interest). Much science remains to be done regarding consequences (good and bad). What should be done politically, is a different forum I presume.
So, maybe that puts me in the “on the fence” camp :)
Allen says
#132 Martin Vermeer
I did not know about FreeTemp. Thanks.
I presume, being so late in the game, anything I think of has already been done, and I do not want to reinvent any wheels I can download.
Fred Staples says
There is a lot of data about, Barton (104).
If you look carefully you will see that the UAH monthly data from June 2001 to date shows a significant negative trend.
Their mid-troposphere temperatures show very little trend from the start of the record (0.05 degrees per decade).
There is a distinctly defensive tone to these posts, based on the suggestion that short term temperature changes cannot yield any significant information. Statistically, the time span of the record has nothing to do with significance. A trend is significant (at a given level of probability) if it accounts for a sufficient proportion of the variance in the data. It is not if it doesn’t.
Returning to the blog (as stimulating as ever) I came across Barton’s paper on Saturation. I used the approach to add some calculations to Tamino’s explanation of the Lapse Rate at Open Mind. I would be very interested in Barton’s comments, particularly on the relative absorption of the atmosphere above Essenhigh’s extinction levels.
John P. Reisman (The Centrist Party) says
#129 Furrycatherder
Here’s a mantra for you. Correlation does not equal causation (neither does no correlation equal causation).
Some things are coincidental and some complementary in effect. Getting climate models to work right needs positive forces and negative forces of the existing elements that impose forcing. Solar variance is just one piece of the puzzle.
http://www.agu.org/pubs/crossref/2005/2005GL023621.shtml
The biggest problem people have in understanding this global warming event is narrowly scoped data and improper context of that data when weighed with the big picture of global climate.
As I have stated before, with a properly narrowed scope, I can prove to you the world is flat.
Some might disagree with me, but why should that matter. They might even have data that disproves my assertion, but why should that matter. I am only saying that I can prove the world is flat with my narrowly scoped view. If I am foolish enough not to look at other relevant data that would affect the outcome of my assertion, then that is what I am, a fool, though Fred Singer would simply smile and say, no, ‘the world is flat’; ‘smoking is not bad for your health’; ‘CFCs do not harm the ozone layer’… you need to check the source and the history behind the source to get more perspective.
If you are seeing a chart that proves otherwise, i.e. that this global warming event is caused by solar, it is likely either fraudulent or narrowly scoped, or both.
Show us your data Furrycatherder.
Also, reread my post to you, #59 above. Further warming is in the pipeline. Heck, we’re doing such a great job at warming the planet, we don’t even need sunspots to do it. Aren’t we industrious! Who needs that extra 0.3 W/m2 anyway…
IPCC has us at 1.6 W/m2, but you have to remember they are in Switzerland, which is probably the most conservative country on the planet. So they are not going to speculate on anything. They need to run it through their filter of 2500 scientists to gain confidence. But a conservative number is not always the real number. Current forcing estimates done by our own government’s leading scientists are showing 1.9 W/m2.
Mark says
Fred Staples says in 137:
“Statistically, the time span of the record has nothing to do with significance.”
Uh, the noise goes up with the square root of the number of samples taken. The signal goes up linearly.
That is called Statistics.
If you’d done O level maths you’d have learnt that.
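Mark’s point is easy to demonstrate with a small Monte Carlo (the 0.02 ºC/yr trend and 0.1 ºC white-noise level are illustrative assumptions, not fits to any dataset):

```python
import numpy as np
from scipy import stats

# Sketch: how often does OLS flag a 0.02 ºC/yr trend buried in 0.1 ºC
# white noise as significant, for different record lengths?
rng = np.random.default_rng(1)
trend, sigma, trials = 0.02, 0.1, 2000

for n in (10, 20, 30):
    x = np.arange(n)
    hits = sum(
        stats.linregress(x, trend*x + sigma*rng.standard_normal(n)).pvalue < 0.05
        for _ in range(trials)
    )
    print(f"{n}-yr record: significant in {100*hits/trials:.0f}% of trials")
```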
Hank Roberts says
Allen, I suggest the “Start Here” link at the top of the page, and the first link under “Science” in the right sidebar of the page, for your first few weeks’ reading.
As you’ll note on climate blogs, there is a great deal of repetition, people will go from blog to blog posting the same beliefs and opinions, sometimes for years without changing anything they believe or opine.
They aren’t “on the fence” — they haven’t even entered the ballpark.
Reading at least Spencer Weart’s history will help you avoid that sad fate.
The science keeps changing. You can look it up.
John P. Reisman (The Centrist Party) says
#135 Allen
I don’t think that puts you in the “on the fence” camp. It puts you in the ignorant camp, by definition. Since you are from NASA, I am confident that does not offend, as debate is oft encouraged within the culture.
http://www.merriam-webster.com/dictionary/ignorant
Don’t worry though, no one knows everything and I have my own areas of ignorance where I am studying to learn system dynamic interactions and components. I strongly believe in lifelong learning.
Since you’re from NASA, I’m curious where you live that you only recently noticed the global warming debate?
Your statements indicate that you do not yet understand the major dynamics at play. I myself am not a scientist but I’ve been examining this for quite some time now and I love learning so it’s been fun.
In your search for information you might want to question the sources and basis of the arguments you find in areas where there has already been proof of misleading or even scientific fraud.
For example, your post #127 led me to Steve McIntyre’s web site. I believe that is the same Steve that was involved in a congressional debate about the hockey stick; his conclusions had substance but no significant relevance. In my opinion he wanted to muddle the argument and confuse people. I addressed his argument and many others in the following article,
http://www.uscentrist.org/about/issues/environment/john_coleman
which led to this conclusion submitted in testimony:
http://www.pewclimate.org/node/2132
The hockey stick became one of the most studied pieces of science and still held up. To this day, it still looks like a hockey stick, and it is still valid.
So where you look for data and how narrowly scoped that data is and whether or not that data is relevant are critical factors.
If you want to understand, you need to understand a lot.
In post #127 you mentioned “It merely raises the questions I (or any interested party) might ask.” That is a common mistake, one I have made as well. Asking common questions is a big part of the problem for those that want to learn about this global warming event.
A common question, for example: how can CO2 be the main driver of climate if it lags behind warming in the natural cycle?
The fact that CO2 is not the main driver of climate coming out of an ice age confounds people to this day… even though it is well established that CO2 is not the main driver coming out of or going into ice ages, as that is regulated by the Milankovitch cycles.
This global warming event is entirely different, because we are acting contrary to the natural cycle and have departed from the expected trend of climate minus the GHGs and other effects.
My advice is to study this site. There are literally thousands of sites out there now, and most I have looked at are caught up in narrowly scoped views that take no account of the larger scope of the known, relevant, contextual science.
Context and relevance are key to understanding. Without that, it’s barking up wrong trees and whack-a-mole in the carnival of human misunderstanding.
Since you are a beginner, as you have stated, keep all this in mind. There is an obvious concerted effort to confuse the issue. Context and relevance will bring you closer to the truth than naive assertions based on out-of-context, less relevant data. It’s a long road, but you’re in the right place.
If you wish to discuss any of these matters, you can contact me through my web site. I’m still ignorant too, but I do my best.
John
Steve Reynolds says
Ray: Any physical constant is determined from data–whether it is CO2 sensitivity or the gravitational constant.
Not distinguishing between constants known to many significant figures and parameters where even the sign is in dispute (cloud effects) is pretty unhelpful.
Mark says
Steve Reynolds in 142.
If clouds are low down, the net effect is cooling; if high up, warming.
Now why is it that the uncertainty in cloud cover and cloud formation only makes things cooler?
So, given it could go either way, why not ignore it for now? I mean, clouds don’t CARE what we want them to do, do they.
wayne davidson says
Speaking of ENSO, this year looks more and more like 1997, when ENSO raged at year end, but also when the Polar Vortex was really strong in the spring, like this year, when +200 knot stratospheric winds were measured in the Arctic. Looks like a strong El Niño may be on its way:
http://www.osdpd.noaa.gov/PSB/EPS/SST/climo&hot.html
Ray Ladbury says
Steve Reynolds, The fact that the sign is not known for clouds may indicate a variety of things: 1)lack of data, 2)the net effect may be near zero, 3)the mechanism may be obscure or there may be competing mechanisms that make it difficult to determine how to include the effects in the model. I suspect that clouds fall under 2 and 3 above. Again, you have to look at things in the context of the model.
Lawrence Brown says
Re: #129 The following site states why greenhouse gases have a much greater effect than the Sun and natural variability in explaining recent global warming.
http://www.metoffice.gov.uk/corporate/pressoffice/myths/4.html
Also the IPCC 2007 Summary For Policy Makers, The Physical Science Basis, in figure SPM.2 shows individual radiative forcing components. The solar irradiance is given as contributing 0.12 (0.06 to 0.30) W/m^2, while the total net anthropogenic forcing is given as 1.6 (0.6 to 2.4) W/m^2.
http://www.ipcc.ch/ipccreports/ar4-wg1.htm
Steve Reynolds says
gavin: Response[121]: Read Schmidt et al, 2006 – not because it’s my paper, but because it shows you what goes into tuning the models – there is no long term trend data used at all.
Thanks; there is lots of good info there. I notice the range of climate sensitivity reported for your various models is 2.4C to 2.8C. Those values are lower than what some people would expect.
Don’t the long term trends that are the test data for the models still influence model construction somewhat? If you see model predictions at odds with observed long term data, doesn’t that tend to influence selection of input parameters (that are poorly known anyway) to get better agreement?
[Response: Not really, because ahead of time I don’t know what the answer would be. – gavin]
Risa Bear says
Excellent graphing. May I repost, with attribution, to The Red Mullet? http://theredmullet.blogspot.com
Risa B
Steve Reynolds says
Ray: I suspect that clouds fall under 2 and 3 above. Again, you have to look at things in the context of the model.
My understanding (faulty as it may be) is 3. It generally takes a fairly high positive feedback from clouds to get a high climate sensitivity.
Allen says
Thanks to everyone who has offered advice and links. You’re becoming numerous enough that I do not want to spam with individual notes of appreciation.
#141 John, to answer your questions:
NASA Glenn RC, Ohio, USA, Materials Science, Microgravity Science, Project Scientist, Project Manager Spacelab and ISS experiments, retired 9 years. And/or under a rock? :)
FWIW, on questions of science, I think it behooves the ignorant to remain on the fence :) Moreover, I think retaining objectivity probably requires I remain there as long as possible.
No offense taken. Thanks for the links, I’ll visit them.