It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?
For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.
As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still. But note, as always, that short-term (15 years or less) trends are not usefully predictable as a function of the forcings. It’s worth pointing out as well that the AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves in the forcings imposed, the magnitude of the internal variability and, of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
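For anyone who wants to reproduce this kind of comparison, here is a minimal sketch of the two steps involved: baselining each model run to its own 1980-1999 mean, then taking a 95% envelope across the runs. The array shapes and values are purely illustrative placeholders, not the actual AR4 archive.

```python
import numpy as np

# Hypothetical inputs: annual global-mean temperatures for each model run,
# shaped (n_runs, n_years), covering 1900-2009 (placeholder data only).
years = np.arange(1900, 2010)
runs = np.random.normal(14.0, 0.2, size=(55, years.size))

# Baseline each run to its own 1980-1999 mean, as in the 2007 IPCC report figure.
base = (years >= 1980) & (years <= 1999)
anoms = runs - runs[:, base].mean(axis=1, keepdims=True)

# A 95% envelope across runs: the 2.5th and 97.5th percentiles for each year.
lower = np.percentile(anoms, 2.5, axis=0)
upper = np.percentile(anoms, 97.5, axis=0)
ensemble_mean = anoms.mean(axis=0)
```

The observational series would be baselined to its own 1980-1999 mean in the same way before being overplotted.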
There is a claim doing the rounds that ‘no model’ can explain the recent variations in global mean temperature (George Will made the claim last month for instance). Of course, taken absolutely literally this must be true. No climate model simulation can match the exact timing of the internal variability in the climate years later. But something more is being implied, specifically, that no model produced any realisation of the internal variability that gave short term trends similar to what we’ve seen. And that is simply not true.
We can break it down a little more clearly. The trend in the annual mean HadCRUT3v data from 1998-2009 (assuming the year-to-date is a good estimate of the eventual value) is 0.06+/-0.14 ºC/dec (note this is positive!). If you want a negative (albeit non-significant) trend, then you could pick 2002-2009 in the GISTEMP record, which is -0.04+/-0.23 ºC/dec. The ranges of trends in the model simulations for these two time periods are [-0.08, 0.51] and [-0.14, 0.55] ºC/dec, and in each case there are multiple model runs that have a lower trend than observed (5 simulations in both cases). Thus ‘a model’ did show a trend consistent with the current ‘pause’. However, that these particular models showed it is just coincidence, and one shouldn’t assume that these models are better than the others. Had the real world ‘pause’ happened at another time, different models would have had the closest match.
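For those who want to check such numbers themselves, here is a minimal sketch of the trend calculation: ordinary least squares with a naive 95% interval and, as noted above, no autocorrelation correction. The annual values below are placeholders, not the actual HadCRUT3v data.

```python
import numpy as np
from scipy import stats

def trend_per_decade(years, anoms):
    """OLS trend and naive 95% confidence interval, in degrees C per decade.
    No autocorrelation correction, matching the caveat in the post."""
    res = stats.linregress(years, anoms)
    tcrit = stats.t.ppf(0.975, len(years) - 2)   # Student's t critical value
    return res.slope * 10, tcrit * res.stderr * 10

# Placeholder annual anomalies for 1998-2009 (illustrative values only).
years = np.arange(1998, 2010)
anoms = np.array([0.53, 0.30, 0.28, 0.41, 0.46, 0.47,
                  0.45, 0.48, 0.43, 0.40, 0.33, 0.44])
slope, ci = trend_per_decade(years, anoms)
print(f"trend = {slope:+.2f} +/- {ci:.2f} C/decade")
```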
Another figure worth updating is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. Unfortunately, I don’t have the post-2003 model output handy, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.
Update (May 2012): The graph has been corrected for a scaling error in the model output. Unfortunately, I don’t have a copy of the observational data exactly as it was at the time the original figure was made, and so the corrected version uses only the annual data from a slightly earlier point. The original figure is still available here.
(Note that I’m not quite sure how this comparison should be baselined. The models are simply the difference from the control, while the observations are ‘as is’ from NOAA.) I have linearly extended the ensemble mean model values for the post-2003 period (using a regression over 1993-2002) to get a rough sense of where those runs could have gone.
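A sketch of that linear extension, with made-up numbers standing in for the ensemble-mean ocean heat content values:

```python
import numpy as np

# Fit the ensemble-mean OHC over 1993-2002 and extend the line beyond 2003.
# The values below are placeholders, not the actual model output.
fit_years = np.arange(1993, 2003)
ohc_mean = np.array([0.5, 1.2, 1.9, 2.4, 3.3, 3.8, 4.6, 5.1, 5.9, 6.4])  # 10^22 J, made up

slope, intercept = np.polyfit(fit_years, ohc_mean, 1)
extension_years = np.arange(2003, 2010)
ohc_extended = slope * extension_years + intercept
```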
And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC).
The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.
Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.26*0.9) * 0.19 =~ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best estimate sensitivity deduced from this projection is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range of 0.21+/-0.16 ºC/dec (95%). Note too, that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming.
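Written out explicitly, under the same admittedly reckless linear-scaling assumption, the arithmetic is just:

```python
# Implied real-world sensitivity from the Hansen et al. (1988) Scenario B mismatch,
# assuming the trend scales linearly with the sensitivity and with the forcing.
model_sensitivity = 4.2    # deg C per CO2 doubling in the 1988 GISS model
model_trend = 0.26         # Scenario B trend, deg C/decade, 1984-2009
forcing_ratio = 0.9        # actual forcing growth was ~10% below Scenario B
observed_trend = 0.19      # GISTEMP / HadCRUT3 trend, deg C/decade

implied = model_sensitivity / (model_trend * forcing_ratio) * observed_trend
print(f"implied sensitivity ~ {implied:.1f} C per doubling")   # ~3.4 C
```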
The sharp-eyed among you might notice a couple of differences between the variance in the AR4 models in the first graph, and the Hansen et al model in the last. This is a real feature. The model used in the mid-1980s had a very simple representation of the ocean – it simply allowed the temperatures in the mixed layer to change based on the changing fluxes at the surface. It did not contain any dynamic ocean variability – no El Niño events, no Atlantic multidecadal variability etc. – and thus the variance from year to year was less than one would expect. Models today have dynamic ocean components and more ocean variability of various sorts, and I think that is clearly closer to reality than the 1980s vintage models, but the large variation in simulated variability still implies that there is some way to go.
So to conclude, despite the fact that these are relatively crude metrics against which to judge the models, and despite a substantial degree of unforced variability, the matches to observations are still pretty good, and we are getting to the point where a better winnowing of models according to their skill may soon be possible. But more on that in the New Year.
Completely Fed Up says
Didactic: “You are being confusing. Any measurement system is ultimately just a convention, so your “explanation” doesn’t really explain anything. ”
And it was 100% matched to the complaint. The complaint meant nothing either, and switching to “absolute” values doesn’t tell you anything more about the temperature.
Why not use the kelvin scale?
Of course, you’d start with a scale at about 288K and so all your plots would be relative to that value…
hang on, that’s exactly the same plot as the poster was complaining about…
So no, the answer was 100% correct for the query made.
Completely Fed Up says
“As far as I am aware these are not well understood currently and are still being debated.
Alan”
And does the debate change the result significantly?
No.
But you can look and see for yourself rather than just hand-wave the problem into existence.
After all, according to quantum mechanics, you can’t know the position and velocity of anything to 100% accuracy and the nature of the macroscopic world in quantum terms is still being debated.
Does this mean we can’t find out if a driver was speeding?
No.
Why?
Because the known issues do not affect the answer we’re looking at.
Josh Cryer says
#385 John E. Pearson, thanks for the link, I hadn’t known you could coerce chaotic systems like this. Not having read the paper, could you explain what it does to the system? It seems as if their perturbations could break a system that is dependent upon chaotic runs, so any attempts to use their method with climate models would prove futile. Certainly if I were to use it in my n-body code it would cease to be an n-body code. (Note that I saw that you are not claiming that this is what is used with climate models, at least this claim cannot be made with regards to those where the code is readily available, such as GISS or CCSM.)
And don’t let some commentators get to you; it gets bad on both sides of the aisle. I like that it’s not as bad here on this site as it is on others (probably because they moderate out most of the noise).
Completely Fed Up says
” Alan Millar says:
31 December 2009 at 1:25 PM
I think you mean that it doesn’t look like the modelled Earth.”
Well, given we’re living on the Earth and we are interested in the climate of the Earth, I think that is rather the point, don’t you?
And yes, the actual Pinatubo eruption wasn’t modelled, but what they did after the fact was replace the eruption they had presumed with the actual measurements OF THAT VOLCANIC ERUPTION (which, remember, is not modelled; this is a climate model, not a volcanology one) and then run the same physical model again, untuned to that change, with the same parameterisations as the original, so the ONLY new information is the measured event of Pinatubo.
If the result was significantly different from the real record of the earth that had that eruption in it, then the model would have failed. If it was slightly different, it would show how much the models could be wrong.
This is called “proving the models”.
And it passed.
Ray Ladbury says
Matthew@390 says “No. Many tiny inaccuracies can accumulate to produce model predictions that are inaccurate on time scales of 10-30 years.”
This presumes a systematic bias of “tiny inaccuracies”. It also presumes that the same bias does not prevail for the validation dataset; that is, there must be a systematic shift in systematic biases. Yes, it’s possible to screw up a validation. Pray, what evidence do you have that the validation was in fact screwed up?
Matthew then reveals his true colors: “Imagine the possibility that, 20 years hence, Indian and Chinese psychological scientists and historians of science write books and hold symposia about the decline of the EU and US that was precipitated by the AGW mass hysteria that swept them. Was it just a coincidence that the hysteria developed in parallel with the “Left Behind” religious movement?”
Congratulations, Matthew, you have revealed yourself to be a tin-foil-hat-and-black-helicopter conspiracy theorist. Dude, did it ever occur to you that the reason scientists find the evidence persuasive is because the evidence is in fact persuasive? Isn’t that just a wee bit more plausible than the assumption that more than 95% of climate scientists and more than 90% of physicists, and more than 85% of chemists and… have all suddenly and simultaneously gone bat-shit crazy?
Please, please, please, please, please and pretty, please with sugar on top, Matthew, read Spencer Weart’s history. It’s referenced on the Start Here page. ‘Cause, Dude, seeing you spout this conspiracy-theory stuff is embarrassing!
Ray Ladbury says
Alan Millar says “I think you mean that it doesn’t look like the modelled Earth.”
I’m sorry. I didn’t realize it was possible to stutter in a written response. No, I mean what I said. A model with sensitivity below 2.1 degrees per doubling doesn’t look anything like Earth.
As to the rest of your screed, the fact that you seem to think it is necessary for a climate model to predict volcanic eruptions would seem to indicate that you don’t have much of a grasp of the subject matter.
Completely Fed Up says
Matthew repeats: “354, Completely fed up: Trend analysis is NOT A MODEL.
It’s a statistical model, or a mathematical model”
It’s not a model where you can do any form of simulation.
It is not the model that GCMs are a class of.
And those form-fitting analyses are worse than the physical ones you want to avoid using (mostly because they can’t be made to fit the result you’d like to see).
The form fitting is not a scientific model: you can’t say “and this causes that” because you haven’t EXPLAINED the origin of the wavelets you’ve fitted. All your intent will yield is “well, it goes up and down like this”. This doesn’t model the real world and it doesn’t model a hypothetical world (e.g. one where we tripled CO2 output or halted output of CO2 completely). It hasn’t explained anything.
It’s just made a different way to say what the line did.
Andrew says
@Completely Fed Up: “344.Andrew #304: the richest people have a tax rate of less than 20%.”
That’s not anywhere near as true as it was twenty years ago, but it is also irrelevant even if it were true.
I am using data based on actual taxes paid – not taxes avoided or reduced.
The taxes actually paid by the highest 1% of earners in 2007 amounted to just over 40% of all federal income taxes collected. 2007 is the most recent year for which that data is available. 2008 will likely be a distorted picture, but 2009 should be relatively normal.
Here is a link to a report containing the data:
http://www.taxfoundation.org/news/show/250.html
And of course if you want the straight government data you can get it from the IRS as Excel sheets:
http://www.irs.gov/taxstats/indtaxstats/article/0,,id=133521,00.html
Note that replacing income tax with carbon tax would mean replacing the tax collected – not the tax avoided. So if we were to suppose that rich people pay very low tax rates then it would only make replacing income tax with carbon tax even more burdensome for the average taxpayer.
You can actually check this: for the most part, people with high tax rates don’t easily reduce them. There’s 2.5 trillion dollars in tax-free municipal bonds that are worthless to people who do not pay high tax rates. There is no point in buying munis if you don’t have a high tax rate. 2.5 trillion dollars is a lot of pointless.
There are lots of other tax-managed financial instruments: ETFs, mutual funds, and things like overpaying life insurance premiums. If you are a high earner, all sorts of salesmen come knocking on your door with these ideas. Well, if it were easy to just lower your tax rate by going offshore, then none of these guys would be in business. Yet their business thrives, and only because the large majority of rich people end up paying high tax rates.
I suppose I should point out I spent the last twenty years running offshore hedge funds, so the question is not new to me. If you live in the U.S. then you can invest offshore, but you are much better off reporting the income to the IRS, in which case it is taxed same as if you made it in the U.S. Now thousands of rich people were talked into trying tricks that amount to lying to the IRS about their offshore assets (and many of them are now in court trying to stay out of jail – Google “UBS tax evasion 2008” for stories). But that’s thousands, not hundreds of thousands of people. Most rich people I knew are smart enough to stay clean with the IRS. Of course there will always be some dimwit who can’t control their greed, but many of those actually get caught. The offshore tax cheat does occur, but like the infamous welfare queen with two cadillacs? It’s the exception that fires the imagination, not the bulk of the statistics.
John P. Reisman (OSS Foundation) says
#371 Jason
Insufficient?
Let’s try a thought experiment.
Someone is holding a gun to your head. The majority of weapons experts believe that the bullets in the gun are real and powerful enough to blow your head clean off. You don’t think they are. You think they are insufficient and will do no harm.
So you say, go ahead pull the trigger. After that you will decide if the bullets are real and sufficient to blow your head clean off.
This is effectively what you are saying with your argument.
More explicitly, you are saying GCMs and hindcasts are not really good enough yet, and that you lack the knowledge needed to help experts improve their assumptions (the second smart thing you said in your post; the first was ‘hindcasts are very valuable’).
We know greenhouse gases are actually greenhouse gases (GHGs).
We know industrial processes and land use changes have increased the levels of GHGs in the atmosphere. We know albedo is changing.
So, essentially, your logic is that GHGs are not GHGs, and thus adding more will not have a meaningful effect.
So you’re saying go ahead, pull the trigger… and if it destroys the global economy we will then know that the scientists were right and you were wrong.
Man, I’m glad you’re not my dad. It would be rough for me to listen to such obtuse reasoning on even a semi-regular basis. It’s hard enough as it is.
Josh Cryer says
Ray Ladbury, BTW, I saw that you mentioned DSCOVR a few pages back, I was wondering if you were aware of CLARREO: http://clarreo.larc.nasa.gov
I wish it would happen sooner than 2016, but it should have such an accurate sampling ratio as to make land based temperature records obsolete (not that climatologists won’t still use them!).
Joseph says
Without having looked at the code of the really complex physics-based models, I can tell you it wouldn’t work like this. If the predicted temperature deviates too much from where the equilibrium temperature should be (according to irradiance and forcings) the model will self-correct. So I don’t believe you’d have this “cumulative” error you’re speculating about in any significant way.
Ken W says
ADR (384):
It’s important to understand that the total amount of CO2 in the atmosphere is increasing (nothing in this report challenges that) and that will in turn cause additional warming. It’s also important to understand that no single study is conclusive (the skeptics always like to jump on a single study that they think supports their position, while ignoring dozens of others that counter their position).
Here’s a good brief analysis of that study:
http://www.skepticalscience.com/Is-the-airborne-fraction-of-anthropogenic-CO2-emissions-increasing.html
Edward Greisch says
http://www.scientificamerican.com/article.cfm?id=local-nuclear-war&sc=DD_20091230
Scientific American has just inadvertently published a new geo-engineering solution to global warming.
SecularAnimist says
Matthew wrote: “Imagine the possibility that, 20 years hence, Indian and Chinese psychological scientists and historians of science write books and hold symposia about the decline of the EU and US that was precipitated by the AGW mass hysteria that swept them.”
Actually, they will be writing about how the decline of the USA was brought about by the death-grip of the fossil fuel industry on US energy policy, which kept the US mired in 19th century energy technologies, while China and India became the economic powerhouses of the world through massive investment in wind, solar and other renewable energy technologies that became the foundation of the New Industrial Revolution of the 21st Century.
Actually, they are already writing about that now, since it is already happening. China is already the world leader in wind and solar manufacturing and exporting — both technologies that were invented in the USA.
Doug Bostrom says
Ray Ladbury says: 31 December 2009 at 2:01 PM
“Please, please, please, please, please and pretty, please with sugar on top, Matthew, read Spencer Weart’s history.”
Just in case finding the link was baffling:
http://www.aip.org/history/climate/
Suggestion to RC: script the site so that every five posts a machine-generated post links to Weart’s site.
Tip for contrarians: read Weart’s history -before- posting here.
SecularAnimist says
My only comment about those graphs is that I admire the equanimity which seems to enable scientists to look at them dispassionately and analytically. I’m glad you can do that, because I cannot. They fill me with fear.
Andrew says
@CFU: “What is this “everything else” that is being ignored”
If you look at the policies proposed on climate remediation (e.g. Waxman-Markey, carbon tax etc.), then climate forcings other than CO2 are treated as CO2 equivalents. This means a one-dimensional measurement of the control parameter is used, despite the application of a multi-dimensional control. In those circumstances, it is likely (and expected) that the actual linear combination of the control that is applied will be determined by the economics of the controls (the arbitrage is via cap and trade). By setting up that arbitrage we know that we will be applying a one-dimensional control. Then we are minimizing the instantaneous economic cost (and hoping, without any serious credibility, that this has some sort of global optimality).
Here is an analogy – you have several parameters for controlling your car – the steering wheel and accelerator, to name two. And if you make a gentle turn, you lose a little speed, so there is a sort of “equivalence” between the steering wheel and the accelerator over a small region of the parameter space. Now if you made a cap-and-trade style arbitrage between those two parameters, you might end up with the minimizing combination being 20% steering and 80% accelerator. Imagine now that your car mechanically connects those two inputs with that scaling. Parallel parking might actually be possible in those circumstances. But it would be seriously more difficult than if you could use them independently.
Martin Vermeer says
Matthew #326
Ah… “wait for technology to save us.” Look, you don’t study mitigation scenarios much do you? The spending takes place over the whole time line, not all of it ‘now’. But what we do spend now, will have to be based on current technological realities. 20 years from now? Ask again then.
…and please note that not investing now in mitigation is also a ‘bet’, or should I say ‘uninvestment’, collecting compound interest at a nasty rate until we wise up.
In your dreams.
Completely Fed Up says
“If you look at the policies proposed on climate remediation, (e.g. Waxman-Markey, carbon tax etc.) then climate forcings other than CO2 are treated as CO2 equivalents.”
OK.
“This means a one-dimensional measurement of the control parameter is used,”
Uh, no. didn’t you read your statement:
“forcings other than CO2 are treated as CO2 equivalents.”
So it’s multidimensional. Assuming this means “adapting to change in one forcing” as one-dimensional.
“Here is an analogy – you have several parameters for controlling your car – steering wheel and accelerator to name two.”
And your engine when you accelerate produces torque that will cause the car to veer. This used to happen a LOT in older cars before they started working on complex controls that counter this effect by turning the wheels appropriately.
Therefore your accelerator also has a steering wheel in it.
But it doesn’t stop there!
When you turn, old cars had the wheels spin the same speed and your inside driven wheel would skid because it was trying to move the same linear distance on the inside of a curve. So the steering wheel has a brake that slows down the inside wheel. It’s called a differential.
Your steering wheel has a brake in it.
The appropriate inclusion of one dimension into the other is warranted and makes driving both easier and simpler. Modern cars are a lot easier to drive than the Model T fords.
“But it would be seriously more difficult than if you can use them independently.”
Not in your car.
Helicopters also have several problems that mean your collective is mixed with some rudder, your rudder is mixed with some collective and engine, and your engine is mixed with some rudder.
Why?
Because without that mix, the craft is much more difficult to manage.
Cheap radio controlled helicopters are easier to fly than expensive ones BECAUSE they have fewer controls.
Your choice of analogy was exemplary. Pity it actually undermines your “everything else is ignored” idea.
Doug Bostrom says
Nicolas Nierenberg says: 30 December 2009 at 7:23 PM
Hmm, I’m not even sure at this point that we’re disagreeing, or not much, rather that I took exception to your original point after reading it too literally.
I think we could probably both agree that a model can be built from first principles, can then show reasonable competence but of course can be improved by incorporating newly identified or previously ignored influences.
I’d venture to say that if a model of a physical system can be compared to the real physical system being described by the model and that such comparisons can yield paths for improvement of the model is a validation of the model.
We could probably also agree that if the fundamentals of the model were poorly understood to the point that the model was originally invalid, attempts to improve it based on hindsight comparisons would probably make the performance of the model against hindsight even worse.
In any case, I think I did not read/understand your original post properly.
Doug Bostrom says
Oops:
“I’d venture to say that if a model of a physical system can be compared to the real physical system being described by the model and that such comparisons can yield paths for improvement of the model is a validation of the model.”
should be:
“I’d venture to say that if a model of a physical system can be compared to the real physical system being described by the model and that such comparisons can yield paths for improvement of the model, that process itself is likely a validation of the original model.”
Jim Bullis, Miastrada Co. says
Partly related to the ocean heat content chart of the above article, but also related to the mitigation discussion, I wrote the following comment to an Economist Magazine article on Copenhagen:
There is something very wrong with a developed world that wastes about 80% of the energy intended for transportation and about 60% of the energy intended to make electric power. Automobiles and trucks were designed for a world where fuel seemed unlimited. Electric power generating systems were also arranged to waste energy without seriously annoying large populations. The last hundred years have been a great ride — whoopee! Correcting this insanity is the task at hand. Global warming is a secondary issue that will be fixed only when we get the more fundamental problem solved.
It seems that the campaign against global warming could actually be a distraction that leads to failure to solve anything. The very goal of a 2 degree C limit in the rise of global surface temperature is an example of how misdirected this campaign can be. While it is plausible that man made CO2 can offset the general heat balance of the earth, it seems entirely likely that this imbalance will be taken up by the deep ocean heat capacity. We could actually have serious global warming consequences with sea level increases while the surface temperature averages rise hardly at all. All the squabbling about the temperature record may be irrelevant.
A good case can be made that the actual scientists that model the expected climate are serious people who have done very sophisticated work. Questioning their motives because they have tried to present a convincing popular case is simply wrong and mostly anti-intellectual in flavor. Still, it seems that the impending disaster of global warming is not as well understood as we might have been led to think.
Perhaps more relevant is the apparent fact that serious planning decisions continue to ignore global warming altogether, and opt instead to solve the energy dependence problem. The bamboozled public thinks that electric cars with futuristic batteries will cut CO2, when this development will actually result only in a shift to coal as the base fuel and actually increase CO2 compared to emissions from hybrid vehicles. Why do people with economic and political good sense want to bamboozle the public? The answer is that shifting to coal will indeed help with the oil dependency problem, which is a meaningful way to perpetuate prosperity and to change the power balance in the Middle East.
Also we have great enthusiasm for the “smart grid” which promises only to slightly trim losses from the existing electric power system with the underlying result of perpetuating the system of central power plants where vast amounts of heat are wasted. The dream of wind and sun as power sources seems like a deception to justify new transmission links, when the continued uneconomic reality of these ideals shows nothing real should be expected here. The bamboozled public will wake to the reality that the new transmission links will simply enable wasteful power production practices by bringing power generated far away to the urban users. Imagine “mine-mouth” power plants in the coal regions of the USA, far out in the country alongside of windmills. Guess what the proportion of power coming from wind will be.
The real goal should be to cut energy use with due concern for the functioning of industrial society.
We have some examples of how to do better. The Aptera is a car that dramatically decreases the energy needed for personal transportation of the sort that people need, and might eventually come to believe looks good. Distributed cogeneration of electric power using natural gas, where the generators are at individual households that use the otherwise wasted heat, thus doubling or tripling the system efficiency of electric power production. Miastrada Company is also involved in such future developments. (I represent Miastrada Co.)
By working to solve the fundamental problem as discussed above, we arrive at possible solutions that make economic sense in their own right. The appropriate test of any solution is whether it is economically sustainable without long term infusions of public money, whether it comes from the Treasury or through extra costs for fuel or electricity.
The tragedy of Copenhagen is that the solutions depended on infusion of public money, whether to cap and trade or subsidize poorer countries. Overlaying all this is the incredible naivety that setting of goals and making pledges means anything at all. Realists should have stayed home once Presidents Obama and Hu agreed that neither would make binding commitments. President Obama knew with certainty that he could never get such a treaty ratified and President Hu was probably aware that he would also face intractable planners who know the real costs involved.
So we can try to spin a meaningless exercise into something that might turn out to have some positive results. Or we could try to awaken to a real challenge to rethink the way we do things in a way that fixes the fundamental problem of extreme energy waste.
Lynn Vincentnathan says
Matthew (#391), I think the hysteria is on the part of the denialists. A reasonable person (I’m talking non-scientists), when faced with a decision, would choose wisely. If acting on a false positive (mitigating AGW when AGW is not happening) yields great savings and better living, while failing to act on a false negative (not mitigating when AGW really is happening) yields great disaster, perhaps even the Venus syndrome that top climate scientist James Hansen suggests, then a reasonable, wise person would choose... now let’s think a while... yeh, they’d choose to mitigate, save money, and have a better life, hoping like crazy the climate scientists were wrong and AGW was not happening.
So why all this hysteria about turning off lights not in use, or getting onto GreenMountain 100% wind-generated electricity that costs less than polluting energy, or putting up a little investment in energy-efficient appliances and products that save money in the long run, some even paying for themselves in savings and going on to save more, and certainly doing much much better than the stock market did this past decade? Why are people so scared to death about installing a $6 low-flow showerhead with off-on button that saves $2000 in hot water during its 20 year lifetime? BOO!
Barton Paul Levenson says
Sierra117 — I have CO2 concentrations from 1880 to 2007 here:
http://BartonPaulLevenson.com/Correlation.html
For more up to date information, google CDIAC.
Barton Paul Levenson says
Jason: The inability to replicate short term wiggles does not concern me, provided that those wiggles turn out to be brief interruptions in a long term trend.
BPL: Why don’t you look at the trend over more than 120 years, then?
http://BartonPaulLevenson.com/Correlation.html
I give temperature anomalies from 1880 to 2007. Graph ’em against time in Excel and tell me what you get.
Alan Millar says
Comment by Ray Ladbury — 31 December 2009 @ 2:05 PM
“As to the rest of your screed, the fact that you seem to think it is necessary for a climate model to predict volcanic eruptions would seem to indicate that you don’t have much of a grasp of the subject matter.”
The point I am making is that the models are all adjusted to match past reality. Aerosols seem the favourite way to do this. In GCM reality, aerosols are like Tinkerbell’s fairy dust: they make the models fly!
Of course different amounts of ‘fairy dust’ are needed for each model to make them match, but that is no problem as there is no agreed physics to falsify it. Very convenient!
We all know that the models have a problem with the 1940 – 1975 cooling period. However, that is all solved by inputting various parameters for aerosols and voila, we can match the data!
Unfortunately Ray, this creates a rather large elephant in the room, the 1910 – 1940 period.
One of the most interesting of the leaked e-mails is, to my eyes, the one which includes a reference to the 1910 – 1940 ‘problem’:
“The other interesting thing is (as Foukal et al. note — from
MAGICC) that the 1910-40 warming cannot be solar. The Sun can
get at most 10% of this with Wang et al solar, less with Foukal
solar. So this may well be NADW, as Sarah and I noted in 1987
(and also Schlesinger later). A reduced SST blip in the 1940s
makes the 1910-40 warming larger than the SH (which it
currently is not) — but not really enough.
So … why was the SH so cold around 1910? Another SST problem?”
This 1910-1940 issue goes to the heart as to what level of confidence we can have in the AGW theory and the associated GCMs.
Up to now it seems that certain AGW scientists and advocates have been happy to wave their hands a bit whilst muttering Solar and Aerosols as the answer as to why global temperatures increased at a similar rate during this period as compared to the latter part of the century, with little help from increasing CO2 levels.
I have known all along that this is rubbish. If you believe in AGW then you can only allow a small fraction of the observed increase in temperatures to be attributable to increased solar activity. As far as aerosols go, this is a direct lie. Aerosols increased very sharply during this time. This is a fact confirmed by the Greenland ice cores.
Now we can see, in writing, that this problem is unresolved by scientists at the heart of the AGW hypothesis and they do not believe the meme they have happily allowed to become established as the answer to this ‘problem’.
So we know for certain that we have a situation where an unknown combination of climatic factors caused the global temperatures to rise at a significant rate comparable to the late 20th century and this remains unresolved.
I am sure that most people here can see what this means for the AGW hypothesis. Logic dictates that if you cannot explain one rise over a similar period then you cannot explain another rise over a similar period. Unless you can identify and isolate the significant factors in the earlier period then you cannot know whether these unknown factors are driving the rise in the latter period, it is unarguable logic.
So Ray, as you believe that the science is basically settled, please identify, isolate and quantify the precise climatic factors that drove this rise so that I can compare them against the 1976 – 2000 period to see what the difference was, if any.
If you quote reduced volcanic activity as a reason, other than as a less precise analogy for aerosols which I have already shown to be false, please state the hypothesis which allows volcanic activity to warm the Earth outwith the aerosol effect.
Alan
Ray Ladbury says
Alan Millar,
The period 1910 to 1940 was also a period of low volcanic activity, rising solar intensity and increased industrial activity (CO2 production)–and this probably also contributed to the observed trend. What is more, the globe was not exactly bristling with instruments in the period, so it is not surprising there are uncertainties. The fact that we cannot account in detail for every tenth of a degree at every period in history does not negate the successes of the models in accounting for the majority of what we see in Earth’s climate.
So, Alan, what do the denialist models say about 1910-1940. Oh, that’s right, there are no denialist models. No model at all in fact that has a sensitivity less than 2.1 degrees per doubling. Do let us know when you’ve got one, Alan.
dhogaza says
Alan Millar …
As proven by this abstract …
Oh, wait, the abstract highlights a laundry list of model results compared to observational data; it doesn’t appear they’re just making stuff up in order to make the model fly after all …
Lurkers: never trust anything a denialist says without checking up on it first.
Doug Bostrom says
Lynn Vincentnathan says: 31 December 2009 at 3:29 PM
“Why are people so scared to death about installing a $6 low-flow showerhead with off-on button that saves $2000 in hot water during its 20 year lifetime?”
Well, there’s ample evidence that fear in this case is being cultivated, as well as paralysis. Cognitive corruption is being generated and propagated by professionals skilled in the arts of public relations.
“Fear, uncertainty, and doubt”. Compare those terms to misunderstandings you see coughed up every day on this site. The general fit is remarkably tight.
The selective attention paid to this particular branch of scientific inquiry accompanied as it is by a steady undercurrent of accusations of fraud, malfeasance and incompetence is really quite aberrant, conspicuously so.
There are a lot of interests who are quite keen on waste. They report to shareholders and are judged in part on their ability to make sure we continue being wasteful and shortsighted. Nothing new, really; it’s an old story about preserving the status quo.
If the stakes were not so high, people like Gavin would be able to pursue their curiosity without being the target of character assassination campaigns and the like. As it stands, their course of inquiry has lead them into the crosshairs of powerful interests.
Some of these researchers have felt compelled to raise their hands and point out the potential danger their findings appear to identify. That just makes the situation worse for them, leading to death threats and the like, the tip of the pyramid of greed-induced madness. That’s a big problem with PR of the kind driving the contrarian community: pound on fear hard enough and you’ll bring out the crazies. Same deal as the health care debate, etc.
John E. Pearson says
403: Josh, I don’t have access to that article until I go back to work, so I can’t read it. Here’s what I think they did. First, it is well known that chaotic systems have a great many unstable periodic orbits. Ronnie Mainieri and Predrag Cvitanović wrote a fairly long book about this which was never published but is probably available on the internet somewhere. You can get approximations to many statistical objects of interest by doing appropriate averages over the unstable orbits. It’s pretty technical stuff and I never understood it deeply to begin with.

What I think Yorke and co. did was take advantage of the unstable periodic orbits and stabilize them by perturbing the parameters. Think of balancing a broom in the upside-down position by wiggling your hand. I know they did this with an experimental system but I don’t remember much about it. They wrote: “It is shown that one can convert a chaotic attractor to any one of a large number of possible attracting time-periodic motions by making only small time-dependent perturbations of an available system parameter. The method utilizes delay coordinate embedding, and so is applicable to experimental situations in which a priori analytical knowledge of the system dynamics is not available.”

I tried to make it clear in my original remark that I don’t think this has anything at all to do with controlling the climate, nor with climatologists tuning their models to the data. Mathematically, controlling climate is far easier than controlling a chaotic dynamical system, because if you want to turn down the temperature of the earth all you have to do is reduce the forcing a bit, which is what climatologists are advocating. It doesn’t require continuous perturbations as in Grebogi, Ott & Yorke. Simply decrease atmospheric CO2 concentration and the temperature will eventually drop (or perhaps just stop increasing).
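For anyone curious, here is a toy sketch of that broom-balancing idea on the logistic map. It only illustrates stabilising an unstable fixed point with tiny parameter nudges; it is not the delay-coordinate method of the paper, and, again, it has nothing to do with how GCMs are built or tuned.

```python
# Toy illustration of controlling chaos with tiny parameter perturbations, in the
# spirit of Ott, Grebogi & Yorke: wait until the chaotic orbit of x -> r*x*(1-x)
# wanders near the unstable fixed point, then nudge r just enough to hold it there.

r0, max_dr = 3.9, 0.03            # nominal (chaotic) parameter, cap on the perturbation
x_star = 1.0 - 1.0 / r0           # unstable fixed point of the nominal map
fx = r0 * (1.0 - 2.0 * x_star)    # df/dx at the fixed point
fr = x_star * (1.0 - x_star)      # df/dr at the fixed point

x, captured_at = 0.3, None
for n in range(2000):
    dr = 0.0
    if n > 500:                                   # control switched on here
        needed = -fx * (x - x_star) / fr          # delta-r that maps x back onto x_star
        if abs(needed) <= max_dr:                 # only act when a tiny nudge suffices
            dr = needed
            captured_at = captured_at or n
    x = (r0 + dr) * x * (1.0 - x)

print(f"fixed point {x_star:.4f}, final state {x:.4f}, first captured at step {captured_at}")
```

Once the orbit happens to wander close enough, the small perturbations pin it to the fixed point and keep it there.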
Grabski says
Ray Ladbury:
This is where I got that idea. From this very blog:
And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%)
Scenario C assumed no change in forcings from 2000 forward. I admit I used this description of Scenario B and extrapolated. I may be a bit high, but the fact is that we are below Scenario C’s out-of-sample forecast, and it’s clear that forcings are more than 10% above that level (since that is only Scenario B’s overshoot) and they haven’t stabilized (per Scenario C).
So we have temps below the level that this model projected for the case of no further forcing growth from 2000 to 2009.
John P. Reisman (OSS Foundation) says
#426 Alan Millar
You are confusing the legitimate pursuit of quantifiable answers and problem resolution… with ‘fairy dust’.
I think there is a level of confidence in the 1910-1940 period for combined natural variation and imposed forcing. But the signal to noise is not so easy to parse.
But that really is a red herring isn’t it.
The current forcing, quantitative knowledge of GHGs, land use albedo, and ocean temps are clearly resolved in models and observations, for the most part. This event is human caused.
Picking an email and saying it says we know nothing ‘now’ is pretty silly really. Context is key. Figuring out a problem is not the same as resolving other problems.
As to your statement:
Solar accounts for around +/- 0.2 W/m2 of current radiative forcing. The current forcing above pre-industrial is positive around 3.6 W/m2. What are you trying to say?
Can you point me to the proof you are speaking of regarding GRIP cores?
As to your assertion:
This is a non sequitur. If someone punches you in the face and you don’t know why, that would be different than if I punched you in the face and told you why. Your assertion that your logic is unarguable is ridiculous and arrogant… since I have just argued against it quite clearly.
Spaceman Spiff says
Alan Millar @395 said:
“We hear that the models are not backfitted to match the data, that’s a laugh! Unless we are being led to believe that the models predicted the Mount Pinatubo eruption. That would be a good trick!
The models were obviously backfitted to match the effects of this eruption and the assumptions and parameters used were almost certainly those that would cause the models to match the actual temperature record.”
Could you imagine at all that what was done was to inject the measured ejecta (in mass, altitude, at the appropriate latitude of the eruption) from Pinatubo into the model atmosphere, and re-run the models? And the models’ reproduction of what did happen regarding the net forcing and resulting time-lagged change in temperature provided some affirmation of the treatment of aerosols. I’d say that was a worthwhile scientific test of the model.
Saying that “aerosols” are not completely understood is pointless. So what? You wouldn’t find a climate or atmospheric scientist anywhere that would disagree with that statement. For that matter science doesn’t completely understand anything. You’ve mistaken not knowing everything with not knowing anything. None of this is terribly useful.
Finally, there is a significant and growing literature on aerosols and their direct and indirect effects on climate. I suggest making a modest attempt to investigate this yourself before drawing conclusions.
Matthew says
418, Martin Vermeer: Look, you don’t study mitigation scenarios much do you? The spending takes place over the whole time line, not all of it ‘now’. But what we do spend now, will have to be based on current technological realities. 20 years from now? Ask again then.
There are lots of mitigation strategies. $1 trillion spent over 10 years by the US and EU, if used (as some has been up till now) to finance construction in India and China, will not lead to CO2 reduction before 2050, and maybe not in this century. It gets worse: the US and EU combined produce only 1/3 of the annual anthropogenic CO2, and much less of the annual CO2 increase; we could shut down completely without affecting future global warming, as long as the rest of the world keeps growing its economies, unless the AGW theory is way off. China also, like the US and EU, is investing in CO2 capture and storage R&D. If that works well enough, which we may know in 5 years (though not with certainty), then the whole mitigation strategy will change.
I support increased production of energy from all non-carbon sources for the US, at a rate that does not cost a reduction in GDP growth, and increased investment in reforestation world-wide. I noticed that you mocked, but did not deny, the claim that too much money spent now will mean that less money is available in 2 decades. Solarizing the US economy now, by purchasing PV cells made in coal-fired factories in China (as we are doing), will accomplish less than a much slower solarization based on PV cells made in solar-powered factories, but that would be a slower process.
Lynn Vincentnathan, I think that a motivational analysis is not the topic of this thread, but since you raised the issue you might want to look into the theory of cognitive dissonance and “post-decision dissonance reduction”. If you have been investing in reducing your carbon footprint for the last 2 decades, as you wrote, then you have a vested psychological interest in your beliefs. You can’t change your mind no matter how much evidence accumulates against AGW, according to the cognitive dissonance theorists, so you have no right to think of yourself as a more reasonable person than anybody else. Then there’s all that mythological Freudian stuff (projection, rationalization, denial, etc.) It’s sort of arrogant to think that only people who disagree with you have motivated beliefs.
With the uncertainty about the AGW theory, and the imprecision in the diverse predictions, a wise strategy is to hedge bets, i.e. invest in a diversified portfolio with a long-term perspective: CC&S, solar, wind, all biofuels, nuclear, reforestation; if CC&S works, then synfuels (like the Great Plains synfuels plant, where the CO2 is sold to Canadian oil companies to increase recovery of oil from old wells.) If you are buying a car, you might consume more total energy by purchasing a Prius than by purchasing a Corolla; and you might consume the least by maintaining and driving an old Camaro. Use the money to fund reforestation in Ecuador instead.
TRY says
#410 Josh – Wow, thanks for the link to this –
http://clarreo.larc.nasa.gov
Pretty much exactly what I was interested in, at least going by the promotional copy. It would be great to see this get launched.
Ray and Doug – great summary! this is an excellent overview
http://www.aip.org/history/climate/
Timothy #341 – If you read that link above you’ll see, as I did, how complex these issues are. And that complexity has increased over time, as the impacts of absorption, emission, saturation, convection, precipitation, and cloud formation are dealt with in more and more detail throughout the entire atmosphere – which really doesn’t have ‘layers’, just a constantly changing gas/H2O makeup, energy levels, emissions, etc. across three dimensions.
Per that link above, we are far beyond the “simple physics” in that there are so many pertinent processes and interactions that we have to rely on complex models to take everything into account.
So I guess the challenge is, that as we make these models more complex and presumably more accurate, our expectations of their predictive power go up. But without good short-term predictions, people are left jumping back and forth —
— “climate processes are complicated, so we need sophisticated models to show sensitivity to CO2 and take all the potential feedbacks and processes into account”
— “well, the models aren’t that accurate in the 15 year time frame, so how can we rely on them in the longer term?”
— “When we’re talking about climate change and sensitivity to CO2, the underlying physics and forcing is pretty straightforward – here are the short equations showing an obvious forcing and the associated move to a higher temperature equilibrium.”
— “But those equations don’t take X into account.”
— “Right – but these models do. As I said, we need big models to show sensitivity to CO2 and take all the potential feedbacks and processes into account”
…
…
So, maybe IR output signature is a predictable, testable item. Maybe not. Interesting question, I think.
Ernst K says
“So we have temps below the level that this model said would require no further growth from 2000 – 2009.”
But scenario B’s forcings from emissions were about 10% higher than what actually occurred, and scenario C had the same CO2 forcings until 2000 and lower forcings from CH4 etc., so you should expect B to have over-predicted the warming, with C lagging slightly behind until 2000, after which the two should diverge, which they do.
If you go to the original Hansen et al 1988 paper (http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf), then you’ll see that Scenario B and C used the following CO2 forcings:
2000 CO2 = 377 ppmv for both scenarios (compare figure 2 with the equation in Appendix B)
2010 CO2 = 400 ppmv for scenario B and 377 for scenario C
Actual CO2 in 2000 was something like 365-370 and current CO2 is about 385. 377 wasn’t reached until about 2005 or so.
So scenario C was pessimistic until 2000, and only became slightly optimistic in 2005. It really shouldn’t be a surprise that C fits the actual temperatures the best.
Remember that these are climate models, not emissions models.
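As a rough check on those numbers, the standard simplified expression ΔF ≈ 5.35 ln(C/C0) (Myhre et al., 1998) can be used to compare the CO2 forcings implied by those concentrations; note that Hansen et al. (1988) used their own formulation (in their Appendix B), so this is only an approximation, and only the differences between the lines matter.

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Approximate CO2 radiative forcing (W/m2) relative to an arbitrary
    reference concentration, using the simplified Myhre et al. (1998)
    expression; Hansen et al. (1988) used a slightly different formulation."""
    return 5.35 * math.log(c_ppm / c0_ppm)

scenarios = {
    "Scenario B and C, 2000": 377.0,   # per the comment above
    "Scenario B, 2010":       400.0,
    "Observed, ~2000":        368.0,   # mid-range of the ~365-370 ppm cited above
    "Observed, ~2009":        385.0,
}
for label, ppm in scenarios.items():
    print(f"{label:>22s}: {co2_forcing(ppm):5.2f} W/m2")
```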
It would be interesting to put our best estimates of the actual CO2, trace, and aerosol concentrations into the original 1988 model and show what they would predict. As long as you don’t change any of the formulations in the model itself, it would be a perfectly valid test of the original model.
Would that be easy to do Gavin? Or would it take a lot of work to dust off the model and get it running again?
[Response: It’s the same model as EdGCM – so no, it shouldn’t be hard and I vaguely remember someone trying to do this the last time we discussed this. Not sure if it ever got finished though.- gavin]
Hank Roberts says
Grabski — you’re posting a claim about 1988 that has been debunked here repeatedly.
Please, use the search box. Look at the assumption made in 1988 for climate sensitivity. Please look things up first.
Hank Roberts says
Oh, why hope. Here, this is what you should have found:
https://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/
Jason says
“I suspect that the trend towards lower climate sensitivity numbers will continue during the new decade.
[Response: On the basis of what? This is just your wishful thinking.- gavin]”
Let me be clear then, I suspect that the current environment in climate science, which is rabidly hostile to anyone perceived as being skeptical, has biased your estimates by consistently elevating research which is perceived as being helpful to a particular political view point, and depressing results that are perceived as harmful.
I suspect that the views expressed in the Nature editorial have long been widespread throughout the field of climate science and have had a very significant impact on critical decisions concerning the acceptance of papers, and the awarding of funding.
In such an environment, I think it is inevitable that there will be an impact on the research that results, your good intentions notwithstanding.
[Response: You are very wrong in your assessment, But I doubt you will be convinced otherwise. – gavin]
Geoff Wexler says
Re #426 Alan Millar
This 1910-1940 issue goes to the heart as to what level of confidence we can have in the AGW theory
Try Schlesinger 4th. figure here:
http://www.climatewatch.noaa.gov/2009/articles/short-term-cooling-on-a-warming-planet
[Thats a popular account, an easy read; for the serious version try Google Scholar]
What do the experts here think about the AMO mechanism being part or all of the answer to Millar?
Jason says
#394 “It would help you get a job, get grants, and get tenured. If there’s one thing a graduate student prays for, it’s a clear demonstration of an unexpected result.”
In most fields this is true, but not in Climate Science.
In Climate Science, if the result is perceived as threatening the consensus political view, it is attacked without regard to its scientific merit. If it is published, an attempt is made to remove the journal editors responsible.
[Response: This is the umpteenth time you’ve made this statement, and you have still to provide a scintilla of proof for it. It simply isn’t true. Point to one paper that was ‘attacked without regard to scientific merit’ and one case where any editor has been removed for publishing such a paper. And before you start agitating – note that Hans von Storch resigned from Climate Research over poor peer-reviewing (and the actual editor involved in the relevant decision (de Freitas) was not removed), and Saiers served his full 3 year term as GRL editor. So either provide some proof for your claims, or stop making them (here at least). – gavin ]
Jason says
#397: “Why don’t you just stay away until you’ve learned enough to stop embarrassing yourself in public?”
So if I parrot Gavin who says that GCMs are not “exercises in fitting curves to hindcasts”, my comments are welcome.
But if I propose a test for whether or not Gavin (along his peers) has accidentally biased his models, I should stay away?
[Response: While you were actually asking questions and bringing information to the table you were most welcome. Once you dipped into unjustified and (still) unsupported insinuations of malfeasance and bias, no, not so much. – gavin]
Amusingly, my other posts in this thread concern the topic of whether or not grad students can enhance their career prospects by attempting to publish unexpected results that go against the prevailing political sentiment.
There is a vast difference between posts on a blog and submissions to peer-reviewed journals. But a host of highly regarded climate scientists (Curry, Korhola, Lindzen, von Storch, etc.) have suggested that the highly politicized attitudes expressed here extend to much more serious scientific discourse.
The only people who should be embarrassed are those who suggest that climate science, as it exists today, welcomes contributions that could result in Inhofe and his repugnant ilk scoring political points.
[Response: You are, again, wrong. Scientists want the truth – or as close as we can get to that. This is not something that the ‘repugnant ilk’ are much concerned with and so they regularly misrepresent good science (even more often than they champion nonsense). We will not prevent misrepresentations however hard we try, but you still have not demonstrated any actual evidence that good science that undermines the mainstream view is being suppressed. Given that garbage like Chilingar, Miskolczi, Gerlich and Tscheuschner, McLean et al, does get published, why can’t all these brilliant contrarians do so too? Where are the Arxiv preprints that are going to blow us all away? They simply don’t exist, and your conspiracies are simply fantasy. – gavin]
Simon Rika aka Karmakaze says
@Gavin, Ray, Et Al.
Thanks, I knew I was missing something obvious, LOL!
I have figured out at least part of what it is I was missing and thought I’d just relay it for anyone who has been following and is still confused as I was/am.
I was forgetting that “+0.2C” in January could still be far warmer as an absolute temperature than “+2.0C” in July (for my part of the world that is) and so the latter could appear to be “cooling” when it is in fact significantly warmer than the norm, this would mean plotting the actual mean temperature would make it very hard to see the changes.
Likewise, +2.0 during a La Niña could be colder than +0.2 during an El Niño, so plotting the absolute temperature could make the warming during the La Niña seem insignificant compared to that during the El Niño. Plotting them simply as +0.2 and +2.0 doesn’t remove any relevant information (as some people might think); it simply highlights it.
Also, since we can’t measure (or realistically handle the data from) every single molecule of air even in a single column of the atmosphere, any value we report will only ever be an average, not an absolute temperature, no matter how many measurements we take. So plotting this artificial (averaged) temperature would imply a certainty we don’t, or realistically can’t, actually have; plotting only the change in the artificial average avoids that confusion.
This helps incorporate something I read somewhere (probably here): that the anomaly is consistent over larger regions than the actual temperature. For example at my house it could be 15.1C and down the road it could be 15.2C but for each location that represents +0.1C anomaly. A region can be warming due to global effects, but each location within that region varies considerably in actual temperature.
Does that sound right to the experts?
I know this is essentially what you were trying to explain, I was just having difficulty relating it to the plotting of the data – I sort of had to try picturing the graph as an absolute temperature series over time and imagine how hard it would be to see.
Gavin’s links helped clarify that for me, I think, but perhaps an article in the Start Here section that shows two such graphs side-by-side so we can see how the same data can look very different depending on how it is plotted might still be helpful. As for your book, Gavin, I would very much like to read it – I shall have to find out if I can get it here in my corner of New Zealand :)
As I said, my own education in even the basics such as this is severely lacking, and I am sure some of the other commenters (especially the deniers) have the same problem. I know it is not the job of the climate scientists to teach basic skills such as this, but perhaps if there were at least a page or link here to help out people like me, some of the more silly claims might fade away. I mean to say, maybe not everyone coming here and making silly claims is being a denier; maybe they are simply confused at a level that the experts don’t even think about and wouldn’t even question among themselves. This, for example, isn’t even a climate issue – it’s simply a data display issue, and would be common in any field dealing with complex data.
I know a simple webpage could never explain it all (that’s why we have schools and universities), but covering the common points – such as why a temperature anomaly graph conveys the data more accurately than an absolute temperature graph – could clear up a lot of confusion. It might even show genuine doubters that these things are not done to “sell” the science but to make it easier to see, and that they are done for very good reason. Those of us who are not experts in the field need to be sure we understand even simple stuff like this before we start assuming we even know what we are looking at.
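(A minimal sketch of that point, with entirely made-up numbers and not taken from anything in this thread: build a synthetic monthly series with a big seasonal cycle and a small warming trend. In the absolute series the trend is swamped by the seasonal swing; subtracting each calendar month’s long-term mean – the anomaly – makes it visible.)

import numpy as np

months = np.arange(12 * 50)                          # 50 years of monthly data
seasonal = 10.0 * np.sin(2 * np.pi * months / 12)    # ~20 C peak-to-peak seasonal cycle
trend = 0.015 * (months / 12)                        # small warming trend, 0.015 C/yr
noise = np.random.default_rng(0).normal(0, 0.5, months.size)
absolute_temp = 14.0 + seasonal + trend + noise

# Climatology: the long-term mean for each calendar month
climatology = np.array([absolute_temp[m::12].mean() for m in range(12)])
anomaly = absolute_temp - climatology[months % 12]

# The absolute series swings by many degrees every year; the anomaly series
# varies by a few tenths, so the slow trend is no longer hidden.
print(f"std of absolute series: {absolute_temp.std():.2f} C")
print(f"std of anomaly series:  {anomaly.std():.2f} C")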
Thanks for all your help, guys.
–
@mark #348
“Why a ‘wall of shame’ for holding a different view? I see the same sentiment on the other side of the fence.”
Let me make myself clear. I wasn’t saying people with different views were deniers and had to be sent to a wall of shame. I am saying that people who post the same old, easily disproven claims that they picked up from some random blog on the net, didn’t even bother to independently verify, and then come here and imply or outright accuse the scientists of committing fraud based on them – those people belong on a wall of shame, especially in isolation, so we can see just how often and how contrived it is.
If you don’t see it as shameful that someone would act like that… well, there’s your problem.
“About time we all grew up, listened, understood and debated.”
Nice sentiment – I wish they would – but instead, a wall of shame would be an appropriate response to such actions.
Pick any thread on here and you will see that people who hold contrary views do get on the board, but it seems to mostly be the ones who are saying something different from the tired old denier canards. More often than not they ignore the responses, repeating the same question over and over with slight changes in wording, until they declare victory by saying the scientists refuse to answer – when in fact they got answered every time but ignored it. That is another kind of behaviour that should be highlighted on a wall of shame.
You can lose the clear answers in a long thread like this, so if the conversation were pulled out and put on a separate thread, where you could see the ‘question, answer, ignore answer, question again’ cycle clearly, people like you would see why these acts are so annoying.
“Sorry for the rant, but as a self confessed layman, I feel I will never have the confidence to believe either side of the argument when everyone involved seems to have tunnel vision.”
As one layman to another, let me give you a piece of advice – question your OWN assumptions BEFORE you question anyone else’s. I’ve just shown why I said what I did, but you assumed it was because of the reason you THOUGHT it was, rather than the REAL reason. If you had just questioned that assumption, you might have wondered if you were missing something and ASKED rather than RANTED.
Make sense?
–
@Grabski #351
Here is a case of needing to question your own assumptions first:
“What if it becomes clear that the climate is threatened
…
What if, what if, what if it becomes clear that the climate is threatened by continued cooling? Then trillions will have been wasted.”
Here the assumption is that the ONLY benefit of advances in efficient, renewable, clean energy is a reduction (or halt) of global warming. However, even if AGW turned out to be totally wrong (it won’t – the scientists are not that stupid), the benefits to our species of being able to advance renewable energy technology or energy efficiency (i.e. cheaper energy in the long run) would far outweigh the costs incurred in the meantime, in my opinion.
Wouldn’t you like to generate your own electricity from the Sun (for example) rather than having to pay some company to generate and transmit it to you, often over very long distances, with all the loss that entails (thus making the electricity you DO get more expensive)?
So before you can say “trillions will have been wasted” you will have to show that there is no benefit, AT ALL, to a move away from fossil fuels other than reduced CO2 emissions – and that is BEFORE we (meaning us interested laymen) even consider whether the science is valid or not.
I may be wrong here, but it is my understanding that a very significant amount of the electricity we generate is WASTED in transmitting it over very long distances. If we just removed or reduced that wastage, we would automatically increase the amount of generated electricity we can actually use, and decrease its cost. That HAS to be a good thing, even if you don’t believe AGW is real, don’t you agree?
–
@Completely Fed Up #357
@Ray Ladbury #358 and #374
Yes, I understand that now. That was my problem as you see above – I hadn’t been able to visualise it, and because I hadn’t seen one, I was struggling to figure it out. I hope I understand correctly now. Yes, I probably should have tried graphing it myself to see, but my time is limited and this is a sort of hobby for me (increasing my knowledge of science and in general), rather than something I can devote a lot of time to.
So I just thought it would be quicker to ask someone to explain it to me, rather than taking time to figure it out for myself. It seems I might have been fundamentally flawed in that assumption – I should have questioned it first :)
(Yes, I know the delay was caused mostly by my inadequate description of my problem.)
Jason says
#409: “Someone is holding a gun to your head. The majority of weapons experts believe that the bullets in the gun are real and powerful enough to blow your head clean off. You don’t think they are. You think they are insufficient and will do no harm.
So you say, go ahead pull the trigger. After that you will decide if the bullets are real and sufficient to blow your head clean off.
This is effectively what you are saying with your argument.”
Actually, if somebody points a gun at my head I am going to wait until I am convinced it is not real before giving them permission to pull the trigger.
I’m a big fan of waiting until I am convinced, and I highly recommend that you do the same if you ever find yourself in a similar situation… especially if your only alternative is putting on a Waxman-Markey brand tin foil hat and praying that it somehow stops the bullet.
Maybe this would be a more interesting discussion if somebody proposed an alternative that, in the event the models are right, would actually have a significant impact on climate change.
[Response: So now your criticism of the science is that politicians aren’t doing enough? Goal-post shifting much? – gavin]
Doug Bostrom says
Looks as though Jason has finally and reluctantly conceded that he has no actual coherent argument against mainstream science, other than some sort of issue with the tenure process.
After the dust settles, what did Jason leave us with? Nothing but ranting about how a promotional step can somehow corrupt entire fields of inquiry, inducing legions of researchers to ignore unbalanced equations in a conspiracy of silence. According to his initial spluttering on the topic, it’s not just climate science, either. What other topics should we worry about? Materials? Are composite aircraft going to come crashing down out of the sky because nobody wanted to upset the tenure applecart? How about astronomy? Should we take another look at that red shift?
How about another take? Jason’s all wound up about climate science in particular because it’s a popular and accepted fad in certain political circles. If one’s ideology is a certain way, one has no choice but to be a contrarian.
Jason says
[Response: This is the umpteenth time you’ve made this statement, and you have still to provide a scintilla of proof for it. It simply isn’t true. Point to one paper that was ‘attacked without regard to scientific merit’ and one case where any editor has been removed for publishing such a paper. And before you start agitating – note that Hans von Storch resigned from Climate Research over poor peer-reviewing (and the actual editor involved in the relevant decision (de Freitas) was not removed), and Saiers served his full 3 year term as GRL editor. So either provide some proof for your claims, or stop making them (here at least). – gavin ]
Are you seriously going to claim that an attempt was not made to remove the editors involved with these papers?
I would point to the M&M papers as an example of scientifically valid results being illegitimately criticized.
[Response: Really, that’s it? The invalid conclusion that the PCA centering mattered or the continual insinuations of scientific misconduct weren’t worthy of criticism? Oh please. – gavin]
And I wonder why McIntyre’s update of Santer et al ’08 is still being reviewed? I don’t have a scintilla of proof, but I’d bet that one of your coauthors is a reviewer and “went to town” on it. What do you think?
As a coauthor of Santer 08, when you rerun the results using the exact same methodology but stopping at 2008 instead of 1999, what does your analysis show? Are the models consistent with observations?
[Response: If you had any genuine interest, you’d actually read Santer et al (2008) and you would see that the extension to 2006 was indeed included in the supplemental material (experiment SENS2). If you’d bothered to ask any of the authors about this instead of simply taking McIntyre’s word for anything, you would have been told that this had been in the main paper in the initial submission and was only moved to the supplemental material at the request of a reviewer who wanted us to stick to the same period used in Douglass et al. The idea that we were hiding some adverse result is, again, just a baseless smear. I have no knowledge of the status of McIntyre’s comment, but if, like you, he hasn’t read the supplemental material, I wouldn’t be surprised if it is not well reviewed. – gavin]
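(A minimal sketch – not the statistical test actually used in Santer et al (2008) – of what “are the models consistent with observations?” can mean for a trend comparison: fit a least-squares trend to the observed series and ask whether it falls inside the spread of trends from an ensemble of model runs. All the data below are synthetic placeholders; real series would come from the observational products and the model archive.)

import numpy as np

def ols_trend(years, series):
    # Least-squares trend (units per year)
    return np.polyfit(years, series, 1)[0]

def trend_consistent(years, obs, model_runs, lo=2.5, hi=97.5):
    # Is the observed trend inside the 2.5-97.5 percentile range of model trends?
    obs_trend = ols_trend(years, obs)
    model_trends = np.array([ols_trend(years, run) for run in model_runs])
    lower, upper = np.percentile(model_trends, [lo, hi])
    return lower <= obs_trend <= upper, obs_trend, (lower, upper)

# Toy example with made-up numbers
rng = np.random.default_rng(1)
years = np.arange(1999, 2009)
obs = 0.005 * (years - years[0]) + rng.normal(0, 0.1, years.size)
model_runs = [0.02 * (years - years[0]) + rng.normal(0, 0.1, years.size) for _ in range(50)]
ok, t_obs, (t_lo, t_hi) = trend_consistent(years, obs, model_runs)
print(f"obs trend {t_obs:.3f}/yr, model range [{t_lo:.3f}, {t_hi:.3f}], consistent: {ok}")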
I try to avoid discussion of CA issues because I know those posts aren’t allowed. But come 2010 I’ll be more than happy to discuss them. My perception of bias is ultimately rooted in concrete examples of what I consider to be scientific malpractice.
[Response: Which you have yet to demonstrate. – gavin]
Andrew says
@CFU: ““This means a one-dimensional measurement of the control parameter is used,”
Uh, no. didn’t you read your statement:”
Um, no, it means you don’t understand what it means.
If you convert everything into carbon equivalents, the schedule of conversion determines that you measure the control in one direction. There are still many directions in which control can be exerted (which is apparently where you stop following the idea). But because of arbitrage between the controls, the financial cost of the various controls is minimized in one direction, and the bulk of the control will be exerted in that one direction.
So it’s a one-dimensional control, with the direction of that control determined by the equivalence ratios and financial costs. Not multidimensional.
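(A minimal sketch of that “one-dimensional control” point, with hypothetical numbers and option names that are not from the comment above: once every option is priced per tonne of CO2-equivalent, a single scalar target – equivalently, a single carbon price – determines where the effort goes, because the cheapest options are exhausted first regardless of which gas or sector they touch.)

# Hypothetical abatement options: (name, marginal cost in $/tCO2e, available reduction in MtCO2e)
options = [
    ("methane capture",      10.0,  50.0),
    ("efficiency retrofits", 25.0, 120.0),
    ("fuel switching",       40.0, 200.0),
    ("CCS",                  80.0, 150.0),
]

def allocate(target_mt):
    # Fill the reduction target from the cheapest CO2e option upward
    plan, remaining = [], target_mt
    for name, cost, avail in sorted(options, key=lambda o: o[1]):
        take = min(avail, remaining)
        if take > 0:
            plan.append((name, take, cost))
            remaining -= take
    return plan, remaining

plan, shortfall = allocate(300.0)
for name, mt, cost in plan:
    print(f"{name:22s} {mt:6.1f} MtCO2e at ${cost:.0f}/t")
print(f"unmet target: {shortfall:.1f} MtCO2e")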
Oddly enough, the fact that you bring up helicopters is useful. One of the most common applications of H-infinity control theory is fly-by-wire helicopters, such as the Bell 205. (I suppose Ian Postlethwaite is the poster boy for this: http://www2.le.ac.uk/departments/engineering/people/academic-staff/ian-postlethwaite)
You are thinking of ways to control helicopters that are simple. But they are not the best-performing helicopter controllers. In very high performance helicopters the controls presented at the user interface are mixed, but the actual control is much more complex and can only be performed by computer.
Toy helicopters are easier to control mainly because they do not have exigent control specifications – you are not flying them with high precision at the extremes of their flight envelopes. So yes, if you want to have a lot of slop and waste in your controller, sure, don’t use a good theory and things might work OK.
The EPFL “Toycopter” is a reasonable example of a serious application of modern control theory to a toy-like helicopter, and it’s not differentially flat; cross-coupling is present.
There is a good deal more uncertainty in the climate than in a helicopter. It would be nice to think that we will be able to avoid any strong nonlinearities in the large-scale quantities, but there are some large-scale nonlinear variables known to be involved.
I don’t see any reason to pretend that the seat of the pants approach to climate control has any particular merit.
Clark Lampson says
Any comments on the paper by one Wolfgang Knorr, of the Department of Earth Sciences at the University of Bristol, coming out in GRL and claiming “no” increase in atmospheric CO2?
[Response: That’s a completely wrong reading of the paper. Read this instead. – gavin]
Doug Bostrom says
I’m beginning to wonder if “Jason” is perhaps somebody playing a practical joke, or just trying to make contrarians look ridiculous.
In any case, for those who accuse RealClimate of censorship and plead for all posts to be heard, be careful what you wish for, heh!
“My perception of bias is ultimately rooted in concrete examples of what I consider to be scientific malpractice.”
“I have a gub”– Take the Money and Run
Ernst K says
All this has encouraged me to sit down and read Callendar (1938) (http://www.rmets.org/pdf/qjcallender38.pdf) in a bit more detail.
What is especially interesting to me is the discussion at the end, where a number of commentators question Callendar’s methodology and conclusions. It’s fascinating how several of the comments sound like they could have come from modern AGW “skeptics”; many of their key arguments are literally 70+ years old.
But these commentators were not a skeptical fringe; they probably represent a sizable portion of the mainstream at the time (I suspect the vast majority, but it’s obviously hard for me to know for sure). To me, the comments clearly show how the “CO2 theory of climatic change” ran counter to the “scientific consensus” of the day. The current “consensus” wasn’t the natural default of the climate community at all; it had to be built up over decades. Callendar was actually congratulated for “his courage and perseverance” in presenting this work.
Callendar’s work stands up remarkably well, but at the time there would be little further progress until Plass (1956) (another great read: http://onramp.nsdl.org/eserv/onramp:16572/n7._Plass__1956corrected.pdf), which further suggests to me that climate scientists of the day were hardly jumping to develop Callendar’s work.
I just wish I could find some contemporary responses to Plass’ work. Any links would be greatly appreciated.
[Response: Actually I’m writing a post on this now. But if you have a library look up some letters between Lewis Kaplan and Plass in Tellus in 1960. – gavin]