Over the last couple of months there has been much blog-viating about what the models used in the IPCC 4th Assessment Report (AR4) do and do not predict about natural variability in the presence of a long-term greenhouse gas related trend. Unfortunately, much of the discussion has been based on graphics, energy-balance models and descriptions of what the forced component is, rather than the full ensemble from the coupled models. That has led to some rather excitable but ill-informed buzz about very short time scale tendencies. We have already discussed how short term analysis of the data can be misleading, and we have previously commented on the use of the uncertainty in the ensemble mean being confused with the envelope of possible trajectories (here). The actual model outputs have been available for a long time, and it is somewhat surprising that no-one has looked specifically at them given the attention the subject has garnered. So in this post we will examine directly what the individual model simulations actually show.
First, what does the spread of simulations look like? The following figure plots the global mean temperature anomaly for 55 individual realizations of the 20th Century and their continuation for the 21st Century following the SRES A1B scenario. For our purposes this scenario is close enough to the actual forcings over recent years for it to be a valid approximation to the simulations up to the present and probable future. The equal weighted ensemble mean is plotted on top. This isn’t quite what the IPCC plots (since they average over single-model ensembles before averaging across models), but in this case the difference is minor.
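For anyone wanting to poke at this themselves, the averaging is trivial once annual global means are extracted from the archive. A minimal sketch in Python; the arrays here are stand-in data and the name runs_by_model in the comment is hypothetical:

    import numpy as np

    # tas: annual global mean temperature anomalies, one row per realisation.
    # Stand-in data here; in practice each row would come from the AR4 archive.
    years = np.arange(1900, 2100)
    tas = 0.02 * (years - years[0]) + 0.15 * np.random.randn(55, years.size)

    ensemble_mean = tas.mean(axis=0)   # equal weight for each of the 55 runs

    # The IPCC-style alternative averages within each model first, e.g. with a
    # hypothetical dict runs_by_model mapping model name -> array of its runs:
    # per_model = [runs.mean(axis=0) for runs in runs_by_model.values()]
    # ipcc_style_mean = np.mean(per_model, axis=0)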
It should be clear from the above plot that the long term trend (the global warming signal) is robust, but it is equally obvious that the short term behaviour of any individual realisation is not. This is the impact of the uncorrelated stochastic variability (weather!) that is associated with interannual and interdecadal modes in the models – these can be associated with tropical Pacific variability or fluctuations in the ocean circulation for instance. Different models have different magnitudes of this variability, spanning what can be inferred from the observations, and in a more sophisticated analysis you would want to adjust for that. For this post however, it suffices to just use them ‘as is’.
We can characterise the variability very easily by looking at the range of regressions (linear least squares) over various time segments and plotting the distribution. This figure shows the results for the period 2000 to 2007 and for 1995 to 2014 (inclusive) along with a Gaussian fit to the distributions. These two periods were chosen since they correspond with some previous analyses. The mean trend (and mode) in both cases is around 0.2ºC/decade (as has been widely discussed) and there is no significant difference between the trends over the two periods. There is of course a big difference in the standard deviation – which depends strongly on the length of the segment.
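In code, this is just a least-squares slope for each realisation over the chosen segment, followed by a Gaussian fit. A sketch, again with stand-in data in place of the archive output:

    import numpy as np
    from scipy import stats

    years = np.arange(1900, 2100)
    tas = 0.02 * (years - years[0]) + 0.15 * np.random.randn(55, years.size)

    def decadal_trends(tas, years, start, end):
        """Least-squares trend of each run over [start, end] inclusive, in degC/decade."""
        sel = (years >= start) & (years <= end)
        return np.array([np.polyfit(years[sel], run[sel], 1)[0] * 10 for run in tas])

    short = decadal_trends(tas, years, 2000, 2007)    # 8-year segments: wide spread
    longer = decadal_trends(tas, years, 1995, 2014)   # 20-year segments: tighter spread
    mu, sigma = stats.norm.fit(short)                 # Gaussian fit, as in the figure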
Over the short 8 year period, the regressions range from -0.23ºC/dec to 0.61ºC/dec. Note that this is over a period with no volcanoes, and so the variation is predominantly internal (some models have solar cycle variability included which will make a small difference). The model with the largest trend has a range of -0.21 to 0.61ºC/dec in 4 different realisations, confirming the role of internal variability. 9 simulations out of 55 have negative trends over the period.
Over the longer period, the distribution becomes tighter, and the range is reduced to -0.04 to 0.42ºC/dec. Note that even for a 20 year period, there is one realisation that has a negative trend. For that model, the 5 different realisations give a range of trends of -0.04 to 0.19ºC/dec.
Therefore:
- Claims that GCMs project monotonic rises in temperature with increasing greenhouse gases are not valid. Natural variability does not disappear because there is a long term trend. The ensemble mean is monotonically increasing in the absence of large volcanoes, but this is the forced component of climate change, not a single realisation or anything that could happen in the real world.
- Claims that a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported. Similar claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation are equally bogus.
- Over a twenty year period, you would be on stronger ground in arguing that a negative trend would be outside the 95% confidence limits of the expected trend (the one model run in the above ensemble suggests that would only happen ~2% of the time).
A related question that comes up is how often we should expect a global mean temperature record to be broken. This too is a function of the natural variability (the smaller it is, the sooner you expect a new record). We can examine the individual model runs to look at the distribution. There is one wrinkle here though which relates to the uncertainty in the observations. For instance, while the GISTEMP series has 2005 being slightly warmer than 1998, that is not the case in the HadCRU data. So what we are really interested in is the waiting time to the next unambiguous record i.e. a record that is at least 0.1ºC warmer than the previous one (so that it would be clear in all observational datasets). That is obviously going to take a longer time.
This figure shows the cumulative distribution of waiting times for new records in the models starting from 1990 and going to 2030. The curves should be read as the percentage of new records that you would see if you waited X years. The two curves are for a new record of any size (black) and for an unambiguous record (> 0.1ºC above the previous, red). The main result is that 95% of the time, a new record will be seen within 8 years, but that for an unambiguous record, you need to wait for 18 years to have a similar confidence. As I mentioned above, this result is dependent on the magnitude of natural variability which varies over the different models. Thus the real world expectation would not be exactly what is seen here, but this is probably reasonably indicative.
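The waiting-time bookkeeping is simple enough to sketch; here series is a hypothetical array of one run’s annual global means, and the margin argument is the “unambiguous record” threshold:

    import numpy as np

    def waiting_times(series, margin=0.0):
        """Gaps (in years) between successive records beating the old one by `margin`."""
        waits, record, last = [], series[0], 0
        for yr in range(1, len(series)):
            if series[yr] > record + margin:
                waits.append(yr - last)
                record, last = series[yr], yr
        return waits

    # Pool the waits from every run over 1990-2030, for margin=0.0 (any record)
    # and margin=0.1 (unambiguous record), then plot the cumulative distribution;
    # e.g. np.percentile(pooled_waits, 95) gives the 95% waiting time.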
We can also look at how the Keenlyside et al results compare to the natural variability in the standard (un-initialised) simulations. In their experiments, the decadal means of the periods 2001-2010 and 2006-2015 are cooler than that of 1995-2004 (using the closest approximation to their results with only annual data). In the IPCC runs, this only happens in one simulation, and then only for the first decadal mean, not the second. This implies that there may be more going on than just tapping into the internal variability in their model. We can specifically look at the same model in the un-initialised runs. There, the differences between first decadal means span the range 0.09 to 0.19ºC – significantly above zero. For the second period, the range is 0.16 to 0.32ºC. One could speculate that there is actually a cooling that is implicit to their initialisation process itself. It would be instructive to try some similar ‘perfect model’ experiments (where you try to replicate another model run rather than the real world) to investigate this further.
Finally, I would just like to emphasize that for many of these examples, claims have circulated about the spectrum of the IPCC model responses without anyone actually looking at what those responses are. Given that the archive of these models exists and is publicly available, there is no longer any excuse for this. Therefore, if you want to make a claim about the IPCC model results, download them first!
Much thanks to Sonya Miller for producing these means from the IPCC archive.
Geoff Wexler says
Re: #73 and #12
Falsifiability part 2.
I’m sorry to return to this topic but it appears to need some more discussion.
Eric.
“whether the average relative humidity is independent of temperature is irrelevant…in a model”
Fair enough. But it does not contradict my comment, which was partly to defend Popper and partly to point out that it is wrong to conclude that Popper would classify global warming as non-scientific. Unlike some other skeptics (anti-Popper skeptics, not anti-global-warming skeptics!) I don’t think that we need to throw away the falsification principle altogether. It is just that applied physics needs to be treated rather differently. Ray Ladbury made the same point in #25; I also agree with him that the overall decision about climate has to be a choice between the consensus and any alternatives that might come up.
As for relative humidity, I should have omitted the word “average”. The corrected version (without reference to averaging) is a much stronger law and thus easier to falsify. From what I have read in RC, relative humidity is an approximate output from the models, not an input as your comment suggests to me. Its approximate constancy is part of the understanding of global warming theory which is an important part of the subject. But is it necessary that it be universally true as required by Popper?
This is where Monaghan et al enters the picture. (see #12)
If it turns out that this work is corroborated, i.e. that the humidity law breaks down over the Antarctic, that would be an excellent example confirming falsification in action. In a non-scientific subject this kind of falsification would be logically impossible. Suppose that existing climate models can be shown to be inconsistent with Monaghan et al’s paper. That would be a further example of falsifiability, now applied to the models. But Gavin might well conclude that this modification has little effect on the estimate of the warming of 3 degs. C produced by doubling the CO2. It could get worse: perhaps revised models would come out which would be consistent with dry air over the Antarctic but also have no significant impact on the 3 degs. C estimate. Would that indicate that the forecast was non-falsifiable? No, because it can be tested directly by waiting. It would indicate something else: that the forecast does not depend on the universal and exact nature of the humidity law. There are different degrees of falsifiability, and Popper’s ideal is mainly intended to apply to universal laws (that partly depends on which book by Popper you choose to read).
[Response: That is a good example because it undercuts your point completely. The Monaghan paper only speculates that water vapour changes in the models might be excessive – it shows no data confirming that, nor references any. The water vapour comment is just thrown out as a hypothesis. How therefore is that going to prove anything? You still have a situation where sparse data and imperfect models appear not to match – but unless you know why, what is to be done? Monaghan et al might well be correct – but it might be caused by too much uplift over the continent by the advection scheme, or issues with the convergence of grid boxes near the pole rather than anything to do with radiative physics. Plenty of people are working on all those issues, but until it gets fixed or understood better, Popper doesn’t really come into it. Even then, whatever turns out to be the problem will be addressed and we will carry on. – gavin]
Alexander Harvey says
Re #48
Larry,
A big issue with his passive model is that it seems to model an Earth that has no thermal mass. In particular no oceans.
His temperature projection is a simple function of the GHG concentrations. If the requirement were to model the effects of a 1% per-six-months rise, his passive model would simply arrive at his 80-year figure after 40 years.
The GCMs and better simple models should do something quite different: they should project a temperature rise that lags well behind his passive model. This is the effect of trying to heat up an Earth that has thermal mass.
Also, his suggestion that the GCMs all predict little more than passive global warming is a bit of nonsense, as they project not just the headline temperature but also its zonal distribution, not to mention rainfall and cloud cover (which he later notes).
Simple models can (and should) be in agreement with the headline temperature projections not just for one scenario but for many different scenarios. Somehow I doubt his simple equation would pass such a test.
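For what it’s worth, the lag is easy to demonstrate with the simplest model that has any thermal mass at all, C dT/dt = F(t) − λT. A sketch with purely illustrative numbers (a heat capacity of roughly 200 m of ocean and a mid-range feedback parameter):

    import numpy as np

    C = 8.4e8      # illustrative heat capacity, J/m^2/K (~200 m of ocean)
    lam = 1.25     # illustrative feedback parameter, W/m^2/K
    year = 3.15e7  # seconds per year
    n = 160
    F = np.linspace(0.0, 3.7, n)   # forcing ramping up to a CO2 doubling

    T = np.zeros(n)
    for i in range(1, n):
        T[i] = T[i-1] + year * (F[i-1] - lam * T[i-1]) / C

    T_passive = F / lam   # zero thermal mass: instant equilibrium at every step

    # While the forcing ramps, T lags well below T_passive; only long after the
    # forcing stabilises do the two converge. A model with no oceans misses this.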
Best Wishes
Alexander Harvey
Richard Treadgold says
re: 23
Gavin: Thank you for responding so fully to my questions.
You say: [Try judging who sounds credible.]
Honest? Credible? Both are admirable. I guess you mean to imply that even an honest man could be deluded, so I ought to judge what is said and not just who says it. Your advice is good. Judging credibility is not easy, so I will keep a weather eye out for honesty, just in case.
You say: [Frank’s estimate is naive beyond belief]
I cannot judge the honesty or the credibility of this, though the obscure affront sounds a discordant note.
You say: [how can it possibly be that the uncertainty is that large when you look at the stability of the control runs or the spread in the different models as shown above?]
Forgive me, as I’m no modeller, but is not uncertainty just a feature of a measurement, so whatever provides a measure, whether it’s a ruler, a gauge, microwave detector, photographic film, etc., has an uncertainty associated with it? If a model (or any calculation) outputs a temperature, then uses that very measurement to output another temperature with another uncertainty, and then again, and again, then those uncertainties compound, don’t they?
Patrick Frank was describing the “minimal ±10.1% cloud error” not depicted by the IPCC in the A2 SRES projection. Does not that compounding error have an influence beyond the ‘stability’ or the ‘spread’ of the different runs (whatever those terms mean)? So wherein lies the naivety? I’m trying to understand how credible the models might be. Expecting me to believe 50 years is a big ask after not getting even two weeks’ worth out of the weather forecast. So why should I? (I’m speaking for other people—the people being asked to combat global warming.)
I said: What are the ramifications for the AGW hypothesis of the lack of atmospheric warming over the ten years since 1998? Arguably, since 1998 was driven by an exceptional El Nino, there’s been no real warming since about 1979, just going by eyeball. It’s up and down, but no trend you could hang your hat on. Temperature today is the same as 1979. See Junk Science.
You say: [You are joking right? Junk Science indeed.]
Yes, that was droll, and I smiled. But you must know that Junk Science merely redisplays data from respected scientific temperature groups (including GISS), so the drollery just covered up your disinclination to answer, didn’t it? But those datasets are trusted by a lot of people. So, for their sakes, too, I’d like to repeat my question, if that’s all right? What are the ramifications for the AGW hypothesis of the lack of global atmospheric warming over the ten years since 1998? :>)
I hope you understand why I’m pressing the point. Being an argument about warming, the temperature is central for everyone. My role with the Climate Conversation Group is to speak to public meetings about global warming, and I’m trying to gain a good understanding of the science, or at least where to locate vital bits of it. The actual temperature is fundamental. We can scarcely argue over the cause of the temperature if we don’t know or disagree on what the temperature is! So if you can refute the non-warming then I need to know—rather, I’d very much like to know—what your reasons are. So I can pass them on.
I understand that it’s impossible to ‘know’ the average temperature of the earth, whose surface varies so wondrously, but even so, we try. I seem to recall it was James Hansen who figured out a method that gives an answer we can work with.
I said: If CO2 is to warm the atmosphere, and warmer still with more CO2, then if CO2 rises but temperature is constant or falls, the theory is disproved. Done. Where is the faulty reasoning? Or what is the change to the theory?
You said: [The ‘theory’ that there is no weather and no other forcings and no interannual and no interdecal variability would indeed have been falsified. Congratulations. Maybe you’d care to point me to any publications that promote that theory though because I certainly don’t recognise that as a credible position. Perhaps you’d like to read the IPCC report to see what theories are in fact being proposed so that you can work on understanding them.]
You quite properly advise me to get more understanding! That’s why I’m enquiring—I acknowledge my ignorance. But people are asking questions of me, and I would like to respond to them, so I would gently ask you to state the reasons for your comments.
Since I imagine the IPCC report is a poor textbook, I wonder, could I ask you to address yourself to the reasoning in my question, rather than ask me to find some publication (that you know does not exist) that reflects it? I suspect that imposes on you a kind of burden of having to go back almost to first principles, perhaps, to answer my naive inquiries, but is it not the duty of the learned to spread knowledge? It might sound as though I’m flattering you to get my own way, but I’m not. I’m pressing on you the most rigorous logic to force you to answer me with science, not your personal preferences. Are you up to that, Gavin?
You see, I cannot accept your first response. You were surely being less scientific than sarcastic, if I were honest (even if not credible). For I did not mention weather, or variation. I simply observed that the temperature had not increased, or had not trended upwards, and asked you what that meant for the AGW theory.
If the temperature record is accepted, then for 20 years warming has been obscured by natural variation. If that was true, then how do we know that warming was present? And if warming was below the natural noise, how on earth can anyone detect the size of the human signal in the warming? If that is so then AGW need not fill us all with this dreadful twin sense of guilt and approaching fear.
So these, sir, are valid questions even from the mouths of idiots. I am faced with having, perforce, to answer such questions, and I would be grateful for all the help I can get. I’m asking others the same questions, since not only do I not know the truth, I don’t even know who’s got the truth—that’s how little I know!
Best regards,
Ray Ladbury says
Richard Treadgold,
Given the disingenuous tone of your post, I rather doubt that you are serious about wanting to learn more. However, on the off chance that you ever do become genuinely curious, here is a course of study.
First, Good God, man, learn some statistics! That anyone could look at the temperature data over the past 30 years and say there is no warming trend defies belief! You have a noisy dataset, but the linear trend is clearly upward. See Tamino on this:
http://tamino.wordpress.com/2007/08/31/garbage-is-forever/
Second, learn some physics. The greenhouse effect is known science. Why should it have stopped magically when Earth atmospheric CO2 content was at 280 ppmv? I heartily recommend Raypierre’s book on climate:
http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html
Finally, learn some of the history. This is not some upstart environmentalist plot. The science is 150 years old! See Spencer Weart’s page:
http://www.aip.org/history/climate/index.html
If after looking these things over you still don’t feel you have enough ammo to blow the denialists out of the water, come back.
Lawrence McLean says
Richard Treadgold
regarding your statement:
“Last point: If CO2 is to warm the atmosphere, and warmer still with more CO2, then if CO2 rises but temperature is constant or falls, the theory is disproved. Done. Where is the faulty reasoning? Or what is the change to the theory?”
It is not clear if that is/was your opinion, or a repetition of comments that you have heard. Either way, it is a ridiculous statement. In any large and complex system with a relatively long response time, the response will be “noisy”.
Another climate analogy that shows how far off the mark the comment is, is the familiar seasonal change behaviour. The fundamental seasonal forcing is the change in the fraction of the given hemisphere that is in sunlight. In spite of the smooth change of that fraction (the forcing), the response of average temperatures (daily, weekly and even monthly) is far from smooth. Sometimes even a monthly average is out of sequence from what would be expected from the forcing.
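That behaviour is trivial to reproduce: drive a lagged response with a perfectly smooth seasonal forcing plus “weather” noise, and the monthly means will occasionally fall out of sequence. A toy sketch, all numbers illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    months = np.arange(12 * 30)                      # 30 years of months
    forcing = np.sin(2 * np.pi * months / 12.0)      # perfectly smooth seasonal cycle

    temp = np.zeros(months.size)
    for i in range(1, months.size):
        # lagged response to the smooth forcing, plus weather noise
        temp[i] = 0.8 * temp[i-1] + 0.2 * forcing[i] + rng.normal(0.0, 0.3)

    # Months where the response moves against the smooth forcing are common:
    out_of_step = np.mean(np.sign(np.diff(temp)) != np.sign(np.diff(forcing)))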
To put it bluntly, it is a stupid and ignorant statement.
Thinking about this comment reminded me of a post that reflected that there seemed to be a lot of Electrical Engineers that are climate change denialists. Maybe it is because some electrical engineers cannot get their heads around dynamic systems that have very slow response times (compared to electrical response times). Just a thought!
Geoff Wexler says
Re : answer to #101.
Thanks for your interesting reply Gavin.
“That is a good example because it undercuts your point completely”
“Popper doesn’t really come into it” (out of context).
In view of your comment, I conclude that it was a very bad example and shall therefore withdraw it. But I am not sure about the undercutting bit. It reminds me of a discussion I read about ambiguity. This is often caused by a confusion between the main point, the sub-point and the exemplifications of the points. I’m sorry if I was ambiguous. It is not hard to choose another example or the same example with a different observational method.
I think I should try again and tidy up:
1. Popper need not come into it (he may have been overrated), but since others have brought it up, falsifiability deserves a brief exploration.
2. Your subject is far more falsifiable than e.g. economic modelling because unlike the latter, it is pinned down by highly exact and therefore highly falsifiable laws of universal validity.
3. There are some new universal laws which come into the explanations like the log law and the humidity law which can be used as an aid to understanding. These are also falsifiable in principle. “In principle” is good enough. This discussion is about logic.
4. The models are in a different category from the foundation laws because they involve initial conditions, additional hypotheses, approximations, etc. You get the same thing all over science. It is frustrating from a practical standpoint if they have loose joints, but climate models don’t seem special to me. They can be tested. They improve. Models of string theory, on the other hand, might have been criticised by Popper if he had been alive, because, as far as I know, they don’t yet come with a method for testing.
Jared says
Ray Ladbury Says:
12 May 2008 at 19:44
“OK, Jared, here’s a quiz. How long does an El Nino last? How about a PDO? Now, how long has the warming trend persisted (Hint: It’s still going on.) Other influences oscillate–the only one that has increased monotonically is CO2. Learn the physics.”
1. PDO/large-scale ENSO trends can last between 20 and 40 years, from the brief time we have observed them.
2. This most recent +PDO phase began in 1977 and generally lasted until 2007, assuming that the -PDO period has truly begun. About 30 years.
3. The strongest warming trend was from 1979-1998. Since then, if one looks at ALL of the major global temperature metrics, there has been very little or no upward trend over the past ten years.
CobblyWorlds says
#100 Nylo,
According to models, much of the GW will happen in mid to high latitudes as opposed to the tropics. e.g. http://www.globalwarmingart.com/wiki/Image:Global_Warming_Predictions_Map_jpg I see no reason for serious doubt about that particular model result. Loss of infra-red has a bigger role at high latitudes. In the extreme case think about the long arctic winter “night”: no sunlight coming in, just infra-red going out.
As for rapid warmings, you don’t see spikes in the IPCC projections, but they only have a limited representation of carbon cycle feedbacks. If we go through a massive output of carbon (as CO2 or CH4) into the atmosphere from some part of the biosphere, we’ll have a better handle on what to expect, and hence how to model such feedbacks. (A bit like finding out what force will break your leg – using concrete slabs.)
And as for where much of the warming will come from, check out trends online’s inventory of global and regional emissions data/plots: http://cdiac.ornl.gov/trends/emis/em_cont.htm
Here’s China: http://cdiac.esd.ornl.gov/trends/emis/prc.htm
Here’s the US: http://cdiac.esd.ornl.gov/trends/emis/usa.htm
Now check out the “per capita” emissions for those countries, and bear in mind that most of the Chinese still aren’t living particularly emissions-intensive lives (not that they’re going to hit the US level).
dhogaza says
Sigh … here it is again … the El Niño to La Niña cherry-pick.
Surely you can do better…
Ray Ladbury says
Jared, given that 1998 featured a big El Nino, and so is anomalous, I do not see how you can draw a negative trend through the data. 1998 is still much warmer than 1999 or 1997.
And of course, I would like to see how you explain stratospheric cooling using PDO, along with a range of other trends. And finally, there’s the question of why CO2’s greenhouse effect should magically stop at 280 ppmv. I’d especially like to see that.
Hank Roberts says
> the past ten
But it goes up to eleven!
Jared says
#109
No cherry-picking here. If you take the mean between the strong El Nino of 1998 and the strong La Nina of 1999-2000, and then compare the temps of 2001-2008 to that, you will see that GISS is the only metric that shows real warming. And that is with three El Ninos between 2002-07, and no Ninas in that period.
#110
See above.
What relationship do you see between stratospheric cooling and CO2?
And as far as the greenhouse effect stopping at 280 ppmv…that question rests on the assumption that previous warming was mostly (entirely?) due to CO2 concentrations.
Ray Ladbury says
Jared, CO2 accounts for 20-25% of the 33 degrees of greenhouse warming here on Earth. Why should that magically stop at 280 ppmv – the pre-industrial value? And if you don’t understand the issue with stratospheric cooling, [edit]….
[Response: ….. maybe a link is more useful? – gavin]
Jared says
Ray:
I ask you a simple question, and that is your response? I know there are different theories on stratospheric cooling, and I wanted to know which you subscribe to.
[edit – this is not a forum for random contrarian talking points to be trotted out one after another. Stick to a point and be serious or go elsewhere.]
CobblyWorlds says
#112 Jared
Including 1998 as you do is Cherry Picking.
In any of the 3 land/ocean datasets 1998 is a clear outlier.
Here are the 3 main surface dataset graphs.
GISS http://data.giss.nasa.gov/gistemp/graphs/
CRU http://www.cru.uea.ac.uk/cru/data/temperature/
GHCN http://lwf.ncdc.noaa.gov/oa/climate/research/trends.html
See for yourself, you’re looking for a dirty great big spike at 1998, not characteristic of the overall trend since the mid 1970s.
Cooling of the stratosphere & mesosphere is a consequence of the enhanced greenhouse effect: http://www.atmosphere.mpg.de/enid/20c.html
It’s part of the fingerprint of the observations predicted for the enhanced greenhouse effect. Your notion also would not explain diurnal range changes.
Jared says
#115
Wow, so using 1998 at all is cherry picking, huh? I guess it should be eliminated from the record altogether then (since it is so anomalous)…which, by the way, would negate a significant amount of the warming in the 1990s attributed to GHG. How much warming is there from 1990-2000 if you take out 1998?
It would be cherry picking if I used 1998 as a starting point and said, “Look, 2008 is much cooler than 1998, there has been no warming in the past 10 years.” That is NOT what I’m claiming.
What I’m pointing out is that if you look at HadCRUT and the two satellite metrics, global temperatures have shown no appreciable rise the past 10 years. GISS is an outlier in that it has 1998 a little cooler than the others, 2005 a little warmer, and 2007 a LOT warmer. This creates an entirely different trend when looking at the data over the past 10 years.
tamino says
Re: #116 (Jared)
First, there’s not as much difference between GISS and HadCRU as you claim. I seriously doubt that you have much knowledge about the temperature records and a proper analysis of same; you need to study this post.
You also fail to understand that global temperature is noisy enough that 10 years is not long enough to get a decent estimate of the rate of change. Limiting to 10 years enables you to focus on the wiggles created by the noise, and convince yourself that there’s no signal there; you need to study this post.
Jared says
Tamino…
1. You make a lot of assumptions about me, rather unfairly I might add.
2. Name one thing I stated that was false. Does GISS show a much greater warming trend than HadCRUT and the satellite metrics over the past 10 years? Yes. Call it noise or whatever you want, it is a fact. Has HadCRU shown decreasing temps over the past 2.5 years? Yes. 2006 was cooler than 2005, 2007 cooler than 2006, and 2008 is virtually guaranteed to be cooler than 2007. Did GISS show warmer temps for 2005, 2006 and 2007 than HadCRUT and the satellite records? Yes.
3. About the ten year thing…how many times have AGW proponents pointed to ten year periods to show warming? Many. Don’t tell me you can’t discern any trends in a 10 year period, it’s a two-way street.
CobblyWorlds says
Jared,
In post 112 your result would be clearly affected by the outlier 1998. Anyone using 1998 as a start/stop date for a claim is wrong (as far as I can see), whether they’re arguing for or against the reality of AGW. However, Tamino is right: the greater error is probably the fussing over a few years in the first place. (I’m no statistician; my electronics has always been practically focussed.)
To be specific about Tamino’s first link. For me this is the key issue:
http://tamino.files.wordpress.com/2008/01/resid1.jpg
I see no qualitative change in that graph that’s atypical for the whole period. In that graph if temperatures are swinging away from the long term trend 1975 to 2007, then there should be a significant change in the graph (as it’s the difference between the trend and each year’s value). That uses 1998’s data, but it goes right through, so it can be seen as noise.
Richard Treadgold says
104. Ray Ladbury.
Thank you for the references. I’m studying them now. I am grateful to you for accepting, however grudgingly, that I asked honest questions. You advise me to learn statistics, physics and history, and I am doing that.
Thanks for your invitation to return with further questions, but you haven’t answered these ones. To apply a small correction, I’m learning about climate science not so as to blow anyone “out of the water”, as you so militarily put it, but in order to find the truth.
105. Lawrence McLean.
When you offered the analogy of the seasonal hemispheric temperature changes, I understood and I thought this could be going somewhere.
When you reminisced about Electrical Engineers and slow response times I admit I wondered why.
You didn’t address my questions.
I would like you both to re-read my post, pretend that the person asking the questions is someone you respect and that a society is hanging on the answers, and try again.
Thank you.
Barton Paul Levenson says
Jared asks:
The balance of heat in the stratosphere is due to absorption of ultraviolet light by ozone and emission of infrared light by carbon dioxide. The former won’t change much; the latter is rising, and thus the stratosphere is cooling. No other method of warming the Earth would have that effect. (Or at least I can’t think of one.)
Barton Paul Levenson says
Jared posts:
Why 10 years, Jared? That’s what makes it cherry-picking; the decision to start from 1998. Why 1998 and not 1995 or 2001?
You have to use all the data, not just a segment of it that seems to support your point of view. Doing the latter is what is defined as “cherry picking,” and the denialist argument of “no global warming since 1998!” besides being wrong is a classic example of cherry picking.
Ray Ladbury says
Richard Treadgold, first, how do you figure that climate has not warmed since 1998? As has been stated many, many, many… times here, 1998 was a huge El Nino. It cannot be considered a starting point. Tamino has analyzed this nearly to death here
http://tamino.wordpress.com/2007/08/31/garbage-is-forever/
You would do well to read over Tamino’s blog.
I am not really the best one to address the modeling questions, as climate science is not my day job. However, with respect to the compounding of errors, this presumes systematic bias, not just random error. Since we know that clouds both warm and cool, I rather doubt that the result is a consistent +10%. What is more, when you have uncertainties in a model the thing to do is carry out runs that cover the range of uncertainties. Suffice to say, there have been lots of attempts to show that climate models are bogus. These attempts have always been based on a fundamental misunderstanding of the models–sometimes innocent, sometimes intentional.
The thing about the climate models is that they are dynamical physics-based models. You put the physics in, constrained by independent data, and look at what comes out. There isn’t a lot of wiggle room for getting a better fit. The models do a very good job at reproducing the basic trends we see, and this provides strong evidence that the physics is not drastically wrong.
I do not know what your background is, but my advice is to come at the problem by understanding the physics. Further advice: ask questions, but be cautious about hijacking discussion threads, and do not discount the expertise of the professional scientists doing this work or the countless others in relevant fields who have looked at the science and found it cogent.
Anthony Kendall says
Reading through the comments here, it’s clear that there is a real passion for the scientific exploration of AGW. The data are interesting, the climate forecasts are suggestive, and yet there is enough going on within both that everyone, laypeople and scientists alike, can actively debate opposing viewpoints.
This forum seems to me rather like an undergraduate college course in which students are told to choose sides on a topic and then defend those sides. The ideas thrown back and forth are fairly well thought out, and often quite good.
But, let’s make no mistake, undergraduate-level debating is not the same thing as rigorous scientific analysis. You see, after finishing up that freshman- or sophomore-level general ed course, a climate scientist must then study for seven years to get their doctoral degree. And then, to have reached the point where those maintaining this blog are at, another decade–at least–of full-time work is required.
Those years of dedicated work and study do not make the scientist correct, but they should engender a certain amount of respect for their arguments. Likewise, the fact that a very large number of scientists agree on AGW, and far fewer dissent, does not mean that the majority is necessarily correct. But that lopsided consensus ought to at least receive careful consideration.
If upon visiting your physician you receive an undesirable diagnosis, it’s advisable to get a second opinion, or perhaps even a third. However, if you consistently get that undesirable diagnosis, doctor after doctor, this should tell you something. You can always eventually find an opinion that is more to your liking, but I don’t think that any of us would consider this sound medical judgment.
Why then, when the vast majority of climate scientists look at what is happening to our climate and diagnose it as AGW, do so many insist upon getting another opinion? Why is an article in Skeptic magazine, hardly a technical journal, being trotted out to attack a technical and very straightforward blog entry? The fact that those denying AGW need to cite vast global conspiracies of grant-hungry scientists, or universal academic ignorance of the urban heat island effect, to argue their point should make their point more than suspect.
But then, the great thing about science is that it doesn’t care what peoples’ opinions are. Here, the facts are in, AGW is real, all that’s left to debate are the effects of our species’ actions. But then, I guess that’s just my opinion.
Lawrence McLean says
Richard Treadgold:
When someone makes a ridiculous comment and I feel that I can contribute some constructive criticism of it, I will do so. The correct interpretation of my tone is that I am being blunt, rather than disrespectful. [edit]
As far as answering your questions goes: my respectful advice (I have stated this in another recent post) is that you should have confidence in the climate scientists as represented by the contributors to this site and the IPCC. This is a very good web site, and you will find all of your answers, or references to them, here.
Another bit of advice, be ruthless with your ideas and prejudices. Objective reality, unfiltered from your own prejudices and preconceptions must always be the benchmark for your ideas when seeking the truth. If someone tells you that your house is on fire you do not go around asking other people: “Is my house on fire?” in the hope that you will get an answer that you like!
Clive says
Skeptic magazine recently published an article by Patrick Frank “A Climate of Belief” here
http://www.skeptic.com/the_magazine/featured_articles/v14n01_climate_of_belief.html#note40
In it there is also reference to “Is there a basis for global warming alarm?” by Richard S. Lindzen
Alfred P. Sloan Professor of Atmospheric Science
Massachusetts Institute of Technology
see it here : http://www.ycsg.yale.edu/climate/forms/LindzenYaleMtg.pdf
I am a layman but usually quite good at assessing the weight of scientific evidence and evaluating competing hypotheses. These two articles left me bewildered to say the least. What is wrong with the arguments presented here? I could not figure it out even with the help of Gavin’s comments above. On the face of it they look like devastating critiques of the reliability of climate modelling.
Can someone PLEASE refer me to a comprehensive and accessible critique? I must say that these arguments would have swayed me if I were to be a policy-maker.
Please help!
Anthony Kendall says
Clive,
I just finished Frank’s article, and I have to say that it really makes two assumptions that aren’t valid (as has been pointed out by Gavin).
1) The cloudiness error he reports, of ~10%, is the standard error, i.e. it’s the root-mean-square error. That is, you take the GCM ensemble cloudiness forecasts across all latitudes, subtract the observed cloudiness, square the result, average over latitudes, and take the square root. This is perfectly acceptable for characterizing many types of errors.
However, in this case he uses this 10% number to then say that there is a 2.7 W/m^2 uncertainty in the radiative forcing in GCMs. This is not true. Globally averaged, the radiative forcing uncertainty is much smaller, because here the appropriate error metric is not to ask, as Frank does, “what is the error in cloudiness at a given latitude” but rather “what is the globally-averaged cloudiness error”. This error is much smaller (I don’t have the numbers handy, but look at his supporting materials and integrate the area under Figure S9); indeed it seems that global average cloud cover is fairly well simulated. So, this point becomes mostly moot.
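To see the distinction with made-up numbers: latitude-band errors of opposite sign can have a sizeable RMS while nearly cancelling in the area-weighted global mean. A sketch; the error profile is invented purely for illustration:

    import numpy as np

    lat = np.linspace(-87.5, 87.5, 36)               # latitude band centres
    w = np.cos(np.radians(lat))
    w /= w.sum()                                     # area weights by band
    err = 10.0 * np.sin(np.radians(4 * lat))         # invented cloudiness errors, %

    rms = np.sqrt(np.mean(err ** 2))                 # ~7%: the kind of number Frank uses
    global_mean = np.sum(w * err)                    # ~0%: the band errors largely cancel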
2) He then takes this 10% number, and applies it to a linear system to show that the “true” physical uncertainty in model estimates grows by compounding 10% errors each year. There are two problems here: a) as Gavin mentioned, the climate system is not an “initial value problem” but rather more a “boundary value problem”–more on that in a second, and b) the climate system is highly non-linear.
Okay, to explain. A linear system is one in which a 10%–say–change in the inputs will yield a predictably scaled percent change in the outputs. And, at any level (for instance of CO2 concentration), this would be true. The oft-quoted temperature sensitivity to a CO2 doubling assumes to a certain degree that the climate would respond linearly to greenhouse gas forcing. In fact, the climate system is highly non-linear, with a whole variety of positive and negative feedbacks that assure that the behavior of the system at a certain state of temperature, CO2, humidity, etc. will be different than at some other state.
The significance of the non-linearity of the system, along with feedbacks, is that uncertainties in input estimates do not propagate as Frank claims. Indeed, the cloud error is a random error, which further limits the propagation of that error in the actual predictions. Bias, or systematic, errors would lead to an increasing magnitude of uncertainty, but the errors in the GCMs are much more random than systematic.
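The two error types behave very differently in even a toy damped system, and neither compounds without limit once there is any restoring feedback. A sketch; the damping rate and error sizes are invented:

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 200, 0.2   # k: fraction of any excursion damped away each year

    def run(error):
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = (1 - k) * x[i-1] + error()
        return x

    biased = run(lambda: 0.1)                  # systematic: settles at 0.1/0.2 = 0.5
    random = run(lambda: rng.normal(0, 0.1))   # random: bounded jitter around zero

    # Only with k = 0 (no damping at all, Frank's implicit assumption) would the
    # biased run grow without bound; with feedbacks the error saturates instead.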
Even more significantly, the climate system is a boundary-value problem more than an initial-value problem. An initial-value problem is one where you specify completely the initial state of a system and then let it go. If you’ve correctly described the initial condition, and the physics of the system, it should behave appropriately moving forward. However, initial-value problems are really only appropriate for closed systems. These are ones where there is no exchange of mass or energy outside of the system. Or, that exchange is small compared to the mass and energy fluxes within the system.
On the contrary, the climate system is an open system in both ways, but particularly energetically. The energy incident upon the system from the sun drives the system in its entirety, and greatly dwarfs the energy fluxes within the system. Therefore, what’s happening at the boundary of your system will drive what happens inside. Said another way, accurately characterizing the boundary conditions is much more important than describing the dynamics of energy exchange within the system – unless those affect the boundaries. So, getting things like global average albedo and global average cloudiness right will dictate the radiative exchange to a far greater degree than the regional behaviors of the models.
Another way to look at this is that climate modelers must first “spin-up” their models for as much as 100 years to mitigate the effects of inappropriate initial starting values. After that time, the simulated system approaches an equilibrium and is ready for the actual simulation period. This is exactly how boundary-value problems behave, and this is one method to reduce the uncertainty in representing things like ocean temperature profiles in the models.
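Spin-up is the same mathematics seen from the other direction: start the system from two wildly different initial states under identical boundary forcing and they converge, after which the initial condition is irrelevant. A sketch, again with invented numbers:

    import numpy as np

    n, k, F = 300, 0.05, 1.0                 # slow damping toward a forced equilibrium
    cold, warm = np.zeros(n), np.zeros(n)
    cold[0], warm[0] = -5.0, 5.0             # very different initial states
    for i in range(1, n):
        cold[i] = cold[i-1] + k * (F - cold[i-1])
        warm[i] = warm[i-1] + k * (F - warm[i-1])

    # After ~100 steps both runs sit at the boundary-condition equilibrium (F = 1.0);
    # the memory of the start is gone. This is what model spin-up accomplishes.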
To summarize my points:
1) Frank asserts that there is a 10% error in the radiative forcing of the models, which is simply not true. At any given latitude there is a 10% uncertainty in the amount of energy incident, but the global average error is much smaller.
2) Frank mis-characterizes the system as a linear initial value problem, instead of a non-linear boundary value problem. This crucial difference means that his argument about propagation and amplification of uncertainties does not apply here. The real system is rife with positive and negative feedbacks that will respond very differently depending on the state of the system. There are certain instances where uncertainties would indeed propagate, including rapid ice sheet melting, and that is why the IPCC includes the caveat that their results do not include such effects (Which could actually lead to much more rapid warming).
Let me also state here, Frank is a PhD chemist, not a climate scientist–though there are certainly areas of real overlap there. This is why he’s liable to make such elementary mistakes when describing how the system works. It’s akin to asking a radiologist to perform a biopsy. Yeah, they both work with cancer, but in very different ways.
There’s also a reason why this article is in Skeptic instead of Nature or Science. It would not pass muster in a thorough peer-review because of these glaring shortcomings.
I hope this (somewhat long) post helps. Sorry I didn’t get a chance to read your second link.
tamino says
Re: #118 (Jared)
Do tell. Show us where “AGW proponents” use trends determined from a 10-year time span of data to show warming.
Chris says
Clive,
Re Lindzen’s “Powerpoint” presentation, the problems seem to arise from some essentially incorrect assertions.
For example, look at his first “summary” slide (slide 11). It is stated:
[“2. Although we are far from the benchmark of doubled CO2, climate forcing is already about 3/4 of what we expect from such a doubling.
3. Even if we attribute all warming over the past century to man made greenhouse gases (which we have no basis for doing), the observed warming is only about 1/3-1/6 of what models project.”]
Each of these is wildly incorrect. Focussing on point #3, we can determine that the 20th century warming (’til now) has been around 0.8ºC (either NASA GISS or Hadley data).
We know that the atmospheric CO2 concentration has risen from around 300 ppm at the start of the 20th century to 385 ppm now.
It’s straightforward to calculate that, with a climate sensitivity of 3ºC of warming per doubling of atmospheric CO2 (the “best estimate” of the climate sensitivity, which is consistent with the model data), an increase in atmospheric CO2 from 300 to 385 ppm should yield an equilibrium temperature increase of around 1.1ºC.
Thus rather than having “only about 1/3-1/6 of what models project”, we’ve already had 0.8/1.1 or 3/4 “of what models project”.
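That arithmetic is just the logarithmic forcing law, ΔT = S·log2(C/C0), and is easy to check (S = 3ºC per doubling assumed, as above):

    import math

    S = 3.0                               # assumed sensitivity, degC per CO2 doubling
    dT = S * math.log(385 / 300, 2)       # equilibrium warming for 300 -> 385 ppm
    print(round(dT, 2))                   # 1.08 degC, i.e. "around 1.1"
    print(round(0.8 / dT, 2))             # 0.74: observed/expected is ~3/4, not 1/3-1/6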
However note that the climate sensitivity relates to the Earth’s temperature rise at equilibrium. It takes a significant amount of time for the Earth’s temperature to re-equilibrate to a higher greenhouse forcing, due to the large inertia resulting from a massive ocean heat sink. If one assesses the models, one can estimate that we still have something like 0.5-0.6ºC of warming to come from the levels of greenhouse gases already in the atmosphere.
e.g. http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf
So we might expect that the 385 ppm of atmospheric CO2 (should we stop all emissions dead right now) would give us an equilibrium temperature rise at some time in the future of 1.3-1.4ºC above early 20th century levels.
That’s a bit higher than models would predict within a set of parameters equivalent to a climate sensitivity of 3ºC. Probably some of the “excess warmth” is due to the solar contribution of the early 20th century (e.g. the period 1900ish-1940ish).
My own feeling is that Lindzen would like to promote the notion of his “Iris” effect that he describes in his Powerpoint presentation. This seems to be a notion in which the atmosphere responds to warming by increasing cloudiness that counteracts the warming (a sort of homeostatic effect that regulates the Earth’s temperature with a cloud feedback). Clearly if one wishes to promote this notion, one needs to assert that there hasn’t been as much warming as expected.
Interestingly, another contrarian notion doing the rounds right now is that much of the 20th century warming can be explained by some unspecified cloud feedback to ocean circulation (“Internal radiative forcing”!). I suspect that were you to read an account of that, it might seem wonderfully plausible too! However, unless I’m misinterpreting everyone’s clever notions, it (a positive/warming cloud feedback) is in direct contradiction to Lindzen’s notion (a negative/cooling cloud feedback).
Hank Roberts says
Actually Lindzen’s iris works the other way — tall skinny clouds in warmer conditions; broad flat clouds in cooler — if this is correct.
Someone else has suggested it works the opposite way but has the same result, adjusting to cool the planet automagically as needed. Was that Christie, maybe?
” that cloudy-moist regions contract when the surface warms and expand when the surface cools. In each case, the change acts to oppose the surface change, and thus presents a strong negative feedback to climate change.”
http://www.esi-topics.com/fbp/2003/february03-RichardLindzen.html
Chris says
O.K., fair enough, but my point is that Lindzen’s “Iris” model is a homeostatic notion that supposedly acts to counter global warming via a cloud response, whereas Roy Spencer’s “internal radiative forcing” notion is a “feedback” (’though I don’t think he considers the term appropriate) that amplifies, or responds to, ocean circulation oscillations, and has been (supposedly) the major source of 20th century warming. In other words, each uses clouds as an all-encompassing explanation in opposite directions (for lack of warming according to Lindzen, whose “Iris” hypothesis requires that we’ve had less warming than expected – even ’though we haven’t; and for most of the 20th century warming according to Spencer).
It’s a tidy strategy of hunting out regions of present uncertainty upon which to construct seductive “hypotheses”, rather along the lines of the Cosmic Ray Fluxers who assert that (paraphrasing) “O.K. there hasn’t been any trend in the cosmic ray flux since at least 1958, but actually we now realize that it’s the muons that are important”
Not dissimilar to the notion of Intelligent Design which also hides within the retreating tides of present uncertainty….if I may be a tiny bit cynical :-P
David B. Benson says
Those bearing flowers (irises) need to fully explain the much warmer previous interglacials, at least terminations 2, 3 & 4. For the iris effect is presumed to prevent this, yes?
Alexander Harvey says
Re #129:
Chris,
You highlight and dismiss, but do not seem to analyse:
[2. Although we are far from the benchmark of doubled CO2, climate forcing is already about 3/4 of what we expect from such a doubling.
To quote his longer version:
“In terms of climate forcing, greenhouse gases added to the atmosphere through mans activities since the late 19th Century have already produced three-quarters of the radiative forcing that we expect from a doubling of CO2.”
Well we can take a look at what he says and where we could look for such data.
If by “the late 19th century” we could infer 1880, and if by “already” we could infer 2003, then we could use the GISS radiative forcings data for well-mixed GHGs. In that period the GISS forcing for W-M GHGs has increased by 2.7487 W/m^2, which is about 3/4 of 3.7 W/m^2.
Obviously that is not the whole story, but even when taking the GISS total forcings for that period you have 1.9218 W/m^2 (1880-2003) or 1.9232 W/m^2 (1900-2000), which is about 52% of 3.7 W/m^2.
This does not quite match your “wildly incorrect”.
Personally I would favour the 52% figure, but even that would imply an equilibrium temperature rise of over 2C if the atmospheric composition was frozen at the 2000 figure. (I have assumed the same 4C/doubling that he is criticising.)
You arrive at your 1.1C increase as if CO2 was the only show in town. From the GISS total forcing you would get a ~1.55C increase at equilibrium using your preferred 3C/doubling.
As only about a 0.8C increase occurred, about 0.7C would need to be “in the pipeline”. That is quite a high ratio of pipeline to occurrence. It may be the case but it may not.
In the author’s own terms (he criticises a 4C/doubling figure), from the GISS total forcing the equilibrium increase ought to be a little over 2C (1900-2000). He claims a 0.6C +/- 0.15C temperature increase, giving a range of 21.6% to 36.1%, which is close to his 1/3 – 1/6. (How he got the 0.6C figure is a bit beyond me; I thought it was around 0.8C.)
If he was to (cherry) pick just the W-M GHGs (much like you picked just CO2), he would get 16.6% to 27.7%, i.e. roughly 1/6 – 1/4 (again using his 0.6C +/- 0.15C range).
Now I do not like people picking just which bits suit them but I think you may both be guilty of that. Also either of you may have been using forcing figures that differ markedly from the GISS ones.
Finally, he could have gone one step further and just set the effect of the cooling aerosols to zero, and then had a 1900-2000 forcing of 3.5168 W/m^2, a whopping 95% of the effect of doubling CO2.
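All of the figures above are of the form ΔT_eq = S × F / 3.7, so anyone can re-run them (the forcings are the GISS values quoted; the sensitivities are the ones under discussion):

    F2X = 3.7          # canonical forcing for doubled CO2, W/m^2
    F_GHG = 2.7487     # GISS well-mixed GHG forcing, 1880-2003, W/m^2 (as quoted)
    F_TOT = 1.9218     # GISS total forcing, 1880-2003, W/m^2 (as quoted)

    print(F_GHG / F2X)        # 0.74: Lindzen's "three-quarters of a doubling"
    print(F_TOT / F2X)        # 0.52: the 52% figure
    print(3.0 * F_TOT / F2X)  # 1.56: the ~1.55C equilibrium at 3C/doubling
    print(4.0 * F_TOT / F2X)  # 2.08: "a little over 2C" at 4C/doubling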
Best Wishes
Alexander Harvey
Jeffrey Davis says
I was under the impression that the “iris effect” was discounted because, if it worked as advertised, the atmosphere would never have warmed (or cooled) with forcings the strength of the Milankovitch cycles. The iris wouldn’t open or close solely due to changes in CO2.
Gerald Browning says
Anthony Kendall (#127),
Let us take your statements one at a time so that there can be no obfuscation.
Is the simple linear equation that Pat Frank used
to predict future climate statistically a better fit than the ensemble of climate models? Yes or no.
[Response: No. There is no lag to the forcing and it would only look good in the one case he picked. It would get the wrong answer for the 20th Century, the last glacial period or any other experiment. – gavin]
Are the physical components of that linear equation based on
arguments from highly reputable authors in peer reviewed journals?
Yes or no.
[Response: No. ]
Is Pat Frank’s fit better because it contains the essence of what is driving the climate models? Yes or no.
[Response: If you give a linear model a linear forcing, it will have a linear response which will match a period of roughly linear warming in the real models. Since it doesn’t have any weather or interannual variability it is bound to be a better fit to the ensemble mean than any of the real models. – gavin]
Are the models a true representation of the real climate given their unphysically large dissipation and subsequent necessarily inaccurate parameterizations? Yes or no.
[Response: Models aren’t ‘true’. They are always approximations. – gavin]
Does boundedness of a numerical model imply accuracy relative to the dynamical system with the true physical Reynolds number?
Yes or no.
[Response: No. Accuracy is determined by analysis of the solutions compared to the real world, not by a priori claims of uselessness. – gavin]
Given that the climate models do not accurately approximate the correct dynamics or physics, are they more accurate than Pat Frank’s linear equation? Yes or no?
[Response: Yes. Stratospheric cooling, response to Pinatubo, dynamical response to solar forcing, water vapour feedback, ocean heat content change… etc.]
What is the error equation for the propagation of errors for the climate or a climate model?
[Response: In a complex system with multiple feedbacks the only way to assess the effect of uncertainties in parameters on the output is to do a Monte Carlo exploration of the ‘perturbed physics’ phase space and use independently derived models. Look up climateprediction.net or indeed the robustness of many outputs in the IPCC AR4 archive. Even in a simple equation with a feedback and a heat capacity (which is already more realistic than Frank’s cartoon), it’s easy to show that error growth is bounded. So it is in climate models. – gavin]
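A minimal sketch of that last point, using the one-box equation C dT/dt = F − λT with the feedback parameter perturbed by 10% (all values illustrative): the two solutions separate at first, but the gap saturates at the difference between their equilibria instead of compounding year over year.

    import numpy as np

    C, year, n = 8.4e8, 3.15e7, 200       # illustrative heat capacity; annual steps
    F = 3.7                               # constant forcing, W/m^2

    def integrate(lam):
        T = np.zeros(n)
        for i in range(1, n):
            T[i] = T[i-1] + year * (F - lam * T[i-1]) / C
        return T

    gap = integrate(1.25 * 1.1) - integrate(1.25)   # 10% error in the feedback
    # |gap| grows early on, then levels off near the equilibrium difference
    # (3.7/1.375 - 3.7/1.25 = -0.27): bounded, not unbounded compounding.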
Jerry
Alf Jones says
Have you seen Roger Pielke Jr’s Prometheus blog posting on this subject? I have to say it had a number of mistakes which seemed uncharacteristic of RPJr.
It looks at “observed trends in global surface temperature 2001-present (which slightly longer than 8 years)”, which of course is only slightly longer than 7 years. And the observed trends are quoted as up to “-1.5 +/- 2.2 C/decade” when it should be per century. If you actually look at the last 8 years of data the trends are quite different, i.e. more positive.
But his claim that a short observed cooling trend can “falsify” the models is most disappointing. Surely, in the same way that one hot summer does not prove global warming is happening, one cold trend (even if real) could still be consistent with the models; it is just not going to happen that often.
Sadly I am not sure statistics is RPJr’s strong suit.
Martin Vermeer says
#131 Chris:
I seem to remember the expression “God of the Gaps”.
Nylo says
#108 CobblyWorlds: The discussion is not about where the Global Warming will change surface temperatures the most. What my question discusses is HOW. In the GH theory, the surface warms because it gets extra energy from emissions by the troposphere, and the troposphere emits more because it has previously got much warmer, because of the GH gasses absorbing energy. But if the troposphere is not as warm as predicted, for whatever the reasons, how is the surface going to warm as much as the models predict, when it cannot receive as much energy from a troposphere that is cooler than the models predicted?
As a separate matter, I don’t agree with you that the GH effect is at its maximum at the poles. Global warming can be at a maximum there, but for different reasons: air flow or whatever else redistributes the heat. The GH effect itself cannot be at a maximum there, for three reasons:
1.- The whole atmosphere is colder there, so it cannot emit as much extra energy back to the surface as at other latitudes;
2.- The atmosphere holds much less water vapour there (close to zero, depending on how extreme the cold is), so its capacity to absorb infrared radiation is also very limited (let’s remember that water vapour accounts for some 85% of the GH effect);
3.- Surface temperatures are much colder at the poles, so the earth emits less infrared radiation there, and the amount of radiation each molecule of any GH gas can absorb is also smaller.
To summarize: less radiation available for absorption, by fewer molecules, means a much smaller temperature increase from absorbed energy, and a colder polar troposphere means even less emission back to the surface.
So to repeat myself: yes, the poles can warm more than the rest, but it won’t be because of the GH effect operating at the poles; it will be the GH effect elsewhere, plus some redistribution of the heat.
Clive van der Spuy says
Anthony Kendall, re: critique of Frank.
Sir you have just made my day. Hallelujah. I could follow your argument 100%.
Your statement: “There’s also a reason why this article is in Skeptic instead of Nature or Science. It would not pass muster in a thorough peer-review because of these glaring shortcomings.”
I think it is this point about peer review where I slipped up. I should have known better. But it is tough for laymen to follow peer-reviewed articles, so we end up reading a lot of trash.
Thanks again.
Lazlo says
Ummm, extremely cautious about posting here because of the huge risk of getting my head blown off (as on the Western Front, 1914-18), but here goes: are there any lessons from prediction markets, which have shown some success in predicting outcomes such as software project targets?
Rod B says
137: “…tough for laymen to follow peer reviewed articles…”
Actually, and more frustratingly, it is tough for laymen even to access many peer-reviewed articles without shelling out $15-$30 per article. And that’s before one knows what’s really in the article.
136: “…Not dissimilar to the notion of Intelligent Design which also hides within the retreating tides of present uncertainty…
…I seem to remember the expression “God of the Gaps”….”
Sounds like a good plan to me….. ;-)
J says
In the February post that you link to, there was some discussion of ways that the model output archive could be improved, including this:
The other way to reduce download times is to make sure that you only download what is wanted. If you only want a time series of global mean temperatures, you shouldn’t need to download the two-dimensional field and create your own averages. Thus for many purposes, automatic global, zonal-mean or vertical averaging would have saved an enormous amount of time.
Do you know whether anyone has done this (archived a globally averaged time series version, with annual or monthly steps, somewhere where it would be publicly available)?
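For anyone rolling their own in the meantime, the global average itself is only a few lines once the 2-D field has been downloaded. A minimal sketch, with a synthetic grid and field standing in for real archive output:

    import numpy as np

    # Area-weighted global mean of a 2-D (lat, lon) temperature field.
    # The grid and field below are synthetic placeholders, not CMIP output.
    nlat, nlon = 90, 180
    lats = np.linspace(-89, 89, nlat)
    field = 250.0 + 40.0 * np.cos(np.deg2rad(lats))[:, None] * np.ones((nlat, nlon))

    weights = np.cos(np.deg2rad(lats))    # grid-cell area shrinks as cos(latitude)
    zonal_mean = field.mean(axis=1)       # average over longitude first
    global_mean = np.average(zonal_mean, weights=weights)
    print(f"global mean: {global_mean:.2f} K")

The point of the question stands, though: doing this once on the server side would spare every user the full two-dimensional download.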
Chris says
Alex, I did analyze it. I did a back-of-the-envelope calculation to show that the warming over the last century is consistent with the models. However one tries to rescue the situation, it’s not possible to support the assertion that “the observed warming is only about 1/3-1/6 of what models project”. There are at least three fundamental problems.
You highlight the first and the second one. Taking your total forcing (including, as you quite rightly point out, all greenhouse gases and not just CO2) and an expected equilibrium warming of around 1.5 ºC, we’ve already had around 0.8 ºC of this. However, we know full well (and this is represented in models as the temporal evolution of temperature under forcings) that the warming realized so far lags the total greenhouse-induced warming at equilibrium. Take the value of the “in the pipeline” warming from Hansen’s model (around 0.6 ºC [*****]), and we arrive at an eventual equilibrium warming of around 1.4 ºC. You indicate that the total forcing (so far) should give us an equilibrium warming of around 1.5 ºC. So the warming so far is consistent with the models/best-estimate climate sensitivity (you could say we’ve had 90-95% of the warming expected).
Now either the models are laughably incorrect, as Lindzen says (he asserts we’ve only had 16-33% of the expected warming), or they’re not. Your analysis is consistent with the latter since, although we don’t know exactly how much warming we have still to come, the models are consistent with an expectation that we’re on track for 90-95% of the expected warming [note that I’m using Lindzen’s assumption that all the warming of the last century is from greenhouse gases: “Even if we attribute all warming over the past century to man made greenhouse gases…”].
Lindzen considers that rather than 0.8 ºC of warming in the last century we should have had something between 2.4 and 5.4 ºC (according to his assertion of what is expected if the Earth’s temperature evolves according to the models). This would indicate a climate sensitivity somewhere in the range of 5-11 ºC per doubling of atmospheric CO2 (or higher if the “composite” “time constant(s)” for attaining equilibrium were greater than expected).
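The arithmetic in the last two paragraphs is easy to check directly; every number below is quoted from them, nothing new:

    # Warming realized so far plus warming "in the pipeline", versus the
    # equilibrium warming expected from the forcing to date.
    observed = 0.8              # ºC over the 20th century
    in_pipeline = 0.6           # ºC, from Hansen's model [*****]
    expected_equilibrium = 1.5  # ºC, from the total forcing so far

    fraction = (observed + in_pipeline) / expected_equilibrium
    print(f"fraction of expected warming: {fraction:.0%}")   # ~93%

    # Lindzen's "expected" 2.4-5.4 ºC versus the observed 0.8 ºC:
    for expected in (2.4, 5.4):
        print(f"implied model overprediction: {expected / observed:.1f}x")

This recovers the ~90-95% figure above, and shows the factor-of-three-to-nearly-seven overprediction that the “1/3-1/6” claim requires.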
The third problem is that the models themselves don’t over-predict warming by a factor of three to six. Hansen’s early GCM, for example [***], which allows a 20-year forecast to be compared with reality, comes reasonably close to the measured temperature evolution. The model does slightly overestimate the warming but, as the authors state, “Indeed, moderate overestimate of global warming is likely because the sensitivity of the model used (12), 4.2 ºC per doubled CO2, is larger than our current estimate for actual climate sensitivity, which is 3 +/- 1 ºC, based mainly on paleoclimate data (17).” Alternatively, if the 20th century temperature evolution is modeled under known forcing estimates using a climate sensitivity of 2.7 ºC, the modeled surface temperatures match the measured surface temperatures pretty well [*****]. So how can anyone assert that the models are overestimating global warming by a factor of between three and six?
So Lindzen’s assertions about the expected warming and massive overprediction of warming by models are demonstrably wildly incorrect. No “cherrypicking” is required to establish that fact.
[***] http://pubs.giss.nasa.gov/docs/2006/2006_Hansen_etal_1.pdf (see Figure 2 and text)
[*****] http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf (see Figure 1b and text)
Ray Ladbury says
I just had a look at the Frank paper. Good lord, it’s worse than I imagined. I actually burst out laughing when I read the following:
“If the uncertainty is larger than the effect, the effect itself becomes moot.”
THAT depends on the characteristics of the uncertainty and on the characteristics of the effect and on the time over which they persist.
Yes, there are significant uncertainties in climate models. No, the effect of adding CO2 is not among them. I’ve come across only a few skeptics who actually understand the physics of climate, and even their reasons for dissent have lacked a sound physical basis. This is reflected in the publication record: almost nobody is publishing papers that dissent from the consensus position. Those few papers that do dissent either misunderstand the physics or are greeted with a collective sigh because they simply don’t show a way forward. Scientific consensus is achieved when the opposition stops having anything to say in refereed science journals. By any reasonable measure–publications, citations…–we’re there.
Jared says
#128
Tamino…
Remove 1998 from the records, then tell me how much warming occurred in the 1990s. What has to be realized is that extreme anomalies work both ways: they may push a trend upward at one end, but then help create a downward trend at the other.
Also, ten years is supposedly too short a time to measure climate, but what about 20 years? Are such periods so much longer and more telling? 1978-1998, a 20-year period of definite warming…1998-2008, a ten-year period of equilibrium.
Ray Ladbury says
#145–about 0.15 degrees C during the 90s. You are missing the point of requiring longer observation periods for climate effects: noise dominates on short timescales, while climatic trends emerge from the noise as the observation time increases. Also:
1998–El Nino, therefore anomalous
2008–La Nina, therefore anomalous
Therefore 1978-1998–20 years of warming
1998-2008–a continuation of the warming trend
Do the frigging math. If you fit a linear trend to the data, the trend is still upward.
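If anyone wants to check, here is a purely synthetic version of that math (invented series, not the real observations): a steady trend plus noise plus one large warm spike in 1998 still yields a positive fitted slope, with or without the spike.

    import numpy as np

    # Synthetic "temperatures": 0.02 K/yr trend + noise + a 1998-style spike.
    rng = np.random.default_rng(1)
    years = np.arange(1978, 2009)
    temps = 0.02 * (years - years[0]) + 0.1 * rng.standard_normal(years.size)
    temps[years == 1998] += 0.3          # the anomalous warm year

    slope = np.polyfit(years, temps, 1)[0]
    keep = years != 1998
    slope_no98 = np.polyfit(years[keep], temps[keep], 1)[0]
    print(f"trend with 1998:    {slope:.4f} K/yr")
    print(f"trend without 1998: {slope_no98:.4f} K/yr")

Both slopes come out positive; a single anomalous year barely moves a 30-year regression.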
Gerald Browning says
Ray Ladbury (#144),
By citing each other’s nonsense.
If the climate models are ill posed, as has been shown both mathematically and numerically, what does that say about all of the manuscripts that have been published using climate models?
Jerry
[Response: That your complaint is unfounded? – gavin]
Jared says
Gavin – I keep trying to post and my posts are not showing up. Could you tell me why?
I will try again…
[edit]
[Response: They don’t show up because repeating the same thing over and again is tedious. Using 1998 is cherry-picking as you have been told over and over. Unless you want to say something new, don’t bother. – gavin]
Jared says
Ok, I have been informed that using 1998 is cherry picking, and therefore I apparently cannot post anything about that year. Very well.
If one looks at 2001-2008, the same trend is evident in this graph:
http://tinyurl.com/4de3v7
HAD, RSS, and UAH show no real upward trend. Now the question is: how significant is this? Is it a blip or the start of a longer trend? Time will tell; the next 10-20 years will be very telling, with the -PDO phase and projected low solar activity. All I am asking is that we keep an open mind and consider multiple scenarios.
tamino says
Jared stated:
I replied:
Jared replied:
Yes, longer periods are more telling. Not only do they provide more data points; as the time span grows, the signal (the accumulated trend) grows while the noise level stays the same, so the signal-to-noise ratio increases.
You seem unwilling to admit to yourself that you don’t understand the impact of noise in temperature time series on estimates of trend rates. If you’re really interested in learning, you should heed the advice I gave earlier and study this post.
You should also answer my question. Can you show us where AGW proponents use trends determined from a 10-year time span of data to show warming? Or did you just make that up?
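The signal-to-noise point is easy to demonstrate numerically. A sketch with invented numbers (a fixed 0.02 K/yr trend plus white year-to-year noise), fitting trends over windows of different lengths:

    import numpy as np

    # Spread of fitted trends vs window length: same underlying trend,
    # same noise level, many random realizations per window.
    rng = np.random.default_rng(2)
    true_trend, noise_sd, ntrials = 0.02, 0.1, 2000

    for window in (8, 20, 30):
        t = np.arange(window)
        fits = [np.polyfit(t, true_trend * t
                           + noise_sd * rng.standard_normal(window), 1)[0]
                for _ in range(ntrials)]
        print(f"{window:2d}-yr windows: {np.mean(fits):+.3f} "
              f"+/- {np.std(fits):.3f} K/yr")

Over 8-year windows the spread of fitted trends is comparable to the trend itself (some fits even come out negative); over 20- and 30-year windows it collapses, which is exactly why short records say so little about the underlying climate trend.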