As we did roughly a year ago (and as we will probably do every year around this time), we can add another data point to a set of reasonably standard model-data comparisons that have proven interesting over the years.
First, here is the update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v, NCDC and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs.
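(For anyone wanting to reproduce this kind of figure, the baselining and envelope step is simple enough to sketch. The snippet below is a minimal illustration, assuming the annual-mean series for each run are already loaded into a NumPy array; the variable names are illustrative, not anything from the AR4 archive.)

```python
import numpy as np

# runs:  array of shape (n_runs, n_years) of annual-mean global temperatures
# years: array of shape (n_years,) giving the calendar year of each column
def baseline_and_envelope(runs, years, base=(1980, 1999)):
    """Re-baseline each run to its mean over the base period and return the
    ensemble mean plus the central 95% envelope (2.5th-97.5th percentiles)."""
    in_base = (years >= base[0]) & (years <= base[1])
    anoms = runs - runs[:, in_base].mean(axis=1, keepdims=True)
    lo, hi = np.percentile(anoms, [2.5, 97.5], axis=0)
    return anoms.mean(axis=0), lo, hi
```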
The El Niño event that started off 2010 definitely gave last year a boost, despite the emerging La Niña towards the end of the year. An almost-record summer melt in the Arctic was also important (and probably key in explaining the difference between GISTEMP and the others). Checking up on our predictions from last year, we forecast that 2010 would be warmer than 2009 (because of the ENSO phase last January). Consistent with that, I predict that 2011 will not be quite as warm as 2010, but it will still rank easily amongst the top ten warmest years of the historical record.
The comments on last year’s post (and responses) are worth reading before commenting on this post, and there are a number of points that shouldn’t need to be repeated again:
- Short term (15 years or less) trends in global temperature are not usefully predictable as a function of current forcings. This means you can’t use such short periods to ‘prove’ that global warming has or hasn’t stopped, or that we are really cooling despite this being the warmest decade in centuries.
- The AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves in the forcings imposed, the magnitude of the internal variability and, of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
- The model simulations use observed forcings up until 2000 (or 2003 in a couple of cases) and use a business-as-usual scenario subsequently (A1B). The models are not tuned to temperature trends pre-2000.
- Differences between the temperature anomaly products are related to: different selections of input data, different methods for assessing urban heating effects, and (most importantly) different methodologies for estimating temperatures in data-poor regions like the Arctic. GISTEMP assumes that the Arctic is warming as fast as the stations around the Arctic, while HadCRUT and NCDC assume the Arctic is warming as fast as the global mean (a toy illustration of the difference follows this list). The former assumption is more in line with the sea ice results and independent measures from buoys and the reanalysis products.
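To make the last point concrete, here is a toy calculation (with invented numbers) of how the two infilling assumptions change a global mean:

```python
# Toy illustration with invented numbers: suppose the Arctic is 4% of the
# global area, the well-sampled 96% of the globe has a mean anomaly of 0.5 C,
# and stations around the Arctic imply an Arctic anomaly of 2.0 C.
area_arctic = 0.04
obs_mean = 0.5        # anomaly over the well-sampled part of the globe
arctic_extrap = 2.0   # anomaly implied by the surrounding stations

# HadCRUT/NCDC-style: the unsampled Arctic implicitly warms at the global mean
hadcrut_like = obs_mean

# GISTEMP-style: the Arctic is filled in from the surrounding stations
gistemp_like = (1 - area_arctic) * obs_mean + area_arctic * arctic_extrap

print(hadcrut_like, gistemp_like)  # 0.5 vs ~0.56
```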
There is one upcoming development that is worth flagging. Long in development, the new Hadley Centre analysis of sea surface temperatures (HadISST3) will soon become available. This will contain additional newly-digitised data, better corrections for artifacts in the record (such as those highlighted by Thompson et al. 2007), and corrections to more recent parts of the record because of better calibrations of some SST measuring devices. Once it is published, the historical HadCRUT global temperature anomalies will also be updated. GISTEMP uses HadISST for the pre-satellite era, and so long-term trends may be affected there too (though not the more recent changes shown above).
The next figure is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. As before, I don’t have the post-2003 model output, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.
To include the data from the Lyman et al (2010) paper, I am baselining all curves to the period 1975-1989, and using the 1993-2003 period to match the observational data sources a little more consistently. I have linearly extended the ensemble mean model values for the post 2003 period (using a regression from 1993-2002) to get a rough sense of where those runs might have gone.
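For those curious, the linear extension can be sketched roughly as follows (my own reconstruction of the procedure described above, not the script actually used for the figure):

```python
import numpy as np

# years, ens_mean: 1-D arrays of years and ensemble-mean OHC anomalies
def extend_linearly(years, ens_mean, fit_range=(1993, 2002), out_to=2010):
    """Fit a straight line to the ensemble mean over fit_range and extend it
    to later years; a rough guide to where the runs might have gone, not
    actual model output."""
    m = (years >= fit_range[0]) & (years <= fit_range[1])
    slope, intercept = np.polyfit(years[m], ens_mean[m], 1)
    new_years = np.arange(years[m].max() + 1, out_to + 1)
    return new_years, slope * new_years + intercept
```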
Update (May 2010): The figure has been corrected for an error in the model data scaling. The original image can still be seen here.
As can be seen, the long-term trends in the models match those in the data, but the short-term fluctuations are both noisy and imprecise.
Looking now to the Arctic, here’s a 2010 update (courtesy of Marika Holland) showing the ongoing decrease in September sea ice extent compared to a selection of the AR4 models, again using the A1B scenario (following Stroeve et al, 2007):
In this case, the match is not very good, and possibly getting worse, but unfortunately it appears that the models are not sensitive enough.
Finally, we update the Hansen et al (1988) comparisons. As stated last year, the Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%) (and high compared to A1B), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the best estimate (~3ºC).
For the period 1984 to 2010 (1984 being the year these projections started), scenario B has a trend of 0.27+/-0.05ºC/dec (95% uncertainties, no correction for auto-correlation). For GISTEMP and HadCRUT3, the trends are 0.19+/-0.05 and 0.18+/-0.04ºC/dec respectively (note that the GISTEMP met-station index has 0.23+/-0.06ºC/dec and has 2010 as a clear record high).
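(For anyone wanting to check such numbers, here is a minimal sketch of the trend calculation, using ordinary least squares with an approximate 95% interval from the standard error and, as caveated above, no autocorrelation correction.)

```python
from scipy import stats

def trend_with_ci(years, temps):
    """OLS trend in deg C per decade with an approximate 95% interval
    (1.96 x standard error), ignoring autocorrelation as noted above."""
    res = stats.linregress(years, temps)
    return 10 * res.slope, 10 * 1.96 * res.stderr
```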
As before, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world. Repeating the calculation from last year, assuming (again, a little recklessly) that the 27 yr trend scales linearly with the sensitivity and the forcing, we can use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.27×0.9) × 0.19 ≈ 3.3ºC. And again, it’s interesting to note that the best estimate of sensitivity deduced from this projection is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range of 0.21+/-0.16 ºC/dec (95%).
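Written out, the back-of-the-envelope scaling in the previous paragraph is simply:

```python
# Assumes, as in the text (and a little recklessly), that the 27-yr trend
# scales linearly with both the sensitivity and the forcing.
model_sensitivity = 4.2   # deg C per doubling in the 1988 GISS model
model_trend = 0.27        # deg C/decade, Scenario B, 1984-2010
forcing_factor = 0.9      # Scenario B forcings ran ~10% high
obs_trend = 0.19          # deg C/decade, GISTEMP, 1984-2010

implied_sensitivity = model_sensitivity / (model_trend * forcing_factor) * obs_trend
print(round(implied_sensitivity, 1))  # ~3.3 deg C per doubling
```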
So to conclude, global warming continues. Did you really think it wouldn’t?
Kevin McKinney says
#98–“. . . until the models have the ability to predict the short term variations occurring over the time interval of one year, we don’t know how well the models have estimated natural variability.”
Nonsense. These are quite separate problems.
dhogaza says
Isotopious:
Bull. That’s equivalent to saying that because we can’t predict whether or not it will rain on July 4th, 2011 in Portland, Oregon, we can’t predict that July and August will be warmer and drier than February.
We can model natural variability in summer weather without being able to predict exactly where summer of 2011 will fall within that range.
You’re just wrong.
Isotopious says
“Bull. That’s equivalent to saying that because we can’t predict whether or not it will rain on July 4th, 2011 in Portland, Oregon, we can’t predict that July and August will be warmer and drier than February.”
I see what you mean; however, we could be more specific, and ask if we can predict whether next January will be above or below average temperature. In this case we can’t predict the result; the science is not good enough. The physics has not been established, unlike in your example.
So have a guess, just the good ol’ above/ below, yes/ no, 1/ 0, will suffice. No need for some ancy-fancy decimal point value…I don’t want the whole world, etc…
DrGroan says
Is there any release date available for the new HadISST products? I have an issue which I believe will be fixed by the Thompson correction. I will probably delay my manuscript submission slightly if I can get my hands on the new version of HadISST soon.
Ray Ladbury says
Isotopious@98 demonstrates a deep misunderstanding of climate. The reason why CO2 trumps natural variability IN THE LONG RUN is not because it is, at present, much larger than energy fluctuations due to natural variability, but because its sign is consistent. It is the same reason why gravity, despite being the weakest of forces, trumps all others at the level of cosmology.
And his contention that we cannot be confident in the models until they can predict on yearly timescales is utter BS. I do not know a fund manager who will predict with confidence how his fund will do on a yearly timescale, and yet they wager billions on decadal timescales. Folks, come on, think about the dynamics of the system before you post this crap!
TimTheToolMan says
TimTheToolMan asks : Regarding model output for OHC, Gavin writes : ” As before, I don’t have the post-2003 model output”
Why not? I don’t understand. Are you saying the models never output any OHC predictions past 2003?
Gavin responds : [Response: No. It is a diagnostic that would need to be calculated, and I haven’t done it. That’s all. – gavin]
Now I’m even more confused. How is it that arguably the most important aspect of AGW (ie the Ocean Heat Content) has not been calculated from the model output past 2003?
[Response: It has. But I didn’t do it, and I don’t have the answers sitting on my hard drive ready to be put on a figure for a blog post. And if I don’t have it handy, then I would have to do it all myself (no time), or get someone else to do it (they have more important things to do). If someone who has done it, wants to pass it along, or if someone wants to do the calculation, I’d be happy to update the figure. – gavin]
steve says
How much global warming has occurred in “polar regions” and how many temperature stations record this warming?
captdallas2 says
Ray Ladbury,
“The reason why CO2 trumps natural variability IN THE LONG RUN is not becuase it is, at present, much larger than energy fluctuations due to natural variability, but because it’s sign is consistent.”
But doesn’t ~3.3 sensitivity indicate that CO2 does trump natural variability in the short run?
Ray Ladbury says
Capt. Dallas, No. That’s 3.3 K per doubling. CO2 doesn’t double overnight.
[comment was moved to Bore Hole -moderator]
Tom Scharf says
“…The simulations from 1988 – or even earlier – have proved skillful, though if you think that means they were perfect (or that they would need to be), you are somewhat confused. – gavin]”
I guess I am somewhat confused.
When you say a model is skillful, you must answer the question: skillful as compared to what? Skillful requires that the model performs better than a BASELINE model. One baseline model is a simple linear trend from the start of the century. This model performs as well as or better than Hansen’s prediction. It takes less than 1 second to run, uses no physics, and is more accurate. (Admittedly, there are many possible simplified models.)
[Response: At the time (1988), there were no suggestions that climate should be following a linear trend (though if you know of some prediction along those lines from the 1980s, please let me know – the earliest I can find is from 1992, and the prediction was for 0.1 degC/dec). Instead, there were plentiful predictions of no change in mean climate, and indeed, persistence is a very standard naive baseline. Hansen’s model was very skillful compared to that. To argue that a specific linear trend should be used as a naive baseline is fine – except that you have to show that your linear trend up to 1984 was a reasonable thing to do – and should you use a 10yr, 20yr, 30yr etc. period? How well did that recipe validate in previous periods (because if it didn’t, it wouldn’t be a sensible forecast). Post-hoc trolling for a specific start point and metric now that you know what has actually occurred is not convincing. This was explored in some detail in Hargreaves (2010). – gavin]
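For concreteness, skill relative to a naive baseline is usually quantified with a mean-squared-error skill score; here is a minimal sketch with invented numbers (nothing below comes from the actual model or observations):

```python
import numpy as np

def skill_score(obs, forecast, baseline):
    """Mean-squared-error skill score: 1 is perfect, 0 is no better than the
    baseline, and negative values are worse than the baseline."""
    mse_f = np.mean((forecast - obs) ** 2)
    mse_b = np.mean((baseline - obs) ** 2)
    return 1.0 - mse_f / mse_b

# Invented numbers for illustration only. A persistence baseline assumes no
# change in mean climate after the forecast is issued.
obs = np.array([0.10, 0.18, 0.25, 0.33, 0.41])
forecast = np.array([0.12, 0.20, 0.28, 0.36, 0.44])
persistence = np.full_like(obs, 0.05)
print(skill_score(obs, forecast, persistence))  # ~0.99: skillful vs persistence
```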
To prove skillful for the point you are trying to make (CO2 is a climate driver), the model must perform statistically better than a model that doesn’t use this large CO2 forcing. Hansen’s scenario C, which assumes a significant reduction in CO2, matches what really happened, even though CO2 increases continued business as usual.
[Response: No it didn’t. The different scenarios have net radiative forcing in 2010 (with respect to 1984) of 1.6 W/m2, 1.2 W/m2 and 0.5 W/m2 – compared to ~1.1 W/m2 in the observed forcing since then. The test of the model is whether, given the observed changes in forcing, it produces a skillful prediction using the scenario most closely related to the observations – which is B (once you acknowledge the slight overestimate in the forcings). One could use the responses of all three scenarios relative to their specific forcings to make an estimate of what the model would have given using the exact observed forcings, but just using scenario C – which has diverged significantly from the actual forcings – is not going to be useful. This is mainly because of the time lag to the forcings – the differences between B and C temperature trends aren’t yet significant (though they will be in a few years), and in 2010 do not reflect the difference in scenario. If you are suggesting that scenario C will continue to be a better fit, I think this is highly unlikely. – gavin]
To confused people like myself, this suggests the CO2 forcing aspect of this model is WRONG based on actual performance.
[Response: I looked into what you could change in the model that would have done better (there is no such thing as a RIGHT/WRONG distinction – only gradations of skill), and I estimated that a model with a sensitivity of ~3 deg C/2xCO2 given the observed forcings would have had higher skill. Do you disagree with that? Since that is indeed our best guess for the sensitivity, and is also close to the mid-point of the sensitivities of the current crop of models, do you agree that this is a reasonable estimate? – gavin]
This leads confused people like myself to not trust new models until they have proved skillful against a reasonable baseline model, which is provided at the time of the model release.
[Response: Then you are stuck with looking at old models – which in fact did prove skillful compared to naive baselines provided at the time of release (see above). I prefer to use old models and their biases in order to update my Bayesian model for what is likely to happen. If a model with a sensitivity of 4.2 deg C/2xCO2 went a little high (given current understandings of the forcings), and a model with a sensitivity of 3 deg C/2xCO2 would have been spot on, I think that is support for a sensitivity of around 3 deg C, and that is definitely cause for concern. – gavin]
A large positive CO2 forcing calls for accelerated warming as CO2 increases.
I don’t see this signal in the data (yet). It’s not there.
captdallas2 says
Ray Ladbury,
I am aware that ~3.3 K is for a doubling, and that atmospheric CO2 increased from ~340 ppm in 1983 to ~390 ppm in 2010. My question was, would not ~3.3 K sensitivity indicate that over that short period (27 years), CO2 warming exceeded natural variation?
That would of course lead to the question of how, over the period 1913 to 1940 (though you could pick virtually any 27-year period), natural variability could create similar changes, but not so much now?
tony says
Does your graph have a different baseline to the one published in IPCC 2007 ts26 ? http://www.ipcc.ch/graphics/ar4-wg1/jpg/ts26.jpg
It has models and observations matching at about 2000, whereas you don’t?
[Response: That seems to have a baseline of 1990-1999 (according to the caption), so that isn’t the same as 1980-1999 used above. – gavin]
Stan Khury says
I am new to the website. I have a basic (and what may seem like a trivial) question, but I am looking for a pointer to a place on the website that tells about the conventions used to consolidate multiple observations/measurements at different points on the surface of the earth (e.g., different oceans) at different times (i.e., the seasons and such). Any pointers to that spot would be kindly appreciated.
Brian Dodge says
I think Alan Millar thinks that http://www.woodfortrees.org/plot/gistemp/mean:12 is GISS model output, not data.
He says “Well when I look at it the GISS decadal, climate only, signal trend, matches the weather and climate decadal signal trend. Up, down, up, up, and up. In none of them do we see an opposite trend over the whole decade.”
– Which is clearly not the case with GISS Model E – go to http://data.giss.nasa.gov/modelE/transient/Rc_jt.1.01.html and hit the “show plot” button.
Confusion over the difference between models and data aside, when we look at 30 year or longer CLIMATE trends in the data, rather than decadal WEATHER trends, what we see is http://www.woodfortrees.org/plot/hadcrut3vgl/mean:12/plot/hadcrut3vgl/from:1890/trend/plot/hadcrut3vgl/from:1950/trend/plot/hadcrut3vgl/from:1980/trend/plot/esrl-co2/offset:-350/scale:0.01
-Accelerating warming trends as the CO2 forcing increases.
tamino says
Re: #110 (Tom Scharf)
You say
You are incorrect.
Septic Matthew says
102, dhogaza: That’s equivalent to saying that because we can’t predict whether or not it will rain on July 4th, 2011 in Portland, Oregon, we can’t predict that July and August will be warmer and drier than February.
We can model natural variability in summer weather without being able to predict exactly where summer of 2011 will fall within that range.
I think that’s a misleading (though commonly used) analogy, for two reasons. First, the seasonal differences are caused by solar variation, whereas we are naturally worried in AGW discussions by CO2, H2O, and CH4 effects. Second, predictions of seasonal effects are simple extrapolations of statistical records accumulated in real time over many generations, and over many summer/winter cycles, but we do not have comparable statistical records accumulated in real time over many cycles of increasing/decreasing GHGs.
Septic Matthew says
Hansen’s scenario c is clearly the most accurate of the 3 to date; is any conclusion to be drawn from this?
The graph of ocean heat content shows a slight decline in the year 2010; this co-occurred with a decline in sea surface temperatures and a record (or near record) surface temperature. That makes it look like 2010 was characterized by a slight departure from the average net transfers of heat between ocean and surface. Is that a fair statement?
Brian Dodge says
captdallas2 — 24 Jan 2011 @ 12:34 PM
“That would of course lead into a question of how over the period 1913 to 1940, though you could pick virtually any 27 year period, natural variability could create similar changes, but not so much now?”
Perhaps because over that cherry-picked 27-year period 1913 to 1940, when CO2 was much lower and rising much more slowly than it is now, the natural variation in solar output was up, whereas despite the natural variation in solar output being down since 1980, temperatures and CO2 are up.
http://www.woodfortrees.org/plot/hadcrut3vgl/mean:12/offset:-0.1/plot/hadcrut3vgl/from:1913/to:1940/trend/plot/sidc-ssn/from:1913/to:1940/trend/scale:0.01/offset:-0.4/plot/hadcrut3vgl/from:1980/trend/plot/sidc-ssn/from:1980/trend/scale:0.01/offset:-0.4
(This uses scaled and offset Sunspot Number as a proxy for solar output, since that’s all woodfortrees has available.)
jacob l says
re 113
Basically, the temperatures are converted to anomalies from a common base period, weighted by area, and then averaged; where things get complicated is in dealing with missing data and other quality control issues.
http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf describes how Hansen did it for land
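A minimal sketch of that recipe, using cos(latitude) as a proxy for grid-cell area and skipping the missing-data complications (the base period and function are illustrative, not GISTEMP's actual code):

```python
import numpy as np

# temps: array (n_years, n_lat, n_lon) of gridded temperatures
# lats:  array (n_lat,) of grid-cell centre latitudes in degrees
# years: array (n_years,) of calendar years
def global_mean_anomaly(temps, lats, years, base=(1951, 1980)):
    """Convert to anomalies from a base period, weight by cos(latitude) as a
    proxy for grid-cell area, then average. The hard part (missing data and
    quality control) is ignored here."""
    in_base = (years >= base[0]) & (years <= base[1])
    anoms = temps - temps[in_base].mean(axis=0)
    weights = np.cos(np.deg2rad(lats))[:, None] * np.ones(temps.shape[2])
    return (anoms * weights).sum(axis=(1, 2)) / weights.sum()
```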
M says
“That would of course lead into a question of how over the period 1913 to 1940, though you could pick virtually any 27 year period, natural variability could create similar changes, but not so much now?”
Natural variability is not a magic wand that just produces warming and cooling. Natural variability itself is the result of underlying processes, some known, some not known. My recollection is that the 1913-1940 warming coincided with at least two natural warming processes: increasing solar warming, and decreasing volcanic activity. As our observational capacity improves, we should expect to have fewer and fewer instances of unexplainable climate changes. In this case, GHG warming explains the last several decades nicely, but changes in known natural processes do not. Therefore, in order to come up with an alternative explanation, one has to simultaneously show why GHGs are not causing the warming they would be expected to based on physical principles, and at the same time come up with a natural source of temperature change that can match the magnitude and patterns of the observed change. Good luck…
-M
Hank Roberts says
SM, if you’d read the prior discussion, you wouldn’t be repeating the same question based on the same misunderstanding already answered over and over.
Seriously, unless you’re trying intentionally to repeat the talking point that the old Scenario C is accurately describing current events (despite different assumptions and different facts) — you could avoid making that mistake.
Have a look at the answers already given to that question. They might help.
captdallas2 says
117 septic Mathew,
The temperature may appear to be closer to scenario C but that has nothing to do with the accuracy of scenario C. Each scenario predicts a response based on action or inaction to curb CO2 output. The business-as-usual scenario is pretty much the only one we should be looking at, unless thinking about doing something counts as doing something.
In another three to five years we may be able to coax out a trend without being accused of cherry picking. Then some of the questions about natural variability may be answered.
captdallas2 says
M says,
“Natural variability is not a magic wand that just produces warming and cooling. Natural variability itself is the result of underlying processes, some known, some not known. My recollection is that the 1913-1940 warming coincided with at least two natural warming processes: increasing solar warming, and decreasing volcanic activity.”
Solar increase has been pretty much ruled out as being significant during that period. Aerosols, natural and man made are interesting. The unknowns are more interesting though.
captdallas2 says
Brian Dodge,
I guess you could call it cherry picking since I picked it because it had a similar slope without as much CO2 increase. Kinda the point.
BTW, there are more recent solar studies than Lean 1998, 2000 and 2005. A 1 W/m^2 TOA variation in TSI during a ~11-year solar cycle “may” contribute 0.1 degrees of temperature variation. I think Dr. Lean herself said that not too long ago.
David B. Benson says
captdallas2 @111 — Please take the time to study the ultrasimple model in
https://www.realclimate.org/index.php/archives/2010/10/unforced-variations-3-2/comment-page-5/#comment-189329
Hank Roberts says
For Stan Khury, here’s one way to go about answering questions like yours. I took your question and pasted it into Google, and from among the first page of hits, here for example is one that may be helpful to get an idea of how scientists work to make observations taken in many places at many times useful. Just an example, you’ll find much more out there.
Geodetic Observations and Global Reference Frame …
http://www.nbmg.unr.edu/staff/pdfs/Blewitt_Chapter09.pdf
“… Geodetic observations are necessary to characterize highly accurate spatial and temporal changes of the Earth system that relate to sea-level changes. Quantifying the long-term change in sea-level imposes stringent observation requirements that can only be addressed within the context of a stable, global reference system. This is absolutely necessary in order to meaningfully compare, with sub-millimeter accuracy, sea-level measurements today to measurements decades later. Geodetic observations can provide the basis for a global reference frame with sufficient accuracy. Significantly, this reference frame can be extended to all regional and local studies in order to link multidisciplinary observations ….”
Take any set of observations and you can find similar information.
Here’s a bit about the CO2 record, for example: http://www.esrl.noaa.gov/gmd/ccgg/trends/ They explain how the seasonal variations are handled on that page to produce the annual trend. You can find much more.
Septic Matthew says
121, Hank Roberts, if the question was answered, I missed the answer.
Hank Roberts says
> SM
> Scenario C
For example, click: https://www.realclimate.org/index.php/archives/2011/01/2010-updates-to-model-data-comparisons/comment-page-3/#comment-198490
and earlier; find “scenario c” in this thread, and at rc globally.
Don’t miss the inline responses.
Phil Scadden says
septic Matthew – follow Gavin’s response in 110 carefully.
captdallas2 says
121 David B. Benson,
Read it. 1 in 4x10^40 means there is a snowball’s chance in hell that CO2 increase won’t lead to warming. I agree. I just tend to agree with Arrhenius’ final stab at sensitivity, and with Manabe. Arrhenius’ first shot was around 5.5, then he adjusted downward to 1.6 (2.1 with water vapor). Manabe kinda got to shoot for his average of ~2.2. The Charney compromise of 1979 (1.5 – 3.0 – 4.5) never impressed me despite the rigorous mathematics.
Anyway, the point I was making is that older solar variation estimates are over-used (by both camps, er, tribes) and that natural variation could quite possibly be underestimated. There is more to climate oscillations than ENSO and the AMO, which may be part of a tri-pole (Tsonis et al. https://pantherfile.uwm.edu/aatsonis/www/JKLI-1907.pdf )
Cherry picked though it may be, 1903 to 1940 is an interesting period.
Ray Ladbury says
capt. dallas, Gee, and here I thought we ought to be going with what the evidence said – which is ~3 degrees per doubling, with 90% confidence between 2 and 4.5. We know more than did Arrhenius or Manabe.
Doug Proctor says
Okay … read your reply, but it still looks like Scenario C is the closest comparison to GISTemp and HadCruT. Also, Lyman (2010) looks way out of line with other measurements.
What would Scenario C tell you about CO2 sensitivity for doubling?
David B. Benson says
captdallas2 @130 — To become more impressed by the estimate of about 3 K for Charney equilibrium climate sensitivity, read papers by Annan & Hargreaves.
I also have a zero-dimensional two-reservoir model using annualized data. The only sources of internal variability included are ENSO and the AMO. Looking at the autocorrelations, there does not appear to be anything left to explain except as random noise.
Doug Proctor says
“The temperature may appear to be closer to scenario C but that has nothing to do with the accuracy of scenario C”
I’m missing something I REALLY would appreciate being corrected about. I understood that the point of graphing things together like the measured temp and the Scenarios, was that the one Scenario that looked most like the measurement was the most likely Scenario to go with. Why is “B” better than “C”, when “C” looks most like GISTemp and HadCruT?
Seriously, I’d like to know. I have no idea how to interpret such comparisons otherwise and will always make the same mistake.
Septic Matthew says
129, Phil Scadden
Thank you. I had missed it. It seems to suggest that the better fit of scenario C to the data might be meaningful should it persist.
tamino says
Re: #134 (Doug Proctor)
Scenarios A, B, and C are the same model, but with different forcings (different greenhouse gas emissions forecasts). Scenario B is preferred because it’s the one for which the emissions forecast is closest to what actually happened.
But scenario B turned out to be too warm. That indicates that the model itself was probably too sensitive to climate forcings. As Gavin said, that model has a climate sensitivity of 4.2 deg.C per doubling of CO2. The best estimates of climate sensitivity (around 3 deg.C per doubling of CO2) indicate that that’s too much — in agreement with the conclusion from the model-data comparison.
Hank Roberts says
> It seems to suggest
> that the better fit of scenario C
> to the data might be meaningful should it persist.
What is this “it” you are relying on, SM?
Look at what’s actually being written, not what some “it seems” “to suggest”
Look at the assumptions at the time for each of those scenarios.
Compare them to what people are telling you.
Where are you finding anyone saying Scenario C has a better fit?
Hint: fit isn’t the line drawn on the page; fit is the assumptions as well as the outcome. C has too high a climate sensitivity and a cutoff on use of fossil fuels. Reality — differs.
Oh, don’t listen to me, listen to the scientists here.
Look in the right sidebar for the list “… with inline responses” to keep up, and do the searches on the subject. You’ve apparently been missing the most useful information here — the scientists’ inline answers — since you haven’t read them.
Chris Colose says
Doug Proctor (#134)
When you try to predict the climate out in the future, you have two big uncertainties. First, you have the socio-economic uncertainties (which dictate emissions and CO2 growth rates, etc) and is mostly determined by human political choices. Secondly, you also have the actual physical uncertainties in the climate system.
Scenarios A, B, and C were primarily about the former component. They represent possible future concentrations of the main greenhouse gases. In Scenario C, trace gas growth is substantially reduced between 1990 and 2000 so that the forcing no longer increases into the 21st century. This is not what actually happened. Scenario B was a bit more conservative about greenhouse gas growth rates, and it’s not what happened either, but it’s the closest one. Keep in mind also that the actual forcing is not too well known because of tropospheric aerosols. Thus, even before we talk about the actual trends in temperature, we know Scenario B is the most useful comparison point.
Keep in mind though that actual forcing growth and Scenario B growth are not completely equivalent. Also keep in mind that the climate sensitivity (this is the physical aspect now) is a bit high in the 1988 model paper, so you’d expect some differences between observations and models.
Hank Roberts says
Oh, good grief, people, is this what’s got you going on this so avidly?
… climatedepot.com Jan. 21
… Oops-Temperatures-have-fallen-below-Hansens-Scenario-C …
captdallas2 says
Ray said
“capt. dallas, Gee, and here I thought we ought to be going with what the evidence said–which is ~3 degrees per doubling, with 90% confidence between 2 and 4.5. We know more that did Arrhenius or Manabe.”
Ouch Ray, is there anything we don’t know?
Oh! Right! That brings us back to 1913 to 1940. Solar is much less likely to have driven the rise than was expected only 5 years ago (0.1 W/m^2, Wang 2005; slightly less with Svalgaard 2007 and Perminger 2010), and aerosols have the largest uncertainty (when you include cloud albedo).
Update: Zeke posted a neat look at mid century warming over at Lucia’s.
http://rankexploits.com/musings/2011/more-mid-20th-century-warming/#more-13706
Northern high latitudes dominated the warming? Oscillation warm phase synchronization?
Ray Ladbury says
Captdallas,
Yes, there are things we do not know. There are also things we do know–like about a dozen independent lines of evidence that all point to a sensitivity of about 3 degrees per doubling. And we know that it is much easier to get an Earth-like climate with a sensitivity that is higher than 3 than it is with one that is lower.
http://agwobserver.wordpress.com/2009/11/05/papers-on-climate-sensitivity-estimates/
All told, there’s 5% of the probability distribution for CO2 sensitivity between 0 and 2 degrees per doubling. There’s an equal amount from 4.5 to infinity. You seem awfully willing to bet the future of humanity on a 20:1 longshot.
Alexander Harvey says
From where might I obtain the OHC for the model runs in a similar format, e.g. globalised, or data that is freely available in any format?
Nothing like it seems to be archived at Climate Explorer.
Alex
[Response: In CMIP3 it wasn’t a requested diagnostic, and so you will need to calculate it from the ocean temperature anomalies integrated over depth. For CMIP5 it is requested and so it should be available pre-computed. – gavin]
jgarland says
@99: Essentially you’re saying you need to be able to predict the weather in order to be able to predict the climate. This is demonstrably false.
You’re making a fundamental error with respect to levels of analysis: You don’t need to and often simply cannot measure a phenomenon at all levels. That you cannot measure at one level simply does NOT mean you cannot at another level. You can assert it, but you’d be wrong and the number of counterexamples to your “logic” is legion.
For example, no (modern!) baseball coach would let the results of a single managerial decision affect making that same decision over and over again. Any single event–the results of a particular baseball play–may in fact be forever unpredictable. That does not at all mean that aggregate events (e.g., wins, climate) cannot be predicted. I’d choose Helton or Pujols (very high on base + slugging percentage) to pinch hit in a crucial situation every time over some player with a low value on that statistic. That is not to say either Pujols or Helton could strike out, hit into a double play, etc. when some rookie might have hit an HR in any particular instance. In fact just over half the time they will fail. But I’d make that same managerial decision every time if I wanted to keep my job.
Alexander Harvey says
Gavin,
Thank you for the information on CMIP3/5. Sadly, I would not expect to be in a position to do the calculations even if I had the anomalies.
I notice that the ensemble trend (1993-2002), and hence your extrapolation, amounts to ~12E22 Joules over 1993-2010 (17 yrs), which, with the same 85% above 750 m / 15% below correction (as in Hansen, … yourself et al 2005, Earth’s Energy Imbalance: Confirmation and Implications) and a straight 750 m/700 m ratio correction, gives about 0.55 W/m^2 globally (total surface area 5.1E14 m^2) for the period. Is this figure in agreement with your understanding, and was Model ER tracking that rate at the 0-700 m integration level?
It seems modest compared to figures quoted elsewhere (e.g. CERES analyses) and notably does not give rise to a significant model mean vs NOAA OHC discrepancy. Which is reassuring, but a little puzzling, as I have seen figures such as a requirement for ~0.9 W/m^2 quoted, and hence a search for additional stored heat beyond what can be reasonably deduced from the unadjusted NOAA OHC data. I am not bothered by the squiggles; a year or two over or under budget is not much of an issue as far as I am concerned. I further discount the 0.04 W/m^2 (atmosphere, land, melted sea ice and land ice) mentioned in your research article as relatively minor given known uncertainties.
Which brings me, belatedly, to my last query. Do you have any figures for the below-700 m/above-700 m ratios for the ensemble? I should like to know if the ensemble has a significant requirement for storage below 700 m.
Many Thanks
Alex
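For reference, the arithmetic behind the ~0.55 W/m^2 figure above works out as follows (a sketch using the numbers and corrections Alexander quotes, which are his assumptions, not values taken from the model output):

```python
# Numbers as quoted in the comment above; the 85%/15% and 750 m/700 m
# corrections are Alexander's assumptions, reproduced here for arithmetic only.
ohc_change = 12e22          # J, ensemble-mean 0-700 m OHC change, 1993-2010
seconds = 17 * 3.156e7      # 17 years in seconds
area = 5.1e14               # m^2, surface area of the Earth

flux_0_700 = ohc_change / seconds / area           # ~0.44 W/m^2 (0-700 m only)
flux_total = flux_0_700 / 0.85 * (750.0 / 700.0)   # ~0.55 W/m^2 after corrections
print(round(flux_0_700, 2), round(flux_total, 2))
```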
Doug Proctor says
Okay: as I think I understand it, Scenario C is not to be considered because it assumed that no emissions occurred after 2000, which clearly isn’t the case. However, the tracking between Scenarios and actual temperatures is best for Scenario C. Which is to suggest: 1) the emissions have virtually no impact at this time on global temperatures, or 2) all of the impact of emissions since 2000 has been offset by natural processes that have not been modelled. Either way, the correlation of actual temperatures with Scenario C is important.
To dismiss the Scenario C correlation as not being “useful”, when “useful” is not defined (for purposes of controls under a precautionary principle?) [edit – don’t go there]
[Response: You are not getting the point. Take a classic little trick: what is 19/95? One method might be to cancel the ‘9’s to get 1/5 (which is the correct answer). However, this method is completely wrong and so even though (coincidentally) the method came up with the right answer, it is not ‘useful’ (except as a party trick). The point is that the coincidence of the wrong forcing, a slightly high sensitivity and a lucky realisation of the internal variability isn’t useful in the same way. Perhaps the correct answer (temporarily), but not one that was got using a correct or useful method. How do you suppose we should use that to make predictions for future events? – gavin]
[edit – if you want to have a conversation, don’t insult the person you are conversing with]
The comparison you have done with what Hansen said in 1988 – which is still valid, as the models have not substantially changed since then – is embarrassing in your denial. If the correlations were positive, that temperatures matched Scenario B, would you accept skeptics saying, “Sure, but really, Scenario C is more useful”, and if the ocean-heat data looked like Lyman (2010), them saying “Sure, but that’s only because deeper heat is being transferred to the surface and replaced by cooler waters, but we can’t see it”?
[Response: Huh? The models have changed a lot – the results have not changed much. There is a difference. As for Lyman et al, that is what the OHC data look like (as far as we can tell), so I don’t get your point at all. – gavin]
Septic Matthew says
110, gavin in comment: If you are suggesting that scenario C will continue to be a better fit, I think this is highly unlikely. – gavin
137, Hank Roberts: Where are you finding anyone saying Scenario C has a better fit?
It’s very clear from the graph that the forecast from scenario c fits the data better than the other two forecasts; that the clearly counterfactual assumptions in scenario c produce a better fit (so far) suggests that scenarios a and b are untrustworthy guides to the future.
Much has been learned since Hansen ran those models. Would it not be more appropriate to rerun the models over the same time span that Hansen ran them, using current best estimates of parameters (such as sensitivity to CO2 doubling) and see what those predict? The difference between the a,b, and c scenarios of 1988 and the a,b, and c scenarios of 2011 would, I propose, be a measure of the importance (implications for the future) of what has been learned in the time since. At least if the model itself is sufficiently trustworthy.
If the modeled results from scenario c (with best possible parameter estimates and counterfactual CO2 assumptions) continue for a sufficient time to be closer to actual data than the modeled results from scenarios a and b (with best possible estimates of parameters and accurate CO2 assumptions), then the model that produced the computed results will have been disconfirmed.
I don’t think the Hansen graphs have any importance at all, except historical. They were like “baby steps”, when 22 years later the erstwhile “baby” is playing in the Super Bowl. I would much rather see the outputs, over the same epoch, of the same model with the current best estimates of all inputs that have had to be estimated or measured.
Nick O. says
#149 – Sceptic Matthew: “I don’t think the Hansen graphs have any importance at all, except historical.”
I think that is a little harsh, and we should not underestimate the importance of consistency in forecasting the general trends, nor the context it brings to how we interpret and present our present model results.
For one thing, given that Hansen was using a fairly simple model and did not have the benefit of the computing power and data sets that we have now, I think his model actually did a pretty good job. More to the point, the warming trend his graph shows has not, to my knowledge, been contradicted by any subsequent climate modelling, despite our better estimates of climate sensitivity etc. This suggests to me that he was getting the basics more or less right, which in turn emphasises the point that the best models and theory we have all predict and have consistently predicted the same thing: warming, and quite a bit of it by the end of this century if we keep dumping CO2 in the atmosphere at our current rates. Put this another way: are there any models out there showing a consistent cooling trend over the next 30-50 years? One also wonders how much longer we have to keep predicting the same thing, again and again, before the message really sinks home.
Hank Roberts says
> the forecast from scenario c fits the data
SM, you’re confused and it’s hard to see why you cling to this.
I predict I can boil a gallon of water using gasoline and a bellows to force air, and it will take 5 minutes.
We boil the water but do it using kerosene and a tank of compressed air.
It takes 5 minutes.
Did my forecast fit the data?
C’mon. Details matter.
Anne van der Bom says
Doug,
Sure, but really, Scenario C is more useful
No, scenario B is more useful.
You appear to think that different scenarios have different physics. They don’t. The only difference between the scenarios is the emissions (which depend on economic development and politics, both of which are outside the realm of climate models to predict). They are useful to show to policy makers: “if we do this, then that is the expected consequence”.
So scenario C is useless since it assumed an emissions path that was nowhere near reality. Scenario B is closest to real-world emissions over the past 2 decades.
Septic Matthew says
148, Hank Roberts.
147, Nick O.: I think that is a little harsh,
The 1988 model was a good step forward. Should it really not be updated with the best current information? I hope this exercise can be repeated annually or at 5 year intervals, and include the predictions from other models for comparison (one other is presented above), such as Latif et al’s model and Tsonis et al’s model, along with the simple linear plus sinusoid. I mentioned Wald’s sequential analysis (and its descendants); I hope that there is sufficient evidence to decide that one of them is really accurate enough to base policy decisions on before more than 20 more years pass.
One also wonders how much longer we have to keep predicting the same thing, again and again, before the message really sinks home.
Until you have a long record of reasonably consistent prediction accuracy. As long as the predictions are closer to the prediction from scenario a than to the subsequent data record, the forecasts will remain unbelievable to most people; the more such forecasts are repeated, the more unbelievable they will become, unless the data start clearly trending more toward scenario a.