It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?
For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.
As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still. But note, as always, that short term (15 years or less) trends are not usefully predictable as a function of the forcings. It’s worth pointing out as well, that the AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves with the forcings imposed, the magnitude of the internal variability and of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
There is a claim doing the rounds that ‘no model’ can explain the recent variations in global mean temperature (George Will made the claim last month for instance). Taken absolutely literally, this must of course be true: no climate model simulation can match the exact timing of the internal variability in the real climate years after the run was made. But something more is being implied, specifically that no model produced any realisation of the internal variability that gave short term trends similar to what we’ve seen. And that is simply not true.
We can break it down a little more clearly. The trend in the annual mean HadCRUT3v data from 1998-2009 (assuming the year-to-date is a good estimate of the eventual value) is 0.06+/-0.14 ºC/dec (note this is positive!). If you want a negative (albeit non-significant) trend, then you could pick 2002-2009 in the GISTEMP record, which is -0.04+/-0.23 ºC/dec. The ranges of trends in the model simulations for these two time periods are [-0.08,0.51] and [-0.14,0.55], and in each case there are multiple model runs that have a lower trend than observed (5 simulations in both cases). Thus ‘a model’ did show a trend consistent with the current ‘pause’. However, that these particular models showed it is just coincidence, and one shouldn’t assume that they are better than the others. Had the real world ‘pause’ happened at another time, different models would have had the closest match.
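(For anyone who wants to reproduce this kind of trend calculation, a minimal sketch is below. The anomaly values are placeholders, not the actual HadCRUT3v or GISTEMP data, and no auto-correlation correction is applied.)

```python
# Minimal sketch: OLS trend with a 95% confidence interval on the slope.
# The anomaly values below are illustrative placeholders, not real data.
import numpy as np
from scipy import stats

years = np.arange(1998, 2010)
anoms = np.array([0.53, 0.30, 0.28, 0.40, 0.46, 0.47,
                  0.45, 0.48, 0.43, 0.41, 0.33, 0.44])  # placeholders

res = stats.linregress(years, anoms)
t95 = stats.t.ppf(0.975, len(years) - 2)    # two-sided 95% critical value
print(f"trend = {10*res.slope:.2f} +/- {10*t95*res.stderr:.2f} degC/dec")
```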
Another figure worth updating is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. Unfortunately, I don’t have the post-2003 model output handy, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.
Update (May 2012): The graph has been corrected for a scaling error in the model output. Unfortunately, I don’t have a copy of the observational data exactly as it was at the time the original figure was made, and so the corrected version uses only the annual data from a slightly earlier point. The original figure is still available here.
(Note that I’m not quite sure how this comparison should be baselined: the models are simply the difference from the control, while the observations are ‘as is’ from NOAA.) I have linearly extended the ensemble mean model values for the post-2003 period (using a regression from 1993-2002) to get a rough sense of where those runs could have gone.
And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC).
The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.
Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2 × 0.19/(0.26 × 0.9) ≈ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best estimate sensitivity deduced from this projection is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range 0.21+/-0.16 ºC/dec (95%). Note too that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming.
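(Here is the back-of-envelope scaling spelled out, using only the numbers quoted above; the linear-scaling assumption is the reckless part, not the arithmetic.)

```python
# Back-of-envelope sensitivity estimate, assuming the 26-yr trend scales
# linearly with the equilibrium sensitivity and the forcing (big assumption).
model_sensitivity = 4.2   # degC per CO2 doubling, old GISS model
model_trend = 0.26        # degC/dec, Scenario B
forcing_factor = 0.9      # Scenario B forcings ran ~10% high
obs_trend = 0.19          # degC/dec, GISTEMP / HadCRUT3

implied = model_sensitivity * obs_trend / (model_trend * forcing_factor)
print(f"implied real-world sensitivity ~ {implied:.1f} degC")  # ~3.4
```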
The sharp-eyed among you might notice a couple of differences between the variance in the AR4 models in the first graph and the Hansen et al model in the last. This is a real feature. The model used in the mid-1980s had a very simple representation of the ocean – it simply allowed the temperatures in the mixed layer to change based on the changing fluxes at the surface. It did not contain any dynamic ocean variability – no El Niño events, no Atlantic multidecadal variability etc. – and thus the variance from year to year was less than one would expect. Models today have dynamic ocean components and more ocean variability of various sorts, and I think that is clearly closer to reality than the 1980s vintage models, but the large variation in simulated variability still implies that there is some way to go.
So to conclude: despite the fact that these are relatively crude metrics against which to judge the models, and there is a substantial degree of unforced variability, the matches to observations are still pretty good, and we are getting to the point where a better winnowing of models dependent on their skill may soon be possible. But more on that in the New Year.
TRY says
194 – Hey, finding pictures online is a great contribution! Gold star.
If you bothered to read my previous comments you’d see that I asked the exact same question. The fact is, it’s an interesting question and one worth discussing.
You’re exactly right. Warming is a result of an imbalance between inbound and outbound radiation. Your opponents claim that inbound radiation varies over time. Surely there is some variation, however slight? Are we tracking this outside the earth’s atmosphere?
And what about outbound radiation? The AIRS system does seem capable of tracking radiation at some level. What’s it going to take to track that data across a wide IR range seasonally? We have data from at least 1997. No change to 2003. What about to now? Where are those studies? We should in theory see a change in radiation signature over time. Wouldn’t this go a long way towards addressing the issue of the actual impact of increasing/decreasing CO2 in the atmosphere? We’re working hard to track temperatures – why not actual radiation output, the first-order, measurable impact of CO2?
Your opponents claim that in fact CO2 does not have the claimed impact on outbound radiation: clouds, overlapping absorption bands, saturation, less positive feedback than expected from water vapor, etc, etc. Lots of discussion on both sides.
I have zero interest in rehashing all of these secondary issues. I particularly couldn’t care less about these “angels on the head of a pin” statistical discussions of temperature measurements and trends. Is it decadal? 15 years? Is there selection bias in what gets studied, funded, published? Does the north pole ice signal have less noise, or the south pole ice signal? Again, zero interest.
Also, Hank Roberts, I’ll quote you
“Please. Go shake up _your_ public relations officers today.
Show them what Google Image finds, on any subject related to climate or your own research field. Make them afraid for their own and their university’s budget if they don’t get better at this.”
Seriously?
Doug Bostrom says
#187 John:
Unless I’m missing your thrust it does not seem as though you read Gavin’s text, starting with the title of the post: “Updates to model-data comparisons”.
Nicolas Nierenberg says
Gavin,
It doesn’t really matter if the runs were done in 2004 or 2006. I’ll accept that they are a forecast starting in 2004 instead of 2006; it doesn’t change my point. The modelers knew what the actual results were for the earlier periods, so these types of models can’t be judged based on anything before that time.
Show me a model of the economy that matches up until this year and I will say that’s nice. Give me a model today that matches the next ten years and I’ll say that’s impressive.
sierra117 says
re @191
Thank you Gavin. I went to the NOAA website and found a chart of the El Nino oscillations which confirmed your remark (re El Nino). I didn’t realise how big the 98 (and 83) El Nino events were by comparison to this year’s.
Thank you Tamino; I have read through your articles.
Has anyone undertaken any research to see how well the El Nino/La Nina oscillations correlate with variances from the global temperature trend line? Seems to me just by looking at the peaks and troughs in these charts there is a good correlation; but appearances can be deceptive and I know there is no substitute for a decent mathematical analysis.
Hank Roberts says
Leo, can you find that quote again? Google doesn’t find it.
> Sigurdur
> Grow lines
Say what? What does that mean? I tried to look it up and found wackiness.
Are you thinking about the change in plant hardiness zones?
http://www.arborday.org/media/mapchanges.cfm
TRY says
A reason this radiation analysis might be interesting: not much warming trend this decade compared to last – why? No impact from CO2 warming, or CO2 warming overwhelmed by a source of cooling? It seems these two scenarios would involve somewhat different radiation output. Do we know what these differences might look like? And can we measure them? I assume, bottom line, that the answers are no. Does seem worthwhile, however.
Don Shor says
[The implied claim] “…that no model produced any realisation of the internal variability that gave short term trends similar to what we’ve seen … is simply not true.”
Given the very wide range of possible outcomes shown in the first graph, anything – including a decline in temperature nearly every year from 1980 to the present – would have been within the model estimates. So that may put the lie to George Will’s claim, but it also doesn’t speak very well for the utility of the models.
[Response: On short timescales, absolutely. On long timescales they all warm strongly and that is the risk they are highlighting. However, a decline in temperatures since 1980 would definitely be outside the model envelope (see comment 158). – gavin]
“So to conclude, despite the fact these are relatively crude metrics against which to judge the models, and there is a substantial degree of unforced variability, the matches to observations are still pretty good…”
In the third graph, the data came in 27% below Dr. Hansen’s Scenario B, and even below his most conservative Scenario C. That is “pretty good”? What would be “not so good”?
[Response: The forcing from Scenario B is the one closest to what happened – ‘A’ and ‘C’ are therefore moot for deciding whether the model has skill. You can quantify skill of a prediction by comparing it with a reference (‘naive’) hypothesis of ‘no warming’ say. The skill is calculated as 1-MSE_pred/MSE_ref (MSE=Mean Square Error), and positive numbers indicate skill. If you were doing it on the trend, the skill of the B simulation compared to a no warming case is 0.86. The skill is similar if you take each annual anomaly (0.75). So yes, that’s pretty good – the model told us something we didn’t know before. ‘Not so good’ would have been a linear extrapolation from pre-1984 temperatures, for instance. Obviously ahead of time we don’t know which scenario is going to be best (though we might have an opinion). If you felt the scenarios were equally likely, you might think that the average over all of them would be your forecast. That too would have had skill. – gavin]
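(A sketch of the skill calculation described in the response above, with toy numbers rather than the actual simulations and observations:)

```python
# Skill score S = 1 - MSE_pred/MSE_ref, both MSEs taken against observations.
# Positive S means the prediction beats the naive 'no warming' reference.
import numpy as np

def skill(pred, ref, obs):
    mse = lambda x: np.mean((np.asarray(x) - np.asarray(obs)) ** 2)
    return 1.0 - mse(pred) / mse(ref)

obs  = [0.10, 0.20, 0.30, 0.50]   # toy anomalies, not real data
pred = [0.10, 0.25, 0.35, 0.45]   # a warming prediction
ref  = [0.10, 0.10, 0.10, 0.10]   # 'no warming' reference
print(skill(pred, ref, obs))      # ~0.96: skilful
```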
Martin Vermeer says
Matthew #45
Matthew, you’re illustrating the fallacy I was describing. The point (Jeffreys’s point) you’re missing is that hypotheses need to be intercompared, not proven true in isolation. Your ‘operational distinction’ doesn’t really exist: valid decision making is based on valid inference.
As it is, we’re betting the farm (as in, worst case, the continued existence of our civilization) on the correctness of the ‘Ostrich Hypothesis’, which violates textbook physics and is flatly contradicted by multiple independent lines of evidence. Can you say ‘wishful thinking’?
There is no way not to bet. Responsible bettors use the best info available, warts and all.
Welcome to decision making under uncertainty, AKA risk management.
Silk says
Re #155 “Also I made a mistake in my original post – the TAR projections were for warming of 0.3 to 0.4 with the actual measured warming being between 0 to 0.05 (not 0.5 as I initially said)”
No, you made more than one mistake in your original post. Your claim that the TAR predicted a 0.3 to 0.4 deg temperature increase this decade is false.
See
http://www.grida.no/publications/other/ipcc_tar/
The MAXIMUM increase suggested (not predicted – SRES are not predictions) was 0.24 degrees.
The A1B scenario, which (so far as I recall) is closest, in emissions terms, to what actually happened, suggested an increase of 0.14 degrees this decade.
Which is a pretty darn good estimate, taking whichever trend you want (0.06+/-0.14 ºC/dec or -0.04+/-0.23 ºC/dec).
So, skeptics, we have a decade of reasonably successful prediction of global temperature change, and now we have better models.
Next.
Martin Vermeer says
Matthew #195 (eh, mistake in my previous comment – the ‘#45’ should have been #195)
At first I didn’t understand this metaphor at all… but perhaps it is relevant after all, just in the reverse direction :-)
The situation we’re in is not the analog of your metaphor, where we are unaware of General Relativity; on the contrary, we know both that AGW is real and what causes it; only, some folks don’t like that reality. We know too much, not too little.
Proposing a better metaphor: GR is known, and validly explains the anomalous precession of Mercury’s line of apsides — but some influential folks don’t like it, for ideological or vested-interest reasons. They just know — KNOW, I tell you — that this is ‘degenerate physics’ foisted upon the world by a global conspiracy of physicists, who also suppress observational data on the planet Vulcan / the inner asteroid belt / the flattening of the Sun / the deviation from inverse-square [take your pick, never mind they cannot all be true]. And anyway, 42 arc seconds per century is very, very small. It’s less than 1% of the total precession — are those “scientists” really claiming that you can measure such a small quantity reliably? And the godless materialists pushing GR also believe in a Big Bang, denying divine creation. It’s all part of a plot to corrupt our youth.
(I can’t think of a realistic way in which a very large amount of money would be riding on this; but it shouldn’t have to matter, should it. The truth is the truth.)
Dappled Water says
#197 – What the heck are grow lines? Are you referring to hardiness zones?
http://www.arborday.org/media/mapchanges.cfm
Jim says
Just curious, how do most models ‘model’ the atmosphere? I see in the AR4 it’s “~100 km (T106)”, what does that mean for the size of each block or individual unit of atmosphere?
Is it calculated in spherical coordinates (e.g. radius, angle z, angle x) or translated into a sort of 3d brick version of a map of the Earth?
Do they cover all layers from the ground/ocean to the tropo/strato/meso/thermo -spheres?
For the troposphere is it modeled as flattened at the poles, and bulging at the equator?
Is there accounting for the Coriolis force? Or the hemispheric prevailing winds (Polar easterlies, prevailing westerlies, trade winds)?
I think I read somewhere they are struggling with cloud cover and albedo, is there some progress on that task?
Thanks,
Geoff Wexler says
TRY
#163
So feed x photons into the atmosphere that are absorbed by CO2 – what % are re-emitted by CO2, what % are transferred to other molecules, what % of those eventually cause IR emissions by H2O, CH4, vs transfer to ground via convection, etc.
#166 and #170
Again, different than this idea that 100% of the energy that CO2 absorbs in a specific wavelength is emitted in that same wavelength, right?
This is how I see it now:
Try to separate the greenhouse discussion from convection by considering a time so short (i.e. a small fraction of a second) that there is certainly no weather and no significant drift (convection), yet long enough that there is a large number of absorptions, emissions and collisions, i.e. radiative transfer and a greenhouse effect.
In that case the local temperature, density, pressure etc. will be constant, there will be local thermodynamic equilibrium at a constant temperature, and the proposition that 100% of the energy absorbed by CO2 in a specific wavelength is emitted in that same wavelength becomes exact. If this were not true then the CO2 would either cool or warm, in contradiction to the hypothesis of constant temperature. It remains true even when other greenhouse gases such as H2O and CH4 are taken into account, because the energy transfers from CO2 to e.g. H2O and in the opposite direction have to balance.
That result is very probably an example of the principle of detailed balance, which you can read up on. If this balance were violated you could probably use it to violate the second law of thermodynamics. For the same reason, there will be no net energy lost to the non-greenhouse gases such as O2 and N2; their role is to help maintain thermodynamic equilibrium by acting as a heat reservoir.
Ray Ladbury says
One of the fallacies we are hearing from the denialosphere is the presentation of the choices available to us. We do not face a choice between “doing nothing” and “investing trillions”. The next few decades will of necessity see a revolution in our energy infrastructure quite independent of anything we do about climate change. The era of cheap petroleum is over. The choice is whether we invest in a sustainable, clean energy infrastructure or whether we invest slightly less in a dirty and temporary fix relying on coal, oil shale, tar sands, etc. The choice is whether we invest as science counsels us or whether we do exactly the opposite–in other words, the choices are science and anti-science.
We are also told that in order to justify the extra investment in clean energy infrastructure, we need to hold the science to some higher standard than mere scientific truth. Really? Should the standards of scientific truth depend on the desirability of its implications? Should we really be willing to bet the future of human civilization on a 20:1 longshot to avoid an investment that amounts to about 1-2% of global economic output over a few decades?
Of course what is happening here is a conflation of scientific truth with probabilistic risk assessment (PRA). The flaw here is making acceptance of the risk contingent on its consequences – and that violates every tenet of both science and PRA. Scientific truth is scientific truth. You can take 90-95% CL to the bank. The key is to accept the best science we have and then formulate policies that take into account both costs and consequences of action and inaction.
The level of scrutiny climate science has sustained is unprecedented. Its methods and conclusions have survived not just internal review, but review by National Academies, Professional Societies and even hostile legislative committees. Not one such review panel has dissented from the consensus position–that we are warming the planet and that we need to do something about it. Even the theory of Evolution has not been subjected to such scrutiny! There is still room for debate, but the legitimate debate now concerns what to do about this crisis, not about whether the crisis is real.
Ray Ladbury says
TRY, I work building satellites (could ya’ tell?), so when you ask me, you will tend to get a much more detailed and technical answer than when you ask others. I start thinking “How?” rather than “What?”. Keep in mind that the phenomenon we are discussing is “climate”. That is true whether our “eyes” are in the sky or on the ground. You still need to look at long-term, global behavior.
I strongly favor this sort of program. DSCOVR would have been a good start – but a read of the history of this program is educational as to the influence of denialist elements on climate science:
http://www.desmogblog.com/a-desmogblog-exclusive-investigation-into-nasas-dscovr-climate-station
Simon Rika aka Karmakaze says
@Ray Ladbury #171
Is it easier to see the trend because of the use of a reference average temp (i.e. the “1970-1999 average”)? I mean, if you show absolute temps, is the variability so great that it basically looks like noise? I am just trying to wrap my head around why a graph that shows an absolute temp is harder to evaluate.
In case I’m not making myself clear: say we have a graph of annual mean temp, and the average annual mean temp for the period 1970 to 1999 was 14.5C and this year’s annual mean temp was 14.6C. Would that not show the trend as clearly as this year being a +0.1C anomaly compared to the 1970-1999 average?
Thanks for the help, I appreciate it.
I know this is off-topic, but it is a question that has been bouncing around my head for awhile now, and I just thought I’d ask.
Barton Paul Levenson says
Pat: Since we have been warming from the 1600’s (or cooling since the Holocene Optimum), when did it switch from natural to anthropogenic?
BPL: We passed the peak of the interglacial 6,000 years ago and have been cooling, on average, since then–until the industrial revolution started.
Pat: And why can’t the man made warming overcome the trivial effects of the cold phase PDO.
BPL: WHAT effects? Be specific, please. Do you mean on temperature? Try here:
http://BartonPaulLevenson.com/Ball.html
http://BartonPaulLevenson.com/Reber.html
http://BartonPaulLevenson.com/VV.html
Barton Paul Levenson says
Terry,
Relative humidity stays fixed on global average:
Gettelman, A. and Q. Fu 2008. “Observed and Simulated Upper-Tropospheric Water Vapor Feedback.” J. Clim. 21, 3282-3289.
“Satellite measurements from the Atmospheric Infrared Sounder (AIRS) in the upper troposphere over 4.5 yr are used to assess the covariation of upper-tropospheric humidity and temperature with surface temperatures, which can be used to constrain the upper-tropospheric moistening due to the water vapor feedback. Results are compared to simulations from a general circulation model, the NCAR Community Atmosphere Model (CAM), to see if the model can reproduce the variations. Results indicate that the upper troposphere maintains nearly constant relative humidity for observed perturbations to ocean surface temperatures over the observed period, with increases in temperature ~1.5 times the changes at the surface, and corresponding increases in water vapor (specific humidity) of 10%–25% °C^-1. Increases in water vapor are largest at pressures below 400 hPa, but they have a double peak structure. Simulations reproduce these changes quantitatively and qualitatively. Agreement is best when the model is sorted for satellite sampling thresholds. This indicates that the model reproduces the moistening associated with the observed upper-tropospheric water vapor feedback. The results are not qualitatively sensitive to model resolution or model physics.”
Manabe, S. and R.T. Wetherald 1967. “Thermal Equilibrium of the Atmosphere with a Given Distribution of Relative Humidity.” J. Atmos. Sci. 24, 241-259.
Minschwaner, K., and A. E. Dessler, 2004. “Water vapor feedback in the tropical upper troposphere: Model results and observations.” J. Climate, 17, 1272–1282.
Soden, B.J., D. L. Jackson, V. Ramaswamy, M. D. Schwarzkopf, and X. Huang, 2005. “The radiative signature of upper tropospheric moistening.” Science, 310, 841–844.
Barton Paul Levenson says
sierra117: My background is in statistics and computer science; so I know a little about the techniques used in climate science… Gavin, you mentioned that a trend of 15 years or less is insignificant statistically. I am curious to understand why you say that.
BPL: Your “background is in statistics” but you don’t understand why a trend of less than 15 years in annual temperature figures is useless? I find that hard to believe.
Try this: Do a linear regression of mean annual global temperature anomalies on time. Use the past 5 years, 6 years, 7 years, etc. up to 30 years. Then tell me where the t-statistics on the time term hit the 95% confidence level.
Let me know if you don’t understand what I mean by “linear regression,” “t-statistic,” or “confidence level.”
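(One way to code that exercise, as a sketch – it assumes you already have annual anomalies in `years`/`anoms` arrays, e.g. from the GISTEMP file linked further down the thread.)

```python
# For each window length n, test whether the trend over the last n years
# is significant at the 95% level (two-sided t-test on the OLS slope).
import numpy as np
from scipy import stats

def trend_is_significant(years, anoms, n):
    res = stats.linregress(years[-n:], anoms[-n:])
    t = res.slope / res.stderr
    return abs(t) > stats.t.ppf(0.975, n - 2)

# e.g.: for n in range(5, 31): print(n, trend_is_significant(years, anoms, n))
```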
Barton Paul Levenson says
TRY: We should in theory see a change in radiation signature over time.
BPL: I listed the studies for you which found exactly that. You completely ignored them. [edit]
Barton Paul Levenson says
TRY: No impact from CO2 warming, or CO2 warming overwhelmed by a source of cooling? It seems these two scenarios would involve somewhat different radiation output. Do we know what these differences might look like? And can we measure them. I assume, bottom line, that the answers are no.
BPL: I showed you, bottom line, that it had already been done. You showed no response to my rather long post with complete citations. You’re just going to keep repeating this false point, sometimes phrasing it as a question, to give the impression that there’s a big fat hole in our empirical knowledge which invalidates AGW theory. [edit]
Barton Paul Levenson says
Gavin: The skill is calculated as 1-MSE_pred/MSE_ref (MSE=Mean Square Error),
BPL: I don’t get it–wouldn’t the MSE for a prediction of “no warming” always be zero (MSE for a flat line)? How are you calculating MSE_ref?
[Response: It’s the mean square difference from what actually happened. – gavin]
Richard Steckis says
“As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still.”
1. The current temperatures are not in the middle of the model estimates but close to the lower third.
2. Of course you are assuming no future La Nina slumps?
3. I think the El Nino has to extend into the NH summer to make 2010 the warmest year of the decade (the new decade does not start till 2011). Current modelling of this El Nino by Australia’s BOM still has this one dissipating during March, which is the usual month for these events to start breaking down.
wil says
#40 “I conclude that we are in very deep trouble. I don’t see how anybody can disagree. The 2002 to 2009 nitpick is just that, a nitpick, no doubt caused by weather.”
You can go back much further than 2002. You can also start in 2001, 2000, 1999, 1998, 1997 or 1996. Take for instance the NOAA-NCDC data from 1996 to 2009 (that is 14 years!) and you still will find that the slope is not significantly different from zero (p=0.09). Isn’t that a reason for serious doubt? How long can you blame the weather for the discrepancy?
If you want to convince people who have an open mind but are not willing to believe just anything, you will need better arguments.
sierra117 says
@219
BPL….I understand the maths…where can I get hold of the data?
[Response: Here. – gavin]
Rob says
What would the output of the GCMs look like if there were no CO2 increase? Any links to a graph? thanks!
[Response: Assuming you mean no forcings of any kind, then the ensemble mean would be flat, but you’d still see excursions of the same magnitude as the grey bands above. – gavin]
Anonymous Coward says
Thomas (#193) wrote: “No serious climate scientists thinks we are in danger of setting the runaway affect”.
The thing is, Hansen claims there is a clear (if not quite present yet) danger. He may not be “serious” but he’s too big a name to be dismissed out of hand. So, again, my question is: did Hansen substantiate his extraordinary claim publicly?
An H2O runaway greenhouse would be the worst catastrophe ever. There is no risk humanity has a handle on that’s more serious so I do not understand Hansen’s seemingly flippant way of bringing it up.
The minimum amount of absorbed solar radiation for a runaway effect on Earth seems to be close to 290W/m^2. Thankfully, the albedo of the Earth isn’t going to get close to 15% any time soon, so we seem to be well under the danger zone (barring weird cloud effects). I don’t see how such an amount of absorbed solar radiation would translate to a global average temperature of 50C (as Thomas wrote).
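(For what it’s worth, the 290 W/m^2 figure is easy to check against the standard globally averaged absorbed-solar formula, (1 − albedo) × S0/4, assuming S0 ≈ 1366 W/m^2:)

```python
# Globally averaged absorbed solar radiation: (1 - albedo) * S0 / 4.
S0 = 1366.0  # solar constant in W/m^2 (assumed value)
for albedo in (0.30, 0.15):
    print(f"albedo {albedo:.2f}: {(1 - albedo) * S0 / 4:.0f} W/m^2")
# albedo 0.30: ~239 W/m^2 (roughly today); albedo 0.15: ~290 W/m^2
```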
Grabski says
a > 25% underestimate for scenario C (from the 2007 review)
The forcing from Scenario B is the one closest to what happened – ‘A’ and ‘C’ are therefore moot for deciding whether the model has skill.
…
But the data are 0.5 degrees below B scenario, and in fact are below the C scenario. So forcings in C are 25% less than actual and temps are below Scenario C; why is that moot? There’s a lot of information from that miss about the models.
Ray Ladbury says
wil@224 There’s a reason why climate is considered 30 years or longer. See here:
http://tamino.wordpress.com/2009/12/15/how-long/
I’ve always found it astounding that people want to draw conclusions from a decade of information while 1) ignoring that the current decade is the warmest on record; and 2) ignoring the previous 3 decades.
Ray Ladbury says
Simon@216. Try looking at monthly data–lots of up and down, right? Hard to spot a trend. Now average over a year. Still lots of up and down, but the trend is easier to see. Now try looking at 5 year averages and the trend becomes quite clear. Essentially, if you remove known sources of noise (like annual variation in absolute temperature) the trend becomes easier to see.
Geoff Wexler says
#201 TRY (again)
“Your opponents claim that ……..
I have zero interest in rehashing all of these secondary issues….”
Really? Then why have you just listed them all? Far from being secondary many were of crucial importance, but were settled years ago.
“Lots of discussion on both sides.”
That’s just what the pro-CO2 lobby want everyone to believe: just non-rigorous discussion, every single step uncertain and no conclusions.
Dale Power says
The Mars Bar theory holds some merit if you are willing to look outside the strict scientific box for a bit.
Same with the Pirate index.
1. The Mars Bar theory indicates that increased non-essential product production (one major sign of the industrial/consumer age) tracks well with increased Global Temperatures. It is not the direct cause of course, but rather a sign of what is going on with the overall societal complex in a given time period. It is actually valuable to note this, as it demonstrates the relationship between human activity and Temperature increase over time.
2. The Pirate Index shows how an increase in societal input, in the form of policing, shows some relationship to the above mentioned Mars Bar Theory.
As wealth increased due to the industrial age, resources were more available to combat piracy on the high seas. Both the policing and the funds available to pay for it track with temperature increases and precede those increases, showing a potential cause and effect relationship.
Yes, this is far from real science, but it is the kind of thing that catches the public imagination!
People do NOT want to take responsibility for, well, almost anything in life! You have to show people what is going on in many different ways if you want them to believe you.
Who has the scientific background to actually write this up? (Provided it honestly holds up data wise.)
This is the kind of “device” (I almost said “trick”!) that will actually impact the minds of people, so is worth doing!
John E. Pearson says
sierra117: Here is a very simple argument that explains why there ought to be some time scale before which trends cannot be ascertained. Think of a random walker with a deterministic component to his walk, a constant drift speed v. If the variance of the stochastic motion is Dt (t is time), then the distribution function for the walker’s position will be Gaussian with mean vt and variance Dt. At early times the width of the distribution (w = sqrt(Dt)) is much bigger than the walker’s mean position (vt) and one cannot ascertain the trend (i.e. v) from the data (i.e. a time series of the walker’s position). The width w = sqrt(Dt) is equal to the mean (vt) at time T: sqrt(DT) = vT, so that T = D/v^2. For times small compared to T = D/v^2 the trend cannot be learned with any accuracy. It has often been misstated here that this is an issue of the amount of data that one has. This is not quite correct. It doesn’t matter how much data you have for early times. Even if you could collect continuous time and position data (an infinite amount of data) you still would not be able to learn the trend v at times small compared to T. The noise itself (which is characterized by D) sets a limit on how well v can be ascertained for short times.
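(A quick Monte Carlo version of this argument, as a sketch with made-up parameters: the spread of the estimated trend shrinks as sqrt(D/t), so it only drops below the drift v once t exceeds T = D/v^2.)

```python
# Drifting random walk: steps ~ N(v, D); after n steps the naive trend
# estimate (position/n) has mean v and std sqrt(D/n), so the signal only
# emerges once n >> T = D/v^2 (= 25 steps with these made-up parameters).
import numpy as np

rng = np.random.default_rng(0)
v, D = 0.02, 0.01
walks = rng.normal(v, np.sqrt(D), size=(10_000, 100)).cumsum(axis=1)

for n in (5, 25, 100):
    trends = walks[:, n - 1] / n
    print(f"n={n:3d}: trend {trends.mean():+.3f} +/- {trends.std():.3f}")
# n=5 and n=25: spread >= signal; only at n=100 does the drift stand out.
```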
Hank Roberts says
> Simon 216
Try moving the slider on the little applet on this page:
http://hot-topic.co.nz/keep-out-of-the-kitchen/
Howard S. says
Yes Ray, 229. There’s a reason why climate is considered 30 years or longer. See here:
http://www.youtube.com/watch?v=8mxmo9DskYE&feature=player_embedded
Jim Eager says
Jason @192, I actually agree with you that a carbon tax would be far preferable to a cap & trade regime, as does James Hansen, btw, but you missed the meaning of my question: just halting CO2 emissions will not be enough to prevent global warming induced climate change since a halt will not reduce the amount of extra carbon that has been added to the active carbon cycle on any meaningful human time scale.
Putting a price on carbon will certainly spur investment and research into developing methods to actually remove excess carbon from the atmosphere, but given that a good deal of energy is released by burning fossil carbon, not to mention the energy expended on digging up, pumping and refining that carbon, basic physics dictates that removing that carbon and sequestering it from the active carbon cycle will require an amount of energy of at least the same magnitude. Where will that energy come from?
It took over two centuries from a near standing start to liberate the 300+Gt of carbon that we have added to the atmosphere and active carbon cycle. It is thus reasonable that it will take a similar time scale from a standing start to sequester enough C to reduce the carbon reservoir to a level that will avert the full consequences of the climate change that we have set in motion.
By diverting your comment into political accusations you avoided both answering my questions and dealing with the physical reality of our situation.
wil says
@Ray Ladbury #229
“I’ve always found it astounding that people want to draw conclusions on a decade of information while 1)ignoring the current decade is the warmest on record; and 2)ignoring the previous 3 decades”.
If you read my words carefully (#224): I was talking about 14 years, which is really more than a decade. And of course I am not ignoring the previous decades, but precisely because of the strong upward trend in those years the shift in the trend during the last 15 years is so remarkable. It is too simple to blame it all on “the weather”, and I am looking (on sites like this one) for a better explanation. Can someone convince me that we can just ignore the last 15 years? And how much longer can we go on ignoring?
[Response: Don’t be ridiculous. No-one is ‘ignoring’ anything – unless it is commenters who keep on ignoring evidence that demonstrates that short term trends have less significance than they think. – gavin]
Ray Ladbury says
Wil says, “And of course I am not ignoring the previous decades, but precisely because of the strong upward trend in those years the shift in the trend during the last 15 years is so remarkable.”
http://tamino.wordpress.com/2009/12/07/riddle-me-this/
Uh, you were saying?
Edward Greisch says
What is OHC? Over Head Cam? Maybe you mean Ocean Heat Content? Acronyms need to be spelled out more often, please, everybody.
[Response: Sorry. Check out the acronym index though. – gavin]
RB says
@203 Nierenberg
I’ll take a stab at the stock market. It is currently priced to return 6% annually over the next ten years with a yearly standard deviation of 20%. That is, at the end of 10 years, the stock market will be (1.8 +/- 1.1) times current S&P 500 of ~1100. This methodology was actually successful based on 1999 projections leading up to 2009.
http://www.hussmanfunds.com/wmc/wmc091214.htm
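(The arithmetic behind that range, roughly – a sketch assuming a 6%/yr mean return and 20%/yr volatility with independent years, so the spread grows as sqrt(t):)

```python
# 6%/yr compounded over 10 years, with 20%/yr volatility scaling as sqrt(t).
mean = 1.06 ** 10                  # ~1.79x after 10 years
sd = mean * 0.20 * 10 ** 0.5       # ~1.13x
print(f"{mean:.1f} +/- {sd:.1f}")  # ~(1.8 +/- 1.1) times today's level
```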
Ike Solem says
What, no mention of the paucity of oceanic data for comparison purposes?
To be more clear, consider this phrase from the end of this article:
But with such poor collection of ocean subsurface data over the 1984-2009 period, how can you be sure what “reality” consists of? Imagine, for example, that atmospheric radiosonde temperature data was as spotty as ocean subsurface data – what kind of conclusions could you then draw?
Obviously, it’s a total failure of NASA’s earth-monitoring job – but hey, whatever, let’s launch another deep space microwave background probe – that’s interesting, and since it won’t generate data that will annoy the fossil fuel lobby, it’s far more likely to be launched than Triana! Yes, that’s how you succeed in science these days – focus on topics that are safe and likely to be funded by the complex – and for goodness sake, don’t criticize your “colleagues.” That creates bad feelings, you know.
honorable says
The cavalier way with which you block comments that might appear critical is simply disgusting. I’m very disturbed by climategate and the way RealClimate is managed. More science and less activism, please. Criticism is the essence of the scientific approach. In my scientific research, I have always welcomed criticism, [edit]
By the way, I am a professor of Medicine in a first rate North American university.
[Response: Oh please. People repeating the same old tired nonsense add nothing to a comment thread and I make no apology for trying to maintain the signal to noise ratio. Not every letter to the editor gets published either. – gavin]
David Watt says
Aren’t you working the data just a little, Gavin?
If you take the data from any post 2000 start point I don’t think the correlation looks very good at all and the post 2007 onwards section (i.e. the bit where AR4 became a prediction) looks pretty terrible.
How can you be sure that the 2007 downward blip can all be pinned on La Nina? Given the way this winter is shaping up right now, wouldn’t it be wiser to wait and see if the (so far) rather fragile recovery continues.
[Response: There is always noise. No point in ‘waiting to see’ if it disappears because it won’t. But keeping these kinds of comparisons up-to-date still seems worthwhile. – gavin]
Jason says
“But since I can do this already for the Hansen simulation, and also show that the AR4 models do as good a job for the same period (0.21+/-0.16 degC/dec), why isn’t that sufficient for you?”
Pielke Jr.’s points 1,2, and 3 in http://rogerpielkejr.blogspot.com/2009/12/consistent-with-fallacy-how-not-to.html do a good job of defining how to make a prediction such that it robust against the appearance of ex-post facto criteria selection.
Your comparison with Hansen 1988 (which I’ve seen interpreted differently especially regarding the comparison of actual and (then) expected forcings) is not robust in this regard.
A “luke-warmer” climate model crafted in 1988 with dramatically lower sensitivity than Hansen’s would also (today) appear consistent with observations. Would I be correct to assume that you agree with Hansen’s error bars on his modern estimate of sensitivity? (+/-0.5 deg) and therefore consider such a low sensitivity to be improbable?
As for the AR4 models, while accurate hind-casting is obviously important, it is the ability of climate models to forecast that I am interested in.
[Response: Assuming that the 26 year trend scales linearly with the equilibrium sensitivity and forcing (big assumption), any model that had a sensitivity of below 2.1 deg C would have done worse (lower skill) than the Hansen et al model (that would be 1.9 deg C without a 10% correction for the high forcing). There is plenty of other evidence that sensitivity is greater than 2 deg C, and so this is somewhat confirming of that. As for Roger, I was not part of the team that made those early runs, so according to his no-post-facto rule, I’m apparently not allowed to analyse them since I didn’t make a statement about what they did while I was still in high school. Brilliant. – gavin]
Jason says
#198: “Not really too hard to figure out why that isn’t the Democrats first guess though, is it?”
My point was this:
Democrats could right now get enormous Republican support for a plan to dramatically reduce US CO2 emissions to a level that even Hansen would approve of.
If Democrats actually believed that failing to act decisively _now_ would result in catastrophe, then they would do so. Asking people (even the poor) to pay for the carbon they use isn’t such a terrible thing. It certainly won’t doom civilization as we know it.
Democrats don’t actually believe that there is anything pressing about climate change which is why we wound up with a pork laden bill which does nothing to reduce emissions, and probably won’t even become law.
Dave Salt says
Plots comparing modelled ‘scenarios’ with real-world data can, undoubtedly, be considered as evidence that the implicit assumptions and mechanisms incorporated within the climate models are correct. However, hard sciences like physics, chemistry or biology also require that a theory should show how it can be falsified and, in so doing, provide a means to test its fidelity against real-world behaviour.
Based upon my limited understanding of climate science and the theory of catastrophic anthropogenic global warming (AGW), I see little evidence to suggest that this key step in the Scientific Method has been applied, or even that it is considered important. I know about the predicted “hot spot” above the equatorial troposphere but am led to believe that current observations are, at best, insufficient to support the theory or, at worst, appear to falsify it. Given this situation, I can fully understand why some regard climate science as more akin to what Richard Feynman described as a ‘Cargo Cult Science’ rather than a hard science.
Nevertheless, I still think that there may be genuine reasons to believe that the current AGW narrative is correct and so would be grateful for any pointers as to what they may be. More specifically, I’d be very interested to hear of any real-world evidence that proves the dominance of positive feedbacks within the Earth’s climate system, since this appears to be the key mechanism behind the so-called ‘enhanced greenhouse effect’ (cf. IPCC TAR, Section 1.3.1), which amplifies the impact of a doubling of CO2 levels from less than 2C to more than 4C and so raises the spectre of world-wide catastrophe.
David Watt says
My problem is that I don’t really think it is “noise”.
The UK Met Office obviously thinks like you that it is and has seen each tranche of cooling since 2007 as “anomalous” and about to return to the “trend”.
Perhaps that is why its last four attempts at a seasonal forecast have all turned out to be drastically wrong and why Vicky Pope now says it is much easier to forecast 30 years ahead than it is for three months.
[Response: Vicky may well be right. But don’t misunderstand my use of the term ‘noise’. It is only noise for people looking for trends in climate driven by external processes. It isn’t noise for people doing seasonal forecasts, or people trying to understand El Nino, or the AMO or the PDO or the NAO or blocking events, or storm dynamics, or cloud formation etc. The main issue here is to what extent those drivers will affect climate in the long-term, and so that is what I’m discussing. – gavin]
Hank Roberts says
> Jason says: 30 December 2009 at 1:0 PM
> …
> … Democrats could right now get enormous Republican
> support for a plan to dramatically reduce US CO2 emissions
Please post that plan, and get the enormous Republican to sign up to support it. That support would be good to see.
(*Not here*– pick a website known to attract attention, for example Deltoid, which always welcomes promising ideas, or maybe ClimateProgress.)
John E. Pearson says
Wil @224.
I ran regressions on this data: http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.txt
starting in 1970,1975,1980,1985,1990,1995 & 1998 (the hot year) and found the following.
start year   N    trend (K/yr)   t-score   P(T < t-score)
1970        40    0.016          12.1      *
1975        35    0.017          10.5      *
1980        30    0.016           7.8      *
1985        25    0.018           6.94     *
1990        20    0.018           4.69     0.9999
1995        15    0.015           2.92     0.99
1998        12    0.011           1.4      0.9
*The P-value calculator I used gave unity for these.
It sure looks significant to me. I don't see how you are concluding there isn't a significant trend.
Ernst K says
Comment by Nicolas Nierenberg — 30 December 2009 @ 12:20 AM (203)
It doesn’t really matter if the runs were done in 2004 or 2006. I’ll accept that they are a forecast starting in 2004 instead of 2006, it doesn’t change my point. The modelers knew what the actual results were for the earlier periods, so these types of models can’t be judged based on anything before that time.
Show me a model of the economy that matches up until this year and I will say that’s nice. Give me a model today that matches the next ten years and I’ll say that’s impressive.
As I tried to explain back in comment 162, GCMs are physical models (i.e. built on established physics). Obviously, economic models aren’t based on physics – you’d deserve a few hundred, if not several thousand, Nobel Prizes if you could pull off an economic model from physics alone.
I actually thought of commenting that many apparently well educated “skeptics” don’t seem to understand the difference between physical and conceptual models, but I didn’t want to insult them. So much for that.
Nicolas, your comment here betrays your complete lack of understanding of even the most basic concepts of physical modeling. Either that or you’re being completely disingenuous yourself.