It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?
For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.
As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still. But note, as always, that short term (15 years or less) trends are not usefully predictable as a function of the forcings. It’s worth pointing out as well that the AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves in the forcings imposed, the magnitude of the internal variability and, of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
There is a claim doing the rounds that ‘no model’ can explain the recent variations in global mean temperature (George Will made the claim last month for instance). Of course, taken absolutely literally this must be true. No climate model simulation can match the exact timing of the internal variability in the climate years later. But something more is being implied, specifically, that no model produced any realisation of the internal variability that gave short term trends similar to what we’ve seen. And that is simply not true.
We can break it down a little more clearly. The trend in the annual mean HadCRUT3v data from 1998-2009 (assuming the year-to-date is a good estimate of the eventual value) is 0.06+/-0.14 ºC/dec (note this is positive!). If you want a negative (albeit non-significant) trend, then you could pick 2002-2009 in the GISTEMP record, which is -0.04+/-0.23 ºC/dec. The range of trends in the model simulations for these two time periods is [-0.08, 0.51] and [-0.14, 0.55] ºC/dec, and in each case there are multiple model runs that have a lower trend than observed (5 simulations in both cases). Thus ‘a model’ did show a trend consistent with the current ‘pause’. However, the fact that these particular models showed it is just coincidence, and one shouldn’t assume they are better than the others. Had the real-world ‘pause’ happened at another time, different models would have had the closest match.
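For anyone who wants to reproduce this kind of number, here is a minimal sketch of an ordinary least-squares trend and its approximate 95% uncertainty (the anomaly values below are placeholders, not the actual HadCRUT3v data):

import numpy as np

# Hypothetical annual-mean anomalies (degC) for 1998-2009 -- placeholders,
# not the actual HadCRUT3v values.
years = np.arange(1998, 2010)
anoms = np.array([0.53, 0.30, 0.28, 0.40, 0.46, 0.46,
                  0.43, 0.48, 0.42, 0.40, 0.31, 0.44])

# Ordinary least-squares trend and the standard error of the slope
n = len(years)
x = years - years.mean()
slope = np.sum(x * (anoms - anoms.mean())) / np.sum(x ** 2)
resid = anoms - (anoms.mean() + slope * x)
se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(x ** 2))

# ~95% interval (2 sigma); an AR(1) correction would shrink the effective
# sample size to n*(1-r1)/(1+r1), where r1 is the lag-1 autocorrelation of
# the residuals, and widen this interval further.
print("trend = %.2f +/- %.2f degC/decade" % (10 * slope, 20 * se))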
Another figure worth updating is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. Unfortunately, I don’t have the post-2003 model output handy, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.
Update (May 2012): The graph has been corrected for a scaling error in the model output. Unfortunately, I don’t have a copy of the observational data exactly as it was at the time the original figure was made, and so the corrected version uses only the annual data from a slightly earlier point. The original figure is still available here.
(Note that I’m not quite sure how this comparison should be baselined: the models are simply the difference from the control, while the observations are ‘as is’ from NOAA.) I have linearly extended the ensemble mean model values for the post-2003 period (using a regression over 1993-2002) to get a rough sense of where those runs could have gone.
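The extension itself is just a straight-line fit; a minimal sketch (with hypothetical ensemble-mean values, not the actual model output):

import numpy as np

# Hypothetical ensemble-mean OHC anomalies (10^22 J) for 1993-2002 -- placeholders
yrs = np.arange(1993, 2003)
ohc = np.array([1.0, 1.6, 2.1, 2.8, 3.3, 3.9, 4.6, 5.1, 5.7, 6.3])

# Fit a straight line over 1993-2002 and extend it through 2009
slope, intercept = np.polyfit(yrs, ohc, 1)
future = np.arange(2003, 2010)
print(np.round(intercept + slope * future, 1))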
And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC).
The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.
Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.26*0.9) * 0.19 ≈ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best estimate sensitivity deduced from this projection is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range of 0.21+/-0.16 ºC/dec (95%). Note too, that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming.
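To make the arithmetic in that scaling explicit, here is the same back-of-envelope calculation written out (a sketch only, with the same caveats about linearity as above):

# Scale the Scenario B trend to the observed trend to back out a sensitivity:
# S_est = S_model * T_obs / (T_B * f)
S_model = 4.2   # old GISS model sensitivity (degC per doubling of CO2)
T_B = 0.26      # Scenario B trend, degC/decade
f = 0.9         # Scenario B forcing ran ~10% high, so scale it down
T_obs = 0.19    # observed trend, degC/decade

S_est = S_model * T_obs / (T_B * f)
print(round(S_est, 1))   # ~3.4 degC per doubling, with roughly +/-1 degC of uncertainty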
The sharp-eyed among you might notice a couple of differences between the variance in the AR4 models in the first graph, and the Hansen et al model in the last. This is a real feature. The model used in the mid-1980s had a very simple representation of the ocean – it simply allowed the temperatures in the mixed layer to change based on the changing fluxes at the surface. It did not contain any dynamic ocean variability – no El Niño events, no Atlantic multidecadal variability etc. – and thus the variance from year to year was less than one would expect. Models today have dynamic ocean components and more ocean variability of various sorts, and I think that is clearly closer to reality than the 1980s vintage models, but the large variation in simulated variability still implies that there is some way to go.
So to conclude, despite the fact that these are relatively crude metrics against which to judge the models, and despite a substantial degree of unforced variability, the matches to observations are still pretty good, and we are getting to the point where a better winnowing of models dependent on their skill may soon be possible. But more on that in the New Year.
David Harrington says
One question:
Why exclude the satellite data from the analysis? The temperature sets quoted are predominantly surface based and their coverage is largely from the continental USA. They will also have been “homogenized” and “corrected”. What happens if the satellite data from UAH are added, for example?
[Response: The MSU diagnostic is a different metric and I don’t have that handy for all the models. The size of the unforced variability is higher than in the surface temperatures and the structural uncertainty in the trends (UAH vs RSS vs Fu etc.) is larger too. Note too that there have been plenty of judgment calls in how to splice the different satellites together. – gavin]
Edie says
>As we get to the end of another year,
Excuse me, but what does another (in your view Gregorian calendar) year have to do with climate? You know there are other man-made calendars besides the Gregorian, which a large part of (but not all of) the world uses. The climate suddenly seasonally stops and says “oh look at those humans, we should stop winter in its tracks because they have this Gregorian thing called New Year’s Eve”.
What a crock of balderdash.
[Response: Let me know when you’ve got the rest of the world to go along with that. In the meantime, I had annual mean model output, and I compare it with annual mean data. Feel free to do it differently if you think it matters. – gavin]
Dave Salt says
Dr Schmidt mentions radiative transfer in the atmosphere in response to Graham (#9). However, my understanding is that it’s the dominance of positive feedbacks within the climate models that’s driving catastrophic global warming; the so-called ‘enhanced greenhouse effect’ (Cf. IPCC TAR Sect. 1.3.1).
If true, then the important point is not the radiative transfer of CO2 but the rationale for and details of the positive feedback mechanisms. I assume this would also include the necessary real-world evidence to either support or falsify their existence/dominance.
Edie says
> [Response: It’s possible that different parts of the ocean ‘weather’ will be predictable over
> different timescales. ENSO might only be for six months, but the AMO might give useful info
> a decade or so out. However, the amount of variance that you might be able to explain could
> still be small. It’s an active research area. – gavin]
All the king’s horses and all the king’s men couldn’t confidently predict the regime change from La Nina to El Nino in 2009. ENSO dynamical and statistical models? [edit]
[Response: The current El Nino was predicted months ahead of time. – gavin]
Edie says
> Do we have an idea from vulcanolagists how often these (and stronger) eruptions might be
> expected to occur?
Um Douglas, you’ve been watching too much Star Trek. Spock was a Vulcan but not a volcanologist.
Bill K says
@Gavin 22: No, but what it means is that if the models are uniformly distributed across that degree-Celsius range (which I’m sure they’re not), then they’re actually quite *inaccurate*, which demonstrates rather the opposite of the point you’re trying to make.
Or to put it another way, I can claim I’m very accurate because my models predict a temperature between absolute zero and the surface temperature of the sun, but that error range is so large, it means I’m not really predicting anything.
[Response: Agreed. If the range is too wide then the observation is not particularly useful in deciding whether models are any good. Thus short term trends are not useful (a point I’ve made repeatedly). But you can see that the longer temperature trends are going to start being useful, and the long term changes in OHC are clearly useful. Note that the OHC data has taken many years for all of the observational network problems to be dealt with (if indeed they all have been) and so were not available prior to the models being run and so have a higher utility. – gavin]
Edie says
> [Response: Perhaps you can point me to a subroutine that uses Mars bar concentrations in a
> calculation of the radiative transfer in the atmosphere? And then show me the lab results that
> calibrate it’s effects? The radiative code involving CO2 is available here for comparison. – gavin]
Graham is an obvious troll so not worth paying attention to him. However, it would be very interesting to see your group invite Roy Spencer for a debate (forget wasting time on Rupert Murdoch’s Wall St. Journal incompetent clowns who wouldn’t know how to parse a FORTRAN subroutine). Clearly Spencer is competent enough in the context of radiation fluxes and forcings to intelligently discuss feedbacks such as from clouds. I think it’s ludicrous for anyone in climate science (and that goes for the IPCC) to walk around with supreme confidence as to how clouds will play a role in GHG warming (and then to try and model these creatures we call clouds — good heavens, not even our operationally used NWP models such as the GFS, UK Met, or ECMWF used for synoptic weather forecasts can do anything more than parameterize clouds. LOL.)
Edie says
> Douglas (7) — Big volcanoes erupt randomly so other than putting in some in model runs
> following the power law distribution on VEI magnitude there is not much to do. But as Gavin
> states, occasional eruptions won’t matter much and big ones are certainly only occasional.
David Benson — but you forgot something. What about the omnipotent “plan B” geoengineers? Super wealthy Nathan Myhrvold was trying to capitalize on lame Copenhagen with some mass media fun and games by appearing on Fareed Zakaria’s CNN game show called GPS, doing a sales pitch to the incompetent public about how the world should pay Nathan (and his pal Bill Gates) a lot of cash for his geoengineering patents, for example so we can begin preemptively injecting SO2 into the stratosphere with a garden hose suspended by helium balloons to simulate volcanoes LOL — Myhrvold and Gates, you can’t pull the wool over our eyes (take your money and write more crappy closed source proprietary Windows software).
sod says
very good post. thanks for all this time invested in educating those of us who want to learn, and even those who don’t.
Paul UK says
@16 and pirate cooling.
Some anthropogenic intervention may reduce the number of pirates, resulting in some warming. It would seem that we need to make sure we have the correct number of pirates in the world if we are to maintain a habitable climate.
Jim Cross says
Since CO2 has been increasing since the 19th century, why would you (even recklessly) use only a 26 year period to estimate sensitivity?
[Response: I like to live dangerously. – gavin]
Nick says
Gavin
Response: Err.. no. Since none of these models were fit to any of these measures, this is perhaps more likely a case of you not wanting to pay attention to anything that might threaten your presupposition that models can’t possibly work? For graph 1, I used all the models with no picking to see which ones did better in the hindcast.
Then Gavin you have misunderstood my post, but I’ll address your points.
1. Selection bias. Unless you understand selection bias, you wouldn’t make statements that none of the models were fit to the historical data. If they didn’t fit, you wouldn’t present the data. There’s a precedent for this in the ‘hide the decline’ email, because it’s about selection bias.
Now to where you’ve misunderstood the post. It is all about a priori tests. You’re presenting a case where you know some of the results, and are giving the impression that the models were in place for the whole period.
So a simple question.
Why didn’t you start the graphs from the date of the forecast? Why did you also include historical data?
Nick
[Response: Because short term trends are not useful to look at and the earlier period has not been selected in any way. But if you want, here is a picture from 2002 onwards. Let me know if that tells you anything useful. – gavin]
jyyh says
well, pirates do lessen the need to transport goods, since they do it themselves. this in turn could lessen the profits made by the shipping companies, which then would be a proxy indicative of the number of pirates. actual pirate numbers are hard to measure, so this proxy might give some additional verification of the truth. then there’s of course the question of insurance; pirate transportations (so i’ve been told) will not be insured, so the use of a modifier in the form of insurance losses in the equation is justified.
barry says
Gavin, I’m sure this has been answered elsewhere on the site: why is the fit to the Pinatubo event so well reproduced in the ensemble? It suggests to me this feature is deliberately simulated in all/most of the runs WRT hindcasting. I read Ch 8 and couldn’t find an unequivocal statement on that, although it does mention that model skill is tested by simulating Pinatubo-like events.
[Response: The aerosols from Pinatubo were included in all of our runs for instance and many of the other groups as well. So it is unsurprising that they cool at the time (even in the ensemble mean). The details of that response (in regional temperatures, rainfall, wind patterns, stratospheric temperatures, radiation etc.) are very useful in evaluating model responses. – gavin]
[you commented] “For graph 2, the ocean heat content numbers are new and were not used in any model training, and for graph 3, the true projections started in 1984 as stated. That gives 26 years for evaluation, something clearly not available for the AR4 models (projections starting from 2003 or 2000 depending on the modeling group)”.
I’m wondering what empirical data are used for (tuning/tweaking/calibrating?) models pre-2000/2003 that get dropped when the models start projecting from 2000/2003.
Hope the questions aren’t too dopey.
Examinator says
Hi,
I’m sorry to go off topic but I don’t know how to contact any one.
I’m not a scientist, however, have you scientists seen this article?
http://insciences.org/article.php?article_id=8012 I’d be interested to hear your comments. To the uninitiated, like me, it seems as though it should be considered. Is it credible or just a loner?
[Response: Not credible. – gavin]
I’ve been recommending Davids lectures to all and sundry on an Australian web site http://forum.onlineopinion.com.au/ The owner is Graham Young. The other week they ran a discussion on the lectures and the science, unfortunately there were questions we couldn’t sort out.
Specifically, Graham has some issues regarding the water vapour feedback in the atmosphere that I can’t answer. I think it would be helpful if some scientist on this site were to write an article and help us out with proper comments on feedbacks. One of the big problems we and others are facing is that we don’t fully understand the science and that informed discussion is thin on the ground. There are a lot of emotions and opinions; despite that, there is a sizable group who DO want to understand the facts more thoroughly.
Many thanks in anticipation
examinator (lowly commenter) and regular reader of this site.
TH says
CO2 is running above the high end of estimates, so only the highest scenarios are relevant. It is absurd to pat yourself on the back for being close to low CO2 growth scenarios.
[Response: Not true. CO2 concentrations are right in the middle and in any case the variations in scenarios are not important until ~2030. – gavin]
Ray Ladbury says
Simon@50
It looks like the 1997 datapoint is also slightly higher than 2008, but the El Nino actually started in Fall 1997, I think. But yes, your interpretation is correct. In other words every year this decade will be among the top 10 warmest years except 2008.
Ray Ladbury says
Nick@62, your comment is an indication that you don’t understand how these models work. They are not fits to data as such. Rather they are dynamical physical models where one makes the best determination of the physical parameters and then validates the model against other data. There are good reasons for NOT TUNING the models against short-term data to get better agreement, as the goal is predictive power rather than mere explanatory power. The “Start Here” section has some good references on how the models work.
Schlonz says
Shouldn’t the world have long become uninhabitable if the postulated positive feedback processes existed?
[Response: No. Positive feedbacks in climate are amplifications over a no-feedback situation, not a runaway unstable process. – gavin]
Josh Cryer says
Good stuff, Gavin. I’d like to point to your article written almost 5 years ago: https://www.realclimate.org/index.php/archives/2005/01/is-climate-modelling-science/
It seems there is a lot of confusion here about models and “temperature fitting.” Fortunately GISS model code is available (and I assume the bits from ModelE that are left out will be available in due time; presumably after you guys write some nifty papers about them). So if anyone, especially internet pundits, have a problem, they are free to voice it.
Unlike, say, Scafetta, who I learned in the news today is refusing to provide the source code for his “solar variance contributes 50% of warming” claims: http://www.newscientist.com/article/dn18307-sceptical-climate-researcher-wont-divulge-key-program.html
Barton Paul Levenson says
Dave Salt,
The major positive feedback is water vapor. As the CO2 greenhouse effect heats the Earth, more water vapor evaporates and stays in the air. H2O is an even stronger greenhouse gas, so the temperature goes up further–a radiative effect. Radiative transfer is how most of the temperature changes are taking place.
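(For a rough sense of the size of that effect, here is a minimal sketch using the standard Bolton (1980) approximation for saturation vapour pressure; the numbers are illustrative and not taken from any model:)

import numpy as np

# Saturation vapour pressure over water (hPa), Bolton (1980) approximation; T in degC
def e_sat(T):
    return 6.112 * np.exp(17.67 * T / (T + 243.5))

T = 15.0
print(100 * (e_sat(T + 1) / e_sat(T) - 1))   # roughly 6-7% more water vapour per degC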
Dan L. says
Icarus says
Do we know whether the ‘wiggles’ on the curves of observational data represent real fluctuations, or just shortcomings in our measurement methods? For example, where there is a big spike in OHC from around 2002 – 2005, and then a levelling off, does that really mean that the oceans were somehow absorbing more heat for a few years and then absorbing less after that (e.g. by changes in cloud cover or radiation from oceans to space), or does it mean that the heat increase is steady and our measurements aren’t up to tracking how all that heat is moving around the planet? Also, in the short term could it look like the oceans aren’t warming much, when in fact the increased heat is there but it’s going into melting ice at the poles rather than raising the temperature of the water?
Thanks in advance for any comments…
[Response: It’s not clear. No historical data product is perfect, and there may be more revisions to come. The best we can look for is consistency among independent measures (with independent problems). However, there clearly is interannual variability in OHC, but exactly what the right number is, is as yet unclear. The long term trends seem to be more robust though. – gavin]
greg kai says
@69,Gavin:
Schlonz still raised a valid point imho: a strong positive feedback would still lead to an unstable system if it exceeded the negative feedback of T^4 thermal radiation. I think it would be useful to know at which point this happens, so we are able to “normalize” all feedback mechanisms (for example, >1 is unstable, 0 is no feedback (the effect of CO2 alone), <0 means some additional stabilizing effect besides the simple T^4 thermal radiation…).
Eli Rabett says
Examinator @ 65 Eli has put up something on the Lu paper, and there are other points in the comments. There will be more at Rabett Run. It’s a pinata
B.D. says
[Response: The current El Nino was predicted months ahead of time. – gavin]
Uh, no… The current El Nino began in May-June-July, but here are the synopses from CPC’s archive:
January 2009:
Synopsis: Developing La Niña conditions are likely to continue into Northern Hemisphere Spring 2009.
February 2009:
Synopsis: La Niña is expected to continue into Northern Hemisphere Spring 2009.
March 2009:
Synopsis: La Niña is expected to gradually weaken with increasing chances (greater than 50%) for ENSO-neutral conditions during the Northern Hemisphere Spring.
April 2009:
Synopsis: A transition to ENSO-neutral conditions is expected during April 2009.
May 2009:
Synopsis: ENSO-neutral conditions are expected to continue into the Northern Hemisphere Summer.
June 2009:
Synopsis: Conditions are favorable for a transition from ENSO-neutral to El Niño conditions during June − August 2009.
July 2009:
Synopsis: El Niño conditions will continue to develop and are expected to last through the Northern Hemisphere Winter 2009-2010.
El Nino was not predicted until it was already happening.
tamino says
Re: #73 (Icarus)
Gavin’s right that the best determinant of whether the wiggles are physical or observational fluctuations is comparison of independent data sets.
I don’t know about ocean heat content, but a comparison of GISS to RSS satellite temperature data indicates that much (even most?) of the wiggling is physical rather than observational fluctuation. The match is impressive, especially considering that the two data sets are measuring different things (GISS is surface temperature, RSS is lower-troposphere temperature).
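(For anyone who wants to try that kind of check themselves, a minimal sketch with made-up anomaly series standing in for GISS and RSS, not the real data: detrend both series and correlate the residual wiggles.)

import numpy as np

# Two made-up annual anomaly series standing in for GISS and RSS (not real data)
rng = np.random.default_rng(0)
years = np.arange(1979, 2010)
wiggle = 0.1 * np.sin(0.9 * years)                  # shared 'physical' variability
giss = 0.016 * (years - 1979) + wiggle + 0.02 * rng.standard_normal(years.size)
rss  = 0.017 * (years - 1979) + wiggle + 0.02 * rng.standard_normal(years.size)

def detrend(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (intercept + slope * x)

r = np.corrcoef(detrend(giss, years), detrend(rss, years))[0, 1]
print("correlation of detrended wiggles: %.2f" % r)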
TH says
The cone has widened to nearly a full degree and continues to widen. It would be almost impossible to miss. The skill level demonstrated is equivalent to a sportscaster forecasting either win, lose or draw.
[Response: What would you have me do, pretend that there isn’t a large spread in short term trends? I’ve said repeatedly that on these kinds of time periods trends are not very informative. With the Hansen projections (26 years) it’s getting to be useful. For the time being the AR4 projection spread is interesting because it undermines the very frequent assumptions that IPCC apparently forecast monotonic increases in temperature, or that models somehow don’t take internal variability into account. Modellers don’t expect to win prizes for temperatures falling inside the cone after 5 or 10 years (at least not with these kinds of models), but people need to know there is a cone. It just is what it is. – gavin]
Bill says
Is it normal in the USA, that professionally employed scientific staff are allowed to continually post information and opinion on blogs like this ? Do such opinions have managerial approval as would usually be required in most organisations?
[Response: NASA has strong policies in place to allow for scientists to discuss their science (or even their opinions about policy) with no restrictions. I doubt that you would want it any other way. – gavin]
Hank Roberts says
>> Schlonz
> Greg Kai
You misunderstand “positive feedback” I think. Try some of
http://www.google.com/search?q=positive+feedback+converging+series+climate+runaway
For example:
http://www.skepticalscience.com/argument.php?p=2&t=80&&a=19
From the comments there:
“Tom Dayton at 04:30 AM on 3 December, 2009
villar, you can demonstrate non-runaway positive feedback in a spreadsheet:
Cell A1: 0
Cell A2: 10
Cell A3: =A2+0.5*(A2-A1)
Cell A4: =A3+0.5*(A3-A2)
Now copy and paste cell A4 into cell A5 and on down the column for about 15 cells. The formula should automatically adjust to each cell, so each cell’s value is the previous cell’s value plus 50% of the increase that the previous cell had experienced over its predecessor. The feedback is an increase of each increase, not of the total resulting amount.”
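(The same demonstration in a few lines of Python rather than a spreadsheet, a sketch only: with a feedback fraction f below 1 the increments shrink and the total converges to 1/(1-f) times the initial change; f of 1 or more would be the runaway case.)

f = 0.5      # feedback fraction; any value below 1 converges, 1 or more runs away
dT0 = 1.0    # initial (no-feedback) warming, degC
total, increment = 0.0, dT0
for _ in range(30):          # each round adds back a fraction f of the last increment
    total += increment
    increment *= f
print(total, dT0 / (1 - f))  # the sum converges toward dT0/(1-f) = 2.0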
TH says
“Carbon dioxide (CO2) levels in the atmosphere have risen 35% faster than expected since 2000, says a study. International scientists found that inefficiency in the use of fossil fuels increased levels of CO2 by 17%.”
http://news.bbc.co.uk/2/hi/7058074.stm
That doesn’t sound like “in the middle.”
[Response: That is emissions, not concentrations. – gavin]
Ron Taylor says
Graham (#9) provides an amusing example of someone who teaches simulation studies at university level, but does not know the difference between a physical simulation and a statistical correlation.
Jim Eager says
Re Schlonz @69: Shouldn’t the world have long become uninhabitable if the postulated positive feedback processes existed?
Only if if the feedbacks formed an increasing series, as in 1 + 1.5 + 2 + 2.5 + 3…
But they are instead a self-limiting decaying series, as in 1 + .75 + .5 + .25…
To think of it another way, if the postulated positive feedback processes did not exist Earth would never have gone into an ice age, much less exited from one.
pat says
I am having difficulty understanding the graphs… can the Hansen graphs be split into two graphs, the natural global warming and the anthropogenic part?
Ron Taylor says
sidd says: “Thanx for getting back to the science. Looking at the grafs, i notice that ocean heat content jumps about the time surface temperatures fall below ensemble mean. But thats maybe, probably, just a coincidence…?”
Maybe not. This is my layman’s take on a possible reason. If there is an overturning circulation that brings cooler water to the surface and takes warmer surface water deeper, then two things happen: (1) the increased temperature difference at the ocean/atmosphere boundary will cause the ocean to absorb heat from the atmosphere more rapidly, and (2) the atmosphere will be cooled more rapidly as it gives up that increased heat.
Lynn Vincentnathan says
None of this makes me happy. Would that the denialists be right.
As for Hansen’s projections, yesterday I read in his STORMS OF MY GRANDCHILDREN book (pp. 59-69) how the U.S. decided to put more of its money into studying outer space, and did not do what was necessary to study the effects of aerosols (I guess that is some of the known unknowns talked about here).
It’s like we’re flying almost blind and the clouds of aerosols are blocking a clear view of the mountain up ahead. We could fly higher (mitigate like crazy) to be sure and overpass the mountain, but the powers that be plus the average Joe in the streets say no, and just hope that behind those clouds of unknowing there is sky and not a solid mountain.
Mike Cloghessy says
Hey Gavin…when is NASA/GISS going to release its raw data for independent research? I know of the lawsuit filed by CEI and others, so when can we expect the release of the information under FOIA?
[Response: What data are you talking about? All our model results are online, as is the code for GISS ModelE, as is the raw data that it is used in GISTEMP, as is the code that does that analysis. I thought that you guys were going to be playing nice once everything was online… ;) – gavin]
Andrew says
@ lamont: “when are scientists going to stop writing code in fortran?”
They may already have for the most part. But “bad old” Fortran (e.g. f77) is like carbon dioxide. It hangs around for decades after it was originally emitted.
To tell the truth, modern Fortran is actually quite useful for new projects. I just retired from decades of work including, among many other things, supervising a large scale “linguistically diverse” scientific computing effort which did not actually have any Fortran left in it for the past ten years. I am free of constraints of the past in my new work, and after closely considering all the alternatives, most of which I am already fluent in, I have chosen (PGI) Fortran as the compiled language component because of the high level of support for vectorizing, parallelizing, and acceleration (in my case CUDA GPGPU).
In less than a week I was able to build a machine from commodity parts (3x nVidia GTX 285 for those who want to keep score), install 64 bit Linux and PGI Workstation, neither of which I had personally been exposed to previously, and develop, test, and run code which was an order of magnitude faster (on the 720 GPU cores) than I could run on the host processor (Core i7 975 OC to 4.2 GHz). All in, less than $4000. Without that Fortran compiler, the CUDA kernel in C (nvcc) took a lot longer to write (I know, because I did that too). And the corresponding MATLAB version (even including the latest parallelism available) is another factor slower.
So here we are in 2010, and an expert in scientific computing well versed in non-Fortran solutions, can honestly and reasonably come to the conclusion that in the circumstances, his best choice is Fortran, even without any need to interact with an old “dusty deck” code base.
As a strange postscript, in 2008 my thesis adviser needed me to produce a figure from my (1984) thesis for a review paper, but with higher resolution to meet current publication standards. That code had been written in CDC Fortran (for a 6600), and I only have it on punched cards. In order to port the code to any machine I would have to type it one way or another, so I chose to rewrite the code in MATLAB, which had the advantage of allowing me to redevelop the code from the mathematical basics, which were quite clearly described in my thesis.
So yes, Fortran is not always the best choice, but oddly enough, at this point in time, there actually can be reasonably general circumstances under which Fortran actually is the best choice.
Back in the middle 1980s we used to say that it was impossible to predict what the language of scientific computing would look like in the next millennium, but we could predict with confidence that it would be called Fortran. We could have been more optimistic on the visibility.
S. Molnar says
Re El Nino predictions: I usually go with the Australians on this. Here is a prediction from the (latest and greatest experimental) POAMA model made last January that seems to have predicted El Nino about 6 months out, which is what happened:
http://poama.bom.gov.au/experimental/poama15/plots/20090101/ssta_nino34.gif
You can generate plots and find more information on this page:
http://poama.bom.gov.au/experimental/poama15/sst_index_rt.html
Hank Roberts says
> pinata
One hard whack and it spills everything?
I bet Eli means it’s a cornucopia.
Cornucopinata, maybe — keep whacking and it keeps spilling more?
TRY says
A few things: Gavin was kind enough to point me to the 1992 paper with predictions for the Pinatubo eruption.
Hansen
If I recall, it had a variety of predictions, most predicting a somewhat larger temp drop than what actually occurred. We now see that models “predict” the exact temp decrease that actually occurred. Call it back-fit, selection bias, or what have you, but it is there.
Apart from Pinatubo, the general theme of this post and the supportive comments is that models are not particularly predictive on short (<15 year) time frames? That they reflect the underlying physics, but to the extent future temperature diverges from or matches models, it doesn't provide more or less evidence for AGW?
Back to the physics, then, we're talking about radiative forcing.
As Ray has explained, CO2 absorbs and emits IR in specific wavelengths (spread somewhat as a result of pressure, etc). He's also suggested that some of the IR absorbed is converted to kinetic energy through molecular collisions. As a system, then, should we expect an increase in blackbody-type radiation from the atmosphere as a result of CO2 IR absorption? Let's say 50% of the absorbed IR is re-emitted directly by CO2 at the same wavelength. 50% is passed to other molecules through collisions. Then those other molecules emit various other IR wavelengths?
I keep coming back to the expected changes in outbound radiation signature we'd expect to see as a result of increased CO2 in the atmosphere. Can we predict this? Can we measure it?
Ray posted this picture in response to these questions, but I'm not sure if it's model output or real observations – he didn't answer that followup question:
http://www.atmosphere.mpg.de/media/archive/1460.jpg
These seem like reasonable questions to me!
AManuel says
I have not run the numbers, but from my visual observation of your first graph the models are no better than a linear fit of the data from 1980 to 2001 (date of IPCC AR4) projected forward to 2010.
wallruss says
What is the point of this article given that “I’ve said repeatedly that on these kinds of time periods trends are not very informative”? The only model made long enough ago to be of interest is Hansen’s, which was not very good.
Grabski says
Why is the Hansen forecast below the ‘C’ scenario? It’s already 0.5 degrees below the ‘B’ scenario forecast (about 50%).
Ray Ladbury says
TRY, Again, a molecule can only emit radiative energy in wavelengths where it can also absorb radiative energy. So, there will be increases, but only those associated with increased thermal excitation of atmospheric gas molecules. Except for gases that have such modes in the IR, this will be negligible (that is, the excited modes are “frozen out” at these cold temperatures). These are just the greenhouse gases. For the most part, the energy goes into increasing the kinetic energy of atmospheric gas molecules. The solid/liquid Earth surface will be a much more efficient radiator.
Yes the figure I posted was a theoretical calculation. I had previously linked to measurements. Here are both together:
http://www.atmosphere.mpg.de/enid/20c.html
Note that cooling and warming occur about where they are predicted.
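(For a feel of why the “bite” appears in the CO2 band of those spectra, here is a minimal sketch of the Planck radiance at 15 µm; the two temperatures are assumed round numbers for the warm surface and the cold emission level, not output from any radiative transfer code.)

import numpy as np

# Planck spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
def planck(lam, T):
    return 2 * h * c ** 2 / lam ** 5 / (np.exp(h * c / (lam * k * T)) - 1)

lam = 15e-6                 # wavelength of the main CO2 absorption band (m)
print(planck(lam, 288.0))   # emission at a warm surface temperature (window regions)
print(planck(lam, 220.0))   # emission from a cold upper-troposphere level (CO2 band)
# More CO2 raises the effective emission level in the band, lowering the emission
# temperature and hence the outgoing radiance at these wavelengths.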
Ray Ladbury says
AManuel, Almost. The way a scientist would say it is that the models are consistent with a roughly linear increase in temperature over the past 30 years.
Bob Tisdale says
What paper were the OHC models associated with? Hansen et al 2005 illustrates only a decade of OHC model data.
John P. Reisman (OSS Foundation) says
#87 Mike Cloghessy
Just for fun try this in Google including the quotes:
“Here is all the super secret hidden data that has been available the entire time that everyone has claimed that the data and the code is hidden”
While the phrase is not entirely accurate because some things came online over time, it does characterize the reality v. the whiny nature of those at CEI.
David Miller says
Dan L asks in #72 why 7000 PPM of CO2 didn’t cause runaway global warming, and how similar levels could now.
I think the answer to that, Dan, is that hundreds of millions of years ago, when CO2 was that high, the sun was younger and putting out less energy. Folks who study the sun assure us this is so: as the sun turns H into He, its core density goes up, conditions get hotter, the ions are packed closer together, and the rate of fusion increases. Go back in time far enough and the output was at least 10% lower. With lower insolation you need more of a greenhouse effect to keep temperatures livable.
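(To put a rough number on that, a minimal energy-balance sketch; the 6% figure is an illustrative assumption for the deep past, not a precise value.)

sigma = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S0, albedo = 1361.0, 0.3 # present-day solar constant and a typical planetary albedo

def T_eff(S):
    return (S * (1 - albedo) / (4 * sigma)) ** 0.25

# A sun ~6% fainter (an illustrative figure) lowers the no-greenhouse effective
# temperature by about 4 K, a gap that a stronger greenhouse effect has to close.
print(T_eff(S0), T_eff(0.94 * S0))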
Martin Vermeer says
Dan L. #72:
> …but the Earth has seen nearly 7000 ppm CO2 in the Cambrian Period. What stopped the runaway then?
The fainter Sun back then was one factor…
A very interesting talk worth watching on this is Richard Alley’s AGU Bjerknes talk:
http://www.agu.org/meetings/fm09/lectures/lecture_videos/A23A.shtml