In the lead-up to the 4th Assessment Report, all the main climate modelling groups (17 of them at last count) ran a coordinated series of simulations of the 20th Century and of various scenarios for the future. All of this output is publicly available in the PCMDI IPCC AR4 archive (now officially called the CMIP3 archive, in recognition of the two previous, though less comprehensive, collections). We’ve mentioned this archive before in passing, but we’ve never really discussed what it is, how it came to be, how it is being used and how it is (or should be) radically transforming comparisons of model output and observational data.
First off, it’s important to note that this effort was not organised by the IPCC itself. Instead, it was coordinated by the Working Group on Coupled Modelling (WGCM), an unpaid committee that is part of an alphabet soup of committees, nominally run by the WMO, that try to coordinate all aspects of climate-related research. In the lead-up to AR4, WGCM took on the task of deciding what the key experiments would be, what would be requested from the modelling groups and how the data archive would be organised. This was highly non-trivial, and adjustments to the data requirements were still being made right up until the last minute. While this may seem arcane, or even boring, the point I’d like to leave you with is that just ‘making data available’ is the least of the problems in making data useful. There was a good summary of the process in the Bulletin of the American Meteorological Society last month.
Previous efforts to coordinate model simulations had come up against two main barriers: getting the modelling groups to participate and making sure enough data was saved that useful work could be done.
Modelling groups tend to work in cycles. That is, there will be a period of a few years of development of a new model, then a year or two of analysis and use of that model, until there is enough momentum and there are enough new ideas to upgrade the model and start a new round of development. These cycles can be driven by purchasing policies for new computers, staff turnover, general enthusiasm, developmental delays etc., and until recently were unique to each modelling group. When new initiatives are announced (and they come roughly once every six months), the decision of a modelling group to participate depends on where they are in their cycle. If they are in the middle of the development phase, they will likely not want to use their last model (because the new one will almost certainly be better), but they might not be able to use the new one either because it just isn’t ready. These phasing issues definitely impacted earlier attempts to produce model output archives.
What was different this time round is that the IPCC timetable has, after almost 20 years, managed to synchronise development cycles such that, with only a couple of notable exceptions, most groups were ready with their new models early in 2004 – which is when these simulations needed to start if the analysis was going to be available for the AR4 report being written in 2005/6. (It’s interesting to compare this with nonlinear phase synchronisation in, for instance, fireflies).
The other big change this time around was the amount of data requested. The diagnostics in previous archives had been relatively sparse – the main atmospheric variables (temperature, precipitation, winds etc.) but not much extra, and generally only at monthly resolution. This had limited the usefulness of the previous archives because if something interesting was seen, it was almost impossible to diagnose why it had happened without having access to more information. This time, the diagnostic requests for the atmosphere, ocean, land and ice components were much more extensive, and a significant amount of high-frequency data was asked for as well (i.e. 6-hourly fields). For the first time, this meant that outsiders could really look at the ‘weather’ regimes of the climate models.
The work involved in these experiments was significant and unfunded. At GISS, the simulations took about a year to do. That includes a few partial do-overs to fix small problems (like an inadvertent mis-specification of the ozone depletion trend), the processing of the data, the transfer to PCMDI and the ongoing checking to make sure that the data was what it was supposed to be. The amount of data was so large – about a dozen different experiments, a few ensemble members for most experiments, large amounts of high-frequency data – that transferring it to PCMDI over the internet would have taken years. Thus, all the data was shipped on terabyte hard drives.
Once the data was available from all the modelling groups (all in consistent netCDF files with standardised names and formatting), a few groups were given some seed money from NSF/NOAA/NASA to get cracking on various important comparisons. However, the number of people who registered to use the data (more than 1000) far exceeded the number of people who were actually being paid to look at it. Although some of the people looking at the data were from the modelling groups, the vast majority were from the wider academic community, and for many it was the first time that they’d had direct access to raw GCM output.
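Because the files are CF-style netCDF with standardised variable names (‘tas’ for surface air temperature, for example), reading any model’s output follows the same pattern. Here is a minimal sketch in Python using the netCDF4 library; the file name is purely illustrative, not the archive’s actual naming scheme:

```python
# Minimal sketch of reading CMIP3-style output (illustrative file name only).
import netCDF4

ds = netCDF4.Dataset("tas_20c3m_run1.nc")     # hypothetical file name
tas = ds.variables["tas"][:]                  # surface air temperature, (time, lat, lon)
lat = ds.variables["lat"][:]
lon = ds.variables["lon"][:]
print(tas.shape, ds.variables["time"].units)  # e.g. (1200, 90, 144), 'days since ...'
ds.close()
```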
With that influx of new talent, many innovative diagnostics were examined – many, indeed, that hadn’t been looked at by the modelling groups themselves, even internally. It is possibly under-appreciated that the number of possible model-data comparisons far exceeds the capacity of any one modelling center to examine them.
The advantage of the database is the ability to address a number of different kinds of uncertainty – not everything of course, but certainly more than was available before. Specifically, the uncertainty in distinguishing forced and unforced variability, and the uncertainty due to model imperfections.
When comparing climate models to reality, the first problem to confront is the ‘weather’, defined loosely as the unforced variability (which exists on multiple timescales). Any particular realisation of a climate model simulation, say of the 20th Century, will have a different sequence of weather – that is, the weather pattern on Jan 31, 1967 in one realisation will be uncorrelated to the weather pattern on Jan 31, 1967 in another realisation, even though each run has the same climate forcing (increases in greenhouse gases, volcanoes etc.). There is no expectation that the weather in any one model will be correlated to that in the real world either. So any comparison of climate models and data needs to estimate the amount of change that is due to the weather and the amount related to the forcing. In the real world, that is difficult because there is certainly a degree of unforced variability even at decadal scales (and possibly longer). However, in the model archive it is relatively easy to distinguish the two.
The standard trick is to look at the ensemble of model runs. If each run has different, uncorrelated weather, then averaging over the different simulations (the ensemble mean) gives an estimate of the underlying forced change. Normally this is done for a single model, and for metrics like the global mean temperature only a few ensemble members are needed to reduce the noise. For other metrics – like regional diagnostics – more ensemble members are required. There is another standard way to reduce weather noise, and that is to average over time, or over specific events. If you are interested in the impact of volcanic eruptions, it is basically equivalent to running the same eruption 20 times with different starting points, or to collecting together the responses to 20 different eruptions. The same can be done with the response to El Niño, for instance.
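As a toy illustration of the ensemble-mean trick (a sketch, not any group’s actual processing), assuming the members have already been read in as arrays of shape (time, lat, lon):

```python
import numpy as np

def ensemble_mean(runs):
    """Average over ensemble members to estimate the forced signal.

    'runs' is a list of (time, lat, lon) arrays, one per member, sharing the
    same forcings but with uncorrelated weather.
    """
    stacked = np.stack(runs, axis=0)              # (member, time, lat, lon)
    return stacked.mean(axis=0)

def weather_noise(runs):
    """Residuals about the ensemble mean: an estimate of the unforced 'weather'."""
    stacked = np.stack(runs, axis=0)
    return stacked - stacked.mean(axis=0, keepdims=True)
```

With independent members, the noise in the forced estimate shrinks roughly as 1/sqrt(N), which is why a noisy regional diagnostic needs more members than the global mean does.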
With the new archive though, people have tried something new – averaging the results of all the different models. This is termed a meta-ensemble, and at first glance it doesn’t seem very sensible. Unlike the weather noise, the differences between models are not drawn from a nicely behaved distribution; the models are not independent in any solid statistical sense; and no-one really thinks they are all equally valid. Thus many of the prerequisites for making this mathematically sound are missing, or at best, unquantified. Expectations for a meta-ensemble are therefore low. But, and this is a curious thing, it turns out that the meta-ensemble of all the IPCC simulations actually outperforms any single model when compared to the real world. That implies that at least some part of the model differences is in fact random and can be cancelled out. Of course, many systematic problems remain even in a meta-ensemble.
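A rough sketch of how such a comparison is typically scored – an area-weighted RMSE of each model’s climatology against observations, with the multi-model mean added as an extra ‘model’ (all inputs here are placeholders, not a prescription for any published analysis):

```python
import numpy as np

def rmse(field, obs, area_weights):
    """Area-weighted RMS error of a (lat, lon) field against an observed field.

    'area_weights' is an array of grid-box weights with the same shape as the fields.
    """
    return np.sqrt(np.average((field - obs) ** 2, weights=area_weights))

def score_models(model_climatologies, obs, area_weights):
    """Return RMSE per model plus the multi-model ('meta-ensemble') mean."""
    scores = {name: rmse(f, obs, area_weights)
              for name, f in model_climatologies.items()}
    meta = np.mean(np.stack(list(model_climatologies.values())), axis=0)
    scores["multi-model mean"] = rmse(meta, obs, area_weights)
    return scores
```

In scores like these, it is the ‘multi-model mean’ entry that, somewhat curiously, tends to come out on top.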
There are lots of ongoing attempts to refine this. What happens if you try and exclude some models that don’t pass an initial screening? Can you weight the models in an optimum way to improve forecasts? Unfortunately, there doesn’t seem to be any universal way to do this despite a few successful attempts. More research on this question is definitely needed.
Note, however, that the ensemble or meta-ensemble mean only gives a measure of the central tendency, or forced component. It does not help answer the question of whether the models are consistent with any observed change. For that, one needs to look at the spread of the model simulations, noting that each simulation is a potential realisation of the underlying assumptions in the models. Do not, for instance, confuse the uncertainty in the estimate of the ensemble mean with the spread!
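To make that last distinction concrete, here is a tiny numerical sketch (the trend values are made up purely for illustration): the uncertainty of the ensemble-mean estimate shrinks as members are added, but the spread against which an observation should be judged does not.

```python
import numpy as np

# Illustrative only: one simulated 20th-century trend per ensemble member (deg C/century).
trends = np.array([0.55, 0.72, 0.61, 0.68, 0.49, 0.66, 0.58, 0.74])

spread = trends.std(ddof=1)                 # member-to-member spread
sem = spread / np.sqrt(trends.size)         # standard error of the ensemble mean

# An observed trend is consistent with the models if it lies within the spread of
# individual realisations, not necessarily within the much tighter +/- sem band.
print(trends.mean(), spread, sem)
```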
Particularly important simulations for model-data comparisons are the forced coupled-model runs for the 20th Century, and ‘AMIP’-style runs for the late 20th Century. ‘AMIP’ runs are atmospheric model runs that impose the observed sea surface temperatures instead of calculating them with an ocean model (optionally using other forcings as well), and they are particularly useful if it matters that you get the timing and amplitude of El Niño correct in a comparison. No more need the question be asked ‘what do the models say?’ – you can ask them directly.
The test of any comparison is whether it really provides a constraint on the models, and there are plenty of good examples of this. What is ideal are diagnostics that are robust in the models, not too affected by weather, and able to be estimated in the real world – e.g. Ben Santer’s paper on tropospheric trends or the discussion we had on global dimming trends; the AR4 report is full of more examples. What isn’t useful are short-period and/or limited-area diagnostics for which the ensemble spread is enormous.
CMIP3 2.0?
In such a large endeavor, it’s inevitable that not everything is done to everyone’s satisfaction and that in hindsight some opportunities were missed. The following items should therefore be read as suggestions for next time around, and not as criticisms of the organisation this time.
Initially the model output was only accessible to people who had registered and had a specific proposal to study the data. While this makes some sense in discouraging needless duplication of effort, it isn’t necessary and discourages the kind of casual browsing that is useful for getting a feel for the output or spotting something unexpected. However, the archive will soon be available with no restrictions and hopefully that setup can be maintained for other archives in future.
Another issue with access is the sheer amount of data and the relative slowness of downloading it over the internet. Here some lessons could be taken from more popular high-bandwidth applications. Reducing time-to-download for videos or music has relied on distributed access to the data. Applications like BitTorrent manage download speeds that are hugely faster than direct downloads because you end up getting data from dozens of locations at the same time, from people who’d downloaded the same thing as you. Therefore the more popular an item, the quicker it is to download. There is much that could be learned from this data model.
The other way to reduce download times is to make sure that you only download what is wanted. If you only want a time series of global mean temperatures, you shouldn’t need to download the full two-dimensional fields and create your own averages. Thus for many purposes, automatic global, zonal-mean or vertical averaging would have saved an enormous amount of time.
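For what it’s worth, the kind of server-side reduction being asked for is not complicated – e.g. an area-weighted global mean of each two-dimensional field (a sketch, assuming a regular latitude-longitude grid):

```python
import numpy as np

def global_mean(field, lat):
    """Area-weighted global mean of a (lat, lon) field on a regular grid."""
    w = np.cos(np.deg2rad(lat))                       # grid-box area ~ cos(latitude)
    w2d = np.broadcast_to(w[:, None], field.shape)    # expand to (lat, lon)
    return np.average(field, weights=w2d)

# Applied at each time step, this turns a bulky (time, lat, lon) download into a
# single global-mean time series, which is all that many users actually need.
```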
Finally, the essence of the Web 2.0 movement is interactivity – consumers can also be producers. In the current CMIP3 setup, the modelling groups are the producers but the return flow of information is rather limited. People who analyse the data have published many interesting papers (over 380 and counting) but their analyses have not been ‘mainstreamed’ into model development efforts. For instance, there is a great paper by Lin et al on tropical intra-seasonal variability (such as the Madden-Julian Oscillation) in the models. Their analysis was quite complex and would be a useful addition to the suite of diagnostics regularly tested in model development, but it is impractical to expect Dr. Lin to just redo his analysis every time the models change. A better model would be for the archive to host the analysis scripts as well, so that they could be accessed as easily as the data. There are of course issues of citation with such an idea, but they needn’t be insuperable. In a similar way, how many times did different people calculate the NAO or Niño 3.4 indices in the models? Having some organised user-generated content could have saved a lot of time there.
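As an example of the sort of user-contributed script that could live alongside the data, here is a bare-bones Niño 3.4 sketch (region 5°S–5°N, 170°W–120°W; the base period, anomaly convention and lack of area weighting here are simplifying choices any shared script would need to document):

```python
import numpy as np

def nino34_index(sst, lat, lon):
    """Rough Niño 3.4 SST anomaly index from a (time, lat, lon) SST array.

    Assumes lat in degrees north and lon in degrees east (0-360 convention).
    """
    lat_box = (lat >= -5.0) & (lat <= 5.0)
    lon_box = (lon >= 190.0) & (lon <= 240.0)        # 170W-120W
    region = sst[:, lat_box, :][:, :, lon_box]
    series = region.mean(axis=(1, 2))                # simple box average
    return series - series.mean()                    # anomaly about the full record
```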
Maybe some of these ideas (and any others readers might care to suggest), could even be tried out relatively soon…
Conclusion
The analyses of the archive done so far are really only the tip of the iceberg compared to what could be done, and it is very likely that the archive will provide an invaluable resource for researchers for years to come. It is beyond question that the organisers deserve a great deal of gratitude from the community for having spearheaded this.
gerald says
Ok Gavin, all well and good, and I am sure that this “new archive” will continue to confirm your previous “findings” of Global Warming based on human activities. I put it to you that all of your models may in fact be based on a major flaw. The assumption that the earth is round and not flat. Why don’t you do a complete recalculation based on the flat earth thesis and then we will see. Of course as a sceptic with lots of opinions and no scientific training I cannot be expected to do any actual real work on this matter but I do expect you to turn your life’s work upside down to answer my argument (that is how the game is played, isn’t it?).
Besides even if the world warms and the Arctic, Antarctic and Greenland ice caps all melt it will not be an issue. Like I just told you the earth is flat and the excess water will just fall off the edge. What stops the existing water from flowing off, I hear you ask? Well, you are the scientist, why don’t you work it out?
Actually I am continually amazed at the amount of well reasoned detail presented on this site and your seemingly endless patience in explaining and re-explaining the basics of the science.
Thank you
Gerald
David B. Benson says
Very clear, except:
What is WMO?
[Response: World Meteorological Organization. -rasmus]
Ray Ladbury says
Thanks, Gavin, for this interesting insight. I’m somewhat intrigued by the performance of the ensemble averages. Has anyone done any sort of jacknifing or bootstrapping analysis that looks at performance eliminating one or more models from the average? Since you don’t really know how things are distributed, this might be the only way to explore the variability.
Robert A. Rohde says
Archives are very good, yes. However, as a science communicator / technical illustrator, I find the “Terms of Use” in these model comparison projects to be toxic:
I can’t so much as make a Wikipedia plot without violating those terms, since those plots aren’t part of any “research project” intended for academic publication. And I certainly couldn’t use the data to make plots for books, magazines, or other commercial publications.
Computer climate modeling is (usually) funded by public funds, so it seems strange to create any strong restrictions on the further use of that data.
You mention:
Does that include relaxing the non-commercial, academic paper oriented requirements?
[Response: Non-commercial will probably stay. That is because many of the national modelling groups (not in the US though) have mandates to make money as well as do research and would not contribute to these archives if they felt their paid work would be undermined. I imagine that they are worried about someone else downscaling their projections and selling it as regional climate forecasts. However, I see no reason for other restrictions. Open access should definitely allow, nay encourage, more casual use of the data. I will query the organisers and see what is planned. – gavin]
tamino says
I’d like to thank the climate science community in general for the high degree of free access to data. I’m a data analysis junkie, after all, and I could spend several lifetimes just analyzing the data I’ve already saved on my computer.
It would be lifetimes well spent.
Joe says
Gavin,
I think it’s a bit of an understatement to say that not everything at the archive is done to everyone’s satisfaction.
In fact, there’s some mysterious vetting process that comes into play when you register at the archive. If you pass, you’re granted access. If not, you’re invited to apply again. And again. And again.
An example: I applied for access to the archive last year. My goal was to add this valuable data to an archive that I maintain that makes it relatively easy (I hope) to download and/or visualize geophysical data. I was repeatedly denied access. It took four months and the threat of a formal FOIA request before I was allowed any access at all. And I still only have access to the US model data.
Any idea when this archive will truly be open?
[Response: The problem was that some groups were not ok with third parties hosting the data. This was not a problem for any of the US groups though. The fully open access was proposed a couple of months back and no-one (AFAIK) seemed to mind. Thus I would anticipate that it is imminent. If someone who has actual knowledge wants to let me know, they can email or leave a comment here. – gavin]
AK says
Since you ask for suggestion(s):
Wouldn’t it be interesting to run a large ensemble of coupled runs and cherry-pick the ones whose SST’s match observations? You could compare the range of weather patterns with that of both the “un-picked-over” ensemble and the ‘AMIP’ runs.
Nick Barnes says
David B. Benson @ 2: WMO = World Meteorological Organization, http://www.wmo.ch/
Gerry Beauregard says
Re. 1 “What is WMO”:
World Meteorological Organization
http://www.wmo.ch/pages/about/index_en.html
“The World Meteorological Organization (WMO) is a specialized agency of the United Nations. It is the UN system’s authoritative voice on the state and behaviour of the Earth’s atmosphere, its interaction with the oceans, the climate it produces and the resulting distribution of water resources.”
Chad says
“The other way to reduce download speeds is to make sure that you only download what is wanted. ”
I think you meant “the other way to increase download speeds…”
minor point.
[Response: true. I’ve changed it to ‘download time’. – gavin]
Aaron Lewis says
re # 1
My Google must be dumber than yours. I get “World Meteorological Organization (WMO) Homepage – Organisation …World Meteorological Organization – Official United Nations’ authoritative voice on weather” over and over. I cannot find enough different possible choices to turn it into a good question.
Lloyd Flack says
Have any statisticians experienced in meta-analysis looked at these archives? Meta-analysis, especially using Bayesian approaches, has been used a lot, particularly in medical fields.
The differences in the plausibility of models seem to me to invite a Bayesian approach. I would suggest seeking out some statisticians with the appropriate expertise.
Chris Colose says
#1
world meteorological organizaion
tamino says
Re: #1 (gerald)
I nominate you for “best comment ever.”
Rod B says
This post/thread offers some interesting and helpful insights. A couple of questions: 1) I didn’t fully comprehend “unfunded”. Does this mean that no funds were from IPCC, WGCM, or WMO or other UN-related body and that all funding came from the enterprises employing the scientists and buying the computers? Surely the guys and gals did not work for no pay…??? Question is: who paid the bills?
2) I too am bothered by the raw averaging process for the meta-ensemble. But is there any other process that could have been clearly better? Also, you imply that everyone ‘lucked out’ when the meta averaging came out better than any one model (I assume by comparing historical information…???). But nonetheless shouldn’t there still be a pile of concern/nervousness/interest? There seems to be at least a small thread that says you have validated the models with a process a little like playing the slots.
Hank Roberts says
Lloyd, search: +climatologist +Bayesian
See also: http://www.cell2soul.org/issues/article.php?issue_dir=v2/i2&article_num=a18
The Patient from Hell: How I Worked with My Doctors to Get the Best of Modern Medicine and How You Can Too by Stephen H. Schneider, with Janica Lane
Da Capo Lifelong Books (2005); 300 pages;
ISBN: 0738210250
http://www.patientfromhell.org
“… applied subjective probability analysis (Bayesian updating) based on knowledge, experience, and intuition when conclusive hard data is lacking; examined historical data to calculate risk, from mild to catastrophic (risk = probability + consequence); and repeatedly determined whether to push for a Type I risk, where one spends the money and acts to prevent a bad outcome despite lack of surety of the symptom, or whether to accept a Type II risk, where one saves the money and doesn’t act, accepting that the consequences may be disastrous after all….”
Robert A. Rohde says
Re: Gavin’s response #4,
If there have to be restrictions to appease some national modeling groups, then those restrictions really should be displayed at the dataset level rather than having a blanket policy covering the entire portal. Appeals to the most restrictive preferences are inherently counter-productive. Not to mention that even within modeling groups, some data is treated more freely than others (e.g. the Hadley Centre policy on summary datasets).
[Response: Agreed. – gavin]
Martin Vermeer says
gavin:
Isn’t this just a result of – and an indication that – all models contain flaws, but they are mostly different flaws for each different model? Then, when you construct a meta-ensemble, you just ‘dilute’ each flaw with all the not-so-flawed other models. No statistics/stochastics/randomness involved.
Do I miss something?
[Response: Yes. But that is a statistical effect. The flaws (in some measure) must be statistically independent. – gavin]
Nick O. says
Many thanks for this, Gavin, very useful.
I would add to some of the comments above about the importance of considering other statistical approaches and modes of presentation. In my own work, I use metamodels and bootstrapping techniques, and am moving towards adopting some of the Bayesian methods noted above e.g. based on Kennedy and O’Hagan, 2001 (although I don’t myself think they are always the best approach or the most suitable – depends a lot upon the aim of the intended analysis and type of research question).
One thing I am also working on is the presentation of alternative predicted futures and associated risk, using a modified kind of contingency table. By this method, one takes into account the equifinality inherent in most, if not all, of the types of model commonly used in the geosciences; I would imagine that climate models suffer similar problems, so it would be useful at some point (when I can make some time, dammit!) to have a go at these, too. Hence an archive like this is of great value.
More of the same!
Ray Ladbury says
Rod B. asks about the issue of underfunding:
“Surely the guys and gals did not work for no pay…??? Question is: who paid the bills?”
Actually, that is probably precisely what it means–a lot of unpaid overtime. This is quite common in the sciences. I typically work 60 hours in an average week. During crunch times I work 80. I am not atypical. We rationalize this often by saying our work is our hobby as well as our day job. The fact of the matter is that grants rarely cover all the work that needs to be done to publish papers. The government learned long ago that most scientists are more motivated by a challenging problem than by a paycheck.
He then asks about the averaging process for the ensemble and whether there might be a better way of doing the average.
One possibility might be to use weights based on how the models perform according to various information criteria (e.g. AIC, BIC, TIC, etc.). This would allow models of different complexities to be compared and ensure that overfit models were downweighted appropriately. Note that the AIC would (of course) have to be based on the information used to calibrate the model, not how well the model predicted the phenomenon under study. Actually, model averaging has been found to outperform the results of any single model, especially when models are appropriately weighted. What this may be telling you is that no particular model is vastly superior to any other. Also note that such a process is compatible with Lloyd’s suggestion of a Bayesian meta-analysis.
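For concreteness, the Akaike-weight recipe alluded to above is only a couple of lines (the AIC values here are hypothetical, purely for illustration):

```python
import numpy as np

aic = np.array([102.3, 98.7, 105.1, 99.4])   # hypothetical AIC, one per candidate model
delta = aic - aic.min()                       # differences from the best-scoring model
w = np.exp(-0.5 * delta)
w /= w.sum()                                  # Akaike weights, summing to 1

# A weighted multi-model mean would then be sum_i w[i] * field_i.
print(w)
```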
Peter says
I don’t believe IPCC – it estimates that aviation is responsible for around 3.5% of anthropogenic climate change?!
http://www.climateactionprogramme.org/news/article/reducing_airline_emissions_can_it_be_done/
Bryan says
Three things:
1) Next time round we are planning on a distributed archive, which will allow download of subsets (in space and time). Planning for this is actively underway. It’s not obvious that download speeds will be actively enhanced by the distribution though, because the data volumes this time around are likely to minimise the number of copies which can exist.
2) Regrettably it is likely that much data in the archive will have more restricted access conditions than the American participants can allow. Gavin’s explanation is correct.
3) Funding: In general the UN and IPCC can’t spend money, they ask the nation states to do things, which then get funded “locally”. The AR4 archive that Gavin is describing was funded by internal US funds. As I understand it, it was unfunded in the sense that the archive hosts moved money from another task to support the archive. For the next assessment report we hope that a number of other nations will be contributing effort and sharing the load …
tharanga says
Speaking of model meta-analysis – is there a comparison available of the slow response times/lags in the system, among the different models? I was curious, for example, how much further different metrics would change (and how quickly) if magically, all greenhouse gas emissions stopped completely tomorrow. Or, alternatively, if the CO2 concentration instantaneously jumped from 280 to 380 ppmv, how long it would take for metrics to approach a new equilibrium.
[Response: The simulation that everyone did was to fix concentrations at 2000 levels (‘commitment runs’). It takes a few decades to get 80% or so of the way to equilibrium, and much longer for the remaining 20%. Sea level rise (due to thermal expansion) continues for centuries. There was a paper by Meehl et al (2005) that discussed this. – gavin]
B Buckner says
Gavin,
In your response to #23 you talk of the decades and more it takes to reach equilibrium once CO2 concentrations stabilize. I presume this passage of time is related to the “heat in the pipeline” and the lag in heating the oceans. While this is logical (to me) and presumably based on sound science, I don’t see a lag in temperature behavior as depicted by the GISS land and ocean surface temperature record here:
http://data.giss.nasa.gov/gistemp/graphs/
The ocean surface temperatures go up and down (at a smaller magnitude) at the same time as the land surface temperatures. I see no evidence of a lag going back to the beginning of the record in 1880. Why is this?
[Response: In Fig A4 the ocean temperatures are clearly damped compared to Land. There are multiple timescales here though – some are short (seasonal and interannual) which make many short term anomalies line up, but the long time scales come into the problem for the long term trend and they are the ones that come into play for the “in the pipeline” effect. – gavin]
Ray Ladbury says
Peter says: “I don’t believe IPCC – it estimates that aviation is responsible for around 3.5% of anthropogenic climate change?!”
And since your “belief” is not based on any evidence or even any facts that you have cited, it is relevant to the discussion exactly…how?
Nick Gotts says
An excellent initiative! Please keep pressing for maximum access, Gavin.
Dodo says
Is there any way to establish a ranking of the 17 modelling groups, by their success in predicting changes in climate – if that is what they are working on? If their goal is something different, how would success in that endeavor be measured? Thank you.
[Response: The best you can do so far is assess the skill of the climatology (no trends) – Reichler and Kim have done some work on that (see here). Success in projections will need some more time. – gavin]
Gaelan Clark says
Ray, I like your notion of wanting the “exact” information on relevant issues. Speaking towards this, please tell me exactly how the IPCC gets to 2.5-3 degrees C temperature increase from doubled CO2.
I thank you in advance for your assistance.
[Response: We’ve gone over why the ‘best guess’ climate sensitivity is 3 deg C a dozen times. It’s not a secret. – gavin]
Joe says
A few more comments:
1) Funding: I’m not sure how much funding (if any) the archive hosts received for the AR4 simulations. However, they apparently have received about $13.5 million (over five years) for the next round of simulations. I don’t know if any of this will go to the modeling centers to defray the costs of preparing data for archival.
2) Last time I checked, none of the AR4 data (including US data) were available for public download via ftp or http. Even though the US data *are* freely available via OPeNDAP, there is no mention of this anywhere at the archive Web site. There’s no reason why there should be any restrictions on access to the US data.
3) One of the WGCM members told me a few months ago that rumor had it (he wasn’t actually at the meeting) that at least one modeling group (Hadley) was opposed to opening up the AR4 archive.
4) The rules for who is or is not allowed to access the archive need to be clearer. For instance, who decides whether an applicant is permitted to use the archive? What are the ground rules for making this decision?
5) It’s really unfortunate that the next archive will be as restricted as the present one. I think the modeling and archive centers have a lot to learn from the open source software community.
[Response: I can’t speak for other modelling groups, but much of the GISS data is available on our own servers. GISS received no additional money over our standard model development grants to do simulations for AR4, and most groups were in the same boat. Discussion of how the next archive will be set up is ongoing, and you should be vocal in sharing your concerns. None of these things appear insuperable. – gavin]
Barton Paul Levenson says
The aviation figure does seem high to me — has anyone checked whether it makes sense? What are the respective magnitudes involved? I don’t think you could get that much from the well-mixed greenhouse gases alone produced by jet engines; is there an effect from creating high-altitude cirrus clouds?
David B. Benson says
Thanks to all who responded regarding WMO. I ought to have guessed the answer. Anyway, the link was of interest.
Ray Ladbury says
Gaelan, I’m not sure I understand your question. I presume you can read English. You are as capable of going to the summaries as I am. Or did you just want to say something clever and this was the best you could do on short notice?
Harold Pierce Jr says
Re #21
A Boeing 747-400ER starts with 63,500 US gal of fuel for a long flight, which is about 50% of the take-off weight. The new super jumbo A380 has a capacity of 83,500 US gal of fuel. The fuel burn rate of a Boeing 737-400 is about 3,000 liters per hour. If you add up all the emissions from all classes of aviation (i.e., private, commercial, government and military), I wouldn’t be surprised if the total is much higher.
Hydrocarbon fuels will always be used by boats, planes, freight trains and trucks, construction, mining, forestry and agricultural machinery, all military vehicles and mobile weaponry (e.g., tanks), almost all cars and light trucks, diesel-electric generating systems which are used extensively throughout the world (e.g., in small countries and at gold and diamond mines), etc., because these fuels have high energy densities. They are readily prepared from crude oil by fractional distillation and blending, low-energy processes that do not involve the breaking and making of chemical bonds. These fuels are highly portable and can be stored indefinitely in sealed containers or tanks under an inert nitrogen atmosphere. Hydrocarbon fuels are chemically inert (except to oxygen, halogens and certain other highly reactive chemicals) and do not corrode metals or attack rubber and other materials (e.g., gaskets) used for the construction of engines. Since gasoline has a flash point of -40 deg C, it can be used in very cold climates.
Some other heavy hitters that will always use megagobs of fossil fuels are lime and cement kilns, metal smelters (especially steel mills), foundries making engine blocks, pipe, tools, big nuts and bolts, rail car wheels, etc., all factories that manufacture ceramic materials and products (e.g., bricks, blocks, tiles, pottery, dishes, glass sheets and bottles, etc.), and all food preparation (e.g., large bakeries) and processing (e.g., sterilization). Fossil fuels will always be used for residential space and water heating, especially in cold climates.
The chemical process industries require petroleum feedstocks and use lots of energy for the manufacture of an amazing array of materials such as carpet and cloth fibers, paint, plastics, exotic materials like silicones and Teflon, and so forth.
I could go on listing all the human activities that will always use fossil fuels because there never will be any suitable and economical substitutes with the requisite physical and chemical properties.
The bottom line is this: There never ever will be a reduction in the consumption of or the phase out fossil fuels and consequently no reduction in the emission of greenhouse gases.
Jim Eager says
Re Harold Pierce @ 21: “Hydrocarbons fuels will always be used by boats, planes, freight trains and trucks, construction, mining…”
Mainline electrified freight railroad mileage was once quite extensive in North America and there is no reason why it could not be again. Of course, there would be no net gain unless the electrical power is produced without burning fossil carbon. Ocean shipping was once entirely wind-powered and there is great potential for at least reducing the amount of fossil carbon fuel use by augmenting with sail power.
“Some other heavy hitters that will always use megagobs of fossil fuels are lime and cement kilns, metal smelters (especially steel mills), founderies making engine blocks, pipe, tools, big nuts and bolts, rail car wheels, etc…”
Perhaps you’ve not heard of electric arc furnaces, which have extensively displaced open hearth furnaces in steel making and smelting, with the electricity sometimes even produced by carbon-free hydroelectric plants.
Never say never, it’ll trip you up every time.
Rod B says
Ray (20), Interesting; thanks.
Nick O. says
re #33 (Harold Pierce Jr) “The bottom line is this: There never ever will be a reduction in the consumption of or the phase out fossil fuels and consequently no reduction in the emission of greenhouse gases.”
Pretty big claim, there, Harold, don’t you think? What assumptions are you *really* making? Over what timescales? And what uncertainties are there around your assumptions, I wonder?
re # 20 (Ray Ladbury) “This would allow models of different complexities to be compared and ensure that overfit models were downweighted approproately.”
Ray, would you just clarify for me what you mean here by ‘overfit’? I may be confusing this with ‘overfitted’ i.e. the possibility that the underlying model functionality is more complex than necessary to explain the variance and trends in the system. (It’s a nice problem to have to untangle). I’m also interested in the weighting procedure, and to what extent this applies to functional parameters (process rate coefficients, exponents, thresholds, that sort of thing) as well as models as a whole. Any core refs here, for example?
Nick Gotts says
RE #33 [Harold Pierce Jr.] “The bottom line is this: There never ever will be a reduction in the consumption of or the phase out fossil fuels and consequently no reduction in the emission of greenhouse gases.”
Well, you live and learn. I’ve been thinking the Earth (and therefore the supply of fossil fuels) was finite!
Gaelan Clark says
No Ray, I intend nothing clever. Indeed, the question is pretty straightforward and requires no inference as to intention. Just an answer as to the exposition of the supposed temperature increase for the doubling of atmospheric CO2. If you don’t have the answer just say so.—I can not understand that you do not understand.
Dr. Schmidt has been kind enough to advise that this is a “best guess”, which is ok if we are gambling with our own money. But when the gamble is with the collective pocket books of the entire planet, I posit we need better than a “guess”.—A “guess” by the way that is not clearly stated anywhere in the IPCC literature of possible doomsday scenarios that, according to the IPCC, have a very high probability of occurring—–that does not sound like a “guess” to me.
So, Ray, when you ask for others their “exact” information, why is it so hard for you to reference yours?
Steve Case says
Harold Pierce #33 is right. The six or seven billion people in the world burn stuff every day of their lives. Do we really think we can regulate them all to change their way of life? If the production of CO2 is going to change the climate, then we ought to prepare for it. It makes more sense to get off the tracks than it does trying to stop the train.
David B. Benson says
Harold Pierce Jr (33) — Au contraire, visit
http://biopact.com/
to discover the biofuel alternatives being developed for all the uses you mention, except aviation, where 50% biofuel appears to be the goal.
Barton Paul Levenson says
Harold Pierce writes:
[[I could go on listing all the human activities that will always use fossil fuels because there never will be any suitable and economical substitutes with the requisite physical and chemical properties.
The bottom line is this: There never ever will be a reduction in the consumption of or the phase out fossil fuels and consequently no reduction in the emission of greenhouse gases.]]
You do realize the supply of the stuff is FINITE, right?
Barton Paul Levenson says
Gaelan Clark writes:
[[Dr. Schmidt has been kind enough to advise that this is a “best guess”, which is ok if we are gambling with our own money. But when the gamble is with the collective pocket books of the entire planet, I posit we need better than a “guess”.—A “guess” by the way that is not clearly stated anywhere in the IPCC literature of possible doomsday scenarios that, according to the IPCC, have a very high probability of occurring—–that does not sound like a “guess” to me.]]
Here are 61 estimates:
http://members.aol.com/bpl1960/ClimateSensitivity.html
Barton Paul Levenson says
Steve Case writes:
[[Harold Pierce #33 is right. The six or seven billion people in the world burn stuff every day of their lives. Do we really think we can regulate them all to change their way of life?]]
Yes. And, more importantly, switch to other sources of energy.
[[ If the production of CO2 is going to change the climate, then we ought to prepare for it. It makes more sense to get off the tracks than it does trying to stop the train.]]
Not if the train is five miles away and you have the engineer’s cell phone number.
Nick Gotts says
Re #39 [Steve Case] “If the production of CO2 is going to change the climate, then we ought to prepare for it. It makes more sense to get off the tracks than it does trying to stop the train.”
In this case, getting “off the tracks”, means getting off the Earth. How do you propose we go about it?
bert says
[I’ve been thinking the Earth (and therefore the supply of fossil fuels) was finite!]
Well, there is some dispute about that. Fossil fuels may not be the correct term for oil as we know it and it may not be finite.
http://planetgore.nationalreview.com/post/?q=OTY0NzIzMzQ0YTA1NWJkOWQ1ZmYwNzE2NTU1YjQ2Mjg=
[Response: This is a typical over-reaction. It is well known that not all hydrocarbons are biogenic (methane on Mars and Titan anyone?). Showing that this can be seen on Earth is a long way from showing that all oil is produced that way (it’s not), let alone that it implies that supplies are effectively infinite. – gavin]
Alastair McDonald says
Re #43 Barton,
If you have got the engineer’s cell phone number could you please let me have it so I can ring him and get him to stop?
Of course if that is the number of the White House then I have to warn you that it does not seem to work. They tell me there is a speed fiend in the cab who will stop for no one :-(
Lawrence Brown says
Is there such a thing as a point of diminishing returns in using meta-ensembles – kind of a Tower of Babel effect?
Will we go from storage needs of hundreds of terabytes to quadrillions of bytes and higher? This feels like it could lead to complex logistical problems.
Timothy Chase says
B Buckner (#24) wrote:
Ocean surface temperatures go up and down in sync with land temperatures, but at a smaller magnitude in the short run, because of their greater thermal inertia. They are being warmed or cooled at the same time, but given a general, long-term warming trend they will take longer to warm up to the point where the earth’s system achieves radiation balance – with their warming pushing the radiation balance a little further away, since the increased water vapor coming off their surfaces further enhances the greenhouse effect.
But the thermal inertia is just part of the problem. There is also ocean circulation, primarily in terms of the thermohaline circulation but also in terms of ocean waves, which redistributes the heat to deeper waters. In the case of the thermohaline circulation, it takes an individual molecule on average about 3500 years to make a complete circuit. It will take a great deal of time for the ocean to achieve the eventual quasi-equilibrium heat distribution – which is why the last 20% that Gavin mentions in his inline response takes so long.
Russell says
gavin,
I thought I would let you know that the ice coverage for the Arctic Ocean is back to normal. I knew you were concerned about it last month, but now we can all sleep soundly, knowing the polar bears will last a few more months, at least.
http://arctic.atmos.uiuc.edu/cryosphere/
[Response: Only if your definition of normal is 1 million km2 less than climatology. – gavin]
Martin Vermeer says
Re: #41
And if the “stuff” weren’t, then the oxygen to burn it is. Even ignoring for the sake of argument our own breathing needs.
(Reductio ad absurdum… the Earth will become uninhabitable in any one of a broad palette of ugly ways, long before even getting close to this limit.)