Almost 30 years ago, Jule Charney made the first modern estimate of the range of climate sensitivity to a doubling of CO2. He took the values from two climate models (2ºC from Suki Manabe at GFDL, 4ºC from Jim Hansen at GISS), giving a mean of 3ºC, added half a degree on either side of that range for the error, and produced the canonical 1.5-4.5ºC range which survived unscathed even up to the IPCC TAR (2001) report. Admittedly, this was not the most sophisticated calculation ever, but individual analyses based on various approaches have not generally been able to improve substantially on this rough estimate, and indeed have often suggested that quite high numbers (>6ºC) were difficult to completely rule out. However, a new paper in GRL this week by Annan and Hargreaves combines a number of these independent estimates to come up with the strong statement that the most likely value is about 2.9ºC, with a 95% probability that the value is less than 4.5ºC.
Before I get into what the new paper actually shows, a brief digression…
We have discussed climate sensitivity frequently in previous posts and we have often referred to the constraints on its range that can be derived from paleo-climates, particularly the last glacial maximum (LGM). I was recently asked to explain why we can use the paleo-climate record this way when it is clear that the greenhouse gas changes (and ice sheets and vegetation) in the past were feedbacks to the orbital forcing rather than imposed forcings. This could seem a bit confusing.
First, it probably needs to be made clearer that, generally speaking, radiative forcing and climate sensitivity are useful constructs that apply to a subsystem of the climate and are valid only for restricted timescales – the atmosphere and upper ocean on multi-decadal periods. This corresponds in scope (not un-coincidentally) to the atmospheric component of General Circulation Models (GCMs) coupled to (at least) a mixed-layer ocean. For this subsystem, many of the longer term feedbacks in the full climate system (such as ice sheets, vegetation response, the carbon cycle) and some of the shorter term bio-geophysical feedbacks (methane, dust and other aerosols) are explicitly excluded. Changes in these excluded features are therefore regarded as external forcings.
Why this subsystem? Well, historically it was the first configuration in which projections of future climate change could be usefully made. More importantly, this system has the very nice property that the global mean of instantaneous forcing calculations (the difference in the radiation fluxes at the tropopause when you change greenhouse gases or aerosols or whatever) is a very good predictor of the eventual global mean response. It is this empirical property that makes radiative forcing and climate sensitivity such useful concepts. For instance, it allows us to compare the global effects of very different forcings in a consistent manner, without having to run the model to equilibrium every time.
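As a rough sketch of how that property gets used (the numbers below are illustrative only: the 3.7 W/m2 figure for doubled CO2 is just the commonly quoted value, and the 3ºC sensitivity and 1.6 W/m2 forcing are simply example inputs, not results from any particular model):

```python
# A rough sketch, not any model's actual calculation: illustrative round numbers only.
F_2XCO2 = 3.7          # assumed forcing for doubled CO2, W/m2 (commonly quoted value)

def equilibrium_warming(forcing_wm2, sensitivity_c=3.0):
    """Approximate equilibrium global mean warming (deg C) for a given
    global mean forcing, using dT ~ S * F / F_2xCO2."""
    return sensitivity_c * forcing_wm2 / F_2XCO2

# e.g. a 1.6 W/m2 forcing with a 3 deg C sensitivity gives about 1.3 deg C at equilibrium
print(round(equilibrium_warming(1.6), 1))
```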
To see why a more expansive system may not be as useful, we can think about the forcings for the ice ages themselves. These are thought to be driven by the large regional changes in insolation that result from orbital changes. However, in the global mean, these changes sum to zero (or very close to it), and so the global mean sensitivity to global mean forcings is huge (or even undefined) and not very useful for understanding the eventual ice sheet growth or carbon cycle feedbacks. The concept could be extended to include some of the shorter time scale bio-geophysical feedbacks, but that is only starting to be done in practice. Most discussions of climate sensitivity in the literature implicitly assume that these are fixed.
So in order to constrain the climate sensitivity from the paleo-data, we need to find a period during which our restricted subsystem is stable – i.e. all the boundary conditions are relatively constant, and the climate itself is stable over a long enough period that we can assume that the radiation is pretty much balanced. The last glacial maximum (LGM) fits this restriction very well, and so is frequently used as a constraint: from at least Lorius et al (1991), when we first had reasonable estimates of the greenhouse gases from the ice cores, to an upcoming paper by Schneider von Deimling et al, in which they test a multi-model ensemble (1000 members) against LGM data and conclude that models with sensitivities greater than about 4.3ºC can't match the data. In previous posts I too have used the LGM constraint to demonstrate why extremely low (< 1ºC) or extremely high (> 6ºC) sensitivities can probably be ruled out.
In essence, I was using my informed prior beliefs to assess the likelihood of a new claim that climate sensitivity could be really high or low. My understanding of the paleo-climate record implied (to me) that the wide spread of results (from, for instance, the first reports of the climateprediction.net experiment) was a function of their methodology and not a possible feature of the real world. Specifically, if one test gives a stronger constraint than another, it's natural to prefer the stronger constraint; in other words, an experiment that produces looser constraints doesn't invalidate previous experiments that produced stronger ones. This is an example of 'Bayesian inference'. A nice description of how Bayesian thinking is generally applied is available at James Annan's blog (here and here).
Of course, my application of Bayesian thinking was rather informal, and anything that can be done in such an arm-waving way is probably better done formally, since you then get much better control on the uncertainties. This is exactly what Annan and Hargreaves have done. Bayes' theorem provides a simple formula for calculating how much each new bit of information improves (or not) your prior estimates, and this can be applied to the uncertain distribution of climate sensitivity.
A+H combine three independently determined constraints using Bayes' theorem and come up with a new distribution that is the most likely given the different pieces of information. Specifically, they take constraints from the 20th century (1 to 10ºC), constraints from the responses to volcanic eruptions (1.5 to 6ºC) and the LGM data (-0.6 to 6.1ºC – a widened range to account for extra paleo-climatic uncertainties) to come to a formal Bayesian conclusion that is much tighter than each of the individual estimates. They find that the mean value is close to 3ºC, with 95% limits at 1.7ºC and 4.9ºC, and a high probability that the sensitivity is less than 4.5ºC. Unsurprisingly, it is the LGM data that makes very large sensitivities extremely unlikely. The paper is very clearly written and well worth reading for more of the details.
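To give a feel for the kind of calculation involved, here is a minimal sketch that multiplies three likelihoods over a grid of sensitivities and renormalises. The Gaussian stand-ins below are only loosely inspired by the ranges quoted above; they are not the actual likelihood functions used in the paper:

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

S = np.linspace(0.0, 10.0, 1001)     # candidate sensitivities (deg C per doubling)
dS = S[1] - S[0]

# Gaussian stand-ins for the three constraints -- NOT the paper's actual
# likelihoods, just curves of roughly the quoted widths.
constraints = [gauss(S, 3.0, 2.0),   # ~20th century warming
               gauss(S, 3.0, 1.5),   # ~volcanic responses
               gauss(S, 2.7, 1.7)]   # ~LGM (widened for paleo uncertainty)

posterior = np.ones_like(S)          # flat prior over the grid
for likelihood in constraints:
    posterior *= likelihood          # Bayes: multiply independent likelihoods
posterior /= posterior.sum() * dS    # renormalise to integrate to 1

cdf = np.cumsum(posterior) * dS
print("mean ~ %.1f C" % ((S * posterior).sum() * dS))
print("95%% range ~ %.1f to %.1f C" % (S[np.searchsorted(cdf, 0.025)],
                                       S[np.searchsorted(cdf, 0.975)]))
```

Because all three stand-in constraints peak near 3ºC, their product is considerably tighter than any one of them, which is the essence of the result.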
The mathematics therefore demonstrates what the scientists basically thought all along. Plus ça change indeed…
Urs Neu says
Re 87 and following
As far as I understand the modelling assumptions, they always assume a 1% increase in CO2 equivalents, that is, all the greenhouse gases, not CO2 alone. So when comparing to the measurements, one should compare to the total of greenhouse gases, not to CO2 alone.
James Annan says
Urs (101),
Yes that’s very true – however, in reality the other GHGs actually aren’t increasing to make up the difference (methane is currently flat and could even be decreasing slightly, as I’ve mentioned before).
Paul (100),
Write it up and publish it then – good luck :-)
Hank Roberts says
>write it up and publish it
No, we’re citizens asking you, or advertising for other experts, who are willing to talk to us (well, I’m not a Japanese citizen, but talk to us anyhow), about questions you perhaps don’t think are useful. But these questions are about surprises.
Take examples outside your immediate area, James (at least I think so). Because these questions are addressed to any climatologist working who is reading this. I hope many are and more will be encouraged to respond to all this.
Assuming 'sensitivity' is a term applicable here: Didn't we have very low estimates of the sensitivity of the Greenland glaciers, and the Antarctic ice shelves, to warming, for example? Would such estimates have changed after the icequakes got noisy and the Ross collapsed?
Don’t we now have very low estimates for the sensitivity of the very deep ocean to atmospheric warming? Would finding warming in the deep ocean (as I believe some Japanese research vessels reported a few years ago) change our expectation?
Hank Roberts says
Another example — five years old, and I don’t know where the journal publications are! — describing documentation from drilling of sudden warming caused by events like major seismic events rather than predictable astronomical changes. I don’t know what to make of this, but — do we expect the unexpected when calculating risks?
http://www.sciencedaily.com/releases/2001/11/011120045859.htm
QUOTE
November 23, 2001
Global Warming Periods More Common Than Thought, Deep-Sea Drilling Off Japan Now Demonstrates
CHAPEL HILL — Core samples from a deep-sea drilling expedition in the western Pacific clearly show multiple episodes of warming that date back as far as 135 million years, according to one of the project’s lead scientists. Analysis of the samples indicates warming events on Earth were more common than researchers previously believed.
The expedition aboard the scientific drill ship “JOIDES Resolution,” which ended in late October, also revealed that vast areas of the Pacific Ocean were low in oxygen for periods of up to a million years each, said Dr. Timothy Bralower. A marine geologist, Bralower is professor and chair of geological sciences at the University of North Carolina at Chapel Hill.
“These ocean-wide anoxic events were some of the most radical environmental changes experienced by Earth in the last several hundred million years,” he said.
… Drilling took place on Shatsky Rise, an underwater plateau more than 1,000 miles east of Japan. Its purpose was to better document and understand past global warming.
In geologic time, episodes of warming began almost instantaneously — over a span of about a thousand years, Bralower said.
“Warming bursts may have been triggered by large volcanic eruptions or submarine landslides that released carbon dioxide and methane, both greenhouse gases,” he said. “Besides reducing the ocean’s oxygen-carrying capacity, warming also increased the water’s corrosive characteristics and dissolved shells of surface-dwelling organisms before they could settle to the bottom.”
In some especially striking layers of black, carbon-rich mud, only the remains of algae and bacteria were left, he said.
“The sheer number of cores that reveal the critical warming events found on this expedition — three from the 125-million-year event and 10 for the 55-million-year Paleocene event — exceeds the number of cores recovered for these time intervals by all previous ocean drilling expeditions combined,” Bralower said.
——–
END QUOTE
———-
Ok, enough from me. I'm not saying you all should have modeled this sort of event – I don't see how you could. I'm asking whether we can expect the estimates of possible warming to include some possibility of such events. Major seismicity/undersea landslide/methane release. Or even Antarctic icecap melting and releasing methane hydrates, if there are any buried under grounded thick ice – and do we know if such exist nowadays?
Forget the asteroids — how about Earth’s hiccups interfering with smooth predictable change? Risk?
James Annan says
Hank,
Paul already writes stuff in Science, so I think my suggestion is a reasonable one if he thinks an argument along those lines will stand up to peer review. It would certainly be amusing if all those who have spent the last few years trying to generate estimates of climate sensitivity were to decide in the light of our result that it’s actually not possible to do this after all, we just have to assert S > 4.5C at the 10% level regardless. Time will tell.
As for things like methane eruptions – this is outside the realm of what we were estimating. There’s not any sign of increased methane yet, and the recent RC article clearly played down the risk, but it would be hard to rule out the possibility of it. However, note that if people can come up with 100 scary catastrophe theories, then even if we can rule them all out at the 99% level (which is well-nigh impossible to achieve on a formal basis), they still “win” overall, and can claim that the end of the world is nigh. IMO this says more about the power of imagination than it does about reality.
Florent Dieterlen says
Hello,
I am doing a model for earth mean temperature with a new method, and I am looking for data (time series):
- recent (e.g. 1900-2005) earth mean temperature
– recent marine benthic oxygen isotope values
– recent dust (aerosol) values
– recent Na values
I have the ancient values through Vostok and DomeC, but I need recent ones also…
Can someone tell me where I can find that?
Florent
Hank Roberts says
>outside the realm of what we were estimating
That is a good clear answer to my question, thank you — I thought perhaps your procedure relied on a “bottom line” outcome of what we know from prior climate change — including known and unknown details.
In addition to methane outbursts, what else did you choose to rule out of the realm used to make your estimates?
I guess I'm puzzled why the half of the total CO2 that humans released in the past 30 years is not like a methane outburst. I can understand why the first half of the CO2 people released over 11,000 years (per Dr. Ruddiman) would give a sensitivity range comparable to how past climate behaved in the absence of any sudden spike like a methane outburst.
But I wonder — if you next do include methane outbursts from the past climate record in the realm of calculation — would the human CO2 release of the past 30 years compare to an outburst event? Would you get a different value for sensitivity, if outbursts were in the realm considered?
Paul Baer says
Hi James – your suggestion to write it up and submit it is obviously correct. In the meanwhile, however, I hope that you and others in this forum can help me explore the arguments. I think that progress in this area regarding the use of Bayesian and related models of uncertainty depends on generating active discussion. Typically scientific "results" are generated in collective settings (think of a lab group at one scale); I'm hopeful that realclimate can help facilitate a "virtual collective" on this topic. (If there's another discussion forum that would be more appropriate, I'm happy to move the discussion there.)
In any case: I’m working on a much longer argument, but as I reread the comments of yours to which I’m responding, I was struck by this: “Independence is a bit more of a tricky one, but note that accounting for dependence could result in a stronger result as well as a weaker one.”
As I understand the method, I can’t see how that could be true. Isn’t multiplication of the PDFs the method that produces the greatest narrowing of uncertainty, based on the assumption of complete independence? How, mathematically and conceptually, could greater narrowing of uncertainty come from asserting any positive degree of dependence?
–Paul
James Annan says
Hank,
The point is that simply from the POV of looking at how the physical climate system (ocean + atmosphere including sea-ice) responds to forcing, we would consider methane as an additional forcing. Any methane would (to first order) have the same effect as an equivalent CO2 concentration. I thought Gavin explained it pretty well in the article itself.
James Annan says
Paul,
Sure, I have no objection to discussion, and here is as good as anywhere. I didn’t mean to sound snarky in my previous comment.
Independence.
I’ll start off by clarifying exactly what “independence” actually means in this situation. The first thing some people think when I assert that the different observations are independent, is “oh no they aren’t – they are measuring the same thing (sensitivity = S) so a high value for one observation would lead us to expect a high value for other observations”. This is true, but it is the independence of the error that matters. One way of looking at it is to ask the question “if we knew S, would knowing the value of an observation X1 change our expectation of a presently unknown observation X2?” If the answer is no, the observational errors are independent.
An example as to how combining observations with strongly dependent errors can give a result which is much more accurate than an assumption of independence would lead you to expect:
Assume you have an apple of unknown weight A, and weigh it on a balance against some old weights with limited accuracy. The weights that balance the apple add up to an indicated total X, but the markings carry an error of some unknown value e.
So we have
A=X+e
and so we know the weight of the apple with some limited accuracy.
Now we take the apple, and these exact weights, and put them all on the same side of the balance. On the other side, we use some well-calibrated weights and obtain a value of Y (which has an error which is small enough to be negligible).
So we have
A+X+e = Y
which again gives us the weight of the apple with the same magnitude of error e, i.e. A = Y - X - e
Combining these estimates under an assumption of independent errors, we’d get an overall value of A=Y/2 with an error e/sqrt(2). But note that if you simply add the equations, you get
A+A+X+e=Y+X+e
ie 2A=Y
and we get A = Y/2 with no error at all! The reason is that although the second estimate has an error of the same magnitude, and the error is highly dependent on the error of the first measurement, the two errors are negatively correlated. So they cancel better than they would if they were independent.
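A quick numerical check of that cancellation (a sketch only: the apple's "true" weight of 150 g and the 5 g spread of the weighing error are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
A = 150.0                                 # "true" apple weight in grams (made up)
e = rng.normal(0.0, 5.0, size=100_000)    # unknown error in the old marked weights

X = A - e        # first weighing:  A = X + e  =>  indicated total is X = A - e
Y = A + X + e    # second weighing: A + X + e = Y (accurate weights, negligible error)

est1 = X                          # estimate of A from weighing 1 (its error is -e)
est2 = Y - X                      # estimate of A from weighing 2 (its error is +e)
combined = 0.5 * (est1 + est2)    # = Y/2

print(np.std(est1), np.std(est2))             # each estimate scatters by ~5 g
print(np.std(combined))                       # ~0: the two errors cancel exactly
print(np.corrcoef(est1 - A, est2 - A)[0, 1])  # ~ -1: perfectly anti-correlated errors
```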
Now I’m not claiming that this is the case in our work and that any dependencies are negatively rather than positively correlated – merely pointing it out as a possibility in reply to those who claim that since we ignored dependency, we have necessarily overestimated the tightness of the result. In any case, even though one could argue for some possibility of dependence in some cases, I really can’t see what hint of a possibility there is of a dependency between the ERBE data set examined by Forster and Gregory, the proxy records contained in sediment cores which inform us about paleoclimate, and the magnitude of the seasonal cycle for example. And I note that such an assumption (independence) is entirely routine in the absence of strong arguments (certainly, in the absence of any meaningful argument at all) to the contrary.
In summary (for now) I'd like to re-emphasise just what most of these previous estimates were doing. They start off from an extremely pessimistic prior – a uniform prior on [0,20] might sound like just "ignorance" but in fact it represents the prior belief that 10 < S < 20 is ten times as likely as 2.5 < S < 3.5, for example (and S > 5 is 15 times as likely). Then, using a very limited subset of the available evidence, we cannot rule out the high values with 100% certainty, so even though the agreement with observations is poor and the likelihood of these high values is very low compared to the well-fitting values close to 3, the posterior probability integrated across this range is not quite negligible. If we start off from a prior that does not assign such a strong belief to high S in the first place, or (equivalently) use some more evidence out of the mountain that points to a moderate value, then this problem simply goes away. As was noted on RC some time ago (and also here), these estimates which assigned significant probability to high S never did actually amount to any genuine evidence for this. All we have really done is to formalise those arguments.
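To spell out the arithmetic behind that first point (a trivial sketch):

```python
# Prior mass under a uniform prior for S on [0, 20] deg C
def prior_mass(lo, hi, a=0.0, b=20.0):
    return (min(hi, b) - max(lo, a)) / (b - a)

p_mid  = prior_mass(2.5, 3.5)     # 0.05
p_high = prior_mass(10.0, 20.0)   # 0.50
p_gt5  = prior_mass(5.0, 20.0)    # 0.75

print(p_high / p_mid)   # 10.0 : "10 < S < 20" starts out ten times as likely
print(p_gt5 / p_mid)    # 15.0 : "S > 5" starts out fifteen times as likely
```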
It’s also worth noting that Chris Forest has already generated more moderate estimates over several years (most recently in Jan 2006), by using an old “expert prior” (which originates in the 80s I think) together with the recent warming trend. It is perhaps unfortunate that these estimates didn’t receive as much attention as the more exciting results in the same papers which he generated from a uniform prior. It may be a little questionable how much belief we should place in expert priors from 20 years ago – on the other hand, the overall result IMO is pretty much the same as if we took an intelligent look at other data such as the paleoclimate record.
Paul Baer says
Hi James – thanks for the prompt and thorough reply. I hope it has some value to you to spend some time replying, as well as to me and others.
Your point about dependence is interesting – I’ve seen something like that apples-and-scales argument before. But my question was very specifically about the Bayesian calculation you were doing.
I'm still interested in an answer to this question: doesn't the multiplication algorithm simply assume complete independence, and isn't it thus true that, in this particular problem structure, dependence could only weaken the conclusion?
My concern here really is that there's a methodological leap that justifies the multiplicative assumption without any ability to test either the legitimacy of the assumption or the sensitivity of the result to alternative assumptions. In particular, I don't think that widening the spread of the component PDFs is a methodological substitute for a modification of the multiplicative algorithm, because (as you noted in an email to me) if you combine by multiplication any two distributions with the same centroid, the result is narrower than the narrowest of the originals.
Although I haven't yet really spent much time thinking about it, I suspect there are any number of dependencies in the three major constraints that you use (leaving the ERBE data set for another time), for example in the methods that are used to estimate radiative forcing in each era.
More generally, one of the intuitions I'm pursuing is that if you have one estimate of a value with a spread of X, and a second estimate that has a similar centroid but a spread of 2X, it's at least possible that your revised distribution will be between the two, rather than narrower than X. I believe that in fact the actual "substance" of the experiments is as important to the "joint" PDF that emerges as the shape of the originals. I'll have to see if I can come up with some good examples.
Thanks again for engaging in the discussion.
By the way, I'm not sure if I said this in one of my earlier postings, but I actually agree with your fundamental claim that alternative lines of evidence suggest that very high values of S are very unlikely, although I really do think that values of ~5º can't be ruled out with high confidence. But my concerns in this debate are significantly methodological. I don't remember if you read chapter 3 of my dissertation, but in part I'm really interested in the question of what PDFs mean, and what kind of inferences can be drawn from them. Fundamentally, I believe that Bayesian methods are normative rather than "objective," and thus that the action-informing power of their conclusions requires additional levels of justification.
–pb
James Annan says
Hi Paul,
Yes it’s always useful to rehearse and clarify arguments. OTOH I am fully aware of the limitations of debate in changing minds!
The example I gave is exactly what you said you were seeking – a case where errors are negatively correlated and thus an assumption of independence is pessimistic. The first data point gives rise to a likelihood N(X,E), where E is the (estimated) st dev of the unknown actual error e. The second measurement gives N(Y-X,E). Both of these are gaussian, and each individually gives rise to a pdf of that shape under the assumption of a uniform prior. Combining them under an assumption of independence via Bayes gives us N(Y/2,E/sqrt(2)), which is a tighter gaussian, but still with significant uncertainty. But if we account for the dependence we get N(Y/2,0). I could have allowed for the case in which Y has an error too, which would have conveyed the same underlying message but required more ascii-unfriendly algebra. There is no fundamental difference in principle between this simple example and all the other more detailed stuff in real cases.
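The same point in grid form, with made-up numbers for X, Y and E (a sketch only): combining the two likelihoods as if the errors were independent gives the E/sqrt(2) width, rather than the exact cancellation above.

```python
import numpy as np

E = 5.0                # assumed st dev of the weighing error (made up)
X, Y = 147.0, 300.0    # indicated values from the two weighings (made up)

a = np.linspace(100.0, 200.0, 20001)    # grid of candidate apple weights
da = a[1] - a[0]

like1 = np.exp(-0.5 * ((a - X) / E) ** 2)          # likelihood N(X, E)
like2 = np.exp(-0.5 * ((a - (Y - X)) / E) ** 2)    # likelihood N(Y - X, E)

post = like1 * like2            # combine as if the errors were independent
post /= post.sum() * da         # flat prior, renormalised

mean = (a * post).sum() * da
sd = np.sqrt((((a - mean) ** 2) * post).sum() * da)
print(mean, Y / 2.0)            # posterior mean = Y/2 = 150
print(sd, E / np.sqrt(2.0))     # posterior sd = E/sqrt(2) ~ 3.5, not the exact zero above
```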
It is interesting that you bring up forcing errors. It is well known that, given the 20th century warming, a larger-than-expected forcing from sulphate aerosols implies a higher sensitivity (or else we would not have warmed so much). However, if the forcing from a volcanic eruption is higher than expected, this tells us that sensitivity is lower than we currently estimate, or else we would have seen a bigger cold perturbation at that time! This is precisely a case where plausibly codependent errors are negatively correlated.
Your suggestion about averaging pdfs makes some sense when the pdfs are equally plausible analyses of the same information. In fact our 20th century constraint can be viewed in that light (an average of the various published analyses), although we didn't actually perform this operation, merely chose a convenient distribution that seemed roughly similar. But if you ask two people the time, and one of them has a watch that says 9:45 am (as mine does) while the other says "well, it's daylight, I guess that means it's between 6am and 6pm", would you really average their resulting implied pdfs (one a narrow gaussian around 9:45, the other uniform on [6am,6pm]) and act as if there was a 25% probability of it being afternoon?
Hank Roberts says
I think my rather inarticulate question is understood — and I’m confident Paul Baer understands why I’m asking better than I do. You’re both so articulate I get lost, though.
Am I asking if surprises are likely hidden by the “implicit assumption” Gavin describes, as James points out, above?
“… The concept could be extended to include some of the shorter time scale bio-geophysical feedbacks but that is only starting to be done in practice. Most discussions of the climate sensitivity in the literature implicitly assume that these are fixed.”
I understand that assumption’s been needed til now for modeling — and wonder what risks there are for underestimating by making that assumption.
Ian K says
Is "Almost 30 years ago" too modest? My 1968 Encyclopedia Britannica says: Recent calculations indicate that a doubling of the carbon dioxide concentration in the atmosphere would cause an average rise in the earth's surface temperature of about 3.6 C (6.5F). (page 184 of volume 18, article on Pollution by Edward R Hermann, Assoc Prof of Environmental Health, Northwestern University)
[Response: Is there a proper reference with that comment? – gavin]
Hank Roberts says
Ian’s quoting the first line of your article. But estimates of the amount of warming go back to Arrhenius, as single numbers. This is about figuring an over-or-under likelihood around such a number.
Ian K says
Gavin, I suppose one would have to ask Prof Hermann (or his descendants) if one were to dig deeper. The article is really concerned with the direct and immediate effects of pollutants. A fuller extract from the part of the article directed to CO2 reads:
Although the doubling time for fuel consumption in the world is currently 20 years, statistically significant evidence of a build up in atmospheric carbon dioxide concentrations has not been established, even though the burning of carbonaceous matter has produced great quantities of carbon dioxide. Measurements during the last century, however, indicate that worldwide they may be increasing. Concern has been expressed by some scientists about such an occurrence, since carbon dioxide is an excellent absorber of infrared radiant energy. Recent calculations indicate that a doubling of the carbon dioxide concentration in the atmosphere would cause an average rise in the earth’s surface temperature of about 3.6 C (6.5F). A temperature shift of this magnitude would have far-reaching hydrological and meteorological effects: the polar ice masses would be reduced and the ocean levels would rise. Although the carbon dioxide theory has plausibly explained the climatic oscillations of geologic time, accompanied by the coming and going of glacial periods, the present annual production of carbon dioxide by fuel combustion is only enough to raise the global atmospheric concentration by 1 or 2 parts per million, approximately less than 0.0002% if not counterbalanced by plant photosynthesis. Since the carbon dioxide concentration of the atmosphere is about 300 parts per million (0.03%), the production over a few years would appear to be insignificant. Furthermore, the available sinks of marine and terrestrial plant life capable of reducing carbon dioxide seem entirely adequate to maintain the ecological balance for centuries unless other factors come into play. The problem of air pollution with carbon dioxide therefore does not seem to be alarmingly great. Quantitatively, however, knowledge is lacking.
[Response: Interesting. At the time he wrote though, there was enough evidence that CO2 levels were rising (from Mauna Loa, published in Keeling (1960), Callendar (1958)) and that the ocean would not absorb most of the emissions (Revelle and Suess, 1957), so he should have known a little better. I think, though, that the suspiciously precise 3.6 deg C change is an error. As far as I can tell, the only credible estimate that had been made at that point was the one by Manabe and Wetherald (1967), and they had a sensitivity of ~2 deg C. If you convert that to Fahrenheit, you get 3.6 deg F, so I think it likely that there was a unit mix-up at some point. -gavin]
James Annan says
Hank (113),
I think we can agree that any major surprises on the global scale, if they were going to happen, would be hidden by our analysis (and other similar attempts). However, in order to be “likely hidden”, they’d have to be “likely” in the first place (at least under one plausible reading of your comment), which seems (very) unlikely to me :-)
Hope that..um…makes things clearer?
Hank Roberts says
I understand the problem, at least superficially (wry grin). I'm still hunting for the journal publication detailing the Japanese deep-sea cores done in 2001 (mentioned in #104, as indicating many sudden warming events not otherwise known). I can't expect you to consider such things as likely if the work hasn't been published, eh?
Alastair McDonald says
James, Read “Abrupt Climate Change – Inevitable Surprises” by the Committee on Abrupt Climate Change, NAS, 2002 at http://www.nap.edu/catalog/10136.html
We are not going to get any warning of abrupt climate change. If we did, it would not be abrupt! The Permian-Triassic (P-T) mass extinction, the Paleocene-Eocene Thermal Maximum (PETM) and minor extinction, and the end of the Younger Dryas may all have been scary events but they did happen. There are some unconvincing ideas about why the Younger Dryas began, but none for why it ended, nor can the rapid warmings which followed the other Dansgaard-Oeschger events be explained. In other words, all the evidence is that when global temperatures rise, they do so abruptly, not smoothly as one would expect.
Your application of Bayesian logic to PDFs (probability distribution functions) is really just, in effect, a matter of averaging averages. As you admit, it will hide the little evidence there is for abrupt change. Can't you see that the complacency which this breeds is extremely dangerous?
Cheers, Alastair.
Ian K says
Thank you for your response, Gavin. A mix-up in the units may well be the reason for the apparent prescience as to sensitivity. Alternatively, it could be that this entry was actually written even earlier than 1967 (which might also give some excuse for the author's ignorance of evidence for atmospheric buildup of CO2) and therefore may appear in earlier copies of the Encyclopedia Britannica. Hey, has anyone out there got a copy of EB dated between 1963 and 1967, say, who could check the entry on *Pollution*?
Ian Castles says
Re #89, Stephen Berg’s statement that ‘American per capita GHG emissions has increased 13% since 1990’ is incorrect. US per capita emissions have DECREASED slightly since 1990. The 13% figure relates to the TOTAL increase.
Re 120, I have a 1963 edition of the EB and there’s no entry on ‘Pollution.’
Ian K says
Thanks Ian. How times have changed since 1963! I wonder when Britannica first considered pollution an issue. Was it previously subsumed under another heading, perhaps?
Ian Castles says
Re #122. Ian K, the Index volume of the 1963 EB has three entries under ‘pollution’. The first is ‘Pollution (Hinduism)’, and refers the reader to the article on ‘Caste’. The next entry says ‘Pollution: see Refuse disposal; Sewage disposal; Water supply and purification.’ And the last of the entries says ‘Pollution, Air: see Air: Pollution’, which in turn lists references to ‘cancer’, ‘legislation’, ‘refuse disposal’, ‘smog’ and ‘ventilation.’
It’s of interest that there is an entry in the 1963 EB for GREENHOUSE, which is devoted entirely to ‘structures used to grow plants’. The next index entry is ‘Greenhouse effect (astron.)’, and refers the reader to articles on ‘Mars’ and ‘Soldering’. The article on Mars refers to a ‘greenhouse effect’ which ‘is produced by the blanketing effect of water vapour and carbon dioxide in the Martian atmosphere, and is consistent with the theory that the surface of the planet is covered with dust.’ This effect was identified by John Strong and William M Sinton from radiometric measures with the 200-in. telescope at Palomar in 1954. I can’t find any reference to ‘greenhouse effect’ in the article on soldering, and so far as I can see there is no reference anywhere in EB 1963 to greenhouse gases or to a greenhouse effect with reference to planet Earth.
Ian K says
Thanks again, Ian: it's good to think that these ancient tomes of ours have some uses. It goes to show that we live in a very different world now. Sorry to lead you into an historical cul-de-sac.
Tom Fiddaman says
Today in Nature, apparent corroboration from the proxy record:
Climate sensitivity constrained by temperature reconstructions over the past seven centuries
Gabriele C. Hegerl, Thomas J. Crowley, William T. Hyde and David J. Frame
The magnitude and impact of future global warming depends on the sensitivity of the climate system to changes in greenhouse gas concentrations. The commonly accepted range for the equilibrium global mean temperature change in response to a doubling of the atmospheric carbon dioxide concentration [1], termed climate sensitivity, is 1.5–4.5 K (ref. 2). A number of observational studies [3, 4, 5, 6, 7, 8, 9, 10], however, find a substantial probability of significantly higher sensitivities, yielding upper limits on climate sensitivity of 7.7 K to above 9 K (refs 3, 4, 5, 6, 7–8). Here we demonstrate that such observational estimates of climate sensitivity can be tightened if reconstructions of Northern Hemisphere temperature over the past several centuries are considered. We use large-ensemble energy balance modelling and simulate the temperature response to past solar, volcanic and greenhouse gas forcing to determine which climate sensitivities yield simulations that are in agreement with proxy reconstructions. After accounting for the uncertainty in reconstructions and estimates of past external forcing, we find an independent estimate of climate sensitivity that is very similar to those from instrumental data. If the latter are combined with the result from all proxy reconstructions, then the 5–95 per cent range shrinks to 1.5–6.2 K, thus substantially reducing the probability of very high climate sensitivity.
Looking forward to RC comments.
[Response: You might well find http://julesandjames.blogspot.com/2006/04/hegerl-et-al-on-climate-sensitivity.html interesting – William]