Guest commentary by Steve Sherwood
There are four independent instrumental records of sufficient length and potential accuracy to tell us about 20th-century climate change. The two longest ones are of temperature near the Earth’s surface: a vast network of weather stations over land areas, and ship data from the oceans. While land surface observations go back hundreds of years in a few places, data of sufficient coverage for estimating global temperature have been available only since the end of the 19th century. These have shown about a 0.7 C warming over land during the last century, with somewhat less increase indicated over oceans. The land records contain artifacts due to things like urbanization or tree growth around station locations, buildings or air conditioners being installed near stations, etc., but laborious data screening, correction procedures, and a-posteriori tests have convinced nearly all researchers that the reported land warming trend must be largely correct. Qualitative indicators like sea ice coverage, spring thaw dates, and melting permafrost provide strong additional evidence that trends have been positive at middle and high northern latitudes, while glacier retreat suggests warming aloft at lower latitudes.
The other two climate records, so-called “upper air” records, measure temperatures in Earth’s troposphere and stratosphere. The troposphere—that part of the atmosphere that is involved in weather, about 85% by mass—is expected to warm at roughly the same rate as the surface. In the tropics, simple thermodynamics (as covered in many undergraduate meteorology courses) dictates that it should actually warm faster, up to about 1.8 times faster by the time you get to 12 km or so; at higher latitudes this ratio is affected by other factors and decreases, but does not fall very far below 1. These theoretical expectations are echoed by all numerical climate models regardless of whether the surface temperature changes as part of a natural fluctuation, increased solar heating, or increased opacity of greenhouse gases.
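To make the amplification figure above concrete, here is a minimal numerical sketch (illustrative only, not the analysis behind the numbers quoted here): it follows a saturated parcel upward from two tropical surface temperatures 1 K apart and compares the warming at roughly 12 km with the warming at the surface. The constants, the Bolton saturation-vapour-pressure formula, and the 300 K starting point are standard textbook choices; with these assumptions the ratio comes out well above 1, in line with the amplification described above.

```python
import math

# Standard constants (illustrative textbook values)
g, Rd, cp, Lv, eps = 9.81, 287.0, 1004.0, 2.5e6, 0.622

def e_sat(T):
    """Saturation vapour pressure (Pa) from Bolton's formula; T in kelvin."""
    Tc = T - 273.15
    return 611.2 * math.exp(17.67 * Tc / (Tc + 243.5))

def moist_lapse_rate(T, p):
    """Pseudoadiabatic lapse rate -dT/dz (K/m) at temperature T (K) and pressure p (Pa)."""
    rs = eps * e_sat(T) / (p - e_sat(T))              # saturation mixing ratio
    return g * (1.0 + Lv * rs / (Rd * T)) / (cp + Lv**2 * rs * eps / (Rd * T**2))

def temp_at_height(T_sfc, p_sfc=1.0e5, z_top=12000.0, dz=10.0):
    """Follow a saturated parcel from the surface to z_top with hydrostatic pressure."""
    T, p, z = T_sfc, p_sfc, 0.0
    while z < z_top:
        T -= moist_lapse_rate(T, p) * dz
        p -= p * g / (Rd * T) * dz
        z += dz
    return T

dT_surface = 1.0
dT_aloft = temp_at_height(300.0 + dT_surface) - temp_at_height(300.0)
print("amplification at ~12 km:", dT_aloft / dT_surface)
```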
It turns out that the upper-air records have not shown the warming that should accompany the reported increases at the surface. Both the Microwave Sounding Unit (MSU) satellite record (analyzed at the University of Alabama in Huntsville by John Christy and Roy Spencer) and the weather balloon data (trends reported by a number of researchers, notably Jim Angell at NOAA) have failed to show significant warming since the satellite record began in late 1978, even though the surface record has been rising at its fastest pace (~0.15 C/decade) since instrumental records began. On the other hand, both records have shown dramatic cooling in the stratosphere, where cooling is indeed expected due to increasing greenhouse gases and decreasing ozone (which heats the stratosphere due to its absorption of solar ultraviolet radiation). The sondes in particular have shown a lot more cooling than the satellites, almost certainly too much, leading one to wonder whether their tropospheric trends are also too low.
The non-warming troposphere has been a thorn in the side of climate detection and attribution efforts to date. Some have used it to question the surface record (though that argument has won few adherents within the climate community), while others have used it to deny an anthropogenic role in surface warming (an illogical argument since the atmosphere should follow no matter what causes the surface to warm). The most favored explanation has been that the “lapse rate,” or decrease in temperature as you go up in the atmosphere, has actually been increasing. This would contradict all of our climate models and would spell trouble for our understanding of the atmosphere, especially in the tropics.
This assumes that the observed trends are all real, which is reasonable when two independent measurements agree. But both upper-air observing systems are poorly suited in many respects for extracting small, long-term changes. These problems are sufficiently serious that the US NOAA satellite service (NESDIS) adjusts satellite data every week to match radiosondes, in effect relying upon radiosondes as a reference instrument. This incidentally means that the NCEP/NCAR climate reanalysis products are ultimately calibrated to radiosonde temperatures. Recent developments concerning the MSU satellite data are discussed in a companion piece.
What can the radiosonde data tell us?
Radiosondes themselves have significant problems and were also not designed for detection of small climate changes. These problems have been well documented anecdotally, and have been dutifully acknowledged by those who have published trends in radiosonde temperatures. The cautions urged by these researchers in interpreting the results have not always been taken on board by others however.
Few if any sites have used exactly the same technology for the entire length of their record, and large artifacts have been identified in association with changes from one manufacturer to another or design upgrades by the same manufacturer. Artifacts have even been caused by changing software and bug fixes, balloon technology, and tether lengths. Alas, many changes over time have not been recorded, and consistent corrections have proven elusive even for recorded changes. While all commonly used radiosondes have nominal temperature accuracy of 0.1 or 0.2 K, these accuracies are verified only in highly idealized laboratory conditions. Much larger errors are known to be possible in the real world. The most egregious example is when the temperature sensor becomes coated with ice in a rain cloud, in which case upper tropospheric temperatures can be as much as 20 C too warm. This particular scenario is fairly easy to spot and such soundings can be removed, but one can see the potential problems if many, less obvious errors are present or if the sensor had only a little bit of ice on it! Another potential problem is pressure readings; if these are off, the reported temperature will have been measured at the wrong level.
The Sherwood et al. study in Science Express concerns one particular type of long-recognized radiosonde error, that caused by the sun shining on the “thermistor” (basically, a cheap thermometer easily read by an electric circuit). This problem has been documented, notably by Luers and Eskridge (1995, 1998), but correcting for it in the past has proven difficult and previously its magnitude was poorly known except under controlled conditions. The most popular radiosonde manufacturer worldwide today is the Vaisala corporation, whose strategy for coping with solar heating is to concede that it will happen and try to correct for it: the thermistor is mounted on a “boom” that sticks into the air flow where the sun can shine on it, but the heating error is estimated from the measured ascent rate and solar zenith angle and subtracted from the reported temperature. The magnitude of this correction can be several degrees, has varied with changing designs, and may not always have been properly applied in the past, especially if time of day, station location, or instrument version were incorrectly coded. The US radiosonde, until recently made exclusively by the VIZ corporation and now under contract to two separate manufacturers, has followed the strategy of trying to insulate the thermistor from solar effects by ducting it inside a white plastic and cardboard housing. However, this strategy is unlikely to completely prevent solar heating. The first US radiosonde designs, which had less effective shielding and lacked the white coating subsequently applied to the sensor to limit its solar absorption, showed obvious signs of solar heating error. Many other radiosonde designs exist; larger countries historically designed and built their own sondes, but some countries have abandoned their national sondes and started buying from (usually) Vaisala.
The Sherwood et al. study is the first to try and quantify the solar-heating error over time. We recognized that the true difference between daytime and nighttime temperatures through the troposphere and lower stratosphere should, on average, be rather small, and moreover should have changed very little over the last few decades. We also recognized that this difference could be observed quite accurately by examining consecutive daytime and nighttime observations. Nighttime observations at many stations are much more rare than daytime ones, so this strategy means throwing out most of the daytime data; this is one reason why previous, less focused investigations did not detect this particular problem. This data-treatment technique revealed that, as you go back farther in time, the daytime observations become progressively warmer compared to nighttime observations. This is a clear indication that, back in the 1960’s and 1970’s especially, the sun shining on the instruments was making readings too high. This problem disappeared by the late 1990’s.
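For readers who want to see the shape of this calculation, here is a minimal sketch of the day/night pairing idea (illustrative only, not the code used in the study). It assumes a hypothetical table of soundings with columns 'time', 'station', 'pressure_hPa', 'temp_K' and a 'daytime' flag; only consecutive launches of opposite type at the same station are paired, and the annual-mean day-minus-night difference is tracked over the years.

```python
import pandas as pd

def day_night_difference(df, level_hpa=500, max_gap_hours=18):
    """Annual-mean (day minus night) temperature difference at one pressure level.

    df: hypothetical table of soundings with columns
        'time' (datetime), 'station', 'pressure_hPa', 'temp_K', 'daytime' (bool).
    """
    lev = df[df["pressure_hPa"] == level_hpa].sort_values(["station", "time"])
    pairs = []
    for _, g in lev.groupby("station"):
        g = g.reset_index(drop=True)
        # pair each sounding with the next launch at the same station
        for a, b in zip(g.itertuples(), g.iloc[1:].itertuples()):
            hours_apart = (b.time - a.time).total_seconds() / 3600.0
            if hours_apart <= max_gap_hours and a.daytime != b.daytime:
                day, night = (a, b) if a.daytime else (b, a)
                pairs.append({"year": a.time.year, "dT": day.temp_K - night.temp_K})
    # a drift in this annual series over the decades is the solar-heating artifact
    return pd.DataFrame(pairs).groupby("year")["dT"].mean()
```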
The key thing here is not simply the existence of this problem, but the change over time. It turns out that in the tropics the artificial boost in the early readings was just about equal, on average, to the increase in surface temperature over the 1979-97 period (the trend in solar heating bias was -0.16 K/decade averaged from 850-300 hPa). In other words, this effect by itself could explain why reported temperatures did not increase–the increases in actual air temperature were nearly balanced by decreases in the (uncorrected) heating of the instrument by the sun. This effect was large in the tropics because of heavy reliance on daytime data in previous climatologies, and because the daytime biases there changed the most. Correcting for this one effect does not bring trends into perfect agreement with those predicted based on the surface—they still fall slightly short in the tropics during the last two decades, and are too strong in the southern hemisphere extratropics when measured over the last four decades—but these remaining discrepancies are well within what would be expected based on other errors and the poor spatial sampling of the radiosonde network.
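The arithmetic of that paragraph, written out using only the numbers quoted in it (with the reported tropical radiosonde trend taken as roughly zero):

```python
surface_trend  = 0.15    # K/decade, surface warming since 1979 (quoted above)
bias_trend     = -0.16   # K/decade, trend in the solar-heating bias, 850-300 hPa
reported_trend = 0.0     # K/decade, roughly what the uncorrected tropical sondes showed

corrected_trend = reported_trend - bias_trend
print(corrected_trend, "K/decade aloft vs.", surface_trend, "K/decade at the surface")
```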
An important caveat is that, when instrument designs change, this can affect not only the daytime heating of the thermistor but can also affect the accuracy at night. Thus, correcting for this effect alone does not guarantee an accurate atmospheric trend. The other errors are, unfortunately, not as easy to quantify as the solar heating error. It is not clear what direction they may have pushed trends. Thus we are still in the dark as to the exact amount of warming that has occurred in the atmosphere. The one thing we do know is that we should not hang our hat on the trends in the reported observations until this, and all other problems, are sorted out.
Conclusion
The most likely resolution of the “lapse-rate conundrum,” in my view anyway, is that both upper-air records gave the wrong result. The instrument problems uncovered by these papers indicate that there is no longer any compelling reason to conclude that anything strange has happened to lapse rates. From the point of view of the scientific method, the data do not contradict or demand rejection of the hypotheses embodied by models that predict moist-adiabatic lapse rates, so these hypotheses still stand on the basis of their documented successes elsewhere. Further work with the data may lead us to more confident trends, and who knows, they might again disagree to some extent with what models predict and send us back to the “drawing board.” But not at the present time.
References:
J. K. Luers, R. E. Eskridge, J. Appl. Meteor. 34, 1241 (1995).
J. K. Luers, R. E. Eskridge, J. Climate 11, 1002 (1998).
Jeff Simchak says
My comment as an Engineer, not a scientist:
Has any work been done to study the biological response to higher levels of CO2? I know as a woodworker that today’s wood grain is inferior (compared to 100 years ago) because of the faster growth rates of the trees. The growth rings are so wide, the wood isn’t as hard. I would think since plants feed mainly on CO2, this would have a huge effect. I know just the volume of water that trees introduce into the atmosphere is astounding. It may be related to the increases in rain and storms. Could the amount of water cycling have been greatly increased by ‘increasing the plants’ metabolism’? Not to mention the plankton and algae.
I also recall reading somewhere that records and ecological data from 1800-1900 show about a 0.8C warming trend, while 1900-2000 was about 0.4C. They stated a reason of coming out of an ice age from 10,000 years ago, and a mini ice age in early medieval times. The kick-off of the Industrial Revolution was not really in gear until the 1870’s and has been increasing ever since. Why would the possibly natural temperature increase slow during the increase in CO2, CFC’s and NOx? Wouldn’t the current theories call for an acceleration?
Why isn’t the pressure for change being placed where it is most needed? On growing third world industrial nations like China, India, Korea, etc. The US was a revolutionary force in automotive clean air acts, scrubbers, toxic cleanup etc… Our pace has slowed as the costs climb at the bleeding edge of cost/results. I have worked in the auto industry all my life and remember the OEMs blocking the EGR valves and eliminating the catalytic converters in cars sold to Canada and Europe all during the 1970’s.
Another thing that bothers me with news reporting is the thought that somehow electric cars are a solution. Well let’s see: first you generate electricity from coal, nuclear or gas (efficiency losses >30%). Pump it 30 miles to your home through resistive wires (efficiency losses >40%). Run a battery charger circuit and charge a bank of 16 batteries (lead acid or metal hydride; ecological disasters) at (efficiency losses >35%). Then discharge the batteries to run an electric motor (efficiency losses >15%). So basically you need to generate 3.5 times the electricity used to run the car to replace a gasoline engine with a >58% total efficiency and better emissions than the power plant? (Guesses on efficiency)
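To make that chain of losses easy to check, here is the multiplication written out using the loss figures guessed above (they are the guesses from this comment, not measured values):

```python
# Loss figures guessed in the comment above (fractions, not measured values)
losses = {
    "generation":    0.30,
    "transmission":  0.40,
    "charging":      0.35,
    "battery+motor": 0.15,
}

efficiency = 1.0
for stage, loss in losses.items():
    efficiency *= 1.0 - loss

print(f"chained efficiency with these guesses: {efficiency:.2f}")
print(f"electricity generated per unit delivered at the wheels: {1.0 / efficiency:.1f}x")
```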
I’m sorry, but public transportation and biodiesel make 100 times more sense. I hope a fuel cell that doesn’t use 5 kg of platinum (at 5 times the cost of gold) and actually burns something other than gasoline or diesel can be developed! How about alcohol as a hydrogen base? I know, I know, too much bacterial methane gas. Why not run them on cow farts?
Just a few thoughts,
Jeff
Armand MacMurray says
Thanks for providing some really interesting background information on the radiosondes! I’m also interested in the correction procedures you mention for the “heat island” effect on surface temps; if you could provide a pointer to an article/book/website providing more detail on those corrections, I’d greatly appreciate it.
Steven Sherwood says
Armand,
A good review article on this is Peterson et al., Int. J. Climatol., 1998, p 1493.
There are more recent articles by Tom Peterson and by David Parker putting the homogenized surface record through some tests designed to expose problems (they didn’t find any). Roger Pielke has published several recent papers raising heat-island and land-use related worries as well as microclimate exposure issues and showing the impact that they can have in isolated cases.
Russ says
I’d be interested to hear your take on the accuracy of the radiometers produced by Radiometrics. I’m an engineer, not a meteorologist, but it does seem that while tricky to operate, these instruments could overcome some of the radiosonde problems. I notice that they use the radiosondes for a reference too. Although I think it may primarily be just for a comparison basis to prove that their technology works.
http://www.radiometrics.com/pubs.htm
Dano says
Re: #1, if you go to the library and use ISI, you’ll find a paper using the DMSP to estimate the extent of the UHI, and way back in 1989 you’ll find a paper by Karl that discusses corrections. But an interesting new paper offers, to me, the best ideas: Jin ML, Dickinson RE, Zhang DL 2005. The footprint of urban areas on global climate as characterized by MODIS. J. Climate 18(10), 1551-1565. An interesting way to go about solving a problem.
Best,
D
Lynn Vincentnathan says
This is really interesting. I was going to respond “tongue-in-cheek” on another site to a contrarian who keeps claiming the Antarctic has been cooling (ergo GW is not happening), with, “Well, maybe they used faulty thermometers, especially in the past when thermometer technology wasn’t so advanced.” But, of course, I understood that would also apply to all other places around the world, as well…
Coby says
Wow, what a mess of difficulties. It will be very hard, for me at least, to ever have much faith in measurements that require so much correction and adjustment. In a case like this what will happen is these readings will be poked and prodded and nudged until they finally do come in line with model expectations at which point people will stop looking for new biases and new error mechanisms. But this will not be a good indicator that all of the problems have been found, only that everyone is comfortable with the results.
Thank you for that illuminating article; I don’t know what it will take for me to ever care again what the radiosondes say.
I was wondering if it would not be possible, in a practical sense or even in just a theoretical one, to have enough stations situated in mountain ranges and island volcanoes to get a more reliable view of lapse rates and warming trends (starting now of course) in the lower part of the troposphere. Are the weather dynamics too overwhelming to get any useful readings this way? Is that just too small a layer to be useful?
Lynn Vincentnathan says
I’m wondering if this article is related to what I just read about the troposphere found to be actually warming & fitting the CC models, now that 2 teams of scientists have corrected for the satellite orbit.
http://www.climateark.org/articles/reader.asp?linkid=45118
Michael Tobis says
Re # 6:
Neither the sonde programs nor the MSU units were designed to detect long-term trends. They are enormously useful in other applications. The design and deployment of these instruments should not be criticized on the grounds that they are not especially useful for purposes for which they were not designed.
The fact that the NCEP reanalysis is implicitly calibrated to a drifting (biased) instrumental record is something I had not heard discussed previously, though. It seems this should be a matter of some concern in studies of the long-term record.
Steve Latham says
Geez, it sucks that climatologists can’t hang their hats on at least one line of evidence. I know it only relates to a certain part of the troposphere and there is probably contamination from surface effects, but does this emphasize to a greater extent the importance of tropical glaciers in understanding tropospheric trends over the past 100 or so years? Do the new calibrations change the understanding of the tropical glacier data? Here is a quote from another realclimate post (see here https://www.realclimate.org/index.php?p=157#more-157):
{Kaser et al also argue that surface and mid-tropospheric (Kilimanjaro-height) temperature trends have been weak in the tropics, in “recent decades.” One of the papers cited in support of this is the analysis of weather balloon data by [Gaffen et al, 2000], which covers the period 1960 to 1997. It is true that this study shows a weak (cooling) trend in mid-tropospheric temperatures over the short period from 1979-1997, but what is more important is that the study shows a pronounced mid-tropospheric warming trend of .2 degrees C per decade over the full 1960-1997 period. Moreover, few of the sondes are in the inner tropics, spatial coverage is spotty, and there are questions of instrumental and diurnal sampling errors that may have complicated detection of the trend in the past decade. Analysis of satellite data by [Fu et al, 2004] reveals a tropical mid-tropospheric temperature trend that continues into the post-1979 period, at a rate of about .16 degrees C per decade. When one recalls that tropical temperatures aloft are geographically uniform, this data provides powerful support for the notion that East African glaciers, in common with others, have been subjected to the influences of warming. Set against this is the surface temperature record from the East African Highlands, reported by [Hay et al 2002]. This dataset shows little trend in surface temperature over the location covered, during the 20th century. However, surface temperature is more geographically variable than mid-tropospheric temperature, and is strongly influenced by the diurnal cycle and by soil moisture. The large decadal and local variability of surface temperature may have interfered with the detection of an underlying temperature trend (more “noise” less “signal”). It is unclear whether this estimate of temperature trend is more relevant to Kilimanjaro summit conditions than the sonde and satellite estimate.}
Hans Erren says
re#8
Neither are surface stations. Homogenisation works nicely in densely monitored areas like Europe and the US but breaks down in sparse areas. Have a look at the GHCN in the tropics…
Joel Shore says
In response to #1: You should recognize that while the Clean Air Act, catalytic converters, scrubbers, and all that stuff are great for cleaning up conventional air pollutants, they do nothing (or at least very little) to reduce the emissions of greenhouse gases. Unfortunately, the emission of CO2 is an inevitable byproduct of the combustion of fossil fuels (or most any organic matter)…It is not just a product of incomplete combustion (or of contaminants in the fuel), as pollutants like CO, SO2, and NOx are. Thus, the only way to reduce the amount of CO2 going into the air is to burn less fossil fuel…or to learn how to sequester the CO2. And, while it is true that the developing nations tend to have the highest RATE of growth of greenhouse gas emissions, it is still the developed world…and the U.S. in particular…that have the highest amount of greenhouse gas emissions per capita. We are also responsible for the lion’s share of the rise in CO2 levels that have already occurred, and we have better technology. These are all natural reasons why the developing world would expect us to go first in trying to stabilize and reduce emissions.
It is true that you have to look at the whole lifecycle of an electric car to be sure it is more efficient. However, it does afford some additional advantages because it allows for the possibility that the electricity could be produced without CO2 emissions (by renewable resources like solar or wind) or that emissions from the electricity production could be more easily sequestered. And, it also makes it easier to have really good anti-pollutant controls at the site of the electricity generation so there are less of the ordinary pollutants. [Another advantage of electric…or hybrid…cars is the ability to easily recapture some of the energy of motion when braking the car and convert it into electricity that you can then use to run it.]
Note, by the way, that fuel cells are not a magic bullet either as the production of hydrogen also requires energy. So, the issues with fuel cell powered cars and electric cars are actually quite similar.
Ferdinand Engelbeen says
Steve,
May I disagree with the conclusion? There are and were problems with all kinds of temperature records, for satellite data as well as for radiosonde and surface data. Thus even if the satellite data are now corrected and are more in line with the expectations of the models, one needs to check whether the discrepancy which is left over is not based on problems with the surface data.
To give you an idea, just look at any GISS surface data series around the equator (where the largest discrepancy was found):
Look e.g. at the data for Salvador, a town of 1.5 million inhabitants. That should be compared with rural stations to correct for the urban heat island effect. But the nearest rural stations are 458-542 km away from Salvador (Caetite, Caravela, Remanso). And their data are so spurious that it is impossible to deduce any trend from them. Quixeramobin is the nearest rural station with more or less reliable data over a longer time span, and shows very different trends than Salvador. Or look at Kinshasa (what a mess!) with 1.3 million inhabitants, or Brazzaville (opposite the Congo River), and something rural in the neighbourhood (Mouyondzi – 173 km, M’Pouya – 215 km, Djambala – 219 km,…). East Africa is no better: compare the “trends” of Nairobi with those of Narok, Makindu, Kisumu, Garissa,…
Rural data trends with some reliability over a longer time span are very rare in the whole tropics. Only fast-expanding towns have (sometimes) longer data sets, which are hardly correctable. The unreliability of the data in the tropics is so obvious that one can wonder how a global surface temperature trend can be calculated to any accuracy…
Engineer-Poet says
Jeff Simchak: I anticipated some of your objections in a piece just a few days ago. Click my name and the hotlink will take you straight to my brief analysis.
George says
This discussion is blowing smoke. This .org is doing a poor job of presenting science. Without data tables and figures no one can analyze the data – if in fact there are data. One comment makes sense – comment 13. The surface data for the tropics look pretty unreliable. Do you have data to prove otherwise?
And back to smoke. There have been a number of articles about the role of smoke in heating the layer below the tropical inversion. Inversions complicate the analysis of the lapse rate, especially when smoke is added. It isn’t simple thermodynamics. The assertion that an undergrad would understand the problem is an attempt to intimidate the non-scientist and bully those who have different perspectives.
There are a number of strong lines of evidence of “global warming” that I don’t dispute. For example, the sea ice data and ocean temperature data are looking more and more convincing with time. However, asserting that most climate scientists don’t think that there are problems with the surface data for the tropics is not a scientific argument. It’s an opinion. If you want to convince people, try using science.
[Response:I would point out that if you look at the combined ocean and land data for the tropics (available at the GISS web site), the ocean (still part of the surface after all) shows significant and widespread warming. Since the ocean is actually the majority of the surface in this region, problems with the (admittedly less than perfect) land stations and continental aerosol effects are secondary issues. Aerosols, since they absorb as well as reflect, act to warm the atmosphere with respect to the surface though, and should therefore push the system in the ‘wrong’ way for your argument. -gavin]
Klaus Flemloese, Denmark says
Reply to Ferdinand Engelbeen:
You have written in #13:
“There are and were problems with all kinds of temperature records, for satellite data as well as for radiosonde and surface data. Thus even if the satellite data are now corrected and are more in line with the expectations of the models, one needs to check whether the discrepancy which is left over is not based on problems with the surface data.”
To my understanding the discussion about the surface temperature has been settled a long time ago.
It is therefore only a matter of reading RealClimate:
http://www.brighton73.freeserve.co.uk/gw/temperature.htm#urbanheatislands
or reading Tom Rees:
http://www.brighton73.freeserve.co.uk/gw/temperature.htm#urbanheatislands
When you have a very large dataset, it is possible by cherry picking to find outliers to indicate “there is something rotten”. Using this method you are, to my understanding, suggesting that there has been an undetected “gross error” in the methods used to calculate the surface temperature and that the statisticians have not done a good job.
From a theoretical point of view it could be the case. However, it is unlikely, since so many resources have been devoted to analysing the temperature development and so much has been published on this subject.
I am sure that RealClimate will be able to provide a comprehensive list of references.
This subject is likely to come up again and again, and therefore there is a need for a presentation of it for 1) journalists, 2) laymen, 3) scientists, 4) statisticians.
Hans Erren says
re: 15
Klaus, the main problem with sparse surface data is inhomogeneity. You can’t solve that when you don’t have neighbours to compare with. I know that UHI is not an issue in US and Europe, because these are also the regions where sat and surf agree best. It’s the rest of the world where the problems (oops, challenges) are.
Ferdinand Engelbeen says
Klaus, re #15:
I am sure that the UHI problem is largely resolved in developed countries, as there are a lot of rural stations which can be used to compensate for the UHI of large towns (there are some residual individual and regional problems, like irrigation in valleys, but that doesn’t influence the general trend that much). The problems arise in less developed countries, especially in the tropics, where the largest discrepancy was seen. In nearly all of these countries, there simply are very few reliable rural stations, mostly just more or less reliable measurements in fast-expanding towns.
No statistician is able to make something reliable from unreliable data.
A little challenge for you: just count the number of rural stations in the vicinity of urban stations in the 20N-20S (or 30N-30S) band that produce something useful in the 1979-2005 period of interest…
Tom Rees says
Ferdinand, the long-term trends from urban stations aren’t used to create the gridded dataset (only annual-scale fluctuations). All the long term trends are from rural stations. What is happening to the trends from stations identified as urban is irrelevant for this discussion. For more information on how the UHI effect is removed from the GISS analysis, see Hansen et al, 2001
Therefore, the only questions are: ‘Are the rural stations correctly identified?’, and ‘Are there other, systematic errors in the rural trends?’. There is good evidence that the answer to both these questions is no: (the insensitivity of the results to the methodology of selecting rural stations, the Parker et al windy days study, and the fact that data from satellite skin surface measurements, from sea surface temperatures, deep ocean temps as well as tropospheric temps are all in good agreement).
Ferdinand Engelbeen says
Tom re #19:
As Hansen indeed only used rural stations for his global temperature trend outside the USA, I need to change the challenge: find out the station density of rural stations in the GISS database for the tropics (20N-20S or 30N-30S) where in the 1979-2005 period the data show some reliability… Good luck with that!
Steven Sherwood says
Just to respond briefly to a few of the comments:
As Gavin points out, the Tropics are mainly ocean so it is the ocean data, not the land surface data that mainly determine the trend we are talking about there. That said, the independent ocean and land data show roughly consistent warming rates. I did not say anywhere in this piece that the land data were free of problems, or that scientists thought they were, only that most have concluded they are in significantly better shape than the other observations. That conclusion is supported by tests applied to each dataset (see my earlier posting) and thus has a scientific basis.
The role of smoke (#15) was an obvious candidate for explaining purported lapse-rate changes. However, published impacts are localized, and model calculations do not show a significant effect on lapse rates averaged over the whole tropics. I have a student looking at the possible indirect effects of aerosol on lapse rates. Believe me, I am quite prepared to believe the models are missing something and that is what got me into this area of research in the first place. We are also continuing to look more closely at the radiosonde data to see if we will be able to find evidence for interesting (though perhaps less dramatic than before) lapse-rate changes.
My comment on thermodynamics was intended to counter what seems to be a prevailing notion that global climate models are so complicated we don’t understand anything they do. In many cases this is true, but some results (like lapse rate) derive from simple physics built into the models (this doesn’t mean it’s correct, but means the implications are greater if it is wrong).
George says
Reply to Gavin RE:response to comment 15. I agree that the surface in the tropics has warmed. As I wrote, sea surface temperature data show warming. You misunderstood my position and knocked down a straw man.
My point about smoke concerns the lapse rate. The surface layer below the inversion is warming, in part, because smoke is trapped below (and in) the inversion. This warming below the inversion may be increasing the lapse rate.
My criticism concerns the original post which states, “In the tropics, simple thermodynamics (as covered in many undergraduate meteorology courses) dictates that it should actually warm faster, up to about 1.8 times faster by the time you get to 12 km or so; at higher latitudes this ratio is affected by other factors and decreases, but does not fall very far below 1. These theoretical expectations are echoed by all numerical climate models regardless of whether the surface temperature changes as part of a natural fluctuation, increased solar heating, or increased opacity of greenhouse gases.”
I disagree with aspects of this statement because it does not consider the effects of inversions and the complex processes involving water vapor. These processes affect the lapse rate. Moreover, I don’t think that the tone of that paragraph contributes to the purported educational mission of your group because it implies that those who disagree don’t understand elementary thermodynamics. Perhaps I misspeak. That paragraph is so convoluted that it’s easily misunderstood.
Models can’t be improved if they aren’t critically assessed. It isn’t simple thermodynamics. Emanuel, who has literally written the book on convection, states “But the physics of the processes controlling water vapor in the atmosphere are poorly understood and even more poorly represented in climate models, and what actually happens in the atmosphere is largely unknown because of poor measurements. It is now widely recognized that improvements in understanding and predicting climate hinge largely on a better understanding of the processes controlling atmospheric water vapor. ” http://wind.mit.edu/~emanuel/anthro.html
In conclusion, the research on radiosonde measurement problems looks promising but it is only a small part of a larger problem of poor measurements and poor models.
Aloha,
[Response: The predicted/theoretical lapse rate changes do include water vapour condensation processes (which is why it is different from the dry adiabat of course), and as shown in the Santer et al paper, all data and models agree that this works well for short term (monthly to interannual) variability. It is conceivable that aerosol effects (which includes ‘smoke’) could also affect the lapse rate, but the aerosols tend to warm where they are located and depending on the composition, cool below – this gives an impact that – if it was a large factor in the tropical mean – would produce changes even larger than predicted from the moist adiabatic theory. This would make the S+C numbers even further off. Note too that the models do include representations of aerosol changes over this period – though imperfectly. Deciding on whether the models are ‘poor’ however, depends upon how much trust can be put on details of the data – and as we have seen, there are a number of still outstanding issues (RSS vs. S+C v5.2 for instance) that mean that these data do not lend support to the idea that the models are poor. – gavin]
Mike Doran says
#6 antarctic has been cooling (ergo GW is not happening) by Lynn Vincentnathan
Peter Doran’s research (no relation) shows warming and cooling.
I think I can explain. Oceans that are warmer are more conductive, about a percent more conductive per degree F. The problem is that impedance is not just about resistance but also about induction. And capacitive couplings can occur better with warmer, more conductive ‘plates’ that the oceans present. So you have on the one hand warming oceans and on the other hand high pressures building around Antarctica preventing surface lows from bringing warmer conditions inside Antarctica. You have more intense capacitive couplings in some places impacting microphysics and less intense in others, depending on the ocean currents and the induction meaning they hold.
Eli Rabett says
Here is another vertical scale issue to ponder. Climate reconstructions based on boreholes and ocean sediments (Moberg) are lower over the past few hundred years than reconstructions based on surface proxies.
Stephen Berg says
Three new articles I have seen today:
http://www.nature.com/news/2005/050808/full/050808-13.html
http://www.nytimes.com/2005/08/12/science/earth/12climate.long.html?ex=1281499200&en=2588a631b8c5cc5d&ei=5090&partner=rssuserland&emc=rss
http://www.sciencedaily.com/releases/2005/08/050814164823.htm
Stephen Berg says
I forgot to enclose the link to the Grist Magazine blurb on the articles:
http://www.grist.org/news/daily/2005/08/12/3/index.html
Lynn Vincentnathan says
RE #1 & #12 on electric vehicles. From what I’ve heard from people who convert them & read in books, even if the source of electricity is coal or oil, EVs are still 1/4 to 1/3 more efficient (even figuring in batteries & their manufacture) than ICE vehicles. And, of course, if your electricity is wind-generated (which is available in many states for a bit more, or even less, as in Texas), then GHGs for transportation go way way down. Maintenance for EVs is also much easier & cheaper, and less frequent.
I’m just waiting for plug-in hybrids (with a range of 10 or more miles) to come out, then on 95% of my driving days I can run the car strictly on wind power. People can also convert ICE cars to electric, and I understand it’s not too difficult; there are EV clubs around the nation that can help.
Hans Erren says
re: #19
I have some bad experience with the automated way GISS corrects for urban trends and inhomogeneities.
GISS doesn’t detect jumps, and adds warming trends to rural stations.
http://home.casema.nl/errenwijlens/co2/homogen.htm
in particular these two graphs:
http://home.casema.nl/errenwijlens/co2/debilthomogenisations.gif
http://home.casema.nl/errenwijlens/co2/ucclehomogenisations.gif
Alastair McDonald says
Reply to Gavin RE:response to comment 22.
A satisfactory response in the short term but failure in the long term is a classic case of a bad model. It is what neural network scientists call over-fitting. It implies that you have matched your model to fit the current circumstances, but because the logic is wrong, it does not work when the environment is changed. I have been looking at the GISS ModelE1 and it seems to me that the radiation emitted by each layer is being calculated using Planck’s function for blackbody radiation. Perhaps you could correct me if I am wrong. However if that is so, please note that in the real world the radiation emitted by each layer originates only from the greenhouse gases, and is not cavity radiation. I believe that this is a hangover from Schwarzschild’s equation, and is a problem with all of the GCMs. In other words, the radiation scheme used in all computer models is wrong! Unbelievable but true.
HTH,
Cheers, Alastair.
[Response: Unbelievable and untrue, actually. I suggest you read a relevant text (Houghton “Physics of Atmospheres” is good on this – chapter 4). The Planck function is used, but it is multiplied by a wavelength dependent function so that you only integrate over the approximation to the lines. -gavin]
Norm Stephens says
Has this study considered the effect of oceanic circulation cycles such as the 50-70 year Atlantic Multidecadal Oscillation? A century ago might have been during a relatively cool point in the cycle compared to now. The AMO has been studied going back at least 500 years.
It is a little dangerous to project a trend from two points on a sine wave, especially when the measurements of the two points are subject to “error correction.”
Lynn Vincentnathan says
RE the back & forth about models, & that they are only best for short term predictions….it seems to me CC scientists are doing a remarkable job in taking on the whole earth-solar system with its kazillion variables (maybe even some we don’t know about yet) & making models out of it. The fact that the models have so many variables now that fit fairly nicely what real world data we have is a testament to very hard work over many years (also on the part of the folks who develop powerful computers). I’m simply amazed. My economics professor the 1st day of class threw a balsa model airplane, & it flew, then crashed. He said economics provides models for the real world, and they work pretty good, but they are not the real world.
Don’t burn out. The smoke generated from burn out may be harmful to your health & the world’s health. (Or has this analogy/model already crashed?)
extagen extenze says
I think most scientists would now agree that the climate is truly warming; however, the cause is still in doubt. If in the end man is the cause, it will be too late by then to do anything about it. I feel we have no choice but to assume mankind is the fundamental cause and start taking the proper steps to control the problem. I think the first and foremost concern should be the world’s growing population. Stop the increase in people and you will stop the warming of the planet to a large degree.
Hans Erren says
Why is it that we worry about temperature in 2100?
The effects in 2100 are caused by emissions in 2080.
Everybody in this forum will be dead by then, and also their children.
Reminds me of the worry for horse manure in the cities at the beginning of the 20th century.
In 50 years people have other worries, and we don’t need to worry about them.
JohnLopresti says
Living many years at the margin of old logging, more recent logging, a ridgeline overlooking an alluvial valley where a small city is growing and similar cities extend to the horizon in inland coastal range mountains in CA, I am reminded often of the coolness in the tall forest contrasting to the heat generated on sunny days where forests are discontinuous because of logging or absent because of urban growth.
Next study desertification.
It would be interesting to describe the thermodynamics of treeshade.
It is because I have lived in this place so long and recognize the patent changes and link to forest condition that I keep returning to this amateur hypothesis.
Now for the research.
Murray duffin says
The faith that the UHI effect has been adequately compensated for does not take into account what Pielke Sr. refers to as land use change effects, which have also tended to raise rural readings. Long term changes in the way sea surface temperatures are measured also tend to introduce warming. In fact all long term surface instrument changes are in a direction to introduce warming, as pointed out years ago by Daly. Global averaging of stations has also not been compensated for dropouts, which have reduced the total number of reporting stations dramatically since 1989. If all effects were accounted for, the surface temperature would undoubtedly prove to be affected by much more than 0.05 degrees.
The url referenced in #16 makes several statements that are simply not supportable over the course of a century, and gives figures for % of rural and urban stations as if their ratio was fixed, which it assuredly has not been. It is good that the sat. and sonde readings are being corrected. Now we need a real effort to also correct the surface instrument averages. Murray
Steve Bloom says
Re #33: Well, Hans, that explains a lot. Of course your point of view ignores the fact that the effects of excess atmospheric CO2 last for considerably more than 20 years, and assumes that climate “tipping points” (such as we may now be seeing in the Arctic) can’t possibly be a problem for us (“us” being the privileged residents of North America and Western Europe).
Re #34: In case you don’t know about it, Google Scholar at http://scholar.google.com/ is an excellent on-line resource for this type of research. Many of the studies are subscriber-only, but even in that case at least the abstracts can be seen.
Eli Rabett says
Hmm… it seems that Hans Erren has adopted the French king strategy, après moi le déluge. This tends to end badly, cf. 1790. OTOH, we do have the moral issue of leaving a place no worse than we found it, and since I intend to live forever, I guess that makes me even more interested in the issue.
Alastair McDonald says
Re #31 from Lynn, where she congratulates the climate modellers on a job well done. I thought her story of the model plane was most apposite. The sciences of economics and earth science share a feature which makes them stand out from most of the other physical sciences. In neither science is it possible to set up experiments to test theories. Thus one can propose hypotheses, and create models, but the only way to test them is to wait and see what happens.
We know from the ice cores and other paleological evidence that the climate changed abruptly in the past, and in the not so distant past as well. Yet the current climate models cannot reproduce those events. It is all very well admiring the well constructed model plane, and the complications of the climate models, but if, after a short period of simulating the real thing, they crash, one cannot really call them good models. Of course, in this case it is the climate system itself which will crash and not the model, which would still be predicting a smooth transition to a warmer world.
The climate models use a technique for calculating the greenhouse effect that predates quantum mechanics, and proper peer review. These days science is thought to progress through small steps, each thoroughly checked. That is fine, except when a mistake has been made in the past and not noticed. That is what has happened here, and so the progress in the climate models has now halted. For 15 years the prediction of warming resulting from a doubling of CO2 has varied by 300% from 1.5 to 4.5 K. For 15 years the climate modellers have been claiming it will take them 15 years to get the clouds and aerosols right.
Of course you may claim that the weather men do produce better forecasts, but that is because they have learnt where the models go wrong, and can adjust their forecast appropriately. The climate models cannot even predict the height of the cloud base correctly, but the weather men know how much to add to the model value depending on the time of day.
Don’t be fooled by the dulcet tones of the sirens used by the oil and coal industries to lure us onto the rocks. What we need is a return to a world where the hand that rocks the cradle rules the world!
Hans Erren says
Re# 35
Indeed the Bern model assumes a saturation of the sinks, whereas observations show an ever-increasing sink. No wonder they calculate 1200 ppm for 2100!
http://home.casema.nl/errenwijlens/co2/sink.htm
The observed half life for CO2 in the atmosphere is 55 years. If you want to rely on the SRES models that assume absurd CO2 emissions (A2 and A1FI), fine, but don’t build a policy on it for the next fifty years.
Steve Latham says
Hi Hans (#39),
I guess you disagree with what D Archer posted on RealClimate a while ago (https://www.realclimate.org/index.php?p=134):
“When you release a slug of new CO2 into the atmosphere, dissolution in the ocean gets rid of about three quarters of it, more or less, depending on how much is released. The rest has to await neutralization by reaction with CaCO3 or igneous rocks on land and in the ocean [2-6]. These rock reactions also restore the pH of the ocean from the CO2 acid spike. My model indicates that about 7% of carbon released today will still be in the atmosphere in 100,000 years [7]. I calculate a mean lifetime, from the sum of all the processes, of about 30,000 years. That’s a deceptive number, because it is so strongly influenced by the immense longevity of that long tail. If one is forced to simplify reality into a single number for popular discussion, several hundred years is a sensible number to choose, because it tells three-quarters of the story, and the part of the story which applies to our own lifetimes.
However, the long tail is a lot of baby to throw out in the name of bath-time simplicity. Major ice sheets, in particular in Greenland [8], ocean methane clathrate deposits [9], and future evolution of glacial/interglacial cycles [10] might be affected by that long tail. A better shorthand for public discussion might be that CO2 sticks around for hundreds of years, plus 25% that sticks around forever.”
Questions: 1) From where do you get 55 years and do you believe that estimate to be more defensible? 2) Why do you not agree with theory that saturation should increase in the future? 3) What’s going on in 1998 for the link you posted (thanks for that) — is it that the warm ocean surface in that year would not absorb much CO2?
And a note for Eli (#37): I also plan to live forever — so far so good!
Tom Fiddaman says
Re #39:
So, how do you get from “The observed half life for CO2 in the atmosphere is 55 years” to “Why is it that we worry about temperature in 2100? The effects in 2100 are caused by emissions in 2080.” ? Even if you accept the 55yr time constant, this is clearly wrong, and gets worse when you consider the thermal lags in the system.
Re #40:
Evidently the source of the 55yr estimate is the loony Dietze model. There’s an email dialog on Dietze’s web site between him and some real carbon cycle modelers (Goudriaan and Joos for example). It reads like the Monty Python dead parrot routine – Dietze is simply ineducable. It’s hilarious how he makes complex assertions about problems with the representation of the vertical mixing structure etc. in other models, based on what appears to be a 1st order box model.
I didn’t succeed in finding a definitive set of Dietze’s actual dynamic equations for atmospheric CO2 on the Daly web site; two different models seem to be implied. But the origin of the 55yrs appears to be a single, static, linear calculation of the time constant via Little’s Law: The Lifetime of CO2. That’s a pretty cavalier attitude to fitting the data especially given that much richer information is available, e.g. bomb isotopes, which the model would fail to fit. Even if you accept the assertion of linearity, the maximum likelihood estimate of the time constant for a 1st order model is much longer; I’ve tried it (as did Nordhaus, who arrived at 120 years using OLS, which isn’t really right, but leaves 55yrs way out in the cold).
Murray duffin says
Re #40 and #41 Long input:
From http://cdiac.esd.ornl.gov/pns/faq.html
snip Q. How long does it take for the oceans and terrestrial biosphere to take up carbon after it is burned?
A. For a single molecule of CO2 released from the burning of a pound of carbon, say from burning coal, the time required is 3-4 years. This estimate is based on the carbon mass in the atmosphere and up take rates for the oceans and terrestrial biosphere. Model estimates for the atmospheric lifetime of a large pulse of CO2 has been estimated to be 50-200 years (i.e., the time required for a large injection to be completely dampened from the atmosphere). Snip
This range seems to be an actual range depending on time frame, rather than the uncertainty among models. [See below].
See http://cdiac.esd.ornl.gov/ftp/ndp030/global.1751_2002.ems
For the 5 decades from 1953 through 2003, we have now had 4, 3, 2, 1, and 0 half lives respectively, using an average half life of 11 years, (based on real 14C measurement). We get a total remaining injection in 2004 from the prior 5 decades of 139 Gt, which equates to an increase in atmospheric concentration of 66 ppm. The actual increase from 1954 to 2004 was very near 63 ppm. This result lends some credibility to the 50 year atmospheric residence time estimate above. A 200 year residence time gives an 81 ppm delta since 1954, which is much too high.
Surprisingly, if we go all the way back to 1750 and compute the residence time using fuel emissions only we get a value very close to 200 years. (A 40 year ½ life gives a ppm delta of 99 vs an actual of 96 using 280 ppm as the correct value in 1750). If we assume that terrestrial uptake closely matches land use emissions, (this is essentially the IPCC assumption), and we know that the airborne fraction from 1964 through 2003 had a weighted average of 58%, to shift to a long term 40 year ½ life from a near term 11 year ½ life, we would have to have prior 40 year period weighted average airborne fractions like 80% for ’24-’63, and near 90% before that. As the airborne fraction has been steadily dropping, this may be realistic. Since emissions in the last 40 years have been 3 times higher than in the period from 1924 to 1963 and 30 times higher than 1844 to 1883 it is not too hard to believe that the rapid growth in atmospheric partial pressure has forced such a change in airborne fraction
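A sketch of the five-decade bookkeeping described above (a reconstruction of the method as stated, not the actual calculation behind the 139 Gt figure). The decadal emission totals are placeholders to be filled in from the CDIAC file linked earlier; 2.13 GtC per ppm is a standard conversion factor.

```python
# Decadal fossil-fuel emission totals in GtC for the five decades ending in 2003.
# Placeholder zeros: fill in from the CDIAC emissions file linked above.
decadal_emissions_gtc = [0.0, 0.0, 0.0, 0.0, 0.0]   # oldest decade first

gtc_per_ppm = 2.13   # standard conversion, GtC per ppm of CO2

# The comment treats each elapsed decade as roughly one 11-year half-life:
# the oldest decade has decayed through 4 half-lives, the newest through 0.
remaining_gtc = sum(e * 0.5 ** n
                    for n, e in enumerate(reversed(decadal_emissions_gtc)))

print("retained:", remaining_gtc, "GtC =", remaining_gtc / gtc_per_ppm, "ppm")
```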
From Archer, I think chapter 9: snip We expect that added CO2 will partition itself between the atmosphere and the ocean in proportion to the sizes of the reservoirs, and in the ocean we expect that size to be the buffering capacity. The relative sizes of the preanthropogenic atmosphere and the atmosphere plus ocean buffer are proportioned 560:(560+2500) equals ~18%. This crudely predicted atmospheric fraction is comparable to the model atmospheric fraction after 1000 years, which ranges from 14-30%, depending on the size of the fossil fuel release. snip
And also: snip The bottom line is that about 15-30% of the CO2 released by burning fossil fuel will still be in the atmosphere in 1000 years. snip
I have been trying to figure out what this meant, apart from the obvious errors. The errors are:
a) that the surface ocean buffer circa 1994 is given by the IPCC as 1020 Gt, which would give a preanthropogenic value of 900 Gt, not 2500;
b) the value of the ratio is then 38% not 18%;
c) these values are inventories or stocks, not reservoirs; the reservoirs are vastly larger. I’ll admit this last one is a quibble. I know what he meant — I think.
Probably the partitioning he wanted is among the atmosphere, the terrestrial “reservoir” and the surface ocean buffer, which would be 560/(560+900+2190) = ~15%, which is still just within his range of 14-30%.
The question is, “What does this mean?”
Consider that we emit a pulse of 230 Gt from 1954 through 2003, of which about 130 Gt is retained in the atmosphere. Then we stop C emissions. We will get back to our 1954 starting level of about 320 ppm in between 50 and 200 years, but probably closer to 50 years. The pulse has then disappeared.
However, what if all of the molecules of our pulse are colored green, like tennis balls, while the original 320 ppm are colored white, and like tennis balls the color has no effect at all on their behaviour? Then when they have partitioned themselves according to the original distribution, we will still have 15% of the green molecules in the atmosphere, and these will only disappear over the longer time that it takes for mixing with the deep ocean and permanent uptake in the terrestrial sink, possibly more than 1000 years. That for sure gives a long residence time for the green molecules, but it doesn’t lengthen the residence time of the “pulse”. I can’t think of any other explanation.
[Ad homs deleted – gavin]
Murray
Lynn Vincentnathan says
This is a bit off-topic, but I just read about a new model for the end-Permian extinction at:
http://www.ens-newswire.com/ens/aug2005/2005-08-29-01.asp
I’m interested in this even though I understand we are not very likely to reach a tipping point in this century which might lead to such a runaway GW scenario; it motivates me all the more to reduce my GHGs.
And I think #39 is a bit flippant (if I understand him correctly) about not being concerned about the future. I’m even concerned about that CO2 tail 100,000 years from now. The idea that my GHG emissions of today might be harming people & other life forms even far into the future is as disturbing to me as harming people living today.
Hans Erren says
My take from the observations is that each year 1.6% of the excess CO2 over 280 ppm (the equilibrium) will be absorbed, predominantly as straightforward diffusion. Increasing CO2 in the atmosphere therefore means an increasing sink. There is no distinction between “old” and “new” CO2, just atmospheric concentration. Compare it to a leaky bicycle tyre: the higher the pressure, the faster the flow.
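A minimal sketch of that relaxation rule as stated (using the 1.6% and 280 ppm figures from this comment; the starting concentration and the emissions series are placeholder assumptions for illustration):

```python
def project_co2(emissions_ppm_per_yr, c0=378.0, baseline=280.0, uptake=0.016):
    """Step the stated rule forward one year at a time: 1.6% of the excess over
    280 ppm is absorbed each year, and that year's emissions (in ppm) are added."""
    c, trajectory = c0, []
    for e in emissions_ppm_per_yr:
        c += e - uptake * (c - baseline)
        trajectory.append(c)
    return trajectory

# A 1.6%/yr uptake of the excess implies an e-folding time of 1/0.016, about 62 years.
# Example with a constant placeholder 2 ppm/yr of emissions for 50 years:
print(project_co2([2.0] * 50)[-1])
```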
Yes, the flow out of the atmosphere is modulated by temperature, which is similar to pinching the leak; it gives a beautiful graph, btw:
http://home.casema.nl/errenwijlens/co2/co2lt_en.gif
Murray duffin says
I agree with #44 re old and new completely, but it seems Archer makes the distinction, as I tried to illustrate. Murray
Tom Fiddaman says
Re #42:
The point I should have made in #41 is that back-of-the-envelope calculations that imply 1st order models of the type dCO2/dt = a*E - (CO2 - CO2(0))/tau are not well constrained by the Mauna Loa atmospheric CO2 data. You can have any tau you want between at least 40 and 400 years and still get a good fit by varying a. It’s even worse if you cherry-pick tau by making calculations with a favorable year’s flux and stock imbalance (as Dietze does) or ignore all time series information by making calculations aggregated over a century.
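To illustrate that identifiability problem, here is a minimal sketch (with hypothetical input arrays, not a real fit) of choosing the best airborne fraction a for a fixed tau in the first-order model above; repeating it for several values of tau and comparing the residuals is the degeneracy test described.

```python
import numpy as np

def simulate(emissions_ppm, c0, tau, a):
    """Annual Euler steps of dC/dt = a*E - (C - c0)/tau."""
    c, out = c0, np.empty(len(emissions_ppm))
    for i, e in enumerate(emissions_ppm):
        c = c + a * e - (c - c0) / tau
        out[i] = c
    return out

def best_a_for_tau(emissions_ppm, observed_ppm, c0, tau,
                   a_grid=np.linspace(0.2, 1.0, 81)):
    """Grid-search the airborne fraction a that best fits the record for a given tau."""
    errs = [np.sum((simulate(emissions_ppm, c0, tau, a) - observed_ppm) ** 2)
            for a in a_grid]
    best = int(np.argmin(errs))
    return a_grid[best], errs[best]

# emissions_ppm and observed_ppm would be annual series (e.g. CDIAC emissions in ppm
# and the Mauna Loa record); comparing residuals across tau values is the test:
# for tau in (40, 120, 400):
#     print(tau, best_a_for_tau(emissions_ppm, observed_ppm, 280.0, tau))
```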
A real carbon cycle model needs to conform to physical laws (e.g. carbon chemistry in the ocean, conservation of carbon), be robust in extreme conditions, and fit all available data (e.g. isotopes & ice cores). Linear box models, while conceptually useful, fail several of those tests right away. The skeptic models above don’t even meet the minimal requirement of writing down all your equations in one place in a standard format and demonstrating how they fit to data with appropriate statistics.
What the skeptics have really done so far is use simple models to observe the “missing sink,” which other carbon cycle modelers discovered and named years ago (and more or less attributed to NH terrestrial biosphere). The skeptics’ original contribution is to attribute the missing sink to ocean uptake, which unfortunately violates what’s known about the ocean carbon budget. If they want to be taken seriously, the burden is on them to get some data beyond the Mauna Loa CO2 record that supports their position, and address the data that refutes it. To refute Bern etc., they need to find an actual problem, either conceptual or fit to data; constructing an alternate hypothesis with arbitrary parameterization and limited data isn’t sufficient.
Also, to pass the snicker test, skeptics (particularly Dietze) need to give up the pretense that linear impulse response is the be-all-end-all and stop making silly assertions about Bern and other models that confuse model structure with rough characterizations of behavior for purposes of discourse. A good start would be to actually replicate some of the classic models (e.g. Oeschger) in transparent simulation software, and then develop, share, and preferably publish credible alternatives meeting the tests above.
Lynn Vincentnathan says
RE #44 & 45, I hope you’re not making the contrarian argument that whatever GHGs humans emit are absorbed into nature, and it is only nature’s GHGs that are up there in the atmosphere, or that somehow human emissions are absorbed first, and nature’s emissions last. So blame it on nature.
If so, I think we have to look at the marginal effect, or what would be the concentration of GHGs in the atmosphere, if humans had not started emitting so much over the past 150 years, then compare that with the situation today. And then figure the overall effects on the world of the “before” situation & compare with what is & will happen “extra” with the human emissions. As I’ve mentioned before it is the last few inches of flood, or dryness of drought, or degree of heat, or intensity of storm that does much more damage than the first few increments. Of course, if everything is destroyed & all people killed in a community, then you get a flat line, while the storm, etc. might be raging more & more fiercely.
Stephen Berg says
Re: #47,
“I hope you’re not making the contrarian argument that whatever GHGs humans emit are aborbed into nature, and it is only nature’s GHGs that are up there in the atmosphere, or that somehow human emissions are absorbed first, and nature’s emissions last. So blame it on nature.”
Here’s an article which indicates that soon nature will not be a net absorber (if it even is at the moment), but will be a net emitter of GHGs.
“Warming hits ‘tipping point'”:
http://www.guardian.co.uk/climatechange/story/0,12374,1546824,00.html
Hans Erren says
re: #48, now why didn’t this runaway happen in the Holocene climate optimum, or the Eemian interglacial, when temperatures were significantly higher?
Hans Erren says
re 48:
No, absorption doesn’t distinguish between sources; 1.6% of all excess CO2 in the atmosphere is absorbed annually.
And I just don’t like to extrapolate uncertainties, something I learned in exploration geophysics the hard way. IMHO all climate “forecasts” should limit themselves to 40 years as an absolute maximum.
[Response:plug removed – WMC]