We are disappointed that the Wall Street Journal (WSJ) has chosen to yet again distort the science behind human-caused climate change and global warming in their recent editorial “Kyoto By Degrees” (6/21/05) (subscription required).
Last week, the U.S. National Academy of Sciences and 10 other leading world bodies expressed the consensus view that “there is now strong evidence that significant global warming is occurring” and that “It is likely that most of the warming in recent decades can be attributed to human activities”. And just last week, USA Today editorialized that “not only is the science in, it is also overwhelming”.
It is puzzling, then, that the WSJ editors could claim that “the scientific case…looks weaker all the time”.
While we resist commenting on policy matters (e.g. the relative merits of the Kyoto Protocol or the various bills before the US Senate), we will staunchly defend the science against distortions and misrepresentations, be they intentional or not. In this spirit, we respond here to the scientifically inaccurate or incorrect assertions made in the editorial.
Since that Byrd-Hagel vote eight years ago, the case for linking fossil fuels to global warming has, if anything, become even more doubtful.
This statement stands in stark opposition to the actual findings of the world scientific community (e.g. the various National Academies, the Intergovernmental Panel on Climate Change (IPCC)), and the vast majority of actual peer-reviewed scientific studies.
The Earth currently does seem to be in a warming period, though how warm and for how long no one knows.
If we interpret “know” as “is judged to be the case based on the available evidence”, the statement is patently false. As detailed in previous discussions at RealClimate (see here and here and here), it is the consensus of the scientific community, based on more than a dozen independent studies using both empirical data and theoretical models (including the most recent studies in Nature and Science), that average surface temperatures over the Northern Hemisphere during the past few decades appear to be unprecedented in a very long-term context, probably over at least the past 2000 years.
In particular, no one knows whether this is unusual or merely something that happens periodically for natural reasons.
This is incorrect. The natural causes of past climate variations are increasingly well-understood, and they cannot explain the recent global warming. As discussed elsewhere on this site, modeling studies indicate that the modest cooling of hemispheric or global mean temperatures during the 15th-19th centuries (relative to the warmer temperatures of the 11th-14th centuries) appears to have been associated with a combination of lowered solar irradiance and a particularly intense period of explosive volcanic activity. When these same models are forced with only natural radiative forcing during the 20th century [see e.g. Crowley (2000)] they actually exhibit a modest cooling trend. In other words, the same natural forcings that appear responsible for the modest large-scale cooling of the “Little Ice Age” should have led to a cooling trend during the 20th century (some warming during the early 20th century arises from a modest apparent increase in solar irradiance at that time, but the increase in volcanism during the late 20th century leads to a net negative 20th century trend in natural radiative forcing). In short, given natural forcing factors alone, we should have basically remained in the “Little Ice Age”. The only way to explain the upturn in temperatures during the 20th century, as shown by Crowley (2000) and many others, is through the additional impact of anthropogenic (i.e., human) factors, on top of the natural factors.
Most global warming alarms are based on computer simulations that are largely speculative and depend on a multitude of debatable assumptions.
This is not correct. Concern about global warming is not based primarily on models, but rather on an understanding of the basic physics of the greenhouse effect and on observed data. We know from data that we have caused the CO2 concentration in the atmosphere to rise sharply during the past century: it is now much higher than at any time during the past 650,000 years (which is as far back as reliable ice core data exist). And we know that this rise in CO2 concentration changes the radiation balance of the planet and leads to a warming of global surface temperature. This is scientifically undisputed and well-established physics, which has been known since 1896, when the Swedish Nobel Prize winner Svante Arrhenius calculated the climatic effect of a rise in CO2.
Since emissions of CO2 (in particular) continue to increase, further greenhouse warming is highly likely. The models serve merely to quantify these basic facts more accurately, calculate the regional climate response, and compute effects (such as the expected increase in ocean heat content or sea level) which can be tested against observed data from the real world.
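As a rough quantitative aside (a minimal sketch added here for illustration, not part of the original exchange): the widely used simplified fit for CO2 radiative forcing, dF = 5.35 ln(C/C0) W/m2 (Myhre et al., 1998), captures the basic physics Arrhenius worked out. Assuming a pre-industrial concentration of about 280 ppm and a present-day value of roughly 380 ppm:

    import math

    def co2_forcing(c_ppm, c_ref_ppm=280.0):
        # Simplified CO2 radiative forcing in W/m^2: dF = 5.35 * ln(C/C0)
        return 5.35 * math.log(c_ppm / c_ref_ppm)

    print(round(co2_forcing(380.0), 2))   # ~1.6 W/m^2 for the rise to ~380 ppm
    print(round(co2_forcing(560.0), 2))   # ~3.7 W/m^2 for a doubling of CO2

The models discussed above are then needed to translate such forcings into feedbacks and regional responses; the existence of the forcing itself is not in dispute.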
The editorial then returns to the issue of paleoclimate reconstructions and the so-called “Hockey Stick”, repeating literally each of RealClimate’s documented “Hockey Stick” myths:
Then there’s the famous “hockey stick” data from American geoscientist Michael Mann. Prior to publication of Mr. Mann’s data in 1998, all climate scientists accepted that the Earth had undergone large temperature variations within recorded human history.
The actual prevailing view of the paleoclimate research community that emerged during the early 1990s, when long-term proxy data became more widely available and it was possible to synthesize them into estimates of large-scale temperature changes in past centuries, was that the average temperature over the Northern Hemisphere varied by significantly less than 1 degree C in previous centuries (i.e., the variations in past centuries were small compared to the observed 20th century warming). This conclusion was common to numerous studies from the early and mid 1990s that preceded Mann et al (1998). The Mann et al (1998) estimates of Northern Hemisphere average temperature change were, in fact, quite similar to those from these previous studies (e.g. Bradley and Jones, 1993; Overpeck et al, 1997), but simply extended the estimates a bit further back (from AD 1500 to AD 1400). In reality, the primary contribution of Mann et al (1998) was that it reconstructed the actual spatial patterns of past temperature variations, allowing insights into the complex patterns of cooling and warming in past centuries. In fact, regional temperature changes (e.g. in Europe) appear to have been significantly larger than, and quite different from, those for the Northern Hemisphere as a whole. Neglecting the significance of the large regional differences in past temperature changes is another classic pitfall in the arguments put forward by many climate change contrarians (see Myth #2 here).
The WSJ editorial continues,
This included a Medieval warm period when the Vikings farmed Greenland and a “little ice age” more recently when the Thames River often froze solid.
The sentence, first of all, perpetuates two well-known fallacies regarding the so-called “Medieval Warm Period” and “Little Ice Age”. See the RealClimate discussions of the Little Ice Age and Medieval Warm Period for explanations of why both the Viking colonization of Greenland and the freezing of the River Thames actually tell us relatively little about past climate change.
The actual large-scale climate changes during these intervals were complicated, and not easily summarized by simple labels and cherry-picked anecdotes. Climate changes in past centuries were significant in some parts of the world, but they were often opposite (e.g. warm vs. cold) in different regions at any given time, in sharp contrast with the global synchrony of 20th century warming.
The WSJ then continues with a statement that is problematic on several levels,
Seen in that perspective, the slight warming believed to have occurred in the past century could well be no more than a natural rebound, especially since most of that warming occurred before 1940.
Firstly, the overall warming of the globe of nearly 1 degree C since 1900 is hardly “slight”. That warming is about 1/5 of the total warming of the globe from the depths of the last Major Ice Age (about 20,000 years ago) to present.
Secondly, the argument that the climate should have naturally “rebounded” with warming during the 20th century defies the actual peer-reviewed scientific studies which, as discussed earlier, suggest that the climate should have actually cooled during the 20th century, not warmed, if natural factors were primarily at play. Anthropogenic greenhouse gases are required to explain the observed warming. Also, it is incorrect that most of the warming occurred before 1940; on the contrary, the warming since 1970 is larger than the warming up to 1940.
The WSJ proceeds with the claim that key scientific findings that are common to numerous independent studies (specifically that late 20th century hemispheric warmth is anomalous in the context of past centuries) can somehow be pinned on one particular research group or even individual (see Hockey Stick Myth #1 here):
Enter Mr. Mann, who suggested that both the history books and other historical temperature data were wrong. His temperature graph for the past millennium was essentially flat until the 20th century, when a sharp upward spike occurs — i.e., it looks like a hockey stick. The graph was embraced by the global warming lobby as proof that we are in a crisis, and that radical solutions are called for.
This is patently incorrect.
The WSJ then echoes the final of the “Hockey Stick” myths discussed on RealClimate (see Myth #4 here), claiming that the “Hockey Stick” has now been discredited:
But then, in 2003, Canadian mathematician Stephen McIntyre and economist Ross McKitrick published a critique calling Mr. Mann’s work riddled with “collation errors, unjustifiable truncations or extrapolation of source data, obsolete data, geographical location errors, incorrect calculations of principal components, and other quality control defects.”
In fact, these claims have been demonstrated to be incorrect by an independent group of scientists at the National Center for Atmospheric Research (NCAR). These authors have shown that the ‘alternative’ reconstruction promoted by McIntyre and McKitrick (which disagrees not only with the Mann et al reconstruction, but with nearly a dozen independent reconstructions that agree with the Mann et al reconstruction within statistical uncertainties) is the result of censoring of key data from the original Mann et al (1998) dataset. Unlike the original reconstruction of Mann et al (1998), the reconstruction that McIntyre and McKitrick produced using the censored data set fails standard statistical tests for validity, and, in the words of the NCAR group, is “shown to be without statistical and climatological merit”.
Correct for those errors, they showed, and the Medieval warm period returns.
This is just bizarre. The claimed reconstruction of McIntyre and McKitrick, in fact, indicates anomalous warmth during the 15th century. This period doesn’t fall even remotely within the interval commonly referred to as the “Medieval Warm Period”. Instead, it actually falls within the heart of the “Little Ice Age” itself! The irony for contrarian supporters of the claims of McIntyre and McKitrick is that, if their reconstruction were valid (which it is not), it would actually argue for unusual warmth during the heart of the “Little Ice Age”.
Mr. Mann has never offered a serious rebuttal to the McIntyre-McKitrick critique.
This is not true. In fact, Dr. Mann has demonstrated the falsehood of the various McIntyre and McKitrick claims in excruciating detail on RealClimate (see here and here). More significantly, however, McIntyre and McKitrick’s claims have now been refuted by at least two independent teams of climate researchers (see here and here).
He has refused to fully explain his methodology.
Quite to the contrary, the methodology used was described in detail not just in the original 1998 publication, but in an expanded description provided last year on Nature’s supplementary website. The original description was adequate for other researchers with appropriate training to closely reproduce both the algorithm and the published results. In fact, the NCAR researchers referred to above have made their code for implementing the Mann et al (1998) method publicly available here. In the process (scroll down to the bottom of this page), they demonstrate the impacts of the numerous fundamental errors made by McIntyre and McKitrick. With this publicly available code and data, anyone can reproduce and check the reconstruction of Mann et al (1998).
Meanwhile, a review of about 200 different temperature studies was published in 2003 by Willie Soon and Sallie Baliunas of the Harvard-Smithsonian Center for Astrophysics in the journal Climate Research. It likewise reaffirmed the longstanding consensus that there have been large temperature variations over the past millennium.
The irony here could not be more striking, for if the editors of the WSJ were familiar with the news reported by the WSJ, they surely would have been aware that the Soon and Baliunas study was discredited on the pages of the Wall Street Journal itself. Indeed, as described in that previous WSJ article, a team of leading climate researchers detailed the numerous fundamental flaws in the Soon and Baliunas paper in the American Geophysical Union journal “Eos”. This matter has been discussed elsewhere at great length, including previously on RealClimate both here and here (see “Myth #2”).
computer models …suggest the upper atmosphere should have warmed substantially in recent decades. But data from weather balloons and satellites don’t match the projections.
There is indeed a discrepancy between some analyses (but not others) of the satellite data in the lower troposphere and model predictions, yet the satellite data continue to undergo quality checking and re-processing, and there are a few indications (such as Fu and Johanson, 2005) that problems still exist. The jury is still out on this issue (but they may be about to come back into the room).
There’s also the matter of the alleged melting of the Antarctic ice cover, threatening a catastrophic sea level rise. In fact, recent data suggest the ice is thickening and temperatures are dropping in most of the continent.
Models actually predict that the interior of the ice sheets should gain mass because of the increased snowfall that goes along with warmer temperatures, and recent observations actually agree with those predictions. Some areas, such as the Antarctic Peninsula and the West Antarctic ice sheet, are losing mass, consistent with temperature trends there.
Finally, an increasing number of scientists are concluding that variations in solar radiation associated with sun spots — that’s right, the heat of the sun — play a major role in Earth’s climate.
Solar forcing of climate is indeed an important component, and in the pre-industrial period it appears to explain a significant share of the century-scale variability (though there is large uncertainty in the magnitude of the effect). There was likely an increase in solar forcing in the early 20th century, as referred to above; however, since 1950 there is no evidence for increases in solar activity (most indices are flat), and so solar forcing is highly unlikely to explain the bulk of the current warming trend.
The WSJ editors then try to reverse nearly two decades of scientific research by promoting a qualitative graph (which is not actually based on any real data) that was offered simply as a schematic by the IPCC back in 1990:
So what would be a fair representation of how most scientists view the climate of the past 1,000 years? We’d suggest the graph nearby, which we reprint exactly as it appeared in the first report of the U.N.’s Intergovernmental Panel on Climate Change (hardly a group of oil-funded hacks) in 1990. It shows that our own warming period is neither unique nor all that hot.
The WSJ may prefer to use 15-year-old guesses on which to base their opinion, but the scientific community has an understandable preference for up-to-date and quantified research.
Steve Latham says
Re David Wojick’s post (#100 as of June 29, 2005):
David, I don’t think you can look at the number of papers on a particular topic and then infer that views about the underlying science are changing. Although authors of peer reviewed articles don’t often write in the abstract, “Joe Blow is wrong,” if they have the opportunity they will say something like, “Recent publications regarding [some topic] suggest that x is related to y such that we can predict or explain z [a list of citations would be given here]. Because this is an important idea with far-reaching consequences, we used an independent or better method or data set and found that x, y, and z are actually related to each other in a different manner or only under some other circumstances p, q, and r.” Therefore I think that you should try to satisfy Steve Bloom’s request and include some citations that support your point. (Please especially consider naming the peer-reviewed studies that say natural variation in given conditions could explain current trends on their own — this is of particular interest to me because climatologists have stated several times on this website that the natural variation in non-anthro forcings including solar activity would actually be negative [i.e., cooling] over the last few decades.)
Here’s how efforts are directed at the fisheries agency where I work. First a lot of data are collected. From historical estimates of the number of reproducing parents in the previous generation, a number of salmon from a given stock is expected to return. They mix with different stocks which have their own somewhat independent abundances. The abundances of all of these stocks combine to define an acceptable amount of fishing. The problem is that the actual abundances are not known ‘perfectly’ until after the fishery when everything can be tallied up in the end. (I hope this is more relevant regarding uncertainty than the courtroom analogy.) How do we deal with that uncertainty? Well, first of all, best predictions for the abundances of each stock are made. These have implications, as I’ve indicated, and parties who don’t like those implications tell us to throw out some data or include some other variables. Rarely those other variables appear to be informative and the best estimates evolve; usually it turns out to be a sensitivity analysis showing that the predictions are robust. Eventually everyone agrees that, given the available data, we have a best set of predictions. That consensus is the painless part. The next part by far represents the greater amount of work — risk analysis. This is the part where the variability around our best estimates is thoroughly examined. Various fishing plans are submitted and simulation after simulation is done to determine how often, under each scenario, with differing sets of assumptions, catch and escapement objectives are reached. Eventually, again, there is agreement on which are the best fishing plans for dealing with the uncertainties under various scenarios. Finally, as fishing begins, the fishing yields some new information, further narrowing the range of possibilities, and so more work is done to further quantify benefits and risks of various approaches.
It is very rare that the new data radically change the early predictions — usually we simply home in on more accurate quantification. If you were simply to focus on the amount of work done examining some variables over that period of time, you would erroneously think that the number of adults spawning in the previous generation was thought not to be as important as all of the other details that receive attention later in the process. Yet, with every new year’s data, the most important variable in the subsequent forecast is the number of spawners in the previous generation. Generalized, this is the process: science examines some apparent correlations, examines mechanisms, and determines that some causative relationship exists. Then someone determines this knowledge has implications for policy. The scientists figure out that refining the nature of the relationship to make more precise predictions would be helpful, so they work on it. Because this involves studying all of the minor influences that cause variance in the more obvious relationship, they have to do a lot of work to quantify those influences. I suspect this is what you are seeing in climate science.
Henry Molvar says
Re: #60 Dear caerbannog: Thanks for the link!
RealClimate group: You hit the jackpot, the grand slam and the pot at the end of the rainbow. Just as a Congressional inquiry cannot be ignored neither can the response.
Although the time frame may be onerous, the opportunity to RESPOND is wonderful. I know you’ll make the most of it.
David H says
Pressure of work has kept me away from this site for some weeks, but last night’s incredible BBC Newsnight feature on contrarians has brought me back. It was by far the most partisan BBC programme ever. Why was all the bile heaped upon us poor sceptics? Now I see. President Bush does not plan to be humiliated by our Tony next week and has been keeping his powder dry.
Richard Harvey says
Referring to David Wojick’s comment — post #100 : “If a study provides new evidence, or a new mechanism, for natural variability, that is consistent with what we observe, then that supports the theory that the present warming is natural, not caused by human emissions”.
How can you come to this conclusion?? The fact that we may find a new mechanism for natural variability does *not* mean that this will necessarily explain the current surface temperature trend, let alone support the theory that the present warming is natural; it only suggests that climate is more complex than we thought. So this leap of faith is extremely puzzling to me. Current climate sensitivity studies comparing model results and the paleo record suggest quite strongly that most feedbacks have probably been included in models. So it is highly unlikely (though not impossible) that we’ll suddenly discover a feedback mechanism which will be able to counter the greenhouse radiative forcing currently underway.
So far, *no* contrarian to the anthropogenic nature of global warming has come up with convincing arguments. I second Steve Bloom’s request that you provide hard evidence of your claims about natural variability.
Finally, obsessing about the fact that we should wait until more evidence about climate change comes in is foolish and irresponsible. The evidence so far accumulated is sufficient to justify action.
GMT says
Extagen @ #95: having followed the debate for some years, it appears that the issue of causality is merely the fallback for the contrarians. They’ve finally had to give up arguing that it wasn’t happening at all.
Actually, there was a very amusing period about six years ago where some contrarians would argue the two cases alternately, i.e., that it wasn’t happening, but it wasn’t our fault anyway.
Furthermore, it was at about that time that the Coal Lobby’s astroturf group “The Greening Earth Society” flirted with a third way: that it was indeed happening and it was going to feed the poor and make us all rich. I wonder what happened to them?
Science, as I’m sure all here are aware but some would never admit, is only one dog in this fight.
[Response: The three main types of sceptics are called trend sceptics, attribution sceptics and impact sceptics. I discuss these three types and their main arguments in this article. -stefan]
Manny says
#106 – It’s happening? We’re currently in a 7-year global cooling trend according to the satellite data.
David Wojick says
Re#106. Different skeptics argue different things. The argument for the anthro GHG theory of dangerous warming is complex, involving several big steps and numerous subarguments. Different skeptics question different steps in that argument. For example, Fred Singer questions the warming in the surface record, so argues that there is no significant warming to explain, or fear. At the other extreme, Pat Michaels accepts the anthro GHG theory but argues that the warming is not dangerous. He claims the warming in the next 100 years will be about the same as in the last 100. Greening Earth’s argument was that (1) warming is net beneficial to humans and (2) the CO2 increase is beneficial to all life because atmospheric CO2 is the world’s food supply. There are lots of other skeptics, who object to other specific moves in the logic of the anthro ghg theory. It would be fun to map them.
But these are all scientific issues. The complexity of the climate system, including our role, makes the science complex. The complexity of the science makes the logic of the anthro ghg dangerous warming theory complex, which in turn makes the debate wonderfully complex. It is a fascinating situation, probably unique in history.
Richard Harvey says
Re: #108:
Mr. Wojick, you exaggerate the complexity of the issue, as if we basically don’t know *anything*. This allows you and other skeptics to argue for more coal and oil burning until we know for sure. But this will *never* happen, as we all know. The key is that we know enough now to have big *concerns* about dangerous anthropogenic influence.
Lynn Vincentnathan says
RE#107, yeah it may be cooling up there where the satellites are, but down here under the greenhouse blanket it’s getting a bit too snug. Maybe (this is just a guess) because the greenhouse blanket keeps the heat close to earth, the heat isn’t being radiated up to the satellites as much anymore.
Brian Jackson says
Re #107 (Manny): Presumably you are referring to the MSU channel 2 data, which measures temperatures throughout the troposphere. If so, which analysis are you using? Mears and Wentz? Vinnikov and Grody? Fu et al? Spencer and Christy? If the latter, is it the old version 5.1, or the new version 5.2, which has just been released and shows a higher trend since 1979?
I also find it interesting that you specify 7 years. Why not 6 years? Or 8 years? It wouldn’t have anything to do with the very strong El Nino of 1998 would it? You know, the one that caused a big 0.5 deg C spike in the tropospheric temperature that year?
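As an editorial aside on this last point (a minimal sketch using synthetic numbers, not any particular MSU analysis): an ordinary least-squares trend fitted to a short, noisy series depends strongly on whether the window starts at a large spike such as the 1998 El Nino.

    import numpy as np

    rng = np.random.default_rng(0)

    years = np.arange(1979, 2006)                              # synthetic annual series, 1979-2005
    temps = 0.015 * (years - 1979) + rng.normal(0.0, 0.08, years.size)
    temps[years == 1998] += 0.5                                # add a 1998 El Nino-like spike

    for start in (1979, 1998, 1999):
        sel = years >= start
        slope = np.polyfit(years[sel], temps[sel], 1)[0]       # deg C per year
        print(f"trend {start}-2005: {slope * 10:+.2f} C/decade")

Starting the fit exactly at the spike pulls the computed trend sharply downward, even though the underlying synthetic series warms throughout.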
Manny says
(thank you moderators for finally posting my comments)
re: #111
Here’s the msu data:
http://www.ghcc.msfc.nasa.gov/MSU/msusci.html
http://www.nsstc.uah.edu/data/msu/t2lt/tltglhmam_5.1
Feel free to chart it for yourself since jan98. Add a linear trendline, and you’ll see the cooling trend.
“I also find it interesting that you specify 7 years. Why not 6 years? Or 8 years? ”
Why indeed? Why 100 years? Why since 1400? Why does any one choose one particular starting point over another?
Maybe global temps peaked in 1998 and we’re seeing a falling off now. If so, 1998 would be a key climate historical point just as 1940 appears to be (when another temp downturn occurred).
Klaus Flemloese, Denmark says
Soon and Baliunas in favor of global warming!
Willie Soon and Sallie Baliunas conclude in their paper from 2003 “Proxy climatic and environmental changes of the past 1000 years” (SB2003), that
“However, considered as an ensemble of individual expert opinions, the assemblage of local representations of climate establishes both the Little Ice Age and the Medieval Warm Period as climatic anomalies with worldwide imprints …
Across the world, many records reveal that the 20th century is probably not the warmest nor a uniquely extreme climatic period of the last millennium.”
Properly analyzed, the paper supports no conclusion about the existence of the LIA and the MWP as worldwide phenomena, and their data actually indicate that the 20th century temperature is the highest of the last 1000 years. This contradicts their own conclusions.
The following remarks are given totally within the framework of their paper, without questioning in any way the purpose of the paper, the methods, the data definitions, the data, their observations on the proxies, or their link between an anomaly and the LIA/MWP. However, I disagree with the conclusions for the following statistical reasons.
Soon and Baliunas try to show, by asking the following questions of the individual proxies (using 50-year averages), that an indication of the LIA and the MWP can be found in most proxies and that the 20th century is not the warmest period of the last 1000 years. A summary of their findings is given in the following lines:
Question A: Is the 20th century warmer than the previous 900 years?
Out of 104 proxy records, they found "yes" in 22 cases and "no" in 82 cases.
Question B: Is an anomaly identified during 800-1300?
Out of 125 proxy records, they found "yes" in 123 cases and "no" in 2 cases.
Question C: Is an anomaly identified during 1300-1900?
Out of 115 proxy records, they found "yes" in 108 cases and "no" in 7 cases.
Without any statistical judgment, these results look at first glance like a confirmation of their conclusion. However, in all three cases it is a matter of misusing the maximum or minimum of series of measurements of different lengths, without defining a null hypothesis and without testing it at all.
Please note that an anomaly in a statistical sense is an observation which is smaller than the p% percentile or larger than the (1-p)% percentile, where p<50%. In a given data set an anomaly is found by taking the maximum or the minimum of the observations over a given number of years. This is the key to understanding the methods used by Soon and Baliunas.
In respect of question A, it is possible to show, under very general conditions, both in theory and by simulation, that if there is no trend in the temperature, then the probability of observing a higher temperature during the period 1000-1850 than during the period 1900-1950 is close to 94.4% = 850/(50+850).
Using the binomial distribution with 104 trials and p=0.944, it follows that the probability of observing "no" in only 82 out of 104 trials is less than 0.0001%. This means that we have observed significantly fewer "no"s than expected. The 95% confidence interval for p equals [70.5%; 81.1%]. This indicates that the 20th century is the warmest during the last 1000 years.
In respect of questions B and C, it follows from simulation experiments, using a 1% percentile to define an anomaly and red noise with coefficient 0.2, that the probability of observing an anomaly for one proxy over 450 or 550 years respectively is roughly 98%. Using the binomial distribution, it follows that the observed values under questions B and C are as expected and in no way significant.
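For readers who want to check the arithmetic in the question-A argument, here is a minimal sketch (assuming only the 94.4% null probability and the 82-of-104 "no" count quoted above; the red-noise simulations for questions B and C are not reproduced):

    from scipy.stats import binom

    n = 104                      # number of proxies answering question A
    p_no = 850.0 / 900.0         # null probability of a "no" (about 94.4%)
    observed_no = 82

    expected_no = n * p_no                       # roughly 98 "no"s expected under the null
    p_tail = binom.cdf(observed_no, n, p_no)     # P(82 or fewer "no"s) if the null were true

    print(f"expected 'no' count: {expected_no:.1f}")
    print(f"P(<= {observed_no} 'no's under the null): {p_tail:.1e}")

The tail probability comes out vanishingly small, consistent with the "less than 0.0001%" figure above.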
There is no basis to support their conclusion from a statistical point of view in the data they have used. However, their conclusion might be correct, but for other reasons than they have presented in SB2003.
It is unbelievable that the WSJ and others are using this paper as an argument against global warming, when it actually argues in favor of global warming!
I have worked this analysis out in more detail, but have no weblog on which to publish it. Writing a scientific paper about it is not worthwhile, since SB2003 has previously been discredited. However, as an educational example in statistics and probability theory, the SB2003 paper might be useful.
Reference:
Willie Soon, Sallie Baliunas: Proxy climatic and environmental changes of the past 1000 years. Climate Research 23:89-110, 2003.
Mike Mann et al: On Past Temperatures and Anomalous Late-20th Century Warmth, Eos, Vol. 84, No. 27, page 256, 8 July 2003.
Julius Solnes: Stochastic Processes and Random Vibration: Theory and Practice, page 34, Wiley 1977.
Thomas Mikosch et al: Modeling Extremal Events, page 307, Springer 1999.
SteveF says
Interesting letter on the politics side of things:
http://www.democrats.reform.house.gov/Documents/20050701123028-71010.pdf
Hans Erren says
re 106
Stefan, it’s intriguing that you quote a climate sensitivity of 3K as the central value, whereas radiative equilibrium comes to no more than 1K. Climate models have a range of 1K to 3K for CO2 doubling; the 1.5 to 4.5K range depends entirely on the assumed slowness of the system, with equilibrium times of several centuries!
The CSM model as used recently by CKO in the Dutch Challenge Project has a reported sensitivity of 1K for CO2 doubling.
[Response: S-B is only valid for black bodies. Since the Earth is not a black body, it isn’t a good estimate for the surface air temperature. This has been discussed here. The model range for the current IPCC round is from 2.7 to 4.1 deg C for a doubling of CO2. -gavin]
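To make the black-body point concrete, here is a back-of-envelope sketch of the no-feedback (Stefan-Boltzmann) response, assuming the standard ~3.7 W/m2 forcing for a doubling of CO2 and an effective emission temperature near 255 K; it deliberately omits all feedbacks, which is exactly why it comes in well below the model range quoted in the response above.

    SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
    T_EMIT = 255.0        # Earth's effective emission temperature, K
    DF_2XCO2 = 3.7        # approximate forcing for doubled CO2, W/m^2

    # Linearized black-body response: dF = 4 * sigma * T^3 * dT
    lambda_bb = 4.0 * SIGMA * T_EMIT ** 3     # about 3.8 W m^-2 K^-1
    dT_no_feedback = DF_2XCO2 / lambda_bb     # roughly 1 K

    print(f"no-feedback response: {dT_no_feedback:.2f} K per CO2 doubling")

Water-vapour, lapse-rate, ice-albedo and cloud feedbacks are what lift this ~1 K figure toward the 2.7-4.1 deg C range cited above.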
dave says
Re: #100, 106 Stefan’s comments
Stefan’s comments and the linked paper are very informative. The subject of “natural variability” deserves a separate post and comment thread all by itself. Invariably, no matter what the subject, there are comments citing some unspecified natural variability hypothesis to explain the current warming. Part of the “official” climate change language of the current government is “climate variability”. I hope RC will address the issue directly and ask those “skeptics” to put their money where their mouth is, so to speak.
Mike Doran says
Two main responses.
One, the idea that cosmic ray flux is a source of climate variability doesn’t mean that CO2 doesn’t equally have an electrical meaning. It does. CO2 impacts conductivity, with surface lows and the gas exchanges which occur as a result.
Two, complexity strongly suggests responsive design. Which brings us back to Gaia, to ecology, to stewardship.
David Wojick says
Re#109 et al, I am not saying we “know nothing,” quite the opposite. We now have $40 billion worth of new knowledge. This is a lot given that the US cancer research program is about $5 billion per year. The point is that in science new knowledge often increases uncertainty among competing theories. Astronomy and cosmology are in just that boat right now. Results from Hubble and other new instruments have called much of the theoretical framework into question, so hypotheses have blossomed.
By far the biggest result from the 15 year climate research program is the discovery of natural variability. When the program started (along with the FCCC treaty) it was believed that climate was relatively stable and so anthro GHGs must be causing the warming in the surface record. That stability assumption was found not to be true and the research has become quite complex as a result.
Below are two quotes that make the point nicely. These two reports are good studies in the research issues, even today. They can be read online. They are much better than the 1995 and 2001 IPCC reports, because the latter are really arguments for the anthro GHG dangerous warming theory.
“The evidence of natural variations in the climate system — which was once assumed to be relatively stable — clearly reveals that climate has changed, is changing, and will continue to do so with or without anthropogenic influences.” (Dec-Cen Variability, Summary)
Decade-to-Century-Scale Climate Variability and Change: A Science Strategy (1998) 160 pages. Usually referred to as “Dec-Cen Variability.”
http://www.nap.edu/catalog/6129.html
“Climate research on decade to century (“dec-cen”) timescales is relatively new. Only recently have we obtained sufficient high-resolution paleoclimate records, and acquired faster computers and improved models allowing long-term simulations, to examine past change on these timescales. This research has led to genuinely novel insights, most notably that the past assumption of a relatively stable climate state on dec-cen timescales since the last glaciation is no longer a viable tenant. The paleorecords reveal considerable variability occurring over all timescales, while modeling and theoretical studies indicate modes of internal and coupled variability driving variations over dec-cen timescales as well.
“Thus, dec-cen climate research is only at the beginning of its learning curve, with dramatic findings appearing at an impressive rate. In this area even the most fundamental scientific issues are evolving rapidly. Adaptability to new directions and opportunities is therefore imperative to advance understanding of climate variability and change on these timescales.” (Pathways, p. 129)
Global Environmental Change: Research Pathways for the Next Decade (1999) 621 pages. Called “Pathways.”
http://www.nap.edu/catalog/5992.html
Dano says
I heartily agree with Klaus (the first comment in this thread is my proof :o) ).
S&B didn’t bother to mention the multiproxy studies, they just cherry-picked the local studies that supported their contention. Plus, using a 50-year time frame effectively narrows the warming episodes.
Best,
D
David Wojick says
Re#101 (and part of the #100 Response). To see where solar-climate research is going, here are two fairly recent workshops. Some of the presentations provide references. The basic point is that we do not understand the sun-climate link, certainly not enough to rule it out as the main driver of the 20th Century surface temperature record.
The first workshop is “Solar Variability on Decadal to Millennial Timescales: Influences on Earth Climate Change and Prediction,” hosted by the Universities Space Research Association. USRA includes 95 universities that do space research, mostly from the National Aeronautics and Space Administration. USRA says it has “launched an initiative to develop the means to increase understanding and improve prediction of solar variability and its effects on Earth, especially its climate.”
http://www.usra.edu/hq/meetings/scw/
According to USRA, the research issues are broad. They say, “This multi-disciplinary workshop was designed to open communication and forge collaborations between the disparate realms of policy and science, and to provide a platform for scientists of varied fields (climatology, paleoclimatology, atmospheric chemistry, solar and stellar astrophysics, etc.) to present their measurements and models in an effort to more precisely define problems scientists face when trying to show the causal link of multi-decadal variability of the Sun’s output and Earth’s climate. Uncertainties remain not only regarding the solar measurements, but also on the climate response to solar changes by virtue of the complexities of the climate system.”
Another major workshop is “Decadal Variability in the Sun and Climate,” the annual meeting of the Solar Radiation and Climate Experiment. SORCE is a NASA-sponsored satellite mission that provides new measurements of incoming x-ray, ultraviolet, visible, near-infrared, and total solar radiation.
http://lasp.colorado.edu/sorce/2004ScienceMeeting/Meeting_Review.html
SORCE explains the research issue this way: “Discerning the role of the Sun in climate variations on time scales of decades is a challenging task. That climate forcing is well correlated with variations in the Sun’s energy output is now relatively well established for total and UV irradiance using high-precision, space-based solar measurements spanning more than two decades. When the Sun is near the maximum of its activity cycle, it is about 0.1 percent brighter overall, with much greater changes at UV wavelengths. SORCE measures these variations with unprecedented accuracy, precision, and spectral coverage across the UV, visible, and IR. But the climate response to these measured solar variations presents a major puzzle.”
[Response: We all agree that there are interesting issues in solar-climate links, and there is a large uncertainty in how to calibrate solar activity to climate forcings. However, there is no evidence for a large role for solar forcing over the last 50 years or so. No indices related to solar activity show any significant rise over this period. -gavin]
Stephen Berg says
Re: #114,
Thanks for the link!
What an exceptional letter, cutting to the heart of what is going on in the US Government. What is really occurring is a sort of inquisition, trying to smear and destroy the careers of outstanding scientists, perpetrated by the fossil fuel industry. Ross Gelbspan’s The Heat is On and Boiling Point provide even more evidence of this “crime taking place,” (as Mr. Gelbspan says so succinctly).
As far as I’m concerned, the debate is pretty much over. Climate change is happening. The numerous threads on RealClimate and scientific reports that seem to be a daily occurrence are stacking the evidence heavily in favour of this conclusion. Therefore, the time for action is now before our existence on this planet is put in jeopardy.
Ethan says
Re comment 118 ” Results from Hubble and other new instruments have called much of the theoretical framework into question, so hypotheses have blossomed.”
This is off topic, but this claim is wrong. These results have vastly narrowed the uncertainties in the parameters used in the theoretical framework, and spawned a large number of further directions for research. They did not cast the previous theoretical framework into doubt. On the contrary, they allowed us to rule out many previously viable alternatives.
If this is your best analogy then your argument is in trouble.
Hans Erren says
re: 115,
Gavin, I am a bit puzzled about your response
Firstly, the Stefan-Boltzmann equation contains an emissivity term, which has been accounted for; see e.g. http://hanserren.cwhoutwijk.nl/co2/sb.htm
Secondly, the CSM model used by Dutch climatologists has a proven sensitivity of 1 K/2xCO2; why is this model not considered “state of the art”?
Thirdly, the analysis of GCM models by Lawrence Livermore has a range of 1 to 3 deg C for a doubling of CO2.
[Response: First, SB even using an emissivity still doesn’t account for any feedbacks – which we know exist. Secondly, CSM is not state of the art because it was superseded by CSM2 and now by CSM3. And thirdly, and most importantly, you confuse the transient climate response (TCR) at the time of CO2 doubling in the 1% increasing CO2 CMIP runs with the equilibrium climate response to CO2 doubling (which is always larger). You should assume that it is the equilibrium sensitivity (IPCC likely range 1.5-4.5, current models (for the IPCC AR4) range 2.7-4.1) that people are talking about unless they specifically state that they aren’t. – gavin]
David H says
Here are a couple of quotes from a “peer reviewed” report published a few hours ago.
para 22 “We sought evidence that refuted the claims of McIntyre and McKitrick, but have not come across any detailed rebuttal.”
Para 23 “We are in no position to determine who is right and who is wrong in the growing debate on the hockey stick. If there are historical periods of marked temperature increase, it seems to us it is important to know why these occurred. Overall, we can only urge that the issue is pursued in the next IPCC Assessment.”
I say “peer reviewed” because it is a committee report from the UK’s House of Lords [i.e. Peers -gavin] at: http://www.publications.parliament.uk/pa/ld200506/ldselect/ldeconaf/12/12i.pdf
David Wojick says
Re #120, you say “However, there is no evidence for a large role for solar forcing over the last 50 years or so. No indices related to solar activity show any significant rise over this period.”
I am surprised you would attempt to dismiss an entire scientific community with a simple formula like this, but I notice they appear frequently on these pages.
[Response: I have no idea what you mean here. I have published a number of papers on solar-climate connections and am involved in a number of ongoing related projects. Presumably I am therefore part of the community that I am dismissing? You confuse interest in solar connections with a dismissal of anthropogenic forcings. These things are actually independent. – gavin]
First, the surface temperature record has not increased for the last 50 years, only half of that, before which it cooled (while CO2 levels went up). Second the solar parameter need not increase with warming. We are looking for an indirect mechanism which could as easily be driven by a solar parameter decrease, such as a reduced solar wind. Third, since an indirect effect by definition involves other parameters, this is not a simple correlation exercise. Any more than CO2 forcing is, since CO2 levels do not correlate with the globally averaged surface temperature record.
The reason we are looking is that there is lots of strong long-term statistical evidence for a sun-climate link. Since we do not understand the mechanism, we do not know what to look for over the recent decadal timescale. This is not a lack of evidence, it is a lack of understanding. It is probably the biggest uncertainty in climate science today.
As far as there not being any correlation over the last 50 years, I am quite sure that is not true. However, I do not follow the literature that closely, because I am tracking the whole of the science. But I recently came across this:
Ref: Le Mouel, Jean-Louis, Vladimir Kossobokov, and Vincent Courtillot, 2005. On long-term variations of simple geomagnetic indices and slow changes in magnetospheric currents: The emergence of anthropogenic global warming after 1990? Earth and Planetary Science Letters, Vol. 232, No. 3-4, pp. 273-286, April 15, 2005.
It is amusing that they think demonstrating a solar explanation ONLY through 1990 is an argument for anthropogenic warming. The IPCC SAR claimed that solar played no role for the last 145 years. According to these folks (I do not claim they are right) it explains everything except the last 15 years. The scientific trend, as opposed to the temperature trend, is clearly in the direction of a complete solar explanation. That is my only point in all of this, except that other communities are working on other explanations as well. Thus the uncertainty is increasing.
Hugh says
#ref. 124 Thanks for that link David H
I find it hard to believe that the Lords Committee felt it appropriate to make specific comment on ‘the hockey-stick’ and were prepared to listen to Ross McKitrick’s evidence about its uncertainty (sic) in person…without inviting comment from the actual author?
Any comments guys?
Dano says
Re: 124 (David H).
[Mildly edited to prevent inadvertent embarrassment – gavin]
You forgot to include the last line of the para 22 you like so much: One curious feature of the debate over Professor Mann’s time series is that the critics appear to ignore other studies which secure similar hockey stick Pictures. [footnote omitted]
HTH,
Dano
Richard Harvey says
Mr. Wojick’s comment (re: #125):
“First, the surface temperature record has not increased for the last 50 years[…]”
and :
“[…] CO2 levels do not correlate with the globally averaged surface temperature record”
These statements are akin to pretending that the Earth is flat. Since Mr. Wojick is neither a climatologist nor an atmospheric scientist, but only an observer of the science of climate, I’d suggest that he refrain from such gratuitous phrases, unless he can come up with hard evidence for them, which he (or anybody else) has not done so far.
Richard Harvey says
Re: #122 by Ethan: I also was appalled at the claim that Hubble has increased uncertainty in the astrophysical field. That statement is utterly false. We now know the age of the universe to within a few hundred million years…
David H says
Re # 126 (Hugh)
They invited submissions on the Internet – anyone and everyone was invited to submit. If you read the published evidence many did on all sides of the debate. I wish Prof. Mann had testified. Reading between the lines I think they asked and got the usual brush off but if I am wrong I apologise to Prof. Mann. He only ever answered one of my emails!
If you read the evidence on their web site you can see some “schoolboy howlers” from the so-called experts. Guess who said “Basically, if there was no CO2, we would be as cold as Mars or somewhere like that, and we would not have human life”. You can also see spin at its very best, for example: “Then, if you get to the middle of the century, you find the temperature rise stops somewhat from 1950-1970 …”?
Re #127 (Dano)
No, I was being selective you can all read the full report. But many of the so called other independent studies are far from independent and are also under scrutiny for lack of any audit trail. However they are not the IPCC trademark.
[Response: With all due respect to the House of Lords, putting up a website and expecting the world to come visit is not really a sufficient strategy to get a well rounded impression of the science. -gavin]
Steve Bloom says
Re #124: I’ll stifle the urge to make fun of some of the titles of these folks. In any case, this was the Lords, not a very representative body to begin with, and more to the point it was the Economics Committee (a nest of Thatcherites, perhaps?) rather than the Science Committee. From a quick perusal of their web page, the latter seem strangely supportive of stronger action on global warming. Also, note that the advisor to the Economics Committee for this report was David Pearce, an economist, and that, based on the CVs provided, none of the Economics Committee members seem to have any scientific expertise. The evidence-gathering process seems also to have been a little over-solicitous of skeptics. I know we’re all shocked.
A quick look at the BBC World Service site at http://news.bbc.co.uk/2/hi/uk_news/politics/4655373.stm finds the following fascinating headline and sub-headline:
‘Wakeham sees upside from warming
‘The ex-minister chairs the Lords economic affairs committee
Siberia will become a “rather nice place to live” and Europe will benefit “on balance” from global warming, says ex-Enron director Lord Wakeham.’
Enron, eh? It’s a small world after all.
Re #125: It’s a little strange to cite a study (actually just the abstract) in support of your position but reject the authors’ conclusions, but I suppose consistency is the hobgoblin of small minds. To point out one obvious and very large flaw in your reasoning, one would have to look carefully at the effects of aerosols (global dimming) before drawing any firm conclusions that insolation changes rather than anthropogenic factors were the principal cause of the warming from 1900 to 1990, even given an apparent correlation. There’s nothing in the abstract to indicate that aerosols were considered. Finally, just who are these other “communities” you refer to? Name some names, please.
David H says
Gavin,
Re your comment in #130, are you suggesting that only the contrarians have a well-organised network to inform each other of what’s going on in the climate debate and time their press releases for maximum impact? The witnesses included Dr Pachauri, Sir John Houghton, Sir David King etc, who obviously took the enquiry very seriously, and I am willing to wager they made sure you all knew about it.
Steve,
Re 131, I think you are whistling in the dark. You need to understand our peculiar system of government, then you should read all their CVs and bear in mind that all three major parties in the UK are very pro-Kyoto. What has happened here is the first independent look at Kyoto in the UK. The report overall is not contrarian but, in my view correctly, points out the weakness in the Kyoto process. Its conclusions will have been factored into the shifting stance of Tony Blair at the G8.
They quote one witness:
“Consensus is the stuff of politics, not science. Science proceeds by observation, hypothesis and experiment. Professional scientists rarely draw firm conclusions from a single article, but consider its contribution in the context of other publications and their own experience, knowledge and speculations”.
and they conclude:
“We are concerned that there may be political interference in the nomination of scientists whose credentials should rest solely with their scientific qualifications for the tasks involved”.
I am sure none of the professionals here will disagree.
Dano says
But many of the so called other independent studies are far from independent and are also under scrutiny for lack of any audit trail. [#130]
Using the underlying premise of Steve Bloom’s post above (131), I must point out that there is no evidence given for this assertion. Please, in the future, give details of studies that are so-called not independent, and be specific in pointing out the methodology or conclusion that suffers, and be so good as to show what the conclusion would be if it were so-called independent (presumably from a study that is so-called independent).
I see a lot of such assertions, none of which give specifics. I invite you to be the first.
D