These days, when global warming inactivists need to trot out somebody with some semblance of scientific credentials (from the dwindling supply who have made themselves available for such purposes), it seems that they increasingly turn to Roy Spencer, a Principal Research Scientist at the University of Alabama in Huntsville. Roy does have a handful of peer-reviewed publications, some of which have quite decent and interesting results in them. However, the thing you have to understand is that what he gets through peer review is far less threatening to the mainstream picture of anthropogenic global warming than you’d think from the spin he puts on it in press releases, presentations and the blogosphere. His recent guest article on Pielke Sr’s site is a case in point, and provides the fodder for our discussion today.
Actually, Roy has been pretty busy dishing out the confusion recently. Future posts will take a look at his mass-market book on climate change, entitled Climate Confusion, published last month, and his article in National Review. We’ll also dig into some of his peer-reviewed work, notably the recent paper by Spencer and Braswell on climate sensitivity, and his paper on tropical clouds which is widely misquoted as supporting Lindzen’s Iris conjecture regarding stabilizing cloud feedback. But on to today’s cooking lesson.
They call it "Internal Radiative Forcing." We call it "weather."
In Spencer and Braswell (2008), and to an even greater extent in his blog article, Spencer tries to introduce the rather peculiar notion of "internal radiative forcing" as distinct from cloud or water vapor feedback. He goes so far as to say that the IPCC is biased against "internal radiative forcing," in favor of treating cloud effects as feedback. Just what does he mean by this notion? And what, if any, difference does it make to the way IPCC models are formulated? The answer to the latter question is easy: none, since the concept of feedbacks is just something used to try to make sense of what a model does, and does not actually enter into the formulation of the model itself.
Clouds respond on a time scale of hours to weather conditions like the appearance of fronts, to oceanic conditions, and to external radiative forcing (such as the rising and setting of the Sun). Does Spencer really think that a subsystem with such a quick intrinsic time scale can just up and decide to lock into some new configuration and stay there for decades, forcing the ocean to be dragged along into some compatible state? Or does he perhaps mean that slow components, like the ocean, modulate the clouds, and the resulting cloud radiative forcing amplifies or damps the interannual or decadal variability? The latter sounds a lot like a cloud feedback to me — acting on natural variability whose root cause is in the ponderous motions of the ocean.
Think of it like a pot of water boiling on a stove. What ultimately controls the rate of boiling, the setting of the stove knob or the turbulent fluctuations of the bubbles rising through the water? Roy’s idea about clouds is like saying that you should expect big, long-lasting variations in the boiling rate because sometimes all the steam bubbles will decide to form on the left half of the pot leaving the right half bubble-free — and that things will remain that way despite all the turbulence for hours on end.
The only sense that can be made of Spencer’s notion is that there is some natural variability in the climate system, which in turn causes some natural variability in the radiation budget of the planet, which in turn may modify the natural variability. Is this news? Is this shocking? Is this something that should lead us to doubt model predictions of global warming? No — it is just part and parcel of the same old question of whether the temperature pattern of the 20th and early 21st centuries can be ascribed to natural variability without the effect of anthropogenic greenhouse gases. The IPCC, among others, nailed that, and nobody has demonstrated that natural variability can do the trick. Roy thinks he has, but as we shall soon see, it’s all a matter of how you run your ingredients through the food processor.
The impressive graph that isn’t
So here’s what Roy did. He took two indices of interannual variability: the Southern Oscillation Index (SOI), which is a proxy for El Niño, and the Pacific Decadal Oscillation Index (PDOI). He formed an ad-hoc weighted sum of these indices, and then multiplied by an ad-hoc scaling factor to turn the resulting time series into a time series of radiative forcing in Watts per square meter. Then he used that time series to drive a simple linear globally averaged mixed layer ocean model incorporating a linearized term representing heat loss to space. And voila, look what comes out of the oven!
Roy is really taken with this graph. So much so that he uses it as a banner near the top of his climate confusion web site under the heading "Could Global Warming Be Mostly Natural?" But is it as good as it looks? To find out, I programmed up his model myself, but chose the adjustable parameters to be compatible with observational constraints on their magnitudes. Here’s what I came up with:
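In skeleton form, the model is just a slab of ocean whose temperature anomaly T obeys C dT/dt = F(t) − λT, with F(t) the scaled index time series, λ a linearized damping representing heat loss to space, and C = ρ·cp·h the heat capacity of a mixed layer of depth h. For anyone who wants to follow along at home, here is a minimal sketch of such a model in Python. The scaling a = 0.27 W/m2 is the data-constrained value discussed below; the index weighting c and the damping λ are illustrative round numbers of my choosing, and you have to supply the SOI/PDOI series yourself.

```python
import numpy as np

# Single-box mixed-layer energy balance model:
#     C * dT/dt = a * (SOI + c * PDOI) - lam * T
# with C = rho * cp * h, stepped forward with a simple Euler integration.

RHO = 1027.0             # seawater density, kg/m^3
CP = 3990.0              # seawater specific heat, J/(kg K)
SECONDS_PER_YEAR = 3.156e7

def run_model(soi, pdoi, a=0.27, c=1.0, h=50.0, lam=1.5, t0=0.0,
              dt_years=1.0 / 12.0):
    """Return the temperature anomaly (K) driven by the index series.

    a   : scaling from index units to W/m^2
    c   : relative weight of the PDOI in the forcing (illustrative)
    h   : mixed layer depth, m
    lam : radiative damping to space, W/(m^2 K) (an assumed round number)
    t0  : initial temperature anomaly, K
    """
    heat_capacity = RHO * CP * h              # J/(m^2 K)
    dt = dt_years * SECONDS_PER_YEAR          # time step, s
    temp = t0
    out = np.empty(len(soi))
    for i, (s, p) in enumerate(zip(soi, pdoi)):
        forcing = a * (s + c * p)             # W/m^2
        temp += dt * (forcing - lam * temp) / heat_capacity
        out[i] = temp
    return out
```

The only physics in there is a heat reservoir and a linear leak to space; everything interesting lives in the knobs, which is precisely the point of the cooking lessons that follow.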
So why does Roy’s graph look so much better than mine? As Julia Child said, "It’s so beautifully arranged on the plate – you know someone’s fingers have been all over it."
A cooking lesson
Lesson One: Jack up the radiative forcing beyond all reason. Reliable data on decadal variability of the Earth’s radiation budget are hard to come by, but to provide some reality check I based my setting of the scaling factor between radiative forcing and the SOI/PDOI index on the tropical data of Wielicki et al. (2002) (as corrected in response to Trenberth’s criticism here). The data is shown below. On interannual time scales, it’s mostly the net top-of-atmosphere flux that counts, so the curve to look at is the green NET curve in the bottom-most panel.
Except for the response to the Pinatubo eruption (the pronounced dip during 1991), the fluctuations are on the order of 1 W/m2 or less once you smooth on an annual time scale. Based on this estimate and on the typical magnitude of Spencer’s combined SOI/PDOI index, I chose a scaling factor (Roy’s a) of 0.27 W/m2. In his article, Roy uses a value ten times as big, but then he partly covers up how large the annual radiative forcing is by showing only the five-year averages. With Roy’s value of the scaling coefficient, the annual radiative forcing looks like this:
which is clearly grossly exaggerated compared to the data. Moreover, in my own estimate of the scaling factor I tried to match the overall magnitude of the fluctuations, whereas restricting the estimate to that part of the observed fluctuation which correlates with the SOI/PDOI index could reduce the factor further. Finally, even insofar as some part of climate change could be ascribed to long term cloud changes associated with the PDOI and SOI, one cannot exclude the possibility that those changes are driven by the warming — in other words a feedback. Still, let’s go ahead and ignore all that, and put in Roy’s value of the scaling coefficient, and see what we get.
So here’s our cooked graph as of Lesson 1 of the recipe:
Lesson Two: Use a completely unrealistic mixed layer depth. OK, so we’ve goosed up the amplitude of the temperature signal to where it looks more impressive, but the wild interannual swings in temperature look completely unlike the real thing. What to do about that? This brings us to the issue of mixed layer depth. The mixed layer depth determines the response time of the model, since a deeper mixed layer has more mass and takes longer to heat up, all other things being equal. The actual ocean mixed layer has a depth on the order of 50 meters. That’s why we got such large-amplitude, high-frequency fluctuations in the previous graph. What value does Roy use for the mixed layer depth? One kilometer. To be sure, on the centennial scale, some heat does get buried several hundred meters deep in the ocean, at least in some limited parts of the ocean. However, to assume that all radiative imbalances are instantaneously mixed away to a depth of 1000 meters is oceanographically ludicrous. Let’s do it anyway. After all, as Julia Child said, "In cooking you’ve got to have a ‘What the Hell’ attitude." Here’s the result now:
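To put numbers on that response-time claim: the relaxation time of the slab is τ = C/λ = ρ·cp·h/λ. A quick back-of-envelope check, using the same assumed damping λ = 1.5 W/m2/K as in the sketch above:

```python
rho, cp, lam = 1027.0, 3990.0, 1.5    # kg/m^3, J/(kg K), W/(m^2 K) (lam assumed)

for h in (50.0, 1000.0):              # mixed layer depth, m
    tau = rho * cp * h / lam          # relaxation time C/lam, in seconds
    print(f"h = {h:6.0f} m  ->  tau = {tau / 3.156e7:5.1f} years")
# h =     50 m  ->  tau =   4.3 years
# h =   1000 m  ->  tau =  86.6 years
```

A factor of twenty in depth buys a factor of twenty in sluggishness, which is exactly what smooths the wild interannual swings into something that superficially resembles the instrumental record.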
Lesson Three: Pick an initial condition way out of equilibrium. It looks better, especially in the latter part of the century. But it doesn’t get the trend in the early century right. Gotta keep cooking! The essential ingredient this time is the choice of initial condition for the model. If we initialize the anomaly at -0.4 °C, which amounts to an assumption that the system is wildly out of equilibrium in 1900, then this is what we get:
Now, it’s finally looking ready to serve up to the unsuspecting diners. Note that it’s the adoption of an unrealistically large mixed layer depth that allows Roy to monkey with the early-century trend by adjusting the initial condition. With a more realistic mixed layer depth, changing the initial condition on temperature anomaly only leads to a rapid adjustment period affecting the first few years.
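In terms of the sketch above, the whole recipe so far boils down to one call (the index series are whatever you have loaded; a is ten times the data-constrained 0.27 from Lesson One):

```python
# Roy's recipe, per the three lessons (soi and pdoi arrays loaded elsewhere):
cooked = run_model(soi, pdoi,
                   a=2.7,      # Lesson One: forcing scaled up tenfold
                   h=1000.0,   # Lesson Two: a one-kilometer "mixed layer"
                   t0=-0.4)    # Lesson Three: start far from equilibrium
```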
My graph is not absolutely identical to Roy’s, because there are minor differences in the initialization, the temperature offset used to define anomalies, and the temperature data set I’m using as a basis for comparison. My point, though, is that this is not an exacting recipe: it’s hash — or Hamburger Helper — not soufflé. Following Roy’s recipe, you can get a reasonable-looking fit to data with very little fine-tuning because Roy has given himself a lot of elbow room to play around in: you have the choice of any two variability indices among dozens available, you make an arbitrary linear combination of them to suit your purposes, you choose whatever mixed layer depth you want, and you finish it all off by allowing yourself the luxury of diddling the initial condition. With all those degrees of freedom, I daresay you could fit the temperature record using hog-belly futures and New Zealand sheep population. Anybody want to try?
Postlude: Fool me once …
Why am I not surprised about all this shameless cookery? Perhaps it’s because I remember this 1997 gem from the front page of the Wall Street Journal, entitled "Science has Spoken: Global Warming is a Myth":
That’s not Roy’s prose, but it is Roy’s data over there in the graph on the right, which purports to show that the climate has been cooling, not warming. We now know, of course, that the satellite data set confirms that the climate is warming, and indeed at very nearly the same rate as indicated by the surface temperature records. Now, there’s nothing wrong with making mistakes when pursuing an innovative observational method, but Spencer and Christy sat by for most of a decade allowing — indeed encouraging — the use of their data set as an icon for global warming skeptics. They committed serial errors in the data analysis, but insisted they were right and models and thermometers were wrong. They did little or nothing to root out possible sources of errors, and left it to others to clean up the mess, as has now been done.
So after that history, we’re supposed to savor all Roy’s new cookery?
That’s an awful lot to swallow.
Chris Colose says
Alastair,
I would not expect cloud cover to change dramatically so long as relative humidity remains roughly constant. It is not a given that warmer temperatures will lead to more cloud cover. In midlatitudes it’s a lot warmer in summer than winter, and there’s lots more evaporation in summer, yet if anything there are fewer clouds because relative humidity is on average lower; there are also more clouds over the polar oceans than the tropical and subtropical oceans, even though the latter are warmer. And as raypierre has mentioned before, you also need to complete the argument by discussing how high clouds respond, since increasing them should warm the surface. From personal correspondence with Anthony Del Genio, he seems pretty convinced that low cloud cover should decline. Reading the literature also leads me to believe the best evidence points toward a neutral to positive cloud feedback.
//”temperatures will rise until enough cloud forms to cut off the supply of solar radiation to the surface.”//
That wouldn’t be good.
By the way, Venus probably did have liquid water (traces of it survive as vapor in the atmosphere).
As for the cloud cover in other paleo-times, this is, if anything, further evidence that clouds do not serve as a stabilizing feedback, and probably had to assist CO2 (or whatever else) during many periods to get full deglaciation, or to get as hot as some periods did, the Cretaceous for instance. (By the way, since this paper was referenced in the PP presentation, I thought raypierre might have meant the Cretaceous, but he may have been using it as a template for other times?)
see: http://chriscolose.wordpress.com/2008/04/13/the-uncloudy-cretaceous/
Richard Pauli says
It goes like this.
(from http://scienceblogs.com/denialism/about.php)
Denialist says something wacky…
Commenter or blogger corrects their mistake…
Denialist says same thing, changes argument slightly…
Commenter or blogger again corrects their mistake…
Denialist says something even wackier, says it disproves all of a field of science…
Commenter or blogger, exasperated, corrects it and threatens disemvowelment…
Denialist restates original wacky argument…
Commenter or blogger’s head explodes, calls denialist an idiot.
Denialist says he won because commenter or blogger resorted to ad hominem.
Alastair McDonald says
Re #150 & #151
Barton,
Thanks for your offer of more data, but what you have supplied tends to agree with my “What goes up must come down” hypothesis, which is as far as I wish to take quantitative cloud theories at present. The main difficulty with the hypothesis is that although it is true for mass, it is not true for planetary surface area covered. For instance, on Earth there is a polar vortex where the air descends rapidly from the stratosphere in a narrow column, and then spreads out slowly, lifting a broad area of temperate air.
Without the polar ice, the vortex will cease and the temperate cloud will disappear. Could we then have static temperate and polar regions with no rising and falling air? Would we then have no cloud, or continuous cloud? Perhaps the Ferrel and polar cells will merge, with descending air in the subtropics (as now) and rising air at the pole. Thus the temperate regions would be in a warm air stream, with clouds at the pole keeping it warm during the winter.
Chris,
That would account for the crocodiles in Hudson Bay during the Cretaceous.
I agree with you that cloud cover will not change dramatically in the short term just because temperature rises. But what I am arguing is that there is no other negative feedback to prevent temperatures rising, so they will continue to rise until the clouds do change dramatically.
From a geological perspective, this contradicts the nineteenth-century Principle of Uniformitarianism, or at least that part of it which holds that all climate changes in the past were the result of long, slow processes. We now know that catastrophes have happened in the geological record, the most notable being the mass extinction of the dinosaurs. However, that event is not the only abrupt change in the strata or the palaeontological record. In fact, a rapid climate change happened only 10,000 years ago when the Younger Dryas mini ice age ended and the Holocene interglacial began.
The ending of the Younger Dryas has not yet been successfully modeled, and I believe that the continuing commitment to the outdated Principle of Uniformitarianism is one of the factors in this failure.
Cheers, Alastair.
John D M says
Are we fully appreciating the point that clouds are merely a byproduct of heat being transported from the surface to upper levels? The more clouds there are, the more heat has been transferred from the surface to be dissipated into the upper levels, and eventually into space, is it not?
Maggie Rosenthal says
Ray,
Do you not believe in such a thing as stochastic variability?
Ray Ladbury says
Maggie Rosenthal, Stochastic variability is not a matter of belief. It is an empirical question to be verified for each system individually. I see no evidence it applies in climate.
John D M., The well-mixed character of CO2 means that energy transported high into the troposphere will not necessarily be lost–and is less and less likely to be lost as GHGs increase.
Arch Stanton says
John D M Says:
“Are we fully appreciating the point that clouds are merely a byproduct of heat being transported from the surface to upper levels? The more clouds there are, the more heat has been transferred from the surface to be dissipated into the upper levels, and eventually into space, is it not?”
Yes. The IPCC AR4 FAQ 1.1 discusses this. (link upper right of this page)
Doug Bostrom says
Speaking of cooking data, there’s a fellow who goes by the moniker of “Steven Goddard” now publishing regular climate opinion pieces at “The Register”, a popular online IT journal.
Goddard’s articles seem to be focused on selective interpretation and in some cases outright rearrangement of datasets to “create controversy” in the area of surface temperature measurements. For what they are, the pieces are reasonably well written.
His focus, however, does not appear to be pursuit of scientific inquiry but rather seems to be on tearing down Dr. James Hansen’s reputation, by innuendo and in some cases explicit accusations of scientific misconduct.
Unfortunately, these pieces are gaining fairly wide resonance, to the point that Senator Inhofe has now included them in his body of denial references.
Steven Goddard’s CV is unavailable. He describes himself as an “independent scientist/engineer” offended by Dr. Hansen’s alleged misconduct. Other than numerous cites of his “Register” articles, searches for Goddard yield little but a few comments on The New York Times that seem favorable to the choice of coal over nuclear power, plus self-promoting posts in such locales as Free Republic.
I have requested that “The Register” publish Goddard’s CV, have used the author contact facility at “The Register” to ask Goddard for his CV, and finally have moved on to requesting his resume via the comments facility on his “Register” articles. No response has been forthcoming. My requests via comments have been moderated out.
I suspect that “Steven Goddard” is a clever nom de plume, including Goddard as either a joke or a method of generating search hits on “Goddard” + “climate” or the like.
I encourage anybody who may be interested in removing the veil of anonymity shielding “Steve Goddard” as he denigrates the transparently visible Dr. Hansen to contact “The Register” and inquire as to whether there is some reason why “Mr. Goddard” should need to be kept invisible.
“Goddard”‘s latest screed may be found at:
http://www.theregister.co.uk/2008/07/03/goddard_polar_ice/
Hank Roberts says
Remember The Register is a humor magazine.
See today’s sidebar for instance, they have a piece titled:
“Schwarzenegger seizes Tesla Motors plant for California”
Nobody takes this stuff seriously, do they?
llewelly says
This is true – and also, like much of the tech geek community (I say this as a once-proud member of the tech geek community), El Reg is strongly influenced by the opinions of AGW-denialists. But the unfortunate truth is that despite a focus on humor and undue influence by fringe ideas, El Reg was, for many years, the best source of tech industry news. Even today, only a few sources, such as Ars Technica, are better. This is not a defense of El Reg – it is an admission that tech industry news is overwhelmingly dominated by garbage and delusion. When RC debunks some ridiculously wrong global warming article in Wired, it’s common for commenters here to assume ‘well, bad article on global warming, but Wired is still a fine tech magazine’ – but no, that isn’t the case. Frankly, if it’s in Wired, and it’s not by Schneier, chances are it’s no more accurate than the bad global warming articles RC has debunked, whether it’s about technology or not. Sadly that’s par for the course in tech industry news; Wired is no worse, and no better, than the rest.
But the point is – tech geeks tend to grant a lot of weight to The Register largely because the rest of tech news industry is so deplorable.
Doug Bostrom says
#159 Hank:
Despite their provenance, Goddard’s articles have exploded (splattered?) into prominence in the Zone of Denial. Cites have moved beyond the depths of climatefraud.com into such loftier realms as medium-market newspapers like the Orange County Register. Expect him to be cited as “the other point of view” on Fox News soon. (Of course in this case it’s going to be a little difficult if Goddard can’t actually be seen, because he does not exist.)
This is a pretty typical pattern of late. If an outfit such as, say, “Swiftboat Veterans for Truth” can just claw their way onto the first media rung, the sky becomes the limit.
Meanwhile, notice how the story about Hansen in the media is gradually changing, moving from coverage of his work and more into how “controversial” Hansen himself is. Unanswered accusations of misconduct moving upward through the media food chain are a key part of this process of character assassination.
Watching the seething mass of non-scientists standing in ignorant and ill-considered judgment of folks such as James Hansen is –extremely– irritating, leaving aside the particular topic of discussion. I’m not a scientist myself, but I’m married to one. I see nothing atypical in her honesty and receptivity to criticism of her work, her self-scrutiny not least of all. I count numerous scientists among my friends, among whom it is plain to see that stacking another properly mortared brick on our pyramid of knowledge transcends all other motivations. I’ve watched my spouse’s grad students sweating the details of their work and greatly admire the intense focus they bring to “getting it right”. I’m acutely aware of the extremely high level of integrity that overwhelmingly dominates the process of scientific inquiry, particularly of the academic type. Finally, I’m well familiar with the process of peer review, how very difficult it is to slip a ringer into a decent journal and how completely impossible it would be to do so repeatedly. People like El Reg’s “Steven Goddard” are simply vandals, wreckers, morons who really have no clue of what it is they’re attempting to break.
So not really very funny, at the end of the day.
Ray Ladbury says
Doug Bostrom, The intersection of Goddard’s audience with the reality-based community is the empty set. It does not matter whether he is debunked–people who want to believe him will do so, because they simply don’t care. The best we can hope to do is expose these people for the idiots they are. The fact that he chooses to remain anonymous probably only adds to his credibility in the eyes of these sheep.
Martin says
@Doug Bostrom
I run a big science blog – The Lay Scientist – and I’m trying to work on an article about Steven Goddard – any chance you could get in touch? You can contact me through the website, http://www.layscience.net
Richard Patton says
raypierre said:
=====
it is just part and parcel of the same old question of whether the pattern of the 20th and 21st century can be ascribed to natural variability without the effect of anthropogenic greenhouse gases. The IPCC, among others, nailed that, and nobody has demonstrated that natural variability can do the trick.
=====
I am confused about our current ability to explain natural climate variation. My understanding is that we currently believe the LIA was primarily caused by low TSI, as shown in Lean (2000). However, it is also my understanding that there is new compelling evidence of a floor in TSI, and that there are new TSI reconstructions that show us a new picture of this (Wang 2005 and Leif 2007): http://www.leif.org/research/TSI-LEIF.pdf.
Based on these new reconstructions how can we now explain the LIA?
It is also my understanding that a significant amount of the early 20th century warming is assumed to be because of a concomitant increase in TSI which apparently did not happen either (according to these reconstructions).
Have these new TSI reconstructions been taken into account?
[Response: Actually, in most model simulations that have been done, the hemispheric-scale “LIA” is reproduced largely as a response to explosive volcanic forcing, not solar. See e.g. Crowley (2000). In many cases, the newer solar reconstructions (which still have a long-term trend similar to the older ones, but a factor of 2 or 3 smaller amplitude) actually yield a better match with paleoreconstructions. See e.g. the discussion in Shindell et al (2001). -mike]
David W says
raypierre: “So here’s what Roy did. He took two indices of interannual variability: the Southern Oscillation (SOI) index, which is a proxy for El Nino, and the Pacific Decadal Oscillation Index (PDOI). He formed an ad-hoc weighted sum of these indices..”
Maybe Roy should have revised these indices you speak of.
In my opinion, these indices leave a lot to be desired….
Garth C says
Why do all the charts always end in 1998? I can’t find anything about what’s currently happening with the climate.
Nick Gotts says
#166 – Garth C.
Then you haven’t looked. I can be pretty certain of this, since http://en.wikipedia.org/wiki/Global_warming
gives a graph going to 2007. In fact, I’m pretty certain that your post is a [edit] attempt to exploit the fact that a lot of people will look at the last post on a thread, and if it hasn’t been answered, assume there isn’t an answer.
Hank Roberts says
> a lot of people will look at the last post on a thread
Cautionary, true that.
Contributors, please, remember Nick’s observation above, when locking topics. Check how it reads from the last post up a few; if it’s been packed with denial at the end, ask someone to do a useful summary? Volunteers will. Else it misleads.
Nick Gotts says
Good thought Hank – I’d certainly be willing to summarise a thread.
Garth C says
Thanks for the information, but some still only go to 2004, but they are more current. I don’t know why you’re so hostile; it was a simple question. Oh, and it was a double post, sort of: I didn’t notice the thing at the bottom where you have to verify your posts.
Ed M. says
This is useful information, but now that the Spencer and Braswell article is in print in the reputable J. Climate 21(21):5624-5628 Nov 2008, are there plans to submit a formal comment to the journal? That would make this addition to the literature more complete.
[Response: Ray’s commentary is not about that paper – but about the much less well supported internet postings Spencer has posted. There is often a big difference between what gets into the peer-reviewed literature and what is claimed outside of it. – gavin]
Hank Roberts says
Garth, you got a really weird string pasted in the “Website” field that will appear under your name next time you post a reply. (Having anything there makes your name show up clickable.) Delete that or replace it with a real website URL if you have one and it’ll quit giving you the ‘Incorrect CAPTCHA’ message.
On charts, tell us where you’re looking, and we can help you find more current material.
No offense intended — people often show up new here with a proclamation that something doesn’t exist. Imagine going to the library reference desk and saying that (grin). Nobody knows where everything is.
Bryan S says
Re: #113: Gavin, you stated the following regarding the ocean mixing time controlling the equilibrium climate sensitivity: “The mixing time impacts the transient sensitivity (i.e. the warming expected by 2050 say), but not the equilibrium value.”
After thinking about this a bit more (and re-reading the Steve Schwartz paper), are you sure of this? A greater effective heat capacity of the ocean (that which is effectively coupled to the atmosphere over the time frame of interest) should give a lower equilibrium climate sensitivity for a given relaxation time according to Schwartz’s equation #14.
If the deep ocean is mixed more quickly, why should this not yield a greater effective heat capacity and a resulting lower equilibrium climate sensitivity for a given change in GHG forcing?
Sorry it took me several months to get back to this question.
[Response: But the relaxation time is not independent of the sensitivity, so your statement “the .. sensitivity for a given relaxation time” does not make sense. The equilibrium sensitivity is driven by the atmospheric radiation balance, which only depends on the surface temperatures of the ocean – it doesn’t depend on how long it took to get there. However, the bigger the atmospheric feedbacks, the more there is for the ocean to do (which takes more time). This was explored thoroughly in one of Hansen’s early papers. – gavin]
Bryan S says
Gavin,
Do you then reject the concept expressed in Schwartz’s simple energy balance model by: sensitivity = τ/C, where τ is the relaxation time and C is the effective heat capacity?
I understand your point that equilibrium sensitivity is only driven by the atmospheric radiation balance, but couldn’t the case also be made that the SST (driving the heat and moisture fluxes to the atmosphere) is related closely to OHC, which is thus related to the effective heat capacity, a time-dependent function?
[Response: No – why would you think that? The basic concept is fine (if a little simplistic) – but it’s clear that the sensitivity and time-constant go up together. The problem with Schwartz’s idea was that it doesn’t work in the presence of noise and gives biased estimates even in simple systems. – gavin]
Bryan S says
Gavin,
But the sensitivity is also not independent of the effective heat capacity, so I am interested in why it is clear that the sensitivity and time-constant necessarily go up together?
[Response: But it is! As an example, if I take an atmosphere with a mixed-layer model and vary the depth (say 50m, 100m, 200m etc.), the model equilibrium sensitivity doesn’t change. All that is affected by the increased effective heat capacity is the transient behaviour. Just put dT/dt=0 in the simple energy balance equation, ‘C’ drops out. – gavin]
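To spell out the algebra in that last response: the simple energy balance model under discussion is

\[ C\,\frac{dT}{dt} = F - \lambda T . \]

Setting dT/dt = 0 at equilibrium gives T_eq = F/λ, with no C anywhere: the equilibrium sensitivity is set by λ alone, while the heat capacity enters only through the relaxation time τ = C/λ. This is also consistent with Schwartz’s formula, since sensitivity = τ/C = (C/λ)/C = 1/λ; a deeper mixed layer raises C and τ together, leaving their ratio, the sensitivity, untouched.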