In an earlier post, we discussed a review article by Frohlich et al. on solar activity and its relationship with our climate. We thought that paper was quite sound. This September saw a new article in Geophysical Research Letters with the title «Phenomenological solar signature in 400 years of reconstructed Northern Hemisphere temperature record» by Scafetta & West (henceforth referred to as SW). This article has now been cited by US Senator James Inhofe in a Senate hearing that took place on 25 September 2006. SW find that solar forcing accounts for ~50% of the 20th-century warming, but this conclusion relies on some rather primitive correlations and is sensitive to the assumptions made (see Gavin's recent post on attribution). We have said before that peer review is a necessary but not sufficient condition. So what is wrong with it…?
The greatest flaw, I think, lies in how their novel scale-by-scale transfer sensitivity model (which they call “SbS-TCSM”) is constructed. The coefficients, which they call transfer functions, are estimated by taking the difference between the mean temperatures of the 18th and 17th centuries and dividing it by the difference between the mean total solar irradiances of the corresponding centuries. Thus:
Z = [ T(18th C.) – T(17th C.) ] / [ I(18th C.) – I(17th C.) ]
Here T(.) is the century-average temperature and I(.) the century-average irradiance. If the two terms in the denominator, I(18th C.) and I(17th C.), have very similar values, then the problem is ill-conditioned: small variations in the input values lead to large changes in the answer, which implies very large error bounds. In my undergraduate physics course, we learned to stay away from analyses based on the difference between two large but almost equal numbers, especially when their accuracy is not exceptional. And putting the difference of two large and similar figures in a denominator is asking for trouble.
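To see why, here is a minimal sketch of the sensitivity with made-up numbers (they are illustrative only, not the values in SW's Table 1):

```python
# Illustrative only: hypothetical century averages, NOT the values in SW's Table 1.
def transfer_function(t_late, t_early, i_late, i_early):
    """Z = [T(late) - T(early)] / [I(late) - I(early)], as in the SW construction."""
    return (t_late - t_early) / (i_late - i_early)

# Suppose the century-mean irradiances differ by only 0.2 W/m^2 ...
z1 = transfer_function(t_late=-0.3, t_early=-0.5, i_late=1365.2, i_early=1365.0)
# ... and an alternative irradiance reconstruction shifts that difference by a mere 0.1 W/m^2.
z2 = transfer_function(t_late=-0.3, t_early=-0.5, i_late=1365.1, i_early=1365.0)

print(z1, z2)  # roughly 1.0 vs 2.0: a tiny change in the denominator doubles the "sensitivity"
```

The numbers are arbitrary, but the behaviour is generic: when the denominator is the small difference between two large, similar quantities, the quotient inherits and amplifies their uncertainty.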
So when SW repeated the exercise for the differences between the 19th and 17th centuries, and for three different estimates of the total solar irradiance, the results gave a wide range of values for the transfer functions: from 0.20 to 0.57! The underlying problem is that SW attribute all climatic fluctuations from the 17th to the 19th centuries to solar activity, and hence neglect other factors such as landscape changes (North America and Europe underwent large-scale deforestation), volcanism (see IPCC TAR Fig. 6-8), and internal variations due to chaotic dynamics. It is, however, possible to select two intervals over which the average total solar irradiance is the same but the temperature is not. When the difference in the denominator of their equation is small (i.e. the change in total solar irradiance is small), the model blows up, because other factors also affect the temperature (i.e. the difference in temperature is not zero). Their model is therefore likely to exaggerate the importance of solar activity.
To show that the equation is close to blowing up (being ill-conditioned), the exercise can be repeated for the difference between the 19th and 18th centuries, which was not done in the SW paper. A back-of-the-envelope calculation for these two centuries, using the results in their Table 1 and Figures 1-2, suggests that the transfer functions would then yield an increase of almost 1K for the period 1900-2000, most of which should have been realized by 1980! One problem is that the reconstruction based on solar activity now increases faster than the actual temperature proxy, which would be difficult to explain physically without invoking a negative forcing.
The SW paper does discuss effects of changes in land use, but only to argue that the recently observed warming in the Northern Hemisphere may be over-estimated due to, e.g., heat-island effects. SW fail to mention effects that may counteract warming trends, such as irrigation, better shielding of the thermometers, and increased aerosol loadings, and they overlook the fact that forests were cut down on a large scale in both Europe and North America in the earlier centuries. Another weakness is that the SW analysis relies on just one paleoclimatic temperature reconstruction; using other reconstructions is likely to yield different results.
Looking at the SW curves in more detail (their Fig. 2), one of the most pronounced changes in their solar-based temperature predictions is a cooling at the beginning of the record (before 1650), but no corresponding drop is seen in the temperature curve before 1650. This feature is of roughly the same magnitude as the increase between 1900 and 1950, yet it is not discussed in the paper. As in their earlier papers, the solar-based reconstructions are not in phase with the proxy data. However, SW argue that using different data for the solar irradiance would bring the peaks in 1940 (SW claim it is in 1950) and 1960 into better agreement. So why not show it then? Why use inferior data?
The curves in Figure 2 of the SW paper (Fig. 2 here shows the essential details of their figure) suggest that their reconstruction increases from -0.4 to 0K between 1900 and 2000, whereas the proxy temperature data from Moberg et al. (2005) change from -0.4 to more than +0.6K (by rough eye-balling). One statement made both in the abstract of the SW paper and in the Discussion and Conclusions (and cited in the Senate hearing) is that «the sun might have contributed to approximately 50% of the total global surface warming since 1900 [Scafetta and West, 2006 – an earlier paper this year]». But the figure in the SW paper would suggest at most 40%! So why quote another figure? The earlier Scafetta and West (2006) paper which they cite (also published in Geophysical Research Letters) is discussed here, and I'm not convinced that the figures in that paper are correct either.
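For what it's worth, the "at most 40%" figure follows directly from those eye-balled curve values; a trivial sketch (the numbers are rough readings off the plotted curves, not digitized data):

```python
# Rough eye-balled values from SW's Fig. 2 (approximate readings, not digitized data)
solar_reconstruction_change = 0.0 - (-0.4)   # ~0.4 K over 1900-2000
proxy_temperature_change    = 0.6 - (-0.4)   # >1.0 K over the same period (Moberg et al. 2005)

solar_fraction = solar_reconstruction_change / proxy_temperature_change
print(f"implied solar contribution: {solar_fraction:.0%}")  # ~40%, at most
```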
There are some reasons to think that solar activity may have played some role in the past (at least before 1940), but I must admit I'm far from convinced by this paper, because of the method adopted. It is no coincidence that regression is a more widely used approach, especially in cases where many factors may play a role. The proper way to address this question, I think, would be to identify all the physical relationships, if possible set up the equations with the right dimensions and all appropriate non-linear terms, and then apply a regression analysis (e.g. as used in "fingerprint" methods). Last week, we discussed the importance of a physical model in making attributions, because statistical correlations are incapable of distinguishing between forcings with similar trends. Here is an example of a paper with exactly that problem.
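To make the suggestion concrete, here is a minimal sketch of a multi-forcing regression. It is illustrative only: the variable names and the commented usage are assumptions, and real fingerprint studies use physically based response patterns and account for autocorrelation.

```python
# Minimal sketch: let several candidate forcings compete for the variance,
# instead of attributing everything to a single one.
import numpy as np

def attribute(temperature, forcings):
    """Ordinary least squares of temperature on a set of forcing time series.
    temperature: array of shape (n_years,)
    forcings:    dict name -> array of shape (n_years,)
    Returns the fitted coefficient for each forcing."""
    names = list(forcings)
    X = np.column_stack([forcings[name] for name in names] + [np.ones(len(temperature))])
    coefs, *_ = np.linalg.lstsq(X, temperature, rcond=None)
    return dict(zip(names, coefs[:-1]))

# Hypothetical usage with suitably prepared series (names are placeholders):
# contributions = attribute(temp_series, {"solar": tsi, "volcanic": aod, "ghg": ghg_forcing})
```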
There is also a new paper out by Svensmark on the relationship between galactic cosmic rays and low clouds. We will write a post on this paper shortly.
L. David Cooke says
RE: #199
Mr. Fiddaman;
Would it not be a reasonable approach to establish a gold standard of Earth solar energy flux equilibrium? Why not start with a purely theoretical 24-hour rotating blackbody, with one half receiving the full-spectrum 1370 W/m² from an object of approximately 30 arc min., and the other half exposed to less than 3 deg. K, and establish a gold-standard equilibrium point?
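For reference, the textbook version of that baseline is a one-line Stefan-Boltzmann calculation; a minimal sketch, assuming the 1370 W/m² figure above and zero albedo (an idealization, not the real Earth):

```python
# Equilibrium temperature of a rapidly rotating blackbody sphere.
# Assumes S = 1370 W/m^2 and albedo = 0, per the comment above; the real Earth
# (albedo ~0.3, greenhouse atmosphere) departs from this baseline.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S     = 1370.0            # solar constant at 1 AU, W m^-2
albedo = 0.0

# Absorbed over the cross-section (pi R^2), emitted over the full sphere (4 pi R^2):
T_eq = ((S * (1 - albedo)) / (4 * SIGMA)) ** 0.25
print(f"blackbody equilibrium temperature: {T_eq:.0f} K")   # ~279 K
```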
Once this has been established, why not establish the current observed base value of the real thing? By taking a known full-spectrum solar TOA value, and measuring the emitted or upwelling TOA value with the same instrument, you should be able to discern the residual energy added to the Earth. If you then performed a Granger-causality analysis to check for the trend or lead/lag variability, you could clearly state the current equilibrium point.
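If anyone wants to experiment with the lead/lag idea, the Granger-causality test has a standard implementation in statsmodels; a minimal sketch with synthetic placeholder series (not real TOA or temperature data):

```python
# Illustrative Granger-causality check with synthetic series; 'forcing' and
# 'response' are placeholders, not real measurements.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
forcing = rng.normal(size=200)
response = np.roll(forcing, 2) + 0.5 * rng.normal(size=200)  # response lags forcing by two steps

# grangercausalitytests checks whether the 2nd column helps predict the 1st:
data = np.column_stack([response, forcing])
results = grangercausalitytests(data, maxlag=4)
```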
Once you have established these two values, you can then add in the IPCC contributors and the variability of their values, plus their forcing or feedback values, to determine whether the contributor values result in a positive or negative value in the relation between the gold standard and the observational data. If, as everyone suspects, there should be a positive value, then you can specify once and for all a specific measure that is refutable only by varying the contributors.
Once this is established, we can start to ascertain the contributor values in a clear and concise manner and place them in the GCMs. We would then increase the accuracy of the value and the variability or period of each contributor as the measurements improve. The end result would be a very good model, at least of the equilibrium, and would establish the standards of measure as we move towards the final product.
Maybe this is already being done and the data simply is not being published. Maybe no one wants to measure this data this accurately? Then again, it is possible that if these standards and a standard approach were applied, the opportunity for an individual team to shine would be lost. If we don't consider bringing this approach to the table, I am afraid there may be political forces that will, and they will directly tie future research funding to a program such as this. If the community is not out in front of a project like this, it is possible that it could lose its right to choose. Is that what we really want? If a government-mandated program for a GCM became a political priority, I am afraid this could become a crushing blow to the many diverse programs now being investigated.
Dave Cooke
Tom Fiddaman says
Re 201
I think this was supposed to further that agenda, but I recall hearing that it had been sidelined.
I’m with Hank.
Barton Paul Levenson says
Re #201 and “Maybe this is already being done and the data simply is not being published.”
Maybe it’s already being done and you’re simply not aware of where it’s been published. Most stuff about radiative equilibrium and planetary temperatures can be found in introductory astronomy texts as well as climatology papers and texts.
L. David Cooke says
RE: # 202/203
Mr. Fiddaman;
Thanks, that is something I plan to explore further. If the satellite is at the L1 point and my memory serves me correctly, that should place it directly in line between the Earth and the Sun. I wonder if they are going to use a flat mirror and a set of tuned/filtered pyrometers, or a set of prisms and multiple pyrometers? I think the former would reduce the probability of deviation between detectors, though the latter is likely less expensive.
Mr. Levenson;
If you have any suggestions for links, that would be valuable. I have been researching the data sets for nearly 9 years now and have not yet found data in this basic format with a level of confidence that exceeds 97% or a margin of error of less than 3%. If you have a reference, please ensure it is publicly available, as I would not qualify for most, since I am not in the profession. My thanks for your assistance.
Dave Cooke
L. David Cooke says
RE: 202
Tom;
I wrote the NASA Project Manager to see if they have an update on the status. Apparently this was one of the lost projects in the payload roster. Reviewing the instruments in the PDF was very interesting; the high sensitivity/resolution of the CCD and the lack of shielding against possible ICME solar flux in the next few years may be worrisome. It is too bad they have probably already built the craft. If they could come up with a cheap booster alternative, or hitchhike a ride for a throwaway version, at least grabbing 12 months of data for a baseline would be welcome…
In light of your link, I wonder if there is a measured source, as had been suggested by another poster. Who knows, there might be other birds carrying experiments with the tools to at least capture samples that can be extrapolated.
Dave Cooke
Tom Fiddaman says
I’m going to slide one in here because I just missed the cutoff on the other recent attribution thread.
Re 111
Perhaps I was too tongue-in-cheek, because I meant that Motl goofed up the math in the process of pointing out the logarithmic effect. Since he’s a theoretical physicist we can excuse his innumeracy. :) I suspect that Motl sources his 1.0C and 0.76C numbers (2x vs. present forcing) from Lindzen, rather than using his own equation. Ironically, he cites this RC post as backing for his views, even though it clearly points out the error of neglecting thermal inertia. I guess we have to excuse his illiteracy, too.
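For the record, the standard back-of-the-envelope version of the logarithmic effect goes roughly like this. This is a sketch assuming the common 5.35·ln(C/C0) fit and an illustrative sensitivity parameter; the sensitivity itself is of course the quantity in dispute, and this is not Motl's or Lindzen's calculation:

```python
# Back-of-envelope logarithmic CO2 forcing (Myhre et al. 1998 fit) and the
# implied equilibrium warming for an ASSUMED climate sensitivity parameter.
import math

def co2_forcing(c, c0=280.0):
    """Radiative forcing in W/m^2 for CO2 concentration c (ppm) relative to c0."""
    return 5.35 * math.log(c / c0)

sensitivity = 0.8  # K per W/m^2 -- an assumption, roughly a 3 K-per-doubling climate

for label, c in [("present (~380 ppm)", 380.0), ("doubled (560 ppm)", 560.0)]:
    f = co2_forcing(c)
    print(f"{label}: forcing {f:.2f} W/m^2, equilibrium warming ~{sensitivity * f:.1f} K")
```

Note that these are equilibrium numbers; the realized warming lags behind, which is the thermal-inertia point below.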
Motl also misstates other work he cites. For example, he says that Annan’s reply to Hegerl concludes that the actual sensitivity is about 5 times smaller than the Hegerl et al. upper bound, but reading the actual reply, it’s clear that the upper bound refers to Hegerl et al.’s naive prior, not their final result, and that the 5x should be at most 4x, even if you think it makes sense to compare upper bounds to means (less than 2x otherwise).
The thermal inertia lag is nontrivial – it means that current temperature is less than the equilibrium temperature expected from current forcing by a factor of tau*g, where tau = time constant of thermal inertia and g = growth rate of emissions. That could easily be 50%, which means that even if atmospheric CO2 levels off today, there's as much warming in the pipeline as we've already seen. Of course, emissions themselves are above uptake, so the equilibrium temperature implied by today's emissions is more like 4x current (1/44%/50%). And then there's the inertia in the economy….
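A minimal sketch of that first-order lag logic (the time constant and forcing trajectory below are assumptions chosen only to show the mechanism, not fitted values):

```python
# First-order lag: realized warming trails equilibrium warming when forcing grows.
# tau and the forcing trajectory below are assumptions for illustration only.
import math

tau = 30.0          # assumed thermal-inertia time constant, years
growth = 0.02       # assumed exponential growth rate of forcing per year
dt = 1.0

T_eq, T = 0.0, 0.0
for yr in range(1900, 2001):
    T_eq = math.exp(growth * (yr - 2000))   # equilibrium warming, normalized to 1 at 2000
    T += dt * (T_eq - T) / tau              # realized warming relaxes toward equilibrium

print(f"realized / equilibrium warming in 2000: {T / T_eq:.0%}")
# With exponentially growing forcing, this ratio settles near 1 / (1 + tau*g),
# i.e. a substantial fraction of the equilibrium warming is still 'in the pipeline'.
```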
Richard Barger says
A layman here (or non-expert), so be patient.
On forcings, from the GISS 2005 paper cited above (I think, this is a very long thread).
“A CO2 standard seems better not only for the practical reason given above, but because actual solar forcing is complex and the climate response to it is not well known. Solar irradiance change has a strong spectral dependence [Lean, 2000], and resulting climate changes may include indirect effects of induced ozone change [RFCR; Haigh, 1999; Shindell et al., 1999a] and conceivably even cosmic ray effects on clouds [Dickinson, 1975]. Furthermore, it has been suggested that an important mechanism for solar influence on climate is via dynamical effects on the Arctic Oscillation [Shindell et al., 2001, 2003b]. Our understanding of these phenomena and our ability to model them are primitive, which argues against using solar forcing as a standard for comparing simulated climate effects.”
Doesn’t this imply that the CO2 forcing is really a forcing for CO2 plus all other factors not explained by the other forcings? In particular, wouldn’t it include solar forcings that resulted from solar effects not included in the assumptions on solar forcing? Specifically, wouldn't the true solar effect be expected to be lagged, frequency-dependent, and non-linear?
John Dodds says
Re 187 Tom
I see the reasoning for why the magnitude of the convective (+ conductive) feedback is NOT sufficient to compensate for the added GHG forcing (the 1/tau analysis); however, let's consider that my misidentified original “convective feedback” concept should really be defined as the response to the added GHG warming at the ground and the energy dis-equilibrium at TOA. Any temperature increase results in a feedback of convection and conduction AND radiation in order to return to equilibrium, as required by the Stefan-Boltzmann equation applied to the GHG warming effect. So I can see that the combination of all three would have the magnitude to compensate.
My question then becomes: WHEN does this return to equilibrium occur? The GCMs say that it takes many, many years, and I am NOT sure why it takes so long. It would seem to me that whenever a single added GHG raises the temperature by delta T, the Stefan-Boltzmann (SB) feedback effect (above) would immediately respond by compensating with the feedback (convection and conduction AND radiation) that would return the Earth system to equilibrium as FAST as the radiative (and convective) effects can transfer the GHG warming delta T to space. Isn't the SB feedback FASTER than the added GHG warming? I do not understand why it would take years – i.e. your last paragraph in 187 and the GCM results. One related question: the energy dis-equilibrium (and GHG warming) accumulates to give the many-years and 3+ degrees effect. Does this change if you run a 1880-2000 case or a 1750-2000 case, or what about the 13,000+ year case, which is actually how long the GHGs have been increasing since the last ice age? Does the dis-equilibrium really last that long?
BUT I leave this question for another day. I need to think about it a little. Thanks for the consideration.