Over the last couple of months there has been much blog-viating about what the models used in the IPCC 4th Assessment Report (AR4) do and do not predict about natural variability in the presence of a long-term greenhouse gas related trend. Unfortunately, much of the discussion has been based on graphics, energy-balance models and descriptions of what the forced component is, rather than the full ensemble from the coupled models. That has led to some rather excitable but ill-informed buzz about very short time scale tendencies. We have already discussed how short term analysis of the data can be misleading, and we have previously commented on the use of the uncertainty in the ensemble mean being confused with the envelope of possible trajectories (here). The actual model outputs have been available for a long time, and it is somewhat surprising that no-one has looked specifically at them given the attention the subject has garnered. So in this post we will examine directly what the individual model simulations actually show.
First, what does the spread of simulations look like? The following figure plots the global mean temperature anomaly for 55 individual realizations of the 20th Century and their continuation for the 21st Century following the SRES A1B scenario. For our purposes this scenario is close enough to the actual forcings over recent years for it to be a valid approximation to the simulations up to the present and probable future. The equal weighted ensemble mean is plotted on top. This isn’t quite what IPCC plots (since they average over single model ensembles before averaging across models) but in this case the difference is minor.
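For anyone who wants to reproduce this kind of calculation from the archive, here is a minimal sketch of the two averaging choices described above. The array names and stand-in data are purely illustrative, not the actual processing code:

```python
import numpy as np

# Stand-in data: `anomalies` has shape (n_runs, n_years); `model_ids` maps each
# realization to its parent model. Real values would come from the IPCC archive.
anomalies = 0.1 * np.random.randn(55, 200) + np.linspace(0.0, 2.0, 200)
model_ids = np.random.randint(0, 22, size=55)

# Equal-weighted ensemble mean: every realization counts the same.
equal_weighted_mean = anomalies.mean(axis=0)

# IPCC-style mean: average within each single-model ensemble first, then across models.
model_means = np.array([anomalies[model_ids == m].mean(axis=0)
                        for m in np.unique(model_ids)])
ipcc_style_mean = model_means.mean(axis=0)
```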
It should be clear from the above plot that the long term trend (the global warming signal) is robust, but it is equally obvious that the short term behaviour of any individual realisation is not. This is the impact of the uncorrelated stochastic variability (weather!) associated with interannual and interdecadal modes in the models – these can be associated with tropical Pacific variability or fluctuations in the ocean circulation for instance. Different models have different magnitudes of this variability, spanning the range that can be inferred from the observations, and in a more sophisticated analysis you would want to adjust for that. For this post however, it suffices to just use them ‘as is’.
We can characterise the variability very easily by looking at the range of regressions (linear least squares) over various time segments and plotting the distribution. This figure shows the results for the period 2000 to 2007 and for 1995 to 2014 (inclusive) along with a Gaussian fit to the distributions. These two periods were chosen since they correspond with some previous analyses. The mean trend (and mode) in both cases is around 0.2ºC/decade (as has been widely discussed) and there is no significant difference between the trends over the two periods. There is of course a big difference in the standard deviation – which depends strongly on the length of the segment.
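A rough sketch of that calculation, continuing with the hypothetical `anomalies` array above (the Gaussian fit is simply the sample mean and standard deviation of each distribution):

```python
import numpy as np

def segment_trends(anomalies, years, start, end):
    """Least-squares trend (degC/decade) over [start, end] for each realization."""
    mask = (years >= start) & (years <= end)
    t = years[mask]
    return np.array([10.0 * np.polyfit(t, run[mask], 1)[0] for run in anomalies])

# e.g. trends_short = segment_trends(anomalies, years, 2000, 2007)
#      trends_long  = segment_trends(anomalies, years, 1995, 2014)
# Gaussian fit: mu, sigma = trends_short.mean(), trends_short.std(ddof=1)
```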
Over the short 8 year period, the regressions range from -0.23ºC/dec to 0.61ºC/dec. Note that this is over a period with no volcanoes, and so the variation is predominantly internal (some models have solar cycle variability included which will make a small difference). The model with the largest trend has a range of -0.21 to 0.61ºC/dec in 4 different realisations, confirming the role of internal variability. 9 simulations out of 55 have negative trends over the period.
Over the longer period, the distribution becomes tighter, and the range is reduced to -0.04 to 0.42ºC/dec. Note that even for a 20 year period, there is one realisation that has a negative trend. For that model, the 5 different realisations give a range of trends of -0.04 to 0.19ºC/dec.
Therefore:
- Claims that GCMs project monotonic rises in temperature with increasing greenhouse gases are not valid. Natural variability does not disappear because there is a long term trend. The ensemble mean is monotonically increasing in the absence of large volcanoes, but this is the forced component of climate change, not a single realisation or anything that could happen in the real world.
- Claims that a negative observed trend over the last 8 years would be inconsistent with the models cannot be supported. Similar claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation are equally bogus.
- Over a twenty year period, you would be on stronger ground in arguing that a negative trend would be outside the 95% confidence limits of the expected trend (the one model run in the above ensemble suggests that would only happen ~2% of the time).
A related question that comes up is how often we should expect a global mean temperature record to be broken. This too is a function of the natural variability (the smaller it is, the sooner you expect a new record). We can examine the individual model runs to look at the distribution. There is one wrinkle here though which relates to the uncertainty in the observations. For instance, while the GISTEMP series has 2005 being slightly warmer than 1998, that is not the case in the HadCRU data. So what we are really interested in is the waiting time to the next unambiguous record i.e. a record that is at least 0.1ºC warmer than the previous one (so that it would be clear in all observational datasets). That is obviously going to take a longer time.
This figure shows the cumulative distribution of waiting times for new records in the models starting from 1990 and going to 2030. The curves should be read as the percentage of new records that you would see if you waited X years. The two curves are for a new record of any size (black) and for an unambiguous record (> 0.1ºC above the previous, red). The main result is that 95% of the time, a new record will be seen within 8 years, but that for an unambiguous record, you need to wait for 18 years to have a similar confidence. As I mentioned above, this result is dependent on the magnitude of natural variability which varies over the different models. Thus the real world expectation would not be exactly what is seen here, but this is probably reasonably indicative.
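One reasonable way to define the waiting time in each model run is sketched below (illustrative code, not the exact calculation used for the figure):

```python
import numpy as np

def waiting_time(series, start_index, margin=0.0):
    """Years after `start_index` until the record-to-date is beaten by more than `margin`.

    Returns None if the record is never beaten within the series.
    """
    record = series[:start_index + 1].max()
    for k, value in enumerate(series[start_index + 1:], start=1):
        if value > record + margin:
            return k
    return None

# Pooling waiting_time(run, i) over all runs and start years between 1990 and 2030 and
# plotting the cumulative distribution gives the "any record" curve (margin=0.0);
# margin=0.1 gives the "unambiguous record" curve.
```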
We can also look at how the Keenlyside et al results compare to the natural variability in the standard (un-initialised) simulations. In their experiments, the decadal means of the periods 2001-2010 and 2006-2015 are cooler than 1995-2004 (using the closest approximation to their results with only annual data). In the IPCC runs, this only happens in one simulation, and then only for the first decadal mean, not the second. This implies that there may be more going on than just tapping into the internal variability in their model. We can specifically look at the same model in the un-initialised runs. There, the differences between first decadal means span the range 0.09 to 0.19ºC – significantly above zero. For the second period, the range is 0.16 to 0.32 ºC. One could speculate that there is actually a cooling that is implicit to their initialisation process itself. It would be instructive to try some similar ‘perfect model’ experiments (where you try and replicate another model run rather than the real world) to investigate this further though.
Finally, I would just like to emphasize that for many of these examples, claims have circulated about the spectrum of the IPCC model responses without anyone actually looking at what those responses are. Given that the archive of these models exists and is publicly available, there is no longer any excuse for this. Therefore, if you want to make a claim about the IPCC model results, download them first!
Much thanks to Sonya Miller for producing these means from the IPCC archive.
Timothy says
[4] – Also, I’ve noticed that the various models tend to agree with each other within hindcasts, but there is rather more of a spread in the future projections. I’m told that the hindcasts are honest exercises, and not curve-fits, but in that case, shouldn’t there be more of a spread amongst the models in the hindcasts, as well?
The spread is caused because the different models respond more or less sensitively to the forcing. In the historical period the total forcing is less than the forcing that is expected as part of the A1B scenario. I haven’t done this analysis, but I would expect that if you plotted the graph in terms of the percentage anomalies of each run from the ensemble mean, you would see much more constant spread throughout the length of the run.
What I’m saying is that the spread in absolute terms is growing, but in relative terms it [probably] isn’t.
As to your question about initialisation, the standard IPCC procedure, as I understand it, is to use a “spinup” run to initialise the model. This uses constant 1860 “pre-industrial” conditions (ie CO2, methane, etc) for the model so that it can be in a steady equilibrium state when the historical GHG forcings are applied.
Then different start points can be taken from different points of this spinup run for different ensemble members. Normally, the scenario runs (A1B, etc) are started from the end of the “historical” runs.
There isn’t so much observational data for 1860, so it would be hard to construct an “ideal” set of initial conditions. The main criterion has been to have a model that is in equilibrium, so that you know that any warming in the experiment is due to the forcing added to that experiment, and not long timescale reactions to imbalances still present in the model.
Timothy says
[44] – There’s another recent-ish volcano (El Chichon) that had a climatic impact, I think that was 1982.
BRIAN M FLYNN says
When the “chill” early last March was hyped to negate global warming over the past 100 years, Dr. Christy admonished, “The 0.59 C drop we have seen in the past 12 months is unusual, but not unprecedented; April 1998 to April 1999 saw a 0.71 C fall. The long-term climate trend from November 1978 through (and including) January 2008 continues to show a modest warming at the rate of about 0.14 C (0.25 degrees F) per decade…One cool year does not erase decades of climate data, nor does it more than minimally change the long-term climate trend. Long-term climate change is just that “long term” and 12 months of data are little more than a blip on the screen.”
Dr. Hansen responded somewhat more hyperbolically in **Cold Weather**: “The reason to show these [monthly and decadal GISS, RSS, and UAH data] is to expose the recent nonsense that has appeared in the blogosphere, to the effect that recent cooling has wiped out global warming of the past century, and the Earth may be headed into an ice age. On the contrary, these misleaders have foolishly (or devilishly) fixated on a natural fluctuation that will soon disappear… Note that even the UAH data now have a substantial warming trend (0.14°C per decade). RSS find 0.18°C per decade, close to the surface temperature [GISS] trend (0.17°C per decade). The large short-term temperature fluctuations have no bearing on the global warming matter…”.
Regardless of whether the long term GW is “moderate”, “substantial”, or even ongoing, the recent indices of a “chill” (or at least “offset in projected AGW”) for AGW advocates will continue to represent a mere respite from the “ultimate truth” of AGW and its consequences. For other advocates (even those who at the least acknowledge GW), the “chill” is a fortuitous event, allowing us all, perhaps, to proceed more deliberately, question the models supporting AGW claims, allocate more appropriately available resources between mitigation and adaptation strategies, and develop better technology and energy use.
Although not a scientist, I find that examining models is an endeavor on “shifting sand”. My opinion is based upon the writings of those who are or should be most knowledgeable about them.
Dr. Hansen et al. did spend some time on the deficiencies of ModelE (2006) (see Dangerous human-made interference with climate: a GISS modelE Study (published May, 2007)). They concluded by saying, “Despite these model limitations, in IPCC model inter-comparisons, the model used for the simulations reported here, i.e. modelE with the Russell ocean, fares about as well as the typical global model in the verisimilitude of its climatology. Comparisons so far include the ocean’s thermohaline circulation (Sun and Bleck, 2006), the ocean’s heat uptake (Forest et al., 2006), the atmosphere’s annular variability and response to forcings (Miller et al., 2006), and radiative forcing calculations (Collins et al., 2006). The ability of the GISS model to match climatology, compared with other models, varies from being better than average on some fields (radiation quantities, upper tropospheric temperature) to poorer than average on others (stationary wave activity, sea level pressure).” Thus, these admitted deficiencies, which then included (among other things) the absence of a gravity wave representation for the atmosphere (and, likely, for the ocean as well) and the yielding of “only slight el-Nino like variability” (and, likely, la-Nina like variability as well) and other acknowledgements present the avenues within which observations may nevertheless be “consistent with” (or, one step removed, “not inconsistent with”) climate models.
Lyman [Willis] et al. in “Recent Cooling of the Upper Ocean” (published October, 2006) likewise mentioned the shortcomings of models: “The relatively small magnitude of the globally averaged [decrease in ocean heat content anomaly (“OHCA”)] is dwarfed by much larger regional variations in OHCA (Figure 2). … Changes such as these are also due to mesoscale eddy advection, advection of heat by large-scale currents, and interannual to decadal shifts … associated with climate phenomena such as El Nino… the North Atlantic Oscillation …the Pacific Decadal Oscillation …and the Antarctic Oscillation….Owing in part to the strength of these advection driven changes, the source of the recent globally averaged cooling (Figure 1) cannot be localized from OHCA data alone.” They pointed to other possible sources of the “cooling” by saying, “Assuming that the 3.2 (± 1.1) × 10^22 J was not **transported to the deep ocean**, previous work suggests that the scale of the heat loss is too large to be stored in any single component of the Earth’s climate system [Levitus et al., 2005]. A likely source of the cooling is a small net imbalance in the 340 W/m2 of radiation that the Earth **exchanges with space**.” (emphasis added). They then concluded, in part: “…the updated time series of ocean heat content presented here (Figure 1) and the newly estimated confidence limits (Figure 3) support the significance of previously reported large interannual variability in globally integrated upper-ocean heat content.” Willis et al. went further: “However, **the physical causes for this type of variability are not yet well understood**. Furthermore, **this variability is not adequately simulated in the current generation of coupled climate models used to study the impact of anthropogenic influences on climate** … Although these models do simulate the long-term rates of ocean warming, **this lack of interannual variability represents a shortcoming that may complicate detection and attribution of human-induced climate influences.**” (emphasis added)
The Lyman [Willis] et al. 2006 paper was published approximately six months after the article, **Earth’s Big Heat Bucket** (at http://earthobservatory.nasa.gov/Study/HeatBucket/). Then, Hansen was reported to have an interest in the paper, **Interannual Variability in Upper Ocean Heat Content, Temperature, and Thermostatic Expansion on Global Scales** Journal of Geophysical Research (109) (published December, 2004) in which Willis et al., by using satellite altimetric height combined with in situ temperature profiles, found an implication of “an oceanic warming rate of 0.86 ± 0.12 watts per square meter of ocean (0.29 ± 0.04 pW) from 1993 to 2003 for the upper 750 m of the water column.”, and Hansen thus looked to the ocean and Willis for the “smoking gun” of earth’s energy imbalance caused by greenhouse gases. (More on the use of altimetry below.) NASA quoted Hansen: “Josh Willis’ paper spurred my colleagues and me to compare our climate model results with observations,” says Hansen. Hansen, Willis, and several colleagues used the global climate model of the NASA Goddard Institute for Space Studies (GISS), which predicts the evolution of climate based on various forcings…. Hansen and his collaborators ran five climate simulations covering the years 1880 to 2003 to estimate change in Earth’s energy budget. Taking the average of the five model runs, the team found that over the last decade, heat content in the top 750 meters of the ocean increased ….. The models predicted that as of 2003, the Earth would have to be absorbing about 0.85 watts per square meter more energy than it was radiating back into space—an amount that closely matched the measurements of ocean warming that Willis had compiled in his previous [2004] work. The Earth, they conclude, has an energy imbalance. “I describe this imbalance as the smoking gun or the innate greenhouse effect,” Hansen says. “It’s the most fundamental result that you expect from the added greenhouse gases. The [greenhouse] mechanism works by reducing heat radiation to space and causing this imbalance. So if we can quantify that imbalance [through our predictions], and verify that it not only is there, but it is of the magnitude that we expected, then that’s a very big, fundamental confirmation of the whole global warming problem.”
Because Lyman [Willis] et al. (2006) was published approximately seven months after the Second Order Draft of the IPCC’s WG1 (March, 2006, of which Willis was contributing author), the issue of ocean “cooling” was apparently untimely for the IPCC’s AR4 compilation published in early 2007. However, the paper was not untimely for Hansen et al. (2007) to remark: “Note the slow decline of the planetary energy imbalance after 2100 (Fig. 3b), which reflects the shape of the surface temperature response to a climate forcing. Figure 4d in Efficacy (2005) shows that 50% of the equilibrium response is achieved within 25 years, but only 75% after 150 years, and the final 25% requires several centuries. This behavior of the coupled model occurs because the deep ocean continues to take up heat for centuries. Verification of this behavior in the real world requires data on deep ocean temperature change. In the model, heat storage associated with this long tail of the response curve occurs mainly in the Southern Ocean. Measured ocean heat storage in the past decade (Willis et al., 2004; Lyman [Willis] et al., 2006) presents limited evidence of this phenomenon, but the record is too short and the measurements too shallow for full confirmation. Ongoing simulations with modelE coupled to the current version of the Bleck (2002) ocean model show less deep mixing of heat anomalies.” No mention by Dr. Hansen was made of the “cooling” in the upper ocean (750m) as found by Lyman [Willis] et al. (2006), nor of a “smoking gun”.
The first public critique of Lyman [Willis] et al. (2006) apparently arose from AchutaRao et al., **Simulated and observed variability in ocean temperature and heat content** (published June 19, 2007). They concluded that by use of 13 numerical models [upon 2005 World Ocean Atlas (WOA-2005) data with “infill” data], their “work does not support the recent claim that the 0- to 700-m layer of the global ocean experienced a substantial OHC decrease over the 2003 to 2005 time period. We show that the 2003–2005 cooling is largely an artifact of a systematic change in the observing system, with the deployment of Argo floats reducing a warm bias in the original observing system.” By July 10, 2007, Lyman [Willis] et al. (2006) echoed the claim of bias in their own “Correction to Recent Cooling In the Upper Ocean” stating “most of the **rapid** decrease in globally integrated [upper ocean (750 m) OHCA] between 2003 and 2005…appears to be an artifact resulting from the combination of two different instrument biases (emphasis added)”. But, they went further, “although Lyman [Willis] et al. carefully estimated sampling errors, they did not investigate potential biases among different instruments”; and, “Both biases [in certain Argo floats and XBTs] appear to have contributed equally to the spurious cooling.”
Despite the assertion, however, the bias in the Argo system was apparently accounted for in Lyman [Willis] et al. (2006): “In order to test for potential biases due to this change in the observing system [to Argo], globally averaged OHCA was also computed **without** profiling float data (Figure 1, gray line). The cooling event persisted with removal of all Argo data from the OHCA estimate, albeit more weakly and with much larger error bars. This result suggests that the cooling event is real and not related to any potential bias introduced by the large changes in the characteristics of the ocean observing system during the advent of the Argo Project. Estimates of OHCA made using only data from profiling floats (not shown) also yielded a recent cooling of similar magnitude.” (emphasis added) And, although much was made about the warm biased XBTs being a source of the **rapid** decrease in OHC, no mention of finding warming then was made by either AchutaRao et al. (2007) or Lyman [Willis] et al. in their “Correction” (2007).
When the Argo results gained more notoriety this year, Willis [Lyman] et al. published **In Situ Data Biases and Recent Ocean Heat Content Variability** (February 29, 2008) and still concluded that “no significant warming or cooling is observed in upper-ocean heat content between 2004 and 2006”. But, by then, Willis [Lyman] et al. claimed that “the cooling reported by Lyman et al. (2006) would have implied a very rapid increase in the rate of ice melt in order to account for the fairly steady increase in global mean sea level rise observed by satellite altimeters over the past several years. The absence of a significant cooling signal in the OHCA analyses presented here brings estimates of upper-ocean thermosteric sea level variability into closer agreement with altimeter-derived measurements of global mean sea level rise. Nevertheless, some discrepancy remains in the globally averaged sea level budget and observations of the rate of ocean mass increase and upper-ocean warming are still too small to fully account for recent rates of sea level rise (Willis et al. 2008).” Gone then was any reference to “advection driven changes” or an assumption that heat was “transported to the deep ocean” which otherwise may have accounted for any cooling, or at least no warming, reported in Lyman [Willis] et al. (2006).
The foregoing makes clear that upper ocean cooling or no warming is not “consistent with” models supporting GW unless the heat or energy imbalance determined by Hansen’s models has in fact been transported to the deep ocean (more than 3000 m), which Dr. Roger Pielke Sr. for years has suggested, or it has escaped to space (as Dr. Kevin Trenberth is recently reported as saying “[the extra heat is] probably going back out into space” and “send[s] people back to the drawing board”). Then, altimeter-derived measurements of global mean sea level rise could still be meaningful even in the presence of a significant cooling or at least no warming in the upper-ocean. With the heat in “the deep”, however, reliance upon altimetry data as a proxy for heat content in the upper ocean may be misplaced and concern about CO2 re-emerging into the atmosphere may be over-emphasized.
Ray Ladbury says
OK, Jared, here’s a quiz. How long does an El Nino last? How about a PDO? Now, how long has the warming trend persisted (Hint: It’s still going on.) Other influences oscillate–the only one that has increased monotonically is CO2. Learn the physics.
david abrams says
“Response: For a record that would be unambiguous (and therefore clear in all estimates of the trend) the 50% waiting period is somewhere around 6 years according to this rough study.”
Fine, let’s go with the “unambiguous” line. Here’s what I propose: On January 31, 2014, we each get to pick one of the leading calculations of world temperature anomaly (GISS, HadCRUT, etc.). We then take the arithmetic mean of the two calculations for each of the years 2008, 2009, 2010, 2011, 2012, and 2013. If even ONE of those yearly averages is more than 0.1C above the highest average (as calculated using the same measures) for each of the years between 1980 and 2007, inclusive, then you win the bet.
“Let me think about the bet. – gavin”
Think all you like, but based on your challenge to the Germans it seems to me you ought to jump on it. Does 1 thousand dollars donated to a charity of the winner’s choosing seem reasonable?
Chris N says
Lamont,
In response to #44, how about the US recession in 80-82? Less production, fewer aerosols, higher temperatures? Most of the warming in the mid-90s is likely due to the 1990 Clean Air Act, and the fall of the Former Soviet Union. Most don’t realize that there was a major shift to low sulfur coal or installation of FGD processes in the early 90s.
Gavin,
How are aerosol forcings chosen for the various models? Can anyone choose any number they like for their hindcasts? Does it vary per year? It appears that most climate modelers assume aerosol loading is getting worse each subsequent year? Why so, and how so? Is there one graph anywhere in the world that shows the results of a climate model where aerosol forcing is varied, i.e., 0.1x, 0.33x, 0.5x, 0.67x, 0.75x, 0.9x, and 1.0x?
Vincent Gray says
What happened to the ancient truism that a correlation, however convincing, does not prove cause and effect? Changing the word to “consistent with” does not change this.
Other correlations, such as the one with ocean oscillations are much more “consistent”, are they not?
[Response: Pray tell, to what correlations do you refer? None were discussed in the above post. – gavin]
Chuck Booth says
Re # 48 Larry
Global warming, or not, we still have the problem of ocean acidification caused by rising levels of atmospheric CO2 – that is serious enough itself:
Coral Reefs Under Rapid Climate Change and Ocean Acidification
O. Hoegh-Guldberg et al. Science 14 December 2007: Vol. 318, no. 5857, pp. 1737–1742
http://preview.tinyurl.com/5a7cqc
Anthropogenic ocean acidification over the twenty-first century and its impact on calcifying organisms
James C. Orr et al. Nature 29 September 2005: Vol. 437, pp. 681-686
http://www.ipsl.jussieu.fr/~jomce/acidification/paper/Orr_OnlineNature04095.pdf
Impact of Anthropogenic CO2 on the CaCO3 System in the Oceans
Richard A. Feely et al. Science 16 July 2004: Vol. 305, no. 5682, pp. 362–366
http://www.sciencemag.org/cgi/content/abstract/305/5682/362
Philip Machanick says
Chris N #56: Try searching for aerosol (google allows you to specify a site e.g. site:giss.nasa.gov to narrow the search). You might find a few things of interest at Global Aerosol Climatology Project (GACP).
Some people still seem to be having trouble understanding that over a short period, natural variability will overwhelm a long-term trend. As an experiment, I took the oldest instrument data set I could find, HadCRUT3, and took the first 50 years, which as far as I could tell was not subject to any significant forcing (and temperature variation was nearly flat over the period), and added a modest trend to it, to make it look like the trend over the last 50 years. Just as with current data, even though I KNOW there is a trend there because I added it in, you can find periods of 10 years that are flat or even decreasing.
Comments and corrections welcome.
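A minimal sketch of the kind of experiment Philip describes (the file name, column layout and added trend are assumptions for illustration, not his actual script):

```python
import numpy as np

# Assumes a plain-text file of annual global anomalies with the year in column 0
# and the anomaly in column 1 (hypothetical file name and layout).
hadcrut = np.loadtxt("hadcrut3_annual.txt", usecols=1)

first50 = hadcrut[:50]
with_trend = first50 + 0.012 * np.arange(50)   # add roughly 0.12 degC/decade, similar to recent warming

# Even though a trend was added by construction, some 10-year windows can still show
# flat or negative least-squares slopes:
slopes = [10.0 * np.polyfit(np.arange(10), with_trend[i:i + 10], 1)[0]
          for i in range(len(with_trend) - 9)]
print(sum(s <= 0 for s in slopes), "of", len(slopes), "10-year windows are flat or cooling")
```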
Nylo says
Gavin: “We all know that the forcing is not linear in concentration. But it isn’t decreasing, it is increasing logarithmically. And it is certainly not decreasing exponentially”.
Ray: “How can you expect to be taken seriously when you haven’t even bothered to acquaint yourself with the physics of the model you are arguing against?”
Both of you misunderstood my words. Of course the TOTAL greenhouse effect increases as long as the concentration increases. What decreases exponentially is the amount of GH effect CONTRIBUTED by the amount of CO2 we add each year. In other words, tomorrow’s addition of 5 ppm won’t be as important as today’s addition of 5 ppm. This results in a logarithmic TOTAL increase of the warming, but which is LESS than linear. On the other hand we are adding CO2 every year faster than the year before. So the total increase of the warming effect will be somewhat faster than logarithmic, as tamino points out. However, it will still be SLOWER than linear. That’s why I used a green prediction that was LINEAR. The real expected increase should be even less than that.
[Response: There is no dispute about the physics – it’s a matter of language, yours was extremely unclear to the point of being misleading. But CO2 increases are exponential, giving a linear forcing trend (and indeed a little faster than linear). – gavin]
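To make the point concrete, here is a small illustration using the standard simplified CO2 forcing expression F = 5.35 ln(C/C0) W/m² (Myhre et al. 1998); the growth rate is an assumed round number, not an emissions scenario:

```python
import numpy as np

years = np.arange(0, 101)
C0 = 280.0                                # pre-industrial CO2 (ppm)
C = C0 * np.exp(0.005 * years)            # assumed ~0.5%/yr exponential concentration growth
F = 5.35 * np.log(C / C0)                 # forcing then grows linearly: 5.35 * 0.005 * years

# The marginal forcing from each additional ppm falls off like 1/C, not exponentially:
marginal_per_ppm = 5.35 / C
```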
Nylo says
I still don’t see anyone answering the question of how will the troposphere warm the surface in the way the models predict, if it is not as hot as the models predicted it to be.
Barton Paul Levenson says
Bryan S writes:
Conservation of energy?
Barton Paul Levenson says
Lamont writes:
If I had to guess, I’d say the series of recessions in 1980-1982 that slowed down the world economy, thereby slowing production, thereby releasing fewer aerosols, therefore permitting more solar absorption and higher temperatures. But I don’t know exactly how I’d go about testing the theory. Maybe a time series for industrial aerosols? Does anyone have one?
pete best says
Do these model runs assume that CO2 release (sources) and sinks are to stay the same? Do they propose that GHG emissions and ocean/plant uptake stay constant over the 21st century?
Geoff Wexler says
Falsifiability.
We don’t have to wait to falsify the theory of global warming. It can be done now, and very easily, by falsifying the principle of conservation of energy. That would, incidentally, also solve the problem of generating renewable energy. The patent office receives a regular series of designs which claim to do this and which are not given the benefit of publicity by American Petroleum or Exxon. The reason why it is very easy is that we only need to verify one of these claims. Notice that ‘very easy’ is a logical idea not a practical one. To simplify the point I am ignoring the valid point that the Patent Office would have to invoke some other theories in order to carry out its tests.
In so far as global warming theory has rock solid foundations it is because it is an application of highly falsifiable universal theories or laws such as the above. Notice the word ‘universal’. A single prediction is not the same as a universal theory in at least two ways; first the asymmetry between falsification and verification can break down and secondly it can involve lots of initial conditions (data) as well as universal laws. Popper’s ideas were not so trivial that they were intended to apply to the collection of data.
I think the best way to apply falsificationism is to apply it to universal laws. Not to apply it to the estimate that doubling the pre-industrial CO2 will produce 3 degs.C warming but to the related law (postulated by Arrhenius) that the warming produced by such a doubling does not depend on the starting point. Another example might be that the average relative humidity is independent of temperature (also suggested by Arrhenius). Both such laws are easy to falsify in the logical sense.
As for checking up on the forecast, Gavin has answered that one here and in the previous thread. Falsification is part of a discussion about the demarcation problem between science and non science and the waiting time for falsification does not come into it. (Even Popper would have agreed).
To summarise, a piece of applied physics cannot be dismissed as nonscientific if its main predictions are harder to falsify than the laws from which they are deduced, provided it can be tested by waiting.
Timo Hämeranta says
Gavin explained: “The variations for single models are related to the initial conditions. The variations across different models are related to both initial conditions and structural uncertainties (different parameterisations, solvers, resolution etc.).”
And the results are as follows:
“Cloud climate feedback constitutes the most important uncertainty in climate modelling, and currently even its sign is still unknown. In the recently published report of the intergovernmental panel on climate change (IPCC), 6 out of 20 climate models showed a positive and 14 a negative cloud radiative feedback in a doubled CO2 scenario.”
Quote from the study
Wagner, Thomas, S. Beirle, T. Deutschmann, M. Grzegorski, and U. Platt, 2008. Dependence of cloud properties derived from spectrally resolved visible satellite observations on surface temperature. Atmospheric Chemistry and Physics Vol. 8, No 9, pp. 2299-2312, May 5, 2008
GlenFergus says
#54, #47
The PDO appears to persist for “20-30 years”, but the record is too short for much confidence. The point re a probable PDO contribution to the recent observed warming trend (~1978 to present) appears basically valid. PDO correlates with more and stronger El Ninos, which clearly correlate with higher global mean temps. This one isn’t going away guys, though the rush from the denyosphere to embrace it smacks of serious desperation.
The more interesting question is whether PDO post ’78 is (oceanic) weather, or is it actually climate? A random variation in the state of the Pacific, or warming-driven? How would we tell? Maybe paleo SSTs? Eemian? Pliocene?
Down here in desperately dry Oz, people have been looking longingly for a PDO shift for a while now, but the recent bust up of the La Nina seems to have cruelled hopes again.
Larry says
Gavin
Thanks for responding. I liked your clock analogy, but if my watch is slow, then it falls behind a minute today and another minute tomorrow, ad infinitum. If it’s randomly off and is a minute slow today, it randomly errs again tomorrow. Unless I reset it (would that we could reset the climate), tomorrow’s error could also be a minute slow. I.e., the errors might average to 0, but with a lower probability, they could also accumulate.
Chuck Booth (#58)
I’m not denying anything; just trying to get my arms around this very complex subject. I’m ready to be convinced, but I keep bumping up against rebuttals that I am unable to refute. Neither Frank nor I dispute the greenhouse effect. What he seems to be on about is its relative significance, given all the other things that affect climate.
Ray Ladbury says
Nylo, Sorry, but if you do not know the difference between a logarithmic increase and an exponential decrease, we don’t have much to talk about. If you want to talk about the incremental contribution of an additional amount of ghg, you would take the differential of ln(x) and multiply by dx–there’s no way that is “exponentially decreasing”. [edit]
JBL says
Nylo in 60: a further comment on precision in language. “Exponentially decreasing” is just wrong. If the marginal forcing were decreasing exponentially, the total forcing would approach some upper limit. As it is, the total forcing proceeds logarithmically, so the marginal forcing decreases like 1/x, i.e. not exponentially.
Ray Ladbury says
#57 Vincent Gray–what about when you have an established correlation AND an established physical mechanism that explains it and makes predictions that are subsequently verified. I believe that does define causation a la the scientific method, does it not?
Mark says
Maybe an analogous way of displaying this result is to ask this question:
Roll 100 dice for, say, 10 rolls and record how each die rolls. How many of these 100 dice will indicate that the die is loaded?
That’s the chance that we would not see any global warming in some of the current models.
Now try 100 dice for 20 rolls. How many will indicate that the die is loaded?
That’s the chance that if we ran for another 20 years, we would find any models showing no global warming.
Or have I got the take-home message wrong?
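A toy version of Mark’s dice analogy (the “loaded” threshold is arbitrary and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_looking_loaded(n_rolls, n_dice=100_000, threshold=0.75):
    """Fraction of fair dice whose mean roll is far enough from 3.5 to look 'loaded'."""
    rolls = rng.integers(1, 7, size=(n_dice, n_rolls))
    return np.mean(np.abs(rolls.mean(axis=1) - 3.5) > threshold)

print(fraction_looking_loaded(10))   # short record: plenty of false alarms
print(fraction_looking_loaded(20))   # longer record: far fewer
```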
Eric (skeptic) says
#65, Geoff, and Nylo, a model prediction going bad does not “falsify” a model. But physical laws behind the model are not fruitful for falsification either. For example whether the average relative humidity is independent of temperature is irrelevant because the average is meaningless in a model unless it is parameterized into uselessness.
OTOH Nylo, a climate model that doesn’t predict an ENSO phase change is not false or useless because prediction of the timing of such changes is not necessary for climate fidelity. However accurate modeling is needed which means sufficient resolution and adequate coverage of inputs. The nonlinear chaotic interaction of a sufficiently resolved atmosphere and ocean interaction should enable the parameterization at a fine scale of the events that can ultimately trigger a phase shift. That may require a few more years of processing power and model enhancement, but I think it is inevitable.
stevenmosher says
gavin,
when I look at the spread of “forecasts” presented here I wonder how well each model that produced these forecasts did at hindcast. A model that does poorly in hindcast really should not be used in forecast? That’s a question really. Anyway, Judith Curry wrote the following and I’m wondering what your take on the issue is
“What David Douglass says is absolutely correct. At the recent NOAA review of GFDL modelling activities (we discussed this somewhere on another thread), I brought up the issue numerous times that you should not look at projections from models that do not verify well against historical observations. This is particularly true if you are using the IPCC results in some sort of regional study. The simulations should pass some simple observational tests: a credible mean value, a credible annual cycle, appropriate magnitude of interannual variability. Toss out the models that don’t pass this test, and look at the projections from those that do pass the test. This generated much discussion, here are some of the counter arguments:
1) when you do the forward projections and compare the 4 or so models that do pass the observational tests with those that don’t, you don’t see any separation in the envelope of forward projections
2) some argue that a multiple model ensemble with a large number of multiple models (even bad ones) is better than a single good model
My thinking on this was unswayed by arguments #1 and #2. I think you need to choose the models that perform best against the observations (and that have a significant number of ensemble members from the particular model), assemble the error statistics for each model, and use these error statistics to create a multi-model ensemble projection.
This whole topic is being hotly debated in climate community right now, as people who are interested in various applications (regional floods and droughts, health issues, whatever) are doing things like average the results of all the IPCC models. there is a huge need to figure out how to interpret the IPCC scenario simulations.”
[Response: Judith points are valid issues and I discussed just that in a recent post. I have no idea what Douglass has to do with that. – gavin]
Craig P says
I have a question regarding the practice of averaging the results of various model simulations to obtain an average trend line.
It is obvious that one can mathematically perform this averaging, obtain distributions of outcomes, and calculate standard deviations to get a sense of the variation in the predictions. But does this mathematical exercise yield the same information about uncertainty that you get when applying the same computations to experimental data?
It is my understanding that a fundamental assumption underlying the application of statistics to experimental data is that all the variation in the data comes from measurement errors that are randomly distributed. If the variations are randomly distributed, then averaging of a lot of measurements can be used to reduce uncertainty about the mean value.
But in the case of computer models, doesn’t much of the variation among different models come from systematic, rather than random variation? By that, I mean that the models give different results because they differ in assumptions made, and in computational strategies employed. Under these circumstances, can you attribute any significance to the “confidence” limits calculated from the standard deviation of the computed average? Is there statistical theory to underpin the notion that averaging outcomes which contain systematic errors can be used to reduce uncertainty about the mean value?
To illustrate my concern, consider a simple (and exaggerated) example where we have 4 climate simulations, each from a different model. The predicted rate of temperature change for each model is as follows:
Model 1 = +0.6 C/decade
Model 2 = +0.4 C/decade
Model 3 = 0.0 C/decade
Model 4 = -0.2 C/decade
Let’s further suppose that the average temperature gain per decade (for some observable period) was actually +0.2 C/decade.
Now if I were to compare the actual data to the model predictions, I’d be tempted to conclude that none of the models is any good. Yet if I average the 4 results, the average agrees perfectly with the observed trend.
With regard to this example, I would ask: By combining the results of 4 poor models to get an average result that matches reality, have I really proven that I understand how to model temperature change? For me, the answer is obvious: I haven’t.
So when I see a discourse such as you have just provided, I do find myself wondering if all this averaging is mainly a way to hide the inability of these models to correctly predict climate trends.
In response to point #16 Gavin offers a balance of evidence argument. I’d agree that the balance of evidence is that the surface of the planet has gotten warmer in recent decades. But aren’t modeling results essential to make the case that CO2 is the primary driver (e.g., to validate the causal link). So isn’t the goodness of the models an essential issue with respect to whether or not we should impose a possibly large social and economic cost by attempting to control atmospheric CO2 levels?
Timothy says
[74] – I believe there’s some evidence from seasonal forecasting that leaving in “poor” models in a multi-model ensemble improves the performance of the ensemble as a whole. (This was for ENSO prediction, it might be different for other regional applications)
This is counter-intuitive.
I think that there is some interesting work being done on how to use hindcasts to constrain the forecasts in a statistical way.
[64] – These models all use the A1B scenario for future GHG emissions, and don’t use interactive carbon cycles. The A1B scenario is generally considered one of the “high” emission scenarios (but there’s little sign of anything being done to avoid that), you can find out more about that on the IPCC website if you google for SRES.
They do miss out the carbon-cycle feedback, but the conclusion of the section on that in the AR4 report was that results from other modelling studies all showed a weaker feedback than that in the original Cox et al paper that flagged it up as an issue.
[Ongoing discussion about 1980-1982] – It occurs to me that this was in the wake of the Iranian revolution, etc, and I recall that oil in the Middle East has a particularly high sulphur content compared to oil from elsewhere. This might be relevant.
stevenmosher says
Gavin, the reference to Douglass is immaterial to the question.
I’m looking for some kind of direct comment from you on this question.
Should one only accept “forecasts” from models that hindcast well? Or should bad hindcasters get to forecast? When a bad hindcaster forecasts, is the uncertainty of forecasts increased? I’ll resubmit my request to get data from IPCC, but in the meantime your take on the matter is appreciated.
The “forecasts” you depict come from various models. Some, one could speculate, hindcast better than others. Should the bad hindcasters be included in forecasting? If one excludes the bad hindcasters, then what does the spread look like?
Anyway, Many thanks and Kudos for doing this post.
[Response: You need to demonstrate in any particular case (e.g. if you want to look at N. Atl. temperatures, or Sahel rainfall or Australian windiness or whatever) that you have a) a difference in what the ‘good’ models project and what the rest do, b) some reason to think that the metric you are testing against is relevant for the sensitivity, and c) some out-of-sample case where you can show your selection worked better than the naive approach. Turns out it is much harder to do all three than you (or Judith) might expect. I think paleo info would be useful for this, since the changes are larger, but as I said a week or so back, the databases to allow this don’t yet exist. – gavin]
Ray Ladbury says
Steven Mosher, Gavin et al., I am wondering whether a model averaging with weights determined by some statistical test might not be appropriate. I am not sure that a hindcast is necessarily the appropriate statistical test, though. Each of the models determines the strengths of various forcers from various independent data sources. When there are enough data sources, one might be able to construct weights from Akaike or Bayesian Information criteria for how well the models explain the data on which they are based (that is, best fit for one data type will be different from best fit for another, so you settle on an overall best fit with a certain likelihood of the contributing data). Such a weighted ensemble average has been shown to outperform even the “best” model in the ensemble. You can sort of see why. Even if a model is mostly wrong, it may be closer to right in some aspects than other models in the ensemble. Thus assigning it a weight based on its performance will do a better job than arbitrarily weighting it to zero.
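A minimal sketch of the Akaike-weighting idea Ray describes (the inputs are hypothetical; `aic` holds an information-criterion score per model and `projections` their forward projections):

```python
import numpy as np

def akaike_weighted_average(aic, projections):
    """Weight each model's projection by exp(-delta_AIC/2), normalised to sum to 1."""
    delta = np.asarray(aic) - np.min(aic)     # differences from the best-scoring model
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()                  # poor models get small but non-zero weight
    return np.average(np.asarray(projections), axis=0, weights=weights)
```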
stevenmosher says
RE 74.
Gavin, I am having trouble understanding this comment made by Dr. Curry in light of your depiction of internal variability:
“1) when you do the forward projections and compare the 4 or so models that do pass the observational tests with those that don’t, you don’t see any separation in the envelope of forward projections”
So, you’ve presented a panoply of forward projections from a collection of 22 models, only 4 of which pass what Dr. Curry refers to as an observational test. What does the forward projection look like for the 4 out of 22 models that hindcast well? If a model doesn’t hindcast well, 18 out of 22 according to Dr. Curry, then what is the point exactly of doing statistics on their forward projections?
ModelE, I’ll note (thanks for the links to the data!), hindcasts like a champ.
[Response: She is noting that there is often little or no difference between a projection (not a forecast) that uses a subset of the models and a projection from the full set. Therefore the skill in a hindcast does not constrain the projection. This is counterintuitive, but might simply be a reflection that people haven’t used appropriate measures in the hindcasts – i.e. getting mean climate right doesn’t constrain the sensitivity. This is an ongoing line of research. I have no idea what test she is specifically talking about. – gavin]
Nylo says
#73 Eric, if you are using an average of model runs in order to get rid of weather effects, because they allegedly will cancel out and keep only the climate signal, then what can you compare your model with? You cannot compare it to the real data because real data is climate PLUS weather. So in order to compare, you would first need to decide how much of our current warming is climate and how much of it is weather. And you cannot use the models to take that decision, because you would have an invalid circular argument: I prove reality is climate because it is coincident with the model, and I prove the model is right because it matches reality, which I know is only climate, because… because… well, because it is like the model. See the nonsense?
But scientists cannot agree on how much of the current warming is weather and how much is climate, especially when talking about the warming we had between 1978 and now. So the models cannot be compared to anything. If you think that no weather-related effects have been happening in these 30 years, then the models are good. If you think we have been suffering warming from ENSO and PDO and other causes for 30 years, and that without them the climate-only influenced temperature should be 0.2ºC colder by now, and therefore expect some cooling, then the models are crap and their predictions are holy crap.
Manu D says
#75 Craig P
I believe you’ve overlooked one important point.
Over the longer period, the distribution becomes tighter, and the range is reduced to -0.04 to 0.42ºC/dec.
So the fact that the average between models happens to correspond to the actual trend is not due to chance.
To come back to one of your conclusions:
Now if I were to compare the actual data to the model predictions, I’d be tempted to conclude that none of the models is any good.
You’re right up to a certain point: an individual model is not good to project the T anomaly over just a few years, and it’s not designed to (e.g. no initialization to actual past or present system state). One model projection over a few years should be taken carefully, as well as the ensemble average.
And I guess people at RC and all scientists always said that. I believe that the point here is to tell people who insist on comparing measured variability over short term to look at individual models instead of ensemble average, i.e.: If you want to see stable temperatures over a few years, then compare data with a model whose variability is in phase with actual measurements (but that would be by chance, as these models are not initialized to actual system states) and not with the ensemble average. The latter smooths out variability, and gives mostly the long term trend.
But I guess the problem here is that you somehow assume that the large discrepancies you took as an example are conserved whatever period you average over. If that were the case, I think one could say that model projections are extremely uncertain.
However, imagine that the hypothetical trends you’ve taken as examples were derived from signals composed of the sum of a sinusoid (call it variability) and a linearly increasing signal (call it trend). Let’s say that the linear signal is pretty similar among various models, but variability is not and differs in phase and amplitude. Now of course if you’d compute a trend at scales shorter than the period of the sinusoid, or at scales where the trend is smaller than the short term variability, you would find large disparities between various pseudo-trends, because they’d be dominated by variability. However, the longer the time period you consider, the less variability impacts your computed trend. Then you start to see a narrower distribution around your mean value, and an individual model is more likely to tell you about the actual trend. My guess is that’s what is shown in figure 2 here.
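A toy version of the sinusoid-plus-trend picture Manu describes (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1990, 2031)
forced = 0.02 * (years - years[0])            # 0.2 degC/decade "trend" component

def pseudo_trend(window):
    """degC/decade slope over a random window of a trend-plus-variability series."""
    phase = rng.uniform(0.0, 2.0 * np.pi)
    amp = rng.uniform(0.05, 0.15)
    series = forced + amp * np.sin(2.0 * np.pi * years / 15.0 + phase)  # ~15-yr "variability"
    start = rng.integers(0, len(years) - window + 1)
    seg = slice(start, start + window)
    return 10.0 * np.polyfit(years[seg], series[seg], 1)[0]

short_trends = [pseudo_trend(8) for _ in range(1000)]    # wide spread, some negative
long_trends = [pseudo_trend(20) for _ in range(1000)]    # much tighter around 0.2 degC/decade
```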
Michael Smith says
The surface temperature observations and troposphere temperature observations move together — when one takes a swing up or down, so does the other, indicating that whatever is causing these swings affects both in the same manner (if not exactly to the same magnitude). Looking at the data, that seems to have been the case for the entire satellite era.
In view of this, can any of you explain how “climate” or stochastic uncertainty can cause the relationship between surface heating trends and troposphere heating trends to be inverted versus what AGW theory predicts and requires?
In other words, given how the surface and troposphere observations move together, how can “climate” account for the fact that the troposphere observations DON’T match the models but the surface trends DO?
[Response: Your first point is key – moist adiabatic amplification is ubiquitous on all timescales and from all relevant processes. This is however contradicted by your second point where you seem to think it is only related to AGW – it isn’t, it is simply the signal of warming, however that may be generated. However, there is noise in the system, and there is large uncertainty in the observations. If you take that all into account, there still remains some bias, but there is no obvious inconsistency. But more on this in a few days…. – gavin]
Ray Ladbury says
Nylo asks: “See the nonsense?”
Why, yes, as a matter of fact. Nylo, climate models are dynamical. That means there is very little wiggle room in many of the parameters that go into them. So, let’s say (and there’s no evidence for this) that you are right and that there has been less “climate-related warming,” that there has just been a conspiracy of nature to make the past 30 years heat up. The forcing due to CO2 will not change very much in response to that observation, because it is constrained independently by several other lines of data. Instead, it would imply that there was some other countervailing factor that countered the warming due to increasing CO2. Now there are two possibilities:
1) this additional factor is again independent, and just happens to be active right now. In this case, it will only persist on its own timescale, and when it peters out, warming due to CO2 will kick in with a vengeance (CO2’s effects persist for a VERY long time).
2) if the additional factor is a negative feedback triggered by increased CO2, then it may limit warming. However, it likely has some limit, and when that limit is exceeded, THEN CO2 kicks in with a vengeance and we’re still in the soup.
And you have to not only come up with your magical factor to counter CO2, but explain how that factor limits warming, but somehow doesn’t affect stratospheric cooling and all the other trends that a greenhouse warming mechanism explains. Let us know how that goes.
Eric (skeptic) says
#80 Nylo, I think it’s fair to say there’s no way to compare currently. I believe however that the medium to long run fidelity of models will be sufficient once the chaotic effects (and modeling of “initial” conditions) reach sufficient resolution and adequate handling of inputs. I can’t say if those inputs need to include exotic things like cosmic ray cloud formation, but I’m pretty sure that medium and smaller scale weather is a must.
Ultimately the weather in such a model can be compared on a local level when primed with enough initial conditions (that’s only data after all). People may accuse me of conflating weather and climate models, but I would maintain that this would solve the problem of oversimplified climate models. Parameters, like how many cumulus clouds I see out my window, are important for climate. I can count them and insert the parameters into the climate model, or I can model them. But if I count them, how do I know how that parameter changes with climate change?
I agree that averaging different models is not the way to extract climate, but results from localized climate models (verified locally) can be used for parameters for the global climate models.
Geoff Wexler says
Re #62.
Barton Paul Levenson.
Conservation of energy and warming in the pipeline via natural fluctuations?
Suppose that we apply a forcing, wait until a steady state is reached and then switch off all further forcing. Then allow natural fluctuations to proceed. Just how severe a constraint is the conservation of energy in the presence of strong positive feedback? Just one example. Suppose that the switch-off occurs near a critical point for some new contribution to positive feedback via the greenhouse effect. Such a climate would be ‘metastable’ with respect to natural fluctuations. Could one of them over-shoot the critical point and start something irreversible which would involve a slow transition to a different energy regime? Perhaps this could be described either as a natural or as a delayed effect of the previous forcing? Is this nonsense?
Eric (skeptic) says
#83, Ray, the additional factor is water vapor feedback and it doesn’t counter CO2, it adds to it. See https://www.realclimate.org/index.php?p=142 here. The post points out the increase in RH varies depending mainly on latitude. It actually varies a lot depending on numerous local factors, seasons, and global patterns. The variation in water vapor feedback is basically why the climate models have predicted less or more warming than reality. The other trends (e.g. stratospheric cooling) are equally varying and ambiguous. The explanation for that is not a magical missing factor, just inadequate modeling of water vapor feedback as it is controlled by large and small scale weather patterns.
Gerald Browning says
Might I suggest that the commenters on this thread read the article by Pat Frank that recently appeared in Skeptic for a scientific (not handwaving) discussion of the value of climate models and the errors induced by the simplifications used in the parameterizations. That article, along with the mathematical discussion of the unbounded exponential growth (ill posedness) of the hydrostatic system of equations numerically approximated by the current climate models, should mathematically clarify the discussion about the value of the models.
Jerry
[Response: That’s funny. None of the models show this kind of behaviour, and Frank’s ludicrous extrapolation of a constant bias to a linearly growing error over time is no proof that they do. That article is an embarrassment to the otherwise reputable magazine. – gavin]
Gerald Browning says
Gavin,
Before so quickly dismissing Pat Frank’s article, why don’t you post a link to it so other readers on your site can decide for themselves the merits of the article? What was really hilarious was the subsequent typical hand waving response that was published by one of the original reviewers of the manuscript.
I also see that you did not disagree with the results from the mathematical manuscript published by Heinz Kreiss and me that shows that the initial value problem for the hydrostatic system approximated by all current climate models is ill posed. This is a mathematical problem with the continuum PDE system.
Can you explain why the unbounded exponential growth does not appear in these climate models? Might I suggest it is because they are not accurately approximating the differential system? For numerical results that illustrate the presence of the unbounded growth and the subsequent lack of convergence of the numerical approximations, your readers can look on Climate Audit under the thread called Exponential Growth in Physical Systems. The reference that mathematically analyzes the problem with the continuum system is cited on that thread.
Jerry
[Response: I have no desire to force poor readers to wade through Frank’s nonsense, and since it is just another random piece of ‘climate contrarian’ flotsam for surfers to steer around, it is a waste of everyone’s time to treat it with any scientific respect. If that is what he desired, he should submit it to a technical journal (good luck with that!). The reason why models don’t have unbounded growth of errors is simple – climate (and models) are constrained by very powerful forces – outgoing longwave radiation, the specific heat of water, conservation of energy and numerous negative feedbacks. I suggest you actually try running a model (EdGCM for instance) and examine whether the errors in the first 10 years, or 100 years, are substantially different to the errors after 1000 years or more. They aren’t, since the models are essentially boundary value problems, not initial value problems. Your papers and discussions elsewhere are not particularly relevant. – gavin]
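A crude way to see the “boundary value, not initial value” point in the response above is a strongly damped, forced toy model: runs started from very different states converge onto the same forced trajectory instead of diverging. This is only a sketch with invented parameters (the feedback strength lam, heat capacity C and noise level are arbitrary), nothing remotely like a GCM:

```python
# Two runs of a damped, forced zero-dimensional toy "climate" started from very
# different initial states: their difference decays on the relaxation timescale
# C/lam instead of growing, because the forcing and the damping set the attractor.
import numpy as np

rng = np.random.default_rng(0)
lam, C = 1.5, 8.0                  # invented feedback parameter (W/m^2/K) and heat capacity (W*yr/m^2/K)
dt, nyears = 0.1, 200
t = np.arange(0, nyears, dt)
forcing = 0.04 * t                 # slow linear forcing ramp (W/m^2)

def run(T0):
    T = np.empty_like(t)
    T[0] = T0
    for i in range(1, len(t)):
        noise = rng.normal(0, 0.3)                       # unforced "weather" noise (W/m^2)
        T[i] = T[i-1] + dt * (forcing[i-1] - lam * T[i-1] + noise) / C
    return T

T_a, T_b = run(0.0), run(5.0)      # two wildly different initial anomalies (K)
i50 = int(50 / dt)
print("|difference| at start      :", abs(T_a[0] - T_b[0]))
print("|difference| after 50 yrs  :", round(abs(T_a[i50] - T_b[i50]), 3))
print("|difference| after 200 yrs :", round(abs(T_a[-1] - T_b[-1]), 3))
```

The 5 K initial disagreement shrinks to a few hundredths of a degree (the residual unforced noise) rather than amplifying, which is the qualitative behaviour being described.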
Bryan S says
Ray Ladbury and Barton Paul Levenson: Roy Spencer has recently argued that an “internal radiative forcing” can impute a long-term trend in climate. He makes the case using some simple models. Please read: http://climatesci.org/2008/04/22/internal-radiative-forcing-and-the-illusion-of-a-sensitive-climate-system-by-roy-spencer/ I ask that you (or Gavin) clearly state why his hypothesis is wrong. Does his model violate conservation of energy?
Other smart folks such as Carl Wunsch (hardly a denialist) have made the same points that I have borrowed from them about the long memory of initial conditions in the ocean, and the fact that this mixes an initial value problem with the boundary value problem in multi-decadal climate prediction. Gavin has apparently resisted this notion however, based on the statistical stability that he notes in the GCM ensemble means. Since Wunsch is well-versed in modeling ocean circulation with GCMs, it seems odd that he would not also see such a clear cut boundary values problem in the real ocean.
[Response: There is no such thing as ‘internal radiative forcing’ by anyone else’s definition. Spencer appears to have re-invented the rather well known fact that clouds affect radiation transfer. But we will have something more substantial on this soon. As to Wunsch’s point, I don’t know what you are referring to. We have spent the best part of a week discussing the initial boundary value problem for short term forecasts. That has very little to do with the equilibrium climate sensitivity or the long term trends though. – gavin]
Bryan S says
You did not answer my questions about Spencer’s ideas (you force me to wait impatiently for your future post). Gavin, so are you flatly stating that the climate cannot (and does not) have a trend across a wide range of potential time scales owing to random fluctuations?
Also, flatly stating that initial values have “very little” to do with the long-term climate sensitivity is your hypothesis, and you should be expected to stand behind it. But in saying “very little”, I want to know how much is “very little”. I am afraid “very little” may be “more than you think”.
On Wunsch, after taking a considerable amount of effort to plod through several of his papers (not an easy read), I think he gives an abbreviated layman’s version of his sentiments here: http://royalsociety.org/page.asp?id=4688&tip=1 (there is caution for everyone here). In his peer-reviewed papers, e.g. http://ocean.mit.edu/~cwunsch/papersonline/schmittneretal_book_chapter.pdf, he states his case with more technical jargon. Read the last section of this paper.
[Response: Bryan, there are always potentials for these things, but if you look for hard evidence it is hard to find. Coupled models have centennial variability during their spin-ups, but not so much once they equilibrate, so there’s not much support there. In the real world there is enough correlation of long-term changes with the various forcings to give no obvious reason to suspect that the trends aren’t forced. So let me turn it around – offer some positive evidence that this has happened. As for the influence of initial conditions, you can look at the IPCC model trends – every single one was initialised differently and with wildly unreal ocean states. Yet after 100 years of simulation, the 20-year trends fall very neatly into line. That, then, is the influence of the initial conditions. – gavin]
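The same initialisation point can be illustrated with 20-year trends directly: in a toy damped, forced system, runs started from wildly different states give essentially the same trend once the initial transient has decayed. Again a sketch only, with invented parameters, not anything resembling the IPCC archive:

```python
# Toy check of the initialisation point above: start several runs of a damped,
# forced system from very different states, integrate for a century, then fit
# 20-year trends. Parameters are invented; this is an illustration, not a GCM.
import numpy as np

rng = np.random.default_rng(1)
lam, C, dt = 1.3, 10.0, 0.1        # invented feedback parameter and heat capacity
years = np.arange(0, 120, dt)
forcing = 0.04 * years             # steady forcing ramp (W/m^2)

def run(T0):
    T = np.empty_like(years)
    T[0] = T0
    for i in range(1, len(years)):
        T[i] = T[i-1] + dt * (forcing[i-1] - lam * T[i-1] + rng.normal(0, 0.4)) / C
    return T

window = (years >= 100) & (years < 120)          # the last 20 years of each run
trends = []
for T0 in [-4.0, -1.0, 0.0, 2.0, 6.0]:           # wildly different initial anomalies (K)
    T = run(T0)
    slope_per_year = np.polyfit(years[window], T[window], 1)[0]
    trends.append(10 * slope_per_year)           # convert to K per decade
print("20-year trends (K/decade):", np.round(trends, 3))
# The trends cluster near the forced value of ~10*0.04/lam ~ 0.3 K/decade,
# whatever the starting state: the memory of the initialisation is gone.
```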
Gerald Browning says
Gavin,
It is irrelevant what journal the article was submitted to. The scientific question is does the mathematical argument that Pat Frank used stand up to careful scientific scrutiny. None of the reviewers could refute his rigorous mathematical arguments and thus the Editor had no choice but to publish the article. Pat has used a simple linear formula to create a more accurate climate forecast than the ensemble of climate models (the accuracy has been statistically verified). Isn’t that a bit curious given that the models have wasted incredible amounts of computer resources? And one only need compare Pat’s article with the “rebuttal” published by a reviewer to see the difference in quality between the two manuscripts.
[Response: I quite agree (but only with your last line). Frank’s argument is bogus – you know it, I know it, he knows it. Since models do not behave as he suggests they should, perhaps that should alert you that there is something wrong with his thinking. Further comments simply re-iterating how brilliant he is are not requested. – gavin]
The mathematical proof of the ill posedness of the hydrostatic system is based on rigorous PDE theory. Please feel free to disprove the ill posedness if you can. However, you forgot to mention that fast exponential growth has been seen in runs of the NCAR Clark-Hall and WRF models also as predicted by the Bounded Derivative Theory. Your dismissal of rigorous mathematics is possibly a bit naive?
If any of your readers really want to understand whether the climate models are producing anything near reality, they will need to proceed beyond hand waving arguments and look at evidence that cannot be refuted.
Jerry
Philip Machanick says
To all the “clouds will save us from global warming” people: what is the evidence for this?
As I understand it, whether the forcing is [a] increasing GHG, [b] a change in the sun’s output or [c] a change in the Earth’s orbit, you add energy to the system. Various feedbacks add to or reduce the initial impulse. It doesn’t matter whether the initial impulse is a, b or c. The feedbacks don’t know what added energy to the system. Why then is CO_2 magically different to these other forcings in inducing a negative feedback that automagically damps the temperature increase to something non-harmful to the environment? Or did something so radically different to today’s conditions happen in previous warming events in the paleoclimate, when temperatures rose significantly above today’s levels?
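For what it is worth, the arithmetic behind this argument is just the standard linear feedback relation ΔT = ΔT0/(1 − f): the feedback factor multiplies whatever no-feedback response the forcing produces, and it never “sees” where the extra energy came from. A tiny sketch with textbook-style illustrative numbers (the values of lambda_planck and f below are round assumptions, not tuned estimates):

```python
# Linear feedback amplification: the feedback factor f multiplies whatever
# no-feedback response the forcing produces, regardless of whether the forcing
# comes from CO2, the sun or the orbit. Textbook-style illustrative numbers.
lambda_planck = 0.3              # no-feedback (Planck) response, K per (W/m^2)
f = 0.6                          # combined feedback factor (water vapour, albedo, ...)

def equilibrium_warming(forcing_wm2):
    dT0 = lambda_planck * forcing_wm2        # response with no feedbacks
    return dT0 / (1 - f)                     # amplified response

for name, F in [("2xCO2", 3.7), ("solar increase", 3.7), ("orbital change", 3.7)]:
    print(f"{name:15s} {F} W/m^2 -> {equilibrium_warming(F):.1f} K")
# Same forcing, same warming: the feedback terms never "see" what supplied the
# extra energy, which is the point made in the comment above.
```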
dhogaza says
You mean stuff like predicting stratospheric cooling, and the subsequent observation of stratospheric cooling? You mean stuff like predicting the effect of Pinatubo, and the subsequent observation that the predicted cooling effect closely matched the model results?
And other predictions that have been made, and subsequently observed?
Alexander Harvey says
In the spirit of fair play I have a little challenge!
Can anyone here come up with a model that deals with all the known forcings (and, if desired, oscillations) and that matches or exceeds the IPCC range on climatic warming for a doubling of CO2 (or equivalent) concentrations?
By this I mean a mathematics based model that can be made available to all of us for reproduction of the results and criticism of the method.
It can be as simple a model as desired but it needs to be at least plausible.
For those that think CO2 has little or no effect you should be showing that the vectors (forcings and oscillations) would indicate a doubling is below the low end of the IPCC range.
For those that think the opposite, you should be showing that the vectors imply a temperature increase at or beyond the high end of the IPCC range.
I only ask that the model does not break fundamental laws or rely on anything equally implausible, that the “model” can be run (i.e. the results reproduced) on a domestic PC, and that it gives some value equivalent to climate sensitivity.
Now the large scale modellers (Hadley Centre etc.) have to produce models that are open to criticism. They cannot simply cherry pick vectors they like. It seems to me that they are too often criticised on the basis of particular results (vectors) that could seem to contradict their position (e.g. solar variation, PDO etc.).
If anyone can come up with a model no matter how simple that deals with all the relevant vectors (CO2, CH4, SO4, solar etc.) and produces a result that meets or exceeds the IPCC range I should like to see it.
If anyone is prepared to take this up so will I.
Happy Hunting
Alexander Harvey
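As a possible starting point for the challenge above, here is a minimal (and deliberately crude) one-box energy-balance sketch: a handful of idealised forcings and a single feedback parameter that sets the implied sensitivity. The forcing histories are round, made-up numbers rather than real datasets, so treat it as a template, not a result:

```python
# A deliberately crude one-box model in the spirit of the challenge: sum a few
# idealised forcings and relax a single global-mean temperature anomaly toward
# F/lam. All forcing histories here are round, made-up numbers, not data;
# lam is the free knob that sets the implied climate sensitivity.
import numpy as np

lam = 1.25               # feedback parameter (W/m^2/K): sensitivity ~ 5.35*ln(2)/lam ~ 3 K
C = 8.0                  # effective heat capacity (W*yr/m^2/K), shallow-ocean-ish
dt = 0.25                # years
years = np.arange(1900, 2101, dt)

co2_ppm = 280.0 * np.exp(0.003 * (years - 1900))              # idealised CO2 growth path
F_co2 = 5.35 * np.log(co2_ppm / 280.0)                        # simplified CO2 forcing expression
F_solar = 0.1 * np.sin(2 * np.pi * (years - 1900) / 11.0)     # small 11-year solar cycle
F_aerosol = -0.4 * np.clip((years - 1950) / 50.0, 0.0, 1.0)   # crude aerosol offset
F = F_co2 + F_solar + F_aerosol

T = np.zeros_like(years)
for i in range(1, len(years)):
    T[i] = T[i-1] + dt * (F[i-1] - lam * T[i-1]) / C

i2000 = int((2000 - 1900) / dt)
print("implied equilibrium sensitivity: %.1f K per 2xCO2" % (5.35 * np.log(2) / lam))
print("modelled 1900-2000 warming:      %.2f K" % (T[i2000] - T[0]))
```

Anyone taking up the challenge would replace the idealised forcing paths with real series (published forcing datasets, for example) and then see what value of lam, and hence what sensitivity, is needed to stay below, inside or above the IPCC range.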
erm-fzed says
Does this address the Koutsoyiannis criticism of the IPCC recently posted on (notification of) Climate Audit? (For someone not as steeped in models, and willing to follow that much math talk.)
John Norris says
re: response to number 16
[… PS. you don’t need climate models to know we have a problem. – gavin]
Gavin, did you realize that the reference you give to show you don’t need models itself references models?
Roger A. Pielke Sr. says
Gavin – In #49, you write “Roger, everyone’s time is limited. This is why public archives exist (here is the link again). As I have said many times, if you want an analysis done, you are best off doing it yourself”.
With respect to this response, I would welcome your (and Jim’s) comments as to why the upper-ocean heat content changes in Joules are a lower priority to you (and to GISS), when they can be used to diagnose the global average radiative imbalance more accurately than the surface temperature trends that you present in your weblog.
At least for the GISS model (which should be easy for you to provide), the presentation of a plot analogous to your first figure above, but for upper-ocean (upper 700 m) heat content, would help move the scientific discussion forward. If you cannot produce it for us, please tell us why not.
[Response: If you want the net TOA imbalance, then you should simply look at it – fig 3 in Hansen et al (2007). The raw data is available here. For the OHC anomaly itself, you need to download the data from the IPCC archive. Roger, you need to realise that this blog is a volunteer effort maintained when I have spare time. If something comes across my desk and I can use it here, I will, but I am not in a position to specifically do research just for this blog or for you. I am genuinely interested in what the OHC numbers look like, but I do not have the time to do it. If you have a graduate student available it would make a great project. – gavin]
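For readers curious what the requested calculation involves, the bookkeeping is straightforward once gridded upper-ocean temperature anomalies are in hand: integrate ρ·cp·ΔT over the top 700 m and the ocean area to get Joules, then convert the trend to W/m². The sketch below uses hypothetical array names and shapes and constant seawater properties; it is a schematic of the arithmetic, not working analysis code for any particular archive:

```python
# Schematic upper-ocean heat content bookkeeping: integrate rho*cp*dT over the
# top 700 m and the ocean area to get Joules, then convert the trend to an
# implied radiative imbalance in W/m^2. Array names and shapes are hypothetical
# placeholders; constant seawater density and heat capacity are assumed.
import numpy as np

rho, cp = 1025.0, 3990.0          # seawater density (kg/m^3) and specific heat (J/kg/K)
earth_area = 5.1e14               # m^2

def ohc_timeseries(dT, layer_thickness_m, cell_area_m2):
    """dT: temperature anomaly array (time, depth, cell) in K for the upper 700 m;
    layer_thickness_m: (depth,); cell_area_m2: (cell,). Returns OHC anomaly in J."""
    per_time = np.einsum('tdc,d,c->t', dT, layer_thickness_m, cell_area_m2)
    return rho * cp * per_time

def implied_imbalance_wm2(ohc_joules, years):
    """Least-squares OHC trend (J/yr) expressed as W/m^2 of the Earth's surface.
    This captures only the upper-ocean share of the imbalance (most, not all, of it)."""
    joules_per_year = np.polyfit(years, ohc_joules, 1)[0]
    return joules_per_year / 3.156e7 / earth_area

# For scale: an OHC trend of ~0.8e22 J/yr corresponds to roughly 0.5 W/m^2 globally.
```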
Gerald Browning says
Gavin,
[Response: I quite agree (but only with your last line). Frank’s argument is bogus – you know it, I know it, he knows it. Since models do not behave as he suggests they should, perhaps that should alert you that there is something wrong with his thinking. Further comments simply re-iterating how brilliant he is are not requested. – gavin]
I point out that there was not a single mathematical equation in the “rebuttal”, only verbiage. Please indicate where Pat Frank’s linear fit is less accurate than the ensemble of climate models in a rigorous scientific manner, i.e. with equations and statistics and not more verbiage.
And I am still waiting for you to disprove that the hydrostatic system is ill posed, and that when the solution of that system is properly resolved with a physically realistic Reynolds number, it will exhibit unbounded exponential growth.
Jerry
Chris N says
Gavin,
I don’t know how you put up with the skeptical posts day in and day out (mine included). It can’t be good for your health. Just imagine doing this for another year, for instance, if next year is cooler than this year. In other words, it’s likely only to get worse with no apparent end in sight. You should take care of yourself, seriously.
Nylo says
I’m sorry to bring the topic back, but getting a plausible answer to it is important to me. In fact, it would make me less of a skeptic.
I have heard many explanations as to why the tropical troposphere is not as warm as the models predicted it to be: wind, air convection, etc. would have taken some of the heat in the troposphere to other layers. That’s fine with me; I believe those explanations. But I still don’t understand, and haven’t seen explained, why we should expect a quick future increase in the warming if the warming is supposed to come from the troposphere and the troposphere is not as warm as predicted. Any clues, please? Anything that sounds plausible will do.
Thanks.