Guest Commentary from Urs Neu
To understand the influence of climate change on tropical cyclone and hurricane activity, it is crucial to know how this activity has varied in the past. There have been a number of interesting new studies of Atlantic tropical cyclones (TCs) and hurricanes (tropical cyclones with maximum sustained winds of at least 74 miles per hour) since my review of the topic a couple of years ago (see here and here). These newer studies underscore that, while our knowledge in this area continues to improve, key uncertainties persist. In particular, it remains very difficult to estimate past trends with confidence.
In assessing past trends, one must distinguish between two distinct time intervals: 1) the period of historical observations (mainly after 1850), and 2) the earlier period for which TC activity can only be reconstructed using proxy data. Furthermore, we have to distinguish between trends in tropical cyclone (TC) number and TC intensity–the latter measure is particularly important from the standpoint of impacts. There is no a priori reason to expect these quantities to vary in concert, either in the past, or in the future. Unfortunately, uncertainties are much greater for intensity than for counts.
In this article, I will review our current understanding of Atlantic TC and hurricane trends with respect to: A) the historical record of basin-wide TC numbers; B) the historical record of hurricanes and TC intensity; C) proxy estimates of TC (primarily hurricane) counts in the more distant past; and D) proxy measures of TC/hurricane intensity in the more distant past. I will conclude with a discussion of current methods for forecasting Atlantic hurricane activity.
The historical record of Atlantic tropical cyclones of the U.S. National Hurricane Center (HURDAT) goes back to 1850. However, only since the start of the satellite era in the 1970s has an area-wide observation system been available. Before that, the density of observations increased with time, either gradually (e.g. the density of ship tracks or of settlements on coasts), or stepwise (e.g. the introduction of reconnaissance flights in 1944 and the launch of GOES satellites in 1975). There was also subjectivity in the various analysis methods, be it the interpretation of local observational data from stations and ships, or of satellite pictures (the Dvorak method). Some improvement can be obtained by reanalyzing the whole data set with a single consistent method (e.g. Kossin et al. 2007) or by analyzing additional data (past meteorological observations not yet included in the database, e.g. Landsea et al. 2008).
However, the inhomogeneity caused by the changing and incomplete areal coverage of measurements before the start of the satellite era in 1975 will never be completely eliminated. The only way to correct for this is to estimate the number of TCs ‘missed’ by the observations in earlier times, using indirect methods. One way to do this is to estimate the average number of ‘missed’ TCs for certain periods, using relationships between general parameters that are known for the past and the TC numbers of the satellite era. While such estimates won’t reveal the ‘right’ TC numbers for individual years, correcting for the average number of ‘missed’ storms (the “undercount bias”) will improve the analysis of long-term trends.
In recent years there have been a number of attempts to estimate the undercount bias. A first attempt by Landsea (2007), based on the percentage of landfalling TCs, has been argued to be implausible, because that percentage shows multidecadal variations and is thus not constant as assumed (Holland 2007). A second approach, by Mann et al. (2007), used the statistical relationships of seasonal TC counts to climate variables (NAO, El Niño, and Main Development Region SST). A third method analyzed past ship tracks (Chang and Guo 2007, Vecchi and Knutson 2008 (“VK08”)). These estimates used a new reanalysis of the ICOADS ship track data. Reanalysis of the ICOADS data for 1911-1925 has led to the detection of about one additional TC per year (Landsea et al. 2008), which suggests that the same reanalysis of pre-1911 data might also lead to the detection of additional TCs.
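To make the second, statistical type of approach more concrete, here is a minimal, purely illustrative sketch of a Poisson count regression of seasonal TC counts on climate covariates, in the spirit of the Mann et al. (2007) method. All predictor series, coefficients and counts below are synthetic placeholders (assumptions for illustration), not the actual data or fitted values of that study.

```python
# Illustrative only: Poisson regression of seasonal TC counts on climate
# covariates, in the spirit of the statistical undercount approach.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_years = 130                                # e.g. a ~1878-2007 record length
mdr_sst = rng.normal(0.0, 0.3, n_years)      # MDR SST anomaly (synthetic)
nino3 = rng.normal(0.0, 1.0, n_years)        # Nino3 index (synthetic)
nao = rng.normal(0.0, 1.0, n_years)          # NAO index (synthetic)

# Synthetic "observed" counts drawn from an assumed log-linear rate
rate = np.exp(2.2 + 0.5 * mdr_sst - 0.15 * nino3 - 0.1 * nao)
counts = rng.poisson(rate)

X = sm.add_constant(np.column_stack([mdr_sst, nino3, nao]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# The fitted conditional mean gives an expected count for each season;
# comparing such expectations with the recorded counts in the pre-satellite
# era is one way to gauge an average undercount per year.
print(fit.params)
print("expected counts, first 5 seasons:", fit.predict(X)[:5].round(1))
```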
A recent analysis (Landsea et al. 2009, “L09”) looked at trends in TCs of different lifetimes and estimated the undercount bias of medium- to long-lived TCs (lasting more than two days) using the ship track method of VK08. They find that most of the positive trend in total TC numbers over the record originates from a strong positive trend in short-lived TCs (lasting two days or less) and, accordingly, that there has been no significant long-term trend in medium- to long-lived TCs. Moreover, the observed long-term decrease in average TC lifetime is also explained by the increase in short-lived TCs. The authors discuss the possible reasons for the increase in short-lived TCs and suggest that the trend is primarily due to improved observations rather than a real phenomenon.
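For readers who want to see what the duration split amounts to in practice, here is a minimal sketch of classifying storms by lifetime with the two-day threshold used in L09. The storm identifiers and fix counts are invented placeholders, not HURDAT entries.

```python
# Illustrative only: split storms into short-lived ("shorties") and
# medium/long-lived using a two-day lifetime threshold, as in L09.
from collections import Counter

HOURS_PER_FIX = 6          # HURDAT-style 6-hourly fixes
THRESHOLD_DAYS = 2.0

# Hypothetical storms: id -> number of 6-hourly fixes at tropical-storm strength
fixes_at_ts_strength = {
    "STORM_A": 5,          # 30 h  -> short-lived
    "STORM_B": 14,         # 84 h  -> medium/long-lived
    "STORM_C": 7,          # 42 h  -> short-lived
    "STORM_D": 28,         # 168 h -> medium/long-lived
}

def classify(n_fixes: int) -> str:
    duration_days = n_fixes * HOURS_PER_FIX / 24.0
    return "short" if duration_days <= THRESHOLD_DAYS else "medium_long"

print(Counter(classify(n) for n in fixes_at_ts_strength.values()))
# Counter({'short': 2, 'medium_long': 2})
```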
The approach of looking at short-lived TCs seems basically reasonable, since the probability of missing short-lived cyclones with local observations (ships etc.) seems higher than for longer-lived TCs. The strong trend in short-lived TCs, and the absence of the same for longer-lived TCs, is an interesting result. However, the nature of the observed strong trend in ‘shorties’ is very unclear: If we assume that the whole trend in short-lived TCs is due to observational changes, the undercount bias before 1920 would be about two TCs per year on average compared to the 1975-2000 period (see Fig. 2 in L09; about 0.5 vs. 2.5 TCs per year). However, the undercount bias of short-lived TCs revealed by the ship track (VK08) method is only between 0.5 and 1 per year for the period 1880-1920 (see Fig. 4 in L09). Thus there is a clear discrepancy between the bias suggested by the ship track reconstruction and the bias resulting from the hypothesis of no trend in short-lived TCs. At first sight the result of the ICOADS reanalysis (Landsea et al. 2008) for 1911-1925, suggesting the possible detection of one additional TC per year before 1910, would be a nice explanation for the discrepancy of about 1 to 1.5 TCs per year. However, a quick look at the newly detected TCs for 1911-1925 reveals that most of them have a lifetime of more than two days and thus do not add to the trend of short-lived TCs. What else could explain the discrepancy? The VK08 method assumes no trend in TC number. Thus the method only allows one to test whether the hypothesis of no change in TC frequency can be rejected. If the analysis had revealed a significant trend, there would be an inconsistency with the method (because fewer or more cyclones in the early period would alter the bias correction).
Another new paper (Emanuel 2010, currently in discussion at JAMES) provides additional information: A downscaling method (described in Emanuel et al. 2008), applied to a reanalysis data set driven by surface temperature, pressure, and sea ice for the period 1908-1958, reveals an increasing trend in short-lived TCs (as defined in L09), which is not much smaller than the trend in the HURDAT record. This indicates that the observed positive trend in short-lived TCs, which is particularly pronounced in this period, might well be real and only partly due to an observational bias. On the other hand, the analysis of all TCs in the North Atlantic reveals a trend for 1908-1958 that is significantly lower than that in the HURDAT record, which is in line with the existence of a general undercount bias.
Another explanation that has been offered for the strong increase in short-lived TCs during the last decade is that new observing and analysis tools, such as QuikSCAT, have allowed additional short-lived and weak cyclones to be detected. While Landsea (2007) estimated the corresponding ‘overcount bias’ at about one TC per year, Landsea et al. (2009) now suggest that this bias might be much larger. To explain the recent increase, it would have to be about two TCs per year. However, this hypothesis is mainly based on one year with an exceptionally high fraction of short-lived and relatively weak TCs (2007, with 9 out of 15). For the other recent years these fractions (2006: 1 out of 9; 2008: 4 out of 16; and 2009: 3 out of 9) were not far from the average before 2000 (about 2.5 out of 9). Given the high variability of both the number of short-lived cyclones and their fraction of the total, it is hard to see any evidence for an observational bias larger than that suggested by Landsea (2007), which is based on storms actually detected through reanalysis of QuikSCAT data (less than one per year).
In summary, the new analyses reveal that most of the long-term trend in TCs over the hurricane record is due to an increase in short-lived TCs and that there seems to be no significant trend in medium- to long-lived TCs. However, it seems unlikely that this trend can be explained solely by an observational bias. Establishing the underlying physical reasons for these trends is a challenge for future research.
If we are interested in damages, trends in the number of hurricanes or even major hurricanes are arguably more important than trends in, say, basin-wide TC counts. Most of the undercount bias discussion has focused on tropical storm numbers in general. The problem of observational biases in the historical record for hurricanes is more or less the same as for tropical storms in general. On the one hand, the probability that a storm with hurricane-force winds remains completely undetected is likely smaller than for a weak tropical storm (because of its higher wind speed and longer lifetime). But on the other hand, the probability that hurricane-force winds within a tropical storm are missed might be similar to the probability of missing a weak tropical storm altogether, because in a hurricane the diameter of the area of hurricane-force winds is only about half that of tropical-storm-force winds. Moreover, it has to be considered that ships tried to avoid strong winds whenever possible. Thus the ship track method of estimating the undercount bias might underestimate the number of missed storms. However, the analysis of ship records and observations uses extrapolations where there is evidence for stronger winds than were measured, which would at least partly compensate for this ‘avoidance problem’.
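The geometric point about wind radii can be illustrated with a back-of-the-envelope sketch. The radii below are assumed round numbers, not climatological values: the chance that a fixed observation point lies inside a wind field scales with its area, while the chance that a ship transect crosses it scales roughly with its diameter.

```python
# Illustrative geometry only (radii are assumed round numbers):
# how much less likely is it to sample hurricane-force winds than
# tropical-storm-force winds in the same storm?
import math

r_ts_force = 200.0     # radius of tropical-storm-force winds, km (assumed)
r_hur_force = 100.0    # radius of hurricane-force winds, km (assumed)

area_ratio = (math.pi * r_hur_force**2) / (math.pi * r_ts_force**2)
diameter_ratio = r_hur_force / r_ts_force

print(f"fixed-point sampling ratio (area-based):       {area_ratio:.2f}")      # 0.25
print(f"ship-transect sampling ratio (diameter-based): {diameter_ratio:.2f}")  # 0.50
```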
Until now the analysis methods described above have rarely been extended to hurricanes. The VK08 ship track method has recently been applied to hurricanes, with the above-mentioned caveats (Vecchi and Knutson, submitted). The results show that more or less as many hurricanes per year would have been ‘missed’ as tropical storms (rising from 1 in the mid-20th century to about 3 in the 1880s). This would turn the long-term trend from positive to slightly negative. Even considering the caveats of the method, this analysis shows that uncertainties in the record are too large to extract any reliable trend information for hurricanes.
Moreover, confident estimates of trends in intensity or related indices (like the Accumulated Cyclone Energy, “ACE”, or the Power Dissipation Index, “PDI”) are even more difficult to achieve, since their estimation requires wide areal data coverage. While the HURDAT data base contains an ACE index calculation, the uncertainties are probably quite a bit larger than for other metrics. A downscaling-based PDI estimate over the period 1908-1950 (Emanuel, 2010) reveals a factor of two difference with HURDAT. It has been argued that wind speed was overestimated even during the aircraft reconnaissance era (1944-1970), based on shifts in the observed wind speed/pressure relationship (Landsea 1993, Emanuel 2007). There is also evidence for an underestimation of wind speed in ship-based and coastal observations before the change from Beaufort to anemometer wind measurements (e.g. Cardone et al. 1990). This change took place in a spatially and temporally inhomogeneous manner.
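For reference, the two indices have simple definitions: ACE accumulates the square of the 6-hourly maximum sustained wind (in knots, at tropical-storm strength or above, scaled by 10^-4), while the PDI integrates the cube of the maximum wind over the storm lifetime. A minimal sketch follows, using invented wind values rather than any real storm:

```python
# Illustrative only: ACE and PDI computed from the 6-hourly maximum sustained
# winds of a single hypothetical storm; real calculations sum over all storms
# in a season or basin.
import numpy as np

vmax_kt = np.array([35, 45, 60, 75, 90, 80, 55, 40], dtype=float)  # invented fixes

# Accumulated Cyclone Energy: 1e-4 * sum of squared winds (knots),
# counting only fixes at tropical-storm strength (>= 34 kt).
ace = 1e-4 * np.sum(vmax_kt[vmax_kt >= 34] ** 2)

# Power Dissipation Index: integral of the cubed wind speed over the
# storm lifetime (winds converted to m/s, 6-hour time step in seconds).
vmax_ms = vmax_kt * 0.5144
pdi = np.sum(vmax_ms ** 3) * 6 * 3600

print(f"ACE ~ {ace:.2f} (10^4 kt^2), PDI ~ {pdi:.2e} m^3 s^-2")
```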
The main problem in reconstructing past basin-wide hurricane activity is the lack of spatially comprehensive proxy data, because a) most of the area where hurricanes occur is open ocean, b) hurricanes are of geographically limited extent, and c) the number of occurrences at any given location is very small. Local proxies of hurricane occurrence therefore give only a very patchy impression of what has happened. There are two options for tackling these problems: 1) to combine as many individual records as possible (over space and time) to partly circumvent the high spatial and temporal variability, and 2) to look for proxies of spatially more homogeneous parameters that are linked to basin-wide hurricane activity.
Until now, reconstructions of hurricane activity have been made for several locations along the American coast (e.g. Donnelly and Woodruff 2007, Elsner et al. 2008). However, these reconstructions can only give a very rough impression, due to the high spatial variability of hurricane activity. Donnelly and Woodruff, for example, found correlations of past hurricane activity with El Niño and the West African monsoon, relationships that are well known from the modern record.
Until recently there had been only one attempt to reconstruct basin-wide past activity: Nyberg et al. (2007) used proxies which they suggested represent basin-wide wind shear. The problems of this reconstruction – mainly an opposite long-term trend of their two proxy series, problems with calibration, and discrepancies between the reconstructed and observed records – have been discussed by Neu (2008).
Recent work by Mann et al. (2009) has sought to employ and compare both approaches, i.e. integrating as many individual sites as available as well as using a statistical model based on proxies for the key governing climate variables. They present two independent reconstructions of hurricane activity over the last 1500 years. Both reconstructions consistently show a peak in Atlantic hurricane activity during medieval times, although the subsequent decrease is not synchronous. The statistical model explains the medieval peak, which is similar to current levels of activity, by La Niña-like climate conditions and a relatively warm tropical Atlantic. This work is an important step forward and provides the most comprehensive information possible with existing records. However, as the authors acknowledge, the number of individual records is still rather low, and proxies for potentially important large-scale influences (e.g. the African monsoon) are not used. Nevertheless, although the assessment leaves many open questions, some features seem reasonably robust and match current knowledge of external influences on Atlantic hurricane activity, like ENSO and surface temperature. However, there is still much to be done.
While the number of tropical cyclones or hurricanes in the distant past is difficult to explore, reconstructing changes in tropical cyclone intensity is even more demanding. Most proxies, like overwash sediments, are based on cut-off effects, i.e. yes-or-no information (whether an event led to an overwash or not). The amount of deposit is very difficult to link to intensity, since there are a lot of confounding factors. Proxies linked to basin-wide information on pre-existing conditions influencing tropical cyclone strength might be a solution. However, the corresponding uncertainties are so high that there is little chance of obtaining a signal-to-noise ratio good enough to distinguish any trends, at least with existing proxy records.
Inherent uncertainties in the observational Atlantic TC/hurricane record preclude confident estimates of trends prior to the mid-20th century. This does not mean that we cannot draw some instructive conclusions from the record. For example, it is possible to draw useful conclusions regarding the factors that influence TC and hurricane activity from interannual through interdecadal timescale relationships, even if long-term trends might be compromised by observational biases. These relationships, in turn, might inform our understanding of future climate change impacts on Atlantic tropical cyclone behavior.
A short remark on last year’s hurricane season is worthwhile: As expected given the development of El Niño, the 2009 hurricane season was rather quiet (9 tropical storms, 3 hurricanes, 2 major hurricanes). Exceptional was the development of a tropical storm (“Grace”) from an extratropical cyclone farther northeast than ever observed before (near the Azores); it only became extratropical again some 200 miles southwest of the British Isles. It is difficult to read much into such single events (like the first recorded tropical storm in the South Atlantic a few years ago), e.g. whether they signal an extension of the area in which tropical storms occur, since such phenomena would likely not have been detected before the satellite period, and even within that period detection is not guaranteed.
Finally, some remarks about the outlook for the current season: Current atmospheric and oceanic conditions seem very conducive to an active hurricane season, including very warm SSTs in the tropical Atlantic and the Caribbean and a medium probability of La Niña developing, which favors the development of tropical storms in the Atlantic. For a more detailed description see the seasonal forecast of NOAA. The forecasts from different authors are almost all in the same range as the one from NOAA, including that of Klotzbach and Gray, who are now applying their third forecast algorithm in the last four years, since the older ones showed no forecast skill at all (correlations near zero). However, seasonal forecasting seems really tricky, as the attempts of Klotzbach and Gray to find predictive patterns in the atmosphere and ocean have shown. While their patterns show very good hindcast skill over about 50 years (correlations of about 0.8), the forecast skill for the following years immediately dropped to near zero. It is not clear whether they tested the hindcast skill on independent sub-periods (e.g. even vs. odd years), which could have revealed the problem earlier. If their new method is more skillful, the years to come should demonstrate this.
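The even-versus-odd-years idea is a simple out-of-sample check that anyone can apply to a statistical seasonal forecast scheme. A minimal sketch follows, using synthetic predictor and count series rather than any published scheme: a method whose skill collapses on the held-out years is likely overfit to its calibration period.

```python
# Illustrative only: calibrate a toy seasonal-count regression on even years
# and measure its correlation skill on odd years (and vice versa).
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2010)
predictor = rng.normal(size=years.size)                 # synthetic climate index
counts = rng.poisson(np.exp(2.3 + 0.3 * predictor))     # synthetic storm counts

def holdout_skill(train_mask):
    test_mask = ~train_mask
    slope, intercept = np.polyfit(predictor[train_mask], counts[train_mask], 1)
    predicted = slope * predictor[test_mask] + intercept
    return np.corrcoef(predicted, counts[test_mask])[0, 1]

even = (years % 2 == 0)
print("trained on even years, tested on odd:", round(holdout_skill(even), 2))
print("trained on odd years, tested on even:", round(holdout_skill(~even), 2))
```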
A recently developed forecasting approach by Sabbatelli and Mann has shown promising skill, but until now it has only been applied for two years (2007 and 2009), which provides a limited basis for evaluation. For 2010, the method predicts 23 ± 5 named storms, a number that is somewhat higher than the other forecasts.
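As a side note (and not the authors’ own derivation), the quoted ±5 is about what one would expect from counting uncertainty alone for a Poisson-distributed season with a mean of 23 storms:

```python
# Back-of-the-envelope check: Poisson counting standard deviation for mean 23.
import math
print(math.sqrt(23))   # ~4.8, i.e. roughly the quoted +/-5
```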
The current forecast range for the 2010 season based on all published predictions is:
14-28 tropical storms
8-14 hurricanes
3-7 major hurricanes
It will surely be instructive to do a post-mortem on the forecasts when the season is done.
Acknowledgments
I’d like to thank Gabriel Vecchi for helpful discussions and Mike Mann and Chris Landsea for their comments.
References
Cardone V. J., J.G. Greenwood, and M. A. Cane, 1990: On trends in historical marine wind data. J. Climate, 3, 113–127.
Chang E.K.M, Y. Guo, 2007: Is the number of North Atlantic tropical cyclones significantly underestimated prior to the availability of satellite observations? Geophys. Res. Lett., 34, L14801.
Donnelly, J. P. and Woodruff, J. D., 2007: Intense hurricane activity over the past 5,000 years controlled by El Niño and the West African monsoon. Nature 447, 465–468.
Elsner, J. B., Jagger, T. H. and Liu, K. B., 2008: Comparison of hurricane return levels using historical and geological records. J. Appl. Met. Climatol. 47, 368–374.
Emanuel K., 2007: Environmental factors affecting tropical cyclone power dissipation. J. Climate, 20, 5497-5509.
Emanuel K., R. Sundararajan, and J. Williams, 2008: Hurricanes and global warming: Results from downscaling IPCC AR4 simulations. Bull. Amer. Meteor. Soc. 89, 347‐367.
Emanuel K, 2010: Tropical Cyclone Activity Downscaled from NOAA‐CIRES Reanalysis, 1908‐1958. J. Adv. Modeling Earth Syst., in press.
Holland G.J., 2007: Misuse of Landfall as a Proxy for Atlantic Tropical Cyclone Activity. EOS Trans., 88, 349-356.
Landsea, C. W., 1993: A climatology of intense (or major) Atlantic hurricanes. Mon. Wea. Rev., 121, 1703–1713.
Landsea C.W., 2007: Counting Atlantic Tropical Cyclones Back to 1900. EOS Trans., 88, 197-208.
Landsea C.W., D.A. Glenn, W. Bredemeyer, M. Chenoweth, R. Ellis, J. Gamache, L. Hufstetler, C. Mock, R. Perez, R. Prieto, J. Sanchez-Sesma, D. Thomas, and L. Woolcock, 2008: A Reanalysis of the 1911–20 Atlantic Hurricane Database. J. Climate 21, 2138-2168.
Landsea C.W., G.A. Vecchi, L. Bengtsson, and T.R. Knutson, 2009: Impact of Duration Thresholds on Atlantic Tropical Cyclone Counts. J. Climate, DOI: 10.1175/2009JCLI3034.1 (published online)
Mann, M.E., Sabbatelli, T.A., Neu, U., 2007: Evidence for a Modest Undercount Bias in Early Historical Atlantic Tropical Cyclone Counts, Geophys. Res. Lett., 34, L22707, doi:10.1029/2007GL031781.
Mann M.E., J.D. Woodruff, J.P. Donnelly, and Z. Zhang, 2009: Atlantic hurricanes and climate over the past 1,500 years. Nature, 460, 880-883.
Neu, U., 2008: Is recent major hurricane activity normal? Nature 451, E5
Nyberg, J. et al., 2007: Low Atlantic hurricane activity in the 1970s and 1980s compared to the past 270 years. Nature 447, 698–702
Sabbatelli T.A, M.E. Mann, 2007: The influence of Climate State Variables on Atlantic Tropical Cyclone Occurrence. J. Geophys. Res., 112, D17114.
Scott, D. B., Collins, E. S., Gayes, P. T. and Wright, E., 2003: Records of prehistoric hurricanes on the South Carolina coast based on micropaleontological and sedimentological evidence, with comparison to other Atlantic Coast records. GSA Bull. 115, 1027–1039.
Vecchi G.A., T.R. Knutson, 2008: On Estimates of Historical North Atlantic Tropical Cyclone Activity. J. Climate, 21, 3580-3600.
Vecchi G.A., T.R. Knutson, 2010: Estimating annual numbers of Atlantic hurricanes missing from the HURDAT2 database (1878-1965) using ship track density (submitted)
Kevin McKinney says
“. . . the still-vexed Bermoothes” [Bermudas.]
–The Tempest, Shakespeare
Clearly a ‘vexed’ subject, these 400 years later. Thanks for shedding what light is so far to be had.
Steve Bloom says
Urs, have you seen Emanuel’s new breakthrough paper? It seems that GCMs fail to capture lower stratospheric cooling, something that Emanuel says he only became aware of about six months ago(!). Taking that factor into account seems to make everything fall into place relative to understanding recent trends. According to the audio, Tom Knutson is on board, although not being in the loop on such things I’m not aware of reactions from anyone else in the field. Among other implications, it would seem that 2005 is not such an outlier after all. Anyway, your (and Mike’s) thoughts on this would be appreciated.
[Response: Others can comment too, but the tool being used here (AMIP style runs with only specified SST) has absolutely no chance of capturing the lower stratospheric cooling that has occurred over the last few decades. None whatsoever. That cooling is driven by a combination of stratospheric ozone depletion and increasing CO2 – and since the process is predominantly radiative (and not related to SST in any way), I don’t quite see the point. If you want models with LS cooling, use the models driven by the actual forcings: compare this (AMIP/SST only) to this (AMIP+forcings). – gavin]
[Response: Actually, Kerry’s work uses the boundary conditions from the full three-dimensional fields of the CMIP3 historical simulations, so these aren’t AMIP–unless, I’m missing something here. – mike]
colin Aldridge says
A very useful summary of current research, which as I read it says there is no observable short-term trend except, possibly/probably, for short-lived tropical storms, which are on the increase. Hurricanes correlate with La Niñas, as is well known, and maybe we had more hurricanes in medieval times (does this correlate with the medieval warm period, I wonder?).
I thought that most GCMs suggest more El Niños but conversely more high-intensity hurricanes. Am I wrong?
Chris Colose says
Urs, thank you for this great summary article. It provides a great background into the issue. There’s also a nice review article put out recently by Knutson et al. (including Landsea and Emanuel as co-authors) which I think a lot of people would find useful: ftp://texmex.mit.edu/pub/emanuel/PAPERS/ngeo_779_MAR10_-_print_issue.pdf
Steve Bloom says
Re #2 response: Thanks, Gavin, but now I’m a little mystified. Maybe the issue is specific to the GCMs that have been used to model TCs? Also, I had the impression from the paper that the modeling exercise Emanuel did in it was for the different purpose of isolating the effect of the lower strat cooling.
[Response: You can’t use GCMs to directly model TCs – all such projections have been based on a statistical downscaling technique or they used GCM temperature anomalies to drive a higher resolution TC model. The coupled GCMs – run with the appropriate forcings – do produce lower stratospheric cooling, and that continues into the future. Perhaps someone who knows better (or has read Emanuel’s paper in full) could comment? – gavin]
Re #4: Chris, that’s the TC consensus paper that Emanuel and Knutson now seem to have abandoned.
David Miller says
Colin asks in post #3: “I thought that most GCMs suggest more El Niños but conversely more high-intensity hurricanes. Am I wrong?”
As I understand it, those aren’t necessarily in conflict. El Niños produce fewer hurricanes because of wind shear at higher altitudes. However, with higher SSTs and fewer hurricanes removing energy, those hurricanes that do get going will be of higher intensity.
Subsequent La Niña years will have both more hurricanes and more intense hurricanes, driven by SSTs and a lack of wind shear.
I don’t think El Niños and more hurricanes go together, but El Niños and more intense hurricanes certainly can.
Kerry Emanuel says
Perhaps I can shed some light on this. First, the paper in question is a work in progress and will involve quite a few experts on the stratosphere before it is published in peer-reviewed literature. We have run several AGCMs driven only by varying SST and sea ice, but have also performed some simulations with models that allow CO2, ozone, and/or aerosols to vary. None of these capture the full extent of the observed cooling of the lower stratosphere. There is a growing literature on the subject of lower stratospheric cooling (see, for example, Randel et al., JGR, 2009) but the issue of what is causing it appears far from settled. The altitudes that appear to affect tropical cyclone activity are a little too low to pick up a strong direct CO2-induced cooling, but ozone depletion is almost certainly one factor. We are also working on the dynamical response of the stratosphere to changes in the meridional extent of the Hadley circulation at its base.
The one result I am fairly sure about is that, whatever the cause, the cooling of the lower stratosphere is essential in explaining the uptick of Atlantic tropical cyclone activity since the early 1990s, through its effect on potential intensity. That no GCM, including models that focus on stratospheric chemistry (according to Susan Solomon), captures this adequately is worrisome when it comes to using those models to project tropical cyclone activity forward.
We are working hard on this and hope to have something more definitive by fall.
[Response: Thanks for stopping by. As I’m sure you are aware, but it’s worth mentioning anyway, there are reasons to think that the observations might be exaggerating the cooling a little (specifically the radio-sondes due to correlations in time with balloon hardiness….), but that is another topic. – gavin]
Pierre Champagne says
Coincidently, the headlines on the National Geographic News page read:
“Ominous” Pre-Katrina Conditions Now in Atlantic
Current warm waters, calm winds resemble those preceding worst hurricane season.
We need to get more aggressive with climate change and adopt a strategy such as:
A Structural Approach for Carbon Emissions, Renewable Energy, Non-Renewable Resources, Recycling, Population Growth, Toxic Contaminants, …
The individual components of the approach–the environment is not only about climate change–can be viewed at:
Climate Change–Emission Reduction–Recycling–Toxic Contaminants–Non-Renewable Resources–Population Growth
Isaac Held says
Gavin, re your answer in #5;
Your categorical statement that GCMs cannot simulate TCs is not correct. I will only mention our recent work at 50km resolution, running over observed SSTs, but there are a number of other studies along similar lines.
stefan says
The discussion on Kerry’s new work is fascinating. In the recent Nature Geoscience review, the authors (including Kerry) wrote: “However, future projections based on theory and high-resolution dynamical models consistently indicate that greenhouse warming will cause the globally averaged intensity of tropical cyclones to shift towards stronger storms, with intensity increases of 2–11% by 2100.”
When that came out I felt that it expresses a little too much confidence in our current ability to model future TC changes. In contrast, in the Copenhagen Diagnosis we concluded that “we have as yet no robust capacity to project future changes in tropical cyclone activity.”
Now that the stratospheric cooling issue opens up a whole new ball game, not yet properly included in the models, our more cautious statement seems doubly justified.
Urs Neu says
Thanks for the links to new papers. They mainly address processes and projections, which was not a focus of this post. Of course, projections are the thing we are most interested in.
As Kerry Emanuel convincingly explains, the influence of lower stratospheric cooling seems to be an important factor to consider. It seems plausible that it explains part of the recent observed increase. However, at present it is difficult to estimate how much of the increase it might explain. As Gavin mentioned, concerning the tropical upper troposphere and lower stratosphere there is probably a similar amount of uncertainty in the measurements as in the models. And it is not clear which are closer to reality. Maybe the truth is somewhere in between.
A point to consider in the discussion of the meaning of this influence is that, since this cooling is probably mainly due to ozone depletion, we have to take into account a possible recovery of the ozone layer when talking about long-term changes (e.g. until the end of the 21st century). If the ozone layer does recover during this century, a possible problem of GCMs with this effect might not matter too much for projections to 2100. On the other hand, if stratospheric cooling turns out to be the main reason for the recent strong increase (which is stronger than what we expect from SST warming), we have to expect that a high activity level might persist for the next couple of decades.
Thus we are very curious about Kerry Emanuel’s (and others’) further work on this.
Moreover, there are many influences on tropical cyclone frequency and intensity. Therefore, periods of high activity can be caused by different processes or combinations of processes. Bell and Chelliah (2006), for example, attribute the high Atlantic activity in the 1950s mainly to West African monsoon activity, while the current high activity is more related to warm SSTs.
Considering this, it seems clear that
a) long-term trends (e.g. over the whole hurricane record) are not much help for projecting future developments,
b) the absence of a long-term trend does not mean that the current increase has nothing to do with greenhouse gases or that there will be no trend in the future, and
c) past correlations between some influencing factors and observations might not be very helpful for projections, since the importance of different influencing factors and processes changes over time (as is underlined by the fact that procedures with good hindcast performance do not necessarily show good forecast skill).
Actually there is an awful lot about the future development of tropical cyclone activity that we do not know (yet). But this does not mean (as denialists like to claim) that we know nothing and cannot say anything about the influence of global warming on tropical cyclones. We do know, for example, that there will likely be considerably more precipitation in connection with tropical cyclones. And since a large part of the damage caused by hurricanes is due to precipitation, this is not comforting at all. With regard to potential damage, this knowledge (which we have) might be even more important than knowledge about an increase in intensity (which is still limited), because it concerns all landfalling tropical cyclones and not only the rather rare category 4 and 5 landfalls.
Scott A Mandia says
Regarding stratospheric cooling, Randel et al. (2009) show that the lower stratosphere has not noticeably cooled since 1995. This is no surprise because ozone levels are increasing. One would assume that increasing ozone would lead to warming, so greenhouse gases must still be cooling the lower stratosphere.
Randel, W. J., et al. (2009). An update of observed stratospheric temperature trends. J. Geophys. Res., 114, D02107, doi:10.1029/2008JD010421.
Schwarzkopf & Ramaswamy (2008) used an atmosphere-ocean climate model to investigate the evolution of stratospheric temperatures over the twentieth century. They modeled known anthropogenic and natural forcing agents. In the global lower-to-middle stratosphere (20–30 km) their simulations produce a sustained, significant cooling by 1920, earlier than in any lower atmospheric region, largely resulting from carbon dioxide increases. After 1979, stratospheric ozone decreases reinforced the cooling.
The results indicate that natural forcing mechanisms cannot explain the stratospheric cooling, while increased CO2 would be responsible for most of the cooling in the upper stratosphere and a significant amount in the lower stratosphere, even with leveling or increasing ozone after 1995.
Schwarzkopf, M. D., & Ramaswamy, V. (2008) Evolution of stratospheric temperature in the 20th century, Geophys. Res. Lett., 35, L03705, doi:10.1029/2007GL032489.
Scott A. Mandia, Professor of Physical Sciences
Selden, NY
Global Warming: Man or Myth?
My Global Warming Blog
Twitter: AGW_Prof
“Global Warming Fact of the Day” Facebook Group
Steve Bloom says
Urs, my apologies if I actually was off-topic, but what I thought was most interesting about the paper was the excellent match obtained with the TC trend of the last ~40 years once lower strat cooling was taken into account. One can argue about the relative precision of the lower strat temp measurements, but the difference between Kerry’s results taking them into account versus prior efforts that don’t really is quite striking. In the audio, Kerry mentions that Tom Knutson has gotten confirming results using “a different method,” which even though no details are provided tends to increase my confidence in the paper, TK not being chopped liver when it comes to these things.
Scott, bear in mind that we want to consider the lower strat temps where the TC outflows exist (the tropics and sub-tropics) rather than globally. Possibly there are also zonal differences that matter.
Guy Schiavone says
In the recent readjustment analysis of Atlantic basin tropical cyclone frequency for the pre-1958 period, what I find lacking is attention to possible over-counting of tropical storms. Aircraft reconnaissance and satellite-era sensors such as QuikSCAT allow verification of a closed surface circulation, which eliminates from the seasonal counts strong tropical waves that might otherwise have been counted as tropical storms. Tropical waves may persist for days with strong tropical-storm-force surface winds but with no completely closed surface circulation, before either dissipating or finally organizing into a true tropical cyclone. The possibility that these strong tropical waves were recorded as tropical storms in the earlier era of ship and land station observations seems to be ignored by the recent research. Emanuel 2010 notes,
“…The slightly higher residual variance than would be predicted in a Poisson process may indicate either that there is still some residual climate signal in the residual series, that the random component of variability does not have the character of a Poisson process, or that the corrected best-track data and the downscaled events are not after all drawn from the same population.”
It would seem to me that further analysis of the importance of the second and especially the third possibilities is merited with a reconsideration of possible over-counting as well as under-counting in older observations.
Scott A Mandia says
According to Fedorov, A., Brierley, C., & Emanuel, K. (2010) in Nature, during the early Pliocene there was an almost permanent El Niño and much greater TC activity in all tropical basins. The authors also state that “GCM calculations show an atmospheric circulation for the early Pliocene with weaker (meridional) Hadley and (zonal) Walker cells—the weakened atmospheric circulation implies reduced vertical wind shear, which is favourable for tropical cyclones.”
Currently, El Niño years result in more vertical shear in the Atlantic basin, which inhibits hurricane activity. Why then does Fedorov et al.’s model show much greater hurricane activity in the Atlantic with a semi-permanent El Niño? Is it that the shear has been effectively moved poleward due to expanding Hadley Cells in a warmer world? Is this what they mean by weakened Hadley Cells? Does the Walker Cell weaken because there is less SST gradient across the Pacific?
BTW, figure 2 in that paper is just breathtaking. The image is a sobering look at what the year 2100 and beyond may hold for us. Awful!
Fedorov, A., Brierley, C., & Emanuel, K. (2010). Tropical cyclones and permanent El Niño in the early Pliocene epoch, Nature, 463, doi:10.1038/nature08831
Scott A Mandia says
Steve Bloom: Please send me an email (mandias -at- sunysuffolk.edu). I wish to ask you a question “offline”. Thanks.
Urs Neu says
Steve, I agree that the match looks convincing. However, since the downscaling strongly depends on the reanalysis, and the reanalysis depends on measurements that have considerable uncertainty, I wouldn’t be sure that stratospheric cooling explains most of the increase. If stratospheric cooling is actually smaller than the measurements suggest (for which there is some evidence), the match would be less good and there would be room for other influencing factors. It’s possible that it is the principal factor, but we shouldn’t be fooled by (too) good correlations given these uncertainties. It would be interesting to have the pre-1980 data, which have been excluded due to inhomogeneity problems.
Guy, I agree that we have to look at over-counting. How far the effect you mention has been accounted for by the reconstruction should be discussed by the specialists (as e.g. Chris Landsea).
Scott, El Niño in principle leads to a change in the regional distribution of tropical cyclone activity (roughly a decrease in the Atlantic, an increase in the Eastern North Pacific, and regional shifts in the Western North and South Pacific, respectively). Globally, there is no significant change in frequency. If you now look at a state (the Pliocene) where there is an overall increase in tropical SSTs and reduced wind shear, you will see an overall increase in tropical cyclones. These factors are more important than the redistribution effect of El Niño.
The weakening and northward expansion of the Hadley cell are two different things (although they are connected to some extent). The weakening is related to the speed of the circulation, the expansion is related to the geographical location of upward (ITC) and downward (subtropical subsidence) flow.
Scott A Mandia says
Thank you, Urs.
Steve Bloom says
Urs, I don’t think Kerry is promoting lower strat cooling as the principal factor; that’s still SSTs. A relatively small change in inflow minus outflow temp seems to make for a large change in the trend, all else equal.
Guy, the weakest storms have been the largest problem in the reconstructions all along, and it seems doubtful that there will ever be a definitive answer, but FWIW this issue isn’t very important when looking at total TC energy metrics since the weakest storms don’t have much. Of course everyone’s very interested in comparing the counts, but arguably that’s a little horse-racey.
RaymondT says
This post is truly captivating. As an engineer I was wondering how you can possibly test the capability of GCMs to predict QUANTITATIVELY an increase in hurricanes (paper by Knutson et al.) with increasing radiative forcing due to CO2. How significant is an increase of 2 to 11% by 2100 given the uncertainties in the climate models? I think all policy makers who wish to spend 1% of GDP should read this post to better appreciate the uncertainties in attributing catastrophic climate events to increased CO2 levels. They would realize that although the consensus on increased global temperatures with increased CO2 levels is strong, the attribution of increased hurricane activity to increasing CO2 levels is far from being established.
Urs Neu says
Steve: yes, of course. I was thinking of the explanation of the residual that is not explained by SSTs, or the increase that is beyond what we expect as an effect of warming, respectively.
Scott A Mandia says
Raymond,
The possible impact of climate change on hurricane frequency and intensity is just one of many impacts that are very concerning. Sea level rise alone should warrant spending 1% to 2% GDP to try to mitigate. Then we can add on the most likely negative impacts on:
Freshwater Resources, Ecosystems, Ecosystem Services, Biodiversity (inc. Ocean Acidification), Agriculture, Fisheries, Food Production,
Public Health, and National and Human Security.
1% – 2% GDP seems like small change to me given the many trillions of dollars that climate change is very likely to cause – let alone the human suffering. We should also not forget the suffering of nature.
I will be adding many new Web pages on my GWMM Web site summarizing these impacts. In the meantime, I am posting them on my blog at http://profmandia.wordpress.com/
Monday will feature Sea Level Rise & the Coastal Environment
Steve Bloom says
Re #20: Raymond, to resort to an oft-used analogy, think of it like buying fire insurance. My own, I think somewhat more apt, development of that is to say that not taking strong action against climate change is like refusing to buy fire insurance after you’ve found out that the insurance agent moonlights as the local arsonist. You might get away without buying the insurance, but the smart money bets the other way.
Please also note that Knutson has confirmed Kerry’s results.
The “tempest in a teapot” over TC projections (OK, it’s a big teapot) also needs to be placed in its broader context. As the planet warms, the water vapor content of the atmosphere increases (basic physics), by about 4% over the last thirty years. As this proceeds, extreme precipitation events inevitably become more common. TCs are a special class of such events, but far from the only one. Extratropical cyclones, e.g., (the sort that starred in The Perfect Storm) are far more common than TCs and are not limited by wind shear in the way TCs are. Also include just plain heavy rains of the sort we recently saw in Tennessee. See a discussion of this broader topic here.
Finally, NOAA’s National Climatic Data Center now has a page tracking extremes.
Doug Bostrom says
Further to Steve Bloom’s comment about insurance, we currently spend something like 6% of global GDP on insurance of various kinds. Raymond wonders about the wisdom of spending 1% of GDP to reduce our risk.
One could reasonably argue that climate mitigation expenditures are at least as sensible as the astronomical amount of money we now spend on insurance.
As with the capital amassed via insurance, that 1% will not vanish but instead will end up circulating in our economy. Money spent on insurance reemerges in various forms, some used to replace value lost as per contractual arrangements, some in the form of capital investments, with the rest dissipated via administrative expenses and profits amassed and spent. Money spent on mitigation of climate change will be directed to a purpose benefiting us in the medium to long term, as our temporary fixation with fossil fuels becomes impossible due to supply problems. Thus mitigation money will not only help to control our risk but will also be directly beneficial to economic activity now and in the future, as it is used not only to extend our current fuel supply but to provide the substitutes we’re certain to require down the road.
In sum, it’s hard to characterize money spent on mitigation as a waste, or characterize it as part of a zero-sum game. Mitigation expenditures represent a change in the vector of money but not a disappearance.
Meanwhile, down the road, if we’ve not spent sufficient capital to handle weaning ourselves from fossil fuels we’ll see a real crash in GDP as we’re simply unable to fuel “growth.” The later we put off these expenditures the more difficult and costly the weaning process will be because the fossil fuel lever we have to work with now grows ever shorter on the long end while the load on the short end becomes heavier.
Brian Dodge says
What’s the expected effect of AGW on the harmattan winds, and soil moisture in dust source areas in the Sahara? My understanding of the dynamics is that more dust = fewer hurricanes, so those changes could impact TC frequency.
llewelly says
Steve Bloom says 18 June 2010 at 1:43 PM:
There are two serious problems with your analogy. First, no-one (so far as I know) is emitting GHGs with the primary deliberate intent of causing global warming. Second, in your analogy, the insurance agent is using a kind of terrorism to extort funds. Despite the claims of those who advocate a State of Fear type conspiracy theory, there is no evidence anyone is using terrorism to force people reduce GHG emissions. Your analogy (unintentionally) implies two untrue items, the second of which is a common talking point of the more extreme denialists.
Steve Bloom says
No analogy is perfect, llewelly. The point of my variation was to emphasize the certainty of the bad consequences, something the straight version doesn’t do (since buildings burn down relatively rarely).
Could a mod fix the html in #23? Thanks.
James Staples says
I think I once read about a study of Alpine ice core data, which contained speculation that a hurricane or tropical storm may have survived crossing the Atlantic and the narrowest part of the Iberian Peninsula, thus making it into the Mediterranean Sea; which resulted in a singular deluge that added a distinctly thick layer of summer-time snow pack to the Alps, and thus formed a much thicker than normal annual layer in the ice pack.
This is a distant memory – and may have simply been speculation, as I’ve been studying this issue (as a ‘lay-polymath’) ever since Dr. Hansen hit the science magazines in the 1970’s.
If it hasn’t happened already, I would have to wonder if it won’t anyway – as the northern parts of Normandy/Brittany/Galicia have been battered by Hurricanes that DID make most of said trip before, eh?
Tim Jones says
Regarding NOAA Satellite and Information Service. http://www.osei.noaa.gov/
Interesting that this government agency is quoting Fox News as its source of information.
Normally we would be having pictures of Atlantic tropical cyclones, among other events. But we don’t have any yet this year. The examples may be OT, but they only serve to illustrate the point.
To wit:
http://www.osei.noaa.gov/Events/Current/OILgulfmexico144_MO.jpg
At least 6 million gallons of crude have spewed into the Gulf, though some scientists have said they believe the spill already surpasses the 11 million-gallon 1989 Exxon Valdez oil spill off Alaska as the worst in U.S. history. The spill’s impact now stretches across 150 miles, from Dauphin Island, Ala. to Grand Isle, La. BP said its costs for responding to the spill had grown to about $760 million, including containment efforts, drilling a relief well to stop the leak permanently, grants to Gulf states for their response costs and paying damage claims, as reported by Fox News.
&
http://www.osei.noaa.gov/Events/Current/OILgulfmexico130_MO.jpg
An estimated 3.5 million gallons of oil have spilled since an explosion on April 20 on the drilling rig, the Deepwater Horizon, 50 miles off the Louisiana coast. At that pace, the spill would surpass the 11 million gallons spilled in the Exxon Valdez disaster by next month. BP which is responsible for the cleanup, said Monday the spill has cost it $350 million so far for immediate response, containment efforts, commitments to the Gulf Coast states, and settlements and federal costs, as reported by Fox News.
Urs Neu says
Brian,
Yes, as far as we know dust from the Sahara seems to influence tropical storm frequency over the Atlantic. Dust production depends on the winds over the Western Sahara (e.g. the monsoon). However, how AGW will change the corresponding wind systems is very uncertain. Thus this is one more influence on tropical storms where we don’t really know what effects AGW will have.
Raymond, the more uncertain a bad development is, the better off you are to avoid it, since there will not be the possibility of insurance (insurance is only possible for things for which a certain knowledge of the probability of occurrence and of the possible damage is available).
llewelly says
James Staples says:
19 June 2010 at 5:15 PM:
There are some complicating issues. First, some European weather reporters use the term “hurricane” to refer to extratropical storms, some of which have tropical origins, and some of which do not. Second, there is no historical record of a tropical cyclone reaching mainland Europe with both tropical characteristics and hurricane-force winds. A few systems have reached the Azores with tropical characteristics, and one system (Debbie of 1961) reached Ireland with tropical characteristics. Hurricane Vince (2005), which reached the Iberian peninsula as a tropical depression, is perhaps the nearest event to a historical example of a hurricane reaching mainland Europe with tropical characteristics. Third, tropical cyclones often transition into extratropical cyclones, losing tropical characteristics, and a fair portion of these pass through or near European waters, but long after they have lost all tropical characteristics. Some of these systems have brought hurricane-force winds and/or heavy rainfall to European coastlines, or even the Mediterranean. An approximate overview of these systems can be found in Wikipedia, with the normal caveats about checking the references and the modification history of the pages(0). Finally – I don’t think the “singular deluge” you refer to requires that the system retain tropical characteristics all the way to Europe.
The role of global warming in all this is unclear, except for the well-supported fact that warm air can hold more water, and thus global warming contributes to increase in the severity of heavy precipitation events in general. There has been some suggestion that global warming would cause a poleward expansion of the areas affected by hurricanes, but despite strong evidence for poleward expansion of warm SSTs, there does not seem to be much evidence for poleward expansion of hurricane activity, except perhaps in the Atlantic, which represents only 11% of global activity (although it is by far the best studied).
(0)For example, the wikipedia page on Hurricane Arlene of 1987 claims “The remnant moisture from Arlene continued through the Mediterranean Sea and produced heavy rains across Italy on August 27” and references Extreme Precipitation Events Over Northwest Italy, but in table 1 on page 4, Arlene is one of 3 systems marked with a question mark, which I take to mean there was not a clear link between Arlene and the precipitation event. However the paper does contain several examples of extra tropical remnants of tropical cyclones contributing to heavy precipitation over northern Italy, including the alps.
MalcolmT says
@ Raymond and all who replied re ‘insurance’:
Don’t forget the Stern Review http://www.webcitation.org/5nCeyEYJr which said very bluntly that we are much better off, economically, acting now than waiting until later.
Harry Eagar says
Just because they cannot confidently estimate past trends, does not mean they haven’t done so.
Jeffrey Eric Grant says
I am a retired engineer, with considerable interest in and some experience working within the climate sciences. I ask any of you gentlemen to direct me to a scholarly article on the exact role that CO2 plays in sea level change. I am looking for a scientific study based on actual experimentation. I am looking for a proof of the theory. I am well versed in the scientific method and wish to undertake a study of the science. I have been looking, but have not found convincing proof. What I have found is a lot of hypothesis and now a large amount of political dissertation.
Please send me some web site that has the proof.
Thanks.
[Response: Hmmm. If you are looking for “proof”, you should consult a site dealing with math or formal logic. Science, which is what this site is about, deals with degrees of confidence and the weight of evidence, not “proof”. If you want to learn something about the science underlying the issue of sea level rise, just enter the term in the search box at the upper right corner of our main page. – mike]
Ray Ladbury says
Jeffrey Eric Grant,
OK, so what precisely is your question?
Are you looking for evidence (proof is for mathematicians) that CO2 is a greenhouse gas? The experimental record dates back to the 1850s.
Are you looking for evidence that increased CO2 raises temperatures? You’ll find a lot of good studies here:
http://agwobserver.wordpress.com/2009/11/05/papers-on-climate-sensitivity-estimates/
Hopefully you are not in doubt that ice will melt if temperatures rise, are you? We lost 2 trillion tonnes of ice from 2003 to 2008. Do you question that this will cause sea level to rise?
Chris Colose says
Jeffrey Eric Grant,
The CO2 itself doesn’t have anything to do with the sea level rise. Sea levels respond to temperature and to glacier/ice-sheet melt. The same would happen if you turned up the sunlight a bit. We know that CO2 causes warming through radiation physics, and we know that warmer temperatures melt more ice through basic thermodynamics, common experience, and past climate evidence. Qualitatively, it’s all a very easy and robust chain of logic.
Warmer ocean water responds via thermal expansion (the “thermosteric component”). Sea level is also responding, to a slightly lesser extent (although with the largest potential for future sea level changes), to melting of ice (the “eustatic component”). If you want to confirm this physics with experiment, you just need a cup of water and ice cubes.
SecularAnimist says
Ray Ladbury wrote: “Are you looking for evidence (proof is for mathematicians) that CO2 is a greenhouse gas?”
Funny thing about that “proof is for mathematicians” line that climate scientists are so fond of — it is not at all what the judge said when he instructed a jury that I served on regarding standards of proof for a criminal prosecution. And the judge definitely did say “proof” — as in “proved beyond a reasonable doubt”.
The evidence does in fact prove beyond a reasonable doubt that CO2 is a greenhouse gas.
Jeffrey Eric Grant says
Maybe my question was too vague. That happens a lot. I have studied the IPCC Report 4 carefully. I am aware that CO2 is in fact a greenhouse gas. I am looking for the “proof” that CO2 is the main reason for the increased global atmospheric temperatures. I do not have a formal education concerning atmospheric science except for two college level classes. However, before I can pin the temperature increase (mainly) on CO2, I must first rule out other, more obvious players: Sunlight and Cloud Cover. What evidence do we have that these two natural players are not increasing the temperature? Or, can Cloud Cover act in a negative feedback, countering increased temps?
If these questions sound elementary to you, please excuse me… because I do have some knowledge of the subject, and considerable training & experience in related fields, I believe, as part of the ‘public’, I should be able to grasp the logic and understanding that leads to AGW. My trouble is I have not yet found the key to the conclusion. If I can’t “get it”, then a vast majority of ‘the public’ can’t get it either.
Maybe that’s why there is so much politics in the news concerning AGW?
Barton Paul Levenson says
JEG: I am looking for the “proof” that CO2 is the main reason for the increased global atmospheric temperatures. I do not have a formal education concerning atmospheric science except for two college level classes. However, before I can pin the temperature increase (mainly) on CO2, I must first rule out other, more obvious players: Sunlight and Cloud Cover.
BPL: Have you tried doing a time series multiple regression of dT on those three factors?
I have. CO2 accounts for 76% of the dT variance 1880-2008. Sunlight accounts for 1-2%. I don’t have a time series for either albedo or cloud cover back that far, but if you can point me to one, I’d be glad to factor it in. BTW, not everybody agrees with the Palle group’s Big Bear Solar Observatory series. Earthshine is a notoriously unreliable way to measure the Earth’s albedo.
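For readers curious what such a regression looks like in practice, here is a minimal sketch of an ordinary least squares fit of an annual temperature anomaly on ln(CO2) and total solar irradiance. The series below are crude synthetic stand-ins (assumptions for illustration), not the actual data behind the 76% figure quoted above.

```python
# Illustrative only: OLS of a temperature anomaly series on ln(CO2) and TSI.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
years = np.arange(1880, 2009)
co2 = 290.0 + 0.006 * (years - 1880) ** 2                      # synthetic CO2 rise, ppm
tsi = 1361.0 + 0.5 * np.sin(2 * np.pi * (years - 1880) / 11)   # synthetic 11-yr solar cycle
temp = (0.8 * np.log(co2 / 290.0) + 0.02 * (tsi - 1361.0)
        + rng.normal(0.0, 0.1, years.size))                    # synthetic anomaly, K

X = sm.add_constant(np.column_stack([np.log(co2), tsi]))
fit = sm.OLS(temp, X).fit()
print("R^2 (variance explained by both factors):", round(fit.rsquared, 2))
print("coefficients [const, ln(CO2), TSI]:", fit.params.round(3))
```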
Kevin McKinney says
Jeffrey (#38), perhaps some recapitulation of the history of the science might be helpful?
I have a series of articles on that–the latest one deals with the “life, work and times” of Guy Callendar, who basically brought CO2-climate theory into the 20th century. His first paper on the topic came in 1938. His ideas were significantly vindicated following the IGY in ’58, and he corresponded, basically as a mentor, with a couple of notables in the development of the theory–Gilbert Plass (about whom RC did a post a few months back) and Dr. David Keeling, after whom the famous “Keeling Curve” of CO2 readings is named.
From there, you can trace things right on back to Fourier and his heat budget (1824), if you wish.
The Callendar article:
http://hubpages.com/hub/Global-Warming-Science-And-The-Wars
I mustn’t neglect to mention the possibility of just consulting the science itself, if you’d prefer less of the “human factor.” You can do that by reading the original climatology papers relevant to the discussion here:
http://wiki.nsdl.org/index.php/PALE:ClassicArticles/GlobalWarming
(This source also gives access to relatively more recent “classics.”)
David B. Benson says
Jeffrey Eric Grant (38) — Start with
https://www.realclimate.org/index.php/archives/2010/03/unforced-variations-3/comment-page-12/#comment-168530
and for more along these lines read Tol, R.S.J. and A.F. de Vos (1998), ‘A Bayesian Statistical Analysis of the Enhanced Greenhouse Effect’, Climatic Change, 38, 87-112.
For my link, note that 2xCO2 alone can be shown, just from the atmospheric physics, to produce 1.2 K of temperature increase. Adding in the water vapor positive feedback is a bit harder, but it roughly doubles that: 2.4 K total. Then one has to subtract the ocean heat uptake and any net low cloud cover effect. So it was easier to just determine the overall OGTR, a transient response.
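For what it’s worth, the no-feedback number can be sketched with a simple Planck (blackbody) estimate; the detailed radiative-convective calculation behind the ~1.2 K figure is more involved, but this lands in the same ballpark (about 1 K):

```python
# Back-of-the-envelope: 2xCO2 forcing divided by the Planck response.
import math

sigma = 5.67e-8                            # Stefan-Boltzmann constant, W m^-2 K^-4
T_emit = 255.0                             # Earth's effective emission temperature, K

delta_F = 5.35 * math.log(2)               # ~3.7 W/m^2 forcing for doubled CO2
planck_response = 4 * sigma * T_emit**3    # ~3.8 W/m^2 per K of warming
print(f"no-feedback warming ~ {delta_F / planck_response:.2f} K")
```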
Didactylos says
Jeffrey:
“Getting it” is really not difficult. There is no trend in cloud cover or solar output that can even begin to explain the temperature increase that we have seen. Greenhouse gases explain it perfectly.
And really, that’s all there is to it.
Yes, there’s a lot more detail if you scrape below the surface, but none of the detail changes these basic, top page facts.
Those people who don’t want to accept these basic findings will blow a lot of smoke about the fact that we don’t understand everything about clouds, and that changes to total solar irradiance do indeed act as a climate forcing. But we know that the solar forcing is very small, and that clouds have not changed dramatically over the last few decades.
So, let’s imagine that there *is* some mysterious “unknown” that is a perfectly natural cause of global warming. This leaves us with not one, but TWO problems: 1) What is this unknown? It’s not the sun or clouds, we do know that. Nor is it volcanoes, orbital changes, or any of the other theories that are routinely trotted out but were discarded long ago. 2) We also have to explain why our greenhouse emissions *aren’t* having the warming effect that basic physics demands that they should.
Or, we can trust to Occam’s Razor and accept that the greenhouse gases are having the expected effect, and that there is no mysterious unknown preventing that, and no mysterious unknown that coincidentally is causing warming by exactly the same amount.
Jeffrey, you’re an engineer. Which do you prefer: the theory that explains everything we can observe, or the theory that relies on two massive gaps in our knowledge and a huge coincidence?
Jeffrey Eric Grant says
Thanks for the information. I have some additional reading to do. However, of course, heat comes from the sun (and to a lesser degree, the core of the earth). The greenhouse effect heats up the atmosphere (where would we be without that?). Increasing CO2 will strengthen the greenhouse effect, and positive feedbacks will amplify that effect. What are the limits? Will it go on forever, or is there some reversing mechanism?
Sorry, I will stop for a while while I digest what has already been presented to me. I thank you for your time & effort…
Ray Ladbury says
Jeffrey, you seem to be laboring under the same misapprehension that many others share–that is, that we saw temperatures warming and said, “Oh, it must be CO2!” In fact, Svante Arrhenius predicted that anthropogenic CO2 would warm the planet 114 years ago, and over 70 years before the onset of the current warming epoch. Arrhenius based his prediction on what was then the already well established science of the role of greenhouse gasses in Earth’s climate.
Now we can posit all sorts of causes for which we have zero evidence and no causal mechanism, or we can go with the known science. I’ve always found that science works best when we try to explain the unknown in terms of the known.
Barton Paul Levenson says
JEG,
Warming does trigger positive feedbacks (and negative ones), but the result is a converging series, not a diverging one. It doesn’t necessarily run away.