This month’s open thread. We’re going to guess that most of what people want to talk about is related to the IPCC WG1 AR5 report… Have at it!
286 Responses to "Unforced Variations: Oct 2013"
Radge Havers says
@98
Perhaps, although since we’re looking at a time-worn political tactic, and we’re living in strangely charged political times (also human), I’m more inclined to ascribe a secondary role to simple wishful thinking. That’s in general, as I’m also thinking of exchanges that have taken place on other threads as well as what goes on outside the RC enclave.
Watcher says
re 93:
Ray,
If you look at my posts you will note that I have made no mention of the “16 year hiatus” except tangentially (#78) in defending Dr. Curry’s right to wonder at the differences between the draft and final AR5 SPM. You’re the one telling me I’m talking about a hiatus. As for whether a 16 year hiatus has ever been simulated, I would defer to von Storch:
At my institute, we analyzed how often such a 15-year stagnation in global warming occurred in the simulations. The answer was: in under 2 percent of all the times we ran the simulation.
Rather, the point of my posts was to discuss something that was new to me: why the modeling community doesn’t appear to consider an accurate simulation of absolute temperature important, despite (once again) Mauritsen:
To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act
And yet both the figure that drew my attention and Gavin’s comments seem to indicate that it’s not considered a big deal, since what they are after is the trend. Other comments from yourself and Magma are in the same vein, but I’m not convinced that you are climate modelers so I don’t want to use them as an example of the community. One should try to avoid putting words into other peoples’ mouths, don’t you think?
Patrick 027 says
It’s also worth pointing out that the range in model results is what it is, whatever the errors in baseline temperature; and the results are clustered into a bundle. And also, there are sensitivity estimates from observations and paleoclimatic studies.
http://www.skepticalscience.com/Estimating-climate-sensitivity-from-3-million-years-ago.html
https://www.skepticalscience.com/hansen-and-sato-2012-climate-sensitivity.html
Watcher says
Re: 92
Dave123,
Assume you have an aqueous reaction mixture. Try running your simulation at 294K, and at 292K — straddling the freezing point of the mixture, of course. My guess is very little change in your reaction rate model, but a very big change in real life.
Latent heat makes a huge thermodynamic difference when the temperature is swinging through a phase transition, which it will do repeatedly in a climate simulation. I wouldn’t think this is news to any climate modelers.
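To put rough numbers on that, here is a back-of-envelope sketch in Python (textbook constants for pure water; the 1 kg parcel and the 2 K swing are arbitrary illustrative choices):

```python
# Sensible heat of a 2 K cooling vs. latent heat released on freezing:
# crossing the phase transition moves ~40x more energy per kilogram.
c_p_water = 4.18   # kJ/(kg K), specific heat of liquid water
L_fusion = 334.0   # kJ/kg, latent heat of fusion of water

sensible = c_p_water * 2.0   # cool 1 kg of liquid water by 2 K
latent = L_fusion            # freeze 1 kg of water at 273.15 K

print(f"2 K of sensible cooling: {sensible:.1f} kJ/kg")  # ~8.4 kJ/kg
print(f"freezing the same mass:  {latent:.1f} kJ/kg")    # 334 kJ/kg
print(f"ratio: {latent / sensible:.0f}x")                # ~40x
```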
Dave123 says
@Watcher – What I think you’re missing in your focus on the freezing point of water is that, so far as climate models go, a temperature error around the freezing point means you move a few meters higher or lower in the atmosphere, and otherwise it doesn’t make much difference in kinetic expressions, especially compared to the other approximations being made. Same thing on the ground: a few days earlier or later, a few meters higher or lower, a few degrees further north or south. As long as the error is reasonably consistent, the trends will emerge. Now I’ll stand ready for correction from someone who can point to a specific equation or scenario that’s very sensitive and tightly coupled, but in the general sense of things those are the consequences I see from the kinds of equations I understand.
Going back to reliability: in our systems the objective function was economic maximization, which meant “more”. More of course had upper limits because of dangerous runaways. Running my model simulations with dozens if not hundreds of scenarios provided guidelines and boundaries that would have been obscenely expensive and risky to try to produce experimentally. That’s where “good enough” comes in: stopping short of that last 1/2% of yield/productivity in order to have a 3 degree margin on temperature measurements.
I suggest that how I used reactor models is more like how we use climate models….looking for trends, responses and safe margins, not absolutes.
Hank Roberts says
> Reality is only one possible realization of the climate system.
Could someone put that in understandable fifth-grade English?
If I’m the one who started “clockwork model” forgive me. I made that term up handwaving. Is there really a kind of model meant to behave consistently, much the same way each time it’s run?
‘Cause I’m contrasting that to a climate model in which significant events occur with some range of likelihood and timing, not tied down because the model includes modeling natural variability. When and how big and where the volcanos erupt in each century, the usual example.
I think of climate as more like a variant or mutant Rube Goldberg machine — one that ends up reaching one of a variety of different likely outcomes each time it’s run.
(was such a thing ever built in hardware? something like the Discworld hydraulic model of money flow in Ankh-Morpork …).
Doug Bostrom says
Booker’s “Climate change ‘scientists’ are just another pressure group” is remarkably brazen, a heroic attempt at establishing an expedient cognitive short-circuit. If Booker can only lead people to believe such a thing, then it’s no longer necessary to scrutinize, understand or acknowledge any of the details of difference between a broad scientific consensus and a demagogue’s appeals to emotion.
“Both sides are the same, just pick the one that fits your predisposition.”
No, but nauseatingly cynical.
“…why the modeling community doesn’t appear to consider an accurate simulation of absolute temperature important…it’s not considered a big deal, since what they are after is the trend.”
There you go, you said it. Simulation or physical experiment, if you’re trying to predict the statistical reliability of a population of light bulbs it doesn’t matter which particular bulbs fail, the numbers are still useful. If your simulation is also trying to predict which particular light bulb will burn out first, second or later then the model will of course “fail” even as it continues to produce useful statistics.
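A minimal simulation of the analogy (Python; the exponential lifetime distribution and 1000-hour mean are arbitrary choices of mine) shows the population statistic holding steady while the identity of the first failure changes on every run:

```python
import random

# 1000 bulbs with exponentially distributed lifetimes, mean 1000 hours.
N, MEAN_LIFE = 1000, 1000.0
for run in range(3):
    lifetimes = [random.expovariate(1 / MEAN_LIFE) for _ in range(N)]
    frac_dead = sum(t < 500 for t in lifetimes) / N    # population statistic
    first = min(range(N), key=lambda i: lifetimes[i])  # individual prediction
    print(f"run {run}: {frac_dead:.1%} dead by 500 h; first failure: bulb #{first}")
# The fraction sits near 1 - exp(-0.5) ~ 39% every run; the bulb index does not.
```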
In order to have a lot of pointless discussion and doubt it’s imperative that we forget the fundamental purpose of climate simulations.
Judith Curry is not stupid but she is deeply mysterious.
Watcher says
Re: 102 & 104
Patrick 027 — I dunno, it’s hard to see anything clustered in this figure.
Dave123: I expect you’re right for a reaction vessel where you’re trying to supply energy to get over an activation barrier. Can I assume the reactions are essentially irreversible once they occur and you’re mostly tweaking which pathways are favoured?
In a climate model it is precisely those few metres of altitude or few degrees of latitude that are at issue. Essentially one is modeling a reversible system (in the sense of a local equilibrium rather than a strict thermodynamic reversibility) in an almost-steady state situation: heat is being pumped in, transported by storms, currents, etc., contributing to melting/freezing … gads. An unholy mess and while I do go on I have to take my hat off to people who are willing to tackle it.
Anyway, exactly where all of this is going on (i.e. what latitude, altitude, etc.) is at the heart of the predictions people care about: whether the methane clathrates will be released or Greenland melts or what have you. Hence, I think it matters more than in a chemical reactor.
David B. Benson says
Magma @74 — Eroded, and that in just the past few years.
Dave123 says
@Watcher- It will take someone with more of an inside view of the models, but the errors in the various other components besides temperature are more likely to be limiting factors. But no, the few meters or degrees of latitude are precisely not the issue. You need a sensitivity analysis on the estimated errors in lots of other variables and ‘constants’ before you can get too interested in temperature by itself. Look at the error bars on known forcings going into climate models. Those are bigger on a percentage scale than temperature. (See the approved Summary for Policymakers, Figure SPM.5: all of the error bars are far greater than 1 degree out of 300.)
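The kind of screen being described can be sketched in a few lines of Python. The toy response function and the spreads below are mine, not any real model's; the point is only the method: perturb each uncertain input over its error bar and compare how much each moves the output.

```python
import random

random.seed(3)

def toy_response(forcing, feedback, baseline_T):
    # Arbitrary toy output: forced response plus a weak baseline-T term.
    return feedback * forcing + 0.001 * baseline_T

nominal = dict(forcing=2.3, feedback=0.8, baseline_T=288.0)
spread = dict(forcing=1.0, feedback=0.2, baseline_T=0.5)  # illustrative 1-sigma error bars

for name in nominal:
    outs = []
    for _ in range(2000):
        args = dict(nominal)
        args[name] += random.gauss(0, spread[name])  # perturb one input at a time
        outs.append(toy_response(**args))
    mean = sum(outs) / len(outs)
    sd = (sum((o - mean) ** 2 for o in outs) / len(outs)) ** 0.5
    print(f"{name:10s} -> output sd {sd:.4f}")
# The forcing and feedback uncertainties dominate; the absolute baseline barely registers.
```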
Dave123 says
Watcher- no, the problem with the reactor wasn’t putting heat in, it was taking heat out….hence the concern about runaways. The concern was also that if you assumed a normal error distribution on your temperature measurements (despite calibration), the possibility that 1 error in 10000 could be measuring low enough to significantly miss borderline runaway conditions had to be reckoned with.
Chris Dudley says
Hank (#91),
Yes, it is a matter of conservation. Thomas has a fair point, but most dirt has seen its fair share of ground water as well. There is heterogeneity in the distribution of uranium in soil, and it is also expected in coal ash for that reason alone.
Regarding screening, perhaps I should make that clear. There is radioactivity occurring throughout the Earth’s crust. We are subject to very little of it because it is also blocked by the Earth’s crust. Laying down a layer that has the same concentration of radioactive isotopes as the rest of the crust makes no difference in radiation exposure because it screens the layer below. That is why bulldozers don’t raise radiation levels. They may have made a bigger pile of stuff that has some radioactivity, and there is indeed a greater number of decays in that volume than before the pile was there, but the old surface is screened by the new material and no longer contributes to the surface radiation.
Dave123 says
@hank 106- My guess is that there are two broad categories of physics based models, and that you can split them into clockwork and non-clockwork. The kind of reactor modeling I did is very clockwork, because there were no random inputs. We controlled the horizontal, we controlled the vertical. Given a scenario, the same result pops out every time you run the model. Thing is, these models contained so many elements of a climate model: heat transfer, mass transfer, rate of reaction, temperature and concentration dependence, and, parallel to uncertainties in forcings, we had uncertainties (albeit small) in some heats of reaction, the rate constants and some exponents (order effects) on concentrations due to gas and solid phase adsorption/desorption effects. We had to tune around those. In the end, everything was uncannily, eerily accurate, down to a little ‘hiccup’ that I thought was below the noise threshold being reproduced and observed at large scale.
But we didn’t have random events thrown in, such as volcanoes or ENSO. ENSO might relate because it would affect humidity, and that had a strong influence on one system (but not the other), but even there, given the same input we obtained the same output, and we were only concerned with steady states.
Climate is different: it’s not steady state. There’s no control panel to tweak anything to compensate for an uncontrolled variable. It’s vastly more complex. We could simplify out the CFD in our systems at both macro and micro scales because we had consistent turbulent flow and appropriate time scales for diffusion vs. convection; you can’t do that in climate models.
In fact, with all of the uncertainties going into climate models (from what the true values of forcings are ….and whether there’s local, contingent or emergent variation in forcing values) I’m sometimes amazed that they come as close as they do.
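For readers following the clockwork/non-clockwork distinction (and Hank's "one possible realization" question above), here is a deliberately crude Python toy, not any real GCM, contrasting a deterministic run with a noise-forced ensemble:

```python
import random

def run(years=50, noise=0.0, seed=None):
    rng = random.Random(seed)
    T, series = 0.0, []
    for yr in range(years):
        forcing = 0.03 * yr           # steady forcing ramp (arbitrary units)
        T += 0.2 * (forcing - T)      # relax toward the forced value
        T += noise * rng.gauss(0, 1)  # "weather": internal variability
        series.append(round(T, 3))
    return series

print(run(5))                          # clockwork: identical output on every call
print(run(5))
print(run(50, noise=0.1, seed=1)[-1])  # different realizations...
print(run(50, noise=0.1, seed=2)[-1])  # ...sharing the same underlying trend
```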
Ray Ladbury says
Watcher,
If you wish to avoid misunderstanding, then might I suggest expressing yourself clearly? Do not take a cue from Aunt Judy where you advance to the very verge of taking a position and then withdraw into plausible deniability.
Unfortunately, von Storch is engaging in his penchant for doing science by press–long before he has allowed his peers to have their say on his analysis. Everything depends on how von Storch asked the question. If one asks whether a particular 15-year period will show insignificant warming, then, yes, the odds will be small. If, however, one asks whether some 15-year period in a span of 50 years will show low warming, then the odds will be a whole lot higher. If one asks whether a 15-year trend beginning with an El Nino that was about 2 sigma higher than the trend might show negligible warming, I might take those odds. And finally, when you specify that the two end years be La Ninas, it’s damn near a lead-pipe cinch. But Judy is not interested in lead-pipe cinches. She’d rather be mystified. It makes it easier to fool readers like you.
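Ray's point about how the question is posed is easy to check with a toy Monte Carlo. The numbers below are stand-ins of mine (0.017 degrees/yr trend, 0.1 degree white interannual noise, no ENSO structure), so the percentages are illustrative only:

```python
import random

random.seed(42)
TREND, NOISE, TRIALS = 0.017, 0.1, 10000

def ols_slope(y):
    # Ordinary least-squares slope of y against 0, 1, ..., n-1.
    n = len(y)
    xm, ym = (n - 1) / 2, sum(y) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

fixed = anywhere = 0
for _ in range(TRIALS):
    series = [TREND * t + random.gauss(0, NOISE) for t in range(50)]
    slopes = [ols_slope(series[s:s + 15]) for s in range(36)]  # all 15-yr windows
    fixed += slopes[0] <= 0                  # one pre-chosen window is flat
    anywhere += any(s <= 0 for s in slopes)  # a cherry-picked window is flat
print(f"pre-chosen window flat: {fixed / TRIALS:.2%}")
print(f"some window flat:       {anywhere / TRIALS:.2%}")
```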
Flakmeister says
CERN CLOUD experiment press release:
http://press.web.cern.ch/press-releases/2013/10/cerns-cloud-experiment-shines-new-light-climate-change
Cosmic rays have a negligible effect on aerosols associated with amines and sulphuric acid…
Oh my, Jasper K. has let the denialosphere down yet again…
Hank Roberts says
> Chris Dudley
What you believe is inconsistent with the measurements reported in the published literature on the subject. You’re entitled to your own beliefs. Repeating your conclusions doesn’t improve the mismatch: whatever source you’re relying on isn’t published. I rely on the published science. Enough, eh?
Chris Dudley says
Hank (#116),
Actually, you have not demonstrated anything here. You cite the paper that has been disproved as evidence. That is both a false appeal to authority and circular. You provide links that show that coal ash and soil have an overlapping range of radioactivity to claim that they are different, when your links show they are not. If you can’t argue cogently, you can’t be persuasive. Read more carefully and do some thinking. You’ll understand eventually.
Doug Bostrom says
Amazing how we can make something that is fairly simple in its broad features unnecessarily complicated to think about, but that’s a common problem these days with the epidemic application of tactical and strategic confusion.
Combustion of coal to ash: does it concentrate any elements and/or compounds?
If we start with coal that includes various impurities and burn it, we end up with a residue chiefly consisting of those impurities, in particular those that are not volatile at the combustion temperatures involved.
But apparently we’re supposed to believe that the properties of coal ash are the same as coal. How does that work?
Meanwhile, it’s known that particular deposits of coal are associated with particular impurities.
I’ve a feeling we’ll next be hearing that the mercury found in coal to a greater or lesser extent magically vanishes as the coal is burned.
prokaryotes says
New finding shows climate change can happen in a geological instant
[Response: My sources tell me that the inference that the banding is annual (and hence the ‘instant’ conclusion) is quite controversial. More study is definitely needed on this. – gavin]
Hank Roberts says
> the paper that has been disproved
Please get help. Ask a librarian.
Doug Bostrom says
That article on what we might be able to infer are varves is amazingly devoid of any useful content. Apparently there’s disagreement on whether the layers are actually varves?
Anyway, here’s the abstract:
http://www.pnas.org/content/110/40/15908.abstract?sid=05a949a1-8171-4d96-88e1-3d060fc2c8ad
Doug Bostrom says
Beef: In a perfect world I’d get free access to some number of (one, even) PNAS articles for each bleary-eyed 6am delivery of a passenger to the airport made so that the passenger could attend a NAS meeting across the country, in the swamp. Meetings which– by the way– are essentially a gift.
They could be layers of mud from annual extreme rainfall events if these cores are from near shore locations; they could be sediment from plankton blooms. Or much else.
The bands are layers at some particular location — which could be due to local extremes while global CO2 was changing relatively slowly. Or so I’d guess.
Watcher says
“Would you have confidence in a model’s trend if the absolute value were 1 % off but the trend, integrated over time, produced a 10 % difference?”
Well perhaps I would, but the trend as measured (and modeled) from 1900 to present integrates to less than a 0.3% difference in absolute T. And there are no “random” events in there, since historical forcings are used to generate the curves and the models are aligned to data. The very scariest IPCC scenarios when carried out to 2035 or so (the approved SPM Figure 1.4 ) end up at +2K relative to 1950, so call that 2.5K above 1900. That makes it somewhere around 0.8%.
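For the record, the arithmetic behind those percentages, taking roughly 288 K as the absolute global mean:

```python
T0 = 288.0  # K, approximate observed global mean surface temperature
print(f"{0.8 / T0:.2%}")  # ~0.28%: roughly 0.8 K of warming since 1900
print(f"{2.5 / T0:.2%}")  # ~0.87%: 2.5 K above 1900 in a high-end scenario
```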
You also ask,
“How far up or down, and for that matter, north or south, does the 273.15 K isotherm shift among models for the same forcing?”
A good question, which is rather the point I was after but was too lazy to look up. According to various Googled references, I would say around 500m of altitude and 1000km of latitude for the observed 3K spread in the hindcast models shown in Mauritsen et al. That takes in the entire extent of Greenland and its ice sheet, for example.
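Those figures are consistent with simple scale analysis, assuming the standard ~6.5 K/km tropospheric lapse rate and, very roughly, a 3 K per 1000 km mid-latitude surface temperature gradient (the gradient number is a crude assumption of mine):

```python
spread = 3.0                  # K, inter-model baseline spread (Mauritsen et al.)
lapse_rate = 6.5              # K per km of altitude, standard troposphere
merid_gradient = 3.0 / 1000   # K per km poleward, crude mid-latitude value

print(f"equivalent altitude shift: {spread / lapse_rate * 1000:.0f} m")  # ~460 m
print(f"equivalent latitude shift: {spread / merid_gradient:.0f} km")    # ~1000 km
```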
By the way, thanks for your extensive comments on chaotic aspects of climate (and geology, of all things). You’ve obviously given this some thought and I appreciated some of the insights.
And finally Ray, from 114:
“If you wish to avoid misunderstanding, then might I suggest expressing yourself clearly? Do not take a cue from Aunt Judy where you advance to the very verge of taking a position and then withdraw into plausible deniability.”
OK, how’s this. Gavin once advised me to go softly and not say I think something is absurd when it’s been pronounced on by someone who has studied it extensively. I’m trying to follow that advice.
In my trivial, stunted world of clockwork modeling I would consider it absurd to take a model that is only “good” to 1% and use it to predict things occurring on a scale of less than 1%. Apparently, models infinitely more complex than mine, that contain chaotic features and dozens of poorly characterised parametrisations should not be judged by the same standards. I know that sounds sarcastic, but it is the gist of what has been suggested to me here.
I find this surprising and so apparently does Dr. Curry. I’m trying to be polite because this is a legitimate field of study of no little importance and still being actively pursued. Happily, for the most part discourse here remains civilised, but that’s not universally true.
Ray Ladbury says
Watcher,
Fine, except I don’t know of any basic conclusions on which the basic science depends that are contingent on the models being right to 1%. And again, one is not looking at absolute numbers from the models but rather at trends and how robust those trends might be over various ranges of initial conditions and parametric values. That is more than sufficient to reach conclusions.
As to what Aunt Judy presents, it relies on a thorough misunderstanding of statistics and confidence intervals. If you want to understand where she went wrong, Watcher, that is where I would start. If you can understand it, maybe you can explain it to Judy–Dog knows many have tried.
sidd says
Thanx for pointing out the Wright paper.
1)delta-O18 variations are quite strong evidence for the layers being annual
2)The ability to temporally differentiate delta-CaCO3 and delta-C13 response is another huge argument for the layers being annual
3)The speed of CaCO3 decrease is astonishing, ” … %CaCO3 shows a more abrupt decrease, from 6% to 1% within one layer.” Acidification occurred in an eyeblink. “Precipitous” is the term the authors use.
4)3000 GT carbon release estimate: “Given the rapidity of the onset, magnitude of the δ13C excursion, and that the observed calcite compensation depth shoaling in deep ocean requires ∼3,000 GtC(3), two mechanisms meet these criteria: large igneous province-produced thermogenic methane (6, 7) and cometary carbon (11,12). The latter is consistent with the recent discovery of a substantial accumulation of nonbiogenic magnetic nanoparticles in the Marlboro clay, whose origin is best ascribed to impact condensate (71).”
sidd
Patrick 027 says
re coal ash (vs coal – 121 Doug Bostrom ; PS I think I read once that Vanadium is particularly concentrated in petroleum (?) … tunicates? okay, never mind that…), dirt, groundwater – actually, surface dirt is exposed to rainwater (or ice) :) except when the water table comes up to meet it. I don’t know a lot about groundwater but there’s a diagram in a geology book, “Continents and Supercontinents”, which shows how ‘groundfluid’ changes with depth – go far enough and it’s NaCl or even CO2 (be careful on framing that last one in any climate context to avoid confusion). Anyway,
…
re Watcher @104 – maybe I missed something earlier, but wouldn’t such a model easily take the latent heating and phase change into account? If your point is that a T error is important, that’s a good example of when it would be.
Although – maybe a bit nitpicky but – the phase transition process generally requires nucleation and growth of new phases. Growth can be delayed, relative to thermodynamic equilibrium of a whole volume, due to any nonzero diffusion of heat and/or matter requiring a thermal and/or compositional gradient, so at some distance from the new phase there may be some supersaturation or supercooling (or superwarming?) – itself making nucleation at those places more likely (unless it is too cold). But nucleation takes space and time, or else seeding…
(in a “Good Eats” episode on chocolate, Alton Brown added cocoa butter already in the alpha phase (or beta?) to melted chocolate (as I recall, it was still molten at that point) to seed alpha (or beta?) cocoa butter. There are six different ways for cocoa butter to crystalize; I got the impression alpha is the one at thermodynamic equilibrium under typical eating conditions – and it’s the most desirable for eating solid chocolate; beta decays to alpha readily, so beta’s acceptable. Martensite is a (brittle) form of steel which is in thermodynamic disequilibrium indefinitely; from memory, I think it would, given the opportunity, decay to Fe + cementite, but cementite itself is a disequilibrium phase, and would change into Fe plus graphite if it could. But graphite crystals often don’t form in the cooling process, unless you add, for example, (enough) Si (as best I can recall) to the mix. (if you really need to know that, double check for yourself first))
…more to the point, though, homogeneous (no seed crystals like silver iodide) nucleation of ice occurs at a certain rate per unit volume at a given temperature; -40 deg C and F (it’s the same at that point) is considered the temperature at which pure liquid water (even down to cloud droplet size) will freeze without ice nuclei (aerosols effective in nucleating ice – from a show on the Weather Channel, that can include (at least one species of) bacteria ) without waiting too long (I infer based on the concept of nucleation rate). Aerosols vary in their ability to nucleate ice and, at least in updrafts, there is generally a population of supercooled liquid cloud droplets above freezing level. Of course, once ice is present, you’ve got the Bergeron process…
(Also, condensation of liquid water can be delayed, although condensation may start below 100% RH (relative humidity) (relative to a flat surface of pure liquid H2O) because of aerosols which are soluble in water – forming haze particles which for a given concentration of solute and radius are in equilibrium with a given RH; concentration decreases with radius and so RH needs to rise for growth. It (typically, at least, from what I learned) needs to rise above 100 % RH eventually because as solute concentration becomes smaller (with radius^-3), the effect of the small radius, which increases the pressurization of the droplet from surface tension, starts to dominate. When the droplet grows to a size where equilibrium RH peaks, the haze particle turns into a cloud droplet. See Kohler Curve)
None of that necessarily counters your argument, but it may be worth knowing.
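For anyone who wants numbers to go with the Kohler curve mentioned above, here is a minimal sketch using the textbook approximation S = 1 + a/r - b/r^3; the 1e-19 kg NaCl nucleus is an arbitrary illustrative choice:

```python
import math

T = 280.0                                   # K, assumed cloud temperature
a = 2 * 0.075 * 0.018 / (1000 * 8.314 * T)  # Kelvin (curvature) term, m
m_s, M_s, i = 1e-19, 0.0585, 2              # NaCl nucleus: mass (kg), molar mass, van 't Hoff factor
b = 3 * i * m_s * 0.018 / (4 * math.pi * 1000 * M_s)  # Raoult (solute) term, m^3

r_crit = math.sqrt(3 * b / a)                # radius where equilibrium RH peaks
S_crit = 1 + math.sqrt(4 * a**3 / (27 * b))  # the peak supersaturation itself

print(f"critical radius: {r_crit * 1e6:.2f} um")
print(f"critical supersaturation: {(S_crit - 1) * 100:.2f}%")  # a few tenths of a percent
```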
———-
re @108:
I should clarify – I was specifically thinking of equilibrium climate sensitivity. Of course, trends (averaged over internal variability) should generally be proportional to that, given the same heat capacity and forcing (although the trends could vary independently of equilibrium sensitivity if models have different rates of heating of the deep ocean, for example).
(regarding recent trends and models, I remember reading something (at RC or maybe SkepticalScience ?) relatively recently, that a correction had to be made to a figure in the IPCC because the model results were not properly aligned with the observations for a good trend comparison. I don’t know which version of that graph, if either, that Judith Curry is using.)
See the last figure in the second link in my 103 for what I had in mind (about equilibrium sensitivity) (although that doesn’t actually show clustering by individual models, but it compares the probability curve derived from models with that derived from other lines of evidence. One of those lines is climate reaction to individual volcanic eruption(s?) (Pinatubo)).
As to the sensitivity of climate sensitivity to temperature – well,
Is there a correlation between modelled T and modelled change in T? (I don’t know.)
How much difference is there in the same model’s sensitivity for a doubling of CO2 vs. quadrupling vs. halving of CO2 (technically, when forcing is expressed as W/m2, though each doubling of CO2 has a similar forcing to the last within some range of CO2 values; Venusian conditions are outside that range)? I’ve gotten the impression there isn’t much (at least for Charney sensitivity, or otherwise not including ice sheets). Compare to geologic history – e.g. the first figure in the second link in my 103 – here again: https://www.skepticalscience.com/hansen-and-sato-2012-climate-sensitivity.html – farther down that page, in the “Earth System Sensitivity” section (emphasis mine):
Hansen and Sato examine the longer-term Earth System Sensitivity by adding in slow feedbacks one-by-one, starting with surface albedo. Hansen and Sato note the longer-term sensitivity is
“…more dependent on the initial climate state and the sign of the forcing. The fast-feedback climate sensitivity is a reasonably smooth curve, because the principal fast-feedback mechanisms (water vapor, clouds, aerosols, sea ice) do not have sharp threshold changes. Minor exceptions, such as the fact that Arctic sea ice may disappear with a relatively small increase of climate forcing above the Holocene level, might put a small wave in the fast-feedback curve.”
This initial state dependency is illustrated by the more complex shape of the upper curve in Figure 1 above. For example, during a cooling event to a glacial period like the LGM, the long-term Earth System Sensitivity is approximately 6°C for an equivalent forcing to a doubling (or in this case halving) of CO2. This is primarily due to the increase in the Earth’s reflectivity as large ice sheets form.
During a period like the Holocene while warming to a Pliocene-like climate, slow feedbacks (such as reduced ice and increased vegetation cover) increase the sensitivity to around 4.5°C for doubled CO2. However, a climate warm enough to lose the entire Antarctic ice sheet would have a long-term sensitivity of close to 6°C. Fortunately it would take a very long time to lose the entire Antarctic ice sheet.
Although it is noted at the end of that section that the first figure is a schematic, the quote contains the reasoning behind expecting the fast-feedback sensitivity to be as such (smoothly varying).
Note that the link implies (See forcings vs. feedbacks) that Hansen and Sato 2012 (HS12) includes aerosols in the fast-feedbacks (it says that HS11 argue that it should be treated as such).
The “Earth System Sensitivity” section implies CO2 is still treated as a forcing, which is necessary if you want to consider sensitivity to atmospheric CO2. I’m unclear on how CH4 is treated. But if you are modeling climate responses to atmospheric CO2 and CH4 (and CFCs and N2O, etc.) changes, you can’t have CH4 and/or CO2 emissions from hydrates, thawing permafrost, and ecosystems in general as feedbacks in the model. And I don’t think the models used (or cited) in the IPCC generally include such feedbacks. This isn’t to say such feedbacks haven’t been studied, and if they occur and are sizable, the models which treat them as forcings would still be useful – just adjust the forcing to account for the known additional feedback, at least to start with.
—
From what I’ve read, (one?/the?) reason for the longer tails in the distributions in the last figure of the above link on the side of higher climate sensitivity is that, if sensitivity varies with climate, it is more likely to change by more than some amount for a larger climate change; thus, for smaller climate sensitivities, sensitivity is less likely to change over the course of a given climate response to a given forcing, whereas for a larger sensitivity initially, the climate change may either be more limited when reaching a range of small sensitivity, or enhanced farther by increasing sensitivity.
—-
Two final points:
As one factor which would affect Earth system sensitivity is the loss of ice sheets – if ice sheets melt sooner rather than later, sensitivity would be decreased for further warming sooner rather than later, though averaged over the range from recent past to ice-free, the equilibrium response would be unaffected by the exact ice sheet dependence on climate (and rate of equilibration), because the start and end points would be the same.
Concerning the sensitivity at this point, do the different models start with different ice sheets, and if modelled, do the ice sheet losses correlate with starting temperature, and/or do the ice sheet losses contribute significantly to the modelled sensitivity? Etc. for seasonal snow and sea ice (whose losses have been underestimated by models, from what I’ve read). I’m not sure about the last parts but I’d think ice sheets are initialized to be as they are in reality.
Very broadly, it is possible to imagine systems where changes can be modelled with much more confidence than absolute values, even independently of them. Take any mass, and add 1 g to it, and the result is 1 g more, so it will weigh 1 g * gravitational acceleration more (for a ‘clockwork’ example).
@ 125: see above on ice sheets;
And there are no “random” events in there, since historical forcings are used to generate the curves and the models are aligned to data. The very scariest IPCC scenarios when carried out to 2035 or so (the approved SPM Figure 1.4 ) end up at +2K relative to 1950, so call that 2.5K above 1900. That makes it somewhere around 0.8%.
From what I’ve read, models, when tuned, are tuned to best match a given average climate, not to match trends (see https://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/ , https://www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/ ). Historical volcanic eruptions should be matched in timing, though models are not generally tuned for climate sensitivity, so the amplitude of the responses would not be matched except through the realistic performance of the model. Internal variability can’t be matched up by tuning, so far as I know (if it could be, that would imply it is predictable over such a span of time; but we can’t predict ENSO even a decade in advance). We could confirm that this matching isn’t done by looking at individual model runs for the historical period, but I don’t have time right now to find that.
Brian Dodge says
Watcher – Assume you have a flat aqueous reaction vessel, running with a 10 degree temperature gradient from side to side. Run your simulation with the cool side at 292K, and again at 291K. Do you see a big difference?
Antarctic sea ice is increasing in the winter. Since the sun is below the horizon, the change in albedo this causes has no effect on forcing. The radiating surface temperature of water is about 270 K; the radiating surface temperature of sea ice varies from ~270 to ~250 K; how much less heat is radiated away by ice, and is this increase in Antarctic ice a positive or negative forcing?
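Brian's question has a back-of-envelope answer via the Stefan-Boltzmann law, assuming emissivity of roughly 1 for both surfaces (about right in the thermal infrared):

```python
SIGMA = 5.67e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def flux(T):
    """Blackbody emission in W/m^2 at temperature T (kelvin)."""
    return SIGMA * T ** 4

open_water = flux(270.0)
for T_ice in (270.0, 250.0):
    print(f"ice at {T_ice:.0f} K radiates {open_water - flux(T_ice):.0f} W/m^2 "
          f"less than open water at 270 K")
# 0 W/m^2 for new ice near 270 K, ~80 W/m^2 for cold ice at 250 K: the ice
# suppresses heat loss to space, a warming-direction term, all else equal.
```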
No problem there. I’m just wondering if it could be a local plankton or algae blooming. That would be of one of the nastier sorts — blooming in season, then dying off, acidifying the local area and making a sedimentary layer, over several years perhaps. Not sure how they go from the layers in the drill core to a global change.
Doug Bostrom says
128 Sidd vs 119 Gavin:
Looks like varve deposit, then? What’s the basis of the controversy over the interpretation as such?
Leaving the objection hanging undefined is kind of like “I should tell you that… oh, never mind.” :-)
The event, which has occurred in the same region over the past six years, always during the summer, has grown exponentially since its last notable interference in 2008. This year’s growth is reportedly double in size, measuring in at more than 11,158 square miles.
While strange in appearance, the algae is reportedly nontoxic to humans but can, however, leave behind the toxic gas hydrogen sulphide.
Get local severe precipitation, lots of runoff, etc. and I’d imagine this could cause, locally, the deposition of such annual layers.
Now, if they’re global, then we’re due to add our comparable layers real soon.
Yesterday the Guardian presented two interactive graphs to sum up the essence of IPCC AR5.
Myself, I feel it really only requires the first of these graphs. It is presented to demonstrate the impact of the revisions in the AR5 ECS estimate from AR4, something described by the Guardian as not very important although, as the Guardian points out, to denialists such a revision is seen as of great importance, a game-changing development. The impact of the ECS revision is shown by clicking alternately on the right-hand buttons.
The left-hand buttons show the impact of changing CO2 mitigation strategies. And of course, these are the strategies that denialists always consider to be irrelevant. So presumably, clicking on different ones would make only minor changes. Such is the power of denial.
The mercury is volatile and disperses. That is the reason why you can’t eat much fish anymore. In large parts of the country, pregnant women are not supposed to eat any stream or lake caught fish owing to the risk of mercury induced birth defects. This is a problem for coal. Natural gas does not pollute in this manner when it is burned. Interestingly, the Reagan crime wave may well have been induced by lead added to gasoline. http://www.motherjones.com/environment/2013/01/lead-crime-link-gasoline
Doug Bostrom says
Yeah, Chris (136). Sorry about that; I was alluding to the magic implied by an assumption that the properties of coal ash are the same as coal.
The same magic that we wish would make mercury vanish but does not also fails to stop the concentration of less volatile impurities in coal combustion ash. The ash is effectively a concentrate and to the extent the source coal contained element X and element X doesn’t go up the stack, element X is found in the ash, concentrated.
Because of geochemistry, movement of water etc. some coal contains more of particular impurities than other coal.
Lots of coal is burned for a given thermal output so there’s lots of feedstock used for the production of a particular quantity of ash and hence concentration of impurities in that ash.
Ash does not magically vanish but instead has to be disposed of. “Disposal” includes such things as fill for concrete products, wallboard, fill in construction sites.
Because of the concentration effect of combustion, coal ash falls into the category of something called “technologically-enhanced, naturally-occurring radioactive materials,” or “TENORM.”
TENORM is not dirt. Note that we don’t add dirt to wallboard or concrete products and other applications of coal ash TENORM.
Equating dirt with TENORM is not based on facts.
Arne Melsom says
I am a bit puzzled by the IPCC’s assessment of the confidence in the projected changes in the overturning circulation in the Atlantic Ocean (AMOC). There is no observational evidence of trends (p. SPM-5); it is likely that the AMOC will weaken by 2050 (p. SPM-17); and it is very likely that the AMOC will weaken in the 21st century (p. SPM-17).
My understanding is that the basis for this evaluation is simulations that reveal an increasing atmospheric freshwater flux from the subtropics to mid-latitudes and beyond. The resulting decrease in upper ocean salinity makes the water column more stable, and less deep water is formed. At least, that’s what the model results reveal.
Here’s my take on this topic: (1) The water masses that sink (or are mixed vertically) in the north are relatively salty due to a significant influence of northward transport from the subtropics, where the waters are projected to become more salty. So the projections for changes in overturning depend on the models’ description of how the originally saltier waters are mixed as they flow northwards and are cooled. (2) The sinking at high latitudes is a compensation for vertical mixing driven by wind forcing and breaking of internal waves (e.g. Wunsch & Ferrari; Ann. Rev. Fluid Mech., 2004). Again, the projections for changes in the AMOC depend on mixing parameterizations in the models.
(1) and (2) are both small-scale mixing processes that I believe must be highly non-isotropic. Isn’t it notoriously difficult to accurately parameterize such processes? E.g. do the model results agree with the confinement of vertical mixing to complex topographical features, as reported by Wunsch & Ferrari? As quoted above, we have no observational record of trends in the AMOC, so it seems to me that it is difficult to assess the models’ performance in the present context. Although I understand that there are large-scale changes that may give rise to a reduced overturning, I must admit that I’m wondering how the IPCC finds it *very likely* that the AMOC will weaken in the 21st century, if this is based more or less solely on model projections of trends that cannot be validated by an observational record.
But I’m familiar with neither the full literature of relevance nor the details of the evaluation of the model results. Did I perhaps go wrong somewhere?
sidd says
Re: Wright(2013)
Fig. 4 shows the effect of ocean depth of sediment deposition upon the size of the delta-C13 excursion.
1)This is very nice becoz it shows a path to reconcile deep and shallow sediment records from PETM.
2)This is also nice becoz it uses the Archer model
3)Coupled with the time differentiated CaCO3 and delta-C13 response, it is a nice test of the Archer model.
4)Wouldn’t it be nice if Archer would comment ?
5)I do hope someone does delta-N15 measurements also to illuminate the nitrogen pathways
Says that climate sensitivity is high (from the Joe Romm article):
The Earth’s actual sensitivity to a doubling of CO2 levels from preindustrial levels (to 550 ppm) — including slow feedbacks — is likely to be larger than 3–4°C (5.4-7.2°F).
ozajh says
Meanwhile here Downunder we’re enjoying our mid-Spring weather. Now that those pesky lefties have been vigorously (and, I have to reluctantly admit, deservedly) thrown out of power, all that nonsense about AGW and taxing carbon emissions can be made to disappear and a story such as
If, for example, you evaporate water to get back the salt crystals you mixed into it, you don’t get more salt back. The carbon in coal came from the air, and on becoming solid, diluted the dirt in its vicinity. When combustion turns the carbon back into air, you don’t get back more of the diluted material than you started with.
You may wish to argue that the hydrous or carbonate components of the original dirt were driven off as well during combustion, but they’ll come right back exothermically, as you point out, in concrete products. So while it isn’t dirt, coal ash is no different than (clay-type) dirt in its radioactivity. Thus, the claims of the nuke boosters are untrue. Burning coal does not increase radiation exposure. Rather it decreases it owing to the dilution of carbon-14 in the food supply.
Now, if we had built houses out of coal to shield ourselves from background radiation, and those houses had burned down, then our radiation exposure would increase owing to coal combustion. Not because of any increase in the background, but rather because of the loss of shielding. But we don’t use coal that way. Who knows? A few more Fukushimas and maybe we’ll want to.
Doug Bostrom says
“Now, if we had built houses out of coal…”
Well, we do in part, but from concentrated residue from coal. Coal isn’t coal ash.
Whatever. We’re dancing on the line between fact and wish.
For people interested in facts, EPA’s info is still available as of this writing:
Says that climate sensitivity is high (from the Joe Romm article):
The Earth’s actual sensitivity to a doubling of CO2 levels from preindustrial levels (to 550 ppm) — including slow feedbacks — is likely to be larger than 3–4°C (5.4-7.2°F).
I stated as long ago as 2007, I believe, and possibly on this site, that the sensitivity had to be on the high end, likely more in the 4 – 6 range, because the changes we were seeing were already so far beyond what we were supposed to be seeing. This was prima facie evidence that the models were off.
One example? Look at the ASI extent charts going all the way back. We start to see the decline not in 1979, the oft-cited beginning of the satellite record (the citation of which causes the layperson to think ice started declining in 1979 and also skews the total actual decline), but in 1953. CO2 ppm at that time was around 315. Yet, we all know that the ice doesn’t respond to a 0.8 or what have you yearly change all of a sudden, but responds to the collective rise in energy/heat in the oceans and atmosphere. The simple conclusion? The planet started responding long before 1953 and the *effect* was seen in 1953 in the form of melting ASI.
Pretty clear that the planet started responding in real, visible ways once we passed the 300 ppm threshold, give or take a few ppm.
Again, prima facie. Without a single scientific study and nothing but the ice record we can draw these conclusions. Does that not indicate a need for greater flexibility in our thinking and public discourse by not just laypersons, but the scientific community, too?
Note the differences in reaction: Wow, that confirms what our eyes see vs. is that really there?
Scientific reticence is a maladaptive behavior when you’re going 100 mph and the wall is clearly visible in the headlights.
I hope we figure out how to combine policy risk analysis and the science before it is too late.
A starting point? Talk to scientists in terms of .05 and .01 validity; talk to the public in terms of what those numbers mean in the real world: Certainty. Not mostly certain or kind of certain or pretty much all certain… call it what it is: Dead certain in any sense that is meaningful. Remove the wiggle room since it’s not really there anyway.
Were we to do this, scientists could still maintain their reticence in halls of science while helping move policy forward in the halls of government and streets of communities.
SecularAnimist says
FYI …
The projected timing of climate departure from recent variability, Nature 502, 183–187 (10 October 2013)
Abstract:
Ecological and societal disruptions by modern climate change are critically determined by the time frame over which climates shift beyond historical analogues. Here we present a new index of the year when the projected mean climate of a given location moves to a state continuously outside the bounds of historical variability under alternative greenhouse gas emissions scenarios. Using 1860 to 2005 as the historical period, this index has a global mean of 2069 (±18 years s.d.) for near-surface air temperature under an emissions stabilization scenario and 2047 (±14 years s.d.) under a ‘business-as-usual’ scenario. Unprecedented climates will occur earliest in the tropics and among low-income countries, highlighting the vulnerability of global biodiversity and the limited governmental capacity to respond to the impacts of climate change. Our findings shed light on the urgency of mitigating greenhouse gas emissions if climates potentially harmful to biodiversity and society are to be prevented.
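On one reading of that abstract, the index can be sketched in a few lines on synthetic data (the trend and noise values below are arbitrary): the departure year is the first year after which every annual value stays outside the historical envelope.

```python
import random

random.seed(7)
hist = [random.gauss(0, 0.15) for _ in range(1860, 2006)]  # stand-in historical series
lo, hi = min(hist), max(hist)                              # bounds of historical variability

years = list(range(2006, 2101))
proj = [0.025 * (y - 2006) + random.gauss(0, 0.15) for y in years]  # toy warming path

# First year such that all subsequent values lie outside [lo, hi].
departure = next((y for j, y in enumerate(years)
                  if all(v > hi or v < lo for v in proj[j:])), None)
print(f"historical envelope: [{lo:.2f}, {hi:.2f}] K; departure year: {departure}")
```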
Hansen states in his latest missive: “In my opinion, multi-meter sea level rise will occur this century, if the huge business-as-usual climate forcing actually occurs.”
I fear worse. I have stated previously my reasons for thinking that we are locked into 1m SLR from GIS+AIS alone this century. I believe we have already pumped enough heat into the ocean to destabilize WAIS, regardless of future emission trajectory.
I think what people describe with the car heading towards a cliff is best described with a large impact ice sheet disintegration event. This is in the cards…
Have a read through this and its antecedents; I’m old and gray enough to remember back when drumlins were thought to have been created by long slow processes under the icecaps — then one day they watched one happen, and time stood still. Or speeded up. Or something.
“Rapid Sediment Erosion and Drumlin Formation Observed Beneath a Fast-Flowing Antarctic Ice Stream – AM Smith, T Murray, KW Nicholls, K Makinson, G … – American Geophysical Union, Fall Meeting 2005
I’m hornswoggled to see only one subsequent cite. This was, I thought, one of the early cracks in the long-held idea that the Antarctic could not change rapidly.
Radge Havers says
@98
Perhaps, although since we’re looking at a time-worn political tactic, and we’re living in strangely charged political times (also human), I’m more inclined to ascribe a secondary role to simple wishful thinking. That’s in general, as I’m also thinking of exchanges that have taken place on other threads as well as what goes on outside the RC enclave.
Watcher says
re 93:
Ray,
If you look at my posts you will note that I have made no mention of the “16 year hiatus” except tangentially (#78) in defending Dr. Curry’s right to wonder at the differences between the draft and final AR5 SPM. You’re the one telling me I’m talking about a hiatus. As for whether a 16 year hiatus has ever been simulated, I would defer to von Storch:
At my institute, we analyzed how often such a 15-year stagnation in global warming occurred in the simulations. The answer was: in under 2 percent of all the times we ran the simulation.
Rather, the point of my posts was to discuss something that was new to me: why the modeling community doesn’t appear to consider an accurate simulation of absolute temperature important, despite (once again) Mauritsen:
To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act
And yet both the figure that drew my attention and Gavin’s comments seem to indicate that it’s not considered a big deal, since what they are after is the trend. Other comments from yourself and Magma are in the same vein, but I’m not convinced that you are climate modelers so I don’t want to use them as an example of the community. One should try to avoid putting words into other peoples’ mouths, don’t you think?
Patrick 027 says
It’s also worth pointing out that the range in model results are what they are, whatever the errors in baseline temperature; and they are clustered into a bundle. And also, there are sensitivity estimates from observations and paleoclimatic studies.
http://www.skepticalscience.com/Estimating-climate-sensitivity-from-3-million-years-ago.html
https://www.skepticalscience.com/hansen-and-sato-2012-climate-sensitivity.html
Watcher says
Re: 92
Dave123,
Assume you have an aqueous reaction mixture. Try running your simulation at 294K, and at 292K — straddling the freezing point of the mixture of course. My guess is very little change in your reaction rate model, but a very big change in real life.
Latent heat makes a huge thermodynamic difference when the temperature is swinging through a phase transition, which it will do repeatedly in a climate simulation. I wouldn’t think this is news to any climate modelers.
Dave123 says
@Watcher – What I think you’re missing in your focus on the freezing point of water is so far as climate models go, it means that you move a few meters higher or lower in the atmosphere when there’s a temperature error around the freezing point, and otherwise it doesn’t make much difference in kinetic expressions, especially compared to the other approximations that are being made. Same thing on the ground, a few days earlier or later, a few meters higher or lower, a few degrees further north or south. As long as the error is reasonably consistent, the trends will emerge. Now I’ll stand ready for correction from someone who can point to a specific equation/scenario that’s very sensitive and tightly coupled, but in the general sense of things those are the consequences I see from the kinds of equations I understand.
Going back to reliability- in our systems the objective function was economic maximization- which meant “more”. More of course had upper limits because of dangerous runaways. Running my model simulations with dozens if not hundreds of scenarios provided guidelines and boundaries that would have been obscenely expensive and risky to try to produce experimentally. That’s where good enough comes in- stopping short of that last 1/2 %yield/productivity in order to have a 3 degree margin on temperature measurements.
I suggest that how I used reactor models is more like how we use climate models….looking for trends, responses and safe margins, not absolutes.
Hank Roberts says
> Reality is only one possible realization of the climate system.
Could someone put that in understandable fifth-grade English?
If I’m the one who started “clockwork model” forgive me. I made that term up handwaving. Is there really a kind of model meant to behave consistently, much the same way each time it’s run?
‘Cause I’m contrasting that to a climate model in which significant events occur with some range of likelihood and timing, not tied down because the model includes modeling natural variability. When and how big and where the volcanos erupt in each century, the usual example.
I think of climate as more like a variant or mutant Rube Goldberg machine — one that ends up reaching one of a variety of different likely outcomes each time it’s run.
(was such a thing ever built in hardware? something like the Diskworld hydraulic model of money flow in AnkhMorpork …).
Doug Bostrom says
Booker’s “Climate change ‘scientists’ are just another pressure group” is remarkably brazen, a heroic attempt at establishing an expedient cognitive short-circuit. If Booker can only lead people to believe such a thing, then it’s no longer necessary to scrutinize, understand or acknowledge any of the details of difference between a broad scientific consensus and a demagogue’s appeals to emotion.
“Both sides are the same, just pick the one that fits your predisposition.
No, but nauseatingly cynical.
“…why the modeling community doesn’t appear to consider an accurate simulation of absolute temperature important…it’s not considered a big deal, since what they are after is the trend.”
There you go, you said it. Simulation or physical experiment, if you’re trying to predict the statistical reliability of a population of light bulbs it doesn’t matter which particular bulbs fail, the numbers are still useful. If your simulation is also trying to predict which particular light bulb will burn out first, second or later then the the model will of course “fail” even as it continues to produce useful statistics.
In order to have a lot of pointless discussion and doubt it’s imperative that we forget the fundamental purpose of climate simulations.
Judith Curry is not stupid but she is deeply mysterious.
Watcher says
Re: 102 & 104
Patrick 027 — I dunno, it’s hard to see anything clustered in this figure.
Dave123: I expect you’re right for a reaction vessel where you’re trying to supply energy to get over an activation barrier. Can I assume the reactions are essentially irreversible once they occur and you’re mostly tweaking which pathways are favoured?
In a climate model it is precisely those few metres of altitude or few degrees of latitude that are at issue. Essentially one is modeling a reversible system (in the sense of a local equilibrium rather than a strict thermodynamic reversibility) in an almost-steady state situation: heat is being pumped in, transported by storms, currents, etc., contributing to melting/freezing … gads. An unholy mess and while I do go on I have to take my hat off to people who are willing to tackle it.
Anyway, exactly where all of this is going on (i.e. what latitide, altitude, etc.) is at the heart of the predictions people care about: whether the methane clathrates will be released or Greenland melts or what have you. Hence, I think it matters more than in a chemical reactor.
David B. Benson says
Magma @74 — Eroded and that in just the past few years,
Dave123 says
@Watcher- It will take someone with more of an inside view of the models, but the errors in the various other components besides temperature are more likely to be limiting factors. But no, the few meters or degrees of latitude are precisely not the issue. You need a sensitivity analysis on the estimated errors in lots of other variables and ‘constants’ before you can get too interested in temperature by itself. Look at the error bars on known forcings going into climate models. Those are bigger on a percentage scale than temperature. (See approved summary for policy makers SPM.5: All of the error bars are far greater than 1 degree out of 300.
Dave123 says
Watcher- no the problem with the reactor wasn’t putting heat in, it was taking heat out….hence the concern about runaways. The concern was also that if you assumed a normal error distribution on your temperatures measurements (despite calibration), the possibility that 1 error in 10000 could be measuring low enough to significantly miss boarderline runaway conditions had to be reckoned with.
Chris Dudley says
Hank (#91),
Yes, it is a matter of conservation. Thomas has a fair point, but most dirt has seen its fair share of ground water as well. There is heterogeneity in the distribution of uranium in soil, and it is also expected in coal ash for that reason alone.
Regarding screening, perhaps I should make that clear. There is radioactivity occurring throughout the Earth’s crust. We are subject to very little of it because it is also blocked by the Earth’s crust. Laying down a layer that has the same concentration of radioactive isotopes as the rest of the crust makes no difference in radiation exposure because it screens the layer below. That is why bulldozers don’t raise radiation levels. They may have made a bigger pile of stuff that has some radioactivity, and there is indeed a greater number of decays in that volume than before the pile was there, but the old surface is screened by the new material and no longer contributes to the surface radiation.
Dave123 says
@hank 106- My guess is that there are two broad categories of physics based models, and that you can split them into clockwork and non-clockwork. The kinds of reactor modeling I did is very clockwork, because there were no random inputs. We controlled the horizontal, we controlled the vertical. Given a scenario, the same result pops out every time you run the model. Thing is these models contained so many elements of a climate model: heat transfer, mass transfer, rate of reaction, temperature and concentration dependence, and parallel to uncertainties in forcings, we had uncertainties (albeit small) in some heats of reaction, the rate constants and some exponents (order effects) on concentrations due to gas and solid phase adsorption/desorption effects. We had to tune around those. In the end, everything was uncannily, eerily accurate, down to a little ‘hiccup’ that I thought was below the noise threshold being reproduced and observed large scale.
But we didn’t have random events thrown in, such as volcanoes or ENSO. ENSO might relate because it would affect humidity and that had strong influence on one system (but not the other), but even there, given a the same input we obtained the same output, and we were only concerned with steady states.
Climate is different- its not steady state. There’s no control panel to tweak anything to compensate for an uncontrolled variable. It’s vastly more complex. We could simplify out the CFD in our systems at both macro and micro scales because we had consistent turbulent flow and appropriate time scales compared to diffusion vs convection, you can’t do that in climate models.
In fact, with all of the uncertainties going into climate models (from what the true values of forcings are ….and whether there’s local, contingent or emergent variation in forcing values) I’m sometimes amazed that they come as close as they do.
Ray Ladbury says
Watcher,
If you wish to avoid misunderstanding, then might I suggest expressing yourself clearly? Do not take a cue from Aunt Judy where you advance to the very verge of taking a position and then withdraw into plausible deniability.
Unfortunately, von Storch is engaging in his penchant for doing science by press, long before he has allowed his peers to have their say on his analysis. Everything depends on how von Storch asked the question. If one asks whether a particular 15-year period will show insignificant warming, then, yes, the odds will be small. If, however, one asks whether some 15-year period in a span of 50 years will show low warming, then the odds will be a whole lot higher. If one asks whether a 15-year trend beginning with an El Nino that was about 2 sigma above the trend line might show negligible warming, I might take those odds. And finally, when you specify that the two end years be La Ninas, it’s damn near a lead-pipe cinch. But Judy is not interested in lead-pipe cinches. She’d rather be mystified. It makes it easier to fool readers like you.
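If you want to see how much the answer depends on the question, here is a toy Monte Carlo. Every number in it is made up for illustration (a steady 0.02 K/yr trend with AR(1) noise standing in for ENSO); it is not output from any climate model:

import numpy as np

rng = np.random.default_rng(0)

trend, sigma, r = 0.02, 0.1, 0.6   # K/yr, K, lag-1 autocorrelation (all assumed)
years, window, n_runs = 50, 15, 5000

def synthetic_series():
    # linear trend plus AR(1) "ENSO-like" noise
    noise = np.empty(years)
    noise[0] = rng.normal(0, sigma)
    for t in range(1, years):
        noise[t] = r * noise[t - 1] + rng.normal(0, sigma * np.sqrt(1 - r**2))
    return trend * np.arange(years) + noise

t15 = np.arange(window)
fixed = anywhere = 0
for _ in range(n_runs):
    y = synthetic_series()
    slopes = [np.polyfit(t15, y[s:s + window], 1)[0]
              for s in range(years - window + 1)]
    fixed += slopes[0] <= 0        # one pre-chosen 15-yr window is flat
    anywhere += min(slopes) <= 0   # SOME 15-yr window in 50 yrs is flat

print("P(flat, pre-chosen window):", fixed / n_runs)
print("P(flat, some window):      ", anywhere / n_runs)

The second probability comes out several times the first, and that is before you let yourself pick the endpoint years after looking at the data.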
Flakmeister says
CERN Cloud experiment press release:
http://press.web.cern.ch/press-releases/2013/10/cerns-cloud-experiment-shines-new-light-climate-change
Cosmic rays have a negligible effect on the aerosols associated with amines and sulphuric acid…
Oh my, Jasper K. has let the denialosphere down yet again…
Hank Roberts says
> Chris Dudley
What you believe is inconsistent with the measurements reported in the published literature on the subject. You’re entitled to your own beliefs. Repeating your conclusions doesn’t improve the mismatch: whatever source you’re relying on isn’t published. I rely on the published science. Enough, eh?
Chris Dudley says
Hank (#116),
Actually, you have not demonstrated anything here. You cite the paper that has been disproved as evidence. That is both a false appeal to authority and circular. You provide links showing that coal ash and soil have overlapping ranges of radioactivity, then claim that they are different, when your links show they are not. If you can’t argue cogently, you can’t be persuasive. Read more carefully and do some thinking. You’ll understand eventually.
Doug Bostrom says
Amazing how we can make something that is fairly simple in its broad features unnecessarily complicated to think about, but that’s a common problem these days with the epidemic application of tactical and strategic confusion.
Combustion of coal to ash: does it concentrate any elements and/or compounds?
If we start with coal that includes various impurities and burn it, we end up with a residue chiefly consisting of those impurities, in particular those that are not volatile at the combustion temperatures involved.
But apparently we’re supposed to believe that the properties of coal ash are the same as coal. How does that work?
Meanwhile, it’s known that particular deposits of coal are associated with particular impurities.
I’ve a feeling we’ll next be hearing that the mercury found in coal to a greater or lesser extent magically vanishes as the coal is burned.
prokaryotes says
New finding shows climate change can happen in a geological instant
[Response: My sources tell me that the inference that the banding is annual (and hence the ‘instant’ conclusion) is quite controversial. More study is definitely needed on this. – gavin]
Hank Roberts says
> the paper that has been disproved
Please get help. Ask a librarian.
Doug Bostrom says
That article on what we might infer to be varves is amazingly devoid of useful content. Apparently there’s disagreement on whether the layers are actually varves?
Anyway, here’s the abstract:
http://www.pnas.org/content/110/40/15908.abstract?sid=05a949a1-8171-4d96-88e1-3d060fc2c8ad
Doug Bostrom says
Beef: In a perfect world I’d get free access to some number of (one, even) PNAS articles for each bleary-eyed 6am delivery of a passenger to the airport made so that the passenger could attend a NAS meeting across the country, in the swamp. Meetings which– by the way– are essentially a gift.
It’s just not a fair world. :-)
Hank Roberts says
> geological instant
The linked story quotes the authors:
The bands could contain many possible proxies, many quite recently discovered. http://scholar.google.com/scholar?as_ylo=2013&q=paleo+core+proxies
They could be layers of mud from annual extreme rainfall events if these cores are from near shore locations; they could be sediment from plankton blooms. Or much else.
The bands are layers at some particular location — which could be due to local extremes while global CO2 was changing relatively slowly. Or so I’d guess.
Hank Roberts says
More better info on location and what’s in the bands
> Wright and Schaller
doi: 10.1073/pnas.1309188110
PNAS October 1, 2013 vol. 110 no. 40 15908-15913
http://www.pnas.org/content/110/40/15908.abstract
supporting information online at http://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1309188110/-/DCSupplemental.
Watcher says
Re: 100
Patrick 027,
You ask
“Would you have confidence in a model’s trend if the absolute value were 1 % off but the trend, integrated over time, produced a 10 % difference?”
Well, perhaps I would, but the trend as measured (and modeled) from 1900 to the present integrates to less than a 0.3% difference in absolute T. And there are no “random” events in there, since historical forcings are used to generate the curves and the models are aligned to data. The very scariest IPCC scenarios, carried out to 2035 or so (the approved SPM Figure 1.4), end up at +2K relative to 1950, so call that 2.5K above 1900. On a ~288K absolute baseline, that makes it somewhere around 0.8%.
You also ask,
“How far up or down, and for that matter, north or south, does the 273.15 K isotherm shift among models for the same forcing?”
A good question, which is rather the point I was after but was too lazy to look up. According to various Googled references, I would say around 500m of altitude and 1000km of latitude for the observed 3K spread in the hindcast models shown in Mauritsen et al. (roughly the 3K spread divided by a ~6.5 K/km lapse rate, and by a midlatitude surface gradient of very crudely 3K per 1000km). That takes in the entire extent of Greenland and its ice sheet, for example.
By the way, thanks for your extensive comments on chaotic aspects of climate (and geology, of all things). You’ve obviously given this some thought and I appreciated some of the insights.
And finally Ray, from 114:
“If you wish to avoid misunderstanding, then might I suggest expressing yourself clearly? Do not take a cue from Aunt Judy where you advance to the very verge of taking a position and then withdraw into plausible deniability.”
OK, how’s this. Gavin once advised me to go softly and not say I think something is absurd when it’s been pronounced on by someone who has studied it extensively. I’m trying to follow that advice.
In my trivial, stunted world of clockwork modeling I would consider it absurd to take a model that is only “good” to 1% and use it to predict things occurring on a scale of less than 1%. Apparently, models infinitely more complex than mine, that contain chaotic features and dozens of poorly characterised parametrisations should not be judged by the same standards. I know that sounds sarcastic, but it is the gist of what has been suggested to me here.
I find this surprising and so apparently does Dr. Curry. I’m trying to be polite because this is a legitimate field of study of no little importance and still being actively pursued. Happily, for the most part discourse here remains civilised, but that’s not universally true.
Is that clear enough?
Hank Roberts says
> integrates to less than a 0.3% difference in absolute T
Variations in insolation are about one part in 1300.
Climate models aren’t “infinitely” more complex.
> my trivial, stunted world of clockwork
There, there, it’s not so bad as all that.
But do you model anything that includes natural variability, so that your model runs differ from one another?
We have one run of the Earth (or rather we have scattered little bits of the run to date, from various proxies, collected where someone drilled a hole carefully).
http://www.gfz-potsdam.de/en/research/organizational-units/departments-of-the-gfz/department-5/climate-dynamics-and-landscape-evolution/projects/icdp-elgygytgyn-drilling-project/
Ray Ladbury says
Watcher,
Fine, except I don’t know of any conclusions on which the basic science depends that are contingent on the models being right to 1%. And again, one is not looking at absolute numbers from the models but rather at trends and how robust those trends are over various ranges of initial conditions and parameter values. That is more than sufficient to reach conclusions.
As to what Aunt Judy presents, it relies on a thorough misunderstanding of statistics and confidence intervals. If you want to understand where she went wrong, Watcher, that is where I would start. If you can understand it, maybe you can explain it to Judy–Dog knows many have tried.
sidd says
Thanx for pointing out the Wright paper.
1)delta-O18 variations are quite strong evidence for the layers being annual
2)The ability to temporally differentiate delta-CaCO3 and delta-C13 response is another huge argument for the layers being annual
3)The speed of CaCO3 decrease is astonishing, ” … %CaCO3 shows a more abrupt decrease, from 6% to 1% within one layer.” Acidification occurred in an eyeblink. “Precipitous” is the term the authors use.
4)3000 GT carbon release estimate: “Given the rapidity of the onset, magnitude of the δ13C excursion, and that the observed calcite compensation depth shoaling in deep ocean requires ∼3,000 GtC(3), two mechanisms meet these criteria: large igneous province-produced thermogenic methane (6, 7) and cometary carbon (11,12). The latter is consistent with the recent discovery of a substantial accumulation of nonbiogenic magnetic nanoparticles in the Marlboro clay, whose origin is best ascribed to impact condensate (71).”
sidd
Patrick 027 says
re coal ash (vs coal – 121 Doug Bostrom ; PS I think I read once that Vanadium is particularly concentrated in petroleum (?) … tunicates? okay, never mind that…), dirt, groundwater – actually, surface dirt is exposed to rainwater (or ice) :) except when the water table comes up to meet it. I don’t know a lot about groundwater but there’s a diagram in a geology book, “Continents and Supercontinents”, which shows how ‘groundfluid’ changes with depth – go far enough and it’s NaCl or even CO2 (be careful on framing that last one in any climate context to avoid confusion). Anyway,
…
re Watcher @104 – maybe I missed something earlier, but wouldn’t such a model easily take the latent heating and phase change into account? If your point is that a T error is important, that’s a good example of when it would be.
Although – maybe a bit nitpicky, but – the phase transition process generally requires nucleation and growth of new phases. Growth can be delayed, relative to thermodynamic equilibrium of the whole volume, because any nonzero diffusion of heat and/or matter requires a thermal and/or compositional gradient, so at some distance from the new phase there may be some supersaturation or supercooling (or superwarming?) – itself making nucleation at those places more likely (unless it is too cold). But nucleation takes space and time, or else seeding…
(in a “Good Eats” episode on chocolate, Alton Brown added cocoa butter already in the alpha phase (or beta?) to melted chocolate (as I recall, it was still molten at that point) to seed alpha (or beta?) cocoa butter. There are six different ways for cocoa butter to crystallize; I got the impression alpha is the one at thermodynamic equilibrium under typical eating conditions – and it’s the most desirable for eating solid chocolate; beta decays to alpha readily, so beta’s acceptable. Martensite is a (brittle) form of steel which is in thermodynamic disequilibrium indefinitely; from memory, I think it would, given the opportunity, decay to Fe + cementite, but cementite itself is a disequilibrium phase, and would change into Fe plus graphite if it could. But graphite crystals often don’t form in the cooling process, unless you add, for example, (enough) Si (as best I can recall) to the mix. (if you really need to know that, double check for yourself first))
…more to the point, though, homogeneous nucleation of ice (no seed crystals like silver iodide) occurs at a certain rate per unit volume at a given temperature; -40 deg C and F (the two scales coincide there) is considered the temperature at which pure liquid water (even down to cloud droplet size) will freeze without waiting too long in the absence of ice nuclei – the aerosols effective in nucleating ice, which (per a show on the Weather Channel) can include at least one species of bacteria. (I infer the “without waiting too long” from the concept of a nucleation rate.) Aerosols vary in their ability to nucleate ice and, at least in updrafts, there is generally a population of supercooled liquid cloud droplets above the freezing level. Of course, once ice is present, you’ve got the Bergeron process…
(Also, condensation of liquid water can be delayed, although condensation may start below 100% RH (relative humidity, relative to a flat surface of pure liquid H2O) because of aerosols which are soluble in water – forming haze particles which, for a given concentration of solute and radius, are in equilibrium with a given RH; concentration decreases as the droplet grows, so RH needs to rise for growth. It (typically, at least, from what I learned) needs to rise above 100% RH eventually, because as the solute concentration becomes smaller (going as radius^-3), the effect of the small radius – which pressurizes the droplet via surface tension – starts to dominate. When the droplet grows to the size where the equilibrium RH peaks, the haze particle turns into a cloud droplet. See the Köhler curve.)
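A minimal numerical sketch of that curve, using the textbook Kelvin + Raoult approximation; the temperature, solute mass and solute properties below are assumptions picked for illustration:

import numpy as np

T = 283.0                                # K (assumed)
A = 2 * 0.072 / (1000.0 * 461.5 * T)     # Kelvin (curvature) term, m
m_s, M_s, i = 1e-19, 0.132, 3            # assumed (NH4)2SO4 mass (kg), kg/mol, van 't Hoff factor
B = 3 * i * m_s * 0.018 / (4 * np.pi * 1000.0 * M_s)  # Raoult (solute) term, m^3

r = np.logspace(-8, -5, 2000)            # droplet radius, m
S = 1 + A / r - B / r**3                 # equilibrium saturation ratio

k = S.argmax()
print("critical radius ~ %.2f um" % (r[k] * 1e6))                 # ~0.16 um here
print("critical supersaturation ~ %.2f %%" % (100 * (S[k] - 1)))  # ~0.45 % here

Below the peak the haze droplet sits in a stable equilibrium; past it, growth runs away and the particle has “activated” into a cloud droplet.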
None of that necessarily counters your argument, but it may be worth knowing.
———-
re @108:
Which figure? (in http://judithcurry.com/2013/10/02/spinning-the-climate-model-observation-comparison-part-ii/ )
I should clarify – I was specifically thinking of equilibrium climate sensitivity. Of course, trends (averaged over internal variability) should generally be proportional to that, given the same heat capacity and forcing (although the trends could vary independently of equilibrium sensitivity if models have different rates of heating of the deep ocean, for example).
(regarding recent trends and models, I remember reading something (at RC or maybe SkepticalScience ?) relatively recently, that a correction had to be made to a figure in the IPCC because the model results were not properly aligned with the observations for a good trend comparison. I don’t know which version of that graph, if either, that Judith Curry is using.)
See the last figure in the second link in my 103 for what I had in mind (about equilibrium sensitivity) (although that doesn’t actually show clustering by individual models, but it compares the probability curve derived from models with that derived from other lines of evidence. One of those lines is climate reaction to individual volcanic eruption(s?) (Pinatubo)).
As to the sensitivity of climate sensitivity to temperature – well,
Is there a correlation between modelled T and modelled change in T? (I don’t know.)
How much difference is there in the same model’s sensitivity for a doubling of CO2 vs. quadrupling vs. halving of CO2 (technically, when forcing is expressed in W/m2, since each doubling of CO2 has a similar forcing to the last within some range of CO2 values – Venusian conditions are outside that range)? I’ve gotten the impression there isn’t much (at least for Charney sensitivity, or otherwise not including ice sheets). Compare to geologic history – e.g. the first figure in the second link in my 103 – here again: https://www.skepticalscience.com/hansen-and-sato-2012-climate-sensitivity.html – farther down that page, in the “Earth System Sensitivity” section (emphasis mine):
Although it is noted at the end of that section that the first figure is a schematic, the quote contains the reasoning behind expecting the fast-feedback sensitivity to be as such (smoothly varying).
Note that the link implies (See forcings vs. feedbacks) that Hansen and Sato 2012 (HS12) includes aerosols in the fast-feedbacks (it says that HS11 argue that it should be treated as such).
The “Earth System Sensitivity” section implies CO2 is still treated as a forcing, which is necessary if you want to consider sensitivity to atmospheric CO2. I’m unclear on how CH4 is treated. But if you are modeling climate responses to changes in atmospheric CO2 and CH4 (and CFCs and N2O, etc.), you can’t also have CH4 (and ultimately CO2) emissions from hydrates, thawing permafrost, and ecosystems in general as feedbacks in the model. And I don’t think the models used (or cited) in the IPCC generally include such feedbacks. This isn’t to say such feedbacks haven’t been studied, and if they occur and are sizable, the models which treat them as forcings would still be useful – just adjust the forcing for the known additional feedback, at least to start with.
—
From what I’ve read, (one?/the?) reason for the longer tails in the distributions in the last figure of the above link on the side of higher climate sensitivity is that, if sensitivity varies with climate, it is more likely to change by more than some amount for a larger climate change; thus, for smaller climate sensitivities, sensitivity is less likely to change over the course of a given climate response to a given forcing, whereas for a larger sensitivity initially, the climate change may either be more limited when reaching a range of small sensitivity, or enhanced farther by increasing sensitivity.
—-
Two final points:
One factor which would affect Earth system sensitivity is the loss of ice sheets: if ice sheets melt sooner rather than later, sensitivity to further warming is reduced sooner rather than later, though averaged over the range from the recent past to ice-free, the equilibrium response would be unaffected by the exact dependence of the ice sheets on climate (and rate of equilibration), because the start and end points would be the same.
Concerning the sensitivity at this point: do the different models start with different ice sheets; if modelled, do the ice sheet losses correlate with starting temperature; and do the ice sheet losses contribute significantly to the modelled sensitivity? Etc. for seasonal snow and sea ice (whose losses have been underestimated by models, from what I’ve read). I’m not sure about the last parts, but I’d think ice sheets are initialized to be as they are in reality.
Very broadly, it is possible to imagine systems where changes can be modelled with much more confidence than absolute values, even independently of them. Take any mass, and add 1 g to it, and the result is 1 g more, so it will weigh 1 g * gravitational acceleration more (for a ‘clockwork’ example).
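The climate analogue of that, as a minimal sketch: a zero-dimensional energy balance with made-up parameters, in which the feedback is linearized about each run’s own baseline. That linearization is exactly the assumption that temperature-dependent processes like freezing would break:

import numpy as np

lam, C, dt = 1.2, 8.0, 0.1   # feedback W/m^2/K, heat capacity W yr/m^2/K, yr (all assumed)
t = np.arange(0, 100, dt)    # years
F = 0.04 * t                 # slow forcing ramp, W/m^2 (assumed)

def run(T0):
    # integrate dT/dt = (F - lam * (T - T0)) / C from a biased baseline T0
    T = np.empty_like(t)
    T[0] = T0
    for n in range(len(t) - 1):
        T[n + 1] = T[n] + dt * (F[n] - lam * (T[n] - T0)) / C
    return T - T0            # report the anomaly from that run's own baseline

# A 3 K baseline offset leaves the anomaly trajectory untouched:
print(np.max(np.abs(run(+1.5) - run(-1.5))))   # 0.0

With linear physics the baseline offset is invisible in the anomalies; the argument upthread is about how much of the real system’s physics (freezing, snow and ice albedo) refuses to be linear.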
@ 125: see above on ice sheets;
And there are no “random” events in there, since historical forcings are used to generate the curves and the models are aligned to data. The very scariest IPCC scenarios when carried out to 2035 or so (the approved SPM Figure 1.4 ) end up at +2K relative to 1950, so call that 2.5K above 1900. That makes it somewhere around 0.8%.
From what I’ve read, models, when tuned, are tuned to best match a given average climate, not to match trends (see https://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/ , https://www.realclimate.org/index.php/archives/2009/01/faq-on-climate-models-part-ii/ ). Historical volcanic eruptions are matched in timing, but models are not generally tuned for climate sensitivity, so the amplitude of the responses is not matched except insofar as the model performs realistically. Internal variability can’t be matched up by tuning, so far as I know (if it could be, that would imply it is predictable over such a span of time, but we can’t predict ENSO even a decade in advance). We could confirm that this matching isn’t done by looking at individual model runs for the historical period, but I don’t have time right now to find that.
Brian Dodge says
Watcher – Assume you have a flat aqueous reaction vessel, running with a 10 degree temperature gradient from side to side. Run your simulation with the cool side at 292K, and again at 291K. Do you see a big difference?
Antarctic sea ice is increasing in the winter. Since the sun is below the horizon, the change in albedo this causes has no effect on forcing. The radiating surface temperature of open water is about 270 K; the radiating surface temperature of sea ice varies from ~270 to ~250 K. How much less heat is radiated away by ice, and is this increase in Antarctic ice a positive or negative forcing?
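For the first question, the blackbody numbers (a sketch assuming unit emissivity and ignoring the atmosphere above):

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

for label, T in [("open water, 270 K", 270.0), ("cold ice,   250 K", 250.0)]:
    print(label, "->", round(SIGMA * T**4), "W/m^2")
# ~301 vs ~222 W/m^2: the cold ice surface radiates roughly 80 W/m^2 less,
# before emissivity, clouds, or atmospheric back-radiation enter the picture.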
Hank Roberts says
> Wright … argument for the layers being annual
No problem there. I’m just wondering if it could be a local plankton or algae blooming. That would be of one of the nastier sorts — blooming in season, then dying off, acidifying the local area and making a sedimentary layer, over several years perhaps. Not sure how they go from the layers in the drill core to a global change.
Doug Bostrom says
128 Sidd vs 119 Gavin:
Looks like a varve deposit, then? What’s the basis of the controversy over the interpretation as such?
Leaving the objection hanging undefined is kind of like “I should tell you that… oh, never mind.” :-)
Hank Roberts says
PS, I was thinking about this sort of thing. It would change the pH of the water, I think:
http://www.ibtimes.com/green-algae-takes-over-yellow-sea-china-toxic-gas-covers-more-11000-miles-beach-photo-1335031
Get local severe precipitation, lots of runoff, etc. and I’d imagine this could cause, locally, the deposition of such annual layers.
Now, if they’re global, then we’re due to add our comparable layers real soon.
MARodger says
Yesterday the Guardian presented two interactive graphs to sum up the essence of IPCC AR5.
Myself, I feel it really only requires the first of these graphs. It is presented to demonstrate the impact of the revision of the ECS estimate from AR4 to AR5, something the Guardian describes as not very important, although, as the Guardian points out, denialists see such a revision as of great importance, a game-changing development. The impact of the ECS revision is shown by clicking alternately on the right-hand buttons.
The left-hand buttons show the impact of changing CO2 mitigation strategies. And of course, these are the strategies that denialists always consider to be irrelevant. So presumably, clicking on different ones would make only minor changes. Such is the power of denial.
MARodger says
Ooops. Graph with left-right buttons here.
Chris Dudley says
Doug (#118),
The mercury is volatile and disperses. That is the reason why you can’t eat much fish anymore. In large parts of the country, pregnant women are not supposed to eat any stream- or lake-caught fish owing to the risk of mercury-induced birth defects. This is a problem for coal. Natural gas does not pollute in this manner when it is burned. Interestingly, the Reagan crime wave may well have been induced by lead added to gasoline. http://www.motherjones.com/environment/2013/01/lead-crime-link-gasoline
Doug Bostrom says
Yeah, Chris (136). Sorry about that; I was alluding to the magic implied by an assumption that the properties of coal ash are the same as coal.
The same magic that we wish would make mercury vanish but does not also fails to stop the concentration of less volatile impurities in coal combustion ash. The ash is effectively a concentrate and to the extent the source coal contained element X and element X doesn’t go up the stack, element X is found in the ash, concentrated.
Because of geochemistry, movement of water etc. some coal contains more of particular impurities than other coal.
Lots of coal is burned for a given thermal output so there’s lots of feedstock used for the production of a particular quantity of ash and hence concentration of impurities in that ash.
Ash does not magically vanish but instead has to be disposed of. “Disposal” includes such things as filler for concrete products, wallboard, and fill at construction sites.
Because of the concentration effect of combustion, coal ash falls into the category of something called “technologically-enhanced, naturally-occurring radioactive materials,” or “TENORM.”
TENORM is not dirt. Note that we don’t add dirt to wallboard or concrete products and other applications of coal ash TENORM.
Equating dirt with TENORM is not based on facts.
Arne Melsom says
I am a bit puzzled by the IPCC’s assessment of the confidence in the projected changes in the overturning circulation in the Atlantic Ocean (AMOC). There is no observational evidence of trends (p. SPM-5); it is likely that the AMOC will weaken by 2050 (p. SPM-17); and it is very likely that the AMOC will weaken in the 21st century (p. SPM-17).
My understanding is that the basis for this evaluation is simulations that reveal an increasing atmospheric freshwater flux from the subtropics to mid-latitudes and beyond. The resulting decrease in upper ocean salinity makes the water column more stable, and less deep water is formed. At least, that’s what the model results reveal.
Here’s my take on this topic: (1) The water masses that sink (or are mixed vertically) in the north are relatively salty due to a significant influence of northward transport from the subtropics, where the waters are projected to become saltier. So the projections for changes in overturning depend on the models’ description of how the originally saltier waters are mixed as they flow northwards and are cooled. (2) The sinking at high latitudes is a compensation for vertical mixing driven by wind forcing and the breaking of internal waves (e.g. Wunsch & Ferrari; Ann. Rev. Fluid Mech., 2004). Again, the projections for changes in the AMOC depend on the mixing parameterizations in the models.
(1) and (2) are both small-scale mixing processes that I believe must be highly non-isotropic. Isn’t it notoriously difficult to accurately parameterize such processes? E.g. do the model results agree with the confinement of vertical mixing to complex topographical features, as reported by Wunsch & Ferrari? As quoted above, we have no observational record of trends in the AMOC, so it seems to me that it is difficult to assess the models’ performance in the present context. Although I understand that there are large-scale changes that may give rise to a reduced overturning, I must admit that I’m wondering how the IPCC finds it *very likely* that the AMOC will weaken in the 21st century, if this is based more or less solely on model projections of trends that cannot be validated by an observational record.
But I’m familiar with neither the full literature of relevance nor the details of the evaluation of the model results. Did I perhaps go wrong somewhere?
sidd says
Re: Wright(2013)
Fig. 4 shows the effect of ocean depth of sediment deposition upon the size of the delta-C13 excursion.
1)This is very nice becoz it shows a path to reconcile deep and shallow sediment records from PETM.
2)This is also nice becoz it uses the Archer model
3)Coupled with the time differentiated CaCO3 and delta-C13 response, it is a nice test of the Archer model.
4)Wouldn’t it be nice if Archer would comment ?
5)I do hope someone does delta-N15 measurements also to illuminate the nitrogen pathways
sidd
Brucie A. says
James Hansen has another paper out: http://www.columbia.edu/~jeh1/mailings/2013/20130926_PTRSpaperDiscussion.pdf
Says that climate sensitivity is high (from the Joe Romm article):
The Earth’s actual sensitivity to a doubling of CO2 levels from preindustrial levels (to 550 ppm) — including slow feedbacks — is likely to be larger than 3–4°C (5.4-7.2°F).
ozajh says
Meanwhile here Downunder we’re enjoying our mid-Spring weather. Now that those pesky lefties have been vigorously (and, I have to reluctantly admit, deservedly) thrown out of power, all that nonsense about AGW and taxing carbon emissions can be made to disappear and a story such as
NSW Fire Ban
can run in the Mass Media without even a suggestion of possible causation.
Chris Dudley says
Doug (#137),
If, for example, you evaporate water to get back the salt crystals you mixed into it, you don’t get more salt back. The carbon in coal came from the air, and on becoming solid, diluted the dirt in its vicinity. When combustion turns the carbon back into air, you don’t get back more of the diluted material than you started with.
Chris Dudley says
Doug (#137) cont.
You may wish to argue that the hydrous or carbonate components of the original dirt were driven off as well during combustion, but they’ll come right back exothermically, as you point out, in concrete products. So while it isn’t dirt, coal ash is no different than (clay-type) dirt in its radioactivity. Thus, the claims of the nuke boosters are untrue. Burning coal does not increase radiation exposure. Rather it decreases it owing to the dilution of carbon-14 in the food supply.
Now, if we had built houses out of coal to shield ourselves from background radiation, and those houses had burned down, then our radiation exposure would increase owing to coal combustion. Not because of any increase in the background, but rather because of the loss of shielding. But we don’t use coal that way. Who knows? A few more Fukushimas and maybe we’ll want to.
Doug Bostrom says
“Now, if we had built houses out of coal…”
Well, we do in part, but from the concentrated residue of coal. Coal isn’t coal ash.
Whatever. We’re dancing on the line between fact and wish.
For people interested in facts, EPA’s info is still available as of this writing:
http://www.epa.gov/radiation/tenorm/coalandcoalash.html
Or go with wishful; I don’t have a pointer for wishes because they mostly exist between our ears.
prokaryotes says
Video: Rate-dependent hysteresis and observed thermophoresis in electronic circuits.
Can the memristive equations help in modelling ice sheet dynamics one day? Is this useful at all?
Killian says
Brucie A. said, James Hansen has another paper out: http://www.columbia.edu/~jeh1/mailings/2013/20130926_PTRSpaperDiscussion.pdf
Says that climate sensitivity is high (from the Joe Romm article):
The Earth’s actual sensitivity to a doubling of CO2 levels from preindustrial levels (to 550 ppm) — including slow feedbacks — is likely to be larger than 3–4°C (5.4-7.2°F).
I stated as long ago as 2007, I believe, and possibly on this site, that the sensitivity had to be on the high end, likely more in the 4 – 6 range, because the changes we were seeing were already so far beyond what we were supposed to be seeing. This was prima facie evidence that the models were off.
One example? Look at the ASI extent charts going all the way back. We start to see the decline not in 1979, the oft-cited beginning of the satellite record (a citation which leads the layperson to think the ice only started declining in 1979, and which also understates the total actual decline), but in 1953. CO2 at that time was around 315 ppm. Yet we all know that the ice doesn’t respond suddenly to any single year’s 0.8-or-whatever change; it responds to the collective rise in energy/heat in the oceans and atmosphere. The simple conclusion? The planet started responding long before 1953, and the *effect* was seen in 1953 in the form of melting ASI.
Pretty clear that the planet started responding in real, visible ways once we passed the 300 ppm threshold, give or take a few ppm.
Again, prima facie. Without a single scientific study and nothing but the ice record we can draw these conclusions. Does that not indicate a need for greater flexibility in our thinking and public discourse by not just laypersons, but the scientific community, too?
Gavin, on the other hand, continues to take what I consider to be an overly cautious view of possibilities. The other day he immediately posted that the short-term reaction at the PETM from the recent paper ( http://thinkprogress.org/climate/2013/10/08/2750191/petm-co2-levels-doubled-55-million-years-ago-global-temperatures-jumped/ ) was controversial and should be treated with caution.
Note the differences in reaction: Wow, that confirms what our eyes see vs. is that really there?
Scientific reticence is a maladaptive behavior when you’re going 100 mph and the wall is clearly visible in the headlights.
I hope we figure out how to combine policy risk analysis and the science before it is too late.
A starting point? Talk to scientists in terms of .05 and .01 validity; talk to the public in terms of what those numbers mean in the real world: Certainty. Not mostly certain or kind of certain or pretty much all certain… call it what it is: Dead certain in any sense that is meaningful. Remove the wiggle room since it’s not really there anyway.
Were we to do this, scientists could still maintain their reticence in halls of science while helping move policy forward in the halls of government and streets of communities.
SecularAnimist says
FYI …
The projected timing of climate departure from recent variability
Nature 502, 183–187 (10 October 2013)
Abstract:
http://www.nature.com/nature/journal/v502/n7470/full/nature12540.html
Discussion at Climate Central:
http://www.climatecentral.org/news/one-billion-people-face-entirely-new-climate-by-2050-study-16587
sidd says
Hansen states in his latest missive: “In my opinion, multi-meter sea level rise will occur this century, if the huge business-as-usual climate forcing actually occurs.”
I fear worse. I have stated previously my reasons for thinking that we are locked into 1m SLR from GIS+AIS alone this century. I believe we have already pumped enough heat into the ocean to destabilize WAIS, regardless of future emission trajectory.
sidd
prokaryotes says
I think what people describe with the car heading towards a cliff is best described with a large impact ice sheet disintegration event. This is in the cards…
New iceberg theory points to areas at risk of rapid disintegration
Hank Roberts says
Have a read through this and its antecedents; I’m old and gray enough to remember back when drumlins were thought to have been created by long slow processes under the icecaps — then one day they watched one happen, and time stood still. Or speeded up. Or something.
“Rapid Sediment Erosion and Drumlin Formation Observed Beneath a Fast-Flowing Antarctic Ice Stream” – AM Smith, T Murray, KW Nicholls, K Makinson, G … – American Geophysical Union, Fall Meeting 2005
mentioned at http://scienceblogs.com/stoat/2007/02/05/why-do-science-in-antarctica/
Current link to the abstract is:
http://adsabs.harvard.edu/abs/2005AGUFM.C13A..04S
I’m hornswoggled to see only one subsequent cite. This was, I thought, one of the early cracks in the long-held idea that the Antarctic could not change rapidly.