After a record-breaking 2010 in terms of surface melt area in Greenland (Tedesco et al., 2011), numbers from 2011 have been eagerly awaited. Marco Tedesco and his group have now just reported their results. This is unrelated to the other Greenland meltdown this week, which occurred at the launch of the new Times Atlas.
The melt index anomaly is the anomaly in the number of days with detectable surface melt, relative to the 1979-2010 baseline period. The higher the number, the more melt days there were. While 2011 did not match the record 2010 levels, it ranked either 3rd or 6th, depending on the analysis.
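As a minimal sketch of how such an anomaly might be computed for a single grid cell (all values below are invented purely for illustration, not real data):

```python
# Hypothetical melt-day anomaly for one grid cell: the count of days with
# detectable surface melt in 2011 minus the mean count over the 1979-2010
# baseline. All numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
baseline_melt_days = rng.integers(20, 60, size=32)   # fake counts, 1979-2010
melt_days_2011 = 70                                  # fake 2011 count

anomaly = melt_days_2011 - baseline_melt_days.mean()
print(f"Melt-day anomaly: {anomaly:+.1f} days")      # positive = more melt days
```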
Analysis of the surface mass balance via regional modelling demonstrates that there has been an increasing imbalance between snowfall and runoff over recent years, leading to a lowering of ice elevation, even in the absence of dynamical ice effects (which are also occurring, mostly near the ice sheet edge).
Figure 2. Regional model-based estimates of snowfall (orange), surface melt and runoff (yellow), and net accumulation (blue), in Gt/yr, since 1958.
The estimated 2010 or 2011 surface mass imbalance (~300 Gt/yr) is comparable to the GRACE estimates of the total mass loss (which includes ice loss via dynamic effects such as the speeding up of outlet glaciers) of 248 ± 43 Gt/yr for the years 2005-2009 (Chen et al., 2011). Data for 2010 and 2011 will thus be interesting to see.
While the accelerating mass loss from Greenland is of course a concern, the large exaggeration of that loss rate by HarperCollins in the press release for the 2011 edition of the Times Atlas was completely wrong. The publishers have issued a ‘clarification’, which doesn’t really clarify much (Update (Sep 22, 2011): the clarification has itself been clarified from the original statement). As discussed on the CRYOLIST listserv, the error most likely arose from confusion over the definitions of what is the permanent ice sheet and what are glaciers, with the ‘glaciers’ being either dropped from the Atlas entirely or colored brown instead of white. (No-one that I have seen has posted the legend from the Atlas that defines the various shadings, though in the 1994 edition I have, glaciers are, unsurprisingly, white, not brown.)
The Times is still claiming that it stands by its maps (Update: the new clarification no longer makes this claim). This is quite silly, and presumably reflects the fact that it would be very expensive to reprint all the atlases that have already been printed. In case this isn’t already clear: there is simply no measure, neither thickness nor areal extent, by which Greenland can be said to have lost 15% of its ice. As a letter written by a group of scientists from the Scott Polar Research Institute says, “Recent satellite images of Greenland make it clear that there are in fact still numerous glaciers and permanent ice cover where the new Times Atlas shows ice-free conditions and the emergence of new lands”.
The pushback from glaciologists on this issue was a good example of how the science community can organise and correct high-profile misstatements by non-scientists: by connecting directly with journalists, providing easy access to the real data, and tracking down the source of the confusion. In doing so it also provided an opportunity to tell the real story: that change in Greenland and the rest of the Arctic is fast and accelerating.
Update: Here is a figure showing the preliminary GRACE results for Greenland for 2010 and the beginning of 2011. The anomaly in 2010 was huge, indicating perhaps a total loss of about 500 Gt/yr:
Figure 3. The upper panel shows estimated monthly mass anomalies over Greenland from 01-2003 through 07-2011 as in Velicogna (2009); the lower panel shows the detrended monthly data (blue), plus a monthly climatology from 2004 through 2009, highlighting the exceptional mass loss in 2010 (~500 Gt-H2O).
Further Update: A more detailed analysis from the Scott Polar Research Institute in the UK demonstrates quite clearly that the Times map is wrong, and inconsistent with all other ice sheets in the Atlas:
Figure 4. The new Times Atlas map (left) together with a mosaic of two satellite images taken in August 2011 (right). Also on the right are contours of ice thickness, shown every 500 m. The blue line is the 500 m ice thickness contour, which is the same as the outline on the Times map. The red line is the 0 m ice thickness contour, which would be a better representation of the ice sheet than the blue line. The Times map excludes all the ice between the red and blue lines. Furthermore, it excludes all the ice caps and glaciers that are not part of the main ice sheet, visible on the satellite image but not shown on the map. (SPRI)
References
- M. Tedesco, X. Fettweis, M.R. van den Broeke, R.S.W. van de Wal, C.J.P.P. Smeets, W.J. van de Berg, M.C. Serreze, and J.E. Box, "The role of albedo and accumulation in the 2010 melting record in Greenland", Environmental Research Letters, vol. 6, pp. 014005, 2011. http://dx.doi.org/10.1088/1748-9326/6/1/014005
- J.L. Chen, C.R. Wilson, and B.D. Tapley, "Interannual variability of Greenland ice losses from satellite gravimetry", Journal of Geophysical Research, vol. 116, 2011. http://dx.doi.org/10.1029/2010JB007789
- I. Velicogna, "Increasing rates of ice mass loss from the Greenland and Antarctic ice sheets revealed by GRACE", Geophysical Research Letters, vol. 36, 2009. http://dx.doi.org/10.1029/2009GL040222
Todd Albert says
Thanks for the CRYOLIST plug Gavin. For those wanting to join, or for more information on the list, I would direct them to http://cryolist.org/
Great post and excellent work by Marco et al.
Jeffrey Davis says
The Times Atlas is being published by, IIRC, a Murdoch company. Consider that the first news squib I saw about the issue equated it with the IPCC glacier issue.
Paranoid? Sure. Why aren’t you?
daedalus2u says
A metric I would like to see is the average density of the Greenland ice sheet and from that infer an average temperature.
There were reports from satellite altimetry that the sheet was not thinning, but then the gravity measurements showed that there was mass loss. I suspect that there was indeed mass loss, but also thermal expansion of the ice, as melt water drained down through the ice and froze, depositing its heat of fusion and raising the temperature.
The only thing that keeps the ice at the bottom frozen is the temperature gradient through the sheet to the surface during winter. If the sheet becomes isothermal (at the melting point) due to melt water, then the bottom can’t stay frozen.
Just thinking out loud (and I only know enough to be dangerous ;), but with IR radiation mapping during winter, one might be able to infer a sub-surface temperature profile and compare that with mass loss and ice sheet thickness.
[Response: The thickness of the Greenland ice sheet is ~2000 m on average (don’t quote me, that’s a ballpark estimate). The thickness of the firn is about 70 m in the dry snow zone, less along the coast. That means 70/2000 of the depth has a density ranging from about 0.35 at the surface to about 0.8 at 70 m. The rest (1830/2000) is nearly pure ice, density about 0.9. You can do the math from there and you’ve got about as good an estimate as you are going to get. As for temperature, the ice sheet is not going to be isothermal for a long time. At the summit, for example, the mean annual T is about -30 C, and the temperature is quite constant to a depth of 1500 m or more, due to advection of cold ice downwards. The temperature then increases towards the bed (the major control on that gradient being the heating from the ground, i.e. the geothermal heat flux). Changing this profile, even with major melting at the surface, will take on the order of the characteristic response time, of about 3000 m / 0.25 m/year (height over accumulation rate) = 12,000 years. Even if I’m off by an order of magnitude, we’re still talking about many hundreds of years. So the ice sheet becoming isothermal is not something to worry about in anything like the short term. (Of course, things can certainly speed up if we start getting lots and lots of melt at the surface all the way to the summit, but we’re not there yet: it’s still only a few days/year and a few hours/day up there. And getting that water all the way down through 3000 m of ice well below freezing is not easy.) Keep in mind that the ice at the coast, where one hears about the moulins, etc., is already isothermal, and has had a long time to get that way. –eric]
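For what it's worth, a quick script of the back-of-envelope math in the response above; every figure is one of the ballpark numbers quoted there, not a measurement:

```python
# Depth-averaged density of the ice sheet from the ballpark figures in
# the response above, plus the characteristic thermal response time.
firn_depth = 70.0            # m, firn thickness in the dry snow zone
total_depth = 2000.0         # m, rough mean ice sheet thickness
rho_firn = (0.35 + 0.8) / 2  # assume density rises ~linearly through the firn
rho_ice = 0.9                # g/cm^3 for the nearly pure ice below

rho_avg = (firn_depth * rho_firn
           + (total_depth - firn_depth) * rho_ice) / total_depth
print(f"Depth-averaged density: ~{rho_avg:.2f} g/cm^3")   # ~0.89

# Characteristic response time: thickness over accumulation rate.
print(f"Response time: ~{3000 / 0.25:,.0f} years")        # ~12,000
```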
Doug Bostrom says
The shape of the accumulation graph looks distressingly similar to the shape of Arctic sea ice graphs.
ldavidcooke says
Re:4
Hey Doug,
One of the greatest contributors in this case can be seen in the monthly SST anomalies near the SW coast, along with what appears to be a Gulf Stream eddy. The combination of the eddy and higher than normal SSTs in this region started late last Summer and continued through late Winter. It is possible that this is related to the Blocking High near Iceland during the same period. The question, at least for me, is what drove the Blocking High: the La Niña, the +ve AO or NAO, or changes in the N. Jet Stream…?
Cheers!
Dave Cooke
Brian Blagden says
Looking at Figure 1 of the Tedesco et al. paper (i.e. the map detailing the 2011 anomaly in the number of melting days, as reproduced in your post), two things are immediately obvious.
- The melting days anomaly is greater on the West of the Ice Sheet.
- Areas of above average melting (red and orange areas) are occasionally found immediately adjacent to areas of average or indeed below average melting (accumulation).
What factor(s) would give rise to the difference in the East-West melting pattern? Is this related to topography?
What factor would lead to areas of melting being juxtaposed with areas of accumulation (-ve melting anomaly)?
Brian Blagden says
Just noted my mistake: a -ve anomaly does not equal accumulation.
Cheers
Kevin McKinney says
“What factor would lead to areas of melting being juxtaposed with areas of accumulation (-ve melting anomaly)?”
Just guessing, but I’d think that it’s because those areas went ice-free–when that happens, presumably the melt-day anomaly goes to zero.
The SMB graph somehow brought the words “death spiral” to mind once again, though that’s based partly on a purely visual reading that doesn’t take into account the fact that it’s anomalies on the graph, not absolute values. Still–not comforting to contemplate at all.
Bill says
The field validation of this paper rests on 6 automated weather stations in southwest Greenland. The K transect is by no means representative of other areas of Greenland — climatically or topographically.
The problems of applying surface mass balance models, be they energy or degree day, derived in one area of Greenland to different regions are well known.
One should also wonder about the validity of applying a model originally intended to study rainfall, etc. in west Africa to Greenland. For instance, does the model allow for superimposed ice?
Finally, while the MODIS albedo data product is useful, it is only as good as the clear sky. Why the 16-day product rather than the daily?
Doug Bostrom says
Bill says:
21 Sep 2011 at 7:34 PM
The field validation of this paper rests on 6 automated weather stations in southwest Greenland. The K transect is by no means representative of other areas of Greenland… [etc]”
“Victory will be achieved when uncertainties in climate science become part of the conventional wisdom” for “average citizens” and “the media” (Cushman 1998).
You stopped at “uncertainties,” Bill. Meanwhile, doubt is not an argument. I’ll hazard a guess that developing your ideas sufficiently to override the research findings presented here will require a lot of work. So, what’s your objective? Improved knowledge, or doubt?
Kevin McKinney says
#9–“the field validation of this paper. . .”
Perhaps you missed it, but *three* papers are cited. Presumably you mean Tedesco et al., since the other two are gravimetric?
Be that as it may, whatever validity your criticisms may have, it would be rather odd if they somehow conjured the kinds of numbers we’re seeing here. They’ve been using the same algorithm all along, after all.
And the gravimetric results do seem pretty consistent, don’t they?
David Young says
An obvious question is how this acceleration can be possible in light of the satellite data showing sea level falling over the last 2 years. Maybe the satellite data is in error, or else ice is accumulating somewhere else. You know that there is also a recently published paper on driftwood in Greenland showing that the north coast was ice free within the last few millennia. The main question that comes to mind is so what.
Dan DaSilva says
Sometimes doubt is an increase in knowledge over being sure.
Peter KM says
Re: 9
Bill, your view on the validation and applicability of regional climate modelling over Greenland is a bit too simplistic. The validation of the MAR model (the output of which is plotted above), but also that of e.g. the RACMO model, is not based solely on the K-transect data. The surface mass balance (SMB) results show a very favourable correlation (r^2 > 0.90) with more than 500 SMB observations from all over the ice sheet, from firn cores, snow pits, etc. The meteorology is extensively validated against observations from a dozen weather stations and more than 50 coastal stations. Melt extent from the MAR model compares favourably to microwave data.
While regional climate modelling may have started off to study tropical or arid regions, it is perfectly legitimate to apply regional climate models over Greenland too, as long as you adapt the model to the specific circumstances. MAR, RACMO and other models are equipped with relatively sophisticated snow physics packages that treat albedo and meltwater realistically.
Assigning an uncertainty to modeled ice sheet SMB is an interesting challenge, certainly. But it would be unfair to write off this developing field by sowing unfounded doubts.
t_p_hamilton says
David Young said:”An obvious question is how this acceleration can be possible in light of the satellite data showing sea level falling over the last 2 years.”
Among the not so obvious answers: http://blogs.discovermagazine.com/badastronomy/2011/08/26/sea-level-rise-has-slowed-temporarily/
Paul S says
#12, David Young – It’s thought precipitation changes associated with ENSO are mostly the cause of the 2010 drop: http://www.ouramazingplanet.com/sea-levels-fell-last-year-1915/
Another thing to keep in mind, as a factor in both the recent past and near future, is the rapid building of massive dam projects across the developing world. These impound water and will offset sea level changes caused by climatic forces.
Kevin McKinney says
#12–
“An obvious question is how this acceleration can be possible in light of the satellite data showing sea level falling over the last 2 years. Maybe the satellite data is in error or else ice is accumulating somewhere else.”
You do know–don’t you?–that there are other factors affecting sea level besides Greenlandic ice melt?
“You know that there is also a recently published paper on driftwood in Greenland showing that the north coast was ice free within the last few millennia.”
In response to that, I can’t do better than your own next sentence: “The main question that comes to mind is so what.” An ice-free North coast in the past few millennia says nothing about the desirability of driving ice melt (or climate change generally) at extremely rapid rates.
And ‘desirable’ is not what I would call it.
#13–
“Sometimes doubt is an increase in knowledge over being sure.”
And sometimes it’s pointless second-guessing which only serves to hold back the development of further knowledge. Luckily, that will certainly not happen in this case.
Bruce Tabor says
If Greenland lost 15% of its ice the sea level would rise by about 1 metre. I think we would notice!
Thanks Gavin.
Philip Machanick says
Thanks for firmly debunking the errors in the Times Atlas. The denialosphere would have it that scientists are alarmists and are ganging up on their “sensible” caution. Scientists don’t have to work as hard on debunking “alarmist” errors because not that many of those make it into the research literature.
Thanks for posting the link to the CRYOLIST listserv. Very informative to see informed discussion on preventing someone else’s error from spreading.
Sekerob says
#15/16 Heavy deluges [what else to expect with 4% more vapor in the air and enough nucleating dust and man-made aerosols] account for an estimated 6 mm drop since winter 2010. SkS has a featured article up on this. http://www.skepticalscience.com/Extreme-Flooding-In-2010-2011-Lowers-Global-Sea-Level.html
Bibasir says
I don’t understand the graph titled “Greenland Ice Mass change from GRACE”. Does it mean that Greenland ice mass was growing from 2003 to 2006, and presumably, if the trend is extrapolated back, that it was growing before 2003?
[Response: No. The zero line is arbitrary. It is simply the net loss from the mid-point of the series. – gavin]
Kevin McKinney says
“If Greenland lost 15% of its ice the sea level would rise by about 1 metre. I think we would notice!”
Yes, if it were 15% by volume. Extent is a whole different metric–remember, the central cap thickness is measurable in kilometers. Of course, it’s moot right now, since we all agree the Times messed up.
Peter KM says
Re: 18 and 22,
“Yes, if it were 15% by volume. Extent is a whole different metric–remember, the central cap thickness is measurable in kilometers.”
In fact, a 15% reduction in extent would likely equate to a more than 15% reduction in volume. Because of ice dynamics, volume and area of a glacier or an ice sheet are intimately related – if you notice a 15% reduction of ice extent at the margin, then a lot of ice will have disappeared far upstream, too.
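One rough way to quantify this claim is the empirical glacier volume-area scaling (V ∝ A^γ with γ > 1; Bahr et al. 1997 give γ ≈ 1.25 for ice caps). Whether that scaling applies to the whole Greenland ice sheet is itself debatable, as the next comments note, so this is only a sketch of the argument:

```python
# Sketch of the volume-area scaling claim: V ~ A**gamma with gamma > 1
# (Bahr et al. 1997 give gamma ~ 1.25 for ice caps). Whether this scaling
# holds for the whole Greenland ice sheet is a contested assumption.
gamma = 1.25
area_remaining = 0.85     # a 15% reduction in extent

volume_remaining = area_remaining ** gamma
print(f"Implied volume loss: ~{100 * (1 - volume_remaining):.0f}%")  # ~18% > 15%
```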
oakwood says
Good news. It’s good to see when the evidence for desperate global warming is building up.
Brian Blagden says
#8… in response to my question “What factor would lead to areas of melting being juxtaposed with areas of accumulation (-ve melting anomaly)?” You responded: “Just guessing, but I’d think that it’s because those areas went ice-free–when that happens, presumably the melt-day anomaly goes to zero.”
Thanks for this. Looking at the distribution of the blue areas (below average melt) shown in Figure 1 of the Tedesco et al. paper would appear to support your suggestion. However, there is no indication within the said paper that this is the case; rather, it is simply stated: “Blue areas (mostly absent in the figure) indicate locations where this year’s melting was below the average.”
Further, the size and extent of these areas, particularly those found on the Western edges of the Ice Sheet, would possibly preclude such a conclusion.
Finally, if a zero melt day anomaly were to reflect ice-free conditions, as you suggest, then much of the southern and eastern portion of the ice sheet (blue-green areas in figure 1) would actually be exposed bedrock.
Kevin McKinney says
“Because of ice dynamics, volume and area of a glacier or an ice sheet are intimately related – if you notice a 15% reduction of ice extent at the margin, then a lot of ice will have disappeared far upstream, too.”
At the scale of a glacier, yes. Over the scale of Greenland as a whole, I think not–particularly given the topography. But I’m not speaking from expertise, nor specific knowledge, so I could certainly be convinced otherwise.
MalcolmT says
Just a site-mechanics note: Fig 4 is 1.8 MB and was going to take me 8 minutes to download … smaller would be better for everyone :-)
Kevin McKinney says
#25–You seem to be correct. I was actually thinking we might have bare rock in those areas, but I can see that that is not the case (even if the Times thought so, too. . .)
Clearly I need a new guess–er, hypothesis.
So, how about this: the line demarcating the negative anomalies from the positive ones represents the edge of the heavy-melt area during the reference period (2004-2009). From reading the ’11 report we know that the melt season was “short but intense,” and that it started slowly, with the first couple months of the melt season being cold. Then in June conditions flipped into “intense” mode, and we saw lots of melt for the rest of the season.
So, the slow start to the season meant that the areas normally melting first–mostly those areas discussed in your last comment, the ones along the Western margin–saw negative melt-day anomalies. Those areas *normally* start melting in May or even April, but in 2011, mostly didn’t melt much until June.
By contrast, the areas inland of the heavy-melt line don’t usually start melting until June anyway; these, affected by the intensity of the melt, melted more consistently and probably considerably longer, too, racking up those big positive anomalies. In this scenario, the sharp demarcation is sort of a statistical artifact, I suppose–a weak explanation of that sharp boundary, but perhaps better than none at all.
Can you poke a hole in that idea, or does it actually work?
Dave Rado says
In a BBC Radio 4 interview, the publishers, HarperCollins, appear now to have made a partial retraction. See http://www.bbc.co.uk/iplayer/console/b014qnlb – the interview starts 1 hour 51 minutes and 45 seconds into the programme, so use the slider to fast forward to that point. Would be interested in your thoughts regarding their new position!
[Response: This is the same statement made in their clarified clarification (linked above). – gavin]
NeilT says
Whilst it’s good to see consistent data coming from Greenland, which tends to shut the denial crowd up a bit, it’s also worrying.
The problem I’m having with the denial mongers is their position.
“Climate scientists are all a bunch of alarmists who lie about the situation and destroy data” is their position, and no matter how much you point them at Skeptical Science or peer-reviewed data, they only read the denial bunk and never read the quality data coming from studies such as these.
I’m reminded of the people who stayed for the Cat 5 tropical storm in Aus. They were told to evacuate. They said no. When the storm came in and they found they were in the path of the surge, they frantically phoned saying “evacuate me”. Fortunately the Aussies said “no, ride it out, it was your choice”.
I wonder how we make the Deniers “ride it out” without taking us all down with them???
Peter KM says
Re 28:
I think that is the most likely explanation. The lowermost parts of Greenland are actually very rough, and the surface there consists of deep crevasses and large hummocks. Winter snow tends to accumulate in the crevasses and gullies, so that at the end of winter there is already ice at the surface – so the melt season very close to the margin is very long. A cold start of the melt season and a subsequent warm peak can explain the pattern in the figure.
Russ R. says
Let’s keep this “Greenland meltdown” in perspective, shall we?
Looking at the GRACE data shown above, from the winter peak in 2003 to the winter peak in 2011, there’s ~2000 Gt of cumulative ice loss, or ~250 Gt/yr.
There appears to be some acceleration in the trend, though with only 8 years of data it’s tough to say whether or not it’s significant.
Ice has a minimum density of 0.9167 g/cm³ (or Gt/km³) so that ~250 Gt/yr works out to at most ~273 km³/yr.
Greenland’s ice sheet has a total volume of 2,850,000 km³ (so sayeth Wikipedia). At the current rate of ice loss, that works out to ~10,450 years for Greenland to fully “meltdown”. Which is why I’m not terribly alarmed.
[Response: Then I would posit that you are not thinking very clearly about this. Indeed, 8 years is not long, but mass loss does appear to be accelerating. Greenland is indeed a lot of ice, but we don’t need to melt a lot of it to have a big effect – even a 10% loss this century would be devastating (and upper limits are perhaps 30% (Pfeffer et al., 2008)). Now, here is why your extrapolation fails: let’s assume that at any particular level of regional temperature, we can expect a certain loss rate – this is reasonable because, as you say, there is a lot of ice in Greenland. One of the things about ice melting (and this goes for dynamic ice sheet effects as well) is that melt/loss rates increase more than linearly with temperature. Thus as temperatures continue to warm, the default expectation must be that mass loss will accelerate – exactly how rapidly is still uncertain. But extrapolating from average loss rates over the last ten years and expecting they will remain constant is guaranteed to be an underestimate. – gavin]
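For the curious, a back-of-envelope script reproducing the arithmetic above, plus a purely illustrative scenario in which the loss rate compounds at 5%/yr (an assumed number, not a prediction) to show how quickly acceleration changes the picture:

```python
# Back-of-envelope comparison of a constant loss rate with a compounding
# one. The 250 Gt/yr and volume figures are from the comment above; the
# 5%/yr growth rate is purely illustrative, not a prediction.
rho_ice = 0.9167                   # Gt per km^3
loss_km3_per_yr = 250.0 / rho_ice  # ~273 km^3/yr
total_volume_km3 = 2_850_000.0     # Greenland ice sheet volume (Wikipedia)

print(f"Constant-rate 'meltdown' time: "
      f"~{total_volume_km3 / loss_km3_per_yr:,.0f} yr")       # ~10,450

rate, cumulative = loss_km3_per_yr, 0.0
for _ in range(89):                # 2012 through 2100
    cumulative += rate
    rate *= 1.05                   # hypothetical 5%/yr compounding
print(f"Cumulative loss by 2100: ~{cumulative:,.0f} km^3 "
      f"(~{100 * cumulative / total_volume_km3:.0f}% of the ice sheet)")  # ~15%
```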
Mike says
Maybe a cheap way the Times could correct its atlas would be to change the publication date to 2050.
bernie says
Gavin:
Eric, in response to #3, seems to be saying the same as Russ R, and they end up with similar time frames. Clearly if things change, things change … but at the moment the title is a little hyperbolic.
[Response: The title was a play on words – but if I have to explain it, I guess it wasn’t obvious. But I don’t see your point at all. No-one is seriously claiming the whole ice sheet is going to disappear any time soon (except perhaps the Times Atlas cartographers) – but that doesn’t mean that important losses can’t occur. A loss rate of 500 Gt/yr is about 1.5 mm/year sea level equivalent. And this is just one element in the sea level rise – small ice caps are melting faster, thermal expansion will increase in line with ocean heat content changes and Antarctic ice sheets are also losing mass. Significant accelerations in the ice sheet components are expected. – gavin]
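As a reference for the conversion used here, a minimal sketch, assuming ~1 km³ per Gt of meltwater and a standard round figure of ~3.61 × 10⁸ km² for the ocean surface area:

```python
# Convert a mass loss rate (Gt/yr) into sea-level-rise equivalent (mm/yr),
# assuming 1 Gt of meltwater is ~1 km^3 spread evenly over an ocean area
# of ~3.61e8 km^2 (a standard round figure).
OCEAN_AREA_KM2 = 3.61e8

def gt_per_yr_to_mm_per_yr(gt_per_yr: float) -> float:
    km_of_rise = gt_per_yr / OCEAN_AREA_KM2   # Gt -> km^3 -> km of rise
    return km_of_rise * 1e6                   # km -> mm

print(f"{gt_per_yr_to_mm_per_yr(500):.2f} mm/yr")  # ~1.4, roughly the 1.5 quoted
```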
Russ R. says
@ Gavin (#32 inline response),
I don’t disagree with any of the points you’ve made above… Yes, there is evidence of acceleration, yes, my extrapolation of the current loss rate will be an underestimate, and yes, the rate of loss can increase more than linearly with temperature (though this would greatly depend on the shape of the terrain).
You say: “even a 10% loss this century would be devastating”.
Again, I don’t disagree, as this would mean ~70 cm of sea-level rise, but let’s think about what would be required for that to actually happen? For Greenland to lose 285,000 km³ of ice in 89 years, the AVERAGE rate of loss would have to be more than 3,200 km³/yr, or nearly 12 times faster than the current loss rate (273 km³/yr).
If you were to take the current rate of ice loss as a starting point, and assume a constant rate of acceleration, then by the end of the century, the annual loss rate would need to reach nearly 6,200 km³/yr, or nearly 23 times the current rate, to result in a cumulative 10% loss.
So, there’s still at least an order of magnitude difference between observed ice loss and serious cause for concern.
[Response: By the end of the century temperatures could be higher by 3 or 4 or more degrees C under reasonable (i.e. not worst case) business as usual scenarios. The last time temperatures were that high around Greenland (Eemian times) it lost more than 30% of its mass (and maybe over half), though at a rate that is still uncertain. I’m not saying that a 10% loss is definite, but I’m afraid I do not share your apparent faith in the linearity of ice sheet dynamics. My expectation is that mass losses will continue to accelerate as the planet warms, and it wouldn’t take much to have accelerations that lead to big cumulative loss rates. Perhaps we can agree that this needs to be closely monitored? – gavin]
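A quick check of the constant-acceleration arithmetic in the comment above, using the same rough numbers:

```python
# What steady acceleration takes cumulative loss to 10% of the ice sheet
# (285,000 km^3) over 89 years, starting from ~273 km^3/yr? Figures are
# the rough ones from the comment above.
r0, target, years = 273.0, 285_000.0, 89

# cumulative = r0*T + a*T^2/2  ->  solve for the acceleration a
a = (target - r0 * years) * 2 / years ** 2
final_rate = r0 + a * years
print(f"Required acceleration: ~{a:.0f} km^3/yr^2")      # ~66
print(f"Final-year rate: ~{final_rate:,.0f} km^3/yr "
      f"(~{final_rate / r0:.0f}x today's rate)")         # ~6,100; ~22x
```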
mauri pelto says
I certainly appreciate the timeliness with which this data has been generated and shared by Tedesco and others. To me the most notable anomaly is along the north coast from Petermann to Ryder Glacier. This area has a shorter melt season, so an anomaly in added melt days comparable in magnitude to that of the central west coast, from Jakobshavn north past Upernavik, is of note.
Brian Blagden says
#28 Kevin and #31 Peter.
Again thanks for this. I think you could be on to something. Thanks for addressing my question…I had no idea what was causing the sharp boundary, and clearly was unable to produce a hypothesis myself. Cheers. Brian
David Young says
Some things don’t add up to me about this data. First, sea level seems to have been falling for a couple of years. Where is the Greenland melt water going? Maybe Antarctica is gaining ice. Also, there seems to be historical evidence that even the north coast of Greenland was ice free within the last 10,000 years.
[Response: Antarctica as a whole is not gaining ice. And the Early Holocene ice minimum is related to the changes in the orbit of the Earth (principally because the perihelion was in N.H. summer, rather than in January). – gavin]
Something else troubles me about the models on which this all seems to be based. I believe it was you, Gavin, who invoked the idea of converting a time-dependent nonlinear system into a boundary value problem. Having 30 years of experience in the field of computational fluid dynamics, I know that this is standard doctrine. You use Reynolds averaging to average the fluctuations that are not visible on your grid, and then you claim that the time-averaged fluctuations can be represented by a turbulence model. The impression one gets from the literature is unfortunately quite wrong about the accuracy of these models. They are badly wrong in many common cases. This is why the FAA still requires extensive flight testing before certification. It is quite remarkable how cherry-picked the literature is in aerodynamic CFD. Fortunately, some of the engineers doing the designs are CFD “skeptics.”
[Response: I’m not really sure what relevance models have here. This post is all about measurements from Greenland. More generally, there are always unresolved scales in models and they certainly influence the larger scale. This is why we build parameterisations, and why we evaluate the models against real world variability and change. The credibility of the enterprise comes from those evaluations, not on some assumption that turbulence isn’t important. – gavin]
David Young says
Sorry for the duplicated comments.
I am sceptical of the water-on-land arguments. The graphic shows some areas of increase and some areas of decrease. There is no information about the mean change.
I would like Gavin’s comment on the boundary value problem method as it seems to me to have a very weak mathematical foundation.
toko.dave says
Russ R.
I’m not sure you want to go there… Remember, the glacial maximum in North America was only 18,000 years ago, and you could make some simple calculations about the volume of ice involved and how quickly it vanished. If there’s a glaciologist in the crowd they could help out, but I think the general consensus would be that once the tipping point for glaciers is reached, collapse is astonishingly rapid. If I get a chance this weekend I’ll visit with a couple of friends familiar with Pleistocene geology and see if I can come up with some references.
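In the meantime, a crude back-of-envelope along the lines toko.dave suggests, using well-known round numbers (~120 m of sea level rise over roughly 12,000 years of deglaciation):

```python
# Crude average deglaciation rate from round numbers: roughly 120 m of
# sea level rise between ~20 ka and ~8 ka. Illustrative only.
sea_level_rise_mm = 120.0 * 1000.0   # ~120 m total rise, in mm
duration_yr = 12_000.0               # ~20 ka to ~8 ka

print(f"Average rate: ~{sea_level_rise_mm / duration_yr:.0f} mm/yr")  # ~10 mm/yr
# Compare: ~500 Gt/yr from Greenland today is ~1.4 mm/yr (see above).
```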
EFS_Junior says
Somewhat related.
There is a new paper (open access) by Church et al. in GRL titled “Revisiting the Earth’s sea-level and energy budgets from 1961 to 2008”
(Received 5 July 2011; accepted 17 August 2011; published 16 September 2011.)
at;
http://www.agu.org/pubs/crossref/2011/2011GL048794.shtml
Don’t forget to grab the SI on ground water depletion included with the paper.
David Young says
The relevance of the models has to do with your point that by 2100 we will see 3-4 degrees C warming and that therefore the ice melt is bound to accelerate. This is based on the models. My point is merely that the mathematical foundation of the models is weak. In particular, vortex dynamics is a particularly weak point of the CFD models, but these dynamics are critical in climate. There is the further problem that if the fluctuations are visible on your grid, the separation of scales implicit in Reynolds averaging breaks down and the model results once again depend critically on the grid resolution.
[Response: Actually no. This kind of forecast doesn’t depend too much on the models at all – it is mainly related to the climate sensitivity which can be constrained independently of the models (i.e. via paleo-climate data), moderated by the thermal inertia of the oceans and assuming the (very likely) continuation of CO2 emissions at present or accelerated rates. Such a calculation is obviously uncertain, but the mid point is roughly where I said. It could be less, or it could be more, but putting all your faith in an assumption that climate sensitivity is negligible goes against logic and the past climate record. Taking your bigger point though, it doesn’t really work. Model results don’t depend critically on resolution – the climate sensitivity of the models is not a function of this in any obvious way, and the patterns of warming seen in coarse resolution models from the 1980s are very similar to those from AR4 or the upcoming AR5 (~50 times more horizontal grid points). More resolution has been useful for ENSO, or improving the impact of mountains on Rossby waves, and many other issues, but if you were correct that modeling was fundamentally constrained by Reynolds averaging, why do models get anything right? The seasonal cycle, response to volcanoes, storm tracks etc? There is nothing specific in the models that forces these responses. – gavin]
Stuart says
“Philip Machanick: Thanks for firmly debunking the errors in the Times Atlas. The denialosphere would have it that scientists are alarmists and are ganging up on their “sensible” caution. Scientists don’t have to work as hard on debunking “alarmist” errors because not that many of those make it into the research literature.”
Surely the Times Atlas mistake would be denialist propaganda anyway – if Greenland had already lost nearly 1/6th of its ice and sea level had only risen fairly moderately, as it has done, this would suggest that even the total melt of Greenland wouldn’t be a big issue.
EFS_Junior says
David Young,
Are you “trying” to play the part of some kind of fluid dynamics concern troll?
Because the slippery slope argument you are trying to make takes us nowhere: no knowledge gained, everything a one-off trial-and-error experiment.
All models are in essence scale models, governed by the laws of geometric, kinematic, and dynamic similitude.
Whether they be physical or numerical or analytical models.
What we include or exclude will always be bound by finite resources placed on any type of problem we are trying to solve.
CFD?
RANS?
Doesn’t matter.
In the end, you’re damned if you do, and damned if you don’t.
But wait, let me help you out there Bucko, because we’ve already been there, done that:
http://en.wikipedia.org/wiki/Lagrangian_and_Eulerian_specification_of_the_flow_field
http://en.wikipedia.org/wiki/Coordinate_systems
http://en.wikipedia.org/wiki/Frames_of_reference
http://en.wikipedia.org/wiki/Newton's_laws_of_motion
http://en.wikipedia.org/wiki/Lagrangian_mechanics
http://en.wikipedia.org/wiki/Hamiltonian_mechanics
There is not, and cannot be, any ultimate end game to modelling in and of itself.
We will always model things, just like we always have, regardless of our ever finite resources.
If we waited for the perfect model for everything in life, then we’d never get off the starting line, to begin with.
We’d still be living in caves, or some such.
:-(
Recaptcha: endsolo pestilence
Ray Ladbury says
David Young,
You are operating under a fundamental misconception about scientific models. Their purpose is not to get answers, but to gain insight. The 3-4 degree increase depends much more on empirical data. Models give insight into how it could manifest or play out and what could influence it. Indeed, one of the strongest constraints limiting the high side of climate sensitivities is from the models. If you don’t trust the models, you’d better be doubly concerned.
David Young says
What I hear Gavin saying is that the complex models aren’t really the basis for the 3-4 degrees, but rather much simpler energy balance models and empirical evidence from paleoclimate. As to paleoclimate, uncertainties increase as one goes back in time, and in any case the ice ages and interglacials were surely due to differences in the pole-to-equator temperature difference and not to CO2. Surely the positive feedback must have been huge to get 10 degrees C out of a 25% increase in CO2. Surely applying the simple models to the last 1000 years, or even the last 100, is better. But even discussing this seems to be a problem. I guess the question that arises is the same one as arises in CFD, viz., are the billions invested in more complex models really worth it? The problem often is that adding more terms merely adds more tunable constants, making the overall uncertainty actually greater. If this is the case, then one must answer some fundamental questions, such as why do simple models show much more warming in the last century than was actually observed? I know about the aerosols and the heat in the deep oceans. As I recall from AR4, the error bar on aerosols was 100%.
The point about resolution is not that the results depend on resolution, but that the results are badly wrong at any reasonable resolution, for example for vortex shedding. They can be wrong by 100% in body forces, a catastrophic error. Steady-state solutions, when there is in fact a stable orbit or a strange attractor, are obviously spurious and badly wrong. It is true, however, that in such a situation details of resolution can make huge differences in the output. I must say that in CFD 15 years ago, the thought process was like yours, Gavin. Only recently have people begun to really look carefully for these things. The problem is that you must be LOOKING FOR these things. If you just adjust your constants for each case to fit the test data, your predictive skill is low. If the models don’t show these phenomena, which we know are present in fluid dynamics systems, they must be wrong in their modeling of resolvable scales, or the subgrid models are very badly wrong, or possibly both. However, this is problem dependent.
It seems to me that more investment in fundamental fluid dynamics and atmospheric physics and in better measurements is the way to proceed here. The recent CERN research should surely be followed up urgently, not with a 15 year time lag.
David Young says
Let me just clarify this with 3 questions:
1. What happens in these models in the simple case of a stable orbit such as vortex shedding? Do you get the right time-averaged result?
Vortex dynamics is critical to climate and weather, but it’s known to be poorly posed.
2. What is the time scale on which it is claimed the models are reliable? The longer the time scale, the more complex the subgrid models must be, because I think everyone acknowledges that eventually the small scales influence the resolved scales.
3. The simpler models, it seems to me, are also subject to uncertainty, though they are at least understandable. Do we have sufficient data to calibrate them?
By the way, I’m not responding to suggestions to look at Wikipedia articles on fluid dynamics. They are the equivalent of unrefereed literature.
[Response: 1) In the atmosphere, the key ‘vortices’ are the mid-latitude storm systems. The tracks and intensities of the storms (in the North Atlantic for instance) in the models are largely correct, though there are usually systematic offsets from the observations (i.e. insufficient persistence into the Barents Sea, slight departures from the observed mean track, etc.). Variability in the tracks on a year-by-year basis, for instance as diagnosed by the models’ NAO index, resembles that observed, explaining about the same amount of variability as observed.
2) The models are run in different modes – either as an initial value problem (such as for a weather forecast), or as a boundary value problem (say a 2xCO2 climate, or a historical hindcast). IVP simulations have sensitive dependence on initial conditions and diverge rapidly (over a week or so) from reality. BVP simulations do not have such a dependence and have stable statistics regardless of how they are initialised (at least for IC perturbations within the range of actual observations, and for the class of model that was used for AR4). The errors in each case are diagnosed differently and are related to different aspects of the simulation. Long term biases in the BVP simulations relate far more to overall energy fluxes than the errors in IVP simulations, which are related mainly to atmospheric dynamics. There are hybrid kinds of simulations (for instance, initialised decadal predictions) which combine both aspects and have more complex kinds of problems. However the target for each kind of simulation is different – IVP runs are trying to capture the specific path of the weather, BVP runs are trying to capture the statistical sensitivity of all the possible paths. Thus the reliability of BVP runs requires a time scale long enough to give a good sampling of possible paths (i.e. the signal has to be distinguished from the weather ‘noise’). For many quantities, and for forcings similar to that produced by anthropogenic CO2, that requires simulations of 20 to 30 years. For larger forcings (say a big volcanic eruption), the signal can rise out of the weather ‘noise’ more rapidly. Note that BVP simulations are stable – their errors do not grow in time in situations with constant forcing. For simulations of the future, of course, scenario uncertainty does grow in time, and that dominates uncertainty past the 50-year timescale.
3) Simpler models can be designed to fit many aspects of the global temperature time series, or the most straightforward aspects of the atmospheric dynamics (Q-G models with dry physics, for instance) (see Held, 2005 in BAMS for more examples). But many of the key questions – regional variability, changes in the patterns of rainfall or statistics of drought, or the interplay of dynamics and thermodynamics in sea ice change – can only be approached using comprehensive models. – gavin]
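A toy illustration of the IVP/BVP distinction described above (not a climate model), using the classic Lorenz-63 system: two runs from nearly identical starts diverge completely as trajectories, yet their long-run ("climate") statistics are nearly identical.

```python
# Lorenz-63 toy model: sensitive dependence on initial conditions (the
# IVP view) coexists with stable long-run statistics (the BVP view).
import numpy as np

def lorenz_trajectory(x0, n_steps=200_000, dt=0.001,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    xyz = np.array(x0, dtype=float)
    out = np.empty((n_steps, 3))
    for i in range(n_steps):           # simple forward-Euler integration
        x, y, z = xyz
        xyz = xyz + dt * np.array([sigma * (y - x),
                                   x * (rho - z) - y,
                                   x * y - beta * z])
        out[i] = xyz
    return out

a = lorenz_trajectory([1.0, 1.0, 1.0])
b = lorenz_trajectory([1.0, 1.0, 1.0 + 1e-6])    # tiny perturbation

# IVP view: pointwise paths separate completely after a short time.
print("Final-state difference:", np.abs(a[-1] - b[-1]))   # order of attractor size
# BVP view: climate-like statistics barely move.
print("Mean z:", a[:, 2].mean(), "vs", b[:, 2].mean())    # both ~23.5
```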
David Young says
One other thing. Bear in mind, I’m not saying turbulence modeling always gives incorrect results. Generally, for stable problems such as attached flows, the models are pretty good, even though even there there are crossflow and vorticity issues. But if the problem is at all hard, such as separated flow or vortex-dominated problems, the results are much worse than most people realize, mostly because not very many negative results are published. In any case, there is no a priori reason to expect these things to be right. It’s all about better measurements and calibrating simple models. It is a distinctive feature of the modern mathematical understanding of the Navier-Stokes equations that not all problems are going to be solvable with any accuracy at all.
To put it provocatively: if there are positive feedbacks, fluid dynamics is not terribly useful except possibly for stability analysis. If the feedbacks are negative, then there is hope!!
JK says
Although I don’t necessarily agree with David’s take, it seems like a useful opportunity to ask if experts can point me in the right direction, or tell me if my thoughts are nonsense.
It seems to me that turbulent mixing processes are very important to getting sensitivity right.
Roughly speaking, in the case of ocean mixing, getting the rate right tells us how much warming is ‘in the pipeline’ given observed warming to date.
In the case of atmospheric mixing, I recall hearing that while warming will result in more evaporation, relative humidity depends on turbulent mixing. I presume this is because the greater evaporation near the surface must mix into the rest of the atmosphere to affect overall humidity.
Getting this right is needed for the water vapor and cloud feedbacks.
If that’s correct then it would seem that getting these right is crucial to improve model calculations of sensitivity, and perhaps a key limiting factor (at least for some purposes). My first question is whether these are important considerations?
Presuming that they are, my next question is about how these are treated in models. My first guess would be that sub-grid mixing is parameterised as an effective diffusion (is this related to what is meant by ‘diapycnal mixing’ coefficients in the ocean?). Apart from the general capacity of models to reproduce realistic circulations, what are the empirical and/or theoretical justifications for these parameterisations? Do we have a good idea of the likely range of applicability of the approximations? Is there a basis for making measurements of the relevant parameters? Presumably weather forecasting would be a useful way to test the atmospheric and perhaps the ocean parameterisations? If these issues are important, could an expert point me to an introduction / tutorial / review?
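For concreteness, a minimal sketch of the "effective diffusion" idea JK asks about, in a hypothetical ocean column with an assumed eddy diffusivity; this is a toy, not how any particular GCM does it:

```python
# Sub-grid turbulent mixing of heat represented as 1-D diffusion with an
# assumed eddy diffusivity kappa. Toy ocean column, illustrative only.
import numpy as np

nz, dz, dt = 50, 10.0, 100.0                 # 50 layers x 10 m; 100 s steps
kappa = 1e-3                                 # m^2/s, assumed eddy diffusivity
T = np.where(np.arange(nz) < 10, 20.0, 4.0)  # warm mixed layer over cold deep

for _ in range(10_000):                      # ~12 days of mixing
    flux = -kappa * np.diff(T) / dz          # downgradient turbulent heat flux
    T[1:-1] -= dt * np.diff(flux) / dz       # flux divergence updates interior
print(np.round(T[7:14], 2))                  # the step smears out over ~40-50 m
```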
David Young says
JK, you are correct; the standard turbulence models are based on the idea that turbulent fluid is “more viscous”, and thus you add additional viscosity. Normally this viscosity is governed by complex differential equations. The problem is that these differential equations are based on the theory of thin shear layers. Vortex dynamics is generally very poorly represented. The coefficients of the differential equation for the eddy viscosity (the added viscosity) are determined by matching a few (usually 2) canonical cases, usually a flat-plate boundary layer in a zero pressure gradient and a straight wake (a sheet of vorticity). However, these models are just wrong if vorticity impinges on the boundary layer, for example. They are useless for predicting a von Karman vortex street. There are literally hundreds of turbulence models used for different ranges of Reynolds numbers, etc. It is one of the dramatic weak points of all viscous fluid dynamics. I can cite many instances in the literature where the limitations of these models are documented. One might be a recent report (from MIT) by Drela and Fairman. Unfortunately, the paper was rejected by a journal as being “well known”; the problem is that it is not well known to most people in the field. There is another interesting paper in the AIAA Journal by Darmofal and Krakos showing the problem with the boundary value ploy for the Navier-Stokes equations. The result is that depending on the method used to get to steady state, you can get any answer you want within a factor of 2. Not very comforting.
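To make the "added viscosity" idea concrete, here is the simplest algebraic closure of this family, Prandtl's mixing-length model (illustrative only; the models David Young describes use more elaborate differential equations):

```python
# Prandtl mixing-length eddy viscosity: nu_t = (kappa*y)**2 * |du/dy|.
# In the log layer (du/dy ~ u_tau/(kappa*y)), nu_t grows linearly with
# distance from the wall. Illustrative numbers, u_tau taken as 1 m/s.
import numpy as np

kappa = 0.41                       # von Karman constant
y = np.linspace(0.001, 0.1, 5)     # m, distance from the wall
dudy = 1.0 / (kappa * y)           # rough log-layer shear for u_tau = 1 m/s

nu_t = (kappa * y) ** 2 * np.abs(dudy)   # eddy viscosity, m^2/s
print(nu_t)                        # ~kappa*y: linear growth with wall distance
```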