Alert readers will have noticed the fewer-than-normal postings over the last couple of weeks. This is related mostly to pressures associated with real work (remember that we do have day jobs). In my case, it is because of the preparations for the next IPCC assessment and the need for our group to have a functioning and reasonably realistic climate model with which to start the new round of simulations. These all need to be up and running very quickly if we are going to make the early 2010 deadlines.
But, to be frank, there has been another reason. When we started this blog, there was a lot of ground to cover – how climate models worked, the difference between short term noise and long term signal, how the carbon cycle worked, connections between climate change and air quality, aerosol effects, the relevance of paleo-climate, the nature of rapid climate change etc. These things were/are fun to talk about and it was/is easy for us to share our enthusiasm for the science and, more importantly, the scientific process.
However, recently there has been more of a sense that the issues being discussed (in the media or online) have a bit of a groundhog day quality to them. The same nonsense, the same logical fallacies, the same confusions – all seem to be endlessly repeated. The same strawmen are being constructed and demolished as if they were part of a make-work scheme for the building industry attached to the stimulus proposal. Indeed, the enthusiastic recycling of talking points long thought to have been dead and buried has been given a huge boost by the publication of a new book by Ian Plimer, who seems to have been collecting them for years. Given the number of simply made-up ‘facts’ in that tome, one soon realises that the concept of an objective reality against which one should measure claims and judge arguments is not something that is universally shared. This is troubling – and although there is certainly a role for some to point out the incoherence of such arguments (a task which, in that case, Tim Lambert and Ian Enting are performing very well), it isn’t something that requires much in the way of physical understanding or scientific background. (As an aside, this is a good video description of the now-classic Dunning and Kruger papers on how the people who are most wrong are the least able to perceive it.)
The Onion had a great piece last week that encapsulates the trajectory of these discussions very well. This will of course be familiar to anyone who has followed a comment thread too far into the weeds, and is one of the main reasons why people with actual, constructive things to add to a discourse get discouraged from wading into wikipedia, blogs or the media. One has to hope that there is the possibility of progress before one engages.
However there is still cause to engage – not out of the hope that the people who make idiotic statements can be educated – but because bystanders deserve to know where better information can be found. Still, it can sometimes be hard to find the enthusiasm. A case in point is a 100+ comment thread criticising my recent book in which it was clear that not a single critic had read a word of it (you can find the thread easily enough if you need to – it’s too stupid to link to). Not only had no-one read it, none of the commenters even seemed to think they needed to – most found it easier to imagine what was contained within and criticise that instead. It is vaguely amusing in a somewhat uncomfortable way.
Communicating with people who won’t open the book, read the blog post or watch the program because they already ‘know’ what must be in it, is tough and probably not worth one’s time. But communication in general is worthwhile and finding ways to get even a few people to turn the page and allow themselves to be engaged by what is actually a fantastic human and scientific story, is something worth a lot of our time.
Along those lines, Randy Olson (a scientist-turned-filmmaker-and-author) has a new book coming out called “Don’t Be Such a Scientist: Talking Substance in an Age of Style” which could potentially be a useful addition to that discussion. There is a nice post over at Chris Mooney’s blog here, though read Bob Grumbine’s comments as well. (For those of you unfamiliar with Bob’s name, he was one of the stalwarts of the Usenet sci.environment discussions back in the ‘old’ days, along with Michael Tobis, Eli Rabett and our own William Connolley. He too has his own blog now).
All of this is really just an introduction to these questions: What is it that you feel needs more explaining? What interesting bits of the science would you like to know more about? Is there really anything new under the contrarian sun that needs addressing? Let us know in the comments and we’ll take a look. Thanks.
Hank Roberts says
RichardC, why play stupid?
Why be so wrong to no avail but to call attention to yourself?
(unless someone else is pretending; I can’t see the IP address, just the name someone typed in to post the above).
Assuming it’s you: You know how to look these things up.
Like this:
http://www.google.com/search?q=what+the+IPCC+does+with+regard+to+sea+level+rise%3F
Take your question, your exact words.
Paste them into the search box.
Hit RETURN
Hank Roberts says
Excellent general education about climate science post here
(in English):
http://www.scienceblogs.de/lindaunobel/2009/06/surfaces-ammonia-ozone-and-scientific-destiny.php
Hank Roberts says
For sake of completeness-within-Google-searches, also do the other searches:
Scholar:
http://scholar.google.com/scholar?q=what+the+IPCC+does+with+regard+to+sea+level+rise%3F
Images:
http://images.google.com/images?tab=si&sa=N&q=what+the+IPCC+does+with+regard+to+sea+level+rise%3F
Video:
http://video.google.com/videosearch?q=what%20the%20IPCC%20does%20with%20regard%20to%20sea%20level%20rise%3F
BobFJ says
RichardC,849, you quoted this:
Gavin’s statement is absolutely correct; however, the irrelevance is that no one on this thread has claimed that the net feedback value IS zero, but simply that it is not confirmable.
Perhaps you could try reading my 840 more carefully.
Mark says
“Yet isn’t that exactly what the IPCC does with regard to sea level rise?”
No.
BobFJ says
JBL 832:
You are right, even though this website is flagged as “Climate science from climate scientists”, bloggers may validly comment on the politics, aesthetics and environmental impact, and all sorts of stuff associated with that science.
[edit – everyone needs to calm down and stay substantive]
manacker says
Richard C
“Uncertainty does not mean that a value of zero is valid.” – gavin
Uncertainty (with regard to the net feedback from clouds) does not mean that a value of zero is valid, as Gavin has said.
What it does mean (as Ramanathan and Inamdar have stated):
http://www-ramanathan.ucsd.edu/FCMTheRadiativeForcingDuetoCloudsandWaterVapor.pdf
“Cloud feedback. This is still an unresolved issue (see Chapter 8). The few results we have on the role of cloud feedback in climate change is mostly from GCMs. Their treatment of clouds is so rudimentary that we need an observational basis to check the model conclusions. We do not know how the net forcing of -18W/m^2 will change in response to global warming. Thus, the magnitude as well as the sign of the cloud feedback is uncertain.”
So R+I are not telling us that the net cloud feedback is zero, but that we just don’t know whether it is positive or negative.
Max
manacker says
Chris Dudley
Thanks for your #834.
You have cited a very optimistic estimate of possible future petroleum reserves at 5 trillion barrels.
I cited another very optimistic estimate at 4.35 trillion barrels.
The difference between your estimate and mine amounts to 24 ppmv in the “maximum-ever all-time” anthropogenic atmospheric CO2: 1,000 versus 1,024 ppmv.
So we are basically in agreement and there is no “gross underestimate”.
If you start talking about 8 trillion barrels over the next 500 years, we’d have to look at just how long CO2 stays in the atmosphere in the first place.
But, hey, even if it ALL stayed in the atmosphere over 500 years, the extra 3 trillion barrels would add 24 * 3,000 / 650 ≈ 110 ppm CO2, so we’d have about 1,134 ppmv CO2 instead of 1,024 ppmv.
Face it, Chris, there is no more out there. That’s all there is.
Max
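For readers who want to check the scaling in this exchange, here is a minimal sketch in Python. The per-barrel and airborne-carbon constants are generic textbook assumptions of mine, not figures taken from either commenter:

```python
# Rough barrels-of-oil to atmospheric-CO2 scaling (a sketch only; the
# constants below are generic assumptions, not the commenters' figures).
T_PER_BARREL = 0.137   # tonnes of crude per barrel (~7.3 bbl/tonne)
CARBON_FRAC = 0.85     # approximate carbon mass fraction of crude oil
GTC_PER_PPMV = 2.13    # GtC corresponding to 1 ppmv of atmospheric CO2

def ppmv_added(trillion_barrels, airborne_fraction=1.0):
    """ppmv of CO2 added if the oil is burned and the given fraction
    of the emitted carbon stays in the atmosphere."""
    gtc = trillion_barrels * 1e3 * T_PER_BARREL * CARBON_FRAC
    return airborne_fraction * gtc / GTC_PER_PPMV

# The 0.65-trillion-barrel gap between the 4.35 and 5.0 estimates:
print(ppmv_added(0.65))   # ~36 ppmv if every tonne stayed airborne
```

This lands in the same range as the 24 ppmv used above; the residual gap comes down to the assumed airborne fraction and per-barrel carbon content.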
Chris Dudley says
Max (#858),
You have misread me, doubtless because you are unfamiliar with the oil industry. I distinguished between conventional oil and oil shale, though I am grouping enhanced recovery in with conventional oil. My lower limit (I did not estimate tar sands) is then about 13 trillion barrels, or three times what you mistakenly take as an upper limit.
And, while I think coal begins to become uneconomical as an energy source around 2035 or sooner in the US, coal-to-liquid could easily substitute for coal demand in a way that makes very marginal resources desirable to mine. So, I also question your figures for coal, considering them to be too low as well.
Jim Eager says
Max wrote @812: “The link you cited confirmed that of the 2,500 GtCO2 equivalent only a relatively small portion of the assumed global methane hydrate reserves will be economically viable to recover.”
Max assumes that economic recovery and subsequent combustion by humans is the only way that methane hydrates can vent their carbon into the atmosphere.
This is one of the feedbacks that he is choosing to ignore.
manacker says
Chris
We are talking about trillions of barrels.
Tar sands and shale are all included. Enhanced recovery is included as well, as is ANWR, the OCS, offshore Brazil, Arctic onshore and offshore, etc.
While methane hydrates are not included under “petroleum reserves”, we have shown that they do not greatly influence the total.
The upper limit on petroleum reserves is somewhere around 4.5 trillion barrels. If you believe this is low by a factor of 3, please indicate where all this as yet undiscovered oil is supposed to be and why Oil and Gas Journal and all the rest have not yet found all this oil.
Max
Jim Eager says
@792 Patrick wrote: “if you took all the CO2 out of the atmosphere, the cooling would pull most of the water vapor out of the atmosphere as well.”
and
@796 Peter Martin asked Patrick: “How much is “most”? And, again, the effect would be highly non-linear so how much of the GHE would be left? Would it produce a snowball earth? Or would the ocean still be ice free in tropical regions?”
A good question that I didn’t see answered later. How much water vapour would the atmosphere hold in the absence of CO2, anyone?
Surely through even just sublimation there would be some.
manacker says
Chris
The USEIA estimates world-wide coal reserves at 840 billion mt.
I have seen optimistic estimates that this is probably as much as 1,500 billion mt, if marginal sources are included as well as sources that have not been discovered as yet.
Do you believe that the long-term coal reserve estimate of 1,500 billion mt is too low?
How much too low is this estimate?
Where is all this missing coal?
I agree with your estimate that liquid fuels and petrochemical feedstocks from coal (SASOL) will become more attractive as petroleum resources dwindle and become more expensive, but I do not believe that the long-term estimate of 1,500 billion mt reserves is far off unless you can show me some data.
Max
manacker says
Jim Eager
Reur 860. No I am not “ignoring” methane hydrates as a source of energy as well as CO2, as you imply.
If we simply assume that a major portion of the existing methane hydrates (and the much larger associated CO2 content) will be released to the atmosphere as a result of global warming, we are kidding ourselves.
Even the most pessimistic assumptions only have this occurring (maybe) over centuries (provided temperatures rise by several degrees, which is very much in question).
I do believe that we will figure out how to “mine” a significant portion of the methane hydrates as an energy source, but that this will take a significantly higher oil price than $100/bbl to be economically viable on a large scale.
But who knows? I may be wrong and someone may figure out how to do this economically in competition with a lower oil price.
Max
David B. Benson says
Jim Eager (862) — Read in Ray Pierrehumbert’s
http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html
about “snowball earth”.
manacker says
Jim Eager Re my 865
The link to the total atmospheric water vapor content data (specific humidity) is:
http://www.cdc.noaa.gov/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Specific+Humidity+(up+to+300mb+only)&level=300&lat1=90&lat2=-90&lon1=180&lon2=-180&iseas=1&mon1=0&mon2=11&iarea=1&typeout=2&Submit=Create+Timeseries
The link I attached was for relative humidity.
[Response: Hmm… You do know these are not observations? Instead this is a reanalysis project of which there are many (NCEP, ERA40, JRA, MERRA), and none of the others show this drop? and that direct observations don’t show it either? Which leads one to suspect that it is an artifact of the data assimilation system used… Of course, if you are aware of all that…. -gavin]
Both show a strong decline from 1948 to 2008.
Max
Jim Bouldin says
ATTENTION EVERYONE:
manacker has now proved beyond the shadow of a doubt that we can burn every last drop of fossil carbon (which, along with the atmosphere, constitutes the entire carbon cycle) and there will be NO serious repercussions of any kind.
You may now return to whatever it was you were doing, or burning, before the AGW crowd so rudely distracted you. Sorry for the inconvenience.
Chris Dudley says
Max (#861),
There is no need for new discoveries. One only needs to value oil as a fuel rather than an energy source. At the moment, there appears to be no replacement for liquid hydrocarbon fuels for aviation, as an example, so synthetic fuel production is under active development by the Air Force. If energy becomes essentially free (think stranded wind power in Texas) then it will surely make sense to pump existing wells using more energy than is produced from the oil that the wells produce, just as it can make sense to lose in efficiency when making synthetic fuel. This means that we want to look at oil-in-place rather than economically recoverable reserves to understand how much carbon in oil is available to enter the atmosphere. The current economic filter will not stand unless we essentially legislate that it does. This is why your estimates are incorrect. They assume oil must be an energy source. It does not have to be an energy source to be useful, so we may expend energy to get at the two thirds or more of oil-in-place that can’t be got with net positive energy.
manacker says
Chris
You are wrong.
There is a certain finite amount of petroleum oil out there, regardless of how we value it, but it will never make sense to expend substantially more energy to recover oil than the recovered oil itself contains.
I agree that petroleum will be shifted more and more toward higher added-value end uses (such as petrochemicals, etc.) rather than just use as a fuel, and this could shift this slightly.
The same is probably true (longer term) for coal, using liquefaction processes.
But energy will never become “essentially free”. Wind and solar will always have a hard time competing with nuclear (except for very localized conditions), due to the low reliability.
There is just so much fossil fuel on this planet, and that’s it. If and when it is all consumed, it will have added around 700 ppmv to the “natural” atmospheric CO2 level of 280 ppmv.
That’s why the estimates I cited are not far off.
Since you have no better estimates, we can assume that the estimates I have cited are correct.
Max
PS BTW, I’ve spent some time in the oilfields myself.
Chris Colose says
862, Jim Eager,
I haven’t seen any publications describing how much of the water vapor/cloud greenhouse effect you would lose if CO2 (or all the non-condensible greenhouse gases, for that matter) were to precipitate out of the atmosphere completely, though it would certainly be significant. It’s a good thought exercise which I hope someone might have some insight on. A quick back-of-the-envelope calculation with the Clausius-Clapeyron equation gives me a near 50% reduction in the saturation vapor pressure with a 10 C drop from the present climate, which is just an upper bound near 9 millibars. A much better answer would require an AOGCM taking into account feedbacks, particularly ice-albedo, since you’d expect a much larger surface albedo, potentially raising the planetary albedo significantly above the current 30%. It would be a real snowball earth!
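Chris’s 50% figure is straightforward to reproduce. A minimal sketch using the constant-latent-heat form of Clausius-Clapeyron (the constants are standard textbook values, not anything taken from the comment):

```python
# Back-of-the-envelope Clausius-Clapeyron: saturation vapor pressure
#   e_s(T) = e_s(T0) * exp((L_v / R_v) * (1/T0 - 1/T))
# treating latent heat L_v and the water-vapor gas constant R_v as constant.
import math

L_v = 2.5e6    # latent heat of vaporization, J/kg
R_v = 461.5    # gas constant for water vapor, J/(kg K)
T0 = 288.0     # roughly the present mean surface temperature, K
e_s0 = 17.0    # saturation vapor pressure at T0, hPa (approximate)

def e_sat(T):
    """Saturation vapor pressure (hPa) at temperature T (K)."""
    return e_s0 * math.exp((L_v / R_v) * (1.0 / T0 - 1.0 / T))

print(e_sat(T0 - 10.0) / e_sat(T0))  # ~0.51: roughly a 50% reduction
```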
manacker says
Gavin,
You indicate (comment to #866) that the published NOAA record on specific humidity 1948-2008 is not based on actual physical observations.
Hmmm…
We have a NOAA record that shows a steady reduction in specific humidity from 1948 to 2008.
Sounds pretty good to me.
Can you cite other long-term records that refute this?
If so, please be more specific and explain why the NOAA record is still out there.
Max
[Response: Are you always this lackadaisical with your sources? The NCEP reanalysis is a very useful (if somewhat superseded) way of assimilating what data there is from observations and filling in the rest with a model. Trends in such a system are heavily affected by changes in instrumentation and networks and need to be verified each and every time. If the other reanalysis projects have different results (and in this case they do), you cannot claim that this is a robust result – it is clearly dependent on the data assimilation system. Given that there are direct and indirect measurements of water vapour increases (at least over the satellite era) (NVAP and HIRES) that are consistent with clear surface trends, the balance of evidence is that the NCEP long term result is an outlier and should not be relied upon. NCEP is useful for many other things and so no-one would say it should be removed. But insisting that it is correct merely because it is available is a somewhat odd attitude. And if you don’t want to believe me, read up on reanalysis problems (Bengtsson et al, 2004). There was even a CA thread that went in some detail on why this result is unlikely to be realistic. – gavin]
Chris Colose says
manacker (871):
Soden et al (2005) on “Radiative Signature of Atmospheric Moistening” provide a much better determination of increases in specific humidity over the last several decades. In the paper, they also state:
“Although an international network of weather balloons has carried water vapor sensors for more than half a century, changes in instrumentation and poor calibration make such sensors unsuitable for detecting trends in upper tropospheric water vapor (27). Similarly, global reanalysis products also suffer from spurious variability and trends related to changes in data quality and data coverage (24). ”
Their reference (24) is
http://www.cgd.ucar.edu/cas/Staff/Fasullo/refs/Trenberth2005FasulloSmith.pdf . They state in the abstract,
“Major problems are found in the means, variability and trends from 1988 to 2001 for both reanalyses from National Centers for Environmental Prediction (NCEP) and the ERA-40 reanalysis over the oceans, and for the NASA water vapor project (NVAP) dataset more generally. NCEP and ERA-40 values are reasonable over land where constrained by radiosondes. Accordingly, users of these data should take great care in accepting results as real.”
Patrick 027 says
I thought coal had about 5000 Gt C. That could add about 2400 ppm CO2 to the atmosphere (or atmosphere + ocean + etc, but with feedbacks and biogeochemical constraints in the ‘short’ term…)… And how much more expensive could coal get and still compete with oil and gas?
Hank Roberts says
How much carbon dioxide can there possibly be in the atmosphere?
http://www.globalwarmingart.com/images/thumb/7/76/Phanerozoic_Carbon_Dioxide.png/350px-Phanerozoic_Carbon_Dioxide.png
Melting ice changing albedo:
http://www.google.com/search?q=albedo+change+ice+melting+water
Image: http://www.windows.ucar.edu/earth/polar/images/sea_ice_nasa.jpg
Hank Roberts says
Oh, here’s SHEBA, which I’d been looking for:
Surface Heat Budget of the Arctic:
http://www.crrel.usace.army.mil/sid/perovich/SHEBAice/
http://www.crrel.usace.army.mil/sid/perovich/SHEBAice/spectalb.htm
(The site covers: A Year on the Ice; Ice-Albedo Feedback; and Optics – albedo total and spectral results, transmittance, in-ice irradiance, ultraviolet.)
Jim Eager says
Thanks David, I downloaded the current draft of Ray’s Climate Book a few weeks ago but have not yet had time to dig into it. Thanks for the pointer.
Chris Dudley says
Max (#863),
What you give as world reserves for coal is about the same amount as the USGS estimates for the coal resource in the US alone, so you are probably off by a factor of four or so. Resources become reserves when economic assumptions change. If we are no longer looking to coal for energy but rather as a source of carbon for making synthetic fuel then it is as valuable as oil. In the Bakken Formation, when oil is at $60/barrel, we are happy to use horizontal drilling in a very thin formation at great depth. Coal will be treated the same way if it becomes valuable. It is also worth remembering that the USGS ignores coal below 6000 feet but coal can exist even deeper than this so there may be a great deal more of it. You could be off by a factor of 8 or 12 rather than 4. Gold mines are worked at 10,000 feet deep.
Chris Dudley says
Max (#869),
My comment was about your understanding of how resources are related to reserves, not about your experience in the oil industry.
You are forgetting that whale oil was sought not so much for its ability to run an engine but rather to travel in a wick. A great deal of wind energy was consumed in pursuit of that kind of oil. Given the large distances involved, it may well be that more wind energy was consumed than was delivered as whale oil.
You must ask yourself the question why do we need oil? If it turns out that we need it for its ability to flow and store energy densely more than we need it as an energy source, then we will attempt to produce it using alternative energy sources. It is possible to produce it directly from the air (or sea water; the Navy is working on that), but sucking the very last drops from the ground is probably easier and will happen unless we legislate that it doesn’t.
The very last drops are very many more than you have estimated assuming that oil needs to be an energy source.
For your own amusement, you may want to take a look at what new nuclear energy costs: http://www.rmi.org/images/PDFs/Energy/E09-01_NuclPwrClimFixFolly1i09.pdf
On the other hand, wind and solar power are becoming less expensive, and cheap-as-paint solar is in the making now.
PeterMartin says
Max,
You say ”…but it will never make sense to expend substantially more energy to recover oil than the recovered oil itself contains.”
If carbon emissions into the atmosphere are assigned some realistic economic value, the so-called price of carbon, then what you are saying will very likely turn out to be true.
However, if the price is too low, or even zero, then it is quite easy to suggest examples of how a less than 50% total energy efficiency would still be economically viable.
BobFJ says
Anyone?
Further my 842/848, concerning the interdependence of AGW feedback parameters, both positive and negative, that net to a feedback of still unknown magnitude and sign:
I see that my mention of a major negative feedback that is not included in the general discussion, arising from evapo-transpiration (E-T) (according to data from the IPCC), has provoked nil response. This may be because the emphasis from the IPCC et al has been on water vapour and clouds, and E-T has apparently been ignored.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OK, let’s see if I can explain the contextual importance of it. For example, Max and I have mentioned the work of atmospheric physicist Andrew Dessler (NASA etc) WRT water vapour.
This is part of what Andrew has had to say in his lead article over at Gristmill; my bold emphasis added:
http://www.grist.org/article/Negative-climate-feedback-is-as-real-as-the-Easter-Bunny/
And yet! Roy Spencer has in fact published in GRL (i.e., peer reviewed) evidence of strong negative feedback in clouds, which Andrew refused to discuss in the blog following his article, despite several invitations to do so.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Now, concerning the science, let’s see if I can describe this (ignored?) major negative feedback another way:
(1) Clouds have many species of wide temporal and spatial distribution that have feedbacks varying between negative and positive, in a very complex manner.
(2) These clouds are comprised of water particles, that are either liquid or frozen, that are coalesced on nuclei of solid particulates of many species, both natural and anthro‘.
(3) Whatever the structure and effect of clouds, (of uncertain feedback magnitude and sign), and also the amount of positive water vapour feedback, (the major GHG), in the atmosphere, they ALL depend primarily on E-T.
(4) Thus, putting aside a few relatively trivial complications, such as regional weather, and especially rainfall variations, if there is an INCREASE in water content in the atmosphere, it follows that it would have to be because of INCREASED E-T.
(5) However, E-T results in evaporative cooling, which according to the IPCC AR4, 2007, amounts to ~46% of the HEAT loss from the surface, compared with only 15% of HEAT loss via EMR (long-wave radiation) absorption in the GHG’s!
(6) Thus it follows, that E-T would be a major negative feedback, and likely overwhelming of all the others.
(7) Additionally, there is an interactive effect from thermals, (simple convection & conduction), given as ~15%, bringing the non-EMR net to ~61% of surface cooling.
(8) Additionally, it has been argued that advection would increase with rising temperatures, which would result in a further increase in E-T
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(See 848 for IPCC data source)
Anyone interested in this good news?
[Response: Unfortunately you seem unclear on the concept of a feedback. There are two parts – the factor needs to be affected by the temperature (or radiation budget) (E-T fits this), but then it has to feed back and affect the temperature itself (this it does not do). Any eventual effect of increased E-T works via the water vapor, lapse rate or cloud feedbacks we are already discussing. E-T on its own is not a feedback and it’s certainly not a missing factor in models. I would estimate that 100% of them include evaporation. – gavin]
manacker says
Chris Colose
Reur 872. Yeah.
When the physically observed measurements do not confirm the hypothesis, blame it on poor measurement techniques and results (rather than on a flawed hypothesis). This is a classical dodge.
[Response: This is a very foolish statement. Any apparent mismatch in a model-data comparison can be because of three things – an incorrect or incomplete model, flawed data or a misunderstanding in the comparison itself (i.e. apples to oranges). All of these have happened in climate science in recent years, and prior prejudices that say that one of the data or models or methods must be perfect is simply cutting off real options. Science works because no assumption is sacrosanct – your attitude is fundamentally unscientific. Plus, Chris is still an undergraduate – and yet he knows more about how science really works than you. Think about that. – gavin]
But there are other measurements out there that confirm that the GCM assumption that water vapor increases with warming to maintain constant relative humidity is incorrect, thereby exaggerating the water vapor feedback.
Minschwaner + Dessler found that water vapor did increase with warming over the tropics, but that this was only a fraction of the increase as assumed by the climate models (i.e. maintaining constant relative humidity).
http://www.ametsoc.org/amsnews/minschwaner_march04.pdf
Fig.7 from the M+D report shows this clearly.
http://farm4.static.flickr.com/3347/3610454667_9ac0b7773f_b.jpg
Max
[Response: I’m going to hazard a guess that Dessler of Sherwood and Dessler (text) is aware of the work of Dessler in Minchwaner and Dessler. – gavin]
manacker says
Patrick 027
Re #873
Proven coal reserves today are around 840 billion mt. Optimistically estimated future finds put the world total at 1,500 billion mt.
At 91% average carbon content this would generate 5,000 GtCO2, so your number is correct (but it is GtCO2 and not GtC).
The mass of the atmosphere is 5,140,000 Gt, so this would add 972 ppm(mass) to the atmosphere or around 640 ppmv (not 2,400 ppmv).
Max
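Since the GtC-versus-GtCO2 mixup is the crux of this exchange, here is a minimal sketch of the conversion both commenters are doing (the reserve and carbon-content figures are the thread’s assumptions, not established values):

```python
# Coal reserves -> added atmospheric CO2, computed two ways (via GtCO2
# and via GtC) to show they agree once the units are kept straight.
M_AIR, M_CO2, M_C = 28.97, 44.01, 12.011  # molar masses, g/mol
ATM_MASS_GT = 5.14e6    # mass of the atmosphere, Gt
GTC_PER_PPMV = 2.13     # GtC corresponding to 1 ppmv of CO2

coal_gt = 1500.0        # assumed long-term coal reserves, Gt
gtc = coal_gt * 0.91    # ~1,365 GtC at 91% carbon content
gtco2 = gtc * M_CO2 / M_C                 # ~5,000 GtCO2

ppm_mass = gtco2 / ATM_MASS_GT * 1e6      # ~973 ppm by mass
ppmv = ppm_mass * M_AIR / M_CO2           # ~640 ppmv
ppmv_via_gtc = gtc / GTC_PER_PPMV         # ~641 ppmv, same answer
print(round(gtco2), round(ppmv), round(ppmv_via_gtc))
```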
manacker says
Gavin
Thanks for your comment to my 871.
Yes. The historical 1948-2008 NOAA record showing steadily declining atmospheric specific humidity over this entire period may indeed have some glitches. It is just one set of data among many.
The over-simplistic assumption that water vapor increases with warming in goose-step with the Clausius-Clapeyron law to maintain constant relative humidity has been shown to be incorrect by Minschwaner + Dessler.
So there may actually be an increase of atmospheric specific humidity with warming (as M+D found), but it is much smaller than that assumed by the climate models (i.e. constant relative humidity).
Max
[Response: Climate models in no way assume constant relative humidity. This has been discussed here many times before. If you continue to post such disingenuous claims, you will lose your privilege to post here. – mike]
manacker says
Patrick 027
Re my #877
The figure I gave you for increase in atmospheric CO2 from all of our planet’s coal reserves is based on 100% of the emitted CO2 “staying” in the atmosphere. Currently this is less than 60%, with the remainder disappearing somewhere else (oceans, biosphere, etc.)
Max
Bart Verheggen says
BobFJ (880),
Your point 4 sounds a tad too strong: “If there is an INCREASE in water content in the atmosphere, it follows that it would have to be because of INCREASED E-T.” Isn’t it at least as plausible that the atmospheric water content increased because (some of) the sinks decreased? E.g. Cloud formation depends (amongst other things) on the relative (as opposed to absolute) humidity. At a higher temperature, 100% relative humidity corresponds to a higher absolute water vapor concentration than at lower temperature (Clausius-Clapeyron), so in a warmer world the water vapor can reach higher concentrations before the raining-out sink kicks in.
Kevin McKinney says
BobFJ, your point #6 appears to me to be a non-sequitur.
Specifically, you cite information that E-T affects surface temperature. However, a “feedback” in this context must affect the terrestrial energy budget, which you haven’t addressed.
Off the top of my (underinformed) head, it seems conceivable that E-T could shift heat from the surface to the lower troposphere with no significant effect at TOA.
Jim Eager says
Max wrote @864: “If we simply assume that a major portion of the existing methane hydrates (and the much larger associated CO2 content) will be released to the atmosphere as a result of global warming, we are kidding ourselves.”
Well, no one “assumes” that they will, Max. Strawman much, do you?
However, you have no trouble at all assuming that climate sensitivity is so low that we needn’t worry about more than a few degrees of temperature rise from a doubling of CO2.
You also have no trouble at all assuming that feedbacks will not be strong enough to significantly amplify direct CO2 induced warming, and may even cancel some of that warming.
I sincerely hope that you are right, but you haven’t presented any convincing evidence that you may be.
Jim Eager says
Thanks Chris (870), I will dig into the chapter on Snowball Earth in Ray’s Climate Book as David suggested and revisit this if I have any more specific questions.
Jim Bouldin says
manacker, seriously, please do educate yourself. Or at least allow others to do it for you. New estimates show that there is enough carbon just in the upper 3 meters of permafrost soil areas to double the pre-industrial [CO2], not even counting deeper frozen methane or above-ground carbon.
Tarnocai, C., J. G. Canadell, E. A. G. Schuur, P. Kuhry, G. Mazhitova, and S. Zimov (2009). Soil organic carbon pools in the northern circumpolar permafrost region. Global Biogeochem. Cycles 23: GB2023, doi:10.1029/2008GB003327.
Abstract:
The Northern Circumpolar Soil Carbon Database was developed in order to determine carbon pools in soils of the northern circumpolar permafrost region. The area of all soils in the northern permafrost region is approximately 18,782 × 10^3 km^2, or approximately 16% of the global soil area. In the northern permafrost region, organic soils (peatlands) and cryoturbated permafrost-affected mineral soils have the highest mean soil organic carbon contents (32.2–69.6 kg m^-2). Here we report a new estimate of the carbon pools in soils of the northern permafrost region, including deeper layers and pools not accounted for in previous analyses. Carbon pools were estimated to be 191.29 Pg for the 0–30 cm depth, 495.80 Pg for the 0–100 cm depth, and 1024.00 Pg for the 0–300 cm depth. Our estimate for the first meter of soil alone is about double that reported for this region in previous analyses. Carbon pools in layers deeper than 300 cm were estimated to be 407 Pg in yedoma deposits and 241 Pg in deltaic deposits. In total, the northern permafrost region contains approximately 1672 Pg of organic carbon, of which approximately 1466 Pg, or 88%, occurs in perennially frozen soils and deposits. This 1672 Pg of organic carbon would account for approximately 50% of the estimated global belowground organic carbon pool.
(Petagram (Pg) = Gigaton (Gt) = billion metric tons = 10^15g)
Jim Bouldin says
p.s. I did a double take when I saw the per-square-meter soil organic C numbers given (32 to 70 kg/m^2) for the peatlands and freeze-thawed mineral soils. For comparison, that’s about 1.5 to 3.5 times the amount in live trees in a Sierra Nevada old-growth mixed conifer forest. And those are big trees.
manacker says
Mike
Thanks for your comment to my 883, which I directed at Gavin.
You state: “Climate models in no way assume constant relative humidity”
Sorry. But when I read AR4 Chapter 3, I see that over the ocean a constant relative humidity is assumed, whereas over the land a modest reduction of relative humidity is assumed.
[Response: I think you have some basic reading comprehension problems. Chapter 3 is about observations not models, and is very clear that “there is no detectable trend in upper-tropospheric relative humidity” (section 3.4.2.3) citing Soden et al 2005. And as someone who writes climate model code, I can assure you that constant relative humidity is not ‘assumed’, and is not even seen – however simulated variations are small. – gavin]
[edit of tedious nonsense]
manacker says
Jim Bouldin
You wrote (889): “New estimates show that there is enough carbon just in the upper 3 meters of permafrost soil areas to double the pre-industrial [CO2], not even counting deeper frozen methane or above-ground carbon.”
Maybe so, Jim. Let’s assume these “new estimates” are right.
But there is absolutely no evidence that major portions of this carbon will be released to the atmosphere any time in the foreseeable future. This is pure conjecture.
If all this carbon were to be released (over the next several hundred years or so), you tell me that it would raise atmospheric CO2 concentration by 280 ppmv.
Not really a big deal, since it is a very big “maybe”.
But let’s play your game and say that half of this is released in the next 91 years (by 2100), and that anthropogenic CO2 increases to 560 ppmv by then.
This would mean that the CO2 concentration then would be around 560 + 140 = 700 ppmv.
Still no real big deal.
Max
BobFJ says
Gavin, Reur response appended on my 880
I must confess that I don’t know the “official” definition of ‘feedback’, and I’ll accept yours. However:
(a) Is not the observed increase in surface temperature that is approaching 1C over the last 150 years, the means by which global warming has been assessed?
(a) Are you saying that the largest source of HEAT loss from the surface that is given by the IPCC as ~46% of the total budget, does not affect that same surface temperature?
(b) Do you agree that if the water content (vapour and clouds) in the atmosphere increases, then the fundamental source would be E-T. (putting aside a few complications that I and Bart Verheggen have mentioned)
(c) If there is increased E-T, surely, it follows that there will be a further negative effect on surface temperature.
(d) If we consider clouds, depending on species, it is agreed that they have a variety of negative and positive effects, e.g. albedo in addition to radiative effects etc. I’m struggling to understand the semantic difference in effect between say albedo, and E-T
[Response: a) yes by definition, b) of course, c) No. It depends on what happens to water vapor and clouds. d) the difference is that clouds are affected by temperature (via convection, stability etc.) and affect temperature (via solar and long wave effects). – gavin]
PeterMartin says
Jim,
You write “manacker, seriously, please do educate yourself”. Yes that would be good.
Unfortunately it is just a waste of time trying to help Max see sense on the AGW issue. It isn’t that he is incapable, but he just doesn’t want to. It doesn’t suit his world outlook.
I gave up trying on another forum (and I think he has followed me onto this one) when he refused to accept that it made no sense to use, as he had been doing and/or approvingly quoting others who did, a figure of 0.8 K/W/m^2 for a conversion from climate forcing to temperature rise for solar effects, but only 0.25 K/W/m^2 for GHG climate forcing.
Patrick 027 says
James and others – regarding resource use of solar power:
BACKGROUND OF PV (PHOTOVOLTAIC) CELLS AND PANELS:
Electron-hole pairs are created by the absorption of photons; within a semiconductor, a photon excites an electron from a valence band (below the fermi level) to a conduction band (above the fermi level), and a hole (unoccupied state) is left behind. (In some designs, a photon can produce two (or more?) excited electron-hole pairs.)
—–
(The fermi level is the level where, at local thermodynamic equilibrium, the probability of an available state being occupied is 50 %. When all energy is thermalized, the probability of a state being occupied varies in a particular way over energy level, decreasing from near 100 % to near 0 % going from lower to higher energy states – the range of energies with intermediate probabilities is larger at higher temperatures. When the distribution of states is asymmetric across the fermi level, the greater thermal excitation of electrons across the fermi level at higher temperatures can also relocate the fermi level relative to the electronic states.
When a material does not have a net charge or is not as a whole held at some nonzero voltage, the work function is the difference between the fermi level and vacuum zero – the energy an electron at rest has in the absence of any other electrical charges (as in a vacuum). Using vacuum zero as the reference energy level, when different materials with different work functions are in contact, there will be a built-in potential, the difference between the fermi levels.
IF my understanding is correct:
Electrons will tend to flow from the high fermi level material (low work function) to the low fermi level material (high work function); the build up of charge raises the energy of all states where electrons have accumulated (including the zero energy level that was vacuum zero; the work function is still the difference between the local zero energy level and the local fermi level); at equilibrium the fermi level is flat and the local zero energy varies across the interface between materials.
IF my understanding is correct:
The variations in the temperature dependence of the fermi level among different materials is what gives rise to thermocouple behavior.
—–
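A tiny numerical illustration of the occupancy statements in the aside above (my own sketch; the energies and temperatures are arbitrary choices):

```python
# Fermi-Dirac occupation probability: the chance that a state at
# energy E is occupied, given fermi level E_F and temperature T.
# Right at the fermi level the probability is exactly 50%, and the
# transition region around it widens as T rises, as described above.
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K

def occupancy(E, E_F, T):
    return 1.0 / (math.exp((E - E_F) / (K_B * T)) + 1.0)

E_F = 0.0
for T in (100.0, 300.0, 1000.0):
    # occupancy at the fermi level, and 0.1 eV above it
    print(T, occupancy(E_F, E_F, T), occupancy(E_F + 0.1, E_F, T))
```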
In most photovoltaic designs, photons must generally be at or above the band gap energy of the photovoltaic layer (or that layer with the smallest band-gap energy), and the energized electrons and holes settle (thermally, within some short period of time) toward the edges of the energy bands nearest the fermi level. Except for hot carrier technology (where electrons and holes are extracted before thermal relaxation nears completion), the band-gap energy of the absorbing layer is an upper limit of the voltage that can be supplied. But other limitations may also exist – the variation in work function across a p-n junction, for example.
In a p-type semiconductor, the fermi level is closer to the top of the valence band than the bottom of the conduction band; the reverse is the case in an n-type semiconductor. Doping a semiconductor with small amounts of strategically chosen impurities can add some electronic states within the band gap and change the fermi level – this is done with Si to create p-type and n-type material. Some semiconductors are intrinsically p-type or n-type. Impurities can also increase the conductivity by ‘donating’ electrons from states near the conduction band edge into the conduction band in an n-type material or ‘accepting’ electrons into states near the valence band edge from the valence band, leaving holes in the valence band; the energy transitions required can easily occur by thermal excitation of electrons (however, I think doping can also increase imperfections that increase recombination (?)). Thermal excitation of electrons across the fermi level to leave energy bands neither fully occupied nor fully unoccupied allows electrical conduction in metals.
———
PS the same band edge (lowest or highest energy level within an energy band) can correspond to multiple energy bands, wherein each true individual band is a set of states wherein energy E is a single-valued (at least not counting effects of magnetism and electron spin) function of electron wave vector k (which is proportional to momentum). Within a large object, there is a sufficient density of states that the function E(k) can be approximated as being continuous, but actually, there is a discrete set of k values for energy states that can be occupied by electrons with time-independent energy (it is not actually true that electrons jump from energy level to energy level without ever being at an intermediate energy level, but it is true that there are such intermediate states that can only be occupied if the energy is changing over time, implying an ongoing process of transition between states). Within an energy band, electrons (and holes) can generally move from one (k,E) value to another similar value via thermal processes (interactions with phonons) – at least if the density of states is great enough for small transitions to be available (might not be the case in nanostructures? PS k is a vector in three dimensional k-space; confinement in one or two spatial directions can make k more obviously discrete in some dimensions while still being approximately continuous in others). Direct absorption of a photon by an electron results in a change in E with very little change in k, because photons have very little momentum relative to their energy; thus, the direct absorption of a photon by an electron requires a transition between states in different energy bands, as opposed to states within the same band.
PS the above at least applies to the case where electrons are described by plane waves. In an atom or molecule, electron waves are a bit more complicated, but the same concept of E as a function of the electron wave function still applies. In the absence of any variations in potential energy, there is only one energy band (only one value of E for each k or analogous variable), and E is proportional to the square of the momentum. In materials, fluctuations in potential energy (around atoms, etc.) give rise to wave reflections and interference patterns and somehow this allows any k values to correspond to a value of k within the (first?) Brillouin zone … and that’s getting beyond what I can explain … but it allows multiple energy levels to exist at the same k value.
Actual velocity of electrons is the group velocity of the waves, which I think is proportional to the gradient of E in k-space. The average velocity over all states in an energy band is zero, which is why a nonzero electric current requires energy bands with at least some occupied states and some unoccupied states.
——–
I-V CURVES OF SOLAR CELLS AND DIODES:
At a given illumination, a photovoltaic cell provides current I as a function of voltage V across external resistance. For an open circuit, I = 0, and there is an open-circuit voltage Voc. When there is a short circuit, V = 0, and there is a short circuit current Isc. The fill factor = Pmax / (Voc*Isc), where Pmax is the maximum power that can be provided; Pmax occurs for the combination of V and I with the greatest product P = V*I.
(PS there could be some variations in the following when there are heterojunctions or other arrangements involved; this is my understanding of a p-n homojunction solar cell, such as crystalline Si):
When the external resistance is zero (short circuit), the voltage difference between the electrodes must be zero. The fermi level is thus nearly flat, so the entirety of the built-in potential (the variation of the work function across the p-n junction) should be available to organize and accelerate the electrons in the conduction band and the holes in the valence band in opposite directions across the p-n junction. Even so, not all electron-hole pairs produced will add to the external current, because there will be some recombination.
—
The fractional current loss due to recombination must somehow be proportional to the time that electrons and holes spend while in each other’s vicinity, and is also related to the density of imperfections in the crystal lattice. *** As I understand it, producing more electron-hole pairs closer to the p-n junction will reduce recombination by decreasing the time it takes for electrons on the p-type side and holes on the n-type side to cross the junction; this can be done by concentrating light absorption into a smaller volume near the junction or by folding the junction so that it fills the volume of the absorbing layer. *** More rapid acceleration of electrons and holes by the built-in potential might also help – ohmic resistance will get in the way by disrupting electron motions. (PS combining separate parts of my knowledge, I conclude that the acceleration of electrons must involve changes in energy level by thermal excitations and relaxations (interaction with phonons – quanta of atomic/molecular/crystal-lattice vibrations (there are optical and acoustic phonons)), with some statistical organization by variation in potential (and magnetic fields, where that comes into the picture) over space on any given locally defined energy level (relative to local zero energy).)
*** Voc and Pmax could be increased by reducing the ohmic (or whatever other?) resistance of the light-absorbing layer. This can be accomplished by using a thinner layer (with light absorption concentrated into a smaller volume – either by light-trapping using total internal reflection, or by using photosensitizers – a photosensitizing layer at the p-n junction or perhaps some way of harnessing surface plasmons***). It could also be done by using electrodes that protrude into the semiconducting layer (the p-n junction would be folded around those protrusions).
—
As the external resistance is increased, the voltage difference across the solar cell must increase to support a given current through the external resistance. Within the solar cell, this counteracts the built-in potential; relative to an external reference, the fermi level must rise from the p-type side to the n-type side, while the variation in energy across the p-n junction (the built-in potential) of the locally-same states decreases. This can reduce the number of charge carriers that escape the cell before recombination. However, aside from ohmic resistance contributions within the cell (?), the current does not decrease much if at all (?) until the voltage increases up to some point.
(PS one thing to keep in mind is that electric potential is defined for positive charges; the decrease in energy along the same locally-defined state from the p-type side to the n-type side across the p-n junction is reduced by a positive voltage change in the same direction, not the opposite direction – a concentration of electrons will raise the energy levels but reduce the voltage at that location.)
As the voltage increases, at some point the current starts to decline significantly. It decreases more and more per unit increase in voltage. When the external resistance goes to infinity, the current goes to zero. The build up of electrons in the conduction band on the n-type side and of holes in the valence band on the p-type side creates an electric field – a voltage difference. In an open circuit, this voltage difference builds until it completely (?) cancels the built-in potential, at which point electrons and holes can drift back across the p-n junction and there is an equilibrium between the rate at which electron hole pairs are created by the incident light and the rate of recombination (? actually, it seems to me that some degree of partial cancelation of the built in potential would accomplish this – there still has to be some remaining portion of the built-in potential not canceled by the charge build up in order to sustain the charge build up; but reducing that portion, even if not to zero, will allow greater drifting of charge carriers back across the junction to aid in the balance between electron-hole pair creation and recombination).
Note that the point of maximum power is somewhere along the sharp turn of the I-V curve – let’s designate the corresponding I and V as Imax and Vmax. This point is where the I-V curve is tangent to a hyperbola I*V = P.
As a graph, this I-V curve actually extends beyond where it intersects the I axis (Isc, V=0) and the V axis (Voc, I=0). If rotated so that the I axis points to the left and the V axis points upward, the graph looks a bit like a logarithmic function that has been translated along the I axis: V ~= Voc * log( Isc – I ), but it obviously can’t be exactly that because such a function is undefined for I = Isc. I’m not sure what the exact form of the equation would be. I have gotten the impression that ideally, there is a range of V in which I is constant at I = Isc.
The extension toward negative V corresponds to the situation in which a voltage is applied to the solar cell such that, were the solar cell an ohmic device, it would tend to drive a current in the same direction as the current that the illuminated solar cell would produce. Without changing the incident light, however, the current cannot be increased beyond a certain limit. Perhaps such a voltage could increase I just a little beyond Isc by reducing recombination, but there is generally little change in I until the voltage is so great as to cause an electrical breakdown (PS would this turn the p-n junction into a tunnel junction?).
The extension into negative I corresponds to the situation in which a voltage is applied to the solar cell that would increase the voltage difference across the cell and tend to drive a current in the opposite direction. A voltage difference greater than Voc would indeed accomplish this.
Setting aside distortion of the shape of the I-V curve by ohmic contributions to internal resistance, my understanding is that changing the incident radiation changes the Isc but leaves the shape of the curve unchanged – the function is translated along the I axis – although I’m not quite sure if the graph is truly unchanged in shape or if it only approximately the same shape. In the dark, a p-n junction acts as – in fact it is – a diode, that allows a current through (electrons flowing from n-type to p-type – defined as I less than 0 (a negative current) in this context) when a voltage (a positive voltage in this context) is applied; the current doesn’t become significant (if at all nonzero) until V gets near the sharp turn in the curve. In the other direction, a negative V cannot produce a positive current until sufficiently large as to cause an electrical breakdown of the device.
If the shape is not distorted by other factors, Isc should be proportional to the number of photons per unit time incident on the cell with energy greater than the band gap energy(ies) of the absorbing layer(s). Imax also obviously decreases with decreasing incident light. If the curve does not change shape and if Vmax stayed constant, Imax would have a % decrease greater than the % decrease in incident photons because Imax is at least somewhat less than Isc; however, a linear proportionality of Imax to photon flux may be a good approximation for sufficiently large fill factors (fill factor = (Imax*Vmax) / (Isc*Voc)). However, Vmax is not invariant – it will also decline with a decreasing photon flux, though proportionately much less if the fill factor is sufficiently high. Assuming the shape of the curve is constant, as the curve translates to lower I, it will have the same slope dI/dV at a given V, whereas the slope dI/dV of the hyperbola that intersects the curve at that point will be reduced; Vmax has to decrease in order for Imax and Vmax to be at the point where a hyperbola is tangent to the curve.
A fill factor of 1 would imply constant I = Isc = Imax out to V = Voc = Vmax, at which point I would drop to 0. A fill factor of 0.25 would correspond to a linear relationship with constant slope dI/dV. Note that the fill factor declines with decreasing photon flux. A stated fill factor for a solar cell or module would be given for some standard condition – typically under a standard 1 full sun (Air mass 1, 1000 W/m2). A higher fill factor corresponds to less loss in efficiency with decreasing insolation.
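On the “exact form of the equation” question above: the usual textbook idealization is the single-diode model, I(V) = I_L - I_0*(exp(V/(n*V_T)) - 1), which reproduces the translated-logarithm shape, the near-constant I = Isc plateau, and the sharp knee described here. A minimal sketch with illustrative (assumed) parameters:

```python
# Ideal single-diode model of a solar cell's I-V curve:
#   I(V) = I_L - I_0 * (exp(V / (n * V_T)) - 1)
# I_L scales with photon flux; the diode term makes the sharp knee.
import math

I_L = 3.0       # light-generated current, A (proportional to flux)
I_0 = 1e-9      # diode saturation current, A
n = 1.3         # diode ideality factor
V_T = 0.02585   # thermal voltage kT/q at ~300 K, volts

def current(V):
    return I_L - I_0 * (math.exp(V / (n * V_T)) - 1.0)

V_oc = n * V_T * math.log(I_L / I_0 + 1.0)   # open-circuit voltage
# Sweep V to find the maximum-power point on the knee of the curve.
V_mp, P_max = max(((v, v * current(v)) for v in
                   (i * 0.001 for i in range(800)) if current(v) > 0),
                  key=lambda p: p[1])
print(V_oc, V_mp, P_max, P_max / (V_oc * I_L))  # last term: fill factor
```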
——
CELLS IN SERIES, SHUNT DIODES:
One issue with solar cells is that when cells are connected in series, each cell has to have the same current I; thus, there is some inefficiency if they were to have different Imax values as defined individually. Aside from variations in recombination rates or the production of multiple electron-hole pairs by single photons, the cells in a series should be sized to absorb the same number of photons in electron-hole pair production. This obviously pertains to area for cells laid side by side in a panel. It also pertains to thickness (depending on absorptivity of layers) and the shape of the incident light spectrum for cells with different band-gap energies that are stacked to form a multijunction cell.
When a cell in a series is partially or completely shaded, it reduces the current that can flow through all the cells. The other cells will impose a voltage on the shaded cell as charge carriers accumulate, which could conceivably decrease recombination within that cell and partly counteract partial shading, but my impression is that this is very limited. The shaded cell will thus partly or completely idle the other cells in a series unless the other cells supply enough voltage to cause the electrical breakdown of the cell – which can damage the shaded cell by the resulting heating.
This problem can be solved by using shunt diodes. These are wired in parallel with the cells in a series – to prevent damage, across a series with the potential to cause electrical breakdown of a single member, or with smaller series to increase performance when occasional shading occurs. The diodes are oriented in the opposite direction (as defined by the p and n layers) as the solar cells, to block current flowing backward, but to allow forward current to bypass a shaded cell.
Unfortunately, while this will save cells from damage, it will not allow a partially shaded cell to produce energy at all. The voltage across the shaded cell has to actually reverse and surpass some magnitude in order to make the shunt diode in parallel with the shaded cell conduct significant current. If the cell still produces enough current to limit the build up of charge, the current will be limited by the shaded cell; sufficient limitation of current allows the voltage to build up enough to allow further current reductions in the shaded cell to be compensated with current increases in the shunt diode. The current in the shaded cell and in the diode requires power input. But it is better than complete idling of a series of cells from one shaded cell.
Note that shunt diode wiring within a panel would make additional shunt diodes for panels connected in series redundant – a shaded panel could pass current through its shunt diodes.
** An alternative that could reduce shading losses would be to connect batteries (or capacitors) in parallel with the cells; when the cell voltage drops below some level, the battery would release energy and add current; otherwise, the voltage of the cell would force some current backwards through the battery, storing energy. But this is probably not a practical solution in the near future (?).
——-
CELLS IN PARALLEL:
What about cells wired in parallel? Actually, all the different areas within any given single-junction cell (outside of any nanostructures or multiple band gap technology, etc.) are connected in parallel, and multiple cells or series of cells connected in parallel should be at least somewhat analogous. If one cell is partly shaded, its individual Imax decreases roughly in proportion to the shading and its individual Vmax decreases a little. If each parallel cell were producing power at its individual Imax and Vmax, there would be voltage differences along the connections between them that would send current from cell to cell. This would effectively reduce the currents of the shaded cells even further while increasing their voltages, and increase the currents of the unshaded cells while decreasing their voltages, until all voltages were the same (setting aside small ohmic resistances in the connections). If the I-V curve of each cell keeps its shape and simply translates along the I axis as the light changes, or approximately so, then the net result will be the same, or approximately the same, as if the shading were spread out among all the cells. Thus, uneven lighting should not be much of a problem for parallel cells.
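A rough check with the same toy diode model as in the fill-factor sketch above (again, illustrative parameters rather than a real cell) supports this – the maximum power of two unequally lit cells in parallel comes out essentially equal to the sum of their individual maxima:

import numpy as np

I0, Vt = 1e-9, 0.026   # same illustrative diode parameters as above

def cell_current(V, Isc):
    # ideal single-diode I-V curve, clipped so the cell never sinks current
    return np.clip(Isc - I0 * (np.exp(V / Vt) - 1.0), 0.0, None)

V = np.linspace(0.0, 0.7, 4000)
pair = V * (cell_current(V, 3.0) + cell_current(V, 1.5))  # one cell half shaded
solo = sum((V * cell_current(V, Isc)).max() for Isc in (3.0, 1.5))
print(f"parallel pair Pmax = {pair.max():.3f} W; "
      f"sum of individual Pmax = {solo:.3f} W")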
When series of cells are in parallel and one series has a single partially-shaded or completely shaded cell, the series by itself will have a reduced Vmax and Imax. Similar logic as above should apply for the net result – the I of the series with the shaded cell will decrease while its V is increased, with opposite changes for the other series.
______________________________
ASSESSING MATERIAL NEEDS FOR PHOTOVOLTAIC POWER (EXCLUDING SOME BALANCE OF SYSTEM ITEMS) – REGULAR FLAT PANELS:
SAMPLE CALCULATION (IN THE ABSENCE OF ACTUAL KNOWLEDGE OF SOLAR PANEL AND SOLAR INSTALLATION DESIGNS – this is to get a tangible feel for what might plausibly be used):
(In the following, I use ‘um’ to refer to microns; the actual symbol uses the Greek letter ‘mu’ for the metric prefix ‘micro’.)
———
SAMPLE MATERIALS:
A conductor of 10^-7 ohm*m (approximately the resistivity of Fe near room temperature; Cu has about 1/6 that resistivity; Al is in between. What is the resistivity of stainless steel (somewhere around 10 to 20 % Cr, as far as I know – PS Cr resources are surprisingly abundant)?)
A transparent conducting electrode of 10^-4 ohm*m (I don’t know the actual resistivity of tin oxide or indium tin oxide; other options include some polymers, carbon nanotubes, ZnO doped with Al, and the like – but of course, material combinations have to fit together electronically (to function) and thermodynamically (for stability over time), so not just any combination will do).
A semiconductor of 10 ohm*m (actual values can be higher or much lower)
A resistor of 10^10 ohm*m (etc.)
————
FORMULA FOR VOLTAGE DROP ACROSS THE AREA OF A SOLAR CELL OR PANEL:
rho = resistivity
L = distance current travels
w = width across which the current flows
J = current per unit area of panel
h = thickness of conducting layer
S = spacing of front conducting grid
R = resistance = rho*L/(h*w)
I = L*w*J
V = I*rho*L/(h*w)
V = (L*w*J) * rho * L / (h*w)
V = (L*J) * rho * L / h
V = J * L^2 * rho/h
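(The cancellation of w in the derivation can be confirmed symbolically – a quick sketch using sympy:)

import sympy as sp

rho, L, w, J, h = sp.symbols('rho L w J h', positive=True)
R = rho * L / (h * w)      # resistance of the conducting layer
I = L * w * J              # total current collected over the area
print(sp.simplify(I * R))  # -> J*L**2*rho/h (up to ordering)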
—————–
LET (at standard 1000 W/m2, solar spectrum through 1 atmosphere):
Vc = 0.5 V (the Vmax of the cell before any internal resistance; this is a reasonable value for crystalline Si after internal resistance. Crystalline Si has a band gap of just over 1 eV; the built-in potential has to be less than the band gap, or else the Fermi level would actually go into the conduction and/or valence band, turning one or both of the p- and n-type layers (except near the p-n junction) into metals, allowing significant unproductive absorption of photons by electrons (unless that whole layer is then used as a photosensitizer (?) within another material, or is used with hot-carrier technology, or has opportunities for thermalization reduced by nanostructuring…).)
J0 = 300 A/m2 (the Imax per unit area of the cell).
Vc * J0 = 150 W/m2, implying an efficiency of 15 % under a standard full sun (1000 W/m2).
CALCULATE VOLTAGE LOSSES:
—————
Photovoltaic layers (different formula): FIND Vs:
d (layer thickness) = 10 um = 1e-5 m
R = rho * d/area
Vs = J*area * rho * d/area
Vs = J * rho * d
Vs = 3e2 A/m2 * 1e1 ohm*m * 1e-5 m
= 3e-2 V
Vs = 0.03 V
Vs/Vc = 0.03/0.5 = 6 %
—————
transparent conducting electrode layer
h0 = 0.1 um = 1e-7 m thick,
rho0 = 10^-4 ohm*m
S = 2 mm = 2e-3 m
L0 = 1/2 * spacing of front conducting grid = 1e-3 m
V0 = J0 * L0^2 * rho0/h0
V0 = 3e2 A/m2 * 1e-6 m2 * 1e-4/1e-7 ohm
= 3e-4 * 1e3 V
= 0.3 V
Try h0 = 0.2 um and S = 1 mm to reduce V0 by factor of 2*(2^2) = 8:
Then V0 = 0.0375 V.
V0/Vc = 0.0375/0.5 = 7.5 %.
————
Back Electrode
h1 = 10 um = 1e-5 m
rho1 = 1e-7 ohm*m
Let L1 = 1 cm = 1e-2 m (There is some flexibility here; w and L can be adjusted for various cell arrangements in a panel.)
V1 = J0 * L1^2 * rho1/h1 = 3e2 A/m2 * 1e-4 m2 * 1e-2 ohm
= 3e-4 V
= 0.0003 V
V1/Vc = 0.0003/0.5 = 0.06 %
————
Front Conducting grid
Assume cross section (perpendicular to grid lines) has same average h2 as h1 and same rho2 as rho1:
V2/Vc = 0.06 %
(V1+V2)/Vc = 0.12 %.
____________________
There has been a 13.62 % voltage drop thus far, using 20 um of conductor, 10 um of PV material, and 0.2 um of a transparent conducting electrode.
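For what it’s worth, the per-layer arithmetic can be re-checked in a few lines of Python (same assumed sample values as above):

J  = 300.0   # A/m2, Imax per unit cell area
Vc = 0.5     # V, cell voltage before resistive losses

def v_sheet(J, L, rho, h):
    # lateral voltage drop across a conducting layer: V = J * L^2 * rho / h
    return J * L**2 * rho / h

v_pv   = J * 10.0 * 1e-5                   # PV layer: Vs = J * rho * d
v_tce  = v_sheet(J, 0.5e-3, 1e-4, 0.2e-6)  # transparent electrode, S = 1 mm
v_back = v_sheet(J, 1e-2, 1e-7, 1e-5)      # back electrode
v_grid = v_back                            # front grid assumed equivalent

total = v_pv + v_tce + v_back + v_grid
print(f"total drop {total:.4f} V = {100 * total / Vc:.2f} % of Vc")
# -> total drop 0.0681 V = 13.62 % of Vc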
At this point, cells are connected in series in a panel (where the front grid lines of one cell run directly to the back electrode of an adjacent cell). To minimize problems with shadows, a shunt diode can be used in the panel. Add another area-averaged 10 to 20 um of conductor for this. (The panel could be designed so that shunt diodes within a panel could be snapped or screwed in or out, and thus replaced, if necessary, without actually taking the panel apart each time. But that might not be necessary. ?)
Use a series of 60 cells in a panel, so the panel Vm is 60 * 0.5 V = 30 V (the 13.62 % fractional voltage drop carries over unchanged to the series as a whole), and the panel Jm = 300/60 A/m2 = 5 A/m2.
*******
Note that the 13.62 % voltage drop thus far is already included in the rated efficiency of a solar panel. A panel with the above specifications that is 14.6846 % efficient at the same J would be 17 % efficient before the voltage losses just considered ( 17*(1-0.1362) = 14.6846 ). Current losses due to recombination and to shading by the front conducting grid would likewise be included in the rated efficiency of a cell or panel.
*******
——————–
Step 3:
Assume a wire length of L3 = 5 m connects a set of 4 panels (total area 4 m2) in series to get 120 V (such a voltage could also be built directly into a panel by using a greater number of smaller cells, or by using larger panels). Some of that length bypasses panels through shunt diodes.
Cross sectional area CA3 = 2 mm2 = 2e-6 m2.
rho3 = 1e-7 ohm*m
I3 = 5 A
V3 = I3 * L3 * rho3/CA3
V3 = 5 A * 5 m * 1e-7 ohm*m / 2e-6 m2
= 25/2 * 1e-1 V
= 1.25 V
V3/120 V ~= 1.04 %
Volume of wire = 5 m * 2e-6 m2 = 1e-5 m3. This volume distributed over the area of the panels (4 m2) is a layer 2.5 um thick.
Additional length of wire of 25 m (total ’round trip’) to connect to an inverter, power conditioning, and regular building wiring would add another 12.5 um to the layer thickness and another 5.2 % voltage drop.
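In sketch form (the 5 m and 25 m lengths are the two runs just described, with the assumed values above):

rho_wire = 1e-7    # ohm*m, conductor resistivity
cross    = 2e-6    # m2, wire cross-section
current  = 5.0     # A
area     = 4.0     # m2 of panels served
v_system = 120.0   # V

for length in (5.0, 25.0):                # m of wire
    drop  = current * length * rho_wire / cross
    layer = length * cross / area         # wire volume spread over the panels
    print(f"{length:.0f} m: drop {drop:.2f} V ({100*drop/v_system:.2f} %), "
          f"equivalent layer {layer*1e6:.1f} um")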
****
Making the wire thicker by a factor X would increase the layer thickness by a factor X and reduce the voltage drop by a factor X. Increasing the voltage of the panels or series of panels by a factor Y would decrease the percent voltage drop by the same factor while preserving the equivalent layer thickness.
Increasing the necessary length of wire by Z would increase both voltage losses and equivalent layer thickness by the same factor.
****
*******
Total voltage losses thus far:
13.62 % within panel (6 % in the PV layer, 7.5 % in the transparent front electrode, and 0.12 % in the back electrode and front grid)
+ 6.25*Z/(X*Y) %
Layer thickness of conductor rho = 10^-7 ohm*m thus far:
30 um to 40 um within the panel (20 in the front and back of cells, more in shunt diode wiring) + 15*Z*X um.
Layer thickness of transparent front electrode = 0.2 um.
Layer thickness of PV layer = 10 um.
Note that I took the longest distances the current would travel when figuring the layer thicknesses and voltage losses of the back electrode, front conducting grid, and transparent front electrode, so the layer thickness, the voltage drop, or both will be overestimates for those layers. That gives us a safety factor.
ALSO NOTE: obviously, one would want to design the system to produce a desired voltage AFTER the voltage losses within the system. I’m not actually trying to design a specific system here.
*******
How long would the wires need to be for the large rooftops of large buildings? How high can the panel series voltage go before it would be unsafe in residential or commercial settings? (PS in the case of 240 V or multiples of 120 V, would inverters take power input in series and output an AC current in parallel? – and so on for batteries, etc.)
How much Cu wire is in an inverter?
———————-
CURRENT LEAKAGE:
Consider panels in series so that the extreme panels may be at a 1000 V potential (+ or -).
The charges in the encapsulating dielectric material would separate, producing a net surface charge. A discharge of the surface charge would be replenished by charges diffusing through the material. How dense would this surface charge be? Would it be dangerous, or would contact produce little more than the typical static shocks people get from walking across carpets and touching doorknobs in the winter at some latitudes, etc.? Would repetitive discharging eventually damage the material (affecting optical properties of the front layer, for example)?
In case it is necessary to ground the exteriors of panels:
Current leakage per unit area at 1000 V for a resistor of 1e10 ohm*m, thickness U = 100 um = 1e-4 m:
V = I*R = J*area*rho*U/area = J*rho*U
J = V/(rho*U)
J4 = 1e3 V / (1e10 ohm*m * 1e-4 m) = 1e-3 A/m2.
Note the 30 V panels above produced 5 A/m2. This current leakage is a 0.02 % loss.
However, the 5 A/m2 has to be divided by the number of panels in a series to get the output current per unit of total array area. 1000 V / 30 V ~= 33.33, so for 33.33 panels in a series the current leakage is a 0.67 % loss. The loss is proportional to the square of the series voltage, since the leakage current per unit area grows with the voltage while the output current per unit area is inversely proportional to the number of panels in the series. The loss actually has to be doubled because of losses through both the front and the back (unless the back is a non-issue in this context); however, it can also be halved, because the voltage varies along the series and its average magnitude will be half of the extremes at the ends.
In this case, a 1000 V series would be fine, but a 10,000 V series would suffer great loss (67 %). Using an encapsulating material with a resistivity of 10^12 ohm*m would solve that problem. The trick is to find such a material with optical transparency in the wavelengths that the solar cell utilizes, especially those it uses efficiently – and one that won’t cloud up or turn yellow or dark after a mere few decades in the sun. I have not gotten the impression that this is a particularly hard problem to solve, but it is not something I have a lot of information about.
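The scaling can be sketched numerically (ignoring the front-and-back doubling and the along-series averaging noted above, which roughly cancel):

rho_enc = 1e10    # ohm*m, encapsulant resistivity (assumed)
t_enc   = 1e-4    # m, encapsulant thickness
j_panel = 5.0     # A/m2, output of one 30 V panel
v_panel = 30.0    # V

for v_series in (1e3, 1e4):
    n      = v_series / v_panel              # panels in the series
    j_leak = v_series / (rho_enc * t_enc)    # A/m2 at the extreme end
    j_out  = j_panel / n                     # A/m2 of total array area
    print(f"{v_series:.0f} V series: leakage loss ~ {100*j_leak/j_out:.2f} %")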
1e-3 A/m2 of current would need to be grounded in this case. Allowing a voltage of 1 V, for an average length of 10 m, the cross sectional area of the grounding wire per m2 of panel would be:
area/m2 = 1e-3 A/m2 * 1e-7 ohm*m * 1e1 m / 1 V = 1e-9 m2/m2. The volume per unit panel area of this grounding wire is then 10 m * 1e-9 m2/m2 = 1e-8 m3/m2, i.e. a 10 nm (0.01 um) layer of conductor in volume equivalent.
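In sketch form (same assumed numbers):

j_leak  = 1e-3    # A/m2 of panel to be grounded
rho_w   = 1e-7    # ohm*m, conductor resistivity
run     = 10.0    # m, average grounding run
v_allow = 1.0     # V, allowed drop

cs_per_m2  = j_leak * rho_w * run / v_allow   # wire cross-section per m2 panel
vol_per_m2 = cs_per_m2 * run                  # wire volume per m2 panel
print(f"{cs_per_m2:.0e} m2/m2, equivalent layer {vol_per_m2*1e9:.0f} nm")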
———–
Consider:
Wires at 1000 V potential, length 30 m, radius on the scale of 1 mm, carrying 5 A, with insulator of 10^12 ohm*m, thickness 0.5 mm (about the same volume of insulator as metal wire): potential leakage current (if the wires got wet, etc.)
~ (14e-3 m * 30 m) * 1000 V / ( 1e12 ohm*m * 5e-4 m)
= 4.2e-1 m2 * 1e3 V / 5e8 ohm*m2
~= 8.4e-7 A, call it 1e-6 A.
1e-6 A / 5 A = 0.00002 %.
Is that okay? I suspect it is.
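For completeness, the same estimate in code form (keeping the assumed 14e-3 m effective leakage width):

width   = 14e-3    # m, assumed effective leakage perimeter of the wiring
length  = 30.0     # m of wire run
voltage = 1000.0   # V
rho_ins = 1e12     # ohm*m, insulator resistivity
t_ins   = 0.5e-3   # m, insulator thickness

i_leak = (width * length) * voltage / (rho_ins * t_ins)
print(f"leakage ~ {i_leak:.1e} A = {100 * i_leak / 5.0:.6f} % of 5 A")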
__________________________________
To be continued….
BobFJ says
Bart Verheggen 885, you wrote in part:
To a degree – but, with increased alleged importance, you are repeating roughly what I said, which, taking point (4) in isolation, was in full, with emphasis added:
(4) Thus, putting aside a few relatively trivial complications, SUCH AS regional weather, and especially rainfall variations, if there is an INCREASE in water content in the atmosphere, it follows that it would have to be because of INCREASED E-T.
I guess you are making an intuitive statement that conflicts with my intuitive proposal. Would that be correct?
[Response: Your statement is wrong. It is like saying that the amount of water in the bath is only proportional to the water coming in from the faucet without thinking about whether the plug is in or not. – gavin]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
BTW, when I refer to water content in the atmosphere, I mean it in terms of mass, regardless of phase or saturation levels.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Also, bear in mind that my original enquiry was partly that:
Clouds and water vapour are very popular considerations in the AGW debate, but the maybe 46% of the HEAT loss from the surface that occurs via E-T (= a major cooling effect, per IPCC) does not seem to be of interest.
[Response: All the energy fluxes are ‘of interest’ – E-T is just not a feedback. – gavin]
BobFJ says
Kevin McKinney 886, you wrote in part:
Well actually Kevin, I have addressed the two predominant primary forms of energy that are considered in the terrestrial energy budget from an impeccable data source, the IPCC:
(A) HEAT: according to the IPCC (FAQ 1.1; Figure 1*), the terrestrial HEAT loss from the surface amounts to ~61% of the energy budget. (the processes involved are complicated, and e.g. involve some transient WORK)
(B) EMR: the same IPCC source gives the balance in the energy budget as long-wave radiative loss comprising ~24% directly to space, and ~15% transient through GH absorption.
You also wrote:
However, global warming has been assessed and defined in terms of somewhat less than 1C temperature rise of the surface over the last 150 years. In terms of impact on our environment, it is the T in the lower layers of the atmosphere, and the surface layers of the oceans comprising ~70% of the earth’s surface area, that have prime importance.
E-T has a proportionally massive cooling effect at the surface, where it really matters.
* extracted here: http://farm4.static.flickr.com/3178/3064545467_7b7d04b38d_o.gif
manacker says
Bob_FJ and Gavin
You have both touched upon a problem with overall net climate feedbacks, where there appears to be a basic dilemma.
As Wiki tells us:
http://en.wikipedia.org/wiki/Scientific_skepticism
“Scientific skepticism or rational skepticism (also spelled scepticism), sometimes referred to as skeptical inquiry, is a scientific or practical, epistemological position in which one questions the veracity of claims lacking empirical evidence.”
As a rational skeptic, I prefer empirical evidence based on actual physical observations made today (over paleo-climate proxy studies or reconstructions), and every time over climate model outputs, which actually offer no empirical evidence at all.
While it can be argued that there can be distortions due to errors in measurement techniques, etc. (probably even more so with paleo-climate reconstructions than with actual physical observations made today), these are insignificant in comparison to the possible errors in climate model input assumptions (where not only the magnitude, but even the sign of the assumption can be wrong).
IPCC climate models assume a 2xCO2 climate sensitivity of 3.2°C. But does this assumption hold up to the scrutiny of empirical evidence, such as:
· Actual long-term temperature change vs. change in CO2
· Physically observed net cloud feedback with warming
· Physically observed increase of atmospheric water vapor content with warming?
This, I believe, is the dilemma.
Max
[Response: The dilemma is that people who have been told that their basic understanding is flawed persist in insisting that it is not. Models do not ‘assume’ that climate sensitivity is 3.2 deg C, physical observations of increasing water vapour are described in Chp 3 of the AR4 report, and no-one has great data on cloud feedbacks (which is why they are still uncertain). Any conversation has to have both parties at least paying attention to the points of the other party. Simply repeating things over and over is pointless. If you can’t move on from your incorrect talking points, then move on. – gavin]
Bart Verheggen says
BobFJ (896),
It’s not just intuition.
I was merely pointing out that (a change in) the concentration of any species depends on (a change in) the ratio of its sources and sinks; you seemed to have only considered a change in the source as a cause of the change.
Ray Ladbury says
BobFJ, your analysis really doesn’t make sense. It seems to presume there is no mixing in the atmosphere. Energy transported within the atmosphere STILL contributes to warming. It is only the LWIR loss at the top of the atmosphere that is truly lost. In any case, what evidence do you have that this is not properly modeled?
The problem you are having is that you are looking at a tiny piece of the puzzle and trying to draw conclusions about the whole. You have to look at the whole to see how the pieces fit together.
Question: Has it occurred to you that if what you were saying is true, we’d be seeing a significant worsening of temperature inversions and therefore of pollution?