climate


Discussions asking whether “Snowball Earth” triggered animal evolution (see the bottom half of that page) point to increasing evidence that the Snowball Earth hypothesis may explain an important part of why spacefaring civilizations seem rare.

photosynthetic organisms are limited by nutrients, most often nitrogen or phosphorus

the glaciations led to high phosphorus concentrations, which led to high productivity, which led to high oxygen levels in the oceans and atmosphere, which allowed animal evolution to be triggered and thus the rise of the metazoans.

This seems quite speculative, but if true it might mean that our planet needed a snowball earth event for complex life to evolve, and also needed that snowball earth period to be followed by hundreds of millions of years without another snowball earth period that would wipe out complex life. It’s easy to imagine that the conditions needed to produce one snowball earth event make it very unusual for a planet to escape repeated snowball earth events for as long as ours did, thus explaining more of the Fermi paradox than previously seemed possible.

Rob Freitas has a good report analyzing how to use molecular nanotechnology to return atmospheric CO2 levels to pre-industrial levels by about 2060 or 2070.

My only complaint is that his attempt to estimate the equivalent of Moore’s Law for photovoltaics looks too optimistic, as it puts too much weight on the 2006-2008 trend, which was influenced by an abnormal rise in energy prices. If the y-axis on that graph were logarithmic instead of linear, it would be easier to visualize the lower long-term trend.
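To illustrate the point about axis scaling, here is a minimal sketch with entirely hypothetical cost numbers: a steady percentage decline with a short 2006-2008 distortion looks dramatic on a linear y-axis, but on a log y-axis the long-term trend shows up as a straight line and the bump is easy to see for what it is.

```python
# Minimal sketch with hypothetical numbers: why a log y-axis makes it easier
# to judge whether a long-term exponential cost decline has really changed.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1990, 2009)
# Assume a steady ~7%/year cost decline, with a bump during 2006-2008
# (standing in for the abnormal rise in energy prices).
cost = 10 * 0.93 ** (years - 1990)
cost[years >= 2006] *= 1.25  # hypothetical short-term distortion

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
ax_lin.plot(years, cost)
ax_lin.set_title("linear y-axis")
ax_log.semilogy(years, cost)
ax_log.set_title("log y-axis: constant % decline is a straight line")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("year")
    ax.set_ylabel("cost per watt ($, hypothetical)")
plt.tight_layout()
plt.show()
```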

(HT Brian Wang).

The Global Catastrophic Risks conference last Friday was a mix of good and bad talks.
By far the most provocative was Josh’s talk about “the Weather Machine”. This would consist of small (under 1 cm) balloons made of material a few atoms thick (i.e. requiring nanotechnology that won’t be available for a couple of decades) filled with hydrogen and having a mirror in the equatorial plane. They would have enough communications and orientation control to be individually pointed wherever the entity in charge of them wants. They would float 20 miles above the earth’s surface and form a nearly continuous layer surrounding the planet.
This machine would have a few orders of magnitude more power over atmospheric temperatures than is needed to compensate for the warming caused by greenhouse gases this century, although it would only be a partial solution to the waste heat farther in the future that Freitas worries about in his discussion of the global hypsithermal limit.
The military implications make me hope it won’t be possible to make it as powerful as Josh claims. If 10 percent of the mirrors targeted one location, it would be difficult for anyone in the target area to survive. I suspect defensive mirrors would be of some use, but there would still be serious heating of the atmosphere near the mirrors. Josh claims that it could be designed with a dead man’s switch that would cause a snowball earth effect if the entity in charge were destroyed, but it’s not obvious why the balloons couldn’t be destroyed in that scenario. Later in the weekend Chris Hibbert raised concerns about how secure it would be against unauthorized people hacking into it, and I wasn’t reassured by Josh’s answer.
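A rough back-of-envelope calculation (my own numbers and assumptions, not figures from the talk) suggests why concentrating even a tenth of the mirrors would be so dangerous: ignoring geometry losses, 10 percent of the sunlight the planet intercepts, aimed at a city-sized area, would be on the order of a hundred thousand times normal sunlight.

```python
# Back-of-envelope sketch (my own rough numbers, not from the talk) of the
# power available if 10 percent of a planet-covering mirror layer targeted
# one city-sized area.
import math

SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere
EARTH_RADIUS = 6.371e6    # m (a 20-mile altitude barely changes this)

intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2  # sunlight hitting Earth
fraction_targeted = 0.10   # the "10 percent of the mirrors" scenario
target_area = 1e8          # m^2, i.e. a 100 km^2 city (assumed)

# Crude upper bound: assume the targeted mirrors are all on the sunlit side
# and can all reach the target.
flux_on_target = fraction_targeted * intercepted / target_area

print(f"Total sunlight intercepted by Earth: {intercepted:.2e} W")
print(f"Flux on the target area: {flux_on_target:.2e} W/m^2")
print(f"Roughly {flux_on_target / SOLAR_CONSTANT:.0f}x normal sunlight")
```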

James Hughes gave a talk advocating world government. I was disappointed with his inability to imagine that that would result in power becoming too centralized. Nick Bostrom’s discussions of this subject are much more thoughtful.

Alan Goldstein gave a talk about the A-Prize and a concept he calls the carbon barrier, intended to distinguish biological from non-biological life. Josh pointed out that, as stated, all life fits Goldstein’s definition of biological (since any information can be encoded in DNA). Goldstein modified his definition to avoid that, and then other people mentioned reports such as this which imply that humans don’t fall within Goldstein’s definition of biological, due to inheritance of information through means other than DNA. Goldstein seemed unable to understand that objection.
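To make Josh’s parenthetical point concrete, here is a minimal sketch (my own illustration) of the sense in which any digital information can be encoded in DNA: each base carries two bits, so any byte string maps to a sequence of A, C, G, T and back.

```python
# Tiny illustration (my own example) of encoding arbitrary bytes as DNA bases,
# 2 bits per base, and decoding them again.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Encode bytes as a DNA string, most significant bits first."""
    return "".join(BASES[(b >> shift) & 0b11]
                   for b in data
                   for shift in (6, 4, 2, 0))

def dna_to_bytes(dna: str) -> bytes:
    """Invert the encoding."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        b = 0
        for base in dna[i:i + 4]:
            b = (b << 2) | BASES.index(base)
        out.append(b)
    return bytes(out)

msg = b"any information"
encoded = bytes_to_dna(msg)
assert dna_to_bytes(encoded) == msg
print(encoded)
```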

Book review: Global Catastrophic Risks by Nick Bostrom and Milan Cirkovic.
This is a relatively comprehensive collection of thoughtful essays about the risks of a major catastrophe (mainly those that would kill a billion or more people).
Probably the most important chapter is the one on risks associated with AI, since few people attempting to create an AI seem to understand the possibilities it describes. It makes some implausible claims about the speed with which an AI could take over the world, but the argument they are used to support only requires that a first-mover advantage be important, and that is only weakly dependent on assumptions about the speed with which AI will improve.
The risk of a large fraction of humanity being killed by a super-volcano is apparently higher than the risk from asteroids, but volcanoes have more of a limit on their maximum size, so they appear to pose less risk of human extinction.
The risks of asteroids and comets can’t be handled as well as I thought by early detection, because some dark comets can’t be detected with current technology until it’s way too late. It seems we ought to start thinking about better detection systems, which would probably require large improvements in the cost-effectiveness of space-based telescopes or other sensors.
Many of the volcano and asteroid deaths would be due to crop failures from cold weather. Since mid-ocean temperatures are more stable than land temperatures, ocean-based aquaculture would help mitigate this risk.
The climate change chapter seems much more objective and credible than what I’ve previously read on the subject, but is technical enough that it won’t be widely read, and it won’t satisfy anyone who is looking for arguments to justify their favorite policy. The best part is a list of possible instabilities which appear unlikely but which aren’t understood well enough to evaluate with any confidence.
The chapter on plagues mentions one surprising risk – better sanitation made polio more dangerous by altering the age at which it infected people. If I’d written the chapter, I’d have mentioned Ewald’s analysis of how human behavior influences the evolution of strains which are more or less virulent.
There’s good news about nuclear proliferation which has been under-reported – a fair number of countries have abandoned nuclear weapons programs, and a few have given up nuclear weapons. So if there’s any trend, it’s toward fewer countries trying to build them, and a stable number of countries possessing them. The bad news is we don’t know whether nanotechnology will change that by drastically reducing the effort needed to build them.
The chapter on totalitarianism discusses some uncomfortable tradeoffs between the benefits of some sort of world government and the harm that such government might cause. One interesting claim:

totalitarian regimes are less likely to foresee disasters, but are in some ways better-equipped to deal with disasters that they take seriously.

Sen. John Barrasso (R-WY) has introduced a bill to create prizes for carbon sequestration:

This is how it would work. There would be four different levels of prizes. The first level award would go to the public or private entity that could first demonstrate a design for a successful technology that could remove and permanently sequester greenhouse gases. Second, there would be a prize for a lab scale demonstration project of the technology that accomplishes the same thing. Third, there would be an award for demonstrating the technology to remove and permanently sequester greenhouse gases that is operational at a larger, working model scale. Finally, there would be an award for whoever could demonstrate the technology to remove and permanently sequester greenhouse gases on a commercially viable scale.

It sounds like many important details would be decided by a federal commission. The prizes could have many of the promises and drawbacks of the Virgin Earth Challenge.
The first three levels of the prizes appear to create incentives to produce designs with little regard for commercial feasibility. If those prizes are large, they might end up rewarding technologies that are too expensive to be worth using. Small prizes might avoid this problem, since inventors wouldn’t want to spend much money to win them, but I’d still have concerns about inventors paying little attention to reliability and maintenance costs. The fourth level appears more promising.
Bureaucrats are likely to put more effort into clarifying prize rules than the Virgin Earth group did. But it’s unclear whether any approaches that a government agency is likely to recommend will do a decent job of translating the “commercially viable” goal into a clear enough set of rules that inventors will be able to predict how the prizes will be awarded.
My advice for the commission, should it be created, is that it tie the prizes to actual amounts of carbon removed from the atmosphere over some pre-specified period, or to estimates of those amounts derived from a prediction market.
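As one illustration of that advice (my own sketch, not anything proposed in the bill), the payout rule could be as simple as a fixed rate per verified ton of CO2 removed over the measurement period, capped by the prize pool, with the ton count supplied either by direct measurement or by a prediction market’s estimate.

```python
# A minimal sketch (my own illustration, not from the bill) of tying the award
# to measured performance: pay a fixed rate per ton of CO2 removed and verified
# over a pre-specified period, capped by the prize pool.
def sequestration_payout(tons_removed: float,
                         dollars_per_ton: float = 50.0,
                         prize_pool: float = 1e8) -> float:
    """Payout proportional to verified tons removed, up to the pool size.

    `tons_removed` could come from direct measurement, or from a prediction
    market's estimate of tons removed over the period.
    """
    return min(tons_removed * dollars_per_ton, prize_pool)

# Example: an entrant verified at 500,000 tons over the measurement period.
print(sequestration_payout(5e5))   # -> 25,000,000.0
```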
(HT Jim Manzi).

Cool It

Book review: Cool It: The Skeptical Environmentalist’s Guide to Global Warming by Bjørn Lomborg.
This book eloquently counters many popular myths about how much harm global warming is likely to cause, but is sufficiently partisan and one-sided that it will do more to polarize the debate than to resolve disagreements. Many of his criticisms of the alarmists are correct. Reading this book in combination with writings of his opponents will give you a much better perspective than reading only one side of this debate.
Selective reporting gives the impression that global warming is causing more deaths, but Lomborg reports that warming will cause reduced deaths for the foreseeable future, mainly through reduced cold-related cardiovascular deaths. He claims warming won’t cause a net increase in deaths until at least 2200, but I expect that uncertainty about medical innovation makes predictions more than a few decades ahead not credible. Even for the next few decades, he exaggerates the evidence. What he calls “The first complete survey for the world” covers the entire world, but only tries to model the effects of the 6 types of disease for which global information is available, and its authors clearly deny knowing whether other diseases have important effects. Lomborg claims there will be 1.4 million fewer deaths in 2050 due to global warming, but he seems to get that number from the effects of only two disease types, whereas the paper he cites predicts 849,252 fewer deaths from 6 disease types.
He is often too dismissive of the possibility of technological improvements. For instance, he claims that sticking to Kyoto commitments through the 21st century “would get ever harder”, yet I can imagine a variety of ways it could get easier. He mentions specific dollar costs for complying with Kyoto for a century without hinting at the large uncertainties in those guesses.
In one place he analyzes Kyoto as if it were a foreign aid program, and says that it would do 16 cents of good in developing countries for every dollar spent. I assume he considers this an argument against Kyoto. Since 16 cents might help a person in a developing country more than a dollar helps a person in a developed country, and there is some reason to suspect that few large aid programs are more than 16 percent efficient, it could easily be considered a weak argument for Kyoto.
Sometimes he’s blatantly careless, such as when he talks about “reducing [hurricane] damage by almost 500 percent”.
He appropriately criticizes the Stern report’s use of a suspiciously low discount rate (which has major implications for how much we should do now), but he doesn’t provide a clear explanation of that issue, nor does he say what his preferred model uses (a review on Salon says it uses a 6 percent discount rate, which I suspect only makes sense if we assume a higher economic growth rate than most experts expect).
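To make the discount rate issue concrete, here is a worked example with illustrative numbers only: the present value of $1 trillion of climate damage occurring 100 years from now, under a Stern-like low rate of roughly 1.4 percent versus the roughly 6 percent rate the Salon review attributes to Lomborg’s preferred model. The choice of rate changes the answer by about a factor of 80, which is why it dominates how much the models say we should spend now.

```python
# Worked example (illustrative numbers only) of how the discount rate choice
# changes the present value of distant-future climate damage.
def present_value(future_damage: float, rate: float, years: int) -> float:
    return future_damage / (1.0 + rate) ** years

damage, horizon = 1e12, 100   # $1 trillion of damage, 100 years from now
for rate in (0.014, 0.06):    # Stern-like low rate vs. a 6 percent rate
    print(f"discount rate {rate:.1%}: "
          f"present value ${present_value(damage, rate, horizon):,.0f}")
```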

While there is much debate about whether we should respond to global warming by taxing CO2 emissions at $2 per ton or $140 per ton, there are countries with policies that are roughly equivalent to rewarding CO2 emissions at levels that appear to exceed $100 per ton.
I’m referring to gasoline pricing rules that keep gas prices at the local consumer level way below the global market price. Venezuela is a dramatic example; China’s prices are only modestly below the market price, but that applies to an important fraction of the world’s gas use; and last I heard Iran and Iraq were practically giving away whatever gas became available to their consumers (it’s sad that Iraq was invaded by a government that didn’t think freer markets would help Iraq). These policies would be wasteful even if CO2 emissions were good (e.g. due to causing long lines to get gas).
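The arithmetic behind the “$100 per ton” comparison is simple (the subsidy levels below are hypothetical; the emission factor is the standard figure of roughly 8.9 kg of CO2 per gallon of gasoline burned): holding the pump price $1 per gallon below the world price works out to rewarding emissions at about $112 per ton, and Venezuela-scale discounts imply far more.

```python
# Rough arithmetic (standard emission factor, hypothetical subsidy levels)
# behind the claim that cheap-gas policies act like a reward for CO2 emissions.
KG_CO2_PER_GALLON = 8.9   # approximate CO2 from burning a gallon of gasoline

def implied_reward_per_ton(subsidy_per_gallon: float) -> float:
    """Dollars effectively paid per metric ton of CO2 emitted."""
    return subsidy_per_gallon / (KG_CO2_PER_GALLON / 1000.0)

print(f"${implied_reward_per_ton(1.0):.0f} per ton CO2")  # ~$112/ton for a $1/gallon subsidy
print(f"${implied_reward_per_ton(3.0):.0f} per ton CO2")  # a deeper, Venezuela-style discount
```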
Even if those low prices are implemented in a way that helps the poor in those countries, they cause nontrivial increases in gas demand which drive up gas prices in the rest of the world (at least in the short run; the long-run price changes depend on the cost of finding new oil).
People who care enough about global warming to make modest efforts to slow it should put pressure on these countries to charge market prices for gas. In addition to traditional techniques, one obvious response is to exploit the inherent instability of these price differentials by giving as much aid and protection as is practical to the heroic businessmen who smuggle gas from low price regions to regions where market prices prevail. If governments of Europe and the U.S. cared enough about global warming, they could probably enable enough smuggling to measurably reduce the waste of gas in many smaller countries. But that would probably still leave significant waste in China, and I’m not sure what can be done about that.

One way to find evidence concerning whether a politicized theory is being exaggerated or being stated overconfidently is to look at how experts from a very different worldview thought about the theory. I had been under the impression that theories about global warming were recent enough that it was hard to find people who studied it without being subject to biases connected with recent fads in environmental politics.
I now see that Arrhenius predicted in 1896 that human activity would cause global warming, and estimated a sensitivity of world temperature to CO2 levels that differs from current estimates by about a factor of 2. The uncertainty in current estimates is large enough that they disagree with Arrhenius by a surprisingly small amount. This increases my confidence in that part of global warming theory.
Arrhenius disagreed with modern theorists about how fast CO2 levels would rise (he thought it would take 3000 years for them to rise 50% or to double, depending on whether you believe Nature or Wikipedia), and about whether warming is good. That slightly weakens my confidence in forecasts of CO2 levels and of harm from warming (although as a Swede, Arrhenius might have overweighted the benefits of warming in arctic regions).
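The “factor of 2” comparison can be checked with the standard logarithmic relation between CO2 and warming. The numbers below are assumptions on my part: roughly 5-6°C per doubling is the figure commonly attributed to Arrhenius’s 1896 paper, and 3°C per doubling is a common modern central estimate.

```python
# Rough comparison (commonly cited values, stated here as assumptions) of
# Arrhenius's 1896 climate sensitivity estimate with a modern central estimate,
# using the standard relation: warming = sensitivity * log2(CO2 / CO2_initial).
import math

ARRHENIUS_SENSITIVITY = 5.5  # deg C per CO2 doubling, roughly his 1896 figure
MODERN_SENSITIVITY = 3.0     # deg C per doubling, a common modern central estimate

def warming(sensitivity: float, co2_ratio: float) -> float:
    return sensitivity * math.log2(co2_ratio)

for ratio in (1.5, 2.0):
    print(f"CO2 x{ratio}: Arrhenius {warming(ARRHENIUS_SENSITIVITY, ratio):.1f} C, "
          f"modern {warming(MODERN_SENSITIVITY, ratio):.1f} C")
```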

The Virgin Earth Challenge would be a great idea if we could count on it being awarded for a solution that resembles the headlines it’s been getting (e.g. $25M Bounty Offered for Global Warming Fix).
But the history of such prizes suggests that even for simple goals, describing the terms of a prize so that inventors can predict what will qualify for the award is nontrivial (e.g. see the longitude prize, where the winner appeared to have clearly met the conditions specified, yet wasn’t awarded the prize for 12 years because the solution didn’t meet the preconceptions of the board that judged it).
Anyone who looks past the media coverage and finds the terms of the challenge will see that the criteria are intended to be a good deal more complex than those of the longitude prize, that there’s plenty of ambiguity in them (although it’s possible they plan to make them clearer later), and that the panel of judges could be considered to be less impartial than the ideal one might hope for.
The criterion of “commercial viability” tends to suggest that a solution that required additional charitable donations to implement might be rejected even if there were donors who thought it worthy of funding, yet I doubt that’s consistent with the intent behind the prize. This ambiguity looks like simple carelessness.
The criterion of “harmful effects and/or other incidental consequences of the solution” represents a more disturbing ambiguity. Suppose I create nanobots which spread throughout the biosphere and sequester CO2 in a manner that offends some environmentalists’ feelings that the biosphere ought to be left in its natural state, but otherwise does no harm. How would these feelings be factored into the decision about whether to award the prize? Not to mention minor ambiguities such as whether making a coal worker’s job obsolete or reducing crop yields due to reduced atmospheric CO2 counts as a harmful effect.
I invite everyone who thinks Branson and Gore are serious about paying out the prize to contact them and ask that they clarify their criteria.

Nick Szabo has a very good post on global warming.
I have one complaint: he says “acid rain in the 1970s and 80s was a scare almost as big [as] global warming is today”. I remember the acid rain concerns of the 1970s well enough to know that this is a considerable exaggeration. Acid rain alarmists said a lot about the potential harm to forests and statues, but to the extent they talked about measurable harm to people, it was a good deal vaguer than with global warming, and if it could have been quantified it would probably have implied at least an order of magnitude less measurable harm to people than what mainstream academics are suggesting global warming could cause.