All posts tagged risks

TL;DR: loss of topsoil is a problem, but not a crisis. I’m unsure whether fixing it qualifies as a great opportunity for mitigating global warming.

This post will loosely resemble a review of the book Dirt: The Erosion of Civilizations, by David R. Montgomery. If you want a real review, see Colby Moorberg’s review on Goodreads.

Depletion of topsoil has been an important cause of the collapse of large civilizations. Farmers are often tempted to maximize this year’s production at the cost of declining yields in later years. Even when declining yields leave an empire unable to feed everyone, farmers are unwilling to adopt techniques that restore the topsoil, because doing so would temporarily decrease production further. The Mayan civilization seems to have experienced three cycles of soil-driven boom and bust, lasting around 1000 years per cycle.


From The problem with rapid Covid testing, Mayank Gupta writes:

The absolute number of false positives would rise dramatically under slightly inaccurate, broad surveillance testing. At least initially, the number of people going to the doctor to ask what to do would also rise. One can imagine if doctors truly flubbed and didn’t know how to advise patients accurately, a lot of individual patients would lose trust in the medical system (testing, doctors, or both). The consequence of this would be more resistance to health public policy measures in the future.

For a reminder of why rapid testing is valuable, see Alex Tabarrok. Note also the evidence from the NBA that people who need useful tests can be more innovative than the medical system.

This seems like the tip of an important iceberg.


I recently made a bet with Robin Hanson that US COVID-19 deaths will be less than 250,000 by Jan 1, 2022 (details hiding in these Facebook comments).

I gave a few hints here about my reasons for optimism (based on healthweather.us). I’ll add some more thoughts here, but won’t try to fully explain my intuitions. Note that these are more carefully thought out than my reasoning at the time of the bet, and the evidence has been steadily improving between then and now.

First, a quick sanity check. Metaculus has been estimating about 2 million deaths from COVID-19 worldwide this year. It also predicts that diagnosed cases will decline each quarter from this quarter through at least Q4 2020, and stabilize in Q1 2021 at 1/10 the rate of the current quarter, suggesting that most deaths will occur this year.

U.S. population is roughly 4% of the world, suggesting a bit over 80k deaths if the U.S. is fairly average. The U.S. looks about a factor of 5 worse than average as measured by currently confirmed deaths, but a bit of that is due to a few countries doing a poorer job of confirming the deaths that happen (Iran?), and more importantly, the Metaculus forecasts likely anticipate that countries such as India, Brazil, and Indonesia will eventually have a much higher fraction of the world’s deaths than is the case now. So I’m fairly comfortable with betting that the U.S. will end up well within a factor of 3 of the world per capita average.
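The sanity check above can be written out explicitly. A minimal sketch, using only the rough figures already cited in the post (Metaculus’s ~2 million worldwide estimate, the U.S. at ~4% of world population, and the factor-of-3 bound):

```python
# Rough sanity-check arithmetic; all inputs are the post's estimates,
# not authoritative data.
world_deaths_2020 = 2_000_000   # Metaculus worldwide estimate
us_pop_share = 0.04             # U.S. is roughly 4% of world population

# If the U.S. were exactly average per capita:
us_average = world_deaths_2020 * us_pop_share
print(us_average)               # 80000.0

# Even if the U.S. does 3x worse per capita than the world average:
us_pessimistic = us_average * 3
print(us_pessimistic)           # 240000.0 -- still under the 250k bet line
```

This is why "well within a factor of 3 of the world per capita average" is enough to win the bet: three times the average-case 80k is still below 250k.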

I was about 75% confident in late March that R0 had dropped below 1, and my confidence has been slowly increasing since then.

Note a contrary opinion here. It appears to produce results that are slightly pessimistic, due to assuming that testing effort hasn’t increased.

Yet even if it’s currently a little bit above 1, there’s still a fair amount of reason for hope.

Many people have been talking as if strict shelter-in-place rules (lockdowns) are the main tools for keeping R0 < 1. That’s a misleading half-truth. Something like those rules may have been critical last month for generating quick coordination around some drastic and urgent changes. But the best longer-term strategies are less drastic and more effective.

One obstacle to lowering R0 is that hospitals are a source of infection. I’m pretty sure that will be solved, on a lousy schedule that’s unconnected with the lockdowns.

Within-home transmission likely has a significant effect on R0. Lockdowns didn’t cause any immediate drop in that transmission, but that transmission drops a good deal as the fraction of people who have been staying at home for 2+ weeks rises, so R0 is likely declining now due to that effect.

Most buildings that are open to the public should soon require good masks for anyone to enter. It wasn’t feasible to include such a rule in the initial lockdown orders, but there’s a steady move toward following that rule.

I expect those 3 changes to reduce R0 at least 20%, and probably more, between late March and late April.

Robin is right to be concerned about the competence of institutions that we relied on to prevent the pandemic. Yet I see modest reasons for optimism that the U.S. will mostly use different institutions for test and trace: Google, Apple, LabCorp, etc., and they’re moderately competent. Also, most institutions are more competent at handling problems which they recall vividly than they are at handling problems which have been insignificant in the lifetimes of most executives.

We can be pretty sure based on China’s results that R0 < 1 is not a narrow target. Wuhan got R0 lower than the key threshold by a factor of something like two. They did that in roughly the worst weather conditions – most of the time, warmer (or occasionally colder) weather will modestly reduce R0. So we’ll be able to survive a fair amount of incompetence.

But there’s still plenty of uncertainty about whether next week’s R0 will be just barely acceptable, or comfortably below 1.
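To see why the distance below 1 matters, here’s a toy branching-process sketch (my illustration with made-up numbers, not the author’s model): each generation of new infections is roughly the previous generation multiplied by R0.

```python
# Toy branching-process illustration of R0 thresholds.
# Hypothetical numbers; real transmission has far more structure than this.
def cases_after(initial, r, generations):
    """Expected new cases per generation after `generations` steps."""
    return initial * r ** generations

initial = 10_000
for r in (1.05, 0.95, 0.5):
    print(r, round(cases_after(initial, r, 10)))
```

With these numbers, an R0 that is "just barely acceptable" at 0.95 still leaves roughly 6,000 cases per generation after 10 generations, while a Wuhan-style factor-of-two margin (R0 ≈ 0.5) leaves about 10. That asymmetry is why a comfortable margin below 1 can absorb a fair amount of incompetence, while a barely-below-1 value cannot.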

Deliberate Infection?

The challenges of adapting to the most likely scenarios took nearly all of my attention in March. So I had no remaining slack to adequately prepare for a scenario that looked unlikely to me, but which looked likely to Robin. For one thing, I ought to have evaluated the possibility that money will be significantly more valuable to me if Robin wins the bet than if he loses.

It is certainly possible to imagine circumstances where deliberate coronavirus infection is quite valuable. But it looks rather low value in the scenario I think we’re in.

I don’t have much hope of getting a sensible program of deliberate infection in a society that couldn’t even stockpile facemasks in February.

I also see only a small chance that talking about deliberate infection now will help in a future pandemic. I expect this to be humanity’s last major natural pandemic (note: I’m too lazy today to evaluate the relevance of bioterrorist risks). I don’t know exactly how we’ll deal with future pandemics, but the current crisis is likely to speed up some approaches that could prevent a future virus from becoming a crisis. Some conjectures about what might be possible within a decade:

  • Better approaches to vaccination, such that vaccines could become widely available within a week of identifying the virus.
  • Medical tricorders that are as ubiquitous as phones, and which can be quickly updated to detect any new virus.

Still, I do think deliberate infection should be tried in a few places, in case the situation is as desperate as Robin believes. I’ll suggest Australia as a top choice. It has weather-related reasons for worrying that the peak will come in a few months. It has substantial tuberculosis vaccination, which may reduce the death rate among infected people by a large margin (see Correlation between universal BCG vaccination policy and reduced morbidity and mortality for COVID-19: an epidemiological study).

Note that tuberculosis vaccination looks a good deal more promising than deliberate infection, so it should be getting more attention.

Other odds and ends

Some of the concerns about a lasting economic slowdown are due to expectations that the restaurant industry will be shut down for years. I expect many other businesses to reopen within months with strict requirements that everyone wear masks, but it’s rather hard to eat while wearing a mask. So I see a large uncertainty about which year the restaurant business will return to normal. Yet I also don’t see people who used to rely on restaurants putting up with cooking at home for long. I see plenty of room for improvement in providing restaurant-like food to the home.

Current apps for delivery from restaurants seem like clumsy attempts to tack on a service as an afterthought. There’s plenty of room to redesign food preparation around home delivery, in ways that more efficiently and conveniently handle more of the volume that restaurants were handling before.

We have significant unemployment among restaurant workers, combined with food being hard to acquire for reasons that often boil down to labor shortages (plus rules against price gouging). That’s not the kind of disruption that causes a lasting depression. The widespread opposition to price gouging is slowing down the adjustments a bit, but even so, it shouldn’t be long before unemployed food service workers are redeployed into whatever roles fit this year’s food preparation and delivery needs.

Finally, what should we think about this news: SuperCom Ships Coronavirus Quarantine Compliance Technology for Immediate Pilot?

The stock market crash of the past two weeks looks like an over-reaction to COVID-19.

Is COVID-19 really the reason for the crash? I can’t find any other news that would explain the timing and which stocks were hit hardest.

Here’s a sample of some of the harder hit stocks, all travel-related (Friday’s close compared to the highest close in February):

  • -37% Hertz (HTZ)
  • -36% Avis (CAR)
  • -29% World Fuel Services Corp (INT)
  • -24% Carnival (cruise line) (CCL)
  • -22% Delta Air Lines (DAL)
  • (compare to the S&P 500: -12.4%)
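To make the comparison explicit, a small sketch (declines copied from the list above; the per-stock excess over the index is simple arithmetic):

```python
# Declines from February highs, as listed above (percentage points).
declines = {"HTZ": -37, "CAR": -36, "INT": -29, "CCL": -24, "DAL": -22}
sp500 = -12.4  # S&P 500 over the same period

# How much worse each travel-related stock did than the index:
for ticker, pct in declines.items():
    print(f"{ticker}: {pct - sp500:+.1f} points vs S&P 500")
```

Every stock on the list fell at least about 10 percentage points further than the index, which is what makes the travel-specific pattern stand out from a general increase in risk aversion.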

It is, of course, possible that the market was in a mild bubble in early February, and the virus merely triggered a return to sanity. There were enough high-priced stocks that I’ll guess that explains a little of what happened. Hertz and Avis are perhaps high-risk stocks due to the risks associated with the upcoming transition to robocars. But the others that I listed did not at all fit my stereotype of overpriced stocks.

And the stocks that I had been thinking were overpriced, in industries that don’t look to be especially hurt by the virus, declined roughly in line with the market.

Outside of travel-related stocks, it mostly looks like a general shift in preferences to more cash, and away from stock. I.e. a general increase in risk aversion.

The gold market seems confused as to which direction a pandemic should move it. I share that confusion.

What scenario could explain the decline? Maybe a two month shutdown of 90+% of U.S. air travel? A multi-year reduction in travel of 10%? It would take something like that for the market reaction to make much sense. Yet I’d bet at roughly 10:1 odds against any one of those scenarios happening.

Metaculus is currently predicting 195k COVID-19 deaths this year.

Metaculus forecast trends ought to look a good deal like random walks, yet the charts I see there look more like exponential growth.

Metaculus is likely to be a more objective source of information than the news media storyteller industry or social media. But it’s likely more susceptible to selection effects and hype than are markets that have lots of money at stake. (Metaculus has token prizes, structured in a way that may encourage more extreme bets than a regular market would).

None of this implies much about whether other reactions to the virus are sensible. There’s a much different asymmetry between getting sick versus being paranoid than there is between losing money due to a pandemic versus losing money due to selling on a false alarm.

I’ve got about a month’s supply of food, but that’s my normal preparation for a variety of disasters. I have no special insights about whether the current risks justify staying home.

P.S. Chinese stocks are supporting the view that the situation in China has improved over the past month.

Eric Drexler has published a book-length paper on AI risk, describing an approach that he calls Comprehensive AI Services (CAIS).

His primary goal seems to be reframing AI risk discussions to use a rather different paradigm than the one that Nick Bostrom and Eliezer Yudkowsky have been promoting. (There isn’t yet any paradigm that’s widely accepted, so this isn’t a Kuhnian paradigm shift; it’s better characterized as an amorphous field that is struggling to establish its first paradigm). Dueling paradigms seems to be the best that the AI safety field can manage to achieve for now.

I’ll start by mentioning some important claims that Drexler doesn’t dispute:

  • an intelligence explosion might happen somewhat suddenly, in the fairly near future;
  • it’s hard to reliably align an AI’s values with human values;
  • recursive self-improvement, as imagined by Bostrom / Yudkowsky, would pose significant dangers.

Drexler likely disagrees about some of the claims made by Bostrom / Yudkowsky on those points, but he shares enough of their concerns about them that those disagreements don’t explain why Drexler approaches AI safety differently. (Drexler is more cautious than most writers about making any predictions concerning these three claims).

CAIS isn’t a full solution to AI risks. Instead, it’s better thought of as an attempt to reduce the risk of world conquest by the first AGI that reaches some threshold, to preserve existing corrigibility somewhat past human-level AI, and to postpone the need for a permanent solution until we have more intelligence.


Descriptions of AI-relevant ontological crises typically choose examples where it seems moderately obvious how humans would want to resolve the crises. I describe here a scenario where I don’t know how I would want to resolve the crisis.

I will incidentally express distaste for some philosophical beliefs.

Suppose a powerful AI is programmed to have an ethical system with a version of the person-affecting view. A version which says only persons who exist are morally relevant, and “exist” only refers to the present time. [Note that the most sophisticated advocates of the person-affecting view are willing to treat future people as real, and only object to comparing those people to other possible futures where those people don’t exist.]

Suppose also that it is programmed by someone who thinks in Newtonian models. Then something happens which prevents the programmer from correcting any flaws in the AI. (For simplicity, I’ll say the programmer dies, and the AI was programmed to accept changes to its ethical system only from the programmer.)

What happens when the AI tries to make ethical decisions about people in distant galaxies (hereinafter “distant people”) using a model of the universe that works like relativity?


Book review: Artificial Intelligence Safety and Security, by Roman V. Yampolskiy.

This is a collection of papers, with highly varying topics, quality, and importance.

Many of the papers focus on risks that are specific to superintelligence, some assuming that a single AI will take over the world, and some assuming that there will be many AIs of roughly equal power. Others focus on problems that are associated with current AI programs.

I’ve tried to arrange my comments on individual papers in roughly descending order of how important the papers look for addressing the largest AI-related risks, while also sometimes putting similar topics in one group. The result feels a little more organized than the book, but I worry that the papers are too dissimilar to be usefully grouped. I’ve ignored some of the less important papers.

The book’s attempt at organizing the papers consists of dividing them into “Concerns of Luminaries” and “Responses of Scholars”. Alas, I see few signs that many of the authors are even aware of what the other authors have written, much less that the later papers are attempts at responding to the earlier papers. It looks like the papers are mainly arranged in order of when they were written. There’s a modest cluster of authors who agree enough with Bostrom to constitute a single scientific paradigm, but half the papers demonstrate about as much of a consensus on what topic they’re discussing as I would expect to get from asking medieval peasants about airplane safety.


Book review: Warnings: Finding Cassandras to Stop Catastrophes, by Richard A. Clarke and R.P. Eddy.

This book is a moderately addictive, softcore version of outrage porn. Only small portions of the book attempt to describe how to recognize valuable warnings and ignore the rest. Large parts of the book seem written mainly to tell us which of the people portrayed we should be outraged at, and which we should praise.

Normally I wouldn’t get around to finishing and reviewing a book with so little informational value, but this one was entertaining enough that I couldn’t stop.

The authors show above-average competence at selecting which warnings to investigate, but they don’t manage to articulate how they accomplished that.

I’ll start with warnings on which I have the most expertise. I’ll focus a majority of my review on their advice for deciding which warnings matter, even though that may give the false impression that much of the book is about such advice.

Book review: Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, by David Denkenberger and Joshua M. Pearce.

I have very mixed feelings about this book.

It discusses some moderately unlikely risks – scenarios where most crops fail worldwide for several years, due to inadequate sunlight.

It’s hard to feel emotionally satisfied about a tolerable but uncomfortable response to disasters, when ideally we’d prevent those disasters in the first place. And the disasters seem sufficiently improbable that I don’t feel comfortable thinking frequently about them. But we don’t yet have a foolproof way of preventing catastrophic climate changes, and there are things we can do to survive them. So logic tells me that we ought to devote a few resources to preparing.

The authors sketch a set of strategies which could conceivably ensure that nobody starves (Wikipedia has a good summary). There might even be a bit of room for mistakes, but not much.

The book focuses on the technical problems, with the hope that others will solve the political problems. This makes some sense, as the feasibility of various political solutions is very different if the best political strategy saves 95% of people than if it saves 30%.

It’s a bit disturbing that this seems to be the most expert analysis available for these scenarios – the authors appear fairly competent, but seem to have done less research than I expect from a technical book. They may have made the right choice to publish early, in order to attract more support. I’m mainly disturbed by what the lack of expertise says about societal competence.

The book leaves me with lots of uncertainty about how hard it is to improve on the meager preparations that have been done so far.

For example, I expect there are a moderate number of people who know something about rapidly scaling up mushroom production. Are they already capable of handling the needed changes? Or are drastically different preparations needed? It’s hard for me to tell without developing significant expertise in growing mushrooms.

There’s probably an urgent need for a bit more preparation for extracting nutrition from ordinary leaves. In particular, I expect it to matter what kinds of leaves to use. The book mostly talks of leaves from trees, but careless people in my area might include poison hemlock leaves, with disastrous results. A small amount of advance preparation should be able to cause large reductions in this kind of mistake.

Another simple preparation that’s needed is a better awareness of where to look in a crisis. The news media in particular ought to be able to quickly find this kind of information even when they’re overwhelmed with problems.

I’m guessing that a few hundred thousand dollars of additional effort in this area would have high expected value, with strongly diminishing returns after that. I’ve donated a small amount to ALLFED, and I encourage you to donate a little bit as well.

Or, why I don’t fear the p-zombie apocalypse.

This post analyzes concerns about how evolution, in the absence of a powerful singleton, might, in the distant future, produce what Nick Bostrom calls a “Disneyland without children”. I.e. a future with many agents, whose existence we don’t value because they are missing some important human-like quality.

The most serious description of this concern is in Bostrom’s The Future of Human Evolution. Bostrom is cautious enough that it’s hard to disagree with anything he says.

Age of Em has prompted a batch of similar concerns. Scott Alexander at SlateStarCodex has one of the better discussions (see section IV of his review of Age of Em).

People sometimes sound like they want to use this worry as an excuse to oppose the age of em scenario, but it applies to just about any scenario with human-in-a-broad-sense actors. If uploading never happens, biological evolution could produce slower paths to the same problem(s) [1]. Even in the case of a singleton AI, the singleton will need to solve the tension between evolution and our desire to preserve our values, although in that scenario it’s more important to focus on how the singleton is designed.

These concerns often assume something like the age of em lasts forever. The scenario which Age of Em analyzes seems unstable, in that it’s likely to be altered by stranger-than-human intelligence. But concerns about evolution only depend on control being sufficiently decentralized that there’s doubt about whether a central government can strongly enforce rules. That situation seems sufficiently stable to be worth analyzing.

I’ll refer to this thing we care about as X (qualia? consciousness? fun?), but I expect people will disagree on what matters for quite some time. Some people will worry that X is lost in uploading, others will worry that some later optimization process will remove X from some future generation of ems.

I’ll first analyze scenarios in which X is a single feature (in the sense that it would be lost in a single step). Later, I’ll try to analyze the other extreme, where X is something that could be lost in millions of tiny steps. Neither extreme seems likely, but I expect that analyzing the extremes will illustrate the important principles.
