Book review: Doing Good Better, by William MacAskill.

This book is a simple introduction to the Effective Altruism movement.

It documents large differences in effectiveness between superficially plausible charities, and points out that recipients would benefit greatly if donors paid more attention to the results a charity produces.

How effective is the book?

Is it persuasive?

Probably yes, for a small but somewhat important fraction of the population who seriously intend to help distant strangers, but have procrastinated about informing themselves about how to do so.

Does it focus on a neglected task?

Not very neglected. It’s mildly different from similar efforts such as GiveWell’s website and Reinventing Philanthropy, in ways that will slightly reduce the effort needed to understand the basics of Effective Altruism.

Will it make people more altruistic?

Not very much. It mostly seems to assume that people have some fixed level of altruism, and focuses on improving the benefits that result from that altruism. Maybe it will modestly redirect peer pressure toward making people more altruistic.

Will it make readers more effective?

Probably. For people who haven’t given much thought to these topics, the book’s advice is a clear improvement over standard habits. It will be modestly effective at promoting a culture where charitable donations that save lives are valued more highly than donations which accomplish less.

But I see some risk that it will make people overconfident about the benefits of the book’s specific strategies. An ideal version of the book would instead inspire people to improve on the book’s analysis.

The book provides evidence that donors rarely pay attention to how much good a charity does. Yet it avoids asking why. If you pay attention, you’ll see hints that donors are motivated mainly by the desire to signal something virtuous about themselves (for example, see the book’s section on moral licensing). In spite of that, the book consistently talks as if donors have good intentions, and only need more knowledge to be better altruists.

The book is less rigorous than I had hoped. I’m unsure how much of that is due to reasonable attempts to simplify the message so that more people can understand it with minimal effort.

In a section on robustness of evidence, the book describes this “sanity check”:

“if it cost ten dollars to save a life, then we’d have to suppose that they or their family members couldn’t save up for a few weeks, or take out a loan, in order to pay for the lifesaving product.”

I find it confusing to use this as a sanity check, because it’s all too easy to imagine that many people are in desperate enough conditions that they’re spending their last dollar to avoid starvation.

The book alternates between advocating doing more good (satisficing), and advocating the most possible good (optimizing). In practice, it mostly focuses on safe ways to produce fairly good results.

The book barely mentions existential risks. If it were literally trying to advocate doing the most good possible, it would devote a lot more attention to affecting the distant future. But that’s much harder to do well than what the book does focus on (saving a few more lives in Africa over the next few years), and would involve acts of charity that have small probabilities of really large effects on people who are not yet born.

If you’re willing to spend 50-100 hours (but not more) learning how to be more effective with your altruism, then reading this book is a good start.

But people who are more ambitious ought to be able to make a bigger difference to the world. I encourage those people to skip this book, and focus more on analyzing existential risks.

The stock market reaction to the election was quite strange.

From the first debate through Tuesday, S&P 500 futures showed modest signs of believing that Trump was worse for the market than Clinton. This Wolfers and Zitzewitz study shows some of the relevant evidence.

On Tuesday evening, I followed the futures market and the prediction markets moderately closely, and it looked like there was a very clear correlation between those two markets, strongly suggesting the S&P 500 would be 6 to 8 percent lower under Trump than under Clinton. This correlation did not surprise me.
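To illustrate the kind of inference involved, here's a minimal sketch (with made-up prices, not the actual Tuesday-night quotes) of how one could back out the implied Trump discount by regressing futures prices on the prediction markets' probability of a Trump win:

```python
# Hypothetical illustration only: the probabilities and futures prices below
# are invented, not the actual election-night data.
import numpy as np

trump_prob = np.array([0.20, 0.35, 0.55, 0.75, 0.90])   # prediction-market P(Trump wins)
futures    = np.array([2155, 2130, 2100, 2065, 2045])   # S&P 500 futures at the same times

# Fit futures = intercept + slope * P(Trump); the endpoints give the implied
# market levels under a certain Clinton win (P=0) and a certain Trump win (P=1).
slope, intercept = np.polyfit(trump_prob, futures, 1)
clinton_level = intercept
trump_level = intercept + slope
discount = (clinton_level - trump_level) / clinton_level
print(f"Implied Trump discount: {discount:.1%}")   # about 7% with these made-up numbers
```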

This morning, the S&P 500 prices said the market had been just kidding last night, and that Trump is neutral or slightly good for the market.

Part of this discrepancy is presumably due to the difference between regular trading hours and after hours trading. The clearest evidence for market dislike of Trump came from after hours trading, when the most sophisticated traders are off-duty. I’ve been vaguely aware that after hours markets are less efficiently priced. But this appears to involve at least a few hundred million dollars of potential profit, which somewhat stretches the limit of how inefficient the markets could plausibly be.

I see one report of Carl Icahn claiming:

I thought it was absurd that the market, the S&P was down 100 points on Trump getting elected … but I couldn’t put more than about a billion dollars to work

I’m unclear what constrained him, but it sure looked like the market could have absorbed plenty more buying while I was watching (up to 10pm PST), so I’ll guess he was more constrained by something related to him being at a party.

But even if the best U.S. traders were too distracted to make the markets efficient, that leaves me puzzled about Asian markets, which were down almost as much as the U.S. market during the middle of the Asian day.

So it’s hard to avoid the conclusion that the market either made a big irrational move, or was reacting to news whose importance I can’t recognize.

I don’t have a strong opinion on which of the market reactions was correct. My intuition says that a market decline of anywhere from 1% to 5% would have been sensible, and I’ve made a few trades reflecting that opinion. I expect that market reactions to news tend to get more rational over time, so I’m now giving a fair amount of weight to the possibility that Trump won’t affect stocks much.

I’ve substantially reduced my anxiety over the past 5-10 years.

Many of the important steps along that path look easy in hindsight, yet the overall goal looked sufficiently hard prospectively that I usually assumed it wasn’t possible. I only ended up making progress by focusing on related goals.

In this post, I’ll mainly focus on problems related to general social anxiety among introverted nerds. It will probably be much less useful to others.

In particular, I expect it doesn’t apply very well to ADHD-related problems, and I have little idea how well it applies to the results of specific PTSD-type trauma.

It should be slightly useful for anxiety over politicians who are making America grate again. But you’re probably fooling yourself if you blame many of your problems on distant strangers.

Trump: Make America Grate Again!

Continue Reading

Book review: The Vital Question: Energy, Evolution, and the Origins of Complex Life, by Nick Lane.

This book describes a partial theory of how life initially evolved, followed by a more detailed theory of how eukaryotes evolved.

Lane claims the hardest step in evolving complex life was the development of complex eukaryotic cells. Many traits such as eyes and wings evolved multiple times. Yet eukaryotes have many traits which evolved exactly once (including mitochondria, sex, and nuclear membranes).

Eukaryotes apparently originated in a single act of an archaeon engulfing a bacterium. The result wasn’t very stable, and needed to quickly evolve (i.e. probably within a few million years) a sophisticated nucleus, plus sexual reproduction.

Only organisms that go through these steps will be able to evolve a more complex genome than bacteria do. This suggests that complex life is rare outside of earth, although simple life may be common.

The book talks a lot about mitochondrial DNA, and makes some related claims about aging.

Cells have a threshold for apoptosis which responds to the effects of poor mitochondrial DNA, killing weak embryos before they consume many parental resources. Lane sees evolution making important tradeoffs, with species that have intense energy demands (such as most birds) setting their thresholds high, and more ordinary species (e.g. rats) setting the threshold lower. This tradeoff causes less age-related damage in birds, at the cost of lower fertility.

Lane claims that the DNA needs to be close to the mitochondria in order to make quick decisions. I found this confusing until I checked Wikipedia and figured out it probably refers to the CoRR hypothesis. I’m still confused, but at least now I can attribute the confusion to the topic being hard. Aubrey de Grey’s criticism of CoRR suggests there’s a consensus that CoRR has problems, and the main confusion revolves around the credibility of competing hypotheses.

Lane is quite pessimistic about attempts to cure aging. Only a small part of that disagreement with Aubrey can be explained by the modest differences in their scientific hypotheses. Much of the difference seems to come from Lane’s focus on doing science, versus Aubrey’s focus on engineering. Lane keeps pointing out (correctly) that cells are really complex and finely tuned. Yet Lane is well aware that evolution makes many changes that affect aging in spite of the complexity. I suspect he’s too focused on the inadequacy of typical bioengineering to imagine really good engineering.

Some less relevant tidbits include:

  • why vibrant plumage in male birds may be due to females being heterogametic
  • why male mammals age faster than females

Many of Lane’s ideas are controversial, and only weakly supported by the evidence. But given the difficulty of getting good evidence on these topics, that still represents progress.

The book is pretty dense, and requires some knowledge of biochemistry. It has many ideas and evidence that were developed since I last looked into this subject. I expect to forget many of those ideas fairly quickly. The book is worth reading if you have enough free time, but understanding these topics does not feel vital.

Book review: Notes on a New Philosophy of Empirical Science (Draft Version), by Daniel Burfoot.

Standard views of science focus on comparing theories by finding examples where they make differing predictions, and rejecting the theory that made worse predictions.

Burfoot describes a better view of science, called the Compression Rate Method (CRM), which replaces the “make prediction” step with “make a compression program”, and compares theories by how much they compress a standard (large) database.

These views of science produce mostly equivalent results(!), but CRM provides a better perspective.
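To make the comparison mechanism concrete, here's a toy sketch (my generic MDL-style illustration, not Burfoot's actual tooling): two candidate theories of the same dataset are scored by how small the data becomes once each theory's claimed structure is subtracted out and the remainder is compressed.

```python
# Toy illustration of compression-based theory comparison (not Burfoot's code).
import bz2
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(10_000)
# A synthetic "database": a sinusoid plus a little noise.
data = 3.0 * np.sin(t / 50.0) + rng.normal(0, 0.2, size=t.size)

def code_length(residuals: np.ndarray) -> int:
    """Bytes needed to store the quantized residuals a theory leaves unexplained."""
    quantized = np.round(residuals * 100).astype(np.int16)
    return len(bz2.compress(quantized.tobytes()))

# Theory A: the data is just noise around a constant.
residuals_a = data - data.mean()
# Theory B: the data is a sinusoid plus noise.
residuals_b = data - 3.0 * np.sin(t / 50.0)

print("Theory A code length:", code_length(residuals_a), "bytes")
print("Theory B code length:", code_length(residuals_b), "bytes")
# Theory B should win (a shorter total description, once the model itself is
# counted), because it captures the real structure in the data.
```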

Machine Learning (ML) is potentially science, and this book focuses on how ML will be improved by viewing its problems through the lens of CRM. Burfoot complains about the toolkit mentality of traditional ML research, arguing that the CRM approach will turn ML into an empirical science.

This should generate a Kuhnian paradigm shift in ML, with more objective measures of the research quality than any branch of science has achieved so far.

Burfoot focuses on compression as encoding empirical knowledge of specific databases / domains. He rejects the standard goal of a general-purpose compression tool. Instead, he proposes creating compression algorithms that are specialized for each type of database, to reflect what we know about topics (such as images of cars) that are important to us.
Continue Reading

MIRI has produced a potentially important result (called Garrabrant induction) for dealing with uncertainty about logical facts.

The paper is somewhat hard for non-mathematicians to read. This video provides an easier overview, and more context.

It uses prediction markets! “It’s a financial solution to the computer science problem of metamathematics”.

It shows that we can evade disturbing conclusions such as Gödel incompleteness and the liar paradox, by expecting to only be very confident about logically deducible facts (as opposed to being mathematically certain). That's similar to the difference between treating beliefs about empirical facts as probabilities, as opposed to Boolean values.

I’m somewhat skeptical that it will have an important effect on AI safety, but my intuition says it will produce enough benefits somewhere that it will become at least as famous as Pearl’s work on causality.

Book review: The Moral Economy: Why Good Incentives Are No Substitute for Good Citizens, by Samuel Bowles.

This book has a strange mixture of realism and idealism.

It focuses on two competing models: the standard economics model in which people act in purely self-interested ways, and a more complex model in which people are influenced by context to act either altruistically or selfishly.

The stereotypical example comes from the semi-famous Haifa daycare experiment, where daycare centers started fining parents for being late to pick up children, and the parents responded by being later.

The first half of the book is a somewhat tedious description of ideas that seem almost obvious enough to be classified as common sense. He points out that the economist’s model is a simplification that is useful for some purposes, yet it’s not too hard to find cases where it makes the wrong prediction about how people will respond to incentives.

That happens because society provides weak pressures that produce cooperation under some conditions, and because financial incentives send messages that influence whether people want to cooperate. That is, the parents appear to have previously felt obligated to be somewhat punctual, but then inferred from the fines that it was ok to be late as long as they paid the price.[*]

The book advocates more realism on this specific issue. But it’s pretty jarring to compare that to the idealistic view the author takes on similar topics, such as acquiring evidence of how people react, or modeling politicians. He treats the Legislator (capitalized like that) as a very objective, well informed, and altruistic philosopher. That model may sometimes be useful, but I’ll bet that, on average, it produces worse predictions about legislators’ behavior than does the economist’s model of a self-interested legislator.

The book becomes more interesting around chapter V, when it analyzes the somewhat paradoxical conclusion that markets sometimes make people more selfish, yet cultures that have more experience with markets tend to cooperate more.

He isn’t able to fully explain that, but he makes some interesting progress. One factor that’s important to focus on is the difference between complete and incomplete contracts. Complete contracts describe everything a buyer might need to know about a product or service. An example of an incomplete contract would be an agreement to hire a lawyer to defend me – I don’t expect the lawyer to specify how good a defense to expect.

Complete contracts enable people to trade without needing to trust the seller, which can lead to highly selfish attitudes. Incomplete contracts lead to the creation of trust between participants, because having frequent transactions depends on some implicit cooperation.

The book ends by promoting the “new” idea that policy ought to aim for making people be good. But it’s unclear who disagrees with that idea. Economists sometimes sound like they disagree, because they often say that policy shouldn’t impose one group’s preferences on another group. But economists are quite willing to observe that people generally prefer cooperation over conflict, and that most people prefer institutions that facilitate cooperation. That’s what the book mostly urges.

The book occasionally hints at wanting governments to legislate preferences in ways that go beyond facilitating cooperation, but doesn’t have much of an argument for doing so.

[*] – The book implies that the increased lateness was an obviously bad result. This seems like a plausible guess. But I find it easy to imagine conditions where the reported results were good (i.e. the parents might benefit from being late more than it costs the teachers to accommodate them).

However, that scenario depends on the fines being high enough for the teachers to prefer the money over punctuality. They appear not to have been consulted, so success at that would have depended on luck. It’s unclear whether the teachers were getting overtime pay when parents were late, or whether the fines benefited only the daycare owner.

One of the weakest claims in The Age of Em was that AI progress has not been accelerating.

J Storrs Hall (aka Josh) has a hypothesis that AI progress accelerated about a decade ago due to a shift from academia to industry. (I’m puzzled why the title describes it as a coming change, when it appears to have already happened).

I find it quite likely that something important happened then, including an acceleration in the rate at which AI affects people.

I find it less clear whether that indicates a change in how fast AI is approaching human intelligence levels.

Josh points to airplanes as an example of a phase change being important.

I tried to compare AI progress to other industries which might have experienced a similar phase change, driven by hardware progress. But I was deterred by the difficulty of estimating progress in industries when they were driven by academia.

One industry I tried to compare to was photovoltaics, which seemed to be hyped for a long time before becoming commercially important (10-20 years ago?). But I see only weak signs of a phase change around 2007, from looking at Swanson’s Law. It’s unclear whether photovoltaic progress was ever dominated by academia enough for a phase change to be important.

Hypertext is a domain where a clear phase change happened in the early 1990s. It experienced a nearly foom-like rate of adoption when internet availability altered the problem, from one that required a big company to finance the hardware and marketing, to one that could be solved by simply giving away a small amount of code. But this change in adoption was not accompanied by a change in the power of hypertext software (beyond changes due to network effects). So this seems like weak evidence against accelerating progress toward human-level AI.

What other industries should I look at?

I started writing morning pages a few months ago. That means writing three pages, on paper, before doing anything else [1].

I’ve only been doing this on weekends and holidays, because on weekdays I feel a need to do some stock market work close to when the market opens.

It typically takes me one hour to write three pages. At first, it felt like I needed 75 minutes but wanted to finish faster. After a few weeks, it felt like I could finish in about 50 minutes when I was in a hurry, but often preferred to take more than an hour.

That suggests I’m doing much less stream-of-consciousness writing than is typical for morning pages. It’s unclear whether that matters.

It feels like devoting an hour per day to morning pages ought to be costly. Yet I never observed it crowding out anything I valued (except maybe once or twice when I woke up before getting an optimal amount of sleep in order to get to a hike on time – that was due to scheduling problems, not due to morning pages reducing the amount of time available per day).
Continue Reading

Why do people knowingly follow bad investment strategies?

I won’t ask (in this post) about why people hold foolish beliefs about investment strategies. I’ll focus on people who intend to follow a decent strategy, and fail. I’ll illustrate this with a stereotype from a behavioral economist (Procrastination in Preparing for Retirement):[1]

For instance, one of the authors has kept an average of over $20,000 in his checking account over the last 10 years, despite earning an average of less than 1% interest on this account and having easy access to very liquid alternative investments earning much more.
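To get a rough sense of what that habit costs, here's a back-of-the-envelope sketch; the 4% yield on the alternative investment is my assumption, not a figure from the paper:

```python
# Rough, hypothetical estimate of the forgone interest in the example above.
balance = 20_000
years = 10
checking_rate = 0.01      # the paper says the account earned less than 1%
alternative_rate = 0.04   # assumed yield on a liquid alternative investment

forgone = balance * ((1 + alternative_rate) ** years - (1 + checking_rate) ** years)
print(f"Approximate forgone interest over {years} years: ${forgone:,.0f}")
# about $7,500 with these assumptions
```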

A more mundane example is a person who holds most of their wealth in stock of a single company, for reasons of historical accident (they acquired it via employee stock options or inheritance), but admits to preferring a more diversified portfolio.

An example from my life is that, until this year, I often borrowed money from Schwab to buy stock, when I could have borrowed at lower rates in my Interactive Brokers account to do the same thing. (Partly due to habits that I developed while carelessly unaware of the difference in rates; partly due to a number of trivial inconveniences).

Behavioral economists are somewhat correct to attribute such mistakes to questionable time discounting. But I see more patterns than such a model can explain: for example, people procrastinate more over some decisions (whether to make a “boring” trade) than over others (whether to read news about investments).[2]

Instead, I use CFAR-style models that focus on conflicting motives of different agents within our minds.

Continue Reading