effective altruism


Book review: What We Owe the Future, by William MacAskill.

WWOTF is a mostly good book that can’t quite decide whether it’s part of an activist movement or a work aimed at a small philosophical niche.

MacAskill wants to move us closer to utilitarianism, particularly in the sense of evaluating the effects of our actions on people who live in the distant future. Future people are real, and we have some sort of obligation to them.

WWOTF describes humanity’s current behavior as reckless, like that of an imprudent teenager. MacAskill almost killed himself as a teen by taking a poorly thought-out risk; humanity is taking similarly thoughtless risks.

MacAskill carefully avoids endorsing the aspect of utilitarianism that says everyone must be valued equally. That saves him from a number of conclusions that make utilitarianism unpopular. E.g. it allows him to be uncertain about how much to care about animal welfare. It allows him to ignore the difficult arguments about the morally correct discount rate.


[I have medium confidence in the broad picture, and somewhat lower confidence in the specific pieces of evidence. I’m likely biased by my commitment to an ETG strategy.]

Earning to Give (ETG) should be the default strategy for most Effective Altruists (EAs).

Five years ago, EA goals were pretty clearly constrained by funding. Today, there’s almost enough money going into far-future causes that vetting and talent constraints have become at least as important as funding. That led to a multi-year trend of increasingly downplaying ETG, a trend that was initially appropriate but has gone too far.


Book review: Principles: Life and Work, by Ray Dalio.

Most popular books get that way by having an engaging style. Yet this book’s style is mundane, almost forgettable.

Some books become bestsellers by being controversial. Others become bestsellers by manipulating readers’ emotions, e.g. by being fun to read, or by getting the reader to overestimate how profound the book is. Principles definitely doesn’t fit those patterns.

Some books become bestsellers because the author became famous for reasons other than his writings (e.g. Stephen Hawking, Donald Trump, and Bill Gates). Principles fits this pattern somewhat well: if an obscure person had published it, nothing about it would have triggered a pattern of readers enthusiastically urging their friends to read it. I suspect the average book in this category is rather pathetic, but I also expect there’s a very large variance in the quality of books in this category.

Principles contains an unusual amount of wisdom. But it’s unclear whether that’s enough to make it a good book, because it’s also unclear whether it will convince readers to follow the advice. Much of the advice sounds like ideas that most of us already agree with. The wisdom comes more from selecting the most underutilized ideas than from saying anything particularly novel. The main benefit is likely to be that people who were already on the verge of adopting the book’s advice will get one more nudge from an authority, providing the social reassurance they need.

Advice

Part of why I trust the book’s advice is that it overlaps a good deal with other sources from which I’ve gotten value, e.g. CFAR.

Key ideas include:

  • be honest with yourself
  • be open-minded
  • focus on identifying and fixing your most important weaknesses


I wrote this post to try to clarify my thoughts about donating to the Longevity Research Institute (LRI).

Much of that thinking involves asking: is there a better approach to curing aging? Will a better aging-related charity be created soon?

I started to turn this post into an overview of all approaches to curing aging, but I saw that would sidetrack me into doing too much research, so I’ve ended up skimping on some of them.

I’ve ordered the various approaches that I mention from most directly focused on the underlying causes of aging, to most focused on mitigating the symptoms.

I’ve been less careful than usual to distinguish my intuitions from solid research. I’m mainly trying here to summarize lots of information that I’ve accumulated over the years, and I’m not trying to do new research.

Most Universal Basic Income (UBI) proposals look a bit implausible, because they want to solve poverty overnight, and they rely on questionable hopes about how much taxation taxpayers can be persuaded to support[1].

They also fall short of inspiring my idealistic motives, because they want to solve poverty only within the countries that implement the UBI (i.e. they should be called national basic income proposals). That means even those of us living in relatively successful countries would be gambling on the continued success of the country we happen to live in. I imagine some large upheavals in the next century or so that will create a good deal of uncertainty about which countries prosper.

Political movements to create national basic income run the risk of being hijacked by political forces that are more short-sighted and less altruistic.

I’m more interested in preparing for the more distant risk of large-scale technological unemployment that might accompany a large increase in economic growth.

UBI without taxation?

Manna is a somewhat better attempt. It’s a cryptocurrency with a one-account-per-human rule and regular distributions of additional (newly created) currency to each account.
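
To make that mechanism concrete, here is a minimal sketch in Python of a ledger with a one-account-per-human rule and periodic, equal distributions of newly created currency. The class, the amounts, and the omission of any identity verification (which Manna mostly outsources to cell phone companies) are my own simplifications for illustration, not details of Manna’s actual implementation.

```python
# Minimal sketch of a one-account-per-human currency with periodic
# distributions of newly created units. The distribution size and the
# account names are illustrative assumptions, not Manna's parameters.

class BasicIncomeLedger:
    def __init__(self, distribution_per_period: float):
        self.balances = {}                  # verified person id -> balance
        self.distribution = distribution_per_period

    def register(self, person_id: str) -> None:
        """Each verified human gets exactly one account."""
        if person_id in self.balances:
            raise ValueError("already registered")
        self.balances[person_id] = 0.0

    def distribute(self) -> None:
        """Mint new currency and credit every account equally."""
        for person_id in self.balances:
            self.balances[person_id] += self.distribution

    def transfer(self, sender: str, receiver: str, amount: float) -> None:
        if self.balances[sender] < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount


ledger = BasicIncomeLedger(distribution_per_period=10.0)
ledger.register("alice")
ledger.register("bob")
ledger.distribute()                 # each account receives 10 newly created units
ledger.transfer("alice", "bob", 3.0)
print(ledger.balances)              # {'alice': 7.0, 'bob': 13.0}
```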

It provides incentives to sign up (speaking of which, I get rewards if you sign up via this link). It’s less clear what incentive people have to hold onto their manna[2].

It’s designed so that, given optimistic assumptions, the price of manna will be stable, or maybe increase somewhat. Note that those optimistic assumptions include a significant amount of altruism on the part of many people.

Cryptocurrencies gained popularity in part because they offered a means of trust that was independent of their creator’s trustworthiness.

Manna doesn’t attempt to fully replicate that feature, because they’re not about to fully automate the one-human-one-account rule. They’ve outsourced a good deal of the verification to cell phone companies, but the system will still be vulnerable to fraud unless a good deal of human labor goes into limiting people to one account each.

The obvious outcome is that people stop buying manna, so it becomes worth too little for people to bother signing up.

I suspect most buying so far has been from people who think any cryptocurrency will go up. That’s typical of a bubble.

That may have helped to jumpstart the system, but I’m concerned that it may distract the founders from looking for a long-term solution.

Why use a cryptocurrency?

Some of what’s happening is that crypto enthusiasts expect crypto to solve all problems, and apply it to everything without looking for evidence that it helps the problem at hand. The cryptocurrency bubble misled some people into thinking that cryptocurrencies created free lunches[3] (manna comes from heaven, right?), and a UBI is a good use for a free lunch.

I recommend instead that you think of manna as primarily a charity, which happens to get some advantage from using a cryptocurrency.

Cryptocurrencies provide fairly cheap ways of transmitting value.

The open source nature of the mechanism makes it relatively easy to verify most aspects of the system.

These may not sound like terribly strong reasons, but it looks to me like much of the difficulty in getting widespread adoption of valuable new charities is that donors won’t devote much effort to evaluating charities. So only the most easily verified charities succeed on their merits, and the rest succeed or fail mainly on their marketing ability.

Difficulties

It seems almost possible that the price of manna could be stable, or could rise reliably enough, for manna to act as a good store of value.

But it won’t happen via the thoughtless greed that drove last year’s cryptocurrency buying frenzy. It requires something along the lines of altruism and/or signaling.

It seems to require the “central bank” to use charitable donations to buy manna when the price of manna declines.

It also requires something unusual about the average person’s attitude toward manna. Would it be enough for people and businesses to accept manna as payment, for reasons that involve status signaling? That doesn’t seem quite enough.

It’s also important to persuade some people to hold the manna for a significant time.

Strategies

There’s little chance that can be accomplished by making manna look as safe as dollars or yuan. The only possibility that I can imagine working is if holdings of manna provide a good signal of wealth and wealth-related status. Manna seems to be positioned so that it could become a substitute for a fancy car or house as a signal of wealth. With that level of acceptance, it might provide a substitute for bank accounts as a store of value.

Signaling motives might also lead some upper-class people and businesses to use it as a medium of exchange.

To work well, manna would probably need to be recognized as a charity, with a reputation that is almost as widely respected as the Red Cross. I.e. it would need to be a fairly standard form of altruism.

The main UBI movement wants to imagine they can solve poverty with one legislative act. Manna uses a more incremental approach, which provides less hope of solving poverty this decade, but maybe a bit more hope of mitigating larger problems from technological unemployment several decades from now.

Doubts?

Manna seems to be run by the first group of people who decided the idea was worth doing. Typically with a new technology, the people who will manage it most responsibly wait a few years before getting involved, so my priors are that I should hesitate before deciding this particular group is good enough.

Manna currently isn’t fair to people who can’t afford a cell phone, but if other aspects of manna succeed, it’s likely that cell phone companies will find a way to get cell phones to essentially everyone, since the manna will pay for the phones. Also, alternatives to cell phones will probably be implemented for manna access.

The high-level rhetoric says any human being is eligible for manna, but a closer look shows that anyone under 18 is treated as only partly qualified – manna accumulates in their name, and they get access to the manna when they come of age. The arbitrariness of this threshold is unsettling. We’ll get situations where people become parents, yet don’t have access to manna. Or maybe that’s not much of a problem because someone else will enable children to borrow, using their manna as collateral?

The problems will become harder if someone needs to figure out what qualifies a human being in an Age of Em, where uploaded minds (human, and maybe bonobo) can be quickly duplicated.

I’m not too clear on how the governing board will be chosen – they say something about voting, which sort of suggests a global democracy. That runs some risk of short-sighted people voting themselves more money now at the cost of a less stable system later. But the alternative governing mechanisms aren’t obviously great either.

I’d have more confidence if manna were focused exclusively on a UBI. But they also want to enable targeted donations, by providing verified age, gender, location, and occupation data, plus “verified needy” status indications generated by other charities. Maybe one or two of those would work out well, but I see some important tension between them and the “NO DISCRIMINATION” slogan on the home page.

The people in charge also want to solve “instability … resulting from too much money being held in too few hands and used for reckless financial speculation” without convincing me they understand what causes instability.

I’d be concerned about macroeconomic risks in the unlikely event that manna’s use became widespread enough that wages were denominated in it. Manna’s creators express Keynesian concerns about aggregate demand, suggesting that the best we could hope for from a manna monetary policy is that it would repeat the Fed’s occasional large mistakes. I’d prefer to aim for something better than that.

Current central banks have enough problems with promoting monetary stability. If they’re replaced by an organization which has a goal that’s more distinct from monetary stability, I expect monetary stability to suffer. I don’t consider it likely that manna will replace existing currencies enough for that to be a big concern, but I find this scenario hard to analyze.

Like most charities, it depends more on support from the wealthy than from the average person. Yet the rhetoric behind Manna seems designed to alienate the wealthy.

Is the current People’s Currency Foundation sufficiently trustworthy? Or should someone create a better version?

I don’t know, and I don’t expect to do enough research to figure it out. Maybe OpenPhil can investigate enough?

Is this Effective Altruism?

The near-term benefits of Manna or something similar appear unimpressive compared to GiveDirectly, which targets beneficiaries in a more sophisticated (but less transparent?) way.

But Manna’s simpler criteria make it a bit more scalable, and make it somewhat easier to gain widespread trust.

The main costs that I foresee involve the attention needed to shift people’s default charity away from the Red Cross or their alma mater and toward manna. Plus, of course, whatever is lost by the charities that get fewer donations. There’s no shortage of charities that produce less value than a well-run UBI would, but the social pressure that I’m imagining is too blunt an instrument to carefully target the least valuable charities as the ones that manna should replace.

Conclusion

I don’t recommend significant purchases of manna or donations to the People’s Currency Foundation now. Current efforts in this area should focus on evaluating these ideas further and figuring out whether a good enough implementation exists. If one does, and it deserves to be scaled up, the focus should shift to generating widespread agreement that this is a good charity, rather than to near-term funding.

I give Manna a 0.5% chance of success, and I see an additional 1% chance that something similar will succeed. By success, I mean reliably providing enough income within 30 years so that at least 10 million of the world’s poorest people can use it to buy 2000 calories per day of food. That probability seems a bit higher than the chance that political action will similarly help the world’s poorest.
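
To make the scale of that success criterion explicit, here is a rough back-of-the-envelope calculation in Python. The probabilities and the 10-million-person, 2000-calorie targets come from the paragraph above; the cost per person per day of food is my own assumption, so treat the dollar figures as illustrative only.

```python
# Back-of-the-envelope arithmetic for the estimate above. The probabilities,
# the 10 million people, and the 2000 calories/day come from the text; the
# $0.50/day cost of those calories is a rough assumption of mine.

p_manna = 0.005                   # 0.5% chance Manna itself succeeds
p_similar = 0.01                  # additional 1% chance something similar succeeds
p_any_success = p_manna + p_similar

people = 10_000_000
assumed_cost_per_day = 0.50       # dollars for ~2000 kcal of food (assumption)
annual_transfer = people * assumed_cost_per_day * 365

print(f"combined chance of success: {p_any_success:.1%}")            # 1.5%
print(f"annual transfer at full scale: ${annual_transfer:,.0f}")     # $1,825,000,000
print(f"probability-weighted value: ${p_any_success * annual_transfer:,.0f}")
```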

Footnotes

[1] – e.g. pointing to tax rates that were tolerated for a while after a world war, without noticing the hints that war played an important role in getting that toleration, and without noting how tax rates affect tax avoidance. See Piketty’s Capital in the Twenty-First Century, figures 13.1 and 14.1, for evidence that tax rates which are higher than current rates haven’t generated more revenues.

[2] – Wikipedia says of the original manna: ‘Stored manna “bred worms and stank”’.

[3] – or maybe the best cryptocurrencies do create free lunches, but people see more free lunches than are actually created. The majority of cryptocurrencies have been just transfers of money from suckers to savvy traders.

Book review: The Life You Can Save, by Peter Singer.

This book presents some unimpressive moral claims, and some more pragmatic social advocacy that is rather impressive.

The Problem

It is all too common to talk as if all human lives had equal value, yet act as if the value of distant strangers’ lives was a few hundred dollars.

Singer is effective at arguing against standard rationalizations for this discrepancy.

He provides an adequate summary of reasons to think most of us can easily save many lives.

Book review: The Elephant in the Brain, by Kevin Simler and Robin Hanson.

This book is a well-written analysis of human self-deception.

Only small parts of this book will seem new to long-time readers of Overcoming Bias. It’s written more to bring those ideas to a wider audience.

Large parts of the book will seem obvious to cynics, but few cynics have attempted to explain the breadth of patterns that this book does. Most cynics focus on complaints about some group of people having worse motives than the rest of us. This book sends a message that’s much closer to “We have met the enemy, and he is us.”

The authors claim to be neutrally describing how the world works (“We aren’t trying to put our species down or rub people’s noses in their own shortcomings.”; “… we need this book to be a judgment-free zone”). It’s less judgmental than the average book that I read, but it’s hardly neutral. The authors are criticizing, in the sense that they’re rubbing our noses in evidence that humans are less virtuous than many people claim humans are. Darwin unavoidably put our species down in the sense of discrediting beliefs that we were made in God’s image. This book continues in a similar vein.

This suggests the authors haven’t quite resolved the conflict between their dreams of upholding the highest ideals of science (pursuit of pure knowledge for its own sake) and their desire to solve real-world problems.

The book needs to be (and mostly is) non-judgmental about our actual motives, in order to maximize our comfort with acknowledging those motives. The book is appropriately judgmental about people who pretend to have more noble motives than they actually have.

The authors do a moderately good job of admitting to their own elephants, but I get the sense that they’re still pretty hesitant about doing so.

Impact

Most people will underestimate the effects which the book describes.

Two and a half years ago, Eliezer was (somewhat plausibly) complaining that virtually nobody outside of MIRI was working on AI-related existential risks.

This year (at EAGlobal) one of MIRI’s talks was a bit hard to distinguish from an AI safety talk given by someone with pretty mainstream AI affiliations.

What happened in that time to cause that shift?

A large change was catalyzed by the publication of Superintelligence. I’ve been mildly disappointed about how little it affected discussions among people who were already interested in the topic. But Superintelligence caused a large change in how many people are willing to express concern over AI risks. That’s presumably because Superintelligence looks sufficiently academic and neutral to make many people comfortable about citing it, whereas similar arguments by Eliezer/MIRI didn’t look sufficiently prestigious within academia.

A smaller part of the change was MIRI shifting its focus somewhat to be more in line with how mainstream machine learning (ML) researchers expect AI to reach human levels.

Also, OpenAI has been quietly shifting in a more MIRI-like direction (I’m very unclear on how big a change this is). (Paul Christiano seems to deserve some credit for both the MIRI and OpenAI shifts in strategies.)

Given those changes, it seems like MIRI ought to be able to attract more donations than before, especially since it has demonstrated increasing competence, and since HPMoR seemed to draw significantly more people into the community of people who are interested in MIRI.

MIRI has gotten one big grant from OpenPhilanthropy that it probably couldn’t have gotten when mainstream AI researchers were treating MIRI’s concerns as too far-fetched to be worth commenting on. But donations from MIRI’s usual sources have stagnated.

That pattern suggests that MIRI was previously benefiting from a polarization effect, where the perception of two distinct “tribes” (those who care about AI risks versus those who promote AI) energized people to care about “their tribe”.

Now, in contrast, there’s no clear dividing line between MIRI and mainstream researchers. Also, there’s lots of money going into other organizations that plan to do something about AI safety. (Most of those haven’t yet articulated enough of a strategy to make me optimistic that that money is well spent. I still endorse the ideas I mentioned last year in How much Diversity of AGI-Risk Organizations is Optimal?. I’m unclear on how much diversity of approaches we’re getting from the recent proliferation of AI safety organizations.)

That pattern of donations creates perverse incentives for charities to at least market themselves as fighting a powerful group of people, rather than (as an ideal charity should) addressing a neglected problem. Even if that marketing doesn’t distort a charity’s operations, the charity will be tempted to use counterproductive alarmism. AI risk organizations have resisted those temptations (at least recently), but it seems risky to tempt them.

That’s part of why I recently made a modest donation to MIRI, in spite of the uncertainty over the value of their efforts (I had last donated to them in 2009).

Book review: Doing Good Better, by William MacAskill.

This book is a simple introduction to the Effective Altruism movement.

It documents big differences between superficially plausible charities, and points out how this implies big benefits to the recipients of charity from donors paying more attention to the results that a charity produces.

How effective is the book?

Is it persuasive?

Probably yes, for a small but somewhat important fraction of the population who seriously intend to help distant strangers, but have procrastinated about informing themselves about how to do so.

Does it focus on a neglected task?

Not very neglected. It’s mildly different from similar efforts such as GiveWell’s website and Reinventing Philanthropy, in ways that will slightly reduce the effort needed to understand the basics of Effective Altruism.

Will it make people more altruistic?

Not very much. It mostly seems to assume that people have some fixed level of altruism, and focuses on improving the benefits that result from that altruism. Maybe it will modestly redirect peer pressure toward making people more altruistic.

Will it make readers more effective?

Probably. For people who haven’t given much thought to these topics, the book’s advice is a clear improvement over standard habits. It will be modestly effective at promoting a culture where charitable donations that save lives are valued more highly than donations which accomplish less.

But I see some risk that it will make people overconfident about the benefits of the book’s specific strategies. An ideal version of the book would instead inspire people to improve on the book’s analysis.

The book provides evidence that donors rarely pay attention to how much good a charity does. Yet it avoids asking why. If you pay attention, you’ll see hints that donors are motivated mainly by the desire to signal something virtuous about themselves (for example, see the book’s section on moral licensing). In spite of that, the book consistently talks as if donors have good intentions, and only need more knowledge to be better altruists.

The book is less rigorous than I had hoped. I’m unsure how much of that is due to reasonable attempts to simplify the message so that more people can understand it with minimal effort.

In a section on robustness of evidence, the book describes this “sanity check”:

“if it cost ten dollars to save a life, then we’d have to suppose that they or their family members couldn’t save up for a few weeks, or take out a loan, in order to pay for the lifesaving product.”

I find it confusing to use this as a sanity check, because it’s all too easy to imagine that many people are in desperate enough conditions that they’re spending their last dollar to avoid starvation.
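
To spell out the arithmetic behind that objection, here is a small sketch in Python. The $10 target comes from the book’s quote; the income and essential-spending figures are hypothetical numbers chosen for illustration, not data from the book.

```python
# Rough arithmetic behind the objection above: how long it takes to save $10
# depends entirely on the daily surplus left after essentials, and for someone
# near starvation that surplus can be close to zero.

def days_to_save(target: float, daily_income: float, daily_essentials: float) -> float:
    surplus = daily_income - daily_essentials
    if surplus <= 0:
        return float("inf")          # no surplus: the amount can never be saved
    return target / surplus

print(days_to_save(10, daily_income=2.00, daily_essentials=1.50))    # 20 days: "a few weeks"
print(days_to_save(10, daily_income=2.00, daily_essentials=1.95))    # ~200 days
print(days_to_save(10, daily_income=2.00, daily_essentials=2.00))    # inf: last dollar goes to food
```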

The book alternates between advocating doing more good (satisficing), and advocating the most possible good (optimizing). In practice, it mostly focuses on safe ways to produce fairly good results.

The book barely mentions existential risks. If it were literally trying to advocate doing the most good possible, it would devote a lot more attention to affecting the distant future. But that’s much harder to do well than what the book does focus on (saving a few more lives in Africa over the next few years), and would involve acts of charity that have small probabilities of really large effects on people who are not yet born.

If you’re willing to spend 50-100 hours (but not more) learning how to be more effective with your altruism, then reading this book is a good start.

But people who are more ambitious ought to be able to make a bigger difference to the world. I encourage those people to skip this book, and focus more on analyzing existential risks.