effective altruism


Most Universal Basic Income (UBI) proposals look a bit implausible, because they aim to solve poverty overnight, and they rely on questionable hopes about how much taxation taxpayers can be persuaded to support[1].

They also fall short of inspiring my idealistic motives, because they aim to solve poverty only within the countries that implement the UBI (i.e. they should be called national basic income proposals). That means even those of us living in relatively successful countries would be gambling on the continued success of the countries we happen to live in. I imagine some large upheavals in the next century or so that will create a good deal of uncertainty as to which countries prosper.

Political movements to create national basic income run the risk of being hijacked by political forces that are more short-sighted and less altruistic.

I’m more interested in preparing for the more distant risk of large-scale technological unemployment that might accompany a large increase in economic growth.

UBI without taxation?

Manna is a somewhat better attempt. It’s a cryptocurrency with a one-account-per-human rule, and regular distributions of additional (newly created) currency to each account.

It provides incentives to sign up (speaking of which, I get rewards if you sign up via this link). It’s less clear what incentive people have to hold onto their manna[2].

It’s designed so that, given optimistic assumptions, the price of manna will be stable, or maybe increase somewhat. Note that those optimistic assumptions include a significant amount of altruism on the part of many people.
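To make the dilution problem concrete, here is a toy quantity-theory sketch (my own illustration, with made-up numbers; it is not Manna’s actual model): if newly created manna is distributed every period while the dollar value people collectively want to hold stays flat, the price must fall in proportion to the supply growth. The “optimistic assumptions” amount to demand growing fast enough to offset the issuance.

```python
# Toy model, not Manna's spec: regular per-account distributions grow total
# supply, so a stable price requires dollar demand to grow just as fast.

def price_after(periods, initial_supply, issuance_per_period, dollar_demand):
    """Naive quantity-theory price: dollars people want to hold / coins in circulation."""
    supply = initial_supply + issuance_per_period * periods
    return dollar_demand / supply

# With flat demand, the price falls as the supply inflates.
p0 = price_after(0, initial_supply=1_000_000, issuance_per_period=10_000, dollar_demand=500_000)
p100 = price_after(100, initial_supply=1_000_000, issuance_per_period=10_000, dollar_demand=500_000)
assert p100 < p0  # supply doubled over 100 periods, so the price halved
```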

Cryptocurrencies gained popularity in part because they offered a means of trust that was independent of their creator’s trustworthiness.

Manna doesn’t attempt to fully replicate that feature, because they’re not about to fully automate the one-human-one-account rule. They’ve outsourced a good deal of the verification to cell phone companies, but the system will still be vulnerable to fraud unless a good deal of human labor goes into limiting people to one account each.

The obvious outcome is that people stop buying manna, so it becomes worth too little for people to bother signing up.

I suspect most buying so far has been from people who think any cryptocurrency will go up. That’s typical of a bubble.

That may have helped to jumpstart the system, but I’m concerned that it may distract the founders from looking for a long-term solution.

Why use a cryptocurrency?

Some of what’s happening is that crypto enthusiasts expect cryptocurrency to solve every problem, and apply it to everything without looking for evidence that it helps with the problem at hand. The cryptocurrency bubble misled some people into thinking that cryptocurrencies created free lunches[3] (manna comes from heaven, right?), and a UBI is a good use for a free lunch.

I recommend instead that you think of manna as primarily a charity, which happens to get some advantage from using a cryptocurrency.

Cryptocurrencies provide fairly cheap ways of transmitting value.

The open source nature of the mechanism makes it relatively easy to verify most aspects of the system.

These may not sound like terribly strong reasons, but it looks to me like much of the difficulty in getting widespread adoption of valuable new charities is that donors won’t devote much effort to evaluating charities. So only the most easily verified charities succeed on their merits, and the rest succeed or fail mainly on their marketing ability.

Difficulties

It seems just barely possible that the price of manna could be stable, or rise reliably enough, to act as a good store of value.

But it won’t happen via the thoughtless greed that drove last year’s cryptocurrency buying frenzy. It requires something along the lines of altruism and/or signaling.

It seems to require the “central bank” to use charitable donations to buy manna when the price of manna declines.
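A minimal sketch of that buyback idea (the function and numbers are my own, not from Manna’s documentation): a donation-funded reserve buys coins whenever the market price dips below a floor, and can defend the floor only as long as donations replenish it.

```python
# Sketch of a donation-funded price floor: buy coins at the floor price to
# absorb selling pressure, until the reserve runs out of dollars.

def support_price(reserve_usd, floor, sell_pressure_coins, price):
    """Return (coins_bought, remaining_reserve) for one round of support."""
    if price >= floor:
        return 0.0, reserve_usd       # no intervention needed
    affordable = reserve_usd / floor  # coins the reserve can afford at the floor
    bought = min(sell_pressure_coins, affordable)
    return bought, reserve_usd - bought * floor

# With $1000 of donations and a $0.10 floor, the reserve absorbs this round,
# but sustained selling would exhaust it without fresh donations.
bought, left = support_price(reserve_usd=1000.0, floor=0.10, sell_pressure_coins=5000.0, price=0.08)
assert bought == 5000.0 and abs(left - 500.0) < 1e-6
```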

It also requires something unusual about the average person’s attitude toward manna. Would it be enough for people and businesses to accept manna as payment, for reasons that involve status signaling? That doesn’t seem quite enough.

It’s also important to persuade some people to hold the manna for a significant time.

Strategies

There’s little chance that can be accomplished by making manna look as safe as dollars or yuan. The only possibility that I can imagine working is if holdings of manna provide a good signal of wealth and wealth-related status. Manna seems to be positioned so that it could become a substitute for a fancy car or house as a signal of wealth. With that level of acceptance, it might provide a substitute for bank accounts as a store of value.

Signaling motives might also lead some upper-class people/businesses to use it as a medium of exchange.

To work well, manna would probably need to be recognized as a charity, with a reputation that is almost as widely respected as the Red Cross. I.e. it would need to be a fairly standard form of altruism.

The main UBI movement wants to imagine they can solve poverty with one legislative act. Manna uses a more incremental approach, which provides less hope of solving poverty this decade, but maybe a bit more hope of mitigating larger problems from technological unemployment several decades from now.

Doubts?

Manna seems to be run by the first group of people who decided the idea was worth doing. Typically with a new technology, the people who will manage it most responsibly wait a few years before getting involved, so my priors are that I should hesitate before deciding this particular group is good enough.

Manna currently isn’t fair to people who can’t afford a cell phone, but if other aspects of manna succeed, it’s likely that cell phone companies will find a way to get cell phones to essentially everyone, since the manna will pay for the phones. Also, alternatives to cell phones will probably be implemented for manna access.

The high-level rhetoric says any human being is eligible for manna, but a closer look shows that anyone under 18 is treated as only partly qualified – manna accumulates in their name, and they get access to the manna when they come of age. The arbitrariness of this threshold is unsettling. We’ll get situations where people become parents, yet don’t have access to manna. Or maybe that’s not much of a problem because someone else will enable children to borrow, using their manna as collateral?

The problems will become harder if someone needs to figure out what qualifies a human being in an Age of Em, where uploaded minds (human, and maybe bonobo) can be quickly duplicated.

I’m not too clear on how the governing board will be chosen – they say something about voting, which sort of suggests a global democracy. That runs some risk of short-sighted people voting themselves more money now at the cost of a less stable system later. But the alternative governing mechanisms aren’t obviously great either.

I’d have more confidence if manna were focused exclusively on a UBI. But they want to also enable targeted donations, by providing verified age, gender, location, and occupation data, and “verified needy” status indications generated by other charities. Maybe one or two of those would work out well, but I see some important tension between them and the “NO DISCRIMINATION” slogan on the home page.

The people in charge also want to solve “instability … resulting from too much money being held in too few hands and used for reckless financial speculation” without convincing me they understand what causes instability.

I’d be concerned about macroeconomic risks in the unlikely event that manna’s use became widespread enough that wages were denominated in it. Manna’s creators express Keynesian concerns about aggregate demand, suggesting that the best we could hope for from a manna monetary policy is that it would repeat the Fed’s occasional large mistakes. I’d prefer to aim for something better than that.

Current central banks have enough problems with promoting monetary stability. If they’re replaced by an organization which has a goal that’s more distinct from monetary stability, I expect monetary stability to suffer. I don’t consider it likely that manna will replace existing currencies enough for that to be a big concern, but I find this scenario hard to analyze.

Like most charities, it depends more on support from the wealthy than from the average person. Yet the rhetoric behind Manna seems designed to alienate the wealthy.

Is the current People’s Currency Foundation sufficiently trustworthy? Or should someone create a better version?

I don’t know, and I don’t expect to do enough research to figure it out. Maybe OpenPhil can investigate enough?

Is this Effective Altruism?

The near-term benefits of Manna or something similar appear unimpressive compared to GiveDirectly, which targets beneficiaries in a more sophisticated (but less transparent?) way.

But Manna’s simpler criteria make it a bit more scalable, and make it somewhat easier to gain widespread trust.

The main costs that I foresee involve the attention needed to shift people’s default charity away from the Red Cross or their alma mater, and toward manna. Plus, of course, whatever is lost by the charities that get fewer donations. There’s no shortage of charities that produce less value than a well-run UBI would, but the social pressure that I’m imagining is too blunt an instrument to carefully target the least valuable charities as the ones that manna should replace.

Conclusion

I don’t recommend significant purchases of manna or donations to the People’s Currency Foundation now. Current efforts in this area should focus on evaluating these ideas further and figuring out whether a good enough implementation exists. If one does, and it deserves to be scaled up, the focus should then shift to generating widespread agreement that this is a good charity, not to near-term funding.

I give Manna a 0.5% chance of success, and I see an additional 1% chance that something similar will succeed. By success, I mean reliably providing enough income within 30 years so that at least 10 million of the world’s poorest people can use it to buy 2000 calories per day of food. That probability seems a bit higher than the chance that political action will similarly help the world’s poorest.
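For a sense of scale, here is a back-of-envelope version of that success criterion. The food price is my assumption, not a figure from the post (roughly fifty cents per 2000 kcal of the cheapest staples is a guess):

```python
# Rough scale check on the success criterion: 10 million people, 2000 kcal/day.
# The price per 1000 kcal is an assumed figure, not sourced.

people = 10_000_000
kcal_per_day = 2000
usd_per_1000_kcal = 0.50  # assumed cost of the cheapest staple calories

daily_cost = people * (kcal_per_day / 1000) * usd_per_1000_kcal
annual_cost = daily_cost * 365
print(f"${daily_cost:,.0f}/day, ${annual_cost:,.0f}/year")
```

Under these assumptions, success means the system reliably distributing value on the order of a few billion dollars per year to the world’s poorest.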

Footnotes

[1] – e.g. pointing to tax rates that were tolerated for a while after a world war, without noticing the hints that war played an important role in getting that toleration, and without noting how tax rates affect tax avoidance. See Piketty’s Capital in the Twenty-First Century, figures 13.1 and 14.1, for evidence that tax rates which are higher than current rates haven’t generated more revenues.

[2] – Wikipedia says of the original manna: “Stored manna ‘bred worms and stank’”.

[3] – or maybe the best cryptocurrencies do create free lunches, but people see more free lunches than are actually created. The majority of cryptocurrencies have been just transfers of money from suckers to savvy traders.

Book review: The Life You Can Save, by Peter Singer.

This book presents some unimpressive moral claims, and some more pragmatic social advocacy that is rather impressive.

The Problem

It is all too common to talk as if all human lives had equal value, yet act as if the value of distant strangers’ lives was a few hundred dollars.

Singer is effective at arguing against standard rationalizations for this discrepancy.

He provides an adequate summary of reasons to think most of us can easily save many lives.

Book review: The Elephant in the Brain, by Kevin Simler and Robin Hanson.

This book is a well-written analysis of human self-deception.

Only small parts of this book will seem new to long-time readers of Overcoming Bias. It’s written more to bring those ideas to a wider audience.

Large parts of the book will seem obvious to cynics, but few cynics have attempted to explain the breadth of patterns that this book does. Most cynics focus on complaints about some group of people having worse motives than the rest of us. This book sends a message that’s much closer to “We have met the enemy, and he is us.”

The authors claim to be neutrally describing how the world works (“We aren’t trying to put our species down or rub people’s noses in their own shortcomings.”; “… we need this book to be a judgment-free zone”). It’s less judgmental than the average book that I read, but it’s hardly neutral. The authors are criticizing, in the sense that they’re rubbing our noses in evidence that humans are less virtuous than many people claim humans are. Darwin unavoidably put our species down in the sense of discrediting beliefs that we were made in God’s image. This book continues in a similar vein.

This suggests the authors haven’t quite resolved the conflict between their dreams of upholding the highest ideals of science (pursuit of pure knowledge for its own sake) and their desire to solve real-world problems.

The book needs to be (and mostly is) non-judgmental about our actual motives, in order to maximize our comfort with acknowledging those motives. The book is appropriately judgmental about people who pretend to have more noble motives than they actually have.

The authors do a moderately good job of admitting to their own elephants, but I get the sense that they’re still pretty hesitant about doing so.

Impact

Most people will underestimate the effects which the book describes.

Two and a half years ago, Eliezer was (somewhat plausibly) complaining that virtually nobody outside of MIRI was working on AI-related existential risks.

This year (at EAGlobal) one of MIRI’s talks was a bit hard to distinguish from an AI safety talk given by someone with pretty mainstream AI affiliations.

What happened in that time to cause that shift?

A large change was catalyzed by the publication of Superintelligence. I’ve been mildly disappointed about how little it affected discussions among people who were already interested in the topic. But Superintelligence caused a large change in how many people are willing to express concern over AI risks. That’s presumably because Superintelligence looks sufficiently academic and neutral to make many people comfortable about citing it, whereas similar arguments by Eliezer/MIRI didn’t look sufficiently prestigious within academia.

A smaller part of the change was MIRI shifting its focus somewhat to be more in line with how mainstream machine learning (ML) researchers expect AI to reach human levels.

Also, OpenAI has been quietly shifting in a more MIRI-like direction (I’m very unclear on how big a change this is). (Paul Christiano seems to deserve some credit for both the MIRI and OpenAI shifts in strategies.)

Given those changes, it seems like MIRI ought to be able to attract more donations than before. Especially since it has demonstrated evidence of increasing competence, and also because HPMoR seemed to draw significantly more people into the community of people who are interested in MIRI.

MIRI has gotten one big grant from OpenPhilanthropy that it probably couldn’t have gotten when mainstream AI researchers were treating MIRI’s concerns as too far-fetched to be worth commenting on. But donations from MIRI’s usual sources have stagnated.

That pattern suggests that MIRI was previously benefiting from a polarization effect, where the perception of two distinct “tribes” (those who care about AI risks versus those who promote AI) energized people to care about “their tribe”.

Whereas now there’s no clear dividing line between MIRI and mainstream researchers. Also, there’s lots of money going into other organizations that plan to do something about AI safety. (Most of those haven’t yet articulated enough of a strategy to make me optimistic that that money is well spent. I still endorse the ideas I mentioned last year in How much Diversity of AGI-Risk Organizations is Optimal?. I’m unclear on how much diversity of approaches we’re getting from the recent proliferation of AI safety organizations.)

That kind of donation pattern creates perverse incentives for charities to at least market themselves as fighting a powerful group of people, rather than (as the ideal charity should be) addressing a neglected problem. Even if that marketing doesn’t distort a charity’s operations, the charity will be tempted to use counterproductive alarmism. AI risk organizations have resisted those temptations (at least recently), but it seems risky to tempt them.

That’s part of why I recently made a modest donation to MIRI, in spite of the uncertainty over the value of their efforts (I had last donated to them in 2009).

Book review: Doing Good Better, by William MacAskill.

This book is a simple introduction to the Effective Altruism movement.

It documents big differences between superficially plausible charities, and points out how this implies big benefits to the recipients of charity from donors paying more attention to the results that a charity produces.

How effective is the book?

Is it persuasive?

Probably yes, for a small but somewhat important fraction of the population who seriously intend to help distant strangers, but have procrastinated about informing themselves about how to do so.

Does it focus on a neglected task?

Not very neglected. It’s mildly different from similar efforts such as GiveWell’s website and Reinventing Philanthropy, in ways that will slightly reduce the effort needed to understand the basics of Effective Altruism.

Will it make people more altruistic?

Not very much. It mostly seems to assume that people have some fixed level of altruism, and focuses on improving the benefits that result from that altruism. Maybe it will modestly redirect peer pressure toward making people more altruistic.

Will it make readers more effective?

Probably. For people who haven’t given much thought to these topics, the book’s advice is a clear improvement over standard habits. It will be modestly effective at promoting a culture where charitable donations that save lives are valued more highly than donations which accomplish less.

But I see some risk that it will make people overconfident about the benefits of the book’s specific strategies. An ideal version of the book would instead inspire people to improve on the book’s analysis.

The book provides evidence that donors rarely pay attention to how much good a charity does. Yet it avoids asking why. If you pay attention, you’ll see hints that donors are motivated mainly by the desire to signal something virtuous about themselves (for example, see the book’s section on moral licensing). In spite of that, the book consistently talks as if donors have good intentions, and only need more knowledge to be better altruists.

The book is less rigorous than I had hoped. I’m unsure how much of that is due to reasonable attempts to simplify the message so that more people can understand it with minimal effort.

In a section on robustness of evidence, the book describes this “sanity check”:

“if it cost ten dollars to save a life, then we’d have to suppose that they or their family members couldn’t save up for a few weeks, or take out a loan, in order to pay for the lifesaving product.”

I find it confusing to use this as a sanity check, because it’s all too easy to imagine that many people are in desperate enough conditions that they’re spending their last dollar to avoid starvation.

The book alternates between advocating doing more good (satisficing), and advocating the most possible good (optimizing). In practice, it mostly focuses on safe ways to produce fairly good results.

The book barely mentions existential risks. If it were literally trying to advocate doing the most good possible, it would devote a lot more attention to affecting the distant future. But that’s much harder to do well than what the book does focus on (saving a few more lives in Africa over the next few years), and would involve acts of charity that have small probabilities of really large effects on people who are not yet born.

If you’re willing to spend 50-100 hours (but not more) learning how to be more effective with your altruism, then reading this book is a good start.

But people who are more ambitious ought to be able to make a bigger difference to the world. I encourage those people to skip this book, and focus more on analyzing existential risks.

This post is partly a response to arguments for only donating to one charity and to an 80,000 Hours post arguing against diminishing returns. But I’ll focus mostly on AGI-risk charities.

Diversifying Donations?

The rule that I should only donate to one charity is a good presumption to start with. Most objections to it are due to motivations that diverge from pure utilitarian altruism. I don’t pretend that altruism is my only motive for donating, so I’m not too concerned that I only do a rough approximation of following that rule.

Still, I want to follow the rule more closely than most people do. So when I direct less than 90% of my donations to tax-deductible nonprofits, I feel a need to point to diminishing returns [1] to donations to justify that.

With AGI risk organizations, I expect the value of diversity to sometimes override the normal presumption even for purely altruistic utilitarians (with caveats about having the time needed to evaluate multiple organizations, and having more than a few thousand dollars to donate; those caveats will exclude many people from this advice, so this post is mainly oriented toward EAs who are earning to give or wealthier people).

Diminishing Returns?

Before explaining that, I’ll reply to the 80,000 Hours post about diminishing returns.

The 80,000 Hours post focuses on charities that mostly market causes to a wide audience. The economies of scale associated with brand recognition and social proof seem more plausible than any economies of scale available to research organizations.

The shortage of existential risk research seems more dangerous than any shortage of charities which are devoted to marketing causes, so I’m focusing on the most important existential risk.

I expect diminishing returns to be common after an organization grows beyond two or three people. One reason is that the founders of most organizations exert more influence than subsequent employees over important policy decisions [2], so at productive organizations founders are more valuable.

For research organizations that need the smartest people, the limited number of such people implies that only small organizations can have a large fraction of employees be highly qualified.

I expect donations to very young organizations to be more valuable than other donations (which implies diminishing returns to size on average):

  • It takes time to produce evidence that the organization is accomplishing something valuable, and donors quite sensibly prefer organizations that have provided such evidence.
  • Even when donors try to compensate for that by evaluating the charity’s mission statement or leader’s competence, it takes some time to adequately communicate those features (e.g. it’s rare for a charity to set up an impressive web site on day one).
  • It’s common for a charity to have suboptimal competence at fundraising until it grows large enough to hire someone with fundraising expertise.
  • Some charities are mainly funded by a few grants in the millions of dollars, and I’ve heard reports that those often take many months between being awarded and reaching the charities’ bank accounts (not to mention delays in awarding the grants). This sometimes means months when a charity has trouble hiring anyone who demands an immediate salary.
  • Donors could in principle overcome these causes of bias, but as far as I can tell, few care about doing so. EAs come a little closer to doing this than others, but my observations suggest that EAs are almost as lazy about analyzing new charities as non-EAs.
  • Therefore, I expect young charities to be underfunded.

Why AGI risk research needs diversity

I see more danger of researchers pursuing useless approaches for existential risks in general, and AGI risks in particular (due partly to the inherent lack of feedback), than with other causes.

The most obvious way to reduce that danger is to encourage a wide variety of people and organizations to independently research risk mitigation strategies.

I worry about AGI-risk researchers focusing all their effort on a class of scenarios which rely on a false assumption.

The AI foom debate seems superficially like the main area where a false assumption might cause AGI research to end up mostly wasted. But there are enough influential people on both sides of this issue that I expect research to not ignore one side of that debate for long.

I worry more about assumptions that no prominent people question.

I’ll describe how such an assumption might look in hindsight via an analogy to some leading developers of software intended to accomplish what the web ended up accomplishing [3].

Xanadu stood out as the leading developer of global hypertext software in the 1980s to about the same extent that MIRI stands out as the leading AGI-risk research organization. One reason [4] that Xanadu accomplished little was the assumption that they needed to make money. Part of why that seemed obvious in the 1980s was that there were no ISPs delivering an internet-like platform to ordinary people, and hardware costs were a big obstacle to anyone who wanted to provide that functionality. The hardware costs declined at a predictable enough rate that Drexler was able to predict in Engines of Creation (published in 1986) that ordinary people would get web-like functionality within a decade.

A more disturbing reason for assuming that web functionality needed to make a profit was the ideology surrounding private property. People who opposed private ownership of homes, farms, factories, etc. were causing major problems. Most of us automatically treated ownership of software as working the same way as physical property.

People who are too young to remember attitudes toward free / open source software before about 1997 will have some trouble believing how reluctant people were to imagine valuable software being free. [5] Attitudes changed unusually fast due to the demise of communism and the availability of affordable internet access.

A few people (such as RMS) overcame the focus on cold war issues, but were too eccentric to convert many followers. We should pay attention to people with similarly eccentric AGI-risk views.

If I had to guess what faulty assumption AGI-risk researchers are making, I’d say something like faulty guesses about the nature of intelligence or the architecture of feasible AGIs. But the assumptions that look suspicious to me are ones that some moderately prominent people have questioned.

Vague intuitions along these lines have led me to delay some of my potential existential-risk donations in hopes that I’ll discover (or help create?) some newly created existential-risk projects which produce more value per dollar.

Conclusions

How does this affect my current giving pattern?

My favorite charity is CFAR (around 75 or 80% of my donations), which improves the effectiveness of people who might start new AGI-risk organizations or AGI-development organizations. I’ve had varied impressions about whether additional donations to CFAR have had diminishing returns. They seem to have been getting just barely enough money to hire employees they consider important.

FLI is a decent example of a possibly valuable organization that CFAR played some hard-to-quantify role in starting. It bears a superficial resemblance to an optimal incubator for additional AGI-risk research groups. But FLI seems too focused on mainstream researchers to have much hope of finding the eccentric ideas that I’m most concerned about AGI-researchers overlooking.

Ideally I’d be donating to one or two new AGI-risk startups per year. Conditions seem almost right for this. New AGI-risk organizations are being created at a good rate, mostly getting a few large grants that are probably encouraging them to focus on relatively mainstream views [6].

CSER and FLI sort of fit this category briefly last year before getting large grants, and I donated moderate amounts to them. I presume I didn’t give enough to them for diminishing returns to be important, but their windows of unusual need were short enough that I might well have come close to that.

I’m a little surprised that the increasing interest in this area doesn’t seem to be catalyzing the formation of more low-budget groups pursuing more unusual strategies. Please let me know of any that I’m overlooking.

See my favorite charities web page (recently updated) for more thoughts about specific charities.

[1] – Diminishing returns are the main way that donating to multiple charities at one time can be reconciled with utilitarian altruism.

[2] – I don’t know whether it ought to work this way, but I expect this pattern to continue.

[3] – they intended to accomplish a much more ambitious set of goals.

[4] – probably not the main reason.

[5] – presumably the people who were sympathetic to communism weren’t attracted to small software projects (too busy with politics?) or rejected working on software due to the expectation that it required working for evil capitalists.

[6] – The short-term effects are probably good, increasing the diversity of approaches compared to what would be the case if MIRI were the only AGI-risk organization, and reducing the risk that AGI researchers would become polarized into tribes that disagree about whether AGI is dangerous. But a field dominated by a few funders tends to focus on fewer ideas than one with many funders.

Book review: Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty by Abhijit V. Banerjee and Esther Duflo.

This book gives an interesting perspective on the obstacles to fixing poverty in the developing world. They criticize both Jeffrey Sachs and William Easterly for overstating how easy/hard it is to provide useful aid to the poor by attempting simple and sweeping generalizations, whereas Banerjee and Duflo want us to look carefully at evidence from mostly small-scale interventions, which sometimes produce decent results.

They describe a few randomized controlled trials, but apparently there aren’t enough of those to occupy a full book, so they spend more time on less rigorous evidence of counter-intuitive ways that aid programs can fail.

They portray the poor as mostly rational and rarely making choices that are clearly stupid given the information that is readily available to them. But their cognitive abilities are sometimes suboptimal due to mediocre nutrition, disease, and/or stress from financial risks. Relieving any of those problems can sometimes enable them to become more productive workers.

The book advocates mild paternalism in the form of nudging weakly held beliefs about health-related questions where people can’t easily observe the results (e.g. vaccination, iodine supplementation), but probably not birth control (the poor generally choose how many children to have, although there are complex issues influencing those choices). They point out that the main reason people in developed countries make better health choices is due to better defaults, not more intelligence. I wish they’d gone a bit farther and speculated about how many of our current health practices will look pointlessly harmful to more advanced societies.

They give a lukewarm endorsement of microcredit, showing that it needs to be inflexible to avoid high default rates, and only provides small benefits overall. Most of the poor would be better off with a salaried job than borrowing money to run a shaky business.

The book fits in well with GiveWell’s approach.

Book review: Reinventing Philanthropy: A Framework for More Effective Giving, by Eric Friedman.

This book will spread the ideas behind effective altruism to a modestly wider set of donors than other efforts I’m aware of. It understates how much the effective altruism movement differs from traditional charity and how hard it is to implement, but given the shortage of books on this subject any addition is valuable. It focuses on how to ask good questions about philanthropy rather than attempting to find good answers.

The author provides a list of objections he’s heard to maximizing the effectiveness of charity, a majority of which seem to boil down to the worry that “diversification of nonprofit goals would be drastically reduced”, leading to many existing benefits being canceled. He tries to argue that people have extremely diverse goals which would result in an extremely diverse set of charities. He later argues that the subjectivity of determining the effectiveness of charities will maintain that diversity. Neither of these arguments seems remotely plausible. When individuals explicitly compare how they value their own pleasure, life expectancy, dignity, freedom, etc., I don’t see more than a handful of different goals. How could it be much different for recipients of charity? There exist charities whose value can’t easily be compared to GiveWell’s recommended ones (stopping nuclear war?), but they seem to get a small fraction of the money that goes to charities that GiveWell has decent reasons for rejecting.

So I conclude that widespread adoption of effective giving would drastically reduce the diversity of charitable goals (limited mostly by the fact that spending large amounts on a single goal is subject to diminishing returns). The only plausible explanation I see for people’s discomfort with that is that people are attached to beliefs which are inconsistent with treating all potential recipients as equally deserving.

He’s reluctant to criticize “well-intentioned” donors who use traditional emotional reasoning. I prefer to think of them as normally-intentioned (i.e. acting on a mix of selfish and altruistic motives).

I still have some concerns that asking average donors to objectively maximize the impact of their donations would backfire by reducing the emotional benefit they get from giving more than it increases the effectiveness of their giving. But since I don’t expect more than a few percent of the population to be analytical enough to accept the arguments in this book, this doesn’t seem like an important concern.

He tries to argue that effective giving can increase the emotional benefit we get from giving. This mostly seems to depend on getting more warm fuzzy feelings from helping more people. But as far as I can tell, those feelings are very insensitive to the number of people helped. I haven’t noticed any improved feelings as I alter my giving to increase its impact, and the literature on scope insensitivity suggests that’s typical.

He wants donors to treat potentially deserving recipients as equally deserving regardless of how far away they are, but he fails to include people who are distant in time. He might have good reasons for not wanting to donate to people of the distant future, but not analyzing those reasons risks making the same kind of mistake he criticizes donors for making about distant continents.

Book Review: Let Their People Come: Breaking the Gridlock on Global Labor Mobility by Lant Pritchett.

This book is primarily written for economists and academics in related fields, but most of it can be understood by an average person.

I was a little hesitant to read this book because I suspected it would do little more than reinforce my existing beliefs. There were certainly parts of the book that I would have been better off skipping for that reason.

But one important effect of the book was to convince me that the effects on the poor of migration to wealthier countries are so large compared to things like “foreign aid” and free trade that anyone trying to help the poor by influencing government policies shouldn’t spend any time thinking about how to improve “foreign aid” or trade barriers.

I’ve long wondered how to respond to remarks such as Jimmy Carter’s ‘We are the stingiest nation of all’, based on the U.S.’s low ratio of “foreign aid” to GDP. Pointing out that “foreign aid” is mostly wasted or even harmful requires too much analysis of lots of not-too-strong evidence. Pritchett shows that the wealth effects of allowing the poor to work in rich countries should dominate any measure of how those rich countries treat the poor. By that measure, adjusting for country size, the U.S. ranks better than countries in the EU, but is embarrassingly callous compared to the United Arab Emirates, Kuwait, and Jordan.

The book addresses both moral and selfish arguments for restricting immigration. It treats the selfish arguments (even those based on myths) as problems that can’t be overcome, but which can be reduced via compromises. These pragmatic parts of the book are too ordinary to be worth much.

The sections about moral arguments are more powerful. He clearly demonstrates a large blind spot in the moral vision of those who think they’re opposed to all discrimination but who aren’t offended by discrimination on the basis of the nationality a person was assigned at birth. But he exaggerates when he claims that nationality is the only exception to a widely agreed on outrage at discrimination based on “condition of birth”. Discrimination based on date of birth still gets wide support (e.g. the drinking age). And if you’re born as a conjoined twin, don’t expect much protection from separation surgery that looks about as moral as brain surgery designed to cure a child’s homosexuality would be.

Perhaps this book is one small step toward creating a movement with a slogan such as “Tear down that kinder, gentler Berlin wall!”.

This post is a response to a challenge on Overcoming Bias to spend $10 trillion sensibly.
Here’s my proposed allocation (spending to be spread out over 10-20 years):

  • $5 trillion on drug patent buyouts and prizes for new drugs put in the public domain, with the prizes mostly allocated in proportion to the quality-adjusted life years attributable to the drug.
  • $1 trillion on establishing a few dozen separate clusters of seasteads and on facilitating migration of people from poor/oppressive countries by rewarding jurisdictions in proportion to the number of immigrants they accept from poorer / less free regions. (I’m guessing that most of those rewards will go to seasteads, many of which will be created by other people partly in hopes of getting some of these rewards).

    This would also have the side effect of significantly reducing the harm that humans might experience due to global warming or an ice age, since ocean climates have less extreme temperatures, and seasteads will probably not depend on rainfall to grow food and can move somewhat toward locations with better temperatures.
  • $1 trillion on improving political systems, mostly through prizes that bear some resemblance to The Mo Ibrahim Prize for Achievement in African Leadership (but not limited to democratically elected leaders and not limited to Africa). If the top 100 or so politicians in about 100 countries are eligible, I could set the average reward at about $100 million per person. Of course, nowhere near all of them will qualify, so a fair amount will be left over for those not yet in office.
  • $0.5 trillion on subsidizing trading on prediction markets that are designed to enable futarchy. This level of subsidy is far beyond anything that has been tried, so there’s no way to guess whether it’s wasteful.
  • $1 trillion on reducing existential risks.
    Some unknown fraction of this would go to persuading people not to work on AGI unless they can offer good arguments that the goal system of any AI they create will be safe. Once I’m satisfied that the risks associated with AI are under control, much of the remaining money would go toward establishing societies in the asteroid belt and then outside the solar system.
  • $0.5 trillion on communications / computing hardware for everyone who can’t currently afford that.
  • $1 trillion I’d save for ideas I think of later.
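As a quick sanity check on the arithmetic, the line items above do exhaust the full $10 trillion budget. Here’s a minimal sketch (the category labels are my own shorthand, not the post’s):

```python
# Sanity-check that the proposed allocations, in trillions of dollars,
# sum to the $10 trillion budget from the Overcoming Bias challenge.
allocations = {
    "drug patent buyouts and prizes": 5.0,
    "seasteads and migration rewards": 1.0,
    "political-leadership prizes": 1.0,
    "prediction-market subsidies": 0.5,
    "existential risk reduction": 1.0,
    "communications/computing hardware": 0.5,
    "reserve for later ideas": 1.0,
}
total = sum(allocations.values())
print(f"Total allocated: ${total} trillion")  # prints: Total allocated: $10.0 trillion
assert total == 10.0
```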

I’m not counting a bunch of other projects that would use up less than $100 billion since they’re small enough to fit in the rounding errors of the ones I’ve counted (the Methuselah Mouse prize, desalinization and other water purification technologies, developing nanotech, preparing for the risks of nanotech, uploading, cryonics, nature preserves, etc).