ethics


Descriptions of AI-relevant ontological crises typically choose examples where it seems moderately obvious how humans would want to resolve the crises. I describe here a scenario where I don’t know how I would want to resolve the crisis.

I will incidentally express distaste for some philosophical beliefs.

Suppose a powerful AI is programmed to have an ethical system with a version of the person-affecting view: a version which says only persons who exist are morally relevant, and “exist” refers only to the present time. [Note that the most sophisticated advocates of the person-affecting view are willing to treat future people as real, and only object to comparing those people to other possible futures where those people don’t exist.]

Suppose also that it is programmed by someone who thinks in Newtonian models. Then something happens which prevents the programmer from correcting any flaws in the AI. (For simplicity, I’ll say the programmer dies, and the AI was programmed to accept changes to its ethical system only from the programmer.)

What happens when the AI tries to make ethical decisions about people in distant galaxies (hereinafter “distant people”) using a model of the universe that works like relativity?
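
To make the problem concrete, here is a minimal sketch (my own illustration with made-up numbers, not anything from the original argument): under special relativity, which distant events count as “the present time” depends on the observer’s velocity, so the AI’s rule picks out no unique set of distant people.

```python
# Minimal sketch of why "exists at the present time" is frame-dependent.
# Changing the observer's velocity by v tilts the hyperplane of
# simultaneity: "now" at distance d shifts by dt = d * v / c**2
# (the Rietdijk-Putnam / "Andromeda paradox" calculation).

C = 299_792_458.0                 # speed of light, m/s
METERS_PER_LIGHT_YEAR = 9.4607e15
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def simultaneity_shift_years(distance_ly: float, v: float) -> float:
    """Years by which 'now' at distance_ly light years shifts when the
    observer's velocity changes by v m/s (to first order in v/c)."""
    d = distance_ly * METERS_PER_LIGHT_YEAR
    return d * v / C**2 / SECONDS_PER_YEAR

# Walking across the room (1.5 m/s) re-orders "the present" in a galaxy
# a billion light years away by about five years:
print(simultaneity_shift_years(1e9, 1.5))   # ~5.0
```

So an AI that cares only about people who “exist now” gets different answers about distant people depending on an ethically arbitrary choice of reference frame.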


Book review: The Life You Can Save, by Peter Singer.

This book presents some unimpressive moral claims, and some more pragmatic social advocacy that is rather impressive.

The Problem

It is all too common to talk as if all human lives had equal value, yet act as if the value of distant strangers’ lives was a few hundred dollars.

Singer is effective at arguing against standard rationalizations for this discrepancy.

He provides an adequate summary of reasons to think most of us can easily save many lives.

In this post, I’ll describe features of the moral system that I use. I expect that it’s similar enough to Robin Hanson’s views that I’ll use his name for it, dealism, although I haven’t seen a well-organized description of dealism. (See a partial description here.)

It’s also pretty similar to the system that Drescher described in Good and Real, combined with Anna Salamon’s description of causal models for Newcomb’s problem (which describes how to replace Drescher’s confused notion of “subjunctive relations” with a causal model). Good and Real eloquently describes why people should want to follow a dealist-like moral system; my post will be easier to understand if you understand Good and Real.

The most similar mainstream system is contractarianism. Dealism applies to a broader set of agents, and depends less on the initial conditions. I haven’t read enough about contractarianism to decide whether dealism is a special type of contractarianism or whether it should be classified as something separate. Gauthier’s writings look possibly relevant, but I haven’t found time to read them.

Scott Aaronson’s eigenmorality also overlaps a good deal with dealism, and is maybe a bit easier to understand.
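
For readers unfamiliar with it, here is a rough sketch of the eigenmorality idea as I understand it (a toy example of mine, not Aaronson’s code): score each agent by how much it cooperates with high-scoring agents, which makes the scores the leading eigenvector of a cooperation matrix, computed below by power iteration.

```python
import numpy as np

# Toy eigenmorality calculation: coop[i][j] = how much agent i cooperates
# with agent j (made-up numbers). An agent is moral to the extent that it
# cooperates with moral agents, so the scores satisfy s = coop @ s (up to
# scale), i.e. they form the leading eigenvector of coop.
coop = np.array([
    [0.0, 1.0, 1.0, 1.0],
    [1.0, 0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0],   # agent 3 cooperates with no one
])

scores = np.ones(4)
for _ in range(100):        # power iteration toward the leading eigenvector
    scores = coop @ scores
    scores /= scores.sum()

print(scores)   # agents 0-2 get equal positive scores; agent 3 gets ~0
```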

Under dealism, morality consists of rules / agreements / deals, especially those that can be universalized. We become more civilized as we coordinate better to produce more cooperative deals. I’m being somewhat ambiguous about what “deal” and “universalized” mean, but those ambiguities don’t seem important to the major disagreements over moral systems, and I want to focus in this post on high-level disagreements.

Will young ems be created? Why and how will it happen?

Any children that exist as ems will be important as em societies mature, because they will adapt better to em environments than ems who were uploaded as adults, making the children more productive.

The Age of Em says little about children, presumably in part because no clear outside view predictions seem possible.

This post will use a non-Hansonian analysis style to speculate about which children will become ems. I’m writing this post to clarify my more speculative thoughts about how em worlds will work, without expecting to find much evidence to distinguish the good ideas from the bad ones.

Robin predicts few regulatory obstacles to uploading children, because he expects the world to be dominated by ems. I’m skeptical of that. Ems will be dominant in the sense of having most of the population, but that doesn’t tell us much about em influence on human society – farmers became a large fraction of the world population without meddling much in hunter-gatherer political systems. And it’s unclear whether em political systems would want to alter the relevant regulations – em societies will have much the same conflicting interest groups pushing for and against immigration that human societies have.

How much of Robin’s prediction of low regulation is due to his desire to start by analyzing a relatively simple scenario (low regulation) and add complexity later?


Ethical Diet Reviewed

My first year of eating no factory farmed vertebrates went fairly well.

When eating at home, it took no extra cost or effort to stick to the diet.

I’ve become less comfortable eating at restaurants, because I find few acceptable choices at most restaurants, and because poor labeling has caused me to mistakenly get food that wasn’t on my diet.

The constraints were strict enough that I lost about 4 pounds during 8 days away from home over the holidays. That may have been healthier than the weight gain I succumbed to during similar travels in prior years, but that weight loss is close to the limit of what I find comfortable.

In theory, my rule allowing 120 calories per month of unethical animal products should have given me enough flexibility to be mostly comfortable with restaurant food. In practice, I found it psychologically easier to adopt an identity of someone who doesn’t eat any factory farmed vertebrates than it would have been to feel comfortable using up the 120 calorie quota. That made me reluctant to use any flexibility.

The quota may have been valuable for avoiding a feeling of failure when I made mistakes.

Berkeley is a relatively easy place to adopt this diet, thanks to Marin Sun Farms and Mission Heirloom. Pasture-raised eggs are fairly easy to find in the Bay Area (Berkeley Bowl, Whole Foods, etc).

I still have some unresolved doubts about how much to trust labels. Pasture-raised eggs are available in Colorado in winter, but chicken meat is reportedly unavailable due to weather-related limits on keeping chickens outdoors. Why doesn’t that reasoning also apply to eggs?

I’m still looking for a good substitute for QuestBars. These come closest:

For most people, following my diet strictly would be hard enough that I recommend starting with an easier version. One option would be to avoid factory farmed chicken/eggs (i.e. focus on avoiding the cruelest choices). And please discriminate against restaurants that don’t label their food informatively.

I plan to continue my diet essentially unchanged, with maybe slightly less worry about what I eat when traveling or at parties.

Ethical diets

I’ve seen some discussion of whether effective altruists have an obligation to be vegan or vegetarian.

The carnivores appear to underestimate the long-term effects of their actions. I see a nontrivial chance that we’re headed toward a society in which humans are less powerful than some other group of agents. This could result from slow AGI takeoff producing a heterogeneous society of superhuman agents. Or there could be a long period in which the world is dominated by ems before de novo AGI becomes possible. Establishing ethical (and maybe legal) rules that protect less powerful agents may influence how AGIs treat humans or how high-speed ems treat low-speed ems and biological humans [0]. A one in a billion chance that I can alter this would be worth some of my attention. There are probably other similar ways that an expanding circle of ethical concern can benefit future people.

I see very real costs to adopting an ethical diet, but it seems implausible that EAs are merely choosing alternate ways of being altruistic. How much would it cost MealSquares customers to occasionally bemoan MealSquares’ use of products from apparently factory-farmed animals? Instead, EAs seem to have some tendency to actively raise the status of MealSquares [1].

I don’t find it useful to compare a more ethical diet to GiveWell donations for my personal choices, because I expect my costs to be mostly inconveniences, and the marginal value of my time seems small [2], with little fungibility between them.

I’m reluctant to adopt a vegan diet due to the difficulty of evaluating the health effects and due to the difficulty of evaluating whether it would mean fewer animals living lives that they’d prefer to nonexistence.

But there’s little dispute that most factory-farmed animals are much less happy than pasture-raised animals. And everything I know about the nutritional differences suggests that avoiding factory-farmed animals improves my health [3].

I plan not to worry about factory-farmed invertebrates for now (shrimp, oysters, insects), partly because some of the harmful factory-farm practices, such as confining animals to cages not much bigger than the animals in question, aren’t likely with animals that small.

So my diet will consist of vegan food plus shellfish, insects, wild-caught fish, pasture-raised birds/mammals (and their eggs/whey/butter). I will assume vertebrate animals are raised in cruel conditions unless they’re clearly marked as wild-caught, grass-fed, or pasture-raised [4].

I’ve made enough changes to my diet for health reasons that this won’t require large changes. I already eat at home mostly, and the biggest change to that part of my diet will involve replacing QuestBars with a home-made version using whey protein from grass-fed cows (my experiments so far indicate it’s inconvenient and hard to get a decent texture). I also have some uncertainty about pork belly [5] – the pasture-raised version I’ve tried didn’t seem as good, but that might be because I didn’t know it needed to be sliced very thin.

My main concern is large social gatherings. It has taken me a good deal of willpower to stick to a healthy diet under those conditions, and I expect it to take more willpower to observe ethical constraints.

A 100% pure diet would be much harder for me to achieve than an almost pure diet, and it takes some time for me to shift my habits. So for this year I plan to estimate how many calories I eat that don’t fit this diet, and aim to keep that under 120 calories per month (about 0.2%, assuming roughly 2,000 calories per day) [6]. I’ll re-examine the specifics of this plan next Jan 1.

Does anyone know a convenient name for my planned diet?

footnotes

0. With no one agent able to conquer the world, it’s costly for a single agent to repudiate an existing rule. A homogeneous group of superhuman agents might coordinate to overcome this, but with heterogeneous agents the coordination costs may matter.

1. I bought 3 orders of MealSquares, but have stopped buying for now. If they sell a version whose animal products are ethically produced (which I’m guessing would cost $50/order more), I’ll resume buying them occasionally.

2. The average financial value of my time is unusually high, but I often have trouble estimating whether spending more time earning money has positive or negative financial results. I expect financial concerns will be more important to many people.

3. With the probable exception of factory-farmed insects, oysters, and maybe other shellfish.

4. In most restaurants, this will limit me to vegan food and shellfish.

5. Pork belly is unsliced bacon without the harm caused by smoking.

6. Yes, I’ll have some incentive to fudge those estimates. My experience from tracking food for health reasons suggests possible errors of 25%. That’s not too bad compared to other risks such as lack of willpower.

Book review: Singularity Hypotheses: A Scientific and Philosophical Assessment.

This book contains papers of widely varying quality on superhuman intelligence, plus some fairly good discussions of what ethics we might hope to build into an AGI. Several chapters resemble cautious versions of LessWrong posts; others come from a worldview totally foreign to LessWrong.

The chapter I found most interesting was Richard Loosemore and Ben Goertzel’s attempt to show there are no likely obstacles to a rapid “intelligence explosion”.

I expect what they label as the “inherent slowness of experiments and environmental interaction” to be an important factor limiting the rate at which an AGI can become more powerful. They think they see evidence from current science that this is an unimportant obstacle compared to a shortage of intelligent researchers: “companies complain that research staff are expensive and in short supply; they do not complain that nature is just too slow.”

Some explanations that come to mind are:

  • Complaints about nature being slow are not very effective at speeding up nature.
  • Complaints about specific tools being slow probably aren’t very unusual, but there are plenty of cases where people don’t complain because they know complaints won’t be effective (e.g. about spacecraft traveling slower than the theoretical maximum [*]).
  • Hiring more researchers can increase the status of a company even if the additional staff don’t advance knowledge.

They also find it hard to believe that we have independently reached the limit of the physical rate at which experiments can be done at the same time we’ve reached the limits of how many intelligent researchers we can hire. For literal physical limits this would indeed be a suspicious coincidence, but if it’s roughly as hard to speed up experiments as it is to add more intelligence to research, then the apparent coincidence could instead reflect wise allocation of resources to whichever bottleneck they’re better used in.
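
A toy model (my numbers, nothing from the book) shows why no coincidence is required: if output needs both experiments and researchers, spending a fixed budget optimally equalizes marginal returns, so both inputs look maxed out at once.

```python
# Toy model: output requires both experiment capacity E and researcher
# effort R, say output = sqrt(E * R). Splitting a fixed budget optimally
# equalizes marginal returns, so both bottlenecks bind simultaneously.

BUDGET = 100.0

def output(spend_on_experiments: float) -> float:
    e = spend_on_experiments
    r = BUDGET - e                 # remainder goes to researchers
    return (e * r) ** 0.5

# Grid search over the split finds the optimum at 50/50, where a marginal
# dollar helps equally in either direction:
best = max(range(1, 100), key=output)
print(best, output(best))          # 50 50.0 -- an even split wins
print(output(60) - output(50))     # ~ -1.0: over-funding either input loses
```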

None of this suggests that it would be hard for an intelligence explosion to produce the 1000x increase in intelligence they talk about over a century, but it does look like an important obstacle to the faster takeoffs some people expect (days or weeks).

Some shorter comments on other chapters:

James Miller describes some disturbing incentives that investors would create for companies developing AGI if AGI is developed by companies large enough that no single investor has much influence on the company. I’m not too concerned about this because if AGI were developed by such a company, I doubt that small investors would have enough awareness of the project to influence it. The company might not publicize the project, or might not be honest about it. Investors might not believe accurate reports if they got them, since the reports won’t sound much different from projects that have gone nowhere. It seems very rare for small investors to understand any new software project well enough to distinguish between an AGI that goes foom and one that merely makes some people rich.

David Pearce expects the singularity to come from biological enhancements, because computers don’t have human qualia. He expects it would be intractable for computers to analyze qualia. It’s unclear to me whether this is supposed to limit AGI power because it would be hard for AGI to predict human actions well enough, or because the lack of qualia would prevent an AGI from caring about its goals.

Itamar Arel believes AGI is likely to be dangerous, and suggests dealing with the danger by limiting the AGI’s resources (without saying how it can be prevented from outsourcing its thought to other systems), and by “educational programs that will help mitigate the inevitable fear humans will have” (if the dangers are real, why is less fear desirable?).

* No, that example isn’t very relevant to AGI. Better examples would be atomic force microscopes, or the stock market (where it can take a generation to get a new test of an important pattern), but it would take lots of effort to convince you of that.

Book review: The Righteous Mind: Why Good People Are Divided by Politics and Religion, by Jonathan Haidt.

This book carefully describes the evolutionary origins of human moralizing, explains why tribal attitudes toward morality have both good and bad effects, and shows how people who want to avoid moral hostility can do so.

Parts of the book are arranged to describe the author’s transition away from standard delusions about morality being the result of the narratives we use to justify it, and about why other people hold alien-sounding ideologies. His description of how his study of psychology led him to overcome those delusions makes it hard for those who agree with him to feel very superior to those who disagree.

He hints at personal benefits from abandoning partisanship (“It felt good to be released from partisan anger.”), so he doesn’t rely on altruistic motives for people to accept his political advice.

One part of the book that surprised me was the comparison between human morality and human taste buds. Some ideologies are influenced a good deal by all 6 types of human moral intuitions. But the ideology that pervades most of academia respects only 3 types (care, liberty, and fairness). That creates a difficult communication gap between academics and cultures that employ others, such as sanctity, in their moral systems, much as people who had only experienced sweet and salty foods would have trouble imagining a desire for sourness in some foods.

He sometimes gives the impression of being more of a moral relativist than I’d like, but a careful reading of the book shows that there are a fair number of contexts in which he believes some moral tastes produce better results than others.

His advice could be interpreted as encouraging us to replace our existing notions of “the enemy” with Manichaeans. Would his advice polarize societies into Manichaeans and non-Manichaeans? Maybe, but at least the non-Manichaeans would have a decent understanding of why Manichaeans disagreed with them.

The book also includes arguments that group selection played an important role in human evolution, and that an increase in cooperation (group-mindedness, somewhat like the cooperation among bees) had to evolve before language could become valuable enough to evolve. This is an interesting but speculative alternative to the common belief that language was the key development that differentiated humans from other apes.

The Honor Code

Book review: The Honor Code: How Moral Revolutions Happen by Kwame Anthony Appiah.

This book argues that moral changes such as the abolition of dueling, slavery, and foot-binding are not the result of new understanding of why they are undesirable. They result from changes in how they affect the honor (or status) of the groups that have the power to create the change.

Dueling was mostly associated with a hereditary class of gentlemen, and feeling a responsibility to duel was a symbol of that status. When the nature of the upper class changed to include a much less well-defined group of successful businessmen, and society became more egalitarian, the distinction gained by demonstrating membership in the hereditary elite lost enough value that the costs of dueling outweighed the prestige.

Slave-owners increasingly portrayed the labor that slaves performed in a way that also implied the work of British manual laborers deserved low status, and the rising resentment and political power of that labor class created a movement to abolish slavery.

Chinese elites could no longer ignore the opinions of elites in other nations once those nations’ military and technological might made it hard for China to dismiss them as inferior; that altered the class of people from whom Chinese elites wanted respect.

These are plausible stories, backed by a modest amount of evidence. I don’t know of any strong explanations that compete with this. But I don’t get the impression that the author tried as hard as I would like to find evidence for competing explanations. For instance, he presents some partial evidence to the effect that Britain abolished slavery at a time when slavery was increasingly profitable. But I didn’t see any consideration of the costs of keeping slaves from running away, which I expect were increasing due to improved long-distance transportation such as railroads. He lists references which might constitute authoritative support for his position, but it looks like it would be time-consuming to verify that.

Whether this book can help spark new moral revolutions is unclear, but it should make our efforts to do so more cost-effective, if only by reducing the effort put into ineffective approaches.

Book review: Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen.

This book combines the ideas of leading commentators on ethics, methods of implementing AI, and the risks of AI, into a set of ideas on how machines ought to achieve ethical behavior.

The book mostly provides an accurate survey of what those commentators agree and disagree about. But there’s enough disagreement that we need some insights into which views are correct (especially about theories of ethics) in order to produce useful advice to AI designers, and the authors don’t have those kinds of insights.

The book focuses more on the near-term risks of software that is much less intelligent than humans, and is complacent about the risks of superhuman AI.

The implications of superhuman AIs for theories of ethics ought to illuminate flaws in those theories that aren’t obvious when considering purely human-level intelligence. For example, the authors mention an argument that any AI would value humans for their diversity of ideas, which would help AIs search the space of possible ideas. This seems to have serious problems, such as: what stops an AI from fiddling with human minds to increase their diversity? Yet the authors are too focused on human-like minds to imagine an intelligence which would do that.

Their discussion of the advocates of friendly AI seems a bit confused. The authors wonder if those advocates are trying to quell apprehension about AI risks, when I’ve observed pretty consistent efforts by those advocates to create apprehension among AI researchers.