
In this post, I’ll describe features of the moral system that I use. I expect that it’s similar enough to Robin Hanson’s views that I’ll use his name for it, dealism, but I haven’t seen a well-organized description of dealism. (See a partial description here.)

It’s also pretty similar to the system that Drescher described in Good and Real, combined with Anna Salamon’s description of causal models for Newcomb’s problem (which describes how to replace Drescher’s confused notion of “subjunctive relations” with a causal model). Good and Real eloquently describes why people should want to follow a dealist-like moral system; my post will be easier to follow if you understand Good and Real.

The most similar mainstream system is contractarianism. Dealism applies to a broader set of agents, and depends less on the initial conditions. I haven’t read enough about contractarianism to decide whether dealism is a special type of contractarianism or whether it should be classified as something separate. Gauthier’s writings look possibly relevant, but I haven’t found time to read them.

Scott Aaronson’s eigenmorality also overlaps a good deal with dealism, and is maybe a bit easier to understand.

Under dealism, morality consists of rules / agreements / deals, especially those that can be universalized. We become more civilized as we coordinate better to produce more cooperative deals. I’m being somewhat ambiguous about what “deal” and “universalized” mean, but those ambiguities don’t seem important to the major disagreements over moral systems, and I want to focus in this post on high-level disagreements.

Book review: Into the Gray Zone: A Neuroscientist Explores the Border Between Life and Death, by Adrian Owen.

Too many books and talks have gratuitous displays of fMRIs and neuroscience. At last, here’s a book where fMRIs are used with fairly good reason, and neuroscience is explained only when that’s appropriate.

Owen provides evidence of near-normal brain activity in a modest fraction of people who had been classified as being in a persistent vegetative state. They are capable of answering yes or no to most questions, and show signs of understanding the plots of movies.

Owen believes this evidence is enough to say they’re conscious. I suspect he’s mostly right about that, and that they do experience much of the brain function that is typically associated with consciousness. Owen doesn’t have any special insights into what we mean by the word consciousness. He mostly just investigates how to distinguish between near-normal mental activity and seriously impaired mental activity.

So what were neurologists previously using to classify people as vegetative? As far as I can tell, they were diagnosing based on a lack of motor responses, even though they were aware of an alternate diagnosis, total locked-in syndrome, with identical symptoms. The terms locked-in syndrome and persistent vegetative state were both coined (in part) by the same person (but I’m unclear on who coined the term total locked-in syndrome).

My guess is that the diagnoses have been influenced by a need for certainty (whose need? family members’? doctors’? it’s not obvious).

The book has a bunch of mostly unremarkable comments about ethics. But I was impressed by Owen’s observation that people misjudge whether they’d want to die if they end up in a locked-in state. So how likely is it that they’ll mispredict what they’d want in other similar conditions? I should have deduced this from the book Stumbling on Happiness, but I failed to think about it.

I’m a bit disturbed by Owen’s claim that late-stage Alzheimer’s patients have no sense of self. He doesn’t cite evidence for this conclusion, and his research should hint to him that it would be quite hard to get good evidence on this subject.

Most books written by scientists who made interesting discoveries attribute the author’s success to their competence. This book provides clear evidence for the accidental nature of at least some science. Owen could easily have gotten no signs of consciousness from the first few patients he scanned. Given the effort needed for the scans, I can imagine that that would have resulted in a mistaken consensus of experts that vegetative states were being diagnosed correctly.

Book review: Darwin’s Unfinished Symphony: How Culture Made the Human Mind, by Kevin N. Laland.

This book is a mostly good complement to Henrich’s The Secret of Our Success. The two books provide different, but strongly overlapping, perspectives on how cultural transmission of information played a key role in the evolution of human intelligence.

The first half of the book describes the importance of copying behavior in many animals.

I was a bit surprised that animals as simple as fruit flies are able to copy some behaviors of other fruit flies. Laland provides good evidence that a wide variety of species have evolved some ability to copy behavior, and that ability is strongly connected to the benefits of acquiring knowledge from others and the costs of alternative ways of acquiring that knowledge.

Yet I was also surprised that, except in humans, the value of copying is strongly limited by the low fidelity with which behavior is copied. Laland makes plausible claims that the need for high-fidelity copying of behavior was an important driving force behind the evolution of bigger and more sophisticated brains.

Laland claims that humans have a unique ability to teach, and that teaching is an important adaptation. He means teaching in a much broader sense than we see in schooling – he includes basic stuff that could have preceded language, such as a parent directing a child’s attention to things that the child ought to learn. This seems like a good extension to Henrich’s ideas.

The most interesting chapter theorizes about the origin of human language. Laland’s theory that language evolved for teaching posits a somewhat stronger selection pressure than competing theories do, but he doesn’t provide much reason to reject those competitors.

Laland presents seven criteria for a good explanation of the evolution of language. But these criteria look somewhat biased toward his theory.

Laland’s first two criteria are that language should have been initially honest and cooperative. He implies that it must have been more honest and cooperative than modern language use is, but he isn’t as clear about that as I would like. Those two criteria seem designed as arguments against the theory that language evolved to impress potential mates. The mate-selection theory involves plenty of competition, and presumably a fair amount of deception. But better communicators do convey important evidence about the quality of their genes, even if they’re engaging in some deception. That seems sufficient to drive the evolution of language via mate-selection pressures.

Laland’s theory seems to provide a somewhat better explanation of when language evolved than most other theories do, so I’m inclined to treat it as one of the top theories. But I don’t expect any consensus on this topic anytime soon.

The book’s final four chapters seemed much less interesting. I recommend skipping them.

Henrich’s book emphasized evidence that humans are pretty similar to other apes. Laland emphasizes ways in which humans are unique (language and teaching ability). I didn’t notice any cases where they directly contradicted each other, but it’s a bit disturbing that they left quite different impressions while saying mostly appropriate things.

Henrich claimed that increasing climate variability created increased rewards for the fast adaptation that culture enabled. Laland disagrees, saying that cultural change itself is a more plausible explanation for the kind of environmental change that incentivized faster adaptation. My intuition says that Laland’s conclusion is correct, but he seems a bit overconfident about it.

Overall, Laland’s book is less comprehensive and less impressive than Henrich’s book, but is still good enough to be in my top ten list of books on the evolution of intelligence.

Update on 2017-08-18: I just read another theory about the evolution of language which directly contradicts Laland’s claim that early language needed to be honest and cooperative. Wild Voices: Mimicry, Reversal, Metaphor, and the Emergence of Language claims that an important role of initial human vocal flexibility was to deceive other species.

Or, why I don’t fear the p-zombie apocalypse.

This post analyzes concerns about how evolution, in the absence of a powerful singleton, might, in the distant future, produce what Nick Bostrom calls a “Disneyland without children”. I.e. a future with many agents, whose existence we don’t value because they are missing some important human-like quality.

The most serious description of this concern is in Bostrom’s The Future of Human Evolution. Bostrom is cautious enough that it’s hard to disagree with anything he says.

Age of Em has prompted a batch of similar concerns. Scott Alexander at SlateStarCodex has one of the better discussions (see section IV of his review of Age of Em).

People sometimes sound like they want to use this worry as an excuse to oppose the age of em scenario, but it applies to just about any scenario with human-in-a-broad-sense actors. If uploading never happens, biological evolution could produce slower paths to the same problem(s) [1]. Even in the case of a singleton AI, the singleton will need to solve the tension between evolution and our desire to preserve our values, although in that scenario it’s more important to focus on how the singleton is designed.

These concerns often assume something like the age of em lasts forever. The scenario which Age of Em analyzes seems unstable, in that it’s likely to be altered by stranger-than-human intelligence. But concerns about evolution only depend on control being sufficiently decentralized that there’s doubt about whether a central government can strongly enforce rules. That situation seems sufficiently stable to be worth analyzing.

I’ll refer to this thing we care about as X (qualia? consciousness? fun?), but I expect people will disagree on what matters for quite some time. Some people will worry that X is lost in uploading, others will worry that some later optimization process will remove X from some future generation of ems.

I’ll first analyze scenarios in which X is a single feature (in the sense that it would be lost in a single step). Later, I’ll try to analyze the other extreme, where X is something that could be lost in millions of tiny steps. Neither extreme seems likely, but I expect that analyzing the extremes will illustrate the important principles.


Book review: The Hungry Brain: Outsmarting the Instincts That Make Us Overeat, by Stephan Guyenet.

Researchers who studied obesity in rats used to have trouble coaxing their rats to overeat. The obvious approaches (a high fat diet, or a high sugar diet) were annoyingly slow. Then they stumbled on the approach of feeding human junk food to the rats, and made much faster progress.

What makes something “junk food”? The best parts of this book help to answer this, although some ambiguity remains. It mostly boils down to palatability (is it yummier than what our ancestors evolved to expect? If so, it’s somewhat addictive) and caloric density.

Presumably designers of popular snack foods have more sophisticated explanations of what makes people obese, since that’s apparently identical to what they’re paid to optimize (with maybe a few exceptions, such as snacks that are marketed as healthy or ethical). Yet researchers who officially study obesity seem reluctant to learn from snack food experts. (Because they’re the enemy? Because they’re low status? Because they work for evil corporations? Your guess is likely as good as mine.)

Guyenet provides fairly convincing evidence that it’s simple to achieve a healthy weight while feeling full. (E.g. the 20 potatoes a day diet). To the extent that we need willpower, it’s to avoid buying convenient/addictive food, and to avoid restaurants.

My experience is that I need a moderate amount of willpower to follow Guyenet’s diet ideas, and that it would require a large amount of willpower if I attended many social events involving food. But for full control over my weight, it seemed like I needed to supplement a decent diet with some form of intermittent fasting (e.g. alternate-day calorie restriction); Guyenet says little about that.

Guyenet’s practical advice boils down to a few simple rules: eat whole foods that resemble what our ancestors ate; don’t have other “food” anywhere that you can quickly grab it; sleep well; exercise; avoid stress. That’s sufficiently similar to advice I’ve heard before that I’m confident The Hungry Brain won’t revolutionize many people’s understanding of obesity. But it’s got a pretty good ratio of wisdom to questionable advice, and I’m unaware of reasons to expect much more than that.

Guyenet talks a lot about neuroscience. That would make sense if readers wanted to learn how to fix obesity via brain surgery. The book suggests that, in the absence of ethical constraints, it might be relatively easy to cure obesity by brain surgery. Yet I doubt such a solution would become popular, even given optimistic assumptions about safety.

An alternate explanation is that Guyenet is showing off his knowledge of brains, in order to show that he’s smart enough to have trustworthy beliefs about diets. But that effect is likely small, due to competition among diet-mongers for comparable displays of smartness.

Or maybe he’s trying to combat dualism, in order to ridicule the “just use willpower” approach to diet? Whatever the reason is, the focus on neuroscience implies something unimpressive about the target audience.

You should read this book if you eat a fairly healthy diet but are still overweight. Otherwise, read Guyenet’s blog instead, for a wider variety of health advice.

The paper When Will AI Exceed Human Performance? Evidence from AI Experts reports that ML researchers assign a 5% chance to AI having “Extremely bad (e.g. human extinction)” consequences, yet they’re quite divided over whether that implies it’s an important problem to work on.

Slate Star Codex expresses confusion about and/or disapproval of (a slightly different manifestation of) this apparent paradox. It’s a pretty clear sign that something is suboptimal.

Here are some conjectures (not designed to be at all mutually exclusive).

Book review: Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead, by Brené Brown.

I almost didn’t read this because I was unimpressed by the TEDx video version of it, but parts of the book were pretty good (mainly chapters 3 and 4).

The book helped clarify my understanding of shame: how it differs from guilt, how it often constrains us without accomplishing anything useful, and how to reduce it.

She emphasizes that we can reduce shame by writing down or talking about shameful thoughts. She doesn’t give a strong explanation of what would cause that effect, but she prompted me to generate one: parts of my subconscious mind initially want to hide the shameful thoughts, and that causes them to fight the parts of my mind that want to generate interesting ideas. The act of communicating those ideas to the outside world convinces those censor-like parts of my mind to worry less about the ideas (because it’s too late? or because the social response is evidence that the censor was mistakenly worried? I don’t know).

I was a bit confused by her use of the phrase “scarcity culture”. I was initially tempted to imagine she wanted us to take a Panglossian view in which we ignore the resource constraints that keep us from eliminating poverty. But the context suggests she’s thinking more along the lines of “a culture of envy”. Or maybe a combination of perfectionism plus status seeking? Her related phrase “never enough” makes sense if I interpret it as “never impressive enough”.

I find it hard to distinguish those “bad” attitudes from the attitudes I need in order to strive for self-improvement.

She attempts to explain that distinction in a section on perfectionism. She compares perfectionism to healthy striving by noting that perfectionism focuses on what other people will think of us, whereas healthy striving is self-focused. Yet I’m pretty sure I’ve managed to hurt myself with perfectionism while focusing mostly on worries about how I’ll judge myself.

I suspect that healthy striving requires more focus on the benefits of success, and less attention to fear of failure, than is typical of perfectionism. The book hints at this, but doesn’t say it clearly when talking about perfectionism. Maybe she describes perfectionism better in her book The Gifts of Imperfection. Should I read that?

Her claim “When we stop caring about what people think, we lose our capacity for connection” feels important, and points to an area where I have trouble.

The book devotes too much attention to gender-stereotypical problems with shame. Those stereotypes are starting to look outdated. And it shouldn’t require two whole chapters to say that advice on how to have healthy interactions with people should also apply to relations at work, and to relations between parents and children.

The book was fairly easy to read, and parts of it are worth rereading.

A new paper titled When Will AI Exceed Human Performance? Evidence from AI Experts reports some bizarre results. From the abstract:

Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans.

So we should expect a 75-year period in which machines can perform all tasks better and more cheaply than humans, but can’t automate all occupations. Huh?

I suppose there are occupations that consist mostly of having status rather than doing tasks (queen of England, or waiter at a classy restaurant that won’t automate service due to the high status of serving food the expensive way). Or occupations protected by law, such as gas station attendants who pump gas in New Jersey, decades after most drivers switched to pumping for themselves.

But I’d be rather surprised if machine learning researchers would think of those points when answering a survey in connection with a machine learning conference.

Maybe the actual wording of the survey questions caused a difference that got lost in the abstract? Hmmm …

“High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers

versus

when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers.

I tried to convince myself that the second version got interpreted as referring to actually replacing humans, while the first version referred to merely being qualified to replace humans. But the more I compared the two, the more that felt like wishful thinking. If anything, the “unaided” in the first version should make that version look farther in the future.

Can I find any other discrepancies between the abstract and the details? The 120 years in the abstract turns into 122 years in the body of the paper. So the authors seem to be downplaying the weirdness of the results.

There’s even a prediction of a 50% chance that the occupation “AI researcher” will be automated in about 88 years (I’m reading that from figure 2; I don’t see an explicit number for it). I suspect some respondents said this would take longer than for machines to “accomplish every task better and more cheaply”, but I don’t see data in the paper to confirm that [1].

A more likely hypothesis is that researchers alter their answers based on what they think people want to hear. Researchers might want to convince their funders that AI deals with problems that can be solved within the career of the researcher [2], while also wanting to reassure voters that AI won’t create massive unemployment until the current generation of workers has retired.

That would explain the general pattern of results, although the magnitude of the effect still seems strange. And it would imply that most machine learning researchers are liars, or have so little understanding of when HLMI will arrive that they don’t notice a 50% shift in their time estimates.

The ambiguity in terms such as “tasks” and “better” could conceivably explain confusion over the meaning of HLMI. I keep intending to write a blog post that would clarify concepts such as human-level AI and superintelligence, but then procrastinating because my thoughts on those topics are unclear.

It’s hard to avoid the conclusion that I should reduce my confidence in any prediction of when AI will reach human-level competence. My prior 90% confidence interval was something like 10 to 300 years. I guess I’ll broaden it to maybe 8 to 400 years [3].

P.S. – See also Katja’s comments on prior surveys.

[1] – the paper says most participants were asked the question that produced the estimate of 45 years to HLMI, while the rest got the question that produced the 122-year estimate. So the median for all participants ought to be less than about 84 years, unless there are some unusual quirks in the data (see the rough sketch after these footnotes).

[2] – but then why do experienced researchers say human-level AI is farther in the future than new researchers do, when the new researchers presumably will be around longer? Maybe the new researchers are chasing fads or get-rich-quick schemes, and will mostly quit before becoming senior researchers?

[3] – years of subjective time as experienced by the fastest ems. So probably nowhere near 400 calendar years.
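To make the arithmetic in footnote [1] concrete, here is a rough sketch with made-up numbers; the 2:1 split between framings and the log-normal spreads are my assumptions for illustration, not figures from the paper. It just shows that when most respondents answer the framing whose median is about 45 years, the pooled median falls well below the ~84-year midpoint of the two group medians.

# Illustrative only: synthetic survey responses with the reported group
# medians, assuming a 2:1 split between framings and log-normal spread.
import numpy as np

rng = np.random.default_rng(0)
hlmi = 45 * rng.lognormal(0.0, 0.8, 200)   # "all tasks" framing, median ~45 years
jobs = 122 * rng.lognormal(0.0, 0.8, 100)  # "all occupations" framing, median ~122 years

pooled = np.concatenate([hlmi, jobs])
print(np.median(hlmi), np.median(jobs), np.median(pooled))
# With two-thirds of the responses in the lower-median group, the pooled
# median comes out well below the ~84-year midpoint of 45 and 122.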

[Another underwhelming book; I promise to get out of the habit of posting only book reviews Real Soon Now.]

Book review: Seeing like a State: How Certain Schemes to Improve the Human Condition Have Failed, by James C. Scott.

Scott begins with a history of the tension between the desire for legibility and the desire for local control. E.g. central governments wanted to know how much they could tax peasants without causing famine or revolt. Yet even in the optimistic case where they got an honest tax collector to report how many bushels of grain John produced, they had problems due to John’s village having an idiosyncratic meaning of “bushel” that the tax collector couldn’t easily translate to something the central government knew. And it was hard to keep track of whether John had paid the tax, since the central government didn’t understand how the villagers distinguished that John from the John who lived a mile away.

So governments that wanted to grow imposed lots of standards on people. That sometimes helped peasants by making their taxes fairer and more predictable, but often trampled over local arrangements that had worked well (especially complex land use agreements).

I found that part of the book to be a fairly nice explanation of why an important set of conflicts was nearly inevitable. Scott gives a relatively balanced view of how increased legibility had both good and bad effects (more efficient taxation, diseases tracked better, Nazis found more Jews, etc.).

Then Scott becomes more repetitive and one-sided when describing high modernism, which carried the desire for legibility to a revolutionary, authoritarian extreme (especially between 1920 and 1960). I didn’t want 250 pages of evidence that Soviet style central planning was often destructive. Maybe that conclusion wasn’t obvious to enough people when Scott started writing the book, but it was painfully obvious by the time the book was published.

Scott’s complaints resemble the Hayekian side of the socialist calculation debate, except that Scott frames them in terms that minimize associations with socialism and capitalism. E.g. he manages to include Taylorist factory management in his cluster of bad ideas.

It’s interesting to compare Fukuyama’s description of Tanzania with Scott’s description. They both agree that villagization (Scott’s focus) was a disaster. Scott leaves readers with the impression that villagization was the most important policy, whereas Fukuyama only devotes one paragraph to it, and gives the impression that the overall effects of Tanzania’s legibility-increasing moves were beneficial (mainly via a common language causing more cooperation). Neither author provides a balanced view (but then they were both drawing attention to neglected aspects of history, not trying to provide a complete picture).

My advice: read the SlateStarCodex review, don’t read the whole book.

[An unimportant book that I read for ARC; feel free to skip this.]

Book review: Be Yourself, Everyone Else is Already Taken: Transform Your Life with the Power of Authenticity, by Mike Robbins.

This book’s advice mostly feels half-right, and mostly directed at people who have somewhat different problems than I have.

The book’s exercises range from things I’ve already done enough of, to things I ought to practice more but which feel hard (such as the self-love exercise).