All posts tagged status

Book review: The Life You Can Save, by Peter Singer.

This book presents some unimpressive moral claims, and some more pragmatic social advocacy that is rather impressive.

The Problem

It is all too common to talk as if all human lives had equal value, yet act as if the value of distant strangers’ lives was a few hundred dollars.

Singer is effective at arguing against standard rationalizations for this discrepancy.

He provides an adequate summary of reasons to think most of us can easily save many lives.

Book review: The Elephant in the Brain, by Kevin Simler and Robin Hanson.

This book is a well-written analysis of human self-deception.

Only small parts of this book will seem new to long-time readers of Overcoming Bias. It’s written more to bring those ideas to a wider audience.

Large parts of the book will seem obvious to cynics, but few cynics have attempted to explain the breadth of patterns that this book does. Most cynics focus on complaints about some group of people having worse motives than the rest of us. This book sends a message that’s much closer to “We have met the enemy, and he is us.”

The authors claim to be neutrally describing how the world works (“We aren’t trying to put our species down or rub people’s noses in their own shortcomings.”; “… we need this book to be a judgment-free zone”). It’s less judgmental than the average book that I read, but it’s hardly neutral. The authors are criticizing, in the sense that they’re rubbing our noses in evidence that humans are less virtuous than many people claim humans are. Darwin unavoidably put our species down in the sense of discrediting beliefs that we were made in God’s image. This book continues in a similar vein.

This suggests the authors haven’t quite resolved the conflict between their dreams of upholding the highest ideals of science (pursuit of pure knowledge for its own sake) and their desire to solve real-world problems.

The book needs to be (and mostly is) non-judgmental about our actual motives, in order to maximize our comfort with acknowledging those motives. The book is appropriately judgmental about people who pretend to have more noble motives than they actually have.

The authors do a moderately good job of admitting to their own elephants, but I get the sense that they’re still pretty hesitant about doing so.

Impact

Most people will underestimate the effects that the book describes.

Book review: Inadequate Equilibria, by Eliezer Yudkowsky.

This book (actually halfway between a book and a series of blog posts) attacks the goal of epistemic modesty, which I’ll loosely summarize as reluctance to believe that one knows better than the average person.

1.

The book starts by focusing on the base rate for high-status institutions having harmful incentive structures, charting a middle ground between the excessive respect for those institutions that we see in mainstream sources, and the cynicism of most outsiders.

There’s a weak sense in which this is arrogant, namely that if it were obvious to the average voter how to improve on these problems, then I’d expect the problems to be fixed. So people who claim to detect such problems ought to have decent evidence that they’re above average in the relevant skills. There are plenty of people who can rationally decide that applies to them. (Eliezer doubts that advising the rest to be modest will help; I suspect there are useful approaches to instilling modesty in people who should be more modest, but it’s not easy.) Also, below-average people rarely seem to be attracted to Eliezer’s writings.

Later parts of the book focus on more personal choices, such as choosing a career.

Some parts of the book seem designed to show off Eliezer’s lack of need for modesty – sometimes successfully, sometimes leaving me suspecting he should be more modest (usually in ways that are somewhat orthogonal to his main points; e.g. his complaints about “reference class tennis” suggest overconfidence in his understanding of his debate opponents).

2.

Eliezer goes a bit overboard in attacking the outside view. He starts with legitimate complaints about people misusing it to justify rejecting theory and adopting “blind empiricism” (a mistake that I’ve occasionally made). But he partly rejects the advice that Tetlock gives in Superforecasting. I’m pretty sure Tetlock knows more about this domain than Eliezer does.

E.g. Eliezer says “But in novel situations where causal mechanisms differ, the outside view fails—there may not be relevantly similar cases, or it may be ambiguous which similar-looking cases are the right ones to look at.”, but Tetlock says ‘Nothing is 100% “unique” … So superforecasters conduct creative searches for comparison classes even for seemingly unique events’.

Compare Eliezer’s “But in many contexts, the outside view simply can’t compete with a good theory” with Tetlock’s commandment number 3 (“Strike the right balance between inside and outside views”). Eliezer seems to treat the approaches as antagonistic, whereas Tetlock advises us to find a synthesis in which the approaches cooperate.
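
To make the synthesis concrete, here’s a minimal sketch (mine, not from either book) of one common way to pool an outside-view base rate with an inside-view estimate. The weighting scheme and all the numbers are hypothetical illustrations, not anyone’s recommended procedure:

```python
# A minimal sketch of combining views: pool an outside-view base rate
# with an inside-view estimate, weighting the inside view by how much
# you trust your case-specific evidence. All numbers are hypothetical.

def blend_views(base_rate: float, inside_estimate: float,
                inside_weight: float) -> float:
    """Linear pool: inside_weight = 0 is pure outside view, 1 is pure inside view."""
    assert 0.0 <= inside_weight <= 1.0
    return inside_weight * inside_estimate + (1 - inside_weight) * base_rate

# E.g. startups in some reference class succeed ~10% of the time (outside
# view), my model of this particular startup says 40% (inside view), and
# I give my model 30% weight:
print(blend_views(base_rate=0.10, inside_estimate=0.40, inside_weight=0.3))
# -> 0.19
```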

3.

Eliezer provides a decent outline of what causes excess modesty. He classifies the two main failure modes as anxious underconfidence, and status regulation. Anxious underconfidence definitely sounds like something I’ve felt somewhat often, and status regulation seems pretty plausible, but harder for me to detect.

Eliezer presents a clear model of why status regulation exists, but his explanation for anxious underconfidence doesn’t seem complete. Here are some of my ideas about possible causes of anxious underconfidence:

  • People evaluate mistaken career choices and social rejection as if they meant death (which was roughly true until quite recently), so extreme risk aversion made sense;
  • Inaction (or choosing the default action) minimizes blame. If I carefully consider an option, my choice says more about my future actions than if I neglect to think about the option;
  • People often evaluate their success at life by counting the number of correct and incorrect decisions, rather than adding up the value produced;
  • People who don’t grok the Bayesian meaning of the word “evidence” are likely to privilege the scientific and legal meanings of evidence. So beliefs based on more subjective evidence get treated as second-class citizens (see the short sketch after this list).
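
Here’s a minimal numeric sketch (mine, not Eliezer’s) of the Bayesian sense of “evidence”: any observation that is more likely under one hypothesis than another shifts your odds, even if it would never count as scientific or legal evidence. The plan, the wincing friend, and all the numbers are hypothetical:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# All numbers here are hypothetical.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Update odds on a hypothesis after observing some evidence."""
    return prior_odds * likelihood_ratio

# Suppose I give a plan 1:3 odds of working, then a friend winces when I
# describe it. If friends wince only half as often at good plans as at
# bad ones, that subjective signal has a likelihood ratio of 1/2:
posterior = update_odds(prior_odds=1/3, likelihood_ratio=1/2)
print(f"{posterior:.3f}")  # 0.167, i.e. 1:6 odds, ~14% probability
```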

I suspect that most harm from excess modesty (and also arrogance) happens in evolutionarily novel contexts. Decisions such as creating a business plan for a startup, or writing a novel that sells a million copies, are sufficiently different from what we evolved to do that we should expect over/underconfidence to cause more harm.

4.

Another way to summarize the book would be: don’t aim to overcompensate for overconfidence; instead, aim to eliminate the causes of overconfidence.

This book will be moderately popular among Eliezer’s fans, but it seems unlikely to greatly expand his influence.

It didn’t convince me that epistemic modesty is generally harmful, but it does provide clues to identifying significant domains in which epistemic modesty causes important harm.

Why do people knowingly follow bad investment strategies?

I won’t ask (in this post) about why people hold foolish beliefs about investment strategies. I’ll focus on people who intend to follow a decent strategy, and fail. I’ll illustrate this with a stereotype from a behavioral economist (Procrastination in Preparing for Retirement):[1]

For instance, one of the authors has kept an average of over $20,000 in his checking account over the last 10 years, despite earning an average of less than 1% interest on this account and having easy access to very liquid alternative investments earning much more.

A more mundane example is a person who holds most of their wealth in stock of a single company, for reasons of historical accident (they acquired it via employee stock options or inheritance), but admits to preferring a more diversified portfolio.

An example from my life is that, until this year, I often borrowed money from Schwab to buy stock, when I could have borrowed at lower rates in my Interactive Brokers account to do the same thing. (Partly due to habits that I developed while carelessly unaware of the difference in rates; partly due to a number of trivial inconveniences).
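
For a sense of scale, here’s a rough sketch of the arithmetic behind these examples. The $20,000 balance and sub-1% rate come from the quote above; the alternative yield, loan size, and both margin rates are hypothetical placeholders:

```python
# Rough opportunity-cost arithmetic. The $20,000 balance and ~1% checking
# rate come from the quoted paper; every other number is a hypothetical
# placeholder, not an actual broker rate.

balance = 20_000
checking_rate = 0.01      # roughly the <1% the author earned
alt_rate = 0.04           # hypothetical liquid alternative
years = 10

# Compound interest forgone by leaving the money in checking.
forgone = balance * ((1 + alt_rate) ** years - (1 + checking_rate) ** years)
print(f"forgone interest over {years} years: ${forgone:,.0f}")  # ~$7,512

# Extra cost of borrowing on margin at a pricier broker (rates hypothetical).
loan = 50_000
rate_gap = 0.08 - 0.03    # hypothetical difference between two brokers' rates
print(f"extra interest per year: ${loan * rate_gap:,.0f}")      # $2,500
```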

Behavioral economists are somewhat correct to attribute such mistakes to questionable time discounting. But I see more patterns than such a model can explain (e.g. people procrastinate more over some decisions, such as whether to make a “boring” trade, than over others, such as whether to read news about investments).[2]

Instead, I use CFAR-style models that focus on conflicting motives of different agents within our minds.

Book review: Are We Smart Enough to Know How Smart Animals Are?, by Frans de Waal.

This book is primarily about discrediting false claims of human uniqueness, and showing how easy it is to screw up evaluations of a species’ cognitive abilities. It is best summarized by the cognitive ripple rule:

Every cognitive capacity that we discover is going to be older and more widespread than initially thought.

De Waal provides many anecdotes of carefully designed experiments detecting abilities that previously appeared to be absent. E.g. Asian elephants failed mirror tests with small, distant mirrors. When experimenters dared to put large mirrors close enough for the elephants to touch, some of them passed the test.

Likewise, initial observations of behaviorist humans suggested they were rigidly fixated on explaining all behavior via operant conditioning. Yet one experimenter managed to trick a behaviorist into demonstrating more creativity, by harnessing the one motive that behaviorists prefer over their habit of advocating operant conditioning: their desire to accuse people of recklessly inferring complex cognition.

De Waal seems moderately biased toward overstating cognitive abilities of most species (with humans being one clear exception to that pattern).

At one point he gave me the impression that he was claiming elephants could predict where a thunderstorm would hit days in advance. I checked the reference, and what the elephants actually did was predict the arrival of the wet season, and respond with changes such as longer steps (but probably not with indications that they knew where thunderstorms would hit). After rereading de Waal’s wording, I decided it was ambiguous. But his claim that elephants “hear thunder and rainfall hundreds of miles away” exaggerates the original paper’s “detected … at distances greater than 100 km … perhaps as much as 300 km”.

But in the context of language, de Waal switches to downplaying reports of impressive abilities. I wonder how much of that is due to his desire to downplay claims that human minds are better, and how much of that is because his research isn’t well suited to studying language.

I agree with the book’s general claims. The book provides evidence that human brains embody only small, somewhat specialized improvements on the cognitive abilities of other species. But I found the book less convincing on that subject than some other books I’ve read recently. I suspect that’s mainly due to de Waal’s focus on anecdotes that emphasize what’s special about each species or individual. In contrast, The Human Advantage rigorously quantifies important ways in which human brains are just a bigger primate brain (but primate brains are special!), and The Secret of Our Success (which doesn’t use particularly rigorous methods) provides a better perspective, by describing a model in which ape minds evolved into human minds via ordinary, gradual adaptations to mildly new environments.

In sum, this book is good at explaining the problems associated with research into animal cognition. It is merely ok at providing insights about how smart various species are.

Book review: The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, by Joseph Henrich.

This book provides a clear explanation of how an ability to learn cultural knowledge made humans evolve into something unique over the past few million years. It’s by far the best book I’ve read on human evolution.

Before reading this book, I thought human uniqueness depended on something somewhat arbitrary and mysterious which made sexual selection important for human evolution, and wondered whether human language abilities depended on some lucky mutation. Now I believe that the causes of human uniqueness were firmly in place 2-3 million years ago, and the remaining arbitrary events seem much farther back on the causal pathway (e.g. what was unique about apes? why did our ancestors descend from trees 4.4 million years ago? why did the climate become less stable 3 million years ago?).

Human language now seems like a natural byproduct of previous changes, and probably started sooner (and developed more gradually) than many researchers think.

I used to doubt that anyone could find good evidence of cultures that existed millions of years ago. But Henrich provides clear explanations of how features such as right-handedness and endurance running demonstrate important milestones in human abilities to generate culture.

Henrich’s most surprising claim is that there’s an important sense in which individual humans are no smarter than other apes. Our intellectual advantage over apes is mostly due to a somewhat special-purpose ability to combine our individual brains into a collective intelligence. His evidence on this point is weak, but it’s plausible enough to be interesting.

Henrich occasionally exaggerates a bit. The only place where that bothered me was where he claimed that heart attack patients who carefully adhered to taking placebos were half as likely to die as patients who failed to reliably take placebos. The author wants to believe that demonstrates the power of placebos. I say the patients’ failure to take placebos was just a symptom of an underlying health problem (dementia?).

I’m a bit surprised at how little Robin Hanson says about Henrich’s main points. Henrich suggests that there’s cultural pressure to respect high-status people, for reasons that are somewhat at odds with Robin’s ally/coalition based reasons. Henrich argues that knowledge coming from high-status people, at least in hunter-gatherer societies, tended to be safer than knowledge from more directly measurable evidence. The cultural knowledge that accumulates over many generations aggregates information that could not be empirically acquired in a short time.

So Henrich implies it’s reasonable for people to be confused about whether evidence-based medicine embodies more wisdom than eminence-based medicine. Traditional culture has become less valuable recently due to the rapid changes in our environment (particularly the technology component of our environment), but cultures that abandoned traditions too readily were often hurt by consequences that took decades to observe.

I got more out of this book than a short review can describe (such as “How Altruism is like a Chili Pepper”). Here’s a good closing quote:

we are smart, but not because we stand on the shoulders of giants or are giants ourselves. We stand on the shoulders of a very large pyramid of hobbits.

Book review: Bonds That Make Us Free: Healing Our Relationships, Coming to Ourselves, by C. Terry Warner.

This book consists mostly of well-written anecdotes demonstrating how to recognize common kinds of self-deception and motivated cognition that cause friction in interpersonal interactions. Warner focuses on ordinary motives that lead us to blame others for disputes in order to avoid blaming ourselves.

He shows that a willingness to accept responsibility for negative feelings about personal relationships usually makes everyone happier, by switching from zero-sum or negative-sum competitions to cooperative relationships.

He describes many examples where my gut reaction is that person B has done something that justifies person A’s decision to get upset, and then explains that person A should act nicer. He does this without the “don’t be judgmental” attitude that often accompanies advice to be more understanding.

Most of the book focuses on the desire to blame others when something goes wrong, but he also notes that blaming nature (or oneself) can produce similar problems and have similar solutions. That insight describes me better than the typical anecdotes do, and has been a bit of help at enabling me to stop wasting effort fighting reality.

I expect that there are a moderate number of abusive relationships where the book’s advice would be counterproductive, but that most people (even many who have apparently abusive spouses or bosses) will be better off following the book’s advice.

Book review: Leadership and Self-Deception: Getting out of the Box, by the Arbinger Institute.

In spite of being marketed as mainly for corporate executives, this book’s advice is important for most interactions between people. Executives have more to gain from it, but I suspect they’re somewhat less willing to believe it.

I had already learned a lot about self-deception before reading this, but this book clarifies how to recognize and correct common instances in which I’m tempted to deceive myself. More importantly, it provides a way to explain self-deception to a number of people. I had previously despaired of explaining my understanding of self-deception to people who hadn’t already sought out the ideas I’d found. Now I can point people to this book. But I still can’t summarize it in a way that would change many people’s minds.

It’s written mostly as a novel, which makes it very readable without sacrificing much substance.

Some of the book’s descriptions don’t sound completely right to me. They describe people as acting “inside the box” or “outside the box” with respect to another person (not the same as the standard meaning of “thinking outside the box”) as if people normally did one or the other, whereas I think I often act somewhere in between those two modes. Also, the term “self-betrayal”, which I’d describe as acting selfishly and rationalizing the act as selfless, should not be portrayed as if the selfishness automatically causes self-deception. If people felt a little freer to admit that they act selfishly, they’d be less tempted to deceive themselves about their motives.

The book seems a bit too rosy about the benefits of following its advice. For instance, the book leaves the reader to imagine that Semmelweis benefited from admitting that he had been killing patients. Other accounts of Semmelweis suggest that he suffered, and the doctors who remained in denial prospered. Maybe he would have done much better if he had understood this book and been able to adopt its style. But it’s important to remember that self-deception isn’t an accident. It happens because it has sometimes worked.

Book review: Drive: The Surprising Truth About What Motivates Us, by Daniel H. Pink.

This book explores some of the complexities of what motivates humans. It attacks a stereotype that says only financial rewards matter, and exaggerates the extent to which people adopt that fallacy. His style is similar to Malcolm Gladwell’s, but with more substance.

The book’s advice is likely to cause some improvement in how businesses are run and in how people choose careers. But I wonder how many bosses will ignore it because their desire to exert control over people outweighs their desire to create successful companies.

I’m not satisfied with the way he and others classify motivations as intrinsic and extrinsic. While feelings of flow may be almost entirely internally generated, other motivations that he classifies as intrinsic seem to involve an important component of feeling that others are rewarding you with higher status/reputation.

Shirking may have been an important problem a century ago for which financial rewards were appropriate solutions, but the nature of work has changed so that it’s much less common for workers to want to put less effort into a job. The author implies that this means standard financial rewards have become fairly unimportant factors in determining productivity. I think he underestimates the role they play in determining how goals are prioritized.

He believes the change in work that reduced the importance of financial incentives was the replacement of rule-following routine work with work that requires creativity. I suggest that another factor was that in 1900, work often required muscle-power that consumed almost as much energy as a worker could afford to feed himself.

He states his claims vaguely enough that they could be interpreted as implying that broad categories of financial incentives (including stock options and equity) work poorly. I checked one of the references that sounded like it might address that (“When performance-related pay backfires”), and found it only dealt with payments for completing specific tasks.

His complaints about excessive focus on quarterly earnings probably have some value, but it’s important to remember that it’s easy to err in the other direction as well (the dot-com bubble seemed to coincide with an unusual amount of effort at focusing on earnings 5 to 10 years away).

I’m disappointed that he advises against encouraging workers to compete against each other, without offering evidence about the effects of such competition.

One interesting story is the bonus system at Kimley-Horn and Associates, where any employee can award another employee $50 for doing something exceptional. I’d be interested in more tests of this – is there something special about Kimley-Horn that prevents abuse, or would it work in most companies?

Book review: Hierarchy in the Forest: The Evolution of Egalitarian Behavior, by Christopher Boehm.

This book makes a good argument that a major change from strongly hierarchical societies to fairly egalitarian societies happened to the human race sometime after it diverged from chimpanzees and bonobos. This was not due to any changes in attitudes toward status, but because language enabled low-status individuals to cooperate more effectively to restrain high-status individuals, and because of the equalizing effects of weapons. Hunter-gatherer societies seem rather consistently egalitarian, and the partial reversion to hierarchy in modern times may be due to the ability to accumulate wealth or the larger size of our societies.

He provides a plausible hypothesis that this change enabled group selection to become more powerful than in a typical species, but that doesn’t imply that group selection became as important as within-group selection, and he doesn’t have a good way of figuring out how important the effect was.

He demonstrates that humans became more altruistic, using a narrow biological definition of altruism, but it’s important to note that this only means agreeing to follow altruistic rules. He isn’t able to say much about how well people follow those rules when nobody notices what they’re doing.

Much of the middle of the book, which recounts anthropological evidence, can be skipped without much loss – the most important parts are chapters 8 and 9.