Book review: Where Is My Flying Car? A Memoir of Future Past, by J. Storrs Hall (aka Josh).

If you only read the first 3 chapters, you might imagine that this is the history of just one industry (or the mysterious lack of an industry).

But this book attributes the absence of that industry to a broad set of problems that are keeping us poor. He looks at the post-1970 slowdown in innovation that Cowen describes in The Great Stagnation[1]. The two books agree on many symptoms, but describe the causes differently: where Cowen says we ate the low-hanging fruit, Josh says it’s due to someone “spraying paraquat on the low-hanging fruit”.

The book is full of mostly good insights. It significantly changed my opinion of the Great Stagnation.

The book jumps back and forth between polemics about the Great Strangulation (with a bit too much outrage porn), and nerdy descriptions of engineering and piloting problems. I found those large shifts in tone to be somewhat disorienting – it’s like the author can’t decide whether he’s an autistic youth who is eagerly describing his latest obsession, or an angry old man complaining about how the world is going to hell (I’ve met the author at Foresight conferences, and got similar but milder impressions there).

Josh’s main explanation for the Great Strangulation is the rise of Green fundamentalism[2], but he also describes other cultural / political factors that seem related. But before looking at those, I’ll look in some depth at three industries that exemplify the Great Strangulation.

Book review: Capital in the Twenty-First Century, by Thomas Piketty.

Capital in the Twenty-First Century is decent at history, mediocre at economics, unimpressive at forecasting, and gives policy advice that is thoughtfully adapted to his somewhat controversial goals. His goals involve a rather different set of priorities than I endorse, but the book mostly doesn’t try to persuade us to adopt his goals, so I won’t say much here about why I have different priorities.

That qualifies as a good deal less dumbed-down-for-popularity than I expected from a bestseller.

Even when he makes mistakes, he is often sufficiently thoughtful and clear to be somewhat entertaining.

Piketty provides a comprehensive view of changes in financial inequality since the start of the industrial revolution.

Piketty’s main story is that we’ve experienced moderately steady increases in inequality, as long as conditions remained somewhat normal. There was a big break in that trend associated with WW1, WW2, and to lesser extents the Great Depression and baby boom. Those equalizing forces (mainly decreases in wealth) seem unlikely to repeat. We’re back on a trend of increasing inequality, with no end in sight.

Book review: The Life You Can Save, by Peter Singer.

This book presents some unimpressive moral claims, and some more pragmatic social advocacy that is rather impressive.

The Problem

It is all too common to talk as if all human lives had equal value, yet act as if the value of distant strangers’ lives were a few hundred dollars.

Singer is effective at arguing against standard rationalizations for this discrepancy.

He provides an adequate summary of reasons to think most of us can easily save many lives.

Book review: The Elephant in the Brain, by Kevin Simler and Robin Hanson.

This book is a well-written analysis of human self-deception.

Only small parts of this book will seem new to long-time readers of Overcoming Bias. It’s written more to bring those ideas to a wider audience.

Large parts of the book will seem obvious to cynics, but few cynics have attempted to explain the breadth of patterns that this book does. Most cynics focus on complaints about some group of people having worse motives than the rest of us. This book sends a message that’s much closer to “We have met the enemy, and he is us.”

The authors claim to be neutrally describing how the world works (“We aren’t trying to put our species down or rub people’s noses in their own shortcomings.”; “… we need this book to be a judgment-free zone”). It’s less judgmental than the average book that I read, but it’s hardly neutral. The authors are criticizing, in the sense that they’re rubbing our noses in evidence that humans are less virtuous than many people claim humans are. Darwin unavoidably put our species down in the sense of discrediting beliefs that we were made in God’s image. This book continues in a similar vein.

This suggests the authors haven’t quite resolved the conflict between their dreams of upholding the highest ideals of science (pursuit of pure knowledge for its own sake) and their desire to solve real-world problems.

The book needs to be (and mostly is) non-judgmental about our actual motives, in order to maximize our comfort with acknowledging those motives. The book is appropriately judgmental about people who pretend to have more noble motives than they actually have.

The authors do a moderately good job of admitting to their own elephants, but I get the sense that they’re still pretty hesitant about doing so.

Impact

Most people will underestimate the effects which the book describes.

Book review: Inadequate Equilibria, by Eliezer Yudkowsky.

This book (actually halfway between a book and a series of blog posts) attacks the goal of epistemic modesty, which I’ll loosely summarize as reluctance to believe that one knows better than the average person.

1.

The book starts by focusing on the base rate for high-status institutions having harmful incentive structures, charting a middle ground between the excessive respect for those institutions that we see in mainstream sources, and the cynicism of most outsiders.

There’s a weak sense in which this is arrogant, namely that if it were obvious to the average voter how to improve on these problems, then I’d expect the problems to be fixed. So people who claim to detect such problems ought to have decent evidence that they’re above average in the relevant skills. There are plenty of people who can rationally decide that applies to them. (Eliezer doubts that advising the rest to be modest will help; I suspect there are useful approaches to instilling modesty in people who should be more modest, but it’s not easy). Also, below-average people rarely seem to be attracted to Eliezer’s writings.

Later parts of the book focus on more personal choices, such as choosing a career.

Some parts of the book seem designed to show off Eliezer’s lack of need for modesty – sometimes successfully, sometimes leaving me suspecting he should be more modest (usually in ways that are somewhat orthogonal to his main points; i.e. his complaints about “reference class tennis” suggest overconfidence in his understanding of his debate opponents).

2.

Eliezer goes a bit overboard in attacking the outside view. He starts with legitimate complaints about people misusing it to justify rejecting theory and adopting “blind empiricism” (a mistake that I’ve occasionally made). But he partly rejects the advice that Tetlock gives in Superforecasting. I’m pretty sure Tetlock knows more about this domain than Eliezer does.

E.g. Eliezer says “But in novel situations where causal mechanisms differ, the outside view fails—there may not be relevantly similar cases, or it may be ambiguous which similar-looking cases are the right ones to look at.”, but Tetlock says ‘Nothing is 100% “unique” … So superforecasters conduct creative searches for comparison classes even for seemingly unique events’.

Compare Eliezer’s “But in many contexts, the outside view simply can’t compete with a good theory” with Tetlock’s commandment number 3 (“Strike the right balance between inside and outside views”). Eliezer seems to treat the approaches as antagonistic, whereas Tetlock advises us to find a synthesis in which the approaches cooperate.

3.

Eliezer provides a decent outline of what causes excess modesty. He classifies the two main failure modes as anxious underconfidence, and status regulation. Anxious underconfidence definitely sounds like something I’ve felt somewhat often, and status regulation seems pretty plausible, but harder for me to detect.

Eliezer presents a clear model of why status regulation exists, but his explanation for anxious underconfidence doesn’t seem complete. Here are some of my ideas about possible causes of anxious underconfidence:

  • People evaluate mistaken career choices and social rejection as if they meant death (which was roughly true until quite recently), so extreme risk aversion made sense;
  • Inaction (or choosing the default action) minimizes blame. If I carefully consider an option, my choice says more about my future actions than if I neglect to think about the option;
  • People often evaluate their success at life by counting the number of correct and incorrect decisions, rather than adding up the value produced;
  • People who don’t grok the Bayesian meaning of the word “evidence” are likely to privilege the scientific and legal meanings of evidence. So beliefs based on more subjective evidence get treated as second class citizens.

I suspect that most harm from excess modesty (and also arrogance) happens in evolutionarily novel contexts. Decisions such as creating a business plan for a startup, or writing a novel that sells a million copies, are sufficiently different from what we evolved to do that we should expect over/underconfidence to cause more harm.

4.

Another way to summarize the book would be: don’t aim to overcompensate for overconfidence; instead, aim to eliminate the causes of overconfidence.

This book will be moderately popular among Eliezer’s fans, but it seems unlikely to greatly expand his influence.

It didn’t convince me that epistemic modesty is generally harmful, but it does provide clues to identifying significant domains in which epistemic modesty causes important harm.

Why do people knowingly follow bad investment strategies?

I won’t ask (in this post) about why people hold foolish beliefs about investment strategies. I’ll focus on people who intend to follow a decent strategy, and fail. I’ll illustrate this with a stereotype from a behavioral economist (Procrastination in Preparing for Retirement):[1]

For instance, one of the authors has kept an average of over $20,000 in his checking account over the last 10 years, despite earning an average of less than 1% interest on this account and having easy access to very liquid alternative investments earning much more.
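
To get a rough sense of what that habit costs, here is an illustrative calculation using my own assumed numbers (the paper only says the alternatives earn much more; suppose the liquid alternative yields 4% per year versus 1%, and treat the $20,000 balance as constant over the decade):

$$ \$20{,}000 \times \left(1.04^{10} - 1.01^{10}\right) \approx \$20{,}000 \times (1.48 - 1.10) \approx \$7{,}500 $$

That’s roughly $7,500 of compounded interest forgone over ten years, under those assumptions.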

A more mundane example is a person who holds most of their wealth in stock of a single company, for reasons of historical accident (they acquired it via employee stock options or inheritance), but admits to preferring a more diversified portfolio.

An example from my life is that, until this year, I often borrowed money from Schwab to buy stock, when I could have borrowed at lower rates in my Interactive Brokers account to do the same thing. (Partly due to habits that I developed while carelessly unaware of the difference in rates; partly due to a number of trivial inconveniences).

Behavioral economists are somewhat correct to attribute such mistakes to questionable time discounting. But I see more patterns than such a model can explain (e.g. people procrastinate more over some decisions (whether to make a “boring” trade) than others (whether to read news about investments)).[2]

Instead, I use CFAR-style models that focus on conflicting motives of different agents within our minds.

Book review: Are We Smart Enough to Know How Smart Animals Are?, by Frans de Waal.

This book is primarily about discrediting false claims of human uniqueness, and showing how easy it is to screw up evaluations of a species’ cognitive abilities. It is best summarized by the cognitive ripple rule:

Every cognitive capacity that we discover is going to be older and more widespread than initially thought.

De Waal provides many anecdotes of carefully designed experiments detecting abilities that previously appeared to be absent. E.g. Asian elephants failed mirror tests with small, distant mirrors. When experimenters dared to put large mirrors close enough for the elephants to touch, some of them passed the test.

Likewise, initial observations of behaviorist humans suggested they were rigidly fixated on explaining all behavior via operant conditioning. Yet one experimenter managed to trick a behaviorist into demonstrating more creativity, by harnessing the one motive that behaviorists prefer over their habit of advocating operant conditioning: their desire to accuse people of recklessly inferring complex cognition.

De Waal seems moderately biased toward overstating cognitive abilities of most species (with humans being one clear exception to that pattern).

At one point he gave me the impression that he was claiming elephants could predict where a thunderstorm would hit days in advance. I checked the reference, and what the elephants actually did was predict the arrival of the wet season, and respond with changes such as longer steps (but probably not with indications that they knew where thunderstorms would hit). After rereading de Waal’s wording, I decided it was ambiguous. But his claim that elephants “hear thunder and rainfall hundreds of miles away” exaggerates the original paper’s “detected … at distances greater than 100 km … perhaps as much as 300 km”.

But in the context of language, de Waal switches to downplaying reports of impressive abilities. I wonder how much of that is due to his desire to downplay claims that human minds are better, and how much of that is because his research isn’t well suited to studying language.

I agree with the book’s general claims. The book provides evidence that human brains embody only small, somewhat specialized improvements on the cognitive abilities of other species. But I found the book less convincing on that subject than some other books I’ve read recently. I suspect that’s mainly due to de Waal’s focus on anecdotes that emphasize what’s special about each species or individual. Whereas The Human Advantage rigorously quantifies important ways in which human brains are just a bigger primate brain (but primate brains are special!). Or The Secret of our Success (which doesn’t use particularly rigorous methods) provides a better perspective, by describing a model in which ape minds evolve to human minds via ordinary, gradual adaptations to mildly new environments.

In sum, this book is good at explaining the problems associated with research into animal cognition. It is merely ok at providing insights about how smart various species are.

Book review: The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, by Joseph Henrich.

This book provides a clear explanation of how an ability to learn cultural knowledge made humans evolve into something unique over the past few million years. It’s by far the best book I’ve read on human evolution.

Before reading this book, I thought human uniqueness depended on something somewhat arbitrary and mysterious which made sexual selection important for human evolution, and wondered whether human language abilities depended on some lucky mutation. Now I believe that the causes of human uniqueness were firmly in place 2-3 million years ago, and the remaining arbitrary events seem much farther back on the causal pathway (e.g. what was unique about apes? why did our ancestors descend from trees 4.4 million years ago? why did the climate become less stable 3 million years ago?).

Human language now seems like a natural byproduct of previous changes, and probably started sooner (and developed more gradually) than many researchers think.

I used to doubt that anyone could find good evidence of cultures that existed millions of years ago. But Henrich provides clear explanations of how features such as right-handedness and endurance running demonstrate important milestones in human abilities to generate culture.

Henrich’s most surprising claim is that there’s an important sense in which individual humans are no smarter than other apes. Our intellectual advantage over apes is mostly due to a somewhat special-purpose ability to combine our individual brains into a collective intelligence. His evidence on this point is weak, but it’s plausible enough to be interesting.

Henrich occasionally exaggerates a bit. The only place where that bothered me was where he claimed that heart attack patients who carefully adhered to taking placebos were half as likely to die as patients who failed to reliably take placebos. The author wants to believe that demonstrates the power of placebos. I say the patients’ failure to take placebos was just a symptom of an underlying health problem (dementia?).

I’m a bit surprised at how little Robin Hanson says about Henrich’s main points. Henrich suggests that there’s cultural pressure to respect high-status people, for reasons that are somewhat at odds with Robin’s ally/coalition-based reasons. Henrich argues that knowledge coming from high-status people, at least in hunter-gatherer societies, tended to be safer than knowledge from more directly measurable evidence. The cultural knowledge that accumulates over many generations aggregates information that could not be empirically acquired in a short time.

So Henrich implies it’s reasonable for people to be confused about whether evidence based medicine embodies more wisdom than eminence based medicine. Traditional culture has become less valuable recently due to the rapid changes in our environment (particularly the technology component of our environment), but cultures that abandoned traditions too readily were often hurt by consequences which take decades to observe.

I got more out of this book than a short review can describe (such as “How Altruism is like a Chili Pepper”). Here’s a good closing quote:

we are smart, but not because we stand on the shoulders of giants or are giants ourselves. We stand on the shoulders of a very large pyramid of hobbits.

Book review: Bonds That Make Us Free: Healing Our Relationships, Coming to Ourselves, by C. Terry Warner.

This book consists mostly of well-written anecdotes demonstrating how to recognize common kinds of self-deception and motivated cognition that cause friction in interpersonal interactions. He focuses on ordinary motives that lead to blaming others for disputes in order to avoid blaming ourselves.

He shows that a willingness to accept responsibility for negative feelings about personal relationships usually makes everyone happier, by switching from zero-sum or negative-sum competitions to cooperative relationships.

He describes many examples where my gut reaction is that person B has done something that justifies person A’s decision to get upset, and then explains that person A should act nicer. He does this without the “don’t be judgmental” attitude that often accompanies advice to be more understanding.

Most of the book focuses on the desire to blame others when something goes wrong, but he also notes that blaming nature (or oneself) can produce similar problems and have similar solutions. That insight describes me better than the typical anecdotes do, and has been a bit of help at enabling me to stop wasting effort fighting reality.

I expect that there are a moderate number of abusive relationships where the book’s advice would be counterproductive, but that most people (even many who have apparently abusive spouses or bosses) will be better off following the book’s advice.