bias


Book review: Bonds That Make Us Free: Healing Our Relationships, Coming to Ourselves, by C. Terry Warner.

This book consists mostly of well-written anecdotes demonstrating how to recognize common kinds of self-deception and motivated cognition that cause friction in interpersonal interactions. Warner focuses on ordinary motives that lead us to blame others for disputes in order to avoid blaming ourselves.

He shows that a willingness to accept responsibility for negative feelings about personal relationships usually makes everyone happier, by switching from zero-sum or negative-sum competitions to cooperative relationships.

He describes many examples where my gut reaction is that person B has done something that justifies person A’s decision to get upset, and then explains why person A should act nicer. He does this without the “don’t be judgmental” attitude that often accompanies advice to be more understanding.

Most of the book focuses on the desire to blame others when something goes wrong, but he also notes that blaming nature (or oneself) can produce similar problems and have similar solutions. That insight describes me better than the typical anecdotes do, and it has helped me somewhat to stop wasting effort fighting reality.

I expect that there are a moderate number of abusive relationships where the book’s advice would be counterproductive, but that most people (even many who have apparently abusive spouses or bosses) will be better off following the book’s advice.

Book review: Value-Focused Thinking: A Path to Creative Decisionmaking, by Ralph L. Keeney.

This book argues for focusing on values (goals/objectives) when making decisions, as opposed to the more usual alternative-focused decisionmaking.

The basic idea seems good. Alternative-focused thinking draws our attention away from our values and discourages us from creatively generating new possibilities to choose from. It tends to have us frame decisions as responses to problems, which leads us to associate decisions with undesirable emotions, when we could view decisions as opportunities.

A good deal of the book describes examples of good decisionmaking, but those rarely provide insight into how to avoid common mistakes or how to do unusually well.

Occasionally the book switches to some dull math, without clear explanations of what benefit the rigor provides.

The book also includes good descriptions of how to measure the things that matter, but How to Measure Anything by Douglas Hubbard does that much better.

Book review: The Motivation Hacker, by Nick Winter.

This is a productivity book that might improve some people’s motivation.

It provides an entertaining summary (with clear examples) of how to use tools such as precommitment to accomplish an absurd number of goals.

But it mostly fails at explaining how to feel enthusiastic about doing so.

The section on Goal Picking Exercises exemplifies the problems I have with the book. The most realistic-sounding exercise had me rank a bunch of goals by how much each goal excites me, times the probability of success, divided by the time required. I found that the variations in the last two terms overwhelmed the excitement term, leaving me with the advice that I should focus on the least exciting goals. (Modest changes to the arbitrary scale of excitement might change that conclusion.)

Which leaves me wondering whether I should focus on goals that I’m likely to achieve soon but which I have trouble caring about, or whether I should focus on longer term goals such as mind uploading (where I might spend years on subgoals which turn out to be mistaken).
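
To make the scale problem concrete, here’s a toy version of that ranking exercise; the goals and all of the numbers are invented for illustration, not taken from the book.

```python
# Toy version of the goal-picking formula: excitement * P(success) / hours.
# Goals and numbers are invented for illustration.
goals = [
    # (name, excitement on a 1-10 scale, probability of success, hours required)
    ("reach inbox zero for a month",  3, 0.9,     20),
    ("write a novel",                 8, 0.3,    500),
    ("work toward mind uploading",   10, 0.01, 20000),
]

def score(excitement, p_success, hours):
    return excitement * p_success / hours

# Rank goals from highest to lowest score.
for name, e, p, h in sorted(goals, key=lambda g: -score(*g[1:])):
    print(f"{name:32} score = {score(e, p, h):.6f}")
```

Probability and time vary over several orders of magnitude while excitement occupies a single decade, so the least exciting goal wins easily, and rescaling excitement can reorder the list.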

The author doesn’t seem to have gotten enough out of his experience to motivate me to imitate the way he picks goals.

Book review: The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do To Get More of It, by Kelly McGonigal.

This book starts out seeming to belabor ideas that are obvious to me, but before too long it offers counterintuitive approaches that I ought to try.

The approach that I find hardest to reconcile with my intuition is that self-forgiveness for giving in to temptation helps increase willpower, while guilt or shame about having failed reduces it. So what seems like an incentive to avoid temptation is likely to reduce our ability to resist it.

Another important but counterintuitive claim is that trying to suppress thoughts about a temptation (e.g. candy) makes the temptation harder to resist, whereas accepting that part of my mind wants candy (while remembering that I ought to follow a rule of eating less candy) makes it easier for me to resist.

A careless author could have failed to convince me this is plausible. But McGonigal points out the similarities to trying to follow an instruction to not think of white bears – how could I suppress thoughts of white bears if some part of my mind didn’t activate a concept of white bears to monitor my compliance with the instruction? Can I think of candy without attracting the attention of the candy-liking parts of my mind?

As a result of reading the book, I have started paying attention to whether the pleasure I feel when playing computer games lives up to the anticipation I feel when I’m tempted to start one. I haven’t been surprised to observe that I sometimes feel no pleasure after starting the game. But it now seems easier to remember those times of pleasureless playing, and I expect that is weakening my anticipation of rewards.

Book review: The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t, by Nate Silver.

This is a well-written book about the challenges associated with making predictions. But nearly all the ideas in it were ones I was already familiar with.

I agree with nearly everything the book says. But I’ll mention two small disagreements.

He claims that 0 and 100 percent are probabilities. Many Bayesians dispute that. He has a logically consistent interpretation, and he doesn’t claim it’s ever sane to believe something with probability 0 or 100 percent, so I’m not sure the difference matters. Still, rejecting the idea that those can represent probabilities seems like a simpler way of avoiding mistakes.
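
One way to see the Bayesian objection (my gloss, not Silver’s): in log-odds form, Bayes’ rule adds a finite evidence increment to the prior, and 0 and 100 percent sit at infinite log-odds, so no finite amount of evidence can move a belief to or away from them.

```latex
% Bayes' rule in log-odds form: each observation contributes a finite increment.
\[
\log\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \log\frac{P(H)}{P(\neg H)}
  + \log\frac{P(E \mid H)}{P(E \mid \neg H)}
\]
% As P(H) approaches 0 or 1, the prior log-odds diverge to -infinity or
% +infinity, which is why some Bayesians treat 0 and 1 as limits rather
% than as probabilities one can hold.
```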

When pointing out the weak correlation between calorie consumption and obesity, he says he doesn’t know of an “obesity skeptics” community that would be comparable to the global warming skeptics. In fact there are people (e.g. Dave Asprey) who deny that excess calories cause obesity (with better tests than the global warming skeptics).

It would make sense to read this book instead of alternatives such as Moneyball and Tetlock’s Expert Political Judgment, but if you’ve been reading books in this area already this one won’t seem important.

Book review: The Righteous Mind: Why Good People Are Divided by Politics and Religion, by Jonathan Haidt.

This book carefully describes the evolutionary origins of human moralizing, explains why tribal attitudes toward morality have both good and bad effects, and shows how people who want to avoid moral hostility can do so.

Parts of the book are arranged to describe the author’s transition away from standard delusions: that our moral judgments result from the narratives we use to justify them, and mistaken ideas about why other people hold alien-sounding ideologies. His description of how his study of psychology led him to overcome those delusions makes it hard for those who agree with him to feel very superior to those who disagree.

He hints at personal benefits from abandoning partisanship (“It felt good to be released from partisan anger.”), so he doesn’t rely on altruistic motives for people to accept his political advice.

One part of the book that surprised me was the comparison between human morality and human taste buds. Some ideologies are influenced a good deal by all 6 types of human moral intuitions. But the ideology that pervades most of academia respects only 3 types (care, liberty, and fairness). That creates a difficult communication gap between academics and cultures that employ others, such as sanctity, in their moral systems, much like people who only experience sweet and salty foods would have trouble imagining a desire for sourness in some foods.

He sometimes gives the impression of being more of a moral relativist than I’d like, but a careful reading of the book shows that there are a fair number of contexts in which he believes some moral tastes produce better results than others.

His advice could be interpreted as encouraging us to replace our existing notions of “the enemy” with Manichaeans. Would his advice polarize societies into Manichaeans and non-Manichaeans? Maybe, but at least the non-Manichaeans would have a decent understanding of why the Manichaeans disagreed with them.

The book also includes arguments that group selection played an important role in human evolution, and that an increase in cooperation (group-mindedness, somewhat like the cooperation among bees) had to evolve before language could become valuable enough to evolve. This is an interesting but speculative alternative to the common belief that language was the key development that differentiated humans from other apes.

Book review: Thinking, Fast and Slow, by Daniel Kahneman.

This book is an excellent introduction to the heuristics and biases literature, but only small parts of it will seem new to those who are familiar with the subject.

While the book mostly focuses on conditions where slow, logical thinking can do better than fast, intuitive thinking, I find it impressive that he was careful to consider the views of those who advocate intuitive thinking, and that he collaborated with a leading advocate of intuition to resolve many of their apparent disagreements (mainly by clarifying when each kind of thinking is likely to work well).

His style shows that he has applied some of the lessons of the research in his field to his own writing, such as by giving clear examples. (“Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular”).

He sounds mildly overconfident (and believes mild overconfidence can be ok), but occasionally provides examples of his own irrationality.

He has good advice for investors (e.g. reduce loss aversion via “broad framing” – think of a single loss as part of a large class of results that are on average profitable), and appropriate disdain for investment advisers. But he goes overboard when he treats the stock market as unpredictable. The stock market has some real regularities that could be exploited. Most investors fail to find them because they see many more regularities than are real, because they are overconfident about their ability to distinguish the real ones, and because it’s hard to distinguish valuable feedback (which often takes many years to arrive) from misleading feedback.
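
A toy illustration of broad framing (my numbers, not Kahneman’s): an agent who weighs losses about 2.5 times as heavily as gains turns down a single favorable coin flip, but a bundle of a hundred such flips is almost certain to come out ahead.

```python
import math

# Loss-averse felt value: losses weigh ~2.5x as much as gains
# (a value in the commonly estimated range; the exact number is my choice).
def felt_value(x, loss_weight=2.5):
    return x if x >= 0 else loss_weight * x

# Narrow frame: one flip that wins $200 or loses $100 with equal odds.
one_flip = 0.5 * felt_value(200) + 0.5 * felt_value(-100)
print(f"felt value of one flip: {one_flip:+.0f}")  # -25, so it gets rejected

# Broad frame: 100 independent flips. Expected gain 100 * $50 = $5000,
# standard deviation 150 * sqrt(100) = $1500, so the chance of a net
# loss is only about 0.04% under a normal approximation.
mean, std = 100 * 50, 150 * math.sqrt(100)
print(f"100 flips: mean ${mean}, std ${std:.0f}")
```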

I wish I could find an equally good book about the overuse of logical analysis when I want the speed of intuition (e.g. “analysis paralysis”).

Book Review: Simple Heuristics That Make Us Smart by Gerd Gigerenzer and Peter M. Todd.

This book presents serious arguments in favor of using simple rules to make most decisions. The authors present many examples where getting a quick answer by evaluating a minimal amount of data produces almost as accurate a result as highly sophisticated models. They point out that ignoring information can minimize some biases:

people seldom consider more than one or two factors at any one time, although they feel that they can take a host of factors into account

(Tetlock makes similar suggestions).
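
A minimal sketch of what such a one-or-two-factor rule looks like, in the spirit of the book’s Take The Best heuristic (the objects, cues, and cue values below are invented): check cues in order of validity and let the first one that discriminates decide, ignoring everything else.

```python
# Take The Best: to judge which of two options scores higher on some
# criterion, check cues in order of validity and stop at the first cue
# that discriminates. All names and cue values below are invented.
CUES = ["has_major_team", "is_capital", "has_university"]  # ordered by validity

cities = {
    "Alpha": {"has_major_team": 1, "is_capital": 0, "has_university": 1},
    "Beta":  {"has_major_team": 1, "is_capital": 1, "has_university": 0},
}

def take_the_best(a, b):
    for cue in CUES:
        va, vb = cities[a][cue], cities[b][cue]
        if va != vb:                 # first discriminating cue decides
            return a if va > vb else b
    return None                      # nothing discriminates: guess

print(take_the_best("Alpha", "Beta"))  # -> Beta, decided by a single cue
```

Despite discarding most of the available information, rules like this often come close to multiple-regression accuracy in the book’s comparisons.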

They appear to overstate the extent to which their evidence generalizes. They test their stock market heuristic on a mere six months’ worth of data. If they knew much about stock markets, they’d realize that there are a lot more bad heuristics that work for a few years at a time than there are good heuristics. I’ll bet that theirs will do worse than random in most decades.

The book’s conclusions can be understood by skimming small parts of the book. Most of the book is devoted to detailed discussions of the evidence. I suggest following the book’s advice when reading it – don’t try to evaluate all the evidence, just pick out a few pieces.

Book review: What Intelligence Tests Miss – The Psychology of Rational Thought by Keith E. Stanovich.

Stanovich presents extensive evidence that rationality is very different from what IQ tests measure, and the two are only weakly related. He describes good reasons why society would be better if people became more rational.

He is too optimistic that becoming more rational will help most people who accomplish it. Overconfidence provides widespread benefits to people who use it in job interviews, political discussions, etc.

He gives some advice on how to be more rational, such as considering the opposite of each new hypothesis you are about to start believing. But will training yourself to do that on test problems cause you to do it when it matters? I don’t see signs that Stanovich practiced it much while writing the book. The most important implication he wants us to draw from the book is that we should develop and use Rationality Quotient (RQ) tests for at least as many purposes as IQ tests are used. But he doesn’t mention any doubts that I’d expect him to have if he thought about how rewarding high RQ scores might affect the validity of those scores.

He reports that high-IQ people can avoid some framing effects and overconfidence, but do so only when told to do so. Also, the sunk cost bias test looks easy to learn to score well on, even when it’s hard to practice the right behavior – the Bruine de Bruin, Parker and Fischhoff paper that Stanovich implies is the best attempt so far to produce an RQ test lists a sample question for the sunk cost bias that involves abandoning food when you’re too full at a restaurant. It’s obvious what answer produces a higher RQ score, but that doesn’t say much about how I’d behave when the food is in front of me.

He sometimes writes as if rationality were as close to being a single mental ability as IQ is, but at other times he implies it isn’t. I needed to read the Bruine de Bruin, Parker and Fischhoff paper to get real evidence. Their path independence component looks unrelated to the others. The remaining components have enough correlation with each other that there may be connections between them, but those correlations are lower than the correlations between the overall rationality score and IQ tests. So it’s far from clear whether a single RQ score is better than using the components as independent tests.

Given the importance he attaches to testing for and rewarding rationality, it’s disappointing that he devotes so little attention to how to do that.

He has some good explanations of why evolution would have produced minds with the irrational features we observe. He’s much less impressive when he describes how we should classify various biases.

I was occasionally annoyed that he treats disrespect for scientific authority as if it were equivalent to irrationality. The evidence for Bigfoot or extraterrestrial visitors may be too flimsy to belong in scientific papers, but when he says there’s “not a shred of evidence” for them, he’s either using a meaning of “evidence” that’s inappropriate when discussing the rationality of people who may be sensibly lazy about gathering relevant data, or he’s simply wrong.


Book review: Influence: The Psychology of Persuasion by Robert B. Cialdini.

This book gives clear descriptions of six strategies that salesmen use to influence customers, and provides advice on how we can somewhat reduce our vulnerability to being exploited by them. It is one of the best books for laymen about heuristics and biases.

It shows why the simplest quick fixes would produce more problems than they solve, by showing that there are good reasons why we use heuristics that create opportunities for people to exploit us.

The author’s willingness to admit that he has been exploited by these strategies makes it harder for readers to dismiss the risks as something only fools fall for.