rationality

Book review: Thinking, Fast and Slow, by Daniel Kahneman.

This book is an excellent introduction to the heuristics and biases literature, but only small parts of it will seem new to those who are familiar with the subject.

While the book mostly focuses on conditions where slow, logical thinking can do better than fast, intuitive thinking, I find it impressive that he was careful to consider the views of those who advocate intuitive thinking, and that he collaborated with a leading advocate of intuition to resolve many of their apparent disagreements (mainly by clarifying when each kind of thinking is likely to work well).

His style shows that he has applied some of the lessons of the research in his field to his own writing, such as by giving clear examples. (“Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular”).

He sounds mildly overconfident (and believes mild overconfidence can be ok), but occasionally provides examples of his own irrationality.

He has good advice for investors (e.g. reduce loss aversion via “broad framing” – think of a single loss as part of a large class of results that are on average profitable), and appropriate disdain for investment advisers. But he goes overboard when he treats the stock market as unpredictable. The stock market has some real regularities that could be exploited. Most investors fail to find them because they see many more regularities than are real, are overconfident about their ability to distinguish the real ones, and because it’s hard to distinguish valuable feedback (which often takes many years to get) from misleading feedback.
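A toy simulation (my own sketch, not an example from the book) shows why the broad frame is easier to live with: a single bet with positive expected value loses often, but a large batch of such bets almost never loses overall. The bet sizes below are made up.

```python
# Toy simulation of "broad framing": one positive-expected-value bet loses
# fairly often, but a batch of 100 such bets rarely loses money overall.
# The bet (win $200 or lose $100 on a coin flip) is made up for illustration.
import random

random.seed(0)

def play(n_bets):
    return sum(200 if random.random() < 0.5 else -100 for _ in range(n_bets))

trials = 10_000
single_loss_rate = sum(play(1) < 0 for _ in range(trials)) / trials
batch_loss_rate = sum(play(100) < 0 for _ in range(trials)) / trials
print(f"chance of losing on 1 bet:    {single_loss_rate:.2f}")  # ~0.50
print(f"chance of losing on 100 bets: {batch_loss_rate:.2f}")   # ~0.00
```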

I wish I could find an equally good book about the opposite problem: overusing logical analysis when I want the speed of intuition (e.g. “analysis paralysis”).

Book Review: Simple Heuristics That Make Us Smart by Gerd Gigerenzer and Peter M. Todd.

This book presents serious arguments in favor of using simple rules to make most decisions. The authors present many examples where getting a quick answer by evaluating a minimal amount of data produces a result almost as accurate as that of a highly sophisticated model. They point out that ignoring information can minimize some biases:

people seldom consider more than one or two factors at any one time, although they feel that they can take a host of factors into account

(Tetlock makes similar suggestions).
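To make that concrete, here is a minimal sketch (my own illustration with made-up cue data, not code from the book) of a “take the best” style rule of the kind the authors study: compare two options on cues ordered by validity and decide as soon as one cue discriminates.

```python
# A minimal sketch of a "take the best" style heuristic: compare two options
# on cues ordered from most to least valid, and stop at the first cue that
# discriminates. The cue names and values below are made up for illustration.

def take_the_best(option_a, option_b, cues):
    """Return the option favored by the first discriminating cue, or None."""
    for cue in cues:  # cues are assumed to be sorted by validity, best first
        a, b = option_a[cue], option_b[cue]
        if a != b:
            return option_a if a > b else option_b
    return None  # no cue discriminates; fall back to guessing

# Hypothetical task: which of two cities has the larger population?
cues = ["has_major_airport", "is_state_capital", "has_university"]
city_a = {"name": "A", "has_major_airport": 1, "is_state_capital": 0, "has_university": 1}
city_b = {"name": "B", "has_major_airport": 1, "is_state_capital": 1, "has_university": 0}

print(take_the_best(city_a, city_b, cues)["name"])  # prints "B"
```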

They appear to overstate the extent to which their evidence generalizes. They test their stock market heuristic on a mere six months’ worth of data. If they knew much about stock markets, they’d realize that there are many more bad heuristics that work for a few years at a time than there are good heuristics. I’ll bet that theirs will do worse than random in most decades.

The book’s conclusions can be understood by skimming small parts of the book. Most of the book is devoted to detailed discussions of the evidence. I suggest following the book’s advice when reading it – don’t try to evaluate all the evidence, just pick out a few pieces.

Book review: What Intelligence Tests Miss: The Psychology of Rational Thought by Keith E. Stanovich.

Stanovich presents extensive evidence that rationality is very different from what IQ tests measure, and the two are only weakly related. He describes good reasons why society would be better if people became more rational.

He is too optimistic that becoming more rational will help most people who accomplish it. Overconfidence provides widespread benefits to people who use it in job interviews, political discussions, etc.

He gives some advice on how to be more rational, such as considering the opposite of each new hypothesis you are about to start believing. But will training yourself to do that on test problems cause you to do it when it matters? I don’t see signs that Stanovich practiced it much while writing the book. The most important implication he wants us to draw from the book is that we should develop and use Rationality Quotient (RQ) tests for at least as many purposes as IQ tests are used. But he doesn’t mention any doubts that I’d expect him to have if he thought about how rewarding high RQ scores might affect the validity of those scores.

He reports that high-IQ people can avoid some framing effects and overconfidence, but only when told to do so. Also, the sunk cost bias test looks easy to learn to score well on, even when it’s hard to practice the right behavior – the Bruine de Bruin, Parker and Fischhoff paper that Stanovich implies is the best attempt so far to produce an RQ test lists a sample sunk-cost question that involves abandoning food when you’re too full at a restaurant. It’s obvious what answer produces a higher RQ score, but that doesn’t say much about how I’d behave when the food is in front of me.

He sometimes writes as if rationality were as close to being a single mental ability as IQ is, but at other times he implies it isn’t. I needed to read the Bruine de Bruin, Parker and Fischhoff paper to get real evidence. Their path independence component looks unrelated to the others. The remaining components have enough correlation with each other that there may be connections between them, but those correlations are lower than the correlations between the overall rationality score and IQ tests. So it’s far from clear whether a single RQ score is better than using the components as independent tests.
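A toy illustration of that comparison (with entirely made-up scores, not data from their paper): if each hypothetical component reflects IQ plus its own noise, the composite’s correlation with IQ can easily exceed the components’ correlations with each other, which is the pattern that makes a single RQ score questionable.

```python
# Toy illustration of the correlation pattern described above, using made-up
# scores for three hypothetical rationality components and an IQ measure.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
iq = rng.normal(100, 15, n)
# Each hypothetical component shares some variance with IQ plus its own noise.
comp1 = 0.5 * iq + rng.normal(0, 15, n)
comp2 = 0.5 * iq + rng.normal(0, 15, n)
comp3 = 0.5 * iq + rng.normal(0, 15, n)
rq = comp1 + comp2 + comp3  # naive composite "RQ" score

print("component intercorrelation:", round(np.corrcoef(comp1, comp2)[0, 1], 2))  # ~0.2
print("composite RQ vs IQ:", round(np.corrcoef(rq, iq)[0, 1], 2))                # ~0.65
```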

Given the importance he attaches to testing for and rewarding rationality, it’s disappointing that he devotes so little attention to how to do that.

He has some good explanations of why evolution would have produced minds with the irrational features we observe. He’s much less impressive when he describes how we should classify various biases.

I was occasionally annoyed that he treats disrespect for scientific authority as if it were equivalent to irrationality. The evidence for Bigfoot or extraterrestrial visitors may be too flimsy to belong in scientific papers, but when he says there’s “not a shred of evidence” for them, he’s either using a meaning of “evidence” that’s inappropriate when discussing the rationality of people who may be sensibly lazy about gathering relevant data, or he’s simply wrong.

At last Sunday’s Overcoming Bias meetup, we tried paranoid debating. We formed groups of mostly 4 people (5 for the first round or two) and competed to produce the most accurate guesses at trivia questions with numeric answers, with one person secretly designated to be rewarded for convincing the team to produce the least accurate answer.

It was fun and may have taught us a little about becoming more rational. But in order to be valuable, it should be developed further to become a means of testing rationality. As practiced, it tested some combination of trivia knowledge and rationality. The last round reduced the importance of trivia knowledge by rewarding good confidence intervals instead of a single good answer. I expect there are ways of using confidence intervals that remove the effects of trivia knowledge from the scores.

I’m puzzled about why people preferred the spokesman version to the initial version where the median number was the team’s answer. Designating a spokesman publicly as a non-deceiver provides information about who the deceiver is. In one case, we determined who the deceiver was by two of us telling the spokesman that we were sufficiently ignorant about the subject relative to him that he should decide based only on his knowledge. That gave our team a big advantage that had little relation to our rationality. I expect the median approach can be extended to confidence intervals by taking the median of the lows and the median of the highs, but I’m not fully confident that there are no problems with that.
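Here is a minimal sketch of that aggregation rule (my own construction, not something we actually used): each player submits a low and a high, and the team’s interval is the median of the lows paired with the median of the highs, which limits how much one deceiver’s extreme interval can move the team answer.

```python
# Sketch of the aggregation rule described above: combine per-player
# (low, high) confidence intervals by taking the median of the lows and the
# median of the highs, so a single extreme interval has limited influence.
from statistics import median

def team_interval(intervals):
    lows = [low for low, high in intervals]
    highs = [high for low, high in intervals]
    return median(lows), median(highs)

# Hypothetical round: three honest players plus one deceiver (last entry).
print(team_interval([(40, 60), (45, 70), (35, 65), (500, 900)]))  # (42.5, 67.5)
```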

The use of semi-randomly selected groups meant that scores were weak signals. If we want to evaluate individual rationality, we’d need rather time-consuming trials of many permutations of the groups. Paranoid debating is more suited to comparing groups (e.g. a group of people credentialed as the best students from a rationality dojo, or the people most responsible for decisions in a hedge fund).

See more comments at Less Wrong.

This paper reports that people with autistic spectrum symptoms are less biased by framing effects. Unfortunately, the researchers suggest that the increased rationality is connected to an inability to incorporate emotional cues into some decision making processes, so the rationality comes at a cost in social skills.

Some analysis of how these results fit in with the theory that autism is the opposite end of a spectrum from schizophrenia can be found here:

It seems that the schizophrenic is working on the basis of an internal model and is ignoring external feedback: thus his reliance on previous response. I propose that an opposite pattern would be observed in Autistics with Autistics showing no or less mutual information, as they have poor self-models; but greater cross-mutual information, as they would base their decisions more on external stimuli or feedback.

Book review: Mindless Eating: Why We Eat More Than We Think by Brian Wansink.
This well-written book might help a few people lose a significant amount of weight, and help many lose a tiny bit.
Some of his advice seems to demand as much willpower from me as a typical diet (e.g. eat slowly), but he gives many small suggestions and advises us to pick and choose the most appropriate ones. There’s enough variety and novelty among his suggestions that most people are likely to find at least one feasible way to lose a few pounds.
A large fraction of his suggestions require none of the willpower that a typical diet requires, but most people will reject them because their egos will insist that only people less rational than they are make the kinds of mistakes the book’s suggestions would fix.
Most of the book’s claims seem to be backed up by careful research. But I couldn’t find any research to back up the claim that eating 100 fewer calories per day will cause people to lose 10 pounds in ten months. He presents evidence that such a diet doesn’t need to make people feel deprived over the short time periods that have been studied. But there’s been speculation among critics of diet books that our bodies have a natural “set point” weight, and that diets which work for a while have no long-term effect because a lower body weight increases the desire to return to the set point. This book offers only weak anecdotal evidence against that possibility.
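For what it’s worth, the arithmetic behind that kind of claim only needs the common (and contested) rule of thumb that a pound of body fat corresponds to roughly 3,500 calories; the set-point worry is precisely a reason to doubt that the rule keeps holding over many months.

```python
# Back-of-the-envelope check of the claim above, using the common (and
# contested) rule of thumb of roughly 3,500 calories per pound of fat.
CALORIES_PER_POUND = 3500

def pounds_lost(daily_deficit, days):
    return daily_deficit * days / CALORIES_PER_POUND

# 100 fewer calories per day over ten months (~300 days):
print(round(pounds_lost(100, 300), 1))  # 8.6 pounds, in the ballpark of the claimed 10
```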
But even if it fails as a diet book, it may help you understand how the taste of your food is affected by factors other than the food itself.

Bryan Caplan has a good post arguing that democracy produces worse results than rational ignorance among voters would explain.
However, one aspect of his style annoys me – his use of the word irrationality to describe what’s wrong with voter thinking focuses on what is missing from voter thought processes rather than on the socially undesirable features that are present (many economists use the word irrationality this way). I hope the soon-to-be-published book version of this post devotes more attention to what voters are doing that differs from boundedly rational attempts at choosing the best candidates (some of which I suspect fall into what many of us would call selfishly rational motives, even though economists usually classify them as irrational). Some of the motives I suspect are important: the desire to signal one’s group membership; endowment effects, which are one of the many reasons people treat existing jobs as if they were more valuable than the new and more productive jobs that could be created; and reputation effects, where people stick with whatever position they held in the past because updating their beliefs in response to new evidence would imply that their original positions weren’t as wise as they want to imagine.
Alas, his policy recommendations are not likely to be very effective and are generally not much easier to implement than futarchy (which I consider to be the most promising approach to dealing with the problems of democracy). For example:

Imagine, for example, if the Council of Economic Advisers, in the spirit of the Supreme Court, had the power to invalidate legislation as “uneconomical.”

If I try hard enough, I can imagine this approach working well. But it would take a lot more than Caplan’s skills at persuasion to get voters to go along with this, and it’s not hard to imagine that such an institution would develop an understanding of the concept of “uneconomical” that is much less desirable than Caplan’s or mine.

Book review: Expert Political Judgment: How Good Is It? How Can We Know? by Philip E. Tetlock.
This book is a rather dry description of good research into the forecasting abilities of people who are regarded as political experts. It is unusually fair and unbiased.
His most important finding about what distinguishes the worst from the not-so-bad is that those on the hedgehog end of Isaiah Berlin’s spectrum (who derive predictions from a single grand vision) are wrong more often than those near the fox end (who use many different ideas). He convinced me that this finding is approximately right, but it leaves me with questions.
Does the correlation persist at the fox end of the spectrum, or do the most fox-like subjects show some diminished accuracy?
How do we reconcile his evidence that humans who think in more complex ways do better than simplistic thinkers with his evidence that simple autoregressive models beat all humans? That seems to suggest there’s something imperfect about the hedgehog-fox spectrum. Maybe a better spectrum would measure how much data influences a forecaster’s worldview?
Another interesting finding is that optimists tend to be more accurate than pessimists. I’d like to know how broad a set of domains this applies to. It certainly doesn’t apply to predicting software shipment dates. Does it apply mainly to domains where experts depend on media attention?
To what extent can different ways of selecting experts change the results? Tetlock probably chose subjects that resemble those who most people regard as experts, but there must be ways of selecting experts which produce better forecasts. It seems unlikely they can match prediction markets, but there are situations where we probably can’t avoid relying on experts.
He doesn’t document his results as thoroughly as I would like (even though he’s thorough enough to be tedious in places):
I can’t find his definition of extremists. Is it those who predict the most change from the status quo? Or the farthest from the average forecast?
His description of how he measured the hedgehog-fox spectrum has a good deal of quantitative evidence, but not quite enough for me to check where I would fall on that spectrum.
How does he produce a numerical time series for his autoregressive models? It’s not hard to guess for inflation, but for the end of apartheid I’m rather uncertain.
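As an aside, the “simple autoregressive models” mentioned above amount to predicting next period’s value as a linear function of this period’s value. Here is a minimal sketch (my own illustration, not Tetlock’s procedure, with a made-up inflation-like series):

```python
# Sketch of a simple AR(1) baseline of the kind mentioned above: fit
# y[t+1] = c + phi * y[t] by least squares and forecast one step ahead.
# The series below is made up for illustration.
import numpy as np

series = np.array([2.1, 2.4, 2.9, 3.1, 2.8, 2.5, 2.7, 3.0, 3.3, 3.1])

x, y = series[:-1], series[1:]   # pairs of (this period, next period)
phi, c = np.polyfit(x, y, 1)     # slope and intercept via least squares
forecast = c + phi * series[-1]  # one-step-ahead forecast

print(f"next-period forecast: {forecast:.2f}")
```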
Here’s one quote that says a lot about his results:

Beyond a stark minimum, subject matter expertise in world politics translates less into forecasting accuracy than it does into overconfidence.

Book review: Stumbling on Happiness by Daniel Gilbert.
This book is a colorful explanation of why we are less successful at finding happiness than we expect. It shows many similarities between the mistakes we make in foreseeing how happy we will be and the mistakes we make in perceiving the present or remembering the past. That makes it easy to see that those errors are natural results of shortcuts our minds take to minimize the amount of data our imagination needs to process (e.g. filling in our imagination with guesses, as our mind does with the blind spot in our eye).
One of the most important types of biases is what he calls presentism (a term he borrows from historians and extends to deal with forecasting). When we imagine the past or future, our minds often employ mental mechanisms that were originally adapted to perceive the present, and we retain biases to give more weight to immediate perceptions than to what we imagine. That leads to mistakes such as letting our opinions of how much food we should buy be overly influenced by how hungry we are now, or Wilbur Wright’s claim in 1901 that “Man will not fly for 50 years.”
This is more than just a book about happiness. It gives me a broad understanding of human biases that I hope to apply to other areas (e.g. it has given me some clues about how I might improve my approach to stock market speculation).
But it’s more likely that the book’s style will make you happy than that the knowledge in it will cause you to use the best evidence available (i.e. observations of what makes others happy) when choosing actions to make yourself happy. Instead, you will probably continue to overestimate your ability to predict what will make you happy and overestimate the uniqueness that you think makes the experience of others irrelevant to your own pursuit of happiness.
I highly recommend the book.
Some drawbacks:
His analysis of memetic pressures that cause false beliefs about happiness to propagate is unconvincing. He seems to want a very simple theory, but I doubt the result is powerful enough to explain the extent of the myths. A full explanation would probably require the same kind of detailed analysis of biases that the rest of the book contains.
He leaves the impression that he thinks he’s explained most of the problems with achieving happiness, when he probably hasn’t done that (it’s unlikely any single book could).
He presents lots of experimental results, but he doesn’t present the kind of evidence needed to prove that presentism is a consistent problem across a wide range of domains.
He fails to indicate how well he follows his own advice. For instance, does he have any evidence that writing a book like this makes the author happy?

Robin Hanson writes in a post on Intuition Error and Heritage:

Unless you can see a reason to have expected to be born into a culture or species with more accurate than average intuitions, you must expect your cultural or species specific intuitions to be random, and so not worth endorsing.

Deciding whether an intuition is species-specific and no more likely than random to be right seems a bit hard, due to the current shortage of species whose cultures address many of the disputes humans have.
The ideas in this quote follow logically from other essays of Robin’s that I’ve read, but phrasing them this way makes them seem superficially hard to reconcile with arguments by Hayek that we should respect the knowledge contained in culture.
Part of this apparent conflict seems to be due to Hayek’s emphasis on intuitions for which there is some unobvious and inconclusive supporting evidence. Hayek wasn’t directing his argument at a random culture, but rather at a culture for which there was some evidence of better-than-random results, and it would make less sense to apply his arguments to, say, North Korean society. For many other intuitions that Hayek cared about, the number of cultures which agree with the intuition may be large enough to constitute evidence in support of the intuition.
Some intuitions may be appropriate for a culture even though they were no better than random when first adopted. Driving on the right side of the road is a simple example. The arguments given in favor of a judicial bias toward stare decisis suggest this is just the tip of an iceberg.
Some of this apparent conflict may be due to the importance of treating interrelated practices together. For instance, laws against extramarital sex might be valuable in societies where people depend heavily on marital fidelity, but not in societies where a divorced person can support herself comfortably. A naive application of Robin’s rule might lead the former society to decide such a law is arbitrary, when a Hayekian might wonder whether it is better to first analyze whether to treat the two practices as a unit that should only be altered together.
I’m uncertain whether these considerations fully reconcile the two views, or whether Hayek’s arguments need more caveats.