Book review: The Rationality Quotient: Toward a Test of Rational Thinking, by Keith E. Stanovich, Richard F. West and Maggie E. Toplak.
This book describes an important approach to measuring individual rationality: an RQ test that loosely resembles an IQ test. But it pays inadequate attention to the most important problems with tests of rationality.
My biggest concern about rationality testing is what happens when people anticipate the test and are motivated to maximize their scores (as is the case with IQ tests). Do they:
- learn to score high by “cheating” (i.e. learn what answers the test wants, without learning to apply that knowledge outside of the test)?
- learn to score high by becoming more rational?
- not change their score much, because they’re already motivated to do as well as their aptitudes allow (as is mostly the case with IQ tests)?
Alas, the book treats these issues as an afterthought. Their test knowingly uses questions for which cheating would be straightforward, such as asking whether the test subject believes in science, and whether they prefer to get $85 now rather than $100 in three months. (If they could use real money, that would drastically reduce my concerns about cheating. I’m almost tempted to advocate doing that, but doing so would hinder widespread adoption of the test, even if using real money added enough value to pay for itself.)
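As a side note on that last question: the implied discount rate is steep, which is part of why the answer is treated as diagnostic. This is my own arithmetic, not the book's; the function name is mine, and the sketch just annualizes the tradeoff by compounding:

```python
# Hypothetical illustration (my arithmetic, not the book's): the annualized
# discount rate implied by preferring $85 now over $100 in three months.
def implied_annual_rate(now, later, months):
    """Annualized rate at which `now` today is equivalent to `later` in `months`."""
    periods_per_year = 12 / months          # e.g. 3 months -> 4 periods per year
    return (later / now) ** periods_per_year - 1

rate = implied_annual_rate(85, 100, 3)
print(f"{rate:.1%}")  # roughly 91.6% per year
```

Taking the $85 amounts to demanding better than a 90% annual return to wait, which is why test designers read it as impatience rather than prudent investing.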
Book review: Bonds That Make Us Free: Healing Our Relationships, Coming to Ourselves, by C. Terry Warner.
This book consists mostly of well-written anecdotes demonstrating how to recognize common kinds of self-deception and motivated cognition that cause friction in interpersonal interactions. He focuses on ordinary motives that lead to blaming others for disputes in order to avoid blaming ourselves.
He shows that a willingness to accept responsibility for negative feelings about personal relationships usually makes everyone happier, by switching from zero-sum or negative-sum competitions to cooperative relationships.
He describes many examples where my gut reaction is that person B has done something that justifies person A’s decision to get upset, and then explains why person A should act nicer anyway. He does this without the “don’t be judgmental” attitude that often accompanies advice to be more understanding.
Most of the book focuses on the desire to blame others when something goes wrong, but he also notes that blaming nature (or oneself) can produce similar problems and have similar solutions. That insight describes me better than the typical anecdotes do, and has helped a bit in enabling me to stop wasting effort fighting reality.
I expect that there are a moderate number of abusive relationships where the book’s advice would be counterproductive, but that most people (even many who have apparently abusive spouses or bosses) will be better off following the book’s advice.
Book review: Leadership and Self-Deception: Getting out of the Box, by the Arbinger Institute.
In spite of being marketed as mainly for corporate executives, this book’s advice is important for most interactions between people. Executives have more to gain from it, but I suspect they’re somewhat less willing to believe it.
I had already learned a lot about self-deception before reading this, but this book clarifies how to recognize and correct common instances in which I’m tempted to deceive myself. More importantly, it provides a way to explain self-deception to a number of people. I had previously despaired of explaining my understanding of self-deception to people who hadn’t already sought out the ideas I’d found. Now I can point people to this book. But I still can’t summarize it in a way that would change many people’s minds.
It’s written mostly as a novel, which makes it very readable without sacrificing much substance.
Some of the book’s descriptions don’t sound completely right to me. They describe people as acting “inside the box” or “outside the box” with respect to another person (not the same as the standard meaning of “thinking outside the box”) as if people normally did one or the other, whereas I think I often act somewhere in between those two modes. Also, the term “self-betrayal”, which I’d describe as acting selfishly and rationalizing the act as selfless, should not be portrayed as if the selfishness automatically causes self-deception. If people felt a little freer to admit that they act selfishly, they’d be less tempted to deceive themselves about their motives.
The book seems a bit too rosy about the benefits of following its advice. For instance, the book leaves the reader to imagine that Semmelweis benefited from admitting that he had been killing patients. Other accounts of Semmelweis suggest that he suffered, and the doctors who remained in denial prospered. Maybe he would have done much better if he had understood this book and been able to adopt its style. But it’s important to remember that self-deception isn’t an accident. It happens because it has sometimes worked.
I’ve been learning how to read body language and how to alter my body language, and I’m wondering how much of the changes in my body language that I’m hoping to create should be considered honest communications.
Increased eye contact and mimicking a person’s body language seem to be unavoidably genuine expressions of interest. The fact that people can have bad motives for such interest doesn’t seem like it should make me hesitant when my motives for being interested in someone are good.
Posture is harder to evaluate. One function of altering my posture to look as tall as I can is to signal desirable qualities that correlate with height (e.g. good nutrition as a child, leading to good health and a well-developed brain). If this led to costly status seeking, I’d feel guilty. But there’s little cost for everyone to match the degree to which I’m looking taller by paying attention to my posture, and little hope that competition for status can be reduced by people such as me ignoring my posture, so I feel negligible guilt.
Another function of posture is to indicate confidence. I’d feel guilty about artificially increasing the confidence I express about a specific factual claim. Most communication is either expressing factual claims of some sort or has no clear content. I’m unsure how to treat the confidence expressed by posture. It seems to say something about some poorly specified anticipated outcomes. Is it mostly a self-fulfilling prophecy, so that it will honestly indicate whether I’m going to be happy in the future even if I alter it in a way that seems artificial? I can’t pin down what it’s expressing well enough to say.
I often hide my hands in my pockets, and that reportedly gets interpreted as saying that I’m hiding something. I suspect this is a false signal. As far as I can recall, when I fail to communicate something that people might want to hear, it’s due to something like not figuring out whether someone wants to hear it, or being too slow to notice a break in a conversation in which to start talking. If I can alter my hand position to better indicate when I’d like people to be more inquisitive about my thoughts, that will improve communication.
Hand movements such as scratching my head that get interpreted as nervousness are more problematic. That scratching does have some correlation with nervousness. I feel a bit dishonest when I hide increased nervousness by consciously resisting my temptation to scratch my head. But some of my head-scratching habits seem to reflect something other than nervousness (maybe a mild version of dermatillomania associated with obsessive tendencies that fall short of being a disorder), and are probably creating false impressions with most people. I’m unsure whether I can eliminate those false impressions without also eliminating accurate signals.
Robin Hanson has another interesting paper on human attitudes toward truth and on how they might be improved.
See also some related threads on the extropy-chat list here and here.
One issue that Robin raises involves disputes between us and future generations over how much we ought to constrain our descendants to be similar to us. He is correct that some of this disagreement results from what he calls “moral arrogance” (i.e. at least one group of people overestimating their ability to know what is best). But even if we and our descendants were objective about analyzing the costs and benefits of the alternatives, I would expect some disagreement to remain, because different generations will want to maximize the interests of different groups of beings. Conflicting interests between two groups that exist at the same time can in principle be resolved by one group paying the other to change its position. But when one group exists only in the future, and its existence is partly dependent on which policy is adopted now, it’s difficult to see how such disagreements could be resolved in a way that all could agree upon.