The Human Mind

Book review: Counterclockwise: Mindful Health and the Power of Possibility, by Ellen J. Langer.

This book presents ideas about how attitudes and beliefs can alter our health and physical abilities.

The book’s name comes from a 1979 study by the author, in which nursing home residents came to act and look younger after being placed in an environment that reminded them of earlier days and being treated as capable of doing more than most expected they could.

One odd comment she makes is that there were no known measures of aging other than chronological age at the time of the 1979 study. She goes on to imply that little has changed since then – but it took me little effort to find a 1991 book, Biomarkers, which made a serious attempt at filling this void.

She disputes claims such as those popularized by Atul Gawande that teaching doctors to act more like machines (following checklists) will improve medical practice. She’s concerned that reducing the diversity of medical opinions will reduce our ability to benefit from getting a second opinion that could detect a mistake in the original diagnosis, and cites evidence that North Carolina residents have an unusually high tendency to seek second opinions, and also have signs of better health. But this only tells me that with little use of checklists, getting a second opinion is valuable. That doesn’t say much about whether adopting a culture of using checklists is better than adopting a culture of seeking second opinions. The North Carolina evidence doesn’t suggest a large enough health benefit to provide much competition with the evidence for checklists.

One surprising report is that cultures with positive views of aging seem to produce older people who have better memory than other cultures. It’s not clear what the causal mechanism is, but with the evidence coming from groups as different as mainland Chinese and deaf Americans, it seems likely that the beliefs cause the better memory rather than the better memory causing the beliefs.

Two interesting quotes from the book:

certainty is a cruel mindset

to tell us we’re “terminal” may be a self-fulfilling prophecy. There are no records of how often doctors have been correct or not after making this prediction.

Research indicates that cultures in which relationships can be formed and dissolved relatively easily produce more disclosure of intimate information between friends, probably due to a combination of greater need to invest in each relationship and lesser harm from taking risks that alter relationships.

The study compared Japanese culture to U.S. culture, but my impression is that there has also been a significant change over time in the U.S., with internet access increasing relationship mobility, followed by an increase in self-disclosure. (It’s possible that my impression was due to my move from New England to Silicon Valley in 1994 – there’s more social mobility in Silicon Valley, but I didn’t notice much change in self-disclosure until several years later).

It seems likely that the effects of the web on relationship mobility and self-disclosure will grow larger. The trend of increasing mobility has shown few signs of slowing, and the effects on self-disclosure probably lag by at least a few years.

Book review: Leadership and Self-Deception: Getting out of the Box, by the Arbinger Institute.

In spite of being marketed as mainly for corporate executives, this book’s advice is important for most interactions between people. Executives have more to gain from it, but I suspect they’re somewhat less willing to believe it.

I had already learned a lot about self-deception before reading this, but this book clarifies how to recognize and correct common instances in which I’m tempted to deceive myself. More importantly, it provides a way to explain self-deception to a number of people. I had previously despaired of explaining my understanding of self-deception to people who hadn’t already sought out the ideas I’d found. Now I can point people to this book. But I still can’t summarize it in a way that would change many people’s minds.

It’s written mostly as a novel, which makes it very readable without sacrificing much substance.

Some of the book’s descriptions don’t sound completely right to me. They describe people as acting “inside the box” or “outside the box” with respect to another person (not the same as the standard meaning of “thinking outside the box”) as if people normally did one or the other, but I think I often act somewhere in between those two modes. Also, the term “self-betrayal”, which I’d describe as acting selfishly and rationalizing the act as selfless, should not be portrayed as if the selfishness automatically causes self-deception. If people felt a little freer to admit that they act selfishly, they’d be less tempted to deceive themselves about their motives.

The book seems a bit too rosy about the benefits of following its advice. For instance, the book leaves the reader to imagine that Semmelweis benefited from admitting that he had been killing patients. Other accounts of Semmelweis suggest that he suffered, and the doctors who remained in denial prospered. Maybe he would have done much better if he had understood this book and been able to adopt its style. But it’s important to remember that self-deception isn’t an accident. It happens because it has sometimes worked.

Switch

Book review: Switch: How to Change Things When Change Is Hard, by Chip and Dan Heath.

This book uses an understanding of the limits to human rationality to explain how it’s sometimes possible to make valuable behavioral changes, mostly in large institutions, with relatively little effort.

The book presents many anecdotes about people making valuable changes, often demonstrating unusually creative thought. The theories about why the changes worked are not very original, but are presented better than in most other books.

Some of the successes are sufficiently impressive that I wonder whether they cherry-picked too much and made it look too easy. One interesting example that is a partial exception to this pattern is a comparison of two hospitals that tried to implement the same change, with one succeeding and the other failing. Even with a good understanding of the book’s ideas, few people looking at the differences between the hospitals would notice the importance of whether small teams met for afternoon rounds at patients’ bedsides or in a lounge where other doctors overheard the discussions.

They aren’t very thoughtful about whether the goals are wise. This mostly doesn’t matter, although it is strange to read on page 55 about a company that succeeded by focusing on short-term benefits to the exclusion of long-term benefits, and then on page 83 to read about a plan to get businesses to adopt a longer term focus.

Book review: Choke: What the Secrets of the Brain Reveal About Getting It Right When You Have To, by Sian Beilock.

This book provides some clues about why pressure causes some people to perform less well than they otherwise would, and gives simple (but not always easy) ways to reduce that effect. There’s a good deal of overlap between this book’s advice and other self-improvement advice. The book modestly enhances how I think about the techniques and how motivated I am to use them.

The main surprise about the causes is that people with large working memories are more likely to choke because they’re more likely to over-analyze a problem, presumably because they’re better at analyzing problems. They’re also less creative. There are also interesting comments about the role of small working memories in ADHD.

The book includes some interesting comments on how SAT tests provide misleading evidence of sexual differences in ability, and how social influences can affect sexual differences in ability (for example, having a more feminine name makes a girl less likely to learn math).

The book’s style is unusually pleasant.

Drive

Book review: Drive: The Surprising Truth About What Motivates Us, by Daniel H. Pink.

This book explores some of the complexities of what motivates humans. It attacks a stereotype that says only financial rewards matter, and exaggerates the extent to which people adopt that fallacy. His style is similar to Malcolm Gladwell’s, but with more substance.

The book’s advice is likely to cause some improvement in how businesses are run and in how people choose careers. But I wonder how many bosses will ignore it because their desire to exert control over people outweighs their desire to create successful companies.

I’m not satisfied with the way he and others classify motivations as intrinsic and extrinsic. While feelings of flow may be almost entirely internally generated, other motivations that he classifies as intrinsic seem to involve an important component of feeling that others are rewarding you with higher status/reputation.

Shirking may have been an important problem a century ago, for which financial rewards were an appropriate solution, but the nature of work has changed so that it’s much less common for workers to want to put less effort into a job. The author implies that this means standard financial rewards have become fairly unimportant factors in determining productivity. I think he underestimates the role they play in determining how goals are prioritized.

He believes the change in work that reduced the importance of financial incentives was the replacement of rule-following routine work with work that requires creativity. I suggest that another factor was that in 1900, work often required muscle-power that consumed almost as much energy as a worker could afford to feed himself.

He states his claims vaguely enough that they could be interpreted as implying that broad categories of financial incentives (including stock options and equity) work poorly. I checked one of the references that sounded like it might address that (“When performance-related pay backfires”), and found it only dealt with payments for completing specific tasks.

His complaints about excessive focus on quarterly earnings probably have some value, but it’s important to remember that it’s easy to err in the other direction as well (the dot-com bubble seemed to coincide with an unusual amount of effort at focusing on earnings 5 to 10 years away).

I’m disappointed that he advises not to encourage workers to compete against each other without offering evidence about its effects.

One interesting story is the bonus system at Kimley-Horn and Associates, where any employee can award another employee $50 for doing something exceptional. I’d be interested in more tests of this – is there something special about Kimley-Horn that prevents abuse, or would it work in most companies?

Tyler Cowen has a good video describing why we shouldn’t be too influenced by stories. He exaggerates a bit when he says

There are only a few basic stories. If you think in stories, that means you are telling yourself the same thing over and over

but his point that stories allow storytellers to manipulate our minds deserves more emphasis. For me, one of the hardest parts of learning how to beat the stock market was to admit that I did poorly when I was influenced by stories, and did well mainly when I relied on numbers that are available and standardized for most companies, and on mechanical rules which varied little between companies (I sometimes use different rules for different industries, but beyond that I try to avoid adapting my approach to different circumstances).

For example, the stories I heard about Enron’s innovative management style gave me a gut feeling that it was a promising investment. But its numbers showed an uninteresting company, and persuaded me to postpone any investment.

But I’ve only told you a story here (it’s so much easier to do than provide rigorous evidence). If you really want good reasons, try testing for yourself story versus non-story approaches to something like the stock market.

(HT Patri).

Book review: Hierarchy in the Forest: The Evolution of Egalitarian Behavior, by Christopher Boehm.

This book makes a good argument that a major change from strongly hierarchical societies to fairly egalitarian societies happened to the human race sometime after it diverged from chimpanzees and bonobos. This was not due to any change in attitudes toward status, but because language enabled low-status individuals to cooperate more effectively to restrain high-status individuals, and because of the equalizing effects of weapons. Hunter-gatherer societies seem rather consistently egalitarian, and the partial reversion to hierarchy in modern times may be due to the ability to accumulate wealth or the larger size of our societies.

He provides a plausible hypothesis that this change enabled group selection to become more powerful than in a typical species, but that doesn’t imply that group selection became as important as within-group selection, and he doesn’t have a good way of figuring out how important the effect was.

He demonstrates that humans became more altruistic, using a narrow biological definition of altruism, but it’s important to note that this only means agreeing to follow altruistic rules. He isn’t able to say much about how well people follow those rules when nobody notices what they’re doing.

Much of the middle of the book recounting anthropological evidence can be skipped without much loss – the most important parts are chapters 8 and 9.

Book review: Breakdown of Will, by George Ainslie.

This book analyzes will, mainly problems connected with willpower, as a form of intertemporal bargaining between a current self that highly values immediate temptation and future selves who prefer that current choices be more far-sighted. He contrasts simple models of rational agents who exponentially discount future utility with his more sophisticated and complex model of people whose natural discount curve is hyperbolic. Hyperbolic discounting causes time-inconsistent preferences, resulting in problems such as addiction. Intertemporal bargains can generate rules which bundle rewards to produce behavior more closely approximating the more consistent exponential discount model.
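The contrast between the two discount curves is easy to state: an exponential discounter values a reward A at delay D as A·e^(−rD), while a hyperbolic discounter values it as A/(1 + kD). A short sketch (the parameter values and reward amounts are my own illustrative choices, not taken from the book) shows how only the hyperbolic curve produces the preference reversals behind temptation:

```python
# A minimal sketch of Ainslie's contrast between exponential and
# hyperbolic discounting; parameters and amounts are illustrative
# assumptions, not values from the book.
import math

def exponential(amount, delay, rate=0.1):
    # Rational-agent model: value decays by a constant factor per unit time.
    return amount * math.exp(-rate * delay)

def hyperbolic(amount, delay, k=1.0):
    # Ainslie's model: value falls off as 1 / (1 + k * delay).
    return amount / (1 + k * delay)

# A smaller reward available sooner vs. a larger reward available later.
small, large = 50, 90

# Viewed from far away (delays of 11 and 13 time units), both kinds of
# discounter prefer the larger, later reward.
assert exponential(large, 13) > exponential(small, 11)
assert hyperbolic(large, 13) > hyperbolic(small, 11)

# Viewed up close (delays of 1 and 3), the exponential discounter stays
# consistent, but the hyperbolic discounter's preference reverses: the
# immediate temptation now looks better than the larger reward.
assert exponential(large, 3) > exponential(small, 1)
assert hyperbolic(small, 1) > hyperbolic(large, 3)
print("hyperbolic preference reversal demonstrated")
```

The reversal in the last assertion is the mathematical core of the book’s account of temptation: the same person prefers the far-sighted choice in advance, then switches once the smaller reward is imminent.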

He also discusses problems associated with habituation to rewards, and strategies that can be used to preserve an appetite for common rewards. For example, gambling might sometimes be rational if losing money that way restores an appetite for acquiring wealth.

Some interesting ideas mentioned are that timidity can be an addiction, and that pain involves some immediate short-lived reward (to draw attention) in addition to the more obvious negative effects.

For someone who already knows a fair amount about psychology, only small parts of the book will be surprising, but most parts will help you think a bit more clearly about a broad range of problems.

Book Review: Simple Heuristics That Make Us Smart by Gerd Gigerenzer and Peter M. Todd.

This book presents serious arguments in favor of using simple rules to make most decisions. They present many examples where getting a quick answer by evaluating a minimal amount of data produces almost as accurate a result as highly sophisticated models. They point out that ignoring information can minimize some biases:

people seldom consider more than one or two factors at any one time, although they feel that they can take a host of factors into account

(Tetlock makes similar suggestions).
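The flagship rule of this research program is the “take-the-best” heuristic: when choosing between two options, check cues one at a time in order of validity and decide on the first cue that discriminates, ignoring all remaining information. A minimal sketch (the cities and cue values below are invented for illustration, not data from the book):

```python
# A minimal sketch of the "take-the-best" heuristic from Gigerenzer and
# Todd's program. The cue ordering and city data are invented examples.

# Cues for guessing which of two cities is larger, listed in order of
# assumed validity (1 = cue present, 0 = absent).
CUES = ["has_major_airport", "is_state_capital", "has_university"]

CITY_DATA = {
    "Springfield": {"has_major_airport": 0, "is_state_capital": 1, "has_university": 1},
    "Riverton":    {"has_major_airport": 1, "is_state_capital": 0, "has_university": 1},
}

def take_the_best(a, b, data=CITY_DATA, cues=CUES):
    """Return the option favored by the first discriminating cue, or None."""
    for cue in cues:
        va, vb = data[a][cue], data[b][cue]
        if va != vb:          # first cue that discriminates decides;
            return a if va > vb else b   # everything after it is ignored
    return None               # no cue discriminates: fall back to guessing

print(take_the_best("Springfield", "Riverton"))  # decided by the airport cue alone
```

Note that the second and third cues are never consulted here: like the quoted finding, the decision rests on one or two factors even though more information is available.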

They appear to overstate the extent to which their evidence generalizes. They test their stock market heuristic on a mere six months’ worth of data. If they knew much about stock markets, they’d realize that there are a lot more bad heuristics which work for a few years at a time than there are good heuristics. I’ll bet that theirs will do worse than random in most decades.

The book’s conclusions can be understood by skimming small parts of the book. Most of the book is devoted to detailed discussions of the evidence. I suggest following the book’s advice when reading it – don’t try to evaluate all the evidence, just pick out a few pieces.