At a recent LessWrong meetup, someone described his GTD system with the metaphor “automated self”, to emphasize that offloading things from his mind into the GTD system helps him act more like a robot. I like the idea of automating some of my actions so that I can further separate planning and execution. The term “automated self” is a good way to remember that goal, and deserves wider use. Plus I like to distinguish myself from those who attach negative connotations to “robot-like”.
Book review: The Righteous Mind: Why Good People Are Divided by Politics and Religion, by Jonathan Haidt.
This book carefully describes the evolutionary origins of human moralizing, explains why tribal attitudes toward morality have both good and bad effects, and shows how people who want to avoid moral hostility can do so.
Parts of the book are arranged to describe the author’s transition away from standard delusions: that morality results from the narratives we use to justify it, and mistaken explanations for why other people held alien-sounding ideologies. His description of how his study of psychology led him to overcome those delusions makes it hard for readers who agree with him to feel very superior to those who disagree.
He hints at personal benefits from abandoning partisanship (“It felt good to be released from partisan anger.”), so he doesn’t rely on altruistic motives for people to accept his political advice.
One part of the book that surprised me was the comparison between human morality and human taste buds. Some ideologies are influenced a good deal by all 6 types of human moral intuition. But the ideology that pervades most of academia respects only 3 of them (care, liberty, and fairness). That creates a difficult communication gap between academics and cultures that employ other intuitions, such as sanctity, in their moral systems, much as people who had only ever tasted sweet and salty foods would have trouble imagining a desire for sourness.
He sometimes gives the impression of being more of a moral relativist than I’d like, but a careful reading of the book shows that there are a fair number of contexts in which he believes some moral tastes produce better results than others.
His advice could be interpreted as encouraging us to replace our existing notions of “the enemy” with Manichaeans. Would his advice polarize societies into Manichaeans and non-Manichaeans? Maybe, but at least the non-Manichaeans would have a decent understanding of why Manichaeans disagreed with them.
The book also includes arguments that group selection played an important role in human evolution, and that an increase in cooperation (group-mindedness, somewhat like the cooperation among bees) had to evolve before language could become valuable enough to evolve. This is an interesting but speculative alternative to the common belief that language was the key development that differentiated humans from other apes.
Book review: The Ego Tunnel: The Science of the Mind and the Myth of the Self, by Thomas Metzinger.
This book describes aspects of consciousness in ways that are often, but not consistently, clear and informative. His ideas are not revolutionary, but will clarify our understanding.
I didn’t find his tunnel metaphor very helpful.
I like his claim that “conscious information is exactly that information that must be made available for every single one of your cognitive capacities at the same time”. That may be an exaggeration, but it describes an important function of consciousness.
He makes surprisingly clear and convincing arguments that there are degrees of consciousness, so that some other species probably have some but not all of what we think of as human consciousness. He gives interesting examples of ways that humans can be partially conscious, e.g. people with Cotard’s Syndrome can deny their own existence.
His discussion of ethical implications of neuroscience points out some important issues to consider, but I’m unimpressed with his conclusion that we shouldn’t create conscious machines. He relies on something resembling the Precautionary Principle that says we should never risk causing suffering in an artificial entity. As far as I can tell, the same reasoning would imply that having children is unethical because they might suffer.
Book review: Switch: How to Change Things When Change Is Hard, by Chip and Dan Heath.
This book uses an understanding of the limits to human rationality to explain how it’s sometimes possible to make valuable behavioral changes, mostly in large institutions, with relatively little effort.
The book presents many anecdotes about people making valuable changes, often demonstrating unusually creative thought. The theories about why the changes worked are not very original, but are presented better than in most other books.
Some of the successes are sufficiently impressive that I wonder whether they cherry-picked too much and made it look too easy. One interesting example that is a partial exception to this pattern is a comparison of two hospitals that tried to implement the same change, with one succeeding and the other failing. Even with a good understanding of the book’s ideas, few people looking at the differences between the hospitals would notice the importance of whether small teams met for afternoon rounds at patients’ bedsides or in a lounge where other doctors overheard the discussions.
They aren’t very thoughtful about whether the goals are wise. This mostly doesn’t matter, although it is strange to read on page 55 about a company that succeeded by focusing on short-term benefits to the exclusion of long-term benefits, and then on page 83 to read about a plan to get businesses to adopt a longer term focus.
Book review: Greatness: Who Makes History and Why, by Dean Keith Simonton.
This broad and mediocre survey of the psychology of people who stand out in history probably contains a fair number of good ideas, but it’s hard to separate them from the many ideas that are questionable guesses. He’s inconsistent about distinguishing his guesses from claims backed by good evidence.
One of the clearest examples is his assertion that childhood adversity builds character. He presents evidence that eminent figures were unusually likely to have had a parent die early, and describes this as the “most impressive proof” of his claim. He ignores the possibility that those people came from families with a pattern of taking risks unusual enough to explain that evidence.
In other places, he makes mistakes which seemed reasonable when the book was published, such as “Mendelian laws of inheritance are blind to whether an individual is first-born or later-born” (parental age has a measurable effect on mutation rates).
He avoids some of the worst mistakes that a psychology of history could make, such as trying to psychoanalyze individuals without having enough information about them.
He mentions some approaches to analyzing presidential addresses and corporate letters to stockholders, which have some potential to be used in predicting whether leaders have the appropriate personality for their jobs. I wonder what would happen if many voters/stockholders demanded that leaders pass tests of this nature (I’m assuming the tests can be scored objectively, but that may be a shaky assumption). I’m confident that we’d get leaders with rhetoric that passes those tests. Would that simply mean the leaders change their rhetoric, or would it be hard enough to maintain a mismatch between rhetoric and thought patterns that we’d get leaders with better thought patterns?