Book review: The Execution Channel by Ken MacLeod.
The style of this book is better than that of the other MacLeod books I’ve read, but not good enough that style alone justifies reading it.
I was disappointed that the substance was not very thought-provoking. Unlike the typical MacLeod novel, it is set in a society too similar to ours to stretch our imaginations much, and enough less pleasant than ours to be somewhat depressing.
Much of the book is commentary on the current “war on terror”. I agree with much of that commentary, but only a few aspects of it have much value.
The most important way in which this novel stands out is that it portrays most characters as people who expect to be the kind of masterminds that conspiracy theorists imagine run the world, but who regularly turn out to be more realistic people whose battle plans don’t survive contact with the apparent enemy. And there’s a good deal of realistic “fog of war” uncertainty over who the enemy actually is.
MacLeod deserves a good deal of credit for avoiding a number of biases that make typical novels popular but unrealistic, such as making the protagonists better than human. Unfortunately, the results confirm that this kind of realism interferes with the enjoyability of novels.
Tim Freeman has a paper that clarifies many of the issues that need to be solved for humans to coexist with a superhuman AI. It comes close to what we would need if we had unlimited computing power. I will try to amplify some of the criticisms of it from the sl4 mailing list.
It errs on the side of our current intuitions about what I consider to be subgoals, rather than trusting the AI’s reasoning to find good subgoals for meeting the primary human goal(s). Another way to phrase that would be that it fiddles with parameters to get special-case results that fit our intuitions, rather than focusing on general-purpose solutions that would be more likely to produce good results under conditions we haven’t yet imagined.
For example, concern about whether the AI pays the grocer seems misplaced. If our current intuitions about property rights continue to be good guidelines for maximizing human utility in a world with a powerful AI, why would that AI not reach that conclusion by inferring human utility functions from observed behavior and modeling the effects of property rights on human utility? If not, then why shouldn’t we accept that the AI has decided on something better than property rights (assuming our other methods of verifying that the AI is optimizing human utility show no flaws)?
Is it because we lack decent methods of verifying the AI’s effects on phenomena such as happiness that are more directly related to our utility functions? If so, it would seem to imply that we have an inadequate understanding of what we mean by maximizing utility. I didn’t see a clear explanation of how the AI would infer utility functions from observing human behavior (maybe the source code, which I haven’t read, clarifies it), but that appears to be roughly how humans at their best make the equivalent moral judgments.
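As a rough sketch of the kind of inference I have in mind (my own illustration, not Freeman’s algorithm; the outcomes, the observed choices, and the softmax choice model are all invented), a utility function can be fit to observed behavior by asking which utilities make the observed choices most likely:

```python
import math

# Hypothetical data: each pair records that a person chose the first
# outcome over the second. Names and counts are invented for illustration.
observations = [
    ("pay_grocer", "steal_apples"),
    ("pay_grocer", "steal_apples"),
    ("steal_apples", "pay_grocer"),  # real behavior is noisy, not perfectly consistent
]

utility = {"pay_grocer": 0.0, "steal_apples": 0.0}  # start from indifference

def choice_prob(u_chosen, u_rejected):
    # Logistic (softmax) choice model: higher-utility options are chosen
    # more often, but not always.
    return 1.0 / (1.0 + math.exp(u_rejected - u_chosen))

# Crude gradient ascent on the log-likelihood of the observed choices.
learning_rate = 0.1
for _ in range(1000):
    for chosen, rejected in observations:
        p = choice_prob(utility[chosen], utility[rejected])
        utility[chosen] += learning_rate * (1.0 - p)
        utility[rejected] -= learning_rate * (1.0 - p)

print(utility)  # the more frequently chosen option gets higher inferred utility
```

An AI built along these lines would value paying the grocer only to the extent that the inferred utilities say humans do; the question is whether we trust such inferences enough to let the AI override intuitions like property rights.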
I see similar problems with designing the AI to produce the “correct” result with Pascal’s Wager. Tim says “If Heaven and Hell enter into a decision about buying apples, the outcome seems difficult to predict”. Since humans have a poor track record at thinking rationally about very small probabilities and phenomena such as Heaven that are hard to observe, I wouldn’t expect AI unpredictability in this area to be evidence of a problem. It seems more likely that humans are evaluating Pascal’s Wager incorrectly than that a rational AI which can infer most aspects of human utility functions from human behavior will evaluate it incorrectly.
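To see why the outcome is hard to predict, it helps to put illustrative numbers on the wager (all of the figures below are invented):

```python
# Illustrative numbers only: a tiny probability attached to an enormous
# payoff can swamp an everyday decision, which is why an agent that takes
# Pascal's Wager seriously becomes hard to predict.
p_heaven = 1e-12   # assumed probability that piety affects an afterlife
u_heaven = 1e15    # assumed utility of Heaven
u_apples = 1.0     # utility of an ordinary good outcome (buying apples)

expected_value_apples = u_apples
expected_value_wager = p_heaven * u_heaven  # 1000.0, dwarfing the apples

print(expected_value_wager > expected_value_apples)  # True
```

Whether that calculation is a mistake depends on whether utilities should be allowed to grow faster than the corresponding probabilities shrink, and that is a question humans answer badly, not a flaw specific to AIs.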
Book review: The Robot’s Rebellion: Finding Meaning in the Age of Darwin by Keith E. Stanovich.
This book asks us to notice the conflicts between the goals our genes created us to serve and the goals that we as individuals benefit from achieving. Its viewpoint is fairly novel. Little of the substance of the book seemed new, but in a number of places it communicates ideas better than anything I had previously seen.
The title led me to hope that the book would present a very ambitious vision of how we might completely free ourselves from genes and Darwinian evolution, but his advice focuses on the more modest near-term benefits we can get from the study of heuristics and biases. The advice consists mainly of elaborations on the idea of relying on rationality and scientific methods rather than gut reactions when the two approaches give conflicting results.
He does a good job of describing the conflicts between first order desires (e.g. eating sugar) and higher order desires (e.g. the desire not to desire unhealthy amounts of sugar), and why there’s no easy rule to decide which of those desires deserves priority.
He isn’t entirely fair to groups of people that he disagrees with. I was particularly annoyed by his claim that “economics vehemently resists the notion that first-order desires are subject to critique”. What economics resists is the idea that person X is a better authority than person Y about what Y’s desires are or ought to be. Economics mostly avoids saying anything about whether a person should want to alter his desires, and I expect those issues to be dealt with better by other disciplines.
One of the better ideas in the book was to compare the effort put into testing people’s intelligence with the effort devoted to testing their rationality. He mentions many tests that would provide information about how well a person has overcome biases, and points out that such information might be valuable to schools deciding which students to admit and to employers deciding whom to hire. I wish he had provided a good analysis of how well those tests would work if people trained to do well on them. I’d expect wide variation – tests for overconfidence can be made to work fairly well, but I’m concerned that people would learn to pass tests such as the Wason selection task without changing their behavior in situations where they aren’t alert to these problems.
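For concreteness, here is a minimal sketch of the sort of scoring an overconfidence test might use (the data and the interpretation are my own invention, not Stanovich’s):

```python
# Minimal sketch of scoring overconfidence from calibration data.
# Each record is (stated confidence in an answer, whether it was correct).
answers = [
    (0.9, True), (0.9, False), (0.9, True), (0.9, False),  # claimed 90%, got 50%
    (0.6, True), (0.6, True), (0.6, False),
]

stated = sum(conf for conf, _ in answers) / len(answers)
actual = sum(correct for _, correct in answers) / len(answers)
gap = stated - actual

print(f"stated {stated:.2f} vs actual {actual:.2f}; gap {gap:+.2f}")
# A persistently positive gap suggests overconfidence. A trained test-taker
# could shrink the gap on the test without becoming better calibrated in
# everyday situations, which is the concern raised above.
```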