Book review: Good and Real: Demystifying Paradoxes from Physics to Ethics by Gary Drescher.
This book tries to derive ought from is. The more important steps explain why we should choose the one-box answer to Newcomb’s problem, then argue that the same reasoning should provide better support for Hofstadter’s idea of superrationality than has previously been demonstrated, and that superrationality can be generalized to provide morality. He comes close to the right approach to these problems, and I agree with the conclusions he reaches, but I don’t find his reasoning convincing.
He uses a concept which he calls a subjunctive relation, which is intermediate between a causal relation and a correlation, to explain why a choice that seems to happen after its goal has been achieved can be rational. That is the part of his argument that I find unconvincing. The subjunctive relation behaves a lot like a causal relation, and I can’t figure out why it should be treated as more than a correlation unless it’s equivalent to a causal relation.
I say that the one-box choice in Newcomb’s problem causes money to be placed in the box, and that superrationality and morality should be followed for similar reasons involving counterintuitive types of causality. It looks like Drescher is reluctant to accept this type of causality because he doesn’t think clearly enough about the concept of choice. He often seems to be using something like a folk-psychology notion of choice that is incompatible with the assumptions of Newcomb’s problem. I expect that with a sufficiently sophisticated concept of choice, Newcomb’s problem and similar situations cease to seem paradoxical. That concept should reflect a counterintuitive difference between the time at which a choice is made and the time at which it is introspectively observed as being irrevocable. When describing Kavka’s toxin problem, he talks more clearly about the concept of choice, and almost finds a better answer than subjunctive relations, but backs off without adequate analysis.
The book also has a long section explaining why the Everett interpretation of quantum mechanics is better than the Copenhagen interpretation. The beginning and end of this section are good, but there’s a rather dense section in the middle that takes much effort to follow without adding much.
Book review: Evidence-Based Technical Analysis: Applying the Scientific Method and Statistical Inference to Trading Signals, by David Aronson.
This is by far the best book I’ve seen that is written for professional stock market traders. That says more about the wishful thinking that went into other books that attempt to analyze trading rules than it does about this author’s brilliance. There are probably books about general data mining that would provide more rigorous descriptions of the relevant ideas, but they would require more effort to find the ideas that matter most to traders.
There hasn’t been much demand for rigorous analysis of trading systems because people who understand how hard it is to do it well typically pick a different career, leaving the field populated with people who overestimate their ability to develop trading systems. That means many traders won’t like the message this book sends because it doesn’t come close to fitting their preconceptions about how to make money. It is mostly devoted to explaining how to avoid popular and tempting mistakes.
Although the book only talks specifically about technical analysis, the ideas in it can be applied with little change to a wide variety of financial and political forecasting problems.
He is occasionally careless. For example: “All other things being equal, a TA rule that is successful 52 percent of the time is less valuable than one that works 70 percent of the time.” There might be a way of interpreting this that is true, but it’s easy to mistake success frequency for a useful metric, when it has little correlation with good returns on investment. It’s quite common for a system’s returns to be dominated by a few large gains or losses rather than by the frequency of success.
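A toy calculation makes the point. The payoff numbers below are hypothetical, chosen only to show that a higher win frequency can coexist with worse expected returns:

```python
# Illustrative only: win frequency alone says little about profitability.
# A rule's expectancy also depends on how big the wins and losses are.

def expectancy(win_rate, avg_win, avg_loss):
    """Expected profit per trade, given win frequency and average payoffs."""
    return win_rate * avg_win - (1 - win_rate) * avg_loss

# Wins 70% of the time, but with small wins and occasional large losses:
frequent_winner = expectancy(0.70, avg_win=1.0, avg_loss=4.0)

# Wins only 52% of the time, but the wins are much larger than the losses:
rare_winner = expectancy(0.52, avg_win=3.0, avg_loss=1.0)

print(frequent_winner)  # negative: loses money despite winning often
print(rare_winner)      # positive: profitable despite winning barely half the time
```

The 70-percent rule here loses about half a unit per trade, while the 52-percent rule gains about one unit per trade.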
The book manages to spell Occam three different ways!
Book review: Hollywood Economics: How extreme uncertainty shapes the film industry, by Arthur De Vany.
This rather dense and scholarly book contains some good insights into how markets for information differ from markets for physical goods. But few people will want to read the whole book. Much of the book was originally published as papers in economics journals. It’s better organized than that suggests, but the style is mostly oriented toward professional economists.
Much of the book can be summed up by the conclusion that nobody knows anything about how successful a movie will be. The typical film loses money, and the expected returns are heavily dominated by rare films that are huge successes.
He says through much of the book that returns on investment in movies have infinite variance, and only at the very end admits that this isn’t literally true, then provides a more credible description of the variance as unstable and generally increasing over time.
His argument that Hollywood makes too many R-rated films takes a good deal of effort to follow. Table 5.3 is confusing, because it shows a mean return on R-rated films that is much higher than the returns on PG-13 films. This sounds like the opposite of his conclusion. It took 13 more pages before I figured out that this was due to some high rates of return on low-budget R-rated films that had little effect on aggregate profits. It appears that his conclusion ought to have been that Hollywood makes too many high-budget R-rated films, and too few low-budget R-rated films.
His description of the antitrust cases that transformed the movie industry provides convincing evidence that the courts were confused and didn’t help the independent exhibitors that the lawsuits were allegedly designed to help. The arguments about how they affected consumers are less clear.
At last Sunday’s Overcoming Bias meetup, we tried paranoid debating. We formed groups of mostly 4 people (5 for the first round or two) and competed to produce the most accurate answers to trivia questions with numeric answers, with one person secretly designated to be rewarded for convincing the team to produce the least accurate answer.
It was fun and may have taught us a little about becoming more rational. But in order to be valuable, it should be developed further to become a means of testing rationality. As practiced, it tested some combination of trivia knowledge and rationality. The last round reduced the importance of trivia knowledge by rewarding good confidence intervals instead of a single good answer. I expect there are ways of using confidence intervals that remove the effects of trivia knowledge from the scores.
I’m puzzled about why people preferred the spokesman version to the initial version where the median number was the team’s answer. Designating a spokesman publicly as a non-deceiver provides information about who the deceiver is. In one case, we determined who the deceiver was by two of us telling the spokesman that we were sufficiently ignorant about the subject relative to him that he should decide based only on his knowledge. That gave our team a big advantage that had little relation to our rationality. I expect the median approach can be extended to confidence intervals by taking the median of the lows and the median of the highs, but I’m not fully confident that there are no problems with that.
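A minimal sketch of that aggregation rule, with made-up estimates, shows why it tends to resist a single deceiver: one extreme interval can move each endpoint only as far as the averaging of the middle members’ values allows.

```python
from statistics import median

# Hypothetical confidence intervals (low, high) from four team members.
# The last interval is wildly off, as a deceiver's might be.
estimates = [(90, 110), (95, 130), (80, 120), (500, 900)]

# Aggregate by taking the median of the lows and the median of the highs.
team_low = median(low for low, _ in estimates)
team_high = median(high for _, high in estimates)

print(team_low, team_high)  # the wild (500, 900) interval barely moves the result
```

With an even number of members the median averages the two middle values, so an extreme interval still nudges the endpoints somewhat; that may be one of the problems the author is not fully confident about.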
The use of semi-randomly selected groups meant that scores were weak signals. If we want to evaluate individual rationality, we’d need rather time-consuming trials of many permutations of the groups. Paranoid debating is more suited to comparing groups (e.g. a group of people credentialed as the best students from a rationality dojo, or the people most responsible for decisions in a hedge fund).
See more comments at Less Wrong.