There has been a fair amount of research suggesting that beyond some low threshold, additional money does little to increase a person’s happiness.
Here’s a research report (see also here) indicating that the effect of money has sometimes been underestimated because researchers use income as a measure of money, when wealth has a higher correlation with happiness.
There’s probably more than one reason for this. Wealth produces a sense of security that isn’t achieved by earning a high income but spending it quickly. Also, it’s possible that people with high savings rates tend to be those who are easily satisfied with their status, whereas those who don’t save despite high incomes tend to be those who feel a strong need to display their incomes in order to compete for status (and since status competition is in some ways a zero-sum game, many of them will fail).
This rather discouraging blog post provides some unusual analogies to historical conflicts that might help predict what will happen in Iraq. The analogies aren’t exact enough to be convincing by themselves, but are close enough to deserve some attention.
Book review: Information Markets: A New Way of Making Decisions, edited by Robert Hahn and Paul Tetlock
This book contains some good discussions of current issues in the design of prediction markets (aka idea futures).
Since it’s the result of a conference for experts, it is mainly directed toward experts. It shouldn’t be overly hard for laymen to understand, but it probably focuses on issues that are somewhat different from what most laymen would find interesting, so I’d probably recommend reading Surowiecki’s Wisdom of Crowds or some of Robin Hanson’s earlier papers on the subject first.
One surprising result reported here is that the Iowa Electronic Markets show no longshot bias, in contrast to similar markets on Tradesports/Intrade and to widespread types of sports betting. This looks like an important area for research, although that would probably require setting up many variations on those markets (varying things such as the user interface, commissions on trades, limits on how much money can be invested, etc.), which would be expensive and hindered by regulatory uncertainty.
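To make the longshot-bias comparison concrete, here is a minimal sketch of how such a bias shows up in data. This is my illustration with synthetic numbers, not an analysis from the book: bin contracts by price (the implied probability) and compare the average price in each bin to how often those contracts actually paid off.

```python
# Illustrative sketch (synthetic data, not from the book): detecting longshot
# bias by comparing contract prices (implied probabilities) to realized
# win rates within price bins.

def longshot_bias_by_bin(prices, outcomes,
                         bins=((0.0, 0.2), (0.2, 0.8), (0.8, 1.0))):
    """For each price bin, compare mean implied probability to the win rate.

    A positive (implied - actual) gap in the low-price bin is the classic
    longshot bias: cheap contracts are overpriced relative to how often
    they actually pay off.
    """
    report = {}
    for lo, hi in bins:
        in_bin = [(p, o) for p, o in zip(prices, outcomes) if lo <= p < hi]
        if not in_bin:
            continue
        implied = sum(p for p, _ in in_bin) / len(in_bin)
        actual = sum(o for _, o in in_bin) / len(in_bin)
        report[(lo, hi)] = {"implied": implied, "actual": actual,
                            "gap": implied - actual}
    return report

# Synthetic example: longshots priced at 0.15 that win only 5% of the time,
# favorites priced at 0.85 that win 90% of the time.
prices = [0.15] * 100 + [0.85] * 100
outcomes = [1] * 5 + [0] * 95 + [1] * 90 + [0] * 10
for bin_range, stats in longshot_bias_by_bin(prices, outcomes).items():
    print(bin_range, stats)
```

An unbiased market would show gaps near zero in every bin; the research question is why some markets (Tradesports, racetrack betting) show a positive low-bin gap while the Iowa markets apparently don’t.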
Michael Abramowicz presents an interesting proposal to create incentives to counteract the likely tendency of markets such as prediction markets to discourage people from making public the knowledge that goes into making market prices efficient. I don’t have much of a guess about how well his solution will work. It needs some more thought about how vulnerable it is to manipulation of the intermediate prices used to reward traders who convince others to follow their reasoning (averaging prices over a week or two would be a simple start at deterring manipulation). But I think he understates the importance of the problems he’s trying to solve. He says that “while they are endemic to all securities markets, they apparently cause little harm. They are likely to be much more severe, however, in markets with very few active participants.” I suspect they are significant in most securities markets, and are underestimated because they are very hard to measure. As someone who trades stocks for a living, I’d say that the amount and quality of knowledge shared among traders is quite low compared to most professions, although it’s hard to say how much of this is due to the desire to keep valuable information secret and how much is due to the difficulty of distinguishing valuable information from misleading information.
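The price-averaging idea mentioned above can be sketched in a few lines. This is my hypothetical illustration, not Abramowicz’s actual mechanism: if rewards depend on a time-weighted average price (TWAP) over a window rather than a single closing print, a briefly manipulated print moves the reward basis only slightly.

```python
# Sketch of the price-averaging idea (my illustration, not Abramowicz's
# mechanism): base rewards on a time-weighted average price over a window,
# so a single manipulated print has little effect.

def twap(prices):
    """Time-weighted average of equally spaced price observations."""
    return sum(prices) / len(prices)

# 14 days of honest prices around 0.50, plus one manipulated spike to 0.90
# on the last day.
honest = [0.50] * 14
spiked = honest[:-1] + [0.90]

print(twap(honest))   # reward basis for the honest series
print(twap(spiked))   # one spiked print moves the average only slightly
```

A manipulator who could move the closing print from 0.50 to 0.90 shifts the 14-day average by under 0.03; sustaining the distortion across the whole window would cost far more.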
For anyone obsessed with books, I recommend LibraryThing for sharing information about them.
You can see my books on my LibraryThing profile.
Book review: The Wisdom of Crowds by James Surowiecki

This book does an excellent job of reporting important evidence showing that group decisions can be wiser than those of any one individual. Surowiecki makes some good attempts to describe what conditions cause groups to be wiser than individuals, but when he goes beyond reporting academic research, the quality of the book declines. He exaggerates enough to give critics excuses to reject the valuable parts of the book.
He lists four conditions that he claims determine whether groups are wiser than their individual members. I’m uncertain whether the conditions he lists are sufficient. I would have added something explicit about the need to minimize biases. It’s unclear whether that condition follows from his independence condition, partly because he’s a bit vague about whether he uses independence in the strong sense that statisticians do or whether he’s speaking more colloquially.
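The reason the statistical sense of independence matters can be shown with a small simulation (mine, not Surowiecki’s): errors that are independent across members average away as the crowd grows, while a shared bias does not shrink no matter how large the crowd gets.

```python
# Sketch of why statistical independence (and low bias) matters for crowd
# wisdom. Numbers below are purely illustrative.

import random

def crowd_error(n_members, shared_bias_sd, private_noise_sd,
                trials=2000, seed=0):
    """Mean absolute error of the crowd average estimating a true value of 0.

    Each member's estimate = shared bias (same for the whole crowd in a
    given trial) + private noise (independent across members).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bias = rng.gauss(0, shared_bias_sd)       # error every member shares
        estimates = [bias + rng.gauss(0, private_noise_sd)
                     for _ in range(n_members)]
        total += abs(sum(estimates) / n_members)
    return total / trials

# Independent errors: a crowd of 100 beats a single member by roughly 10x.
print(crowd_error(1, 0.0, 1.0), crowd_error(100, 0.0, 1.0))
# Shared bias: growing the crowd barely helps at all.
print(crowd_error(1, 1.0, 0.1), crowd_error(100, 1.0, 0.1))
```

This is why a condition about minimizing (shared) biases is not obviously redundant with an independence condition: a crowd can be fully “independent” in the colloquial sense while everyone shares the same systematic error.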
Sometimes he ignores those conditions and makes unconvincing blanket statements that larger groups will produce wiser decisions.
He makes exaggerated claims for the idea that crowds are wise due to information possessed by lots of average people rather than the influence of a few wise people. For instance, he disputes a Forsythe et al. paper which argues that a small number of “marginal traders” in a market to predict the 1988 presidential vote were responsible for the price accuracy. Surowiecki’s rejection of this argument depends on a claim that “two investors with the same amount of capital have the same influence on market prices”. But that looks false. For example, if the nonmarginal traders make all their trades on the first day and then blindly hold for a year, and the marginal traders trade with each other over that year in response to new information, prices on most days will be determined by the marginal traders.
It’s not designed to be an investment advice book, but if judged solely as a book on investment, I’d say it ranks in the top ten. It does a very good job of explaining both what’s right and what’s wrong with the random walk theory of the stock market.
He does a good job of ridiculing the “cult of the CEO” whereby most of a company’s value is attributed to its CEO (at least in the U.S.). I was surprised by his report that 95% of investors said they would buy stocks based on their opinion of the CEO. They certainly didn’t get that attitude from successful investors (who seem to do that only in rare cases where they are able to talk at length with the CEO). But his claim that “Corporate profit margins did not increase over the course of the 1990s, even as executive compensation was soaring” looks false, as well as being of questionable relevance to his points about executives being overvalued. And I wish he had also applied his argument to beliefs of the form “if we could just elect a good person to lead the nation”.
Chapter 6 does a good job of combining the best ideas from Wright’s book Nonzero and Fukuyama’s Trust (oddly, he doesn’t cite Trust).
He exaggerates reports that the stock market responded accurately to the Challenger explosion before any public reports indicated the cause. He claims “within a half hour of the shuttle blowing up, the stock market knew what company was responsible.” I don’t know where he gets the “half hour” time period. The paper he cites as the source says the market “pinpointed” Thiokol as the culprit “within an hour”, but it exaggerates a bit. If the percent decline in stock price is the best criterion, then the market provided strong evidence within an hour. If the dollar value of the loss of market capitalization is the best criterion, then the evidence was weak after one hour but strong within four hours.
He also claims “Savvy insiders alone did not cause that first-day drop in Thiokol’s price.”, but shows no sign that he could know whether this is true. He seems to base this claim on the absence of reported selling by executives whom the law requires to report such selling, but he appears to overestimate how reliably that law is obeyed, and to ignore a large number of non-executive insiders (e.g. engineers). He does pass on a nice quote which better illustrates our understanding of these issues: “While markets appear to work in practice, we are not sure how they work in theory.”
Nick Szabo has a very good post on global warming.
I have one complaint: he says “acid rain in the 1970s and 80s was a scare almost as big as global warming is today”. I remember the acid rain concerns of the 1970s well enough to know that this is a considerable exaggeration. Acid rain alarmists said a lot about the potential harm to forests and statues, but to the extent they talked about measurable harm to people, it was a good deal vaguer than with global warming, and if it had been quantified it would probably have implied at least an order of magnitude less measurable harm to people than what mainstream academics are suggesting global warming could cause.
Mike Linksvayer has a fairly good argument that raising X dollars by running ads on Wikipedia won’t create more conflict of interest than raising X dollars some other way.
But the amount of money an organization handles has important effects on its behavior that are somewhat independent of the source of the money, and the purpose of the ads seems to be to drastically increase the amount Wikipedia raises.
I can’t provide a single example that provides compelling evidence in isolation, but when I compare a broad range of organizations with $100 million revenues to a broad range of organizations run by volunteers who raise only enough money to cover hardware costs, I see signs of big differences in the attitudes of the people in charge.
Wealthy organizations tend to attract people who want (or corrupt people into wanting) job security or revenue maximization, whereas low-budget volunteer organizations tend to be run by people motivated by reputation. If reputational motivations have worked rather well for an organization (as I suspect they have for Wikipedia), we should be cautious about replacing them with financial incentives.
It’s possible that the Wikimedia Foundation could spend hundreds of millions of dollars wisely on charity, but the track record of large foundations does not suggest that should be the default expectation.
Book review: An Inconvenient Truth: The Planetary Emergency of Global Warming and What We Can Do About It by Al Gore
I read An Inconvenient Truth in book form rather than watching the movie because I’m generally suspicious of attempts to convey serious arguments via film, and expected that the book version would have better references to the sources of his claims. Alas, this is merely a movie copied to paper, and his idea of a technical reference is a label such as “source: science magazine”.
This may be more scholarly than what a typical politician would produce, but it’s poor scholarship compared to what I’d expect from a typical college professor, even allowing for the goal of reaching a wide audience by keeping it simple.
The book is full of exaggerations and misleading impressions (but is usually not explicit enough to be clearly false). It is hard to say whether the book is helping by offsetting myths from the other extreme or whether it is adding to the confusion. Gore does deserve some credit for bringing more attention to a fairly serious problem.
Much of the book is pictures showing examples of climate changes, which don’t by themselves say whether we’ve experienced anything more than normal fluctuations. The movie may have reached people who thought climates were more stable than that, but I doubt the book will.
His main attempt to show evidence that CO2 emissions cause warming is a graph showing CO2 levels and temperature over the last 600,000 years. It sure looks like there’s a strong correlation. But my crude attempts at comparing the timing of the changes suggest that temperature changes precede CO2 changes more often than they follow them. I can imagine ways that the correlation could be caused by temperature changes causing changes in CO2 levels. I don’t see how a non-expert can tell what this correlation implies except by relying on authority (experts seem to think the causation works in both directions).
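The “which changes first” question is, in principle, a lead-lag exercise: shift one series against the other and see where the correlation peaks. Here is a sketch on synthetic data (mine, not actual ice-core measurements): generate a CO2-like series that follows a temperature-like series with a known delay, and check that the correlation-maximizing lag recovers it.

```python
# Lead-lag sketch on synthetic data (not real ice-core measurements):
# if temperature changes precede CO2 changes, the cross-correlation
# should peak at a positive lag (CO2 trailing temperature).

import math
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_corr(temp, co2, lag):
    """Correlate temp[i] with co2[i + lag]; positive lag = temperature leads."""
    if lag >= 0:
        return correlation(temp[:len(temp) - lag], co2[lag:])
    return correlation(temp[-lag:], co2[:len(co2) + lag])

def best_lag(temp, co2, max_lag=20):
    """Lag of CO2 behind temperature that maximizes the correlation."""
    return max(range(-max_lag, max_lag + 1),
               key=lambda k: lagged_corr(temp, co2, k))

# Toy data: an autocorrelated "temperature" series, and a "CO2" series that
# copies it with an 8-sample delay plus noise.
rng = random.Random(1)
n = 400
temp = [0.0]
for _ in range(n - 1):
    temp.append(0.7 * temp[-1] + rng.gauss(0, 1))
delay = 8
co2 = [temp[max(i - delay, 0)] + rng.gauss(0, 0.5) for i in range(n)]
print(best_lag(temp, co2))  # positive lag: temperature leads in this toy data
```

On real paleoclimate data the answer is far messier (noisy dating, two-way causation), which is why a non-expert eyeballing the graph can’t settle the question; but this is the shape of the analysis the dispute is about.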
So that leaves him with only appeals to authority to back up his claims. He’s more credible there. His claim of a scientific consensus is approximately right. He lists the “percentage of articles in doubt as to the cause of global warming: 0%”. The paper he’s apparently referencing is responsible enough to use words such as “likely” rather than claiming an absence of doubt. Peter Norvig’s analysis of some of those papers concludes that at least 4 of them expressed doubt. But that difference is probably too subtle to matter to the people this book is targeting.
Gore is quick to blame big oil for the popular press’s false impressions of scientific controversy. The possibility that controversy sells stories appears to be at least as strong an explanation, but blaming the people Gore’s trying to convince would be rather inconvenient.
Pages 183 to 196 appear designed to create fears that sea levels could rise 18 to 20 feet suddenly and unpredictably. He doesn’t say anything about how fast experts think this might happen (which seems to be over many decades). The hints he gives are the mention of an ice shelf that unexpectedly broke up in 35 days, and some maps of Greenland which appear to suggest the ice there could vanish in the next decade. The difference between an expected sea level rise over many decades and an unexpected rise over less than a decade makes a big difference in how well people could adapt to it. (The recent mass migrations in China demonstrate the feasibility of adapting to sea level changes in a decade or two.) Gore appears to be contributing to fears of changes that are way outside the expert consensus.
Gore underestimates human ability to adapt to climate change (much as those on the other extreme underestimate human ability to invent affordable ways to reduce carbon emissions). For example, he implies that quick efforts to mitigate global warming are the only way to deal with the risks of drinking water shortages. But I see signs that cheaper desalinization is a more promising approach.
His graph on page 276 of “U.S. renewable energy future” is strange. Renewability has a weak connection to global warming solutions, but the possibility that nuclear power might be desirable seems too inconvenient to him. His forecast for biomass looks too optimistic, his forecast for solar after 2020 looks too pessimistic, and his forecast for wind shows strange fluctuations.
Gore repeats the myth that frogs won’t jump out of water that’s slowly brought to a boil, and claims that sometimes people make the same mistake. “Sometimes” is too uninformative to refute, but the most relevant research that I can think of suggests that at least political experts are biased toward sounding alarms too often.
He claims it’s “absolutely indisputable” that global warming is a “planetary emergency”. Yet nothing he says implies that stopping global warming is as urgent as reducing poverty, war, or disease.
Ph.D. economists seem fairly confident that the effects of global warming will be small.
There are substantial disputes among experts about how much of the global warming problem we should try to solve now (see Hal Varian’s comments, Tyler Cowen’s comments and Arnold Kling’s comments). But you won’t find any hint of that controversy in An Inconvenient Truth (in part because it’s hard to describe in ways that laymen can understand).
Gore recommends doing many things to slow down global warming a bit (but may leave many with the impression that his plans would do more than that – if it were an emergency as he says, wouldn’t he recommend more?).
Some of these steps are clearly good even if their effects on the climate are trivial.
For some (recycling and locally grown food) I’ve seen conflicting claims and can’t tell whether objective analyses exist.
Some are misleading. He claims a “fuel-cell vehicle (FCV) that uses pure hydrogen produces no pollutants”, which would be true if we had a convenient source of pure hydrogen. But on this planet, hydrogen requires energy to create, and only acts as a battery, so FCVs cause pollution if energy production causes pollution.
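The “hydrogen acts as a battery” point is just energy accounting. The sketch below uses hypothetical placeholder numbers (the electrolysis and vehicle-efficiency figures are assumptions for illustration, not measurements); the structure of the calculation, not the specific values, is the point.

```python
# Back-of-envelope sketch of the "hydrogen is a battery" point.
# The parameter defaults below are assumed placeholders, not measurements.

def fcv_emissions_per_km(grid_g_co2_per_kwh,
                         kwh_electricity_per_kg_h2=55.0,  # electrolysis (assumed)
                         km_per_kg_h2=100.0):             # vehicle range (assumed)
    """Grams of upstream CO2 per km for a fuel-cell vehicle whose hydrogen
    is made by electrolysis.

    Tailpipe emissions are zero either way; overall emissions are zero only
    if the electricity used to make the hydrogen is zero-carbon.
    """
    return grid_g_co2_per_kwh * kwh_electricity_per_kg_h2 / km_per_kg_h2

print(fcv_emissions_per_km(grid_g_co2_per_kwh=0))    # clean grid: truly zero
print(fcv_emissions_per_km(grid_g_co2_per_kwh=500))  # fossil-heavy grid: not zero
```

So Gore’s “no pollutants” claim is true only of the tailpipe; the pollution moves upstream to wherever the electricity (or reformed natural gas) comes from.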
I recommend Ron Bailey’s review for additional criticisms.
Disclosure: I own stocks in oil companies and in a company that serves the photovoltaic industry.