Idea Futures

Manifold Markets is a prediction market platform where I’ve been trading since September. This post will compare it to other prediction markets that I’ve used.

Play Money

The most important fact about Manifold is that traders bet mana, which is for most purposes not real money. You can buy mana, and use mana to donate real money to charity. That’s not attractive enough for most of us to treat it as anything other than play money.

Play money has the important advantage of not being subject to CFTC regulation or gambling laws. That enables a good deal of innovation that is stifled in real-money platforms that are open to US residents.


Book review: Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock and Dan Gardner.

This book reports on the Good Judgment Project (GJP).

Much of the book recycles old ideas: 40% of it is a rerun of Thinking, Fast and Slow, 15% repeats The Wisdom of Crowds, and 15% rehashes How to Measure Anything. Those three books were good enough that it’s very hard to improve on them. Superforecasting nearly matches their quality, but most people ought to read those three books instead. (Anyone who still wants more after reading them will get decent value out of the last 4 or 5 chapters of Superforecasting.)

The book is very readable, written in an almost Gladwell-like style (a large contrast to Tetlock’s previous, more scholarly book), at a moderate cost in substance. It contains memorable phrases, such as “a fox with the bulging eyes of a dragonfly” (to describe looking at the world through many perspectives).


Automated market-making software agents have been used in many prediction markets to deal with problems of low liquidity.

The simplest versions provide a fixed amount of liquidity, which means either too much liquidity when trading starts or too little later.

For instance, in the first year that I participated in the Good Judgment Project, the market maker provided enough liquidity that there was lots of money to be made pushing the market maker price from its initial setting in a somewhat obvious direction toward the market consensus. That meant much of the reward provided by the market maker went to low-value information.

The next year, the market maker provided less liquidity, so the prices moved more readily to a crude estimate of the traders’ beliefs. But then there wasn’t enough liquidity for traders to have an incentive to refine that estimate.

One suggested improvement is to have liquidity increase with increasing trading volume.

I present some sample Python code below (inspired by equation 18.44 in E.T. Jaynes’ Probability Theory) which uses the prices at which traders have traded against the market maker to generate probability-like estimates of how likely a price is to reflect the current consensus of traders.

This works more like human market makers, in that it provides the most liquidity near prices where there’s been the most trading. If the market settles near one price, liquidity rises. When the market is not trading near prices of prior trades (due to lack of trading or news that causes a significant price change), liquidity is low and prices can change more easily.

I assume that the possible prices a market maker can trade at are integers from 1 through 99 (percent).

When traders are pushing the price in one direction, this is taken as evidence that increases the weight assigned to the most recent price and all others farther in that direction. When traders reverse the direction, that is taken as evidence that increases the weight of the two most recent trade prices.

The resulting weights (p_px in the code) are fractions, one for each price at which the market maker might position itself, and each should be multiplied by the maximum number of contracts the market maker is willing to offer when liquidity ought to be highest. (Since the market maker actually straddles two prices, the two weights probably ought to be averaged, as in the sketch after the code below.)

There is still room for improvement in this approach, such as giving less weight to old trades after the market acts like it has responded to news. But implementers should test simple improvements before worrying about finding the optimal rules.

# Each trade is (direction, price): direction is +1 when a trader buys from
# the market maker, -1 when a trader sells to it; prices are integer percents.
trades = [(1, 51), (1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52), (1, 53),
          (-1, 52), (1, 53), (-1, 52)]

probability_list = range(1, 100)  # possible prices: 1 through 99 percent
num_probabilities = len(probability_list)

# p_px[i]: probability-like weight that price i reflects the current consensus
# num_agree[i]: number of trades consistent with the consensus being at price i
p_px = {}
num_agree = {}
for i in probability_list:
    p_px[i] = 1.0 / num_probabilities  # uniform prior before any trades
    num_agree[i] = 0

num_trades = 0
last_trade = 0
for (buy, price) in trades:  # test on a set of made-up trades
    num_trades += 1
    for i in probability_list:
        if last_trade * buy < 0:  # change of direction
            # only the two most recent trade prices gain weight
            if buy < 0 and (i == price or i == price + 1):
                num_agree[i] += 1
            if buy > 0 and (i == price or i == price - 1):
                num_agree[i] += 1
        else:
            # continued movement in one direction adds weight to the latest
            # price and to all prices farther in that direction
            if buy < 0 and i <= price:
                num_agree[i] += 1
            if buy > 0 and i >= price:
                num_agree[i] += 1
        # rule-of-succession estimate (cf. Jaynes, equation 18.44)
        p_px[i] = (num_agree[i] + 1.0) / (num_trades + num_probabilities)
    last_trade = buy

for i in probability_list:
    print(i, num_agree[i], '%.3f' % p_px[i])
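
To illustrate how I imagine those weights being used, here’s a minimal sketch that scales a two-sided quote by them. The contracts_to_offer helper, the 1000-contract maximum, and the choice to average the bid and ask weights are illustrative assumptions of mine, not part of the algorithm above.

def contracts_to_offer(p_px, bid, ask, max_contracts=1000):
    # Scale the size offered at a two-sided quote by the weights computed
    # above. max_contracts is the size offered when liquidity ought to be
    # highest; averaging the two weights handles the fact that the market
    # maker straddles two prices.
    weight = (p_px[bid] + p_px[ask]) / 2.0
    return int(round(max_contracts * weight))

# e.g. a quote of 52 bid / 53 ask after the sample trades above
print(contracts_to_offer(p_px, 52, 53))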

The CFTC is suing Intrade for apparently allowing U.S. residents to trade contracts on gold, unemployment rates and a few others that it had agreed to prevent U.S. residents from trading. The CFTC is apparently not commenting on whether Intrade’s political contracts violate any laws.

We U.S. traders will need to close our accounts.

The email I got says

In the near future we’ll announce plans for a new exchange model that will allow legal participation from all jurisdictions – including the US.

(no statement about whether it will involve real money, which suggests that it won’t).

I had already been considering closing my account because of the hassle of figuring out my Intrade income for tax purposes.

Book review: The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t by Nate Silver.

This is a well-written book about the challenges associated with making predictions. But nearly all the ideas in it were ones I was already familiar with.

I agree with nearly everything the book says. But I’ll mention two small disagreements.

He claims that 0 and 100 percent are probabilities. Many Bayesians dispute that. His interpretation is logically consistent, and he doesn’t claim it’s ever sane to believe something with probability 0 or 100 percent, so I’m not sure the difference matters. Still, rejecting the idea that those can represent probabilities seems like a simpler way of avoiding mistakes.

When pointing out the weak correlation between calorie consumption and obesity, he says he doesn’t know of an “obesity skeptics” community that would be comparable to the global warming skeptics. In fact there are people (e.g. Dave Asprey) who deny that excess calories cause obesity, and who back that denial with better tests than the global warming skeptics use.

It would make sense to read this book instead of alternatives such as Moneyball and Tetlock’s Expert Political Judgment, but if you’ve been reading books in this area already, this one won’t seem important.

[See here and here for some context.]

John Salvatier has drawn my attention to a paper describing A Practical Liquidity-Sensitive Automated Market Maker [pdf], which fixes some of the drawbacks of the Automated Market Maker that Robin Hanson proposed.

Most importantly, it provides a good chance that the market maker makes money in roughly the manner that a profit-oriented human market maker would.

It starts out providing a small amount of liquidity, and increases that amount as it profits from its market making. This allows markets to initially make large moves in response to a small amount of trading volume; then, as a trading range develops that reflects agreement among traders, it takes increasingly large amounts of money to move the price.

A disadvantage of following this approach is that it provides little reward to being one of the first traders. If traders need to do a fair amount of research to evaluate the contract being traded, it may be that nobody is willing to inform himself without an expectation that trading volume will become significant. Robin Hanson’s version of the market maker is designed to subsidize this research. If we can predict that several traders will actively trade the contract without a clear-cut subsidy, then the liquidity-sensitive version of the market maker is likely to be appropriate. If we can predict that a subsidy is needed to generate trading activity, then the best approach is likely to be some combination of the two versions. The difficulty of predicting how much subsidy is needed to generate trading volume leaves much uncertainty.

[Updated 2010-07-01:
I’ve reread the paper more carefully in response to John’s question, and I see I was confused by the reference to “a variable b(q) that increases with market volume”. It seems that b(q) is almost unrelated to what I think of as market volume, and is probably better described as tracking the market maker’s holdings.
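
To make that concrete, here is a minimal sketch of my reading of the paper’s core mechanism: Hanson’s fixed liquidity parameter b is replaced by a b(q) that grows with the shares the market maker has sold, and the price of a trade is the change in the cost function. The alpha value and the seed position below are arbitrary illustrative choices, not numbers from the paper.

import math

def b(q, alpha=0.05):
    # liquidity parameter grows with the market maker's outstanding shares
    return alpha * sum(q)

def cost(q, alpha=0.05):
    # LMSR-style cost function, with the fixed b replaced by b(q)
    bq = b(q, alpha)
    return bq * math.log(sum(math.exp(qi / bq) for qi in q))

def trade_cost(q, outcome, shares, alpha=0.05):
    # price of buying `shares` of `outcome`: the change in the cost function
    q_new = list(q)
    q_new[outcome] += shares
    return cost(q_new, alpha) - cost(q, alpha)

# The market maker needs a small seed position so that b(q) > 0 at the start.
print(trade_cost([10.0, 10.0], 0, 10))    # thin market: 10 shares move the price a lot
print(trade_cost([200.0, 180.0], 0, 10))  # deeper market: the same trade moves it much less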

That means that the subsidy is less concentrated on later trading than I originally thought. If the first trader moves the price most of the way to the final price, he gets most of the subsidy. If the first trader is hesitant, and wants to see that other traders don’t quickly find information that causes them to bet much against him, then he probably gets a good deal less subsidy under the new algorithm. The latter comes closer to describing how I approach trading on an Intrade contract where I’m the first to place orders.

I also wonder about the paper’s goal of preserving path independence. It seems to provide some mathematical elegance, but I suspect the market maker can do better if it is allowed to make a profit when the market cycles back to a prior state.
]

Some comments on last weekend’s Foresight Conference:

At lunch on Sunday I was in a group dominated by a discussion between Robin Hanson and Eliezer Yudkowsky over the relative plausibility of new intelligences having a variety of different goal systems versus a single goal system (as in a society of uploads versus Friendly AI). Some of the debate focused on how unified existing minds are, with Eliezer claiming that dogs mostly don’t have conflicting desires in different parts of their minds, and Robin and others claiming such conflicts are common (e.g. when deciding whether to eat food the dog has been told not to eat).

One test Eliezer suggested for the power of systems with a unified goal system is that if Robin were right, bacteria would have outcompeted humans. That got me wondering whether there’s an appropriate criterion by which humans can be said to have outcompeted bacteria. The most obvious criterion on which humans and bacteria are trying to compete is how many copies of their DNA exist. Using biomass as a proxy, bacteria are winning by several orders of magnitude. Another possible criterion is impact on large-scale features of Earth. Humans have not yet done anything that seems as big as the catastrophic changes to the atmosphere (“the oxygen crisis”) produced by bacteria. Am I overlooking other appropriate criteria?

Kartik Gada described two humanitarian innovation prizes that bear some resemblance to a valuable approach to helping the world’s poorest billion people, but that will be hard to turn into something with a reasonable chance of success. The Water Liberation Prize would be pretty hard to judge. Suppose I submit a water filter that I claim qualifies for the prize. How will the judges test the drinkability of the water and the reusability of the filter under common third world conditions (which I suspect vary a lot and which probably won’t be adequately duplicated where the judges live)? Will they ship sample devices to a number of third world locations and ask whether they produce water that tastes good, or will they do rigorous tests of water safety? With a hoped-for prize of $50,000, I doubt they can afford very good tests.

The Personal Manufacturing Prizes seem somewhat more carefully thought out, but need some revision. The “three different materials” criterion is not enough to rule out overly specialized devices without some clear guidelines about which differences are important and which are trivial. Setting specific award dates appears to assume an implausible ability to predict how soon such a device will become feasible. The possibility that some parts of the device are patented is tricky to handle, as it isn’t cheap to verify the absence of crippling patents.

There was a debate on futarchy between Robin Hanson and Mencius Moldbug. Moldbug’s argument seems to boil down to the absence of a guarantee that futarchy will avoid problems related to manipulation/conflicts of interest. It’s unclear whether he thinks his preferred form of government would guarantee any solution to those problems, and he rejects empirical tests that might compare the extent of those problems under the alternative systems. Still, Moldbug concedes enough that it should be possible to incorporate most of the value of futarchy within his preferred form of government without rejecting his views. He wants to limit trading to the equivalent of the government’s stockholders. Accepting that limitation isn’t likely to impair the markets much, and may make futarchy more palatable to people who share Moldbug’s superstitions about markets.

I once proposed using life expectancy as the primary indicator of what society should try to maximize.

Recently there have been reports that life expectancy is negatively correlated with standard measures of economic growth. I accept the conclusion that depressions and recessions are less harmful than is commonly believed, but I want to point out the danger of looking only at life expectancy in the same year as an event that influences it. Depressions may have harmful effects that take a decade to show up in life expectancy figures (e.g. long-term wealth effects, effects on willingness to wage war). So I’d like to see how life expectancy averaged over the ensuing 10 or 15 years correlates with a year’s GDP change.
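
Here is a minimal sketch of the comparison I have in mind. The data below are synthetic placeholders; a real test would use historical GDP changes and period life expectancy by year, and the column names are my own.

import numpy as np
import pandas as pd

# synthetic stand-in data; replace with real yearly GDP change (percent) and
# life expectancy at birth
rng = np.random.default_rng(0)
n_years = 100
df = pd.DataFrame({
    'gdp_change': rng.normal(3.0, 5.0, n_years),
    'life_expectancy': 47.0 + 0.3 * np.arange(n_years) + rng.normal(0, 0.5, n_years),
})

# average life expectancy over each year and the 9 years that follow it
df['le_next_10y'] = (df['life_expectancy'][::-1]
                     .rolling(window=10, min_periods=10)
                     .mean()[::-1])

# compare the same-year correlation with the forward-looking one
print(df['gdp_change'].corr(df['life_expectancy']))
print(df['gdp_change'].corr(df['le_next_10y']))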