Friday, January 25, 2008

Jed Christiansen with a fantastic primer on prediction markets and elections

Read the whole thing, via Chris Masse.

Jed points out the weakest point of my Intrade v. Zogby Showdown contests: Intrade contracts are winner-take-all, while Zogby poll statistics are linear probabilities. I attempt to normalize the "predictiveness" between them by looking first at the clear leader, if there is one, and then comparing candidate probabilities to break ties. I also take snapshots on the eve of each election, a T-1 approach.
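A rough sketch of that scoring logic in Python. The data, the `clear_lead` threshold, and the function name are illustrative assumptions, not the actual Showdown rules:

```python
# Sketch of the leader-then-tie-break comparison described above.
# clear_lead is an assumed threshold; the real contest rules may differ.

def predicted_winner(candidate_probs, clear_lead=0.10):
    """candidate_probs: dict of candidate -> probability taken on the eve
    of the election (the T-1 snapshot). Returns the leader and whether the
    call came from a clear lead or from a probability tie-break."""
    ranked = sorted(candidate_probs.items(), key=lambda kv: kv[1], reverse=True)
    (first, p1), (_, p2) = ranked[0], ranked[1]
    if p1 - p2 >= clear_lead:
        return first, "clear leader"
    return first, "tie-break on probability"

# Made-up snapshots for illustration.
intrade = {"Obama": 0.62, "Clinton": 0.36}
zogby = {"Obama": 0.44, "Clinton": 0.42}
print(predicted_winner(intrade))
print(predicted_winner(zogby))
```

With a wide gap the market call is a clear leader; in the close poll the pick falls back to whichever probability is higher.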

However, he also affirms points which I've made in the past:
What this also means is that prediction markets have to be “wrong” in order to be right. If all of the contracts trading at 80% actually occurred, the market would be incorrect; one in five contracts trading at 80% has to lose!

So when do we judge how accurate a prediction market is? Do we take the price from the week before an event? The day before? For an election, do we take it when the polls open? When the polls close?

In my opinion, it all comes down to your goals. InTrade lets traders trade contracts until a winner is settled, because they want an active and accurate marketplace. This has allowed contracts to swing wildly through the day, perhaps most notably in the 2004 Presidential election when leaked exit polls in the afternoon indicated a strong showing for Kerry, only to see actual results not match up with these polls. Other markets look to generate forecasts, so they would end at the point where the information from the forecast was required.

A binary contract is either correct or it isn’t; there’s no good way to assess the quality of a single data point. What we do is assess the calibration of the marketplace. Of all the contracts judged with a 20% probability, do they happen 20% of the time? Of all the contracts judged with a 95% probability, do they happen 95% of the time? With sufficient data we can draw a calibration curve to determine accuracy in this manner. Prediction markets typically do quite well here.

Prediction markets should be compared to other forecasting methods, and not perfection. Let’s match up prediction markets against the cable-news talking heads and see who’s better. (I haven’t done so, but I would suggest that prediction markets would perform well.)
I hypothesize that individual contests will look quite bumpy. Across the entire set of data, which will approach 100 data cohorts, we'll be able to establish some clear trends and contrasts, and perhaps have a better-designed experiment in 2012.
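The calibration check Jed describes can be sketched in a few lines of Python. The contract data below is made up for illustration; the idea is just to bucket binary contracts by traded probability and compare against the realized outcome frequency:

```python
# Sketch of a market calibration curve: of all contracts judged with
# probability p, how often did the event actually happen?

def calibration_curve(contracts, n_bins=10):
    """contracts: list of (predicted_probability, outcome) pairs, where
    outcome is 1 if the event happened, else 0. Returns a list of
    (bin_midpoint, empirical_frequency, count) for each non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for prob, outcome in contracts:
        i = min(int(prob * n_bins), n_bins - 1)  # clamp prob == 1.0 into last bin
        bins[i].append(outcome)
    curve = []
    for i, outcomes in enumerate(bins):
        if outcomes:
            midpoint = (i + 0.5) / n_bins
            freq = sum(outcomes) / len(outcomes)
            curve.append((midpoint, freq, len(outcomes)))
    return curve

# Hypothetical data: five contracts trading near 80%, four of which settled yes.
contracts = [(0.82, 1), (0.79, 1), (0.81, 1), (0.78, 0), (0.80, 1)]
print(calibration_curve(contracts))
```

A well-calibrated market traces the diagonal: the empirical frequency in each bucket matches the bucket's probability. With sufficient data, the deviation from that diagonal is the accuracy measure the quoted passage is describing.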
