Predicting the Future
Summary of: Predicting the Future
The authors propose a market-based methodology, one that accounts for public information, for predicting future outcomes using a small number of individuals participating in an imperfect information market. They verify the method experimentally, demonstrating that its predictions outperform both the market and the best predictor in the group of participants.
- Published in: Information Systems Frontiers 5:1, 47-61
- It would be a simple matter to aggregate the predictions of a set of individuals if they all had the same predicting ability (using Bayes' law). This, however, is not the case, and an accurate aggregation function has to take prediction abilities into account. The authors characterize this differing ability in terms of a person's information strength and risk attitude. Given an uncertain event and a set of possible outcomes, a person's risk attitude, which might span from risk averse to risk loving, will cause them to report a probability distribution that does not reflect their true beliefs about the outcome. For example, a risk-averse person might report a very flat distribution (providing little predictive information), while a risk-loving person might bet everything on a single outcome.
- Tuning the aggregation for a particular group of participants requires determining "both the risk attitude of the market as a whole and of the individual players." For individuals, the calculation is the ratio of portfolio value to risk (based on portfolio theory); the risk attitude of the entire market, however, is proxied in a novel way based on the inefficiency of the markets being used. Markets with a small number of participants are inefficient (i.e., the ratio of the sum of the prices of the individual securities to the payoff of the winning security won't be one). If the ratio is less than one, the market is risk loving; if it is greater than one, it is risk averse.
- The existence of public information implies a possible correlation between the bets of players. The authors demonstrate that 'double counting' of public information dramatically reduces the accuracy of a prediction, and it must be accounted for by subtracting out public information when aggregating predictions. This requires assuming that public information is indeed public and that private information is indeed private. The authors solve the public information problem by gathering information in a second-stage coordination game whose payouts incentivize players to reveal public information. Their aggregation mechanism is then augmented to make use of this information.
- The authors compare their private-information-only mechanism to their mechanism that takes public information into account, as well as to a perfect public information mechanism and a limited public information correction mechanism. They demonstrate that "while algorithms that aggregate private and public information are sensitive to the underlying information structures, markets are not."
- If everyone receives the same public information, it may not be necessary to use every participant's public information report in order to create an accurate prediction. Identifying subgroups from which to recover public or private information is noted as a topic for further research.
- The aggregation method has advantages over the "standard information aggregation implied by the Condorcet theorem": (1) it extracts probability distributions as opposed to the "validity of a discrete choice obtained via a majority vote"; (2) it provides prediction information even when the overall system "does not contain accurate information as to the outcome"; and (3) it does not assume risk neutrality and equal access to information by all participants.
- The aggregation method does not require complex market games to account for the shortcomings associated with information markets. Using the Kullback-Leibler measure, the method "surpasses by a factor of seven even more complicated institutions such as pari-mutuel games."
- The authors conclude that their work, which requires only small groups of participants, could be used in focus group settings "where each member (of the focus group) has a financial stake in the outcome of the focus group."
- The authors note that even though the results of the paper "are particular to events with finite numbers of outcomes, they can be generalized to a continuous space."
- The authors note that this mechanism is intriguing in the context of the Web, where (possibly asynchronous) information aggregation can occur over large geographical areas, and they identify this as an area for further work.
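The market-level risk-attitude proxy described in the notes above, the ratio of the summed security prices to the winning security's payoff, can be sketched as follows. The function name and return format are illustrative, not the paper's code:

```python
def market_risk_attitude(prices, winning_payoff=1.0):
    """Classify a market's aggregate risk attitude from its inefficiency:
    the ratio of the sum of the prices of the individual state securities
    to the payoff of the winning security.  A ratio below one suggests a
    risk-loving market; above one, a risk-averse one.  (Sketch only.)"""
    ratio = sum(prices) / winning_payoff
    if ratio < 1.0:
        attitude = "risk loving"
    elif ratio > 1.0:
        attitude = "risk averse"
    else:
        attitude = "risk neutral"
    return ratio, attitude


# For example, securities priced at 0.30, 0.45, and 0.45 against a
# winning payoff of 1.0 sum to 1.2, indicating a risk-averse market.
```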
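The Kullback-Leibler measure used above to score mechanisms quantifies how far a predicted distribution lies from a reference distribution (lower is better). A minimal sketch, with a function name of my own choosing since the paper gives no code:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) in bits between two
    discrete probability distributions.  Terms with p_i = 0 contribute
    zero by the usual convention."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

A distribution compared against itself scores zero, so smaller values mean the aggregated prediction is closer to the reference.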
Predicting the future outcomes of uncertain events in social situations is difficult because information is dispersed and hard to aggregate. Drawing on the commonly held belief that markets efficiently collect and disseminate information, the authors propose and experimentally verify a methodology for "predicting future outcomes using a small number of individuals participating in an imperfect information market." The methodology includes a means to account for public information, and experiments show that it outperforms both the market and the best predictor in the group of participants. It is a two-stage mechanism that: (1) extracts the risk attitudes of participants and their ability to predict a given outcome, and uses this information to construct a non-linear aggregation function for the collective prediction of uncertain events; and (2) collects predictions from individuals about an uncertain event, rewards individuals for their accuracy, and uses the aggregation function to predict the outcome of the event. Public information creates strong correlations among reports that must be taken into account during aggregation. Assuming that public and private information are truly public and private, and that individual participants can differentiate between the two, the authors provide a mechanism for identifying public information within the group of participants and subtracting it out during aggregation.
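A minimal sketch of the kind of weighted, public-information-corrected pooling described above, assuming a log-opinion-pool form (each report raised to a weight reflecting predictive ability and risk attitude) and a division-based correction that counts the shared public distribution once rather than once per player. The paper's exact functional form differs; all names and the correction here are illustrative:

```python
import numpy as np

def aggregate_predictions(reports, weights, public=None):
    """Pool individual probability reports into one collective prediction.

    reports : (players, outcomes) array of reported distributions
    weights : per-player exponents (stand-ins for ability/risk attitude)
    public  : optional shared public distribution to subtract out
    """
    reports = np.asarray(reports, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if public is not None:
        public = np.asarray(public, dtype=float)
        reports = reports / public                    # strip the shared signal
        reports = reports / reports.sum(axis=1, keepdims=True)
    pooled = np.prod(reports ** weights[:, None], axis=0)
    if public is not None:
        pooled = pooled * public                      # add it back in exactly once
    return pooled / pooled.sum()
```

If every report simply echoes the public distribution, the correction leaves nothing private to pool and the collective prediction reduces to the public distribution itself, which is the behavior double-counting would have destroyed.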