This seminar is chaired by Professor Scott Wiltermuth from the USC Marshall School of Business.
Previous political forecasting tournaments have painted a rather depressing picture of political experts: hard-pressed to outperform chance and simple extrapolation algorithms, overconfident in their projections, and defensive in response to disconfirmation. A massive new forecasting tournament -- sponsored by the Intelligence Advanced Research Projects Activity -- reveals that what we learn from such tournaments depends very much on exactly how they are structured (including the length of forecasting windows, opportunities for belief updating, and social mechanisms, such as leaderboards, for honoring superb performance). Drawing on the first two years of forecasts of a wide range of events around the world (ranging from intra- and interstate conflict to leadership and regime change to macroeconomic and financial indicators), we find that: (1) it is possible to improve judgmental accuracy -- beyond a randomized control group -- by roughly 25% through experimental interventions that offer guidelines for debiasing probabilistic reasoning and opportunities to participate in collaborative teams or prediction markets; (2) it has been possible to improve accuracy by roughly an additional 30% by relying on weighted-aggregation algorithms that give greater weight to the most attentive and thoughtful forecasters.
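To make the weighted-aggregation idea concrete, here is a minimal, hypothetical sketch in Python: forecasters with better track records (lower mean Brier scores on past resolved questions) receive more weight when pooling current probability forecasts. The scoring rule, the `1 / (mean Brier + epsilon)` weighting scheme, and all names are illustrative assumptions, not the tournament's actual algorithm.

```python
# Hypothetical sketch: weight forecasters by past accuracy, then pool.
# This illustrates the general idea of skill-weighted aggregation only;
# it is NOT the actual algorithm used in the tournament.

def brier_score(prob, outcome):
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (prob - outcome) ** 2

def skill_weights(past_forecasts):
    """One weight per forecaster: 1 / (mean Brier score + epsilon),
    normalized to sum to 1. Lower past error -> higher weight."""
    eps = 1e-6  # guards against division by zero for a perfect record
    raw = []
    for history in past_forecasts:
        mean_brier = sum(brier_score(p, o) for p, o in history) / len(history)
        raw.append(1.0 / (mean_brier + eps))
    total = sum(raw)
    return [w / total for w in raw]

def aggregate(current_probs, weights):
    """Weighted average of the current probability forecasts."""
    return sum(p * w for p, w in zip(current_probs, weights))

# Example: two forecasters with different track records on past 0/1 events.
history = [
    [(0.9, 1), (0.2, 0)],  # historically accurate forecaster
    [(0.4, 1), (0.7, 0)],  # historically less accurate forecaster
]
w = skill_weights(history)
pooled = aggregate([0.8, 0.5], w)  # pooled estimate leans toward the 0.8
```

The pooled probability lands much closer to the accurate forecaster's 0.8 than a simple unweighted average (0.65) would, which is the essence of giving greater weight to the best performers.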
Of course, there are limits to how much judgmental performance can be improved in a stochastic world with large pockets of irreducible uncertainty. Forecasting tournaments in world politics should constantly test the robustness of their conclusions by factoring in controversies over "close-call" counterfactuals: events that happened but almost did not, and events that never happened but almost did. Forecasting tournaments also need to acknowledge that judgmental accuracy is not a value-neutral construct but rather takes on different meanings as a function of the value judgments that political observers place on avoiding errors of under-predicting versus over-predicting various classes of outcomes. Acknowledging the limits of a neo-positivist approach to science will make it easier to use forecasting tournaments to enhance the quality of intelligence analysis and improve the quality of public debate.
Philip E. Tetlock is the Annenberg University Professor at the University of Pennsylvania, with cross-appointments in psychology, the Wharton School, and political science. He previously held the Mitchell Endowed Professorship at the University of California, Berkeley. Tetlock has published widely -- and is cited widely -- in peer-reviewed journals on the topics of expert judgment, judgmental biases, and the effectiveness of various approaches to debiasing. He has received scientific awards and honors from a wide range of scholarly organizations, including the National Academy of Sciences, the American Academy of Arts and Sciences, the American Political Science Association, and the American Psychological Association.