Statistician/poll-predictor Nate Silver (previously discussed) analyzed the Oscars before last night's telecast, attempting to forecast the outcome of the six most popular categories: Supporting Actor (Ledger), Supporting Actress (Henson - wrong!), Lead Actor (Rourke - wrong!), Lead Actress (Winslet), Best Director (Boyle), Best Picture (Slumdog Millionaire). So he got four out of six right. Decent, but not great (the Intrade predictions actually got the Cruz win right, thus doing better than Silver). Here's some discussion from Silver's article this morning on his statistical model's successes and failures:
What to make of this performance? Heath Ledger's award for Best Supporting Actor was a virtual lock; it's hard to take any credit at all for that one. The awards for Slumdog Millionaire and its director Danny Boyle were not quite in the same category -- both were trading at around 80 percent on Intrade at the time I issued my forecasts. But still, Slumdog winning those categories was by far the most likely outcome. Of the three awards that were in more genuine doubt, the model got one right (Best Actress) and missed the other two.
I don't know, however, that this is a terrific way to go about evaluating the model's validity. There is uncertainty -- as the model happily acknowledges -- in any sort of human endeavor. One year's worth of results is nowhere near enough to estimate the effects of this uncertainty.
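Silver's point is that counting right and wrong picks is a crude yardstick for a probabilistic forecaster. A more standard approach is a proper scoring rule like the Brier score, which rewards well-calibrated probabilities rather than just correct picks. Here's a minimal sketch; the probabilities are hypothetical stand-ins (except the ~0.80 Intrade figures for Slumdog and Boyle mentioned above), not Silver's actual model outputs:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probability and 0/1 outcome.
    Lower is better; 0.0 is a perfect forecast, 0.25 is coin-flip territory."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Probability the forecaster assigned to its pick in each of the six
# categories (hypothetical values, except the two 0.80s from Intrade),
# and whether that pick actually won (1) or lost (0).
probs = [0.95, 0.60, 0.55, 0.65, 0.80, 0.80]
won   = [1,    0,    0,    1,    1,    1]

print(round(brier_score(probs, won), 3))
```

Even this only sharpens the measurement; with six data points, a good model and a lucky one are statistically indistinguishable, which is exactly the small-sample problem Silver raises next.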
Instead, whenever we make an incorrect prediction, we are probably better off asking questions along these lines:
What, if anything, did the incorrect prediction reveal to us about the model's flaws?
Was the model wrong for the wrong reasons? Or was it wrong for the right reasons?
What, if any, improvements should we make to the model given these results?
Read the rest for a good analysis of Silver's model...and how he intends to improve it in the future. See also: a New York Magazine article from before the ceremony, discussing the predictions with specific statistical forecasts.