Memo to Nate Silver: The Oscars Ain't Politics
The Oscars are apparently a key part of the prognosticator's new deal with ESPN and ABC -- but THR's awards analyst Scott Feinberg warns that it's not as easy as it might look.
I have great respect for Nate Silver, the blogger who almost perfectly predicted the electoral outcome of the 2008 and 2012 presidential elections while writing the FiveThirtyEight blog, which most recently was housed at the New York Times. On Monday, he announced he is leaving the Times for a new gig at ESPN, which will also have him contributing to the sports network's parent company, ABC -- which happens to be home to the annual Oscar broadcast.
Now, Silver is very good at what he does, which is essentially finding meaning in numbers. But a report by Politico's Mike Allen suggests that Silver's new deal also promises him "a role in the Oscars," presumably providing predictions and analysis for ABC, before, after and maybe even during the show. When I spoke with him Monday during a conference call with ESPN, he admitted, "There's not a great statistical way to predict the Oscars -- but it doesn't mean we aren't gonna have some fun with it."
Whether or not he ends up treating any future Oscar-predicting sideline as a serious effort or just fun, I think a reality check is in order. Having specialized in Oscars predictions and analysis myself for the past 12 years, I can assure Silver and ABC that it requires a totally different set of skills and experiences than does electoral forecasting. Silver actually found that out himself over the last few years, when he began wading into Oscar predictions and got slightly burned, if not scalded.
In the wake of his 2008 presidential election success, many in the media began regarding Silver as an all-purpose oracle. But presidential elections come along only every four years, so Silver has had to find something else to prophesy about. In 2009, 2011 and 2013, he tried his hand at Oscar predicting. But as The Atlantic noted Sunday in a blog post entitled "Nate Silver's Mediocre Oscar Prediction History," that didn't work out so well.
In each of those years, Silver declared his picks for only the "big six" categories, not even attempting the harder "below-the-line" categories that separate the experts from the rest. Even so, he still missed several biggies. In 2009, he went four for six, predicting incorrectly that best actor would go to Mickey Rourke (The Wrestler) over Sean Penn (Milk) and that best supporting actress would go to Taraji P. Henson (The Curious Case of Benjamin Button) over Penelope Cruz (Vicky Cristina Barcelona). In 2011, he went five for six, predicting incorrectly that best director would go to David Fincher (The Social Network) over Tom Hooper (The King's Speech). And this year, he went four for six again, predicting incorrectly that best director would go to Steven Spielberg (Lincoln) over Ang Lee (Life of Pi) and that best supporting actor would go to Tommy Lee Jones (Lincoln) over Christoph Waltz (Django Unchained).
(For the record: Though I'm far from infallible, I correctly predicted every one of those races, going six for six in two of those years and five for six in the other, and this year I predicted 15 of the 18 other categories, as well.)
Why hasn't Silver had much success at predicting the Oscars -- and why won't his record improve unless he changes his approach? There are several reasons.
When predicting elections, Silver has at his disposal a ton of data, including regularly updated polls from numerous pollsters, offering insight about how the American public plans to cast its votes and why. Cumulatively, those polls are generally quite reliable, because they draw from dozens of representative samples of the voting public.
But in the case of the Academy Awards, although there are only about 6,000 Academy members, it's much harder to find a representative sample. The Academy asks its members not to reveal their preferences, and many don't want to do that anyway. Complicating matters is the fact that the Academy closely guards its membership list, so the first challenge is simply identifying who is actually a member. It's necessary to attend many of the same events that they do (and that means months of events, some on the East Coast but many on the West) and develop relationships with them (which can't be done overnight). Unless the Academy provides Silver with a shortcut -- by giving him its members' contact info and/or encouraging its members to speak with him -- he faces a very uphill climb.
Some have raised the possibility that ABC, which broadcasts the Oscars, could pressure the Academy to help put Silver on the inside track. But the Academy has no real reason to do that, and has reacted very defensively in the past when journalists have attempted to survey its members en masse. If one of the complaints about the Oscars is that the winners, at least in the big categories, have become too predictable, it doesn't benefit either the Academy or ABC to help Silver eliminate some of the suspense that does remain.
In the past, Silver has treated his Oscar-predicting as a side hobby. He hasn't attempted to do all the necessary grunt work. (Who knows if he even watched the contending movies?) Instead, he created statistical databases into which he fed select data about the nominees (such as how they fared at precursor awards, the frequency with which certain Oscar categories have correlated, etc.). He then weighted the various factors, depending on how much overlap had existed in years past. But it never quite worked. In 2011 he threw out and replaced the system that he employed in 2009, and in 2013 he threw out and replaced the system that he used in 2011.
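To make the approach concrete: a model of the kind described above can be sketched in a few lines. This is a purely illustrative toy, not Silver's actual system; the precursor awards and the weights (meant to stand in for historical correlation with the eventual Oscar winner) are invented for the example.

```python
# Toy precursor-weighting model (illustrative only, not Silver's method).
# Each precursor award gets a weight standing in for how often it has
# historically matched the eventual Oscar winner (numbers are invented).
PRECURSOR_WEIGHTS = {
    "DGA": 0.90,
    "BAFTA": 0.70,
    "Golden Globes": 0.65,
}

def score_nominees(precursor_winners):
    """Map each precursor award to the nominee it chose, and sum the
    weights of the precursors backing each nominee."""
    scores = {}
    for award, nominee in precursor_winners.items():
        scores[nominee] = scores.get(nominee, 0.0) + PRECURSOR_WEIGHTS.get(award, 0.0)
    return scores

def predict(precursor_winners):
    """Predicted winner = nominee with the highest total weight."""
    scores = score_nominees(precursor_winners)
    return max(scores, key=scores.get)

# Example: one precursor backs Nominee A, two back Nominee B.
results = {"DGA": "Nominee A", "BAFTA": "Nominee B", "Golden Globes": "Nominee B"}
print(predict(results))  # "Nominee B" (0.70 + 0.65 outweighs 0.90)
```

The fragility Silver kept running into is visible even in this toy: the prediction depends entirely on weights fitted to past overlap, so when the historical pattern breaks (as it did with best director in 2011 and 2013), the model has no way to know.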