July 22, 2013 7:00pm PT by Scott Feinberg
Memo to Nate Silver: The Oscars Ain't Politics
I have great respect for Nate Silver, the blogger who almost perfectly predicted the electoral outcome of the 2008 and 2012 presidential elections while writing the FiveThirtyEight blog, which most recently was housed at the New York Times. On Monday, he announced he is leaving the Times for a new gig at ESPN, which will also have him contributing to the sports network's parent company, ABC -- which also happens to be home to the annual Oscar broadcast.
Now, Silver is very good at what he does, which is essentially finding meaning in numbers. But a report by Politico's Mike Allen suggests that Silver's new deal also promises him "a role in the Oscars," presumably providing predictions and analysis for ABC, before, after and maybe even during the show. When I spoke with him Monday during a conference call with ESPN, he admitted, "There's not a great statistical way to predict the Oscars -- but it doesn't mean we aren't gonna have some fun with it."
Whether he ends up treating any future Oscar-predicting sideline as a serious effort or just a bit of fun, I think a reality check is in order. Having specialized in Oscar predictions and analysis myself for the past 12 years, I can assure Silver and ABC that the job requires a totally different set of skills and experiences than electoral forecasting does. Silver actually found that out himself over the last few years, when he began wading into Oscar predictions and got slightly burned, if not scalded.
In the wake of his 2008 presidential election success, many in the media began regarding Silver as an all-purpose oracle. But presidential elections come along only every four years, so Silver has had to find something else to prophesy about. In 2009, 2011 and 2013, he tried his hand at Oscar predicting. But as The Atlantic noted Sunday in a blog post entitled "Nate Silver's Mediocre Oscar Prediction History," that didn't work out so well.
In each of those years, Silver declared his picks for only the "big six" categories, not even attempting the harder "below-the-line" categories that separate the experts from the rest. Even so, he still missed several biggies. In 2009, he went four for six, predicting incorrectly that best actor would go to Mickey Rourke (The Wrestler) over Sean Penn (Milk) and that best supporting actress would go to Taraji P. Henson (The Curious Case of Benjamin Button) over Penelope Cruz (Vicky Cristina Barcelona). In 2011, he went five for six, predicting incorrectly that best director would go to David Fincher (The Social Network) over Tom Hooper (The King's Speech). And this year, he went four for six again, predicting incorrectly that best director would go to Steven Spielberg (Lincoln) over Ang Lee (Life of Pi) and that best supporting actor would go to Tommy Lee Jones (Lincoln) over Christoph Waltz (Django Unchained).
(For the record: Though I'm far from infallible, I correctly predicted every one of those races, going six for six in two of those years and five for six in the other, and this year I predicted 15 of the 18 other categories, as well.)
Why hasn't Silver had much success at predicting the Oscars -- and why won't his record improve unless he changes his approach? There are several reasons.
When predicting elections, Silver has at his disposal a ton of data, including regularly updated polls from numerous pollsters, offering insight about how the American public plans to cast its votes and why. Cumulatively, those polls are generally quite reliable, because they draw from dozens of representative samples of the voting public.
But in the case of the Academy Awards, even though there are only about 6,000 Academy members, it's much harder to assemble a representative sample. The Academy asks its members not to reveal their preferences, and many don't want to do that anyway. Complicating matters is the fact that the Academy closely guards its membership list, so the first challenge is simply identifying who is actually a member. It's necessary to attend many of the same events that they do (and that means months of events, some on the East Coast but many on the West) and develop relationships with them (which can't be done overnight). Unless the Academy provides Silver with a shortcut -- by giving him its members' contact info and/or encouraging its members to speak with him -- he faces a very uphill climb.
Some have raised the possibility that ABC, which broadcasts the Oscars, could pressure the Academy to help put Silver on the inside track. But the Academy has no real reason to do that, and has reacted very defensively in the past when journalists have attempted to survey its members en masse. If one of the complaints about the Oscars is that the winners, at least in the big categories, have become too predictable, it doesn't benefit either the Academy or ABC to help Silver eliminate some of the suspense that does remain.
In the past, Silver has treated his Oscar-predicting as a side hobby. He hasn't attempted to do all the necessary grunt work. (Who knows if he even watched the contending movies?) Instead, he created statistical databases into which he fed select data about the nominees (such as how they fared at precursor awards, the frequency with which certain Oscar categories have correlated, etc.). He then weighted the various factors, depending on how much overlap had existed in years past. But it never quite worked. In 2011 he threw out and replaced the system that he employed in 2009, and in 2013 he threw out and replaced the system that he used in 2011.
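To make that kind of model concrete, here is a minimal sketch -- my own illustration, not Silver's actual system -- of precursor weighting: each earlier award gets a weight based on how often its winner has historically matched the Oscar winner, and the nominee with the highest weighted score becomes the prediction. The award names, weights and nominees below are hypothetical stand-ins.

```python
# Hypothetical sketch of a precursor-weighted Oscar prediction model.
# The weights are made-up numbers meant to stand in for "how often has
# this award's winner matched the eventual Oscar winner in the past?"

PRECURSOR_WEIGHTS = {
    "DGA": 0.80,        # Directors Guild of America award
    "BAFTA": 0.65,
    "Golden Globes": 0.55,
    "Satellite": 0.30,  # the kind of noisy signal the next paragraph warns about
}

def predict_winner(precursor_winners):
    """precursor_winners maps an award's name to the nominee it picked.
    Returns the nominee with the highest total weighted score."""
    scores = {}
    for award, nominee in precursor_winners.items():
        weight = PRECURSOR_WEIGHTS.get(award, 0.0)
        scores[nominee] = scores.get(nominee, 0.0) + weight
    return max(scores, key=scores.get)

# Hypothetical best-director precursors in a race like 2011's:
picks = {
    "DGA": "Tom Hooper",
    "BAFTA": "David Fincher",
    "Golden Globes": "David Fincher",
    "Satellite": "David Fincher",
}
print(predict_winner(picks))  # prints "David Fincher"
```

Note what the sketch makes obvious: if a low-value precursor like the Satellite Awards is given any weight at all, it can tip the model toward the wrong name -- here, Fincher over the eventual winner Hooper -- which is exactly the failure mode such formulas run into.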
Consider one aspect of Silver's method: The winners of certain categories at the International Press Academy's Satellite Awards, for example, may have overlapped with the winners at the Oscars on a number of occasions over the last few years, but it takes someone familiar with what the Satellite Awards are, and who does and doesn't vote for them, to know that such overlap is purely coincidental and not even worth considering when formulating Oscar predictions.
To be a consistently strong Oscar prognosticator -- someone like Fandango's Dave Karger, or Deadline's Pete Hammond, or The Wrap's Steve Pond, or InContention's Kris Tapley, or me -- you have to watch everything (dozens if not hundreds of contending films), know your history (familiarizing yourself with lots of older movies and the dynamics of past Oscar races), show up everywhere (there are rubber-chicken dinners and awards ceremonies almost every week), build relationships (with talent, awards strategists, publicists, voters) and know what and who is and isn't worth factoring into your projections. It's a full-time job, though it doesn't look as if Silver, with his expanding empire, intends to treat it as such.
This is not to pick on Silver. He is but one of many people who believe that Oscar predicting is something anyone can do if only they take the time to plug a bunch of numbers into an equation. Every year around Oscar time, I get a call or email from some journalist seeking my take on the latest Ivy League whiz kid or statistician who claims to have figured out a formula to predict the Oscars. These folks generally send out a bunch of press releases, get a lot of publicity and then do no better -- and often worse -- than the regulars. And then they are never heard from again.
Certainly, Silver has a higher profile than most other would-be Oscar prognosticators. But on Monday, when I surveyed several awards strategists who run major studios' Oscar campaigns, none believed he'd have a magic formula.
"I'm a huge Nate fan, and I'm also a big believer in statistics in general," one said. "However, a political race generally has two candidates, endless polls and other information from which to compile predictions and outcomes. There are so many variables in the average Oscar race, from clinical to emotional, that I don't think the same approach used for elections can be used as effectively in the Oscar landscape. Being on the ground at screenings and events and actually speaking to voters is an invaluable source of information. I've won many an Oscar pool due to that advantage."
A second strategist noted, "It helps to know the audience personally, which is something that takes years of experience. What really matters is to find a personal way to make a connection with members and to see what makes a connection with them. After that, statistics aren't as helpful."
A third put it even more succinctly: "An artistic competition calculated via statistical variables leaves room for doubt. But what the hell, it's all in fun anyway."
Let's stipulate, then, that Nate Silver is very good at predicting elections. But just because you can predict one thing well doesn't mean you can predict all things well. He is not a clairvoyant. He's a numbers cruncher, expert at defining the probability of a given outcome when working from comprehensive data sets. But the fact of the matter is that Oscar voting is simply not quantifiable.