Polling is a curious institution: it does and does not make sense. On the one hand, the collective insistence on knowing popular sentiment about a set of issues or candidates satisfies a very modern desire not only to experience the now, but to simultaneously systematize, process, and interpret our experience of the present. And yet, for all the fervor for an uninterrupted stream of competing polls, there's also a widely shared skepticism about election predictions. Polls represent a snapshot of the recent past, but we want to know what will happen in the future, where very different circumstances and barriers characterize the practice of voting.
And polls are hardly limited to mere description. Campaigns have weaponized polls to mobilize their supporters for contributions and get-out-the-vote efforts. The media uses national polls to justify its horse-race coverage. On election day, exit polls fuel the anxiety of some and motivate others to head to their local voting booth. In the practice of democracy, polling is a through-line.
To understand the power of polling, we have to complicate our reception of polls, to retain some internal radar for separating the signal from the noise. But this does not mean we all need to up and begin studying Bayesian statistics or enroll in a local college course. To keep a healthy skepticism and sift the gold from the polls, all with only a handful of days before the election, you can go far with a catalog of poll types.
National Polls vs. State Polls
To many election forecasters, national polls are the bane of their existence, and for good reason. National polls tend to portray this election as a too-close-to-call race, a statistical dead heat. Surprisingly, though, whether national polls are accurate as a snapshot of the nation is not the chief reason for skepticism.
On election day, we will observe a popular vote result. It is likely that this result will match the outcome of the election, but it is important to remember that the popular vote does not itself decide that outcome. As most folks know, the outcome is determined by the electoral college, and that institution is composed of state election returns. Indeed, the reason there's so much fuss over swing states is that those are the only states actually in contention in the race for the minimum of 270 electoral college votes needed to win the White House.
So while national polls may give us a good perspective on the big picture of the race, they do little to reveal the electoral math of election night. This is one reason there's such a huge disparity between national polls and every other type of polling covered here. It's also the reason these polls are most often co-opted by the media and campaigns. While state polls tend to get only state and local media coverage, they're eminently more valuable than the deluge of national polls deconstructed every hour.
Prediction Markets
Organizations like InTrade, Betfair, and the Iowa Electronic Markets treat the outcomes of a given event, say the presidential election, like stocks in a market. The price of each outcome's contract represents the market's estimate of the probability that the outcome will occur. If you're a savvy investor, the goal is to buy an underrated candidate's stock and sell it for a higher price. Because actual money is on the line, folks who may or may not be familiar with the efficient-market hypothesis tend to assign a lot of credibility to the leanings of prediction markets. They're not wrong for doing so: it's been shown that prediction markets do better than individual subjective predictions, and these markets are not easy to beat on average.
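The price-as-probability reading can be made concrete with a toy sketch (the contract terms here are a simplifying assumption, not any particular market's real rules): a contract pays $1 if the event occurs and $0 otherwise, so its price in cents reads directly as an implied probability, and buying is profitable in expectation only when you believe the true chance exceeds the price.

```python
def implied_probability(price_cents: float) -> float:
    """Price of a $1-payout contract, in cents, read as a probability."""
    return price_cents / 100.0

def expected_profit(price_cents: float, your_probability: float) -> float:
    """Expected profit per contract if your probability estimate is right."""
    payout = 1.0                      # dollars paid if the event occurs
    cost = price_cents / 100.0        # dollars paid to buy the contract
    return your_probability * payout - cost

# A contract trading at 60 cents implies a 60 percent chance...
print(implied_probability(60))                 # 0.6
# ...and is worth buying only if you think the true chance is higher.
print(round(expected_profit(60, 0.70), 2))     # 0.1 expected profit per contract
```

This is also the intuition behind buying an "underrated" candidate: if the market prices a candidate at 40 cents but you believe the real chance is 55 percent, every contract you buy has positive expected value.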
But prediction markets are not without their flaws. David Rothschild of the University of Pennsylvania's Wharton School compared prediction markets to more comprehensive statistical models of election prediction and found that the former tend to overrate the chances of the underdog, in what is called the favorite-longshot bias. Overestimating the underdog in a prediction market makes some sense, because the payoff is high in the case of an upset.
More troubling is the suspicion of market manipulation. A market like InTrade gets far more media coverage than its competitors, so much so that its leanings can influence a news cycle on a slow day. If you work on a campaign and have oodles of contributions to spend, or if you otherwise stand to gain heavily from a particular election outcome, it is well within the realm of possibility (but notoriously difficult to prove) that you can artificially improve your candidate's chances just by placing a single large bet or a series of them. This drives up the price (and so the predicted chances) of your candidate's stock, a media outlet may interpret it as your candidate “building momentum,” and, seeing this on the news, more traders will exhibit herding behavior and buy that candidate's stock, building the momentum larger still. It is not likely that a single trader alone shifts a market, but the herding effect compounds. The Washington Post ran a piece skeptical of prediction markets in which it traced the impact of a single $17,800 InTrade bet in favor of Governor Romney.
Prediction markets can also, paradoxically, overrate top dogs. One potential reason these markets consistently pick President Obama as the favorite is a speculative strategy: traders may be buying at a certain price now to sell for a higher price later, expecting the incumbent president's chances to increase.
With all these conflicting biases, it may be that the markets self-correct, but that seems like wishful thinking for market-efficiency hardliners. For those fond of this form of trend-spotting, it's a good idea to pay more attention to under-covered, relatively obscure prediction markets and sportsbooks, like Betfair and Pinnacle respectively.
Forecast Models
Of all the tools for predicting elections, comprehensive models are the new (and few) kids on the block. They're worth paying attention to, though: several have recently distinguished themselves by calling election day the most accurately. In 2008, statistician Nate Silver, who runs the New York Times' FiveThirtyEight poll-aggregation blog, correctly predicted all 35 Senate races and 49 out of 50 states' presidential election results (he miscalled Indiana, by a one-percent margin). Somewhat less well known is Emory University political science professor Drew Linzer, who runs the site Votamatic. Both models, and the many more in this category, use a smorgasbord of data, some historical and much of it live-updating, to build forecasts. This means tracking, averaging, and assigning weights to various state and national polls, incorporating various economic indicators, surveying prediction markets, and so on.
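The track-average-and-weight idea can be sketched in a few lines. This is a deliberately toy illustration, not FiveThirtyEight's or Votamatic's actual method: the weighting scheme (square root of sample size, exponential decay with a hypothetical seven-day half-life) and the poll numbers are assumptions chosen only to show the mechanics.

```python
from math import exp

def weighted_average(polls):
    """Weighted average margin over polls given as (margin, sample_size, days_old).

    Margin is the lead in percentage points (positive favors one candidate).
    Larger samples and fresher polls count more; the decay constant of
    7 days is an arbitrary illustrative choice.
    """
    num = den = 0.0
    for margin, n, days_old in polls:
        weight = (n ** 0.5) * exp(-days_old / 7.0)
        num += weight * margin
        den += weight
    return num / den

# Three hypothetical state polls: margin, sample size, age in days.
polls = [(+1.0, 1000, 1), (-2.0, 600, 5), (+3.0, 1500, 10)]
print(round(weighted_average(polls), 2))  # a small positive lead after weighting
```

A real model layers much more on top of this, such as adjustments for each pollster's house effects, economic fundamentals, and simulation of the electoral college, but the core move of downweighting stale, small-sample polls is the same.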
What bothers many about these sites is that they tend to show an overwhelming advantage for the President's reelection campaign. As of this morning, FiveThirtyEight gives the President an 85.1 percent chance of winning the 2012 election, with a predicted 306.9 electoral votes. If you think that's bold, Votamatic pegs the President's expected electoral vote total at 332. Both are a far cry from the national polls, and even from the prediction markets.
Some critics have taken this data to “expose” the bias of Nate Silver, who himself seems more skeptical of his model than anyone else. On his own blog, he notes again and again that models are only as good as the data they incorporate, and only as well informed as the present. Insofar as state polls, national polls, and prediction markets are biased or inaccurate, forecast models lose their predictive luster as well. However, Mr. Silver has also gone to lengths to explain exactly what biases would be needed for his model to be wrong.
One misconception about bullish forecast models is that their predictions entail a landslide victory, when it's clear, by measures both numerical and not, that this election won't be one. But it's possible to have a high probability of victory even in a close race. This is the difference between winning by a 30 percent margin (eminently unlikely, of course) and having an 85.1 percent probability of winning by a 1-2 percent margin. These forecasts make a statement closer to the latter.
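The close-race-but-lopsided-odds point falls out of basic probability. As a minimal sketch, assume the final margin is normally distributed around the forecast lead (the specific numbers below are illustrative, not any model's actual parameters): a mere 2-point expected lead with 2 points of uncertainty already implies roughly 84 percent odds of winning.

```python
from math import erf, sqrt

def win_probability(mean_margin: float, margin_sd: float) -> float:
    """P(final margin > 0) for a normally distributed final margin."""
    z = mean_margin / margin_sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(round(win_probability(2.0, 2.0), 2))  # 0.84: a close race, lopsided odds
print(round(win_probability(2.0, 8.0), 2))  # 0.6: same lead, more uncertainty
```

So a forecast of "85 percent to win" is perfectly consistent with an expected margin of only a point or two; the high probability reflects confidence about the sign of the margin, not its size.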
Whatever the case, it is clear that polls have taken on a life of their own in election coverage. Consider the number of institutions singularly devoted to tracking them, the tools invented to better their accuracy, the media outlets and campaigns engrossed in the strategies of their use, and the folks like you and me interested in calling a race before it's called. For as much as has been said about our ability to control what we can measure, this is one case that has it the other way around.