
Crunching the numbers shows how Obama and Romney were polls apart

02 Dec 2012

by Harry J Enten

Last week, we learned from Mark Blumenthal the story behind the internal polling of President Obama’s re-election campaign. Obama’s team was serious about its numbers and had an understanding of polling that stunned even me.

The Obama campaign combined three levels of data to peg the state of the race against Mitt Romney: aggregated battleground polls for message testing, state tracking polls from multiple pollsters to show where the campaign stood in each state, and parallel state surveys to refine micro-targeting.

Notice how Obama’s team utilised multiple pollsters at three different levels to ensure that no single survey’s error would affect them too much. This shows a keen grasp of modern research, which says no one poll is always best or worst. The Obama analytics team made one other key judgement correctly. It has become sport for campaigns to trash public polling, but even as the Obama team said some public polling was wrong (which it was), it recognised that such polling was still useful in figuring out where the horse race stood. All of this well-thought-out planning worked, predictably, and Obama’s campaign team now look like geniuses.

Today, we know from Noam Scheiber that Romney’s team seemingly went the opposite way. Most of the attention has been paid to the fact that the Romney polls underestimated Obama in every state covered in Scheiber’s piece. That’s true. Yet what interests me more is how the Romney team made its projections.

First, the Romney team relied on a two-day tracking poll in each state. There’s a reason no public tracking poll was conducted over fewer than three days: two days of data, even if you call a lot of people, will contain plenty of noise, and you often have difficulty reaching tougher-to-get voters (i.e. minority and younger voters with cellphones) in a two-day period.
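To put rough numbers on that noise, here is a minimal sketch (the nightly sample of 300 interviews is an assumption for illustration, not a figure from either campaign) of the 95% margin of error for one-, two- and three-day samples in an even race:

```python
import math

# Rough 95% margin of error, in percentage points, for a sample
# proportion near 50%, assuming simple random sampling.
def margin_of_error(n, p=0.5):
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

# Assumed 300 interviews per night -- illustrative, not campaign data.
for days, n in [(1, 300), (2, 600), (3, 900)]:
    print(f"{days}-day sample of {n}: +/- {margin_of_error(n):.1f} points")
```

Adding a third day of interviews cuts the margin of error from roughly four points to just over three, and it gives harder-to-reach respondents more chances to be called back.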

The Romney campaign also read far too much into one-day changes in its data. The team thought it was seeing momentum, because many of its numbers from the Sunday before the election were better than its Saturday data, even as its pollster warned against reading too much into one-day figures. If you think two-day data is noisy, imagine a single day with no callbacks to those who don’t answer the phone the first time. Just look at the daily breakdowns in publicly available polling.
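The point is easy to demonstrate with a hedged simulation (all figures are invented, again assuming 300 interviews a night). Support never actually moves here, yet the one-day readings bounce around enough to be mistaken for momentum:

```python
import random

random.seed(0)
TRUE_SUPPORT = 0.48  # held fixed: there is no real movement to find

# Four nightly samples of 300 respondents from the same electorate.
for day in ["Thursday", "Friday", "Saturday", "Sunday"]:
    votes = sum(random.random() < TRUE_SUPPORT for _ in range(300))
    print(f"{day}: {100 * votes / 300:.1f}%")
```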

Second, the Romney team seemingly tossed aside older polling as if it were useless. I’ll admit I’m addicted to polling data and am always looking for my next fix. But I also know the research indicates that an average of all polling taken within the last 25 days of a general election campaign is going to be very solid. From 2006 to 2010, this 25-day average was as accurate as more complicated methods. In 2012, it also did as well as the final-week averages.
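That simple approach is almost trivial to implement; the sketch below uses invented poll figures purely for illustration:

```python
# Average every poll taken in the final 25 days, rather than chasing
# whichever number arrived most recently. All figures are invented.
polls = [  # (days_before_election, candidate_share_pct)
    (24, 49.0), (20, 50.5), (15, 49.5), (10, 50.0), (5, 51.0), (1, 50.5),
]
in_window = [share for age, share in polls if age <= 25]
print(f"25-day average: {sum(in_window) / len(in_window):.1f}%")
```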

The reason is that campaigns just don’t have as big an effect as people think. Obama’s campaign noted that after the first debate the polling was consistent: Obama held a three- to four-point edge that never abated. That Romney’s team seemingly thought otherwise is something I don’t quite get. If Romney and company had simply averaged all their data over the last few weeks, they would have had a more realistic picture.

Third, the Romney team did not grasp that it was a presidential year, and that in presidential years everyone votes. A lot was made of Romney voters’ enthusiasm, and of how strongly Romney was running among the most “interested” voters. This betrays an apparent unfamiliarity with solid research showing that this type of modelling just doesn’t work in presidential elections anymore.

People who pick up the phone and say they are going to vote will vote. 

Fourth, the Romney campaign apparently didn’t care to look at public polling. I have no idea how many pollsters Romney’s team had on the campaign beyond its chief pollster; I hope they were getting data from more than one internal source, but they should have looked at public polling too. This again goes back to a fact the Obama campaign understood: pretty much any poll is useful in some way.

Some polls are going to be right more often, but no one poll is going to be right all the time. Obama’s team built a model that took public polling into account, using statistical equations (i.e. math) to weight the polls and work out what picture of the electorate made the most sense given what they knew. Romney’s team would have been far better off doing the same.
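What that weighting might look like in its simplest form is sketched below. The Obama team’s actual model is not public, so this is only an illustration in the same spirit, weighting each poll by its sample size as a crude stand-in for inverse sampling variance:

```python
# All poll figures are invented for illustration.
polls = [  # (candidate_share_pct, sample_size)
    (50.0, 800), (51.5, 400), (49.0, 1200), (50.5, 600),
]
total_n = sum(n for _, n in polls)
estimate = sum(share * n for share, n in polls) / total_n
print(f"Weighted estimate: {estimate:.1f}%")
```

A fuller model would also weight by pollster track record and recency, but even this crude version resists being whipsawed by any single survey.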

The Guardian
