Hmmm. It almost sounds like the exit pollster you know is saying that exit polls don't predict anything, and that the margin of error just indicates the level of repeatability in the results of the sample. If that were the case, exit polling would be worthless. I suspect this person just quickly dashed off a response and didn't proofread. Exit polls should be able to predict outcomes; otherwise they are pretty useless. An exit poll with a 95% confidence level and a 5% margin of error says that the final result should be within 5% of the polled result 95% of the time. It also allows for the occasional unexpected outcome, even a 20% difference from the expected result, the other 5% of the time. I've seen a few statements about some of these charts being "impossible," but "improbable" is a better word to use. We should be able to calculate the probability of what we're seeing in the many charts that you guys have put together. What are the odds of this happening? Is it 1 in 5, or 1 in a billion? The calculated odds make a big difference in the persuasiveness of the argument.
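To make "the odds" concrete, here's a minimal sketch in Python, assuming a simple random sample (which real exit polls are not; cluster sampling, discussed below, widens everything). The 400-person sample size is just an illustration chosen because it lands near a 5% MOE:

```python
import math
from statistics import NormalDist

def margin_of_error(n, p=0.5, confidence=0.95):
    """MOE for a proportion p estimated from n respondents (SRS)."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # ~1.96 for 95%
    return z * math.sqrt(p * (1 - p) / n)

def odds_of_gap(gap, n, p=0.5):
    """Probability the poll misses the true proportion by at least
    `gap`, purely from sampling chance (two-tailed)."""
    se = math.sqrt(p * (1 - p) / n)
    return 2 * (1 - NormalDist().cdf(gap / se))

# A 400-person sample gives roughly the 5% MOE in the example above...
print(f"MOE: {margin_of_error(400):.3f}")          # ~0.049
# ...while a 10-point miss in that same poll is about a 1-in-16,000
# event under simple random sampling. That's the kind of number that
# separates "1 in 5" from "1 in a billion."
p_miss = odds_of_gap(0.10, 400)
print(f"10-point miss: ~1 in {1 / p_miss:,.0f}")
```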
Also note that the exit poller states, "instead of a 5 percent margin of error, you would have a 7 or maybe 8 percent MoE." This really should be written as, "if you have a 5 percent MOE..." If you start with a 10% MOE, then you're looking at a 14% or maybe 16% MOE by cherry-picking your precincts? Many of these "flips" or "criss-crosses" can fall within a 16% MOE and prove to be statistically "expected." The real MOE will vary from county to county, since the MOE for cluster sampling depends on the variability within each cluster and the variability between clusters.
(Source: Kellogg School of Management, Northwestern University.)
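The standard back-of-the-envelope for that inflation is the "design effect." A quick sketch; the respondents-per-precinct count and the intracluster correlation values here are illustrative guesses, not anyone's actual numbers:

```python
import math

def cluster_moe(srs_moe, respondents_per_precinct, icc):
    """Inflate a nominal simple-random-sample MOE by the design effect
    DEFF = 1 + (m - 1) * rho, a standard cluster-sampling approximation."""
    deff = 1 + (respondents_per_precinct - 1) * icc
    return srs_moe * math.sqrt(deff)

# A nominal 5% MOE with ~50 respondents per precinct and a modest
# within-precinct correlation lands right in the 7-8% range quoted:
print(f"{cluster_moe(0.05, 50, 0.02):.3f}")  # ~0.070
print(f"{cluster_moe(0.05, 50, 0.03):.3f}")  # ~0.079
# Start from a 10% nominal MOE and the same inflation gives 14-16%:
print(f"{cluster_moe(0.10, 50, 0.03):.3f}")  # ~0.157
```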
Here's another piece on cluster sampling that suggests our flips/criss-crosses may be within the margin of error.
Take these charts, for example:
If the sample size needed for a 3% MOE isn't reached until the 50% mark in total votes counted, then these charts simply show that the county has large between-cluster variability and that the final results are within the MOE for that sample size, since the curves start to flatline at that 50% mark.
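Here's a toy simulation of that flatline behavior; every number in it (precinct count, precinct sizes, the spread of candidate support) is invented purely to show the mechanics, not to model any real county:

```python
import random

random.seed(1)

# Invent 200 precincts with varying sizes and varying support for
# candidate A -- the "between-cluster variability" described above.
precincts = []
for _ in range(200):
    size = random.randint(200, 2000)
    support = random.uniform(0.35, 0.65)
    votes_a = sum(random.random() < support for _ in range(size))
    precincts.append((size, votes_a))

random.shuffle(precincts)  # arbitrary reporting order

total = votes_a_cum = 0
for i, (size, votes_a) in enumerate(precincts, 1):
    total += size
    votes_a_cum += votes_a
    if i % 40 == 0:
        print(f"after {i:3d} precincts: A = {votes_a_cum / total:.3f}")

# The running share swings early and then barely moves over the back
# half of the count. A curve that is still drifting at the 90-95%
# mark is the anomaly worth pricing out.
```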
Now then, please don't get me wrong. When I see charts that are not flatlining after hitting the 90% or 95% mark, instinctively I know something is wrong. But as bbwarfield has been saying, we need to dot our i's and cross our t's with regard to the methodology and calculate our odds. Each of these apparently freak occurrences can happen. We can't say they're impossible. But we have to do the math and state that the likelihood of these results is 1 in (whatever). That's when we have a story.
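And that "1 in (whatever)" arithmetic is straightforward once the per-county odds are in hand. A sketch; the per-county probability and the county counts below are placeholders, and the real p would have to come out of the cluster-adjusted MOE work above:

```python
from math import comb

def odds_at_least(k, n, p):
    """P(at least k of n independent counties show the anomaly) when
    each county shows it with probability p under an honest-count null."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# A couple of anomalies across a big pile of counties is unremarkable:
print(f"~1 in {1 / odds_at_least(2, 20, 0.05):,.0f}")   # ~1 in 4
# But a dozen anomalies out of twenty counties is another matter:
print(f"~1 in {1 / odds_at_least(12, 20, 0.05):,.0f}")  # roughly 1 in 50 billion
```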
Forgive me for saying "this needs to be done" without contributing to it myself. I live in a Super Tuesday state and it's been a busy week here.