Flipping the vote against Ron Paul in South Carolina?

Check your vote total for Greenville County. It should be around 73k votes. I still give you an "A+" because you mentioned my name.

You are right. Great job!

My spreadsheet total and the number actually used in the graph are the same: 73,055. I add the vote totals in the titles manually (so tedious...) and I blew this one for some reason: 8,005 is the vote total for Greenwood. Apologies. I'll edit the uploaded graphs ASAP.

[attached chart: BO3zy.jpg]


Passionate collaborative work. Unbeatable!
 
Humor acknowledged, but with all due respect, the science of statistics doesn't prove conclusions by simply asserting those conclusions repeatedly. Also, it's not "nit picking methodology" to point out that accepted methodology was not followed. I'm not saying that the conclusions are wrong here, but since alternate, non-standard methodologies were used, they need to be justified. Once justified, the probability of the observed results needs to be calculated so that we can scientifically evaluate the results of these primaries and caucuses.

I'm sure that may be the case for the specific example you're talking about, but a blanket statement like that does no justice to the results that did follow proper methodology. Heck, they even went back over some data, changed the methodology, and the results still produced the same conclusion. And I'm sure the stats team has the statistical results to warrant their conclusions.
 

Again, here's an assertion, but no concrete numbers. I'm not trying to be difficult here, believe it or not, I'm trying to help by making sure the assertions do have statistical backing. So, if proper methodology has been followed, can anyone here provide me with one example that shows:

1) A projected result using the formulas for cluster sampling.
2) An actual result that falls outside of the MOE for the projected result.
3) The calculated probability that this would happen

I have seen several charts that have been sorted by precinct turnout and show no flat-lining even at the 90% mark and I am in complete agreement that this doesn't appear to be natural. But nothing has been statistically proven until the math has been performed.
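For anyone who wants to attempt item 1, here is a minimal sketch of a cluster-sample projection with a between-cluster margin of error. All precinct numbers below are made up for illustration, and this is the simple equal-weight approximation, not a full design-based estimator:

```python
import math

# Hypothetical per-precinct (cluster) results: (total_votes, candidate_votes)
precincts = [(120, 30), (95, 40), (200, 55), (150, 48), (80, 22),
             (110, 41), (90, 28), (175, 60), (60, 15), (130, 39)]

m = len(precincts)
shares = [c / t for t, c in precincts]

# Projected share: ratio estimator (total candidate votes / total votes)
projected = sum(c for _, c in precincts) / sum(t for t, _ in precincts)

# In one-stage cluster sampling, between-cluster variability drives the
# standard error (equal-weight approximation for simplicity)
mean_share = sum(shares) / m
var_between = sum((s - mean_share) ** 2 for s in shares) / (m - 1)
se = math.sqrt(var_between / m)
moe_95 = 1.96 * se  # ~95% margin of error

print(f"projected share: {projected:.3f}, 95% MOE: +/- {moe_95:.3f}")
```

Step 2 would then check whether the actual county result falls outside `projected +/- moe_95`, and step 3 would convert the gap into a probability.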
 
Hey CJM, I asked an exit poller to tell me how accurate an exit poll would be if it polled only the smallest precincts, and he responded:
"the z value or standard deviation would be larger which would mean that instead of a 5 percent margin of error, you would have a 7 or maybe 8 percent MoE. In any exit poll a good number is around 400. When a national poll is taken most agencies use around 1000 as the base number. Even though 1000 is a fraction of the population, the data is considered representative of the group statistically. But do not be fooled by numbers. A 5% margin of error, in no way means that the final number will be five percent higher or lower. What the MoE really states is that if another poll was taken that 95% of the time the results would be within 3 standard deviations of the original. The large polling agencies will never explain this to the public, just as we probably will never explain it to the public, because it is confusing and takes years of study to truly understand how to create a survey and questions with validity and reliability. In South Carolina, I have heard reports that our numbers - the ones we feel were manipulated- are actually pretty close to another firms findings."
I'm just the messenger.

Great feedback. Thanks.

I am fully aware of this and on top of those maths. However, we are not talking about 1 poll! Each precinct is one poll. So in Iowa, in the smallest decile (0-10% of votes), we are looking at 270 polls; in the second decile, 216 polls; in the third, 157.

On the other hand, remember that I am not talking about the average score of precincts in this analysis. I am looking at the cumulative average score of precincts. So I am not looking at the standard deviation of Poll 1, Poll 2, Poll 3..., but at the standard deviation of Poll 1; the average of Polls 1 and 2; the average of Polls 1, 2, 3...; which is massively lower.
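That shrinkage is easy to demonstrate with simulated data. A rough sketch, where the 270 precincts, the 0.30 mean share, and the 0.08 spread are all assumed values, not the actual Iowa numbers:

```python
import random
import statistics

random.seed(1)
# Simulated per-precinct vote shares (each precinct treated as one "poll")
shares = [random.gauss(0.30, 0.08) for _ in range(270)]

# Running cumulative average after each new precinct is added
cum_avg = []
total = 0.0
for i, s in enumerate(shares, 1):
    total += s
    cum_avg.append(total / i)

sd_raw = statistics.stdev(shares)        # spread of individual precincts
sd_cum = statistics.stdev(cum_avg[10:])  # spread of the running average (tail)

print(f"per-precinct SD: {sd_raw:.4f}, cumulative-average SD: {sd_cum:.4f}")
```

The running average settles down quickly, which is why a cumulative curve that keeps climbing late in the count looks so strange.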

I'll publish more on this shortly. Please please forward it to your friend then. A statistician will jump out of his skin when he sees it.
 
cjm says:

I pointed out in posts #404 and #418 that the projections may be incorrect due to the samples used and that the margin of error on the projection may be a lot higher when using the proper formula. Then again, those projections and MOEs may be no different than what they are now. I don't know.

Statistical methods depend on random sampling and the precincts used for the projections are decidedly non-random. I know that the theory of fraud in high turnout precincts means that they have to be excluded from the sample, but I haven't yet seen a convincing justification that says selecting all the low turnout precincts is a good method for projecting vote totals.


Very fair comment, cjm. No objection to that.

My analysis and the jaw-dropping math anomaly stand entirely if you exclude all small, volatile precincts from the data. It is robust to small-precinct exclusion. The impossibly systematic climb in Romney's score is very visible even in the highly populated precincts alone.
 
Yes please. Show me the work. Thanks.

Actually, I'd like to see this too. I also think more of those standard deviation charts might be helpful. The picture graphs are eye-catching and provocative, but to be convincing to others I suggest adding more proof that this couldn't happen either 'by chance' or due to the small-vote precincts being a skewed sample.
 
Again, here's an assertion, but no concrete numbers. I'm not trying to be difficult here, believe it or not, I'm trying to help by making sure the assertions do have statistical backing. So, if proper methodology has been followed, can anyone here provide me with one example that shows:

1) A projected result using the formulas for cluster sampling.
2) An actual result that falls outside of the MOE for the projected result.
3) The calculated probability that this would happen

I have seen several charts that have been sorted by precinct turnout and show no flat-lining even at the 90% mark and I am in complete agreement that this doesn't appear to be natural. But nothing has been statistically proven until the math has been performed.

Well, I'm sure those criteria you list will provide reliable results. However, I want to point out that the 3 criteria you outlined are in no way the only measuring sticks to prove results are reliable. Your criteria center on drawing conclusions from projected results from sampling and comparing them to actual data. There are many types of data, many types of statistical analysis, and many different purposes for methods. Many results stated so far used other methods that are just as applicable, if not more applicable in certain circumstances, than the criteria you have outlined. So if data was not analyzed by the 3 criteria you outlined, that in no way instantly invalidates the results. As for the math being performed so data and conclusions are proven, have you asked the people putting up the data if they did the math? I have seen many posts where they did post the math.
 
Great feedback. Thanks.

I am fully aware of this and on top of those maths. However, we are not talking about 1 poll! Each precinct is one poll. So in Iowa, in the smallest decile (0-10% of votes), we are looking at 270 polls; in the second decile, 216 polls; in the third, 157.

On the other hand, remember that I am not talking about the average score of precincts in this analysis. I am looking at the cumulative average score of precincts. So I am not looking at the standard deviation of Poll 1, Poll 2, Poll 3..., but at the standard deviation of Poll 1; the average of Polls 1 and 2; the average of Polls 1, 2, 3...; which is massively lower.

I'll publish more on this shortly. Please please forward it to your friend then. A statistician will jump out of his skin when he sees it.

Yes, this makes sense. More about this please!
 
Hmmm. It almost sounds like the exit pollster you know is saying that exit polls don't predict anything, and that the margin of error just indicates the level of repeatability in the results of the sample. If that were the case, exit polling would be worthless. I suspect this person just quickly dashed off a response and didn't proofread. Exit polls should be able to predict outcomes; otherwise they are pretty useless. An exit poll with a confidence level of 95% and a margin of error of 5% says that the final result should be within 5% of the polled result 95% of the time. This allows for an unexpected outcome, like a 20% difference from the expected result, 5% of the time.

I've seen a few statements about some of these charts being "impossible," but "improbable" is a better word. We should be able to calculate the probability of what we're seeing in the many charts that you guys have put together. What are the odds of this happening? Is it 1 in 5, or 1 in a billion? The calculated odds make a big difference in the persuasiveness of the argument.

Also note that the exit poller states, "instead of a 5 percent margin of error, you would have a 7 or maybe 8 percent MoE." This really should be written as, "if you have a 5 percent MOE....." If you start with a 10% MOE, then you're looking at a 14% or maybe 16% MOE by cherry picking your precincts? Many of these "flips" or "criss-crosses" can fall within a 16% MOE and prove to be statistically "expected." The real MOE will vary from county to county since the MOE for cluster sampling depends on the variability within the cluster and variability between clusters.
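For reference, the textbook margin of error behind these figures is z·sqrt(p(1-p)/n) for a simple random sample; cluster samples, as noted, inflate it. A quick sketch:

```python
import math

def moe(p, n, z=1.96):
    """~95% margin of error for a simple-random-sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# The exit poller's "good number" of ~400 respondents, worst case p = 0.5:
print(round(moe(0.5, 400), 3))  # -> 0.049, i.e. roughly the quoted 5%
```

Since the MOE scales with 1/sqrt(n), quadrupling the sample only halves it, and restricting the sample to a handful of small precincts pushes it the other way.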

From Kellogg School of Management, Northwestern University



Here's another piece on cluster sampling that suggests our flips/criss-crosses may be within the margin of error.

Take these charts, for example:

[attached chart: QUc88.jpg]


If the sample size required to get a 3% MOE is at the 50% mark in total votes, then these charts simply highlight that the county exhibits great between-cluster variability and the final results are within the MOE for the sample size since they start to flatline at that 50% mark.

Now then, please don't get me wrong. When I see charts that are not flatlining after hitting the 90% or 95% mark, instinctively I know something is wrong. But as bbwarfield has been saying, we need to dot our i's and cross our t's with regard to the methodology and calculate our odds. Each of these apparently freak occurrences can happen. We can't say they're impossible. But we have to do the math and state that the likelihood of these results is 1 in (whatever). That's when we have a story.
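One way to put a number on "the odds of this happening" is a permutation test: if precinct size carried no information, shuffling the order of precincts should produce a late climb as large as the observed one reasonably often. A sketch on fabricated data (the `observed` precincts below are synthetic, built so that the candidate's share grows with precinct size; none of this is real county data):

```python
import random

random.seed(0)

def cumulative_shares(precincts):
    """Cumulative candidate share as precincts are added in the given order."""
    tot = 0
    cand = 0
    out = []
    for t, c in precincts:
        tot += t
        cand += c
        out.append(cand / tot)
    return out

def final_climb(precincts):
    """Rise in cumulative share over the second half of the tally."""
    cs = cumulative_shares(precincts)
    return cs[-1] - cs[len(cs) // 2]

# Synthetic data: share grows with precinct size (the suspect pattern)
observed = [(50 + 10 * i, int((50 + 10 * i) * (0.20 + 0.002 * i)))
            for i in range(100)]
obs_climb = final_climb(sorted(observed))  # sorted by size, as in the charts

# Null hypothesis: ordering by size shouldn't matter -> shuffle and compare
trials, hits = 2000, 0
for _ in range(trials):
    shuffled = observed[:]
    random.shuffle(shuffled)
    if final_climb(shuffled) >= obs_climb:
        hits += 1

print(f"estimated p-value: {hits / trials:.4f} "
      f"(odds ~1 in {trials // max(hits, 1)})")
```

If almost no shuffled ordering reproduces the climb, the estimated p-value is tiny, and that is the "1 in (whatever)" number the argument needs.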

Forgive me for saying "this needs to be done" without contributing to that myself. I live in a Super Tuesday State and it's been a busy week here.

Cluster maths! Absolutely spot on comment. Many thanks. This needs to be addressed. We need to talk clusters!

Through precinct ballots, I am collecting 100s or 1000s of "clusters". Through the cumulative % of the vote tally, I keep adding new ones. Cluster standard-deviation calculus is complex, but one thing I know: if you keep adding clusters, the estimate of your population's score improves all the time.

Romney's standard deviation normally goes down through the first 10%-30% of polls (here = precincts), but then it goes up, as his percentage-of-votes score rises in steady increments! Impossibly steady increments! Remember the Iowa table: precincts of 59 people score 1% below precincts of 93 people, which score 1% below precincts of 116 people, which score 1% below those of 144. What????

More on this shortly. It will be crucial for a pro statistician.
 
Absolute mathematical proof - Step 1

This is what you need to forward to your math/statistics teacher or statistician friend. To the mathematical brain, this is the mathematically impossible.

This is where we need feedback fast.

[attached chart: hT1i6.jpg]


More evidence along those lines soon.

Debunk! Debunk! Debunk!
 
This is what you need to forward to your math/statistics teacher or statistician friend. To the mathematical brain, this is the mathematically impossible.

This is where we need feedback fast.

[attached chart: hT1i6.jpg]




More evidence along those lines soon.

Yes. This is very clear. What do RP's numbers do? Do they deviate just as Romney's do?
 
This is not a zero-sum game: an election is a 100%-sum game, so yes, Paul "complements" Romney.

A 100%-sum game creates dependency between candidates' scores, by the way ("the variables are not independent"), and might be a source of debunking: apparent vote flipping is normal in a two-candidate race. What about a four-candidate race? Complex question. Could be very important.
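That dependency is easy to quantify in simulation: with two candidates, the shares are perfectly negatively correlated; with four, the correlation weakens but stays negative. A sketch using purely random (uniform) vote splits, which is an assumption for illustration, not a model of any real race:

```python
import random

random.seed(2)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def share_correlation(k, n=500):
    """Correlation between two candidates' shares when k candidates
    split 100% of each precinct's vote at random."""
    a, b = [], []
    for _ in range(n):
        votes = [random.random() for _ in range(k)]
        total = sum(votes)
        a.append(votes[0] / total)
        b.append(votes[1] / total)
    return pearson(a, b)

print("2 candidates:", round(share_correlation(2), 3))  # exactly -1.0
print("4 candidates:", round(share_correlation(4), 3))  # negative, but weaker
```

So in a two-candidate race any gain for one is automatically a mirror-image loss for the other, which is why "flipping"-shaped charts need extra care in a four-candidate field.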
 
This is not a zero-sum game: an election is a 100%-sum game, so yes, Paul "complements" Romney.

A 100%-sum game creates dependency between candidates' scores, by the way ("the variables are not independent"), and might be a source of debunking: apparent vote flipping is normal in a two-candidate race. What about a four-candidate race? Complex question. Could be very important.

The graphs show that Paul only deviates sometimes, while Romney deviates more often (because he also takes from the others sometimes). To highlight that Romney is the beneficiary, perhaps it would be helpful to create an overall 'number' of times that Romney deviates compared with the others? That might make a catchy opener for the report, e.g., 'Why does Romney deviate (and always do better) more than any other?'
 
Away from main computer for a bit...

Anyone up for converting Liberty's charts for SC into a map of South Carolina?

Each county should show:

1) one color if flipped, another if unflipped (based on Liberty's division of charts)
2) # of votes
3) if flipped, 'beneficiary=Romney', 'victim=xxxxxx' (Paul/Gingrich)

Since Liberty has pointed out that the flipping occurs not because of strict precinct size, but rather size as a percent of the county, it would be very interesting to see, visually, whether any neighboring regions of similar size show flipping on and off, or different victims.
 
Absolute mathematical proof - Step 2

OK. How do we seriously, professionally tell when votes were flipped or not? What is a natural straight line, and what is one that has been doctored?

Just staring at graphs is not so convincing. Fair enough.

It's gonna be tricky for those without stat training. The others will see quickly why I start to speak of absolute mathematical proof of vote rigging.

Here are the Republican Primary results for Palm Beach. Loads of voters and precincts. Perfect. Look at the charts:

[attached chart: orbVo.jpg]


In 2008, something extraordinary goes on.

McCain's score goes dead flat very early. This is what one should expect. You accumulate so many votes so quickly that you can reliably project McCain's final score at 100% from the score at 10%. Good.

Now look at the rest of the pack.

Romney climbs CONSTANTLY at the sole expense of the 3 others.

How constantly? That is what the table below the chart explains. Even though all the candidates' lines look identically straight to the naked eye from 50% cumulative onwards, they are totally different mathematical animals under the analytical microscope.

The variation in the cumulative % (X-axis) explains 97-98% of the variation in the scores of Giuliani, Huckabee, Paul, and Romney (that is what the R-squared number means). Those 4 lines are identically straight. Amazingly straight. Algorithmically smoothed. McCain's line is not at all like them. McCain was just left alone.

The F-statistic and t-statistic are sophisticated statistical indicators of the probability of this happening by chance. The higher the value, the lower the chance of a simply random correlation. F and t are HUGE, leaving no room whatsoever for chance.
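For readers who want to reproduce the table's numbers, here is a bare-bones OLS sketch that computes the slope, R-squared, and slope t-statistic of a cumulative-share curve regressed on the cumulative % of votes counted. The curve below is synthetic (a near-perfect line plus a tiny ripple), just to show the mechanics; it is not the Palm Beach data:

```python
import math

def ols_stats(x, y):
    """Simple OLS of y on x: slope, R^2, and slope t-statistic."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))
    r2 = 1 - ss_res / syy
    se_slope = math.sqrt(ss_res / (n - 2) / sxx)
    t = slope / se_slope  # for one predictor, F = t**2
    return slope, r2, t

# Synthetic "suspiciously straight" curve: share rises almost perfectly
# linearly with the cumulative % of the vote counted
x = [i / 100 for i in range(10, 101)]
y = [0.25 + 0.10 * xi + 0.001 * ((i % 7) - 3) for i, xi in enumerate(x)]

slope, r2, t = ols_stats(x, y)
print(f"slope={slope:.3f}  R^2={r2:.4f}  t={t:.1f}")
```

A natural cumulative curve should go flat (slope near zero) once most votes are counted; a large positive slope with a huge t is exactly the anomaly described above.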

Now 2012.

Well, the vote flipper was pissed. All candidates were bled for Romney this time around. Landslide time. No mercy...

Go viral with your math friends and let us know.

This looks utterly undebunkable to the best of my judgment, but that might not be saying much.

Raw data used from here: http://www.pbcelections.org/Elections.aspx?type=past
 
I have asked an independent statistical analyst, to whom most of this spreadsheet data has been sent, to look at this thread. I encourage all of you to spread this to any qualified analyst as well. Liberty1789: the more people who "know" this, the better.
cjm says:

I pointed out in posts #404 and #418 that the projections may be incorrect due to the samples used and that the margin of error on the projection may be a lot higher when using the proper formula. Then again, those projections and MOEs may be no different than what they are now. I don't know.

Statistical methods depend on random sampling and the precincts used for the projections are decidedly non-random. I know that the theory of fraud in high turnout precincts means that they have to be excluded from the sample, but I haven't yet seen a convincing justification that says selecting all the low turnout precincts is a good method for projecting vote totals.


Very fair comment, cjm. No objection to that.

My analysis and the jaw-dropping math anomaly stand entirely if you exclude all small, volatile precincts from the data. It is robust to small-precinct exclusion. The impossibly systematic climb in Romney's score is very visible even in the highly populated precincts alone.
 
Good job:
My debunk:
- Again, Ron Paul's crazy conspiracy theorists.
- Bigger precincts are in large towns where people are more pro-Mitt... so it is only natural that he does better there, among richer and more educated people.
- CNN exit polls show that Mitt does well in rich and more densely populated areas...

THIS:

Debunk! Debunk! Debunk!

Debunking is our priority now!!!


Also: Do you remember that guy who testified that he made a computer program to steal votes? Maybe someone could contact him and get some info?
 
http://verifiedvoting.org/verifier please please please... Take note of the simple demographic of electronic vs. hand balloting.

Iowa and Nevada are caucuses, and the biggest difference is you can't manipulate hand-written paper ballots during the counting process (Washoe and Clark numbers were recounted separately, and their numbers are suspect). Iowa had counties go missing... their numbers are suspect.

I'm not trying to say fraud didn't take place in these locations... but the flipping algorithm wouldn't be the cause.

Remember the live televised caucus counting? That's what it looks like at a precinct count: three counters and the campaign observers. The larger the precinct in Iowa, the less likely the fraud, because there are observers for more campaigns... not just Paul, Romney, and Santorum: over a certain size you'd have volunteers for Perry, Huntsman, Bachmann, maybe even Pawlenty and Herman Cain.

Iowa is a dead end in proving this type of fraud because it's a pre-frauded system in caucus states... That straw poll doesn't decide one delegate in a caucus state. And it just happens we knew that this time around and circumvented the usual fraud of the caucus delegate system (it's not really fraud, because it's an open process... but most people voting in the caucus straw polls have no idea their candidate gets no delegates based on their vote).
Iowa may prove its own type of fraud... but it will never, ever do one iota of good proving manipulation of electronic voting through a flipping algorithm, because no one voted electronically. It's like surveying the South Pole for proof that Santa Claus doesn't live at the North Pole... you'll find a lot of great information, but none of it relevant. Let's search the North Pole: the places where electronic voting took place.

If you continue to work on Iowa, be clear about what you're looking at. Talking about Iowa and SC in the same breath muddles the numbers and makes them tainted. Please label when you're switching between two very, very different types of voting. I was a political science major in college; if you need help understanding how the elections are different in these areas, PM me and I will explain.
 
Copy and paste the Iowa table / Palm Beach charts.

Email, Twitter, FB, printouts; rush it to your kids' math teacher, the stat department of your university. Whatever.

It is possible that nothing more important has ever taken place in this forum. Come on guys.

For the love of America.
 