Re: A generic best practice document for New Mexico legislators

From: Edward Cherlin <cherlin_at_pacbell_dot_net>
Date: Sun Jan 02 2005 - 23:21:42 CST

On Wednesday 29 December 2004 07:59 am, you wrote:
> At 10:20 AM 12/29/2004, you wrote:
> >It seems like some error between the paper and electronic
> > ballots should be tolerable, since even with OVC it's
> > possible that the ballot reconciliation procedure will find
> > some paper ballots missing that are not accounted for by the
> > list of spoiled ballots. But how much, and when does this
> > trip a recount paid for by the state, as opposed to a
> > challenge paid for by the challenger? That's the advice I
> > seek, since I'm actually trying to make recommendations to
> > lawmakers.
> I would suggest the guideline be that the state pays for it if
> the margin of victory is less than some multiple of the
> estimated error rate for ballots. The multiple would be set
> to create a very low probability of error.
> For example, if the estimated error rate is 1% and the desired
> probability of a wrong victor declaration is set at 0.5%, then
> the maximum margin of victory for which the state would pay
> for the recount would be maybe 5% (statisticians, please help
> out here with an exact number).
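Taking up the request for an exact number: under a deliberately pessimistic model, the threshold can be sketched directly. Assume each ballot is mis-tabulated independently with the estimated error rate, and (worst case) every error favours the trailing candidate; the outcome can then only be wrong if the error count reaches the vote margin. The ballot count, function name, and the normal approximation are my illustrative assumptions, not anything from the thread.

```python
from statistics import NormalDist

def recount_margin_threshold(n_ballots: int, error_rate: float,
                             wrong_victor_prob: float) -> float:
    """Worst-case margin (as a fraction of ballots) below which the
    state would pay for a recount.

    Model (illustrative): the number of erroneous ballots is
    Binomial(n, p), approximated by a normal distribution, and every
    error is assumed to favour the trailing candidate.  We pick the
    margin M with P(errors >= M) = wrong_victor_prob.
    """
    p = error_rate
    z = NormalDist().inv_cdf(1.0 - wrong_victor_prob)
    mean = n_ballots * p
    sd = (n_ballots * p * (1.0 - p)) ** 0.5
    margin_votes = mean + z * sd
    return margin_votes / n_ballots

# A 1% error rate over 100,000 ballots with a 0.5% tolerance for a
# wrong victor puts the threshold only a little above the error rate:
print(round(recount_margin_threshold(100_000, 0.01, 0.005), 4))  # → 0.0108
```

Interestingly, under this crude model the threshold margin comes out near the error rate itself rather than a large multiple of it, because independent errors mostly cancel; the 5% figure would correspond to a model with strongly correlated errors.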

I disagree. I believe that states should be under an obligation
to perform and pay for the investigation of any statistically
significant anomaly, whether or not it could affect the current
outcome. If we catch anomalies while they are small, we can
prevent them from growing into something worse, with a
quantifiable probability of improved outcomes that anyone in
statistical quality control should be able to look up in a
handbook.
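The "statistically significant anomaly" test referred to here is standard quality-control material: compare the observed discrepancy count against the historical baseline rate with a one-sided test, as on a p-chart. The function name, baseline rate, and significance level below are my own illustrative choices.

```python
from statistics import NormalDist

def anomaly_is_significant(n_ballots: int, discrepancies: int,
                           baseline_rate: float,
                           alpha: float = 0.01) -> bool:
    """One-sided test: is the observed discrepancy count significantly
    above the historical baseline rate?  Normal approximation to the
    binomial, as in a quality-control p-chart (parameters illustrative).
    """
    mean = n_ballots * baseline_rate
    sd = (n_ballots * baseline_rate * (1.0 - baseline_rate)) ** 0.5
    z = (discrepancies - mean) / sd
    p_value = 1.0 - NormalDist().cdf(z)
    return p_value < alpha

# 300 discrepancies where the 0.5% baseline predicts about 250 is a
# clear anomaly; 255 is within ordinary variation:
print(anomaly_is_significant(50_000, 300, 0.005))  # → True
print(anomaly_is_significant(50_000, 255, 0.005))  # → False
```

The point of such a trigger is exactly the one made above: it fires on any statistically significant deviation, whether or not the deviation is large enough to change the outcome.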

> Testing a system in its environment could provide a good
> measurement of the error rate. Then the law need only set the
> desired probability of erroneous victors.
> Ken

At a guess, I would say the state should pay to investigate
anything over half the average error rate, regardless of its
impact on the result of the election, and not set an integer
multiple at all. My theory is that current practice is
remarkably sloppy and should not be normative. Neither should
the measured error rate at any given point in time: why stop
there, wherever "there" happens to be? Wherever we have
measurable error, we should have an audit that tells us where
most of the error comes from, and we should fix it.
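Finding "where most of the error comes from" is classic Pareto analysis from statistical quality control: rank error sources by frequency and look at the cumulative share. A minimal sketch, with made-up error categories purely for illustration:

```python
from collections import Counter

def pareto_rank(error_log):
    """Rank error sources by frequency and report each source's
    cumulative share of the total -- the Pareto analysis used in
    quality control to show where most of the error comes from.
    """
    counts = Counter(error_log).most_common()
    total = sum(c for _, c in counts)
    cumulative = 0
    rows = []
    for source, count in counts:
        cumulative += count
        rows.append((source, count, cumulative / total))
    return rows

# Hypothetical audit log of tabulation errors by cause:
log = (["misfeed"] * 40 + ["smudged mark"] * 30 +
       ["torn ballot"] * 20 + ["operator entry"] * 10)
for source, count, cum in pareto_rank(log):
    print(f"{source:15s} {count:3d}  {cum:5.0%}")
```

In this made-up example the top two causes account for 70% of all errors, which is where an audit-driven fix would start.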

When we have not just numbers but a reasonably accurate
multivariate model of the error process, we can choose a
better target.

Edward Cherlin
Generalist & activist--Linux, languages, literacy and more
"A knot! Oh, do let me help to undo it!"
--Alice in Wonderland
OVC discuss mailing lists
Send requests to subscribe or unsubscribe to
= The content of this message, with the exception of any external 
= quotations under fair use, is released to the Public Domain    
Received on Sat Jan 7 22:28:55 2006

This archive was generated by hypermail 2.1.8 : Sat Jan 07 2006 - 22:28:59 CST