Audit Engine Frequently Asked Questions
Citizens Oversight (2021-06-11) Ray Lutz
This Page: https://copswiki.org/Common/AuditEngineFAQs
More Info: Audit Engine
Frequently Asked Questions (FAQs) about Audit Engine
Simply stated, Audit Engine uses the ballot images of each ballot, now produced by the voting system when the ballot is first scanned, and provides a detailed independent tabulation of the results. Our tabulation can be compared, ballot by ballot, with the results of the voting system to find the exact ballots where we differ. Our system works best when we audit all contests, because that gives us the most information about the marking habits of each voter, so we can tell the difference between a true mark and a "hesitation mark" or a crease in the ballot.
Audit Engine runs in the cloud, where we can harness the power of large data centers. We are currently authorized to run up to 10,000 computers in parallel to complete the analysis of the images in a very short period of time, typically less than 15 minutes per run of each phase.
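The ballot-by-ballot comparison described above can be sketched in a few lines. This is purely illustrative: the function name and the dict-of-dicts data model are simplifications of our own, not Audit Engine's actual cast-vote-record format.

```python
# Hypothetical sketch: compare an independent tabulation against the voting
# system's per-ballot results. The data model here is illustrative only.

def find_disagreements(audit_cvr, system_cvr):
    """Return {ballot_id: [contests]} where the two interpretations differ.

    Each argument maps a ballot ID to a dict of {contest: selection}.
    """
    disagreements = {}
    for ballot_id, audit_votes in audit_cvr.items():
        system_votes = system_cvr.get(ballot_id, {})
        for contest, selection in audit_votes.items():
            # Record any contest where the two interpretations disagree.
            if system_votes.get(contest) != selection:
                disagreements.setdefault(ballot_id, []).append(contest)
    return disagreements
```

Each disagreement then identifies an exact ballot image that can be pulled up for manual inspection.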
General information about Audit Engine is on this page: https://copswiki.org/Common/AuditEngine
Q: Does Audit Engine have a good track record?
A: Audit Engine is relatively new, largely because the ballot images it uses in its review have only recently become available on a widespread basis. However, we recently completed a very thorough case study of the platform in three Florida counties: Collier, Volusia, and St. Lucie. This case study provides evidence of the very high accuracy of Audit Engine: it agrees with the voting system on more than 99.7% of ballots, and when it disagrees with the voting system, Audit Engine correctly interprets voter intent 93% of the time. In other words, we have shown that Audit Engine is more accurate at automatically interpreting voter intent than the voting systems in use. In this case study, we also identified a number of discrepancies in how results were uploaded to the Election Management System (EMS), indicating the use of two internal tabulations in ES&S equipment that do not match.
In Volusia County, we identified 4,904 ballot images that were duplicated, and one voting machine whose results were never correctly uploaded from the thumbdrive, resulting in 537 fewer ballot images than should have been provided. Despite these operational errors, the tabulation from the county appeared correct, further substantiating that the ES&S EMS incorporates dual tabulations that do not fully correspond.
We have also recently audited Bartow County, GA, which uses Dominion Voting Systems equipment and software. In addition, during the development of the platform, we performed audits of elections in Dane County, WI; Wakulla County, FL; Leon County, FL; and San Francisco, CA.
You can read the case study report and associated explanation videos on this page: https://copswiki.org/Common/M1970
With that said, we must admit that Audit Engine is relatively new technology, and the election field is highly non-standardized, with proprietary voting systems and a vast number of different ballot layouts and conventions. Therefore, we do occasionally encounter a new situation that requires additional software development or configuration changes.
Q: How can we trust the result of the audit by Audit Engine?
A: The premise of Audit Engine is complete transparency. We turn a black box into a transparent box.
The Audit Engine auditing system is simple in concept. We read the vote off each and every ballot image and create an independent tabulation. Our system provides complete transparency, so you can take any ballot and follow it through the system. The system finds "disagreements", where the audit system interpreted the marks on the ballot differently from the voting system used by the jurisdiction. We can then manually inspect those ballot images to confirm how those ballots should be interpreted, and, if we want, dig into the paper ballots and find those exact ballots. When we disagree with the voting system, Audit Engine correctly interprets the marks about 93% of the time, according to our recent case study in Florida, whereas the voting system correctly interprets the same marks only 7% of the time. Typically, the disagreements amount to fewer than 0.25% of ballots, a quarter of one percent, depending on how heavily the voting system results were manually adjudicated. Audit Engine tends to find incorrectly interpreted undervotes, where the voter made a mark intended for a candidate but not sufficiently inside the bubble. Audit Engine uses an "adaptive threshold" method, which evaluates marks based on the other marks on the same ballot and the relative darkness or lightness of the ballot itself.
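The adaptive-threshold idea can be illustrated with a small sketch. The baseline statistic, the margin values, and the three-way classification below are invented for illustration; this is not Audit Engine's actual algorithm.

```python
# Illustrative sketch of an "adaptive threshold": judge each target's
# darkness relative to the other targets on the same ballot, rather than
# against a fixed cutoff, so a light scan or dark paper shifts the baseline.
# All numbers here are invented for demonstration.

from statistics import median

def classify_marks(darkness_by_target, margin=0.25):
    """Classify each target as 'marked', 'unmarked', or 'review'.

    darkness_by_target maps a target ID to a darkness score in [0, 1],
    where 1.0 is a fully filled bubble. The ballot's own median darkness
    serves as the baseline for this ballot.
    """
    baseline = median(darkness_by_target.values())
    results = {}
    for target, darkness in darkness_by_target.items():
        delta = darkness - baseline
        if delta > margin:
            results[target] = "marked"
        elif delta > margin / 2:
            # Ambiguous: could be a hesitation mark or a crease;
            # flag it for human review instead of guessing.
            results[target] = "review"
        else:
            results[target] = "unmarked"
    return results
```

Because the threshold floats with each ballot's own statistics, a faint but deliberate mark on a lightly scanned ballot is less likely to be dismissed as an undervote.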
Q: How do we know the ballot images have not been altered?
- The proper ballot images from the election department, as exported by the Election Management System (EMS), must be uploaded to the secure cloud data center used by Audit Engine. After they are uploaded, the hash value of each file is easily read in the file listing without any further processing. These hash values can be compared with the values produced by similar calculations performed by election officials to confirm that the image files are the same. The use of secure hashes is commonplace and a well-respected methodology.
- The second level of this question has to do with whether a hacker has modified the images before they were captured by the election department, perhaps using a virus inside the voting machine itself. This may indeed be a hazard in the future when ballot image audits become commonplace. But in recent elections, no one expected a ballot image audit to be performed, and so if you assume a hacker or compromised insider wanted to modify the election, they would likely just modify the numbers in the election result (in the EMS database) rather than go to all the trouble of modifying the images, which is indeed a lot of work and may be obvious when the images are inspected. So for now, we can largely ignore the possibility that anyone would go to this expense.
- If the ballot image audit finds no inconsistencies, one option is to perform an independent rescan of the ballots using high-speed scanners that are not used in the election process, process those images using Audit Engine, and then compare the result of the tabulation on those batches. This process would detect any image manipulation that would alter the result of any contest.
- Furthermore, we at Citizens Oversight are working to include cybersecurity measures that will allow us to detect any modification of ballot images once they are produced. At this time, these measures have not been adopted in the standards nor incorporated by voting machines. We view such hacks at the time the image is created as very unlikely, particularly if the image is scanned using commercial off-the-shelf (COTS) scanners that are not purpose-designed as a voting system.
- Even knowing that modification of the images is very unlikely, we advise that some paper ballots also be inspected and compared with the images to provide further confidence. Today, most districts perform a limited audit of the paper ballots. That audit also verifies the ballot images, because the images are used by the tabulators to determine the vote on each ballot. We also suggest that if Audit Engine finds batches with disagreements in terms of voter intent, the paper ballots can be checked by locating each paper ballot and inspecting and comparing it with the ballot image. Doing this a few times provides a good sense that the ballots are indeed well organized and that there is a correspondence with the ballot images.
- If a thorough hand count is performed, checking the result of that hand count on a batch-by-batch basis can help to rule out the possibilities that: 1) the hand count was incorrectly performed, 2) the hand count results were modified, and 3) the ballot images were modified (to the extent, of course, that the hand count reviewed those ballots). Thus, if a hand count covered only one or two contests, those contests can be compared with the results of the ballot image audit (which covers all contests). In theory, image manipulation could occur in just those contests not hand counted, but modification of down-ballot contests by modifying the images is even less likely.
- What we tend to find quite often are inconsistencies in the raw counts of ballot images, such as when 1) some ballot images were copied twice into the set, 2) some ballots were rescanned, producing duplicate ballot images, or 3) some ballot images are missing because they were never uploaded to the EMS. (We found these exact problems in the Volusia 2020 General election, covered in detail by our case study results; read more here: https://copswiki.org/Common/M1970.)
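The hash comparison described in the first point above can be performed with standard tools. A minimal sketch in Python, assuming the images are TIFF files in a directory (the paths and extension are illustrative):

```python
# Sketch of the hash verification step: compute a SHA-256 digest for each
# uploaded image file so it can be compared, name by name, against digests
# computed independently by election officials on their own copies.

import hashlib
from pathlib import Path

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hash_manifest(image_dir):
    """Map each image filename to its digest for side-by-side comparison."""
    return {p.name: sha256_of_file(p)
            for p in sorted(Path(image_dir).glob("*.tif"))}
```

If the two manifests match, the files at both ends are byte-for-byte identical; any altered or substituted image produces a different digest.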
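The raw-count inconsistencies in the last point can be detected mechanically. A minimal sketch, assuming the images are available as in-memory bytes and the EMS reports a ballots-cast figure (both assumptions are ours):

```python
# Illustrative checks for the raw-count problems described above: find
# image files that are byte-for-byte duplicates, and compare the image
# count against the ballots-cast figure reported by the EMS.

import hashlib
from collections import defaultdict

def find_duplicate_images(image_bytes_by_name):
    """Group filenames whose contents hash identically."""
    by_digest = defaultdict(list)
    for name, data in image_bytes_by_name.items():
        by_digest[hashlib.sha256(data).hexdigest()].append(name)
    return [sorted(names) for names in by_digest.values() if len(names) > 1]

def count_gap(num_images, ballots_cast, sheets_per_ballot=1):
    """Positive result => images are missing; negative => surplus images."""
    return ballots_cast * sheets_per_ballot - num_images
```

Duplicate groups point at images copied or rescanned into the set twice, while a nonzero count gap (such as the 537 missing Volusia images) points at batches that were never uploaded.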
Q: Are there aspects of the election that Audit Engine does not include?
A: Yes. Audit Engine provides a consistency check between the ballot images, which are made very early in the election process, and the final tabulation, which comes at the very end. Thus, it can detect most issues, such as errors or malicious changes, between these two checkpoints. It does not cover many aspects of the election that do deserve scrutiny, such as voter registration, voter eligibility, paper ballot alteration, ballot harvesting, signature validation, campaign finance, inappropriate advertising, etc. The consistency check from ballot images to final result eliminates some of the most obvious security hazards. As we continue to develop Audit Engine, we will add additional components where the horsepower of the cloud is beneficial.
Q: Does Audit Engine ever fail to process ballot images?
A: Occasionally. We find that some ballot images are distorted or poorly created by the voting system.
Q: Do you need ballot masters for each style prior to running Audit Engine for a given election?
A: No, we don't need style masters. Audit Engine derives style masters from the images themselves, so it is not necessary to have the ballot masters for each style. The helper app "TargetMapper" is then used to map the targets on the ballot to each style, contest, and ballot option.
Q: How much time do you need in advance of the election to set up Audit Engine?
A: Audit Engine is typically deployed after the election, once the results and ballot images have been finalized or at least semi-final results have been published. However, it is helpful to have some experience, from prior audits, with a given area and the specific methods used in a given jurisdiction.
Q: Does Audit Engine also audit "Ballot Marking Device" (BMD) ballot summary sheets?
A: Yes. BMD ballots are those printed by systems that incorporate touch screens to allow the voter to make selections, followed by printing a voted-selection summary card. This card, or sheet, includes linear or 2-D barcodes that provide a machine-readable representation of the voter's selections. These barcodes are typically difficult, if not impossible, for voters to verify; instead, voters are left to verify the selections in printed text. Thus, the part verified by the voter is not what is read by the voting system. Audit Engine stands alone in the field of ballot image auditing offerings because we perform OCR on the printed selections to determine the vote on the ballot rather than relying on the barcodes, while also verifying the consistency of the barcode with the printed selections.
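The BMD consistency check can be sketched as a simple comparison, assuming the barcode has already been decoded and the printed text already OCR'd (both steps are stubbed as plain dicts here; real barcode decoding and OCR are out of scope):

```python
# Hypothetical sketch of the BMD consistency check: compare the selections
# encoded in the barcode with the selections read by OCR from the printed
# summary text, and report any contest where the two disagree.

def check_bmd_consistency(barcode_selections, ocr_selections):
    """Return the contests where the barcode and the printed text disagree.

    Each argument maps a contest name to the selection it represents:
    barcode_selections from decoding the barcode, ocr_selections from
    OCR of the human-readable summary text.
    """
    contests = set(barcode_selections) | set(ocr_selections)
    return sorted(
        c for c in contests
        if barcode_selections.get(c) != ocr_selections.get(c)
    )
```

Because the OCR'd text is the part the voter could actually verify, the tabulation follows the text, and any barcode/text mismatch is flagged rather than silently trusted.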
Q: What voting systems do you support?
A: Currently, we support the two leading voting system vendors, Election Systems & Software (ES&S) and Dominion Voting Systems. In the near future, we plan to also support Hart InterCivic. We prefer the latest generations of these systems, which provide a ballot-by-ballot cast-vote-record (CVR) report of the voting system results, so we can compare with the voting system down to the individual ballot. The older Dominion and ES&S systems do not provide that level of reporting even when they provide ballot images, and although we can process the images to produce an overall tabulation, we cannot compare on a ballot-by-ballot basis.
Q: How many people are involved in doing an audit?
A: We need at least one auditor in charge of each audit, plus a number of workers who can help with the mapping and adjudication processes, to the extent those are required, and observers. The amount of work required depends heavily on the sheer number of ballots and the smallest margin of victory. If the margin of victory is fairly large and we find a relatively small number of disagreements, we may not need to review them all to conclude that the result is consistent. On the other hand, with a very close margin, every disagreement will need to be reviewed. A large number of write-ins can also increase the amount of work involved. At this stage, we are still evaluating how many people are needed in general.
With that said, we encourage the process of each stage of the audit to be witnessed by a set of interested parties in an observers panel, so they can have all their questions answered and the process can also be livestreamed to the public.
Q: Is Audit Engine open source software?
A: Although Audit Engine uses a lot of open source software and we endorse standardization, at this time Audit Engine is not open source software. We are reviewing our options, but at present we believe the most important aspect is providing "open data" transparency, so that anyone can check the data at each stage of the process. Open source software works best when the users of the software modules are programmers who can actively work to improve them. The users of Audit Engine are not programmers, so it is a poor fit. Also, since the software runs in the cloud, it is very hard to prove that it has not been changed from the open source code that may have been inspected. Our philosophy is that it does not matter whether the software is open source if the data it produces is open and can be checked at intermediate points along the way. Audit Engine has now been run on many millions of ballots, and any edge cases are quickly exposed in the operation of the software itself.
There is another aspect of open source that is perceived as a benefit in most situations: code sharing. In the open source world, once something has been developed, it is commonly reused for many other purposes. Audit Engine really is single-purpose software. If it were reused, it would be reused for the same purpose by another entity. Instead, we believe it is more beneficial for that other entity to develop its own auditing software, which will have different characteristics. Using both auditing systems on the same elections provides the opportunity to compare the results of two (or more) independently designed systems. This is a beneficial competitive process, whereas sharing the underlying code does not provide this cross-checking.
Q: How is Audit Engine funded?
A: We are pursuing a grass-roots funding model, where we can fundraise for each audit from the general public rather than relying on contracts with the same government entities we are auditing. We believe such contracts, unless carefully constructed, will result in auditors preferentially providing high scores to their clients. We believe that the cost of operating Audit Engine is low enough that the public can fund each audit, given the interest in having an independent review.