News

A Systematic Approach to Analyzing Voting Terminal Event Logs

Laurent D. Michel, Alexander A. Shvartsman and Nikolaj Volgushev
2014 Electronic Voting Technology Workshop/Workshop on Trustworthy Elections (EVT/WOTE'14)
USENIX Journal of Election Technology and Systems (JETS), Volume 2, Number 2, April 2014, www.usenix.org/jets
August 18-19, 2014, San Diego, CA, USA, www.usenix.org

Abstract: This paper presents a systematic approach to automating the analysis of event logs recorded by the electronic voting tabulators in the course of an election. An attribute context-free grammar is used to specify the language of the event logs, and to distinguish compliant event logs (those that adhere to the defined proper conduct of an election) and non-compliant logs (those that deviate from the expected sequence of events). The attributes provide additional means for semantic analysis of the event logs by enforcing constraints on the timing of events and repetitions of events. The system is implemented with the help of commodity tools for lexical analysis and parsing of the logs. The system was rigorously tested against several thousand event logs collected in real elections in the State of Connecticut. The approach based on an attribute grammar proved to be superior to a previous approach that used state machine specifications. The new system is substantially easier to refine and maintain due to the very intuitive top-down specification. An unexpected benefit is the discovery of revealing and previously unknown deficiencies and defects in the event log recording systems of a widely used optical scan tabulator.

Download full paper: evt14.pdf
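As a rough, hypothetical illustration of the kind of analysis the paper describes (this is not the authors' system; the event names, log format, and rules below are invented), a small Python sketch can classify a simplified tabulator log as compliant or non-compliant by combining a sequence check with attribute-style timing and repetition constraints:

# Illustrative sketch only -- not the authors' implementation. Event names, the
# log format, and the rules below are hypothetical stand-ins for the real grammar.
import re
from datetime import datetime, timedelta

LINE_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+([A-Z_]+)$")

# Expected conduct of an election, written as a regular expression over event
# names (a stand-in for the attribute context-free grammar): power-on, a zero
# report, any number of ballots, then polls-closed and a totals report.
SEQUENCE_RE = re.compile(r"^POWER_ON ZERO_REPORT (BALLOT_CAST )*POLLS_CLOSED TOTALS_REPORT$")

def check_log(lines):
    """Return a list of violations; an empty list means the log is compliant."""
    violations, events = [], []
    for n, line in enumerate(lines, 1):
        m = LINE_RE.match(line.strip())
        if m:
            events.append((datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S"), m.group(2)))
        else:
            violations.append(f"line {n}: unrecognized entry")

    # Syntactic check: the sequence of event names must form a sentence of the "grammar".
    if not SEQUENCE_RE.match(" ".join(name for _, name in events)):
        violations.append("event sequence deviates from the expected conduct of an election")

    # Attribute-style semantic checks: timing and repetition constraints.
    timestamps = [ts for ts, _ in events]
    if timestamps != sorted(timestamps):
        violations.append("event timestamps are not non-decreasing")
    if events and events[-1][0] - events[0][0] > timedelta(hours=24):
        violations.append("log spans more than 24 hours (hypothetical timing rule)")
    return violations

# Example: a compliant toy log produces no violations.
sample = ["2014-11-04 05:45:00 POWER_ON",
          "2014-11-04 05:50:00 ZERO_REPORT",
          "2014-11-04 06:12:00 BALLOT_CAST",
          "2014-11-04 20:01:00 POLLS_CLOSED",
          "2014-11-04 20:02:00 TOTALS_REPORT"]
print(check_log(sample))   # -> []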



Post-Election Audit of Memory Cards for the November 6, 2012 Connecticut Elections

The Center for Voting Technology Research (VoTeR Center) at the School of Engineering of the University of Connecticut performed a post-election audit of the memory cards for the Accu-Vote Optical Scan (AV-OS) tabulators that were used in the November 6, 2012 elections. The cards were programmed by LHS Associates of Salem, New Hampshire, and shipped to Connecticut districts. Cards were submitted for two reasons per instructions from the SOTS Office: (a) the 10% of the districts that were randomly selected for the post-election hand-counted audit, as well as any districts that were interested in participating in the audit, were asked to send their cards for the post-election technological audit, and (b) any card was to be submitted if it appeared to be unusable. Given that the cards were submitted without consistent categorization of the reason, this report considers all unusable cards to fall into category (b).

The Center received 578 memory cards from 286 districts (as of March 15, 2013). This is the largest number of cards submitted since 2008. Among these cards, 375 (64.9%) fall into category (a). All of these 375 cards were correctly programmed. Out of the 375 cards, 174 contain completed elections (the rest were not used in the elections). The remaining 203 cards (35.1% of all cards) were found to be unusable by the AV-OS, thus falling into category (b). Among those, 192 cards contained apparently random (or ‘junk’) data, 7 cards were unusable by the AV-OS but did not contain random data (this requires further investigation), and 4 cards were formatted using the AV-OS utility but were not programmed. None of these cards are usable by the AV-OS for the purpose of the election. Given that such cards were not selected randomly, we estimate that the percentage of unusable cards in this audit is between 6.7% and 17.7%, which is consistent with prior audit results. All cards in category (a) contained valid ballot data, and the executable code on these cards was the expected code, with no extraneous data or code on the cards.

Overall, the audit found no cases where the behavior of the tabulators could have affected the integrity of the elections. The adherence to the election procedures by the districts had improved compared to prior years, especially in preparations for the election. However, the analysis indicates that the established procedures are not always followed, and in several cases problems with tabulators were apparently encountered at the districts and were not reported to the SOTS Office. It would be helpful if any extra-procedural actions and technical problems were documented and communicated to the SOTS Office in future elections. The audit was performed at the request of the Office of the Secretary of the State. Full report: evt14.pdf
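The summary does not spell out how the 6.7%-17.7% range was derived. Purely as a generic illustration of how an interval estimate for a proportion can be computed from audit counts (this is not the VoTeR Center's method, and the counts in the example are hypothetical placeholders), a standard Wilson score interval looks like this:

# Generic illustration only: Wilson score interval for a binomial proportion.
# NOT the estimation method used in the audit report; the sample counts below
# are hypothetical placeholders.
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """Approximate 95% interval for observing k 'unusable' cards out of n sampled."""
    p_hat = k / n
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return max(0.0, center - half), min(1.0, center + half)

# Hypothetical example: 12 unusable cards found in a random sample of 100 cards.
low, high = wilson_interval(12, 100)
print(f"estimated unusable rate: {low:.1%} to {high:.1%}")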



Pre-Election Audit of Memory Cards for the November 5, 2013 Connecticut Elections

The Center for Voting Technology Research (VoTeR Center) at the School of Engineering of the University of Connecticut performed a pre-election audit of the memory cards for the Accu-Vote Optical Scan (AV-OS) tabulators that were used in the November 5, 2013 elections. The cards were programmed by LHS Associates of Salem, New Hampshire, and shipped to Connecticut districts. Cards were submitted for two reasons per instructions from the SOTS Office: (a) one of the four cards per district was to be selected randomly and submitted directly for the purpose of the audit, and (b) any card was to be submitted if it appeared to be unusable. Given that cards in category (a) were to be randomly selected, while all cards in category (b) were supposed to be submitted, and that the cards were submitted without consistent categorization of the reason, this report considers all unusable cards to fall into category (b).

The VoTeR Center received 62 memory cards from 53 districts. This is a relatively small sample of cards. Among these 62 cards, 41 (66.1%) fall into category (a). All of these 41 cards were correct. The remaining 21 cards (33.9% of all cards) were found to be unusable by the AV-OS, thus falling into category (b). In particular, 19 cards contained apparently random (or ‘junk’) data, and 2 cards were unusable by the AV-OS but did not contain random data (this requires further investigation). All these cards were unreadable by the tabulators and could not have been used in an election. Given that such cards were not selected randomly, we estimate that for the pre-election audit the percentage of unusable cards is between 0.6% and 9.9%, a range consistent with the results of prior audits. Cards that fell into category (a) contained valid ballot data, and the executable code on these cards was the expected code, with no extraneous data or code on the cards.

Overall, the audit found no cases where the behavior of the tabulators could have affected the integrity of the elections. We note that the adherence to the election procedures by the districts has improved compared to prior years; however, the analysis indicates that the prescribed procedures are not always followed. It would be helpful if reasons for these extra-procedural actions were documented and communicated to the SOTS Office in future elections. The audit was performed at the request of the Office of the Secretary of the State. Full report: VC-audit-main1



Scaling Privacy Guarantees in Code-Verification Elections

Aggelos Kiayias and Anthi Orfanou
E-Voting and Identity, 4th International Conference (Vote-ID 2013), Springer 2013, Lecture Notes in Computer Science, pp. 1-24
July 17-19, 2013, Guildford, UK, www.voteid13.org

Abstract: Preventing the corruption of the voting platform is a major issue for any e-voting scheme. To address this, a number of recent protocols enable voters to validate the operation of their platform by utilizing a platform independent feedback: the voting system reaches out to the voter to convince her that the vote was cast as intended. This poses two major problems: first, the system should not learn the actual vote; second, the voter should be able to validate the system’s response without performing a mathematically complex protocol (we call this property “human verifiability”). Current solutions with convincing privacy guarantees suffer from trust scalability problems: either a small coalition of servers can entirely break privacy or the platform has a secret key which prevents the privacy from being breached. In this work we demonstrate how it is possible to provide better trust distribution without platform side secrets by increasing the number of feedback messages back to the voter. The main challenge of our approach is to maintain human verifiability: to solve this we provide new techniques that are based on either simple mathematical calculations or a novel visual cryptography technique that we call visual sharing of shape descriptions, which may be of independent interest.
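The paper itself is the authoritative reference for its protocols. As a loose, simplified illustration of the trust-distribution idea only (not the authors' construction; all names and parameters below are hypothetical), a short verification code can be split additively modulo 10 across several feedback servers so that no single server learns it, while the voter can recombine the shares with simple mental arithmetic:

# Simplified illustration only -- not the protocol from the paper. It shows the
# general idea of splitting a short per-vote verification code across several
# servers so that no single server (or any coalition missing at least one share)
# learns the code, while the voter can recombine the shares in her head.
import secrets

def split_code(code_digits, num_servers):
    """Split each digit of a verification code into additive shares mod 10."""
    shares = [[] for _ in range(num_servers)]
    for d in code_digits:
        parts = [secrets.randbelow(10) for _ in range(num_servers - 1)]
        parts.append((d - sum(parts)) % 10)      # last share completes the digit
        for server, part in zip(shares, parts):
            server.append(part)
    return shares                                 # one digit-list per server

def recombine(shares):
    """Voter-side check: add the digits from all feedback messages, digit-wise mod 10."""
    return [sum(column) % 10 for column in zip(*shares)]

# Hypothetical example: the code 4 7 1 2 printed on the voter's ballot receipt.
code = [4, 7, 1, 2]
shares = split_code(code, num_servers=3)
assert recombine(shares) == code                  # voter sees the expected code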


