How Local Election Officials View Election Reform: Results of Three National Surveys

March 4, 2011 (R41667)

Summary

Local election officials (LEOs) are critical to the administration of federal elections and the implementation of the Help America Vote Act of 2002 (HAVA, P.L. 107-252). Three surveys of LEOs were performed by academic institutions in collaboration with the Congressional Research Service. Although care is needed in interpreting the results, they may have implications for several policy issues: how election officials are chosen and trained; how best to ensure that voting systems and election procedures are sufficiently effective, secure, and voter-friendly; and whether adjustments should be made to HAVA requirements. Major results include the following:

The demographic characteristics of LEOs differ from those of other government officials. Almost three-quarters are women, and 5% are minorities. Most do not have a college degree, and most were elected, although those characteristics appear to be changing. Some results suggest areas of potential improvement, such as training and participation in professional associations.

LEOs believed that the federal government has too great an influence on the acquisition of voting systems, and that local elected officials have too little. Their concerns increased from 2004 to 2006 about the influence of the media, political parties, advocacy groups, and vendors. Concern about the influence of these groups increased again, slightly, from 2006 to 2008.

LEOs were highly satisfied with whatever voting system they used but were less supportive of other kinds. Their satisfaction declined from 2004 to 2006 for all systems except lever machines, but rebounded in 2008. They also rated their primary voting systems as very accurate, secure, reliable, and voter- and pollworker-friendly, no matter what system they used. However, the most common incident reported by respondents in both the 2006 and 2008 elections was malfunction of a direct-recording electronic (DRE) or optical scan (OS) voting system. The incidence of long lines at polling places was highest in jurisdictions using DREs. Most DRE users did not believe that voter-verified paper audit trails (VVPAT) should be required, but nonusers believed they should be. However, the percentage of DRE users who supported VVPAT increased from 2004 to 2008, and more VVPAT users were satisfied with them in 2008 than in 2006.

On average, LEOs mildly supported requiring photo identification for all voters and believed it would make elections more secure, even though they strongly believed that it would negatively affect turnout and did not believe that voter fraud was a problem in their jurisdictions.

In all three surveys, LEOs believed that HAVA is making moderate improvements in the electoral process. The level of support declined from 2004 to 2006 but rose to its highest point in 2008. LEOs reported that HAVA has increased the accessibility of voting but has made elections more complicated and more costly, though fewer believed so in 2008 than in 2006. LEOs spent much more time preparing for the election in 2008 than in 2004. They also believed that the increased complexity of elections is hindering recruitment of pollworkers. Most found the activities of the Election Assistance Commission (EAC), which HAVA created, moderately important, and they believed that its helpfulness improved from 2006 to 2008. Their assessment of the statewide voter-registration database was neutral in 2006 but positive in 2008, and they believed that it was more accurate and fair than their previous registration system.


How Local Election Officials View Election Reform: Results of Three National Surveys

U.S. elections are highly decentralized, with much of the responsibility for election administration residing with local election officials (LEOs). There are thousands of such officials, many of whom are responsible for all aspects of election administration in their local jurisdictions—including voter registration, recruiting pollworkers, running each election, and choosing and purchasing new voting systems.

These officials are therefore critical not only to the successful administration of federal elections, but also to the implementation of the Help America Vote Act of 2002 (HAVA, P.L. 107-252), the Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA, P.L. 99-410, as amended), and other federal statutes. Nevertheless, there has been little objective information on the perceptions and attitudes of LEOs about election reform. Such information may be useful to Congress in deliberations about federal election reform activities.

This report discusses the results of three scientific opinion surveys that were designed to help fill that gap in knowledge about principal local election officials.1 The surveys were performed pursuant to projects sponsored by the Congressional Research Service (CRS). The projects were developed in collaboration with, and the surveys performed by, faculty and students at the George Bush School of Government and Public Service at Texas A&M University and the Center for Applied Social Research at the University of Oklahoma. The university teams developed and administered the surveys, in consultation with CRS, to samples of LEOs from all 50 states, with a response rate of approximately 40% for each survey. The responses were analyzed by CRS for purposes of this report. See the Appendix for information on the methods used in the surveys and analyses.

The surveys were administered following the 2004, 2006, and 2008 federal elections. While they were not identical, many of the questions were the same, and comparisons of the results are discussed where appropriate.2 The findings may be useful to Congress as it considers funding for HAVA, oversight of its implementation, and possible revisions to current law.

The report begins with a description of some characteristics of local election officials and their jurisdictions. That is followed by a discussion of perceptions and attitudes of LEOs about the different kinds of voting systems used in different jurisdictions—lever machines, punchcard ballots, hand-counted paper ballots,3 central-count optical scan (CCOS), precinct-count optical scan (PCOS), and direct-recording electronic (DRE) systems such as "touchscreens." The report then describes how HAVA has affected local jurisdictions and the opinions LEOs expressed about the law. The section after that discusses other election administration topics covered in the 2006 and 2008 surveys—preparations for election day, election-day incidents, characteristics of pollworkers, and attitudes about nonpartisan election administration. The final sections discuss caveats to consider in interpreting the results, and potential policy implications of the findings.

Characteristics of Local Election Officials and Their Jurisdictions

There are about 9,000 local election jurisdictions in the United States.4 In most states, they are counties or major cities, but in some New England and Upper Midwest states, they are small townships—for example, more than 1,800 townships in Wisconsin. The number of registered voters and polling places in a jurisdiction reported by LEOs also varies greatly (Figure 1 and Figure 2).

Figure 1. Reported Number of Registered Voters per Jurisdiction

Source: Analysis by CRS of data from surveys performed collaboratively by CRS with Texas A&M University and the University of Oklahoma (called "capstone surveys" hereinafter).

Notes: The X-axis uses logarithmic scaling. Each number depicts the upper bound for the bin displayed in that column (except ">1,250"). For example, the column labeled "25" displays the number of jurisdictions reporting 10,001 to 25,000 registered voters. Throughout this report, bar or column graphs comparing results of the three surveys show data for 2004 in blue (color copies) or medium gray (black and white), for 2006 in burgundy or dark gray, and for 2008 in yellow or light gray.
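As a purely illustrative aside (not part of the report's methodology), the upper-bound binning described in these notes can be reproduced with standard tools; the bin edges and jurisdiction sizes below are hypothetical:

    import numpy as np

    # Hypothetical bin upper bounds, in thousands of registered voters,
    # mirroring the figure's labeling convention: each column is labeled
    # with its bin's upper bound (the last bin is open-ended, ">1,250").
    upper_bounds_k = [1, 2.5, 5, 10, 25, 50, 100, 250, 500, 1000, 1250]

    # Hypothetical survey responses: registered voters per jurisdiction.
    voters = np.array([85, 4_500, 12_000, 95_000, 230_000, 1_400_000])

    # The "25" column then approximates jurisdictions reporting
    # 10,001-25,000 voters (np.histogram bins are half-open: [low, high)).
    edges = [0] + [b * 1000 for b in upper_bounds_k] + [np.inf]
    counts, _ = np.histogram(voters, bins=edges)

    for label, n in zip([str(b) for b in upper_bounds_k] + [">1,250"], counts):
        print(f"{label:>7}: {n}")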

The reported number of registered voters ranged from fewer than 100 to more than 1 million, with a median of about 12,000 in 2006 and 13,000 in 2008.5 That 8% increase was consistent with the reported 7%-10% increase in the number of registered voters nationwide during that period.6

Figure 2. Reported Number of Polling Places per Jurisdiction, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Although LEOs were not asked to report the number of voters in their jurisdictions in the 2004 survey, in 2008 they were asked how the number changed from 2004 to 2008. About two-thirds reported that the number of registered voters had increased between 2004 and 2008, and about one-fifth reported a decrease (Figure 3).

The number of polling places in a jurisdiction averaged about a dozen, ranging from 0 to 1,000 or more,7 with about 15% of jurisdictions in each election having only one polling place and a similar percentage having more than 50. The number of election personnel working in a jurisdiction, in addition to the local election official, also varied greatly, from none to more than 10,000, including pollworkers. The 2008 survey also asked about the number of employees other than pollworkers. The median was 5, with a mean of 17 and a maximum of 800.

The number of registered voters, polling places, and pollworkers was about 10 times greater in county jurisdictions than in townships. LEOs from counties reported an average of about 56,000 registered voters, while those from townships averaged 6,000. There were 40 polling places per county on average, and 3 per township. Counties had about 290 pollworkers and townships 30.

Figure 3. Change in the Number of Registered Voters from 2004 to 2008

Source: Analysis by CRS of data from capstone surveys.

Given such diversity and other differences among states—such as wealth, population, and the role of state election officials—responsibilities and characteristics of LEOs are likely to vary greatly. Nevertheless, some patterns emerged from the surveys.

The demographic characteristics of LEOs differ from those of other government officials.

According to the survey results, the typical LEO is a white woman between 50 and 60 years old who is a high school graduate. She was elected to her current office, works full-time in election administration, has been in the profession for about 10 years, and earns under $60,000 per year. She belongs to a state-level professional organization but not a national one, and she believes that her training as an election official has been good to excellent.

As with any such description, the one above does not capture the diversity within the community surveyed: About one-quarter of LEOs are men, about 5% belong to minority groups, about 40% are college graduates, and about 10% have graduate degrees (see Table 1). LEOs range from under 25 to more than 80 years of age, and have served from 1 to 50 years. More than half were elected rather than appointed to their posts.8 Reported salaries range from under $10,000 to more than $120,000. Most belong to at least one professional organization.

The demographic profile of LEOs is unusual, especially for a professional group, and differs from that of other local government employees. For example, according to U.S. Census figures, while women comprise a higher proportion of the local government workforce than men overall,9 men comprise a higher proportion of local government general and administrative managers.10 About 20% of those managers are members of minorities.11 The patterns do not appear to result from the fact that most LEOs are elected, as the demographic characteristics of legislators appear to be largely similar to those of local government managers.12

Table 1. Comparison of Selected Demographic Characteristics of LEOs

Percentages of LEOs who…                                        2004    2006    2008
were elected.                                                     65      58      53
worked full-time.                                                 66      76      72
had served for more than 10 years in current position.            47      44      40
spent more than 20 hours per week on election duties.             41      47      49
did not belong to an association of election professionals.       30      26      31
were not college graduates.                                       60      59      56
possessed a graduate degree.                                     8.1     7.9     9.7
had a salary under $40,000.                                       47      39      37
were women.                                                       75      77      76
were older than 50.                                               63      62      62
were not white.                                                  5.6     5.4     6.5
professed a conservative political ideology.                      51      47      44

Source: Analysis by CRS of data from capstone surveys.

Note: Bold type denotes statistically significant differences among the surveys.
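The report does not identify the statistical test behind these significance markings. Purely as an illustrative sketch, a two-proportion z-test of the kind conventionally used to compare survey percentages could be applied to, say, the decline in elected LEOs from 65% in 2004 to 53% in 2008; the respondent counts below are hypothetical stand-ins, not the surveys' actual sample sizes:

    from math import sqrt
    from scipy.stats import norm

    def two_proportion_z(p1, n1, p2, n2):
        """Two-sided two-proportion z-test using the pooled standard error."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return z, 2 * norm.sf(abs(z))  # two-sided p-value

    # Hypothetical respondent counts for the 2004 and 2008 surveys.
    z, p = two_proportion_z(0.65, 1400, 0.53, 1200)
    print(f"z = {z:.2f}, p = {p:.2g}")

With samples anywhere near that size, a 12-point difference would be far larger than sampling error alone could plausibly explain.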

The average tenure in the current position declined by about one year from 2004 to 2008, with the proportion of LEOs who had served for two years or less in their current positions rising to 18% in 2008 from 11% in 2004 (see Figure 4). Thus, there appeared to be a small increase in job turnover over the three elections.13 However, there was no significant change in average age (Figure 5).

Figure 4. Length of Tenure of LEOs in Their Current Positions

Source: Analysis by CRS of data from capstone surveys.

Figure 5. Age Distribution of LEOs

Source: Analysis by CRS of data from capstone surveys.

Other trends across the surveys included a decrease in the proportion of LEOs who were elected, an increase both in the percentage who worked full-time and in the amount of time they spent working on elections,14 an increase in salary, and a decrease in the proportion who expressed a slightly to strongly conservative ideology.15

The surveys were not designed to identify the causes of such changes, but at least some appear consistent with the impacts of federal and state election reform on local jurisdictions. That reform led to increased funding for election administration, changes in the voting systems used by many jurisdictions, and an increased workload for many election officials. For example, the proportion of LEOs who reported working full-time on election administration increased from 66% in 2004 to 72% in 2008, while the proportion who reported spending more than 20 hours per week on election duties increased from 41% to 49%.

Figure 6. Level of Education Reported by LEOs

Source: Analysis by CRS of data from capstone surveys.

The increasing complexity of elections and the increased federal role after the passage of HAVA have focused more attention on the role of professionalism in election administration. Given that change, it might be expected that election officials who began serving more recently would have more formal education than those who have served for longer periods. Such a pattern could yield a statistical association between the highest education level attained and the number of years in service as an election official. In fact, there was a small but significant relationship, with LEOs who did not have a college degree averaging 11-12 years of service and those with graduate degrees averaging 8-9 years. However, there was no significant change in the overall distribution of maximum education level over the three surveys (Figure 6).

Figure 7. Salaries Reported by Local Election Officials

Source: Analysis by CRS of data from capstone surveys.

Figure 8. Distribution of Memberships among LEOs Who Belong to One or More Professional Associations

Source: Analysis by CRS of data from capstone surveys.

Note: Abbreviated names of associations are as follows: NACRC = National Association of County Recorders, Election Officials and Clerks; IACREOT = International Association of Clerks, Recorders, Election Officials and Treasurers. The choice of regional association was new for the 2006 survey. The data used in this graph include only those LEOs who indicated that they belonged to at least one professional association. See text.

Reported salaries of LEOs rose from an estimated average of $45,000 in 2004 to $51,000 in 2008, an increase of about 13% (Figure 7). LEOs in jurisdictions with larger numbers of voters tended to have somewhat higher salaries and education levels, and were more likely to be male. In 2008, the median salary for LEOs in jurisdictions below the median size in number of voters was $40,000-$50,000; it was $50,000-$60,000 for LEOs in larger jurisdictions. More than 20% of LEOs in larger jurisdictions had attended graduate school, compared with 10% of those from smaller jurisdictions. LEOs were male in about 20% of jurisdictions below the median size, and in 30% of those above.

Fewer than half of LEOs belonged to a national or international association, but most belonged to a state association.

The survey also examined other factors related to election administration as a profession. More than two-thirds (70%-74%) of LEOs reported belonging to at least one professional association.16 Of those, more than three-fourths belonged to a state association, 30% to a regional one, and about 20% to the three major national or international associations (see Figure 8). Altogether, about 40% of LEOs who belonged to at least one association were members of a national or international one, with more than half belonging only to a state or regional association.

Table 2. Selected Election Administration Responsibilities Reported by LEOs, 2006

Responsibility                                                                          % Reporting
Managing poll workers and other election administrators                                      90
Serving as a liaison between my jurisdiction and state and federal election officials        90
Overseeing an election recount when necessary                                                88
Authorizing and adhering to a budget                                                         83
Hiring poll workers and other election administrators                                        83
Reporting inappropriate conduct by voters or politicians at polling place                    82
Maintaining contact with vendors                                                             80
Maintaining the voter registration database                                                  80
Purchasing election equipment                                                                78
Maintaining an electronic voting system                                                      76
Purchasing an electronic voting system                                                       63
Additional duties not listed                                                                 57

Source: Analysis by CRS of data from capstone surveys.

Note: LEOs were asked to check all applicable items in the list of responsibilities presented in the table. The data presented may be overestimates. They are percentages of the 1,406 LEOs who responded to the question; 7% of LEOs who responded to the survey did not answer this question. Using the total number of 1,506 survey respondents would reduce the percentages by 4-6 points but would probably constitute underestimates.

In 2006, the percentage of LEOs reporting that they had a written job description was 43% for those who had been elected and 70% for those who had been appointed. Most LEOs reported a broad range of election-administration responsibilities beyond solely running elections. Most are also responsible for budgeting, personnel, and purchasing, for example (Table 2).17

Most LEOs received some initial training specifically designed to prepare them for their duties, but for most that training was less than 20 hours, and only one-fifth of LEOs were required to pass an examination (Table 3). Most have also received additional training beyond that initially provided. More than two-thirds of LEOs rated their training as good to excellent and said it resulted in moderate to substantial improvement in their effectiveness and ability to solve problems. More than four-fifths believed that training and experience are equally important in ensuring a successful election.

Table 3. Training Reported by LEOs, 2006 and 2008

 

                                           Kind of Training
                                        Initial           Additional
Percentage of LEOs who…               2006    2008      2006    2008
received any training.                  78      65        82      92
received > 20 hours of training.        43      46        52      63
received certification from training.   45     n/a        36      43
received mandatory training.            54      54        35      50
were required to pass an exam.          19      16       n/a     n/a

Source: Analysis by CRS of data from capstone surveys.

Note: n/a = not applicable. The question was not asked.

LEOs were less satisfied with their training in 2006 and 2008 than in 2004.

This result, shown in Figure 9, might reflect the impact of HAVA requirements, most of which went into effect in 2006. For example, election officials might have felt less well prepared by their training to implement HAVA in 2006 than in 2004, but the survey did not address that possibility. Other possible factors include increasing public attention to problems in election administration, and recent controversies about the reliability and security of voting systems. Two-fifths of respondents to the 2006 survey, and a third of 2008 respondents, commented on what kinds of additional training would be useful. The most common suggestions were for more training in technical and legal aspects of elections, and more "hands-on" training.

Figure 9. Assessments by LEOs of the Quality of the Training They Have Received

Source: Analysis by CRS of data from capstone surveys.

Given the increasing role of technology in elections, the surveys asked LEOs questions about their attitudes toward technology (Figure 10). Respondents believed that technology can be useful for government services, but were cautious about implementation.

Figure 10. Agreement/Disagreement of LEOs on Statements About Technology

Source: Analysis by CRS of data from capstone surveys.

Note: Error bars on graphs in this report denote upper and lower 95% confidence limits for the average response (arithmetic mean).
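For readers unfamiliar with the convention, the following minimal sketch shows how 95% confidence limits for a mean response are typically computed; the report does not document its exact procedure, and the ratings below are invented:

    import numpy as np
    from scipy import stats

    def mean_with_ci(responses, confidence=0.95):
        """Arithmetic mean with a t-based confidence interval."""
        arr = np.asarray(responses, dtype=float)
        mean = arr.mean()
        sem = stats.sem(arr)  # standard error of the mean
        half_width = sem * stats.t.ppf((1 + confidence) / 2, df=arr.size - 1)
        return mean, mean - half_width, mean + half_width

    # Invented agreement ratings on a 1-7 scale.
    ratings = [5, 6, 4, 7, 5, 6, 3, 5, 6, 4]
    m, lo, hi = mean_with_ci(ratings)
    print(f"mean = {m:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")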

Voting Systems

Current Voting System

The use of precinct-count optical scan (PCOS) and direct-recording electronic (DRE) voting systems increased substantially from 2004 to 2008.

Figure 11. Percentages of Jurisdictions Using Different Kinds of Primary Voting Systems as Reported by LEOs

Source: Analysis by CRS of data from capstone surveys.

Note: Types of voting systems listed are as follows: Lever = mechanical lever machines; Punch = punchcard ballots; Paper = hand-counted paper ballots; CCOS = central-count optical scan systems; PCOS = precinct-count optical scan systems; DRE = direct-recording electronic systems; and Other = cases where the respondent checked "Other" and the primary voting system could not be determined from the written response—for example, the respondent wrote "DRE and OS." That might indicate, for example, that DREs were used only for accessibility, or that OS (optical scan) was used only for absentee ballots.

In 2004, more than half of jurisdictions used lever machines, punchcards, hand-counted paper ballots, or central-count optical scan (CCOS) as their primary voting systems. By 2008, that percentage had fallen by almost half, with lever machines decreasing by almost two-thirds,18 paper ballots by half, CCOS by one-quarter,19 and punchcards virtually disappearing by 2006 (see Figure 11). The proportion of jurisdictions using CCOS had decreased by almost half from 2004 to 2006, but that decline partially reversed in 2008.

Jurisdictions using PCOS and DREs increased substantially from 2004 to 2008, with a 50% increase for PCOS and more than two-thirds for DREs.20 From 2006 to 2008, respondents reported a 5% increase in PCOS and a similar decrease in DREs, although those changes were not statistically significant.

The observed patterns are consistent with results from other sources.21 The trends conform to expectations arising from HAVA requirements that emphasized improved usability, including prevention and correction of errors by voters in marking ballots, and accessibility of voting systems.22 The increase in use of CCOS from 2006 to 2008 appears to run counter to that explanation, because unlike PCOS and DRE systems, CCOS does not provide assistance to voters in preventing errors. The cause of the increase is not clear, but two possible factors are the increase in "no-excuse" absentee voting (see the section on "Election Administration Issues") and the controversy about the security and reliability of DREs that has led many states to adopt paper-ballot requirements.23

Jurisdictions appeared reluctant to change the kinds of voting systems they used.

The average length of time jurisdictions have been using a particular kind of voting system varies greatly with the kind of system, partly reflecting how long each technology has been available (Figure 12). At one extreme, jurisdictions with hand-counted paper ballots have used them for 80 to 100 years, on average. At the other, jurisdictions with DREs have had them for under 10 years, on average.

Figure 12. Average Length of Use of the Current Voting System as Reported by LEOs

Source: Analysis by CRS of data from capstone surveys.

Notes: See note for Figure 11 for an explanation of types of voting systems. Data on punchcard users is not presented for 2006 or 2008 because very few LEOs reported using them.

The pattern of use shown in Figure 12 suggests that jurisdictions do not readily change the kinds of voting systems they use. Before the 1990s, fewer than 10% of jurisdictions used optical scan and DRE voting systems.24 On the one hand, such reluctance to change creates stability that may be beneficial to voters and administrators. On the other hand, it may mean that a particular kind of technology is used far longer than it should be, with increasing risks of negative consequences. For example, many of the problems associated with the 2000 presidential election were attributed to the continued use of outmoded or flawed technology, such as the punchcard systems in widespread use at the time.25

The causes of such long-term use patterns are complex and may include factors such as legal and budgetary constraints and various forms of transaction costs that would be incurred with any change. Such factors, if they continue to be important, may impede jurisdictions from taking advantage of the kinds of improvements that are likely to occur in voting technology over the next decade.

Figure 13. Kinds of Accessible Voting Systems Used by Jurisdictions with Different Primary Voting Systems, 2008

Source: Analysis by CRS of data from capstone surveys.

Notes: BMD = electronic ballot marking device; Phone = vote-by-phone system; Other = a system that could not be placed in one of the other categories based on the description provided in the survey response. See also note for Figure 11. Data on punchcard users is not presented because very few LEOs reported using them in 2008.

The kind of accessible voting system used varied with the kind of primary voting system.

In 2008, LEOs were asked about the voting systems they used to meet the HAVA accessibility requirement. The options presented were (1) an electronic ballot marking device (BMD), which uses a touchscreen or other computer interface to mark an optical-scan ballot; (2) a DRE; (3) a vote-by-phone voting system, in which the voter uses an automated telephone system to fill out the ballot; and (4) some other system, which the LEO was asked to describe.26 As Figure 13 shows, the kinds of accessible voting systems used varied depending on the kind of primary voting system used in the jurisdiction.27 About 40% of responding jurisdictions reported using DREs as their accessible systems. Not surprisingly, most of those also used them as their primary systems.

A slightly lower percentage of respondents, 37%, reported using BMDs for accessibility; they were most commonly used in lever-machine jurisdictions. While most optical-scan jurisdictions might be expected to use BMDs, only about half reported doing so. About 12% of jurisdictions used vote-by-phone, with the majority of those having hand-counted paper ballots as their primary system.

Table 4. Relative Use of Accessible Rather Than Other Voting Machines by Voters Without Disabilities, 2008

                                 Percentage of Jurisdictions
Accessible Voting System        Less      The Same      More
DRE                              52%         31%         18%
Phone                            72%         17%         10%
BMD                              76%         19%          5%
Other                            61%         32%          7%
Total                            64%         25%         11%

Source: Analysis by CRS of data from capstone surveys.

Notes: LEOs were asked the extent to which accessible voting systems were used by voters without disabilities, on a scale of -5 (much less) to +5 (much more), with 0 meaning "about the same." Data presented are sums of frequencies as follows: Less = -5 to -1, The Same = 0, and More = 1 to 5. See also note for Figure 13.
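The collapsing described in these notes amounts to summing response frequencies on either side of zero. A brief sketch under that assumption, with invented tallies on the -5 to +5 scale:

    from collections import Counter

    # Invented responses on the -5 (much less) to +5 (much more) scale.
    responses = [-5, -3, 0, -1, 2, 0, -4, -2, 1, 0, -5, -1, 3, -2, 0, -1]
    tally = Counter(responses)
    n = len(responses)

    less = 100 * sum(c for score, c in tally.items() if score < 0) / n
    same = 100 * tally[0] / n
    more = 100 * sum(c for score, c in tally.items() if score > 0) / n
    print(f"Less: {less:.0f}%   The Same: {same:.0f}%   More: {more:.0f}%")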

Three-quarters of jurisdictions reported having one accessible machine per polling place. About one-fifth used two, and about one in 20 used three. Most LEOs reported that voters without disabilities used accessible machines less than other machines, and about 40% reported that they used them much less. In jurisdictions using DREs as the accessible machines, the differences were less on average than in those using other technology (see Table 4). Some observers have argued that accessible voting systems should not be limited to use by voters with disabilities, because that would provide a way for election officials to determine the ballot choices made by that class of voters; it would therefore deprive them of the same degree of ballot secrecy that other voters have.

Influence of Stakeholders on the Acquisition of Voting Systems

Most LEOs play a role in decisions on what voting systems to use in their jurisdictions (see Table 2). Many other stakeholders may also influence those decisions. To help provide an understanding of how LEOs assess the appropriateness of the roles other stakeholders play, the survey asked respondents to what extent they agreed or disagreed with statements about the influence of those stakeholders on the decision-making process. Two examples are "The federal government has too great an influence," and "Local level, elected officials should have greater influence."

Figure 14. Reactions of LEOs to Statements about the Influence of Various Stakeholders on Decisions about Selection of Voting Systems

Source: Analysis by CRS of data from capstone surveys.

Note: LEOs were asked about cost only in 2008.

LEOs believed that several groups have too great an influence on the acquisition of voting systems, including the federal government, the media, advocates, political parties, and vendors, and that local elected officials have too little influence.

The results are presented in Figure 14. On average, in fact, LEOs felt more strongly about the role of local elected officials than about that of any other stakeholder. LEOs were largely neutral about the level of influence of state election officials and the public, and did not believe that nonelected officials, professional associations, and independent experts should have greater influence than they do now.

LEOs have become more concerned about the influence of the media, political parties, advocacy groups, and vendors, and more supportive of influence by professionals, experts, and nonelected state and local officials.

Some of the differences among the surveys are notable. As Figure 14 shows, in 2004, LEOs were largely neutral about the influence of the media, political parties, and various advocacy groups.28 In 2006 and 2008, they thought those groups had too much influence. Also, in 2004 LEOs did not believe on average that vendors had too great an influence. That changed in 2006, when most believed that vendor influence was too high, and that number increased again in 2008.

In contrast, in 2008 LEOs were much less convinced than in 2004 and 2006 that professionals, experts, and nonelected state and local officials should not have greater influence. Their views did not change significantly on the roles of the federal government, elected state officials, local elected officials, and the public over the three surveys.

In 2008, LEOs were also asked whether cost has too great an influence on the process of selecting voting systems. They agreed on average that it did, and the strength of that view was about the same as for the media, advocates, political parties, and vendors.

Overall, the observed patterns of response are not surprising. Support for greater influence by local elected officials is expected, because LEOs generally either report to elected local officials or are elected themselves. However, for some questions, elected LEOs and appointed ones had somewhat different views. The support of elected LEOs for greater influence by local elected officials was 20% higher than that of appointed LEOs; support for greater influence by state elected officials and the public was 5%-10% higher. Elected LEOs were 20% less supportive of influence by unelected officials, and 5%-10% less supportive of influence by the federal government, professional associations, and independent experts.

The concerns of local officials about the influence of the federal government are well-known in many areas, not just election administration, and many local officials may have resented the HAVA requirements that led to changes in long-used voting systems.29 The concerns of LEOs about federal influence have not abated, despite improvements in the attitudes of LEOs about HAVA (see "The Help America Vote Act (HAVA): Impacts and Attitudes" below). Also, it is not surprising that LEOs have become more concerned about the roles of stakeholders such as the media, advocates, and political partisans, who are closely associated with the recent controversies about the reliability and security of voting systems.

There has also been debate and uncertainty specifically about the role and influence of voting system manufacturers and vendors in the selection of voting systems by local jurisdictions. Some observers have argued that vendors have undue influence in what voting systems jurisdictions choose. Others believe that such concerns are unwarranted. But little has been known previously of how LEOs actually view vendors and their relationships with them.

The results of the 2004 survey were mixed with respect to the importance of vendors. (These results are from more detailed questions on factors influencing the acquisition of voting systems that were not included in the 2006 or 2008 surveys.)30 LEOs in 2004 appeared to have high trust and confidence in vendors but did not rate them as being especially influential with respect to decisions about voting systems—a view that changed over the next two surveys. Fewer than 10% in 2004 believed that there was insufficient oversight of vendors by the federal government and states, but about one in six believed that local governments did not exercise enough oversight.

Most jurisdictions using computer-assisted voting reported in 2004 that they had interacted with their voting-system vendors within the last four years.31 More than 90% of LEOs considered their voting system vendors responsive and the quality of their goods and services to be high.32 They felt equally strongly that the recommendations of those vendors could be trusted. However, about a fifth of respondents thought that vendors were willing to sacrifice security for greater profit, although 60% disagreed. Also, a quarter felt that vendors were used for too many elements of election administration.33

When LEOs were asked in 2004 what sources of information they relied on with respect to voting systems, state election officials received the highest average rating, with about three-quarters of LEOs indicating that they rely on state officials a great deal. Next most important were other election officials, followed by the federal Election Assistance Commission (EAC) and advocates for the disabled. About one-third of LEOs stated that they relied on vendors a great deal, a level similar to their stated reliance on professional associations. Only 2% of LEOs rated vendors higher than any other source, whereas 20% rated state officials highest. Interest groups were rated lower than vendors, and political parties and media received the lowest ratings.

When LEOs were asked in 2004 about the amount of influence different actors had on decisions about voting systems, the overall pattern of response was similar to that for information sources. Once again, state, local, and federal officials were judged the most influential,34 and political parties and the media the least, with vendors in between. An exception was that local nonelected officials were considered less influential on average than vendors. Both voters and advocates for the disabled were rated as more influential on average than vendors. No LEOs rated vendors as more influential than any other source.

Those results contrast with the views of LEOs, described above, about whether stakeholders' levels of influence were too little or too great in 2004 (Figure 14). Of the three actors considered most influential, LEOs believed that local elected officials should have more influence and that the federal government had too much, and they were neutral about state officials. They did not believe on average that those considered least influential should have more influence.

Figure 15. Support of LEOs for the Use of Different Kinds of Voting Systems

Source: Analysis by CRS of data from capstone surveys.

Note: The X-axis variable (Voting System) is categorical. The data are presented as line, rather than bar, graphs purely as a visual aid to facilitate comparison. The lines do not denote any relationship among the categories. See note for Figure 11 for an explanation of types of voting systems. Each of the six graphs presents the views of LEOs who primarily use the particular kind of voting system denoted on the graph. Data on punchcard users is not presented for 2006 or 2008 because fewer than five LEOs reported using them.

Attitudes Toward Voting Systems

LEOs were highly satisfied with whatever voting system they were using but were less supportive of other kinds of systems.

LEOs had strong opinions about the different kinds of voting systems used in the United States. Those whose jurisdiction used a particular kind of system, whatever it was, supported its use more strongly than they supported any other system (see Figure 15).35 Thus, users of lever machines strongly supported their use, were fairly neutral about DREs and optical scan systems, and were opposed to the use of punchcard and hand-counted paper ballot systems.

In general, except for those using them, LEOs opposed the use of lever machines, punchcard systems, and paper ballots; supported the use of optical scan systems; and were neutral about DREs.36 Those views changed little across the three surveys. However, support of nonusers for DREs declined across the three surveys, from supportive to neutral. For other voting systems, the levels of preference were fairly consistent.

From 2004 to 2006 there was a slight but significant decrease in the level of support for DREs among users of those systems. DREs were the only voting system for which user support dropped across the first two surveys, although it remained very high. It was not possible to determine whether the change in support among DRE users resulted from changes in the views of long-time users or from lower initial support among those who used DREs for the first time in the 2006 election. Whatever the cause, the decline reversed in the 2008 survey.

In 2008, LEOs were also asked about their support for vote-by-phone systems, which are used in several states to meet HAVA's accessibility requirements.37 Overall, three-quarters of LEOs opposed the use of such systems, and under 10% supported their use. However, opposition was not as strong in states using such systems, where about half of LEOs opposed their use and one-quarter supported it. These systems were only recently adopted, and it was not possible to determine to what extent experience with them influenced the attitudes of LEOs toward them.

Figure 16. Overall Satisfaction of LEOs with Their Primary Voting System and with the Performance of the System

Source: Analysis by CRS of data from capstone surveys.

Note: LEOs were asked to rate overall satisfaction on a scale from 0 (not satisfied at all) to 10 (extremely satisfied), and performance from 0 (not well at all) to 10 (extremely well). Note that the scale on the graph is 7-10, not 0-10. The numbers of LEOs using punchcard systems in 2006 and 2008 were too low to calculate meaningful error bars for that voting system. See note for Figure 11 for an explanation of types of voting systems. See also the note for Figure 15 on the use of line graphs.

Satisfaction with the voting systems LEOs used declined from 2004 to 2006 but rebounded in 2008.

Overall, and consistent with the above results, LEOs reported a high level of satisfaction with their voting systems in all three surveys and assessed that the systems performed very well during the election preceding each survey. On a scale of 0-10, average ratings were 8 or higher for each of those questions in all three surveys (Figure 16). However, ratings for satisfaction with and performance of all systems except lever machines were significantly lower in 2006 but rebounded in 2008 to levels that were closer to the ratings in 2004. LEOs still rated DREs lower in 2008 than they had in 2004. There was no difference in ratings across the surveys for lever machines in either satisfaction or performance.38

Figure 17. Overall Satisfaction of LEOs with Their Accessible Voting System, 2008

Source: Analysis by CRS of data from capstone surveys.

In 2008, LEOs were asked to assess their degree of satisfaction with the performance of their accessible voting systems. Those ratings, which averaged 6.5 on a scale of 0-10, were lower than the corresponding ratings for satisfaction with the primary voting system (Figure 17). Among the different kinds of accessible systems (DRE, ballot-marking device, vote-by-phone, and other), users of DREs were the most satisfied, with an average rating of 7.2. However, even LEOs who also used DREs as their primary voting system were less satisfied with the system's accessibility performance (7.9) than with its overall performance (8.5).

LEOs who used DREs and precinct-count optical scan systems were more satisfied with them in 2004 than LEOs who used lever machines, paper ballots, or central-count optical scan, but in 2006 and 2008, there were no significant differences in satisfaction among users of different voting systems. However, users of PCOS systems were slightly more satisfied overall than users of either CCOS or DRE systems.39 There were also no significant differences in rated performance of different voting systems in any of the three surveys.

To assess more directly how LEOs rated their own voting systems in 2006 and 2008, they were asked whether their current system is the best available, and what voting system they believed is best overall. About 80% agreed with the statement that their current voting system is the best available. The level of agreement among users of hand-counted paper ballots was lower than average and that of PCOS users was higher than average (Figure 18). The same percentage believed that their current voting system was the best overall in 2006, with a significantly higher percentage of PCOS users holding that view than users of other systems.

Figure 18. Average Levels of Agreement among LEOs That Their Current Voting System Is the Best Available, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Note: LEOs were asked how strongly they agreed with the statement, "The voting system in my jurisdiction is the best available," on a scale from 1 (strongly disagree) to 7 (strongly agree). See note for Figure 11 for an explanation of types of voting systems. Data on punchcard users is not presented because only four LEOs reported using them in 2006. For clarity of presentation, confidence intervals are omitted from this graph; they ranged from 0.1 to 0.5.

In 2008, LEOs were asked to rank different types of voting systems in order of preference.40 Not surprisingly, the highest average preference by far was for the current voting system used in the jurisdiction—about three-quarters of LEOs chose that as their top preference—and the lowest was for Internet voting. More than half of PCOS users chose CCOS as their second preference, and more than half of CCOS users ranked PCOS second.

Figure 19. Characteristics of the Primary Voting System

Source: Analysis by CRS of data from capstone surveys.

Note: See note for Figure 11 for an explanation of types of voting systems. See also note for Figure 15. The numbers of LEOs using punchcard systems in 2006 and 2008 were too low to calculate meaningful error bars for that voting system.

LEOs rated their primary voting systems as very accurate, secure, reliable, and voter- and pollworker-friendly, no matter what voting system they used.

To further assess voting system preferences, the surveys asked LEOs to assess their primary voting systems on fifteen specific characteristics (Figure 19). The high ratings for accuracy, security, reliability, and usability varied little among the different kinds of voting systems in each survey and changed little across the three surveys. Ratings for usability were slightly lower in 2006 and 2008 than in 2004, although those for multilingual capacity, which is a component of usability, were higher.

As the figure shows, for other characteristics, there were substantial differences in many cases both among voting systems in a survey and for a given voting system across the surveys. For most of those characteristics, LEOs were less happy with the performance of their voting system in 2006 and 2008 than in 2004, especially with respect to optical scan and DRE systems, which they rated lower for cost, size, storage requirements, and machine error in the second and third surveys.

Optical scan systems, both central- and precinct-count, were rated higher for accessibility in 2006 and 2008 than in 2004. The reasons for this change are not clear.41 All systems were rated worse for machine and voter error in 2006 and 2008; LEOs shifted from positive to more neutral assessments of those performance characteristics.

It was not surprising that DREs received the highest ratings of any system for accessibility and ability for use in multiple languages, or that hand-counted paper ballots were rated lowest for counting speed. Some of the comparisons among voting systems, however, did yield surprising results. In particular, the ratings for reliability, security, accuracy, and ease of use by voters were very high and were similar for all voting systems.

Given media reports about problems with the reliability and security of electronic voting, somewhat different outcomes might have been expected—namely, that DREs would have been rated lower in reliability and security. Also, given that modern DREs are often described as more voter-friendly than other systems, and certainly have the capability of providing higher levels of usability than other types, the lack of difference in ratings for usability is somewhat surprising.

With respect to accuracy, a lower rating might have been expected for punchcards, given the difficulties with recounts that were prominent during the 2000 presidential election. It is possible that such confidence exists because few jurisdictions use punchcards now, and those that still use them declined to replace them after 2000. Those jurisdictions kept the system, despite intense negative media coverage of its limitations, and opted not to take part in the punchcard buyout program offered through HAVA.

The relative lack of difference in ratings of optical scan and DRE systems for acquisition and maintenance costs, and size and storage requirements, appears to run counter to widely held views. Many observers regard DREs as the most expensive voting systems, given that several machines may be needed for each polling place, whereas optical scan systems usually require one machine per polling place (PCOS) or none (CCOS).

These differences from expectation suggest that LEOs' perceptions of how their voting systems perform may differ substantially in some ways from views about those systems that have often been depicted by the news media and activists. If the perceptions of election officials are accurate, then several of the criticisms leveled at specific voting systems could lead, if acted upon, to unnecessary and even counterproductive regulation and expenditure. For example, if in fact there is little difference in security between an optical scan system and a DRE, then the requirements for voter-verified paper records of votes that many states have imposed may be unnecessary. If, however, LEOs' perceptions are inaccurate, then understanding and addressing the causes of those inaccuracies may be beneficial. Unfortunately, the survey data do not permit an assessment of which interpretation is correct.

Electronic Voting

Much of the recent controversy about election reform has focused on electronic voting systems. Questions about the security and reliability of those systems were a relatively minor issue until 2003. Two factors led to a sharp increase in public concerns about them: (1) HAVA promoted the use of both PCOS and DREs through its provisions on preventing voter error and making voting systems accessible to persons with disabilities; and (2) the security vulnerabilities of electronic voting systems, especially DREs, were widely publicized as the result of several studies released beginning in 2003.42

The surveys asked several questions designed to elicit the views of LEOs about aspects of that controversy. When asked in 2006 whether current federal and state guidelines and standards for electronic voting systems (both optical scan and DRE systems) were at the right degree of strictness, most LEOs—about 60%—replied in the affirmative. Those who did not were fairly evenly split between officials who believed that the current standards were too strict and those who believed they were not strict enough. There was no significant difference in the average assessment between users and nonusers of electronic voting systems, but nonusers were slightly more likely than users to believe that the standards were either too strict or not strict enough (Figure 20).

Figure 20. Assessment by Users and Nonusers of Electronic Voting Systems of the Strictness of Standards for Those Systems, 2006

Source: Analysis by CRS of data from capstone surveys.

Note: LEOs were asked, "Do you believe that the state and federal standards for electronic voting systems are too strict or not strict enough?" using a scale from -5 (too strict) to 0 (just strict enough) to 5 (not strict enough). The three categories in the graph show the summed percentages who chose -5 to -1, 0, and 1 to 5, respectively. This question was asked only in 2006.

DRE users differed more from nonusers in their views about their voting system than optical scan users differed from nonusers.

In all three surveys, LEOs were asked to what extent they agreed with several statements about DRE and optical scan systems. In 2004 those questions were asked of all LEOs, but in later surveys they were asked only of those who used DREs and optical scan as their primary voting systems. Also, two questions asked in 2004 were not asked in 2006 (see Figure 21 and Figure 22).

Not surprisingly, the opinions of nonusers of either kind of system were generally less strong than those of users. Nonusers were neutral on average with respect to several statements about DREs, including their level of knowledge about the systems, vulnerability to tampering, and the need for more public trust.

Figure 21. Views of DRE Users and Nonusers about DREs

Source: Analysis by CRS of data from capstone surveys.

Note: See text for explanation of the question. Note that not all questions were asked each year.

Figure 22. Views of Users and Nonusers of Optical Scan (OS) Voting Systems About OS Systems

Source: Analysis by CRS of data from capstone surveys.

Note: See text for explanation of the question. Note that not all questions were asked each year.

LEOs whose primary voting systems were precinct-count optical scan were more neutral about DREs than were users of other voting systems.43 Users of DREs, in contrast, generally agreed that they had sufficient knowledge about the voting system, that certification procedures were adequate, that DREs are not vulnerable to tampering and security concerns can be addressed with good procedures, that the public should have greater trust in DREs, and that the media report too many criticisms of that voting system. Those views were similar in both surveys.

Nonusers were less neutral about optical scan (OS) systems, but users nevertheless held stronger views than nonusers about these systems, except for the statement about media criticism, about which both users and nonusers were neutral on average in 2004. User beliefs about the media were similarly neutral in 2006, but in 2008, they believed that the media were overly critical. LEOs whose primary voting systems were DREs were less neutral about OS systems than users of other voting systems.44

The controversy about the security and reliability of DREs has led to widespread calls for the adoption of a paper trail of the ballot choices that a voter can verify before casting the ballot. These paper trails, printed as separate ballot records that the voter can examine, are usually called voter-verified paper audit trails, or VVPAT. LEOs whose primary voting system was a DRE were asked several questions in the surveys about VVPAT. The percentage who used them increased from 18% in 2004 to 36% in 2006 and 46% in 2008.45 In 2006, about 36% of LEOs whose jurisdictions used DREs as their primary voting system stated that voters who did not wish to use a DRE had the option of using a paper ballot instead. That number increased to 44% in 2008. However, it was not possible to determine which of those jurisdictions permitted that choice in the polling place rather than through the use of "no-excuse" absentee balloting.46

Given concerns about the auditability of ballots recorded on DREs, users of DREs and OS systems were also asked in 2008 whether they agreed that in close elections the system they used was more open to questions about accuracy than other systems. DRE users were neutral on average about that statement, but OS users disagreed.

About 100 LEOs reported in 2008 that they had recently switched from DREs to another voting system, mostly PCOS. About half of those who switched were more satisfied with their current voting system than with their previous one, a quarter preferred the DREs, and the rest were neutral about the switch. The most common reasons given for the change were a mandate from the state and the lack of a paper ballot with DREs.

Most LEOs who did not use DREs believed that VVPAT should be required, but most DRE users disagreed.

About two-thirds of LEOs who did not use DREs supported a VVPAT requirement in 2004 and 2008,47 whereas one-third or fewer of users did.

Support for VVPAT increased among DRE users over the three surveys.

Fewer than one in five LEOs who used DREs in 2004 believed they should have VVPAT. That number increased to one in three in 2008. The views of nonusers, however, did not change (Figure 23).

Figure 23. Support for VVPAT Among Users and Nonusers of DREs

Source: Analysis by CRS of data from capstone surveys.

Note: In the 2006 survey, only DRE users were asked if VVPAT should be required.

The views of DRE users varied depending on whether the machines used VVPAT (Figure 24). Not surprisingly, LEOs who used DREs with VVPATs were more supportive of them than other DRE users, except in 2004, when there was no significant difference between the two groups.

In 2006 and 2008, LEOs were also asked if they would be willing to use a VVPAT if reimbursed for the costs by the federal government, and about 60% answered in the affirmative. However, even among respondents (DRE users and nonusers) who expressed support for VVPAT in 2006, most (65%) were willing to spend only $300 or less for the feature.

Figure 24. Support for VVPAT Among Users of DREs With and Without VVPAT

Source: Analysis by CRS of data from capstone surveys.

LEOs were asked to choose one or more of several reasons for disagreeing or agreeing that DREs should produce a VVPAT. The results for DRE users are presented in Figure 25. The most frequent reasons chosen varied across the surveys, but the risk of printer failure, the complexity of implementation, and risks to voter privacy were consistently among the most frequently chosen disadvantages, with about half of DRE users choosing them on average. Those LEOs appeared least concerned about risks of tampering. The degree of concern about potential disadvantages was highest in the 2006 survey and lowest in 2008. Only about one-third of LEOs chose any of the three potential advantages listed for this question in the 2006 and 2008 surveys—recounts, checks on accuracy, and improved voter confidence.

DRE users and nonusers differed strikingly in their views on the disadvantages and benefits of VVPAT. In 2008, DRE users expressed far greater concern about the disadvantages and far less agreement with the potential advantages than did nonusers (Figure 26). The only exception was that neither group believed on average that risk of tampering was a significant concern in 2008. That was not the case in 2004, when concerns about tampering among users had been four times as high as among nonusers, consistent with differences in views for other potential disadvantages in that survey.48

Figure 25. Reasons Chosen by DRE Users for Disagreeing or Agreeing That DREs Should Print a VVPAT

Source: Analysis by CRS of data from capstone surveys.

Figure 26. Reasons Chosen by DRE Users and Non-Users in 2008 for Disagreeing or Agreeing That DREs Should Print a VVPAT

Source: Analysis by CRS of data from capstone surveys.

In 2008, LEOs with experience using VVPATs were less likely to express concerns about their disadvantages than were other DRE users and were more likely to agree with the potential advantages (Figure 27). Their views were closer to those of LEOs who did not use DREs (see Figure 26).

Figure 27. Reasons Chosen by VVPAT Users and Non-Users in 2008 for Disagreeing or Agreeing That DREs Should Print a VVPAT

Source: Analysis by CRS of data from capstone surveys.

Most VVPAT users were satisfied with them.

About three-quarters of LEOs who used a VVPAT were somewhat to very satisfied with it. About one-fifth were dissatisfied in 2006, and fewer than one in ten in 2008. More than four-fifths of LEOs had confidence in the accuracy of VVPAT, with fewer than one-tenth expressing concerns. More than two-thirds thought that voters reacted positively to them in 2006, but only half in 2008 (Figure 28).

Figure 28. Reactions to VVPAT by Users, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

The Help America Vote Act (HAVA): Impacts and Attitudes

Most LEOs, about 90%, considered themselves familiar with and knowledgeable about HAVA's requirements in the surveys. Those who were "not familiar at all" with HAVA decreased from 4% in 2004 to less than 0.1% in 2008. About 90% of respondents believed that almost all jurisdictions in their state were in full compliance with HAVA provisions in 2006.49

Figure 29. Assessment by LEOs of Whether HAVA Is Improving the Election Process in Their Jurisdictions

Source: Analysis by CRS of data from capstone surveys.

LEOs believed that HAVA is making moderate improvements in the electoral process overall in their jurisdictions.

The strength of this view varied somewhat across the surveys (Figure 29). The most favorable assessment was in 2008, and the least favorable in 2006. Also, in the first two surveys, about twice as many LEOs believed that the law resulted in no improvements as believed it resulted in major improvements, but the percentage choosing "no improvement" fell by more than half in 2008, while the percentage choosing "major improvement" was unchanged.

Figure 30. Assessment of HAVA Provisions as Advantage or Disadvantage

Source: Analysis by CRS of data from capstone surveys.

The views of LEOs in 2008 about the extent to which HAVA had improved elections nationally were similar to their views about local impacts.50 It might be expected that larger jurisdictions would find HAVA more beneficial than smaller ones, but in fact there was no association between the number of voters in a jurisdiction and how LEOs answered this question. However, the kind of voting system used did have an effect. In each of the surveys, DRE users rated improvements from HAVA highest, followed by users of PCOS, CCOS, and other systems.

Table 5. Assessment by LEOs of Advantageousness of HAVA Provisions

(percentage of LEOs choosing each assessment)

HAVA Provision | Advantage 2004/2006/2008 (Δ) | Neutral 2004/2006/2008 (Δ) | Disadvantage 2004/2006/2008 (Δ)
Provision of federal funds to states | 90/81/81 (-9) | 6/12/13 (7) | 4/7/6 (2)
Facilitating participation for military or overseas voters | 82/72/80 (-2) | 11/18/15 (4) | 7/10/5 (-2)
Requirements for centralized voter registration | 71/70/74 (3) | 16/17/16 (0) | 13/13/10 (-3)
Requirement for voter-error correction | 78/68/69 (-9) | 13/22/24 (11) | 8/11/7 (-1)
Provision of information for voters | 79/67/75 (-4) | 15/25/22 (7) | 5/8/3 (-2)
Process for certification of voting systems | 79/67/64 (-15) | 15/21/21 (6) | 7/13/15 (8)
Codification of voting system standards in law | 74/64/68 (-6) | 19/25/23 (4) | 8/11/9 (1)
Requirement for disabled access to voting systems | 76/64/69 (-7) | 13/18/18 (5) | 11/17/13 (2)
Identification requirements for certain first-time voters | 68/64/73 (5) | 16/20/17 (1) | 16/16/10 (-6)
State matching requirement for federal funds | 74/57/61 (-13) | 14/24/23 (9) | 12/20/15 (3)
Creation of the Election Assistance Commission | 62/48/49 (-13) | 23/31/37 (14) | 15/21/13 (-2)
Requirement for provisional voting | 49/51/58 (9) | 17/20/23 (6) | 35/30/20 (-15)

Source: Analysis by CRS of data from capstone surveys.

Note: Δ = change from 2004 to 2008. LEOs were asked to rate the provisions on a scale of 1 (disadvantage) to 7 (advantage). Entries in the Advantage columns include respondents who chose 5-7, in the Neutral columns 4, and in the Disadvantage columns 1-3.

LEOs supported all major provisions of HAVA, but the degree of support changed across the surveys.

Most LEOs regarded the major provisions of HAVA as advantageous. However, the average level of support varied among both the provisions and the surveys. LEOs were most supportive of federal funding and least supportive of the requirement for provisional voting and the creation of the EAC (Figure 30). Provisional voting received substantially higher negative ratings than any other provision in the surveys, but the proportion of LEOs rating it as a disadvantage declined more than 40% from 2004 to 2008 (Table 5).

While remaining positive overall, the level of support decreased for seven provisions across the surveys, especially from 2004 to 2006. They were the provision of federal funds, the error-correction requirement, the certification process for voting systems, the codification of voting-system standards in law, the accessibility requirement, the requirement for state matching funds, and the creation of the EAC. However, even for the EAC, which, along with provisional ballots, had the lowest rating in 2008, half of LEOs regarded it as an advantage and fewer than one in seven as a disadvantage in that survey.

Decreased support for funding might have been caused simply by the decrease in availability of federal funds over the course of the three elections. Controversies about the security and reliability of different voting systems might have contributed to the decline in support for provisions relating to voting systems and the EAC.

There was no significant change for four provisions: facilitating participation for military and overseas voters, the requirement for centralized voter registration, the provision of information to voters, and identification requirements for certain first-time voters. Support for one provision, the requirement for provisional voting, actually increased with each survey.

Support for HAVA's provisions also varied to some extent with the primary voting system used in the jurisdiction. DRE users exhibited the strongest support for the requirements on accessibility, provisional ballots, and military and overseas voters, and for the codification of standards. DRE and PCOS users expressed the strongest support for the error-correction requirement.

Users of hand-counted paper ballots exhibited lower support than users of other systems for the provisions on federal funding, provisional ballots, facilitation of military and overseas voting, and the codification of standards. Nevertheless, all were regarded as advantages except provisional ballots, toward which users of paper ballots were neutral on average.

LEOs perceived HAVA provisions as moderately difficult to implement.

In general, LEOs reported in all three surveys that implementation of HAVA provisions was moderately difficult (Figure 31). The level of difficulty for three provisions declined from 2004 to 2008: provisional voting, accessibility, and provision of information for voters. For one, the process for certification of voting systems, the reported difficulty of implementation increased.51 The other provisions exhibited no net change, although for voter identification and error correction, the perceived difficulty was lower in 2006 than in the presidential election years.

Figure 31. Perceived Level of Difficulty by LEOs in Implementing HAVA Provisions

Source: Analysis by CRS of data from capstone surveys.

Not surprisingly, LEOs were less likely to support a provision they found difficult to implement. That is, there was an inverse relationship between the reported level of difficulty and the reported advantageousness of a provision. That pattern held for all provisions, but was most pronounced for provisional ballots, and least for the provision of information to voters.

Users of different voting systems varied in how difficult they found provisions to implement.

Optical scan users found the accessibility requirement more difficult than did users of other voting systems. Perhaps surprisingly, DRE users did not find this provision significantly less difficult to implement than did users of hand-counted paper ballots, punch cards, or lever machines.

PCOS and DRE users found the voter error-correction requirement easier to implement than did users of other voting systems. That finding is consistent with the greater error prevention and correction features of those systems, although lever machines also possess error prevention features.

Users of paper ballots perceived the certification provision as less difficult to implement than did users of other systems, as would be expected, given that the acquisition of certified voting systems would likely matter less to them than to users of optical scan and DRE systems or of lever machines in the process of being replaced. However, it is not clear why PCOS users reported similar levels of difficulty for that provision. Similarly, it is not clear why CCOS users found the voter registration requirement more difficult to implement than did users of other voting systems. Users of different kinds of systems did not differ in their assessments of the difficulty of implementing the provisions on military and overseas voters, provisional ballots, and voter identification.

Most LEOs reported that HAVA has increased the cost of elections, and they are concerned about future funding.

The decrease in support for most HAVA provisions across the three surveys may have resulted in part from perceptions about costs and funding. The importance of these factors is also supported by the responses to three questions in the 2006 and 2008 surveys about the effect of HAVA on the cost of elections and the sufficiency of current and future funding.

The results are presented in Figure 32.

About 90% of respondents in 2006 and 75% in 2008 believed that HAVA has increased the cost of elections, and only 2% believed the costs have decreased. LEOs were fairly evenly divided in both surveys on whether current funding is sufficient to implement the requirements, but most expressed concerns about the sufficiency of future funding, with 25%-30% stating that they were "extremely concerned." LEOs were also concerned about the impact on election administration of the financial crisis that arose in 2008, with more than 60% indicating that they were moderately to very concerned, and only 4% reporting that they were not concerned at all.

Figure 32. Response of LEOs to Questions about Funding Effects of HAVA,
2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Figure 33. Reactions of LEOs to Statements about the Impacts of HAVA,
2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Table 6. Distribution of Responses of LEOs to Statements about the Impacts of HAVA, 2006 and 2008

Statement | Disagreed 2006/2008 | Were Neutral 2006/2008 | Agreed 2006/2008
HAVA has made elections more accessible for voters | 26%/16% | 23%/24% | 51%/60%
HAVA has made elections more fair | 40%/24% | 31%/40% | 30%/36%
HAVA has made elections more complex to administer | 7%/8% | 8%/14% | 85%/78%
HAVA has made elections more reliable | 42%/24% | 28%/37% | 29%/39%
HAVA requirements are not consistent with state requirements | 44%/23% | 33%/47% | 23%/30%

Source: Analysis by CRS of data from capstone surveys.

Note: LEOs were asked to rate their level of agreement or disagreement on a scale of 1 (strongly disagree) to 4 (neutral) to 7 (strongly agree). Entries for the Agreed column include respondents who chose 5-7, for the Were Neutral column 4, and for the Disagreed column 1-3.

LEOs reported in 2008 that HAVA has improved the accessibility, fairness, and reliability of elections but has made them more complicated to administer.

LEOs were also asked in 2006 and 2008 to respond to a set of statements about the impacts of HAVA (Figure 33). Their views changed significantly across the two surveys. They agreed on average in both that HAVA has made elections more accessible for voters, and they held that view more strongly in 2008. In 2006, they disagreed that the law has made elections fairer or more reliable, but agreed with those views on average in 2008. In 2006, they did not believe that HAVA requirements were inconsistent with state requirements, and they were neutral on average about that statement in 2008.

Their most strongly held view in both surveys was that HAVA has made elections more complex to administer. As Table 6 shows, responses to the statement on complexity were the least evenly distributed, with 85% of respondents agreeing in 2006 and 78% in 2008; for the other statements, about one-quarter to one-half of respondents took a neutral position.

LEOs were skeptical about how much attention Congress pays to their views.

In 2008, LEOs were asked how much attention they thought Congress will pay to their views when considering legislation regarding election administration. In general, they appeared skeptical that their views would be heard (Figure 34). About one in seven believed that Congress would pay no attention at all, and fewer than 3% believed that a great deal of attention would be paid. It could not be determined whether these views arose from their experiences relating to the development of HAVA or from some other source, such as a more general skepticism about government responsiveness.

Figure 34. Assessment by LEOs of How Much Attention Congress Will Pay to Their Views When Considering Legislation on Election Administration, 2008

Source: Analysis by CRS of data from capstone surveys.

Election Assistance Commission

When HAVA created the Election Assistance Commission, the law gave it several specific responsibilities. The EAC carries out grant programs, provides for voluntary testing and certification of voting systems, studies election issues, and issues voluntary guidelines for voting systems and guidance for the requirements in the act. The EAC has no rule-making authority (other than very limited authority under the National Voter Registration Act, the "motor-voter" law, P.L. 103-31) and does not enforce HAVA requirements.

In the 2006 and 2008 surveys, LEOs were asked about the EAC's responsibilities, helpfulness, and benefits. They were asked to rank the importance of four EAC responsibilities: providing guidance to local jurisdictions, ensuring compliance by local jurisdictions, research on election issues, and certification of voting systems.

Most LEOs found most activities of the EAC moderately important.

The results are presented in Figure 35. LEOs regarded guidance to them as the most important of the listed responsibilities and ensuring compliance by them as the least.52 Research and certification were rated in the middle, and the ratings for them did not differ significantly. The results were consistent across both surveys. When asked in 2006 how many times the EAC had helped them understand or perform their duties during the previous year, about one third indicated that the EAC had helped at least once, and about 10% ten or more times.53

The degree to which LEOs found the EAC helpful improved substantially from 2006 to 2008 (Table 7). In 2006, almost half of LEOs had found the EAC not very helpful, with 13% finding it "not helpful at all." In 2008, the proportion of more negative ratings dropped by more than half, with only about one-fifth finding the EAC not very helpful, and only 3% choosing "not helpful at all." The proportion finding the EAC moderately helpful doubled, from about one-third to two-thirds. However, the proportion who found the EAC very helpful did not increase, but remained at about one-fifth.

Table 7. Perceptions of LEOs about the Helpfulness of the EAC, 2006 and 2008

Perceived Degree of Helpfulness | 2006 | 2008
Less helpful | 46% | 18%
Moderately helpful | 35% | 65%
More helpful | 19% | 17%

Source: Analysis by CRS of data from capstone surveys.

Notes: LEOs were asked to rate the degree of helpfulness of the EAC to them on a scale of 0 (not helpful at all) to 10 (extremely helpful). In the table, less helpful = 0-3, moderately helpful = 4-6, more helpful = 7-10.

Figure 35. Perceived Importance by LEOs of Selected EAC Responsibilities,
2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

LEOs were also asked how they had benefitted from the four functions listed above plus the distribution of federal funds for use by local jurisdictions. The ratings (Figure 36) generally reflect the pattern seen in the responses on overall helpfulness. On average, LEOs responded that they had benefitted only moderately overall, but the average level of benefit for each category was higher in 2008 than in 2006.54 However, while they considered local guidance as the most important responsibility (see Figure 35), they rated it lowest in benefit, along with local compliance, which they regarded as the least important responsibility. About a quarter rated EAC guidance as "not beneficial at all," with about 7% rating it "extremely beneficial." Perceived benefits from research and certification were somewhat higher, and funding, not surprisingly, was rated highest.

Figure 36. Perceived Degree of Benefit to LEOs from EAC Functions, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

There are several possible explanations for the discrepancy in the ratings for importance versus benefits of EAC guidance to local jurisdictions. For example, it could reflect frustration with the delays in the initial start-up of the EAC, an explanation that is consistent with the increase in ratings in 2008. It could reflect difficulties in understanding the guidance that the EAC issued. It might reflect the fact that the purpose of the guidance stated in HAVA is to assist states, not local jurisdictions, in meeting the title III requirements (§311(a)). Consistent with that explanation, when LEOs were asked in 2008 about guidance and compliance at the state level, they perceived each of those as being more beneficial than at the local level (Figure 36). Or it could simply be an expression of opposition to or uncertainty about the requirements themselves. Individual comments from LEOs in 2006 and 200855 suggest a diversity of views about the EAC:

- They need to move faster, the new system or changes to old systems need to get certified in a reasonable amount of time.

- Much of the information received from the EAC either did not apply … or was already in practice for many years.

- The EAC's information on their website can be very helpful.

- My local jurisdiction does not really see anything from the EAC because the state usually takes care of it and then passes it on to us to comply.

- EAC commissioners and staff are very well aware of their situation and environment. I work closely with them on a regular basis and know they are doing the best they can, as a federal agency with no enforcement powers….

- I would like for the EAC to work more with states to have equipment certified and power to enforce that certification. I also wish the EAC members did not change so often—it takes a long time to learn….

- Exempt cities or other entities with less than 2,000 voters from the very expensive HAVA equipment requirements.

- Get rid of it. Elections … should be free of federal control.

- I believe they need more power to correct election problems.

Voter Registration Database

HAVA required each state to implement a statewide, computerized voter registration list before the 2006 election. A few states were unable to meet that deadline, and that is reflected in the survey, with 6% of respondents indicating that their states had not yet met the requirement in 2006, and 4% in 2008.56 Most LEOs were familiar with their state's database, with about a third assessing themselves as "very familiar" in 2006.57

Given the concerns expressed in the first survey about the burdens of HAVA implementation, the second survey asked LEOs whether the implementation of the computerized list had required the hiring of additional staff in the local jurisdiction.58 Four-fifths responded that it had not. Those that did hire additional staff were asked to identify all sources of funds. More than three-quarters received funding from local governments (Figure 37), with about 70% receiving only local funding.

Figure 37. Sources of Funds Reported by LEOs
for Additional Local Staffing for the Voter
Registration Database Required by HAVA, 2006

Source: Analysis by CRS of data from capstone surveys.

Note: There were 234 jurisdictions that reported requiring additional staffing in 2006.

Table 8. Levels of Confidence Reported by LEOs in Selected Features of Their Voter Registration Databases, 2006 and 2008

Confidence Rating | Contingency Plans (2006) | Security (2006) | Security (2008) | Accuracy (2008)
Low | 6% | 1% | 1% | 2%
Moderate | 32% | 24% | 34% | 40%
High | 61% | 75% | 66% | 59%

Source: Analysis by CRS of data from capstone surveys.

Notes: LEOs were asked to rate their level of confidence for each item on a scale of 0 (lowest) to 10 (highest). Entries for Low column include respondents who chose 0-2, for Moderate 3-7, and for High 8-10. The question about security was asked in 2006 and 2008, the one about contingency plans was asked only in 2006, and that about accuracy only in 2008.

To explore perceptions about the effectiveness of the computerized statewide voter registration database, LEOs were asked about security, accuracy, and contingency plans in case of failure on election day.59 Respondents were very confident about each (Table 8).60

Figure 38. Agreement/Disagreement of LEOs with Statements About
the Voter Registration Database, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Note: Sample sizes differed substantially in the two surveys. See text.

LEOs were also asked about their agreement or disagreement with a series of statements about the voter registration database. The responses (Figure 38) are generally consistent with the responses to the questions on accuracy and security. Most LEOs did not believe that the database could be accessed by unauthorized people or manipulated improperly. They did not believe it created problems for legitimately registered voters, and did not see challenges from political parties and others as a significant problem.

LEOs had more confidence in the voter registration databases in 2008 than they had in 2006.

They were less concerned about reliability, administrative burden, matching problems, inadvertent removal of voters, and identity theft. Their views on security were unchanged. They continued to believe that the new databases were to some degree an improvement over the previous systems and were somewhat more accurate and fair. They remained neutral on average about whether the new systems would reduce the need for provisional ballots. However, for this statement, although not for any of the others, the number of voters in the jurisdiction had a significant impact. LEOs with larger jurisdictions were more likely to believe that the new databases reduced the need for provisional ballots than were those with smaller jurisdictions.

Figure 39. Assessment by LEOs of the Performance of Electronic Pollbooks in Resolving Issues About Voter Eligibility, 2008

Source: Analysis by CRS of data from capstone surveys.

Most LEOs who used electronic pollbooks were positive about them.

In 2008, LEOs were asked about their use of electronic pollbooks, which provide immediate electronic access to the state's voter registration database at the polling place. About one in six reported that they used them.61 About two-thirds of those reported that electronic pollbooks were better than paper registers in resolving issues about voter eligibility (Figure 39). About 10% considered them worse, and the rest were neutral.

Voter Identification

Issues relating to voter identification have been controversial.62 HAVA requires that first-time voters who register by mail must present a specified form of identification, either when registering or when voting. The law does not require photographic identification, although a few states have such requirements, and many states require some form of identification document.63

About two-thirds of LEOs reported that their jurisdictions required some form of document identification from all voters. About one-third of jurisdictions used signature comparisons. Roughly one-quarter permitted the voter to provide identification verbally via some form of personal information, such as name and address (Figure 40).

LEOs supported requiring photo identification for all voters, even though they believed it would negatively affect turnout and did not believe that voter fraud is a serious problem in their jurisdictions.

One of the principal policy arguments often cited for tightening voter-identification requirements is concern about the risk of significant levels of voting by ineligible voters. Opponents counter that those risks are small and that requiring identification, especially photo IDs, would effectively disenfranchise many eligible voters who would have difficulty obtaining such documents.64 To help determine the views of LEOs on this issue, the surveys asked several additional questions about voter identification, covering support for a photo-identification requirement, its likely effects on election security and voter turnout, and the prevalence of voter fraud.

Figure 40. Percentage of Jurisdictions Requiring or Accepting Different Kinds of Identification for Voting, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Notes: LEOs were asked to indicate all acceptable forms of identification. The option "Other Govt. Document" refers to those indicating the name and address of the voter, and "Personal Information" refers to address, date of birth, and so forth.

The results are presented in Figure 41. On average, LEOs mildly supported a requirement for photo identification. However, about 30% of respondents chose "extremely supportive," 5%-10% "do not support at all," and the choices of the other 60% were spread across the scale of possible responses. LEOs whose jurisdictions used hand-counted paper ballots were somewhat less supportive of photo ID than other jurisdictions, perhaps because those jurisdictions had many fewer voters on average than other jurisdictions (see Figure 49 below), and those LEOs therefore might be more likely to know voters personally. DRE users, who tended to have large jurisdictions, were more supportive on average of photo ID than were users of other systems.

Figure 41. Frequency Distributions of Response by LEOs to
Questions about Voter Identification, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Two-thirds of LEOs also believed that requiring photo identification would make elections more secure. DRE users were more likely than users of other systems to believe that, but users of other systems did not differ significantly.

The support of LEOs for photo ID and their views about its impacts on security do not, however, appear to be based on concerns about ineligible voters or voter fraud, which few believed were problems in their jurisdictions. Furthermore, while about half of LEOs believed that requiring photo identification would have no impact on voter turnout, more than 40% believed that it would depress turnout.

Views did not change greatly from 2006 to 2008. LEOs were slightly more supportive of photo ID in 2008 and somewhat more concerned about fraud, but they were significantly less confident that photo ID would improve the security of elections.

The results suggest a discrepancy between, on the one hand, the overall support of LEOs for photo ID and their average views about its effects on security and, on the other hand, their views about impacts on turnout and the risk of voter fraud: LEOs tended to support photo ID and believed it would increase security, yet at the same time tended to believe that fraud was not a problem and that requiring photo ID would depress turnout. It is possible that, however low the risk of fraud, LEOs believe that reducing it outweighs any negative impact on turnout. Also, LEOs who supported photo ID were less likely than those who did not to believe that requiring photo ID would depress turnout, and they were more likely to believe that fraud was a problem and that photo ID would increase security. In any case, the range of perspectives in the responses shows that the controversy is not settled, even among local election officials.66

Election Administration Issues

Election Preparations

The time that LEOs reported spending preparing for elections increased from 2004 to 2008.

The 2006 election was the first in which all HAVA requirements were in effect.67 Consistent with the perception of LEOs that HAVA has made elections more complex to administer (Figure 33), three-quarters reported spending more time preparing for the 2006 election than for the 2004 election, and almost 90% reported spending more time in 2008 than in 2006. For 2004 versus 2006, this perception was supported by comparing the number of hours per week LEOs reported spending on election duties: on average, the time spent increased 15%, from 21 to 24 hours.68 This difference may be especially significant given that 2006 was not a presidential election year; presidential elections generally require more preparation than intervening ones. However, the reported time did not change from 2006 to 2008; it was 24 hours per week in both. There are several possible explanations for this discrepancy, and it was not possible to determine the cause.

LEOs believed that better voter education would reduce problems on election day and improve the overall process.

In 2008, LEOs were asked about voter education. About half reported that their jurisdictions had voter education programs intended to increase voters' knowledge of election rules and procedures. About 90% agreed that voter education about rules and procedures is important, and two-thirds that it is the responsibility of LEOs. About 60% believed that lack of voter knowledge creates problems in elections, and 80% that better voter education would improve the election-day process (Figure 42). Support was somewhat weaker for all those statements, although still positive on average, among LEOs whose jurisdictions did not have voter education programs.

The percentage of jurisdictions with voter-education programs varied with the kind of voting system (Table 9). Fewer than 20% of those using hand-counted paper ballots had such programs, whereas most using DREs did. Similarly, paper-ballot users indicated lower support than users of other systems for the statements in Figure 42, and they were neutral about whether educating voters is important or the responsibility of LEOs.

Figure 42. Agreement/Disagreement of LEOs with Statements About Voter Education, 2008

Source: Analysis by CRS of data from capstone surveys.

Notes: Data presented are mean responses on a scale of 1 to 7.

Table 9. Percentage of Jurisdictions Using Different Kinds of Voting Systems That Reported Having a Voter-Education Program, 2008

Type of Voting System | %
Lever machine | 53
Hand-counted paper ballots | 18
Central-count optical scan (CCOS) | 50
Precinct-count optical scan (PCOS) | 41
Direct-recording electronic (DRE) | 58

Source: Analysis by CRS of data from capstone surveys.

Incidents on Election Day

In recent elections, the media have reported several prominent issues of concern, such as voting-system malfunctions, problems with pollworkers and vendors, long lines, unfair media coverage, and failures to report results accurately and on time. The surveys therefore presented a list of 16 potential problems and other events in 2006 and 2008 and asked LEOs to indicate which, if any, had occurred. The results are presented in Table 10. About 65% of survey respondents in 2006 and 50% in 2008 reported experiencing at least one of the events listed in the table. In 2006, more than 100 LEOs reported five or more kinds of incidents, with a maximum of 11; in 2008, more than 50 reported five or more, with a maximum of 10. Not surprisingly, LEOs in more populous jurisdictions reported more events than those in less populous ones. In 2008, about 1 in 15 LEOs reported an "uncontrollable" event such as a fire or a power outage.

In 2008 LEOs were asked to assess how successful the election process was in their jurisdictions. More than one-quarter reported a very successful process, and none considered it unsuccessful. However, the degree of success perceived was inversely related to the number of different types of incidents reported.

Given that 2008 was a presidential election, higher turnout was expected, and some variation in incidents might be related to whether a jurisdiction had higher turnout in 2008 than 2006. Seventy percent of LEOs reported higher turnout in 2008 (Figure 43), and LEOs in jurisdictions using hand-counted paper ballots reported a greater increase in turnout than those using other kinds of systems. However, those reporting higher turnout were not more likely to experience incidents.

The most commonly reported incident in the 2006 and 2008 elections was malfunction of a DRE or optical scan system.

About 65% of LEOs using DREs or PCOS as the primary voting system reported this problem. About 55% of CCOS users and about 40% of lever machine users reported this problem, with the lowest incidence, 20%, among LEOs using hand-counted paper ballots.69 There was no significant difference between the surveys in the incidence of this problem.70

Table 10. Percentage of LEOs Reporting Various Events in Their Jurisdictions on Election Day, 2006 and 2008

(percentage of LEOs reporting the event)

Event | 2006 | 2008
Repairable electronic voting system malfunction | 60 | 59
Unrepairable electronic voting system malfunction | 16 | 14
Electronic voting system was hacked | 0 | 0
Vendors did not provide the support expected | 18 | 6
Poll workers did not understand their jobs | 30 | 34
Poll workers did not report for duty | 14 | 13
Excessively long lines | 16 | 15
Insufficient supply of paper ballots | 5 | 4
Polling places failed to accurately report election results | 3 | 2
Polling places failed to report election results in a timely manner | 6 | 5
Central office failed to report election results in a timely manner | 4 | 1
Unfair media coverage of election administration | 15 | 7
A close race (2%-3% margin of victory) | 32 | 31
A race resulting in an election recount | 27 | 23
A race resulting in a legal challenge | 3 | 6
Deliberate election fraud | 1 | 2

Source: Analysis by CRS of data from capstone surveys.

Note: The percentages in this table are based on the total number of respondents who reported at least one kind of incident on election day (970 in 2006 and 670 in 2008). This base was chosen because it provided a reasonable basis of comparison among the kinds of incidents and between the surveys, given that the question did not have an option for LEOs to check if they had no problems whatever. Therefore, the percentages listed are likely to be overestimates of the actual percentage of jurisdictions that experienced the listed events. The percentages in the table would have been lower if the total number of LEOs responding to the survey had been used, but in that case the denominator would likely have included LEOs who experienced problems but skipped the question. An estimate relative to the total number of respondents can be calculated for each percentage in the table by multiplying the listed percentage by either 0.644 (2006 data) or 0.494 (2008 data).
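As a worked illustration of the conversion described in the note (the arithmetic below is ours, using the repairable-malfunction row of Table 10):

\[
60\% \times 0.644 \approx 39\% \quad (2006), \qquad 59\% \times 0.494 \approx 29\% \quad (2008),
\]

that is, roughly 39% of all 2006 respondents and 29% of all 2008 respondents reported a repairable voting-system malfunction.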

Figure 43. Assessment by LEOs of Whether Turnout Was Higher in 2008 Than 2006

Source: Analysis by CRS of data from capstone surveys.

The reason for the low incidence among paper ballot users is not clear. It might be explained in part by the high proportion of those jurisdictions, about half, that used vote-by-phone to meet HAVA accessibility requirements (see Figure 13 above).

While DRE users reported a slightly higher incidence of malfunction (67% of those reporting at least one event and about 50% of all DRE users) than PCOS users (63% of those reporting at least one event and about 48% of all users), a larger difference might have been expected. In jurisdictions where DREs are the primary voting system, several might be used in each polling place, whereas in PCOS jurisdictions, typically only one OS machine is used. Therefore, the chance of at least one malfunction would be expected to be higher on average in jurisdictions using DREs.71 However, if DREs had lower failure rates per machine than optical scan systems, the difference would be correspondingly lower.72
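That expectation can be made concrete with a simple probability sketch (our illustration, not from the report or the surveys): assuming each machine fails independently on election day with the same probability p, a polling place with n machines experiences at least one failure with probability

\[
P(\text{at least one failure}) = 1 - (1 - p)^n ,
\]

which grows with n. For example, if p = 0.05, a single scanner fails with probability 5%, whereas at least one of five DREs fails with probability

\[
1 - (1 - 0.05)^5 \approx 0.23 .
\]

The near parity in reported malfunction rates is therefore consistent with a lower per-machine failure rate for DREs, as noted above.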

The results suggest that current optical scan systems may not be significantly more reliable than DREs. They also contrast strikingly with the uniformly high ratings all users gave for the reliability of their voting systems (see Figure 19 above).

LEOs did not appear to assess the malfunctions as being the result of tampering. In fact, only one reported a system being hacked, in 2006, and that was a precinct-count optical scan user.73

About 10% of LEOs were disappointed in the level of support provided by vendors. Those LEOs were more than twice as likely to have experienced malfunctions of their voting systems as LEOs who were not disappointed with vendor support.

Confusing ballots were often a problem.

In 2008, ballots were slightly longer on average than they had been in 2004, but half of LEOs reported no difference. More than half reported that confusing ballots were a problem for voters in their jurisdictions in 2008 (Figure 44). About three-quarters believed that it would be beneficial to devote additional resources to ballot design. Not surprisingly, LEOs who were more concerned about ballot confusion were also more likely on average to believe that additional resources should be devoted to ballot design. About 40% of LEOs reported that they had little or no familiarity with ballot-design studies and best practices, about half were moderately familiar, and only about 10% reported a high level of familiarity.

Figure 44. Assessment by LEOs of Whether Confusing Ballots Were a Problem for Voters, 2008

Source: Analysis by CRS of data from capstone surveys.

Figure 45. Relationship Between Assessments by LEOs About Confusing Ballots and Whether Additional Resources for Ballot Design Would Be Useful, 2008

Source: Analysis by CRS of data from capstone surveys.

Notes: The graph shows mean responses on benefit of LEOs choosing different ratings for whether ballots were confusing. Ratings of 8 to 10 (on a scale of 0 to 10) for the latter were combined because of small sample sizes.

The incidence of long lines at the polling place was highest in jurisdictions using DREs.

Another notable result was the fairly high incidence of LEOs who reported excessively long lines at the polling place. About 11% of all respondents reported long lines in 2006, and 7% in 2008. The prevalence was much higher in jurisdictions using DREs as the primary system, occurring in about one-quarter of them in 2006 and 14% in 2008. In those using other kinds of voting systems, long lines were reported by only about 5% of respondents in both surveys. Jurisdictions using DREs also reported more unfair media coverage (19% in 2006 and 7% in 2008) than users of other systems (5% in 2006 and 2% in 2008).

The incidence of problems with accurate and timely reporting of election results was low. It did not differ among users of the different kinds of voting systems, except for lever machine users. They reported a much higher incidence, about 10%, of failure of polling places to report accurately in both surveys. That was about five times the rate of users of other voting systems.

Reports of deliberate election fraud of any kind were also few—8 LEOs in 2006 and 14 in 2008, under 1% of jurisdictions. Such a rate might nevertheless be considered unacceptably high, depending on such factors as the seriousness of the offense, the impact on the election of such attempts at fraud, and the degree to which election officials are able to detect all such attempts.

In 2006, the number of LEOs reporting races that required recounts (264, or 18% of all survey respondents and 27% of LEOs reporting incidents) was much higher than in 2008 (156, or 12% of respondents and 23% of those reporting incidents). Not surprisingly, recounts were much more likely to be reported when a race was close. They were also more likely in jurisdictions using lever machines and hand-counted paper ballots than in those using optical scan or DRE systems.

LEOs noticed no change on average in residual votes (overvotes plus undervotes plus spoiled ballots) from 2004 to 2006.74 About 60% reported no change, and about 20% each reported an increase or a decrease. This result suggests that the decreased confidence LEOs had in 2006 in the ability of voting systems to reduce voter error was not a result of a noticeable increase in such error. Alternatively, the decrease in confidence might have resulted from sources such as changes in media coverage of voting-system problems.

The number of provisional ballots used varied greatly among jurisdictions in 2006. About 30% of that variability was explainable by the number of voters in the jurisdiction. Thus, jurisdictions with fewer than 1,000 registered voters used about 10 provisional ballots on average and those with more than 100,000 voters used 1,500. Across all jurisdictions, one provisional ballot was used for every 140 registered voters on average.75 About a quarter of jurisdictions, mostly small, used no provisional ballots, and about 4% used more than 1,000, with a maximum of 15,000 reported by a jurisdiction with about half a million voters.

In 2008, LEOs were asked for the percentage of ballots cast that were provisional, rather than the number cast. About one-third used no provisional ballots. In about half of jurisdictions, some were cast but they accounted for 1% or less of all ballots. In about 15%, they accounted for 1%-5% of all ballots, and in about 6% of jurisdictions, they accounted for more than 5%. About 2% of jurisdictions stated that at least one polling place had run out of provisional ballots during the election. The percentage of provisional ballots cast did not vary with the size of the jurisdiction. Also, 36% of LEOs reported an increase in the use of provisional ballots in comparison to 2006, while 26% reported a decrease.

When asked whether provisional ballots were easier to use than they had been in the previous election, LEOs found them slightly easier to use on average in 2006 and again in 2008. However, there was far more variability in the 2008 results, with higher proportions of LEOs finding them both easier and more difficult than in the previous election (Table 11). The reasons for the increased variation are not clear.

Table 11. Assessments of LEOs About Whether Provisional Ballots Were Easier or More Difficult to Use Than in the Previous Election, 2006 and 2008

Change from Previous Election | 2006 | 2008
Easier | 16% | 36%
The Same | 75% | 38%
More Difficult | 9% | 26%

Source: Analysis by CRS of data from capstone surveys.

The percentage of jurisdictions offering early voting increased substantially from 2006 to 2008.

The percentage of jurisdictions offering early voting increased from about half in 2006 to nearly two-thirds in 2008, and most LEOs reported that the number of early voters increased in 2008. In 2006, about a third of jurisdictions offering early voting reported using optical scan, a third DREs, and fewer than one in ten paper ballots. In 2008, more than half used optical scan, with about a third offering CCOS and a quarter PCOS, while almost half used DREs and about one in six paper ballots.76 In 2008, a quarter of jurisdictions used two different kinds of voting systems for early voting, and a few offered three. Among those using more than one system, only about 10% did not use DREs. About half used CCOS and DREs, a quarter PCOS and DREs, and a fifth paper ballots and DREs. In 2008, a higher proportion of votes were cast early in jurisdictions using DREs as their main voting system (27%) than in those using other systems (14%) (see Figure 46).

The rate of absentee voting has been increasing nationally over the last several elections, as the number of states offering early and "no excuse" absentee voting has increased.77 In both 2006 and 2008, 85%-90% of jurisdictions used only one kind of voting system for absentee ballots, with most of the rest using two, and a few three. About three-quarters of jurisdictions used optical scan systems, and about one-quarter used paper ballots. About 10% reported using DREs. In most cases these were for "in-person absentee ballots,"78 but in some cases, LEOs reported that election officials entered choices submitted on paper ballots into DREs.

The survey asked LEOs to provide information on the percentage of all votes cast by absentee voting in 2006 and 2008. The average reported was about 15% in each election, with 1%-5% being most common in both elections (Figure 47). However, most LEOs reported that the number of absentee ballots increased in 2008.79 In contrast to early voting, CCOS jurisdictions had a higher proportion of ballots cast via absentee (26%) than did jurisdictions using other systems (13%) (Figure 46). The overall average rate is very similar to the ones reported in the EAC's election day surveys (14.2% for 2006 and 17.3% for 2008).80

Most LEOs received voted ballots from military and overseas voters after the deadline.

In 2008, more than 95% of LEOs reported receiving requests for ballots from military and overseas voters. However, 62% also reported receiving voted ballots81 from such voters after the deadline for receiving ballots. Many LEOs suggested ways to improve participation by such voters, the procedures for which vary by state. By far the most common suggestion was to permit greater use of electronic methods (fax, e-mail, and Internet). Other common suggestions were more time for preparing, distributing, and processing ballots for such voters; improved training and awareness of voters and military personnel; and better ways of keeping address information current.

Some observers have expressed concerns about early and "no excuse" absentee voting, arguing, among other things, that they do not increase turnout and that they pose some security risks. These concerns were largely not shared by LEOs (Figure 48). About three-quarters agreed that absentee voting should be considered a voter's right, and about half that early voting should be.82 Most also agreed that absentee voting is worth the costs, and that verification of authenticity is not difficult for those ballots.

LEOs were equivocal about whether early voting was worth the costs in 2006 but supported it on average in 2008. Among users of different kinds of voting systems, lever-machine users were somewhat negative, DRE users were positive, and those using paper ballots and optical scan were neutral. All except lever-machine users believed on average that early voting should be a right, and users of all systems believed that absentee voting should be a right.

Figure 46. Mean Percentage of Ballots Cast via Absentee and Early Voting as a Function of Primary Voting System Type in a Jurisdiction, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Notes: Data presented are means for jurisdictions using each type of voting system. Early voting data for lever machine jurisdictions are excluded because too few reported using early voting to provide meaningful results for the chart.

Problems with pollworkers were common.

About 10% of jurisdictions experienced one or more instances of pollworkers not reporting for duty (see Table 10 above). Since the average jurisdiction used more than 150 pollworkers, the impact may be small on average (although not necessarily in the affected polling places). Nevertheless, absenteeism among pollworkers has been cited as a significant problem on election day.83 Factors that might contribute include long hours, low pay, poor training, and age or illness, but analysis of pay and training data from the survey did not point to those factors as being significant.84

Figure 47. Percentage of Votes LEOs Reported as Cast via Absentee Voting, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

About 20% of LEOs reported instances of pollworkers who did not understand their jobs.85 The lowest rate was in jurisdictions using hand-counted paper ballots. Results from LEOs using other kinds of voting systems were substantially higher, but did not differ significantly from one another. It seems unlikely that the differences between the results for paper and those for other voting systems arose purely from differences in the roles of technology in the different voting systems, since the technology-related tasks of pollworkers in jurisdictions using CCOS are unlikely to be much greater than those in jurisdictions using paper ballots. There are several other possible factors. For example, the average total number of pollworkers, polling places, and registered voters reported by LEOs is far lower for jurisdictions using paper ballots than for any other voting system (see Figure 49). Also, the quality of training and the background and experience of pollworkers are likely to vary among jurisdictions.

Figure 48. Agreement/Disagreement by LEOs with Statements
About Absentee and Early Voting, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Use, Training, and Experience of Pollworkers

The 2006 and 2008 surveys included several additional questions about pollworkers. More than 95% of LEOs reported using one or more pollworkers, with a mean number of more than 200 in a jurisdiction86 and a maximum of more than 10,000. The number of pollworkers in the jurisdictions was strongly correlated with the number of registered voters reported, as was the total number of polling places. The kind of voting system used also varied with the number of registered voters.

Overall, jurisdictions using hand-counted paper ballots had the smallest number of registered voters, polling places, and pollworkers, and those using DREs and lever machines the highest (Figure 49). On average, there were 5-10 pollworkers per polling place. Jurisdictions using lever machines and DREs had the lowest number, and those using other voting systems did not differ significantly from each other. There were about 1,000 voters per polling place on average, with jurisdictions using paper ballots having the fewest, about 600. The pattern was similar for the number of registered voters per pollworker, with an overall average of 160, and 100 for paper-ballot jurisdictions.87

While Figure 49 suggests that the number of registered voters, polling places, and pollworkers increased in 2008, the large variation among jurisdictions meant that the only statistically significant increase was for pollworkers per polling place, which increased slightly for PCOS jurisdictions.

Compensation of pollworkers also varied substantially. Respondents reported paying pollworkers $100 on average for work on election day.88 The results suggest that there is significant variation among the states, with averages ranging from a low of about $30 to a high of more than $200. Very few respondents reported paying nothing to pollworkers. Rates of pay did not vary significantly with the number of registered voters or the type of voting system used, and they did not change significantly from 2006 to 2008.

Pay also did not vary with performance. LEOs who reported problems with pollworker performance paid them no less per day on average than those who did not. However, the survey did not explore potentially influential demographic factors such as age of pollworkers or average cost of living.

Perhaps more surprisingly, the amount of training pollworkers received was also not associated statistically with reports of performance problems, in either survey.89 However, more LEOs than not believed that inadequate training was responsible for problems with election administration (Figure 50), with DRE users expressing the most concern and paper-ballot users the least. Most also believed that training needs significant improvement (Figure 51), with lever machine and DRE users expressing the most concern and paper-ballot users the least. The overall level of concern about the impact of inadequate training and the need to improve it was somewhat lower in 2008 than 2006. Not surprisingly, in both surveys LEOs who believed more strongly that inadequate training caused problems also tended to believe more strongly that improvements in training were needed.

In both surveys, 93% of LEOs reported that pollworkers received training, about two to three hours on average (Figure 52).90 Seventy percent of LEOs considered pollworker training "extremely important," and only a few considered it "not important at all."91 The amount of training was about 20% lower on average for jurisdictions using paper ballots than for those using other kinds of voting systems. In just under 10% of jurisdictions, training was 1 hour or less; in three-quarters, it was 2-4 hours; and in only 5% was it one day or more.

Figure 49. Relationships Between Kinds of Voting Systems Used and Selected Characteristics of Jurisdictions, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Note: See the note for Figure 11 for an explanation of types of voting systems, and the note for Figure 15.

Figure 50. Views of LEOs on the Responsibility of
Inadequate Pollworker Training for Problems with
Election Administration, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

Figure 51. Views of LEOs on the Need for
Improvement of Pollworker Training, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

In 2008, LEOs were asked their views about the best method for training pollworkers. Nine out of 10 believed that classroom training was most effective, with the rest preferring reading materials, Internet, or other methods, such as instruction at the polling place.

Figure 52. Number of Hours of Pollworker
Training Reported by LEOs, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

There appeared to be substantial uniformity among jurisdictions in the areas in which pollworkers were trained (Figure 53),92 with more than 90% being trained in voter check-in, accessibility, election laws, operation of voting machines, and election integrity. LEOs were not asked what areas of training should be improved, but another study that surveyed pollworkers in New Mexico found that many desired more training in voting-machine operation and election laws.93 Interestingly, that finding reflects the views of many LEOs about their own training, as discussed earlier in this report.

Figure 53. Areas of Training for Pollworkers Reported by LEOs, 2006

Source: Analysis by CRS of data from capstone surveys.

Figure 54. Level of Concern Reported by LEOs
About the Negative Impact of Increased Election
Complexity on Pollworker Recruitment, 2006

Source: Analysis by CRS of data from capstone surveys.

LEOs also believed that HAVA is changing the nature of pollworker training, with 20% reporting that the changes were "substantial." As reported earlier (see Table 6 and Figure 33 above), most LEOs believed that HAVA has made elections more complex to administer. In 2006, most also expressed concern that the increased complexity of elections would have a negative impact on recruitment of pollworkers, and more than a third of respondents were "extremely concerned" (Figure 54).94 In 2008, most agreed with the statement that recruiting pollworkers was difficult, but nevertheless felt equally strongly that the number of pollworkers in their jurisdictions was adequate, and that the pollworkers had the necessary knowledge and skills to perform effectively. LEOs were also neutral about whether the increased technological complexity of voting systems has made it difficult for pollworkers to perform their election-day duties (Figure 55).

Users of hand-counted paper-ballot systems expressed the most positive views about their pollworkers. They felt most strongly that the number of pollworkers they had was adequate, and they were the only set of users who were neutral about the difficulty of recruiting pollworkers. DRE users might have been expected to have the highest concerns about pollworker knowledge and skills, but in fact lever-machine users expressed the most concern.

HAVA established two programs to provide incentives for student participation in election administration, one for high school and the other for college students.95 To help identify what impacts those programs might have had, LEOs were asked in 2008 whether they had experienced an increase in high-school and college student volunteers. About one-quarter reported an increase (Figure 55), but it could not be determined for this report which respondents were in jurisdictions that may have benefited from those programs. However, DRE users were the most likely to report an increase, and lever-machine and paper-ballot users the least likely.

Nonpartisan Election Officials

Some observers have suggested that the environment in which election officials operate is too politically contentious and that steps should be taken to make election administration more nonpartisan. For example, some believe that state election officials should not be permitted to be involved in political campaigns other than for their own positions. The 2006 and 2008 surveys asked LEOs several questions about this issue. In general, LEOs were satisfied with election administration at the state level (Figure 56), with only about 10% expressing significant dissatisfaction. More LEOs than not also believed that election administration in their state was independent of partisan politics.96 Those views did not change significantly from 2006 to 2008 and they did not vary depending on whether a LEO was elected or appointed.

Figure 55. Agreement/Disagreement of LEOs with Statements About Pollworkers, 2008

Source: Analysis by CRS of data from capstone surveys.

In 2008, LEOs were asked whether the state role in local elections had changed in the last five years. Not surprisingly, three-quarters believed the state role had increased. About 60% of those who believed the state role had changed considered the effects of that change beneficial in their jurisdictions.

Figure 56. Assessments by LEOs About Aspects of the Election Administration Environment, 2006 and 2008

Source: Analysis by CRS of data from capstone surveys.

There was more variation in the views of LEOs about the political contentiousness of the election-administration environment, with more believing it contentious in 2008 than in 2006. LEOs who were appointed were more likely to find the environment contentious than those who were elected, although that difference was not significant in 2008. In 2008, LEOs were asked whether the degree of contention had increased since the last election. While more than half believed there had been no change, more believed it had increased than believed it had decreased. About 40% believed that the political environment had made election administration more difficult, while half believed it had made no difference.

In 2006, LEOs were also asked whether election administration should be a civil service function in their state. About half had no opinion, but significantly more elected LEOs were opposed to the idea than favored it. Appointed LEOs were evenly divided (Figure 57).

Figure 57. Views of LEOs About Whether Election Administration Should Be Part of the Civil Service in Their States, 2006

Source: Analysis by CRS of data from capstone surveys.

Possible Caveats

As with all surveys, care needs to be taken in drawing inferences from the results. One question that could arise is whether the sample is representative of LEOs as a whole. For example, simply drawing the sample at random from the nationwide pool of election administrators would have resulted in a disproportionately large number of jurisdictions from New England and the upper Midwest, where elections are administered by townships rather than counties.97 Steps were taken in the design of the studies to minimize the risk that the sample would not be representative (see the Appendix). Overall, neither the sample design nor the characteristics of the responses suggest that the results are unrepresentative of the views and characteristics of local election officials.

Another potential caution for interpretation relates to the inherent limits of surveys such as these. In particular, there is no way to guarantee that the responses of the election officials correspond to their actual beliefs. In addition, there is no way to be certain that any particular belief corresponds to reality. The question on voting-system characteristics (see Figure 19) provides an illustration of the possibility for disparity. For several reasons, LEOs might be reluctant to rate their voting systems low in reliability, accuracy, and security, despite the anonymity of the results. Alternatively, they might truly believe that their voting systems are highly reliable, accurate, and secure, even if independent evidence does not support that view.

Also, some caution is needed in assigning cause and effect. The mere existence of an association or correlation between a factor and an effect does not necessarily mean that the factor caused the effect. For example, the survey showed a strong association between the kind of voting system used in a jurisdiction and the number of pollworkers (see Figure 49). However, while the kind of voting system may have some independent effect, a more important factor is likely to be the number of registered voters.

A final caution involves how survey results might be used to inform policy decisions. On the one hand, the results could be used to support the shaping of policy in directions expressed by LEOs in their responses. In many cases, such policy changes might be appropriate. On the other hand, it is possible that at least some of those desired changes would not in fact yield the most effective or appropriate policies. In such cases, the results might more constructively be used to help policymakers identify issues for which improvements in communication and understanding are needed.

Potential Policy Implications

The survey results may have policy implications for several issues at the federal, state, and local levels of government. Some issues that may be relevant for congressional deliberations are highlighted below.

Election Officials

Many observers have commented favorably on the experience and dedication of the nation's local election officials. Survey results are consistent with that view. At the same time, other observers, including some election officials, have called for increased professionalism in election administration. Some survey results suggest areas of potential professional improvement, such as in education and in professional involvement at the national level. Congress could address this potential need by several means, for example, by facilitating educational and training programs for LEOs and by promoting professional certification of election officials by entities accredited through the EAC or a professional association.

The seemingly unique demographic characteristics of LEOs as a group of government officials may have other policy implications, but those implications are not altogether clear. Some observers may argue that efforts should be undertaken to ensure that LEOs reflect the diversity of the workforce or voting population as a whole, especially with respect to minority representation.

The issue of partisanship among election officials has been controversial for several years. Most national attention has been on state officials, but, given that most LEOs are elected and only about half the local jurisdictions in the United States are administered on a nonpartisan or bipartisan basis, policymakers may wish to consider the influence of partisanship among LEOs.

Voting Systems

Since the enactment of HAVA, controversy has arisen over whether DRE voting systems are sufficiently secure and reliable. The survey revealed that LEOs who have experience with DREs are very confident in them, consider them superior for accessibility, and do not generally support the addition of a voter-verified paper audit trail (VVPAT) to address security concerns, although those who use a VVPAT are satisfied with its performance. However, LEOs using other systems are much less confident in DREs and more supportive of VVPAT. The strongly dichotomous results suggest that as Congress considers whether to require changes in the security mechanisms used in voting systems, it might be useful to determine whether DRE users are overconfident in the security of their systems and procedures in practice, or, alternatively, whether nonusers might be misinformed about the reliability and security of DRE systems.98

The Help America Vote Act (HAVA)

The survey results suggest that HAVA is in the process of achieving several of its policy goals. The general and increasing support for most HAVA provisions, including provisions that have been somewhat controversial, such as the creation of the EAC and the provisional ballot requirement, implies that most LEOs agree with the goals of the act and are active partners in its implementation.

The overwhelming selection by jurisdictions of new voting systems that assist voters in avoiding errors indicates that the HAVA goal of reducing avoidable voter error is in the process of being met. The areas of concern expressed by LEOs—such as how to meet the costs of ongoing implementation of HAVA requirements—raise issues that Congress may wish to address as it considers HAVA appropriations and reauthorization.

The close relationship between LEOs and the vendors of their voting systems seems unlikely to change as a result of HAVA. However, with the codification by HAVA of the voting system standards and certification processes, the influence of the federal government in decisions about new voting systems might be expected to increase in relation to that of vendors and others. The increased concerns of LEOs in 2006 that vendors, media, political parties, and advocacy groups have too much influence on such decisions may bear consideration.

Research Needs

Scientific opinion surveys of local election officials are rare,99 and additional research may be useful to address some of the matters raised by these studies. For example, a survey of state election officials might provide useful information and might additionally be helpful in assessing the most appropriate federal role in promoting the effective implementation of HAVA goals at all levels of government.

One common suggestion of LEOs for improving HAVA was to provide a means of adjusting requirements to fit the needs of smaller jurisdictions. To determine what, if any, such adjustments would be appropriate, it may be useful to have specific information on how the needs and characteristics of different jurisdictions vary with size—something that was beyond the scope of these surveys. It could also be useful to identify how the duties of LEOs vary with size and other characteristics of the jurisdiction. In many jurisdictions, election administration is only part of the LEO's job. It is not known to what degree these other responsibilities might affect election administration—negatively or positively.

Finally, these surveys have provided only snapshots of LEO characteristics and perceptions over three election cycles. It might be beneficial to perform similar surveys periodically to identify trends and explore new questions and issues.

Appendix. Notes on Methodology

The results presented and analyzed in this report are from three surveys sponsored by CRS as part of its Capstone program and performed by graduate students and faculty at the George Bush School of Government and Public Service at Texas A&M University in 2004 and 2006, and the Department of Political Science at the University of Oklahoma in 2008.100 For all three studies, the CRS project manager was Eric Fischer and the project liaison was Kevin Coleman.101

The topics for the surveys were developed collaboratively by CRS and Texas A&M and University of Oklahoma participants. The major factor in choosing the topics was potential usefulness of the results for Congress. The Bush School and University of Oklahoma teams developed and administered the survey instruments in consultation with CRS and provided the authors with the data used in performing the analyses.

The three surveys were conducted after the November 2004, 2006, and 2008 federal elections, between December and the following April. For each survey, a sample of approximately 3,800 LEOs was drawn from the roughly 9,000 election jurisdictions in the 50 states.102 To ensure that LEOs from all states were included, but that states with large numbers of LEOs were not disproportionately represented (see Figure A-1), a modified random-sampling regime was used, as follows: Surveys were sent to all LEOs in states with 150 or fewer local jurisdictions. For the ten states with more than 150 LEOs, a sample of 150 was chosen at random from the local jurisdictions, and surveys were sent to those LEOs.103
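For illustration, the sampling rule just described can be sketched in a few lines of Python. This is only an illustrative sketch, not the survey teams' actual procedure; the data structure (a mapping from each state to its list of LEOs) is an assumption made for the example.

import random

CUTOFF = 150  # states with this many LEOs or fewer receive a full census

def draw_sample(leos_by_state, seed=0):
    """Census of small states; random sample of 150 LEOs from larger states."""
    rng = random.Random(seed)
    sample = []
    for state, leos in leos_by_state.items():
        if len(leos) <= CUTOFF:
            sample.extend(leos)  # survey every LEO in the state
        else:
            sample.extend(rng.sample(leos, CUTOFF))  # random 150 from large states
    return sample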

Each survey was initially distributed in the month following the election (December). Administration was mostly electronic, with respondents visiting a website to enter their responses. In cases where electronic administration was not possible, LEOs were sent paper surveys via the U.S. Postal Service. Those who did not respond were sent reminders or contacted by telephone, with the survey response period closing in March or April following the election.

Figure A-1. Frequency Distribution of the Number of Local Election Jurisdictions in the States, 2004

Source: CRS analysis of data provided by the Election Reform Information Project (electionline.org) and other sources.

Note: Data are from 2004, but the distribution of jurisdictions did not change significantly across the surveys.

For each survey, the overall final response rate was about 40% of the sample, or about 17% of all jurisdictions in the United States. Respondents answered 85%-90% of questions, on average.104 The response was sufficiently high to permit statistical analysis and comparison of the results among the surveys. The distribution of responses among states was similar across the surveys (see Figure A-2). Individual response rates per state were between 25% and 50% for about three-quarters of states. The remainder were evenly split between those for which under 25% of LEOs responded and those for which the rate was greater than 50%. For no survey did response rates vary significantly with the number of local election jurisdictions in a state or with its voting-age population.

About 70% of respondents worked in county election jurisdictions, with most of the remainder working in townships (Figure A-3). The small difference among the surveys in those choosing "town/township" and in those choosing "other" was almost certainly a result of a small change in the structure of the question after the 2004 survey.105 However, as the figure implies, the proportion of respondents from county jurisdictions increased slightly over the course of the three surveys.

Figure A-2. Frequency Distribution of Response Rates by State

Source: Analysis by CRS of data from capstone surveys.

All the results presented in this report are from analyses by CRS of data provided from the surveys by researchers at Texas A&M University (2004 and 2006) and the University of Oklahoma (2008). The raw data were first examined for errors, which arose in only a few cases, such as when a LEO claimed to work more hours per week than is physically possible.106 Where the correct answer could be reasonably discerned, the response was corrected.107 Otherwise it was discarded. Once cleaned, the data were analyzed using standard parametric methods, mainly analysis of variance, linear regression, and Student's t-tests, as appropriate.
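A minimal sketch of the kind of correction described here follows. The 168-hour ceiling and the midpoint rule come from footnote 107; the function and its range-detection heuristic are otherwise illustrative assumptions, not the analysts' actual code.

MAX_HOURS_PER_WEEK = 168  # physical maximum noted in footnote 107

def clean_hours(raw):
    """Return a plausible hours value, or None if the response must be discarded."""
    if raw <= MAX_HOURS_PER_WEEK:
        return float(raw)  # already plausible; keep as reported
    digits = str(raw)
    if len(digits) == 4:  # e.g., 1015 read as the collapsed range 10-15
        low, high = int(digits[:2]), int(digits[2:])
        if low < high <= MAX_HOURS_PER_WEEK:
            return (low + high) / 2  # midpoint of the inferred range
    return None  # intent cannot be discerned; discard

# clean_hours(1015) -> 12.5, clean_hours(2530) -> 27.5, clean_hours(40) -> 40.0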

Three kinds of hypotheses were tested:

Figure A-3. Kinds of Jurisdictions Administered by Survey Respondents

Source: Analysis by CRS of data from capstone surveys.

Note: In each survey, the choices for kind of jurisdiction were county, town, township, borough, and other. For this graph, the replies for town and township were combined, as were the replies for borough and other.

Statistical significance was determined using a significance level (α) of .01. However, for display purposes, graphs with error bars were drawn showing 95% confidence intervals for the means.

Most tests for which results are presented in this report yielded highly statistically significant results—p-values much lower than the significance level (p << .01). For reported data where statistically significant effects were not found, the lack of effect is noted in the text, for example, by stating that no change was found between 2004 and 2006 for a particular survey item.
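For readers interested in the mechanics, the following sketch shows how such tests might be run in Python with SciPy, using the α = .01 threshold and the 95% confidence intervals described above. The function names and any sample data are hypothetical; this is not the code used to produce the report's results.

import numpy as np
from scipy import stats

ALPHA = 0.01  # significance level used for hypothesis tests

def differs(ratings_a, ratings_b):
    """Two-sample t-test: do mean ratings differ between two groups or years?"""
    t, p = stats.ttest_ind(ratings_a, ratings_b, equal_var=False)
    return p < ALPHA

def ci95(ratings):
    """95% confidence interval for the mean, as drawn by the error bars."""
    x = np.asarray(ratings, dtype=float)
    half = stats.sem(x) * stats.t.ppf(0.975, df=len(x) - 1)
    return x.mean() - half, x.mean() + half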

Additional methodological details can be provided upon request.

Acknowledgments

Many people provided helpful assistance in the development of this report. The authors wish to thank the following in particular for their invaluable contributions:

The principal investigators at Texas A&M University and the University of Oklahoma were Drs. Donald P. Moynihan and Carol L. Silva for the 2004 survey, and Carol L. Silva for 2006 and 2008. Ten graduate students participated in the first survey: Jennifer Gray, Marshall Gray, Joshua Hodges, Jeff Jewell, Marcia Larson, Ryan Mitchell, Erin Murello, Steve Murello, Alice Reeves, and Julie Siddique. Six participated in the second: Brock Ramos, Robert Thetford, Trait Thompson, Staci Thrasher, Shavonda Johnson, and Carlos Cruz-Fernandez. The students participating in the third survey were Nathan Brown, Erin Ford, Paul Lore, Natalie Jackson, Christopher Murray, Sarah Norris, Thomas Rabovsky, Guanhai Song, Breanca Thomas, Sarah Trousset, and Dustin Woods; Natalie Jackson also served as teaching assistant. The hard work, dedication, and intellectual contributions of those faculty and students were essential to the success of the surveys.

Julia Boortz, a student at the Massachusetts Institute of Technology, provided exceptional contributions to data analysis and the development of content for this report while an intern at CRS in 2010.

Doug Chapin and Sean Greene of the Election Reform Information Project (electionline.org) provided the original data set of local election officials.

Nearly 1,500 local election officials took time from their busy schedules to complete the three surveys. The report would not have been possible without their thorough and thoughtful responses to the many survey questions.

Footnotes

1.

The surveys were designed for and sent to officials with primary responsibility for elections within a local jurisdiction—for example, a town clerk or county election director.

2.

For discussion of results from earlier surveys, see also CRS Report RL32938, What Do Local Election Officials Think about Election Reform?: Results of a Survey, by [author name scrubbed] and [author name scrubbed], and CRS Report RL34363, Election Reform and Local Election Officials: Results of Two National Surveys, by [author name scrubbed] and [author name scrubbed].

3.

In this report, the term paper ballots refers to hand-counted paper ballots. Optical scan is used to refer to PCOS and CCOS ballots, which are also composed of paper.

4.

Source: Election Reform Information Project, http://www.electionline.org.

5.

However, the increase observed in the survey data was not statistically significant (see the Appendix for an explanation of the statistical methods used). In this and other cases where the data were highly skewed toward one end of the distribution, the median (the midpoint of the distribution) from the LEO surveys is reported rather than the arithmetic mean; because the mean is more sensitive to extreme scores, the median is generally considered a better measure of the typical value in such cases. The mean number of registered voters reported increased from about 40,000 in 2006 to 50,000 in 2008. Data on voters and polling places were not collected in the 2004 survey.
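The sensitivity of the mean to extreme scores is easy to demonstrate. In this small example (illustrative numbers only, not survey data), one large jurisdiction pulls the mean far above the typical value while the median is unaffected:

import statistics

voters = [800, 1200, 1500, 2000, 2500, 900000]  # five small jurisdictions, one huge
print(statistics.median(voters))  # 1750.0 -> close to the typical jurisdiction
print(statistics.mean(voters))    # about 151333 -> dominated by the outlier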

6.

According to one report, the number of registered voters nationwide was 189 million in 2008, an increase of 17.5 million, or 10%, from 2006 (Election Assistance Commission, The Impact of the National Voter Registration Act of 1993 on the Administration of Elections for Federal Office 2007-2008: A Report to the 111th Congress, June 30, 2009, http://www.eac.gov/assets/1/AssetManager/The%20Impact%20of%20the%20National%20Voter%20Registration%20Act%20on%20Federal%20Elections%202007-2008.pdf, p. 1). Another study yielded a 7% increase (Michael P. McDonald, "2008 General Election Voter Registration Statistics," United States Elections Project, March 6, 2009, http://elections.gmu.edu/Registration_2008G.html).

7.

The median was 13 polling places in 2006 and 14 in 2008, with means of 32 and 37, respectively. The increase was not statistically significant. The minimum was zero because Oregon is a vote-by-mail state and does not generally use polling places. The maximum was about 1,000 in 2006 and about 1,700 in 2008. Not surprisingly, in each election the number of polling places in a jurisdiction was strongly correlated with the number of registered voters.

8.

This result is similar to the figure of 61% reported from an independent study in David C. Kimball and Martha Kropf, "The Street-Level Bureaucrats of Elections: Selection Methods for Local Election Officials," Review of Policy Research 23, no. 6 (2006): 1257-1268.

9.

Women make up about 60% of that workforce: see U.S. Census Bureau, "2000 Supplementary Survey Summary Table P068," available at http://factfinder.census.gov. Comparable data from the 2010 census were not available for this report.

10.

About 53% of the managers are men: see U.S. Census Bureau, "Census 2000 EEO Data Tool," available at http://www.census.gov/eeo2000/index.html.

11.

Ibid.

12.

Ibid.

13.

The cause of this change is not clear. However, the pattern is consistent with the contention by some observers that the changes in election administration brought about by HAVA could increase turnover.

14.

Many LEOs, for example, town clerks, have duties other than election administration.

15.

The proportion increased slightly for those who considered themselves "middle of the road" (34% in 2004 and 36% in 2008) or liberal (16% and 20%).

16.

The proportion is an estimate determined by comparing the number of LEOs who answered this question with the number answering the gender question, which was in the same section of the survey. Such a comparison was necessary because LEOs were asked only to indicate the organizations to which they belonged, not whether they belonged to any organization. The question on gender was chosen for the comparison because only a few LEOs answered the question on membership but not the question on gender, fewer than for other questions in that section of the survey. Using the percentage of total survey respondents would yield estimates of 60%-67%, but those are almost certainly underestimates.

17.

These questions were not asked in 2004 or 2008.

18.

Lever machines were used by 25% of townships in 2004, but none reported using them in 2008. As of 2011, lever machines were no longer used for voting in any U.S. election jurisdiction.

19.

Most of these declines were in counties, of which 10% used paper ballots in 2004 but only 2% in 2008, and 28% used CCOS in 2004 but 18% in 2008. The use of those two voting systems changed little in township jurisdictions, remaining at about 20% for paper ballots and 10% for CCOS.

20.

Both county and township use of PCOS increased, from 26% of counties in 2004 to 37% in 2008, and from 30% of townships in 2004 to 50% in 2008. DRE use remained at about 5% among townships, but increased from 22% to 35% in counties.

21.

See, for example, Election Data Services (EDS), "Nation Sees Drop in Use of Electronic Voting Equipment for 2008 Election—A First," October 17, 2008, http://www.edssurvey.com/images/File/NR_VoteEquip_Nov-2008wAppendix2.pdf. While the EDS report showed a similar small decrease in use of DREs by jurisdictions from 2006 to 2008, it also showed a larger decrease in use by voters, because jurisdictions that changed tended to be larger ones. Also, the EDS report provided results from all jurisdictions rather than a sample. However, it did not distinguish between central count and precinct-count optical scan systems.

22.

The results described here refer to the primary or main voting system used in a jurisdiction—the one that most voters would use. HAVA also requires that every polling place have at least one fully accessible voting system such as a properly equipped DRE. As a result, many jurisdictions using other kinds of voting systems also had one DRE per polling place.

23.

electionline.org, Election Preview 2008 (Pew Center on the States, October 2008), http://www.pewcenteronthestates.org/uploadedFiles/Election%20Preview%20FINAL.pdf.

24.

Election Data Services, "Nation Sees Drop in Use of Electronic Voting." Most used punchcard, lever machine, and hand-counted paper systems.

25.

This problem had long been recognized. A 1988 National Bureau of Standards report called for the elimination of Votomatic pre-scored punch card voting, still in use in Palm Beach County, FL in the 2000 election. See Roy G. Saltman, Accuracy, Integrity, and Security in Computerized Vote-Tallying, NBS Special Publication 500-158 (National Bureau of Standards, August 1988), http://www.itl.nist.gov/lab/specpubs/500-158.htm.

26.

Many of the jurisdictions where LEOs chose "Other" may in fact use one of the three systems specified, but the description provided did not permit that determination (e.g., "paper" might mean a BMD).

27.

However, the results should be interpreted with some caution, since the rate of response to this question was relatively low—about 22% of survey respondents, whereas the median response rate to a specific question was 80%.

28.

Specifically, LEOs were asked about the statement, "Public interest groups/civil rights groups/advocates for the disabled have too great an influence on the process."

29.

Many respondents commented that they should not have been required by the federal government to change voting systems or to add accessible ones.

30.

Several questions in the 2004 survey were omitted in 2006 to make room for additional questions about election administration and the impacts of HAVA. Nevertheless, the 2006 survey had more than twice as many questions as the 2004 instrument. A smaller number of changes were made to the set of 2008 survey questions.

31.

Not surprisingly, the lowest interaction (13% of LEOs) was in paper-ballot jurisdictions, and the highest was in optical scan and DRE jurisdictions (about 85%).

32.

However, in the 2006 survey, about one in eight reported that vendors did not provide the expected level of support on election day (discussed later in this report).

33.

This question explored the views of LEOs about the concern that some observers have raised that the range of services vendors provide in some jurisdictions may amount to a kind of privatization of election administration.

34.

For this question, LEOs were also asked to rate their own influence, which received the highest average score. The question also asked about the influence of some other actors, such as courts and voters, and it listed elected and nonelected state and local officials but not election officials specifically, except the respondents themselves and the EAC.

35.

For this question, LEOs were asked to rate how they felt about the use of different types of voting systems for elections in the United States, on a scale of 1 (strongly oppose) to 7 (strongly support). The types of voting systems listed were lever machines, punchcard systems, hand-counted paper ballots, central-count optical scan, precinct-count optical scan, DRE, Internet, and other. Only 10% of LEOs supported Internet voting, and since this type of system has not been used in public elections in the United States (except experimentally on occasion), it is not discussed further in this report. The category "other" is not discussed because the response rate was very low (<5%).

36.

Nonusers of PCOS systems supported their use about as strongly as nonusers of CCOS systems supported their use. Nonusers were most strongly opposed to the use of punchcard systems, followed by lever machines and hand-counted paper ballots.

37.

Those states include Connecticut, Maine, New Hampshire, Oklahoma, Oregon, and Vermont (CRS analysis of data from Verified Voting, "The Verifier," 2010, http://www.verifiedvoting.org/verifier, supplemented by state sources).

38.

Too few jurisdictions used punchcards in 2006 and 2008 to permit meaningful statistical comparisons for this system.

39.

This conclusion is the result of a statistical comparison from a separate question and is not shown in the graph.

40.

The choices were punchcards, CCOS, DREs, hand-counted paper, PCOS, and Internet voting.

41.

The change seems surprising on its surface, because hand-marked optical scan ballots of either type are not accessible to persons with disabilities in the sense used in HAVA. However, some manufacturers have marketed accessible ballot-marking machines, and that could account for the increase.

42.

See CRS Report RL33190, The Direct Recording Electronic Voting Machine (DRE) Controversy: FAQs and Misperceptions, by [author name scrubbed] and [author name scrubbed].

43.

This conclusion is the result of a statistical comparison of responses from users of all voting systems in 2004 and is not shown in Figure 21.

44.

This conclusion is the result of a statistical comparison of responses from users of all voting systems in 2004 and is not shown in Figure 22.

45.

According to the EAC, 21 states used DREs without VVPAT in 2008, and 16 states used DREs with VVPAT (Election Assistance Commission, The 2008 Election Administration and Voting Survey: A Summary of Key Findings, November 2009, http://www.eac.gov/assets/1/Documents/2008%20Election%20Administration%20and%20Voting%20Survey%20EAVS%20Report.pdf). However, the EAC report did not present results at the level of local election jurisdictions.

46.

States increasingly offer absentee ballots to any voter requesting them, rather than requiring a reason such as disability or absence from the jurisdiction on election day.

47.

In the 2006 survey, only DRE users were asked if VVPAT should be required. All LEOs were asked that question in 2004 and 2008.

48.

In 2004, this question focused only on disadvantages, and in 2006, it was asked only of DRE users.

49.

This question was not asked in the other surveys.

50.

This question was not asked in the other surveys.

51.

The results on year-to-year differences hold despite a small inadvertent change in this question between the two surveys. In 2004, LEOs were asked to rate the difficulty on a scale of 0 (not difficult at all) to 10 (extremely difficult). In 2006, the scale began at 1. However, that change should have caused a slight increase, not a decrease, in the scores—the opposite of the observed change for all but the two items discussed in the text. Normalizing the scores to a uniform scale did not change the described patterns significantly.

52.

The low rating for the compliance question is consistent with the decision by Congress not to grant the EAC regulatory authority.

53.

This question was asked only in 2006.

54.

There was a slight change in scale from 2006 (1-10) to 2008 (0-10), but the conclusions are unaffected, since the effect of the change was similar to that described for the difficulty of HAVA implementation in footnote 51.

55.

For some questions, LEOs were offered the opportunity to provide open-ended comments. For this question, LEOs were asked to make suggestions about ways to improve the EAC.

56.

The question in the two surveys varied slightly. The 2006 survey asked whether states had implemented a state-wide database, whereas the 2008 survey asked if they had fully complied with the requirement. As of 2010, California had still not complied (see CRS Report RS22505, Voter Identification and Citizenship Requirements: Overview and Issues, by [author name scrubbed] and [author name scrubbed]), but some LEOs from 14 other states also believed their states had not fully complied by the time of the 2008 election.

57.

This question was not asked in 2008.

58.

This question was not asked in 2008.

59.

The number of LEOs who responded to these questions in 2006 was unusually small, because of an error in the survey instrument that caused most respondents to this question to be only those who answered the staffing question in the affirmative—about 250 respondents. More than 1,000 respondents answered these questions in 2008. Therefore, additional caution is warranted in interpreting the significance of the answers in 2006 and in comparing them between the two surveys. Nevertheless, the answers were fairly consistent.

60.

The question on security was asked in both surveys, and the answers were similar, although confidence was slightly lower in 2008. The question on contingency plans was asked only in 2006 and that on accuracy was asked only in 2008.

61.

The EAC reported that at least some jurisdictions in 19 states used electronic pollbooks in the 2008 election (Election Assistance Commission, The 2008 Election Administration and Voting Survey).

62.

For more information on this issue, see CRS Report RS22505, Voter Identification and Citizenship Requirements: Overview and Issues, by [author name scrubbed] and [author name scrubbed].

63.

See, for example, electionline.org, "Voter ID Laws," April 28, 2008, http://www.pewcenteronthestates.org/uploadedFiles/voterID.laws.6.08.pdf.

64.

See, for example, Stephen Ansolabehere, Access versus Integrity in Voter Identification Requirements, VTP Working Paper #58 (Caltech/MIT Voting Technology Project, February 2007), http://www.vote.caltech.edu/media/documents/wps/vtp_wp58.pdf; Commission on Federal Election Reform, Building Confidence in U.S. Elections, September 2005, http://www1.american.edu/ia/cfer/report/full_report.pdf.

65.

This question was asked only in 2006.

66.

For more information on this issue, see CRS Report RS22505, Voter Identification and Citizenship Requirements: Overview and Issues, by [author name scrubbed] and [author name scrubbed].

67.

One HAVA requirement (§301(a)(3)(C)) went into effect January 1, 2007; it applies only to voting systems purchased with funds made available under title II after that date.

68.

In 2006, LEOs also stated that they worked an additional 20 hours per week in the month before the election. However, that question was not asked in 2008.

69.

Presumably, users of nonelectronic voting systems were reporting problems with their accessible voting systems, most of which are DREs or optical-scan systems with ballot-marking devices.

70.

A GAO survey of state election officials found that most states reported problems with voting systems during the 2006 election but that most problems had little impact (Government Accountability Office, States, Territories, and the District Are Taking a Range of Important Steps to Manage Their Varied Voting System Environments, GAO-08-874, September 2008, http://www.gao.gov/new.items/d08874.pdf).

71.

The survey asked LEOs to indicate only whether a particular event had occurred, not how many times. So if a DRE and precinct-count optical scan system have similar failure rates, then a jurisdiction using 1 DRE and 1 OS unit per polling place will probably have a lower incidence of failures than a jurisdiction that uses 10 DRE units per polling place. If the rate of failure per unit is 5%, the polling place using 1 OS and 1 DRE would have a 10% chance that at least one unit would fail, and the polling place using 10 DREs would have a 40% chance.

72.

For example, if the failure rate for DREs were 1% and that for OS 5%, a polling place using 1 OS and 1 DRE would have a 6% chance that at least one unit would fail, and the polling place using 10 DREs would have a 10% chance.
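The figures in footnotes 71 and 72 follow from the formula P(at least one failure) = 1 - (1 - p)^n for n independent units with per-unit failure rate p. A short sketch (illustrative only) reproduces them:

def p_any_failure(per_unit_rate, units):
    """Probability that at least one of `units` machines fails."""
    return 1 - (1 - per_unit_rate) ** units

# Footnote 71: both systems fail at 5% per unit.
print(round(p_any_failure(0.05, 2), 2))   # ~0.10 (1 OS + 1 DRE)
print(round(p_any_failure(0.05, 10), 2))  # ~0.40 (10 DREs)

# Footnote 72: DREs fail at 1% per unit, OS at 5%.
print(round(1 - (1 - 0.01) * (1 - 0.05), 2))  # ~0.06 (1 OS + 1 DRE)
print(round(p_any_failure(0.01, 10), 2))      # ~0.10 (10 DREs)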

73.

Since many such users also use DREs to meet the HAVA accessibility requirements, it was not possible to determine whether it was an optical scan system or a DRE that the LEO assessed as having been hacked.

74.

This question was not asked in 2008.

75.

It was not possible to determine the ratio for actual voters, since LEOs were not asked how many of the registered voters participated in the election.

76.

It could not be determined if these numbers were significantly changed from 2006, because in 2008 LEOs were asked to list all the voting systems they used for early voting, whereas in 2006 they were asked to list only their primary system.

77.

Historically, most states have required voters to provide a reason such as illness, disability, or absence from the jurisdiction on election day as part of an application for an absentee ballot. However, most states now offer early voting, "no excuse" absentee voting, or both.

78.

This is essentially a form of early voting.

79.

The greater number of absentee ballots reported in 2008 without a corresponding increase in the percentage may be a result of the higher voter turnout in 2008.

80.

Election Assistance Commission, The 2006 Election Administration and Voting Survey: A Summary of Key Findings, December 2007, http://www.eac.gov/News/press/clearinghouse/2006-election-administration-and-voting-survey; ____, The 2008 Election Administration and Voting Survey: A Summary of Key Findings. The EAC reported a domestic civilian absentee-voting rate of 13.8% in 2006 and 16.6% in 2008, and an overseas-voter rate of 0.4% in 2006 and 0.5% in 2008.

81.

A ballot that a voter returns with whatever choices have been made is called a voted ballot.

82.

Those views were significantly correlated—that is, LEOs who believed that absentee voting should be a right were more likely to believe also that early voting should be.

83.

electionline.org, "Helping Americans Vote: Poll Workers," September 2007, http://www.electionline.org/Portals/1/Publications/ERIPBrief19_final.pdf.

84.

The survey did not include questions on the age or number of hours worked by pollworkers.

85.

Note that this result does not mean that 20% of pollworkers did not understand their jobs, but that 20% of LEOs reported that lack of understanding had occurred often enough for them to consider it a problem.

86.

The median was more than 70.

87.

The averages reported here are medians, given the skewed distribution of these ratios.

88.

In 2006, respondents were given a choice of reporting daily or hourly wages. About 40% chose to report hourly compensation, which averaged $7.25.

89.

This result does not necessarily mean that no relationship exists, only that none was detected. While little research is available on this topic, available evidence supports the contention that training and performance are related (see, for example, Thad Hall, J. Quin Monson, and Kelly D. Patterson, "Poll Workers and the Vitality of Democracy: An Early Assessment," PS: Political Science and Politics, Vol. XL(4), October 2007, p. 647-654, available at http://www.vote.caltech.edu/journals/PS-ThadHall.pdf).

90.

In both surveys, the mean was about 3.25, the median 3, the mode 2, and the maximum 36.

91.

This question was asked only in 2006.

92.

This question was asked only in 2006.

93.

R. Michael Alvarez, Lonna Rae Atkeson, and Thad E. Hall, The New Mexico Election Administration Report: The 2006 November General Election, August 2, 2007, p. 20, available at http://www.vote.caltech.edu/reports/NM_Election_Report_8-07.pdf.

94.

This question was asked only in 2006.

95.

Title V authorized grants for the college program, and Title VI established a federally chartered foundation for the high school program.

96.

However, in 2006 more than half of elected LEOs (57%) indicated that they communicated their party affiliation during their election. This question was not asked in 2008. According to another study prior to the 2006 election, about one-fifth of local jurisdictions were administered by Republicans and one-quarter by Democrats, with about two-fifths nonpartisan and the remainder bipartisan (Kimball and Kropf, "Street-Level Bureaucrats," p. 1262).

97.

For example, Maine ranks 37th among states in population, with 1.3 million residents, but it ranks 4th in the number of election jurisdictions, with 518.

98.

For discussion of the DRE security issue and proposals for resolving it, see CRS Report RL33190, The Direct Recording Electronic Voting Machine (DRE) Controversy: FAQs and Misperceptions, by [author name scrubbed] and [author name scrubbed]; and CRS Report RL32139, Election Reform and Electronic Voting Systems (DREs): Analysis of Security Issues, by [author name scrubbed].

99.

The Government Accountability Office surveyed a sample of about 600 LEOs nationwide by mail and about 160 by telephone following the 2000 federal election (see Government Accountability Office, Elections: Perspectives on Activities and Challenges Across the Nation, GAO-02-3, October 2001). That survey focused largely on issues of election management, such as the availability of poll workers and the processing of absentee ballots. While results of the two surveys are not generally comparable because of differences in focus and methodology, the GAO survey did find that a high percentage of local officials expressed satisfaction with the performance of their existing voting systems, a finding consistent with the results of the current survey.

100.

See the Acknowledgments section at the end of this report.

101.

The authors wish to thank the many people who devoted time and energy to this project. Most important among them were the nearly 1,500 local election officials who took the time from busy schedules to answer the many questions in the three surveys. Doug Chapin and Sean Greene of the Election Reform Information Project (electionline.org) provided the original data set of local election officials. The skills and dedication of the principal investigators and students at Texas A&M University and the University of Oklahoma were essential to the successful completion of the project.

102.

Individual responses were anonymous, in keeping with human-subjects research requirements for the surveys. Those requirements prevented the inclusion of the District of Columbia, which has only one LEO.

103.

The number of LEOs per state varies greatly, from fewer than 10 to more than 1,000. The number varies much more strongly with the way states have chosen to organize their election jurisdictions than it does with variables such as the voting-age populations of the states. Consequently, a simple random sample of the total number of election officials in the United States would have caused states with more decentralized election administration to be disproportionately represented in the set of responses. Alternative approaches that attempted to weight the data (by state, voting-age population, or portion of LEOs, for example) would also have had weaknesses in addressing questions of representativeness. There is no simple solution to this problem, and the sampling strategy used in the three surveys was chosen as a way to strike a reasonable balance between population and geographic representation. In combination with the unweighted statistical analyses performed for this report, the strategy has the effect of increasing the relative influence of the four-fifths of states with fewer than 150 LEOs while ensuring a relatively strong influence of states with large numbers of LEOs.

104.

This number is for questions that applied to all LEOs. Some questions were targeted to specific groups, such as users of DREs.

105.

In each survey, the choices for kind of jurisdiction were county, town, township, borough, and other. In 2006, LEOs could write in the kind of jurisdiction they administered in the "other" category, and almost all of those indicated their jurisdiction as a city. The option to write in a response did not exist in the 2004 survey, and the pattern of response strongly suggests that most LEOs with city jurisdictions chose "town" or "township" as the most closely matching category.

106.

This was only an issue for those few questions where LEOs provided "ad-lib" answers rather than choosing from among a range of options.

107.

For example, when asked how many additional hours per week they worked in the four weeks preceding the election, five LEOs gave responses that appeared in the database as impossibly large numbers such as 1015 or 2530 (there are 168 hours in a week). Those responses were clearly incorrect. Given the structure of those responses, the intent was interpreted as a range, 10-15 and 25-30 in the examples, and the number of hours was corrected to the midpoint of the range, 12.5 and 27.5.