Source Type: Report
Document Type: Report
Putting civics to the test: The impact of state-level civics assessments on civic knowledge
David E. Campbell
Publication Date: 2014-09-17
Publication Year: 2014
Language: English
Abstract

Key points:

- The national debate over the efficacy of state-level exams has largely ignored whether assessments in civics enhance democratic education, but a 2012 national survey of 18–24-year-olds substantiates the hypothesis that civics assessments matter for civic education.
- While civics assessments required for high school graduation do not appear to influence voter turnout or party identification, they do lead to greater civic knowledge in youths, with the greatest gains among African Americans, Hispanics, and immigrants—especially Hispanic immigrants.
- Future research should specifically address why required civics assessments are effective, what happens to civic knowledge when states adopt or eliminate their civics assessments, and whether the behavior of students, teachers, and administrators changes when a new civics assessment is introduced.

In spite of the national debate over the efficacy of state-level exams, whether assessments in civics enhance democratic education remains largely unexamined. This paper uses a large 2012 national survey of 18–24-year-olds to examine the potential effect of civics assessments on civic outcomes. In doing so, it strives to answer three questions:

1. Do civics assessments matter? Yes, but only assessments that are required for high school graduation (that is, "consequential civics assessments").
2. For what outcomes do civics assessments matter? Consequential civics assessments lead to greater civic knowledge but do not foster greater voter participation. Nor do they influence partisan or ideological leanings.
3. For whom do civics assessments matter most? Consequential civics assessments lead to the greatest gains in civic knowledge among African American, Hispanic, and immigrant youth—especially Hispanic immigrants.

These assessments, however, do not have a systematic effect on the methods used for teaching civics. Future research should therefore focus on how students, teachers, and administrators adapt to the presence of consequential assessments in civic education.

Amidst the cut and thrust of debate over American education policy, the civic dimension of our public education system often gets short shrift. The relative inattention to civic education is both lamentable and ironic given that civic education—that is, preparation for democratic citizenship—was originally a primary objective of the public school system. This inattention means that civic education is often relegated to the sidelines in the discussion of concrete initiatives within education policy, even though all states have developed standards for civics (or related subjects, like social studies) and 40 states require students to complete at least one civics course.[1]

Accountability, and assessment specifically, figure among the most contentious issues within education policy, yet civics has largely been left out of the discussion. The debate over assessment centers on fundamental questions such as how students should be assessed, particularly at the state level, and whether assessment results in accountability and thus higher academic performance. Since the passage of the federal No Child Left Behind Act (NCLB)—and in some states, since well before that—students have taken statewide standardized exams designed to ensure their competence in key subjects.
In most states, civics is not included within the statewide testing regimen, and the general trend has been for states to drop civics assessments.[2] In particular, NCLB does not include civics among the subjects in which students must be tested. As a result, little is known about whether statewide assessments in civics make any difference. For other subjects, notably math and reading, some evidence suggests that statewide assessments—typically, exams—boost student performance.[3] However, this finding remains contentious, as some studies conclude that there are deleterious consequences of statewide assessment and accountability standards.[4]

Nonetheless, proponents of assessment argue that well-designed accountability measures spur students, teachers, and administrators alike to adopt the best practices to boost students' academic performance. If the assessments are indeed designed well, then students—and, relatedly, their teachers and administrators—can be appropriately evaluated by the exams. By this logic, teacher evaluation would not require monitoring teachers' specific pedagogical practices but instead only the outcomes of their students' performance. That is, the focus would be on the ends and not the means, leaving teachers to determine the optimal methods to achieve the objective of their students' strong performance on assessments. Similarly, administrators are left to decide which instructors should be teaching civics.

Obviously, in practice there are many factors outside of the classroom that affect students' performance, which is one reason why assessment is a controversial topic. Should teachers be evaluated only on changes in their students' performance (that is, value-added analysis)? Should an assessment take into account possible confounding factors, such as the socioeconomic status of the students? What does a fair and informative assessment entail?

While contentious, the debate over assessments has the virtue of being grounded in empirical analysis. A huge volume of data exists on the subjects—primarily math and reading—that have been subjected to testing. In other words, there can be a debate because there is something to debate.

The Who, What, and Why of Civics Assessments

In sharp contrast to the large literature on assessments' effects regarding math and reading, very few studies have examined what effect, if any, statewide assessments in civics and related subjects have on civic education. And to the extent that there has been any research on state-level policies regarding civic education—including but not limited to assessments—these studies have concluded that these policies have no discernible effect on civic attitudes and behavior.[5] Yet these studies are few, so notwithstanding their null findings, this paper proceeds from the premise that the issue is not yet settled and thus poses the question anew: do civics assessments matter for civic education?

The current research into this question resembles past literature on the more general question of whether civics courses have any effect on young people's beliefs and actions. For decades, there was a virtual consensus that civics courses had little effect, based largely on research done in the 1960s.
More recently, however, a growing body of research has found that civics courses can have an effect, even if those effects are relatively modest.[6] A common explanation for the limited effect is that civics, unlike many other subjects, can be learned outside the classroom through, for example, conversations with parents or following the news. An alternative explanation, however, could be that the stakes are low for both students and teachers when it comes to the subject of civics. Absent an external assessment of what students are learning in civics courses, perhaps both students and teachers do not prioritize the subject. If so, we should not be surprised that civics courses have little measurable impact on students. It would stand to reason that having a civics assessment increases the seriousness with which teachers and students treat civics, leading to measurable results. That is, an assessment in civics means better performance in civic outcomes. (Later, I address in detail what civic outcomes should be measured.)

Scholars, policymakers, and teachers who focus on civic education have debated the utility of civics assessments, specifically whether civics should be included among the subjects examined within accountability systems established under NCLB. On the one hand, there are those who worry that the nature of civics does not lend itself to standardized assessment. These opponents of civics testing worry that a systematic form of assessment will narrow civic education to only what appears on a standardized exam. In other words, they are concerned that civic education will merely become an exercise in teaching to the test—specifically, a test that does not cover the full range of what successful civic education should entail.

On the other hand, some civic educators worry that the absence of civics assessment devalues its significance as a subject, which means that it receives too little time, attention, and resources. In a 2010 AEI Program on American Citizenship survey of high school social studies teachers, an overwhelming majority—93 percent—said they would prefer that civics (social studies) be subject to regular testing to ensure that it was not ignored, even though many are also critical of the testing required by NCLB.[7]

Much of the debate over including civics as a subject for assessment rests on what civic education should encompass. Should it center on knowledge? If so, knowledge of what? Should civic education be defined to include the skills necessary for participation in the public square, such as public speaking and running meetings?[8] Should civic educators teach students to have certain dispositions, such as tolerance for differing viewpoints and a sense of civic responsibility?[9] If so, should these dispositions be evaluated?

While an interesting and spirited debate centers on the question of what civic education should include, my analysis focuses on the civics outcome with the broadest consensus: knowledge. The National Assessment of Educational Progress (NAEP) Civics Exam, for example, includes only factual questions about the how, what, when, and why of American government and does not attempt to examine civics skills or dispositions. Recently, Peter Levine and Richard Niemi, two well-known scholars of civic education, called for renewed attention to the sort of knowledge we should expect young people to have.
They suggested that civic educators should focus on ensuring that their students are fluent in current events, as this is more likely to spur their engagement in the political process than the traditional emphasis on constitutional basics (for example, understanding how a bill becomes a law or how the Constitution can be amended).[10] The data employed in this paper include both types of knowledge, namely that of the contemporary political landscape and some constitutional fundamentals.

Decades of research support an emphasis on civic knowledge, as numerous studies have shown that knowledge is a precursor to civic engagement.[11] But even absent a familiarity with this research, it seems self-evident that a representative democracy rests on an informed electorate. Social scientists debate the level of knowledge needed for democracy to function properly, but no one seriously questions the intrinsic need for an electorate composed of voters who have at least a modicum of knowledge about how their government works. Furthermore, it seems sensible that if a course of study has any effect, it would be to increase knowledge within that subject area.

While the analysis to follow will center on knowledge as the most plausible and defensible civic outcome to be measured, it will also examine other potential civic outcomes that previous research has suggested fall within the purview of civic education—specifically, whether statewide civics assessments have an effect on young people's political participation. (Are young people who took civics assessments in high school more likely to turn out to vote?) And even though civic education is ostensibly nonpartisan, the heated debates over civics standards, and over civics' close cousin history, suggest a concern that civics entails political indoctrination.[12] Consequently, my analysis will also examine whether civics assessments have an effect on either party identification or political ideology.

In sum, assuming that civics assessments do have an effect on outcomes, in the next part of the paper I examine the specific outcomes for which they matter, hypothesizing that assessments have a bearing on political knowledge. The analysis then turns to ask: for whom do civics assessments matter most? Do they have the same effect across all student groups, or do they matter more for some students?

The fact that US public schools were created for civic purposes underscores that the nation's education system has long served as the proverbial melting pot. Public schools bring together young people of many different ethnicities to forge a common identity and to school them in democratic virtues, such as respect for the symbols and institutions of government, a sense of civic responsibility, and an appreciation for individual rights. In short, the public schools have been a leading institution for creating unum out of pluribus. One important aspect of this great melting together is teaching young immigrants and children of immigrants about America's system of government, thus preparing—and motivating—them for participation in the nation's political life.[13] Similarly, past research on civic education has shown that formal instruction in civics matters most among students who are otherwise the least exposed to politics.
A seminal study of civic education, conducted in 1965, concluded that while civics courses had little effect on students in general, they did lead to greater civic knowledge, efficacy, tolerance, and sense of civic duty for African American students.[14] In 1965, there was every reason to think that African American youth had little exposure to conventional political activity in the home, given that they had been raised in an era of overt racism nationwide and formal disenfranchisement in the Jim Crow South. While subsequent research has not always found a greater effect for civic education among disadvantaged youth, enough studies have that it remains a viable hypothesis.

Specifically, to the extent that civics assessments lead to improved civic education, the extant literature suggests that assessments would have the biggest effect for immigrants and members of minority groups, two categories that obviously overlap. In particular, we should expect that immigrants would benefit most from civics instruction, as they are likely to have the least experience with the US political system.

To summarize, past research has suggested that civics assessments—or any state-level policies or standards regarding civic education—have no measurable effect on civic outcomes within the youth population. This paper revisits that conclusion by focusing on three interrelated questions and drawing on the existing literature to suggest what the answers might be.

1. Do civics assessments matter? The general literature on accountability measures suggests that, under some circumstances, assessments in subjects other than civics can incentivize educators and students to improve their academic performance. It stands to reason that they would have the same effect for civics.
2. For what outcomes do civics assessments matter? If civics classes—and, specifically, statewide assessments—are to make a difference, they are most likely to have an impact on factual knowledge about the political process and institutions.
3. For whom do civics assessments matter most? Past research suggests that civic education in general matters most for students who have the least exposure to the nation's politics in their homes, which points toward a larger effect for immigrants and members of other minority groups.

I am thus suggesting that past research has cast too wide a net in looking for the effects of state standards in civics. The effect of civic education can be subtle and there is no reason to assume that it matters equally for everyone in the population. As I explain later, the data do indeed indicate that civics assessments (with academic consequences) appear to boost political knowledge among young people, especially among African American, Hispanic, and immigrant youth. Consequential assessments have the biggest impact of all on Hispanic immigrants.

After presenting evidence that youth educated in states with a civics assessment have greater political knowledge, the remainder of the paper examines why. Does a civics assessment systematically affect civics pedagogy? Is civics taught differently (presumably, better) in states with an assessment system? The answer is elusive in existing data.
However, the absence of a single explanation may itself be an important insight into effective civics instruction: perhaps it is best to "let a thousand flowers bloom" and leave it to teachers to determine the most effective means of achieving the end of ensuring that their students have the necessary level of political knowledge to be informed citizens.

Analysis

One reason for the lack of attention to civics is the relatively scarce amount of data available on the subject, especially when compared to other academic subjects, such as math and reading. Fortunately, a recent survey helps fill that lacuna. In 2012, the Spencer Foundation funded a survey conducted by the Center for Information and Research on Civic Learning and Engagement (CIRCLE) and supplemented with data on statewide civics standards collected with the support of the Bechtel Foundation. These data are ideally designed to answer the three questions I have posed:[15]

1. Do civics assessments matter? The Spencer survey was limited to 18–24-year-olds, the group in which we are most likely to see an effect of civic education. The survey contained a nationally representative sample of 4,483 US citizens. In addition to a wide array of information gathered about each of the respondents, including detailed demographic data, CIRCLE researchers incorporated data on the characteristics of the respondents' states. This includes standards and requirements for civic education in the state where the respondent attended high school. Respondents were also asked detailed questions about their experiences in high school, including the general climate of the school and the specific pedagogical practices used in civics-related (social studies) classes.
2. For what outcomes do civics assessments matter? The survey asks a wide array of questions to gauge civic attitudes and behaviors. These include voter turnout in the 2012 election and, critically for this study, a battery of questions to measure respondents' civic knowledge.
3. For whom do civics assessments matter most? The survey's large sample size enables reliable analysis of subgroups within the population, such as immigrants and minorities.

The logic of the analysis is straightforward: do young people who attended high school in states with a civics assessment have greater knowledge about government and politics than those who did not?[16] The first step in answering this question requires identifying those states with such an assessment, and states are either coded as having such an assessment or not.[17] As of 2012, there were 21 states with such an assessment.

This subset of states reflects the diversity of the nation. (See table 1.) The 21 states are geographically diverse, as they are not concentrated in any single region, and they are economically diverse, as their average household income ranges from $45,000 to $72,000. Moreover, they are racially diverse, with the states' Hispanic populations ranging from 1.2 percent to 46.3 percent and their African American populations ranging from 2.1 percent to 37 percent. They are also politically diverse, as 8 states went for Obama in 2012 and 13 went for Romney. Finally, they are educationally diverse, as their NAEP scores range from 251 to 269 (national average is 263) in reading and from 265 to 289 (national average is 282) in math. Their per-pupil expenditures for education range from $7,500 to $19,000 (national average is $11,000).
While my statistical analysis will control systematically for these differences across states, a priori there is no obvious confounding factor that characterizes the assessment states. They are a cross-section of America.

Table 1. Civic Education Requirements by State
Source: Center for Information & Research on Civic Learning and Engagement, "New CIRCLE Fact Sheet Describes State Laws, Standards, and Requirements for K–12 Civics," 2012, www.civicyouth.org/new-circle-fact-sheet-describes-state-laws-standards-and-requirements-for-k-12-civics/.

In addition to the presence or absence of a civics assessment, the analysis tests whether the number of years civic education is required affects political knowledge, since the critical factor might not be the assessment but rather the sheer amount of instruction. Even though a study by Deven Carlson found that the number of years of civic instruction has no bearing on civic knowledge (as measured by the NAEP) among high school students, this control is nonetheless included to see whether quantity of instruction affects the amount of civic knowledge that sticks past high school.[18]

Analysis throughout this section of the paper includes only state-level policies regarding civic education. While later analysis incorporates self-reports about the pedagogical techniques respondents recall in their high school civics classes, these reports are unavoidably clouded by self-selection. That is, people who are civically aware—and thus score higher on an index of political knowledge—are also more likely to have taken civics courses in high school, to have participated more fully in them, and perhaps even to remember them differently than their less civically aware classmates. What one person considers an enlivening classroom discussion might bore someone else to tears.

State-level policies, however, are not subject to self-selection in the same way. Admittedly, it is theoretically possible that especially civically engaged families choose to live in states with a civics assessment, but that seems implausible. Civics requirements within a state are hardly common knowledge, and it strains credulity to suggest that a state's civic education requirements are a drawing card for potential move-ins. A more plausible concern is that policies across a given state are not implemented consistently or, in some districts and schools, even ignored altogether. Given implementation uncertainty, any effects observed in this analysis should be considered a lower bound.

A medical trial to test a new medicine provides a useful analogy. In such a trial, not every patient takes the pill as prescribed—either out of forgetfulness or inattention. Thus, medical researchers consider all the subjects assigned to take the pill as the "intention to treat" group, whether those subjects actually comply with the protocol or not. The analysis then measures whether the pill has an effect, on average, within the intention-to-treat group, even though some members of the group may not have followed the protocol. This is done because, should a drug come to market, it will not always be used correctly, and the medical community wants to determine the aggregate effect of the experimental treatment.
States with a civics assessment (or any state-level policy) are like the intention-to-treat group, as not all districts and schools will necessarily follow protocol and comply with the rule.[19] The question is whether the policy being analyzed has an effect in the aggregate, notwithstanding the uncertainties in implementation.

The primary outcome of interest is political knowledge, which is measured with an index of six questions. While a survey of this type cannot have the same breadth of items as an exam such as the NAEP, these questions include some fundamentals of how the American political system operates and some rudimentary knowledge about the current political landscape.[20] They are thus a good gauge of informed voting. The questions, which comprise the Civic Knowledge Index, are (with correct answers in parentheses):

1. As far as you know, does the federal government spend more on Social Security or on foreign aid? (Social Security)
2. Would you say that one of the parties is more conservative than the other on the national level? (Yes) / Which party is more conservative? (Republican)
3. Do you happen to know which party had the most members in the House of Representatives in Washington before the election this month? (Republican)
4. How much of a majority is required for the US Senate and House to override a presidential veto? (Two-thirds)
5. Which of the following best describes who is entitled to vote in federal elections? (Multiple-choice options were residents, taxpayers, legal residents, and citizens. The correct answer is citizens.)

The average score was three out of six correct. Roughly 9 percent got none of them correct, while 4 percent had a perfect score. The easiest question was whether one party was more conservative than the other (69 percent knew that, of whom 76 percent knew it was the Republican Party). The most difficult question was whether the US government spends more on Social Security or foreign aid. Only 29 percent knew that it is Social Security.

The inclusion of these questions should not imply that these items are either necessary or sufficient for informed democratic engagement. Rather, the intent is to have a general gauge of respondents' knowledge about politics and government. A more detailed exam would obviously provide more nuanced information, differentiating among different domains of knowledge. But such an exam would also correlate highly with this short quiz. While blunt, this index is a serviceable measure of general civic knowledge. Indeed, its brevity—and thus limited variance—only makes it more difficult to detect a signal amidst the noise. The bias of the analysis is thus against finding an effect of statewide assessments, underscoring the substantive significance of any effect we might find.

The analysis proceeds by testing whether 18–24-year-olds who attended high school in states with a civics assessment have greater civic knowledge. More formally, the Civic Knowledge Index is regressed on civics assessment, along with a host of control variables to ensure that the effect of the assessment has been isolated from potentially confounding factors. In other words, the analysis is designed to eliminate concern that what appears to be the effect of a civics assessment is really the effect of something else that either describes the states that have assessments or the people who live in such states. To ensure an apples-to-apples comparison, the same variables are used in the models for each civic outcome discussed in this paper.
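To make that design concrete, the sketch below shows, in Python, how such an index and regression might be assembled. It is purely illustrative: the file and variable names are hypothetical placeholders rather than the actual Spencer/CIRCLE codebook, and details such as clustering standard errors by state of high school attendance are assumptions of the sketch, not features described in the paper.

```python
# Minimal sketch, assuming hypothetical column names; not the paper's actual code or data.
import pandas as pd
import statsmodels.formula.api as smf

def build_knowledge_index(df: pd.DataFrame) -> pd.Series:
    """Sum six binary items (1 = correct, 0 = incorrect) into a 0-6 Civic Knowledge Index."""
    items = [
        "q_social_security",          # spends more on Social Security than foreign aid
        "q_party_more_conservative",  # one party more conservative than the other
        "q_which_party_conservative", # which party is more conservative
        "q_house_majority",           # party with most House members before the election
        "q_veto_override",            # two-thirds needed to override a veto
        "q_who_can_vote",             # citizens are entitled to vote in federal elections
    ]
    return df[items].sum(axis=1)

df = pd.read_csv("spencer_circle_2012.csv")  # hypothetical file name
df["knowledge_index"] = build_knowledge_index(df)

# Regress the index on the assessment indicator plus individual- and state-level controls,
# clustering standard errors by the state where the respondent attended high school
# (the clustering choice is this sketch's assumption).
model = smf.ols(
    "knowledge_index ~ state_has_assessment + black + hispanic + asian"
    " + immigrant + female + age + education + enrolled + books_in_home"
    " + state_naep_math + state_naep_reading + state_per_pupil_spending"
    " + state_median_income + state_youth_turnout_2010 + state_competitiveness_2012",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["hs_state"]})

print(model.summary())
```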
The statistical analysis controls for a host of individual-level characteristics, including race or ethnicity (whether the respondent is African American, Asian American, or Hispanic), immigrant status, gender, and age. In addition, the model accounts for educational attainment, but, because many of these respondents are still pursuing their education, it includes both the highest level of education attained and whether the respondent is currently enrolled in school.

Although household income is typically used as a measure of socioeconomic status, income can be misleading for this age group. For one, it is not clear whether the appropriate income to measure is respondents' own or their parents'. In the former case, income often appears misleadingly low because respondents are in school or just starting a career. In the latter, respondents often do not know their parents' income. Consequently, intellectual stimulation in the childhood home better captures the presumed influence of social class, measured with the deceptively simple question, "When you were growing up, about how many books were there in your home?" The choice of answers includes a few (0–10), enough to fill one shelf (11–25), enough to fill one bookcase (26–100), or enough to fill several bookcases (more than 100). This simple question efficiently yet powerfully measures the intellectual climate within the home, which is one important way social class affects all educational performance, including civics.[21]

To ensure that a civics assessment is not masking other state-level characteristics, the analysis also accounts for students' general academic performance in the states where the respondents went to high school, measured with mean NAEP scores in math and reading. The model also controls for each state's average per-pupil expenditure and median household income. Since the survey was conducted during 2012, a presidential election year, it is also important to account for any possible effects of the political environment of the state in which respondents were living in 2012. These include the past level of turnout among 18–24-year-olds (measured as 2010 turnout in the Current Population Survey) and the degree of electoral competition in 2012. Historically high levels of turnout and contemporaneous political competition could foster greater political engagement and, potentially, knowledge.[22]

Do Civics Assessments Matter?

The analysis begins with the fundamental question of whether respondents who attended high school in states with a civics assessment have greater civic knowledge. In a nutshell, the answer is yes. Students who attended high school in a state without a civics assessment scored, on average, 2.8 out of 6.0 on the knowledge scale. Those who were educated in states with an assessment scored 3.03, a statistically significant difference (p < .05). Importantly, it is the presence of a civics assessment that has this effect, not the number of civics courses students are required to take.

This finding is notable in light of the high bar for finding any statistically significant effect, given the shortcomings of a six-item knowledge index. However, it is just as important to ask whether an increase of 0.23 on the scale has substantive significance—intrinsically a more subjective judgment. One useful benchmark is comparing the effect size for a civics assessment to other factors that have a statistically meaningful impact on civic knowledge.
In making this comparison, the presence of a civics assessment has what can be described as a moderate effect, comparable to, for example, being raised in a home with several bookcases instead of only one.

But not all civics assessments are created equal. In some assessment states, the civics exam has no bearing on whether a student graduates from high school. In others, graduation does not require earning a certain score on the assessment, but the assessment nonetheless counts toward a grade in a civics course. In still others, the assessment is a graduation requirement. Essentially, the analysis asks whether the stringency of the requirement matters. Do students have higher knowledge when the state-administered civics exam has consequences for graduating high school?

Of the 21 states with a civics assessment, a total of 11 either require the exam for graduation (9 states) or count it toward a final grade in a required civics course (2 states). Because it is not clear that the consequences are more or less severe under one system than another, these 11 states have been combined into a single category and thus treated as functional equivalents. Even though they are a subset of a subset, this group of 11 states (listed in table 1) is nonetheless broadly representative of the United States. Their average median income is $55,000, their Hispanic population ranges from 2.7 to 46.3 percent (average is 13 percent), their African American population ranges from 2.1 to 37 percent (average is 19 percent), four of them favored Obama in 2012, their NAEP math scores range from 265 to 287 (average is 278), and their NAEP reading scores run from 251 to 269 (average is 260).

To answer whether it matters if the civics assessment has academic consequences, the binary measure of whether a state has an assessment is replaced with a series of dichotomous variables indicating which of the following categories a state falls within (a sketch of this coding follows the list):

1. No civics requirement or assessment
2. State requires a civics course but no assessment
3. State requires a civics course and has an assessment, but the assessment has no bearing on graduation or grades
4. State requires a civics course, has an assessment, and the assessment has consequences.
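Here is a minimal sketch, in Python, of how the four policy categories could be expanded into dichotomous indicators for the regression. The category labels and example rows are hypothetical illustrations, not the paper's actual Table 1 classification or variable names; in the regression itself, the "no requirement" category would typically serve as the omitted reference group.

```python
# Illustrative sketch only: the policy labels and example data below are hypothetical.
import pandas as pd

# Four mutually exclusive policy categories, from least to most stringent.
CATEGORIES = [
    "no_requirement",         # 1. no civics requirement or assessment
    "course_only",            # 2. required course, no assessment
    "assessment_no_stakes",   # 3. required course and assessment, no graduation/grade stakes
    "assessment_with_stakes", # 4. required course and assessment with academic consequences
]

# Hypothetical example of a respondent-level policy variable.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "civics_policy": ["course_only", "assessment_with_stakes",
                      "no_requirement", "assessment_no_stakes"],
})

# Expand into dichotomous (dummy) indicators, one per category.
dummies = pd.get_dummies(
    pd.Categorical(df["civics_policy"], categories=CATEGORIES),
    prefix="policy",
)
df = pd.concat([df, dummies], axis=1)
print(df)
```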
Subject: Citizenship
Keywords: Civic education; education assessment
URL: https://www.aei.org/research-products/report/putting-civics-to-the-test-the-impact-of-state-level-civics-assessments-on-civic-knowledge/
Source Think Tank: American Enterprise Institute (United States)
Resource Type: Think Tank Publication
Item Identifier: http://119.78.100.153/handle/2XGU8XDN/206029
Recommended Citation (GB/T 7714):
David E. Campbell. Putting civics to the test: The impact of state-level civics assessments on civic knowledge. 2014.
Files in This Item:
File: 2014-09-Campbell.final-template.pdf (646KB) | Format: Adobe PDF | Resource Type: Think Tank Publication | Access: Restricted | License: CC BY-NC-SA