Gateway to Think Tanks
Source Type | Book |
Standard Type | Other |
Title | The Video Campaign: Network Coverage of the 1988 Primaries |
Authors | S. Robert Lichter; Daniel Amundson; Richard Noyes |
Publication Date | 1988
Publisher | AEI Press
Publication Year | 1988
Language | English
Abstract | Setting the Scene Nobody knows what the hell’s goin’ on. —Curtis Wilkie, Boston Globe It was a roller coaster election, packed with thrills and spills, drama and trauma, stunning surprises, and reckless surmises. Who could have predicted such odd couples as Gary Hart and Donna Rice, Joe Biden and Neil Kinnock, or Ed Koch and Al Gore? Who could have known that George Bush and Bob Dole would both get mean, but Bush would pick the right fight (against Dan Rather) and Dole the wrong one (against Bush after New Hampshire)? Who would have guessed that Dick Gephardt would fight the establishment or that Jesse Jackson would join it? Certainly not the media. It is not surprising that Boston Globe reporter Wilkie couldn’t get a handle on the elections early in 1988. His colleagues had the same trouble. In February, Newsweek media critic Jonathan Alter complained, “The press is held captive in Campaignland—the worst possible vantage point from which to make sense of anything. The extension cords that connect political coverage to the rest of the country have become hopelessly tangled.”1 During the next few months campaign journalists would deal with the rapid rise and fall of Gephardt and Pat Robertson, Bush’s near disaster in Iowa and spectacular recovery in New Hampshire, Dole’s near miss in New Hampshire and subsequent tailspin, the Jesse Jackson phenomenon, the New York showdown, and, not to be overlooked, the emergence of Michael Dukakis as the tortoise who outlasted all the Democratic hares. Making sense of all this on daily deadline is not easy, especially while continually being assailed for bias, inaccuracy, and arrogance. At one time or another, just about everybody took a shot at the press. Gephardt’s backers called them yuppie elitists for bashing their man’s trade proposals. Hart ripped them for stressing his hijinks instead of his high-mindedness. 
Dole lashed out at liberal ideologues who would not give Republicans a fair shake. Bush squared off against Rather while Robertson blasted Tom Brokaw. Jackson’s supporters claimed he was treated like a horse of a different color, while his opponents muttered (not for attribution) about his free ride. Even noncandidate Mario Cuomo complained that his noncandidacy was not taken seriously enough. No wonder Alter’s Newsweek piece was headlined, “The media is as confused as the election itself.” Yet the campaign somehow got covered, as it does every four years, with nightly reports on the three television networks that embody “the media” for their millions of viewers. There the peculiarities, idiosyncrasies, and unique events of campaign ’88 were sorted into the soothingly predictable categories of election coverage—the horse race, the issues, the candidate profiles, the campaign strategies and tactics, the inside dope from informed sources, and the gaffes and one-liners that together form the spectrum of election news. Just as journalists rely on the methods of their craft to bring some order into the flux and frenzy of presidential campaigns, so scholars in recent years have begun to apply the methods of social science to understanding the patterns of campaign journalism. Led by the pathbreaking studies of Richard Hofstetter in 1972, Thomas Patterson in 1976, and Michael Robinson in 1980, they have honed and sharpened the tool of content analysis to dissect the who, what, when, and where of campaign coverage.2 (As in journalism itself, the how and why remain the toughest assignments.) Content analysis is a technique that allows researchers to classify the news objectively and systematically according to explicit rules and clear criteria. The goal is to produce valid measures of news content, and the hallmark of success lies in reliability. 
Other investigators who apply the same procedures to the same material should obtain the same results, although their interpretations of those results may differ. For example, the amount of coverage a candidate receives can be measured in various ways—by the amount of time he appears physically onscreen, by the number of times he is mentioned or quoted, by the number of stories that focus on him, and so forth. Whichever method is chosen, the result can be expressed in absolute terms or relative to other candidates. And the question of whether a given amount of coverage is fair or appropriate requires judgments or interpretations that go beyond the data. But once a certain standard of measurement is chosen and rules for applying it are codified, different researchers should come up with about the same numerical findings, regardless of their own ideological or partisan predilections. Good and bad press are harder to measure objectively than the sheer amount of coverage, but the task is by no means impossible. First you decide which topics are relevant (for example, discussions of the candidate’s competence, integrity, consistency, and the like). Then you determine the tone of each statement dealing with one of these topics. The result may be coded as positive (“Reagan is a great communicator”), negative (“Reagan often gets his facts wrong”), mixed (“Reagan is a master at using anecdotes, but he often gets his facts wrong”), or neutral (“Reagan’s use of anecdotes has stirred debate”). Some judgments are more difficult than these, and coders must be guided by clear rules. In making each decision, coders should be applying rules, not expressing their own opinions. If the rules are sufficiently clear, two coders working independently should come to the same conclusions, regardless of their own opinions about the subject matter. Content analysis is not a panacea. 
The quality of a study depends on the way the coding categories are constructed, the clarity and appropriateness of the rules that guide coders in applying them, and the skill of the coders in doing so. Nonetheless, the difference between content analysis and casual monitoring is akin to the difference between scientific polling and man-on-the-street interviews. Guided by the lessons of previous research, as well as the logistical requirements of rapid-response media monitoring, we applied the best procedures developed during earlier election studies, along with some refinements of our own. Our aim was to publish the results rapidly enough that journalists and news watchers alike could evaluate the coverage as it developed rather than after the fact.3 We viewed all election stories on the ABC, CBS, and NBC evening news shows beginning on February 8, 1987, a full year before the Iowa caucuses. This volume presents the results from that date through the end of the primary season on June 7, 1988. (The study will continue through the general election in November.) To be selected for analysis, a story either had to be devoted in large measure to the election or had to focus on one or more of the candidates, with reference to their campaigns. For example, a story about George Bush’s activities as vice president was not included if it made no mention of his quest for the presidency. There was no lack of material. During the sixteen months covered, the networks broadcast 1,338 election stories with a combined airtime of 40 hours and 17 minutes. That total broke down to 418 stories lasting 14 hours and 5 minutes on CBS, 464 stories lasting 13 hours and 48 minutes on NBC, and 456 stories lasting 12 hours and 24 minutes on ABC. Of course, these averages mask the ebb and flow of election news that followed the rhythms of the campaign. During 1987, before the start of the primary season, the three networks together broadcast an average of just over one story a night. 
Even then, periods of intense activity alternated with long lulls in the coverage. For example, the Donna Rice scandal generated twenty stories on Gary Hart in four days in May, and Biden’s borrowed oratory was the subject of fourteen stories in September. When the bell rang for the contenders to square off in Iowa, the coverage really began to heat up. The networks averaged six and one-half stories a night from January 1, 1988, through the Iowa caucuses. But even that was just the warm-up for New Hampshire, when the coverage doubled to nearly fourteen stories a night. It dropped to eight stories nightly during the Super Tuesday campaign, six stories a night from Super Tuesday through the New York primary, and fewer than four stories each night thereafter. Each story was taped during its broadcast and later reviewed by coders who analyzed it according to dozens of criteria ranging from length and placement in the broadcast to the use of sources, treatment of the candidates, and viewpoints expressed on various issues. The coder entered each judgment onto a code sheet that listed the options in numerical form. The resulting data were then entered into a computer, where they could be aggregated and analyzed most efficiently. To standardize coding decisions, each coding category was defined in a written codebook. Each coder learned to apply the coding system in training sessions of 100-200 hours. The building blocks of the study were not the news stories themselves but rather every statement of fact or opinion that appeared in each story about the candidates or the campaign. By using individual statements as the unit of analysis, the coders avoided having to make global judgments about entire stories. Instead they classified discrete bits of information from individual sources within each story. They identified not only the issues that were raised and the viewpoints expressed, but also the individual or group that was the source of each statement. 
(We also analyzed visual aspects of the coverage, but time constraints preclude presentation of the results in this volume.) To test the reliability of the content analysis system, two fully trained coders independently reviewed 150 stories. We retained only the variables on which their coding decisions were in agreement at least 80 percent of the time. On most variables the level of agreement was even higher. The following chapters present the results of this analysis. Chapter 2 examines the issues, themes, and topics of coverage that together form the context within which the election takes place. In chapters 3 and 4 attention shifts from the electoral context to the candidates themselves. Chapter 3 considers evaluations of each candidate’s viability. Did the media’s assessments of the horse race affect the outcome of the race? We consider separately judgments about each campaign’s organizational and financial base, the movement of public opinion, each candidate’s electoral showing, and the expectations race—efforts to evaluate past performances and to predict future performance. Chapter 4 takes on the heated debate over bias in campaign coverage. Specifically, it examines assessments of each candidate’s desirability, including discussions of character, past job performance, abilities as a campaigner, and stands on policy issues. Finally, chapter 5 asks what it all means and whether it matters in the end. Did the networks get the story right? Did they intrude unduly into the electoral process? Did the coverage help some candidates and hurt others? In short, after all the sound and fury, did the media make a difference for better or for worse? To find out, let us tune in the latest episode of that quadrennial drama we know as elections in the media age. |
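The abstract's intercoder reliability test (retaining only variables on which two independent coders agreed at least 80 percent of the time) amounts to a simple percent-agreement calculation. The sketch below is illustrative only, not the authors' actual procedure or software; the function name, the four tone categories, and the sample codings are assumptions for the example.

```python
# Illustrative sketch (not the book's actual coding software): percent
# agreement between two coders on a categorical variable, with the 80
# percent retention threshold described in the abstract.

def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders made the same decision."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codings of ten statements, using the abstract's four
# tone categories: positive, negative, mixed, neutral.
a = ["pos", "neg", "neu", "pos", "mix", "neg", "pos", "neu", "neg", "pos"]
b = ["pos", "neg", "neu", "pos", "neg", "neg", "pos", "neu", "neg", "pos"]

score = percent_agreement(a, b)  # 9 of 10 decisions match -> 0.9
print(f"agreement: {score:.0%}")          # agreement: 90%
print("retain variable:", score >= 0.80)  # retain variable: True
```

Percent agreement is the measure the abstract describes; chance-corrected statistics such as Cohen's kappa are stricter alternatives but are not mentioned in the text.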
Subject | Elections
Tags | 1980s ; AEI Archive ; AEI Press ; Elections ; Presidential Primaries ; US Media
URL | https://www.aei.org/research-products/book/the-video-campaign-network-coverage-of-the-1988-primaries/ |
Source Think Tank | American Enterprise Institute (United States)
Resource Type | Think Tank Publication
Item Identifier | http://119.78.100.153/handle/2XGU8XDN/208075
Recommended Citation (GB/T 7714) | LICHTER S R, AMUNDSON D, NOYES R. The Video Campaign: Network Coverage of the 1988 Primaries[M]. AEI Press, 1988.
Files in This Item | There are no files associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.