Source type: Article
Document type: Commentary
DOES OVERREGULATION CAUSE UNDERREGULATION? The Case of Toxic Substances
John Mendeloff
Publication date: 1981-10-07
Publication year: 1981
Language: English
Abstract: In Washington, the reigning view of environmental health and safety regulation is that it has gone too far. And there is an important sense in which that is true. The Reagan administration has considerable justification for resting its program of regulatory relief on the conviction that standards have been set so strictly that benefits often fall short of costs. But it can also be argued that, in another important sense, environmental health and safety regulation has not gone far enough. Indeed, it is likely that the field is characterized not only by too much control but also by too little, and that the former problem is one reason for the latter. Therefore, the best policy may be to increase the pace of standard-setting while, at the same time, decreasing its strictness. That is easier said than done.

Defining the Terms

To begin unscrambling this puzzle, let us first examine the notions of overregulation and underregulation. Sometimes what we call overregulation may simply mean failure to achieve a given health benefit at the lowest cost—that is, the lack of cost-effectiveness. A 90 percent improvement in air quality, for example, might be produced more efficiently by emissions taxes than by across-the-board requirements to reduce pollution by that percentage. More often, however, overregulation means that the additional health benefits sought from the stricter standard are not significant enough to justify the additional costs. Making such a calculation is especially difficult, of course, because it unavoidably involves deciding what value to put on reductions in illness and death. One approach to this problem has been to examine the so-called risk premiums demanded by workers in hazardous jobs. These have been estimated in a number of studies to range from roughly $200,000 all the way to $5 million for each accidental death experienced. It seems unlikely—although this is admittedly speculative—that workers place a higher value than this on preventing a death that would occur twenty to thirty years in the future, even if the cause of the death were cancer.

What is society in fact paying to achieve health benefits? Looking at standards of the Occupational Safety and Health Administration (OSHA), John Morrall and Ivy Broder have put the average costs per cancer death averted at $20.2 million for the arsenic standard, $18.9 million for benzene, $4.5 million for coke-oven emissions, and $3.5 million for acrylonitrile. For the asbestos standard, however—which accounts for almost 90 percent of the estimated fatal cancers averted by these five standards—the estimated costs are below $200,000 and maybe even below $100,000.

It is important to keep in mind that these are average costs. In the case of acrylonitrile, for example, it is the average cost of moving from the preexisting exposure level of 20 parts per million (ppm) to 2 ppm. With acrylonitrile, and probably with most toxic substances, it becomes increasingly expensive to achieve added reductions in exposure. Thus, the extra cost per death averted in moving from 3 ppm of acrylonitrile to 2 ppm was undoubtedly much higher than $3.5 million, probably at least twice as much. It is this higher figure that is the valuation implicit in a decision to adopt a 2 ppm standard. Thus, even if you value preventing cancer deaths at $3.5 million each, you should not support a 2 ppm standard unless the only choices are 20 ppm and 2 ppm, with no options in between.
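The distinction between average and marginal cost per death averted is the crux of this passage, and a small numerical sketch may make it concrete. The exposure-cost schedule below is hypothetical (the article reports only the $3.5 million average for acrylonitrile); it simply illustrates the pattern just described, in which each further cut in exposure costs more per death averted than the last.

```python
# Hypothetical schedule for tightening an exposure limit (ppm).
# The figures are invented for illustration, not the article's data;
# they show only how the marginal cost per death averted can far
# exceed the average cost when marginal costs rise.
schedule = [
    # (limit in ppm, cumulative cost in $M, cumulative deaths averted)
    (20, 0.0, 0.0),    # preexisting exposure level: no cost, no benefit
    (10, 40.0, 20.0),
    (5, 120.0, 35.0),
    (3, 250.0, 45.0),
    (2, 420.0, 50.0),
]

# Average cost per death averted over the whole move from 20 ppm to 2 ppm.
_, total_cost, total_averted = schedule[-1]
print(f"average: ${total_cost / total_averted:.1f}M per death averted")

# Marginal cost of the final step (3 ppm -> 2 ppm): the valuation
# implicit in choosing 2 ppm over 3 ppm.
(_, c1, d1), (_, c2, d2) = schedule[-2], schedule[-1]
print(f"marginal (3 -> 2 ppm): ${(c2 - c1) / (d2 - d1):.1f}M per death averted")
```

On these invented numbers the average cost is $8.4 million per death averted, while the last increment of strictness costs $34 million per death averted; it is the latter figure that a decision to adopt the 2 ppm limit implicitly endorses.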
Since the same point applies to the standards on coke-oven emissions, benzene, and arsenic, the overall numbers clearly are very high. Although these figures are subject to major uncertainties, it should be apparent why the overregulation argument has credibility.

Underregulation, for its part, can be defined quite simply as the mirror image of overregulation—that is, a failure to regulate strictly enough when the marginal benefits would exceed the marginal costs. This can include, of course, failing to regulate at all.

Strictness and Extensiveness

The finding that we have frequently set overly strict health and safety standards does not preclude the possibility that more standards would be desirable. One reason is that there are growing numbers of newly identified hazards that have not been addressed at all. Another is that the cost per death averted is strongly affected by how much exposures are reduced. Thus, even if we believe that the typical exposure reduction of 90 to 95 percent usually imposes excessive costs, we might still agree that a 50 percent reduction would be justified.

The casual lumping together of the charges that regulation has been too strict and too extensive is particularly unfortunate because it obscures some important relationships between the two issues and, in so doing, some possible remedies. It is generally agreed that where markets “fail” (because of inadequate information and externalities), regulation of toxic substances is potentially beneficial. Yet if we expect that the regulatory response will be improper—that standards will not reflect the most efficient approach or will be set far too strictly—we are likely to prefer inaction on the grounds that the net cost of regulating will exceed the cost of inaction. In this manner, overregulation can cause underregulation. And that in turn suggests that the costs of overregulation properly include the potential benefits forgone when we decide that no regulation is better than overly strict regulation.

An important implication of this is that if standards are set more sensibly, with more attention to weighing costs and benefits, we should be willing to regulate more extensively than we would otherwise. Suppose that an agency, instead of issuing one new standard a year with a reduction of 95 percent, issued five new standards a year with reductions of 50 percent each. It is highly likely that the second approach of regulating more substances less strictly would be the more cost-effective—would lower the average cost for each fatality averted. (Of course, across-the-board reductions will always be less efficient than setting each standard so that its marginal benefits equal its marginal costs.) A rough illustration of this comparison appears below.

Viewed in this perspective, the Supreme Court’s recent cotton-dust decision, which restricts the use of cost-benefit analysis in favor of more protective criteria for standard-setting, might actually end up diminishing OSHA’s contribution to worker health. OSHA has declared its intent to circumvent the decision, and it probably has the ability to do so at least partially—mainly through its determinations of what constitutes “significant risk” and “feasibility.” To the extent that it does not succeed and is compelled to regulate more strictly than it would prefer, it will probably become even more reluctant to undertake new standards.
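Here is the illustration promised above: a toy comparison of one strict standard against five moderate ones. The convex cost function and the proportional-benefit assumption are invented; they encode only the premise, stated above, that compliance costs rise steeply as exposure reductions approach 100 percent.

```python
# Compare two hypothetical agency strategies under rising marginal costs:
#   (a) one standard per year cutting exposure by 95 percent;
#   (b) five standards per year cutting exposure by 50 percent each.
# All numbers are invented to illustrate the argument, not estimates.

def cost(reduction, scale=10.0):
    """Compliance cost ($M), steeply convex as the reduction nears 1."""
    return scale * reduction / (1.0 - reduction)

def deaths_averted(reduction, baseline=100.0):
    """Deaths averted, taken as proportional to the exposure reduction."""
    return baseline * reduction

for label, reductions in [("one standard at 95%", [0.95]),
                          ("five standards at 50%", [0.50] * 5)]:
    total_cost = sum(cost(r) for r in reductions)
    total_benefit = sum(deaths_averted(r) for r in reductions)
    print(f"{label}: ${total_cost:.0f}M for {total_benefit:.0f} deaths"
          f" averted, ${total_cost / total_benefit:.2f}M each")
```

On these assumptions the five moderate standards avert more deaths in total and at a far lower average cost per death than the single strict one; the qualitative ranking, not the particular figures, is the point.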
The Reagan administration’s regulatory policy comes down hard on the side of weighing costs and benefits and maximizing net benefits, but it ignores altogether the implications for extensiveness. A comprehensive plan for regulatory reform should simultaneously consider the pace and extensiveness of regulation as well as the strictness of the individual standards.

Perhaps the administration’s policy is based on the view that agencies have addressed the biggest problems first, so that by now there are few chemicals left that warrant regulation. What can be said about the merits of this view? The fact is, we currently do not know enough to decide whether it is true or whether, instead, many toxic substances are underregulated. That decision would require preliminary benefit-cost analyses of potential regulatory choices, analyses that have not yet been done. Fortunately, my purpose here is less ambitious—to establish that the underregulation hypothesis is plausible, and then to explore some of the causes and possible remedies of underregulation.

Have We Underregulated?

One indication that significant hazards may be underregulated is the growing gap between the number of carcinogenic chemicals that have been identified and the number that have been regulated. More specific evidence consists of the backlogs that have piled up at regulatory agencies, backlogs that are likely to grow because of accelerated industry and government testing and the rapid rate at which new chemicals are being developed. While only about twenty substances have definitely been identified as human carcinogens, several hundred substances have been found to be animal carcinogens. Given that most human carcinogens also cause cancer in animals, the best guess we can make is that many animal carcinogens are also human carcinogens. Concern that toxic substances have not been regulated as broadly as they should be does not, it should be stressed, depend on the view that we are in the midst of a veritable cancer epidemic.

In the case of workplace hazards, OSHA has promulgated regulations on twenty-three health hazards. (Fourteen of those—all relatively rare carcinogens—were addressed in a single rulemaking that stopped short of establishing permissible exposure limits.) Meanwhile, the National Institute for Occupational Safety and Health (NIOSH) has recommended more than thirty new or reduced exposure limits that OSHA has not yet addressed. NIOSH—which is required to suggest the most protective achievable standard regardless of cost—has generally called for reductions of 90 to 99 percent for carcinogens and 50 to 60 percent for other hazards. Even if those particular reductions are not appropriate, there would seem to be a strong prima facie case that some reduction in exposure could be justified for some of these chemicals.

Many of the same chemicals are emitted into the ambient air. Thus the Clean Air Act of 1970 (Section 112) calls upon the Environmental Protection Agency (EPA) to list and then regulate hazardous air pollutants. So far EPA has regulated only four—asbestos, beryllium, mercury, and vinyl chloride (the last one in 1976)—while listing three more—benzene, arsenic, and radionuclides. Thirty-six other chemicals remain under study with no regulatory prospects in sight, and hundreds more have never even been considered.
EPA’s regulatory mandate is even stricter than OSHA’s, requiring that standards be set at levels that protect the public health “with an ample margin of safety” and without regard to costs. Because literal compliance with this criterion might require zero exposure in the case of carcinogens, it is hardly surprising that EPA has failed to comply. Environmentalists complain about the extent of the lapses, and businesses complain about overzealousness, pointing to figures that indicate, for example, that EPA’s most moderate option for benzene would on the average cost over $4 million per cancer death averted.

The Toxic Substances Control Act, passed in 1976 largely to fill the gaps in the existing regulatory framework, authorizes EPA to regulate the use of 55,000 potentially hazardous existing chemicals. Rules have been established for asbestos, PCBs, and chlorofluorocarbons. In addition, the act established an Interagency Testing Committee to recommend chemicals for testing and authorized EPA to order manufacturers to perform the tests. Every six months since October 1977 the committee has updated its recommendations. But so far EPA, which is supposed to respond within twelve months, has addressed only three of the forty-two items on the committee’s priority list. Although the agency is under court order to come closer to the act’s timetable, few people expect it to have much success.

The Reasons for Underregulation

Why have OSHA and EPA set so few standards? There are many answers—beginning with the normal start-up difficulties any agency encounters in its initial years, complicated by attempts at both agencies to draw up generic policies for regulating carcinogens. A second factor is the shortage of good personnel, which restricts the scope of activities and increases oversight burdens on top scientific staff. A third factor is the temptation to resist calls for immediate action because of uncertainty about costs, technology, and, especially, health effects.

The desire to wait for better information and the willingness to delay until it can be obtained are both related to the issue of regulatory strictness. Not only is it desirable, as discussed above, that there be an inverse relationship between the intensity and scope of regulation, but it also seems politically inevitable. That is to say, in all probability strictness has been one cause of the paucity of new standards.

OSHA is a case in point. Until the Reagan administration, OSHA typically sought out the strictest standard that would hold up in court. The need for “substantial evidence” to support such strictness before the court—and (since the mid-1970s) before White House reviewers as well—caused rulemakings to run longer and consume more staff resources than if the reasonableness of the rules had been more readily demonstrable.

For an example of how statutory strictness contributes to delay, take EPA’s regulation of vinyl chloride under the Clean Air Act. David Doniger has observed that, because the agency was reluctant “to either flout the literal meaning of Section 112 or to set a standard effectively closing the … industries [involved],” it put off setting a standard for almost three years after the discovery that vinyl chloride was a human carcinogen. Section 4 of the Toxic Substances Control Act presents similar problems. According to chemical industry representatives, its rigorous and comprehensive approach prevents EPA from producing more than two or three test rules a year.
“Section 4 will work only to the extent it isn’t used,” says Peter Barton Hutt, attorney for the Chemical Manufacturers’ Association—suggesting that informal bargaining agreements between industry and EPA will lead to a lot more testing results than the agency could ever get through formal rules.

In the larger political arena as well, it seems likely that there are long-run trade-offs between intensity and scope. Industry complaints about overregulation are less likely to elicit White House and congressional sympathy when there are only a few highly protective standards to complain about. But the political and symbolic attractions of strict protection may wear thin when that kind of regulation becomes more extensive. Many who would accept a decision to require a major industry to spend $1 billion a year to prevent 100 fatal cancers a year—or $10 million each—would probably balk at proposals to spend $100 billion to prevent 10,000 cancers—also $10 million each. This reasoning suggests the existence of some implicit “regulatory budget”—without, however, suggesting what its limits are.

Making Regulation More Extensive

Assuming that it would be good policy to increase the extensiveness or scope of regulation, how could it best be done? In the abstract there are two ways. One is to reduce the resources required for individual rules so that each takes less time. The other is to increase the total resources devoted to rulemaking so that more proceedings can be undertaken at the same time.

Simplifying Rulemaking Requirements. Taking the two approaches in order, can individual rulemakings be speeded up, or is that like trying to speed up a Beethoven quartet by playing it twice as fast? The answer depends largely on whether important new information will be developed during the rulemaking process. If that is likely to happen, quickness could be a liability, especially given the difficulties of re-regulating. But based on OSHA’s experience, the likelihood is small: while new health information often triggers a rulemaking, such information usually does not appear during the several years between the proposal and promulgation of a rule. Moreover, it would be even less likely to do so under a proposal for “tiered testing.” Because such a scheme involves successive reviews to screen out proposed rules having low net benefits, it would produce more information before publication of the proposed rule. Note, however, that the use of a priority-setting mechanism to make preliminary assessments of the costs and benefits of hazards would itself require additional resources.

Less strict standards would take less time and consume fewer resources because industry is less likely to challenge them. The only major OSHA rule not litigated by industry was the 1977 acrylonitrile standard, which was adopted at the height of presidential concern about regulatory costs and which did not push the required exposure reduction to the limits of technical feasibility. On the other hand, of course, less strict standards are more likely to be challenged by environmentalist and labor groups—a response, however, that would probably be less than fully offsetting because those groups have fewer legal resources at their command. As a practical matter, if OSHA and EPA could be assured freedom from litigation, the degree of proof they would have to amass would decline and the rulemaking could be hastened.
But again, this assurance would be lacking for any particular standard, unless industry on the one hand and labor or the environmentalists on the other could reach some overarching agreement to sue only if certain bounds had been exceeded. And such agreements would be difficult to reach, because they would be difficult to enforce. Enforcement would probably be less of a problem for labor and the environmentalists, given their greater centralization and cohesiveness. But on industry’s side, almost any maverick firm could afford litigation that would upset any bargains.

Increasing Rulemaking Resources. While speeding up individual rulemakings might produce some gains in extensiveness, greater gains could probably be achieved by doubling or tripling standard-setting budgets. For OSHA and for EPA’s toxic substances activity combined, this would entail at most about $100 million. Yet this strategy too faces major problems. Why should industry groups, which now have a sympathetic ear in the White House, go along? Why should they be interested in trading off a few strict regulations for a significantly larger number of less strict regulations? Why not just oppose the new regulations? Of course, these questions ignore the mixture of reasons—social responsibility, fear of liability, a desire for a competitive advantage—that can spur firms to accept some form and level of regulation, although usually much less than EPA and OSHA have imposed.

One further reason for firms to consider this trade-off could be the desire to weaken statutory barriers to less strict standard-setting. For example, following the Supreme Court’s cotton-dust decision, a Wall Street Journal editorial called for amending OSHA’s mandate to allow the agency to perform cost-benefit analysis. A related factor could be the recognition that, since the President’s ear might not always be so sympathetic, explicit statutory constraints on policy makers (cost-benefit-balancing requirements, for example) would be useful in the long run.

Whatever the legislative goal, bargaining would be required. Industry groups might decide to delay until after the 1982 elections, hoping that the next Congress will be more hospitable. Or they might conclude that any constraints likely to be enacted now or in the future would be too feeble to justify the concessions needed to bring them about. In the case of labor and environmental groups, bargaining advocates would have to contend with the symbolism that those groups attach to the idea of strict regulation. And even if that did not prove insurmountable—even if those groups agreed that a more extensive, less strict strategy would produce more health benefits in the short run—they might argue that the stricter strategy would be more protective in the long run. This view would seem to depend on the assumption that new hazards eventually stop emerging and that the years of an individual’s life count roughly the same whether they are saved sooner or later.

More practically, it is difficult to negotiate revisions in a statute in exchange for bigger budgets for regulators. Appropriations are annual and, even though today’s increment becomes tomorrow’s budgetary base, the increase may not provide enough security to justify giving up valued language, especially in an atmosphere poisoned by mistrust. Another approach could be to amend the statutes to require that a certain number of hazards be regulated each year (as some have suggested for Section 112 of the Clean Air Act).
While this strategy leaves open the problem of having enough funds and people to implement the mandate, the existence of the requirement might make Congress somewhat more willing to provide those needed resources. In the case of OSHA, labor might agree to some relaxation in strictness in return for explicit statutory requirements that workers removed for medical reasons be guaranteed other jobs at equal pay and seniority.

The obstacles to achieving any of these political bargains are obviously substantial. As in the case of another major regulatory change—airline deregulation—success would probably depend on whether people within the administration were willing and able to get the regulatory agencies to begin the process themselves. At the moment the chances for this do not look good. Bigger budgets for rulemaking are not exactly a top priority.

Conclusion

The conceptual argument that the scope and intensiveness of environmental health and safety regulation should be inversely related seems strong. The companion suggestion that, in the past, the intensiveness of our regulation has caused us to forgo substantial health benefits is supported by a much weaker empirical case. What is needed to explore this question further is the same rough-cut benefit-cost analysis that underlies any intelligent decision on what to regulate and how strict to be. That research deserves a high priority—for it could lead to a regulatory system that produced more health and safety at far less cost to businesses and other private institutions. If all we have done by 1984 or 1988 is reassess the choices of the 1970s, the cry of underregulation could become as damaging in those elections as the cry of overregulation was in the one just past. ■

John Mendeloff is a policy analyst at the University of California, San Diego.
Subject: Uncategorized
URL: https://www.aei.org/articles/does-overregulation-cause-underregulation-the-case-of-toxic-substances/
Source think tank: American Enterprise Institute (United States)
Resource type: Think tank publication
Item identifier: http://119.78.100.153/handle/2XGU8XDN/235299
Recommended citation (GB/T 7714):
John Mendeloff. DOES OVERREGULATION CAUSE UNDERREGULATION? The Case of Toxic Substances. 1981.