Exorcists vs. Gatekeepers in Risk Regulation

Source type | Article |
Document type | Commentary |
Author | Peter Huber |
Publication date | 1983-11-01 |
Publication year | 1983 |
Language | English |
Regulating health and safety hazards is an endeavor fraught with two risks of its own. Regulation may impede risk-reducing change, freezing us into a hazardous present when a safer future beckons. Worse still, as with the Hydra's head, when one risk is removed, two others often grow up in its place.

It is commonplace to observe that risk is ubiquitous and inescapable. Every insurance company knows that life is growing safer, but the public is firmly convinced that living is becoming ever more hazardous. Congress, understandably enough, has been more interested in the opinion polls than in the actuarial tables.

A bountiful crop of federal health and safety regulation, most of it of recent harvest, reflects the popular concern. The risks of foods and drugs are subject to intricate and comprehensive legislation. The adverse health effects of air, water, and land pollution are the province of four major and equally complex environmental statutes. The risks of transporting people or hazardous cargoes are strictly regulated. The hazards of electric power generation are regulated by the Environmental Protection Agency (EPA) if the fuel is coal or oil, and by the Nuclear Regulatory Commission (NRC) if the fuel is nuclear. Consumer products have their very own safety commission; occupational safety and health has its own administration (OSHA). Among the myriad other special-purpose risk statutes are those devoted to household poisons, mine safety, natural gas pipelines, flammable fabrics, and lead paint.

I argue here that federal regulation of health and safety is not only a major obstacle to technological transformation and innovation but also often aggravates the hazards it is supposed to avoid.

Two Goals, Two Procedures

Risk regulation has two overarching goals—goals that are distinct and often contradictory. It aims, on the one hand, to reduce the "old" risks of our environment. I am referring here to risks that accompany such familiar activities as driving a car, digging for coal, or stepping out for a breath of air. On the other hand, risk regulation seeks to impede technological changes that threaten to introduce "new" hazards into our lives. I have in mind here risks associated with the likes of nuclear power, artificial food additives, and new toxic chemicals.

These two goals—the control of old risks and the exclusion of new ones—lead to profoundly different legislative commitments. The first is made when Congress wakes up one day to discover that things somewhere out there are intolerably hazardous. The resulting "something ought'a be done" laws are transformational: they demand a change in the established order—clean-up programs, if you will. The second kind of commitment is the child of a Panglossian dream, in which Congress sees the ominous unknown encroaching on this safest of all possible worlds. So the "don't let it happen" laws are exclusionary: they demand protection of the presumptively safe status quo—like antilittering programs.

The two different legislative objectives spawn two quite different regulatory procedures: "standard setting" and "screening." Under a standard-setting regime, reserved for old risks, you go about your business until Washington, in its own good time, comes to you and tells you how to do it better. OSHA is an example of a standard-setting agency. "Screening," which applies to new hazards, is regulation by advance licensing: before undertaking a new venture, you go to Washington to ask for permission.
The Food and Drug Administration (FDA) is a screening agency. OSHA and the FDA regulate chemically similar toxins, but they use fundamentally different regulatory tools. Standard setting is initiated by the regulatory agency; if a standard-setting agency promulgates a standard based on inadequate scientific evidence of the underlying risk, the standard will be thrown out by the courts. Screening places the burden of initiating the regulatory process on the regulatee; a screening agency can survive a judicial challenge by proving its complete ignorance about the hazard involved. It is up to the would-be licensee, the person trying to pass through the screening system, to prove that the screened product is acceptable.

Standard setting is an incremental, transformational approach. Standard-setting agencies aspire to a safer world: they exorcise the devils we know. Screening agencies, on the other hand, serve to protect the universe of risk from deterioration: they act as guardians at the gate, making yes-no kinds of decisions, protecting us from the ominous unknown.

Congress generally decrees that standards shall be set for old products, old sources of risk, and that screening will be used to regulate new products, new risks. Standard setting is reserved for our "familiar killers"—risks that society has come to tolerate before the decision to regulate is reached. Screening regulates new risks that loom on the horizon—risks that threaten to undermine the perceived safety of the status quo. Thus, we set standards for cars, but screen aircraft. We set standards to control the old hazards of burning coal, but screen new nuclear power plants. Under the Toxic Substances Control Act (TOSCA), EPA is supposed to screen all major new production of "new" chemicals, but is directed merely to set standards for the production and handling of old ones. EPA screens new pesticides but for the most part leaves the old ones alone. Numerous other examples could be cited. Of course, some statutes, like the Clean Air Act, combine elements of standard setting (in establishing ambient standards) and screening (in setting individual new-source emission limits). But overall, the old/new line falls remarkably close to the standard-setting/screening division.

Indeed, Congress exerts itself mightily to preserve the division. The FDA, my basic example of a screening agency, regulates both old food hazards and new ones. And OSHA, my basic standard-setting agency, regulates both new work-place risks and old ones. But the old-new regulatory division is carefully codified at a second level, within each agency's statutory charter. The FDA's regulation of foods, for example, is rigidly subdivided among natural foods (very "old"), food additives ("new"), and a curious group of substances "generally recognized as safe" (GRAS). GRAS substances are those that were "old" and therefore nonthreatening when the FDA polled the scientific community on the matter in the late 1950s. TOSCA similarly divides the regulatory universe of toxic chemicals between old chemicals—in this case, chemicals in significant use before 1973—and new ones.
The Clean Air Act calls for the screening of new major sources of pollution, but only sets standards for old ones. The Federal Water Pollution Control Act contemplates more stringent regulation of new emitters than of old ones. New pesticides are regulated more severely than old ones. Again, numerous other examples can be found.

Process and Reality

So what? Who cares if the procedures for regulating old and new risks are different? The answer, I think, is found in the words of Alfred North Whitehead: "The process is itself the actuality." There is a difference between Mohammed going to the mountain and the mountain coming to Mohammed. Procedures do make a difference.

The Supreme Court's decision in Industrial Union Department v. American Petroleum Institute (1980)—the benzene case—was about procedures. OSHA had come to realize that regulating occupational exposures to carcinogens through standard setting is difficult and time-consuming. So it set about promulgating its own in-house Delaney Amendment. Under its proposed carcinogen policy, no employer could introduce into a work place chemicals that had been found to be carcinogenic in test animals. OSHA would make no assessment of actual risk to humans; it would be up to employers to prove, if they could, that non-zero occupational exposures to animal carcinogens were safe. Through the magic of the Federal Register, OSHA would turn itself into a screening agency, shifting the burden of proof from the agency to the regulatee. OSHA's benzene standard, though promulgated just before the agency's official carcinogen policy, reflected the evolving philosophy.

But the Supreme Court would not go along. It ruled, in effect, that OSHA was constituted as a standard-setting agency and would have to behave like one. It is up to OSHA to demonstrate that its standards will mitigate a "significant risk," not up to employers to show that their work places are "safe." For a standard-setting agency this result made perfect sense. For a screening agency it would have been extraordinary. When the FDA declines to license a new food additive, and thereby effectively bans it, the agency is not required to show that the additive poses a significant risk. The FDA may simply insist that it is ignorant—that safety has not been proven by the regulatee to the agency's satisfaction. The same is true for the NRC when it declines to grant an operating license to a new plant, for the FAA when it holds up the licensing of a new aircraft, for the EPA when it declines to license a new pesticide, and for any other screening agency when the information available does not support an affirmative finding of acceptability.

The different procedures for regulating old and new risks—standard setting and screening—can thus have profoundly different substantive consequences. Screening, first of all, regulates at the "strict" margin of scientific uncertainty; standard setting regulates at the "lenient" margin. A screening system admits only the "acceptably safe," while a standard-setting system excludes only the "unacceptably hazardous." There is often a wide gap between those two criteria.
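The asymmetry can be put almost mechanically. Below is a minimal sketch, with invented thresholds and an invented uncertainty interval (none of these numbers come from the essay), of how the two burdens of proof resolve the same scientific uncertainty in opposite directions.

```python
# Science offers only an uncertainty interval (risk_low, risk_high) for a
# product's risk. The two regulatory regimes read that interval differently.

def screening_admits(risk_low: float, risk_high: float,
                     safe_threshold: float = 0.2) -> bool:
    """A screening agency admits a product only if it is proven acceptably
    safe: even the worst case must fall below the bar."""
    return risk_high < safe_threshold

def standard_setting_intervenes(risk_low: float, risk_high: float,
                                hazard_threshold: float = 0.8) -> bool:
    """A standard-setting agency acts only against proven significant risk:
    even the best case must exceed the bar."""
    return risk_low > hazard_threshold

# A product whose risk is genuinely uncertain, say between 0.3 and 0.6:
low, high = 0.3, 0.6
print(screening_admits(low, high))             # False: a new product is kept out
print(standard_setting_intervenes(low, high))  # False: an old product is left alone
```

Any product whose uncertainty interval falls in the gap between the two bars is excluded if it is new and screened, but tolerated if it is old and merely subject to standards.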
Screening systems also place the cost of acquiring the information needed for regulation on the regulatee; standard-setting systems place that cost on the agency. This makes all the difference when the product or process targeted for regulation is only marginally profitable. A pesticide manufacturer may have to spend $20 million on the tests needed for licensing. Even if the pesticide is completely safe, it will never even be submitted for review if the manufacturer stands to make only $19 million from its sale.
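The submission arithmetic, and the economies of scale taken up next, fit in a few lines. This sketch reuses the essay's $20 million and $19 million figures; the per-license approval cost is an assumed number, for illustration only.

```python
# The regulatee, not the regulator, decides whether a product ever reaches
# the screening system; safety never enters the decision.

def will_submit(expected_revenue: float, screening_cost: float) -> bool:
    """Submit a product for screening only if expected sales cover the
    cost of being screened."""
    return expected_revenue > screening_cost

# The essay's pesticide: $19M in expected sales vs. $20M in mandated tests.
print(will_submit(19e6, 20e6))  # False: a perfectly safe product dies unsubmitted

# Economies of scale in being screened: if approval is priced per license
# rather than per megawatt, one big plant beats two small ones.
cost_per_license = 50e6                      # assumed figure, illustration only
one_1000mw_plant = 1 * cost_per_license
two_500mw_plants = 2 * cost_per_license
print(one_1000mw_plant < two_500mw_plants)   # True: screening rewards bigness
```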
The cost problem also impels screening systems to favor big-ticket products and operations—a broad-spectrum drug, a new pesticide that will kill everything from aphids to dung beetles, the largest nuclear power plants. As in most other ventures, there are economies of scale in paying the price of being screened. Securing regulatory approval of a single 1,000 MW power plant will certainly cost less than securing approval of two 500 MW plants. So our nuclear plants tend to get bigger and bigger, our pesticides less and less specific. Standard-setting systems, in contrast, tend to place the greatest burdens on the largest regulatory targets, because it is there that the standard-setting agency can have the biggest impact. A small generator of an unusual type of risk is often beneath the standard-setting agency's attention.

Another component of cost is delay. Under a screening system, it is the regulatee who bears the risk and cost of regulatory delay. Delay postpones the return on R&D costs and allows the clock to tick on crucial patents. In standard setting, delay postpones the cost of compliance until an agency acts—which may mean forever, especially if you have a good lawyer litigating avidly on your side.

The final and most important difference between standard setting and screening—that is, between the regulation of old and new risks—is found in the statutory criteria for regulation. Standard-setting statutes almost always limit in some manner the costs that a regulatory scheme may impose on regulatees. Screening statutes rarely contain analogous cost-conscious provisions. I could march through the kinds of opaque statutory provisions I have in mind, but this is an exercise for footnoters, and moreover one I have recently completed elsewhere ("The Old-New Division in Risk Regulation," University of Virginia Law Review, September 1983). Moreover, Congress is always embarrassed, and therefore somewhat reticent, when a crass consideration like money must be injected into a risk statute.

Let me instead offer just one example. In the famous cotton-dust case, American Textile Manufacturers Institute, Inc. v. Donovan (1981), the magic statutory term was "feasible." OSHA standards may be strict, but they must be "feasible." The Supreme Court rejected a claim by textile manufacturers that OSHA's new cotton-dust standard should be invalidated because it was not grounded on a cost-benefit analysis. "Feasible," the Court said, does not mean justified in formal cost-benefit terms. But what is at least equally striking about the case is what "feasible" does mean. The Court found the term to require that "the industry as a whole will not be threatened by the capital requirements of the regulation." The Court therefore approved a standard that, according to OSHA's own estimates, will permit a continuing incidence of byssinosis among 15 percent of textile workers. Any stricter standard would not be "feasible," because it would cost the industry too much. At least when compared with the typical screening statute, OSHA's statutory mandate is strikingly cost-conscious.

The NRC, by way of contrast, is certainly not required, and quite possibly not even permitted, to consider economic impacts when it withholds the Diablo Canyon license, or for that matter when it freezes out all future development of nuclear power. The same is true for the FDA when it declines to license a new food additive, and for many other screening agencies. For those screening agencies that are required to weigh costs and benefits, burdens of proof usually remain with the regulatees, so that uncertainties about cost or benefit are consistently chalked up against the proposed new product or process.

Origin of the Double Standard

Old risks subject to standards are systematically treated more leniently than new risks that are screened. What accounts for the double standard?

Some suggest that informational problems are at the root of the division: we set standards for old risks because they are familiar and therefore well understood, and we screen new hazards because we know less about them. Yet those in the business know that informational problems are pervasive even for hazards as old as asbestos and wood fires. Others have suggested that the psychological dimension of risk accounts for the old-new division—"rare catastrophes" provoke different legislative responses than "common killers." Again I am skeptical; rare catastrophes are caused by old sources of risk every bit as much as by new ones. Somewhat more convincing is Robert Crandall's suggestion that the old-new division results from the raw politics of competition between the industrially old, politically powerful Frost Belt and the industrially new, less powerful Sun Belt (Controlling Industrial Pollution, 1983).

Though all of these factors undoubtedly play some role, I am convinced that the old-new division is primarily attributable to something much more pedestrian: Congress thinks that it is much more expensive to regulate old risks than new ones. That belief is understandable enough. Cleaning up the risk environment requires direct cash outlays. Regulated industries rebel at these transition costs; consumers are dismayed to lose products to which they have become habituated. People are usually of the view that it is better that things be settled than that they be settled right. In contrast, excluding new risky products or activities seems relatively painless. Manufacturers do not have to readjust production processes; consumers do not have to change established patterns of consumption. The only cost incurred by the regulation of new types of risk is the price society pays whenever it decides not to do something—a lost opportunity cost. Congress, it seems plain, systematically judges this type of cost to be relatively small, or at least obscure.

Congress's belief that it is cheaper to exclude one unit of new risk than to neutralize one unit of old risk is both plainly wrong and readily understandable. It is plainly wrong because lost opportunity costs are not uniformly negligible. To cite just one example, uniquely therapeutic drugs are often licensed in this country years after they are approved elsewhere; the people who lose the opportunity to be treated in the interim pay a very real price. More generally, this misapprehension about costs reflects the alarming view that there is little to be lost in obstructing technological and scientific change.
But Congress's view about costs is also readily understandable, because legislators care more about political costs than economic ones. Old risks derive from established technology, and their regulation presents unwelcome production and consumption choices. Old risks have identifiable and self-aware constituencies. In contrast, the regulation of new risks attracts much less political heat. Under a rigid, predictable screening system, industry loses little—it simply steers clear of the field. Consumers lose, of course, but—here is the political kicker—they do not know it.

Formula for Regression

To sum up, we have established a systemic preference for old sources of hazard and a systemic bias against new sources of risk. Imbue this system with the widely held belief that life is too dangerous, encourage it with vocal demands that life be made safer, and you have a formula for inexorable technological regression. First, everything is risky in some degree. Second, we decide to go after risks aggressively. Third, we determine that the cheap way to avoid risk is to exclude new risks—to cut back on our most novel products and processes. Finally, we set up a bifurcated regulatory process, one that is far more risk-averse and far less cost-conscious when it regulates new risks than when it regulates old ones. The older and more entrenched the status quo, the slower we are to regulate it strictly. The newer, the more speculative, the more innovative the regulatory target, the more likely we are to take a firm, no-risk, exclusionary stand.

Two things, I believe, have brought us to where we are now. First, there has been a change in the national mood. Somewhere along the way we lost our taste for technological exploration and adventure. Ours seems to be what Arthur Kantrowitz, an engineer and scientist, dubbed the era of "neo-Malthusianism." We share, he believes, a profound conviction that "mankind cannot manage the great power that it is able to unleash."

Second, we have progressively changed the way in which we regulate risk, and that has greatly affected the conclusions we reach about the acceptability of risk. There was a day when risks were regulated only after the accident, after the bodies had fallen, through liability rules administered by the courts. The incentive not to create a risk was that if the risk was an unreasonable one, you might end up paying compensation, and perhaps punitive damages, to the person you injured. This retrospective regulatory system was cumbersome; it diverted too much to the lawyers; it placed on injured persons an often insurmountable burden of proving causation; it was erratic and unpredictable. But it had one large advantage: to recover in the courts, you had to prove harm. A cardinal rule of tort litigation is that the courts do not compensate exposure to risk—the neighbors your dog doesn't bite; they compensate those who are bitten. This means, first, that risks have to be real before they are regulated by the courts, and second, that the "acceptability" of a risk is evaluated at a time when the social utility of the risk-creating activity is known.

But risk regulation is becoming an increasingly prospective business. Agency standard setting is the first step in this direction. Once a pattern of unacceptable harm becomes clear, an agency intervenes to mandate across-the-board correction.
Like a court, the standard-setting agency must have evidence of harm before it regulates. But unlike a court, the standard-setting agency regulates wholesale, not retail, once that evidence is found: both the suspect risk-creators and the proven harm-causers are regulated uniformly. Screening moves regulation yet another step forward in time. Screening regulation occurs before any pattern of harm is apparent or predictable; it is grounded on some generalized anxiety about risk in a particular area. A screening agency regulates not on the basis of proven harm, but on the basis of unproven safety. This is the ultimate step in prospective intervention—you cannot move regulation any earlier.

There are two central problems with pushing regulation earlier and earlier, as we seem determined to do. First, the earlier we regulate, the harder it is to assess the benefits of the product or activity regulated. A century ago people agitated to ban vaccination; it seems unlikely that the eventual eradication of smallpox figured prominently in the debate. More recently, we have witnessed attempts to curtail significantly experiments in genetic engineering. Who can begin to assess what benefits we would forgo if such research were in fact halted?

Second, the earlier we regulate, the harder it is to evaluate risk accurately. Of course, we regulate early precisely because we do not want to count bodies later. But without bodies it is very easy to overestimate risk, especially when the national mood is receptive to claims of new and lurid risk. Indeed, early regulation can become something of a self-fulfilling prophecy. We start with unfocused anxiety about a product and set up a strict regulatory regime. The public infers from that action that the product is especially dangerous. Enthusiasm for strict regulation grows, impelling the politically responsive agency to regulate even more strictly. And of course the public infers from the stricter regulatory regime that there is even more danger out there than originally thought.

Which brings me full circle. The paradox of risk regulation is that too much of it makes life more dangerous—not just more expensive, not just less convenient, but more dangerous. The introduction of new, safer products is slowed; safer (but not perfectly safe) products recently introduced to the market are driven out; and consumption shifts back to the old and common killers, which are entrenched and therefore too costly to regulate seriously.

Proposals for Change

What is to be done? The most popular reform proposal these days seems to be risk-benefit balancing—monetize both the injuries and the benefits of the hazardous activity, and then bring in chartered accountants to balance the books. I fear the proposal is up against insuperable political obstacles. Moreover, if you propose cost-benefit balancing, I ask: balancing by whom? The last thing a regulatee whose product is to be screened should want is an additional requirement that it prove the acceptability of the product in risk-benefit terms.

There are less ambitious reforms in the air. The patent term extension bill looks as if it will pass Congress next year. It would stop the clock from ticking on patents while regulatory review is in progress, and so remove at least one of the costs of being screened.
And the Orphan Drug Act, enacted a few months ago, streamlines and subsidizes the licensing of new drugs that treat very rare diseases. Until now, it often did not pay to try to push these so-called orphan drugs through the system. Other promising proposals would standardize the screening process in various fields. For example, if nuclear plants are ever built again in this country, the NRC will undoubtedly push for an extremely standard plant that can be approved once and then built by all. Again, the purpose would be to cut down on the staggering transaction costs associated with screening regulation. Finally, it is also occasionally proposed to force some cost-consciousness onto screening agencies. The proposal usually takes the form of a threshold risk criterion: screening agencies would be required to establish the likelihood of a given degree of harm before deciding to ban an established product from the marketplace.

I believe there is one politically feasible possibility for more far-reaching reform. One of the most common, and most profoundly fallacious, assumptions made in the risk-regulation trade is that new products and processes generally add to the risk burden of our environment. In fact, most new products do not "add to"; they "substitute for." Yet under most existing regulatory statutes, the agency is clearly and flatly prohibited from comparing the risks of a new product with the risks of the old products for which it will substitute.

Examples abound. The artificial sweetener saccharin, although thought to present some risk, has been kept legal by special act of Congress. After ten years of delay, the FDA recently approved a new dietetic sweetener called aspartame. But the agency had to establish that aspartame met an objective level of safety; it could not lawfully have approved aspartame simply by establishing that aspartame was safer than saccharin. The NRC, and its myriad consultants and contractors, have become extremely expert at estimating nuclear risks; understandably, nearly all their efforts are directed at estimating the risks of nuclear power. EPA devotes vastly fewer resources to assessing the risks of the nonnuclear alternatives that it regulates—coal power, for example. Neither agency is encouraged, nor perhaps even permitted, to base its regulatory decisions on a comparison of the risks presented by the alternative generating technologies.

Our regulatory system must find a way to recognize that most things in life are substitutes, not additions. The uncomfortable truth, widely ignored, is that banning one risky product may decrease societal risk, or it may increase it. It depends entirely on what is left behind. In recognition of this painful reality, risk agencies should be restructured around natural "risk markets." The hazards of all sources of electric power should be placed under one regulatory umbrella. We should discard the artificial regulatory divisions among "natural foods," food additives, and "GRAS" substances. In the area of occupational safety and health, the regulator must recognize that strict regulation of safer jobs tends to drive workers toward more hazardous ones.
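The substitution point reduces to a single comparison. Below is a minimal sketch, with invented risk numbers and an assumed one-for-one substitution rate; nothing in it is drawn from the essay beyond the logic itself.

```python
# Whether a ban lowers societal risk depends on the risk of whatever
# consumption shifts to, weighted by how much of the demand shifts.

def risk_change_from_ban(risk_banned: float, risk_substitute: float,
                         substitution_rate: float = 1.0) -> float:
    """Change in societal risk per unit of consumption when a product is
    banned and demand shifts to a substitute. Negative means safer."""
    return substitution_rate * risk_substitute - risk_banned

# Banning a product riskier than its substitute helps...
print(risk_change_from_ban(risk_banned=0.5, risk_substitute=0.2))  # -0.3
# ...but banning the safer member of the pair makes life more dangerous.
print(risk_change_from_ban(risk_banned=0.2, risk_substitute=0.5))  # +0.3
```

With substantial substitution, banning the safer member of a pair raises total societal risk, whichever member that turns out to be.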
We should abandon the artificial divisions between old and new drugs, old and new emitters of air pollutants, old and new pesticides, and old and new chemicals, at least when the new target for regulation promises to substitute for an old product or process. Functional substitutes should be regulated within a single agency, according to more or less uniform decisional criteria.

Reorganizing our risk agencies around natural risk markets would have some obvious advantages. First, a comparative approach to risk regulation can operate in perfect harmony with our reluctance to regulate old risks precipitately. If we are determined to proceed with circumspection in our regulation of old risks, those risks provide the perfect benchmark for a comparative system. Comparative regulation would also make risk regulation more credible: it is always much easier to compare risks than to make determinations of absolute safety. Critics might complain less about the overfeeding of rats if the data simply showed one group of live and healthy aspartame-fed rats and another group of dead or ailing saccharin-fed rats. Comparative regulation might also promote desirable comparison between so-called technologically enhanced risks and the all-natural hazards of our environment. Some state legislatures, for example, have passed nuclear waste disposal laws that, if applied literally, outlaw the excretion and disposal of human waste, which, like everything else on this middle earth, is mildly radioactive. Comparative risk regulation might help to deter such idiocies.

Finally, comparative regulation would help to avert the most intolerable of all possible risk regulations—regulations that aggravate the hazard they are supposed to mitigate. Again, historical examples of the problem are easy to come by; I shall recite only three.

• Cyclamates have been banned in this country, while saccharin has not. Canada has followed exactly the opposite course. One of us has banned the safer product and continues to use the more hazardous. Comparative regulation would impel the FDA to determine whether it might be we who have followed that irrational course. (Today, the FDA is not legally empowered to inquire how the risks of cyclamates compare with those of saccharin, once it has made the threshold determination that both are carcinogenic.)

• The high wall between the NRC's risk decisions and EPA's allows one agency—I will not venture to say which one—to pile increasingly strict regulations on the safer branch of the industry, with the effect of driving production toward the more dangerous branch. A single Electric Power Safety Administration might be less prone to accept that type of regression.

• Some time ago the FDA banned bottles made of acrylonitrile, because small amounts of the carcinogenic plastic leach into the drink. But an "all-natural" glass bottle containing soda under pressure has much in common with a hand grenade, with an unexpected defect in the glass playing the role of the firing pin. Before the advent of plastic bottles, exploding glass bottles caused tens of thousands of injuries in this country every year. Plastic bottles have been a great setback for the trial lawyers of America. Yet at no time was the FDA legally empowered to ask how the risks of acrylonitrile bottles compare with those of glass bottles.

The Reaction

The idea of comparing interchangeable sources of risk before deciding which to regulate, or how strictly, seems so simple, so obviously reasonable.
It was therefore with some surprise that I discovered that this proposal encounters vehement and vocal opposition. The criticism comes in subtle forms, but it has two basic refrains: risks are unmeasurable, and risks are incommensurable.

Unmeasurability. This has become quite a crusade. The arguments sound like this: Don't trust the experts. Don't believe any estimates of risk probabilities. Regulate according to maximum conceivable harm; ignore the likelihood of harm. Expand the definition of risk—I quote from one prominent commentator's recently published suggestion—to include all "sociopolitical, biological and geophysical conditions." This is, of course, intellectual rubbish that can be answered in short order. If risks are unmeasurable, then risk regulation is an utterly futile endeavor: you cannot rationally control what you cannot measure.

Incommensurability. This is a more popular, more credible, and more pernicious attack on comparative risk regulation. It runs something like this: Risks in the nature of carcinogens are special—the public demands particularly strict cancer control. Occupational hazards associated with the production of a hazardous product have attendant benefits …
Subject | Uncategorized |
URL | https://www.aei.org/articles/exorcists-vs-gatekeepers-in-risk-regulation/ |
Source think tank | American Enterprise Institute (United States) |
Resource type | Think tank publication |
Item identifier | http://119.78.100.153/handle/2XGU8XDN/235382 |
Recommended citation (GB/T 7714) | Huber, Peter. Exorcists vs. Gatekeepers in Risk Regulation. 1983. |