Major legislation regarding drugs and medical devices:
The Nineteenth and Early Twentieth Centuries
Biologics Act of 1902
Pure Food and Drugs Act of 1906
Harrison Narcotics Act of 1914
Food, Drug, and Cosmetic Act of 1938
Durham-Humphrey Amendment, 1951
Miller Pesticide Amendments of 1954
Food Additives Amendments of 1958
Color Additives Amendments of 1960
Kefauver-Harris Amendments of 1962
Animal Drug Amendments of 1968
Medical Device Amendments of 1976
Toxic Substances Control Act of 1976
Infant Formula Act of 1980
Orphan Drug Act of 1983
Waxman-Hatch Act of 1984
Drug Export Amendment Act of 1986
Nutrition Labeling and Education Act (NLEA) of 1990
Safe Medical Device Act (SMDA) of 1990
Prescription Drug User Fee Act (PDUFA) of 1992
Dietary Supplement Health and Education Act (DSHEA) of 1994
FDA Modernization Act of 1997
FDA Expansion Overseas in 2008
At the turn of the twentieth century, pharmacy was a young and immature science. Most drugs were still created by hand in a local pharmacy. Technologies to assess and create uniformity in drugs often did not exist. Indeed, a major task of nineteenth- and early twentieth-century pharmacy was to define what a drug was and to create standards of composition, purity, and strength. Pioneering efforts in this direction had begun in 1820 with the creation of the U.S. Pharmacopoeia (USP). A private, voluntary undertaking of physicians, pharmacists and colleges of pharmacy, the USP presented a formulary of compositions and listed chemical compounds, crude drugs, fixed oils, and other substances typically kept by a pharmacist (then called a pharmaceutist or an apothecary). Later the USP listed tests for determining purity. Leading pharmacists regularly revised the USP as new and better drugs, compositions, and tests were discovered and created. Medical men interested in advancing their crafts and the dignity of their professions formed themselves into state medical societies and pharmaceutical associations, the American Medical Association (AMA, 1848), and the American Pharmaceutical Association (APA, 1852). The major societies and associations often published journals, collaborated with medical schools, and sometimes maintained committees on drug adulteration to check drug samples and to publicize information. Pharmacists compiled the National Formulary, first published by the APA in 1888. The Formulary has functioned since 1896 to provide standards for drugs omitted from the USP and to serve as a proving ground for drugs eventually transferred to the USP. (On the earlier history of the USP and National Formulary see Sonnedecker 1970.)
But before the twentieth century there was no direct federal regulation of drugs or other consumer products. In 1848, Congress forbade the importation of adulterated drugs, but the law quickly became moribund, as the drug examiners were usually untrained political spoilsmen (Young 1970, 151). The Reconstruction years saw the formation of the U.S. Department of Agriculture Bureau of Chemistry, the predecessor of the FDA. Consisting of only a few men, the bureau did little more than request customs inspections of imported foods and to a lesser extent drugs. In 1883, the bureau got a righteous and rambunctious chief in Harvey Washington Wiley, who campaigned for federal laws.
A common pattern in the history of federal drug control has been the shocking event that unleashes new governmental powers. In 1901, contaminated smallpox vaccines and diphtheria antitoxins led to tetanus outbreaks and the deaths of several children. Vaccines, blood and blood products, extracts of living cells, and other drugs belong to a category called biological drugs. The Biologics Act of 1902 required that the federal government grant premarket approval for every biological drug and for the process and facility producing such drugs. Never before had such premarket control existed in the United States. The same premarket authority was enacted for animal biological drugs in the Virus, Serum, and Toxin Act of 1913 (Miller 2000).
In 1906, Upton Sinclair’s book The Jungle described the filthy conditions of a meatpacking plant. In the most shocking incident, a worker collapses into a lard canister and is indiscriminately ground and shipped for sale. At about the same time in the nonfiction world, Chemistry Bureau Chief Harvey Wiley recruited a group of young men into “the Poison Squad.” The squad volunteers ingested formaldehyde, boric acid, and other food preservatives and colorings in concentrated form. Eventually their digestive systems showed ill effects. Wiley’s dramatic stunts earned him the nickname “the Crusader” and inspired popular songs about his patriotic self-poisoning disciples. The combination of the Poison Squad and The Jungle prompted Congress to pass the Pure Food and Drugs Act in 1906.
The 1906 law recognized the privately produced U.S. Pharmacopoeia and National Formulary as official standards for the strength, quality, and purity of drugs and for the tests to make such determinations. Thus, the 1906 law defined an adulterated drug as a drug that was listed in the USP but that did not meet USP specifications (unless variations from the USP were clearly labeled). Practitioners, however, had given the USP “official” status long before USP specifications were anointed by law. In addition, several states had already made reference to the USP in defining adulteration. Thus, the federal law mandated what was already practiced widely, though not universally.
The 1906 law included provisions against “misbranding.” A drug was considered misbranded if it contained alcohol, morphine, opium, cocaine, or any of several other potentially dangerous or addictive drugs, and if its label failed to indicate the quantity or proportion of such drugs. (The law pertained only to labeling, not to advertising.)
For consumers, the main result of the 1906 law was not to restrict choices but to provide more information. In addition, the scientists of the Bureau of Chemistry performed useful and important work in developing assays and tests to help identify and purify drugs and to make production uniform. Because the feds could not wield coercive premarket power, the Chemistry Bureau and industry trusted each other and cooperated to improve drug manufacturing.
The clause against “false and misleading” labeling was, however, initially used in an aggressive manner. Under this clause, federal enforcers prosecuted many manufacturers who sold “cures” for headache, baldness, cancer, and other ailments. Prosecutions typically resulted in confiscation and small fines. When the government prosecuted “Dr. Johnson’s Mild Combination Treatment for Cancer,” Dr. Johnson, the manufacturer, fought back, taking the case to the Supreme Court. The Court agreed with Johnson, ruling that any therapeutic claim is a matter of opinion. It follows that there exists no authoritative medical opinion that coercively overrides others, and hence no charges can be brought against therapeutic claims unless the sellers actually intended fraud. Justice Oliver Wendell Holmes affirmed that the purpose of the law was “to regulate commerce in food and drugs with reference to plain matter of fact, so that foods should be what they professed to be[,] . . . [rather] than to distort the uses of its Constitutional power to establishing criteria in regions where opinions are far apart” (quoted in Temin 1980, 33). Seeking to overcome the limitations identified by the Supreme Court, Congress passed the Sherley Amendment in 1912, but this amendment banned only “false and fraudulent” claims, i.e., claims that the seller knew to be false, and thus it failed to deliver the sought-for expansion of power. Moreover, the act did not extend the police power against “false and fraudulent” claims to advertising.
Following the Supreme Court’s ruling and the Sherley Amendment, the FDA’s function was effectively limited to monitoring the identity of drugs. Nonetheless, the developments of this early period set a precedent for federal government activism in medicine.
Meanwhile, in 1927 the regulatory functions of the Bureau of Chemistry were reorganized to become the Food, Drug, and Insecticide Administration, which in 1930 changed its name to the Food and Drug Administration.
The Harrison Narcotics Act of 1914 placed a tax on the production, sale, and use of opium and required prescriptions for products exceeding the allowable limit of narcotics. This act also mandated increased record keeping for physicians and pharmacists who dispense narcotics. Initially passed to ensure the orderly marketing of narcotics, the act was later interpreted to prohibit the supply of narcotics, even to addicts on a physician’s prescription. Under the Harrison Act, thousands of physicians were imprisoned for prescribing narcotics.
Under the new administration of Franklin D. Roosevelt, the FDA immediately began pressing for more regulatory powers, but not much happened with respect to drugs until the next shocking episode. In 1937, a well-established pharmaceutical company, Massengill, released a new sulfa drug (an antibacterial) under the name Elixir Sulfanilamide. The drug itself had undergone a variety of quality and safety checks, but in producing a liquid form the company failed to test the solvent. Possessing a pleasant green shade, this solvent, diethylene glycol—better known today as antifreeze—had deadly effects on the kidneys. As a result, 107 people, mostly children, died before the product was quickly recalled.
The Elixir story is often told to lament the meagerness of federal control. The FDA could prosecute Massengill merely for misbranding: the product Elixir Sulfanilamide did not contain alcohol and therefore did not fit the definition of an elixir. Nothing could be done about the Elixir deaths, runs the usual lament, because Massengill had not made fraudulent claims for the product. The tragedy is said to have demonstrated that unfettered markets cause reckless injury and that public safety called for additional laws.
That telling of the story, however, entirely overlooks the role of tort law in providing compensation to victims and in promoting deterrence. At the time of the Elixir episode, the common law did provide remedies for harm from misbranded or adulterated drugs, and Massengill was successfully sued in tort for its gross negligence (Krauss 1996). The chemist responsible for creating Elixir Sulfanilamide committed suicide.
Within months of the tragedy, Congress passed the Food, Drug, and Cosmetic Act of 1938. The Constitution does not give the federal government any power to regulate drugs. As has often been the case, the wedge into federal control was the federal government’s power to regulate interstate commerce. The laws regulating drugs were thus written so as to apply only to drugs used or produced in more than one state. Given the broad—sometimes absurdly broad—construction the courts have given to the interstate commerce clause, in practice the law regulates every drug despite the original intent of the framers.
The 1938 Act included several provisions that would prove to be significant wedges for future power expansion. The most salient change was the requirement that manufacturers file a New Drug Application (NDA) with the FDA. The application would indicate the drug’s composition, report test results on safety, and describe how the drug was to be manufactured and quality controlled. If a company submitted an NDA, it would be automatically approved in sixty days if the FDA took no action. Thus, the default position was approval; the burden of proof for departure from the default position fell on the FDA, and as a result the costs of the FDA to the public were kept low.
To some extent, the 1938 Act continued the information provision requirements of the 1906 Act. The classification “misbranded” was expanded, for example, and now included any drug whose label failed to identify and quantify the precise ingredients, to list effects and possible side effects, and to give directions and cautionary information that even the least-educated person could understand.
Other provisions did restrict choice and reduce consumer information. Proof of fraud was no longer required to stop “false” claims for drugs. Falsity would, of course, mean “deemed by the FDA to be false.” Thus was erected the pedestal of authoritative knowledge regarding truth and falsehood for all users in all situations.
An obscure provision of the Food, Drug, and Cosmetic Act, combined with a series of subsequent FDA regulatory decisions, had the effect of creating a new class of prescription-only drugs. The original labeling laws were meant to provide more information to consumers and thus to improve their ability to make good decisions, and Congress intended the act to further that goal. Indeed, the House committee reporting on the bill declared explicitly that “[this] bill is not intended to restrict in any way the availability of drugs for self medication. On the contrary, it is intended to make self medication safer and more effective” (quoted in Temin 1992, 351). Yet the FDA decided that some drugs could not be labeled safely. Thus, in some cases, the FDA required that drugs be labeled in such a way that a consumer could not understand them or else that they be labeled only with the warning “Caution: To be used only by or on the prescription of a physician.” In the latter case, sale without prescription was illegal.
Traditionally the manufacturer decided whether a drug was a prescription or an over-the-counter (OTC) drug, and sometimes one manufacturer sold a drug by prescription only while another sold the same drug over the counter. Now, however, manufacturers were subject to considerable uncertainty because they had to guess whether the FDA would deem a drug prescription-only or OTC. If a manufacturer thought consumers could properly use a drug, and thus labeled it and sold it OTC, the FDA might disagree, remove the product, and sue the manufacturer for misbranding. Indeed, if a patient misused the drug, even deliberately, the manufacturer bore all resulting legal responsibility. Divided authority began to stifle entrepreneurship. The disarray continued until 1951, when Congress passed the Durham-Humphrey Amendment (see below).
The 1938 Act also expanded the FDA’s powers over medical devices. Although the FDA could not prevent a medical device from coming onto the market, as it could for drugs, it did have the authority to ask the courts to stop the production or sale of devices already entered into interstate commerce. Under this authority, the FDA removed a number of quack devices from the market (Higgs 1995c).
The Durham-Humphrey Amendment drew a clearer legal distinction between prescription-only and OTC drugs, and authorized the FDA to classify drugs accordingly. Many important drugs could be sold only by prescription from a licensed practitioner. Licensed doctors, therefore, became deputies and spoilsmen in the growing system of controls. Consumers had to pay for the drug and a visit to the doctor. These new privileges for doctors were the bounty of the government’s regimentation of the drug industry and assault on consumers’ freedom to self-medicate. Dependence on doctors was further institutionalized and legitimated by making it difficult for consumers to gain information, in particular by the labeling and advertising controls that prohibited information or mandated unintelligibility. Thus, licensed doctors gained wealth and relative status by stripping others of freedom and by dumbing down consumers. (Under this amendment, manufacturers still had discretion over the classification of already approved, non-habit-forming drugs, which were to come under FDA control in 1962.)
These amendments required premarket approval of pesticide residues in or on food.
The Food Additives Amendment required premarket approval of food additives. The FDA later used this authority to regulate dietary supplements, but such authority was removed with the Dietary Supplement Health and Education Act (DSHEA) of 1994.
The Color Additive Amendments required premarket approval of color additives. The so-called Delaney anticancer clause prohibits the FDA from approving any color additive that has been found to cause cancer in humans or animals regardless of the dosage levels. As a result, substances that cause cancer when rats ingest hundreds or thousands of times the typical human dose have been banned from food products. (See Wildavsky 2000 on the inappropriate use of animal studies in the regulation of carcinogens.)
In the post–World War II era, the field of pharmacology entered a new age. People with bacterial illnesses could now be treated with a host of new antibiotics, and diabetics were likewise given the life-saving invention of insulin. In the 1950s in particular, many new drugs were called “magic bullets” because of their potency and swift defeat of disease. The very success of the new drugs, however, spurred new regulations.
Senator Estes Kefauver, who sat on the Senate Antitrust and Monopoly Subcommittee, decided that in dealing with medications, the government must do more than control their labels, contents, and safety and their marketing and distribution processes. It must also control their prices and enforce “competition.” In 1960, Kefauver initiated hearings in an attempt to expose unfair marketing practices. Kefauver’s bill called for a scheme of compulsory patent sharing. Each pharmaceutical company would, after three years, be required to share its new patents with competitors, while collecting an annual royalty fee of some 8 percent of the total. Although Kefauver’s main concern was pricing, another provision called for NDAs to show proof of both safety and efficacy.
Though President Kennedy spoke fondly of the “safety and efficacy” clause, the Kefauver bill lacked popularity and went nowhere. As with the acts of 1902, 1906, and 1938, another tragedy paved the way to passage. The tragedy was so great, so sensitive, and so graphically shocking that it still evokes strong emotions and arrests intellectual discourse.
In 1957, a West German pharmaceutical manufacturer introduced a new sedative, thalidomide, which alleviated the symptoms of morning sickness in women during the first trimester of pregnancy. In 1962, by which time the drug had been sold in forty-six countries, it became clear that thalidomide damaged the fetus, causing stillbirth or, most commonly, phocomelia (Greek for “seal limb”). Thousands of newborn babies were found to have truncated limbs that resembled flippers. By virtue of photojournalism, the horror and sadness were shared throughout the world.
In the United States, an NDA for thalidomide had been submitted to the FDA in 1960, but approval had been delayed as the FDA investigated adverse neurological reactions. FDA officials had not even suspected that the drug caused birth defects. In 1962, President Kennedy bestowed the Distinguished Federal Civil Service Award on the FDA physician who held up approval, Frances Kelsey, even though her withholding of approval was more a matter of bureaucratic delay than of investigation (Harris 1992).
“Thalidomide babies” became a bludgeon for urging stronger government action. The Kefauver bill was revised so that the pricing and patent-sharing provisions were deleted, and the Kefauver-Harris Amendments were soon law. The amendments authorized the FDA to require drug companies to conduct and submit tests determining safety and efficacy. In addition, the FDA now had to preclear all human trials, drug advertising, and labeling. The FDA also increased its regulatory power over manufacturing.
The 1962 Amendments significantly reduced the choices of doctors and patients, and expanded the power of the FDA, which increased its staff from one thousand members in 1951 to nearly sixty-five hundred two decades later (Temin 1980, 121). In addition to requiring efficacy testing for new drugs, the FDA, with the help of the National Academy of Sciences, launched the Drug Efficacy Study Implementation, an investigation of the efficacy of the then-current stock of drugs.
The task of proving efficacy is much more difficult, expensive, and time-consuming than the task of proving safety. To a great extent, efficacy, which is sensitive to individual conditions and mediated by market process, had in the past always been judged jointly by doctors and consumers. A drug’s efficacy ought to be judged relative to the alternative therapies and is therefore constantly changing, being discovered, and being proven by medical-market experience, with the use of postmarket surveillance and research. Safety, naturally, always calls for strong prior assurance. But the search for improved efficacy had proceeded, to some extent, by people serving as each other’s guinea pigs, and the result had been rapid progress. In 1962, however, the FDA began to act on the premise that it could establish authoritative knowledge of efficacy prior to experience and experimentation in actual market processes.
The time spent waiting for FDA approval and the expense and duration of the bureaucratically determined testing procedures combined to cause tremendous delays in drug development and production. Drug development declined significantly after 1962, and the wait for new life-saving drugs increased to more than a decade by the end of the 1970s (see FDA Harm below).
The role of thalidomide in the passage of the 1962 Amendments is riddled with unfortunate ironies. First, the episode aroused great public empathy for human suffering, but no thought was given to the suffering that was bound to result from the ever more confining grip on drug development, availability, and information. Second, people cited thalidomide in claiming that drug approval delay is a blessing, but the pre-1962 FDA had proven to be sufficiently slow to avoid thalidomide harm in the United States. Third, the old law of 1938 already required premarket approval for safety. Nothing about thalidomide even superficially recommended premarket approval for efficacy.
These amendments required premarket approval of new drugs and feed additives for animals.
As with drugs, the field of medical devices entered a new era after World War II. Cardiac pacemakers, renal catheters, replacement joints, and many other innovations were introduced in this period. The FDA first tried to regulate these new products by reclassifying them as drugs, but in the usual story it took a tragedy, this time over the faulty Dalkon Shield IUD, to generate new law.
The 1976 Amendments expanded the definition of a medical device and authorized the FDA to categorize all medical devices into three classes. Class I devices—tongue depressors and gauze, for example—are subject to reporting requirements and Good Manufacturing Practices (GMP) regulations. Class II devices are subject to the same controls as Class I devices and, in addition, to product-specific performance standards supposedly developed by the FDA (see further below). Class III devices—artificial hearts and angioplasty catheters, for example—must pass an FDA premarket approval process similar to that required for new drugs; that is, before marketing can begin, they must be proven safe and effective in extensive clinical trials.
In an excellent example of FDA thinking, new devices are automatically categorized as the most risky devices, i.e., Class III devices, regardless of actual risk. (See Some Remarks about Medical Devices for an absurd consequence of this procedure.) A new device can escape going through a premarket approval procedure if it can be shown to be “substantially equivalent” to a preamendment device (and, since the Safe Medical Devices Act [SMDA] of 1990, to any currently marketed non–Class III device). “Substantially equivalent” devices are supposed to be able to go through a simpler premarket notification (as opposed to approval) process known as the 510(k) route, after that section of the 1976 Act.
The neat classification scheme of the Medical Device Amendments does not describe actual FDA practices. The FDA, for example, did not develop any performance standards until 1997! Thus, Class II devices played no role in medical device development until recently, and it remains to be seen whether they will become more common in the future. (The requirements for a Class II device were loosened in the Safe Medical Devices Act of 1990.) Furthermore, the “simple” 510(k) procedure evolved to become what in effect was an extensive and time-consuming premarket approval process. (The reality was recognized in the SMDA, which formally made the 510(k) process into an approval process.)
Munsey (1995) provides a good overview of medical device regulation.
This act required premarket notification for chemical substances.
The Infant Formula Act of 1980 was passed after thirty-one children were diagnosed with problems relating to a chloride deficiency in a particular brand of soy-based infant formula. Following an initial report on three children from a physician who suspected a problem with the soy formula, the FDA worked remarkably rapidly in consultation with pediatricians and the manufacturer to assess and identify the problem. Within one week of the initial report, the manufacturer initiated a voluntary recall (CDC 1999). The story so far shows the FDA at its best and most useful. Unfortunately, in response to this event, Congress ruled that infant formula could no longer be marketed without prior FDA approval. As a result, it has become difficult and expensive to get new infant formulas approved. In parallel with the problem of drug lag we now have a problem of infant formula lag.
By 1983, the research, testing, and development of a new drug could take up to twenty years, seven of which were spent waiting for final FDA approval of the NDA. (For more recent average times, see the Drug Development and Approval Process below.) Heightened awareness of patients waiting desperately for treatments still pending approval gave rise to reform. Because the costs of obtaining FDA approval were the same whether the projected market was two million patients or twenty thousand patients, companies naturally pursued, all else being equal, the development of large-market therapies and abandoned (or “orphaned”) small-market therapies. Thus, FDA regulation had especially negative consequences for people suffering from rare diseases. The Orphan Drug Act was created in an effort to reduce drug loss for “rare” diseases, defined as those with fewer than two hundred thousand cases in the United States. The act gave tax breaks, subsidies, and special exclusivity privileges to sponsors of drugs for rare diseases.
Rather than reducing the FDA barriers to producing orphan drugs, the Orphan Drug Act was meant to stimulate the development of such drugs by granting sponsors new monopoly privileges. The exclusivity granted under the act differs from a patent. A patent protects against competition from a drug with the same chemical structure. Market exclusivity as implemented by the Orphan Drug Act grants protection for seven years against competition from any drug with a similar effect. The FDA thereby bars firms from marketing drugs that treat diseases also treated (and perhaps less effectively) by a drug granted exclusivity.
Officials claim that this act has been a success, noting that almost a thousand drugs have been granted orphan status. The number of such drugs is misleading, however, because many would have been produced even without the act. Furthermore, the administration of the act involves several artifices. Cancer patients number in the millions, but a drug may be granted orphan status to treat ovarian cancer or bladder cancer. Thus, a drug used to treat both ovarian and bladder cancer could be an orphan in each category even though the total population served by the drug would be well more than two hundred thousand. Even more absurdly, the market for a drug may be divided into a prevention category and a treatment category, and if the number afflicted in either category is less than two hundred thousand, orphan status is granted. Moreover, the same drug can be an orphan for more than one disease, multiplying its monopoly privileges (Arno, Bonuck, and Davis 1995).
The history of the Orphan Drug Act shows an interesting expansion of benefits to drug manufacturers. When originally enacted, the standard for orphan status was “no reasonable expectation that the costs of development will be recouped from U.S. sales.” Because worldwide sales often much exceed U.S. sales, even this standard could grant exclusivity, subsidies, and tax breaks to drugs that would still be profitable without such benefits. To prove that there was no reasonable expectation of recouping cost, pharmaceutical firms were supposed to submit financial data to the IRS. The pharmaceutical industry disliked this provision, however, and lobbied to have the requirement weakened. In 1984, the standard for orphan status was weakened to say that there be fewer than two hundred thousand potential U.S. patients at the time of the request for designation of orphan status. In the early years of AIDS, when the disease affected relatively few people, the revised standard allowed many AIDS drugs to gain orphan status despite the fact that the market for these drugs was expected to grow rapidly. AZT was designated an orphan drug despite its having generated billions of dollars of sales. Initially, Congress had also restricted the exclusivity to drugs that could not be patented; this restriction was dropped in 1985. Thus, over time the Orphan Drug Act has become significantly more beneficial to the established U.S. drug manufacturers.
A sponsor seeking orphan status for a drug need not be the creator of the drug, and the drug need not be new. The drug oxandrolone had been used to treat wasting in hepatitis patients and had been available by prescription for thirty years. When body builders began to use it illegally to bulk up, the drug received bad publicity and was discontinued. Another company then gained the rights to the drug and presented it to the FDA as a new treatment for HIV-related wasting. Orphan status was granted. AIDS patients now paid a price 1,200 percent higher than when the manufacturer did not have monopoly rights (LeBlanc and Sabados 1996).
On the surface, the Orphan Drug Act seems like one instance in which policymakers recognized some of the problems created by restrictions and moved to rectify matters. The act does not, however, roll back restrictions, but rather grants new powers to the FDA and throws new monkey wrenches into private-sector affairs. FDA proponents of the act claim, without substantial evidence, to be solving the problem. Unfortunately, no major cost-benefit analysis has yet been performed to determine the net effects of the act.
If the U.S. government grants a patent to a drug, all other manufacturers are barred for a prespecified number of years from producing a product of the same chemical composition (except by franchise from the patent holder). A patent, therefore, grants a degree of monopoly power to the patent holder. The usual term of patent life is seventeen years. When developing a new drug, the company is anxious about the possibility that another company is also working on the drug (or has received news or leaks about the promising incipient drug) and is eager to attain a patent. Companies therefore apply for and receive drug patents in advance of final FDA approval to market the drug.
But some of the seventeen years of patent protection is dissipated waiting for approval. The “effective patent life” of a new drug is the time from approval to the end of the patent. When a patent expires, other producers are permitted to replicate the product and to sell it as a “generic drug.” This competition drives down prices.
During the 1970s and 1980s, the duration of FDA requirements continued to grow, reducing the effective patent life. The drug companies therefore experienced not only greater drug development costs and delays, but also shrinking patent protection of products that were eventually emerging from the FDA gauntlet. They were squeezed at both ends.
Commissions established by Presidents Carter and Reagan recommended that patent terms be adjusted to make up for time lost in regulatory review. The generic drug producers, however, opposed the idea, and it proved impossible to pass patent term reform over their opposition. Thus was born a compromise bill, the 1984 Drug Price Competition and Patent Term Restoration Act, known as the Waxman-Hatch Act. The act served the generic drug producers by removing some arbitrary and absurd constraints on them. Prior to the act, it was not sufficient for a generic drug manufacturer to prove that its drug was bioequivalent to an approved drug. Instead, the manufacturer had to submit independent information on safety and efficacy, repeating many of the clinical trials performed by the original manufacturer even though the drugs could be shown to be bioequivalent. As a result of the costs of performing clinical trials, many drugs did not face generic competition even after the relevant patents had expired. The act required the FDA to accept bioequivalence as sufficient for approval (something the FDA could have elected to do prior to the act). The procedure for a generic drug approval is called an Abbreviated New Drug Application (ANDA).
The liberalization of generic drug approval was the inducement generic drug companies required in order to support the second part of the act, patent term adjustment. Waxman-Hatch extends patents for time lost during FDA review and for one-half the time lost during FDA-required clinical testing. The extension is capped at a maximum of five years, and the total patent term is capped at fourteen years from the date of FDA approval. Prior to the act, effective patent terms were approximately seven to ten years. Waxman-Hatch has extended patents by two to three years on average for an effective patent life of approximately nine to twelve years (Grabowski and Vernon 1996). Although patent law grants seventeen years of patent life, patent terms much beyond ten years are typically of low value because the advent of new drugs diminishes the value of old drugs, patented or not.
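The extension arithmetic described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the function name and the simplification of measuring everything in years from the patent grant are our assumptions, not anything specified in the act.

```python
def effective_patent_life(patent_years, testing_years, review_years):
    """Estimate effective patent life (years of exclusivity remaining after
    FDA approval) under the Waxman-Hatch rules sketched in the text:
      - extension = review time + half of clinical-testing time, capped at 5 years
      - total effective life is capped at 14 years from the date of approval
    A simplified sketch: assumes testing and review consume patent life directly.
    """
    development_years = testing_years + review_years   # patent life lost before approval
    base_life = patent_years - development_years       # term remaining at approval
    extension = min(review_years + 0.5 * testing_years, 5)
    return min(base_life + extension, 14)
```

For example, a seventeen-year patent with six years of clinical testing and two and a half years of FDA review leaves 8.5 years at approval; the extension (2.5 + 3 = 5.5 years) is capped at 5, for an effective life of 13.5 years, consistent with the nine-to-twelve-year range cited by Grabowski and Vernon for typical drugs.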
The FDA had made it illegal for Americans to export drugs that had not been approved in the United States. FDA paternalism was thus not restricted to U.S. citizens, but also impinged on people throughout the world. Some firms moved manufacturing plants abroad to escape the restriction. The export restrictions also contributed to drug loss because they made U.S. drug development less profitable. The 1986 Drug Export Amendment Act liberalized U.S. export of such drugs.
Under the act, export is allowed if the drug (not FDA approved) satisfies three conditions: (1) U.S. approval is actively being sought; (2) the drug is also covered by a U.S. investigational exemption; and (3) the drug is for export to any of twenty-one nations that have approved the drug and have regulatory programs that meet U.S. standards (the standards, that is, of a foreign country’s regulatory program, not necessarily the FDA standards for drug control) (Kaplan 1995).
The NLEA required food manufacturers to include nutritional labeling on most food products. (Ironically, such labeling had been illegal prior to the early 1970s!) The act added such things as saturated fat, cholesterol, total and subgroups of carbohydrates, and dietary fiber to the list of nutrients that must appear on nutrition labels. Although meat and poultry remain under the control of the U.S. Department of Agriculture, the FDA has authority over the form and content of nutrient descriptors for most foods. The NLEA also codified the FDA’s authority to allow health claims on foods and dietary supplements. Although the intent of the NLEA was to increase the amount of information consumers received by broadening the health claims allowed on foods and dietary supplements, FDA officials took an aggressive stance and announced that they planned to regulate supplements as drugs. The resulting backlash led to the passage of the Dietary Supplement Health and Education Act (DSHEA) of 1994.
The SMDA substantially increased reporting requirements for medical devices, including requiring device users to report adverse events to the FDA and to device manufacturers. The resulting Medical Device Reports require extensive and costly paperwork, often with little value. The SMDA also formally changed the 510(k) procedure, which was originally intended to be a notification procedure, into a premarket approval procedure. Because the FDA had never issued any Class II performance standards up to the time of this act, the SMDA modified the requirements, making it easier for the FDA to establish a standard; it also provided that Class II devices could be cleared if accompanied by “special controls.” Special controls include postmarket surveillance and other controls the FDA may deem necessary. The SMDA also permitted the assessment of substantial civil penalties for violations of the Food, Drug, and Cosmetic Act relating to devices.
Munsey (1995) provides a good overview of medical device regulation.
Pre-1992 figures indicated that on average it took the FDA two and a half years to review an NDA, and sometimes up to eight years. Often the cause of delay was not the difficulty of the application but merely backlog: applications would sit unexamined for months or even years. The FDA concluded that the process of approval could be sped up if it had better equipment and more workers to review applications. Congress was unwilling to increase FDA appropriations, however. Thus was born the Prescription Drug User Fee Act of 1992, establishing for a five-year period a mandatory fee of roughly $200,000 to be submitted by a pharmaceutical company along with its application. The FDA hired hundreds of new employees. As a result of the legislation, the average processing time fell by nearly half, to eighteen months. Because of this evident success, the Modernization Act of 1997 renewed the practice for another five-year period and increased the user fees. The necessity of renewing the PDUFA every five years has put pressure on the FDA to streamline its processes so that drug manufacturers will support renewal. The possible threat, in other words, of losing the user fees and thus of having to cut back on staff and other perquisites appears to have made the FDA bureaucracy more efficient and amenable to customer needs.
In 2002 Congress reauthorized the act, and the FDA released PDUFA III, its five-year plan through 2007. This plan expanded upon PDUFA I and II by allowing the collection of user fees for postapproval surveillance of drug safety, in response to the GAO report and congressional concerns that resources for non-approval activities at the FDA were being reduced. The plan also provided funding to increase the number of employees by about 30 percent. PDUFA IV, the current five-year plan following the 2007 reauthorization, continues the expansion of the FDA’s coverage of postmarket drug safety and allows for additional hiring. The FDA expects to collect an additional $29 million a year in user fees to support these efforts. The additional funding will increase the FDA’s ability to detect postmarket safety concerns, but it does not relate to the original purposes of the PDUFA. Instead, the new fees act as another funding mechanism for the FDA, and the funding increases will be used to support postmarket activities, develop best practices, and enable faster approval and reconciliation of proprietary drug names.
A 2002 GAO review of the PDUFA found that the revenue from user fees and the PDUFA performance requirements had allowed the FDA to approve some drugs more quickly. The GAO found that between 1993 and 2001 approval times for standard, or nonpriority, drugs fell from twenty-seven months to fourteen months, while approval times for priority drugs stayed at six months. However, the GAO also found that drugs were going through more review cycles (first-cycle approvals, for example, decreased from 51 percent to 37 percent between 1998 and 2001). The FDA may be rejecting applications that might otherwise have eventually been approved in order to meet the processing-speed requirements introduced in the 1997 Modernization Act, reducing the apparent efficiency gains. The GAO report also noted that drug withdrawals increased over this time frame, though not necessarily due to any problems caused by the PDUFA, and recommended that more effort be spent on postmarket surveillance activities, leading to some of the expansions in PDUFA III and IV.
Philipson et al. (2005) and Berndt et al. (2005) present the most sophisticated cost-benefit analyses of the PDUFA. They find that the PDUFA did increase manufacturer profits and reduce FDA review times. Moreover, they find no evidence that safety declined under the PDUFA. Most importantly, faster review times meant big gains for consumers, which they evaluate as equivalent to savings of 180,000 to 310,000 life-years.
The FDA has for decades tried to regulate the sale and use of vitamins, herbs, and other dietary supplements. By law, any ingested product that is intended by its manufacturer to prevent or treat a disease is a drug. Products, other than “food,” that are intended to affect the structure or function of the body are also considered drugs. Throughout the 1950s and 1960s, the FDA brought hundreds of court actions against nutrition manufacturers for making health-related claims for their products. Under threat of law, food manufacturers were even prevented from labeling the fat, cholesterol, or other nutritional content of their food! (Later such labeling was allowed, and with the Nutrition Labeling and Education Act of 1990 nutrition labeling became mandatory.)
The FDA actively prosecuted vitamin retailers that sold vitamins and other supplements in conjunction with books or pamphlets that extolled their use. It was illegal, for example, for a health food store to sell vitamins and books extolling the virtues of vitamins. The FDA justified such practices, which many considered to be a violation of the First Amendment, under the theory that literature that was sold near a product was thereby converted into a product label, and if health claims were made in the literature, then the product had to be regulated as a drug (and thus had to go through FDA clinical trials before being sold).
In 1973, the FDA published regulations (to take effect in 1975) expanding its control over supplements by declaring that any dietary supplement that it considered to lack nutritional usefulness was a drug and thus under the FDA’s control. High-potency vitamins, by which the FDA meant vitamins sold in dosages as little as twice the federal recommended daily allowance (RDA), were ipso facto considered drugs, regardless of manufacturer claims or the lack thereof. High-potency vitamins were effectively made illegal by this ruling because they could not be sold without FDA approval, and the FDA would not approve supplements that it considered to be unnecessary. Vitamin manufacturers and consumers fought back, and in response Congress passed the Proxmire Vitamin Mineral Amendment of 1976, which stated that the FDA could not classify a mineral or vitamin as a drug “solely because it exceeds the level of potency which [the FDA] determines is nutritionally rational or useful” (21 USC 350 [1994, originally enacted 1976], [a][B]).
It is worth pointing out explicitly, although it will come as no surprise to anyone who follows today’s health news, that numerous scientific studies have since validated many of the health claims for vitamins and minerals that the FDA had earlier suppressed. The FDA suppression of information concerning vitamin E and heart attacks, for example, may rank alongside its suppression of information concerning aspirin as one of the most deadly regulations of the post–World War II era.
In 1985, the FDA lost a related turf war with the Federal Trade Commission (FTC) and the National Institutes of Health (NIH). Under recommendation from the National Cancer Institute, a division of the NIH, the FTC permitted Kellogg to claim that a high-fiber diet reduced the probability of certain types of cancer. The FDA wanted to sue Kellogg, but the FTC argued that the ads presented “important public health recommendations in an accurate, useful, and substantiated way” (quoted in Calfee 1997, 25). Under pressure, the FDA backed down, and as a result it was established that food products could advertise a “substantiated” health claim without going through the FDA drug approval process.
Under the protection of the Proxmire Amendment, the dietary and nutritional supplement industry expanded, but the FDA stepped up enforcement again in the early 1990s after thirty-eight deaths were attributed to L-tryptophan, an amino acid widely used for treating depression and building muscle mass. (The Centers for Disease Control later exonerated L-tryptophan in the deaths, which were caused by a contaminant, but the FDA did not lift its ban on OTC sales of L-tryptophan [Beisler 2000].) In 1993, the FDA announced that it planned to regulate as drugs all amino acids, herbs, and other supplements, including fibers and fish oils. The FDA soon found itself under a furious attack from millions of consumers of nutritional supplements. The DSHEA, passed in 1994 and taking effect in 1996, explicitly required the FDA to revoke its Advance Notice on supplements.
Under the DSHEA, nutritional supplements can make substantiated “statements of nutritional support” that do not thereby invoke FDA control. Supplements, however, cannot make claims regarding disease without becoming regulated as drugs. The distinction between statements of nutritional support and claims regarding disease is vague. Manufacturers of St. John’s Wort, for example, may claim that St. John’s Wort “promotes healthy emotional balance and well-being,” but they cannot say St. John’s Wort “is useful in the treatment of depression.” The distinction is mostly for lawyers, not consumers, considering that many consumers do take St. John’s Wort for depression. (Such consumers are in fact justified in doing so; a number of studies indicate that not only is St. John’s Wort effective at relieving mild cases of depression [e.g., Woelk 2000], but it does so with fewer side effects than many antidepressive pharmaceuticals. In addition, St. John’s Wort is considerably cheaper than pharmaceuticals and does not require a prescription.)
Dietary supplements that make nutritional claims must carry the following two disclaimers: “This statement has not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease.” In the section Reform Options, we suggest that the first disclaimer is useful and that this split-label approach be extended to drugs proper. The second disclaimer is not informative.
Subject to certain conditions, such as that the information presented is not false or misleading and not biased in favor of a particular manufacturer or brand, the DSHEA also restricts the FDA’s ability to ban the dissemination of information on dietary supplements (Pinco and Rubin 1996). Health food retailers, for example, can now market books, magazines, and scientific articles describing the uses of dietary supplements. As a result, in recent years consumers have become much better informed about the role of vitamins and other supplements in optimal health.
By the late 1990s, numerous academic studies and government reports had indicted the FDA for drug lag and drug loss. Pressures for reform finally began to be felt in Congress, a portion of which had recently promised deregulation in their “Contract with America.” In 1996, the House wrote an FDA reform bill that would have significantly threatened some of the FDA’s central powers, but the FDA and its supporters in the Clinton administration “pulled out all the stops to defeat it” (Miller 2000, 55). Facing a veto and adverse spin, Congress abandoned the serious bill. The next year they passed a much watered-down bill, the FDA Modernization Act of 1997.
Much of the Modernization Act merely codified what was already FDA practice (Miller 2000). For example, it authorizes the FDA to appoint panels of scientific experts to assist the agency in evaluating new drugs, a practice the FDA has followed for decades. Similarly, it codified the rule that a single adequate and well-controlled clinical study, together with confirmatory evidence, could serve as the basis of approval. Because the FDA has always had this flexibility but rarely exercised it, the impact of the rule is likely to be negligible. The act also codified restrictive FDA policies on the dissemination of information regarding off-label uses of drugs. (Subsequently, Washington Legal Foundation v. Friedman found such restrictions unconstitutional and expanded firms’ ability to disseminate information.) For medical devices, the Modernization Act exempted most Class I and Class II devices from premarket approval and increased physician authority to use investigational devices. Finally, in a variety of clauses, the FDA was required to provide manufacturers with better and timelier information concerning its procedures.
The most important provisions of this act were the reauthorization of user fees for another five years and new inducements to drug manufacturers to conduct pediatric studies. Following the model established by the Orphan Drug Act, this act rewarded development of pediatric-use information with monopoly privileges. Under the Modernization Act, a sponsor that develops pediatric information is granted six months of exclusive marketing privileges in addition to any patent or other nonpatent rights for which the drug may already be eligible. Moreover, the marketing privileges are for all uses of the drug and not just for pediatric uses. As with the Orphan Drug Act, the increased incentive to research and develop new drugs and pediatric uses also brings higher drug prices. The trade-off might be worthwhile, but no studies on the issue have been done.
Today, the FDA is a vast organization of fifteen offices, including the Office of Regulatory Affairs, the National Center for Toxicological Research, the Center for Biologics Evaluation and Research, the Center for Devices and Radiological Health, the Center for Food Safety and Applied Nutrition, and the Center for Veterinary Medicine. The agency has nine thousand employees, who monitor and process $1 trillion worth of products each year. Obviously, the FDA has grown tremendously since its inception in the Bureau of Chemistry.
The FDA opened its first overseas office in Beijing, China, in November 2008. The new office, along with future offices in Shanghai and Guangzhou, is intended to improve the safety of exports to the United States by speeding up regulatory cooperation between the FDA and the Chinese government. The FDA plans to have thirteen employees at the offices in China but has not yet specified their duties. The FDA also plans to create additional overseas offices in India, Central America, Europe, and the Middle East. The European Union has likewise increased its regulatory cooperation with China, signing an agreement for information sharing and allowing joint checks on producers to ensure that safety standards are met.
The FDA has been involved in a number of product safety incidents stemming from contaminated manufacturing processes in Chinese factories over the past few years. Although currently limited in scope, this overseas expansion could significantly increase the FDA’s influence over foreign exporters to the United States by imposing a form of premarket approval on foreign-manufactured drugs, rather than merely responding to safety incidents as they arise in the United States.