Thalidomide and the development of drug safety regulation and monitoring



The Thalidomide scandal of 1961 prompted an increase in the regulation and testing of drugs before licensing, with a new amendment to US Food and Drug Administration (FDA) rules demanding proof of efficacy and accurate disclosure of side-effects for new medications (the Kefauver-Harris Amendment) being implemented in 1962. Likewise, the 1964 Declaration of Helsinki placed greater ethical strictures on clinical research, clearly cementing the difference between the production of scientifically tested prescription medicines and that of other chemicals.

 

 

The FDA’s Frances O. Kelsey: by blocking approval of Thalidomide in the US, Kelsey prevented thousands of children from being born with life-changing deformities. Photo: FDA

 

Fordian methods enabled more rational mass production, and increasing understanding of biology and chemistry allowed drug candidates to be chosen systematically rather than discovered serendipitously. This ‘golden age’ of drug development took place in the broader landscape of the post-war boom, a general context of massive improvements in standards of living and technological optimism that characterised the 1940s to the early 1970s, as well as the science-boosting competition of the Cold War. As the barriers to entry in drug production were raised, a great deal of consolidation occurred in the industry. Likewise, the processes of internationalisation begun before the war continued – in 1951 alone Pfizer opened subsidiaries in nine new countries.

The list of novel drugs from the post-war era speaks for itself. The contraceptive pill, introduced in 1960, had an impact on society almost as massive as that of penicillin, allowing women to control their fertility effectively and enabling sexual equality for the first time. Valium (diazepam) was brought to market by Roche in 1963, followed by the introduction of the monoamine oxidase inhibitor (MAOI) class of anti-depressants and the antipsychotic haloperidol.

These drugs ushered in a new era of psychiatric treatment, adding pill-based treatments to the psychoanalytic approaches that had previously dominated psychiatry. The 1970s brought a wave of cancer drugs as part of the US government’s “war on cancer”; a recent report from Cancer Research UK showed that survival rates have doubled since the early 1970s, due in large part to the massive innovation in oncology medicines that has occurred since then. ACE inhibitors arrived in 1975, improving cardiac health, and even drugs as ubiquitous as paracetamol and ibuprofen were developed in 1956 and 1969 respectively.

As the 1970s drew to an end, a shift began in the way the pharma industry focused its energies. In 1977, Tagamet, an ulcer medication, became the first ever “blockbuster” drug, earning its manufacturers more than $1 billion a year and its creators the Nobel Prize. This marked a new departure as companies competed to be the developer of the next big blockbuster, and many achieved great success. Eli Lilly released the first selective serotonin reuptake inhibitor (SSRI), Prozac, in 1987, once again revolutionising mental health practice. The first statin was also approved in 1987, manufactured by Merck (MSD).

“But new technologies are what really promise a positive future for the industry in the 21st century.”

 

But whilst there were some breakthroughs, the enormous expense and risks involved in R&D caused many companies to merely ape their competitors, trying to win a share of the market with “me too” formulations rather than innovating with novel medications. For example, AstraZeneca’s popular proton pump inhibitor Nexium (esomeprazole), released in 2001, is merely a purified single-isomer version of an older drug which happened to be losing patent protection. Patents, or the lack of them, became a problem for the industry.

The Hatch-Waxman Act of 1984 regularised generic production in the US, and some developing countries made policy decisions to ignore medical patents. The industry responded by increasing its focus on marketing to maintain market share, on lobbying politicians to protect commercial interests, and on litigation to enforce intellectual property rights. These activities have bred greater public suspicion of the industry, though this can also be linked to a wider anti-science feeling and a more pessimistic outlook on the possibilities of technology in society, as seen in panics over issues such as genetically-modified crops and suspicion towards nuclear power.

Companies have tried to overcome some of these problems by outsourcing various aspects of their processes, and by buying up smaller companies that perhaps retain more of the innovative entrepreneurialism of the pioneers of the 19th century. But new technologies are what really promise a positive future for the industry in the 21st century. Both computing and biotechnology have allowed great leaps forward in the development and production of new drugs. Automation of the drug discovery process through high-throughput screening and the computerisation of genomics have allowed breakthroughs at a much higher rate than before.

Starting with insulin in the 1970s, genetic modification has allowed the production of human proteins by bacteria. And biological drugs such as the monoclonal antibodies, introduced around the turn of the millennium, hint at a whole new panorama of far more specific drugs that could have as great an impact on human health as the medicines of the last century.

About the author:

Robin Walsh is a freelance writer on healthcare and the pharmaceutical industry. He is currently training to be a doctor, and previously worked in medical communications. He can be contacted at Robwalsh9@hotmail.com.

 

