Who’s to say if your research data qualifies as strong or weak evidence?

Sherlock Holmes says, “It is a capital mistake to theorize before you have all the evidence. It biases the judgement.” That statement applies remarkably well to an ongoing dispute between academic researchers, industry, and regulatory agencies such as the FDA in the USA and the EFSA in Europe: a dispute over evidence and how it is obtained. By the time you finish reading this post, you might rephrase it as “It is a capital mistake to regulate drugs and chemicals before you have all the evidence. It biases the entire society” 😉

The origins of this dispute lie in the famous Bisphenol A (BPA), a chemical used in the production of several plastics that can be found in tableware, water and baby bottles, the lining of food cans, and the thermal paper used for cash register receipts, just to name a few. The potential toxicity of BPA is a matter of concern, since it can migrate into food and beverages and be ingested, and it can also be absorbed through the skin and by inhalation. As it turns out, research and regulation on BPA have a long history of debate over its safety.

In 2000, the US National Toxicology Program (NTP) reported that there was credible evidence of effects from BPA exposure at or below the current safety standard, and stated that further research on BPA was needed. This set off alarms in both the scientific community and the plastics industry. In the following years, the dispute between academic scientists, government, and the plastics industry over the “weight of evidence” led to reviews of the existing knowledge and safety considerations around BPA. In 2004, the Harvard Center for Risk Analysis reviewed the published literature on BPA and evaluated the ‘relevance’ and ‘reliability’ of the data. Their conclusion: only two studies fulfilled their criteria (both of them funded by the plastics industry). But what made them decide that only those two studies were ‘relevant’ and ‘reliable’? The main reason, which accounts for today’s dispute, is that those studies followed “Good Laboratory Practices” (GLP), i.e. regulatory standards for conducting research that cover everything from sample sizes and recordkeeping to specific toxicity tests, among other things.

In 2008, the US government sponsored a review of the BPA literature, and as part of it the Center for the Evaluation of Risks to Human Reproduction (CERHR) released a report stating concern for effects on the brain and prostate gland in fetuses, infants, and children. Nevertheless, the FDA declared that BPA poses no health risk at current levels of consumer exposure. How can such disparate decisions happen? Once again, because of differences in which studies were taken into account. In this case, the FDA relied on two GLP studies that, again, happened to be funded by the major plastics trade associations. The controversy around BPA didn’t stop there: even the FDA report was reviewed by an external committee, the FDA Science Board Subcommittee on BPA, which disagreed with the FDA’s decision to consider only GLP studies while disregarding hundreds of papers showing evidence of low-dose effects of BPA.

What do you think: should evidence presented in non-GLP studies be set aside for regulatory purposes? Does GLP research constitute stronger evidence than non-GLP research? Does GLP ensure the quality of the science?

GLP was adopted in the 1970s after a major scandal in which one of the largest private research laboratories running chemical safety tests in the US was found to have falsified data, among other bad lab practices. Since then, GLP has become de rigueur for labs working in the regulatory arena, but not among academic researchers. Why? They argue that it imposes extremely cumbersome paperwork for recordkeeping, and that it does not necessarily mean good science is being done. In fact, you can design an experiment to intentionally find nothing and still meet GLP, and here is a clear example. One of the two studies the FDA relied upon for approving BPA, although it used a large number of animals (8,000 rats), used an insensitive rat strain and lacked a positive control. This clearly illustrates the academics’ view of GLP: it could be an intentional way to ensure no harm is found from BPA exposure, pushed by industry economic interests, or even worse, it could reflect no understanding of the underlying science at all. GLP studies are almost always private, industry-funded research with the goal of protecting the profitability of a product. In the case of BPA, that’s a $16-billion-a-year industry; that’s $1.8 million an hour.

Academic scientists are not completely against GLP, and they agree that it provides transparency, repeatability, and validated data. But they argue that GLP tends to mask conflicts of interest, and that the main problem is blind adherence to it. In simple words, GLP alone is not enough. Instead, academic scientists rely on peer review, because it evaluates the science itself, something GLP does not.

It is fair to ask, then: is it possible to include non-GLP studies for regulatory purposes? Is there a way academic researchers could work together with regulatory agencies? YES. In fact, in 2012 a $30 million project began, known as the Consortium Linking Academic and Regulatory Insights on BPA Toxicity (CLARITY-BPA), joining the FDA, NIEHS, NTP, and university laboratories. This example shows that good things can happen when scientists work together. The study follows GLP protocols, but it also includes disease-relevant endpoints that had not been used in any previous guideline study analyzing BPA toxicity.

The CLARITY initiative will not only give a clearer verdict about BPA’s toxicity, but it will also serve as a model for developing guidelines on chemical risk assessment in the future and highlight the importance of how evidence should be evaluated.
