May 30, 2012 at 5:54 pm
Yes, if the first comprehensive review of the issue in a decade is correct in concluding that low doses of chemicals can harm health.
Although the hypothesis that chemicals can have health effects at doses much lower than those routinely tested in the toxicological evaluation of chemicals is much-discussed, there has been no comprehensive review of the literature since 2002, when the US National Toxicology Program last looked at the peer-reviewed evidence that such effects might exist (Melnick et al. 2002).
Their conclusions were muted, with the evaluating panel stating its findings “indicate that the current testing paradigm used for assessments of reproductive and developmental toxicity should be revisited”.
That reassessment has never really happened, and the possibility of low-dose effects has been more-or-less discarded in regulatory thinking, surfacing as a controversial hypothesis in need of, but never subject to, further evaluation.
The result is expressed in the opinion of the EU Scientific Committees that: “… no robust evidence is available that exposure to a mixture of such substances is of health or environmental concern if the individual chemicals are present at or below their zero effect levels… At low exposure levels, they are either unlikely to occur or are toxicologically insignificant.” (EU Scientific Committees 2011)
On the research front, however, much has changed in the past 10 years. Examples of low-dose effects have multiplied, understanding of the mechanisms by which they can happen has grown, and epidemiological support for the hypothesis has begun to emerge. This has culminated in a recent 845-citation review of the evidence base for the plausibility of the low-dose hypothesis (Vandenberg et al. 2012).
Plausibility of hypothesis from hormone action
Specifically for Vandenberg et al. the mode of action of interest for low-dose effects is action via hormone signalling pathways, which is why their review focuses exclusively on endocrine disrupting chemicals (EDCs).
Low-dose effects are nothing new in endocrinology, where it has been understood for many years that hormones function at concentrations in the blood which, at least at the scale humans are used to dealing with, can only be considered minute.
Steroid hormones, such as oestrogen and testosterone, act at very low concentrations, with estradiol documented to have physiological effects at concentrations as low as 0.1-9 picograms per millilitre (less than a droplet in an Olympic-size swimming pool).
The question is: “Can chemicals not originating in the body also affect the hormone system?”
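To put those concentrations in perspective, a quick back-of-the-envelope calculation makes the swimming-pool analogy concrete (the pool volume and droplet size below are assumed round figures, not values from the review):

```python
# Sanity-check of the "droplet in an Olympic pool" analogy.
# For a water-like medium, 1 pg/ml is roughly one part per trillion by mass.
pool_litres = 2_500_000      # assumed: Olympic pool ~2,500 m^3 = 2.5 million litres
droplet_ml = 0.05            # assumed: a small droplet ~0.05 ml

pool_ml = pool_litres * 1000
droplet_fraction = droplet_ml / pool_ml   # mass fraction of one droplet in the pool

# Estradiol is physiologically active at ~0.1-9 pg/ml, i.e. ~0.1-9 parts per trillion.
print(f"droplet in pool: {droplet_fraction:.1e}")          # ~2e-11, or ~20 ppt
print(f"estradiol range: {0.1e-12:.0e} to {9e-12:.0e}")    # 1e-13 to 9e-12
```

So the lower end of the estradiol range is indeed well below one droplet dispersed through an entire Olympic pool.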
Demonstrating low-dose effects
Demonstrating the existence of low-dose effects is a complicated business. There are no toxicological end-points common to all EDCs, as their modes of action tend to be highly specific. Testing has to be adapted to the particular endocrine activity of the EDC in question.
If a chemical has an effect by blocking the action of a hormone, as anti-androgens such as the phthalate DEHP do by blocking the action of testosterone, the relevant end-point is the consequence of blocking that action, not any toxicological effect which the substance might have at high doses.
If, on the other hand, an EDC exerts its effect by binding to a hormone receptor, the effects can be very complex indeed, and our ability to anticipate them will depend on our (potentially limited) knowledge of the role the receptor plays in the various metabolic pathways in which it is involved.
Finally, most EDCs interact in some way, shape or form with multiple pathways, with effects varying dependent on concentration of chemical and timing of exposure, making their effects even more complex and context-specific.
How to rule out a low-dose effect
Much of the evidence showing low-dose effects has been dismissed as unconvincing. Some studies are claimed to refute the low-dose findings, but such claims need to be interpreted cautiously: a negative finding does not disprove a hypothesis; it simply fails to find evidence in its favour.
There are a number of reasons why this can happen, all of which need to be carefully controlled for in an experimental set-up. This is particularly important when testing for low-dose effects: the subtlety of the effects and the complexity of the mechanisms producing them make it much harder to prevent external influences from confounding a study.
To genuinely rule out a low-dose effect, Vandenberg et al. argue that it is essential to use both negative and positive control groups in an experimental set-up, particularly if the experiment is to be interpreted as evidence against potential low-dose effects of a substance.
Negative controls. This includes historical comparison with negative controls in other studies, to ensure the controls for the study in question have no abnormalities which would compromise the negative finding. Experimental set-ups are complex; animals more so, even the pure-bred varieties used in studies. Control cages can leak contaminants and animals can be stressed or diseased, resulting in no difference being observed between the low-dose animals and the control group.
Positive controls. Positive controls are used to show that the test system is capable of detecting an effect. For example, it is no good testing the oestrogenicity of BPA on a group of rats which are insensitive to oestrogen. To prevent this, a control group has to be set up which is exposed to a substance with similar biological effect as that which the test compound is being studied for. In testing for oestrogenicity of BPA, this might involve exposing a control group of rats to an appropriate dose of estradiol.
Positive controls are arguably something of a scientific oddity. Normally an experiment either finds a result, or it does not, and that is that. If the positive result seems significant enough, it breeds further research and better studies; if the studies corroborate the findings, then they become incorporated into scientific lore. However, in the current regulatory climate, negative findings are often interpreted as evidence against a low-dose hypothesis. In these circumstances there is a clear need for the scientific validity of these studies to be more readily determinable.
What exactly don’t we know?
Vandenberg et al. conclude that there is consistent evidence that dioxin, atrazine, BPA and perchlorate have effects at low doses and, furthermore, that the effects observed depend on the dose. If correct, the consequences for regulatory toxicology are fundamental: if low-dose effects cannot be inferred from high-dose studies, then the method used to evaluate almost all chemicals for safety must be unsound.
Historically, regulatory toxicology has tested chemicals at high doses and lowered the dose until no effect is observed, at which point this dose is divided by between 50 and 1000 to produce the acceptable daily dose which human exposure is not meant to exceed (the Tolerable Daily Intake, or TDI). If a chemical can be used in such a way as to not expose people to a dose higher than the TDI, then its use is accepted.
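As a sketch of that arithmetic (the NOAEL and safety factor below are illustrative numbers, not values for any real substance):

```python
def tolerable_daily_intake(noael_mg_per_kg: float, safety_factor: int) -> float:
    """Derive a TDI by dividing the no-observed-adverse-effect level (NOAEL)
    by a composite safety factor, conventionally between 50 and 1000."""
    if not 50 <= safety_factor <= 1000:
        raise ValueError("safety factor is conventionally between 50 and 1000")
    return noael_mg_per_kg / safety_factor

# Illustrative only: a NOAEL of 5 mg/kg bw/day with a 100-fold safety factor
tdi = tolerable_daily_intake(5.0, 100)
print(f"TDI = {tdi} mg/kg bw/day")   # -> TDI = 0.05 mg/kg bw/day
```

The whole construction rests on the assumption that dividing the no-effect dose by a large factor yields a dose that is at least as safe, which is exactly the assumption the low-dose hypothesis challenges.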
However, if a chemical's dose-response curve is non-monotonic, so that effects at low doses do not simply scale down from effects at high doses, then its effect on health at low doses cannot be inferred from its effect at high doses: the TDIs become invalid, and a rethink of the design of toxicological studies becomes necessary.
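A toy model shows why non-monotonicity breaks extrapolation from high doses (both curves below are purely illustrative, not fitted to any real substance):

```python
import math

def monotonic(dose: float) -> float:
    """Classical assumption: effect rises smoothly with dose (saturating)."""
    return dose / (dose + 10.0)

def inverted_u(dose: float) -> float:
    """Toy non-monotonic curve: peak effect at an intermediate dose."""
    return dose * math.exp(-dose / 5.0)

for d in (0.5, 5.0, 50.0):
    print(f"dose {d:5}: monotonic {monotonic(d):.3f}  inverted-U {inverted_u(d):.3f}")

# Under the inverted-U curve the high dose (50) shows almost no effect,
# while the intermediate dose (5) shows the maximum effect. A study that
# starts high and steps down until the effect vanishes would declare a
# "no effect" dose that sits right on top of the peak.
```

The point is not that real dose-response curves look like this toy one, only that if any non-monotonic shape is possible, a high-dose-down testing strategy can miss the region where the effect is largest.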
Vandenberg et al. make four recommendations for the future design of low-dose toxicological studies:
1. Choice of dose ranges. Currently, regulatory guidelines only require that three doses be tested, and whether or not these doses produce circulating levels of the test compound which reflect those seen in humans is often unclear. Studies need to ensure they are genuinely testing low doses of chemicals.
2. Timing of exposure. In general, but not always, developmental exposures have permanent organisational effects, while post-puberty exposures are mainly activational and cease upon end of exposure to the compound in question. The organisational effects observed depend heavily on the timing of exposure, so not only is dose range important, the timing of administration is critical as well.
3. Importance of end-points being tested. Guideline protocols generally look at gross changes to animal health or reproductive outcomes, such as organ weight, mortality, and some histopathological analysis of organs. Chemicals acting at low doses are not rapidly lethal but affect a number of end-points important to health, so the end-point selected needs to be relevant to the potential effect, rather than being a gross indicator of well-being.
4. Study size. Much is made of how detecting subtle effects from low-dose exposures requires massive numbers of animals. This is not, in fact, the case: when an appropriate end-point is selected, the number of animals used in a study is determined by statistical factors relating to certainty of finding and is therefore variable, rather than the absolute numbers required in the current standardised protocols.
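The point about study size can be illustrated with a standard power calculation (a sketch using the textbook two-sample normal approximation; the effect sizes chosen are hypothetical):

```python
from math import ceil
from statistics import NormalDist

def group_size(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Animals per group for a two-group comparison, normal approximation.
    effect_size is the standardised difference between groups (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # threshold for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A well-chosen end-point with a large effect (d = 1.0) needs far fewer
# animals than a subtle, poorly targeted one (d = 0.3):
print(group_size(1.0))   # -> 16 per group
print(group_size(0.3))   # -> 175 per group
```

This is the sense in which study size is determined by statistical factors rather than by a fixed headcount: pick an end-point that responds strongly, and the required numbers drop by an order of magnitude.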
The future of chemical testing and regulation
Students of science should be fascinated by the emerging field of low-dose research: the development and refinement of positive and negative controls in the face of what we are learning about the body's increasingly specific susceptibility to different environmental agents; the detection methods which allow chemicals to be measured at vanishingly low concentrations in biological and environmental samples; the subtleties of experimenting at the limits of detectability; and the way rapid advances in understanding how signalling systems determine health are feeding into the science of toxicology.
Nonetheless, we are barely at the beginning of low-dose toxicological research. Reliable studies still number in the mere hundreds, and most of them open up a myriad of ways in which a substance could be toxic without telling us in which of these ways any particular substance actually is toxic.
This is why a researcher’s dream can be a regulator’s nightmare – and chemical manufacturers can hardly be thrilled either, as the rapid proliferation in potential health effects of chemicals makes it harder and harder to predict if any particular substance will have negative health effects.
Small wonder industry and regulators look for the crutch of GLP and standardized test protocols; unfortunately, this is of little help. GLP, as a documentation of procedure, prevents neither false positives nor false negatives; by documenting method, it simply makes the experimental errors which may produce false positives and negatives easier to identify. Standardised test protocols are rarely (if ever) contemporary and thus fail to capture the sorts of methodologies appropriate for detecting low-dose effects.
However intimidating the problem might be, it does not change what needs to be done, and it looks like it is high time for the aggressive development of low-dose experimental protocols and re-evaluation of the safety of substances at the levels to which humans are routinely exposed.
The European Food Safety Authority has been instructed to look at low-dose effects in its evaluation of the safety of BPA (EFSA 2012) and is holding a colloquium in June 2012 to discuss the general evidence base and plausibility of the low-dose hypothesis.
EFSA’s mandate for evaluating BPA (download here) does seem broad in comparison with the mandate for the 2010 opinion, carrying a general instruction to evaluate the low-dose data rather than merely to determine whether any study has been published which would warrant a change in the TDI of BPA.
That said, if the low-dose data up to 2010 did not convince EFSA two years ago, it is not clear why that data should convince EFSA now; a different opinion would have to be the result of a different approach to the problem, and it is not clear at this stage what approach EFSA will take. We shall have to wait and see.