The pre-IND phase of drug development is the foundation upon which all development-related activities (including registration) depend. It is, therefore, essential to give proper forethought and attention to this initial, all-important step of the drug-development process. In the United States, a pre-IND meeting can add considerable value to the overall process and maximize efficient use of both Sponsor and FDA resources. Although pre-IND meetings require considerable planning and preparation on the part of both the Sponsor and FDA, if warranted and properly conducted, the meeting can provide the Sponsor with valuable insight as to the FDA's expectations regarding initial- and later-stage development and registration strategies. This presentation provides a high-level introduction to U.S. FDA pre-IND meetings: why and when a Sponsor should consider having a meeting and how the Sponsor approaches the process.
Drug development is a complicated, often convoluted process. The ability to predict drug toxicity in humans from nonclinical data remains a major challenge. Because an adverse event cannot be "erased," optimization of preclinical dose selection is essential. This presentation outlines the process for dealing with adverse preclinical/nonclinical events in order to 1) optimize the chances of successful drug development or 2) create a scientific basis for early termination of drug development. Conclusion: Experience counts! There is no single answer for all problems. Use of sound scientific and business judgment generally yields the best outcome.
Any new drug that penetrates the central nervous system must undergo some preclinical analysis of abuse liability (Draft Guidance). Usually, prior knowledge of the chemistry and/or pharmacology of the candidate's drug class determines whether compounds have abuse liability. For example, if a test article stimulates release of dopamine in the nucleus accumbens of the brain, it will most likely be abused by humans (Koob and Volkow, 2010). Discussion of nonclinical abuse liability testing requirements with regulators prior to submission of any formal materials, however, is always advised.
Behavioral pharmacology is only one facet of determining abuse liability; however, preclinical drug discrimination and self-administration data speak loudly. As outlined in O'Connor et al. (2011), several factors can influence nonclinical drug self-administration data. Animal strain, training regimen, food restriction, duration of access, rate of infusion, and training doses can all influence self-administration data (Baladi et al., 2010; Banks and Negus, 2010; Carroll, 1985; Kosten et al., 1997; Lynch et al., 2010; Woolverton, 1992). Misleading self-administration data can lead to program-killing false positives or to underestimated abuse liability that will manifest during clinical trials. Something as "unimportant" as the dose of the training compound can impact drug discrimination. Too high or too low a training dose may alter the interoceptive cue of the test article and shift dose-response curves accordingly when the test article is screened (e.g., Mumford and Holtzman, 1991). Such results would drastically affect interpretation of the safety margin. Unchecked variables can significantly impact analysis and delay submissions. Although regulators are savvy to these variables, to the classically trained chemist, for example, they can seem like smoke and mirrors without the proper experience.
Daily monitoring of behavioral data and animals (weights, response patterns, and general health) is necessary to determine whether preclinical studies are being carried out properly and are therefore valid. One must be aware that self-administration and drug discrimination studies usually take several months to complete, with animals generating data daily. Failure to incorporate appropriate controls, such as presenting "inactive" levers and recording inactive-lever responses, can render a study invalid; this practice serves as an index of response accuracy (O'Connor et al., 2011). Additionally, catheter patency in rats used for self-administration studies is not a trivial concern. An impaired catheter can seriously alter response patterns, and the same animal may alter its behavior over time due to time-dependent physiological changes (e.g., behavioral tolerance) or a faulty catheter. Behavioral criteria must be established well in advance in order to accurately track animal response patterns, and catheter patency should be tested frequently and on a regular schedule.
Several nonclinical laboratories (especially academic ones) combat less-than-aseptic conditions by administering daily antibiotics to their experimental animals to maintain catheter patency and animal health over lengthy self-administration experiments. Body weights must be maintained at certain levels to ensure motivated animals. If an animal is food-restricted for eight months and administered daily antibiotics, will this create problems with your compound? Concomitant effects can potentially lead to additional toxicology studies if you have unexpected clinical signs or abnormal clinical pathology findings.
Some contract research organizations (CROs) may suggest using their "trained" animals, usually non-human primates, for preclinical drug discrimination and self-administration studies. Will the drug history of these animals pose a problem? Should you instead consider using rats rather than non-human primates? At present, if the metabolism and kinetics of your compound are similar in rats and humans, use of non-human primates is not necessary and may not be justified from an animal welfare standpoint (O'Connor et al., 2011). Moreover, the behavioral database for rats is just as strong as that for non-human primates (O'Connor et al., 2011). The benefits of using non-human primates, however, are multiple. A CRO can maintain a small colony of non-human primates trained to self-administer or discriminate drugs of abuse for years; for this reason, such animals are essentially ready for screening at study initiation. One should consider, however, that non-naive animals may have compromised health due to long histories of handling, laboratory conditions, implanted devices (in self-administration animals), and prior drug exposure that may impact physiology and/or behavior. Will this confluence of factors negatively interact with your compound?
In conclusion, behavioral pharmacology studies should not be taken lightly, and possession of the necessary expertise and skills to navigate these challenges is necessary. Lack of experience in what was once considered a “soft science” can be extremely detrimental in drug development, costing additional time and money. Just like any scientific assessment, there are “correct” and “incorrect” ways of conducting behavioral pharmacology experiments. For this reason, many large pharmaceutical companies and CROs now have expert working groups for abuse liability screening.
Draft Guidance for Industry: Assessment of Abuse Potential of Drugs (January 2010), prepared by the Controlled Substance Staff (CSS) in the Center for Drug Evaluation and Research (CDER) at the Food and Drug Administration.
About the Author:
Paul Kruzich is an experienced abuse liability and safety pharmacology consultant. He has extensive industrial/CRO experience as a study director and academic experience as a tenure-track faculty member at the Medical College of Georgia. His professional affiliations include the College on Problems of Drug Dependence, Safety Pharmacology Society, Society of Toxicology, and Society for Neuroscience. Dr. Kruzich has authored over 23 peer-reviewed articles and 2 book chapters and has served as a reviewer for over 5 scientific journals.
During the past 30 years, genetic toxicology testing has evolved technologically to play an important safety assessment role in the progression of chemical candidates through the drug discovery and development process. Prior to application of the battery of regulatory tests, high-throughput screening assay methods are now used to reduce costs by terminating compounds with undesirable characteristics (mutagenic hazard or carcinogenic potential). With few exceptions, compounds found to be mutagenic in these assays are dropped from development, and clastogenic compounds result in unfavorable labeling, require disclosure in clinical trial consent forms, and can greatly impact the marketability of a new drug. Furthermore, in vitro clastogenicity responses can delay drug development by requiring additional testing to determine their in vivo relevance, although these assays can at times be integrated into other in vivo toxicity studies to expedite the progression of drugs to clinical trials. Thus, genetic toxicology testing at the drug discovery and optimization stages serves to identify mutagenic compounds early so that they can be quickly dropped from development.
Genetic toxicology was the first branch of toxicology to fully embrace in vitro test methods, notably through the visionary work of Bruce Ames and coworkers with the development of the Salmonella typhimurium tester strains. These prokaryotic assays demonstrated good correlation with rodent carcinogenicity results. The Ames test is generally used as the first screening method to assess chemical genotoxicity. Although it provides extensive information on DNA reactivity, the Ames assay is not suitable for detecting nongenotoxic carcinogens. In time, in vitro assays were developed for the detection of gene mutations, chromosomal aberrations, and micronucleus formation. The mouse lymphoma assay in particular has been developed to the point that both gene mutations and chromosomal aberrations can be detected and quantified following exposure to test chemicals, by comparison with known direct-acting mutagens and promutagens.
The performance of a combination of the 3 most commonly used in vitro genotoxicity tests – the Ames, the mouse lymphoma, and the in vitro micronucleus or chromosomal aberration tests – has been evaluated for the ability to discriminate rodent carcinogens from non-carcinogens using a database of over 700 chemicals (Kirkland et al., 2005). Based on the relative predictivity measure (RP; the ratio of real to false positive results), that study demonstrated that positive results in all 3 tests indicate that a chemical is more than 3 times as likely to be a rodent carcinogen as a non-carcinogen. Similarly, negative results in all 3 tests indicate that a chemical is more than 2 times as likely to be a rodent non-carcinogen as a carcinogen. However, further evaluation of combinations of positive and negative results in this genotoxicity battery using the RP calculations indicated that it is not possible to predict the outcome of a rodent carcinogenicity study when only 2 of the 3 genotoxicity results are in agreement (Kirkland et al., 2006).
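The RP calculation itself is simple arithmetic. A minimal sketch in Python, using hypothetical counts (not the Kirkland et al. data) and assuming RP for a given result pattern is the number of carcinogens showing that pattern divided by the number of non-carcinogens showing it:

```python
# Relative predictivity (RP): ratio of "real" (true) to false positive
# results for a given pattern of in vitro test outcomes.
# All counts below are hypothetical, for illustration only.

def relative_predictivity(n_carcinogens: int, n_noncarcinogens: int) -> float:
    """RP > 1: a chemical showing this result pattern is more likely to be
    a rodent carcinogen than a non-carcinogen; RP < 1: the reverse."""
    return n_carcinogens / n_noncarcinogens

# Hypothetical example: 30 carcinogens vs. 9 non-carcinogens test
# positive in all three assays.
rp = relative_predictivity(30, 9)
print(round(rp, 2))  # 3.33 -> "more than 3 times as likely" to be a carcinogen
```

The same ratio, computed over chemicals that are negative in all three tests (with carcinogen and non-carcinogen counts swapped), yields the "more than 2 times as likely to be a non-carcinogen" figure quoted above.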
A basic, if not critical, shortcoming of all these mammalian in vitro assays is the lack of mammalian absorption, distribution, metabolism, and excretion (ADME) features. As summarized in a recent European Centre for the Validation of Alternative Methods (ECVAM) workshop (Kirkland et al., 2007), cell lines used for genotoxicity testing have a number of deficiencies that may contribute to a high false-positive rate. These include a lack of normal metabolism leading to reliance on exogenous metabolic activation systems (e.g., Aroclor-induced S9), impaired tumor protein 53 (p53) transcription factor function, and altered deoxyribonucleic acid (DNA) repair capacity. In addition, the use of excessive test chemical concentrations to achieve an empirical correlation between genotoxicity and carcinogenicity can result in "promiscuous activation": because these in vitro assays rely on such artificial activation systems, other enzymes that are relatively unimportant in vivo may take over the activation role, leading to the same or a different metabolite. Recently, a risk assessment method has been proposed that depends upon the availability of quantitative human and rodent ADME data such that exposure to a metabolite of genotoxic concern can be estimated at the intended human efficacious dose and at the maximum dose used in the 2-year rodent bioassay (Dobo et al., 2009).
Other notable genotoxicity testing methods are available for use in the drug discovery and lead-optimization process. The comet assay is a microgel electrophoresis technique for detecting DNA damage – in vitro and in vivo – at the level of a single cell. When used in vivo, DNA lesions can be measured in any organ, regardless of the extent of mitotic activity and under normal ADME conditions. The conventional mouse micronucleus test in the hematopoietic system is a simple method to assess the in vivo clastogenicity of chemicals, provided the chemical reaches the hematopoietic system. When multiple organs in the mouse were analyzed following exposure to 208 chemicals, the comparison of comet assay results and carcinogenicity suggested that the comet assay was more capable than the mouse micronucleus assay of detecting rodent carcinogens (Sasaki et al., 2000).
At present, the ICH/FDA Guidance Document S2(R1) outlines two GLP genotoxicity testing options. Option 1 requires completion of (1) a test for gene mutation in bacteria, (2) a cytogenetic test for chromosomal damage (a choice of three), and (3) an in vivo test for chromosome damage using rodent hematopoietic cells (either micronuclei or chromosomal aberrations in metaphase cells). Option 2 combines (1) the highly predictive gene mutation assay in bacteria with (2) an in vivo assessment in 2 tissues (e.g., micronuclei using rodent hematopoietic cells plus a second in vivo assay, such as the liver unscheduled DNA synthesis (UDS) assay, a transgenic mouse assay, or the comet assay). Thus, the ICH guidance allows for the registration of pharmaceuticals without the submission of data from in vitro mammalian genotoxicity tests (e.g., the in vitro micronucleus test, chromosomal aberrations, or the mouse lymphoma assay). This is important because some authors (Matthews et al., 2006) have indicated that 2 of the tests in the FDA battery show good correlation for carcinogenicity prediction (Ames and in vivo micronucleus) while 2 show poor correlation (mouse lymphoma and in vitro chromosomal aberrations).
Given the trend toward applying early, high-throughput pre-screening methods to eliminate potential mutagens/clastogens before the more resource-intensive and time-consuming regulatory testing methods are applied, many pharmaceutical companies now use these screening methods early in the discovery/lead-optimization process. Examples of modified or high-throughput methods for early screening include: (1) computer-assisted (in silico) structure-activity relationship (SAR) methods for predictive toxicity screening; (2) modified assays such as the in vitro assessment of micronucleus induction in Chinese hamster ovary (CHO) cells, the Ames II assay (TA98 and TAMix), the in vitro comet assay, or well-based (e.g., 96- or 384-well format) modifications of the yeast deletion (DEL) assay; and (3) proprietary assays such as Vitotox™ (mutagenicity), RadarScreen® (clastogenicity), and GreenScreen® HC (genotoxicity).
About the Author:
David Amacher is a senior investigative and biochemical toxicologist with extensive experience in the safety evaluation of human and animal health products. Dr. Amacher is a Diplomate of the American Board of Toxicology, a Fellow of the National Academy of Clinical Biochemistry, and serves as an Assistant Research Professor of Toxicology and Adjunct Professor in the Graduate School of the University of Connecticut. His professional affiliations include memberships in the American Society for Pharmacology and Experimental Therapeutics, Society of Toxicology, American Society for Biochemistry and Molecular Biology, International Society for the Study of Xenobiotics, American Association for Clinical Chemistry, and the American College of Toxicology.