Pharmacological Screening Techniques

Experimental Studies

Before a new drug is administered to humans, its pharmacologic effects are thoroughly investigated in studies involving animals. The studies are designed to ascertain whether the new drug has any harmful or beneficial effects on vital organ function, including cardiovascular, renal, and respiratory function; to elucidate the drug’s mechanisms and therapeutic effects on target organs; and to determine the drug’s pharmacokinetic properties, thereby providing some indication of how the drug would be handled by the human body.
Federal regulations require that extensive toxicity studies in animals be conducted to predict the risks that will be associated with administering the drug to healthy human subjects and patients. The value of the experimental studies is based on the proven correlation between a drug’s toxicity in animals and its toxicity in humans. The studies involve short-term and long-term administration of the drug and are designed to determine the risk of acute, subacute, and chronic toxicity, as well as the risk of teratogenesis, mutagenesis, and carcinogenesis. After animals are treated with the new drug, their behavior is assessed; their blood samples are analyzed for indications of tissue damage, metabolic abnormalities, and immunologic effects; their tissues are removed and examined for gross and microscopic pathologic changes; and their offspring are also studied for adverse effects.
Studies in animals do not usually reveal all of the adverse effects that will be found in human subjects, either because of the low incidence of particular effects or because of differences in susceptibility among species. This means that some toxic reactions will not be detected until the drug is administered to humans. Because the studies of long-term effects in animals may require years for completion, it is usually possible to begin human studies while animal studies are being completed if the acute and subacute toxicity studies have not revealed any abnormalities in animals.

Carcinogenesis

Carcinogenesis occurs when a normal cell transforms to a neoplastic cell and the neoplastic cell undergoes clonal expansion. A carcinogen is a chemical, physical, or biologic insult that acts by causing DNA damage (mutations). Carcinogenesis is a complex process, involving multiple genetic changes, that usually takes place over years to decades in human beings.
The development of cancer requires sequential genetic changes (the first of which is termed initiation) and epigenetic changes (characterized as promotion and progression). Initiators act by damaging DNA, interfering with DNA replication, or interfering with DNA repair mechanisms. Most initiators are reactive species that covalently modify the structure of DNA, preventing accurate replication and, if unrepaired or misrepaired, leading to a mutation(s). If the mutation(s) affects a gene(s) that controls cell cycle regulation, neoplastic transformation may be initiated.
Carcinogenesis may involve mutations in at least two types of genes, proto-oncogenes and tumor suppressor genes (of which there are several dozen). Proto-oncogenes encode proteins that encourage cell cycle progression. Tumor suppressor genes often encode proteins responsible for inhibiting growth and cell cycle progression. Tumor suppressors can down-regulate important signaling pathways for growth, such as the phosphoinositide 3-kinase pathway, or they may directly suppress cell cycle progression. A mutation in a tumor suppressor gene thus encourages neoplastic growth by removing the normal inhibitory checks on cell growth.
An important on-target adverse effect of cytotoxic alkylating agents used in cancer chemotherapy (chlorambucil, cyclophosphamide, melphalan, nitrogen mustards, and nitrosoureas) is that they not only kill cancer cells but also damage normal blood cell progenitors. These agents are therefore toxic to bone marrow and can cause myelodysplasia and/or acute myeloid leukemia (AML). Indeed, 10% to 20% of cases of AML in the United States are secondary to treatment with such cancer drugs.
Tamoxifen, an estrogen receptor antagonist, is an effective treatment in patients with breast cancer. While tamoxifen is an antagonist of estrogen receptors in the breast, it acts as a partial agonist in other tissues that express the estrogen receptor, most notably the uterus. Therefore, an adverse effect of breast cancer treatment with tamoxifen can be the development of endometrial cancer. Newer estrogen receptor antagonists, such as raloxifene, do not stimulate uterine estrogen receptors and may therefore be safer drug choices for treatment or prevention of breast cancer.

Teratogenesis

Drugs given to pregnant women may have serious, unwanted effects on the health of the fetus. Teratogenesis is the induction of defects in the fetus, and a teratogen is a substance that can induce such defects. Exposure of the fetus to a teratogen necessarily involves maternal exposure. For this reason, the interaction between maternal tissues and the teratogenic drug is an important determinant of the severity of fetal exposure. In particular, the fetus’s exposure to the agent is determined by maternal absorption, distribution, metabolism, and excretion of the drug, by the toxification of inert precursors to toxic metabolites in maternal tissues, and by the ability of the active teratogen to cross the placenta. These issues are discussed further below.
Because development of the fetus is precisely timed, the teratogenic effect of any substance depends on the developmental timing of the exposure. Thus, drugs that might have few adverse effects on the mother may cause substantial damage to the fetus. For example, retinoic acid (vitamin A) possesses significant on-target teratogenic toxicity. Retinoic acid activates nuclear retinoic acid receptors (RARs) and retinoid X receptors (RXRs) that regulate a number of key transcriptional events during development. In humans, organogenesis generally occurs between the 3rd and 8th weeks of gestation. It is during the period of organogenesis that teratogens have the most profound effect. Before the 3rd week, most toxic compounds cause death of the embryo and spontaneous abortion, whereas after organogenesis, teratogenic compounds may affect growth and functional maturation of organs but do not affect the basic developmental plan. Given the severity of birth defects that can occur, women who take RAR/RXR agonists such as isotretinoin for the treatment of acne must sign FDA-mandated informed consent forms to demonstrate that they are aware of the risk of serious drug-related birth defects.
Another example of an on-target teratogenic effect is in utero exposure of the fetus to ACE inhibitors. Although ACE inhibitors were previously not contraindicated in the first trimester of pregnancy, recent data indicate that fetal exposure during this period significantly increases the risks of cardiovascular and central nervous system malformations. ACE inhibitors can cause a group of conditions including oligohydramnios, intrauterine growth retardation, renal dysplasia, anuria, and renal failure, reflecting the importance of the angiotensin pathway on renal development and function.

Dose-Response Relationships

In pharmacodynamic studies, different doses of a drug can be tested in a group of subjects or in isolated organs, tissues, or cells. The relationship between the concentration of a drug at the receptor site and the magnitude of the response is called the dose-response relationship. Depending on the purpose of the studies, this relationship can be described in terms of a graded (continuous) response or a quantal (all-or-none) response.

Graded Dose-Response Relationships

In graded dose-response relationships, the response elicited with each dose of a drug is described in terms of a percentage of the maximal response and is plotted against the log dose of the drug. Graded dose-response curves illustrate the relationship between drug dose, receptor occupancy, and the magnitude of the resulting physiologic effect. For a given drug, the maximal response is produced when all of the receptors are occupied, and the half-maximal response is produced when 50% of the receptors are occupied. In some cases, fewer than 50% of the total receptors will be occupied but still give the half-maximal response. This is because only a fraction of the total receptors are needed to produce the maximal response. The remaining unbound receptors are considered to be “spare” receptors.
Potency is a characteristic of drug action useful for comparing different pharmacologic agents. It is usually expressed in terms of the median effective dose (ED50), which is the dose that produces 50% of the maximal response. The potency of a drug varies inversely with its ED50, so that a drug whose ED50 is 4 mg is 10 times more potent than a drug whose ED50 is 40 mg. Potency is largely determined by the affinity of a drug for its receptor, because drugs with greater affinity require a lower dose to occupy 50% of the functional receptors.
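These relationships can be sketched numerically. The snippet below is a minimal illustration assuming a simple Hill-type occupancy model; the two hypothetical drugs reuse the 4 mg and 40 mg ED50 values from the example above and do not describe any particular agent.

```python
# Sketch: graded dose-response via the Hill equation (illustrative only).
# E = Emax * D^n / (ED50^n + D^n); potency varies inversely with ED50.

def graded_response(dose, ed50, emax=100.0, n=1.0):
    """Percent of maximal response produced by a given dose."""
    return emax * dose**n / (ed50**n + dose**n)

# Hypothetical drugs: A (ED50 = 4 mg) is 10x more potent than B (ED50 = 40 mg).
for drug, ed50 in [("A", 4.0), ("B", 40.0)]:
    # Each drug produces exactly half-maximal response at its own ED50.
    print(drug, graded_response(ed50, ed50))  # 50.0
```

Note that potency (position of the curve on the dose axis) and efficacy (the Emax parameter) vary independently in this model, mirroring the distinction drawn in the text.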
The maximal response produced by a drug is known as its efficacy. A full agonist has maximal efficacy, whereas a partial agonist has less than maximal efficacy and is incapable of producing the same magnitude of effect as a full agonist, even at the very highest doses. As discussed, when a partial agonist is administered with an agonist, the partial agonist may act as an antagonist by preventing the agonist from binding to the receptor and thereby reducing its effect. An antagonist, by definition, has no efficacy in this sense but can be an effective medication as in the use of an adrenergic receptor antagonist (ß-blocker) to treat hypertension.
The effect that an antagonist has on the dose-response curve of an agonist depends on whether the antagonist is competitive or noncompetitive. A competitive antagonist binds reversibly to a receptor, and its effects are surmountable if the dose of the agonist is increased sufficiently. A competitive antagonist shifts the agonist’s dose-response curve to the right, but it does not reduce the maximal response. Although a noncompetitive antagonist also shifts the agonist’s dose-response curve to the right, it binds to the receptor in a way that reduces the ability of the agonist to elicit a response. The amount of reduction is in proportion to the dose of the antagonist. The effects of a noncompetitive antagonist cannot be overcome or surmounted with greater doses of an agonist.
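As a sketch of this distinction, a competitive antagonist can be modeled as raising the agonist's apparent ED50 (a Schild-type rightward shift), while a noncompetitive antagonist depresses the attainable maximum. All parameter values below are hypothetical.

```python
# Sketch: competitive vs. noncompetitive antagonism (hypothetical parameters).

def agonist_effect(dose, ed50=10.0, emax=100.0,
                   antagonist=0.0, kb=1.0, competitive=True):
    """Percent response to an agonist dose in the presence of an antagonist."""
    if competitive:
        # Rightward shift: apparent ED50 rises, maximal response is preserved,
        # so the blockade is surmountable with enough agonist.
        shifted_ed50 = ed50 * (1 + antagonist / kb)
        return emax * dose / (shifted_ed50 + dose)
    # Noncompetitive: the attainable maximum itself is depressed in
    # proportion to the antagonist dose, and cannot be surmounted.
    reduced_emax = emax / (1 + antagonist / kb)
    return reduced_emax * dose / (ed50 + dose)

huge_dose = 1e6
print(agonist_effect(huge_dose, antagonist=3.0, competitive=True))   # near 100
print(agonist_effect(huge_dose, antagonist=3.0, competitive=False))  # near 25
```

Even at an enormous agonist dose, the noncompetitive case never recovers the full response, which is exactly the "not surmountable" behavior described above.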

Quantal Dose-Response Relationship

In quantal dose-response relationships, the response elicited with each dose of a drug is described in terms of the cumulative percentage of subjects exhibiting a defined all-or-none effect and is plotted against the log dose of the drug. An example of an all-or-none effect is sleep when a sedative-hypnotic agent is given. With quantal dose-response curves, the median effective dose (ED50) is the dose that produces the observed effect in 50% of the subjects.
Quantal relationships can be defined for both toxic and therapeutic drug effects to allow calculation of the therapeutic index (TI) and the certain safety factor (CSF) of a drug. The TI and CSF are based on the difference between the toxic dose and the therapeutic dose in a population of subjects. The TI is defined as the ratio between the median lethal dose (LD50) and the median effective dose (ED50). It provides a general indication of the margin of safety of a drug, but the CSF is a more realistic estimate of drug safety. The CSF is defined as the ratio between the dose that is lethal in 1% of subjects (LD1) and the dose that produces a therapeutic effect in 99% of subjects (ED99). When phenobarbital was tested in animals, for example, it was found to have a TI of 10 and a CSF of 2. Because the dose that will kill 1% of animals is twice the dose that is required to produce the therapeutic effect in 99% of animals, the drug has a good margin of safety.
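A quick numeric sketch of the TI and CSF definitions follows; the doses are hypothetical values chosen only so that TI = 10 and CSF = 2, matching the phenobarbital example.

```python
# Sketch: therapeutic index and certain safety factor from quantal data.

def therapeutic_index(ld50, ed50):
    """TI = LD50 / ED50 (general indication of margin of safety)."""
    return ld50 / ed50

def certain_safety_factor(ld1, ed99):
    """CSF = LD1 / ED99 (a more conservative safety estimate)."""
    return ld1 / ed99

# Hypothetical doses (mg/kg) chosen to reproduce TI = 10 and CSF = 2:
ed50, ld50 = 10.0, 100.0
ed99, ld1 = 30.0, 60.0

print(therapeutic_index(ld50, ed50))     # 10.0
print(certain_safety_factor(ld1, ed99))  # 2.0
```

Note that a large TI can coexist with a small CSF when the dose-response curves for efficacy and lethality have different slopes, which is why the CSF is the more conservative measure.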

Descriptive Animal Toxicity Tests

Two main principles underlie all descriptive animal toxicity testing. The first is that the effects produced by the compound in laboratory animals, when properly qualified, are applicable to humans. This premise applies to all of experimental biology and medicine. On the basis of dose per unit of body surface, toxic effects in humans are usually in the same range as those in experimental animals. On a body weight basis, humans are generally more vulnerable than experimental animals, probably by a factor of about 10. With an awareness of these quantitative differences, appropriate safety factors can be applied to calculate relatively safe dosages for humans. All known chemical carcinogens in humans, with the possible exception of arsenic, are carcinogenic in some species but not in all laboratory animals. Whether the converse is true, that all chemicals carcinogenic in animals are also carcinogenic in humans, is not known with certainty, but this assumption serves as the basis for carcinogenicity testing in animals. This species variation in carcinogenic response appears to be due in many instances to differences in biotransformation of the procarcinogen to the ultimate carcinogen.
The second main principle is that exposure of experimental animals to toxic agents in high doses is a necessary and valid method of discovering possible hazards in humans. This principle is based on the quantal dose-response concept that the incidence of an effect in a population increases as the dose or exposure increases. Practical considerations in the design of experimental model systems require that the number of animals used in a toxicology experiment will always be small compared with the size of human populations similarly at risk. To obtain statistically valid results from such small groups of animals requires the use of relatively large doses so that the effect will occur frequently enough to be detected. For example, an incidence of a serious toxic effect, such as cancer, as low as 0.01 percent would represent 20,000 people in a population of 200 million and would be considered unacceptably high. To detect such a low incidence in experimental animals directly would require a minimum of about 30,000 animals. For this reason, there is no choice but to give large doses to relatively small groups and then to use toxicologic principles in extrapolating the results to estimate risk at low doses.
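The ~30,000-animal figure follows from elementary probability: if the true incidence is p, a group of n animals detects at least one affected animal with probability 1 − (1 − p)^n. A small sketch (the 95% detection probability is an assumed choice, not from the text):

```python
import math

# Sketch: minimum group size needed to expect at least one affected animal.
# Solve 1 - (1 - p)**n >= power for n.

def min_animals(p, power=0.95):
    """Smallest n giving probability `power` of observing >= 1 affected animal."""
    return math.ceil(math.log(1 - power) / math.log(1 - p))

# A 0.01 percent incidence (p = 0.0001), as in the text:
print(min_animals(0.0001))  # on the order of 30,000 animals
```

This is why toxicologists dose small groups heavily and extrapolate downward, rather than attempting to observe low-dose risks directly.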
Toxicity tests are not designed to demonstrate that a chemical is safe but rather to characterize what toxic effects a chemical can produce. There are no set toxicology tests that have to be performed on every chemical intended for commerce. Depending on the eventual use of the chemical, the toxic effects produced by structural analogs of the chemical, as well as the toxic effects produced by the chemical itself, all contribute to determine what toxicology tests should be performed. However, the FDA, EPA, and Organization for Economic Cooperation and Development (OECD) have written good laboratory practice (GLP) standards. These guidelines are expected to be followed when toxicity tests are conducted in support of the introduction of a chemical to the market.

1. Acute Lethality

The first toxicity test performed on a new chemical is acute toxicity. The LD50 and other acute toxic effects are determined after one or more routes of administration (one route being oral or the intended route of exposure), in one or more species. The species most often used are the mouse and rat, but sometimes the rabbit and dog are employed. In mice and rats, the LD50 is usually determined as described earlier in this chapter, but in the larger species only an approximation of the LD50 is obtained by increasing the dose in the same animal until serious toxic effects of the chemical are demonstrated. Studies are performed in both adult male and female animals. Food is often withheld the night prior to dosing. The number of animals that die in a 14-day period after a single dosage is tabulated. In addition to mortality and weight, daily examination of test animals should be conducted for signs of intoxication, lethargy, behavioral modifications, morbidity, food consumption, and so on.
The acute toxicity tests:

  1. give a quantitative estimate of acute toxicity (LD50) for comparison with other substances,
  2. identify target organs and other clinical manifestations of acute toxicity,
  3. establish the reversibility of the toxic response, and
  4. provide dose-ranging guidance for other studies.

If there is a reasonable likelihood of substantial exposure to the material by the dermal or inhalation route, then acute dermal and acute inhalation studies are performed. The acute dermal toxicity test is usually performed in rabbits. The site of application is shaven. The test substance is kept in contact with the skin for 24 hours by wrapping with an impervious plastic material. At the end of the exposure period, the wrapping is removed and the skin wiped to remove any test substance still remaining. Animals are observed at various intervals for 14 days and the LD50 calculated. If no toxicity is evident at 2 g/kg, further acute dermal toxicity testing is usually not performed. Acute inhalation studies are performed similarly to the other acute toxicity studies except that the route of exposure is inhalation. Most often, the length of exposure is four hours.

Skin and Eye Irritations

The ability of the chemical to irritate the skin and eye after an acute exposure is usually determined in rabbits. For the dermal irritation test (Draize test), rabbits are prepared by removal of fur on a section of their backs with electric clippers. The chemical is applied to the skin (0.5 ml of liquid or 0.5 g of solid) under four covered gauze patches (1-inch square; two intact and two abraded skin sites on each animal) and usually kept in contact for a period of four hours. The nature of the covering patches depends on whether occlusive, semiocclusive, or nonocclusive tests are desired. For occlusive testing, the test material is covered with an impervious plastic sheet, whereas for semiocclusive tests, a gauze dressing may be used. Occasionally, studies may require that the material be applied to abraded skin. The degree of skin irritation is scored for erythema, eschar and edema formation, and corrosive action. These dermal irritation observations are repeated at various intervals after the covered patch is removed. To determine the degree of ocular irritation, the chemical is instilled into one eye (0.1 ml of liquid or 100 mg of solid) of each of the test rabbits. The contralateral eye is used as the control, and the eyes are examined at various times after application.
Controversy over this test has led to a reevaluation of the procedure. Based on reviews of this procedure and additional experimental data, a panel on eye irritancy of the National Academy of Sciences (NAS) recommended lowering the dose volume (NAS, 1977). More recent studies suggest that a volume of 0.01 ml is as sensitive a method for eye irritancy testing as the 0.1 ml test but causes less pain to the animals (Chan and Hayes, 1989).

Sensitization

Information about the potential of a chemical to sensitize skin is needed in addition to irritation testing for all materials that may repeatedly come into contact with the skin. There are numerous procedures developed to determine the potential of substances to induce a sensitization reaction in humans (delayed hypersensitivity reaction), including the Draize test, the open epicutaneous test, the Buehler test, Freund’s complete adjuvant test, the optimization test, the split adjuvant test, and the guinea pig maximization test (Patrick and Maibach, 1989). Although they differ in route and frequency of administration, they all utilize the guinea pig as the preferred test species. In general, the test chemical is administered to the shaved skin topically, intradermally, or both and may include the use of adjuvant to enhance the sensitivity of the assay. Multiple administrations of the test substance are generally given over a period of two to four weeks. Depending on the specific protocol, the treated area may be occluded. Two to three weeks after the last treatment, the animals are challenged with a nonirritating concentration of the test substance, and the development of erythematous responses is evaluated.

2. Subacute (Repeated-Dose Study) Toxicity Tests

The subacute toxicity tests are performed to obtain information on the toxicity of the chemical after repeated administration and as an aid to establish doses for the subchronic studies. A typical protocol is to give three or four different dosages of the chemical to the animals by mixing it in the feed. For rats, ten animals per sex per dose are often used, whereas for dogs three dosages and three to four animals per sex are used. Clinical chemistry and histopathology are performed as described below in the subchronic toxicity testing section.
The toxicity of the chemical after subchronic exposure is then determined. Subchronic exposure can last for different periods of time, but 90 days is the most common test duration. The principal goals of the subchronic study are to establish a no observable effect level and to further identify and characterize the specific organ(s) affected by the test compound after repeated administration. The subchronic study is usually conducted in two species (rat and dog) by the route of intended exposure (usually oral). At least three doses are employed (a high dose that produces toxicity but does not cause more than 10 percent fatalities, a low dose that produces no apparent toxic effects, and an intermediate dose) with 10 to 20 rats and 4 to 6 dogs of each sex per dose. Each animal should be uniquely identified with permanent markings such as ear tags or tattoos. Only healthy animals should be used, and each animal should be housed individually in an adequately controlled environment. Animals should be observed once or twice daily for signs of toxicity, including body weight changes, diet consumption, changes in fur color or texture, respiratory or cardiovascular distress, motor and behavioral abnormalities, and palpable masses. All premature deaths should be recorded, and the animals necropsied as soon as possible. Severely moribund animals should be terminated immediately to preserve tissues and reduce unnecessary suffering. At the end of the 90-day study, all remaining animals should be terminated and blood and tissues collected for further analysis. The gross and microscopic condition of the organs and tissues (about 15 to 20) and the weight of the major organs (about 12) are recorded and evaluated. Hematology and blood chemistry measurements are usually done prior to, in the middle of, and at the termination of exposure.
Hematology measurements usually include hemoglobin concentration, hematocrit, erythrocyte counts, total and differential leukocyte counts, platelet count, clotting time, and prothrombin time. Clinical chemistry determinations commonly made include glucose, calcium, potassium, urea nitrogen, alanine aminotransferase (ALT, formerly SGPT), aspartate aminotransferase (AST, formerly SGOT), gamma-glutamyl transpeptidase (GGT), sorbitol dehydrogenase, lactate dehydrogenase, alkaline phosphatase, creatinine, bilirubin, triglycerides, cholesterol, albumin, globulin, and total protein. Urinalysis is usually performed in the middle and at the termination of the testing period and often includes determination of specific gravity or osmolarity, pH, glucose, ketones, bilirubin, and urobilinogen, as well as microscopic examination of formed elements. If humans are likely to have significant exposure to the chemical by dermal contact or by inhalation, subchronic dermal and/or inhalation experiments might also be required. The subchronic toxicity studies not only characterize the dose-response relationship of a test substance following repeated administration but also provide data for a more reasonable prediction of appropriate doses for the chronic exposure studies.
For chemicals that are to be registered as drugs, acute and subchronic studies (and potentially additional special tests if the chemical has unusual toxic effects or therapeutic purposes) must be completed before the company can file an IND (Investigational New Drug application) with the FDA. If the IND application is approved, clinical trials can commence. At the same time that phase I, phase II, and phase III clinical trials are being performed, chronic exposure studies of the test compound can be carried out in laboratory animals, as well as additional specialized tests.

3. Chronic Toxicity Tests

Long-term or chronic exposure studies are performed similarly to the subchronic studies except that the period of exposure is longer than 3 months. In rodents, chronic exposures are usually for 6 months to 2 years. Chronic studies in nonrodent species are usually for 1 year but may be longer. The length of exposure is somewhat dependent on the intended period of exposure in humans. If the agent is a drug planned to be used for short periods of time, such as an antimicrobial agent, a chronic exposure of 6 months might be sufficient, whereas if the agent is a food additive with the potential of lifetime exposure in humans, then a chronic study up to 2 years in duration is likely to be required.
Chronic toxicity tests are performed to assess the cumulative toxicity of chemicals, but the study design and evaluation often include a consideration of the carcinogenic potential of chemicals so that a separate lifetime feeding study to address carcinogenicity does not have to be performed. These studies are usually performed in rats and mice and extend over the average lifetime of the species (18 months to 2 years for mice; 2 to 2.5 years for rats). To ensure that 30 rats per dose survive the 2-year study, 60 rats per group per sex are often started in the study. Both gross and microscopic pathologic examinations are made, not only on those animals that survive the chronic exposure but also on those that die prematurely.
Dose selection is critical in these studies to ensure that premature mortality from chronic toxicity does not limit the number of animals surviving to normal life expectancy. Most regulatory guidelines require that the highest dose administered be the estimated maximum tolerable dose (MTD). This is generally derived from subchronic studies, but additional, longer studies (e.g., six months) may be necessary if delayed effects or extensive cumulative toxicity are indicated in the 90-day subchronic study. The MTD has found various definitions (Haseman, 1985). The National Toxicology Program’s (NTP) Bioassay Program currently defines the MTD as the dose that suppresses body weight gain slightly (i.e., 10%) in a 90-day subchronic study, although the NTP and other testing programs are critically evaluating the use of parameters other than weight gain, such as physiologic and pharmacokinetic considerations and urinary metabolite profiles, as indicators of an appropriate MTD. Generally, one or two additional doses, usually fractions of the MTD (e.g., ½ and ¼ MTD), and a control group are tested.
The use of the MTD in carcinogenicity testing has been the subject of much controversy. The premise that high doses are necessary for testing the carcinogenic potential of chemicals is derived from the statistical and experimental design limitations of chronic bioassays.
Consider that a 0.5 percent increase in cancer incidence in the United States would result in over 1 million additional cancer deaths each year—clearly an unacceptably high risk. However, to identify with statistical confidence a 0.5 percent incidence of cancer in a group of experimental animals would require a minimum of 1000 test animals, and this assumes that no tumors were present in the absence of exposure (zero background incidence).
There is a statistical relationship between the minimum detectable tumor incidence and the number of test animals per group: in a chronic bioassay with 50 animals per test group, a tumor incidence of about 8 percent could exist even though no animals in the test group had tumors. This example assumes that there were also no tumors in the control group. These statistical considerations illustrate why animals are tested at doses higher than those that will occur in human exposure. As it is impractical to use the large number of animals that would be required to test the potential carcinogenicity of a chemical at the doses usually encountered by people, the alternative is to assume that there is a relationship between the administered dose and the tumorigenic response and to give animals doses of the chemical that are high enough to produce a measurable tumor response in a reasonably sized test group—e.g., 40 to 50 animals per dose. The limitations of this approach will be discussed later in this chapter.
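The ~8 percent figure can be reproduced as an upper confidence bound: if no tumors are seen in n animals, the largest true incidence consistent with that observation solves (1 − p)^n = 1 − confidence. A sketch follows; the 99% confidence level is an assumption chosen to roughly match the figure cited above.

```python
# Sketch: largest tumor incidence that could go undetected when zero tumors
# are observed in a group of n animals, at a chosen confidence level.

def max_undetected_incidence(n, confidence=0.99):
    """Upper bound p solving (1 - p)**n = 1 - confidence."""
    return 1 - (1 - confidence) ** (1.0 / n)

# 50 animals per group, as in a typical chronic bioassay:
print(round(max_undetected_incidence(50) * 100, 1))  # roughly 8-9 percent
```

The bound shrinks only in proportion to group size, which is why detecting incidences of a fraction of a percent directly would require thousands of animals per group.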

Developmental and Reproductive Toxicity

The effects of chemicals on reproduction and development also need to be determined. Developmental toxicology is the study of adverse effects on the developing organism occurring anytime during the life span of the organism that may result from exposure to chemical or physical agents prior to conception (either parent), during prenatal development, or postnatally until the time of puberty. Teratology is the study of defects induced during development between conception and birth. Reproductive toxicology is the study of the occurrence of adverse effects on the male or female reproductive system that may result from exposure to chemical or physical agents.
Four types of animal tests are utilized to examine the potential of an agent to alter development and reproduction. General fertility and reproductive performance (segment I or phase I) tests are usually performed in rats with two or three doses (20 rats per sex per dose) of the test chemical (none of which produces maternal toxicity). Males are given the chemical 60 days and females 14 days prior to mating. The animals are given the chemical throughout gestation and lactation. Typical observations made are the percentage of females that become pregnant, the number of stillborn and live offspring, and the weight, growth, survival, and general condition of the offspring during the first 3 weeks of life.
Teratogenic potential of chemicals is also determined in laboratory animals (segment II). Teratogens are most effective when administered during the first trimester, the period of organogenesis. Thus, the animals (12 rabbits and 20 rats or mice per group) are usually exposed to one of three dosages during organogenesis (day 6 to 15 in rats and day 6 to 18 in rabbits), and the fetuses are removed by cesarean section a day prior to the estimated time of delivery (rabbit—day 31, rat—day 21). The uterus is excised and weighed, then examined for the number of live, dead, and resorbed fetuses. Live fetuses are weighed; one-half of each litter is examined for skeletal abnormalities and the remaining one-half for soft tissue anomalies.
The perinatal and postnatal toxicities of chemicals are also often examined (segment III). This test is performed by administering the test compound to rats from the fifteenth day of gestation throughout delivery and lactation and determining its effect on birth weight, survival, and growth of the offspring during the first three weeks of life.
A multigenerational study is often carried out to determine the effects of chemicals on the reproductive system. At least three dosage levels are given to groups of 25 female and 25 male rats shortly after weaning (30 to 40 days of age). These rats are referred to as the F0 generation. Dosing continues throughout breeding (about 140 days of age), gestation, and lactation. The offspring (F1 generation) thus have been exposed to the chemical in utero, via lactation, and in the feed thereafter. When the F1 generation is about 140 days old, about 25 females and 25 males are bred to produce the F2 generation, and administration of the chemical is continued. The F2 generation is thus also exposed to the chemical in utero and via lactation. The F1 and F2 litters are examined as soon as possible after delivery. The percentage of F0 and F1 females that become pregnant, the number of pregnancies that go to full term, the litter size, the number of stillborn, and the number of live births are recorded. Viability counts and pup weights are recorded at birth and at 4, 7, 14, and 21 days of age. The fertility index (percentage of matings resulting in pregnancy), gestation index (percentage of pregnancies resulting in live litters), viability index (percentage of animals that survive 4 days or longer), and lactation index (percentage of animals alive at 4 days that survived the 21-day lactation period) are then calculated. Gross necropsy and histopathology are performed on some of the parents (F0 and F1), with the greatest attention being paid to the reproductive organs, and gross necropsy on all weanlings.
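The four indices defined above are simple ratios; the sketch below computes them from hypothetical counts of the kind recorded in a multigeneration study.

```python
# Sketch: reproduction indices from hypothetical multigeneration-study counts.

def reproduction_indices(matings, pregnancies, live_litters,
                         pups_born_alive, pups_alive_day4, pups_alive_day21):
    return {
        # fertility index: percentage of matings resulting in pregnancy
        "fertility": 100 * pregnancies / matings,
        # gestation index: percentage of pregnancies resulting in live litters
        "gestation": 100 * live_litters / pregnancies,
        # viability index: percentage of pups surviving 4 days or longer
        "viability": 100 * pups_alive_day4 / pups_born_alive,
        # lactation index: percentage of day-4 survivors alive at day 21
        "lactation": 100 * pups_alive_day21 / pups_alive_day4,
    }

# Hypothetical control-group counts for 25 mated females:
print(reproduction_indices(25, 22, 21, 230, 220, 210))
```

A dose-related fall in any one index localizes the toxic effect (e.g., a depressed lactation index with a normal viability index points to an effect expressed during nursing).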
Numerous short-term tests for teratogenicity have been developed (Faustman, 1988). These tests use whole embryo culture, organ culture, and primary and established cell cultures to examine developmental processes and estimate the potential teratogenic risks of chemicals. Many of these in vitro test systems are currently under evaluation for use in screening new chemicals for teratogenic effects. These systems vary in their ability to identify specific teratogenic events and alterations in cell growth and differentiation. In general, the available assays cannot identify functional or behavioral teratogens (Faustman, 1988).

Mutagenicity

Mutagenesis is the ability of chemicals to cause changes in the genetic material in the nucleus of cells in ways that can be transmitted during cell division. Mutations can occur in either of two cell types, with substantially different consequences. Germinal mutations damage DNA in sperm and ova, which can undergo meiotic division and therefore have the potential for transmission of mutations to future generations. If mutations are present at the time of fertilization in either the egg or the sperm, the resulting combination of genetic material may not be viable, and death may occur in the early stages of embryonic cell division. Alternatively, the mutation may not affect early embryogenesis but may result in death of the fetus at a later developmental period, resulting in abortion. Congenital abnormalities may also result from mutations. Somatic mutations refer to mutations in all other cell types; they are not heritable but may result in cell death or in transmission of a genetic defect to other cells in the same tissue via mitotic division. Because the initiating event of chemical carcinogenesis is thought to be a mutagenic event, mutagenicity tests are often used to screen for potential carcinogens.
Several in vivo and in vitro procedures have been devised for testing chemicals for their ability to cause mutations. Some genetic alterations are visible with the light microscope; in this case, cytogenetic analysis of bone marrow smears is used after the animals have been exposed to the test agent. Because some mutations are incompatible with normal development, the mutagenic potential of a chemical can also be measured by the dominant lethal test. This test is usually performed in rodents. The male is exposed to a single dose of the test compound and then mated with two untreated females weekly for eight weeks. The females are killed before term, and the number of live embryos and the number of corpora lutea are determined. The test for mutagens that has received the widest attention is the Salmonella/microsome test developed by Ames and colleagues (Ames et al., 1975). This test uses several mutant strains of Salmonella typhimurium that lack the enzyme phosphoribosyl ATP synthetase, which is required for histidine synthesis.
These strains are unable to grow in a histidine-deficient medium unless a reverse or back mutation to the wild type has occurred. Other mutations have been introduced into these bacteria to enhance their sensitivity to mutagenesis. The two most significant additional mutations enhance penetration of substances into the bacteria and decrease the ability of the bacteria to repair DNA damage. Because many chemicals are not mutagenic or carcinogenic unless they are biotransformed to a toxic product by the endoplasmic reticulum (microsomes), rat liver microsomes are usually added to the medium containing the mutant strain and the test chemical. The number of reverse mutations is then quantitated as the number of bacterial colonies that grow in a histidine-deficient medium.
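The readout of the assay is a comparison of revertant colony counts on treated plates against the spontaneous-revertant background; a common rule of thumb (an interpretive convention, not part of the protocol described above) flags a chemical as mutagenic when counts rise dose-dependently to roughly twice the background. A sketch with invented plate counts:

```python
# Revertant colonies per plate (hypothetical mean counts, for illustration).
spontaneous = 25                                 # untreated control plates
dose_response = {0.1: 28, 1.0: 60, 10.0: 190}    # ug/plate -> mean revertants

for dose, colonies in dose_response.items():
    fold = colonies / spontaneous
    # Rule-of-thumb call: >= 2x background counts as a positive response.
    call = "positive" if fold >= 2 else "negative"
    print(f"{dose:5.1f} ug/plate: {colonies:4d} revertants "
          f"({fold:.1f}x background, {call})")
```

A real evaluation would also require a dose-related increase and replication across strains, with and without the microsomal activation mix.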

Other Tests

Most of the tests described above are included in a "standard" toxicity testing protocol because they are required by the various regulatory agencies. Additional tests may also be required or included in the protocol to provide information relating to a special route of exposure (inhalation) or to a special effect (behavior). Inhalation toxicity tests in animals are usually carried out in a dynamic (flowing) chamber rather than in a static chamber, to avoid particulate settling and the complications of exhaled gases. Such studies usually require special dispersing and analytic methodologies, depending on whether the agent to be tested is a gas, vapor, or aerosol. The duration of exposure for both inhalation and behavioral toxicity tests can be acute, subchronic, or chronic, but acute studies are more common in inhalation toxicology and chronic studies are more common in behavioral toxicology. Other special types of animal toxicity tests include immunotoxicology, toxicokinetics (absorption, distribution, biotransformation, and excretion), the development of appropriate antidotes and treatment regimens for poisoning, and the development of analytic techniques to detect residues of chemicals in tissues and other biologic materials.
