Risk Assessment: Parts I-IV

Scale (Part I)

THIS IS THE first in a series of four MichiganScience articles on risk assessment. The series is designed to give the reader the information needed to understand and evaluate potential risks to human health resulting from exposure to chemicals, including drugs. It is not intended as an in-depth treatise on the complexities of risk assessment, but rather as a high-level overview of the process. The hope is that enough information will be presented that the reader, when faced with understanding and making decisions about risk, will have the basic tools necessary to make an informed decision.

Test Tubes (Part II)

The question of whether and to what degree chemicals present in air, food, drinking water, pharmaceuticals, consumer products and occupational settings pose a threat to human health is obviously of enormous social and medical importance. Many chemicals, such as asbestos, arsenic and dioxin, have a bad name. On the other hand, many chemicals have clearly transformed modern life in extremely beneficial ways. We have drugs to prevent and cure disease, pesticides to protect and increase crop production, preservatives to protect our food, as well as plastics, fibers, metals and thousands of other chemicals that enhance the pleasures and safety of life as we know it today.

Assessing potential risk resulting from chemical exposure is a complex scientific process and involves the following four steps:

  • Hazard identification: the determination of whether a particular chemical is or is not causally related to particular health effects.
  • Dose-response: the determination of the relation between the magnitude of exposure and the probability of occurrence of health effects in question.
  • Exposure assessment: the determination of the extent of human exposure from all sources. It includes the population(s) that may be exposed and the pathways of exposure, i.e. the potential for exposure via a particular pathway, such as ingestion, inhalation or skin contact.
  • Risk characterization: the description of the nature and often the magnitude of human risk, including all sources of uncertainty implicit in the above steps.[1]


The first step in understanding the risk assessment process is hazard identification, which requires a basic understanding of the field of toxicology. Quite simply, toxicology is defined as the study of the adverse effects of chemicals (including drugs) on health and of the conditions under which those effects occur.

All chemicals, whether man-made or naturally occurring, can be toxic and therefore have the potential to cause adverse health effects in humans. To assess the toxicity of a chemical, we need to develop an understanding of the dose or concentration that can cause the effect. The hazards of chemicals are not equal; some chemicals are much more toxic than others. To illustrate, Table 1 presents a conventional rating scheme for lethal doses in humans following oral ingestion. The table shows that toxicity can be rated from practically non-toxic to supertoxic based on the dose: botulinum toxin, for example, is supertoxic, while water is practically non-toxic.

Table 1 - Probable Lethal Oral Dose for Humans
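As a rough illustration of such a rating scheme, the sketch below assigns the conventional labels by dose. The cutoff values (in mg per kg of body weight) are commonly cited approximations and should not be read as the exact entries of Table 1.

```python
# Hypothetical illustration of a conventional oral-lethality rating scale.
# Cutoffs are in mg of chemical per kg of body weight; these are commonly
# cited approximations, not the exact values from Table 1.
RATINGS = [
    (5, "supertoxic"),
    (50, "extremely toxic"),
    (500, "very toxic"),
    (5_000, "moderately toxic"),
    (15_000, "slightly toxic"),
]

def rate_toxicity(lethal_dose_mg_per_kg):
    """Return a qualitative rating for a probable lethal oral dose."""
    for cutoff, label in RATINGS:
        if lethal_dose_mg_per_kg < cutoff:
            return label
    return "practically non-toxic"

print(rate_toxicity(0.001))   # a supertoxic agent such as botulinum toxin
print(rate_toxicity(90_000))  # water-like doses rate as practically non-toxic
```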

In addition to dose, we need a fundamental understanding of how much of the chemical to which we are exposed gets into the body, where it goes and what it does, how long it stays in the body and how it leaves. These processes are referred to in toxicology as absorption, distribution, metabolism and elimination (ADME).

There are three primary routes by which a chemical can enter the body: ingestion, inhalation and skin contact. If the chemical is ingested, then it can be absorbed into the body from the stomach. If inhaled, then the chemical can be absorbed from the airways or the lungs. If exposure is by skin contact, then the material must be absorbed through the outer layers of the skin into the underlying blood supply. Quite often, it is observed that the dose required to produce a toxic effect will vary according to the exposure route. Once the toxin is inside the body, the question becomes: Where does it go while it is there? Usually, the material can distribute itself equally throughout the body depending on the blood supply to any given site. However, if the chemical in question happens to be lipophilic (prone to sequester itself in fat tissue), it can stay in the fat for long periods of time and slowly be released back into the blood stream.

As the blood containing the chemical passes through the liver, there are enzymes present that can convert or metabolize the chemical to a different form. The products of metabolism (i.e., metabolites) are generally more soluble in water so that they can be eliminated from the body in the urine. It should be noted, however, that some chemicals can be readily excreted from the body unmetabolized because they are already water-soluble enough when they enter the body. While the process of metabolism is primarily a process to convert a chemical to a form that can be eliminated from the body, cells in the skin, lungs, intestines and kidneys can also play a role in metabolism. This whole process of ADME can be summarized in Figure 1.

Figure 1 - ADME Process
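Elimination, the last step in ADME, is often modeled as a first-order process, in which a fixed fraction of the chemical leaves the body per unit time. The half-life and dose below are hypothetical, chosen only to illustrate the idea.

```python
import math

def amount_remaining(initial_mg, half_life_hours, t_hours):
    """First-order elimination: amount of chemical left in the body
    after t hours, given its elimination half-life."""
    k = math.log(2) / half_life_hours  # elimination rate constant
    return initial_mg * math.exp(-k * t_hours)

# Hypothetical chemical: 100 mg absorbed, 6-hour half-life.
# After 24 hours (four half-lives), 100 * (1/2)^4 = 6.25 mg remains.
print(round(amount_remaining(100.0, 6.0, 24.0), 2))
```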

While the chemical resides in the body, the question becomes what dose level has the potential to cause harm. Some chemicals can cause damage to organ systems such as the liver, lungs, kidneys, spleen, etc. Other chemicals can cause damage to the reproductive, immune, nervous and other systems and can cause birth defects and cancer. But again, whether or not a chemical causes toxicity is dependent on the dose or concentration of the chemical that makes it into the body. A central paradigm of toxicology comes from Paracelsus (c. 1493-1541), a Swiss physician and alchemist. He is noted for his recognition that "all substances are poisonous, there is none which is not a poison. The right dose differentiates a poison and a remedy." In short, the dose makes the poison.

This brings us to the concept of dose-response. Many individuals have experienced or are familiar with this phenomenon in a mild way (consider the relationship between the amount of alcohol consumed and the various stages of intoxication). It is a well-documented principle of toxicology that for all chemicals, there is a range of doses over which no apparent toxicity can be identified in exposed individuals (No Effect Level, or NOEL) and there is a higher range of doses over which the toxic properties begin to appear. This is shown in Figure 2.

Figure 2 - Dose-Response

The region of the dose-response curve that makes the transition from "no-toxicity" to "toxicity" is called the threshold. The threshold dose is the dose immediately above which the response (or toxicity) begins to manifest itself. Implied in this concept is the fact that an individual can be exposed to a dose below the threshold for a lifetime and not suffer adverse health effects. However, it must be noted that the actual threshold dose varies from person to person (i.e., inter-individual variation), but there is clearly a "no effect" or sub-threshold range for everyone. In other words, some individuals may be more sensitive than others to the effect of a given chemical.
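The transition from a no-effect region to rising toxicity can be sketched with a toy model. The threshold, steepness and dose values below are hypothetical and serve only to illustrate the shape of a threshold dose-response curve.

```python
def response(dose, threshold=10.0, steepness=1.5):
    """Hypothetical dose-response curve: zero response below the
    threshold, then severity rising toward a maximum of 1.0
    (a Hill-type saturating curve on the dose in excess of threshold)."""
    if dose <= threshold:
        return 0.0  # sub-threshold: no apparent toxicity
    excess = dose - threshold
    return excess**steepness / (excess**steepness + 20.0**steepness)

# Hypothetical doses spanning the no-effect and toxic regions.
for d in (1, 10, 20, 50, 200):
    print(f"dose {d:>3}: response {response(d):.2f}")
```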

Finally, toxic effects can be defined as acute, subchronic or chronic, based on the duration of exposure. An acute effect is one of very short duration, often involving a single exposure at a very high dose. A subchronic exposure is daily exposure over some period of time shorter than a whole lifetime. A chronic exposure is daily exposure lasting over a whole lifetime and beginning at an early age. Care should be taken, however, to distinguish subchronic and chronic exposure from subchronic and chronic effects. Subchronic and chronic effects are those that do not appear immediately after exposure but only after some delay. In fact, the effects may not appear until close to the end of life (e.g., cancer) even if exposure begins early in life. Furthermore, chronic effects may or may not require chronic exposure to manifest.

In summary, this first article has discussed the process of hazard identification, which is the first step in a human health risk assessment and provides basic information related to how the hazards or toxic properties of chemicals are assessed. The next article in the series will provide an overview of how dose-response and exposure data are used in the risk assessment process. Subsequent articles will deal with the process of risk characterization, risk management, regulatory implications of risk assessment, as well the application of the precautionary principle in protecting human health in the absence of scientific data necessary for assessing risks.

# # #

[1] These steps are well-known in the field of toxicology and are described in similar language by a variety of sources. See, for instance, "An SAB Report: Guidelines for Reproductive Toxicity Risk Assessment: Review of the Office of Research and Development's Guidelines for Reproductive Toxicity Risk Assessment by the Environmental Health Committee," U.S. Environmental Protection Agency, http://yosemite.epa.gov/sab/sabproduct.nsf/DD5DA21FE9F56EB48525719B006A11E1/$File/ehc95014.pdf, p. 8, n. 3.

# # #

THIS IS THE second in a series of four MichiganScience articles on risk assessment. As explained in the first article, the series is designed to give the reader not an in-depth treatise on the complexities of risk assessment, but a high-level overview sufficient to understand and evaluate potential risks to human health resulting from exposure to chemicals, including drugs.

In the first paper of this series, we discussed some of the fundamental elements of risk assessment. The primary focus of the first article was hazard identification, which is the determination of whether a particular chemical is causally related to a particular adverse health effect. We pointed out that all chemicals, whether they are naturally occurring or man-made, can be toxic and, therefore, have the potential to cause adverse health effects in humans. The dose makes the poison, in other words.

This article introduces two additional steps in the risk assessment process - dose response and exposure assessment. Again, this will not be an in-depth presentation of the complexities of these two steps, but rather a high-level overview.


Dose response describes the relationship between the level of exposure to a chemical and the severity of its effects. Evaluating the dose-response relationship for a chemical is thus at the heart of understanding the health risks it may pose.

To understand dose and response, we need to understand how a typical toxicity study is conducted. This is how scientists determine the levels of exposure at which a particular chemical becomes harmful (toxic).

In a toxicity study, multiple groups of experimental animals (ranging from 10 to 60 animals per group) are exposed to or dosed with the chemical.

Because the most common human exposure to chemicals occurs through breathing (inhalation) or eating and drinking (ingestion), these are the two primary routes examined. Very rarely is skin contact evaluated as a route of exposure.

Exposure groups can range in number from three to five. One group is the control (non-exposure) group, and another group is exposed to the highest dose the animal can tolerate for the duration of the particular study. The effects of the chemical on the experimental animal are monitored throughout the study, including:

  • effects on the amount of food eaten,
  • changes in body weight,
  • outward signs of effects on the neurological system,
  • effects on blood parameters, and
  • other key measurements.

The chemical's effects on organ systems are evaluated at the end of the study by looking at changes in organ weights and examining tissues for any microscopic changes.

After the data are collected and analyzed, the results are plotted graphically (as shown below) where the increasing dose is plotted against increasing severity of response. In general, there is a range of doses below which no response or apparent toxicity occurs in the experimental exposure groups. Conversely, there is a higher range of doses over which the toxic properties begin to appear.

Dose-response function with a no-effect region

The exposure dose at which the transition between no apparent toxicity and toxicity occurs is referred to as the threshold dose, or No Observed Effect Level (NOEL). Implied in this concept, as noted in the first article, is the fact that an individual can be exposed for a lifetime to a chemical below the threshold and not suffer an adverse effect. It must be noted that even below the threshold, there is likely a sensitive population that responds in some way to the exposure. This is known as inter-individual variation and will be discussed below.

The goal of toxicity studies is to establish a chemical's dose response effects on a specific organ or system, such as the reproductive or central nervous systems. Other studies are designed to assess developmental effects, such as the potential to cause birth defects (teratogens) or the potential to cause cancer (carcinogens).

While most chemicals exhibit a threshold, there are certain classes of compounds for which an argument can be made for no threshold. These compounds (e.g., mutagens and carcinogens) can directly interact with and damage genetic material (DNA) in cells, and that damage can be passed on from one generation of cells to the next. Cells containing such damage are at an increased risk of becoming cancerous. Not all such cells will become cancerous, but the probability that they will is increased. In other words, a single molecule of carcinogen interacting with DNA can potentially lead to cancer. Therefore, it is generally accepted that there is no safe dose or level of exposure for this kind of compound.


There is a complex relationship between dose response and risk assessment:

  • Based on the small number of animals used in each dose group in a toxicity study, it is generally accepted that the difference in disease rate or effect needs to be at least 10 percent to be statistically significant.
  • The dose response can vary significantly depending on whether the chemical was ingested or inhaled, due to differences in absorption in the gastrointestinal tract and lungs.
Dose Response and Risk Assessment

  • The toxicity of a chemical can be due to the chemical itself, to a metabolite (a molecule formed when the body chemically transforms the foreign chemical), or to a combination of the two.
  • The external dose is the actual amount to which an organism is exposed, but what toxicologists really want to know is the internal dose or the amount of chemical that is actually delivered to the target tissue. This obviously will depend on how the chemical is distributed in the body once it is absorbed, how long it stays in the body, how it is metabolized and how it is eliminated from the body (refer to the ADME - absorption, distribution, metabolism and elimination - MichiganScience No. 11).
  • Two approaches are available to deal with ADME. The first is physiologically based pharmacokinetic (PBPK) modeling. PBPK models are used to predict internal dose at target organs. They consist of a series of equations representing real biological tissues and physiological processes, simulating the ADME of chemicals that enter the body. The advantage of these models is that they rely on the real physiology of the species in question, which makes them useful for comparing results between species (interspecies extrapolation). The second approach is biomonitoring, the analysis of human body fluids and tissues to measure human exposure to chemicals.
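A full PBPK model is well beyond a sketch, but the distinction between external and internal dose can be illustrated with a one-compartment simplification; all parameter values below are hypothetical.

```python
def internal_steady_state(external_dose_mg_per_day,
                          absorbed_fraction,
                          clearance_l_per_day):
    """Average internal (blood) concentration at steady state under
    repeated daily dosing, in a one-compartment simplification:
    C_ss = (external dose rate x fraction absorbed) / clearance."""
    return external_dose_mg_per_day * absorbed_fraction / clearance_l_per_day

# Hypothetical chemical: 10 mg/day external dose, 60 percent absorbed,
# cleared at 30 L/day -> 0.2 mg/L average internal concentration.
print(internal_steady_state(10.0, 0.6, 30.0))
```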

In addition to these complexities, there are at least two limitations associated with the use of dose response data in risk assessment. Both involve extrapolating data; in essence, this means drawing conclusions about conditions that have not been directly tested based on data from conditions that have.

For example, toxicologists speak of extrapolating toxicity results obtained in animal experiments to reach conclusions about possible toxicity in human beings. Since animal studies are generally conducted at high doses in order to elicit and understand toxicity with a limited number of animals, and since human exposure generally occurs at much lower doses (usually below the threshold), estimating an acceptable risk to human health requires extrapolating from effects seen at high doses to potential effects at lower doses. This adds a level of uncertainty to the risk assessment, since part of the human population may be extremely sensitive to the potential adverse effects.

The second limitation involves trying to estimate potential effects in humans from studies conducted in animals. In other words, we are faced with interspecies extrapolation, which adds another layer of uncertainty when trying to assess the risk to humans. There are many examples of effects that are seen in animals that do not translate to effects in humans (e.g., limonene and renal tumors in rats, see below).

Limonene is a hydrocarbon found in lemon rinds. The compound is employed in cosmetics, cleaning products, food manufacturing, medicines and insecticides.

The use of limonene has the potential to lead to widespread human exposure. However, limonene was shown to cause increases in incidences of nephropathy, renal hyperplasia and renal tumors in laboratory male rats.

It has been shown in studies that administration of limonene to male rats resulted in the accumulation of a low-molecular-weight protein known as alpha-2u-globulin. The protein build-up is followed by kidney disease and an increased incidence of kidney tumors.

It has been shown that the male rat response to alpha-2u-globulin does not occur in female rats or in other animal species such as mice, nor does it occur in humans. In other words, the carcinogenic response to limonene seen in male rats does not occur in other animal species or humans. It is specific to male rats. Therefore, limonene is not considered carcinogenic to humans. This is a clear example of the potential pitfalls in extrapolating adverse results seen on laboratory animals to humans.

From the dose response data, scientists can estimate exposure levels at which adverse responses occur in experimental animals. They also can estimate, at least for chemicals that provoke threshold responses, an exposure level below which most, if not all, humans can be potentially exposed where no adverse effects are expected.

However, there are uncertainties resulting from extrapolating from high dose to low dose and from extrapolating adverse effects seen in animals to humans. The NOEL is the highest dose at which no adverse effects are detected in a hazard identification study. For purposes of a risk assessment, the NOEL is adjusted downward to account for limitations and uncertainties in the available data, arriving at an exposure that is likely to cause no noticeable harmful effects in humans. This is sometimes referred to as the reference dose (RfD) or reference concentration (RfC). This approach has been used for years for substances other than those that cause cancer, and it implies that there is a threshold for all other potential adverse effects.

A benchmark dose (BMD) approach has been used as an alternative to address some of the limitations with the use of the NOEL. The BMD is the current approach used by the U.S. Environmental Protection Agency to set an RfD. Unlike a NOEL, the BMD takes into account dose response information by fitting a mathematical model to dose response data. The benchmark dose low (BMDL) is then estimated from this curve.

The BMDL is the lower bound of the 95 percent confidence interval on the BMD. Applying appropriate safety or uncertainty factors (see below) to the BMDL to establish an RfD provides a more scientifically based estimate of a level protective of human health, since it is a conservative estimate of the dose below which humans can be exposed without effect.
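As a rough illustration only: if the fitted BMD had an approximately normal sampling distribution, its one-sided lower 95 percent confidence limit could be sketched as below. Actual BMD software derives the BMDL from the fitted dose-response model itself (e.g., by profile likelihood or bootstrap), so this is a simplification, and the numbers are hypothetical.

```python
def bmdl_normal_approx(bmd_estimate, standard_error):
    """One-sided lower 95% confidence limit on the BMD, assuming an
    approximately normal estimate (one-sided z = 1.645). Real BMD
    software uses profile-likelihood or bootstrap limits instead."""
    return bmd_estimate - 1.645 * standard_error

# Hypothetical fitted BMD of 50 mg/kg with a standard error of 10 mg/kg.
print(bmdl_normal_approx(50.0, 10.0))  # -> 33.55
```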

In general, the uncertainty factors account for interspecies variation (x10), which allows extrapolation from animal data to humans, and for human variability (x10), which takes into account sensitive individuals in the population. The product of these two factors (100) is routinely used in setting the RfD: to set the RfD, the NOEL or BMDL is divided by 100.
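The arithmetic just described can be expressed in a few lines; the NOEL value used here is hypothetical.

```python
def reference_dose(noel_mg_per_kg,
                   interspecies_factor=10.0,
                   intraspecies_factor=10.0):
    """RfD = NOEL (or BMDL) divided by the product of the
    uncertainty factors (routinely 10 x 10 = 100)."""
    return noel_mg_per_kg / (interspecies_factor * intraspecies_factor)

# Hypothetical animal NOEL of 100 mg/kg body weight per day
# yields an RfD of 1 mg/kg/day with the routine factor of 100.
print(reference_dose(100.0))  # -> 1.0
```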


Once we have established the RfD, we need to gather information for the next step in the risk assessment process: the level at which humans are potentially exposed. An exposure assessment is the determination of the extent of human exposure from all sources. It includes a determination of the populations that may be exposed (e.g., workers in an industrial setting, children, consumers, the general public) and an estimation of the potential for exposure by a particular pathway, such as ingestion, inhalation or skin contact.

Humans are exposed to chemicals through a variety of media, including air, water, food, soils, dusts, cosmetics and household products. The pathways of exposure can be as simple as direct contact (cosmetics) or as complex as a contaminant traveling through the air, depositing in the soil of fields where crops are grown, dissolving in the ground water and being taken up through the roots of crops used to feed livestock that are in turn consumed by humans. If the livestock happen to be dairy cattle, the contaminant could end up in their milk; it could even end up in human breast milk if the mother consumes meat from the cow. To make matters more complex, there is also potential exposure from breathing the air and from contact with soil and dust. By this time, our contaminant has passed through and into at least 10 media on its way to human exposure. The illustration provides an overview of the potential media and exposure pathways for humans living near a chemically contaminated site.

Since all of the above media may contain some level of contaminant, it is important to understand the amount of the chemical in each medium, as well as the amount of each medium to which humans are exposed, in order to estimate the external exposure level. This is more often than not a very complex and challenging task.

With the right set of tools, such as PBPK modeling and biomonitoring, a reasonable estimate of the internal dose can be extrapolated from the external dose. That estimate can then be compared to the RfD to determine whether exposure is above or below the RfD and whether there is a risk to human health. Of course, the situation is different for non-threshold carcinogens, where the human exposure dose is used to estimate the probability, or incremental increase in risk, for a lifetime of exposure; the process for extrapolating dose response for carcinogens is likewise completely different.

The exposure route is the way the chemical in question moves into the body. Generally, a given medium results in only one route of exposure, but there are cases in which it results in several. Consider a volatile chemical in a cosmetic product applied to the skin: some of the chemical volatilizes into the air, so exposure occurs by both skin contact and inhalation.

As with other areas of risk assessment, getting a reliable estimate of potential human exposure from environmental and other pathways has its share of uncertainty. Perhaps the major problem is sampling: where to sample, how to take the sample, how to preserve and process the sample, how to analyze the sample, etc.

The obvious goal of sampling the environment is to obtain an accurate representation of the environmental level of the chemical. In other words, we want to have some confidence that the whole actually is represented by the part taken for analysis. Fortunately, this problem can be dealt with by statisticians who devise sampling strategies that allow scientists to know the degree of confidence of the sampling. The same is true for the analysis of the samples. Quality assurance or quality control programs help ensure the scientific integrity of the analytical results.

In the first article in this series, we discussed the process of hazard identification, which is the first step in a human health risk assessment, and provided basic information related to how the hazards or toxic properties of chemicals are assessed. In this article, we have provided an overview of how dose response data are generated from the hazard identification data and how these data are used to estimate a safe level of human exposure or, in the case of carcinogens, how to estimate an incremental increase in the risk of developing cancer resulting from a lifetime of exposure to the chemical.

We also have shown the complexities of estimating human exposures from a variety of environmental media and the routes by which humans can be exposed. With three of the four steps in the risk assessment process now presented, the next article will focus on how the information from the first three steps is used in the risk characterization. The series will conclude with an article on risk management.

# # #

THIS IS THE third in a series of four MichiganScience articles on risk assessment. As explained in the first article, the series is designed to give the reader not an in-depth treatise on the complexities of risk assessment, but a high-level overview sufficient to understand and evaluate potential risks to human health resulting from exposure to chemicals, including drugs.

In the first two articles in this series, we discussed three of the four fundamental elements of a risk assessment:

  • Hazard Identification, which is the process of determining whether a particular chemical is or is not causally related to particular health effects and involves characterizing the nature and the strength of the evidence of causation.
  • Dose-Response, which is the process of characterizing the relationship between the dose of a chemical administered and the incidence or the probability of occurrence of the health effects in question.
  • Exposure Assessment, which is the process of measuring or estimating the intensity, frequency and duration of exposure to a chemical from all sources. It includes the population(s) that is (are) potentially exposed and the pathways of exposure (ingestion, inhalation and dermal contact).

In this article, we will touch on the final step in a risk assessment — risk characterization. As before, this will be a high-level overview rather than an in-depth presentation of the complexities of a risk characterization.

Quite simply, risk characterization is the process of estimating the degree of safety associated with exposure to a chemical or the incidence of a human health effect under the conditions of exposure described in the exposure assessment. Risk characterization brings together the exposure and dose-response assessments and includes a description of their uncertainties. For example, what is the risk someone will develop neurological disorders from being exposed to mercury? What is the likelihood someone exposed to benzene in gasoline will develop cancer? For non-cancer-causing chemicals, risk is determined by estimating the margin of exposure (MOE) or hazard quotient (HQ). For cancer-causing chemicals, the risk is a probability and is expressed as a fraction. The risk in both cases is unitless.

The concepts of chemical risk, and how such risks are expressed, are sometimes difficult to grasp. People are more familiar with discussions of risk related to other sorts of activities, such as the annual chance of dying in an automobile accident for people who drive an average number of miles per year. This risk is about one in 4,000. The risk of dying from lung cancer for individuals who smoke one pack of cigarettes per day beginning at age 15 is about one in 800. The primary difference between these sorts of risks and risks associated with chemical exposure is that these estimates are much more solid, being derived from years of extensive and substantial statistical data. Risks associated with chemical exposures are filled with uncertainties, which have been described in previous articles.

We have discussed in previous articles the first three steps in the risk assessment process that lead to the final step — risk characterization. The first three steps provide all that is necessary to answer the ultimate risk questions: What type of toxicity is expected in the exposed population (neurotoxicity, birth defects, reproductive effects, cancer, etc.), and what is the risk of it occurring in the exposed population?

A key component of the risk characterization is an analysis of the uncertainties. The two critical areas of uncertainty are the relevance to humans of the animal toxicity findings (hazard assessment) and the dose-response relations at the human levels typically associated with the chemical in question (high-dose to low-dose extrapolation). These uncertainties are generally dealt with by applying uncertainty factors to the NOEL (no observed effect level) or BMDL (benchmark dose low) to establish an RfD (reference dose), for ingestion, or RfC (reference concentration), for inhalation. The uncertainty factors are usually 10 for animal to human extrapolation and 10 for high-dose to low-dose extrapolation. In the case where a NOEL was not determined in a toxicity study and the LOEL (lowest observed effect level) has to be used in the dose-response assessment, a third uncertainty factor of 10 is used.

So, how does this whole process work? Let's assume we have a chemical X that has been subjected to a complete hazard assessment using the ingestion (oral) route of exposure. The results showed that the chemical causes liver toxicity in repeated-dose studies and a reproductive effect in female rats in a standard study assessing male and female reproductive effects. Both effects occurred in a dose-related manner, as shown below.

Dose-response function with a no-effect region

The apparent NOEL for liver damage is 100 mg of X/kg body weight (BW) and the apparent NOEL for the reproductive effect is 1,000 mg of X/kg BW, making liver damage the most sensitive endpoint for chemical X, where an endpoint is defined as an effect observed in a toxicity study. Applying an uncertainty factor of 100 gives RfDs of 1 mg of X/kg BW and 10 mg of X/kg BW for the liver and reproductive effects, respectively. An exposure assessment showed that human intake of chemical X from all sources and by all routes of exposure is 0.5 mg of X/kg BW per day.

Having identified the hazards, evaluated the dose-response relationship and estimated the potential human exposure dose or intake, we can assess whether the level at which humans are exposed is safe, or without appreciable risk of adverse non-cancer health effects over a specified period of time. This can be accomplished in one of two ways, which yield different numbers but are in practice comparable. The first approach is to divide the NOEL by the estimated daily human exposure determined in the exposure assessment. Remember that the NOEL does not have the uncertainty factors (100) applied to it. For our chemical X, the MOEs (margins of exposure) would be:

Liver Effect:

MOE = NOEL/estimated human exposure = (100 mg X/kg BW)/(0.5 mg X/kg BW) = 200

Reproductive Effect:

MOE = NOEL/estimated human exposure = (1,000 mg X/kg BW)/(0.5 mg X/kg BW) = 2,000

Since the MOEs are greater than 100, human exposure to 0.5 mg X/kg BW is judged to be safe, or without appreciable risk, if a person is exposed at this level every day for a specified duration of time (generally a lifetime). The second approach is to use the RfD, which has the uncertainty factor of 100 applied to the NOEL, and to calculate what is called a hazard quotient (HQ). The HQ is calculated by dividing the estimated human exposure dose by the calculated RfD. The HQs for chemical X are:

Liver Effect:

HQ = estimated human exposure/RfD = (0.5 mg X/kg BW)/(1.0 mg X/kg BW) = 0.5

Reproductive Effect:

HQ = estimated human exposure/RfD = (0.5 mg X/kg BW)/(10 mg X/kg BW) = 0.05

For each effect of chemical X, the HQ is less than 1.0, which indicates that the 0.5 mg of X/kg BW human exposure by all routes and sources is without appreciable risk. It should be noted that this is not an expression of the probability of an individual suffering an adverse health effect; it serves more as a guide to what would be considered a safe level of exposure, or an absence of risk.
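
Both screening calculations for chemical X can be sketched in Python, using the numbers from the text. The variable names are illustrative.

```python
EXPOSURE = 0.5  # estimated human intake, mg X/kg BW per day
NOELS = {"liver": 100.0, "reproductive": 1000.0}  # mg X/kg BW per day
UNCERTAINTY_FACTOR = 100

for effect, noel in NOELS.items():
    moe = noel / EXPOSURE            # margin of exposure (first approach)
    rfd = noel / UNCERTAINTY_FACTOR  # reference dose
    hq = EXPOSURE / rfd              # hazard quotient (second approach)
    acceptable = moe >= UNCERTAINTY_FACTOR and hq < 1.0
    print(f"{effect}: MOE={moe:.0f}, HQ={hq:.2f}, acceptable={acceptable}")
# liver: MOE=200, HQ=0.50, acceptable=True
# reproductive: MOE=2000, HQ=0.05, acceptable=True
```

Note that the two approaches are just rearrangements of the same comparison: MOE greater than the uncertainty factor is equivalent to HQ less than 1.0.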

In the first two articles in this series and the first part of the current article, we focused primarily on the approach to assessing risks to human health from chemicals demonstrating a threshold effect in the hazard identification studies. We mentioned that the approach to assessing the risks from exposure to chemicals classified as carcinogens is different. The reason is that chemicals that cause cancer are considered to have no threshold, because they are generally considered to interact with and damage genetic material (DNA) in cells, damage that can be passed on from one generation of cells to the next. Because a single molecule of a carcinogen interacting with DNA can potentially lead to cancer, it is generally accepted that there is no safe dose or level of exposure for these kinds of compounds. The dose-response for a carcinogen is considered linear, especially in the low-dose region of the dose-response curve, which means that for every incremental increase in dose there is an increase in response. Benchmark dose modeling is used on the actual dose-response data to estimate the BMD and the BMDL (sometimes referred to as the ED10 and LED10), which is the dose corresponding to a specified increase in the probability of a specified response or effect. The BMD (or ED10) is usually the estimated dose corresponding to an increase of 10 percent (or 0.1 in probability) of the specified response relative to the probability of that same response at zero dose. The BMDL (or LED10) is the statistical 95 percent lower confidence limit on the BMD (or ED10). This estimated point on the dose-response curve is sometimes referred to as the point of departure (POD).

The dose-response curve for a carcinogen is assumed to be linear from the POD down to zero dose. The slope of the extrapolated linear curve is referred to as the potency of the carcinogen, which reflects the lifetime cancer risk associated with one unit of average daily lifetime dose. The slope of the extrapolated curve is the rise of the line (the increased risk shown on the vertical axis) for each unit increase in dose (shown on the horizontal axis), as illustrated below.

Dose Response Relationship, Carcinogens

The more potent the carcinogen, the steeper the slope.

Since there is essentially no safe dose with carcinogens, one has to estimate a level of risk that would be considered acceptable. This is a probability that lifetime exposure to a chemical, under specified conditions of exposure, will lead to an excess cancer risk, which generally ranges from 1/10,000 to 1/1,000,000 and means that one in 10,000 or one in 1 million people exposed is expected to develop cancer. It is important to note again that this is a probability and there is no way to identify which, if any, of the 10,000 to 1,000,000 people exposed will develop cancer. To calculate excess lifetime cancer risk, the estimate of human exposure dose is multiplied by the potency factor.

So how does this work? If, for example, a population of individuals is exposed to 0.0014 mg of a cancer-causing chemical per kg BW per day for a lifetime and the potency factor is 0.006 risk per mg/kg BW per day, then the excess lifetime cancer risk is:

Risk = (0.0014) x (0.006) = 0.0000084

This means that about eight people out of 1 million experiencing this average daily intake of the carcinogen for a full lifetime would be expected to develop cancer over that lifetime. Put another way, the exposure level would need to be reduced to about 0.00017 mg of this cancer-causing chemical per kg BW per day to give an excess lifetime cancer risk of one in 1 million people exposed.
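
The linear low-dose arithmetic above can be checked with a short Python sketch (the variable names are illustrative):

```python
POTENCY = 0.006  # lifetime risk per mg/kg BW per day (slope factor)
DOSE = 0.0014    # lifetime average daily dose, mg/kg BW per day

# Excess lifetime risk: dose times potency
excess_risk = DOSE * POTENCY
print(f"{excess_risk:.1e}")  # 8.4e-06, about 8 in 1 million

# Dose that would bring the risk down to one in 1 million
target_dose = 1e-6 / POTENCY
print(f"{target_dose:.5f}")  # 0.00017 mg/kg BW per day
```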

In summary, we have in three articles led the reader through the four steps of a risk assessment:

  • Hazard Identification
  • Dose-Response Assessment
  • Exposure Assessment
  • Risk Characterization

We have attempted to show at a very high level how the process works for both non-carcinogens and carcinogens. It must be reinforced that these articles were a very general overview of each step in the process. While the actual process may be more complex, these articles provide the basic tools to evaluate information presented in news media or other sources. All that needs to be done is a little investigative work to gather some of the basic data needed to perform independent risk assessments of particular chemicals. The next article in this series will focus on how regulatory agencies such as the U.S. Food and Drug Administration and U.S. Environmental Protection Agency use risk assessment to manage risk and set levels of exposure to chemicals in an attempt to protect public health.  

# # #

THIS IS THE fourth and final MichiganScience article on risk assessment. These articles have been designed to acquaint and provide the reader with information that will allow him or her to understand and evaluate potential risks to human health resulting from exposure to chemicals, including drugs. In other words, this series on risk assessment was not designed to present the reader with an in-depth treatise on the complexities of risk assessment, but rather to provide a high-level overview of the process. The hope was that enough information would be presented so that the reader, when faced with having to understand and make decisions relative to risk, would have the basic tools necessary to make an informed decision.

In the first three articles, we discussed the four basic steps in a risk assessment:

  • Hazard Identification
  • Dose-Response Relationships
  • Exposure Assessment
  • Risk Characterization

In the last article, we provided examples of a risk characterization for both a threshold chemical and a nonthreshold chemical (i.e. carcinogen) and discussed the uncertainties in the whole process. An expanded view of the risk assessment process would include a provision for additional data to enhance the overall process or to reduce the uncertainties in the final risk assessment number (see Figure 1). Also shown in Figure 1 is risk management, which is the focus of this article. The entirety of the process outlined in Figure 1 is in essence the process of risk analysis.

Figure 1: Risk Analysis

A risk assessment simply cannot draw a distinct line between safe and unsafe. Safety is, by its nature, the inverse of hazard. If safety is taken to mean simply the absence of risk resulting from exposure to chemicals, it is nearly impossible to prove, because doing so requires proof that risk does not exist. Recall from the earlier articles that everything has a hazard, or is toxic, at some dose. It is best summed up, to paraphrase Paracelsus, as, "The dose makes the poison."

We can divide chemicals into three broad categories:

  • The enormous number of naturally occurring chemicals that reach us primarily through food.
  • Industrial chemical products that are produced for specific purposes.
  • Industrial pollutants: chemical byproducts of fuel use, of the chemical industry and of most other types of manufacturing (Rodericks).

If the goal is to be absolutely safe, or without risk, from these products, especially industrial chemicals or the polluting byproducts, then a wholesale banning would be necessary. This would require turning the calendar back 200 years or more (Rodericks). Therefore, the process of risk assessment is necessary to understand scientifically what the risk is from exposure from these sources and what is an acceptable level of exposure that would be without appreciable risk.

Once the scientific process is complete and the risks and uncertainties identified, decisions need to be made on how to manage the risk. This is perhaps the thorniest step in the overall risk assessment paradigm. The risk assessor, or those charged with protecting public health, must make management decisions based on an evaluation of the public health, economic, social and political consequences of a regulatory action. They must weigh the competing priorities of individuals, groups of individuals (i.e. the population as a whole), environmental groups, industry and so on. That is to say, judgments of the acceptability of risky activities are not just a matter of numbers; they draw on the judicial, regulatory and political mechanisms through which societal choices are made and enforced. Some fundamental factors that must be considered in the management process are voluntariness, equity, procedural legitimacy, treatment of uncertainty and perceptions.

Voluntary vs. involuntary exposure is one key determinant in assessing risk acceptability. In a society that values individual liberties, the risk an individual is willing to take may be higher than a quantitatively similar risk that is imposed on an individual by another party. As a classic example, an individual can smoke cigarettes in the privacy of his or her home, creating a health risk for him or herself, and yet be forbidden to smoke in public, where this individual would impose a much smaller risk (via secondhand smoke inhalation) on others.

A second consideration in the management of risk concerns the fairness and equity of the distribution of risks and benefits. The concept of equity of risk is complicated by the fact that a risk management analysis that appears to be fair and equitable may turn out to be inequitable (though not perhaps unfair). This is very akin to the famous utilitarian dictum: "The needs of the many outweigh the needs of one."

Legal acceptability of risk is based on the answer to a fundamental question posed by society, regulators and industry, which is: How can disputes over risk be adjudicated and policy decisions made in the absence of adequate scientific information and knowledge about causal mechanisms? A critical issue, therefore, is "proof" in cases where it is not clear whether a risk is being imposed or where the magnitude of the suspected risk created by an exposure is highly uncertain. This will be discussed further below.

Uncertainties in the risk assessment process have been discussed in previous articles in this series. Uncertainty in the risk assessment process simply cannot be eliminated, and risk assessment and risk management cannot be clearly separated for uncertain risks. The decision of when to stop collecting data and to act is a risk management problem (Figure 1), while expressing the uncertainty at the time of transition from research to management is part of the risk assessment process.

Individual perception of risk cannot be ignored, but often these perceptions regarding risk are changeable, unreliable and overly sensitive to impressions. Many times, an individual's perception of risk is influenced by special interest groups that have an agenda and can make broad statements that may be true on the surface but are devoid of the fundamental concepts of dose and response. In other words, they may neglect to state that while a material is hazardous, the level at which exposure takes place may be in the range at which no appreciable risk occurs (i.e., exposure is devoid of risk or is at a level to which an individual may be exposed for some duration of time without an impact on health). Therefore, a risk may be perceived as large when in reality the risk to human health is negligible.

So how does one approach the task of risk management? There is a great propensity on the part of regulatory agencies and those who practice public policy to require numerical standards for judging what risks are acceptable. For non-cancer-causing chemicals, numerical thresholds are of great value: they reduce ambiguity and debate, largely because it is far easier to compare numbers than to evaluate the complexity of social decision processes.

One common approach to the risk management decision process is to conduct a cost-risk-benefit analysis when the chronic health risks of an activity are known (Rodericks). The common practice in this approach is to evaluate risk control measures in terms of dollars spent per statistical life saved. Balancing the costs against the benefits of risk control measures is clearly necessary for an efficient allocation of resources. To implement the cost-risk-benefit approach fully, it is essential to develop more realistic measurements of the benefits of risk reduction than the conventional one of expected number of statistical lives saved. When risks are uncertain, a different set of issues must be confronted, centering on the high costs of risk research, the costs of risk control and the uncertain benefits of possible risk reductions resulting from control measures.

It should be clear from the above discussion that one of the most challenging areas in statutory interpretation of risk assessment and risk management is the problem of setting cutoff levels or acceptable levels of exposure for risk regulators. The consistency and effectiveness of risk management decision-making might be enhanced if agencies had a systematic approach for determining whether specific risks are "de minimis" — that is, too trivial to warrant an expenditure of resources to assess or control them.[1]

Determining a de minimis risk level is essentially a pragmatic decision tool for distinguishing between trivial and nontrivial risks. In general, the de minimis approach is accomplished by establishing a risk cutoff level greater than zero. If a hazard is greater than the de minimis level, it becomes the object of possible regulation, up to and including a ban on the use of the chemical. If, however, the level falls below the de minimis level, it is excluded from further consideration. Ideally, a de minimis risk level would distinguish between small risks that are more costly to regulate than to tolerate and large risks that are more costly to tolerate than to regulate.
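
At bottom, a de minimis screen is a simple comparison against the cutoff. A minimal Python sketch, in which the function name, the one-in-1-million cutoff and the example risk values are illustrative:

```python
DE_MINIMIS = 1e-6  # excess lifetime risk cutoff, greater than zero

def triage(estimated_risk):
    """Classify a risk estimate relative to the de minimis cutoff."""
    if estimated_risk > DE_MINIMIS:
        return "candidate for regulation"
    return "de minimis"

print(triage(5e-5))  # candidate for regulation
print(triage(2e-7))  # de minimis
```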

The de minimis approach is certainly consistent with current health and safety statutes and with regulatory agency efforts to establish insignificant risk levels in the evaluation of suspected hazardous chemicals. The fact that they are labeled insignificant risk levels rather than de minimis levels is not important; the logic underlying both is the same. For an example of this approach, let's refer back to the third article in this series, in which we evaluated chemical X and found that it had a reproductive effect with a NOEL of 1,000 mg/kg/day. We applied a 100-fold uncertainty factor to the NOEL and established an RfD of 10 mg/kg/day. The RfD established for this reproductive effect could, in essence, be considered the de minimis level below which there is an insignificant risk of a reproductive effect in women exposed for a lifetime.

On the other hand, how would we handle a nonthreshold chemical (i.e., a carcinogen)? Some argue, as stated in previous articles in this series, that there is no safe level of exposure to a carcinogen (the no-threshold hypothesis). Under this hypothesis, any exposure to a carcinogen increases the probability of cancer occurring, but that does not mean any exposure to a carcinogen will cause cancer. Since banning all carcinogens is impractical, regulators take the position that the "safe level" for exposure to a carcinogen is the dose or exposure that produces no more than a specified and very low level of excess lifetime risk, generally 1/1,000,000, or one excess cancer in 1 million people exposed, sometimes expressed as 10⁻⁶. What does this mean? If we assume there are 300 million people in the United States exposed daily for their full lifetime to a concentration of a carcinogen that causes a 10⁻⁶ risk, then the number of extra cancer cases created over a 70-year lifespan would be (300 million people) x (1/1,000,000 extra lifetime risk per person) = 300 extra cancer cases during a lifetime, or an average of 300 ÷ 70 = four to five extra cases per year. Since the actual number of cases associated with a 10⁻⁶ risk is probably lower than, and certainly not more than, four to five extra cases per year, a 10⁻⁶ risk level appears to be appropriately protective of human health, and exposure below a one-in-1-million extra lifetime risk could be taken as the de minimis level.
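
The population arithmetic in this example can be verified with a short Python sketch; the population figure and 70-year lifespan are the assumptions stated in the text.

```python
POPULATION = 300_000_000  # assumed U.S. population
RISK = 1e-6               # excess lifetime cancer risk per person
LIFESPAN = 70             # assumed average lifespan, years

# Expected extra cases over a lifetime, and spread per year
lifetime_cases = POPULATION * RISK
print(round(lifetime_cases))               # 300
print(round(lifetime_cases / LIFESPAN, 1)) # 4.3, i.e. four to five per year
```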

The Precautionary Principle

While the above approach seems a reasonable way to manage risk, even with the uncertainties that are ever-present in the risk assessment process, there is a significant movement to manage risks through a much different approach: the "precautionary principle." The precautionary principle, as it relates to environmental hazards, was proposed in January of 1998 at the Wingspread Conference held at the headquarters of the Johnson Foundation in Racine, Wis. At the conclusion of the three-day conference, a diverse group of scientists, philosophers, lawyers and environmental activists issued a statement calling for governments, corporations and scientists to adopt the precautionary principle:

"When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent [e.g. chemical manufacturer] of an activity, rather than the public, should bear the burden of proof [of a lack of harm]."

The precautionary principle is an extrapolation of the motto "better safe than sorry." While there is precaution involved in traditional risk assessment (note the 100-fold uncertainty factor used in the first de minimis risk calculation above), the precautionary principle is meant to address situations with higher degrees of scientific uncertainty about how and whether particular harms might be caused. The principle is intended for cases concerning potentially irreparable harm, such as birth defects or species loss.

Because the precautionary principle is applied in instances where scientific evidence and causality are not "fully established," critics observe that the principle may be invoked based on less-than-plausible risks and used to ban, rather than reduce exposure to, a process or product. The European Commission, which implements legislation passed by the European Union, "stresses that the precautionary principle may only be invoked in the event of a potential risk and that it can never justify arbitrary decisions. Hence, the precautionary principle may only be invoked when the three preliminary conditions are met — identification of potentially adverse effects, evaluation of the scientific data available and the extent of scientific uncertainty" (European Commission Communication).

In summary, this four-part MichiganScience series on risk assessment has attempted to provide the reader with a high-level overview of the process of assessing risk to human health and the environment resulting from chemical exposures. It has tried to convey the complexities of the process and the uncertainties associated with it, as well as to provide some insights into the most complex part of the process: risk management. This is by no means the complete picture. After these processes are complete comes the task of trying to communicate the risk to the general public, so that people can understand and accept the safe exposure levels that are set.

[1] The term "de minimis" is derived from the Latin maxim "De minimis non curat lex," which means, "The law does not concern itself with trifles."

References and Further Reading

Rodericks, J.V., "Calculated Risks: The Toxicity and Human Health Risks of Chemicals in Our Environment," Cambridge University Press, New York, N.Y., 1994.

National Research Council, "Science and Judgment in Risk Assessment," National Academy Press, Washington, D.C., 1994.

Paustenbach, D.J. (ed), "The risk assessment of environmental and human health hazards: A textbook of case studies," John Wiley and Sons, New York, N.Y., 1989.

"Europa: Summaries of EU Legislation: The precautionary principle," European Union, http://europa.eu/legislation_summaries/consumers/consumer_safety/l32042_en.htm (accessed May 16, 2010).