
Abstract

Collecting and reporting information about the safety of an experimental compound or product constitutes a significant challenge for clinical data management. This chapter reviews the wide range of factors that must be considered for the successful completion of a project’s safety data management and reporting responsibilities. Industry guidelines and regulations for collecting and reporting reliable, high-quality safety data are discussed. The importance of degrees of precision and descriptions of severity when capturing data about adverse events is emphasized. The use of medical dictionaries, especially MedDRA, is reviewed with consideration for the process of encoding safety data to dictionary terms and various approaches to this task. Laboratory data and other forms of data, such as specialized tests, are discussed as potential sources of safety data. Special consideration is given for the capture of serious adverse events and their reporting to regulatory agencies. General issues to consider when reporting safety data to the FDA are also discussed.

Introduction

Safety data often present the most challenging aspects of the management and reporting of clinical trial data. Consideration for return-on-investment frequently curtails the query process for cleaning safety data and limits reporting methods.

Estimating resource requirements and balancing business value against scientific theory are critical to the planning of effort. Scientific principles also motivate clinical trial scientists to use judgment in determining the standards to be set for a given study or program, the quality markers to be used, the levels of precision, and the depths of analysis and reporting. When information that has a soft basis is stored and cleaned as if it has a high degree of precision and reliability, reports can reflect an over-reliance on questionable data and lead to inferential errors. Soft information can still be quite useful, but to avoid misrepresentation, a clear identification of the nature of the data is necessary.

The quality of data is really determined in the field. If the quality of the information that is recorded in source documents is poor, data managers or statisticians can do little to repair it. Instead, data managers should ensure that the database accurately conveys the limitations of the data’s quality to users. Statisticians have an imperative to ensure that analyses and data displays acknowledge their limitations.

The processes of data capture, management, and reporting are highly integrated. Considerations of best practices for reporting guidelines would be deficient in the absence of guidelines for the earlier processes.

Scope

To the clinical trial scientist, the safety data in a clinical study are simultaneously a rich source of information and an enormous challenge. The data manager and statistician who are a part of the product team must work closely with each other and with other team members to ensure that safety data are captured in a sensible way to facilitate proper interpretation and meaningful analysis and summary. Ensuring quality requires that the team capture, process, and report the data in a way that facilitates the drawing of reliable conclusions. When determining the balance between business and science, data managers and statisticians must consider that resources may be expended on efforts that have no effect on conclusions.

Safety data may be displayed and reported in many ways. To ensure adequate reporting of results that pertain to product effects, judgment and scientific selection are needed to identify the trends and salient features of the data. Producing voluminous pages that are incomprehensible and clinically meaningless can dilute real effects. However, the discernment of these effects is the driving goal of the safety data processing and reporting.

This chapter discusses practices, procedures, and recommendations for data managers to operate within the project team and to work closely with statisticians, monitors, and clinical research personnel so that data management practices support statistical and medical purposes. Data managers are better equipped to function as fully integrated team members when they have a basic understanding of the activities and needs of other team members, particularly statisticians.

Minimum Standards

When considering the capture, management, analysis, and reporting of safety data, the following minimum standards are recommended:

  • Ensure compliance with regulations.

  • Ensure that the standard of quality supports the utilization of the data.

  • Ensure that conclusions about the safety profile of a compound can be reliably drawn from the database.

  • Ensure that safety risks are identified and reported accurately.

  • Ensure that normal ranges are properly linked to laboratory data. If normal ranges are unavailable, ensure that the reference ranges which are used are documented as such. This standard is especially crucial when normal ranges are updated frequently.

Best Practices

When considering the capture, management, analysis, and reporting of safety data, the following best practices are recommended:

  • Develop CRFs with teams of individuals from the monitoring, data management, statistics, regulatory affairs, and medical departments, thereby ensuring adequate attention to the collection of safety data.

  • Consider the level of precision that can be attained in the study and select the CRF format for collecting AEs appropriate for that level. Also, consider the level of precision in the analysis.

  • Define severity, with an understanding of its uses and limitations.

  • Examine laboratory data from the perspectives of categorical shifts, changes in magnitude for the group, individual significant values or changes, and listings. Consider related parameters for compounds with potential toxicity in specific body systems.

  • Consider laboratory normalization techniques when combining data across studies or centers where varying normal ranges are used.

  • Include data managers and statisticians working together when considering computerization, management, reporting, and analysis of safety data. These tasks are highly integrated and require joint consideration by the individual team constituents. Develop standard operating procedures (SOPs) for data capture, data validation, statistical analysis, and reporting of data. The SOPs should include guidelines for this team approach.

  • Document the status and quality of safety data, and include this documentation with the database.

  • Include clear links for comparators, such as normal ranges for laboratory data, with the database.

  • Consider levels of precision in the capture and the reporting of safety data to reduce the likelihood of over-interpretation or misinterpretation.

  • Understand that time-to-event analyses are only meaningful when the timing of the event is reliably known.

  • Consider both categorical shifts (from a status of normal to abnormal) and magnitude changes for analysis and reporting of laboratory data. An examination of significant values may provide different information from an examination of significant changes.

  • Apply standards commensurate with the utilization of the results residing in the databases when using databases for safety reporting (e.g., expedited reporting, ongoing review by monitoring boards, or routine reporting). If important decisions will be made based on the information in the database, know the data’s appropriateness and level of quality.

Available Guidelines

One definition of “quality data” is “a collection of data from which reliable conclusions can be drawn.” The goal of reporting safety data is to convey information that facilitates the drawing of reliable conclusions. Generally, one of the key objectives in investigative clinical research trials is to characterize, investigate, establish, or confirm the safety profile of an investigational product. The management and reporting of the safety data from the trial should support that objective.

The International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) has issued several guidelines to provide guidance to the industry for how to manage and report clinical trial safety data. These guidelines are as follows:

  • E1A describes expectations for extent of population exposure for drugs intended for long-term treatment of non-life-threatening conditions. The guideline acknowledges that safety evaluation during clinical drug development is not expected to characterize rare adverse events (AEs), such as AEs that occur in less than 1 in 1000 subjects. Total short-term exposure is expected to be about 1500 subjects. Exposure for six months by 300 to 600 subjects should be adequate. Exposure for a minimum of one year by 100 subjects should be adequate. Exceptions are noted.
  • E2A, E2B, and E2C are clinical safety data management guidelines. They provide guidance for definitions and standards for expedited reporting, for the data elements for transmission of individual case safety reports, and for periodic safety update reports for marketed drugs.
  • E3 is the guideline on “Structure and Content of Clinical Study Reports.” This guideline provides detailed recommendations and specific suggestions for data displays of safety data. It is noted that the guideline shows “demography” as a subsection of “efficacy evaluation” and “extent of exposure” as a subsection of “safety evaluation.” For studies for which doing so makes sense, and for integrated summaries, FDA regulations require that efficacy and safety data be analyzed with particular consideration in regard to age, sex, and race. ICH guidance encourages that the analysis of both efficacy and safety data consider extent of exposure, including compliance. It is imperative to understand that demography and dose exposure relate to efficacy and safety. Therefore, the analysis and reporting of safety data should consider the characteristics of the presenting population and the extent of exposure to the investigational compound.
  • E5, “Ethnic Factors in the Acceptability of Foreign Clinical Data,” advises that there are concerns “. . .that ethnic differences may affect the medication’s safety, efficacy, dosage, and dose regimen.” This guideline also distinguishes extrinsic ethnic factors (those associated with environment and culture, e.g., diet, use of tobacco, use of alcohol) from intrinsic ethnic factors (those that help define and identify a subpopulation, e.g., age, sex, weight, organ dysfunction).
  • E6 is the consolidated good clinical practice (GCP) guideline. This guideline contains principles of GCP that underscore the scientific basis of the clinical trial and specify qualifications for the personnel and systems involved in all aspects of the clinical trial. The guideline also asserts that adherence to good scientific principles is required and that the documentation of the adherence is needed.
  • E9 is a guideline geared toward the statistician, which includes substantial advice for the analysis of safety data.

Other guidance documents that give advice for capturing, managing, and reporting safety data are available from the ICH and from regulatory agencies. Sponsors should refer to IND regulations (21 CFR 312) and NDA regulations (21 CFR 314) to ensure compliance with FDA regulations for investigational and marketed products.

Safety Reporting

Safety data are reported and examined at various stages of an investigation and by different assessors. IND regulations specify expedited reporting for serious or alarming adverse events. Many studies have safety data monitoring boards (SDMB) that review data as they accumulate in a study. The sponsor’s medical monitor reviews safety data, frequently masked to the treatment. Then, after market approval, there are NDA regulations that specify safety reporting. Data managers and statisticians need to ensure that the reports provided are supported by the quality appropriate for the purpose of the report.

The FDA must meet its obligations to Congress and to the public, and an understanding of the mission and motivation of the IND and NDA regulations aids sponsors in complying with them. Before marketing, IND regulations apply.

One key purpose of the IND regulations is to facilitate the FDA’s monitoring of the investigation, including protection of the safety and rights of individuals enrolled into trials, and the scientific quality of the investigation in terms of its ability to adequately demonstrate the efficacy of a compound. The FDA requires annual reports, which are brief updates concerning the progress of the investigation, including any newly identified safety trends or risks that may impact the investigation. FDA also requires expedited reports of “any adverse experience associated with the use of the drug that is both serious and unexpected” (21 CFR 312.32). Written notification of such events is required within 15 calendar days. For events that are fatal or life threatening, a telephone or facsimile transmission is required within seven calendar days. Additional details of IND safety reports, annual reports, and IND specifications are provided in 21 CFR 312.
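As an illustration of the timelines above, the following sketch computes hypothetical due dates for expedited reports from the date the sponsor becomes aware of the event. The function name and the assumption that the clock starts on the awareness date are illustrative only; actual timelines must follow the sponsor's SOPs and the regulations themselves.

```python
from datetime import date, timedelta

def expedited_report_due_dates(awareness_date: date, fatal_or_life_threatening: bool) -> dict:
    """Illustrative due-date calculation for expedited reports under 21 CFR 312.32.

    Assumes the clock starts on the day the sponsor becomes aware of the event;
    this is a sketch, not a substitute for SOP-defined or regulatory timelines.
    """
    due = {"written_ind_safety_report": awareness_date + timedelta(days=15)}
    if fatal_or_life_threatening:
        # Telephone or facsimile notification is required within seven calendar days
        due["telephone_or_fax_notification"] = awareness_date + timedelta(days=7)
    return due

# Example: a fatal, unexpected event the sponsor learned of on 2024-03-01
print(expedited_report_due_dates(date(2024, 3, 1), fatal_or_life_threatening=True))
```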

After marketing, the FDA has a different perspective and a different goal. If a recall is necessary after the compound is in medicine cabinets, it becomes much more difficult (if not impossible) for the FDA to retrieve the compound. The regulations provided in 21 CFR 314 describe reporting requirements after approval as follows: For three years after approval, periodic reports are required quarterly. After the initial three years, reports are required annually. Moreover, under NDA regulations, each adverse experience that is both serious and unexpected, whether foreign or domestic, must be reported within 15 calendar days. Additional details of NDA safety reporting, periodic reports, annual reports, and NDA specifications are provided in 21 CFR 314.

In addition to the FDA’s monitoring of investigations and review of safety data, the FDA requires sponsors to employ medical monitors who review safety data. Sponsors frequently have safety data monitoring boards, comprised of individuals separate from the conduct of the study, that conduct interim analyses and review accumulating data, blinded or unblinded. Data monitoring boards can make recommendations or decisions to halt an ongoing investigation due to (1) overwhelming efficacy, (2) unacceptable safety risk, or (3) futility. These boards may also make recommendations for changes in the ongoing study, such as a dose reduction or the elimination of an arm of the study with an unacceptable safety risk.


Any review of safety data that is based on reported information from a safety database (as opposed to CRFs) relies on that database. If the quality is poor, the decisions taken may be wrong. Review of accumulating data often implies a mixture of complete data with partial data and a mixture of clean data with dirty data. To provide the optimal information to the users of the dynamic database, the quality should be known and reported to the reviewers with the safety data. However, it is generally not helpful to report to data reviewers that some data are dirty without specifically identifying which data are dirty.

Capture, Management, And Reporting Of Adverse Events

Clinical adverse events frequently house the most important safety information in a clinical study. Ensuring that methods of collection, coding, analysis, and reporting facilitate the drawing of reliable conclusions requires an understanding of the characteristics and limitations of adverse event data.

Precision

The precision with which AE data are captured relates directly to how the data can be analyzed and reported. There are three basic types of precision in a clinical trial:

  • High Precision

Investigation in a Phase One sequestered environment (i.e., a Phase One house) often incorporates medical monitoring that is continuous and high-precision. With a few subjects in a sequestered environment, a nurse or physician is by the bedside continuously. In such an environment, clock time may be recorded so that precise data can be collected for onset and offset of an AE. Hence, duration of the AE and elapsed time since initiation of treatment can be calculated in a meaningful way. Clock time is meaningful in such an environment for some events, although it may be difficult to assess the precise minute that sleepiness begins or a rash is cleared.

  • Moderate Precision

Investigation in a hospital often incorporates medical monitoring that is daily, frequent (but not continuous), and moderate-precision. Hospitalization offers a controlled and sequestered environment such that a nurse or physician can assess the subject daily. In such an environment, clock time may not make sense for all events, but date can be precisely recorded. Onset and offset of an AE can be recorded in terms of days but not hours. Duration of the AE (in days) and elapsed days since initiation of treatment can be calculated.

  • Low Precision

Investigation in an outpatient study where subjects return to the facility after days, weeks, or months incorporates low precision. In such an environment, clock time and date may not be meaningful. Use of subject diaries may assist with the determination of the duration of the AE or elapsed time since treatment; however, subject diaries are frequently inaccurate. In such studies, it is recommended to capture frequency (e.g., single episode, intermittent, continuous), maximal severity, the strongest attributed relationship to treatment, and other such information rather than to attempt to record each event with a time of onset and offset.

When an investigation is of low precision but attempts have been made to record data as if they were of moderate or high precision, the result is generally a database with dates (or times) that are rough guesses and that may be far from accurate.

The precision with which AE data were collected has an important impact on how the data can be analyzed in a meaningful way. In an outpatient study, dates cannot be interpreted with the same reliance as in a sequestered study. When dates are present in the database, it may be tempting for the statistician to employ survival-analysis techniques to analyze time-to-event. However, if these dates are inaccurate, the resulting analysis can lead to incorrect or unreliable conclusions.
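The point can be illustrated with a small sketch: elapsed days to onset are computed only where onset dates are reliably recorded, and records from low-precision settings are left missing rather than analyzed as exact times. The data and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical AE records; 'precision' reflects the setting in which dates were captured
ae = pd.DataFrame({
    "subject": ["001", "002", "003"],
    "treatment_start": pd.to_datetime(["2024-01-05", "2024-01-06", "2024-01-07"]),
    "ae_onset": pd.to_datetime(["2024-01-08", "2024-01-20", "2024-02-15"]),
    "precision": ["high", "moderate", "low"],  # low: outpatient, dates may be rough guesses
})

# Compute days to onset only where the recorded dates are trustworthy;
# low-precision records are left missing rather than treated as exact times
days = (ae["ae_onset"] - ae["treatment_start"]).dt.days
ae["days_to_onset"] = days.where(ae["precision"] != "low")

print(ae[["subject", "precision", "days_to_onset"]])
```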

Severity

When considering the capture of severity of adverse events, it is tempting to make the assessment in terms of impact on activities. This method of assessment may be meaningful for some events, such as “pain,” but not for others, such as “alopecia.” In some cases, severity is not assessable at all; “mild suicide,” for example, is not meaningful. Some events, such as “hairline fracture,” are episodic rather than graduated by severity. Diarrhea is another example: an assessment of diarrhea as “severe” is often made because of the duration or frequency of episodes, which are parameters distinct from severity.

The concept of severity is only meaningful within a particular event. When one considers severity of AEs for an organ class (e.g., CNS), ranking among mild, moderate, and severe AEs is not meaningful. If one considers “mild stroke” and “severe flush” (both CNS events), these rankings are not sensible compared to rankings such as “mild headache” and “severe headache” for which a relative ranking does make sense.

A common data display that is encouraged by the ICH and the FDA is a breakdown by severity. In this context, it is easy to confuse severity with seriousness or to misinterpret severity altogether. A breakdown that ignores the particular events and that counts mild AEs separately from moderate AEs will give a distorted assessment when the same study includes reports of “mild stroke” or “mild MI” and also reports of “severe rash” or “severe sleepiness.” A more meaningful display breaks down severity within a particular event.
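A display along these lines can be produced by tabulating severity within each reported event rather than pooling counts across events. The records and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical coded AE records
ae = pd.DataFrame({
    "preferred_term": ["Headache", "Headache", "Headache", "Rash", "Rash", "Stroke"],
    "severity": ["Mild", "Moderate", "Severe", "Mild", "Severe", "Mild"],
})

# Counts of severity broken down within each event term rather than pooled across terms
severity_by_event = (
    ae.groupby(["preferred_term", "severity"])
      .size()
      .unstack(fill_value=0)
      .reindex(columns=["Mild", "Moderate", "Severe"], fill_value=0)
)
print(severity_by_event)
```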

Dictionaries

AE dictionaries are needed to group data for meaningful analysis. MedDRA is the ICH-developed and recommended dictionary for all medical events captured in clinical trials, including, but not limited to, AEs.

Use of MedDRA requires an understanding of its levels of terms and of its multi-axial functionality. The levels of terms used in MedDRA are as follows:

  • Lowest level term (LLT)
  • Preferred term (PT)
  • High level term (HLT)
  • High level group term (HLGT)
  • System Organ Class (SOC)

It is noted that the SOC level within MedDRA is really a dual level, because MedDRA permits a primary SOC and one or several secondary SOCs.


The multi-axiality of MedDRA permits a single AE to be simultaneously coded to many SOCs. For example, a migraine headache could be coded to the nervous system (because of the involvement of the brain), the vascular system (because it is a vascular disorder), the GI system (if there is associated nausea and vomiting), eye disorders (if there are visual disturbances), or other SOCs, as applicable.
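One way to represent this multi-axiality in a coding table is to store the full term hierarchy with a single primary SOC and any secondary SOCs, so that summaries default to the primary SOC while the secondary pathways remain available. The structure and term names below are illustrative assumptions, not the MedDRA distribution format.

```python
from dataclasses import dataclass, field

@dataclass
class CodedTerm:
    """Illustrative representation of a MedDRA-coded AE (not the official file format)."""
    verbatim: str
    llt: str
    pt: str
    hlt: str
    hlgt: str
    primary_soc: str
    secondary_socs: list = field(default_factory=list)

migraine = CodedTerm(
    verbatim="migraine headache with nausea",
    llt="Migraine",
    pt="Migraine",
    hlt="Migraine headaches",          # hypothetical hierarchy labels for illustration
    hlgt="Headaches",
    primary_soc="Nervous system disorders",
    secondary_socs=["Vascular disorders", "Gastrointestinal disorders"],
)

# Default reporting rolls up to the primary SOC; secondary SOCs support alternate views
print(migraine.primary_soc, migraine.secondary_socs)
```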

MedDRA is not just another dictionary; it is a distinct approach to thinking about medical information. Managers of medical information have an imperative to understand the flexibility of MedDRA as well as the implications that its storage and implementation can have for safety reporting.

Dictionary Version Control

Updated versions of dictionaries frequently change pathways to body systems or organ classes. Such changes in a dictionary can have a substantial effect on conclusions regarding a product’s effects on the body. Thus, the version of a dictionary used for classification of AEs into body systems can impact the labeling of the product. As there must be a clear trail leading from the data to the labeling, the data manager who will implement a dictionary for a study (or product) must ensure consistency, when possible, and the ability to replicate.

Most standard dictionaries that have been widely used have been reasonably stable (e.g., COSTART, WHO, ICD-series, and so on). MedDRA is updated periodically. Dictionary version management requires more resources when updates are more frequent.

For purposes of medical monitoring, interim analyses for safety review boards, or other important purposes, one suggested practice for ensuring consistency within a long-term study is to execute the dictionary against the AE data as the study progresses and then, at the end of the study, re-execute the coding using the most current version of the standard dictionary that can reasonably be adopted. This approach ensures that the entire study is coded against the same version of the dictionary. Because the re-execution may generate additional queries, this final execution of the dictionary should occur prior to database lock.

To ensure reproducibility, the version of the dictionary used in any study should be stored with the database.
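A lightweight check along these lines can flag terms whose coding pathway changes between dictionary versions, so that re-execution before database lock generates targeted queries. The version labels and mappings below are hypothetical.

```python
# Hypothetical PT -> primary SOC mappings under two dictionary versions
coding_old = {"Dizziness": "Nervous system disorders", "Vertigo": "Nervous system disorders"}
coding_new = {"Dizziness": "Nervous system disorders", "Vertigo": "Ear and labyrinth disorders"}

def pathway_changes(old: dict, new: dict) -> dict:
    """Return terms whose primary SOC pathway differs between the two versions."""
    return {
        term: (old[term], new[term])
        for term in old
        if term in new and old[term] != new[term]
    }

print(pathway_changes(coding_old, coding_new))
# {'Vertigo': ('Nervous system disorders', 'Ear and labyrinth disorders')}
```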


Encoding

Auto-encoding is a highly recommended practice to facilitate the execution of a dictionary against AEs, and auto-encoding software is available to assist with the programming aspects of this task. Training of monitors and site personnel should cultivate an understanding of the coding process and promote the capture of AE data in a format that can be auto-encoded. Training should include guidelines such as the following (a minimal auto-encoding sketch follows the list):

  • Avoid use of adjectives as initial words (e.g., “weeping wound” may be coded to “crying”; “faint rash” may be coded to “syncope”).

  • Avoid the use of symbols and abbreviations in the AE text, as they may be interpreted differently.

  • Avoid inclusion of severity in the AE text (e.g., “severe headache” in the AE text inhibits auto-encoding; severity should be recorded in the severity field, not the AE text).

  • Ensure that AE text has a clinical meaning (e.g., “bouncing off the walls” and “feeling weird” are difficult to interpret).

  • Ensure that AE text has a clear meaning (e.g., “cold feeling” may be interpreted as “chills” or “flu symptoms”).
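A minimal auto-encoding sketch is shown below, assuming a simple exact-match approach against a synonym table after basic text cleanup; terms that fail to match are routed for manual coding. The synonym table contents are hypothetical. Note how including severity in the verbatim text (“severe headache”) defeats the match, as warned above.

```python
import re

# Hypothetical synonym table mapping cleaned verbatim text to a preferred term
SYNONYMS = {
    "headache": "Headache",
    "nausea": "Nausea",
    "rash": "Rash",
}

def autoencode(verbatim: str):
    """Attempt an exact match after basic cleanup; return None if manual coding is needed."""
    cleaned = re.sub(r"[^a-z ]", "", verbatim.lower()).strip()
    return SYNONYMS.get(cleaned)

for text in ["Headache", "severe headache", "bouncing off the walls"]:
    pt = autoencode(text)
    print(f"{text!r} -> {pt if pt else 'manual coding required'}")
```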

Encoding within the database may add unnecessary complexity to the management of the database when final coding requires judgment. If the auto-encoding is done within the database itself and a medical judgment that is made after database lock indicates that the default pathway inaccurately captures the medical condition, the database would have to be unlocked. Performing auto-encoding in a separate file (e.g., an AE analysis file) offers the possibility of reflecting changes in medical judgment after database lock, if deemed essential. However, this practice imposes the need for an audit trail on analysis files.

Hard-coding

Hard-coding, or coding outside the clinical database, is generally a dangerous practice. For coding AEs, hard-coding is sometimes used to introduce medical judgment that the standard dictionary does not offer. When events such as “strange feeling” are reported and no additional information from the site is available, the medical monitor for the study may have insight that assists with the codification of the event, which can be inserted into the AE analysis file through hard-coding. It is possible to use “pass-through” text for the AE preferred term and hard-code the body system. Conventionally, many sponsors make use of quotation marks to indicate verbatim text that is passed through by a program to the preferred-term field. Any use of hard-coding requires careful documentation.

Lumping and Splitting

Coders can be categorized into “lumpers” and “splitters.” No universally agreed-upon method exists for handling AE text with more than one event. “Tingling in hands and arms” is regarded by some coders as a single event and by other coders as two events. However, the decision to lump or split AE text has consequences.

When two events are reported in the same text field (e.g., “indigestion and diarrhea”) and splitting is done by the data management staff rather than the site, inconsistencies within the database may result. When the data manager splits the AE text into two or more events, the associated items are frequently duplicated (or replicated). For example, if a medication is given for treatment of the AE and the concomitant medications page of the CRF shows only one event as the reason for use (e.g., “indigestion”), splitting the text into two events produces an AE marked as treated for which no corresponding medication is recorded.

Medical judgment may also be inadvertently introduced into the database by the data manager. If the severity of the compound event is recorded as “severe,” the duplication of the attributes of the AE imputes “severe” to the other event(s). However, this outcome may not reflect the physician’s judgment for that particular component of the AE.
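The sketch below shows how splitting a compound verbatim term and copying its attributes imputes the recorded severity and treatment to both components, which may not reflect the investigator's judgment. The record layout is hypothetical.

```python
compound_ae = {
    "subject": "014",
    "verbatim": "indigestion and diarrhea",
    "severity": "Severe",          # recorded once for the compound text
    "treatment_given": "Yes",
}

# Splitting at the data management step duplicates every associated attribute
split_records = [
    {**compound_ae, "verbatim": part.strip()}
    for part in compound_ae["verbatim"].split(" and ")
]
for rec in split_records:
    print(rec)
# Both components now carry "Severe" and "treatment given," even though the
# investigator may have intended those attributes for only one of them.
```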

Coding of AEs has significant impact on the analysis and interpretation of the safety data for a product. The perspective that coding is a clerical function is naïve and risky. As the world moves toward the full implementation of MedDRA, the role of coding will have an even greater impact on the interpretation of safety data.


Capture, Management, and Reporting of Laboratory Data

The characteristics of laboratory data differ importantly from most other types of data. Most clinical adverse events can be observed by either the subject or the physician. However, an elevation in bilirubin or cholesterol is not generally observable. For example, even in high-precision studies, it is impossible to know the time of an elevation of a clinical chemistry analyte. At the time of a blood draw, whether or not the value is elevated can be known, but when the value became elevated is unknown.

The peculiarities of laboratory data need to be respected in the management of the data. Attention is required to ensure that the storage of units clearly reflects the values that were captured; in many databases, units are stored separately from the values. When data across studies are combined, it becomes particularly challenging and important to ensure proper linkage with the units. This linkage protects against unreliable conclusions being drawn from the reported laboratory data.

One of the most challenging aspects of managing laboratory data is linking the data to the appropriate normal range. In the capture of data, if the data do not come to the data manager electronically, attention should be given to ensure the link between each value and the appropriate normal range.

When normal ranges are not available or not obtainable, reference ranges (ranges derived from normal ranges that are available in the study or from a reference book) may be used. However, documentation of the use of reference ranges in lieu of normal ranges must be clear for users of the database.

Normalization techniques for laboratory data are often employed for such purposes as conveniently combining data across studies. Normalization techniques generally include a transformation of the data into a unitless value between “0” and “1” when the value is within the normal range, below “0” when the value is below the lower limit of the normal range, and above “1” when the value is greater than the upper limit of the normal range.
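A transformation consistent with the description above scales each value by its own normal range, so that 0 and 1 correspond to the lower and upper limits. The function below is a minimal sketch under that assumption; the analyte and units in the example are hypothetical.

```python
def normalize_lab_value(value: float, lower: float, upper: float) -> float:
    """Scale a lab value to its normal range: 0 = lower limit, 1 = upper limit.

    Values inside the range map to [0, 1]; values below the range map below 0;
    values above the range map above 1.
    """
    if upper <= lower:
        raise ValueError("Upper limit must exceed lower limit")
    return (value - lower) / (upper - lower)

# Example: ALT of 80 U/L against a normal range of 10-50 U/L -> 1.75 (above the range)
print(normalize_lab_value(80, 10, 50))
```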

If judgment and selection are not a part of the planning for data displays, reporting laboratory data can be prohibitively resource-intensive. The ICH and FDA have given specific guidance for how to report laboratory data.


Treatment-emergent Abnormal Values (TEAVs)

For hematology, clinical chemistry, urinalysis, or any other laboratory panel or group, comparative data summaries and supportive listings that provide a one-page summary by treatment group (for parallel studies) for the analytes included in the study are strongly encouraged. Such a summary provides a valuable overview of movement from a normal state pre-dose to an abnormal state at any time post-treatment, in either direction, for any analyte.
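A treatment-emergent flag of this kind can be derived by comparing each subject's pre-dose status with any post-dose status for the same analyte, as in the hypothetical sketch below (column names are assumptions for illustration).

```python
import pandas as pd

# Hypothetical lab status records
labs = pd.DataFrame({
    "subject": ["001", "001", "001", "002", "002"],
    "analyte": ["ALT", "ALT", "ALT", "ALT", "ALT"],
    "visit":   ["Baseline", "Week 4", "Week 8", "Baseline", "Week 4"],
    "status":  ["Normal", "High", "Normal", "High", "High"],
})

baseline = (labs[labs["visit"] == "Baseline"]
            .set_index(["subject", "analyte"])["status"])
post_abnormal = (labs[labs["visit"] != "Baseline"]
                 .assign(abnormal=lambda d: d["status"] != "Normal")
                 .groupby(["subject", "analyte"])["abnormal"].any())

# Treatment-emergent: normal pre-dose and abnormal at any time post-treatment
teav = (baseline == "Normal") & post_abnormal
print(teav)
```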

Clinically Significant Values or Changes

Comparative data summaries and supportive listings are recommended. These documents provide summaries and details by treatment group of analytes with significant changes or values, such as an analyte for which the baseline value is doubled or tripled, an analyte for which the value is observed to be twice the upper limit of the normal range, or an analyte for which the change in value exceeds the width of the normal range.
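Flags corresponding to criteria like these can be computed directly from the value, the baseline, and the normal range. The thresholds below simply restate the examples in the text; the data and column names are hypothetical.

```python
import pandas as pd

labs = pd.DataFrame({
    "subject": ["001", "002", "003"],
    "analyte": ["ALT", "ALT", "ALT"],
    "baseline": [30.0, 40.0, 25.0],
    "value": [95.0, 45.0, 60.0],
    "lln": [10.0, 10.0, 10.0],   # lower limit of normal
    "uln": [50.0, 50.0, 50.0],   # upper limit of normal
})

labs["baseline_doubled"] = labs["value"] >= 2 * labs["baseline"]
labs["twice_uln"] = labs["value"] >= 2 * labs["uln"]
labs["change_exceeds_range_width"] = (labs["value"] - labs["baseline"]).abs() > (labs["uln"] - labs["lln"])

print(labs[["subject", "baseline_doubled", "twice_uln", "change_exceeds_range_width"]])
```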

Group Means and Changes

Displays of means and mean changes from baseline levels are useful within a group—to indicate a trend in an analyte—or among groups—to examine treatment group differences or trends that may be dose-related.

Shift Tables

Shift tables frequently are 3x3 tables that show a subject’s status before treatment compared to the status after treatment, with status categorized, for example, as below normal, normal, or above normal at both time points. These displays ignore the magnitude of change. The display depicts the movement, or lack thereof, from one category before treatment to another category after treatment.
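A shift table of this form can be produced with a simple cross-tabulation of the pre-treatment category against the post-treatment category; the data below are hypothetical.

```python
import pandas as pd

# Hypothetical pre- and post-treatment categories for one analyte
shifts = pd.DataFrame({
    "pre":  ["Normal", "Normal", "Low", "High", "Normal", "High"],
    "post": ["Normal", "High",   "Low", "Normal", "Normal", "High"],
})

order = ["Low", "Normal", "High"]
shift_table = (pd.crosstab(shifts["pre"], shifts["post"])
                 .reindex(index=order, columns=order, fill_value=0))
print(shift_table)
```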

Individual Data Displays

Listings of individual data are needed for adequate reporting of most clinical trials. When the study is large, individual listings may be voluminous. Therefore, reporting needs to consider practical aspects of summarization.


Related Groups of Analytes

Summaries by related groups of analytes are useful for some studies or integrated summaries. For example, products that may be prone to cause liver damage may need careful examination of analytes that relate to hepatic function. For the hepatic-function-related analytes, it may be useful to prepare a summary on a single page that includes proportions of subjects who double the baseline, triple the baseline, have a change of fixed magnitude, or exceed an alert or toxic threshold.
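A one-page summary of that kind might report, for each hepatic analyte, the proportion of subjects meeting each criterion. The sketch below computes such proportions from hypothetical per-subject maximum post-baseline values; the thresholds and column names are assumptions for illustration.

```python
import pandas as pd

# Hypothetical per-subject maximum post-baseline values for a hepatic analyte
hepatic = pd.DataFrame({
    "subject": ["001", "002", "003", "004"],
    "analyte": ["ALT"] * 4,
    "baseline": [30.0, 20.0, 25.0, 40.0],
    "max_post": [70.0, 65.0, 26.0, 130.0],
    "uln": [50.0] * 4,
})

flags = pd.DataFrame({
    "analyte": hepatic["analyte"],
    "pct_doubling_baseline": hepatic["max_post"] >= 2 * hepatic["baseline"],
    "pct_tripling_baseline": hepatic["max_post"] >= 3 * hepatic["baseline"],
    "pct_over_2x_uln": hepatic["max_post"] >= 2 * hepatic["uln"],
})

# Proportions of subjects meeting each criterion, by analyte
summary = flags.groupby("analyte").mean() * 100
print(summary)
```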

Other Data

Safety data can have forms other than AEs and laboratory values. Capture of data from specialty tests (e.g., electrocardiograms, electroencephalograms) requires an understanding of the common data derived from the test and of the format, precision, and special attributes of the data.

Physical examinations are customary in clinical trials. In a broad sense, the physical exam is a screening method; if an unexpected, significant abnormality is detected during a physical exam, a specialty test is generally used to confirm the event. In this case, the data from the specialty test has greater reliability.

In considerations of data capture, free-text commentary boxes are generally discouraged. If they are used for medical monitoring purposes, they can be shaded so that the reviewing medical monitor can have the prose handy, but the text does not need to be computerized. Making effective use of the investigator’s comment log can ensure that essential text (which is generally minimal) is computerized, if warranted.

The management of “other data” depends on the form of that information. For physical examinations or specialty tests for which free-text commentary is permitted, methods exist for managing the commentary without compromising the quality standards of the database.

Free-text commentary can be computerized using word-processing rather than a data entry system. Subsequently, the commentary can be proofread rather than double-keyed. Through this method, the free-text commentary can be computerized and linked to the database without being a part of the database itself. As a result, quality standards can be maintained for the database proper, but reasonable standards may apply to free-text prose.

One method used by some sponsors that avoids computerization of verbose commentary is codification, in which a medically qualified individual reads the information and judges it to be relevant, not relevant, or critical. A code can then be applied and keyed, where “0=no comment,” “1=comment, not relevant,” “2=comment, relevant,” and “3=comment, critical.”

Serious Adverse Event Data

Expedited reports are required by regulatory agencies for certain serious adverse events. In many companies, receiving reports of serious adverse events (SAEs), computerizing these reports, and managing these reports is the responsibility of a dedicated group of individuals. Often, this group is separate from the data management group that is responsible for computerizing and managing data reported from clinical trials.

The SAE database often includes safety data from various sources. Reports can be received from patients in clinical trials, from spouses who took trial medication (accidentally) and had AEs, or from patients who took marketed drugs and who are not participating in any trial. These reports can come from individuals who give reports over the telephone to a sponsor, from employees who report to the sponsor that they were told about adverse reactions to marketed products, from physicians, from the literature, and even from regulatory agencies. These reports are generally single-keyed, often by individuals other than professional data managers, and generally are not queried. The data within these SAE databases may be dirty, incomplete, duplicate, fragmentary, or have other issues. In contrast, the reports of SAEs from clinical trials that are reported on the AE page of the CRF are subjected to rigorous data management procedures, including scrubbing, querying, and verification to ensure accuracy.

These two types of databases generally have important differences in their sources, their quality levels, their uses, and their customers. Reconciliation of SAE data and the clinical trial database that houses the relevant SAE reports is not always straightforward. Different sponsors have vastly different methods of managing these two databases.





These methods should include provisions for reconciling important disparities between serious adverse events that are captured both in the SAE database and in the clinical trial database. The business-balance perspective encourages users of these databases to recognize that clinical trial databases may be queried or updated while SAE databases are not and that, consequently, some discrepancies may exist because preliminary medical judgments were later changed in light of updated information.
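A reconciliation along these lines can be sketched by matching records on key fields and flagging disparities for review rather than forcing the two sources to agree. The matching keys, field names, and records below are hypothetical.

```python
import pandas as pd

sae_db = pd.DataFrame({
    "subject": ["001", "002"],
    "preferred_term": ["Myocardial infarction", "Hepatic failure"],
    "onset_date": ["2024-02-01", "2024-03-10"],
    "outcome": ["Recovered", "Fatal"],
})

clinical_db = pd.DataFrame({
    "subject": ["001", "002"],
    "preferred_term": ["Myocardial infarction", "Hepatic failure"],
    "onset_date": ["2024-02-01", "2024-03-12"],  # date updated after a query
    "outcome": ["Recovered", "Fatal"],
})

merged = sae_db.merge(clinical_db, on=["subject", "preferred_term"], suffixes=("_sae", "_ct"))
merged["onset_mismatch"] = merged["onset_date_sae"] != merged["onset_date_ct"]
merged["outcome_mismatch"] = merged["outcome_sae"] != merged["outcome_ct"]

# Disparities are listed for medical and data management review, not auto-corrected
print(merged[merged["onset_mismatch"] | merged["outcome_mismatch"]])
```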

General Safety Data

The FDA draft document Reviewer Guidance: Conducting a Clinical Safety Review of a New Product Application and Preparing a Report on the Review (November 1996) provides industry with specific insight into thinking within the FDA about the review of safety data.

In the above-referenced document, the FDA described the concept of clinical domains for a review of the following systems:

  • Cardiovascular
  • Gastrointestinal
  • Hemic and Lymphatic
  • Metabolic and endocrine
  • Musculoskeletal
  • Nervous
  • Respiratory
  • Dermatological
  • Special Senses
  • Genitourinary
  • Miscellaneous

In the guidance document, the FDA specifies that an NDA should be reviewed against each clinical domain with two key questions as goals:

  • Are the safety data adequate to assess the influence of the product on the clinical domain?

  • What do the data indicate about the influence of the product on the clinical domain?

Statisticians who are involved with the reporting of safety data have an imperative to review safety data and ensure that the influence of the investigational product on each clinical domain is described clearly.

The design of the study must be considered in reporting clinical trial safety data. In a multi-center study, the ICH and the FDA urge an examination of the influence of center effects on the results to ensure that the results are not carried by a single center or dominated by a small proportion of the total study population.

In a multi-center study, center effects are typical and are a nuisance. There are three sources of contributions to center effects:

  • The investigator as an individual (e.g., the bedside manner, personal biases, and peculiar methods of assessment)

  • The environment (e.g., equipment, SOPs, and staff)

  • The subject population (e.g., the people who frequent that particular facility, whether a VA hospital, a university hospital, or a country clinic)

When the study employs one investigator who may be on the staff of several hospitals, or when a cluster of hospitals shares equipment and has common SOPs, or when a study makes heavy use of referrals, these attributes affect the interpretation of the center’s effects. Reporting data in a multi-center study requires understanding the source of variability among centers and the reasonableness of displaying data by center or by clusters of centers.

Recommended Standard Operating Procedures

  • Coding of Adverse Events

  • Maintenance of Coding Dictionaries

  • Reconciliation of Serious AEs in SAE Database with Clinical Trial Database

  • Management of AE Analysis File

  • Management of Laboratory Data and Normal Ranges

  • Preparing Integrated Summaries of Safety Data 
