Abstract
Data collected during a clinical trial must contain as few errors as possible if it is to support the findings or conclusions drawn from that trial. Moreover, proof of data quality is essential for meeting regulatory requirements. This chapter considers the challenges clinical data management professionals face in determining a dataset’s level of quality, with an emphasis on the importance of calculating error rates. An algorithm for calculating error rates is presented and asserted to be the preferred method for determining the quality of clinical trial data.
Introduction
This chapter concentrates on identifying, counting, and interpreting errors in clinical trial data. Data quality measurement methods should be applied to clinical trial operations as part of an overall planned approach to achieving data quality. Although measuring data quality is important, it is equally if not more important to prevent errors early, during protocol development and data-handling process design. Error prevention is addressed in the “Assuring Data Quality” chapter of the GCDMP.
Federal regulations and guidelines do not address minimum acceptable data quality levels for clinical trial data; therefore, each organization must set its own minimum acceptable quality level and its own methodology for determining that level. As a result of these differences in methodology, data quality assessments and estimated error rates are often not comparable between trials, vendors, auditors, or sponsors. It is important that data management professionals take a proactive role in setting appropriate standards for acceptable data quality levels, utilizing methods for quantifying data quality, and implementing practices to assure data quality.
Scope
This chapter provides minimum standards, best practices, and methods for measuring data quality.
The Institute of Medicine (IOM) defines “quality data” as data that support conclusions and interpretations equivalent to those derived from error-free data1. To make the IOM definition of data quality operational, organizations must understand sources of errors, identify errors through inspections, use inspection results to measure data quality, and assess the impact of the data quality on conclusions drawn from the trial.
Minimum Standards
- Use statistically appropriate inspection sample sizes for decision making.
- Document the method and frequency of data quality assessments in the study’s data management/quality plan.
- Perform at least one quality assessment of the study data prior to final lock.
- Document data quality findings and corrective actions, if needed.
- Determine acceptable error rates for primary and secondary safety and efficacy (also known as “critical”) variables.
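The first standard above calls for statistically appropriate inspection sample sizes. One common approach is zero-acceptance binomial sampling: inspect enough fields that a true error rate at or above a chosen threshold would, with high confidence, produce at least one observed error. This is a sketch only; the GCDMP does not mandate a specific formula, and the function name and parameters below are illustrative.

```python
import math

def zero_acceptance_sample_size(max_error_rate: float, confidence: float) -> int:
    """Number of fields to inspect so that, if the true error rate is at
    least max_error_rate, at least one error appears in the sample with
    the given confidence (zero-acceptance binomial sampling)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_error_rate))

# Detecting an error rate of 1% or worse with 95% confidence:
print(zero_acceptance_sample_size(0.01, 0.95))  # → 299
```

Under these assumptions, finding zero errors in 299 inspected fields supports, at roughly 95% confidence, the claim that the true error rate is below 1%.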
Best Practices
- Use quantitative methods to measure data quality.
...
- Compare trial data and processes in the beginning, middle, and end stages of the trial.
- Work with clinical operations to predefine criteria to trigger site comparisons based on monitoring reports.
- Perform quality control on 100% of key safety and efficacy (critical) variables.
- Monitor aggregate data by site to detect sites whose data differ significantly so that appropriate corrective actions can be taken.
- Perform quality control prior to release of data used for decision making.
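The practice of monitoring aggregate data by site can be sketched as a simple screening computation. This is an illustration only, assuming one roughly comparable variable across sites; the function name and threshold are hypothetical, not a GCDMP requirement.

```python
from collections import defaultdict
from statistics import mean, stdev

def flag_outlier_sites(records, k=2.0):
    """records: iterable of (site_id, value) pairs for one variable.
    Returns the site_ids whose mean value lies more than k standard
    deviations from the mean of all site means."""
    by_site = defaultdict(list)
    for site_id, value in records:
        by_site[site_id].append(value)
    site_means = {s: mean(vals) for s, vals in by_site.items()}
    overall = mean(site_means.values())
    spread = stdev(site_means.values())
    return sorted(s for s, m in site_means.items() if abs(m - overall) > k * spread)

# Three sites reporting similar values and one reporting much higher ones:
data = [("A", 99), ("A", 101), ("B", 100), ("B", 102),
        ("C", 98), ("C", 100), ("D", 199), ("D", 201)]
print(flag_outlier_sites(data, k=1.0))  # → ['D']
```

A flagged site is not necessarily in error; the point is to trigger review and, where warranted, corrective action.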
Other Best Practice Considerations
- “When a long series of data processing steps occurs between the source document and the final summaries (as when the source document is transcribed to a subject’s chart, transcribed onto a case report form, entered into a database, and stored in data tables from which a narrative summary is produced),”2 compare the final summaries directly against the source document, at least on a sample of cases.
- Streamline data collection and handling to limit the number of hand-offs and transfers.
- Perform a data quality impact analysis, a methodical assessment of how data errors or error patterns affect the trial or project. Impact analysis can identify and evaluate potential risks or opportunities and can provide key information to aid decision making.
- Evaluate the results of the impact analysis and propose system and process changes.
- Perform the appropriate level of risk assessment to ensure data quality based on the type and purpose of the trial. For more on this, see the “Assuring Data Quality” chapter.
Data Errors
A clinical research study is a complex project involving many processing steps. Each step at which data are transcribed, transferred, or otherwise processed carries the potential for introducing errors.
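The way per-step error potential compounds across a processing chain can be illustrated with a simple calculation. Assuming, purely for illustration, that each step corrupts a value independently with a small probability, the chance of an error somewhere in the chain grows with every hand-off:

```python
def chance_of_any_error(step_error_rates):
    """Probability that a value is corrupted at least once while passing
    through a chain of processing steps, assuming independent errors."""
    survives = 1.0
    for p in step_error_rates:
        survives *= 1.0 - p
    return 1.0 - survives

# Four transcription/transfer steps, each with a 0.5% error probability:
print(round(chance_of_any_error([0.005] * 4), 4))  # → 0.0199
```

This is why limiting hand-offs and transfers, as recommended above, directly reduces error potential.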
...
Source data verification (SDV) may be used to identify errors that are difficult to catch with programmatic checks. For example, a clinical trial monitor at the investigator site performs SDV by comparing the medical record (a subject’s chart) to the CRF. Any discrepancies between the two that are not explained by CRF completion instructions, the protocol, or other approved site conventions are counted as errors. In addition, if a study is using electronic data capture (EDC) methods, SDV may be the best way to check for data errors. The scope of SDV can be decided on a trial-by-trial basis and should be determined at the beginning of the trial.
Inspection or Comparison of Data
ICH E6 defines an inspection as “the act by a regulatory authority(ies) of conducting an official review of documents, facilities, records, and any other resources that are deemed by the authority(ies) to be related to the clinical trial, and that may be located at the site of the trial, at the sponsor's or CRO’s facilities or both, or at other establishments deemed appropriate by the regulatory authority(ies).”10
...
Documentation of data quality comparisons should include the number of errors found, how the numerator and denominator were defined, and the final error rate. Anyone reading the documentation should be able to recreate the sampling and error rate calculations and produce exactly the same results. For information on how these processes may differ in studies using EDC, please refer to the chapters entitled “Electronic Data Capture—Study Conduct” and “Electronic Data Capture—Study Closeout.”
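The documentation requirement above reduces to an explicitly defined numerator (errors found) and denominator (fields inspected). The sketch below illustrates this; it is not necessarily the chapter's full algorithm, and the function name and reporting scale are assumptions:

```python
def error_rate(errors_found: int, fields_inspected: int) -> float:
    """Error rate = errors found (numerator) / fields inspected (denominator).
    Both counts should be defined and documented before inspection begins."""
    if fields_inspected <= 0:
        raise ValueError("fields_inspected (the denominator) must be positive")
    return errors_found / fields_inspected

# 15 errors in 12,500 inspected fields, reported per 10,000 fields:
rate = error_rate(15, 12_500)
print(round(rate * 10_000, 1))  # → 12.0
```

Because the numerator and denominator definitions are recorded alongside the counts, another reviewer can reproduce the same rate from the same inspection records.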
Recommended Standard Operating Procedures
- Measuring Data Quality
- Monitoring Data Quality
- Data Quality Acceptability Criterion