Abstract
A wide range of measurements, commonly referred to as “metrics,” are essential to evaluate the progress and outcomes of a clinical study. This chapter considers various metrics used in clinical data management, as well as the process of selecting metrics that are related to the goals and objectives of an organization. The chapter discusses the importance of standardizing metrics within a project and across projects, and gives suggestions to help ensure metrics are provided in a timely fashion, with adequate contextual information, so they can be understood and effectively used to measure and monitor performance and improve efficiencies.
Introduction
The term “metric” simply refers to a measurement. In clinical data management, metrics can quantitatively and qualitatively assess whether a process, individual, or group is performing efficiently and effectively, as well as indicate whether the factor being measured has, or will have, an expected level of quality. Metrics can be used at various intervals throughout a study to ascertain whether processes are working as planned. When a process has been completed, well-designed metrics can help indicate whether goals were achieved with the expected level of quality.
This chapter provides information on metrics that are particularly relevant to clinical data management (CDM) personnel. There are no regulatory mandates regarding specific metrics; however, metrics can assist in detecting potential regulatory issues, for example by measuring compliance with SOPs. The effective use of metrics also helps an organization evaluate and improve quality and productivity. This chapter is intended to provide helpful suggestions and considerations for CDM personnel involved with establishing a metrics program within a department or company.
Scope
For the purposes of this chapter, the term “metrics” primarily refers to specific data management process-related measurements assessed during the course of a study, but may also refer to the data generated by these measurements. Roles and responsibilities vary between organizations, and some of the topics discussed in this chapter may be the responsibility of different departments in different organizations. Regardless of role assignment, CDM personnel should be aware of the processes discussed in this chapter and how they impact their roles as data managers.
Minimum Standards
Ensure CDM metrics are aligned with key performance indicators (KPIs) (milestones, deliverables, timelines and other quantitative measurements) to meet organizational needs and goals.
Ensure that all metrics are clearly defined, quantifiable, documented and approved.
Communicate approved metrics to relevant personnel and stakeholders within and across projects.
Ensure adequate and appropriate resources (hardware, software, personnel, etc.) are made available to accurately and thoroughly measure and report metrics.
Ensure the personnel responsible for defining, quantifying, documenting and communicating metrics have the proper training and relevant skills and competencies.
Ensure all personnel and stakeholders are adequately trained regarding metrics definition and their relevance to process and project performance.
Perform quality assurance on data used to determine the metrics, to ensure that the metrics are based on accurate and timely data.
Establish and document corrective action to be taken if planned or actual metrics do not align with goals and objectives.
Best Practices
Include all stakeholders (e.g., project managers, clinical leads, data managers, management, etc.) in the development of metrics specifications.
Ensure all stakeholders (e.g., project managers, contractors, clinicians, data managers, and management) understand and agree with the definition of the measurements and the parameters used to provide each metric before implementing use of the metric.
Align metrics with project team/organizational goals as well as industry standards and contractual agreements, when and where appropriate.
Standardize the definitions of metrics by using consistent terminology and parameters across projects and the organization.
Agree upon well-defined metrics at the onset of a project, and use those metrics to evaluate performance during all stages of the project.
Select a set of key metrics that apply to all projects. Use these metrics as the basis for comparison of process performance across all projects.
Consider the aspects of cost, quantity, quality, timeliness and performance when deciding which metrics to implement.
Identify metrics that will indicate progress to targets and also provide insight into historical performance.
Ensure that the effort needed to collect and report a metric is appropriately offset by the benefit. Where possible, implement automated collection of data for metrics, and strive to use existing primary data (e.g., audit trails, tracking systems) to collect metrics.
Ensure the tools used to collect and report metrics are thoroughly validated, and are 21 CFR Part 11 compliant where applicable.
Establish benchmarks of expected performance based on pooling of similar data.
Ensure metrics findings are visible to relevant stakeholders via a reporting plan (charts, dashboards, etc.), followed by a feedback loop and a rigorous action plan through root cause analysis (RCA) and corrective action/preventive action (CAPA).
Document the process for collecting, reporting, and communicating metrics.
Evaluate metrics collection and reporting processes frequently (for both internal and outsourced activities).
Determine if metrics need revision, or if other metrics should be added or eliminated, based on changes in technology or process landscape.
Identifying Metrics
An organization’s use of a set of key and relevant metrics will facilitate achievement of predetermined goals. Although agreement on certain metrics is obtained at the overall company or department level, individual departments or project teams may need to maintain additional metrics to assess progress toward the goals of their respective department or team.
Metrics should be based on goals and objectives set by an organization, and ideally, organizations and departments should strive to identify a set of metrics to use across all projects. Identifying the specific metrics that fit the needs of all involved parties is often difficult. Most goals and objectives set by groups or organizations revolve around the interdependent areas of quantity, cost, time, quality, and performance, as shown later in this chapter in Table 1.
Quantity - Quantity measurements are straightforward and objective, and are therefore among the easier metrics to quantify.
Time - When measuring time, one of the most important considerations is defining the exact start and stop points and the unit of measure (e.g., business days, calendar days, or resource hours). Time measurements ensure that the chronology of milestones is maintained. Organizations may follow a risk-based approach, prioritizing adherence to timelines over other metrics.
Cost - Although costs are not typically a CDM responsibility, CDM may supply metrics that are used for cost analyses.
Quality - Quality is the most important metric to be considered in CDM. Quality metrics may measure the quality of processes and deliverables and can be quantified in different ways. For more information about data quality, see the GCDMP chapters entitled “Measuring Data Quality” and “Assuring Data Quality.”
Performance - Metrics intended to quantify performance are typically made up of some combination of measures of quantity, time, cost, and quality. Therefore, performance can also be assessed in terms of one or more of these measures in relation to another measure, such as performance over time or performance compared to cost. Performance should typically be measured at multiple levels (for example, site, study, and project).
When considering a set of key metrics, an organization should design the metrics to allow for their application across projects, regardless of the project-specific process or technology used. This approach allows for an assessment of each project in comparison to similar projects. It also allows for an evaluation of processes that may be redesigned to take advantage of a new technology.
Two examples applicable to clinical studies using either paper-based data collection or electronic data capture (EDC) are listed below, followed by a computational sketch:
measurement of the number of queries per data field for incoming data, as opposed to the number of queries per page, and
measurement of the time from subject visit to data entered in the database.
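To make the distinction concrete, below is a minimal sketch of how these two modality-neutral metrics might be computed. The counts, dates, and variable names are purely hypothetical; real values would come from a CDMS, EDC audit trail, or tracking system.

```python
from datetime import date
from statistics import median

# Hypothetical counts pulled from a query log and a field-level inventory.
query_count = 340         # queries raised against incoming data
fields_received = 12_500  # total data fields received to date

# Queries per data field works identically for paper and EDC studies,
# unlike queries per page, which assumes a paper CRF layout.
print(f"Queries per data field: {query_count / fields_received:.4f}")

# Hypothetical (visit date, data entry date) pairs from a tracking system.
visit_entry_pairs = [
    (date(2013, 3, 4), date(2013, 3, 6)),
    (date(2013, 3, 5), date(2013, 3, 12)),
    (date(2013, 3, 7), date(2013, 3, 9)),
]
lags = [(entered - visited).days for visited, entered in visit_entry_pairs]
print(f"Median days from subject visit to data entered: {median(lags)}")
```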
Clinical studies are often evaluated within the realm of strategic (i.e., organizational) and tactical (i.e., operational) objectives. Metrics assessments are generally based on the relationship between two or more of the five core criteria of quantity, time, cost, quality, and performance (e.g., quantity over time, or quality of quantity).
One should be cautioned that focusing too much on one criterion may adversely affect another. For example, focusing too strongly on quality may impact study timelines; similarly, focusing too strongly on study timelines may negatively impact quality. All of the above-mentioned criteria should be balanced to some degree in the metrics used by an organization.
Regardless of the measurement, or why a measure exists, a well-designed metric should be:
relevant - answers critical business questions
enduring - is of lasting relevance
robust - is not subject to manipulation or variation due to process changes
valid - accurately measures what it purports to measure
specific - is clear and consistent
actionable - can drive decisions
practical - is measured in a timely fashion without a significant drain on resources.1
The effort needed to collect and report a metric should be offset by the potential benefit. If a metric has no benefit, it should not be collected just because doing so is easy and inexpensive. Cost, quality, and performance metrics may be difficult to quantify, whereas metrics dealing with quantities and times are often much easier to collect. The metrics that are collected and reported should be able to answer questions that have been predefined to measure the success or failure of a project or process.
Linking Metrics with Organizational Goals
A hierarchical relationship exists between the objectives of an organization, a department, and an individual project or clinical study. An organization may have strategic objectives that include achieving a certain level of quality in its product while achieving a particular profit margin at the end of the fiscal year. Each functional group within an organization, such as CDM, sets tactical goals and objectives to ensure quality while using resources efficiently. A particular project manager or project team may have budget and time constraints, yet be expected to deliver a quality end product.
Each functional group must develop its own objectives and metrics within the context of the organization’s objectives. However, cross-functional input should be solicited to ensure consistent interpretation of the metrics.
The existence of these hierarchical objectives and concurrent timelines drives the need for consistency in the definition and utilization of metrics.
Linking Metrics with Project Goals and Deliverables
Overall project goals and objectives must be considered when metrics are selected and evaluated. A set of metrics that only addresses some, but not all, of the five core criteria will provide only a partial assessment of overall project performance. If one metric is met, it does not imply that the others are achieved. For example, even if milestones are achieved on schedule, they may have required additional resources.
Even when the same set of metrics is used across projects, they may be prioritized differently for each project. For example, cost containment may be assigned a higher priority in an early phase exploratory study, while data quality may be prioritized in a phase III pivotal trial.
Identifying Users
To optimize the effectiveness and efficiency of metrics, the users of each metric should be clearly identified. Each metric should be linked with documentation of who collects the metric, who reports the metric, and who is responsible for initiating any actions that may be taken based on the metric. If a metric is to be used for evaluating progress toward goals, all such stakeholders should be identified and documented.
Metrics should be shared with all stakeholders participating in a project when applicable, including CROs and vendors. Decisions should be made early in the project planning stages concerning which metrics will be collected, who will collect the metrics, and how and when the metrics will be disseminated (e.g., with a common Web site or visualization tool, such as a dashboard, one month after the first patient signs the consent form, etc.).
Metrics results should be communicated to relevant stakeholders clearly and within prescribed timeframes, enabling needed corrective actions to be made in a timely manner.
Evaluating Metrics from Various Sources
Obtaining metrics can be difficult when the parameters required for measurement are found in multiple databases. Even if all of a study’s clinical data reside in a single database, data comprising project metrics may originate from a study database, a project tracking system, a CDMS (clinical data management system), or a system outside CDM altogether. This issue is further compounded when certain complementary metrics, such as the project budget and the status of various CDM processes, are not available for equivalent time frames. However, metrics can be synchronized with other relevant information if they are collected in a timely manner.
Reporting and visualization tools may have the capability to aggregate real-time study data into intuitive views, eliminate the need to integrate databases or re-enter data, and allow for views of complementary data within the same time frame.
Metrics in Different Types of Studies
EDC systems offer the capability to have clinical data and queries available sooner (in real time) than in paper-based studies. Study or subject status indicators such as subject enrollment or visit completion may also be available within the EDC system. The quality and timeliness of metrics improve substantially when they are collected electronically.
In paper-based studies, CDM metrics can be generated electronically only after data are entered into the database or CDMS. Information regarding subject enrollment, visit completion, and other such status indicators can be difficult to obtain in a timely fashion. Teams often rely on each site to report this information (e.g., using paper enrollment logs) and then subsequently re-enter the information into a project-management or project-tracking database.
Metrics Common to EDC and Paper-based Studies
Many metrics common to EDC and paper-based studies relate to overall performance of the project, team, or organization. Because metrics measuring organizational or group performance are not contingent upon the data collection modality used, they are also usually independent of any CDMS or database software. Although there are some exceptions, most well-designed metrics are not dependent on a particular data collection strategy or software package.
Metrics Unique to Paper-based Studies
Data entry is one area in which metrics for paper-based studies may be created. An example is the percentage of data entered relative to the number of completed CRFs received. Another example is performance metrics for data entry personnel (number of forms/patients entered per day, per employee). Paper-based studies will also have metrics related to data clarification forms used for query resolution, which are not needed in EDC studies due to the capability of generating queries electronically.
Some metrics used in paper-based studies may have a different meaning when used in EDC studies. For example, data entry percentage may also be measured in studies using EDC, although in that case it is an indication of site performance.
Metrics Unique to EDC Studies
EDC-specific metrics are often directly associated with the EDC system. Examples include the percent of EDC system downtime or the average number and severity of EDC help desk calls. Another class of unique EDC metrics are those that would be prohibitively expensive to measure in a paper-based study, such as the number of modules pending PI review and signature. For more information about metrics in studies using EDC, see the GCDMP chapter entitled “Electronic Data Capture—Concepts and Study Start-up.”
Importance of Metrics Standardization
Because metrics may be shared between various functional groups or stakeholders, metrics should be based on standard definitions. The need for standardized definitions is amplified if metrics are used for comparisons across studies, projects, or organizations (e.g., benchmarking projects). Communication between various groups using a metric is also enhanced by the use of standard definitions.
For example, “time to database lock” is one of the most frequently cited metrics used in clinical studies. However, this metric may be defined differently within different organizations. Depending on an organization’s definition of this metric, completion of database lock may be considered to occur:
when data are “frozen” and a sponsor accepts data transferred from their CRO (e.g., the database or transferred datasets),
after a QA audit is accepted and it is deemed permissible to break blinding of the study,
multiple times, depending upon SOPs and whether or not a company allows for database “unlocking” to make changes to the database after it was originally locked.
Likewise, the starting point for this metric may be defined by different organizations as any one or more of the following criteria:
the last subject completes the last visit (LPLV),
the last data from the last subject visit are recorded on a paper CRF or entered into an EDC system,
the last CRF is received by the group performing data entry,
the data cleaning activity is deemed completed (i.e., generation of the last query in the database), or
the last query or discrepancy is resolved.
Due to various interpretations of the metric “time to database lock,” all parties could potentially be working in different directions based on their presumption of when database lock occurs and what activities take place at that point. Without a standard definition of this metric, the goal may not be identified or achieved in an efficient and effective fashion. To ensure clarity and efficiency, all functions affected by a metric should be involved in the definition of the metric and made aware of the interpretation of the metric that is to be followed.
If the starting point for “time to database lock” is the date the last subject completes the last visit, the CRA or monitoring group should work with CDM to develop and agree upon definitions and the process used to achieve this milestone. As for the end point, if it is defined as the point at which the blinding of the study is broken, appropriate representatives (e.g., biostatistics, CDM, and personnel responsible for randomization code storage) should work together to understand their respective roles in this process. The data management plan (or other applicable documentation) should be kept current to reflect any decisions that are made regarding metrics to be collected and their definitions.
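As a minimal illustration of why the agreed definition matters operationally, the sketch below computes “time to database lock” from a set of hypothetical milestone dates; changing the agreed start event changes the reported number even though the underlying study is the same.

```python
from datetime import date

# Hypothetical milestone dates from a project-tracking system.
milestones = {
    "last_subject_last_visit": date(2013, 6, 28),  # LPLV
    "last_crf_received":       date(2013, 7, 10),
    "last_query_resolved":     date(2013, 8, 2),
    "database_locked":         date(2013, 8, 9),
}

def days_to_lock(start_event: str) -> int:
    """Calendar days from the agreed start event to database lock."""
    return (milestones["database_locked"] - milestones[start_event]).days

# The same lock date yields very different metric values per definition:
for event in ("last_subject_last_visit", "last_crf_received", "last_query_resolved"):
    print(f"{event}: {days_to_lock(event)} days")
```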
As in other areas of clinical data management where standards are evolving, there is an initiative to develop industry-wide standards led by the not-for-profit Metrics Champion Consortium (MCC).2 Comprised of representatives from biotechnology, pharmaceutical, medical device and service provider organizations, the MCC has the mission of developing performance metrics for the biotechnology and pharmaceutical industries.
Figure 1 shows an example schematic of performance metrics within a clinical study and indicates when specific metrics may be used and the focus of each metric (for example, to evaluate quality or efficiency).
Figure 1. Example Schematic of Performance Metrics within a Clinical Study.
Legend:
| MCC Metric Number | Definition |
|---|---|
| 1 | Number of calendar days from protocol synopsis to protocol approval |
| 2 | Number of versions prior to protocol approval |
| 3 | See Protocol Quality Score System |
| 4 | Contract execution timeliness (non-functional outsourcing models) |
| 5 | See Site Selection Quality Score System |
| 6 | % country regulatory packets approved after first receipt |
| 7 | Timeliness of protocol approval to first site activated [country, region, study] |
| 8 - EDC | Number of calendar days from final approved protocol to final approved eCRF |
| 9 - paper | Number of calendar days from final approved protocol to final approved paper CRF |
| 10 | % monitoring plans completed prior to first site initiated |
| 11 | % planned sites activated |
| 12 - EDC | Number of calendar days from eCRF sign-off to database "go live" |
| 13 - paper | Number of calendar days from sign-off of final paper CRFs to database "go live" |
| 14 | Number of calendar days from site activation to FPFV (patient consented) [site, country, region, study level] |
| 15 | Number of calendar days from event threshold for change order (CO) generation to CO agreed and signed by both Sponsor and CRO |
| 16 | % "On Time" payments of invoices |
| 17 | % actual contract value vs. initial baseline contract value |
| 18 - EDC | Calendar days from patient visit complete to eCRF page entered in EDC system |
| 19 - paper | Calendar days from patient visit complete to CRF page entered in data management system |
| 20 | Monitoring visit frequency compliance |
| 21 | Monitoring visit report completion compliance |
| 22 | Documented monitoring visit report review compliance |
| 23 | ... |
| 24 | % of sites meeting recruitment expectations (protocol specific) [reported by tier level T0 – T4] |
| 25 | % subjects enrolled at point in time vs. target date |
| 26 | % enrolled subjects who remain in the study (did not voluntarily withdraw) |
| 27 - paper | Calendar days from pages received and/or scanned to data entry complete |
| 28 - EDC | Calendar days from time query generated to query response on EDC system |
| 29 - paper | Calendar days from time query generated to query response updated on the DM system |
| 30 | % of drug not used versus planned amount (per patient per country) |
| 31 | % of drug kits available vs. planned |
| 32 | Number of protocol amendments after protocol approved |
| 33 | Number of enrolled subjects with protocol deviations per defined categories |
| 34 | % of active sites closed prior to study closeout |
| 35 | Number of site audit findings that are major and critical |
| 36 | % of critical issues escalated according to project plan |
| 37 - EDC | Number of calendar days from last patient, last visit (LPLV) until database is locked by DM (EDC) |
| 38 - paper | Number of calendar days from last patient, last visit (LPLV) until database is locked by DM (paper CRFs) |
| 39 | Number of calendar days from final database lock (DBL) to final TLGs/TLFs |
| 40 | Number of calendar days from final TLGs/TLFs to first draft clinical study report |
| 41 | Number of calendar days from final DBL to first final approved clinical study report |
| 42 - paper | Final database error rate |
| 43 | Number of calendar days from final TLGs delivered versus target date promised |

MCC Clinical Trial Performance Metrics version 1.0 – Exploratory Metrics

| Exploratory Metric | Definition |
|---|---|
| E1 | Median number of calendar days from contractual milestone to invoice receipt |
| E2 | Schedule Performance Index (SPI): Original contract planned amount of work completed to determine if work is progressing as planned |
| E3 | Schedule Performance Index (SPI): Adjusted contract planned amount of work completed versus work completed to determine if work is progressing as planned |
| E4 | See Site Assessment Quality Score System |
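The MCC legend does not spell out how the SPI in exploratory metrics E2 and E3 is calculated. In conventional earned value management (an assumption here, not an MCC definition), SPI is the ratio of the value of work completed to the value of work planned to date; a minimal sketch:

```python
def schedule_performance_index(earned_value: float, planned_value: float) -> float:
    """Conventional earned-value SPI: work completed (EV) divided by work
    planned to date (PV). SPI > 1.0 suggests ahead of schedule; < 1.0, behind."""
    if planned_value <= 0:
        raise ValueError("planned_value must be positive")
    return earned_value / planned_value

# Example: 40 of 50 planned CRF pages entered by the reporting date.
print(schedule_performance_index(40, 50))  # 0.8 -> progressing slower than planned
```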
Context and Attributes of Metrics
The context in which a metric will be applied should be determined prior to reporting the metric. Each metric’s data source(s), data extraction date, and reporting window should be included with each report. Each metric should also be grouped according to its attributes, which can be described as characteristics of a metric that help stakeholders understand the underlying causes for performance variances.
Some attributes that may be used for grouping include:
therapeutic area,
indication,
study phase,
data collection mode (e.g., EDC, paper, imaging),
study design,
size or complexity factors (e.g., number of sites, number of subjects, number of procedures), or
resourcing model (e.g., CRO, contractors, in-house staff, etc.).
Categorizing and summarizing metrics according to their attributes can result in clearer and more concise metrics reporting, and can minimize the potential for making invalid assessments and generalizations.
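As an illustrative sketch (hypothetical data and column names), grouping a metric by attributes such as phase and data collection mode keeps benchmark comparisons within comparable studies:

```python
import pandas as pd

# Hypothetical per-study extract of one metric; real attribute values
# would come from the study master file or a tracking system.
metrics = pd.DataFrame({
    "study":        ["A", "B", "C", "D", "E", "F"],
    "phase":        ["II", "III", "II", "III", "II", "III"],
    "mode":         ["EDC", "EDC", "paper", "paper", "EDC", "EDC"],
    "days_to_lock": [35, 52, 61, 88, 41, 47],
})

# Summarize within attribute groups so a phase II EDC study is compared
# with its peers rather than with a phase III paper study.
summary = metrics.groupby(["phase", "mode"])["days_to_lock"].agg(["median", "count"])
print(summary)
```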
Defining Time Points for Standardized Metrics Collection
To provide maximum benefit, metrics reports should be available for review as soon as possible. Project and department managers frequently need to gather status information for an ongoing study, including information such as enrollment rates, the number of open queries, or the types of queries that occur most frequently on CRF data. The greatest opportunity to take corrective action occurs when information is timely. The earlier a problem is detected, the sooner it can be addressed. Although details may vary between organizations and studies, Table 1 presents some metrics commonly used during different periods of a study. Although the table groups these metrics by the five core criteria and by three study periods (startup, conduct and closeout), some of these metrics may be applicable to multiple criteria or time periods.
Table 1. Examples of Common Study Metrics
| Criterion | Study Startup | Study Conduct | Study Closeout |
|---|---|---|---|
| Quantity | Number of expected subjects; total number of data fields (may be quantified differently by different organizations) | Amount of data entered; amount of data cleaned; expected amount of entered data compared to data in database | Final number of subjects; number of outstanding queries; missing pages report |
| Cost | Total estimated resources (such as people, licenses, infrastructure, printing, etc.) needed for a study | Number of monitoring visits | Total study costs; average cost per subject enrolled |
| Time | Projected overall study timeline; time needed for protocol/CRF review and finalization; final approved protocol to database activation | Time from subject visit to data available to CDM; time from subject visit to data cleaned and locked | Time from first subject enrolled to last subject visit; time from last subject visit to final database lock; time from final database lock to clinical study report |
| Quality | Systems validation results | Number of queries and re-queries; number of data transfer errors; metrics generated from audit trail | Number of data errors per number of total data fields (error rate; used in paper studies); number of protocol deviations |
| Performance | Number of programmed procedures that validate correctly | Comparison of data entry rates across sites; time from subject visit to data entered; average time for query resolution | Number of database unlocks to correct data errors; number of protocol amendments |
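The error rate in the Quality row of Table 1 is commonly expressed as errors found per fields inspected, often scaled to errors per 10,000 fields (the scaling here is a common convention, not a mandate); a minimal sketch with hypothetical counts:

```python
def error_rate(errors_found: int, fields_inspected: int, per: int = 10_000) -> float:
    """Database error rate: errors per `per` fields inspected."""
    if fields_inspected <= 0:
        raise ValueError("fields_inspected must be positive")
    return errors_found / fields_inspected * per

# Example: a QC audit finds 14 errors while inspecting 70,000 fields.
print(error_rate(14, 70_000))  # 2.0 errors per 10,000 fields
```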
Action Plans: The Feedback Loop
Ultimately, the desired outcome of using metrics is obtained through well-planned and executed processes that include interim assessments and feedback loops. An organization should carefully design the procedures that collect the metrics needed to assess whether a goal has been reached. However, the organization should also carefully design procedures describing the actions that may be taken based on the results of collected metrics.
Useful approaches for the analysis and reporting of metrics include trend analyses, statistical techniques, summary tables, flagging of outliers, identification of unanticipated trends in the data, plots showing incoming data and query rates, and listings of values, such as changes from baseline values.3 Ideally, metrics should be categorized according to their ability to assist in comparing a project’s outcome to the outcomes of other projects inside or outside the organization.
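As one example of the outlier flagging mentioned above, a simple z-score screen can surface sites whose query resolution times warrant root cause analysis. The site data below are hypothetical, and the 1.5 threshold is arbitrary; with few sites, a robust rule (e.g., median absolute deviation) may be preferable.

```python
from statistics import mean, stdev

# Hypothetical median query-resolution time (days) per site.
site_days = {"site_01": 4.1, "site_02": 3.8, "site_03": 4.5,
             "site_04": 11.2, "site_05": 3.9}

mu = mean(site_days.values())
sigma = stdev(site_days.values())

# Flag sites far above the cross-site average (z-score > 1.5).
flagged = {site: days for site, days in site_days.items()
           if sigma > 0 and (days - mu) / sigma > 1.5}
print(flagged)  # {'site_04': 11.2} -> candidate for root cause analysis
```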
Using Metrics to Improve Organizational Efficiency and Effectiveness
Comparing metrics from different projects and studies can help improve the overall efficiency and effectiveness of an organization. If a particular process functioned more effectively and efficiently in a specific project, the organization can try to determine what factors made the process more efficient in that specific project and then try to apply those same factors to other projects. By using metrics to identify areas of strength or weakness within individual projects, an organization can apply lessons learned to projects in the future, thus improving the overall effectiveness and efficiency of the entire organization.
One of the means of ensuring visibility and transparency of metrics across all parties (sponsor, clinical research organization, and vendor) is by creating service level agreements (SLAs) and operational level agreements (OLAs) for those metrics that form the key performance indicators. Routinely reviewing KPIs in governance meetings (strategic and operational) provides an indication of the health of the project and may identify areas needing corrective and preventive actions (CAPA).
Using Metrics to Improve Timeline Efficiencies
Metrics can be used early in a study to identify areas where timeline efficiencies might be improved. For example, if a particular site is not entering data or resolving queries in a similar timeframe as other sites or within the expected timeframe, the root cause can be identified and, if warranted, corrective and preventive actions can be initiated, such as retraining relevant site staff. If particular milestones are not being reached as expected across an entire study, processes and data collection tools can be reevaluated to determine if adjustments could potentially improve timeline efficiencies.
Using Metrics to Improve Operational Efficiencies
Frequently, operational efficiencies can also be improved by initiating corrective actions based on metrics reports. As with timeline efficiencies, identified operational inefficiencies at a particular site (e.g., delay with uploading data from ePRO) can often be improved by retraining relevant site staff. If metrics identify processes that are not working as efficiently as intended across an entire project or study, relevant processes and tools can be carefully examined to determine the most effective corrective actions needed to improve operational performance and efficiency.
Metrics Documentation
The data management plan (DMP) is a tool that can be used to document decisions about the use of metrics for a project (e.g., metrics definitions, the means of collecting metrics, the means of communicating metrics). However, some organizations may choose to document metrics separately from the DMP. Regardless of where they are documented, the metrics used for a project should be defined at the planning and initiation stages of the project.
All key metrics reports and other documents relevant across projects should be referenced in the project documentation, as well as all project assumptions and assertions for establishing particular metrics. If new terms are used or new stakeholders or vendors are involved with a project, establishing and maintaining a project dictionary or glossary may be helpful.
Recommended Standard Operating Procedures
...