Oversight indicators & benchmarks for your clinical trial

If your study was headed towards failure, how soon would you want to know?

Imagine knowing your study’s trajectory earlier.
If you saw a decreasing retention rate (and the associated reasons for it), would you rethink on-site versus virtual follow-up?
If sites in a specific region excelled in a certain therapeutic area, would you select more sites nearby?
If you saw an unexpected number of protocol deviations at one site, would you apply more resources to protocol training? 

“If only I knew sooner!” How many times has a clinical trial manager said those words? Many trials operationally fail today because of preventable quality issues—problems that have nothing to do with the safety or efficacy of a potential therapy. Meanwhile, many drugs that eventually receive regulatory approval are marred by delayed timelines and budget overruns.

The solution? Better decisions using data-driven foresight. 

It is no longer sufficient to look only at what happened in the past days, weeks, or months in your trial. Study managers need to proactively identify what quality issues and risks may emerge next. Metrics need to be assessed frequently—and in relation to realistic identified targets—to mitigate risk in real-time. Only then can you take action. 

We’ve identified valuable, predictable patterns within clinical trials after combing through historical data from nearly 100,000 global sites. Historical data about clinical trial operations offer drug companies a sneak peek into the future of any trial—including any looming time-consuming obstacles and budget-busting setbacks. But how do you:

  • Identify outliers within your study? 
  • Set appropriate thresholds for various KRIs? 
  • Make sense of fluctuations across the lifecycle of every study? 

Finding the answers requires industry context to unlock meaningful insights. That’s why we’re sharing what we know about 10 important indicators. In the following pages, you’ll discover why these indicators matter, when they apply, what healthy thresholds are, and how to collect the data. 

Remember: red flags are only helpful if you see them. They have far less value in hindsight. And as more trials embrace decentralized methods, closer oversight becomes more important than ever. You cannot make decisions fast enough if you don’t oversee results in real-time—and with adequate context. Leverage our insights to better predict future outcomes in your study.

All the best in your quest for new cures,

Shiva Memari

Head of Global Solutions

 


 

Monitoring your trial performance

Yesterday’s data can power today’s decisions to make tomorrow’s study successful.

Predicting risk through historical data

In every clinical trial, sponsors and CROs create lists of key performance indicators (KPIs), key risk indicators (KRIs), and key quality indicators (KQIs). These could be related to participant safety, operational performance, compliance, risks, or other areas of concern. Regardless of whether any one indicator is categorized as a KPI, KRI, or KQI (or all of the above!), it should provide critical information about timeliness, quality, cost, or safety.

To mitigate these risks and optimize study performance, we need to take a fresh approach to clinical trial support. This means taking the guesswork out of which indicators to use for a particular study by leveraging insights into the thresholds that apply across various therapeutic areas.

A dynamic view. Snapshot views are not enough to ward off potential setbacks. Monthly and weekly snapshot reports are lagging indicators, already out of date by the time they are consumed. Risk and performance are best managed when they’re measured and assessed often. By knowing the relevant thresholds, study teams can avoid problems if they see indicators trending in the wrong direction in real-time. The expected distribution of indicators over the course of a study provides a deeper understanding, since thresholds are inherently dynamic.

"All indicators are dynamic throughout a study – some indicators are naturally elevated at the beginning of a study and return to steady state as the study progresses. Having an understanding that this occurs coupled with the ability to measure indicators in real-time equips you with the requisite information you need to understand study health and mitigate risks."

– Shiva Memari, Head of Global Solutions

A wider range of quality indicators. Many companies are overly focused on enrollment numbers. But this data alone doesn’t provide enough context. We believe in the importance of tracking additional quality indicators alongside it.

While some indicators apply to every study, others are unique to certain study setups or therapy categories. For example, a decentralized dermatology trial will look different than a rare cancer trial. Prior to study execution, we help study teams set the right KRI thresholds (and quality tolerance limits) by predicting the relative importance of different study risks to their unique study.

A formula for identifying productive sites. Finding the best sites is critical to the success of your clinical trial. Based on historical site-level data, we can forecast enrollment for the duration of the study and provide probabilities of meeting enrollment timelines. But the Lokavant definition of a productive site does not begin and end with enrollment rate. Layering a site’s diversity score along with additional relevant quality indicators (screen failure rate, discontinuation rate, protocol deviation rate, etc.) yields a more holistic assessment of site productivity. And once a study gets underway, we can alert clinical operators to non-compliant sites by modeling site-level quality data and outcomes, informed by analysis from nearly 100,000 global sites.

Configurable data points. No two trials are the same. That’s why we avoid a one-size-fits-all approach, configuring study-specific risks on a case-by-case basis.

Based on a study’s specific protocol, benchmarks can be created to monitor risk around primary endpoints. For example, perhaps the protocol calls for an assessment to be performed within two hours of dosing. By configuring and tracking data points around a study’s primary endpoints (e.g., window constraints, medication compliance, progression-free survival), we design indicators specific to your study. Instead of learning the number of evaluable participants only near the end of a study, we make this information easily accessible and transparent to operational teams as the study progresses.
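As an illustration, a configurable window constraint like the two-hour example above reduces to a simple timestamp check. This is a minimal sketch; the function name, timestamps, and window size are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Illustrative window constraint: the assessment must occur within
# two hours after dosing. The window size is an assumption for this sketch.
ASSESSMENT_WINDOW = timedelta(hours=2)

def assessment_in_window(dose_time: datetime, assessment_time: datetime,
                         window: timedelta = ASSESSMENT_WINDOW) -> bool:
    """True if the assessment occurred after dosing and within the window."""
    return timedelta(0) <= (assessment_time - dose_time) <= window

# Hypothetical visit: dosed at 09:00, assessed at 10:45
print(assessment_in_window(datetime(2024, 3, 5, 9, 0),
                           datetime(2024, 3, 5, 10, 45)))  # True
```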

A study's lifecycle



1 Site activation to first participant first visit (FPFV)

Why it’s important

Site Activation to First Participant First Visit (FPFV) is a leading indicator for site quality and performance. This cycle time is highly correlated with protocol deviation rates and participant enrollment: shorter cycle times are associated with fewer deviations and higher enrollment.

Historical data show that the shorter the time between Site Activation and FPFV, the more patients will ultimately be enrolled. This metric allows study managers to identify high-performing and low-performing sites, and reassign resources to those sites accordingly. In some cases, it may even indicate a need to cut losses early and close an unproductive site.

It also serves as a key risk indicator for inactive sites that may:

  • Deprioritize a study
  • Not have the requisite site personnel
  • Not have access to the target participant population

Lokavant continues to assess this cycle time in our growing data asset and can provide recommendations on exactly when (# days) a site should be closed for inactivity.

This metric's role in site selection

Every study wants to select the correct site. With site activation costing sponsors anywhere from $20,000 to $50,000 per site, it's critical to accurately identify high-performing sites. Otherwise, a less-than-ideal site will cost more in both direct and indirect costs. Based on our findings, sponsors should select sites based heavily on a site's ability to start up and screen quickly.

Data considerations

In short, the shorter the duration, the better the site performance.

This indicator is easy to calculate since both dates required for this calculation are readily available for any clinical trial. As with any metric, the team should align on a single source of truth for each data element. We typically pull Actual Site Activation dates from CTMS and Actual FPFV dates from EDC or IRT.
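A minimal sketch of the calculation, assuming the activation and FPFV dates have already been extracted from those systems; the site IDs, dates, and 90-day flag threshold below are purely illustrative.

```python
from datetime import date

# Hypothetical extracts: Actual Site Activation (CTMS), Actual FPFV (EDC/IRT)
activation_dates = {"SITE-001": date(2024, 1, 15), "SITE-002": date(2024, 2, 1)}
fpfv_dates = {"SITE-001": date(2024, 2, 10), "SITE-002": date(2024, 5, 20)}

# Cycle time in days for every site that has both dates
cycle_days = {site: (fpfv_dates[site] - activation_dates[site]).days
              for site in activation_dates if site in fpfv_dates}

# Flag sites exceeding an illustrative 90-day threshold; the right cut-off
# should come from historical benchmarks for the therapeutic area
slow_sites = [site for site, days in cycle_days.items() if days > 90]
print(cycle_days, slow_sites)  # {'SITE-001': 26, 'SITE-002': 109} ['SITE-002']
```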

A deeper dive into the data

The time a site takes to screen a participant after activation is a stronger predictor of enrollment than how long it has to enroll.

Some may argue that sites conducting studies with a longer enrollment period will be able to enroll more participants, but according to our analysis, this is not necessarily the case.

Numerous factors could lead to a site screening more quickly than another, including:

  • More experience
  • Better understanding of the protocol
  • Proactive pre-screening activities
  • No competing studies
  • Resources (e.g., dedicated site staff)


2 Site activation

Why it's important

Site activation metrics are a leading indicator for both vendor (if outsourced) and enrollment performance. This is a commonly used indicator (second only to Participant Enrollment), but customers can struggle with persistent challenges in aligning on an accurate data source.

Data considerations

A prerequisite for this indicator is having an actively managed site activation plan. The site activation plan should include the number of sites to be activated at a point in time (by week or month) for every participating country (rolled up to study). Study managers should reconcile this plan to baselined study milestones (First Site Activated, 50% Sites Activated, 100% Sites Activated, etc.) and update each time the study milestones are revised.

From a data source perspective, actual and planned site activation dates should be entered into CTMS; if planned dates are not readily captured there, planned volumes can be maintained in an Excel tracker.

CAUTION: Planned Site Activation dates are typically a moving target and are not entered for countries and sites not expected to be activated in the short-term. Always check with the operational team managing site activations to align on the best source for this data. Just because data can be captured in a system does not make it the source for that data.
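Once the team has aligned on a source, the reconciliation itself is straightforward: compare cumulative planned versus actual activations per period. A minimal sketch, with hypothetical monthly counts:

```python
def activation_variance(planned: dict, actual: dict) -> dict:
    """Cumulative planned vs actual site activations by period ('YYYY-MM')."""
    report, cum_planned, cum_actual = {}, 0, 0
    for period in sorted(set(planned) | set(actual)):
        cum_planned += planned.get(period, 0)
        cum_actual += actual.get(period, 0)
        report[period] = {"planned": cum_planned, "actual": cum_actual,
                          "variance": cum_actual - cum_planned}
    return report

# Hypothetical monthly counts from the activation plan and CTMS actuals
plan = {"2024-01": 5, "2024-02": 10, "2024-03": 10}
actuals = {"2024-01": 4, "2024-02": 7, "2024-03": 9}
print(activation_variance(plan, actuals))
# By March: planned 25, actual 20, variance -5 sites behind plan
```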

Start-up times by country

Site start-up times vary widely by country. For example, in the US it's relatively fast to get up and running, while study managers can expect to wait six months or more in China. Understanding this on a more granular level requires data-driven performance insights by country, which is why visibility into regulatory start-up times is important.

Lokavant's competitive analysis provides benchmarking by country, showing where certain types of studies have been performed historically and are being performed currently, and layering this on top of other quality indicators (e.g., discontinuation rate, protocol compliance).

 


3 Participant enrollment

Why it's important

Participant enrollment is a leading indicator for both vendor (if outsourced) and overall study performance. Arguably, this is the most frequently used indicator across companies and trials.

"Historically, study managers focused on enrollment numbers and rates. But to run a successful study, other quality indicators need to be taken into account."

– Shiva Memari, Head of Global Solutions

Data considerations

As with the site activation indicator, having an actively managed participant enrollment plan (at the site, country, and study level) is a prerequisite for this indicator.

The participant enrollment plan should include the number of participants to be enrolled at a point in time (by week or month) for every participating site (rolled up to country and study). This plan should be reconciled to baselined study milestones at the beginning of the study (First Participant Enrolled, 25% Participants Enrolled, 50% Participants Enrolled, 100% Participants Enrolled, etc.) and be updated each time the study milestones are revised.

From a data source perspective, actual enrollment can be mapped from IRT or EDC, and planned enrollment should be available in a CTMS (or Excel tracker).
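Plan-versus-actual reconciliation works the same way as in the site activation sketch above. In addition, a naive linear projection can give an early read on whether the enrollment timeline is at risk. This is a deliberate simplification for illustration; real forecasts should account for site ramp-up and site-level variability.

```python
from datetime import date, timedelta

def projected_enrollment_complete(first_enrolled: date, enrolled_to_date: int,
                                  target: int, as_of: date) -> date:
    """Project the date the enrollment target is met at the current rate."""
    rate = enrolled_to_date / (as_of - first_enrolled).days  # participants/day
    remaining_days = (target - enrolled_to_date) / rate
    return as_of + timedelta(days=round(remaining_days))

# Hypothetical study: 60 of 200 participants enrolled in the first 120 days
print(projected_enrollment_complete(date(2024, 1, 1), 60, 200, date(2024, 4, 30)))
# 2025-02-04: 140 participants remain at 0.5/day, i.e., 280 more days
```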

A deeper dive into the data

Based on our analysis of approximately 7,500 sites (a subset of the Lokavant data compendium), only 17% of sites conducting studies failed to enroll a single patient – but a shocking 42% of those non-enrolling sites failed to even screen a patient.

Enrollment also varies by therapeutic area. Based on our findings, dermatology has the lowest proportion of non-enrolling sites, while one-half of sites in hematology/oncology trials didn't enroll a single patient.

[Figure: Non-enrolling sites by therapeutic area]

 


4 Participant diversity

Why it's important

Participant diversity is a leading indicator of whether sites are enrolling a diverse population. Failure to ensure diversity within clinical trials can lead to results that do not reflect the target population of a given therapeutic under study.

Per the recent requirement to submit a Diversity Action Plan under the Food and Drug Omnibus Reform Act (FDORA), this important indicator helps a sponsor comply with regulations and provides transparency into enrollment of underserved populations.

Data considerations

While the target is company-specific (many will choose to improve diversity from an internal benchmark), Lokavant recommends visualizing requisite demographic information alongside total participant enrollment. In addition, while the new requirement is only for phase 3 trials, Lokavant recommends capturing and visualizing this information in all phases. Arguably, this information is more meaningful in early phase studies than in later phases.

Patient demographics such as date of birth, gender, and race are captured at the time of screening. These data points then need to be monitored in real-time to ensure adequate diversity within the patient pool. Note that while demographic information can be obtained from both IRT and EDC, participant demographic collection restrictions vary by country.
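As a minimal sketch of monitoring the mix alongside enrollment, demographic shares can be compared against plan targets. The categories and target percentages below are invented for illustration; in practice they would come from the sponsor's Diversity Action Plan.

```python
from collections import Counter

def enrollment_mix(participants, key):
    """Share of enrolled participants by a demographic variable."""
    counts = Counter(p[key] for p in participants)
    total = sum(counts.values())
    return {group: round(n / total, 3) for group, n in counts.items()}

# Hypothetical screening records (demographics captured in EDC/IRT)
enrolled = [{"race": "White"}, {"race": "Black"}, {"race": "Asian"},
            {"race": "White"}, {"race": "Black"}]
actual = enrollment_mix(enrolled, "race")
targets = {"White": 0.5, "Black": 0.3, "Asian": 0.2}  # illustrative plan targets
gaps = {group: round(actual.get(group, 0.0) - t, 3) for group, t in targets.items()}
print(actual)  # {'White': 0.4, 'Black': 0.4, 'Asian': 0.2}
print(gaps)    # {'White': -0.1, 'Black': 0.1, 'Asian': 0.0}
```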

 


5 Screen failure rate

Why it's important

Screen failure rate is a leading indicator for site, vendor (if outsourced) and enrollment performance. Continuously measuring and assessing screen failure rate is important not only to ensure sites are screening the right participants but also to ensure that the site, country, and study will meet enrollment targets.

Data considerations

We recommend setting an initial criterion around a minimum number of subjects that must be screened before this KRI begins functioning. For example, a 100% screen failure rate after screening one subject has a very different meaning than after screening ten. Once the initial criterion is reached, the risk score is determined by comparing the number of subjects screen failed to the number of subjects screened, as sketched below.
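A minimal sketch of that logic; the ten-subject minimum and 30% flag threshold are illustrative assumptions, not recommended values.

```python
from typing import Optional

def screen_failure_kri(screened: int, screen_failed: int,
                       min_screened: int = 10,
                       flag_threshold: float = 0.30) -> Optional[dict]:
    """Screen failure rate, active only once enough subjects are screened."""
    if screened < min_screened:
        return None  # KRI not yet active: the rate would not be meaningful
    rate = screen_failed / screened
    return {"rate": round(rate, 3), "flagged": rate > flag_threshold}

print(screen_failure_kri(screened=1, screen_failed=1))    # None (criterion unmet)
print(screen_failure_kri(screened=25, screen_failed=11))  # rate 0.44, flagged
```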

Reasons for screen failure should also be regularly assessed and visualized to determine root causes, and stratified by diversity variables. Excessively high screen failure rates can indicate problematic inclusion/exclusion criteria as well as insufficient training at the site level.

"Continuous review of reasons for screen failure is critical to implement timely mitigation strategies. Patterns across sites and countries should be reviewed to better understand critical patterns."

– Shiva Memari, Head of Global Solutions

 


6 Protocol compliance

Why it's important

Protocol deviation rates are a leading indicator for vendor and site compliance with the protocol. Since protocol deviations require spending more time and money on training and/or retraining site staff, tracking their occurrence is important. While many study managers only count volume, it's important to look at protocol deviations normalized by subject days for a true reflection of what's happening.

Increasingly, incorporating participant feedback on the schedule of assessments prior to protocol finalization is an important method for better understanding the burden on the participant. This assists with feasible protocol design, and mitigates potential protocol deviations related to certain study design elements and visit requirements.

"It's the trend that is more important. While a spike may be expected, a corresponding drop in rate is also expected as the risk is mitigated. As the study progresses, the rates reach a steady state level that requires a different threshold. By having this context offered by historical data, you can adeptly manage your thresholds based on how many days into your study you are."

– Shiva Memari, Head of Global Solutions

Data considerations

While protocol deviation volumes are the easiest to measure (and arguably the most common measurement), enhancing this compliance measure by normalizing for the duration a participant is in the study is critical for inter/intrasite comparisons (as well as across countries and studies). Therefore, we recommend using a subject days calculation since it most accurately reflects the true incidence of protocol deviations. This also makes it easy to compare rates over time, regardless of the volume of participants, sites and countries.

This calculation is done at the participant, site, country, and study level. Tracking the PD rate distribution over time is instrumental as the thresholds evolve throughout the study. All PD severities can be assessed: significant (critical/major) and minor. In addition, having standard PD categories is helpful for comparisons across sites and countries.
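A minimal sketch of the subject-days normalization described above; expressing the rate per 1,000 subject-days is a readability choice, and the dates are hypothetical.

```python
from datetime import date

def subject_days(enrollment_date: date, last_contact: date) -> int:
    """Days a participant has been on study."""
    return (last_contact - enrollment_date).days

def pd_rate(deviation_count: int, total_subject_days: int) -> float:
    """Protocol deviations per 1,000 subject-days."""
    return 1000 * deviation_count / total_subject_days

# Hypothetical site: three participants with different time on study
as_of = date(2024, 4, 10)
total = (subject_days(date(2024, 1, 10), as_of) +
         subject_days(date(2024, 2, 1), as_of) +
         subject_days(date(2024, 3, 15), as_of))  # 91 + 69 + 26 = 186
print(round(pd_rate(4, total), 1))  # 21.5 deviations per 1,000 subject-days
```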

A deeper dive into the data

Major vs minor. Not all protocol deviations are equal. For example, imagine an oncology study. If the site fails to record blood pressure at a patient visit, this will likely qualify as a minor protocol deviation since it's a missed assessment. But if the site fails to obtain imaging that is part of the primary analysis, that would qualify as a major protocol deviation.

Field of study. Protocol compliance can differ based on the field of study and/or the phase. For example, oncology phase II and III trials typically have more protocol deviations than other studies (again, Lokavant uses rates). Therefore, benchmarking against similar studies guides study managers in comparing apples to apples.

Geography. Our data shows that protocol compliance fluctuates based on geography. This could be the result of many different variables – from participant proximity to the investigative site to differences in definitions/training by different vendors.

Time since startup. Protocol compliance is rarely static throughout a study's lifecycle. For example, it's common to see increased protocol deviations at the beginning of a trial as site staff get oriented. Therefore, a single snapshot view could be misleading and prematurely trigger the shutdown of a site.

 


7 Discontinuation rate

Why it's important

Discontinuation rate is a leading indicator for participant safety and study power. From a participant safety perspective, it is critical to regularly assess the reasons for premature discontinuation (participants lost to follow-up are particularly concerning). Losing patients before primary endpoints are collected means all data collected on them thus far is unusable, and the participant is non-evaluable. Persistently high discontinuation rates across sites can compromise the evaluable participant population and ultimately study power.

High discontinuation rates can be an indicator that participants are withdrawing due to:

  • An overly burdensome protocol related to procedures conducted
  • An overly burdensome protocol related to the frequency of required visits
  • Certain adverse events
  • Lack of efficacy

It can also indicate that the site may not be managing/contacting the participants optimally.

Data considerations

As with the screen failure rate, a minimum number of subjects must be discontinued before this KRI begins functioning. Once that criterion is reached, the risk score is determined by comparing the number of subjects discontinued from treatment to the number of subjects enrolled, as sketched below.
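The logic mirrors the screen failure KRI sketch earlier; the three-discontinuation minimum and 15% flag threshold are illustrative assumptions.

```python
from typing import Optional

def discontinuation_kri(enrolled: int, discontinued: int,
                        min_discontinued: int = 3,
                        flag_threshold: float = 0.15) -> Optional[dict]:
    """Discontinuation rate, active only once the minimum criterion is met."""
    if discontinued < min_discontinued:
        return None  # KRI not yet active
    rate = discontinued / enrolled
    return {"rate": round(rate, 3), "flagged": rate > flag_threshold}

print(discontinuation_kri(enrolled=40, discontinued=2))  # None (criterion unmet)
print(discontinuation_kri(enrolled=40, discontinued=8))  # rate 0.2, flagged
```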

Reasons for discontinuation should be assessed to determine root causes, as well as stratified by diversity variables.

 


8 Data entry time

Why it's important

Data entry time is a leading indicator for protocol compliance and participant safety. As central monitoring plays an increasing role in study conduct, the importance of having data entered into the clinical database promptly has never been greater. Without entered data, there can be no risk-based monitoring or central monitoring to ensure data integrity and participant safety.

This indicator also assesses how a vendor (if outsourced) trains site personnel and how frequently they manage site activity.

Data considerations

Data entry time is one of the more common oversight indicators used. It provides great insight into site activity and can indicate if a site is becoming overburdened (or perhaps experiencing site personnel changes) throughout the study. Ideally, the shorter this cycle time is, the better – especially for monitoring participant safety.
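A minimal sketch of the cycle time, assuming visit dates and data entry dates have been extracted from EDC; the dates and the five-day flag threshold are illustrative assumptions.

```python
from datetime import date
from statistics import median

def entry_lag_days(visit_date: date, entry_date: date) -> int:
    """Days between a visit occurring and its data being entered in EDC."""
    return (entry_date - visit_date).days

# Hypothetical (visit date, entry date) pairs for one site
pairs = [(date(2024, 3, 1), date(2024, 3, 3)),
         (date(2024, 3, 8), date(2024, 3, 20)),
         (date(2024, 3, 15), date(2024, 3, 18))]
lags = [entry_lag_days(visit, entry) for visit, entry in pairs]
site_median = median(lags)           # 3 days
print(site_median, site_median > 5)  # flag if above an illustrative 5-day limit
```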

"An oldie, but a goodie. Even with the proliferation of DHTs and other data sources, manually entered data drives so much activity downstream – from basic data cleaning to monitoring participant safety."

– Shiva Memari, Head of Global Solutions

 


9 Adverse event (AE) / Serious adverse event (SAE) rate

Why it's important

Adverse event (AE) and serious adverse event (SAE) rates are leading indicators for participant safety. Analyzing A/SAE incidence across sites and countries provides information on not only IP safety but also any possible reporting variability by site/country. Abnormally high or low A/SAE reporting levels may indicate the need for improved site training.

Data considerations

The main question relating to SAE data is the source to be used. While the pharmacovigilance (PV) database is considered the source for initial and follow-up SAE information, the EDC database can also be used depending on customer requirements. The PV database must be used if additional SAE submission information indicators are to be considered.

In addition, several indicators can be configured for both event rates:

  1. Absolute rate thresholds by site, country and study; and
  2. Country and site standard deviations from the average rate (to analyze variability across sites and countries).

Note that AEs/SAEs can also be stratified by causality and severity.
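The second configuration above can be sketched as a z-score across sites. The rates and the two-standard-deviation cut-off below are illustrative, not recommended values.

```python
from statistics import mean, stdev

def rate_deviations(site_rates: dict) -> dict:
    """Each site's AE rate expressed in standard deviations from the mean."""
    mu, sigma = mean(site_rates.values()), stdev(site_rates.values())
    return {site: round((rate - mu) / sigma, 2) for site, rate in site_rates.items()}

# Hypothetical AE rates (events per subject-month) by site
rates = {"SITE-001": 0.8, "SITE-002": 1.1, "SITE-003": 0.9,
         "SITE-004": 1.0, "SITE-005": 0.95, "SITE-006": 2.6}
z_scores = rate_deviations(rates)
outliers = [s for s, z in z_scores.items() if abs(z) > 2]  # high or low reporters
print(outliers)  # ['SITE-006'], roughly two standard deviations above the mean
```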

 


10 Endpoint compliance

Why it's important

It's critical to regularly assess endpoint compliance to ensure participants are completing the appropriate assessments (in the required protocol window) to be considered evaluable for primary and secondary endpoints.

This indicator also assesses how well a vendor (if outsourced) trains site personnel and how frequently they manage site activity.

Data considerations

Since study endpoints can be captured in many different data sources, a thorough protocol review is needed to determine the right sources; often EDC and/or ePRO can be used. Required data elements can include the study assessment result and the collection date and time, to ensure any assessment window requirements are met. Endpoint compliance should be assessed at the site, country, and study level.
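A minimal sketch of the window check and a site-level roll-up; the window size and the assessment records are hypothetical.

```python
from datetime import datetime, timedelta

def within_window(scheduled: datetime, actual: datetime, window: timedelta) -> bool:
    """True if the assessment was collected within the protocol window."""
    return abs(actual - scheduled) <= window

# Hypothetical endpoint assessments for one site, with an illustrative
# +/- 24-hour protocol window around the scheduled time
window = timedelta(hours=24)
records = [(datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 15)),
           (datetime(2024, 5, 8, 9), datetime(2024, 5, 10, 9)),
           (datetime(2024, 5, 15, 9), datetime(2024, 5, 15, 20))]
compliant = sum(within_window(s, a, window) for s, a in records)
print(f"Site endpoint compliance: {compliant / len(records):.0%}")  # 67%
```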

"When you think of identifying critical data and process for initial risk identification, study endpoints are always a great place to start. So this indicator not only provides transparency into your risk management process and KRI measurement, it proactively provides the sponsor with important valuable participant information – historically only obtained near the end of study during data review."

– Shiva Memari, Head of Global Solutions
