Per-protocol Effect in Randomized Trials

Not all my research is about causal inference from observational data. When randomized trials are available, my collaborators and I use cutting-edge statistical methods to complement the usual intention-to-treat estimates with appropriate estimates of the per-protocol effect, that is, the effect that would have been observed under full adherence to the protocol of the study.

These pieces explain why intention-to-treat effects are insufficient, why per-protocol effects are necessary, and why naive per-protocol analyses are dangerous. I suggest that you read them in the order in which they are listed:

A common argument against per-protocol effects is that we can never estimate them correctly because, after all, a per-protocol analysis is just an observational analysis of randomized trial data and therefore subject to confounding. The classic example used to support this argument comes from an analysis, conducted in 1980, of the placebo arm of the Coronary Drug Project randomized trial: participants who actually took their placebo lived longer than those who didn’t, even after adjustment for multiple risk factors. Because placebo cannot affect mortality, observational analyses of trials must be doomed, right? Ellie Murray and I showed that this argument doesn’t hold when we use 21st century approaches to data analysis:

Part of the reluctance to go beyond intention-to-treat analyses, and to bring (non-naive) per-protocol analyses to the forefront, stems from cultural differences between trialists and epidemiologists. We wrote a dictionary to help translate the main concepts about bias across disciplines:

(Thinking explicitly about biases in randomized trials is also important to avoid silly choices, like using post-progression survival in cancer trials, as we discuss here.)

Just in case we were getting carried away, we asked patients whether they would be interested in going beyond intention-to-treat analyses. The answer was yes:

Estimating per-protocol effects is hard because we generally need to adjust for both pre- and post-randomization prognostic factors, and the latter may themselves be affected by treatment. That means that we need to use Robins’s g-methods, which were designed to handle time-varying post-randomization factors. If you are interested in a description of the application of g-methods to randomized trials, take a look at

  • Toh S, Hernán MA. Causal inference from longitudinal studies with baseline randomization. International Journal of Biostatistics 2008; 4(1): Article 22.
  • Toh S, Hernández-Díaz S, Logan R, Robins JM, Hernán MA. Estimating absolute risks in the presence of nonadherence: an application to a follow-up study with baseline randomization. Epidemiology 2010; 21(4):528-39.
  • Lodi S, Sharma S, Lundgren JD, Phillips AN, Cole SR, Logan R, Agan BK, Babiker A, Klinker H, Chu H, Law M, Neaton JD, Hernán MA, on behalf of the INSIGHT Strategic Timing of AntiRetroviral Treatment (START) study group. The per-protocol effect of immediate vs. deferred antiretroviral treatment initiation. AIDS 2016; 30(17):2659-2663.
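For readers who like to see the mechanics, here is a minimal simulated sketch of the simplest g-method, inverse-probability (IP) weighting, applied to a single post-randomization adherence decision. All variable names and parameter values are hypothetical, and a real analysis would involve time-varying factors and modeled weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

Z = rng.binomial(1, 0.5, n)   # randomized arm
L = rng.binomial(1, 0.3, n)   # post-randomization prognostic factor
# Adherence depends on L, and more strongly so in the treated arm
A = rng.binomial(1, 0.9 - np.where(Z == 1, 0.7, 0.2) * L)
# Outcome: true per-protocol effect is -0.1 by construction
Y = rng.binomial(1, 0.2 + 0.5 * L - 0.1 * Z * A)

# Naive per-protocol analysis: restrict to adherers. Biased, because L
# is a common cause of adherence and the outcome.
naive = Y[(Z == 1) & (A == 1)].mean() - Y[(Z == 0) & (A == 1)].mean()

# IP weighting: weight each adherer by the inverse of the estimated
# probability of adhering given arm and L (nonparametric here).
p_adhere = np.empty(n)
for z in (0, 1):
    for l in (0, 1):
        stratum = (Z == z) & (L == l)
        p_adhere[stratum] = A[stratum].mean()
w = A / p_adhere   # nonadherers get weight 0

ipw = (np.average(Y[Z == 1], weights=w[Z == 1])
       - np.average(Y[Z == 0], weights=w[Z == 0]))
```

In this simulation the naive contrast is noticeably further from the true -0.1 effect than the IP-weighted contrast, which recovers it.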

Sometimes, for point (baseline) interventions, a version of the per-protocol effect can be validly estimated using an alternative method: instrumental variable estimation. Here is an example:

  • Holme Ø, Løberg M, Kalager M, Bretthauer M,  Hernán MA, Aas E, Eide TJ, Skovlund E, Schneede J, Hoff G. Effect of flexible sigmoidoscopy screening on colorectal cancer incidence and mortality: A randomized clinical trial. JAMA 2014; 312(6):606-15.
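The mechanics of the instrumental variable approach for a point intervention can be sketched with the standard Wald estimator: the intention-to-treat contrast divided by the between-arm difference in adherence. The simulation below is entirely hypothetical (it is not the JAMA analysis) and assumes one-sided nonadherence, as in a screening trial where controls cannot be screened:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

U = rng.binomial(1, 0.4, n)                # unmeasured prognostic factor
Z = rng.binomial(1, 0.5, n)                # randomized invitation
A = Z * rng.binomial(1, 0.8 - 0.4 * U, n)  # attendance depends on U
Y = rng.binomial(1, 0.3 + 0.2 * U - 0.1 * A, n)  # effect of attending: -0.1

# Intention-to-treat contrast, diluted by nonattendance
itt = Y[Z == 1].mean() - Y[Z == 0].mean()

# Wald (IV) estimator: under monotonicity and the usual instrumental
# conditions, this identifies the effect among those who would attend
# if and only if invited (the "compliers").
compliance = A[Z == 1].mean() - A[Z == 0].mean()
iv = itt / compliance
```

Note that adjusting for measured covariates would not help here: the factor U driving attendance is unmeasured, which is precisely the situation in which randomization of the invitation can be exploited as an instrument.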

In a more in-depth analysis of the same data, we showed how to obtain upper and lower bounds for the per-protocol effect when one is unwilling to make all the assumptions required for point identification:

  • Swanson SA, Holme Ø, Løberg M, Kalager M, Bretthauer M, Hoff G, Aas E, Hernán MA. Bounding the per-protocol effect in randomized trials: an application to colorectal cancer screening. Trials 2015;16:541.
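A simple version of the bounding idea can be sketched as follows (a worst-case, Manski-style bound for hypothetical data with one-sided nonadherence; the Trials paper derives sharper bounds under additional assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

U = rng.binomial(1, 0.4, n)                # unmeasured prognostic factor
Z = rng.binomial(1, 0.5, n)                # randomized invitation
A = Z * rng.binomial(1, 0.8 - 0.4 * U, n)  # only invitees can attend
Y = rng.binomial(1, 0.3 + 0.2 * U - 0.1 * A, n)  # true effect of attending: -0.1

# Control-arm per-protocol risk is point-identified: everyone adheres
risk0 = Y[Z == 0].mean()

# Invited arm: attenders' outcomes under screening are observed; for
# nonattenders, the outcome they would have had if screened is only
# known to lie between 0 and 1.
z1 = Z == 1
observed = (Y * A)[z1].mean()
p_nonadh = 1 - A[z1].mean()
risk1_low, risk1_high = observed, observed + p_nonadh

lower = risk1_low - risk0
upper = risk1_high - risk0
# The true per-protocol effect (-0.1 by construction) lies in [lower, upper]
```

The width of the interval equals the proportion of nonadherers in the invited arm, which makes explicit how much of the answer is being supplied by assumptions rather than by the data.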

If you want to learn more about our methodological research on instrumental variables, click here.