How can we compare medicines from different clinical trials?

At a conference, the results of a clinical trial are presented, showing that a new medicine is efficacious and safe for a particular disease or condition. Patients, understandably, want to know when they will have access to this new treatment. Regulatory approval will be sought and individual countries will need to consider reimbursement. But, before we even start contemplating delivery of the treatment or how much it may cost, we need to ask "efficacious compared to what?"

Usually, clinical trials compare a new treatment to a "standard of care", which may be a medicine currently used in clinical practice or a placebo. Ideally, the new medicine would have been compared to all existing treatments, including those in development. That sounds ridiculous but, by the time a clinical trial has been completed, other treatment options may have become available (and these may differ from country to country). It would be impractical to design and conduct new randomized clinical trials at this stage to compare every new treatment to all available treatments in such a constantly evolving landscape. Instead, we can employ indirect treatment comparisons to attempt to estimate differences between some of these medicines.

Indirect treatment comparisons are sometimes called mixed treatment comparisons (where direct and indirect evidence are combined) or network meta-analyses (reflecting the idea that there may be multiple treatment options whose evidence is linked by common treatment arms). As with meta-analysis, a variety of statistical methods may be applied (frequentist and Bayesian, hierarchical models with fixed or random effects), but all share a similar goal: to combine data (whether individual patient data or reported results on outcomes of interest) to estimate treatment effects, particularly between pairs of treatments for which direct evidence is unavailable. Some comparisons will be based on networks with a large number of placebo-controlled trials of similar design, while others rest on just a few studies, and the decision to include or exclude a study may depend on a number of factors, such as data availability.

There are assumptions of exchangeability (studies are sufficiently similar, clinically and methodologically), homogeneity (the studies within a comparison estimate the same underlying treatment effect) and consistency (direct and indirect evidence are in agreement). As with many statistical methods, these analyses can be complicated by known issues such as imbalances across studies in design or inclusion/exclusion criteria, or the emergence of a clinically important subgroup, and some of these issues can be addressed by fitting more complex statistical models.
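In its simplest form, an indirect comparison contrasts two treatments through a common comparator arm (the adjusted indirect comparison often attributed to Bucher and colleagues): the effect of A versus B is taken as the difference between the A-versus-comparator and B-versus-comparator effects, and the uncertainties of the two direct estimates add together, so the indirect estimate is necessarily less precise than either of them. The short sketch below, written in Python with entirely hypothetical numbers, is a minimal illustration of that arithmetic rather than a full hierarchical model of the kind mentioned above.

```python
import math

def bucher_indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect comparison of A versus B via a common comparator C.

    d_ac, se_ac: effect of A vs C (e.g. a log odds ratio) and its standard error
    d_bc, se_bc: effect of B vs C from a separate trial, and its standard error
    Returns the indirect estimate of A vs B with a 95% confidence interval.
    """
    d_ab = d_ac - d_bc                      # indirect effect, preserving randomization within each trial
    se_ab = math.sqrt(se_ac**2 + se_bc**2)  # variances add, so the indirect estimate is less precise
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Entirely hypothetical log odds ratios from two placebo-controlled trials
effect, se, (low, high) = bucher_indirect_comparison(
    d_ac=-0.60, se_ac=0.20,   # new medicine A vs placebo
    d_bc=-0.35, se_bc=0.25,   # existing medicine B vs placebo
)
print(f"A vs B (indirect): {effect:.2f} (95% CI {low:.2f} to {high:.2f})")
```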

Everyone working in the pharmaceutical industry hopes they are working on the medicine, device or therapy that will improve the health of patients and increase life expectancy. We know the strengths of randomized controlled trials, and also that they are not always a pragmatic way to answer every question about the efficacy and safety of a medicine. Indirect treatment comparisons can provide evidence on differences between treatments that will never be compared directly in a randomized controlled trial, but they need to be conducted with clear exploration of the assumptions, transparency in the model fitting and careful interpretation of the results.