'Heard' Immunity?

Amateur voices

The coronavirus crisis has been a humbling experience. To accompany the pandemic, we have had an epidemic of statistics. If I thought that after 45 years working as a medical statistician, with more than 30 years working on clinical trials, I had what it took to make sense of them all, I have been disabused: testing, diagnosis and infectious disease modelling are matters in which I have only ever had occasional involvement. Even in clinical trials, there are huge gaps in my experience. I have mainly dealt with therapeutic trials, rarely with prevention trials and never with vaccines. The reader is warned that mine is an amateur opinion.

However, every commentator seems to have an opinion on herd immunity, so why not me? Our critical resources are overwhelmed with testing these opinions, and navigating between the false positives and negatives is not easy. Rumour is common but understanding is rare; much of what is offered by way of advice could therefore be called heard immunity. However, a policy of allowing the epidemic to run its course and infect (say) 80% of the population in order to protect the rest does not seem like much of a strategy, which brings us to the alternative and the subject of vaccines.

Stopping pandemics and stopping trials

There are currently many trials in progress; Hilda Bastian gives a useful account. I shall consider three major placebo-controlled trials of vaccines with infection (or its prevention) as the endpoint (see Table 1). They have remarkable similarities, which is either reassuring, if you think that great minds think alike, or regrettable, if you worry about common-mode failure. The trials are all driven by events (cases of infection) and the maximum number targeted varies from 150 for AstraZeneca (AZ) to 164 for a trial sponsored by BioNTech and run by Pfizer. (I shall refer to this as the Pfizer trial.) All power calculations assume a clinically relevant vaccine efficacy (the reduction in expected infections among the vaccinated, expressed as a proportion of the number expected without vaccination) of 60%, and all seek to ‘prove’ an efficacy of at least 30%.
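
As a back-of-the-envelope check on such power claims (and ignoring the group-sequential structure entirely), one can note that, conditional on the total number of cases, the number falling in the vaccine arm is binomial, with a probability that depends on the true efficacy and the allocation ratio. The minimal sketch below uses the Pfizer-style figures quoted above (164 events, 1:1 allocation, 60% true efficacy, 30% to be ‘proved’); the one-sided 2.5% significance level and the scipy usage are my assumptions, not anything taken from the protocols.

```python
# Rough power check: 164 events, 1:1 randomisation, true vaccine efficacy 60%,
# null hypothesis VE <= 30%, one-sided alpha of 2.5% (my assumption).
# The group-sequential structure and unequal follow-up are ignored,
# so the answer can only be approximate.
from scipy.stats import binom

def case_split_prob(ve, ratio=1.0):
    """Probability that a case occurs in the vaccine arm, given vaccine
    efficacy ve and a ratio:1 (vaccine:placebo) allocation."""
    rr = 1.0 - ve                      # relative risk in the vaccinated
    return ratio * rr / (ratio * rr + 1.0)

events = 164
p_null = case_split_prob(0.30)         # case split if VE were only 30%
p_alt = case_split_prob(0.60)          # case split if VE were 60%

# Largest number of vaccine-arm cases still 'significant' at one-sided 2.5%
crit = binom.ppf(0.025, events, p_null)
if binom.cdf(crit, events, p_null) > 0.025:
    crit -= 1

power = binom.cdf(crit, events, p_alt)
print(f"critical count = {int(crit)}, approximate power = {power:.2f}")
```

On my arithmetic this comes out at roughly 90%, which is at least consistent with the 60% versus 30% planning figures quoted above.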

Table 1 Summary of the three trials. The control group rate for Pfizer is quoted in the protocol per year, but for the other two per six months. For comparability, the Pfizer protocol rate has been divided by two for this table (which, given the rarity of events, is approximately correct). ‘Efficacy for power’ is the vaccine efficacy assumed for the power calculation.

However, to get the required number of events you do need to target a number of subjects, and here there are some differences. Pfizer and Moderna have gone for 1:1 (vaccine to placebo) randomisation, whereas AZ has gone for 2:1. (For the purpose of comparing treatments, given the planning parameters, optimal design would recommend a slight excess in the vaccine group, but not as much as 2:1. Presumably, AZ is thinking of having more subjects for safety assessment.) On the other hand, AZ and Moderna are targeting a total of 30,000 subjects, whereas Pfizer has a larger figure of 43,998. This is partly a function of a lower assumed control group infection rate: Pfizer has (approximately) 0.65% per six months, compared with values of 0.80% and 0.75% for AZ and Moderna respectively.
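
To see how sample size, allocation ratio and assumed control group rate combine to give roughly the targeted number of events, the crude arithmetic below may help. It uses the figures quoted above, ignores dropout, staggered accrual and differing follow-up, and is my own simplification rather than anything taken from the protocols.

```python
# Crude expected-event arithmetic: given a total sample size, an allocation
# ratio (vaccine:placebo), a control-group attack rate over the follow-up
# period and an assumed vaccine efficacy, how many cases should the trial
# expect? Dropout and staggered accrual are ignored, so figures are indicative.

def expected_events(total, ratio, control_rate, ve):
    placebo_n = total / (ratio + 1)
    vaccine_n = total - placebo_n
    return placebo_n * control_rate + vaccine_n * control_rate * (1 - ve)

# Figures as quoted in the text (rates per six months), assumed VE = 60%
for name, total, ratio, rate in [("AZ", 30_000, 2, 0.0080),
                                 ("Moderna", 30_000, 1, 0.0075),
                                 ("Pfizer", 43_998, 1, 0.0065)]:
    print(f"{name:8s} expects about "
          f"{expected_events(total, ratio, rate, 0.60):.0f} cases")
```

The answers do not land exactly on the targeted 150 or 164 events, presumably because the protocols also allow for dropout and non-evaluable subjects, but the sketch does show how Pfizer's lower assumed rate pushes its sample size up.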

Figure 1 Stopping boundaries for the three clinical trials. See text for explanation and note that AZ’s boundary is based on my simple calculations, which may be wrong. The numbers are the planned number of events available at a given look.

All three trials are group sequential, but the biggest difference is in the number of looks. All companies have gone for looks at (nearly) constant intervals in terms of events, which is pretty much the same as in terms of information fraction. However, Pfizer is taking five looks, AZ two and Moderna three. (There is a suggestion by AZ that even if the results are ‘significant’ at the interim look they will not stop.) My attempt at plotting the stopping boundaries for efficacy is given in Figure 1. Pfizer and Moderna specifically express the boundary in terms of stopping efficacy, so that is easy. The boundary AZ are using for significance is based on a spending function that gives a nominal ‘alpha’ of 0.31% at the interim and 4.9% at the end. The plot for AZ represents my back-of-the-envelope attempt to translate this into efficacy and may be wrong. Pfizer also has a futility boundary, which is not shown.
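
For what it is worth, the sketch below shows the sort of back-of-the-envelope translation I had in mind. It tests the observed case split against the 30% efficacy null and treats the quoted AZ levels (0.31% and 4.9%) as one-sided nominal alphas at roughly 75 and 150 events; all of these choices are assumptions of mine and, like the AZ curve in Figure 1, may be wrong.

```python
# Back-of-the-envelope translation of a nominal significance level into a
# vaccine-efficacy stopping boundary. With d cases in total and an r:1
# (vaccine:placebo) allocation, the number of vaccine-arm cases is binomial
# with probability r(1-VE)/(r(1-VE)+1). The 30% efficacy null and the
# one-sided reading of the quoted alphas are my assumptions.
from scipy.stats import binom

def split_prob(ve, ratio):
    """Probability that a case is in the vaccine arm."""
    rr = 1.0 - ve
    return ratio * rr / (ratio * rr + 1.0)

def boundary_efficacy(d, ratio, alpha, null_ve=0.30):
    """Estimated vaccine efficacy at the largest vaccine-arm case count
    that is still significant at the nominal one-sided level alpha."""
    p_null = split_prob(null_ve, ratio)
    k = int(binom.ppf(alpha, d, p_null))
    if binom.cdf(k, d, p_null) > alpha:
        k -= 1
    return 1.0 - k / (ratio * (d - k))   # 1 minus the estimated relative risk

# AZ: 2:1 allocation, looks assumed at roughly 75 and 150 events,
# nominal alphas of 0.31% and 4.9% as quoted in the text
for d, alpha in [(75, 0.0031), (150, 0.049)]:
    print(f"{d:3d} events: stop if estimated VE reaches "
          f"{boundary_efficacy(d, 2, alpha):.0%}")
```

On these assumptions the just-significant efficacy at the final look comes out close to 50%, which is the point made in the next paragraph.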

Since the boundaries of all companies approach 50% as the trials progress, the net result will be that all three trials, if they run to conclusion, will deliver similar crude estimates of vaccine efficacy if just significant. Note that Pfizer is targeting slightly more cases than the other two but looking more frequently, whereas AZ has a less favourable randomisation ratio than the other two for the purpose of comparing the treatments but spends less alpha by the end than Moderna, which has 4.54% by the final look. The AZ and Moderna trials are fully frequentist. The Pfizer protocol, however, promises: “The assessment for the primary analysis will be based on posterior probability using a Bayesian model.” Nevertheless, I think that the practical difference is minimal.

Report card

The trials seem well designed and, clearly, a great deal of thought has gone into them. However, certain details are missing. I can check approximately that the power claims hold up, but I cannot do so exactly, since the full details are absent in each case. Power calculations are not necessary for interpreting a trial, and the statistical analysis plans in preparation will no doubt firm up the intended analyses. Still, it would have been nice to have had more.

However, there are three further points that no statistician should forget. First, as Peter Armitage (I think) once remarked, statisticians tend to overlook that there is an important stage between planning and analysing a trial that is called ‘running the trial’. I have every reason to believe these companies will make an excellent job of running their trials, but a successful conclusion will depend crucially on their doing so, and no statistician can tell from looking at the protocol that they will (although a bad protocol might indicate that they will not).

The second factor is far more important. Successful vaccine development depends on hard work by many brilliant life scientists. The trials are important, but the vaccines are crucial.

The third factor is one no statistician has any excuse for ignoring. It is called ‘luck’.

Fingers crossed.


Declaration of interest

I act as a consultant for the pharmaceutical industry. A full declaration of interests is maintained here.