A quantitative framework to inform extrapolation decisions in children

When developing a new medicine for children, the potential to extrapolate from adult efficacy data is well recognised. Extrapolation can be used to streamline drug development, with the European Medicines Agency (EMA) defining extrapolation as:

“… extending information and conclusions available from studies in one or more subgroups of the patient population (source population) … to make inferences for another subgroup of the population (target population) … thus reducing the amount of, or general need for, additional evidence generation (types of studies, design modifications, number of patients required) needed to reach conclusions.” [1]

In essence, the aim is to reduce the number or size of studies in a population while retaining confidence in the inferences drawn. Examples of extrapolation include extrapolating from historical data to predict drug effects in contemporary patients, from European patients to predict benefits in patients in Asia and, our focus here, from adults to support licensing decisions in children.

A key challenge in extrapolation is determining how similar the source and target populations are, and therefore how much information can be borrowed across them. Extrapolation can be applied to varying degrees. Dunne et al.[2] suggest the following terminology:

  • No extrapolation - full clinical development programme required in the target population.
  • Partial extrapolation - some supporting data needed in the target population.
  • Full / Complete extrapolation - no additional data needed in the target population.

One assumptions-based approach to determining an appropriate level of extrapolation is the U.S. Food and Drug Administration (FDA) paediatric decision tree.[3] In this framework, the level of extrapolation chosen depends on whether certain assumptions are deemed to hold: whether it is reasonable to assume that adults and children will have similar disease progression and similar response to the intervention, and whether exposure-response (E-R) relationships can be considered similar. PHASTAR statistician Ian Wadsworth recently had a paper published in the Journal of the Royal Statistical Society Series A that focuses on quantifying the understanding of, and uncertainty surrounding, this assumption of similar E-R relationships.[4] Here he explains more:

In our framework, we assume we have existing data from one or more studies of adults and adolescents. We want to use these data to quantify our uncertainty about the extrapolation assumption of similar E-R relationships in adults and younger children, and to inform our choice of extrapolation strategy in younger children. We do this in the following steps:

  1. Perform a Bayesian meta-analysis of existing adult and adolescent data to quantify our understanding of key E-R model parameters in the adult and adolescent populations.
  2. Elicit expert opinion on whether differences between E-R relationships in adults and adolescents are representative of differences between adults and younger children.
  3. Adjust the Bayesian meta-analysis results for these elicited biases to quantify our understanding of key E-R model parameters in the unstudied population of younger children.
  4. Derive a prior probability that the extrapolation assumption of similar E-R relationships between adults and younger children holds.

We developed our approach within the context of epilepsy drug development. This area was chosen because there is broad agreement among expert groups that, for focal epilepsies, adult efficacy data can be extrapolated to paediatric patients, with evidence that differences in treatment effects would be quantitative rather than qualitative. Furthermore, particularly in adjunctive therapy trials, a linear model is often used for the E-R relationship, average steady-state trough concentration (Cmin) is often taken as the measure of exposure, and the log-transformed change from baseline in seizure frequency is often used as the pharmacodynamic response. This context gives us a clear example on which to build our framework.
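To make this modelling setup concrete, here is a minimal Python sketch (the paper's analyses were carried out in R; all numbers and the exact coding of the response here are invented for illustration). One common coding of the response is the log of the ratio of on-treatment to baseline seizure rates, predicted linearly from Cmin:

```python
import math

def log_change_from_baseline(baseline_rate, on_treatment_rate):
    """Log-transformed change from baseline in seizure frequency,
    coded here as log(on-treatment rate / baseline rate); negative
    values indicate improvement. This coding is an assumption."""
    return math.log(on_treatment_rate / baseline_rate)

def linear_er(intercept, slope, cmin):
    """Linear E-R model: expected log-change at a given average
    steady-state trough concentration (Cmin)."""
    return intercept + slope * cmin

# Hypothetical patient: seizures fall from 10 to 6 per month.
y = log_change_from_baseline(10, 6)      # about -0.51

# Hypothetical model: higher exposure -> larger seizure reduction.
pred = linear_er(-0.1, -0.05, cmin=8.0)  # -0.5 on the log scale
```

The negative slope encodes the expectation that higher exposure produces a larger seizure reduction.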

First, in our evidence synthesis step, we assume a Bayesian hierarchical model for the data. At the first level, we assume the data from adults and adolescents in each existing study can be modelled linearly with three covariates: a binary age variable, a measure of exposure and an interaction between age and exposure. At the second level, we model the effects of age and of the age-by-exposure interaction in each study as bivariate Normal, giving overall estimates of these effects across the existing studies: the effect of age, γA, and the effect of the age-by-exposure interaction, γI.
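The structure of this two-level model can be illustrated by simulating from it. The sketch below (Python standard library only; every parameter value is hypothetical, not an estimate from the paper) draws one study's effects from the bivariate Normal second level and then generates individual responses from the first-level linear model:

```python
import math
import random

random.seed(1)

# Second level: overall effects and between-study heterogeneity.
# All values are hypothetical illustrations.
gamma_age, gamma_int = 0.05, -0.01        # overall age and age-x-exposure effects
tau_age, tau_int, rho = 0.10, 0.02, 0.3   # between-study SDs and correlation

def draw_study_effects():
    """Draw one study's (age, interaction) effects from the assumed
    bivariate Normal, via a manual Cholesky factorisation."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    age_j = gamma_age + tau_age * z1
    int_j = gamma_int + tau_int * (rho * z1 + math.sqrt(1 - rho**2) * z2)
    return age_j, int_j

def simulate_response(is_adolescent, exposure, age_j, int_j,
                      intercept=-0.1, slope=-0.05, sigma=0.3):
    """First level: linear model in a binary age variable, exposure
    and their interaction, with Normal residual error."""
    mean = (intercept + slope * exposure
            + age_j * is_adolescent + int_j * is_adolescent * exposure)
    return random.gauss(mean, sigma)

# Simulate one small study: adults (0) and adolescents (1) at three exposures.
age_j, int_j = draw_study_effects()
data = [(a, c, simulate_response(a, c, age_j, int_j))
        for a in (0, 1) for c in (2.0, 5.0, 8.0)]
```

In the actual framework these parameters are of course estimated from the existing studies, not fixed; the simulation only shows what the model assumes about how the data arise.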

We assume that a future study of adults and younger children could also be modelled linearly, with corresponding effects of age (βA) and of the interaction between age and exposure (βI). There is no difference between adults and younger children in terms of their E-R relationships if βA = βI = 0. We relate the corresponding effects between populations via additive bias terms δA and δI. These additive biases are what we elicit from expert opinion in order to fully quantify our prior beliefs about βA and βI. To achieve this, we developed an elicitation scheme and an interactive app using the 'Shiny' package for the statistical software R.[5,6] The Shiny app is hosted on GitHub and can be downloaded and trialled interactively.
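Because the bias is additive (so that, for example, βA = γA + δA), combining a Normal summary for γA with a Normal elicited prior for δA is straightforward: the means add and the variances add. A hypothetical sketch, with all numerical values invented, and a Monte Carlo check of the same prior:

```python
import math
import random

random.seed(2)

# Hypothetical posterior summary for the age effect from the
# meta-analysis (gamma_A) and elicited prior for its bias (delta_A);
# both Normal (mean, sd), both invented for illustration.
g_age = (0.05, 0.10)
d_age = (0.02, 0.15)

# Additive relation beta_A = gamma_A + delta_A: for independent
# Normals the means add and the variances add.
beta_mean = g_age[0] + d_age[0]
beta_sd = math.sqrt(g_age[1]**2 + d_age[1]**2)

# Monte Carlo check of the same induced prior.
draws = [random.gauss(*g_age) + random.gauss(*d_age) for _ in range(100_000)]
mc_mean = sum(draws) / len(draws)
```

The same calculation applies to the interaction effect βI with its own bias δI.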

After performing the elicitation, we derive prior distributions for the bias parameters δA and δI. These can then be used to adjust the distributions obtained for γA and γI, using the additive relationship we have assumed, to obtain prior distributions for the parameters βA and βI. We can then use these distributions to calculate our prior confidence in the extrapolation assumption of similar E-R relationships. We define the prior probability that extrapolation holds as the probability that the difference in response between adults and younger children lies within some chosen equivalence bound, both at placebo and at a chosen exposure level, say the exposure at which the expected adult response is 90% of the maximum (EC90).
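This probability is easy to approximate by Monte Carlo once priors for βA and βI are in hand: under the linear model the adult-child difference at exposure c is βA + βI·c, which reduces to βA at placebo. A hedged sketch, with all priors, the EC90 value and the equivalence bound invented for illustration:

```python
import random

random.seed(3)

# Hypothetical Normal priors (mean, sd) for the age effect (beta_A)
# and the age-x-exposure interaction (beta_I) in younger children.
beta_a = (0.02, 0.18)
beta_i = (-0.005, 0.03)

c90 = 8.0       # hypothetical EC90 exposure
margin = 0.3    # hypothetical equivalence bound on the log scale

def extrapolation_prior_prob(n=100_000):
    """P(adult-child difference within the bound at placebo AND at
    EC90), where the difference at exposure c is beta_A + beta_I * c."""
    hits = 0
    for _ in range(n):
        ba = random.gauss(*beta_a)
        bi = random.gauss(*beta_i)
        if abs(ba) <= margin and abs(ba + bi * c90) <= margin:
            hits += 1
    return hits / n

p_extrap = extrapolation_prior_prob()
```

Tightening the equivalence bound, or widening the elicited bias priors, lowers this prior probability, as one would expect.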

This probability can be used as our decision criterion to determine whether extrapolation is appropriate and at what level. A high probability would be evidence in favour of complete extrapolation; conversely, a low probability would be evidence in favour of no extrapolation. A moderately sized probability may suggest feeding the probability into a Bayesian decision-theoretic approach to determine whether additional data are needed in younger children to further confirm the extrapolation assumption.
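Such a decision rule might be sketched as a simple mapping onto the extrapolation levels of Dunne et al.; the cut-offs below are purely illustrative, not recommendations from the paper:

```python
def extrapolation_strategy(prior_prob, low=0.2, high=0.9):
    """Map the prior probability that similar E-R holds to an
    extrapolation level; the low/high thresholds are hypothetical."""
    if prior_prob >= high:
        return "complete extrapolation"
    if prior_prob <= low:
        return "no extrapolation"
    return "partial extrapolation (consider collecting paediatric data)"
```

For example, `extrapolation_strategy(0.95)` returns `"complete extrapolation"`, while an intermediate value such as 0.5 flags the case where a decision-theoretic assessment of additional paediatric data would be worthwhile.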

In summary, our paper proposes a framework for using existing E-R data to quantify our understanding of differences between E-R relationships in adults and younger children.[4] The resulting prior probability is used to inform our decision about the level of extrapolation from adult efficacy data that is appropriate for younger children.


  1. European Medicines Agency (EMA). (2018). Reflection paper on the use of extrapolation in the development of medicines for paediatrics. EMA/189724/2018.
  2. Dunne J., Rodriguez WJ., Murphy MD., Beasley BN., Burckart GJ., Filie JD., ... & Yao LP. (2011). Extrapolation of adult data and other data in pediatric drug-development programs. Pediatrics, 128(5), e1242-e1249.
  3. Food and Drug Administration. (2003). Guidance for industry: exposure-response relationships - study design, data analysis, and regulatory applications.
  4. Wadsworth I., Hampson LV., Jaki T., Sills GJ., Marson AG. & Appleton R. (2020). A quantitative framework to inform extrapolation decisions in children. J. R. Stat. Soc. A, 183(2), 515-534. doi:10.1111/rssa.12532.
  5. Chang W., Cheng J., Allaire JJ., Xie Y. & McPherson J. Shiny: web application framework for R. https://shiny.rstudio.com/
  6. R Core Team (2017). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/