
Smart signal detection - Part two

In a previous blog post, we described how, as part of the Centre for Analytical Excellence, PHASTAR delivered a project on detecting signals from historical clinical trial data and developing models to predict signals in new trials. Over a three-month period, experts from within PHASTAR were given a large volume of historical clinical trial data to digest, from which they generated signals and built predictive models.

The team explored different machine learning methods and demonstrated good performance at predicting an efficacy signal based on baseline data.

The development of a good model* was one essential element of this project; another was ensuring transparency for the clinical team. The clinical team would have little to no familiarity with the different machine learning methods, how they were trained, or how the results could be used. It was therefore important that they could understand how the different models were trained and how they performed, and that they could review any signals generated.

Image of simple user interface
The team developed a simple interactive interface to the results; the aim was to ensure the clinical team could understand the data, the insights, and the predictions.

Data

The interactive interface enabled the clinical team to review the training data: the data used to train the different machine learning models. They were able to review each of the features (variables) used to train the models and understand how each was distributed across the classes (responders and non-responders). This matters when interpreting any results: a machine learning model is only ever as good as the data it is trained on, so an appreciation of that data, and of the features used by the methods, is critical.
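As a rough illustration of this kind of per-class review, the sketch below summarises baseline features by response class with pandas. The feature names and values here are invented for illustration; they are not from the actual study data.

```python
import pandas as pd

# Hypothetical training data: two baseline features plus a binary response class.
# All names and values are illustrative only.
train = pd.DataFrame({
    "age": [54, 61, 47, 66, 58, 43, 70, 52],
    "baseline_score": [3.1, 4.5, 2.8, 5.0, 4.1, 2.5, 5.4, 3.0],
    "response": ["responder", "non-responder", "responder", "non-responder",
                 "non-responder", "responder", "non-responder", "responder"],
})

# Per-class summary of each feature: a quick check of how responders and
# non-responders differ on the data the model will be trained on.
summary = train.groupby("response")[["age", "baseline_score"]].agg(["mean", "std"])
print(summary)
```

A review tool would typically show this as an interactive table or as overlaid distribution plots per feature, but the underlying question is the same: do the classes look different on this feature?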

Insights

The interactive review tool enabled the clinical team to explore insights derived from the training data. For each machine learning method, they could review the cross-validation performance on the training data and see which features were most important to that model's predictions.
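To make the two ideas concrete, the sketch below estimates cross-validation performance and extracts feature importances with scikit-learn. The random-forest model and the synthetic dataset are assumptions for illustration; the blog does not say which methods were used on the project.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the training data (features X, response y).
X, y = make_classification(n_samples=200, n_features=6, n_informative=3,
                           random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)

# Cross-validation performance: how well the model generalises within
# the training data, averaged over 5 folds.
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")

# Feature importances from the fitted model: which features drive predictions.
model.fit(X, y)
for i, importance in enumerate(model.feature_importances_):
    print(f"feature_{i}: {importance:.3f}")
```

Showing both views side by side lets a clinical reviewer weigh a model's headline performance against whether the features it relies on are clinically plausible.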

Predictions

The aim was to build a model based on a large amount of historical clinical trial data to predict an efficacy response. The model can be applied to a new study and predictions made for each subject regarding their efficacy response. The interactive review tool provided the clinical team with the predictions on the new data, together with a detailed feature profile for each selected subject so they could investigate further.
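A minimal sketch of that final step, again using an assumed random-forest model and synthetic data in place of the real historical and new-study datasets: fit on the historical data, then report a predicted response probability and a feature profile for each new subject.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train on synthetic "historical" data (stand-in for the real trial data).
X_hist, y_hist = make_classification(n_samples=300, n_features=4, random_state=1)
model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_hist, y_hist)

# Synthetic "new study" subjects to score.
rng = np.random.default_rng(2)
X_new = rng.normal(size=(5, 4))

# Predicted probability of an efficacy response for each subject.
probs = model.predict_proba(X_new)[:, 1]

# Per-subject feature profile alongside the prediction, so a reviewer can
# see which baseline values sit behind each predicted response.
feature_names = [f"feature_{i}" for i in range(X_new.shape[1])]
for subject, (p, row) in enumerate(zip(probs, X_new), start=1):
    profile = ", ".join(f"{n}={v:.2f}" for n, v in zip(feature_names, row))
    print(f"subject {subject}: predicted response prob {p:.2f} ({profile})")
```

Pairing each prediction with the subject's feature profile is what lets the clinical team investigate further rather than taking a bare score on trust.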

Please get in touch if you would like to find out more or if you think we can support you.

* A good model can mean different things on different projects; in this project, performance and explainability were both important.