Real-world evidence (RWE) in medicine is the clinical evidence regarding the use and potential benefits or risks of a medical product, derived from the analysis of real-world data (RWD). RWD are data collected outside of clinical trials that relate to patient health status and/or the delivery of health care. RWD are routinely collected through a range of digital health sources, for example electronic health records (EHRs), product/disease registries, patient-generated data, medical claims/billing databases, and mobile devices.
Increasing volumes of RWD are being produced following the development of specialist devices and sophisticated data collection techniques. Together with technological advancements in computing power and storage, this creates an opportunity for powerful artificial intelligence (AI) approaches to be applied to these data to generate valuable insights for patient benefit. In the context of drug development, the application of AI to RWD and the subsequent generation of RWE has huge potential, with examples including analysis of patient treatment pathways, prediction of patients' risk of developing disease, and tracking of patient behaviours and adherence.
A common question in clinical research is whether a new method of measurement is equivalent to an established one. As a statistical consultant at PHASTAR, I am seeing an increase in the number of trials where a new artificial intelligence or machine learning diagnostic tool is being compared to either a pre-existing tool or to a clinician. Methodology for the analysis of binary data is well established, but methodology for continuous outcomes is less developed. Here we shall review current methodology and outline some of the common pitfalls. It should be noted that concordance analysis does not guarantee the correctness of a method of measurement; rather, it shows the degree to which different measuring techniques agree with each other. To properly evaluate a new method of measurement, quantities pertaining to the validity of measures, such as sensitivity, specificity, and positive and negative predictive values, should also be considered.
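One widely used summary of agreement for continuous measurements is Lin's concordance correlation coefficient (CCC). As a minimal sketch, using only the Python standard library and illustrative paired readings (the data and function name are hypothetical, not from any real trial):

```python
from statistics import mean

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two methods.

    Measures agreement of paired continuous measurements x and y:
    1 indicates perfect agreement, 0 indicates no agreement.
    Uses n-denominator (population) variances, as in Lin (1989).
    """
    n = len(x)
    mx, my = mean(x), mean(y)
    sx2 = sum((xi - mx) ** 2 for xi in x) / n
    sy2 = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    # Agreement penalizes both poor correlation and location/scale shifts
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Hypothetical paired readings: established method vs new tool
old_tool = [1.0, 2.0, 3.0, 4.0, 5.0]
new_tool = [1.1, 2.1, 2.9, 4.2, 4.8]
print(round(lins_ccc(old_tool, new_tool), 3))
```

Unlike the Pearson correlation, the CCC is reduced by systematic bias between the two methods, which is exactly what matters when asking whether one instrument can replace another.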
At a conference, the results of a clinical trial are presented, demonstrating that a new medicine is efficacious and safe for a particular disease or condition. Patients, understandably, want to know when they will have access to this new treatment. Regulatory approval will be sought and individual countries will need to consider reimbursement. But, before we even start contemplating delivery of the treatment or how much it may cost, we need to ask "efficacious compared to what?"
Usually, clinical trials compare a new treatment to a "standard of care", which may be a medicine currently used in clinical practice or a placebo. Ideally, the new medicine would have been compared to all existing treatments, including those in development. That sounds ridiculous, but, after a clinical trial has been completed, other treatment options may have become available (which may be different in different countries). It would be impractical to design and conduct new randomized clinical trials at this stage to compare every new treatment to all available treatments in such a constantly evolving landscape. Instead, we can employ indirect treatment comparisons to attempt to estimate differences between some of these medicines.
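The simplest anchored indirect comparison is the Bucher method: if one trial compares treatment A to a common comparator C, and another compares B to C, the A-versus-B effect can be estimated as the difference of the two trial effects, with the variances adding because the trials are independent. A minimal sketch with hypothetical numbers (the effect estimates below are illustrative only, not from any real trial):

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Anchored indirect comparison of A vs B via common comparator C.

    d_ac, d_bc: estimated relative treatment effects (e.g. log hazard
    ratios) of A vs C and B vs C from separate randomized trials.
    Returns the indirect A-vs-B estimate and its standard error.
    """
    d_ab = d_ac - d_bc                      # the common arm C cancels out
    se_ab = math.sqrt(se_ac**2 + se_bc**2)  # independent trials: variances add
    return d_ab, se_ab

# Hypothetical log hazard ratios and standard errors
d, se = bucher_indirect(d_ac=-0.30, se_ac=0.10, d_bc=-0.10, se_bc=0.12)
lo, hi = d - 1.96 * se, d + 1.96 * se  # approximate 95% confidence interval
```

The price of avoiding a new head-to-head trial is a key assumption: the two trial populations must be similar enough that the relative effects are transportable between them.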
The possibilities of capitalizing on emerging technology in healthcare are endless. The drive for improved visibility and oversight, faster trial set-up, sharing of real-time data and easier stakeholder collaboration has driven widespread adoption of EDC, eTMF, RTSM and CTMS systems across the clinical trials arena. Although the pharmaceutical industry has been comparatively slow to adopt and embrace new technology and eClinical applications, the pace of change is accelerating. There is a real opportunity to transform clinical trials, making them more pragmatic, patient-centric and efficient by maximizing the potential to access data through electronic health records, mobile applications, and wearable devices.