The Office for National Statistics (ONS) has just published its latest release of deaths registered weekly in England and Wales (provisional), for the week ending 3rd April. The headline from this release is that the provisional number of deaths registered in England and Wales in the week ending 3rd April 2020 was 16,387, the highest weekly total since official weekly statistics began 15 years ago. The period in question does not even cover the most recent week, which has seen the highest daily numbers of deaths published by the Department of Health and Social Care. What should we take away from these figures?
Beginning 15th March 2020, study submissions for New Drug Applications (NDAs), Abbreviated New Drug Applications (ANDAs), and certain Biologic License Applications (BLAs) will be required to include Logical Observation Identifiers Names and Codes (LOINC®) as specified by the Food and Drug Administration (FDA) Data Standards Catalog, with some Investigational New Drug applications (INDs) subject to the same requirement from 15th March 2021. These standards will allow for seamless interoperability between clinical data systems.
While the transition may seem daunting at the outset, LOINC provides a rich database for clinicians and researchers to draw from. Once the standard is applied to their data, local or proprietary terms can be easily mapped and transferred to other institutions, allowing for a swift exchange of data.
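As a simple illustration of what such a mapping looks like in practice, the sketch below translates hypothetical local lab-test labels into LOINC codes. The local labels and the mapping table are invented for illustration; the LOINC codes shown are commonly cited examples, but any real mapping should be verified against the official LOINC database.

```python
# A minimal sketch of mapping local, proprietary lab-test names to LOINC codes.
# The local labels below are hypothetical; verify codes against loinc.org.
LOCAL_TO_LOINC = {
    "GLU_SERUM": "2345-7",   # Glucose [Mass/volume] in Serum or Plasma
    "HGB": "718-7",          # Hemoglobin [Mass/volume] in Blood
    "CREAT": "2160-0",       # Creatinine [Mass/volume] in Serum or Plasma
}

def to_loinc(local_term: str) -> str:
    """Return the LOINC code for a local term, or raise if it is unmapped."""
    try:
        return LOCAL_TO_LOINC[local_term]
    except KeyError:
        raise KeyError(f"No LOINC mapping defined for local term {local_term!r}")

print(to_loinc("HGB"))  # 718-7
```

In a real system the mapping table would come from a curated terminology service rather than a hard-coded dictionary, but the principle is the same: once every institution translates its local terms into the shared codes, the data can travel without losing meaning.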
Real world evidence (RWE) in medicine is the clinical evidence regarding the use and potential benefits or risks of a medical product derived from the analysis of real-world data (RWD). RWD are, in effect, data collected outside of a clinical trial that relate to patient health status and/or the delivery of health care. RWD are routinely collected through a variety of digital health sources, for example electronic health records (EHRs), product/disease registries, patient-generated data, medical claims/billing databases, and mobile devices.
Increasing volumes of RWD are being produced following the development of specialist devices and sophisticated data collection techniques. Together with technological advancements in computing power and storage, there is an opportunity for powerful artificial intelligence (AI) approaches to be applied to these data to provide valuable insights for patient benefit. In the context of drug development, the application of AI to RWD and the subsequent generation of RWE has huge potential, with examples including analysis of patient treatment pathways, assessment of patients' risk of developing disease, and tracking of patient behaviour and adherence.
A common question in clinical research is whether a new method of measurement is equivalent to an established one. As a statistical consultant at PHASTAR, I am seeing an increase in the number of trials where a new artificial intelligence or machine learning diagnostic tool is being compared to either a pre-existing tool or to a clinician. Methodology for the analysis of binary data is well established, but methodology for continuous outcomes is less developed. Here we shall review current methodology and outline some of the common pitfalls. It should be noted that concordance analysis doesn't guarantee the correctness of methods of measurement; rather, it shows the degree to which different measuring techniques agree with each other. To properly evaluate a new method of measurement, quantities pertaining to the validity of measures, such as sensitivity, specificity, and positive and negative predictive values, should also be considered.
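To make these quantities concrete, the sketch below computes sensitivity, specificity, and the predictive values from hypothetical confusion-matrix counts, and Lin's concordance correlation coefficient (one common agreement measure for continuous outcomes). All numbers are invented for illustration, not taken from any real trial.

```python
# Validity measures from hypothetical confusion-matrix counts
# (new diagnostic tool vs a reference standard; illustrative numbers only).
tp, fp, fn, tn = 85, 10, 15, 90

sensitivity = tp / (tp + fn)   # P(test positive | truly positive)
specificity = tn / (tn + fp)   # P(test negative | truly negative)
ppv = tp / (tp + fp)           # P(truly positive | test positive)
npv = tn / (tn + fn)           # P(truly negative | test negative)

# Lin's concordance correlation coefficient for continuous measurements:
# CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)
def lins_ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((xi - mx) ** 2 for xi in x) / n
    sy = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

old_method = [10.1, 12.4, 9.8, 14.2, 11.0]   # invented paired readings
new_method = [10.3, 12.1, 10.0, 14.5, 10.8]

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")
print(f"CCC={lins_ccc(old_method, new_method):.3f}")
```

Note that the CCC penalises both poor correlation and systematic bias between the two methods, which is why it is preferred over a plain correlation coefficient for agreement questions: two instruments can be perfectly correlated yet consistently disagree.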
At a conference, the results of a clinical trial are presented, demonstrating that a new medicine is efficacious and safe for a particular disease or condition. Patients, understandably, want to know when they will have access to this new treatment. Regulatory approval will be sought, and individual countries will need to consider reimbursement. But, before we even start contemplating delivery of the treatment or how much it may cost, we need to ask "efficacious compared to what?"
Usually, clinical trials compare a new treatment to a "standard of care", which may be a medicine currently used in clinical practice or a placebo. Ideally, the new medicine would have been compared to all existing treatments, including those in development. That may sound far-fetched, but after a clinical trial has been completed, other treatment options may have become available (and these may differ between countries). It would be impractical to design and conduct new randomized clinical trials at this stage to compare every new treatment to all available treatments in such a constantly evolving landscape. Instead, we can employ indirect treatment comparisons to attempt to estimate differences between some of these medicines.
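The simplest anchored indirect comparison is the Bucher method: if one trial compares A versus a common comparator C and another compares B versus C, the A-versus-B effect is estimated as the difference of the two effects against C, with the variances of the independent estimates adding. The sketch below illustrates this with invented log hazard ratios and standard errors; the numbers are not from any real trial.

```python
import math

# Bucher anchored indirect comparison: A vs B via a common comparator C.
# Hypothetical log hazard ratios and standard errors (illustrative only).
d_ac, se_ac = -0.30, 0.10   # trial 1: treatment A vs C
d_bc, se_bc = -0.10, 0.12   # trial 2: treatment B vs C

d_ab = d_ac - d_bc                        # indirect estimate of A vs B
se_ab = math.sqrt(se_ac**2 + se_bc**2)    # variances of independent estimates add

lower = d_ab - 1.96 * se_ab               # normal-approximation 95% CI
upper = d_ab + 1.96 * se_ab
print(f"A vs B log-HR: {d_ab:.2f} (95% CI {lower:.2f} to {upper:.2f})")
print(f"A vs B HR: {math.exp(d_ab):.2f}")
```

The indirect estimate preserves the randomisation within each trial, but it relies on the trials being similar in the characteristics that modify the treatment effect; when that assumption fails, the comparison can be badly biased.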