Programming issues in oncology trials

Oncology trials are complex: they require a different approach from trials in many other therapeutic areas and can generate significant programming challenges.

Oncology can be associated with fast-developing disease and short survival times. Because many oncology trials are event-driven, timelines and resources are regularly reviewed and updated. Earlier-phase oncology trials include frequent data reviews to assess safety and/or efficacy, which can present challenges in the planning and delivery of programming tasks. Standard programs are often developed so that they can be used across different studies and re-used across multiple deliveries. This helps reduce the programming time on each study and thereby meet the tight timelines typical of the oncology therapeutic area. Close collaboration between programmers, data managers and clinicians is required to ensure data issues are promptly resolved.

Clinical oncology trials are also more complex than those in other therapeutic areas. The design is often more complicated, with additional data collected such as quality-of-life questionnaires and genetic and biomarker data. The analyses performed are often specific to this therapeutic area, so the resulting datasets can be quite complex. Below are some examples of the biggest programming challenges.

Efficacy dataset. The evaluation of efficacy in oncology studies has been standardised over time in many areas and is defined by the Response Evaluation Criteria in Solid Tumours (RECIST) 1.1 guidelines, a set of published rules that define when cancers improve (respond), stay the same (are stable) or worsen (progress). These rules can be used to programmatically derive RECIST responses and create ADaM efficacy datasets, which can then be used to analyse the primary and secondary endpoints (commonly overall survival, progression-free survival, time to progression, response rates, etc.). It can be useful to programmatically derive these responses from the individual tumour sizes, to verify the investigator's opinion of response. This presents various programming challenges: imputation techniques such as "scaling up" using regression are needed to handle missing tumour measurements; an additional clinical review of tumour scans is frequently carried out, resulting in multiple sets of responses that need to be accounted for; and the SDTM datasets involved are mapped from a variety of domains. All of these issues increase the complexity of programming in oncology trials.
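The core target-lesion logic behind a programmatic RECIST derivation can be sketched as follows. This is a deliberately simplified illustration, not a complete implementation: it classifies a single visit from the sum of target-lesion diameters, and ignores non-target lesions, new lesions and response confirmation, all of which a real ADaM derivation must handle. The function name and inputs are hypothetical.

```python
def derive_response(baseline_sum, nadir_sum, current_sum):
    """Classify one visit's target-lesion response under simplified RECIST 1.1 rules.

    All sums are sums of target-lesion diameters in millimetres.
    Simplification: ignores non-target lesions, new lesions and
    confirmation of response, which the published criteria require.
    """
    if current_sum == 0:
        # Complete response: disappearance of all target lesions
        return "CR"
    if nadir_sum == 0:
        # Reappearance of lesions after a prior complete response
        return "PD"
    increase = current_sum - nadir_sum
    # Progressive disease: >=20% increase from the nadir (smallest sum
    # on study) AND an absolute increase of at least 5 mm
    if increase >= 5 and increase / nadir_sum >= 0.20:
        return "PD"
    # Partial response: >=30% decrease from the baseline sum
    if (baseline_sum - current_sum) / baseline_sum >= 0.30:
        return "PR"
    # Otherwise stable disease
    return "SD"
```

Comparing the output of a sketch like this against the investigator-recorded response in the SDTM RS domain is one way to flag visits where the recorded response disagrees with the measurements.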

Lab grading. While most clinical trials report laboratory results based on normal range criteria (is the result lower or higher than what is considered "normal"), oncology trials add a further set of criteria. In an indication such as oncology, where patients are often expected to have abnormal laboratory results, it is important to know how abnormal the result actually is. These lab results are graded using the Common Terminology Criteria for Adverse Events (CTCAE) grading scale, published by the National Cancer Institute (NCI) of the United States.
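A grading derivation of this kind can be sketched as a set of threshold checks. The thresholds below are illustrative, loosely based on the CTCAE anaemia (haemoglobin) criteria; production code must use the protocol-specified CTCAE version, and the highest grades (life-threatening events) generally cannot be derived from the lab value alone.

```python
def grade_anaemia(hgb_g_dl, lln_g_dl):
    """Assign a CTCAE-style grade to a haemoglobin result.

    hgb_g_dl: haemoglobin result in g/dL.
    lln_g_dl: lower limit of normal for the reporting laboratory.
    Thresholds are illustrative only (loosely based on CTCAE v5.0
    anaemia); grade 4 (life-threatening) is a clinical judgement and
    is not derivable from the numeric result alone.
    """
    if hgb_g_dl < 8.0:
        return 3          # severe: below 8.0 g/dL
    if hgb_g_dl < 10.0:
        return 2          # moderate: below 10.0 g/dL
    if hgb_g_dl < lln_g_dl:
        return 1          # mild: below the lab's lower limit of normal
    return 0              # within normal range
```

Note how the grade depends on the laboratory's own reference range as well as fixed thresholds: this is one reason lab grading programs have to carry the normal-range variables alongside the results.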

Data cut-off. It is common practice within oncology to define a data cut-off date for a formal analysis (e.g. survival up to the data cut-off date). Essentially, this means that data up to and including the data cut-off date is cleaned and validated for inclusion in the analysis and reporting step, whereas data after the data cut-off is not. It is possible to programmatically "trim" data after a data cut-off, but this process is not as simple as removing data points after the cut-off date, and the rules can differ from one domain to another.
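The simplest form of trimming is a date filter, sketched below under an assumed cut-off date and record layout (both hypothetical). Even this minimal version hints at why real trimming is domain-specific: an adverse event that starts before the cut-off but is still ongoing after it is usually kept, with its end date censored, rather than dropped outright.

```python
from datetime import date

# Hypothetical data cut-off date for this example
CUTOFF = date(2024, 3, 31)

def trim_records(records, date_key="start"):
    """Keep only records dated on or before the cut-off.

    Simplified sketch: real trimming rules vary by domain. For example,
    an adverse event starting before the cut-off but ending after it is
    typically retained with its end date censored, not removed.
    """
    return [r for r in records if r[date_key] <= CUTOFF]

aes = [
    {"term": "Nausea", "start": date(2024, 2, 1)},    # before cut-off: kept
    {"term": "Fatigue", "start": date(2024, 4, 15)},  # after cut-off: excluded
]
trimmed = trim_records(aes)
```

In practice the trimming logic is usually written once per domain (or parameterised per domain) so that the same cut-off date is applied consistently across all datasets in a delivery.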

Working as a programmer on oncology studies is full of challenges and requires flexibility and adaptability. However, it is also rewarding: it offers the opportunity to work in a therapeutic area with its own specificities and guidelines, and to develop standard programs or macros that can be re-used across many studies.