Stream 1: Innovation in designing trials
Conventional clinical trials compare two treatment options for a given disease. However, many diseases require multiple different treatments. Adaptive platform trials are a comprehensive approach to identifying the best set of treatments for a disease, while also allowing personalisation of therapies. They evaluate multiple treatment options both concurrently (i.e. the design is multifactorial) and sequentially, introducing new questions as previous ones are answered. To guide such trials, frequent analyses are performed using Bayesian statistical methods. Extensive pre-trial numerical simulations are undertaken and updated during the course of the trial as data accumulate, in order to guide the trial's progress through implementation of stopping rules for benefit, harm, or futility for each treatment. This requires specialist knowledge of the methodology for planning and implementing such trials, and customised development for each specific trial. To date, platform trials conducted in Australia have relied on a US-based statistical consultancy firm, Berry Consulting, to undertake all design and analysis work (AI Berry). We are involved in a variety of adaptive platform trials.
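The stopping-rule machinery described above can be illustrated with a minimal simulation. The sketch below is not any specific platform trial design: the two-arm setting, binary outcome, Beta(1,1) priors, interim batch size, and the 0.99/0.10 stopping thresholds are all illustrative assumptions.

```python
import numpy as np

def simulate_adaptive_trial(p_ctl, p_trt, max_n=500, batch=50,
                            win=0.99, lose=0.10, rng=None):
    """Simulate one two-arm Bayesian adaptive trial with binary outcomes.

    At each interim analysis the posterior probability that the treatment
    arm has the higher response rate is estimated from Beta(1+s, 1+f)
    posteriors; the trial stops early for benefit or futility when that
    probability crosses the (illustrative) thresholds `win` / `lose`.
    """
    if rng is None:
        rng = np.random.default_rng()
    s = np.zeros(2)          # successes per arm (control, treatment)
    n = np.zeros(2)          # patients per arm
    while n.sum() < max_n:
        new = rng.binomial(1, [p_ctl, p_trt], size=(batch // 2, 2)).sum(axis=0)
        s += new
        n += batch // 2
        # Monte Carlo estimate of Pr(p_trt > p_ctl | data)
        draws = rng.beta(1 + s, 1 + n - s, size=(4000, 2))
        pr_benefit = (draws[:, 1] > draws[:, 0]).mean()
        if pr_benefit > win:
            return "benefit", int(n.sum())
        if pr_benefit < lose:
            return "futility", int(n.sum())
    return "inconclusive", int(n.sum())

rng = np.random.default_rng(1)
results = [simulate_adaptive_trial(0.3, 0.5, rng=rng)[0] for _ in range(200)]
print("stopped for benefit in",
      sum(r == "benefit" for r in results), "of 200 simulated trials")
```

A real platform trial extends this skeleton in each direction the paragraph describes: more than two arms, response-adaptive randomisation, and new arms entering as others are dropped, which is why bespoke pre-trial simulation is needed for every design.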
Cluster trials involve randomising clusters of individuals (eg. general practices, hospitals) rather than individuals. An important variant is the cluster crossover trial, in which interventions can be switched back and forth. CI Forbes developed theoretical results and simulation work for the 'simple' two-period, two-treatment (AB|BA) cluster crossover design. This was the design basis for the largest clinical trial ever conducted in the intensive care setting, PEPTIC, which compares two strategies for stress ulcer prophylaxis in critically ill patients and has a target enrolment of 30,000 patients in 50 ICUs across Australia and New Zealand, yet a budget roughly one tenth that of a typical large critical care trial, largely because of the efficiency of the design. However, there has been little development of methodological strategies for optimising more complex clustered designs, which have the potential to address clinical questions previously considered logistically or financially infeasible, such as factorial trials comparing multiple interventions simultaneously.
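As a rough illustration of why the AB|BA cluster crossover design is so efficient, the sketch below compares approximate design effects for a parallel cluster design, 1 + (m−1)ρ, and for a two-period cluster crossover, 1 + (m−1)ρ − mρη, where m is the cluster size per period, ρ the within-period intracluster correlation, and η the correlation of cluster effects across the two periods. These formulas, the z-test sample size approximation, and all parameter values are simplifying assumptions for illustration only, not the project's actual calculations.

```python
import math

def de_parallel_cluster(m, icc):
    """Design effect for a parallel-group cluster randomised trial."""
    return 1 + (m - 1) * icc

def de_cluster_crossover(m, icc, eta):
    """Approximate design effect for a two-period AB|BA cluster crossover
    design; eta is the correlation of cluster effects between periods
    (eta = 1 means cluster effects cancel completely in the
    within-cluster comparison)."""
    return 1 + (m - 1) * icc - m * icc * eta

def n_per_arm(delta, sd, design_effect):
    """Patients per arm for a two-sided z-test at alpha = 0.05 with
    80% power, inflated by the design effect."""
    z_alpha, z_beta = 1.959964, 0.841621
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2
                     * design_effect)

m, icc, eta = 100, 0.05, 0.8   # illustrative values only
print("parallel cluster, n per arm :",
      n_per_arm(0.2, 1.0, de_parallel_cluster(m, icc)))
print("cluster crossover, n per arm:",
      n_per_arm(0.2, 1.0, de_cluster_crossover(m, icc, eta)))
```

When cluster effects are strongly correlated across periods (η near 1), the within-cluster comparison removes most of the between-cluster variation, which is the source of the cost savings exploited by PEPTIC.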
Stepped wedge designs are pragmatic cluster randomised designs that allow all clusters to receive the intervention, suited to settings where interventions cannot feasibly be withdrawn once implemented. In many settings the primary interest is in the time to an event occurring. This project is motivated by the design challenges of future trials of: (i) a nurse-led educational intervention in kidney peritoneal dialysis patients randomised at the treatment facility level (AIs Hawley, Pascoe) and (ii) a general practice-based trial of an artificial intelligence skin cancer diagnosis support system aimed at improving patient outcomes from skin cancer (CI Wolfe). Methods for time-to-event endpoints in stepped wedge trials are so undeveloped that even a method for computing sample size is not currently available.
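For continuous outcomes a closed-form variance (and hence sample size) method does exist, due to Hussey and Hughes (2007), and it is sketched below; it is the absence of any analogue for time-to-event endpoints that motivates this project. The cluster count, variance components, and effect size in the example are illustrative assumptions, and the formula is quoted from memory of the published cross-sectional model.

```python
import math
import numpy as np

def hh_variance(X, sigma2_e, tau2, m):
    """Variance of the treatment effect estimator in a cross-sectional
    stepped wedge design under the Hussey & Hughes (2007) linear mixed
    model with fixed period effects.

    X        : I x T matrix of 0/1 cluster-by-period treatment status
    sigma2_e : individual-level residual variance
    tau2     : between-cluster variance
    m        : subjects measured per cluster per period
    """
    I, T = X.shape
    s2 = sigma2_e / m                  # variance of a cluster-period mean
    U = X.sum()
    W = (X.sum(axis=0) ** 2).sum()     # sum of squared period totals
    V = (X.sum(axis=1) ** 2).sum()     # sum of squared cluster totals
    num = I * s2 * (s2 + T * tau2)
    den = (I * U - W) * s2 + (U ** 2 + I * T * U - T * W - I * V) * tau2
    return num / den

def standard_wedge(I):
    """Classic stepped wedge with I clusters and I+1 periods:
    one cluster crosses to the intervention at each step."""
    X = np.zeros((I, I + 1))
    for i in range(I):
        X[i, i + 1:] = 1
    return X

def power(delta, var):
    """Power of a two-sided Wald test at alpha = 0.05 for effect delta."""
    z = abs(delta) / math.sqrt(var) - 1.959964
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

X = standard_wedge(8)                                # 8 clusters, 9 periods
var = hh_variance(X, sigma2_e=1.0, tau2=0.05, m=20)
print(f"power to detect delta = 0.3: {power(0.3, var):.2f}")
```

For a survival endpoint there is no analogous closed form, because censoring, the time scale of events, and the calendar-time rollout of the wedge interact — which is exactly the gap the project targets.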
Partially clustered data are common in clinical trials: examples include trials that enrol siblings, and trials assessing an intervention delivered in clusters (eg. group exercise sessions in a weight loss intervention trial). The assumptions underlying traditional sample size calculations are violated for partially clustered designs. We have developed and validated new sample size methods for trials involving both independent and paired data and have created a free online calculator. We have also developed and applied sample size methods for trials with clustering solely in the intervention arm when the analysis will be adjusted for the baseline value of the outcome. However, both time-to-event endpoints and varying cluster sizes pose challenges for extensions of this methodology.
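For the simplest version of this problem — a continuous outcome, equal cluster sizes, and clustering in the intervention arm only — a z-test sample size sketch looks as follows: only the intervention-arm variance is inflated by the usual design effect. Parameter values are illustrative, and this is precisely the approximation that unequal cluster sizes or time-to-event outcomes break.

```python
import math

def n_per_arm_partially_clustered(delta, sd, m, icc):
    """Patients per arm (1:1 allocation, two-sided alpha = 0.05, 80% power)
    when outcomes are clustered in the intervention arm only, in clusters
    of size m with intracluster correlation icc, and independent in the
    control arm.  Var(diff) = sd^2 * (de + 1) / n with de = 1 + (m-1)*icc.
    """
    z_alpha, z_beta = 1.959964, 0.841621
    de = 1 + (m - 1) * icc
    return math.ceil((z_alpha + z_beta) ** 2 * sd ** 2 * (de + 1)
                     / delta ** 2)

# eg. group exercise sessions of 8 patients, icc 0.05, detect 0.3 SD
print("n per arm:", n_per_arm_partially_clustered(0.3, 1.0, 8, 0.05))
```

Setting icc = 0 recovers the standard unclustered two-arm formula, which makes the cost of ignoring the partial clustering easy to quantify.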
Sequentially monitored trials that continue to their end without stopping for benefit tend to underestimate the treatment effect; we have recently shown that this problem can be overcome using a conditional approach to estimation. When stopping for futility, the biases are reversed. Multi-arm multi-stage (MAMS) designs are a direct extension of group sequential trials to more than two arms and involve stopping rules for both futility and efficacy, but the estimation properties of such designs are not yet understood.
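A small simulation makes both selection biases concrete. The stage sizes, stopping boundary, and effect size below are arbitrary illustrative choices, and a single constant efficacy boundary is used for simplicity rather than a formal alpha-spending design.

```python
import numpy as np

def group_sequential_estimates(true_delta=0.3, n_per_stage=50, n_stages=4,
                               z_stop=2.5, n_sims=2000, seed=0):
    """Simulate a two-arm group sequential trial on per-patient treatment
    differences ~ N(true_delta, 2), with interim looks after each of the
    first n_stages-1 stages.  Returns the naive effect estimates from
    trials stopped early for benefit and from trials that ran to the end."""
    rng = np.random.default_rng(seed)
    stopped, completed = [], []
    for _ in range(n_sims):
        diffs = rng.normal(true_delta, np.sqrt(2), size=n_stages * n_per_stage)
        for k in range(1, n_stages):            # interim analyses
            n = k * n_per_stage
            mean = diffs[:n].mean()
            if mean / np.sqrt(2 / n) > z_stop:  # cross efficacy boundary
                stopped.append(mean)
                break
        else:                                   # no early stop: full data
            completed.append(diffs.mean())
    return np.array(stopped), np.array(completed)

stopped, completed = group_sequential_estimates()
print(f"true effect 0.30 | stopped early: {stopped.mean():.2f} "
      f"| ran to completion: {completed.mean():.2f}")
```

Trials that cross the boundary do so partly because of favourable noise, so their naive estimates are inflated; conversely, the trials left running to the end are selected for unfavourable noise and underestimate the effect, which is the phenomenon the conditional estimation work addresses.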
Stream 2: Analysis of trial data
Non-compliance in randomised trials: per-protocol analysis using causal methods
The intervention being tested in a trial can often be complied with fully or partially, eg. attending only some of a series of sessions with a health professional. The effect of initial randomisation is estimated by an intention-to-treat analysis. However, there is often secondary interest in the effect in people who would comply fully with the intervention, known as the 'complier average causal effect' (CACE). For example, in ASPREE, a 5-year community-based trial of aspirin in the elderly, compliance was monitored with annual pill counts, but methods to estimate the CACE with complex compliance information and time-to-event outcomes are underdeveloped. Similar issues arise in oncology trials in which treatment switching is allowed, and in nutritional intervention trials in which alternative foods and dietary supplements are readily available.
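In the simplest all-or-nothing compliance setting with a single post-randomisation outcome, the CACE can be estimated with the classical instrumental variable (Wald) ratio, using randomisation as the instrument; the sketch below simulates such a trial (all numbers are invented for illustration). The hard open problems flagged above — graded compliance measured repeatedly over time, and time-to-event outcomes as in ASPREE — are precisely what this simple estimator does not handle.

```python
import numpy as np

def cace_iv(z, d, y):
    """Complier average causal effect via the instrumental variable
    (Wald) estimator: ITT effect on the outcome divided by the ITT
    effect on treatment received, with randomisation z as instrument."""
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    return itt_y / itt_d

# Illustrative trial: 60% of those randomised to treatment comply;
# treatment raises the outcome by 2.0 among compliers; no access to
# treatment in the control arm.
rng = np.random.default_rng(42)
n = 20000
z = rng.integers(0, 2, n)              # randomised assignment
complier = rng.random(n) < 0.6         # latent compliance type
d = z * complier                       # treatment actually received
y = 2.0 * d + rng.normal(0, 1, n)
itt = y[z == 1].mean() - y[z == 0].mean()
print(f"ITT estimate {itt:.2f} (diluted towards 2.0 x 0.6 = 1.2)")
print(f"CACE estimate {cace_iv(z, d, y):.2f} (targets 2.0)")
```

The ITT estimate is diluted by non-compliance, while the Wald ratio rescales it back to the complier subpopulation; extending this logic to pill-count trajectories and survival endpoints is the research question.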
The Docosahexaenoic Acid for the Improvement of Neurodevelopmental Outcome in Preterm Infants (DINO) trial was a blinded RCT of over 500 infants conducted in five Australian hospitals between 2001 and 2007, which determined the importance of docosahexaenoic acid in this vulnerable population (CI Makrides). A key aspect of the study is determining the long-term effects of this intervention, which has required re-consenting of study participants. Only 89% of participants provided data on the primary outcome at 7 years, which raises important questions regarding how best to handle the missing data. Multiple imputation (MI), whereby missing values are imputed multiple times using a regression (imputation) model based on the available data, has become one of the recommended approaches for handling incomplete data problems in the medical literature. However, a number of questions remain unanswered regarding the application of MI to longitudinal follow-up of trials, which is becoming common in practice as researchers seek to capitalise on trial methodology.
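As a toy version of the MI workflow (not the DINO analysis itself): suppose the follow-up outcome y is missing for roughly 11% of participants and is imputed from a baseline covariate x, with parameter uncertainty approximated by bootstrapping the complete cases, and the M completed-data analyses pooled by Rubin's rules. All variable names and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated trial data: baseline score x, follow-up outcome y, ~11% missing
n = 1000
x = rng.normal(0, 1, n)
y = 1.5 + 0.8 * x + rng.normal(0, 1, n)
miss = rng.random(n) < 0.11
y_obs = np.where(miss, np.nan, y)

def impute_once(x, y_obs, rng):
    """Draw one completed dataset: refit the imputation model on a
    bootstrap sample of the complete cases (to propagate parameter
    uncertainty), then impute missing y from x plus residual noise."""
    obs = ~np.isnan(y_obs)
    idx = rng.choice(np.flatnonzero(obs), size=obs.sum(), replace=True)
    X = np.column_stack([np.ones(idx.size), x[idx]])
    beta, res, *_ = np.linalg.lstsq(X, y_obs[idx], rcond=None)
    sigma = np.sqrt(res[0] / (idx.size - 2))
    y_imp = y_obs.copy()
    mis = np.isnan(y_obs)
    y_imp[mis] = beta[0] + beta[1] * x[mis] + rng.normal(0, sigma, mis.sum())
    return y_imp

# M imputations; analysis = mean outcome; pooled with Rubin's rules
M = 20
ests, wvars = [], []
for _ in range(M):
    y_imp = impute_once(x, y_obs, rng)
    ests.append(y_imp.mean())
    wvars.append(y_imp.var(ddof=1) / n)
qbar = np.mean(ests)                      # pooled estimate
b = np.var(ests, ddof=1)                  # between-imputation variance
t = np.mean(wvars) + (1 + 1 / M) * b      # Rubin's total variance
print(f"pooled mean {qbar:.3f}, SE {np.sqrt(t):.3f}")
```

The open questions in the longitudinal setting concern what goes into the imputation model — multiple waves, auxiliary variables, and re-consent processes — rather than the pooling step, which is standard.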
Composite endpoints are common in trials, an example being major adverse cardiac events (MACE), which usually includes cardiac mortality, non-fatal stroke and non-fatal myocardial infarction, and sometimes also angina or revascularisation for progressive coronary artery disease. Composite endpoints raise substantial analytical issues not currently addressed in the literature: (a) an intervention's effect is unlikely to be identical for each event type in the composite, and this can give rise to non-proportional hazards in a time-to-event regression model; (b) standard approaches assume equal weights for the components, but these may be inappropriate when component events differ in clinical consequence, eg. the inclusion of both cardiac mortality and angina in MACE.
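One existing response to the weighting problem in (b) is the win ratio of Pocock and colleagues, which compares every treatment-control pair on the most serious component first. The sketch below is a deliberately simplified unmatched version that assumes complete follow-up (no censoring) and only two components, with invented data.

```python
import numpy as np

def severity(p):
    """Map a patient record (died, death_time, had_mi, mi_time) to a
    tuple that sorts worse outcomes as larger: death beats MI, and an
    earlier event is worse than a later one.  Complete follow-up assumed."""
    died, t_death, had_mi, t_mi = p
    return (died, -t_death if died else 0.0,
            had_mi, -t_mi if had_mi else 0.0)

def win_ratio(trt, ctl):
    """Unmatched win ratio: wins / losses over all treatment-control
    pairs, compared hierarchically via severity(); ties count in neither."""
    wins = losses = 0
    for a in trt:
        sa = severity(a)
        for b in ctl:
            sb = severity(b)
            wins += sa < sb
            losses += sa > sb
    return wins / losses

def simulate_arm(rng, n, p_death, p_mi):
    died = rng.random(n) < p_death
    mi = rng.random(n) < p_mi
    return [(bool(d), rng.uniform(0, 5), bool(m), rng.uniform(0, 5))
            for d, m in zip(died, mi)]

rng = np.random.default_rng(3)
trt = simulate_arm(rng, 200, p_death=0.10, p_mi=0.15)
ctl = simulate_arm(rng, 200, p_death=0.20, p_mi=0.15)
print(f"win ratio: {win_ratio(trt, ctl):.2f}")
```

Because deaths are compared before myocardial infarctions, a mortality difference dominates the statistic even when the less serious components are balanced, which is the prioritisation that equal-weight composite analyses lack.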
In oncology, a critical issue is whether treatment effects on intermediate/surrogate endpoints (such as tumour response or disease progression) are able to predict treatment effect on overall survival. A related issue in cardiovascular disease is assessment of the extent to which treatment effects on novel biomarkers explain the clinical effects of treatment, and hence explain mechanisms of treatment action. Yet there is currently a lack of statistical methodology to address these critical questions.
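A simple starting point for the biomarker question is a Freedman-style "proportion of treatment effect explained": compare the treatment coefficient before and after adjusting for the marker. The sketch below simulates the ideal case in which the marker fully mediates the treatment effect; in practice this measure is known to be unstable, which is part of why better methodology is needed. All data are simulated and all coefficients invented.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5000
z = rng.integers(0, 2, n)                   # randomised treatment
marker = -1.0 * z + rng.normal(0, 1, n)     # treatment lowers the biomarker
y = 0.8 * marker + rng.normal(0, 1, n)      # outcome driven by the marker

def coef(X, y):
    """Ordinary least squares coefficients (first column = intercept)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
beta_total = coef(np.column_stack([ones, z]), y)[1]            # unadjusted
beta_direct = coef(np.column_stack([ones, z, marker]), y)[1]   # adjusted
pte = 1 - beta_direct / beta_total
print(f"total effect {beta_total:.2f}, direct effect {beta_direct:.2f}, "
      f"proportion explained {pte:.2f}")
```

Here the marker-adjusted (direct) effect is near zero, so the proportion explained is near one; with partial mediation, measurement error, or a survival outcome, this simple contrast breaks down, motivating the methodological work described above.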
Working Groups (special interest groups)
- Adaptive Trials Working Group (led by Associate Professor Stephane Heritier)
- Cluster Randomised Trials Working Group (led by Professor Andrew Forbes)
- Multiple Imputation Working Group (led by Associate Professor Katherine Lee)
We welcome enquiries from doctoral candidates interested in undertaking research projects related to the above research topics.