
Challenges and Opportunities in Cluster Randomized Clinical Trials
This page summarises a panel discussion from the 13th Annual Conference on Statistical Issues in Clinical Trials, focused on Cluster Randomized Clinical Trials (CRTs). Topics include bias from incomplete recruitment and data collection, measurement as an intervention, face validity for trials with a modest number of clusters, and practical vaccine trial designs, with perspectives from panellist Andrew Copas on biases and ethical dilemmas in CRTs.
Presentation Transcript
Cluster Randomized Clinical Trials (CRTs): Challenges and Opportunities
Afternoon panellist: Andrew Copas, Professor of Trials in Global Health, MRC Clinical Trials Unit at UCL
13th Annual Conference on Statistical Issues in Clinical Trials, University of Pennsylvania
Four great talks addressing pragmatic issues in cRCTs
Some themes resonated with me:
- Bias from incomplete recruitment and data collection
- Measurement as an intervention
- Face validity for trials, especially with a modest number of clusters
- Practical vaccine trial design in epidemics
and a few thoughts on possibly missed literature.
Incomplete recruitment and/or data collection
- In cluster RCTs we often aim to measure the impact of an intervention as routine practice, e.g. a change to clinic policy.
- This can be undermined by (i) selectively offering trial participation, (ii) requiring individual consent, and (iii) requiring data collection outside the routine.
- Bias is enhanced by knowledge of intervention status by staff and patients. All feature in a recent trial of mine!
- Open questions (to me) around the ethics of individual consent, what minimum information can be obtained otherwise, and how to assess likely biases.
- For interventions at the individual level, Karla has conducted research on the trade-off in bias between cluster and individual RCTs and the impact on sample size.
Measurement is an intervention
- An important question for longitudinal cluster RCTs, e.g. offering a baseline HIV test to participants, or asking detailed questions about level of alcohol consumption.
- In my work this often influences the decision between a cohort measured repeatedly and a repeated cross-sectional design. In the latter, the baseline measurement won't affect the intervention effect (much).
Face validity for small cluster RCTs
- Important for a trial to be credible and have impact! A special concern for cluster RCTs because the number of clusters is rarely large.
- For me a key aspect is whether the unadjusted effect measure (OR, RR, etc.) is comparable to the adjusted one. Is RR 1.05 (0.80–1.38) alongside aRR 1.35 (1.03–1.77) convincing for a trial?
- Larry highlights the importance of a related aspect: a reassuring Table 1!
- Restricted randomisation can specify exactly the maximum tolerated difference for each factor in Table 1. Or we can select from the allocations with the smallest values of a summary difference metric designed for good overall balance.
Face validity continued
- Larry highlights the potential problems of balancing too closely: pairs of clusters always in the same arm (or always in different arms, i.e. pair-matched).
- Lack of balance also reduces power, but I think only if the imbalance is strong.
- A great Table 1 may require restricting to the most balanced 10% of potential allocations, whilst for power restricting to the best 90% might be fine.
- Should balancing too closely (e.g. to the best 1%) itself be seen as lacking credibility?
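The restricted randomisation idea above can be sketched in a few lines: sample candidate allocations, score each with a summary imbalance metric, and randomise only among the most balanced fraction. This is a minimal illustration, not the speakers' implementation; the cluster covariates, the metric (sum of absolute differences in covariate means), and the 10% threshold are all hypothetical choices.

```python
# Minimal sketch of restricted randomisation for a two-arm cluster RCT.
# Hypothetical inputs: cluster-level covariates and a 10% "most balanced" rule.
import random
import statistics

def imbalance(allocation, covariates):
    """Sum of absolute differences in covariate means between the two arms."""
    total = 0.0
    for cov in covariates:
        arm1 = [x for x, a in zip(cov, allocation) if a == 1]
        arm0 = [x for x, a in zip(cov, allocation) if a == 0]
        total += abs(statistics.mean(arm1) - statistics.mean(arm0))
    return total

def restricted_randomisation(covariates, n_candidates=10000, keep_fraction=0.1, seed=1):
    """Sample equal-arm allocations, keep the most balanced fraction, pick one at random."""
    rng = random.Random(seed)
    k = len(covariates[0])                # number of clusters
    base = [1] * (k // 2) + [0] * (k - k // 2)
    candidates = []
    for _ in range(n_candidates):
        alloc = base[:]
        rng.shuffle(alloc)
        candidates.append((imbalance(alloc, covariates), tuple(alloc)))
    candidates.sort(key=lambda t: t[0])   # most balanced first
    keep = candidates[: max(1, int(n_candidates * keep_fraction))]
    return list(rng.choice(keep)[1])

# Hypothetical example: 10 clusters with two covariates (size, baseline prevalence).
sizes = [120, 90, 200, 150, 80, 60, 170, 110, 95, 130]
prev = [0.10, 0.20, 0.15, 0.05, 0.25, 0.30, 0.12, 0.18, 0.22, 0.08]
alloc = restricted_randomisation([sizes, prev])
print(alloc, sum(alloc))                  # five clusters per arm
```

Raising `keep_fraction` towards 0.9 illustrates the power-versus-credibility trade-off discussed above: looser restriction, more of the randomisation distribution retained.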
Balance in stepped wedge trials
- Balance between arms becomes more difficult when we randomise clusters to one of several sequences.
- In a SWT we may have only one cluster per sequence, so we can't (usefully) look at balance between sequences.
- We can look at balance between the two exposures, of course: think of Table 1!
- Over-balancing means clusters allocated to the same, or "opposing", sequences.
- A summary difference metric may weight cells according to their influence in the analysis.
[Slide diagram: stepped wedge layout, clusters 1–4 by months 1–6]
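Checking "balance between the two exposures" can be made concrete: tabulate which cluster-periods fall under intervention versus control and compare covariate means across the two sets of cells. A minimal sketch under assumed inputs (a standard one-step-per-period wedge, equal cell weights, a hypothetical cluster-size covariate):

```python
# Minimal sketch: exposure balance in a stepped wedge trial.
# Assumes cluster i switches to intervention after period i (one cluster per sequence).

def stepped_wedge_design(n_clusters, n_periods):
    """0/1 exposure matrix: cluster i is under intervention from period i+1 onwards."""
    return [[1 if t > i else 0 for t in range(n_periods)] for i in range(n_clusters)]

def exposure_balance(design, covariate):
    """Difference in covariate mean between intervention and control cells (equal cell weights)."""
    treated, control = [], []
    for i, row in enumerate(design):
        for x in row:
            (treated if x == 1 else control).append(covariate[i])
    return sum(treated) / len(treated) - sum(control) / len(control)

design = stepped_wedge_design(4, 6)    # 4 clusters, 6 monthly periods, as in the slide diagram
sizes = [120, 90, 200, 150]            # hypothetical cluster sizes
print(exposure_balance(design, sizes))
```

Because early-switching clusters contribute more intervention cells, the same covariate values can look imbalanced between exposures even when sequences seem exchangeable; a weighted version of this metric (weighting cells by their influence in the analysis, as the slide suggests) would replace the equal-weight averages.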
Ring vaccine trials
- Recruiting contacts of infected index cases is a very efficient way to identify people at high risk in whom to measure vaccine effectiveness.
- There is then the option of individual randomisation, or cluster randomisation of the contacts of each index case.
- Cluster randomisation targets the total vaccine effect, which is more policy relevant, and also reflects how a vaccine would be rolled out.
- Contacts of index cases have more herd protection than contacts of contacts, which allows some estimation of herd effects?
Extra literature and software
- Multiple imputation in cluster RCTs: e.g. jomo in R (Quartagno & Carpenter, 2020)
- Sample size in multi-level cRCTs: e.g. Teerenstra et al., Clinical Trials 2008
- Sample size calculation with varying cluster size: e.g. Kerry & Bland, Statistics in Medicine 2001; Eldridge et al., Int J Epidemiol 2006
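The varying-cluster-size adjustment cited above has a simple closed form, in the spirit of Eldridge et al. (2006): the design effect becomes DEFF = 1 + ((cv² + 1)·m̄ − 1)·ICC, where m̄ is the mean cluster size and cv its coefficient of variation. A minimal sketch (the example sizes and ICC are hypothetical):

```python
# Minimal sketch of the design effect allowing for varying cluster size,
# DEFF = 1 + ((cv^2 + 1) * m_bar - 1) * icc  (Eldridge et al., 2006 form).
import statistics

def design_effect(cluster_sizes, icc):
    """Inflation factor for the sample size of an individually randomised trial."""
    m_bar = statistics.mean(cluster_sizes)
    cv = statistics.pstdev(cluster_sizes) / m_bar   # coefficient of variation of cluster size
    return 1 + ((cv ** 2 + 1) * m_bar - 1) * icc

sizes = [120, 90, 200, 150, 80, 60, 170, 110, 95, 130]   # hypothetical cluster sizes
print(round(design_effect(sizes, icc=0.02), 2))
```

With equal cluster sizes (cv = 0) this reduces to the familiar 1 + (m̄ − 1)·ICC, so the cv term isolates the extra inflation due to size variation.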
Final thanks
Thank you again to the four speakers!