An investigation into unblinded sample size re-estimation methodology in clinical trials
Proposal
6042
Title of Proposed Research
An investigation into unblinded sample size re-estimation methodology in clinical trials
Lead Researcher
Julia Edwards
Affiliation
The University of Sheffield
Funding Source
Potential Conflicts of Interest
Data Sharing Agreement Date
29 November 2019
Lay Summary
Clinical trials aim to assess the effect of healthcare technologies on health outcomes. However, with the cost of clinical trials increasing, and challenges in recruiting and retaining participants, there is a need to improve the efficiency of trials.

Clinical trials require a sample size calculation before the trial starts. These calculations require prior knowledge of the treatment effect and its variability, which must often be estimated. As a result, many studies have insufficient power at the end of the trial, meaning the trial may be unable to detect a difference between the treatments even if one exists.

An adaptive design is a type of trial that allows researchers to look at the data while the trial is ongoing, at one or more pre-specified time points (interim time points), and to modify the remainder of the trial if necessary. One potential modification is a sample size re-estimation (SSR).

Where possible, the treatment allocation is not revealed and the SSR is based on the variability of the observed data (known as blinded SSR). Sometimes, however, it is necessary to reveal the treatment information (known as unblinded SSR). Knowledge of treatment group complicates the design of the trial, because knowing the treatment of patients increases the chance of declaring a treatment difference when in fact no such difference exists (the false positive rate).

One key methodology to counteract this effect is the "promising zone" design for unblinded SSR (Mehta & Pocock). Here, a statistician calculates a measure called conditional power at the interim time point, indicating the projected power at the end of the study given the data observed so far. Conditional power (ranging from 0 to 100%) can fall into one of three zones:
1. 'unfavourable' zone: the interim treatment effect is disappointing, and an increase in sample size is not worthwhile;
2. 'favourable' zone: the interim treatment effect is sufficiently favourable that no increase is necessary to maintain power;
3. 'promising' zone: falling between the unfavourable and favourable zones, the interim effect is not too disappointing, but power is unlikely to be as good as planned at the design stage. If the sample size is increased according to a formula, subject to a maximum possible sample size, power can be maintained.

There are, however, issues with this methodology, with criticisms that it is sub-optimal. One alternative approach recommends a different sample size increase over a larger boundary (Jennison & Turnbull), chosen so that the expected sample size is minimised. Finally, some researchers have proposed a stepwise rule for SSR (Liu & Hu). While this design prevents back-calculation of the interim treatment effect, the stepwise procedure could decrease the overall final power.

The proposed research will investigate three key elements.
(1) The assumption underlying the conditional power calculation: conditional power relies on an assumption about the future treatment effect. Common choices assume the treatment effect originally planned at the design stage, or that the current trend observed in the data so far will continue. It is currently unknown which assumption to use, or whether a better alternative exists.
(2) Operational factors in the planned SSR: elements such as the timing of the interim analysis, the length of time required to collect data, recruitment rates, and the incorporation of additional interim decisions such as stopping the trial early could play a large role in the choice of SSR design. There is currently no recommendation on any of these factors, nor a full account of the impact each could have on interim decisions and final power.
(3) An investigation of SSR rules: including a comparison of rules within the three frameworks (promising zone, Jennison & Turnbull, and stepwise increase), and an investigation of the impact of the decision taken at the interim analysis.
Study Data Provided
[{ "PostingID": 2207, "Title": "GSK-105239", "Description": "Booster Vaccination Study to Assess Immunogenicity & Safety of a Dose of GSK Biologicals' Mencevax™ ACWY & 1/5th of a Dose of Mencevax™ ACWY in Subjects Primed in the DTPW-HBV=HIB-MENAC-TT-011 Study" },{ "PostingID": 2752, "Title": "SANOFI-QID01", "Description": "Immunogenicity and Safety Trial of Quadrivalent Influenza Vaccine Administered by Intradermal Route in Adult Subjects Aged 18 through 64 Years" },{ "PostingID": 4574, "Title": "GSK-201474", "Description": "A Patient Preference Evaluation Study of Fluticasone Furoate Nasal Spray and Mometasone Furoate Nasal Spray in Subjects with Allergic Rhinitis" },{ "PostingID": 19659, "Title": "EISAI-E2007-G000-235", "Description": "A Randomized, Double-blind, Placebo-controlled, Parallel-group Study With an Open-label Extension Phase to Evaluate the Effect of Perampanel (E2007) on Cognition, Growth, Safety, Tolerability, and Pharmacokinetics When Administered as an Adjunctive Therapy in Adolescents (12 to Less Than 18 Years of Age) With Inadequately Controlled Partial-onset Seizures" }]
Statistical Analysis Plan
The proposed research is a comparison of statistical methodologies for trial design and interim analysis decisions only, and addresses statistical rather than clinical questions. For this investigation, the effect measure of interest, the statistical analysis methods, and the planned adjustment for covariates will be the same as in the original study. Only the primary outcome is of interest; no analysis of secondary outcomes will be performed.

Four key designs will be compared:
1. Promising zone (Mehta and Pocock, 2011 [1]);
2. Weighted inverse normal combination tests (Jennison and Turnbull, 2015 [2]);
3. Stepwise sample size increase rule (Liu and Hu, 2016 [3]);
4. Fixed sample size design (the original design of all studies).

Each key design will also examine the effect of incorporating stopping boundaries: efficacy only, futility only, both efficacy and futility, or no stopping boundaries will be compared within each design. Boundaries will be based on conditional power values, with futility bounds investigated at CP = 0.1 and CP = 0.2, and efficacy bounds at CP = 0.9 and CP = 0.95.

Conditional power calculations will use six different assumptions about the future treatment effect: the originally planned treatment effect (H1); the observed current trend; a combination of the originally planned effect and the current trend; the minimum clinically meaningful difference; an 80% confidence limit of the observed trend; and a 90% confidence limit of the observed trend.

Finally, interim analyses will be placed at 25, 40, 50, 60, 75 and 90% of the way through the originally planned trial. Two definitions of "percentage through the trial" are established in the current literature: the percentage of patients recruited, or the percentage of patients with primary outcome data available. Both definitions will be used and compared against each other to investigate their impact on decision making at the interim analysis.

Methodological designs:

Conditional power, the conditional probability of rejecting H0 at the final analysis given the first-stage Wald statistic Z_1 = z_1, is calculated as [1]

$$\mathrm{CP}_{\delta}(z_1, n_2) = 1 - \Phi\!\left\{\frac{z_{\alpha}\sqrt{n} - z_1\sqrt{n_1}}{\sqrt{n - n_1}} - \frac{z_1\sqrt{n - n_1}}{\sqrt{n_1}}\right\},$$

where $\alpha$ denotes the Type I error, $z_{\alpha} = \Phi^{-1}(1-\alpha)$, $\delta$ is the assumed future treatment effect for the remainder of the trial, $n$ is the originally planned final sample size, $n_1$ is the first-stage sample size (recruited prior to the interim analysis), and $n_2$ is the second-stage sample size, such that $n = n_1 + n_2$.

The promising zone is defined as the set

$$\wp = \left\{\mathrm{CP}_{\delta}(z_1, n_2^{*}) : b(z_1, n_2^{*}) \le z_{\alpha}\right\},$$

where

$$b(z_1, n_2^{*}) = (n^{*})^{-1/2}\left\{\sqrt{\frac{n_2^{*}}{n_2}}\,\bigl(z_{\alpha}\sqrt{n} - z_1\sqrt{n_1}\bigr) + z_1\sqrt{n_1}\right\}.$$

If, at the interim analysis, the conditional power falls in the promising zone, the second-stage sample size is increased to $n_2^{*}$, leading to a final sample size $n^{*} = n_1 + n_2^{*}$.
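To illustrate the calculation, the following is a minimal Python sketch of the conditional power formula above under the observed-trend assumption. The function names and the zone cut-offs cp_min and cp_max are illustrative assumptions only; in the promising zone design itself, the zone boundaries follow from b(z_1, n_2*) and the maximum allowed sample size rather than from fixed thresholds.

```python
from scipy.stats import norm

def conditional_power(z1: float, n1: float, n: float, alpha: float = 0.025) -> float:
    """Conditional power under the observed-trend assumption, as in the
    formula above: 1 - Phi{(z_a*sqrt(n) - z1*sqrt(n1)) / sqrt(n - n1)
                           - z1*sqrt(n - n1) / sqrt(n1)}."""
    z_alpha = norm.ppf(1 - alpha)
    n2 = n - n1  # planned second-stage sample size
    arg = (z_alpha * n ** 0.5 - z1 * n1 ** 0.5) / n2 ** 0.5 - z1 * (n2 / n1) ** 0.5
    return 1 - norm.cdf(arg)

def classify_zone(cp: float, cp_min: float = 0.365, cp_max: float = 0.8) -> str:
    """Map interim conditional power to a zone. cp_min and cp_max are
    illustrative placeholders, not values fixed by this plan."""
    if cp < cp_min:
        return "unfavourable"
    if cp >= cp_max:
        return "favourable"
    return "promising"

# Example: interim look halfway through a trial planned for n = 500
cp = conditional_power(z1=1.3, n1=250, n=500)
print(f"CP = {cp:.3f}, zone: {classify_zone(cp)}")  # CP = 0.432, zone: promising
```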
The new second-stage sample size can be calculated as

$$n_2^{*} = \frac{n_1}{z_1^{2}}\left[\frac{z_{\alpha}\sqrt{n} - z_1\sqrt{n_1}}{\sqrt{n - n_1}} + z_{\beta}\right]^{2},$$

where $z_{\beta} = \Phi^{-1}(1-\beta)$. For the restricted design, if $n^{*}$ exceeds a pre-specified maximum $n_{\max}$, $n^{*}$ is replaced by this maximum value.

In the Jennison and Turnbull framework, conditional power calculations are performed across all values of $n^{*}$, and $n^{*}$ is chosen such that the objective function

$$\mathrm{CP}_{\delta}\bigl(z_1, n^{*}(z_1)\bigr) - \gamma\bigl(n^{*}(z_1) - n\bigr)$$

is maximised, where $n^{*}(z_1)$ is the sample size rule for choosing the sample size given $Z_1 = z_1$.

The stepwise design involves a step function of the form

$$\frac{n^{*}}{n} = \sum_{j=1}^{J} r_j \, I(z_1 \in l_j),$$

where $r_j \ge 1$ is the pre-determined sample size increase ratio, or step value, chosen in advance by the investigator and applied when the interim test statistic $z_1$ falls in the interval $l_j = (l_j^{(L)}, l_j^{(U)})$. A sketch of these three re-estimation rules is given after the references below.

Sample size increase: the original data only enrol patients up to the originally planned sample size. Should the interim decision be to increase the number of patients, bootstrap resampling of the remaining patients (those not analysed at the interim stage) will be performed to make up the additional required sample size.

Key outcomes:
- Conditional power at the interim analysis;
- The decision at the interim analysis (stop, continue as planned, or continue with an increased sample size);
- Sample size at the end of the trial compared with the originally planned sample size;
- Number of pipeline patients and accrual rates.

References:
[1] C. Mehta and S. Pocock, "Adaptive increase in sample size when interim results are promising: A practical guide with examples," Stat. Med., vol. 30, no. 28, pp. 3267-3284, Dec. 2011.
[2] C. Jennison and B. W. Turnbull, "Adaptive sample size modification in clinical trials: Start small then ask for more?," Stat. Med., 2015.
[3] Y. Liu and M. Hu, "Testing multiple primary endpoints in clinical trials with sample size adaptation," Pharm. Stat., vol. 15, no. 1, pp. 37-45, 2016.
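As referenced above, the following is a minimal Python sketch of the three re-estimation rules, assuming the observed-trend conditional power formula from this plan. The Jennison and Turnbull choice is approximated here by a simple grid search; the cost parameter gamma, the search grid, and the stepwise intervals and step ratios are illustrative placeholders, not values fixed by the proposal.

```python
import numpy as np
from scipy.stats import norm

def cp_with_n2(z1, n1, n, n2_star, alpha=0.025):
    """Conditional power under the observed trend when the second stage
    is enlarged to n2*; reduces to the formula above when n2* = n - n1.
    The first term keeps the original weights (combination test)."""
    z_a = norm.ppf(1 - alpha)
    a = (z_a * np.sqrt(n) - z1 * np.sqrt(n1)) / np.sqrt(n - n1)
    return 1 - norm.cdf(a - z1 * np.sqrt(n2_star / n1))

def promising_zone_n(z1, n1, n, alpha=0.025, beta=0.1, n_max=750):
    """Mehta-Pocock rule: n2* = (n1/z1^2)[(z_a*sqrt(n) - z1*sqrt(n1))
    / sqrt(n - n1) + z_b]^2, capped at n_max (restricted design)."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    a = (z_a * np.sqrt(n) - z1 * np.sqrt(n1)) / np.sqrt(n - n1)
    n2_star = (n1 / z1 ** 2) * (a + z_b) ** 2
    n_star = n1 + max(n2_star, n - n1)  # never below the planned size
    return int(np.ceil(min(n_star, n_max)))

def jennison_turnbull_n(z1, n1, n, gamma=5e-4, alpha=0.025):
    """Grid-search approximation of the Jennison-Turnbull choice:
    maximise CP(z1, n*) - gamma*(n* - n). gamma and the grid (n to 2n)
    are illustrative assumptions."""
    grid = np.arange(n, 2 * n + 1)
    obj = [cp_with_n2(z1, n1, n, m - n1, alpha) - gamma * (m - n) for m in grid]
    return int(grid[int(np.argmax(obj))])

def stepwise_n(z1, n, intervals=((1.0, 1.5), (1.5, 2.0)), ratios=(1.5, 1.25)):
    """Liu-Hu style step rule: n*/n = sum_j r_j I(z1 in l_j). The
    intervals l_j and step values r_j are placeholder choices."""
    for (lo, hi), r in zip(intervals, ratios):
        if lo <= z1 < hi:
            return int(np.ceil(r * n))
    return n  # z1 outside every step: keep the planned sample size

# Example comparison at an interim with z1 = 1.3, n1 = 250, n = 500
print(promising_zone_n(1.3, 250, 500))    # 750 (hits the n_max cap)
print(jennison_turnbull_n(1.3, 250, 500))
print(stepwise_n(1.3, 500))               # 750 (first step, r = 1.5)
```

In the planned simulations, rules of this kind would be applied at each interim look and the designs compared on the key outcomes listed above.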
Publication Citation
Edwards, Julia (2020) Unblinded Sample Size Re-estimation in Randomised Controlled Trials. PhD thesis, University of Sheffield.
http://etheses.whiterose.ac.uk/28647/