Integrating Historical Data for Regulatory Considerations

Explore the regulatory considerations for integrating historical data in clinical trials, including Bayesian and Frequentist methods, implications of data differences, and the role of extrapolation. Understand the significance of historical data in powering studies and the connection between Bayesian and Frequentist analyses.

  • Historical Data
  • Regulatory Considerations
  • Bayesian Methods
  • Clinical Trials
  • Extrapolation




Presentation Transcript


  1. Regulatory Considerations for Integrating Historical Data. Andrew Thomson, Office of Biostatistics & Methodology Support, European Medicines Agency (an agency of the European Union). Presented at the PSI Annual Conference, 16 May 2017.

  2. Content of Talk. Motivating example: extrapolation. Bayesian and Frequentist methods: similarities and differences. A proposal to control the Type I error.

  3. Historical Data. The focus here is on data in each arm, either in the form of a prior belief or as specific data from, e.g., another trial or registry data. Example: 50 patients, of whom 15 are considered successes, are to be integrated into the control arm of a clinical trial with 50 patients in control and 100 in active. At the end we have 100 patients' worth of data in each arm. Key questions: What do we do if our control data are not the same as the historical data? What is the Type I error associated with the test? How do we define success?

  4. Extrapolation: motivating example. Extrapolation is becoming important in drug development, drawing on data from other trials or on hypothetical outputs from a modelling and simulation exercise. Assume we are running a randomised trial with some data in each arm. We include historical data either because we can (it is scientifically appropriate to do so) or because we must (the sample size is inadequate). Either way, we use historical data to adequately power studies. The implication is that without these data we would be underpowered at the usual 5% level. In the context of extrapolation, we may not need data at the 5% level to conclude on efficacy.

  5. What if our data do not look the same? A simple, naïve approach pools the historical control data with the trial data: Historic 15/50, then observe 15/50 and 45/100 in the actual trial: compare 30% v 45%, p = 0.04. Historic 15/50, then observe 20/50 and 55/100: compare 35% v 55%, p = 0.006. Historic 15/50, then observe 10/50 and 35/100: compare 25% v 35%, p = 0.16. Yet each time the concurrent data show a 15 percentage-point treatment effect, and each observed control rate is easily within the bounds of sampling error of the historical rate.
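
For concreteness, the three pooled comparisons can be checked in a few lines of Python. This is a reconstruction rather than the presenter's code; it assumes the quoted p-values come from a standard chi-squared test with continuity correction on the pooled 2x2 tables:

```python
# Reconstruction (not the presenter's code): the quoted p-values match a
# 2x2 chi-squared test with continuity correction on the pooled data.
from scipy.stats import chi2_contingency

HISTORIC = 15  # historical control successes out of 50

# (control successes / 50, active successes / 100) observed in the trial
scenarios = [(15, 45), (20, 55), (10, 35)]

for ctrl, act in scenarios:
    pooled = HISTORIC + ctrl  # pooled control successes out of 100
    table = [[pooled, 100 - pooled], [act, 100 - act]]
    chi2, p, dof, expected = chi2_contingency(table)  # Yates by default
    print(f"{pooled}% v {act}%: p = {p:.3f}")
# p ≈ 0.041, 0.007, 0.159, matching the quoted 0.04, 0.006, 0.16 to rounding.
```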

  6. Link to the Bayesian Approach. Use a Beta distribution, with parameters derived from the historical data, as the prior for control, and a non-informative beta(0.01, 0.01) prior for test. We end up with a posterior Beta distribution very similar to our binomial distribution in the frequentist setting, and a simple Bayesian analysis produces an almost identical result. So the problem in the frequentist setting is the same problem as in the Bayesian setting. Bayesians recognise this as a problem, and present solutions.
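
A minimal sketch of the conjugate update described here, assuming (my reading of the slide) that the control prior stacks the historical 15/50 on top of a vague beta(0.01, 0.01):

```python
# Sketch of the conjugate Beta-binomial update; assumes (my reading) that
# the control prior stacks the historical 15/50 on a vague beta(0.01, 0.01).
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)

a_c, b_c = 0.01 + 15, 0.01 + 35   # control prior: historical data built in
a_t, b_t = 0.01, 0.01             # test prior: non-informative

# Update with the concurrent trial data (first scenario of slide 5)
a_c, b_c = a_c + 15, b_c + 35     # 15/50 successes in control
a_t, b_t = a_t + 45, b_t + 55     # 45/100 successes in active

# Posterior probability that the test arm is better, by Monte Carlo
p_c = beta.rvs(a_c, b_c, size=100_000, random_state=rng)
p_t = beta.rvs(a_t, b_t, size=100_000, random_state=rng)
print(f"P(p_t > p_c | data) ≈ {np.mean(p_t > p_c):.3f}")
# A value near 0.98 mirrors the frequentist p ≈ 0.04 (two-sided): the two
# analyses nearly coincide, hence they share the commensurability problem.
```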

  7. Bayesian Solutions. Don't borrow data if the prior and data conflict: test-then-pool; power priors; hierarchical models. Bayesian priors are represented by distributions, from which the Effective Sample Size (ESS) can be calculated: how many patients' worth of data is encapsulated in the prior? In the previous example, the ESS of the non-informative beta(0.01, 0.01) prior is 0.02, i.e. it is very non-informative.
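
For a beta(a, b) prior the ESS is simply a + b, since the prior behaves like a + b pseudo-observations; a two-line illustration (more general priors need dedicated ESS methods, e.g. Morita et al., not sketched here):

```python
# For a Beta(a, b) prior the effective sample size is a + b: the prior acts
# like a + b pseudo-observations. (General priors need dedicated methods,
# e.g. Morita et al.; that machinery is not sketched here.)
def beta_ess(a: float, b: float) -> float:
    """Effective sample size of a Beta(a, b) prior."""
    return a + b

print(beta_ess(0.01, 0.01))  # 0.02 -> essentially nothing is borrowed
print(beta_ess(15, 35))      # 50.0 -> 50 patients' worth of control data
```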

  8. Bayesian approaches: graphical approaches. [Figure from Viele et al., Pharmaceutical Statistics, "Use of historical control data for assessing treatment effects in clinical trials".]

  9. What is the (frequentist) problem? The Type I error needs to be controlled. Plot the true probability of an event in the control arm, assume the null hypothesis is true, and then plot the Type I error to look at the relationship. This is not simulation; it is analytical. The Type I error is controlled if the true underlying belief is the same as the data, so the two are two sides of the same coin. The consequence is that if you control the Type I error you control the problem introduced by a lack of prior-data commensurability, and vice versa.
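
A sketch of what such an analytical calculation might look like for the pooled design of slides 3 and 5. The design details (two-sided 5% chi-squared test with continuity correction) are my assumptions, not the talk's:

```python
# Analytical (not simulated) Type I error under naive pooling. Assumptions,
# mine for illustration: historical 15/50 pooled into the control arm; 50
# control and 100 active patients in the trial; two-sided 5% chi-squared
# test with continuity correction; under the null both arms share rate p.
from scipy.stats import binom, chi2_contingency

def type_one_error(p, hist_succ=15):
    total = 0.0
    for xc in range(51):                      # control successes in the trial
        w_c = binom.pmf(xc, 50, p)
        pooled = hist_succ + xc               # pooled control successes / 100
        for xt in range(101):                 # active successes in the trial
            table = [[pooled, 100 - pooled], [xt, 100 - xt]]
            _, pval, _, _ = chi2_contingency(table)
            if pval < 0.05:
                total += w_c * binom.pmf(xt, 100, p)
    return total

for p in (0.2, 0.3, 0.4, 0.5):
    print(f"true control rate {p}: Type I error = {type_one_error(p):.3f}")
# Near the historical 30% the error stays at or below the nominal 5%;
# as the true rate drifts away from it, the error inflates.
```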

  10. Key Point. Controlling the Type I error and making sure the prior and data are commensurate are the same problem. The implication is that frequentist strategies that control the Type I error deal with the same problem as Bayesian methods that address prior-data commensurability. Viewed in this light, Bayes is not a special way of analysing the data; it is a metric for defining the success or otherwise of the study, based on the data. And if we can dichotomise any given study into successful and not successful, we can (and should) investigate the Type I error and power of Bayesian approaches and compare them to frequentist approaches with a given Type I error.
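
As an illustration of that last point, here is a sketch that dichotomises a Bayesian analysis into success or failure and then measures its frequentist Type I error by simulation. The success rule (posterior P(p_t > p_c) > 0.975) and the priors are my illustrative choices, not taken from the talk:

```python
# Dichotomising a Bayesian analysis and measuring its frequentist Type I
# error by simulation. The success rule (posterior P(p_t > p_c) > 0.975)
# and the priors are illustrative choices, not taken from the talk.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(7)

def bayes_success(xc, xt, m=4000):
    p_c = beta.rvs(15.01 + xc, 35.01 + 50 - xc, size=m, random_state=rng)
    p_t = beta.rvs(0.01 + xt, 0.01 + 100 - xt, size=m, random_state=rng)
    return np.mean(p_t > p_c) > 0.975

def type_one_error_sim(p_true, n_sim=2000):
    xc = rng.binomial(50, p_true, size=n_sim)    # control arm, null true
    xt = rng.binomial(100, p_true, size=n_sim)   # test arm, same rate
    return np.mean([bayes_success(c, t) for c, t in zip(xc, xt)])

for p in (0.3, 0.5):
    print(f"true rate {p}: Type I error ≈ {type_one_error_sim(p):.3f}")
# Commensurate case (p = 0.3): error near the nominal 2.5% one-sided.
# Incommensurate case (p = 0.5): the borrowed prior drags the control
# posterior down and the error inflates, exactly the problem at issue.
```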

  11. Frequentist solutions. The obvious frequentist solution is simply to increase the Type I error. We do not formally include the historical data in the analysis, but accept that we are not starting from the usual de novo start of Phase III, either by choice or by necessity. How we choose, and justify, the higher Type I error rate is the key question that needs to be answered. Can we use Bayesian thinking to help us?

  12. What happens currently? Traditional development: the regulator defines alpha, the company chooses the power, and the sample size is a consequence of these. More challenging development, e.g. paediatrics with a limited patient population: the regulator and company agree on power, sample size and alpha between them. Can we find a way to optimise the stages in this last example? Do we agree on n, based on alpha and beta? Do we agree on alpha, based on beta and n? Do we agree on beta, based on alpha and n?

  13. Moving from traditional to other developments. If we were in the traditional framework, what would we see? n = (z_α/2 + z_β)² (p1(1−p1) + p2(1−p2)) / (p1 − p2)². If we now decide we are able to, or are required to, adjust alpha, and we think we have a certain amount of data we can borrow (n_b), we can use this standard formula as follows: calculate n with z_α/2 and z_β taken at the usual α = 0.05 and power = 0.8 (or 0.9, etc.); then, fixing power = 0.8 and the reduced sample size n − n_b, solve the same formula for z_α/2, and hence for alpha. In other words: agree an appropriate power, calculate n based on this, and, with the strength of the data so far, agree on alpha.
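
The two steps translate directly into code. The sketch below uses the simple two-proportion formula exactly as written on the slide; the talk's quoted figures appear to come from a slightly different variant (e.g. pooled variance), so outputs can differ by a few tenths of a percent:

```python
# The slide's two steps with the simple two-proportion formula as written.
# The talk's quoted figures seem to use a slightly different variant (e.g.
# pooled variance), so numbers can differ by a few tenths of a percent.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """n = (z_a/2 + z_b)^2 (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

def alpha_for_n(p1, p2, n_new, power=0.8):
    """The same formula solved for two-sided alpha, given n_new per arm."""
    v = p1 * (1 - p1) + p2 * (1 - p2)
    z_a = (n_new * (p1 - p2) ** 2 / v) ** 0.5 - norm.ppf(power)
    return 2 * norm.sf(z_a)
```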

  14. Frequentist Example: historical data in both arms. Consider historical data of 30 patients in each treatment arm. If these have yielded 40% and 60% success rates, we use these values in the sample size formula to get n = 97 per arm, given a power of 80% and an alpha of 5%. Assume this is rounded up to 100 to account for a modicum of missing data. If we then take 70 patients and 80% power, and plug these into the formula with the same assumptions about effect size, we calculate alpha as 11.8% (two-sided). This allows statements such as: having decided that 30 patients' worth of historical data are available per arm, and thus only 70 patients are needed, the success criterion is chosen such that the study has the same power as the study with 100 patients.
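
These numbers can be checked by reusing n_per_arm and alpha_for_n from the sketch under slide 13 (this simple variant gives roughly 94 per arm and 11.6% where the slide quotes 97 and 11.8%, consistent with the talk using a pooled-variance formula):

```python
# Slide 14's numbers via the sketch above (this simple variant gives ~94
# per arm and ~11.6% where the slide quotes 97 and 11.8%, consistent with
# the talk using a pooled-variance formula).
print(round(n_per_arm(0.4, 0.6)))          # ~94 -> plan ~100 per arm
print(f"{alpha_for_n(0.4, 0.6, 70):.1%}")  # ~11.6% two-sided with 70 per arm
```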

  15. Frequentist Example: limited patient population. It is agreed between regulators and a company that only 140 patients are likely to be available for a clinical trial within a reasonable time frame. It is agreed, given the data available to date and clinical judgement, that the minimum clinical effect of interest would yield a sample size of 200 patients at 80% power. Given this information, the value of alpha that is reasonable for this study to achieve is 11.8%. This allows statements such as: having decided that 100 patients' worth of data would normally be necessary per arm, but only 70 patients are available, the success criterion is chosen such that the study has the same power as the study with 100 patients.

  16. Frequentist Example: Bayesian priors from modelling and simulation. A robust modelling and simulation exercise is undertaken to summarise the most likely efficacy in each treatment arm; the output is a distribution per arm. For the control arm, the point estimate is 0.4, and 95% of the distribution lies between 23% and 59%. For the test arm, the point estimate is 0.6, and 95% of the distribution lies between 41% and 77%. In each case, the effective sample size is 30. Repeat as before. Note that we have not done anything inherently Bayesian, even though we have started with a "prior distribution".
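
If one assumes the per-arm distributions are Beta distributions with the stated means and a + b equal to the ESS of 30 (the talk does not name the family), the quoted intervals are straightforward to reproduce:

```python
# Assumes Beta distributions with the stated means and a + b equal to the
# ESS of 30 (the talk does not name the distribution family).
from scipy.stats import beta

for arm, mean in (("control", 0.4), ("test", 0.6)):
    a, b = mean * 30, (1 - mean) * 30        # Beta(12, 18) and Beta(18, 12)
    lo, hi = beta.ppf([0.025, 0.975], a, b)
    print(f"{arm}: point estimate {mean}, 95% interval ({lo:.0%}, {hi:.0%})")
# Prints intervals close to the quoted (23%, 59%) and (41%, 77%).
```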

  17. Why this approach? It is simple and easy to communicate, and can be extended easily to extrapolation in general. If we summarise our prior beliefs in terms of a distribution, e.g. as the output from a modelling exercise, we can just use the effective sample size of the prior as our estimate of the amount of data we have, which applies equally well to extrapolation. Justification for the choice of alpha is currently very lacking. Scepticism factors (e.g. Hlavin et al) have been developed, but only for point estimates of the acceptability of extrapolation; this approach is motivated by trying to work from a distribution and not a point estimate.

  18. Communicating the impact. One of the best ways is to demonstrate what success looks like: given a specified number of successes in the control arm, how many successes would I need to see in the test arm? Plot the two against each other, as per Viele et al. Bayesian analyses are still possible; they are not design-dependent. This can help to contextualise the efficacy: clinicians can say whether it looks right. Plot the approaches on the same graph, with different curves for different alphas.
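
One possible construction of such a plot, using the chi-squared test from the earlier sketches; the arm size of 100 and the choice of test are my assumptions:

```python
# One way to build the success plot described: for each control success
# count, the smallest test count that wins at level alpha. The 100 patients
# per arm and the chi-squared test are my assumptions.
from scipy.stats import chi2_contingency

def min_test_successes(ctrl, n=100, alpha=0.05):
    for test in range(ctrl + 1, n + 1):
        table = [[ctrl, n - ctrl], [test, n - test]]
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            return test
    return None  # no attainable success for this control count

for alpha in (0.05, 0.118):  # the usual alpha v the widened alpha of slide 14
    boundary = {c: min_test_successes(c, alpha=alpha) for c in (20, 30, 40, 50)}
    print(f"alpha = {alpha}: {boundary}")
# Plotting these boundaries as curves, one per alpha, on the same graph
# lets clinicians judge directly whether the implied effects look right.
```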

  19. Further considerations. The choice of an appropriate alpha is not always a precise art, and approximations to rounder values may aid interpretation (e.g. 12% rather than 11.8%). When unequal randomisation is proposed, a more nuanced approach may need to be used; multiple options are available, although given the above two points this may be less of a concern. Of note, Viele et al only considered equal randomisation.

  20. Conclusions. Type I error control and prior-data commensurability are two sides of the same coin. Just because we have a prior belief or prior data does not mean we need to be inherently Bayesian in our analysis, but such analyses may help contextualise the results. The data need to stand by themselves; how we define success here should ideally be more systematic, and indeed quantitative. A clearer structure and ordering of how decisions on sample size parameters are made helps; agreement with regulators is key.

  21. Thank you for your attention. Further information: andrew.thomson@ema.europa.eu. European Medicines Agency, 30 Churchill Place, Canary Wharf, London E14 5EU, United Kingdom. Telephone: +44 (0)20 3660 6000. Facsimile: +44 (0)20 3660 5555. Send a question via our website: www.ema.europa.eu/contact. Follow us on @EMA_News.
