Will my clinical study be a success? The Concept of Statistical Power

Guest blog by Consultant Statistician, Paul Terrill


“My clinical study is powered at 80%. It therefore has an 80% chance of being successful.” Unlikely! The actual chance of success could be a lot lower.

Many people are familiar with the concept of statistical power when planning their clinical trials and deciding how many patients they need to treat. The statistical power of a study is the probability of rejecting a null hypothesis (for example, that two different treatments produce the same average value for the assessment in question) when there really is a difference between the treatments; equivalently, it is the probability of not missing a genuine treatment difference. However, power is often not the same as the probability of success, even if “success” is defined purely as seeing a statistically significant treatment effect.

Consider, for example, a superiority trial where we want to provide evidence that one treatment (the treatment we are developing) is better than another (say the current standard of care, or perhaps a placebo). When designing the study we may set the power at, say, 80% or 90%. Does this mean that our trial has an 80% (or 90%) chance of showing the result we are looking for, that is, of being successful?

Unfortunately not, for a variety of reasons, one of which is to do with how power calculations are performed. To calculate the power of a design you have to make a number of assumptions, including the size of the treatment difference you expect, the underlying patient-to-patient variability, the accepted Type I error rate (alpha) and the sample size. We do not know the true treatment effect, so we estimate it, often from previous experience. However, it is only an estimate, and that estimate carries uncertainty. The true treatment effect may be smaller than our estimate, and if it is smaller then our power is also smaller. As a result, the statistical power of a trial may not be a good prediction of the trial’s probability of success, because power is based solely on a point estimate of the expected treatment effect.
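To illustrate the point, here is a minimal sketch in Python of the usual normal-approximation power formula for comparing two group means (the numbers are purely illustrative, not taken from the example discussed later); note how sharply power falls once the true difference is smaller than the one assumed at the design stage.

```python
# Minimal sketch (illustrative numbers): approximate power of a two-sided,
# two-sample z-test with known patient-to-patient standard deviation.
from scipy.stats import norm

def power(true_diff, sd, n_per_group, alpha=0.05):
    se = sd * (2.0 / n_per_group) ** 0.5      # standard error of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)          # two-sided critical value
    return norm.cdf(true_diff / se - z_crit)  # P(significant result favouring the new treatment)

# A design assuming a difference of 0.5 (SD 1, 64 patients per group) has roughly 80% power,
# but if the true difference is only 0.4 or 0.3, power drops to about 62% or 40%.
for d in (0.5, 0.4, 0.3):
    print(f"true difference {d}: power {power(d, sd=1.0, n_per_group=64):.0%}")
```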

If we want more information for decision making, an additional measure to power that has been found useful when designing a study is assurance [1]. Assurance incorporates the prior uncertainty associated with the estimate of the treatment effect and calculates the unconditional probability that a trial will lead to the desired outcome, in contrast to power, which is a probability conditional on an assumed value for the unknown treatment effect.

Consider an example [1]. Suppose a clinical trial will be conducted with a planned two-sided 5% significance test, and the test is required to have 80% power. The underlying patient-to-patient variance is assumed to be known and equal to 0.0625. Assuming a treatment difference of 0.2, a standard sample size calculation says that 25 patients per group are needed.
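As a rough sketch of where the 25 patients per group come from, the textbook normal-approximation sample-size formula for comparing two means with known variance can be evaluated as follows (this is the standard formula, not necessarily the exact method used in [1]).

```python
# Sketch: n per group = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
import math
from scipy.stats import norm

alpha, target_power = 0.05, 0.80
sigma2 = 0.0625      # assumed (known) patient-to-patient variance
delta = 0.2          # assumed treatment difference

n_per_group = 2 * (norm.ppf(1 - alpha / 2) + norm.ppf(target_power)) ** 2 * sigma2 / delta ** 2
print(math.ceil(n_per_group))   # -> 25
```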

Although the expected treatment difference is 0.2, it is not known for sure that this is the true value. Prior information suggests a variance of 0.06 for this estimate of the treatment difference, meaning that the true value could easily lie anywhere within the 95% prediction interval of -0.28 to 0.68. Including this uncertainty in the calculation of the probability of a statistically significant outcome, using a normal distribution for this prior, the assurance that the null hypothesis is rejected with data favouring the study treatment is 0.595. So even though the prior expectation of the treatment effect equals the value 0.2 at which the trial was found to have 80% power, there is only about 60% assurance of a positive significant result! The figure below illustrates this: there is a large probability of the treatment effect taking a value for which the power is much lower than 80%. In fact, the prior even allows for the possibility that the new treatment is worse than the one it is being compared to, which is not necessarily unrealistic.


Power (solid line) and prior density (dotted line) of the treatment difference. For clarity, the prior density has been scaled to have value 1 at the mode.
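As a rough sketch of how the 0.595 figure can be reproduced, assuming (as in [1]) a normal prior for the treatment difference with mean 0.2 and variance 0.06 and the design above, assurance is simply the power averaged over that prior; a closed form and a Monte Carlo version are shown below.

```python
# Sketch: assurance for the example above, assuming a normal N(0.2, 0.06) prior
# for the true treatment difference (as described in [1]).
import numpy as np
from scipy.stats import norm

alpha = 0.05
sigma2, n = 0.0625, 25            # known variance, patients per group
mu0, tau2 = 0.2, 0.06             # prior mean and prior variance of the treatment difference

se2 = 2 * sigma2 / n              # sampling variance of the observed difference in means
crit = norm.ppf(1 - alpha / 2) * se2 ** 0.5   # observed difference needed to reject H0

# Closed form: marginally, the observed difference is N(mu0, tau2 + se2)
print(round(norm.cdf((mu0 - crit) / (tau2 + se2) ** 0.5), 3))             # -> about 0.595

# Monte Carlo check: draw a "true" effect from the prior, average the conditional power
rng = np.random.default_rng(0)
true_diff = rng.normal(mu0, tau2 ** 0.5, size=1_000_000)
print(round(float(norm.cdf((true_diff - crit) / se2 ** 0.5).mean()), 3))  # -> about 0.595
```

The Monte Carlo version makes the interpretation explicit: assurance is the expected power once the true treatment effect is itself treated as uncertain.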

It is estimated that 50% or more of all phase III trials are unsuccessful. It can be argued that assurance provides a more realistic estimate of the probability of a trial’s success, and many researchers have started to include it when planning trials [2].

Providing an estimate of assurance generally requires more effort than simply entering some numbers into sample size software; for example, substantial work may be needed to specify the prior distribution. In addition, when planning future studies based on previous study results, it should be considered whether the effects seen in those earlier studies need to be discounted (adjusted downwards) [2], [3]. Projections of previous results are often optimistic, partly because future studies generally investigate a more heterogeneous patient population, and partly because only treatments with favourable results are selected for further study, which is a possible source of bias.

If you need any assistance in calculating the chances of success of your trial and ensuring you get the most out of your clinical data, contact us and talk to a statistical consultant.

References

[1] O’Hagan A, Stevens JW, Campbell MJ. Assurance in clinical trial design. Pharmaceutical Statistics 2005; 4:187–201.

[2] Kirby S, Burke J, Chuang-Stein C, Sin C. Discounting phase 2 results when planning phase 3 clinical trials. Pharmaceutical Statistics 2012; 11:373–385.

[3] Wang SJ, Hung HMJ, O’Neill RT. Adapting the sample size planning of a phase III trial based on phase II data. Pharmaceutical Statistics 2006; 5:85–97.