Is Adaptive Design Just a Buzzword Solution?
Chris Jennison, Professor of Statistics at the University of Bath, joins Pharma IQ to discuss the merits of Sequential Monitoring and Adaptive Design.
Pharma IQ: Chris, welcome, I’m pleased you could join us. Please could we start with some background on the work that you’re involved in?
C Jennison: Yes, certainly, I’ve been working in statistical methods for clinical trials for over 30 years. That was the area of my PhD research, and I’ve had a longstanding collaboration with Bruce Turnbull at Cornell University, where we’ve developed group sequential methods.
We’ve worked on all aspects of this methodology, really: the underlying theory, the computation of tests and their properties, and application. Both Bruce and I have been involved in collaborations with particular trials, consulting work, and just more generally talking to people at conferences and getting reactions to our work.
So we feel very in touch with the real applied problems and, in the last few years, with the emerging methodologies in the adaptive area. A lot of my work recently, I would say, has been in looking into those methods, trying to understand them, trying to see what’s good about them and being critical as well, but at the end of that, seeing what is good to take from the new methodology and recommend to users.
Pharma IQ: Thank you Chris, and how important do you think that group sequential design is to clinical trials?
C Jennison: Well I would say it’s very important. It’s the area I first started working in, so I think I’ve got maybe a biased point of view, but sequential monitoring is absolutely crucial. I think large trials have, as a matter of course, a monitoring committee, a data and safety monitoring committee, and they’re looking at everything to do with the trial, starting with the basics of compliance: are the patients actually receiving the treatment they’re meant to, in the way it’s meant to be administered?
“Sequential monitoring is absolutely crucial. It’s a good idea to quit early if you’re not going to succeed. So stopping for futility can be just as important as looking to stop for efficacy.”
That committee also receives safety information, so it’s looking out to see if there are problems, and if the treatment should be withdrawn during the course of the trial. But there’s also the chance to look at the primary end point and look for treatment efficacy, and the idea of a group sequential method there is to decide when you can conclude a trial because you have got the information that you need. So a trial will be set up with Type 1 error and power requirements, looking for a particular size of treatment effect.
When there’s enough information to say with confidence that you’ve found a treatment effect, then you can move onto the next stage, or, if it’s a Phase III trial, you can start registering your new treatment. You may also find disappointing results, and that’s not good news, but even then, it’s a good idea to quit early if you’re not going to succeed. So stopping for futility can be just as important as looking to stop for efficacy.
The benefit of this to the process, to the pharmaceutical company or the public health researchers, is that you control the resources that need to go into the study, the number of patients who are recruited, and also the time it takes to reach a conclusion. From a company perspective, time is very important. If you can show that you’ve got the right amount of evidence and reach a positive conclusion early, then that gives you more time to use this drug and to take advantage of your patent on it, and actually get something back from the research and development that you’ve put in.
Pharma IQ: Absolutely, and moving on to another area, why is adaptive design often maligned as a buzzword solution?
C Jennison: Well it’s funny you should say that, because looking back, it wasn’t that long ago that adaptive design was a very high-profile area, and there was great enthusiasm, and some of that I think persists. But maybe when it first appeared on the scene, the proponents were... Well, they were certainly enthusiastic. Maybe they made some claims that went a bit further than was quite right, and so it’s been an area that’s attracted attention. It’s also attracted really critical investigation. I think in the end, some things have stood up, and some have not looked so good.
“When adaptive design first appeared on the scene, the proponents were certainly enthusiastic. Maybe they made some claims that went a bit further than was quite right, so it’s been an area that’s attracted attention. Some things have stood up, some have not looked so good.”
So with hindsight, you look back and say, ‘well it may have been a flash in the pan’, but that would be unfair. There are things there that have come out that I think are very important. I think also, the enthusiasm that was generated was really useful, because it just drew attention to the important role of statistical thinking in clinical trials. Some of the longer-standing methods, the group sequential methods and the use of internal pilots to correct the sample size, have been around for a while, and maybe weren’t applied as frequently as they could have been, and I think they’ve now all come under this umbrella. They’re collectively part of the adaptive design toolbox. There are some excellent methods there.
Pharma IQ: Could you talk a little bit about the methodology involved?
C Jennison: The methodology for adaptive design; I’d say the combination test is the key methodology that’s come along that really opens up a lot of possibilities. The idea there is that you divide a study into stages, and from each stage you can summarise the data with a P-value. Under the null hypothesis, the P-value will have the usual Uniform(0,1) distribution. That would be the case in the first stage, when you just set the trial going.
In the next stage, you may change things. You may redesign in various ways, particularly the number of subjects you’re going to see in this next stage, but there may be other aspects.
The nice thing is that the P-value for that new set of data, under the null hypothesis, is still uniformly distributed, Uniform(0,1), even if the nature of that stage was changed based on what was seen earlier. So you get a sequence of uniform P-values, maybe just two, maybe more than that if the study has several stages, and you can make a combination test putting those together.
There are different ways of doing that. You may say your overall statistic will be the product of all the P-values. You may turn the P-values into Z-scores on the normal scale and add those up with weights, as you would for normal data, and then you’ve got an underlying null distribution that you can work with to control Type 1 error. It’s an alternative way of looking at group sequential analysis, and in some ways it’s quite convenient in dealing with changes in group size over what was originally planned, but it can also be used when there are more complicated things going on, maybe multiple treatments.
You’ve then got another adjustment going on for multiple testing, but that then works in two directions. You’ve got your treatment stacked up and you’re controlling for the multiplicities there, and you’ve got your stages of the study, and you’re combining data across the stages, by combining P-values. So it gives a lot of freedom, a lot of flexibility; freedom to design things ahead of time and flexibility to respond during the trial, to what you see.
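The inverse normal (weighted Z-score) combination described above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's production code: the function name and the example P-values are hypothetical, and only the standard library is used.

```python
from statistics import NormalDist

def inverse_normal_combination(p_values, weights=None):
    """Combine independent stage-wise one-sided P-values with the
    inverse normal (weighted Z-score) method.

    Each P-value is converted to a Z-score. Under the null hypothesis
    each P-value is Uniform(0,1), so each Z-score is standard normal,
    and a weighted sum with squared weights summing to 1 is standard
    normal too -- even if later stages were redesigned based on
    earlier data, as long as the weights were fixed in advance.
    """
    nd = NormalDist()
    if weights is None:
        # Pre-planned equal weights: 1/sqrt(k) for k stages.
        w = [1.0 / len(p_values) ** 0.5] * len(p_values)
    else:
        # Normalise so the squares of the weights sum to 1.
        s = sum(x * x for x in weights) ** 0.5
        w = [x / s for x in weights]
    z = sum(wi * nd.inv_cdf(1.0 - p) for wi, p in zip(w, p_values))
    return 1.0 - nd.cdf(z)  # overall one-sided P-value

# Two stages with moderately small stage-wise P-values (illustrative):
overall = inverse_normal_combination([0.04, 0.03])
```

With two stage-wise P-values of 0.04 and 0.03, the combined one-sided P-value comes out close to 0.005, which is smaller than either stage alone, as one would expect when both stages point the same way.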
Pharma IQ: Thank you Chris, and then what considerations are involved in sample size re-estimation for updated estimates of the response variance?
C Jennison: Well, the classical approach there is what I would call the internal pilot. What you need to do here is to think: if you were designing your study and you knew the variance, or some other nuisance parameter, what sample size would you need? You get an equation for that; the problem is that you don’t know the variance. Sample size typically is proportional to response variance, so it’s a very important variable.
So you can start off by designing your study with an estimate of what you think the response variance might be, but then you give yourself the chance, partway through the study, to look and see what you’re actually getting in your data. If you see a sample variance that’s higher than the one you first thought of, then you’ve got to extend the study, increase the sample size. Some of the ways of doing that, the older methods, plug in the new value of the variance and say, well, it doesn’t make much difference to change as we go; at least we’re not looking at the primary end point and changing it because of that, so there’s not much effect on the Type 1 error rate if we plug in a new value.
In fact, there is a small effect and you might try and adjust for that. Inside the combination test methods, you actually get a much more precise way of dealing with Type 1 error, because you can take these P-values from each stage and combine them in the way I was just describing, and that will control Type 1 error exactly, but you’ve still got the same ability to change the later sample sizes to improve the power and make sure the power is what you really first intended in terms of the actual treatment effect.
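The internal pilot idea can be sketched as follows, assuming the standard two-sample comparison of means, where the per-arm sample size is proportional to the response variance. The function name and the numerical values are illustrative, not taken from any particular trial.

```python
from math import ceil
from statistics import NormalDist

def per_arm_sample_size(sigma2, delta, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sample z-test of a mean
    difference delta, two-sided level alpha, given response
    variance sigma2. Note n is proportional to sigma2, which is
    why a wrong variance guess matters so much."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * sigma2 / delta ** 2)

# Plan the study with a guessed variance of 1.0 ...
n_planned = per_arm_sample_size(sigma2=1.0, delta=0.5)
# ... then the internal pilot shows a sample variance of 1.4,
# so the target is re-estimated (and never reduced below plan):
n_updated = max(n_planned, per_arm_sample_size(sigma2=1.4, delta=0.5))
```

Here a 40% higher observed variance pushes the per-arm target up in the same proportion; the combination-test machinery above is what lets such a data-driven increase be made while still controlling Type 1 error exactly.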
Pharma IQ: Is it a similar case for sample size modification, to rescue, for instance an under-powered study?
C Jennison: Well I think… that’s a very different problem in origin, at least. The solutions may be similar. You may look at your study partway through and see that it’s not really heading where you hoped; and it’s not because the variance is higher than you thought, it’s simply because the treatment effect isn’t so large, and you’re looking back and saying, if only I’d thought of that earlier on, I’d have designed this study to be larger in order to be able to detect a smaller treatment effect, given that actually this effect is still an interesting one.
So the flexible adaptive methods can be used to do that, but my advice is always to say, try not to get into that situation, because you can trace it back to the planning stage, where power has been specified, and clearly when you have to rescue a study, it’s because it’s not been designed the way it should have been. Group sequential tests offer a very good way around that problem. They allow you to say, ‘we are really interested in this smaller end of the scale of possible effect sizes, we’d like to have a study that can detect that’.
It means we’re going to have to plan a pretty large sample size, but the point of the stopping rule in a group sequential test is that it lets you stop early. So if the treatment effect really is higher and it’s what you hoped for, then you’ll finish the study halfway through. So you won’t need to call on that full sample size, but if the effect is smaller, then you keep going.
That really is, statistically speaking, I think a very sound way of dealing with that uncertainty about the possible effect size when you’re planning the study.
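A small Monte Carlo sketch can illustrate the point: a two-stage design planned generously still finishes early, on average, when the true effect is large. The stopping boundary values below are illustrative placeholders, not a calibrated design, and the drift parametrisation is a simplification for the sake of a short example.

```python
import random

def simulate_two_stage(drift, c_eff=2.5, c_fut=0.0, c_final=1.98,
                       n_sims=50000, seed=1):
    """Monte Carlo sketch of a two-stage group sequential test.

    `drift` is the expected Z-score contributed by one stage
    (0 under the null hypothesis). At the interim: stop for
    efficacy if Z1 >= c_eff, stop for futility if Z1 <= c_fut,
    otherwise run stage 2 and reject if the pooled statistic
    (Z1 + Z2)/sqrt(2) >= c_final.  Returns the rejection
    probability and the expected number of stages used.
    """
    rng = random.Random(seed)
    rejects = 0
    stages = 0
    for _ in range(n_sims):
        z1 = rng.gauss(drift, 1.0)
        if z1 >= c_eff:            # early stop: efficacy shown
            rejects += 1
            stages += 1
        elif z1 <= c_fut:          # early stop: futility
            stages += 1
        else:                      # continue to the full sample
            z2 = rng.gauss(drift, 1.0)
            if (z1 + z2) / 2 ** 0.5 >= c_final:
                rejects += 1
            stages += 2
    return rejects / n_sims, stages / n_sims

# With a large true effect, the trial often stops at the interim:
power, avg_stages = simulate_two_stage(drift=2.0)
```

Under a large drift the expected number of stages comes out well below 2, which is the quantitative content of the remark above: you plan for the pessimistic case, but you only pay for it when you have to.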
Pharma IQ: Thank you Chris. Where do you see clinical trials going in the future?
C Jennison: Well, that’s a challenging question. I think there are lots of things happening right at the moment, but the things that we know about are growing, and there are areas on the edges of that where we’re still working away; so the idea of a seamless transition from Phase II to Phase III: what’s good about that? What’s difficult? Enrichment studies: trying to focus, sometimes, on a sub-population with a particular biomarker, a genetic marker, or a measurement, a blood measurement say, that can be used to identify patients who might particularly benefit from a treatment.
There are methods for that, but they are still, I think, at the development stage, and these will have an impact. But one can also look on a broader scale, and I think because there’s now so much attention to what’s going on in each phase of the drug development process, people are much more aware about the issues, and you can then say, well we should really look at these in sequence. It’s quite hard to decide what’s good about a Phase II design, unless you ask, ‘where do the results go?’
Those results go into the treatment that’s fed into a Phase III study, and so you have to think about these stages together. So that’s very much a question of statistical decision theory, but building in a lot of information about the products going in; the likely benefits of these molecules, and the effects coming out; the benefits to patients; the financial returns to a company. So I think that higher order planning of programme design is one of the most interesting areas that we’ll see explored in the near future.
Pharma IQ: Thanks, Chris. Now I understand that you’re going to be speaking at our Innovation in Clinical Design and Reporting Conference, which will be running on December 6th - 8th in London. Why did you decide to get involved with this event?
C Jennison: Well I like talking about my work, and I’ll be giving a short course at this event. I’ve done that quite a few times before, and it’s a chance to put together material in a package and try to go into some depth, explaining the fundamentals and then getting into case studies. I just enjoy the interaction that that can give: working with a group of usually practising statisticians, we go through the methods and we have really interesting exchanges.
People bring their own problems along and see where they link in, and have plenty of questions. We work through the sort of topics we’ve just been talking about in this conversation. What I like to think is that they come to life; that people understand more about the theory and the method, and then as we get into my case studies and their own examples, we see just how these things work. People go away with a good knowledge, a knowledge of what’s available, how to do it and where to look, if you want to take that further and actually pursue the methodology.
Pharma IQ: Well we look forward to your presentation at the event, and we hope that it provides a useful forum. Thanks so much for your time today, Chris.
C Jennison: Thank you, Helen.
Please note that we do all we can to ensure accuracy in the transcription of audio interviews, but errors may still understandably occur in some cases. If you believe that a serious inaccuracy has been made within the text, please contact +44 (0) 207 368 9425 or email email@example.com.