Measuring Solubility – The Quickest, Most Efficient Methods
Dr. Jonathan Goodman, Lead Professor at the University of Cambridge, joins Helen Winsor from Pharma IQ to discuss competitive solubility measurements: implementing the right measurements for your solubility strategy.
Pharma IQ: You have a lot of expertise in the various methods of measuring solubility. What are the main approaches? Can you lay out the pros and cons of each, and what direction do you see this moving in?
J Goodman: I think what you want in measuring solubility is speed and precision, low cost, wide applicability, minimal amounts of sample, and the maximum information from every experiment. Unfortunately there isn't a single method which will give all of these together, so if you want them all, you have to be prepared to use a variety of methods.
If your main requirement is to measure solubility extremely quickly, then a good way of doing this is to make up solutions in DMSO, add water, and see whether they go turbid or not. This is very fast; it requires a tiny amount of material; and it's possible to automate. The drawbacks are that it's not particularly precise: you tend to discover whether things are soluble or not, you won't be able to quantify this to a high degree of precision, and the amount of information you get out of it is fairly small. But it's an extremely useful process if speed is really the most important thing.
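As an illustration, here is a minimal sketch of how the readout of such a turbidity screen might be processed. The concentrations, readings, and threshold are all hypothetical; a real assay would calibrate the threshold against known standards.

```python
# Sketch: estimating kinetic solubility from a DMSO-dilution turbidity screen.
# All readings and the threshold below are hypothetical, for illustration only.

concentrations_uM = [1, 3, 10, 30, 100, 300]       # concentration after dilution into water
turbidity = [0.02, 0.03, 0.02, 0.05, 0.41, 0.88]   # scattering signal, arbitrary units

TURBIDITY_THRESHOLD = 0.1  # readings above this are taken to indicate precipitation

def kinetic_solubility(concs, signals, threshold=TURBIDITY_THRESHOLD):
    """Return the highest tested concentration that stayed clear,
    i.e. a coarse lower bound on the kinetic solubility."""
    clear = [c for c, s in zip(concs, signals) if s < threshold]
    return max(clear) if clear else None

print(f"Kinetic solubility >= {kinetic_solubility(concentrations_uM, turbidity)} uM")
```

The binary clear/turbid readout is exactly why the method is fast but imprecise: the answer is only ever bracketed between two tested concentrations.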
If you have a wide range of different samples, then perhaps the best way to do it is a shake-flask approach. This is just putting the compound in water, shaking it for a while, and measuring how much has dissolved. It is very simple and it works extremely well. But it is fairly slow; you need quite a lot of material; and you have to be careful about reproducibility, because a single molecule might have a large number of different crystalline forms, and it's quite difficult to know whether you're getting supersaturation or not. Still, shake-flask methods work extremely well for solubility measurements, particularly if you have a wide range of different sorts of samples.
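The interview doesn't say how the dissolved amount is quantified; one common choice is UV absorbance against a calibration curve. A minimal sketch of that arithmetic, with invented calibration data and a hypothetical dilution factor:

```python
# Sketch: quantifying a shake-flask supernatant by UV absorbance.
# Calibration data, the sample reading, and the dilution are hypothetical.

import numpy as np

# Calibration standards: known concentrations (mg/L) vs absorbance (Beer-Lambert, linear)
cal_conc = np.array([5.0, 10.0, 20.0, 40.0])
cal_abs = np.array([0.11, 0.22, 0.45, 0.88])

# Least-squares slope through the origin: A = k * C
k = np.dot(cal_abs, cal_conc) / np.dot(cal_conc, cal_conc)

# Saturated supernatant, diluted 10x before measurement
sample_abs = 0.37
dilution = 10
solubility_mg_per_L = (sample_abs / k) * dilution
print(f"Shake-flask solubility ~ {solubility_mg_per_L:.1f} mg/L")
```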
There's a third approach, which is the one we're most interested in, coming from a university, of course, where the interest is in getting the maximum precision and the maximum amount of information out of each sample, even if that is slightly slower than some other methods. The way we tend to do this is to use a potentiometric method, that is, using a machine from the Sirius company called GLpKa. This gives us very high precision, very high reproducibility, and a great deal of information out of every experiment. It's reasonably quick, but nothing like as quick as the turbidimetric method. We do need a functional group for which both charged and neutral forms are present in aqueous solution, but fortunately most drug-like molecules fall into this category.
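Potentiometric approaches work because, for an ionisable compound, total solubility varies with pH in a predictable way. A minimal sketch of that underlying relation for a monoprotic acid (the intrinsic solubility and pKa values here are made up for illustration):

```python
# Sketch: pH-solubility profile for a monoprotic acid (Henderson-Hasselbalch form).
# S(pH) = S0 * (1 + 10**(pH - pKa)); the S0 and pKa values are illustrative.

def total_solubility_acid(pH, S0, pKa):
    """Total solubility of a monoprotic acid: neutral form (S0) plus ionised form."""
    return S0 * (1 + 10 ** (pH - pKa))

S0, pKa = 0.005, 4.0   # intrinsic solubility (g/L) and pKa, hypothetical values
for pH in (2.0, 4.0, 6.0, 7.4):
    print(f"pH {pH:>4}: S = {total_solubility_acid(pH, S0, pKa):.4g} g/L")
```

Broadly, instruments of this kind exploit the shift between the apparent pKa in a saturated titration and the true pKa to back out the intrinsic solubility.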
Pharma IQ: What direction do you see this moving in? Do you detect a trend amongst these different techniques?
J Goodman: I think the ultimate goal would be to have one machine, one process, which did everything together: very fast, very precise, very low-cost, and working with a minimal amount of sample. In practice, I don't think a machine like that is going to be available any time soon. So in the immediate future, each of these techniques can get better in its own right; and taking the different types of information you get from all of them and putting them together to combine the advantages of the different methods (using the more precise methods to calibrate the quicker ones) is a much more short-term goal which would lead to better solubility measurements.
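One simple form such a combination could take is a regression in log-solubility space, fitting the fast method's readings against the precise method's values on compounds measured by both. The paired data here are invented for illustration:

```python
# Sketch: calibrating a fast, coarse method against a precise reference method.
# Paired measurements (log10 solubility) are hypothetical; a linear fit in log
# space lets the fast method's raw readings be corrected toward the reference.

import numpy as np

fast = np.array([-2.1, -3.4, -1.0, -4.2, -2.8])      # log10 S from the quick screen
precise = np.array([-2.5, -3.9, -1.2, -4.8, -3.1])   # log10 S from e.g. potentiometry

slope, intercept = np.polyfit(fast, precise, 1)

def calibrated(fast_logS):
    """Map a fast-method reading onto the precise method's scale."""
    return slope * fast_logS + intercept

print(f"Fit: precise ~ {slope:.2f} * fast + {intercept:.2f}")
print(f"New screen reading -3.0 -> calibrated {calibrated(-3.0):.2f} log units")
```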
Pharma IQ: What are the key points to consider when adopting a Sirius GLpKa potentiometric method, and can you give some tips on how to avoid the pitfalls?
J Goodman: It's a very convenient method to use. It needs a certain amount of training, but not too much. You need to consider how much substance you have; you need a reasonable amount: we usually work with tens of milligrams, or a few hundred. If you just have a fraction of a milligram, then this isn't the right technique. And you need to have the right amount of time: we find it takes, on average, about half an hour per experiment, which is pretty quick (we can do quite a lot per day), but if we were aiming to screen a database of 10,000 compounds, half an hour per measurement would be unrealistically long.
So it gives a reasonable balance of speed and amount of sample, and it gives us extremely high precision whilst we’re doing that.
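To put the throughput point in numbers, a back-of-the-envelope check using the figures quoted above (30 minutes per measurement, a 10,000-compound screen):

```python
# Sketch: why 30 minutes per measurement rules out large screens.
minutes_per_experiment = 30
compounds = 10_000
hours = compounds * minutes_per_experiment / 60
print(f"{compounds} compounds -> {hours:.0f} instrument-hours "
      f"(~{hours / 24:.0f} days of continuous running)")
```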
There is a potential pitfall, which is the formation of polymorphs as we precipitate things out of solution. Substances can crystallise in many different forms, each with a different solubility. There are quite a lot of examples in the literature of people measuring the solubility of the same compound and getting quite different answers. The explanation may be that they've been measuring the solubility of different polymorphs: in a sense, each has got the right answer for a different form of the same substance.
We first noticed this in the GLpKa machine when, as we were monitoring the experiment, it seemed to be converging on one measurement and then suddenly jumped to a completely different one. Studying this process further, we found that, just by programming the machine to precipitate under two slightly different conditions, we could reproducibly obtain different polymorphs from the same experiment. It was a potential pitfall, because if we hadn't noticed this we could have got misleading results: two different measurements of the same compound. But out of every pitfall comes an opportunity, and because we realised what was happening, we were able to exploit it to form different polymorphs under controlled conditions.
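The tell-tale jump Goodman describes can be caught automatically. A minimal sketch, with a hypothetical series of successive solubility estimates and an illustrative threshold:

```python
# Sketch: flagging a sudden jump in a converging series of solubility readings,
# as might indicate a switch between polymorphs. Data and threshold are illustrative.

readings = [0.52, 0.49, 0.505, 0.50, 0.50, 0.91, 0.93, 0.92]  # successive estimates

JUMP_THRESHOLD = 0.2  # absolute change between readings treated as suspicious

def find_jumps(values, threshold=JUMP_THRESHOLD):
    """Return indices where consecutive readings differ by more than threshold."""
    return [i for i in range(1, len(values))
            if abs(values[i] - values[i - 1]) > threshold]

for i in find_jumps(readings):
    print(f"Possible polymorph change between reading {i-1} ({readings[i-1]}) "
          f"and reading {i} ({readings[i]})")
```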
Pharma IQ: How do you set about comparing the reliability of measurements for effective analysis, and can you give us some case-study examples of this?
J Goodman: Reliability in solubility measurements is a huge issue, because if you look through the literature there's a very wide range of different values. I recently checked on diclofenac, which is a non-steroidal anti-inflammatory compound, in a leading database, and it gave a range of values from 26 grams per litre down to a thousandth of a gram per litre. When measurements differ by more than four orders of magnitude, you need to worry about their reliability.
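For concreteness, the spread of those two quoted extremes works out as follows:

```python
# Sketch: the spread of the diclofenac literature values, in orders of magnitude.
import math

high, low = 26.0, 0.001  # g/L, the extremes quoted above
spread = math.log10(high / low)
print(f"Spread: {high / low:.0f}-fold, ~{spread:.1f} orders of magnitude")
```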
We run each experiment multiple times, so we get a good idea of the experimental variation, which is rather small; we know that we can reproduce our experiments. We can check against the literature, and against other experimental methods, and when we do this we get results which are consistent with carefully done experimental data. So to that extent, we think we can assess reliability. We tried particular experiments on diclofenac, which exists in a number of different crystal forms, and as salts and as the free acid. If we measure the solubility of the free acid correctly, then whichever form we start from, if we're precipitating the free acid we should get the same answer. We were able to do that, and we published a study of this a few years ago, showing that we could get the same solubility values for this compound even when starting from quite different samples of the material.
This gave us further confidence that we were getting reproducible values.
Pharma IQ: It would be great if you could summarise the impact and application of the solubility competition, and explain exactly what was involved.
J Goodman: We ran a competition because, with generous sponsorship from Pfizer, we were able to generate quite a large database of solubility measurements, and of course we wanted to publish this, but we discovered that leading journals weren’t very keen to publish just databases of information, however valuable that information might be.
We had the idea of arranging a competition, and with the help of the American Chemical Society's Journal of Chemical Information and Modeling we were able to set this up. We published 100 solubility measurements from our database, and we also published the structures of 32 molecules for which we'd measured the solubility but didn't publish the numbers. And we asked people to predict the solubility of those 32 unknown compounds.
This attracted a lot of interest, we got more than 100 entries, and the best of those entries were really rather good at predicting solubility. I think we can conclude from the competition that, if you want to predict solubility within about an order of magnitude, then there are methods of doing this which work reasonably well most of the time.
But if you want greater precision than that – and an order of magnitude is very useful but it’s not perfect precision – then there doesn’t currently seem to be any available method which you can use in order to get really reliable and really precise calculations of what the solubility of a new compound should be.
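A common way to make "within about an order of magnitude" concrete is to score predictions in log10 units, where an error of 1.0 corresponds to a tenfold miss. A minimal sketch with invented numbers:

```python
# Sketch: scoring solubility predictions in log10 units; an RMSE near 1.0
# corresponds to "within about an order of magnitude". Values are hypothetical.

import math

measured = [-3.2, -4.5, -2.1, -5.0]   # log10 S of the blinded compounds
predicted = [-2.9, -4.1, -2.8, -4.6]  # one entrant's predictions

rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / len(measured))
within_1_log = sum(abs(p - m) <= 1.0 for p, m in zip(predicted, measured))

print(f"RMSE = {rmse:.2f} log units; {within_1_log}/{len(measured)} within one log unit")
```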
So we are obliged to keep on measuring them, and we are continuing to do so; we hope to publish more data soon, so that we can build up the data available to everybody about the precise solubility of drug-like molecules.
Pharma IQ: Yes. The competition sounds like a superb idea indeed. Now, looking at the wider picture, in terms of new technology and methodologies, I know that we talked about this a little at the start of the interview, but are there any other changes that you’d like to mention that you’ve noted?
J Goodman: I think methods are developing all the time; we're always hoping that someone will have a completely new idea which will revolutionise the field. Over the last few years there have been incremental developments: potentiometric methods are working faster and on smaller amounts of compound, which is an extremely useful development. It also means we can do more experiments more quickly, so we can begin to look at more complicated effects: the effect of dissolved salts, the effect of temperature, the effect of varying solvents.
At the moment we've developed a database under fairly restrictive conditions, just looking at water at roughly physiological pH; but if we move away from that, even by quite a small amount, we know there'll be a huge amount of new and interesting information to discover.
Pharma IQ: You’ll be delivering a presentation at the forthcoming Improving Solubility conference entitled, Competitive Solubility Measurements: Implementing the Right Measurements for your Solubility Strategy. What would you say would be the key learning point that you’re hoping to share?
J Goodman: I think I'll be hoping to get across that reliable and accurate solubility measurements are possible, but they're quite difficult; the subject of solubility is a very important one, but also a very complex one.
Pharma IQ: And finally, what do you hope to gain from being a part of the event, and why do you think this conference is important right now for the industry?
J Goodman: No new compound can be licensed for use unless it has suitable solubility properties, so solubility is clearly on the critical path to success, and I look forward to discovering how this might be a part of the processes by which companies are looking to develop new compounds.