
What data do we trust now in the world of high-throughput screening and public compound databases?

Let’s face it, the world of experimentation is fun, rewarding, challenging and depressing. OK, that has been MY experience of the world of lab-based experimentation. I have made many discoveries and celebrated the true joy of being a lab-rat. Love it…always did. I remain polarized to this day by the number of hours I spent around large NMR magnets. No bias, but still polarized. But lab work is also challenging…sometimes not in a good way. Hours of “experiences”…read that as time wasted because of bad preparation on my part, or on a collaborator’s part, or bad chemicals, poorly calibrated equipment, the “person who came before me” scenario, etc. Then there is the truly depressing side of my lab experience. Repeating work that someone else in my lab had already done, because without a LIMS system I had no way to know; colleagues not checking materials shipped to them at a crucial stage of a synthesis and finding out that what was ordered was not what was in the bottle (still their fault for not checking!); NMR solvents being really wet and causing nasty side reactions with the compound; and, in my life…two magnet quenches in one day…a 500 MHz and a 300 MHz. I shrugged and went home…

Some of my lab experiences were depressing, but then I moved into cheminformatics. And in the past few years I have been depressed by the sad state of our public compound databases and the quality of chemistry data online. I have given dozens of presentations on the matter of data quality, and these two blog posts are representative. We have also published on the issue of chemical compounds in the public databases and their correctness:

A Quality Alert and Call for Improved Curation of Public Chemistry Databases, A.J. Williams and S. Ekins, Drug Discovery Today (Link)

Towards a gold standard: regarding quality in public domain chemistry databases and approaches to improving the situation, A.J. Williams, S. Ekins and V. Tkachenko, Drug Discovery Today, 5, 2012 (Link)

This work was always focused on chemical compound structure representations and their matches with synonyms, names, etc. The common question was: were these compounds what their names said they should be? After a couple of years of working on this, and publishing with Sean Ekins, we wondered about the quality of the measured experimental data, especially in the public domain assay screening databases, PubChem of course being the granddaddy of them all. While work could be done to confirm name-structure relationships in PubChem, the experimental data is what it is, as submitted. But how do you check the quality of measured experimental data – reproducibility, comparisons between labs and so on? Not easy.

When the opportunity came to investigate the possibility of errors in experimental data, we didn’t quite expect the results we obtained. Rather than explain the work in detail I encourage you to read the paper, Open Access on PLOS ONE and available here. The article, entitled “Dispensing Processes Impact Apparent Biological Activity as Determined by Computational and Statistical Analyses”, can be summarized as follows:

* Serial dilution and dispensing using pipette tips versus acoustic dispensing with direct dilution can give activity values that differ by orders of magnitude, with no correlation between the two (a toy numerical sketch of such a comparison follows this list)

* The resulting computational 3D pharmacophores generated from data from both acoustic and tip-based transfer differ significantly

* Traditional dispensing processes are another important source of error in high-throughput screening that impacts computational and statistical analyses.
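
To make that first bullet concrete, here is a minimal Python sketch – the numbers are invented purely for illustration, not data from the paper – of how one might compare potency values for the same compounds measured under the two dispensing processes. Remember that each pIC50 unit of disagreement is a ten-fold difference in apparent IC50, so even modest-looking offsets are big discrepancies.

from statistics import correlation  # needs Python 3.10+

# Hypothetical pIC50 values for the same five compounds; invented numbers,
# chosen only to illustrate "large offsets, no correlation".
tip_based = [5.2, 6.8, 4.9, 7.1, 5.5]
acoustic = [7.9, 6.1, 7.4, 9.0, 8.2]

# Pearson correlation between the two dispensing processes
r = correlation(tip_based, acoustic)
print(f"Pearson r: {r:.2f}")  # close to zero for these values

# Each pIC50 unit of disagreement is a ten-fold difference in apparent IC50
for t, a in zip(tip_based, acoustic):
    print(f"tip {t:.1f} vs acoustic {a:.1f}: {10 ** abs(a - t):.0f}-fold apart")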

Derek Lowe, on his “In the Pipeline” blog, made some strong comments in his post about the paper. He called it a “truly disturbing paper” and said “…people who’ve actually done a lot of biological assays may well feel a chill at the thought, because this is just the sort of you’re-kidding variable that can make a big difference.” And he’s right. There is cause for concern. First of all, we don’t know enough yet from this very small study to understand which classes of compounds will exhibit this pipette vs. acoustic discrepancy. Secondly, there is no metadata associated with the assay data itself (that we are aware of) that captures the distinction in the dispensing process, and this paper SHOULD encourage screeners to include this info in their data.
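
Capturing that metadata could be an easy first step. As a sketch only – the field names below are mine, not any database’s actual schema – a screening record could carry the dispensing process alongside the result, so that downstream modelers can stratify or filter by it:

from dataclasses import dataclass

@dataclass
class AssayResult:
    compound_id: str          # hypothetical identifier
    pic50: float
    dispensing_method: str    # e.g. "tip-serial-dilution" or "acoustic-direct"
    measured_on: str          # ISO date; handy for retention policies too

record = AssayResult(
    compound_id="CPD-0001",
    pic50=6.8,
    dispensing_method="acoustic-direct",
    measured_on="2013-05-01",
)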

The difference between tip and acoustic dispensing is of course only one of many issues that can accompany data measurements for compounds. Other obvious issues include the purity of what’s being screened – is it one component or many? Is an impurity producing the response? And, in terms of modeling, does the compound being screened match the compound that was supposedly purchased/synthesized? Classify this as analytical data required prior to screening. Add to that reproducibility and replicates, assay performance, decomposition in storage, and so on. Check out the comments in response to Derek’s post – clearly the screening community understands many of these challenges and has to deal with them.

Once upon a time someone from pharma made a couple of comments that I found very interesting: 1) it likely costs more to store screening data long term, and to support the informatics systems behind it, than it does to regenerate the data with new and improved assays on an ongoing basis; 2) as assay performance is understood, and assuming the materials are available, it is likely appropriate to flush any data older than three years and remeasure. Certainly, given this observation of pipette vs. acoustic bias, data measured with tips may need to be flushed and remeasured with acoustic dispensing methods.
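
Taking that “flush and remeasure” suggestion literally, here is a hedged Python sketch – the three-year cutoff is the one quoted above; everything else, including the field names, is my invention – of how records might be flagged as remeasurement candidates, either because they are stale or because they were dispensed with tips:

from datetime import date, timedelta

THREE_YEARS = timedelta(days=3 * 365)

def needs_remeasurement(record: dict, today: date) -> bool:
    """Flag stale records and anything measured via tip-based dispensing."""
    stale = today - date.fromisoformat(record["measured_on"]) > THREE_YEARS
    tip_based = record["dispensing_method"].startswith("tip")
    return stale or tip_based

records = [
    {"compound_id": "CPD-0001", "dispensing_method": "tip-serial-dilution",
     "measured_on": "2009-06-15"},
    {"compound_id": "CPD-0002", "dispensing_method": "acoustic-direct",
     "measured_on": "2013-01-10"},
]

flagged = [r for r in records if needs_remeasurement(r, date(2013, 5, 22))]
print([r["compound_id"] for r in flagged])  # -> ['CPD-0001']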

This work describes the observed differences between tip-based and acoustic methods, and the improved pharmacophore correlations. It highlights issues that likely exist in the data sitting in the assay screening databases (compounded with the chemistry issues) and brings into focus the question of what in the data can be trusted. For sure, not all the data is bad – but how do we separate good from bad, and what of the models that can be derived? As Derek summarized in his blog post: “How many other datasets are hosed up because of this effect? Now there’s an important question, and one that we’re not going to have an answer for any time soon.” And it’s depressing to think about how many data sets might be hosed…

There is an entire back story to this publication too: the challenges we had getting the work published and the multiple rejections we received in the process. But Sean has told that story in detail here. There’s also the story about the press release…and how editorial control extended from the paper itself to the press release (described here), a situation that I found inappropriate, over-reaching and simply not right. But it happened anyway…

So…data quality is an issue. It is confusing and, in all its complexity, hard to tease out and identify. But it’s science, it’s incremental learning and it’s trial by fire. And we have to wonder how many projects might have been burned simply by the dispensing processes.
