
Category Archives: Computing

How are NMR Prediction Algorithms and AFM Related?

There’s a really nice News piece over on Nature News, “Feeling the Shapes of Molecules“, reporting on how Atomic Force Microscopy (AFM) is being used to deduce chemical structure directly, one molecule at a time. It is, quite simply, stunning. The work extends the original study on pentacene that many scientists thought was spectacular, and it brings us one step closer to the dream of single-molecule structure identification. The paper, entitled “Organic structure determination using atomic-resolution scanning probe microscopy”, involves not only the IBM group responsible for the AFM work but also Marcel Jaspars, whose work I have watched for many years: I am trained as an NMR spectroscopist and have spent a lot of time working on computer-assisted structure elucidation (CASE) approaches to examine natural product structures (see references in here…).

The molecule they studied was cephalandole A, which had previously been mis-assigned. Interestingly, my old colleagues from ACD/Labs, where I worked for over a decade, and I had published an article in RSC’s Natural Product Reports (NPR) in which we studied “Structural revisions of natural products by Computer-Assisted Structure Elucidation (CASE) systems“. The basic premise of the article is that incorrect structures make it into the literature because analytical data are misinterpreted, and that computer algorithms, specifically NMR prediction and CASE algorithms, can be used to rule out structures elucidated by the scientists. It is hard to do justice to the entire review article in a blog post, as we detail the approaches to CASE and NMR prediction in depth, so I do recommend reading the NPR article. However, I am extracting the part that applies to the elucidation of the structure of cephalandole A and how algorithms would have been of value in negating the incorrect structure.

“In 2006 Wu et al. isolated a new series of alkaloids, in particular cephalandole A, 16. Using 2D NMR data (not tabulated in the article) they performed a full 13C NMR chemical shift assignment, as shown on structure 16.

Mason et al. synthesized compound 16 and, after inspection of the associated 1H and 13C NMR data, concluded that the original structure assigned to cephalandole A was incorrect. The synthetic compound displayed significantly different data from those given by Wu et al. The 13C chemical shifts of the synthetic compound are shown on structure 16A.

Cephalandole A was clearly a closely related structure with the same elemental composition as 16, and structure 17 was hypothesized as the most likely candidate. Compound 17 had been described in the mid-1960s, and this structure was synthesized by Mason et al. The spectral data of the reaction product fully coincided with those reported by Wu et al. The true chemical shift assignment is shown in structure 17. For clarity, the differences between the original and revised structures are shown in Figure 17.

We expect that 13C chemical shift prediction, if originally performed for structure 16, would have encouraged caution by the researchers (we found dA=3.02 ppm). Figure 18 presents the correlation plots of the 13C chemical shift values predicted for structure 16 by both the HOSE and NN methods versus the experimental shift values obtained by Wu et al. The large point scattering, the regression equation, the low R2 = 0.932 value (an acceptable value is usually R2 ≥ 0.995) and the significant magnitude of the g-angle between the correlation plot and the 45-degree line (a visual indication of disagreement between experiment and model) could indicate inconsistencies with the proposed structure and should encourage close consideration of it. Our experience has demonstrated that a combination of warning attributes can serve to detect questionable structures even in those cases when the StrucEluc system is not used for structure elucidation.

Figure 18. Correlation plots of the 13C chemical shift values predicted for structure 16 by the HOSE and NN methods versus experimental shift values obtained by Wu et al. Extracted statistical parameters: R2(HOSE) = 0.932; regression: dHOSE = 1.20·dexp − 25.6.
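The warning check the extract describes reduces to simple regression statistics between predicted and experimental shifts. Here is a minimal sketch in Python; the function names, the threshold handling and all shift values are mine (invented for illustration), not taken from the StrucEluc system:

```python
# Fit experimental = slope*predicted + intercept and compute R^2; a low R^2
# (the review suggests values well below ~0.995 warrant caution) flags a
# suspect structure assignment.

def regression_stats(predicted, experimental):
    """Least-squares line through (predicted, experimental) shift pairs, plus R^2."""
    n = len(predicted)
    mx = sum(predicted) / n
    my = sum(experimental) / n
    sxx = sum((x - mx) ** 2 for x in predicted)
    sxy = sum((x - mx) * (y - my) for x, y in zip(predicted, experimental))
    syy = sum((y - my) ** 2 for y in experimental)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2

def structure_suspicious(predicted, experimental, r2_threshold=0.995):
    """True when the predicted-vs-experimental correlation is too weak."""
    _, _, r2 = regression_stats(predicted, experimental)
    return r2 < r2_threshold

# Invented 13C shifts (ppm): a well-fitting and a poorly-fitting assignment.
good_pred, good_exp = [170.0, 128.0, 55.0, 21.0], [169.5, 128.6, 55.4, 20.8]
bad_pred,  bad_exp  = [170.0, 128.0, 55.0, 21.0], [160.0, 140.0, 48.0, 35.0]
print(structure_suspicious(good_pred, good_exp))  # False
print(structure_suspicious(bad_pred, bad_exp))    # True
```

In practice the threshold and the visual inspection of the scatter would be combined, as the extract notes; no single statistic is decisive on its own.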

So, for those NMR jocks who don’t have access to the genius of IBM scientists performing AFM, and yet want tools to help in the elucidation process, you’d do well to use NMR prediction algorithms and CASE systems… it’s rather embarrassing to have to issue a retraction on a paper with your name on it.

Meanwhile I am in awe of the work reported by Marcel and his colleagues at IBM. Clearly there’s a long way to go before such approaches are mainstream, but the flag is planted… this is where things will speed up, and we are surely destined, I hope(!), to see many more reports of this type of work as it progresses. Let’s hope. Feedback on the NPR article is welcomed!

Organic structure determination using atomic-resolution scanning probe microscopy

 


My presentation on Mobile Chemistry and does Slideshare work for marketing?

I have been using Slideshare to post my presentations for a couple of years now. It’s easy to use, has high traffic, offers great utilities like embedding (used below), and is a “safe place” to store my presentations (assuming it stays alive!). The presentation I gave at the Special Libraries Association in New Orleans this week received over 300 views in 24 hours. It’s at 735 views as I write this, after 3 days. I have received emails, it’s been embedded in other websites, and I’ve received some very positive feedback. Now I need to find time to do a voiceover version and put it on YouTube!

Slideshare is a super way to share my presentations, store my documents/papers for public exposure and get the message out. I recommend it to everyone!

 

Posted by on June 18, 2010 in Computing

 


Good Science Takes Time: 16 months to examine NMR Prediction Performance

In October 2007 I got involved in an exchange with Peter Murray-Rust from Cambridge University about Open Notebook NMR. The original post is here and my response is here. The basic premise of the exchange was that I believed that quantum-mechanical NMR predictions had a lot of limitations relative to empirical predictions. I made the comment based on over two decades working in NMR – the first decade managing a number of NMR laboratories and the second decade involved in the delivery of commercial software solutions, including NMR predictions, to the marketplace.

In my original response I stated “This has the potential to be a very exciting project. While I wouldn’t write the paper myself without doing the work I’ll certainly try the approach. Let’s see what the truth is. The challenge now is to get to agreement on how to compare the performance of the algorithms. We are comparing very different beasts with the QM vs. non-QM approaches so, in many ways, this should be much easier than the challenges discussed so far around comparing non-QM approaches between vendors.” and asked Peter to participate in a collaboration with us to do the comparison.

I then posted the blogpost below. It is included in its entirety as it defines my thought process from almost two years ago and the approach that could be taken. In the blogpost I address a post directly to Peter. If you know the story, skip past the history to the conclusions, where I discuss the outcome of the work we have done since this discussion started.

“Previously I blogged about “An Invitation to Collaborate on Open Notebook Science for an NMR Study“. I judged it to be a great opportunity to “help build a bridge between the Open Data community, the academic community and the commercial software community for the benefit of science.” In particular, I believe the project offers an opportunity to answer a longstanding question of mine. Specifically, I have seen a lot of publications in recent years utilizing complex, time-consuming GIAO NMR predictions. Having been involved with the development of NMR prediction algorithms for the past few years (while working with the scientists at ACD/Labs), my judgment is that these complex calculations can be replaced by calculations taking just a couple of seconds on a standard PC. I believe this to be true for most organic molecules. I do not believe such calculations would outperform GIAO predictions for inorganic molecules, organometallic complexes or solid-state shift tensors. However, there has never been a rigorous examination comparing the performance differences. I believe this project offered an excellent opportunity to validate the hypothesis that HOSE code/neural network/increment-based predictions could, in general, outperform GIAO predictions.

The study was to be performed on the NMRShiftDB, now available on ChemSpider. I’ve blogged previously about the validation of the database (1,2). The conversation about the NMR project has continued, and Peter has talked about some of the challenges of Open Notebook Science based on Cameron Neylon’s comments. I’ve posted the comments below to the post and they will likely be moderated in shortly. I post them here for the purpose of conclusion, since I don’t think my original hopes will come to fruition. Thanks to those of you who have been engaged both on and off blog. I suggest we all help with Peter’s intention to explain the identifiers that are being extracted in the work.

“Can you provide some more details regarding your concerns here: “it would be possible for someone to replicate the whole work in a day and submit it for publication (on the same day) and ostensibly legitimately claim that they had done this independently. They might, of course, use a slightly different data set, and slightly different tweaks.”

I have two interpretations:

1) Someone could repeat the GIAO calculations in a day and identify outliers and submit for publication

2) Someone could do the calculations using other algorithms and identify outliers etc and submit for publication

Maybe you mean something else?

For 1), the GIAO calculations CANNOT be repeated, since no one has access to Henry’s algorithms and, based on your comments, he is modifying them on an ongoing basis as a result of this work. Even if they had their own GIAO calculations, unless they have improved the performance dramatically or have access to a “boat load” of computers, the calculations will take weeks (based on your own estimates). That said, comparing one GIAO algorithm to another is valid science and absolutely appropriate and publishable. Also, if they had used the same dataset as you, with another algorithm to check predictions and identify outliers, it WOULD be independent. Related to the work you are doing, for sure, but independent.

For 2), using other algorithms on the same dataset is valid and appropriate science. This is what people do with logP prediction (or MANY other parameters): they validate their algorithms on the same dataset many times over. It’s one of the most common activities in the QSAR and modeling world, in my opinion. And people do use slightly different tweaks… it’s one of the primary ways to improve the algorithms. Henry’s doing this right now to deal with halogens, according to your earlier post. Wolfgang Robien at the University of Vienna, ACD/Labs and others use their own approaches, but both at a minimum can use HOSE codes and neural networks. Same general approaches with tweaks. They give different results… all is appropriate science.

Returning to the comment “it would be possible for someone to replicate the whole work in a day and submit it for publication (on the same day) and ostensibly legitimately claim that they had done this independently.”

Wolfgang Robien has taken the NMRShiftDB dataset and performed an analysis. It’s posted here. ACD/Labs performed a similar analysis, as discussed on Ryan’s blog here. One of the outputs is this document. This resulted in further exchanges and dialog. The parties have discussed this on the phone and face to face, with Ryan talking with Wolfgang recently in Europe at a conference.

This was heated and opinionated, for sure: STRONG scientific wills and GREAT scientists defending their approaches and performance. Wolfgang is NOT an enemy of ACD/Labs… he has made some of the greatest contributions to the domain of NMR prediction and, in many ways, has been one to emulate in terms of his approach to quality and innovation in creating breakthroughs in performance. He is a worthy colleague and drives improvement through his ongoing search for improvements in his own algorithms. I honor him.

The bottom line is this: approaches for the identification of outliers in NMRShiftDB have been DONE already. It’s been discussed online for months… just do a search on “Robien NMRShiftDB” or “ACD/Labs NMRShiftDB” on Google. There are hundreds of pages. We/I just published on the validation of the NMRShiftDB. I blogged about it and you posted it here. Feedback on outliers has been returned to Christoph and changes have been made already. So in many ways you are doing repeat work, just using a different algorithm and identifying new outliers. Neither ACD/Labs’ nor Wolfgang’s work was exhaustive; it was very much a first cut, but it did help edit many records already. NO DOUBT you will find new outliers.

I’ve gone back to the original post and extracted two purposes of the work:

1) To perform Open Notebook Science

2) quote “To show that the philosophy works, that the method works, and that NMRShiftDB has a measurable high-quality.”

1) has already changed and is an appropriate outcome from the work (http://wwmm.ch.cam.ac.uk/blogs/murrayrust/?p=743).

2) The method of NMR prediction applied to NMRShiftDB to prove quality… high or not… has been done already. Wolfgang and ACD/Labs did it already. I judge you’ll reach similar conclusions… it’s the same dataset.

Stated here (http://wwmm.ch.cam.ac.uk/blogs/murrayrust/?p=737) is: “We shall continue on the project, one of whose purposes is to investigate the hypothesis that QM calculations can be used to evaluate the quality of NMR spectra to a useful level.” It’s a valid investigation, and this is testing whether QM can provide good predictions. This is of course known already from the work done by Rychnovsky on hexacyclinol.

To summarize:

1) Using NMR predictions to identify outliers – already done (Robien and ACD/Labs)

2) Validating that GIAO predictions are useful to validate structures – already done (hexacyclinol study)

3) Validating the quality of NMRShiftDB – already done (Robien, ACD/Labs)

All this brings me down to what I “think” are the intentions or outcomes for the project at this point… but I have likely missed something…

1) Identify more outliers that were not identified by the studies of others

2) Deliver back to Christoph and the NMRShiftDB team a list of outliers/concerns/errors with annotations/metadata in order to improve the Open Data source of NMRShiftDB

3) Allow Nick Day to use a lot of what was learned delivering CrystalEye for a second application around NMR and useful for his thesis (A VERY valid goal..good luck Nick)

4) Show the power of blogging to drive collaboration via Open Collaborative NMR

Some additional project deliverables, I think, include:

1) make online GIAO NMR predictions available

The project deliverables you are working on are defined here and I believe are consistent: http://wwmm.ch.cam.ac.uk/blogs/murrayrust/?p=742

* Create a small subset of NMRShiftDB which has been freed from the main errors we – and hopefully the community – can identify.

* Use this to estimate the precision and variance of our QM-based protocol for calculating shifts.

* Refine the protocol in the light of variance which can be scientifically explained.

What I would still like to see (BUT this project belongs to you/Henry/Nick, of course, and you define what it is) is:

1) To help “build a bridge between the Open Data community, the academic community and the commercial software community for the benefit of science.” Wolfgang is in academia, so are you, ACD/Labs is commercial and I’m independent (but of course am associated with ChemSpider… I am an NMR spectroscopist… it’s why I’m interested)

2) To validate the performance of GIAO vs. HOSE/NN/Inc by providing the final dataset that you used and statistics of performance for GIAO on that dataset. I’d like to publish the results jointly, if you would be willing to work with the “dark side”

3) To identify where GIAO can outperform the HOSE/NN/Inc approaches

Wolfgang also has thoughts, where he says: “What would be great for the scientific community: do calculations on compounds where sophisticated NMR techniques either fail or are very difficult to perform – e.g. proton-poor compounds – or simply ask for a list of compounds which are really suspicious (either the structure is wrong or the assignment is strange, but the puzzle can’t be solved because the compound is not available for additional measurements).”

I’ve put a lot of effort into blogging about this project over the past few days. I’m about to invest some time in making sure that you get information about outliers so you are not doing repeat work. I judge that my hopes for deeper collaboration will remain unfulfilled, so I’ll give up on asking.

I’ll do what I can to help from this point forward and keep my own rhetoric off this blog, restricting it to ChemSpider so as not to distract your readers. I look forward to helping for the benefit of the community.”

While I was at ACD/Labs I worked with a number of truly excellent scientists. These people were at the forefront of developing NMR prediction technologies as well as Computer Assisted Structure Elucidation (CASE) software. Over the past year and a half I have had the privilege of continuing some of the work I was involved with while at ACD/Labs and our publication regarding “Empirical and DFT GIAO quantum-mechanical methods of 13C chemical shifts prediction: competitors or collaborators?” was released recently. The abstract states:

“The accuracy of 13C chemical shift prediction by both DFT-GIAO quantum-mechanical (QM) and empirical methods was compared using 205 structures for which experimental and QM-calculated chemical shifts were published in the literature. For these structures, 13C chemical shifts were calculated using HOSE code and neural network (NN) algorithms developed within our laboratory. In total, 2531 chemical shifts were analyzed and statistically processed. It has been shown that, in general, QM methods are capable of providing similar but inferior accuracy to the empirical approaches, but quite frequently they give larger mean average error values. For the structural set examined in this work, the following mean absolute errors (MAEs) were found: MAE(HOSE) = 1.58 ppm, MAE(NN) = 1.91 ppm and MAE(QM) = 3.29 ppm. A strategy of combined application of both the empirical and DFT GIAO approaches is suggested. The strategy could provide a synergistic effect if the advantages intrinsic to each method are exploited.”

The conclusion includes the following statements: “It has been shown that, in general, QM methods are capable of providing similar but inferior accuracy to the empirical approaches, but quite frequently they give larger mean average error values. This is accounted for mainly by difficulties in selecting the appropriate calculation protocols and difficulties arising from molecular flexibility. The data show that the average accuracy of the QM methods is 1.5–2 times lower than the accuracy shown by the empirical methods. For the structural set examined in this work, the following MAEs were found: MAE(HOSE) = 1.58 ppm, MAE(NN) = 1.91 ppm, MAE(QM) = 3.29 ppm.”
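The MAE figures quoted above are just the mean absolute deviation between predicted and experimental shifts. A short sketch of the statistic; the shift values below are invented for illustration and are not from the paper’s dataset:

```python
def mean_absolute_error(predicted, experimental):
    """Average |predicted - experimental| over all assigned shifts, in ppm."""
    pairs = list(zip(predicted, experimental))
    return sum(abs(p - e) for p, e in pairs) / len(pairs)

# Invented 13C shifts (ppm) for one small molecule.
experimental   = [170.2, 128.5, 55.1, 21.0]
empirical_pred = [169.0, 129.1, 56.0, 20.5]   # e.g. a HOSE-code prediction
qm_pred        = [174.0, 125.0, 58.5, 24.0]   # e.g. a DFT-GIAO prediction

print(round(mean_absolute_error(empirical_pred, experimental), 2))  # 0.8
print(mean_absolute_error(empirical_pred, experimental)
      < mean_absolute_error(qm_pred, experimental))                 # True
```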

In order to demonstrate that empirical approaches outperform QM methods in general, we examined 2531 chemical shifts associated with 205 molecules. It was a rather complete study! It took a long time to do the work, but it wasn’t done as Open Notebook NMR. It’s published in Magnetic Resonance in Chemistry here: DOI: 10.1002/mrc.2571. Enjoy!

 


Optical Structure Recognition, Solubility Prediction and Neutral Parties

There are a few areas of cheminformatics that I watch out of professional interest, but more out of passion if the truth be known. As an NMR spectroscopist I still watch NMR processing and prediction software, CASE (computer-assisted structure elucidation) systems, structure drawing and databasing and, given our recent interest over at ChemSpider in chemical name and structure image recognition, I watch OSR software developments. OSR is Optical Structure Recognition, the equivalent of OCR for chemical structure images. (Egon and I are both interested in OSR, it seems…)

Probably the best-known OSR system on the market over the past few years is CLiDE, and I have had a chance to work with it, as discussed here. There are now others available, specifically ChemOCR from the Fraunhofer Institute, OSRA from the National Cancer Institute and ChemReader from the University of Michigan. I can’t find it now, but there was also Kekule, also funded by the NCI.

As with all software focused on a particular problem, these packages share the same intention: to convert structure images into machine-readable chemical structure formats. Their technology approaches are similar but differ, of course, in their implementation. This post isn’t about those differences; it is about how they can be compared.

Recently a gauntlet was thrown down in regard to solubility prediction. The question asked was “Can You Predict Solubilities of Thirty-Two Molecules Using a Database of One Hundred Reliable Measurements?“. The details of the challenge are here. What was nice about this is that the results could be judged by independent parties. What was objective, at least from where I’m sitting, is that experts in the field got to review the data and comment. This is very different from chemistry software vendors comparing each other’s products and standing by their own opinions. I’ve been involved with this myself in terms of NMR prediction comparisons, and these discussions can get rather heated. There was similar “warmth” in the air about a year ago in the OSR domain, as discussed here.

So, with so many efforts in the area of OSR, how can we get independent testing of multiple OSR packages and a true representation of their performance characteristics? Since some packages are commercial while others are Open Source, we would need to separate the distinctions of “packaging” from performance: a set of objective criteria separating usability, workflows and interface from algorithms. This doesn’t mean the former are not important, nay critical, to the success of a software package, BUT the algorithms, the science, the technology should be the focus of the study.

I suggest taking 100-200 images from different sources and applying the various software packages to validate performance in a neutral way. The study should be conducted by neutral parties… not so neutral that they don’t care about the work, but neutral in the sense that they are wed only to the outcome of an objective comparison of the OSR algorithms, whatever it turns out to be. I have an interest in this, so will throw my hat in the ring… I have already done some work on CLiDE and OSRA (1, 2, 3, 4). Who else would be interested?
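One way such a neutral comparison could be scored is sketched below: each package emits one machine-readable identifier per test image (an InChI string, say), and accuracy is the fraction of exact matches against a curated reference set. The image names, identifiers and package output here are all invented; a real study would also have to handle near-misses, tautomers and drawing-style variants rather than exact-match only.

```python
def score_package(recognized, reference):
    """Fraction of test images whose recognized identifier exactly matches
    the curated reference identifier."""
    hits = sum(1 for image, ident in recognized.items()
               if reference.get(image) == ident)
    return hits / len(reference)

# Invented reference set and one package's (invented) output.
reference = {
    "img_001": "InChI=1S/CH4/h1H4",
    "img_002": "InChI=1S/H2O/h1H2",
}
osr_output = {
    "img_001": "InChI=1S/CH4/h1H4",           # correctly recognized
    "img_002": "InChI=1S/CH4O/c1-2/h2H,1H3",  # misrecognized
}
print(score_package(osr_output, reference))  # 0.5
```

The virtue of scoring on a canonical identifier rather than on raw connection tables is that two correct recognitions of the same structure always compare equal, regardless of atom ordering.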

The challenges…there are a few:

1) Would all of the OSR producers share their software packages with a neutral panel of reviewers?

2) Who would fund the work? The Solubility Challenge appears to have been funded by Pfizer. How quickly would it get done without funding? Everyone’s busy.

3) How would the panel be selected?

4) Would the work be conducted without all OSR producers participating?

5) About a dozen more concerns….probably Jonathan Goodman, Robert Glen and John Mitchell could give some great advice based on their experience with the Solubility Challenge.

I think this type of comparison needs doing…you?

 

Posted by on October 30, 2008 in Community Building, Computing, Software

 


A Week of Writing NMR Publications

I am a week behind on submitting a Chapter for publication in the Second Edition of the Encyclopedia of Spectroscopy. Too much work…not enough time. In the meantime I’ve co-authored a couple of publications with friends from ACD/Labs…one of these addresses an issue discussed on the ChemSpider Blog…a longstanding wish to compare empirical and quantum-mechanical NMR prediction approaches.

A Systematic Approach for the Generation and Verification of Structural Hypotheses.

During the process of molecular structure elucidation, selection of the most probable structural hypothesis may be based on chemical shift prediction. The prediction is carried out by either empirical or quantum-mechanical (QM) methods. When QM methods are used, NMR prediction commonly utilizes the GIAO option of the DFT approximation. In this approach the structural hypotheses are expected to be produced by the scientist. In this article we hope to show that the most rational manner by which to create structural hypotheses is actually the application of an expert system capable of deducing all potential structures consistent with the experimental spectral data, specifically using 2D NMR data. When an expert system is used, the best structure(s) can be distinguished by chemical shift prediction using either an incremental or a neural net algorithm. The time-consuming quantum-mechanical calculations can then be applied, if necessary, to one or more of the “best” structures to confirm the suggested solution.
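The ranking step that the abstract describes can be sketched as follows. The candidate names and shift values are invented, and the MAE ranking below stands in for whatever scoring the expert system actually uses; it is only meant to illustrate the workflow of ranking cheaply before verifying expensively:

```python
def rank_candidates(candidates, experimental):
    """Order CASE-generated candidate structures by the mean absolute error
    (ppm) between their empirically predicted 13C shifts and experiment;
    only the front-runner(s) would then merit expensive QM verification."""
    def mae(candidate):
        diffs = [abs(p - e) for p, e in zip(candidate["shifts"], experimental)]
        return sum(diffs) / len(diffs)
    return sorted(candidates, key=mae)

# Invented experimental shifts and two invented candidate structures.
experimental = [160.0, 120.0, 30.0]
candidates = [
    {"name": "candidate_B", "shifts": [150.0, 131.0, 38.0]},  # poor fit
    {"name": "candidate_A", "shifts": [159.2, 120.8, 29.5]},  # good fit
]
print(rank_candidates(candidates, experimental)[0]["name"])  # candidate_A
```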

The Application of Empirical Methods of NMR Chemical Shift Prediction to Determine Relative Stereochemistry.

The reliable determination of stereostructures contained within chemical structures usually requires utilization of NMR data, chemical derivatization, molecular modeling, quantum-mechanical calculations and, if available, X-ray analysis. In this article we show that the number of stereoisomers which need to be thoroughly verified can be significantly reduced by applying NMR chemical shift calculation to the full set of possible stereoisomers using a fragmental approach based on HOSE codes. The usefulness of the suggested method is illustrated using experimental data published for artarborol.

 

Posted by on August 31, 2008 in Computing, General Communications

 

Retrosynthetic Analysis Presentation at ACS-Philly

I had the pleasure of presenting ARChem Route Designer, a retrosynthetic analysis tool from SimBioSys, at the American Chemical Society meeting in Philadelphia last week. While I am not a synthetic organic chemist, I have done my fair share of syntheses during my BSc and PhD and actually had a bit of a green thumb when it came to purity and yield. When given the opportunity, though, I ran instead toward exciting nuclear spins in large magnets (and enjoyed my choice for many years).

Since I spent over a decade in the commercial sector managing the development of chemistry software, I have always had an interest in the development of a retrosynthetic analysis tool. It’s a lot of work and requires a deep understanding of organic chemistry. ARChem results from combining the deep chemical understanding of Peter Johnson with the software development skills of SimBioSys. Peter was not able to make it to the ACS meeting in Philadelphia and, since I have had some experience with ARChem as a result of working with SimBioSys over the past few months, I was asked to step in and present the product.

 

A link to the presentation is given here. A paper on ARChem has also been submitted if anyone is interested.

 

Posted by on August 24, 2008 in Computing, Consulting

 


Publication for People Interested in Computer Assisted Structure Elucidation

Recently I took delivery of a box of reprints of a review article written by our team of Mikhail Elyashberg (ACD/Labs), Gary Martin (Schering-Plough) and myself. It was a major undertaking and took two years of work to reach final release. It is a >100-page typeset article. The title is

“Computer Assisted Structure verification and elucidation tools in NMR-based structure elucidation” (doi:10.1016/j.pnmrs.2007.04.003)

The article outline is posted here.

If anyone would like a copy of the article please send me an email at antonyDOTwilliamsATchemspiderDOTcom. I will ask you to cover the costs of shipping via paypal.

 

Posted by on July 25, 2008 in Computing, Software

 


Hamburger PDFs and Making Them Structure Searchable

There have been numerous conversations about “Hamburger PDFs” over the months, and the most recent exchange is that between Chris Rusbridge and Peter Murray-Rust. Another conversation I have seen going on has been about making Word documents structure-searchable (I cannot track down the appropriate blog postings at present).

This is just an FYI comment for the community, really, since there is a general assumption that Word documents and PDFs cannot be made structure-searchable. The truth is that both can be. How? Well, you need to write the correct information into the file to enable it, but it’s possible. There are a number of solutions out there allowing structure-based searching of Word document files. I believe the first one was originally from Oxford Molecular before being acquired by Accelrys. I think there are now multiple offerings, including, I believe, CambridgeSoft, ACD/Labs and probably others.

The only PDF structure searching capability I am aware of is that created by ACD/Labs a few years ago. Their website states “Our Search for Structure system allows you to seek out chemical structures in various file formats throughout your computer’s file systems. These formats include: SK2, MOL, SDF, SKC, CHM, CDX, RXN, and PDF (Adobe Acrobat); DOC (Microsoft Word), XLS (Microsoft Excel), and PPT (Microsoft PowerPoint), and ACD/Labs databases: CUD, HUD, CFD, NDB, ND5, and INT.”

For PDF it was required that structure files were “tagged” appropriately when written to PDF by an embedded PDF generation capability. Since the PDF format can be extended, ACD/Labs did so. If we wanted to make the majority of PDF files structure-searchable, then it seems the appropriate thing to do would be to extend the general PDF format for the Life Sciences, talk to Adobe about including the capabilities in their tools, and get the publishers to support it. OK, there are details… but why isn’t anyone talking about extending PDF to support structures in this way? It was already proven, years ago.
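As an illustration of why tagged payloads make files searchable at all, here is a crude sketch that scans a document’s raw bytes for embedded V2000 molblock fragments (from the “V2000” counts-line marker to the “M  END” terminator). To be clear, this is not how the ACD/Labs tool works; a real indexer would parse the PDF or OLE container properly, and the sample bytes below are invented.

```python
def extract_structure_spans(data: bytes):
    """Return every span of text running from a 'V2000' counts-line marker
    to its 'M  END' terminator, a crude signal that MDL molblocks are
    embedded somewhere in the file."""
    text = data.decode("latin-1", errors="replace")
    spans, pos = [], 0
    while True:
        start = text.find("V2000", pos)
        if start == -1:
            break
        end = text.find("M  END", start)
        if end == -1:
            break
        spans.append(text[start:end + len("M  END")])
        pos = end + len("M  END")
    return spans

# Invented byte stream standing in for a PDF with one embedded molblock.
sample = b"%PDF-1.4 ...stream...\n  2  1  0  0  0  0  0  0  0  0999 V2000\nM  END\n...endstream"
print(len(extract_structure_spans(sample)))  # 1
```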

The next thing will be structures getting embedded into Word documents and made searchable as if it were something novel. It’s been done many times already. The ACD/Labs website states: “Microsoft Word documents with structures created in ChemDraw or MDL ISIS can also be retrieved. Not only can you perform exact structure searches, but you can also search by substructure. Added options allow you to preview search results, open search result documents in ChemSketch as well as in other applications, and store search results for later access.” There are other products doing this too.

Strangely, people don’t seem to know about these capabilities. They will… as we move forward to index the web for structures, we hope to build the capability to search structures inside Word documents directly.

 

Posted by on May 3, 2008 in Computing, Software

 


Spaces, Dashes and Issues with Nomenclature Conversion

I’ve been involved with nomenclature in one way or another for well over a decade. While I’m an NMR spectroscopist by training (as evidenced by the >100 publications in this area), during my decade-long tenure at ACD/Labs I learned a lot about PhysChem parameters and their prediction, systematic nomenclature, structure drawing and databasing, chemometrics, LC-MS data analysis and so on. As the product manager for many of these products I was dropped in the deep end. Nomenclature was something I really enjoyed. While I am not a nomenclature specialist at the “generate a perfect systematic name for Taxol” level, I have a decade of experience working with nomenclature software for both the generation of names from structures and the generation of structures from names. Having worked with hundreds of customers and their needs, I’ve dealt with a lot of beliefs about nomenclature and perceptions of how to use the tools.

Having just spent the week at Bio-IT, and having been engaged in a number of conversations about name-to-structure conversion, it became clear that one of the prevailing beliefs among users of name-to-structure conversion packages is that spaces in systematic names can be disregarded. It appears that members of the text-mining-for-chemistry community are using one or more of the commercial name-to-structure software programs to convert chemical names to structures and, prior to feeding the algorithms, are removing all white space from the names. In some cases they are doing the same with dashes. How well is that going to work? Is it safe to remove spaces from chemical names and assume this has no effect? Is consideration being given more to the accuracy of the text-mining than to the nature of systematic nomenclature?

Let’s look at some examples of the result of removing spaces from chemical names. Consider the different results just from moving a space.

The impact of spaces on naming

Single structure to separate components based on a space.

Another example of multiple to single component structure.

Another example of space-collapsing structure searching

Clearly there is an impact from removing spaces from systematic names. The same is true of the random removal and insertion of dashes. The generation of systematic names by chemists is far from ideal, as discussed by Gernot Eller here. The mishandling of correct names when converting back to structures adds one more problem layer. Many of us are using text mining and name-to-structure conversion to link between documents and structures. It is far from a minor undertaking.
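A safer pre-processing step than stripping white space outright might look like the sketch below: normalise the noise that text extraction introduces without deleting the spaces and dashes that carry chemical meaning. The function name and rules are mine, not from any commercial name-to-structure package:

```python
import re

def prepare_name(name):
    """Collapse runs of whitespace to single spaces and map Unicode hyphen
    and dash variants onto ASCII '-', but never delete spaces or dashes
    outright: a space in a systematic name can separate the components of a
    salt or mixture, so removing it changes which structure the name means."""
    name = re.sub(r"\s+", " ", name.strip())
    name = re.sub(r"[\u2010-\u2015]", "-", name)  # hyphen, en dash, etc.
    return name

print(prepare_name("  sodium   acetate "))    # sodium acetate
print(prepare_name("propan\u20131\u2013ol"))  # propan-1-ol
```

The point is that normalisation should be lossless with respect to the name’s grammar; anything more aggressive belongs inside the name-to-structure parser, which knows the grammar, not in a text-mining pre-pass, which does not.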

 


FPGAs, GPUs and now the Cell Processor – A Call for Comments

I have received a couple of emails off-blog about yesterday’s post on the Cell processor and its application to scientific computing.

The basic premise is one of scepticism. The hot area of interest up to a couple of years ago was Field Programmable Gate Arrays (FPGAs). Nowadays a lot of discussions focus on the advantages of GPUs. However, the majority of chemists have not even heard of these processors; they remain of interest mainly to programmers and to hardware hobbyists and experts. For the chemists we spoke to at Bio-IT, the terms FPGA and GPU went over their heads. Not true for the IT people. When we mentioned the Cell processor, it went over the heads of MOST people. So the Cell is pretty much an unknown entity to most.

People have been programming FPGAs for years, but none of this work has gone mainstream in the scientific computing world that I am aware of. A number of researchers are now working with GPUs, but have any gone mainstream, and will they? The Cell might just be different.

So, a question for the readership: what are your thoughts, comments and opinions on FPGAs vs. GPUs vs. the Cell processor? Where does each have strengths over the others? What do people think about the future of GPUs in terms of scientific computing? What are your thoughts about the Cell processor?

 

Posted by on May 2, 2008 in Computing

 


 