We have been working with Vincent Scalfani from the University of Alabama towards supporting a community of 3D printing crystal structure enthusiasts. There is a listserv, [3DP-XTAL], hosted by the University of Alabama; if you would like to be added to the listserv, simply email Vincent at vfscalfaniATuaDOTedu. They are also in the process of creating a 3D printing crystal structure wiki/blog for the community.
With Vincent as the driver, we are creating a public online repository for 3D printable structure files (.stl and .wrl). He used Jmol to prepare ~30,000 molecules and solids in .wrl and .stl format, and we will be hosting them on part of our data repository. We are very excited about this project, and there will be more information at the upcoming 248th American Chemical Society Meeting in San Francisco, CA. See CINF Abstract #125.
The flier that will be distributed at the IUCr meeting in Montreal in August is available on Slideshare here:
I give a lot of presentations. A lot. Maybe too many. At the upcoming ACS meeting in San Francisco I am giving nine presentations. When I give a presentation I like to share it afterwards. I need the distribution method to be quick and easy to use, and hopefully one that lets users of the platform find the talk if they are interested in it. I have used various platforms to disseminate my talks. There are really no usability issues with any of them…the various groups have done a good job building their platforms. I am a user of both Slideshare and Figshare and my accounts are here: Slideshare and Figshare. This week I received my weekly stats email and the numbers are below…>3000 views in one week and 400,000 total views of my talks, preprints etc.
Compare this with my Figshare stats of >6600 views ever.
The majority of talks I upload to Slideshare have about 3000 views in 2 months as shown below…some have over 25000 now.
If I compare this with Figshare, the most views any of my items has received is around 500, and that was over 18 months.
Clearly my presentations on Slideshare get far higher exposure. However, the usual question of quality vs quantity comes to bear. The audience on Figshare, primarily scientists, is likely more my audience than the one on Slideshare. What I should do, though it is time-consuming (but only a few additional minutes per presentation), is post each presentation to Slideshare, to Figshare, to my Academia.edu account, to my ResearchGate account, to Vimeo, to YouTube etc. But I only have so much time, and right now my easiest deposition route is Slideshare. In terms of my actual prioritization of places to deposit, based on the number of views and downloads, the order is
I have been working with the Kudos platform for a few weeks now…see here. Two weeks ago I chose to run an experiment. Here it is… (you may want to watch the video on the previous post first to understand what enriching an article is and why I feel the platform is of value)
1) I enriched an article that I had authored in 2013. GENERALLY after I enrich an article I tweet it out and then look for the response… you can see some of the results below for the articles I have done…I am starting from most recent and going back to the 80s but with 150 articles to do it’s a long journey…
The important stats to look at are Kudos Views, Clickthroughs and Share Referrals. ULTIMATELY we want clickthroughs and views on the publisher platform. Kudos Views are good, but Share Referrals are very useful I believe. In the list below notice that for the fifth article in the list the referrals are ZERO and the Kudos Views are low relative to the others…but this is the only one I haven’t “shared”…i.e. no tweets and no Facebook posts. My hypothesis was “OK, so it’s not Kudos itself that is helping to drive the views/shares/clickthroughs but MY work to share…Can I prove this?”
2) To test the hypothesis…and I think the result is conclusive…I did the following.
- Choose one article that had been on Kudos for a while and had low views/shares (as all articles that have not been enriched do)
- Enrich the article in increments and see if it makes a difference…see the A’s shown on the chart below as those are enriching activities
- Monitor the views and see if any enriching activities made a difference.
- Wait two weeks and share the article and see what happens
3) The chart below proves the point.
- Enrichment, while useful for me as it helps aggregate information of value to the article, does NOTHING to drive attention to the article…i.e. the community doesn’t know what I’ve done without me telling them
- Once I share then BOOM…views/accesses/share referrals go through the roof. I went from 7 to 42 Kudos views in <2 hours
So, an article languished on Kudos for two weeks with no real traffic. I enriched it…no real impact. Not until I released it out to my networks, and it got retweeted and passed on to others, did traffic increase. I have fairly good followings on the different social network tools, built up over a number of years. But what will Kudos do for those people who don’t use Facebook or Twitter? Yes, they can enrich the article, but the only way to let people know then is via email. Pushing the Kudos articles out to networks on an author’s behalf would be very useful of course. Things will get exciting if and when Kudos uses intelligent algorithms to deliver updates to people interested in specific article topics. Google Scholar Citations does this for me now…it uses my published articles to provide me with notifications and pointers to related articles, not just articles that cite me. If Kudos could send me an email with “You might be interested in these new articles claimed on Kudos…” then that may be of value also. I think a Follow button would make sense, whereby I can follow an article and, if it is enriched further by the author, I am informed by Kudos regarding what new enrichment is added.
This presentation was given at the JC Bradley Memorial Symposium on 14th July 2014
Jean-Claude Bradley had an incredible passion for providing open science tools and data to the community. He had boundless energy, no shortage of ideas and ran so many projects in parallel that it was often difficult to keep up. But at RSC we tried. We provided access to our data, our application programming interfaces and lots of our out-of-hours time to help turn his vision into reality. As a result we helped in the delivery of the SpectralGame to help people learn about NMR and we supported the integration of our services into GoogleDocs underpinning the management and curation of physicochemical property data. We tweaked a number of our services based on JC’s input and as a result we have ended up with a suite of capabilities that serve many of our existing efforts to integrate to electronic lab notebooks and support the ongoing shift towards Open Chemistry. JC was very much ahead of his time….and we were glad to have supported his work. This presentation will give a snapshot of some of the work we did to support his vision.
On July 14th 2014 the Jean-Claude Bradley Memorial Symposium was held to celebrate the life and work of Professor Jean-Claude Bradley of Drexel University. This slide deck highlighting dedications made to JC on various blogs and the memorial symposium wiki helps to capture JC’s contributions to science and how we felt about him.
On 14th July 2014 a memorial symposium to celebrate the life and work of Professor Jean-Claude Bradley, the father of Open Notebook Science, used this photo loop to connect us to some of his activities and give us a glimpse into his personal life.
Next week I am looking forward to co-hosting the JC Bradley Memorial Symposium. How did this come about? The symposium is of course catalyzed by the tragic loss of our friend and colleague Jean-Claude. This hit Andy Lang and myself very hard (and of course many others) because for a number of years we had been collaborating on a number of projects regarding Open Science, many of these to be discussed in some detail next week at the symposium. When we discussed ways to memorialize JC we realized we would both be in the UK at the same time and, since JC had so many interactions in place with European scientists and advocates for Open Science, we decided to try and make a go of a symposium to celebrate his work.
We received general support for a gathering and went seeking a venue that would be kind enough to host us. Thank you so much to Christoph Steinbeck for trying to make this work at EBI, but because of the popularity of the venue no rooms were available. We extended our hand to Bobby Glen at the University of Cambridge and, gentleman that he is (!), he immediately gave us a home for the gathering. Bobby is Director of the Unilever Centre for Molecular Science Informatics at the university, and many well-known scientists and open science evangelists work there, one of these of course being Peter Murray-Rust. Peter threw his support behind the symposium 100% and, together with Susan Begg, has taken all responsibility for local coordination. Despite Peter’s demanding travel schedule we have been able to coordinate the event, and we owe a debt of gratitude to Susan for all the work she has done in the background to bring this together in such a short time. Literally, this event will come together as a result of a few Skype calls between Andy and me and a series of email exchanges between us, Peter and Susan. When the event comes together, starting Sunday evening with a social gathering, and finally on Monday morning when the formal gathering kicks off, then intention, collaboration, trust and willingness to get it done will be the underpinnings of the meeting.
“Intention, collaboration, trust and willingness to get it done” speak volumes regarding how JC Bradley approached science. He was a get-it-done type of guy. The speakers that will gather next week, listed here, operate in the same way in my mind. They are driven, passionate and getting it done. We thank every one of them for taking their time to come and celebrate JC.
The gathering will honor his work and enormous contributions to open science. He was ahead of his time. With this gathering of people, and the support of the attendees, we hope that we will be able to discuss how to drive forward what he had put so much effort into…OPEN NOTEBOOK SCIENCE. Peter Murray-Rust has already outlined his thoughts and will expand at the gathering. What we will need to do is consider how to turn discussions into actions and deliverables to get it done. The symposium will be a start…the “networking events” (call them pub gatherings) will continue the discussions and what we do afterwards makes it work. Hope to see you in Cambridge!!!
For those of you who CANNOT attend on Monday of next week….you can still contribute…
If you have any photos of JC please send them through to me at tony27587ATgmailDOTcom for a photo loop
If you want to send a dedication to JC send a few words that I will show on a dedication loop sometime during the meeting.
Recently I had a chance to sit with Fiona McKenzie and discuss why we both find Kudos to be of so much interest…it was just released as a movie on the RSC YouTube Channel and is embedded below. Will Russell, who is often behind the scenes leading the way on interesting engagements and collaborations, is behind the camera…
MOST people who are reading this blog post have likely performed peer review over the years. I have reviewed a lot of manuscripts myself. The process has changed a lot over the past decade in many ways. A couple of examples of how things have changed for me:
1) More requests to review papers – and I increasingly turn down requests because they are from journals I have never heard of (some may call them “predatory publishers”), some are in areas for which I have no expertise (e.g. electrical engineering), and sometimes because I simply don’t have time.
2) I have seen papers I have reviewed show up essentially untouched in other journals (no edits and simply reformatted) and commonly these “refused papers” are accepted into what I deem to be “lower quality” publications.
Of course over the past ten years I’ve also had a lot of papers go through peer review for myself and my co-authors. This experience has also been very interesting, if not entertaining. Some examples:
1) I have experienced the “third reviewer,” where an editor has held up a manuscript or demanded changes to match some of their own expectations while the other reviewers recommended it be published as is.
2) I have had the request to shorten excellent manuscripts to help with “page limits”….in the electronic age???
3) I have been on the receiving end of non-scientific reviews that have blocked a paper. My personal favorite “Mobile apps are a fad of the youth.”
My best story of peer review, and an example where modern technologies would have been so enabling at the time, is as follows.
I was asked to review this paper regarding the performance of Carbon-13 NMR prediction. A slice of the abstract says
“Further we compare the neural network predictions to those of a wide variety of other 13C chemical shift prediction tools including incremental methods (CHEMDRAW, SPECTOOL), quantum chemical calculation (GAUSSIAN, COSMOS), and HOSE code fragment-based prediction (SPECINFO, ACD/CNMR, PREDICTIT NMR) for the 47 13C-NMR shifts of Taxol, a natural product including many structural features of organic substances. The smallest standard deviations were achieved here with the neural network (1.3 ppm) and SPECINFO (1.0 ppm).”
This was an important time for me, as this paper compared the performance of various NMR predictors based on ONE chemical structure. And while any single point of comparison is up for discussion, there were 47 shifts, so you could argue it is a bigger data set. One of the programs under review was a PRODUCT that I managed at ACD/Labs, CNMR Predictor. Therefore I clearly had a concern as, essentially, the success of this product was partly responsible for my income. Any comparison that made the software look poor in performance was an issue. Was this a conflict of interest…maybe…but I judge myself to still be objective.
Table 3 listed the experimental shifts as well as the predicted shifts from the different algorithms and the size of the accompanying circle/ellipse was a visual indicator of a large difference between experimental and predicted. We will assume that all experimental assignments are correct and that there are no transcription errors between the predicted values from each algorithm and input into the table. A piece of Table 3 is shown below.
I kind of pride myself on being a little bit of a stickler for detail when it comes to reviewing data quality. Those of you who read this blog will know that. As I reviewed the data I was a little puzzled by the magnitude of the errors for certain Carbon nuclei, specifically for Carbons 23 and 27.
What was interesting to me was that the experimental shifts for carbons 23 and 27 were 142.0 and 133.2 ppm respectively, yet the predicted shifts were 132.8 and 142.7 ppm respectively. It struck me that they looked like they had been switched. This was what drew my attention to reviewing the data in more detail. I will cut a long story short, but I redrew the molecule of Taxol, input it into the same version of the software that was used for the publication, and got a DIFFERENT answer than that reported. I was able to determine WHY it was different…it came down to the orientation of a bond in the molecule as drawn by the reporting authors, and this made the CNMR prediction worse.
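The swapped-assignment hypothesis is easy to check numerically. Here is a minimal sketch in Python, using only the two shift values quoted above; the function name is my own illustration, not from the paper or any NMR software:

```python
# Illustrative sketch: how a swapped pair of assignments inflates prediction error.
# The shift values for carbons 23 and 27 are those quoted in the text above.

def prediction_errors(experimental, predicted):
    """Absolute differences between experimental and predicted shifts, in ppm."""
    return [abs(e - p) for e, p in zip(experimental, predicted)]

exp_shifts = [142.0, 133.2]          # experimental shifts for C23 and C27
pred_as_reported = [132.8, 142.7]    # predicted values as tabulated in the paper
pred_if_swapped = [142.7, 132.8]     # the same values with the pairing reversed

print(prediction_errors(exp_shifts, pred_as_reported))  # large errors, ~9 ppm each
print(prediction_errors(exp_shifts, pred_if_swapped))   # sub-ppm errors
```

Two errors of roughly 9 ppm among 47 shifts are enough to noticeably inflate a standard deviation that would otherwise sit around 1 ppm, which is why the apparent switch mattered so much to the reported comparison.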
I reported this detail to the editors in a detailed letter and recommended the manuscript for publication with the caveat that the numbers in the column representing CNMR 6.0 be edited to accurately reflect the performance of the algorithm, and I provided the details. I was shocked to see the manuscript published later WITHOUT any of the edits made to the numbers, inaccurately representing the performance of the algorithm. I contacted the editors and after a couple of exchanges received quite a dressing down: the editor overseeing the manuscript refused to get between a commercial concern and reported science.
What does this mean? That software companies don’t do science and only academics do? I have had similar experiences of my colleagues in industry being treated with bias relative to my colleagues in academia. I believe my friends in industry, commercial concerns and academia can all be objective scientists…and after all, doesn’t academia teach the chemists who go out to industry and the commercial software world? These are my experiences…I welcome any comments you may have about the bias. BUT, back to the story…
The manuscript was published in June 2002, and as product manager I had to deal with questions about algorithmic performance for many months because “the peer-reviewed literature said…”. This was NOT the only instance of a situation like this: a couple of years later it was reported that ACD/CNMR could not handle stereochemistry, only for us to determine, with the scientist who wrote the paper, that he had thrown a software switch that affected his results. Software can be tricky, and unfortunately the best performance can often come through the hands of those who write the software. Sad but true in many cases.
In August 2004 we published an addendum with one of the original authors describing the entire situation in detail. It was over two years from the original publication to the final addendum. I do not believe there was any malicious intent on behalf of the authors of the original manuscript, but that was in the days when the only place to issue a rebuttal was in the journal, and we could not get editorial support to do it. How would it happen today if a suspicious paper came out? There are myriad tools available now….
Yes, I would blog the story here, as I am doing now. Yes, I would express concern at the situation on Twitter with the hope of gaining redress. I would likely tell the story in a Slideshare presentation and make a narrated movie, available via an embed in the Slideshare presentation on my account. I would hope that the publisher nowadays would at least allow me to add a comment to the article, though I do understand that this comment would likely be moderated and they may choose not to expose it to the readers. I like the implementation on PLoS and have used it on one of our articles previously.
Could I maybe make use of a technology like Kudos, which I have started using? I have reported on it on this blog already here. I certainly could not claim the ORIGINAL article and start associating information with it regarding the performance of the algorithms…and that is a shame. But MAYBE in the future Kudos would consider letting OTHER people make comments and associate information/data with an article on Kudos. Risky? Maybe. However, I can claim the rebuttal that I co-authored and start associating information with that…certainly the original paper and ultimately a link to this blog. In fact, in the future, might a rebuttal be a manuscript that I publish out on something like Figshare, grab a DOI there, and maybe ask Kudos to treat as a published rebuttal? Peer review of that rebuttal could then happen as comments on Figshare and Kudos directly, and maybe in the future Kudos Views and Altmetric measures of it become a measure of its importance. We live in very interesting times as these technologies expand, mesh and integrate.
Over the past few years I have learned how to use a lot of the social networking tools and platforms to host and share my publications (when I am allowed to), my presentations, videos etc. I have started using a new website, www.growkudos.com, to help me enrich, expose and measure my publications. This is VERY EARLY in my exposure and usage of the platform but I am already excited by the possibilities. I applied KUDOS to one of the articles I co-authored with Sean Ekins and Joe Olechno regarding “Dispensing Processes Impact Apparent Biological Activity as Determined by Computational and Statistical Analyses“. With almost 10,000 views it has become a very interesting article and has been discussed many times so there was a lot of online information to enrich the article with. The resulting KUDOS page is here: https://www.growkudos.com/articles/10.1371/journal.pone.0062325