
Bias and Data-Driven, Probability-Based Decision Making in Trivia Crack

I’ve been playing Trivia Crack for a few years now and, as of today, I am at level 326. I am an iPhone/iPad user, so I grabbed it from the App Store. In playing I have filled my brain with some useless information, learned a lot of history, geography and sports, and taken advantage of my Science background to win more than a few challenges. In terms of Entertainment, it’s clear I stopped learning about new music a few years ago; I’m stuck in my musical history with little real interest in the new music scene. I have enjoyed playing Trivia Crack against my girlfriend for over three years and we continue to have regular periods where we actively engage with the game.

Trivia Crack has been downloaded a staggering number of times, with Forbes reporting over 300 million downloads, and I can hear the theme tune while sitting in restaurants as people get their dose of the day.

There is a lot of advice online for people trying to beat the game, much of it tactics-based. Hackers have even been taking their pokes at it. Looking at some of the analyses that have been made, I am at least in the top 1% for Science and, with a category performance of 86, am better than 99.8% of the people playing Trivia Crack. My weakest category is Sports…no surprise, as I prefer to do sports rather than watch or read about them. I am generally flat across the board for Entertainment, Art and Literature, and Geography.

As a scientist I am data driven. However, as Louis Pasteur once commented, “In the fields of observation chance favors only the prepared mind.” So, while playing over 3500 games I started noticing patterns that helped me play the game. There were numerous patterns I noticed over the years, but I will summarize them here and then share the data.

  • If I did not know the answer to a question and HAD to guess, my guesses generally worked out best when I always guessed that the first (top) answer was correct. If I guessed the fourth (bottom) answer I was generally, but not always, wrong.
  • With this observation, which I could reproduce over and over, I decided to gather the data and analyze it statistically. The data is shown below and represents the number of times the correct answer appeared in each of the four positions, 1 to 4, top to bottom; each column corresponds to the frequency of correct answers for a particular grouping. I gathered data over a number of days in six different groups, deliberately varying the group sizes by stopping the gathering of data when there were 25, 50 or 100 answers in position 1.

The data speaks for itself (and is available for download on FigShare here). In all six groupings the majority of correct answers are in position 1, and the chance of the answer being in position 1 is commonly about double the chance of it being in position 4. This means that if you are lost on a question, and have no idea which answer to choose, you should select position 1; over time you will be right more often than with any other single position. If you know that positions 2 and 3 are not correct and are trying to choose between positions 1 and 4, choose position 1: you will be correct about twice as often as if you chose position 4.
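As a quick sanity check on the “about double” claim, here is a tiny Python sketch (mine, not part of the original analysis) that turns the pooled position counts from the groupings into the expected accuracy of each blind-guessing strategy:

```python
# Pooled counts of correct answers by position (1 = top ... 4 = bottom),
# taken from the data discussed in this post.
counts = {"position 1": 275, "position 2": 193, "position 3": 166, "position 4": 134}
total = sum(counts.values())  # 768 questions in all

for position, n in counts.items():
    # Always guessing this position succeeds exactly as often as the
    # correct answer lands there.
    print(f"always guess {position}: expected accuracy {n / total:.1%}")

# The position-1 versus position-4 advantage ("about double"):
print(f"advantage: {counts['position 1'] / counts['position 4']:.2f}x")
```

With these counts, always guessing position 1 succeeds about 35.8% of the time versus 17.4% for position 4, i.e. roughly a 2x advantage.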

While I believe the data speaks for itself, a statistical analysis is certainly in order. I’ve done a lot of stats over the years but I am fortunate to know people who are far more proficient than I am. So I approached my friend John Wambaugh and asked him to apply his preferred approach to data that I would provide. He wrote a little bit of code in R and produced the analysis below, concluding: “So, if you don’t know the answer, always guess A.” I agree – it’s a useful strategy and worth trying out in your own Trivia Crack games. That said, I would have expected the game to distribute correct answers randomly across the four positions; perhaps that is something the developers should address?

“If we assume that each time you answer a question one of the four answers must be right, then there are four probabilities describing the chance that each answer is right. These four probabilities must add up to 100 percent. The number of correct answers observed should follow what is called a Dirichlet distribution. The simplest thing would be for all the answers to be equally likely (25 percent) but we have Tony’s data from 6 groupings in which he got “A” 275 times, “B” 193 times, “C” 166 times, and “D” 134 times.

The probability density for Total given observed likelihood is 23200

While the probability density for Total assuming equal odds is 4.61e-08

But it is unlikely that even 768 questions gives us the exact odds. Instead, let’s construct a hypothesis.

We observe that answer “A” was correct 35.8 percent of the time instead of 25 percent (even odds for all answers).

We can hypothesize that 35.8 percent is roughly the “right” number and that the other three answers are equally likely.

The probability density for Total assuming only “A” is more likely 101.

Our hypothesis that “A” is right 35.8 percent of the time is 2.19e+09 times more likely than “A” being right only 25 percent of the time.

Among the individual games, the hypothesis is not necessarily always more likely:

For Game.1 the hypothesis that “A” is right 35.8 percent of the time is 129 times more likely.

For Game.2 the hypothesis that “A” is right 35.8 percent of the time is 1910 times more likely.

For Game.3 the hypothesis that “A” is right 35.8 percent of the time is 5.25 times more likely.

For Game.4 the hypothesis that “A” is right 35.8 percent of the time is 0.754 times more likely.

This value being less than one indicates that even odds are more likely for Game.4.

For Game.5 the hypothesis that “A” is right 35.8 percent of the time is 32 times more likely.

For Game.6 the hypothesis that “A” is right 35.8 percent of the time is 99.2 times more likely.

So, we might want to consider a range of possible probabilities for “A”.

Unsurprisingly, the density is maximized for probability of “A” being 36 percent.

However, we are 95 percent confident that the true value lies somewhere between 33 and 39 percent.

So, if you don’t know the answer, always guess “A”.”
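John’s actual R code isn’t shown in the post, but the quoted numbers are consistent with evaluating a Dirichlet posterior density, built from the counts with a flat prior (alpha = count + 1), at different candidate probability vectors. The sketch below is my stdlib-only Python reconstruction under that assumption; since the exact prior and rounding in the original code are unknown, the values land in the same ballpark as the quoted 23200, 4.61e-08 and 101 rather than matching them digit for digit:

```python
from math import exp, lgamma, log, sqrt

# Pooled counts of correct answers "A"-"D" across the groupings.
counts = [275, 193, 166, 134]
n = sum(counts)                    # 768 questions
alpha = [c + 1 for c in counts]    # Dirichlet posterior from a flat prior (my assumption)
a0 = sum(alpha)

def dirichlet_logpdf(p, alpha):
    """Log of the Dirichlet density at the probability vector p."""
    return (lgamma(sum(alpha)) - sum(lgamma(a) for a in alpha)
            + sum((a - 1) * log(x) for a, x in zip(alpha, p)))

p_obs = [c / n for c in counts]            # the observed frequencies
p_equal = [0.25] * 4                       # all four answers equally likely
p_hyp = [0.358] + [(1 - 0.358) / 3] * 3    # only "A" more likely

d_obs = exp(dirichlet_logpdf(p_obs, alpha))
d_equal = exp(dirichlet_logpdf(p_equal, alpha))
d_hyp = exp(dirichlet_logpdf(p_hyp, alpha))

print(f"density at observed frequencies: {d_obs:.3g}")    # on the order of 1e4
print(f"density at equal odds:           {d_equal:.3g}")  # on the order of 1e-8
print(f"density at the 'A' hypothesis:   {d_hyp:.3g}")    # on the order of 1e2
print(f"hypothesis vs equal odds: {d_hyp / d_equal:.3g}x")

# 95% interval for P(A): the marginal of a Dirichlet is a Beta distribution,
# and a normal approximation to it is adequate at these sample sizes.
mean = alpha[0] / a0
sd = sqrt(mean * (1 - mean) / (a0 + 1))
lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
print(f"P(A) plausibly between {lo:.0%} and {hi:.0%}")  # close to the quoted 33-39%
```

The per-game comparisons quoted above would follow from the same construction applied to each grouping’s counts individually.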



Posted on February 19, 2018 in Uncategorized



The National Chemical Database Service Allowing Depositions

The UK National Chemical Database Service (available here) has been online for a few years now, since 2012. When I worked at RSC I was intimately involved in writing the technical response to the EPSRC call for the service and, in a blog post at the time, I outlined a lot of intentions for the project. A key part of the project from my point of view was to deliver a repository to store structures, spectra, reactions, CIF files etc. as I outlined in that post.

“Our intention is to allow the repository to host data including chemicals, syntheses, property data, analytical data and various other types of chemistry related data. The details of this will be scoped out with the user-community, prioritized and delivered to the best of our abilities during the lifetime of the tender. With storage of structured data comes the ability to generate models, to deliver reference data as the community contributes to its validation, and to integrate and disseminate the data, as allowed by both licensing and technology, to a growing internet of the chemical sciences.”

In March 2014 at the ACS Meeting in Dallas I presented on our progress towards providing the repository (see this Slidedeck). ChemSpider has been online for over ten years; we were accepting structure depositions within the first 3 months and spectra a few weeks later (see blogpost). The ability to deposit structures as molfiles or SDF files has been available on ChemSpider for a long time, and we delivered the ability to validate and standardize structures using the CVSP platform, which we submitted for publication three years ago (October 28th, 2014) and which has since been published. With structure and spectra deposition in place for over a decade, a validation and standardization platform made public three years ago, and a lot of experience with depositing data onto ChemSpider, all the building blocks for the repository have been in place.

Today I received an email announcing “Compound and Spectra Deposition into ChemSpider”. I read it with interest, as I guessed it meant the capability was “going mainstream” in some way, given that it has existed for a decade. Refactoring should be a constant for any mature platform, so my expectation was a more seamless process for depositing various types of data: a more beautiful interface, new whizz-bang visualization widgets building on a decade of legacy development, taking the best of what we built for data registration, structure validation and standardization (and all of its lessons!), and rebuilds of some of the spectral display components we had. It’s not quite what I found when I tested it.

Here’s my review.

My expectation was to be able to go to the site and deposit data to ChemSpider. The website is simply a blue button with “Log in with your ORCID”. There is language recognizing that the OpenPHACTS project funded the validation and standardization platform work, which is definitely appropriate, but some MORE GUIDANCE as to what the site is would be good!

“Validation and standardisation of the chemical structures was developed as part of the Open PHACTS project and received support from the Innovative Medicines Initiative Joint Undertaking under grant agreement no. 115191, resources of which are composed of financial contribution from the European Union’s Seventh Framework Programme (FP7/2007-2013) and EFPIA companies’ in-kind contribution.”

This means that it should be possible to deposit a molfile, have it checked (validated) and standardized then deposited into ChemSpider, having passed through CVSP. So what happened?

I downloaded the structure of Chlorothalonil from our dashboard and loaded it. The result is shown below. The structure was standardized and correctly recognized as a V3000 molfile. The original structure was not visible, there were no errors or warnings and the structure DID standardize.
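As an aside for readers unfamiliar with the format: a molfile declares its version at the end of the counts line (line 4 of the molblock), which is presumably the sort of check the deposition tool performs when it recognizes a V2000 or V3000 file. A rough illustrative sketch in Python (my own, not ChemSpider’s code; the example molblock is hypothetical):

```python
def molfile_version(molblock: str) -> str:
    """Report whether a molblock is V2000 or V3000, based on the version
    tag that ends the counts line (line 4 of the header)."""
    lines = molblock.splitlines()
    if len(lines) < 4:
        raise ValueError("too short to be a molblock")
    counts_line = lines[3].rstrip()
    for version in ("V2000", "V3000"):
        if counts_line.endswith(version):
            return version
    raise ValueError("no recognizable version tag on the counts line")

# A minimal, made-up V2000 header: title, program, comment, counts line.
example = "benzene\n  sketch\n\n  6  6  0  0  0  0  0  0  0  0999 V2000\n"
print(molfile_version(example))  # V2000
```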

Deposition into ChemSpider failed with an Oops

Next I tried a structure from ChemSpider, because if the structures are going INTO ChemSpider then I should be able to load one that comes FROM ChemSpider. I wanted to get something fun so grabbed one of the many Taxol-related structures. There are 61 Taxol-related structures in total. I downloaded the version with multiple C13-labels. It looked like this:

When I uploaded this, a V2000 molfile, the result is as shown below.

The original isotope labels were removed, the layout was recognized as congested, and partially defined stereo was recognized. But it wouldn’t deposit. I tried many others and they would not deposit either. I was about to give up, but tried Benzene, V2000, downloaded from ChemSpider. And….YAY….it went in. The result is below.

A unique DOI is issued for the record, associated with my name. As far as I can tell it is NOT deposited into ChemSpider, presumably because the structure is already in ChemSpider. There is also no link from ChemSpider back to my deposition that I can find. My next try was to find a chemical NOT in ChemSpider and deposit that. That failed. I tried Benzene again and it worked a second time. I judged that maybe a simple alkyl chain would work for deposition. The result is below.

The warning “Contains completely undefined stereo: mixtures” does not make sense at all for this chemical. PLUS it wouldn’t deposit.

I then tried to register a sugar as a projection with the result shown below. I consider this one to have some real errors and do not AT ALL like the standardized version.

I tried a simple inorganic. I think KCl should be recognized as an ionic compound, K+Cl-, or at least trigger SOME warning!?

The testing I did took about an hour overall and I identified a LOT of issues. I think this release, even if it is a beta release for feedback, is premature and needs a lot more testing. I am hopeful that more people will fully test the platform, as the ABILITY to deposit data, get a DOI, and associate it with your ORCID account is valuable. However, it’s not obvious that anything is actually linked back to ORCID; at present ORCID appears to be nothing more than a login mechanism.

I did NOT test spectral deposition, but I am concerned that the request seems to be for original data. In binary vendor file formats? Uh-oh. That’s not a good idea!

I hope this blog post motivates the community to test the system, give feedback, and push the deposition platform with complex chemistries. Deposit.ChemSpider.Com appears to write chemicals to some repository other than ChemSpider, as there is no real connection back to ChemSpider that I can find (?). With community testing, at least the boundary conditions of performance can be defined, the system can be improved, and a community can be built around the functionality.

Building public domain chemistry databases is hard work. User feedback and guidance is essential. Please give your feedback and test the system.


Posted on October 20, 2017 in Uncategorized


Call for Abstracts for ACS Spring 2018 Symposium: “Applications of Cheminformatics to Environmental Chemistry”

Grace Patlewicz and I have the pleasure of hosting a symposium at the Spring 2018 ACS National Meeting in New Orleans, as outlined below. We believe that a presentation from you would enhance the line-up for the gathering and encourage you to consider our invitation. Our expectation is a full day of stimulating presentations and discussions regarding the application of cheminformatics to Environmental Chemistry. We sincerely hope you will consider our invitation and submit an abstract to the CINF division. Please confirm your intention to participate via email. Thank you in advance.

 Applications of Cheminformatics to Environmental Chemistry

Cheminformatics and computational chemistry have had an enormous impact in providing environmental chemists with access to data, information, software tools and algorithms. There is an increasing number of online resources and software tools, and the ability to source data, perform real-time QSAR prediction and even run read-across analyses online is now available. Environmental scientists generally seek chemical data in the form of chemical properties, environmental fate and transport, or toxicity-based endpoints. They also search for data regarding chemical function and use, information regarding exposure potential, and chemical transformation in environmental and biological systems. The increasing rate of production and release of new chemicals into commerce requires improved access to historical data and information to assist in hazard and risk assessment. High-throughput in vitro and in silico analyses are increasingly being brought to bear to rapidly screen chemicals for their potential impacts. Interweaving this information with more traditional in vivo toxicity data and exposure estimation, to provide integrated insight into chemical risk, is a burgeoning frontier at the cusp of cheminformatics and the environmental sciences.

This symposium will bring together a series of talks to provide an overview of the present state of data, tools, databases and approaches available to environmental chemists. The session will include the various modeling approaches and platforms, will examine the issues of data quality and curation, and intends to provide the attendees with details regarding availability, utility and applications of these systems. We will focus especially on the availability of Open systems, data and code to ensure no limitations to access and reuse.

The topics to be covered in this session include, but are not limited to:


  • Environmental chemistry databases
  • Data: Quality, Modeling and Delivery
  • Computational hazard and risk assessment
  • Prioritizing environmental chemicals using screening and predictive computational tools
  • Standards for data exchange and integration in environmental chemistry
  • Implementations of Read-across prediction
  • Adverse Outcome Pathway data and delivery


Please submit your abstracts using the ACS Meeting Abstracts Programming System (MAPS). General information about the conference is available on the ACS website. Any other inquiries should be directed to the symposium organizers:

Antony J. Williams and Grace Patlewicz, National Center for Computational Toxicology, Environmental Protection Agency, Research Triangle Park, Durham, NC

Emails: and


Posted on September 20, 2017 in Uncategorized


Call for Abstracts for ACS Spring 2018 Symposium: “Open Resources for automated structure verification and elucidation”

I have the pleasure of hosting a symposium with Emma Schymanski at the Spring 2018 ACS National Meeting in New Orleans, as outlined below. Our expectation is a full day of stimulating presentations and discussions regarding how Open resources, specifically data and software, can support automated structure verification and elucidation. If this is an area of research for you, please submit an abstract to the ANYL division.

Open Resources for automated structure verification and elucidation

Antony J. Williams1 and Emma L. Schymanski2
1National Center for Computational Toxicology, US EPA, Research Triangle Park, Durham, NC, USA.
2Luxembourg Centre for Systems Biomedicine (LCSB), University of Luxembourg, Campus Belval, Luxembourg.
Cheminformatics methods form an essential basis for providing analytical scientists with access to data, algorithms and workflows. There are an increasing number of free online databases (compound databases, spectral libraries, data repositories) and a rich collection of software approaches that can be used to support automated structure verification and elucidation, specifically for Nuclear Magnetic Resonance (NMR) and Mass Spectrometry (MS). This symposium will bring together a series of speakers to overview the state of data, tools, databases and approaches available to support chemical structure verification and elucidation. The session will cover the different databases and libraries available and examine the issues of data quality and curation. We intend to provide attendees with details regarding availability (both online and offline), utility and application of various tools and algorithms to support their identification and interpretation efforts. We will focus especially on the availability of Open systems, data and code with no limitations to access and reuse, yet reflect critically on the potential limitations and future needs of Open approaches. Case studies will demonstrate the potential for cheminformatics to enable single-structure elucidation through to high throughput, untargeted data discovery approaches. This work does not necessarily reflect U.S. EPA policy.

Emma Schymanski and Antony Williams,
Chairs of the Open Resources for automated structure verification and elucidation symposium,
ANYL Division, ACS Spring Meeting 2018, New Orleans


Where Did All of These Articles Associated With Me on Mendeley Come From?

Recently I posted that Google must have changed their algorithm, automagically introducing to my profile a lot of new articles that had nothing to do with me. It took work to prune them off and hopefully they do not reappear. Tonight I went through the process of updating the past few months of publications to get my Mendeley profile up to date and, lo and behold, there were a whole series of new publications that were NOT there the last time I checked Mendeley. Interestingly, they were all articles about superconducting materials, as were many of those that had appeared on my Google profile. Is it possible that Elsevier is somehow sourcing the information from Scholar? Or is Elsevier sourcing these articles from within its own library? Of course the articles all have an author “A. Williams” associated with them. I have already started the process of pruning them out. Not happy…

Articles associated with A. Williams on Mendeley



Posted on December 5, 2016 in Uncategorized


Mendeley Expanding my Worldwide Followers in a Big Way

I adopted Mendeley very early and was a defender of their decision to join Elsevier. I didn’t beat them up in the mediasphere for moving from Open start-up to corporate publisher; I made a similar move myself when ChemSpider was acquired by the Royal Society of Chemistry (RSC is a charity but is also a publisher).

Over the past few weeks I have noticed new followers showing up on my profile. In the first couple of years most of my Mendeley followers were names I recognized from my domains of expertise, cheminformatics and Nuclear Magnetic Resonance. Most of the followers were scientists whose papers I had read and whose work I was aware of. But things are now different.

I have pasted a picture below of the past month or so of new followers. I don’t recognize any of them at all and, as far as I can see from drilling down into their profiles, they are not from my domain. I cannot figure out whether these are just random followers, but I guess I should appreciate Mendeley and Elsevier for exposing my work, and publications, to a worldwide community of new followers. I am surprised by the new international exposure! THANKS

The past few days of new Mendeley followers





Posted on December 4, 2016 in Uncategorized


Increasing Noise in my Google Scholar Citations Profile

I have always been impressed with Google Scholar Citations. When I first set up my profile I was impressed with how fast the site allowed me to do so, and with the overall accuracy in recognizing the articles I had authored or co-authored. There was very little noise in terms of articles associated with my profile for “Antony Williams, Anthony Williams or A.J. Williams” (or some other combination) that were not actually mine; as I recall, maybe 3 articles out of about 120 at the time. I did have to add a couple of publications that were missed, but these were old, from the late 1980s.

Over the years I have been kept informed of publications that have been of relevance to my work and definitely of interest. I have also been made aware of citations to my work via email. Overall, it’s a great service.

However, of late I have become increasingly concerned regarding data quality. I have started to notice suggested co-authors showing up on my profile and emails regarding citing articles that puzzle me.

For example, today on my profile I noticed the following list of suggested co-authors. Four of these, blocked in red, are people I have no recollection of ever authoring with. It is possible that they are editors of a book I have a chapter in, but not that I recall.

Misassociated co-authors

I have rarely had to remove incorrect associations from my profile, but something is afoot methinks. I ended up deleting a grand total of over SEVENTY mis-associations. Some examples are below. To clarify: I know how to sleep, but I don’t study sleep disorders and breathing.


I eat cream cheese but know nothing about cheese manufacture


and I don’t know much about energy demands in Western Europe.


These articles have shown up on my profile only of late (as far as I know), and it seems that Google is casting a wider net to map more works to my profile, but the dramatic DECREASE in data quality is very concerning. Whatever the decision was, I think it has backfired. How badly?? See below, where publications are associated with my profile…that I somehow authored before I was born! I was born in 1964, so how did the 1953 article get associated with me?


The BOOK by Anna Williams from 1766 can be purchased on eBay for less than $1000 if you want it. However, it wasn’t written by Antony Williams and should NOT be associated with my profile.

Hopefully someone associated with Google Scholar Citations sees this as input to revisit any recent changes in algorithms for associating publications with profiles.

By the way, I did take a hit, appropriately so, on my h-index when I deleted the 70 mis-associations with my name. They weren’t mine for sure!



Posted on November 8, 2016 in Uncategorized


Delivering The Benefits of Chemical-Biological Integration in Computational Toxicology at the EPA

This presentation was given at the ACS Meeting in Philadelphia in August 2016.


Researchers at the EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and to help prioritize chemicals for further research based on potential human health risks. The intention of this research program is to evaluate thousands of chemicals for potential risk quickly and at much-reduced cost relative to historical approaches. This work involves computational and data-driven approaches including high-throughput screening, modeling, text mining and the integration of chemistry, exposure and biological data. We have developed a number of databases and applications that deliver on the vision of a deeper understanding of chemicals and their effects on exposure and biological processes, and that support a large community of scientists in their research efforts. This presentation will provide an overview of our work to bring together diverse large-scale data from the chemical and biological domains, our approaches to integrating and disseminating these data, and the delivery of models supporting computational toxicology. This abstract does not reflect U.S. EPA policy.



Posted on September 21, 2016 in Uncategorized


Zika Virus and a hypothesis regarding the impact of Pyriproxyfen

I have been interested in the Zika virus ever since I heard about it while visiting Brazil last year to give a talk at the Brazilian Natural Products conference. What I did not expect was the incredible surge in worldwide attention that Zika would attract. I am grateful to have been included in the work led by Sean Ekins (@collabchem) in the perspective “Open Drug Discovery for the Zika Virus”, recently published on F1000Research. Until last week the hypothesis was that the microcephaly cases were linked to the mosquito-borne Zika virus, but now there is a suggestion that they may instead be related to a larvicide.

The chemical being named as the offending agent is Pyriproxyfen. I had never even heard of this chemical until a couple of days ago. At that time there was nothing on Wikipedia but, of course, it has since been updated with this:

“In 2014, pyriproxifen was put into Brazilian water supplies to fight the proliferation of mosquito larvae.[2] Some Brazilian doctors have hypothesized that pyriproxyfen, not the Zika virus, is the cause of the 2015-2016 microcephaly epidemic in Brazil. [3]

Consequently, in 2016, the Brazilian state of Rio Grande do Sul suspended pyriproxyfen’s use. The Health Minister of Brazil, Marcelo Castro, criticized this step, noting that the claim is “a rumor lacking logic and sense. It has no basis.” They also noted that the insecticide is approved by the National Sanitary Monitoring Agency and “all regulatory agencies in the whole world”. The manufacturer of the insecticide, Sumitomo Chemical, stated “there is no scientific basis for such a claim” and also referred to the approval of pyriproxyfen by the World Health Organization since 2004 and the United States Environmental Protection Agency since 2001.[4]

Noted skeptic David Gorski discussed the claim and pointed out that anti-vaccine proponents had also claimed that the Tdap vaccine was the cause of the microcephaly epidemic, due to its introduction in 2014, along with adding “One can’t help but wonder what else the Brazilian Ministry of Health did in 2014 that cranks can blame microcephaly on.” Gorski also pointed out the extensive physiochemical understanding of pyriproxyfen that the WHO has, which concluded in a past evaluation that the insecticide is not genotoxic, and that the doctor organization making the claim has been advocating against all pesticides since 2010, complicating their reliability.[2][5]”

Because we live in a time of Open Data, and at a time when there is soooooo much information available in open databases, I thought I would go looking for any evidence-based identification of the chemical as a potential contributor to the explosion in microcephaly.

PubChem exposes a LOT of useful data under the Safety and Hazards tab. The long-term exposure data point to issues with blood and liver. FIFRA requirements are listed on PubChem and toxicity data is also available here. Reproductive toxicity data are limited to reports in animals:

/LABORATORY ANIMALS: Developmental or Reproductive Toxicity/ In /a/ developmental study in rats, a maternal NOAEL/LOAEL were determined to be 100 mg/kg/day and 300 mg/kg/day, respectively. These findings were based on increased incidences in mortality and clinical signs at 1,000 mg/kg/day with decreased in food consumption, body weight, and body weight gain together with increases in water consumption at 300 and 1,000 mg/kg/day. The developmental NOAEL /and/ /LOAEL were 100 mg/kg/day and 300 mg/kg/day /respectively/ based on the incr of skeletal variations at 300 mg/kg/day and above.

64 FR 56681 (10/21/99). Available from, as of April 28, 2003:

/LABORATORY ANIMALS: Developmental or Reproductive Toxicity/ In /a/ developmental study in rabbits, the maternal NOAEL/LOAEL for maternal toxicity were 100 and 300 mg/kg/day based on premature delivery/abortions, soft stools, emaciation, decreased activity and bradypnea. The developmental NOAEL was determined to be 300 mg/kg/day and developmental LOAEL was /not/ … determined; no dose related anomalies occurred in the four remaining litters studied at 1,000 mg/kg/day.

64 FR 56681 (10/21/99). Available from, as of April 28, 2003:
/LABORATORY ANIMALS: Developmental or Reproductive Toxicity/ In a 2-generation reproduction study in rats, the systemic NOAEL was 1,000 ppm (87 mg/kg/day). The LOAEL for systemic toxicity was 5,000 ppm (453 mg/kg/day). Effects were based on decreased body weight, weight gain and food consumption in both sexes and both generations, and increased liver weights in both sexes associated with liver and kidney histopathology in males. The reproductive NOAEL was 5,000 ppm. A reproductive LOAEL was not established.

64 FR 56681 (10/21/99). Available from, as of April 28, 2003:

Just to point out that this information, valuable as it is, is sourced from HSDB.

A PubMed search on pyriproxyfen doesn’t seem to turn up anything about birth defects that I can find.

There is no evidence yet for the potential impact of this chemical on the incidence of microcephaly, but the hypothesis is now out there and it will be interesting to see what happens as investigations are pursued. As yet I have no opinion…but I will be watching with interest to see what comes out.

Posted on February 15, 2016 in Uncategorized


Transform Tox Testing Challenge – Innovating for Metabolism

Scientists from EPA, NTP and NCATS have used high-throughput screening (HTS) assays to evaluate the potential health effects of thousands of chemicals. The Transform Tox Testing Challenge: Innovating for Metabolism is calling on innovative thinkers to find new ways to incorporate physiological levels of chemical metabolism into HTS assays. Since current HTS assays do not fully incorporate chemical metabolism, they may miss chemicals that are metabolized to a more toxic form. Adding metabolic competence to HTS assays will help researchers more accurately assess chemical effects and better protect human health.

Details can be found here.


Posted on January 26, 2016 in Uncategorized
