Bias and Data-Driven and Probability-Based Decision Making in Trivia Crack


I’ve been playing Trivia Crack for a few years now and, as of today, I am at level 326. I am an iPhone/iPad user, so I grabbed it from the App Store. In playing I have filled my brain with some useless information, learned a lot of history, geography and sports, and taken advantage of my Science background to win more than a few challenges. In terms of Entertainment, it’s clear I stopped learning about new music a few years ago; I’m stuck in my musical history with little real interest in the new music scene. I have enjoyed playing Trivia Crack against my girlfriend for over three years, and we continue to have regular periods where we actively engage with the game.

Trivia Crack is hugely popular, with Forbes reporting over 300 million downloads, and I can hear the theme tune while sitting in restaurants as people get their dose of the day.

There is a lot of advice online for people trying to beat the game, much of it tactics-based. Hackers have even been taking their pokes at it. Looking at some of the analyses that have been made, I am at least in the top 1% for Science and, with a category performance of 86, better than 99.8% of the people playing Trivia Crack. My weakest category is Sports…not a surprise, as I prefer to do sports rather than watch or read about them. I am generally flat across the board for Entertainment, Art and Literature, and Geography.

As a scientist I am data driven. However, as Louis Pasteur once commented, “In the fields of observation chance favors only the prepared mind.” So, while playing over 3,500 games I started noticing patterns that helped me play the game. I noticed numerous patterns over the years, but I will summarize them here and then share the data.

  • If I did not know the answer to a question and HAD to guess, my guesses generally worked out best when I always guessed that the first (top) answer was correct. If I guessed the fourth (bottom) answer I was generally, but not always, wrong.
  • With this observation, which I could reproduce over and over, I decided to gather the data and analyze it statistically. The data are shown below and represent the number of times the correct answer appeared in each position, 1 to 4 (top to bottom); each column corresponds to the frequency of correct answers for a particular grouping. I gathered data over a number of days in five different groups, and chose groupings of different sizes, stopping the data gathering when there were 25, 50 or 100 answers in position 1.
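The tallying protocol described above can be sketched as a small simulation. The helper name and the simulated position frequencies (roughly the 36/25/22/17 percent split that emerges from the pooled counts later in the post) are my own illustrative assumptions, not the real game data:

```python
import random

# Hypothetical sketch of the tallying protocol: record the position (1-4) of
# each correct answer, stopping a grouping once position 1 has been seen a
# target number of times (25, 50 or 100). Positions are simulated here with
# a rough 36/25/22/17 percent split; real tallies came from actual games.
def gather_grouping(stop_at, rng):
    counts = {1: 0, 2: 0, 3: 0, 4: 0}
    while counts[1] < stop_at:
        pos = rng.choices([1, 2, 3, 4], weights=[36, 25, 22, 17])[0]
        counts[pos] += 1
    return counts

rng = random.Random(1)
for target in (25, 50, 100):
    print(target, gather_grouping(target, rng))
```

Because the stopping rule is tied to position 1, each grouping ends up with a different total number of questions, which is why the groupings in the data have different sizes.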

The data speak for themselves (and are available for download on FigShare here). In all five groupings the majority of answers are in position 1; typically, the chance of the answer being in position 1 is about double the chance of it being in position 4. This means that if you are lost on a question and have no idea which answer to choose, you should select position 1. Over time, you will be right more often than with any other single position. If you know that positions 2 and 3 are not the correct answers and are trying to choose between positions 1 and 4, choose position 1: you will be correct about twice as often as if you chose position 4.
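Using the pooled counts that appear in the analysis later in the post (275, 193, 166 and 134 correct answers in positions 1 through 4, out of 768 questions), both claims can be checked with simple arithmetic:

```python
# Pooled counts from the post: position of the correct answer, 768 questions.
counts = {"A": 275, "B": 193, "C": 166, "D": 134}
total = sum(counts.values())          # 768

p1 = counts["A"] / total              # ≈ 0.358: blind guess at position 1
p4 = counts["D"] / total              # ≈ 0.174: blind guess at position 4
print(round(p1 / p4, 2))              # 2.05: position 1 is about twice as likely

# If positions 2 and 3 are ruled out, condition on the answer being 1 or 4:
p1_given_1or4 = counts["A"] / (counts["A"] + counts["D"])
print(round(p1_given_1or4, 3))        # 0.672: choose position 1 two times in three
```

So a blind guess at position 1 wins about 36% of the time, and once positions 2 and 3 are eliminated, position 1 wins roughly two times out of three.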

While I believe the data speak for themselves, a statistical analysis is certainly in order. I’ve done a lot of stats over the years, but I am fortunate enough to know people who are far more proficient than I am. So, I approached my friend John Wambaugh and asked him to apply his preferred approach to data that I would provide. He wrote a little bit of code in R and produced the analysis below, concluding: “So, if you don’t know the answer, always guess A.” I agree – it’s a useful strategy and worth trying out in your own Trivia Crack games. That said, I would expect the game to place correct answers in random positions, and maybe that is something the developers should address?

“If we assume that each time you answer a question one of the four answers must be right, then there are four probabilities describing the chance that each answer is right. These four probabilities must add up to 100 percent. The number of correct answers observed should follow what is called a Dirichlet distribution. The simplest thing would be for all the answers to be equally likely (25 percent) but we have Tony’s data from 6 groupings in which he got “A” 275 times, “B” 193 times, “C” 166 times, and “D” 134 times.

The probability density for Total given observed likelihood is 23200

While the probability density for Total assuming equal odds is 4.61e-08

But it is unlikely that even 768 questions gives us the exact odds. Instead, let’s construct a hypothesis.

We observe that answer “A” was correct 35.8 percent of the time instead of 25 percent (even odds for all answers).

We can hypothesize that 35.8 percent is roughly the “right” number and that the other three answers are equally likely.

The probability density for Total assuming only “A” is more likely is 101.

Our hypothesis that “A” is right 35.8 percent of the time is 2.19e+09 times more likely than “A” being right only 25 percent of the time.

Among the individual games, the hypothesis is not necessarily always more likely:

For Game.1 the hypothesis that “A” is right 35.8 percent of the time is 129 times more likely.

For Game.2 the hypothesis that “A” is right 35.8 percent of the time is 1910 times more likely.

For Game.3 the hypothesis that “A” is right 35.8 percent of the time is 5.25 times more likely.

For Game.4 the hypothesis that “A” is right 35.8 percent of the time is 0.754 times more likely.

This value being less than one indicates that even odds are more likely for Game.4.

For Game.5 the hypothesis that “A” is right 35.8 percent of the time is 32 times more likely.

For Game.6 the hypothesis that “A” is right 35.8 percent of the time is 99.2 times more likely.

So, we might want to consider a range of possible probabilities for “A”.

Unsurprisingly, the density is maximized for probability of “A” being 36 percent.

However, we are 95 percent confident that the true value lies somewhere between 33 and 39 percent.

So, if you don’t know the answer, always guess “A”.”
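The quoted analysis can be sketched in Python (the original code was in R). The likelihood ratio below is computed directly from the multinomial likelihood rather than from Dirichlet densities, so the exact figure differs from the quoted 2.19e+09, but it lands in the same ballpark; the credible interval uses a Beta posterior with a uniform prior:

```python
import math
import random

# Counts from the post: "A" correct 275 times, the other positions 493 times,
# out of 768 questions.
n_a, n_other, n = 275, 493, 768
p_a = n_a / n                                    # ≈ 0.358

# Likelihood ratio: multinomial with p_A = 275/768 (the rest split evenly)
# versus even odds of 1/4 for every position.
log_ratio = (n_a * math.log(p_a / 0.25)
             + n_other * math.log(((1 - p_a) / 3) / 0.25))
print(f"{math.exp(log_ratio):.2g}")              # on the order of 1e9, the same
                                                 # ballpark as the quoted 2.19e9

# 95% credible interval for p_A: sample the Beta(275+1, 493+1) posterior
# (uniform prior) and take the 2.5th and 97.5th percentiles.
rng = random.Random(0)
draws = sorted(rng.betavariate(n_a + 1, n_other + 1) for _ in range(20000))
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"{lo:.2f} to {hi:.2f}")                   # close to the quoted 33-39 percent
```

Either way the conclusion is the same: the data overwhelmingly favor position 1 being correct more than a quarter of the time, with its true probability somewhere in the mid-to-high thirties of percent.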



About Tony

Antony (Tony) J. Williams received his BSc in 1985 from the University of Liverpool (UK) and his PhD in 1988 from the University of London (UK). His PhD research interests were in studying the effects of high pressure on molecular motions within lubricant-related systems using Nuclear Magnetic Resonance. He moved to Ottawa, Canada to work for the National Research Council, performing fundamental research on the electron paramagnetic resonance of radicals trapped in single crystals. Following his postdoctoral position he became the NMR Facility Manager for Ottawa University. Tony then joined the Eastman Kodak Company in Rochester, New York as their NMR Technology Leader. He led the laboratory to develop quality control across multiple spectroscopy labs and helped establish walk-up laboratories providing NMR, LC-MS and other forms of spectroscopy to hundreds of chemists across multiple sites. This included the delivery of spectroscopic data to the desktop, automated processing, and his initial interests in computer-assisted structure elucidation (CASE) systems. He also worked with a team to develop the world’s first web-based LIMS system, WIMS, capable of allowing chemical structure searching and spectral display. With his developing cheminformatics skills and passion for data management he left corporate America to join a small start-up company working out of Toronto, Canada. He joined ACD/Labs as their NMR Product Manager and held various roles, including Chief Science Officer, during his 10 years with the company. His responsibilities included managing over 50 products at one time prior to developing a product management team, and managing sales, marketing, technical support and technical services. ACD/Labs was one of Canada’s Fast 50 Tech Companies and a Forbes Fast 500 company in 2001. His primary passions during his tenure with ACD/Labs were the continued adoption of web-based technologies and developing automated structure verification and elucidation platforms.
While at ACD/Labs he suggested the possibility of developing a public resource for chemists attempting to integrate internet-available chemical data. He finally pursued this vision with some close friends as a hobby project in the evenings, and the result was the ChemSpider database. Even while running out of a basement on hand-built servers, the website developed a large community following that eventually culminated in the acquisition of the website by the Royal Society of Chemistry (RSC), based in Cambridge, United Kingdom. Tony joined the organization, together with some of the other ChemSpider team, and became their Vice President of Strategic Development. At RSC he continued to develop cheminformatics tools, specifically ChemSpider, and was the technical lead for the chemistry aspects of the Open PHACTS project, a project focused on the delivery of open data, open source and open systems to support the pharmaceutical sciences. He was also the technical lead for the UK National Chemical Database Service and the RSC lead for the PharmaSea project, attempting to identify novel natural products from the ocean. He left RSC in 2015 to become a Computational Chemist in the National Center for Computational Toxicology at the Environmental Protection Agency, where he is bringing his skills to bear working with a team on the delivery of a new software architecture for the management and delivery of data, algorithms and visualization tools. The “Chemistry Dashboard” was released on April 1st, no fooling, and provides access to over 700,000 chemicals, experimental and predicted properties, and a developing link network to support the environmental sciences. Tony remains passionate about computer-assisted structure elucidation and verification approaches and continues to publish in this area. He is also passionate about teaching scientists to benefit from the developing array of social networking tools for scientists and is known as the ChemConnector on the networks.
Over the years he has had adjunct roles at a number of institutions and presently enjoys working with scientists at both UNC Chapel Hill and NC State University. He is widely published with over 200 papers and book chapters and was the recipient of the Jim Gray Award for eScience in 2012. In 2016 he was awarded the North Carolina ACS Distinguished Speaker Award.

Posted by on February 19, 2018 in Uncategorized


