You might remember Kevin Martin’s post from earlier this year about how many a’s people put in Khan. He also mentioned that one might fit an equation to the curve.
To a geeky statistician, those are dangerous words. Dangerously appealing words.
Before you continue, let me warn you: extreme geekitude follows; performing some analysis of this was like bringing an elephant gun to a squirrel hunt. A very geeky squirrel hunt (perhaps squirrel fishing). If you’d just like to see a graph of the final model, feel free to skip to the end.
So, the first thing we do is try to come up with a model for this curve. The basic idea is this: every time someone puts up a web page mentioning Kirk’s Khan scream, they have some number of a’s which they’re going to use. We consider that everyone has some number of a’s they tend to feel is appropriate, and that we are selecting from the population of people who put Khan scream references on the web. So we are modeling some underlying distribution of preference for a’s among these people.
Footnote: I also have to recognize that in addition to a distribution of preference over people, an individual person has some variation in how many a’s they actually put up; that there may be multiple populations of people; and that different kinds of people are more likely to add references to Khan on the web. Some may even post multiple times. While a more complex model which took this into account might be able to make a better fit to the data, we simply consider it all as combined into a single conditional distribution–given that the post was made, what is the probability of it including a certain number of a’s.
The first model is pretty basic: it says that after each ‘a’ is added, there’s a chance that you’ll stop, add an ‘n’, and be done. This probability is the same after each a–it’s not dependent on how many you’ve entered before. This results in the number of a’s being expected to follow a geometric distribution: each ‘a’ entered is a trial, and we continue adding a’s until we ‘succeed’ and add an ‘n’. On a log scale, this model is a straight line.
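The straight-line-on-a-log-scale claim is easy to check directly. Here’s a quick Python sketch (the stopping probability p is a made-up value for illustration, not the fitted one):

```python
import math

# Stopping model: after each 'a', stop (and add the 'n') with probability p.
p = 0.3  # hypothetical value, not the fitted parameter

def geometric_pmf(k, p):
    """Probability of exactly k a's: k-1 'keep going' steps, then a stop."""
    return (1 - p) ** (k - 1) * p

# On a log scale the pmf is a straight line: consecutive log-probabilities
# differ by the constant log(1 - p).
diffs = [math.log(geometric_pmf(k + 1, p)) - math.log(geometric_pmf(k, p))
         for k in range(1, 10)]
print(diffs)  # every difference equals log(1 - p)
```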
After seeing this (and a few other models), and doing a little web research, we remove the two leftmost points from the data for our model. These are ‘Khan’ and ‘Khaan’ (1 and 2 a’s). They are much higher than the rest, and substantially change the model. We suspect that their references are largely due to very different sources: anyone referring to Khan Noonien Singh himself (or Genghis Khan, or any other Khan) for the first, and anyone referring to Khaan (an actual animal and also a common alternative transliteration of Khan) for the second.
After we do this, we can see an improved fit, though there are clearly still some regions of higher- or lower-than-expected occurrences.
So we now make our model a bit more complex, reflecting in part the complexity discussed above. We make a mixed model, suggesting that there are two populations posting Khan references. One follows the geometric model we used above; but the other, we will model as a negative binomial distribution: one explanation is that these are people who are aiming for a large number of a’s, and we are modeling their variation in what they think of as “a large number of a’s”. Fitting this mixed model (using maximum likelihood to determine how many people fall into each group, and the distribution parameters for each group) gives us the next graph.
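To make the mixture idea concrete, here’s a hedged sketch in pure Python. The data and component parameters are invented for illustration, and only the mixing weight is maximized (the actual fit used maximum likelihood over all the parameters):

```python
import math

def geom_pmf(k, p):
    """Geometric: k-1 'keep going' steps, then a stop (k >= 1)."""
    return (1 - p) ** (k - 1) * p

def nbinom_pmf(k, r, q):
    """Negative binomial: k failures before the r-th success (k >= 0)."""
    return math.comb(k + r - 1, k) * (q ** r) * ((1 - q) ** k)

def mixture_loglik(data, w, p, r, q):
    """Log-likelihood of a w/(1-w) mix of the two components."""
    return sum(math.log(w * geom_pmf(k, p) + (1 - w) * nbinom_pmf(k, r, q))
               for k in data)

# Toy data: numbers of a's in hypothetical posts.
data = [3, 4, 4, 5, 6, 6, 7, 8, 12, 15, 20, 25]

# Crude maximum likelihood over the mixing weight alone, holding the
# component parameters fixed (a real fit would optimize all of them).
best_w = max((w / 100 for w in range(1, 100)),
             key=lambda w: mixture_loglik(data, w, p=0.2, r=5, q=0.25))
print(best_w)
```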
A more complex model would attempt to model the conditional probability of adding another a (given how many a’s have already been added) as varying smoothly, depending on the number of a’s already added…we could, of course, model this as some sort of generalized additive model…sorry, please excuse my drool. Let’s continue.
Of course, I had to take it another couple of steps further. When I started this project, I wrote a perl script which would go to Google each day and save the number of Google results for each search, stored in a file by date. Further, I extended the range to 125 a’s (anything longer than this, Google considers too long). So what we now have is a time series: for each day, we have an entire graph of values. Using this, I was hoping to see how the numbers change over time. Unfortunately, the results are not consistent over time, jumping significantly up or down. Presumably, this is a result of Google trying out different variants on what results to return. But it means that rather than seeing counts increase over time, we see some variance in each count. For example, the counts for “khaaaaaaaaaaaaaaaaaaaaaaaaaan” (26 a’s) vary from around 150 to around 8000.
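The collection loop is simple in outline. This Python sketch builds the search terms and stores one snapshot per day; the actual fetch is stubbed out, since the original was a perl script and its scraping details aren’t shown here:

```python
import datetime

def khan_query(n):
    """Build the search term with n a's, e.g. n=3 -> 'khaaan'."""
    return "kh" + "a" * n + "n"

def fetch_result_count(term):
    # Placeholder for the real scrape/API call (not shown in the post).
    raise NotImplementedError

def collect(max_as=125, fetch=fetch_result_count):
    """One day's snapshot: (date, {number of a's: reported result count})."""
    today = datetime.date.today().isoformat()
    return today, {n: fetch(khan_query(n)) for n in range(1, max_as + 1)}
```

Run once a day, this yields exactly the time series described above: a full curve of counts per date.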
You can see the variance overall by looking at a boxplot of the ranges for each number. For some reason, there’s a lot of variance for 5–34 a’s, but not too much outside of that range.
So, time series analysis is pretty much out; this is a shame, because you can pretty easily make a video of the counts on each day, over time (with a fitted model for each day). The trouble is that the counts are more affected by the algorithmic decisions Google is making behind the scenes than by any underlying change in the number of pages.
But we can at least try to use this variance to see if it smooths out any of our earlier outliers. Here, we’ll take the median reported values, over time, for each number of a’s (rather than the individual reported numbers on any specific day) and repeat the earlier geometric/negative binomial mixed model:
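The median step itself is a one-liner per count. Here’s a small sketch with invented snapshots (real ones varied widely, e.g. the 26-a count ranged from around 150 to around 8000):

```python
from statistics import median

# One dict per day, mapping number-of-a's -> reported result count.
# These values are made up for illustration.
snapshots = [
    {5: 9000, 6: 7000, 26: 150},
    {5: 9500, 6: 6500, 26: 8000},
    {5: 8800, 6: 7200, 26: 400},
]

def daily_medians(snapshots):
    """Per-n median across days, smoothing out day-to-day jumps."""
    ns = snapshots[0].keys()
    return {n: median(day[n] for day in snapshots) for n in ns}

print(daily_medians(snapshots))  # {5: 9000, 6: 7000, 26: 400}
```

The mixed model is then refit against these medians instead of any single day’s counts.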
And that, I think, actually looks like a pretty decent fit. Notice that the negative binomial portion is actually fitting the low-a section now, rather than the strange middle-a hump we saw before; this seems to give a more natural interpretation: most people will put in around 6 a’s for KHAAAAAAN!, and for people stretching longer, a geometric distribution fits pretty well for determining how long they’ll keep adding a’s.
So there you go. Proof that anything can be overanalyzed. If people like this (drop a comment here or email me), I’ll keep collecting data and will look at doing some additional analysis with more data in a few months.
You can download the perl and R code and khan data from thomaslotze.com. While this was inspired directly by Kevin Martin’s post referencing squidnews, there were also earlier graphs from drtofu, Walrus, and Jim Finnis.
I expect that most of our readers are familiar with TEDTalks. The TED Conference takes place annually to “bring together the world’s most fascinating thinkers and doers, who are challenged to give the talk of their lives (in 18 minutes).” Their talks are then published on their website, so that we mere mortals can experience them as well.
In the past I’ve mostly watched individual talks that others have pointed out to me, but today I took some time to explore the site and find things on my own. One of my discoveries was the “TED in 3 Minutes” series, which includes shorter talks. I particularly liked “Arthur Benjamin’s formula for changing math education”. His idea, which I wholeheartedly agree with, is that high school math education should shift its focus away from calculus and onto statistics. Although calculus is integral (pun intended!) to higher math and sciences, most students will never need it. Probability theory, on the other hand, is immediately applicable to every student’s life. As we manage our finances or make medical decisions, it’s important for everyone to be able to intelligently assess risks and benefits.
To help spread all these ideas, the TEDTalks website has transcripts for all their videos. The transcripts allow the text of each talk to be searchable, and through the “interactive transcript” feature you can jump straight to the point in a video where given text appears. The “TED Open Translation Project” allows anyone to submit translations of these transcripts into other languages, to further spread these ideas beyond the English-speaking community.
With over 450 videos available, it’s difficult to know where to start watching TEDTalks. If you have a favorite talk or two, please let us know in the comments.
Students from the Cornell Summer Animation Workshop have produced a fantastic and suitably quirky animation for Jonathan Coulton’s “Mandelbrot Set”:
You can find out more about Jonathan Coulton on his website. I actually don’t know much apart from his most popular songs, but perhaps someone will enlighten us (with further song recommendations, for example) in the comments or with a follow-up blog post?