Kinja Data Team

it's A/B test time, my dudes

Last week, the personalization team tested out our first new feature: the recommended post module.

Before our test, this module was populated with the stories that had the most concurrent visitors across our network. That worked reasonably well, but there was no connection between what the reader was interested in and the stories being suggested.


Here’s an example of the old system, recommendations on the left:

While it’s totally plausible that a ‘500 Days of Kristin’ reader would be interested in Drew Magary’s latest piece, we have no reason to think that. All we know about that reader is that they went to read about Kristin Cavallari.

Here are the recommendations based on one of the new algorithms we tested:


AH! So much Kristin!

We tested four different algorithms against our ‘Trending’ feed (on posts only: homepages still show the most popular posts):

  1. A tag-based recommender, which suggests posts on the same topic
  2. A personalized recommender, based on the Kinja posts that device had recently visited
  3. A collaborative filter* using data from a few visits over several days
  4. A (different) collaborative filter using data from all visits over a short period of time

*Our collaborative filters work along the lines of ‘readers who visited this post also visited these other posts, you might like them too’ to generate recommendations.
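The ‘readers who visited this also visited that’ idea boils down to counting co-visits. This is a minimal sketch of that kind of collaborative filter, not our production code; the data and helper names are made up for illustration:

```python
from collections import defaultdict
from itertools import combinations

def covisit_recommendations(visits, top_n=3):
    """Build 'readers who visited this post also visited...' recommendations.

    visits: mapping of reader id -> set of post ids that reader viewed.
    Returns post id -> list of the top_n most co-visited other posts.
    """
    # Count how often each pair of posts was seen by the same reader.
    pair_counts = defaultdict(int)
    for posts in visits.values():
        for a, b in combinations(sorted(posts), 2):
            pair_counts[(a, b)] += 1

    # For each post, tally co-visit counts against every other post.
    related = defaultdict(lambda: defaultdict(int))
    for (a, b), n in pair_counts.items():
        related[a][b] += n
        related[b][a] += n

    # Rank neighbors by co-visit count (ties broken alphabetically).
    return {
        post: [other for other, _ in
               sorted(neighbors.items(), key=lambda kv: (-kv[1], kv[0]))[:top_n]]
        for post, neighbors in related.items()
    }

# Toy data: three readers, three posts.
visits = {
    "r1": {"kristin-1", "kristin-2", "nfl-1"},
    "r2": {"kristin-1", "kristin-2"},
    "r3": {"kristin-2", "nfl-1"},
}
recs = covisit_recommendations(visits)
```

With data like this, a visitor on `kristin-1` gets `kristin-2` recommended first, since two readers visited both. Real systems normalize for overall popularity so the biggest posts don't dominate every list, but the counting core is the same.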

Each of these algorithms was used to generate recommendations for 5% of our users over several days. Afterwards, we gathered the data to see whether readers clicked on the recommendations, whether they stayed on our site longer, and whether these links decreased the likelihood of a visitor ‘bouncing’.
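For a split like this, each device needs to land in the same variant every time it comes back. A common way to do that is hashing the device id into a stable bucket; this sketch shows the idea (the variant names and shares are illustrative, not our exact setup):

```python
import hashlib

# Control first, then the four test algorithms (names are hypothetical).
VARIANTS = ["trending", "tag", "personalized", "collab_sampled", "collab_recent"]

def assign_variant(device_id: str, test_share: float = 0.05) -> str:
    """Deterministically bucket a device into a test variant.

    Hashing the device id gives a stable, roughly uniform value in [0, 1);
    each test algorithm gets `test_share` (5%) of traffic, and every other
    device stays on the 'trending' control.
    """
    digest = hashlib.sha256(device_id.encode("utf-8")).hexdigest()
    u = int(digest[:15], 16) / 16**15  # uniform in [0, 1)
    for i, variant in enumerate(VARIANTS[1:]):
        if u < (i + 1) * test_share:
            return variant
    return "trending"
```

Because the assignment is a pure function of the device id, a returning visitor always sees the same algorithm, which keeps the engagement metrics clean.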


Since this unit is only available on displays 1024 pixels wide or more, we limited the analysis to that traffic.

While there was no significant change in bounce rate, we did see changes in the other numbers:


(numbers in bold are statistically significant)

All four of the algorithms increased the likelihood of a user clicking (from a low baseline). We think this is a great sign that we’re recommending content that our readers are interested in.
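Deciding whether a click-rate lift like this is real or noise is a standard two-proportion test. Here's a small sketch of that check; the counts below are made up for illustration and are not the real test's numbers:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test for a difference in click-through rate.

    Returns (z statistic, p-value). A small p-value means a CTR gap
    this large is unlikely if the two variants actually perform the same.
    """
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: control at 1.2% CTR, variant at 1.8%.
z, p = two_proportion_z(clicks_a=120, views_a=10000,
                        clicks_b=180, views_b=10000)
```

At these (invented) sample sizes the lift comes out clearly significant; with a low baseline click rate, it takes a lot of pageviews before a difference clears this bar, which is why the test ran for several days.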


Unfortunately, none of the variations meaningfully increased the amount of time readers spent on our site. We're concerned this means our current winner is cannibalizing existing clicks on permalinks, or that it's biased toward less engaging posts than our existing baseline is.

We’re encouraged by the improvements, though, and are using this as a starting point for our future personalization/contextualization work. We’ll be turning on the winning collaborative-filtering recommendation engine tomorrow and will keep testing tweaks. For now, you should start seeing recommendations on posts that are about 40 minutes old; that should drop to about 20 minutes in a day or two.


Thanks go to the whole personalization team for their help on this: Chris and Lapi on the data engineering, Grace on design, Allison for the number-crunching, Dima and Laci for front-end work, and Istvan, Levente, and Ali for their feedback and help across the board. Also, thanks Kevin for all your support on the ops side, and to our former colleagues Eric and Pedro for getting the architecture for this off the ground.
