
Recommending BioMedical Articles March 2, 2008

Posted by Andre Vellino in CISTI Visualization, Citation, Collaborative filtering, Digital library, Recommender.

I have just finished an initial prototype of a recommender for a digital library. This web application was built using Sean Owen's open source collaborative filtering toolkit Taste (with a lot of adaptations by Dave Zeber). It uses data from 1.6 million articles in a collection of about 1,500 bio-medical journals.

This demo isn't ready to be made publicly available, in part because of some licensing uncertainties about the metadata. Later this quarter I may be able to put a more polished version on CISTI Lab (currently undergoing a makeover, so please forgive the "under construction" skin) using the NRC Press collection, although I'm worried that the citation graph for that collection may be too sparse to yield reliable recommendations.

The Synthese Recommender uses many of the ideas from TechLens. For example, to seed the recommender with ratings we use the citation graph for the collection. Out of the 1.6M articles, 370K qualify as "useful" for recommending – i.e. only those articles with 3 or more citations. The total number of citations is ~1.5M, making the average number of citations per article roughly 4.
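The seeding step above can be sketched in a few lines. This is not the actual Taste/Synthese code – the article IDs and the `MIN_CITATIONS` threshold below are illustrative – but it shows the idea: count inbound citations, keep only articles at or above the threshold, and treat each citation as an implicit binary "rating".

```python
from collections import defaultdict

# Hypothetical toy citation graph: citing article -> list of cited articles.
citations = {
    "a1": ["b1", "b2", "b3"],
    "a2": ["b1", "b3"],
    "a3": ["b1", "b2", "b3", "b4"],
}

# Count inbound citations per article.
inbound = defaultdict(int)
for cited_list in citations.values():
    for cited in cited_list:
        inbound[cited] += 1

# Keep only articles with 3 or more inbound citations as recommendable items.
MIN_CITATIONS = 3
useful = {article for article, n in inbound.items() if n >= MIN_CITATIONS}

# Seed binary "ratings": each citing article implicitly rates the papers it cites.
ratings = {(citer, cited): 1.0
           for citer, cited_list in citations.items()
           for cited in cited_list
           if cited in useful}
```

In this toy graph only `b1` and `b3` clear the threshold, so the rating matrix contains six (citer, cited) pairs.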

In contrast, the number of citations per article in the 100K-article Citeseer collection (which, incidentally, is now in its next generation as CiteseerX, whose design can be read about here) used in TechLens is roughly 12. It strikes me as a little odd that our bio-medical collection should have almost 3 times fewer citations per article. I will have to look at the citation data more carefully! [P.S. I did look into this, and "4" is the average number of references per article for which we have entries in the bibliographic database. I am told that biomedical articles do in fact have a much higher number of overall references than computer science articles.]

Compared with data from consumer-product recommenders, citation-based "ratings" in a digital library are far sparser – by about three orders of magnitude. For instance, the Netflix Prize data contains 100 million ratings from 480 thousand customers over 17,000 movie titles. That's roughly 1.2% of non-zeros. With 1.5 million citations ("ratings") over 370K users and 370K items we get roughly 0.0011% of non-zeros.

What do you think the odds are that applying PageRank to assign numeric values to citation-based ratings will affect the quality of recommendations? Stay tuned for the answer (in a couple of months, probably).
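For the curious, here is a minimal power-iteration PageRank over a citation graph of the kind described above. This is a sketch of the standard algorithm, not the experiment the post alludes to; the damping factor and iteration count are conventional defaults, and the resulting scores would replace the binary 1.0 ratings from the seeding step.

```python
def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """Power-iteration PageRank over a dict of node -> list of outbound links."""
    # Collect every node that appears as a source or a target.
    nodes = set(links) | {t for targets in links.values() for t in targets}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}

    for _ in range(iters):
        new = {node: (1.0 - damping) / n for node in nodes}
        for src in nodes:
            targets = links.get(src, [])
            if targets:
                # Split this node's rank among the papers it cites.
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node (cites nothing we know of): spread rank uniformly.
                for node in nodes:
                    new[node] += damping * rank[src] / n
        rank = new
    return rank
```

On a symmetric toy graph (say, three papers citing each other in a cycle) every article ends up with rank 1/3, and the ranks always sum to 1; on a real citation graph, heavily cited articles would receive higher "rating" values.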


1. Daniel Lemire - March 4, 2008

Mathematically, the number of inbound links has to be equal to the number of outbound links.

This means that if “4” is the average number of citations, then each paper cites on average 4 papers in your database. I don’t know about you, but I typically cite over 15 papers in each one of my papers, so 4 seems very small.

Netflix is a bit odd as a data set. 1.2% is pretty dense and that’s explained away by the fact that numerous users have rated over a thousand movies. I don’t know about you, but this has always seemed odd to me.
