Bayesian Personalized Ranking (BPR) Algorithm


A Bayesian Personalized Ranking (BPR) Algorithm is a pairwise ranking algorithm that (approximately) optimizes average per-user AUC using stochastic gradient descent (SGD) on randomly sampled (user, positive item, negative item) triplets.
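As an illustration of that definition, the following is a minimal sketch of one such SGD step under a matrix-factorization scoring model; the names W and H for the user- and item-factor matrices and the learning-rate/regularization defaults are illustrative assumptions, not taken from any particular BPR implementation.

    import numpy as np

    def bpr_sgd_step(W, H, u, i, j, lr=0.05, reg=0.01):
        """One stochastic update on a sampled (user u, positive item i, negative item j) triplet.

        W: (n_users, k) user factors; H: (n_items, k) item factors.
        Ascends the gradient of ln sigmoid(x_ui - x_uj) minus an L2 penalty, pushing the
        observed item i to score higher than the unobserved item j for user u.
        """
        w_u, h_i, h_j = W[u].copy(), H[i].copy(), H[j].copy()
        x_uij = w_u @ (h_i - h_j)                # predicted score difference x_ui - x_uj
        g = 1.0 / (1.0 + np.exp(x_uij))          # sigmoid(-x_uij), gradient of ln sigmoid(x_uij)
        W[u] += lr * (g * (h_i - h_j) - reg * w_u)
        H[i] += lr * (g * w_u - reg * h_i)
        H[j] += lr * (-g * w_u - reg * h_j)

Repeating many such steps over freshly sampled triplets is what (approximately) maximizes the average per-user AUC mentioned above.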



References


2016b

  • https://cran.r-project.org/web/packages/rrecsys/vignettes/b6_BPR.html
    • QUOTE: The algorithm addresses the One-Class Collaborative Filtering (OCCF) problem by turning it into a ranking problem, implicitly assuming that users prefer items they have already interacted with over items they have not. Instead of applying rating prediction techniques, BPR ranks candidate items for a user without calculating a “virtual” rating.

      The overall goal of the algorithm is to find a personalized total ranking [math]\displaystyle{ \gt_u \subset I^2 }[/math] for any user [math]\displaystyle{ u \in \text{Users} }[/math] and pairs of items [math]\displaystyle{ (i,j) \in I^2 }[/math] that meets the properties of a total order (totality, anti-symmetry, transitivity).

      The model uses a pair-wise interpretation of the implicit feedback matrix and tries to reconstruct, for each user, parts of [math]\displaystyle{ \gt_u }[/math]: a user's positive feedback on an item is taken to express a preference for that item over any item the user did not provide feedback on. In other words, positive-only feedback is transformed into positive and negative feedback in terms of item pairs [math]\displaystyle{ (i,j) }[/math], where the user prefers [math]\displaystyle{ i }[/math] over [math]\displaystyle{ j }[/math] (positive) and, correspondingly, dislikes [math]\displaystyle{ j }[/math] relative to [math]\displaystyle{ i }[/math] (negative).

      Given [math]\displaystyle{ \Theta }[/math], the parameter vector of a model (in rrecsys the model is a factorized matrix) that determines the personalized ranking for any [math]\displaystyle{ i \in I }[/math], BPR aims to maximize the posterior probability [math]\displaystyle{ p(\Theta \mid \gt_u) \propto p(\gt_u \mid \Theta)\,p(\Theta) }[/math]. Optimization of [math]\displaystyle{ \Theta }[/math] is achieved through a criterion called BPR-OPT, which is related to the AUC metric and optimizes it implicitly. We optimize the parameters [math]\displaystyle{ \Theta }[/math] with stochastic gradient descent by randomly choosing triples [math]\displaystyle{ (u,i,j) }[/math] from the training data.
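For reference, the BPR-OPT criterion alluded to in the quote is commonly written (following the original BPR formulation) as the log of that posterior, [math]\displaystyle{ \text{BPR-OPT} := \sum_{(u,i,j) \in D_S} \ln \sigma(\hat{x}_{uij}) - \lambda_\Theta \lVert \Theta \rVert^2 }[/math], where [math]\displaystyle{ D_S }[/math] is the set of training triples, [math]\displaystyle{ \hat{x}_{uij} = \hat{x}_{ui} - \hat{x}_{uj} }[/math] is the difference of the model's predicted scores for items [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math], [math]\displaystyle{ \sigma }[/math] is the logistic sigmoid, and [math]\displaystyle{ \lambda_\Theta }[/math] are model-specific regularization parameters; maximizing this smooth, differentiable criterion acts as a surrogate for maximizing per-user AUC.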
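As a sketch of the random triple selection described above, the following draws one [math]\displaystyle{ (u,i,j) }[/math] triple from a binary implicit-feedback matrix; the dense 0/1 matrix, the function name, and the rejection-sampling loop are illustrative assumptions rather than part of the rrecsys API.

    import numpy as np

    def sample_triplet(feedback, rng):
        """Draw one training triple (u, i, j) from a binary implicit-feedback matrix.

        feedback: (n_users, n_items) array with 1 where the user interacted with the item.
        Returns a user u, an observed item i, and an unobserved item j, encoding i >_u j.
        """
        n_users, n_items = feedback.shape
        while True:
            u = int(rng.integers(n_users))
            pos = np.flatnonzero(feedback[u])      # items u gave positive feedback on
            if 0 < len(pos) < n_items:             # need at least one positive and one negative
                i = int(rng.choice(pos))
                j = int(rng.integers(n_items))
                while feedback[u, j]:              # resample until j carries no feedback from u
                    j = int(rng.integers(n_items))
                return u, i, j

Triples drawn this way can be fed directly into an update step like the one sketched after the definition above.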


2009

  • (Rendle et al., 2009) ⇒ Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. (2009). "BPR: Bayesian Personalized Ranking from Implicit Feedback." In: Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI 2009).