2018 ExploreExploitandExplainPersona
 
Subject Headings: Multi-Armed Bandit, Exploration vs. Exploitation, Explainable Recommendation (Recsplanation), Music Recommendation System.

Notes

Cited By


Quotes

Abstract

The multi-armed bandit is an important framework for balancing exploration with exploitation in recommendation. Exploitation recommends content (e.g., products, movies, music playlists) with the highest predicted user engagement and has traditionally been the focus of recommender systems. Exploration recommends content with uncertain predicted user engagement for the purpose of gathering more information. The importance of exploration has been recognized in recent years, particularly in settings with new users, new items, non-stationary preferences and attributes. In parallel, explaining recommendations ("recsplanations") is crucial if users are to understand their recommendations. Existing work has looked at bandits and explanations independently. We provide the first method that combines both in a principled manner. In particular, our method is able to jointly (1) learn which explanations each user responds to; (2) learn the best content to recommend for each user; and (3) balance exploration with exploitation to deal with uncertainty. Experiments with historical log data and tests with live production traffic in a large-scale music recommendation service show a significant improvement in user engagement.
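The joint action space the abstract describes can be made concrete with a small sketch. The following is a minimal epsilon-greedy bandit over (item, explanation) pairs in Python, assuming a tabular per-user running-mean estimate of engagement; the class name, the epsilon value, and the estimator are illustrative assumptions rather than the authors' exact algorithm.

<pre>
import random
from collections import defaultdict

class EpsilonGreedyRecsplainer:
    """Illustrative epsilon-greedy bandit over joint (item, explanation) actions."""

    def __init__(self, items, explanations, epsilon=0.1):
        # Every action is a pair: what to recommend and how to explain it.
        self.actions = [(i, e) for i in items for e in explanations]
        self.epsilon = epsilon              # probability of exploring (assumed value)
        self.counts = defaultdict(int)      # plays per (user, action)
        self.means = defaultdict(float)     # running mean engagement per (user, action)

    def choose(self, user):
        # Explore: try a uniformly random action to gather information
        # about uncertain (item, explanation) pairs.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        # Exploit: pick the action with the highest estimated engagement for this user.
        return max(self.actions, key=lambda a: self.means[(user, a)])

    def update(self, user, action, reward):
        # Incremental running mean of observed engagement (e.g., stream = 1, skip = 0).
        key = (user, action)
        self.counts[key] += 1
        self.means[key] += (reward - self.means[key]) / self.counts[key]
</pre>

In this sketch, raising epsilon shifts the policy toward exploration (more data on uncertain explanations), while lowering it shifts toward exploitation of the current engagement estimates.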

References

* 2. Svetlin Bostandjiev, John O'Donovan, and Tobias Höllerer. 2012. TasteWeights: A Visual Interactive Hybrid Recommender System. In <i>Proceedings of the Sixth ACM Conference on Recommender Systems.</i> ACM, 35--42.
* 3. Allison J. B. Chaney, Brandon Stewart, and Barbara Engelhardt. 2017. How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility. <i>arXiv Preprint ArXiv:1710.11214</i> (2017).
* 4. [[Paul Covington]], Jay Adams, and Emre Sargin. 2016. Deep Neural Networks for YouTube Recommendations. In <i>Proceedings of the 10th ACM Conference on Recommender Systems.</i> ACM, 191--198.
* 5. Miroslav Dudik, John Langford, and Lihong Li. 2011. Doubly Robust Policy Evaluation and Learning. <i>arXiv Preprint ArXiv:1103.4601</i> (2011).
* 6. Gerhard Friedrich and Markus Zanker. 2011. A Taxonomy for Generating Explanations in Recommender Systems. <i>AI Magazine</i> 32, 3 (2011), 90--98.

Author: James McInerney, Benjamin Lacker, Samantha Hansen, Karl Higley, Hugues Bouchard, Alois Gruson, and Rishabh Mehrotra
Title: Explore, Exploit, and Explain: Personalizing Explainable Recommendations with Bandits
DOI: 10.1145/3240323.3240354
Year: 2018