Cross-Domain Recommendation System
A Cross-Domain Recommendation System is a recommendation system that implements algorithms, methods, or models to systematically and automatically solve a cross-domain recommendation task.
- AKA: Multi-Domain Recommender System, Cross-Context Recommender, Knowledge-Transfer Recommender.
- Context:
- It can utilize algorithms, methods, techniques, and models such as:
- Collaborative filtering, for learning user preferences across multiple domains.
- Matrix factorization, to represent latent factors shared between domains.
- Transfer learning, for transferring user/item representations between domains.
- Graph neural networks, for modeling cross-domain relationships using interconnected graph structures.
- Domain adaptation, for minimizing discrepancies between source and target domains.
- Multi-task learning, for jointly learning recommendation models across domains.
- It can be evaluated by a cross-domain recommendation benchmarking task.
- It can improve recommendation quality in sparse data domains by leveraging auxiliary data from richer domains.
- It can address the cold-start problem by transferring knowledge from domains where user behavior is already known.
- It can be designed for symmetric (bidirectional) or asymmetric (unidirectional) knowledge transfer between domains.
- It can integrate both explicit feedback (e.g., ratings) and implicit feedback (e.g., clicks, views) from multiple domains.
- It can support applications such as recommending books based on movie preferences, or apps based on e-commerce history.
- It can range from simple mapping-based systems to deep learning models with shared representation layers.
- It can incorporate domain-specific constraints, contextual signals, and personalization objectives across domains.
- ...
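The shared-latent-factor idea behind several of the techniques above (collaborative filtering, matrix factorization, multi-task learning) can be illustrated with a toy collective matrix factorization: two domains' rating matrices are factorized jointly, with the user factor matrix shared across domains, so sparse feedback in one domain benefits from denser feedback in the other. A minimal NumPy sketch on synthetic data — matrix sizes, factor count, learning rate, and regularization strength are all illustrative assumptions, not values from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the same 6 users interact with 4 items in a "movie" domain
# and 5 items in a "book" domain. Zero entries denote unobserved ratings.
R_movies = rng.integers(0, 6, size=(6, 4)).astype(float)
R_books = rng.integers(0, 6, size=(6, 5)).astype(float)

k, lr, reg = 3, 0.01, 0.1  # latent factors, learning rate, L2 penalty

U = rng.normal(scale=0.1, size=(6, k))    # user factors, SHARED across domains
V_m = rng.normal(scale=0.1, size=(4, k))  # movie item factors
V_b = rng.normal(scale=0.1, size=(5, k))  # book item factors

def gd_step(R, U, V):
    """One gradient step on the observed entries of R; returns the RMSE."""
    mask = R > 0
    err = mask * (R - U @ V.T)        # residuals on observed cells only
    dU = err @ V - reg * U
    dV = err.T @ U - reg * V
    U += lr * dU
    V += lr * dV
    return float(np.sqrt((err ** 2).sum() / mask.sum()))

history = []
for epoch in range(500):
    gd_step(R_movies, U, V_m)                 # movie feedback updates shared U
    history.append(gd_step(R_books, U, V_b))  # ...which also shapes book fits

print(f"book-domain RMSE: {history[0]:.2f} -> {history[-1]:.2f}")
```

Because `U` appears in both objectives, a user's movie history constrains their book-side predictions even when their book ratings are sparse — the core mechanism that cross-domain systems exploit for sparsity and cold-start relief.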
- Example(s):
- TransferRec, which uses transfer learning to recommend items across e-commerce and media platforms.
- Cross-domain matrix factorization models, which share latent representations between domains.
- CDL (Collaborative Deep Learning), which combines deep representation learning with recommendation across domains.
- Graph-based cross-domain recommenders, which model users and items as nodes across multiple domain graphs.
- ...
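At the mapping-based end of the spectrum (in the spirit of approaches such as EMCDR), user embeddings are first learned independently in each domain, a mapping function is fitted on the users who appear in both domains, and cold-start users are then projected from the source space into the target space. A hedged sketch with a linear mapping fitted by least squares — the embeddings below are synthetic stand-ins for separately trained factors, and all names and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4                       # embedding dimensionality in both domains
n_overlap, n_cold = 50, 5   # users seen in both domains vs. target-cold users

# Stand-ins for user embeddings learned independently per domain (e.g. by
# matrix factorization). A hidden linear relation M plus noise links them.
M = rng.normal(size=(k, k))
U_source = rng.normal(size=(n_overlap + n_cold, k))
U_target = U_source @ M.T + 0.01 * rng.normal(size=(n_overlap + n_cold, k))

# Fit a linear mapping W on the overlapping users only:
# minimize ||U_source[:n_overlap] @ W - U_target[:n_overlap]||^2
W, *_ = np.linalg.lstsq(U_source[:n_overlap], U_target[:n_overlap], rcond=None)

# Cold-start users have no target-domain history: project their
# source-domain embedding into the target space instead.
U_cold_pred = U_source[n_overlap:] @ W
err = float(np.abs(U_cold_pred - U_target[n_overlap:]).mean())
print(f"mean abs error of projected cold-start embeddings: {err:.3f}")
```

The projected embeddings can then be scored against target-domain item factors as usual; deep variants replace the linear map with a small neural network trained on the same overlapping users.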
- Counter-Example(s):
- Single-Domain Recommendation Systems, which operate in isolation within a single domain.
- General-Purpose Recommender Systems, which do not leverage inter-domain knowledge transfer.
- Cold-Start Recommender Systems, which address new user/item scenarios without necessarily using cross-domain data.
- ...
- See: Domain-Specific Text Understanding Task, Cross-Domain Transfer Learning Task, Automated Domain-Specific Writing Task, Targeted Concept Simplification System, Transfer Learning, Matrix Factorization, Collaborative Filtering, Multi-Task Learning, XLD Framework.