Domain-Invariant Feature Exploration (DIFEX)
A Domain-Invariant Feature Exploration (DIFEX) is a domain generalization algorithm and training framework that jointly learns internally- and mutually-invariant features so that models trained only on source domains generalize to unseen domains.
- AKA: Domain-Invariant Feature Learning via Exploration, Internally and Mutually Invariant Feature Learning.
- Context:
- It can function as an algorithm that guides the extraction and disentanglement of domain-invariant features through multiple loss components.
- It can train a model architecture composed of a teacher-student framework, where:
- the teacher learns internally-invariant features from Fourier phase representations.
- the student learns mutually-invariant features from raw data and is supervised by the teacher through knowledge distillation.
- It can extract internally-invariant features by leveraging Fourier phase information to capture semantic structures within each domain.
- It can extract mutually-invariant features by aligning inter-domain distributions using correlation alignment techniques.
- It can apply an exploration loss that encourages diversity between the internally- and mutually-invariant representations, improving robustness.
- It can be trained in a supervised or semi-supervised manner using only source domains, with no access to target domain data.
- It can be deployed across various tasks such as image classification, time-series prediction, and sensor-based activity recognition.
- It can outperform traditional domain generalization models on benchmark datasets like PACS, VLCS, and Office-Home.
- ...
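The internally-invariant features above come from the Fourier phase of the input, which is argued to carry the semantic structure of a signal while the amplitude carries style information. A minimal NumPy sketch of this decomposition (the paper applies the FFT to images and to intermediate deep features; the function names here are illustrative):

```python
import numpy as np

def fourier_phase_features(x):
    """Split a 2-D input into Fourier amplitude and phase.

    The phase component is what DIFEX's teacher network is trained on,
    since phase is argued to encode domain-invariant semantics.
    """
    spectrum = np.fft.fft2(x)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    return amplitude, phase

def phase_only_reconstruction(x):
    """Reconstruct the input keeping only phase (unit amplitude).

    This is the classic demonstration that phase retains the
    recognizable structure of a signal even with amplitude discarded.
    """
    phase = np.angle(np.fft.fft2(x))
    return np.real(np.fft.ifft2(np.exp(1j * phase)))
```

Recombining the returned amplitude and phase exactly recovers the original input, which confirms the decomposition is lossless.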
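The mutually-invariant features are learned by aligning second-order statistics across source domains, in the style of CORrelation ALignment (CORAL). A sketch of a CORAL-style loss between two feature batches (NumPy for illustration; in training this would be computed on minibatch features from pairs of source domains):

```python
import numpy as np

def coral_loss(feats_a, feats_b):
    """CORAL-style loss: squared Frobenius distance between the
    feature covariance matrices of two domains, normalized by
    feature dimension. Minimizing it aligns the two domains'
    second-order feature statistics.

    feats_a, feats_b: (n_samples, d) feature matrices.
    """
    d = feats_a.shape[1]
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    return np.sum((cov_a - cov_b) ** 2) / (4 * d * d)
```

The loss is zero when both batches share the same covariance and grows as their feature distributions diverge.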
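The distillation and exploration terms can likewise be sketched as simple feature-space losses: distillation pulls the student's internally-invariant feature half toward the teacher's Fourier-phase features, while the exploration loss pushes the two feature halves apart to promote diversity. The forms below are illustrative simplifications, not the paper's exact formulation:

```python
import numpy as np

def distillation_loss(z_student, z_teacher):
    """Feature-level knowledge distillation: mean squared distance
    between the student's internally-invariant features and the
    teacher's phase-based features (a sketch)."""
    return np.mean((z_student - z_teacher) ** 2)

def exploration_loss(z_internal, z_mutual):
    """Exploration loss: the negative mean squared distance between
    the internally- and mutually-invariant halves. Minimizing it
    pushes the two representations apart, encouraging them to
    capture complementary information."""
    return -np.mean((z_internal - z_mutual) ** 2)
```

In the full objective these terms are weighted and added to the standard classification loss and the cross-domain alignment loss.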
- Example(s):
- Applying DIFEX to train a robust image classifier that generalizes to unseen visual styles using the PACS dataset.
- Utilizing DIFEX in time-series classification for wearable sensor data across different users and devices.
- Implementing DIFEX in medical image analysis to generalize tumor detection models across hospitals with differing imaging protocols.
- ...
- Counter-Example(s):
- Domain-Adversarial Neural Network (DANN), which relies on adversarial loss to align features, unlike DIFEX’s decoupled exploration strategy.
- Transfer Component Analysis (TCA), which performs domain alignment but requires access to the target domain during training.
- Standard Convolutional Neural Networks, which do not explicitly aim for domain invariance and may overfit to training domains.
- ...
- See: Domain Generalization, Fourier Transform, Knowledge Distillation, Correlation Alignment, Feature Disentanglement, Domain-Invariant Feature.
References
2024
- (Lu et al., 2024) ⇒ Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, & Xing Xie. (2024). "Domain-invariant Feature Exploration for Domain Generalization". In: Transactions on Machine Learning Research.
- QUOTE: Domain-Invariant Feature Exploration (DIFEX) captures internal invariance using a knowledge distillation framework to extract high-level Fourier phase as internally-invariant features, and learns cross-domain correlation alignment as mutually-invariant features. Exploration loss enhances feature diversity for improved generalization performance, achieving state-of-the-art results on time-series data and visual data benchmarks.
2022a
- (Lu et al., 2022a) ⇒ Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, & Xing Xie. (2022). "Domain-invariant Feature Exploration for Domain Generalization". arXiv Preprint.
- QUOTE: Domain-Invariant Feature Exploration (DIFEX) enhances domain generalization by learning domain-invariant features originating from both internal invariance (captured via high-level Fourier phase) and mutual invariance (captured via cross-domain correlation alignment), with an exploration loss designed to promote feature diversity and better generalization on unseen domains.
2022b
- (Lu et al., 2022b) ⇒ Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, & Xing Xie. (2022). "Domain-invariant Feature Exploration for Domain Generalization".
- QUOTE: Domain-Invariant Feature Exploration (DIFEX) is a domain generalization approach that promotes internal invariance by extracting Fourier phase components and mutual invariance by aligning cross-domain correlations, using an exploration loss to encourage feature diversity and achieve improved generalization to unseen test distributions.