Generalization Distance
A Generalization Distance is a generalization measure that quantifies the magnitude of the domain shift between the distribution a learning system was trained on and the distribution it is applied to.
- AKA: Transfer Gap, Domain Shift Distance, Generalization Gap, Distribution Distance Measure, Transfer Distance.
- Context:
- It can typically predict model degradation in new environments.
- It can typically inform transfer learning strategy selection.
- It can often correlate with routine thresholds that flag when an ML application has drifted too far from its training domain.
- It can often guide model selection for deployment scenarios.
- It can range from being a Zero Generalization Distance to being an Infinite Generalization Distance, depending on its distribution overlap.
- It can range from being a Euclidean Generalization Distance to being a Wasserstein Generalization Distance, depending on its metric type.
- It can range from being a Feature-Space Generalization Distance to being an Output-Space Generalization Distance, depending on its measurement domain.
- It can range from being a Theoretical Generalization Distance to being an Empirical Generalization Distance, depending on its estimation method (a minimal empirical sketch follows this list).
- ...
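As a concrete illustration, an empirical, feature-space generalization distance could be estimated as the average per-dimension 1-D Wasserstein distance between source and target feature samples. The following minimal sketch assumes pre-extracted feature matrices; the function name and synthetic data are illustrative assumptions, not a standard API.

```python
# Minimal sketch: empirical feature-space generalization distance as the
# mean per-dimension 1-D Wasserstein distance between two feature samples.
import numpy as np
from scipy.stats import wasserstein_distance

def generalization_distance(source_feats: np.ndarray,
                            target_feats: np.ndarray) -> float:
    """Average 1-D Wasserstein distance, computed dimension by dimension,
    between (n_samples, n_features) source and target feature matrices."""
    assert source_feats.shape[1] == target_feats.shape[1]
    per_dim = [
        wasserstein_distance(source_feats[:, j], target_feats[:, j])
        for j in range(source_feats.shape[1])
    ]
    return float(np.mean(per_dim))

# Hypothetical usage: a stronger domain shift yields a larger distance.
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(500, 16))
near_target = rng.normal(0.1, 1.0, size=(500, 16))   # mild shift
far_target = rng.normal(2.0, 1.5, size=(500, 16))    # severe shift
print(generalization_distance(source, near_target))  # small value
print(generalization_distance(source, far_target))   # much larger value
```

A value near zero indicates matching marginal feature distributions (a Zero Generalization Distance in the range above), while growing values signal increasing domain shift.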
- Example(s):
- Computer Vision Generalization Distance, such as a feature-space distance between a benchmark training set and deployment-time camera images (e.g., a Fréchet-style distance over deep image features).
- NLP Generalization Distance, such as a distribution distance between a general-domain training corpus and a specialized target corpus (e.g., over document embeddings; see the kernel-based sketch after this list).
- ...
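A feature-space generalization distance can also be estimated with a kernel statistic. The sketch below computes a biased maximum mean discrepancy (MMD) estimate with an RBF kernel between two hypothetical feature samples; the function name is an assumption for illustration, and the bandwidth is set by the common median heuristic.

```python
# Minimal sketch: kernel-based distribution distance (biased MMD^2 estimate)
# between source and target feature samples, using an RBF kernel.
import numpy as np

def mmd_distance(source: np.ndarray, target: np.ndarray) -> float:
    """Biased MMD^2 estimate: E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)],
    with the RBF bandwidth chosen by the median heuristic."""
    combined = np.vstack([source, target])
    # Squared pairwise distances over all source and target points.
    sq = ((combined[:, None, :] - combined[None, :, :]) ** 2).sum(axis=-1)
    gamma = 1.0 / np.median(sq[sq > 0])  # median-heuristic bandwidth
    k = np.exp(-gamma * sq)
    n = len(source)
    k_ss = k[:n, :n].mean()
    k_tt = k[n:, n:].mean()
    k_st = k[:n, n:].mean()
    return float(k_ss + k_tt - 2.0 * k_st)

# Hypothetical usage: same-distribution samples give a near-zero distance.
rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(200, 8))
same = rng.normal(0.0, 1.0, size=(200, 8))     # same distribution
shifted = rng.normal(1.0, 1.0, size=(200, 8))  # mean-shifted distribution
print(mmd_distance(src, same))     # near zero
print(mmd_distance(src, shifted))  # clearly positive
```

Unlike the per-dimension Wasserstein sketch above, MMD compares the joint feature distributions, so it can also register shifts in correlations between features.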
- Counter-Example(s):
- Training Error, which measures fit rather than transfer.
- Model Complexity, which assesses capacity rather than adaptation.
- Inference Latency, which measures speed rather than generalization.
- See: Generalization Measure, Domain Shift, Distance from the Known, Routine Threshold, Transfer Learning, Generalization Error, Distribution Distance, ML Robustness, Statistical Distance.