Transferability Measurement Task
A Transferability Measurement Task is a model evaluation task that aims to estimate how effectively a model trained on a source task can be reused or adapted for a different target task, without full training or fine-tuning on the target task.
- AKA: Task Transferability Estimation Task, Transferability Scoring Task, Cross-Task Suitability Estimation.
- Context:
- Task Input: A pre-trained source model or feature extractor, and a set of input instances from a target task.
- Optional Input: Target task labels, source logits, class priors, or task metadata.
- Task Output: A transferability score or ranked suitability measure estimating source-to-target task alignment.
- Task Performance Measures: correlation between predicted transferability and actual target-task accuracy (e.g., Kendall's τ, Spearman's ρ), ranking agreement across candidate source models, and computational efficiency (see the evaluation sketch after this list).
- Task Objective: To predict the performance of transfer learning from a source to a target task without requiring full training on the target task.
- It can be systematically solved and automated by a Transferability Measurement System.
- It can be used to guide model reuse, task selection, curriculum design, and dataset pruning in transfer learning pipelines.
- It can enable fast model selection in few-shot, zero-shot, or multi-task learning settings.
- It can be unsupervised (using only unlabeled target data) or semi-supervised (using a few target labels).
- ...
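The following is a minimal Python sketch of this evaluation protocol, using made-up numbers: five hypothetical candidate source models each receive a transferability score, each is also fine-tuned once to obtain its true target-task accuracy, and the two lists are compared by rank correlation.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

# Hypothetical data: transferability scores assigned to five candidate
# source models, and the target-task accuracy each one actually reaches
# after full fine-tuning (the ground truth the scores try to predict).
transferability_scores = np.array([-2.31, -1.87, -2.05, -1.42, -2.60])
finetune_accuracies = np.array([0.61, 0.72, 0.66, 0.79, 0.55])

# A good transferability measure ranks models the same way fine-tuning does.
tau, _ = kendalltau(transferability_scores, finetune_accuracies)
rho, _ = spearmanr(transferability_scores, finetune_accuracies)
print(f"Kendall's tau = {tau:.3f}, Spearman's rho = {rho:.3f}")
```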
- Example(s):
- Estimating task transferability between CIFAR-10 (source) and SVHN (target) using the LEEP Algorithm (see the code sketch after this list).
- Measuring how well a BERT-based source model suits a new NLP task using TransRate.
- Using TMI to assess how well an image classifier transfers to unseen classes with limited examples.
- ...
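As a concrete illustration of the CIFAR-10-to-SVHN example above, the following is a compact Python sketch of the LEEP score from Nguyen et al. (2020); the function name and array conventions here are illustrative, with source_probs holding the source model's softmax outputs on the target inputs.

```python
import numpy as np

def leep_score(source_probs: np.ndarray, target_labels: np.ndarray) -> float:
    """LEEP from source-model softmax outputs.

    source_probs : (n, |Z|) softmax outputs of the source model on the n
                   target inputs, where Z is the source label set.
    target_labels: (n,) integer target labels in {0, ..., |Y|-1}.
    """
    n = len(target_labels)
    num_y = int(target_labels.max()) + 1

    # Empirical joint distribution P(y, z) over target and source labels.
    joint = np.zeros((num_y, source_probs.shape[1]))
    for y, theta in zip(target_labels, source_probs):
        joint[y] += theta
    joint /= n

    # Conditional P(y|z) = P(y, z) / P(z).
    cond = joint / joint.sum(axis=0, keepdims=True)

    # LEEP = average log-likelihood of the expected empirical predictor:
    # p(y_i | x_i) = sum_z P(y_i | z) * theta(x_i)_z
    eep = (source_probs @ cond.T)[np.arange(n), target_labels]
    return float(np.log(eep).mean())
```

LEEP is an average log-likelihood, so it is non-positive, and higher values (closer to zero) predict better transfer.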
- Counter-Example(s):
- Model Evaluation Task, which evaluates fully trained models, not pre-transfer conditions.
- Transfer Learning Task, which performs the adaptation itself rather than estimating its potential.
- Benchmarking Task, which focuses on standard performance measurement, not predictive suitability.
- ...
- See: Transferability Measurement Algorithm, Transfer Learning, Model Evaluation Task, Log Expected Empirical Prediction (LEEP), Negative Conditional Entropy, TransRate, Few-Shot Evaluation Task.
References
2023
- (Xu & Kang, 2023) ⇒ Huiwen Xu, & U Kang. (2023). "Fast and Accurate Transferability Measurement by Evaluating Intra-class Feature Variance". In: Proceedings of ICCV 2023.
- QUOTE: Proposes a transferability measurement method based on intra-class feature variance analysis, achieving 98% correlation with ground-truth transfer accuracy on ImageNet-to-CIFAR benchmarks. The approach requires only 0.02 seconds per measurement and demonstrates label-free capability through self-supervised feature clustering.
2022
- (Huang et al., 2022) ⇒ Long-Kai Huang, Junzhou Huang, Yu Rong, Qiang Yang, & Ying Wei. (2022). "Frustratingly Easy Transferability Estimation". In: Proceedings of ICML 2022.
- QUOTE: Introduces TransRate, a transferability measurement that computes the mutual information between pre-trained features and target labels via coding rate estimation. Evaluates 32 models across 16 tasks with a 10-line implementation, showing 0.92 Spearman correlation with fine-tuning results while requiring only single forward passes.
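The following is a minimal Python sketch of the coding-rate formulation the quote describes: TransRate is estimated as the coding rate of all features minus the class-weighted rates of per-class features. The distortion parameter eps, the centering step, and the per-class weighting are assumptions of this sketch rather than the paper's precise constants.

```python
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.1) -> float:
    """Rate-distortion estimate R(Z, eps) = 1/2 logdet(I + d/(n*eps^2) Z^T Z)."""
    n, d = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + d / (n * eps**2) * Z.T @ Z)
    return 0.5 * logdet

def transrate(Z: np.ndarray, y: np.ndarray, eps: float = 0.1) -> float:
    """TransRate ~ R(Z, eps) - sum_c (n_c/n) R(Z_c, eps): a mutual-information
    estimate between features Z (n, d) and target labels y (n,)."""
    Z = Z - Z.mean(axis=0)  # center the features
    rate_all = coding_rate(Z, eps)
    rate_given_y = sum(
        np.mean(y == c) * coding_rate(Z[y == c], eps) for c in np.unique(y)
    )
    return rate_all - rate_given_y
```

Because the features come from a single forward pass of the frozen source model, the whole computation stays cheap relative to fine-tuning.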
2020
- (Nguyen et al., 2020) ⇒ Cuong V. Nguyen, Tal Hassner, Matthias Seeger, & Cedric Archambeau. (2020). "LEEP: A New Measure to Evaluate Transferability of Learned Representations". In: Proceedings of ICML 2020.
- QUOTE: Proposes Log Expected Empirical Prediction (LEEP) for transferability measurement, requiring only a single forward pass through the source model. Demonstrates 30% improvement over prior methods in ImageNet-to-CIFAR100 transfer prediction accuracy while maintaining linear time complexity relative to dataset size.
2019
- (Tran et al., 2019) ⇒ Anh Tuan Tran, Cuong V. Nguyen, & Tal Hassner. (2019). "Transferability and Hardness of Supervised Classification Tasks". In: Proceedings of ICCV 2019.
- QUOTE: Develops Negative Conditional Entropy (NCE) as a transferability measurement, establishing theoretical connections between task similarity and transfer learning performance. Validated on 15 image datasets with 0.85 Kendall-τ correlation between predicted and actual transfer accuracy across different label-space mismatch scenarios.
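A short Python sketch of the NCE computation follows. NCE compares two labelings of the same inputs, so this sketch assumes a source labeling is already available for the target instances (in practice, e.g., the source model's hard predictions), which is a simplification of the paper's setting.

```python
import numpy as np

def nce_score(source_labels: np.ndarray, target_labels: np.ndarray) -> float:
    """Negative Conditional Entropy NCE = -H(Y|Z) between a source labeling Z
    and a target labeling Y of the same n inputs. Higher values (closer to
    zero) predict easier source-to-target transfer."""
    n = len(source_labels)
    _, z_idx = np.unique(source_labels, return_inverse=True)
    _, y_idx = np.unique(target_labels, return_inverse=True)

    # Empirical joint distribution P(y, z) from co-occurrence counts.
    joint = np.zeros((y_idx.max() + 1, z_idx.max() + 1))
    np.add.at(joint, (y_idx, z_idx), 1.0)
    joint /= n

    # H(Y|Z) = -sum over y,z of P(y,z) * log P(y|z), skipping empty cells.
    p_z = np.broadcast_to(joint.sum(axis=0), joint.shape)
    mask = joint > 0
    h_y_given_z = -np.sum(joint[mask] * np.log(joint[mask] / p_z[mask]))
    return float(-h_y_given_z)
```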