Micro-F1 Measure from Group Counts Method
A Micro-F1 Measure from Group Counts Method is a micro-averaged performance measure computation method that pools confusion matrix counts across all classification classes before computing a single F1 score.
- AKA: Global F1 Computation Method, Pooled Count F1 Method, Micro-Averaged F1 Method, Aggregate F1 Calculation Method.
- Context:
- It can typically compute TP_total = Σ(TP_i), FP_total = Σ(FP_i), FN_total = Σ(FN_i) across classes.
- It can typically calculate F1_micro = 2*TP_total/(2*TP_total + FP_total + FN_total).
- It can typically weight class contributions by class support size.
- It can equal Accuracy in single-label multi-class classification, because every misclassification adds exactly one FP (for the predicted class) and one FN (for the true class) to the pooled counts.
- It can often differ substantially from Macro-F1 Measure from Group Counts Method under class imbalance.
- It can often be preferred when overall performance matters more than per-class performance.
- It can range from being a Binary Micro-F1 Measure from Group Counts Method to being a Multi-Class Micro-F1 Measure from Group Counts Method, depending on its class count.
- It can range from being a Balanced Micro-F1 Measure from Group Counts Method to being an Imbalanced Micro-F1 Measure from Group Counts Method, depending on its class distribution.
- It can range from being a Single-Label Micro-F1 Measure from Group Counts Method to being a Multi-Label Micro-F1 Measure from Group Counts Method, depending on its label assignment.
- It can range from being a Weighted Micro-F1 Measure from Group Counts Method to being an Unweighted Micro-F1 Measure from Group Counts Method, depending on its aggregation scheme.
- ...
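The pooled-count computation above, and its collapse to accuracy in the single-label case, can be sketched in Python (a minimal illustration; the function names `micro_f1` and `counts_from_labels` are ours, not from this article):

```python
# Minimal sketch (illustrative names): pool per-class confusion counts,
# then compute a single F1 over the pooled totals.
def micro_f1(counts):
    """counts: iterable of (TP, FP, FN) tuples, one per class."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    return 2 * tp / (2 * tp + fp + fn)

def counts_from_labels(y_true, y_pred, classes):
    """Derive per-class (TP, FP, FN) from single-label predictions."""
    pairs = list(zip(y_true, y_pred))
    return [
        (sum(t == c and p == c for t, p in pairs),   # TP for class c
         sum(t != c and p == c for t, p in pairs),   # FP for class c
         sum(t == c and p != c for t, p in pairs))   # FN for class c
        for c in classes
    ]

# Single-label multi-class: each error adds one pooled FP and one pooled FN,
# so Micro-F1 collapses to accuracy.
y_true = ["a", "a", "b", "b", "c"]
y_pred = ["a", "b", "b", "c", "c"]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
score = micro_f1(counts_from_labels(y_true, y_pred, ["a", "b", "c"]))
assert abs(score - accuracy) < 1e-12  # both 0.6 here
```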
- Example(s):
- Three-Class Micro-F1 Calculations, such as:
- Class A: TP=80, FP=10, FN=20; Class B: TP=30, FP=5, FN=15; Class C: TP=90, FP=15, FN=10; Micro-F1 = 2*(80+30+90)/(2*(80+30+90)+(10+5+15)+(20+15+10)) = 400/475 = 0.842.
- A heavily imbalanced setting (90% Class A, 8% Class B, 2% Class C), where the pooled score is dominated by Class A performance.
- Multi-Label Micro-F1s, such as:
- Document tagging with 50 possible labels.
- Pooling all label predictions for global F1.
- Comparison with Macro-F1s, such as:
- Same data: Micro-F1=0.85, Macro-F1=0.72 showing class imbalance effect.
- Balanced classes: Micro-F1 ≈ Macro-F1.
- ...
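The three-class worked example above, and its contrast with macro averaging, can be verified with a short script (a sketch with our own variable names, using the counts from the example):

```python
# Per-class (TP, FP, FN) counts from the three-class example above.
counts = {"A": (80, 10, 20), "B": (30, 5, 15), "C": (90, 15, 10)}

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

# Micro: pool the counts first, then compute one F1 over the totals.
tp = sum(c[0] for c in counts.values())   # 200
fp = sum(c[1] for c in counts.values())   # 30
fn = sum(c[2] for c in counts.values())   # 45
micro = f1(tp, fp, fn)                    # 400/475 ~= 0.842

# Macro: compute per-class F1s, then average them unweighted.
macro = sum(f1(*c) for c in counts.values()) / len(counts)  # ~= 0.823

# The low-support Class B (per-class F1 = 0.75) pulls the macro
# average below the pooled micro score.
```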
- Counter-Example(s):
- Macro-F1 Measure from Group Counts Method, which averages per-class F1s.
- Weighted F1 Measure from Group Counts Method, which uses custom class weights.
- Per-Class F1 Method, which doesn't aggregate.
- See: Performance Measure Computation Method, Micro-F1 Measure, F1 Score, Confusion Matrix, Count Aggregation, Macro-F1 Measure from Group Counts Method, Multi-Class Classification, Class Imbalance, Global Performance Metric, Pooled Estimation Method.