Micro-Recall Metric
A Micro-Recall Metric is a micro-averaged performance measure and a recall metric that is computed by aggregating true positives and false negatives globally across all classes before taking the ratio.
- AKA: Micro-Recall, Global Recall Metric, Pooled Recall Measure, Aggregate Sensitivity Metric.
- Context:
- It can typically calculate Micro-Recall Score Values by summing the true positives of every class and dividing by the global sum of true positives and false negatives across all classes (see the computational sketch after this Context list).
- It can typically weight Micro-Recall Class Contributions implicitly by their micro-recall actual frequency, giving more influence to micro-recall frequent classes.
- It can typically provide Micro-Recall Global Sensitivity that measures overall micro-recall detection rate across all micro-recall positive instances.
- It can typically equal Micro-Recall Overall Accuracy in single-label multi-class settings, because each instance has exactly one true label and one predicted label, so every false negative for one class is a false positive for another and the pooled denominator equals the instance count.
- It can typically serve as a component for computing Micro-F1 Measures when combined with micro-precision metrics.
- ...
- It can often yield different values than macro-recall metrics under micro-recall class imbalance, because frequent classes dominate the pooled counts.
- It can often be preferred when Micro-Recall Overall Detection Rate matters more than micro-recall per-class sensitivity.
- It can often be computed efficiently from Micro-Recall Global Counters without micro-recall per-class calculations.
- It can often be interpreted as Micro-Recall Instance-Level Sensitivity rather than micro-recall class-level sensitivity.
- ...
- It can range from being a Low Micro-Recall Metric to being a High Micro-Recall Metric, depending on its micro-recall detection quality.
- It can range from being a Conservative Micro-Recall Metric to being a Liberal Micro-Recall Metric, depending on its micro-recall prediction threshold.
- It can range from being a Binary-Derived Micro-Recall Metric to being a Native Multi-Class Micro-Recall Metric, depending on its micro-recall computation method.
- It can range from being a Stable Micro-Recall Metric to being a Volatile Micro-Recall Metric, depending on its micro-recall temporal variance.
- It can range from being a Balanced Micro-Recall Metric to being an Imbalanced Micro-Recall Metric, depending on its micro-recall class distribution.
- ...
- It can be calculated using Micro-Recall Formula: ΣTP / (ΣTP + ΣFN) across all classes.
- It can be visualized through Micro-Recall Performance Curves showing micro-recall threshold sensitivity.
- It can be monitored through Micro-Recall Tracking Dashboards for micro-recall model evaluation.
- It can be optimized using Micro-Recall Enhancement Techniques targeting micro-recall false negative reduction.
- It can be compared with Micro-Recall Baseline Scores to assess micro-recall improvement.
- ...
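The following Python sketch (an illustration, not a reference implementation from this page; the function name micro_recall and the toy labels are assumptions) shows the pooled computation ΣTP / (ΣTP + ΣFN) and why it coincides with overall accuracy in the single-label multi-class case.

```python
def micro_recall(y_true, y_pred):
    """Micro-Recall = sum_c TP_c / (sum_c TP_c + sum_c FN_c)."""
    classes = set(y_true) | set(y_pred)
    tp_total = 0
    fn_total = 0
    for c in classes:
        # Pool counts globally rather than computing per-class recall.
        tp_total += sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn_total += sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
    return tp_total / (tp_total + fn_total)

# Imbalanced 3-class toy data: 8 instances of class "a", 1 each of "b" and "c".
y_true = ["a"] * 8 + ["b", "c"]
y_pred = ["a"] * 8 + ["c", "b"]        # both rare-class instances are missed

print(micro_recall(y_true, y_pred))    # 0.8 -- dominated by the frequent class

# In single-label multi-class classification every instance contributes exactly
# one true label, so pooled TP + FN equals the number of instances and
# micro-recall equals overall accuracy.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                        # 0.8
```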
- Example(s):
- Medical Diagnosis Micro-Recall Metrics, such as:
- Security System Micro-Recall Metrics, such as:
- Information Retrieval Micro-Recall Metrics, such as:
- Quality Control Micro-Recall Metrics, such as:
- ...
- Counter-Example(s):
- Macro-Recall Metric, which calculates recall for each class individually and then averages the per-class values without weighting by class frequency (contrasted in the sketch after this list).
- Weighted Recall Metric, which applies explicit class weights rather than implicit frequency-based weighting.
- Micro-Precision Metric, which measures positive predictive value rather than true positive rate.
- Micro-F1 Measure, which combines micro-recall with micro-precision rather than measuring recall alone.
- Class-Specific Recall Metric, which evaluates individual class recall rather than global aggregation.
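A minimal sketch of the macro-recall counter-example on the same imbalanced toy data used above; the function names and labels are illustrative assumptions. Micro-recall pools the counts (0.8 on this data), while macro-recall averages per-class recalls unweighted.

```python
def per_class_recall(y_true, y_pred, c):
    # Recall for a single class c: TP_c / (TP_c + FN_c).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
    return tp / (tp + fn) if (tp + fn) else 0.0

def macro_recall(y_true, y_pred):
    # Unweighted mean of per-class recalls, regardless of class frequency.
    classes = sorted(set(y_true))
    return sum(per_class_recall(y_true, y_pred, c) for c in classes) / len(classes)

y_true = ["a"] * 8 + ["b", "c"]
y_pred = ["a"] * 8 + ["c", "b"]      # both rare-class instances are missed

# Per-class recalls: a = 1.0, b = 0.0, c = 0.0, so macro-recall = (1.0 + 0.0 + 0.0) / 3.
print(macro_recall(y_true, y_pred))  # ~0.333, versus micro-recall of 0.8
```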
- See: Recall Metric, Micro-Averaged Performance Measure, Macro-Recall Metric, Micro-Precision Metric, Micro-F1 Measure, Sensitivity Measure, True Positive Rate, Multi-Class Classification Task, True Positive, False Negative, Confusion Matrix.