F1 Measure from Counts Method

An F1 Measure from Counts Method is a performance measure computation method that calculates Fβ-score measures (most commonly F1-score metrics) directly from true positive (TP) counts, false positive (FP) counts, and false negative (FN) counts, with a continuity correction.
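
The counts-based computation follows from the standard formula F1 = 2·TP / (2·TP + FP + FN) (see [1]). Below is a minimal Python sketch, not a reference implementation from any cited source: the function names are illustrative, and the `epsilon` term is one assumed reading of the continuity correction, namely a small additive constant that keeps the result defined when TP = FP = FN = 0.

```python
# Minimal sketch of F1 (and general F-beta) computed directly from counts.
# The epsilon continuity correction is an assumed interpretation: it only
# guards against a zero denominator when all counts are zero.

def f1_from_counts(tp: int, fp: int, fn: int, epsilon: float = 1e-12) -> float:
    """F1 = 2*TP / (2*TP + FP + FN), computed without first forming
    precision and recall."""
    return (2 * tp) / (2 * tp + fp + fn + epsilon)


def fbeta_from_counts(tp: int, fp: int, fn: int, beta: float = 1.0,
                      epsilon: float = 1e-12) -> float:
    """F_beta = (1 + beta^2)*TP / ((1 + beta^2)*TP + beta^2*FN + FP)."""
    b2 = beta * beta
    return ((1 + b2) * tp) / ((1 + b2) * tp + b2 * fn + fp + epsilon)


# Example: TP = 8, FP = 2, FN = 4
# precision = 8/10 = 0.8, recall = 8/12 ~ 0.667, F1 = 16/22 ~ 0.727
print(f1_from_counts(8, 2, 4))
```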



References

2025-01-03

[1] Scikit-learn Documentation -- "f1_score": Definition of F1 as harmonic mean of precision and recall; formula in terms of TP, FP, FN. https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html
[2] Google Developers ML Crash Course -- "Accuracy, Precision, Recall, and F1": Motivations for F1, especially in imbalanced data, and effect of precision/recall imbalance on F1. https://developers.google.com/machine-learning/crash-course/classification/accuracy-precision-recall
[3] Futurense Blog (2025) -- "F1 Score in Machine Learning: Formula, Range & Interpretation": Explanation of micro vs macro vs weighted F1 averaging. https://futurense.com/uni-blog/f1-score-machine-learning
[4] V7 Labs Blog -- "Intro to F1 score": Discussion of precision, recall, F1, and multi-class averaging strategies. https://www.v7labs.com/blog/f1-score-guide
[5] Stack Exchange (Data Science) -- Q&A on "mean F1-score": Distinction between averaging F1s vs computing from aggregated counts (overall F1); see the sketch below. https://datascience.stackexchange.com/questions/16179/what-is-the-correct-way-to-compute-mean-f1-score
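
The contrast raised in [3] and [5], macro-averaged F1 (the mean of per-class F1 scores) versus micro or "overall" F1 (computed from counts aggregated across classes), can be illustrated with a short sketch. The per-class counts below are hypothetical, and the `f1` helper is the same counts-based formula shown above.

```python
# Illustrative comparison of macro-F1 (average of per-class F1s) and
# micro-F1 (F1 from counts summed across classes). Counts are made up.
from typing import Dict, Tuple

def f1(tp: int, fp: int, fn: int) -> float:
    denom = 2 * tp + fp + fn
    return (2 * tp / denom) if denom else 0.0

# per-class (TP, FP, FN) counts -- hypothetical values
counts: Dict[str, Tuple[int, int, int]] = {
    "class_a": (90, 10, 5),   # frequent class, high F1
    "class_b": (3, 2, 12),    # rare class, low F1
}

macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)
micro_f1 = f1(sum(c[0] for c in counts.values()),
              sum(c[1] for c in counts.values()),
              sum(c[2] for c in counts.values()))

print(f"macro-F1: {macro_f1:.3f}")   # pulled down by the rare class (~0.61)
print(f"micro-F1: {micro_f1:.3f}")   # dominated by the frequent class (~0.87)
```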