# True Positive Classification


A True Positive Classification is a positive prediction by a binary classifier that correctly matches the instance's actual class.

**AKA:** TP Outcome.

**Context:**
- It can be a member of a True Positive Classification Set (used to calculate a true positive rate).
- …

**Counter-Example(s):**
- a False Positive Classification.

**See:** True Positive Rate, Confusion Matrix, True Belief.
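The context above mentions counting true positives to calculate a true positive rate. As a minimal illustrative sketch (the label lists below are hypothetical examples, with 1 = positive and 0 = negative):

```python
# Hypothetical actual and predicted labels (1 = positive, 0 = negative).
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]

# A prediction is a true positive when the instance is actually positive
# AND the classifier predicts positive.
tp = sum(1 for actual, predicted in zip(y_true, y_pred)
         if actual == 1 and predicted == 1)

# True positive rate = true positives / all actual positives.
positives = sum(y_true)
tpr = tp / positives

print(tp, tpr)  # 3 true positives out of 4 actual positives -> TPR = 0.75
```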

## References

### 2017

- (Sammut & Webb, 2017) ⇒ Claude Sammut, and Geoffrey I. Webb. (2017). "True Positive". In: "Encyclopedia of Machine Learning and Data Mining." Springer. DOI:10.1007/978-1-4899-7687-1_855
- QUOTE: *True positives* are the positive examples that are correctly classified by a classification model. See confusion matrix for a complete range of related terms.

### 2006

- (Fawcett, 2006) ⇒ Tom Fawcett. (2006). "An Introduction to ROC Analysis." In: Pattern Recognition Letters, 27(8). DOI:10.1016/j.patrec.2005.10.010
- QUOTE: Given a classifier and an instance, there are four possible outcomes. If the instance is positive and it is classified as positive, it is counted as a *true positive*; if it is classified as negative, it is counted as a *false negative*. If the instance is negative and it is classified as negative, it is counted as a *true negative*; if it is classified as positive, it is counted as a *false positive*. Given a classifier and a set of instances (the test set), a two-by-two *confusion matrix* (also called a contingency table) can be constructed representing the dispositions of the set of instances.
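Fawcett's four outcomes can be sketched as a small function that tallies a two-by-two confusion matrix from actual and predicted labels. This is a minimal illustration, not Fawcett's own code; the label lists and the dictionary layout are assumptions (1 = positive, 0 = negative):

```python
def confusion_matrix(y_true, y_pred):
    """Tally the four outcomes for a binary classifier's predictions."""
    tp = fp = tn = fn = 0
    for actual, predicted in zip(y_true, y_pred):
        if actual == 1 and predicted == 1:
            tp += 1   # positive instance classified as positive
        elif actual == 1 and predicted == 0:
            fn += 1   # positive instance classified as negative
        elif actual == 0 and predicted == 0:
            tn += 1   # negative instance classified as negative
        else:
            fp += 1   # negative instance classified as positive
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn}

# Hypothetical example labels.
print(confusion_matrix([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
# {'TP': 2, 'FP': 1, 'TN': 1, 'FN': 1}
```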
