# True Positive Success Rate

(Redirected from Recall Metric)

A True Positive Success Rate is a binary classification performance measure based on the probability that a positive test instance receives a positive prediction.

• AKA: Recall Measure, Sensitivity, R, TPR.
• Context:
• It can be Estimated by: TP / (TP + FN)
• i.e., the # of correct positive predictions given by the system as a proportion of the total # of actual positive instances.
• i.e., the proportion of truly positive cases that receive a positive test result.
• It can be illustrated over a series of cutoffs for defining an Accurate Prediction with a Receiver Operator Curve.
• Example(s):
• The probability of a positive test result in a patient who has the disease under consideration, e.g., the probability that a test for cancer predicts that a patient has cancer when in fact they do have cancer.
• Counter-Example(s):
• a True Negative Rate (Specificity).
• a Precision Measure (Positive Predictive Value).
• See: Receiver Operator Curve, Prevalence.
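The estimator TP / (TP + FN) above can be sketched as a small Python function (a minimal illustration; the function name and the counts are hypothetical):

```python
def true_positive_rate(tp, fn):
    """Recall / sensitivity: TP / (TP + FN)."""
    if tp + fn == 0:
        raise ValueError("no actual positive instances in the sample")
    return tp / (tp + fn)

# Hypothetical counts: 40 positives correctly flagged, 10 missed.
print(true_positive_rate(40, 10))  # 0.8
```

Note that the denominator is the count of *actual* positives (TP + FN), not the count of positive predictions (TP + FP), which would instead give precision.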

## References

### 2015

• (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Precision_and_recall Retrieved:2015-1-20.
• In pattern recognition and information retrieval with binary classification, precision (also called positive predictive value) is the fraction of retrieved instances that are relevant, while recall (also known as sensitivity) is the fraction of relevant instances that are retrieved. Both precision and recall are therefore based on an understanding and measure of relevance. Suppose a program for recognizing dogs in scenes from a video identifies 7 dogs in a scene containing 9 dogs and some cats. If 4 of the identifications are correct, but 3 are actually cats, the program's precision is 4/7 while its recall is 4/9. When a search engine returns 30 pages only 20 of which were relevant while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3.

In statistics, if the null hypothesis is that all and only the relevant items are retrieved, absence of type I and type II errors corresponds respectively to maximum precision (no false positive) and maximum recall (no false negative). The above pattern recognition example contained 7 − 4 = 3 type I errors and 9 − 4 = 5 type II errors. Precision can be seen as a measure of exactness or quality, whereas recall is a measure of completeness or quantity.

In simple terms, high precision means that an algorithm returned substantially more relevant results than irrelevant, while high recall means that an algorithm returned most of the relevant results.
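The two worked examples above (the dog recognizer and the search engine) can be checked with a short sketch; exact fractions are used so the results match the quoted values 4/7, 4/9, 2/3, and 1/3:

```python
from fractions import Fraction

def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return Fraction(tp, tp + fp), Fraction(tp, tp + fn)

# Dog-recognition example: 7 identifications, 4 correct (3 were cats),
# in a scene containing 9 actual dogs (so 5 dogs were missed).
p, r = precision_recall(tp=4, fp=3, fn=5)
print(p, r)  # 4/7 4/9

# Search-engine example: 30 pages returned, 20 relevant,
# with 40 additional relevant pages not returned.
p, r = precision_recall(tp=20, fp=10, fn=40)
print(p, r)  # 2/3 1/3
```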

• (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Sensitivity_and_specificity Retrieved:2015-7-19.
• Sensitivity and specificity are statistical measures of the performance of a binary classification test, also known in statistics as a classification function:
• Sensitivity (also called the true positive rate, or the recall in some fields) measures the proportion of positives that are correctly identified as such (e.g., the percentage of sick people who are correctly identified as having the condition).
• Specificity (also called the true negative rate) measures the proportion of negatives that are correctly identified as such (e.g., the percentage of healthy people who are correctly identified as not having the condition).
• For any test, there is usually a trade-off between the measures. For instance, in an airport security setting in which one is testing for potential threats to safety, scanners may be set to trigger on low-risk items like belt buckles and keys (low specificity), in order to reduce the risk of missing objects that do pose a threat to the aircraft and those aboard (high sensitivity). This trade-off can be represented graphically as a receiver operating characteristic curve.

A perfect predictor would be described as 100% sensitive (e.g., all sick are identified as sick) and 100% specific (e.g., all healthy are not identified as sick); however, theoretically any predictor will possess a minimum error bound known as the Bayes error rate.
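Sensitivity and specificity can be computed together from paired labels and predictions; the sketch below uses hypothetical 0/1 screening data (1 = sick/positive, 0 = healthy/negative):

```python
def sensitivity_specificity(actual, predicted):
    """Return (sensitivity, specificity) from parallel 0/1 label lists."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    # Sensitivity: proportion of actual positives identified as such.
    # Specificity: proportion of actual negatives identified as such.
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening results for 10 patients.
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
sens, spec = sensitivity_specificity(actual, predicted)
print(sens, spec)  # 0.75 and 4/6 ≈ 0.667
```

Lowering the decision threshold in this setting moves negatives into the predicted-positive column: sensitivity rises while specificity falls, which is exactly the trade-off the receiver operating characteristic curve plots.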