Recall Metric
A Recall Metric is a performance metric for a binary classification model based on the probability that a positive test instance is correctly predicted as positive.
 AKA: True Positive Rate, Sensitivity, R, TPR.
 Context:
 It can be Estimated by: TP / (TP + FN), i.e. the number of correct positive predictions made by the system as a proportion of the total number of actual positive instances.
 It can be interpreted as the proportion of cases that have the condition who receive a positive test result (i.e. who are correctly diagnosed).
 It can be illustrated over a series of cutoffs for defining an Accurate Prediction with a Receiver Operator Curve.
 Example(s):
 The probability of a positive test result in a patient who has the disease under consideration. E.g. probability that a test for cancer will predict that a patient has cancer when in fact they do have cancer.
 CounterExample(s):
 a Precision Metric, which instead measures the proportion of positive predictions that are correct.
 See: Receiver Operator Curve, Prevalence.
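The TP / (TP + FN) estimator above can be sketched as a small function (a minimal illustration; the counts used are hypothetical):

```python
def recall(tp: int, fn: int) -> float:
    """True positive rate: correctly predicted positives / all actual positives."""
    if tp + fn == 0:
        raise ValueError("recall is undefined when there are no actual positives")
    return tp / (tp + fn)

# Hypothetical screening test that detects 80 of 100 diseased patients
# (20 false negatives):
print(recall(tp=80, fn=20))  # 0.8
```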
References
2015
 (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/precision_and_recall Retrieved:2015120.
 In pattern recognition and information retrieval with binary classification, precision (also called positive predictive value) is the fraction of retrieved instances that are relevant, while recall (also known as sensitivity) is the fraction of relevant instances that are retrieved. Both precision and recall are therefore based on an understanding and measure of relevance. Suppose a program for recognizing dogs in scenes from a video identifies 7 dogs in a scene containing 9 dogs and some cats. If 4 of the identifications are correct, but 3 are actually cats, the program's precision is 4/7 while its recall is 4/9. When a search engine returns 30 pages only 20 of which were relevant while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3.
In statistics, if the null hypothesis is that all and only the relevant items are retrieved, absence of type I and type II errors corresponds respectively to maximum precision (no false positive) and maximum recall (no false negative). The above pattern recognition example contained 7 − 4 = 3 type I errors and 9 − 4 = 5 type II errors. Precision can be seen as a measure of exactness or quality, whereas recall is a measure of completeness or quantity.
In simple terms, high precision means that an algorithm returned substantially more relevant results than irrelevant, while high recall means that an algorithm returned most of the relevant results.
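The worked numbers in the passage above can be checked directly; exact fractions avoid floating-point rounding:

```python
from fractions import Fraction

# Dog-recognition example: 7 identifications, 4 correct, 9 dogs in the scene.
dog_precision = Fraction(4, 7)   # correct identifications / all identifications
dog_recall = Fraction(4, 9)      # correct identifications / all dogs present

# Search-engine example: 30 pages returned, 20 relevant, 40 relevant pages missed.
search_precision = Fraction(20, 30)    # reduces to 2/3
search_recall = Fraction(20, 20 + 40)  # reduces to 1/3

print(dog_precision, dog_recall, search_precision, search_recall)
# 4/7 4/9 2/3 1/3
```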
 (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/sensitivity_and_specificity Retrieved:2015120.
 Sensitivity and specificity are statistical measures of the performance of a binary classification test, also known in statistics as classification function. Sensitivity (also called the true positive rate, or the recall rate in some fields) measures the proportion of actual positives which are correctly identified as such (e.g., the percentage of sick people who are correctly identified as having the condition), and is complementary to the false negative rate. Specificity (sometimes called the true negative rate) measures the proportion of negatives which are correctly identified as such (e.g., the percentage of healthy people who are correctly identified as not having the condition), and is complementary to the false positive rate.
A perfect predictor would be described as 100% sensitive (e.g., all sick are identified as sick) and 100% specific (e.g., all healthy are not identified as sick); however, theoretically any predictor will possess a minimum error bound known as the Bayes error rate.
For any test, there is usually a tradeoff between the measures. For instance, in an airport security setting in which one is testing for potential threats to safety, scanners may be set to trigger on low-risk items like belt buckles and keys (low specificity), in order to reduce the risk of missing objects that do pose a threat to the aircraft and those aboard (high sensitivity). This tradeoff can be represented graphically as a receiver operating characteristic curve.
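Sensitivity and specificity, together with their complements (the false negative and false positive rates), can be computed from the four confusion-matrix counts; the counts below are hypothetical:

```python
# Hypothetical screening of 1000 people: 90 have the condition, 910 do not.
tp, fn = 81, 9     # 81 of the 90 sick people test positive
tn, fp = 819, 91   # 819 of the 910 healthy people test negative

sensitivity = tp / (tp + fn)   # 0.9; complementary to the FNR: fn/(tp+fn) = 0.1
specificity = tn / (tn + fp)   # 0.9; complementary to the FPR: fp/(tn+fp) = 0.1
print(sensitivity, specificity)  # 0.9 0.9
```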
2011
 (Sammut & Webb, 2011) ⇒ Claude Sammut, and Geoffrey I. Webb. (2011). "Recall." In: (Sammut & Webb, 2011) p.829
 (Sammut & Webb, 2011) ⇒ Claude Sammut, and Geoffrey I. Webb. (2011). "Sensitivity." In: (Sammut & Webb, 2011) p.901
 (Ting, 2011d) ⇒ Kai Ming Ting. (2011). "Sensitivity and Specificity." In: (Sammut & Webb, 2011) p.901
2009
 (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Sensitivity_(tests)#Sensitivity
 A sensitivity of 100% means that the test recognizes all sick people as such. Thus in a high sensitivity test, a negative result is used to rule out the disease.
 Sensitivity alone does not tell us how well the test predicts the other class (that is, the negative cases). In binary classification, this is measured by the corresponding specificity, or equivalently, the sensitivity for the other class.
 http://www.health.state.mn.us/newbornscreening/glossary.html#positive
 Positive predictive value: This is a measure of how well a test correctly finds individuals who truly have the condition being checked for. It is the proportion of individuals with positive test results who are correctly diagnosed.
2000
 (Jurafsky & Martin, 2000) ⇒ Daniel Jurafsky, and James H. Martin. (2000). "Speech and Language Processing."
 "Recall is a measure of how much relevant information the system has extracted from the text; it is thus a measure of the coverage of the system."
1998
 (Kohavi & Provost, 1998) ⇒ Ron Kohavi, and Foster Provost. (1998). "Glossary of Terms." In: Machine Learning 30(2-3).
 True positive rate (Recall, Sensitivity): d/(c+d).
 Sensitivity: True positive rate (see Confusion matrix).
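The d/(c+d) formula uses Kohavi & Provost's 2x2 confusion-matrix cell labels; assuming their convention (a = true negatives, b = false positives, c = false negatives, d = true positives), it reduces to the familiar TP / (TP + FN). A sketch with hypothetical counts:

```python
# Confusion-matrix cells in the (assumed) Kohavi & Provost layout:
#                  predicted negative   predicted positive
# actual negative         a                    b
# actual positive         c                    d
a, b, c, d = 50, 10, 5, 35   # hypothetical counts

true_positive_rate = d / (c + d)    # recall / sensitivity
false_positive_rate = b / (a + b)   # complement of specificity
print(true_positive_rate)  # 0.875
```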