# False Negative Classification

A False Negative Classification is a decisioning error in which a negative prediction is made for an instance whose actual class is positive.

**AKA:** FN Outcome.

**Context:**
- It can be a member of a False Negative Prediction Set (used to calculate a false negative error rate).

**Example(s):**
- If a model predicts that a person does not have cancer when in fact they do have cancer, then the prediction is labeled as a False Negative Prediction.
- a Type II Hypothesis Testing Error.
- …

**Counter-Example(s):**
- a True Positive Classification.
- a True Negative Classification.
- a False Positive Classification.

**See:** Relation; Misclassification Error.
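The four outcome labels above (TP, FN, TN, FP) can be sketched with a small helper; the function name `classify_outcome` is illustrative, not from the source:

```python
def classify_outcome(actual: bool, predicted: bool) -> str:
    """Label a single binary prediction against its actual class."""
    if predicted and actual:
        return "TP"  # true positive
    if predicted and not actual:
        return "FP"  # false positive
    if not predicted and actual:
        return "FN"  # false negative: predicted negative, actually positive
    return "TN"      # true negative

# The cancer example above: model predicts "no cancer" for a person who has cancer.
print(classify_outcome(actual=True, predicted=False))  # → FN
```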

## References

### 2015

- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/false_positives_and_false_negatives Retrieved: 2015-01-20.
- QUOTE: In medical testing, and more generally in binary classification, a **false positive** is an error in data reporting in which a test result indicates that a condition – such as a disease – is present (the result is *positive*), but it is not in fact present (the result is *false*), while a **false negative** is when a test result indicates that a condition is not present (the result is *negative*), but it is in fact present (the result is *true*). These are the two kinds of errors in a binary test, and are contrasted with a correct result, either a **true positive** or a **true negative**. These are also known in medicine as a **false positive diagnosis** (resp. **false negative diagnosis**), and in statistical classification as a **false positive error** (resp. **false negative error**). In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis. The terms are often used interchangeably, but there are differences in detail and interpretation due to the differences between medical testing and statistical hypothesis testing.
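The false negative error rate mentioned in the Context section (the type II error analogue, often called the miss rate, FN / (FN + TP)) can be sketched over a hypothetical set of labels and predictions; the data below is invented for illustration:

```python
# Hypothetical test-set labels and predictions (1 = condition present).
actuals     = [1, 1, 1, 0, 0, 1, 0, 1]
predictions = [1, 0, 1, 0, 1, 1, 0, 0]

# False negatives: actual positive, predicted negative.
fn = sum(1 for a, p in zip(actuals, predictions) if a == 1 and p == 0)
# True positives: actual positive, predicted positive.
tp = sum(1 for a, p in zip(actuals, predictions) if a == 1 and p == 1)

false_negative_rate = fn / (fn + tp)  # miss rate over the positive instances
print(fn, false_negative_rate)  # → 2 0.4
```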

### 2006

- (Fawcett, 2006) ⇒ Tom Fawcett. (2006). “An Introduction to ROC Analysis.” In: Pattern Recognition Letters, 27(8). doi:10.1016/j.patrec.2005.10.010
- QUOTE: ... Given a classifier and an instance, there are four possible outcomes. If the instance is positive and it is classified as positive, it is counted as a *true positive*; if it is classified as negative, it is counted as a *false negative*. If the instance is negative and it is classified as negative, it is counted as a *true negative*; if it is classified as positive, it is counted as a *false positive*. Given a classifier and a set of instances (the test set), a two-by-two *confusion matrix* (also called a contingency table) can be constructed representing the dispositions of the set of instances. …

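Fawcett's four possible outcomes can be tallied into the two-by-two confusion matrix he describes; a minimal sketch, assuming binary labels encoded as 0/1:

```python
from collections import Counter

def confusion_matrix(actuals, predictions):
    """Count TP/FN/TN/FP as in Fawcett's four possible outcomes."""
    counts = Counter()
    for a, p in zip(actuals, predictions):
        if a:
            counts["TP" if p else "FN"] += 1  # positive instance
        else:
            counts["FP" if p else "TN"] += 1  # negative instance
    return counts

# Tiny illustrative test set (invented data).
cm = confusion_matrix([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(dict(cm))  # → {'TP': 2, 'FN': 1, 'TN': 1, 'FP': 1}
```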