
GM-RKB β

Confusion Matrix

A confusion matrix is a contingency table that tabulates a classifier's class predictions against the actual class labels on some labeled dataset.

ACTUAL \ PREDICTED |  A |  B |  C | SUM
A                  | 20 |  2 | 11 |  33
B                  |  2 | 25 |  1 |  28
C                  |  9 |  5 | 24 |  38
SUM                | 31 | 32 | 36 |  99
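As a sketch of how such a table is tallied, the following minimal Python function (names and toy labels are hypothetical, not the counts above) accumulates (actual, predicted) pairs into a nested dictionary:

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    """Count (actual, predicted) pairs into a nested dict: matrix[a][p]."""
    counts = Counter(zip(actual, predicted))
    return {a: {p: counts[(a, p)] for p in labels} for a in labels}

# Toy example:
actual    = ["A", "A", "B", "C", "C", "C"]
predicted = ["A", "B", "B", "C", "A", "C"]
m = confusion_matrix(actual, predicted, labels=["A", "B", "C"])
print(m["A"]["A"])  # diagonal entry: correct A predictions -> 1
print(m["C"]["A"])  # off-diagonal entry: C instances mislabeled as A -> 1
```

Row sums of this structure give the per-class actual counts, matching the SUM column of the table above.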



References

2018

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Confusion_matrix Retrieved: 2018-07-19.
    • In the field of machine learning and specifically the problem of statistical classification, a confusion matrix, also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one (in unsupervised learning it is usually called a matching matrix). Each row of the matrix represents the instances in a predicted class while each column represents the instances in an actual class (or vice versa).[1] The name stems from the fact that it makes it easy to see if the system is confusing two classes (i.e. commonly mislabeling one as another).

      It is a special kind of contingency table, with two dimensions ("actual" and "predicted"), and identical sets of "classes" in both dimensions (each combination of dimension and class is a variable in the contingency table).
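The row/column convention matters when reading such a table: transposing the matrix swaps the "rows = predicted" and "rows = actual" layouts (the "vice versa" above), while the diagonal of correctly classified counts is unchanged. A minimal plain-Python sketch, reusing the counts from the 3-class table above:

```python
# One convention: rows = one dimension (e.g. actual), columns = the other.
M = [[20, 2, 11],
     [2, 25, 1],
     [9, 5, 24]]

# Transposing swaps the convention; the diagonal is invariant.
M_T = [list(row) for row in zip(*M)]
correct = sum(M[i][i] for i in range(3))
print(correct)  # 20 + 25 + 24 = 69 correctly classified instances
```

Off-diagonal cells move under transposition (M[2][0] becomes M_T[0][2]), which is why a confusion matrix should always state which dimension is "actual" and which is "predicted".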


1998

actual \ predicted | negative | positive
Negative           |    a     |    b
Positive           |    c     |    d
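In this binary layout, a and d count correct predictions (true negatives and true positives), while b and c count the two error types (false positives and false negatives). A minimal sketch, assuming 0/1-encoded labels (function name hypothetical):

```python
def binary_confusion(actual, predicted):
    """Return (a, b, c, d) = (TN, FP, FN, TP) following the table's layout."""
    a = sum(1 for y, p in zip(actual, predicted) if y == 0 and p == 0)  # true negatives
    b = sum(1 for y, p in zip(actual, predicted) if y == 0 and p == 1)  # false positives
    c = sum(1 for y, p in zip(actual, predicted) if y == 1 and p == 0)  # false negatives
    d = sum(1 for y, p in zip(actual, predicted) if y == 1 and p == 1)  # true positives
    return a, b, c, d

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0]
print(binary_confusion(y_true, y_pred))  # (2, 1, 1, 2)
```

Common metrics fall out directly from these four cells, e.g. accuracy = (a + d) / (a + b + c + d).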

1971

  • (Townsend, 1971) ⇒ J. T. Townsend. (1971). “Theoretical Analysis of an Alphabetic Confusion Matrix.” In: Attention, Perception, & Psychophysics, 9(1).
    • ABSTRACT: Attempted to acquire a confusion matrix of the entire upper-case English alphabet with a simple nonserified font under tachistoscopic conditions. This was accomplished with 2 experimental conditions, 1 with blank poststimulus field and 1 with noisy poststimulus field, for 6 Ss in 650 trials each. Results were: (a) the finite-state model that assumed stimulus similarity (the overlap activation model) and the choice model predicted the confusion-matrix entries about equally well in terms of a sum-of-squared deviations criterion and better than the all-or-none activation model, which assumed only a perfect perception or random-guessing state following a stimulus presentation; (b) the parts of the confusion matrix that fit best varied with the particular model, and this finding was related to the models; (c) the best scaling result in terms of a goodness-of-fit measure was obtained with the blank poststimulus field condition, with a technique allowing different distances for tied similarity values, and with the Euclidean as opposed to the city-block metric; and (d) there was agreement among the models in terms of the way in which the models reflected sensory and response bias structure in the data, and in the way in which a single model measured these attributes across experimental conditions, as well as agreement among similarity and distance measures with physical similarity. (24 ref.)

  1. Powers, David M. W. (2011). “Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation.” In: Journal of Machine Learning Technologies, 2(1).