k-Nearest Neighbor (kNN) Algorithm

From GM-RKB

A k-Nearest Neighbor (kNN) Algorithm is a search algorithm that can be implemented by a kNN system to solve a kNN task (locating the k items most similar to a query item according to some distance function).



References

2017

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/k-nearest_neighbors_algorithm Retrieved: 2017-08-30.
    • In pattern recognition, the k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression:
      • In k-NN classification, the output is a class membership. An object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor.
      • In k-NN regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbors.

      k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. The k-NN algorithm is among the simplest of all machine learning algorithms. Both for classification and regression, it can be useful to assign weights to the contributions of the neighbors, so that the nearer neighbors contribute more to the average than the more distant ones. For example, a common weighting scheme consists in giving each neighbor a weight of 1/d, where d is the distance to the neighbor. [1]

      The neighbors are taken from a set of objects for which the class (for k-NN classification) or the object property value (for k-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required.

      A peculiarity of the k-NN algorithm is that it is sensitive to the local structure of the data. The algorithm is not to be confused with k-means, another popular machine learning technique.

  1. This scheme is a generalization of linear interpolation.
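
  The voting, averaging, and 1/d-weighting rules quoted above translate directly into a short brute-force implementation. The Python sketch below is illustrative only, not taken from the cited sources; the names euclidean, knn_classify, and knn_regress are hypothetical.

    import math
    from collections import defaultdict

    def euclidean(a, b):
        # Straight-line distance between two equal-length feature vectors.
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def knn_classify(query, examples, k, weighted=False):
        # Majority vote among the k nearest (features, label) pairs; with
        # weighted=True each neighbor votes with weight 1/d, so nearer
        # neighbors contribute more, as in the weighting scheme quoted above.
        nearest = sorted(examples, key=lambda ex: euclidean(query, ex[0]))[:k]
        votes = defaultdict(float)
        for features, label in nearest:
            d = euclidean(query, features)
            # An exact match (d == 0) gets a fixed vote of 1.0 to avoid
            # division by zero; a fuller version might return its label outright.
            votes[label] += (1.0 / d) if (weighted and d > 0) else 1.0
        return max(votes, key=votes.get)

    def knn_regress(query, examples, k):
        # Average the property values of the k nearest (features, value) pairs.
        nearest = sorted(examples, key=lambda ex: euclidean(query, ex[0]))[:k]
        return sum(value for _, value in nearest) / len(nearest)

    # Example: the two nearby class-A points outvote the distant class-B point.
    train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((4.0, 4.0), "B")]
    assert knn_classify((1.1, 0.9), train, k=3) == "A"

  Note that, consistent with the "lazy learning" characterization above, all work happens at query time: there is no training step beyond storing the examples.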


2006

  • (Zezula et al., 2006) ⇒ Pavel Zezula, Giuseppe Amato, Vlastislav Dohnal, and Michal Batko. “Similarity Search: The Metric Space Approach."
    • QUOTE: the nearest neighbor query:
      NN(q) = x, where x ∈ X and ∀y ∈ X, d(q,x) ≤ d(q,y)
    • QUOTE: k-nearest neighbor query:
      k-NN(q,k) = A, where A ⊆ X, |A| = k, and ∀x ∈ A, ∀y ∈ X − A, d(q,x) ≤ d(q,y)
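
  Assuming the metric d is available as a callable, the k-NN(q,k) definition above reduces to a brute-force linear scan over X; ties are broken arbitrarily, so the returned set A is one of possibly several valid answers. The Python sketch below is illustrative only (the name knn_query is hypothetical); the metric-space index structures that are the subject of Zezula et al.'s book exist precisely to answer such queries without examining every object in X.

    import heapq

    def knn_query(q, X, k, d):
        # Brute-force k-NN(q,k): return a list A of k objects drawn from X
        # such that d(q,x) <= d(q,y) holds for every x in A and y in X - A.
        # heapq.nsmallest scans X once, keeping the k smallest distances.
        return heapq.nsmallest(k, X, key=lambda x: d(q, x))

    # Example with absolute difference as the metric on a set of numbers.
    assert knn_query(5.0, [1.0, 4.0, 6.0, 9.0], 2, lambda a, b: abs(a - b)) == [4.0, 6.0]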

1997

  • (Mitchell, 1997) ⇒ Tom M. Mitchell. (1997). “Machine Learning." McGraw-Hill.
    • QUOTE: Section 8.6 Remarks on Lazy and Eager Learning: In this chapter we considered three lazy learning methods: the k-Nearest Neighbor algorithm, locally weighted regression, and case-based reasoning.
