# Affinity Matrix

An affinity matrix is a symmetric matrix whose entries represent the pairwise affinity (similarity) scores between the objects of a set.
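As a minimal sketch, an affinity matrix can be built from pairwise distances with a Gaussian (RBF) kernel, one common choice of affinity; the data points and bandwidth below are hypothetical:

```python
import numpy as np

# Hypothetical 2-D data points; sigma is a free bandwidth parameter.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0]])
sigma = 1.0

# Pairwise squared Euclidean distances.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# Affinity matrix: S[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).
S = np.exp(-sq_dists / (2 * sigma ** 2))

assert np.allclose(S, S.T)           # symmetric, as the definition requires
assert np.allclose(np.diag(S), 1.0)  # self-affinity is maximal for this kernel
```

Nearby points (the first two rows) receive a high affinity, while the distant third point receives an affinity near zero.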

**AKA:** Similarity Matrix.

- …

**Counter-Example(s):**

**See:** Spectral Clustering, Laplacian Matrix.

## References

### 2011

- (Wikipedia - Spectral Clustering, 2011-Jun-13) ⇒ http://en.wikipedia.org/wiki/Cluster_analysis#Spectral_clustering
- QUOTE: Given a set of data points $A$, the similarity matrix may be defined as a matrix $S$ where $S_{ij}$ represents a measure of the similarity between points $i, j \in A$. Spectral clustering techniques make use of the spectrum of the similarity matrix of the data to perform dimensionality reduction for clustering in fewer dimensions.

### 2009

- (Ruan, 2009) ⇒ Jianhua Ruan. (2009). “A Fully Automated Method for Discovering Community Structures in High Dimensional Data.” In: Proceedings of the Ninth IEEE International Conference on Data Mining (ICDM 2009). doi:10.1109/ICDM.2009.141
- QUOTE: … networks are often not directly defined, or may be given as an affinity matrix. … an affinity matrix may define a dense weighted graph, for which modularity-based methods do not perform well. … sometimes a network may be given as a weight matrix $W$, where $W_{ij}$ represents the affinity between nodes $i$ and $j$ based on some measurement. For example, in a collaboration network, the affinity between two researchers may be the number of papers they have co-authored … Spectral clustering algorithms, such as the one proposed in (Ng et al., 2002), often works well by treating the affinity matrix as a network with weighted edges.
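The co-authorship example in this quote can be sketched directly: count, for each pair of researchers, the papers they share. The paper list below is hypothetical:

```python
import numpy as np

# Hypothetical author sets for three papers among four researchers (0-3).
papers = [
    {0, 1},     # paper co-authored by researchers 0 and 1
    {0, 1, 2},  # paper co-authored by 0, 1, and 2
    {2, 3},     # paper co-authored by 2 and 3
]

# W[i, j] = number of papers researchers i and j co-authored.
n = 4
W = np.zeros((n, n))
for authors in papers:
    for i in authors:
        for j in authors:
            if i != j:
                W[i, j] += 1

# W is a symmetric affinity matrix defining a weighted collaboration graph.
assert np.allclose(W, W.T)
```

Researchers 0 and 1 share two papers, so their edge weight is 2; unconnected pairs (e.g. 0 and 3) get weight 0, making the graph sparse or dense depending on the data.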

### 2005

- (Leordeanu & Hebert, 2005) ⇒ Marius Leordeanu, and Martial Hebert. (2005). “A Spectral Technique for Correspondence Problems Using Pairwise Constraints.” In: Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV 2005). doi:10.1109/ICCV.2005.20
- QUOTE: We require these affinities to be non-negative, symmetric $(M(a,b)=M(b,a))$, and increasing with the quality of the match, without any loss of generality. The candidate assignments $a = (i, i_0)$ from $L$ can be seen as nodes forming an undirected graph, with the pairwise scores $M(a,b)$ as weights on the edges and the individual scores $M(a,a)$ as weights at the nodes. Then, $M$ represents the affinity matrix of this undirected weighted graph. The number of nodes in this graph (= number of elements in $L$) adapts based on the actual data and it depends mainly on how discriminative the features' descriptors are. … The largest eigenvalue $\lambda_1^*$ of $M^*$ will be equal to the total association score of the cluster formed by the correct assignments, while its second largest eigenvalue $\lambda_2^*$ will equal the largest affinity score $M(a,a)$ of some wrong assignment $a$.
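The eigenvector computation underlying this spectral matching technique can be sketched with power iteration on a small affinity matrix; the matrix values below are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical non-negative symmetric affinity matrix M over four candidate
# assignments: the first three mutually agree (a strong cluster), while the
# fourth has no pairwise support.
M = np.array([
    [1.0, 0.9, 0.8, 0.0],
    [0.9, 1.0, 0.9, 0.0],
    [0.8, 0.9, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.5],
])

# Power iteration converges to the principal eigenvector of M.
v = np.ones(len(M))
for _ in range(100):
    v = M @ v
    v /= np.linalg.norm(v)

# The largest entries of v mark the cluster of mutually consistent
# (likely correct) assignments; the isolated assignment's entry decays.
```

Because the cluster's association score dominates the isolated assignment's self-affinity, the principal eigenvector concentrates almost all its mass on the first three assignments.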


### 2003

- (Bengio et al., 2003b) ⇒ Yoshua Bengio, Jean-François Paiement, Pascal Vincent, Olivier Delalleau, Nicolas Le Roux, Marie Ouimet. (2003). “Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering.” In: Advances in Neural Information Processing Systems 16 (NIPS 2003).
- QUOTE: … Algorithm 1:
**1.** Start from a data set $D = \{x_1, \ldots, x_n\}$ with $n$ points in some space. Construct an $n \times n$ "neighborhood" or Similarity Matrix $M$. … Optionally transform $M$, yielding a "normalized" matrix … In the following, we consider the specializations of Algorithm 1 for different unsupervised learning algorithms. Let $S_i$ be the $i$-th row sum of the affinity matrix $M$
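One common choice for the optional normalization step is a divisive, degree-based normalization using the row sums $S_i$, as in spectral clustering and Laplacian eigenmaps; a minimal sketch with a hypothetical similarity matrix:

```python
import numpy as np

# Hypothetical n x n similarity matrix M for n = 3 points.
M = np.array([
    [1.0, 0.5, 0.1],
    [0.5, 1.0, 0.4],
    [0.1, 0.4, 1.0],
])

# S_i = i-th row sum of the affinity matrix M.
S = M.sum(axis=1)

# "Normalized" matrix: M_tilde[i, j] = M[i, j] / sqrt(S_i * S_j).
M_tilde = M / np.sqrt(np.outer(S, S))

assert np.allclose(M_tilde, M_tilde.T)  # normalization preserves symmetry
```

Since $\tilde{M}$ is similar to the row-stochastic matrix $D^{-1}M$, its spectrum lies in $[-1, 1]$, which is what makes its leading eigenvectors usable as embedding coordinates.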


### 2001

- (Ng, Jordan & Weiss, 2001) ⇒ Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. (2002). “On Spectral Clustering: Analysis and an Algorithm.” In: Advances in Neural Information Processing Systems 14 (NIPS 2001).