Divisive Hierarchical Clustering Algorithm


A Divisive Hierarchical Clustering Algorithm is a Hierarchical Clustering Algorithm in which all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.



  • (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Hierarchical_clustering Retrieved:2019-5-19.
    • In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:[1]
      • Agglomerative: This is a "bottom-up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
      • Divisive: This is a "top-down" approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.
    • In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering are usually presented in a dendrogram.

      The standard algorithm for hierarchical agglomerative clustering (HAC) has a time complexity of [math] \mathcal{O}(n^3) [/math] and requires [math] \mathcal{O}(n^2) [/math] memory, which makes it too slow for even medium data sets. However, for some special cases, optimal efficient agglomerative methods (of complexity [math] \mathcal{O}(n^2) [/math] ) are known: SLINK[2] for single-linkage and CLINK[3] for complete-linkage clustering. With a heap the runtime of the general case can be reduced to [math] \mathcal{O}(n^2 \log n) [/math] at the cost of further increasing the memory requirements. In many programming languages, the memory overheads of this approach are too large to make it practically usable.

      Except for the special case of single-linkage, none of these algorithms can be guaranteed to find the optimum solution, short of an exhaustive search in [math] \mathcal{O}(2^n) [/math] .

      Divisive clustering with an exhaustive search is [math] \mathcal{O}(2^n) [/math] , but it is common to use faster heuristics to choose splits, such as k-means.

  1. Rokach, Lior, and Oded Maimon. "Clustering methods." Data mining and knowledge discovery handbook. Springer US, 2005. 321-352.
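The k-means split heuristic mentioned above is commonly realized as bisecting k-means: repeatedly pick the largest remaining cluster and split it in two with 2-means. The sketch below is illustrative, not from the source; it assumes numerical data and uses a plain NumPy implementation of Lloyd's algorithm (all function names are ours):

```python
import numpy as np

def two_means(X, seed=0):
    """Split the rows of X into two clusters with Lloyd's algorithm (k=2)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(100):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

def bisecting_kmeans(X, n_clusters):
    """Divisive clustering: repeatedly split the largest cluster with 2-means.

    Returns a flat partition as a list of index arrays; keeping the split
    history instead would yield the full top-down hierarchy.
    """
    clusters = [np.arange(len(X))]
    while len(clusters) < n_clusters:
        clusters.sort(key=len)
        big = clusters.pop()  # largest cluster is split next
        labels = two_means(X[big])
        clusters += [big[labels == 0], big[labels == 1]]
    return clusters
```

Each split costs only a few passes of 2-means rather than the [math] \mathcal{O}(2^n) [/math] exhaustive search, which is why this heuristic scales to large data sets.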


  • (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Hierarchical_clustering#Divisive_clustering Retrieved:2019-5-19.
    • The basic principle of divisive clustering was published as the DIANA (DIvisive ANAlysis Clustering) algorithm. [1] Initially, all data is in the same cluster, and the largest cluster is split until every object is separate.

      Because there exist [math] O(2^n) [/math] ways of splitting each cluster, heuristics are needed. DIANA chooses the object with the maximum average dissimilarity as the seed of a new "splinter" cluster, and then moves to the new cluster all objects that are more similar to it than to the remainder.

  1. Kaufman, L., & Rousseeuw, P. J. (1990). Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons.
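The single splitting step of DIANA described above can be sketched as follows. This is an illustrative reconstruction from the prose, not the reference implementation: given a precomputed dissimilarity matrix, it seeds the splinter group with the most dissimilar object and then peels off objects that are, on average, closer to the splinter group than to the remainder:

```python
import numpy as np

def diana_split(dissim, members):
    """One DIANA split: peel a splinter group off the cluster `members`.

    `dissim` is a full pairwise dissimilarity matrix (list of lists or
    ndarray); `members` is an iterable of indices into it. Returns the
    (splinter, remainder) partition of `members`.
    """
    members = list(members)
    # Seed the splinter group with the object having the largest
    # average dissimilarity to the rest of its cluster.
    avg = [np.mean([dissim[i][j] for j in members if j != i]) for i in members]
    seed = members[int(np.argmax(avg))]
    splinter, rest = [seed], [i for i in members if i != seed]
    moved = True
    while moved and len(rest) > 1:
        moved = False
        for i in list(rest):
            d_rest = np.mean([dissim[i][j] for j in rest if j != i])
            d_spl = np.mean([dissim[i][j] for j in splinter])
            if d_spl < d_rest:  # closer to the splinter group: move it over
                rest.remove(i)
                splinter.append(i)
                moved = True
    return splinter, rest
```

Applying this step recursively to the largest remaining cluster, until every object is separate, yields the full divisive hierarchy.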