Scikit learn Hierarchical Clustering

In this Python tutorial, we will learn how to perform hierarchical clustering with Scikit learn in Python, and we will also cover different examples related to hierarchical clustering. We will cover these topics:

  • Scikit learn hierarchical clustering
  • Scikit learn hierarchical clustering example
  • Scikit learn hierarchical clustering features
  • Scikit learn hierarchical clustering dendrogram
  • Scikit learn hierarchical clustering linkage
  • Scikit learn hierarchical classification

Learn What is Scikit Learn in Python

Scikit learn hierarchical clustering

In this section, we will learn how to perform scikit learn hierarchical clustering in Python.

Hierarchical clustering is defined as an algorithm that groups similar objects into clusters.

The endpoint of the algorithm is a set of clusters, where each cluster is distinct from the other clusters and the objects within each cluster are broadly similar to each other.

Code:

In the following code, we will import some libraries (import time as time, import numpy as np, import matplotlib.pyplot as plot), generate the swiss roll dataset, and define the connectivity structure of the data.

  • print("Compute structured hierarchical clustering...") is used to print a message on the screen before the clustering is computed.
  • plot.figure() is used for plotting the figure on the screen.
  • fig.add_subplot(111, projection="3d") is used to create the 3D axes on which the clusters are drawn.
  • plot.title("With connectivity constraints (time %.2fs)" % elapsed_time) is used to give the title to the graph.
import time as time
import numpy as np
import matplotlib.pyplot as plot
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph

# Generate the swiss roll dataset
n_samples = 1500
noise = 0.05
X, _ = make_swiss_roll(n_samples, noise=noise)
X[:, 1] *= 0.5  # make the roll thinner

# Define the structure of the data: connect each point to its 12 nearest neighbours
connectivity = kneighbors_graph(X, n_neighbors=12, include_self=False)

print("Compute structured hierarchical clustering...")
st = time.time()
ward = AgglomerativeClustering(
    n_clusters=8, connectivity=connectivity, linkage="ward"
).fit(X)
elapsed_time = time.time() - st
label = ward.labels_
print("Elapsed time: %.2fs" % elapsed_time)
print("Number of points: %i" % label.size)

fig = plot.figure()
axis = fig.add_subplot(111, projection="3d")
axis.view_init(8, -90)
for l in np.unique(label):
    axis.scatter(
        X[label == l, 0],
        X[label == l, 1],
        X[label == l, 2],
        color=plot.cm.jet(float(l) / np.max(label + 1)),
        s=20,
        edgecolor="k",
    )
plot.title("With connectivity constraints (time %.2fs)" % elapsed_time)

plot.show()

Output:

After running the above code, we get the following output in which we can see that hierarchical clustering is performed with connectivity constraints.

scikit learn hierarchical clustering

Read Scikit-learn logistic regression

Scikit learn hierarchical clustering example

In this section, we will work through a scikit learn hierarchical clustering example in Python.

  • As we know, hierarchical clustering groups similar objects together. Agglomerative clustering starts by treating each data point as a separate cluster.
  • It then identifies the two clusters that are closest to each other and merges the two most similar clusters, repeating this until the requested number of clusters remains (see the short sketch after this list).
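Before the full example, here is a minimal sketch of this merging behaviour (it uses SciPy's linkage function rather than scikit-learn, purely for illustration). Each row of the printed matrix records one merge step: the two clusters that were merged, the distance between them, and the number of points in the new cluster.

import numpy as np
from scipy.cluster.hierarchy import linkage

# Four 2D points: two near the origin and two near (10, 10)
points = np.array([[0, 0], [1, 0], [10, 10], [10, 11]])

# Each row is one merge: [cluster_i, cluster_j, distance, size of new cluster]
merges = linkage(points, method='ward')
print(merges)

Indices 0 to 3 refer to the original points, while larger indices refer to clusters created by earlier merge steps.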

Code:

In the following code, we will import some libraries from which we can plot the hierarchical clustering graph.

  • X = np.array() is used for creating a dataset.
  • from sklearn.cluster import AgglomerativeClustering is used for importing the class from the cluster.
  • clusters.fit_predict(X) is used to predict the cluster to which each data point belongs.
  • plt.scatter(X[:,0],X[:,1], c=clusters.labels_, cmap='rainbow') is used to plot the clusters on the screen.
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
import numpy as np
X = np.array([[10,3],
    [15,15],
    [20,12],
    [30,10],
    [35,30],
    [40,70],
    [45,80],
    [50,78],
    [55,55],
    [60,91],])
from sklearn.cluster import AgglomerativeClustering
clusters = AgglomerativeClustering(n_clusters=2, linkage='ward')  # ward linkage uses Euclidean distances
clusters.fit_predict(X)
print(clusters.labels_)
plt.scatter(X[:,0],X[:,1], c=clusters.labels_, cmap='rainbow')

Output:

After running the above code, we get the following output in which we can see that the points are split into two clusters: the first five points form one group and the last five points form the other.

scikit learn hierarchical clustering example

Read Scikit-learn Vs Tensorflow

Scikit learn hierarchical clustering features

In this section, we will learn about the scikit learn hierarchical clustering features in python.

The main features of scikit learn hierarchical clustering in Python are:

  • Deletion Problem
  • Data hierarchy
  • Hierarchy through pointer
  • Minimize disk input and output
  • Fast navigation
  • Predefined relationship between records
  • Difficult to re-organize
1. Deletion Problem: The deletion problem occurs when a parent cluster is deleted: the child is automatically deleted as well, because the child is connected to the parent.

2. Data hierarchy: All the data of hierarchical clustering is represented as a hierarchical tree.

3. Hierarchy through pointer: Pointers are used to link the records. A pointer indicates which record is the parent record and which one is the child record (see the sketch after this list).

4. Minimize disk input and output: Parent-child records are stored very close to each other on the storage device, which minimizes hard disk input and output.

5. Fast navigation: The distance between parent and child is very small, so performance improves and navigation through the database is very fast.

6. Predefined relationship between the records: All the relationships are predefined. In hierarchical clustering, all the root nodes and parent nodes are predefined in the database.

7. Difficult to reorganize: In hierarchical clustering, it is difficult to reorganize the database because of the child-parent relationship. The child is connected to the parent, so if we reorganize the data, the relationship between parent and child can become mismatched.
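As a small illustration of this parent-child hierarchy (a minimal sketch built on the children_ attribute that a fitted AgglomerativeClustering model exposes), we can print which two child nodes were merged to form each parent node:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])
model = AgglomerativeClustering(n_clusters=2).fit(X)

nsamples = len(model.labels_)
# Row i of children_ holds the two children of parent node nsamples + i;
# indices smaller than nsamples are the original data points (leaves).
for i, (left, right) in enumerate(model.children_):
    print("parent", nsamples + i, "has children", left, "and", right)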

Read Scikit learn Decision Tree

Scikit learn hierarchical clustering dendrogram

In this section, we will learn how to plot a scikit learn hierarchical clustering dendrogram in Python.

Hierarchical clustering builds a tree of clusters to represent the data, where each cluster is linked to the nearest similar nodes; this tree is drawn as a dendrogram.

Code:

In the following code, we will import some libraries (import numpy as np, from matplotlib import pyplot as plot, from scipy.cluster.hierarchy import dendrogram) from which we build a dendrogram.

  • dendrogram(linkagematrix, **kwargs) is used to plot the dendrogram.
  • model = AgglomerativeClustering(distance_threshold=0, n_clusters=None) sets distance_threshold=0 and n_clusters=None to ensure the full tree is computed.
  • plot_dendrogram(model, truncate_mode="level", p=3) is used to plot the top three levels of the dendrogram.
import numpy as np
from matplotlib import pyplot as plot
from scipy.cluster.hierarchy import dendrogram
from sklearn.datasets import load_iris
from sklearn.cluster import AgglomerativeClustering


def plot_dendrogram(model, **kwargs):
    # Count the number of samples under each node of the tree
    count = np.zeros(model.children_.shape[0])
    nsamples = len(model.labels_)
    for i, merge in enumerate(model.children_):
        currentcount = 0
        for child_idx in merge:
            if child_idx < nsamples:
                currentcount += 1  
            else:
                currentcount += count[child_idx - nsamples]
        count[i] = currentcount

    linkagematrix = np.column_stack(
        [model.children_, model.distances_, count]
    ).astype(float)

    dendrogram(linkagematrix, **kwargs)


iris = load_iris()
X = iris.data

model = AgglomerativeClustering(distance_threshold=0, n_clusters=None)

model = model.fit(X)
plot.title("Hierarchical Clustering Dendrogram")
plot_dendrogram(model, truncate_mode="level", p=3)
plot.xlabel("Number of points in node (or index of point if no parenthesis).")
plot.show()

Output:

After running the above code we get the following output in which we can see that a dendrogram is shown on the screen.

scikit learn hierarchical clustering dendrogram

Read Scikit learn accuracy_score

Scikit learn hierarchical clustering linkage

In this section, we will learn about scikit learn hierarchical clustering linkage in python.

Hierarchical clustering is used to build a tree of clusters to represent the data, where each cluster is linked with the nearest similar nodes.

The linkage criterion determines how the distance between two clusters is computed: complete linkage uses the longest distance between points in the two clusters, single linkage uses the shortest distance, and ward linkage merges the pair of clusters that gives the smallest increase in within-cluster variance.

Code:

In the following code, we will import dendrogram, linkage from scipy, and also import pyplot as plot from matplotlib.

We can create a ward linkage between the clusters and plot the dendrogram of those clusters.

from scipy.cluster.hierarchy import dendrogram, linkage
from matplotlib import pyplot as plot
x = [[i] for i in [4, 12, 0, 6, 1, 11, 11, 0]]
Z = linkage(x, 'ward')
figure = plot.figure(figsize=(25, 10))
den = dendrogram(Z)
plot.show()

After running the above code, we get the following output in which we can see that a ward-linkage dendrogram is plotted on the screen.

scikit learn hierarchical clustering ward linkage

Here we create a single-linkage clustering of the same data and plot the dendrogram of these clusters.

Z = linkage(x, 'single')
figure = plot.figure(figsize=(25, 10))
den = dendrogram(Z)
plot.show()

After running the above code, we get the following output in which we can see that the single-linkage merges are shown in the form of a dendrogram.

scikit learn hierarchical clustering single linkage
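Other linkage criteria supported by SciPy can be tried in the same way. As a small sketch (the 'complete' option below is a standard SciPy linkage method, not part of the original example), here is a complete-linkage dendrogram on the same data, where the distance between two clusters is the longest distance between any pair of their points.

from scipy.cluster.hierarchy import dendrogram, linkage
from matplotlib import pyplot as plot

x = [[i] for i in [4, 12, 0, 6, 1, 11, 11, 0]]

# Complete linkage: cluster distance is the longest pairwise point distance
Z = linkage(x, 'complete')
figure = plot.figure(figsize=(25, 10))
den = dendrogram(Z)
plot.show()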

Read: Scikit learn Linear Regression

Scikit learn hierarchical classification

In this section, we will learn about scikit learn hierarchical classification in python.

Hierarchical classification is defined as a system of classifying things according to a hierarchy or order, assigning items to broad groups first and then to narrower groups within them.
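To make the idea concrete, here is a minimal sketch of a two-level hierarchy (an illustrative assumption, since scikit-learn does not ship a dedicated hierarchical classifier): a first classifier decides a coarse group, and a second classifier refines the prediction inside that group.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
X, y = iris.data, iris.target

# Coarse level: setosa (class 0) versus the other two species
coarse_clf = LogisticRegression(max_iter=1000).fit(X, (y == 0).astype(int))

# Fine level: a second classifier trained only on the non-setosa samples
mask = y != 0
fine_clf = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

def predict_hierarchical(samples):
    # Decide the coarse group first, then refine within it
    coarse = coarse_clf.predict(samples)
    fine = fine_clf.predict(samples)
    return np.where(coarse == 1, 0, fine)

print(predict_hierarchical(X[:5]))   # setosa samples are predicted as class 0
print(predict_hierarchical(X[-5:]))  # virginica samples are refined by the second classifier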

Code:

In this code, we will import some libraries from which we can create several classifiers and compare their decision boundaries on a few small datasets.

  • StandardScaler().fit_transform(x) is used to scale and fit the data.
  • x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.4, random_state=42) is used to split the dataset into training and testing parts.
  • ax.scatter(x_train[:, 0], x_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k") is used to plot the training points.
  • ax.scatter(x_test[:, 0], x_test[:, 1],c=y_test,cmap=cm_bright,alpha=0.6,edgecolors="k") is used to plot the testing points.
import numpy as np
import matplotlib.pyplot as plot
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

h = 0.02  # step size in the mesh

classifiernames = [
    "Nearest Neighbors",
    "Linear SVM",
    "RBF SVM",
    "Gaussian Process",
    "Decision Tree",
    "Random Forest",
    "Neural Net",
    "AdaBoost",
    "Naive Bayes",
    "QDA",
]

classifiers = [
    KNeighborsClassifier(3),
    SVC(kernel="linear", C=0.025),
    SVC(gamma=2, C=1),
    GaussianProcessClassifier(1.0 * RBF(1.0)),
    DecisionTreeClassifier(max_depth=5),
    RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
    MLPClassifier(alpha=1, max_iter=1000),
    AdaBoostClassifier(),
    GaussianNB(),
    QuadraticDiscriminantAnalysis(),
]

x, y = make_classification(
    n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1
)
rng = np.random.RandomState(2)
x += 2 * rng.uniform(size=x.shape)
linearly_separable = (x, y)

datasets = [
    make_moons(noise=0.3, random_state=0),
    make_circles(noise=0.2, factor=0.5, random_state=1),
    linearly_separable,
]

figure = plot.figure(figsize=(27, 9))
i = 1

for ds_cnt, ds in enumerate(datasets):
   
    x, y = ds
    x = StandardScaler().fit_transform(x)
    x_train, x_test, y_train, y_test = train_test_split(
        x, y, test_size=0.4, random_state=42
    )

    x_min, x_max = x[:, 0].min() - 0.5, x[:, 0].max() + 0.5
    y_min, y_max = x[:, 1].min() - 0.5, x[:, 1].max() + 0.5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    cm = plot.cm.RdBu
    cm_bright = ListedColormap(["#FF0000", "#0000FF"])
    ax = plot.subplot(len(datasets), len(classifiers) + 1, i)
    if ds_cnt == 0:
        ax.set_title("Input data")
   
    ax.scatter(x_train[:, 0], x_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k")

    ax.scatter(
        x_test[:, 0], x_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors="k"
    )
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    ax.set_xticks(())
    ax.set_yticks(())
    i += 1


    for name, clf in zip(classifiernames, classifiers):
        ax = plot.subplot(len(datasets), len(classifiers) + 1, i)
        clf.fit(x_train, y_train)
        score = clf.score(x_test, y_test)

       
        if hasattr(clf, "decision_function"):
            Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
        else:
            Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]

        Z = Z.reshape(xx.shape)
        ax.contourf(xx, yy, Z, cmap=cm, alpha=0.8)

        # Plot the training points
        ax.scatter(
            x_train[:, 0], x_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k"
        )

        ax.scatter(
            x_test[:, 0],
            x_test[:, 1],
            c=y_test,
            cmap=cm_bright,
            edgecolors="k",
            alpha=0.6,
        )

        ax.set_xlim(xx.min(), xx.max())
        ax.set_ylim(yy.min(), yy.max())
        ax.set_xticks(())
        ax.set_yticks(())
        if ds_cnt == 0:
            ax.set_title(name)
        ax.text(
            xx.max() - 0.3,
            yy.min() + 0.3,
            ("%.2f" % score).lstrip("0"),
            size=15,
            horizontalalignment="right",
        )
        i += 1

plot.tight_layout()
plot.show()

Output:

After running the above code, we get the following output in which we can see that the decision boundaries of the different classifiers are plotted on the screen.

scikit learn hierarchical classification

Also, take a look at some more tutorials on scikit learn.

So, in this tutorial we discussed scikit learn hierarchical clustering and we have also covered different examples related to its implementation. Here is the list of examples that we have covered.

  • Scikit learn hierarchical clustering
  • Scikit learn hierarchical clustering example
  • Scikit learn hierarchical clustering features
  • Scikit learn hierarchical clustering dendrogram
  • Scikit learn hierarchical clustering linkage
  • Scikit learn hierarchical classification