Scikit learn non-linear [Complete Guide]

In this Python tutorial, we will learn how Scikit learn non-linear works, and we will also cover different examples related to Scikit learn non-linear. Additionally, we will cover these topics.

  • Scikit learn non-linear
  • Scikit learn non-linear regression
  • Scikit learn non-linear regression example
  • Scikit learn non-linear SVM
  • Scikit learn non-linear model
  • Scikit learn a non-linear classifier
  • Scikit learn non-linear dimensionality reduction
  • Scikit learn non-linear PCA

Before moving forward in this tutorial, we recommend reading What is Scikit Learn in Python.

Scikit learn non-linear

In this section, we will learn how Scikit learn non-linear works in python.

Code:

In the following code, we will import some libraries with which we can build a non-linear SVR model.

  • x = num.sort(5 * num.random.rand(42, 1), axis=0) is used to generate sample data.
  • y[::5] += 3 * (0.5 - num.random.rand(9)) is used to add noise to the targets.
  • svrrbf = SVR(kernel="rbf", C=100, gamma=0.1, epsilon=0.1) is used to fit the regression model.
  • lw = 2 sets the line width used when plotting the result.
  • fig, axes = plot.subplots(nrows=1, ncols=3, figsize=(15, 10), sharey=True) is used to plot the figure and axes on the screen.
  • axes[ix].plot(x, svr.fit(x, y).predict(x), color=model_color[ix], lw=lw, label="{} model".format(kernel_label[ix]),) is used to plot the model curve on the axes.
  • axes[ix].scatter(x[svr.support_], y[svr.support_], facecolor="none", edgecolor=model_color[ix], s=50, label="{} support vectors".format(kernel_label[ix]),) is used to plot the support vectors as a scatter plot.
  • fig.text(0.5, 0.04, "data", ha="center", va="center") is used to add text to the figure.
import numpy as num
from sklearn.svm import SVR
import matplotlib.pyplot as plot

x = num.sort(5 * num.random.rand(42, 1), axis=0)
y = num.sin(x).ravel()
y[::5] += 3 * (0.5 - num.random.rand(9))

svrrbf = SVR(kernel="rbf", C=100, gamma=0.1, epsilon=0.1)

lw = 2

svrs = [svrrbf]
kernel_label = ["RBF"]
model_color = ["m"]

fig, axes = plot.subplots(nrows=1, ncols=3, figsize=(15, 10), sharey=True)
for ix, svr in enumerate(svrs):
    axes[ix].plot(
        x,
        svr.fit(x, y).predict(x),
        color=model_color[ix],
        lw=lw,
        label="{} model".format(kernel_label[ix]),
    )
    axes[ix].scatter(
        x[svr.support_],
        y[svr.support_],
        facecolor="none",
        edgecolor=model_color[ix],
        s=50,
        label="{} support vectors".format(kernel_label[ix]),
    )
    axes[ix].scatter(
        x[num.setdiff1d(num.arange(len(x)), svr.support_)],
        y[num.setdiff1d(num.arange(len(x)), svr.support_)],
        facecolor="none",
        edgecolor="r",
        s=50,
        label="other training data",
    )
    
fig.text(0.5, 0.04, "data", ha="center", va="center")
fig.text(0.06, 0.5, "target", ha="center", va="center", rotation="vertical")
plot.show()

Output:

After running the above code, we get the following output in which we can see that the non-linear data is shown on the screen.

Scikit learn non-linear
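
Once fitted, the SVR model can also predict targets for new inputs. Below is a minimal sketch of this usage; the query grid x_new is our own choice and not part of the original example.

# Predict on a fresh grid of inputs with the fitted RBF SVR.
x_new = num.linspace(0, 5, 10).reshape(-1, 1)
y_new = svrrbf.predict(x_new)  # svrrbf was fitted in the loop above
print(y_new)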

Read: Scikit-learn logistic regression

Scikit learn non-linear regression

In this section, we will learn how Scikit learn non-linear regression works in python.

  • Regression is a supervised machine learning technique. There are two types of regression algorithms: linear and non-linear.
  • Here we use a non-linear regression technique, which describes a non-linear relationship whose parameters depend on one or more independent variables.

Code:

In the following code, we will import some libraries from which we can create a non-linear regression model.

  • df = pds.read_csv("regressionchina_gdp.csv") is used to read the file we are importing.
  • plot.figure(figsize=(8,5)) is used to plot the figure.
  • x_data, y_data = (df["Year"].values, df["Value"].values) is used to extract the years and values.
  • plot.plot(x_data, y_data, 'ro') is used to plot the x data and y data.
  • plot.ylabel('GDP') is used to set the y label.
  • plot.xlabel('Year') is used to set the x label.
import numpy as num
import pandas as pds
import matplotlib.pyplot as plot
df = pds.read_csv("regressionchina_gdp.csv")
df.head(10)
plot.figure(figsize=(8,5))
x_data, y_data = (df["Year"].values, df["Value"].values)
plot.plot(x_data, y_data, 'ro')
plot.ylabel('GDP')
plot.xlabel('Year')
plot.show()
scikit learn non-linear regression

In the following code, we choose a candidate model, a sigmoid curve, and draw it on the screen.

  • plot.plot(x,y) is used to plot x and y on the screen.
  • plot.ylabel('Dependent Variable') is used to set the y label on the screen.
  • plot.xlabel('Independent Variable') is used to set the x label on the screen.
x = num.arange(-5.0, 5.0, 0.1)
y = 1.0 / (1.0 + num.exp(-x))
plot.plot(x,y)
plot.ylabel('Dependent Variable')
plot.xlabel('Independent Variable')
plot.show()
scikit learn non-linear regression choosing a model

Here, we can use the logistic function to build our non-linear model.

Now, plot.plot(x_data, Y_pred*15000000000000.) is used to plot the initial prediction against the data points.

def sigmoid(x, Beta_1, Beta_2):
     y = 1 / (1 + num.exp(-Beta_1*(x-Beta_2)))
     return y
beta1 = 0.10
beta2 = 1990.0
#logistic function
Y_pred = sigmoid(x_data, beta1 , beta2)

plot.plot(x_data, Y_pred*15000000000000.)
plot.plot(x_data, y_data, 'ro')
scikit learn non-linear regression building a model

Here we can normalize our data to make the best fit of the curve.

  • plot.figure(figsize=(8,5)) is used to plot the figure on the screen.
  • plot.plot(xdata, ydata, 'ro', label='data') is used to plot the ydata and xdata on the screen.
  • plot.plot(x,y, linewidth=3.0, label='fit') is used to plot the fit line on the screen.

xdata =x_data/max(x_data)
ydata =y_data/max(y_data)
from scipy.optimize import curve_fit
popt, pcov = curve_fit(sigmoid, xdata, ydata)
# Now we plot our resulting regression model.
x = num.linspace(1960, 2015, 55)
x = x/max(x)
plot.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plot.plot(xdata, ydata, 'ro', label='data')
plot.plot(x,y, linewidth=3.0, label='fit')
plot.legend(loc='best')
plot.ylabel('GDP')
plot.xlabel('Year')
plot.show()

After running the above code, we get the following output in which we can see that the non-linear best fit line is plotted on the screen.

scikit learn non-linear regression best fit parameter
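
We can also inspect the parameters that curve_fit found and score the fit. The following is a short sketch, assuming popt, sigmoid, xdata, and ydata from the code above are still in scope; the use of sklearn.metrics here is our own addition.

from sklearn.metrics import mean_squared_error, r2_score

beta_1, beta_2 = popt
print("beta_1 = %f, beta_2 = %f" % (beta_1, beta_2))

# Score the fitted sigmoid on the normalized data.
y_fit = sigmoid(xdata, *popt)
print("Mean squared error: %.5f" % mean_squared_error(ydata, y_fit))
print("R2-score: %.5f" % r2_score(ydata, y_fit))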

Read: Scikit learn Decision Tree

Scikit learn non-linear regression example

In this section, we will work through a Scikit learn non-linear regression example in python.

Non-linear regression builds a curved relationship between the dependent and independent variables, for example a quadratic or logistic curve. This data is shown by a curved line.

Code:

In the following code, we will import some libraries to show how a non-linear regression example works.

  • df = pds.read_csv("regressionchina_gdp.csv") is used to read the csv file we load.
  • x_data, y_data = (df["Year"].values, df["Value"].values) is used to extract the years and values.
  • y = 1 / (1 + num.exp(-Beta_1*(x-Beta_2))) is used to define a sigmoid function.
  • ypred = sigmoid(x_data, beta1, beta2) is used as a logistic function.
  • plot.plot(x_data, ypred * 16000000000000.) is used to plot the initial prediction against the data points.
  • plot.plot(x_data, y_data, 'go') is used to plot x_data and y_data on the graph.
import numpy as num
import pandas as pds
import matplotlib.pyplot as plot

     
df = pds.read_csv("regressionchina_gdp.csv")
x_data, y_data = (df["Year"].values, df["Value"].values)

def sigmoid(x, Beta_1, Beta_2):
    y = 1 / (1 + num.exp(-Beta_1*(x-Beta_2)))
    return y

beta1 = 0.10
beta2 = 1990.0

# Logistic function with initial parameter guesses.
ypred = sigmoid(x_data, beta1, beta2)

plot.plot(x_data, ypred * 16000000000000.)
plot.plot(x_data, y_data, 'go')
plot.show()

Output:

After running the above code, we get the following output in which we can see that the curve line shows the non-linearity of the graph.

scikit learn non-linear regression example

Read: Scikit learn Hierarchical Clustering

Scikit learn non-linear SVM

In this section, we will learn how scikit learn non-linear SVM works in python.

  • SVM stands for support vector machine, a supervised machine learning algorithm used for both classification and regression.
  • As we know, a non-linear relationship between the dependent and independent variables is one described by a curved line, and a non-linear SVM learns a correspondingly curved decision boundary.

Code:

In the following code, we will import some libraries from which we can make a non-linear SVM model.

  • x = num.random.randn(350, 2) is used to generate random numbers.
  • classifier = svm.NuSVC() is used to make the SVM classifier.
  • classifier.fit(x, Y) is used to fit the model.
  • Z = classifier.decision_function(num.c_[xx.ravel(), yy.ravel()]) is used to compute the decision function for every point on the grid.
  • plot.imshow(Z, interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), aspect='auto', origin='lower', cmap=plot.cm.PuOr_r) is used to plot the graph on the screen.
  • plot.scatter(x[:, 0], x[:, 1], s=35, c=Y, cmap=plot.cm.Paired) is used to plot the scatter points on the grid.
import numpy as num
import matplotlib.pyplot as plot
from sklearn import svm

xx, yy = num.meshgrid(num.linspace(-3, 3, 500),
                     num.linspace(-3, 3, 500))
num.random.seed(0)
x = num.random.randn(350, 2)
Y = num.logical_xor(x[:, 0] > 0, x[:, 1] > 0)


classifier = svm.NuSVC()
classifier.fit(x, Y)

Z = classifier.decision_function(num.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

plot.imshow(Z, interpolation='nearest',
           extent=(xx.min(), xx.max(), yy.min(), yy.max()), aspect='auto',
           origin='lower', cmap=plot.cm.PuOr_r)
contours = plot.contour(xx, yy, Z, levels=[0], linewidths=2,
                       linestyles='--')
plot.scatter(x[:, 0], x[:, 1], s=35, c=Y, cmap=plot.cm.Paired)
plot.xticks(())
plot.yticks(())
plot.axis([-3, 3, -3, 3])
plot.show()

Output:

After running the above code, we get the following output in which we can see that the Scikit learn non-linear SVM graph is plotted on the screen.

scikit learn non-linear SVM
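
To check how well the non-linear SVM separates the XOR pattern, we can score the fitted classifier and query a few new points. This is a small sketch of our own, assuming classifier, x, and Y from the code above are in scope.

from sklearn.metrics import accuracy_score

# Training accuracy of the fitted NuSVC.
print("Training accuracy:", accuracy_score(Y, classifier.predict(x)))

# Predict the class of one point per quadrant.
print(classifier.predict([[1, 1], [-1, 1], [-1, -1], [1, -1]]))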

Read: Scikit learn Hidden Markov Model

Scikit learn non-linear model

In this section, we will learn about how Scikit learn non-linear model works in python.

  • A non-linear model defines a non-linear relationship between the data and its parameters, depending upon one or more independent variables.
  • The non-linearity shows up where the data points trace a curved line, which demonstrates that the data is non-linear.

Code:

In the following code, we will import some libraries and generate data that follows a non-linear (cubic) relationship.

  • rng = num.random.RandomState(0) is used to create a random number generator.
  • lendata = (datamax - datamin) is used to get the length of the data range.
  • data = num.sort(rng.rand(nsample) * lendata - lendata / 2) is used to sort the data to make plotting easy.
  • target = data ** 3 - 0.6 * data ** 2 + noise is used to make the target.
  • full_data = pds.DataFrame({"input feature": data, "target": target}) is used to gather the full data into a dataframe.
  • nonlineardata = sns.scatterplot(data=full_data, x="input feature", y="target", color="blue", alpha=0.5) is used to plot the scatter points on the graph.
import numpy as num
import pandas as pds
import seaborn as sns

# Random number generator (named rng to avoid shadowing Python's built-in range).
rng = num.random.RandomState(0)

nsample = 100
datamax, datamin = 1.5, -1.5
lendata = (datamax - datamin)

# Sort the data to make plotting easy.
data = num.sort(rng.rand(nsample) * lendata - lendata / 2)
noise = rng.randn(nsample) * .3
target = data ** 3 - 0.6 * data ** 2 + noise

full_data = pds.DataFrame({"input feature": data, "target": target})

nonlineardata = sns.scatterplot(data=full_data, x="input feature", y="target",
                                color="blue", alpha=0.5)

Output:

After running the above code, we get the following output in which we can see that the Scikit learn non-linear model is plotted on the screen.

Scikit learn non-linear model
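
The code above generates and plots the cubic data but does not fit a model to it. As a minimal sketch of one way to capture this non-linearity with scikit-learn (our own addition, using a degree-3 polynomial to match the data-generating process above), we can combine PolynomialFeatures with LinearRegression:

from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Expand the input into polynomial features, then fit a linear model on them.
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(data.reshape(-1, 1), target)

# Overlay the fitted curve on the scatter plot (data is already sorted).
nonlineardata.plot(data, model.predict(data.reshape(-1, 1)),
                   color="orange", linewidth=3)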

Read: Scikit learn Ridge Regression

Scikit learn a non-linear classifier

In this section, we will learn about how a Scikit learn non-linear classifier works in python.

A non-linear classifier is a classification process that captures non-linearity, with parameters depending upon one or more independent variables, and it produces a curved decision boundary.

Code:

In the following code, we will import some libraries from which we can create a non-linear classifier.

  • x = x.copy() is used to copy the data.
  • x = num.random.normal(size=(n, 2)) is used to generate random numbers.
  • xtrain, xtest, ytrain, ytest = train_test_split(x, y, random_state=0, test_size=0.5) is used to split the dataset into train and test data.
  • plot.figure(figsize=(5,5)) is used to plot the figure on the screen.
  • plot.scatter(xtrain[:,0], xtrain[:,1], c=ytrain, edgecolors='r'); is used to plot the scatter plot on the screen.
import numpy as num
import matplotlib.pyplot as plot

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

%config InlineBackend.figure_format = 'svg' 
plot.style.use('bmh')
plot.rcParams['image.cmap'] = 'Paired_r'
num.random.seed(5)

def f(x):
    x = x.copy()
    x[:,0] -= 0.4
    x[:,1] += 0.2
    return 1.1*x[:,0]**2 + 0.3*x[:,1]**2 - 0.6*x[:,0]*x[:,1]

def makedata():
    n = 800
    x = num.random.normal(size=(n, 2))
    y = f(x) < 0.5
    x += num.random.normal(size=(n,2), scale=0.2)
    return x, y

x, y = makedata()

xtrain, xtest, ytrain, ytest = train_test_split(x, y, random_state=0, test_size=0.5)

plot.figure(figsize=(5,5))
plot.scatter(xtrain[:,0], xtrain[:,1], c=ytrain, edgecolors='r');
scikit learn a non-linear classifier

In the below code we will plot the boundaries of the classifier.

  • xx, yy = num.meshgrid(num.arange(x_min, x_max, h), num.arange(y_min, y_max, h)) is used to create the meshgrid on the screen.
  • Z = classifier.predict(num.c_[xx.ravel(), yy.ravel()]) is used to predict with the classifier.
  • plot.figure(figsize=(5,5)) is used to plot the classifier on the screen.
  • plot.scatter(X[:,0], X[:,1], c=Y, edgecolors='r'); is used to plot the scatter plot on the screen.
  • plot_boundary(classifier, xtrain, ytrain) is used to plot the boundaries of the classifier.
  • accuracy_score(ytest, classifier.predict(xtest)) is used to predict the accuracy score.
def plot_boundary(classifier, X, Y):
    h = 0.02
    x_min, x_max = X[:,0].min() - 10*h, X[:,0].max() + 10*h
    y_min, y_max = X[:,1].min() - 10*h, X[:,1].max() + 10*h
    xx, yy = num.meshgrid(num.arange(x_min, x_max, h),
                         num.arange(y_min, y_max, h))
    Z = classifier.predict(num.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)

    plot.figure(figsize=(5,5))
    plot.contourf(xx, yy, Z, alpha=0.25)
    plot.contour(xx, yy, Z, colors='r', linewidths=0.7)
    plot.scatter(X[:,0], X[:,1], c=Y, edgecolors='r');

from sklearn.linear_model import LogisticRegression

classifier = LogisticRegression().fit(xtrain, ytrain)

plot_boundary(classifier, xtrain, ytrain)
accuracy_score(ytest, classifier.predict(xtest))
scikit learn non-linear classifier boundary
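
Note that LogisticRegression is itself a linear classifier, so the boundary above is a straight line and the accuracy is limited. As a sketch of a genuinely non-linear classifier on the same data (our own substitution), we can reuse the plot_boundary helper with an RBF-kernel SVM:

from sklearn.svm import SVC

# An RBF-kernel SVM learns a curved decision boundary.
classifier = SVC(kernel="rbf").fit(xtrain, ytrain)

plot_boundary(classifier, xtrain, ytrain)
print(accuracy_score(ytest, classifier.predict(xtest)))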

Read: Scikit learn Linear Regression

Scikit learn non-linear dimensionality reduction

In this section, we will learn about how Scikit learn non-linear dimensionality reduction works in python.

Non-linear dimensionality reduction is used to reduce the number of features in a dataset while losing as little information as possible.

Code:

In the following code, we will import some libraries from which we can create scikit learn non-linear dimensionality reduction.

  • warnings.filterwarnings('ignore') is used to suppress warnings.
  • x, y = make_s_curve(n_samples=100) is used to make the S-curve.
  • digits = load_digits(n_class=6) is used to load the digits dataset.
  • plot.figure(figsize=(12,8)) is used to plot the figure on the screen.
  • axis = plot.axes(projection='3d') is used to create 3D axes on the screen.
  • axis.scatter3D(x[:, 0], x[:, 1], x[:, 2], c=y) is used to plot the scatter on the graph.
import sklearn

import numpy as num
import pandas as pds
import matplotlib.pyplot as plot
from mpl_toolkits.mplot3d import Axes3D

import warnings
import sys

warnings.filterwarnings('ignore')

%matplotlib inline
from sklearn.datasets import make_s_curve

x, y = make_s_curve(n_samples=100)
from sklearn.datasets import load_digits

digits = load_digits(n_class=6)
x_digits, y_digits = digits.data, digits.target
print('Dataset Size : ', x_digits.shape, y_digits.shape)
plot.figure(figsize=(12,8))
axis = plot.axes(projection='3d')

axis.scatter3D(x[:, 0], x[:, 1], x[:, 2], c=y)
axis.view_init(10, -60);

Output:

After running the above code, we get the following output in which we can see the non-linear dimensionality reduction.

Scikit learn non-linear dimensionality reduction
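
The code above only visualizes the 3D S-curve; it does not yet reduce its dimensionality. As a short sketch of an actual non-linear reduction (our own addition), we can embed the curve into two dimensions with Isomap from sklearn.manifold:

from sklearn.manifold import Isomap

# Embed the 3D S-curve into 2 dimensions.
embedding = Isomap(n_components=2)
x_transformed = embedding.fit_transform(x)
print('Transformed size : ', x_transformed.shape)

plot.figure(figsize=(8, 5))
plot.scatter(x_transformed[:, 0], x_transformed[:, 1], c=y)
plot.show()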

Read: Scikit learn Hyperparameter Tuning

Scikit learn non-linear PCA

In this section, we will learn how Scikit learn non-linear PCA works in python, and we will show the difference between PCA and KernelPCA.

  • We explain the difference with an example in which KernelPCA is able to find a projection of the data that linearly separates the classes, while plain PCA is not.
  • PCA stands for Principal Component Analysis. It is a dimensionality-reduction method that projects the data onto its principal components.
  • Now, we illustrate non-linear PCA by comparing PCA and KernelPCA on projected data.

Code:

In the following code, we demonstrate the advantage of using a kernel when projecting data with PCA.

In this block of code, we generate a dataset of two nested circles and split it into train and test sets.

from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split

x, y = make_circles(n_samples=1_000, factor=0.3, noise=0.05, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(x, y, stratify=y, random_state=0)

import matplotlib.pyplot as plot

_, (train_ax1, test_ax1) = plot.subplots(ncols=2, sharex=True, sharey=True, figsize=(8, 4))

train_ax1.scatter(x_train[:, 0], x_train[:, 1], c=y_train)
train_ax1.set_ylabel("Feature 1")
train_ax1.set_xlabel("Feature 0")
train_ax1.set_title("Train data")

test_ax1.scatter(x_test[:, 0], x_test[:, 1], c=y_test)
test_ax1.set_xlabel("Feature 0")
_ = test_ax1.set_title("Test data")

Output:

After running the above code, we get the following output where we can have a quick view of the two generated nested datasets.

  • One is the training dataset and the other is the testing dataset.
  • The samples from each class cannot be linearly separated: there is no straight line that can split the inner circle from the outer circle.
Scikit learn non-linear PCA

In this block of code, we use PCA with and without a kernel to see what effect the kernel has.

  • The kernel used here is the Radial Basis Function (RBF) kernel.
  • orig_data_ax1.set_ylabel() is used to label the y-axis for the testing data.
  • orig_data_ax1.set_xlabel() is used to label the x-axis for the testing data.
  • orig_data_ax1.set_title() is used to set the title of the graph for the testing data.
  • pca_proj_ax1.set_ylabel() is used to label the y-axis for the PCA projection.
  • pca_proj_ax1.set_xlabel() is used to label the x-axis for the PCA projection.
  • pca_proj_ax1.set_title() is used to set the title of the graph for the PCA projection.
from sklearn.decomposition import PCA, KernelPCA

pca1 = PCA(n_components=2)
kernel_pca1 = KernelPCA(
    n_components=None, kernel="rbf", gamma=10, fit_inverse_transform=True, alpha=0.1
)

x_test_pca1 = pca1.fit(x_train).transform(x_test)
x_test_kernel_pca1 = kernel_pca1.fit(x_train).transform(x_test)


fig, (orig_data_ax1, pca_proj_ax1, kernel_pca_proj_ax1) = plot.subplots(
    ncols=3, figsize=(14, 4)
)

orig_data_ax1.scatter(x_test[:, 0], x_test[:, 1], c=y_test)
orig_data_ax1.set_ylabel("Feature 1")
orig_data_ax1.set_xlabel("Feature 0")
orig_data_ax1.set_title("Testing data")

pca_proj_ax1.scatter(x_test_pca1[:, 0], x_test_pca1[:, 1], c=y_test)
pca_proj_ax1.set_ylabel("Principal component 1")
pca_proj_ax1.set_xlabel("Principal component 0")
pca_proj_ax1.set_title("projection of test data\n using PCA")

kernel_pca_proj_ax1.scatter(x_test_kernel_pca1[:, 0], x_test_kernel_pca1[:, 1], c=y_test)
kernel_pca_proj_ax1.set_ylabel("Principal component 1")
kernel_pca_proj_ax1.set_xlabel("Principal component 0")
_ = kernel_pca_proj_ax1.set_title("projection of test data using\n Kernel PCA")

Output:

After running the above code, we get the following output where we can see a comparison of the testing data, the projection of the testing data using PCA, and the projection of the testing data using KernelPCA.

  • Recall that PCA transforms the data linearly: the projected system is centered, rescaled on each component with respect to its variance, and finally rotated.
  • Looking at the output below, we can see in the middle panel that the structure is unchanged apart from this scaling.
  • Kernel PCA, by contrast, produces a non-linear projection.
  • Here, by using an RBF kernel, we expect the projection to unfold the dataset while preserving the relative distances of pairs of data points that are close to one another in the original space.
  • We can see this difference in the KernelPCA panel on the right.
Scikit learn non-linear Kernel PCA
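
Since kernel_pca1 was created with fit_inverse_transform=True, the projection can also be mapped back to the original feature space. Here is a short sketch of this usage (our own addition):

# Project the test data and reconstruct it in the original space.
x_reconstructed = kernel_pca1.inverse_transform(kernel_pca1.transform(x_test))

fig, ax = plot.subplots(figsize=(4, 4))
ax.scatter(x_reconstructed[:, 0], x_reconstructed[:, 1], c=y_test)
ax.set_title("Reconstruction from Kernel PCA")
plot.show()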

You may also like to read the tutorials on Scikit learn.

So, in this tutorial, we discussed Scikit learn non-linear methods, and we covered different examples related to their implementation. Here is the list of topics that we have covered.

  • Scikit learn non-linear
  • Scikit learn non-linear regression
  • Scikit learn non-linear regression example
  • Scikit learn non-linear SVM
  • Scikit learn non-linear model
  • Scikit learn a non-linear classifier
  • Scikit learn non-linear dimensionality reduction
  • Scikit learn non-linear PCA