Scikit learn hidden_layer_sizes

In this Python tutorial, we will learn how scikit learn hidden_layer_sizes works in Python, and we will also cover different examples related to hidden_layer_sizes. Additionally, we will cover these topics.

  • Scikit learn hidden_layer_sizes
  • Scikit learn hidden_layer_sizes examples

Scikit learn hidden_layer_sizes

In this section, we will learn about how scikit learn hidden_layer_sizes works in Python. Scikit learn hidden_layer_sizes is defined as a parameter that lets us set the number of hidden layers and the number of nodes each layer has in a neural network classifier.
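
The parameter takes a tuple: each element is one hidden layer, and its value is the number of nodes in that layer. Here is a minimal sketch (the layer sizes and other values are illustrative, not from the original tutorial):

from sklearn.neural_network import MLPClassifier

# hidden_layer_sizes=(10, 5) builds two hidden layers:
# 10 nodes in the first and 5 nodes in the second (illustrative values)
clf = MLPClassifier(hidden_layer_sizes=(10, 5), max_iter=1000, random_state=0)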

Code:

In the following code, we will import make_blobs from sklearn.datasets, which generates a small synthetic dataset of clustered points that a neural network classifier can then be trained on.

n_samples = 200 is used to set the number of samples.

fig, axis = plot.subplots() is used to create the figure and axes that the clusters are drawn on.

import matplotlib.pyplot as plot
from sklearn.datasets import make_blobs

n_samples = 200
blob_centers = ([1, 1], [3, 4], [1, 3.3], [3.5, 1.8])
data, labels = make_blobs(n_samples = n_samples,
   centers = blob_centers,
   cluster_std = 0.5,
   random_state = 0)

colours = ('red', 'blue', 'green', 'orange')
fig, axis = plot.subplots()

# Plot each blob cluster in its own colour
for n_class in range(len(blob_centers)):
   axis.scatter(data[labels == n_class][:, 0],
      data[labels == n_class][:, 1],
      c = colours[n_class],
      s = 30,
      label = str(n_class))

axis.legend()
plot.show()

Output:

After running the above code, we get the following output in which we can see that the scatter points are plotted on the screen.

[Figure: scatter plot of the four blob clusters]
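
The blobs above only give us data to work with; to actually exercise hidden_layer_sizes, we can train an MLPClassifier on them. The following is a minimal sketch continuing from the snippet above; the layer sizes (6, 4) and the split are illustrative choices, not part of the original tutorial:

from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Split the blob data created above into train and test sets
data_train, data_test, labels_train, labels_test = train_test_split(
   data, labels, test_size=0.2, random_state=0)

# Two hidden layers: 6 nodes in the first, 4 in the second (illustrative)
clf = MLPClassifier(hidden_layer_sizes=(6, 4), max_iter=1000, random_state=0)
clf.fit(data_train, labels_train)
print(f"Test accuracy: {clf.score(data_test, labels_test):.2f}")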

Also, check: Scikit learn Ridge Regression

Scikit learn hidden_layer_sizes examples

In this section, we will walk through examples of how scikit learn hidden_layer_sizes is used in Python.

In a neural network, hidden_layer_sizes works as a parameter that allows us to set the number of hidden layers and the width of each one.
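
Concretely, the length of the tuple is the number of hidden layers and each entry is that layer's width; scikit learn's default is a single hidden layer of 100 nodes. A small sketch with illustrative values:

from sklearn.neural_network import MLPRegressor

MLPRegressor()                                 # default: one hidden layer of 100 nodes
MLPRegressor(hidden_layer_sizes=(40, 25))      # two hidden layers: 40, then 25 nodes
MLPRegressor(hidden_layer_sizes=(64, 32, 16))  # three hidden layers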

Example 1:

In the following code, we will import train_test_split from sklearn.model_selection, by which we can split the dataset into train and test sets.

x = pds.DataFrame(calhousing.data, columns=calhousing.feature_names) is used to create the dataset.

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0) is used to split the train and test data.

estm = make_pipeline(QuantileTransformer(), MLPRegressor(hidden_layer_sizes=(40, 25), learning_rate_init=0.02, early_stopping=True, random_state=0)) is used to make the pipeline, and inside it we pass hidden_layer_sizes=(40, 25): two hidden layers with 40 and 25 nodes.

import pandas as pds
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

calhousing = fetch_california_housing()
x = pds.DataFrame(calhousing.data, columns=calhousing.feature_names)
y = calhousing.target

# Centre the target around zero
y -= y.mean()

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)
from time import time
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import QuantileTransformer
from sklearn.neural_network import MLPRegressor

print("Train MLPRegressor")
tim = time()
estm = make_pipeline(
    QuantileTransformer(),
    MLPRegressor(
        hidden_layer_sizes=(40, 25),  # two hidden layers: 40 nodes, then 25
        learning_rate_init=0.02,
        early_stopping=True,
        random_state=0,
    ),
)
estm.fit(x_train, y_train)
print(f"done in {time() - tim:.4f}s")
print(f"Testing R2 score: {estm.score(x_test, y_test):.2f}")

Output:

After running the above code, we get the following output in which we can see that the testing R2 score is printed on the screen.

[Image: console output showing the training time and testing R2 score]
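
If you are unsure which hidden_layer_sizes to pick, a few candidate tuples can be compared with GridSearchCV. This is a hedged sketch, not part of the original tutorial: the candidate sizes and the subsample are arbitrary, and the parameter name mlpregressor__hidden_layer_sizes follows make_pipeline's automatic step naming.

from sklearn.model_selection import GridSearchCV

# Candidate architectures: one hidden layer vs. two (illustrative values)
param_grid = {"mlpregressor__hidden_layer_sizes": [(40,), (40, 25), (60, 30)]}

# Search on a subsample of the training data to keep the run quick
search = GridSearchCV(estm, param_grid, cv=3, n_jobs=-1)
search.fit(x_train[:2000], y_train[:2000])
print(search.best_params_)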

Example 2:

In the following code, we will import PartialDependenceDisplay from sklearn.inspection, by which we can compute and plot partial dependence.

  • displays.figure_.suptitle("Partial dependence of house value on non-location features\n" "for the California housing dataset, with MLPRegressor") is used to set the figure title.
  • displays.figure_.subplots_adjust(hspace=0.3) is used to adjust the vertical spacing between the subplots.
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

print("Compute partial dependence plots...")
tim = time()
features = ["MedInc", "AveOccup", "HouseAge", "AveRooms"]
displays = PartialDependenceDisplay.from_estimator(
    estm,
    x_train,
    features,
    kind="both",
    subsample=50,
    n_jobs=3,
    grid_resolution=20,
    random_state=0,
    ice_lines_kw={"color": "tab:green", "alpha": 0.2, "linewidth": 0.5},
    pd_line_kw={"color": "tab:red", "linestyle": "--"},
)
print(f"done in {time() - tim:.3f}s")
displays.figure_.suptitle(
    "Partial dependence of house value on non-location features\n"
    "for the California housing dataset, with MLPRegressor"
)
displays.figure_.subplots_adjust(hspace=0.3)
plt.show()

Output:

After running the above code, we get the following output in which we can see that the partial dependence of house value on non-location features for the California housing dataset is plotted on the screen.

[Figure: partial dependence plots of house value on MedInc, AveOccup, HouseAge, and AveRooms]
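
As a quick sanity check (an addition, not part of the original tutorial), you can confirm the architecture that hidden_layer_sizes produced by inspecting the fitted network's weight matrices; make_pipeline names the MLP step "mlpregressor" automatically:

# Continuing from Example 1's fitted pipeline estm
mlp = estm.named_steps["mlpregressor"]
print([w.shape for w in mlp.coefs_])
# With hidden_layer_sizes=(40, 25) and the 8 California housing features,
# this prints [(8, 40), (40, 25), (25, 1)]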


So, in this tutorial, we discussed Scikit learn hidden_layer_sizes, and we also covered different examples related to its implementation. Here is the list of topics that we have covered.

  • Scikit learn hidden_layer_sizes
  • Scikit learn hidden_layer_sizes examples