How to Build a Perceptron in Python

If you are a deep learning or machine learning enthusiast, you should know how to build a perceptron in Python, as it is the foundation of neural networks and many machine learning models.

I will explain the perceptron in depth with an example. You will understand, at a fundamental level, how the perceptron works: it takes input, processes it, and emits output. Once you know how a perceptron works, you will be ready to build one.

You will learn how to prepare a small dataset for a perceptron in Python and train the perceptron on that data for a specific task, such as classifying fruits. After building and training the perceptron, you will test it with new data to see whether it makes the right prediction.

Additionally, you will understand the important perceptron parameters used to make predictions.

What is a Perceptron?

A perceptron is the basic unit of an artificial neural network (ANN); ANNs are computational models that mimic the human brain’s structure and functioning.

You hear the term Artificial Intelligence (AI) everywhere today; it is about giving machines human-like abilities. Artificial neural networks are a foundational element of AI, which is used in pattern recognition, classification, and prediction tasks such as weather forecasting.

Behind many of the advances we see today in machine learning and data analysis are artificial neural networks.

An ANN is created by combining multiple perceptrons, but what exactly is a perceptron? It is like a neuron in the human brain; see the figure below.

A Perceptron is Like a Neuron

A neuron in the human brain consists of three main parts:

  • Dendrites: Dendrites are tree-like structures in neurons that accept signals from other neurons.
  • Cell body: The neuron’s core, where the signals from the dendrites are processed or summed up; it also maintains the cell’s life processes.
  • Axon: A long tail that carries electrical impulses from the cell body and transmits the information to other neurons.

Let’s apply the same analogy to understand the perceptron in artificial neural networks. A perceptron (neuron) is a mathematical function whose behavior closely mirrors that of a neuron in the brain.

By mathematical function, I mean that each perceptron in an ANN takes a set of inputs, a set of weights, and an activation function, and converts them into a single output. That output can then be picked up by other perceptrons as input, and the process continues.

Below is a mathematical model of perceptron.

Mathematical Model of Perceptron

As you can see, a perceptron is made of five things:

  • Inputs: The information fed to the perceptron; in the real world, these can be features from a dataset. Mathematically, they are written x1, x2, ..., xn.
  • Weights: Each input has an associated weight; a weight expresses how much importance that input carries.
  • Weighted Sum: Each input x1 is multiplied by its corresponding weight w1 (w1 * x1), and the products are summed; this total is called the weighted sum. Mathematically, Weighted sum = w1 * x1 + w2 * x2 + . . . + wn * xn.
  • Bias: The bias is a value added to the weighted sum to shift the decision boundary and improve accuracy.
  • Activation function: It is applied to the weighted sum to introduce non-linearity into the perceptron.
    • It is a function with a threshold value; if the weighted sum exceeds the threshold, the perceptron is activated and outputs 1. Otherwise, the perceptron remains deactivated and outputs 0.

Overall, inputs are fed to the perceptron along with their weights, and each input is multiplied by its weight; each product represents the strength of the connection between that input and the perceptron.

Then the products are summed up into the weighted sum. After this, the bias is added to the weighted sum to produce a more accurate output. Finally, the activation function, based on this value, outputs 1 or 0: yes or no.

Let me relate the perceptron to a real-life example.

Imagine a friend named Geniee who likes to play video games. He has to decide whether to play outside or stay indoors based on two factors: the weather (W) and the number of video games he has (G).

  • Inputs (W and G): W indicates how good the weather is, whether sunny or rainy, and G is the number of video games Geniee has.
  • Weights (W_weight and G_weight): Geniee thinks the weather is more important, so he assigns it a higher weight (W_weight) than the number of video games (G_weight).
  • Weighted Sum: Geniee computes a score by multiplying the weather by its weight and the number of video games by its weight, then adding the two products. This gives him a weighted sum: Weighted Sum = W x W_weight + G x G_weight.
  • Activation Function: Now, Geniee sets a threshold for his decision, also called a decision boundary. If the score exceeds the threshold (decision boundary), he decides to play outside. Otherwise, he stays indoors.
    • This decision-making process is like a simple switch: play outside (1) if the score is high enough, or stay indoors (0) if it’s not.
  • Bias (B): Sometimes, Geniee might feel like playing outside or staying indoors regardless of the weather or video games. This feeling is a bias (B) that affects his decision. If Geniee is strongly biased toward playing outside, he may go out even if the weather is not perfect.

If you put them all together, we get the following:

Perceptron Decision Boundary

So, here, Geniee’s decision to play outside or stay indoors is like a simple perceptron deciding based on the weather, the number of video games, and a personal bias. This is essentially how a simple linear classifier makes predictions from different features.
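Geniee’s decision can be written out with numbers. The weights, bias, and threshold below are purely illustrative values, not anything fixed by the story:

```python
# Made-up values: good weather (0.9), 3 video games, weather weighted
# positively, games pulling toward staying indoors, plus a small bias.
W, G = 0.9, 3
W_weight, G_weight = 2.0, -0.5
B = 0.2
threshold = 0.0

score = W * W_weight + G * G_weight + B  # 1.8 - 1.5 + 0.2 = 0.5
decision = "play outside" if score > threshold else "stay indoors"
print(decision)  # prints "play outside" because 0.5 exceeds the threshold
```

Flip the weather value to something low (say 0.1) and the score drops below the threshold, so the same rule would say "stay indoors".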

Build a Perceptron in Python

Now, you are familiar with the concept of the perceptron and how it is constructed and works. So here, I will show you a simple example using Python where you will learn how to create a perceptron, train this perceptron with a training dataset, and make predictions.

As humans, to identify anything, we need to remember patterns; for example, to identify whether a fruit is a mango, you notice some of the mango's patterns, such as its shape and colour.

If you get a basket of mixed fruits, you can easily find the mango if you know how it looks based on its shape and colour.

But, before creating a perceptron, I want to introduce you to two more parameters that are used in the training of the perceptron: the first is the learning rate, and the second is the error.

Coming back to Geniee, as you know, he likes deciding whether to play outside or stay indoors based on the weather and the number of video games he has.

  • Learning Rate: Geniee has a decision-making strategy, and the learning rate is how much he adjusts that strategy each new day. If his learning rate is high, he quickly adapts to changes; if it is low, he takes longer to adjust.
    • So, the learning rate is how much Geniee updates his decision-making approach based on his recent experience with the weather and his video game collection.
  • Error: The error is the difference between Geniee’s decision and the optimal decision for the day. If Geniee decides to play outside, but it’s raining, the error is high. If he decides to stay indoors, but the weather is perfect, the error is also high.
    • The error is the discrepancy between Geniee’s decision and the most appropriate decision based on the actual weather conditions.

Also, remember that weights and bias are adjustable parameters; they are like Geniee’s preferences. If Geniee likes playing outside (a high weight for good weather), he will need stronger evidence to stay indoors.

So, Geniee adjusts his preferences(weights and biases) over time based on whether he made good decisions. If he regrets staying indoors on a sunny day, he might change his mind and prefer playing outside more in similar conditions.

  • In simple words, Geniee learns to make better decisions by quickly adapting (learning rate), recognizing when he makes mistakes (error), and adjusting his preferences based on past experiences (weights and biases).

Now, let’s build a perceptron and train it to identify whether a given fruit is a mango. But make sure you are familiar with NumPy. If not, follow the tutorials NumPy random number, Dot Product and Cross Product, and Indexing and Slicing in Python List.

First, let’s create a small dataset based on the characteristics (weight, colour) of fruits like mango, and from this dataset, the perceptron will learn to identify if the given fruit is mango or not.

So, the two features of a mango are its weight (in grams) and its colour (0 represents green, 1 represents yellow):

Creating a Dataset For Perceptron in Python

To create a dataset, you can use a list of lists or a Pandas DataFrame. I am going to use a Pandas DataFrame.

Import pandas and NumPy as shown below (NumPy is used later for the random initialisation and the dot product).

import pandas as pd
import numpy as np

Create a dataset as shown below.

dataset = {
	'Weight (grams)': [150, 200, 120, 180, 100],
	'Color (0 = Green, 1 = Yellow)': [0, 1, 0, 1, 0],
	'Label (1 = Mango, 0 = Not Mango)': [0, 1, 0, 1, 0]
}

Convert it to a Pandas DataFrame as shown below.

df = pd.DataFrame(dataset)

View the dataset using the code below.

print(df)

The dataset contains three columns: Weight, Color, and Label. Next, use the following code to define the parameters of the perceptron: weights, bias, and learning_rate.

weights = np.random.rand(2)
bias = np.random.rand(1)
learning_rate = 0.1

We randomly initialise the weights and bias using the np.random.rand() function. The weights array has size 2 because the dataset has two features, which means the perceptron will learn from two features.

Bias is also initialized with one random number, and the initial learning_rate is 0.1. These are the parameters of the perceptron that you already know.

Next, define the activation function, a simple step function shown below.

def activation_function(value):
    return 1 if value >= 0 else 0

This function accepts a value; if the value is greater than or equal to 0, it returns 1; otherwise, it returns 0. This is a simple step activation function; there are other activation functions, such as sigmoid.

Now, extract the features and labels from the dataset as shown below.

features = np.array(df)[:, :2]
labels = np.array(df)[:, 2]
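As a quick sanity check, the same column split can be illustrated on a plain NumPy array with the dataset's five rows:

```python
import numpy as np

# Same layout as the DataFrame: weight, colour, label.
data = np.array([
    [150, 0, 0],
    [200, 1, 1],
    [120, 0, 0],
    [180, 1, 1],
    [100, 0, 0],
])
features = data[:, :2]  # all rows, first two columns (weight, colour)
labels = data[:, 2]     # all rows, last column (the label)
print(features.shape, labels.shape)  # (5, 2) (5,)
```

So features is a 5x2 matrix (one row per fruit) and labels is a length-5 vector.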

Training the Perceptron

Use the code below to train the perceptron.

# Training the perceptron
for epoch in range(1000):
    for i in range(len(features)):

        # Extract features and label for each data point
        feature_vector = features[i]
        label = labels[i]

        # Compute the weighted sum
        weighted_sum =, weights) + bias

        # Apply the activation function to the weighted sum
        prediction = activation_function(weighted_sum)

        # Update the weights and bias based on prediction error
        error = label - prediction
        weights += learning_rate * error * feature_vector
        bias += learning_rate * error

Let’s understand the training part of the code line by line:

  • Training begins with the outer loop ‘for epoch in range(1000):’. This loop iterates 1000 times, and each iteration is called an epoch. An epoch indicates one complete pass through the entire dataset during training.
  • Next is the inner loop ‘for i in range(len(features)):’. This loop iterates over each data point in the dataset, processing them one at a time during each epoch.
    • Each entry in the dataset has two features, weight and colour, so feature_vector = features[i] extracts the feature vector (weight, colour) for the current data point. Then label = labels[i] extracts the corresponding label (1 for mango, 0 for not mango).
    • The next line, weighted_sum =, weights) + bias, computes the weighted sum of the features using the current weights and bias. This is the linear combination of the inputs.
    • Then, the activation function is applied to the weighted sum with prediction = activation_function(weighted_sum). Here the activation function is a step function, which outputs 1 or 0 depending on the sign of the weighted sum.
    • The prediction is compared with the actual label from the dataset to see whether they match. The difference between the label and the prediction is called the error, computed using error = label - prediction.
    • Finally, the lines weights += learning_rate * error * feature_vector and bias += learning_rate * error update the weights and bias based on the prediction error.
    • The learning rate determines the step size of each update. This is the perceptron learning rule: the perceptron learns from its mistakes and adjusts its parameters to improve future predictions. (In multi-layer networks, the analogous job is done by backpropagation.)

By repeating the above steps for 1000 epochs, the perceptron learns to classify the dataset (whether the fruit is a mango), adjusting its weights and bias to minimize the prediction error.

The result is a trained perceptron that can make predictions on the new data points ( new data points related to fruits).

Make Predictions using the Trained Perceptron

Next, let’s provide a new data point to the trained perceptron to see whether it is a mango or not.

Create a new data point with weight equal to 160 and colour equal to 1 (which means yellow).

new_fruit = np.array([160, 1])

Compute the weighted sum using the code below.

weighted_sum =, weights) + bias

Make a prediction by applying the same activation function, as shown below.

prediction = activation_function(weighted_sum)

Now, check the output of the prediction value.

if prediction == 1:
  print("It is a Mango")
  print("Not Mango")

As you can see, the given data point is classified as a mango, which is the correct prediction: according to the pattern in the dataset, a fruit with weight 160 and colour 1 (yellow) is a mango.

Let’s change the colour value to 0 and see what happens.


It predicted the given data point as ‘Not Mango’, which is correct according to the dataset.

Well, you now have your first perceptron, which classifies whether a given fruit is a mango. Remember that the weights and bias are important because these parameters are updated during training.

While making predictions, we used these parameter values to compute the weighted sum. Here, training means finding the parameter values that make correct predictions on new data points.

The above is a basic example of how perceptrons work and how they can be trained on specific datasets for specific tasks; for complex tasks, multiple perceptrons work together as artificial neural networks.

  • Also, remember that the activation function we used here is called the step function, which outputs the binary values 1 or 0. This lets the perceptron learn only linearly separable relationships between data points.
  • However, other activation functions are used for more complex tasks; they introduce non-linearity, allowing networks of perceptrons to learn non-linear relationships. Examples include sigmoid, hyperbolic tangent (tanh), and ReLU (rectified linear unit).
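For reference, here are minimal sketches of those alternatives (standard textbook definitions, not part of this tutorial's code):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))  # squashes any value into (0, 1)

def tanh(x):
    return np.tanh(x)            # squashes any value into (-1, 1)

def relu(x):
    return np.maximum(0, x)      # keeps positives, zeroes out negatives

print(sigmoid(0.0), tanh(0.0), relu(-3.0))  # 0.5 0.0 0.0
```

Unlike the step function, sigmoid and tanh are smooth, which is what makes gradient-based training of multi-layer networks possible.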

Next, I suggest you learn about “Build Artificial Neural Networks in TensorFlow”, where you will learn how perceptrons are combined to create neural networks that perform very complex tasks.


Conclusion

You learned that the perceptron is similar to a neuron in the human brain, and you also learned how to represent the perceptron as a mathematical model that learns from a dataset and makes predictions on new data.

Additionally, you learned the parameters of the perceptron and how they are adjusted using the learning rate and error. Finally, you built a perceptron and trained it on a small dataset to identify whether a given data point is a mango.