In this Python tutorial, we will learn about the PyTorch activation function. An activation function performs a computation on its input and produces an output that acts as the input for the next neuron. We will also cover different examples related to the PyTorch activation function, along with the following topics.
- PyTorch activation function
- PyTorch sigmoid activation function
- PyTorch inplace activation function
- PyTorch tanh activation function
- PyTorch ReLU activation function
- PyTorch leaky ReLU activation function
- PyTorch linear activation function
- PyTorch classification activation function
- PyTorch lstm activation function
- PyTorch swish activation function
- PyTorch softmax activation function
PyTorch activation function
In this section, we will learn about the PyTorch activation function in Python.
The activation function is a building block of the neural network. It applies a non-linear transformation to its input and determines whether a neuron should be activated or not.
It is applied to the weighted sum of the inputs of a neuron. The main purpose of the activation function is to introduce non-linearity into the decision boundary of the neural network.
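Without a non-linear activation, a stack of linear layers would collapse into a single linear transformation, so the network could only learn linear decision boundaries.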
Code:
In the following code, we first import the torch module and then import functional as func from torch.nn.
- input = torch.tensor([-3, -2, 0, 2, 3]) is used to declare the input tensor by using the torch.tensor() function.
- output = func.relu(input) is used to feed the input tensor to the relu activation function and store the output.
- print(output): The print() function is used to print the output values.
# Importing Library
import torch
from torch.nn import functional as func
# Create a tensor
input = torch.tensor([-3, -2, 0, 2, 3])
# Feeding input tensor to relu function and storing the output
output = func.relu(input)
# Print output
print(output)
Output:
In the below output, you can see that the output of the PyTorch activation function is printed on the screen.
So, with this, we understood how to use the PyTorch activation function in Python.
Also, check: PyTorch Fully Connected Layer
PyTorch sigmoid activation function
In this section, we will learn about the PyTorch sigmoid activation function in Python.
Sigmoid is a non-linear activation function. It is an S-shaped curve that does not pass through the origin, and it produces an output that lies between 0 and 1.
The output value is used as a probability and it is frequently used for binary classification.
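Mathematically, sigmoid(x) = 1 / (1 + exp(-x)), which is why the output always lies between 0 and 1.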
Code:
In the following code, we first import the torch module and then import torch.nn as nn.
- input = torch.Tensor([2,-3,4,-6]): We are declaring the input tensor by using the torch.Tensor() function.
- sigmoid_fun = nn.Sigmoid(): We are calling the sigmoid activation function.
- output = sigmoid_fun(input): Here we are applying the sigmoid to the tensor.
- print(output): The print() function is used to print the output values.
# Importing Libraries
import torch
import torch.nn as nn
# Create a tensor
input = torch.Tensor([2,-3,4,-6])
# Calling the sigmoid function
sigmoid_fun = nn.Sigmoid()
# Applying sigmoid to the tensor
output = sigmoid_fun(input)
# Print the output
print(output)
Output:
In the below output, you can see that the PyTorch sigmoid activation function values are printed on the screen.
This is how we can use the PyTorch sigmoid activation function for building neural networks.
Read: PyTorch Model Summary – Detailed Tutorial
PyTorch inplace activation function
In this section, we will learn about the PyTorch inplace activation function in Python.
The activation function is defined as a function that performs computations to give an output that acts as an input for the next neuron.
The activation function applies a non-linear transformation on top of the linear weighted sum, and because it is differentiable, the error can be propagated backwards to adjust the weights.
Syntax:
The syntax of the PyTorch inplace activation function is:
nn.ReLU(inplace=True)
Here ReLU is the activation function, and inplace is the parameter passed to it.
Parameter:
inplace=True means that the operation modifies the input tensor directly instead of allocating an additional output tensor; the default value of inplace is False.
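For example, here is a minimal sketch of the inplace behaviour (the tensor values are chosen only for illustration):
# Importing Libraries
import torch
import torch.nn as nn

# Example tensor (values chosen only for illustration)
x = torch.tensor([-3.0, -2.0, 0.0, 2.0, 3.0])

# inplace=True overwrites x directly instead of allocating a new output tensor
relu_inplace = nn.ReLU(inplace=True)
relu_inplace(x)

# x itself now holds the activated values
print(x)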
This is how the inplace parameter works in the activation function.
Read: PyTorch Logistic Regression
PyTorch tanh activation function
In this section, we will learn about the PyTorch tanh activation function in Python.
The Tanh function is similar to the sigmoid function. It is also an S-shaped curve, but it passes through the origin and its output values range from -1 to +1. Tanh is also a non-linear and differentiable function.
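Mathematically, tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)).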
Code:
In the following code, we first import the torch module and then import torch.nn as nn.
- input = torch.Tensor([2,-3,4,-6]): We are declaring the input tensor by using the torch.Tensor() function.
- func = nn.Tanh(): Here we are calling the Tanh function.
- output = func(input): Applying Tanh to the tensor.
- print(output): The print() function is used to print the output values.
# Importing Libraries
import torch
import torch.nn as nn
# Create a tensor
input = torch.Tensor([2,-3,4,-6])
# Calling the Tanh function
func = nn.Tanh()
# Applying Tanh to the tensor
output = func(input)
# Print Output
print(output)
Output:
In the below output, you can see that the PyTorch Tanh activation function values are printed on the screen.
So, with this, we understood the PyTorch Tanh activation function.
Read: PyTorch Early Stopping + Examples
PyTorch ReLU activation function
In this section, we will learn about the PyTorch ReLU activation function in Python.
ReLU stands for Rectified Linear Unit. It is a popular non-linear activation function.
It is differentiable everywhere except at zero. In ReLU, if the input is negative, the derivative becomes zero and the neuron stops learning (the dying ReLU problem).
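Mathematically, ReLU(x) = max(0, x), so all negative inputs are mapped to zero.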
Code:
In the following code, we first import the torch module and then import torch.nn as nn.
- func = nn.ReLU(): Here we are calling the ReLU function.
- input = torch.Tensor([2,-3,4,-6]): We are declaring the input tensor by using the torch.Tensor() function.
- output = func(input): Applying ReLU to the tensor.
- print(output): The print() function is used to print the output values.
# Importing Libraries
import torch
import torch.nn as nn
# Calling the ReLU function
func = nn.ReLU()
# Creating a Tensor with an array
input = torch.Tensor([2,-3,4,-6])
# Passing the array to relu function
output = func(input)
# Print an output
print(output)
Output:
In the following output, you can see that the PyTorch ReLU activation function values are printed on the screen.
So with this, we understood how to use the PyTorch ReLU activation function.
Read: PyTorch MSELoss
PyTorch leaky ReLU activation function
In this section, we will learn about the PyTorch leaky ReLU activation function in Python.
The leaky ReLU function is another type of activation function. It is similar to the ReLU activation function, except that it has a small non-zero slope for negative inputs.
Because the derivative of the function is not zero for negative input values, the neurons do not stop learning (it avoids the dying ReLU problem).
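Mathematically, LeakyReLU(x) = x for x >= 0 and negative_slope * x for x < 0, where negative_slope is a small constant (0.01 by default in PyTorch).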
Code:
In the following code, we first import the torch module and then import torch.nn as nn.
- func = nn.LeakyReLU(0.3): We are defining LeakyReLU, and the parameter 0.3 is passed to control the negative slope.
- input = torch.Tensor([2,-3,4,-6]): Here we are creating a tensor from an array.
- output = func(input): Applying Leaky ReLU to the tensor.
- print(output): The print() function is used to print the output.
# Importing Libraries
import torch
import torch.nn as nn
# Defining LeakyReLU with negative_slope=0.3
func = nn.LeakyReLU(0.3)
# Creating a Tensor with an array
input = torch.Tensor([2,-3,4,-6])
# Applying Leaky ReLU to the tensor
output = func(input)
# Print output
print(output)
Output:
In the below output, you can see that the PyTorch Leaky ReLU activation function values are printed on the screen.
This is how we can use the PyTorch Leaky ReLU activation function for building the neural network.
Read: PyTorch Batch Normalization
PyTorch linear activation function
In this section, we will learn about the PyTorch linear activation function in Python.
A linear activation simply passes the weighted sum through unchanged. In PyTorch this corresponds to the nn.Linear layer, which takes in_features and out_features parameters and applies a linear transformation to the input using a weight matrix and a bias.
As we know, the activation function is a building block of the neural network; non-linear activations apply a transformation that determines whether a neuron should be activated or not.
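Concretely, nn.Linear(in_features, out_features) computes y = x @ W.T + b, where the weight W has shape (out_features, in_features) and the bias b has shape (out_features,).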
Code:
In the following code, we first import the torch module, then import numpy as np, and also import nn from torch.
- l = nn.Linear(in_features=3,out_features=1) is used to create an object of the Linear class.
- print(l.weight) is used to print the weight.
- print(l.bias) is used to print the bias.
- print(l(torch.tensor([2,3,4],dtype=torch.float32))): Here we are passing an input to the linear layer.
# Importing necessary libraries
import torch
import numpy as np
from torch import nn
# Creating an object for linear class
l = nn.Linear(in_features=3,out_features=1)
# See the weights and bias
print(l.weight)
print(l.bias)
# Passing an input to the linear layer
print(l(torch.tensor([2,3,4],dtype=torch.float32)))
Output:
In the following output, you can see that the linear activation function values are printed on the screen.
So, with this, we understood the PyTorch linear activation function.
Read: PyTorch Load Model + Examples
PyTorch classification activation function
In this section, we will learn about the PyTorch classification activation function in Python.
An activation function is applied to the output of the weighted sum of the input. The main purpose of the activation function is to introduce non-linearity into the decision boundary of the neural network.
Code:
In the following code, we first import all the necessary libraries: torch, torchvision, torchvision.transforms, torch.nn, and torch.nn.functional.
- transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.7, 0.7, 0.7), (0.7, 0.7, 0.7))]) is used to define the transforms.
- class classification(nn.Module): We are defining a model class with the help of the __init__() and forward() methods.
- criterion = nn.CrossEntropyLoss(): Here we are defining a loss function.
- optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) is used to initialize the optimizer.
- inputs, labels = data is used to get the inputs.
- optimizer.zero_grad() is used to zero the parameter gradients.
- print(f'[{epoch + 1}, {x + 1:5d}] loss: {runloss / 2000:.3f}') is used to print the epoch and running loss.
# Importing libraries
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as Function
# Define a transform
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.7, 0.7, 0.7), (0.7, 0.7, 0.7))])

batch_size = 2

# Load training dataset
trainingset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                           download=True, transform=transform)
trainingloader = torch.utils.data.DataLoader(trainingset, batch_size=batch_size,
                                             shuffle=True, num_workers=2)
test_set = torchvision.datasets.CIFAR10(root='./data', train=False,
                                        download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
                                          shuffle=False, num_workers=2)

class classification(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, y):
        y = self.pool(Function.relu(self.conv1(y)))
        y = self.pool(Function.relu(self.conv2(y)))
        # Flatten all dimensions except batch
        y = torch.flatten(y, 1)
        y = Function.relu(self.fc1(y))
        y = Function.relu(self.fc2(y))
        y = self.fc3(y)
        return y

net = classification()

import torch.optim as optim

# Define loss function
criterion = nn.CrossEntropyLoss()
# Initialize optimizer
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

# Loop over the dataset multiple times
for epoch in range(3):
    runloss = 0.0
    for x, data in enumerate(trainingloader, 0):
        # Get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # Zero the parameter gradients
        optimizer.zero_grad()
        # Forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # Accumulate running loss statistics
        runloss += loss.item()
        # Print every 2000 mini-batches
        if x % 2000 == 1999:
            print(f'[{epoch + 1}, {x + 1:5d}] loss: {runloss / 2000:.3f}')
            runloss = 0.0

print('complete training')
Output:
In the below output, you can see that the PyTorch classification activation function loss value is printed on the screen.
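Note that test_loader is defined above but never used in the training loop. As a follow-up, here is a minimal sketch of how it could be used to measure accuracy on the test set, reusing the net and test_loader names from the code above:
# Evaluate the trained network on the test set (no gradients needed)
correct = 0
total = 0
with torch.no_grad():
    for data in test_loader:
        images, labels = data
        outputs = net(images)
        # The class with the highest score is the predicted class
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy on the test images: {100 * correct / total:.1f} %')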
This is how the PyTorch classification activation function works.
Read: Cross Entropy Loss PyTorch
PyTorch lstm activation function
In this section, we will learn about the PyTorch LSTM activation function in Python.
LSTM stands for Long Short-Term Memory; it uses activation functions to control the activation of its cell and gates.
The activation function is a main building block of PyTorch networks, and it performs a computation that provides an output which acts as an input for the next neuron.
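In PyTorch, nn.LSTM(input_size, hidden_size, num_layers) builds a stacked LSTM; internally, its input, forget, and output gates use sigmoid activations, while tanh is used for the cell candidate and the cell output.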
Code:
In the following code, we first import the torch module and then import torch.nn as nn.
- lstm = nn.LSTM(12, 22, 4): Calling the LSTM module with input_size=12, hidden_size=22, and num_layers=4.
- input = torch.randn(7, 5, 12): We are describing the input variable by using the torch.randn() function.
- output, (hn, cn) = lstm(input, (h0, c0)) is used to apply the LSTM to the tensor.
- print(output) The print() function is used to print the output values.
# Import Library
import torch
import torch.nn as nn
# Calling the LSTM module: input_size=12, hidden_size=22, num_layers=4
lstm = nn.LSTM(12, 22, 4)
# Describe the variables by using torch.randn()
input = torch.randn(7, 5, 12)
h0 = torch.randn(4, 5, 22)
c0 = torch.randn(4, 5, 22)
# Applying the LSTM to the tensor
output, (hn, cn) = lstm(input, (h0, c0))
# Print the output values
print(output)
Output:
In this output, you can see that the PyTorch LSTM activation function values are printed on the screen.
So, with this, we understood the PyTorch LSTM activation function in Python.
Read: PyTorch Save Model – Complete Guide
PyTorch swish activation function
In this section, we will learn about the PyTorch swish activation function in Python.
Swish is an activation function that, in its general form, includes a trainable parameter. As we know, the activation function is applied to the output of the weighted sum of the input and is a building block of the neural network.
The swish function is also known as the SiLU function. The SiLU stands for Sigmoid Linear Unit.
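Mathematically, SiLU(x) = x * sigmoid(x), i.e. the input multiplied by its own sigmoid.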
Code:
In the following code, we first import the torch module and then import torch.nn as nn.
- s = nn.SiLU() is used to call the SiLU activation function.
- input = torch.randn(2) is used to describe the input variable by using the torch.randn() function.
- output = s(input) is used to apply SiLU to the tensor.
- print(output) The print() function is used to print the output values.
# Import PyTorch
import torch
import torch.nn as nn
# Calling the SiLU() activation function
s = nn.SiLU()
# Describe the variable
input = torch.randn(2)
# Applying SiLU to the tensor
output = s(input)
# Print output
print(output)
Output:
In the below output you can see that the PyTorch swish activation function output values are printed on the screen.
So, with this, we understood the PyTorch swish activation function in Python.
Read: Adam optimizer PyTorch
PyTorch softmax activation function
In this section, we will learn about the PyTorch softmax activation function in Python.
The PyTorch softmax activation function is applied to an n-dimensional input tensor and rescales it so that the elements of the output tensor lie in the range [0, 1] and sum to 1 along the specified dimension.
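Mathematically, softmax(x_i) = exp(x_i) / sum_j exp(x_j), computed along the chosen dimension.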
Code:
In the following code, we first import the torch module and then import torch.nn as nn.
- f = nn.Softmax(dim=1) is used to call the softmax activation function.
- input = torch.randn(4, 5) is used to describe the input tensor by using the torch.randn() function.
- output = f(input) is used to apply softmax to the tensor.
- print(output) The print() function is used to print the output values.
# Import PyTorch
import torch
import torch.nn as nn
# Calling the Softmax activation function
f = nn.Softmax(dim=1)
# Describe the variable
input = torch.randn(4, 5)
# Applying Softmax to the tensor
output = f(input)
# Print output
print(output)
Output:
In this output, you can see that the PyTorch softmax activation function values are printed on the screen.
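As a quick sanity check, continuing the example above, the values along the dimension passed to dim form a probability distribution, so each row of the output sums to 1:
# Each of the 4 rows sums to 1 because dim=1 was passed to nn.Softmax
print(output.sum(dim=1))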
This is how the PyTorch softmax activation function works.
Also, take a look at some more Python PyTorch tutorials.
So, in this tutorial, we discussed PyTorch Activation function and we also covered different examples related to its implementation. Here is the list of examples that we have covered.
- PyTorch activation function
- PyTorch sigmoid activation function
- PyTorch inplace activation function
- PyTorch tanh activation function
- PyTorch ReLU activation function
- PyTorch leaky ReLU activation function
- PyTorch linear activation function
- PyTorch classification activation function
- PyTorch lstm activation function
- PyTorch swish activation function
- PyTorch softmax activation function