PyTorch Leaky ReLU – Useful Tutorial

In this Python tutorial, we will learn about the PyTorch Leaky ReLU function, an activation function used to solve the problem of dying neurons. We will also cover different examples related to the PyTorch Leaky ReLU, touching on these topics:

  • PyTorch leaky relu
  • PyTorch leaky relu example
  • PyTorch leaky relu inplace
  • PyTorch leaky relu slope
  • PyTorch leaky relu functional
  • PyTorch leaky relu vs relu

PyTorch Leaky Relu

In this section, we will learn how the PyTorch Leaky ReLU works in Python.

The PyTorch leaky relu is an activation function. It is beneficial because, when the input is negative, the derivative of the function is not zero, so the neuron keeps learning. This function is used to solve the problem of dying neurons.
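Concretely, the function leaves positive inputs unchanged and scales negative inputs by a small slope:

LeakyReLU(x) = max(0, x) + negative_slope * min(0, x)

In other words, LeakyReLU(x) = x for x >= 0, and LeakyReLU(x) = negative_slope * x for x < 0.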

Syntax:

The syntax of leaky relu is:

torch.nn.LeakyReLU(negative_slope=0.01, inplace=False)

Parameters

The following parameters are used within the LeakyReLU() function.

  • negative_slope: It is used to control the angle of the negative slope. The default value of the negative_slope is 1e-2.
  • inplace: It can optionally do the operation in-place. The default value of inplace is False. If the value of inplace is True, it will alter the input directly without allocating any additional output (a quick check of these defaults follows below).
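As a quick check of these defaults, here is a minimal sketch: a negative input is scaled by the default negative_slope of 1e-2.

# Quick check of the default parameters
import torch
import torch.nn as nn

m = nn.LeakyReLU()                    # negative_slope=0.01, inplace=False
print(m(torch.tensor([-1.0, 1.0])))   # tensor([-0.0100,  1.0000])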

So, with this, we understood how the PyTorch leaky relu works in Python.

Read: PyTorch Activation Function

PyTorch leaky relu example

In this section, we will learn about the PyTorch leaky relu with the help of an example in Python.

The PyTorch leaky relu is defined as an activation function. If the input is negative, the derivative of the function is a very small fraction, never zero.

This makes sure that the neuron keeps learning during backpropagation, thus avoiding the dying neuron issue.

Code:

In the following code, we will first import the torch module and then import torch.nn as nn.

  • n = nn.LeakyReLU(0.2) Here we are using the LeakyReLU() function with a negative slope of 0.2.
  • input = torch.randn(4) Here we are describing the input variable by using the torch.randn() function.
  • output = n(input) Here we are declaring the output variable.
  • print(output) is used to print the output values by using the print() function.
# Import library
import torch
import torch.nn as nn
# Using the LeakyReLU() function
n = nn.LeakyReLU(0.2)
# Describing the input variable
input = torch.randn(4)
# Declaring the output variable
output = n(input)
print(output)

Output:


After running the above code, we get the following output, in which we can see the PyTorch leaky relu values printed on the screen. Because the input comes from torch.randn(), the exact values will vary from run to run.

[Output image: PyTorch leaky relu example]

This is how the implementation of the PyTorch leaky relu is done.

Read: PyTorch fully connected layer

PyTorch leaky relu inplace

In this section, we will learn about the PyTorch leaky relu inplace in Python.

The PyTorch leaky relu inplace is the same activation function with the inplace parameter enabled, so the operation modifies the input tensor directly.

Syntax:

The syntax of PyTorch leaky relu inplace:

torch.nn.LeakyReLU(inplace=True)

Parameter:

The following is the parameter:

  • inplace = True means that the function will alter the input directly without allocating any additional output; the default value of the inplace parameter is False. A minimal sketch of this behavior follows below.
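Here is a minimal sketch of the in-place behavior (the tensor values are illustrative): with inplace=True, the returned tensor is the input tensor itself, which has been overwritten.

# Importing libraries
import torch
import torch.nn as nn

x = torch.tensor([1.0, -2.0])
m = nn.LeakyReLU(inplace=True)
y = m(x)

# x has been modified in place; y is the very same tensor object
print(x)        # tensor([ 1.0000, -0.0200])
print(y is x)   # True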

This is how the inplace parameter works in the PyTorch leaky relu function.

Read: PyTorch Model Summary

PyTorch leaky relu slope

In this section, we will learn how the PyTorch leaky relu slope works in Python.

Before moving forward, we should have a piece of knowledge about the slope. A slope is a surface where one side is higher than the other side.

The PyTorch leaky relu slope determines how the function behaves when the input is negative: instead of flattening to zero, the function keeps a small, non-zero gradient there.

Syntax:

The syntax of leaky relu slope:

torch.nn.LeakyReLU(negative_slope=0.01)

Parameter:

The following is the parameter of leaky relu:

  • negative_slope: It is used to control the angle of the negative slope. The default value of the negative_slope is 1e-2.

Code:

In the following code, we will first import the torch module and then import torch.nn as nn.

  • s = nn.LeakyReLU(0.4) is used to define the LeakyReLU() function; the argument 0.4 controls the negative slope.
  • input = torch.Tensor([2,-4,5,-6]) is used to create a tensor with an array.
  • output = s(input) Here we are declaring the output variable.
  • print(output) is used to print the output values with the help of the print() function.
# Importing libraries
import torch
import torch.nn as nn
 
# Defining Leaky relu and the parameter 0.4 is passed to control the negative slope 
s = nn.LeakyReLU(0.4)
 
# Creating a Tensor with an array
input = torch.Tensor([2,-4,5,-6])

# Declaring the output variable
output = s(input)

# Print the output
print(output)

Output:


After running the above code, we get the following output: positive inputs pass through unchanged, while negative inputs are scaled by 0.4, so the printed tensor should be tensor([ 2.0000, -1.6000,  5.0000, -2.4000]).

[Output image: PyTorch leaky relu slope]

So, with this, we understood how the PyTorch leaky relu slope works.

Read: PyTorch Logistic Regression

PyTorch leaky relu functional

In this section, we will learn about the PyTorch leaky relu functional in Python.

The PyTorch leaky relu functional is the functional form of the activation, torch.nn.functional.leaky_relu(), which is likewise used to solve the problem of dying neurons.

Like the module form, it is very helpful and useful: the derivative of the function is not zero if the input value is negative.

Syntax:

The syntax of the PyTorch leaky relu functional:

torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False)

Parameter:

The following are the parameters that are used within the leaky relu functional:

  • negative_slope: It is used to control the angle of the negative slope. The default value of the negative_slope is 1e-2.
  • inplace: It can optionally do the operation in-place. The default value of inplace is False. If the value of inplace is True, it will alter the input directly without assigning any additional output. A minimal usage sketch follows this list.
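Here is a minimal sketch of the functional form (the input values are illustrative):

# Importing libraries
import torch
import torch.nn.functional as F

input = torch.tensor([2.0, -4.0, 5.0, -6.0])

# Functional form: applied directly, no module object is created
output = F.leaky_relu(input, negative_slope=0.4)
print(output)   # tensor([ 2.0000, -1.6000,  5.0000, -2.4000])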

This is how the PyTorch leaky relu functional works.

Read: PyTorch Model Eval + Examples

PyTorch leaky relu vs relu

In this section, we will learn the difference between the PyTorch leaky relu and relu in Python.

PyTorch leaky relu:

The leaky relu function is very useful. In leaky relu, the derivative does not become zero if the input value is negative.

The leaky relu thereby solves the problem of dying neurons, and the neuron does not stop learning.

Example:

In the following code, we will first import all the necessary libraries, such as torch and torch.nn as nn.

  • re = nn.LeakyReLU(0.6): Here we are defining the LeakyReLU() function with a negative slope of 0.6.
  • input = torch.Tensor([2,-3,4,-6]) is used to create a tensor with an array.
  • output = re(input) is used to pass the array to the leaky relu function.
  • print(output) is used to print the output using the print() function.

# Importing libraries
import torch
import torch.nn as nn
 
# defining Leaky relu
re = nn.LeakyReLU(0.6)
 
# Creating a Tensor with an array
input = torch.Tensor([2,-3,4,-6])
 
# Passing the array to leaky relu function
output = re(input)

# Print the output
print(output)

Output:

After running the above example, we get the following output: negative inputs are scaled by 0.6, so the printed tensor should be tensor([ 2.0000, -1.8000,  4.0000, -3.6000]).

[Output image: PyTorch leaky relu]

PyTorch relu:

The relu function is a non-linear function that is differentiable everywhere except at zero. In relu, the derivative becomes zero if the inputs are negative, which causes the dying of neurons and stops them from learning.

Example:

In the following code, we will first import all the necessary libraries, such as torch and torch.nn as nn.

  • lr = nn.ReLU(): Here we are defining the ReLU() function.
  • input = torch.Tensor([2,-3,4,-6]) is used to create a tensor with an array.
  • output = lr(input) is used to pass the array to the relu function.
  • print(output) is used to print the output with the help of the print() function.
# Importing libraries
import torch
import torch.nn as nn
 
# defining relu
lr = nn.ReLU()
 
# Creating a Tensor with an array
input = torch.Tensor([2,-3,4,-6])
 
# Passing the array to relu function
output = lr(input)

# Print output
print(output)

Output:

In the below output, you can see that the negative inputs have been clamped to zero: the printed tensor should be tensor([2., 0., 4., 0.]).

[Output image: PyTorch relu]

So, with this, we understood the difference between the PyTorch leaky relu and relu functions.
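To see the dying-neuron difference directly, here is a minimal sketch (the tensor values are illustrative) comparing the gradients each activation passes back for negative inputs:

# Comparing the gradients of leaky relu and relu
import torch
import torch.nn as nn

x = torch.tensor([2.0, -3.0, 4.0, -6.0], requires_grad=True)
nn.LeakyReLU(0.6)(x).sum().backward()
print(x.grad)   # tensor([1.0000, 0.6000, 1.0000, 0.6000]) -- non-zero for negative inputs

y = torch.tensor([2.0, -3.0, 4.0, -6.0], requires_grad=True)
nn.ReLU()(y).sum().backward()
print(y.grad)   # tensor([1., 0., 1., 0.]) -- zero for negative inputs, so those neurons stop learning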

Also, take a look at some more PyTorch tutorials.

So, in this tutorial, we discussed the PyTorch Leaky ReLU and covered different examples related to its implementation. Here is the list of examples that we have covered.

  • PyTorch leaky relu
  • PyTorch leaky relu example
  • PyTorch leaky relu inplace
  • PyTorch leaky relu slope
  • PyTorch leaky relu functional
  • PyTorch leaky relu vs relu