The PyTorch Lenet is a simple convolutional neural network that we can train on grayscale images of 32 x 32 pixels, and it has learnable parameters. In this tutorial, we will discuss Lenet using PyTorch in Python.
Additionally, we will also cover different examples related to PyTorch Lenet. These are the topics we will cover.
- PyTorch Lenet
- PyTorch Lenet implementation
- PyTorch Lenet MNIST
- PyTorch Lenet cifar10
- PyTorch Lenet5 MNIST
PyTorch Lenet
In this section, we will learn about the PyTorch Lenet in Python.
Lenet is defined as a simple convolutional neural network. A convolutional neural network is a type of feed-forward neural network.
The Lenet model can train on grayscale images of size 32 x 32 pixels, and it has learnable parameters.
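For example, even a single convolutional layer already carries learnable weights and biases. Here is a minimal sketch (the layer sizes are chosen only for illustration):

import torch.nn as nn

# One conv layer: 6 kernels of size 5x5 over 1 input channel
conv = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
# Learnable parameters: 1*6*5*5 weights + 6 biases = 156
print(sum(p.numel() for p in conv.parameters()))  # 156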
Code:
In the following code, we will import all the necessary libraries: import torch, import torch.nn as nn, import torch.nn.functional as func, and from torchsummary import summary.
- class leNetModel(nn.Module): Here we define the model class with the init() and forward() methods.
- y = torch.randn(2,2,34,34): Here we create a random input tensor using the torch.randn() function.
- mdl = leNetModel(): Here we are initializing the model.
- summary(mdl,(2,34,34)) is used to get the summary of the model.
import torch
import torch.nn as nn
import torch.nn.functional as func
from torchsummary import summary

class leNetModel(nn.Module):
    def __init__(self):
        super(leNetModel, self).__init__()
        # Layer 1: eight kernels of size 5x5 with padding 0 and stride 1
        self.conv = nn.Conv2d(in_channels=2, out_channels=8, kernel_size=(5,5), padding=0, stride=1)
        # Layer 3: eighteen kernels of size 5x5 with padding 0 and stride 1
        self.conv1 = nn.Conv2d(in_channels=8, out_channels=18, kernel_size=(5,5), padding=0, stride=1)
        # Layer 5: 122 kernels of size 5x5; each output feature map is 1x1,
        # so the result can be flattened into a 122-dimensional vector
        self.conv2 = nn.Conv2d(in_channels=18, out_channels=122, kernel_size=(5,5), padding=0, stride=1)
        # Layer 6: eighty-six linear neurons with an input of 122
        self.L1 = nn.Linear(122, 86)
        # Layer 7: twelve linear neurons with an input of 86
        self.L2 = nn.Linear(86, 12)
        # Average pooling of size 2 and stride 2 is used in this architecture
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)
        # Tanh is used as the activation function at all layers excluding the output layer
        self.act = nn.Tanh()

    # The forward function produces the entire flow of the architecture
    def forward(self, y):
        y = self.conv(y)
        # Tanh activation after the first convolution
        y = self.act(y)
        # Pass the result through average pooling
        y = self.pool(y)
        # The next stage is convolution
        y = self.conv1(y)
        y = self.act(y)
        y = self.pool(y)
        # Pass through conv2; no pooling here, as per the architecture
        y = self.conv2(y)
        y = self.act(y)
        # Flatten the data and pass it through the fully connected layers
        y = y.view(y.size()[0], -1)
        y = self.L1(y)
        y = self.act(y)
        y = self.L2(y)
        return y

y = torch.randn(2, 2, 34, 34)
mdl = leNetModel()
summary(mdl, (2, 34, 34))
Output:
After running the above code, we get the following output in which we can see that the PyTorch Lenet model is printed on the screen.
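As a quick sanity check (assuming the code above has been run), you can also pass the random tensor y through the model and inspect the output shape:

out = mdl(y)
print(out.shape)  # torch.Size([2, 12]): a batch of 2 with 12 output neurons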
So, with this, we understood the PyTorch Lenet Model.
PyTorch Lenet implementation
In this section, we will learn how to implement the PyTorch Lenet with the help of an example.
PyTorch Lenet is a convolutional neural network structure proposed by Yann LeCun at Bell Labs in 1989. It was one of the earliest convolutional networks to apply the backpropagation algorithm to practical applications.
Code:
In the following code, we first import all the necessary libraries: import torch, import torch.nn as nn, and import torch.nn.functional as func.
- class lenet(nn.Module): Here we define the lenet model class with the init() and forward() methods.
- params = list(net.parameters()) is used to collect the model parameters so we can print how many parameter tensors there are.
- print(out) is used to print the output of the network.
import torch
import torch.nn as nn
import torch.nn.functional as func

class lenet(nn.Module):
    def __init__(self):
        super(lenet, self).__init__()
        # One input image channel, six output channels, 5x5 square convolution
        self.conv = nn.Conv2d(1, 6, 5)
        self.conv1 = nn.Conv2d(6, 16, 5)
        # Affine operation: y = Wx + b
        self.fc = nn.Linear(16 * 5 * 5, 120)  # 5*5 from image dimension
        self.fc1 = nn.Linear(120, 84)
        self.fc2 = nn.Linear(84, 10)

    def forward(self, y):
        # Max pooling over a (2, 2) window
        y = func.max_pool2d(func.relu(self.conv(y)), (2, 2))
        # If the size is a square, you can specify it with a single number
        y = func.max_pool2d(func.relu(self.conv1(y)), 2)
        y = torch.flatten(y, 1)
        y = func.relu(self.fc(y))
        y = func.relu(self.fc1(y))
        y = self.fc2(y)
        return y

net = lenet()
print(net)

# Print the length of the parameter list
params = list(net.parameters())
print(len(params))
print(params[0].size())

# Print the output
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
Output:
After running the above code, we get the following output in which you can see that the PyTorch Lenet implementation is done on the screen.
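Note that params[0].size() only shows the first parameter tensor (the weights of the first convolution). As a follow-up, you can also count the total number of learnable parameters by summing over net.parameters():

print(sum(p.numel() for p in net.parameters()))  # 61706 for this architecture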
So, with this, we understood the implementation of PyTorch Lenet.
PyTorch Lenet MNIST
In this section, we will learn about the PyTorch Lenet MNIST in Python.
Here we define our lenet model as a convolutional neural network and use it for classification of the MNIST dataset. The MNIST dataset holds a large number of grayscale images of handwritten digits.
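Each MNIST sample is a single-channel 28 x 28 grayscale image with an integer label from 0 to 9; the Resize((32,32)) transform in the code below rescales it to the 32 x 32 input that Lenet expects. A minimal sketch to inspect one raw sample:

import torchvision
import torchvision.transforms as transforms

dtset = torchvision.datasets.MNIST(root='./data', train=True,
                                   transform=transforms.ToTensor(), download=True)
img, lbl = dtset[0]
print(img.shape, lbl)  # torch.Size([1, 28, 28]) and the digit label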
Code:
In the following code, we first import all the necessary libraries: import torch, import torch.nn as nn, import torchvision, and import torchvision.transforms as transforms.
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') selects the GPU if one is available, otherwise the CPU.
- trainingdtset = torchvision.datasets.MNIST(root = './data', train = True, transform = transforms.Compose([transforms.Resize((32,32)), transforms.ToTensor(), transforms.Normalize(mean = (0.1307,), std = (0.3081,))]), download = True) is used to load the dataset.
- trainldr = torch.utils.data.DataLoader(dataset = trainingdtset, batch_size = batchsiz, shuffle = True) defines the data loader.
- class lenet(nn.Module): Here we define the lenet model class with the init() and forward() methods.
- l = nn.CrossEntropyLoss() defines the loss function.
- optimizer = torch.optim.Adam(mdl.parameters(), lr=lr) is used to initialize the optimizer.
- ttlstep = len(trainldr) stores the total number of steps per epoch.
- print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch+1, nepochs, x+1, ttlstep, loss.item())) is used to print the epoch and the loss.
# Importing libraries
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Define the hyperparameters
batchsiz = 52
nclasses = 10
lr = 0.001
nepochs = 5

# Describing the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Loading the dataset
trainingdtset = torchvision.datasets.MNIST(root='./data',
                                           train=True,
                                           transform=transforms.Compose([
                                               transforms.Resize((32, 32)),
                                               transforms.ToTensor(),
                                               transforms.Normalize(mean=(0.1307,), std=(0.3081,))]),
                                           download=True)
testingdtset = torchvision.datasets.MNIST(root='./data',
                                          train=False,
                                          transform=transforms.Compose([
                                              transforms.Resize((32, 32)),
                                              transforms.ToTensor(),
                                              transforms.Normalize(mean=(0.1325,), std=(0.3105,))]),
                                          download=True)

# Define the data loaders
trainldr = torch.utils.data.DataLoader(dataset=trainingdtset,
                                       batch_size=batchsiz,
                                       shuffle=True)
testldr = torch.utils.data.DataLoader(dataset=testingdtset,
                                      batch_size=batchsiz,
                                      shuffle=True)

# Defining the Lenet model
class lenet(nn.Module):
    def __init__(self, num_classes):
        super(lenet, self).__init__()
        self.layer = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, stride=1, padding=0),
            nn.BatchNorm2d(6),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer1 = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=0),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(400, 120)
        self.relu = nn.ReLU()
        self.fc1 = nn.Linear(120, 84)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(84, num_classes)

    def forward(self, y):
        outp = self.layer(y)
        outp = self.layer1(outp)
        outp = outp.reshape(outp.size(0), -1)
        outp = self.fc(outp)
        outp = self.relu(outp)
        outp = self.fc1(outp)
        outp = self.relu1(outp)
        outp = self.fc2(outp)
        return outp

mdl = lenet(nclasses).to(device)

# Defining the loss function
l = nn.CrossEntropyLoss()

# Initializing the optimizer
optimizer = torch.optim.Adam(mdl.parameters(), lr=lr)

# Store the total number of steps per epoch
ttlstep = len(trainldr)

for epoch in range(nepochs):
    for x, (imgs, lbls) in enumerate(trainldr):
        imgs = imgs.to(device)
        lbls = lbls.to(device)
        # Forward pass
        output = mdl(imgs)
        loss = l(output, lbls)
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (x + 1) % 400 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch + 1, nepochs, x + 1, ttlstep, loss.item()))
Output:
In the below output, you can see that the epoch and the loss values of the PyTorch Lenet MNIST are printed on the screen.
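The testldr defined above is not used in the training loop; as a follow-up, here is a minimal evaluation sketch (assuming the training code above has been run) to measure accuracy on the test set:

# Switch to evaluation mode and measure test accuracy without tracking gradients
mdl.eval()
correct = 0
total = 0
with torch.no_grad():
    for imgs, lbls in testldr:
        imgs = imgs.to(device)
        lbls = lbls.to(device)
        outputs = mdl(imgs)
        _, predicted = torch.max(outputs, 1)
        total += lbls.size(0)
        correct += (predicted == lbls).sum().item()
print('Test accuracy: {:.2f} %'.format(100 * correct / total))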
So, with this, we understood the PyTorch Lenet MNIST.
PyTorch Lenet cifar10
In this section, we will learn about the PyTorch Lenet cifar10 in Python.
Before moving forward, we should have some knowledge of CIFAR-10.
CIFAR stands for Canadian Institute for Advanced Research. CIFAR-10 is a collection of images that are regularly used to train machine learning and computer vision algorithms.
Here we describe the lenet model trained on the CIFAR-10 dataset.
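Unlike MNIST, CIFAR-10 images are colour images with three channels, which is why the first convolutional layer in the model below takes 3 input channels. A quick check (this downloads the dataset on first run):

from torchvision import datasets, transforms

dst = datasets.CIFAR10(root='./data', train=True, download=True,
                       transform=transforms.ToTensor())
img, lbl = dst[0]
print(img.shape)  # torch.Size([3, 32, 32])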
Code:
In the following code, we first import all the necessary libraries: import torch, import matplotlib.pyplot as plt, import numpy as np, import torch.nn.functional as func, etc.
- transform1=transforms.Compose([transforms.Resize((32,32)),transforms.ToTensor(),transforms.Normalize((0.7,),(0.7,))]) composes the image transforms.
- traindst=datasets.CIFAR10(root='./data',train=True,download=True,transform=transform1) is used to load the CIFAR-10 dataset.
- classes=('plane','car','bird','cat','deer','dog','frog','horse','ship','truck'): Here we declare the list of class names in the order used by CIFAR-10.
- class LeNet(nn.Module): Here we define the LeNet model class with the init() and forward() methods.
- criteron=nn.CrossEntropyLoss(): Here we define the loss function.
- optimizer=torch.optim.Adam(mdl.parameters(),lr=0.00001) is used to initialize the optimizer.
- print('training_loss:{:.4f},{:.4f}'.format(epoch_loss,epoch_acc.item())) is used to print the training loss and accuracy.
- print('validation_loss:{:.4f},{:.4f}'.format(valepochloss,valepochacc.item())) is used to print the validation loss and accuracy.
# Importing libraries
import torch
import matplotlib.pyplot as plt
import numpy as np
import torch.nn.functional as func
from torch import nn
from torchvision import datasets, transforms

# Using the device
d = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Using the compose method of transforms
transform1 = transforms.Compose([transforms.Resize((32, 32)),
                                 transforms.ToTensor(),
                                 transforms.Normalize((0.7,), (0.7,))])

# Loading the dataset
traindst = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform1)
validdst = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform1)
trainldr = torch.utils.data.DataLoader(dataset=traindst, batch_size=100, shuffle=True)
validldr = torch.utils.data.DataLoader(dataset=validdst, batch_size=100, shuffle=False)

# Define a function that converts a tensor back to a displayable image
def iconvert(tensor):
    img = tensor.cpu().clone().detach().numpy()
    img = img.transpose(1, 2, 0)
    # Undo the Normalize((0.7,), (0.7,)) transform: multiply by std, add mean
    img = img * np.array((0.7, 0.7, 0.7)) + np.array((0.7, 0.7, 0.7))
    img = img.clip(0, 1)
    return img

# Declare the list of classes (in the order used by CIFAR-10)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

diter = iter(trainldr)
imgs, lbls = next(diter)
figure = plt.figure(figsize=(25, 4))
for index in np.arange(10):
    axis = figure.add_subplot(2, 10, index + 1)
    plt.imshow(iconvert(imgs[index]))
    axis.set_title(classes[lbls[index].item()])
# Display the grid of sample images
plt.show()

# Define the model
class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 20, 5, 1)
        self.conv1 = nn.Conv2d(20, 50, 5, 1)
        self.fully = nn.Linear(5 * 5 * 50, 500)
        self.dropout = nn.Dropout(0.5)
        self.fully1 = nn.Linear(500, 10)

    def forward(self, y):
        y = func.relu(self.conv(y))
        y = func.max_pool2d(y, 2, 2)
        y = func.relu(self.conv1(y))
        y = func.max_pool2d(y, 2, 2)
        # Reshaping the output
        y = y.view(-1, 5 * 5 * 50)
        # Apply the relu activation function
        y = func.relu(self.fully(y))
        y = self.dropout(y)
        y = self.fully1(y)
        return y

mdl = LeNet().to(d)

# Define the loss
criteron = nn.CrossEntropyLoss()

# Initialize the optimizer
optimizer = torch.optim.Adam(mdl.parameters(), lr=0.00001)

# Specify the number of epochs
epochs = 10
losshistry = []
crrcthistry = []
vallosshistry = []
valcrrcthistry = []

# Train and validate the model
for x in range(epochs):
    loss = 0.0
    correct = 0.0
    valloss = 0.0
    valcrrct = 0.0
    for input, lbls in trainldr:
        input = input.to(d)
        lbls = lbls.to(d)
        outputs = mdl(input)
        loss1 = criteron(outputs, lbls)
        optimizer.zero_grad()
        loss1.backward()
        optimizer.step()
        _, preds = torch.max(outputs, 1)
        loss += loss1.item()
        correct += torch.sum(preds == lbls.data)
    # Validation pass, without tracking gradients
    with torch.no_grad():
        for valinp, vallbls in validldr:
            valinp = valinp.to(d)
            vallbls = vallbls.to(d)
            valoutps = mdl(valinp)
            valloss1 = criteron(valoutps, vallbls)
            _, val_preds = torch.max(valoutps, 1)
            valloss += valloss1.item()
            valcrrct += torch.sum(val_preds == vallbls.data)
    epoch_loss = loss / len(trainldr)
    # Average number of correct predictions per batch (batch size is 100)
    epoch_acc = correct.float() / len(trainldr)
    losshistry.append(epoch_loss)
    crrcthistry.append(epoch_acc)
    valepochloss = valloss / len(validldr)
    valepochacc = valcrrct.float() / len(validldr)
    vallosshistry.append(valepochloss)
    valcrrcthistry.append(valepochacc)
    print('training_loss:{:.4f},{:.4f}'.format(epoch_loss, epoch_acc.item()))
    print('validation_loss:{:.4f},{:.4f}'.format(valepochloss, valepochacc.item()))
Output:
After running the above code, we get the following output in which we can see that the training loss and validation loss values are printed on the screen.
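Because the code above collects losshistry and vallosshistry, you can also plot the two curves to compare training and validation loss over the epochs (assuming the training loop above has finished):

plt.plot(losshistry, label='training loss')
plt.plot(vallosshistry, label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()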
So, with this, we understood the PyTorch Lenet cifar10.
PyTorch Lenet5 MNIST
In this section, we will learn about the PyTorch Lenet5 in Python.
Lenet5 is an advanced convolutional neural network that was used for the recognition of handwritten characters.
Code:
In the following code, we first import all the necessary libraries: import torch, import torch.nn as nn, import torchvision, and import torchvision.transforms as transforms.
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') selects the GPU if one is available, otherwise the CPU.
- trainingdtset = torchvision.datasets.MNIST(root = './data', train = True, transform = transforms.Compose([transforms.Resize((32,32)), transforms.ToTensor(), transforms.Normalize(mean = (0.1307,), std = (0.3081,))]), download = True) is used to load the dataset.
- trainldr = torch.utils.data.DataLoader(dataset = trainingdtset, batch_size = batchsiz, shuffle = True) defines the data loader.
- class lenet5(nn.Module): Here we define the lenet5 model class with the init() and forward() methods.
- print(mdl) is used to print the model.
# Importing libraries
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Define the hyperparameters
batchsiz = 52
nclasses = 10
lr = 0.001
nepochs = 5

# Describing the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Loading the dataset
trainingdtset = torchvision.datasets.MNIST(root='./data',
                                           train=True,
                                           transform=transforms.Compose([
                                               transforms.Resize((32, 32)),
                                               transforms.ToTensor(),
                                               transforms.Normalize(mean=(0.1307,), std=(0.3081,))]),
                                           download=True)
testingdtset = torchvision.datasets.MNIST(root='./data',
                                          train=False,
                                          transform=transforms.Compose([
                                              transforms.Resize((32, 32)),
                                              transforms.ToTensor(),
                                              transforms.Normalize(mean=(0.1325,), std=(0.3105,))]),
                                          download=True)

# Define the data loaders
trainldr = torch.utils.data.DataLoader(dataset=trainingdtset,
                                       batch_size=batchsiz,
                                       shuffle=True)
testldr = torch.utils.data.DataLoader(dataset=testingdtset,
                                      batch_size=batchsiz,
                                      shuffle=True)

# Defining the Lenet5 model
class lenet5(nn.Module):
    def __init__(self, num_classes):
        super(lenet5, self).__init__()
        self.layer = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, stride=1, padding=0),
            nn.BatchNorm2d(6),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer1 = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=0),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(400, 120)
        self.relu = nn.ReLU()
        self.fc1 = nn.Linear(120, 84)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(84, num_classes)

    def forward(self, y):
        outp = self.layer(y)
        outp = self.layer1(outp)
        outp = outp.reshape(outp.size(0), -1)
        outp = self.fc(outp)
        outp = self.relu(outp)
        outp = self.fc1(outp)
        outp = self.relu1(outp)
        outp = self.fc2(outp)
        return outp

mdl = lenet5(nclasses).to(device)
print(mdl)
Output:
After running the above code, we get the following output in which we can see that the PyTorch Lenet5 MNIST model is printed on the screen.
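As a quick check that the printed architecture matches the expected Lenet input, you can pass a dummy 32 x 32 grayscale batch through the model (assuming the code above has been run):

dummy = torch.randn(1, 1, 32, 32).to(device)
print(mdl(dummy).shape)  # torch.Size([1, 10])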
So, in this tutorial, we discussed PyTorch Lenet and we have also covered different examples related to its implementation. Here is the list of examples that we have covered.
- PyTorch Lenet
- PyTorch Lenet implementation
- PyTorch Lenet MNIST
- PyTorch Lenet cifar10
- PyTorch Lenet5 MNIST