PyTorch MNIST Tutorial

In this Python tutorial, we will learn about PyTorch MNIST in Python, and we will also cover different examples related to PyTorch MNIST. We will cover these topics:

  • PyTorch mnist
  • PyTorch mnist example
  • PyTorch mnist classification
  • PyTorch mnist cnn
  • PyTorch mnist dataset
  • PyTorch mnist training
  • PyTorch mnist fashion
  • PyTorch mnist load
  • PyTorch mnist accuracy

PyTorch MNIST

In this section, we will learn how the PyTorch MNIST dataset works in Python.

MNIST stands for the Modified National Institute of Standards and Technology database, a large database of handwritten digits that is mostly used for training various image processing systems.

Syntax:

datasets.MNIST(root='./data', train=False, download=True, transform=None)

Parameters:

  • root is the directory where the dataset is stored (or will be downloaded to).
  • train=False The train parameter is set to False because we want the test set, not the training set.
  • download=True The download parameter is set to True because we want to download the dataset if it is not already present in root.
  • transform=None The transform parameter is set to None because no transformation is applied to the data (a minimal usage sketch follows this list).
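For illustration, here is a minimal sketch of this syntax. It loads the MNIST test split into ./data and converts each image to a tensor (the ToTensor transform is an extra assumption here, not part of the syntax above).

from torchvision import datasets
from torchvision.transforms import ToTensor

# download (if needed) and load the MNIST test split
testset = datasets.MNIST(root='./data', train=False, download=True, transform=ToTensor())
print(len(testset))         # 10000 test images
image, label = testset[0]
print(image.shape, label)   # torch.Size([1, 28, 28]) and the digit label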

Also, check: PyTorch Binary Cross-Entropy

PyTorch MNIST Example

In this section, we will learn how to load the PyTorch MNIST data with the help of an example.

MNIST is a large database that is mostly used for training various processing systems.

Code:

In the following code, we will import the datasets module from torchvision and load the MNIST database.

  • dts.MNIST(root = 'data', train = True, transform = ToTensor(), download = True) is used as the training dataset.
  • dts.MNIST(root = 'data', train = False, transform = ToTensor()) is used as the test dataset.
from torchvision import datasets as dts
from torchvision.transforms import ToTensor 
traindt = dts.MNIST(
    root = 'data',
    train = True,                         
    transform = ToTensor(), 
    download = True,            
)
testdt = dts.MNIST(
    root = 'data', 
    train = False, 
    transform = ToTensor()
)

Output:

After running the above code, we get the following output, in which we can see that the MNIST dataset has been downloaded.

PyTorch MNIST example

Read: PyTorch Logistic Regression

PyTorch MNIST Classification

In this section, we will learn about PyTorch MNIST classification in Python.

The MNIST database is generally used for training and testing models in the field of machine learning.

Code:

In the following code, we will import the torch library and build a simple MNIST classification model.

  • mnisttrainset = dts.MNIST(root='./data', train=True, download=True, transform=trnsform) is used as the MNIST training dataset.
  • trainldr = trch.utils.data.DataLoader(mnisttrainset, batch_size=10, shuffle=True) is used to load the data.
  • nn.Linear() is used as a feed-forward layer with the given numbers of inputs and outputs.
  • cmodel = classificationmodel() creates the classification model.
  • print(cmodel) prints the model.
import torch as trch
import torchvision.datasets as dts 
import torchvision.transforms as trnsfrms
import torch.nn as nn
import matplotlib.pyplot as plot

trnsform = trnsfrms.Compose([trnsfrms.ToTensor(), trnsfrms.Normalize((0.7,), (0.7,)),])

mnisttrainset = dts.MNIST(root='./data', train=True, download=True, transform=trnsform)
trainldr = trch.utils.data.DataLoader(mnisttrainset, batch_size=10, shuffle=True)

mnist_testset = dts.MNIST(root='./data', train=False, download=True, transform=trnsform)
testldr = trch.utils.data.DataLoader(mnist_testset, batch_size=10, shuffle=True)
class classificationmodel(nn.Module):
    def __init__(self):
        super(classificationmodel, self).__init__()
        self.linear1 = nn.Linear(28*28, 100) 
        self.linear2 = nn.Linear(100, 50) 
        self.final = nn.Linear(50, 10)
        self.relu = nn.ReLU()

    def forward(self, image):
        a = image.view(-1, 28*28)
        a = self.relu(self.linear1(a))
        a = self.relu(self.linear2(a))
        a = self.final(a)
        return a

cmodel = classificationmodel()
print(cmodel)

Output:

In the following output, we can see that the PyTorch MNIST classification model is printed on the screen.

PyTorch MNIST classification
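As a quick sanity check (a small sketch, reusing the trainldr and cmodel defined above), we can pull one batch from the loader and confirm that the model returns one score per digit class:

imgs, lbls = next(iter(trainldr))   # one batch of 10 images and labels
outp = cmodel(imgs)
print(outp.shape)                   # torch.Size([10, 10]): batch of 10, one score per class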

Read: Keras Vs PyTorch

PyTorch MNIST CNN

In this section, we will learn about the PyTorch MNIST CNN in Python.

CNN stands for convolutional neural network, a type of artificial neural network that is most commonly used in image recognition.

Code:

In the following code, we will import some torch modules and define a CNN for the MNIST data.

  • dts.MNIST() is used to load the dataset.
  • nn.Sequential() is used when we want to run several layers sequentially.
  • nn.MaxPool2d() applies max pooling over an input signal composed of several input planes.
  • outp = self.out(a) is used to get the output of the model.
  • CNN = cnn() creates the CNN model.
  • print(CNN) prints the CNN model.
from torchvision import datasets as dts
from torchvision.transforms import ToTensor
traindt = dts.MNIST(
    root = 'data',
    train = True,                         
    transform = ToTensor(), 
    download = True,            
)
testdt = dts.MNIST(
    root = 'data', 
    train = False, 
    transform = ToTensor()
)
import torch.nn as nn
class cnn(nn.Module):
    def __init__(self):
        super(cnn, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(
                in_channels=1,              # MNIST images are grayscale, so 1 input channel
                out_channels=18,
                kernel_size=7,
                stride=3,
                padding=4,
            ),                              # 28x28 -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=4),    # 10x10 -> 2x2
        )
        self.conv1 = nn.Sequential(
            nn.Conv2d(18, 34, 7, 3, 4),     # 2x2 -> 2x2
            nn.ReLU(),
            nn.MaxPool2d(2),                # 2x2 -> 1x1
        )

        self.out = nn.Linear(34 * 1 * 1, 10)  # 10 output classes, one per digit
    def forward(self, a):
        a = self.conv(a)
        a = self.conv1(a)
        a = a.view(a.size(0), -1)           # flatten to (batch, 34)
        outp = self.out(a)
        return outp, a
CNN = cnn()
print(CNN)

Output:

After running the above code, we get the following output, in which we can see that the PyTorch MNIST CNN model is printed on the screen.

PyTorch MNIST CNN
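As a small shape check (a sketch, reusing the CNN instance defined above), we can pass a dummy grayscale batch through the network and confirm that the flattened feature size matches the Linear layer:

import torch
dummy = torch.zeros(1, 1, 28, 28)    # one fake MNIST-sized image
outp, features = CNN(dummy)
print(outp.shape, features.shape)    # torch.Size([1, 10]) torch.Size([1, 34])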

Read: PyTorch MSELoss – Detailed Guide

PyTorch MNIST Dataset

In this section, we will learn how the PyTorch MNIST dataset works in Python.

The MNIST dataset is known as the Modified National Institute of Standards and Technology dataset. It is mainly used for benchmarking handwritten-digit image classification with deep learning models.

Syntax:

The following is the syntax of the MNIST dataset:

torchvision.datasets.MNIST(root: str, train: bool = True, transform = None, target_transform = None, download: bool = False)

Parameters:

  • root is the directory where the MNIST dataset exists (or will be downloaded to).
  • train: if True, it creates the dataset from the training split, otherwise from the test split.
  • download: if True, it downloads the dataset from the internet and puts it in the root directory.
  • transform: a function that takes in a PIL image and returns a transformed version.
  • target_transform: a function that takes in the target and transforms it (a short usage sketch follows this list).
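As a short sketch of the full signature, the example below loads the training split and maps each integer label to a one-hot vector; the one-hot target_transform is only an illustrative assumption, not part of the original example.

import torch
from torchvision import datasets, transforms

trainset = datasets.MNIST(
    root='./data',
    train=True,
    transform=transforms.ToTensor(),
    # illustrative target_transform: turn the integer label into a one-hot vector
    target_transform=lambda y: torch.nn.functional.one_hot(torch.tensor(y), num_classes=10).float(),
    download=True,
)
img, target = trainset[0]
print(img.shape, target)    # torch.Size([1, 28, 28]) and a one-hot vector of length 10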

Read: PyTorch Tensor to Numpy

PyTorch MNIST Training

In this section, we will learn how to train a model on the PyTorch MNIST dataset in Python.

The MNIST dataset is used to train the model with training data and evaluate the model with test data.

Code:

In the following code, we will import the torch module and train the model on the training data.

  • trainds = torchvision.datasets.MNIST(root='./data', train=True, transform=trans.ToTensor(), download=True) is used to import the MNIST training dataset.
  • trainldr = torch.utils.data.DataLoader(dataset=trainds, batch_size=bachsiz, shuffle=True) is used to load the data with the help of a DataLoader.
  • nn.Linear() is used to create the feed-forward neural network with the given numbers of inputs and outputs.
  • optim = torch.optim.Adam(modl.parameters(), lr=l_r) is used to initialize the optimizer.
  • losses = criter(outp, lbls) is used to compute the loss.
  • print(f'Epochs [{epoch+1}/{numepchs}], Step[{x+1}/{nttlstps}], Losses: {losses.item():.4f}') is used to print the epoch and loss on the screen.
import torch
import torch.nn as nn 
import torchvision
import torchvision.transforms as trans
import matplotlib.pyplot as plot 
inpsiz = 784 
hidensiz = 500 
numclases = 10
numepchs = 4
bachsiz = 100
l_r = 0.001 

trainds = torchvision.datasets.MNIST(root='./data', 
                                          train=True, 
                                          transform=trans.ToTensor(),  
                                          download=True)
testds = torchvision.datasets.MNIST(root='./data', 
                                           train=False, 
                                           transform=trans.ToTensor()) 

trainldr = torch.utils.data.DataLoader(dataset=trainds, 
                                           batch_size=bachsiz, 
                                           shuffle=True)
testldr = torch.utils.data.DataLoader(dataset=testds, 
                                           batch_size=bachsiz, 
                                           shuffle=False)

class neural_network(nn.Module):
    def __init__(self, inpsiz, hidensiz, numclases):
         super(neural_network, self).__init__()
         self.inputsiz = inpsiz
         self.l1 = nn.Linear(inpsiz, hidensiz) 
         self.relu = nn.ReLU()
         self.l2 = nn.Linear(hidensiz, numclases) 
    def forward(self, y):
         outp = self.l1(y)
         outp = self.relu(outp)
         outp = self.l2(outp)

         return outp
modl = neural_network(inpsiz, hidensiz, numclases)

criter = nn.CrossEntropyLoss()
optim = torch.optim.Adam(modl.parameters(), lr=l_r)
nttlstps = len(trainldr)
for epoch in range(numepchs):
    for x, (imgs, lbls) in enumerate(trainldr): 
         imgs = imgs.reshape(-1, 28*28)
         labls = lbls

         outp = modl(imgs)
         losses = criter(outp, lbls)

         optim.zero_grad()
         losses.backward()
         optim.step() 
         if (x+1) % 100 == 0:
             print (f'Epochs [{epoch+1}/{numepchs}], Step[{x+1}/{nttlstps}], Losses: {losses.item():.4f}')

Output:

After running the above code, we get the following output, in which we can see that the epochs and losses are printed on the screen.

PyTorch MNIST training
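As a follow-up sketch (not part of the output above), the testldr defined earlier can be used to estimate how well the trained model generalises:

modl.eval()
with torch.no_grad():
    corct = 0
    ttl = 0
    for imgs, lbls in testldr:
        imgs = imgs.reshape(-1, 28*28)   # flatten the images, as in training
        outp = modl(imgs)
        _, pred = torch.max(outp, 1)     # predicted class per image
        ttl += lbls.size(0)
        corct += (pred == lbls).sum().item()
print(f'Test accuracy: {100 * corct / ttl:.2f}%')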

Read: PyTorch Batch Normalization

PyTorch MNIST Fashion

In this section, we will learn about PyTorch Fashion MNIST in Python.

The Fashion MNIST dataset is used in computer vision and also to evaluate deep neural networks for classification.

Syntax:

The following is the syntax of Fashion MNIST, which torchvision already provides as a built-in dataset:

torchvision.datasets.FashionMNIST(root: str, train: bool = True, transform = None, target_transform = None, download: bool = False)

Parameters:

  • root The root directory where the FashionMNIST dataset is stored (or will be downloaded to).
  • train If it is True, it creates the dataset from the training split, otherwise from the test split.
  • transform A function that takes in a PIL image and returns a transformed version.
  • target_transform A function that takes in the target and transforms it.
  • download If it is True, it downloads the dataset and puts it in the root directory (a minimal usage sketch follows this list).
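Here is a minimal usage sketch of the FashionMNIST syntax above (the ToTensor transform and the printed sample are illustrative assumptions):

from torchvision import datasets
from torchvision.transforms import ToTensor

# download (if needed) and load the Fashion MNIST training split
fashiondt = datasets.FashionMNIST(root='./data', train=True, transform=ToTensor(), download=True)
print(len(fashiondt))                      # 60000 training images
img, lbl = fashiondt[0]
print(img.shape, fashiondt.classes[lbl])   # torch.Size([1, 28, 28]) and the class name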

Read: PyTorch Load Model + Examples

PyTorch MNIST Load

In this section, we will learn how to load the MNIST dataset in Python.

Here we load the MNIST dataset from torchvision. The MNIST dataset is used to train the model with training data and evaluate the model with test data.

Code:

In the following code, we will import the torch module and load the MNIST dataset.

  • dtsets.MNIST(root='./data', train=True, transform=trans.ToTensor(), download=True) is used to initialize the training dataset.
  • testdt = dtsets.MNIST(root='./data', train=False, transform=trans.ToTensor(), download=True) is used to initialize the test dataset.
import torch
import torch.nn as nn 
import torchvision
import torchvision.transforms as trans
from torchvision import datasets as dtsets

traindt = dtsets.MNIST(root='./data', 
                            train=True, 
                            transform=trans.ToTensor(),
                            download=True)

testdt = dtsets.MNIST(root='./data', 
                           train=False, 
                           transform=trans.ToTensor(),download=True)

Output:

After running the above code, we get the following output in which we can see that the MNIST dataset is loaded on the screen.

PyTorch MNIST load
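Once loaded, the dataset objects can be indexed directly or wrapped in a DataLoader. Here is a small follow-up sketch reusing the traindt and testdt defined above:

print(len(traindt), len(testdt))    # 60000 10000
img, lbl = traindt[0]
print(img.shape, lbl)               # torch.Size([1, 28, 28]) and the digit label

loader = torch.utils.data.DataLoader(traindt, batch_size=64, shuffle=True)
imgs, lbls = next(iter(loader))
print(imgs.shape)                   # torch.Size([64, 1, 28, 28])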

Read: PyTorch nn linear + Examples

PyTorch MNIST Accuracy

In this section, we will learn about PyTorch MNIST accuracy in Python.

MNIST is a large dataset that is used to train and test a model and to measure the model's accuracy.

Code:

In the following code, we will import the torch module, train a CNN on the Fashion MNIST data, and calculate the accuracy of the model.

  • datasets.FashionMNIST() is used to load the dataset.
  • nn.Sequential() is used when we want to run several layers sequentially.
  • nn.MaxPool2d() applies max pooling over an input signal composed of several input planes.
  • optim = optim.Adam(Cnn.parameters(), lr = 0.01) is used to initialize the optimizer.
  • ax = var(imgs) is used to wrap the batch data.
  • losses.backward() is used for backpropagation.
  • predicy = torch.max(testoutp, 1)[1].data.squeeze() is used to predict the class labels.
  • accu = (predicy == lbls).sum().item() / float(lbls.size(0)) is used to calculate the accuracy of a batch.
  • print(' Accuracy of the model %.2f' % accu) is used to print the accuracy of the model.
import torch
from torchvision import datasets
from torchvision.transforms import ToTensor
traindt = datasets.FashionMNIST(
    root = 'data',
    train = True,                         
    transform = ToTensor(), 
    download = True,            
)
testdt = datasets.FashionMNIST(
    root = 'data', 
    train = False, 
    transform = ToTensor()
)
from torch.utils.data import DataLoader 
ldrs = {
    'train' : torch.utils.data.DataLoader(traindt, 
                                          batch_size=80, 
                                          shuffle=True, 
                                          num_workers=1),
    
    'test'  : torch.utils.data.DataLoader(testdt, 
                                          batch_size=80, 
                                          shuffle=True, 
                                          num_workers=1),
}
import torch.nn as nn
class cnn(nn.Module):
    def __init__(self):
        super(cnn, self).__init__()
        self.conv = nn.Sequential(         
            nn.Conv2d(
                in_channels=1,              
                out_channels=16,            
                kernel_size=5,              
                stride=1,                   
                padding=2,                  
            ),                              
            nn.ReLU(),                      
            nn.MaxPool2d(kernel_size=2),    
        )
        self.conv1 = nn.Sequential(         
            nn.Conv2d(16, 32, 5, 1, 2),     
            nn.ReLU(),                      
            nn.MaxPool2d(2),                
        )
        self.out = nn.Linear(32 * 7 * 7, 10)
    def forward(self, y):
        y = self.conv(y)
        y = self.conv1(y)
        y = y.view(y.size(0), -1)       
        outp = self.out(y)
        return outp, y   
Cnn = cnn()
lossfunct = nn.CrossEntropyLoss()   
from torch import optim
optim = optim.Adam(Cnn.parameters(), lr = 0.01)   
from torch.autograd import Variable as var
numepch = 3
def train(numepchs, Cnn, ldrs):
    
    Cnn.train()
        
    # Train the model
    ttlstp = len(ldrs['train'])
        
    for epoch in range(numepchs):
        for a, (imgs, lbls) in enumerate(ldrs['train']):
            ax = var(imgs)   
            ay = var(lbls)   
            outp = Cnn(ax)[0]               
            losses = lossfunct(outp, ay)
            
              
            optim.zero_grad()           
            losses.backward()    
                      
            optim.step()                
            
            if (a+1) % 100 == 0:
                print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' 
                       .format(epoch + 1, numepchs, a + 1, ttlstp, losses.item()))

def test():
    # Test the model
    Cnn.eval()
    with torch.no_grad():
        for imgs, lbls in ldrs['test']:
            testoutp, lstlayr = Cnn(imgs)
            predicy = torch.max(testoutp, 1)[1].data.squeeze()
            accu = (predicy == lbls).sum().item() / float(lbls.size(0))
            print(' Accuracy of the model  %.2f' % accu)
train(numepch, Cnn, ldrs)
test()

Output:

In the following output, we can see that the accuracy of the model is printed on the screen.

PyTorch MNIST accuracy

Also, take a look at some more PyTorch tutorials.

So, in this tutorial, we discussed PyTorch MNIST, and we have also covered different examples related to its implementation. Here is the list of examples that we have covered.

  • PyTorch mnist
  • PyTorch mnist example
  • PyTorch mnist classification
  • PyTorch mnist cnn
  • PyTorch mnist dataset
  • PyTorch mnist training
  • PyTorch mnist fashion
  • PyTorch mnist load
  • PyTorch mnist accuracy