In this Python tutorial, we will learn about the PyTorch fully connected layer and cover different examples related to it. We will cover these topics:
- PyTorch fully connected layer
- PyTorch fully connected layer initialization
- PyTorch fully connected layer input size
- PyTorch CNN fully connected layer
- PyTorch 2d fully connected layer
- PyTorch fully connected layer with 128 neurons
- PyTorch fully connected layer with dropout
- PyTorch fully connected layer ReLU
PyTorch fully connected layer
In this section, we will learn about the PyTorch fully connected layer in Python.
The linear layer is also called the fully connected layer. This layer helps convert the dimensionality of the output from the previous layer.
Code:
In the following code, we will import the torch module and use a linear layer to convert the dimensionality of the output from the previous layer.
- inp = torch.randn(15, 9) is used as the input value.
- weight = torch.randn(7, 9) generates the weight matrix.
- torch.mm(inp, weight.t()) performs the matrix multiplication.
- lay = nn.Linear(in_features=9, out_features=7, bias=False) creates a feed-forward layer.
- lay.weight = nn.Parameter(weight) sets the layer's weight.
import torch
import torch.nn as nn

# Input batch of 15 samples with 9 features each
inp = torch.randn(15, 9)
# Weight matrix: 7 output features, 9 input features
weight = torch.randn(7, 9)
# Manual matrix multiplication: (15, 9) x (9, 7) -> (15, 7)
torch.mm(inp, weight.t())
# Fully connected (linear) layer without a bias term
lay = nn.Linear(in_features=9, out_features=7, bias=False)
# Set the layer's weight to our own matrix
lay.weight = nn.Parameter(weight)
lay.weight
lay(inp)
Output:
After running the above code, we get the following output in which we can see that the PyTorch fully connected layer is shown on the screen.
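Since a bias-free linear layer computes exactly inp @ weight.t(), the layer output should match the manual matrix multiplication. A quick check, reusing the tensors from the code above:

manual = torch.mm(inp, weight.t())
print(torch.allclose(manual, lay(inp)))  # True
print(lay(inp).shape)                    # torch.Size([15, 7])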
Read: PyTorch Model Summary
PyTorch fully connected layer initialization
In this section, we will learn about how to initialize the PyTorch fully connected layer in Python.
The linear layer is used in the last stage of the neural network and is also called the fully connected layer.
Initializing the linear layer helps convert the dimensionality of the output from the previous layer, so the model can more easily capture the relationship between the values of the data.
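By default, PyTorch initializes the weights of nn.Linear with Kaiming-uniform values, but the torch.nn.init module lets us apply our own scheme. A minimal sketch of explicit initialization (the layer size here is arbitrary):

import torch.nn as nn

fc = nn.Linear(130, 12)
# Re-initialize the weights with Xavier-uniform values and zero the bias
nn.init.xavier_uniform_(fc.weight)
nn.init.zeros_(fc.bias)
print(fc.weight.mean())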
Code:
In the following code, we will import the torch module from which we can initialize the fully connected layer.
- nn.Conv2d() applies a 2d convolution over the input.
- nn.Dropout2d() helps promote independence between feature maps.
- self.fc = nn.Linear(9218, 130) is used as the first fully connected layer.
- self.fc1 = nn.Linear(130, 12) is used as the second fully connected layer.
- print(nnmodel) prints the model.
import torch
import torch.nn as nn
import torch.nn.functional as fun

class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        self.conv = nn.Conv2d(3, 34, 5, 3)
        self.conv1 = nn.Conv2d(34, 66, 5, 3)
        self.dropout = nn.Dropout2d(0.27)
        self.dropout1 = nn.Dropout2d(0.7)
        # First fully connected layer
        self.fc = nn.Linear(9218, 130)
        # Second fully connected layer that outputs our 12 labels
        self.fc1 = nn.Linear(130, 12)

nnmodel = model()
print(nnmodel)
Output:
In the following output, we can see that the fully connected layer is initializing successfully.
Read: PyTorch Dataloader + Examples
PyTorch fully connected layer input size
In this section, we will learn about the PyTorch fully connected layer input size in Python.
The fully connected layer multiplies the input by a weight matrix and adds a bias vector. The input to the network has the shape batch_size * channel_number * height * width.
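Because of this, the in_features of the first fully connected layer must equal channels * height * width of the flattened feature map. A simple way to avoid miscounting is to compute it from a dummy forward pass; a minimal single-conv sketch (the sizes are illustrative):

import torch
import torch.nn as nn

conv = nn.Conv2d(5, 18, 7)
dummy = torch.randn(1, 5, 38, 38)            # batch_size, channels, height, width
feat = conv(dummy)
in_features = feat.numel() // feat.shape[0]  # channels * height * width = 18432
fc = nn.Linear(in_features, 120)
print(in_features)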
Code:
In the following code, we will import the torch module from which we can get the input size of the fully connected layer.
- nn.Conv2d() expects its input to be of the shape batch_size, input_channels, input_height, input_width.
- nn.Linear() creates a feed-forward neural network layer.
- print(fc) prints the fully connected neural network.
import torch
import torch.nn as nn
import torch.nn.functional as fun

class fcmodel(nn.Module):
    def __init__(self):
        super(fcmodel, self).__init__()
        self.conv = nn.Conv2d(5, 8, 7)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv1 = nn.Conv2d(8, 18, 7)
        # conv1 outputs 18 channels, and a 38x38 input shrinks to 5x5 here,
        # so the first fully connected layer takes 18*5*5 input features
        self.fc = nn.Linear(18*5*5, 120)
        self.fc1 = nn.Linear(120, 86)
        self.fc2 = nn.Linear(86, 12)

    def forward(self, y):
        y = self.pool(fun.relu(self.conv(y)))
        y = self.pool(fun.relu(self.conv1(y)))
        y = y.view(-1, 18*5*5)
        y = fun.relu(self.fc(y))
        y = fun.relu(self.fc1(y))
        y = self.fc2(y)
        return y

fc = fcmodel()
print(fc)
Output:
After running the above code, we get the following output in which we can see that the fully connected layer input size is printed on the screen.
Read: PyTorch Model Eval + Examples
PyTorch CNN fully connected layer
In this section, we will learn about the PyTorch CNN fully connected layer in Python.
A CNN is the most popular method for solving computer vision tasks such as object detection; it looks for patterns in an image.
The linear layer is used in the last stage of the convolutional neural network and is also called the fully connected layer.
Code:
In the following code, we will import the torch module from which we can create a CNN fully connected layer.
- def forward(self, y): here y represents our data.
- y = self.conv(y) passes the data through conv.
- y = func.relu(y) applies the rectified linear activation function over y.
- y = func.max_pool2d(y, 2) runs max pooling over y.
- y = self.dropout(y) passes the data through dropout.
- y = torch.flatten(y, 1) flattens y with start_dim=1.
- y = self.fc(y) passes the data through fc.
- print(cnn) prints the CNN model.
import torch
import torch.nn as nn
import torch.nn.functional as func

class cnnfc(nn.Module):
    def __init__(self):
        super(cnnfc, self).__init__()
        self.conv = nn.Conv2d(3, 34, 5, 3)
        self.conv1 = nn.Conv2d(34, 66, 5, 3)
        self.dropout = nn.Dropout2d(0.30)
        self.dropout1 = nn.Dropout2d(0.10)
        self.fc = nn.Linear(9218, 130)
        self.fc1 = nn.Linear(130, 15)

    def forward(self, y):
        y = self.conv(y)
        y = func.relu(y)
        y = self.conv1(y)
        y = func.relu(y)
        y = func.max_pool2d(y, 2)
        y = self.dropout(y)
        # Flatten to a batch of feature vectors before the linear layers
        y = torch.flatten(y, 1)
        y = self.fc(y)
        y = func.relu(y)
        y = self.dropout1(y)
        y = self.fc1(y)
        return y

cnn = cnnfc()
print(cnn)
Output:
In the following output, we can see that the PyTorch CNN fully connected layer is printed on the screen.
Read: PyTorch MSELoss – Detailed Guide
PyTorch 2d fully connected layer
In this section, we will learn about the PyTorch 2d fully connected layer in Python.
The 2d fully connected layer helps change the dimensionality of the output of the preceding layer, so the model can easily define the relationship between the values of the data.
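To see this dimensionality change concretely, here is a minimal shape-trace sketch using the same sizes as the example below (the batch size of 2 is arbitrary):

import torch
import torch.nn as nn

y = torch.randn(2, 66, 22, 22)   # a batch of 2 conv feature maps
flat = torch.flatten(y, 1)       # -> shape (2, 66*22*22)
fc = nn.Linear(66*22*22, 49)
print(fc(flat).shape)            # torch.Size([2, 49])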
Code:
In the following code, we will import the torch module from which we can initialize the 2d fully connected layer.
- nn.Conv2d() performs a convolution over 2d data.
- nn.Linear() creates a feed-forward neural network layer.
- modl = model() initiates the model.
- print(modl) prints the model.
import torch
import torch.nn as nn
import torchvision
import torch.nn.functional as func
import torchvision.transforms as transforms

class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        self.conv = nn.Conv2d(in_channels=3, out_channels=34, kernel_size=7, stride=3)
        self.conv1 = nn.Conv2d(34, 66, 7, 2)
        self.fc = nn.Linear(66*22*22, 49)

    def forward(self, y):
        y = func.relu(self.conv(y))
        y = func.relu(self.conv1(y))
        y = func.max_pool2d(y, 1)
        # Flatten the feature maps before the fully connected layer
        y = torch.flatten(y, 1)
        y = self.fc(y)
        out = func.log_softmax(y, dim=1)
        return out

# initiating the model
modl = model()
print(modl)
Output:
After running the above code, we get the following output in which we can see that the PyTorch 2d fully connected layer is printed on the screen.
Read: PyTorch Batch Normalization
PyTorch fully connected layer with 128 neurons
In this section, we will learn about the PyTorch fully connected layer with 128 neurons in Python.
A fully connected layer is a layer in which all the inputs from one layer are connected to every activation unit of the next layer.
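Because every input connects to every output unit, a fully connected layer with in_features inputs and 128 neurons holds in_features * 128 weights plus 128 biases. A quick sketch to confirm the parameter count (the input size of 52 is arbitrary):

import torch.nn as nn

fc = nn.Linear(52, 128)
weights = fc.weight.numel()   # 52 * 128 = 6656
biases = fc.bias.numel()      # 128
print(weights + biases)       # 6784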
Code:
In the following code, we will import the torch module from which we can make a fully connected layer with 128 neurons.
- self.conv = nn.Conv2d(1, 35, (7, 7), padding=0) is a convolution with 35 feature maps of size 7 x 7, followed by ReLU activation.
- self.fc = nn.Linear(20 * 7**4, 128) is the fully connected layer with 128 neurons.
- z = self.conv(z) passes the data through conv.
- z = func.relu(z) applies the rectified linear activation function over z.
- z = func.max_pool2d(z, 2) runs max pooling over z.
- z = self.dropout(z) passes the data through dropout.
- z = torch.flatten(z, 1) flattens z with start_dim=1.
- cnn = cnnwithneurons() initiates the model.
import torch
import torch.nn.functional as func
from torch import nn

class cnnwithneurons(nn.Module):
    def __init__(self):
        super(cnnwithneurons, self).__init__()
        self.conv = nn.Conv2d(1, 35, (7, 7), padding=0)
        self.conv1 = nn.Conv2d(35, 20, (5, 5), padding=0)
        # Dropout layers used in the forward pass
        self.dropout = nn.Dropout(0.25)
        self.dropout1 = nn.Dropout(0.5)
        # Fully connected layer with 128 neurons
        self.fc = nn.Linear(20 * 7**4, 128)
        self.fc1 = nn.Linear(128, 52)
        self.fc2 = nn.Linear(52, 12)

    def forward(self, z):
        z = self.conv(z)
        z = func.relu(z)
        z = self.conv1(z)
        z = func.relu(z)
        z = func.max_pool2d(z, 2)
        z = self.dropout(z)
        z = torch.flatten(z, 1)
        z = self.fc(z)
        z = func.relu(z)
        z = self.dropout1(z)
        z = self.fc1(z)
        z = self.fc2(z)
        return z

cnn = cnnwithneurons()
print(cnn)
Output:
In the following output, we can see that the fully connected layer with 128 neurons is printed on the screen.
Read: PyTorch Load Model + Examples
PyTorch fully connected layer with dropout
In this section, we will learn about the PyTorch fully connected layer with dropout in Python.
The dropout technique randomly removes neurons from the network during training, which imitates training a large number of different architectures simultaneously.
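Note that dropout is only active in training mode; calling eval() on the module disables it. A minimal sketch of this behavior:

import torch
import torch.nn as nn

drop = nn.Dropout(0.5)
x = torch.ones(8)
drop.train()
print(drop(x))   # about half the entries are zeroed, the rest scaled by 2
drop.eval()
print(drop(x))   # identity: all ones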
Code:
In the following code, we will import the torch module from which we can get the fully connected layer with dropout.
- self.conv = nn.Conv2d(3, 34, 5) expects its input to be of the shape batch_size, input_channels, input_height, input_width.
- nn.Linear() creates the feed-forward neural network layers.
- self.dropout = nn.Dropout(0.27) defines the proportion of neurons to drop out.
- model = dropoutmodel() initiates the model.
- print(model) prints the model.
import torch
from torch import nn
import torch.nn.functional as func

class dropoutmodel(nn.Module):
    def __init__(self, input_shape=(3, 32, 32)):
        super(dropoutmodel, self).__init__()
        self.conv = nn.Conv2d(3, 34, 5)
        self.conv1 = nn.Conv2d(34, 66, 5)
        self.conv2 = nn.Conv2d(66, 130, 5)
        self.pool = nn.MaxPool2d(2, 2)
        # For 3x32x32 inputs the convolutional stack below ends in 130 features
        self.fc = nn.Linear(130, 218)
        self.fc1 = nn.Linear(218, 10)
        # Define the proportion of neurons to drop out
        self.dropout = nn.Dropout(0.27)

    def _forward_features(self, z):
        z = self.pool(func.relu(self.conv(z)))
        z = self.pool(func.relu(self.conv1(z)))
        z = func.relu(self.conv2(z))
        return z

    def forward(self, z):
        z = self._forward_features(z)
        z = z.view(z.size(0), -1)
        z = self.dropout(z)
        z = func.relu(self.fc(z))
        # Apply dropout
        z = self.dropout(z)
        z = self.fc1(z)
        return z

model = dropoutmodel()
print(model)
Output:
After running the above code, we get the following output in which we can see that the PyTorch fully connected layer with dropout is printed on the screen.
Read: PyTorch nn linear + Examples
PyTorch fully connected layer ReLU
In this section, we will learn about the PyTorch fully connected layer ReLU in Python.
Before moving forward we should have some knowledge about ReLU. ReLU stands for rectified linear unit, a non-linear activation function that is used in multi-layer neural networks.
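ReLU simply clamps negative values to zero, i.e. relu(x) = max(0, x). A minimal sketch using torch.nn.functional:

import torch
import torch.nn.functional as func

z = torch.tensor([-2.0, -0.5, 0.0, 1.5])
print(func.relu(z))   # tensor([0.0000, 0.0000, 0.0000, 1.5000])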
Code:
In the following code, we will import the torch module from which we can make a fully connected layer with ReLU.
- class fcrmodel(nn.Module) defines the network class.
- super().__init__() calls the constructor of the superclass.
- nn.Linear() creates a feed-forward neural network layer.
- func.relu(self.fc1(z)) is used in the forward pass.
- rmodl = fcrmodel() initiates the model.
- print(rmodl) prints the model architecture.
import torch
import torch.nn.functional as func
from torch import nn

class fcrmodel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(18, 14)
        self.fc1 = nn.Linear(14, 12)
        self.fc2 = nn.Linear(12, 3)

    def forward(self, z):
        # ReLU after each hidden layer, sigmoid on the output layer
        z = func.relu(self.fc(z))
        z = func.relu(self.fc1(z))
        z = torch.sigmoid(self.fc2(z))
        return z

rmodl = fcrmodel()
print(rmodl)
Output:
In the following output, we can see that the PyTorch fully connected layer ReLU activation is printed on the screen.
You may also like to read the following PyTorch tutorials.
- PyTorch nn Conv2d
- PyTorch Numpy to Tensor
- PyTorch Linear Regression
- PyTorch Save Model – Complete Guide
- PyTorch Activation Function [With 11 Examples]
- Adam optimizer PyTorch with Examples
- PyTorch Binary Cross Entropy
So, in this tutorial, we have discussed the PyTorch fully connected layer and we have also covered different examples related to its implementation. Here is the list of examples that we have covered.
- PyTorch fully connected layer
- PyTorch fully connected layer initialization
- PyTorch fully connected layer input size
- PyTorch CNN fully connected layer
- PyTorch 2d fully connected layer
- PyTorch fully connected layer with 128 neurons
- PyTorch fully connected layer with dropout
- PyTorch fully connected layer ReLU
I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last 5 years. During this time I have gained expertise in various Python libraries such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, Scikit-Learn, etc., for various clients in the United States, Canada, the United Kingdom, Australia, New Zealand, etc. Check out my profile.