In this Python tutorial, we will focus on how to build a TensorFlow fully connected layer in Python. We will also look at some examples of how to get the output of the previous layer in TensorFlow. And we will cover these topics.
- TensorFlow Fully Connected Layer
- TensorFlow fully connected layer vs convolutional layer
- TensorFlow CNN fully connected layer
- sparse fully connected layer TensorFlow
- TensorFlow list of layers
- TensorFlow dense layer example
- TensorFlow get layer by name
- TensorFlow remove layers
- TensorFlow get layers weights
TensorFlow Fully Connected Layer
- A neural network is made up of a group of interdependent non-linear functions, and a neuron (or perceptron) is the basic unit of each such function.
- The neurons in a fully connected layer transform the input vector linearly using a weights matrix, and the product is then passed through a non-linear activation function f (see the sketch below).
- In other words, the activation function f wraps the dot product between the layer’s input and its weights matrix. While the model is trained, the columns of the weights matrix take on various values and are optimized.
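To make that concrete, here is a minimal sketch of what a single fully connected unit computes; the input, weight, and bias values below are made up purely for illustration.
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0]])       # input row vector (1 x 3)
W = tf.constant([[0.5], [-0.2], [0.1]])  # weights matrix (3 inputs -> 1 unit)
b = tf.constant([0.05])                  # bias
y = tf.nn.relu(tf.matmul(x, W) + b)      # f(xW + b) with f = ReLU
print(y.numpy())                         # [[0.45]]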
Example:
Let’s take an example and check how we can create a fully connected layer.
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
(new_train_images, new_train_labels), (new_test_images, new_test_labels) = datasets.cifar10.load_data()
new_train_images, new_test_images = new_train_images / 255.0, new_test_images / 255.0
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(new_train_images[i])
    plt.xlabel(class_names[new_train_labels[i][0]])
plt.show()
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(new_train_images, new_train_labels, epochs=10,
                    validation_data=(new_test_images, new_test_labels))
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(new_test_images, new_test_labels, verbose=2)
In the above code, we imported the TensorFlow and Matplotlib libraries and then loaded the CIFAR-10 dataset with datasets.cifar10.load_data(), which already comes split into train and test parts; we then scaled the pixel values into the range [0, 1].
After that, we created a Sequential model, stacked Conv2D and MaxPooling2D layers starting from the input image shape (32, 32, 3), flattened the result into fully connected Dense layers, and then used model.compile() with the 'adam' optimizer.
This is how we find the loss and accuracy of a model with fully connected layers by using TensorFlow.
Read: Tensorflow custom loss function
TensorFlow fully connected layer vs convolutional layer
- In this section, we will learn what a dense (fully connected) layer is and how it differs from a convolutional layer.
- In a model, each neuron in the preceding layer sends its output to the neurons in the dense layer, which performs matrix-vector multiplication: the output of the previous layer is treated as a row vector and is multiplied by the dense layer’s weights matrix.
- For that multiplication to be defined, the number of columns in the row vector must equal the number of rows in the weights matrix.
- A neuron in a fully connected layer is connected to every neuron in the layer before it and can change if any of those neurons change. However, within the confines of the convolutional kernel, a neuron in a convolutional layer is only connected to “nearby” neurons from the layer that came before, as the sketch below makes concrete.
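To make the connectivity difference concrete, here is a small sketch (the layer sizes are chosen arbitrarily) comparing how many weights a convolutional layer and a dense layer need for the same 32x32x3 input:
import tensorflow as tf

conv = tf.keras.layers.Conv2D(8, (3, 3))  # each output sees only a 3x3 patch
dense = tf.keras.layers.Dense(8)          # each output sees every input

conv.build((None, 32, 32, 3))
dense.build((None, 32 * 32 * 3))

print(conv.count_params())   # 3*3*3*8 + 8 = 224
print(dense.count_params())  # 3072*8 + 8 = 24584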
Example:
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Flatten, Conv2D
np.random.seed(44)
tf.random.set_seed(44)
new_arr = np.random.rand(4, 32, 32, 3)
input_shape = Input(shape=new_arr.shape[1:])
convolution_layer = Conv2D(filters=8, kernel_size=(3, 3), activation='relu')(input_shape)
flatten = Flatten()(convolution_layer)
feature_map = Dense(8, activation='relu')(flatten)        # fully connected feature layer
new_output = Dense(2, activation='softmax')(feature_map)  # classification head
result = Model(inputs=input_shape, outputs=new_output)
print(result(new_arr))
In the above code, we imported the NumPy and TensorFlow libraries and used the tf.random.set_seed() function. Operations derive their random seed from a global seed and an operation-level seed.
When the global seed is set but the operation seed is not, the system deterministically picks an operation seed in combination with the global seed so that it gets a unique random sequence.
Next, we used a Conv2D() layer with eight filters and a (3, 3) kernel_size, flattened its output, and passed it through a Dense layer with 8 units and the 'relu' activation function, followed by a final Dense layer with the 'softmax' activation.
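As a quick illustration of that seeding behaviour, resetting the global seed replays the same “random” sequence; this is a minimal standalone sketch, not part of the example above.
import tensorflow as tf

tf.random.set_seed(44)         # global seed only; op seeds are picked deterministically
a = tf.random.uniform([1])
tf.random.set_seed(44)         # resetting the global seed replays the sequence
b = tf.random.uniform([1])
print(a.numpy() == b.numpy())  # [ True]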
In this example, we have learned the difference between the fully connected layer and the convolutional layer.
Read: TensorFlow next_batch + Examples
TensorFlow CNN fully connected layer
- Convolutional Neural Networks (CNNs) are a subset of deep neural networks used to evaluate visual data in computer vision applications. They are utilized in programs for natural language processing, video or picture identification, and so on.
- A CNN differs from other neural networks primarily in that its input is a two-dimensional grid such as an image, whereas other networks take a flat n-dimensional vector as input.
- The convolutional layer is the most important part of the model. Its primary goals are to improve generalization and to shrink the spatial size of the image, so the network needs fewer weights and less computation, as the sketch below shows.
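To see that shrinking in action, here is a small sketch with an arbitrary random input showing how one convolution plus pooling step reduces the spatial size:
import tensorflow as tf

x = tf.random.normal((1, 32, 32, 3))
conv_out = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
pooled = tf.keras.layers.MaxPooling2D((2, 2))(conv_out)
print(conv_out.shape)  # (1, 30, 30, 32): a 3x3 'valid' convolution trims the border
print(pooled.shape)    # (1, 15, 15, 32): 2x2 pooling halves each spatial dimension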
Example:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
# Flatten the feature maps and classify them with fully connected (Dense) layers
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
model.summary()
This is how we attach fully connected layers on top of a convolutional neural network in TensorFlow.
Read: TensorFlow global average pooling
sparse fully connected layer TensorFlow
- In essence, sparsely connected layers are randomly initialized in the network, and training proceeds with backpropagation and other common deep learning optimization methods.
- The “weakest” connections are eliminated after each epoch, and their places are taken by newly created connections with random initialization.
- To represent the sparse weights, we are going to use the tf.sparse.SparseTensor() function. TensorFlow represents a sparse tensor with three separate dense tensors: indices, values, and dense_shape.
- For ease of use, the three tensors are combined into a SparseTensor class in Python. If you have separate indices, values, and dense_shape tensors, wrap them in a SparseTensor object before passing them to the operations below.
Syntax:
Let’s have a look at the Syntax and understand the working of tf.sparse.SparseTensor() function.
tf.sparse.SparseTensor(
    indices,
    values,
    dense_shape
)
- It consists of a few parameters:
- indices: A 2-dimensional int64 tensor of shape [N, ndims], giving the positions of the non-zero values.
- values: A 1-dimensional tensor of any type and shape [N], giving the value stored at each index.
- dense_shape: A 1-dimensional int64 tensor of shape [ndims], specifying the shape of the dense tensor.
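Here is a minimal sketch of those parameters in use, with made-up indices and values:
import tensorflow as tf

st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],  # positions of the non-zeros
                            values=[10, 20],           # one value per index
                            dense_shape=[3, 4])        # shape of the dense tensor
print(tf.sparse.to_dense(st))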
Example:
import tensorflow.compat.v1 as tf

# This example uses TF1-style placeholders and sessions, so run it through
# the compat.v1 API with eager execution disabled
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, shape=[4, 2], name="x")
y = tf.placeholder(tf.float32, shape=[4, 1], name="y")
hidden_s = 2
l_r = 1

# Sparse weight matrix for the first layer: only three non-zero entries
theta1 = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 1]], values=[0.1, 0.2, 0.1], dense_shape=[3, 2])
theta2 = tf.cast(tf.Variable(tf.random_normal([hidden_s + 1, 1]), name="theta2"), tf.float32)

# Prepend a bias column of ones, then multiply by the densified sparse weights
a1 = tf.concat([tf.ones([4, 1]), x], 1)
z1 = tf.matmul(a1, tf.sparse_tensor_to_dense(theta1))
a2 = tf.concat([tf.ones([4, 1]), tf.sigmoid(z1)], 1)
z3 = tf.matmul(a2, theta2)
h3 = tf.sigmoid(z3)

# Binary cross-entropy loss and plain gradient descent; note that theta1 is a
# constant SparseTensor here, so only theta2 is actually trained
cost_func = -tf.reduce_sum(y * tf.log(h3) + (1 - y) * tf.log(1 - h3), axis=1)
optimiser = tf.train.GradientDescentOptimizer(learning_rate=l_r).minimize(cost_func)

# XOR training data
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
Y = [[0], [1], [1], [0]]

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(200):
    sess.run(optimiser, feed_dict={x: X, y: Y})
    if i % 100 == 0:
        print("Epoch:", i)
print(sess.run(theta1))
In the above code, we created the input tensors with the tf.placeholder() function, giving it the tf.float32 data type along with a shape. Next, we built a sparse weight matrix with tf.SparseTensor(), passing the indices, values, and dense_shape, densified it with tf.sparse_tensor_to_dense() for the matrix multiplication, and trained the remaining weights with gradient descent. Note that this is TF1-style code, so it runs through tf.compat.v1 with eager execution disabled.
This is how we can use the sparse tensor in a fully connected layer by using TensorFlow.
Read: Binary Cross Entropy TensorFlow
TensorFlow list of layers
- Here we will discuss the list of layers by using TensorFlow.
- Through its Keras Layers API, Keras offers a wide variety of pre-built layers for various neural network topologies and uses. These readily available layers are typically suitable for building the majority of deep learning models with a great deal of flexibility, making them highly helpful.
- Now let’s discuss some popular Keras layers.
- Dense layer: The Dense layer is a popular Keras layer for building a densely connected layer in the neural network, in which each neuron receives input from all neurons of the previous layer.
- Flatten layer: Flatten collapses the input into one dimension. For instance, if flatten is applied to a layer with an input shape of (batch_size, 2, 2), the layer’s output shape will be (batch_size, 4).
- Dropout layer: Dropout is one of the key ideas in machine learning and is applied to address the overfitting problem. Input data may contain some undesirable information, commonly referred to as noise; dropout helps the model avoid fitting that noise and over-fitting the data.
- Reshape layer: The Reshape layer alters the shape of the input. For instance, if the layer’s input shape is (batch_size, 3, 2) and Reshape((2, 3)) is applied, the layer’s output shape will be (batch_size, 2, 3).
- Lambda layer: The Lambda layer transforms the input data with an arbitrary expression or function (see the sketch below).
- Pooling layer: The main objective of the pooling layer is to gradually lower the spatial size of the input image, which decreases the number of computations required by the network (see the sketch below).
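The Dense, Flatten, Dropout, and Reshape layers each get their own example below. For the last two layers in the list, here is a minimal sketch with arbitrarily chosen values and shapes:
import tensorflow as tf

# Lambda layer: wraps an arbitrary function as a layer
doubler = tf.keras.layers.Lambda(lambda t: t * 2)
print(doubler(tf.constant([1.0, 2.0])))  # [2. 4.]

# Pooling layer: halves the spatial dimensions of a feature map
pool = tf.keras.layers.MaxPooling2D((2, 2))
print(pool(tf.random.normal((1, 8, 8, 4))).shape)  # (1, 4, 4, 4)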
Dense Layer Example
It performs classification on the feature extracted by the convolutional layers.
Source Code:
from keras.models import Sequential
from keras.layers import Activation, Dense
new_model_var = Sequential()
new_layer = Dense(20, input_shape = (5,))
new_model_var.add(new_layer)
print(new_layer.input_shape)
print(new_layer.output_shape)
In the above code, we created a Sequential() model and defined a Dense layer with 20 units and an input shape of (5,). Next, we added the layer to the model and printed the dense layer’s input and output shapes.
Flatten Layer
We flatten the output of the convolutional layers to declare a single long feature vector.
Source Code:
from keras.layers import Flatten
from keras.models import Sequential
from keras.layers import Activation, Dense
model = Sequential()
layer_1 = Dense(8, input_shape=(8,8))
model.add(layer_1)
layer_2 = Flatten()
model.add(layer_2)
print(layer_2.input_shape)
print(layer_2.output_shape)
In the above code, we imported the Flatten layer and the Sequential model. Next, we created the sequential model and added the first dense layer. After that, we added a Flatten() layer as layer_2 and printed its input and output shapes.
Dropout layer
Dropout is a training method in which some neurons are ignored at random.
Source Code:
import keras
result = keras.layers.Dropout(0.5, noise_shape=None, seed=None)
print(result)
In the above code, we imported the Keras library and then created a dropout layer with the keras.layers.Dropout() function, passing a rate of 0.5 together with the noise_shape and seed parameters.
Reshape layer
If a reshape layer has a parameter (4,5) and it is applied to a layer having input shape as (batch_size,5,4), then the resulting shape of the layer changes to (batch_size,4,5).
from keras.models import Sequential
from keras.layers import Activation, Dense, Reshape
model = Sequential()
layer_1 = Dense(36, input_shape = (6,6))
model.add(layer_1)
layer_2 = Reshape((36, 6))
model.add(layer_2)
print(layer_2.input_shape)
print(layer_2.output_shape)
TensorFlow dense layer example
- In a model, each neuron in the preceding layer sends its output to the neurons in the dense layer, which performs matrix-vector multiplication.
- During that multiplication, the output of the previous layer is treated as a row vector that is multiplied by the dense layer’s weights matrix.
- For the multiplication to be defined, the number of columns in the row vector must equal the number of rows in the weights matrix.
- The dense layer multiplies matrices and vectors in the background, and the parameters that make up the values in the matrix can be trained and updated with backpropagation.
- The dense layer outputs an “m”-dimensional vector, so its main purpose is to alter the vector’s dimensions. Dense layers can also perform operations on the vector, such as rotation, scaling, and translation, as the sketch below illustrates.
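To illustrate the rotation claim, here is a sketch where the dense layer’s kernel is hand-set to a 2-D rotation matrix; with no bias and a linear activation, the layer computes exactly y = xW.
import numpy as np
import tensorflow as tf

theta = np.pi / 2  # rotate by 90 degrees
rotation = np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]], dtype=np.float32)
layer = tf.keras.layers.Dense(2, use_bias=False)
layer.build((None, 2))          # create the (2, 2) kernel
layer.set_weights([rotation])   # replace it with the rotation matrix
print(layer(tf.constant([[1.0, 0.0]])).numpy())  # approximately [[0. 1.]]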
Syntax:
Here is the Syntax of the dense layer in TensorFlow.
tf.keras.layers.Dense(
    units,
    activation=None,
    use_bias=True,
    kernel_initializer="glorot_uniform",
    bias_initializer="zeros",
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    **kwargs
)
- It consists of a few parameters:
- units: One of the most fundamental and important parameters, units determines the size of the dense layer’s output. It must be a positive integer, since it indicates the dimensionality of the output vector.
- activation: The activation function transforms the input values of the neurons. In essence, it adds non-linearity to the network so that it can learn how input and output values relate to one another.
- use_bias: This parameter selects whether or not to utilize a bias vector in the dense layer. It is a boolean argument that defaults to True if not specified.
- kernel_initializer: This option initializes the kernel weights matrix, the matrix of weights that the input is multiplied by to extract pertinent features.
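Here is a minimal sketch of those parameters in use, with arbitrarily chosen sizes:
import tensorflow as tf

layer = tf.keras.layers.Dense(
    units=4,                              # dimensionality of the output vector
    activation='relu',                    # non-linearity applied to xW + b
    use_bias=True,                        # include a trainable bias vector
    kernel_initializer='glorot_uniform',  # how the weights matrix starts out
)
print(layer(tf.random.normal((2, 3))).shape)  # (2, 4)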
Example of Dense layer
from keras.models import Sequential
from keras.layers import Activation, Dense
new_model_var = Sequential()
new_layer = Dense(20, input_shape = (5,))
new_model_var.add(new_layer)
print(new_layer.input_shape)
print(new_layer.output_shape)
In the above code, we again created a Sequential() model, defined a Dense layer with 20 units and an input shape of (5,), added the layer to the model, and printed the dense layer’s input and output shapes.
This is how we use a dense layer inside a sequential model.
Read: TensorFlow clip_by_value
TensorFlow get layer by name
- In this example, we will discuss how to get a layer by name in TensorFlow.
- To do this task, we are going to build a Sequential() model whose dense layers carry explicit names, and then look one of them up with model.get_layer().
Example:
from keras.models import Sequential
from keras.layers import Dense
from keras import constraints

model = Sequential()
model.add(Dense(32, input_shape=(16,), kernel_regularizer=None,
                kernel_constraint=constraints.MaxNorm(3), activation='relu',
                name='first_dense'))
model.add(Dense(16, activation='relu', name='second_dense'))
model.add(Dense(8, name='output_dense'))

# Look up a layer by the name we gave it
layer = model.get_layer(name='second_dense')
print(layer.name, layer.output_shape)
In the above code, we imported constraints from the keras module and used the Sequential() model, adding dense layers with an input shape, a kernel_regularizer of None, a kernel_constraint, and an explicit name each (the names here are just illustrative). We then fetched one of the layers with model.get_layer(name='second_dense') and printed its name and output shape.
This is how we can get the layer by name using TensorFlow.
Read: Module ‘tensorflow’ has no attribute ‘log’
TensorFlow remove layers
- In this section, we will discuss how to remove layers in TensorFlow.
- The classic standalone-Keras trick is the model.layers.pop() function, which takes the model’s last layer off the list. In tf.keras, however, model.layers returns a fresh list, so popping it does not actually change the model.
- The reliable approach is to rebuild the model from an earlier layer’s output: for example, use hidden = Dense(120, activation='relu')(model.layers[-2].output) to drop the last dense layer and attach your new one.
Example:
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

new_input_tens = Input(shape=(10,))
new_hidden_layer = Dense(100, activation='relu')(new_input_tens)
new_output = Dense(10, activation='relu')(new_hidden_layer)
model = Model(new_input_tens, new_output)
model.compile(loss="mse", optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
model.summary()

# model.layers is [InputLayer, Dense(100), Dense(10)]; popping that list would
# not change the model, so instead rebuild it from the input layer's output
new_hidden = Dense(120, activation='relu')(model.layers[-3].output)
new_output = Dense(5, activation='softmax')(new_hidden)
model = Model(new_input_tens, new_output)
model.summary()
This is how we can remove the layers in TensorFlow.
Read: TensorFlow mean squared error
TensorFlow get layers weights
- We initialize the weights with TensorFlow’s random-normal initializer, which draws the starting weights randomly from a normal distribution. The states of the weights are contained in a tensor variable, and the initializer provides their starting values.
- The weight values are stored in the ‘float32’ format. Because the variable is marked trainable, the starting weights are modified in line with the loss function and optimizer after each run.
- The weights variable is given the label ‘kernel’ so that it can be found easily later on.
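Here is a minimal sketch of that initialization using tf.keras.initializers.RandomNormal; the layer name and sizes are arbitrary illustrations:
import tensorflow as tf

# Draw the starting kernel weights from a normal distribution
init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.05)
layer = tf.keras.layers.Dense(3, kernel_initializer=init, name="kernel_demo")
layer.build((None, 2))       # create the (2, 3) kernel variable
print(layer.kernel.dtype)    # float32
print(layer.kernel.numpy())  # trainable weights, updated by the optimizer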
Example:
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, activation="relu", name="first_layer"),
    tf.keras.layers.Dense(6, activation="tanh", name="second_layer"),
    tf.keras.layers.Dense(5, name="last_layer"),
])
new_input_val = tf.random.normal((1, 4))
new_output_val = model(new_input_val)
for new_layer in model.layers:
    print(new_layer.name, new_layer)
print(model.layers[0].weights)
print(model.layers[0].bias.numpy())
print(model.layers[0].bias_initializer)
In this example, we used the tf.keras.Sequential() model and added three dense layers to it, each with a number of units, a name, and (for the first two) an activation function. Next, we generated an input of shape (1, 4) with the tf.random.normal() function and called the model on it so the weights get built. To extract the first layer’s weights, we used the command model.layers[0].weights.
This is how we access the weights of the layers by using TensorFlow.
Also, take a look at some more TensorFlow tutorials in Python.
- Convert list to tensor TensorFlow
- Tensorflow convert sparse tensor to tensor
- Module ‘tensorflow’ has no attribute ‘log’
- Python TensorFlow one_hot
- Python TensorFlow random uniform
- Python TensorFlow reduce_sum
So, in this Python tutorial, we have learned how to build a fully connected layer in TensorFlow. We have also looked at some examples of how to get the output of the previous layer in TensorFlow, and we have covered these topics.
- TensorFlow Fully Connected Layer
- TensorFlow fully connected layer vs convolutional layer
- TensorFlow CNN fully connected layer
- sparse fully connected layer TensorFlow
- TensorFlow list of layers
- TensorFlow dense layer example
- TensorFlow get layer by name
- TensorFlow remove layers
- TensorFlow get layers weights
I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last 5 years. During this time I gained expertise in various Python libraries such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, and Scikit-Learn for various clients in the United States, Canada, the United Kingdom, Australia, New Zealand, etc. Check out my profile.