Batch Normalization TensorFlow [10 Amazing Examples]

In this Python tutorial, we will focus on adding and customizing batch normalization in our models, and we will look at several examples of how to apply it in TensorFlow. We will cover these topics:

  • Batch normalization TensorFlow Keras
  • Batch normalization TensorFlow CNN example
  • Conditional batch normalization TensorFlow
  • Fused batch normalization TensorFlow
  • TensorFlow batch normalization weights
  • TensorFlow batch normalization not working
  • TensorFlow batch normalization epsilon
  • TensorFlow batch normalization activation
  • TensorFlow sequential batch normalization
  • TensorFlow dense batch normalization

Batch Normalization TensorFlow

  • When training a neural network, we want to normalize or standardize our data ahead of time, as part of the pre-processing step.
  • Normalization and standardization share the same objective: transforming the data so that all the data points are on the same scale. A typical normalization process scales the numerical data down to a range from zero to one.
  • Batch normalization is applied to the layers you choose within your network. When you apply batch normalization to a layer, the first thing it does is normalize the output of the activation function.
  • In other words, batch normalization adds extra layers to a deep neural network to speed up and stabilize training. On the input coming from a previous layer, the new layer applies standardizing and normalizing operations.

Now that we are clear on why batch normalization is needed, let's examine how it operates, step by step, before the full Keras example.
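
Here is a minimal sketch, using plain TensorFlow ops (not part of the original example), of what a batch normalization layer computes for one mini-batch: the per-feature mean and variance, the normalized values, and the learnable scale (gamma) and shift (beta).

import tensorflow as tf

# A mini-batch of 4 samples with 3 features each
x = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0],
                 [7.0, 8.0, 9.0],
                 [10.0, 11.0, 12.0]])

# Step 1: per-feature mean and variance over the batch
mean, variance = tf.nn.moments(x, axes=[0])

# Step 2: normalize to roughly zero mean and unit variance
epsilon = 1e-3
x_hat = (x - mean) / tf.sqrt(variance + epsilon)

# Step 3: scale and shift with learnable parameters
gamma = tf.ones([3])    # BatchNormalization initializes gamma to ones
beta = tf.zeros([3])    # and beta to zeros
y = gamma * x_hat + beta

print(y.numpy())

This is essentially what tf.keras.layers.BatchNormalization does during training, except that the layer also keeps moving averages of the mean and variance for use at inference time.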

Example:

import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Activation
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense, BatchNormalization
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

# Scale the pixel values from [0, 255] to [0, 1]
new_scale_val = 255.0
x_train /= new_scale_val
x_test /= new_scale_val
print("Training feature matrix:", x_train.shape)
print("Testing feature matrix:", x_test.shape)
print("Training target vector:", y_train.shape)
print("Testing target vector:", y_test.shape)
model = Sequential([

    # flatten the 28 x 28 images into 784-element vectors
    Flatten(input_shape=(28, 28)),

    # dense layer 1
    Dense(256, activation='sigmoid'),
    # Batch normalization
    BatchNormalization(),

    # dense layer 2
    Dense(128, activation='sigmoid'),
    BatchNormalization(),

    # output layer
    Dense(10, activation='softmax'),
])
model.summary()
model.compile(optimizer='adam',
			loss='sparse_categorical_crossentropy',
			metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10,
		batch_size=2000,
		validation_split=0.2)
results = model.evaluate(x_test, y_test, verbose = 0)
print('test loss, test acc:', results)
  • In this example, I use the mnist dataset from tf.keras.datasets and load the train and test data as (x_train, y_train) and (x_test, y_test). Since the input features are pixel values between 0 and 255, I normalize them by dividing by 255.
  • After that, I create a new Sequential model. The first layer is a Flatten layer that takes the input images of shape (28, 28). The second layer is a Dense layer with 256 neurons and the sigmoid activation function, followed by a BatchNormalization layer; then comes another Dense layer with 128 neurons and a second BatchNormalization layer, and the final output layer is a Dense layer with 10 neurons and the softmax activation function.
  • Now we display the summary by using model.summary(), and then we compile and fit the model. In TensorFlow, batch normalization can simply be added as a layer in the model.
  • Each BatchNormalization layer normalizes the activations of the dense layer that precedes it; here we apply it after both hidden dense layers.

Here is the Screenshot of the following given code

Batch Normalization TensorFlow

As you can see in the summary the batch normalization layers are added.

Read: Tensorflow custom loss function

Batch normalization TensorFlow CNN example

  • Let us see how we can use batch normalization in a convolutional neural network; a typical Conv2D → BatchNormalization → ReLU block is sketched below, before the full example.
  • Convolutional Neural Networks, also known as CNNs, are a class of deep neural networks used to analyze visual imagery, and they are widely used in computer vision applications.
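
Before the full CIFAR-10 example, here is a minimal sketch of the common Conv2D → BatchNormalization → ReLU ordering inside a CNN block; the layer sizes here are illustrative only.

import tensorflow as tf
from tensorflow.keras import layers, models

# One convolutional block: convolution, then batch normalization,
# then the non-linearity, then pooling
cnn_block = models.Sequential([
    layers.Conv2D(32, (3, 3), padding='same', input_shape=(32, 32, 3)),
    layers.BatchNormalization(),   # normalize the convolution output
    layers.Activation('relu'),     # apply the non-linearity afterwards
    layers.MaxPooling2D((2, 2)),
])
cnn_block.summary()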

Example:

import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

# Load the CIFAR-10 dataset and scale pixel values to [0, 1]
(new_train_images, new_train_labels), (new_test_images, new_test_labels) = datasets.cifar10.load_data()
new_train_images, new_test_images = new_train_images / 255.0, new_test_images / 255.0

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

# Display the first 25 training images with their class names
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(new_train_images[i])
    plt.xlabel(class_names[new_train_labels[i][0]])
plt.show()

# Build the CNN, adding batch normalization layers before and after the dense layer
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.BatchNormalization())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.Dense(10))
model.summary()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(new_train_images, new_train_labels, epochs=10,
                    validation_data=(new_test_images, new_test_labels))

# Plot training and validation accuracy
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')

test_loss, test_acc = model.evaluate(new_test_images, new_test_labels, verbose=2)

In the following code, we have imported the TensorFlow library and then loaded the CIFAR-10 dataset from tf.keras.datasets, loading the train and test data as (new_train_images, new_train_labels) and (new_test_images, new_test_labels). Since the pixel values are between 0 and 255, we normalize them by dividing by 255.


After that, I create a new Sequential model: the first layer is a Conv2D layer that takes input images of shape (32, 32, 3), followed by max-pooling and further convolutional layers; after flattening, BatchNormalization layers are added around the dense layer.

You can refer to the below Screenshot

Batch normalization TensorFlow CNN example

This is how we can use the Convolutional neural network in batch normalization by using TensorFlow.

Read: TensorFlow next_batch + Examples

Conditional batch normalization TensorFlow

  • Batch normalization has a class-conditional form called conditional batch normalization (CBN). The main idea is to infer the γ (scale) and β (shift) parameters of batch normalization from an embedding, such as a language embedding in VQA. Through CBN, the embedding can alter entire feature maps by scaling, canceling, or turning off individual features. CBN has also been employed in GANs to let class information influence the batch normalization parameters; a minimal sketch of the idea is shown right after this list.
  • Conditional batch normalization is a relatively recent development, and research indicates that it has some intriguing properties and performs well on particular workloads.
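
Keras does not ship a ready-made conditional batch normalization layer, so the following is only a minimal sketch of the idea; the class name ConditionalBatchNorm is chosen here purely for illustration. It combines a plain BatchNormalization layer (with its own scale and offset disabled) with per-class gamma and beta vectors looked up from Embedding layers.

import tensorflow as tf

class ConditionalBatchNorm(tf.keras.layers.Layer):
    """Hypothetical class-conditional batch normalization (sketch only)."""

    def __init__(self, num_classes, **kwargs):
        super().__init__(**kwargs)
        # Plain batch norm without its own learnable scale/offset
        self.bn = tf.keras.layers.BatchNormalization(center=False, scale=False)
        self.num_classes = num_classes

    def build(self, input_shape):
        channels = input_shape[0][-1]
        # One gamma vector and one beta vector per class
        self.gamma_embed = tf.keras.layers.Embedding(
            self.num_classes, channels, embeddings_initializer='ones')
        self.beta_embed = tf.keras.layers.Embedding(
            self.num_classes, channels, embeddings_initializer='zeros')

    def call(self, inputs, training=None):
        x, labels = inputs                  # x: (batch, H, W, C), labels: (batch,)
        normalized = self.bn(x, training=training)
        gamma = tf.reshape(self.gamma_embed(labels), (-1, 1, 1, normalized.shape[-1]))
        beta = tf.reshape(self.beta_embed(labels), (-1, 1, 1, normalized.shape[-1]))
        return gamma * normalized + beta

# Usage: normalize feature maps conditioned on the class label
x = tf.random.normal((8, 14, 14, 64))
labels = tf.constant([0, 1, 2, 3, 4, 5, 6, 7])
out = ConditionalBatchNorm(num_classes=10)((x, labels), training=True)
print(out.shape)   # (8, 14, 14, 64)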

Example:

Let’s take an example and understand how we can add conditional batch normalization in TensorFlow.

from tensorflow.keras.layers import Dense, BatchNormalization 
from tensorflow.keras import datasets, layers, models 
from tensorflow.keras.layers import UpSampling2D, Reshape, Activation, Conv2D, BatchNormalization, LeakyReLU, Input, Flatten, multiply
from tensorflow.keras.layers import Dense, Embedding
from tensorflow.keras.layers import Dropout, Concatenate
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.datasets import mnist

import matplotlib.pyplot as plt
import numpy as np

(X_train,y_train),(X_test,y_test) = mnist.load_data()

new_image_width, new_image_height =(28,28)
new_image_channel = 1
img_shape = (new_image_width, new_image_height, new_image_channel)
number_of_outputs = 10
dimensions = 100

X_train.shape
def build_generator():
    model = Sequential()
    model.add(Dense(128 * 7 * 7, activation='relu', input_shape=(dimensions,)))
    model.add(Reshape((7, 7, 128)))

    # upsample 7x7 -> 14x14
    model.add(UpSampling2D())
    model.add(Conv2D(128, 3, padding='same'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.02))

    # upsample 14x14 -> 28x28
    model.add(UpSampling2D())
    model.add(Conv2D(64, 3, padding='same'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.02))

    # final 28x28 single-channel image
    model.add(Conv2D(new_image_channel, 3, padding='same'))
    model.add(Activation('tanh'))

    new_output = Input(shape=(dimensions,))
    label = Input(shape=(1,), dtype='int32')

    label_embedding = Embedding(number_of_outputs, dimensions, input_length=1)(label)
    label_embedding = Flatten()(label_embedding)
    joined = multiply([new_output, label_embedding])

    img = model(joined)
    return Model([new_output, label], img)

generator = build_generator()
generator.summary()
  • Let’s start by importing all the necessary libraries and modules for building the conditional GAN (CGAN) architecture.
  • Most of these layers are used to build the CGAN model network. Apart from the addition of labels during training, most of the CGAN architecture is based on a basic GAN framework.
  • We will use the embedding layers to convert the labels into a vector representation. The deep learning frameworks TensorFlow and Keras will be used to load all of the layers.
  • Next, we will use the mnist dataset and load the data into the training and testing part. After that, we have mentioned the image width, image height, and the number of channels. Each image in the MNIST dataset is a single-channel, grayscale image with a size of 28 x 28.
  • Ten classes in total will serve as the labels for our CGAN model as it learns. The defined z-dimensional space has a default value of 100.

Here is the Screenshot of the following given code

Conditional batch normalization TensorFlow

In this example, we have used conditional batch normalization in TensorFlow.

Read: Binary Cross Entropy TensorFlow

Fused batch normalization TensorFlow

  • Let us take an example and understand how we can use the fused parameter of batch normalization.
  • In this example, we will use the tf.keras.layers.BatchNormalization() function. Batch normalization applies a transformation that keeps the output mean close to 0 and the output standard deviation close to 1; the fused argument controls which implementation of this transformation TensorFlow uses, as sketched right after this list.
  • On the input of a layer originating from a previous layer, the batch normalization layer applies standardizing and normalizing procedures.
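
As a quick illustration of the parameter itself, here is a hedged sketch assuming the standard Keras behavior: fused=True requests the faster fused kernel (which expects 4-D inputs such as image batches), fused=False forces the non-fused implementation, and the default of None lets TensorFlow pick.

import tensorflow as tf

# A batch of 8 "images" of shape 28 x 28 with 16 channels
images = tf.random.normal((8, 28, 28, 16))

fused_bn = tf.keras.layers.BatchNormalization(fused=True)    # fused kernel
plain_bn = tf.keras.layers.BatchNormalization(fused=False)   # non-fused implementation

out_fused = fused_bn(images, training=True)
out_plain = plain_bn(images, training=True)

# Both implementations produce outputs of the same shape
print(out_fused.shape, out_plain.shape)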

Syntax:

Here is the Syntax of tf.keras.layers.BatchNormalization() function in Python TensorFlow

tf.keras.layers.BatchNormalization(
    axis=-1,
    momentum=0.99,
    epsilon=0.001,
    center=True,
    scale=True,
    beta_initializer='zeros',
    gamma_initializer='ones',
    moving_mean_initializer='zeros',
    moving_variance_initializer='ones',
    beta_regularizer=None,
    gamma_regularizer=None,
    beta_constraint=None,
    gamma_constraint=None,
    **kwargs
)
  • It accepts the following parameters (a short sketch after this list shows how they appear as layer weights):
    • axis: An integer giving the axis that should be normalized (typically the features axis). For instance, set axis=1 in BatchNormalization after a Conv2D layer with data_format="channels_first". The default value is -1.
    • momentum: The momentum for the moving average of the mean and variance.
    • epsilon: A small float added to the variance to avoid division by zero. The default value is 0.001.
    • center: If True, the offset beta is added to the normalized tensor; if False, beta is ignored.
    • scale: If True, the normalized tensor is multiplied by gamma; if False, gamma is not used. This can be disabled when the next layer is linear (for example, nn.relu), since the next layer handles the scaling.
    • beta_initializer: The initializer for the beta weight.
    • gamma_initializer: The initializer for the gamma weight.
    • moving_mean_initializer: The initializer for the moving mean; the default value is 'zeros'.
    • moving_variance_initializer: The initializer for the moving variance; the default value is 'ones'.
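
The defaults in the list above can be checked directly on a layer's weights; here is a small sketch (with the default settings) that builds a layer and prints its variables.

import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()
bn.build((None, 4))   # build the layer for inputs with 4 features

# gamma and beta are the trainable scale/offset, while moving_mean and
# moving_variance are the non-trainable moving statistics
for weight in bn.weights:
    print(weight.name, weight.shape, weight.numpy())

# gamma starts at ones, beta at zeros, moving_mean at zeros and
# moving_variance at ones, matching the default initializers above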

Example:

Let’s take an example and understand the working of tf.keras.layers.BatchNormalization() function.

from tensorflow.keras import layers
from tensorflow.keras.models import Model
from tensorflow.keras.models import model_from_json
import json

# Build a tiny model whose only hidden layer is a non-fused BatchNormalization
input_layer = layers.Input((32, 32, 1))
new_output = Model(input_layer, layers.BatchNormalization(fused=False)(input_layer))

# The fused setting is stored in the serialized model configuration
print('fused' in new_output.to_json())

# Reload the model from JSON and check the fused attribute of the layer
new_loaded_value = model_from_json(new_output.to_json())
print(new_loaded_value.layers[1].fused)

# Edit the JSON configuration directly and reload it again
new_json_val = json.loads(new_output.to_json())
new_json_val['config']['layers'][1]['config']['fused'] = False
new_json_val = json.dumps(new_json_val)

new_loaded_value = model_from_json(new_json_val)
print(new_loaded_value.layers[1].fused)

In the above code we imported the Keras model packages and then set the input shape to (32, 32, 1). Next, we used the layers.BatchNormalization() function and passed fused=False as an argument, then checked that this setting survives serializing the model to JSON and loading it back.

You can refer to the below Screenshot.

Fused batch normalization TensorFlow

This is how we can use the fused parameter in batch normalization by using TensorFlow.

Read: Tensorflow embedding_lookup

TensorFlow batch normalization weights

  • The example in this section uses weight normalization, a method of reparameterizing the weights that improves the conditioning of the optimization problem and hastens the convergence of stochastic gradient descent; the idea is sketched below.
  • Although batch normalization is the inspiration for this reparameterization, it does not introduce any dependencies between the samples in a minibatch.
  • It allows us to employ considerably higher learning rates, which accelerates the training of networks even more.
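
The example below relies on tfa.layers.WeightNormalization from TensorFlow Addons. The reparameterization it is based on can be written as w = g * v / ||v||, so the direction and the magnitude of a weight vector are learned separately; here is a tiny plain-TensorFlow sketch of that formula (for illustration only, not the Addons implementation).

import tensorflow as tf

# Weight normalization reparameterizes a weight vector w as
#   w = g * v / ||v||
v = tf.Variable([[3.0, 4.0]])   # direction parameters
g = tf.Variable([5.0])          # magnitude parameter

w = g * v / tf.norm(v)          # reconstructed weight vector
print(w.numpy())                # [[3. 4.]] because ||v|| = 5 and g = 5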

Example:

import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
from matplotlib import pyplot as plt
new_batch_size_val = 32
new_epochs_val = 10
number_classes=10
new_regression_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, 5, activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(16, 5, activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation='relu'),
    tf.keras.layers.Dense(84, activation='relu'),
    tf.keras.layers.Dense(number_classes, activation='softmax'),
])
new_weight_model = tf.keras.Sequential([
    tfa.layers.WeightNormalization(tf.keras.layers.Conv2D(6, 5, activation='relu')),
    tf.keras.layers.MaxPooling2D(2, 2),
    tfa.layers.WeightNormalization(tf.keras.layers.Conv2D(16, 5, activation='relu')),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tfa.layers.WeightNormalization(tf.keras.layers.Dense(120, activation='relu')),
    tfa.layers.WeightNormalization(tf.keras.layers.Dense(84, activation='relu')),
    tfa.layers.WeightNormalization(tf.keras.layers.Dense(number_classes, activation='softmax')),
])
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

y_train = tf.keras.utils.to_categorical(y_train, number_classes)
y_test = tf.keras.utils.to_categorical(y_test, number_classes)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
new_regression_model.compile(optimizer='adam', 
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

regression_history = new_regression_model.fit(x_train, y_train,
                            batch_size=new_batch_size_val,
                            epochs=new_epochs_val,
                            validation_data=(x_test, y_test),
                            shuffle=True)
new_weight_model.compile(optimizer='adam', 
                 loss='categorical_crossentropy',
                 metrics=['accuracy'])

new_weight_history = new_weight_model.fit(x_train, y_train,
                          batch_size=new_batch_size_val,
                          epochs=new_epochs_val,
                          validation_data=(x_test, y_test),
                          shuffle=True)

You can refer to the below Screenshot

TensorFlow batch normalization weights

As you can see in the output, both the baseline model and the weight-normalized model are trained for the configured number of epochs using TensorFlow.

Read: TensorFlow clip_by_value

TensorFlow batch normalization not working

  • In this section, we will discuss what to do when batch normalization does not seem to work in TensorFlow.
  • To perform this task we will use the tf.keras.layers.BatchNormalization() function.
  • Batch normalization applies a transformation that keeps the output mean close to 0 and the output standard deviation close to 1; within this function we will set axis=-1.

Syntax:

Let’s have a look at the syntax and understand the working of tf.keras.layers.BatchNormalization() function.

tf.keras.layers.BatchNormalization(
    axis=-1,
    momentum=0.99,
    epsilon=0.001,
    center=True,
    scale=True,
    beta_initializer='zeros',
    gamma_initializer='ones',
    moving_mean_initializer='zeros',
    moving_variance_initializer='ones',
    beta_regularizer=None,
    gamma_regularizer=None,
    beta_constraint=None,
    gamma_constraint=None,
    **kwargs
)

Example:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Batch normalization layers expect floating-point inputs with a batch dimension
input_tens = tf.constant([[12.0, 34.0, 5.0, 67.0, 8.0]])
layer = tf.compat.v1.keras.layers.BatchNormalization(axis=-1)
result = layer(input_tens)
print(result)

In this example we used the tf.compat.v1.keras.layers.BatchNormalization() function, which works in TensorFlow 2.x as well as in the 1.x compatibility mode, and applied the layer to a floating-point input tensor with a batch dimension.

Here is the Screenshot of the following given code.

TensorFlow batch normalization not working

This is how you can get batch normalization working again in TensorFlow.
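
Another common reason batch normalization appears not to work is the training flag: during training the layer normalizes with the statistics of the current batch, while at inference it uses the moving averages, which are still at their initial values before any training. Here is a small sketch of the difference (run in TensorFlow 2.x eager mode):

import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()
x = tf.constant([[12.0], [34.0], [5.0], [67.0], [8.0]])

# training=True: normalize with the mean/variance of this batch
print(bn(x, training=True).numpy().flatten())

# training=False: use moving_mean (0) and moving_variance (1), which
# have not been updated yet, so the output looks almost unchanged
print(bn(x, training=False).numpy().flatten())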

Read: TensorFlow Graph – Detailed Guide

TensorFlow batch normalization epsilon

  • In this example, we will use the epsilon parameter of the batch normalization function in TensorFlow.
  • Epsilon is a small float added to the variance to avoid division by zero; its default value is 0.001, as sketched below.
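
Epsilon only appears inside the denominator of the normalization, (x - mean) / sqrt(variance + epsilon). Here is a tiny sketch showing where it enters, using the same numbers as the layer-based example below:

import tensorflow as tf

x = tf.constant([67.0, 17.0, 28.0, 67.0, 98.0])

# Per-feature mean and variance of the batch
mean, variance = tf.nn.moments(x, axes=[0])

epsilon = 0.01   # same value passed to the layer in the example below
x_hat = (x - mean) / tf.sqrt(variance + epsilon)
print(x_hat.numpy())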

Example:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Use a floating-point input with a batch dimension and a custom epsilon
input_tens = tf.constant([[67.0, 17.0, 28.0, 67.0, 98.0]])
layer = tf.keras.layers.BatchNormalization(axis=-1, epsilon=0.01)
result = layer(input_tens)
print(result)

Here is the Screenshot of the following given code

TensorFlow batch normalization epsilon

This is how we can use the epsilon parameter in batch normalization by using TensorFlow.

Read: TensorFlow mean squared error

TensorFlow batch normalization activation

  • In this section, we will discuss how to combine activation functions with batch normalization in TensorFlow.
  • An activation function maps a neuron's input to a bounded or rectified output (for example, tanh maps inputs into the range -1 to 1). Because the neural network is sometimes trained on millions of data points, the activation function needs to be efficient and keep computation time short.
  • In this example, we will use the 'relu' activation function. The rectified linear unit, or ReLU, is currently the most popular activation function; its output ranges from 0 to infinity, and all negative inputs are mapped to zero. A common design choice is whether to apply batch normalization before or after the activation, as sketched below.
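
The full example below normalizes after the activation; here is a minimal sketch of the "before the activation" variant (the layer sizes are illustrative only, and either ordering is a valid design choice):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization, Activation

# Batch normalization placed between the linear Dense output and the ReLU
model_bn_before_act = Sequential([
    Dense(256, input_shape=(784,)),   # no activation on the Dense layer itself
    BatchNormalization(),
    Activation('relu'),
    Dense(10, activation='softmax'),
])
model_bn_before_act.summary()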

Example:

Let’s take an example and check how to use the activation function in batch normalization

import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Activation
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense, BatchNormalization
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

# Scale the pixel values from [0, 255] to [0, 1]
new_scale_val = 255.0
x_train /= new_scale_val
x_test /= new_scale_val
print("Training feature matrix:", x_train.shape)
print("Testing feature matrix:", x_test.shape)
print("Training target vector:", y_train.shape)
print("Testing target vector:", y_test.shape)
model = Sequential([

    # flatten the 28 x 28 images into 784-element vectors
    Flatten(input_shape=(28, 28)),

    # dense layer 1
    Dense(256, activation='relu'),
    # Batch normalization
    BatchNormalization(),

    # dense layer 2
    Dense(128, activation='relu'),
    BatchNormalization(),

    # output layer
    Dense(10, activation='softmax'),
])
model.summary()
model.compile(optimizer='adam',
			loss='sparse_categorical_crossentropy',
			metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10,
		batch_size=2000,
		validation_split=0.2)
results = model.evaluate(x_test, y_test, verbose = 0)
print('test loss, test acc:', results)

In this example, I use the mnist dataset from tf.keras.datasets and load the train and test data as (x_train, y_train) and (x_test, y_test). Since the input features are pixel values between 0 and 255, I normalize them by dividing by 255.


In this example we have used the relu activation function in the hidden layers, added a BatchNormalization layer after each of them, and set the number of epochs to 10.

Here is the Screenshot of the following given code.

TensorFlow batch normalization activation

As you can see in the output, the model with ReLU activations and batch normalization layers trains for the configured number of epochs using TensorFlow.

Read: Tensorflow iterate over tensor

TensorFlow sequential batch normalization

  • Here we use batch normalization inside a Sequential model in TensorFlow; the two equivalent ways of building a Sequential model are sketched below, before the full example.
  • Simply placing the Keras layers in a sequential order is the fundamental concept behind Sequential API, hence the name.
  • The majority of ANNs also have layers that are arranged in sequential order, and data flows from one layer to the next in the designated order until it eventually reaches the output layer.
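
The same Sequential model can be built either by passing the layers as a list or by calling add() repeatedly; the full example below uses add(). Here is a minimal sketch of the two equivalent styles (layer sizes are illustrative only):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization

# Style 1: pass the layers as a list
model_a = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    BatchNormalization(),
    Dense(10, activation='softmax'),
])

# Style 2: add the layers one by one (the style used in the example below)
model_b = Sequential()
model_b.add(Dense(64, activation='relu', input_shape=(20,)))
model_b.add(BatchNormalization())
model_b.add(Dense(10, activation='softmax'))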

Example:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.layers import BatchNormalization

# Model configuration
new_batch_size = 250
number_of_epochs = 10
number_of_classes = 10
new_validation_split = 0.2
new_verbosity = 1

# Load mnist dataset
(input_train, target_train), (input_test, target_test) = tf.keras.datasets.mnist.load_data()

input_train_shape = input_train.shape
input_test_shape = input_test.shape 

new_shape_val = (input_train_shape[1], input_train_shape[2], 1)

# Reshape the training data to include channels
input_train = input_train.reshape(input_train_shape[0], input_train_shape[1], input_train_shape[2], 1)
input_test = input_test.reshape(input_test_shape[0], input_test_shape[1], input_test_shape[2], 1)

# Parse numbers as floats
input_train = input_train.astype('float32')
input_test = input_test.astype('float32')

# Normalize input data
input_train = input_train / 255
input_test = input_test / 255

# Create the model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=new_shape_val))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(number_of_classes, activation='softmax'))

# Compile the model
model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])

# Fit data to model
history = model.fit(input_train, target_train,
            batch_size=new_batch_size,
            epochs=number_of_epochs,
            verbose=new_verbosity,
            validation_split=new_validation_split)

# Generate generalization metrics
score = model.evaluate(input_test, target_test, verbose=0)
print(f'Test loss: {score[0]} / Test accuracy: {score[1]}')

I am going to use the mnist dataset from tf.keras.datasets and load the train and test data as (input_train, target_train) and (input_test, target_test). Since the input features are pixel values between 0 and 255, I normalize them by dividing by 255.

After that, I create a new Sequential model whose first layer is a Conv2D layer that takes input images of shape (28, 28, 1), followed by max-pooling and BatchNormalization layers, a Flatten layer, and dense layers.

Here is the Screenshot of the following given code.

TensorFlow sequential batch normalization

This is how we can use the sequential batch normalization by using TensorFlow.

Read: Python TensorFlow truncated normal

TensorFlow dense batch normalization

  • In this section, we will discuss how to use the dense layer together with batch normalization in TensorFlow.
  • The dense layer is the typical, fully connected layer of a neural network; it is the most common and frequently used layer. It computes output = activation(dot(input, kernel) + bias), and the sketch below shows a dense layer followed by batch normalization.
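
Because the full example below is a CGAN generator, here is first a smaller sketch of the section's topic on its own: a Dense layer followed by a BatchNormalization layer (the layer sizes are illustrative only).

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization

# Dense computes activation(dot(input, kernel) + bias);
# BatchNormalization then normalizes that output over the batch
model = Sequential([
    Dense(128, activation='relu', input_shape=(784,)),
    BatchNormalization(),
    Dense(10, activation='softmax'),
])
model.summary()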

Example:

from tensorflow.keras.layers import Dense, BatchNormalization 
from tensorflow.keras import datasets, layers, models 
from tensorflow.keras.layers import UpSampling2D, Reshape, Activation, Conv2D, BatchNormalization, LeakyReLU, Input, Flatten, multiply
from tensorflow.keras.layers import Dense, Embedding
from tensorflow.keras.layers import Dropout, Concatenate
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.datasets import mnist

import matplotlib.pyplot as plt
import numpy as np

(X_train,y_train),(X_test,y_test) = mnist.load_data()

new_image_width, new_image_height =(28,28)
new_image_channel = 1
img_shape = (new_image_width, new_image_height, new_image_channel)
number_of_outputs = 10
dimensions = 100

X_train.shape
def build_generator():
    model = Sequential()
    model.add(Dense(128 * 7 * 7, activation='relu', input_shape=(dimensions,)))
    model.add(Reshape((7, 7, 128)))

    # upsample 7x7 -> 14x14
    model.add(UpSampling2D())
    model.add(Conv2D(128, 3, padding='same'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.02))

    # upsample 14x14 -> 28x28
    model.add(UpSampling2D())
    model.add(Conv2D(64, 3, padding='same'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.02))

    # final 28x28 single-channel image
    model.add(Conv2D(new_image_channel, 3, padding='same'))
    model.add(Activation('tanh'))

    new_output = Input(shape=(dimensions,))
    label = Input(shape=(1,), dtype='int32')

    label_embedding = Embedding(number_of_outputs, dimensions, input_length=1)(label)
    label_embedding = Flatten()(label_embedding)
    joined = multiply([new_output, label_embedding])

    img = model(joined)
    return Model([new_output, label], img)

generator = build_generator()
generator.summary()

You can refer to the below Screenshot.

TensorFlow dense batch normalization

As you can see in the screenshot, we have used dense layers together with batch normalization in TensorFlow.

Also, take a look at some more TensorFlow tutorials.

  • Python TensorFlow expand_dims
  • Python TensorFlow one_hot
  • TensorFlow Natural Language Processing
  • Python TensorFlow reduce_mean
  • Python TensorFlow reduce_sum

In this tutorial, we have learned about adding and customizing batch normalization in our models, and we have looked at several examples of how to apply it in TensorFlow. We have covered these topics:

  • Batch normalization TensorFlow Keras
  • Batch normalization TensorFlow CNN example
  • Conditional batch normalization TensorFlow
  • Fused batch normalization TensorFlow
  • TensorFlow batch normalization weights
  • TensorFlow batch normalization not working
  • TensorFlow batch normalization epsilon
  • TensorFlow batch normalization activation
  • TensorFlow sequential batch normalization
  • TensorFlow dense batch normalization