In this Python tutorial, we will learn how to use the gradient descent optimizer in Python TensorFlow. We will also cover the following topics.
- Gradient descent optimizer TensorFlow
- Stochastic gradient descent optimizer TensorFlow
- Using a gradient descent optimizer in TensorFlow
- Minimizing cost with TensorFlow’s gradient descent optimizer
Gradient descent optimizer TensorFlow
- In this section, we will discuss how to use the Gradient descent optimizer in Python TensorFlow.
- Gradient descent finds the inputs to a function that minimize its output. When training a model, the inputs are the model’s parameters and the output is the loss function, so gradient descent repeatedly nudges the parameters in the direction that reduces the loss.
- To perform this task, we are going to use the tf.compat.v1.train.GradientDescentOptimizer() function, which implements the gradient descent algorithm (a plain-Python sketch of one update step follows this list).
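Before touching TensorFlow, it helps to see one gradient descent step in plain Python. This is a minimal illustrative sketch (the function f(x) = x**4 and the numbers are assumptions chosen for the example):

learning_rate = 0.01
x = 2.0
grad = 4 * x ** 3             # analytic derivative of f(x) = x**4
x = x - learning_rate * grad  # step against the gradient
print(x)                      # 1.68: x moved toward the minimum at 0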
Syntax:
Let’s have a look at the Syntax and understand the working of the tf.compat.v1.train.GradientDescentOptimizer() function in Python TensorFlow.
tf.compat.v1.train.GradientDescentOptimizer(
learning_rate, use_locking=False, name='GradientDescent'
)
- It consists of a few parameters:
- learning_rate: A Tensor or a floating-point value specifying the learning rate to use.
- use_locking: By default it takes the value False; if True, the optimizer uses locks for the update operations.
- name: By default it takes the name ‘GradientDescent’; this parameter specifies an optional name for the operations created by the optimizer.
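Before the full example, here is a minimal, hedged sketch of this optimizer on a simple quadratic loss (the variable x and all values are illustrative assumptions; the v1 optimizer needs graph mode, hence the disable_eager_execution() call):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # v1 optimizers run in graph mode

x = tf.Variable(5.0)                    # parameter to optimize
loss = tf.square(x)                     # quadratic loss, minimum at x = 0
opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)
train = opt.minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(5):
        sess.run(train)
    print(sess.run(x))                  # x shrinks toward 0 each step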
Example:
Let’s take an example and check how to use gradient descent in Python TensorFlow (this example uses tf.keras.optimizers.SGD, the TensorFlow 2.x replacement for the gradient descent optimizer).
Source Code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Plot the loss curve y = x**4 so we can see the minimum at x = 0
new_val = np.arange(-20, 20, 0.4)
result = new_val ** 4
plt.figure(figsize=(16, 14), dpi=60, facecolor='w', edgecolor='k')
plt.plot(new_val, result)
plt.title('Gradient TensorFlow descent')
plt.xlabel('new_val')
plt.ylabel('result')
plt.savefig("gradient_descent_tensorflow_01.png", bbox_inches='tight')
plt.show()

def loss_function(new_val):
    return new_val ** 4.0

def loss_function_minimize():
    # Closure over the global variable so the optimizer can differentiate it
    return new_val ** 4.0

def reset():
    # Start close enough to the minimum that the steps do not diverge
    new_val = tf.Variable(2.0)
    return new_val

new_val = reset()
new_out = tf.keras.optimizers.SGD(learning_rate=0.01)
for i in range(20):
    print('y = {:.1f}, x = {:.1f}'.format(loss_function(new_val).numpy(), new_val.numpy()))
    new_out.minimize(loss_function_minimize, var_list=[new_val])
Running the code prints the loss and the current value of x at every step.
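With the starting point of 2.0 and the learning rate of 0.01 used above, the first few printed lines look approximately like this, with the loss y shrinking at every step:

y = 16.0, x = 2.0
y = 8.0, x = 1.7
y = 4.9, x = 1.5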
Stochastic gradient descent optimizer TensorFlow
- In this section, we will discuss how to use a stochastic gradient descent optimizer in Python TensorFlow.
- To perform this task, we are going to use tf.keras.optimizers.SGD(), which implements the stochastic gradient descent algorithm commonly used to train neural networks.
Syntax:
Here is the Syntax of the tf.keras.optimizers.SGD() function in Python TensorFlow.
tf.keras.optimizers.SGD(
learning_rate=0.01,
momentum=0.0,
nesterov=False,
name='SGD',
**kwargs
)
- It consists of a few parameters.
- learning_rate: A float, a Tensor, a schedule, or a callable that takes no arguments and returns the value to use. By default it takes the value 0.01.
- momentum: By default it takes the value 0.0; a float greater than or equal to 0 that accelerates gradient descent in the relevant direction and dampens oscillations.
- nesterov: A boolean, False by default, that specifies whether to apply Nesterov momentum.
- name: By default it takes the name ‘SGD’; an optional name for the operations created when applying gradients.
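As a quick, hedged illustration of the momentum and nesterov arguments (the variable and the values below are assumptions chosen for the sketch):

import tensorflow as tf

# SGD with momentum keeps a velocity term:
# velocity = momentum * velocity - learning_rate * gradient
opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
var = tf.Variable(1.0)
loss = lambda: (var ** 2) / 2.0  # simple quadratic loss
for _ in range(3):
    opt.minimize(loss, [var])
print(var.numpy())               # var decreases toward the minimum at 0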
Example:
import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.1)
new_val = tf.Variable(6.0)
result = lambda: (new_val ** 2) / 2.0        # loss whose gradient is d/dx = x
new_count = opt.minimize(result, [new_val])  # one optimization step
print(new_val.numpy())                       # 5.4 = 6.0 - 0.1 * 6.0
In the above code we have imported the TensorFlow library and created a tf.keras.optimizers.SGD() optimizer with a learning rate of 0.1.
After that, we created a variable, defined a simple quadratic loss, and ran a single optimization step on it. Once you execute this code, the output displays the variable after one gradient descent update (5.4).
Here is the output of the code above.
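For intuition, the single minimize() call above is equivalent to computing the gradient with tf.GradientTape and applying the plain update rule var -= learning_rate * gradient by hand. A minimal sketch under that assumption:

import tensorflow as tf

new_val = tf.Variable(6.0)
learning_rate = 0.1

# One SGD step by hand: differentiate the loss, then step against the gradient
with tf.GradientTape() as tape:
    loss = (new_val ** 2) / 2.0
grad = tape.gradient(loss, new_val)       # d/dx of x**2 / 2 is x, so 6.0 here
new_val.assign_sub(learning_rate * grad)  # the same update SGD.minimize applies
print(new_val.numpy())                    # 5.4, matching the optimizer result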
Using a gradient descent optimizer in TensorFlow
- In this Program, we will discuss how to use a gradient descent optimizer in Python TensorFlow.
- To perform this task, we first create a tensor by using the tf.Variable() function.
- Then we use the tf.train.GradientDescentOptimizer() function to optimize that variable with gradient descent.
Syntax:
Here is the Syntax of the tf.compat.v1.train.GradientDescentOptimizer() function in Python TensorFlow.
tf.compat.v1.train.GradientDescentOptimizer(
learning_rate, use_locking=False, name='GradientDescent'
)
Example:
Let’s take an example and check how to use a gradient descent optimizer in Python TensorFlow.
Source Code:
import tensorflow as tf

new_tens = tf.Variable(2, name='x', dtype=tf.float32)
new_val = tf.log(new_tens)        # TF 1.x op; use tf.math.log in 2.x
new_log = tf.square(new_val)      # loss = (log x)^2, minimized at x = 1
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(new_log)
init = tf.global_variables_initializer()

def optimize():
    with tf.Session() as session:
        session.run(init)
        print("x:", session.run(new_tens), session.run(new_log))
        for step in range(10):
            session.run(train)
            print("step", step, "x:", session.run(new_tens), session.run(new_log))

optimize()
In the above code we have defined the optimize() function, which creates a session with tf.Session(), runs the variable initializer, and then executes ten training steps, printing x and the loss after each one.
Note: This example only works with TensorFlow 1.x because the tf.Session() API is only available in version 1.x (in 2.x it is reachable through tf.compat.v1).
Running the code prints the value of x and the loss at each step as x converges toward 1.
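If you are on TensorFlow 2.x and prefer not to use the session API, the same minimization can be written eagerly with tf.GradientTape. This is a hedged sketch of an equivalent loop (the optimizer choice and variable names are assumptions, not the original example):

import tensorflow as tf

new_tens = tf.Variable(2.0, name='x')
opt = tf.keras.optimizers.SGD(learning_rate=0.5)

# Minimize (log x)^2 eagerly; the minimum is at x = 1, where log x = 0
for step in range(10):
    with tf.GradientTape() as tape:
        new_log = tf.square(tf.math.log(new_tens))
    grads = tape.gradient(new_log, [new_tens])
    opt.apply_gradients(zip(grads, [new_tens]))
    print("step", step, "x:", new_tens.numpy(), "loss:", new_log.numpy())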
Minimizing cost with TensorFlow’s gradient descent optimizer
- In this section, we will discuss how to minimize a cost function with the gradient descent optimizer in Python TensorFlow.
- To do this task, we are going to use the tf.compat.v1.train.GradientDescentOptimizer() function and its minimize() method.
- Next, we call tf.compat.v1.disable_eager_execution() so that we can run the computation inside a session, and tf.compat.v1.set_random_seed() to make any random behavior reproducible.
Example:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
tf.compat.v1.set_random_seed(777)

tens1 = [4, 5, 6]        # inputs x
tens2 = [4, 5, 6]        # targets y
tens3 = tf.Variable(5.)  # weight w to be learned

new_hypto = tens1 * tens3  # hypothesis: w * x
cost = tf.reduce_mean(tf.square(new_hypto - tens2))
# Analytic gradient of the cost with respect to w: 2 * mean((w*x - y) * x)
gradient = tf.reduce_mean((tens3 * tens1 - tens2) * tens1) * 2

new_opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.01)
train = new_opt.minimize(cost)
new_result = new_opt.compute_gradients(cost, [tens3])
apply_gradients = new_opt.apply_gradients(new_result)

with tf.compat.v1.Session() as val:
    val.run(tf.compat.v1.global_variables_initializer())
    for step in range(5):
        val.run(train)
        print(step, val.run([gradient, tens3, new_result]))
In the above code we used the tf.reduce_mean() function to build the mean-squared-error cost from new_hypto, and then called the new_opt.minimize() method inside a loop to repeatedly reduce that cost; the hand-written gradient expression is just the analytic derivative of the cost with respect to tens3.
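As a quick sanity check (a hedged sketch reusing the graph built above), the hand-written gradient expression should agree with the value autodiff returns through compute_gradients, because d/dw of mean((w*x - y)^2) is 2 * mean((w*x - y) * x):

with tf.compat.v1.Session() as val:
    val.run(tf.compat.v1.global_variables_initializer())
    analytic, [(autodiff, _)] = val.run([gradient, new_result])
    print(analytic, autodiff)  # both print the same value (about 205.33)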
Here is the output of the main example above: at each step it prints the analytic gradient, the current weight, and the (gradient, variable) pair from compute_gradients.
So, in this tutorial, we have learned how to use the gradient descent optimizer in Python TensorFlow, and we have covered the following topics.
- Gradient descent optimizer TensorFlow
- Stochastic gradient descent optimizer TensorFlow
- Using a gradient descent optimizer in TensorFlow
- Minimizing cost with TensorFlow’s gradient descent optimizer