TensorFlow feed_dict + 9 Examples

Do you know how to create a feed_dict for a TensorFlow placeholder? In this Python tutorial, we will learn about the TensorFlow feed_dict in Python and cover several examples related to it. We will cover these topics:

  • TensorFlow feed_dict multiple values
  • TensorFlow feed_dict numpy array
  • TensorFlow feed_dict tensor
  • TensorFlow feed_dict list
  • TensorFlow feed_dict batch
  • TensorFlow session run without feed_dict
  • TensorFlow eval feed_dict
  • TensorFlow cannot interpret the feed_dict key as tensor

TensorFlow feed_dict

  • In this section, we will discuss how to use feed_dict with a placeholder in TensorFlow.
  • In TensorFlow, placeholders are tensors that hold no value of their own; they are declared up front and filled in at runtime through the feed_dict argument. Passing feed_dict to a session run supplies values for these placeholders and keeps you away from the error that prompts you to feed a value for a placeholder (see the sketch after this list).
  • In each session run we pass both fetches and a feed dictionary, for example val.run(result, feed_dict={x: [2, 20], b: [1, 25]}). The feed dictionary specifies the placeholder values for that computation, and the fetches parameter indicates what we want to compute.
  • In TensorFlow, a placeholder is a node that accepts data and feeds values into a computation graph. It allows the user to provide the data for an operation when the graph is run rather than when it is built.
  • In Python, variables hold data that is initialized up front, whereas a placeholder only receives its data when it is fed into the computation graph.
  • In TensorFlow 2.x, placeholders are created with the tf.compat.v1.placeholder() function, and it only works after eager execution has been disabled.
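The minimal sketch below (the names x and y are ours, purely for illustration) shows both cases: feeding the placeholder works, while running it without a feed would raise an InvalidArgumentError.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A placeholder holds no value of its own
x = tf.compat.v1.placeholder(tf.float32)
y = x * 2

with tf.compat.v1.Session() as sess:
    # Feeding the placeholder works as expected and prints 6.0
    print(sess.run(y, feed_dict={x: 3.0}))
    # Running without feed_dict raises an InvalidArgumentError:
    # "You must feed a value for placeholder tensor ..."
    # sess.run(y)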

Syntax:

Let’s look at the Syntax and understand the working of the tf.compat.v1.placeholder() function in Python TensorFlow.

tf.compat.v1.placeholder
                       (
                        dtype,
                        shape=None,
                        name=None
                       )

Now let's discuss the parameters that we are going to use in the example.

  • It consists of a few parameters:
    • dtype: Specifies the type of the elements in the tensor to be fed.
    • shape: Defaults to None; if you do not specify a shape, you can feed a tensor of any shape (see the shape sketch at the end of this section).
    • name: An optional name for the operation.

Example:

Let's take an example and check how to create a feed_dict for a placeholder by using TensorFlow.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
# Declare two placeholders that will receive their values at run time
input_tens_1 = tf.compat.v1.placeholder(tf.int32)
input_tens_2 = tf.compat.v1.placeholder(tf.int32)
# Build the multiplication node in the graph
result = tf.math.multiply(input_tens_1, input_tens_2)
with tf.compat.v1.Session() as val:
    # Supply the placeholder values through feed_dict and fetch the result
    new_output = val.run(result, feed_dict={input_tens_1: 56, input_tens_2: 78})
    print(new_output)
  • In the above code, we imported the TensorFlow library with the alias name 'tf' and declared two placeholders of datatype tf.int32.
  • After that, we built the multiplication operation with tf.math.multiply(), disabled eager execution with tf.compat.v1.disable_eager_execution(), and created a session with tf.compat.v1.Session().
  • While running the session we passed feed_dict as an argument to val.run(), so the output is 4368, the product of the two fed values.

Here is the Screenshot of the above code

Tensorflow feed_dict

This is how we can use the feed_dict in a TensorFlow placeholder.
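
Since shape defaults to None, the same placeholder can accept feeds of different shapes. Here is a minimal sketch assuming an unshaped placeholder (the names flexible and doubled are ours):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# No shape is given, so any shape can be fed
flexible = tf.compat.v1.placeholder(tf.float32, shape=None, name='flexible_input')
doubled = flexible * 2

with tf.compat.v1.Session() as sess:
    print(sess.run(doubled, feed_dict={flexible: 5.0}))           # scalar feed -> 10.0
    print(sess.run(doubled, feed_dict={flexible: [[1.0, 2.0]]}))  # 1 x 2 feed -> [[2. 4.]]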

Read: Tensorflow get static value

TensorFlow feed_dict multiple values

  • Let us discuss how to pass multiple values through a placeholder's feed_dict by using TensorFlow.
  • To perform this particular task we are going to use tf.compat.v1.Graph(). A graph consists of nodes and edges: each node takes zero or more tensors as inputs and produces a tensor as an output. If you want to add variables to a graph you can simply call the constructor, and the created tensor has the same datatype as its initialization value.
  • A tf.compat.v1.placeholder() stores no value of its own; the value is supplied later, through feed_dict, when the session runs. If we don't pass a value for it while running the session, an error is raised.

Syntax:

Here is the Syntax of the tf.compat.v1.placeholder() function in Python TensorFlow

tf.compat.v1.placeholder
                       (
                        dtype,
                        shape=None,
                        name=None
                       )

Example:

Let's take an example and understand how we can use multiple values in feed_dict.

import tensorflow as tf
import numpy as np

tf.compat.v1.disable_eager_execution()
# Build a graph
graph = tf.compat.v1.Graph()
with graph.as_default():
    # declare a placeholder that is 2 by 2 of type int32
    input_tens = tf.compat.v1.placeholder(tf.int32, shape=(2, 2), name='input_tensor')
    
    # Perform some operation on the placeholder
    result = input_tens * 3
    
# Create an input array to be fed
arr = np.ones((2,2))

# Create a session, and run the graph
with tf.compat.v1.Session(graph=graph) as val:
    # run the session up to the result node, feeding the array into the placeholder
    new_output = val.run(result, feed_dict={input_tens: arr})
    print(new_output)
  • In the above code, we imported the TensorFlow library and then declared a placeholder for a 2 by 2 tensor with values that are (or can be cast to) 32-bit integers.
  • Once you execute this code, the array of ones is multiplied by 3, so the output is a 2 by 2 array of 3s.

Here is the Screenshot of the above code

TensorFlow feed_dict multiple values

As you can see in the Screenshot, the NumPy array has been fed to the placeholder through feed_dict.
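
A single feed_dict can also carry values for several placeholders at once, which is what 'multiple values' usually means in practice. Here is a minimal sketch (the names price, quantity and total are ours):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

price = tf.compat.v1.placeholder(tf.float32, name='price')
quantity = tf.compat.v1.placeholder(tf.float32, name='quantity')
total = price * quantity

with tf.compat.v1.Session() as sess:
    # One feed_dict supplies values for both placeholders
    print(sess.run(total, feed_dict={price: 2.5, quantity: 4.0}))  # 10.0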

Read: TensorFlow cross-entropy loss

TensorFlow feed_dict numpy array

  • In this section, we will discuss how to feed a NumPy array through feed_dict.
  • To perform this particular task we will use the tf.compat.v1.placeholder() function and pass the datatype and shape as arguments.
  • Next, we will create the NumPy array by using the np.ones() function, passing the same shape to it.

Example:

import tensorflow as tf
import numpy as np

tf.compat.v1.disable_eager_execution()
new_tens= tf.compat.v1.placeholder(tf.int32,shape=(2, 2),name='tensor')

z = new_tens *2
new_arr= np.ones((2,2))

with tf.compat.v1.Session() as val:
    new_output=val.run(z, feed_dict={new_tens:new_arr})
    print(new_output)
    print(type(new_output))

You can refer to the below Screenshot

TensorFlow feed_dict numpy array

In the given example, we fed the NumPy array of ones through feed_dict; the float array is cast to the placeholder's int32 dtype, so the printed result is a 2 by 2 array of 2s of type numpy.ndarray.
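
A NumPy array with any number of rows can also be fed when the placeholder's first dimension is left as None. Here is a minimal sketch (the names batch_input, row_sums and data are ours):

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# None in the first dimension lets arrays with any number of rows be fed
batch_input = tf.compat.v1.placeholder(tf.float32, shape=(None, 3), name='batch_input')
row_sums = tf.reduce_sum(batch_input, axis=1)

data = np.arange(12, dtype=np.float32).reshape(4, 3)  # a 4 x 3 NumPy array

with tf.compat.v1.Session() as sess:
    print(sess.run(row_sums, feed_dict={batch_input: data}))  # [ 3. 12. 21. 30.]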

Read: Gradient descent optimizer TensorFlow

TensorFlow feed_dict tensor

  • Here we will discuss how to feed values into placeholder tensors through feed_dict by using TensorFlow.
  • To perform this particular task we will use the tf.compat.v1.placeholder() function; a placeholder is a tensor that receives its data through feed_dict and feeds values into a computation graph.

Example:

Let's take an example and understand how we can feed tensors through feed_dict.

import tensorflow as tf


tf.compat.v1.disable_eager_execution()
input_tensor_1= tf.compat.v1.placeholder(tf.int32)
input_tensor_2 = tf.compat.v1.placeholder(tf.int32)
result = tf.add(input_tensor_1, input_tensor_2)
with tf.compat.v1.Session() as val:
    new_output=val.run(result, feed_dict={input_tensor_1: 156, input_tensor_2: 278})
    print(new_output)
  • In the above code, we imported the TensorFlow library with the alias name 'tf' and declared two placeholders of datatype tf.int32.
  • After that, we built the addition operation with tf.add(), disabled eager execution with tf.compat.v1.disable_eager_execution(), and created a session with tf.compat.v1.Session().
  • While running the session we passed feed_dict as an argument to val.run(), so the output is 434, the sum of the two fed values.

Here is the Screenshot of the above code

TensorFlow feed_dict tensor

This is how we can feed values to input tensors through feed_dict.
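
Although feed_dict is usually paired with placeholders, a session can feed a value for most tensors in the graph, including constants, and the fed value simply overrides that tensor for the run. Here is a minimal sketch (the names a, b and c are ours):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.constant(2)
b = tf.constant(5)
c = a * b

with tf.compat.v1.Session() as sess:
    print(sess.run(c))                    # 10, computed from the constants
    print(sess.run(c, feed_dict={a: 7}))  # 35, the fed value overrides a for this run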

Read: TensorFlow clip_by_value

TensorFlow feed_dict list

  • In this section, we will discuss how to feed a list through feed_dict by using TensorFlow.
  • To perform this particular task we are going to use a for loop while running the session.
  • In this example, we will use the tf.compat.v1.placeholder() function, and then by using the for loop we can iterate over the placeholder elements; the whole list is assigned to the placeholder in feed_dict on every iteration.

Example:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

tens = tf.compat.v1.placeholder(tf.int32, shape=[4])
with tf.compat.v1.Session() as val:
    for i in range(4):
        print(val.run(tens[i], feed_dict={tens : [12,27,95,13]}))

In the above code, we imported the TensorFlow library, disabled eager execution, created a placeholder of shape [4], and then printed each element of the fed list inside the loop, so the output is 12, 27, 95 and 13 on separate lines.

You can refer to the below Screenshot

TensorFlow feed_dict list

As you can see in the Screenshot we have assigned the list in feed_dict
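
A nested Python list can also be fed directly to a two-dimensional placeholder; the session converts it to an array. Here is a minimal sketch (the names matrix and doubled are ours):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A plain nested Python list can be fed directly; it is converted to an array
matrix = tf.compat.v1.placeholder(tf.int32, shape=[2, 3], name='matrix_input')
doubled = matrix * 2

with tf.compat.v1.Session() as sess:
    print(sess.run(doubled, feed_dict={matrix: [[1, 2, 3], [4, 5, 6]]}))
    # [[ 2  4  6]
    #  [ 8 10 12]]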

Read: TensorFlow Multiplication

TensorFlow feed_dict batch

  • In this section, we will discuss how to feed data in batches with feed_dict by using TensorFlow.
  • To do this task we are going to build a dataset with the tf.data.Dataset.from_tensor_slices() function and set the batch size and the number of epochs on it with .batch() and .repeat().
  • Next, we will declare variables for the batch size and epoch values and then use the tf.compat.v1.variable_scope() function; it declares new variables and works as expected when eager execution is disabled.

Example:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()


def ds_train(new_val_size, new_epochs):
    # Build a dataset of (feature, label) pairs, batch it and repeat it for the given epochs
    new_val = (tf.data.Dataset.from_tensor_slices(([16.2, 76.2, 38.4, 11.6, 19.3], [-12, -15, -28, -45, -89]))
               .batch(new_val_size)
               .repeat(new_epochs))
    return new_val


new_val_size = 1
input_size = 1
new_epochs = 2

with tf.compat.v1.variable_scope("dataset"):
    result = ds_train(new_val_size, new_epochs)

with tf.compat.v1.variable_scope("iterator"):
    val_iterate = tf.compat.v1.data.make_initializable_iterator(result)
    new_iterate_handle = tf.compat.v1.placeholder(tf.string, shape=[])
    iterator = tf.compat.v1.data.Iterator.from_string_handle(new_iterate_handle,
                                                             val_iterate.output_types,
                                                             val_iterate.output_shapes)

    def next_item():
        # Pull the next (feature, label) batch from the iterator
        new_element = iterator.get_next(name="new_element")
        m, n = tf.cast(new_element[0], tf.float32), new_element[1]
        return m, n


# Variables that cache the most recently fetched batch
inputs = tf.compat.v1.Variable(tf.zeros(shape=[new_val_size, input_size]), dtype=tf.float32,
                               name="inputs", trainable=False, use_resource=True)
target = tf.compat.v1.Variable(tf.zeros(shape=[new_val_size], dtype=tf.int32), dtype=tf.int32,
                               name="target", trainable=False, use_resource=True)
is_new = tf.compat.v1.placeholder_with_default(tf.constant(False), shape=[], name="new_item_flag")


def new_data(new_val_size, input_size):
    # Advance the iterator and store the new batch in the cache variables
    next_inputs, next_target = next_item()
    next_inputs = tf.reshape(next_inputs, shape=[new_val_size, input_size])
    with tf.control_dependencies([tf.compat.v1.assign(inputs, next_inputs),
                                  tf.compat.v1.assign(target, next_target)]):
        return tf.identity(inputs), tf.identity(target)


def old_data():
    # Reuse the previously cached batch without advancing the iterator
    return inputs, target


inputs, target = tf.cond(is_new, lambda: new_data(new_val_size, input_size), old_data)

with tf.compat.v1.Session() as sess:
    sess.run([tf.compat.v1.global_variables_initializer(), tf.compat.v1.local_variables_initializer()])
    handle_t = sess.run(val_iterate.string_handle())
    sess.run(val_iterate.initializer)
    while True:
        try:
            # is_new=False reuses the cached batch; is_new=True fetches the next one
            print(sess.run([inputs, target], feed_dict={new_iterate_handle: handle_t, is_new: False}))
            print(sess.run([inputs, target], feed_dict={new_iterate_handle: handle_t, is_new: False}))
            print(sess.run([inputs, target], feed_dict={new_iterate_handle: handle_t, is_new: True}))
        except tf.errors.OutOfRangeError:
            print("End of training dataset.")
            break

In the above code, we used the tf.compat.v1.data.Iterator.from_string_handle() function, which selects which iterator to read from based on a string handle fed at run time, and we declared caching variables with tf.compat.v1.Variable(). Feeding is_new: True through feed_dict advances the iterator to the next batch, while is_new: False reuses the cached batch.

Here is the Screenshot of the above code

TensorFlow feed_dict batch

This is how we can use batches with feed_dict by using TensorFlow.
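
For comparison, the more common feed_dict batching pattern simply slices a NumPy array and feeds one slice per run call. Here is a minimal sketch (the names features, batch_mean and data are ours):

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Placeholder with an unspecified batch dimension
features = tf.compat.v1.placeholder(tf.float32, shape=(None, 2), name='features')
batch_mean = tf.reduce_mean(features, axis=0)

data = np.arange(12, dtype=np.float32).reshape(6, 2)  # 6 samples, 2 features
batch_size = 2

with tf.compat.v1.Session() as sess:
    # Slice the array into batches and feed one batch per run call
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]
        print(sess.run(batch_mean, feed_dict={features: batch}))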

Read: Python TensorFlow Placeholder

TensorFlow session run without feed_dict

  • Here we will run a session without using feed_dict in TensorFlow.
  • In this example, we are going to use tf.compat.v1.Session(), which is a class for running a session. To do this task, we first import the TensorFlow library with the alias tf, which is used for numerical computation. Next, we create tensors with the tf.constant() function.
  • In Python, tf.constant() creates a tensor with a fixed value that cannot be modified, and it accepts scalars as well as array-like objects such as lists.
  • Here we apply the multiplication operation (*) at node result. To run the session we use new_output.run(result); since there are no placeholders in the graph, no feed_dict is required.

Syntax:

Here is the Syntax of tf.compat.v1.Session() function in Python TensorFlow

tf.compat.v1.Session(
    target='', graph=None, config=None
)
  • It consists of a few parameters.
    • target: An optional parameter; the execution engine to connect to. By default it uses the in-process engine.
    • graph: The graph to launch; an optional parameter.
    • config: Defaults to None; an optional tf.compat.v1.ConfigProto protocol buffer with configuration options for the session.
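
As a quick illustration of the config parameter, a tf.compat.v1.ConfigProto can be passed in. Here is a minimal sketch (log_device_placement simply logs which device each operation runs on):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Log which device each operation is placed on
config = tf.compat.v1.ConfigProto(log_device_placement=True)

product = tf.constant(3.0) * tf.constant(4.0)

with tf.compat.v1.Session(config=config) as sess:
    print(sess.run(product))  # 12.0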

Example:

Let's take an example and understand the working of the tf.compat.v1.Session() function by creating a session without feed_dict.

import tensorflow as tf
tf.compat.v1.disable_eager_execution()
# Build tensors with fixed values
tens_1 = tf.constant(12.0)
tens_2 = tf.constant(16.0)
result = tens_1 * tens_2

new_output = tf.compat.v1.Session()
# Display the content; no feed_dict is needed because there are no placeholders
print(new_output.run(result))  # 192.0
new_output.close()

You can refer to the below Screenshot

TensorFlow session run without feed_dict

This is how we can run the session without using feed_dict

Read: Convert list to tensor TensorFlow

TensorFlow eval feed_dict

  • In this section, we will discuss how to use eval() with feed_dict by using TensorFlow.
  • To perform this particular task we are going to use tf.Graph(); a graph defines the operations and the tensors (units of data) that flow between them.
  • In this example, we will use the tf.compat.v1.placeholder() function, passing the datatype and shape as arguments, and then evaluate the result with Tensor.eval(), which accepts feed_dict and uses the default session created by the with block.

Example:

import tensorflow as tf
import numpy as np
tf.compat.v1.disable_eager_execution()

new_graph = tf.Graph()
with new_graph.as_default():
    # Two 2 x 2 integer placeholders and their element-wise product
    new_tens = tf.compat.v1.placeholder(tf.int32, shape=(2, 2))
    new_tens2 = tf.compat.v1.placeholder(tf.int32, shape=(2, 2))
    b = tf.math.multiply(new_tens, new_tens2)

new_arr = np.zeros((2, 2))

with tf.compat.v1.Session(graph=new_graph) as session:
    # eval() uses the default session created by the with block
    output = b.eval(feed_dict={new_tens: new_arr, new_tens2: new_arr})
    print(output)

Here is the output of the above code

TensorFlow eval feed_dict
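
Since both feeds are arrays of zeros, the element-wise product printed here is a 2 by 2 array of zeros. This is how we can evaluate a tensor with feed_dict by using eval() in TensorFlow.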

Read: Python TensorFlow expand_dims

TensorFlow cannot interpret the feed_dict key as tensor

  • In this section, we will discuss the error message 'TensorFlow cannot interpret the feed_dict key as a tensor'.
  • This error is raised when the feed_dict key is a string that names an operation rather than a tensor. In this example, we create a placeholder with tf.compat.v1.placeholder(), passing the datatype, shape, and operation name.
  • Next, we run the session, feeding the value [34, 78, 18]; feeding it under the tensor object works, but feeding it under the bare name 'tens_1' raises the error.

Example:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

tens_1 = tf.compat.v1.placeholder(tf.float32, (None,), 'tens_1')
tens_2 = tf.reduce_sum(tens_1)
sess = tf.compat.v1.Session()

# Feeding with the tensor object works
sess.run(tens_2, {tens_1: [34, 78, 18]})
# Feeding with the bare operation name raises:
# TypeError: Cannot interpret feed_dict key as Tensor: The name 'tens_1'
# refers to an Operation, not a Tensor
sess.run(tens_2, {'tens_1': [34, 78, 18]})
TensorFlow cannot interpret the feed_dict key as tensor

The solution to this error

import tensorflow as tf
tf.compat.v1.disable_eager_execution()
# Creation of the placeholder tensor
tens_1 = tf.compat.v1.placeholder(tf.float32, (None,), 'x')
tens_2 = tf.reduce_sum(tens_1)
sess = tf.compat.v1.Session()
# Use the tensor object itself as the feed_dict key
print(sess.run(tens_2, {tens_1: [14, 16, 18]}))

Here is the Screenshot of the above code

Solution of TensorFlow cannot interpret the feed_dict key as tensor

This is how we can solve the 'TensorFlow cannot interpret the feed_dict key as Tensor' error.
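
As a further check, a string key also works as long as it is the full tensor name with its output index (here 'x:0', because the placeholder was named 'x'); only the bare operation name triggers the error. Here is a minimal sketch:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

tens_1 = tf.compat.v1.placeholder(tf.float32, (None,), 'x')
tens_2 = tf.reduce_sum(tens_1)

with tf.compat.v1.Session() as sess:
    # Both of these feeds are accepted
    print(sess.run(tens_2, {tens_1: [14, 16, 18]}))  # 48.0
    print(sess.run(tens_2, {'x:0': [14, 16, 18]}))   # 48.0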


In this tutorial, we learned about the TensorFlow feed_dict in Python, and we covered the following topics:

  • TensorFlow feed_dict multiple values
  • TensorFlow feed_dict numpy array
  • TensorFlow feed_dict tensor
  • TensorFlow feed_dict list
  • TensorFlow feed_dict batch
  • TensorFlow session run without feed_dict
  • TensorFlow eval feed_dict
  • TensorFlow cannot interpret the feed_dict key as tensor