While I was working on a deep learning project, I needed to access and manipulate individual elements within TensorFlow tensors. The challenge was that tensors are not as simple to iterate over as regular Python lists.
After spending hours experimenting with different approaches, I discovered several effective methods to iterate over tensors in TensorFlow.
In this tutorial, I’ll share these methods with you, explaining each one with practical examples.
So let’s dive in!
Tensors in TensorFlow
Tensors are the primary data structure in TensorFlow. They are multi-dimensional arrays that flow through the computational graph.
Think of tensors as containers that hold data with uniform type and shape. Before we learn how to iterate over them, it’s important to understand their structure.
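To make that concrete, here is a quick look (my own minimal example, not part of the original walkthrough) at the shape, dtype, and rank of a small tensor:

```python
import tensorflow as tf

# A rank-2 tensor: 2 rows, 3 columns, one uniform dtype
t = tf.constant([[1, 2, 3], [4, 5, 6]])

print(t.shape)   # the static shape: (2, 3)
print(t.dtype)   # the element type: int32
print(t.ndim)    # the rank (number of dimensions): 2
```

Knowing the rank tells you how many nested loops (or how many levels of `tf.map_fn`) an element-wise traversal would need.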
Method 1: Use Basic For Loops
The simplest way to iterate over a tensor is by using a standard Python for loop.
import tensorflow as tf
# Create a simple 1D tensor
tensor = tf.constant([1, 2, 3, 4, 5])
# Iterate using a for loop
for element in tensor:
    print(element.numpy())

Output:
1
2
3
4
5
This approach works well for 1D tensors. The .numpy() method converts each tensor element to a NumPy array, making it easier to work with.
For multi-dimensional tensors, we can use nested loops:
# Create a 2D tensor
tensor_2d = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Iterate using nested loops
for row in tensor_2d:
    for element in row:
        print(element.numpy(), end=' ')
    print()
Method 2: Use tf.map_fn Function
When you need to apply a function to each element of a tensor, tf.map_fn provides an efficient way to do this.
# Define a function to apply to each element
def double_value(x):
    return x * 2
# Create a tensor
tensor = tf.constant([1, 2, 3, 4, 5])
# Apply function to each element
result = tf.map_fn(double_value, tensor)
print(result.numpy())

Output:
[ 2  4  6  8 10]
This method is particularly useful when you need to perform complex operations on each element.
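One detail worth noting when the operation gets more complex: if the mapped function changes the element dtype, `tf.map_fn` has to be told the output type. A minimal sketch (my own addition, assuming TF 2.3+ where `fn_output_signature` is available):

```python
import tensorflow as tf

tensor = tf.constant([1, 2, 3, 4, 5])

# The function returns float32 from int32 input, so
# fn_output_signature must describe the output dtype
result = tf.map_fn(
    lambda x: tf.cast(x, tf.float32) / 2.0,
    tensor,
    fn_output_signature=tf.float32,
)
print(result.numpy())  # halves of each element, as float32
```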
Method 3: Use while Loops with tf.TensorArray
For more control over iteration, especially when building dynamic computations, you can use tf.TensorArray with a while loop:
# Create a tensor
tensor = tf.constant([1, 2, 3, 4, 5])
n = tensor.shape[0]
# Create a TensorArray to store results
result_array = tf.TensorArray(tf.int32, size=n)
# Set up loop variables
i = tf.constant(0)
# Define the condition and body for the while loop
def condition(i, _):
    return i < n

def body(i, result_array):
    # Get the element at index i and process it
    value = tensor[i] * 3
    # Write the processed value to the result array
    result_array = result_array.write(i, value)
    return i + 1, result_array
# Run the while loop
_, result_array = tf.while_loop(condition, body, [i, result_array])
# Stack the results into a tensor
result = result_array.stack()
print(result.numpy())

Output:
[ 3  6  9 12 15]
This approach is more complex but gives you fine-grained control over the iteration process.
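As a variation on the pattern above (my own sketch, not from the original), a `TensorArray` created with `dynamic_size=True` can grow as the loop runs, which helps when the output length is not fixed in advance:

```python
import tensorflow as tf

tensor = tf.constant([1, 2, 3, 4, 5])

# size=0 plus dynamic_size=True lets the array grow on each
# write instead of being preallocated to a fixed length
ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)

i = tf.constant(0)
n = tensor.shape[0]

def condition(i, ta):
    return i < n

def body(i, ta):
    # Square each element and append it to the array
    ta = ta.write(i, tensor[i] * tensor[i])
    return i + 1, ta

_, ta = tf.while_loop(condition, body, [i, ta])
print(ta.stack().numpy())  # squares of each element
```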
Method 4: Use tf.keras.layers.Lambda for Batched Data
When working with batched data in a model, you can use Lambda layers to apply operations to each element:
import tensorflow as tf
from tensorflow.keras.layers import Lambda
# Create a batch of data
batch_data = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Define a Lambda layer that processes each example in the batch
def process_batch(x):
    return tf.map_fn(lambda ex: ex * 2, x)
lambda_layer = Lambda(process_batch)
result = lambda_layer(batch_data)
print(result.numpy())

This method integrates well with Keras models and is optimized for performance with larger datasets.
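To see how this fits into an actual model, here is a minimal sketch of my own (the two-example float batch is assumed data, not from the tutorial) that places a doubling `Lambda` layer inside a `Sequential` model:

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Lambda

# A tiny model whose only layer doubles its input; the batch
# flows through it exactly as it would inside a larger network
model = Sequential([
    Input(shape=(3,)),
    Lambda(lambda x: x * 2),
])

batch_data = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(model(batch_data).numpy())
```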
Method 5: Use NumPy Bridge for Complex Operations
Sometimes, you might need to perform operations that are more easily done with NumPy. In such cases, you can convert to NumPy, process the data, and convert back:
# Create a tensor
tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
# Convert to NumPy, modify, and convert back
numpy_array = tensor.numpy()
for i in range(numpy_array.shape[0]):
    for j in range(numpy_array.shape[1]):
        numpy_array[i, j] = numpy_array[i, j] ** 2
# Convert back to tensor
processed_tensor = tf.constant(numpy_array)
print(processed_tensor.numpy())

This approach can be useful for complex operations but may introduce performance overhead due to the conversions.
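For comparison, the same element-wise squaring can usually be done without leaving TensorFlow at all. A vectorized sketch (my own addition):

```python
import tensorflow as tf

tensor = tf.constant([[1, 2, 3], [4, 5, 6]])

# One vectorized op replaces the nested Python loops and
# keeps the computation on the device (CPU or GPU)
squared = tf.square(tensor)
print(squared.numpy())
```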
Performance Considerations
When iterating over tensors in TensorFlow, keep these performance tips in mind:
- Vectorization: Whenever possible, use vectorized operations instead of element-wise iteration.
- GPU Utilization: Methods like tf.map_fn can leverage GPU acceleration, while converting to NumPy may not.
- Batch Processing: Process data in batches rather than individual elements when working with large datasets.
- Eager vs. Graph Mode: Some iteration methods perform differently in eager execution versus graph compilation.
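To illustrate the vectorization point, here is a small side-by-side sketch (my own) showing that a single vectorized op produces the same result as element-wise mapping, while dispatching one kernel instead of one per element:

```python
import tensorflow as tf

tensor = tf.range(1, 6)  # [1 2 3 4 5]

# Element-wise iteration over the tensor...
looped = tf.map_fn(lambda x: x * 2, tensor)

# ...versus one vectorized op: identical results, less overhead
vectorized = tensor * 2

print(looped.numpy(), vectorized.numpy())
```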
Practical Example: Process US Census Data
Let’s apply these concepts to a real-world example using US census data:
# Population samples (in thousands) paired with state land areas
# (in square miles); the population numbers are illustrative
# sample values, not actual census figures
population_data = tf.constant([
    [8804, 710],
    [6950, 3957],
    [1380, 12700]
])
state_area = tf.constant([[163696, 97813], [268596, 59425], [10932, 54555]])

# Define a normalization function for one row of each tensor
def normalize_population(elems):
    population, area = elems
    # Convert thousands of people to people per square mile
    return population * 1000 / area

# Map over the two tensors row by row in lockstep; the integer
# division yields float64, so fn_output_signature must say so
normalized = tf.map_fn(normalize_population, (population_data, state_area),
                       fn_output_signature=tf.float64)
print("Population density (people per square mile):")
print(normalized.numpy())

This example shows how to iterate over and transform tensor data representing state population samples.
I hope this article has helped you understand how to iterate over tensors in TensorFlow. These methods have been invaluable in my deep learning projects, and I’m confident they’ll help you too.
Remember to choose the iteration method that best fits your specific use case, considering factors like tensor dimensionality, performance requirements, and integration with your broader TensorFlow workflow.
Other TensorFlow articles you may also like:
- Compile Neural Network in Tensorflow
- Build an Artificial Neural Network in Tensorflow
- Training a Neural Network in TensorFlow

I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last five years. During this time I have gained expertise in various Python libraries such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, and Scikit-Learn, working with clients in the United States, Canada, the United Kingdom, Australia, New Zealand, and other countries. Check out my profile.