Recently, while working on a deep learning project in TensorFlow, I ran into an error that stopped me in my tracks:
```
AttributeError: module 'tensorflow' has no attribute 'truncated_normal_initializer'
```

At first, I thought I had a typo in my code. But after checking a few times, I realized the issue was with TensorFlow itself.
If you’ve been coding in TensorFlow for a while, you probably know that many functions from TensorFlow 1.x have been removed or moved in TensorFlow 2.x. This is exactly what’s happening here.
In this tutorial, I’ll show you step-by-step how I fixed this error. I’ll also share multiple methods so you can choose the one that works best for your project.
Why Does This Error Happen?
The function truncated_normal_initializer was part of the TensorFlow 1.x API. In TensorFlow 2.x, it was removed from the top-level tf namespace, and its replacements live under other namespaces (tf.keras.initializers and tf.compat.v1).
So, if you’re using TensorFlow 2.x (which most of us are today), calling it directly like this:
```python
tf.truncated_normal_initializer()
```

…will throw the error.
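Before applying any fix, it helps to confirm which TensorFlow you are actually running and where the initializer now lives. A quick sketch:

```python
import tensorflow as tf

# Which TensorFlow version am I actually running?
print(tf.__version__)

# The old TF 1.x name is typically gone from the top-level namespace in TF 2.x...
print(hasattr(tf, 'truncated_normal_initializer'))

# ...but its replacement lives under tf.keras.initializers
print(hasattr(tf.keras.initializers, 'TruncatedNormal'))
```

If the last line prints True, you can use Method 1 below as a drop-in replacement.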
Method 1 – Use tf.keras.initializers.TruncatedNormal
The easiest fix is to use the Keras initializer that comes with TensorFlow 2.x.
Here’s an example:
```python
import tensorflow as tf

# Define a TruncatedNormal initializer
initializer = tf.keras.initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=42)

# Example: create a variable with this initializer
weights = tf.Variable(initializer(shape=(3, 3)), dtype=tf.float32)

print("Weights initialized with TruncatedNormal:")
print(weights.numpy())
```

This works perfectly in TensorFlow 2.x. The TruncatedNormal initializer is the modern replacement for the old truncated_normal_initializer.
Method 2 – Use tf.random.truncated_normal
If you don’t want an initializer object and just need random values, you can directly use tf.random.truncated_normal.
Example:
```python
import tensorflow as tf

# Generate random values from a truncated normal distribution
values = tf.random.truncated_normal(shape=[3, 3], mean=0.0, stddev=0.05, seed=42)

print("Random values with truncated normal distribution:")
print(values.numpy())
```

This is useful when you just want raw values without wrapping them inside an initializer.
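A useful property to know here: tf.random.truncated_normal redraws any sample that falls more than two standard deviations from the mean, so every value is guaranteed to lie within mean ± 2 × stddev. A quick check of that bound:

```python
import tensorflow as tf

# Values beyond 2 standard deviations are dropped and re-picked,
# so everything lands in [mean - 2*stddev, mean + 2*stddev] = [-0.1, 0.1]
values = tf.random.truncated_normal(shape=[1000], mean=0.0, stddev=0.05, seed=42)

max_abs = tf.reduce_max(tf.abs(values)).numpy()
print("Largest magnitude:", max_abs)  # always <= 0.1 here
```

This truncation is exactly why the initializer is popular for neural network weights: it avoids the occasional extreme values a plain normal distribution can produce.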
Method 3 – Use Compatibility Mode (tf.compat.v1)
If you’re migrating old TensorFlow 1.x code and don’t want to rewrite everything right away, you can use the compatibility module.
Here’s how:
```python
import tensorflow as tf

# Use the TensorFlow v1 compatibility module
initializer = tf.compat.v1.truncated_normal_initializer(mean=0.0, stddev=0.05, seed=42)

# Example variable
weights = tf.Variable(initializer(shape=(3, 3)), dtype=tf.float32)

print("Weights initialized using compat.v1:")
print(weights.numpy())
```

This is not the recommended long-term solution, but it’s quick and helps when you’re upgrading old projects.
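If a legacy codebase calls truncated_normal_initializer in many places, one stopgap (purely a sketch, not the recommended end state) is to alias the compat function once at the top of the module so the old call sites keep working unchanged:

```python
import tensorflow as tf

# Stopgap alias so legacy TF 1.x call sites keep working unchanged.
# Long term, switch these over to tf.keras.initializers.TruncatedNormal.
truncated_normal_initializer = tf.compat.v1.truncated_normal_initializer

# Legacy-style code below this line runs as-is
initializer = truncated_normal_initializer(mean=0.0, stddev=0.05)
weights = tf.Variable(initializer(shape=(2, 2)), dtype=tf.float32)
print(weights.shape)  # (2, 2)
```

This keeps the migration diff small while you upgrade the rest of the project incrementally.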
Method 4 – Full Example (Neural Network Weights Initialization)
Let me show you how I used the fix in a small neural network example.
Suppose I’m working on a classification task with U.S. Census data (income prediction). I want my model weights initialized with a truncated normal distribution.
Here’s the complete code:
```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Define the initializer
initializer = tf.keras.initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=42)

# Build a simple model
model = models.Sequential([
    layers.Dense(64, activation='relu', kernel_initializer=initializer, input_shape=(10,)),
    layers.Dense(32, activation='relu', kernel_initializer=initializer),
    layers.Dense(1, activation='sigmoid', kernel_initializer=initializer)
])

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Print the model summary
model.summary()
```

Here, I replaced the old truncated_normal_initializer with tf.keras.initializers.TruncatedNormal. This way, the code runs smoothly in TensorFlow 2.x.
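After swapping initializers, a quick sanity check I find useful is to build a single Dense layer explicitly and confirm its kernel actually respects the two-standard-deviation truncation bound (the layer here is just for illustration, not part of the model above):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Same initializer settings as in the model
initializer = tf.keras.initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=42)

# Build one Dense layer explicitly so we can inspect its kernel
layer = layers.Dense(64, kernel_initializer=initializer)
layer.build((None, 10))  # 10 input features

kernel = layer.kernel.numpy()
print("Kernel shape:", kernel.shape)       # (10, 64)
print("Max |weight|:", abs(kernel).max())  # bounded by 2 * stddev = 0.1
```

If the largest magnitude ever exceeded 0.1 here, the initializer would not be the truncated normal you asked for.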
Which Method Should You Use?
- If you’re writing new TensorFlow 2.x code → Use tf.keras.initializers.TruncatedNormal.
- If you just need random values → Use tf.random.truncated_normal.
- If you’re migrating old TensorFlow 1.x code → Use tf.compat.v1.truncated_normal_initializer.
When I first saw this error, it looked intimidating. But once I understood that TensorFlow 2.x had reorganized its functions, the fix was simple.
Now, whenever I run into such errors, I immediately check if the function has been moved to tf.keras or tf.random. Most of the time, that's the solution.
I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last 5 years. During this time I have gained expertise in various Python libraries such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, and Scikit-Learn, working for clients in the United States, Canada, the United Kingdom, Australia, New Zealand, and other countries. Check out my profile.