While working on a deep learning project, I used some code from an older TensorFlow tutorial. However, when I attempted to run it, I encountered an error: ModuleNotFoundError: No module named 'tensorflow.contrib'. The issue is that the contrib module was removed in TensorFlow 2.0 and above.
In this article, I will explain why this error occurs and provide several practical methods to fix it. Whether you need to migrate your code to TensorFlow 2.x or explore alternative solutions, I have you covered.
Let’s get into it!
Why Does This Error Occur?
The tensorflow.contrib module was a collection of experimental code and extensions in TensorFlow 1.x. However, when TensorFlow 2.0 was released, the entire contrib module was removed from the library.
If you’re using TensorFlow 2.x (like the current version 2.19.0) and your code or a library you’re using tries to import from tensorflow.contrib, you’ll encounter this error.
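If your code needs to run on both TensorFlow versions, you can probe for the module before importing it instead of letting the import crash. A minimal sketch (the helper name is my own):

```python
import importlib.util

def contrib_available() -> bool:
    """Return True only if tensorflow.contrib can actually be imported
    (i.e., a TensorFlow 1.x installation)."""
    try:
        return importlib.util.find_spec("tensorflow.contrib") is not None
    except ModuleNotFoundError:
        # Raised if TensorFlow itself is not installed at all
        return False

print("tensorflow.contrib available:", contrib_available())
```

On any TensorFlow 2.x installation this prints False, which tells you up front that the code paths below are the ones you need.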
Method 1: Use TensorFlow 1.x Compatibility Mode
TensorFlow 2.x includes a compatibility module that allows you to run TensorFlow 1.x code. This is the simplest solution if you don’t want to rewrite your code.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
# Now you can use many TensorFlow 1.x APIs, but contrib is still not available
However, it’s important to note that even with compatibility mode, the contrib module cannot be accessed, as it has been completely removed.
Method 2: Find Alternative APIs in TensorFlow 2.x
The best long-term solution is to update your code to use the equivalent APIs in TensorFlow 2.x. Many functions from contrib have been moved to the core TensorFlow API or other packages.
Here’s an example of migrating code that used tf.contrib.layers.xavier_initializer():
# Old code (TensorFlow 1.x with contrib)
import tensorflow as tf
initializer = tf.contrib.layers.xavier_initializer()
# New code (TensorFlow 2.x)
import tensorflow as tf
initializer = tf.keras.initializers.GlorotUniform()  # Xavier/Glorot initialization
Another example is migrating tf.contrib.rnn code:
# Old code (TensorFlow 1.x with contrib)
import tensorflow as tf
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=128)
# New code (TensorFlow 2.x)
import tensorflow as tf
lstm_cell = tf.keras.layers.LSTMCell(units=128)
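The two initializer names really are the same thing: Xavier/Glorot uniform draws weights from a uniform distribution whose bound depends only on the layer’s fan-in and fan-out. A NumPy sketch of that rule (the formula is the standard Glorot one, the function name is my own):

```python
import numpy as np

def glorot_uniform_limit(fan_in: int, fan_out: int) -> float:
    # Xavier/Glorot uniform samples from U(-limit, limit)
    # with limit = sqrt(6 / (fan_in + fan_out))
    return np.sqrt(6.0 / (fan_in + fan_out))

rng = np.random.default_rng(0)
limit = glorot_uniform_limit(128, 64)
weights = rng.uniform(-limit, limit, size=(128, 64))
print("limit:", round(limit, 4))
```

This is why swapping tf.contrib.layers.xavier_initializer() for tf.keras.initializers.GlorotUniform() changes nothing about the trained model.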
Method 3: Use TensorFlow Addons
Many features from contrib were moved to TensorFlow Addons, a community-maintained collection of extensions for TensorFlow 2.x. Note that TensorFlow Addons itself has since entered minimal-maintenance mode, so also check whether the feature you need has moved into core TensorFlow or Keras.
Here’s how to use it:
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
# Create a sample 2D image (grayscale)
image = tf.constant(np.arange(1, 10, dtype=np.float32), shape=(3, 3, 1))
# Rotate the image by 90 degrees (in radians)
rotated_image = tfa.image.rotate(image, angles=tf.constant(np.pi / 2))
# Display result
print("Original image:")
print(image.numpy().squeeze())
print("\nRotated image:")
print(rotated_image.numpy().squeeze())
Output:
Original image:
[[1. 2. 3.]
[4. 5. 6.]
[7. 8. 9.]]
Rotated image:
[[3. 6. 9.]
[2. 5. 8.]
[1. 4. 7.]]
TensorFlow Addons provides powerful extensions like tfa.image.rotate that replace deprecated tf.contrib features. It’s a reliable way to access advanced functionality while staying fully compatible with TensorFlow 2.x.
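If all you need is the rotation itself, NumPy can reproduce the same result without any TensorFlow dependency. A quick sanity check against the output above (np.rot90 rotates counter-clockwise, which matches a positive angle of pi/2 in this example):

```python
import numpy as np

# Same 3x3 sample image as in the TensorFlow Addons example
image = np.arange(1, 10, dtype=np.float32).reshape(3, 3)

# Rotate 90 degrees counter-clockwise
rotated = np.rot90(image)
print(rotated)
```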
Method 4: Use External Libraries That Replace Contrib Functionality
Some standalone libraries have replaced contrib modules. For example, Keras (now bundled with TensorFlow) replaces the old tf.contrib.learn workflow:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
# Prepare simple dummy data
x_train = np.array([[0.], [1.], [2.], [3.]], dtype=float)
y_train = np.array([[0.], [1.], [2.], [3.]], dtype=float)
# Build a simple model (instead of using tf.contrib.learn)
model = keras.Sequential([
    layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
# Train the model
model.fit(x_train, y_train, epochs=3, verbose=1)
# Make a prediction
result = model.predict(np.array([[4.0]]))
print("\nPredicted result for input 4.0 is:", result[0][0])
This example shows how external libraries like Keras now handle tasks previously managed by tf.contrib. Migrating to these supported APIs ensures better compatibility, maintainability, and future-proofing of your TensorFlow projects.
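Under the hood, the Keras model above is just fitting w * x + b with SGD on a mean-squared-error loss. A NumPy sketch of the same optimization, useful for seeing what model.fit is doing on this toy data (learning rate and iteration count are my own choices):

```python
import numpy as np

# Same dummy data as the Keras example: y = x
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 2.0, 3.0])

w, b, lr = 0.0, 0.0, 0.05
for _ in range(500):
    err = w * x + b - y
    # Gradients of the MSE loss with respect to w and b
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print("w:", round(w, 2), "b:", round(b, 2))
```

The loop converges toward w = 1, b = 0, which is why the Keras model predicts roughly 4.0 for an input of 4.0.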
Method 5: Downgrade to TensorFlow 1.x (Not Recommended)
While downgrading to TensorFlow 1.x would restore access to contrib, this is not recommended for new projects. TensorFlow 1.x no longer receives updates or security fixes, and you’ll miss out on the improvements in TensorFlow 2.x.
If you must use TensorFlow 1.x for compatibility reasons:
# Create a virtual environment first
python -m venv tf1_env
source tf1_env/bin/activate # On Windows: tf1_env\Scripts\activate
# Install TensorFlow 1.15 (the last 1.x release; requires Python 3.7 or earlier)
pip install tensorflow==1.15.0
Real-World Example: Migrate a Stock Prediction Model
Let’s look at a real-world example. Imagine you’re working on a stock prediction model for the US market that uses code originally written for TensorFlow 1.x:
# Old code (TensorFlow 1.x with contrib)
import tensorflow as tf
import numpy as np
import pandas as pd
# Load S&P 500 data
data = pd.read_csv('sp500_data.csv')
prices = data['Close'].values.reshape(-1, 1)
# Normalize data
def normalize_data(data):
    return (data - np.mean(data)) / np.std(data)
normalized_prices = normalize_data(prices)
# Create sequences
def create_sequences(data, seq_length):
    xs, ys = [], []
    for i in range(len(data) - seq_length):
        x = data[i:i+seq_length]
        y = data[i+seq_length]
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
seq_length = 10
X, y = create_sequences(normalized_prices, seq_length)
# Build model with tf.contrib
inputs = tf.placeholder(tf.float32, [None, seq_length, 1])
targets = tf.placeholder(tf.float32, [None, 1])
# Using contrib for LSTM cells
cell = tf.contrib.rnn.LSTMCell(64)
outputs, states = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
outputs = tf.layers.dense(outputs[:, -1, :], 1,
                          kernel_initializer=tf.contrib.layers.xavier_initializer())
loss = tf.reduce_mean(tf.square(outputs - targets))
optimizer = tf.train.AdamOptimizer(0.01).minimize(loss)
Here’s how to migrate this code to TensorFlow 2.19.0:
import tensorflow as tf
import numpy as np
import pandas as pd
# Load S&P 500 data
data = pd.read_csv('sp500_data.csv')
prices = data['Close'].values.reshape(-1, 1)
# Normalize data
def normalize_data(data):
    return (data - np.mean(data)) / np.std(data)
normalized_prices = normalize_data(prices)
# Create sequences
def create_sequences(data, seq_length):
    xs, ys = [], []
    for i in range(len(data) - seq_length):
        x = data[i:i+seq_length]
        y = data[i+seq_length]
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
seq_length = 10
X, y = create_sequences(normalized_prices, seq_length)
# Build model using Keras (TF 2.x approach)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(seq_length, 1)),
    tf.keras.layers.Dense(1, kernel_initializer='glorot_uniform')
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
              loss='mse')
# Train the model
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.1)
# Make predictions
predictions = model.predict(X)
Notice how we’ve replaced:
- tf.contrib.rnn.LSTMCell with tf.keras.layers.LSTM
- tf.contrib.layers.xavier_initializer() with 'glorot_uniform' (they’re the same initializer)
- The entire model structure with the Keras Sequential API
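The create_sequences helper is framework-independent, so you can verify the shapes it feeds the LSTM without installing TensorFlow at all. A quick check with dummy prices standing in for the CSV data:

```python
import numpy as np

def create_sequences(data, seq_length):
    # Same sliding-window helper as in the migration example
    xs, ys = [], []
    for i in range(len(data) - seq_length):
        xs.append(data[i:i + seq_length])
        ys.append(data[i + seq_length])
    return np.array(xs), np.array(ys)

# 20 fake "closing prices", shaped like the normalized column above
prices = np.arange(20, dtype=np.float32).reshape(-1, 1)
X, y = create_sequences(prices, seq_length=10)
print(X.shape, y.shape)  # 10 windows of length 10, one target each
```

The X shape of (samples, seq_length, 1) is exactly what the input_shape=(seq_length, 1) argument of the LSTM layer expects.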
Common contrib Modules and Their TF 2.x Replacements
Here’s a quick reference table for common contrib modules and their TensorFlow 2.x replacements:
| TensorFlow 1.x (contrib) | TensorFlow 2.x Replacement |
|---|---|
| tf.contrib.layers.xavier_initializer() | tf.keras.initializers.GlorotUniform() |
| tf.contrib.layers.variance_scaling_initializer() | tf.keras.initializers.VarianceScaling() |
| tf.contrib.rnn.LSTMCell | tf.keras.layers.LSTMCell |
| tf.contrib.opt.LazyAdamOptimizer | tf.keras.optimizers.Adam |
| tf.contrib.data.shuffle_and_repeat | dataset.shuffle().repeat() |
| tf.contrib.distribute | tf.distribute.Strategy |
| tf.contrib.layers.batch_norm | tf.keras.layers.BatchNormalization |
| tf.contrib.losses.softmax_cross_entropy | tf.keras.losses.CategoricalCrossentropy |
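The dataset.shuffle().repeat() row deserves a note: it means "reshuffle the data on every pass". A conceptual plain-Python sketch of that behavior (this is only an approximation; tf.data actually uses a buffer-based shuffle, and the function name here is my own):

```python
import random

def shuffle_and_repeat(items, count, seed=42):
    # Emit `count` passes over the data, reshuffling each pass
    rng = random.Random(seed)
    for _ in range(count):
        epoch = list(items)
        rng.shuffle(epoch)
        yield from epoch

stream = list(shuffle_and_repeat([1, 2, 3, 4], count=2))
print(stream)
```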
I hope you found this tutorial helpful for resolving the ModuleNotFoundError: No module named 'tensorflow.contrib' error. Remember that while migrating your code might take some effort, it’s worth it to take advantage of the improvements in TensorFlow 2.x.
The best approach is to find equivalent APIs in TensorFlow 2.x or utilize community-maintained packages such as TensorFlow Addons that offer similar functionality. This will ensure your code remains compatible with future versions of TensorFlow.
You may like to read:
- AttributeError: Module ‘tensorflow’ has no attribute ‘logging’
- AttributeError module ‘tensorflow’ has no attribute ‘summary’
- ModuleNotFoundError: No module named tensorflow Keras

I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last 5 years. During this time, I have gained expertise in various Python libraries such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, Scikit-Learn, etc., for various clients in the United States, Canada, the United Kingdom, Australia, New Zealand, etc. Check out my profile.