While working on a deep learning project, I ran into an error that left me scratching my head:
```
AttributeError: module 'tensorflow' has no attribute 'trainable_variables'
```

At first, I thought it was a version mismatch or maybe a typo in my code. But after digging deeper, I realized this error is quite common, especially when moving between TensorFlow 1.x and TensorFlow 2.x.
In this tutorial, I’ll show you exactly why this happens and how you can fix it. I’ll also share some code examples that I personally tested, so you don’t have to waste hours debugging.
Why Does This Error Happen?
In TensorFlow 1.x, we often used functions like:

```python
tf.trainable_variables()
```

But in TensorFlow 2.x, eager execution is the default and the API has changed: the trainable_variables function no longer exists at the module level. Instead, it is available as an attribute of layers and models, or through the tf.compat.v1 compatibility module.

So, if you call tf.trainable_variables() directly in TensorFlow 2.x, you'll see this error.
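A quick way to confirm which API your installation exposes is to probe the module with hasattr. This is a minimal check, assuming you are running TensorFlow 2.x:

```python
import tensorflow as tf

# In TF 2.x the module-level function is gone,
# but it survives under the compatibility namespace.
print(hasattr(tf, "trainable_variables"))            # False on TF 2.x
print(hasattr(tf.compat.v1, "trainable_variables"))  # True
```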
Method 1 – Use model.trainable_variables in TensorFlow 2.x
The simplest solution is to use the trainable_variables attribute of a model or layer.
Here’s an example:
```python
import tensorflow as tf

# Define a simple Sequential model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Build the model by calling it once
model(tf.random.normal([1, 32]))

# Access trainable variables
for var in model.trainable_variables:
    print(f"Name: {var.name}, Shape: {var.shape}")
```

You'll see the names and shapes of the weights and biases in your model. This method works perfectly in TensorFlow 2.x and is the recommended approach.
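Since each entry in the list is a tf.Variable, you can also use it to count parameters. Here's a small sketch that sums the element counts of every trainable variable and cross-checks the total against Keras's own count_params() (for this model: 32*64+64 plus 64*10+10, i.e. 2762):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
    tf.keras.layers.Dense(10, activation='softmax')
])
model(tf.random.normal([1, 32]))

# Sum the number of elements in every trainable variable
total = sum(int(tf.size(v)) for v in model.trainable_variables)
print(f"Total trainable parameters: {total}")  # 2762
```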
Method 2 – Use tf.compat.v1.trainable_variables()
If you’re migrating old TensorFlow 1.x code and don’t want to rewrite everything, you can use the compatibility module.
```python
import tensorflow as tf

# Disable eager execution to mimic TF1 behavior
tf.compat.v1.disable_eager_execution()

# Create variables
w = tf.Variable(tf.random.normal([10, 10]), name="weights")
b = tf.Variable(tf.zeros([10]), name="biases")

# List trainable variables
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    vars_list = tf.compat.v1.trainable_variables()
    for v in vars_list:
        print(v)
```
This is useful if you’re maintaining legacy code but still want to run it in TensorFlow 2.x.
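Under the hood, tf.compat.v1.trainable_variables() reads the TRAINABLE_VARIABLES graph collection, so variables created with trainable=False are skipped. A small sketch illustrating that distinction:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.Variable(tf.random.normal([4, 4]), name="weights")
b = tf.Variable(tf.zeros([4]), name="biases", trainable=False)

# Only w is trainable; b was created with trainable=False
train_vars = tf.compat.v1.trainable_variables()
all_vars = tf.compat.v1.global_variables()
print(len(train_vars), len(all_vars))  # 1 2
```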
Method 3 – Use the Keras trainable_weights Attribute
Another approach is the Keras-native trainable_weights attribute, which every layer and model exposes. This is especially handy if you're working with custom training loops. (Note: some older tutorials mention a tf.keras.backend.trainable_variables() helper, but no such function exists in TensorFlow 2.x; calling it raises its own AttributeError.)

```python
import tensorflow as tf

# Define a Sequential model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1)
])

# Build the model
model(tf.random.normal([1, 8]))

# Access trainable variables through the Keras attribute
for var in model.trainable_weights:
    print(f"Variable: {var.name}, Shape: {var.shape}")
```

This works just as well, since trainable_weights and trainable_variables return the same list, but I usually prefer model.trainable_variables since it's cleaner.
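One nice thing about the Keras attributes is that they split trainable from non-trainable weights. For example, BatchNormalization stores its moving statistics as non-trainable weights; here's a quick sketch of the split:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.BatchNormalization(),  # adds non-trainable moving stats
    tf.keras.layers.Dense(1)
])
model(tf.random.normal([1, 8]))

# Two Dense layers (kernel + bias each) plus BN gamma/beta = 6 trainable;
# BN moving_mean and moving_variance = 2 non-trainable
print(len(model.trainable_weights))      # 6
print(len(model.non_trainable_weights))  # 2
```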
Method 4 – Debug a Real-World Example
Let's say you're building a predictive model for U.S. housing prices using TensorFlow. You might define a model like this:

```python
import tensorflow as tf

# Example: predicting housing prices from 13 input features
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)  # Single output: the predicted price
])

# Build the model
model(tf.random.normal([1, 13]))

# Print trainable variables
print("Trainable Variables in Housing Model:")
for var in model.trainable_variables:
    print(f"{var.name} - {var.shape}")
```

This way, you can confirm that your model has the correct trainable parameters before moving on to training.
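In practice, model.trainable_variables is exactly the list you hand to tf.GradientTape in a custom training loop. A minimal single-step sketch, using random stand-in data purely for illustration:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(13,)),
    tf.keras.layers.Dense(1)
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# Random stand-in data: 16 samples, 13 features each
x = tf.random.normal([16, 13])
y = tf.random.normal([16, 1])

with tf.GradientTape() as tape:
    pred = model(x)
    loss = tf.reduce_mean(tf.square(pred - y))

# Differentiate the loss with respect to every trainable variable
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print(f"Loss after one step: {float(loss):.4f}")
```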
Common Mistakes to Avoid
- Calling tf.trainable_variables() directly in TF 2.x – this will always throw the error.
- Forgetting to build the model – some layers don't create their variables until you call the model once.
- Mixing TF1 and TF2 code – if you rely on tf.compat.v1.trainable_variables(), disable eager execution first; in eager mode the graph collections it reads are empty, so you'll get an empty list instead of your variables.
Conclusion
When I first saw the error “AttributeError: module ‘tensorflow’ has no attribute ‘trainable_variables’”, it felt confusing. But once I understood that TensorFlow 2.x changed how variables are managed, the fix was simple.
If you’re working in TensorFlow 2.x, always use model.trainable_variables. If you’re maintaining old code, tf.compat.v1.trainable_variables() is your friend.
Both methods work, but I highly recommend moving toward the TensorFlow 2.x approach; it’s cleaner, modern, and better supported.
I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last 5 years. During this time I gained expertise in various Python libraries, such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, Scikit-Learn, etc., for various clients in the United States, Canada, the United Kingdom, Australia, New Zealand, etc. Check out my profile.