Recently, I was working on a deep learning project where I needed to build a model that could predict housing prices in the US market. One of the fundamental concepts I had to master was how to properly use TensorFlow variables. The issue is, many tutorials don’t properly explain when and how to use variables in TensorFlow.
In this article, I’ll cover everything you need to know about TensorFlow variables, from basic creation to advanced usage patterns.
So let’s get into it!
TensorFlow Variables
TensorFlow variables are special tensors that can be modified during model training. Unlike regular tensors, variables persist across multiple calls to the same model.
In practical terms, when you’re building a neural network to predict housing prices, your model weights and biases are stored as variables because they need to be updated during training.
Here’s a simple example of creating a TensorFlow variable:
import tensorflow as tf
# Creating a simple variable
weights = tf.Variable([[1.0, 2.0], [3.0, 4.0]], name="weights")
Method 1: Create TensorFlow Variables
There are several ways to create variables in TensorFlow. Let me show you the most common approaches:
Use tf.Variable()
The simplest way to create a variable is with the tf.Variable() constructor:
# Create a variable with initial value
state_population = tf.Variable([8.8, 39.5, 29.1, 3.0], dtype=tf.float32, name="state_populations")
print(state_population)
Output:
<tf.Variable 'state_populations:0' shape=(4,) dtype=float32, numpy=array([ 8.8, 39.5, 29.1, 3. ], dtype=float32)>
In this example, I’ve created a variable representing the populations (in millions) of some US states.
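Once created, a variable exposes its shape, dtype, and trainable flag, and supports in-place element updates. A quick sketch (the values are the same made-up state populations as above):

```python
import tensorflow as tf

state_population = tf.Variable([8.8, 39.5, 29.1, 3.0],
                               dtype=tf.float32, name="state_populations")

# Inspect the variable's metadata
print(state_population.shape)      # (4,)
print(state_population.dtype)      # <dtype: 'float32'>
print(state_population.trainable)  # True

# Update a single element in place; the variable object stays the same
state_population[1].assign(39.6)
print(state_population.numpy()[1])
```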
Use Variable Factories
For more complex scenarios, TensorFlow provides initializer tensors like tf.ones, tf.zeros, and tf.random.normal:
# Create variables with specific initialization
weights = tf.Variable(tf.random.normal([1000, 500]), name="weights")
biases = tf.Variable(tf.zeros([500]), name="biases")
This approach is especially useful when initializing large neural network layers.
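Factory initializers also pair naturally with the trainable flag: bookkeeping values such as step counters or moving averages can be created as non-trainable variables so optimizers leave them alone. A small sketch:

```python
import tensorflow as tf

# Trainable parameters, initialized from factory tensors
weights = tf.Variable(tf.random.normal([1000, 500]), name="weights")

# A bookkeeping variable that gradient updates should never touch
global_step = tf.Variable(0, trainable=False, dtype=tf.int64, name="global_step")

print(weights.trainable)      # True
print(global_step.trainable)  # False
```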
Method 2: Manage Variable Scope
When building complex models, variable scopes help you organize and reuse variables.
Use tf.name_scope
# Create a named scope (for graph clarity in tools like TensorBoard)
with tf.name_scope("housing_model"):
    # Variables created here will have names prefixed with "housing_model/"
    weights = tf.Variable(tf.random.normal([10, 5]), name="weights")
    biases = tf.Variable(tf.zeros([5]), name="biases")

print(biases)
Output:
<tf.Variable 'housing_model/biases:0' shape=(5,) dtype=float32, numpy=array([0., 0., 0., 0., 0.], dtype=float32)>
Variable scopes are particularly helpful when you want to reuse parts of your model or when debugging complex architectures.
Use name_scope per Layer
You can also give each layer its own scope:
with tf.name_scope("layer_1"):
    weights_1 = tf.Variable(tf.random.normal([784, 256]), name="weights")
    biases_1 = tf.Variable(tf.zeros([256]), name="biases")
This helps organize your TensorBoard graphs and makes debugging easier.
Method 3: Variables in Keras Models
If you’re using Keras (which is now integrated with TensorFlow), variables are created implicitly when you define layers:
import numpy as np
# Define the model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)
])
# Create dummy input data (e.g., one sample with 10 features)
sample_input = np.random.rand(1, 10).astype(np.float32)
# Get output from the model
output = model(sample_input)
print(output)
Output:
tf.Tensor([[-0.23747908]], shape=(1, 1), dtype=float32)
Each layer creates and manages its variables. You can access them through:
# Access variables from the first layer
first_layer_weights = model.layers[0].weights[0]
first_layer_bias = model.layers[0].weights[1]
Method 4: Variable Operations
Working with variables involves operations like assignment, reading, and updating.
Assigning Values
You can assign new values to variables using assign():
# Create a variable
state_tax_rate = tf.Variable(0.06, name="sales_tax")
# Update its value
state_tax_rate.assign(0.07)
Variable Operations
TensorFlow provides special operations for variables:
counter = tf.Variable(0)
# Add to the variable
counter.assign_add(5) # counter is now 5
# Subtract from the variable
counter.assign_sub(2) # counter is now 3
Method 5: Use Variables with Optimizers
Variables are crucial when training models with optimizers:
# Define a simple model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1)
])
# Compile with optimizer
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='mse',
              metrics=['mae'])
In this example, the Adam optimizer automatically updates all model variables during training.
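To see the optimizer actually touching the variables, you can snapshot the weights, run a short fit on dummy data, and compare. A sketch (the shapes and data here are made up for illustration):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='mse', metrics=['mae'])

x = np.random.rand(32, 8).astype(np.float32)
y = np.random.rand(32, 1).astype(np.float32)

_ = model(x)  # call once so the variables are built
before = [v.numpy().copy() for v in model.trainable_variables]

model.fit(x, y, epochs=1, verbose=0)

# Training moved at least some of the weights
after = [v.numpy() for v in model.trainable_variables]
print(any((b != a).any() for b, a in zip(before, after)))
```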
Method 6: Variable Persistence and Saving
Saving and loading variables is essential for model persistence:
# Create a simple model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1)
])
# Train the model (simplified, on dummy data)
x_train = np.random.rand(100, 10).astype(np.float32)
y_train = np.random.rand(100, 1).astype(np.float32)
model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, epochs=5)
# Save the entire model including variables
model.save('us_housing_model.h5')
# Later, load the model with all variables restored
loaded_model = tf.keras.models.load_model('us_housing_model.h5')
Advanced Variable Techniques
Let me walk you through some advanced variable techniques.
Custom Training Loops with Variables
For more control, you can write custom training loops:
# Create model variables
W = tf.Variable(tf.random.normal([10, 1]))
b = tf.Variable(tf.zeros([1]))
# Define optimizer
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
# Custom training step
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        prediction = tf.matmul(x, W) + b
        loss = tf.reduce_mean(tf.square(prediction - y))
    # Calculate gradients
    gradients = tape.gradient(loss, [W, b])
    # Apply gradients to variables
    optimizer.apply_gradients(zip(gradients, [W, b]))
    return loss
Variable Constraints
You can add constraints to variables to limit their values:
# Create a variable with a non-negativity constraint
non_negative_param = tf.Variable(
    1.0,
    constraint=lambda x: tf.clip_by_value(x, 0, float('inf'))
)
Note that the constraint is a projection function applied after optimizer updates, not on every plain assign(). This is useful for parameters that must remain positive, like standard deviations or certain model parameters.
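To see the projection in action, you can apply it by hand; a plain assign() does not invoke it automatically. A small sketch:

```python
import tensorflow as tf

non_negative_param = tf.Variable(
    1.0,
    constraint=lambda x: tf.clip_by_value(x, 0, float('inf'))
)

# A plain assign() bypasses the constraint...
non_negative_param.assign(-3.0)
print(non_negative_param.numpy())  # -3.0

# ...so apply the stored projection explicitly when updating by hand
non_negative_param.assign(non_negative_param.constraint(non_negative_param))
print(non_negative_param.numpy())  # 0.0
```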
Common Issues with TensorFlow Variables
When working with TensorFlow variables, I’ve encountered some common issues:
- Variable not initialized: in TF1-style graph code, using a variable before running its initializer raises an error; in TF2 eager execution, variables are initialized as soon as they are created.
- Shape mismatches: Always verify that the shapes of your variables match your expectations.
- Memory leaks: Creating variables in loops without proper scope can lead to memory issues.
- Variable placement: Be mindful of which device (CPU/GPU) your variables are placed on.
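Two of these issues are easy to check defensively: assign() raises on a shape mismatch, and every variable reports the device it lives on. A sketch:

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([3]))
try:
    v.assign(tf.zeros([4]))  # wrong shape
except ValueError:
    print("shape mismatch caught")

# Pin a variable to a device explicitly and inspect the placement
with tf.device('/CPU:0'):
    cpu_var = tf.Variable([1.0, 2.0])
print(cpu_var.device)  # e.g. '/job:localhost/replica:0/task:0/device:CPU:0'
```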
TensorFlow variables are the foundation of machine learning models, allowing weights and parameters to be updated during training. Whether you’re building a simple regression model for US housing prices or a complex neural network, understanding how to properly create and manage variables is essential.
I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last 5 years. During this time I have gained expertise in various Python libraries such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, and Scikit-Learn, working for various clients in the United States, Canada, the United Kingdom, Australia, and New Zealand.