TensorFlow has become one of the most widely used frameworks for deep learning and machine learning. The heart of TensorFlow is tensors and the operations we can perform on them.
If you think of deep learning models as giant equations, then tensors are the variables carrying data, while tensor operations are the math and transformations that turn raw inputs into intelligent predictions.
In this tutorial, we will build an intuitive understanding of tensors, learn how to create and manipulate them, run basic mathematical operations, and even construct a small computational example to see how it all fits together.
What Are Tensors?
A tensor may sound like an intimidating mathematical idea, but it is simply a generalization of familiar concepts like scalars, vectors, and matrices. In fact, you can think of a tensor as a container for data with potentially many dimensions.
- A scalar is just a single number (0-dimensional tensor).
- A vector is a list of numbers (1-dimensional tensor).
- A matrix is a 2D grid of numbers arranged in rows and columns.
- Beyond that, tensors can have 3, 4, or even higher dimensions, often required for images, text, or sequence data.
For instance, an image dataset might be represented as a 4D tensor with shape (batch_size, height, width, channels), where channels may be RGB color channels.
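As a quick sketch of that idea, here is a hypothetical batch of 32 RGB images, each 64×64 pixels, represented as a 4D tensor:

```python
import tensorflow as tf

# Hypothetical batch: 32 images, 64x64 pixels, 3 color channels (RGB)
images = tf.zeros([32, 64, 64, 3])
print(images.shape)  # (32, 64, 64, 3)
```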
Here’s how tensors look in TensorFlow:
import tensorflow as tf
# 0-D Tensor (scalar)
scalar = tf.constant(5)
# 1-D Tensor (vector)
vector = tf.constant([1, 2, 3])
# 2-D Tensor (matrix)
matrix = tf.constant([[1, 2], [3, 4]])
# 3-D Tensor
tensor_3d = tf.constant([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(scalar)
print(vector)
print(matrix)
print(tensor_3d)

When you run this code, TensorFlow displays each tensor along with its shape and data type.
Key Tensor Properties
Understanding the essential properties of tensors is important because these properties guide how we can operate on them.
- Rank: Number of dimensions a tensor has. Scalars have rank 0, vectors rank 1, matrices rank 2, and so forth.
- Shape: Describes the size of the tensor along each dimension. A 2×3 matrix has shape (2, 3).
- Data type (dtype): Specifies what kind of data the tensor holds, such as int32, float32, or string.
- Device placement: Tensors can live on CPUs, GPUs, or even TPUs. TensorFlow handles efficient placement behind the scenes.
Example:
matrix = tf.constant([[1,2],[3,4]], dtype=tf.float32)
print("Rank:", tf.rank(matrix).numpy())
print("Shape:", matrix.shape)
print("Datatype:", matrix.dtype)

This will output the rank (2), the shape (2, 2), and the data type float32.
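You can also inspect the device placement mentioned above. A minimal example (the exact device string depends on your hardware, e.g. CPU-only versus GPU):

```python
import tensorflow as tf

t = tf.constant([[1.0, 2.0]])
# The device string shows where the tensor lives,
# e.g. ".../device:CPU:0" or ".../device:GPU:0"
print(t.device)
```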
Create Tensors in Python TensorFlow
While constants are useful, models often need tensors initialized in specific ways. TensorFlow provides many functions:
# Constant tensor
const = tf.constant([1, 2, 3])
# Tensor of zeros
zeros = tf.zeros([2, 3])
# Tensor of ones
ones = tf.ones([3, 3])
# Identity matrix
identity = tf.eye(3)
# Random tensor with normal distribution
random_tensor = tf.random.normal([2,2], mean=0, stddev=1)
# Tensor as a Variable (mutable)
var_tensor = tf.Variable([1.0, 2.0, 3.0])

- tf.constant: Static values.
- tf.zeros, tf.ones: Useful for initialization.
- tf.eye: Creates identity matrices for linear algebra.
- tf.random.normal: Often used to initialize weights in neural networks.
- tf.Variable: For mutable tensors that you can update during training (like weights and biases).
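Because a tf.Variable is mutable, you can update its contents in place. A small sketch of the idea:

```python
import tensorflow as tf

weights = tf.Variable([1.0, 2.0, 3.0])
weights.assign([4.0, 5.0, 6.0])      # replace the values in place
weights.assign_add([1.0, 1.0, 1.0])  # element-wise increment
print(weights.numpy())  # [5. 6. 7.]
```

This is exactly the mechanism optimizers use to update weights during training.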
Tensor Operations Basics
Once we have tensors, we need to manipulate them. Like NumPy arrays, tensors support arithmetic, but TensorFlow adds GPU acceleration and compatibility with its computational graphs.
Element-wise Operations
These operations happen by applying functions across corresponding elements:
a = tf.constant([[1,2],[3,4]], dtype=tf.float32)
b = tf.constant([[5,6],[7,8]], dtype=tf.float32)
print(a + b) # addition
print(a * b) # multiplication
print(b - a) # subtraction
TensorFlow automatically applies the operation element by element.
Matrix Multiplication
Critical in machine learning, especially in layers of neural networks:
print(tf.matmul(a, b))

This performs true matrix multiplication, not element-wise multiplication.
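With matrix multiplication, the inner dimensions must match: a (2, 2) matrix times a (2, 3) matrix yields a (2, 3) result. A minimal sketch with made-up values:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # shape (2, 2)
c = tf.constant([[1.0, 0.0, 2.0],
                 [0.0, 1.0, 2.0]])          # shape (2, 3)
result = tf.matmul(a, c)                    # (2, 2) x (2, 3) -> (2, 3)
print(result.shape)  # (2, 3)
print(result)        # [[ 1.  2.  6.] [ 3.  4. 14.]]
```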
Reduction Operations
These condense tensors into smaller shapes:
print(tf.reduce_sum(a)) # sum of all elements
print(tf.reduce_mean(a)) # average
print(tf.reduce_max(a)) # maximum value
print(tf.argmax(a)) # index of maximum along an axis
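Reductions can also run along a specific axis instead of collapsing the whole tensor. Reusing the matrix `a` from above:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
col_sums = tf.reduce_sum(a, axis=0)  # sum down each column
row_sums = tf.reduce_sum(a, axis=1)  # sum across each row
print(col_sums)  # [4. 6.]
print(row_sums)  # [3. 7.]
```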
Reshape
Reshaping is often required when preparing data:
reshaped = tf.reshape(a, [4,1])
print(reshaped)

You can also leverage TensorFlow broadcasting. For example:
print(a + 5)

TensorFlow automatically broadcasts the scalar 5 to match the shape of tensor a.
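Broadcasting works for more than scalars: a 1-D tensor is stretched across a matrix when the trailing dimensions match. A small sketch:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # shape (2, 2)
row = tf.constant([10.0, 20.0])            # shape (2,)
result = a + row                            # row is added to every row of a
print(result)  # [[11. 22.] [13. 24.]]
```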
Special Python TensorFlow Math Operations
TensorFlow comes with higher-level math utilities essential for machine learning.
tensor = tf.constant([1.0, 2.0, 3.0])
print(tf.square(tensor)) # squares each element
print(tf.sqrt(tensor)) # square roots
print(tf.exp(tensor)) # exponential
print(tf.math.log(tensor)) # natural log
# Softmax turns values into probabilities
print(tf.nn.softmax(tensor))
# Clipping values
print(tf.clip_by_value(tensor, 1.5, 2.5))
These functions frequently appear in neural models, where we normalize inputs, compute activation functions, or stabilize values.
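For instance, softmax always produces a valid probability distribution: the outputs are positive and sum to 1, which is why it appears at the end of classifiers. A quick check:

```python
import tensorflow as tf

logits = tf.constant([1.0, 2.0, 3.0])
probs = tf.nn.softmax(logits)
total = tf.reduce_sum(probs)
print(probs)  # three positive values, largest where the logit is largest
print(total)  # sums to 1
```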
Python Tensor Slicing and Indexing
Like Python lists or NumPy arrays, tensors can be sliced:
tensor = tf.constant([[1,2,3],[4,5,6],[7,8,9]])
print(tensor[0]) # First row
print(tensor[:,1]) # Second column
print(tensor[0:2, :2]) # Top-left 2x2 block

Slicing is especially useful when dealing with batches of data. For example, extracting one image from a batch of 100 images is just a slicing operation along the first dimension.
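Sketching that batch example with hypothetical 28×28 grayscale images:

```python
import tensorflow as tf

# Hypothetical batch of 100 grayscale 28x28 images
batch = tf.random.normal([100, 28, 28, 1])
first_image = batch[0]  # drops the batch dimension -> (28, 28, 1)
first_ten = batch[:10]  # keeps the batch dimension -> (10, 28, 28, 1)
print(first_image.shape, first_ten.shape)
```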
Tensor Conversion and Interoperability
One powerful feature is TensorFlow’s smooth interaction with NumPy, the de facto standard for numerical operations in Python.
import numpy as np
# Converting from numpy to tensor
np_array = np.array([[1,2],[3,4]])
tensor = tf.convert_to_tensor(np_array)
print("Tensor:", tensor)
# Converting back
back_to_np = tensor.numpy()
print("Numpy Array:", back_to_np)
Data conversion like this is critical because most preprocessing libraries output NumPy arrays, while models run in TensorFlow.
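The interoperability goes further than explicit conversion: NumPy functions can consume tensors directly, and TensorFlow ops accept NumPy arrays. A small sketch:

```python
import numpy as np
import tensorflow as tf

tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
# NumPy functions accept tensors directly and return NumPy results
print(np.mean(tensor))  # 2.5
# TensorFlow ops likewise accept NumPy arrays as inputs
print(tf.add(np.ones(2), np.ones(2)))  # [2. 2.]
```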
Best Practices With Tensors
- Always be explicit with dtype, especially when precision matters. For example, use float32 for training models to reduce memory usage.
- Use tf.Variable only for values that will change during training (weights, biases). For static data (inputs, labels), use tf.constant.
- Pay attention to shapes during operations. Shape mismatches often cause errors.
- Place large-scale computations on GPU/TPU rather than CPU when available. TensorFlow handles device placement, but you can control it if needed.
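If you do want to control placement yourself, tf.device pins operations to a chosen device. A minimal sketch using the CPU (substitute "/GPU:0" if a GPU is available):

```python
import tensorflow as tf

# Pin this computation to the CPU explicitly
with tf.device("/CPU:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)
print(b.device)  # shows which device the result lives on
```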
Mini Project: Basic Computation Graph
Let us combine what we’ve learned into a simple demonstration. Suppose we want to simulate a very simple linear model: y = Wx + b
Where:
- x is an input tensor (like features),
- W is a weight matrix,
- b is a bias vector,
- y is the output we compute.
# Input tensor (2 examples, each with 2 features)
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
# Random weight matrix (2 inputs -> 3 outputs)
W = tf.Variable(tf.random.normal([2, 3]))
# Bias term (1 per output)
b = tf.Variable(tf.zeros([3]))
# Linear transformation
y = tf.matmul(x, W) + b
# Activation function (ReLU)
output = tf.nn.relu(y)
print("Input:", x)
print("Weights:", W)
print("Bias:", b)
print("Output:", output)
What’s happening here:
- We define a simple dataset x.
- Initialize weights W and biases b.
- Perform a linear transformation with tf.matmul.
- Add bias and pass through ReLU, a popular activation function.
This mimics what happens in the first layer of a neural network. With this, you already have a mini computation graph, the structure that underlies all TensorFlow models.
Conclusion
Tensors and operations form the core of TensorFlow. Everything that happens in deep learning, from raw data inputs to final predictions, is expressed as tensor manipulations. In this tutorial, we explored:
- What tensors are and how they generalize scalars, vectors, and matrices.
- Tensor properties including rank, shape, and dtype.
- Creating tensors with constants, variables, and random initialization.
- Common operations: element-wise, matrix multiplication, reduction, and reshaping.
- Special functions like softmax, square root, and clipping.
- Tensor slicing, indexing, and conversion between NumPy arrays.
- Best practices to avoid common pitfalls.
- A simple mini project simulating the forward step of a neural network.
By mastering these basics, you’ve built the foundation to work with any model in TensorFlow. The next steps involve understanding computational graphs more deeply, learning automatic differentiation, and applying these principles to build, train, and optimize machine learning models.
You may also read:
- Use TensorFlow’s get_shape Function
- Iterate Over Tensor In TensorFlow
- Convert Tensor to Numpy in TensorFlow
- TensorFlow One_Hot Encoding

I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working with Python, machine learning, and artificial intelligence for the last 5 years. During this time I have gained expertise in various Python libraries, such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, Scikit-Learn, etc., for various clients in the United States, Canada, the United Kingdom, Australia, New Zealand, and more. Check out my profile.