How to Create an Empty Tensor in PyTorch

Have you ever been in the middle of coding a neural network and needed to create a placeholder tensor to store results? I know I have, countless times.

After years of working with PyTorch, creating empty tensors has become second nature to me. When I first started, though, this seemingly simple task caused me unnecessary headaches.

In this article, I will discuss several methods to create empty tensors in PyTorch, drawing from my real-world experience. These techniques will help you efficiently initialize tensors for your machine learning projects.

Let’s get started with the basics and then move on to more advanced techniques.

Create an Empty Tensor in PyTorch

Before getting into creating empty tensors, let’s quickly understand what tensors are in PyTorch.

A tensor is a multi-dimensional array, similar to NumPy’s ndarray, but with additional features that make it suitable for deep learning. Tensors can be used on GPUs to accelerate computing.

In my projects, I utilize tensors to represent various types of data, ranging from simple vectors to complex multi-dimensional data structures.

Method 1: Use torch.empty() Function

The simplest way to create an empty tensor in PyTorch is with the torch.empty() function.

import torch

# Create a 2x3 empty tensor
empty_tensor = torch.empty(2, 3)
print(empty_tensor)

When you run this code, you’ll get a tensor with uninitialized values. Here’s what the output might look like:

tensor([[-2.1187e-22,  1.3565e-42,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00]])


Notice how the values look arbitrary? That's because torch.empty() doesn't initialize the tensor's values. It simply allocates memory, so you see whatever data happened to be in that memory already.

I often use this method when I need to create a tensor quickly, and I plan to fill it with values later in my code.
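
This fill-it-later pattern can be sketched as follows. It's a minimal illustrative example (the loop body stands in for whatever real computation you'd run); the point is that every slot of the torch.empty() buffer gets overwritten before it is read:

```python
import torch

# Preallocate an uninitialized result buffer with torch.empty(),
# then overwrite every row before reading anything back.
num_batches = 4
results = torch.empty(num_batches, 3)

for i in range(num_batches):
    # Stand-in for a real per-batch computation
    results[i] = torch.tensor([float(i), float(i) * 2, float(i) * 3])

print(results.shape)  # torch.Size([4, 3])
```

The key discipline is never reading a slot you haven't written; uninitialized values will silently poison any computation that touches them.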


Method 2: Use torch.zeros() Function

If you need a tensor that starts out filled with zeros, torch.zeros() is the function to reach for.

import torch

# Create a 2x3 tensor filled with zeros
zeros_tensor = torch.zeros(2, 3)
print(zeros_tensor)

Output:

tensor([[0., 0., 0.],
        [0., 0., 0.]])


This is particularly useful when implementing certain algorithms that require a zero-initialized state. I use this approach frequently when creating masks or when I need a “clean slate” to start with.
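
Here's a small sketch of the mask use case. The sequence lengths are hypothetical; the idea is that torch.zeros() gives you the "all padded" default for free, and you only mark the valid positions:

```python
import torch

# Build a padding mask from a zero-initialized tensor:
# positions beyond each sequence's length stay 0 (padded),
# valid positions are set to 1.
seq_lengths = [3, 5, 2]  # hypothetical lengths of three sequences
max_len = 5

mask = torch.zeros(len(seq_lengths), max_len)
for i, length in enumerate(seq_lengths):
    mask[i, :length] = 1.0  # mark valid (non-padded) positions

print(mask)
```

Starting from zeros means you only ever write the positive cases, which keeps the masking logic short and hard to get wrong.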

Method 3: Use torch.ones() Function

Similarly, if you need a tensor filled with ones, you can use PyTorch's torch.ones() function.

import torch

# Create a 2x3 tensor filled with ones
ones_tensor = torch.ones(2, 3)
print(ones_tensor)

Output:

tensor([[1., 1., 1.],
        [1., 1., 1.]])


This is helpful when creating attention masks or when implementing certain mathematical operations that require an initial value of 1.
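
A brief sketch of both use cases mentioned above. The shapes are arbitrary; the point is that a ones tensor is the "attend everywhere" default for a mask, and also a multiplicative identity you can scale to get any constant-filled tensor:

```python
import torch

# An attention mask of all ones means every position is attended to.
attention_mask = torch.ones(2, 4)

# Scaling a ones tensor yields a constant-filled tensor
# without a separate fill step.
constant = torch.ones(2, 4) * 0.5

print(attention_mask.sum().item())  # 8.0
print(constant[0, 0].item())        # 0.5
```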


Method 4: Use torch.randn() for Random Initialization

Sometimes, especially when initializing weights in neural networks, you need tensors filled with random values. The torch.randn() function creates a tensor with values drawn from the standard normal distribution.

import torch

# Create a 2x3 tensor with random values
random_tensor = torch.randn(2, 3)
print(random_tensor)

Output (will vary each time):

tensor([[-0.1117, -0.4966,  0.1631],
        [ 0.8617, -0.1838,  0.6059]])

In my experience, proper weight initialization is crucial for model convergence. I’ve used this method countless times when building custom neural network layers.
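
As a sketch of what manual initialization can look like, here is one common scheme: scaling standard-normal weights by 1/sqrt(fan_in) so early activations keep a reasonable variance. The layer sizes are made up for illustration; in practice you'd more often rely on torch.nn.init helpers:

```python
import torch

# Manual scaled-normal initialization: draw from N(0, 1) with
# torch.randn(), then scale by 1/sqrt(fan_in).
fan_in, fan_out = 256, 128
weights = torch.randn(fan_out, fan_in) / (fan_in ** 0.5)

print(weights.shape)  # torch.Size([128, 256])
# The standard deviation will be close to 1/sqrt(256) = 0.0625
print(weights.std().item())
```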

Method 5: Create Empty Tensors with Specific Data Types

One aspect that often trips up beginners is tensor data types. You can specify the data type when creating empty tensors:

import torch

# Create empty tensors with specific data types
float_tensor = torch.empty(2, 3, dtype=torch.float32)
int_tensor = torch.empty(2, 3, dtype=torch.int64)
bool_tensor = torch.empty(2, 3, dtype=torch.bool)

print("Float tensor:", float_tensor)
print("Integer tensor:", int_tensor)
print("Boolean tensor:", bool_tensor)

This is particularly important when you’re working with models that require specific data types or when you’re trying to optimize memory usage.
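
To make the memory point concrete, element_size() reports the bytes each element occupies, so the same 2x3 shape can cost very different amounts of memory depending on dtype:

```python
import torch

# Compare bytes-per-element across a few common dtypes.
for dtype in (torch.float32, torch.int64, torch.bool, torch.float16):
    t = torch.empty(2, 3, dtype=dtype)
    print(dtype, t.element_size(), "bytes/element")
```

On a large tensor, switching from int64 to bool for a mask, or from float32 to float16 for activations, cuts memory by 8x and 2x respectively.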

Create Empty Tensors on GPU

If you’re working with deep learning models, you’ll often need to create tensors directly on the GPU to speed up computations. Here’s how:

import torch

# Check if GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create a 2x3 empty tensor on GPU
gpu_tensor = torch.empty(2, 3, device=device)
print(gpu_tensor)
print("Tensor is on GPU:", gpu_tensor.is_cuda)

When training large models on datasets like ImageNet, or processing large medical imaging datasets, moving tensors to the GPU can drastically reduce processing time.
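
The same device-agnostic pattern also covers tensors that already exist on the CPU: create new ones directly on the chosen device, and move existing ones with .to(). This is a minimal sketch that falls back to the CPU when no GPU is available, so it runs anywhere:

```python
import torch

# Pick the best available device once, then use it everywhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

created_on_device = torch.empty(2, 3, device=device)  # allocated in place
moved_to_device = torch.zeros(2, 3).to(device)        # copied if needed

print(created_on_device.device == moved_to_device.device)  # True
```

Creating directly on the device is cheaper than creating on the CPU and moving, since it avoids the host-to-device copy.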


Practical Example: Create a Batch of Images

Let’s see a practical example where I might use empty tensors in a real-world scenario, creating a placeholder for a batch of images:

import torch

# Create an empty tensor for a batch of images
# [batch_size, channels, height, width]
batch_size = 32
channels = 3  # RGB images
height = 224  # Standard height for many CNN models
width = 224   # Standard width for many CNN models

image_batch = torch.empty(batch_size, channels, height, width)

print(f"Batch shape: {image_batch.shape}")
print(f"Total number of elements: {image_batch.numel()}")
print(f"Memory usage (MB): {image_batch.element_size() * image_batch.numel() / (1024 * 1024):.2f}")

Output:

Batch shape: torch.Size([32, 3, 224, 224])
Total number of elements: 4816896
Memory usage (MB): 18.38

This is exactly how I would initialize a tensor to hold a batch of images before loading data from an image dataset, for example.
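
To show the full preallocate-then-fill cycle, here is a scaled-down sketch in which torch.rand() stands in for a real decoded image from a data loader; each slot of the empty batch is overwritten in turn:

```python
import torch

# A small batch for illustration: [batch_size, channels, height, width]
batch = torch.empty(4, 3, 8, 8)

for i in range(batch.shape[0]):
    fake_image = torch.rand(3, 8, 8)  # stand-in for a loaded image
    batch[i] = fake_image             # fill slot i of the batch

print(batch.shape)  # torch.Size([4, 3, 8, 8])
```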


Reshape Empty Tensors

After creating empty tensors, you might need to reshape them to fit your data structure:

import torch

# Create an empty 1D tensor
empty_1d = torch.empty(12)
print("Original shape:", empty_1d.shape)

# Reshape to 2D (3x4)
reshaped_2d = empty_1d.reshape(3, 4)
print("Reshaped to 2D:", reshaped_2d.shape)

# Reshape to 3D (2x2x3)
reshaped_3d = empty_1d.reshape(2, 2, 3)
print("Reshaped to 3D:", reshaped_3d.shape)

I use reshaping operations frequently when processing sequential data or when preparing inputs for different model architectures.
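
One handy variant worth knowing: pass -1 for a single dimension and PyTorch infers it from the total element count, which saves arithmetic when only some dimensions are fixed:

```python
import torch

t = torch.empty(12)

# -1 lets PyTorch infer the missing dimension: 12 / 3 = 4
print(t.reshape(3, -1).shape)     # torch.Size([3, 4])
# 12 / (2 * 3) = 2
print(t.reshape(-1, 2, 3).shape)  # torch.Size([2, 2, 3])
```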

Empty Tensors vs. Pre-initialized Tensors

When deciding between empty tensors and pre-initialized ones, consider:

  1. Use torch.empty() when you plan to overwrite all values immediately.
  2. Use torch.zeros() or torch.ones() when you need specific initial values.
  3. Use torch.randn() when initializing weights in neural networks.

The choice depends on your specific use case. In my experience, properly initialized tensors can significantly affect model training speed and convergence.

Creating empty tensors in PyTorch is a fundamental skill that serves as the building block for more complex operations in deep learning.

I’ve shared multiple methods that I’ve used throughout my career, and I hope these techniques help you in your PyTorch journey. Remember, the right tensor initialization can make a significant difference in your model’s performance.

As you continue working with PyTorch, you’ll develop intuition about which method to use in different scenarios. Happy coding!
