Recently, I was working on a data analysis project where I needed to smooth out some noisy sensor readings. The solution? Convolution, an efficient mathematical operation that’s essential for signal processing, image filtering, and more. In Python, the scipy.signal.convolve function makes this complex operation surprisingly easy.
In this article, I’ll share everything you need to know about using scipy’s convolve function based on my decade of experience with it.
Let's get started!
What Is Convolution, and Why Use It?
Convolution is a mathematical operation that combines two functions to produce a third function. In data analysis and signal processing, it’s like sliding one function (called the kernel) over another (your data) and calculating the sum of their products at each position.
Let me explain with a simple example: if you have a temperature sensor that sometimes gives erratic readings, convolution with a smoothing kernel can help reveal the true temperature trend by averaging out the noise.
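To make the sliding-window idea concrete, here is a minimal sketch using NumPy's np.convolve, which follows the same convention as scipy.signal.convolve (the numbers are made up for illustration):

```python
import numpy as np

# A tiny "signal" with one spiky reading in the middle
data = np.array([1.0, 2.0, 6.0, 2.0, 1.0])
kernel = np.array([1/3, 1/3, 1/3])  # simple 3-point moving average

# 'same' mode keeps the output the same length as the input
smoothed = np.convolve(data, kernel, mode='same')
print(smoothed)
```

At the spike's position the output is (2.0 + 6.0 + 2.0) / 3, the average of the spike and its two neighbors, which is exactly the "sum of products at each position" described above.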
Set Up Your Environment
Before we dive in, make sure you have NumPy, SciPy, and Matplotlib installed, then import them:
# Import libraries
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
Method 1 – Basic Convolution with scipy.signal.convolve
Let’s start with a simple example – smoothing out noisy data:
# Create some noisy data (simulating temperature readings in Fahrenheit)
days = np.linspace(0, 30, 100) # 30 days of readings
actual_temps = 70 + 15 * np.sin(days/30 * 2 * np.pi) # Seasonal pattern
noise = np.random.normal(0, 5, 100) # Random fluctuations
noisy_temps = actual_temps + noise
# Create a simple averaging kernel (5-day moving average)
kernel = np.ones(5) / 5 # Normalized kernel
# Apply convolution
smoothed_temps = signal.convolve(noisy_temps, kernel, mode='same')
# Plot the results
plt.figure(figsize=(10, 6))
plt.plot(days, noisy_temps, 'gray', alpha=0.5, label='Noisy temperature readings')
plt.plot(days, smoothed_temps, 'r', label='Smoothed temperature trend')
plt.plot(days, actual_temps, 'b--', label='Actual temperature pattern')
plt.legend()
plt.xlabel('Days')
plt.ylabel('Temperature (°F)')
plt.title('Temperature Smoothing with Convolution')
plt.grid(True)
plt.show()
You can refer to the screenshot below to see the output:

The key parameters of scipy.signal.convolve are:
- First argument: your input signal (the noisy temperature readings)
- Second argument: the kernel (our averaging filter)
- mode: determines how the edges are handled. Common options are:
  - 'full': output is the full discrete linear convolution
  - 'same': output has the same length as the input
  - 'valid': output only includes points where the signals completely overlap
For most data-smoothing applications, I recommend the 'same' mode so the output keeps your original data length.
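As a quick sanity check on the output lengths, here is a small sketch with a 10-sample signal and a 4-sample kernel:

```python
import numpy as np
from scipy import signal

data = np.arange(10, dtype=float)  # 10-sample signal
kernel = np.ones(4) / 4            # 4-sample averaging kernel

full = signal.convolve(data, kernel, mode='full')    # len = 10 + 4 - 1
same = signal.convolve(data, kernel, mode='same')    # len = 10
valid = signal.convolve(data, kernel, mode='valid')  # len = 10 - 4 + 1

print(len(full), len(same), len(valid))  # 13 10 7
```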
Method 2 – 2D Convolution for Image Processing
Convolution isn’t just for 1D signals; it’s also perfect for image processing. Here’s how to apply a blur filter to an image:
# Create a simple 2D image (100×100 pixels with a square in the middle)
image = np.zeros((100, 100))
image[30:70, 30:70] = 1 # White square on black background
# Add some noise
noisy_image = image + np.random.normal(0, 0.2, image.shape)
noisy_image = np.clip(noisy_image, 0, 1) # Clip values to valid range
# Create a 2D Gaussian blur kernel
x, y = np.mgrid[-3:4, -3:4]
kernel = np.exp(-(x**2 + y**2) / 4)
kernel = kernel / kernel.sum() # Normalize
# Apply 2D convolution
blurred_image = signal.convolve2d(noisy_image, kernel, mode='same', boundary='symm')
# Display the results
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[1].imshow(noisy_image, cmap='gray')
axes[1].set_title('Noisy Image')
axes[2].imshow(blurred_image, cmap='gray')
axes[2].set_title('Blurred Image (Convolution)')
plt.tight_layout()
plt.show()
You can refer to the screenshot below to see the output:

This example shows how convolution with a Gaussian kernel can effectively remove noise from an image. The same principle is used in popular photo editing software for their blur filters.
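If all you need is a Gaussian blur, scipy.ndimage offers a ready-made alternative to building the kernel by hand. The sigma below is derived from the kernel formula above: exp(-(x**2 + y**2) / 4) matches a Gaussian with 2 * sigma**2 = 4, i.e. sigma = sqrt(2). The result will differ slightly from the 7x7 hand-built kernel, since gaussian_filter uses its own truncation and boundary handling. A minimal sketch:

```python
import numpy as np
from scipy import ndimage

# Rebuild the noisy test image from the example above
rng = np.random.default_rng(0)
image = np.zeros((100, 100))
image[30:70, 30:70] = 1
noisy_image = np.clip(image + rng.normal(0, 0.2, image.shape), 0, 1)

# sigma = sqrt(2) follows from the exponent -(r**2) / (2 * sigma**2)
blurred = ndimage.gaussian_filter(noisy_image, sigma=np.sqrt(2), mode='nearest')
print(blurred.shape)  # (100, 100)
```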
Method 3 – Fast Convolution with FFT
For large datasets, direct convolution can be slow. Fortunately, scipy provides a faster alternative using the Fast Fourier Transform (FFT):
# Generate a large signal (e.g., a year of hourly stock prices)
hours = np.linspace(0, 365, 365*24) # One year of hourly data
true_trend = 100 + 20 * np.sin(hours/365 * 2 * np.pi) + 5 * np.sin(hours/30 * 2 * np.pi)
noise = np.random.normal(0, 10, len(hours))
stock_prices = true_trend + noise
# Create a kernel for a 24-hour moving average
kernel = np.ones(24) / 24
# Time the standard convolution
import time
start = time.time()
smoothed_standard = signal.convolve(stock_prices, kernel, mode='same')
standard_time = time.time() - start
# Time the FFT-based convolution
start = time.time()
smoothed_fft = signal.fftconvolve(stock_prices, kernel, mode='same')
fft_time = time.time() - start
print(f"Standard convolution time: {standard_time:.4f} seconds")
print(f"FFT convolution time: {fft_time:.4f} seconds")
print(f"Speed improvement: {standard_time/fft_time:.2f}x faster")
# The results should be nearly identical
diff = np.abs(smoothed_standard - smoothed_fft).max()
print(f"Maximum difference between methods: {diff:.2e}")
You can refer to the screenshot below to see the output:

For large datasets, fftconvolve can be substantially faster than direct convolution, with essentially identical results; the speedup grows as both the signal and the kernel get longer, so for a short kernel like this 24-point average the gap may be modest. I always reach for it when working with large signals or when performance is critical.
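Note that you usually don't have to make this choice yourself: signal.convolve accepts a method argument ('auto', 'direct', or 'fft'), and signal.choose_conv_method reports which method SciPy expects to be faster for your input shapes. A small sketch:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(42)
data = rng.normal(size=100_000)   # long signal
kernel = np.ones(500) / 500       # long averaging kernel

# Ask scipy which method it would pick for these shapes
method = signal.choose_conv_method(data, kernel, mode='same')
print(method)  # typically 'fft' when both inputs are long

# You can also force a specific method explicitly
smoothed = signal.convolve(data, kernel, mode='same', method='fft')
```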
Method 4 – Custom Convolution for Different Boundary Conditions
Sometimes you need special handling for the edges of your data. The mode parameter gives you several options:
# Create a simple signal
signal_data = np.zeros(20)
signal_data[5:15] = 1 # A rectangular pulse
# Create a kernel
kernel = np.ones(5) / 5
# Apply convolution with different modes
modes = ['full', 'same', 'valid']
results = {}
for mode in modes:
    results[mode] = signal.convolve(signal_data, kernel, mode=mode)
# Plot the results
plt.figure(figsize=(12, 8))
plt.subplot(4, 1, 1)
plt.plot(signal_data, 'b-o', label='Original Signal')
plt.legend()
plt.grid(True)
for i, mode in enumerate(modes):
    plt.subplot(4, 1, i + 2)
    plt.plot(results[mode], 'r-o', label=f'Mode: {mode}')
    plt.legend()
    plt.grid(True)
plt.tight_layout()
plt.show()
The different modes affect how the edges are handled:
- 'full': returns the complete convolution (length = len(signal) + len(kernel) - 1)
- 'same': keeps the output the same size as the input signal
- 'valid': only returns the part where the kernel fully overlaps the signal
In practical terms, ‘same’ is most commonly used, but understanding all modes helps when dealing with edge effects in your data.
Method 5 – Separable Convolution for Performance
For certain 2D kernels (like Gaussian), you can dramatically improve performance by separating the convolution into two 1D operations:
# Create a larger test image
large_image = np.zeros((500, 500))
large_image[150:350, 150:350] = 1
noisy_large = large_image + np.random.normal(0, 0.2, large_image.shape)
noisy_large = np.clip(noisy_large, 0, 1)
# Create a 2D Gaussian kernel
size = 21
x = np.linspace(-3, 3, size)
kernel_1d = np.exp(-x**2 / 2)
kernel_1d = kernel_1d / kernel_1d.sum() # Normalize
kernel_2d = np.outer(kernel_1d, kernel_1d) # 2D kernel
# Method 1: Direct 2D convolution
start = time.time()
blurred_2d = signal.convolve2d(noisy_large, kernel_2d, mode='same', boundary='symm')
time_2d = time.time() - start
# Method 2: Separable convolution (two 1D convolutions)
start = time.time()
temp = np.zeros_like(noisy_large)
for i in range(noisy_large.shape[0]):
    temp[i, :] = signal.convolve(noisy_large[i, :], kernel_1d, mode='same')
blurred_separable = np.zeros_like(temp)
for j in range(temp.shape[1]):
    blurred_separable[:, j] = signal.convolve(temp[:, j], kernel_1d, mode='same')
time_separable = time.time() - start
print(f"2D convolution time: {time_2d:.4f} seconds")
print(f"Separable convolution time: {time_separable:.4f} seconds")
print(f"Speed improvement: {time_2d/time_separable:.2f}x faster")
# Verify the results are similar
diff = np.abs(blurred_2d - blurred_separable).max()
print(f"Maximum difference: {diff:.2e}")
This technique can deliver a significant speedup for large images when the kernel is separable, while producing mathematically identical results (up to floating-point rounding).
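A quick way to check whether a 2D kernel is separable is to look at its matrix rank: a separable kernel is an outer product of two vectors, so its rank is 1, and the SVD recovers the 1D factors. A small sketch using the same Gaussian kernel as above:

```python
import numpy as np

# Build the same separable Gaussian kernel as in the example above
x = np.linspace(-3, 3, 21)
kernel_1d = np.exp(-x**2 / 2)
kernel_1d /= kernel_1d.sum()
kernel_2d = np.outer(kernel_1d, kernel_1d)

# A separable kernel has matrix rank 1
print(np.linalg.matrix_rank(kernel_2d))  # 1

# Recover 1D factors from any rank-1 kernel via the SVD
u, s, vt = np.linalg.svd(kernel_2d)
row = np.sqrt(s[0]) * u[:, 0]
col = np.sqrt(s[0]) * vt[0, :]
assert np.allclose(np.outer(row, col), kernel_2d)
```

The recovered factors may differ from kernel_1d by a sign, but their outer product reconstructs the 2D kernel exactly, so they can be used for the two 1D passes.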
The beauty of scipy’s convolution functions lies in their flexibility and power for real-world data analysis tasks. Whether you’re analyzing time series, processing images, or developing advanced signal processing applications, understanding these tools can dramatically improve your data analysis workflow.
I hope you found this guide helpful for understanding and using scipy’s convolve function.
I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last 5 years. During this time I have gained expertise in various Python libraries such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, and Scikit-Learn, working with clients in the United States, Canada, the United Kingdom, Australia, New Zealand, and other countries. Check out my profile.