Recently, I was working on a data science project where I needed to find the minimum value of a complex mathematical function. The challenge was that multiple parameters needed to be adjusted simultaneously to find the optimal solution. This is where SciPy’s minimize function came to my rescue.
In this article, I’ll share everything you need to know about using SciPy’s minimize function for optimization problems in Python.
Let’s get into it!
What is SciPy’s Minimize Function?
SciPy’s minimize function is a powerful optimization tool that helps you find the minimum value of a given function. It’s part of SciPy’s optimization module and supports various algorithms for different types of optimization problems.
Whether you’re working on machine learning model parameter tuning, engineering design optimization, or financial portfolio optimization, this function can handle it all.
Get Started with SciPy Minimize
Let me walk you through the main ways to work with SciPy’s minimize function.
Method 1: Basic Optimization with Nelder-Mead Method
The Nelder-Mead method is one of the simplest optimization algorithms that works well for many problems without requiring derivatives. Here’s how to use it:
import numpy as np
from scipy.optimize import minimize

# Define the function to minimize
def objective_function(x):
    return x[0]**2 + x[1]**2  # Simple quadratic function

# Starting point
initial_guess = [10.0, 10.0]

# Minimize the function
result = minimize(objective_function, initial_guess, method='Nelder-Mead')

# Display the results
print("Optimal solution:", result.x)
print("Function value at optimal solution:", result.fun)
print("Number of iterations:", result.nit)
print("Success:", result.success)

Output:
Optimal solution: [1.49644896e-05 2.14267502e-05]
Function value at optimal solution: 6.830415723986571e-10
Number of iterations: 46
Success: True

This code minimizes a simple quadratic function starting from the point [10, 10]. The function should have a minimum at [0, 0], and the minimize function will find this point for us.
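For comparison (my own sketch, not part of the original example), the same quadratic can be minimized with a derivative-based method such as BFGS. Supplying the analytic gradient via the jac argument typically takes far fewer function evaluations than Nelder-Mead:

```python
import numpy as np
from scipy.optimize import minimize

def objective_function(x):
    return x[0]**2 + x[1]**2

# Analytic gradient of the quadratic: d/dx_i (x_i**2) = 2*x_i
def gradient(x):
    return np.array([2 * x[0], 2 * x[1]])

result = minimize(objective_function, [10.0, 10.0], method='BFGS', jac=gradient)
print("Optimal solution:", result.x)
print("Function evaluations:", result.nfev)
```

When an exact gradient is available, passing it through jac is usually worth the effort; otherwise BFGS falls back to estimating the gradient numerically.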
Check out Python SciPy Optimize Root
Method 2: Optimization with Constraints
Sometimes we need to find the minimum subject to certain constraints. Here’s how to add constraints to our optimization problem:
from scipy.optimize import minimize

# Define the objective function
def objective(x):
    return (x[0] - 1)**2 + (x[1] - 2.5)**2

# Define constraints
def constraint1(x):
    return x[0] + x[1] - 2   # constraint: x + y >= 2

def constraint2(x):
    return 10 - x[0] - x[1]  # constraint: x + y <= 10

# Set up constraints
cons = [{'type': 'ineq', 'fun': constraint1},
        {'type': 'ineq', 'fun': constraint2}]

# Initial guess
x0 = [0, 0]

# Solve
solution = minimize(objective, x0, method='SLSQP', constraints=cons)

print("Optimal solution:", solution.x)
print("Function value:", solution.fun)

Output:
Optimal solution: [1. 2.5]
Function value: 0.0

In this example, we’re minimizing a function subject to two inequality constraints. We’re using the ‘SLSQP’ method, which is suitable for constrained optimization problems.
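One habit worth building (a sketch I've added, not from the original): after a constrained solve, evaluate each constraint function at the returned point. SciPy's convention for 'ineq' constraints is that fun(x) must be non-negative at a feasible point:

```python
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1)**2 + (x[1] - 2.5)**2

def constraint1(x):
    return x[0] + x[1] - 2   # x + y >= 2

def constraint2(x):
    return 10 - x[0] - x[1]  # x + y <= 10

cons = [{'type': 'ineq', 'fun': constraint1},
        {'type': 'ineq', 'fun': constraint2}]

solution = minimize(objective, [0, 0], method='SLSQP', constraints=cons)

# Each 'ineq' constraint should be >= 0 (up to solver tolerance) at the solution
for i, c in enumerate(cons, start=1):
    value = c['fun'](solution.x)
    print(f"constraint{i}: {value:.6f} (feasible: {value >= -1e-9})")
```

This catches the surprisingly common case where the solver reports success but the result sits slightly outside the feasible region.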
Method 3: Optimization with Bounds
If you need to restrict your variables within certain ranges, you can use bounds:
import numpy as np
from scipy.optimize import minimize

# Define the objective function
def portfolio_volatility(weights):
    # Example: Portfolio volatility calculation (simplified for demonstration)
    stocks = np.array([0.15, 0.20, 0.25])  # Stock volatilities
    return np.sum(weights * stocks)

# Initial guess - equal weighting
initial_weights = np.array([0.33, 0.33, 0.34])

# Bounds - each weight between 0 and 1
bounds = [(0, 1) for _ in range(3)]

# Constraint - weights sum to 1
def weight_constraint(weights):
    return np.sum(weights) - 1

constraints = {'type': 'eq', 'fun': weight_constraint}

# Optimize
result = minimize(portfolio_volatility,
                  initial_weights,
                  method='SLSQP',
                  bounds=bounds,
                  constraints=constraints)

print("Optimal portfolio weights:", result.x)
print("Portfolio volatility:", result.fun)

Output:
Optimal portfolio weights: [1.00000000e+00 0.00000000e+00 4.16333634e-17]
Portfolio volatility: 0.15

This example simulates a portfolio optimization problem where we’re trying to minimize portfolio volatility. We’ve added bounds to ensure each weight is between 0 and 1, and a constraint to ensure the weights sum to 1.
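The volatility model above is deliberately simplified. A more realistic formulation (a sketch of my own, with a made-up covariance matrix purely for illustration) uses the quadratic form sqrt(w' Σ w), which accounts for how the assets move together:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical covariance matrix of asset returns (illustration only;
# diagonal entries are the squared volatilities 0.15, 0.20, 0.25)
cov = np.array([[0.0225, 0.0050, 0.0040],
                [0.0050, 0.0400, 0.0060],
                [0.0040, 0.0060, 0.0625]])

def portfolio_volatility(weights):
    # Portfolio standard deviation: sqrt(w' . cov . w)
    return np.sqrt(weights @ cov @ weights)

bounds = [(0, 1)] * 3
constraints = {'type': 'eq', 'fun': lambda w: np.sum(w) - 1}

result = minimize(portfolio_volatility, [1/3, 1/3, 1/3],
                  method='SLSQP', bounds=bounds, constraints=constraints)

print("Optimal portfolio weights:", result.x)
print("Portfolio volatility:", result.fun)
```

Unlike the linear version, this objective can favor a mix of assets, because diversification across imperfectly correlated assets reduces the quadratic-form volatility.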
Check out Python Scipy Odeint
Understanding Optimization Methods in SciPy
SciPy’s minimize function offers several different optimization algorithms. Here’s a quick guide to help you choose:
- Nelder-Mead: Good for general-purpose optimization without derivatives
- BFGS: Efficient when gradients can be computed
- L-BFGS-B: Like BFGS but handles bound constraints
- SLSQP: Handles both bounds and constraints
- Powell: Works well for functions that are not continuously differentiable
- CG (Conjugate Gradient): Efficient for large-scale problems
- trust-constr: Modern constrained optimization algorithm
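To compare a few of these methods side by side, here is a small sketch using the classic Rosenbrock test function, which SciPy ships as scipy.optimize.rosen:

```python
from scipy.optimize import minimize, rosen

# Standard 5-dimensional starting point for the Rosenbrock function
x0 = [1.3, 0.7, 0.8, 1.9, 1.2]

for method in ['Nelder-Mead', 'Powell', 'CG', 'BFGS', 'L-BFGS-B']:
    result = minimize(rosen, x0, method=method)
    print(f"{method:12s} success={result.success} "
          f"f={result.fun:.2e} nfev={result.nfev}")
```

Comparing result.fun and result.nfev across methods on your own problem is often the quickest way to pick an algorithm in practice.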
Practical Example: Machine Learning Hyperparameter Tuning
One common application of minimize is hyperparameter tuning in machine learning:
from scipy.optimize import minimize
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.datasets import load_iris

# Load data
iris = load_iris()
X, y = iris.data, iris.target

# Function to minimize (negative cross-validation score)
def objective_function(params):
    C, gamma = params
    model = SVC(C=10**C, gamma=10**gamma, kernel='rbf')
    score = cross_val_score(model, X, y, cv=5).mean()
    return -score  # We minimize the negative score

# Initial guess (log10 scale)
initial_guess = [0, 0]  # C=1, gamma=1

# Bounds
bounds = [(-3, 3), (-3, 3)]  # log10 scale

# Optimize
result = minimize(objective_function, initial_guess, method='L-BFGS-B', bounds=bounds)

# Best parameters
best_C = 10**result.x[0]
best_gamma = 10**result.x[1]

print(f"Best C: {best_C:.4f}")
print(f"Best gamma: {best_gamma:.4f}")
print(f"Best accuracy: {-result.fun:.4f}")

In this example, we’re using minimize to find the optimal C and gamma parameters for an SVM model on the Iris dataset. One caveat: the cross-validation score is noisy and piecewise constant, so a gradient-based method like L-BFGS-B (which estimates gradients numerically) may barely move from the starting point; for this kind of tuning, derivative-free approaches are often more robust.
Read Python Scipy Leastsq
Tips for Effective Optimization
- Choose the right method: Different problems work better with different optimization algorithms.
- Scale your variables: If your variables have vastly different scales, normalize them first.
- Use bounds and constraints: They help guide the optimizer and enforce real-world limitations.
- Try multiple starting points: Sometimes the optimizer can get stuck in local minima.
- Check convergence: Always verify that the optimization succeeded (result.success).
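The "try multiple starting points" tip can be sketched as a simple multi-start loop. This example of mine uses a one-dimensional function with several local minima and keeps the best successful result:

```python
import numpy as np
from scipy.optimize import minimize

# A function with several local minima
def bumpy(x):
    return np.sin(3 * x[0]) + (x[0] / 3.0)**2

best = None
# Launch the optimizer from a grid of starting points
for start in np.linspace(-5, 5, 12):
    res = minimize(bumpy, [start], method='Nelder-Mead')
    if res.success and (best is None or res.fun < best.fun):
        best = res

print("Best x:", best.x)
print("Best function value:", best.fun)
```

A single run from an unlucky starting point would settle into one of the shallow local minima; the multi-start loop makes finding the global basin far more likely.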
I hope you found this article helpful! SciPy’s minimize function is incredibly powerful and versatile for all kinds of optimization problems. With these methods and examples, you should be well-equipped to solve your optimization challenges in Python.
I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last 5 years. During this time, I have gained expertise in various Python libraries, such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, Scikit-Learn, etc., working with clients in the United States, Canada, the United Kingdom, Australia, New Zealand, and other countries. Check out my profile.