In this Python tutorial, we will learn about "Scipy Optimize", where we will implement different optimization algorithms to find the optimal value of a function. Additionally, we will cover the following topics.
- Scipy Optimize
- Scipy Optimize Minimize example
- Scipy Optimize Minimize
- Scipy Optimize Curve Fit
- Scipy Optimize Least Squares
- Scipy Optimize Fmin
- Scipy Optimize Constraints
- Scipy Optimize Bounds
- Scipy Optimize Minimize Constraints Example
Scipy Optimize
Scipy Optimize (scipy.optimize) is a sub-package of Scipy that contains different kinds of methods to optimize a variety of functions. These methods are grouped according to the kind of problem they solve, such as Linear Programming, Least-Squares, Curve Fitting, and Root Finding. The list of methods, organized by category, is provided below.
Optimization:
Optimization is further divided into three kinds of optimization:
Scalar Functions Optimization: It contains the method minimize_scalar() to minimize a scalar function of one variable.
Multivariate Optimization: It contains the method minimize() to minimize a scalar function of more than one variable.
Global Optimization: It contains several methods based on different algorithms and optimization techniques, which are shown below.
- basinhopping(): It uses the basin-hopping algorithm to find the global minimum of a given function.
- brute(): It uses the brute-force method to minimize the given function over a specified range.
- differential_evolution(): It helps in finding the global minimum of a given multivariate function.
- dual_annealing(): It uses the dual annealing algorithm to find the global minimum of a given function.
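The global optimizers share a common pattern: you pass a function and a search region and get back an OptimizeResult. Below is a minimal sketch (the function wavy and its bounds are made up for illustration, not from the original tutorial) using differential_evolution().
import numpy as np
import scipy.optimize as ot

# A one-variable function with several local minima (illustrative only)
def wavy(x):
    return x[0] ** 2 + 10 * np.cos(2 * x[0])

# Search the interval [-5, 5] for the global minimum
result = ot.differential_evolution(wavy, bounds=[(-5, 5)])
print(result.x, result.fun)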
Curve Fitting:
It has the method curve_fit() that uses non-linear least squares to fit a function to a set of data.
Least-squares:
It is divided into two kinds of least-squares problems.
Nonlinear Least-squares: It has the method least_squares() to solve nonlinear least-squares problems with bounds on the variables.
Linear Least-squares: It contains the methods nnls() and lsq_linear() to solve linear least-squares problems with constraints on the variables.
Root Finding:
It uses different methods to find the zeros or roots of a given function. It is further divided into two categories based on the kind of function.
Scalar Functions: It has the popular method root_scalar(), among several others, which finds the zeros of a given scalar function.
Multidimensional: It has one method, root(), to find the zeros or roots of a given vector function.
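As a quick, hedged illustration of root finding (not part of the original tutorial), root_scalar() brackets a root of a scalar function, while root() handles a system of equations; the functions below are made up for demonstration.
import scipy.optimize as ot

# Scalar case: f(x) = x**3 - 1 has a root at x = 1
sol = ot.root_scalar(lambda x: x ** 3 - 1, bracket=[0, 3], method='brentq')
print(sol.root)  # approximately 1.0

# Vector case: solve x0 + 2 = 0 and x1 - 3 = 0 simultaneously
sol2 = ot.root(lambda x: [x[0] + 2, x[1] - 3], x0=[0, 0])
print(sol2.x)  # approximately [-2., 3.]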
Linear Programming:
It uses the method linprog() to minimize a linear objective function subject to constraints such as equality and inequality constraints.
Scipy Optimize Minimize
There are two optimization functions, minimize() and minimize_scalar(), to minimize a function.
The minimize_scalar() function is used to minimize a scalar function of one variable. The syntax below shows how to access and use this function, which exists in the sub-package scipy.optimize.
scipy.optimize.minimize_scalar(fun, bracket=None, bounds=None, args=(), method='brent', tol=None, options=None)
Where parameters are:
- fun: It is the objective function that is passed for minimization.
- method: It is the kind of solver or method that will be used for the given objective function. The methods are Brent, Bounded, Golden, and a custom callable.
- bracket: It describes the bracketing interval, used only by the methods Brent and Golden.
- bounds: It is a bound that contains two values; it is required when the method Bounded is used.
- args: These are additional arguments that are passed to the objective function.
- tol: It is the tolerance for termination.
- options: It is used to define the maximum number of iterations to perform, using maxiter.
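For instance, here is a minimal sketch (with a made-up objective) showing the method and bounds parameters together; the 'bounded' solver requires bounds.
import scipy.optimize as ot

# Minimize (x - 2)**2 restricted to the interval [0, 5]
res = ot.minimize_scalar(lambda x: (x - 2) ** 2,
                         method='bounded', bounds=(0, 5))
print(res.x)  # approximately 2.0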
The minimize() function is used to minimize a scalar function of more than one variable. The syntax below shows how to access and use this function, which exists in the sub-package scipy.optimize.
scipy.optimize.minimize(fun, x0, args=(), method=None, jac=None, hess=None, bounds=None, constraints=(), tol=None, options=None)
Where parameters are:
- fun: It is the objective function that is passed for minimization.
- x0: It is the initial guess for the variables.
- method: It is the kind of solver or method that will be used for the given objective function. The methods are Nelder-Mead, Powell, CG, BFGS, Newton-CG, L-BFGS-B, TNC, COBYLA, SLSQP, trust-constr, dogleg, trust-ncg, trust-exact, and trust-krylov.
- jac: It is the method to compute the gradient vector.
- hess: It is used to compute the Hessian matrix.
- bounds: It is a bound that contains two values; it is used with the methods Nelder-Mead, L-BFGS-B, TNC, SLSQP, Powell, and trust-constr.
- constraints: It takes the constraints of the objective function, such as equality and inequality constraints.
- tol: It is the tolerance for termination.
- options: It is used to define the maximum number of iterations to perform, using the option maxiter.
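As a hedged sketch of the jac parameter (the objective and starting point below are made up for illustration), supplying an analytic gradient lets gradient-based solvers such as BFGS avoid numerical differentiation.
import numpy as np
import scipy.optimize as ot

# Objective f(x) = x0**2 + x1**2 and its gradient (2*x0, 2*x1)
def f(x):
    return x[0] ** 2 + x[1] ** 2

def grad(x):
    return np.array([2 * x[0], 2 * x[1]])

res = ot.minimize(f, x0=[1.0, 2.5], method='BFGS', jac=grad)
print(res.x)  # approximately [0., 0.]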
Scipy Optimize Minimize example
Here we will see examples of the two minimization functions that we learned about in the above subsection "Scipy Optimize Minimize".
Before doing an example, let's understand what a scalar function is: a scalar function takes one value and outputs one value.
Here we are going to use a scalar function, the quadratic 2x² + 5x - 4, and we will find the minimum value of this objective function using the method minimize_scalar() of the scipy.optimize sub-package.
First import the Scipy optimize subpackage using the below code.
import scipy.optimize as ot
Define the Objective function
that we are going to minimize using the below code.
# Quadratic objective function: 2x² + 5x - 4
def Objective_Fun(x):
    return 2*x**2 + 5*x - 4
Now access the method minimize_scalar() from the sub-package optimize and pass the created objective function to it.
result = ot.minimize_scalar(Objective_Fun)
Check the result, the minimum value of the objective function, using the below code.
print(result)
The minimum value of the objective function is at x = -1.25 (where the function value is -7.125), as shown in the above output. Because this quadratic is convex, the local minimum found here is also the global minimum; in general, minimize_scalar() only guarantees a local minimum.
So far we have covered the method minimize_scalar(), which deals with a function of a single variable. But what happens if we have a function of more than one variable? In that case, the method minimize() is used to find the minimum value of the objective function.
The minimize() method can also deal with constraints on the objective function. There are three types of constraints, which are given below; a short sketch of each follows the list.
- Bounds Constraints: It implies that the values of x lie between the lower and upper bound.
- Linear Constraints: The solution is limited by performing the inner product of x values with a given user-input array and comparing the result with a lower and upper bound.
- Nonlinear Constraints: The solution is limited by applying the user-given function to x values and comparing the result with a lower and upper bound.
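To make the three kinds concrete, here is a minimal sketch (the objective and numbers are made up for illustration) using bound tuples plus the LinearConstraint and NonlinearConstraint helper classes from scipy.optimize.
import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint

# Bounds constraint: 0 <= x0 <= 10 and 0 <= x1 <= 10
bnds = [(0, 10), (0, 10)]

# Linear constraint: 1 <= 1*x0 + 2*x1 <= 5
lin_con = LinearConstraint([[1, 2]], 1, 5)

# Nonlinear constraint: x0**2 + x1 <= 4
nonlin_con = NonlinearConstraint(lambda x: x[0] ** 2 + x[1], -np.inf, 4)

res = minimize(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
               x0=[0, 0], method='trust-constr', bounds=bnds,
               constraints=[lin_con, nonlin_con])
print(res.x)  # approximately [1., 2.]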
Here we are going to work through an example showing how minimize() calculates the minimum value of the given objective function 60x₁² + 15x₂ with constraints.
The problem that we will solve using Scipy is given below.
Objective Function: 60x₁² + 15x₂
Constraints:
8x₁ + 16x₂ ≥ 200
60x₁ + 40x₂ ≥ 960
2x₁ + 2x₂ ≥ 40
x₁, x₂ ≥ 0
First, create the objective function in Python using the below code.
# Objective function: 60x₁² + 15x₂
def Obj_func(x):
    return (60*x[0]**2) + (15*x[1])
Then define the constraints in Python using the below code; each inequality constraint is written in the form g(x) ≥ 0.
const = ({'type': 'ineq', 'fun': lambda x: 8*x[0] + 16*x[1] - 200},
         {'type': 'ineq', 'fun': lambda x: 60*x[0] + 40*x[1] - 960},
         {'type': 'ineq', 'fun': lambda x: 2*x[0] + 2*x[1] - 40})
Define the bounds within which the optimal values lie.
bnds = ((0, None), (0, None))
Access the method minimize() from the sub-package scipy.optimize and pass the created objective function to it, along with the constraints and bounds, using the below code.
res = ot.minimize(Obj_func, (-1, 0), method='SLSQP', bounds=bnds,
                  constraints=const)
Check the result, the minimum value of the objective function, using the below code.
print(res)
The output shows the minimizer in res.x and the minimum value in res.fun. For this problem the constraint 60x₁ + 40x₂ ≥ 960 is active at the optimum, which lies at approximately x = [0.19, 23.72].
Scipy Optimize Curve Fit
In Scipy, the sub-package scipy.optimize has the method curve_fit() that fits a function to a given set of data points.
The syntax of the method is given below.
scipy.optimize.curve_fit(f, xdata, ydata, p0=None, check_finite=True, bounds=(-inf, inf), method=None)
Where parameters are:
- f: It is the model function.
- xdata: It is the independent variable: the data points as an array or similar object.
- ydata: It is the dependent variable: the measured data that the model f(xdata, ...) should reproduce; the model can involve sin, cos, etc.
- p0: It is the initial guess for the parameters of the model.
- check_finite: It is used to check whether the arrays contain NaN values; if they do, it raises a ValueError. By default, it is set to True.
- bounds: It defines the lower and upper bounds on the parameters.
- method: It is used to specify the algorithm for the least-squares optimization, such as 'trf', 'lm', etc.
To know more about curve fitting, follow the official documentation "Scipy Curve Fit".
Follow the below steps to fit a function to generated data using the method curve_fit().
Import the necessary libraries using the below code.
# Importing libraries
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as opt
First, generate some random data using the below code.
# Generating random data points in variables x and y and plotting them
np.random.seed(0)
x = np.linspace(-6, 6, num=60)
y = 5.2 * np.sin(1.0 * x) + np.random.normal(size=60)

# Plot the generated data points
plt.figure(figsize=(6, 4))
plt.scatter(x, y)
Look at the output to see how the generated data looks.
Create a new function sin_func and pass it to the method curve_fit() to fit it to the generated data using the below code.
# Creating the sin function and fitting it to the generated data
# using the curve_fit method
def sin_func(X, a, b):
    return a * np.sin(b * X)

param, param_covariance = opt.curve_fit(sin_func, x, y,
                                        p0=[1, 1])
print(param)
Let's plot the fitted function over the generated data using the below code.
# Plotting the fitted curve over the generated data
plt.figure(figsize=(6, 4))
plt.scatter(x, y, label='Data')
plt.plot(x, sin_func(x, param[0], param[1]),
         label='Fitted Sin function')
plt.legend(loc='best')
plt.show()
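A common follow-up (not shown in the original steps) is to estimate one-standard-deviation errors for the fitted parameters from the diagonal of the returned covariance matrix.
# One-standard-deviation errors on the fitted parameters a and b
perr = np.sqrt(np.diag(param_covariance))
print(perr)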
Scipy Optimize Fmin
The scipy.optimize sub-package contains a method fmin() that uses the downhill simplex algorithm to minimize a given function.
The syntax of the method is given below.
scipy.optimize.fmin(func, x0, args=(), maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, initial_simplex=None)
Where parameters are:
- func: It is the objective function that we want to minimize.
- x0: It is the initial guess provided to the method for the objective function.
- maxiter: The maximum number of iterations to perform.
- maxfun: The maximum number of function evaluations to perform.
- disp: Set it to True to show the convergence messages.
- retall: Set it to True to return the solution at each iteration.
- initial_simplex: If an initial simplex is provided, the method ignores the x0 guess and uses the initial_simplex instead.
Let’s take an example by following the below steps:
Import the module scipy.optimize
as opt
using the below code.
import scipy.optimize as opt
Define a new function computing y² in Python using the below code.
# Objective function: y**2
def function(y):
    return y**2
Access the method fmin() from the module scipy.optimize and pass the created function to it with the initial guess value 1.
min = opt.fmin(function, 1)
Check the value using the below code.
print(min)
From the output, 17 iterations were performed and the function was evaluated 34 times; the minimum found is [-8.8817842e-16], which is essentially zero.
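If you also want the minimum function value and the iteration statistics programmatically, rather than just reading the printed convergence message, fmin() can return them when full_output=1; here is a small sketch of that option.
# full_output=1 additionally returns the function value at the minimum,
# the iteration count, the number of function calls, and a warning flag
xopt, fopt, n_iter, funcalls, warnflag = opt.fmin(function, 1, full_output=1)
print(xopt, fopt)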
Scipy Optimize Least Squares
In the scipy.optimize sub-package, there are two methods, nnls() and lsq_linear(), for dealing with problems related to least squares.
The first, nnls(), solves non-negative least squares: it does not allow negative coefficients in the constrained least-squares problem.
The syntax of nnls() is given below.
scipy.optimize.nnls(A, b, maxiter=None)
Where parameters are:
- A: It is ndarray data or a matrix.
- b: It is a response variable ( a column) in vector form.
- maxiter: The maximum number of iterations to perform.
The above method aims to find argmin_x ||Ax - b||_2 subject to x ≥ 0, which means every component of the solution vector must be non-negative. The nnls() method returns the result as a solution vector (ndarray) together with the residual value as a float.
Let’s take an example by creating a matrix and a vector using the below steps:
Import the module scipy.optimize to access the method nnls(), and numpy to create an ndarray such as a matrix or a vector, using the below code.
# Importing the module scipy.optimize and numpy
import scipy.optimize as opt
import numpy as np
Create a matrix B and a vector c with the array function of NumPy using the below code.
# creating the matrix B and a vector c
B = np.array([[2, 1], [2, 1], [1, 2]])
c = np.array([3, 2, 2])
Access the nnls() method from scipy.optimize and pass the above-created matrix B and vector c to it.
# access the method nnls(), passing the matrix B and a vector c to it.
opt.nnls(B, c)
The output shows the solution vector [1., 0.5] with the residual, approximately 0.7071, as a float.
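Since nnls() returns a (solution, residual) tuple, it can also be unpacked directly.
# Unpack the solution vector and the residual norm separately
solution, residual = opt.nnls(B, c)
print("solution:", solution)
print("residual:", residual)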
The second method is lsq_linear(), which solves linear least-squares problems with bounds on the variables.
The syntax of lsq_linear() is given below.
scipy.optimize.lsq_linear(A, b, bounds=(-inf, inf), method='trf', lsmr_tol=None, max_iter=None, verbose=0)
Where parameters are:
- A: It is ndarray data or a matrix.
- b: It is a response variable ( a column) in vector form.
- bounds: It is the bounds (upper and lower) on the independent variable.
- method: It is used to specify which method to use for the minimization, such as 'trf' (trust-region reflective) or 'bvls' (bounded-variable least-squares).
- lsmr_tol: It is the tolerance parameter for the LSMR solver, by default set to 1e-2 * tol; the tolerance can also be adjusted automatically using the option 'auto'.
- max_iter: The maximum number of iterations to perform before termination.
- verbose: It is used to define the verbosity level of the algorithm: 0 means work silently, 1 means show a termination report, and 2 means show progress during the iterations.
The lsq_linear() method returns the result as a solution in an ndarray, the value of the cost function as a float, the vector of residuals as an ndarray, the number of iterations, and so on.
Let’s take an example using the below steps.
Import the necessary function rand, the module numpy, and the method lsq_linear() from scipy.optimize.
from scipy.sparse import rand
import numpy as np
from scipy.optimize import lsq_linear
Create a random number generator rng and two variables, l and m, with the values 30000 and 20000.
rng = np.random.default_rng()
l = 30000
m = 20000
Create a sparse matrix B using the function rand of the module scipy.sparse, and create a target vector c using the function standard_normal.
B = rand(l, m, density=1e-4, random_state=rng)
c = rng.standard_normal(l)
Define the lower and upper bounds using the below code.
lbound = rng.standard_normal(m)
ubound = lbound + 1
Find the optimal value for the given data by passing the created matrix B and vector c, with the bounds, to the method lsq_linear() for optimization, using the below code.
res = lsq_linear(B, c, bounds=(lbound, ubound), lsmr_tol='auto', verbose=1)
Checking the full result using the below code.
print(res)
From the output, we can see the result: the value of the cost function, the optimality, and so on.
Scipy Optimize Constraints Or Minimize Constraints Example
Here, we are going to optimize a problem with constraints using linear programming. The sub-package scipy.optimize contains a method linprog() to solve problems related to linear programming.
The linear problem that we want to optimize is given below.
maximize z = x + 3y
subject to 3x + y <= 25
-5x + 6y <= 15
-x + 3y >= -3
-x + 6y = 20
x >= 0
y >= 0
We need to optimize the above problem, but there is one catch: linear programming with linprog() only deals with minimization problems whose inequality constraints use the less-than-or-equal-to sign.
To solve the problem, we need to convert it into a minimization with less-than-or-equal-to constraints, so the problem changes as shown below.
minimize -z = -x - 3y
subject to 3x + y <= 25
-5x + 6y <= 15
x - 3y <= 3
-x + 6y = 20
x >= 0
y >= 0
Let's solve the above objective function -z = -x - 3y with its constraints using the below steps:
Import the method linprog() from the sub-package scipy.optimize using the below code.
# Importing the linprog
from scipy.optimize import linprog
Let's define the objective function and its constraints using the below code.
# Coefficients of x and y in the objective function (-x - 3y)
objective = [-1, -3]

# Left-hand-side coefficients of the inequality constraints
lhs_inequality = [[ 3, 1],
                  [-5, 6],
                  [ 1, -3]]

# Right-hand sides of the inequality constraints
rhs_inequality = [25,
                  15,
                  3]

# The equality constraint -x + 6y = 20
lhs_equality = [[-1, 6]]
rhs_equality = [20]
Define the bounds using the below code.
# defining the bounds for each variable
bound = [(0, float("inf")), # bounds of x
(0, float("inf"))] # bounds of y
Let's optimize, or minimize, the objective function by passing the defined objective and its constraints to the method linprog().
# Optimizing the problem using the method linprog()
opt_res = linprog(c=objective, A_ub=lhs_inequality, b_ub=rhs_inequality,
                  A_eq=lhs_equality, b_eq=rhs_equality, bounds=bound,
                  method="revised simplex")
Check the result after optimizing the above function.
print(opt_res)
The output shows the optimal value of the objective function in opt_res.fun and the optimal values of x and y in opt_res.x.
Scipy Optimize Bounds
In the Scipy sub-package scipy.optimize, there is a class called Bounds that places bound constraints on variables.
The syntax is given below.
scipy.optimize.Bounds(lb, ub, keep_feasible=False)
The general inequality form is given below.
lb <= x <= ub
Where lb and ub are the lower and upper bounds on the independent variable, and keep_feasible is used to keep the constraint components feasible throughout the iterations.
Let's take an example by defining the bounds 0 <= x <= 2, as in the sketch below.
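Here is a minimal sketch of these bounds in use (the objective function is made up for illustration): the unconstrained minimum of (x - 3)² lies at x = 3, outside the bounds, so a bounded solver stops at the upper bound x = 2.
import numpy as np
from scipy.optimize import Bounds, minimize

# Express 0 <= x <= 2 with the Bounds class
bnd = Bounds([0.0], [2.0])

# Minimize (x - 3)**2 subject to the bounds; the solver is pushed
# to the upper bound because the free minimum at x = 3 is infeasible
res = minimize(lambda x: (x[0] - 3.0) ** 2, x0=[1.0],
               bounds=bnd, method='L-BFGS-B')
print(res.x)  # approximately [2.]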
So, in this tutorial, we have learned the use of Scipy Optimize, where we implemented different optimization algorithms to find the optimal value of a function. Additionally, we covered the following topics.
- Scipy Optimize
- Scipy Optimize Minimize example
- Scipy Optimize Minimize
- Scipy Optimize Curve Fit
- Scipy Optimize Least Squares
- Scipy Optimize Fmin
- Scipy Optimize Constraints
- Scipy Optimize Bounds
- Scipy Optimize Minimize Constraints Example