How to use Python Scipy Differential Evolution

In this tutorial, we will learn about “Python Scipy Differential Evolution”. Differential Evolution (DE) is a population-based metaheuristic search technique that iteratively improves a potential solution through an evolutionary process in order to optimize a problem. We will also cover how to compute the solution in parallel and how to use different strategies, with the following topics.

  • What is Differential Evolution
  • Python Scipy Differential Evolution
  • How to use Scipy Differential Evolution using different Strategy
  • How to perform Scipy Differential Evolution in Parallel
  • Define constraints and use them in Differential Evolution
  • Python Scipy Differential Evolution Bounds

What is Differential Evolution

Differential evolution (DE), a technique from evolutionary computation, seeks to iteratively improve a candidate solution with respect to a given measure of quality.

Because they can search very large spaces of candidate solutions and make few or no assumptions about the problem being optimized, such techniques are often referred to as metaheuristics. Metaheuristics like DE, however, do not guarantee that an optimal solution will ever be found.

Unlike traditional optimization techniques such as gradient descent and quasi-Newton methods, which require the optimization problem to be differentiable, DE does not use the gradient of the problem being optimized and is therefore applicable to multidimensional real-valued functions. DE can consequently be applied to optimization problems that are not even continuous, are noisy, change over time, and so on.

DE maintains a population of candidate solutions, uses straightforward formulas to combine existing candidates into new ones, and keeps whichever candidate has the best score or fitness on the optimization task at hand.

In this method, the gradient is not required because the optimization problem is treated as a “black box” that merely provides a measure of quality for a given candidate solution.

  • The basic variant of the DE algorithm works with a population of candidate solutions (called agents). These agents are moved around in the search space using straightforward mathematical formulas that combine the positions of existing agents from the population.
  • If an agent’s new position is an improvement, it is accepted into the population; otherwise, it is simply discarded. It is hoped, but not guaranteed, that repeating this procedure will eventually lead to the discovery of a satisfactory solution. A minimal sketch of this loop is shown below.
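To make the agent-update loop concrete, here is a minimal, self-contained sketch of the classic DE/rand/1/bin variant written in plain NumPy. This is an illustrative toy, not SciPy’s implementation; the function name de_rand_1_bin and all default parameter values are our own choices.

import numpy as np

def de_rand_1_bin(func, bounds, pop_size=20, F=0.8, CR=0.7, max_gen=200, seed=0):
    # A toy DE/rand/1/bin loop; illustrative sketch only, not SciPy's implementation.
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(bounds, dtype=float).T  # bounds given as (min, max) pairs
    dim = len(lb)
    pop = rng.uniform(lb, ub, size=(pop_size, dim))  # random initial agents
    fitness = np.array([func(x) for x in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            # Mutation: pick three distinct agents (other than i) and form a difference vector.
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lb, ub)
            # Binomial crossover: mix mutant and current agent gene by gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # keep at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: accept the trial only if it improves on the agent.
            f_trial = func(trial)
            if f_trial < fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = fitness.argmin()
    return pop[best], fitness[best]

# Usage: minimize a simple sphere function in three dimensions.
best_x, best_f = de_rand_1_bin(lambda x: float(np.sum(x**2)), [(-5, 5)] * 3)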


Python Scipy Differential Evolution

The module scipy.optimize has a method differential_evolution() that finds a multivariate function’s global minimum.

The syntax is given below.

scipy.optimize.differential_evolution(func, bounds, strategy='best1bin', maxiter=1000, popsize=15, tol=0.01, mutation=(0.5, 1), recombination=0.7, seed=None, callback=None, disp=False, polish=True, init='latinhypercube', atol=0, updating='immediate', workers=1, constraints=(), x0=None, *, integrality=None, vectorized=False)

Where parameters are:

  • func(callable): The objective function to be minimized. Must have the form f(x, *args), where x is the input in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function. The number of parameters, N, is equal to len(x).
  • bounds(sequence): Bounds for the variables. The bounds can be specified in one of two ways: 1. An instance of the Bounds class. 2. A (min, max) pair for each element in x, specifying the finite lower and upper bounds for the optimization argument of func. The number of parameters, N, is taken from the number of bounds supplied.
  • strategy(string): The differential evolution strategy to use. Should be one of: ‘best1bin’, ‘best1exp’, ‘rand1exp’, ‘randtobest1exp’, ‘currenttobest1exp’, ‘best2exp’, ‘rand2exp’, ‘randtobest1bin’, ‘currenttobest1bin’, ‘best2bin’, ‘rand2bin’, ‘rand1bin’.
  • maxiter(int): The maximum number of generations over which the entire population can evolve. The maximum number of function evaluations (without polishing) is (maxiter + 1) * popsize * N.
  • popsize(int): A multiplier for setting the total population size. The population contains popsize * N individuals. This keyword is ignored if an initial population is supplied via the init keyword. When init=’sobol’ is used, the population size is rounded up to the next power of 2 after popsize * N.
  • tol(float): Relative tolerance for convergence. The solver stops when np.std(pop) <= atol + tol * np.abs(np.mean(population_energies)), where atol and tol are the absolute and relative tolerances, respectively.
  • mutation(float or tuple): The mutation constant, also known in the literature as the differential weight F. If a float is supplied, it should be in the range [0, 2]. If a tuple (min, max) is supplied, dithering is used: the mutation constant is drawn randomly from U[min, max] on a generation-by-generation basis. Dithering can considerably accelerate convergence. Increasing the mutation constant widens the search radius but slows convergence.
  • recombination(float): The recombination constant, which should be in the range [0, 1]. In the literature this is also known as the crossover probability and is denoted CR. Increasing this value allows more mutants to survive into the next generation, but at the cost of population stability.
  • seed: If seed is None, the numpy.random.RandomState singleton is used. If seed is an integer, a fresh RandomState instance is created and seeded with it. If seed is already a Generator or RandomState instance, that instance is used. Specify a seed for repeatable minimizations.
  • disp(bool): At each iteration, prints the evaluated function.
  • callback: A function for tracking the progress of the minimization, called as callback(xk, convergence=val), where xk is the best solution found so far and val represents the fractional value of the population convergence. The minimization halts when val is greater than 1. If callback returns True, the minimization is stopped (any polishing is still carried out).
  • polish(boolean): If True (the default), the best population member is polished at the end using the L-BFGS-B method, which can slightly improve the minimization. When a constrained problem is being studied, the trust-constr method is used instead.
  • init(string or array_like): Specifies the type of population initialization to perform. Should be one of: ‘latinhypercube’, ‘sobol’, ‘halton’, ‘random’, or an array specifying the initial population.
  • atol(float): Absolute tolerance for convergence. The solver stops when np.std(pop) <= atol + tol * np.abs(np.mean(population_energies)), where atol and tol are the absolute and relative tolerances, respectively.
  • updating(‘immediate’ or ‘deferred’): If ‘immediate’, the best solution vector is updated continuously within a single generation, which can lead to faster convergence because trial vectors can take advantage of ongoing improvements in the best solution. With ‘deferred’, the best solution vector is updated once per generation. Only ‘deferred’ is compatible with parallelization or vectorization, and the workers and vectorized keywords can override this option.
  • workers(int or map-like callable): If workers is an integer, the population is subdivided into workers sections and evaluated in parallel (using multiprocessing.Pool). Supply -1 to use all available CPU cores. Alternatively, supply a map-like callable, such as multiprocessing.Pool.map, to evaluate the population in parallel; it is called as workers(func, iterable). If workers != 1, this option overrides the updating keyword to updating=’deferred’ and takes precedence over the vectorized keyword. It also requires that func be pickleable.
  • constraints(Bounds, LinearConstraint, NonlinearConstraint): Constraints on the solver in addition to those applied by the bounds kwd. Uses Lampinen’s approach.
  • x0(array_like or None): Provides an initial guess for the minimization. Once the population has been initialized, this vector replaces the first (best) member. This replacement is carried out even if init is given an initial population. x0.shape must equal (N,).
  • integrality(1-D array): For each decision variable, a boolean value indicating whether the variable is constrained to integer values. The array broadcasts to shape (N,). If any decision variables are constrained to be integral, they will not be changed during polishing. Only integer values lying between the lower and upper bounds are used; a ValueError is raised if there are no integer values between the bounds.
  • vectorized(boolean): If vectorized is set to True, func is given an x array with x.shape == (N, S) and is expected to return an array of shape (S,), where S is the number of solution vectors to be computed. If constraints are applied, each function used to construct a Constraint object should accept an x array with x.shape == (N, S) and return an array of shape (M, S), where M is the number of constraint components. This alternative to worker parallelization may improve optimization speed by reducing interpreter overhead from repeated function calls. This keyword is ignored if workers != 1, and it overrides the updating keyword, forcing updating=’deferred’. See the notes section of the SciPy documentation for guidance on when to use ‘vectorized’ versus ‘workers’. A short sketch follows this list.
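As a quick illustration of the vectorized keyword (available in SciPy 1.9 and later), here is a sketch that minimizes a simple vectorized sphere function. The function name sphere_vec is our own; only the differential_evolution call itself comes from SciPy.

import numpy as np
from scipy import optimize

# With vectorized=True, x arrives with shape (N, S): one column per candidate solution.
def sphere_vec(x):
    return np.sum(x**2, axis=0)  # shape (S,); also handles the 1-D x used during polishing

res = optimize.differential_evolution(sphere_vec, [(-5, 5)] * 3,
                                      vectorized=True, updating='deferred', seed=1)
print(res.x, res.fun)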

The method differential_evolution() returns res: an OptimizeResult object representing the optimization result. Important attributes are: x, the solution array; success, a Boolean flag indicating whether the optimizer terminated successfully; and message, which describes the cause of the termination.

Let’s take an example and understand how the method differential_evolution() works.

Import the required libraries or methods using the below python code.

from scipy import optimize

Consider the Rosenbrock function minimization problem. In scipy.optimize, this function is implemented as rosen. Minimize it using the below code.

bounds_ = [(0,1), (0, 1), (0, 1), (0, 1), (0, 1)]
result_ = optimize.differential_evolution(optimize.rosen, bounds_)
result_.x, result_.fun

The Rosenbrock function has its global minimum of 0 at x = (1, …, 1), so with these bounds the solver should return values at or very near 1 for each parameter. This is how to perform differential evolution on the objective function rosen using the method differential_evolution() of Python Scipy.


Python Scipy Differential Evolution Strategy

Because it does not use gradient information during the search, the algorithm is particularly suited to non-differentiable nonlinear objective functions.

The method works by maintaining a population of candidate solutions represented as real-valued vectors. On each iteration of the algorithm, new candidate solutions are produced as modified versions of existing solutions, and they replace a sizable portion of the population.

New candidate solutions are created using a “strategy”, which comprises choosing a base solution to which a mutation is added, and other candidate solutions from the population from which the amount and type of mutation, known as a difference vector, are determined. For example, a strategy might choose the best candidate solution as the base and random solutions for the difference vector used in the mutation.

Differential evolution strategies are described using the following common terminology:


“DE/x/y/z”: DE refers to “differential evolution”; x designates the base solution that will be modified, for example “rand” for a random solution or “best” for the best solution found in the population; y denotes the number of difference vectors added to the base solution, such as 1; and z denotes the probability distribution used to decide whether each solution is retained or replaced in the population, such as binomial (“bin”) or exponential (“exp”).

Due to their strong performance across a wide range of objective functions, the configurations “DE/best/1/bin” and “DE/best/2/bin” are popular choices.

The “strategy” argument, which controls the kind of differential evolution search that is carried out, is a crucial hyperparameter of the Python Scipy method differential_evolution. It is normally set to “best1bin” (DE/best/1/bin), a configuration that works well for the majority of problems. It generates new candidate solutions by choosing random solutions from the population, subtracting one from another, and adding a scaled version of the difference to the best candidate solution in the population.
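To see how strategies compare in practice, here is a small sketch that runs a few of them on the Rosenbrock function. The particular strategies, bounds, and seed chosen here are arbitrary; res.fun is the best objective value found and res.nfev is the number of function evaluations used.

from scipy import optimize

# Compare a few strategies on the same problem; accuracy and cost can differ.
bounds_ = [(0, 2)] * 3
for strat in ('best1bin', 'rand1bin', 'best2bin'):
    res = optimize.differential_evolution(optimize.rosen, bounds_,
                                          strategy=strat, seed=1)
    print(strat, res.fun, res.nfev)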


Python Scipy Differential Evolution Parallel

The function can be minimized in parallel. To understand how parallel differential evolution works, refer to the parameters updating and workers explained in the subsection above.

Let’s take an example similar to the one in the above subsection, but with parallelization, by following the below steps:

Import the required libraries or methods using the below python code.

from scipy import optimize

Consider the Rosenbrock function minimization problem using the below code.

bounds_ = [(1,3), (1, 3), (1, 3), (1, 3), (1, 3)]
result_ = optimize.differential_evolution(optimize.rosen, bounds_,updating='deferred',workers=2)
result_.x, result_.fun

This is how to perform parallel differential evolution using the method differential_evolution() with the parameter workers of Python Scipy.
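The workers parameter also accepts a map-like callable. A sketch using multiprocessing.Pool.map (the worker count of 2 is an arbitrary choice) might look like this:

from multiprocessing import Pool
from scipy import optimize

bounds_ = [(1, 3)] * 5

if __name__ == '__main__':
    # Supply a map-like callable instead of an integer worker count.
    with Pool(2) as p:
        result_ = optimize.differential_evolution(optimize.rosen, bounds_,
                                                  updating='deferred',
                                                  workers=p.map)
    print(result_.x, result_.fun)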


Python Scipy Differential Evolution Constraints

We have already learned how to compute the differential evolution and about its parameters, so in this section we will perform a constrained minimization by following the below steps:

Import the required libraries or methods using the below python code.

from scipy import optimize
import numpy as np

Define the constraint function using the below code.

def constr_fun(x):
    return np.array(x[0] + x[1])

Use the classes NonlinearConstraint and Bounds to define the constraint and the bounds or limits using the below code.

nlc_ = optimize.NonlinearConstraint(constr_fun, -np.inf, 1.8)
bounds_ = optimize.Bounds([0., 0.], [1., 2.])

Now run the constrained minimization using the below code.

result = optimize.differential_evolution(optimize.rosen, bounds_, constraints=(nlc_),
                                seed=1)
result.x, result.fun

This is how to use the constraints with the method differential_evolution() of Python Scipy.
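Since this particular restriction, x0 + x1 <= 1.8, happens to be linear, it could equally be expressed with the LinearConstraint class; here is a sketch under that assumption:

import numpy as np
from scipy import optimize

# The same x0 + x1 <= 1.8 restriction expressed as a LinearConstraint.
lc_ = optimize.LinearConstraint(np.array([[1., 1.]]), -np.inf, 1.8)
bounds_ = optimize.Bounds([0., 0.], [1., 2.])
result = optimize.differential_evolution(optimize.rosen, bounds_,
                                         constraints=(lc_,), seed=1)
print(result.x, result.fun)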


Python Scipy Differential Evolution Bounds

The method differential_evolution() of Python Scipy accepts a parameter bounds. There are two ways of defining the bounds: 1. An instance of the Bounds class. 2. A (min, max) pair for each element in x, providing the finite lower and upper bounds for the optimization parameter of func. The total number of bounds is used to determine the parameter count, N.
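For example, both of the following describe the same search region (the variable names here are our own):

from scipy import optimize

# Two equivalent ways to express the same limits:
bounds_pairs = [(0., 1.), (0., 2.)]               # a sequence of (min, max) pairs
bounds_obj = optimize.Bounds([0., 0.], [1., 2.])  # a Bounds class instance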

Let’s take the same example that we have used in the above subsection and understand how parameter bounds work by following the below steps:

Import the required libraries or methods using the below python code.

from scipy import optimize
import numpy as np

Define the constraint function using the below code.

def constr_fun(x):
    return np.array(x[0] + x[1])

Use the classes NonlinearConstraint and Bounds to define the constraint and the bounds or limits using the below code.

nlc_ = optimize.NonlinearConstraint(constr_fun, -np.inf, 1.8)
bounds_ = optimize.Bounds([0., 0.], [1., 2.])

In the above code, the bounds are defined as Bounds([0., 0.], [1., 2.]).

Now run the constrained minimization with these bounds using the below code.

result = optimize.differential_evolution(optimize.rosen, bounds_, constraints=(nlc_),
                                seed=1)
result.x, result.fun

Now change the values of the bounds using the below code. Note that each lower bound must not exceed its corresponding upper bound, so here we raise the first lower bound to 0.5:

nlc_ = optimize.NonlinearConstraint(constr_fun, -np.inf, 1.8)
bounds_ = optimize.Bounds([0.5, 0.], [1., 2.])

Compute the constrained minimization with the different bounds using the below code.

result = optimize.differential_evolution(optimize.rosen, bounds_, constraints=(nlc_),
                                seed=1)
result.x, result.fun

This is how to define bounds and use them with the method differential_evolution() of Python Scipy.


In this tutorial, we learned how to find the optimal solution using differential evolution, how to run differential evolution in parallel to make the process faster, and how to use constraints and bounds with differential evolution, covering the following topics:

  • What is Differential Evolution
  • Python Scipy Differential Evolution
  • How to use Scipy Differential Evolution using different Strategy
  • How to perform Scipy Differential Evolution in Parallel
  • Define constraints and use them in Differential Evolution
  • Python Scipy Differential Evolution Bounds