schrodinger.application.jaguar.qrnn.temp_opt module

class schrodinger.application.jaguar.qrnn.temp_opt.MemoizeJac(fun)

Bases: object

Decorator that caches the return values of a function returning (fun, grad), so that the gradient computed alongside the function value can later be retrieved via derivative without re-evaluating the function.

__init__(fun)
derivative(x, *args)
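
MemoizeJac carries no usage example here; the sketch below is a minimal illustration, assuming the class behaves like the scipy.optimize helper of the same name (calling the wrapped object returns the function value and caches the gradient, which derivative then returns without re-evaluating fun). The toy objective fun_and_grad is made up for illustration.

>>> import numpy as np
>>> def fun_and_grad(x):
...     # toy objective returning (value, gradient)
...     return np.sum(x**2), 2 * x
>>> memo = MemoizeJac(fun_and_grad)
>>> x = np.array([1.0, 2.0])
>>> float(memo(x))        # evaluates fun_and_grad once and caches the gradient
5.0
>>> memo.derivative(x)    # returns the cached gradient for the same x
array([2., 4.])
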
class schrodinger.application.jaguar.qrnn.temp_opt.OptimizeResult

Bases: dict

Represents the optimization result.

x : ndarray

The solution of the optimization.

success : bool

Whether or not the optimizer exited successfully.

status : int

Termination status of the optimizer. Its value depends on the underlying solver. Refer to message for details.

message : str

Description of the cause of the termination.

fun, jac, hess : ndarray

Values of objective function, its Jacobian and its Hessian (if available). The Hessians may be approximations, see the documentation of the function in question.

hess_inv : object

Inverse of the objective function’s Hessian; may be an approximation. Not available for all solvers. The type of this attribute may be either np.ndarray or scipy.sparse.linalg.LinearOperator.

nfev, njev, nhev : int

Number of evaluations of the objective function and of its Jacobian and Hessian.

nit : int

Number of iterations performed by the optimizer.

maxcv : float

The maximum constraint violation.

OptimizeResult may have additional attributes not listed here depending on the specific solver being used. Since this class is essentially a subclass of dict with attribute accessors, one can see which attributes are available using the OptimizeResult.keys method.
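
Since OptimizeResult is essentially a dict with attribute accessors, the short sketch below shows both access styles; the keyword values are made up for illustration, and the class is assumed to behave like scipy.optimize.OptimizeResult.

>>> import numpy as np
>>> res = OptimizeResult(x=np.array([1.0, 2.0]), success=True, nit=7)
>>> bool(res.success)        # attribute access ...
True
>>> bool(res['success'])     # ... and dict access refer to the same entry
True
>>> sorted(res.keys())       # which attributes exist depends on the solver
['nit', 'success', 'x']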

exception schrodinger.application.jaguar.qrnn.temp_opt.OptimizeWarning

Bases: UserWarning

schrodinger.application.jaguar.qrnn.temp_opt.is_finite_scalar(x)

Test whether x is either a finite scalar or a finite array scalar.

schrodinger.application.jaguar.qrnn.temp_opt.vecnorm(x, ord=2)
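
vecnorm has no docstring here; as a hedged reference, the scipy.optimize helper of the same name returns the Euclidean norm for ord=2, the maximum absolute component for ord=np.inf, and the minimum absolute component for ord=-np.inf. Assuming this implementation matches, the equivalent NumPy computations are:

>>> import numpy as np
>>> x = np.array([3.0, -4.0])
>>> float(np.linalg.norm(x, ord=2))   # expected vecnorm(x, ord=2)
5.0
>>> float(np.max(np.abs(x)))          # expected vecnorm(x, ord=np.inf)
4.0
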
schrodinger.application.jaguar.qrnn.temp_opt.rosen(x)

The Rosenbrock function.

The function computed is:

sum(100.0*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)
x : array_like

1-D array of points at which the Rosenbrock function is to be computed.

f : float

The value of the Rosenbrock function.

rosen_der, rosen_hess, rosen_hess_prod

>>> import numpy as np
>>> from scipy.optimize import rosen
>>> X = 0.1 * np.arange(10)
>>> rosen(X)
76.56

For higher-dimensional input rosen broadcasts. In the following example, we use this to plot a 2D landscape. Note that rosen_hess does not broadcast in this manner.

>>> import matplotlib.pyplot as plt
>>> from mpl_toolkits.mplot3d import Axes3D
>>> x = np.linspace(-1, 1, 50)
>>> X, Y = np.meshgrid(x, x)
>>> ax = plt.subplot(111, projection='3d')
>>> ax.plot_surface(X, Y, rosen([X, Y]))
>>> plt.show()
schrodinger.application.jaguar.qrnn.temp_opt.rosen_der(x)

The derivative (i.e. gradient) of the Rosenbrock function.

x : array_like

1-D array of points at which the derivative is to be computed.

rosen_der : (N,) ndarray

The gradient of the Rosenbrock function at x.

rosen, rosen_hess, rosen_hess_prod

>>> import numpy as np
>>> from scipy.optimize import rosen_der
>>> X = 0.1 * np.arange(9)
>>> rosen_der(X)
array([ -2. ,  10.6,  15.6,  13.4,   6.4,  -3. , -12.4, -19.4,  62. ])
schrodinger.application.jaguar.qrnn.temp_opt.rosen_hess(x)

The Hessian matrix of the Rosenbrock function.

x : array_like

1-D array of points at which the Hessian matrix is to be computed.

rosen_hess : ndarray

The Hessian matrix of the Rosenbrock function at x.

rosen, rosen_der, rosen_hess_prod

>>> import numpy as np
>>> from scipy.optimize import rosen_hess
>>> X = 0.1 * np.arange(4)
>>> rosen_hess(X)
array([[-38.,   0.,   0.,   0.],
       [  0., 134., -40.,   0.],
       [  0., -40., 130., -80.],
       [  0.,   0., -80., 200.]])
schrodinger.application.jaguar.qrnn.temp_opt.rosen_hess_prod(x, p)

Product of the Hessian matrix of the Rosenbrock function with a vector.

x : array_like

1-D array of points at which the Hessian matrix is to be computed.

p : array_like

1-D array, the vector to be multiplied by the Hessian matrix.

rosen_hess_prod : ndarray

The Hessian matrix of the Rosenbrock function at x multiplied by the vector p.

rosen, rosen_der, rosen_hess

>>> import numpy as np
>>> from scipy.optimize import rosen_hess_prod
>>> X = 0.1 * np.arange(9)
>>> p = 0.5 * np.arange(9)
>>> rosen_hess_prod(X, p)
array([  -0.,   27.,  -10.,  -95., -192., -265., -278., -195., -180.])
schrodinger.application.jaguar.qrnn.temp_opt.fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, initial_simplex=None)

Minimize a function using the downhill simplex algorithm.

This algorithm only uses function values, not derivatives or second derivatives.

func : callable func(x,*args)

The objective function to be minimized.

x0 : ndarray

Initial guess.

args : tuple, optional

Extra arguments passed to func, i.e., f(x,*args).

xtol : float, optional

Absolute error in xopt between iterations that is acceptable for convergence.

ftol : number, optional

Absolute error in func(xopt) between iterations that is acceptable for convergence.

maxiter : int, optional

Maximum number of iterations to perform.

maxfun : number, optional

Maximum number of function evaluations to make.

full_output : bool, optional

Set to True if fopt and warnflag outputs are desired.

disp : bool, optional

Set to True to print convergence messages.

retall : bool, optional

Set to True to return list of solutions at each iteration.

callback : callable, optional

Called after each iteration, as callback(xk), where xk is the current parameter vector.

initial_simplex : array_like of shape (N + 1, N), optional

Initial simplex. If given, overrides x0. initial_simplex[j,:] should contain the coordinates of the jth vertex of the N+1 vertices in the simplex, where N is the dimension.

xopt : ndarray

Parameter that minimizes function.

fopt : float

Value of function at minimum: fopt = func(xopt).

iter : int

Number of iterations performed.

funcalls : int

Number of function calls made.

warnflag : int

1 : Maximum number of function evaluations made. 2 : Maximum number of iterations reached.

allvecs : list

Solution at each iteration.

minimize : Interface to minimization algorithms for multivariate functions. See the ‘Nelder-Mead’ method in particular.

Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.

This algorithm has a long history of successful use in applications. But it will usually be slower than an algorithm that uses first or second derivative information. In practice, it can have poor performance in high-dimensional problems and is not robust to minimizing complicated functions. Additionally, there currently is no complete theory describing when the algorithm will successfully converge to the minimum, or how fast it will if it does. Both the ftol and xtol criteria must be met for convergence.

>>> def f(x):
...     return x**2
>>> from scipy import optimize
>>> minimum = optimize.fmin(f, 1)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 17
         Function evaluations: 34
>>> minimum[0]
-8.8817841970012523e-16
[1] Nelder, J.A. and Mead, R. (1965), “A simplex method for function minimization”, The Computer Journal, 7, pp. 308-313

[2] Wright, M.H. (1996), “Direct Search Methods: Once Scorned, Now Respectable”, in Numerical Analysis 1995, Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis, D.F. Griffiths and G.A. Watson (Eds.), Addison Wesley Longman, Harlow, UK, pp. 191-208.

schrodinger.application.jaguar.qrnn.temp_opt.approx_fprime(xk, f, epsilon=1.4901161193847656e-08, *args)

Finite difference approximation of the derivatives of a scalar or vector-valued function.

If a function maps from R^n to R^m, its derivatives form an m-by-n matrix called the Jacobian, where an element (i, j) is a partial derivative of f[i] with respect to xk[j].

xk : array_like

The coordinate vector at which to determine the gradient of f.

f : callable

Function of which to estimate the derivatives of. Has the signature f(xk, *args) where xk is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function. The argument xk passed to this function is an ndarray of shape (n,) (never a scalar even if n=1). It must return a 1-D array_like of shape (m,) or a scalar.

Changed in version 1.9.0: f is now able to return a 1-D array-like, with the (m, n) Jacobian being estimated.

epsilon : {float, array_like}, optional

Increment to xk to use for determining the function gradient. If a scalar, uses the same finite difference delta for all partial derivatives. If an array, should contain one value per element of xk. Defaults to sqrt(np.finfo(float).eps), which is approximately 1.49e-08.

*args : args, optional

Any other arguments that are to be passed to f.

jac : ndarray

The partial derivatives of f with respect to xk.

check_grad : Check correctness of gradient function against approx_fprime.

The function gradient is determined by the forward finite difference formula:

         f(xk[i] + epsilon[i]) - f(xk[i])
f'[i] = ---------------------------------
                    epsilon[i]
>>> import numpy as np
>>> from scipy import optimize
>>> def func(x, c0, c1):
...     "Coordinate vector `x` should be an array of size two."
...     return c0 * x[0]**2 + c1*x[1]**2
>>> x = np.ones(2)
>>> c0, c1 = (1, 200)
>>> eps = np.sqrt(np.finfo(float).eps)
>>> optimize.approx_fprime(x, func, [eps, np.sqrt(200) * eps], c0, c1)
array([   2.        ,  400.00004198])
schrodinger.application.jaguar.qrnn.temp_opt.check_grad(func, grad, x0, *args, epsilon=1.4901161193847656e-08, direction='all', seed=None)

Check the correctness of a gradient function by comparing it against a (forward) finite-difference approximation of the gradient.

func : callable func(x0, *args)

Function whose derivative is to be checked.

grad : callable grad(x0, *args)

Jacobian of func.

x0 : ndarray

Points to check grad against forward difference approximation of grad using func.

args : *args, optional

Extra arguments passed to func and grad.

epsilon : float, optional

Step size used for the finite difference approximation. It defaults to sqrt(np.finfo(float).eps), which is approximately 1.49e-08.

direction : str, optional

If set to 'random', then gradients along a random vector are used to check grad against forward difference approximation using func. By default it is 'all', in which case, all the one hot direction vectors are considered to check grad. If func is a vector valued function then only 'all' can be used.

seed : {None, int, numpy.random.Generator, numpy.random.RandomState}, optional

If seed is None (or np.random), the numpy.random.RandomState singleton is used. If seed is an int, a new RandomState instance is used, seeded with seed. If seed is already a Generator or RandomState instance then that instance is used. Specify seed for reproducing the return value from this function. The random numbers generated with this seed affect the random vector along which gradients are computed to check grad. Note that seed is only used when direction argument is set to 'random'.

err : float

The square root of the sum of squares (i.e., the 2-norm) of the difference between grad(x0, *args) and the finite difference approximation of grad using func at the points x0.

approx_fprime

>>> import numpy as np
>>> def func(x):
...     return x[0]**2 - 0.5 * x[1]**3
>>> def grad(x):
...     return [2 * x[0], -1.5 * x[1]**2]
>>> from scipy.optimize import check_grad
>>> check_grad(func, grad, [1.5, -1.5])
2.9802322387695312e-08  # may vary
>>> rng = np.random.default_rng()
>>> check_grad(func, grad, [1.5, -1.5],
...             direction='random', seed=rng)
2.9802322387695312e-08
schrodinger.application.jaguar.qrnn.temp_opt.approx_fhess_p(x0, p, fprime, epsilon, *args)
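
approx_fhess_p has no docstring here. Assuming it mirrors the scipy.optimize helper of the same name, it forms a forward-difference approximation of the Hessian-vector product from fprime, i.e. (fprime(x0 + epsilon*p) - fprime(x0)) / epsilon. The sketch below checks that assumed formula against rosen_hess_prod; the tolerance is illustrative.

>>> import numpy as np
>>> from scipy.optimize import rosen_der, rosen_hess_prod
>>> x0 = 0.1 * np.arange(4)
>>> p = np.array([1.0, 0.0, 0.0, 0.0])
>>> eps = np.sqrt(np.finfo(float).eps)
>>> hp = (rosen_der(x0 + eps * p) - rosen_der(x0)) / eps   # assumed formula
>>> bool(np.allclose(hp, rosen_hess_prod(x0, p), atol=1e-4))
True
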
schrodinger.application.jaguar.qrnn.temp_opt.fmin_bfgs(f, x0, fprime=None, args=(), gtol=1e-05, norm=inf, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None, xrtol=0)

Minimize a function using the BFGS algorithm.

f : callable f(x,*args)

Objective function to be minimized.

x0 : ndarray

Initial guess.

fprime : callable f'(x,*args), optional

Gradient of f.

args : tuple, optional

Extra arguments passed to f and fprime.

gtol : float, optional

Terminate successfully if gradient norm is less than gtol

norm : float, optional

Order of norm (Inf is max, -Inf is min)

epsilon : int or ndarray, optional

If fprime is approximated, use this value for the step size.

callback : callable, optional

An optional user-supplied function to call after each iteration. Called as callback(xk), where xk is the current parameter vector.

maxiter : int, optional

Maximum number of iterations to perform.

full_output : bool, optional

If True, return fopt, func_calls, grad_calls, and warnflag in addition to xopt.

disp : bool, optional

Print convergence message if True.

retall : bool, optional

Return a list of results at each iteration if True.

xrtol : float, default: 0

Relative tolerance for x. Terminate successfully if step size is less than xk * xrtol where xk is the current parameter vector.

xopt : ndarray

Parameters which minimize f, i.e., f(xopt) == fopt.

fopt : float

Minimum value.

gopt : ndarray

Value of gradient at minimum, f’(xopt), which should be near 0.

Bopt : ndarray

Value of 1/f’’(xopt), i.e., the inverse Hessian matrix.

func_calls : int

Number of function calls made.

grad_calls : int

Number of gradient calls made.

warnflag : integer

1 : Maximum number of iterations exceeded. 2 : Gradient and/or function calls not changing. 3 : NaN result encountered.

allvecs : list

The value of xopt at each iteration. Only returned if retall is True.

Optimize the function, f, whose gradient is given by fprime using the quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS).

minimize : Interface to minimization algorithms for multivariate functions. See method='BFGS' in particular.

Wright and Nocedal, ‘Numerical Optimization’, 1999, p. 198.

>>> import numpy as np
>>> from scipy.optimize import fmin_bfgs
>>> def quadratic_cost(x, Q):
...     return x @ Q @ x
...
>>> x0 = np.array([-3, -4])
>>> cost_weight =  np.diag([1., 10.])
>>> # Note that a trailing comma is necessary for a tuple with single element
>>> fmin_bfgs(quadratic_cost, x0, args=(cost_weight,))
Optimization terminated successfully.
        Current function value: 0.000000
        Iterations: 7                   # may vary
        Function evaluations: 24        # may vary
        Gradient evaluations: 8         # may vary
array([ 2.85169950e-06, -4.61820139e-07])
>>> def quadratic_cost_grad(x, Q):
...     return 2 * Q @ x
...
>>> fmin_bfgs(quadratic_cost, x0, quadratic_cost_grad, args=(cost_weight,))
Optimization terminated successfully.
        Current function value: 0.000000
        Iterations: 7
        Function evaluations: 8
        Gradient evaluations: 8
array([ 2.85916637e-06, -4.54371951e-07])
schrodinger.application.jaguar.qrnn.temp_opt.fmin_cg(f, x0, fprime=None, args=(), gtol=1e-05, norm=inf, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None)

Minimize a function using a nonlinear conjugate gradient algorithm.

f : callable, f(x, *args)

Objective function to be minimized. Here x must be a 1-D array of the variables that are to be changed in the search for a minimum, and args are the other (fixed) parameters of f.

x0 : ndarray

A user-supplied initial estimate of xopt, the optimal value of x. It must be a 1-D array of values.

fprime : callable, fprime(x, *args), optional

A function that returns the gradient of f at x. Here x and args are as described above for f. The returned value must be a 1-D array. Defaults to None, in which case the gradient is approximated numerically (see epsilon, below).

args : tuple, optional

Parameter values passed to f and fprime. Must be supplied whenever additional fixed parameters are needed to completely specify the functions f and fprime.

gtol : float, optional

Stop when the norm of the gradient is less than gtol.

norm : float, optional

Order to use for the norm of the gradient (-np.Inf is min, np.Inf is max).

epsilon : float or ndarray, optional

Step size(s) to use when fprime is approximated numerically. Can be a scalar or a 1-D array. Defaults to sqrt(eps), with eps the floating point machine precision. Usually sqrt(eps) is about 1.5e-8.

maxiter : int, optional

Maximum number of iterations to perform. Default is 200 * len(x0).

full_output : bool, optional

If True, return fopt, func_calls, grad_calls, and warnflag in addition to xopt. See the Returns section below for additional information on optional return values.

disp : bool, optional

If True, print a convergence message, followed by xopt.

retall : bool, optional

If True, add to the returned values the results of each iteration.

callback : callable, optional

An optional user-supplied function, called after each iteration. Called as callback(xk), where xk is the current value of x0.

xopt : ndarray

Parameters which minimize f, i.e., f(xopt) == fopt.

fopt : float, optional

Minimum value found, f(xopt). Only returned if full_output is True.

func_calls : int, optional

The number of function calls made. Only returned if full_output is True.

grad_calls : int, optional

The number of gradient calls made. Only returned if full_output is True.

warnflag : int, optional

Integer value with warning status, only returned if full_output is True.

0 : Success.

1 : The maximum number of iterations was exceeded.

2 : Gradient and/or function calls were not changing. May indicate that precision was lost, i.e., the routine did not converge.

3 : NaN result encountered.

allvecs : list of ndarray, optional

List of arrays, containing the results at each iteration. Only returned if retall is True.

minimize : common interface to all scipy.optimize algorithms for unconstrained and constrained minimization of multivariate functions. It provides an alternative way to call fmin_cg, by specifying method='CG'.

This conjugate gradient algorithm is based on that of Polak and Ribiere [1].

Conjugate gradient methods tend to work better when:

  1. f has a unique global minimizing point, and no local minima or other stationary points,

  2. f is, at least locally, reasonably well approximated by a quadratic function of the variables,

  3. f is continuous and has a continuous gradient,

  4. fprime is not too large, e.g., has a norm less than 1000,

  5. The initial guess, x0, is reasonably close to f ‘s global minimizing point, xopt.

[1] Wright & Nocedal, “Numerical Optimization”, 1999, pp. 120-122.

Example 1: seek the minimum value of the expression a*u**2 + b*u*v + c*v**2 + d*u + e*v + f for given values of the parameters and an initial guess (u, v) = (0, 0).

>>> import numpy as np
>>> args = (2, 3, 7, 8, 9, 10)  # parameter values
>>> def f(x, *args):
...     u, v = x
...     a, b, c, d, e, f = args
...     return a*u**2 + b*u*v + c*v**2 + d*u + e*v + f
>>> def gradf(x, *args):
...     u, v = x
...     a, b, c, d, e, f = args
...     gu = 2*a*u + b*v + d     # u-component of the gradient
...     gv = b*u + 2*c*v + e     # v-component of the gradient
...     return np.asarray((gu, gv))
>>> x0 = np.asarray((0, 0))  # Initial guess.
>>> from scipy import optimize
>>> res1 = optimize.fmin_cg(f, x0, fprime=gradf, args=args)
Optimization terminated successfully.
         Current function value: 1.617021
         Iterations: 4
         Function evaluations: 8
         Gradient evaluations: 8
>>> res1
array([-1.80851064, -0.25531915])

Example 2: solve the same problem using the minimize function. (This opts dictionary shows all of the available options, although in practice only non-default values would be needed. The returned value will be a dictionary.)

>>> opts = {'maxiter' : None,    # default value.
...         'disp' : True,    # non-default value.
...         'gtol' : 1e-5,    # default value.
...         'norm' : np.inf,  # default value.
...         'eps' : 1.4901161193847656e-08}  # default value.
>>> res2 = optimize.minimize(f, x0, jac=gradf, args=args,
...                          method='CG', options=opts)
Optimization terminated successfully.
        Current function value: 1.617021
        Iterations: 4
        Function evaluations: 8
        Gradient evaluations: 8
>>> res2.x  # minimum found
array([-1.80851064, -0.25531915])
schrodinger.application.jaguar.qrnn.temp_opt.fmin_ncg(f, x0, fprime, fhess_p=None, fhess=None, args=(), avextol=1e-05, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None)

Unconstrained minimization of a function using the Newton-CG method.

f : callable f(x, *args)

Objective function to be minimized.

x0 : ndarray

Initial guess.

fprime : callable f'(x, *args)

Gradient of f.

fhess_p : callable fhess_p(x, p, *args), optional

Function which computes the Hessian of f times an arbitrary vector, p.

fhess : callable fhess(x, *args), optional

Function to compute the Hessian matrix of f.

args : tuple, optional

Extra arguments passed to f, fprime, fhess_p, and fhess (the same set of extra arguments is supplied to all of these functions).

epsilon : float or ndarray, optional

If fhess is approximated, use this value for the step size.

callback : callable, optional

An optional user-supplied function which is called after each iteration. Called as callback(xk), where xk is the current parameter vector.

avextol : float, optional

Convergence is assumed when the average relative error in the minimizer falls below this amount.

maxiter : int, optional

Maximum number of iterations to perform.

full_output : bool, optional

If True, return the optional outputs.

disp : bool, optional

If True, print convergence message.

retall : bool, optional

If True, return a list of results at each iteration.

xopt : ndarray

Parameters which minimize f, i.e., f(xopt) == fopt.

fopt : float

Value of the function at xopt, i.e., fopt = f(xopt).

fcalls : int

Number of function calls made.

gcalls : int

Number of gradient calls made.

hcalls : int

Number of Hessian calls made.

warnflag : int

Warnings generated by the algorithm. 1 : Maximum number of iterations exceeded. 2 : Line search failure (precision loss). 3 : NaN result encountered.

allvecs : list

The result at each iteration, if retall is True (see below).

minimize : Interface to minimization algorithms for multivariate functions. See the ‘Newton-CG’ method in particular.

Only one of fhess_p or fhess needs to be given. If fhess is provided, then fhess_p will be ignored. fhess_p must compute the Hessian times an arbitrary vector; if neither fhess nor fhess_p is provided, the Hessian product is approximated using finite differences on fprime.

Newton-CG methods are also called truncated Newton methods. This function differs from scipy.optimize.fmin_tnc because

  1. scipy.optimize.fmin_ncg is written purely in Python using NumPy and scipy, while scipy.optimize.fmin_tnc calls a C function.

  2. scipy.optimize.fmin_ncg is only for unconstrained minimization, while scipy.optimize.fmin_tnc is for unconstrained minimization or box constrained minimization. (Box constraints give lower and upper bounds for each variable separately.)

Wright & Nocedal, ‘Numerical Optimization’, 1999, p. 140.
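
No example accompanies this docstring; below is a short hedged illustration using the Rosenbrock helpers from scipy.optimize (disp=0 suppresses the convergence printout; exact digits and iteration counts may vary).

>>> import numpy as np
>>> from scipy.optimize import fmin_ncg, rosen, rosen_der, rosen_hess
>>> x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
>>> xopt = fmin_ncg(rosen, x0, rosen_der, fhess=rosen_hess, disp=0)
>>> bool(np.allclose(xopt, 1.0, atol=1e-4))   # the Rosenbrock minimum is at all ones
True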

schrodinger.application.jaguar.qrnn.temp_opt.fminbound(func, x1, x2, args=(), xtol=1e-05, maxfun=500, full_output=0, disp=1)

Bounded minimization for scalar functions.

func : callable f(x,*args)

Objective function to be minimized (must accept and return scalars).

x1, x2 : float or array scalar

Finite optimization bounds.

args : tuple, optional

Extra arguments passed to function.

xtol : float, optional

The convergence tolerance.

maxfun : int, optional

Maximum number of function evaluations allowed.

full_output : bool, optional

If True, return optional outputs.

disp : int, optional
If non-zero, print messages.

0 : no message printing. 1 : non-convergence notification messages only. 2 : print a message on convergence too. 3 : print iteration results.

xopt : ndarray

Parameters (over given interval) which minimize the objective function.

fval : number

The function value evaluated at the minimizer.

ierr : int

An error flag (0 if converged, 1 if maximum number of function calls reached).

numfunc : int

The number of function calls made.

minimize_scalar : Interface to minimization algorithms for scalar univariate functions. See the ‘Bounded’ method in particular.

Finds a local minimizer of the scalar function func in the interval x1 < xopt < x2 using Brent’s method. (See brent for auto-bracketing.)

[1] Forsythe, G.E., M. A. Malcolm, and C. B. Moler. “Computer Methods for Mathematical Computations.” Prentice-Hall Series in Automatic Computation 259 (1977).

[2] Brent, Richard P. Algorithms for Minimization Without Derivatives. Courier Corporation, 2013.

fminbound finds the minimizer of the function in the given range. The following examples illustrate this.

>>> from scipy import optimize
>>> def f(x):
...     return (x-1)**2
>>> minimizer = optimize.fminbound(f, -4, 4)
>>> minimizer
1.0
>>> minimum = f(minimizer)
>>> minimum
0.0
>>> minimizer = optimize.fminbound(f, 3, 4)
>>> minimizer
3.000005960860986
>>> minimum = f(minimizer)
>>> minimum
4.000023843479476
class schrodinger.application.jaguar.qrnn.temp_opt.Brent(func, args=(), tol=1.48e-08, maxiter=500, full_output=0, disp=0)

Bases: object

__init__(func, args=(), tol=1.48e-08, maxiter=500, full_output=0, disp=0)
set_bracket(brack=None)
get_bracket_info()
optimize()
get_result(full_output=False)
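
The Brent class methods are undocumented here; the sketch below shows the expected call sequence, assuming the class mirrors the internal Brent helper used by scipy.optimize.brent (construct with the objective, set a bracket, run optimize, then read the result). The objective f is the same toy function used in the brent() example below.

>>> def f(x):
...     return x**2
>>> b = Brent(f)
>>> b.set_bracket((1, 2))      # pair or triple, as described for brent() below
>>> b.optimize()
>>> xmin = b.get_result()
>>> bool(abs(xmin) < 1e-6)     # the minimum of x**2 is at 0
True
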
schrodinger.application.jaguar.qrnn.temp_opt.brent(func, args=(), brack=None, tol=1.48e-08, full_output=0, maxiter=500)

Given a function of one variable and a possible bracket, return the local minimum of the function isolated to a fractional precision of tol.

func : callable f(x,*args)

Objective function.

args : tuple, optional

Additional arguments (if present).

brack : tuple, optional

Either a triple (xa,xb,xc) where xa<xb<xc and func(xb) < func(xa), func(xc) or a pair (xa,xb) which are used as a starting interval for a downhill bracket search (see bracket). Providing the pair (xa,xb) does not always mean the obtained solution will satisfy xa<=x<=xb.

tol : float, optional

Relative error in solution xopt acceptable for convergence.

full_output : bool, optional

If True, return all output args (xmin, fval, iter, funcalls).

maxiter : int, optional

Maximum number of iterations in solution.

xmin : ndarray

Optimum point.

fval : float

Optimum value.

iter : int

Number of iterations.

funcalls : int

Number of objective function evaluations made.

minimize_scalar : Interface to minimization algorithms for scalar univariate functions. See the ‘Brent’ method in particular.

Uses inverse parabolic interpolation when possible to speed up convergence of golden section method.

Does not ensure that the minimum lies in the range specified by brack. See fminbound.

We illustrate the behaviour of the function when brack is of size 2 and 3 respectively. In the case where brack is of the form (xa,xb), we can see for the given values, the output need not necessarily lie in the range (xa,xb).

>>> def f(x):
...     return x**2
>>> from scipy import optimize
>>> minimum = optimize.brent(f,brack=(1,2))
>>> minimum
0.0
>>> minimum = optimize.brent(f,brack=(-1,0.5,2))
>>> minimum
-2.7755575615628914e-17
schrodinger.application.jaguar.qrnn.temp_opt.golden(func, args=(), brack=None, tol=1.4901161193847656e-08, full_output=0, maxiter=5000)

Return the minimum of a function of one variable using the golden section method.

Given a function of one variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.

func : callable func(x,*args)

Objective function to minimize.

args : tuple, optional

Additional arguments (if present), passed to func.

brack : tuple, optional

Triple (a,b,c), where (a<b<c) and func(b) < func(a),func(c). If bracket consists of two numbers (a, c), then they are assumed to be a starting interval for a downhill bracket search (see bracket); it doesn’t always mean that obtained solution will satisfy a<=x<=c.

tol : float, optional

x tolerance stop criterion

full_output : bool, optional

If True, return optional outputs.

maxiter : int

Maximum number of iterations to perform.

minimize_scalar : Interface to minimization algorithms for scalar univariate functions. See the ‘Golden’ method in particular.

Uses an analog of the bisection method to decrease the bracketed interval.

We illustrate the behaviour of the function when brack is of size 2 and 3, respectively. In the case where brack is of the form (xa,xb), we can see for the given values, the output need not necessarily lie in the range (xa, xb).

>>> def f(x):
...     return x**2
>>> from scipy import optimize
>>> minimum = optimize.golden(f, brack=(1, 2))
>>> minimum
1.5717277788484873e-162
>>> minimum = optimize.golden(f, brack=(-1, 0.5, 2))
>>> minimum
-1.5717277788484873e-162
schrodinger.application.jaguar.qrnn.temp_opt.bracket(func, xa=0.0, xb=1.0, args=(), grow_limit=110.0, maxiter=1000)

Bracket the minimum of the function.

Given a function and distinct initial points, search in the downhill direction (as defined by the initial points) and return new points xa, xb, xc that bracket the minimum of the function f(xa) > f(xb) < f(xc). It doesn’t always mean that obtained solution will satisfy xa<=x<=xb.

func : callable f(x,*args)

Objective function to minimize.

xa, xb : float, optional

Bracketing interval. Defaults xa to 0.0, and xb to 1.0.

args : tuple, optional

Additional arguments (if present), passed to func.

grow_limit : float, optional

Maximum grow limit. Defaults to 110.0

maxiter : int, optional

Maximum number of iterations to perform. Defaults to 1000.

xa, xb, xc : float

Bracket.

fa, fb, fc : float

Objective function values in bracket.

funcalls : int

Number of function evaluations made.

This function can find a downward convex region of a function:

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.optimize import bracket
>>> def f(x):
...     return 10*x**2 + 3*x + 5
>>> x = np.linspace(-2, 2)
>>> y = f(x)
>>> init_xa, init_xb = 0, 1
>>> xa, xb, xc, fa, fb, fc, funcalls = bracket(f, xa=init_xa, xb=init_xb)
>>> plt.axvline(x=init_xa, color="k", linestyle="--")
>>> plt.axvline(x=init_xb, color="k", linestyle="--")
>>> plt.plot(x, y, "-k")
>>> plt.plot(xa, fa, "bx")
>>> plt.plot(xb, fb, "rx")
>>> plt.plot(xc, fc, "bx")
>>> plt.show()
schrodinger.application.jaguar.qrnn.temp_opt.fmin_powell(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None)

Minimize a function using modified Powell’s method.

This method only uses function values, not derivatives.

func : callable f(x,*args)

Objective function to be minimized.

x0 : ndarray

Initial guess.

args : tuple, optional

Extra arguments passed to func.

xtol : float, optional

Line-search error tolerance.

ftol : float, optional

Relative error in func(xopt) acceptable for convergence.

maxiter : int, optional

Maximum number of iterations to perform.

maxfun : int, optional

Maximum number of function evaluations to make.

full_output : bool, optional

If True, fopt, xi, direc, iter, funcalls, and warnflag are returned.

disp : bool, optional

If True, print convergence messages.

retall : bool, optional

If True, return a list of the solution at each iteration.

callback : callable, optional

An optional user-supplied function, called after each iteration. Called as callback(xk), where xk is the current parameter vector.

direc : ndarray, optional

Initial fitting step and parameter order set as an (N, N) array, where N is the number of fitting parameters in x0. Defaults to step size 1.0 fitting all parameters simultaneously (np.eye((N, N))). To prevent initial consideration of values in a step or to change initial step size, set to 0 or desired step size in the Jth position in the Mth block, where J is the position in x0 and M is the desired evaluation step, with steps being evaluated in index order. Step size and ordering will change freely as minimization proceeds.

xopt : ndarray

Parameter which minimizes func.

fopt : number

Value of function at minimum: fopt = func(xopt).

direc : ndarray

Current direction set.

iter : int

Number of iterations.

funcalls : int

Number of function calls made.

warnflag : int
Integer warning flag:

1 : Maximum number of function evaluations. 2 : Maximum number of iterations. 3 : NaN result encountered. 4 : The result is out of the provided bounds.

allvecs : list

List of solutions at each iteration.

minimize : Interface to unconstrained minimization algorithms for multivariate functions. See the ‘Powell’ method in particular.

Uses a modification of Powell’s method to find the minimum of a function of N variables. Powell’s method is a conjugate direction method.

The algorithm has two loops. The outer loop merely iterates over the inner loop. The inner loop minimizes over each current direction in the direction set. At the end of the inner loop, if certain conditions are met, the direction that gave the largest decrease is dropped and replaced with the difference between the current estimated x and the estimated x from the beginning of the inner-loop.

The technical conditions for replacing the direction of greatest increase amount to checking that

  1. No further gain can be made along the direction of greatest increase from that iteration.

  2. The direction of greatest increase accounted for a sufficiently large fraction of the decrease in the function value from that iteration of the inner loop.

Powell M.J.D. (1964) An efficient method for finding the minimum of a function of several variables without calculating derivatives, Computer Journal, 7 (2):155-162.

Press W., Teukolsky S.A., Vetterling W.T., and Flannery B.P.: Numerical Recipes (any edition), Cambridge University Press

>>> def f(x):
...     return x**2
>>> from scipy import optimize
>>> minimum = optimize.fmin_powell(f, -1)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 2
         Function evaluations: 16
>>> minimum
array(0.0)
schrodinger.application.jaguar.qrnn.temp_opt.brute(func, ranges, args=(), Ns=20, full_output=0, finish=<function fmin>, disp=False, workers=1)

Minimize a function over a given range by brute force.

Uses the “brute force” method, i.e., computes the function’s value at each point of a multidimensional grid of points, to find the global minimum of the function.

The function is evaluated everywhere in the range with the datatype of the first call to the function, as enforced by the vectorize NumPy function. The value and type of the function evaluation returned when full_output=True are affected in addition by the finish argument (see Notes).

The brute force approach is inefficient because the number of grid points increases exponentially - the number of grid points to evaluate is Ns ** len(x). Consequently, even with coarse grid spacing, even moderately sized problems can take a long time to run, and/or run into memory limitations.

func : callable

The objective function to be minimized. Must be in the form f(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function.

ranges : tuple

Each component of the ranges tuple must be either a “slice object” or a range tuple of the form (low, high). The program uses these to create the grid of points on which the objective function will be computed. See Note 2 for more detail.

args : tuple, optional

Any additional fixed parameters needed to completely specify the function.

Ns : int, optional

Number of grid points along the axes, if not otherwise specified. See Note 2.

full_output : bool, optional

If True, return the evaluation grid and the objective function’s values on it.

finish : callable, optional

An optimization function that is called with the result of brute force minimization as initial guess. finish should take func and the initial guess as positional arguments, and take args as keyword arguments. It may additionally take full_output and/or disp as keyword arguments. Use None if no “polishing” function is to be used. See Notes for more details.

disp : bool, optional

Set to True to print convergence messages from the finish callable.

workers : int or map-like callable, optional

If workers is an int the grid is subdivided into workers sections and evaluated in parallel (uses multiprocessing.Pool). Supply -1 to use all cores available to the Process. Alternatively supply a map-like callable, such as multiprocessing.Pool.map for evaluating the grid in parallel. This evaluation is carried out as workers(func, iterable). Requires that func be pickleable.

New in version 1.3.0.

x0 : ndarray

A 1-D array containing the coordinates of a point at which the objective function had its minimum value. (See Note 1 for which point is returned.)

fval : float

Function value at the point x0. (Returned when full_output is True.)

grid : tuple

Representation of the evaluation grid. It has the same length as x0. (Returned when full_output is True.)

Jout : ndarray

Function values at each point of the evaluation grid, i.e., Jout = func(*grid). (Returned when full_output is True.)

basinhopping, differential_evolution

Note 1: The program finds the gridpoint at which the lowest value of the objective function occurs. If finish is None, that is the point returned. When the global minimum occurs within (or not very far outside) the grid’s boundaries, and the grid is fine enough, that point will be in the neighborhood of the global minimum.

However, users often employ some other optimization program to “polish” the gridpoint values, i.e., to seek a more precise (local) minimum near brute's best gridpoint. The brute function’s finish option provides a convenient way to do that. Any polishing program used must take brute's output as its initial guess as a positional argument, and take brute's input values for args as keyword arguments, otherwise an error will be raised. It may additionally take full_output and/or disp as keyword arguments.

brute assumes that the finish function returns either an OptimizeResult object or a tuple in the form: (xmin, Jmin, ... , statuscode), where xmin is the minimizing value of the argument, Jmin is the minimum value of the objective function, “…” may be some other returned values (which are not used by brute), and statuscode is the status code of the finish program.

Note that when finish is not None, the values returned are those of the finish program, not the gridpoint ones. Consequently, while brute confines its search to the input grid points, the finish program’s results usually will not coincide with any gridpoint, and may fall outside the grid’s boundary. Thus, if a minimum only needs to be found over the provided grid points, make sure to pass in finish=None.

Note 2: The grid of points is a numpy.mgrid object. For brute the ranges and Ns inputs have the following effect. Each component of the ranges tuple can be either a slice object or a two-tuple giving a range of values, such as (0, 5). If the component is a slice object, brute uses it directly. If the component is a two-tuple range, brute internally converts it to a slice object that interpolates Ns points from its low-value to its high-value, inclusive.

We illustrate the use of brute to seek the global minimum of a function of two variables that is given as the sum of a positive-definite quadratic and two deep “Gaussian-shaped” craters. Specifically, define the objective function f as the sum of three other functions, f = f1 + f2 + f3. We suppose each of these has a signature (z, *params), where z = (x, y), and params and the functions are as defined below.

>>> import numpy as np
>>> params = (2, 3, 7, 8, 9, 10, 44, -1, 2, 26, 1, -2, 0.5)
>>> def f1(z, *params):
...     x, y = z
...     a, b, c, d, e, f, g, h, i, j, k, l, scale = params
...     return (a * x**2 + b * x * y + c * y**2 + d*x + e*y + f)
>>> def f2(z, *params):
...     x, y = z
...     a, b, c, d, e, f, g, h, i, j, k, l, scale = params
...     return (-g*np.exp(-((x-h)**2 + (y-i)**2) / scale))
>>> def f3(z, *params):
...     x, y = z
...     a, b, c, d, e, f, g, h, i, j, k, l, scale = params
...     return (-j*np.exp(-((x-k)**2 + (y-l)**2) / scale))
>>> def f(z, *params):
...     return f1(z, *params) + f2(z, *params) + f3(z, *params)

Thus, the objective function may have local minima near the minimum of each of the three functions of which it is composed. To use fmin to polish its gridpoint result, we may then continue as follows:

>>> rranges = (slice(-4, 4, 0.25), slice(-4, 4, 0.25))
>>> from scipy import optimize
>>> resbrute = optimize.brute(f, rranges, args=params, full_output=True,
...                           finish=optimize.fmin)
>>> resbrute[0]  # global minimum
array([-1.05665192,  1.80834843])
>>> resbrute[1]  # function value at global minimum
-3.4085818767

Note that if finish had been set to None, we would have gotten the gridpoint [-1.0 1.75] where the rounded function value is -2.892.

schrodinger.application.jaguar.qrnn.temp_opt.show_options(solver=None, method=None, disp=True)

Show documentation for additional options of optimization solvers.

These are method-specific options that can be supplied through the options dict.

solver : str

Type of optimization solver. One of ‘minimize’, ‘minimize_scalar’, ‘root’, ‘root_scalar’, ‘linprog’, or ‘quadratic_assignment’.

method : str, optional

If not given, shows all methods of the specified solver. Otherwise, show only the options for the specified method. Valid values correspond to the method names of the respective solver (e.g., ‘BFGS’ for ‘minimize’).

disp : bool, optional

Whether to print the result rather than returning it.

text

Either None (for disp=True) or the text string (disp=False)

The solver-specific methods are:

scipy.optimize.minimize

  • Nelder-Mead

  • Powell

  • CG

  • BFGS

  • Newton-CG

  • L-BFGS-B

  • TNC

  • COBYLA

  • SLSQP

  • dogleg

  • trust-ncg

scipy.optimize.root

  • hybr

  • lm

  • broyden1

  • broyden2

  • anderson

  • linearmixing

  • diagbroyden

  • excitingmixing

  • krylov

  • df-sane

scipy.optimize.minimize_scalar

  • brent

  • golden

  • bounded

scipy.optimize.root_scalar

  • bisect

  • brentq

  • brenth

  • ridder

  • toms748

  • newton

  • secant

  • halley

scipy.optimize.linprog

  • simplex

  • interior-point

  • revised simplex

  • highs

  • highs-ds

  • highs-ipm

scipy.optimize.quadratic_assignment

  • faq

  • 2opt

We can print the documentation of a solver to stdout:

>>> from scipy.optimize import show_options
>>> show_options(solver="minimize")
...

Specifying a method is possible:

>>> show_options(solver="minimize", method="Nelder-Mead")
...

We can also get the documentation as a string:

>>> show_options(solver="minimize", method="Nelder-Mead", disp=False)
Minimization of scalar function of one or more variables using the ...