Python modules

optim

PyOptim Python numerical optimization toolbox

optim.fmin_prox(f, df, g, prox_g, x0, lambd=1.0, backtrack=True, nbitermax=1000, stopvarx=1e-09, stopvarj=1e-09, t0=10.0, verbose=False, m_back=1, sigma=1e-09, eta=2, nbitermax_back=100, bbrule=True, log=False, **kwargs)[source]

Minimize the sum of a smooth and a nonsmooth function using proximal splitting

The algorithm is the classical Forward-Backward Splitting [1] with the Barzilai-Borwein (BB) rule for step size estimation [2].

Solve the optimization problem:

\[\min_x \quad f(x)+\lambda g(x)\]

where:

  • f is differentiable (df) and has a Lipschitz gradient
  • g is non-differentiable but has a proximal operator (prox_g)

prox_g is a function providing the solution to the problem

\[\min_x \quad \frac{1}{2}\|x_0-x\|^2+\lambda g(x)\]

Several proximal operators are available in optim.prox.
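
For intuition, the proximal operator of the l1 norm g(x) = \|x\|_1 has the closed-form soft-thresholding solution. A minimal sketch (the operators shipped in optim.prox may use a different name or signature):

import numpy as np

def prox_l1(x0, lambd):
    # Soft thresholding: solves min_x 0.5*||x0 - x||^2 + lambd*||x||_1
    return np.sign(x0) * np.maximum(np.abs(x0) - lambd, 0.0)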

Parameters:
  • f (function) – Smooth function f: R^d -> R
  • df (function) – Gradient of f, df:R^d -> R^d
  • g (function) – Nonsmooth function g: R^d -> R
  • prox_g (function) – Proximal operator of g, prox_g: R^d -> R^d
  • x0 ((d,) numpy.array) – Initial point
  • lambd (float) – Regularization parameter (>0)
  • backtrack (boolean, optional) – Perform backtracking if true (default: True).
  • bbrule (boolean, optional) – Update the step size with the BB rule (default: True).
  • nbitermax (int, optional) – Max number of iterations in algorithm
  • stopvarx (float, optional) – Stopping criterion for relative variation of the norm of x
  • stopvarj (float, optional) – Stopping criterion for relative variation of the cost
  • t0 (float, optional) – initial descent step
  • verbose (boolean, optional) – Print optimization information
  • m_back (int, optional) – Window size for the backtracking line search (if <1, a nondecreasing cost is allowed)
  • sigma (float, optional) – Descent parameter for backtracking
  • eta (float, optional) – Value multiplying t during backtracking
  • nbitermax_back (int, optional) – Max number of backtracking iterations
Returns:

  • x ((d,) ndarray) – Optimal solution x
  • val (float) – Optimal value of the objective (None if the optimization failed)
  • log (dict) – Optional log output

References

[1]Combettes, P. L., & Wajs, V. R. (2005). Signal recovery by proximal forward-backward splitting. Multiscale Modeling & Simulation, 4(4), 1168-1200.
[2]Barzilai, J., & Borwein, J. M. (1988). Two-point step size gradient methods. IMA journal of numerical analysis, 8(1), 141-148.

See also

optim.prox()
Module containing proximal operators
optim.fmin_proj()
Projected gradient (special case of proximal gradient)
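
Example

A minimal usage sketch solving a small lasso problem \(\min_x \frac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1\). The prox_g call convention (here prox_g(x, threshold)) and the unpacking of the returned values (x, val) with the default log=False are assumptions, not guarantees from this documentation:

import numpy as np
import optim

rng = np.random.RandomState(0)
A = rng.randn(50, 20)
b = rng.randn(50)

f = lambda x: 0.5 * np.sum((A.dot(x) - b) ** 2)   # smooth data-fit term
df = lambda x: A.T.dot(A.dot(x) - b)              # gradient of f
g = lambda x: np.sum(np.abs(x))                   # l1 penalty
# Soft thresholding, the proximal operator of the l1 norm
prox_g = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x, val = optim.fmin_prox(f, df, g, prox_g, x0=np.zeros(20), lambd=0.1)
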
optim.fmin_proj(f, df, proj, x0, nbitermax=1000, stopvarx=1e-09, stopvarj=1e-09, t0=1.0, verbose=False, bbrule=True, log=False, **kwargs)[source]

Solve a constrained optimization problem with projected gradient

The algorithm is the classical Spectral Projected Gradient [3] with the Barzilai-Borwein (BB) rule for step size estimation [4].

Solve the optimization problem:

\[ \begin{align}\begin{aligned}\min_x \quad f(x)\\\text{s.t.} \quad x\in P\end{aligned}\end{align} \]

where:

  • f is differentiable (df) and has a Lipschitz gradient
  • proj is a projection onto P

proj is a projection function providing the solution to the problem

\[ \begin{align}\begin{aligned}\min_x \quad \frac{1}{2}\|x_0-x\|^2\\\text{s.t.} \quad x\in P\end{aligned}\end{align} \]

Several projection functions are available in optim.proj.
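
For instance, the projection onto the box P = [0, 1]^d is a simple clipping. A minimal sketch (the projections shipped in optim.proj may use a different name or signature):

import numpy as np

def proj_box(x0):
    # Solves min_x 0.5*||x0 - x||^2  s.t.  0 <= x <= 1
    return np.clip(x0, 0.0, 1.0)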

Parameters:
  • f (function) – Smooth function f: R^d -> R
  • df (function) – Gradient of f, df:R^d -> R^d
  • proj (function) – Projection onto P, proj: R^d -> R^d
  • x0 ((d,) numpy.array) – Initial point
  • bbrule (boolean, optional) – Update the step size with the BB rule (default: True).
  • nbitermax (int, optional) – Max number of iterations in algorithm
  • stopvarx (float, optional) – Stopping criterion for relative variation of the norm of x
  • stopvarj (float, optional) – Stopping criterion for relative variation of the cost
  • t0 (float, optional) – initial descent step
  • verbose (boolean, optional) – Print optimization information
Returns:

  • x ((d,) ndarray) – Optimal solution x
  • val (float) – Optimal value of the objective (None if the optimization failed)
  • log (dict) – Optional log output

References

[3]Birgin, E. G., Martinez, J. M., & Raydan, M. (2000). Nonmonotone spectral projected gradient methods on convex sets. SIAM Journal on Optimization, 10(4), 1196-1211.
[4]Barzilai, J., & Borwein, J. M. (1988). Two-point step size gradient methods. IMA journal of numerical analysis, 8(1), 141-148.

See also

optim.proj()
Module containing projection operators
optim.fmin_prox()
Proximal splitting (generalization of projected gradient)
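
Example

A minimal usage sketch minimizing a positive definite quadratic over the box [0, 1]^d. The unpacking of the returned values (x, val) with the default log=False is an assumption:

import numpy as np
import optim

rng = np.random.RandomState(0)
Q = rng.randn(10, 10)
Q = Q.T.dot(Q) + np.eye(10)             # positive definite cost matrix
q = rng.randn(10)

f = lambda x: 0.5 * x.dot(Q.dot(x)) + q.dot(x)
df = lambda x: Q.dot(x) + q
proj = lambda x: np.clip(x, 0.0, 1.0)   # projection onto the box [0, 1]^d

x, val = optim.fmin_proj(f, df, proj, x0=np.zeros(10))
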
optim.fmin_cond(f, df, solve_c, x0, nbitermax=200, stopvarj=1e-09, verbose=False, log=False)[source]

Solve a constrained optimization problem with the conditional gradient algorithm

The function solves the following optimization problem:
\[ \begin{align}\begin{aligned}\min_x \quad f(x)\\\text{s.t.} \quad x\in P\end{aligned}\end{align} \]

where:

  • f is differentiable (df) and has a Lipschitz gradient

  • solve_c is a solver for the linearized problem of the form below (a minimal sketch of such a solver follows this list)
    \[ \begin{align}\begin{aligned}\min_x \quad x^T v\\\text{s.t.} \quad x\in P\end{aligned}\end{align} \]
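
For instance, when P is the probability simplex the linearized problem is minimized at a vertex, so solve_c reduces to a one-hot selection. A minimal sketch (the exact call signature expected by fmin_cond is an assumption):

import numpy as np

def solve_c(v):
    # Solves min_x x^T v  s.t.  x >= 0, sum(x) = 1: the minimum is
    # attained at the vertex (one-hot vector) with the smallest cost
    x = np.zeros_like(v)
    x[np.argmin(v)] = 1.0
    return x
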
Parameters:
  • f (function) – Smooth function f: R^d -> R
  • df (function) – Gradient of f, df:R^d -> R^d
  • solve_c (function) – Solver for linearized problem, solve_c:R^d -> R^d
  • x0 ((d,) numpy.array) – Initial point
  • nbitermax (int, optional) – Max number of iterations
  • stopvarj (float, optional) – Stopping criterion for relative variation of the cost (>0)
  • verbose (bool, optional) – Print information along iterations
  • log (bool, optional) – record log if True
Returns:

  • x (ndarray) – solution
  • val (float) – Optimal value at solution
  • log (dict) – Log dictionary, returned only if log is True in the parameters
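
Example

A minimal usage sketch minimizing a positive definite quadratic over the probability simplex, using a vertex oracle as sketched above. The unpacking of the returned values (x, val) with the default log=False is an assumption:

import numpy as np
import optim

rng = np.random.RandomState(0)
Q = rng.randn(5, 5)
Q = Q.T.dot(Q) + np.eye(5)                    # positive definite cost

f = lambda x: 0.5 * x.dot(Q.dot(x))
df = lambda x: Q.dot(x)
solve_c = lambda v: np.eye(5)[np.argmin(v)]   # vertex oracle on the simplex

x, val = optim.fmin_cond(f, df, solve_c, x0=np.ones(5) / 5)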

optim.lp_solve(c, A=None, b=None, Aeq=None, beq=None, lb=None, ub=None, solver='scipy', verbose=False, log=False, **kwargs)[source]

Solve a standard linear program with linear constraints

Solve the following optimization problem:

\[ \begin{align}\begin{aligned}\min_x \quad x^Tc\\\text{s.t.} \quad lb \leq x \leq ub\\Ax \leq b\\A_{eq} x = b_{eq}\end{aligned}\end{align} \]

Returns val as None if the optimization failed.

All constraint parameters are optional; they will be ignored if left at their default value None.

Use the solver selected from (default ‘scipy’):
  • ‘scipy’
    scipy.optimize linear programming solver, with available methods ‘interior-point’ or ‘simplex’
  • ‘cvxopt’
    cvxopt interior point solver (‘default’), with other available methods ‘mosek’ or ‘glpk’
  • ‘gurobipy’
    Gurobi solver with its official Python interface
  • ‘stdgurobi’
    Gurobi solver with a C interface, more efficient for dense problems
Parameters:
  • c ((d,) ndarray, float64) – Linear cost vector.
  • A ((n,d) ndarray, float64, optional) – Linear inequality constraint matrix.
  • b ((n,) ndarray, float64, optional) – Linear inequality constraint vector.
  • Aeq ((n,d) ndarray, float64, optional) – Linear equality constraint matrix.
  • beq ((n,) ndarray, float64, optional) – Linear equality constraint vector.
  • lb – Lower bound constraint, -np.inf if not provided.
  • ub – Upper bound constraint, np.inf if not provided.
  • solver (string, optional) – Select the solver used to solve the linear program from ‘scipy’, ‘cvxopt’, ‘gurobipy’, ‘stdgrb’.
  • verbose (boolean, optional) – Print optimization information.
  • log (boolean, optional) – Return a dictionary with optimization information in addition to x and val
Returns:

  • x ((d,) ndarray) – Optimal solution x
  • val (float) – Optimal value of the objective (None if the optimization failed)
  • log (dict) – Optional log output
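
Example

A minimal usage sketch of a small LP with one inequality constraint and box bounds. The unpacking of the returned values (x, val) with the default log=False is an assumption:

import numpy as np
import optim

c = np.array([1.0, 2.0])      # cost vector
A = np.array([[-1.0, -1.0]])  # -x1 - x2 <= -1, i.e. x1 + x2 >= 1
b = np.array([-1.0])

x, val = optim.lp_solve(c, A=A, b=b, lb=np.zeros(2), ub=np.ones(2),
                        solver='scipy')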

optim.qp_solve(Q, c=None, A=None, b=None, Aeq=None, beq=None, lb=None, ub=None, solver='cvxopt', verbose=False, log=False, **kwargs)[source]

Solve a standard quadratic program with linear constraints

Solve the following optimization problem:

\[ \begin{align}\begin{aligned}\min_x \quad \frac{1}{2}x^TQx + x^Tc\\\text{s.t.} \quad lb \leq x \leq ub\\Ax \leq b\\A_{eq} x = b_{eq}\end{aligned}\end{align} \]

Returns val as None if the optimization failed.

All constraint parameters are optional; they will be ignored if left at their default value None.

Use the solver selected from (default ‘cvxopt’):
  • ‘cvxopt’
    cvxopt interior point solver (‘default’)
  • ‘quadprog’
    quadprog solver (needs to be installed)
Parameters:
  • Q ((d,d) ndarray, float64) – Quadratic cost matrix
  • c ((d,) ndarray, float64, optional) – Linear cost vector
  • A ((n,d) ndarray, float64, optional) – Linear inequality constraint matrix.
  • b ((n,) ndarray, float64, optional) – Linear inequality constraint vector.
  • Aeq ((n,d) ndarray, float64, optional) – Linear equality constraint matrix.
  • beq ((n,) ndarray, float64, optional) – Linear equality constraint vector.
  • lb – Lower bound constraint, -np.inf if not provided.
  • ub – Upper bound constraint, np.inf if not provided.
  • solver (string, optional) – Select the solver used to solve the quadratic program from ‘cvxopt’, ‘quadprog’, ‘stdgrb’.
  • verbose (boolean, optional) – Print optimization information.
  • log (boolean, optional) – Return a dictionary with optimization information in addition to x and val
Returns:

  • x ((d,) ndarray) – Optimal solution x
  • val (float) – Optimal value of the objective (None if the optimization failed)
  • log (dict) – Optional log output
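
Example

A minimal usage sketch of a small QP with an equality constraint and a lower bound. The unpacking of the returned values (x, val) with the default log=False is an assumption:

import numpy as np
import optim

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])    # positive definite cost matrix
c = np.array([-1.0, -1.0])
Aeq = np.ones((1, 2))         # x1 + x2 = 1
beq = np.array([1.0])

x, val = optim.qp_solve(Q, c, Aeq=Aeq, beq=beq, lb=np.zeros(2),
                        solver='cvxopt')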