Nonlinear programming

Types

MathOptInterface.NLPBoundsPair (Type)
NLPBoundsPair(lower::Float64, upper::Float64)

A struct holding a pair of lower and upper bounds.

-Inf and Inf can be used to indicate no lower or upper bound, respectively.

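For example, a minimal sketch of constructing bounds pairs (the numeric values are arbitrary):

import MathOptInterface as MOI

bounds = [
    MOI.NLPBoundsPair(1.0, 2.0),   # 1 <= g_1(x) <= 2
    MOI.NLPBoundsPair(0.0, 0.0),   # g_2(x) == 0
    MOI.NLPBoundsPair(-Inf, 5.0),  # g_3(x) <= 5
]
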
MathOptInterface.NLPBlockData (Type)
struct NLPBlockData
    constraint_bounds::Vector{NLPBoundsPair}
    evaluator::AbstractNLPEvaluator
    has_objective::Bool
end

A struct encoding a set of nonlinear constraints of the form $lb \le g(x) \le ub$ and, if has_objective == true, a nonlinear objective function $f(x)$.

Nonlinear objectives override any objective set by using the ObjectiveFunction attribute.

The evaluator is a callback object that is used to query function values, derivatives, and expression graphs. If has_objective == false, then it is an error to query properties of the objective function, and in Hessian-of-the-Lagrangian queries, σ must be set to zero.

Note

Throughout the evaluator, all variables are ordered according to ListOfVariableIndices. Hence, MOI copies of nonlinear problems must not re-order variables.

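For example, a minimal sketch of building and setting a block, where evaluator is a placeholder for some AbstractNLPEvaluator with two constraints, and model is a placeholder for an optimizer that supports the NLPBlock attribute:

import MathOptInterface as MOI

block = MOI.NLPBlockData(
    [MOI.NLPBoundsPair(1.0, 2.0), MOI.NLPBoundsPair(0.0, 0.0)],
    evaluator,  # placeholder: any AbstractNLPEvaluator
    true,       # the evaluator also defines a nonlinear objective
)
MOI.set(model, MOI.NLPBlock(), block)
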

Attributes

The nonlinear model attributes are NLPBlock, NLPBlockDual, and NLPBlockDualStart.

Functions

MathOptInterface.initialize (Function)
initialize(
    d::AbstractNLPEvaluator,
    requested_features::Vector{Symbol},
)::Nothing

Initialize d with the set of features in requested_features. Check features_available before calling initialize to see what features are supported by d.

Warning

This method must be called before any other methods.

Features

The following features are defined:

  • :Grad: enables eval_objective_gradient
  • :Jac: enables eval_constraint_jacobian and jacobian_structure
  • :JacVec: enables eval_constraint_jacobian_product and eval_constraint_jacobian_transpose_product
  • :ExprGraph: enables objective_expr and constraint_expr
  • :Hess: enables eval_hessian_lagrangian and hessian_lagrangian_structure
  • :HessVec: enables eval_hessian_lagrangian_product

In all cases, including when requested_features is empty, eval_objective and eval_constraint are supported.

Examples

MOI.initialize(d, Symbol[])
MOI.initialize(d, [:ExprGraph])
MOI.initialize(d, MOI.features_available(d))

MathOptInterface.eval_constraint (Function)
eval_constraint(
    d::AbstractNLPEvaluator,
    g::AbstractVector{T},
    x::AbstractVector{T},
)::Nothing where {T}

Given a set of vector-valued constraints $l \le g(x) \le u$, evaluate the constraint function $g(x)$, storing the result in the vector g.

Implementation notes

When implementing this method, you must not assume that g is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

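As an illustration, here is a minimal sketch of an evaluator for the hypothetical constraints $g_1(x) = x_1 x_2$ and $g_2(x) = x_1^2 + x_2$. The ToyEvaluator type is invented for these examples and is reused in the sketches that follow:

import MathOptInterface as MOI

struct ToyEvaluator <: MOI.AbstractNLPEvaluator end

function MOI.eval_constraint(::ToyEvaluator, g::AbstractVector, x::AbstractVector)
    g[1] = x[1] * x[2]    # g_1(x) = x_1 * x_2
    g[2] = x[1]^2 + x[2]  # g_2(x) = x_1^2 + x_2
    return
end
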
MathOptInterface.eval_objective_gradient (Function)
eval_objective_gradient(
    d::AbstractNLPEvaluator,
    grad::AbstractVector{T},
    x::AbstractVector{T},
)::Nothing where {T}

Evaluate the gradient of the objective function $grad = \nabla f(x)$ as a dense vector, storing the result in the vector grad.

Implementation notes

When implementing this method, you must not assume that grad is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

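Continuing the hypothetical ToyEvaluator sketch above, with the invented objective $f(x) = x_1^2 + x_2^2$:

function MOI.eval_objective_gradient(::ToyEvaluator, grad::AbstractVector, x::AbstractVector)
    grad[1] = 2.0 * x[1]  # ∂f/∂x_1
    grad[2] = 2.0 * x[2]  # ∂f/∂x_2
    return
end
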
MathOptInterface.jacobian_structure (Function)
jacobian_structure(d::AbstractNLPEvaluator)::Vector{Tuple{Int64,Int64}}

Returns a vector of tuples, (row, column), where each indicates the position of a structurally nonzero element in the Jacobian matrix: $J_g(x) = \left[ \begin{array}{c} \nabla g_1(x) \\ \nabla g_2(x) \\ \vdots \\ \nabla g_m(x) \end{array}\right],$ where $g_i$ is the $i\text{th}$ component of the nonlinear constraints $g(x)$.

The indices are not required to be sorted and can contain duplicates, in which case the solver should combine the corresponding elements by adding them together.

The sparsity structure is assumed to be independent of the point $x$.

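For the hypothetical ToyEvaluator sketched above, every entry of the 2×2 Jacobian is structurally nonzero:

function MOI.jacobian_structure(::ToyEvaluator)
    # (row, column) = (constraint index, variable index)
    return [(1, 1), (1, 2), (2, 1), (2, 2)]
end
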
MathOptInterface.eval_constraint_gradient (Function)
eval_constraint_gradient(
    d::AbstractNLPEvaluator,
    ∇g::AbstractVector{T},
    x::AbstractVector{T},
    i::Int,
)::Nothing where {T}

Evaluate the gradient of constraint i, $\nabla g_i(x)$, and store the non-zero values in ∇g, corresponding to the structure returned by constraint_gradient_structure.

Implementation notes

When implementing this method, you must not assume that ∇g is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

MathOptInterface.constraint_gradient_structure (Function)
constraint_gradient_structure(d::AbstractNLPEvaluator, i::Int)::Vector{Int64}

Returns a vector of indices, where each element indicates the position of a structurally nonzero element in $\nabla g_i(x)$, the gradient of constraint $i$.

The indices are not required to be sorted and can contain duplicates, in which case the solver should combine the corresponding elements by adding them together.

The sparsity structure is assumed to be independent of the point $x$.

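Continuing the hypothetical ToyEvaluator, a sketch of this method together with the matching eval_constraint_gradient:

function MOI.constraint_gradient_structure(::ToyEvaluator, ::Int)
    return [1, 2]  # both constraint gradients are structurally dense
end

function MOI.eval_constraint_gradient(::ToyEvaluator, ∇g::AbstractVector, x::AbstractVector, i::Int)
    if i == 1
        ∇g[1], ∇g[2] = x[2], x[1]       # ∇g_1 = (x_2, x_1)
    else
        ∇g[1], ∇g[2] = 2.0 * x[1], 1.0  # ∇g_2 = (2x_1, 1)
    end
    return
end
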
MathOptInterface.eval_constraint_jacobian (Function)
eval_constraint_jacobian(
    d::AbstractNLPEvaluator,
    J::AbstractVector{T},
    x::AbstractVector{T},
)::Nothing where {T}

Evaluates the sparse Jacobian matrix $J_g(x) = \left[ \begin{array}{c} \nabla g_1(x) \\ \nabla g_2(x) \\ \vdots \\ \nabla g_m(x) \end{array}\right]$.

The result is stored in the vector J in the same order as the indices returned by jacobian_structure.

Implementation notes

When implementing this method, you must not assume that J is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

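For the hypothetical ToyEvaluator, filling J in the order returned by the jacobian_structure sketch above:

function MOI.eval_constraint_jacobian(::ToyEvaluator, J::AbstractVector, x::AbstractVector)
    J[1] = x[2]        # ∂g_1/∂x_1 at (1, 1)
    J[2] = x[1]        # ∂g_1/∂x_2 at (1, 2)
    J[3] = 2.0 * x[1]  # ∂g_2/∂x_1 at (2, 1)
    J[4] = 1.0         # ∂g_2/∂x_2 at (2, 2)
    return
end
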
MathOptInterface.eval_constraint_jacobian_product (Function)
eval_constraint_jacobian_product(
    d::AbstractNLPEvaluator,
    y::AbstractVector{T},
    x::AbstractVector{T},
    w::AbstractVector{T},
)::Nothing where {T}

Computes the Jacobian-vector product $y = J_g(x)w$, storing the result in the vector y.

The vectors have dimensions such that length(w) == length(x), and length(y) is the number of nonlinear constraints.

Implementation notes

When implementing this method, you must not assume that y is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

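For the hypothetical ToyEvaluator, each entry of y is one row of the Jacobian dotted with w:

function MOI.eval_constraint_jacobian_product(::ToyEvaluator, y::AbstractVector, x::AbstractVector, w::AbstractVector)
    y[1] = x[2] * w[1] + x[1] * w[2]  # ∇g_1(x) ⋅ w
    y[2] = 2.0 * x[1] * w[1] + w[2]   # ∇g_2(x) ⋅ w
    return
end
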
MathOptInterface.eval_constraint_jacobian_transpose_product (Function)
eval_constraint_jacobian_transpose_product(
    d::AbstractNLPEvaluator,
    y::AbstractVector{T},
    x::AbstractVector{T},
    w::AbstractVector{T},
)::Nothing where {T}

Computes the Jacobian-transpose-vector product $y = J_g(x)^Tw$, storing the result in the vector y.

The vectors have dimensions such that length(y) == length(x), and length(w) is the number of nonlinear constraints.

Implementation notes

When implementing this method, you must not assume that y is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

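For the hypothetical ToyEvaluator, each entry of y is one column of the Jacobian dotted with w:

function MOI.eval_constraint_jacobian_transpose_product(::ToyEvaluator, y::AbstractVector, x::AbstractVector, w::AbstractVector)
    y[1] = x[2] * w[1] + 2.0 * x[1] * w[2]  # column for x_1
    y[2] = x[1] * w[1] + w[2]               # column for x_2
    return
end
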
MathOptInterface.hessian_lagrangian_structure (Function)
hessian_lagrangian_structure(
    d::AbstractNLPEvaluator,
)::Vector{Tuple{Int64,Int64}}

Returns a vector of tuples, (row, column), where each indicates the position of a structurally nonzero element in the Hessian-of-the-Lagrangian matrix: $\nabla^2 f(x) + \sum_{i=1}^m \nabla^2 g_i(x)$.

The indices are not required to be sorted and can contain duplicates, in which case the solver should combine the corresponding elements by adding them together.

Any mix of lower and upper-triangular indices is valid. Elements (i,j) and (j,i), if both present, should be treated as duplicates.

The sparsity structure is assumed to be independent of the point $x$.

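For the hypothetical ToyEvaluator, using the lower triangle: $\nabla^2 f$ contributes (1, 1) and (2, 2), $\nabla^2 g_1$ contributes (2, 1), and the (1, 1) entry of $\nabla^2 g_2$ shares an index with the objective:

function MOI.hessian_lagrangian_structure(::ToyEvaluator)
    return [(1, 1), (2, 2), (2, 1)]
end
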
MathOptInterface.hessian_objective_structure (Function)
hessian_objective_structure(
    d::AbstractNLPEvaluator,
)::Vector{Tuple{Int64,Int64}}

Returns a vector of tuples, (row, column), where each indicates the position of a structurally nonzero element in the Hessian matrix: $\nabla^2 f(x)$.

The indices are not required to be sorted and can contain duplicates, in which case the solver should combine the corresponding elements by adding them together.

Any mix of lower and upper-triangular indices is valid. Elements (i,j) and (j,i), if both present, should be treated as duplicates.

The sparsity structure is assumed to be independent of the point $x$.

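For the hypothetical ToyEvaluator, where $\nabla^2 f(x) = 2I$:

function MOI.hessian_objective_structure(::ToyEvaluator)
    return [(1, 1), (2, 2)]
end
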
MathOptInterface.hessian_constraint_structure (Function)
hessian_constraint_structure(
    d::AbstractNLPEvaluator,
    i::Int64,
)::Vector{Tuple{Int64,Int64}}

Returns a vector of tuples, (row, column), where each indicates the position of a structurally nonzero element in the Hessian matrix: $\nabla^2 g_i(x)$.

The indices are not required to be sorted and can contain duplicates, in which case the solver should combine the corresponding elements by adding them together.

Any mix of lower and upper-triangular indices is valid. Elements (i,j) and (j,i), if both present, should be treated as duplicates.

The sparsity structure is assumed to be independent of the point $x$.

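For the hypothetical ToyEvaluator, $\nabla^2 g_1$ has a single off-diagonal nonzero and $\nabla^2 g_2$ a single diagonal nonzero:

function MOI.hessian_constraint_structure(::ToyEvaluator, i::Int64)
    return i == 1 ? [(2, 1)] : [(1, 1)]
end
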
MathOptInterface.eval_hessian_objective (Function)
eval_hessian_objective(
    d::AbstractNLPEvaluator,
    H::AbstractVector{T},
    x::AbstractVector{T},
)::Nothing where {T}

This function computes the sparse Hessian matrix: $\nabla^2 f(x)$, storing the result in the vector H in the same order as the indices returned by hessian_objective_structure.

Implementation notes

When implementing this method, you must not assume that H is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

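For the hypothetical ToyEvaluator, in the order of the hessian_objective_structure sketch above:

function MOI.eval_hessian_objective(::ToyEvaluator, H::AbstractVector, x::AbstractVector)
    H[1] = 2.0  # entry (1, 1) of ∇²f
    H[2] = 2.0  # entry (2, 2) of ∇²f
    return
end
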
MathOptInterface.eval_hessian_constraint (Function)
eval_hessian_constraint(
    d::AbstractNLPEvaluator,
    H::AbstractVector{T},
    x::AbstractVector{T},
    i::Int64,
)::Nothing where {T}

This function computes the sparse Hessian matrix: $\nabla^2 g_i(x)$, storing the result in the vector H in the same order as the indices returned by hessian_constraint_structure.

Implementation notes

When implementing this method, you must not assume that H is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

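For the hypothetical ToyEvaluator, each constraint Hessian has a single nonzero, in the order of the hessian_constraint_structure sketch above:

function MOI.eval_hessian_constraint(::ToyEvaluator, H::AbstractVector, x::AbstractVector, i::Int64)
    H[1] = i == 1 ? 1.0 : 2.0  # ∇²g_1[2, 1] = 1; ∇²g_2[1, 1] = 2
    return
end
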
MathOptInterface.eval_hessian_lagrangian (Function)
eval_hessian_lagrangian(
    d::AbstractNLPEvaluator,
    H::AbstractVector{T},
    x::AbstractVector{T},
    σ::T,
    μ::AbstractVector{T},
)::Nothing where {T}

Given scalar weight σ and vector of constraint weights μ, this function computes the sparse Hessian-of-the-Lagrangian matrix: $\sigma\nabla^2 f(x) + \sum_{i=1}^m \mu_i \nabla^2 g_i(x)$, storing the result in the vector H in the same order as the indices returned by hessian_lagrangian_structure.

Implementation notes

When implementing this method, you must not assume that H is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

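For the hypothetical ToyEvaluator, filling H in the order of the hessian_lagrangian_structure sketch above:

function MOI.eval_hessian_lagrangian(::ToyEvaluator, H::AbstractVector, x::AbstractVector, σ::Real, μ::AbstractVector)
    H[1] = 2.0 * σ + 2.0 * μ[2]  # (1, 1): σ ∇²f + μ_2 ∇²g_2
    H[2] = 2.0 * σ               # (2, 2): σ ∇²f
    H[3] = μ[1]                  # (2, 1): μ_1 ∇²g_1
    return
end
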
MathOptInterface.eval_hessian_lagrangian_product (Function)
eval_hessian_lagrangian_product(
    d::AbstractNLPEvaluator,
    h::AbstractVector{T},
    x::AbstractVector{T},
    v::AbstractVector{T},
    σ::T,
    μ::AbstractVector{T},
)::Nothing where {T}

Given scalar weight σ and vector of constraint weights μ, computes the Hessian-of-the-Lagrangian-vector product $h = \left(\sigma\nabla^2 f(x) + \sum_{i=1}^m \mu_i \nabla^2 g_i(x)\right)v$, storing the result in the vector h.

The vectors have dimensions such that length(h) == length(x) == length(v).

Implementation notes

When implementing this method, you must not assume that h is Vector{Float64}, but you may assume that it supports setindex! and length. For example, it may be the view of a vector.

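For the hypothetical ToyEvaluator, the full Hessian-of-the-Lagrangian is $\left[\begin{array}{cc} 2\sigma + 2\mu_2 & \mu_1 \\ \mu_1 & 2\sigma \end{array}\right]$, so the product is:

function MOI.eval_hessian_lagrangian_product(::ToyEvaluator, h::AbstractVector, x::AbstractVector, v::AbstractVector, σ::Real, μ::AbstractVector)
    h[1] = (2.0 * σ + 2.0 * μ[2]) * v[1] + μ[1] * v[2]
    h[2] = μ[1] * v[1] + 2.0 * σ * v[2]
    return
end
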
MathOptInterface.objective_expr (Function)
objective_expr(d::AbstractNLPEvaluator)::Expr

Returns a Julia Expr object representing the expression graph of the objective function.

Format

The expression has a number of limitations, compared with arbitrary Julia expressions:

  • All sums and products are flattened out as simple Expr(:+, ...) and Expr(:*, ...) objects.
  • All decision variables must be of the form Expr(:ref, :x, MOI.VariableIndex(i)), where i is the $i$th variable in ListOfVariableIndices.
  • There are currently no restrictions on recognized functions; typically these will be built-in Julia functions like ^, exp, log, cos, tan, sqrt, etc., but modeling interfaces may choose to extend these basic functions, or error if they encounter unsupported functions.

Examples

The expression $x_1+\sin(x_2/\exp(x_3))$ is represented as

:(x[MOI.VariableIndex(1)] + sin(x[MOI.VariableIndex(2)] / exp(x[MOI.VariableIndex(3)])))

or equivalently

Expr(
    :call,
    :+,
    Expr(:ref, :x, MOI.VariableIndex(1)),
    Expr(
        :call,
        :sin,
        Expr(
            :call,
            :/,
            Expr(:ref, :x, MOI.VariableIndex(2)),
            Expr(:call, :exp, Expr(:ref, :x, MOI.VariableIndex(3))),
        ),
    ),
)

MathOptInterface.constraint_expr (Function)
constraint_expr(d::AbstractNLPEvaluator, i::Integer)::Expr

Returns a Julia Expr object representing the expression graph for the $i\text{th}$ nonlinear constraint.

Format

The format is the same as objective_expr, with an additional comparison operator that indicates the sense of the constraint and its bounds.

For single-sided comparisons, the body of the constraint must be on the left-hand side, and the right-hand side must be a constant.

For double-sided comparisons (that is, $l \le f(x) \le u$), the body of the constraint must be in the middle, and the left- and right-hand sides must be constants.

The bounds on the constraints must match the NLPBoundsPairs passed to NLPBlockData.

Examples

:(x[MOI.VariableIndex(1)]^2 <= 1.0)
:(x[MOI.VariableIndex(1)]^2 >= 2.0)
:(x[MOI.VariableIndex(1)]^2 == 3.0)
:(4.0 <= x[MOI.VariableIndex(1)]^2 <= 5.0)