# API Reference

[Some introduction to API. List basic standalone methods.]

## Attributes

List of attribute categories.

`AbstractOptimizerAttribute`

Abstract supertype for attribute objects that can be used to set or get attributes (properties) of the optimizer.

**Note**

The difference between `AbstractOptimizerAttribute` and `AbstractModelAttribute` lies in the behavior of `is_empty`, `empty!` and `copy_to`. Typically optimizer attributes only affect how the model is solved.

`AbstractModelAttribute`

Abstract supertype for attribute objects that can be used to set or get attributes (properties) of the model.

`AbstractVariableAttribute`

Abstract supertype for attribute objects that can be used to set or get attributes (properties) of variables in the model.

`AbstractConstraintAttribute`

Abstract supertype for attribute objects that can be used to set or get attributes (properties) of constraints in the model.

Attributes can be set in different ways:

* it is either set when the model is created like `SolverName` and `RawSolver`,
* or explicitly when the model is copied like `ObjectiveSense`,
* or implicitly, e.g., `NumberOfVariables` is implicitly set by `add_variable` and `ConstraintFunction` is implicitly set by `add_constraint`,
* or it is set to contain the result of the optimization during `optimize!` like `VariablePrimal`.
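As a rough sketch, the four categories look like this in practice (assuming `optimizer` is an `AbstractOptimizer` instance provided by some solver package, and `MOI` is an alias for `MathOptInterface`):

```
const MOI = MathOptInterface

MOI.get(optimizer, MOI.SolverName())        # fixed when the optimizer is created
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MaxSense)  # set explicitly
x = MOI.add_variable(optimizer)             # implicitly updates NumberOfVariables
MOI.optimize!(optimizer)
MOI.get(optimizer, MOI.VariablePrimal(), x) # available only after optimize!
```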

The following functions allow one to distinguish between some of these different categories:

`MathOptInterface.is_set_by_optimize` — Function. `is_set_by_optimize(::AnyAttribute)`

Return a `Bool` indicating whether the value of the attribute is modified during an `optimize!` call, that is, the attribute is used to query the result of the optimization.

**Important note when defining new attributes**

This function returns `false` by default, so it should be implemented for attributes that are modified by `optimize!`.
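For example, a hypothetical solver-specific result attribute (the name `SolveDuration` is illustrative, not part of the API) would opt in as follows:

```
struct SolveDuration <: MOI.AbstractModelAttribute end

# Without this method, the default `false` would apply and the attribute
# would be treated as copyable rather than as a result attribute.
MOI.is_set_by_optimize(::SolveDuration) = true
```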

`MathOptInterface.is_copyable` — Function. `is_copyable(::AnyAttribute)`

Return a `Bool` indicating whether the value of the attribute may be copied during `copy_to` using `set`.

**Important note when defining new attributes**

By default `is_copyable(attr)` returns `!is_set_by_optimize(attr)`. A specific method should be defined for attributes which are copied indirectly during `copy_to`. For instance, both `is_copyable` and `is_set_by_optimize` return `false` for the following attributes:

* `ListOfOptimizerAttributesSet`, `ListOfModelAttributesSet`, `ListOfConstraintAttributesSet` and `ListOfVariableAttributesSet`.
* `SolverName` and `RawSolver`: these attributes cannot be set.
* `NumberOfVariables` and `ListOfVariableIndices`: these attributes are set indirectly by `add_variable` and `add_variables`.
* `ObjectiveFunctionType`: this attribute is set indirectly when setting the `ObjectiveFunction` attribute.
* `NumberOfConstraints`, `ListOfConstraintIndices`, `ListOfConstraints`, `ConstraintFunction` and `ConstraintSet`: these attributes are set indirectly by `add_constraint` and `add_constraints`.

Functions for getting and setting attributes.

`MathOptInterface.get` — Function. `get(optimizer::AbstractOptimizer, attr::AbstractOptimizerAttribute)`

Return an attribute `attr` of the optimizer `optimizer`.

`get(model::ModelLike, attr::AbstractModelAttribute)`

Return an attribute `attr` of the model `model`.

`get(model::ModelLike, attr::AbstractVariableAttribute, v::VariableIndex)`

Return an attribute `attr` of the variable `v` in model `model`.

`get(model::ModelLike, attr::AbstractVariableAttribute, v::Vector{VariableIndex})`

Return a vector of attributes corresponding to each variable in the collection `v` in the model `model`.

`get(model::ModelLike, attr::AbstractConstraintAttribute, c::ConstraintIndex)`

Return an attribute `attr` of the constraint `c` in model `model`.

`get(model::ModelLike, attr::AbstractConstraintAttribute, c::Vector{ConstraintIndex{F,S}})`

Return a vector of attributes corresponding to each constraint in the collection `c` in the model `model`.

`get(model::ModelLike, ::Type{VariableIndex}, name::String)`

If a variable with name `name` exists in the model `model`, return the corresponding index, otherwise return `nothing`. Errors if two variables have the same name and the model implementation does not check for duplicates when the names are set.

`get(model::ModelLike, ::Type{ConstraintIndex{F,S}}, name::String) where {F<:AbstractFunction,S<:AbstractSet}`

If an `F`-in-`S` constraint with name `name` exists in the model `model`, return the corresponding index, otherwise return `nothing`. Errors if two constraints have the same name and the model implementation does not check for duplicates when the names are set.

`get(model::ModelLike, ::Type{ConstraintIndex}, name::String)`

If *any* constraint with name `name` exists in the model `model`, return the corresponding index, otherwise return `nothing`. This version is available for convenience but may incur a performance penalty because it is not type stable. Errors if two constraints have the same name and the model implementation does not check for duplicates when the names are set.

**Examples**

```
get(model, ObjectiveValue())
get(model, VariablePrimal(), ref)
get(model, VariablePrimal(5), [ref1, ref2])
get(model, OtherAttribute("something specific to cplex"))
get(model, VariableIndex, "var1")
get(model, ConstraintIndex{ScalarAffineFunction{Float64},LessThan{Float64}}, "con1")
get(model, ConstraintIndex, "con1")
```

`MathOptInterface.get!` — Function. `get!(output, model::ModelLike, args...)`

An in-place version of `get`. The signature matches that of `get` except that the result is placed in the vector `output`.

`MathOptInterface.set` — Function. `set(optimizer::AbstractOptimizer, attr::AbstractOptimizerAttribute, value)`

Assign `value` to the attribute `attr` of the optimizer `optimizer`.

`set(model::ModelLike, attr::AbstractModelAttribute, value)`

Assign `value` to the attribute `attr` of the model `model`.

`set(model::ModelLike, attr::AbstractVariableAttribute, v::VariableIndex, value)`

Assign `value` to the attribute `attr` of variable `v` in model `model`.

`set(model::ModelLike, attr::AbstractVariableAttribute, v::Vector{VariableIndex}, vector_of_values)`

Assign a value respectively to the attribute `attr` of each variable in the collection `v` in model `model`.

`set(model::ModelLike, attr::AbstractConstraintAttribute, c::ConstraintIndex, value)`

Assign a value to the attribute `attr` of constraint `c` in model `model`.

`set(model::ModelLike, attr::AbstractConstraintAttribute, c::Vector{ConstraintIndex{F,S}}, vector_of_values)`

Assign a value respectively to the attribute `attr` of each constraint in the collection `c` in model `model`.

An `UnsupportedAttribute` error is thrown if `model` does not support the attribute `attr` (see `supports`) and a `SetAttributeNotAllowed` error is thrown if it supports the attribute `attr` but it cannot be set.

**Replace set in a constraint**

`set(model::ModelLike, ::ConstraintSet, c::ConstraintIndex{F,S}, set::S)`

Change the set of constraint `c` to the new set `set`, which should be of the same type as the original set.

**Examples**

If `c` is a `ConstraintIndex{F,Interval}`,

```
set(model, ConstraintSet(), c, Interval(0, 5))
set(model, ConstraintSet(), c, GreaterThan(0.0)) # Error
```

**Replace function in a constraint**

`set(model::ModelLike, ::ConstraintFunction, c::ConstraintIndex{F,S}, func::F)`

Replace the function in constraint `c` with `func`. `F` must match the original function type used to define the constraint.

**Examples**

If `c` is a `ConstraintIndex{ScalarAffineFunction,S}` and `v1` and `v2` are `VariableIndex` objects,

```
set(model, ConstraintFunction(), c, ScalarAffineFunction([v1,v2],[1.0,2.0],5.0))
set(model, ConstraintFunction(), c, SingleVariable(v1)) # Error
```

`MathOptInterface.supports` — Function. `supports(model::ModelLike, attr::AbstractOptimizerAttribute)::Bool`

Return a `Bool` indicating whether `model` supports the optimizer attribute `attr`. That is, it returns `false` if `copy_to(model, src)` shows a warning in case `attr` is in the `ListOfOptimizerAttributesSet` of `src`; see `copy_to` for more details on how unsupported optimizer attributes are handled in copy.

`supports(model::ModelLike, attr::AbstractModelAttribute)::Bool`

Return a `Bool` indicating whether `model` supports the model attribute `attr`. That is, it returns `false` if `copy_to(model, src)` cannot be performed in case `attr` is in the `ListOfModelAttributesSet` of `src`.

`supports(model::ModelLike, attr::AbstractVariableAttribute, ::Type{VariableIndex})::Bool`

Return a `Bool` indicating whether `model` supports the variable attribute `attr`. That is, it returns `false` if `copy_to(model, src)` cannot be performed in case `attr` is in the `ListOfVariableAttributesSet` of `src`.

`supports(model::ModelLike, attr::AbstractConstraintAttribute, ::Type{ConstraintIndex{F,S}})::Bool where {F,S}`

Return a `Bool` indicating whether `model` supports the constraint attribute `attr` applied to an `F`-in-`S` constraint. That is, it returns `false` if `copy_to(model, src)` cannot be performed in case `attr` is in the `ListOfConstraintAttributesSet` of `src`.

For all four methods, if the attribute is only not supported in specific circumstances, it should still return `true`.

Note that `supports` is only defined for attributes for which `is_copyable` returns `true`, as other attributes do not appear in the list of attributes set obtained by `ListOf...AttributesSet`.
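A caller that wants to degrade gracefully can therefore guard `set` with `supports`; a minimal sketch, assuming `model` and a variable index `x` already exist:

```
if MOI.supports(model, MOI.VariablePrimalStart(), MOI.VariableIndex)
    MOI.set(model, MOI.VariablePrimalStart(), x, 0.0)
end
```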

## Model Interface

`MathOptInterface.ModelLike` — Type. `ModelLike`

Abstract supertype for objects that implement the "Model" interface for defining an optimization problem.

`Base.isempty` — Function. `isempty(collection) -> Bool`

Determine whether a collection is empty (has no elements).

**Examples**

```
julia> isempty([])
true
julia> isempty([1 2 3])
false
```

`MathOptInterface.empty!` — Function. `empty!(model::ModelLike)`

Empty the model, that is, remove all variables, constraints and model attributes but not optimizer attributes.

`MathOptInterface.write_to_file` — Function. `write_to_file(model::ModelLike, filename::String)`

Write the current model data to the given file. Supported file types depend on the model type.

`MathOptInterface.read_from_file` — Function. `read_from_file(model::ModelLike, filename::String)`

Read the file `filename` into the model `model`. If `model` is non-empty, this may throw an error.

Supported file types depend on the model type.

**Note**

Once the contents of the file are loaded into the model, users can query the variables via `get(model, ListOfVariableIndices())`. However, some filetypes, such as LP files, do not maintain an explicit ordering of the variables. Therefore, the returned list may be in an arbitrary order. To avoid depending on the order of the indices, users should look up each variable index by name: `get(model, VariableIndex, "name")`.
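Following that advice, a read-then-lookup sequence might look like this (the filename and variable name are placeholders):

```
MOI.read_from_file(model, "problem.lp")
# Robust to the arbitrary variable ordering of LP files:
x = MOI.get(model, MOI.VariableIndex, "x")
```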

### Copying

`MathOptInterface.copy_to` — Function. `copy_to(dest::ModelLike, src::ModelLike; copy_names=true, warn_attributes=true)`

Copy the model from `src` into `dest`. The target `dest` is emptied, and all previous indices to variables or constraints in `dest` are invalidated. Returns a dictionary-like object that translates variable and constraint indices from the `src` model to the corresponding indices in the `dest` model.

If `copy_names` is `false`, the `Name`, `VariableName` and `ConstraintName` attributes are not copied even if they are set in `src`. If a constraint that is copied from `src` is not supported by `dest` then an `UnsupportedConstraint` error is thrown. Similarly, if a model, variable or constraint attribute that is copied from `src` is not supported by `dest` then an `UnsupportedAttribute` error is thrown. Unsupported *optimizer* attributes are treated differently: if `warn_attributes` is `true`, a warning is displayed; otherwise, the attribute is silently ignored.

**Example**

```
# Given empty `ModelLike` objects `src` and `dest`.
x = add_variable(src)
is_valid(src, x) # true
is_valid(dest, x) # false (`dest` has no variables)
index_map = copy_to(dest, src)
is_valid(dest, x) # false (unless index_map[x] == x)
is_valid(dest, index_map[x]) # true
```

### List of model attributes

`MathOptInterface.Name` — Type. `Name()`

A model attribute for the string identifying the model.

`MathOptInterface.ObjectiveSense` — Type. `ObjectiveSense()`

A model attribute for the `OptimizationSense` of the objective function, which can be `MinSense`, `MaxSense`, or `FeasibilitySense`.

`NumberOfVariables()`

A model attribute for the number of variables in the model.

`ListOfVariableIndices()`

A model attribute for the `Vector{VariableIndex}` of all variable indices present in the model (i.e., of length equal to the value of `NumberOfVariables()`) in the order in which they were added.

`ListOfConstraints()`

A model attribute for the list of tuples of the form `(F,S)`, where `F` is a function type and `S` is a set type indicating that the attribute `NumberOfConstraints{F,S}()` has value greater than zero.

`NumberOfConstraints{F,S}()`

A model attribute for the number of constraints of the type `F`-in-`S` present in the model.

`ListOfConstraintIndices{F,S}()`

A model attribute for the `Vector{ConstraintIndex{F,S}}` of all constraint indices of type `F`-in-`S` in the model (i.e., of length equal to the value of `NumberOfConstraints{F,S}()`) in the order in which they were added.

`ListOfOptimizerAttributesSet()`

An optimizer attribute for the `Vector{AbstractOptimizerAttribute}` of all optimizer attributes that were set.

`ListOfModelAttributesSet()`

A model attribute for the `Vector{AbstractModelAttribute}` of all model attributes `attr` such that 1) `is_copyable(attr)` returns `true` and 2) the attribute was set to the model.

`ListOfVariableAttributesSet()`

A model attribute for the `Vector{AbstractVariableAttribute}` of all variable attributes `attr` such that 1) `is_copyable(attr)` returns `true` and 2) the attribute was set to variables.

`ListOfConstraintAttributesSet{F, S}()`

A model attribute for the `Vector{AbstractConstraintAttribute}` of all constraint attributes `attr` such that 1) `is_copyable(attr)` returns `true` and 2) the attribute was set to `F`-in-`S` constraints.

**Note**

The attributes `ConstraintFunction` and `ConstraintSet` should not be included in the list even if they have been set with `set`.

## Optimizers

`AbstractOptimizer`

Abstract supertype for objects representing an instance of an optimization problem tied to a particular solver. This is typically a solver's in-memory representation. In addition to `ModelLike`, `AbstractOptimizer` objects let you solve the model and query the solution.

`MathOptInterface.optimize!` — Function. `optimize!(optimizer::AbstractOptimizer)`

Start the solution procedure.

### List of optimizer attributes

`MathOptInterface.SolverName` — Type. `SolverName()`

An optimizer attribute for the string identifying the solver/optimizer.

### List of attributes useful for optimizers

`MathOptInterface.RawSolver` — Type. `RawSolver()`

A model attribute for the object that may be used to access a solver-specific API for this optimizer.

`MathOptInterface.ResultCount` — Type. `ResultCount()`

A model attribute for the number of results available.

`ObjectiveFunction{F<:AbstractScalarFunction}()`

A model attribute for the objective function, which has a type `F<:AbstractScalarFunction`. `F` should be guaranteed to be equivalent but not necessarily identical to the function type provided by the user. Throws an `InexactError` if the objective function cannot be converted to `F`, e.g. the objective function is quadratic and `F` is `ScalarAffineFunction{Float64}`, or it has non-integer coefficients and `F` is `ScalarAffineFunction{Int}`.

`ObjectiveFunctionType()`

A model attribute for the type `F` of the objective function set using the `ObjectiveFunction{F}` attribute.

**Examples**

In the following code, `attr` should be equal to `MOI.SingleVariable`:

```
x = MOI.add_variable(model)
MOI.set(model, MOI.ObjectiveFunction{MOI.SingleVariable}(),
        MOI.SingleVariable(x))
attr = MOI.get(model, MOI.ObjectiveFunctionType())
```

`MathOptInterface.ObjectiveValue` — Type. `ObjectiveValue(resultidx::Int=1)`

A model attribute for the objective value of the `resultidx`th primal result.

`MathOptInterface.ObjectiveBound` — Type. `ObjectiveBound()`

A model attribute for the best known bound on the optimal objective value.

`MathOptInterface.RelativeGap` — Type. `RelativeGap()`

A model attribute for the final relative optimality gap, defined as $\frac{|b-f|}{|f|}$, where $b$ is the best bound and $f$ is the best feasible objective value.

`MathOptInterface.SolveTime` — Type. `SolveTime()`

A model attribute for the total elapsed solution time (in seconds) as reported by the optimizer.

`SimplexIterations()`

A model attribute for the cumulative number of simplex iterations during the optimization process. In particular, for a mixed-integer program (MIP), the total simplex iterations for all nodes.

`BarrierIterations()`

A model attribute for the cumulative number of barrier iterations while solving a problem.

`MathOptInterface.NodeCount` — Type. `NodeCount()`

A model attribute for the total number of branch-and-bound nodes explored while solving a mixed-integer program (MIP).

`TerminationStatus()`

A model attribute for the `TerminationStatusCode` explaining why the optimizer stopped.

`MathOptInterface.PrimalStatus` — Type.

```
PrimalStatus(N)
PrimalStatus()
```

A model attribute for the `ResultStatusCode` of the primal result `N`. If `N` is omitted, it defaults to 1.

`MathOptInterface.DualStatus` — Type.

```
DualStatus(N)
DualStatus()
```

A model attribute for the `ResultStatusCode` of the dual result `N`. If `N` is omitted, it defaults to 1.

### Termination Status

The `TerminationStatus` attribute indicates why the optimizer stopped executing. The value of the attribute is of type `TerminationStatusCode`.

`TerminationStatusCode`

An Enum of possible values for the `TerminationStatus` attribute. This attribute is meant to explain the reason why the optimizer stopped executing.

**OK**

These are generally OK statuses.

* `Success`: the algorithm ran successfully and has a result; this includes cases where the algorithm converges to an infeasible point (NLP) or converges to a solution of a homogeneous self-dual problem and has a certificate of primal/dual infeasibility
* `InfeasibleNoResult`: the algorithm stopped because it decided that the problem is infeasible but does not have a result to return
* `UnboundedNoResult`: the algorithm stopped because it decided that the problem is unbounded but does not have a result to return
* `InfeasibleOrUnbounded`: the algorithm stopped because it decided that the problem is infeasible or unbounded (no result is available); this occasionally happens during MIP presolve

**Limits**

The optimizer stopped because of some user-defined limit. To be documented: `IterationLimit`, `TimeLimit`, `NodeLimit`, `SolutionLimit`, `MemoryLimit`, `ObjectiveLimit`, `NormLimit`, `OtherLimit`.

**Problematic**

This group of statuses means that something unexpected or problematic happened.

* `SlowProgress`: the algorithm stopped because it was unable to continue making progress towards the solution
* `AlmostSuccess` should be used if there is additional information that relaxed convergence tolerances are satisfied

To be documented: `NumericalError`, `InvalidModel`, `InvalidOption`, `Interrupted`, `OtherError`.

### Result Status

The `PrimalStatus` and `DualStatus` attributes indicate how to interpret the result returned by the solver. The value of the attribute is of type `ResultStatusCode`.

`ResultStatusCode`

An Enum of possible values for the `PrimalStatus` and `DualStatus` attributes. The values indicate how to interpret the result vector.

* `NoSolution`
* `FeasiblePoint`
* `NearlyFeasiblePoint`
* `InfeasiblePoint`
* `InfeasibilityCertificate`
* `NearlyInfeasibilityCertificate`
* `ReductionCertificate`
* `NearlyReductionCertificate`
* `UnknownResultStatus`
* `OtherResultStatus`
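A typical post-solve check combines the termination and result status attributes before querying results; a sketch, assuming `optimizer` is a solver instance with a model already loaded:

```
MOI.optimize!(optimizer)
if MOI.get(optimizer, MOI.TerminationStatus()) == MOI.Success &&
   MOI.get(optimizer, MOI.PrimalStatus()) == MOI.FeasiblePoint
    obj = MOI.get(optimizer, MOI.ObjectiveValue())
end
```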

## Variables and Constraints

### Basis Status

The `BasisStatus` attribute of a variable or constraint describes its status with respect to a basis, if one is known. The value of the attribute is of type `BasisStatusCode`.

`MathOptInterface.BasisStatusCode` — Type. `BasisStatusCode`

An Enum of possible values for the `VariableBasisStatus` and `ConstraintBasisStatus` attributes. This explains the status of a given element with respect to an optimal solution basis. Possible values are:

* `Basic`: element is in the basis
* `Nonbasic`: element is not in the basis
* `NonbasicAtLower`: element is not in the basis and is at its lower bound
* `NonbasicAtUpper`: element is not in the basis and is at its upper bound
* `SuperBasic`: element is not in the basis but is also not at one of its bounds

### Index types

`MathOptInterface.VariableIndex` — Type. `VariableIndex`

A type-safe wrapper for `Int64` for use in referencing variables in a model. To allow for deletion, indices need not be consecutive.

`MathOptInterface.ConstraintIndex` — Type. `ConstraintIndex{F,S}`

A type-safe wrapper for `Int64` for use in referencing `F`-in-`S` constraints in a model. The parameter `F` is the type of the function in the constraint, and the parameter `S` is the type of set in the constraint. To allow for deletion, indices need not be consecutive. Indices within a constraint type (i.e. `F`-in-`S`) must be unique, but non-unique indices across different constraint types are allowed.

`MathOptInterface.is_valid` — Function. `is_valid(model::ModelLike, index::Index)::Bool`

Return a `Bool` indicating whether this index refers to a valid object in the model `model`.

`MathOptInterface.delete` — Method. `delete(model::ModelLike, index::Index)`

Delete the referenced object from the model.

### Variables

Functions for adding variables. For deleting, see index types section.

`MathOptInterface.add_variables` — Function. `add_variables(model::ModelLike, n::Int)::Vector{VariableIndex}`

Add `n` scalar variables to the model, returning a vector of variable indices.

An `AddVariableNotAllowed` error is thrown if adding variables cannot be done in the current state of the model `model`.

`MathOptInterface.add_variable` — Function. `add_variable(model::ModelLike)::VariableIndex`

Add a scalar variable to the model, returning a variable index.

An `AddVariableNotAllowed` error is thrown if adding variables cannot be done in the current state of the model `model`.

List of attributes associated with variables. [category AbstractVariableAttribute] Calls to `get` and `set` should include as an argument a single `VariableIndex` or a vector of `VariableIndex` objects.

`MathOptInterface.VariableName` — Type. `VariableName()`

A variable attribute for the string identifying the variable. It is invalid for two variables to have the same name.

**Note**

An implementation may but is not required to check for duplicate names when the `VariableName` attribute is set. If this check is not performed when the name is set, then looking up a variable by name must throw an error when more than one variable has the same name.

`VariablePrimalStart()`

A variable attribute for the initial assignment to some primal variable's value that the optimizer may use to warm-start the solve.

`MathOptInterface.VariablePrimal` — Type.

```
VariablePrimal(N)
VariablePrimal()
```

A variable attribute for the assignment to some primal variable's value in result `N`. If `N` is omitted, it is 1 by default.

`VariableBasisStatus()`

A variable attribute for the `BasisStatusCode` of some variable, with respect to an available optimal solution basis.

### Constraints

Functions for adding and modifying constraints.

`MathOptInterface.is_valid` — Method. `is_valid(model::ModelLike, index::Index)::Bool`

Return a `Bool` indicating whether this index refers to a valid object in the model `model`.

`MathOptInterface.add_constraint` — Function. `add_constraint(model::ModelLike, func::F, set::S)::ConstraintIndex{F,S} where {F,S}`

Add the constraint $f(x) \in \mathcal{S}$ where $f$ is defined by `func`, and $\mathcal{S}$ is defined by `set`.

```
add_constraint(model::ModelLike, v::VariableIndex, set::S)::ConstraintIndex{SingleVariable,S} where {S}
add_constraint(model::ModelLike, vec::Vector{VariableIndex}, set::S)::ConstraintIndex{VectorOfVariables,S} where {S}
```

Add the constraint $v \in \mathcal{S}$ where $v$ is the variable (or vector of variables) referenced by `v` and $\mathcal{S}$ is defined by `set`.

An `UnsupportedConstraint` error is thrown if `model` does not support `F`-in-`S` constraints, and an `AddConstraintNotAllowed` error is thrown if it supports `F`-in-`S` constraints but it cannot add the constraint(s) in its current state.

`MathOptInterface.add_constraints` — Function. `add_constraints(model::ModelLike, funcs::Vector{F}, sets::Vector{S})::Vector{ConstraintIndex{F,S}} where {F,S}`

Add the set of constraints specified by each function-set pair in `funcs` and `sets`. `F` and `S` should be concrete types. This call is equivalent to `add_constraint.(model, funcs, sets)` but may be more efficient.
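For instance, the vectorized method can add several scalar constraints of the same concrete type at once; a sketch, assuming `model` supports `ScalarAffineFunction`-in-`LessThan` constraints:

```
x = MOI.add_variables(model, 2)
funcs = [MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(1.0, xi)], 0.0) for xi in x]
sets  = [MOI.LessThan(1.0), MOI.LessThan(2.0)]
cis = MOI.add_constraints(model, funcs, sets)  # returns a Vector of constraint indices
```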

`MathOptInterface.transform` — Function. **Transform Constraint Set**

`transform(model::ModelLike, c::ConstraintIndex{F,S1}, newset::S2)::ConstraintIndex{F,S2}`

Replace the set in constraint `c` with `newset`. The constraint index `c` will no longer be valid, and the function returns a new constraint index with the correct type.

Solvers may only support a subset of constraint transforms that they perform efficiently (for example, changing from a `LessThan` to `GreaterThan` set). In addition, set modification (where `S1 = S2`) should be performed via the `modify` function.

Typically, the user should delete the constraint and add a new one.

**Examples**

If `c` is a `ConstraintIndex{ScalarAffineFunction{Float64},LessThan{Float64}}`,

```
c2 = transform(model, c, GreaterThan(0.0))
transform(model, c, LessThan(0.0)) # errors
```

`MathOptInterface.supports_constraint` — Function. `MOI.supports_constraint(BT::Type{<:AbstractBridge}, F::Type{<:MOI.AbstractFunction}, S::Type{<:MOI.AbstractSet})::Bool`

Return a `Bool` indicating whether the bridges of type `BT` support bridging `F`-in-`S` constraints.

`supports_constraint(model::ModelLike, ::Type{F}, ::Type{S})::Bool where {F<:AbstractFunction,S<:AbstractSet}`

Return a `Bool` indicating whether `model` supports `F`-in-`S` constraints, that is, `copy_to(model, src)` does not return `CopyUnsupportedConstraint` when `src` contains `F`-in-`S` constraints. If `F`-in-`S` constraints are only not supported in specific circumstances, e.g. `F`-in-`S` constraints cannot be combined with another type of constraint, it should still return `true`.

List of attributes associated with constraints. [category AbstractConstraintAttribute] Calls to `get` and `set` should include as an argument a single `ConstraintIndex` or a vector of `ConstraintIndex{F,S}` objects.

`MathOptInterface.ConstraintName` — Type. `ConstraintName()`

A constraint attribute for the string identifying the constraint. It is invalid for two constraints of any kind to have the same name.

**Note**

An implementation may but is not required to check for duplicate names when the `ConstraintName` attribute is set. If this check is not performed when the name is set, then looking up a constraint by name must throw an error when more than one constraint (of any type) has the same name.

`ConstraintPrimalStart()`

A constraint attribute for the initial assignment to some constraint's primal value(s) that the optimizer may use to warm-start the solve.

`ConstraintDualStart()`

A constraint attribute for the initial assignment to some constraint's dual value(s) that the optimizer may use to warm-start the solve.

`MathOptInterface.ConstraintPrimal` — Type.

```
ConstraintPrimal(N)
ConstraintPrimal()
```

A constraint attribute for the assignment to some constraint's primal value(s) in result `N`. If `N` is omitted, it is 1 by default.

Given a constraint `function-in-set`, the `ConstraintPrimal` is the value of the function evaluated at the primal solution of the variables. For example, given the constraint `ScalarAffineFunction([x,y], [1, 2], 3)`-in-`Interval(0, 20)` and a primal solution of `(x,y) = (4,5)`, the `ConstraintPrimal` solution of the constraint is `1 * 4 + 2 * 5 + 3 = 17`.

`MathOptInterface.ConstraintDual` — Type.

```
ConstraintDual(N)
ConstraintDual()
```

A constraint attribute for the assignment to some constraint's dual value(s) in result `N`. If `N` is omitted, it is 1 by default.

`ConstraintBasisStatus()`

A constraint attribute for the `BasisStatusCode` of some constraint, with respect to an available optimal solution basis.

`ConstraintFunction()`

A constraint attribute for the `AbstractFunction` object used to define the constraint. It is guaranteed to be equivalent but not necessarily identical to the function provided by the user.

`MathOptInterface.ConstraintSet` — Type. `ConstraintSet()`

A constraint attribute for the `AbstractSet` object used to define the constraint.

## Functions and function modifications

List of recognized functions.

`AbstractFunction`

Abstract supertype for function objects.

`MathOptInterface.SingleVariable` — Type. `SingleVariable(variable)`

The function that extracts the scalar variable referenced by `variable`, a `VariableIndex`. This function is naturally used for single variable bounds or integrality constraints.

`VectorOfVariables(variables)`

The function that extracts the vector of variables referenced by `variables`, a `Vector{VariableIndex}`. This function is naturally used for constraints that apply to groups of variables, such as an "all different" constraint, an indicator constraint, or a complementarity constraint.

```
struct ScalarAffineTerm{T}
    coefficient::T
    variable_index::VariableIndex
end
```

Represents $c x_i$ where $c$ is `coefficient` and $x_i$ is the variable identified by `variable_index`.

`ScalarAffineFunction{T}(terms, constant)`

The scalar-valued affine function $a^T x + b$, where:

* $a$ is a sparse vector specified by a list of `ScalarAffineTerm` structs.
* $b$ is a scalar specified by `constant::T`

Duplicate variable indices in `terms` are accepted, and the corresponding coefficients are summed together.
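For example, given a variable index `x`, the following two constructions denote the same function $3x + 3$, because the duplicate terms for `x` are summed:

```
f1 = MOI.ScalarAffineFunction(
    [MOI.ScalarAffineTerm(1.0, x), MOI.ScalarAffineTerm(2.0, x)], 3.0)
f2 = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(3.0, x)], 3.0)
```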

```
struct VectorAffineTerm{T}
    output_index::Int64
    scalar_term::ScalarAffineTerm{T}
end
```

A `ScalarAffineTerm` plus its index of the output component of a `VectorAffineFunction` or `VectorQuadraticFunction`. `output_index` can also be interpreted as a row index into a sparse matrix, where the `scalar_term` contains the column index and coefficient.

`VectorAffineFunction{T}(terms, constants)`

The vector-valued affine function $A x + b$, where:

* $A$ is a sparse matrix specified by a list of `VectorAffineTerm` objects.
* $b$ is a vector specified by `constants`

Duplicate indices in $A$ are accepted, and the corresponding coefficients are summed together.

```
struct ScalarQuadraticTerm{T}
    coefficient::T
    variable_index_1::VariableIndex
    variable_index_2::VariableIndex
end
```

Represents $c x_i x_j$ where $c$ is `coefficient`, $x_i$ is the variable identified by `variable_index_1` and $x_j$ is the variable identified by `variable_index_2`.

`ScalarQuadraticFunction{T}(affine_terms, quadratic_terms, constant)`

The scalar-valued quadratic function $\frac{1}{2}x^TQx + a^T x + b$, where:

* $a$ is a sparse vector specified by a list of `ScalarAffineTerm` structs.
* $b$ is a scalar specified by `constant`.
* $Q$ is a symmetric matrix specified by a list of `ScalarQuadraticTerm` structs.

Duplicate indices in $a$ or $Q$ are accepted, and the corresponding coefficients are summed together. "Mirrored" indices `(q,r)`

and `(r,q)`

(where `r`

and `q`

are `VariableIndex`

es) are considered duplicates; only one need be specified.
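As a sketch of the mirrored-index convention (assuming `MathOptInterface` is installed), the function $x^2 + xy$ can be written in the $\frac{1}{2}x^TQx$ form with $Q = \begin{bmatrix}2 & 1\\ 1 & 0\end{bmatrix}$, so the diagonal term carries coefficient 2 and the off-diagonal pair need only be specified once:

```julia
using MathOptInterface
const MOI = MathOptInterface

x, y = MOI.VariableIndex(1), MOI.VariableIndex(2)
f = MOI.ScalarQuadraticFunction(
    MOI.ScalarAffineTerm{Float64}[],       # no affine part
    [MOI.ScalarQuadraticTerm(2.0, x, x),   # Q[1,1] = 2, contributing (1/2)*2*x^2 = x^2
     MOI.ScalarQuadraticTerm(1.0, x, y)],  # Q[1,2] = Q[2,1] = 1; the mirrored (y, x)
                                           # term is implied and must not be repeated
    0.0,
)
```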

```
struct VectorQuadraticTerm{T}
output_index::Int64
scalar_term::ScalarQuadraticTerm{T}
end
```

A `ScalarQuadraticTerm`

plus its index of the output component of a `VectorQuadraticFunction`

. Each output component corresponds to a distinct sparse matrix $Q_i$.

`VectorQuadraticFunction{T}(affine_terms, quadratic_terms, constant)`

The vector-valued quadratic function with `i`th

component ("output index") defined as $\frac{1}{2}x^TQ_ix + a_i^T x + b_i$, where:

* $a_i$ is a sparse vector specified by the `VectorAffineTerm`s with `output_index == i`.
* $b_i$ is a scalar specified by `constants[i]`.
* $Q_i$ is a symmetric matrix specified by the `VectorQuadraticTerm`s with `output_index == i`.

Duplicate indices in $a_i$ or $Q_i$ are accepted, and the corresponding coefficients are summed together. "Mirrored" indices `(q,r)`

and `(r,q)`

(where `r`

and `q`

are `VariableIndex`

es) are considered duplicates; only one need be specified.

Functions for getting and setting properties of sets.

`MathOptInterface.output_dimension`

— Function.`output_dimension(f::AbstractFunction)`

Return 1 if `f`

has a scalar output and the number of output components if `f`

has a vector output.

## Sets

List of recognized sets.

`MathOptInterface.AbstractSet`

— Type.`AbstractSet`

Abstract supertype for set objects used to encode constraints.

`MathOptInterface.Reals`

— Type.`Reals(dimension)`

The set $\mathbb{R}^{dimension}$ (containing all points) of dimension `dimension`

.

`MathOptInterface.Zeros`

— Type.`Zeros(dimension)`

The set $\{ 0 \}^{dimension}$ (containing only the origin) of dimension `dimension`

.

`MathOptInterface.Nonnegatives`

— Type.`Nonnegatives(dimension)`

The nonnegative orthant $\{ x \in \mathbb{R}^{dimension} : x \ge 0 \}$ of dimension `dimension`

.

`MathOptInterface.Nonpositives`

— Type.`Nonpositives(dimension)`

The nonpositive orthant $\{ x \in \mathbb{R}^{dimension} : x \le 0 \}$ of dimension `dimension`

.

`MathOptInterface.GreaterThan`

— Type.`GreaterThan{T <: Real}(lower::T)`

The set $[lower,\infty) \subseteq \mathbb{R}$.

`MathOptInterface.LessThan`

— Type.`LessThan{T <: Real}(upper::T)`

The set $(-\infty,upper] \subseteq \mathbb{R}$.

`MathOptInterface.EqualTo`

— Type.`EqualTo{T <: Number}(value::T)`

The set containing the single point $x \in \mathbb{R}$ where $x$ is given by `value`

.

`MathOptInterface.Interval`

— Type.`Interval{T <: Real}(lower::T,upper::T)`

The interval $[lower, upper] \subseteq \mathbb{R}$. If `lower`

or `upper`

is `-Inf`

or `Inf`

, respectively, the set is interpreted as a one-sided interval.

`Interval(s::GreaterThan{<:AbstractFloat})`

Construct a (right-unbounded) `Interval`

equivalent to the given `GreaterThan`

set.

`Interval(s::LessThan{<:AbstractFloat})`

Construct a (left-unbounded) `Interval`

equivalent to the given `LessThan`

set.

`Interval(s::EqualTo{<:Real})`

Construct a (degenerate) `Interval`

equivalent to the given `EqualTo`

set.

`MathOptInterface.SecondOrderCone`

— Type.`SecondOrderCone(dimension)`

The second-order cone (or Lorentz cone) $\{ (t,x) \in \mathbb{R}^{dimension} : t \ge || x ||_2 \}$ of dimension `dimension`

.

`RotatedSecondOrderCone(dimension)`

The rotated second-order cone $\{ (t,u,x) \in \mathbb{R}^{dimension} : 2tu \ge || x ||_2^2, t,u \ge 0 \}$ of dimension `dimension`

.

`GeometricMeanCone(dimension)`

The geometric mean cone $\{ (t,x) \in \mathbb{R}^{n+1} : x \ge 0, t \le \sqrt[n]{x_1 x_2 \cdots x_n} \}$ of dimension `dimension`

${}=n+1$.

`MathOptInterface.ExponentialCone`

— Type.`ExponentialCone()`

The 3-dimensional exponential cone $\{ (x,y,z) \in \mathbb{R}^3 : y \exp (x/y) \le z, y > 0 \}$.

`DualExponentialCone()`

The 3-dimensional dual exponential cone $\{ (u,v,w) \in \mathbb{R}^3 : -u \exp (v/u) \le \exp(1) w, u < 0 \}$.

`MathOptInterface.PowerCone`

— Type.`PowerCone{T <: Real}(exponent::T)`

The 3-dimensional power cone $\{ (x,y,z) \in \mathbb{R}^3 : x^{exponent} y^{1-exponent} >= |z|, x \ge 0, y \ge 0 \}$ with parameter `exponent`

.

`MathOptInterface.DualPowerCone`

— Type.`DualPowerCone{T <: Real}(exponent::T)`

The 3-dimensional power cone $\{ (u,v,w) \in \mathbb{R}^3 : (\frac{u}{exponent})^{exponent} (\frac{v}{1-exponent})^{1-exponent} \ge |w|, u \ge 0, v \ge 0 \}$ with parameter `exponent`

.

`PositiveSemidefiniteConeTriangle(side_dimension)`

The (vectorized) cone of symmetric positive semidefinite matrices, with `side_dimension`

rows and columns. The entries of the upper-right triangular part of the matrix are given column by column (or equivalently, the entries of the lower-left triangular part are given row by row). A vectorized cone of `dimension`

$n$ corresponds to a square matrix with side dimension $\sqrt{1/4 + 2 n} - 1/2$. (Because a $d \times d$ matrix has $d(d+1)/2$ elements in the upper or lower triangle.)

**Examples**

The matrix

$$\begin{bmatrix} 1 & 2 & 4 \\ 2 & 3 & 5 \\ 4 & 5 & 6 \end{bmatrix}$$

corresponds to $(1, 2, 3, 4, 5, 6)$ for `PositiveSemidefiniteConeTriangle(3)`

**Note**

Two packed storage formats exist for symmetric matrices, the respective orders of the entries are:

* upper triangular column by column (or lower triangular row by row);
* lower triangular column by column (or upper triangular row by row).

The advantage of the first format is the mapping between the `(i, j)`

matrix indices and the `k`

index of the vectorized form. It is simpler and does not depend on the side dimension of the matrix. Indeed,

* the entry of matrix indices `(i, j)` has vectorized index `k = div((j-1)*j, 2) + i` if $i \leq j$ and `k = div((i-1)*i, 2) + j` if $j \leq i$;
* the entry with vectorized index `k` has matrix indices `j = div(1 + isqrt(8k - 7), 2)` and `i = k - div((j-1)*j, 2)`, or `i = div(1 + isqrt(8k - 7), 2)` and `j = k - div((i-1)*i, 2)`.
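The index mapping can be sketched in plain Julia (`tri_index` and `matrix_indices` are hypothetical helper names, not part of MOI):

```julia
# Vectorized index k for the upper-triangular column-by-column packed format
# used by PositiveSemidefiniteConeTriangle.
tri_index(i, j) = i <= j ? div((j - 1) * j, 2) + i : div((i - 1) * i, 2) + j

# Inverse mapping: recover (i, j) with i <= j from the vectorized index k.
function matrix_indices(k)
    j = div(1 + isqrt(8k - 7), 2)  # column: smallest j with j*(j+1)/2 >= k
    i = k - div((j - 1) * j, 2)
    return i, j
end

# For side dimension 3, the six entries are numbered
# (1,1) => 1, (1,2) => 2, (2,2) => 3, (1,3) => 4, (2,3) => 5, (3,3) => 6.
```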

**Duality note**

The scalar product for the symmetric matrix in its vectorized form is the sum of the pairwise product of the diagonal entries plus twice the sum of the pairwise product of the upper diagonal entries; see [p. 634, 1]. This has important consequence for duality. Consider for example the following problem

The dual is the following problem

Why do we use $2y_2$ in the dual constraint instead of $y_2$ ? The reason is that $2y_2$ is the scalar product between $y$ and the symmetric matrix whose vectorized form is $(0, 1, 0)$. Indeed, with our modified scalar products we have

**References**

[1] Boyd, S. and Vandenberghe, L.. *Convex optimization*. Cambridge university press, 2004.

`PositiveSemidefiniteConeSquare(side_dimension)`

The cone of symmetric positive semidefinite matrices, with side length `side_dimension`

. The entries of the matrix are given column by column (or equivalently, row by row). The matrix is both constrained to be symmetric and to be positive semidefinite. That is, if the functions in entries $(i, j)$ and $(j, i)$ are different, then a constraint will be added to make sure that the entries are equal.

**Examples**

Constraining the matrix

$$\begin{bmatrix} 1 & -y \\ -z & 0 \end{bmatrix}$$

to be symmetric positive semidefinite can be achieved by constraining the vector $(1, -z, -y, 0)$ (or $(1, -y, -z, 0)$) to belong to the `PositiveSemidefiniteConeSquare(2)`

. It both constrains $y = z$ and $(1, -y, 0)$ (or $(1, -z, 0)$) to be in `PositiveSemidefiniteConeTriangle(2)`

.

`LogDetConeTriangle(side_dimension)`

The Log-Determinant cone $\{ (t, X) \in \mathbb{R}^{1 + d(d+1)/2} : t \le \log(\det(X)) \}$ where the matrix `X`

is represented in the same symmetric packed format as in the `PositiveSemidefiniteConeTriangle`

. The argument `side_dimension`

is the side dimension of the matrix `X`

, i.e., its number of rows or columns.

`LogDetConeSquare(side_dimension)`

The Log-Determinant cone $\{ (t, X) \in \mathbb{R}^{1 + d^2} : t \le \log(\det(X)), X \text{ symmetric} \}$ where the matrix `X`

is represented in the same format as in the `PositiveSemidefiniteConeSquare`

. Similarly to `PositiveSemidefiniteConeSquare`

, constraints are added to ensure that `X`

is symmetric. The argument `side_dimension`

is the side dimension of the matrix `X`

, i.e., its number of rows or columns.

`RootDetConeTriangle(side_dimension)`

The Root-Determinant cone $\{ (t, X) \in \mathbb{R}^{1 + d(d+1)/2} : t \le \det(X)^{1/d} \}$ where the matrix `X`

is represented in the same symmetric packed format as in the `PositiveSemidefiniteConeTriangle`

. The argument `side_dimension`

is the side dimension of the matrix `X`

, i.e., its number of rows or columns.

`RootDetConeSquare(side_dimension)`

The Root-Determinant cone $\{ (t, X) \in \mathbb{R}^{1 + d^2} : t \le \det(X)^{1/d}, X \text{ symmetric} \}$ where the matrix `X`

is represented in the same format as in the `PositiveSemidefiniteConeSquare`

. Similarly to `PositiveSemidefiniteConeSquare`

, constraints are added to ensure that `X`

is symmetric. The argument `side_dimension`

is the side dimension of the matrix `X`

, i.e., its number of rows or columns.

`MathOptInterface.Integer`

— Type.`Integer()`

The set of integers $\mathbb{Z}$.

`MathOptInterface.ZeroOne`

— Type.`ZeroOne()`

The set $\{ 0, 1 \}$.

`MathOptInterface.Semicontinuous`

— Type.`Semicontinuous{T <: Real}(lower::T,upper::T)`

The set $\{0\} \cup [lower,upper]$.

`MathOptInterface.Semiinteger`

— Type.`Semiinteger{T <: Real}(lower::T,upper::T)`

The set $\{0\} \cup \{lower,lower+1,\ldots,upper-1,upper\}$.

`MathOptInterface.SOS1`

— Type.`SOS1{T <: Real}(weights::Vector{T})`

The set corresponding to the special ordered set (SOS) constraint of type 1. Of the variables in the set, at most one can be nonzero. The `weights`

induce an ordering of the variables; as such, they should be unique values. The *k*th element in the set corresponds to the *k*th weight in `weights`

. See here for a description of SOS constraints and their potential uses.

`MathOptInterface.SOS2`

— Type.`SOS2{T <: Real}(weights::Vector{T})`

The set corresponding to the special ordered set (SOS) constraint of type 2. Of the variables in the set, at most two can be nonzero, and if two are nonzero, they must be adjacent in the ordering of the set. The `weights`

induce an ordering of the variables; as such, they should be unique values. The *k*th element in the set corresponds to the *k*th weight in `weights`

. See here for a description of SOS constraints and their potential uses.

Functions for getting and setting properties of sets.

`MathOptInterface.dimension`

— Function.`dimension(s::AbstractSet)`

Return the `output_dimension`

that an `AbstractFunction`

should have to be used with the set `s`

.

**Examples**

```
julia> dimension(Reals(4))
4

julia> dimension(LessThan(3.0))
1

julia> dimension(PositiveSemidefiniteConeTriangle(2))
3
```

## Modifications

Functions for modifying objective and constraint functions.

`MathOptInterface.modify`

— Function.**Constraint Function**

`modify(model::ModelLike, ci::ConstraintIndex, change::AbstractFunctionModification)`

Apply the modification specified by `change`

to the function of constraint `ci`

.

A `ModifyConstraintNotAllowed`

error is thrown if modifying constraints is not supported by the model `model`

.

**Examples**

`modify(model, ci, ScalarConstantChange(10.0))`

**Objective Function**

`modify(model::ModelLike, ::ObjectiveFunction, change::AbstractFunctionModification)`

Apply the modification specified by `change`

to the objective function of `model`

. To change the function completely, call `set`

instead.

A `ModifyObjectiveNotAllowed`

error is thrown if modifying objectives is not supported by the model `model`

.

**Examples**

`modify(model, ObjectiveFunction{ScalarAffineFunction{Float64}}(), ScalarConstantChange(10.0))`

`AbstractFunctionModification`

An abstract supertype for structs which specify partial modifications to functions, to be used for making small modifications instead of replacing the functions entirely.

`ScalarConstantChange{T}(new_constant::T)`

A struct used to request a change in the constant term of a scalar-valued function. Applicable to `ScalarAffineFunction`

and `ScalarQuadraticFunction`

.

`VectorConstantChange{T}(new_constant::Vector{T})`

A struct used to request a change in the constant vector of a vector-valued function. Applicable to `VectorAffineFunction`

and `VectorQuadraticFunction`

.

`ScalarCoefficientChange{T}(variable::VariableIndex, new_coefficient::T)`

A struct used to request a change in the linear coefficient of a single variable in a scalar-valued function. Applicable to `ScalarAffineFunction`

and `ScalarQuadraticFunction`

.

`MathOptInterface.MultirowChange`

— Type.`MultirowChange{T}(variable::VariableIndex, new_coefficients::Vector{Tuple{Int64, T}})`

A struct used to request a change in the linear coefficients of a single variable in a vector-valued function. New coefficients are specified by `(output_index, coefficient)`

tuples. Applicable to `VectorAffineFunction`

and `VectorQuadraticFunction`

.
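For instance (a sketch assuming `MathOptInterface` is installed, with `model` and a vector-valued constraint index `ci` assumed to exist), the coefficient of one variable can be changed in several output rows at once:

```julia
using MathOptInterface
const MOI = MathOptInterface

x = MOI.VariableIndex(1)
# Set the coefficient of x to 4.0 in output row 1 and to 3.0 in output row 2.
change = MOI.MultirowChange(x, [(1, 4.0), (2, 3.0)])
# MOI.modify(model, ci, change)  # `model` and `ci` are assumed to exist
```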

## Nonlinear programming (NLP)

### Attributes

`MathOptInterface.NLPBlock`

— Type.`NLPBlock()`

Holds the `NLPBlockData`

that represents a set of nonlinear constraints, and optionally a nonlinear objective.

`MathOptInterface.NLPBoundsPair`

— Type.`NLPBoundsPair(lower,upper)`

A struct holding a pair of lower and upper bounds. `-Inf`

and `Inf`

can be used to indicate no lower or upper bound, respectively.

`MathOptInterface.NLPBlockData`

— Type.```
struct NLPBlockData
constraint_bounds::Vector{NLPBoundsPair}
evaluator::AbstractNLPEvaluator
has_objective::Bool
end
```

A `struct`

encoding a set of nonlinear constraints of the form $lb \le g(x) \le ub$ and, if `has_objective == true`

, a nonlinear objective function $f(x)$. `constraint_bounds`

holds the pairs of $lb$ and $ub$ elements. It is an error to set both a nonlinear objective function and another objective function using an `ObjectiveFunction`

attribute. The `evaluator`

is a callback object that is used to query function values, derivatives, and expression graphs. If `has_objective == false`

, then it is an error to query properties of the objective function, and in Hessian-of-the-Lagrangian queries, `σ`

must be set to zero. Throughout the evaluator, all variables are ordered according to `ListOfVariableIndices()`.

`MathOptInterface.NLPBlockDual`

— Type.```
NLPBlockDual(N)
NLPBlockDual()
```

The Lagrange multipliers on the constraints from the `NLPBlock`

in result `N`

. If `N`

is omitted, it is 1 by default.

`NLPBlockDualStart()`

An initial assignment of the Lagrange multipliers on the constraints from the `NLPBlock`

that the solver may use to warm-start the solve.

### NLP evaluator methods

`AbstractNLPEvaluator`

Abstract supertype for the callback object used in `NLPBlock`

.

`MathOptInterface.initialize`

— Function.`initialize(d::AbstractNLPEvaluator, requested_features::Vector{Symbol})`

Must be called before any other methods. The vector `requested_features`

lists features requested by the solver. These may include `:Grad`

for gradients of $f$, `:Jac`

for explicit Jacobians of $g$, `:JacVec`

for Jacobian-vector products, `:HessVec`

for Hessian-vector and Hessian-of-Lagrangian-vector products, `:Hess`

for explicit Hessians and Hessian-of-Lagrangians, and `:ExprGraph`

for expression graphs.

`MathOptInterface.features_available`

— Function.`features_available(d::AbstractNLPEvaluator)`

Returns the subset of features available for this problem instance, as a list of symbols in the same format as in `initialize`

.

`MathOptInterface.eval_objective`

— Function.`eval_objective(d::AbstractNLPEvaluator, x)`

Evaluate the objective $f(x)$, returning a scalar value.

`MathOptInterface.eval_constraint`

— Function.`eval_constraint(d::AbstractNLPEvaluator, g, x)`

Evaluate the constraint function $g(x)$, storing the result in the vector `g`

which must be of the appropriate size.

`MathOptInterface.eval_objective_gradient`

— Function.`eval_objective_gradient(d::AbstractNLPEvaluator, g, x)`

Evaluate $\nabla f(x)$ as a dense vector, storing the result in the vector `g`

which must be of the appropriate size.

`MathOptInterface.jacobian_structure`

— Function.`jacobian_structure(d::AbstractNLPEvaluator)::Vector{Tuple{Int64,Int64}}`

Returns the sparsity structure of the Jacobian matrix $J_g(x) = \left[ \begin{array}{c} \nabla g_1(x) \\ \nabla g_2(x) \\ \vdots \\ \nabla g_m(x) \end{array}\right]$ where $g_i$ is the $i\text{th}$ component of $g$. The sparsity structure is assumed to be independent of the point $x$. Returns a vector of tuples, `(row, column)`

, where each indicates the position of a structurally nonzero element. These indices are not required to be sorted and can contain duplicates, in which case the solver should combine the corresponding elements by adding them together.
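As a sketch of how a solver consumes this structure (the helper name is hypothetical, not part of MOI), duplicate `(row, column)` entries are combined by addition when assembling the matrix:

```julia
# Assemble a dense m-by-n Jacobian from the (row, column) structure and the
# matching vector of values, summing duplicate entries as required.
function assemble_jacobian(structure, values, m, n)
    J = zeros(m, n)
    for ((row, col), v) in zip(structure, values)
        J[row, col] += v  # duplicates are combined by addition
    end
    return J
end

# Entry (1, 1) appears twice, so its values 1.0 and 2.0 are summed.
assemble_jacobian([(1, 1), (1, 1), (2, 2)], [1.0, 2.0, 5.0], 2, 2)
```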

`MathOptInterface.hessian_lagrangian_structure`

— Function.`hessian_lagrangian_structure(d::AbstractNLPEvaluator)::Vector{Tuple{Int64,Int64}}`

Returns the sparsity structure of the Hessian-of-the-Lagrangian matrix $\nabla^2 f + \sum_{i=1}^m \nabla^2 g_i$ as a vector of tuples, where each indicates the position of a structurally nonzero element. These indices are not required to be sorted and can contain duplicates, in which case the solver should combine the corresponding elements by adding them together. Any mix of lower and upper-triangular indices is valid. Elements `(i,j)`

and `(j,i)`

, if both present, should be treated as duplicates.

`MathOptInterface.eval_constraint_jacobian`

— Function.`eval_constraint_jacobian(d::AbstractNLPEvaluator, J, x)`

Evaluates the sparse Jacobian matrix $J_g(x) = \left[ \begin{array}{c} \nabla g_1(x) \\ \nabla g_2(x) \\ \vdots \\ \nabla g_m(x) \end{array}\right]$. The result is stored in the vector `J`

in the same order as the indices returned by `jacobian_structure`

.

`eval_constraint_jacobian_product(d::AbstractNLPEvaluator, y, x, w)`

Computes the Jacobian-vector product $J_g(x)w$, storing the result in the vector `y`

.

`eval_constraint_jacobian_transpose_product(d::AbstractNLPEvaluator, y, x, w)`

Computes the Jacobian-transpose-vector product $J_g(x)^Tw$, storing the result in the vector `y`

.

`MathOptInterface.eval_hessian_lagrangian`

— Function.`eval_hessian_lagrangian(d::AbstractNLPEvaluator, H, x, σ, μ)`

Given scalar weight `σ`

and vector of constraint weights `μ`

, computes the sparse Hessian-of-the-Lagrangian matrix $\sigma\nabla^2 f(x) + \sum_{i=1}^m \mu_i \nabla^2 g_i(x)$, storing the result in the vector `H`

in the same order as the indices returned by `hessian_lagrangian_structure`

.

`MathOptInterface.eval_hessian_lagrangian_product`

— Function.`eval_hessian_lagrangian_prod(d::AbstractNLPEvaluator, h, x, v, σ, μ)`

Given scalar weight `σ`

and vector of constraint weights `μ`

, computes the Hessian-of-the-Lagrangian-vector product $\left(\sigma\nabla^2 f(x) + \sum_{i=1}^m \mu_i \nabla^2 g_i(x)\right)v$, storing the result in the vector `h`

.

`MathOptInterface.objective_expr`

— Function.`objective_expr(d::AbstractNLPEvaluator)`

Returns an expression graph for the objective function as a standard Julia `Expr`

object. All sums and products are flattened out as simple `Expr(:+,...)`

and `Expr(:*,...)`

objects. The symbol `x`

is used as a placeholder for the vector of decision variables. No other undefined symbols are permitted; coefficients are embedded as explicit values. For example, the expression $x_1+\sin(x_2/\exp(x_3))$ would be represented as the Julia object `:(x[1] + sin(x[2]/exp(x[3])))`

. See the Julia manual for more information on the structure of `Expr`

objects. There are currently no restrictions on recognized functions; typically these will be built-in Julia functions like `^`

, `exp`

, `log`

, `cos`

, `tan`

, `sqrt`

, etc., but modeling interfaces may choose to extend these basic functions.
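The flattened `Expr` form can be sketched in plain Julia; `eval_expr` below is a hypothetical illustrative walker (real solvers traverse the graph directly rather than evaluating it this way):

```julia
# The expression x[1] + sin(x[2] / exp(x[3])) as an Expr graph, with :x as
# the placeholder for the vector of decision variables.
ex = :(x[1] + sin(x[2] / exp(x[3])))

# Recursively evaluate the graph at a concrete point xval.
function eval_expr(ex, xval)
    ex isa Expr || return ex                   # leaves are numeric literals
    if ex.head == :ref && ex.args[1] == :x
        return xval[ex.args[2]]                # x[i] => xval[i]
    elseif ex.head == :call
        f = getfield(Base, ex.args[1])         # look up +, sin, exp, /, ...
        return f((eval_expr(a, xval) for a in ex.args[2:end])...)
    end
    error("unexpected node: $ex")
end

eval_expr(ex, [1.0, 0.0, 2.0])  # 1 + sin(0 / exp(2)) = 1.0
```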

`MathOptInterface.constraint_expr`

— Function.`constraint_expr(d::AbstractNLPEvaluator, i)`

Returns an expression graph for the $i\text{th}$ constraint in the same format as described above, with an additional comparison operator indicating the sense of and bounds on the constraint. The right-hand side of the comparison must be a constant; that is, `:(x[1]^3 <= 1)`

is allowed, while `:(1 <= x[1]^3)`

is not valid. Double-sided constraints are allowed, in which case both the lower bound and upper bounds should be constants; for example, `:(-1 <= cos(x[1]) + sin(x[2]) <= 1)`

is valid.
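Tying the evaluator methods together, here is a minimal sketch (assuming `MathOptInterface` is installed; `ToyEvaluator` is a hypothetical type) for the problem $\min x_1^2 + x_2^2$ subject to $1 \le x_1 + x_2$:

```julia
using MathOptInterface
const MOI = MathOptInterface

# Minimal evaluator for: min x1^2 + x2^2  s.t.  1 <= x1 + x2  (illustrative only).
struct ToyEvaluator <: MOI.AbstractNLPEvaluator end

MOI.initialize(::ToyEvaluator, ::Vector{Symbol}) = nothing
MOI.features_available(::ToyEvaluator) = [:Grad, :Jac]
MOI.eval_objective(::ToyEvaluator, x) = x[1]^2 + x[2]^2
function MOI.eval_constraint(::ToyEvaluator, g, x)
    g[1] = x[1] + x[2]        # the single constraint function g_1(x)
    return
end
function MOI.eval_objective_gradient(::ToyEvaluator, grad, x)
    grad[1] = 2x[1]
    grad[2] = 2x[2]
    return
end
MOI.jacobian_structure(::ToyEvaluator) = [(1, 1), (1, 2)]
function MOI.eval_constraint_jacobian(::ToyEvaluator, J, x)
    J[1] = 1.0                # d g_1 / d x1
    J[2] = 1.0                # d g_1 / d x2
    return
end

block = MOI.NLPBlockData(
    [MOI.NLPBoundsPair(1.0, Inf)],  # bounds 1 <= g_1(x) <= Inf
    ToyEvaluator(),
    true,                           # the evaluator also provides the objective
)
```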

## Errors

When an MOI call fails on a model, precise errors should be thrown when possible instead of simply calling `error`

with a message. The docstrings for the respective methods describe the errors that the implementation should throw in certain situations. This error-reporting system allows code to distinguish between internal errors (that should be shown to the user) and unsupported operations which may have automatic workarounds.

When an invalid index is used in an MOI call, an `InvalidIndex`

should be thrown:

`MathOptInterface.InvalidIndex`

— Type.```
struct InvalidIndex{IndexType<:Index} <: Exception
index::IndexType
end
```

An error indicating that the index `index`

is invalid.

The rest of the errors defined in MOI fall in two categories represented by the following two abstract types:

`UnsupportedError <: Exception`

Abstract type for error thrown when an element is not supported by the model.

`MathOptInterface.NotAllowedError`

— Type.`NotAllowedError <: Exception`

Abstract type for error thrown when an operation is supported but cannot be applied in the current state of the model.

The different `UnsupportedError`

and `NotAllowedError`

are the following errors:

```
struct UnsupportedAttribute{AttrType} <: UnsupportedError
attr::AttrType
message::String
end
```

An error indicating that the attribute `attr`

is not supported by the model, i.e. that `supports`

returns `false`

.

```
struct SetAttributeNotAllowed{AttrType} <: NotAllowedError
attr::AttrType
message::String # Human-friendly explanation why the attribute cannot be set
end
```

An error indicating that the attribute `attr`

is supported (see `supports`

) but cannot be set for some reason (see the error string).

```
struct AddVariableNotAllowed <: NotAllowedError
message::String # Human-friendly explanation why variables cannot be added
end
```

An error indicating that variables cannot be added to the model.

```
struct UnsupportedConstraint{F<:AbstractFunction, S<:AbstractSet} <: UnsupportedError
message::String # Human-friendly explanation why the constraint is not supported
end
```

An error indicating that constraints of type `F`

-in-`S`

are not supported by the model, i.e. that `supports_constraint`

returns `false`

.

```
struct AddConstraintNotAllowed{F<:AbstractFunction, S<:AbstractSet} <: NotAllowedError
message::String # Human-friendly explanation why the constraint cannot be added
end
```

An error indicating that constraints of type `F`

-in-`S`

are supported (see `supports_constraint`

) but cannot be added.

```
struct ModifyConstraintNotAllowed{F<:AbstractFunction, S<:AbstractSet,
C<:AbstractFunctionModification} <: NotAllowedError
constraint_index::ConstraintIndex{F, S}
change::C
message::String
end
```

An error indicating that the constraint modification `change`

cannot be applied to the constraint with index `constraint_index`

.

```
struct ModifyObjectiveNotAllowed{C<:AbstractFunctionModification} <: NotAllowedError
change::C
message::String
end
```

An error indicating that the objective modification `change`

cannot be applied to the objective.

```
struct DeleteNotAllowed{IndexType <: Index} <: NotAllowedError
index::IndexType
message::String
end
```

An error indicating that the index `index`

cannot be deleted.

## Bridges

Bridges can be used for automatic reformulation of a certain constraint type into equivalent constraints.

`AbstractBridge`

A bridge represents a bridged constraint in an `AbstractBridgeOptimizer`

. It contains the indices of the constraints that it has created in the model. These can be obtained using `MOI.NumberOfConstraints`

and `MOI.ListOfConstraintIndices`

and using the bridge in place of a `ModelLike`

. Attributes of the bridged model such as `MOI.ConstraintDual`

and `MOI.ConstraintPrimal`

, can be obtained using the bridge in place of the constraint index. These calls are used by the `AbstractBridgeOptimizer`

to communicate with the bridge so they should be implemented by the bridge.

`AbstractBridgeOptimizer`

A bridge optimizer applies given constraint bridges to a given optimizer, thus extending the types of supported constraints. The attributes of the inner optimizer are automatically transformed to make the bridges transparent, e.g., the variables and constraints created by the bridges are hidden.

By convention, the inner optimizer should be stored in a `model`

field and the dictionary mapping constraint indices to bridges should be stored in a `bridges`

field. If a bridge optimizer deviates from these conventions, it should implement the functions `MOI.optimize!`

and `bridge`

respectively.

`SingleBridgeOptimizer{BT<:AbstractBridge, MT<:MOI.ModelLike, OT<:MOI.ModelLike} <: AbstractBridgeOptimizer`

The `SingleBridgeOptimizer`

bridges any constraint supported by the bridge `BT`

. This is in contrast with the `LazyBridgeOptimizer`

which only bridges the constraints that are unsupported by the internal model, even if they are supported by one of its bridges.

`LazyBridgeOptimizer{OT<:MOI.ModelLike, MT<:MOI.ModelLike} <: AbstractBridgeOptimizer`

The `LazyBridgeOptimizer`

combines several bridges, which are added using the `add_bridge`

function. Whenever a constraint is added, it only attempts to bridge it if it is not supported by the internal model (hence its name `Lazy`

). When bridging a constraint, it selects the minimal number of bridges needed. For instance, suppose a constraint `F`-in-`S` can be bridged either into a constraint `F1`-in-`S1` (supported by the internal model) using bridge 1, or into a constraint `F2`-in-`S2` (unsupported by the internal model) using bridge 2, which can in turn be bridged into a constraint `F3`-in-`S3` (supported by the internal model) using bridge 3. The optimizer will choose bridge 1, since it bridges `F`-in-`S` using one bridge instead of the two needed with bridges 2 and 3.

`MathOptInterface.Bridges.add_bridge`

— Function.`add_bridge(b::LazyBridgeOptimizer, BT::Type{<:AbstractBridge})`

Enable the use of the bridges of type `BT`

by `b`

.

Below is the list of bridges implemented in this package.

`SplitIntervalBridge{T}`

The `SplitIntervalBridge`

splits a constraint $l ≤ ⟨a, x⟩ + α ≤ u$ into the constraints $⟨a, x⟩ + α ≥ l$ and $⟨a, x⟩ + α ≤ u$.

`RSOCBridge{T}`

The `RotatedSecondOrderCone`

is `SecondOrderCone`

representable; see [1, p. 104]. Indeed, we have $2tu = (t/√2 + u/√2)^2 - (t/√2 - u/√2)^2$ hence

is equivalent to

We can therefore use the transformation $(t, u, x) \mapsto (t/√2+u/√2, t/√2-u/√2, x)$. Note that the linear transformation is a symmetric involution (i.e., it is its own transpose and its own inverse). In particular, this means that the norms of the constraint primal and dual are preserved by the transformation.

[1] Ben-Tal, Aharon, and Arkadi Nemirovski. *Lectures on modern convex optimization: analysis, algorithms, and engineering applications*. Society for Industrial and Applied Mathematics, 2001.

`GeoMeanBridge{T}`

The `GeometricMeanCone`

is `SecondOrderCone`

representable; see [1, p. 105]. The reformulation is best described in an example. Consider the cone of dimension 4

This can be rewritten as $\exists x_{21} \ge 0$ such that

Note that we need to create $x_{21}$ and not use $t^4$ directly as $t$ is allowed to be negative. Now, this is equivalent to

[1] Ben-Tal, Aharon, and Arkadi Nemirovski. *Lectures on modern convex optimization: analysis, algorithms, and engineering applications*. Society for Industrial and Applied Mathematics, 2001.

`SquarePSDBridge{T}`

The `SquarePSDBridge`

reformulates the constraint of a square matrix to be PSD and symmetric, i.e. belongs to the `MOI.PositiveSemidefiniteConeSquare`

, into a list of equality constraints between pairs of off-diagonal entries with different expressions, together with a PSD constraint on the upper triangular part of the matrix.

For instance, the constraint for the matrix

to be PSD can be broken down to the constraint of the symmetric matrix

and the equality constraint between the off-diagonal entries (2, 3) and (3, 2), i.e., $2x == 1$. Note that no symmetrization constraint needs to be added between the off-diagonal entries (1, 2) and (2, 1), or between (1, 3) and (3, 1), since the expressions are the same.

`RootDetBridge{T}`

The `RootDetConeTriangle`

is representable by a `PositiveSemidefiniteConeTriangle`

and a `GeometricMeanCone`

constraints; see [1, p. 149]. Indeed, $t \le \det(X)^{1/n}$ if and only if there exists a lower triangular matrix $Δ$ such that

[1] Ben-Tal, Aharon, and Arkadi Nemirovski. *Lectures on modern convex optimization: analysis, algorithms, and engineering applications*. Society for Industrial and Applied Mathematics, 2001.

`LogDetBridge{T}`

The `LogDetConeTriangle`

is representable by a `PositiveSemidefiniteConeTriangle`

and `ExponentialCone`

constraints. Indeed, $\log\det(X) = \log(\delta_1) + \cdots + \log(\delta_n)$ where $\delta_1$, ..., $\delta_n$ are the eigenvalues of $X$. Adapting the method from [1, p. 149], we see that $t \le \log(\det(X))$ if and only if there exists a lower triangular matrix $Δ$ such that

[1] Ben-Tal, Aharon, and Arkadi Nemirovski. *Lectures on modern convex optimization: analysis, algorithms, and engineering applications*. Society for Industrial and Applied Mathematics, 2001.

The `SOCtoPSDBridge`

transforms the second order cone constraint $\lVert x \rVert \le t$ into the semidefinite cone constraints

Indeed by the Schur Complement, it is positive definite iff

which is equivalent to

The `RSOCtoPSDBridge`

transforms the rotated second order cone constraint $\lVert x \rVert^2 \le 2tu$ with $t, u \ge 0$ into the semidefinite cone constraints

Indeed by the Schur Complement, it is positive definite iff

which is equivalent to

For each bridge defined in this package, a corresponding bridge optimizer is available with the same name without the "Bridge" suffix, e.g., `SplitInterval`

is a `SingleBridgeOptimizer`

for the `SplitIntervalBridge`

.