Standard form

Functions

MathOptInterface.VariableIndexType
VariableIndex

A type-safe wrapper for Int64 for use in referencing variables in a model. To allow for deletion, indices need not be consecutive.

source
MathOptInterface.VectorOfVariablesType
VectorOfVariables(variables)

The function that extracts the vector of variables referenced by variables, a Vector{VariableIndex}. This function is naturally used for constraints that apply to groups of variables, such as an "all different" constraint, an indicator constraint, or a complementarity constraint.
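
For example, a sketch of grouping two previously added variables x and y into an SOS1 constraint (model is assumed to be an optimizer; MOI is MathOptInterface):

f = MOI.VectorOfVariables([x, y])
MOI.add_constraint(model, f, MOI.SOS1([1.0, 2.0]))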

source
MathOptInterface.ScalarAffineTermType
struct ScalarAffineTerm{T}
    coefficient::T
    variable::VariableIndex
end

Represents $c x_i$ where $c$ is coefficient and $x_i$ is the variable identified by variable.

source
MathOptInterface.ScalarAffineFunctionType
ScalarAffineFunction{T}(terms, constant)

The scalar-valued affine function $a^T x + b$, where:

  • $a$ is a sparse vector specified by a list of ScalarAffineTerm structs.
  • $b$ is a scalar specified by constant::T

Duplicate variable indices in terms are accepted, and the corresponding coefficients are summed together.
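
For example, a sketch of building the function $2 x + 1$, assuming x is a previously added VariableIndex:

f = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(2.0, x)], 1.0)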

source
MathOptInterface.VectorAffineTermType
struct VectorAffineTerm{T}
    output_index::Int64
    scalar_term::ScalarAffineTerm{T}
end

A ScalarAffineTerm together with the index of the output component of a VectorAffineFunction or VectorQuadraticFunction to which it belongs. output_index can also be interpreted as a row index into a sparse matrix, where the scalar_term contains the column index and coefficient.

source
MathOptInterface.VectorAffineFunctionType
VectorAffineFunction{T}(terms, constants)

The vector-valued affine function $A x + b$, where:

  • $A$ is a sparse matrix specified by a list of VectorAffineTerm objects.
  • $b$ is a vector specified by constants

Duplicate indices in $A$ are accepted, and the corresponding coefficients are summed together.
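
For example, a sketch of building the two-output function $(x + 2y, y + 1)$, assuming x and y are previously added variables:

f = MOI.VectorAffineFunction(
    [
        MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(1.0, x)),
        MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(2.0, y)),
        MOI.VectorAffineTerm(2, MOI.ScalarAffineTerm(1.0, y)),
    ],
    [0.0, 1.0],
)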

source
MathOptInterface.ScalarQuadraticTermType
struct ScalarQuadraticTerm{T}
    coefficient::T
    variable_1::VariableIndex
    variable_2::VariableIndex
end

Represents $c x_i x_j$ where $c$ is coefficient, $x_i$ is the variable identified by variable_1 and $x_j$ is the variable identified by variable_2.

source
MathOptInterface.ScalarQuadraticFunctionType
ScalarQuadraticFunction{T}(quadratic_terms, affine_terms, constant)

The scalar-valued quadratic function $\frac{1}{2}x^TQx + a^T x + b$, where:

  • $a$ is a sparse vector specified by a list of ScalarAffineTerm structs.
  • $b$ is a scalar specified by constant.
  • $Q$ is a symmetric matrix specified by a list of ScalarQuadraticTerm structs.

Duplicate indices in $a$ or $Q$ are accepted, and the corresponding coefficients are summed together. "Mirrored" indices (q,r) and (r,q) (where r and q are VariableIndexes) are considered duplicates; only one need be specified.

For example, for two scalar variables $y, z$, the quadratic expression $yz + y^2$ is represented by the terms ScalarQuadraticTerm.([1.0, 2.0], [y, y], [z, y]).
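
Extending that example, a sketch of the full function $y z + y^2 + 2y + 3$ (y and z are assumed to be existing variables):

f = MOI.ScalarQuadraticFunction(
    MOI.ScalarQuadraticTerm.([1.0, 2.0], [y, y], [z, y]),
    [MOI.ScalarAffineTerm(2.0, y)],
    3.0,
)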

source
MathOptInterface.VectorQuadraticFunctionType
VectorQuadraticFunction{T}(quadratic_terms, affine_terms, constants)

The vector-valued quadratic function with ith component ("output index") defined as $\frac{1}{2}x^TQ_ix + a_i^T x + b_i$, where:

  • $a_i$ is a sparse vector specified by the VectorAffineTerms with output_index == i.
  • $b_i$ is a scalar specified by constants[i]
  • $Q_i$ is a symmetric matrix specified by the VectorQuadraticTerms with output_index == i.

Duplicate indices in $a_i$ or $Q_i$ are accepted, and the corresponding coefficients are summed together. "Mirrored" indices (q,r) and (r,q) (where r and q are VariableIndexes) are considered duplicates; only one need be specified.

source

Utilities

MathOptInterface.constantMethod
constant(f::Union{ScalarAffineFunction, ScalarQuadraticFunction})

Returns the constant term of the scalar function
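
For example, a sketch assuming x is an existing variable:

f = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(2.0, x)], 1.0)
MOI.constant(f)  # returns 1.0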

source
MathOptInterface.constantMethod
constant(f::Union{VectorAffineFunction, VectorQuadraticFunction})

Returns the vector of constant terms of the vector function

source
MathOptInterface.constantMethod
constant(f::VariableIndex, ::Type{T}) where {T}

The constant term of a VariableIndex function is the zero value of the specified type T.

source
MathOptInterface.constantMethod
constant(f::VectorOfVariables, ::Type{T}) where {T}

The constant term of a VectorOfVariables function is a vector of zero values of the specified type T.

source

Sets

Utilities

MathOptInterface.dual_setFunction
dual_set(s::AbstractSet)

Return the dual set of s, that is, the dual cone of the set. This follows the definition of duality discussed in Duality.

See Dual cone for more information.

If the dual cone is not defined, an error is thrown.

Examples

julia> dual_set(Reals(4))
Zeros(4)

julia> dual_set(SecondOrderCone(5))
SecondOrderCone(5)

julia> dual_set(ExponentialCone())
DualExponentialCone()
source
MathOptInterface.dual_set_typeFunction
dual_set_type(S::Type{<:AbstractSet})

Return the type of the dual set of sets of type S, as returned by dual_set. If the dual cone is not defined, an error is thrown.

Examples

julia> dual_set_type(Reals)
Zeros

julia> dual_set_type(SecondOrderCone)
SecondOrderCone

julia> dual_set_type(ExponentialCone)
DualExponentialCone
source
MathOptInterface.supports_dimension_updateFunction
supports_dimension_update(S::Type{<:MOI.AbstractVectorSet})

Return a Bool indicating whether the elimination of any dimension of n-dimensional sets of type S gives an (n-1)-dimensional set of type S. By default, this function returns false, so it should only be implemented for sets that support dimension updates.

For instance, supports_dimension_update(MOI.Nonnegatives) is true because the elimination of any dimension of the n-dimensional nonnegative orthant gives the (n-1)-dimensional nonnegative orthant. However, supports_dimension_update(MOI.ExponentialCone) is false.

source

Scalar sets

List of recognized scalar sets.

MathOptInterface.EqualToType
EqualTo{T <: Number}(value::T)

The set containing the single point $x \in \mathbb{R}$ where $x$ is given by value.
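
For example, a sketch of fixing a previously added variable x to the value 3 (model is assumed to be an optimizer):

MOI.add_constraint(model, x, MOI.EqualTo(3.0))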

source
MathOptInterface.IntervalType
Interval{T <: Real}(lower::T,upper::T)

The interval $[lower, upper] \subseteq \mathbb{R}$. If lower or upper is -Inf or Inf, respectively, the set is interpreted as a one-sided interval.

Interval(s::GreaterThan{<:AbstractFloat})

Construct a (right-unbounded) Interval equivalent to the given GreaterThan set.

Interval(s::LessThan{<:AbstractFloat})

Construct a (left-unbounded) Interval equivalent to the given LessThan set.

Interval(s::EqualTo{<:Real})

Construct a (degenerate) Interval equivalent to the given EqualTo set.
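
For example, a sketch of these conversions:

MOI.Interval(MOI.GreaterThan(1.0))  # Interval(1.0, Inf)
MOI.Interval(MOI.LessThan(2.0))     # Interval(-Inf, 2.0)
MOI.Interval(MOI.EqualTo(3.0))      # Interval(3.0, 3.0)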

source

Vector sets

List of recognized vector sets.

MathOptInterface.NormInfinityConeType
NormInfinityCone(dimension)

The $\ell_\infty$-norm cone $\{ (t,x) \in \mathbb{R}^{dimension} : t \ge \lVert x \rVert_\infty = \max_i \lvert x_i \rvert \}$ of dimension dimension.

source
MathOptInterface.NormOneConeType
NormOneCone(dimension)

The $\ell_1$-norm cone $\{ (t,x) \in \mathbb{R}^{dimension} : t \ge \lVert x \rVert_1 = \sum_i \lvert x_i \rvert \}$ of dimension dimension.

source
MathOptInterface.SecondOrderConeType
SecondOrderCone(dimension)

The second-order cone (or Lorentz cone, or $\ell_2$-norm cone) $\{ (t,x) \in \mathbb{R}^{dimension} : t \ge \lVert x \rVert_2 \}$ of dimension dimension.
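
For example, a sketch of constraining $t \ge \lVert (x_1, x_2) \rVert_2$, assuming t, x1, and x2 are previously added variables and model is an optimizer:

f = MOI.VectorOfVariables([t, x1, x2])
MOI.add_constraint(model, f, MOI.SecondOrderCone(3))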

source
MathOptInterface.GeometricMeanConeType
GeometricMeanCone(dimension)

The geometric mean cone $\{ (t,x) \in \mathbb{R}^{n+1} : x \ge 0, t \le \sqrt[n]{x_1 x_2 \cdots x_n} \}$, where dimension = n + 1 >= 2.

Duality note

The dual of the geometric mean cone is $\{ (u, v) \in \mathbb{R}^{n+1} : u \le 0, v \ge 0, -u \le n \sqrt[n]{\prod_i v_i} \}$, where dimension = n + 1 >= 2.

source
MathOptInterface.PowerConeType
PowerCone{T <: Real}(exponent::T)

The 3-dimensional power cone $\{ (x,y,z) \in \mathbb{R}^3 : x^{exponent} y^{1-exponent} \ge |z|, x \ge 0, y \ge 0 \}$ with parameter exponent.

source
MathOptInterface.DualPowerConeType
DualPowerCone{T <: Real}(exponent::T)

The 3-dimensional power cone $\{ (u,v,w) \in \mathbb{R}^3 : (\frac{u}{exponent})^{exponent} (\frac{v}{1-exponent})^{1-exponent} \ge |w|, u \ge 0, v \ge 0 \}$ with parameter exponent.

source
MathOptInterface.RelativeEntropyConeType
RelativeEntropyCone(dimension)

The relative entropy cone $\{ (u, v, w) \in \mathbb{R}^{1+2n} : u \ge \sum_{i=1}^n w_i \log(\frac{w_i}{v_i}), v_i \ge 0, w_i \ge 0 \}$, where dimension = 2n + 1 >= 1.

Duality note

The dual of the relative entropy cone is $\{ (u, v, w) \in \mathbb{R}^{1+2n} : \forall i, w_i \ge u (\log (\frac{u}{v_i}) - 1), v_i \ge 0, u > 0 \}$ of dimension dimension${}=2n+1$.

source
MathOptInterface.NormSpectralConeType
NormSpectralCone(row_dim, column_dim)

The epigraph of the matrix spectral norm (maximum singular value function) $\{ (t, X) \in \mathbb{R}^{1 + row_dim \times column_dim} : t \ge \sigma_1(X) \}$, where $\sigma_i$ is the $i$th singular value of the matrix $X$ of row dimension row_dim and column dimension column_dim.

The matrix X is vectorized by stacking the columns, matching the behavior of Julia's vec function.

source
MathOptInterface.NormNuclearConeType
NormNuclearCone(row_dim, column_dim)

The epigraph of the matrix nuclear norm (sum of singular values function) $\{ (t, X) \in \mathbb{R}^{1 + row_dim \times column_dim} : t \ge \sum_i \sigma_i(X) \}$, where $\sigma_i$ is the $i$th singular value of the matrix $X$ of row dimension row_dim and column dimension column_dim.

The matrix X is vectorized by stacking the columns, matching the behavior of Julia's vec function.

source
MathOptInterface.SOS1Type
SOS1{T <: Real}(weights::Vector{T})

The set corresponding to the special ordered set (SOS) constraint of type 1. Of the variables in the set, at most one can be nonzero. The weights induce an ordering of the variables; as such, they should be unique values. The kth element in the set corresponds to the kth weight in weights. See here for a description of SOS constraints and their potential uses.

source
MathOptInterface.SOS2Type
SOS2{T <: Real}(weights::Vector{T})

The set corresponding to the special ordered set (SOS) constraint of type 2. Of the variables in the set, at most two can be nonzero, and if two are nonzero, they must be adjacent in the ordering of the set. The weights induce an ordering of the variables; as such, they should be unique values. The kth element in the set corresponds to the kth weight in weights. See here for a description of SOS constraints and their potential uses.

source
MathOptInterface.IndicatorType
Indicator{A<:ActivationCondition,S<:AbstractScalarSet}(set::S)

The set corresponding to an indicator constraint.

When A is ACTIVATE_ON_ZERO, this means: $\{(y, x) \in \{0, 1\} \times \mathbb{R}^n : y = 0 \implies x \in set\}$

When A is ACTIVATE_ON_ONE, this means: $\{(y, x) \in \{0, 1\} \times \mathbb{R}^n : y = 1 \implies x \in set\}$

Notes

Most solvers expect that the first row of the function is interpretable as a variable index x_i (e.g., 1.0 * x + 0.0). An error will be thrown if this is not the case.

Example

The constraint $\{(y, x) \in \{0, 1\} \times \mathbb{R}^2 : y = 1 \implies x_1 + x_2 \leq 9 \}$ is defined as

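# `model` is assumed to be an optimizer; `y`, `x1`, and `x2` are previously added variables.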
f = MOI.VectorAffineFunction(
    [
        MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(1.0, y)),
        MOI.VectorAffineTerm(2, MOI.ScalarAffineTerm(1.0, x1)),
        MOI.VectorAffineTerm(2, MOI.ScalarAffineTerm(1.0, x2)),
    ],
    [0.0, 0.0],
)
s = MOI.Indicator{MOI.ACTIVATE_ON_ONE}(MOI.LessThan(9.0))
MOI.add_constraint(model, f, s)
source
MathOptInterface.ComplementsType
Complements(dimension::Base.Integer)

The set corresponding to a mixed complementarity constraint.

Complementarity constraints should be specified with an AbstractVectorFunction-in-Complements(dimension) constraint.

The dimension of the vector-valued function F must be dimension. This defines a complementarity constraint between the scalar function F[i] and the variable in F[i + dimension/2]. Thus, F[i + dimension/2] must be interpretable as a single variable x_i (e.g., 1.0 * x + 0.0), and dimension must be even.

The mixed complementarity problem consists of finding x_i in the interval [lb, ub] (i.e., in the set Interval(lb, ub)), such that the following holds:

  1. F_i(x) == 0 if lb_i < x_i < ub_i
  2. F_i(x) >= 0 if lb_i == x_i
  3. F_i(x) <= 0 if x_i == ub_i

Classically, the bounding set for x_i is Interval(0, Inf), which recovers: 0 <= F_i(x) ⟂ x_i >= 0, where the ⟂ operator implies F_i(x) * x_i = 0.

Examples

The problem:

x -in- Interval(-1, 1)
[-4 * x - 3, x] -in- Complements(2)

defines the mixed complementarity problem where the following holds:

  1. -4 * x - 3 == 0 if -1 < x < 1
  2. -4 * x - 3 >= 0 if x == -1
  3. -4 * x - 3 <= 0 if x == 1

There are three solutions:

  1. x = -3/4 with F(x) = 0
  2. x = -1 with F(x) = 1
  3. x = 1 with F(x) = -7

The function F can also be defined in terms of single variables. For example, the problem:

[x_3, x_4] -in- Nonnegatives(2)
[x_1, x_2, x_3, x_4] -in- Complements(4)

defines the complementarity problem where 0 <= x_1 ⟂ x_3 >= 0 and 0 <= x_2 ⟂ x_4 >= 0.
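
A sketch of the first example in MOI code (model is assumed to be an optimizer that supports these constraint types):

x = MOI.add_variable(model)
MOI.add_constraint(model, x, MOI.Interval(-1.0, 1.0))
f = MOI.VectorAffineFunction(
    [
        MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(-4.0, x)),
        MOI.VectorAffineTerm(2, MOI.ScalarAffineTerm(1.0, x)),
    ],
    [-3.0, 0.0],
)
MOI.add_constraint(model, f, MOI.Complements(2))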

source

Matrix sets

Matrix sets are vectorized in order to be subtypes of AbstractVectorSet. For sets of symmetric matrices, storing both the (i, j) and (j, i) elements is redundant, so there exists the AbstractSymmetricMatrixSetTriangle set to represent only the vectorization of the upper triangular part of the matrix. When the matrix of expressions constrained to be in the set is not symmetric, and hence the (i, j) and (j, i) elements should be constrained to be equal, the AbstractSymmetricMatrixSetSquare set can be used. The Bridges.Constraint.SquareBridge can transform a set from the square form to the triangular_form by adding appropriate constraints if the (i, j) and (j, i) expressions are different.

MathOptInterface.AbstractSymmetricMatrixSetTriangleType
abstract type AbstractSymmetricMatrixSetTriangle <: AbstractVectorSet end

Abstract supertype for subsets of the (vectorized) cone of symmetric matrices, with side_dimension rows and columns. The entries of the upper-right triangular part of the matrix are given column by column (or equivalently, the entries of the lower-left triangular part are given row by row). A vectorized cone of dimension $n$ corresponds to a square matrix with side dimension $\sqrt{1/4 + 2 n} - 1/2$. (Because a $d \times d$ matrix has $d(d + 1) / 2$ elements in the upper or lower triangle.)

Examples

The matrix

\[\begin{bmatrix} 1 & 2 & 4\\ 2 & 3 & 5\\ 4 & 5 & 6 \end{bmatrix}\]

has side_dimension 3 and vectorization $(1, 2, 3, 4, 5, 6)$.

Note

Two packed storage formats exist for symmetric matrices, the respective orders of the entries are:

  • upper triangular column by column (or lower triangular row by row);
  • lower triangular column by column (or upper triangular row by row).

The advantage of the first format is the mapping between the (i, j) matrix indices and the k index of the vectorized form. It is simpler and does not depend on the side dimension of the matrix. Indeed,

  • the entry of matrix indices (i, j) has vectorized index k = div((j - 1) * j, 2) + i if $i \leq j$ and k = div((i - 1) * i, 2) + j if $j \leq i$;
  • and the entry with vectorized index k has matrix indices i = div(1 + isqrt(8k - 7), 2) and j = k - div((i - 1) * i, 2) or j = div(1 + isqrt(8k - 7), 2) and i = k - div((j - 1) * j, 2).
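
As an illustration only (the helpers triangle_index and triangle_coordinates below are not part of MOI), the mapping for the first format can be written in Julia as:

# Vectorized index k of matrix entry (i, j), upper triangle stored column by column.
triangle_index(i, j) = i <= j ? div((j - 1) * j, 2) + i : div((i - 1) * i, 2) + j

# Matrix entry (i, j), with i <= j, corresponding to vectorized index k.
function triangle_coordinates(k)
    j = div(1 + isqrt(8 * k - 7), 2)
    i = k - div((j - 1) * j, 2)
    return i, j
end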

Duality note

The scalar product for the symmetric matrix in its vectorized form is the sum of the pairwise product of the diagonal entries plus twice the sum of the pairwise product of the upper diagonal entries; see [p. 634, 1]. This has important consequences for duality.

Consider for example the following problem (PositiveSemidefiniteConeTriangle is a subtype of AbstractSymmetricMatrixSetTriangle)

\[\begin{align*} & \max_{x \in \mathbb{R}} & x \\ & \;\;\text{s.t.} & (1, -x, 1) & \in \text{PositiveSemidefiniteConeTriangle}(2). \end{align*}\]

The dual is the following problem

\[\begin{align*} & \min_{y \in \mathbb{R}^3} & y_1 + y_3 \\ & \;\;\text{s.t.} & 2y_2 & = 1\\ & & y & \in \text{PositiveSemidefiniteConeTriangle}(2). \end{align*}\]

Why do we use $2y_2$ in the dual constraint instead of $y_2$? The reason is that $2y_2$ is the scalar product between $y$ and the symmetric matrix whose vectorized form is $(0, 1, 0)$. Indeed, with our modified scalar product we have

\[\langle (0, 1, 0), (y_1, y_2, y_3) \rangle = \mathrm{trace} \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \begin{pmatrix} y_1 & y_2\\ y_2 & y_3 \end{pmatrix} = 2y_2.\]

References

[1] Boyd, S. and Vandenberghe, L.. Convex optimization. Cambridge university press, 2004.

source
MathOptInterface.AbstractSymmetricMatrixSetSquareType
abstract type AbstractSymmetricMatrixSetSquare <: AbstractVectorSet end

Abstract supertype for subsets of the (vectorized) cone of symmetric matrices, with side_dimension rows and columns. The entries of the matrix are given column by column (or equivalently, row by row). The matrix is both constrained to be symmetric and to have its triangular_form belong to the corresponding set. That is, if the functions in entries $(i, j)$ and $(j, i)$ are different, then a constraint will be added to make sure that the entries are equal.

Examples

PositiveSemidefiniteConeSquare is a subtype of AbstractSymmetricMatrixSetSquare and constraining the matrix

\[\begin{bmatrix} 1 & -y\\ -z & 0\\ \end{bmatrix}\]

to be symmetric positive semidefinite can be achieved by constraining the vector $(1, -z, -y, 0)$ (or $(1, -y, -z, 0)$) to belong to the PositiveSemidefiniteConeSquare(2). It both constrains $y = z$ and $(1, -y, 0)$ (or $(1, -z, 0)$) to be in PositiveSemidefiniteConeTriangle(2), since triangular_form(PositiveSemidefiniteConeSquare) is PositiveSemidefiniteConeTriangle.

source
MathOptInterface.side_dimensionFunction
side_dimension(set::Union{AbstractSymmetricMatrixSetTriangle,
                          AbstractSymmetricMatrixSetSquare})

Side dimension of the matrices in set. By convention, it should be stored in the side_dimension field, but if that is not the case for a subtype of AbstractSymmetricMatrixSetTriangle, the method should be implemented for that subtype.
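
For example, a sketch using a concrete set:

MOI.side_dimension(MOI.PositiveSemidefiniteConeTriangle(3))  # returns 3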

source

List of recognized matrix sets.

MathOptInterface.PositiveSemidefiniteConeSquareType
PositiveSemidefiniteConeSquare(side_dimension) <: AbstractSymmetricMatrixSetSquare

The cone of symmetric positive semidefinite matrices, with side length side_dimension.

See AbstractSymmetricMatrixSetSquare for more details on the vectorized form.

The entries of the matrix are given column by column (or equivalently, row by row).

The matrix is both constrained to be symmetric and to be positive semidefinite. That is, if the functions in entries $(i, j)$ and $(j, i)$ are different, then a constraint will be added to make sure that the entries are equal.

Examples

Constraining the matrix

\[\begin{bmatrix} 1 & -y\\ -z & 0\\ \end{bmatrix}\]

to be symmetric positive semidefinite can be achieved by constraining the vector $(1, -z, -y, 0)$ (or $(1, -y, -z, 0)$) to belong to the PositiveSemidefiniteConeSquare(2).

It both constrains $y = z$ and $(1, -y, 0)$ (or $(1, -z, 0)$) to be in PositiveSemidefiniteConeTriangle(2).
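
In MOI code, this might look like the following sketch (y and z are assumed to be existing variables and model an optimizer):

f = MOI.VectorAffineFunction(
    [
        MOI.VectorAffineTerm(2, MOI.ScalarAffineTerm(-1.0, z)),
        MOI.VectorAffineTerm(3, MOI.ScalarAffineTerm(-1.0, y)),
    ],
    [1.0, 0.0, 0.0, 0.0],
)
MOI.add_constraint(model, f, MOI.PositiveSemidefiniteConeSquare(2))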

source
MathOptInterface.LogDetConeTriangleType
LogDetConeTriangle(side_dimension)

The log-determinant cone $\{ (t, u, X) \in \mathbb{R}^{2 + d(d+1)/2} : t \le u \log(\det(X/u)), u > 0 \}$, where the matrix X is represented in the same symmetric packed format as in the PositiveSemidefiniteConeTriangle.

The argument side_dimension is the side dimension of the matrix X, i.e., its number of rows or columns.

source
MathOptInterface.LogDetConeSquareType
LogDetConeSquare(side_dimension)

The log-determinant cone $\{ (t, u, X) \in \mathbb{R}^{2 + d^2} : t \le u \log(\det(X/u)), X \text{ symmetric}, u > 0 \}$, where the matrix X is represented in the same format as in the PositiveSemidefiniteConeSquare.

Similarly to PositiveSemidefiniteConeSquare, constraints are added to ensure that X is symmetric.

The argument side_dimension is the side dimension of the matrix X, i.e., its number of rows or columns.

source
MathOptInterface.RootDetConeTriangleType
RootDetConeTriangle(side_dimension)

The root-determinant cone $\{ (t, X) \in \mathbb{R}^{1 + d(d+1)/2} : t \le \det(X)^{1/d} \}$, where the matrix X is represented in the same symmetric packed format as in the PositiveSemidefiniteConeTriangle.

The argument side_dimension is the side dimension of the matrix X, i.e., its number of rows or columns.

source