Reference

DiffOpt.AbstractLazyScalarFunction - Type
abstract type AbstractLazyScalarFunction <: MOI.AbstractScalarFunction end

Subtype of MOI.AbstractScalarFunction that is not a standard MOI scalar function but can be converted to one using standard_form.

The function can also be inspected lazily using JuMP.coefficient or quad_sym_half.
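
For instance, here is a minimal sketch of lazy inspection, assuming the (terms, constant) constructor argument order documented for VectorScalarAffineFunction below:

import MathOptInterface as MOI
import DiffOpt, JuMP

# Lazily represents 3x₁ + 4x₂ + 5 where x[i] = MOI.VariableIndex(i).
func = DiffOpt.VectorScalarAffineFunction([3.0, 4.0], 5.0)

# Inspect a coefficient without materializing a standard function.
JuMP.coefficient(func, MOI.VariableIndex(1))  # 3.0

# Convert to a standard MOI.ScalarAffineFunction{Float64}.
DiffOpt.standard_form(func)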

DiffOpt.ForwardConstraintFunction - Type
ForwardConstraintFunction <: MOI.AbstractConstraintAttribute

A MOI.AbstractConstraintAttribute to set input data to forward differentiation, that is, problem input data.

For instance, if the scalar constraint of index ci contains θ * (x + 2y) <= 5θ, for the purpose of computing the derivative with respect to θ, the following should be set:

MOI.set(model, DiffOpt.ForwardConstraintFunction(), ci, 1.0 * x + 2.0 * y - 5.0)

Note that we use -5 because ForwardConstraintFunction sets the tangent of the ConstraintFunction, so we consider the expression θ * (x + 2y - 5).

DiffOpt.ForwardObjectiveFunction - Type
ForwardObjectiveFunction <: MOI.AbstractModelAttribute

A MOI.AbstractModelAttribute to set input data to forward differentiation, that is, problem input data. The possible values are any MOI.AbstractScalarFunction. A MOI.ScalarQuadraticFunction can only be used in linearly constrained quadratic models.

For instance, if the objective contains θ * (x + 2y), for the purpose of computing the derivative with respect to θ, the following should be set:

MOI.set(model, DiffOpt.ForwardObjectiveFunction(), 1.0 * x + 2.0 * y)

where x and y are the relevant MOI.VariableIndex.

DiffOpt.ForwardVariablePrimal - Type
ForwardVariablePrimal <: MOI.AbstractVariableAttribute

A MOI.AbstractVariableAttribute to get output data from forward differentiation, that is, problem solution.

For instance, to get the tangent of the variable of index vi corresponding to the tangents given to ForwardObjectiveFunction and ForwardConstraintFunction, do the following:

MOI.get(model, DiffOpt.ForwardVariablePrimal(), vi)
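
Putting the forward attributes together, a minimal sketch (assuming the model has already been optimized, and that x, y, ci, and vi are the relevant indices; DiffOpt.forward_differentiate! performs the forward pass):

MOI.set(model, DiffOpt.ForwardObjectiveFunction(), 1.0 * x + 2.0 * y)
MOI.set(model, DiffOpt.ForwardConstraintFunction(), ci, 1.0 * x + 2.0 * y - 5.0)
DiffOpt.forward_differentiate!(model)
dx = MOI.get(model, DiffOpt.ForwardVariablePrimal(), vi)
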
DiffOpt.IndexMappedFunction - Type
IndexMappedFunction{F<:MOI.AbstractFunction} <: AbstractLazyScalarFunction

Lazily represents the function MOI.Utilities.map_indices(index_map, DiffOpt.standard_form(func)).

DiffOpt.MOItoJuMP - Type
MOItoJuMP{F<:MOI.AbstractScalarFunction} <: JuMP.AbstractJuMPScalar

Lazily represents the function JuMP.jump_function(model, DiffOpt.standard_form(func)).

DiffOpt.MatrixScalarQuadraticFunction - Type
struct MatrixScalarQuadraticFunction{T, VT, MT} <: MOI.AbstractScalarFunction
    affine::VectorScalarAffineFunction{T,VT}
    terms::MT
end

Represents the function x' * terms * x / 2 + affine as an MOI.AbstractScalarFunction where x[i] = MOI.VariableIndex(i). Use standard_form to convert it to a MOI.ScalarQuadraticFunction{T}.

DiffOpt.MatrixVectorAffineFunction - Type
MatrixVectorAffineFunction{T, VT} <: MOI.AbstractVectorFunction

Represents the function terms * x + constant as an MOI.AbstractVectorFunction where x[i] = MOI.VariableIndex(i). Use standard_form to convert it to a MOI.VectorAffineFunction{T}.

DiffOpt.ModelConstructor - Type
ModelConstructor <: MOI.AbstractOptimizerAttribute

Determines which subtype of DiffOpt.AbstractModel to use for differentiation. When set to nothing, the first one out of model.model_constructors that supports the problem is used.
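
For instance, to force differentiation through the quadratic programming backend, one would set (a sketch; the attribute is set like any other optimizer attribute):

MOI.set(model, DiffOpt.ModelConstructor(), DiffOpt.QuadraticProgram.Model)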

DiffOpt.ObjectiveDualStart - Type
struct ObjectiveDualStart <: MOI.AbstractModelAttribute end

If the objective function had a dual, it would be -1 for the Lagrangian function to be the same. When the MOI.Bridges.Objective.SlackBridge is used, it creates a constraint, and the dual of this constraint is therefore -1 as well. Setting this attribute sets the dual start of that constraint.

DiffOpt.ObjectiveFunctionAttribute - Type
struct ObjectiveFunctionAttribute{A,F} <: MOI.AbstractModelAttribute
    attr::A
end

Objective function attribute attr for the function type F. The type F is used by a MOI.Bridges.AbstractBridgeOptimizer to keep track of its position in a chain of objective bridges.

DiffOpt.ObjectiveSlackGapPrimalStart - Type
struct ObjectiveSlackGapPrimalStart <: MOI.AbstractModelAttribute end

When the MOI.Bridges.Objective.SlackBridge is used, it creates a constraint relating the objective function to a slack variable. Setting this attribute sets the primal start of that constraint, that is, of the gap between the objective function and the slack.

DiffOpt.ProductOfSets - Type
ProductOfSets{T} <: MOI.Utilities.OrderedProductOfSets{T}

The MOI.Utilities.@product_of_sets macro requires the list of sets to be known at compile time. In DiffOpt, however, the list depends on which sets the user uses, since DiffOpt supports any set that implements the required functions of MathOptSetDistances. With this type, the list of sets can be given at run time.

DiffOpt.ReverseConstraintFunction - Type
ReverseConstraintFunction

An MOI.AbstractConstraintAttribute to get output data from reverse differentiation, that is, problem input data.

For instance, if the following call returns x + 2y + 5, it means that the tangent has coordinate 1 for the coefficient of x, coordinate 2 for the coefficient of y, and 5 for the function constant. If the constraint is of the form func == constant or func <= constant, the tangent for the constant on the right-hand side is therefore -5.

MOI.get(model, DiffOpt.ReverseConstraintFunction(), ci)
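
The returned function can then be inspected lazily, as described for AbstractLazyScalarFunction; a sketch (assuming x is a MOI.VariableIndex appearing in the constraint):

dfunc = MOI.get(model, DiffOpt.ReverseConstraintFunction(), ci)
JuMP.coefficient(dfunc, x)                  # tangent of the coefficient of x
MOI.constant(DiffOpt.standard_form(dfunc))  # tangent of the function constant
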
DiffOpt.ReverseObjectiveFunction - Type
ReverseObjectiveFunction <: MOI.AbstractModelAttribute

A MOI.AbstractModelAttribute to get output data from reverse differentiation, that is, problem input data.

For instance, to get the tangent of the objective function corresponding to the tangent given to ReverseVariablePrimal, do the following:

func = MOI.get(model, DiffOpt.ReverseObjectiveFunction())

Then, to get the sensitivity of the linear term with variable x, do

JuMP.coefficient(func, x)

To get the sensitivity with respect to the quadratic term with variables x and y, do either

JuMP.coefficient(func, x, y)

or

DiffOpt.quad_sym_half(func, x, y)
Warning

These two lines are not equivalent when x == y; see quad_sym_half for details on the difference between these two functions.

DiffOpt.ReverseVariablePrimal - Type
ReverseVariablePrimal <: MOI.AbstractVariableAttribute

A MOI.AbstractVariableAttribute to set input data to reverse differentiation, that is, problem solution.

For instance, to set the tangent of the variable of index vi to value, do the following:

MOI.set(model, DiffOpt.ReverseVariablePrimal(), vi, value)
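
A minimal reverse-mode sketch combining the reverse attributes (assuming an optimized model; DiffOpt.reverse_differentiate! performs the reverse pass):

MOI.set(model, DiffOpt.ReverseVariablePrimal(), vi, 1.0)  # seed dl/dz
DiffOpt.reverse_differentiate!(model)
func = MOI.get(model, DiffOpt.ReverseObjectiveFunction())
dfunc = MOI.get(model, DiffOpt.ReverseConstraintFunction(), ci)
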
DiffOpt.SparseVectorAffineFunction - Type
struct SparseVectorAffineFunction{T} <: MOI.AbstractVectorFunction
    terms::SparseArrays.SparseMatrixCSC{T,Int}
    constants::Vector{T}
end

The vector-valued affine function $A x + b$, where:

  • $A$ is the sparse matrix given by terms
  • $b$ is the vector constants
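
For instance, a sketch using the default struct constructor:

import SparseArrays

# Represents A * x + b with A = [1 2; 0 3] and b = [4, 5].
A = SparseArrays.sparse([1.0 2.0; 0.0 3.0])
b = [4.0, 5.0]
f = DiffOpt.SparseVectorAffineFunction(A, b)
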
DiffOpt.VectorScalarAffineFunction - Type
VectorScalarAffineFunction{T, VT} <: MOI.AbstractScalarFunction

Represents the function x ⋅ terms + constant as an MOI.AbstractScalarFunction where x[i] = MOI.VariableIndex(i). Use standard_form to convert it to a MOI.ScalarAffineFunction{T}.

DiffOpt.Dπ - Method
Dπ(v::Vector{Float64}, model, cones::ProductOfSets)

Given a model and its cones, compute the gradient of the projection of the vector v, whose length equals the number of rows in the conic form, onto the Cartesian product of the cones corresponding to these rows. For more information, refer to https://github.com/matbesancon/MathOptSetDistances.jl

DiffOpt.dU_from_dQ! - Method
dU_from_dQ!(dQ, U)

Return the solution dU of the matrix equation dQ = dU' * U + U' * dU, where dQ and U are the two arguments of the function.

This function overwrites the first argument dQ to store the solution. The matrix U is, however, not modified.

The matrix dQ is assumed to be symmetric, and the matrix U is assumed to be upper triangular.

We can exploit the structure of U here:

  • If the factorization was obtained from an SVD, U would be orthogonal.
  • If the factorization was obtained from a Cholesky factorization, U would be upper triangular.

The MOI bridge uses Cholesky in order to exploit sparsity, so we are in the second case.

We look for an upper triangular dU as well.

We can find each column of dU by solving a triangular linear system once the previous columns have been found. Indeed, let dj be the jth column of dU. Then

dU' * U = vcat(dj'U for j in axes(U, 2))

Therefore,

dQ[j, 1:j] = dj'U[:, 1:j] + U[:, j]'dU[:, 1:j]

So

dQ[j, 1:(j-1)] - U[:, j]' * dU[:, 1:(j-1)] = dj'U[:, 1:(j-1)]

and

dQ[j, j] / 2 = dj'U[:, j]
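
A small numerical check of this solve, as a sketch (it builds dQ from a known upper triangular dU and recovers it; assumes the in-place behavior documented above):

using LinearAlgebra
import DiffOpt

U = [2.0 1.0; 0.0 3.0]     # upper triangular
dU = [0.5 -1.0; 0.0 0.25]  # upper triangular, the expected solution
dQ = dU' * U + U' * dU     # symmetric by construction
DiffOpt.dU_from_dQ!(dQ, U)
@assert dQ ≈ dU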

DiffOpt.diff_optimizer - Method
diff_optimizer(optimizer_constructor)::Optimizer

Creates a DiffOpt.Optimizer, which is an MOI layer with an internal optimizer and other utility methods. Results (primal, dual and slack values) are obtained by querying the internal optimizer instantiated using the optimizer_constructor. These values are required for finding the Jacobians with respect to problem data.

One defines a differentiable model by using any solver of choice. Example:

julia> import DiffOpt, HiGHS

julia> import MathOptInterface as MOI

julia> model = DiffOpt.diff_optimizer(HiGHS.Optimizer)
julia> x = MOI.add_variable(model)
julia> c = MOI.add_constraint(model, ...)
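
A more complete sketch, with an illustrative constraint and objective:

import DiffOpt, HiGHS
import MathOptInterface as MOI

model = DiffOpt.diff_optimizer(HiGHS.Optimizer)
MOI.set(model, MOI.Silent(), true)
x = MOI.add_variable(model)
c = MOI.add_constraint(model, 1.0 * x, MOI.GreaterThan(1.0))
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.set(model, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), 1.0 * x)
MOI.optimize!(model)
# The solution can now be differentiated via the Forward* and Reverse* attributes.
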
DiffOpt.map_rows - Method
map_rows(f::Function, model, cones::ProductOfSets, map_mode::Union{Nested{T}, Flattened{T}})

Given a model, its cones, and a map_mode of type Nested (resp. Flattened), return a Vector{T} of length equal to the number of cones (resp. rows) in the conic form, where the value at the position corresponding to each cone (resp. each row) is f(ci, r), where ci is the corresponding constraint index in model and r is a UnitRange of the corresponding rows in the conic form.

DiffOpt.quad_sym_half - Function
quad_sym_half(func, vi1::MOI.VariableIndex, vi2::MOI.VariableIndex)

Return Q[i,j] = Q[j,i], where the quadratic terms of func are represented by x' Q x / 2 for a symmetric matrix Q, and where x[i] = vi1 and x[j] = vi2. Note that while this is equal to JuMP.coefficient(func, vi1, vi2) if vi1 != vi2, in the case vi1 == vi2 it is instead equal to 2 * JuMP.coefficient(func, vi1, vi2).
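
For instance, a sketch assuming both functions accept the lazy MatrixScalarQuadraticFunction documented above, for f(x) = x₁², that is, Q = [2]:

import MathOptInterface as MOI
import DiffOpt, JuMP

# f(x) = x' * Q * x / 2 with Q = [2.0] and no affine part.
affine = DiffOpt.VectorScalarAffineFunction([0.0], 0.0)
func = DiffOpt.MatrixScalarQuadraticFunction(affine, fill(2.0, 1, 1))

x1 = MOI.VariableIndex(1)
DiffOpt.quad_sym_half(func, x1, x1)  # 2.0, the matrix entry Q[1, 1]
JuMP.coefficient(func, x1, x1)       # 1.0, the coefficient of x₁² in f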

DiffOpt.standard_form - Function
standard_form(func::AbstractLazyScalarFunction)

Converts func to a standard MOI scalar function.

standard_form(func::MOItoJuMP)

Converts func to a standard JuMP scalar function.

DiffOpt.ΔQ_from_ΔU! - Method
ΔQ_from_ΔU!(ΔU, U)

Return the symmetric solution ΔQ of the matrix equation triu(ΔU) = 2triu(U * ΔQ), where ΔU and U are the two arguments of the function.

This function overwrites the first argument ΔU to store the solution. The matrix U is, however, not modified.

The matrix U is assumed to be upper triangular.

We can exploit the structure of U here:

  • If the factorization was obtained from an SVD, U would be orthogonal.
  • If the factorization was obtained from a Cholesky factorization, U would be upper triangular.

The MOI bridge uses Cholesky in order to exploit sparsity, so we are in the second case.

We can find each column of ΔQ by solving a triangular linear system.
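
A small numerical check of this equation, as a sketch (it builds ΔU from a known symmetric ΔQ and recovers it; assumes the in-place behavior documented above):

using LinearAlgebra
import DiffOpt

U = [2.0 1.0; 0.0 3.0]    # upper triangular
ΔQ = [1.0 0.5; 0.5 2.0]   # symmetric, the expected solution
ΔU = 2 .* triu(U * ΔQ)    # satisfies triu(ΔU) = 2triu(U * ΔQ)
DiffOpt.ΔQ_from_ΔU!(ΔU, U)
@assert ΔU ≈ ΔQ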

DiffOpt.π - Method
π(v::Vector{Float64}, model::MOI.ModelLike, cones::ProductOfSets)

Given a model and its cones, compute the projection of the vector v, whose length equals the number of rows in the conic form, onto the Cartesian product of the cones corresponding to these rows. For more information, refer to https://github.com/matbesancon/MathOptSetDistances.jl

DiffOpt.QuadraticProgram.Model - Type
DiffOpt.QuadraticProgram.Model <: DiffOpt.AbstractModel

Model to differentiate quadratic programs.

For reverse differentiation, it differentiates the optimal solution z and returns the product of the Jacobian matrices (dz / dQ, dz / dq, etc.) with the backward pass vector dl / dz.

The method computes the product of:

  1. the Jacobian of the problem solution z* with respect to the problem parameters, and
  2. a backward pass vector dl / dz, set with DiffOpt.ReverseVariablePrimal, where l can be a loss function.

Note that this method does not return the actual Jacobians.

For more information, refer to equations (7) and (8) of https://arxiv.org/pdf/1703.00443.pdf

DiffOpt.ConicProgram.Model - Type
DiffOpt.ConicProgram.Model <: DiffOpt.AbstractModel

Model to differentiate conic programs.

The forward differentiation computes the product of the derivative (Jacobian), evaluated at the conic program parameters A, b, c, with the perturbations dA, db, dc.

The reverse differentiation computes the product of the transpose of the derivative (Jacobian), evaluated at the conic program parameters A, b, c, with the perturbations dx, dy, ds.

For theoretical background, refer to Section 3 of Differentiating Through a Cone Program, https://arxiv.org/abs/1904.09043
