# Reference

## DiffOpt.ForwardConstraintFunction (Type)

```julia
ForwardConstraintFunction <: MOI.AbstractConstraintAttribute
```

An MOI.AbstractConstraintAttribute to set input data for forward differentiation, that is, a tangent of the problem input data.

For instance, if the scalar constraint of index ci contains θ * (x + 2y) <= 5θ, for the purpose of computing the derivative with respect to θ, the following should be set:

```julia
MOI.set(model, DiffOpt.ForwardConstraintFunction(), ci, 1.0 * x + 2.0 * y - 5.0)
```

Note the -5 constant: ForwardConstraintFunction sets the tangent of the MOI.ConstraintFunction, so we consider the expression θ * (x + 2y - 5).
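Putting the pieces together, a minimal end-to-end sketch (the LP below is made up for illustration; HiGHS is assumed as the inner solver, and forward_differentiate! and ForwardVariablePrimal are documented elsewhere on this page):

```julia
# Hypothetical LP: max x + y  s.t.  x + 2y <= 5θ and x <= 2, at θ = 1.
using JuMP
import DiffOpt, HiGHS
import MathOptInterface as MOI

model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)
@variable(model, x <= 2)
@variable(model, y)
@constraint(model, cons, x + 2y <= 5)  # right-hand side is 5θ at θ = 1
@objective(model, Max, x + y)
optimize!(model)  # x* = 2, y* = 1.5

# d/dθ of (x + 2y - 5θ) is the constant -5, so the tangent is the
# affine function with constant -5 and zero variable coefficients.
MOI.set(model, DiffOpt.ForwardConstraintFunction(), cons, 0.0 * x - 5.0)
DiffOpt.forward_differentiate!(model)
dy = MOI.get(model, DiffOpt.ForwardVariablePrimal(), y)
# Since y*(θ) = (5θ - 2) / 2 for this LP, dy should be ≈ 2.5.
```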

source
## DiffOpt.ForwardObjectiveFunction (Type)

```julia
ForwardObjectiveFunction <: MOI.AbstractModelAttribute
```

An MOI.AbstractModelAttribute to set input data for forward differentiation, that is, a tangent of the problem input data. The possible values are any MOI.AbstractScalarFunction. An MOI.ScalarQuadraticFunction can only be used in linearly constrained quadratic models.

For instance, if the objective contains θ * (x + 2y), for the purpose of computing the derivative with respect to θ, the following should be set:

```julia
MOI.set(model, DiffOpt.ForwardObjectiveFunction(), 1.0 * x + 2.0 * y)
```

where x and y are the relevant MOI.VariableIndex.
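As an end-to-end sketch (a made-up linearly constrained QP; HiGHS is assumed as the inner solver, and forward_differentiate! is documented elsewhere on this page):

```julia
# Hypothetical QP: min x^2 + θx  s.t.  x >= -10, at θ = 1.
using JuMP
import DiffOpt, HiGHS
import MathOptInterface as MOI

model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)
@variable(model, x >= -10)
@objective(model, Min, x^2 + x)  # θ = 1
optimize!(model)  # x* = -1/2

# d/dθ of (x^2 + θx) is x, so the objective tangent is 1.0 * x.
MOI.set(model, DiffOpt.ForwardObjectiveFunction(), 1.0 * x)
DiffOpt.forward_differentiate!(model)
dx = MOI.get(model, DiffOpt.ForwardVariablePrimal(), x)
# Since x*(θ) = -θ/2 for this QP, dx should be ≈ -0.5.
```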

source
## DiffOpt.ForwardVariablePrimal (Type)

```julia
ForwardVariablePrimal <: MOI.AbstractVariableAttribute
```

An MOI.AbstractVariableAttribute to get output data from forward differentiation, that is, a tangent of the problem solution.

For instance, to get the tangent of the variable of index vi corresponding to the tangents given to ForwardObjectiveFunction and ForwardConstraintFunction, do the following:

```julia
MOI.get(model, DiffOpt.ForwardVariablePrimal(), vi)
```

source
## DiffOpt.IndexMappedFunction (Type)

```julia
IndexMappedFunction{F<:MOI.AbstractFunction} <: AbstractLazyScalarFunction
```

Lazily represents the function MOI.Utilities.map_indices(index_map, DiffOpt.standard_form(func)).

source
## DiffOpt.MOItoJuMP (Type)

```julia
MOItoJuMP{F<:MOI.AbstractScalarFunction} <: JuMP.AbstractJuMPScalar
```

Lazily represents the function JuMP.jump_function(model, DiffOpt.standard_form(func)).

source
## DiffOpt.MatrixScalarQuadraticFunction (Type)

```julia
struct MatrixScalarQuadraticFunction{T, VT, MT} <: MOI.AbstractScalarFunction
    affine::VectorScalarAffineFunction{T,VT}
    terms::MT
end
```

Represents the function x' * terms * x / 2 + affine as an MOI.AbstractScalarFunction where x[i] = MOI.VariableIndex(i). Use standard_form to convert it to a MOI.ScalarQuadraticFunction{T}.

source
## DiffOpt.ProductOfSets (Type)

```julia
ProductOfSets{T} <: MOI.Utilities.OrderedProductOfSets{T}
```

The MOI.Utilities.@product_of_sets macro requires the list of sets to be known at compile time. In DiffOpt, however, the list depends on which sets the user uses, since DiffOpt supports any set that implements the required functions of MathOptSetDistances. With this type, the list of sets can be given at run time.

source
## DiffOpt.ReverseConstraintFunction (Type)

```julia
ReverseConstraintFunction
```

An MOI.AbstractConstraintAttribute to get output data from reverse differentiation, that is, sensitivities with respect to the problem input data.

For instance, consider the following query:

```julia
MOI.get(model, DiffOpt.ReverseConstraintFunction(), ci)
```

If it returns x + 2y + 5, the tangent has coordinate 1 for the coefficient of x, coordinate 2 for the coefficient of y, and 5 for the function constant. If the constraint is of the form func == constant or func <= constant, the tangent for the constant on the right-hand side is -5.
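In the sketch below (a made-up LP; HiGHS is assumed as the inner solver, and ReverseVariablePrimal and reverse_differentiate! are documented elsewhere on this page), the right-hand-side sensitivity is recovered by negating the constant of the returned expression:

```julia
# Hypothetical LP: min 2x  s.t.  x >= 3, so x* = 3.
using JuMP
import DiffOpt, HiGHS
import MathOptInterface as MOI

model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)
@variable(model, x)
@constraint(model, cons, x >= 3)
@objective(model, Min, 2x)
optimize!(model)

# Seed the reverse pass with dl/dx = 1, then pull back to the constraint data.
MOI.set(model, DiffOpt.ReverseVariablePrimal(), x, 1.0)
DiffOpt.reverse_differentiate!(model)
grad = MOI.get(model, DiffOpt.ReverseConstraintFunction(), cons)
grad_std = DiffOpt.standard_form(grad)  # a plain JuMP affine expression
# Since x*(b) = b for the constraint x >= b, the sensitivity with respect
# to the right-hand side is -constant, expected to be ≈ 1 for this problem.
db = -grad_std.constant
```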
source
## DiffOpt.ReverseObjectiveFunction (Type)

```julia
ReverseObjectiveFunction <: MOI.AbstractModelAttribute
```

An MOI.AbstractModelAttribute to get output data from reverse differentiation, that is, sensitivities with respect to the problem input data.

For instance, to get the tangent of the objective function corresponding to the tangent given to ReverseVariablePrimal, do the following:

```julia
func = MOI.get(model, DiffOpt.ReverseObjectiveFunction())
```

Then, to get the sensitivity of the linear term with variable x, do

```julia
JuMP.coefficient(func, x)
```

To get the sensitivity with respect to the quadratic term with variables x and y, do either

```julia
JuMP.coefficient(func, x, y)
```

or

```julia
DiffOpt.quad_sym_half(func, x, y)
```

Warning: these two calls are not equivalent when x == y; see quad_sym_half for the details on the difference between these two functions.

source
## DiffOpt.ReverseVariablePrimal (Type)

```julia
ReverseVariablePrimal <: MOI.AbstractVariableAttribute
```

An MOI.AbstractVariableAttribute to set input data for reverse differentiation, that is, a tangent of the problem solution.

For instance, to set the tangent value for the variable of index vi, do the following:

```julia
MOI.set(model, DiffOpt.ReverseVariablePrimal(), vi, value)
```
source
## DiffOpt.Dπ (Method)

```julia
Dπ(v::Vector{Float64}, model, cones::ProductOfSets)
```

Given the model and its cones, compute the gradient of the projection of the vector v, of length equal to the number of rows in the conic form, onto the Cartesian product of the cones corresponding to those rows. For more info, refer to https://github.com/matbesancon/MathOptSetDistances.jl

source
## DiffOpt.diff_optimizer (Method)

```julia
diff_optimizer(optimizer_constructor)::Optimizer
```

Creates a DiffOpt.Optimizer, which is an MOI layer with an internal optimizer and other utility methods. Results (primal, dual, and slack values) are obtained by querying the internal optimizer instantiated via the optimizer_constructor. These values are required for computing Jacobians with respect to the problem data.

One defines a differentiable model by using any solver of choice. Example:

```julia
julia> import DiffOpt, HiGHS

julia> import MathOptInterface as MOI

julia> model = DiffOpt.diff_optimizer(HiGHS.Optimizer)

julia> MOI.add_constraint(model, ...)
```
source
## DiffOpt.map_rows (Method)

```julia
map_rows(f::Function, model, cones::ProductOfSets, map_mode::Union{Nested{T}, Flattened{T}})
```

Given the model, its cones, and a map_mode of type Nested (resp. Flattened), return a Vector{T} of length equal to the number of cones (resp. rows) in the conic form. The value at the index corresponding to each cone (resp. row) is f(ci, r), where ci is the corresponding constraint index in model and r is a UnitRange over the corresponding rows in the conic form.

source
## DiffOpt.quad_sym_half (Function)

```julia
quad_sym_half(func, vi1::MOI.VariableIndex, vi2::MOI.VariableIndex)
```

Return Q[i,j] = Q[j,i], where the quadratic terms of func are represented as x' Q x / 2 for a symmetric matrix Q with x[i] = vi1 and x[j] = vi2. Note that while this equals JuMP.coefficient(func, vi1, vi2) when vi1 != vi2, in the case vi1 == vi2 it instead equals 2JuMP.coefficient(func, vi1, vi2).
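A small numeric illustration of this convention, in plain Julia with a made-up matrix (no DiffOpt call):

```julia
# For f(x) = x' Q x / 2 with symmetric Q, the expanded monomial
# coefficients are Q[i,j] for i != j but Q[i,i] / 2 on the diagonal.
Q = [4.0 1.0; 1.0 2.0]
x = [3.0, 5.0]
f = x' * Q * x / 2
expanded = Q[1, 1] / 2 * x[1]^2 + Q[1, 2] * x[1] * x[2] + Q[2, 2] / 2 * x[2]^2
# f == expanded == 58.0.
# JuMP.coefficient reports the expanded coefficients (here 2.0 for x1^2,
# 1.0 for x1*x2), while quad_sym_half reports Q itself (here Q[1,1] = 4.0),
# hence the factor-of-2 difference on diagonal terms.
```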

source
## DiffOpt.reverse_differentiate! (Function)

```julia
reverse_differentiate!(model::MOI.ModelLike)
```

Wrapper method for the backward pass / reverse differentiation. This method takes as input a currently solved problem and the differentials with respect to the solution, set with the ReverseVariablePrimal attribute. The output problem-data differentials can be queried with the attributes ReverseObjectiveFunction and ReverseConstraintFunction.

source
## DiffOpt.standard_form (Function)

```julia
standard_form(func::AbstractLazyScalarFunction)
```

Converts func to a standard MOI scalar function.

```julia
standard_form(func::MOItoJuMP)
```

Converts func to a standard JuMP scalar function.

source
## DiffOpt.π (Method)

```julia
π(v::Vector{Float64}, model::MOI.ModelLike, cones::ProductOfSets)
```

Given the model and its cones, compute the projection of the vector v, of length equal to the number of rows in the conic form, onto the Cartesian product of the cones corresponding to those rows. For more info, refer to https://github.com/matbesancon/MathOptSetDistances.jl

source
## DiffOpt.QuadraticProgram.Model (Type)

```julia
DiffOpt.QuadraticProgram.Model <: DiffOpt.AbstractModel
```

For reverse differentiation, it differentiates the optimal solution z and returns the product of the Jacobian matrices (dz / dQ, dz / dq, etc.) with the backward-pass vector dl / dz.

The method computes the product of:

1. the Jacobian of the problem solution z* with respect to the problem parameters, and
2. the backward-pass vector dl / dz (set with DiffOpt.ReverseVariablePrimal), where l can be a loss function.

Note that this method does not return the actual Jacobians.

source
## DiffOpt.ConicProgram.Model (Type)

```julia
DiffOpt.ConicProgram.Model <: DiffOpt.AbstractModel
```

Model to differentiate conic programs.

The forward differentiation computes the product of the derivative (Jacobian) at the conic program parameters A, b, c with the perturbations dA, db, dc.

The reverse differentiation computes the product of the transpose of the derivative (Jacobian) at the conic program parameters A, b, c with the perturbations dx, dy, ds.

For theoretical background, refer to Section 3 of Differentiating Through a Cone Program, https://arxiv.org/abs/1904.09043

source