JuMP
This page lists the public API of JuMP.
This page is an unstructured list of the JuMP API. For a more structured overview, read the Manual or Tutorial parts of this documentation.
Load all of the public API into the current scope with:
using JuMP
Alternatively, load only the module with:
import JuMP
and then prefix all calls with JuMP. to create JuMP.<NAME>.
@NLconstraint
JuMP.@NLconstraint
— Macro@NLconstraint(model::GenericModel, expr)
Add a constraint described by the nonlinear expression expr
. See also @constraint
. For example:
julia> model = Model();
julia> @variable(model, x)
x
julia> @NLconstraint(model, sin(x) <= 1)
sin(x) - 1.0 ≤ 0
julia> @NLconstraint(model, [i = 1:3], sin(i * x) <= 1 / i)
3-element Vector{NonlinearConstraintRef{ScalarShape}}:
(sin(1.0 * x) - 1.0 / 1.0) - 0.0 ≤ 0
(sin(2.0 * x) - 1.0 / 2.0) - 0.0 ≤ 0
(sin(3.0 * x) - 1.0 / 3.0) - 0.0 ≤ 0
@NLconstraints
JuMP.@NLconstraints
— Macro@NLconstraints(model, args...)
Adds multiple nonlinear constraints to model at once, in the same fashion as the @NLconstraint
macro.
The model must be the first argument, and multiple constraints can be added on multiple lines wrapped in a begin ... end
block.
The macro returns a tuple containing the constraints that were defined.
Example
julia> model = Model();
julia> @variable(model, x);
julia> @variable(model, y);
julia> @variable(model, t);
julia> @variable(model, z[1:2]);
julia> a = [4, 5];
julia> @NLconstraints(model, begin
t >= sqrt(x^2 + y^2)
[i = 1:2], z[i] <= log(a[i])
end)
((t - sqrt(x ^ 2.0 + y ^ 2.0)) - 0.0 ≥ 0, NonlinearConstraintRef{ScalarShape}[(z[1] - log(4.0)) - 0.0 ≤ 0, (z[2] - log(5.0)) - 0.0 ≤ 0])
@NLexpression
JuMP.@NLexpression
— Macro@NLexpression(args...)
Efficiently build a nonlinear expression which can then be inserted in other nonlinear constraints and the objective. See also @expression. For example:
julia> model = Model();
julia> @variable(model, x)
x
julia> @variable(model, y)
y
julia> @NLexpression(model, my_expr, sin(x)^2 + cos(x^2))
subexpression[1]: sin(x) ^ 2.0 + cos(x ^ 2.0)
julia> @NLconstraint(model, my_expr + y >= 5)
(subexpression[1] + y) - 5.0 ≥ 0
julia> @NLobjective(model, Min, my_expr)
Indexing over sets and anonymous expressions are also supported:
julia> @NLexpression(model, my_expr_1[i=1:3], sin(i * x))
3-element Vector{NonlinearExpression}:
subexpression[2]: sin(1.0 * x)
subexpression[3]: sin(2.0 * x)
subexpression[4]: sin(3.0 * x)
julia> my_expr_2 = @NLexpression(model, log(1 + sum(exp(my_expr_1[i]) for i in 1:2)))
subexpression[5]: log(1.0 + (exp(subexpression[2]) + exp(subexpression[3])))
@NLexpressions
JuMP.@NLexpressions
— Macro@NLexpressions(model, args...)
Adds multiple nonlinear expressions to model at once, in the same fashion as the @NLexpression
macro.
The model must be the first argument, and multiple expressions can be added on multiple lines wrapped in a begin ... end
block.
The macro returns a tuple containing the expressions that were defined.
Example
julia> model = Model();
julia> @variable(model, x);
julia> @variable(model, y);
julia> @variable(model, z[1:2]);
julia> a = [4, 5];
julia> @NLexpressions(model, begin
my_expr, sqrt(x^2 + y^2)
my_expr_1[i = 1:2], log(a[i]) - z[i]
end)
(subexpression[1]: sqrt(x ^ 2.0 + y ^ 2.0), NonlinearExpression[subexpression[2]: log(4.0) - z[1], subexpression[3]: log(5.0) - z[2]])
@NLobjective
JuMP.@NLobjective
— Macro@NLobjective(model, sense, expression)
Add a nonlinear objective to model
with optimization sense sense
. sense
must be Max
or Min
.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> @NLobjective(model, Max, 2x + 1 + sin(x))
julia> print(model)
Max 2.0 * x + 1.0 + sin(x)
Subject to
@NLparameter
JuMP.@NLparameter
— Macro@NLparameter(model, param == value)
Create and return a nonlinear parameter param
attached to the model model
with initial value set to value
. Nonlinear parameters may be used only in nonlinear expressions.
Example
julia> model = Model();
julia> @NLparameter(model, x == 10)
x == 10.0
julia> value(x)
10.0
@NLparameter(model, value = param_value)
Create and return an anonymous nonlinear parameter param
attached to the model model
with initial value set to param_value
. Nonlinear parameters may be used only in nonlinear expressions.
Example
julia> model = Model();
julia> x = @NLparameter(model, value = 10)
parameter[1] == 10.0
julia> value(x)
10.0
@NLparameter(model, param_collection[...] == value_expr)
Create and return a collection of nonlinear parameters param_collection
attached to the model model
with initial value set to value_expr
(may depend on index sets). Uses the same syntax for specifying index sets as @variable
.
Example
julia> model = Model();
julia> @NLparameter(model, y[i = 1:3] == 2 * i)
3-element Vector{NonlinearParameter}:
parameter[1] == 2.0
parameter[2] == 4.0
parameter[3] == 6.0
julia> value(y[2])
4.0
@NLparameter(model, [...] == value_expr)
Create and return an anonymous collection of nonlinear parameters attached to the model model
with initial value set to value_expr
(may depend on index sets). Uses the same syntax for specifying index sets as @variable
.
Example
julia> model = Model();
julia> y = @NLparameter(model, [i = 1:3] == 2 * i)
3-element Vector{NonlinearParameter}:
parameter[1] == 2.0
parameter[2] == 4.0
parameter[3] == 6.0
julia> value(y[2])
4.0
@NLparameters
JuMP.@NLparameters
— Macro @NLparameters(model, args...)
Create and return multiple nonlinear parameters attached to model model
, in the same fashion as the @NLparameter macro.
The model must be the first argument, and multiple parameters can be added on multiple lines wrapped in a begin ... end
block. Distinct parameters need to be placed on separate lines as in the following example.
The macro returns a tuple containing the parameters that were defined.
Example
julia> model = Model();
julia> @NLparameters(model, begin
x == 10
b == 156
end);
julia> value(x)
10.0
@build_constraint
JuMP.@build_constraint
— Macro@build_constraint(constraint_expr)
Constructs a ScalarConstraint
or VectorConstraint
using the same machinery as @constraint
but without adding the constraint to a model.
Constraints using broadcast operators like x .<= 1
are also supported and will create arrays of ScalarConstraint
or VectorConstraint
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> @build_constraint(2x >= 1)
ScalarConstraint{AffExpr, MathOptInterface.GreaterThan{Float64}}(2 x, MathOptInterface.GreaterThan{Float64}(1.0))
@constraint
JuMP.@constraint
— Macro@constraint(m::GenericModel, expr, kw_args...)
Add a constraint described by the expression expr
.
@constraint(m::GenericModel, ref[i=..., j=..., ...], expr, kw_args...)
Add a group of constraints described by the expression expr
parametrized by i
, j
, ...
The expression expr can either be
- of the form func in set, constraining the function func to belong to the set set, which is either a MOI.AbstractSet or one of the JuMP shortcuts SecondOrderCone, RotatedSecondOrderCone and PSDCone; for example, @constraint(model, [1, x-1, y-2] in SecondOrderCone()) constrains the norm of [x-1, y-2] to be less than 1;
- of the form a sign b, where sign is one of ==, ≥, >=, ≤ and <=, building the single constraint enforcing the comparison to hold for the expressions a and b; for example, @constraint(m, x^2 + y^2 == 1) constrains x and y to lie on the unit circle;
- of the form a ≤ b ≤ c or a ≥ b ≥ c (where ≤ and <=, resp. ≥ and >=, can be used interchangeably), constraining the expression b to lie between a and c;
- of the forms @constraint(m, a .sign b) or @constraint(m, a .sign b .sign c), which broadcast the constraint creation to each element of the vectors.
The recognized keyword arguments in kw_args are the following:
- base_name: Sets the name prefix used to generate constraint names. It corresponds to the constraint name for scalar constraints, otherwise, the constraint names are set to base_name[...] for each index ... of the axes axes.
- container: Specify the container type.
- set_string_name::Bool = true: control whether to set the MOI.ConstraintName attribute. Passing set_string_name = false can improve performance.
Note for extending the constraint macro
Each constraint will be created using add_constraint(m, build_constraint(_error, func, set)) where
- _error is an error function showing the constraint call in addition to the error message given as argument,
- func is the expression that is constrained,
- and set is the set in which it is constrained to belong.
For expr
of the first type (i.e. @constraint(m, func in set)
), func
and set
are passed unchanged to build_constraint
but for the other types, they are determined from the expressions and signs. For instance, @constraint(m, x^2 + y^2 == 1)
is transformed into add_constraint(m, build_constraint(_error, x^2 + y^2, MOI.EqualTo(1.0)))
.
To extend JuMP to accept new constraints of this form, it is necessary to add the corresponding methods to build_constraint
. Note that this will likely mean that either func
or set
will be some custom type, rather than e.g. a Symbol
, since we will likely want to dispatch on the type of the function or set appearing in the constraint.
For extensions that need to create constraints with more information than just func
and set
, an additional positional argument can be specified to @constraint
that will then be passed on to build_constraint
. Hence, we can enable this syntax by defining extensions of build_constraint(_error, func, set, my_arg; kw_args...)
. This produces the user syntax: @constraint(model, ref[...], expr, my_arg, kw_args...)
.
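For illustration, a minimal sketch of this extension pattern (the type MyArg below is hypothetical, not part of JuMP), enabling @constraint(model, x >= 1, MyArg()):
struct MyArg end  # hypothetical extension tag

function JuMP.build_constraint(
    _error::Function,
    func::JuMP.AbstractJuMPScalar,
    set::MOI.AbstractScalarSet,
    ::MyArg;
    kw_args...,
)
    # Custom processing of `func`, `set`, and `kw_args` could happen here;
    # this sketch simply falls back to the standard scalar constraint.
    return JuMP.build_constraint(_error, func, set)
end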
@constraints
JuMP.@constraints
— Macro@constraints(model, args...)
Adds groups of constraints at once, in the same fashion as the @constraint
macro.
The model must be the first argument, and multiple constraints can be added on multiple lines wrapped in a begin ... end
block.
The macro returns a tuple containing the constraints that were defined.
Example
julia> model = Model();
julia> @variable(model, w);
julia> @variable(model, x);
julia> @variable(model, y);
julia> @variable(model, z[1:3]);
julia> @constraints(model, begin
x >= 1
y - w <= 2
sum_to_one[i=1:3], z[i] + y == 1
end);
julia> print(model)
Feasibility
Subject to
sum_to_one[1] : y + z[1] = 1
sum_to_one[2] : y + z[2] = 1
sum_to_one[3] : y + z[3] = 1
x ≥ 1
-w + y ≤ 2
@expression
JuMP.@expression
— Macro@expression(args...)
Efficiently builds a linear or quadratic expression but does not add to model immediately. Instead, returns the expression which can then be inserted in other constraints.
Example
julia> model = Model();
julia> @variable(model, x[1:5]);
julia> @variable(model, y);
julia> @variable(model, z);
julia> @expression(model, shared, sum(i * x[i] for i in 1:5))
x[1] + 2 x[2] + 3 x[3] + 4 x[4] + 5 x[5]
julia> @constraint(model, shared + y >= 5)
x[1] + 2 x[2] + 3 x[3] + 4 x[4] + 5 x[5] + y ≥ 5
julia> @constraint(model, shared + z <= 10)
x[1] + 2 x[2] + 3 x[3] + 4 x[4] + 5 x[5] + z ≤ 10
The ref
accepts index sets in the same way as @variable
, and those indices can be used in the construction of the expressions:
julia> @expression(model, expr[i = 1:3], i * sum(x[j] for j in 1:3))
3-element Vector{AffExpr}:
x[1] + x[2] + x[3]
2 x[1] + 2 x[2] + 2 x[3]
3 x[1] + 3 x[2] + 3 x[3]
Anonymous syntax is also supported:
julia> expr = @expression(model, [i in 1:3], i * sum(x[j] for j in 1:3))
3-element Vector{AffExpr}:
x[1] + x[2] + x[3]
2 x[1] + 2 x[2] + 2 x[3]
3 x[1] + 3 x[2] + 3 x[3]
@expressions
JuMP.@expressions
— Macro@expressions(model, args...)
Adds multiple expressions to model at once, in the same fashion as the @expression
macro.
The model must be the first argument, and multiple expressions can be added on multiple lines wrapped in a begin ... end
block.
The macro returns a tuple containing the expressions that were defined.
Example
julia> model = Model();
julia> @variable(model, x);
julia> @variable(model, y);
julia> @variable(model, z[1:2]);
julia> a = [4, 5];
julia> @expressions(model, begin
my_expr, x^2 + y^2
my_expr_1[i = 1:2], a[i] - z[i]
end)
(x² + y², AffExpr[-z[1] + 4, -z[2] + 5])
@objective
JuMP.@objective
— Macro@objective(model::GenericModel, sense, func)
Set the objective sense to sense
and objective function to func
. The objective sense can be either Min
, Max
, MOI.MIN_SENSE
, MOI.MAX_SENSE
or MOI.FEASIBILITY_SENSE
; see MOI.ObjectiveSense
.
In order to set the sense programmatically, i.e., when sense
is a Julia variable whose value is the sense, one of the three MOI.ObjectiveSense
values should be used.
Example
To minimize the value of the variable x
, do as follows:
julia> model = Model();
julia> @variable(model, x)
x
julia> @objective(model, Min, x)
x
To maximize the value of the affine expression 2x - 1
, do as follows:
julia> @objective(model, Max, 2x - 1)
2 x - 1
To set a quadratic objective and set the objective sense programmatically, do as follows:
julia> sense = MIN_SENSE
MIN_SENSE::OptimizationSense = 0
julia> @objective(model, sense, x^2 - 2x + 1)
x² - 2 x + 1
@operator
JuMP.@operator
— Macro@operator(model, operator, dim, f[, ∇f[, ∇²f]])
Add the nonlinear operator operator
in model
with dim
arguments, and create a new NonlinearOperator
object called operator
in the current scope.
The function f
evaluates the operator and must return a scalar.
The optional function ∇f
evaluates the first derivative, and the optional function ∇²f
evaluates the second derivative.
∇²f
may be provided only if ∇f
is also provided.
Univariate syntax
If dim == 1
, then the method signatures of each function must be:
f(::T)::T where {T<:Real}
∇f(::T)::T where {T<:Real}
∇²f(::T)::T where {T<:Real}
Multivariate syntax
If dim > 1
, then the method signatures of each function must be:
f(x::T...)::T where {T<:Real}
∇f(g::AbstractVector{T}, x::T...)::Nothing where {T<:Real}
∇²f(H::AbstractMatrix{T}, x::T...)::Nothing where {T<:Real}
Where the gradient vector g
and Hessian matrix H
are filled in-place. For the Hessian, you must fill in the non-zero lower-triangular entries only. Setting an off-diagonal upper-triangular element may error.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> f(x::Float64) = x^2
f (generic function with 1 method)
julia> ∇f(x::Float64) = 2 * x
∇f (generic function with 1 method)
julia> ∇²f(x::Float64) = 2.0
∇²f (generic function with 1 method)
julia> @operator(model, op_f, 1, f, ∇f, ∇²f)
NonlinearOperator(f, :op_f)
julia> @objective(model, Min, op_f(x))
op_f(x)
julia> op_f(2.0)
4.0
julia> model[:op_f]
NonlinearOperator(f, :op_f)
julia> model[:op_f](x)
op_f(x)
Non-macro version
This macro is provided as helpful syntax that matches the style of the rest of the JuMP macros. However, you may also add operators outside the macro using add_nonlinear_operator
. For example:
julia> model = Model();
julia> f(x) = x^2
f (generic function with 1 method)
julia> @operator(model, op_f, 1, f)
NonlinearOperator(f, :op_f)
is equivalent to
julia> model = Model();
julia> f(x) = x^2
f (generic function with 1 method)
julia> op_f = model[:op_f] = add_nonlinear_operator(model, 1, f; name = :op_f)
NonlinearOperator(f, :op_f)
@variable
JuMP.@variable
— Macro@variable(model, expr, args..., kw_args...)
Add a variable to the model model
described by the expression expr
, the positional arguments args
and the keyword arguments kw_args
.
Anonymous and named variables
expr must be one of the forms:
- Omitted, like @variable(model), which creates an anonymous variable
- A single symbol like @variable(model, x)
- A container expression like @variable(model, x[i=1:3])
- An anonymous container expression like @variable(model, [i=1:3])
Bounds
In addition, the expression can have bounds, such as:
@variable(model, x >= 0)
@variable(model, x <= 0)
@variable(model, x == 0)
@variable(model, 0 <= x <= 1)
and bounds can depend on the indices of the container expressions:
@variable(model, -i <= x[i=1:3] <= i)
Sets
You can explicitly specify the set to which the variable belongs:
@variable(model, x in MOI.Interval(0.0, 1.0))
For more information on this syntax, read Variables constrained on creation.
Positional arguments
The recognized positional arguments in args are the following:
- Bin: restricts the variable to the MOI.ZeroOne set, that is, {0, 1}. For example, @variable(model, x, Bin). Note: you cannot use @variable(model, Bin), use the binary keyword instead.
- Int: restricts the variable to the set of integers, that is, ..., -2, -1, 0, 1, 2, ... For example, @variable(model, x, Int). Note: you cannot use @variable(model, Int), use the integer keyword instead.
- Symmetric: Only available when creating a square matrix of variables, i.e., when expr is of the form varname[1:n,1:n] or varname[i=1:n,j=1:n], it creates a symmetric matrix of variables.
- PSD: A restrictive extension to Symmetric which constrains a square matrix of variables to be symmetric and positive semidefinite.
Keyword arguments
Four keyword arguments are useful in all cases:
- base_name: Sets the name prefix used to generate variable names. It corresponds to the variable name for a scalar variable, otherwise, the variable names are set to base_name[...] for each index ... of the axes axes.
- start::Float64: specify the value passed to set_start_value for each variable.
- container: specify the container type. See Forcing the container type for more information.
- set_string_name::Bool = true: control whether to set the MOI.VariableName attribute. Passing set_string_name = false can improve performance.
Other keyword arguments are needed to disambiguate situations with anonymous variables:
- lower_bound::Float64: an alternative to x >= lb, sets the value of the variable lower bound.
- upper_bound::Float64: an alternative to x <= ub, sets the value of the variable upper bound.
- binary::Bool: an alternative to passing Bin, sets whether the variable is binary or not.
- integer::Bool: an alternative to passing Int, sets whether the variable is integer or not.
- set::MOI.AbstractSet: an alternative to using x in set.
- variable_type: used by JuMP extensions. See Extend @variable for more information.
Example
The following are equivalent ways of creating a variable x
of name x
with lower bound 0:
julia> model = Model();
julia> @variable(model, x >= 0)
x
julia> model = Model();
julia> @variable(model, x, lower_bound = 0)
x
julia> model = Model();
julia> x = @variable(model, base_name = "x", lower_bound = 0)
x
Other examples:
julia> model = Model();
julia> @variable(model, x[i=1:3] <= i, Int, start = sqrt(i), lower_bound = -i)
3-element Vector{VariableRef}:
x[1]
x[2]
x[3]
julia> @variable(model, y[i=1:3], container = DenseAxisArray, set = MOI.ZeroOne())
1-dimensional DenseAxisArray{VariableRef,1,...} with index sets:
Dimension 1, Base.OneTo(3)
And data, a 3-element Vector{VariableRef}:
y[1]
y[2]
y[3]
julia> @variable(model, z[i=1:3], set_string_name = false)
3-element Vector{VariableRef}:
_[7]
_[8]
_[9]
@variables
JuMP.@variables
— Macro@variables(model, args...)
Adds multiple variables to model at once, in the same fashion as the @variable
macro.
The model must be the first argument, and multiple variables can be added on multiple lines wrapped in a begin ... end
block.
The macro returns a tuple containing the variables that were defined.
Example
julia> model = Model();
julia> @variables(model, begin
x
y[i = 1:2] >= 0, (start = i)
z, Bin, (start = 0, base_name = "Z")
end)
(x, VariableRef[y[1], y[2]], Z)
Keyword arguments must be contained within parentheses (refer to the example above).
add_bridge
JuMP.add_bridge
— Functionadd_bridge(
model::GenericModel{T},
BT::Type{<:MOI.Bridges.AbstractBridge};
coefficient_type::Type{S} = T,
) where {T,S}
Add BT{T}
to the list of bridges that can be used to transform unsupported constraints into an equivalent formulation using only constraints supported by the optimizer.
See also: remove_bridge
.
Example
julia> model = Model();
julia> add_bridge(model, MOI.Bridges.Constraint.SOCtoNonConvexQuadBridge)
julia> add_bridge(
model,
MOI.Bridges.Constraint.NumberConversionBridge;
coefficient_type = Complex{Float64}
)
add_constraint
JuMP.add_constraint
— Functionadd_constraint(model::GenericModel, con::AbstractConstraint, name::String="")
Add a constraint con to Model model and set its name.
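For illustration, a minimal sketch pairing build_constraint with add_constraint (the printed form is indicative):
julia> model = Model();
julia> @variable(model, x);
julia> con = build_constraint(error, 2x, MOI.LessThan(1.0));
julia> add_constraint(model, con, "c")
c : 2 x ≤ 1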
add_nonlinear_constraint
JuMP.add_nonlinear_constraint
— Functionadd_nonlinear_constraint(model::Model, expr::Expr)
Add a nonlinear constraint described by the Julia expression expr
to model
.
This function is most useful if the expression expr
is generated programmatically, and you cannot use @NLconstraint
.
Notes
- You must interpolate the variables directly into the expression
expr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> add_nonlinear_constraint(model, :($(x) + $(x)^2 <= 1))
(x + x ^ 2.0) - 1.0 ≤ 0
add_nonlinear_expression
JuMP.add_nonlinear_expression
— Functionadd_nonlinear_expression(model::Model, expr::Expr)
Add a nonlinear expression expr
to model
.
This function is most useful if the expression expr
is generated programmatically, and you cannot use @NLexpression
.
Notes
- You must interpolate the variables directly into the expression
expr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> add_nonlinear_expression(model, :($(x) + $(x)^2))
subexpression[1]: x + x ^ 2.0
add_nonlinear_operator
JuMP.add_nonlinear_operator
— Functionadd_nonlinear_operator(
model::Model,
dim::Int,
f::Function,
[∇f::Function,]
[∇²f::Function];
[name::Symbol = Symbol(f),]
)
Add a new nonlinear operator with dim
input arguments to model
and associate it with the name name
.
The function f
evaluates the operator and must return a scalar.
The optional function ∇f
evaluates the first derivative, and the optional function ∇²f
evaluates the second derivative.
∇²f
may be provided only if ∇f
is also provided.
Univariate syntax
If dim == 1
, then the method signatures of each function must be:
f(::T)::T where {T<:Real}
∇f(::T)::T where {T<:Real}
∇²f(::T)::T where {T<:Real}
Multivariate syntax
If dim > 1
, then the method signatures of each function must be:
f(x::T...)::T where {T<:Real}
∇f(g::AbstractVector{T}, x::T...)::Nothing where {T<:Real}
∇²f(H::AbstractMatrix{T}, x::T...)::Nothing where {T<:Real}
Where the gradient vector g
and Hessian matrix H
are filled in-place. For the Hessian, you must fill in the non-zero lower-triangular entries only. Setting an off-diagonal upper-triangular element may error.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> f(x::Float64) = x^2
f (generic function with 1 method)
julia> ∇f(x::Float64) = 2 * x
∇f (generic function with 1 method)
julia> ∇²f(x::Float64) = 2.0
∇²f (generic function with 1 method)
julia> op_f = add_nonlinear_operator(model, 1, f, ∇f, ∇²f)
NonlinearOperator(f, :f)
julia> @objective(model, Min, op_f(x))
f(x)
julia> op_f(2.0)
4.0
add_nonlinear_parameter
JuMP.add_nonlinear_parameter
— Functionadd_nonlinear_parameter(model::Model, value::Real)
Add an anonymous parameter to the model.
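For illustration, a minimal sketch (the printed form is indicative):
julia> model = Model();
julia> p = add_nonlinear_parameter(model, 1.2)
parameter[1] == 1.2
julia> value(p)
1.2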
add_to_expression!
JuMP.add_to_expression!
— Functionadd_to_expression!(expression, terms...)
Updates expression
in place to expression + (*)(terms...)
. This is typically much more efficient than expression += (*)(terms...)
. For example, add_to_expression!(expression, a, b)
produces the same result as expression += a*b
, and add_to_expression!(expression, a)
produces the same result as expression += a
.
Only a few methods are defined, mostly for internal use, and only for the cases when (1) they can be implemented efficiently and (2) expression
is capable of storing the result. For example, add_to_expression!(::AffExpr, ::GenericVariableRef, ::GenericVariableRef)
is not defined because a GenericAffExpr
cannot store the product of two variables.
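For illustration, a minimal sketch of in-place updates on an affine expression (printed outputs are indicative):
julia> model = Model();
julia> @variable(model, x);
julia> @variable(model, y);
julia> ex = AffExpr(1.0);
julia> add_to_expression!(ex, 2.0, x)
2 x + 1
julia> add_to_expression!(ex, y)
2 x + y + 1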
add_to_function_constant
JuMP.add_to_function_constant
— Functionadd_to_function_constant(constraint::ConstraintRef, value)
Add value
to the function constant term.
Note that for scalar constraints, JuMP will aggregate all constant terms onto the right-hand side of the constraint so instead of modifying the function, the set will be translated by -value
. For example, given a constraint 2x <= 3
, add_to_function_constant(c, 4)
will modify it to 2x <= -1
.
Example
For scalar constraints, the set is translated by -value
:
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, con, 0 <= 2x - 1 <= 2)
con : 2 x ∈ [1, 3]
julia> add_to_function_constant(con, 4)
julia> con
con : 2 x ∈ [-3, -1]
For vector constraints, the constant is added to the function:
julia> model = Model();
julia> @variable(model, x);
julia> @variable(model, y);
julia> @constraint(model, con, [x + y, x, y] in SecondOrderCone())
con : [x + y, x, y] ∈ MathOptInterface.SecondOrderCone(3)
julia> add_to_function_constant(con, [1, 2, 2])
julia> con
con : [x + y + 1, x + 2, y + 2] ∈ MathOptInterface.SecondOrderCone(3)
add_variable
JuMP.add_variable
— Functionadd_variable(m::GenericModel, v::AbstractVariable, name::String="")
Add a variable v to Model m and set its name.
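For illustration, a minimal sketch mainly relevant to extension developers, assuming the VariableInfo constructor takes has_lb, lower_bound, has_ub, upper_bound, has_fix, fixed_value, has_start, start, binary, and integer:
julia> model = Model();
julia> info = JuMP.VariableInfo(true, 0.0, false, NaN, false, NaN, false, NaN, false, false);
julia> x = add_variable(model, build_variable(error, info), "x")
x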
all_constraints
JuMP.all_constraints
— Functionall_constraints(model::GenericModel, function_type, set_type)::Vector{<:ConstraintRef}
Return a list of all constraints currently in the model where the function has type function_type
and the set has type set_type
. The constraints are ordered by creation time.
See also list_of_constraint_types
and num_constraints
.
Example
julia> model = Model();
julia> @variable(model, x >= 0, Bin);
julia> @constraint(model, 2x <= 1);
julia> all_constraints(model, VariableRef, MOI.GreaterThan{Float64})
1-element Vector{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.VariableIndex, MathOptInterface.GreaterThan{Float64}}, ScalarShape}}:
x ≥ 0
julia> all_constraints(model, VariableRef, MOI.ZeroOne)
1-element Vector{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.VariableIndex, MathOptInterface.ZeroOne}, ScalarShape}}:
x binary
julia> all_constraints(model, AffExpr, MOI.LessThan{Float64})
1-element Vector{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.LessThan{Float64}}, ScalarShape}}:
2 x ≤ 1
all_constraints(
model::GenericModel;
include_variable_in_set_constraints::Bool,
)::Vector{ConstraintRef}
Return a list of all constraints in model
.
If include_variable_in_set_constraints == true
, then VariableRef
constraints such as VariableRef
-in-Integer
are included. To return only the structural constraints (e.g., the rows in the constraint matrix of a linear program), pass include_variable_in_set_constraints = false
.
Example
julia> model = Model();
julia> @variable(model, x >= 0, Int);
julia> @constraint(model, 2x <= 1);
julia> @NLconstraint(model, x^2 <= 1);
julia> all_constraints(model; include_variable_in_set_constraints = true)
4-element Vector{ConstraintRef}:
2 x ≤ 1
x ≥ 0
x integer
x ^ 2.0 - 1.0 ≤ 0
julia> all_constraints(model; include_variable_in_set_constraints = false)
2-element Vector{ConstraintRef}:
2 x ≤ 1
x ^ 2.0 - 1.0 ≤ 0
Performance considerations
Note that this function is type-unstable because it returns an abstractly typed vector. If performance is a problem, consider using list_of_constraint_types
and a function barrier. See the Performance tips for extensions section of the documentation for more details.
all_nonlinear_constraints
JuMP.all_nonlinear_constraints
— Functionall_nonlinear_constraints(model::GenericModel)
Return a vector of all nonlinear constraint references in the model in the order they were added to the model.
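For illustration, a minimal sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @NLconstraint(model, x^2 <= 1);
julia> length(all_nonlinear_constraints(model))
1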
all_variables
JuMP.all_variables
— Functionall_variables(model::GenericModel{T})::Vector{GenericVariableRef{T}} where {T}
Returns a list of all variables currently in the model. The variables are ordered by creation time.
Example
julia> model = Model();
julia> @variable(model, x);
julia> @variable(model, y);
julia> all_variables(model)
2-element Vector{VariableRef}:
x
y
anonymous_name
JuMP.anonymous_name
— Functionanonymous_name(::MIME, x::AbstractVariableRef)
The name to use for an anonymous variable x
when printing.
backend
JuMP.backend
— Functionbackend(model::GenericModel)
Return the lower-level MathOptInterface model that sits underneath JuMP. This model depends on which operating mode JuMP is in (see mode
).
- If JuMP is in DIRECT mode (i.e., the model was created using direct_model), the backend will be the optimizer passed to direct_model.
- If JuMP is in MANUAL or AUTOMATIC mode, the backend is a MOI.Utilities.CachingOptimizer.
This function should only be used by advanced users looking to access low-level MathOptInterface or solver-specific functionality.
Notes
If JuMP is not in DIRECT
mode, the type returned by backend
may change between any JuMP releases. Therefore, only use the public API exposed by MathOptInterface, and do not access internal fields. If you require access to the innermost optimizer, see unsafe_backend
. Alternatively, use direct_model
to create a JuMP model in DIRECT
mode.
See also: unsafe_backend
.
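For illustration, a minimal sketch assuming the default (automatic) mode:
julia> model = Model();
julia> b = backend(model);
julia> b isa MOI.Utilities.CachingOptimizer
true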
barrier_iterations
JuMP.barrier_iterations
— Functionbarrier_iterations(model::GenericModel)
Gets the cumulative number of barrier iterations during the most recent optimization.
Solvers must implement MOI.BarrierIterations()
to use this function.
bridge_constraints
JuMP.bridge_constraints
— Functionbridge_constraints(model::GenericModel)
When in direct mode, return false
.
When in manual or automatic mode, return a Bool
indicating whether the optimizer is set and unsupported constraints are automatically bridged to equivalent supported constraints when an appropriate transformation is available.
build_constraint
JuMP.build_constraint
— Functionbuild_constraint(
_error::Function,
Q::LinearAlgebra.Symmetric{V, M},
::PSDCone,
) where {V<:AbstractJuMPScalar,M<:AbstractMatrix{V}}
Return a VectorConstraint
of shape SymmetricMatrixShape
constraining the matrix Q
to be positive semidefinite.
This function is used by the @constraint
macros as follows:
julia> import LinearAlgebra
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2]);
julia> @constraint(model, LinearAlgebra.Symmetric(Q) in PSDCone())
[Q[1,1] Q[1,2];
Q[1,2] Q[2,2]] ∈ PSDCone()
The form above is usually used when the entries of Q
are affine or quadratic expressions, but it can also be used when the entries are variables to get the reference of the semidefinite constraint, e.g.,
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2], Symmetric)
2×2 LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}:
Q[1,1] Q[1,2]
Q[1,2] Q[2,2]
julia> @constraint(model, Q in PSDCone())
[Q[1,1] Q[1,2];
Q[1,2] Q[2,2]] ∈ PSDCone()
build_constraint(
_error::Function,
Q::AbstractMatrix{<:AbstractJuMPScalar},
::PSDCone,
)
Return a VectorConstraint
of shape SquareMatrixShape
constraining the matrix Q
to be symmetric and positive semidefinite.
This function is used by the @constraint
macro as follows:
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2]);
julia> @constraint(model, Q in PSDCone())
[Q[1,1] Q[1,2];
Q[2,1] Q[2,2]] ∈ PSDCone()
build_constraint(
_error::Function,
Q::LinearAlgebra.Hermitian{V,M},
::HermitianPSDCone,
) where {V<:AbstractJuMPScalar,M<:AbstractMatrix{V}}
Return a VectorConstraint
of shape HermitianMatrixShape
constraining the matrix Q
to be Hermitian positive semidefinite.
This function is used by the @constraint
macros as follows:
julia> import LinearAlgebra
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2]);
julia> @constraint(model, LinearAlgebra.Hermitian(Q) in HermitianPSDCone())
[Q[1,1] Q[1,2];
Q[1,2] Q[2,2]] ∈ HermitianPSDCone()
build_constraint(
_error::Function,
f::AbstractVector{<:AbstractJuMPScalar},
::Nonnegatives,
extra::Union{MOI.AbstractVectorSet,AbstractVectorSet},
)
A helper method that re-writes
@constraint(model, X >= Y, extra)
into
@constraint(model, X - Y in extra)
build_constraint(
_error::Function,
f::AbstractVector{<:AbstractJuMPScalar},
::Nonpositives,
extra::Union{MOI.AbstractVectorSet,AbstractVectorSet},
)
A helper method that re-writes
@constraint(model, Y <= X, extra)
into
@constraint(model, X - Y in extra)
build_variable
JuMP.build_variable
— Functionbuild_variable(
_error::Function,
info::VariableInfo,
args...;
kwargs...,
)
Return a new AbstractVariable
object.
This method should only be implemented by developers creating JuMP extensions. It should never be called by users of JuMP.
Arguments
- _error: a function to call instead of error. _error annotates the error message with additional information for the user.
- info: an instance of VariableInfo. This has a variety of fields relating to the variable such as info.lower_bound and info.binary.
- args: optional additional positional arguments for extending the @variable macro.
- kwargs: optional keyword arguments for extending the @variable macro.
See also: @variable
Extensions should define a method with ONE positional argument to dispatch the call to a different method. Creating an extension that relies on multiple positional arguments leads to MethodErrors if the user passes the arguments in the wrong order.
Example
@variable(model, x, Foo)
will call
build_variable(_error::Function, info::VariableInfo, ::Type{Foo})
Passing special-case positional arguments such as Bin
, Int
, and PSD
is okay, along with keyword arguments:
@variable(model, x, Int, Foo(), mykwarg = true)
# or
@variable(model, x, Foo(), Int, mykwarg = true)
will call
build_variable(_error::Function, info::VariableInfo, ::Foo; mykwarg)
and info.integer
will be true.
Note that the order of the positional arguments does not matter.
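For illustration, a minimal sketch of such an extension (the type Foo below is hypothetical, not part of JuMP), enabling @variable(model, x, Foo()):
struct Foo end  # hypothetical extension tag

function JuMP.build_variable(_error::Function, info::JuMP.VariableInfo, ::Foo; kwargs...)
    # Custom processing of `info` and `kwargs` could happen here; this sketch
    # simply falls back to an ordinary scalar variable.
    return JuMP.ScalarVariable(info)
end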
build_variable(_error::Function, variables, ::SymmetricMatrixSpace)
Return a VariablesConstrainedOnCreation
of shape SymmetricMatrixShape
creating variables in MOI.Reals
, i.e. "free" variables unless they are constrained after their creation.
This function is used by the @variable
macro as follows:
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2], Symmetric)
2×2 LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}:
Q[1,1] Q[1,2]
Q[1,2] Q[2,2]
build_variable(_error::Function, variables, ::SkewSymmetricMatrixSpace)
Return a VariablesConstrainedOnCreation
of shape SkewSymmetricMatrixShape
creating variables in MOI.Reals
, i.e. "free" variables unless they are constrained after their creation.
This function is used by the @variable
macro as follows:
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2] in SkewSymmetricMatrixSpace())
2×2 Matrix{AffExpr}:
0 Q[1,2]
-Q[1,2] 0
build_variable(_error::Function, variables, ::HermitianMatrixSpace)
Return a VariablesConstrainedOnCreation
of shape HermitianMatrixShape
creating variables in MOI.Reals
, i.e. "free" variables unless they are constrained after their creation.
This function is used by the @variable
macro as follows:
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2] in HermitianMatrixSpace())
2×2 LinearAlgebra.Hermitian{GenericAffExpr{ComplexF64, VariableRef}, Matrix{GenericAffExpr{ComplexF64, VariableRef}}}:
real(Q[1,1]) real(Q[1,2]) + imag(Q[1,2]) im
real(Q[1,2]) - imag(Q[1,2]) im real(Q[2,2])
build_variable(_error::Function, variables, ::PSDCone)
Return a VariablesConstrainedOnCreation
of shape SymmetricMatrixShape
constraining the variables to be positive semidefinite.
This function is used by the @variable
macro as follows:
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2], PSD)
2×2 LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}:
Q[1,1] Q[1,2]
Q[1,2] Q[2,2]
callback_node_status
JuMP.callback_node_status
— Functioncallback_node_status(cb_data, model::GenericModel)
Return an MOI.CallbackNodeStatusCode
enum, indicating if the current primal solution available from callback_value
is integer feasible.
callback_value
JuMP.callback_value
— Functioncallback_value(cb_data, x::GenericVariableRef)
Return the primal solution of a variable inside a callback.
cb_data
is the argument to the callback function, and the type is dependent on the solver.
callback_value(cb_data, expr::Union{GenericAffExpr, GenericQuadExpr})
Return the primal solution of an affine or quadratic expression inside a callback by getting the value for each variable appearing in the expression.
cb_data
is the argument to the callback function, and the type is dependent on the solver.
check_belongs_to_model
JuMP.check_belongs_to_model
— Functioncheck_belongs_to_model(func::AbstractJuMPScalar, model::AbstractModel)
Throw VariableNotOwned
if the owner_model
of one of the variables of the function func
is not model
.
check_belongs_to_model(constraint::AbstractConstraint, model::AbstractModel)
Throw VariableNotOwned
if the owner_model
of one of the variables of the constraint constraint
is not model
.
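For illustration, a minimal sketch:
model = Model()
@variable(model, x)
other = Model()
check_belongs_to_model(x, model)   # does not throw
check_belongs_to_model(x, other)   # throws VariableNotOwned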
coefficient
JuMP.coefficient
— Functioncoefficient(v1::GenericVariableRef{T}, v2::GenericVariableRef{T}) where {T}
Return one(T)
if v1 == v2
, and zero(T)
otherwise.
This is a fallback for other coefficient
methods to simplify code in which the expression may be a single variable.
coefficient(a::GenericAffExpr{C,V}, v::V) where {C,V}
Return the coefficient associated with variable v
in the affine expression a
.
coefficient(a::GenericAffExpr{C,V}, v1::V, v2::V) where {C,V}
Return the coefficient associated with the term v1 * v2
in the quadratic expression a
.
Note that coefficient(a, v1, v2)
is the same as coefficient(a, v2, v1)
.
coefficient(a::GenericQuadExpr{C,V}, v::V) where {C,V}
Return the coefficient associated with variable v
in the affine component of a
.
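For illustration, a minimal sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @variable(model, y);
julia> a = 2x + 3y + 4;
julia> coefficient(a, x)
2.0
julia> q = x^2 + 5 * x * y;
julia> coefficient(q, x, y)
5.0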
compute_conflict!
JuMP.compute_conflict!
— Functioncompute_conflict!(model::GenericModel)
Compute a conflict if the model is infeasible. If an optimizer has not been set yet (see set_optimizer
), a NoOptimizer
error is thrown.
The status of the conflict can be checked with the MOI.ConflictStatus
model attribute. Then, the status for each constraint can be queried with the MOI.ConstraintConflictStatus
attribute.
constant
JuMP.constant
— Functionconstant(aff::GenericAffExpr{C, V})::C
Return the constant of the affine expression.
constant(aff::GenericQuadExpr{C, V})::C
Return the constant of the quadratic expression.
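For illustration, a minimal sketch:
julia> model = Model();
julia> @variable(model, x);
julia> constant(2x + 3)
3.0
julia> constant(x^2 + 2x + 1)
1.0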
constraint_by_name
JuMP.constraint_by_name
— Functionconstraint_by_name(model::AbstractModel,
name::String)::Union{ConstraintRef, Nothing}
Return the reference of the constraint with name attribute name
or Nothing
if no constraint has this name attribute. Throws an error if several constraints have name
as their name attribute.
constraint_by_name(model::AbstractModel,
name::String,
F::Type{<:Union{AbstractJuMPScalar,
Vector{<:AbstractJuMPScalar},
MOI.AbstractFunction}},
S::Type{<:MOI.AbstractSet})::Union{ConstraintRef, Nothing}
Similar to the method above, except that it throws an error if the constraint is not an F
-in-S
constraint where F
is either the JuMP or MOI type of the function, and S
is the MOI type of the set. This method is recommended if you know the type of the function and set since its returned type can be inferred while for the method above (i.e. without F
and S
), the exact return type of the constraint index cannot be inferred.
julia> model = Model();
julia> @variable(model, x)
x
julia> @constraint(model, con, x^2 == 1)
con : x² = 1
julia> constraint_by_name(model, "kon")
julia> constraint_by_name(model, "con")
con : x² = 1
julia> constraint_by_name(model, "con", AffExpr, MOI.EqualTo{Float64})
julia> constraint_by_name(model, "con", QuadExpr, MOI.EqualTo{Float64})
con : x² = 1
constraint_object
JuMP.constraint_object
— Functionconstraint_object(con_ref::ConstraintRef)
Return the underlying constraint data for the constraint referenced by con_ref.
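For illustration, a minimal sketch accessing the func and set fields of the returned object (printed forms are indicative):
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, c, 2x <= 1)
c : 2 x ≤ 1
julia> object = constraint_object(c);
julia> object.func
2 x
julia> object.set
MathOptInterface.LessThan{Float64}(1.0)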
constraint_ref_with_index
JuMP.constraint_ref_with_index
— Functionconstraint_ref_with_index(model::AbstractModel, index::MOI.ConstraintIndex)
Return a ConstraintRef
of model
corresponding to index
.
constraint_string
JuMP.constraint_string
— Functionconstraint_string(
mode::MIME,
ref::ConstraintRef;
in_math_mode::Bool = false)
Return a string representation of the constraint ref
, given the mode
.
constraints_string
JuMP.constraints_string
— Functionconstraints_string(mode, model::AbstractModel)::Vector{String}
Return a list of String
s describing each constraint of the model.
copy_conflict
JuMP.copy_conflict
— Functioncopy_conflict(model::GenericModel)
Return a copy of the current conflict for the model model
and a GenericReferenceMap
that can be used to obtain the variable and constraint reference of the new model corresponding to a given model
's reference.
This is a convenience function that provides a filtering function for copy_model
.
Note
Model copy is not supported in DIRECT
mode, i.e. when a model is constructed using the direct_model
constructor instead of the Model
constructor. Moreover, independently of whether an optimizer was provided at model construction, the new model will have no optimizer, i.e., an optimizer will have to be provided to the new model in the optimize!
call.
Example
In the following example, a model model
is constructed with a variable x
and two constraints c1
and c2
. This model has no solution, as the two constraints are mutually exclusive. The solver is asked to compute a conflict with compute_conflict!
. The parts of model
participating in the conflict are then copied into a model iis_model
.
julia> using JuMP
julia> import Gurobi
julia> model = Model(Gurobi.Optimizer);
julia> set_silent(model)
julia> @variable(model, x >= 0)
x
julia> @constraint(model, c1, x >= 2)
c1 : x ≥ 2
julia> @constraint(model, c2, x <= 1)
c2 : x ≤ 1
julia> optimize!(model)
julia> compute_conflict!(model)
julia> if get_attribute(model, MOI.ConflictStatus()) == MOI.CONFLICT_FOUND
iis_model, reference_map = copy_conflict(model)
print(iis_model)
end
Feasibility
Subject to
c1 : x ≥ 2
c2 : x ≤ 1
copy_extension_data
JuMP.copy_extension_data
— Functioncopy_extension_data(data, new_model::AbstractModel, model::AbstractModel)
Return a copy of the extension data data
of the model model
to the extension data of the new model new_model
.
A method should be added for any JuMP extension storing data in the ext
field.
Do not engage in type piracy by implementing this method for types of data
that you did not define! JuMP extensions should store types that they define in model.ext
, rather than regular Julia types.
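For illustration, a minimal sketch (the MyExtData type below is hypothetical) for an extension that stores data in model.ext:
struct MyExtData
    counter::Int
end

function JuMP.copy_extension_data(
    data::MyExtData,
    new_model::JuMP.AbstractModel,
    model::JuMP.AbstractModel,
)
    # Return the data to attach to `new_model`; here a simple field-wise copy.
    return MyExtData(data.counter)
end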
copy_model
JuMP.copy_model
— Functioncopy_model(model::GenericModel; filter_constraints::Union{Nothing, Function}=nothing)
Return a copy of the model model
and a GenericReferenceMap
that can be used to obtain the variable and constraint reference of the new model corresponding to a given model
's reference. A Base.copy(::AbstractModel)
method has also been implemented; it is similar to copy_model
but does not return the reference map.
If the filter_constraints
argument is given, only the constraints for which this function returns true
will be copied. This function is given a constraint reference as argument.
Note
Model copy is not supported in DIRECT
mode, i.e. when a model is constructed using the direct_model
constructor instead of the Model
constructor. Moreover, independently of whether an optimizer was provided at model construction, the new model will have no optimizer, i.e., an optimizer will have to be provided to the new model in the optimize!
call.
Example
In the following example, a model model
is constructed with a variable x
and a constraint cref
. It is then copied into a model new_model
with the new references assigned to x_new
and cref_new
.
julia> model = Model();
julia> @variable(model, x)
x
julia> @constraint(model, cref, x == 2)
cref : x = 2
julia> new_model, reference_map = copy_model(model);
julia> x_new = reference_map[x]
x
julia> cref_new = reference_map[cref]
cref : x = 2
delete
JuMP.delete
— Functiondelete(model::GenericModel, con_ref::ConstraintRef)
Delete the constraint associated with constraint_ref
from the model model
.
Note that delete
does not unregister the name from the model, so adding a new constraint of the same name will throw an error. Use unregister
to unregister the name after deletion.
Example
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, c, 2x <= 1)
c : 2 x ≤ 1
julia> delete(model, c)
julia> unregister(model, :c)
julia> print(model)
Feasibility
Subject to
julia> model[:c]
ERROR: KeyError: key :c not found
Stacktrace:
[...]
delete(model::GenericModel, con_refs::Vector{<:ConstraintRef})
Delete the constraints associated with con_refs
from the model model
. Solvers may implement specialized methods for deleting multiple constraints of the same concrete type, i.e., when isconcretetype(eltype(con_refs))
. These may be more efficient than repeatedly calling the single constraint delete method.
See also: unregister
delete(model::GenericModel, variable_ref::GenericVariableRef)
Delete the variable associated with variable_ref
from the model model
.
Note that delete
does not unregister the name from the model, so adding a new variable of the same name will throw an error. Use unregister
to unregister the name after deletion.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> delete(model, x)
julia> unregister(model, :x)
julia> print(model)
Feasibility
Subject to
julia> model[:x]
ERROR: KeyError: key :x not found
Stacktrace:
[...]
delete(model::GenericModel, variable_refs::Vector{<:GenericVariableRef})
Delete the variables associated with variable_refs
from the model model
. Solvers may implement methods for deleting multiple variables that are more efficient than repeatedly calling the single variable delete method.
See also: unregister
delete(model::Model, c::NonlinearConstraintRef)
Delete the nonlinear constraint c
from model
.
delete_lower_bound
JuMP.delete_lower_bound
— Functiondelete_lower_bound(v::GenericVariableRef)
Delete the lower bound constraint of a variable.
See also LowerBoundRef
, has_lower_bound
, lower_bound
, set_lower_bound
.
Examples
julia> model = Model();
julia> @variable(model, x >= 1.0);
julia> has_lower_bound(x)
true
julia> delete_lower_bound(x)
julia> has_lower_bound(x)
false
delete_upper_bound
JuMP.delete_upper_bound
— Functiondelete_upper_bound(v::GenericVariableRef)
Delete the upper bound constraint of a variable.
Errors if one does not exist.
See also UpperBoundRef
, has_upper_bound
, upper_bound
, set_upper_bound
.
Examples
julia> model = Model();
julia> @variable(model, x <= 1.0);
julia> has_upper_bound(x)
true
julia> delete_upper_bound(x)
julia> has_upper_bound(x)
false
direct_generic_model
JuMP.direct_generic_model
— Functiondirect_generic_model(
value_type::Type{T},
backend::MOI.ModelLike;
) where {T<:Real}
Return a new JuMP model using backend
to store the model and solve it.
As opposed to the Model
constructor, no cache of the model is stored outside of backend
and no bridges are automatically applied to backend
.
Notes
The absence of a cache reduces the memory footprint, but it is important to bear in mind the following implications of creating models using this direct mode:
- When backend does not support an operation, such as modifying constraints or adding variables/constraints after solving, an error is thrown. For models created using the Model constructor, such situations can be dealt with by storing the modifications in a cache and loading them into the optimizer when optimize! is called.
- No constraint bridging is supported by default.
- The optimizer used cannot be changed after the model is constructed.
- The model created cannot be copied.
direct_generic_model(::Type{T}, factory::MOI.OptimizerWithAttributes)
Create a direct_generic_model
using factory
, a MOI.OptimizerWithAttributes
object created by optimizer_with_attributes
.
Example
julia> import HiGHS
julia> optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"presolve" => "off",
MOI.Silent() => true,
);
julia> model = direct_generic_model(Float64, optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: DIRECT
Solver name: HiGHS
is equivalent to:
julia> import HiGHS
julia> model = direct_generic_model(Float64, HiGHS.Optimizer())
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: DIRECT
Solver name: HiGHS
julia> set_attribute(model, "presolve", "off")
julia> set_attribute(model, MOI.Silent(), true)
direct_model
JuMP.direct_model
— Functiondirect_model(backend::MOI.ModelLike)
Return a new JuMP model using backend
to store the model and solve it.
As opposed to the Model
constructor, no cache of the model is stored outside of backend
and no bridges are automatically applied to backend
.
Notes
The absence of a cache reduces the memory footprint, but it is important to bear in mind the following implications of creating models using this direct mode:
- When backend does not support an operation, such as modifying constraints or adding variables/constraints after solving, an error is thrown. For models created using the Model constructor, such situations can be dealt with by storing the modifications in a cache and loading them into the optimizer when optimize! is called.
- No constraint bridging is supported by default.
- The optimizer used cannot be changed after the model is constructed.
- The model created cannot be copied.
direct_model(factory::MOI.OptimizerWithAttributes)
Create a direct_model
using factory
, a MOI.OptimizerWithAttributes
object created by optimizer_with_attributes
.
Example
julia> import HiGHS
julia> optimizer = optimizer_with_attributes(
HiGHS.Optimizer,
"presolve" => "off",
MOI.Silent() => true,
);
julia> model = direct_model(optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: DIRECT
Solver name: HiGHS
is equivalent to:
julia> import HiGHS
julia> model = direct_model(HiGHS.Optimizer())
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: DIRECT
Solver name: HiGHS
julia> set_attribute(model, "presolve", "off")
julia> set_attribute(model, MOI.Silent(), true)
drop_zeros!
JuMP.drop_zeros!
— Functiondrop_zeros!(expr::GenericAffExpr)
Remove terms in the affine expression with 0
coefficients.
drop_zeros!(expr::GenericQuadExpr)
Remove terms in the quadratic expression with 0
coefficients.
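For illustration, a minimal sketch (printed forms are indicative):
julia> model = Model();
julia> @variable(model, x);
julia> expr = x + 1 - x
0 x + 1
julia> drop_zeros!(expr);
julia> expr
1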
dual
JuMP.dual
— Functiondual(con_ref::ConstraintRef; result::Int = 1)
Return the dual value of constraint con_ref
associated with result index result
of the most-recent solution returned by the solver.
Use has_dual
to check if a result exists before asking for values.
See also: result_count
, shadow_price
.
dual(c::NonlinearConstraintRef)
Return the dual of the nonlinear constraint c
.
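For illustration, a sketch of querying a dual after solving a small linear program, assuming the HiGHS optimizer is installed (the reported value is specific to this problem):
import HiGHS
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x)
@constraint(model, c, x >= 1)
@objective(model, Min, 2x)
optimize!(model)
has_duals(model)   # true
dual(c)            # 2.0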
dual_objective_value
JuMP.dual_objective_value
— Functiondual_objective_value(model::GenericModel; result::Int = 1)
Return the value of the objective of the dual problem associated with result index result
of the most-recent solution returned by the solver.
Throws MOI.UnsupportedAttribute{MOI.DualObjectiveValue}
if the solver does not support this attribute.
See also: result_count
.
dual_shape
JuMP.dual_shape
— Functiondual_shape(shape::AbstractShape)::AbstractShape
Returns the shape of the dual space of the space of objects of shape shape
. By default, the dual_shape
of a shape is itself. See the examples section below for an example for which this is not the case.
Example
Consider polynomial constraints for which the dual is moment constraints and moment constraints for which the dual is polynomial constraints. Shapes for polynomials can be defined as follows:
struct Polynomial
coefficients::Vector{Float64}
monomials::Vector{Monomial}
end
struct PolynomialShape <: AbstractShape
monomials::Vector{Monomial}
end
JuMP.reshape_vector(x::Vector, shape::PolynomialShape) = Polynomial(x, shape.monomials)
and a shape for moments can be defined as follows:
struct Moments
coefficients::Vector{Float64}
monomials::Vector{Monomial}
end
struct MomentsShape <: AbstractShape
monomials::Vector{Monomial}
end
JuMP.reshape_vector(x::Vector, shape::MomentsShape) = Moments(x, shape.monomials)
Then dual_shape
allows the definition of the shape of the dual of polynomial and moment constraints:
dual_shape(shape::PolynomialShape) = MomentsShape(shape.monomials)
dual_shape(shape::MomentsShape) = PolynomialShape(shape.monomials)
dual_start_value
JuMP.dual_start_value
— Functiondual_start_value(con_ref::ConstraintRef)
Return the dual start value (MOI attribute ConstraintDualStart
) of the constraint con_ref
.
Note: If no dual start value has been set, dual_start_value
will return nothing
.
See also set_dual_start_value
.
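For illustration, a minimal sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, c, x >= 1);
julia> dual_start_value(c) === nothing
true
julia> set_dual_start_value(c, 2.0)
julia> dual_start_value(c)
2.0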
dual_status
JuMP.dual_status
— Functiondual_status(model::GenericModel; result::Int = 1)
Return a MOI.ResultStatusCode
describing the status of the most recent dual solution of the solver (i.e., the MOI.DualStatus
attribute) associated with the result index result
.
See also: result_count
.
error_if_direct_mode
JuMP.error_if_direct_mode
— Functionerror_if_direct_mode(model::GenericModel, func::Symbol)
Errors if model
is in direct mode during a call from the function named func
.
Used internally within JuMP, or by JuMP extensions who do not want to support models in direct mode.
fix
JuMP.fix
— Functionfix(v::GenericVariableRef, value::Number; force::Bool = false)
Fix a variable to a value. Update the fixing constraint if one exists, otherwise create a new one.
If the variable already has variable bounds and force=false
, calling fix
will throw an error. If force=true
, existing variable bounds will be deleted, and the fixing constraint will be added. Note a variable will have no bounds after a call to unfix
.
See also FixRef
, is_fixed
, fix_value
, unfix
.
Examples
julia> model = Model();
julia> @variable(model, x);
julia> is_fixed(x)
false
julia> fix(x, 1.0)
julia> is_fixed(x)
true
julia> model = Model();
julia> @variable(model, 0 <= x <= 1);
julia> is_fixed(x)
false
julia> fix(x, 1.0; force = true)
julia> is_fixed(x)
true
fix_discrete_variables
JuMP.fix_discrete_variables
— Functionfix_discrete_variables([var_value::Function = value,] model::GenericModel)
Modifies model
to convert all binary and integer variables to continuous variables with fixed bounds of var_value(x)
.
Return
Returns a function that can be called without any arguments to restore the original model. The behavior of this function is undefined if additional changes are made to the affected variables in the meantime.
Notes
- An error is thrown if semi-continuous or semi-integer constraints are present (support may be added for these in the future).
- All other constraints are ignored (left in place). This includes discrete constraints like SOS and indicator constraints.
Example
julia> model = Model();
julia> @variable(model, x, Bin, start = 1);
julia> @variable(model, 1 <= y <= 10, Int, start = 2);
julia> @objective(model, Min, x + y);
julia> undo_relax = fix_discrete_variables(start_value, model);
julia> print(model)
Min x + y
Subject to
x = 1
y = 2
julia> undo_relax()
julia> print(model)
Min x + y
Subject to
y ≥ 1
y ≤ 10
y integer
x binary
fix_value
JuMP.fix_value
— Functionfix_value(v::GenericVariableRef)
Return the value to which a variable is fixed.
Error if one does not exist.
See also FixRef
, is_fixed
, fix
, unfix
.
Examples
julia> model = Model();
julia> @variable(model, x == 1);
julia> fix_value(x)
1.0
flatten!
JuMP.flatten!
— Functionflatten!(expr::GenericNonlinearExpr)
Flatten a nonlinear expression in-place by lifting nested +
and *
nodes into a single n-ary operation.
Motivation
Nonlinear expressions created using operator overloading can be deeply nested and unbalanced. For example, prod(x for i in 1:4)
creates *(x, *(x, *(x, x)))
instead of the more preferable *(x, x, x, x)
.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> y = prod(x for i in 1:4)
((x²) * x) * x
julia> flatten!(y)
(x²) * x * x
julia> flatten!(sin(prod(x for i in 1:4)))
sin((x²) * x * x)
function_string
JuMP.function_string
— Functionfunction_string(
mode::MIME,
func::Union{JuMP.AbstractJuMPScalar,Vector{<:JuMP.AbstractJuMPScalar}},
)
Return a String
representing the function func
using print mode mode
.
get_attribute
JuMP.get_attribute
— Functionget_attribute(model::GenericModel, attr::MOI.AbstractModelAttribute)
get_attribute(x::GenericVariableRef, attr::MOI.AbstractVariableAttribute)
get_attribute(cr::ConstraintRef, attr::MOI.AbstractConstraintAttribute)
Get the value of a solver-specific attribute attr
.
This is equivalent to calling MOI.get
with the associated MOI model and, for variables and constraints, with the associated MOI.VariableIndex
or MOI.ConstraintIndex
.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> @constraint(model, c, 2 * x <= 1)
c : 2 x ≤ 1
julia> get_attribute(model, MOI.Name())
""
julia> get_attribute(x, MOI.VariableName())
"x"
julia> get_attribute(c, MOI.ConstraintName())
"c"
get_attribute(
model::Union{GenericModel,MOI.OptimizerWithAttributes},
attr::Union{AbstractString,MOI.AbstractOptimizerAttribute},
)
Get the value of a solver-specific attribute attr
.
This is equivalent to calling MOI.get
with the associated MOI model.
If attr
is an AbstractString
, it is converted to MOI.RawOptimizerAttribute
.
Example
julia> import HiGHS
julia> opt = optimizer_with_attributes(HiGHS.Optimizer, "output_flag" => true);
julia> model = Model(opt);
julia> get_attribute(model, "output_flag")
true
julia> get_attribute(model, MOI.RawOptimizerAttribute("output_flag"))
true
julia> get_attribute(opt, "output_flag")
true
julia> get_attribute(opt, MOI.RawOptimizerAttribute("output_flag"))
true
get_optimizer_attribute
JuMP.get_optimizer_attribute
— Functionget_optimizer_attribute(
model::Union{GenericModel,MOI.OptimizerWithAttributes},
attr::Union{AbstractString,MOI.AbstractOptimizerAttribute},
)
Return the value associated with the solver-specific attribute attr
.
If attr
is an AbstractString
, this is equivalent to get_optimizer_attribute(model, MOI.RawOptimizerAttribute(name))
.
This method will remain in all v1.X releases of JuMP, but it may be removed in a future v2.0 release. We recommend using get_attribute
instead.
See also: set_optimizer_attribute
, set_optimizer_attributes
.
Example
julia> import Ipopt
julia> model = Model(Ipopt.Optimizer);
julia> get_optimizer_attribute(model, MOI.Silent())
false
has_duals
JuMP.has_duals
— Functionhas_duals(model::GenericModel; result::Int = 1)
Return true
if the solver has a dual solution in result index result
available to query, otherwise return false
.
See also dual
, shadow_price
, and result_count
.
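Example
A minimal sketch, assuming the HiGHS optimizer is installed:
julia> import HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> set_silent(model)
julia> @variable(model, x);
julia> @constraint(model, c, x >= 1);
julia> @objective(model, Min, x);
julia> has_duals(model)
false
julia> optimize!(model)
julia> has_duals(model)
true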
has_lower_bound
JuMP.has_lower_bound
— Functionhas_lower_bound(v::GenericVariableRef)
Return true
if v
has a lower bound. If true
, the lower bound can be queried with lower_bound
.
See also LowerBoundRef
, lower_bound
, set_lower_bound
, delete_lower_bound
.
Examples
julia> model = Model();
julia> @variable(model, x >= 1.0);
julia> has_lower_bound(x)
true
has_start_value
JuMP.has_start_value
— Functionhas_start_value(variable::AbstractVariableRef)
Return true
if the variable has a start value set otherwise return false
.
See also set_start_value
.
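Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x, start = 1.5);
julia> @variable(model, y);
julia> has_start_value(x)
true
julia> has_start_value(y)
false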
has_upper_bound
JuMP.has_upper_bound
— Functionhas_upper_bound(v::GenericVariableRef)
Return true
if v
has an upper bound. If true
, the upper bound can be queried with upper_bound
.
See also UpperBoundRef
, upper_bound
, set_upper_bound
, delete_upper_bound
.
Examples
julia> model = Model();
julia> @variable(model, x <= 1.0);
julia> has_upper_bound(x)
true
has_values
JuMP.has_values
— Functionhas_values(model::GenericModel; result::Int = 1)
Return true
if the solver has a primal solution in result index result
available to query, otherwise return false
.
See also value
and result_count
.
in_set_string
JuMP.in_set_string
— Functionin_set_string(mode::MIME, set)
Return a String
representing the membership to the set set
using print mode mode
.
index
JuMP.index
— Functionindex(cr::ConstraintRef)::MOI.ConstraintIndex
Return the index of the constraint that corresponds to cr
in the MOI backend.
index(v::GenericVariableRef)::MOI.VariableIndex
Return the index of the variable that corresponds to v
in the MOI backend.
index(p::NonlinearParameter)::MOI.Nonlinear.ParameterIndex
Return the index of the nonlinear parameter associated with p
.
index(ex::NonlinearExpression)::MOI.Nonlinear.ExpressionIndex
Return the index of the nonlinear expression associated with ex
.
is_binary
JuMP.is_binary
— Functionis_binary(v::GenericVariableRef)
Return true
if v
is constrained to be binary.
See also BinaryRef
, set_binary
, unset_binary
.
Examples
julia> model = Model();
julia> @variable(model, x, Bin);
julia> is_binary(x)
true
is_fixed
JuMP.is_fixed
— Functionis_fixed(v::GenericVariableRef)
Return true
if v
is a fixed variable. If true
, the fixed value can be queried with fix_value
.
See also FixRef
, fix_value
, fix
, unfix
.
Examples
julia> model = Model();
julia> @variable(model, x);
julia> is_fixed(x)
false
julia> fix(x, 1.0)
julia> is_fixed(x)
true
is_integer
JuMP.is_integer
— Functionis_integer(v::GenericVariableRef)
Return true
if v
is constrained to be integer.
See also IntegerRef
, set_integer
, unset_integer
.
Examples
julia> model = Model();
julia> @variable(model, x);
julia> is_integer(x)
false
julia> set_integer(x)
julia> is_integer(x)
true
is_parameter
JuMP.is_parameter
— Functionis_parameter(x::GenericVariableRef)::Bool
Return true
if x
is constrained to be a parameter.
See also ParameterRef
, set_parameter_value
, parameter_value
.
Examples
julia> model = Model();
julia> @variable(model, p in Parameter(2))
p
julia> is_parameter(p)
true
julia> @variable(model, x)
x
julia> is_parameter(x)
false
is_valid
JuMP.is_valid
— Functionis_valid(model::GenericModel, con_ref::ConstraintRef{<:AbstractModel})
Return true
if constraint_ref
refers to a valid constraint in model
.
is_valid(model::GenericModel, variable_ref::GenericVariableRef)
Return true
if variable
refers to a valid variable in model
.
is_valid(model::Model, c::NonlinearConstraintRef)
Return true
if c
refers to a valid nonlinear constraint in model
.
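Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> is_valid(model, x)
true
julia> model_2 = Model();
julia> is_valid(model_2, x)
false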
isequal_canonical
JuMP.isequal_canonical
— Functionisequal_canonical(
aff::GenericAffExpr{C,V},
other::GenericAffExpr{C,V}
) where {C,V}
Return true
if aff
is equal to other
after dropping zeros and disregarding the order. Mainly useful for testing.
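Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @variable(model, y);
julia> isequal_canonical(x + y, y + x)
true
julia> isequal_canonical(x + y, x + 2y)
false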
jump_function
JuMP.jump_function
— Functionjump_function(x)
Given a MathOptInterface object x
, return the JuMP equivalent.
See also: moi_function
.
jump_function_type
JuMP.jump_function_type
— Functionjump_function_type(::Type{T}) where {T}
Given a MathOptInterface object type T
, return the JuMP equivalent.
See also: moi_function_type
.
latex_formulation
JuMP.latex_formulation
— Functionlatex_formulation(model::AbstractModel)
Wrap model
in a type so that it can be pretty-printed as text/latex
in a notebook like IJulia, or in Documenter.
To render the model, end the cell with latex_formulation(model)
, or call display(latex_formulation(model))
to force the display of the model from inside a function.
linear_terms
JuMP.linear_terms
— Functionlinear_terms(aff::GenericAffExpr{C, V})
Provides an iterator over coefficient-variable tuples (a_i::C, x_i::V)
in the linear part of the affine expression.
linear_terms(quad::GenericQuadExpr{C, V})
Provides an iterator over tuples (coefficient::C, variable::V)
in the linear part of the quadratic expression.
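Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> expr = 2x + 1;
julia> for (coefficient, variable) in linear_terms(expr)
           println("$coefficient * $variable")
       end
2.0 * x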
list_of_constraint_types
JuMP.list_of_constraint_types
— Functionlist_of_constraint_types(model::GenericModel)::Vector{Tuple{Type,Type}}
Return a list of tuples of the form (F, S)
where F
is a JuMP function type and S
is an MOI set type such that all_constraints(model, F, S)
returns a nonempty list.
Example
julia> model = Model();
julia> @variable(model, x >= 0, Bin);
julia> @constraint(model, 2x <= 1);
julia> list_of_constraint_types(model)
3-element Vector{Tuple{Type, Type}}:
(AffExpr, MathOptInterface.LessThan{Float64})
(VariableRef, MathOptInterface.GreaterThan{Float64})
(VariableRef, MathOptInterface.ZeroOne)
Performance considerations
Iterating over the list of function and set types is a type-unstable operation. Consider using a function barrier. See the Performance tips for extensions section of the documentation for more details.
lower_bound
JuMP.lower_bound
— Functionlower_bound(v::GenericVariableRef)
Return the lower bound of a variable. Error if one does not exist.
See also LowerBoundRef
, has_lower_bound
, set_lower_bound
, delete_lower_bound
.
Examples
julia> model = Model();
julia> @variable(model, x >= 1.0);
julia> lower_bound(x)
1.0
lp_sensitivity_report
JuMP.lp_sensitivity_report
— Functionlp_sensitivity_report(model::GenericModel{T}; atol::T = Base.rtoldefault(T))::SensitivityReport{T} where {T}
Given a linear program model
with a current optimal basis, return a SensitivityReport
object, which maps:
- Every variable reference to a tuple (d_lo, d_hi)::Tuple{T,T}, explaining how much the objective coefficient of the corresponding variable can change by, such that the original basis remains optimal.
- Every constraint reference to a tuple (d_lo, d_hi)::Tuple{T,T}, explaining how much the right-hand side of the corresponding constraint can change by, such that the basis remains optimal.
Both tuples are relative, rather than absolute. So given an objective coefficient of 1.0
and a tuple (-0.5, 0.5)
, the objective coefficient can range between 1.0 - 0.5
and 1.0 + 0.5
.
atol
is the primal/dual optimality tolerance, and should match the tolerance of the solver used to compute the basis.
Note: interval constraints are NOT supported.
Example
julia> import HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> set_silent(model)
julia> @variable(model, -1 <= x <= 2)
x
julia> @objective(model, Min, x)
x
julia> optimize!(model)
julia> report = lp_sensitivity_report(model; atol = 1e-7);
julia> dx_lo, dx_hi = report[x]
(-1.0, Inf)
julia> println(
"The objective coefficient of `x` can decrease by $dx_lo or " *
"increase by $dx_hi."
)
The objective coefficient of `x` can decrease by -1.0 or increase by Inf.
julia> dRHS_lo, dRHS_hi = report[LowerBoundRef(x)]
(-Inf, 3.0)
julia> println(
"The lower bound of `x` can decrease by $dRHS_lo or increase " *
"by $dRHS_hi."
)
The lower bound of `x` can decrease by -Inf or increase by 3.0.
map_coefficients
JuMP.map_coefficients
— Functionmap_coefficients(f::Function, a::GenericAffExpr)
Apply f
to the coefficients and constant term of a GenericAffExpr
a
and return a new expression.
See also: map_coefficients_inplace!
Example
julia> model = Model();
julia> @variable(model, x);
julia> a = GenericAffExpr(1.0, x => 1.0)
x + 1
julia> map_coefficients(c -> 2 * c, a)
2 x + 2
julia> a
x + 1
map_coefficients(f::Function, a::GenericQuadExpr)
Apply f
to the coefficients and constant term of a GenericQuadExpr
a
and return a new expression.
See also: map_coefficients_inplace!
Example
julia> model = Model();
julia> @variable(model, x);
julia> a = @expression(model, x^2 + x + 1)
x² + x + 1
julia> map_coefficients(c -> 2 * c, a)
2 x² + 2 x + 2
julia> a
x² + x + 1
map_coefficients_inplace!
JuMP.map_coefficients_inplace!
— Functionmap_coefficients_inplace!(f::Function, a::GenericAffExpr)
Apply f
to the coefficients and constant term of a GenericAffExpr
a
and update them in-place.
See also: map_coefficients
Example
julia> model = Model();
julia> @variable(model, x);
julia> a = GenericAffExpr(1.0, x => 1.0)
x + 1
julia> map_coefficients_inplace!(c -> 2 * c, a)
2 x + 2
julia> a
2 x + 2
map_coefficients_inplace!(f::Function, a::GenericQuadExpr)
Apply f
to the coefficients and constant term of a GenericQuadExpr
a
and update them in-place.
See also: map_coefficients
Example
julia> model = Model();
julia> @variable(model, x);
julia> a = @expression(model, x^2 + x + 1)
x² + x + 1
julia> map_coefficients_inplace!(c -> 2 * c, a)
2 x² + 2 x + 2
julia> a
2 x² + 2 x + 2
mode
JuMP.mode
— Functionmode(model::GenericModel)
Return the ModelMode (AUTOMATIC, MANUAL, or DIRECT) of model.
model_convert
JuMP.model_convert
— Functionmodel_convert(
model::AbstractModel,
rhs::Union{
AbstractConstraint,
Number,
AbstractJuMPScalar,
MOI.AbstractSet,
},
)
Convert the coefficients and constants of functions and sets in the rhs
to the coefficient type value_type(typeof(model))
.
Purpose
Creating and adding a constraint is a two-step process. The first step calls build_constraint
, and the result of that is passed to add_constraint
.
However, because build_constraint
does not take the model
as an argument, the coefficients and constants of the function or set might be different than value_type(typeof(model))
.
Therefore, the result of build_constraint
is converted in a call to model_convert
before the result is passed to add_constraint
.
model_string
JuMP.model_string
— Functionmodel_string(mode::MIME, model::AbstractModel)
Return a String
representation of model
given the mode
.
moi_function
JuMP.moi_function
— Functionmoi_function(x)
Given a JuMP object x
, return the MathOptInterface equivalent.
See also: jump_function
.
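Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> f = moi_function(2x + 1);
julia> typeof(f)
MathOptInterface.ScalarAffineFunction{Float64}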
moi_function_type
JuMP.moi_function_type
— Functionmoi_function_type(::Type{T}) where {T}
Given a JuMP object type T
, return the MathOptInterface equivalent.
See also: jump_function_type
.
moi_set
JuMP.moi_set
— Functionmoi_set(constraint::AbstractConstraint)
Return the set of the constraint constraint
in the function-in-set form as a MathOptInterface.AbstractSet
.
moi_set(s::AbstractVectorSet, dim::Int)
Returns the MOI set of dimension dim
corresponding to the JuMP set s
.
moi_set(s::AbstractScalarSet)
Returns the MOI set corresponding to the JuMP set s
.
name
JuMP.name
— Functionname(con_ref::ConstraintRef)
Get a constraint's name attribute.
name(v::GenericVariableRef)::String
Get a variable's name attribute.
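Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, c, 2x <= 1);
julia> name(x)
"x"
julia> name(c)
"c"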
node_count
JuMP.node_count
— Functionnode_count(model::GenericModel)
Gets the total number of branch-and-bound nodes explored during the most recent optimization in a Mixed Integer Program.
Solvers must implement MOI.NodeCount()
to use this function.
nonlinear_constraint_string
JuMP.nonlinear_constraint_string
— Functionnonlinear_constraint_string(
model::GenericModel,
mode::MIME,
c::_NonlinearConstraint,
)
Return a string representation of the nonlinear constraint c
belonging to model
, given the mode
.
nonlinear_dual_start_value
JuMP.nonlinear_dual_start_value
— Functionnonlinear_dual_start_value(model::Model)
Return the current value of the MOI attribute MOI.NLPBlockDualStart
.
nonlinear_expr_string
JuMP.nonlinear_expr_string
— Functionnonlinear_expr_string(
model::GenericModel,
mode::MIME,
c::MOI.Nonlinear.Expression,
)
Return a string representation of the nonlinear expression c
belonging to model
, given the mode
.
nonlinear_model
JuMP.nonlinear_model
— Functionnonlinear_model(
model::GenericModel;
force::Bool = false,
)::Union{MOI.Nonlinear.Model,Nothing}
If model
has nonlinear components, return a MOI.Nonlinear.Model
, otherwise return nothing
.
If force
, always return a MOI.Nonlinear.Model
, and if one does not exist for the model, create an empty one.
normalized_coefficient
JuMP.normalized_coefficient
— Functionnormalized_coefficient(con_ref::ConstraintRef, variable::GenericVariableRef)
Return the coefficient associated with variable
in the constraint con_ref
after JuMP has normalized the constraint into its standard form. See also set_normalized_coefficient
.
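Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, con, 2x + 3x <= 2)
con : 5 x ≤ 2
julia> normalized_coefficient(con, x)
5.0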
normalized_rhs
JuMP.normalized_rhs
— Functionnormalized_rhs(con_ref::ConstraintRef)
Return the right-hand side term of con_ref
after JuMP has converted the constraint into its normalized form. See also set_normalized_rhs
.
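Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, con, 2x + 1 <= 2)
con : 2 x ≤ 1
julia> normalized_rhs(con)
1.0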
num_constraints
JuMP.num_constraints
— Functionnum_constraints(model::GenericModel, function_type, set_type)::Int64
Return the number of constraints currently in the model where the function has type function_type
and the set has type set_type
.
See also list_of_constraint_types
and all_constraints
.
Example
julia> model = Model();
julia> @variable(model, x >= 0, Bin);
julia> @variable(model, y);
julia> @constraint(model, y in MOI.GreaterThan(1.0));
julia> @constraint(model, y <= 1.0);
julia> @constraint(model, 2x <= 1);
julia> num_constraints(model, VariableRef, MOI.GreaterThan{Float64})
2
julia> num_constraints(model, VariableRef, MOI.ZeroOne)
1
julia> num_constraints(model, AffExpr, MOI.LessThan{Float64})
2
num_constraints(model::GenericModel; count_variable_in_set_constraints::Bool)
Return the number of constraints in model
.
If count_variable_in_set_constraints == true
, then VariableRef
constraints such as VariableRef
-in-Integer
are included. To count only the number of structural constraints (e.g., the rows in the constraint matrix of a linear program), pass count_variable_in_set_constraints = false
.
Example
julia> model = Model();
julia> @variable(model, x >= 0, Int);
julia> @constraint(model, 2x <= 1);
julia> num_constraints(model; count_variable_in_set_constraints = true)
3
julia> num_constraints(model; count_variable_in_set_constraints = false)
1
num_nonlinear_constraints
JuMP.num_nonlinear_constraints
— Functionnum_nonlinear_constraints(model::GenericModel)
Returns the number of nonlinear constraints associated with the model
.
num_variables
JuMP.num_variables
— Functionnum_variables(model::GenericModel)::Int64
Returns the number of variables in model
.
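Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x[1:3]);
julia> num_variables(model)
3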
object_dictionary
JuMP.object_dictionary
— Functionobject_dictionary(model::GenericModel)
Return the dictionary that maps the symbol name of a variable, constraint, or expression to the corresponding object.
Objects are registered to a specific symbol in the macros. For example, @variable(model, x[1:2, 1:2])
registers the array of variables x
to the symbol :x
.
This method should be defined for any subtype of AbstractModel
.
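Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> haskey(object_dictionary(model), :x)
true
julia> object_dictionary(model)[:x] === model[:x]
true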
objective_bound
JuMP.objective_bound
— Functionobjective_bound(model::GenericModel)
Return the best known bound on the optimal objective value after a call to optimize!(model)
.
For scalar-valued objectives, this function returns a Float64
. For vector-valued objectives, it returns a Vector{Float64}
.
In the case of a vector-valued objective, this returns the ideal point, that is, the point obtained if each objective was optimized independently.
objective_function
JuMP.objective_function
— Functionobjective_function(
model::GenericModel,
T::Type = objective_function_type(model),
)
Return an object of type T
representing the objective function.
Error if the objective is not convertible to type T
.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> @objective(model, Min, 2x + 1)
2 x + 1
julia> objective_function(model, AffExpr)
2 x + 1
julia> objective_function(model, QuadExpr)
2 x + 1
julia> typeof(objective_function(model, QuadExpr))
QuadExpr (alias for GenericQuadExpr{Float64, GenericVariableRef{Float64}})
The last two commands show that even though the objective function is affine, it can be queried as a quadratic function because it is convertible to one, and the result is returned as a quadratic expression.
However, it is not convertible to a variable.
julia> objective_function(model, VariableRef)
ERROR: InexactError: convert(MathOptInterface.VariableIndex, 1.0 + 2.0 MOI.VariableIndex(1))
[...]
objective_function_string
JuMP.objective_function_string
— Functionobjective_function_string(mode, model::AbstractModel)::String
Return a String
describing the objective function of the model.
objective_function_type
JuMP.objective_function_type
— Functionobjective_function_type(model::GenericModel)::AbstractJuMPScalar
Return the type of the objective function.
objective_sense
JuMP.objective_sense
— Functionobjective_sense(model::GenericModel)::MOI.OptimizationSense
Return the objective sense.
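Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> objective_sense(model)
FEASIBILITY_SENSE::OptimizationSense = 2
julia> @objective(model, Max, 2x + 1);
julia> objective_sense(model)
MAX_SENSE::OptimizationSense = 1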
objective_value
JuMP.objective_value
— Functionobjective_value(model::GenericModel; result::Int = 1)
Return the objective value associated with result index result
of the most-recent solution returned by the solver.
For scalar-valued objectives, this function returns a Float64
. For vector-valued objectives, it returns a Vector{Float64}
.
See also: result_count
.
op_ifelse
JuMP.op_ifelse
— Functionop_ifelse(a, x, y)
A function that falls back to ifelse(a, x, y)
, but when called with a JuMP variable or expression in the first argument, returns a GenericNonlinearExpr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> op_ifelse(true, 1.0, 2.0)
1.0
julia> op_ifelse(x, 1.0, 2.0)
ifelse(x, 1.0, 2.0)
julia> op_ifelse(true, x, 2.0)
x
operator_to_set
JuMP.operator_to_set
— Functionoperator_to_set(_error::Function, ::Val{sense_symbol})
Converts a sense symbol to a set set
such that @constraint(model, func sense_symbol 0)
is equivalent to @constraint(model, func in set)
for any func::AbstractJuMPScalar
.
Example
Once a custom set is defined you can directly create a JuMP constraint with it:
julia> struct CustomSet{T} <: MOI.AbstractScalarSet
value::T
end
julia> Base.copy(x::CustomSet) = CustomSet(x.value)
julia> model = Model();
julia> @variable(model, x)
x
julia> cref = @constraint(model, x in CustomSet(1.0))
x ∈ CustomSet{Float64}(1.0)
However, there might be an appropriate sign that could be used in order to provide a more convenient syntax:
julia> JuMP.operator_to_set(::Function, ::Val{:⊰}) = CustomSet(0.0)
julia> MOIU.supports_shift_constant(::Type{<:CustomSet}) = true
julia> MOIU.shift_constant(set::CustomSet, value) = CustomSet(set.value + value)
julia> cref = @constraint(model, x ⊰ 1)
x ∈ CustomSet{Float64}(1.0)
Note that the whole function is first moved to the right-hand side, then the sign is transformed into a set with zero constant and finally the constant is moved to the set with MOIU.shift_constant
.
operator_warn
JuMP.operator_warn
— Functionoperator_warn(model::AbstractModel)
operator_warn(model::GenericModel)
This function is called on the model whenever two affine expressions are added together without using destructive_add!
, and at least one of the two expressions has more than 50 terms.
For the case of Model
, if this function is called more than 20,000 times then a warning is generated once.
optimize!
JuMP.optimize!
— Functionoptimize!(
model::GenericModel;
ignore_optimize_hook = (model.optimize_hook === nothing),
_differentiation_backend::MOI.Nonlinear.AbstractAutomaticDifferentiation =
MOI.Nonlinear.SparseReverseMode(),
kwargs...,
)
Optimize the model.
If an optimizer has not been set yet (see set_optimizer
), a NoOptimizer
error is thrown.
If ignore_optimize_hook == true
, the optimize hook is ignored and the model is solved as if the hook was not set. Keyword arguments kwargs
are passed to the optimize_hook
. An error is thrown if optimize_hook
is nothing
and keyword arguments are provided.
Experimental features
These features may change or be removed in any future version of JuMP.
Pass _differentiation_backend
to set the MOI.Nonlinear.AbstractAutomaticDifferentiation
backend used to compute derivatives of nonlinear programs.
If you require only :ExprGraph
, it is more efficient to pass _differentiation_backend = MOI.Nonlinear.ExprGraphOnly()
.
optimizer_index
JuMP.optimizer_index
— Functionoptimizer_index(x::GenericVariableRef)::MOI.VariableIndex
optimizer_index(x::ConstraintRef{<:GenericModel})::MOI.ConstraintIndex
Return the index that corresponds to x
in the optimizer model.
Throws NoOptimizer
if no optimizer is set, and throws an ErrorException
if the optimizer is set but is not attached.
optimizer_with_attributes
JuMP.optimizer_with_attributes
— Functionoptimizer_with_attributes(optimizer_constructor, attrs::Pair...)
Groups an optimizer constructor with the list of attributes attrs
. Note that it is equivalent to MOI.OptimizerWithAttributes
.
When provided to the Model
constructor or to set_optimizer
, it creates an optimizer by calling optimizer_constructor()
, and then sets the attributes using set_attribute
.
See also: set_attribute
, get_attribute
.
Note
The string names of the attributes are specific to each solver. One should consult the solver's documentation to find the attributes of interest.
Example
julia> import HiGHS
julia> optimizer = optimizer_with_attributes(
HiGHS.Optimizer, "presolve" => "off", MOI.Silent() => true,
);
julia> model = Model(optimizer);
is equivalent to:
julia> import HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> set_attribute(model, "presolve", "off")
julia> set_attribute(model, MOI.Silent(), true)
owner_model
JuMP.owner_model
— Functionowner_model(s::AbstractJuMPScalar)
Return the model owning the scalar s
.
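Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> owner_model(x) === model
true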
parameter_value
JuMP.parameter_value
— Functionparameter_value(x::GenericVariableRef)
Return the value of the parameter x
.
Errors if x
is not a parameter.
See also ParameterRef
, is_parameter
, set_parameter_value
.
Examples
julia> model = Model();
julia> @variable(model, p in Parameter(2))
p
julia> parameter_value(p)
2.0
julia> set_parameter_value(p, 2.5)
julia> parameter_value(p)
2.5
parse_constraint
JuMP.parse_constraint
— Functionparse_constraint(_error::Function, expr::Expr)
The entry-point for all constraint-related parsing.
Arguments
- The _error function is passed everywhere to provide better error messages.
- expr comes from the @constraint macro. There are two possibilities:
  - @constraint(model, expr)
  - @constraint(model, name[args], expr)
  where expr is the main component of the constraint.
Supported syntax
JuMP currently supports the following expr
objects:
lhs <= rhs
lhs == rhs
lhs >= rhs
l <= body <= u
u >= body >= l
lhs ⟂ rhs
lhs in rhs
lhs ∈ rhs
z => {constraint}
!z => {constraint}
as well as all broadcasted variants.
Extensions
The infrastructure behind parse_constraint
is extendable. See parse_constraint_head
and parse_constraint_call
for details.
parse_constraint_call
JuMP.parse_constraint_call
— Functionparse_constraint_call(
_error::Function,
is_vectorized::Bool,
::Val{op},
args...,
)
Implement this method to intercept the parsing of a :call
expression with operator op
.
Extending the constraint macro at parse time is an advanced operation and has the potential to interfere with existing JuMP syntax. Please discuss with the developer chatroom before publishing any code that implements these methods.
Arguments
- _error: a function that accepts a String and throws the string as an error, along with some descriptive information of the macro from which it was thrown.
- is_vectorized: a boolean to indicate if op should be broadcast or not
- op: the first element of the .args field of the Expr to intercept
- args...: the .args field of the Expr.
Returns
This function must return:
- parse_code::Expr: an expression containing any setup or rewriting code that needs to be called before build_constraint
- build_code::Expr: an expression that calls build_constraint( or build_constraint.( depending on is_vectorized.
See also: parse_constraint_head
, build_constraint
parse_constraint_call(
_error::Function,
vectorized::Bool,
::Val{op},
lhs,
rhs,
) where {op}
Fallback handler for binary operators. These might be infix operators like @constraint(model, lhs op rhs)
, or normal operators like @constraint(model, op(lhs, rhs))
.
In both cases, we rewrite as lhs - rhs in operator_to_set(_error, op)
.
See operator_to_set
for details.
parse_constraint_head
JuMP.parse_constraint_head
— Functionparse_constraint_head(_error::Function, ::Val{head}, args...)
Implement this method to intercept the parsing of an expression with head head
.
Extending the constraint macro at parse time is an advanced operation and has the potential to interfere with existing JuMP syntax. Please discuss with the developer chatroom before publishing any code that implements these methods.
Arguments
- _error: a function that accepts a String and throws the string as an error, along with some descriptive information of the macro from which it was thrown.
- head: the .head field of the Expr to intercept
- args...: the .args field of the Expr.
Returns
This function must return:
- is_vectorized::Bool: whether the expression represents a broadcasted expression like x .<= 1
- parse_code::Expr: an expression containing any setup or rewriting code that needs to be called before build_constraint
- build_code::Expr: an expression that calls build_constraint( or build_constraint.( depending on is_vectorized.
Existing implementations
JuMP currently implements:
- ::Val{:call}, which forwards calls to parse_constraint_call
- ::Val{:comparison}, which handles the special case of l <= body <= u.
See also: parse_constraint_call
, build_constraint
parse_one_operator_variable
JuMP.parse_one_operator_variable
— Functionparse_one_operator_variable(_error::Function, infoexpr::_VariableInfoExpr, sense::Val{S}, value) where S
Update infoexpr
for a variable expression in the @variable
macro of the form variable name S value
.
parse_ternary_variable
JuMP.parse_ternary_variable
— Functionparse_ternary_variable(_error, variable_info, lhs_sense, lhs, rhs_sense, rhs)
A hook for JuMP extensions to intercept the parsing of a :comparison
expression, which has the form lhs lhs_sense variable rhs_sense rhs
.
parse_variable
JuMP.parse_variable
— Functionparse_variable(_error::Function, ::_VariableInfoExpr, args...)
A hook for extensions to intercept the parsing of inequality constraints in the @variable
macro.
primal_feasibility_report
JuMP.primal_feasibility_report
— Functionprimal_feasibility_report(
model::GenericModel{T},
point::AbstractDict{GenericVariableRef{T},T} = _last_primal_solution(model),
atol::T = zero(T),
skip_missing::Bool = false,
)::Dict{Any,T}
Given a dictionary point
, which maps variables to primal values, return a dictionary whose keys are the constraints with an infeasibility greater than the supplied tolerance atol
. The value corresponding to each key is the respective infeasibility. Infeasibility is defined as the distance between the primal value of the constraint (see MOI.ConstraintPrimal
) and the nearest point by Euclidean distance in the corresponding set.
Notes
- If skip_missing = true, constraints containing variables that are not in point will be ignored.
- If skip_missing = false and a partial primal solution is provided, an error will be thrown.
- If no point is provided, the primal solution from the last time the model was solved is used.
Example
julia> model = Model();
julia> @variable(model, 0.5 <= x <= 1);
julia> primal_feasibility_report(model, Dict(x => 0.2))
Dict{Any, Float64} with 1 entry:
x ≥ 0.5 => 0.3
primal_feasibility_report(
point::Function,
model::GenericModel{T};
atol::T = zero(T),
skip_missing::Bool = false,
) where {T}
A form of primal_feasibility_report
where a function is passed as the first argument instead of a dictionary as the second argument.
Example
julia> model = Model();
julia> @variable(model, 0.5 <= x <= 1, start = 1.3);
julia> primal_feasibility_report(model) do v
return start_value(v)
end
Dict{Any, Float64} with 1 entry:
x ≤ 1 => 0.3
primal_status
JuMP.primal_status
— Functionprimal_status(model::GenericModel; result::Int = 1)
Return a MOI.ResultStatusCode
describing the status of the most recent primal solution of the solver (i.e., the MOI.PrimalStatus
attribute) associated with the result index result
.
See also: result_count
.
print_active_bridges
JuMP.print_active_bridges
— Functionprint_active_bridges([io::IO = stdout,] model::GenericModel)
Print a list of the variable, constraint, and objective bridges that are currently used in the model.
print_active_bridges([io::IO = stdout,] model::GenericModel, ::Type{F}) where {F}
Print a list of bridges required for an objective function of type F
.
print_active_bridges(
[io::IO = stdout,]
model::GenericModel,
F::Type,
S::Type{<:MOI.AbstractSet},
)
Print a list of bridges required for a constraint of type F
-in-S
.
print_active_bridges(
[io::IO = stdout,]
model::GenericModel,
S::Type{<:MOI.AbstractSet},
)
Print a list of bridges required to add a variable constrained to the set S
.
print_bridge_graph
JuMP.print_bridge_graph
— Function print_bridge_graph([io::IO,] model::GenericModel)
Print the hyper-graph containing all variable, constraint, and objective types that could be obtained by bridging the variables, constraints, and objectives that are present in the model.
This function is intended for advanced users. If you want to see only the bridges that are currently used, use print_active_bridges
instead.
Explanation of output
Each node in the hyper-graph corresponds to a variable, constraint, or objective type.
- Variable nodes are indicated by
[ ]
- Constraint nodes are indicated by
( )
- Objective nodes are indicated by
| |
The number inside each pair of brackets is an index of the node in the hyper-graph.
Note that this hyper-graph is the full list of possible transformations. When the bridged model is created, we select the shortest hyper-path(s) from this graph, so many nodes may be un-used.
For more information, see Legat, B., Dowson, O., Garcia, J., and Lubin, M. (2020). "MathOptInterface: a data structure for mathematical optimization problems." URL: https://arxiv.org/abs/2002.03447
quad_terms
JuMP.quad_terms
— Functionquad_terms(quad::GenericQuadExpr{C, V})
Provides an iterator over tuples (coefficient::C, var_1::V, var_2::V)
in the quadratic part of the quadratic expression.
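Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> expr = @expression(model, 3x^2 + 2x + 1);
julia> for (coefficient, v1, v2) in quad_terms(expr)
           println("$coefficient * $v1 * $v2")
       end
3.0 * x * x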
raw_status
JuMP.raw_status
— Functionraw_status(model::GenericModel)
Return the reason why the solver stopped in its own words (i.e., the MathOptInterface model attribute RawStatusString
).
read_from_file
JuMP.read_from_file
— Functionread_from_file(
filename::String;
format::MOI.FileFormats.FileFormat = MOI.FileFormats.FORMAT_AUTOMATIC,
kwargs...,
)
Return a JuMP model read from filename
in the format format
.
If the filename ends in .gz
, it will be uncompressed using Gzip. If the filename ends in .bz2
, it will be uncompressed using BZip2.
Other kwargs
are passed to the Model
constructor of the chosen format.
reduced_cost
JuMP.reduced_cost
— Functionreduced_cost(x::GenericVariableRef{T})::T where {T}
Return the reduced cost associated with variable x
.
Equivalent to querying the shadow price of the active variable bound (if one exists and is active).
See also: shadow_price
.
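Example
A minimal sketch, assuming the HiGHS optimizer is installed:
julia> import HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> set_silent(model)
julia> @variable(model, x <= 1);
julia> @objective(model, Max, 2x);
julia> optimize!(model)
julia> reduced_cost(x)
2.0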
register
JuMP.register
— Functionregister(
model::Model,
op::Symbol,
dimension::Integer,
f::Function;
autodiff:Bool = false,
)
Register the user-defined function f
that takes dimension
arguments in model
as the symbol op
.
The function f
must support all subtypes of Real
as arguments. Do not assume that the inputs are Float64
.
Notes
- For this method, you must explicitly set autodiff = true, because no user-provided gradient function ∇f is given.
- Second-derivative information is only computed if dimension == 1.
- op does not have to be the same symbol as f, but it is generally more readable if it is.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> f(x::T) where {T<:Real} = x^2
f (generic function with 1 method)
julia> register(model, :foo, 1, f; autodiff = true)
julia> @NLobjective(model, Min, foo(x))
julia> model = Model();
julia> @variable(model, x[1:2])
2-element Vector{VariableRef}:
x[1]
x[2]
julia> g(x::T, y::T) where {T<:Real} = x * y
g (generic function with 1 method)
julia> register(model, :g, 2, g; autodiff = true)
julia> @NLobjective(model, Min, g(x[1], x[2]))
register(
model::Model,
s::Symbol,
dimension::Integer,
f::Function,
∇f::Function;
autodiff:Bool = false,
)
Register the user-defined function f
that takes dimension
arguments in model
as the symbol s
. In addition, provide a gradient function ∇f
.
The functions f
and ∇f
must support all subtypes of Real
as arguments. Do not assume that the inputs are Float64
.
Notes
- If the function f is univariate (i.e., dimension == 1), ∇f must return a number which represents the first-order derivative of the function f.
- If the function f is multi-variate, ∇f must have a signature matching ∇f(g::AbstractVector{T}, args::T...) where {T<:Real}, where the first argument is a vector g that is modified in-place with the gradient.
- If autodiff = true and dimension == 1, use automatic differentiation to compute the second-order derivative information. If autodiff = false, only first-order derivative information will be used.
- s does not have to be the same symbol as f, but it is generally more readable if it is.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> f(x::T) where {T<:Real} = x^2
f (generic function with 1 method)
julia> ∇f(x::T) where {T<:Real} = 2 * x
∇f (generic function with 1 method)
julia> register(model, :foo, 1, f, ∇f; autodiff = true)
julia> @NLobjective(model, Min, foo(x))
julia> model = Model();
julia> @variable(model, x[1:2])
2-element Vector{VariableRef}:
x[1]
x[2]
julia> g(x::T, y::T) where {T<:Real} = x * y
g (generic function with 1 method)
julia> function ∇g(g::AbstractVector{T}, x::T, y::T) where {T<:Real}
g[1] = y
g[2] = x
return
end
∇g (generic function with 1 method)
julia> register(model, :g, 2, g, ∇g)
julia> @NLobjective(model, Min, g(x[1], x[2]))
register(
model::Model,
s::Symbol,
dimension::Integer,
f::Function,
∇f::Function,
∇²f::Function,
)
Register the user-defined function f
that takes dimension
arguments in model
as the symbol s
. In addition, provide a gradient function ∇f
and a hessian function ∇²f
.
∇f
and ∇²f
must return numbers corresponding to the first- and second-order derivatives of the function f
respectively.
Notes
- Because automatic differentiation is not used, you can assume the inputs are all Float64.
- This method will throw an error if dimension > 1.
- s does not have to be the same symbol as f, but it is generally more readable if it is.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> f(x::Float64) = x^2
f (generic function with 1 method)
julia> ∇f(x::Float64) = 2 * x
∇f (generic function with 1 method)
julia> ∇²f(x::Float64) = 2.0
∇²f (generic function with 1 method)
julia> register(model, :foo, 1, f, ∇f, ∇²f)
julia> @NLobjective(model, Min, foo(x))
relative_gap
JuMP.relative_gap
— Functionrelative_gap(model::GenericModel)
Return the final relative optimality gap after a call to optimize!(model)
. The exact value depends on how the particular solver implements MathOptInterface.RelativeGap().
relax_integrality
JuMP.relax_integrality
— Functionrelax_integrality(model::GenericModel)
Modifies model
to "relax" all binary and integrality constraints on variables. Specifically,
- Binary constraints are deleted, and variable bounds are tightened if necessary to ensure the variable is constrained to the interval $[0, 1]$.
- Integrality constraints are deleted without modifying variable bounds.
- An error is thrown if semi-continuous or semi-integer constraints are present (support may be added for these in the future).
- All other constraints are ignored (left in place). This includes discrete constraints like SOS and indicator constraints.
Returns a function that can be called without any arguments to restore the original model. The behavior of this function is undefined if additional changes are made to the affected variables in the meantime.
Example
julia> model = Model();
julia> @variable(model, x, Bin);
julia> @variable(model, 1 <= y <= 10, Int);
julia> @objective(model, Min, x + y);
julia> undo_relax = relax_integrality(model);
julia> print(model)
Min x + y
Subject to
x ≥ 0
y ≥ 1
x ≤ 1
y ≤ 10
julia> undo_relax()
julia> print(model)
Min x + y
Subject to
y ≥ 1
y ≤ 10
y integer
x binary
relax_with_penalty!
JuMP.relax_with_penalty!
— Functionrelax_with_penalty!(
model::GenericModel{T},
[penalties::Dict{ConstraintRef,T}];
[default::Union{Nothing,Real} = nothing,]
) where {T}
Destructively modify the model in-place to create a penalized relaxation of the constraints.
This is a destructive routine that modifies the model in-place. If you don't want to modify the original model, use copy_model
to create a copy before calling relax_with_penalty!
.
Reformulation
See MOI.Utilities.ScalarPenaltyRelaxation
for details of the reformulation.
For each constraint ci
, the penalty passed to MOI.Utilities.ScalarPenaltyRelaxation
is get(penalties, ci, default)
. If the value is nothing
, because ci
does not exist in penalties
and default = nothing
, then the constraint is skipped.
Return value
This function returns a Dict{ConstraintRef,AffExpr}
that maps each constraint index to the corresponding y + z
as an AffExpr
. In an optimal solution, query the value of these functions to compute the violation of each constraint.
Relax a subset of constraints
To relax a subset of constraints, pass a penalties
dictionary and set default = nothing
.
Example
julia> function new_model()
model = Model()
@variable(model, x)
@objective(model, Max, 2x + 1)
@constraint(model, c1, 2x - 1 <= -2)
@constraint(model, c2, 3x >= 0)
return model
end
new_model (generic function with 1 method)
julia> model_1 = new_model();
julia> penalty_map = relax_with_penalty!(model_1; default = 2.0);
julia> penalty_map[model_1[:c1]]
_[3]
julia> penalty_map[model_1[:c2]]
_[2]
julia> print(model_1)
Max 2 x - 2 _[2] - 2 _[3] + 1
Subject to
c2 : 3 x + _[2] ≥ 0
c1 : 2 x - _[3] ≤ -1
_[2] ≥ 0
_[3] ≥ 0
julia> model_2 = new_model();
julia> relax_with_penalty!(model_2, Dict(model_2[:c2] => 3.0))
Dict{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.GreaterThan{Float64}}, ScalarShape}, AffExpr} with 1 entry:
c2 : 3 x + _[2] ≥ 0 => _[2]
julia> print(model_2)
Max 2 x - 3 _[2] + 1
Subject to
c2 : 3 x + _[2] ≥ 0
c1 : 2 x ≤ -1
_[2] ≥ 0
remove_bridge
JuMP.remove_bridge
— Functionremove_bridge(
model::GenericModel{S},
BT::Type{<:MOI.Bridges.AbstractBridge};
coefficient_type::Type{T} = S,
) where {S,T}
Remove BT{T}
from the list of bridges that can be used to transform unsupported constraints into an equivalent formulation using only constraints supported by the optimizer.
See also: add_bridge
.
Example
julia> model = Model();
julia> add_bridge(model, MOI.Bridges.Constraint.SOCtoNonConvexQuadBridge)
julia> remove_bridge(model, MOI.Bridges.Constraint.SOCtoNonConvexQuadBridge)
julia> add_bridge(
model,
MOI.Bridges.Constraint.NumberConversionBridge;
coefficient_type = Complex{Float64},
)
julia> remove_bridge(
model,
MOI.Bridges.Constraint.NumberConversionBridge;
coefficient_type = Complex{Float64},
)
reshape_set
JuMP.reshape_set
— Functionreshape_set(vectorized_set::MOI.AbstractSet, shape::AbstractShape)
Return a set in its original shape shape
given its vectorized form vectorized_set
.
Example
Given a SymmetricMatrixShape
of vectorized form [1, 2, 3] in MOI.PositiveSemidefiniteConeTriangle(2)
, the following code returns the set of the original constraint Symmetric(Matrix[1 2; 2 3]) in PSDCone()
:
julia> reshape_set(MOI.PositiveSemidefiniteConeTriangle(2), SymmetricMatrixShape(2))
PSDCone()
reshape_vector
JuMP.reshape_vector
— Functionreshape_vector(vectorized_form::Vector, shape::AbstractShape)
Return an object in its original shape shape
given its vectorized form vectorized_form
.
Example
Given a SymmetricMatrixShape
of vectorized form [1, 2, 3]
, the following code returns the matrix Symmetric(Matrix[1 2; 2 3])
:
julia> reshape_vector([1, 2, 3], SymmetricMatrixShape(2))
2×2 LinearAlgebra.Symmetric{Int64, Matrix{Int64}}:
1 2
2 3
result_count
JuMP.result_count
— Functionresult_count(model::GenericModel)
Return the number of results available to query after a call to optimize!
.
reverse_sense
JuMP.reverse_sense
— Functionreverse_sense(::Val{T}) where {T}
Given an (in)equality symbol T
, return a new Val
object with the opposite (in)equality symbol.
set_attribute
JuMP.set_attribute
— Functionset_attribute(model::GenericModel, attr::MOI.AbstractModelAttribute, value)
set_attribute(x::GenericVariableRef, attr::MOI.AbstractVariableAttribute, value)
set_attribute(cr::ConstraintRef, attr::MOI.AbstractConstraintAttribute, value)
Set the value of a solver-specific attribute attr
to value
.
This is equivalent to calling MOI.set
with the associated MOI model and, for variables and constraints, with the associated MOI.VariableIndex
or MOI.ConstraintIndex
.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> @constraint(model, c, 2 * x <= 1)
c : 2 x ≤ 1
julia> set_attribute(model, MOI.Name(), "model_new")
julia> set_attribute(x, MOI.VariableName(), "x_new")
julia> set_attribute(c, MOI.ConstraintName(), "c_new")
set_attribute(
model::Union{GenericModel,MOI.OptimizerWithAttributes},
attr::Union{AbstractString,MOI.AbstractOptimizerAttribute},
value,
)
Set the value of a solver-specific attribute attr
to value
.
This is equivalent to calling MOI.set
with the associated MOI model.
If attr
is an AbstractString
, it is converted to MOI.RawOptimizerAttribute
.
Example
julia> import HiGHS
julia> opt = optimizer_with_attributes(HiGHS.Optimizer, "output_flag" => false);
julia> model = Model(opt);
julia> set_attribute(model, "output_flag", false)
julia> set_attribute(model, MOI.RawOptimizerAttribute("output_flag"), true)
julia> set_attribute(opt, "output_flag", true)
julia> set_attribute(opt, MOI.RawOptimizerAttribute("output_flag"), false)
set_attributes
JuMP.set_attributes
— Functionset_attributes(
destination::Union{
GenericModel,
MOI.OptimizerWithAttributes,
GenericVariableRef,
ConstraintRef,
},
pairs::Pair...,
)
Given a list of attribute => value
pairs, calls set_attribute(destination, attribute, value)
for each pair.
See also: set_attribute
, get_attribute
.
Example
julia> import Ipopt
julia> model = Model(Ipopt.Optimizer);
julia> set_attributes(model, "tol" => 1e-4, "max_iter" => 100)
is equivalent to:
julia> import Ipopt
julia> model = Model(Ipopt.Optimizer);
julia> set_attribute(model, "tol", 1e-4)
julia> set_attribute(model, "max_iter", 100)
set_binary
JuMP.set_binary
— Functionset_binary(v::GenericVariableRef)
Add a constraint on the variable v
that it must take values in the set $\{0,1\}$.
See also BinaryRef
, is_binary
, unset_binary
.
Examples
julia> model = Model();
julia> @variable(model, x);
julia> is_binary(x)
false
julia> set_binary(x)
julia> is_binary(x)
true
set_dual_start_value
JuMP.set_dual_start_value
— Functionset_dual_start_value(con_ref::ConstraintRef, value)
Set the dual start value (MOI attribute ConstraintDualStart
) of the constraint con_ref
to value
. To remove a dual start value set it to nothing
.
See also dual_start_value
.
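Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, c, x >= 1);
julia> set_dual_start_value(c, 2.0)
julia> dual_start_value(c)
2.0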
set_integer
JuMP.set_integer
— Functionset_integer(variable_ref::GenericVariableRef)
Add an integrality constraint on the variable variable_ref
.
See also IntegerRef
, is_integer
, unset_integer
.
Examples
julia> model = Model();
julia> @variable(model, x);
julia> is_integer(x)
false
julia> set_integer(x)
julia> is_integer(x)
true
set_lower_bound
JuMP.set_lower_bound
— Functionset_lower_bound(v::GenericVariableRef, lower::Number)
Set the lower bound of a variable. If one does not exist, create a new lower bound constraint.
See also LowerBoundRef
, has_lower_bound
, lower_bound
, delete_lower_bound
.
Examples
julia> model = Model();
julia> @variable(model, x >= 1.0);
julia> lower_bound(x)
1.0
julia> set_lower_bound(x, 2.0)
julia> lower_bound(x)
2.0
set_name
JuMP.set_name
— Functionset_name(con_ref::ConstraintRef, s::AbstractString)
Set a constraint's name attribute.
set_name(v::GenericVariableRef, s::AbstractString)
Set a variable's name attribute.
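Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, c, 2x <= 1);
julia> set_name(x, "x_new")
julia> set_name(c, "c_new")
julia> name(x)
"x_new"
julia> name(c)
"c_new"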
set_nonlinear_dual_start_value
JuMP.set_nonlinear_dual_start_value
— Functionset_nonlinear_dual_start_value(
model::Model,
start::Union{Nothing,Vector{Float64}},
)
Set the value of the MOI attribute MOI.NLPBlockDualStart
.
The start vector corresponds to the Lagrangian duals of the nonlinear constraints, in the order given by all_nonlinear_constraints
. That is, you must pass a single start vector corresponding to all of the nonlinear constraints in a single function call; you cannot set the dual start value of nonlinear constraints one-by-one. The example below demonstrates how to use all_nonlinear_constraints
to create a mapping between the nonlinear constraint references and the start vector.
Pass nothing
to unset a previous start.
Example
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> nl1 = @NLconstraint(model, x[1] <= sqrt(x[2]));
julia> nl2 = @NLconstraint(model, x[1] >= exp(x[2]));
julia> start = Dict(nl1 => -1.0, nl2 => 1.0);
julia> start_vector = [start[con] for con in all_nonlinear_constraints(model)]
2-element Vector{Float64}:
-1.0
1.0
julia> set_nonlinear_dual_start_value(model, start_vector)
julia> nonlinear_dual_start_value(model)
2-element Vector{Float64}:
-1.0
1.0
set_nonlinear_objective
JuMP.set_nonlinear_objective
— Functionset_nonlinear_objective(
model::Model,
sense::MOI.OptimizationSense,
expr::Expr,
)
Set the nonlinear objective of model
to the expression expr
, with the optimization sense sense
.
This function is most useful if the expression expr
is generated programmatically, and you cannot use @NLobjective
.
Notes
- You must interpolate the variables directly into the expression expr.
- You must use MIN_SENSE or MAX_SENSE instead of Min and Max.
Example
julia> model = Model();
julia> @variable(model, x);
julia> set_nonlinear_objective(model, MIN_SENSE, :($(x) + $(x)^2))
set_normalized_coefficient
JuMP.set_normalized_coefficient
— Functionset_normalized_coefficient(con_ref::ConstraintRef, variable::GenericVariableRef, value)
Set the coefficient of variable
in the constraint con_ref
to value
.
Note that prior to this step, JuMP will aggregate multiple terms containing the same variable. For example, given a constraint 2x + 3x <= 2
, set_normalized_coefficient(con, x, 4)
will create the constraint 4x <= 2
.
julia> model = Model();
julia> @variable(model, x)
x
julia> @constraint(model, con, 2x + 3x <= 2)
con : 5 x ≤ 2
julia> set_normalized_coefficient(con, x, 4)
julia> con
con : 4 x ≤ 2
set_normalized_coefficients
JuMP.set_normalized_coefficients
— Functionset_normalized_coefficients(
con_ref::ConstraintRef,
variable,
new_coefficients::Vector{Tuple{Int64,T}},
)
Set the coefficients of variable
in the constraint con_ref
to new_coefficients
, where each element in new_coefficients
is a tuple which maps the row to a new coefficient.
Note that prior to this step, during constraint creation, JuMP will aggregate multiple terms containing the same variable.
julia> model = Model();
julia> @variable(model, x)
x
julia> @constraint(model, con, [2x + 3x, 4x] in MOI.Nonnegatives(2))
con : [5 x, 4 x] ∈ MathOptInterface.Nonnegatives(2)
julia> set_normalized_coefficients(con, x, [(1, 2.0), (2, 5.0)])
julia> con
con : [2 x, 5 x] ∈ MathOptInterface.Nonnegatives(2)
set_normalized_rhs
JuMP.set_normalized_rhs
— Functionset_normalized_rhs(con_ref::ConstraintRef, value)
Set the right-hand side term of con_ref
to value
.
Note that prior to this step, JuMP will aggregate all constant terms onto the right-hand side of the constraint. For example, given a constraint 2x + 1 <= 2
, set_normalized_rhs(con, 4)
will create the constraint 2x <= 4
, not 2x + 1 <= 4
.
julia> model = Model();
julia> @variable(model, x);
julia> @constraint(model, con, 2x + 1 <= 2)
con : 2 x ≤ 1
julia> set_normalized_rhs(con, 4)
julia> con
con : 2 x ≤ 4
set_objective
JuMP.set_objective
— Functionset_objective(model::AbstractModel, sense::MOI.OptimizationSense, func)
The functional equivalent of the @objective
macro.
Sets the objective sense and objective function simultaneously, and is equivalent to calling set_objective_sense
and set_objective_function
separately.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> set_objective(model, MIN_SENSE, x)
set_objective_coefficient
JuMP.set_objective_coefficient
— Functionset_objective_coefficient(model::GenericModel, variable::GenericVariableRef, coefficient::Real)
Set the linear objective coefficient associated with variable
to coefficient
.
Note: this function will throw an error if a nonlinear objective is set.
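Example
A minimal usage sketch:
julia> model = Model();
julia> @variable(model, x);
julia> @objective(model, Min, 2x + 1);
julia> set_objective_coefficient(model, x, 3)
julia> objective_function(model)
3 x + 1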
set_objective_function
JuMP.set_objective_function
— Functionset_objective_function(model::GenericModel, func::MOI.AbstractFunction)
set_objective_function(model::GenericModel, func::AbstractJuMPScalar)
set_objective_function(model::GenericModel, func::Real)
set_objective_function(model::GenericModel, func::Vector{<:AbstractJuMPScalar})
Sets the objective function of the model to the given function. See set_objective_sense
to set the objective sense. These are low-level functions; the recommended way to set the objective is with the @objective
macro.
set_objective_sense
JuMP.set_objective_sense
— Functionset_objective_sense(model::GenericModel, sense::MOI.OptimizationSense)
Sets the objective sense of the model to the given sense. See set_objective_function
to set the objective function. These are low-level functions; the recommended way to set the objective is with the @objective
macro.
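Example
A minimal sketch using both low-level functions together:
julia> model = Model();
julia> @variable(model, x);
julia> set_objective_sense(model, MIN_SENSE)
julia> set_objective_function(model, 2x + 1)
julia> objective_function(model)
2 x + 1
julia> objective_sense(model)
MIN_SENSE::OptimizationSense = 0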
set_optimize_hook
JuMP.set_optimize_hook
— Functionset_optimize_hook(model::GenericModel, f::Union{Function,Nothing})
Set the function f
as the optimize hook for model
.
f
should have a signature f(model::GenericModel; kwargs...)
, where the kwargs
are those passed to optimize!
.
Notes
- The optimize hook should generally modify the model, or some external state in some way, and then call optimize!(model; ignore_optimize_hook = true) to optimize the problem, bypassing the hook.
- Use set_optimize_hook(model, nothing) to unset an optimize hook.
Example
julia> model = Model();
julia> function my_hook(model::Model; kwargs...)
println(kwargs)
println("Calling with `ignore_optimize_hook = true`")
optimize!(model; ignore_optimize_hook = true)
return
end
my_hook (generic function with 1 method)
julia> set_optimize_hook(model, my_hook)
my_hook (generic function with 1 method)
julia> optimize!(model; test_arg = true)
Base.Pairs{Symbol, Bool, Tuple{Symbol}, NamedTuple{(:test_arg,), Tuple{Bool}}}(:test_arg => 1)
Calling with `ignore_optimize_hook = true`
ERROR: NoOptimizer()
[...]
set_optimizer
JuMP.set_optimizer
— Functionset_optimizer(
model::GenericModel,
optimizer_factory;
add_bridges::Bool = true,
)
Creates an empty MathOptInterface.AbstractOptimizer
instance by calling optimizer_factory()
and sets it as the optimizer of model
. Specifically, optimizer_factory
must be callable with zero arguments and return an empty MathOptInterface.AbstractOptimizer
.
If add_bridges
is true, constraints and objectives that are not supported by the optimizer are automatically bridged to equivalent supported formulation. Passing add_bridges = false
can improve performance if the solver natively supports all of the elements in model
.
See set_attribute
for setting solver-specific parameters of the optimizer.
Example
julia> import HiGHS
julia> model = Model();
julia> set_optimizer(model, () -> HiGHS.Optimizer())
julia> set_optimizer(model, HiGHS.Optimizer; add_bridges = false)
set_optimizer_attribute
JuMP.set_optimizer_attribute
— Functionset_optimizer_attribute(
model::Union{GenericModel,MOI.OptimizerWithAttributes},
attr::Union{AbstractString,MOI.AbstractOptimizerAttribute},
value,
)
Set the solver-specific attribute attr
in model
to value
.
If attr
is an AbstractString
, this is equivalent to set_optimizer_attribute(model, MOI.RawOptimizerAttribute(name), value)
.
This method will remain in all v1.X releases of JuMP, but it may be removed in a future v2.0 release. We recommend using set_attribute
instead.
See also: set_optimizer_attributes
, get_optimizer_attribute
.
Example
julia> model = Model();
julia> set_optimizer_attribute(model, MOI.Silent(), true)
set_optimizer_attributes
JuMP.set_optimizer_attributes
— Functionset_optimizer_attributes(
model::Union{GenericModel,MOI.OptimizerWithAttributes},
pairs::Pair...,
)
Given a list of attribute => value
pairs, calls set_optimizer_attribute(model, attribute, value)
for each pair.
This method will remain in all v1.X releases of JuMP, but it may be removed in a future v2.0 release. We recommend using set_attributes
instead.
See also: set_optimizer_attribute
, get_optimizer_attribute
.
Example
julia> import Ipopt
julia> model = Model(Ipopt.Optimizer);
julia> set_optimizer_attributes(model, "tol" => 1e-4, "max_iter" => 100)
is equivalent to:
julia> import Ipopt
julia> model = Model(Ipopt.Optimizer);
julia> set_optimizer_attribute(model, "tol", 1e-4)
julia> set_optimizer_attribute(model, "max_iter", 100)
set_parameter_value
JuMP.set_parameter_value
— Functionset_parameter_value(x::GenericVariableRef, value)
Update the parameter constraint on the variable x
to value
.
Errors if x
is not a parameter.
See also ParameterRef
, is_parameter
, parameter_value
.
Examples
julia> model = Model();
julia> @variable(model, p in Parameter(2))
p
julia> parameter_value(p)
2.0
julia> set_parameter_value(p, 2.5)
julia> parameter_value(p)
2.5
set_silent
JuMP.set_silent
— Functionset_silent(model::GenericModel)
Takes precedence over any other attribute controlling verbosity and requires the solver to produce no output.
See also: unset_silent
.
set_start_value
JuMP.set_start_value
— Functionset_start_value(con_ref::ConstraintRef, value)
Set the primal start value (MOI.ConstraintPrimalStart
) of the constraint con_ref
to value
. To remove a primal start value set it to nothing
.
See also start_value
.
set_start_value(variable::GenericVariableRef, value::Union{Real,Nothing})
Set the start value (MOI attribute VariablePrimalStart
) of the variable
to value
.
Pass nothing
to unset the start value.
Note: VariablePrimalStarts are sometimes called "MIP-starts" or "warmstarts".
See also start_value
.
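Example
A minimal usage sketch for the variable method:
julia> model = Model();
julia> @variable(model, x);
julia> set_start_value(x, 1.5)
julia> start_value(x)
1.5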
set_start_values
JuMP.set_start_values
— Functionset_start_values(
model::GenericModel;
variable_primal_start::Union{Nothing,Function} = value,
constraint_primal_start::Union{Nothing,Function} = value,
constraint_dual_start::Union{Nothing,Function} = dual,
nonlinear_dual_start::Union{Nothing,Function} = nonlinear_dual_start_value,
)
Set the primal and dual starting values in model
using the functions provided.
If any keyword argument is nothing
, the corresponding start value is skipped.
If the optimizer does not support setting the starting value, the value will be skipped.
variable_primal_start
This function controls the primal starting solution for the variables. It is equivalent to calling set_start_value
for each variable, or setting the MOI.VariablePrimalStart
attribute.
If it is a function, it must have the form variable_primal_start(x::VariableRef)
that maps each variable x
to the starting primal value.
The default is value
.
constraint_primal_start
This function controls the primal starting solution for the constraints. It is equivalent to calling set_start_value
for each constraint, or setting the MOI.ConstraintPrimalStart
attribute.
If it is a function, it must have the form constraint_primal_start(ci::ConstraintRef)
that maps each constraint ci
to the starting primal value.
The default is value
.
constraint_dual_start
This function controls the dual starting solution for the constraints. It is equivalent to calling set_dual_start_value
for each constraint, or setting the MOI.ConstraintDualStart
attribute.
If it is a function, it must have the form constraint_dual_start(ci::ConstraintRef)
that maps each constraint ci
to the starting dual value.
The default is dual
.
nonlinear_dual_start
This function controls the dual starting solution for the nonlinear constraints. It is equivalent to calling set_nonlinear_dual_start_value
.
If it is a function, it must have the form nonlinear_dual_start(model::GenericModel)
that returns a vector corresponding to the dual start of the constraints.
The default is nonlinear_dual_start_value
.
set_string_names_on_creation
JuMP.set_string_names_on_creation
— Functionset_string_names_on_creation(model::GenericModel, value::Bool)
Set the default argument of the set_string_name
keyword in the @variable
and @constraint
macros to value
. This is used to determine whether to assign String
names to all variables and constraints in model
.
By default, value
is true
. However, for larger models calling set_string_names_on_creation(model, false)
can improve performance at the cost of reducing the readability of printing and solver log messages.
set_time_limit_sec
JuMP.set_time_limit_sec
— Functionset_time_limit_sec(model::GenericModel, limit::Float64)
Set the time limit (in seconds) of the solver.
Can be unset using unset_time_limit_sec
or with limit
set to nothing
.
See also: unset_time_limit_sec
, time_limit_sec
.
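Example
A minimal sketch, assuming the Ipopt optimizer used elsewhere in this documentation:
julia> import Ipopt
julia> model = Model(Ipopt.Optimizer);
julia> set_time_limit_sec(model, 60.0)
julia> time_limit_sec(model)
60.0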
set_upper_bound
JuMP.set_upper_bound
— Functionset_upper_bound(v::GenericVariableRef, upper::Number)
Set the upper bound of a variable. If one does not exist, create an upper bound constraint.
See also UpperBoundRef
, has_upper_bound
, upper_bound
, delete_upper_bound
.
Examples
julia> model = Model();
julia> @variable(model, x <= 1.0);
julia> upper_bound(x)
1.0
julia> set_upper_bound(x, 2.0)
julia> upper_bound(x)
2.0
set_value
JuMP.set_value
— Functionset_value(p::NonlinearParameter, v::Number)
Store the value v
in the nonlinear parameter p
.
Example
julia> model = Model();
julia> @NLparameter(model, p == 0)
p == 0.0
julia> set_value(p, 5)
5
julia> value(p)
5.0
shadow_price
JuMP.shadow_price
— Functionshadow_price(con_ref::ConstraintRef)
Return the change in the objective from an infinitesimal relaxation of the constraint.
This value is computed from dual
and can be queried only when has_duals
is true
and the objective sense is MIN_SENSE
or MAX_SENSE
(not FEASIBILITY_SENSE
). For linear constraints, the shadow prices differ at most in sign from the dual
value depending on the objective sense.
See also reduced_cost
.
Notes
- The function simply translates signs from dual and does not validate the conditions needed to guarantee the sensitivity interpretation of the shadow price. The caller is responsible, e.g., for checking whether the solver converged to an optimal primal-dual pair or a proof of infeasibility.
- The computation is based on the current objective sense of the model. If this has changed since the last solve, the results will be incorrect.
- Relaxation of equality constraints (and hence the shadow price) is defined based on which sense of the equality constraint is active.
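Example
A sketch assuming the HiGHS optimizer; the printed value is the expected shadow price for this model, since relaxing x ≤ 1 by one unit improves the maximal objective by 2:
julia> import HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> set_silent(model)
julia> @variable(model, x);
julia> @constraint(model, c, x <= 1)
c : x ≤ 1
julia> @objective(model, Max, 2x)
2 x
julia> optimize!(model)
julia> shadow_price(c)
2.0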
shape
JuMP.shape
— Functionshape(c::AbstractConstraint)::AbstractShape
Return the shape of the constraint c
.
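Example
A minimal sketch; the constraint object of a scalar constraint has ScalarShape:
julia> model = Model();
julia> @variable(model, x);
julia> c = @constraint(model, 2x <= 1);
julia> shape(constraint_object(c))
ScalarShape()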
show_backend_summary
JuMP.show_backend_summary
— Functionshow_backend_summary(io::IO, model::GenericModel)
Print a summary of the optimizer backing model
.
AbstractModel
s should implement this method.
show_constraints_summary
JuMP.show_constraints_summary
— Functionshow_constraints_summary(io::IO, model::AbstractModel)
Write to io
a summary of the number of constraints.
show_objective_function_summary
JuMP.show_objective_function_summary
— Functionshow_objective_function_summary(io::IO, model::AbstractModel)
Write to io
a summary of the objective function type.
simplex_iterations
JuMP.simplex_iterations
— Functionsimplex_iterations(model::GenericModel)
Gets the cumulative number of simplex iterations during the most-recent optimization.
Solvers must implement MOI.SimplexIterations()
to use this function.
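Example
A sketch assuming the HiGHS optimizer; the reported count depends on the solver and the model (a trivial LP may need zero iterations):
julia> import HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> set_silent(model)
julia> @variable(model, x >= 0);
julia> @objective(model, Min, x);
julia> optimize!(model)
julia> simplex_iterations(model)
0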
solution_summary
JuMP.solution_summary
— Functionsolution_summary(model::GenericModel; result::Int = 1, verbose::Bool = false)
Return a struct that can be used to print a summary of the solution in result result
.
If verbose=true
, write out the primal solution for every variable and the dual solution for every constraint, excluding those with empty names.
Example
When called at the REPL, the summary is automatically printed:
julia> model = Model();
julia> solution_summary(model)
* Solver : No optimizer attached.
* Status
Result count : 0
Termination status : OPTIMIZE_NOT_CALLED
Message from the solver:
"optimize not called"
* Candidate solution (result #1)
Primal status : NO_SOLUTION
Dual status : NO_SOLUTION
* Work counters
Use print
to force the printing of the summary from inside a function:
julia> model = Model();
julia> function foo(model)
print(solution_summary(model))
return
end
foo (generic function with 1 method)
julia> foo(model)
* Solver : No optimizer attached.
* Status
Result count : 0
Termination status : OPTIMIZE_NOT_CALLED
Message from the solver:
"optimize not called"
* Candidate solution (result #1)
Primal status : NO_SOLUTION
Dual status : NO_SOLUTION
* Work counters
solve_time
JuMP.solve_time
— Functionsolve_time(model::GenericModel)
If available, returns the solve time reported by the solver. Throws an error of the form "ArgumentError: ModelLike of type Solver.Optimizer
does not support accessing the attribute MathOptInterface.SolveTimeSec()" if the attribute is not implemented by the solver.
solver_name
JuMP.solver_name
— Functionsolver_name(model::GenericModel)
If available, returns the SolverName
property of the underlying optimizer.
Returns "No optimizer attached"
in AUTOMATIC
or MANUAL
modes when no optimizer is attached.
Returns "SolverName() attribute not implemented by the optimizer."
if the attribute is not implemented.
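Example
For example, with the Ipopt optimizer:
julia> import Ipopt
julia> model = Model(Ipopt.Optimizer);
julia> solver_name(model)
"Ipopt"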
start_value
JuMP.start_value
— Functionstart_value(con_ref::ConstraintRef)
Return the primal start value (MOI.ConstraintPrimalStart
) of the constraint con_ref
.
Note: If no primal start value has been set, start_value
will return nothing
.
See also set_start_value
.
start_value(v::GenericVariableRef)
Return the start value (MOI attribute VariablePrimalStart
) of the variable v
.
Note: VariablePrimalStart
s are sometimes called "MIP-starts" or "warmstarts".
See also set_start_value
.
termination_status
JuMP.termination_status
— Functiontermination_status(model::GenericModel)
Return a MOI.TerminationStatusCode
describing why the solver stopped (i.e., the MOI.TerminationStatus
attribute).
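Example
Before optimize! is called, the status is OPTIMIZE_NOT_CALLED:
julia> model = Model();
julia> termination_status(model)
OPTIMIZE_NOT_CALLED::TerminationStatusCode = 0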
time_limit_sec
JuMP.time_limit_sec
— Functiontime_limit_sec(model::GenericModel)
Return the time limit (in seconds) of the model
.
Returns nothing
if unset.
See also: set_time_limit_sec
, unset_time_limit_sec
.
triangle_vec
JuMP.triangle_vec
— Functiontriangle_vec(matrix::Matrix)
Return the upper triangle of a matrix concatenated into a vector in the order required by JuMP and MathOptInterface for Triangle
sets.
Example
julia> model = Model();
julia> @variable(model, X[1:3, 1:3], Symmetric);
julia> @variable(model, t)
t
julia> @constraint(model, [t; triangle_vec(X)] in MOI.RootDetConeTriangle(3))
[t, X[1,1], X[1,2], X[2,2], X[1,3], X[2,3], X[3,3]] ∈ MathOptInterface.RootDetConeTriangle(3)
unfix
JuMP.unfix
— Functionunfix(v::GenericVariableRef)
Delete the fixing constraint of a variable.
Error if one does not exist.
See also FixRef
, is_fixed
, fix_value
, fix
.
Examples
julia> model = Model();
julia> @variable(model, x == 1);
julia> is_fixed(x)
true
julia> unfix(x)
julia> is_fixed(x)
false
unregister
JuMP.unregister
— Functionunregister(model::GenericModel, key::Symbol)
Unregister the name key
from model
so that a new variable, constraint, or expression can be created with the same key.
Note that this will not delete the object model[key]
; it will just remove the reference at model[key]
. To delete the object, use delete
as well.
See also: delete
, object_dictionary
.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> @variable(model, x)
ERROR: An object of name x is already attached to this model. If this
is intended, consider using the anonymous construction syntax, e.g.,
`x = @variable(model, [1:N], ...)` where the name of the object does
not appear inside the macro.
Alternatively, use `unregister(model, :x)` to first unregister
the existing name from the model. Note that this will not delete the
object; it will just remove the reference at `model[:x]`.
Stacktrace:
[...]
julia> num_variables(model)
1
julia> unregister(model, :x)
julia> @variable(model, x)
x
julia> num_variables(model)
2
unsafe_backend
JuMP.unsafe_backend
— Functionunsafe_backend(model::GenericModel)
Return the innermost optimizer associated with the JuMP model model
.
This function should only be used by advanced users looking to access low-level solver-specific functionality. It has a high-risk of incorrect usage. We strongly suggest you use the alternative suggested below.
See also: backend
.
Unsafe behavior
This function is unsafe for two main reasons.
First, the formulation and order of variables and constraints in the unsafe backend may be different to the variables and constraints in model
. This can happen because of bridges, or because the solver requires the variables or constraints in a specific order. In addition, the variable or constraint index returned by index
at the JuMP level may be different to the index of the corresponding variable or constraint in the unsafe_backend
. There is no solution to this. Use the alternative suggested below instead.
Second, the unsafe_backend
may be empty, or lack some modifications made to the JuMP model. Thus, before calling unsafe_backend
you should first call MOI.Utilities.attach_optimizer
to ensure that the backend is synchronized with the JuMP model.
julia> import HiGHS
julia> model = Model(HiGHS.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: HiGHS
julia> MOI.Utilities.attach_optimizer(model)
julia> inner = unsafe_backend(model)
A HiGHS model with 0 columns and 0 rows.
Moreover, if you modify the JuMP model, the reference you have to the backend (i.e., inner
in the example above) may be out-dated, and you should call MOI.Utilities.attach_optimizer
again.
This function is also unsafe in the reverse direction: if you modify the unsafe backend, e.g., by adding a new constraint to inner
, the changes may be silently discarded by JuMP when the JuMP model
is modified or solved.
Alternative
Instead of unsafe_backend
, create a model using direct_model
and call backend
instead.
For example, instead of:
julia> import HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> set_silent(model)
julia> @variable(model, x >= 0)
x
julia> MOI.Utilities.attach_optimizer(model)
julia> highs = unsafe_backend(model)
A HiGHS model with 1 columns and 0 rows.
Use:
julia> import HiGHS
julia> model = direct_model(HiGHS.Optimizer());
julia> set_silent(model)
julia> @variable(model, x >= 0)
x
julia> highs = backend(model) # No need to call `attach_optimizer`.
A HiGHS model with 1 columns and 0 rows.
unset_binary
JuMP.unset_binary
— Functionunset_binary(variable_ref::GenericVariableRef)
Remove the binary constraint on the variable variable_ref
.
See also BinaryRef
, is_binary
, set_binary
.
Examples
julia> model = Model();
julia> @variable(model, x, Bin);
julia> is_binary(x)
true
julia> unset_binary(x)
julia> is_binary(x)
false
unset_integer
JuMP.unset_integer
— Functionunset_integer(variable_ref::GenericVariableRef)
Remove the integrality constraint on the variable variable_ref
.
Errors if one does not exist.
See also IntegerRef
, is_integer
, set_integer
.
Examples
julia> model = Model();
julia> @variable(model, x, Int);
julia> is_integer(x)
true
julia> unset_integer(x)
julia> is_integer(x)
false
unset_silent
JuMP.unset_silent
— Functionunset_silent(model::GenericModel)
Neutralize the effect of the set_silent
function and let the solver attributes control the verbosity.
See also: set_silent
.
unset_time_limit_sec
JuMP.unset_time_limit_sec
— Functionunset_time_limit_sec(model::GenericModel)
Unset the time limit of the solver.
See also: set_time_limit_sec
, time_limit_sec
.
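Example
A minimal sketch, assuming the Ipopt optimizer; after unsetting, time_limit_sec returns nothing:
julia> import Ipopt
julia> model = Model(Ipopt.Optimizer);
julia> set_time_limit_sec(model, 60.0)
julia> time_limit_sec(model)
60.0
julia> unset_time_limit_sec(model)
julia> time_limit_sec(model)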
upper_bound
JuMP.upper_bound
— Functionupper_bound(v::GenericVariableRef)
Return the upper bound of a variable.
Error if one does not exist.
See also UpperBoundRef
, has_upper_bound
, set_upper_bound
, delete_upper_bound
.
Examples
julia> model = Model();
julia> @variable(model, x <= 1.0);
julia> upper_bound(x)
1.0
value
JuMP.value
— Functionvalue(con_ref::ConstraintRef; result::Int = 1)
Return the primal value of constraint con_ref
associated with result index result
of the most-recent solution returned by the solver.
That is, if con_ref
is the reference of a constraint func
-in-set
, it returns the value of func
evaluated at the value of the variables (given by value(::GenericVariableRef)
).
Use has_values
to check if a result exists before asking for values.
See also: result_count
.
Note
For scalar constraints, the constant is moved to the set
so it is not taken into account in the primal value of the constraint. For instance, the constraint @constraint(model, 2x + 3y + 1 == 5)
is transformed into 2x + 3y
-in-MOI.EqualTo(4)
so the value returned by this function is the evaluation of 2x + 3y
.
value(var_value::Function, con_ref::ConstraintRef)
Evaluate the primal value of the constraint con_ref
using var_value(v)
as the value for each variable v
.
value(v::GenericVariableRef; result = 1)
Return the value of variable v
associated with result index result
of the most-recent solution returned by the solver.
Use has_values
to check if a result exists before asking for values.
See also: result_count
.
value(var_value::Function, v::GenericVariableRef)
Evaluate the value of the variable v
as var_value(v)
.
value(var_value::Function, ex::GenericAffExpr)
Evaluate ex
using var_value(v)
as the value for each variable v
.
value(v::GenericAffExpr; result::Int = 1)
Return the value of the GenericAffExpr
v
associated with result index result
of the most-recent solution returned by the solver.
See also: result_count
.
value(var_value::Function, ex::GenericQuadExpr)
Evaluate ex
using var_value(v)
as the value for each variable v
.
value(v::GenericQuadExpr; result::Int = 1)
Return the value of the GenericQuadExpr
v
associated with result index result
of the most-recent solution returned by the solver.
Replaces getvalue
for most use cases.
See also: result_count
.
value(p::NonlinearParameter)
Return the current value stored in the nonlinear parameter p
.
Example
julia> model = Model();
julia> @NLparameter(model, p == 10)
p == 10.0
julia> value(p)
10.0
value(ex::NonlinearExpression; result::Int = 1)
Return the value of the NonlinearExpression
ex
associated with result index result
of the most-recent solution returned by the solver.
Replaces getvalue
for most use cases.
See also: result_count
.
value(var_value::Function, ex::NonlinearExpression)
Evaluate ex
using var_value(v)
as the value for each variable v
.
value(c::NonlinearConstraintRef; result::Int = 1)
Return the value of the NonlinearConstraintRef
c
associated with result index result
of the most-recent solution returned by the solver.
See also: result_count
.
value(var_value::Function, c::NonlinearConstraintRef)
Evaluate c
using var_value(v)
as the value for each variable v
.
value_type
JuMP.value_type
— Functionvalue_type(::Type{<:Union{AbstractModel,AbstractVariableRef}})
Return the return type of value
for variables of that model. It defaults to Float64
if it is not implemented.
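Example
For example, the default Model uses Float64, while a GenericModel{BigFloat} uses BigFloat:
julia> value_type(Model)
Float64
julia> value_type(GenericModel{BigFloat})
BigFloat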
variable_by_name
JuMP.variable_by_name
— Functionvariable_by_name(
model::AbstractModel,
name::String,
)::Union{AbstractVariableRef,Nothing}
Returns the reference of the variable with name attribute name
or nothing
if no variable has this name attribute. Throws an error if several variables have name
as their name attribute.
Examples
julia> model = Model();
julia> @variable(model, x)
x
julia> variable_by_name(model, "x")
x
julia> @variable(model, base_name="x")
x
julia> variable_by_name(model, "x")
ERROR: Multiple variables have the name x.
Stacktrace:
[1] error(::String) at ./error.jl:33
[2] get(::MOIU.Model{Float64}, ::Type{MathOptInterface.VariableIndex}, ::String) at /home/blegat/.julia/dev/MathOptInterface/src/Utilities/model.jl:222
[3] get at /home/blegat/.julia/dev/MathOptInterface/src/Utilities/universalfallback.jl:201 [inlined]
[4] get(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MOIU.Model{Float64}}}, ::Type{MathOptInterface.VariableIndex}, ::String) at /home/blegat/.julia/dev/MathOptInterface/src/Utilities/cachingoptimizer.jl:490
[5] variable_by_name(::GenericModel, ::String) at /home/blegat/.julia/dev/JuMP/src/variables.jl:268
[6] top-level scope at none:0
julia> var = @variable(model, base_name="y")
y
julia> variable_by_name(model, "y")
y
julia> set_name(var, "z")
julia> variable_by_name(model, "y")
julia> variable_by_name(model, "z")
z
julia> @variable(model, u[1:2])
2-element Vector{VariableRef}:
u[1]
u[2]
julia> variable_by_name(model, "u[2]")
u[2]
variable_ref_type
JuMP.variable_ref_type
— Functionvariable_ref_type(::Union{F,Type{F}}) where {F}
A helper function used internally by JuMP and some JuMP extensions. Returns the variable type associated with the model or expression type F
.
vectorize
JuMP.vectorize
— Functionvectorize(matrix::AbstractMatrix, ::Shape)
Convert the matrix
into a vector according to Shape
.
write_to_file
JuMP.write_to_file
— Functionwrite_to_file(
model::GenericModel,
filename::String;
format::MOI.FileFormats.FileFormat = MOI.FileFormats.FORMAT_AUTOMATIC,
kwargs...,
)
Write the JuMP model model
to filename
in the format format
.
If the filename ends in .gz
, it will be compressed using Gzip. If the filename ends in .bz2
, it will be compressed using BZip2.
Other kwargs
are passed to the Model
constructor of the chosen format.
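Example
A minimal sketch that writes an MPS file to a temporary directory; the format is inferred from the .mps extension:
julia> model = Model();
julia> @variable(model, x >= 0);
julia> @objective(model, Min, 2x + 1);
julia> write_to_file(model, joinpath(mktempdir(), "model.mps"))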
AbstractConstraint
JuMP.AbstractConstraint
— Typeabstract type AbstractConstraint
An abstract base type for all constraint types. AbstractConstraint
s store the function and set directly, unlike ConstraintRef
s that are merely references to constraints stored in a model. AbstractConstraint
s do not need to be attached to a model.
AbstractJuMPScalar
JuMP.AbstractJuMPScalar
— TypeAbstractJuMPScalar <: MutableArithmetics.AbstractMutable
Abstract base type for all scalar types
The subtyping of AbstractMutable
will allow calls of some Base
functions to be redirected to a method in MA that handles type promotion more carefully (e.g. the promotion in sparse matrix products in SparseArrays usually does not work for JuMP types) and exploits the mutability of AffExpr
and QuadExpr
.
AbstractModel
JuMP.AbstractModel
— TypeAbstractModel
An abstract type that should be subtyped for users creating JuMP extensions.
AbstractScalarSet
JuMP.AbstractScalarSet
— TypeAbstractScalarSet
An abstract type for defining new scalar sets in JuMP.
Implement moi_set(::AbstractScalarSet)
to convert the type into an MOI set.
See also: moi_set
.
AbstractShape
JuMP.AbstractShape
— TypeAbstractShape
Abstract vectorizable shape. Given a flat vector form of an object of shape shape
, the original object can be obtained by reshape_vector
.
AbstractVariable
JuMP.AbstractVariable
— TypeAbstractVariable
Variable returned by build_variable
. It represents a variable that has not been added yet to any model. It can be added to a given model
with add_variable
.
AbstractVariableRef
JuMP.AbstractVariableRef
— TypeAbstractVariableRef
Variable returned by add_variable
. Affine (resp. quadratic) operations with variables of type V<:AbstractVariableRef
and coefficients of type T
create a GenericAffExpr{T,V}
(resp. GenericQuadExpr{T,V}
).
AbstractVectorSet
JuMP.AbstractVectorSet
— TypeAbstractVectorSet
An abstract type for defining new sets in JuMP.
Implement moi_set(::AbstractVectorSet, dim::Int)
to convert the type into an MOI set.
See also: moi_set
.
AffExpr
JuMP.AffExpr
— TypeAffExpr
Alias for GenericAffExpr{Float64,VariableRef}
, the specific GenericAffExpr
used by JuMP.
BinaryRef
JuMP.BinaryRef
— FunctionBinaryRef(v::GenericVariableRef)
Return a constraint reference to the constraint constraining v
to be binary. Errors if one does not exist.
See also is_binary
, set_binary
, unset_binary
.
Examples
julia> model = Model();
julia> @variable(model, x, Bin);
julia> BinaryRef(x)
x binary
BridgeableConstraint
JuMP.BridgeableConstraint
— TypeBridgeableConstraint(
constraint::C,
bridge_type::B;
coefficient_type::Type{T} = Float64,
) where {C<:AbstractConstraint,B<:Type{<:MOI.Bridges.AbstractBridge},T}
An AbstractConstraint
representing that constraint
can be bridged by the bridge of type bridge_type{coefficient_type}
.
Adding a BridgeableConstraint
to a model is equivalent to:
add_bridge(model, bridge_type; coefficient_type = coefficient_type)
add_constraint(model, constraint)
Example
Given a new scalar set type CustomSet
with a bridge CustomBridge
that can bridge F
-in-CustomSet
constraints, when the user does:
model = Model()
@variable(model, x)
@constraint(model, x + 1 in CustomSet())
optimize!(model)
with an optimizer that does not support F
-in-CustomSet
constraints, the constraint will not be bridged unless they first call add_bridge(model, CustomBridge)
.
In order to automatically add the CustomBridge
to any model to which an F
-in-CustomSet
is added, add the following method:
function JuMP.build_constraint(
_error::Function,
func::AbstractJuMPScalar,
set::CustomSet,
)
constraint = ScalarConstraint(func, set)
return BridgeableConstraint(constraint, CustomBridge)
end
Note
JuMP extensions should extend JuMP.build_constraint
only if they also defined CustomSet
, for three reasons:
- It is problematic if multiple extensions overload the same JuMP method.
- A missing method will not inform the users that they forgot to load the extension module defining the build_constraint method.
- Defining a method where neither the function nor any of the argument types are defined in the package is called type piracy and is discouraged in the Julia style guide.
ComplexPlane
JuMP.ComplexPlane
— TypeComplexPlane
Complex plane object that can be used to create a complex variable in the @variable
macro.
Example
Consider the following example:
julia> model = Model();
julia> @variable(model, x in ComplexPlane())
real(x) + imag(x) im
julia> all_variables(model)
2-element Vector{VariableRef}:
real(x)
imag(x)
We see in the output of the last command that two real variables were created. The Julia variable x
binds to an affine expression in terms of these two variables that parametrize the complex plane.
ComplexVariable
JuMP.ComplexVariable
— TypeComplexVariable{S,T,U,V} <: AbstractVariable
A struct used when adding complex variables.
See also: ComplexPlane
.
ConstraintNotOwned
JuMP.ConstraintNotOwned
— Typestruct ConstraintNotOwned{C <: ConstraintRef} <: Exception
constraint_ref::C
end
The constraint constraint_ref
was used in a model different to owner_model(constraint_ref)
.
ConstraintRef
JuMP.ConstraintRef
— TypeConstraintRef
Holds a reference to the model and the corresponding MOI.ConstraintIndex.
FixRef
JuMP.FixRef
— FunctionFixRef(v::GenericVariableRef)
Return a constraint reference to the constraint fixing the value of v
.
Errors if one does not exist.
See also is_fixed
, fix_value
, fix
, unfix
.
Examples
julia> model = Model();
julia> @variable(model, x == 1);
julia> FixRef(x)
x = 1
GenericAffExpr
JuMP.GenericAffExpr
— Typemutable struct GenericAffExpr{CoefType,VarType} <: AbstractJuMPScalar
constant::CoefType
terms::OrderedDict{VarType,CoefType}
end
An expression type representing an affine expression of the form: $\sum a_i x_i + c$.
Fields
- .constant: the constant c in the expression.
- .terms: an OrderedDict, with keys of VarType and values of CoefType describing the sparse vector a.
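Example
A minimal sketch showing the fields; the exact printing of the OrderedDict may vary:
julia> model = Model();
julia> @variable(model, x);
julia> ex = 2x + 3
2 x + 3
julia> ex.constant
3.0
julia> ex.terms
OrderedCollections.OrderedDict{VariableRef, Float64} with 1 entry:
  x => 2.0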
GenericModel
JuMP.GenericModel
— TypeGenericModel{T}(
[optimizer_factory;]
add_bridges::Bool = true,
) where {T<:Real}
Create a new instance of a JuMP model.
If optimizer_factory
is provided, the model is initialized with the optimizer returned by MOI.instantiate(optimizer_factory)
.
If optimizer_factory
is not provided, use set_optimizer
to set the optimizer before calling optimize!
.
If add_bridges
, JuMP adds a MOI.Bridges.LazyBridgeOptimizer
to automatically reformulate the problem into a form supported by the optimizer.
Value type T
Passing a type other than Float64
as the value type T
is an advanced operation. The value type must match that expected by the chosen optimizer. Consult the optimizers documentation for details.
If not documented, assume that the optimizer supports only Float64
.
Choosing an unsupported value type will throw an MOI.UnsupportedConstraint
or an MOI.UnsupportedAttribute
error, the timing of which (during the model construction or during a call to optimize!
) depends on how the solver is interfaced to JuMP.
Example
julia> model = GenericModel{BigFloat}();
julia> typeof(model)
GenericModel{BigFloat}
GenericNonlinearExpr
JuMP.GenericNonlinearExpr
— TypeGenericNonlinearExpr{V}(head::Symbol, args::Vector{Any})
GenericNonlinearExpr{V}(head::Symbol, args::Any...)
The scalar-valued nonlinear function head(args...)
, represented as a symbolic expression tree, with the call operator head
and ordered arguments in args
.
V
is the type of AbstractVariableRef
present in the expression, and is used to help dispatch JuMP extensions.
head
The head::Symbol
must be an operator supported by the model.
The default list of supported univariate operators is given by MOI.Nonlinear.DEFAULT_UNIVARIATE_OPERATORS, and the default list of supported multivariate operators is given by MOI.Nonlinear.DEFAULT_MULTIVARIATE_OPERATORS.
Additional operators can be added using @operator
.
See the full list of operators supported by a MOI.ModelLike
by querying the MOI.ListOfSupportedNonlinearOperators
attribute.
args
The vector args
contains the arguments to the nonlinear function. If the operator is univariate, it must contain one element. Otherwise, it may contain multiple elements.
Given a subtype of AbstractVariableRef
, V
, for GenericNonlinearExpr{V}
, each element must be one of the following:
- A constant value of type <:Real
- A V
- A GenericAffExpr{T,V}
- A GenericQuadExpr{T,V}
- A GenericNonlinearExpr{V}
where T<:Real and T == value_type(V).
Unsupported operators
If the optimizer does not support head
, an MOI.UnsupportedNonlinearOperator
error will be thrown.
There is no guarantee about when this error will be thrown; it may be thrown when the function is first added to the model, or it may be thrown when optimize!
is called.
Example
To represent the function $f(x) = sin(x)^2$, do:
julia> model = Model();
julia> @variable(model, x)
x
julia> f = sin(x)^2
sin(x) ^ 2.0
julia> f = GenericNonlinearExpr{VariableRef}(
:^,
GenericNonlinearExpr{VariableRef}(:sin, x),
2.0,
)
sin(x) ^ 2.0
GenericQuadExpr
JuMP.GenericQuadExpr
— Typemutable struct GenericQuadExpr{CoefType,VarType} <: AbstractJuMPScalar
aff::GenericAffExpr{CoefType,VarType}
terms::OrderedDict{UnorderedPair{VarType}, CoefType}
end
An expression type representing a quadratic expression of the form: $\sum q_{i,j} x_i x_j + \sum a_i x_i + c$.
Fields
- .aff: a GenericAffExpr representing the affine portion of the expression.
- .terms: an OrderedDict, with keys of UnorderedPair{VarType} and values of CoefType, describing the sparse list of terms q.
GenericReferenceMap
JuMP.GenericReferenceMap
— TypeGenericReferenceMap{T}
Mapping between variable and constraint references of a model and its copy. The reference in the copied model can be obtained by indexing the map with the corresponding reference of the original model.
GenericVariableRef
JuMP.GenericVariableRef
— TypeGenericVariableRef{T} <: AbstractVariableRef
Holds a reference to the model and the corresponding MOI.VariableIndex.
HermitianMatrixShape
JuMP.HermitianMatrixShape
— TypeHermitianMatrixShape
Shape object for a Hermitian square matrix of side_dimension
rows and columns. The vectorized form corresponds to MOI.HermitianPositiveSemidefiniteConeTriangle
.
HermitianMatrixSpace
JuMP.HermitianMatrixSpace
— TypeHermitianMatrixSpace()
Use in the @variable
macro to constrain a matrix of variables to be hermitian.
Example
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2] in HermitianMatrixSpace())
2×2 LinearAlgebra.Hermitian{GenericAffExpr{ComplexF64, VariableRef}, Matrix{GenericAffExpr{ComplexF64, VariableRef}}}:
real(Q[1,1]) real(Q[1,2]) + imag(Q[1,2]) im
real(Q[1,2]) - imag(Q[1,2]) im real(Q[2,2])
HermitianPSDCone
JuMP.HermitianPSDCone
— TypeHermitianPSDCone
Hermitian positive semidefinite cone object that can be used to create a Hermitian positive semidefinite square matrix in the @variable
and @constraint
macros.
Example
Consider the following example:
julia> model = Model();
julia> @variable(model, H[1:3, 1:3] in HermitianPSDCone())
3×3 LinearAlgebra.Hermitian{GenericAffExpr{ComplexF64, VariableRef}, Matrix{GenericAffExpr{ComplexF64, VariableRef}}}:
real(H[1,1]) … real(H[1,3]) + imag(H[1,3]) im
real(H[1,2]) - imag(H[1,2]) im real(H[2,3]) + imag(H[2,3]) im
real(H[1,3]) - imag(H[1,3]) im real(H[3,3])
julia> all_variables(model)
9-element Vector{VariableRef}:
real(H[1,1])
real(H[1,2])
real(H[2,2])
real(H[1,3])
real(H[2,3])
real(H[3,3])
imag(H[1,2])
imag(H[1,3])
imag(H[2,3])
julia> all_constraints(model, Vector{VariableRef}, MOI.HermitianPositiveSemidefiniteConeTriangle)
1-element Vector{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.VectorOfVariables, MathOptInterface.HermitianPositiveSemidefiniteConeTriangle}}}:
[real(H[1,1]), real(H[1,2]), real(H[2,2]), real(H[1,3]), real(H[2,3]), real(H[3,3]), imag(H[1,2]), imag(H[1,3]), imag(H[2,3])] ∈ MathOptInterface.HermitianPositiveSemidefiniteConeTriangle(3)
We see in the output of the last commands that 9 real variables were created. The matrix H
contains affine expressions in terms of these 9 variables that parametrize a Hermitian matrix.
IntegerRef
JuMP.IntegerRef
— FunctionIntegerRef(v::GenericVariableRef)
Return a constraint reference to the constraint constraining v
to be integer.
Errors if one does not exist.
See also is_integer
, set_integer
, unset_integer
.
Examples
julia> model = Model();
julia> @variable(model, x, Int);
julia> IntegerRef(x)
x integer
LinearTermIterator
JuMP.LinearTermIterator
— TypeLinearTermIterator{GAE<:GenericAffExpr}
A struct that implements the iterate
protocol in order to iterate over tuples of (coefficient, variable)
in the GenericAffExpr
.
LowerBoundRef
JuMP.LowerBoundRef
— FunctionLowerBoundRef(v::GenericVariableRef)
Return a constraint reference to the lower bound constraint of v
.
Errors if one does not exist.
See also has_lower_bound
, lower_bound
, set_lower_bound
, delete_lower_bound
.
Examples
julia> model = Model();
julia> @variable(model, x >= 1.0);
julia> LowerBoundRef(x)
x ≥ 1
Model
JuMP.Model
— TypeModel([optimizer_factory;] add_bridges::Bool = true)
Create a new instance of a JuMP model.
If optimizer_factory
is provided, the model is initialized with the optimizer returned by MOI.instantiate(optimizer_factory)
.
If optimizer_factory
is not provided, use set_optimizer
to set the optimizer before calling optimize!
.
If add_bridges
, JuMP adds a MOI.Bridges.LazyBridgeOptimizer
to automatically reformulate the problem into a form supported by the optimizer.
Example
julia> import Ipopt
julia> model = Model(Ipopt.Optimizer);
julia> solver_name(model)
"Ipopt"
julia> import HiGHS
julia> import MultiObjectiveAlgorithms as MOA
julia> model = Model(() -> MOA.Optimizer(HiGHS.Optimizer); add_bridges = false);
ModelMode
JuMP.ModelMode
— TypeModelMode
An enum to describe the state of the CachingOptimizer inside a JuMP model.
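Example
Query the mode of a model with mode:
julia> model = Model();
julia> mode(model)
AUTOMATIC::ModelMode = 0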
NLPEvaluator
JuMP.NLPEvaluator
— FunctionNLPEvaluator(
model::Model,
_differentiation_backend::MOI.Nonlinear.AbstractAutomaticDifferentiation =
MOI.Nonlinear.SparseReverseMode(),
)
Return an MOI.AbstractNLPEvaluator
constructed from model.
Before using, you must initialize the evaluator using MOI.initialize
.
Experimental
These features may change or be removed in any future version of JuMP.
Pass _differentiation_backend
to specify the differentiation backend used to compute derivatives.
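Example
A minimal sketch of constructing, initializing, and evaluating an evaluator:
julia> model = Model();
julia> @variable(model, x);
julia> @NLobjective(model, Min, sin(x))
julia> evaluator = NLPEvaluator(model);
julia> MOI.initialize(evaluator, [:Grad])
julia> MOI.eval_objective(evaluator, [1.0])
0.8414709848078965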
NoOptimizer
JuMP.NoOptimizer
— Typestruct NoOptimizer <: Exception end
No optimizer is set. The optimizer can be provided to the Model
constructor or by calling set_optimizer
.
NonlinearConstraintIndex
JuMP.NonlinearConstraintIndex
— TypeConstraintIndex
An index to a nonlinear constraint that is returned by add_constraint
.
Given data::Model
and c::ConstraintIndex
, use data[c]
to retrieve the corresponding Constraint
.
NonlinearConstraintRef
JuMP.NonlinearConstraintRef
— TypeNonlinearConstraintRef
NonlinearExpr
JuMP.NonlinearExpr
— TypeNonlinearExpr
Alias for GenericNonlinearExpr{VariableRef}
, the specific GenericNonlinearExpr
used by JuMP.
NonlinearExpression
JuMP.NonlinearExpression
— TypeNonlinearExpression <: AbstractJuMPScalar
A struct to represent a nonlinear expression.
Create an expression using @NLexpression
.
NonlinearOperator
JuMP.NonlinearOperator
— TypeNonlinearOperator(func::Function, head::Symbol)
A callable struct (functor) representing a function named head
.
When called with AbstractJuMPScalar
s, the struct returns a GenericNonlinearExpr
.
When called with non-JuMP types, the struct returns the evaluation of func(args...)
.
Unless head
is special-cased by the optimizer, the operator must have already been added to the model using add_nonlinear_operator
or @operator
.
Example
julia> model = Model();
julia> @variable(model, x)
x
julia> f(x::Float64) = x^2
f (generic function with 1 method)
julia> ∇f(x::Float64) = 2 * x
∇f (generic function with 1 method)
julia> ∇²f(x::Float64) = 2.0
∇²f (generic function with 1 method)
julia> @operator(model, op_f, 1, f, ∇f, ∇²f)
NonlinearOperator(f, :op_f)
julia> bar = NonlinearOperator(f, :op_f)
NonlinearOperator(f, :op_f)
julia> @objective(model, Min, bar(x))
op_f(x)
julia> bar(2.0)
4.0
NonlinearParameter
JuMP.NonlinearParameter
— TypeNonlinearParameter <: AbstractJuMPScalar
A struct to represent a nonlinear parameter.
Create a parameter using @NLparameter
.
Nonnegatives
JuMP.Nonnegatives
— TypeNonnegatives()
The JuMP equivalent of the MOI.Nonnegatives
set, in which the dimension is inferred from the corresponding function.
Example
julia> model = Model();
julia> @variable(model, x[1:2])
2-element Vector{VariableRef}:
x[1]
x[2]
julia> @constraint(model, x in Nonnegatives())
[x[1], x[2]] ∈ MathOptInterface.Nonnegatives(2)
julia> A = [1 2; 3 4];
julia> b = [5, 6];
julia> @constraint(model, A * x >= b)
[x[1] + 2 x[2] - 5, 3 x[1] + 4 x[2] - 6] ∈ MathOptInterface.Nonnegatives(2)
Nonpositives
JuMP.Nonpositives
— TypeNonpositives()
The JuMP equivalent of the MOI.Nonpositives
set, in which the dimension is inferred from the corresponding function.
Example
julia> model = Model();
julia> @variable(model, x[1:2])
2-element Vector{VariableRef}:
x[1]
x[2]
julia> @constraint(model, x in Nonpositives())
[x[1], x[2]] ∈ MathOptInterface.Nonpositives(2)
julia> A = [1 2; 3 4];
julia> b = [5, 6];
julia> @constraint(model, A * x <= b)
[x[1] + 2 x[2] - 5, 3 x[1] + 4 x[2] - 6] ∈ MathOptInterface.Nonpositives(2)
OptimizationSense
JuMP.OptimizationSense
— TypeOptimizationSense
An enum for the value of the ObjectiveSense
attribute.
Values
Possible values are:
- MIN_SENSE: the goal is to minimize the objective function
- MAX_SENSE: the goal is to maximize the objective function
- FEASIBILITY_SENSE: the model does not have an objective function
OptimizeNotCalled
JuMP.OptimizeNotCalled
— Typestruct OptimizeNotCalled <: Exception end
A result attribute cannot be queried before optimize!
is called.
PSDCone
JuMP.PSDCone
— TypePSDCone
Positive semidefinite cone object that can be used to constrain a square matrix to be positive semidefinite in the @constraint
macro. If the matrix has type Symmetric
then the columns vectorization (the vector obtained by concatenating the columns) of its upper triangular part is constrained to belong to the MOI.PositiveSemidefiniteConeTriangle
set, otherwise its column vectorization is constrained to belong to the MOI.PositiveSemidefiniteConeSquare
set.
Example
Consider the following example:
julia> model = Model();
julia> @variable(model, x)
x
julia> a = [ x 2x
2x x];
julia> b = [1 2
2 4];
julia> cref = @constraint(model, a >= b, PSDCone())
[x - 1 2 x - 2;
2 x - 2 x - 4] ∈ PSDCone()
julia> jump_function(constraint_object(cref))
4-element Vector{AffExpr}:
x - 1
2 x - 2
2 x - 2
x - 4
julia> moi_set(constraint_object(cref))
MathOptInterface.PositiveSemidefiniteConeSquare(2)
We see in the output of the last command that the vectorization of the matrix is constrained to belong to the PositiveSemidefiniteConeSquare
.
julia> using LinearAlgebra # For Symmetric
julia> cref = @constraint(model, Symmetric(a - b) in PSDCone())
[x - 1 2 x - 2;
2 x - 2 x - 4] ∈ PSDCone()
julia> jump_function(constraint_object(cref))
3-element Vector{AffExpr}:
x - 1
2 x - 2
x - 4
julia> moi_set(constraint_object(cref))
MathOptInterface.PositiveSemidefiniteConeTriangle(2)
As we see in the output of the last command, the vectorization of only the upper triangular part of the matrix is constrained to belong to the PositiveSemidefiniteConeTriangle
.
Parameter
JuMP.Parameter
— TypeParameter(value)
A short-cut for the MOI.Parameter
set.
Example
julia> model = Model();
julia> @variable(model, x in Parameter(2))
x
julia> print(model)
Feasibility
Subject to
x ∈ MathOptInterface.Parameter{Float64}(2.0)
ParameterRef
JuMP.ParameterRef
— FunctionParameterRef(x::GenericVariableRef)
Return a constraint reference to the constraint constraining x
to be a parameter.
Errors if one does not exist.
See also is_parameter
, set_parameter_value
, parameter_value
.
Examples
julia> model = Model();
julia> @variable(model, p in Parameter(2))
p
julia> ParameterRef(p)
p ∈ MathOptInterface.Parameter{Float64}(2.0)
julia> @variable(model, x);
julia> ParameterRef(x)
ERROR: Variable x is not a parameter.
Stacktrace:
[...]
QuadExpr
JuMP.QuadExpr
— TypeQuadExpr
An alias for GenericQuadExpr{Float64,VariableRef}
, the specific GenericQuadExpr
used by JuMP.
QuadTermIterator
JuMP.QuadTermIterator
— TypeQuadTermIterator{GQE<:GenericQuadExpr}
A struct that implements the iterate
protocol in order to iterate over tuples of (coefficient, variable, variable)
in the GenericQuadExpr
.
ReferenceMap
JuMP.ReferenceMap
— TypeGenericReferenceMap{T}
Mapping between variable and constraint references of a model and its copy. The reference in the copied model can be obtained by indexing the map with the corresponding reference of the original model.
ResultStatusCode
JuMP.ResultStatusCode
— TypeResultStatusCode
An Enum of possible values for the PrimalStatus
and DualStatus
attributes.
The values indicate how to interpret the result vector.
Values
Possible values are:
- NO_SOLUTION: the result vector is empty.
- FEASIBLE_POINT: the result vector is a feasible point.
- NEARLY_FEASIBLE_POINT: the result vector is feasible if some constraint tolerances are relaxed.
- INFEASIBLE_POINT: the result vector is an infeasible point.
- INFEASIBILITY_CERTIFICATE: the result vector is an infeasibility certificate. If the PrimalStatus is INFEASIBILITY_CERTIFICATE, then the primal result vector is a certificate of dual infeasibility. If the DualStatus is INFEASIBILITY_CERTIFICATE, then the dual result vector is a proof of primal infeasibility.
- NEARLY_INFEASIBILITY_CERTIFICATE: the result satisfies a relaxed criterion for a certificate of infeasibility.
- REDUCTION_CERTIFICATE: the result vector is an ill-posed certificate; see this article for details. If the PrimalStatus is REDUCTION_CERTIFICATE, then the primal result vector is a proof that the dual problem is ill-posed. If the DualStatus is REDUCTION_CERTIFICATE, then the dual result vector is a proof that the primal is ill-posed.
- NEARLY_REDUCTION_CERTIFICATE: the result satisfies a relaxed criterion for an ill-posed certificate.
- UNKNOWN_RESULT_STATUS: the result vector contains a solution with an unknown interpretation.
- OTHER_RESULT_STATUS: the result vector contains a solution with an interpretation not covered by one of the statuses defined above
RotatedSecondOrderCone
JuMP.RotatedSecondOrderCone
— TypeRotatedSecondOrderCone
Rotated second order cone object that can be used to constrain the square of the euclidean norm of a vector x
to be less than or equal to $2tu$ where t
and u
are nonnegative scalars. This is a shortcut for the MOI.RotatedSecondOrderCone
.
Example
The following constrains $\|(x-1, x-2)\|^2_2 \le 2tx$ and $t, x \ge 0$:
julia> model = Model();
julia> @variable(model, x)
x
julia> @variable(model, t)
t
julia> @constraint(model, [t, x, x-1, x-2] in RotatedSecondOrderCone())
[t, x, x - 1, x - 2] ∈ MathOptInterface.RotatedSecondOrderCone(4)
SOS1
JuMP.SOS1
— TypeSOS1
SOS1 (Special Ordered Sets type 1) object that can be used to constrain a vector x
to a set where at most 1 variable can take a non-zero value, all others being at 0. The weights
, when specified, induce an ordering of the variables; as such, they should be unique values. The kth element in the set corresponds to the kth weight in weights
. See here for a description of SOS constraints and their potential uses. This is a shortcut for the MathOptInterface.SOS1
set.
SOS2
JuMP.SOS2
— TypeSOS2
SOS2 (Special Ordered Sets type 2) object that can be used to constrain a vector x
to a set where at most 2 variables can take a non-zero value, all others being at 0. In addition, if two are non-zero these must be consecutive in their ordering. The weights
induce an ordering of the variables; as such, they should be unique values. The kth element in the set corresponds to the kth weight in weights
. See here for a description of SOS constraints and their potential uses. This is a shortcut for the MathOptInterface.SOS2
set.
ScalarConstraint
JuMP.ScalarConstraint
— Typestruct ScalarConstraint
The data for a scalar constraint. The func
field contains a JuMP object representing the function and the set
field contains the MOI set. See also the documentation on JuMP's representation of constraints for more background.
ScalarShape
JuMP.ScalarShape
— TypeScalarShape
Shape of scalar constraints.
ScalarVariable
JuMP.ScalarVariable
— TypeScalarVariable{S,T,U,V} <: AbstractVariable
A struct used when adding variables.
See also: add_variable
.
SecondOrderCone
JuMP.SecondOrderCone
— TypeSecondOrderCone
Second order cone object that can be used to constrain the euclidean norm of a vector x
to be less than or equal to a nonnegative scalar t
. This is a shortcut for the MOI.SecondOrderCone
.
Example
The following constrains $\|(x-1, x-2)\|_2 \le t$ and $t \ge 0$:
julia> model = Model();
julia> @variable(model, x)
x
julia> @variable(model, t)
t
julia> @constraint(model, [t, x-1, x-2] in SecondOrderCone())
[t, x - 1, x - 2] ∈ MathOptInterface.SecondOrderCone(3)
Semicontinuous
JuMP.Semicontinuous
— TypeSemicontinuous(lower, upper)
A short-cut for the MOI.Semicontinuous
set.
This short-cut is useful because it automatically promotes lower
and upper
to the same type, and converts them into the element type supported by the JuMP model.
Example
julia> model = Model();
julia> @variable(model, x in Semicontinuous(1, 2))
x
julia> print(model)
Feasibility
Subject to
x ∈ MathOptInterface.Semicontinuous{Int64}(1, 2)
Semiinteger
JuMP.Semiinteger
— TypeSemiinteger(lower, upper)
A short-cut for the MOI.Semiinteger
set.
This short-cut is useful because it automatically promotes lower
and upper
to the same type, and converts them into the element type supported by the JuMP model.
Example
julia> model = Model();
julia> @variable(model, x in Semiinteger(3, 5))
x
julia> print(model)
Feasibility
Subject to
x ∈ MathOptInterface.Semiinteger{Int64}(3, 5)
SensitivityReport
JuMP.SensitivityReport
— TypeSensitivityReport
SkewSymmetricMatrixShape
JuMP.SkewSymmetricMatrixShape
— TypeSkewSymmetricMatrixShape
Shape object for a skew symmetric square matrix of side_dimension
rows and columns. The vectorized form contains the entries of the upper-right triangular part of the matrix (without the diagonal) given column by column (or equivalently, the entries of the lower-left triangular part given row by row). The diagonal is zero.
SkewSymmetricMatrixSpace
JuMP.SkewSymmetricMatrixSpace
— TypeSkewSymmetricMatrixSpace()
Use in the @variable
macro to constrain a matrix of variables to be skew-symmetric.
Example
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2] in SkewSymmetricMatrixSpace())
2×2 Matrix{AffExpr}:
0 Q[1,2]
-Q[1,2] 0
SquareMatrixShape
JuMP.SquareMatrixShape
— TypeSquareMatrixShape
Shape object for a square matrix of side_dimension
rows and columns. The vectorized form contains the entries of the matrix given column by column.
SymmetricMatrixShape
JuMP.SymmetricMatrixShape
— TypeSymmetricMatrixShape
Shape object for a symmetric square matrix of side_dimension
rows and columns. The vectorized form contains the entries of the upper-right triangular part of the matrix given column by column (or equivalently, the entries of the lower-left triangular part given row by row).
SymmetricMatrixSpace
JuMP.SymmetricMatrixSpace
— TypeSymmetricMatrixSpace()
Use in the @variable
macro to constrain a matrix of variables to be symmetric.
Example
julia> model = Model();
julia> @variable(model, Q[1:2, 1:2] in SymmetricMatrixSpace())
2×2 LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}:
Q[1,1] Q[1,2]
Q[1,2] Q[2,2]
TerminationStatusCode
JuMP.TerminationStatusCode
— TypeTerminationStatusCode
An Enum of possible values for the TerminationStatus
attribute. This attribute is meant to explain the reason why the optimizer stopped executing in the most recent call to optimize!
.
Values
Possible values are:
- OPTIMIZE_NOT_CALLED: The algorithm has not started.
- OPTIMAL: The algorithm found a globally optimal solution.
- INFEASIBLE: The algorithm concluded that no feasible solution exists.
- DUAL_INFEASIBLE: The algorithm concluded that no dual bound exists for the problem. If, additionally, a feasible (primal) solution is known to exist, this status typically implies that the problem is unbounded, with some technical exceptions.
- LOCALLY_SOLVED: The algorithm converged to a stationary point, local optimal solution, could not find directions for improvement, or otherwise completed its search without global guarantees.
- LOCALLY_INFEASIBLE: The algorithm converged to an infeasible point or otherwise completed its search without finding a feasible solution, without guarantees that no feasible solution exists.
- INFEASIBLE_OR_UNBOUNDED: The algorithm stopped because it decided that the problem is infeasible or unbounded; this occasionally happens during MIP presolve.
- ALMOST_OPTIMAL: The algorithm found a globally optimal solution to relaxed tolerances.
- ALMOST_INFEASIBLE: The algorithm concluded that no feasible solution exists within relaxed tolerances.
- ALMOST_DUAL_INFEASIBLE: The algorithm concluded that no dual bound exists for the problem within relaxed tolerances.
- ALMOST_LOCALLY_SOLVED: The algorithm converged to a stationary point, local optimal solution, or could not find directions for improvement within relaxed tolerances.
- ITERATION_LIMIT: An iterative algorithm stopped after conducting the maximum number of iterations.
- TIME_LIMIT: The algorithm stopped after a user-specified computation time.
- NODE_LIMIT: A branch-and-bound algorithm stopped because it explored a maximum number of nodes in the branch-and-bound tree.
- SOLUTION_LIMIT: The algorithm stopped because it found the required number of solutions. This is often used in MIPs to get the solver to return the first feasible solution it encounters.
- MEMORY_LIMIT: The algorithm stopped because it ran out of memory.
- OBJECTIVE_LIMIT: The algorithm stopped because it found a solution better than a minimum limit set by the user.
- NORM_LIMIT: The algorithm stopped because the norm of an iterate became too large.
- OTHER_LIMIT: The algorithm stopped due to a limit not covered by one of the _LIMIT_ statuses above.
- SLOW_PROGRESS: The algorithm stopped because it was unable to continue making progress towards the solution.
- NUMERICAL_ERROR: The algorithm stopped because it encountered unrecoverable numerical error.
- INVALID_MODEL: The algorithm stopped because the model is invalid.
- INVALID_OPTION: The algorithm stopped because it was provided an invalid option.
- INTERRUPTED: The algorithm stopped because of an interrupt signal.
- OTHER_ERROR: The algorithm stopped because of an error not covered by one of the statuses defined above.
UnorderedPair
JuMP.UnorderedPair
— TypeUnorderedPair(a::T, b::T)
A wrapper type used by GenericQuadExpr
with fields .a
and .b
.
UpperBoundRef
JuMP.UpperBoundRef
— FunctionUpperBoundRef(v::GenericVariableRef)
Return a constraint reference to the upper bound constraint of v
.
Errors if one does not exist.
See also has_upper_bound
, upper_bound
, set_upper_bound
, delete_upper_bound
.
Examples
julia> model = Model();
julia> @variable(model, x <= 1.0);
julia> UpperBoundRef(x)
x ≤ 1
VariableConstrainedOnCreation
JuMP.VariableConstrainedOnCreation
— TypeVariableConstrainedOnCreation <: AbstractVariable
Variable scalar_variables
constrained to belong to set
.
Adding this variable can be understood as doing:
function JuMP.add_variable(
model::GenericModel,
variable::VariableConstrainedOnCreation,
names,
)
var_ref = add_variable(model, variable.scalar_variable, names)
add_constraint(model, VectorConstraint(var_ref, variable.set))
return var_ref
end
but adds the variables with MOI.add_constrained_variable(model, variable.set)
instead. See the MOI documentation for the difference between adding the variables with MOI.add_constrained_variable
and adding them with MOI.add_variable
and adding the constraint separately.
VariableInfo
JuMP.VariableInfo
— TypeVariableInfo{S,T,U,V}
A struct used by JuMP internally when creating variables. This may also be used by JuMP extensions to create new types of variables.
See also: ScalarVariable
.
VariableNotOwned
JuMP.VariableNotOwned
— Typestruct VariableNotOwned{V<:AbstractVariableRef} <: Exception
variable::V
end
The variable variable
was used in a model different to owner_model(variable)
.
VariableRef
JuMP.VariableRef
— TypeGenericVariableRef{T} <: AbstractVariableRef
Holds a reference to the model and the corresponding MOI.VariableIndex.
VariablesConstrainedOnCreation
JuMP.VariablesConstrainedOnCreation
— TypeVariablesConstrainedOnCreation <: AbstractVariable
Vector of variables scalar_variables
constrained to belong to set
. Adding this variable can be thought of as doing:
function JuMP.add_variable(
model::GenericModel,
variable::VariablesConstrainedOnCreation,
names,
)
v_names = vectorize(names, variable.shape)
var_refs = add_variable.(model, variable.scalar_variables, v_names)
add_constraint(model, VectorConstraint(var_refs, variable.set))
return reshape_vector(var_refs, variable.shape)
end
but adds the variables with MOI.add_constrained_variables(model, variable.set)
instead. See the MOI documentation for the difference between adding the variables with MOI.add_constrained_variables
and adding them with MOI.add_variables
and adding the constraint separately.
VectorConstraint
JuMP.VectorConstraint
— Typestruct VectorConstraint
The data for a vector constraint. The func
field contains a JuMP object representing the function and the set
field contains the MOI set. The shape
field contains an AbstractShape
matching the form in which the constraint was constructed (e.g., by using matrices or flat vectors). See also the documentation on JuMP's representation of constraints.
VectorShape
JuMP.VectorShape
— TypeVectorShape
Vector for which the vectorized form corresponds exactly to the vector given.
Zeros
JuMP.Zeros
— TypeZeros()
The JuMP equivalent of the MOI.Zeros
set, in which the dimension is inferred from the corresponding function.
Example
julia> model = Model();
julia> @variable(model, x[1:2])
2-element Vector{VariableRef}:
x[1]
x[2]
julia> @constraint(model, x in Zeros())
[x[1], x[2]] ∈ MathOptInterface.Zeros(2)
julia> A = [1 2; 3 4];
julia> b = [5, 6];
julia> @constraint(model, A * x == b)
[x[1] + 2 x[2] - 5, 3 x[1] + 4 x[2] - 6] ∈ MathOptInterface.Zeros(2)
ALMOST_DUAL_INFEASIBLE
JuMP.ALMOST_DUAL_INFEASIBLE
— ConstantALMOST_DUAL_INFEASIBLE::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
ALMOST_DUAL_INFEASIBLE
: The algorithm concluded that no dual bound exists for the problem within relaxed tolerances.
ALMOST_INFEASIBLE
JuMP.ALMOST_INFEASIBLE
— ConstantALMOST_INFEASIBLE::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
ALMOST_INFEASIBLE
: The algorithm concluded that no feasible solution exists within relaxed tolerances.
ALMOST_LOCALLY_SOLVED
JuMP.ALMOST_LOCALLY_SOLVED
— ConstantALMOST_LOCALLY_SOLVED::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
ALMOST_LOCALLY_SOLVED
: The algorithm converged to a stationary point, local optimal solution, or could not find directions for improvement within relaxed tolerances.
ALMOST_OPTIMAL
JuMP.ALMOST_OPTIMAL
— ConstantALMOST_OPTIMAL::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
ALMOST_OPTIMAL
: The algorithm found a globally optimal solution to relaxed tolerances.
AUTOMATIC
JuMP.AUTOMATIC
— Constantmoi_backend
field holds a CachingOptimizer in AUTOMATIC mode.
DIRECT
JuMP.DIRECT
— Constantmoi_backend
field holds an AbstractOptimizer. No extra copy of the model is stored. The moi_backend
must support add_constraint
etc.
DUAL_INFEASIBLE
JuMP.DUAL_INFEASIBLE
— ConstantDUAL_INFEASIBLE::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
DUAL_INFEASIBLE
: The algorithm concluded that no dual bound exists for the problem. If, additionally, a feasible (primal) solution is known to exist, this status typically implies that the problem is unbounded, with some technical exceptions.
FEASIBILITY_SENSE
JuMP.FEASIBILITY_SENSE
— ConstantFEASIBILITY_SENSE::OptimizationSense
An instance of the OptimizationSense
enum.
FEASIBILITY_SENSE
: the model does not have an objective function
FEASIBLE_POINT
JuMP.FEASIBLE_POINT
— ConstantFEASIBLE_POINT::ResultStatusCode
An instance of the ResultStatusCode
enum.
FEASIBLE_POINT
: the result vector is a feasible point.
INFEASIBILITY_CERTIFICATE
JuMP.INFEASIBILITY_CERTIFICATE
— ConstantINFEASIBILITY_CERTIFICATE::ResultStatusCode
An instance of the ResultStatusCode
enum.
INFEASIBILITY_CERTIFICATE
: the result vector is an infeasibility certificate. If the PrimalStatus
is INFEASIBILITY_CERTIFICATE
, then the primal result vector is a certificate of dual infeasibility. If the DualStatus
is INFEASIBILITY_CERTIFICATE
, then the dual result vector is a proof of primal infeasibility.
INFEASIBLE
JuMP.INFEASIBLE
— ConstantINFEASIBLE::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
INFEASIBLE
: The algorithm concluded that no feasible solution exists.
INFEASIBLE_OR_UNBOUNDED
JuMP.INFEASIBLE_OR_UNBOUNDED
— ConstantINFEASIBLE_OR_UNBOUNDED::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
INFEASIBLE_OR_UNBOUNDED
: The algorithm stopped because it decided that the problem is infeasible or unbounded; this occasionally happens during MIP presolve.
INFEASIBLE_POINT
JuMP.INFEASIBLE_POINT
— ConstantINFEASIBLE_POINT::ResultStatusCode
An instance of the ResultStatusCode
enum.
INFEASIBLE_POINT
: the result vector is an infeasible point.
INTERRUPTED
JuMP.INTERRUPTED
— ConstantINTERRUPTED::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
INTERRUPTED
: The algorithm stopped because of an interrupt signal.
INVALID_MODEL
JuMP.INVALID_MODEL
— ConstantINVALID_MODEL::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
INVALID_MODEL
: The algorithm stopped because the model is invalid.
INVALID_OPTION
JuMP.INVALID_OPTION
— ConstantINVALID_OPTION::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
INVALID_OPTION
: The algorithm stopped because it was provided an invalid option.
ITERATION_LIMIT
JuMP.ITERATION_LIMIT
— ConstantITERATION_LIMIT::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
ITERATION_LIMIT
: An iterative algorithm stopped after conducting the maximum number of iterations.
LOCALLY_INFEASIBLE
JuMP.LOCALLY_INFEASIBLE
— ConstantLOCALLY_INFEASIBLE::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
LOCALLY_INFEASIBLE
: The algorithm converged to an infeasible point or otherwise completed its search without finding a feasible solution, without guarantees that no feasible solution exists.
LOCALLY_SOLVED
JuMP.LOCALLY_SOLVED
— ConstantLOCALLY_SOLVED::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
LOCALLY_SOLVED
: The algorithm converged to a stationary point or locally optimal solution, could not find directions for improvement, or otherwise completed its search without global guarantees.
MANUAL
JuMP.MANUAL
— Constantmoi_backend
field holds a CachingOptimizer in MANUAL mode.
MAX_SENSE
JuMP.MAX_SENSE
— ConstantMAX_SENSE::OptimizationSense
An instance of the OptimizationSense
enum.
MAX_SENSE
: the goal is to maximize the objective function
MEMORY_LIMIT
JuMP.MEMORY_LIMIT
— ConstantMEMORY_LIMIT::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
MEMORY_LIMIT
: The algorithm stopped because it ran out of memory.
MIN_SENSE
JuMP.MIN_SENSE
— ConstantMIN_SENSE::OptimizationSense
An instance of the OptimizationSense
enum.
MIN_SENSE
: the goal is to minimize the objective function
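As a sketch covering both MAX_SENSE and MIN_SENSE, the sense set by @objective can be read back with objective_sense:
julia> model = Model();
julia> @variable(model, x);
julia> @objective(model, Max, x)
x
julia> objective_sense(model)
MAX_SENSE::OptimizationSense = 1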
NEARLY_FEASIBLE_POINT
JuMP.NEARLY_FEASIBLE_POINT
— ConstantNEARLY_FEASIBLE_POINT::ResultStatusCode
An instance of the ResultStatusCode
enum.
NEARLY_FEASIBLE_POINT
: the result vector is feasible if some constraint tolerances are relaxed.
NEARLY_INFEASIBILITY_CERTIFICATE
JuMP.NEARLY_INFEASIBILITY_CERTIFICATE
— ConstantNEARLY_INFEASIBILITY_CERTIFICATE::ResultStatusCode
An instance of the ResultStatusCode
enum.
NEARLY_INFEASIBILITY_CERTIFICATE
: the result satisfies a relaxed criterion for a certificate of infeasibility.
NEARLY_REDUCTION_CERTIFICATE
JuMP.NEARLY_REDUCTION_CERTIFICATE
— ConstantNEARLY_REDUCTION_CERTIFICATE::ResultStatusCode
An instance of the ResultStatusCode
enum.
NEARLY_REDUCTION_CERTIFICATE
: the result satisfies a relaxed criterion for an ill-posed certificate.
NODE_LIMIT
JuMP.NODE_LIMIT
— ConstantNODE_LIMIT::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
NODE_LIMIT
: A branch-and-bound algorithm stopped because it explored a maximum number of nodes in the branch-and-bound tree.
NORM_LIMIT
JuMP.NORM_LIMIT
— ConstantNORM_LIMIT::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
NORM_LIMIT
: The algorithm stopped because the norm of an iterate became too large.
NO_SOLUTION
JuMP.NO_SOLUTION
— ConstantNO_SOLUTION::ResultStatusCode
An instance of the ResultStatusCode
enum.
NO_SOLUTION
: the result vector is empty.
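For example, before optimize! has been called there is no result to query, so primal_status returns NO_SOLUTION:
julia> model = Model();
julia> primal_status(model)
NO_SOLUTION::ResultStatusCode = 0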
NUMERICAL_ERROR
JuMP.NUMERICAL_ERROR
— ConstantNUMERICAL_ERROR::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
NUMERICAL_ERROR
: The algorithm stopped because it encountered unrecoverable numerical error.
OBJECTIVE_LIMIT
JuMP.OBJECTIVE_LIMIT
— ConstantOBJECTIVE_LIMIT::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
OBJECTIVE_LIMIT
: The algorithm stopped because it found a solution better than a minimum limit set by the user.
OPTIMAL
JuMP.OPTIMAL
— ConstantOPTIMAL::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
OPTIMAL
: The algorithm found a globally optimal solution.
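A minimal sketch of checking for this status after solving a trivial linear program; it assumes the HiGHS solver package is installed (any LP solver works), and it also illustrates the FEASIBLE_POINT result status documented above:
julia> using JuMP, HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> set_silent(model)
julia> @variable(model, x >= 0);
julia> @objective(model, Min, x);
julia> optimize!(model)
julia> termination_status(model)
OPTIMAL::TerminationStatusCode = 1
julia> primal_status(model)
FEASIBLE_POINT::ResultStatusCode = 1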
OPTIMIZE_NOT_CALLED
JuMP.OPTIMIZE_NOT_CALLED
— ConstantOPTIMIZE_NOT_CALLED::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
OPTIMIZE_NOT_CALLED
: The algorithm has not started.
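This is the status returned for any model before optimize! is called, even when no optimizer is attached:
julia> model = Model();
julia> termination_status(model)
OPTIMIZE_NOT_CALLED::TerminationStatusCode = 0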
OTHER_ERROR
JuMP.OTHER_ERROR
— ConstantOTHER_ERROR::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
OTHER_ERROR
: The algorithm stopped because of an error not covered by one of the statuses defined above.
OTHER_LIMIT
JuMP.OTHER_LIMIT
— ConstantOTHER_LIMIT::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
OTHER_LIMIT
: The algorithm stopped due to a limit not covered by one of the _LIMIT_
statuses above.
OTHER_RESULT_STATUS
JuMP.OTHER_RESULT_STATUS
— ConstantOTHER_RESULT_STATUS::ResultStatusCode
An instance of the ResultStatusCode
enum.
OTHER_RESULT_STATUS
: the result vector contains a solution with an interpretation not covered by one of the statuses defined above
REDUCTION_CERTIFICATE
JuMP.REDUCTION_CERTIFICATE
— ConstantREDUCTION_CERTIFICATE::ResultStatusCode
An instance of the ResultStatusCode
enum.
REDUCTION_CERTIFICATE
: the result vector is an ill-posed certificate; see this article for details. If the PrimalStatus
is REDUCTION_CERTIFICATE
, then the primal result vector is a proof that the dual problem is ill-posed. If the DualStatus
is REDUCTION_CERTIFICATE
, then the dual result vector is a proof that the primal is ill-posed.
SLOW_PROGRESS
JuMP.SLOW_PROGRESS
— ConstantSLOW_PROGRESS::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
SLOW_PROGRESS
: The algorithm stopped because it was unable to continue making progress towards the solution.
SOLUTION_LIMIT
JuMP.SOLUTION_LIMIT
— ConstantSOLUTION_LIMIT::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
SOLUTION_LIMIT
: The algorithm stopped because it found the required number of solutions. This is often used in MIPs to get the solver to return the first feasible solution it encounters.
TIME_LIMIT
JuMP.TIME_LIMIT
— ConstantTIME_LIMIT::TerminationStatusCode
An instance of the TerminationStatusCode
enum.
TIME_LIMIT
: The algorithm stopped after a user-specified computation time.
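Time limits are usually set with set_time_limit_sec. A sketch, assuming the HiGHS solver package is installed (any solver that supports MOI.TimeLimitSec works):
julia> using JuMP, HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> set_time_limit_sec(model, 60.0)
julia> time_limit_sec(model)
60.0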
UNKNOWN_RESULT_STATUS
JuMP.UNKNOWN_RESULT_STATUS
— ConstantUNKNOWN_RESULT_STATUS::ResultStatusCode
An instance of the ResultStatusCode
enum.
UNKNOWN_RESULT_STATUS
: the result vector contains a solution with an unknown interpretation.
op_and
JuMP.op_and
— Constantop_and(x, y)
A function that falls back to x & y
, but when called with JuMP variables or expressions, returns a GenericNonlinearExpr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> op_and(true, false)
false
julia> op_and(true, x)
true && x
op_equal_to
JuMP.op_equal_to
— Constantop_equal_to(x, y)
A function that falls back to x == y
, but when called with JuMP variables or expressions, returns a GenericNonlinearExpr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> op_equal_to(2, 2)
true
julia> op_equal_to(x, 2)
x == 2
op_greater_than_or_equal_to
JuMP.op_greater_than_or_equal_to
— Constantop_greater_than_or_equal_to(x, y)
A function that falls back to x >= y
, but when called with JuMP variables or expressions, returns a GenericNonlinearExpr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> op_greater_than_or_equal_to(2, 2)
true
julia> op_greater_than_or_equal_to(x, 2)
x >= 2
op_less_than_or_equal_to
JuMP.op_less_than_or_equal_to
— Constantop_less_than_or_equal_to(x, y)
A function that falls back to x <= y
, but when called with JuMP variables or expressions, returns a GenericNonlinearExpr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> op_less_than_or_equal_to(2, 2)
true
julia> op_less_than_or_equal_to(x, 2)
x <= 2
op_or
JuMP.op_or
— Constantop_or(x, y)
A function that falls back to x | y
, but when called with JuMP variables or expressions, returns a GenericNonlinearExpr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> op_or(true, false)
true
julia> op_or(true, x)
true || x
op_strictly_greater_than
JuMP.op_strictly_greater_than
— Constantop_strictly_greater_than(x, y)
A function that falls back to x > y
, but when called with JuMP variables or expressions, returns a GenericNonlinearExpr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> op_strictly_greater_than(1, 2)
false
julia> op_strictly_greater_than(x, 2)
x > 2
op_strictly_less_than
JuMP.op_strictly_less_than
— Constantop_strictly_less_than(x, y)
A function that falls back to x < y
, but when called with JuMP variables or expressions, returns a GenericNonlinearExpr
.
Example
julia> model = Model();
julia> @variable(model, x);
julia> op_strictly_less_than(1, 2)
true
julia> op_strictly_less_than(x, 2)
x < 2
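These op_ functions are mainly useful for building comparison and logical expressions programmatically outside of JuMP's macros, where the infix operators would otherwise evaluate eagerly or error. A minimal sketch of composing them; the printed form of the expression may vary between versions, so it is suppressed here:
julia> model = Model();
julia> @variable(model, x);
julia> in_bounds = op_and(op_greater_than_or_equal_to(x, -1), op_less_than_or_equal_to(x, 1));
julia> in_bounds isa GenericNonlinearExpr
true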
Base.empty!(::GenericModel)
Base.empty!
— Methodempty!(model::GenericModel)::GenericModel
Empty the model, that is, remove all variables, constraints and model attributes but not optimizer attributes. Always return the argument.
Note: this also removes any extension data.
Base.isempty(::GenericModel)
Base.isempty
— Methodisempty(model::GenericModel)
Verifies whether the model is empty, that is, whether the MOI backend is empty and whether the model is in the same state as at its creation apart from optimizer attributes.
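A minimal sketch showing both methods together:
julia> model = Model();
julia> isempty(model)
true
julia> @variable(model, x);
julia> isempty(model)
false
julia> empty!(model);
julia> isempty(model)
true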
Base.copy(::AbstractModel)
Base.copy
— Methodcopy(model::AbstractModel)
Return a copy of the model model
. It is similar to copy_model
except that it does not return the mapping between the references of model
and its copy.
Note
Model copy is not supported in DIRECT
mode, i.e. when a model is constructed using the direct_model
constructor instead of the Model
constructor. Moreover, regardless of whether an optimizer was provided at model construction, the new model will have no optimizer; that is, an optimizer will have to be provided to the new model in the optimize!
call.
Example
In the following example, a model model
is constructed with a variable x
and a constraint cref
. It is then copied into a model new_model
with the new references assigned to x_new
and cref_new
.
julia> model = Model();
julia> @variable(model, x)
x
julia> @constraint(model, cref, x == 2)
cref : x = 2
julia> new_model = copy(model);
julia> x_new = new_model[:x]
x
julia> cref_new = new_model[:cref]
cref : x = 2
Base.write(::IO, ::GenericModel; ::MOI.FileFormats.FileFormat)
Base.write
— MethodBase.write(
io::IO,
model::GenericModel;
format::MOI.FileFormats.FileFormat = MOI.FileFormats.FORMAT_MOF,
kwargs...,
)
Write the JuMP model model
to io
in the format format
.
Other kwargs
are passed to the Model
constructor of the chosen format.
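A minimal sketch that writes a small model to an in-memory buffer in MPS format; MOI is re-exported by JuMP, so MOI.FileFormats is available after using JuMP:
julia> model = Model();
julia> @variable(model, x >= 0);
julia> @objective(model, Min, 2x);
julia> io = IOBuffer();
julia> write(io, model; format = MOI.FileFormats.FORMAT_MPS);
julia> mps = String(take!(io));  # the MPS text of the model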
MOI.Utilities.reset_optimizer(::GenericModel)
MathOptInterface.Utilities.reset_optimizer
— MethodMOIU.reset_optimizer(model::GenericModel)
Call MOIU.reset_optimizer
on the backend of model
.
Cannot be called in direct mode.
MOI.Utilities.drop_optimizer(::GenericModel)
MathOptInterface.Utilities.drop_optimizer
— MethodMOIU.drop_optimizer(model::GenericModel)
Call MOIU.drop_optimizer
on the backend of model
.
Cannot be called in direct mode.
MOI.Utilities.attach_optimizer(::GenericModel)
MathOptInterface.Utilities.attach_optimizer
— MethodMOIU.attach_optimizer(model::GenericModel)
Call MOIU.attach_optimizer
on the backend of model
.
Cannot be called in direct mode.
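A sketch of the typical lifecycle of these three calls on a model with a caching optimizer; it assumes the HiGHS solver package is installed (any optimizer works):
julia> using JuMP, HiGHS
julia> model = Model(HiGHS.Optimizer);
julia> MOI.Utilities.attach_optimizer(model)  # copy the cached problem into the optimizer
julia> MOI.Utilities.reset_optimizer(model)   # empty the optimizer but keep the cached problem
julia> MOI.Utilities.drop_optimizer(model)    # detach the optimizer entirely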