Solution API

More information can be found in the Querying Solutions section of the manual.

JuMP.optimize! (Function)
optimize!(model::Model;
          ignore_optimize_hook=(model.optimize_hook === nothing),
          kwargs...)

Optimize the model. If an optimizer has not been set yet (see set_optimizer), a NoOptimizer error is thrown.

Keyword arguments kwargs are passed to the optimize_hook. An error is thrown if optimize_hook is nothing and keyword arguments are provided.
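
A minimal usage sketch (GLPK is used here purely as an example solver; any MathOptInterface-compatible optimizer works the same way):

using JuMP, GLPK

model = Model(GLPK.Optimizer)   # attach an optimizer at construction time
@variable(model, x >= 0)
@objective(model, Min, 2x)
optimize!(model)                # throws NoOptimizer if no optimizer was set

# Passing keyword arguments to optimize! is only valid when an optimize_hook is set.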

source
JuMP.termination_status (Function)
termination_status(model::Model)

Return the reason why the solver stopped (i.e., the MathOptInterface model attribute TerminationStatus).

source
MathOptInterface.TerminationStatusCode (Type)
TerminationStatusCode

An Enum of possible values for the TerminationStatus attribute. This attribute is meant to explain the reason why the optimizer stopped executing in the most recent call to optimize!.

If no call has been made to optimize!, then the TerminationStatus is:

  • OPTIMIZE_NOT_CALLED: The algorithm has not started.

OK

These are generally OK statuses, i.e., the algorithm ran to completion normally.

  • OPTIMAL: The algorithm found a globally optimal solution.
  • INFEASIBLE: The algorithm concluded that no feasible solution exists.
  • DUAL_INFEASIBLE: The algorithm concluded that no dual bound exists for the problem. If, additionally, a feasible (primal) solution is known to exist, this status typically implies that the problem is unbounded, with some technical exceptions.
  • LOCALLY_SOLVED: The algorithm converged to a stationary point, local optimal solution, could not find directions for improvement, or otherwise completed its search without global guarantees.
  • LOCALLY_INFEASIBLE: The algorithm converged to an infeasible point or otherwise completed its search without finding a feasible solution, without guarantees that no feasible solution exists.
  • INFEASIBLE_OR_UNBOUNDED: The algorithm stopped because it decided that the problem is infeasible or unbounded; this occasionally happens during MIP presolve.

Solved to relaxed tolerances

  • ALMOST_OPTIMAL: The algorithm found a globally optimal solution to relaxed tolerances.
  • ALMOST_INFEASIBLE: The algorithm concluded that no feasible solution exists within relaxed tolerances.
  • ALMOST_DUAL_INFEASIBLE: The algorithm concluded that no dual bound exists for the problem within relaxed tolerances.
  • ALMOST_LOCALLY_SOLVED: The algorithm converged to a stationary point, local optimal solution, or could not find directions for improvement within relaxed tolerances.

Limits

The optimizer stopped because of some user-defined limit.

  • ITERATION_LIMIT: An iterative algorithm stopped after conducting the maximum number of iterations.
  • TIME_LIMIT: The algorithm stopped after a user-specified computation time.
  • NODE_LIMIT: A branch-and-bound algorithm stopped because it explored a maximum number of nodes in the branch-and-bound tree.
  • SOLUTION_LIMIT: The algorithm stopped because it found the required number of solutions. This is often used in MIPs to get the solver to return the first feasible solution it encounters.
  • MEMORY_LIMIT: The algorithm stopped because it ran out of memory.
  • OBJECTIVE_LIMIT: The algorithm stopped because it found a solution better than a minimum limit set by the user.
  • NORM_LIMIT: The algorithm stopped because the norm of an iterate became too large.
  • OTHER_LIMIT: The algorithm stopped due to a limit not covered by one of the above.

Problematic

This group of statuses means that something unexpected or problematic happened.

  • SLOW_PROGRESS: The algorithm stopped because it was unable to continue making progress towards the solution.
  • NUMERICAL_ERROR: The algorithm stopped because it encountered unrecoverable numerical error.
  • INVALID_MODEL: The algorithm stopped because the model is invalid.
  • INVALID_OPTION: The algorithm stopped because it was provided an invalid option.
  • INTERRUPTED: The algorithm stopped because of an interrupt signal.
  • OTHER_ERROR: The algorithm stopped because of an error not covered by one of the statuses defined above.
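
As an illustrative (not exhaustive) sketch of how these codes are typically inspected after a solve:

status = termination_status(model)
if status == MOI.OPTIMAL
    println("Global optimum found: ", objective_value(model))
elseif status == MOI.TIME_LIMIT && has_values(model)
    println("Time limit reached; best incumbent: ", objective_value(model))
elseif status in (MOI.INFEASIBLE, MOI.LOCALLY_INFEASIBLE)
    println("No feasible solution was found")
else
    println("Solver stopped with status $status")
end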
source
JuMP.raw_status (Function)
raw_status(model::Model)

Return the reason why the solver stopped in its own words (i.e., the MathOptInterface model attribute RawStatusString).

source
JuMP.primal_status (Function)
primal_status(model::Model; result::Int = 1)

Return the status of the most recent primal solution of the solver (i.e., the MathOptInterface model attribute PrimalStatus) associated with the result index result.

See also: result_count.

source
MathOptInterface.ResultStatusCode (Type)
ResultStatusCode

An Enum of possible values for the PrimalStatus and DualStatus attributes. The values indicate how to interpret the result vector.

  • NO_SOLUTION: the result vector is empty.
  • FEASIBLE_POINT: the result vector is a feasible point.
  • NEARLY_FEASIBLE_POINT: the result vector is feasible if some constraint tolerances are relaxed.
  • INFEASIBLE_POINT: the result vector is an infeasible point.
  • INFEASIBILITY_CERTIFICATE: the result vector is an infeasibility certificate. If the PrimalStatus is INFEASIBILITY_CERTIFICATE, then the primal result vector is a certificate of dual infeasibility. If the DualStatus is INFEASIBILITY_CERTIFICATE, then the dual result vector is a proof of primal infeasibility.
  • NEARLY_INFEASIBILITY_CERTIFICATE: the result satisfies a relaxed criterion for a certificate of infeasibility.
  • REDUCTION_CERTIFICATE: the result vector is an ill-posed certificate; see this article for details. If the PrimalStatus is REDUCTION_CERTIFICATE, then the primal result vector is a proof that the dual problem is ill-posed. If the DualStatus is REDUCTION_CERTIFICATE, then the dual result vector is a proof that the primal is ill-posed.
  • NEARLY_REDUCTION_CERTIFICATE: the result satisfies a relaxed criterion for an ill-posed certificate.
  • UNKNOWN_RESULT_STATUS: the result vector contains a solution with an unknown interpretation.
  • OTHER_RESULT_STATUS: the result vector contains a solution with an interpretation not covered by one of the statuses defined above.
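
A small sketch of how these codes are commonly checked before trusting a result vector, assuming a model with a variable x and a constraint reference c (names are illustrative):

if primal_status(model) == MOI.FEASIBLE_POINT
    x_val = value(x)            # the primal result is a feasible point
end
if dual_status(model) == MOI.FEASIBLE_POINT
    y_val = dual(c)             # the dual result is a feasible point
elseif dual_status(model) == MOI.INFEASIBILITY_CERTIFICATE
    ray = dual(c)               # the dual result proves primal infeasibility
end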
source
JuMP.has_values (Function)
has_values(model::Model; result::Int = 1)

Return true if the solver has a primal solution in result index result available to query, otherwise return false.

See also value and result_count.
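
A typical guard pattern (sketch; x is an illustrative variable of the model):

optimize!(model)
if has_values(model)
    println("x = ", value(x))   # safe: a primal solution exists at result index 1
else
    println("No primal solution; status: ", termination_status(model))
end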

source
JuMP.value (Function)
value(con_ref::ConstraintRef; result::Int = 1)

Return the primal value of constraint con_ref associated with result index result of the most-recent solution returned by the solver.

That is, if con_ref is the reference of a constraint func-in-set, it returns the value of func evaluated at the value of the variables (given by value(::VariableRef)).

Use has_values to check if a result exists before asking for values.

See also: result_count.

Note

For scalar constraints, the constant is moved to the set, so it is not taken into account in the primal value of the constraint. For instance, the constraint @constraint(model, 2x + 3y + 1 == 5) is transformed into 2x + 3y-in-MOI.EqualTo(4), so the value returned by this function is the evaluation of 2x + 3y.
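
To illustrate the note, a minimal sketch (GLPK as an example solver; any LP-capable optimizer behaves the same):

using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, x)
@variable(model, y)
@constraint(model, c, 2x + 3y + 1 == 5)   # stored as 2x + 3y in MOI.EqualTo(4)
optimize!(model)
value(c)                                   # evaluates 2x + 3y, i.e., 4.0, not 5.0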

source
value(con_ref::ConstraintRef, var_value::Function)

Evaluate the primal value of the constraint con_ref using var_value(v) as the value for each variable v.
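
For example, reusing the constraint c from the sketch above, the constraint function can be evaluated at an arbitrary point without solving:

point = Dict(x => 1.0, y => 2.0)
value(c, v -> point[v])   # 2 * 1.0 + 3 * 2.0 = 8.0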

source
value(v::VariableRef; result = 1)

Return the value of variable v associated with result index result of the most-recent solution returned by the solver.

Use has_values to check if a result exists before asking for values.

See also: result_count.

source
value(v::VariableRef, var_value::Function)

Evaluate the value of the variable v as var_value(v).

source
value(ex::GenericAffExpr, var_value::Function)

Evaluate ex using var_value(v) as the value for each variable v.

source
value(v::GenericAffExpr; result::Int = 1)

Return the value of the GenericAffExpr v associated with result index result of the most-recent solution returned by the solver.

Replaces getvalue for most use cases.

See also: result_count.

source
value(v::GenericQuadExpr; result::Int = 1)

Return the value of the GenericQuadExpr v associated with result index result of the most-recent solution returned by the solver.

Replaces getvalue for most use cases.

See also: result_count.

source
value(p::NonlinearParameter)

Return the current value stored in the nonlinear parameter p.

Example

model = Model()
@NLparameter(model, p == 10)
value(p)

# output
10.0
source
value(ex::NonlinearExpression, var_value::Function)

Evaluate ex using var_value(v) as the value for each variable v.

source
value(ex::NonlinearExpression; result::Int = 1)

Return the value of the NonlinearExpression ex associated with result index result of the most-recent solution returned by the solver.

Replaces getvalue for most use cases.

See also: result_count.

source
JuMP.dual_status (Function)
dual_status(model::Model; result::Int = 1)

Return the status of the most recent dual solution of the solver (i.e., the MathOptInterface model attribute DualStatus) associated with the result index result.

See also: result_count.

source
JuMP.dual (Function)
dual(con_ref::ConstraintRef; result::Int = 1)

Return the dual value of constraint con_ref associated with result index result of the most-recent solution returned by the solver.

Use has_duals to check if a result exists before asking for values.

See also: result_count, shadow_price.
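
A guarded query sketch, assuming a solved model with a constraint reference c:

optimize!(model)
if has_duals(model)
    println("dual = ", dual(c))                   # conic (MOI) duality convention
    println("shadow price = ", shadow_price(c))   # sign adjusted for the objective sense
end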

source
JuMP.shadow_price (Function)
shadow_price(con_ref::ConstraintRef)

Return the change in the objective from an infinitesimal relaxation of the constraint.

This value is computed from dual and can be queried only when has_duals is true and the objective sense is MIN_SENSE or MAX_SENSE (not FEASIBILITY_SENSE). For linear constraints, the shadow prices differ at most in sign from the dual value depending on the objective sense.

See also reduced_cost.

Notes

  • The function simply translates signs from dual and does not validate the conditions needed to guarantee the sensitivity interpretation of the shadow price. The caller is responsible, e.g., for checking whether the solver converged to an optimal primal-dual pair or a proof of infeasibility.
  • The computation is based on the current objective sense of the model. If this has changed since the last solve, the results will be incorrect.
  • Relaxation of equality constraints (and hence the shadow price) is defined based on which sense of the equality constraint is active.
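
A small sketch contrasting dual and shadow_price on a simple LP (GLPK as an example solver; no specific signs are asserted here, since they depend on the conventions described above):

using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, x >= 0)
@constraint(model, c, x <= 3)
@objective(model, Max, 2x)
optimize!(model)

dual(c)           # conic dual value; its sign convention is independent of the objective sense
shadow_price(c)   # objective change per unit relaxation of c; may differ from dual(c) in sign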
source
JuMP.reduced_cost (Function)
reduced_cost(x::VariableRef)::Float64

Return the reduced cost associated with variable x.

Equivalent to querying the shadow price of the active variable bound (if one exists and is active).

See also: shadow_price.
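
A self-contained sketch (GLPK as an example solver):

using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, 0 <= z <= 1)
@objective(model, Min, 3z)
optimize!(model)
reduced_cost(z)   # z sits at its lower bound; the reduced cost is 3.0, the rate at which
                  # the objective worsens if z is pushed away from that bound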

source
JuMP.objective_bound (Function)
objective_bound(model::Model)

Return the best known bound on the optimal objective value after a call to optimize!(model).

source
JuMP.objective_value (Function)
objective_value(model::Model; result::Int = 1)

Return the objective value associated with result index result of the most-recent solution returned by the solver.

See also: result_count.
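
A sketch tying together the objective-related queries for a mixed-integer model (GLPK as an example solver; the bound and gap are only available when the solver implements the corresponding MOI attributes):

using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, x, Int)
@constraint(model, 2x >= 5)
@objective(model, Min, x)
optimize!(model)

objective_value(model)   # 3.0, the objective of the incumbent solution
objective_bound(model)   # best proven bound on the optimal objective
relative_gap(model)      # gap between value and bound; exact definition is solver-dependent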

source
JuMP.dual_objective_value (Function)
dual_objective_value(model::Model; result::Int = 1)

Return the value of the objective of the dual problem associated with result index result of the most-recent solution returned by the solver.

Throws MOI.UnsupportedAttribute{MOI.DualObjectiveValue} if the solver does not support this attribute.

See also: result_count.

source
JuMP.solve_time (Function)
solve_time(model::Model)

If available, returns the solve time reported by the solver. Throws an error of the form "ArgumentError: ModelLike of type Solver.Optimizer does not support accessing the attribute MathOptInterface.SolveTime()" if the solver does not implement the attribute.

source
JuMP.relative_gap (Function)
relative_gap(model::Model)

Return the final relative optimality gap after a call to optimize!(model). Exact value depends upon implementation of MathOptInterface.RelativeGap() by the particular solver used for optimization.

source
JuMP.simplex_iterations (Function)
simplex_iterations(model::Model)

Gets the cumulative number of simplex iterations during the most-recent optimization.

Solvers must implement MOI.SimplexIterations() to use this function.

source
JuMP.barrier_iterations (Function)
barrier_iterations(model::Model)

Gets the cumulative number of barrier iterations during the most recent optimization.

Solvers must implement MOI.BarrierIterations() to use this function.

source
JuMP.node_count (Function)
node_count(model::Model)

Gets the total number of branch-and-bound nodes explored during the most recent optimization in a Mixed Integer Program.

Solvers must implement MOI.NodeCount() to use this function.

source
JuMP.compute_conflict! (Function)
compute_conflict!(model::Model)

Compute a conflict if the model is infeasible. If an optimizer has not been set yet (see set_optimizer), a NoOptimizer error is thrown.

The status of the conflict can be checked with the MOI.ConflictStatus model attribute. Then, the status for each constraint can be queried with the MOI.ConstraintConflictStatus attribute.
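
A hedged workflow sketch, assuming a solver that supports conflict (IIS) computation; Gurobi is used here only as an illustration:

using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
@variable(model, x >= 0)
@constraint(model, c1, x <= 1)
@constraint(model, c2, x >= 2)   # incompatible with c1, so the model is infeasible
optimize!(model)

compute_conflict!(model)
if MOI.get(model, MOI.ConflictStatus()) == MOI.CONFLICT_FOUND
    for con in (c1, c2)
        # NOT_IN_CONFLICT, IN_CONFLICT, or MAYBE_IN_CONFLICT
        println(con, " => ", MOI.get(model, MOI.ConstraintConflictStatus(), con))
    end
end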

source
MathOptInterface.compute_conflict! (Function)
compute_conflict!(optimizer::AbstractOptimizer)

Computes a minimal subset of constraints such that the model with the other constraints removed is still infeasible.

Some solvers call a set of conflicting constraints an Irreducible Inconsistent Subsystem (IIS).

See also ConflictStatus and ConstraintConflictStatus.

Note

If the model is modified after a call to compute_conflict!, the implementor is not obliged to purge the conflict. Any calls to the above attributes may return values for the original conflict without a warning. Similarly, when modifying the model, the conflict can be discarded.

MathOptInterface.ConflictStatusCode (Type)
ConflictStatusCode

An Enum of possible values for the ConflictStatus attribute. This attribute is meant to explain the reason why the conflict finder stopped executing in the most recent call to compute_conflict!.

Possible values are:

  • COMPUTE_CONFLICT_NOT_CALLED: the function compute_conflict! has not yet been called
  • NO_CONFLICT_EXISTS: there is no conflict because the problem is feasible
  • NO_CONFLICT_FOUND: the solver could not find a conflict
  • CONFLICT_FOUND: at least one conflict could be found
source
MathOptInterface.ConflictParticipationStatusCode (Type)
ConflictParticipationStatusCode

An Enum of possible values for the ConstraintConflictStatus attribute. This attribute is meant to indicate whether a given constraint participates or not in the last computed conflict.

Possible values are:

  • NOT_IN_CONFLICT: the constraint does not participate in the conflict
  • IN_CONFLICT: the constraint participates in the conflict
  • MAYBE_IN_CONFLICT: the constraint may participate in the conflict, the solver was not able to prove that the constraint can be excluded from the conflict
source
JuMP.lp_objective_perturbation_range (Function)
lp_objective_perturbation_range(var::VariableRef;
                                optimality_tolerance::Float64)
                                ::Tuple{Float64, Float64}

Gives the range by which the cost coefficient can change and the current LP basis remains optimal, i.e., the reduced costs remain valid.

Notes

  • The range denotes valid changes, Δ ∈ [l, u], for which cost[var] += Δ does not violate the current optimality conditions.
  • optimality_tolerance is the dual feasibility tolerance; this should preferably match the tolerance used by the solver. The default tolerance should, however, apply in most situations (c.f. "Computational Techniques of the Simplex Method" by István Maros, section 9.3.4).
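
A usage sketch (GLPK as an example solver; the exact range returned depends on the optimal basis reported by the solver):

using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, 0 <= x <= 1)
@objective(model, Min, 2x)
optimize!(model)

l, u = lp_objective_perturbation_range(x)
# The objective coefficient of x (here 2.0) can change by any Δ in [l, u]
# without invalidating the current optimal basis.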
source
JuMP.lp_rhs_perturbation_range (Function)
lp_rhs_perturbation_range(constraint::ConstraintRef;
                          feasibility_tolerance::Float64)
                          ::Tuple{Float64, Float64}

Gives the range by which the rhs coefficient can change and the current LP basis remains feasible, i.e., where the shadow prices apply.

Notes

  • The rhs coefficient is the value to the right of the relation, i.e., b for a constraint of the form a*x □ b, where □ is ≤, =, or ≥.
  • The range denotes valid changes, e.g., for a*x <= b + Δ, the LP basis remains feasible for all Δ ∈ [l, u].
  • feasibility_tolerance is the primal feasibility tolerance; this should preferably match the tolerance used by the solver. The default tolerance should, however, apply in most situations (c.f. "Computational Techniques of the Simplex Method" by István Maros, section 9.3.4).
source
JuMP.lp_sensitivity_report (Function)
lp_sensitivity_report(model::Model; atol::Float64 = 1e-8)::SensitivityReport

Given a linear program model with a current optimal basis, return a SensitivityReport object, which maps:

  • Every variable reference to a tuple (d_lo, d_hi)::Tuple{Float64,Float64}, explaining how much the objective coefficient of the corresponding variable can change by, such that the original basis remains optimal.
  • Every constraint reference to a tuple (d_lo, d_hi)::Tuple{Float64,Float64}, explaining how much the right-hand side of the corresponding constraint can change by, such that the basis remains optimal.

Both tuples are relative, rather than absolute. So, given an objective coefficient of 1.0 and a tuple (-0.5, 0.5), the objective coefficient can range between 1.0 - 0.5 and 1.0 + 0.5.

atol is the primal/dual optimality tolerance, and should match the tolerance of the solver used to compute the basis.

Note: interval constraints are NOT supported.

Example

model = Model(GLPK.Optimizer)
@variable(model, -1 <= x <= 2)
@objective(model, Min, x)
optimize!(model)
report = lp_sensitivity_report(model; atol = 1e-7)
dx_lo, dx_hi = report[x]
println(
    "The objective coefficient of `x` can decrease by $dx_lo or " *
    "increase by $dx_hi."
)
c = LowerBoundRef(x)
dRHS_lo, dRHS_hi = report[c]
println(
    "The lower bound of `x` can decrease by $dRHS_lo or increase " *
    "by $dRHS_hi."
)
source