Arbitrary precision arithmetic
This tutorial was generated using Literate.jl. Download the source as a .jl file.
The purpose of this tutorial is to explain how to use a solver which supports arithmetic using a number type other than Float64.
Required packages
This tutorial uses the following packages:
using JuMP
import CDDLib
import Clarabel

Higher-precision arithmetic
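As an aside that is not part of the original tutorial: a BigFloat carries 256 bits of precision by default (roughly 77 significant decimal digits). If you need more, you can change the precision with Julia's setprecision before building the model. A minimal sketch:

precision(BigFloat)  # 256 bits by default
setprecision(BigFloat, 512) do
    sqrt(big"2")  # this value is computed with 512 bits of precision
end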
To create a model with a number type other than Float64, use GenericModel with an optimizer which supports the same number type:
model = GenericModel{BigFloat}(Clarabel.Optimizer{BigFloat})

A JuMP Model
├ value_type: BigFloat
├ solver: Clarabel
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

The syntax for adding decision variables is the same as a normal JuMP model, except that values are converted to BigFloat:
@variable(model, -1 <= x[1:2] <= sqrt(big"2"))

2-element Vector{GenericVariableRef{BigFloat}}:
x[1]
x[2]

Note that each x is now a GenericVariableRef{BigFloat}, which means that the value of x in a solution will be a BigFloat.
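A brief note on the bound sqrt(big"2"): writing sqrt(2.0) would evaluate the square root in Float64 and only then convert the result, discarding the extra digits. The following comparison is an illustration, not part of the original tutorial:

BigFloat(sqrt(2.0))  # accurate to roughly 16 significant digits
sqrt(big"2")         # accurate to the full BigFloat precision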
The lower and upper bounds of the decision variables are also BigFloat:
lower_bound(x[1])

-1.0

typeof(lower_bound(x[1]))

BigFloat

upper_bound(x[2])

1.414213562373095048801688724209698078569671875376948073176679737990732478462102

typeof(upper_bound(x[2]))

BigFloat

The syntax for adding constraints is the same as a normal JuMP model, except that coefficients are converted to BigFloat:
@constraint(model, c, x[1] == big"2" * x[2])

\[ x_{1} - 2.0 x_{2} = 0.0 \]
The function is a GenericAffExpr with BigFloat for the coefficient and variable types;
constraint = constraint_object(c)
typeof(constraint.func)

GenericAffExpr{BigFloat, GenericVariableRef{BigFloat}}

and the set is a MOI.EqualTo{BigFloat}:
typeof(constraint.set)

MathOptInterface.EqualTo{BigFloat}
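Individual coefficients can also be queried with JuMP's normalized_coefficient, and they too are returned as BigFloat. This is an illustrative aside rather than part of the original tutorial:

normalized_coefficient(c, x[2])          # -2.0, stored as a BigFloat
typeof(normalized_coefficient(c, x[2]))  # BigFloat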
The syntax for adding an objective is the same as a normal JuMP model, except that coefficients are converted to BigFloat:

@objective(model, Min, 3x[1]^2 + 2x[2]^2 - x[1] - big"4" * x[2])

\[ 3.0 x_{1}^2 + 2.0 x_{2}^2 - x_{1} - 4.0 x_{2} \]
typeof(objective_function(model))

GenericQuadExpr{BigFloat, GenericVariableRef{BigFloat}}

Here's the model we have built:
print(model)

Min 3.0 x[1]² + 2.0 x[2]² - x[1] - 4.0 x[2]
Subject to
c : x[1] - 2.0 x[2] = 0.0
x[1] ≥ -1.0
x[2] ≥ -1.0
x[1] ≤ 1.414213562373095048801688724209698078569671875376948073176679737990732478462102
x[2] ≤ 1.414213562373095048801688724209698078569671875376948073176679737990732478462102

Let's solve and inspect the solution:
optimize!(model)
assert_is_solved_and_feasible(model; dual = true)
solution_summary(model)

solution_summary(; result = 1, verbose = false)
├ solver_name : Clarabel
├ Termination
│ ├ termination_status : OPTIMAL
│ ├ result_count : 1
│ └ raw_status : SOLVED
├ Solution (result = 1)
│ ├ primal_status : FEASIBLE_POINT
│ ├ dual_status : FEASIBLE_POINT
│ ├ objective_value : -6.42857e-01
│ └ dual_objective_value : -6.42857e-01
└ Work counters
├ solve_time (sec) : 1.73088e-03
└ barrier_iterations : 5

The value of each decision variable is a BigFloat:
value.(x)

2-element Vector{BigFloat}:
0.4285714246558161076147072906813123533593766450416896337912086518811186790735189
0.2142857123279078924828007272730108809297577877991360649674411645247653239673801

as well as other solution attributes like the objective value:
objective_value(model)

-0.6428571428571422964607590389935242587959291815638830868454759876473734138856053

and the dual solution:
dual(c)

1.571428571977140845343978069015092190548250919787945065022059071052557047888032
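The dual value is likewise a BigFloat. A quick check, added here for illustration:

typeof(dual(c))  # BigFloat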
This problem has an analytic solution of x = [3//7, 3//14]. Currently, our solution has an error of approximately 1e-9:

value.(x) .- [3 // 7, 3 // 14]

2-element Vector{BigFloat}:
-3.915612463813864137890116218069194783529738937637362776690309892355053792476207e-09
-1.957806393231484987012703404784527926486578220746844549760948961746906215408591e-09

But by reducing the tolerances, we can obtain a more accurate solution:
set_attribute(model, "tol_gap_abs", 1e-32)
set_attribute(model, "tol_gap_rel", 1e-32)
optimize!(model)
assert_is_solved_and_feasible(model)
value.(x) .- [3 // 7, 3 // 14]

2-element Vector{BigFloat}:
-3.915612463813864137890116218069194783529738937637362776690309892355053792476207e-09
-1.957806393231484987012703404784527926486578220746844549760948961746906215408591e-09
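As an alternative to calling set_attribute after the model is built, the same tolerances can be attached when the optimizer is constructed using optimizer_with_attributes. The model name below is hypothetical, and the attribute names are assumed to be the same Clarabel options used above; treat this as a sketch:

model_hi_accuracy = GenericModel{BigFloat}(
    optimizer_with_attributes(
        Clarabel.Optimizer{BigFloat},
        "tol_gap_abs" => 1e-32,  # same tolerance attributes as set above
        "tol_gap_rel" => 1e-32,
    ),
)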
Rational arithmetic

In addition to higher-precision floating point number types like BigFloat, JuMP also supports solvers with exact rational arithmetic. One example is CDDLib.jl, which supports the Rational{BigInt} number type:
model = GenericModel{Rational{BigInt}}(CDDLib.Optimizer{Rational{BigInt}})

A JuMP Model
├ value_type: Rational{BigInt}
├ solver: CDD
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

As before, we can create variables using rational bounds:
@variable(model, 1 // 7 <= x[1:2] <= 2 // 3)

2-element Vector{GenericVariableRef{Rational{BigInt}}}:
x[1]
x[2]

lower_bound(x[1])

1//7

typeof(lower_bound(x[1]))

Rational{BigInt}

As well as constraints:
@constraint(model, c1, (2 // 1) * x[1] + x[2] <= 1)

\[ 2//1 x_{1} + x_{2} \leq 1//1 \]
@constraint(model, c2, x[1] + 3x[2] <= 9 // 4)

\[ x_{1} + 3//1 x_{2} \leq 9//4 \]
and objective functions:
@objective(model, Max, sum(x))

\[ x_{1} + x_{2} \]
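One caution that is not part of the original tutorial: a Float64 literal such as 0.1 is converted to the exact rational value of its binary representation, which is usually not the rational you intended, so write rational coefficients explicitly:

convert(Rational{BigInt}, 0.1)  # 3602879701896397//36028797018963968
big"1" // 10                    # 1//10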
Here's the model we have built:
print(model)

Max x[1] + x[2]
Subject to
c1 : 2//1 x[1] + x[2] ≤ 1//1
c2 : x[1] + 3//1 x[2] ≤ 9//4
x[1] ≥ 1//7
x[2] ≥ 1//7
x[1] ≤ 2//3
x[2] ≤ 2//3

Let's solve and inspect the solution:
optimize!(model)
assert_is_solved_and_feasible(model)
solution_summary(model)

solution_summary(; result = 1, verbose = false)
├ solver_name : CDD
├ Termination
│ ├ termination_status : OPTIMAL
│ ├ result_count : 1
│ └ raw_status : Optimal
└ Solution (result = 1)
├ primal_status : FEASIBLE_POINT
├ dual_status : NO_SOLUTION
└ objective_value : 5//6

The optimal values are given in exact rational arithmetic:
value.(x)

2-element Vector{Rational{BigInt}}:
1//6
2//3

objective_value(model)

5//6

value(c2)

13//6
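Because the arithmetic is exact, equality comparisons on the solution are meaningful. The following checks are an illustration added to this writeup, not part of the original tutorial:

value(c1) == 1                            # c1 is tight at the optimum
sum(value.(x)) == objective_value(model)  # 1//6 + 2//3 == 5//6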