Primal and dual warm-starts
This tutorial was generated using Literate.jl.
Some conic solvers accept warm-starts for the primal and dual solution. This can improve performance, particularly if you are solving a sequence of related problems.
The purpose of this tutorial is to demonstrate how to write a function that sets the primal and dual starts to the optimal solution stored in a model. It is intended as a starting point that you can modify if you want to do something similar in your own code.
See set_start_values for a generic implementation of this function that was added to JuMP after this tutorial was written.
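For example, the sketch below calls the built-in set_start_values on a toy model. The model (demo, with a variable y and a constraint c) is made up for illustration, and the behavior described in the comment is the function's documented default; check the set_start_values docstring for your version of JuMP.

using JuMP
import SCS

demo = Model(SCS.Optimizer)
set_silent(demo)
@variable(demo, y >= 0)
@constraint(demo, c, y <= 2)
@objective(demo, Max, y)
optimize!(demo)

# With its default arguments, `JuMP.set_start_values` copies the primal and
# dual solution stored in `demo` into the corresponding start attributes.
JuMP.set_start_values(demo)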
Required packages
This tutorial uses the following packages:
using JuMP
import SCS
A basic function
The main component of this tutorial is the following function. The most important observation is that we cache all of the solution values first, and then we modify the model second. (Querying a solution value after the model has been modified can throw an error in JuMP, because modifications invalidate the cached solution.)
function set_optimal_start_values(model::Model)
    # Store a mapping of the variable primal solution
    variable_primal = Dict(x => value(x) for x in all_variables(model))
    # In the following, we loop through every constraint and store a mapping
    # from the constraint index to a tuple containing the primal and dual
    # solutions.
    constraint_solution = Dict()
    for (F, S) in list_of_constraint_types(model)
        # We add a try-catch here because some constraint types might not
        # support getting the primal or dual solution.
        try
            for ci in all_constraints(model, F, S)
                constraint_solution[ci] = (value(ci), dual(ci))
            end
        catch
            @info("Something went wrong getting $F-in-$S. Skipping")
        end
    end
    # Now we can loop through our cached solutions and set the starting values.
    for (x, primal_start) in variable_primal
        set_start_value(x, primal_start)
    end
    for (ci, (primal_start, dual_start)) in constraint_solution
        set_start_value(ci, primal_start)
        set_dual_start_value(ci, dual_start)
    end
    return
end
set_optimal_start_values (generic function with 1 method)
Testing the function
To test our function, we use the following linear program:
model = Model(SCS.Optimizer)
@variable(model, x[1:3] >= 0)
@constraint(model, sum(x) <= 1)
@objective(model, Max, sum(i * x[i] for i in 1:3))
optimize!(model)
@assert is_solved_and_feasible(model)
------------------------------------------------------------------
SCS v3.2.6 - Splitting Conic Solver
(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem: variables n: 3, constraints m: 4
cones: l: linear vars: 4
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
max_iters: 100000, normalize: 1, rho_x: 1.00e-06
acceleration_lookback: 10, acceleration_interval: 10
compiled with openmp parallelization enabled
lin-sys: sparse-direct-amd-qdldl
nnz(A): 6, nnz(P): 0
------------------------------------------------------------------
iter | pri res | dua res | gap | obj | scale | time (s)
------------------------------------------------------------------
0| 4.42e+01 1.00e+00 1.28e+02 -6.64e+01 1.00e-01 1.08e-04
75| 5.30e-07 2.63e-06 3.15e-07 -3.00e+00 1.00e-01 1.55e-04
------------------------------------------------------------------
status: solved
timings: total: 1.56e-04s = setup: 3.57e-05s + solve: 1.21e-04s
lin-sys: 1.38e-05s, cones: 6.88e-06s, accel: 3.63e-06s
------------------------------------------------------------------
objective = -2.999998
------------------------------------------------------------------
By looking at the log, we can see that SCS took 75 iterations to find the optimal solution. Now we set the optimal solution as our starting point:
set_optimal_start_values(model)
and we re-optimize:
optimize!(model)
------------------------------------------------------------------
SCS v3.2.6 - Splitting Conic Solver
(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem: variables n: 3, constraints m: 4
cones: l: linear vars: 4
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
max_iters: 100000, normalize: 1, rho_x: 1.00e-06
acceleration_lookback: 10, acceleration_interval: 10
compiled with openmp parallelization enabled
lin-sys: sparse-direct-amd-qdldl
nnz(A): 6, nnz(P): 0
------------------------------------------------------------------
iter | pri res | dua res | gap | obj | scale | time (s)
------------------------------------------------------------------
0| 1.90e-05 1.56e-06 9.14e-05 -3.00e+00 1.00e-01 8.24e-05
------------------------------------------------------------------
status: solved
timings: total: 8.33e-05s = setup: 3.72e-05s + solve: 4.61e-05s
lin-sys: 7.51e-07s, cones: 1.57e-06s, accel: 3.00e-08s
------------------------------------------------------------------
objective = -3.000044
------------------------------------------------------------------
Now the optimization terminates after 0 iterations because our starting point is already optimal.
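This pattern is most useful when solving a sequence of related problems, as mentioned at the start of this tutorial. The sketch below shows one way to use set_optimal_start_values in such a loop, reusing model and x from above; the way the objective is perturbed between solves is an assumption purely for illustration.

# Re-solve a sequence of slightly different problems, warm-starting each solve
# from the previous optimal solution. The objective perturbation is made up.
for weight in (1.0, 1.1, 1.2)
    @objective(model, Max, weight * sum(i * x[i] for i in 1:3))
    optimize!(model)
    @assert is_solved_and_feasible(model)
    # Cache this solution as the starting point for the next solve.
    set_optimal_start_values(model)
end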
Caveats
Some solvers do not support setting every part of the starting solution; for example, they may support only set_start_value for variables.
If you encounter an UnsupportedAttribute error for MOI.VariablePrimalStart, MOI.ConstraintPrimalStart, or MOI.ConstraintDualStart, comment out the corresponding part of the set_optimal_start_values function.
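If you do this often, one alternative to commenting code out is a variant of the function with a switch for each class of start value. The sketch below is not part of JuMP or of the tutorial above; the function name and keyword arguments are hypothetical.

# A hypothetical variant of `set_optimal_start_values` with keyword switches,
# so that unsupported start attributes can be skipped without editing the code.
function set_selected_start_values(
    model::Model;
    variable_primal::Bool = true,
    constraint_primal::Bool = true,
    constraint_dual::Bool = true,
)
    # Cache everything first, exactly as in `set_optimal_start_values`.
    variable_start = Dict(x => value(x) for x in all_variables(model))
    constraint_start = Dict()
    for (F, S) in list_of_constraint_types(model)
        try
            for ci in all_constraints(model, F, S)
                constraint_start[ci] = (value(ci), dual(ci))
            end
        catch
            @info("Something went wrong getting $F-in-$S. Skipping")
        end
    end
    # Then set only the parts that the solver supports.
    if variable_primal
        for (x, primal) in variable_start
            set_start_value(x, primal)
        end
    end
    for (ci, (primal, dual_solution)) in constraint_start
        if constraint_primal
            set_start_value(ci, primal)
        end
        if constraint_dual
            set_dual_start_value(ci, dual_solution)
        end
    end
    return
end

For example, set_selected_start_values(model; constraint_dual = false) sets the variable and constraint primal starts but never touches MOI.ConstraintDualStart.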