# Primal and dual warm-starts

This tutorial was generated using Literate.jl.

Some conic solvers have the ability to set warm-starts for the primal and dual solution. This can improve performance, particularly if you are repeatedly solving a sequence of related problems.

**Tip**

See `set_start_values` for a generic implementation of this function that was added to JuMP after this tutorial was written.

In this tutorial, we demonstrate how to write a function that sets the primal and dual starts to the optimal solution stored in a model. It is intended as a starting point that you can modify if you want to do something similar in your own code.

**Warning**

This tutorial does not set start values for nonlinear models.

This tutorial uses the following packages:

```julia
using JuMP
import SCS
```

The main component of this tutorial is the following function. The most important observation is that we cache all of the solution values first, and then we modify the model second. (Alternating between querying a value and modifying the model is not allowed in JuMP.)

```julia
function set_optimal_start_values(model::Model)
    # Store a mapping of the variable primal solution.
    variable_primal = Dict(x => value(x) for x in all_variables(model))
    # In the following, we loop through every constraint and store a mapping
    # from the constraint index to a tuple containing the primal and dual
    # solutions.
    constraint_solution = Dict()
    for (F, S) in list_of_constraint_types(model)
        # We add a try-catch here because some constraint types might not
        # support getting the primal or dual solution.
        try
            for ci in all_constraints(model, F, S)
                constraint_solution[ci] = (value(ci), dual(ci))
            end
        catch
            @info("Something went wrong getting $F-in-$S. Skipping")
        end
    end
    # Now we can loop through our cached solutions and set the starting values.
    for (x, primal_start) in variable_primal
        set_start_value(x, primal_start)
    end
    for (ci, (primal_start, dual_start)) in constraint_solution
        set_start_value(ci, primal_start)
        set_dual_start_value(ci, dual_start)
    end
    return
end
```

```
set_optimal_start_values (generic function with 1 method)
```
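The reason for caching first is that JuMP discards the stored solution as soon as the model is modified, after which querying a value throws an `OptimizeNotCalled` error. A minimal sketch of that behavior (the toy model below is our own illustration, not part of this tutorial's workflow):

```julia
using JuMP
import SCS

toy = Model(SCS.Optimizer)
set_silent(toy)
@variable(toy, x >= 0)
@objective(toy, Min, x)
optimize!(toy)
v = value(x)            # querying is fine: an optimal solution is stored
set_lower_bound(x, 1.0) # any modification invalidates the stored solution
# value(x)              # would now throw OptimizeNotCalled; re-optimize first
```

This is why `set_optimal_start_values` performs all of its `value` and `dual` queries before its first call to `set_start_value`.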

To test our function, we use the following linear program:

```julia
model = Model(SCS.Optimizer)
@variable(model, x[1:3] >= 0)
@constraint(model, sum(x) <= 1)
@objective(model, Max, sum(i * x[i] for i in 1:3))
optimize!(model)
```

```
------------------------------------------------------------------
SCS v3.2.3 - Splitting Conic Solver
(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem:  variables n: 3, constraints m: 4
cones: 	  l: linear vars: 4
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
max_iters: 100000, normalize: 1, rho_x: 1.00e-06
acceleration_lookback: 10, acceleration_interval: 10
lin-sys:  sparse-direct-amd-qdldl
nnz(A): 6, nnz(P): 0
------------------------------------------------------------------
iter | pri res | dua res |   gap   |   obj   |  scale  | time (s)
------------------------------------------------------------------
0| 4.42e+01  1.00e+00  1.28e+02 -6.64e+01  1.00e-01  6.62e-05
75| 5.30e-07  2.63e-06  3.15e-07 -3.00e+00  1.00e-01  1.22e-04
------------------------------------------------------------------
status:  solved
timings: total: 1.22e-04s = setup: 5.14e-05s + solve: 7.10e-05s
lin-sys: 1.32e-05s, cones: 6.70e-06s, accel: 4.00e-06s
------------------------------------------------------------------
objective = -2.999998
------------------------------------------------------------------
```

By looking at the log, we can see that SCS took 75 iterations to find the optimal solution. Now we set the optimal solution as our starting point:

```julia
set_optimal_start_values(model)
```

and we re-optimize:

```julia
optimize!(model)
```

```
------------------------------------------------------------------
SCS v3.2.3 - Splitting Conic Solver
(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem:  variables n: 3, constraints m: 4
cones: 	  l: linear vars: 4
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
max_iters: 100000, normalize: 1, rho_x: 1.00e-06
acceleration_lookback: 10, acceleration_interval: 10
lin-sys:  sparse-direct-amd-qdldl
nnz(A): 6, nnz(P): 0
------------------------------------------------------------------
iter | pri res | dua res |   gap   |   obj   |  scale  | time (s)
------------------------------------------------------------------
0| 1.90e-05  1.56e-06  9.14e-05 -3.00e+00  1.00e-01  6.82e-05
------------------------------------------------------------------
status:  solved
timings: total: 6.91e-05s = setup: 5.38e-05s + solve: 1.53e-05s
lin-sys: 8.00e-07s, cones: 1.30e-06s, accel: 0.00e+00s
------------------------------------------------------------------
objective = -3.000044
------------------------------------------------------------------
```

Now the optimization terminates after 0 iterations because our starting point is already optimal.

Note that some solvers do not support setting some parts of the starting solution; for example, they may support only `set_start_value` for variables. If you encounter an `MOI.UnsupportedAttribute` error for `MOI.VariablePrimalStart`, `MOI.ConstraintPrimalStart`, or `MOI.ConstraintDualStart`, comment out the corresponding part of the `set_optimal_start_values` function.
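If you prefer to detect support programmatically rather than commenting code out, one sketch (assuming a solver is already attached to the model; the variable name `has_primal_start` is our own) is to query `MOI.supports` for the relevant start attribute:

```julia
using JuMP
import MathOptInterface as MOI
import SCS

model = Model(SCS.Optimizer)
# Ask whether the solver accepts primal starts for variables. The same
# pattern works for MOI.ConstraintPrimalStart() and MOI.ConstraintDualStart()
# given a concrete MOI.ConstraintIndex{F,S} type.
has_primal_start = MOI.supports(
    backend(model),
    MOI.VariablePrimalStart(),
    MOI.VariableIndex,
)
```

You could then guard each block of `set_optimal_start_values` with such a check instead of editing the function per solver.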