Sensitivity analysis of a linear program
This tutorial was generated using Literate.jl. Download the source as a .jl file.
This tutorial explains how to use the lp_sensitivity_report function to create sensitivity reports like those produced by the Excel Solver. This is most often used in introductory classes on linear programming.
In brief, sensitivity analysis of a linear program is about asking two questions:
- Given an optimal solution, how much can the objective coefficients change before a different solution becomes optimal?
- Given an optimal solution, how much can the right-hand side of a linear constraint change before a different solution becomes optimal?
JuMP provides a function, lp_sensitivity_report, to help us compute these values, but this tutorial extends that to create more informative tables in the form of a DataFrame.
Setup
This tutorial uses the following packages:
using JuMP
import HiGHS
import DataFrames
as well as this small linear program:
model = Model(HiGHS.Optimizer)
@variable(model, x >= 0)
@variable(model, 0 <= y <= 3)
@variable(model, z <= 1)
@objective(model, Min, 12x + 20y - z)
@constraint(model, c1, 6x + 8y >= 100)
@constraint(model, c2, 7x + 12y >= 120)
@constraint(model, c3, x + y <= 20)
optimize!(model)
@assert is_solved_and_feasible(model)
solution_summary(model; verbose = true)
* Solver : HiGHS
* Status
Result count : 1
Termination status : OPTIMAL
Message from the solver:
"kHighsModelStatusOptimal"
* Candidate solution (result #1)
Primal status : FEASIBLE_POINT
Dual status : FEASIBLE_POINT
Objective value : 2.04000e+02
Objective bound : 2.04000e+02
Relative gap : 0.00000e+00
Dual objective value : 2.04000e+02
Primal solution :
x : 1.50000e+01
y : 1.25000e+00
z : 1.00000e+00
Dual solution :
c1 : 2.50000e-01
c2 : 1.50000e+00
c3 : 0.00000e+00
* Work counters
Solve time (sec) : 2.84433e-04
Simplex iterations : 2
Barrier iterations : 0
Node count : -1
Can you identify:
- The objective coefficient of each variable?
- The right-hand side of each constraint?
- The optimal primal and dual solutions?
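Rather than reading these off the printed summary, you can also query each quantity programmatically. Here is a quick sketch using standard JuMP accessors on the model defined above:

```julia
# Query each quantity directly from the solved model.
coefficient(objective_function(model), x)  # objective coefficient of x: 12.0
normalized_rhs(c1)                         # right-hand side of c1: 100.0
value(x), value(y), value(z)               # optimal primal solution
dual(c1), dual(c2), dual(c3)               # optimal dual solution
```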
Sensitivity reports
Now let's call lp_sensitivity_report:
report = lp_sensitivity_report(model)
SensitivityReport{Float64}(Dict{ConstraintRef, Tuple{Float64, Float64}}(c1 : 6 x + 8 y ≥ 100 => (-4.0, 2.857142857142857), x ≥ 0 => (-Inf, 15.0), z ≤ 1 => (-Inf, Inf), y ≥ 0 => (-Inf, 1.25), c2 : 7 x + 12 y ≥ 120 => (-3.3333333333333335, 4.666666666666667), y ≤ 3 => (-1.75, Inf), c3 : x + y ≤ 20 => (-3.75, Inf)), Dict{VariableRef, Tuple{Float64, Float64}}(y => (-4.0, 0.5714285714285714), x => (-0.3333333333333333, 3.0), z => (-Inf, 1.0)))
It returns a SensitivityReport object, which maps:
- Every variable reference to a tuple (d_lo, d_hi)::Tuple{Float64,Float64}, explaining how much the objective coefficient of the corresponding variable can change by, such that the original basis remains optimal.
- Every constraint reference to a tuple (d_lo, d_hi)::Tuple{Float64,Float64}, explaining how much the right-hand side of the corresponding constraint can change by, such that the basis remains optimal.
Both tuples are relative, rather than absolute. So, given an objective coefficient of 1.0 and a tuple (-0.5, 0.5), the objective coefficient can range between 1.0 - 0.5 and 1.0 + 0.5.
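For instance, to convert the relative tuple for x into an absolute range, add the tuple entries to the current objective coefficient (a small sketch using the report computed above):

```julia
# Convert the relative range for x into absolute bounds on its objective
# coefficient; the original basis stays optimal while it lies inside them.
d_lo, d_hi = report[x]
c = coefficient(objective_function(model), x)  # 12.0
(c + d_lo, c + d_hi)                           # ≈ (11.67, 15.0)
```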
For example:
report[x]
(-0.3333333333333333, 3.0)
indicates that the objective coefficient on x, that is, 12, can decrease by 0.333 or increase by 3.0, and the primal solution (15, 1.25) will remain optimal. In addition:
report[c1]
(-4.0, 2.857142857142857)
means that the right-hand side of the c1 constraint (100) can decrease by 4 units or increase by approximately 2.86 units, and the primal solution (15, 1.25) will remain optimal.
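Within the allowed range, the optimal objective changes linearly with the right-hand side, at a rate given by the dual value of the constraint. The following sketch verifies this by modifying the model and re-solving, then restoring the original right-hand side:

```julia
# Predict the objective after increasing the RHS of c1 by 2 units
# (2 is inside the allowed range (-4.0, 2.857...)).
predicted = objective_value(model) + dual(c1) * 2.0
set_normalized_rhs(c1, 102.0)
optimize!(model)
@assert isapprox(objective_value(model), predicted; atol = 1e-6)
set_normalized_rhs(c1, 100.0)  # restore the original model
optimize!(model)
```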
Variable sensitivity
By themselves, the tuples aren't informative. Let's put them in context by collating a range of other information about a variable:
function variable_report(xi)
return (
name = name(xi),
lower_bound = has_lower_bound(xi) ? lower_bound(xi) : -Inf,
value = value(xi),
upper_bound = has_upper_bound(xi) ? upper_bound(xi) : Inf,
reduced_cost = reduced_cost(xi),
obj_coefficient = coefficient(objective_function(model), xi),
allowed_decrease = report[xi][1],
allowed_increase = report[xi][2],
)
end
variable_report (generic function with 1 method)
Calling our function on x:
x_report = variable_report(x)
(name = "x", lower_bound = 0.0, value = 15.0, upper_bound = Inf, reduced_cost = 0.0, obj_coefficient = 12.0, allowed_decrease = -0.3333333333333333, allowed_increase = 3.0)
That's a bit hard to read, so let's call this on every variable in the model and put things into a DataFrame:
variable_df =
DataFrames.DataFrame(variable_report(xi) for xi in all_variables(model))
| Row | name | lower_bound | value | upper_bound | reduced_cost | obj_coefficient | allowed_decrease | allowed_increase |
| --- | ---- | ----------- | ----- | ----------- | ------------ | --------------- | ---------------- | ---------------- |
| | String | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | x | 0.0 | 15.0 | Inf | 0.0 | 12.0 | -0.333333 | 3.0 |
| 2 | y | 0.0 | 1.25 | 3.0 | 0.0 | 20.0 | -4.0 | 0.571429 |
| 3 | z | -Inf | 1.0 | 1.0 | -1.0 | -1.0 | -Inf | 1.0 |
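With these columns collected, we can also compute the absolute range each objective coefficient may take while the basis stays optimal. A sketch using standard DataFrames tools (the column names :obj_lo and :obj_hi are our own choices):

```julia
# Absolute ranges: the basis stays optimal while each objective
# coefficient lies in [obj_lo, obj_hi].
obj_ranges = DataFrames.select(
    variable_df,
    :name,
    [:obj_coefficient, :allowed_decrease] => DataFrames.ByRow(+) => :obj_lo,
    [:obj_coefficient, :allowed_increase] => DataFrames.ByRow(+) => :obj_hi,
)
```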
Constraint sensitivity
We can do something similar with constraints:
function constraint_report(c::ConstraintRef)
return (
name = name(c),
value = value(c),
rhs = normalized_rhs(c),
slack = normalized_rhs(c) - value(c),
shadow_price = shadow_price(c),
allowed_decrease = report[c][1],
allowed_increase = report[c][2],
)
end
c1_report = constraint_report(c1)
(name = "c1", value = 100.0, rhs = 100.0, slack = 0.0, shadow_price = -0.25, allowed_decrease = -4.0, allowed_increase = 2.857142857142857)
That's a bit hard to read, so let's call this on every constraint in the model and put things into a DataFrame:
constraint_df = DataFrames.DataFrame(
constraint_report(ci) for (F, S) in list_of_constraint_types(model) for
ci in all_constraints(model, F, S) if F == AffExpr
)
| Row | name | value | rhs | slack | shadow_price | allowed_decrease | allowed_increase |
| --- | ---- | ----- | --- | ----- | ------------ | ---------------- | ---------------- |
| | String | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | c1 | 100.0 | 100.0 | 0.0 | -0.25 | -4.0 | 2.85714 |
| 2 | c2 | 120.0 | 120.0 | 0.0 | -1.5 | -3.33333 | 4.66667 |
| 3 | c3 | 16.25 | 20.0 | 3.75 | 0.0 | -3.75 | Inf |
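As with the variables, we can turn the relative columns into absolute right-hand side ranges over which the basis remains optimal (again, :rhs_lo and :rhs_hi are names we choose for illustration):

```julia
# Absolute RHS ranges: the basis stays optimal while each constraint's
# right-hand side lies in [rhs_lo, rhs_hi].
rhs_ranges = DataFrames.select(
    constraint_df,
    :name,
    [:rhs, :allowed_decrease] => DataFrames.ByRow(+) => :rhs_lo,
    [:rhs, :allowed_increase] => DataFrames.ByRow(+) => :rhs_hi,
)
```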
Analysis questions
Now we can use these dataframes to ask questions of the solution.
For example, we can find basic variables by looking for variables with a reduced cost of 0:
basic = filter(row -> iszero(row.reduced_cost), variable_df)
| Row | name | lower_bound | value | upper_bound | reduced_cost | obj_coefficient | allowed_decrease | allowed_increase |
| --- | ---- | ----------- | ----- | ----------- | ------------ | --------------- | ---------------- | ---------------- |
| | String | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | x | 0.0 | 15.0 | Inf | 0.0 | 12.0 | -0.333333 | 3.0 |
| 2 | y | 0.0 | 1.25 | 3.0 | 0.0 | 20.0 | -4.0 | 0.571429 |
and non-basic variables by looking for non-zero reduced costs:
non_basic = filter(row -> !iszero(row.reduced_cost), variable_df)
| Row | name | lower_bound | value | upper_bound | reduced_cost | obj_coefficient | allowed_decrease | allowed_increase |
| --- | ---- | ----------- | ----- | ----------- | ------------ | --------------- | ---------------- | ---------------- |
| | String | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | z | -Inf | 1.0 | 1.0 | -1.0 | -1.0 | -Inf | 1.0 |
We can also find constraints that are binding by looking for zero slacks:
binding = filter(row -> iszero(row.slack), constraint_df)
| Row | name | value | rhs | slack | shadow_price | allowed_decrease | allowed_increase |
| --- | ---- | ----- | --- | ----- | ------------ | ---------------- | ---------------- |
| | String | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | c1 | 100.0 | 100.0 | 0.0 | -0.25 | -4.0 | 2.85714 |
| 2 | c2 | 120.0 | 120.0 | 0.0 | -1.5 | -3.33333 | 4.66667 |
or non-zero shadow prices:
binding2 = filter(row -> !iszero(row.shadow_price), constraint_df)
| Row | name | value | rhs | slack | shadow_price | allowed_decrease | allowed_increase |
| --- | ---- | ----- | --- | ----- | ------------ | ---------------- | ---------------- |
| | String | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | c1 | 100.0 | 100.0 | 0.0 | -0.25 | -4.0 | 2.85714 |
| 2 | c2 | 120.0 | 120.0 | 0.0 | -1.5 | -3.33333 | 4.66667 |
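For this model the two approaches identify the same constraints. That is not guaranteed in general: under degeneracy, a constraint can have zero slack but a zero shadow price. A quick cross-check using the dataframes above:

```julia
# The two notions of "binding" coincide here; with degeneracy they may differ.
Set(binding.name) == Set(binding2.name)  # true for this model
```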