Benders decomposition

This tutorial was generated using Literate.jl. Download the source as a .jl file.

This tutorial was originally contributed by Shuvomoy Das Gupta.

This tutorial describes how to implement Benders decomposition in JuMP. It uses the following packages:

using JuMP
import GLPK
import HiGHS
import Printf

Theory

Benders decomposition is a useful algorithm for solving convex optimization problems with a large number of variables. It works best when a larger problem can be decomposed into two (or more) smaller problems that are individually much easier to solve.

This tutorial demonstrates Benders decomposition on the following mixed-integer linear program:

\[\begin{aligned} \text{min} \ & c_1(x) + c_2(y) \\ \text{subject to} \ & f_1(x) \in S_1 \\ & f_2(y) \in S_2 \\ & f_3(x, y) \in S_3 \\ & x \in \mathbb{Z}^m \\ & y \in \mathbb{R}^n \\ \end{aligned}\]

where the functions $f$ and $c$ are linear, and the sets $S$ are inequality sets like $\ge l$, $\le u$, or $= b$.

Any mixed integer programming problem can be written in the form above.

If there are relatively few integer variables, and many more continuous variables, then it may be beneficial to decompose the problem into a small problem containing only integer variables and a linear program containing only continuous variables. Hopefully, the linear program will be much easier to solve in isolation than in the full mixed-integer linear program.

For example, if we knew a feasible value $\bar{x}$ for the first-stage variables $x$, we could obtain a solution for $y$ by solving:

\[\begin{aligned} V_2(\bar{x}) = & \text{min} \ & c_2(y)\\ & \text{subject to} \ & f_2(y) \in S_2 \\ & & f_3(x, y) \in S_3 \\ & & x = \bar{x} & \ [\pi] \\ & & y \in \mathbb{R}^n \\ \end{aligned}\]

Note that we have included a "copy" of the $x$ variable via the constraint $x = \bar{x}$. This makes it simple to compute $\pi$, the dual variable associated with that constraint, which measures the sensitivity of $V_2$ with respect to $\bar{x}$.

Because this model is a linear program, it is easy to solve.

Replacing the $c_2(y)$ component of the objective in our original problem with $V_2$ yields:

\[\begin{aligned} V_1 = & \text{min} \ & c_1(x) + V_2(x) \\ & \text{subject to} \ & f_1(x) \in S_1 \\ & & x \in \mathbb{Z}^m. \end{aligned}\]

This problem looks a lot simpler to solve because it involves only $x$ and a subset of the constraints, but we need to do something else with $V_2$ first.

Because $\bar{x}$ is a constant that appears on the right-hand side of the constraints, $V_2$ is a convex function with respect to $\bar{x}$, and the dual variable $\pi$ is a subgradient of $V_2(x)$ with respect to $x$. Therefore, if we have a candidate solution $x_k$, then we can solve $V_2(x_k)$ and obtain a feasible dual vector $\pi_k$. Using these values, we can construct a first-order Taylor-series approximation of $V_2$ about the point $x_k$:

\[V_2(x) \ge V_2(x_k) + \pi_k^\top (x - x_k).\]

By convexity, we know that this inequality holds for all $x$, and we call these inequalities cuts.
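As a small illustration of why these cuts are valid (this example is not part of the original problem), take the one-dimensional convex function $V_2(x) = x^2$: a subgradient cut built at any point under-estimates the function everywhere.

V2(x) = x^2                           # a convex stand-in for the second-stage value function
x_k, π_k = 2.0, 4.0                   # candidate point and its (sub)gradient at that point
cut(x) = V2(x_k) + π_k * (x - x_k)    # first-order approximation about x_k
@assert all(V2(x) >= cut(x) for x in -5:0.5:5)  # the cut never over-estimates V2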

Benders decomposition is an iterative technique that replaces $V_2(x)$ with a new decision variable $\theta$, and approximates it from below using cuts:

\[\begin{aligned} V_1^K = & \text{min} \ & c_1(x) + \theta \\ & \text{subject to} \ & f_1(x) \in S_1 \\ & & x \in \mathbb{Z}^m \\ & & \theta \ge M \\ & & \theta \ge V_2(x_k) + \pi_k^\top(x - x_k) & \quad \forall k = 1,\ldots,K. \end{aligned}\]

This integer program is called the first-stage subproblem.

To generate cuts, we solve $V_1^K$ to obtain a candidate first-stage solution $x_k$, then we use that solution to solve $V_2(x_k)$. Then, using the optimal objective value and dual solution from $V_2$, we add a new cut to form $V_1^{K+1}$ and repeat.

Bounds

Because the cuts under-estimate the convex function $V_2$, we know that $\theta \le V_2(x)$ for all $x$. Therefore, the optimal objective value of $V_1^K$ provides a valid lower bound on the objective value of the full problem. In addition, if we take a feasible solution for $x$ from the first-stage problem, then $c_1(x) + V_2(x)$ is a valid upper bound on the objective value of the full problem.

Benders decomposition uses the lower and upper bounds to determine when it has found the global optimal solution.
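For concreteness, here is a minimal sketch of the bound computation with purely hypothetical numbers (they are not taken from the model below); the iterative loop later in this tutorial computes the same quantities from the solved models.

c1_xk = 1.1    # hypothetical first-stage cost c₁(x_k)
θ_k = -7.2     # hypothetical value of the cost-to-go variable θ
V2_xk = -6.2   # hypothetical second-stage cost V₂(x_k)
lower_bound = c1_xk + θ_k      # optimal objective value of V₁ᴷ
upper_bound = c1_xk + V2_xk    # θ replaced by the true second-stage cost
gap = abs(upper_bound - lower_bound) / abs(upper_bound)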

Monolithic problem

As an example problem, we consider the following variant of The max-flow problem, in which there is a binary variable to decide whether to open each arc for a cost of 0.1 unit, and we can open at most 11 arcs:

G = [
    0 3 2 2 0 0 0 0
    0 0 0 0 5 1 0 0
    0 0 0 0 1 3 1 0
    0 0 0 0 0 1 0 0
    0 0 0 0 0 0 0 4
    0 0 0 0 0 0 0 2
    0 0 0 0 0 0 0 4
    0 0 0 0 0 0 0 0
]
n = size(G, 1)
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x[1:n, 1:n], Bin)
@variable(model, y[1:n, 1:n] >= 0)
@constraint(model, sum(x) <= 11)  # open at most 11 arcs
@constraint(model, [i = 1:n, j = 1:n], y[i, j] <= G[i, j] * x[i, j])  # arc capacity; no flow if closed
@constraint(model, [i = 2:n-1], sum(y[i, :]) == sum(y[:, i]))  # flow conservation at interior nodes
@objective(model, Min, 0.1 * sum(x) - sum(y[1, :]))  # opening cost minus flow out of the source
optimize!(model)
solution_summary(model)
* Solver : HiGHS

* Status
  Result count       : 1
  Termination status : OPTIMAL
  Message from the solver:
  "kHighsModelStatusOptimal"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : NO_SOLUTION
  Objective value    : -5.10000e+00
  Objective bound    : -5.10000e+00
  Relative gap       : 0.00000e+00
  Dual objective value : NaN

* Work counters
  Solve time (sec)   : 1.47390e-03
  Simplex iterations : 15
  Barrier iterations : -1
  Node count         : 1

The optimal objective value is -5.1:

objective_value(model)
-5.1

and the optimal flows are:

function optimal_flows(x)
    return [(i, j) => x[i, j] for i in 1:n for j in 1:n if x[i, j] > 0]
end

monolithic_solution = optimal_flows(value.(y))
9-element Vector{Pair{Tuple{Int64, Int64}, Float64}}:
 (1, 2) => 3.0
 (1, 3) => 2.0
 (1, 4) => 1.0
 (2, 5) => 3.0
 (3, 5) => 1.0
 (3, 6) => 1.0
 (4, 6) => 1.0
 (5, 8) => 4.0
 (6, 8) => 2.0

Iterative method

Warning

This is a basic implementation for pedagogical purposes. We haven't discussed any of the computational tricks that are required to build a performant implementation for large-scale problems. See In-place iterative method for one improvement that helps computation time.

We start by formulating the first-stage subproblem. It includes the x variables, the constraints involving only x, and the terms in the objective containing only x. We also need an initial lower bound on the cost-to-go variable θ. One valid lower bound is to assume that we do not pay for opening arcs and that every arc carries its maximum flow.

M = -sum(G)
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x[1:n, 1:n], Bin)
@variable(model, θ >= M)
@constraint(model, sum(x) <= 11)
@objective(model, Min, 0.1 * sum(x) + θ)
model
A JuMP Model
├ solver: HiGHS
├ objective_sense: MIN_SENSE
│ └ objective_function_type: AffExpr
├ num_variables: 65
├ num_constraints: 66
│ ├ AffExpr in MOI.LessThan{Float64}: 1
│ ├ VariableRef in MOI.GreaterThan{Float64}: 1
│ └ VariableRef in MOI.ZeroOne: 64
└ Names registered in the model
  └ :x, :θ

For the next step, we need a function that takes a first-stage candidate solution x and returns the optimal solution from the second-stage subproblem:

function solve_subproblem(x_bar)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x[i in 1:n, j in 1:n] == x_bar[i, j])
    @variable(model, y[1:n, 1:n] >= 0)
    @constraint(model, [i = 1:n, j = 1:n], y[i, j] <= G[i, j] * x[i, j])
    @constraint(model, [i = 2:n-1], sum(y[i, :]) == sum(y[:, i]))
    @objective(model, Min, -sum(y[1, :]))
    optimize!(model)
    @assert is_solved_and_feasible(model; dual = true)
    return (obj = objective_value(model), y = value.(y), π = reduced_cost.(x))
end
solve_subproblem (generic function with 1 method)

Note that solve_subproblem returns a NamedTuple of the objective value, the optimal primal solution for y, and the optimal dual solution for π, which we obtained from the reduced_cost of the x variables.
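For example (an illustrative call that is not part of the original tutorial), we can evaluate the subproblem with every arc open and inspect the returned fields:

ret = solve_subproblem(ones(n, n))
ret.obj       # optimal second-stage objective (the negative of the maximum flow)
ret.y[1, 2]   # optimal flow on the arc (1, 2)
ret.π[1, 2]   # dual value (reduced cost) associated with fixing x[1, 2]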

We're almost ready for our optimization loop, but first, here's a helpful function for logging:

function print_iteration(k, args...)
    f(x) = Printf.@sprintf("%12.4e", x)
    println(lpad(k, 9), " ", join(f.(args), " "))
    return
end
print_iteration (generic function with 1 method)
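For example, calling the helper with the bounds from the first iteration reproduces the first row of the log printed later:

print_iteration(1, -29.0, 0.0, Inf)
#         1  -2.9000e+01   0.0000e+00          Inf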

We also need to put a limit on the number of iterations before termination:

MAXIMUM_ITERATIONS = 100
100

And a way to check whether the lower and upper bounds are close enough to terminate:

ABSOLUTE_OPTIMALITY_GAP = 1e-6
1.0e-6

Now we're ready to iterate Benders decomposition:

println("Iteration  Lower Bound  Upper Bound          Gap")
for k in 1:MAXIMUM_ITERATIONS
    optimize!(model)
    @assert is_solved_and_feasible(model)
    lower_bound = objective_value(model)
    x_k = value.(x)
    ret = solve_subproblem(x_k)
    upper_bound = (objective_value(model) - value(θ)) + ret.obj
    gap = abs(upper_bound - lower_bound) / abs(upper_bound)
    print_iteration(k, lower_bound, upper_bound, gap)
    if gap < ABSOLUTE_OPTIMALITY_GAP
        println("Terminating with the optimal solution")
        break
    end
    cut = @constraint(model, θ >= ret.obj + sum(ret.π .* (x .- x_k)))
    @info "Adding the cut $(cut)"
end
Iteration  Lower Bound  Upper Bound          Gap
        1  -2.9000e+01   0.0000e+00          Inf
[ Info: Adding the cut 3 x[1,2] + 2 x[1,3] + 2 x[1,4] + θ ≥ 0
        2  -6.7000e+00   3.0000e-01   2.3333e+01
[ Info: Adding the cut 5 x[2,5] + x[3,5] + x[2,6] + 3 x[3,6] + x[4,6] + x[3,7] + θ ≥ 0
        3  -6.5000e+00   5.0000e-01   1.4000e+01
[ Info: Adding the cut x[3,7] + 4 x[5,8] + 2 x[6,8] + θ ≥ 0
        4  -6.2000e+00  -4.2000e+00   4.7619e-01
[ Info: Adding the cut 3 x[1,2] + x[3,5] + 2 x[6,8] + 4 x[7,8] + θ ≥ 0
        5  -6.1000e+00  -4.1000e+00   4.8780e-01
[ Info: Adding the cut 3 x[1,2] + x[3,5] + 3 x[3,6] + x[4,6] + x[3,7] + θ ≥ 0
        6  -6.1000e+00  -4.1000e+00   4.8780e-01
[ Info: Adding the cut 3 x[1,2] + 2 x[1,3] + x[4,6] + θ ≥ 0
        7  -5.1000e+00  -5.1000e+00   0.0000e+00
Terminating with the optimal solution

Finally, we can obtain the optimal solution:

optimize!(model)
@assert is_solved_and_feasible(model)
x_optimal = value.(x)
optimal_ret = solve_subproblem(x_optimal)
iterative_solution = optimal_flows(optimal_ret.y)
9-element Vector{Pair{Tuple{Int64, Int64}, Float64}}:
 (1, 2) => 3.0
 (1, 3) => 2.0
 (1, 4) => 1.0
 (2, 5) => 3.0
 (3, 5) => 1.0
 (3, 6) => 1.0
 (4, 6) => 1.0
 (5, 8) => 4.0
 (6, 8) => 2.0

which is the same as the monolithic solution:

iterative_solution == monolithic_solution
true

and it has the same objective value:

objective_value(model)
-5.1

Callback method

The Iterative method section implemented Benders decomposition using a loop. In each iteration, we re-solved the first-stage subproblem to generate a candidate solution. However, modern MILP solvers such as CPLEX, Gurobi, and GLPK provide lazy constraint callbacks which allow us to add new cuts while the solver is running. This can be more efficient than an iterative method because we can avoid repeating work such as solving the root node of the first-stage MILP at each iteration.

Tip

We use GLPK for this model because HiGHS does not support lazy constraints. For more information on callbacks, read the page Solver-independent callbacks.

As before, we construct the same first-stage subproblem:

lazy_model = Model(GLPK.Optimizer)
set_silent(lazy_model)
@variable(lazy_model, x[1:n, 1:n], Bin)
@variable(lazy_model, θ >= M)
@constraint(lazy_model, sum(x) <= 11)
@objective(lazy_model, Min, 0.1 * sum(x) + θ)
lazy_model
A JuMP Model
├ solver: GLPK
├ objective_sense: MIN_SENSE
│ └ objective_function_type: AffExpr
├ num_variables: 65
├ num_constraints: 66
│ ├ AffExpr in MOI.LessThan{Float64}: 1
│ ├ VariableRef in MOI.GreaterThan{Float64}: 1
│ └ VariableRef in MOI.ZeroOne: 64
└ Names registered in the model
  └ :x, :θ

What differs is that we write a callback function instead of a loop:

number_of_subproblem_solves = 0
function my_callback(cb_data)
    status = callback_node_status(cb_data, lazy_model)
    if status != MOI.CALLBACK_NODE_STATUS_INTEGER
        # Only add the constraint if `x` is an integer feasible solution
        return
    end
    x_k = callback_value.(cb_data, x)
    θ_k = callback_value(cb_data, θ)
    global number_of_subproblem_solves += 1
    ret = solve_subproblem(x_k)
    if θ_k < (ret.obj - 1e-6)
        # Only add the constraint if θ_k violates the constraint
        cut = @build_constraint(θ >= ret.obj + sum(ret.π .* (x .- x_k)))
        MOI.submit(lazy_model, MOI.LazyConstraint(cb_data), cut)
    end
    return
end

set_attribute(lazy_model, MOI.LazyConstraintCallback(), my_callback)

Now when we optimize!, our callback is run:

optimize!(lazy_model)
@assert is_solved_and_feasible(lazy_model)

For this model, the callback algorithm required more solves of the subproblem:

number_of_subproblem_solves
14

But for larger problems, you can expect the callback algorithm to be more efficient than the iterative algorithm.

Finally, we can obtain the optimal solution:

x_optimal = value.(x)
optimal_ret = solve_subproblem(x_optimal)
callback_solution = optimal_flows(optimal_ret.y)
9-element Vector{Pair{Tuple{Int64, Int64}, Float64}}:
 (1, 2) => 3.0
 (1, 3) => 2.0
 (1, 4) => 1.0
 (2, 5) => 3.0
 (3, 5) => 1.0
 (3, 6) => 1.0
 (4, 6) => 1.0
 (5, 8) => 4.0
 (6, 8) => 2.0

which is the same as the monolithic solution:

callback_solution == monolithic_solution
true

In-place iterative method

Our implementation of the iterative method has a problem: every time we need to solve the subproblem, we must rebuild it from scratch. This is expensive, and it can be the bottleneck in the solution process. We can improve our implementation by re-using the subproblem between solves.

First, we create our first-stage problem as usual:

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x[1:n, 1:n], Bin)
@variable(model, θ >= M)
@constraint(model, sum(x) <= 11)
@objective(model, Min, 0.1 * sum(x) + θ)
model
A JuMP Model
├ solver: HiGHS
├ objective_sense: MIN_SENSE
│ └ objective_function_type: AffExpr
├ num_variables: 65
├ num_constraints: 66
│ ├ AffExpr in MOI.LessThan{Float64}: 1
│ ├ VariableRef in MOI.GreaterThan{Float64}: 1
│ └ VariableRef in MOI.ZeroOne: 64
└ Names registered in the model
  └ :x, :θ

Then, instead of building the subproblem in a function, we build it once here:

subproblem = Model(HiGHS.Optimizer)
set_silent(subproblem)
@variable(subproblem, x_copy[i in 1:n, j in 1:n])
@variable(subproblem, y[1:n, 1:n] >= 0)
@constraint(subproblem, [i = 1:n, j = 1:n], y[i, j] <= G[i, j] * x_copy[i, j])
@constraint(subproblem, [i = 2:n-1], sum(y[i, :]) == sum(y[:, i]))
@objective(subproblem, Min, -sum(y[1, :]))
subproblem
A JuMP Model
├ solver: HiGHS
├ objective_sense: MIN_SENSE
│ └ objective_function_type: AffExpr
├ num_variables: 128
├ num_constraints: 134
│ ├ AffExpr in MOI.EqualTo{Float64}: 6
│ ├ AffExpr in MOI.LessThan{Float64}: 64
│ └ VariableRef in MOI.GreaterThan{Float64}: 64
└ Names registered in the model
  └ :x_copy, :y

Our function to solve the subproblem is also slightly different because we need to fix the value of the x_copy variables to the value of x from the first-stage problem:

function solve_subproblem(model, x)
    fix.(model[:x_copy], x)
    optimize!(model)
    @assert is_solved_and_feasible(model; dual = true)
    return (
        obj = objective_value(model),
        y = value.(model[:y]),
        π = reduced_cost.(model[:x_copy]),
    )
end
solve_subproblem (generic function with 2 methods)
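As a quick illustrative call (not part of the original tutorial), the two-argument method can be evaluated at any candidate first-stage solution, for example the all-ones matrix:

ret = solve_subproblem(subproblem, ones(n, n))
ret.obj  # second-stage objective value when every arc is open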

Now we're ready to iterate our in-place Benders decomposition:

println("Iteration  Lower Bound  Upper Bound          Gap")
for k in 1:MAXIMUM_ITERATIONS
    optimize!(model)
    @assert is_solved_and_feasible(model)
    lower_bound = objective_value(model)
    x_k = value.(x)
    ret = solve_subproblem(subproblem, x_k)
    upper_bound = (objective_value(model) - value(θ)) + ret.obj
    gap = abs(upper_bound - lower_bound) / abs(upper_bound)
    print_iteration(k, lower_bound, upper_bound, gap)
    if gap < ABSOLUTE_OPTIMALITY_GAP
        println("Terminating with the optimal solution")
        break
    end
    cut = @constraint(model, θ >= ret.obj + sum(ret.π .* (x .- x_k)))
    @info "Adding the cut $(cut)"
end
Iteration  Lower Bound  Upper Bound          Gap
        1  -2.9000e+01   0.0000e+00          Inf
[ Info: Adding the cut 3 x[1,2] + 2 x[1,3] + 2 x[1,4] + θ ≥ 0
        2  -6.7000e+00   3.0000e-01   2.3333e+01
[ Info: Adding the cut 5 x[2,5] + x[3,5] + x[2,6] + 3 x[3,6] + x[4,6] + x[3,7] + θ ≥ 0
        3  -6.5000e+00   5.0000e-01   1.4000e+01
[ Info: Adding the cut x[3,7] + 4 x[5,8] + 2 x[6,8] + θ ≥ 0
        4  -6.2000e+00  -4.2000e+00   4.7619e-01
[ Info: Adding the cut 3 x[1,2] + x[3,5] + 2 x[6,8] + 4 x[7,8] + θ ≥ 0
        5  -6.1000e+00  -4.1000e+00   4.8780e-01
[ Info: Adding the cut 3 x[1,2] + x[3,5] + 3 x[3,6] + x[4,6] + x[3,7] + θ ≥ 0
        6  -6.1000e+00  -4.1000e+00   4.8780e-01
[ Info: Adding the cut 3 x[1,2] + 2 x[1,3] + x[4,6] + θ ≥ 0
        7  -5.1000e+00  -5.1000e+00   0.0000e+00
Terminating with the optimal solution

Finally, we can obtain the optimal solution:

optimize!(model)
@assert is_solved_and_feasible(model)
x_optimal = value.(x)
optimal_ret = solve_subproblem(subproblem, x_optimal)
inplace_solution = optimal_flows(optimal_ret.y)
9-element Vector{Pair{Tuple{Int64, Int64}, Float64}}:
 (1, 2) => 3.0
 (1, 3) => 2.0
 (1, 4) => 1.0
 (2, 5) => 3.0
 (3, 5) => 1.0
 (3, 6) => 1.0
 (4, 6) => 1.0
 (5, 8) => 4.0
 (6, 8) => 2.0

which is the same as the monolithic solution:

inplace_solution == monolithic_solution
true

Feasibility cuts

So far, we have discussed only Benders optimality cuts. However, for some first-stage values of x, the subproblem might be infeasible. The solution is to add a Benders feasibility cut:

\[v_k + u_k^\top (x - x_k) \le 0\]

where $u_k$ is an unbounded dual ray of the subproblem and $v_k$ is the intercept of that ray. Every $x$ for which the subproblem is feasible satisfies this inequality, while the infeasible candidate $x_k$ violates it, so the cut removes $x_k$ from the first-stage problem without cutting off any feasible point.
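In JuMP, these quantities can be queried from the infeasible subproblem once the solver returns a dual infeasibility certificate. The sketch below is illustrative and assumes a subproblem with a fixed copy variable x_copy; it mirrors the solve_subproblem_with_feasibility function defined later in this section.

function dual_ray(subproblem)
    # Requires a certificate of primal infeasibility from the solver.
    @assert dual_status(subproblem) == MOI.INFEASIBILITY_CERTIFICATE
    v = dual_objective_value(subproblem)    # the intercept v_k
    u = reduced_cost.(subproblem[:x_copy])  # the ray components u_k
    return v, u
end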

As a variation of our example which leads to infeasibilities, we add a constraint that sum(y) >= 1. This means we need a choice of first-stage x for which at least one unit can flow.

The first-stage problem remains the same:

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x[1:n, 1:n], Bin)
@variable(model, θ >= M)
@constraint(model, sum(x) <= 11)
@objective(model, Min, 0.1 * sum(x) + θ)
model
A JuMP Model
├ solver: HiGHS
├ objective_sense: MIN_SENSE
│ └ objective_function_type: AffExpr
├ num_variables: 65
├ num_constraints: 66
│ ├ AffExpr in MOI.LessThan{Float64}: 1
│ ├ VariableRef in MOI.GreaterThan{Float64}: 1
│ └ VariableRef in MOI.ZeroOne: 64
└ Names registered in the model
  └ :x, :θ

But the subproblem has a new constraint that sum(y) >= 1:

subproblem = Model(HiGHS.Optimizer)
set_silent(subproblem)
# We need to turn presolve off so that HiGHS will return an infeasibility
# certificate.
set_attribute(subproblem, "presolve", "off")
@variable(subproblem, x_copy[i in 1:n, j in 1:n])
@variable(subproblem, y[1:n, 1:n] >= 0)
@constraint(subproblem, sum(y) >= 1)  # <--- THIS IS NEW
@constraint(subproblem, [i = 1:n, j = 1:n], y[i, j] <= G[i, j] * x_copy[i, j])
@constraint(subproblem, [i = 2:n-1], sum(y[i, :]) == sum(y[:, i]))
@objective(subproblem, Min, -sum(y[1, :]))
subproblem
A JuMP Model
├ solver: HiGHS
├ objective_sense: MIN_SENSE
│ └ objective_function_type: AffExpr
├ num_variables: 128
├ num_constraints: 135
│ ├ AffExpr in MOI.EqualTo{Float64}: 6
│ ├ AffExpr in MOI.GreaterThan{Float64}: 1
│ ├ AffExpr in MOI.LessThan{Float64}: 64
│ └ VariableRef in MOI.GreaterThan{Float64}: 64
└ Names registered in the model
  └ :x_copy, :y

The function to solve the subproblem now checks for feasibility, and returns the dual objective value and an unbounded dual ray if the subproblem is infeasible:

function solve_subproblem_with_feasibility(model, x)
    fix.(model[:x_copy], x)
    optimize!(model)
    if is_solved_and_feasible(model; dual = true)
        return (
            is_feasible = true,
            obj = objective_value(model),
            y = value.(model[:y]),
            π = reduced_cost.(model[:x_copy]),
        )
    end
    return (
        is_feasible = false,
        v = dual_objective_value(model),
        u = reduced_cost.(model[:x_copy]),
    )
end
solve_subproblem_with_feasibility (generic function with 1 method)

Now we're ready to iterate our in-place Benders decomposition:

println("Iteration  Lower Bound  Upper Bound          Gap")
for k in 1:MAXIMUM_ITERATIONS
    optimize!(model)
    @assert is_solved_and_feasible(model)
    lower_bound = objective_value(model)
    x_k = value.(x)
    ret = solve_subproblem_with_feasibility(subproblem, x_k)
    if ret.is_feasible
        # Benders Optimality Cuts
        upper_bound = (objective_value(model) - value(θ)) + ret.obj
        gap = abs(upper_bound - lower_bound) / abs(upper_bound)
        print_iteration(k, lower_bound, upper_bound, gap)
        if gap < ABSOLUTE_OPTIMALITY_GAP
            println("Terminating with the optimal solution")
            break
        end
        @constraint(model, θ >= ret.obj + sum(ret.π .* (x .- x_k)))
    else
        # Benders Feasibility Cuts
        cut = @constraint(model, ret.v + sum(ret.u .* (x .- x_k)) <= 0)
        @info "Adding the feasibility cut $(cut)"
    end
end
Iteration  Lower Bound  Upper Bound          Gap
[ Info: Adding the feasibility cut -3 x[1,2] - 2 x[1,3] - 2 x[1,4] - 5.000000000000002 x[2,5] - x[3,5] - 2 x[2,6] - 6 x[3,6] - 2 x[4,6] - 2.0000000000000004 x[3,7] - 4 x[5,8] ≤ -1
[ Info: Adding the feasibility cut -2.999999999999999 x[1,2] - 1.9999999999999996 x[1,3] - 1.9999999999999996 x[1,4] - 10 x[2,5] - 2 x[3,5] - 1.9999999999999996 x[2,6] - 5.999999999999998 x[3,6] - 1.9999999999999996 x[4,6] - 1.9999999999999998 x[3,7] ≤ -0.9999999999999998
[ Info: Adding the feasibility cut -2.9999999999999987 x[1,2] - 5.999999999999998 x[1,3] - 1.9999999999999991 x[1,4] - 10 x[2,5] - 1.9999999999999991 x[2,6] - 1.9999999999999991 x[4,6] ≤ -0.9999999999999996
[ Info: Adding the feasibility cut -4 x[1,3] - 3.9999999999999982 x[1,4] - 9.999999999999998 x[2,5] - 2 x[2,6] - 4 x[5,8] - 1.9999999999999996 x[6,8] - 4 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -3 x[1,2] - 4 x[1,3] - 2 x[1,4] - 10 x[2,5] - x[3,5] - 4 x[6,8] - 4 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -6 x[1,2] - 6 x[1,3] - 4 x[1,4] - 4 x[5,8] - 2 x[6,8] ≤ -1
[ Info: Adding the feasibility cut -6 x[1,2] - 6 x[1,3] - 2 x[4,6] - 4 x[5,8] - 2 x[6,8] ≤ -1
[ Info: Adding the feasibility cut -6 x[1,2] - 2 x[1,4] - 2 x[3,5] - 6 x[3,6] - x[4,6] - 2 x[3,7] - 4 x[5,8] - 2 x[6,8] - 4 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -6 x[1,2] - 2 x[3,5] - 6 x[3,6] - 2 x[4,6] - 2 x[3,7] - 4 x[5,8] - 2 x[6,8] - 4 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -5 x[2,5] - x[3,5] - x[2,6] - 3 x[3,6] - x[4,6] - x[3,7] - 8 x[5,8] - 4 x[6,8] - 8 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -2 x[1,3] - 6 x[1,4] - 9.999999999999998 x[2,5] - x[3,5] - 3 x[2,6] - 6.000000000000002 x[3,6] - x[3,7] - 3.999999999999999 x[5,8] - 3.999999999999999 x[7,8] ≤ -0.9999999999999998
[ Info: Adding the feasibility cut -1.9999999999999987 x[1,3] - 5.999999999999997 x[1,4] - 14.999999999999996 x[2,5] - 1.9999999999999991 x[3,5] - 2.9999999999999987 x[2,6] - 5.9999999999999964 x[3,6] - x[3,7] - 3.999999999999999 x[7,8] ≤ -0.9999999999999998
[ Info: Adding the feasibility cut -3 x[1,2] - 5.999999999999997 x[1,4] - 9.999999999999996 x[2,5] - 2.999999999999999 x[3,5] - 1.999999999999999 x[2,6] - 8.999999999999996 x[3,6] - 1.9999999999999996 x[3,7] - 3.999999999999999 x[7,8] ≤ -0.9999999999999998
[ Info: Adding the feasibility cut -3 x[1,2] - 6 x[1,4] - 2 x[2,6] - 9 x[3,6] - 2 x[3,7] - 12 x[5,8] - 4 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -3 x[1,2] - 6 x[1,4] - 2 x[2,6] - 9 x[3,6] - 3 x[3,7] - 12 x[5,8] ≤ -1
[ Info: Adding the feasibility cut -2 x[1,4] - 3 x[3,7] - 12 x[5,8] - 6 x[6,8] ≤ -1
[ Info: Adding the feasibility cut -3 x[3,7] - 12 x[5,8] - 6 x[6,8] ≤ -1
[ Info: Adding the feasibility cut -3 x[2,6] - 9 x[3,6] - 3.0000000000000004 x[4,6] - x[3,7] - 12 x[5,8] - 8 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -15 x[2,5] - 3 x[3,5] - 3 x[2,6] - 9 x[3,6] - 3 x[4,6] - 2 x[3,7] - 4 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -9 x[1,2] - 3 x[3,5] - 9 x[3,6] - 3 x[4,6] - 2 x[3,7] - 4 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -6 x[1,2] - 2 x[3,5] - 6 x[3,6] - 2 x[4,6] - 4 x[5,8] - 2 x[6,8] - 12 x[7,8] ≤ -1
[ Info: Adding the feasibility cut -3 x[1,2] - 5 x[2,5] - 2 x[3,5] - 2 x[2,6] - 9 x[3,6] - 3 x[4,6] - 4 x[5,8] - 12 x[7,8] ≤ -1
       23  -2.8700e+01  -7.0000e-01   4.0000e+01
[ Info: Adding the feasibility cut -4 x[1,3] - 4 x[1,4] - 15 x[2,5] - x[3,5] - 2 x[2,6] - x[3,7] - 2 x[6,8] ≤ -1
[ Info: Adding the feasibility cut -6 x[1,3] - 4 x[1,4] - 15 x[2,5] - 2 x[2,6] - 2 x[6,8] ≤ -1
[ Info: Adding the feasibility cut -x[1,4] - 7.5 x[2,5] - 1.5 x[3,5] - 1.5 x[3,7] - 3 x[6,8] ≤ -0.5
       27  -1.4400e+01  -2.4000e+00   5.0000e+00
[ Info: Adding the feasibility cut -6 x[1,3] - 15 x[2,5] - 3 x[2,6] - 3 x[4,6] ≤ -1
       29  -8.2000e+00  -2.0000e-01   4.0000e+01
       30  -7.9000e+00  -4.9000e+00   6.1224e-01
       31  -5.3000e+00  -3.3000e+00   6.0606e-01
       32  -5.3000e+00  -1.3000e+00   3.0769e+00
       33  -5.2000e+00  -4.2000e+00   2.3810e-01
       34  -5.2000e+00  -4.2000e+00   2.3810e-01
       35  -5.1000e+00  -4.1000e+00   2.4390e-01
       36  -5.1000e+00  -4.1000e+00   2.4390e-01
       37  -5.1000e+00  -5.1000e+00   0.0000e+00
Terminating with the optimal solution

Finally, we can obtain the optimal solution:

optimize!(model)
@assert is_solved_and_feasible(model)
x_optimal = value.(x)
optimal_ret = solve_subproblem(subproblem, x_optimal)
feasible_inplace_solution = optimal_flows(optimal_ret.y)
9-element Vector{Pair{Tuple{Int64, Int64}, Float64}}:
 (1, 2) => 3.0
 (1, 3) => 2.0
 (1, 4) => 1.0
 (2, 5) => 3.0
 (3, 5) => 1.0
 (3, 6) => 1.0
 (4, 6) => 1.0
 (5, 8) => 4.0
 (6, 8) => 2.0

which is the same as the monolithic solution (because sum(y) >= 1 in the monolithic solution):

feasible_inplace_solution == monolithic_solution
true