This notebook is part of the talk on ParameterJuMP.jl at the third annual JuMP-dev workshop, held in Santiago, Chile, 2019.
The main purpose of this notebook is to show an application of ParameterJuMP.jl. ParameterJuMP is well suited for Benders-like decompositions; therefore we shall demonstrate the usage of the library on one of the simplest problems that fits in the Benders decomposition framework: Norm-1 regression, which is a particular case of quantile regression.
Note that this is NOT the standard technique to solve Norm-1 regressions. Tailor-made methods are available here, for instance.
This notebook will require the following libraries:
ParameterJuMP itself
using ParameterJuMP
JuMP: the Julia mathematical programming modeling tool
using JuMP
Xpress: the linear programming solver used in this run (other solvers could be used - such as GLPK, Clp, Gurobi, CPLEX and so on)
using Xpress
const OPTIMIZER = Xpress.Optimizer;
TimerOutputs: a time measuring library to demonstrate the advantage of using ParameterJuMP
using TimerOutputs
The following two Julia standard libraries
using LinearAlgebra # just use the dot function
using Random # to use random number generators
Plots library
using Plots
gr(); # plotting backend
We will apply Norm-1 regression to the Linear Regression problem. Linear regression is a statistical tool to obtain the relation between one dependent variable and other explanatory variables. In other words, given a set of $n$ explanatory variables $X = \{ X_1, \dots, X_n \}$ we would like to obtain the best possible estimate for $Y$. In order to accomplish such a task we make the hypothesis that $Y$ is approximately a linear function of $X$:
\[ Y = \sum_{j =1}^n \beta_j X_j + \varepsilon \]
where $\varepsilon$ is some random error.
The estimation of the $\beta$ values relies on observations of the variables: $\{y^i, x_1^i, \dots, x_n^i\}_i$
In this notebook we will solve a problem where the explanatory variables are sinusoids of different frequencies.
First, we define the number of explanatory variables and observations
const N_Candidates = 700
const N_Observations = 5000
const N_Nodes = 1000

const Observations = 1:N_Observations
const Candidates = 1:N_Candidates
const Nodes = 1:N_Nodes;
Initialize a random number generator to keep results deterministic
rng = Random.MersenneTwister(123);
Building regressors (explanatory) sinusoids
const X = zeros(N_Candidates, N_Observations)
const time = [obs / N_Observations * 1 for obs in Observations]
for obs in Observations, cand in Candidates
    t = time[obs]
    f = cand
    X[cand, obs] = sin(2 * pi * f * t)
end
Define coefficients
β = zeros(N_Candidates)
for i ∈ Candidates
    if rand(rng) ≤ (1 - i / N_Candidates)^2 && i ≤ 100
        β[i] = 4 * rand(rng) / i
    end
end
println("First coefs: $(β[1:min(10, N_Candidates)])")
First coefs: [3.76206, 0.790906, 0.883406, 0.0521332, 0.0870966, 0.315345, 0.352853, 0.231924, 0.198475, 0.102393]
Create noisy observations
const y = X' * β .+ 0.1 * randn(rng, N_Observations)
plt = plot(time, y, xlabel = "Time (s)", ylabel = "Amplitude")
plot!(plt, time, X'[:, 1])
plot!(plt, time, X'[:, 3])
plot!(plt, time, X'[:, 9])
The classic tool to estimate linear regression models is the Least Squares method.
The least squares method relies on solving the optimization problem:
\[ \min \Bigg\{ \sum_{i \in Observations} \Big( y_i - \sum_{j \in Candidates} \beta_j x_{i,j} \Big) ^2 \Bigg\} \]
In Norm-1 regression, the quadratic functions are replaced by absolute values:
\[ \min\Bigg\{ \sum_{i \in Observations} \Big| y_i - \sum_{j \in Candidates} \beta_j x_{i,j} \Big| \Bigg\} \]
This optimization problem can be recast as a Linear Programming Problem:
\[ \begin{align} & \min_{\varepsilon^{up}, \varepsilon^{dw}, \beta} && \sum_{i \in Observations} {\varepsilon^{up}}_i + {\varepsilon^{dw}}_i && \notag \\ & \text{subject to} && {\varepsilon^{up}}_i \geq + y_i - \sum_{j \in Candidates} \beta_j x_{i,j} && \forall i \in Observations \notag \\ & && {\varepsilon^{dw}}_i \geq - y_i + \sum_{j \in Candidates} \beta_j x_{i,j} && \forall i \in Observations \notag \\ & && {\varepsilon^{up}}_i, {\varepsilon^{dw}}_i \geq 0 && \forall i \in Observations \notag \\ \end{align} \]
Where $Observations$ is the set of all observations.
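To see why this recovers the absolute values, write $r_i = y_i - \sum_{j \in Candidates} \beta_j x_{i,j}$ for the residual of observation $i$. The following is a short sanity check (standard LP reasoning, not part of the original text): since the objective pushes both slacks down as far as the constraints allow, at an optimum
\[ \begin{align} & {\varepsilon^{up}}_i = \max\{ +r_i, 0 \}, \quad {\varepsilon^{dw}}_i = \max\{ -r_i, 0 \}, \quad {\varepsilon^{up}}_i + {\varepsilon^{dw}}_i = | r_i | && \forall i \in Observations \notag \\ \end{align} \]
so minimizing the sum of the slacks is the same as minimizing the sum of the absolute residuals.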
This linear programming problem can be described in Julia with JuMP:
# create an alias for the sum function (just for fun!)
∑ = sum

# define the model
function full_model_regression()
    time_build = @elapsed begin # measure time to create a model
        # initialize an optimization model
        full_model = Model(with_optimizer(OPTIMIZER))

        # create optimization variables of the problem
        @variables(full_model, begin
            ɛ_up[Observations] ≥ 0
            ɛ_dw[Observations] ≥ 0
            β[1:N_Candidates]
            # 0 ≤ β[Candidates] ≤ 8
        end)

        # define constraints of the model
        @constraints(full_model, begin
            ɛ_up_ctr[i in Observations], ɛ_up[i] ≥ + ∑(X[j, i] * β[j] for j ∈ Candidates) - y[i]
            ɛ_dw_ctr[i in Observations], ɛ_dw[i] ≥ - ∑(X[j, i] * β[j] for j ∈ Candidates) + y[i]
        end)

        # construct the objective function to be minimized
        @objective(full_model, Min, ∑(ɛ_up[i] + ɛ_dw[i] for i ∈ Observations))
    end

    # solve the problem
    time_solve = @elapsed optimize!(full_model)

    println("First coefficients in solution: $(value.(β)[1:min(10, N_Candidates)])")
    println("Objective value: $(objective_value(full_model))")
    println("Time in solve: $time_solve")
    println("Time in build: $time_build")
    return nothing
end
full_model_regression (generic function with 1 method)
Now we execute the function that builds the model and solves it.
N_Observations*N_Candidates < 10_000_000 && full_model_regression()
First coefficients in solution: [3.76363, 0.796369, 0.878873, 0.0543967, 0.0903491, 0.316708, 0.351747, 0.22979, 0.196402, 0.106295]
Objective value: 362.6158214675506
Time in solve: 268.339451346
Time in build: 3.546058937
Benders decomposition is used to solve large optimization problems with some special characteristics. LPs can be solved with classical linear optimization methods such as the Simplex method or interior point methods, provided by solvers like GLPK. However, these methods do not scale linearly with the problem size. In the Benders decomposition framework we break the problem into two pieces: a master problem and a slave problem.
Of course, some variables will belong to both problems; this is where the cleverness of Benders kicks in: the master problem is solved and passes the shared variables to the slave. The slave problem is solved with the shared variables FIXED to the values given by the master problem. The solution of the slave problem can be used to generate a constraint for the master problem that describes a linear approximation of the cost function of the shared variables. In many cases, like stochastic programming, the slave problem has an interesting structure and might be broken into smaller problems to be solved in parallel.
We will describe the decomposition similarly to what is done in Introduction to Linear Optimization, Bertsimas & Tsitsiklis (Chapter 6.5), where the problem in question has the form
\[ \begin{align} & \min_{x, y_k} && c^T x && + f_1^T y_1 && + \dots && + f_n^T y_n && \notag \\ & \text{subject to} && Ax && && && && = b \notag \\ & && B_1 x && + D_1 y_1 && && && = d_1 \notag \\ & && \dots && && \dots && && \notag \\ & && B_n x && && && + D_n y_n && = d_n \notag \\ & && x, && y_1, && && y_n && \geq 0 \notag \\ \end{align} \]
Given a solution for the $x$ variables we can define the slave problem as
\[ \begin{align} z_k(x) \ = \ & \min_{y_k} && f_k^T y_k && \notag \\ & \text{subject to} && D_k y_k && = d_k - B_k x \notag \\ & && y_k && \geq 0 \notag \\ \end{align} \]
The $z_k(x)$ function represents the cost of the subproblem given a solution for $x$. This function is a convex function because $x$ affects only the right-hand side of the problem (this is a standard result in LP theory).
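A brief justification via LP duality (a textbook sketch added here for completeness; $p_k$ denotes the vector of dual variables of the subproblem and is notation introduced only for this note). The dual of the slave problem is
\[ \begin{align} z_k(x) \ = \ & \max_{p_k} && p_k^T (d_k - B_k x) && \notag \\ & \text{subject to} && D_k^T p_k \leq f_k && \notag \\ \end{align} \]
so $z_k$ is a pointwise maximum of affine functions of $x$, hence convex and piecewise linear, and at an optimal dual $p_k^*$ a subgradient with respect to $x$ is $-B_k^T p_k^*$. This is exactly the sensitivity information that will later be queried from the solver through dual values.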
For the special case of the Norm-1 regression, the problem is written as:
\[ \begin{align} z_k(\beta) \ = \ & \min_{\varepsilon^{up}, \varepsilon^{dw}} && \sum_{i \in ObsSet(k)} {\varepsilon^{up}}_i + {\varepsilon^{dw}}_i && \notag \\ & \text{subject to} && {\varepsilon^{up}}_i \geq + y_i - \sum_{j \in Candidates} \beta_j x_{i,j} && \forall i \in ObsSet(k) \notag \\ & && {\varepsilon^{dw}}_i \geq - y_i + \sum_{j \in Candidates} \beta_j x_{i,j} && \forall i \in ObsSet(k) \notag \\ & && {\varepsilon^{up}}_i, {\varepsilon^{dw}}_i \geq 0 && \forall i \in ObsSet(k) \notag \\ \end{align} \]
The collection $ObsSet(k)$ is a subset of the observation indices. Any partition of the $Observations$ set is valid. In this notebook we will partition with the function:
function ObsSet(K)
    obs_per_block = div(N_Observations, N_Nodes)
    return (1 + (K - 1) * obs_per_block):(K * obs_per_block)
end
ObsSet (generic function with 1 method)
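For instance, with the constants defined above, each block holds div(N_Observations, N_Nodes) = div(5000, 1000) = 5 consecutive observations; a quick sanity check (not part of the original notebook):
@assert ObsSet(1) == 1:5              # first block of observations
@assert ObsSet(2) == 6:10             # second block
@assert ObsSet(N_Nodes) == 4996:5000  # last block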
The slave problem above can be written in JuMP as follows.
At this point we make a small detour to highlight the ParameterJuMP application. Every time you find an if block with the flag PARAM, it means that we have two different implementations of the method: one relying on ParameterJuMP and the other using pure JuMP.
function slave_model(PARAM, K)
    # initialize the JuMP model
    slave = if PARAM
        # special constructor exported by ParameterJuMP
        # to add the functionality to the model
        ModelWithParams(with_optimizer(OPTIMIZER))
    else
        # regular JuMP constructor
        Model(with_optimizer(OPTIMIZER))
    end

    # define local optimization variables for norm-1 error
    @variables(slave, begin
        ɛ_up[ObsSet(K)] ≥ 0
        ɛ_dw[ObsSet(K)] ≥ 0
    end)

    # create the regression coefficient representation
    if PARAM
        # here is the main constructor of the ParameterJuMP package
        # it will create model *parameters* instead of variables
        # variables are added to the optimization model, while parameters
        # are not. Parameters are merged with LP problem constants and do not
        # increase the model dimensions.
        β = Parameters(slave, zeros(N_Candidates))
    else
        # create fixed variables
        @variable(slave, β[1:N_Candidates] == 0)
    end

    # create local constraints
    # note that *parameter* algebra is implemented just like variable
    # algebra. We can multiply parameters by constants, add parameters,
    # sum parameters and variables and so on.
    @constraints(slave, begin
        ɛ_up_ctr[i in ObsSet(K)], ɛ_up[i] ≥ + ∑(X[j, i] * β[j] for j ∈ Candidates) - y[i]
        ɛ_dw_ctr[i in ObsSet(K)], ɛ_dw[i] ≥ - ∑(X[j, i] * β[j] for j ∈ Candidates) + y[i]
    end)
    # ATTENTION β[j] * X[j, i] is much slower

    # create local objective function
    @objective(slave, Min, ∑(ɛ_up[i] + ɛ_dw[i] for i ∈ ObsSet(K)))

    # return the correct group of parameters
    if PARAM
        return (slave, β)
    else
        return (slave, β, FixRef.(β))
    end
end
slave_model (generic function with 1 method)
Now that all pieces of the original problem can be represented by the convex $z_k(x)$ functions, we can recast the problem in the equivalent form:
\[ \begin{align} & \min_{x} && c^T x + z_1(x) + \dots + z_n(x) && \notag \\ & \text{subject to} && Ax = b && \notag \\ & && x \geq 0 && \notag \\ \end{align} \]
However, we cannot pass a problem in this form to a linear programming solver (it could be passed to other kinds of solvers).
Another standard result of optimization theory is that a convex function can be represented by its supporting hyperplanes:
\[ \begin{align} z_k(x) \ = \ & \min_{z} && z && \notag \\ & \text{subject to} && z \geq \pi_k(\hat{x}) (x - \hat{x}) + z_k(\hat{x}), \ \forall \hat{x} \in dom(z_k) && \notag \\ \end{align} \]
Then we can re-write (again) the master problem as
\[ \begin{align} & \min_{x, z_k} && c^T x + z_1 + \dots + z_n \notag \\ & \text{subject to} && z_i \geq \pi_i(\hat{x}) (x - \hat{x}) + z_i(\hat{x}), \ \forall \hat{x} \in dom(z_i), i \in \{1, \dots, n\} \notag \\ & && Ax = b \notag \\ & && x \geq 0 \notag \\ \end{align} \]
Which is a linear program!
However, it has infinitely many constraints!!!
We can relax the infinite set of constraints and write:
\[ \begin{align} & \min_{x, z_k} && c^T x + z_1 + \dots + z_n \notag \\ & \text{subject to} && Ax = b \notag \\ & && x \geq 0 \notag \\ \end{align} \]
But now it is only an underestimation of the original problem (with no cuts, the master can drive every $\varepsilon_i$ to zero, which is why the first iteration of the runs below reports LB = 0). In the case of our problem it can be written as:
\[ \begin{align} & \min_{\varepsilon, \beta} && \sum_{i \in Nodes} \varepsilon_i \notag \\ & \text{subject to} && \varepsilon_i \geq 0 \notag \\ \end{align} \]
This model can be written in JuMP:
function master_model(PARAM)
    master = Model(with_optimizer(OPTIMIZER))
    @variables(master, begin
        ɛ[Nodes] ≥ 0
        β[1:N_Candidates]
        # 0 ≤ β[Candidates] ≤ 8
    end)
    @objective(master, Min, ∑(ɛ[i] for i ∈ Nodes))
    sol = zeros(N_Candidates)
    return (master, ɛ, β, sol)
end
master_model (generic function with 1 method)
The method to solve the master problem and query its solution is given here:
function master_solve(PARAM, master_model)
    model = master_model[1]
    β = master_model[3]
    optimize!(model)
    return (value.(β), objective_value(model))
end
master_solve (generic function with 1 method)
With these building blocks in hand, we can start building the algorithm.
So far we know how to:
Solve the relaxed master problem
Obtain the solution for the $\hat{x}$ (or $\beta$ in our case)
Now we can:
Fix the values of $\hat{x}$ in the slave problems
Solve the slave problem
Query the solution of the slave problem to obtain the supporting hyperplane:
the value of $z_k(\hat{x})$, which is the objective value of the slave problem,
and the derivative $\pi_k(\hat{x}) = \frac{d z_k(x)}{d x} \Big|_{x = \hat{x}}$. The derivative is the dual variable associated with the variable $\hat{x}$, which results from applying the chain rule on the duals of the constraints.
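Combining these two pieces of information, the supporting hyperplane (cut) produced by block $k$ at the candidate $\hat{\beta}$ reads (with $\pi_k$ the vector of duals just described):
\[ \begin{align} z_k(\beta) \ \geq \ z_k(\hat{\beta}) + \pi_k(\hat{\beta})^T (\beta - \hat{\beta}) \ = \ \sum_{j \in Candidates} {\pi_k}_j \beta_j + \Big( z_k(\hat{\beta}) - \pi_k(\hat{\beta})^T \hat{\beta} \Big) \notag \\ \end{align} \]
The constant term in parentheses is exactly the rhs value computed by the function below.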
These new steps are executed by the function:
function slave_solve(PARAM, model, master_solution)
    β0 = master_solution[1]
    slave = model[1]

    # The first step is to fix the values given by the master problem
    @timeit "fix" if PARAM
        # *parameters* can be set to new values and the optimization
        # model will be automatically updated
        β_p = model[2]
        ParameterJuMP.setvalue!.(β_p, β0)
    else
        # JuMP also has the ability to fix variables to new values
        β_v = model[2]
        β_v_ref = model[3]
        fix.(β_v, β0)
    end

    # here the slave problem is solved
    @timeit "opt" optimize!(slave)

    # query dual variables, which are sensitivities
    # they represent the subgradient (almost a derivative)
    # of the objective function for infinitesimal variations
    # of the constants in the linear constraints
    @timeit "dual" if PARAM
        # we can query dual values of *parameters*
        π = dual.(β_p)
    else
        # or, in pure JuMP, we query the duals from
        # constraints that fix the values of our regression
        # coefficients
        π = dual.(β_v_ref)
    end
    # π2 = shadow_price.(β_fix)
    # @show ∑(π .- π2)

    obj = objective_value(slave)
    rhs = obj - dot(π, β0)
    return (rhs, π, obj)
end
slave_solve (generic function with 1 method)
Now that we have a cutting plane in hand, we can add it to the master problem:
function master_add_cut(PARAM, master_model, cut_info, node)
    master = master_model[1]
    ɛ = master_model[2]
    β = master_model[3]
    rhs = cut_info[1]
    π = cut_info[2]
    @constraint(master, ɛ[node] ≥ ∑(π[j] * β[j] for j ∈ Candidates) + rhs)
end
master_add_cut (generic function with 1 method)
The complete algorithm is
Solve the relaxed master problem
Obtain the solution for the $\hat{x}$ (or $\beta$ in our case)
Fix the values of $\hat{x}$ in the slave problems
Solve the slave problem
Query the solution of the slave problem to obtain the supporting hyperplane
Add the hyperplane to the master problem
Repeat
Now we grab all the pieces that we built and write the Benders algorithm by calling the above functions in the proper order.
The macro @timeit is used to time each step of the algorithm.
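One more remark before the code, on the two bounds tracked in the loop (a standard Benders argument, added here as a reminder): the master contains only finitely many cuts, so it is a relaxation, while evaluating every slave problem at the current candidate $\hat{\beta}$ gives the true cost of a feasible point, so
\[ \begin{align} LB \ = \ \text{master optimal value} \ \leq \ \min_{\beta} \sum_{k} z_k(\beta) \ \leq \ \sum_{k} z_k(\hat{\beta}) \ = \ UB \notag \\ \end{align} \]
The loop stops when the relative gap between UB and LB falls below 5%.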
function decomposed_model(PARAM)
    reset_timer!() # reset timer for comparison
    time_init = @elapsed @timeit "Init" begin
        println("Initialize decomposed model")

        # Create the master problem with no cuts
        println("Build master problem")
        @timeit "Master" master = master_model(PARAM)

        # initialize solution for the regression coefficients in zero
        println("Build initial solution")
        @timeit "Sol" solution = (zeros(N_Candidates), Inf)
        best_sol = deepcopy(solution)

        # Create the slave problems
        println("Build slave problems")
        @timeit "Slaves" slaves = [slave_model(PARAM, i) for i ∈ Candidates]

        # Save initial version of the slave problems and create
        # the first set of cuts
        println("Build initial cuts")
        @timeit "Cuts" cuts = [slave_solve(PARAM, slaves[i], solution) for i ∈ Candidates]
    end

    UB = +Inf
    LB = -Inf

    println("Initialize Iterative step")
    time_loop = @elapsed @timeit "Loop" for k in 1:80
        # Add cuts generated from each slave problem to the master problem
        @timeit "add cuts" for i ∈ Candidates
            master_add_cut(PARAM, master, cuts[i], i)
        end

        # Solve the master problem with the new set of cuts
        # obtain new solution candidate for the regression coefficients
        @timeit "solve master" solution = master_solve(PARAM, master)

        # Pass the new candidate solution to each of the slave problems
        # Solve the slave problems and obtain cutting planes
        # @show solution[2]
        @timeit "solve nodes" for i ∈ Candidates
            cuts[i] = slave_solve(PARAM, slaves[i], solution)
        end

        LB = solution[2]
        new_UB = ∑(cuts[i][3] for i ∈ Candidates)
        if new_UB ≤ UB
            best_sol = deepcopy(solution)
        end
        UB = min(UB, new_UB)
        println("Iter = $k, LB = $LB, UB = $UB")

        if abs(UB - LB) / (abs(UB) + abs(LB)) < 0.05
            println("Converged!")
            break
        end
    end

    println("First coefficients in solution: $(solution[1][1:min(10, N_Candidates)])")
    println("Objective value: $(solution[2])")
    println("Time in loop: $time_loop")
    println("Time in init: $time_init")

    print_timer()

    return best_sol[1]
end
decomposed_model (generic function with 1 method)
Run Benders decomposition with pure JuMP:
GC.gc()
β1 = decomposed_model(false);
Initialize decomposed model
Build master problem
Build initial solution
Build slave problems
Build initial cuts
Initialize Iterative step
Iter = 1, LB = 0.0, UB = 356653.53178661753
Iter = 2, LB = 30.399676563064446, UB = 356653.53178661753
Iter = 3, LB = 86.26967456145769, UB = 356653.53178661753
Iter = 4, LB = 149.98183770027555, UB = 300.9817490670204
Iter = 5, LB = 204.95734750716665, UB = 267.5103302069098
Iter = 6, LB = 227.4023133882345, UB = 251.87819595553975
Iter = 7, LB = 233.85017559155568, UB = 241.6185092949669
Converged!
First coefficients in solution: [3.76401, 0.794833, 0.878023, 0.056713, 0.095081, 0.320758, 0.351928, 0.226681, 0.193147, 0.105357]
Objective value: 233.85017559155568
Time in loop: 149.186656266
Time in init: 17.09593682
─────────────────────────────────────────────────────────────────────────
                                 Time                   Allocations
                         ──────────────────────   ───────────────────────
   Tot / % measured:          167s / 100%              8.79GiB / 100%

Section          ncalls     time   %tot     avg     alloc   %tot      avg
─────────────────────────────────────────────────────────────────────────
Loop                  1     149s  89.7%    149s   5.45GiB  62.1%  5.45GiB
  solve master        7     120s  72.0%   17.1s   33.3MiB  0.37%  4.76MiB
  solve nodes         7    27.4s  16.5%   3.92s   4.88GiB  55.6%   714MiB
    fix           4.90k    11.3s  6.78%  2.30ms   3.61GiB  41.2%   773KiB
    dual          4.90k    10.2s  6.14%  2.09ms   1.25GiB  14.3%   268KiB
    opt           4.90k    5.80s  3.49%  1.18ms   13.5MiB  0.15%  2.81KiB
  add cuts            7    1.97s  1.18%   281ms    542MiB  6.03%  77.4MiB
Init                  1    17.1s  10.3%   17.1s   3.33GiB  37.9%  3.33GiB
  Cuts                1    13.1s  7.89%   13.1s   2.37GiB  27.0%  2.37GiB
    opt             700    10.9s  6.56%  15.6ms   2.10GiB  24.0%  3.07MiB
    dual            700    1.62s  0.97%  2.31ms    190MiB  2.11%   278KiB
    fix             700    167ms  0.10%   239μs   39.7MiB  0.44%  58.1KiB
  Slaves              1    3.85s  2.31%   3.85s   0.94GiB  10.8%  0.94GiB
  Master              1   2.92ms  0.00%  2.92ms    871KiB  0.01%   871KiB
  Sol                 1   6.42μs  0.00%  6.42μs   5.66KiB  0.00%  5.66KiB
─────────────────────────────────────────────────────────────────────────
Run Benders decomposition with ParameterJuMP:
GC.gc()
β2 = decomposed_model(true);
Initialize decomposed model
Build master problem
Build initial solution
Build slave problems
Build initial cuts
Initialize Iterative step
Iter = 1, LB = 0.0, UB = 356653.53178661753
Iter = 2, LB = 30.970893155886884, UB = 356653.53178661753
Iter = 3, LB = 85.03827213320974, UB = 356653.53178661753
Iter = 4, LB = 137.43316230115607, UB = 324.51973053567116
Iter = 5, LB = 198.2684305114623, UB = 264.5511437349908
Iter = 6, LB = 225.27179951526546, UB = 252.06650777298213
Iter = 7, LB = 233.04451775497623, UB = 244.32095435502725
Converged!
First coefficients in solution: [3.76189, 0.793195, 0.876522, 0.0542087, 0.0937809, 0.31955, 0.351403, 0.226661, 0.194298, 0.105961]
Objective value: 233.04451775497623
Time in loop: 86.080175646
Time in init: 5.860248356
─────────────────────────────────────────────────────────────────────────
                                 Time                   Allocations
                         ──────────────────────   ───────────────────────
   Tot / % measured:         91.9s / 100%              1.48GiB / 100%

Section          ncalls     time   %tot     avg     alloc   %tot      avg
─────────────────────────────────────────────────────────────────────────
Loop                  1    86.1s  93.6%   86.1s    661MiB  43.6%   661MiB
  solve master        7    80.1s  87.2%   11.4s   33.3MiB  2.20%  4.76MiB
  solve nodes         7    3.45s  3.76%   493ms   87.6MiB  5.78%  12.5MiB
    opt           4.90k    3.00s  3.26%   612μs   54.3MiB  3.58%  11.4KiB
    dual          4.90k    245ms  0.27%  49.9μs   30.4MiB  2.00%  6.34KiB
    fix           4.90k    143ms  0.16%  29.2μs    383KiB  0.02%    80.0B
  add cuts            7    2.48s  2.70%   355ms    539MiB  35.6%  77.1MiB
Init                  1    5.86s  6.37%   5.86s    855MiB  56.4%   855MiB
  Slaves              1    3.42s  3.72%   3.42s    684MiB  45.2%   684MiB
  Cuts                1    2.43s  2.65%   2.43s    170MiB  11.2%   170MiB
    opt             700    1.85s  2.01%  2.64ms    106MiB  7.01%   155KiB
    dual            700    327ms  0.36%   468μs   36.8MiB  2.43%  53.8KiB
    fix             700   18.4ms  0.02%  26.3μs   54.7KiB  0.00%    80.0B
  Master              1   8.03ms  0.01%  8.03ms    871KiB  0.06%   871KiB
  Sol                 1   1.71μs  0.00%  1.71μs   5.66KiB  0.00%  5.66KiB
─────────────────────────────────────────────────────────────────────────
Plot the resulting time series from the Benders-based estimations:
const y1 = X' * β1
const y2 = X' * β2
plt = plot(time, y, xlabel = "Time (s)", ylabel = "Amplitude")
plot!(plt, time, y1)
plot!(plt, time, y2)
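As an optional extra check (not part of the original notebook), one can also compare the estimated coefficients with the true β generated earlier, for example:
# distance between the true coefficients and the two Benders estimates
println("norm(β - β1) = ", norm(β - β1))
println("norm(β - β2) = ", norm(β - β2))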
ParameterJuMP was developed by Joaquim Dias Garcia (@joaquimg) and Benoît Legat (@blegat)