# Experiment design

This tutorial was generated using Literate.jl.

This tutorial was originally contributed by Arpit Bhatia and Chris Coey.

This tutorial covers experiment design examples (D-optimal, A-optimal, and E-optimal) from section 7.5 of the book *Convex Optimization* by Boyd and Vandenberghe.

The tutorial uses the following packages:

```julia
using JuMP
import SCS
import LinearAlgebra
import Random
```
> **Info**
>
> This tutorial uses sets from MathOptInterface. By default, JuMP exports the `MOI` symbol as an alias for the MathOptInterface.jl package. We recommend making this more explicit in your code by adding the following line:
>
> ```julia
> import MathOptInterface as MOI
> ```

We set a seed so the random numbers are repeatable:

```julia
Random.seed!(1234)
```

```
Random.TaskLocalRNG()
```

## The relaxed experiment design problem

The basic experiment design problem is as follows.

Given the menu of possible choices for experiments, $v_{1}, \ldots, v_{p}$, and the total number $m$ of experiments to be carried out, choose the numbers of each type of experiment, that is, $m_{1}, \ldots, m_{p}$ to make the error covariance $E$ small (in some sense).

The variables $m_{1}, \ldots, m_{p}$ must, of course, be integers and sum to $m$ the given total number of experiments. This leads to the optimization problem:

$$
\begin{aligned}
\min\left(\text{w.r.t. } \mathbf{S}_{+}^{n}\right) \quad & E=\left(\sum_{j=1}^{p} m_{j} v_{j} v_{j}^{T}\right)^{-1} \\
\text{subject to} \quad & m_{i} \geq 0, \quad i=1,\ldots,p \\
& \sum_{i=1}^p m_{i} = m \\
& m_{i} \in \mathbb{Z}, \quad i=1,\ldots,p
\end{aligned}
$$

The basic experiment design problem can be a hard combinatorial problem when $m$, the total number of experiments, is comparable to $n$, since in this case the $m_{i}$ are all small integers.

In the case when $m$ is large compared to $n$, however, a good approximate solution can be found by ignoring, or relaxing, the constraint that the $m_{i}$ are integers.

Let $\lambda_{i}=m_{i} / m$, which is the fraction of the total number of experiments for which the experiment vector $v_{i}$ is chosen, that is, the relative frequency of experiment $i$. We can express the error covariance in terms of $\lambda_{i}$ as:

$$
E=\frac{1}{m}\left(\sum_{i=1}^{p} \lambda_{i} v_{i} v_{i}^{T}\right)^{-1}
$$

The vector $\lambda \in \mathbf{R}^{p}$ satisfies $\lambda \succeq 0, \mathbf{1}^{T} \lambda=1,$ and also, each $\lambda_{i}$ is an integer multiple of $1 / m$. By ignoring this last constraint, we arrive at the problem:

$$
\begin{aligned}
\min\left(\text{w.r.t. } \mathbf{S}_{+}^{n}\right) \quad & E=(1 / m)\left(\sum_{i=1}^{p} \lambda_{i} v_{i} v_{i}^{T}\right)^{-1} \\
\text{subject to} \quad & \lambda \succeq 0 \\
& \mathbf{1}^{T} \lambda=1
\end{aligned}
$$
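Once the relaxed problem is solved, one simple way to recover integer counts is to round $m \lambda_{i}$ and repair the total. This is a heuristic sketch, not prescribed by the text, and the numbers below are hypothetical:

```julia
# A simple rounding heuristic (illustrative): recover integer experiment
# counts m_i from a relaxed solution lambda.
m = 12
lambda = [0.25, 0.15, 0.0, 0.0, 0.2, 0.15, 0.1, 0.15]  # hypothetical frequencies; sum(lambda) == 1
m_i = round.(Int, m .* lambda)
# Rounding may change the total; repair it on the largest entry so that
# sum(m_i) == m holds again.
m_i[argmax(m_i)] += m - sum(m_i)
m_i, sum(m_i)
```

For $m$ large compared to $n$, the loss from this rounding is small, which is what justifies the relaxation.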

Several scalarizations have been proposed for the experiment design problem, which is a vector optimization problem over the positive semidefinite cone.

```julia
q = 4      # dimension of estimate space
p = 8      # number of experimental vectors
n_max = 3  # upper bound on each experiment count
n = 12     # total number of experiments

V = randn(q, p)

eye = Matrix{Float64}(LinearAlgebra.I, q, q);
```

## A-optimal design

In A-optimal experiment design, we minimize $\operatorname{tr} E$, the trace of the covariance matrix. This objective is the mean squared norm of the error:

$$
\mathbf{E}\|e\|_{2}^{2}=\mathbf{E} \operatorname{tr}\left(e e^{T}\right)=\operatorname{tr} E
$$
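This identity can be checked numerically by sampling errors from a synthetic covariance. The matrix `E_cov` below is an arbitrary stand-in for the estimator's error covariance, not part of the tutorial's model:

```julia
import LinearAlgebra
import Random
import Statistics

Random.seed!(1234)
# Hypothetical positive definite covariance standing in for E.
A = randn(4, 4)
E_cov = A * A' + LinearAlgebra.I
# Draw errors e ~ N(0, E_cov) via a Cholesky factor and compare the mean
# squared norm against tr(E_cov).
C = LinearAlgebra.cholesky(E_cov).L
mc = Statistics.mean(LinearAlgebra.norm(C * randn(4))^2 for _ in 1:100_000)
mc, LinearAlgebra.tr(E_cov)  # agree to Monte Carlo accuracy
```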

The A-optimal experiment design problem in SDP form is:

$$
\begin{aligned}
\min \quad & \mathbf{1}^{T} u \\
\text{subject to} \quad & \left[\begin{array}{cc}
\sum_{i=1}^{p} \lambda_{i} v_{i} v_{i}^{T} & e_{k} \\
e_{k}^{T} & u_{k}
\end{array}\right] \succeq 0, \quad k=1, \ldots, n \\
& \lambda \succeq 0 \\
& \mathbf{1}^{T} \lambda=1
\end{aligned}
$$
```julia
aOpt = Model(SCS.Optimizer)
set_silent(aOpt)
@variable(aOpt, np[1:p], lower_bound = 0, upper_bound = n_max)
@variable(aOpt, u[1:q], lower_bound = 0)
@constraint(aOpt, sum(np) <= n)
for i in 1:q
    matrix = [
        V*LinearAlgebra.diagm(0 => np ./ n)*V' eye[:, i]
        eye[i, :]' u[i]
    ]
    @constraint(aOpt, matrix >= 0, PSDCone())
end
@objective(aOpt, Min, sum(u))
optimize!(aOpt)
objective_value(aOpt)
```

```
5.103223362935152
```
```julia
value.(np)
```

```
8-element Vector{Float64}:
 2.9495211350241117
 1.7790921964770656
 8.104871340374779e-6
 4.706163786116579e-6
 2.1082669841842727
 1.6983463798099407
 1.2635010842547627
 2.2012292330174965
```

## E-optimal design

In $E$-optimal design, we minimize the norm of the error covariance matrix, that is, the maximum eigenvalue of $E$.

Since the diameter (twice the longest semi-axis) of the confidence ellipsoid $\mathcal{E}$ is proportional to $\|E\|_{2}^{1 / 2}$, minimizing $\|E\|_{2}$ can be interpreted geometrically as minimizing the diameter of the confidence ellipsoid.
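Since $E$ is the inverse of $\sum_{i} \lambda_{i} v_{i} v_{i}^{T}$ (up to the $1/m$ factor), its largest eigenvalue is the reciprocal of the smallest eigenvalue of that sum. A small numerical sketch, with an illustrative matrix `S` standing in for the sum:

```julia
import LinearAlgebra

# Illustrative 2×2 positive definite matrix standing in for
# sum_i lambda_i v_i v_i'.
S = [4.0 1.0; 1.0 3.0]
# The largest eigenvalue of E = S^{-1} equals 1 / (smallest eigenvalue of S),
# which is why maximizing t subject to S ⪰ t I minimizes ||E||_2.
LinearAlgebra.eigmax(inv(S)), 1 / LinearAlgebra.eigmin(S)  # equal
```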

E-optimal design can also be interpreted as minimizing the maximum variance of $q^{T} e$, over all $q$ with $\|q\|_{2}=1$. The E-optimal experiment design problem in SDP form is:

$$
\begin{aligned}
\max \quad & t \\
\text{subject to} \quad & \sum_{i=1}^{p} \lambda_{i} v_{i} v_{i}^{T} \succeq t I \\
& \lambda \succeq 0 \\
& \mathbf{1}^{T} \lambda=1
\end{aligned}
$$
```julia
eOpt = Model(SCS.Optimizer)
set_silent(eOpt)
@variable(eOpt, 0 <= np[1:p] <= n_max)
@variable(eOpt, t)
@constraint(
    eOpt,
    V * LinearAlgebra.diagm(0 => np ./ n) * V' - (t .* eye) >= 0,
    PSDCone(),
)
@constraint(eOpt, sum(np) <= n)
@objective(eOpt, Max, t)
optimize!(eOpt)
objective_value(eOpt)
```

```
0.43538638565427923
```
```julia
value.(np)
```

```
8-element Vector{Float64}:
 2.999998678490912
 2.752810163966476
 -3.609586470621804e-7
 -1.4872470371469733e-6
 2.181846279169085
 2.3252635465673577
 0.21896764651694434
 1.5211163049621212
```

## D-optimal design

The most widely used scalarization is called $D$-optimal design, in which we minimize the determinant of the error covariance matrix $E$. This corresponds to designing the experiment to minimize the volume of the resulting confidence ellipsoid (for a fixed confidence level). Ignoring the constant factor $1 / m$ in $E$, and taking the logarithm of the objective, we can pose this problem as a convex optimization problem:

$$
\begin{aligned}
\min \quad & \log \det\left(\sum_{i=1}^{p} \lambda_{i} v_{i} v_{i}^{T}\right)^{-1} \\
\text{subject to} \quad & \lambda \succeq 0 \\
& \mathbf{1}^{T} \lambda=1
\end{aligned}
$$
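Note that $\log \det S^{-1} = -\log \det S$, so minimizing the log-determinant of $E$ is the same as maximizing the log-determinant of the sum, which is how the model below is posed. A quick check with an illustrative matrix `S`:

```julia
import LinearAlgebra

# Illustrative positive definite matrix standing in for
# sum_i lambda_i v_i v_i'.
S = [4.0 1.0; 1.0 3.0]
# log det(S^{-1}) == -log det(S), so min log det E  <=>  max log det S.
LinearAlgebra.logdet(inv(S)), -LinearAlgebra.logdet(S)  # equal
```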
```julia
dOpt = Model(SCS.Optimizer)
set_silent(dOpt)
@variable(dOpt, np[1:p], lower_bound = 0, upper_bound = n_max)
@variable(dOpt, t)
@objective(dOpt, Max, t)
@constraint(dOpt, sum(np) <= n)
E = V * LinearAlgebra.diagm(0 => np ./ n) * V'
@constraint(dOpt, [t; 1; triangle_vec(E)] in MOI.LogDetConeTriangle(q))
optimize!(dOpt)
objective_value(dOpt)
```

```
0.3084539346728666
```
```julia
value.(np)
```

```
8-element Vector{Float64}:
 0.4275824266217608
 2.9100163168942075
 2.9168724521318788e-6
 2.6291979699361115e-6
 2.915780629983959
 2.673375664592317
 2.7353950122201236
 0.33783880862535554
```