Polynomial Optimization

Contributed by: Benoît Legat

Introduction

Consider the polynomial optimization problem [L09, Example 2.2] of minimizing the polynomial $x^3 - x^2 + 2xy - y^2 + y^3$ over the polyhedron defined by the inequalities $x \ge 0, y \ge 0$ and $x + y \geq 1$.

[L09] Lasserre, J. B. Moments, positive polynomials and their applications. World Scientific, 2009.

using DynamicPolynomials
@polyvar x y
p = x^3 - x^2 + 2x*y - y^2 + y^3
using SumOfSquares
S = @set x >= 0 && y >= 0 && x + y >= 1
p(x=>1, y=>0), p(x=>1//2, y=>1//2), p(x=>0, y=>1)
(0, 1//4, 0)

A local solver only uses the local information given by the value, gradient and Hessian of the objective function and constraints at a given solution. When it converges, it therefore only guarantees that the solution it found is a local minimum. In this example, the optimal solutions are $(x, y) = (1, 0)$ and $(x, y) = (0, 1)$ with objective value $0$ but Ipopt only finds the local minimum $(1/2, 1/2)$ with objective value $1/4$.

import Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, a >= 0)
@variable(model, b >= 0)
@constraint(model, a + b >= 1)
@NLobjective(model, Min, a^3 - a^2 + 2a*b - b^2 + b^3)
optimize!(model)
This is Ipopt version 3.14.13, running with linear solver MUMPS 5.6.1.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        2
Number of nonzeros in Lagrangian Hessian.............:        3

Total number of variables............................:        2
                     variables with only lower bounds:        2
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.9999940e-06 9.80e-01 1.33e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  5.3716721e-05 9.40e-01 1.18e+00  -1.0 3.97e-01    -  3.23e-02 5.03e-02h  1
   2  2.8887051e-01 0.00e+00 4.87e+01  -1.0 4.95e-01   2.0 1.00e+00 1.00e+00f  1
   3  2.8997185e-01 0.00e+00 1.00e-06  -1.0 1.33e-03    -  1.00e+00 1.00e+00f  1
   4  2.8022658e-01 0.00e+00 3.97e-01  -2.5 1.19e-02   1.5 1.00e+00 1.00e+00f  1
   5  2.5585775e-01 0.00e+00 3.45e-01  -2.5 3.10e-02   1.0 1.00e+00 1.00e+00f  1
   6  2.5292585e-01 0.00e+00 1.16e-01  -2.5 1.03e-02   0.6 1.00e+00 3.73e-01f  2
   7  2.5283150e-01 0.00e+00 1.54e-04  -2.5 1.25e-04   0.1 1.00e+00 1.00e+00f  1
   8  2.5021392e-01 0.00e+00 1.14e-02  -3.8 3.48e-03   0.5 1.00e+00 1.00e+00f  1
   9  2.5014901e-01 0.00e+00 9.49e-05  -3.8 8.65e-05   0.0 1.00e+00 1.00e+00f  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  2.5000198e-01 0.00e+00 5.74e-04  -5.7 1.96e-04   0.5 1.00e+00 1.00e+00f  1
  11  2.5000184e-01 0.00e+00 1.48e-06  -5.7 1.90e-07   0.9 1.00e+00 1.00e+00f  1
  12  2.5000000e-01 0.00e+00 6.39e-06  -8.6 2.46e-06   0.4 1.00e+00 1.00e+00f  1
  13  2.5000000e-01 0.00e+00 1.67e-10  -8.6 2.41e-11   0.8 1.00e+00 1.00e+00h  1

Number of Iterations....: 13

                                   (scaled)                 (unscaled)
Objective...............:   2.4999999500590342e-01    2.4999999500590342e-01
Dual infeasibility......:   1.6744750030994737e-10    1.6744750030994737e-10
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5059035596801718e-09    2.5059035596801718e-09
Overall NLP error.......:   2.5059035596801718e-09    2.5059035596801718e-09


Number of objective function evaluations             = 19
Number of objective gradient evaluations             = 14
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 19
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations             = 13
Total seconds in IPOPT                               = 0.009

EXIT: Optimal Solution Found.

As we can see below, the termination status is LOCALLY_SOLVED and not OPTIMAL because Ipopt only guarantees local optimality.

solution_summary(model)
* Solver : Ipopt

* Status
  Result count       : 1
  Termination status : LOCALLY_SOLVED
  Message from the solver:
  "Solve_Succeeded"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : FEASIBLE_POINT
  Objective value    : 2.50000e-01
  Dual objective value : 7.50000e-01

* Work counters
  Solve time (sec)   : 1.01531e-02

Indeed, the solution found is not globally optimal:

value(a), value(b)
(0.49999999667062867, 0.4999999966705758)

Note that the problem can be written equivalently as follows using registered functions. The difference is that the gradient and Hessian will be computed via the symbolic differentiation provided by MultivariatePolynomials instead of JuMP's automatic differentiation:

f(a, b) = p(x => a, y => b)
∇p = differentiate(p, [x, y])
function ∇f(g, a, b)
    for i in eachindex(g)
        g[i] = ∇p[i](x => a, y => b)
    end
end
∇²p = differentiate(∇p, [x, y])
function ∇²f(H, a, b)
    for j in axes(∇²p, 2)
        for i in j:size(∇²p, 1)
            H[i, j] = ∇²p[i, j](x => a, y => b)
        end
    end
end
using Ipopt
gmodel = Model(Ipopt.Optimizer)
@variable(gmodel, a >= 0)
@variable(gmodel, b >= 0)
@constraint(gmodel, a + b >= 1)
register(gmodel, :f, 2, f, ∇f, ∇²f)
@NLobjective(gmodel, Min, f(a, b))
optimize!(gmodel)
This is Ipopt version 3.14.13, running with linear solver MUMPS 5.6.1.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        2
Number of nonzeros in Lagrangian Hessian.............:        3

Total number of variables............................:        2
                     variables with only lower bounds:        2
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.9999940e-06 9.80e-01 1.33e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  5.3716721e-05 9.40e-01 1.18e+00  -1.0 3.97e-01    -  3.23e-02 5.03e-02h  1
   2  2.8887051e-01 0.00e+00 4.87e+01  -1.0 4.95e-01   2.0 1.00e+00 1.00e+00f  1
   3  2.8997185e-01 0.00e+00 1.00e-06  -1.0 1.33e-03    -  1.00e+00 1.00e+00f  1
   4  2.8022658e-01 0.00e+00 3.97e-01  -2.5 1.19e-02   1.5 1.00e+00 1.00e+00f  1
   5  2.5585775e-01 0.00e+00 3.45e-01  -2.5 3.10e-02   1.0 1.00e+00 1.00e+00f  1
   6  2.5292585e-01 0.00e+00 1.16e-01  -2.5 1.03e-02   0.6 1.00e+00 3.73e-01f  2
   7  2.5283150e-01 0.00e+00 1.54e-04  -2.5 1.25e-04   0.1 1.00e+00 1.00e+00f  1
   8  2.5021392e-01 0.00e+00 1.14e-02  -3.8 3.48e-03   0.5 1.00e+00 1.00e+00f  1
   9  2.5014901e-01 0.00e+00 9.49e-05  -3.8 8.65e-05   0.0 1.00e+00 1.00e+00f  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  2.5000198e-01 0.00e+00 5.74e-04  -5.7 1.96e-04   0.5 1.00e+00 1.00e+00f  1
  11  2.5000184e-01 0.00e+00 1.48e-06  -5.7 1.90e-07   0.9 1.00e+00 1.00e+00f  1
  12  2.5000000e-01 0.00e+00 6.39e-06  -8.6 2.46e-06   0.4 1.00e+00 1.00e+00f  1
  13  2.5000000e-01 0.00e+00 1.67e-10  -8.6 2.41e-11   0.8 1.00e+00 1.00e+00h  1

Number of Iterations....: 13

                                   (scaled)                 (unscaled)
Objective...............:   2.4999999500590339e-01    2.4999999500590339e-01
Dual infeasibility......:   1.6744750030994737e-10    1.6744750030994737e-10
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5059035596801718e-09    2.5059035596801718e-09
Overall NLP error.......:   2.5059035596801718e-09    2.5059035596801718e-09


Number of objective function evaluations             = 19
Number of objective gradient evaluations             = 14
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 19
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations             = 13
Total seconds in IPOPT                               = 0.092

EXIT: Optimal Solution Found.

Even though we have the algebraic expressions of the gradient and Hessian, Ipopt does not use these symbolic expressions, only local information, hence it can still only provide local guarantees:

solution_summary(gmodel)
* Solver : Ipopt

* Status
  Result count       : 1
  Termination status : LOCALLY_SOLVED
  Message from the solver:
  "Solve_Succeeded"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : FEASIBLE_POINT
  Objective value    : 2.50000e-01
  Dual objective value : 7.50000e-01

* Work counters
  Solve time (sec)   : 9.27331e-02

and the same solution is found:

value(a), value(b)
(0.49999999667062867, 0.4999999966705758)

Sum-of-Squares approach

We will now see how to find the optimal solution using Sum-of-Squares programming. We first need to pick an SDP solver; see here for a list of the available choices.

import SCS
scs = SCS.Optimizer
import Dualization
dual_scs = Dualization.dual_optimizer(scs)
#11 (generic function with 1 method)

A Sum-of-Squares certificate that $p \ge \alpha$ over the domain S ensures that $\alpha$ is a lower bound to the polynomial optimization problem. The following program searches for the largest such lower bound and finds zero.

model = SOSModel(dual_scs)
@variable(model, α)
@objective(model, Max, α)
@constraint(model, c3, p >= α, domain = S)
optimize!(model)
------------------------------------------------------------------
	       SCS v3.2.3 - Splitting Conic Solver
	(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem:  variables n: 10, constraints m: 25
cones: 	  z: primal zero / dual free vars: 1
	  s: psd vars: 24, ssize: 4
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
	  alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
	  max_iters: 100000, normalize: 1, rho_x: 1.00e-06
	  acceleration_lookback: 10, acceleration_interval: 10
lin-sys:  sparse-direct-amd-qdldl
	  nnz(A): 37, nnz(P): 0
------------------------------------------------------------------
 iter | pri res | dua res |   gap   |   obj   |  scale  | time (s)
------------------------------------------------------------------
     0| 1.90e+01  1.01e+00  5.26e+01 -2.61e+01  1.00e-01  8.12e-04
   125| 4.27e-05  3.97e-05  9.70e-06  8.12e-06  8.63e-01  1.86e-03
------------------------------------------------------------------
status:  solved
timings: total: 1.86e-03s = setup: 7.46e-04s + solve: 1.12e-03s
	 lin-sys: 8.40e-05s, cones: 9.39e-04s, accel: 8.70e-06s
------------------------------------------------------------------
objective = 0.000008
------------------------------------------------------------------

This time, the termination status is OPTIMAL but this does not necessarily mean that we found the optimal solution to the polynomial optimization problem. It only means that SCS found an optimal solution to the Sum-of-Squares relaxation.

solution_summary(model)
* Solver : Dual model with SCS attached

* Status
  Result count       : 1
  Termination status : OPTIMAL
  Message from the solver:
  "solved"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : FEASIBLE_POINT
  Objective value    : 1.29679e-05
  Dual objective value : 3.27176e-06

* Work counters
  Solve time (sec)   : 1.86367e-03

The feasibility of the primal solution guarantees that the objective value $0$ is a lower bound to the polynomial optimization problem. The optimality means that it is the best lower bound we can get (at this degree of the hierarchy). Using the solution $(1/2, 1/2)$ of objective value $1/4$ found by Ipopt and this certificate of the lower bound $0$, we know that the optimal objective value lies in the interval $[0, 1/4]$, but we still do not know what it is (assuming we did not try the solutions $(1, 0)$ and $(0, 1)$ as in the introduction). If the dual of the constraint c3 were atomic, its atoms would give optimal solutions of objective value $0$, but that is not the case.

ν3 = moment_matrix(c3)
atomic_measure(ν3, 1e-3) # Returns nothing as the dual is not atomic

Fortunately, there is a hierarchy of programs with increasingly better bounds that can be solved until we get one with an atomic dual variable. This comes from the way the Sum-of-Squares constraint with domain S is formulated. The polynomial $p - \alpha$ is guaranteed to be nonnegative over the domain S if there exist Sum-of-Squares polynomials $s_0$, $s_1$, $s_2$, $s_3$ such that

\[p - \alpha = s_0 + s_1 x + s_2 y + s_3 (x + y - 1).\]

Indeed, over the domain S, $x$, $y$ and $x + y - 1$ are nonnegative, so each term on the right-hand side is nonnegative over S and hence so is their sum. Once the degrees of $s_1$, $s_2$ and $s_3$ have been decided, the degree needed for $s_0$ is determined, but we have freedom in choosing the degrees of $s_1$, $s_2$ and $s_3$. By default, they are chosen so that the degrees of $s_1 x$, $s_2 y$ and $s_3 (x + y - 1)$ match those of $p - \alpha$ but this can be overridden using the maxdegree keyword argument.
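
When the solver finds a certificate, it can be inspected from the solved model. The following is a minimal sketch, added for illustration, assuming the model containing the constraint c3 above has been solved and that the gram_matrix and lagrangian_multipliers accessors of SumOfSquares are available in the installed version:

# Hedged sketch: inspect the certificate computed for the constraint c3.
lagrangian_multipliers(c3)  # the multipliers s_i associated with the inequalities of S
gram_matrix(c3)             # Gram matrix certifying that s_0 is a sum of squares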

The maxdegree keyword argument

The maximum total degree (i.e. maximum sum of the exponents of $x$ and $y$) of the monomials of $p$ is 3, so the constraint in the program above is equivalent to @constraint(model, p >= α, domain = S, maxdegree = 3). That is, since $x$, $y$ and $x + y - 1$ have total degree 1, the Sum-of-Squares polynomials $s_1$, $s_2$ and $s_3$ have been chosen with maximum total degree $2$. Since these polynomials are sums of squares, their degree must be even, so the next maximum total degree to try is 4. For this reason, the keywords maxdegree = 4 and maxdegree = 5 have the same effect in this example. In general, if the polynomials in the domain are not all odd or all even, each value of maxdegree has a different effect on the choice of the maximum total degree of some $s_i$.
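
As a quick sanity check (an added illustration, not part of the original example), the maximum total degree of $p$ can be computed directly; this assumes the maxdegree function of MultivariatePolynomials, re-exported by DynamicPolynomials:

maxdegree(p)  # returns 3, the maximum total degree of the monomials of p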

model = SOSModel(dual_scs)
@variable(model, α)
@objective(model, Max, α)
@constraint(model, c4, p >= α, domain = S, maxdegree = 4)
optimize!(model)
------------------------------------------------------------------
	       SCS v3.2.3 - Splitting Conic Solver
	(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem:  variables n: 10, constraints m: 25
cones: 	  z: primal zero / dual free vars: 1
	  s: psd vars: 24, ssize: 4
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
	  alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
	  max_iters: 100000, normalize: 1, rho_x: 1.00e-06
	  acceleration_lookback: 10, acceleration_interval: 10
lin-sys:  sparse-direct-amd-qdldl
	  nnz(A): 37, nnz(P): 0
------------------------------------------------------------------
 iter | pri res | dua res |   gap   |   obj   |  scale  | time (s)
------------------------------------------------------------------
     0| 1.90e+01  1.01e+00  5.26e+01 -2.61e+01  1.00e-01  1.73e-04
   125| 4.27e-05  3.97e-05  9.70e-06  8.12e-06  8.63e-01  1.15e-03
------------------------------------------------------------------
status:  solved
timings: total: 1.15e-03s = setup: 1.14e-04s + solve: 1.04e-03s
	 lin-sys: 7.68e-05s, cones: 8.74e-04s, accel: 9.10e-06s
------------------------------------------------------------------
objective = 0.000008
------------------------------------------------------------------

We can see that the basis of the moment matrix didn't increase:

moment_matrix(c4)
MomentMatrix with row/column basis:
 MonomialBasis([1, y, x])
And entries in a 3×3 SymMatrix{Float64}:
 0.9999988622359884   0.49999780589712706  0.49999780589711845
 0.49999780589712706  0.49992054531121133  7.5635370326707e-5
 0.49999780589711845  7.5635370326707e-5   0.4999205453112032

This is because the Newton polytope reduction determined that the Gram matrix entries corresponding to these extra degrees would be zero, so it reduced the problem back to the equivalent of maxdegree 3. Let's turn this off with newton_polytope = nothing:

function sos(solver, deg)
    model = SOSModel(solver)
    @variable(model, α)
    @objective(model, Max, α)
    @constraint(model, c, p >= α, domain = S, maxdegree = deg, newton_polytope = nothing)
    optimize!(model)
    return model
end
dual_model4 = sos(dual_scs, 4)
------------------------------------------------------------------
	       SCS v3.2.3 - Splitting Conic Solver
	(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem:  variables n: 15, constraints m: 40
cones: 	  z: primal zero / dual free vars: 1
	  s: psd vars: 39, ssize: 4
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
	  alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
	  max_iters: 100000, normalize: 1, rho_x: 1.00e-06
	  acceleration_lookback: 10, acceleration_interval: 10
lin-sys:  sparse-direct-amd-qdldl
	  nnz(A): 52, nnz(P): 0
------------------------------------------------------------------
 iter | pri res | dua res |   gap   |   obj   |  scale  | time (s)
------------------------------------------------------------------
     0| 1.06e+01  4.30e-01  3.19e+01 -1.57e+01  1.00e-01  2.46e-04
   125| 2.44e-04  2.14e-05  1.20e-05 -4.90e-05  1.00e-01  1.90e-03
------------------------------------------------------------------
status:  solved
timings: total: 1.91e-03s = setup: 1.14e-04s + solve: 1.79e-03s
	 lin-sys: 1.14e-04s, cones: 1.56e-03s, accel: 2.81e-05s
------------------------------------------------------------------
objective = -0.000049
------------------------------------------------------------------

We see that the lower bound is still 0:

solution_summary(dual_model4)
* Solver : Dual model with SCS attached

* Status
  Result count       : 1
  Termination status : OPTIMAL
  Message from the solver:
  "solved"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : FEASIBLE_POINT
  Objective value    : -4.29823e-05
  Dual objective value : -5.49868e-05

* Work counters
  Solve time (sec)   : 1.90607e-03

Let's now look at which solution we can extract from the moment matrix:

dual_ν4 = moment_matrix(dual_model4[:c])
MomentMatrix with row/column basis:
 MonomialBasis([1, y, x, y^2, x*y, x^2])
And entries in a 6×6 SymMatrix{Float64}:
  1.000008464979478        0.5001118271563897      …   0.5003336181951981
  0.5001118271563897       0.5003460975047838         -8.014565848110562e-5
  0.5001118271563981      -0.00012664913632054315      0.500446375522822
  0.5003336181951908       0.5004463755228099          1.952914136044151
 -0.00012729223307681071  -9.194362986632279e-5       -1.8561449853461536
  0.5003336181951981      -8.014565848110562e-5    …   8.524707147813995

Looking at the singular values, 4 seems to be a reasonable rank:

using LinearAlgebra
svdvals(Matrix(dual_ν4.Q))
6-element Vector{Float64}:
 11.292137082083762
  6.61278028163025
  1.5788661771503862
  1.0597600660608346
  0.4594854767807916
  2.881579200339836e-17

The solution we extract is $(0.5, 0.5)$, which is the solution found by Ipopt:

atomic_measure(dual_ν4, FixedRank(4))
Atomic measure on the variables x, y with 1 atoms:
 at [0.5000778765242948, 0.5001018243812642] with weight 1.607942233266083

This process is numerically quite sensitive, so let's also try to solve it without dualization:

model4 = sos(scs, 4)
------------------------------------------------------------------
	       SCS v3.2.3 - Splitting Conic Solver
	(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem:  variables n: 40, constraints m: 54
cones: 	  z: primal zero / dual free vars: 15
	  s: psd vars: 39, ssize: 4
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
	  alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
	  max_iters: 100000, normalize: 1, rho_x: 1.00e-06
	  acceleration_lookback: 10, acceleration_interval: 10
lin-sys:  sparse-direct-amd-qdldl
	  nnz(A): 91, nnz(P): 0
------------------------------------------------------------------
 iter | pri res | dua res |   gap   |   obj   |  scale  | time (s)
------------------------------------------------------------------
     0| 2.21e+01  1.35e+00  3.74e+01 -1.86e+01  1.00e-01  2.85e-04
   175| 7.26e-05  8.28e-06  2.10e-05 -1.57e-05  1.00e-01  3.10e-03
------------------------------------------------------------------
status:  solved
timings: total: 3.10e-03s = setup: 1.47e-04s + solve: 2.95e-03s
	 lin-sys: 3.51e-04s, cones: 2.36e-03s, accel: 7.02e-05s
------------------------------------------------------------------
objective = -0.000016
------------------------------------------------------------------

We see that the lower bound is again 0:

solution_summary(model4)
* Solver : SCS

* Status
  Result count       : 1
  Termination status : OPTIMAL
  Message from the solver:
  "solved"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : FEASIBLE_POINT
  Objective value    : 2.61826e-05
  Dual objective value : 5.20758e-06

* Work counters
  Solve time (sec)   : 3.09981e-03

The moment matrix is the following:

ν4 = moment_matrix(model4[:c])
MomentMatrix with row/column basis:
 MonomialBasis([1, y, x, y^2, x*y, x^2])
And entries in a 6×6 SymMatrix{Float64}:
 1.000000718147661      0.49999933411667724    …  0.4999833790296048
 0.49999933411667724    0.4999834837557063        8.47685913694486e-6
 0.4999993341166777     1.4825415899917967e-5     0.49997241665101183
 0.49998337902960416    0.4999724166510108        0.2622800675747075
 1.5473299855504304e-5  7.379232991922922e-6      0.254485306969413
 0.4999833790296048     8.47685913694486e-6    …  0.7968257005648283

Looking at the singular values, 3 seems to be a reasonable rank:

svdvals(Matrix(ν4.Q))
6-element Vector{Float64}:
 2.198952582631606
 1.0175199095063332
 0.6030514893216565
 0.019380459836314458
 0.016994381823595424
 3.318801439569113e-17

This time, the dual variable is atomic as it corresponds to the moments of the measure $0.5 \delta(x-1, y) + 0.5 \delta(x, y-1)$ where $\delta(x, y)$ is the Dirac measure centered at $(0, 0)$. Therefore the program provides both a certificate that $0$ is a lower bound and a certificate that it is also an upper bound since it is attained at the global minimizers $(1, 0)$ and $(0, 1)$.

atomic_measure(ν4, FixedRank(3))
Atomic measure on the variables x, y with 2 atoms:
 at [0.9946508364409633, 0.005346125382847034] with weight 0.5511753483415659
 at [-0.011834484693320546, 1.0118314465171312] with weight 0.5285747070035308
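
As a sanity check (an added illustration, not part of the original example), we can compare the first row of the moment matrix with the moments of the ideal measure $0.5 \delta(x-1, y) + 0.5 \delta(x, y-1)$; this sketch assumes the monomial basis ordering displayed above:

# Hedged check: moments of 0.5δ_(1,0) + 0.5δ_(0,1) on the displayed basis.
basis = [x^0, y, x, y^2, x*y, x^2]
ideal_moments = [0.5 * m(x => 1, y => 0) + 0.5 * m(x => 0, y => 1) for m in basis]
maximum(abs.(Matrix(ν4.Q)[1, :] .- ideal_moments))  # should be small (solver tolerance)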

A deeper look into atom extraction

The moment matrix is transformed into a system of polynomial equations whose solutions give the atoms. This transformation uses the singular value decomposition of the moment matrix and discards the equations corresponding to the lowest singular values. When this system of equations has an infinite number of solutions, atomic_measure concludes that the measure is not atomic. For instance, with maxdegree = 3, we obtain the system $x + y = 1$ which contains a whole line of solutions. This explains why atomic_measure returned nothing.

ν3 = moment_matrix(c3)
SumOfSquares.MultivariateMoments.compute_support!(ν3, LeadingRelativeRankTol(1e-3))
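
For illustration (an added sketch, assuming compute_support! succeeded above), we can display the computed support, which should be numerically close to the single equation $x + y = 1$ and therefore contains infinitely many points:

ν3.support  # expected to be (close to) the line x + y = 1, hence no atoms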

With maxdegree = 4, we obtain the system

\[\begin{aligned} x + y & = 1\\ y^2 & = y\\ xy & = 0\\ -y + y^2 - xy & = 0\\ y^2 - 2y + 1 & = x^2 \end{aligned}\]

ν4 = moment_matrix(model4[:c])
SumOfSquares.MultivariateMoments.compute_support!(ν4, FixedRank(3))

This system can be reduced to the equivalent system

\[\begin{aligned} x + y & = 1\\ y^2 & = y \end{aligned}\]

which has the solutions $(0, 1)$ and $(1, 0)$.

SemialgebraicSets.compute_gröbner_basis!(ideal(ν4.support))
ν4.support
Algebraic Set defined by 2 equalities
 -0.9999969618238103 + 0.9999999999999999*y + x = 0
 0.005409377779388048 - 1.017177571899978*y + y^2 = 0

The solutions of this system then give the minimizers:

collect(ν4.support)
2-element Vector{Vector{Float64}}:
 [0.9946508364409633, 0.005346125382847034]
 [-0.011834484693320546, 1.0118314465171312]

The function atomic_measure then reuses the matrix of moments to find the weights $1/2$, $1/2$ corresponding to the Dirac measures centered at $(0, 1)$ and $(1, 0)$ respectively. This details how the function obtained the result $0.5 \delta(x-1, y) + 0.5 \delta(x, y-1)$ given in the previous section.
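
To illustrate this last step (a hedged sketch, not the exact internal routine of atomic_measure), the weights can be recovered by matching the moments of the weighted Dirac measures at the extracted atoms with the computed moments, for instance by least squares; the monomial basis ordering displayed above is assumed:

# Hedged sketch: recover the weights by a least-squares fit of the moments.
atoms = collect(ν4.support)          # the two minimizers extracted above
basis = [x^0, y, x, y^2, x*y, x^2]   # monomial basis of the moment matrix
A = [m(x => a[1], y => a[2]) for m in basis, a in atoms]
b = Matrix(ν4.Q)[1, :]               # moments of the basis monomials
A \ b                                # weights, approximately 1/2 and 1/2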

HomotopyContinuation

As discussed in the previous section, the atom extraction relies on the solution of a system of algebraic equations. The atomic_measure function takes an optional algebraic_solver argument that is used to solve this system of equations. If no solver is provided, the default solver of SemialgebraicSets.jl is used, which currently computes the Gröbner basis, then the multiplication matrices, and then the Schur decomposition of a random combination of these matrices. Since the system of equations is obtained from a numerical solution and is represented with floating-point coefficients, homotopy continuation is recommended, as it is more numerically robust than Gröbner basis computation. The following uses homotopy continuation to solve the system of equations.

using HomotopyContinuation
algebraic_solver = SemialgebraicSetsHCSolver(; excess_residual_tol = 1e-1, real_tol = 1e-1, compile = false)
atomic_measure(ν4, FixedRank(3), Echelon(), algebraic_solver)
Atomic measure on the variables x, y with 2 atoms:
 at [1.0055766640161028, 0.007564448391363832] with weight 0.539289603920206
 at [-0.01902425990967022, 1.0016672434605225] with weight 0.5397862305287066

As the system has 3 equations for 2 variables and the coefficients of the equations are to be treated with tolerance since they originate from the solution of an SDP, we need to set excess_residual_tol and real_tol to a high tolerance; otherwise, HomotopyContinuation would consider that there is no solution. Indeed, as the system is overdetermined (it has more equations than variables), HomotopyContinuation expects to find excess solutions, hence it filters out excess solutions among the solutions found. It determines which solutions are in excess by comparing the infinity norm of the residuals of the equations at the solution with excess_residual_tol. It also filters out solutions for which the absolute value of the imaginary part of one of the entries is larger than real_tol, and strips out the imaginary part of the remaining solutions. The raw solutions obtained by HomotopyContinuation can be obtained as follows:

F = HomotopyContinuation.System(ν4.support)
res = HomotopyContinuation.solve(F, algebraic_solver.options...)
path_results(res)
4-element Vector{HomotopyContinuation.PathResult}:
 PathResult:
 • return_code → :excess_solution
 • solution → ComplexF64[1.0154616173907127 + 0.006608944950549774im, 0.01727921405522395 - 0.008079406477792615im]
 • accuracy → 0.032777
 • residual → 0.014464
 • condition_jacobian → 12.492
 • steps → 30 / 0
 • extended_precision → false
 • path_number → 1

 PathResult:
 • return_code → :excess_solution
 • solution → ComplexF64[11.934682123672445 - 19.15030051735899im, 12.006135573261714 - 19.101149082598404im]
 • accuracy → 44.603
 • residual → 17.175
 • condition_jacobian → 95.219
 • steps → 33 / 0
 • extended_precision → false
 • path_number → 2

 PathResult:
 • return_code → :excess_solution
 • solution → ComplexF64[0.021256214175370946 - 0.007025482213112428im, 1.0165695629966875 + 0.00899109986992318im]
 • accuracy → 0.03788
 • residual → 0.034346
 • condition_jacobian → 9.8452
 • steps → 29 / 0
 • extended_precision → false
 • path_number → 3

 PathResult:
 • return_code → :excess_solution
 • solution → ComplexF64[0.7158085092400011 + 0.39127281692156707im, 0.7901138700585448 + 0.4390043953171966im]
 • accuracy → 0.97228
 • residual → 0.52768
 • condition_jacobian → 3.4279
 • steps → 20 / 0
 • extended_precision → false
 • path_number → 4

The residuals printed above show why the tolerance 1e-1 allows us to separate the 2 actual solutions from the 2 excess solutions.
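
For illustration (a hedged sketch using HomotopyContinuation's residual and solution accessors), this filtering can be reproduced by keeping only the paths whose residual is below the tolerance and taking the real part of their solutions:

# Hedged sketch: keep only the paths with small residual (the genuine solutions).
good = [r for r in path_results(res) if HomotopyContinuation.residual(r) < 1e-1]
[real.(HomotopyContinuation.solution(r)) for r in good]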


This page was generated using Literate.jl.