# Sensitivity Analysis of SVM

This notebook illustrates sensitivity analysis of data points in a Support Vector Machine (inspired by @matbesancon's SimpleSVMs).

For reference, Section 10.1 of https://online.stat.psu.edu/stat508/book/export/html/792 gives an intuitive explanation of what it means to have a sensitive hyperplane or data point. The general form of the SVM training problem is given below (without regularization):

$$\begin{split} \begin{array} {ll} \mbox{minimize} & \sum_{i=1}^{N} \xi_{i} \\ \mbox{s.t.} & \xi_{i} \ge 0, \quad i = 1, \ldots, N \\ & y_{i} (w^T X_{i} + b) \ge 1 - \xi_{i} \\ \end{array} \end{split}$$

where

• X, y are the N data points and their labels,
• ξ are the soft-margin slack variables (the loss).
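To make the objective concrete, here is a minimal pure-Julia sketch (with made-up `w`, `b`, and points, not the notebook's data) of the slack values ξᵢ = max(0, 1 − yᵢ(wᵀXᵢ + b)):

```julia
# Made-up separating plane and three labeled points (illustration only).
w, b = [1.0, -1.0], 0.5
X = [2.0 0.0; 0.0 2.0; 0.5 0.0]  # one point per row
y = [1.0, -1.0, -1.0]

# Slack (hinge) values: xi_i = max(0, 1 - y_i * (w' * X_i + b)).
xi = max.(0.0, 1 .- y .* (X * w .+ b))
# The first two points sit outside the margin (zero slack); the third is
# misclassified, so it contributes positive slack to the objective sum(xi).
```

A point satisfying its constraint with margin gets ξᵢ = 0, so only margin violations contribute to the objective.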

## Define and solve the SVM

Import the libraries.

```julia
using SCS, DiffOpt, LinearAlgebra, JuMP
import Random, Plots
```

Construct separable, non-trivial data points.

```julia
N = 100
D = 2
Random.seed!(62)
X = vcat(randn(N ÷ 2, D), randn(N ÷ 2, D) .+ [4.0, 1.5]')
y = append!(ones(N ÷ 2), -ones(N ÷ 2));
```

Let's initialize a special model whose solution we can differentiate to obtain sensitivities.

```julia
model = Model(() -> diff_optimizer(SCS.Optimizer))
```

```
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: SCS
```

```julia
@variable(model, l[1:N])
@variable(model, w[1:D])
@variable(model, b);

@constraint(
    model,
    1.0 * l ∈ MOI.Nonnegatives(N),
)
@constraint(
    model,
    cons,
    y .* (X * w .+ b) + l .- 1 ∈ MOI.Nonnegatives(N),
);
```

Define the objective and solve.

```julia
@objective(
    model,
    Min,
    sum(l),
)

optimize!(model)  # solve
```

We can visualize the separating hyperplane.

```julia
loss = objective_value(model)
wv = value.(w)
bv = value(b)

svm_x = [0.0, 3.0]
svm_y = (-bv .- wv[1] .* svm_x) ./ wv[2]

p = Plots.scatter(X[:, 1], X[:, 2], color = [yi > 0 ? :red : :blue for yi in y], label = "")
Plots.plot!(p, svm_x, svm_y, label = "loss = $(round(loss, digits=2))", width = 3)
Plots.yaxis!(p, (-2, 4.5))
```
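The separating line comes from the identity w₁x₁ + w₂x₂ + b = 0, i.e. x₂ = (−b − w₁x₁)/w₂. A quick standalone check of that identity with made-up coefficients (not the fitted values from the model):

```julia
# Made-up hyperplane coefficients (illustration only).
wv = [2.0, 4.0]
bv = -2.0

xs = [0.0, 3.0]
ys = (-bv .- wv[1] .* xs) ./ wv[2]  # x2 = (-b - w1 * x1) / w2

# Every (x1, x2) pair on the line satisfies w1*x1 + w2*x2 + b == 0.
residuals = wv[1] .* xs .+ wv[2] .* ys .+ bv
```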

## Experiment 2: Gradient of the hyperplane with respect to the data point coordinates

Similar to the previous example, we construct perturbations of the data point coordinates X.

```julia
∇ = Float64[]
dX = zeros(N, D);
```

```julia
# begin differentiating
for Xi in 1:N
    dX[Xi, :] = ones(D)  # set a unit perturbation on point Xi

    for i in 1:D
        MOI.set(
            model,
            DiffOpt.ForwardInConstraint(),
            cons,
            MOI.Utilities.vectorize(dX[:, i] .* MOI.SingleVariable(w[i])),
        )
    end

    DiffOpt.forward(model)

    dw = MOI.get.(
        model,
        DiffOpt.ForwardOutVariablePrimal(),
        w,
    )
    db = MOI.get(
        model,
        DiffOpt.ForwardOutVariablePrimal(),
        b,
    )
    push!(∇, norm(dw) + norm(db))

    dX[Xi, :] = zeros(D)  # reset the perturbation
end
normalize!(∇);
```

We can visualize point sensitivity with respect to the separating hyperplane. Note that the gradients are normalized.
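`normalize!` (from `LinearAlgebra`) rescales the gradient vector in place to unit Euclidean norm, so the marker sizes below only reflect relative sensitivity. A minimal illustration with a toy vector:

```julia
using LinearAlgebra

g = [3.0, 4.0]   # toy gradient magnitudes (illustration only)
normalize!(g)    # in-place division by norm(g) == 5.0
```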

```julia
p3 = Plots.scatter(
    X[:, 1], X[:, 2],
    color = [yi > 0 ? :red : :blue for yi in y], label = "",
    markersize = ∇ * 20,
)
Plots.yaxis!(p3, (-2, 4.5))
Plots.plot!(p3, svm_x, svm_y, label = "loss = $(round(loss, digits=2))", width = 3)
```