Operations

Convex.jl currently supports the following functions. These functions may be composed according to the DCP composition rules to form new convex, concave, or affine expressions. Convex.jl transforms each problem into an equivalent cone program in order to pass the problem to a specialized solver. Depending on the types of functions used in the problem, the conic constraints may include linear, second-order, exponential, or semidefinite constraints, as well as any binary or integer constraints placed on the variables. Below, we list each function available in Convex.jl organized by the (most complex) type of cone used to represent that function, and indicate which solvers may be used to solve problems with those cones. Problems mixing many different conic constraints can be solved by any solver that supports every kind of cone present in the problem.

In the notes column of the tables below, IC denotes an implicit constraint imposed on the arguments of the function, and PR denotes a parameter restriction that the arguments must obey. (Convex.jl automatically imposes ICs; the user must ensure PRs are satisfied.) Elementwise means that the function operates elementwise on vector arguments, returning a vector of the same size.
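
As a quick illustration of how these atoms compose, here is a minimal sketch. The variable size, data, and choice of SCS are our own for illustration, and the exact `solve!` signature may differ across Convex.jl versions:

```julia
using Convex, SCS

x = Variable(4)                          # a 4-element variable
objective = maximum(x) + norm(x - 1, 1)  # convex atoms composed per the DCP rules
problem = minimize(objective, [x >= 0, sum(x) == 10])
solve!(problem, SCS.Optimizer)           # Convex.jl passes an equivalent cone program to SCS
problem.optval                           # optimal value after solving
```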

Linear Program Representable Functions

An optimization problem using only these functions can be solved by any LP solver.

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `x + y` or `x .+ y` | addition | affine | increasing | none |
| `x - y` or `x .- y` | subtraction | affine | increasing in \(x\), decreasing in \(y\) | none |
| `x * y` | multiplication | affine | increasing if the constant argument is \(\ge 0\), decreasing if it is \(\le 0\), not monotonic otherwise | PR: one argument is constant |
| `x / y` | division | affine | increasing | PR: \(y\) is a scalar constant |
| `dot(*)(x, y)` | elementwise multiplication | affine | increasing | PR: one argument is constant |
| `dot(/)(x, y)` | elementwise division | affine | increasing | PR: one argument is constant |
| `x[1:4, 2:3]` | indexing and slicing | affine | increasing | none |
| `diag(x, k)` | \(k\)-th diagonal of a matrix | affine | increasing | none |
| `diagm(x)` | construct diagonal matrix | affine | increasing | PR: \(x\) is a vector |
| `x'` | transpose | affine | increasing | none |
| `vec(x)` | vector representation | affine | increasing | none |
| `dot(x, y)` | \(\sum_i x_i y_i\) | affine | increasing | PR: one argument is constant |
| `kron(x, y)` | Kronecker product | affine | increasing | PR: one argument is constant |
| `vecdot(x, y)` | `dot(vec(x), vec(y))` | affine | increasing | PR: one argument is constant |
| `sum(x)` | \(\sum_{ij} x_{ij}\) | affine | increasing | none |
| `sum(x, k)` | sum elements across dimension \(k\) | affine | increasing | none |
| `sumlargest(x, k)` | sum of the \(k\) largest elements of \(x\) | convex | increasing | none |
| `sumsmallest(x, k)` | sum of the \(k\) smallest elements of \(x\) | concave | increasing | none |
| `dotsort(a, b)` | `dot(sort(a), sort(b))` | convex | increasing | PR: one argument is constant |
| `reshape(x, m, n)` | reshape into an \(m \times n\) matrix | affine | increasing | none |
| `minimum(x)` | \(\min(x)\) | concave | increasing | none |
| `maximum(x)` | \(\max(x)\) | convex | increasing | none |
| `hcat(x, y)` or `vcat(x, y)` (`[x y]` or `[x; y]`) | stacking | affine | increasing | none |
| `tr(x)` | \(\mathrm{tr}(x)\) | affine | increasing | none |
| `partialtrace(x, sys, dims)` | partial trace | affine | increasing | none |
| `partialtranspose(x, sys, dims)` | partial transpose | affine | increasing | none |
| `conv(h, x)` | convolution: for \(h \in \mathbb{R}^m\) and \(x \in \mathbb{R}^n\), \(h * x \in \mathbb{R}^{m+n-1}\), with entry \(i\) given by \(\sum_{j=1}^m h_j x_{i-j}\) | affine | increasing if \(h \ge 0\), decreasing if \(h \le 0\), not monotonic otherwise | PR: \(h\) is constant |
| `min(x, y)` | \(\min(x, y)\) | concave | increasing | none |
| `max(x, y)` | \(\max(x, y)\) | convex | increasing | none |
| `pos(x)` | \(\max(x, 0)\) | convex | increasing | none |
| `neg(x)` | \(\max(-x, 0)\) | convex | decreasing | none |
| `invpos(x)` | \(1/x\) | convex | decreasing | IC: \(x > 0\) |
| `abs(x)` | \(\lvert x \rvert\) | convex | increasing on \(x \ge 0\), decreasing on \(x \le 0\) | none |
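
For instance, a least-absolute-deviations fit uses only atoms from the table above (`abs`, `sum`, and matrix multiplication), so any LP solver can handle it. A minimal sketch, where the data `A` and `b` and the choice of GLPK are our own:

```julia
using Convex, GLPK

A, b = randn(20, 3), randn(20)           # hypothetical regression data
x = Variable(3)
problem = minimize(sum(abs(A * x - b)))  # LP-representable objective
solve!(problem, GLPK.Optimizer)
```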

Second-Order Cone Representable Functions

An optimization problem using these functions can be solved by any SOCP solver (including ECOS, SCS, Mosek, Gurobi, and CPLEX). Of course, if an optimization problem has both LP and SOCP representable functions, then any solver that can solve both LPs and SOCPs can solve the problem.

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `norm(x, p)` | \(\left(\sum_i \lvert x_i \rvert^p\right)^{1/p}\) | convex | increasing on \(x \ge 0\), decreasing on \(x \le 0\) | PR: \(p \ge 1\) |
| `vecnorm(x, p)` | \(\left(\sum_{ij} \lvert x_{ij} \rvert^p\right)^{1/p}\) | convex | increasing on \(x \ge 0\), decreasing on \(x \le 0\) | PR: \(p \ge 1\) |
| `quadform(x, P)` | \(x^T P x\) | convex in \(x\), affine in \(P\) | increasing on \(x \ge 0\), decreasing on \(x \le 0\), increasing in \(P\) | PR: either \(x\) or \(P\) must be constant; if \(x\) is not constant, then \(P\) must be symmetric and positive semidefinite |
| `quadoverlin(x, y)` | \(x^T x / y\) | convex | increasing on \(x \ge 0\), decreasing on \(x \le 0\), decreasing in \(y\) | IC: \(y > 0\) |
| `sumsquares(x)` | \(\sum_i x_i^2\) | convex | increasing on \(x \ge 0\), decreasing on \(x \le 0\) | none |
| `sqrt(x)` | \(\sqrt{x}\) | concave | increasing | IC: \(x > 0\) |
| `square(x)` or `x^2` | \(x^2\) | convex | increasing on \(x \ge 0\), decreasing on \(x \le 0\) | PR: \(x\) is scalar |
| `dot(^)(x, 2)` | `x.^2` (elementwise square) | convex | increasing on \(x \ge 0\), decreasing on \(x \le 0\) | elementwise |
| `geomean(x, y)` | \(\sqrt{xy}\) | concave | increasing | IC: \(x \ge 0\), \(y \ge 0\) |
| `huber(x, M=1)` | \(\begin{cases} x^2 & \lvert x \rvert \le M \\ 2M\lvert x \rvert - M^2 & \lvert x \rvert > M \end{cases}\) | convex | increasing on \(x \ge 0\), decreasing on \(x \le 0\) | PR: \(M \ge 1\) |
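
For example, a norm-constrained least-squares problem combines `sumsquares` with a `norm` constraint, both SOCP-representable. A sketch with hypothetical data, using ECOS as one possible SOCP solver:

```julia
using Convex, ECOS

A, b = randn(30, 5), randn(30)   # hypothetical data
x = Variable(5)
problem = minimize(sumsquares(A * x - b), [norm(x, 2) <= 1])
solve!(problem, ECOS.Optimizer)
```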

Exponential Cone Representable Functions

An optimization problem using these functions can be solved by any exponential cone solver (SCS).

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `logsumexp(x)` | \(\log\left(\sum_i \exp(x_i)\right)\) | convex | increasing | none |
| `exp(x)` | \(\exp(x)\) | convex | increasing | none |
| `log(x)` | \(\log(x)\) | concave | increasing | IC: \(x > 0\) |
| `entropy(x)` | \(\sum_{ij} -x_{ij} \log(x_{ij})\) | concave | not monotonic | IC: \(x > 0\) |
| `logisticloss(x)` | \(\sum_i \log(1 + \exp(x_i))\) | convex | increasing | none |
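
For example, regularized logistic regression combines `logisticloss` with the SOCP-representable `sumsquares`, so a solver supporting both kinds of cones (such as SCS) is needed. A sketch, where the features `X`, the ±1 labels `y`, and the regularization weight are our own choices:

```julia
using Convex, SCS

X, y = randn(50, 4), rand([-1.0, 1.0], 50)   # hypothetical features and ±1 labels
w = Variable(4)
loss = logisticloss(-y .* (X * w))           # exponential-cone representable
problem = minimize(loss + 0.1 * sumsquares(w))
solve!(problem, SCS.Optimizer)
```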

Semidefinite Program Representable Functions

An optimization problem using these functions can be solved by any SDP solver (including SCS and Mosek).

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `nuclearnorm(x)` | sum of the singular values of \(x\) | convex | not monotonic | none |
| `operatornorm(x)` | maximum singular value of \(x\) | convex | not monotonic | none |
| `lambdamax(x)` | maximum eigenvalue of \(x\) | convex | not monotonic | none |
| `lambdamin(x)` | minimum eigenvalue of \(x\) | concave | not monotonic | none |
| `matrixfrac(x, P)` | \(x^T P^{-1} x\) | convex | not monotonic | IC: \(P\) is positive semidefinite |
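
For example, minimizing `nuclearnorm` is a standard convex surrogate for low-rank recovery and is SDP-representable. A minimal sketch, where the matrix `M` and the handful of "observed" entries are hypothetical:

```julia
using Convex, SCS

M = randn(5, 5)                  # hypothetical data matrix
X = Variable(5, 5)
constraints = [X[1, 1] == M[1, 1], X[2, 3] == M[2, 3]]  # a few observed entries
problem = minimize(nuclearnorm(X), constraints)
solve!(problem, SCS.Optimizer)
```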

Exponential + SDP Representable Functions

An optimization problem using these functions can be solved by any solver that supports exponential constraints and semidefinite constraints simultaneously (SCS).

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `logdet(x)` | log of the determinant of \(x\) | concave | increasing | IC: \(x\) is positive semidefinite |
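
For example, maximizing `logdet` of a semidefinite variable under a trace bound uses the exponential and semidefinite cones at once, so SCS can solve it. A minimal sketch (`tr` comes from the standard library's LinearAlgebra; the trace bound is our own choice):

```julia
using Convex, SCS
using LinearAlgebra              # for tr

X = Semidefinite(3)              # symmetric positive semidefinite variable
problem = maximize(logdet(X), [tr(X) <= 3])
solve!(problem, SCS.Optimizer)
```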

Promotions

When an atom or constraint is applied to a scalar and a higher-dimensional variable, the scalar is promoted to the shape of the variable. For example, max(x, 0) gives an expression with the same shape as x whose elements are the maximum of the corresponding element of x and 0.
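
A small sketch of this promotion:

```julia
using Convex

x = Variable(3)
e = max(x, 0)   # 0 is promoted to the shape of x; equivalent to pos(x)
size(e)         # (3, 1): same shape as x
```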