# Supported Operations

Convex.jl currently supports the following functions. These functions may be composed according to the DCP composition rules to form new convex, concave, or affine expressions.
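For example, here is a small sketch of composition and vexity checking, using only exported Convex.jl names:

```julia
using Convex

x = Variable(3)

# max(x, 0) is convex and increasing, and sum and nonnegative scaling
# preserve convexity, so the whole expression is convex under the DCP rules:
expr = sum(max(x, 0)) + 2 * abs(x[1])
vexity(expr)  # Convex.ConvexVexity()
```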

Convex.jl transforms each problem into an equivalent conic program in order to pass the problem to a specialized solver. Depending on the types of functions used in the problem, the conic constraints may include linear, second-order, exponential, or semidefinite constraints, as well as any binary or integer constraints placed on the variables.

Below, we list each function available in Convex.jl organized by the (most complex) type of cone used to represent that function, and indicate which solvers may be used to solve problems with those cones. Problems mixing many different conic constraints can be solved by any solver that supports every kind of cone present in the problem.

In the notes column in the tables below, we denote implicit constraints imposed on the arguments to the function by IC, and parameter restrictions that the arguments must obey by PR. Convex.jl will automatically impose ICs; the user must make sure to satisfy PRs.
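For instance, `invpos(x)` adds its IC automatically, while `norm(x, p)` has a PR that the caller must respect. A minimal sketch (assuming the SCS solver is installed):

```julia
using Convex, SCS

x = Variable()

# IC: invpos imposes x > 0 for us; we only state x <= 2
problem = minimize(invpos(x), [x <= 2])
solve!(problem, SCS.Optimizer)
evaluate(x)  # ≈ 2.0, since 1/x is decreasing on x > 0

# PR: p >= 1 is the caller's responsibility; norm(x, 0.5) is not a
# convex function and will be rejected
```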

*Elementwise* means that the function operates elementwise on vector arguments, returning a vector of the same size.

## Linear Program Representable Functions

An optimization problem using only these functions can be solved by any LP solver.

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `x+y` or `x.+y` | addition | affine | increasing | |
| `x-y` or `x.-y` | subtraction | affine | increasing in $x$; decreasing in $y$ | |
| `x*y` | multiplication | affine | increasing if constant term $\ge 0$; decreasing if constant term $\le 0$; not monotonic otherwise | PR: one argument is constant |
| `x/y` | division | affine | increasing | PR: $y$ is a scalar constant |
| `x .* y` | elementwise multiplication | affine | increasing | PR: one argument is constant |
| `x ./ y` | elementwise division | affine | increasing | PR: one argument is constant |
| `x[1:4, 2:3]` | indexing and slicing | affine | increasing | |
| `diag(x, k)` | $k$-th diagonal of a matrix | affine | increasing | |
| `diagm(x)` | construct diagonal matrix | affine | increasing | PR: $x$ is a vector |
| `x'` | transpose | affine | increasing | |
| `vec(x)` | vector representation | affine | increasing | |
| `dot(x, y)` | $\sum_i x_i y_i$ | affine | increasing | PR: one argument is constant |
| `kron(x, y)` | Kronecker product | affine | increasing | PR: one argument is constant |
| `sum(x)` | $\sum_{ij} x_{ij}$ | affine | increasing | |
| `sum(x, k)` | sum elements across dimension $k$ | affine | increasing | |
| `sumlargest(x, k)` | sum of $k$ largest elements of $x$ | convex | increasing | |
| `sumsmallest(x, k)` | sum of $k$ smallest elements of $x$ | concave | increasing | |
| `dotsort(a, b)` | `dot(sort(a), sort(b))` | convex | increasing | PR: one argument is constant |
| `reshape(x, m, n)` | reshape into $m \times n$ | affine | increasing | |
| `minimum(x)` | $\min(x)$ | concave | increasing | |
| `maximum(x)` | $\max(x)$ | convex | increasing | |
| `[x y]`, `[x; y]`, `hcat(x, y)`, or `vcat(x, y)` | stacking | affine | increasing | |
| `tr(x)` | trace, $\mathrm{tr}(x)$ | affine | increasing | |
| `partialtrace(x, sys, dims)` | partial trace | affine | increasing | |
| `partialtranspose(x, sys, dims)` | partial transpose | affine | increasing | |
| `conv(h, x)` | $h \in \mathbb{R}^m$, $x \in \mathbb{R}^n$, $h \star x \in \mathbb{R}^{m+n-1}$; entry $i$ is $\sum_{j=1}^m h_j x_{i-j+1}$, with $x_k = 0$ for $k$ out of bounds | affine | increasing if $h \ge 0$; decreasing if $h \le 0$; not monotonic otherwise | PR: $h$ is constant |
| `min(x, y)` | $\min(x, y)$ | concave | increasing | |
| `max(x, y)` | $\max(x, y)$ | convex | increasing | |
| `pos(x)` | $\max(x, 0)$ | convex | increasing | |
| `neg(x)` | $\max(-x, 0)$ | convex | decreasing | |
| `invpos(x)` | $1/x$ | convex | decreasing | IC: $x > 0$ |
| `abs(x)` | $\lvert x \rvert$ | convex | increasing on $x \ge 0$; decreasing on $x \le 0$ | |
| `opnorm(x, 1)` | maximum absolute column sum, $\max_{1 \le j \le n} \sum_{i=1}^m \lvert x_{ij} \rvert$ | convex | increasing on $x \ge 0$; decreasing on $x \le 0$ | |
| `opnorm(x, Inf)` | maximum absolute row sum, $\max_{1 \le i \le m} \sum_{j=1}^n \lvert x_{ij} \rvert$ | convex | increasing on $x \ge 0$; decreasing on $x \le 0$ | |
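For example, a minimal LP sketch using the atoms above (SCS is shown here, but any LP-capable solver works):

```julia
using Convex, SCS

x = Variable(4)

# the sum of the two largest entries of x is LP-representable
problem = minimize(sumlargest(x, 2), [x >= 1, sum(x) <= 10])
solve!(problem, SCS.Optimizer)
problem.optval  # ≈ 2.0, attained with every entry at its lower bound 1
```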

## Second-Order Cone Representable Functions

An optimization problem using these functions can be solved by any SOCP solver (including ECOS, SCS, Mosek, Gurobi, and CPLEX). Of course, if a problem contains both LP-representable and SOCP-representable functions, any solver that can solve both LPs and SOCPs can solve the problem.

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `norm(x, p)` | $(\sum_i \lvert x_i \rvert^p)^{1/p}$ | convex | increasing on $x \ge 0$; decreasing on $x \le 0$ | PR: $p \ge 1$ |
| `quadform(x, P; assume_psd=false)` | $x^T P x$ | convex in $x$; affine in $P$ | increasing on $x \ge 0$; decreasing on $x \le 0$; increasing in $P$ | PR: either $x$ or $P$ must be constant; if $x$ is not constant, then $P$ must be symmetric and positive semidefinite. Pass `assume_psd=true` to skip checking whether $P$ is positive semidefinite. |
| `quadoverlin(x, y)` | $x^T x / y$ | convex | increasing on $x \ge 0$; decreasing on $x \le 0$; decreasing in $y$ | IC: $y > 0$ |
| `sumsquares(x)` | $\sum_i x_i^2$ | convex | increasing on $x \ge 0$; decreasing on $x \le 0$ | |
| `sqrt(x)` | $\sqrt{x}$ | concave | increasing | IC: $x > 0$ |
| `square(x)`, `x^2` | $x^2$ | convex | increasing on $x \ge 0$; decreasing on $x \le 0$ | PR: $x$ is scalar |
| `x .^ 2` | $x.^2$ | convex | increasing on $x \ge 0$; decreasing on $x \le 0$ | elementwise |
| `geomean(x, y)` | $\sqrt{xy}$ | concave | increasing | IC: $x \ge 0$, $y \ge 0$ |
| `huber(x, M=1)` | $\begin{cases} x^2 & \lvert x \rvert \leq M \\ 2M \lvert x \rvert - M^2 & \lvert x \rvert > M \end{cases}$ | convex | increasing on $x \ge 0$; decreasing on $x \le 0$ | PR: $M \ge 1$ |

Note that for `p = 1` and `p = Inf`, the function `norm(x, p)` is linear-program representable and does not require an SOCP solver. For a matrix `x`, `norm(x, p)` is defined as `norm(vec(x), p)`.
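For example, least squares is SOCP-representable through `norm`; a minimal sketch (assuming SCS is installed):

```julia
using Convex, LinearAlgebra, SCS

A, b = randn(5, 3), randn(5)
x = Variable(3)

# the 2-norm of an affine expression is second-order cone representable
problem = minimize(norm(A * x - b, 2))
solve!(problem, SCS.Optimizer)
evaluate(x)  # ≈ A \ b, the least-squares solution
```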

## Exponential Cone Representable Functions

An optimization problem using these functions can be solved by any exponential cone solver (SCS).

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `logsumexp(x)` | $\log(\sum_i \exp(x_i))$ | convex | increasing | |
| `exp(x)` | $\exp(x)$ | convex | increasing | |
| `log(x)` | $\log(x)$ | concave | increasing | IC: $x > 0$ |
| `entropy(x)` | $\sum_{ij} -x_{ij} \log(x_{ij})$ | concave | not monotonic | IC: $x > 0$ |
| `entropy_elementwise(x)` | $-x \log(x)$ | concave | not monotonic | IC: $x > 0$ |
| `logisticloss(x)` | $\sum_i \log(1 + \exp(x_i))$ | convex | increasing | |
| `relative_entropy(x, y)` | $\sum_i x_i \log(x_i / y_i)$ | convex | not monotonic | IC: $x > 0$, $y > 0$ |
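For example, entropy maximization over the probability simplex uses the exponential cone; a minimal sketch (assuming SCS is installed):

```julia
using Convex, SCS

x = Variable(4)

# entropy imposes its IC x > 0 automatically
problem = maximize(entropy(x), [sum(x) == 1])
solve!(problem, SCS.Optimizer)
evaluate(x)  # ≈ fill(0.25, 4); the uniform distribution maximizes entropy
```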

## Semidefinite Program Representable Functions

An optimization problem using these functions can be solved by any SDP solver (including SCS and Mosek).

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `nuclearnorm(x)` | sum of singular values of $x$ | convex | not monotonic | |
| `opnorm(x, 2)` (`operatornorm(x)`) | maximum singular value of $x$ | convex | not monotonic | |
| `eigmax(x)` | maximum eigenvalue of $x$ | convex | not monotonic | |
| `eigmin(x)` | minimum eigenvalue of $x$ | concave | not monotonic | |
| `rootdet(X)` | $n$-th root of the determinant of the $n \times n$ matrix $X$, that is, $\det(X)^{1/n}$ | concave | not monotonic | |
| `matrixfrac(x, P)` | $x^T P^{-1} x$ | convex | not monotonic | IC: $P$ is positive semidefinite |
| `sumlargesteigs(x, k)` | sum of top $k$ eigenvalues of $x$ | convex | not monotonic | IC: $x$ symmetric |
| `Convex.Constraint((T, A, B), GeometricMeanHypoConeSquare(t, size(T, 1)))` | $T \preceq A \#_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}$ | concave | increasing | IC: $A \succeq 0$, $B \succeq 0$, $t \in [0, 1]$ |
| `Convex.Constraint((T, A, B), GeometricMeanEpiConeSquare(t, size(T, 1)))` | $T \succeq A \#_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}$ | convex | not monotonic | IC: $A \succeq 0$, $B \succeq 0$, $t \in [-1, 0] \cup [1, 2]$ |
| `quantum_entropy(X)` | $-\mathrm{Tr}(X \log X)$ | concave | not monotonic | IC: $X \succeq 0$; uses natural log |
| `quantum_relative_entropy(A, B)` | $\mathrm{Tr}(A \log A - A \log B)$ | convex | not monotonic | IC: $A \succeq 0$, $B \succeq 0$; uses natural log |
| `trace_logm(X, C)` | $\mathrm{Tr}(C \log X)$ | concave in $X$ | not monotonic | IC: $X \succeq 0$, $C \succeq 0$, $C$ constant; uses natural log |
| `trace_mpower(A, t, C)` | $\mathrm{Tr}(C A^t)$ | concave in $A$ for $t \in [0, 1]$; convex for $t \in [-1, 0] \cup [1, 2]$ | not monotonic | IC: $A \succeq 0$, $C \succeq 0$, $C$ constant, $t \in [-1, 2]$ |
| `lieb_ando(A, B, K, t)` | $\mathrm{Tr}(K' A^{1-t} K B^t)$ | concave in $(A, B)$ for $t \in [0, 1]$; convex for $t \in [-1, 0] \cup [1, 2]$ | not monotonic | IC: $A \succeq 0$, $B \succeq 0$, $K$ constant, $t \in [-1, 2]$ |
| `Convex.Constraint((T, X, Y), RelativeEntropyEpiConeSquare(size(X, 1), m, k, e))` | $T \succeq e' X^{1/2} \log(X^{1/2} Y^{-1} X^{1/2}) X^{1/2} e$ | convex | not monotonic | IC: $e$ constant; uses natural log |
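For example, minimizing the largest eigenvalue of a trace-one positive semidefinite matrix is an SDP; a minimal sketch (assuming SCS is installed):

```julia
using Convex, LinearAlgebra, SCS

X = Semidefinite(3)  # 3x3 matrix variable constrained to be PSD

# eigmax is SDP-representable
problem = minimize(eigmax(X), [tr(X) == 1])
solve!(problem, SCS.Optimizer)
problem.optval  # ≈ 1/3, attained at X = I/3
```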

## Exponential + SDP Representable Functions

An optimization problem using these functions can be solved by any solver that supports exponential constraints and semidefinite constraints simultaneously (SCS).

| operation | description | vexity | slope | notes |
| --- | --- | --- | --- | --- |
| `logdet(x)` | log of the determinant of $x$ | concave | increasing | IC: $x$ is positive semidefinite |
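For example, a minimal sketch that exercises both cones through `logdet` (assuming SCS is installed):

```julia
using Convex, LinearAlgebra, SCS

X = Variable(2, 2)

# logdet imposes the IC that X is positive semidefinite
problem = maximize(logdet(X), [diag(X) <= 1])
solve!(problem, SCS.Optimizer)
problem.optval  # ≈ 0.0; by Hadamard's inequality the optimum is X = I
```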

## Promotions

When an atom or constraint is applied to a scalar and a higher-dimensional variable, the scalar is promoted to match the shape of the other argument. For example, `max(x, 0)` gives an expression with the same shape as `x` whose elements are the maximum of the corresponding element of `x` and 0.
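A minimal sketch of promotion:

```julia
using Convex

x = Variable(3)

# the scalar 0 is promoted to a vector of zeros matching x
e = max(x, 0)
size(e)  # (3, 1), the same shape as x
```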