Nonlinear Modeling

JuMP has support for general smooth nonlinear (convex and nonconvex) optimization problems. JuMP is able to provide exact, sparse second-order derivatives to solvers. This information can improve solver accuracy and performance.

There are three main changes to solve nonlinear programs in JuMP.

Info

There are some restrictions on what syntax you can use in the @NLxxx macros. Make sure to read the Syntax notes.

Set a nonlinear objective

Use @NLobjective to set a nonlinear objective.

julia> @NLobjective(model, Min, exp(x[1]) - sqrt(x[2]))
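
The snippet above assumes a model and variables already exist. A minimal, self-contained sketch (Ipopt is used here purely as an example of a nonlinear solver):

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, x[1:2] >= 1)
@NLobjective(model, Min, exp(x[1]) - sqrt(x[2]))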

Add a nonlinear constraint

Use @NLconstraint to add a nonlinear constraint.

julia> @NLconstraint(model, exp(x[1]) <= 1)
exp(x[1]) - 1.0 ≤ 0

julia> @NLconstraint(model, [i = 1:2], x[i]^i >= i)
2-element Array{ConstraintRef{Model,NonlinearConstraintIndex,ScalarShape},1}:
 x[1] ^ 1.0 - 1.0 ≥ 0
 x[2] ^ 2.0 - 2.0 ≥ 0

julia> @NLconstraint(model, con[i = 1:2], prod(x[j] for j = 1:i) == i)
2-element Array{ConstraintRef{Model,NonlinearConstraintIndex,ScalarShape},1}:
 (*)(x[1]) - 1.0 = 0
 x[1] * x[2] - 2.0 = 0
Info

You can only create nonlinear constraints with <=, >=, and ==. More general Nonlinear-in-Set constraints are not supported.

Create a nonlinear expression

Use @NLexpression to create nonlinear expression objects. The syntax is identical to @expression, except that the expression can contain nonlinear terms.

julia> expr = @NLexpression(model, exp(x[1]) + sqrt(x[2]))
"Reference to nonlinear expression #1"

julia> my_anon_expr = @NLexpression(model, [i = 1:2], sin(x[i]))
2-element Array{NonlinearExpression,1}:
 "Reference to nonlinear expression #2"
 "Reference to nonlinear expression #3"

julia> @NLexpression(model, my_expr[i = 1:2], sin(x[i]))
2-element Array{NonlinearExpression,1}:
 "Reference to nonlinear expression #4"
 "Reference to nonlinear expression #5"

Nonlinear expressions can be used in @NLobjective, @NLconstraint, and even nested in other @NLexpressions.

julia> @NLobjective(model, Min, expr^2 + 1)

julia> @NLconstraint(model, [i = 1:2], my_expr[i] <= i)
2-element Array{ConstraintRef{Model,NonlinearConstraintIndex,ScalarShape},1}:
 subexpression[4] - 1.0 ≤ 0
 subexpression[5] - 2.0 ≤ 0

julia> @NLexpression(model, nested[i = 1:2], sin(my_expr[i]))
2-element Array{NonlinearExpression,1}:
 "Reference to nonlinear expression #6"
 "Reference to nonlinear expression #7"

Create a nonlinear parameter

For nonlinear models only, JuMP offers a syntax for explicit "parameter" objects, which are constants in the model that can be efficiently updated between solves.

Nonlinear parameters are declared by using the @NLparameter macro and may be indexed by arbitrary sets analogously to JuMP variables and expressions.

The initial value of the parameter must be provided on the right-hand side of the == sign.

julia> @NLparameter(model, p[i = 1:2] == i)
2-element Array{NonlinearParameter,1}:
 "Reference to nonlinear parameter #1"
 "Reference to nonlinear parameter #2"
Info

A parameter is not an optimization variable. It must be fixed to a value with ==. If you want a parameter that is <= or >=, create a variable instead using @variable.

Use value and set_value to query or update the value of a parameter.

julia> value.(p)
2-element Array{Float64,1}:
 1.0
 2.0

julia> set_value(p[2], 3.0)
3.0

julia> value.(p)
2-element Array{Float64,1}:
 1.0
 3.0
Info

There is no anonymous syntax for creating parameters.

Nonlinear parameters can be used within nonlinear macros only:

julia> @objective(model, Max, p[1] * x)
ERROR: MethodError: no method matching *(::NonlinearParameter, ::VariableRef)
[...]

julia> @NLobjective(model, Max, p[1] * x)

julia> @expression(model, my_expr, p[1] * x^2)
ERROR: MethodError: no method matching *(::NonlinearParameter, ::GenericQuadExpr{Float64,VariableRef})
[...]

julia> @NLexpression(model, my_nl_expr, p[1] * x^2)
"Reference to nonlinear expression #1"

When to use a parameter

Nonlinear parameters are useful when solving nonlinear models in a sequence:

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, z)
@NLparameter(model, x == 1.0)
@NLobjective(model, Min, (z - x)^2)
optimize!(model)
@show value(z) # Equals 1.0.

# Now, update the value of x to solve a different problem.
set_value(x, 5.0)
optimize!(model)
@show value(z) # Equals 5.0
value(z) = 1.0
value(z) = 5.0
Info

Using nonlinear parameters can be faster than creating a new model from scratch with updated data because JuMP is able to avoid repeating a number of steps in processing the model before handing it off to the solver.

Syntax notes

The syntax accepted in nonlinear macros is more restricted than the syntax for linear and quadratic macros. We note some important points below.

No operator overloading

There is no operator overloading provided to build up nonlinear expressions. For example, if x is a JuMP variable, the code 3x will return an AffExpr object that can be used inside of future expressions and linear constraints. However, the code sin(x) is an error. All nonlinear expressions must be inside of macros.

julia> expr = sin(x) + 1
ERROR: sin is not defined for type AbstractVariableRef. Are you trying to build a nonlinear problem? Make sure you use @NLconstraint/@NLobjective.
[...]

julia> expr = @NLexpression(model, sin(x) + 1)
"Reference to nonlinear expression #1"

Scalar operations only

With the exception of the splatting syntax discussed below, all expressions must be simple scalar operations. You cannot use dot, matrix-vector products, vector slices, etc.

julia> @NLobjective(model, Min, c' * x + 3y)
ERROR: Unexpected array [1 2] in nonlinear expression. Nonlinear expressions may contain only scalar expressions.
[...]

Translate vector operations into explicit sum() operations:

julia> @NLobjective(model, Min, sum(c[i] * x[i] for i = 1:2) + 3y)

Or use an @expression:

julia> @expression(model, expr, c' * x)
x[1] + 2 x[2]

julia> @NLobjective(model, Min, expr + 3y)

Splatting

The splatting operator ... is recognized in a very restricted setting for expanding function arguments. The expression being splatted can only be a symbol; more complex expressions are not recognized.

julia> model = Model();

julia> @variable(model, x[1:3]);

julia> @NLconstraint(model, *(x...) <= 1.0)
x[1] * x[2] * x[3] - 1.0 ≤ 0

julia> @NLconstraint(model, *((x / 2)...) <= 0.0)
ERROR: LoadError: Unexpected expression in (*)(x / 2...). JuMP supports splatting only symbols. For example, x... is ok, but (x + 1)..., [x; y]... and g(f(y)...) are not.

User-defined Functions

JuMP's library of recognized univariate functions is derived from the Calculus.jl package.

In addition to this list of functions, it is possible to register custom user-defined nonlinear functions.

Tip

User-defined functions can be used anywhere in @NLobjective, @NLconstraint, and @NLexpression.

Tip

JuMP will attempt to automatically register functions it detects in your nonlinear expressions, which means that in most cases, manually registering a function is not needed. Two exceptions are if you want to provide custom derivatives, or if the function is not available in the scope of the nonlinear expression.

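For example, a minimal sketch of the behavior described in this tip, relying on automatic registration (an explicit call to register, shown in Register a function below, is always a safe alternative):

my_square(x) = x^2
model = Model()
@variable(model, x >= 0)
# `my_square` was never registered; JuMP detects it in the expression and
# attempts to register it automatically, with derivatives computed by
# automatic differentiation.
@NLobjective(model, Min, my_square(x))
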
Warning

User-defined functions must return a scalar output. For a work-around, see User-defined functions with vector outputs.

Automatic differentiation

JuMP does not support black-box optimization, so all user-defined functions must provide derivatives in some form.

Fortunately, JuMP supports automatic differentiation of user-defined functions, a feature to our knowledge not available in any comparable modeling systems.

Info

Automatic differentiation is not finite differencing. JuMP's automatically computed derivatives are not subject to approximation error.

JuMP uses ForwardDiff.jl to perform automatic differentiation; see the ForwardDiff.jl documentation for a description of how to write a function suitable for automatic differentiation.

Warning

Get an error like No method matching Float64(::ForwardDiff.Dual)? Read this section, and see the guidelines at ForwardDiff.jl.

The most common error is that your user-defined function is not generic with respect to the number type, i.e., don't assume that the input to the function is Float64.

f(x::Float64) = 2 * x  # This will not work.
f(x::Real)    = 2 * x  # This is good.
f(x)          = 2 * x  # This is also good.

Another reason you may encounter this error is if you create arrays inside your function which are Float64.

function bad_f(x...)
    y = zeros(length(x))  # This constructs an array of `Float64`!
    for i = 1:length(x)
        y[i] = x[i]^i
    end
    return sum(y)
end

function good_f(x::T...) where {T<:Real}
    y = zeros(T, length(x))  # Construct an array of type `T` instead!
    for i = 1:length(x)
        y[i] = x[i]^i
    end
    return sum(y)
end

Register a function

To register a user-defined function with derivatives computed by automatic differentiation, use the register method as in the following example:

square(x) = x^2
f(x, y) = (x - 1)^2 + (y - 2)^2

model = Model()

register(model, :square, 1, square; autodiff = true)
register(model, :my_f, 2, f; autodiff = true)

@variable(model, x[1:2] >= 0.5)
@NLobjective(model, Min, my_f(x[1], square(x[2])))

The above code creates a JuMP model with the objective function (x[1] - 1)^2 + (x[2]^2 - 2)^2. The arguments to register are:

  1. The model for which the functions are registered.
  2. A Julia symbol object which serves as the name of the user-defined function in JuMP expressions.
  3. The number of input arguments that the function takes.
  4. The Julia method which computes the function.
  5. A flag to instruct JuMP to compute exact gradients automatically.
Tip

The symbol :my_f doesn't have to match the name of the function f. However, it's generally more readable if it does. Make sure you use my_f and not f in the macros.

Warning

If you use multi-variate user-defined functions, JuMP will disable second-derivative information. This can lead to significant slow-downs in some cases. Only use a user-defined function if you cannot write out the expression algebraically in the macro.
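
For example, the objective in the code above can be written out algebraically in the macro (a sketch), which keeps exact second-derivative information available:

model = Model()
@variable(model, x[1:2] >= 0.5)
@NLobjective(model, Min, (x[1] - 1)^2 + (x[2]^2 - 2)^2)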

Warning

User-defined functions cannot be re-registered and will not update if you modify the underlying Julia function. If you want to change a user-defined function between solves, rebuild the model or use a different name. To use a different name programmatically, see Raw expression input.

Register a function and gradient

Forward-mode automatic differentiation as implemented by ForwardDiff.jl has a computational cost that scales linearly with the number of input dimensions. As such, it is not the most efficient way to compute gradients of user-defined functions if the number of input arguments is large. In this case, users may want to provide their own routines for evaluating gradients.

Univariate functions

For univariate functions, the gradient function ∇f should return a number that represents the first-order derivative:

f(x) = x^2
∇f(x) = 2x
model = Model()
register(model, :my_square, 1, f, ∇f; autodiff = true)
@variable(model, x >= 0)
@NLobjective(model, Min, my_square(x))

If autodiff = true, JuMP will use automatic differentiation to compute the hessian.

Multivariate functions

For multivariate functions, the gradient function ∇f must take a gradient vector as the first argument that is filled in-place:

f(x, y) = (x - 1)^2 + (y - 2)^2
function ∇f(g::Vector{T}, x::T, y::T) where {T}
    g[1] = 2 * (x - 1)
    g[2] = 2 * (y - 2)
    return
end

model = Model()
register(model, :my_square, 2, f, ∇f)
@variable(model, x[1:2] >= 0)
@NLobjective(model, Min, my_square(x[1], x[2]))

Hessian information is not supported for multivariate functions.

Register a function, gradient, and hessian

Warning

The ability to explicitly register a hessian is only available for univariate functions.

Instead of automatically differentiating the hessian, you can instead pass a function which returns a number representing the second-order derivative.

f(x) = x^2
∇f(x) = 2x
∇²f(x) = 2
model = Model()
register(model, :my_square, 1, f, ∇f, ∇²f)
@variable(model, x >= 0)
@NLobjective(model, Min, my_square(x))

User-defined functions with vector inputs

User-defined functions which take vectors as input arguments (e.g. f(x::Vector)) are not supported. Instead, use Julia's splatting syntax to create a function with scalar arguments. For example, instead of

f(x::Vector) = sum(x[i]^i for i in 1:length(x))

define:

f(x...) = sum(x[i]^i for i in 1:length(x))

This function f can be used in a JuMP model as follows:

model = Model()
@variable(model, x[1:5] >= 0)
f(x...) = sum(x[i]^i for i in 1:length(x))
register(model, :f, 5, f; autodiff = true)
@NLobjective(model, Min, f(x...))
Tip

Make sure to read the syntax restrictions of Splatting.

Factors affecting solution time

The execution time when solving a nonlinear programming problem can be divided into two parts, the time spent in the optimization algorithm (the solver) and the time spent evaluating the nonlinear functions and corresponding derivatives. Ipopt explicitly displays these two timings in its output, for example:

Total CPU secs in IPOPT (w/o function evaluations)   =      7.412
Total CPU secs in NLP function evaluations           =      2.083

For Ipopt in particular, one can improve the performance by installing advanced sparse linear algebra packages, see Installation Guide. For other solvers, see their respective documentation for performance tips.

The function evaluation time, on the other hand, is the responsibility of the modeling language. JuMP computes derivatives by using reverse-mode automatic differentiation with graph coloring methods for exploiting sparsity of the Hessian matrix [1]. As a conservative bound, JuMP's performance here currently may be expected to be within a factor of 5 of AMPL's.

Querying derivatives from a JuMP model

For some advanced use cases, one may want to directly query the derivatives of a JuMP model instead of handing the problem off to a solver. Internally, JuMP implements the AbstractNLPEvaluator interface from MathOptInterface. To obtain an NLP evaluator object from a JuMP model, use NLPEvaluator. JuMP.index returns the MOI.VariableIndex corresponding to a JuMP variable; MOI.VariableIndex is a type-safe wrapper for an Int64 (stored in the .value field).

For example:

using JuMP
const MOI = JuMP.MOI  # alias for the MathOptInterface module

raw_index(v::MOI.VariableIndex) = v.value
model = Model()
@variable(model, x)
@variable(model, y)
@NLobjective(model, Min, sin(x) + sin(y))
values = zeros(2)
x_index = raw_index(JuMP.index(x))
y_index = raw_index(JuMP.index(y))
values[x_index] = 2.0
values[y_index] = 3.0
d = NLPEvaluator(model)
MOI.initialize(d, [:Grad])
MOI.eval_objective(d, values) # == sin(2.0) + sin(3.0)

# output
1.0504174348855488
∇f = zeros(2)
MOI.eval_objective_gradient(d, ∇f, values)
(∇f[x_index], ∇f[y_index]) # == (cos(2.0), cos(3.0))

# output
(-0.4161468365471424, -0.9899924966004454)

Only nonlinear constraints (those added with @NLconstraint), and nonlinear objectives (added with @NLobjective) exist in the scope of the NLPEvaluator.

The NLPEvaluator does not evaluate derivatives of linear or quadratic constraints or objectives.

The index method applied to a nonlinear constraint reference object returns its index as a NonlinearConstraintIndex. The .value field of NonlinearConstraintIndex stores the raw integer index. For example:

julia> model = Model();

julia> @variable(model, x);

julia> @NLconstraint(model, cons1, sin(x) <= 1);

julia> @NLconstraint(model, cons2, x + 5 == 10);

julia> typeof(cons1)
ConstraintRef{Model,NonlinearConstraintIndex,ScalarShape}

julia> index(cons1)
NonlinearConstraintIndex(1)

julia> index(cons2)
NonlinearConstraintIndex(2)

Note that for one-sided nonlinear constraints, JuMP subtracts any values on the right-hand side when computing expressions. In other words, one-sided nonlinear constraints are always transformed to have a right-hand side of zero.
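
For example, a sketch mirroring the constraints above: the one-sided constraint sin(x) <= 1 is stored with a right-hand side of zero.

model = Model()
@variable(model, x)
@NLconstraint(model, sin(x) <= 1)  # stored as sin(x) - 1.0 ≤ 0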

This method of querying derivatives directly from a JuMP model is convenient for interacting with the model in a structured way, e.g., for accessing derivatives of specific variables. For example, in statistical maximum likelihood estimation problems, one is often interested in the Hessian matrix at the optimal solution, which can be queried using the NLPEvaluator.
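
The following is a sketch of querying the Hessian of the Lagrangian through the same NLPEvaluator interface; the model and evaluation point are illustrative, and the calls used are MOI.hessian_lagrangian_structure and MOI.eval_hessian_lagrangian:

using JuMP
const MOI = JuMP.MOI  # alias for the MathOptInterface module

model = Model()
@variable(model, x)
@variable(model, y)
@NLobjective(model, Min, sin(x) + sin(y))
d = NLPEvaluator(model)
MOI.initialize(d, [:Grad, :Hess])
# Sparsity pattern of the Hessian of the Lagrangian, as (row, column) pairs:
hessian_sparsity = MOI.hessian_lagrangian_structure(d)
# Evaluate the Hessian nonzeros at a point. `σ` scales the objective term and
# `μ` holds one multiplier per nonlinear constraint (there are none here).
# The point [2.0, 3.0] assumes x and y map to indices 1 and 2; use
# raw_index (as above) to be safe in general.
H = zeros(length(hessian_sparsity))
MOI.eval_hessian_lagrangian(d, H, [2.0, 3.0], 1.0, Float64[])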

Raw expression input

Warning

This section requires advanced knowledge of Julia's Expr. You should read the Expressions and evaluation section of the Julia documentation first.

In addition to the @NLobjective and @NLconstraint macros, it is also possible to provide Julia Expr objects directly by using set_NL_objective and add_NL_constraint.

This input form may be useful if the expressions are generated programmatically.

Set the objective function

Use set_NL_objective to set a nonlinear objective.

julia> expr = :($(x) + $(x)^2)
:(x + x ^ 2)

julia> set_NL_objective(model, MOI.MIN_SENSE, expr)

This is equivalent to

julia> @NLobjective(model, Min, x + x^2)
Note

You must interpolate the variables directly into the expression expr.

Note

You must use MOI.MIN_SENSE or MOI.MAX_SENSE instead of Min and Max.

Add a constraint

Use add_NL_constraint to add a nonlinear constraint.

julia> expr = :($(x) + $(x)^2)
:(x + x ^ 2)

julia> add_NL_constraint(model, :($(expr) <= 1))
(x + x ^ 2.0) - 1.0 ≤ 0

This is equivalent to

julia> @NLconstraint(model, x + x^2 <= 1)
(x + x ^ 2.0) - 1.0 ≤ 0

More complicated examples

Raw expression input is most useful when the expressions are generated programmatically, often in conjunction with user-defined functions.

As an example, we construct a model with the nonlinear constraints f_1(x) <= 1 and f_2(x) <= 1, where f_1(x) = x^2 and f_2(x) = sin(x)^2:

julia> function main(functions::Vector{Function})
           model = Model()
           @variable(model, x)
           for (i, f) in enumerate(functions)
               f_sym = Symbol("f_$(i)")
               register(model, f_sym, 1, f; autodiff = true)
               add_NL_constraint(model, :($(f_sym)($(x)) <= 1))
           end
           print(model)
           return
       end
main (generic function with 1 method)

julia> main([x -> x^2, x -> sin(x)^2])
Feasibility
Subject to
 f_1(x) - 1.0 ≤ 0
 f_2(x) - 1.0 ≤ 0

As another example, we construct a model with the constraint x^2 + sin(x)^2 <= 1:

julia> function main(functions::Vector{Function})
           model = Model()
           @variable(model, x)
           expr = Expr(:call, :+)
           for (i, f) in enumerate(functions)
               f_sym = Symbol("f_$(i)")
               register(model, f_sym, 1, f; autodiff = true)
               push!(expr.args, :($(f_sym)($(x))))
           end
           add_NL_constraint(model, :($(expr) <= 1))
           print(model)
           return
       end
main (generic function with 1 method)

julia> main([x -> x^2, x -> sin(x)^2])
Feasibility
Subject to
 (f_1(x) + f_2(x)) - 1.0 ≤ 0

[1] Dunning, Huchette, and Lubin, "JuMP: A Modeling Language for Mathematical Optimization", SIAM Review, 59(2), 2017.