# Nonlinear Modeling

More information can be found in the Nonlinear Modeling section of the manual.

## Constraints

`JuMP.@NLconstraint`

— Macro `@NLconstraint(m::Model, expr)`

Add a constraint described by the nonlinear expression `expr`. See also `@constraint`. For example:

```
@NLconstraint(model, sin(x) <= 1)
@NLconstraint(model, [i = 1:3], sin(i * x) <= 1 / i)
```

`JuMP.@NLconstraints`

— Macro `@NLconstraints(model, args...)`

Adds multiple nonlinear constraints to `model` at once, in the same fashion as the `@NLconstraint` macro.

The model must be the first argument, and multiple constraints can be added on multiple lines wrapped in a `begin ... end` block.

**Examples**

```
@NLconstraints(model, begin
    t >= sqrt(x^2 + y^2)
    [i = 1:2], z[i] <= log(a[i])
end)
```

`JuMP.NonlinearConstraintIndex`

— Type `NonlinearConstraintIndex(index::Int64)`

A struct to refer to the 1-indexed nonlinear constraint `index`.

`JuMP.num_nl_constraints`

— Function `num_nl_constraints(model::Model)`

Returns the number of nonlinear constraints associated with `model`.
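A hedged sketch of how this count reflects constraints added with `@NLconstraint`, including every element of a constraint container (assumes JuMP's legacy nonlinear interface):

```julia
using JuMP

model = Model()
@variable(model, x)
@NLconstraint(model, sin(x) <= 1)                 # one scalar constraint
@NLconstraint(model, [i = 1:3], cos(i * x) >= 0)  # three more, one per index
num_nl_constraints(model)  # returns 4
```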

`JuMP.add_NL_constraint`

— Function `add_NL_constraint(model::Model, expr::Expr)`

Add a nonlinear constraint described by the Julia expression `expr` to `model`.

This function is most useful if the expression `expr` is generated programmatically, and you cannot use `@NLconstraint`.

**Notes**

- You must interpolate the variables directly into the expression `expr`.

**Examples**

```
julia> add_NL_constraint(model, :($(x) + $(x)^2 <= 1))
(x + x ^ 2.0) - 1.0 ≤ 0
```
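When the constraint structure is only known at run time, the macro form cannot be used; a minimal sketch of the intended pattern is a loop that splices the data and variables into quoted expressions (hypothetical constraints, assuming JuMP's legacy nonlinear interface):

```julia
using JuMP

model = Model()
@variable(model, x[1:3])

# Interpolate each variable directly into the quoted expression;
# referring to variables by name inside the Expr would fail.
for i in 1:3
    con = :(sin($i * $(x[i])) <= 1 / $i)
    add_NL_constraint(model, con)
end

num_nl_constraints(model)  # returns 3
```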

## Expressions

`JuMP.@NLexpression`

— Macro `@NLexpression(args...)`

Efficiently build a nonlinear expression which can then be inserted in other nonlinear constraints and the objective. See also `@expression`. For example:

```
@NLexpression(model, my_expr, sin(x)^2 + cos(x^2))
@NLconstraint(model, my_expr + y >= 5)
@NLobjective(model, Min, my_expr)
```

Indexing over sets and anonymous expressions are also supported:

```
@NLexpression(m, my_expr_1[i=1:3], sin(i * x))
my_expr_2 = @NLexpression(m, log(1 + sum(exp(x[i]) for i in 1:2)))
```

`JuMP.@NLexpressions`

— Macro `@NLexpressions(model, args...)`

Adds multiple nonlinear expressions to `model` at once, in the same fashion as the `@NLexpression` macro.

The model must be the first argument, and multiple expressions can be added on multiple lines wrapped in a `begin ... end` block.

**Examples**

```
@NLexpressions(model, begin
    my_expr, sqrt(x^2 + y^2)
    my_expr_1[i = 1:2], log(a[i]) - z[i]
end)
```

`JuMP.NonlinearExpression`

— Type `NonlinearExpression`

A struct to represent a nonlinear expression.

Create an expression using `@NLexpression`.
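A small sketch showing that the anonymous form of `@NLexpression` yields a `NonlinearExpression` object, which can then be reused in later nonlinear constraints:

```julia
using JuMP

model = Model()
@variable(model, x)

# The anonymous form returns the NonlinearExpression directly.
ex = @NLexpression(model, sin(x)^2 + cos(x)^2)
ex isa NonlinearExpression  # true
@NLconstraint(model, ex <= 1)
```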

## Objectives

`JuMP.@NLobjective`

— Macro `@NLobjective(model, sense, expression)`

Add a nonlinear objective to `model` with optimization sense `sense`. `sense` must be `Max` or `Min`.

**Example**

`@NLobjective(model, Max, 2x + 1 + sin(x))`

`JuMP.set_NL_objective`

— Function `set_NL_objective(model::Model, sense::MOI.OptimizationSense, expr::Expr)`

Set the nonlinear objective of `model` to the expression `expr`, with the optimization sense `sense`.

This function is most useful if the expression `expr` is generated programmatically, and you cannot use `@NLobjective`.

**Notes**

- You must interpolate the variables directly into the expression `expr`.
- You must use `MOI.MIN_SENSE` or `MOI.MAX_SENSE` instead of `Min` and `Max`.

**Examples**

`julia> set_NL_objective(model, MOI.MIN_SENSE, :($(x) + $(x)^2))`

## Parameters

`JuMP.@NLparameter`

— Macro `@NLparameter(model, param == value)`

Create and return a nonlinear parameter `param` attached to the model `model` with initial value set to `value`. Nonlinear parameters may be used only in nonlinear expressions.

**Example**

```
model = Model()
@NLparameter(model, x == 10)
value(x)
# output
10.0
```

`@NLparameter(model, param_collection[...] == value_expr)`

Create and return a collection of nonlinear parameters `param_collection` attached to the model `model` with initial value set to `value_expr` (which may depend on the index sets). Uses the same syntax for specifying index sets as `@variable`.

**Example**

```
model = Model()
@NLparameter(model, y[i = 1:10] == 2 * i)
value(y[9])
# output
18.0
```

`JuMP.NonlinearParameter`

— Type `NonlinearParameter`

A struct to represent a nonlinear parameter.

Create a parameter using `@NLparameter`.

`JuMP.value`

— Method `value(p::NonlinearParameter)`

Return the current value stored in the nonlinear parameter `p`.

**Example**

```
model = Model()
@NLparameter(model, p == 10)
value(p)
# output
10.0
```

`JuMP.set_value`

— Method `set_value(p::NonlinearParameter, v::Number)`

Store the value `v` in the nonlinear parameter `p`.

**Example**

```
model = Model()
@NLparameter(model, p == 0)
set_value(p, 5)
value(p)
# output
5.0
```

## User-defined functions

`JuMP.register`

— Function

```
register(
    model::Model,
    s::Symbol,
    dimension::Integer,
    f::Function;
    autodiff::Bool = false,
)
```

Register the user-defined function `f` that takes `dimension` arguments in `model` as the symbol `s`.

The function `f` must support all subtypes of `Real` as arguments. Do not assume that the inputs are `Float64`.

**Notes**

- For this method, you must explicitly set `autodiff = true`, because no user-provided gradient function `∇f` is given.
- Second-derivative information is only computed if `dimension == 1`.
- `s` does not have to be the same symbol as `f`, but it is generally more readable if it is.

**Examples**

```
model = Model()
@variable(model, x)
f(x::T) where {T<:Real} = x^2
register(model, :foo, 1, f; autodiff = true)
@NLobjective(model, Min, foo(x))
```

```
model = Model()
@variable(model, x[1:2])
g(x::T, y::T) where {T<:Real} = x * y
register(model, :g, 2, g; autodiff = true)
@NLobjective(model, Min, g(x[1], x[2]))
```

```
register(
    model::Model,
    s::Symbol,
    dimension::Integer,
    f::Function,
    ∇f::Function;
    autodiff::Bool = false,
)
```

Register the user-defined function `f` that takes `dimension` arguments in `model` as the symbol `s`. In addition, provide a gradient function `∇f`.

The functions `f` and `∇f` must support all subtypes of `Real` as arguments. Do not assume that the inputs are `Float64`.

**Notes**

- If the function `f` is univariate (i.e., `dimension == 1`), `∇f` must return a number which represents the first-order derivative of the function `f`.
- If the function `f` is multi-variate, `∇f` must have a signature matching `∇f(g::Vector{T}, args::T...) where {T<:Real}`, where the first argument is a vector `g` that is modified in-place with the gradient.
- If `autodiff = true` and `dimension == 1`, use automatic differentiation to compute the second-order derivative information. If `autodiff = false`, only first-order derivative information will be used.
- `s` does not have to be the same symbol as `f`, but it is generally more readable if it is.

**Examples**

```
model = Model()
@variable(model, x)
f(x::T) where {T<:Real} = x^2
∇f(x::T) where {T<:Real} = 2 * x
register(model, :foo, 1, f, ∇f; autodiff = true)
@NLobjective(model, Min, foo(x))
```

```
model = Model()
@variable(model, x[1:2])
g(x::T, y::T) where {T<:Real} = x * y
function ∇g(g::Vector{T}, x::T, y::T) where {T<:Real}
    g[1] = y
    g[2] = x
    return
end
register(model, :g, 2, g, ∇g; autodiff = true)
@NLobjective(model, Min, g(x[1], x[2]))
```

```
register(
    model::Model,
    s::Symbol,
    dimension::Integer,
    f::Function,
    ∇f::Function,
    ∇²f::Function,
)
```

Register the user-defined function `f` that takes `dimension` arguments in `model` as the symbol `s`. In addition, provide a gradient function `∇f` and a hessian function `∇²f`.

`∇f` and `∇²f` must return numbers corresponding to the first- and second-order derivatives of the function `f` respectively.

**Notes**

- Because automatic differentiation is not used, you can assume the inputs are all `Float64`.
- This method will throw an error if `dimension > 1`.
- `s` does not have to be the same symbol as `f`, but it is generally more readable if it is.

**Examples**

```
model = Model()
@variable(model, x)
f(x::Float64) = x^2
∇f(x::Float64) = 2 * x
∇²f(x::Float64) = 2.0
register(model, :foo, 1, f, ∇f, ∇²f)
@NLobjective(model, Min, foo(x))
```

## Derivatives

`JuMP.NLPEvaluator`

— Type `NLPEvaluator(m::Model)`

Return an `MOI.AbstractNLPEvaluator` constructed from the model `m`.

Before using, you must initialize the evaluator using `MOI.initialize`.
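A sketch of the initialize-then-evaluate pattern (assuming `MOI` refers to `MathOptInterface` and that the `:Grad` feature is requested up front; the point values are illustrative):

```julia
using JuMP
import MathOptInterface as MOI

model = Model()
@variable(model, x)
@NLobjective(model, Min, x^2 + x)

d = NLPEvaluator(model)
MOI.initialize(d, [:Grad])  # request features before any evaluation call

MOI.eval_objective(d, [2.0])  # returns 6.0
g = zeros(1)
MOI.eval_objective_gradient(d, g, [2.0])
g  # now [5.0]
```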