Performance tips
By now you should have read the other "getting started" tutorials. You're almost ready to write your own models, but before you do so there are some important things to be aware of.
Read the Julia performance tips
The first thing to do is read the Performance tips section of the Julia manual. The most important rule is to avoid global variables! This is particularly important if you're learning JuMP after using a language like MATLAB.
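For example, here is a minimal sketch of that advice (the data vector and the build_model function are hypothetical names) that passes data into a function instead of building the model from an untyped global variable:
using JuMP

function build_model(data::Vector{Float64})
    # `data` is a typed function argument, so Julia can compile fast,
    # specialized code for this loop instead of looking up a global.
    model = Model()
    @variable(model, x[1:length(data)] >= 0)
    @constraint(model, sum(data[i] * x[i] for i in eachindex(data)) <= 1)
    return model
end

data = rand(3)
model = build_model(data)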
The "time-to-first-solve" issue
Similar to the infamous time-to-first-plot plotting problem, JuMP suffers from time-to-first-solve latency. This latency occurs because the first time you call JuMP code in each session, Julia needs to compile a lot of code specific to your problem. This issue is actively being worked on, but there are a few things you can do to improve the situation.
Don't call JuMP from the command line
In other languages, you might be used to a workflow like:
$ julia my_script.jl
This doesn't work for JuMP, because we have to pay the compilation latency every time you run the script. Instead, use one of the suggested workflows from the Julia documentation.
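One such workflow, sketched below (the file name my_model.jl and the function solve_model are hypothetical), is to wrap your code in a function, include the file once in a long-running Julia session, and call the function repeatedly from the REPL:
# my_model.jl
using JuMP
import HiGHS

function solve_model()
    model = Model(HiGHS.Optimizer)
    @variable(model, x >= 0)
    @objective(model, Min, x)
    optimize!(model)
    return value(x)
end

# In the REPL:
# julia> include("my_model.jl")
# julia> solve_model()  # pays the compilation cost once
# julia> solve_model()  # subsequent calls are fast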
Disable bridges if none are being used
At present, the majority of the latency problems are caused by JuMP's bridging mechanism. If you only use constraints that are natively supported by the solver, you can disable bridges by passing add_bridges = false to Model.
using JuMP
import HiGHS
model = Model(HiGHS.Optimizer; add_bridges = false)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: HiGHS
Use PackageCompiler
As a final option, consider using PackageCompiler.jl to create a custom sysimage.
This is a good option if you have finished prototyping a model, and you now want to call it frequently from the command line without paying the compilation price.
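As a sketch of what this looks like (the file names customimage.so and precompile_script.jl are placeholders; precompile_script.jl should build and solve a representative model):
using PackageCompiler

create_sysimage(
    ["JuMP", "HiGHS"];
    # The sysimage file to create:
    sysimage_path = "customimage.so",
    # A script that exercises your model so its methods get compiled
    # into the image:
    precompile_execution_file = "precompile_script.jl",
)

# Then run your script against the custom image:
# $ julia --sysimage customimage.so my_script.jl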
Use macros to build expressions
What
Use JuMP's macros (or add_to_expression!) to build expressions. Avoid constructing expressions outside the macros.
Why
Constructing an expression outside the macro results in intermediate copies of the expression. For example,
x[1] + x[2] + x[3]
is equivalent to
a = x[1]
b = a + x[2]
c = b + x[3]
Since we only care about c, the a and b expressions are not needed and constructing them slows the program down!
JuMP's macros rewrite the expressions to operate in-place and avoid these extra copies. Because they allocate less memory, they are faster, particularly for large expressions.
Example
model = Model()
@variable(model, x[1:3])
3-element Vector{VariableRef}:
x[1]
x[2]
x[3]
Here's what happens if we construct the expression outside the macro:
@allocated x[1] + x[2] + x[3]
1344
The @allocated macro measures how many bytes were allocated during the evaluation of an expression. Fewer is better.
If we use the @expression macro, we get many fewer allocations:
@allocated @expression(model, x[1] + x[2] + x[3])
800
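When you build an expression incrementally, for example term by term in a loop, add_to_expression! gives the same benefit as the macros by updating the expression in-place. A minimal sketch using the x defined above:
expr = AffExpr(0.0)                 # start from the zero affine expression
for i in 1:3
    add_to_expression!(expr, x[i])  # appends x[i] in-place, without copies
end
expr
This produces the same expression as @expression(model, x[1] + x[2] + x[3]) without creating intermediate copies along the way.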