Benchmarks
Functions to help benchmark the performance of solver wrappers. See The Benchmarks submodule for more details.
MathOptInterface.Benchmarks.suite — Function

    suite(
        new_model::Function;
        exclude::Vector{Regex} = Regex[],
    )
Create a suite of benchmarks. new_model should be a function that takes no arguments and returns a new instance of the optimizer you wish to benchmark.

Use exclude to exclude a subset of benchmarks.
Examples

    suite() do
        GLPK.Optimizer()
    end

    suite(exclude = [r"delete"]) do
        Gurobi.Optimizer(OutputFlag=0)
    end
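If you only want timings in the current session, the returned suite can also be run directly with BenchmarkTools instead of going through create_baseline. A minimal sketch, assuming the suite behaves as a standard BenchmarkTools.BenchmarkGroup and that the GLPK wrapper is installed:

    using BenchmarkTools, GLPK
    import MathOptInterface as MOI

    # Build a suite that benchmarks a fresh GLPK optimizer per run,
    # skipping any benchmarks whose names match r"delete".
    my_suite = MOI.Benchmarks.suite(exclude = [r"delete"]) do
        GLPK.Optimizer()
    end

    # Run the suite in-process; `verbose = true` prints progress per benchmark.
    results = BenchmarkTools.run(my_suite, verbose = true)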
MathOptInterface.Benchmarks.create_baseline — Function

    create_baseline(suite, name::String; directory::String = "", kwargs...)

Run all benchmarks in suite and save the results to files called name in directory.

Extra kwargs are passed to BenchmarkTools.run.
Examples

    my_suite = suite(() -> GLPK.Optimizer())
    create_baseline(my_suite, "glpk_master"; directory = "/tmp", verbose = true)
MathOptInterface.Benchmarks.compare_against_baseline — Function

    compare_against_baseline(
        suite, name::String;
        directory::String = "",
        report_filename::String = "report.txt",
    )

Run all benchmarks in suite and compare against the files called name in directory that were created by a call to create_baseline.

A report summarizing the comparison is written to report_filename in directory.

Extra kwargs are passed to BenchmarkTools.run.
Examples

    my_suite = suite(() -> GLPK.Optimizer())
    compare_against_baseline(
        my_suite, "glpk_master"; directory = "/tmp", verbose = true
    )
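Putting the two functions together, a typical regression-testing workflow might look like the sketch below. The directory /tmp and the name "glpk_master" are illustrative values carried over from the examples above, and the GLPK wrapper is assumed to be installed:

    import MathOptInterface as MOI
    using GLPK

    my_suite = MOI.Benchmarks.suite(() -> GLPK.Optimizer())

    # Step 1: on the reference version of the wrapper, record a baseline.
    MOI.Benchmarks.create_baseline(
        my_suite, "glpk_master"; directory = "/tmp", verbose = true
    )

    # Step 2: switch to the development version, rebuild the suite in a fresh
    # session, and compare. A summary is written to report.txt in /tmp.
    MOI.Benchmarks.compare_against_baseline(
        my_suite, "glpk_master"; directory = "/tmp", verbose = true
    )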