Benchmarks

Functions to help benchmark the performance of solver wrappers. See The Benchmarks submodule for more details.
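
The typical workflow combines the three functions documented below: build a suite, record a baseline on the reference version of the code, then re-run the same suite against the modified version. A minimal sketch, assuming the GLPK wrapper is installed:

julia> import MathOptInterface as MOI

julia> import GLPK

julia> my_suite = MOI.Benchmarks.suite(() -> GLPK.Optimizer());

julia> MOI.Benchmarks.create_baseline(my_suite, "glpk_master"; directory = "/tmp");

After checking out the version of the code to compare, rebuild the same suite and run:

julia> MOI.Benchmarks.compare_against_baseline(my_suite, "glpk_master"; directory = "/tmp");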

MathOptInterface.Benchmarks.suite - Function
suite(
    new_model::Function;
    exclude::Vector{Regex} = Regex[]
)

Create a suite of benchmarks. new_model should be a function that takes no arguments and returns a new instance of the optimizer you wish to benchmark.

Use exclude to exclude a subset of benchmarks.

Example

julia> import MathOptInterface as MOI

julia> import GLPK

julia> MOI.Benchmarks.suite() do
           return GLPK.Optimizer()
       end

julia> import Gurobi

julia> MOI.Benchmarks.suite(; exclude = [r"delete"]) do
           return Gurobi.Optimizer()
       end
MathOptInterface.Benchmarks.create_baseline - Function
create_baseline(suite, name::String; directory::String = ""; kwargs...)

Run all benchmarks in suite and save the results to files called name in directory.

Extra kwargs are passed to BenchmarkTools.run.

Example

julia> import MathOptInterface as MOI

julia> import GLPK

julia> my_suite = MOI.Benchmarks.suite(() -> GLPK.Optimizer());

julia> MOI.Benchmarks.create_baseline(
           my_suite,
           "glpk_master";
           directory = "/tmp",
           verbose = true,
       )
MathOptInterface.Benchmarks.compare_against_baseline - Function
compare_against_baseline(
    suite, name::String;
    directory::String = "",
    report_filename::String = "report.txt",
    kwargs...,
)

Run all benchmarks in suite and compare against files called name in directory that were created by a call to create_baseline.

A report summarizing the comparison is written to report_filename in directory.

Extra kwargs are passed to BenchmarkTools.run.

Example

julia> import MathOptInterface as MOI

julia> import GLPK

julia> my_suite = MOI.Benchmarks.suite(() -> GLPK.Optimizer());

julia> MOI.Benchmarks.compare_against_baseline(
           my_suite,
           "glpk_master";
           directory = "/tmp",
           verbose = true,
       )
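
With the settings above, the report is written to report.txt in /tmp, so it can then be inspected with, for example:

julia> print(read(joinpath("/tmp", "report.txt"), String))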