Implementing a solver interface

This guide outlines the basic steps to implement an interface to MathOptInterface for a new solver.

Danger

Implementing an interface to MathOptInterface for a new solver is a lot of work. Before starting, we recommend that you join the Developer chatroom and explain a little bit about the solver you are wrapping. If you have questions that are not answered by this guide, please ask them in the Developer chatroom so we can improve this guide.

A note on the API

The API of MathOptInterface is large and varied. In order to support the diversity of solvers and use-cases, we make heavy use of duck-typing. That is, solvers are not expected to implement the full API, nor is there a well-defined minimal subset of what must be implemented. Instead, you should implement the API as necessary to make the solver function as you require.

The main reason for using duck-typing is that solvers work in different ways and target different use-cases.

For example:

  • Some solvers support incremental problem construction, support modification after a solve, and have native support for things like variable names.
  • Other solvers are "one-shot" solvers that require all of the problem data to construct and solve the problem in a single function call. They do not support modification or things like variable names.
  • Other "solvers" are not solvers at all, but things like file readers. These may only support functions like read_from_file, and may not even support the ability to add variables or constraints directly.
  • Finally, some "solvers" are layers which take a problem as input, transform it according to some rules, and pass the transformed problem to an inner solver.

Preliminaries

Before starting on your wrapper, you should do some background research and make the solver accessible via Julia.

Decide if MathOptInterface is right for you

The first step in writing a wrapper is to decide whether implementing an interface is the right thing to do.

MathOptInterface is an abstraction layer for unifying constrained mathematical optimization solvers. If your solver doesn't fit in that category (for example, it implements a derivative-free algorithm for unconstrained objective functions), MathOptInterface may not be the right tool for the job.

Tip

If you're not sure whether you should write an interface, ask in the Developer chatroom.

Find a similar solver already wrapped

The next step is to find (if possible) a similar solver that is already wrapped. Although not strictly necessary, this will be a good place to look for inspiration when implementing your wrapper.

The JuMP documentation has a good list of solvers, along with the problem classes they support.

Tip

If you're not sure which solver is most similar, ask in the Developer chatroom.

Create a low-level interface

Before writing a MathOptInterface wrapper, you first need to be able to call the solver from Julia.

Wrapping solvers written in Julia

If your solver is written in Julia, there's nothing to do here. Go to the next section.

Wrapping solvers written in C

Julia is well suited to wrapping solvers written in C.

Info

This is not true for C++. If you have a solver written in C++, first write a C interface, then wrap the C interface.

Before writing a MathOptInterface wrapper, there are a few extra steps.

Create a JLL

If the C code is publicly available under an open source license, create a JLL package via Yggdrasil. The easiest way to do this is to copy an existing solver. Good examples to follow are the COIN-OR solvers.

Warning

Building the solver via Yggdrasil is non-trivial. Please ask the Developer chatroom for help.

If the code is commercial or not publicly available, the user will need to manually install the solver. See Gurobi.jl or CPLEX.jl for examples of how to structure this.

Use Clang.jl to wrap the C API

The next step is to use Clang.jl to automatically wrap the C API. The easiest way to do this is to follow an example. Good examples to follow are Cbc.jl and HiGHS.jl.

Sometimes, you will need to make manual modifications to the resulting files.
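
For reference, a generated low-level wrapper typically looks something like the following sketch. The package NewSolver_jll, the library libnewsolver, and the C function names are placeholders for illustration only, matching the hypothetical newsolver_* functions used later in this guide.

# Hypothetical example of a Clang.jl-style low-level wrapper.
# `NewSolver_jll`, `libnewsolver`, and the C symbols are placeholders.
using NewSolver_jll: libnewsolver

function newsolver_createProblem()
    # Call the C function and return the raw problem pointer.
    return ccall((:newsolver_createProblem, libnewsolver), Ptr{Cvoid}, ())
end

function newsolver_freeProblem(prob::Ptr{Cvoid})
    # Free the memory associated with the problem pointer.
    ccall((:newsolver_freeProblem, libnewsolver), Cvoid, (Ptr{Cvoid},), prob)
    return
end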

Solvers written in other languages

Ask the Developer chatroom for advice. You may be able to use one of the JuliaInterop packages to call out to the solver.

For example, SeDuMi.jl uses MATLAB.jl to call the SeDuMi solver written in MATLAB.

Structuring the package

Structure your wrapper as a Julia package. Consult the Julia documentation if you haven't done this before.

MOI solver interfaces may be in the same package as the solver itself (either the C wrapper if the solver is accessible through C, or the Julia code if the solver is written in Julia, for example), or in a separate package which depends on the solver package.

Note

The JuMP core contributors request that you do not use "JuMP" in the name of your package without prior consent.

Your package should have the following structure:

/.github
    /workflows
        ci.yml
        format_check.yml
        TagBot.yml
/gen
    gen.jl  # Code to wrap the C API
/src
    NewSolver.jl
    /gen
        libnewsolver_api.jl
        libnewsolver_common.jl
    /MOI_wrapper
        MOI_wrapper.jl
        other_files.jl
/test
    runtests.jl
    /MOI_wrapper
        MOI_wrapper.jl
.gitignore
.JuliaFormatter.toml
README.md
LICENSE.md
Project.toml
  • The /.github folder contains the scripts for GitHub actions. The easiest way to write these is to copy the ones from an existing solver.
  • The /gen and /src/gen folders are only needed if you are wrapping a solver written in C.
  • The /src/MOI_wrapper folder contains the Julia code for the MOI wrapper.
  • The /test folder contains code for testing your package. See Setup tests for more information.
  • The .JuliaFormatter.toml and .github/workflows/format_check.yml enforce code formatting using JuliaFormatter.jl. Check existing solvers or JuMP.jl for details.

Documentation

Your package must include documentation explaining how to use the package. The easiest approach is to include documentation in your README.md. A more involved option is to use Documenter.jl.

Examples of packages with README-based documentation include:

Examples of packages with Documenter-based documentation include:

Setup tests

The best way to implement an interface to MathOptInterface is via test-driven development.

The MOI.Test submodule contains a large test suite to help check that you have implemented things correctly.

Follow the guide How to test a solver to set up the tests for your package.

Tip

Run the tests frequently when developing. However, at the start there are going to be a lot of errors. Start by excluding large classes of tests (for example, exclude = ["test_basic_", "test_model_"]), implement any missing methods until the tests pass, then remove an exclusion and repeat.
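
For example, a test file using this pattern might look like the following sketch, where NewSolver is the hypothetical package name used throughout this guide; the caching and bridging setup follows the How to test a solver guide.

using Test
import MathOptInterface as MOI
import NewSolver  # hypothetical package name

@testset "MOI.Test.runtests" begin
    # Wrap the optimizer in a cache and bridges, as recommended for testing.
    model = MOI.Bridges.full_bridge_optimizer(
        MOI.Utilities.CachingOptimizer(
            MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
            NewSolver.Optimizer(),
        ),
        Float64,
    )
    MOI.set(model, MOI.Silent(), true)
    # Exclude large classes of tests while the wrapper is incomplete.
    MOI.Test.runtests(
        model,
        MOI.Test.Config();
        exclude = ["test_basic_", "test_model_"],
    )
end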

Initial code

By this point, you should have a package setup with tests, formatting, and access to the underlying solver. Now it's time to start writing the wrapper.

The Optimizer object

The first object to create is a subtype of AbstractOptimizer. This type is going to store everything related to the problem.

By convention, these optimizers should not be exported and should be named PackageName.Optimizer.

import MathOptInterface as MOI

struct Optimizer <: MOI.AbstractOptimizer
    # Fields go here
end

Optimizer objects for C solvers

Warning

This section is important if you wrap a solver written in C.

Wrapping a solver written in C requires the use of pointers, and you must manually free the solver's memory when the Optimizer is garbage collected by Julia.

Never pass a pointer directly to a Julia ccall function.

Instead, store the pointer as a field in your Optimizer, and implement Base.cconvert and Base.unsafe_convert. Then you can pass Optimizer to any ccall function that expects the pointer.

In addition, make sure you implement a finalizer for each model you create.

If newsolver_createProblem() is the low-level function that creates the problem pointer in C, and newsolver_freeProblem(::Ptr{Cvoid}) is the low-level function that frees memory associated with the pointer, your Optimizer() function should look like this:

# The struct must be mutable so that a finalizer can be attached to it.
mutable struct Optimizer <: MOI.AbstractOptimizer
    ptr::Ptr{Cvoid}

    function Optimizer()
        ptr = newsolver_createProblem()
        model = new(ptr)
        finalizer(model) do m
            newsolver_freeProblem(m)
            return
        end
        return model
    end
end

Base.cconvert(::Type{Ptr{Cvoid}}, model::Optimizer) = model
Base.unsafe_convert(::Type{Ptr{Cvoid}}, model::Optimizer) = model.ptr

Implement methods for Optimizer

All Optimizers must implement the following methods:

  • empty!
  • is_empty

Other methods, detailed below, are optional or depend on how you implement the interface.

Tip

For this and all future methods, read the docstrings to understand what each method does, what it expects as input, and what it produces as output. If it isn't clear, let us know and we will improve the docstrings. It is also very helpful to look at an existing wrapper for a similar solver.

You should also implement Base.summary(::IO, ::Optimizer) to print a nice string when someone shows your model. For example

function Base.summary(io::IO, model::Optimizer)
    return print(io, "NewSolver with the pointer $(model.ptr)")
end

Implement attributes

MathOptInterface uses attributes to manage different aspects of the problem.

For each attribute

  • get gets the current value of the attribute
  • set sets a new value of the attribute. Not all attributes can be set. For example, the user can't modify the SolverName.
  • supports returns a Bool indicating whether the solver supports the attribute.

Info

Use attribute_value_type to check the value expected by a given attribute. You should make sure that your get function correctly infers to this type (or a subtype of it).

Each column in the table indicates whether you need to implement the particular method for each attribute.

Attribute             | get | set | supports
----------------------|-----|-----|---------
SolverName            | Yes | No  | No
SolverVersion         | Yes | No  | No
RawSolver             | Yes | No  | No
Name                  | Yes | Yes | Yes
Silent                | Yes | Yes | Yes
TimeLimitSec          | Yes | Yes | Yes
ObjectiveLimit        | Yes | Yes | Yes
SolutionLimit         | Yes | Yes | Yes
NodeLimit             | Yes | Yes | Yes
RawOptimizerAttribute | Yes | Yes | Yes
NumberOfThreads       | Yes | Yes | Yes
AbsoluteGapTolerance  | Yes | Yes | Yes
RelativeGapTolerance  | Yes | Yes | Yes

For example:

function MOI.get(model::Optimizer, ::MOI.Silent)
    return # true if MOI.Silent is set
end

function MOI.set(model::Optimizer, ::MOI.Silent, v::Bool)
    if v
        # Set a parameter to turn off printing
    else
        # Restore the default printing
    end
    return
end

MOI.supports(::Optimizer, ::MOI.Silent) = true
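
Similarly, a sketch of RawOptimizerAttribute support might look like the following, where get_param and set_param are hypothetical functions from your low-level wrapper.

function MOI.get(model::Optimizer, attr::MOI.RawOptimizerAttribute)
    return get_param(model, attr.name)  # hypothetical low-level getter
end

function MOI.set(model::Optimizer, attr::MOI.RawOptimizerAttribute, value)
    set_param(model, attr.name, value)  # hypothetical low-level setter
    return
end

MOI.supports(::Optimizer, ::MOI.RawOptimizerAttribute) = true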

Define supports_constraint

The next step is to define which constraints and objective functions you plan to support.

For each function-set constraint pair, define supports_constraint:

function MOI.supports_constraint(
    ::Optimizer,
    ::Type{MOI.VariableIndex},
    ::Type{MOI.ZeroOne},
)
    return true
end

To make this easier, you may want to use Unions:

function MOI.supports_constraint(
    ::Optimizer,
    ::Type{MOI.VariableIndex},
    ::Type{<:Union{MOI.LessThan,MOI.GreaterThan,MOI.EqualTo}},
)
    return true
end

Tip

Only support a constraint if your solver has native support for it.

The big decision: incremental modification?

Now you need to decide whether to support incremental modification or not.

Incremental modification means that the user can add variables and constraints one-by-one without needing to rebuild the entire problem, and they can modify the problem data after an optimize! call. Supporting incremental modification means implementing functions like add_variable and add_constraint.

The alternative is to accept the problem data in a single optimize! or copy_to function call. Because these functions see all of the data at once, they can typically call a more efficient routine to load the data into the underlying solver.

Good examples of solvers supporting incremental modification are MILP solvers like GLPK.jl and Gurobi.jl. Examples of non-incremental solvers are AmplNLWriter.jl and SCS.jl.

It is possible for a solver to implement both approaches, but you should probably start with one for simplicity.

Tip

Only support incremental modification if your solver has native support for it.

In general, supporting incremental modification is more work, and it usually requires some extra book-keeping. However, it provides a more efficient interface to the solver if the problem is going to be resolved multiple times with small modifications. Moreover, once you've implemented incremental modification, it's usually not much extra work to add a copy_to interface. The converse is not true.

Tip

If this is your first time writing an interface, start with the one-shot optimize!.

The non-incremental interface

There are two ways to implement the non-incremental interface. The first uses a two-argument version of optimize!; the second implements copy_to followed by the one-argument version of optimize!.

If your solver does not support modification and requires all of the problem data in a single function call, implement the two-argument "one-shot" optimize!.

If your solver separates loading the problem data from the actual optimization, implement the copy_to interface followed by the one-argument optimize!.
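
As a minimal sketch of the "one-shot" approach, the two-argument optimize! might look like this, where solve_and_cache_results! is a hypothetical helper that extracts the problem from src, calls the underlying solver, and stores the results in dest.

function MOI.optimize!(dest::Optimizer, src::MOI.ModelLike)
    # Map indices in `src` to the indices that `dest` reports results with.
    index_map = MOI.Utilities.identity_index_map(src)
    solve_and_cache_results!(dest, src, index_map)  # hypothetical helper
    # The second return value indicates whether `dest` copied `src` into its
    # own storage; see the `optimize!` docstring for details.
    return index_map, false
end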

The incremental interface

Warning

Writing this interface is a lot of work. The easiest way is to consult the source code of a similar solver.

To implement the incremental interface, implement the following functions:

Info

Solvers do not have to support AbstractScalarFunction in GreaterThan, LessThan, EqualTo, or Interval with a nonzero constant in the function. Throw ScalarFunctionConstantNotZero if the function constant is not zero.
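
As a rough illustration, the core of the incremental interface looks something like the following sketch, where num_variables, add_empty_column!, and add_row! are hypothetical book-keeping fields and low-level helpers.

function MOI.add_variable(model::Optimizer)
    add_empty_column!(model)   # hypothetical low-level call
    model.num_variables += 1   # hypothetical book-keeping field
    return MOI.VariableIndex(model.num_variables)
end

function MOI.add_constraint(
    model::Optimizer,
    f::MOI.ScalarAffineFunction{Float64},
    s::MOI.LessThan{Float64},
)
    if !iszero(f.constant)
        # Reject nonzero function constants, as described above.
        throw(MOI.ScalarFunctionConstantNotZero{Float64,typeof(f),typeof(s)}(f.constant))
    end
    row = add_row!(model, f.terms, s.upper)  # hypothetical low-level call
    return MOI.ConstraintIndex{typeof(f),typeof(s)}(row)
end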

In addition, you should implement the following model attributes:

Attribute                | get | set | supports
-------------------------|-----|-----|---------
ListOfModelAttributesSet | Yes | No  | No
ObjectiveFunctionType    | Yes | No  | No
ObjectiveFunction        | Yes | Yes | Yes
ObjectiveSense           | Yes | Yes | Yes
Name                     | Yes | Yes | Yes

Variable-related attributes:

Attribute                       | get | set | supports
--------------------------------|-----|-----|---------
ListOfVariableAttributesSet     | Yes | No  | No
ListOfVariablesWithAttributeSet | Yes | No  | No
NumberOfVariables               | Yes | No  | No
ListOfVariableIndices           | Yes | No  | No

Constraint-related attributes:

Attribute                         | get | set | supports
----------------------------------|-----|-----|---------
ListOfConstraintAttributesSet     | Yes | No  | No
ListOfConstraintsWithAttributeSet | Yes | No  | No
NumberOfConstraints               | Yes | No  | No
ListOfConstraintTypesPresent      | Yes | No  | No
ConstraintFunction                | Yes | Yes | No
ConstraintSet                     | Yes | Yes | No

Modifications

If your solver supports modifying data in-place, implement modify for the following AbstractModifications:

Variables constrained on creation

Some solvers require variables be associated with a set when they are created. This conflicts with the incremental modification approach, since you cannot first add a free variable and then constrain it to the set.

If this is the case, implement:

By default, MathOptInterface assumes solvers support free variables. If your solver does not support free variables, define:

MOI.supports_add_constrained_variables(::Optimizer, ::Type{MOI.Reals}) = false
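
A sketch of add_constrained_variables for a conic solver might look like the following, where num_variables and add_cone! are hypothetical book-keeping and low-level helpers.

function MOI.add_constrained_variables(model::Optimizer, set::MOI.Nonnegatives)
    first_col = model.num_variables + 1  # hypothetical field
    add_cone!(model, set.dimension)      # hypothetical low-level call
    model.num_variables += set.dimension
    variables = MOI.VariableIndex.(first_col:model.num_variables)
    ci = MOI.ConstraintIndex{MOI.VectorOfVariables,MOI.Nonnegatives}(first_col)
    return variables, ci
end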

Incremental and copy_to

If you implement the incremental interface, you have the option of also implementing copy_to.

If you don't want to write a specialized copy_to, for example, because the solver has no API for loading the problem in a single function call, define the following fallback, which implements copy_to using the incremental interface:

MOI.supports_incremental_interface(::Optimizer) = true

function MOI.copy_to(dest::Optimizer, src::MOI.ModelLike)
    return MOI.Utilities.default_copy_to(dest, src)
end

Names

Regardless of which interface you implement, you have the option of implementing the Name attribute for variables and constraints:

Attribute      | get | set | supports
---------------|-----|-----|---------
VariableName   | Yes | Yes | Yes
ConstraintName | Yes | Yes | Yes

If you implement names, you must also implement the following three methods:

function MOI.get(model::Optimizer, ::Type{MOI.VariableIndex}, name::String)
    return # The variable named `name`.
end

function MOI.get(model::Optimizer, ::Type{MOI.ConstraintIndex}, name::String)
    return # The constraint of any type named `name`.
end

function MOI.get(
    model::Optimizer,
    ::Type{MOI.ConstraintIndex{F,S}},
    name::String,
) where {F,S}
    return # The constraint of type F-in-S named `name`.
end

These methods have the following rules:

  • If there is no variable or constraint with the name, return nothing
  • If there is a single variable or constraint with that name, return the variable or constraint
  • If there are multiple variables or constraints with the name, throw an error.

Warning

You should not implement ConstraintName for VariableIndex constraints. If you implement ConstraintName for other constraints, you can add the following two methods to disable ConstraintName for VariableIndex constraints.

function MOI.supports(
    ::Optimizer,
    ::MOI.ConstraintName,
    ::Type{<:MOI.ConstraintIndex{MOI.VariableIndex,<:MOI.AbstractScalarSet}},
)
    return throw(MOI.VariableIndexConstraintNameError())
end
function MOI.set(
    ::Optimizer,
    ::MOI.ConstraintName,
    ::MOI.ConstraintIndex{MOI.VariableIndex,<:MOI.AbstractScalarSet},
    ::String,
)
    return throw(MOI.VariableIndexConstraintNameError())
end

Solutions

Implement optimize! to solve the model:

All Optimizers must implement the following attributes:

Info

You only need to implement get for solution attributes. Don't implement set or supports.

Note

Solver wrappers should document how the low-level statuses map to the MOI statuses. Statuses like NEARLY_FEASIBLE_POINT and INFEASIBLE_POINT are designed to be used when the solver explicitly indicates that relaxed tolerances are satisfied or that the returned point is infeasible, respectively.
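
For example, a sketch of such a mapping might look like the following, where status and the NS_* codes are hypothetical fields and constants from the low-level wrapper.

function MOI.get(model::Optimizer, ::MOI.TerminationStatus)
    if model.status === nothing                # hypothetical: not yet solved
        return MOI.OPTIMIZE_NOT_CALLED
    elseif model.status == NS_OPTIMAL          # hypothetical status code
        return MOI.OPTIMAL
    elseif model.status == NS_INFEASIBLE       # hypothetical status code
        return MOI.INFEASIBLE
    elseif model.status == NS_TIME_LIMIT       # hypothetical status code
        return MOI.TIME_LIMIT
    else
        return MOI.OTHER_ERROR
    end
end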

You should also implement the following attributes:

Tip

Attributes like VariablePrimal and ObjectiveValue are indexed by the result count. Use MOI.check_result_index_bounds(model, attr) to throw an error if the attribute is not available.
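
For example, a sketch of VariablePrimal might look like this, assuming the solution is cached in a hypothetical variable_primal field of the Optimizer.

function MOI.get(
    model::Optimizer,
    attr::MOI.VariablePrimal,
    x::MOI.VariableIndex,
)
    # Throw `ResultIndexBoundsError` if `attr.result_index` is out of bounds.
    MOI.check_result_index_bounds(model, attr)
    return model.variable_primal[x.value]  # hypothetical cached solution
end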

If your solver returns dual solutions, implement:

For integer solvers, implement:

If applicable, implement:

If your solver uses the Simplex method, implement:

If your solver accepts primal or dual warm-starts, implement:

Other tips

Here are some other points to be aware of when writing your wrapper.

Unsupported constraints at runtime

In some cases, your solver may support a particular type of constraint (for example, quadratic constraints), but only if the data meets some condition (for example, it is convex).

In this case, declare that you support the constraint, and throw AddConstraintNotAllowed.

Dealing with multiple variable bounds

MathOptInterface uses VariableIndex constraints to represent variable bounds. Defining multiple variable bounds on a single variable is not allowed.

Throw LowerBoundAlreadySet or UpperBoundAlreadySet if the user adds a constraint that results in multiple bounds.

Only throw if the constraints conflict. It is okay to add VariableIndex-in-GreaterThan and then VariableIndex-in-LessThan, but not VariableIndex-in-Interval and then VariableIndex-in-LessThan.
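
A sketch of this check, assuming a hypothetical has_upper_bound helper that queries your wrapper's own book-keeping, might look like this:

function MOI.add_constraint(
    model::Optimizer,
    x::MOI.VariableIndex,
    set::MOI.LessThan{Float64},
)
    if has_upper_bound(model, x)  # hypothetical book-keeping query
        throw(MOI.UpperBoundAlreadySet{MOI.LessThan{Float64},MOI.LessThan{Float64}}(x))
    end
    # ... store the bound and pass it to the solver ...
    return MOI.ConstraintIndex{MOI.VariableIndex,MOI.LessThan{Float64}}(x.value)
end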

Expect duplicate coefficients

Solvers must expect that functions such as ScalarAffineFunction and VectorQuadraticFunction may contain duplicate coefficients.

For example, ScalarAffineFunction([ScalarAffineTerm(1.0, x), ScalarAffineTerm(1.0, x)], 0.0).

Use Utilities.canonical to return a new function with the duplicate coefficients aggregated together.
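
As a quick illustration of Utilities.canonical:

x = MOI.VariableIndex(1)
f = MOI.ScalarAffineFunction(
    [MOI.ScalarAffineTerm(1.0, x), MOI.ScalarAffineTerm(1.0, x)],
    0.0,
)
g = MOI.Utilities.canonical(f)  # `g` contains a single term: 2.0 * x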

Don't modify user-data

All data passed to the solver must be copied immediately to internal data structures. Solvers may not modify any input vectors and must assume that input vectors will not be modified by users in the future.

This applies, for example, to the terms vector in ScalarAffineFunction. Vectors returned to the user, for example, via ObjectiveFunction or ConstraintFunction attributes, must not be modified by the solver afterwards. The in-place version of get! can be used by users to avoid extra copies in this case.
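
A sketch of this, assuming the Optimizer caches the objective in a hypothetical objective field, might look like this:

function MOI.set(
    model::Optimizer,
    ::MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}},
    f::MOI.ScalarAffineFunction{Float64},
)
    # Copy the function so that later changes to `f` by the user do not
    # affect the data stored inside the optimizer.
    model.objective = copy(f)  # hypothetical storage field
    return
end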

Column generation

There is no special interface for column generation. If the solver has a special API for setting coefficients in existing constraints when adding a new variable, it is possible to queue modifications and new variables and then call the solver's API once all of the new coefficients are known.

Solver-specific attributes

You don't need to restrict yourself to the attributes defined in the MathOptInterface.jl package.

Solver-specific attributes should be specified by creating an appropriate subtype of AbstractModelAttribute, AbstractOptimizerAttribute, AbstractVariableAttribute, or AbstractConstraintAttribute.

For example, Gurobi.jl adds attributes for multiobjective optimization by defining:

struct NumberOfObjectives <: MOI.AbstractModelAttribute end

function MOI.set(model::Optimizer, ::NumberOfObjectives, n::Integer)
    # Code to set NumberOfObjectives
    return
end

function MOI.get(model::Optimizer, ::NumberOfObjectives)
    n = # Code to get NumberOfObjectives
    return n
end

Then, the user can write:

model = Gurobi.Optimizer()
MOI.set(model, Gurobi.NumberOfObjectives(), 3)