How and why ParametricOptInterface is needed
JuMP and MathOptInterface have support for parameters. Parameters are decision variables that belong to the Parameter set. The Parameter set is conceptually similar to the EqualTo set, except that solvers may treat a decision variable constrained to the Parameter set as a constant, and they do not need to add it as a decision variable to the model.
In JuMP, a parameter can be added using the following syntax:
```julia
julia> using JuMP

julia> model = Model();

julia> @variable(model, p in Parameter(2))
p

julia> parameter_value(p)
2.0

julia> set_parameter_value(p, 3.0)

julia> parameter_value(p)
3.0
```
In MathOptInterface, a parameter can be added using the following syntax:
```julia
julia> import MathOptInterface as MOI

julia> model = MOI.Utilities.Model{Float64}();

julia> p, p_con = MOI.add_constrained_variable(model, MOI.Parameter(2.0))
(MOI.VariableIndex(1), MathOptInterface.ConstraintIndex{MathOptInterface.VariableIndex, MathOptInterface.Parameter{Float64}}(1))

julia> MOI.get(model, MOI.ConstraintSet(), p_con)
MathOptInterface.Parameter{Float64}(2.0)

julia> new_set = MOI.Parameter(3.0)
MathOptInterface.Parameter{Float64}(3.0)

julia> MOI.set(model, MOI.ConstraintSet(), p_con, new_set)

julia> MOI.get(model, MOI.ConstraintSet(), p_con)
MathOptInterface.Parameter{Float64}(3.0)
```
Some solvers have native support for parameters
One example is Ipopt. To demonstrate, consider the following example: even though there are two @variable calls, the Ipopt log shows that it solved a problem with only one decision variable:
```julia
julia> using JuMP, Ipopt

julia> model = Model(Ipopt.Optimizer)
A JuMP Model
├ solver: Ipopt
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> @variable(model, x)
x

julia> @variable(model, p in Parameter(1))
p

julia> @constraint(model, x + p >= 3)
x + p ≥ 3

julia> @objective(model, Min, 2x)
2 x

julia> optimize!(model)

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.2.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        1
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:        1
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  0.0000000e+00 2.00e+00 5.00e-01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  4.1399999e+00 0.00e+00 1.00e-06  -1.0 2.07e+00    -  1.00e+00 1.00e+00h  1
   2  4.0028284e+00 0.00e+00 2.83e-08  -2.5 6.86e-02    -  1.00e+00 1.00e+00f  1
   3  4.0001504e+00 0.00e+00 1.50e-09  -3.8 1.34e-03    -  1.00e+00 1.00e+00f  1
   4  4.0000018e+00 0.00e+00 1.84e-11  -5.7 7.43e-05    -  1.00e+00 1.00e+00f  1
   5  3.9999999e+00 0.00e+00 2.49e-14  -8.6 9.21e-07    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 5

                                   (scaled)                 (unscaled)
Objective...............:   3.9999999425059038e+00    3.9999999425059038e+00
Dual infeasibility......:   2.4868995751603507e-14    2.4868995751603507e-14
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5059039288067052e-09    2.5059039288067052e-09
Overall NLP error.......:   2.5059039288067052e-09    2.5059039288067052e-09

Number of objective function evaluations             = 6
Number of objective gradient evaluations             = 6
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 6
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations             = 1
Total seconds in IPOPT                               = 3.827

EXIT: Optimal Solution Found.
```
Internally, Ipopt replaced the parameter p with the constant 1.0, and solved the problem:
```julia
julia> using JuMP, Ipopt

julia> model = Model(Ipopt.Optimizer)
A JuMP Model
├ solver: Ipopt
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> @variable(model, x)
x

julia> @constraint(model, x + 1 >= 3)
x ≥ 2

julia> @objective(model, Min, 2x)
2 x

julia> optimize!(model)
This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.2.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        1
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:        1
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  0.0000000e+00 2.00e+00 5.00e-01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  4.1599999e+00 0.00e+00 1.00e-06  -1.0 2.08e+00    -  1.00e+00 1.00e+00h  1
   2  4.0028285e+00 0.00e+00 2.83e-08  -2.5 7.86e-02    -  1.00e+00 1.00e+00f  1
   3  4.0001504e+00 0.00e+00 1.50e-09  -3.8 1.34e-03    -  1.00e+00 1.00e+00f  1
   4  4.0000018e+00 0.00e+00 1.84e-11  -5.7 7.43e-05    -  1.00e+00 1.00e+00f  1
   5  4.0000000e+00 0.00e+00 2.49e-14  -8.6 9.21e-07    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 5

                                   (scaled)                 (unscaled)
Objective...............:   3.9999999625059033e+00    3.9999999625059033e+00
Dual infeasibility......:   2.4868995751603507e-14    2.4868995751603507e-14
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5059034847174954e-09    2.5059034847174954e-09
Overall NLP error.......:   2.5059034847174954e-09    2.5059034847174954e-09

Number of objective function evaluations             = 6
Number of objective gradient evaluations             = 6
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 6
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations             = 1
Total seconds in IPOPT                               = 0.002

EXIT: Optimal Solution Found.
```
Why parameters are useful
Parameters are most useful when you want to solve a sequence of problems in which some of the data changes between iterations:
```julia
julia> using JuMP, Ipopt

julia> model = Model(Ipopt.Optimizer)
A JuMP Model
├ solver: Ipopt
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> set_silent(model)

julia> @variable(model, x)
x

julia> @variable(model, p in Parameter(1))
p

julia> @constraint(model, x + p >= 3)
x + p ≥ 3

julia> @objective(model, Min, 2x)
2 x

julia> solution = Dict{Int,Float64}()
Dict{Int64, Float64}()

julia> for p_value in 0:5
           set_parameter_value(p, p_value)
           optimize!(model)
           assert_is_solved_and_feasible(model)
           solution[p_value] = value(x)
       end

julia> solution
Dict{Int64, Float64} with 6 entries:
  0 => 3.0
  4 => -1.0
  5 => -2.0
  2 => 1.0
  3 => -2.8747e-8
  1 => 2.0
```
Some solvers do not have native support for parameters
Even though solvers like Ipopt support parameters, many solvers do not. One example is HiGHS. Although HiGHS doesn't support parameters, you can still build and solve a model with parameters:
```julia
julia> using JuMP, HiGHS

julia> model = Model(HiGHS.Optimizer)
A JuMP Model
├ solver: HiGHS
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> @variable(model, x)
x

julia> @variable(model, p in Parameter(1))
p

julia> @constraint(model, x + p >= 3)
x + p ≥ 3

julia> @objective(model, Min, 2x)
2 x

julia> optimize!(model)
Running HiGHS 1.13.1 (git hash: 1d267d97c): Copyright (c) 2026 under Apache 2.0 license terms
Using BLAS: blastrampoline
LP has 1 row; 2 cols; 2 nonzeros
Coefficient ranges:
  Matrix [1e+00, 1e+00]
  Cost   [2e+00, 2e+00]
  Bound  [1e+00, 1e+00]
  RHS    [3e+00, 3e+00]
Presolving model
0 rows, 0 cols, 0 nonzeros  0s
0 rows, 0 cols, 0 nonzeros  0s
Presolve reductions: rows 0(-1); columns 0(-2); nonzeros 0(-2) - Reduced to empty
Performed postsolve
Solving the original LP from the solution after postsolve
Model status        : Optimal
Objective value     :  4.0000000000e+00
P-D objective error :  0.0000000000e+00
HiGHS run time      :          0.00
```
This works because, behind the scenes, the bridges in MathOptInterface rewrote p in Parameter(1) to p in MOI.EqualTo(1.0):
```julia
julia> print_active_bridges(model)
 * Supported objective: MOI.ScalarAffineFunction{Float64}
 * Supported constraint: MOI.ScalarAffineFunction{Float64}-in-MOI.GreaterThan{Float64}
 * Unsupported variable: MOI.Parameter{Float64}
 |  bridged by:
 |   MOIB.Variable.ParameterToEqualToBridge{Float64}
 |  may introduce:
 |   * Supported variable: MOI.EqualTo{Float64}
```
Thus, HiGHS solved the problem:
```julia
julia> using JuMP, HiGHS

julia> model = Model(HiGHS.Optimizer)
A JuMP Model
├ solver: HiGHS
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> @variable(model, x)
x

julia> @variable(model, p == 1)
p

julia> @constraint(model, x + p >= 3)
x + p ≥ 3

julia> @objective(model, Min, 2x)
2 x

julia> optimize!(model)
Running HiGHS 1.13.1 (git hash: 1d267d97c): Copyright (c) 2026 under Apache 2.0 license terms
Using BLAS: blastrampoline
LP has 1 row; 2 cols; 2 nonzeros
Coefficient ranges:
  Matrix [1e+00, 1e+00]
  Cost   [2e+00, 2e+00]
  Bound  [1e+00, 1e+00]
  RHS    [3e+00, 3e+00]
Presolving model
0 rows, 0 cols, 0 nonzeros  0s
0 rows, 0 cols, 0 nonzeros  0s
Presolve reductions: rows 0(-1); columns 0(-2); nonzeros 0(-2) - Reduced to empty
Performed postsolve
Solving the original LP from the solution after postsolve
Model status        : Optimal
Objective value     :  4.0000000000e+00
P-D objective error :  0.0000000000e+00
HiGHS run time      :          0.00
```
The downside of the bridge approach is that it adds a new decision variable with fixed bounds for every parameter in the problem. Moreover, the bridge approach cannot handle parameter * variable terms, because the resulting constraint is quadratic:
```julia
julia> using JuMP, HiGHS

julia> model = Model(HiGHS.Optimizer);

julia> @variable(model, x);

julia> @variable(model, p in Parameter(1));

julia> @constraint(model, p * x >= 3)
ERROR: Constraints of type MathOptInterface.ScalarQuadraticFunction{Float64}-in-MathOptInterface.GreaterThan{Float64} are not supported by the solver.

If you expected the solver to support your problem, you may have an error in your formulation. Otherwise, consider using a different solver.

The list of available solvers, along with the problem types they support, is available at https://jump.dev/JuMP.jl/stable/installation/#Supported-solvers.
Stacktrace:
[...]
```

ParametricOptInterface
ParametricOptInterface (POI) provides Optimizer, a meta-optimizer that wraps another optimizer. Instead of adding fixed variables to the model, POI substitutes the parameters with their values before passing the constraint or objective to the inner optimizer. When a parameter value changes, POI efficiently modifies the inner optimizer to reflect the new value.
```julia
julia> using JuMP, HiGHS

julia> import ParametricOptInterface as POI

julia> model = Model(() -> POI.Optimizer(HiGHS.Optimizer()));

julia> @variable(model, x);

julia> @variable(model, p in Parameter(1));

julia> @constraint(model, x + p >= 3);

julia> @objective(model, Min, 2x);

julia> optimize!(model)
Running HiGHS 1.13.1 (git hash: 1d267d97c): Copyright (c) 2026 under Apache 2.0 license terms
Using BLAS: blastrampoline
LP has 1 row; 1 col; 1 nonzero
Coefficient ranges:
  Matrix [1e+00, 1e+00]
  Cost   [2e+00, 2e+00]
  Bound  [0e+00, 0e+00]
  RHS    [2e+00, 2e+00]
Presolving model
0 rows, 0 cols, 0 nonzeros  0s
0 rows, 0 cols, 0 nonzeros  0s
Presolve reductions: rows 0(-1); columns 0(-1); nonzeros 0(-1) - Reduced to empty
Performed postsolve
Solving the original LP from the solution after postsolve
Model status        : Optimal
Objective value     :  4.0000000000e+00
P-D objective error :  0.0000000000e+00
HiGHS run time      :          0.00
```
Note how HiGHS now solves a problem with one decision variable.
Because POI replaces parameters with their constant value, POI supports parameter * variable terms:
```julia
julia> using JuMP, HiGHS

julia> import ParametricOptInterface as POI

julia> model = Model(() -> POI.Optimizer(HiGHS.Optimizer()));

julia> @variable(model, x);

julia> @variable(model, p in Parameter(1));

julia> @constraint(model, p * x >= 3)
p*x ≥ 3

julia> @objective(model, Min, 2x)
2 x

julia> optimize!(model)
Running HiGHS 1.13.1 (git hash: 1d267d97c): Copyright (c) 2026 under Apache 2.0 license terms
Using BLAS: blastrampoline
LP has 1 row; 1 col; 1 nonzero
Coefficient ranges:
  Matrix [1e+00, 1e+00]
  Cost   [2e+00, 2e+00]
  Bound  [0e+00, 0e+00]
  RHS    [3e+00, 3e+00]
Presolving model
0 rows, 0 cols, 0 nonzeros  0s
0 rows, 0 cols, 0 nonzeros  0s
Presolve reductions: rows 0(-1); columns 0(-1); nonzeros 0(-1) - Reduced to empty
Performed postsolve
Solving the original LP from the solution after postsolve
Model status        : Optimal
Objective value     :  6.0000000000e+00
P-D objective error :  0.0000000000e+00
HiGHS run time      :          0.00
```
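Because POI supports parameter * variable terms, the set_parameter_value loop used earlier with Ipopt also works through the POI wrapper. The following is an illustrative sketch (not from the original docs; it assumes JuMP, HiGHS, and ParametricOptInterface are installed) that re-solves the p * x ≥ 3 model for several parameter values:

```julia
# Sketch: re-solving a POI-wrapped model as the parameter value changes.
# Assumes the JuMP, HiGHS, and ParametricOptInterface packages are installed.
using JuMP, HiGHS
import ParametricOptInterface as POI

model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
set_silent(model)
@variable(model, x)
@variable(model, p in Parameter(1))
# A parameter * variable term: POI substitutes the value of p, so the
# inner HiGHS model sees only a linear constraint in x.
@constraint(model, p * x >= 3)
@objective(model, Min, 2x)
solution = Dict{Int,Float64}()
for p_value in 1:3
    # POI updates the coefficient of x in the inner optimizer in-place.
    set_parameter_value(p, p_value)
    optimize!(model)
    assert_is_solved_and_feasible(model)
    solution[p_value] = value(x)
end
```

Because the optimal solution of min 2x subject to p * x ≥ 3 is x = 3 / p for positive p, the dictionary should hold approximately 3.0, 1.5, and 1.0 for p = 1, 2, and 3.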
When to use ParametricOptInterface
To summarize, you should use ParametricOptInterface when:
- you are using a solver that does not have native support for parameters
- you are solving a single problem for multiple values of the parameters.
For problems with a small number of parameters, in which the parameters appear additively in the constraints and the objective, the bridge approach may be more efficient. In general, you should try with and without POI and choose the approach that works best for your model.
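One way to make that comparison concrete is to time the same re-solve loop with and without the POI wrapper. The following is a hypothetical benchmark sketch (not part of the original docs; the solve_loop helper is invented for illustration, and the code assumes JuMP, HiGHS, and ParametricOptInterface are installed):

```julia
# Sketch: comparing repeated solves via the bridge approach and via POI.
# Assumes the JuMP, HiGHS, and ParametricOptInterface packages are installed.
using JuMP, HiGHS
import ParametricOptInterface as POI

# Build a small parameterized model with the given optimizer factory,
# then re-solve it for 100 different parameter values.
function solve_loop(optimizer)
    model = Model(optimizer)
    set_silent(model)
    @variable(model, x)
    @variable(model, p in Parameter(1))
    @constraint(model, x + p >= 3)
    @objective(model, Min, 2x)
    for p_value in 1:100
        set_parameter_value(p, p_value)
        optimize!(model)
    end
    return model
end

# Warm-up calls so compilation time is excluded from the timings.
solve_loop(HiGHS.Optimizer)
solve_loop(() -> POI.Optimizer(HiGHS.Optimizer()))

@time solve_loop(HiGHS.Optimizer)                         # bridge approach
@time solve_loop(() -> POI.Optimizer(HiGHS.Optimizer()))  # POI approach
```

Which variant is faster depends on the model and solver, which is why measuring both on your own problem is the reliable way to choose.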