Stream: helpdesk (published)

Topic: Juniper tuning


view this post on Zulip DrChainsaw (May 30 2021 at 09:25):

I’m toying around with Juniper.jl and I wonder if there exists any guideline for how to tune it when convergence is too slow. My problem has about 900 binary variables.

For example, is there advice on a good strategy when all constraints are linear (so it is presumably very easy to find feasible solutions) and only the objective is non-linear?

Furthermore, the non-linearity of the objective does not seem to matter much; even if I change to a linear objective, Juniper seems to choke (while Cbc solves it near instantly).

Here is a (perhaps too) simplified MWE for the latter. In case it is not clear from the example: I'm not very experienced with mathematical programming, so I'm fully aware that I might have formulated the problem in a stupid way.

# Packages the example below assumes
using JuMP, Juniper, Ipopt, Cbc
using Random, LinearAlgebra

function createmodel()
    optimizer = Juniper.Optimizer
    params = Dict{Symbol,Any}()
    params[:nl_solver] = with_optimizer(Ipopt.Optimizer, print_level=0)
    params[:mip_solver] = with_optimizer(Cbc.Optimizer, logLevel=0)
    return Model(with_optimizer(optimizer, params))
end

function mwe(psize; model=createmodel(), rng = Random.MersenneTwister(1))
    # psize x psize choice variables, started at the identity assignment
    assign = @variable(model, assign[i=1:psize, j=1:psize], Bin; start = (i == j ? 1.0 : 0.0))

    # Constraints to select exactly one in each row and column
    @constraint(model, [j=1:psize], sum(assign[i, j] for i in 1:psize) == 1)
    @constraint(model, [i=1:psize], sum(assign[i, j] for j in 1:psize) == 1)

    costmat = abs.(randn(rng, psize, psize))
    cost = @expression(model, cost[i=1:psize,j=1:psize], assign[i, j] * costmat[i, j])

    # Minimize the maximum cost, real objective does not have this in it at all though
    maxcost = @variable(model, maxcost)
    @constraint(model, [i=1:length(cost)], maxcost >= cost[i])
    @objective(model, Min, maxcost)

    optimize!(model)

    @show JuMP.termination_status(model)
    @show JuMP.objective_value(model)
    return JuMP.value.(assign)
end

mwe(30) # Very slow convergence

mwe(30; model=JuMP.Model(JuMP.with_optimizer(Cbc.Optimizer))) # Pretty much instant

view this post on Zulip Ole Kröger (May 30 2021 at 09:30):

Hey @DrChainsaw thanks for using Juniper :wink:
Juniper works best when the problem is non-linear and non-convex. Cbc can do a lot of special things for this kind of linear problem. Nevertheless, the first result using the feasibility pump should be good. I'll have a look as well.

view this post on Zulip DrChainsaw (May 30 2021 at 09:38):

@Ole Kröger Thanks for making it!

I can make the non-linear mwe version as well, but it will be a bit more involved. I was thinking that making a linear version of the problem should not make it harder, but there is ofc the risk that it is hard for a different reason.

The real problem maximizes the harmonic mean of something like log(a .* assigned ./ (b .* assigned + eps)), in case this helps.

view this post on Zulip Ole Kröger (May 30 2021 at 09:52):

The non-linear solver Ipopt can't do all the great stuff a linear solver can do. And 900 variables is unfortunately already quite a lot for a non-convex MINLP solver.

view this post on Zulip DrChainsaw (May 30 2021 at 10:14):

The non-linear solver Ipopt can't do all the great stuff a linear solver can do

Yes, this much I have gathered :)

The linear mwe was more or less me trying to break down the problem to see if I could find the pain points. E.g. if Cbc can't solve the linear mwe then there would be little hope for Juniper to solve the non-linear version and I would need to think of another way to formulate the problem.

I'm not expecting Juniper to solve it as fast as Cbc, for the very reasons you mentioned. I was more hoping that there would exist some guideline for when the constraints are linear/simple (e.g. use more strong branching). It was a shot in the dark given how little I know about how to tackle MINLPs.

Either way, I'm basically just toying around, and if there is no simple guide I'll just keep playing with it a bit while reading up on the different strategies while the program runs, to see if I get an idea. That is also part of the fun.

Fwiw, I do get pretty good solutions if I set a timeout. Perhaps I'll add a genetic algorithm to play with the Juniper strategies :troll:

view this post on Zulip Ole Kröger (May 30 2021 at 10:16):

Understood, unfortunately I don't have an answer for that :smiley: You might want to play around with running in parallel and longer strong branching. Have you actually let Juniper run till the end? If so: how long does it take?
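Those knobs go through the same params Dict used in createmodel() above; a minimal sketch of what that might look like (the option names below reflect a reading of Juniper's documentation and should be verified against the version in use; the values are placeholders, not recommendations):

```julia
# Sketch only: passing tuning options to Juniper via its params Dict.
# The option names (branch_strategy, processors, time_limit) are assumptions
# taken from Juniper's documented settings -- double-check them for your version.
params = Dict{Symbol,Any}(
    :branch_strategy => :StrongPseudoCost,  # lean more heavily on strong branching
    :processors      => 4,                  # run the tree search in parallel
    :time_limit      => 600.0,              # seconds; the incumbent is returned on timeout
)
```

The Dict would then be passed to the optimizer the same way as in createmodel().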

view this post on Zulip DrChainsaw (May 30 2021 at 10:34):

I have left it running for a couple of days and the "symptom" is that neither the incumbent nor the BestBound seems to budge from its initial value. Increasing strong branching seems to give a better solution (i.e. a better incumbent which stays for the remainder of the optimization), so that is what I'm playing with now.

view this post on Zulip DrChainsaw (May 30 2021 at 10:40):

Ha! I just made what seemed like a trivial change to the non-linear version and it terminated in just a few minutes!!

Something like log(a .* assigned ./ (sum(f(assigned)) + eps)) => log(a ./ (sum(assigned .* f(assigned)) + eps)) (not exactly this; I have verified that they are equivalent), but basically moving a variable from the numerator to the denominator.

view this post on Zulip Ole Kröger (May 30 2021 at 15:32):

Oh damn I missed your message. Great that you have figured it out though :thumbs_up:


Last updated: Dec 28 2024 at 04:38 UTC