Stream: helpdesk (published)

Topic: Tricks for solving an ODE with a large derivative?


Robbie Rosati (Mar 05 2021 at 01:55):

I'm trying to solve an ODE with DifferentialEquations.jl

\mathbf{\Gamma}^\prime = \mathbf{u}(t)\,\mathbf{\Gamma}

where Γ and u are matrices. u is a block matrix, and has an element or two that are large enough that Tsit5() and Rodas5() overflow using Float64s. Using BigFloats slows down the solvers to a crawl, and I haven't actually gotten the equation of motion to solve.
Are there any tricks for speeding this up and/or avoiding overflows? If these were scalars I would just solve the log of this equation, but matrix logs need commutativity to apply the usual identity.
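
(A quick sketch of the obstruction, in the notation above: for a scalar, γ' = u γ gives (log γ)' = u directly. For a matrix Γ = e^{Ω(t)}, the derivative of the exponential is

\frac{d}{dt} e^{\Omega(t)} = \int_0^1 e^{s\Omega(t)}\,\Omega'(t)\,e^{(1-s)\Omega(t)}\,ds,

which only reduces to \Omega'(t)\,e^{\Omega(t)} when [\Omega(t), \Omega'(t)] = 0, so \Omega' = u no longer follows; correcting for this is essentially what the Magnus expansion, suggested further down, does.)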

Here's some working and probably badly optimized code:

using DifferentialEquations
using LinearAlgebra
using LambertW

const d = 3
const Tend = 75.0
const Tstart = Tend - 65.0
const k1 = 10.0
const L = 2/k1
const λ = 0.7
const ϕ₀ = 20.0
const πv = -3L
function Eh(t)
    return 3L*λ/(2+L*λ)
end

const H₀ = 1/(3-Eh(0.0))
const πw = sqrt(L*λ*(3-Eh(0.0)) - 9L^2)

function g(t)
    f = ϕ(t)
    return Diagonal([1,exp(k1*f),exp(k1*f)])
end

function V(t)
    return exp(λ*ϕ(t))
end

function H(t)
    return sqrt(V(t)/(3-Eh(t)))
end

function ϕ(t)
    return ϕ₀-2/λ*lambertw(3λ*L*t/(2*sqrt(3-Eh(t)))*exp(λ*ϕ₀/2))
end

function Mab(t)
    e = Eh(t)
    Vab = Diagonal((3-e).*[λ^2,k1*λ/2,k1*λ/2])
    piu = [πv, πw, 0]
    pil = g(t)*piu
    gvl = [λ*(3-e),0.0,0.0]
    gvu = inv(g(t))*gvl
    Rππ = (piu'.*pil - I*2*e)/L^2
    Pot = (3-e)*piu'.*pil + piu'.*gvl + gvu'.*pil
    return Vab + Rππ + Pot
end

function hp(t)
    return [ 0 exp(k1*ϕ(t))*k1*πw/2 0;
            k1*πw/2  k1*πv/2 0;
            0 0 k1*πv/2]
end

const loga = -127.0
function Gprime(dG,G,logk,t)
    @inbounds begin
        T = eltype(G)
        u = Array{T}(undef,2*d,2*d)
        u[1:d,1:d] .= zeros(T,(d,d))
        u[1:d,d+1:end] .= Matrix{T}(I,d,d)
        u[d+1:end,1:d] .= -Matrix{T}(I,d,d)*exp(2*(logk-t-loga-log(H(t)))) .- Mab(t)
        u[d+1:end,d+1:end] .= Matrix{T}(I,d,d)*(Eh(t)-3)
        hpt = hp(t)
        G11 = @view G[1:d,:]
        G12 = @view G[d+1:end,:]
        # second term applies hp(t) block-diagonally to the two block rows of G
        dG .= u*G .- vcat(hpt*G11, hpt*G12)
    end
    nothing
end


prob = ODEProblem(Gprime,Matrix{Float64}(I,2d,2d),(Float64(Tstart),Float64(Tend)),-136.0)
solve(prob,Tsit5(),reltol=1e-10,abstol=1e-10,progress=true)

Typically I get retcodes of Unstable or DtLessThanMin.

Mason Protter (Mar 05 2021 at 02:02):

@Christopher Rackauckas, if he's around, is probably the ideal person to comment, but I can try to give some very generic advice.

Mason Protter (Mar 05 2021 at 02:02):

First off, are you sure that your differential equation is correct? E.g. are there some consistency checks you can do?

Christopher Rackauckas (Mar 05 2021 at 02:03):

Yes, first make sure it's actually the equation you wanted to solve. That's 99.9% of issues.

Christopher Rackauckas (Mar 05 2021 at 02:04):

If it does solve with BigFloat but not Float64, then you might want to look into alternative number types. ArbFloats.jl, DoubleFloats.jl, etc.

Christopher Rackauckas (Mar 05 2021 at 02:04):

There's a large set of higher precision number types that are faster than the generic BigFloat.
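
For instance, a minimal sketch of swapping Double64 from DoubleFloats.jl into the problem above (untested; Gprime is already generic in the element type via eltype(G), but whether LambertW.jl's lambertw accepts Double64 arguments would need checking):

using DifferentialEquations, DoubleFloats, LinearAlgebra

# Hedged sketch: re-run the same problem in Double64 precision.
# d, Tstart, Tend, and Gprime are the definitions from the code above.
u0 = Matrix{Double64}(I, 2d, 2d)
tspan = (Double64(Tstart), Double64(Tend))
prob_dd = ODEProblem(Gprime, u0, tspan, Double64(-136.0))
sol_dd = solve(prob_dd, Tsit5(), reltol = 1e-20, abstol = 1e-20)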

Mason Protter (Mar 05 2021 at 02:05):

It may also be that there's a clever way you can rescale your problem so that you can just use Float64

Mason Protter (Mar 05 2021 at 02:06):

E.g. a very simple example would be that if the solution is of the form

f(x) = 1e100 + g(x)

then you can often just subtract off the big part and solve for g and get more stable numerics.
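
A toy illustration of that trick with DifferentialEquations.jl (a made-up equation, not the problem above): if f' = -(f - 1e100) with f(0) = 1e100 + 1, substituting f = 1e100 + g gives g' = -g with g(0) = 1, which is numerically benign:

using DifferentialEquations

# Toy example: solve for the O(1) offset g instead of the huge f directly.
# f(t) = 1e100 + g(t), so f' = -(f - 1e100) becomes g' = -g.
g_rhs(g, p, t) = -g
prob_g = ODEProblem(g_rhs, 1.0, (0.0, 5.0))
sol_g = solve(prob_g, Tsit5())

f_at(t) = 1e100 + sol_g(t)   # recover f only when it's actually needed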

Christopher Rackauckas (Mar 05 2021 at 02:08):

Using a non-generic ODE solver could help too: define the problem with a DiffEqArrayOperator and use the Magnus integrators, see https://diffeq.sciml.ai/stable/solvers/nonautonomous_linear_ode/#State-Independent-Solvers
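
A rough, untested sketch of what that could look like for the problem above (the name update_A! and the step size are mine): the hp(t) term in Gprime is just a block-diagonal matrix acting on G, so the whole right-hand side is linear and can be folded into a single time-dependent coefficient matrix A(t) with G' = A(t) G. Whether the Magnus integrators accept a matrix-valued initial state like the identity used above is worth checking against the linked docs:

using DifferentialEquations, LinearAlgebra

# Hedged sketch: fold u(t) and the block-diagonal hp(t) term from Gprime into
# one matrix A(t). Assumes d, loga, Mab, hp, H, Eh, Tstart, Tend from above.
function update_A!(A, u, p, t)
    logk = p
    hpt = hp(t)
    A[1:d, 1:d] .= -hpt
    A[1:d, d+1:end] .= Matrix(I, d, d)
    A[d+1:end, 1:d] .= -Matrix(I, d, d) .* exp(2*(logk - t - loga - log(H(t)))) .- Mab(t)
    A[d+1:end, d+1:end] .= Matrix(I, d, d) .* (Eh(t) - 3) .- hpt
    nothing
end

Aop = DiffEqArrayOperator(zeros(2d, 2d), update_func = update_A!)
prob_magnus = ODEProblem(Aop, Matrix{Float64}(I, 2d, 2d), (Tstart, Tend), -136.0)
sol_magnus = solve(prob_magnus, MagnusGL6(), dt = 1e-3)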

Mason Protter (Mar 05 2021 at 02:08):

But seriously, there have been so many times where I was trying to solve an equation and getting a solution that oscillated like crazy, or was super stiff, or blew up in other ways, and like the fourth time I double-checked the equation I realized I had made a mistake defining the diffeq.

Robbie Rosati (Mar 05 2021 at 02:09):

Mason Protter said:

It may also be that there's a clever way you can rescale your problem so that you can just use Float64

Yeah, if this equation is correct that's my hope. I'll triple check it and make sure.

Mason Protter (Mar 05 2021 at 02:19):

One trick you can use is to do an asymptotic expansion on your function to see the dominant behaviour and factor it out. E.g. you may be able to show that for large x, your solution goes as

f(x) \approx e^{-x}

in which case, you might be able to write

f(x) = e^{-x} g(x)

then you can stick this form of f into your differential equation and solve for g instead.
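
A toy version of that substitution (a made-up equation, not the one above): if f' = -f + exp(-2t), then f ~ exp(-t) at late times; writing f = exp(-t) g gives g' = exp(t) (f' + f) = exp(-t), which stays O(1):

using DifferentialEquations

# Toy example of factoring out known asymptotic decay: f(t) = exp(-t) * g(t).
# With f' = -f + exp(-2t), the equation for the remainder is g' = exp(-t).
g_rhs(g, p, t) = exp(-t)
prob = ODEProblem(g_rhs, 1.0, (0.0, 50.0))
sol_g = solve(prob, Tsit5())

f_at(t) = exp(-t) * sol_g(t)   # reconstruct f from the well-scaled g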

Andrew Dolgert (Mar 05 2021 at 04:34):

@Robbie Rosati Have you estimated the condition number of the u-matrix? If some numbers are very large, then you may have two problems: both overflow and stiffness. Mason suggests factorization can help. If it isn't stiff, then, yes, rescale the whole thing, but it's most likely stiff. There are some other ways to take Mason's advice: Picard iterations will approximate an integral over an interval. Operator splitting will let you combine a stable integration with an unstable one. Basically, find that condition number, and it may send you to a chapter on stiff equations.
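
A quick way to act on that suggestion (a sketch reusing the definitions from the code above; u_of and the sample times are mine): assemble the coefficient matrix at a few times and look at its condition number.

using LinearAlgebra

# Hedged sketch: sample cond of the u(t) matrix built inside Gprime at a few
# times to get a feel for stiffness. Assumes d, loga, Mab, H, Eh, Tstart, Tend
# from the code above; logk = -136.0 as in the ODEProblem.
function u_of(t; logk = -136.0)
    u = zeros(2d, 2d)
    u[1:d, d+1:end] .= Matrix(I, d, d)
    u[d+1:end, 1:d] .= -Matrix(I, d, d) .* exp(2*(logk - t - loga - log(H(t)))) .- Mab(t)
    u[d+1:end, d+1:end] .= Matrix(I, d, d) .* (Eh(t) - 3)
    return u
end

for t in range(Tstart, Tend; length = 6)
    println("t = ", t, "   cond(u) = ", cond(u_of(t)))
end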

Robbie Rosati (Mar 05 2021 at 17:44):

Thanks everyone for the replies. So, some updates:

I think there's a paper working on a similar model that estimated the growth of the solution, so I might try @Mason Protter's suggestion and see if I can estimate the growth and factor it out.

Mason Protter (Mar 05 2021 at 17:52):

You may want to give https://github.com/dzhang314/MultiFloats.jl a try. Should be about 2x faster than Double64, but with worse NaN handling.

Mason Protter (Mar 05 2021 at 17:53):

Though they don't support transcendental functions yet, so that might be a dealbreaker.

