I'm trying to solve an ODE with DifferentialEquations.jl where `Γ` and `u` are matrices. `u` is a block matrix, and has an element or two that are large enough that `Tsit5()` and `Rodas5()` overflow using `Float64`s. Using `BigFloat`s slows down the solvers to a crawl, and I haven't actually gotten the equation of motion to solve. Are there any tricks for speeding this up and/or avoiding overflows? If these were scalars I would just solve the log of this equation, but matrix logs need commutativity to apply the usual identity.
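(To illustrate the commutativity problem: the scalar identity log(a*b) = log(a) + log(b) fails for matrices as soon as the factors don't commute. The matrices below are made up purely for demonstration.)

```julia
using LinearAlgebra

# Two non-commuting matrices: A*B != B*A
A = [1.0 1.0; 0.0 1.0]
B = [1.0 0.0; 1.0 1.0]

# For scalars, log(a*b) == log(a) + log(b); for matrices this identity
# requires A and B to commute, which these do not.
lhs = log(A * B)          # matrix logarithm of the product
rhs = log(A) + log(B)     # sum of the matrix logarithms

println(norm(lhs - rhs))  # clearly nonzero
```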
Here's some working and probably badly optimized code:
```julia
using DifferentialEquations
using LinearAlgebra
using LambertW

const d = 3
const Tend = 75.0
const Tstart = Tend - 65.0
const k1 = 10.0
const L = 2/k1
const λ = 0.7
const ϕ₀ = 20.0
const πv = -3L

function Eh(t)
    return 3L*λ/(2 + L*λ)
end

# H₀ and πw must come after Eh: a script evaluates top to bottom,
# and both call Eh(0.0) at definition time.
const H₀ = 1/(3 - Eh(0.0))
const πw = sqrt(L*λ*(3 - Eh(0.0)) - 9L^2)

function g(t)
    f = ϕ(t)
    return Diagonal([1, exp(k1*f), exp(k1*f)])
end

function V(t)
    return exp(λ*ϕ(t))
end

function H(t)
    return sqrt(V(t)/(3 - Eh(t)))
end

function ϕ(t)
    return ϕ₀ - 2/λ*lambertw(3λ*L*t/(2*sqrt(3 - Eh(t)))*exp(λ*ϕ₀/2))
end

function Mab(t)
    e = Eh(t)
    Vab = Diagonal((3 - e) .* [λ^2, k1*λ/2, k1*λ/2])
    piu = [πv, πw, 0]
    pil = g(t)*piu
    gvl = [λ*(3 - e), 0.0, 0.0]
    gvu = inv(g(t))*gvl
    Rππ = (piu' .* pil - I*2*e)/L^2
    Pot = (3 - e)*piu' .* pil + piu' .* gvl + gvu' .* pil
    return Vab + Rππ + Pot
end

function hp(t)
    return [0        exp(k1*ϕ(t))*k1*πw/2  0;
            k1*πw/2  k1*πv/2               0;
            0        0                     k1*πv/2]
end

const loga = -127.0

# In-place RHS; the parameter p is logk.
function Gprime(dG, G, logk, t)
    @inbounds begin
        T = eltype(G)
        u = Array{T}(undef, 2*d, 2*d)
        u[1:d, 1:d] .= zero(T)
        u[1:d, d+1:end] .= Matrix{T}(I, d, d)
        u[d+1:end, 1:d] .= -Matrix{T}(I, d, d)*exp(2*(logk - t - loga - log(H(t)))) .- Mab(t)
        u[d+1:end, d+1:end] .= Matrix{T}(I, d, d)*(Eh(t) - 3)
        Gπ = hp(t)
        G11 = @view G[1:d, :]
        G12 = @view G[d+1:end, :]
        dG .= u*G .- vcat(Gπ*G11, Gπ*G12)
    end
    nothing
end

prob = ODEProblem(Gprime, Matrix{Float64}(I, 2d, 2d), (Float64(Tstart), Float64(Tend)), -136.0)
solve(prob, Tsit5(), reltol=1e-10, abstol=1e-10, progress=true)
```
Typically I get retcodes of Unstable or DtLessThanMin.
@Christopher Rackauckas if he's around is probably the ideal person to comment, but I can try to give some very generic advice.
First off, are you sure that your differential equation is correct? E.g. are there some consistency checks you can do?
Yes, first make sure it's actually the equation you wanted to solve. That's 99.9% of issues.
If it does solve with BigFloat but not Float64, then you might want to look into alternative number types. ArbFloats.jl, DoubleFloats.jl, etc.
There's a large set of higher precision number types that are faster than the generic BigFloat.
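(For reference, the overflow and the number-type tradeoff are easy to see directly; the numbers below are illustrative, not from the problem above.)

```julia
# Float64 overflows past exp(709.78...), which is one way solvers blow up:
x = 1000.0
println(exp(x))                 # Inf

# BigFloat avoids the overflow thanks to its much larger exponent range,
# at a cost in speed. Precision is adjustable; even a modest precision
# keeps the huge exponent range.
y = setprecision(BigFloat, 128) do
    exp(big"1000.0")            # finite, roughly 1.97e434
end
println(y)
```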
It may also be that there's a clever way you can rescale your problem so that you can just use Float64
E.g. a very simple example would be that if the solution is of the form

f(x) = 1e100 + g(x)

then you can often just subtract off the big part and solve for g, getting more stable numerics.
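(A toy version of this trick, with a made-up scalar example and a crude hand-rolled Euler step rather than a real solver: at magnitude 1e100, `Float64` can't even represent the O(1) part of `f`, but `g` alone is perfectly well behaved.)

```julia
# f(t) = 1e100 + g(t) with g'(t) = -g(t).
# Solving for f directly in Float64 is hopeless: the offset swallows g.
println(1e100 + 1.0 == 1e100)   # true: 1.0 is below Float64 resolution at 1e100

# Subtracting off the big part and integrating g instead is stable.
function integrate_g(g0, h, n)
    g = g0
    for _ in 1:n
        g += h * (-g)           # explicit Euler step for g' = -g
    end
    return g
end

g1 = integrate_g(1.0, 1e-4, 10_000)   # integrate to t = 1
println(g1)                           # ≈ exp(-1) ≈ 0.3679
```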
Using a non-generic ODE solver could help too: define the problem with a DiffEqArrayOperator and use the Magnus integrators https://diffeq.sciml.ai/stable/solvers/nonautonomous_linear_ode/#State-Independent-Solvers
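(The idea behind those Magnus methods can be sketched without the package. This is a hand-rolled second-order illustration of the principle, not the DiffEqArrayOperator API; the test problem is a made-up rotation with a known exact flow.)

```julia
using LinearAlgebra

# Second-order Magnus step for Γ'(t) = u(t) * Γ(t):
#   Γ(t+h) ≈ exp(h * u(t + h/2)) * Γ(t).
# Each step follows the exact flow of a frozen u, which is often far more
# stable for linear problems than a generic Runge-Kutta step.
function magnus2(u, Γ0, tspan, h)
    t, Γ = tspan[1], copy(Γ0)
    while t < tspan[2] - h/2
        Γ = exp(h * u(t + h/2)) * Γ   # matrix exponential from LinearAlgebra
        t += h
    end
    return Γ
end

# Check against a rotation: u(t) = [0 1; -1 0] has exact flow
# [cos t  sin t; -sin t  cos t], so integrating to t = π should give ≈ -I.
u(t) = [0.0 1.0; -1.0 0.0]
Γ = magnus2(u, Matrix{Float64}(I, 2, 2), (0.0, π), 1e-3)
println(norm(Γ + I))   # small: Γ ≈ -I up to the step-size error
```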
But seriously, there have been so many times where I was trying to solve an equation and getting a solution that oscillated like crazy, or was super stiff, or blew up in other ways, and on about the fourth time I double-checked the equation I realized I had made a mistake defining the diffeq.
Mason Protter said:
It may also be that there's a clever way you can rescale your problem so that you can just use Float64
Yeah, if this equation is correct that's my hope. I'll triple check it and make sure.
One trick you can use is to try to do an asymptotic expansion on your function to see various behaviour and factor it out. E.g. you may be able to show that for large x, your solution goes as

f(x) ≈ A(x)

in which case, you might be able to write

f(x) = A(x) g(x)

then you can stick this form of f into your differential equation and solve for g instead.
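(A toy scalar version of this factoring, with made-up dynamics and a crude Euler step instead of a real solver: the raw solution overflows `Float64` long before the end of the integration, but the factored unknown stays O(1).)

```julia
# Suppose analysis shows y(t) ~ exp(100t) for large t. Take, say,
# y' = (100 + sin(t)) * y: writing y(t) = exp(100t) * g(t) cancels the
# explosive part, leaving g' = sin(t) * g — an O(1) problem.
println(exp(100 * 10.0))   # Inf: y(10) itself is not representable in Float64

function integrate_factored(g0, h, n)
    g, t = g0, 0.0
    for _ in 1:n
        g += h * sin(t) * g    # crude explicit Euler for g' = sin(t) * g
        t += h
    end
    return g
end

g10 = integrate_factored(1.0, 1e-3, 10_000)   # integrate to t = 10
println(g10)   # ≈ exp(1 - cos(10)) ≈ 6.29, comfortably in Float64 range
```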
@Robbie Rosati Have you estimated the condition number of the u-matrix? If some numbers are very large, then you may have two problems: both overflow and stiffness. Mason suggests factorization can help. If it isn't stiff, then, yes, rescale the whole thing, but it's most likely stiff. There are some other ways to take Mason's advice: Picard iterations will approximate an integral over an interval. Operator splitting will let you combine a stable integration with an unstable one. Basically, find that condition number, and it may send you to a chapter on stiff equations.
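(Estimating the condition number and the eigenvalue spread is cheap with LinearAlgebra; the matrix below is a stand-in with widely separated scales, not the actual u-matrix from the problem.)

```julia
using LinearAlgebra

# A stand-in Jacobian with widely separated scales, like a stiff problem's:
J = Diagonal([-1.0, -1e8, -1e-4])

println(cond(Matrix(J)))                  # ≈ 1e12: badly conditioned

# The spread of eigenvalue magnitudes is a common stiffness indicator:
λs = eigvals(Matrix(J))
stiffness_ratio = maximum(abs, λs) / minimum(abs, λs)
println(stiffness_ratio)                  # ≈ 1e12: strong hint to use a stiff solver
```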
Thanks everyone for the replies. So, some updates:

- Decreasing `Tend` helps. This also decreases the condition number by a lot.
- I tried `Rosenbrock23(autodiff=false)` (with the original `Tend`) and I get `DtLessThanMin` with `Float64`s. It seems to be working with `Double64`s but is very slow. Is there a better solver to try, or is there a better way to massage the equation?

I think there's a paper working on a similar model that estimated the growth of the solution; I might try @Mason Protter's suggestion and see if I can estimate the growth and factor it out.
You may want to give https://github.com/dzhang314/MultiFloats.jl a try. Should be about 2x faster than `Double64`, but with worse `NaN` handling.
Though they don't support transcendental functions yet, so that might be a dealbreaker.
Last updated: Dec 28 2024 at 04:38 UTC