I'm thinking of having a family of methods where I check to see if the world age they were compiled with is earlier than X and if so call invokelatest, but I can't see an easy way to check the world age of a method. Might somebody know how best to do this?
Probably not the best way, but so far this has worked for me:
struct NoFutureWarn end

struct FutureFunction{F}
    f::F
end

function (f::FutureFunction)(args...)
    # try_advance_world_age! does not mutate anything in this version as those parts are stripped out here.
    # I had some caching logic which only makes sense in the application this was used for, so that
    # this code will never be called for f.f again if the world age could be advanced.
    fnew = try_advance_world_age!(f)
    if f.f === fnew
        # The world age has advanced and f.f is now safe to use
        return fnew(args...)
    end
    @warn "Calling $(f.f) from a future world age. This is quite slow and should be avoided if possible.
    This warning will only display once." maxlog=1
    f(NoFutureWarn(), args...)
end

(f::FutureFunction)(::NoFutureWarn, args...) = Base.invokelatest(f.f, args...)

get_current_world() = ccall(:jl_get_tls_world_age, UInt, ())

try_advance_world_age!(x, args...) = x

function try_advance_world_age!(f::FutureFunction, currentworld=get_current_world())
    all(methods(f.f)) do fmethod
        fmethod.primary_world <= currentworld
    end && return f.f
    return f
end
Not sure if it can be made type stable(r) in some way. It was only used in a very specific application and in that context it was fast enough.
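To make the mechanics concrete, here's a minimal usage sketch (the names demo and newfn are made up for illustration): a function @eval-ed mid-call lives in a newer world age, so calling it through FutureFunction takes the invokelatest fallback:

```julia
# Hypothetical usage of the FutureFunction wrapper above.
function demo()
    @eval newfn() = 42                 # defined in a newer world age
    wrapped = FutureFunction(newfn)    # demo's world can't call newfn directly
    wrapped()                          # warns once, then uses invokelatest
end
```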
Thanks, this has helped me get halfway to where I want, I think.
const relevant_world_age = Ref(Base.get_world_counter())

function update_world_age!()
    relevant_world_age[] = Base.get_world_counter()
end
function invokerelevant(f, args...; kwargs...)
    fmethod = getmethod(f, args...; kwargs...) # help! this function is made up, I don't know what to do here
    if fmethod.primary_world < relevant_world_age[]
        Base.invokelatest(fmethod, args...; kwargs...)
    else
        fmethod(args...; kwargs...)
    end
end
The idea is that this would act like invokelatest, but only when necessary.
I think the main issue now is getmethod (which doesn't actually exist). One could probably use first(methods(f, Tuple{typeof.(args)...})), but since half the point of this is to be lightweight I'm not sure if that's a good fit.
I'm also aware that you can't call Method objects :( so imagine f in place of fmethod in the invocation lines.
you want methods (the function, not the concept)
help?> methods
search: methods methodswith Method MethodError hasmethod
methods(f, [types], [module])
Return the method table for f.
If types is specified, return an array of methods whose types match.
If module is specified, return an array of methods defined in that
module. A list of modules can also be specified as an array.
│ Julia 1.4
│
│ At least Julia 1.4 is required for specifying a module.
See also: which and @which.
As I understand it, methods just gives the list of methods with compatible function signatures. Here we want the method that will be invoked.
If first is good enough, is there anything better than this approach?
@generated function getmethod(::F, argtypes...) where {F <: Function}
    first(methods(F.instance, argtypes))
end
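Just to sanity-check what that gives back (my example call, hedged):

```julia
# Assuming the generated getmethod above is defined:
m = getmethod(sum, [1, 2, 3])
m isa Method        # the specific Method that matched (Vector{Int},)
m.primary_world     # the world age in which that method was added
```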
Oh, and is there any way this could handle kwargs?
methods takes two arguments: first the function, second an optional signature, e.g.:
julia> methods(+, (Int, Int))
# 1 method for generic function "+" from Base:
[1] +(x::T, y::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8}
@ int.jl:87
keyword arguments don't participate in dispatch, so you don't use them for method selection
julia> kwfunc(x; args...) = args
kwfunc (generic function with 1 method)
julia> methods(kwfunc)
# 1 method for generic function "kwfunc" from Main:
[1] kwfunc(x; args...)
@ REPL[2]:1
julia> methods(kwfunc)[1].sig
Tuple{typeof(kwfunc), Any}
julia> methods(kwfunc, (Int,))
# 1 method for generic function "kwfunc" from Main:
[1] kwfunc(x; args...)
@ REPL[2]:1
Ok, so I take it there's probably nothing better than my getmethod implementation above?
I think your invokerelevant already is invokelatest, but yes
in general, if methods returns only a single method, that is the method that will be called with arguments of that type
if it returns an empty list, you get a MethodError (there's no method after all)
if it returns a list with more than one thing, it's an ambiguity
so the getmethod you're thinking of is already methods
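It may also be worth noting (it's in the docstring's "See also") that which does exactly the single-method lookup: it returns the one Method that would be invoked for a given argument-type tuple, and throws if nothing matches. A small sketch:

```julia
# which(f, types) returns the Method that f(args...) would dispatch to,
# rather than the list of all signature-compatible methods.
m = which(+, (Int, Int))
m.primary_world   # same field used in the snippets above
```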
I'm under the impression that doing invokelatest all the time in a whole bunch of places isn't a great idea.
indeed! :)
there's currently no way around the forced type instability
after all, the whole purpose of invokelatest is to call possibly-changed code, with a possibly-changed return type
by proxy, your invokerelevant has the same issue since it uses invokelatest internally
Mmm, the idea with invokelatest is that at least I can avoid it unintentionally triggering.
Since in this situation I'm writing a package that supports lazy-loaded packages, and loading one of those can bump the minimum world age.
Do you know about Requires.jl?
people tend to not like using it, because it causes MASSIVE amounts of invalidations & recompilation
I do, but this is working a little differently. Instead of executing code when a package is loaded, we're loading a package when code is executed if the package is not already loaded.
The idea is that this way a package can be written with say ~20 "soft dependencies" and if the user executes something that needs package X and X can be loaded, then this package will do so.
sounds like https://github.com/JuliaLang/julia/pull/47040
There is some overlap. For reference, this is the code for loading packages on-demand I have: https://github.com/tecosaur/DataToolkitBase.jl/blob/main/src/model/usepkg.jl
It basically lets you do this:
function foobar()
    @use JSON3
    JSON3.read(source)
end
Timothy said:
I'm under the impression that doing invokelatest all the time in a whole bunch of places isn't a great idea.
Actually, invokelatest is very very fast unless the world age has actually changed.
Hmm, is there any potential issue/overhead when the world age changes but nothing relevant has changed?
Hard to measure, but that's what I attempted to probe here: https://julialang.zulipchat.com/#narrow/stream/225542-helpdesk/topic/.E2.9C.94.20world.20age/near/308194057
What I saw was that, at least relative to the performance cost of an eval, there was no measurable cost to invokelatest if there's no relevant change.
Sorry, what I actually saw there was that irrelevant changes to the world-age do have a substantial impact.
Annnd wait, that's wrong again. I actually measured it poorly.
Here is a better comparison:
Pure eval creating a function:
#+begin_src julia
@benchmark @eval blargh() = 1
#+end_src
#+RESULTS:
: BenchmarkTools.Trial: 10000 samples with 1 evaluation.
: Range (min … max): 169.600 μs … 7.494 ms ┊ GC (min … max): 0.00% … 93.67%
: Time (median): 174.199 μs ┊ GC (median): 0.00%
: Time (mean ± σ): 180.871 μs ± 80.589 μs ┊ GC (mean ± σ): 0.39% ± 0.94%
:
: ██▅▄▂▁ ▂
: ███████▇▇▇▇███▇█▇▅▄▃▁▁▁▁▁▁▁▁▁▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▃▄▅▃▁▄▃▄▅▁▄▄▅ █
: 170 μs Histogram: log(frequency) by time 428 μs <
:
: Memory estimate: 6.63 KiB, allocs estimate: 126.
And invokelatest on top of that eval:
#+begin_src julia
f() = 1

function advance_worldage()
    @eval blah() = 1
    Base.invokelatest(f)
end

@benchmark advance_worldage()
#+end_src
#+RESULTS:
: BenchmarkTools.Trial: 10000 samples with 1 evaluation.
: Range (min … max): 168.360 μs … 9.291 ms ┊ GC (min … max): 0.00% … 97.52%
: Time (median): 173.700 μs ┊ GC (median): 0.00%
: Time (mean ± σ): 182.926 μs ± 98.141 μs ┊ GC (mean ± σ): 0.50% ± 0.98%
:
: ▇█▅▃▂▁▁ ▂▁▁▂▁▁ ▂
: ██████████████████▆▆▅▃▄▁▁▁▁▁▁▁▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅▆▄▃▄▄▃▃▄▅▄ █
: 168 μs Histogram: log(frequency) by time 424 μs <
:
: Memory estimate: 6.63 KiB, allocs estimate: 126.
So my conclusion I guess remains that I don't see any significant overhead to invokelatest if the change doesn't actually affect the method table of the function being called, or any of its downstream callees.
And in case you worry that maybe eval-ing the same body many times doesn't change the world age: I did check the world age before and after the benchmark, and the difference was 38684.
So I think invokelatest is always doing the smart thing here.
cc @Michael Fiano, this might be of interest. Turns out the overhead I was attributing to invokelatest before was actually just that eval-ing a function is heavier than eval-ing a variable binding.
It's interesting that invokelatest seems to have the exact same overhead as my invokerecent (20.4μs and 20.8μs over a baseline of 12.5μs in one test case)
Probably because they're essentially doing the same thing
Mmm, though it seems like the bulk of the overhead comes from my getmethod.
Just for reference, benchmarks (from a different machine)
julia> @benchmark sum([1,2,3])
BenchmarkTools.Trial: 10000 samples with 996 evaluations.
Range (min … max): 23.710 ns … 2.325 μs ┊ GC (min … max): 0.00% … 96.48%
Time (median): 28.759 ns ┊ GC (median): 0.00%
Time (mean ± σ): 33.361 ns ± 93.276 ns ┊ GC (mean ± σ): 11.35% ± 4.01%
▂▄▆▅▅▆██▅▁ ▁▁▁ ▂
███████████████████▇▇▆▆▅▄▅▄▄▅▅▄▄▅▅▄▅▄▄▄▄▄▆████▇▇▆▅▅▄▄▃▄▇▇▇▇ █
23.7 ns Histogram: log(frequency) by time 51.7 ns <
Memory estimate: 80 bytes, allocs estimate: 1.
julia> @benchmark Base.invokelatest(sum, [1,2,3])
BenchmarkTools.Trial: 10000 samples with 988 evaluations.
Range (min … max): 46.141 ns … 2.463 μs ┊ GC (min … max): 0.00% … 96.64%
Time (median): 49.425 ns ┊ GC (median): 0.00%
Time (mean ± σ): 54.329 ns ± 94.346 ns ┊ GC (mean ± σ): 7.05% ± 3.97%
▃▅▅▅▆█▇▅▂▁▁▁▁ ▁▁▁ ▂
▅██████████████▇█▇▇▆▆▇▆▆▅▆▆▄▄▃▃▁▄▃▄▁▃▄▄▃▁▆█████▇▆▅▆▅▄▅▆▇▇▇▇ █
46.1 ns Histogram: log(frequency) by time 76 ns <
Memory estimate: 80 bytes, allocs estimate: 1.
julia> @benchmark invokerecent(sum, [1,2,3])
BenchmarkTools.Trial: 10000 samples with 989 evaluations.
Range (min … max): 44.554 ns … 2.475 μs ┊ GC (min … max): 0.00% … 96.14%
Time (median): 47.411 ns ┊ GC (median): 0.00%
Time (mean ± σ): 52.438 ns ± 94.743 ns ┊ GC (mean ± σ): 7.34% ± 3.98%
▆▆▆▇██▇▆▄▃▃▂▁▁ ▁▂▂▁ ▃
██████████████████▇▇▆▆▆▅▆▆▆▃▃▁▄▄▄▁▁▁▁▁▃▃▁▅█████▇█▆▅▃▄▃▅▇███ █
44.6 ns Histogram: log(frequency) by time 73.4 ns <
Memory estimate: 80 bytes, allocs estimate: 1.
julia> @benchmark getmethod(sum, [1,2,3])
BenchmarkTools.Trial: 10000 samples with 997 evaluations.
Range (min … max): 20.349 ns … 2.378 μs ┊ GC (min … max): 0.00% … 97.64%
Time (median): 24.670 ns ┊ GC (median): 0.00%
Time (mean ± σ): 29.012 ns ± 93.224 ns ┊ GC (mean ± σ): 12.67% ± 3.90%
▁▃▄▅▅▅▄▃▆█▇▅▁ ▁ ▂
█████████████▇█████▇▆▆▆▆▄▄▅▂▄▄▄▄▄▅▅▄▄▃▅▆▇█████▆▅▅▅▃▃▄▆▆▆▇▇▇ █
20.3 ns Histogram: log(frequency) by time 47.4 ns <
Memory estimate: 80 bytes, allocs estimate: 1.
Hmm, if I add the package re-run bits the overhead grows from ~10ns to ~40ns per call.
function invokerecent(f::Function, args...; kwargs...)
    method = getmethod(f, args...)
    try
        if method.primary_world >= RECENT_WORLD_AGE[]
            f(args...; kwargs...)
        else
            Base.invokelatest(f, args...; kwargs...)
        end
    catch e
        if e isa PkgRequiredRerunNeeded
            update_recency!()
            invokerecent(f, args...; kwargs...)
        else
            rethrow(e)
        end
    end
end
Might anyone have any suggestions with this?
Hmm, it looks like I can get much better performance (back to 10ns overhead instead of 30ns) if I return the error instead of throwing it.
function invokerecent2(f::Function, args...; kwargs...)
    method = getmethod(f, args...)
    res = if method.primary_world >= RECENT_WORLD_AGE[]
        f(args...; kwargs...)
    else
        Base.invokelatest(f, args...; kwargs...)
    end
    if res isa PkgRequiredRerunNeeded
        update_recency!()
        invokerecent(f, args...; kwargs...)
    else
        res
    end
end
that's not surprising - stack unwinding & setting up a function for that is expensive
(relatively speaking)
What I can't see here is the impact this has on type inference
Sukera said:
that's not surprising - stack unwinding & setting up a function for that is expensive
I'm not surprised that raising an exception is so expensive, but I am surprised at how much of an impact just adding a try ... catch end block has.
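A quick way to probe that setup cost in isolation (a rough sketch; the function names are made up and timings will vary by machine):

```julia
using BenchmarkTools

add_plain(x) = x + 1

function add_caught(x)
    try
        x + 1
    catch
        rethrow()
    end
end

@btime add_plain(1)   # no exception frame
@btime add_caught(1)  # same work, plus the try/catch setup
```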
I wonder if the situation could be any nicer if Julia had first-class delimited continuations :thinking:
It seems like there are a few ways this could be done well with LLVM, Chez-like segmented stacks (https://llvm.org/docs/SegmentedStacks.html) look interesting but seem to complicate GC.
That link seems more related to ensuring different threads can grow their stack dynamically (which is not a problem in single-threaded code)
Oh, that link is about segmented stacks existing in LLVM, for how segmented stacks can be used to implement delimited continuations see other resources like https://legacy.cs.indiana.edu/~dyb/pubs/stack.pdf
Hmm, I seem to have hit the latest challenge in this endeavour: a good getmethod implementation for types, not functions. E.g.
struct Foo end
(::Foo)(x::Int) = 2x
f = Foo()
getmethod(f, 1) # function I'd like to define
For reference, this seems to work nicely with functions
@generated function getmethod(::F, argtypes...) where {F <: Function}
    first(methods(F.instance, argtypes))
end
This takes ~10ns.
I can do
function getmethod(t, args...)
    first(methods(t, typeof.(args)))
end
but that takes ~500ns, which seems like a bit much.
because julia is not specializing getmethod on t, and neither should you
https://docs.julialang.org/en/v1/manual/performance-tips/#Be-aware-of-when-Julia-avoids-specializing
I take it you're talking about the getmethod(::F, ...) implementation? That works nicely, it's the type-instance one I'm trying to improve.
no, I'm talking about the getmethod(t, args...) one
your generated function does specialize, since you parametrize (per my link)
singling out ::Function is not the general case though, since anything can be callable - you're not going to make that faster
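One possible middle ground, sketched here and untested against the real application: relax the ::Function restriction to any singleton type via Base.issingletontype, which also covers zero-field callable structs like the Foo above:

```julia
# Sketch: Base.issingletontype(F) is true exactly when F.instance exists,
# so the Method lookup can happen at generation time for those types.
@generated function getmethod(f::F, argtypes...) where {F}
    if Base.issingletontype(F)
        first(methods(F.instance, argtypes))   # resolved once per signature
    else
        :(first(methods(f, $argtypes)))        # runtime fallback
    end
end
```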
I think I've arrived at my solution:
@generated function getmethod(T, args...)
    if hasmethod(empty, Tuple{Type{T}})
        first(methods(empty(T), args))
    else
        :(first(methods(T, $(args))))
    end
end
Last updated: Dec 28 2024 at 04:38 UTC