If I run the following:
using SnoopCompile
@snoopl "func_names.csv" "llvm_timings.yaml" begin
    my_code()
end
times, info = SnoopCompile.read_snoopl("func_names.csv", "llvm_timings.yaml")
sum(first, times)
Is sum(first, times) going to return approximately the total time LLVM spent optimizing the code compiled while running my_code()?
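Since sum(first, times) treats the first element of each entry as the elapsed time, I can also sort the same data to see where LLVM spends the most time. A minimal sketch, assuming only that times came from read_snoopl as above and that the first element of each entry is the duration:

# Assumes `times` from SnoopCompile.read_snoopl above; first element = elapsed time
total_llvm = sum(first, times)                 # total LLVM optimization time
worst = sort(times; by = first, rev = true)    # most expensive entries first
foreach(println, Iterators.take(worst, 10))    # show the 10 costliest entries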
When I snoop on inference using @snoopi_deep, I get that about half of the compilation time is spent in inference. With the approach above, I get a time corresponding roughly to the other half, indicating (if the approach is reasonable) that compilation time is split roughly 50/50 between inference and LLVM.
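Roughly, the comparison I'm doing looks like this. It's only a sketch: it assumes SnoopCompile's inclusive/exclusive accessors on the @snoopi_deep ROOT node behave as documented, and that the @snoopl measurement comes from a separate, fresh Julia session so the same code actually gets compiled in both runs:

using SnoopCompile

# Session 1: inference time via @snoopi_deep
tinf = @snoopi_deep my_code()
# For the ROOT node, exclusive time is everything that is *not* inference,
# so the time spent in inference is the difference:
t_inference = inclusive(tinf) - exclusive(tinf)

# Session 2 (fresh Julia process): LLVM time via @snoopl, as above
# t_llvm = sum(first, times)
# llvm_fraction = t_llvm / (t_llvm + t_inference)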
You mean LLVM compiling binary code? I heard Valentin Churavy say that it’s about 30% LLVM time
Yeah, I'm after the percentage of total compilation time that is taken up by LLVM, since I won't attempt to reduce that part any further. LLVM really dislikes some things, e.g. very large functions, so the LLVM share can be greater in some situations than in others.
Rik Huijzer said:
You mean LLVM compiling binary code? I heard Valentin Churavy say that it’s about 30% LLVM time
I think that's a rule of thumb, because I've also seen breakdowns like what @Fredrik Bagge Carlson mentioned. Large functions definitely appear to be a culprit.
Either way, LLVM time is a big chunk, and there's no really easy way for us to speed that up.
Mechanisms for finer-grained application of optimization levels could help, e.g. excluding "glue code" from optimization while still fully optimizing hot loops, broadcasts, etc. The current module-level setting (sketched below) is too coarse.
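For reference, the per-module knob that exists today is Base.Experimental.@optlevel; a small sketch (MyGlueCode and do_setup are made-up names):

module MyGlueCode

# Lower the optimization level for everything in this module (valid values are 0-3).
# This is the coarse, module-level setting referred to above.
Base.Experimental.@optlevel 1

do_setup() = println("glue code that is not performance-critical")

end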
Maybe Julia could add some benchmarks to the LLVM compile-time tracker:
https://www.npopov.com/2020/05/10/Make-LLVM-fast-again.html
http://llvm-compile-time-tracker.com/index.php
Compile Julia with -DENABLE_TIMINGS, or uncomment this line:
https://github.com/JuliaLang/julia/blob/4873773d37e06d01ad13a0d55df684789ddd29ad/src/options.h#L87
That will give you a breakdown of where time was spent whenever a Julia process exits.
JULIA_LLVM_ARGS="-time-passes" julia
will give you a breakdown of where LLVM spent its time.