Stream: helpdesk (published)

Topic: Snoop on LLVM


view this post on Zulip Fredrik Bagge Carlson (Jun 23 2022 at 05:48):

If I run

using SnoopCompile
@snoopl "func_names.csv" "llvm_timings.yaml" begin
   my_code()
end
times, info = SnoopCompile.read_snoopl("func_names.csv", "llvm_timings.yaml")
sum(first, times)

Is sum(first, times) going to return approximately the total time spent by LLVM optimizing the function call?

view this post on Zulip Fredrik Bagge Carlson (Jun 23 2022 at 05:49):

When I snoop on inference using @snoopi_deep, I see that about half of the compilation time is spent in inference. With the approach above, I get a time corresponding roughly to the other half, indicating that (if the approach is reasonable) the compilation time is split roughly 50/50 between inference and LLVM.
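
(For reference, a minimal sketch of the inference-side measurement; my_code() stands in for the real workload, and the exact accounting may differ slightly between SnoopCompile versions:)

using SnoopCompile
tinf = @snoopi_deep my_code()
# inclusive(tinf) is the total elapsed time of the snooped block; exclusive(tinf)
# is the part spent outside inference, so the difference is the inference time.
inference_time = inclusive(tinf) - exclusive(tinf)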

view this post on Zulip Rik Huijzer (Jun 23 2022 at 06:26):

You mean LLVM compiling binary code? I heard Valentin Churavy say that it’s about 30% LLVM time.

view this post on Zulip Fredrik Bagge Carlson (Jun 23 2022 at 06:36):

Yeah, I'm after the percentage of total compilation time taken up by LLVM, since I won't attempt to reduce that part any further. LLVM really dislikes some things, e.g., very large functions, so the LLVM share can be greater in some situations than in others.
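
(As an illustration of the large-function case, a hypothetical sketch; big_sum and the statement count are made up, and the snooping pattern is the same as above:)

using SnoopCompile
# Build one method with thousands of statements; bodies like this tend to be
# what makes LLVM optimization time blow up.
body = Expr(:block, (:(s += x[$i]) for i in 1:5_000)...)
@eval function big_sum(x)
    s = zero(eltype(x))
    $body
    return s
end
@snoopl "func_names.csv" "llvm_timings.yaml" begin
    big_sum(collect(1.0:5_000.0))
end
times, info = SnoopCompile.read_snoopl("func_names.csv", "llvm_timings.yaml")
sum(first, times)   # LLVM time, expected to dwarf the corresponding inference time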

view this post on Zulip Brian Chen (Jun 23 2022 at 13:55):

Rik Huijzer said:

You mean LLVM compiling binary code? I heard Valentin Churavy say that it’s about 30% LLVM time

I think that's a rule of thumb, because I've also seen breakdowns like what @Fredrik Bagge Carlson mentioned. Large functions definitely appear to be a culprit.

view this post on Zulip Sukera (Jun 23 2022 at 17:57):

Either way, LLVM time is a big chunk, and there's no really easy way for us to speed that up.

view this post on Zulip Brian Chen (Jun 23 2022 at 17:58):

Mechanisms for finer-grained application of optlevels could help, e.g. excluding "glue code" from optimization while still optimizing hot loops, broadcasts, etc. The current module-level setting is too coarse.
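
(For context, a sketch of the module-level knob being referred to; the module and functions here are hypothetical:)

module MyGlueCode
# Base.Experimental.@optlevel applies to the whole module, so "glue code" and
# hot kernels defined here all get the same LLVM optimization level.
Base.Experimental.@optlevel 1
boilerplate(x) = string("got ", x)      # cheap glue, fine at a low optlevel
hot_kernel!(y, x) = (y .= 2 .* x; y)    # would also be compiled at that level
end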

view this post on Zulip jar (Jun 23 2022 at 18:15):

Maybe Julia could add some benchmarks to the LLVM compile-time tracker:

https://www.npopov.com/2020/05/10/Make-LLVM-fast-again.html
http://llvm-compile-time-tracker.com/index.php

view this post on Zulip chriselrod (Jun 23 2022 at 19:10):

Compile Julia with -DENABLE_TIMINGS, or uncomment this line:
https://github.com/JuliaLang/julia/blob/4873773d37e06d01ad13a0d55df684789ddd29ad/src/options.h#L87

view this post on Zulip chriselrod (Jun 23 2022 at 19:11):

That will give you a breakdown of where time was spent whenever a Julia process exits.

view this post on Zulip chriselrod (Jun 23 2022 at 19:12):

JULIA_LLVM_ARGS="-time-passes" julia

view this post on Zulip chriselrod (Jun 23 2022 at 19:12):

will give you a breakdown of where LLVM spent its time.

