Stream: helpdesk (published)

Topic: See allocated memory

DrChainsaw (Mar 31 2023 at 13:10):

Is it possible to see which objects Julia currently has memory allocated for (i.e. which values can't be garbage collected)?

The reason is that I'm doing some interactive plotting where I load data from disk and plot it in separate windows. The problem is that every time I generate a plot, the amount of memory used by Julia (as reported by Windows) increases until it is far more than the amount of RAM I have; everything becomes dead slow and programs start crashing in the background, meaning I have to restart the REPL every couple of plots.

The functions don't return anything but the actual plot, which only shows some heavily aggregated statistics (e.g. mean and a few percentiles). Even if I set all variables in the workspace to 0 and run GC.gc() multiple times, Windows insists that Julia is taking more and more memory, even when I'm just running the same plot command repeatedly.
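Roughly what a session looks like (the function names here are placeholders for the real pipeline):

```julia
data = load_data("run1.bin")   # placeholder for the actual loading step
plot_stats(data)               # placeholder; opens a window with mean + percentiles
data = 0                       # drop the only reference I'm aware of
GC.gc(); GC.gc(); GC.gc()      # force a few collections
Base.gc_live_bytes()           # what the GC itself thinks is still live, in bytes
```

Base.gc_live_bytes() reports what the GC considers reachable and Sys.maxrss() the process's peak working set, so those are the two numbers I know how to compare against what Windows shows.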

Santtu (Apr 02 2023 at 05:57):

We need a more detailed description to figure out what's wrong. How much data are you loading for each plot? What plotting library are you using? What commands did you issue in the REPL?

Santtu (Apr 02 2023 at 06:06):

But you might try Debugger.jl ([link]) to inspect the sizes of variables.

[link]: https://github.com/JuliaDebug/Debugger.jl
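For example (an untested sketch; compute_stats stands in for whatever function you suspect):

```julia
using Debugger

# Drop into the debugger at the first line of the suspect call:
@enter compute_stats(data)

# At the debug> prompt:
#   fr      print the variables in the current frame
#   n / s   step to the next line / step into the next call
#   c       continue execution
```

From inside a frame you can also switch to evaluation mode (press the backtick key) and run e.g. Base.summarysize(some_variable) to see how big things are.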

DrChainsaw (Apr 02 2023 at 09:55):

There are a lot of packages involved, and I'd guess the root cause will be difficult to pin down.

As a first step, I wanted to see whether Windows and Julia agree on how much memory Julia needs. Many packages are involved and I think at least a few of them use global caches (Dagger.jl being one of my prime suspects). Instead of manually going through the source code of each such package, I was hoping to find some way to profile what the GC thinks it cannot clean up at the moment.

After a bit of search prompt engineering I found varinfo (from InteractiveUtils), which looks promising, but even varinfo(; all=true, imported=true, recursive=true, sortby=:size) seems to skip const globals in loaded packages.
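The closest thing I can think of is a brute-force scan over every loaded module (untested sketch; Base.summarysize is only an approximation of what the GC keeps alive, and it can be slow or error on exotic objects):

```julia
# Report all globals above a size threshold in every loaded module.
function find_large_globals(threshold_bytes = 100 * 1024^2)  # 100 MiB, arbitrary
    for mod in values(Base.loaded_modules)
        for name in names(mod; all = true)
            isdefined(mod, name) || continue
            sz = try
                Base.summarysize(getfield(mod, name))
            catch
                continue  # some bindings can't be measured safely
            end
            sz >= threshold_bytes && println(Base.format_bytes(sz), '\t', mod, '.', name)
        end
    end
end
```

That should at least point a finger at any package-level cache (like the ones I suspect in Dagger.jl) without reading each package's source.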

Debugger.jl sounds a bit heavy-handed since I'm more after a profiler, but I'll look into it (I've never used it).

When I run a single load -> mangle -> plot call with @time, it reports around 10 GB of allocations, which seems somewhat reasonable given the amount of data being processed. Running it again gives a similar number, but Windows just keeps increasing the commit charge with each call.
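The kind of check I'm doing between runs looks like this (placeholder names again):

```julia
@time plot_stats(mangle(load_data("big_run.bin")))   # ~10 GB allocated per call
GC.gc()
println("GC live: ", Base.format_bytes(Base.gc_live_bytes()))
println("max RSS: ", Base.format_bytes(Sys.maxrss()))   # peak working set on Windows
```

Note that Sys.maxrss() is a high-water mark so it never decreases, and I don't know of a way to query the commit charge from within Julia itself.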

Non-rigorous example: after letting my session sit overnight with a 45 GB commit charge, the working set size was down to just over 1 GB the next day (the commit charge stayed the same), but running the plot command again bumped the commit charge up to 69 GB with a 13 GB working set.

DrChainsaw (Apr 02 2023 at 09:58):

I'm not even certain that the commit charge is the problem, but it is the one thing I have observed that just keeps growing. The symptom is that calls suddenly get two orders of magnitude slower, and if I keep going without restarting, programs (Explorer, browsers, and eventually VS Code) start to crash.


Last updated: Nov 22 2024 at 04:41 UTC