Stream: helpdesk (published)

Topic: LoopVectorization silently failing


Kyle Daruwalla (May 13 2021 at 17:15):

In the spirit of @brrrt (see #random > Rename LoopVectorization.@avx?), I recently spent days debugging the following bug:

# get hidden neuron firing rate
reservoir.ξhidden!(cache.r)
@avx cache.r .+= reservoir.f.(u)

# get output neuron firing rate
reservoir.ξ!(cache.z)
if isexploring[]
    reservoir.ξ!(cache.z)
    @avx cache.z .+= reservoir.Wout * cache.r
else
    @avx cache.z .= reservoir.Wout * cache.r
end

# update du
@avx du .= (-u .+ reservoir.λ * reservoir.Wr * cache.r .+
                  reservoir.Win * input(t) .+
                  reservoir.Wfb * cache.z) ./ reservoir.τ

This code fails to produce anything resembling a working result. A red herring related to the DE stepping is why I spent so many days on it, until I found that removing @avx from the line that computes du fixed things. Now I am wondering why the error occurred (i.e., a more detailed explanation than "LoopVectorization.jl is unsafe"). The sizes of the weight matrices are:

Mason Protter (May 13 2021 at 17:17):

I think it's probably this:
[screenshot: LoopVectorization's warning about broadcasting dimensions of size 1]

Mason Protter (May 13 2021 at 17:18):

Dimensions of size 1 are not kosher with LV, but I never really understood why
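
A quick way to audit a snippet for this failure mode is to check each array operand for a dimension of size 1 before handing it to @avx. The helper below is a hypothetical sketch, not something from this thread:

# Hypothetical helper: flag any array with a dimension of size 1,
# since those are the shapes LoopVectorization warns about.
has_unit_dim(A::AbstractArray) = any(==(1), size(A))

has_unit_dim(rand(1, 1000))  # true  -- a 1xN row matrix
has_unit_dim(rand(1000, 1))  # true  -- an Nx1 column matrix
has_unit_dim(rand(1000))     # false -- a plain Vector is fine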

Kyle Daruwalla (May 13 2021 at 17:21):

Yeah, but that warning only mentions broadcasting. The full expression is z .= z .+ Wout * r, so the 1x1000 array is embedded in an expression that uses broadcasting, but it is not part of the broadcast itself. That is partly why I asked the question: to get clarity on whether this kind of expression is still bad, and whether the following would "fix" things:

tmp = Wout * r
@avx z .+= tmp

Kyle Daruwalla (May 13 2021 at 17:22):

Also, it is weird that I only needed to remove @avx from the du expression, which involves no 1xN arrays!
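
One way to localize a silent failure like this is to evaluate the same update with and without @avx and compare the results. The sketch below uses assumed names and sizes (none are from the actual reservoir code); the matvec is hoisted out so that @avx sees only the elementwise broadcast:

using LoopVectorization

# Assumed stand-ins for the reservoir quantities above.
N = 1000
u, r = rand(N), rand(N)
λ, τ = 1.5, 0.1
Wr = rand(N, N)
drive = rand(N)          # stand-in for Win * input(t) .+ Wfb * z

Wrr = Wr * r             # hoist the matvec out of the @avx expression
du_avx = similar(u)
du_ref = similar(u)

@avx du_avx .= (-u .+ λ .* Wrr .+ drive) ./ τ   # suspect line under test
du_ref .= (-u .+ λ .* Wrr .+ drive) ./ τ        # plain broadcast reference

maximum(abs, du_avx .- du_ref)   # a large value flags the @avx line as the culprit

To test the original failing form, keep Wr * r inside the @avx expression, as in the snippet at the top of the thread.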

Mason Protter (May 13 2021 at 17:25):

What's the size of r?

Mason Protter (May 13 2021 at 17:26):

If r is a 1000xN array, then Wout * r is a 1xN array, so the problem is not solved
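
A quick REPL check of the shapes (assuming Wout is stored as a 1x1000 Matrix, which is a guess about the original code):

julia> Wout = rand(1, 1000); r = rand(1000, 1);

julia> size(Wout * r)   # still a 1x1 Matrix, so a size-1 dimension remains
(1, 1)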

Mason Protter (May 13 2021 at 17:27):

You can make Wout a Transpose{Vector} though
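
For instance, something along these lines (a sketch of the suggestion, not the actual fix that was applied):

julia> using LinearAlgebra

julia> Wout = transpose(rand(1000));  # Transpose of a Vector rather than a 1x1000 Matrix

julia> r = rand(1000);

julia> Wout * r isa Float64   # the product is a plain scalar, no 1x1 array
true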

Kyle Daruwalla (May 13 2021 at 17:27):

Ah, you're right! r is 1000x1, but I assume 1x1 arrays are also not supported

Kyle Daruwalla (May 13 2021 at 17:28):

Yeah let me try with the transpose

Kyle Daruwalla (May 13 2021 at 17:28):

Thanks, this makes sense now

Mason Protter (May 13 2021 at 17:28):

Is r a Matrix, or is it a Vector?

Mason Protter (May 13 2021 at 17:30):

You'll need r to be a Vector in order to not get a 1x1 array out

julia> let Wout = rand(1000)', r = rand(1000, 1)
           Wout * r
       end
1×1 adjoint(::Vector{Float64}) with eltype Float64:
 244.01241695906867

julia> let Wout = rand(1000)', r = rand(1000)
           Wout * r
       end
236.71936258108883

Kyle Daruwalla (May 13 2021 at 17:31):

It's a Vector
