Stream: helpdesk (published)

Topic: Checking for code inefficiencies


view this post on Zulip Davi Sales Barreira (May 15 2021 at 01:55):

Hey everyone, I'm still struggling on spotting code inefficiencies and how to improve them.
Take these two functions below. Is there some inefficiency in them that could somehow be improved?

function w2gaussiancost(μa,Σa,μb, Σb)
    w2   = norm(μa - μb)^2 + tr(Σa + Σb - 2*(Σa^(1/2)* Σb*Σa^(1/2))^(1/2))
    return w2
end

function w2gaussianmap(μa,Σa,μb, Σb)
    A    = Σa^(-1/2)*(Σa^(1/2)* Σb*Σa^(1/2))^(1/2)*Σa^(-1/2)
    T(x) = μb + A*(x - μa)
    return T
end

view this post on Zulip Brenhin Keller (May 15 2021 at 02:13):

What are the types of the arguments?

view this post on Zulip Davi Sales Barreira (May 15 2021 at 02:16):

μ is a vector of dimension n, and Σ is an n × n matrix.

view this post on Zulip Brenhin Keller (May 15 2021 at 02:19):

Gotcha. Yeah, you're pretty much dependent on the efficiency of the underlying lin alg methods in that case.
I don't see anything super obvious, but perhaps someone else will.

view this post on Zulip Brenhin Keller (May 15 2021 at 02:20):

If you suspect something isn't as efficient as it should be, you could try profiling and see if anything looks odd there

view this post on Zulip Brenhin Keller (May 15 2021 at 02:23):

I guess one other thing could be checking whether or not ^(1/2) is going to be the most efficient way to take the matrix square root
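
That can be checked directly. A minimal sketch, assuming Σ is real symmetric positive definite: wrapping it in `Symmetric` lets `sqrt` dispatch to the eigendecomposition-based method instead of the generic `^(1/2)` matrix power, which is often cheaper for this case.

```julia
using LinearAlgebra

n = 100
X = randn(n, n)
Σ = Matrix(Symmetric(X * X')) + I   # exactly symmetric, positive definite

r1 = Σ^(1/2)                        # generic matrix power
r2 = sqrt(Symmetric(Σ))             # symmetric-aware square root via eigen

r1 ≈ r2                             # same result; the Symmetric path is usually cheaper
```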

view this post on Zulip Pablo Zubieta (May 15 2021 at 02:43):

You might be able to avoid taking the square root of Σa twice by assigning its value to another variable and passing that to the expression

view this post on Zulip Pablo Zubieta (May 15 2021 at 02:44):

Same for Σa^(-1/2)
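
Both cachings can be folded into the cost function. A sketch (hypothetical name `w2gaussiancost_cached`; the `Symmetric` wrappers assume the inputs are symmetric PSD):

```julia
using LinearAlgebra

# Sketch: evaluate sqrt(Σa) once and reuse it, instead of computing
# Σa^(1/2) twice inside the expression. Symmetric(...) re-symmetrizes
# the inner product, which is only symmetric up to rounding.
function w2gaussiancost_cached(μa, Σa, μb, Σb)
    rtΣa  = sqrt(Symmetric(Σa))                # one matrix square root
    cross = sqrt(Symmetric(rtΣa * Σb * rtΣa))  # (Σa^½ Σb Σa^½)^½
    return norm(μa - μb)^2 + tr(Σa + Σb - 2cross)
end
```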

view this post on Zulip Brenhin Keller (May 15 2021 at 02:56):

Oh true yeah, the compiler almost certainly isn't going to be smart enough to do that for you

view this post on Zulip Michael Abbott (May 15 2021 at 03:10):

How big is n? My guess is that the square root is going to dwarf everything:

julia> begin
           n = 10
           Σa, Σb = (rand(n,n) for _ in 1:2)
           μa, μb = (rand(n) for _ in 1:2)
           rtΣa = @btime sqrt($Σa)
           @btime $rtΣa * $Σb * $rtΣa
           @btime w2gaussiancost($μa,$Σa,$μb,$Σb)  # original
           @btime w2gaussiancost_2($μa,$Σa,$μb,$Σb) # calls sqrt twice
       end;
  114.958 μs (10 allocations: 16.33 KiB)
  1.271 μs (2 allocations: 3.53 KiB)
  334.083 μs (49 allocations: 66.06 KiB)
  215.042 μs (25 allocations: 36.42 KiB)

view this post on Zulip Mason Protter (May 15 2021 at 05:30):

What properties does Σa have? Is there any structure to it?

view this post on Zulip Mason Protter (May 15 2021 at 05:31):

E.g. is it hermitian? Positive definite?

view this post on Zulip Davi Sales Barreira (May 27 2021 at 20:27):

It's positive semi-definite. It's the covariance for a Mv Normal distribution.

view this post on Zulip Davi Sales Barreira (May 27 2021 at 20:28):

I'm mostly worried about the performance of Σ^(-1/2).

view this post on Zulip Mason Protter (May 27 2021 at 21:05):

Okay, are you sure you need the actual inverse square root? Would a Cholesky factor work for you instead?

view this post on Zulip Mason Protter (May 27 2021 at 21:06):

The pivoted Cholesky decomposition works on positive semi-definite matrices
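
For reference, a sketch of the pivoted Cholesky on a rank-deficient PSD matrix (on Julia ≥ 1.8 the pivot is requested with `RowMaximum()`; older releases spell it `cholesky(Σ, Val(true); check = false)`):

```julia
using LinearAlgebra

A = randn(4, 2)
Σ = Matrix(Symmetric(A * A'))        # rank-2 positive semi-definite 4×4

# check = false suppresses the rank-deficiency error on PSD input
F = cholesky(Σ, RowMaximum(); check = false)
L = F.L[:, 1:F.rank]                 # keep the numerically nonzero columns
p = F.p                              # pivot permutation

Σ[p, p] ≈ L * L'                     # the factorization reconstructs Σ
```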

view this post on Zulip Mason Protter (May 27 2021 at 21:11):

If you can't use Cholesky for some reason, I'd use either an SVD or an eigendecomposition and then use that to accelerate products of the form A * Σ^(-1/2)
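
One way to sketch that (hypothetical helper `invsqrt_psd`; the eigenvalue cutoff turns it into a pseudo-inverse square root, so semi-definite input doesn't divide by zero):

```julia
using LinearAlgebra

# Sketch: factor Σ once with an eigendecomposition, then build Σ^(-1/2)
# by inverting only the eigenvalues above a cutoff (pseudo-inverse),
# so zero modes of a semi-definite Σ are dropped rather than divided by.
function invsqrt_psd(Σ; tol = size(Σ, 1) * eps(eltype(Σ)))
    λ, V = eigen(Symmetric(Σ))
    d = [x > tol ? inv(sqrt(x)) : zero(x) for x in λ]
    return V * Diagonal(d) * V'
end

X = randn(6, 6)
Σ = Matrix(Symmetric(X * X')) + I    # positive definite test case
S = invsqrt_psd(Σ)

S * Σ * S ≈ Matrix(1.0I, 6, 6)       # Σ^(-1/2) Σ Σ^(-1/2) = I when Σ ≻ 0
```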

view this post on Zulip Davi Sales Barreira (May 27 2021 at 21:54):

I think I do actually need it. But I'll see if perhaps the Cholesky decomp. works. Thanks for the input.

view this post on Zulip Mason Protter (May 27 2021 at 22:28):

@Davi Sales Barreira are you sure that your matrix is positive semi-definite, and not fully positive definite? For a semi-definite matrix, this whole thing is just ill conditioned when you take an inverse square root.

view this post on Zulip Davi Sales Barreira (May 27 2021 at 22:39):

It's a covariance matrix for a multivariate Gaussian, so I think it is indeed positive semi-definite, but it is also symmetric, so this might solve the ill-conditioning. Doesn't it?

view this post on Zulip Davi Sales Barreira (May 27 2021 at 22:40):

The whole thing I'm trying to calculate is

 A    = Σa^(-1/2)*(Σa^(1/2)* Σb*Σa^(1/2))^(1/2)*Σa^(-1/2)

Which is the solution for the optimal transport map between multivariate Gaussians. So I do think it should not be ill-conditioned in any way, because this map is proven to always exist.

view this post on Zulip Mason Protter (May 27 2021 at 22:48):

Σa^(-1/2) is ill-defined if the matrix is semi-definite, regardless of whether or not it's symmetric.

view this post on Zulip Mason Protter (May 27 2021 at 22:58):

Hmm, maybe it's okay actually, the square root does seem to improve its conditioning quite a bit.

view this post on Zulip Davi Sales Barreira (May 27 2021 at 23:07):

I cannot prove it, but this result is very well established. So it's probably possible to prove it's indeed well conditioned.

view this post on Zulip Mason Protter (May 27 2021 at 23:24):

How big are the matrices you’re dealing with?

view this post on Zulip Davi Sales Barreira (May 28 2021 at 19:48):

They can be quite big. It is for a package, so it can be arbitrarily big.


Last updated: Dec 28 2024 at 04:38 UTC