Stream: helpdesk (published)

Topic: Constraining method signatures


view this post on Zulip Jesper Stemann Andersen (Nov 22 2021 at 10:39):

How can I be more type-specific than ::Function when passing methods to methods (i.e. defining higher-order functions) or when defining structs with methods as fields?

Just use https://github.com/yuyichao/FunctionWrappers.jl ?

view this post on Zulip Sebastian Pfitzner (Nov 22 2021 at 10:41):

why do you want to do that?

view this post on Zulip Maarten (Nov 22 2021 at 10:45):

for structs with methods as fields I just use

struct test{A}
    fun::A
end

julia is then able to infer the exact type of fun, and keeps things type stable
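
For illustration, a minimal sketch of the difference (the names Typed and Untyped are invented here): the parametric field keeps calls inferable, while a ::Function field does not.

# Concrete: the wrapped function's type is a type parameter, so calls through
# `t.fun` can be inferred and specialized.
struct Typed{A}
    fun::A
end

# Abstract: Function is an abstract type, so calls through `u.fun` are dynamic
# dispatches and the return type cannot be inferred.
struct Untyped
    fun::Function
end

apply(t::Typed, x)   = t.fun(x)   # apply(Typed(sin), 1.0) infers Float64
apply(u::Untyped, x) = u.fun(x)   # apply(Untyped(sin), 1.0) infers Any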

view this post on Zulip Sebastian Pfitzner (Nov 22 2021 at 10:48):

yeah, I was mostly asking about the first usecase

view this post on Zulip Jesper Stemann Andersen (Nov 22 2021 at 10:55):

In a related thread, Mason Protter answers that this is basically not what you would do in Julia - you would prefer not to constrain the Function, as that would hinder the interoperability of methods - i.e. "dispatch nirvana".

But how do I explain to colleagues that their "Pythonic" or "C#-ish" way of doing things is not "Julian"? :-) A really strong point once highlighted by the micro-benchmarks (as I remember it) was that "you can write any sort of algorithm in Julia". And indeed you can. But can I get type inference (and hence static type analysis) as well, please? :-)

So use case: Inferrable types - code navigation etc.

view this post on Zulip Sukera (Nov 22 2021 at 11:34):

you use where F

view this post on Zulip Sukera (Nov 22 2021 at 11:34):

see also https://docs.julialang.org/en/v1/manual/performance-tips/#Be-aware-of-when-Julia-avoids-specializing

view this post on Zulip Sukera (Nov 22 2021 at 11:35):

in general, ::T in a function signature _does not improve performance or inferrability_ - it is mainly a tool for deciding dispatch
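
A small sketch of that point, with invented function names: the annotations choose which method runs, but an unannotated argument is specialized on its concrete type anyway.

# The annotations below pick which method runs for a given argument...
describe(x::Integer)       = "an integer"
describe(x::AbstractFloat) = "a float"
describe(x)                = "something else"

# ...but they are not needed for speed: an unannotated argument is still
# specialized on the concrete type of whatever gets passed in.
double(x) = 2x
# @code_warntype double(1.0)   # fully concrete despite the missing ::Float64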

view this post on Zulip Sukera (Nov 22 2021 at 11:36):

this may be unfamiliar to people used to static languages like java or C# - julia is a dynamic (though compiled) language

view this post on Zulip Sukera (Nov 22 2021 at 11:36):

these things are not exclusive!

view this post on Zulip Jesper Stemann Andersen (Nov 22 2021 at 12:18):

Ah, I see, so in the following, f2 enables more specialization than f1?

f1(f::Function, x) = f(x)
f2(f::F, x) where F <: Function = f(x)

I can't quite see the difference:

julia> (@which f1(sin, 0)).specializations
svec(MethodInstance for f1(::typeof(sin), ::Float64), MethodInstance for f1(::typeof(sin), ::Int64), #undef, #undef, #undef, #undef, #undef, #undef)

julia> (@which f2(sin, 0)).specializations
svec(MethodInstance for f2(::typeof(sin), ::Float64), MethodInstance for f2(::typeof(sin), ::Int64), #undef, #undef, #undef, #undef, #undef, #undef)

view this post on Zulip Jesper Stemann Andersen (Nov 22 2021 at 12:30):

I am actually not looking for performance - what I am seeking is inferability - to enable static analysis (whether by a human or JET). I would prefer to constrain the methods that will be accepted, such that my program is more expressive - more clear.

Despite that "Julian"/dispatch-way of doing it, where "it" being a function f of two objects dependent on a distance measure, might be:

abstract type AbstractFoo end

abstract type AbstractFooDistance end

struct FooDistance1 <: AbstractFooDistance end
struct FooDistance2 <: AbstractFooDistance end

dist(foo1::AbstractFoo, foo2::AbstractFoo, d::FooDistance1) = 1
dist(foo1::AbstractFoo, foo2::AbstractFoo, d::FooDistance2) = 2

f(foo1::AbstractFoo, foo2::AbstractFoo, d::AbstractFooDistance) = dist(foo1, foo2, d)

The most common approach would be:

abstract type AbstractFoo end

dist1(foo1::AbstractFoo, foo2::AbstractFoo) = 1
dist2(foo1::AbstractFoo, foo2::AbstractFoo) = 2

f(foo1::AbstractFoo, foo2::AbstractFoo, d::Function) = d(foo1, foo2)

which leaves d quite open.

A FunctionWrapper seems to enable specifying the signature of accepted d's:

f(foo1::AbstractFoo, foo2::AbstractFoo, d::FunctionWrapper{Int,Tuple{AbstractFoo,AbstractFoo}}) = d(foo1, foo2)

... at the cost of having to wrap the functions at the call sites:

f(foo1, foo2, FunctionWrapper{Int,Tuple{AbstractFoo,AbstractFoo}}(dist1))
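
For reference, a self-contained sketch of that FunctionWrapper variant (the concrete SomeFoo type is invented here just so the snippet runs on its own):

using FunctionWrappers: FunctionWrapper

abstract type AbstractFoo end
struct SomeFoo <: AbstractFoo end   # invented concrete type for the example

dist1(foo1::AbstractFoo, foo2::AbstractFoo) = 1

# The wrapper pins both argument and return types, so `d` is statically known
# to map (AbstractFoo, AbstractFoo) -> Int.
const FooMetric = FunctionWrapper{Int,Tuple{AbstractFoo,AbstractFoo}}

f(foo1::AbstractFoo, foo2::AbstractFoo, d::FooMetric) = d(foo1, foo2)

f(SomeFoo(), SomeFoo(), FooMetric(dist1))   # == 1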

view this post on Zulip Sukera (Nov 22 2021 at 12:50):

I can't quite see the difference:

See:

Julia will always specialize when the argument is used within the method, but not if the argument is just passed through to another function.
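
A small sketch of that distinction, with invented function names:

# `calls` actually calls its function argument, so Julia specializes it
# for each concrete function passed in:
calls(f, x) = f(x)

# `passes` only forwards the argument, so Julia may compile a single method
# body that treats f as ::Function rather than specializing per function:
passes(f, x) = calls(f, x)

# Writing the signature with a type parameter forces specialization again:
passes2(f::F, x) where {F} = calls(f, x)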

view this post on Zulip Sukera (Nov 22 2021 at 12:50):

it will still infer correctly, it just may not need to specialize

view this post on Zulip Sukera (Nov 22 2021 at 12:51):

i.e. the differently inferred methods may point to the same non-specialized code

view this post on Zulip Sukera (Nov 22 2021 at 12:52):

unless you really need FunctionWrappers, I wouldn't start with it in mind

view this post on Zulip Sukera (Nov 22 2021 at 12:52):

in general, julia style seems to favour shallower type hierarchies, in contrast to e.g. Java or C#, where deeply nested type hierarchies are common

view this post on Zulip Sukera (Nov 22 2021 at 12:53):

one reason for this is that abstract types don't hold state, i.e. there is no structural inheritance
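
A small sketch of what that means in practice (invented names): shared fields come from composition, not from the abstract supertype.

abstract type AbstractShape end      # cannot declare fields on an abstract type

struct Metadata                      # shared state lives in a concrete struct...
    name::String
end

struct Circle <: AbstractShape       # ...and is composed into each subtype
    meta::Metadata
    radius::Float64
end

shapename(s::Circle) = s.meta.name   # behaviour is shared via methods, not fields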

view this post on Zulip Jesper Stemann Andersen (Nov 22 2021 at 13:38):

Sukera said:

unless you really need FunctionWrappers, I wouldn't start with it in mind

We don't - I prefer to define the distance types and let dispatch handle it, i.e. avoiding ::Function. It was more for the sake of argument - just saying "you shouldn't do it like that - that's not the way it's done in Julia" doesn't feel like a strong argument.

view this post on Zulip Chad Scherrer (Nov 22 2021 at 14:05):

This sounds related to a problem I've run into a few times. If you pass f as a function (generically or ::Function) you usually can't know anything statically about the function.

It's often useful to know "this function returns a function" or "this function returns a lower-triangular matrix of Float64s". Ideally, there could even be a way to dispatch on things like this. But Julia isn't set up this way. For dispatch, you can only do ::Function or ::typeof(f). You either know nothing or everything, there's no in between. So you're often stuck with just running it and seeing what you get. In cases where the function is expensive, this can be painful.

I had high hopes for Core.Compiler.return_type, but currently it doesn't seem to help much. Maybe that will change?
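
To make the "nothing or everything" point concrete, a tiny sketch with invented names:

# "Everything": this method only accepts the exact function `sin`.
apply_sin(g::typeof(sin), x) = g(x)

# "Nothing": this accepts any function, with no static knowledge of its
# argument or return types.
apply_any(g::Function, x) = g(x)

# There is no middle ground like ::SomeFunction{Float64,Tuple{Float64}} to
# dispatch on "any function from Float64 to Float64".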

view this post on Zulip Sukera (Nov 22 2021 at 14:13):

Core.Compiler.return_type itself is internal anyway, no?

view this post on Zulip Sukera (Nov 22 2021 at 14:13):

if we get to dispatch on something like that, we will probably get syntax for it

view this post on Zulip Sukera (Nov 22 2021 at 14:14):

at its core, julia is still a dynamic language, you have to keep that in mind - just because we _can_ access some things from the compiler, doesn't mean we necessarily should

view this post on Zulip Sukera (Nov 22 2021 at 14:14):

(though I do think this desire will only grow stronger, to support static binaries better)

view this post on Zulip Chad Scherrer (Nov 22 2021 at 14:34):

I'd argue the opposite of that - it's referred to as a dynamic language, but just because you can ignore the types doesn't mean you should :upside_down:

I like Keno's description of "locally static neighborhoods" as the key to Julia's performance. So to make things better and faster, we should make those neighborhoods bigger, when we can.

As for functions, the best workaround I've found for this is making structs callable, then using a type hierarchy for dispatch. Then you can dispatch on abstract types to encode things you know statically.
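
A minimal sketch of that callable-struct workaround, with invented names (assuming the callable is meant to produce a lower-triangular factor, as in the example above):

using LinearAlgebra

# Encode what is known statically about a callable in an abstract type...
abstract type ReturnsLowerTriangular end

# ...and make each concrete callable a subtype of it.
struct CholFactor <: ReturnsLowerTriangular end
(::CholFactor)(A) = cholesky(A).L

# Other code can now dispatch on that promise instead of on ::Function:
process(f::ReturnsLowerTriangular, A) = f(A)

process(CholFactor(), [4.0 2.0; 2.0 3.0])   # a LowerTriangular matrix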

view this post on Zulip Simeon Schaub (Nov 22 2021 at 16:12):

I had high hopes for Core.Compiler.return_type, but currently it doesn't seem to help much. Maybe that will change?

What do you mean by that? return_type does exactly what it's supposed to. Dispatching on it is usually a bad idea, though: because of the way Julia is designed, type inference is never guaranteed to be exact, so Any is always a valid answer for return_type.

view this post on Zulip Chad Scherrer (Nov 22 2021 at 16:19):

Just that it seems to return Any for cases that seem reasonable to infer. Maybe there's a way to help it, say adding methods to cases I know? Would that propagate correctly?

view this post on Zulip Sukera (Nov 22 2021 at 16:24):

depends on the specific case

view this post on Zulip Mason Protter (Nov 22 2021 at 18:28):

no idea if this is useful to you but you can always do this sort of pattern:

struct MethodWrapper{F, T <: Tuple, R}
    f::F
end

MethodWrapper(f::F, Ts)  where {F}           = MethodWrapper{F, toTuple(Ts), Any}(f)
MethodWrapper(f::F, (Ts, R)::Pair) where {F} = MethodWrapper{F, toTuple(Ts),   R}(f)
toTuple(::Type{T}) where {T} = Tuple{T}
toTuple(Ts::Tuple) = Tuple{Ts...}

(M::MethodWrapper{F, Ts, R})(  args...) where {F, Ts, R} = invoke(M.f, Ts, args...)::R
(M::MethodWrapper{F, Ts, Any})(args...) where {F, Ts}    = invoke(M.f, Ts, args...)

function Base.show(io::IO, M::MethodWrapper{F, T, R}) where {F, T <: Tuple, R}
    Ts = collect(T.parameters)
    print(io, string(M.f), "(::$(Ts[1])", (", ::$(Ts[i])" for i in 2:length(Ts))..., ")::$R")
end

macro method(ex::Expr)
    @assert ex.head ∈ (:call, :(::))
    if ex.head == :call
        R = Any
    else
        R  = ex.args[2]
        ex = ex.args[1]
    end
    f = ex.args[1]
    Ts = map(ex.args[2:end]) do arg::Expr
        @assert arg.head == :(::)
        arg.args[length(arg.args)]
    end
    esc(:(MethodWrapper($f, ($(Tuple(Ts)...),) => $R)))
end

view this post on Zulip Mason Protter (Nov 22 2021 at 18:29):

and then

f(x, y) = x^2 + 2x*y
m = @method f(::Real, ::Complex) :: Complex

#+RESULTS:
: f(::Real, ::Complex)::Complex
typeof(m)

#+RESULTS:
: MethodWrapper{typeof(f),Tuple{Real,Complex},Complex}
m(1, 1 + im)

#+RESULTS:
: 3 + 2im
m2 = @method f(::Real, ::Complex)::Real
m2(1, 1 + im)

#+RESULTS:
: TypeError: in typeassert, expected Real, got a value of type Complex{Int64}

view this post on Zulip Chad Scherrer (Nov 22 2021 at 18:40):

Ok this is really interesting:

julia> @benchmark $f(a,b) setup = (a=rand(); b=rand() + rand()im)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  1.162 ns … 4.749 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     1.172 ns             ┊ GC (median):    0.00%
 Time  (mean ± σ):   1.179 ns ± 0.074 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  1.16 ns        Histogram: frequency by time        1.2 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> @benchmark $m(a,b) setup = (a=rand(); b=rand() + rand()im)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  1.162 ns … 4.819 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     1.182 ns             ┊ GC (median):    0.00%
 Time  (mean ± σ):   1.182 ns ± 0.067 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  1.16 ns        Histogram: frequency by time        1.2 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> Core.Compiler.return_type(m, Tuple{Real, Complex})
Complex

julia> Core.Compiler.return_type(f, Tuple{Real, Complex})
Any

view this post on Zulip Sukera (Nov 22 2021 at 18:47):

it's not too surprising imo

view this post on Zulip Sukera (Nov 22 2021 at 18:47):

it's basically helping inference along by asserting the return type at the end of the call

view this post on Zulip Mason Protter (Nov 22 2021 at 18:52):

Yeah. Inference basically refuses to work on abstract types

view this post on Zulip Mason Protter (Nov 22 2021 at 18:53):

Core.Compiler.return_type(+, Tuple{Real, Real})

#+RESULTS:
: Any
Core.Compiler.return_type((x, y) -> (x + y)::Real, Tuple{Real, Real})

#+RESULTS:
: Real

view this post on Zulip Mason Protter (Nov 22 2021 at 18:53):

There's no magic happening here

view this post on Zulip Mason Protter (Nov 22 2021 at 18:54):

In fact, you can even just lie to the compiler this way:

Core.Compiler.return_type((x, y) -> (x + y)::String, Tuple{Real, Real})

#+RESULTS:
: String

view this post on Zulip Sukera (Nov 22 2021 at 18:55):

yup

view this post on Zulip Sukera (Nov 22 2021 at 18:56):

inference can't really know what Real + Real would be - they can't be instantiated after all

view this post on Zulip Sukera (Nov 22 2021 at 18:56):

imo that's a good thing, because I think it makes the trail you have to search through before you find the instability much shorter

view this post on Zulip Mason Protter (Nov 22 2021 at 18:58):

This is why I think that type classes could be a potentially useful thing in julia even though we already have multiple dispatch. Besides being one way of expressing traits, they also allow you to express abstract type inference because they're more structured

view this post on Zulip Sukera (Nov 22 2021 at 19:37):

I just want to properly express that a thing is iterable AND indexable in the type system

view this post on Zulip Sukera (Nov 22 2021 at 19:38):

but without necessarily having it be <: AbstractArray (non-rectangular and all that)

view this post on Zulip Sukera (Nov 22 2021 at 19:39):

really, AbstractArray should just mean Indexable, Iterable and Rectangular
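
Short of changes to the language, a common workaround is the Holy-traits pattern - a hedged sketch with invented trait names:

# Trait types carry no state; they exist only to drive dispatch.
struct Iterable end
struct NotIterable end

# Types opt in explicitly - nothing derives this from "has an iterate method".
traititerable(::Type{<:AbstractArray}) = Iterable()
traititerable(::Type{<:Tuple})         = Iterable()
traititerable(::Type)                  = NotIterable()

firstvalue(x) = firstvalue(traititerable(typeof(x)), x)
firstvalue(::Iterable,    x) = first(x)
firstvalue(::NotIterable, x) = error("$(typeof(x)) is not known to be iterable")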

view this post on Zulip Chad Scherrer (Nov 22 2021 at 19:59):

Yeah, I mostly use Tricks.jl to test for iterability, but it would be much better to have it built in and fast
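
Presumably along these lines - a sketch assuming Tricks.jl's static_hasmethod, which mirrors hasmethod but is resolved at compile time (check the Tricks.jl README for the exact API):

using Tricks: static_hasmethod

# Resolved at compile time; Tricks sets up backedges so the answer is
# refreshed if an iterate method is defined later.
isiterable(::Type{T}) where {T} = static_hasmethod(iterate, Tuple{T})

struct NotIter end        # invented type with no iterate method
isiterable(Vector{Int})   # true
isiterable(NotIter)       # false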

view this post on Zulip Chad Scherrer (Dec 07 2021 at 14:16):

Mason Protter said:

no idea if this is useful to you but you can always do this sort of pattern:

struct MethodWrapper{F, T <: Tuple, R}
    f::F
end

MethodWrapper(f::F, Ts)  where {F}           = MethodWrapper{F, toTuple(Ts), Any}(f)
MethodWrapper(f::F, (Ts, R)::Pair) where {F} = MethodWrapper{F, toTuple(Ts),   R}(f)
toTuple(::Type{T}) where {T} = Tuple{T}
toTuple(Ts::Tuple) = Tuple{Ts...}

(M::MethodWrapper{F, Ts, R})(  args...) where {F, Ts, R} = invoke(M.f, Ts, args...)::R
(M::MethodWrapper{F, Ts, Any})(args...) where {F, Ts}    = invoke(M.f, Ts, args...)

function Base.show(io::IO, M::MethodWrapper{F, T, R}) where {F, T <: Tuple, R}
    Ts = collect(T.parameters)
    print(io, string(M.f), "(::$(Ts[1])", (", ::$(Ts[i])" for i in 2:length(Ts))..., ")::$R")
end

macro method(ex::Expr)
    @assert ex.head ∈ (:call, :(::))
    if ex.head == :call
        R = Any
    else
        R  = ex.args[2]
        ex = ex.args[1]
    end
    f = ex.args[1]
    Ts = map(ex.args[2:end]) do arg::Expr
        @assert arg.head == :(::)
        arg.args[length(arg.args)]
    end
    esc(:(MethodWrapper($f, ($(Tuple(Ts)...),) => $R)))
end

@Mason Protter have you considered making this into a package? I think lots of people use FunctionWrappers.jl to solve this problem, but that adds a lot of overhead. Catlab.jl is one example:
https://github.com/AlgebraicJulia/Catlab.jl/issues/586#issuecomment-987502657
@Evan Patterson @James Fairbanks

view this post on Zulip Mason Protter (Dec 07 2021 at 16:33):

Yeah I could stick it in a package if people would find it helpful.


Last updated: Oct 02 2023 at 04:34 UTC