is it just me having something misconfigured, or do LanguageServer.jl and metaprogramming generally not dance well together?
If that's the out-of-the-box state, are there any nifty workarounds?
sadly that's the state of things :/
hmm. any possible solutions or even discussions on the horizon? this can get rather wild with packages using a lot of macros (like JuMP.jl and Makie.jl, to name a few popular ones)
macros are tricky because they run actual user code on the source AST
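for example (a minimal sketch, with a made-up `@noisy` macro): the macro body below is ordinary Julia code that runs at expansion time, so merely expanding the call already executes user code:
```julia
# a macro's body is ordinary Julia code that runs at expansion time
macro noisy(ex)
    println("expanding: ", ex)   # side effect during expansion, not at runtime
    return esc(ex)               # emit the argument unchanged
end

@noisy 1 + 1   # prints "expanding: 1 + 1" while being expanded, then evaluates to 2
```
so a tool that wants to see the post-expansion code has to be willing to run whatever the macro author wrote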
maybe @Sebastian Pfitzner knows about this though
I completely lack knowledge of how LanguageServer.jl works at the moment. Does it not evaluate anything and just do static AST analysis? Then it's tough.
what about JET.jl? I think JET interprets some code, and as far as I can see it also expands the macros. that should work..
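something like this is what I have in mind (untested sketch; `@report_call` is the JET entry point I remember, the exact usage may have changed):
```julia
using JET

# the problem inside `sum` only becomes visible after lowering / macro
# expansion and type inference, which is the level JET analyzes at
report_me(s) = @show sum(s)

@report_call report_me("julia")   # JET should flag the missing +(::Char, ::Char) method
```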
I remember some discussion about incorporating JET as a linter, and I've heard the VS Code people already have something that works, but I'm not sure what the stage is there for big adoption in LanguageServer.jl
JET works on a completely different level
you wouldn't use JET solely as a linter, you're probably thinking of the (not at all finished/workable) https://github.com/aviatesk/JETLS.jl
which tries to incorporate the things JET.jl reports as issues via a LanguageServer
doesn't integrate with the existing LanguageServer.jl
at the moment, LS.jl does its own parsing, but I'm not sure that it actually expands macros during that (as that may have side effects)
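to illustrate the difference (rough sketch): parsing gives you the macro call purely as data without running anything, whereas `macroexpand` actually runs the macro body:
```julia
# parsing never runs user code: we just get an Expr with head :macrocall
ex = Meta.parse("@time sqrt(2)")

# expansion *does* run the macro body (harmless for @time, but arbitrary in general)
expanded = macroexpand(Main, ex)   # the timing boilerplate that @time inserts
```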
in theory we could do some very limited analysis of macro code, but a generic solution is not really possible with the current LS.jl design
Julia has some introspection tools, like `@which`, `methods`, .... Of course, to use these you need to evaluate the code. Does the current design of LanguageServer.jl make use of such tools, i.e. does it evaluate the code?
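e.g. (sketch): these only give answers because the methods exist in a running session:
```julia
using InteractiveUtils   # provides @which (methods is in Base)

f(x::Int)    = x + 1
f(s::String) = s * "!"

@which f(1)   # the specific method this call dispatches to -- needs f to be evaluated
methods(f)    # the full method table of f, again only known in a live session
```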
The core issue is that the language server would need to get VASTLY more information from a Julia session that has loaded the package code. There's just no way a Julia session in its current form can provide that information, though, especially not for top-level code (which is evaluated eagerly)