Hello, I've been thinking about writing a Julia package that's similar to lmfit <http://lmfit.github.io/lmfit-py/intro.html>, which provides a nice API for developing and exploring fitting models. I'll show you one of the key parts of the API: you create a `Model` from a function, and the model knows about all of the function's arguments, which become the fit parameters. You can then give parameters values and bounds, fix or unfix them easily, and pass a `Parameters` object into a fit call to be used for starting guesses.

```python
In [1]: import lmfit

In [8]: def myfunc(x, a, b, c=4): return x*a + b*c

In [9]: model = lmfit.Model(myfunc)

In [10]: params = model.make_params(); params.pretty_print()
Parameters({
    'a': <Parameter 'a', -inf, bounds=[-inf:inf]>,
    'b': <Parameter 'b', -inf, bounds=[-inf:inf]>,
    'c': <Parameter 'c', 4, bounds=[-inf:inf]>,
})
```

Clearly, we can't do exactly the same thing in Julia, because `myfunc` could have multiple definitions to work with multiple dispatch. So perhaps something like the following could work?

```julia
julia> myfunc(x, a, b; c=4) = x*a + b*c
myfunc (generic function with 1 method)

julia> method = @which myfunc(4, 5, 6; c=7)
myfunc(x, a, b) at none:1

julia> model = Model(method)
```

It looks like `Base.arg_decl_parts` will get me a list of the arguments, but it doesn't know about the keyword arguments or default values (neither does `@which`). Maybe use this API, but require keyword arguments to be added manually? Or I could try something like

```julia
julia> x, a, b = collect(1:10), 4, 5

julia> @Model myfunc(x, a, b; c=4)
```

Then all the information about the desired arguments is in the argument to the macro. But it seems annoying to make the user define a bunch of probably useless variables just to make the macro call that builds the model. I don't have a good sense for how any of these would interact with callable types; do any problems come to mind? Any thoughts would be appreciated.
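For comparison, here is the kind of introspection the Python side gets for free, and which has no direct Julia equivalent for keyword defaults. This is a minimal sketch using the standard-library `inspect` module (I believe lmfit's `Model` does something along these lines internally, though I haven't checked the exact mechanism):

```python
import inspect

def myfunc(x, a, b, c=4):
    return x * a + b * c

# Walk the signature and record each argument's name and default value,
# which is all the information a Model needs to build its Parameters.
sig = inspect.signature(myfunc)
for name, param in sig.parameters.items():
    has_default = param.default is not inspect.Parameter.empty
    print(name, param.default if has_default else "<no default>")
```

This prints the four argument names with `c` carrying its default of 4, which is exactly the information that's awkward to recover from a Julia `Method` object and motivates the macro-based approach above.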