Hello,
Our current approach to defining custom UDFs and their implementations works by…

  1.  Defining the UDF and its implementation in a class member function
  2.  Creating Calcite definitions in the form of `schema.Function`s via
`ScalarFunctionImpl.create` or `AggregateFunctionImpl.create` with that class
and member function
  3.  Returning a name -> `Function` multimap from our Schema's implementation
of `AbstractSchema.getFunctionMultimap` (sketched below)

That map is then consumed by the CatalogReader used for validation.
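
As a concrete (hypothetical) illustration of that pattern — the class and
function names below are placeholders, not our actual code:

```java
import com.google.common.collect.ImmutableMultimap;
import com.google.common.collect.Multimap;
import org.apache.calcite.schema.Function;
import org.apache.calcite.schema.impl.AbstractSchema;
import org.apache.calcite.schema.impl.ScalarFunctionImpl;

// Illustrative only; names stand in for our real schema and UDF classes.
public class OurSchema extends AbstractSchema {

  // 1. The UDF implementation lives in a class member function.
  public static class OurUdfs {
    public String shout(String input) {
      return input == null ? null : input.toUpperCase() + "!";
    }
  }

  // 3. Expose the Calcite Function definitions, keyed by function name.
  @Override
  protected Multimap<String, Function> getFunctionMultimap() {
    // 2. Wrap the member function in a schema.Function via ScalarFunctionImpl.create.
    return ImmutableMultimap.<String, Function>builder()
        .put("SHOUT", ScalarFunctionImpl.create(OurUdfs.class, "shout"))
        .build();
  }
}
```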

As far as we can tell, this is the only way to provide UDFs to Calcite when
using JdbcMeta as the Meta for the Avatica Service and handler.
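
For context, our server wiring looks roughly like the following (the JDBC URL,
model path, and port are placeholders; the schema above is assumed to be
registered through the Calcite model):

```java
import java.sql.SQLException;
import org.apache.calcite.avatica.jdbc.JdbcMeta;
import org.apache.calcite.avatica.remote.Driver;
import org.apache.calcite.avatica.remote.LocalService;
import org.apache.calcite.avatica.server.HttpServer;

// Rough sketch of the Avatica server setup, not our exact code.
public class ServerMain {
  public static void main(String[] args) throws SQLException, InterruptedException {
    // JdbcMeta backed by the Calcite JDBC driver; the model registers our schema.
    JdbcMeta meta = new JdbcMeta("jdbc:calcite:model=/path/to/model.json");
    LocalService service = new LocalService(meta);
    HttpServer server = new HttpServer.Builder()
        .withHandler(service, Driver.Serialization.PROTOBUF)
        .withPort(8765)
        .build();
    server.start();
    server.join();
  }
}
```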

This method of defining UDFs and their implementations severely limits the
kinds of functions we can define, relative to what is possible by declaring
them as SqlOperators in an SqlOperatorTable.
For example, it does not allow the definition of variadic functions, which is
a particularly sore spot for our users.
The flip side of using SqlOperators is that it is unclear how to then bind 
implementations to those operators for Calcite’s execution.
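
For comparison, here is a rough sketch (again hypothetical, with made-up
names) of declaring a variadic function as a SqlOperator and collecting it
into an SqlOperatorTable; the open question is how to bind an implementation
to such an operator when going through JdbcMeta:

```java
import org.apache.calcite.sql.SqlFunction;
import org.apache.calcite.sql.SqlFunctionCategory;
import org.apache.calcite.sql.SqlKind;
import org.apache.calcite.sql.SqlOperatorTable;
import org.apache.calcite.sql.type.OperandTypes;
import org.apache.calcite.sql.type.ReturnTypes;
import org.apache.calcite.sql.util.ListSqlOperatorTable;

// Illustrative only: a variadic CONCAT_ALL operator the validator would accept,
// but with no obvious way to attach an implementation for execution.
public class OurOperators {

  public static final SqlFunction CONCAT_ALL =
      new SqlFunction(
          "CONCAT_ALL",
          SqlKind.OTHER_FUNCTION,
          ReturnTypes.VARCHAR_2000,              // return type inference
          null,                                   // no operand type inference
          OperandTypes.VARIADIC,                  // accepts any number of operands
          SqlFunctionCategory.USER_DEFINED_FUNCTION);

  public static SqlOperatorTable operatorTable() {
    ListSqlOperatorTable table = new ListSqlOperatorTable();
    table.add(CONCAT_ALL);
    return table;
  }
}
```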

I’ve previously been pointed to this example [1], but it is only applicable
when building up the entire pipeline ourselves, which is a non-starter for us.

Any help is greatly appreciated.


[1] https://github.com/zabetak/calcite-tutorial/blob/31cce59c747e0b763a934109db6a6e7055f175ae/solution/src/main/java/com/github/zabetak/calcite/tutorial/LuceneQueryProcessor.java#L166
