Hi Bowen,
thanks for this proposal following our discussion around the FunctionCatalog
rework. I like the architecture proposed in the FLIP because it builds on
existing concepts and only slightly modifies the code base.
However, I would like to discuss some unanswered questions:
1) Terminology: Can we use the term "module" instead of "plugin"? Flink
recently introduced the concept of a "plugin" for filesystems and
other purposes. So far our table extensions don't rely on this plugin
mechanism, which means that classes on the classpath could potentially
clash. The term "module" also fits better here for function modules.
2) Intermediate plugin changes: Can a user call usePlugins(...) multiple
times within a session and switch to a completely different set of
modules? I guess we need to support this to allow interactive sessions.
But what are the side effects if a plugin is dropped or a newly loaded
plugin overrides a previously registered function name?
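To make the override question in point 2 concrete, here is a minimal,
self-contained sketch of one possible resolution semantic. It is NOT the
FLIP's actual API: the class ModuleManagerSketch and its methods
useModules(...) and resolveFunction(...) are hypothetical names invented
for illustration. The sketch assumes first-loaded-module-wins resolution,
so a newly loaded module cannot shadow a function name an earlier module
already provides; the FLIP would need to pick and document a rule like this.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch, not the FLIP's API: models how repeated
// usePlugins(...)-style calls could replace the active module set, and
// how function names could resolve when modules overlap.
public class ModuleManagerSketch {

    /** A module maps function names to definitions (strings here for brevity). */
    public interface Module {
        Optional<String> getFunctionDefinition(String name);
    }

    // LinkedHashMap keeps insertion order: resolution walks modules in load order.
    private final Map<String, Module> modules = new LinkedHashMap<>();

    /** Replaces the whole module set, like calling usePlugins(...) again mid-session. */
    public void useModules(Map<String, Module> newModules) {
        modules.clear();
        modules.putAll(newModules);
    }

    /** First loaded module that knows the name wins; later modules cannot shadow it. */
    public Optional<String> resolveFunction(String name) {
        for (Module m : modules.values()) {
            Optional<String> def = m.getFunctionDefinition(name);
            if (def.isPresent()) {
                return def;
            }
        }
        return Optional.empty();
    }
}
```

Under this rule, loading a core module first and a Hive module second means
CONCAT still resolves to the core definition, while Hive-only functions fall
through to the Hive module; dropping all modules makes every lookup fail.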
3) Missing CorePlugin: What happens if the CorePlugin is not loaded?
Will CAST, AS etc. not be available? This could lead to weird behavior
but I'm also fine with letting advanced users deal with the problem
themselves. They can still return function definitions from the
BuiltInFunctionDefinition class.
4) SQL: Can we already discuss what a SQL command for this would look
like? E.g.:
LOAD MODULE 'hive' [WITH ('prop'='myProp', ...)];
UNLOAD MODULE 'hive';
Thanks,
Timo
On 19.09.19 23:53, Bowen Li wrote:
Thanks everyone for your feedback. I've converted it to a FLIP wiki [1].
Please take another look. If there are no more concerns, I'd like to start a
voting thread for it.
Thanks
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-68%3A+Extend+Core+Table+System+with+Modular+Plugins
On Tue, Sep 17, 2019 at 11:25 AM Bowen Li <bowenl...@gmail.com> wrote:
Hi devs,
We'd like to kick off a conversation on "FLIP-68: Extend Core Table
System with Modular Plugins" [1].
The modular approach was raised in discussion of how to support Hive
built-in functions in FLIP-57 [2]. As we discussed and looked deeper, we
think it’s a good opportunity to broaden the design and the corresponding
problem it aims to solve. The motivation is to expand Flink’s core table
system and enable users to do customizations by writing pluggable modules.
There are two aspects of the motivation:
1. Empower users to write code and do customized development for the Flink
table core
2. Enable users to integrate Flink with the cores and built-in objects of
other systems, so they can seamlessly reuse what they are familiar with from
other SQL systems as core and built-in objects of the Flink table system
Please take a look; feedback is welcome.
Bowen
[1]
https://docs.google.com/document/d/17CPMpMbPDjvM4selUVEfh_tqUK_oV0TODAUA9dfHakc/edit?usp=sharing
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-57-Rework-FunctionCatalog-td32291.html