Hi Devs,

I am interested in learning more about the MODEL, ML_PREDICT, and ML_EVALUATE
functionality added in the following FLIP.

https://cwiki.apache.org/confluence/display/FLINK/FLIP-437%3A+Support+ML+Models+in+Flink+SQL


I see the original FLIP provides extensibility for local model providers in Flink.


Is there a way to implement pluggable local model providers in Python? For
example, generating embeddings with sentence-transformer models running
locally in Flink.


One option could be to introduce a model provider factory implementation in
Java that internally calls a predict function in Python. But this pushes a
lot of Java-to-Python communication/translation work into each provider
implementation.


Something like a PythonRuntimeProvider, alongside PredictRuntimeProvider /
AsyncRuntimeProvider, that handles the Java -> Python translation out of
the box would help de-duplicate that effort.


Could you point me to any existing discussions related to this, or suggest
other ways to achieve the same? Please share your thoughts.


-Thanks,

Swapna Marru
