[ https://issues.apache.org/jira/browse/FLINK-34992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17943456#comment-17943456 ]
Shengkai Fang commented on FLINK-34992:
---------------------------------------
> ModelFactory, ModelSource and ModelRuntimeProvider
+1 for the structure. But I think it would be better to have a FLIP to make it
clearer, so that others can help contribute AI model sources.
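To make that concrete, here is a minimal sketch of how the factory/source/provider
structure could be shaped, assuming an SPI-style lookup keyed on the `provider`
option, similar to table connector factories. These interfaces, method names and
signatures are assumptions for illustration only, not an existing Flink API.
```
import java.util.Map;

// Hypothetical sketch of the proposed ModelFactory / ModelSource /
// ModelRuntimeProvider structure; all names and signatures are assumed.

/** Matched against the 'provider' option (e.g. "ONNX"), analogous to a connector factory. */
interface ModelFactory {
    String factoryIdentifier();

    /** Builds a ModelSource from the model's WITH options. */
    ModelSource createModelSource(Map<String, String> options);
}

/** Describes where the model artifact lives and how to obtain a runtime for it. */
interface ModelSource {
    ModelRuntimeProvider getRuntimeProvider();
}

/** Supplies the runtime the operator calls; schemas come from the model's INPUT/OUTPUT clauses. */
interface ModelRuntimeProvider {
    Object[] predict(Object[] inputRow);
}
```
With such a split, adding a new model source would mostly mean registering another
ModelFactory implementation, which fits the goal of letting others contribute model sources.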
> `task` is needed since for remote model, it's just an endpoint we can call,
> you don't know the model type.
I think the framework actually doesn't care about the model type. The framework
only cares about the input schema and output schema of the operator (Model). As
for evaluation, it's not clear which metric is used to evaluate a model. I am
inclined to add more specific loss functions and let users determine which loss
function is used.
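To illustrate the "more specific loss functions" point, here is a small hypothetical
sketch of metrics the framework could expose explicitly so that the user picks one,
instead of deriving an evaluation metric from a `task`. The class and method names
are assumptions, not part of any existing Flink API.
```
// Hypothetical illustration only: well-defined loss functions the user could
// select explicitly for evaluation.
final class LossFunctions {
    private LossFunctions() {}

    /** Squared error, a natural choice for regression-style evaluation. */
    static double squaredError(double prediction, double label) {
        double diff = prediction - label;
        return diff * diff;
    }

    /** Absolute error. */
    static double absoluteError(double prediction, double label) {
        return Math.abs(prediction - label);
    }

    /** Log loss for binary classification; prediction is a probability in (0, 1). */
    static double logLoss(double probability, double label) {
        double p = Math.min(Math.max(probability, 1e-15), 1.0 - 1e-15);
        return -(label * Math.log(p) + (1.0 - label) * Math.log(1.0 - p));
    }
}
```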
After reading the examples, I prefer using `provider` instead of `format` when the
model is local. For example:
```
CREATE MODEL `my_import_model`
INPUT (f1 INT, f2 STRING)
OUTPUT (label FLOAT)
WITH (
  'task' = 'regression',
  'type' = 'import',
  'provider' = 'ONNX',
  'path' = 'http://storage.googleapis.com/storage/t.onnx'
)
```
Could you explain the design in more detail?
> FLIP-437: Support ML Models in Flink SQL
> ----------------------------------------
>
> Key: FLINK-34992
> URL: https://issues.apache.org/jira/browse/FLINK-34992
> Project: Flink
> Issue Type: New Feature
> Components: Table SQL / API, Table SQL / Planner, Table SQL / Runtime
> Reporter: Hao Li
> Priority: Major
>
> This is an umbrella task for FLIP-437. FLIP-437:
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-437%3A+Support+ML+Models+in+Flink+SQL
--
This message was sent by Atlassian Jira
(v8.20.10#820010)