[jira] [Commented] (FLINK-30884) GetTable from Flink catalog should be judged whether it is a sink table

2023-02-03 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684104#comment-17684104
 ] 

luoyuxia commented on FLINK-30884:
--

I think I understand your case now. It's valid, but I still don't think it's the 
Catalog's responsibility. The Catalog is designed to store tables without caring 
whether they are used as a source or a sink.

For your case, I think you can implement it in DynamicTableSink / 
DynamicTableSource; you can generate this information in the 
getScanRuntimeProvider method.
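
A minimal sketch of that direction (the class name KafkaPlatformTableSource and the 
helper methods below are illustrative only, not part of Flink or the Kafka connector; 
the actual Kafka provider wiring is left as a placeholder):

{code:java}
import java.util.Properties;

import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;

/** Hypothetical source that resolves source-only options from an external platform. */
public class KafkaPlatformTableSource implements ScanTableSource {

    private final String topic;

    public KafkaPlatformTableSource(String topic) {
        this.topic = topic;
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext context) {
        // This method is only invoked when the table is actually read, so source-only
        // options (scan.startup.mode, properties.group.id, ...) can be resolved here.
        Properties sourceProps = fetchSourceOptionsFromPlatform(topic);
        return createKafkaScanProvider(sourceProps);
    }

    @Override
    public DynamicTableSource copy() {
        return new KafkaPlatformTableSource(topic);
    }

    @Override
    public String asSummaryString() {
        return "kafka-platform-source[" + topic + "]";
    }

    // Hypothetical helper: ask the external platform service for source-side options.
    private Properties fetchSourceOptionsFromPlatform(String topic) {
        Properties props = new Properties();
        props.setProperty("scan.startup.mode", "group-offsets");
        props.setProperty("properties.group.id", "platform-" + topic);
        return props;
    }

    // Placeholder: build the real ScanRuntimeProvider here,
    // e.g. SourceProvider.of(...) around an actual Kafka source.
    private ScanRuntimeProvider createKafkaScanProvider(Properties sourceProps) {
        throw new UnsupportedOperationException("wire in the actual Kafka source here");
    }
}
{code}

A mirroring DynamicTableSink could resolve the sink-only options in the same lazy way.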

 

> GetTable from Flink catalog should be judged whether it is a sink table
> ---
>
> Key: FLINK-30884
> URL: https://issues.apache.org/jira/browse/FLINK-30884
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.16.1
>Reporter: yuanfenghu
>Priority: Major
>
> When I use a third-party persistent catalog to manage the metadata of my 
> persistent tables, I may want to decide which options to generate for a table 
> based on whether it is a sink table.
> For example, when using the Kafka connector:
> When I use Kafka as a source table, the following parameters are required:
> offset, properties.group.id, etc.
> When I use Kafka as a sink table, I will pass in some parameters that apply 
> only to the sink, for example:
> sink.delivery-guarantee
> sink.partitioner
> So why can't we add a switch to tell the catalog this information? It would be 
> very useful in platform development!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30884) GetTable from Flink catalog should be judged whether it is a sink table

2023-02-03 Thread yuanfenghu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684088#comment-17684088
 ] 

yuanfenghu commented on FLINK-30884:


In our case, we use an external catalog to manage my Kafka tables, preserving 
information such as:

the topic, format, columns, and column types.

But the Kafka table can actually be used for either reading or writing. When it 
is used as a sink table, I may add some other parameters such as:

sink.delivery-guarantee

sink.partitioner

the properties.sasl.jaas.config authentication information, etc.

As a source I need to add these parameters:

scan.startup.mode

offset

the properties.sasl.jaas.config authentication information, etc.

I need to ask some external system to generate this information. Therefore, I 
want to know whether getTable is called for a sink table so that I can determine 
how to generate these options.
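
To make the split above concrete, a sketch of the same topic registered twice with 
role-specific options, as Flink SQL issued from Java (table names, column list, and 
broker address are made up; the properties.sasl.jaas.config value is omitted):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaRoleSpecificOptions {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Reading the topic: source-only options (scan.startup.mode, properties.group.id, ...).
        tEnv.executeSql(
                "CREATE TABLE orders_source (id BIGINT, msg STRING) WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'orders',\n"
                        + "  'format' = 'json',\n"
                        + "  'properties.bootstrap.servers' = 'broker:9092',\n"
                        + "  'properties.group.id' = 'platform-orders',\n"
                        + "  'scan.startup.mode' = 'group-offsets'\n"
                        + ")");

        // Writing the topic: sink-only options (sink.delivery-guarantee, sink.partitioner, ...).
        tEnv.executeSql(
                "CREATE TABLE orders_sink (id BIGINT, msg STRING) WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'orders',\n"
                        + "  'format' = 'json',\n"
                        + "  'properties.bootstrap.servers' = 'broker:9092',\n"
                        + "  'sink.delivery-guarantee' = 'at-least-once',\n"
                        + "  'sink.partitioner' = 'fixed'\n"
                        + ")");
    }
}
{code}

The schema is identical in both roles; only the connector options differ, which is why the 
platform would like to know the role when it generates them.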




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30884) GetTable from Flink catalog should be judged whether it is a sink table

2023-02-03 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17683812#comment-17683812
 ] 

luoyuxia commented on FLINK-30884:
--

Hi, [~heigebupahei] Could you please explain the use case in which the catalog 
needs to know whether a table is a source or a sink?

From my side, a table stored in the catalog can be either a source or a sink. I 
really don't think the catalog needs this information.
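
For reference, a small sketch of why the catalog lookup itself is role-agnostic (assuming 
`catalog` is some Catalog implementation and `mydb.kafka_table` is an illustrative table):

{code:java}
import java.util.Map;

import org.apache.flink.table.catalog.Catalog;
import org.apache.flink.table.catalog.CatalogBaseTable;
import org.apache.flink.table.catalog.ObjectPath;

public class CatalogLookupSketch {
    // The lookup carries no information about whether the planner will later
    // build a source or a sink from the returned table.
    static Map<String, String> lookupOptions(Catalog catalog) throws Exception {
        CatalogBaseTable table = catalog.getTable(new ObjectPath("mydb", "kafka_table"));
        return table.getOptions(); // a single option set, shared by both roles
    }
}
{code}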




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30884) GetTable from Flink catalog should be judged whether it is a sink table

2023-02-02 Thread yuanfenghu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17683386#comment-17683386
 ] 

yuanfenghu commented on FLINK-30884:


[~fsk119] [~godfrey]  




--
This message was sent by Atlassian Jira
(v8.20.10#820010)