[ https://issues.apache.org/jira/browse/FLINK-21643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296685#comment-17296685 ]

Maciej Obuchowski edited comment on FLINK-21643 at 3/7/21, 1:45 AM:
--------------------------------------------------------------------

As I already have a solution (working in production with Oracle) on my Flink
fork, I'll provide it as a draft solution.

 

EDIT: here: https://github.com/apache/flink/pull/15102


> JDBC sink should be able to execute statements on multiple tables
> -----------------------------------------------------------------
>
>                 Key: FLINK-21643
>                 URL: https://issues.apache.org/jira/browse/FLINK-21643
>             Project: Flink
>          Issue Type: New Feature
>          Components: Connectors / JDBC
>    Affects Versions: 1.12.2
>            Reporter: Maciej Obuchowski
>            Priority: Major
>              Labels: pull-request-available
>
> Currently the datastream JDBC sink supports outputting data to only one table:
> the user provides an SQL template, from which SimpleBatchStatementExecutor
> creates a PreparedStatement. Creating multiple sinks, each of which writes data
> to a single table, is impractical for a moderate to large number of tables,
> since relational databases don't usually tolerate a large number of connections.
> I propose adding DynamicBatchStatementExecutor, which will additionally
> require (a rough sketch of these pieces follows below):
> 1) a mechanism to create SQL statements based on a given object
> 2) a cache for prepared statements
> 3) a mechanism for determining which statement should be used for a given object
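
A minimal sketch of what those three pieces could look like in plain JDBC terms (class and member names such as DynamicStatementSketch, tableRouter and sqlFactory are illustrative assumptions, not the API of the linked PR): each record is routed to a target table, the SQL for that table is built once, and the resulting PreparedStatement is kept in a cache so later records for the same table reuse it.

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/**
 * Illustrative sketch only: instead of a single SQL template, route each record
 * to a table, lazily prepare one statement per table, and cache the prepared
 * statements for reuse.
 */
public class DynamicStatementSketch<T> {

    /** Fills the PreparedStatement parameters from a record. */
    public interface StatementBinder<T> {
        void bind(PreparedStatement stmt, T record) throws SQLException;
    }

    private final Function<T, String> tableRouter;       // 3) which table a record belongs to
    private final Function<String, String> sqlFactory;   // 1) builds the SQL for a table
    private final StatementBinder<T> binder;              // sets the statement parameters
    private final Map<String, PreparedStatement> cache = new HashMap<>(); // 2) statement cache

    private Connection connection;

    public DynamicStatementSketch(Function<T, String> tableRouter,
                                  Function<String, String> sqlFactory,
                                  StatementBinder<T> binder) {
        this.tableRouter = tableRouter;
        this.sqlFactory = sqlFactory;
        this.binder = binder;
    }

    public void open(Connection connection) {
        this.connection = connection;
    }

    public void addToBatch(T record) throws SQLException {
        String table = tableRouter.apply(record);
        PreparedStatement stmt = cache.get(table);
        if (stmt == null) {
            // prepare the statement for this table once, then reuse it
            stmt = connection.prepareStatement(sqlFactory.apply(table));
            cache.put(table, stmt);
        }
        binder.bind(stmt, record);
        stmt.addBatch();
    }

    public void executeBatch() throws SQLException {
        for (PreparedStatement stmt : cache.values()) {
            stmt.executeBatch();
        }
    }

    public void close() throws SQLException {
        for (PreparedStatement stmt : cache.values()) {
            stmt.close();
        }
        cache.clear();
    }
}
{code}

Batching then happens per cached statement: executeBatch() flushes every prepared statement in the cache and close() releases them. In the actual connector this logic would sit behind the same batch statement executor abstraction that SimpleBatchStatementExecutor implements, rather than in a standalone class.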



