Hey there,
Hope everyone is well!
Correct me if I am wrong, but it seems like Flink does not support
connection pooling for JDBC sinks.
Every sink opens its own connection, so with N sinks we end up with N
open connections.
There are several JDBC connection pooling libraries that would let N
JDBC sinks share only M open connections to the database, where M < N.
Given the number of different JDBC connection pooling options, I would
propose to extend JdbcSink
(https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/JdbcSink.java)
with another method that takes an interface along the lines of:
```
import java.io.Serializable;
import java.sql.Connection;
import java.sql.SQLException;

public interface ConnectionProvider extends Serializable {
    // Serializable so the provider can be shipped to the task managers.
    Connection getConnection() throws SQLException;
}
```
to build the SinkFunction.
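Roughly, using such an overload could then look something like this (just a
sketch to illustrate the shape; the overload itself, the `Order` POJO and the
`connectionProvider` variable are made up, while `JdbcStatementBuilder` and
`JdbcExecutionOptions` already exist):
```
// Hypothetical overload: like the existing JdbcSink.sink(...) factories,
// but taking the proposed ConnectionProvider instead of JdbcConnectionOptions.
SinkFunction<Order> sink =
        JdbcSink.sink(
                "INSERT INTO orders (id, amount) VALUES (?, ?)",
                (ps, order) -> {
                    ps.setLong(1, order.getId());
                    ps.setBigDecimal(2, order.getAmount());
                },
                JdbcExecutionOptions.defaults(),
                connectionProvider); // user-supplied ConnectionProvider
```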
This would allow users to plug in whatever connection pooling library
they want; all they would have to do is implement the
`ConnectionProvider` interface.
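For example, a user could back the interface with HikariCP along these
lines (a rough sketch under the interface proposed above; the class name
and configuration are made up):
```
import java.sql.Connection;
import java.sql.SQLException;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Sketch of a user-side ConnectionProvider backed by HikariCP. The pool is
// not serializable, so it is created lazily on the task manager.
public class HikariConnectionProvider implements ConnectionProvider {

    private final String jdbcUrl;
    private final int maxPoolSize;

    private transient HikariDataSource dataSource;

    public HikariConnectionProvider(String jdbcUrl, int maxPoolSize) {
        this.jdbcUrl = jdbcUrl;
        this.maxPoolSize = maxPoolSize;
    }

    @Override
    public Connection getConnection() throws SQLException {
        if (dataSource == null) {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl(jdbcUrl);
            config.setMaximumPoolSize(maxPoolSize);
            dataSource = new HikariDataSource(config);
        }
        return dataSource.getConnection();
    }
}
```
Whether the pool is shared across several sink instances on the same
task manager (e.g. via a static holder keyed by the JDBC URL) would be
up to the user's implementation.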
There is already a `JdbcConnectionProvider` interface
(https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/internal/connection/JdbcConnectionProvider.java),
but it is marked as internal.
Let me know what you think; I can turn this into a Jira ticket and
take a stab at the implementation if there is interest.
Best regards,
Dario