Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-05-07 Thread lorenzo.affetti
Thanks João for your replies!

I also saw the latest PR that allows properties to be specified.

Thanks for adding the pain points as well; that clarifies a lot.
On May 7, 2024 at 09:50 +0200, Muhammet Orazov wrote:
> Thanks João for pointing it out. I didn't know about the PR, I am going
> to check it.
>
> Best,
> Muhammet


Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-05-07 Thread Muhammet Orazov
Thanks João for pointing it out. I didn't know about the PR, I am going 
to check it.


Best,
Muhammet


On 2024-05-06 14:45, João Boto wrote:

Hi Muhammet,

Have you had a chance to review the recently merged pull request [1]?
We've introduced a new feature allowing users to include ad hoc 
configurations in the 'JdbcConnectionOptions' class.

```
new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
        .withUrl(FakeDBUtils.TEST_DB_URL)
        .withProperty("keyA", "valueA")
        .build();
```

This provides flexibility by enabling users to specify additional 
configuration parameters dynamically.


[1] https://github.com/apache/flink-connector-jdbc/pull/115/files

Best




Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-05-06 Thread João Boto
Hi Jeyhun,

> > I would also ask to include a sample usage and changes for end-users in the
> > FLIP.

We want to ensure a seamless transition for end-users, minimizing any 
disruptions in their current usage of the connector. To achieve this, we will 
uphold consistency by maintaining the same interfaces (though deprecated) 
within the existing package.

> > Also, in order to ensure the backwards compatibility, do you think at some
> > point we might need to decouple interface and implementations and put only
> > interfaces in flink-connector-jdbc module?

We propose that flink-connector-jdbc be packaged solely as a shaded jar. This 
would enable us to deprecate the existing interfaces while introducing their 
replacements in the new package, but that is all.

Best

On 2024/05/03 14:08:28 Jeyhun Karimov wrote:
> Hi Boto,
> 
> Thanks for driving this FLIP. +1 for it.
> 
> I would also ask to include a sample usage and changes for end-users in the
> FLIP.
> 
> > flink-connector-jdbc: The current module, which will be transformed to
> > shade all other modules and maintain backward compatibility.
> 
> 
> Also, in order to ensure the backwards compatibility, do you think at some
> point we might need to decouple interface and implementations and put only
> interfaces in flink-connector-jdbc module?
> 
> Regards,
> Jeyhun
> 


Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-05-06 Thread João Boto
Hi Muhammet,

Have you had a chance to review the recently merged pull request [1]? 
We've introduced a new feature allowing users to include ad hoc configurations 
in the 'JdbcConnectionOptions' class.
```
new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
        .withUrl(FakeDBUtils.TEST_DB_URL)
        .withProperty("keyA", "valueA")
        .build();
```

This provides flexibility by enabling users to specify additional configuration 
parameters dynamically. 
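As a minimal sketch of how such ad hoc properties typically flow to the driver (an assumption about the general JDBC pattern, not the connector's actual internals — `buildProperties` and its arguments are made up for illustration), the values set via withProperty end up in a java.util.Properties object of the kind JDBC drivers accept via DriverManager.getConnection(url, props):

```java
import java.util.Properties;

public class ConnectionPropsSketch {

    // Hypothetical stand-in for the builder's property handling:
    // merges standard credentials with ad hoc driver-specific options.
    static Properties buildProperties(String user, String password, String[][] extras) {
        Properties props = new Properties();
        if (user != null) {
            props.setProperty("user", user);
        }
        if (password != null) {
            props.setProperty("password", password);
        }
        for (String[] kv : extras) {
            props.setProperty(kv[0], kv[1]); // ad hoc options, e.g. "keyA" -> "valueA"
        }
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProperties("flink", "secret",
                new String[][] {{"keyA", "valueA"}});
        System.out.println(props.getProperty("keyA")); // valueA
    }
}
```

This is what makes the feature useful for driver-specific settings that have no dedicated builder method.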

[1] https://github.com/apache/flink-connector-jdbc/pull/115/files

Best

On 2024/05/06 07:34:06 Muhammet Orazov wrote:
> Morning João,
> 
> Recently we had a case where the JDBC driver's authentication was 
> different from username & password authentication. For it to work, certain 
> hacks were required; an interface would have been helpful there.
> 
> But I agree that the interface module separation may not be required at the 
> moment.
> 
> Thanks for your efforts!
> 
> Best,
> Muhammet


Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-05-06 Thread Muhammet Orazov

Morning João,

Recently we had a case where the JDBC driver's authentication was 
different from username & password authentication. For it to work, certain 
hacks were required; an interface would have been helpful there.
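To make the use case above concrete, here is a rough sketch of the kind of pluggable authentication hook being described, so that non username/password schemes (e.g. token-based) would need no hacks. All names here (`JdbcAuthenticator`, `TokenAuthenticator`, the `accessToken` property key) are illustrative assumptions, not the connector's actual `JdbcConnectionProvider` API:

```java
import java.util.Properties;

public class AuthProviderSketch {

    // Hypothetical hook: contributes driver-specific credentials
    // to the JDBC connection properties.
    interface JdbcAuthenticator {
        void apply(Properties connectionProps);
    }

    // Example: token-based authentication instead of username & password.
    static class TokenAuthenticator implements JdbcAuthenticator {
        private final String token;

        TokenAuthenticator(String token) {
            this.token = token;
        }

        @Override
        public void apply(Properties connectionProps) {
            // "accessToken" is a made-up, driver-specific key for illustration.
            connectionProps.setProperty("accessToken", token);
        }
    }

    // The connector side would only depend on the interface.
    static Properties connectionProps(JdbcAuthenticator auth) {
        Properties props = new Properties();
        auth.apply(props);
        return props;
    }

    public static void main(String[] args) {
        Properties props = connectionProps(new TokenAuthenticator("abc123"));
        System.out.println(props.getProperty("accessToken")); // abc123
    }
}
```

With such an interface, a user could swap in any authentication scheme without patching the connector itself.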


But I agree that the interface module separation may not be required at the 
moment.


Thanks for your efforts!

Best,
Muhammet


On 2024-05-03 12:25, João Boto wrote:

Hi Muhammet,

While I generally agree, given our current usage, I'm struggling to 
discern any clear advantage. We already have abstract implementations 
that cover all necessary interfaces and offer essential functionality, 
complemented by a robust set of reusable tests to streamline 
implementation.


With this established infrastructure in place, coupled with the added 
import overhead of introducing another module, I find it difficult to 
identify any distinct benefits at this point.


Best




Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-05-03 Thread Jeyhun Karimov
Hi Boto,

Thanks for driving this FLIP. +1 for it.

I would also ask to include a sample usage and changes for end-users in the
FLIP.

> flink-connector-jdbc: The current module, which will be transformed to
> shade all other modules and maintain backward compatibility.


Also, in order to ensure the backwards compatibility, do you think at some
point we might need to decouple interface and implementations and put only
interfaces in flink-connector-jdbc module?

Regards,
Jeyhun

On Fri, May 3, 2024 at 2:56 PM João Boto wrote:

> Hi,
>
> > > You can now update the derby implementation and the core independently
> and decide at your own will when to include the new derby in the core?
> Not really, we are talking about creating modules in the same repository,
> not about externalizing the database modules. That is, whenever there is a
> release, both the core and the DBs will be released at the same time.
>
> > > For clarity of motivation, could you please add some concrete examples
> (just a couple) to the FLIP to clarify when this really comes in handy?
> Added.
>
> Best


Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-05-03 Thread João Boto
Hi,

> > You can now update the derby implementation and the core independently and 
> > decide at your own will when to include the new derby in the core?
Not really, we are talking about creating modules in the same repository, not 
about externalizing the database modules. That is, whenever there is a release, 
both the core and the DBs will be released at the same time.

> > For clarity of motivation, could you please add some concrete examples 
> > (just a couple) to the FLIP to clarify when this really comes in handy?
Added.

Best

On 2024/04/26 07:59:30 lorenzo.affe...@ververica.com.INVALID wrote:
> Hello Joao,
> thank you for your proposal; modularity is always welcome :)
> 
> > To maintain clarity and minimize conflicts, we're currently leaning towards 
> > maintaining the existing structure, where 
> > flink-connector-jdbc-${version}.jar remains shaded for simplicity, 
> > encompassing the core functionality and all database-related features 
> > within the same JAR.
> 
> I do agree with this approach, as the use case of reading/writing to 
> different DBs could be quite common.
> 
> However, I am missing what the concrete advantage of this change would be 
> for connector maintainability.
> Let me give an example:
> You can now update the derby implementation and the core independently and 
> decide at your own will when to include the new derby in the core?
> 
> For clarity of motivation, could you please add some concrete examples (just 
> a couple) to the FLIP to clarify when this really comes in handy?
> 
> Thank you!


Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-05-03 Thread João Boto
Hi Muhammet,

While I generally agree, given our current usage, I'm struggling to discern any 
clear advantage. We already have abstract implementations that cover all 
necessary interfaces and offer essential functionality, complemented by a 
robust set of reusable tests to streamline implementation.

With this established infrastructure in place, coupled with the added import 
overhead of introducing another module, I find it difficult to identify any 
distinct benefits at this point.

Best

On 2024/04/26 02:18:52 Muhammet Orazov wrote:
> Hey João,
> 
> Thanks for the FLIP proposal!
> 
> Since the proposal is to introduce modules, would it make sense
> to have another module for the APIs (flink-jdbc-connector-api)?
> 
> For this, I would suggest moving all public interfaces there (e.g.,
> JdbcRowConverter, JdbcConnectionProvider), and even converting some
> classes into interfaces with default implementations, for example
> JdbcSink and JdbcConnectionOptions.
> 
> This way users would have clear interfaces to build their own
> JDBC-based Flink connectors.
> 
> Here I am not suggesting introducing new interfaces, only separating
> the API from the core implementation.
> 
> What do you think?
> 
> Best,
> Muhammet
> 


Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-04-26 Thread lorenzo.affetti
Hello Joao,
thank you for your proposal; modularity is always welcome :)

> To maintain clarity and minimize conflicts, we're currently leaning towards 
> maintaining the existing structure, where flink-connector-jdbc-${version}.jar 
> remains shaded for simplicity, encompassing the core functionality and all 
> database-related features within the same JAR.

I do agree with this approach, as the use case of reading/writing to different 
DBs could be quite common.

However, I am missing what the concrete advantage of this change would be for 
connector maintainability.
Let me give an example:
You can now update the derby implementation and the core independently and 
decide at your own will when to include the new derby in the core?

For clarity of motivation, could you please add some concrete examples (just a 
couple) to the FLIP to clarify when this really comes in handy?

Thank you!
On Apr 26, 2024 at 04:19 +0200, Muhammet Orazov wrote:
> Hey João,
>
> Thanks for the FLIP proposal!
>
> Since the proposal is to introduce modules, would it make sense
> to have another module for the APIs (flink-jdbc-connector-api)?
>
> For this, I would suggest moving all public interfaces there (e.g.,
> JdbcRowConverter, JdbcConnectionProvider), and even converting some
> classes into interfaces with default implementations, for example
> JdbcSink and JdbcConnectionOptions.
>
> This way users would have clear interfaces to build their own
> JDBC-based Flink connectors.
>
> Here I am not suggesting introducing new interfaces, only separating
> the API from the core implementation.
>
> What do you think?
>
> Best,
> Muhammet


Re: [DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-04-25 Thread Muhammet Orazov

Hey João,

Thanks for the FLIP proposal!

Since the proposal is to introduce modules, would it make sense
to have another module for the APIs (flink-jdbc-connector-api)?

For this, I would suggest moving all public interfaces there (e.g.,
JdbcRowConverter, JdbcConnectionProvider), and even converting some
classes into interfaces with default implementations, for example
JdbcSink and JdbcConnectionOptions.
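A rough sketch of the pattern suggested here: turn a concrete options class into an interface with a default implementation, so the interface could live in the proposed API module while the implementation stays in core. The names (`ConnectionOptions`, `DefaultConnectionOptions`, the timeout default) are illustrative assumptions only, not the connector's real classes:

```java
public class ApiModuleSketch {

    // Would live in the hypothetical flink-connector-jdbc-api module.
    interface ConnectionOptions {
        String getUrl();

        // Default behaviour that implementations inherit unless overridden.
        default int getConnectionTimeoutSeconds() {
            return 30;
        }
    }

    // Would live in the core (implementation) module.
    static class DefaultConnectionOptions implements ConnectionOptions {
        private final String url;

        DefaultConnectionOptions(String url) {
            this.url = url;
        }

        @Override
        public String getUrl() {
            return url;
        }
    }

    public static void main(String[] args) {
        // Users program against the interface; core supplies the implementation.
        ConnectionOptions options = new DefaultConnectionOptions("jdbc:derby:memory:test");
        System.out.println(options.getUrl());
        System.out.println(options.getConnectionTimeoutSeconds()); // 30
    }
}
```

Custom connectors would then depend only on the API module and provide their own implementations.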

This way users would have clear interfaces to build their own
JDBC-based Flink connectors.

Here I am not suggesting introducing new interfaces, only separating
the API from the core implementation.

What do you think?

Best,
Muhammet


On 2024-04-25 08:54, Joao Boto wrote:

Hi all,

I'd like to start a discussion on FLIP-449: Reorganization of
flink-connector-jdbc [1].
As Flink continues to evolve, we've noticed an increasing level of
complexity within the JDBC connector.
The proposed solution is to address this complexity by separating the core
functionality from individual database components, thereby streamlining the
structure into distinct modules.

Looking forward to your feedback and suggestions, thanks.
Best regards,
Joao Boto

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-449%3A+Reorganization+of+flink-connector-jdbc


Re: Re:[DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-04-25 Thread Yuepeng Pan
Hi, Boto.

It's already clear enough to me.
Thanks for your reply.

Best,
Yuepeng Pan

On 2024/04/25 15:41:01 João Boto wrote:
> Hi Pan,
> 
> Users who wish to utilize only one database and prefer not to use 
> flink-connector-jdbc-${version}.jar + ${database-connector-driver}.jar should 
> opt for option 1: flink-connector-jdbc-core-${version}.jar + 
> flink-connector-jdbc-mysql-${version}.jar + ${database-connector-driver}.jar.
> 
> We could introduce a flink-connector-jdbc-mysql-${version}-fat.jar that 
> includes flink-connector-jdbc-core-${version}.jar, but this could create 
> potential challenges. This approach could lead to duplicate classes if a user 
> intends to read from MySQL and write to PostgreSQL while utilizing both fat 
> JARs simultaneously.
> 
> To maintain clarity and minimize conflicts, we're currently leaning towards 
> maintaining the existing structure, where flink-connector-jdbc-${version}.jar 
> remains shaded for simplicity, encompassing the core functionality and all 
> database-related features within the same JAR.
> 
> Please let me know if you require further clarification on any aspect.
> 
> Best regards,
> Joao Boto
> 
> 
> 
> On 2024/04/25 11:41:00 Yuepeng Pan wrote:
> > Hi, Boto.
> > 
> > Thanks for driving it!
> > +1 from me on the proposal.
> > 
> > We should make sure that a simple usage pattern is still available to users 
> > after the refactoring.
> > 
> > Currently, whichever supported database the user intends to use, they only 
> > need to add flink-connector-jdbc-${version}.jar + 
> > ${database-connector-driver}.jar to the dependencies, and it works out of 
> > the box.
> > 
> > I noticed in the FLIP that we plan to perform shading to preserve the same 
> > usage and semantics as before.
> > 
> > So, if users only want to use one database (e.g. MySQL), 
> > in what form do we plan to provide the jars?
> > 
> > For example: 
> > 
> > 1. flink-connector-jdbc-core-${version}.jar + 
> > flink-connector-jdbc-mysql-${version}.jar + 
> > ${database-connector-driver}.jar.
> > 
> > 2. Or flink-connector-jdbc-mysql-${version}.jar + 
> > ${database-connector-driver}.jar.
> > 
> > 3. Or another, more concise way?
> > 
> > 
> > 
> > 
> > Thank you.
> > 
> > Best, 
> > Yuepeng Pan
> > 
> > 
> > 
> > 
> > At 2024-04-25 16:54:13, "Joao Boto"  wrote:
> > >Hi all,
> > >
> > >I'd like to start a discussion on FLIP-449: Reorganization of
> > >flink-connector-jdbc [1].
> > >As Flink continues to evolve, we've noticed an increasing level of
> > >complexity within the JDBC connector.
> > >The proposed solution is to address this complexity by separating the core
> > >functionality from individual database components, thereby streamlining the
> > >structure into distinct modules.
> > >
> > >Looking forward to your feedback and suggestions, thanks.
> > >Best regards,
> > >Joao Boto
> > >
> > >[1]
> > >https://cwiki.apache.org/confluence/display/FLINK/FLIP-449%3A+Reorganization+of+flink-connector-jdbc
> > 
> 


Re: Re:[DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-04-25 Thread João Boto
Hi Pan,

Users who wish to utilize only one database and prefer not to use 
flink-connector-jdbc-${version}.jar + ${database-connector-driver}.jar should 
opt for option 1: flink-connector-jdbc-core-${version}.jar + 
flink-connector-jdbc-mysql-${version}.jar + ${database-connector-driver}.jar.

We could introduce a flink-connector-jdbc-mysql-${version}-fat.jar that 
includes flink-connector-jdbc-core-${version}.jar, but this could create 
potential challenges. This approach could lead to duplicate classes if a user 
intends to read from MySQL and write to PostgreSQL while utilizing both fat 
JARs simultaneously.

To maintain clarity and minimize conflicts, we're currently leaning towards 
maintaining the existing structure, where flink-connector-jdbc-${version}.jar 
remains shaded for simplicity, encompassing the core functionality and all 
database-related features within the same JAR.
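For option 1, the dependency set described above would look roughly like this in a Maven POM. This is a sketch only: the artifact IDs follow the module names proposed in the FLIP, and the group ID, driver coordinates, and versions are placeholders.

```xml
<!-- Sketch: artifact IDs follow the module names proposed in FLIP-449;
     group IDs and versions are placeholders. -->
<dependencies>
  <!-- Shared JDBC connector core -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-jdbc-core</artifactId>
    <version>${flink.connector.jdbc.version}</version>
  </dependency>
  <!-- Database-specific module -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-jdbc-mysql</artifactId>
    <version>${flink.connector.jdbc.version}</version>
  </dependency>
  <!-- The JDBC driver itself -->
  <dependency>
    <groupId>com.mysql</groupId>
    <artifactId>mysql-connector-j</artifactId>
    <version>${mysql.driver.version}</version>
  </dependency>
</dependencies>
```

Reading from one database and writing to another would then just mean adding a second database module (plus its driver) alongside the shared core, with no duplicate classes.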

Please let me know if you require further clarification on any aspect.

Best regards,
Joao Boto



On 2024/04/25 11:41:00 Yuepeng Pan wrote:
> Hi, Boto.
> 
> Thanks for driving it!
> +1 from me on the proposal.
> 
> We should make sure that a simple usage pattern is still available to users 
> after the refactoring.
> 
> Currently, whichever supported database the user intends to use, they only 
> need to add flink-connector-jdbc-${version}.jar + 
> ${database-connector-driver}.jar to the dependencies, and it works out of 
> the box.
> 
> I noticed in the FLIP that we plan to perform shading to preserve the same 
> usage and semantics as before.
> 
> So, if users only want to use one database (e.g. MySQL), 
> in what form do we plan to provide the jars?
> 
> For example: 
> 
> 1. flink-connector-jdbc-core-${version}.jar + 
> flink-connector-jdbc-mysql-${version}.jar + ${database-connector-driver}.jar.
> 
> 2. Or flink-connector-jdbc-mysql-${version}.jar + 
> ${database-connector-driver}.jar.
> 
> 3. Or another, more concise way?
> 
> 
> 
> 
> Thank you.
> 
> Best, 
> Yuepeng Pan
> 
> 
> 
> 
> At 2024-04-25 16:54:13, "Joao Boto"  wrote:
> >Hi all,
> >
> >I'd like to start a discussion on FLIP-449: Reorganization of
> >flink-connector-jdbc [1].
> >As Flink continues to evolve, we've noticed an increasing level of
> >complexity within the JDBC connector.
> >The proposed solution is to address this complexity by separating the core
> >functionality from individual database components, thereby streamlining the
> >structure into distinct modules.
> >
> >Looking forward to your feedback and suggestions, thanks.
> >Best regards,
> >Joao Boto
> >
> >[1]
> >https://cwiki.apache.org/confluence/display/FLINK/FLIP-449%3A+Reorganization+of+flink-connector-jdbc
> 


Re:[DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-04-25 Thread Yuepeng Pan
Hi, Boto.

Thanks for driving it!
+1 from me on the proposal.

We should make sure that a simple usage pattern is still available to users 
after the refactoring.

Currently, whichever supported database the user intends to use, they only 
need to add flink-connector-jdbc-${version}.jar + 
${database-connector-driver}.jar to the dependencies, and it works out of 
the box.

I noticed in the FLIP that we plan to perform shading to preserve the same 
usage and semantics as before.

So, if users only want to use one database (e.g. MySQL), 
in what form do we plan to provide the jars?

For example: 

1. flink-connector-jdbc-core-${version}.jar + 
flink-connector-jdbc-mysql-${version}.jar + ${database-connector-driver}.jar.

2. Or flink-connector-jdbc-mysql-${version}.jar + 
${database-connector-driver}.jar.

3. Or another, more concise way?




Thank you.

Best, 
Yuepeng Pan




At 2024-04-25 16:54:13, "Joao Boto"  wrote:
>Hi all,
>
>I'd like to start a discussion on FLIP-449: Reorganization of
>flink-connector-jdbc [1].
>As Flink continues to evolve, we've noticed an increasing level of
>complexity within the JDBC connector.
>The proposed solution is to address this complexity by separating the core
>functionality from individual database components, thereby streamlining the
>structure into distinct modules.
>
>Looking forward to your feedback and suggestions, thanks.
>Best regards,
>Joao Boto
>
>[1]
>https://cwiki.apache.org/confluence/display/FLINK/FLIP-449%3A+Reorganization+of+flink-connector-jdbc


[DISCUSSION] FLIP-449: Reorganization of flink-connector-jdbc

2024-04-25 Thread Joao Boto
Hi all,

I'd like to start a discussion on FLIP-449: Reorganization of
flink-connector-jdbc [1].
As Flink continues to evolve, we've noticed an increasing level of
complexity within the JDBC connector.
The proposed solution is to address this complexity by separating the core
functionality from individual database components, thereby streamlining the
structure into distinct modules.

Looking forward to your feedback and suggestions, thanks.
Best regards,
Joao Boto

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-449%3A+Reorganization+of+flink-connector-jdbc