Hi Leonard,

> Do we have to rely on the latest version of the JDBC Connector here?

No, there's no need for us to depend on the latest version of the JDBC
Connector. Redshift ships its own dedicated JDBC driver [1], with
customizations tailored to Redshift's implementation, and that driver is
the most suitable choice for our purposes.
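
For illustration, here is a minimal connectivity sketch using the
dedicated driver; the cluster endpoint, database, user, and password
below are placeholders, not real values:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RedshiftDriverSmokeTest {
        public static void main(String[] args) throws Exception {
            // The dedicated driver (Maven artifact
            // com.amazon.redshift:redshift-jdbc42) registers itself for
            // the jdbc:redshift:// URL scheme.
            String url = "jdbc:redshift://<cluster-endpoint>:5439/dev";
            try (Connection conn =
                     DriverManager.getConnection(url, "<user>", "<password>");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                if (rs.next()) {
                    System.out.println("Connected: " + rs.getInt(1));
                }
            }
        }
    }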


> Could you collect the APIs that Redshift generally needs to use?

I am actively working on this and making progress toward the POC.
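
The complete list will come with the POC, but as a rough sketch of the
surface area I expect, the connector would mostly go through standard
JDBC statement/batch APIs for row-level writes, plus Redshift-specific
SQL such as COPY for bulk loads. Table, bucket, and IAM role names below
are hypothetical placeholders:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class RedshiftApiSketch {

        // Row-level writes go through plain JDBC batching.
        static void batchedInsert(Connection conn) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO events (id, payload) VALUES (?, ?)")) {
                ps.setLong(1, 42L);
                ps.setString(2, "hello");
                ps.addBatch();
                ps.executeBatch();
            }
        }

        // Bulk loads issue Redshift's COPY command as ordinary SQL.
        static void copyFromS3(Connection conn) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "COPY events FROM 's3://my-bucket/prefix/' "
                        + "IAM_ROLE 'arn:aws:iam::123456789012:role/copy' "
                        + "FORMAT AS JSON 'auto'")) {
                ps.execute();
            }
        }
    }

Reads would likely follow a similar pattern, with UNLOAD playing the
bulk-export role that COPY plays for ingestion.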

Bests,
Samrat

[1]
https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-download-driver.html

On Mon, Sep 11, 2023 at 12:02 PM Samrat Deb <decordea...@gmail.com> wrote:

> Hello Danny,
>
> I wanted to express my gratitude for your valuable feedback and insightful
> suggestions.
>
> I will be revising the FLIP to incorporate all of your queries and review
> suggestions. Additionally, I plan to provide a Proof of Concept (POC) for
> the connector by the end of this week. This POC will address the points
> you've raised and ensure that the FLIP aligns with your recommendations.
>
> Thank you once again for your input.
>
> Bests,
> Samrat
>
> On Thu, Sep 7, 2023 at 10:21 PM Danny Cranmer <dannycran...@apache.org>
> wrote:
>
>> Hello Leonard,
>>
>> > Do we have to rely on the latest version of the JDBC Connector here?
>> > My understanding is that any version works as long as its Flink minor
>> > version matches. Could you collect the APIs that Redshift generally
>> > needs to use?
>>
>> I agree we do not necessarily need to rely on the latest patch version,
>> only the same minor. The main issue for me is that the dependency
>> introduces a blocker after each new Flink version: for example, when
>> Flink 1.18.0 is released, we cannot release the AWS connectors until the
>> JDBC connector release is complete. But I think this is a good tradeoff.
>>
>> > Splitting Redshift into a separate repository does not solve this
>> > coupling problem
>>
>> Arguably it solves the AWS<>JDBC coupling problem, but creates a new, more
>> complex one!
>>
>> Thanks,
>>
>> On Thu, Sep 7, 2023 at 5:26 PM Leonard Xu <xbjt...@gmail.com> wrote:
>>
>> > Thanks Samrat and Danny for driving this FLIP.
>> >
>> > >> an effective approach is to utilize the latest version of
>> > flink-connector-jdbc
>> > > as a Maven dependency
>> > >
>> > > When we have stable source/sink APIs and the connector versions are
>> > > decoupled from Flink this makes sense. But right now this would mean
>> > > that the JDBC connector will block the AWS connector for each new
>> > > Flink version support release (1.18, 1.19, 1.20, 2.0, etc.). That
>> > > being said, I cannot think of a cleaner alternative without pulling
>> > > the core JDBC bits out into a dedicated project that is decoupled
>> > > from and released independently of Flink. Splitting
>> > > flink-connector-redshift into a dedicated repo would decouple
>> > > AWS/JDBC, but it would obviously introduce a new connector that is
>> > > blocked by both AWS and JDBC.
>> >
>> > Do we have to rely on the latest version of the JDBC Connector here?
>> > My understanding is that any version works as long as its Flink minor
>> > version matches. Could you collect the APIs that Redshift generally
>> > needs to use?
>> >
>> > Assuming the AWS Connector (Redshift) depends on the JDBC Connector
>> > and needs a higher version of it, I understand the correct approach is
>> > to push forward the JDBC Connector release; it looks like we have no
>> > other option.
>> >
>> > Splitting Redshift into a separate repository does not solve this
>> > coupling problem; from a user perspective, Redshift should also live
>> > in the AWS Connector repo.
>> >
>> > Best,
>> > Leonard
>>
>
