Hi Martijn,

Thanks for your feedback. It makes total sense to me.

I'll enable it for Cassandra.

Best

Etienne

On 29/06/2023 at 10:54, Martijn Visser wrote:

Hi Etienne,

I think it all depends on the actual maintainers of the connector to
make a decision on that: if their unreleased version of the connector
should be compatible with a new Flink version, then they should test
against it. For example, that's already done at Elasticsearch [1] and
JDBC [2].

Choosing which versions to support is a decision by the maintainers in
the community, and it always requires an action by a maintainer to
update the CI config to set the correct versions whenever a new Flink
version is released.

Best regards,

Martijn

[1]https://github.com/apache/flink-connector-elasticsearch/blob/main/.github/workflows/push_pr.yml
[2]https://github.com/apache/flink-connector-jdbc/blob/main/.github/workflows/push_pr.yml
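To illustrate what such a CI config looks like, here is a minimal sketch of a connector workflow in the spirit of the push_pr.yml files linked above. It is a hypothetical example, not copied from either repository: the job name, the version list, and the reusable-workflow path and its `flink_version` input are assumptions for illustration only; maintainers would bump the versions in the matrix whenever a new Flink version is released.

```yaml
# Hypothetical connector CI sketch (names and versions are illustrative).
name: CI
on: [push, pull_request]

jobs:
  compile_and_test:
    strategy:
      matrix:
        # Test against the latest released Flink AND the master snapshot,
        # so breaking changes surface in the PR rather than in nightlies.
        flink: [1.17.1, 1.18-SNAPSHOT]
    # Assumed reusable workflow shared across connector repos.
    uses: apache/flink-connector-shared-utils/.github/workflows/ci.yml@ci_utils
    with:
      flink_version: ${{ matrix.flink }}
```

The snapshot entry in the matrix is exactly the "preview" run discussed below: it may fail when master moves, but that failure arrives while the PR author is still around to react.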

On Wed, Jun 28, 2023 at 6:09 PM Etienne Chauchot <echauc...@apache.org> wrote:
Hi all,

Connectors are external to Flink. As such, they need to be tested
against stable (released) versions of Flink.

But I was wondering if it would make sense to also test connectors in
PRs against the latest Flink master snapshot, so that failures are
discovered before the PRs are merged, ** while the author is still
available **, rather than in the nightly tests (which test against the
snapshot) after the merge. That would allow the author to anticipate
potential failures and write more future-proof code (even if master is
subject to change before the connector release).

Of course, if a breaking change is introduced in master, such tests
will fail. But they should be considered a preview of how the code
will behave against the current snapshot of the next Flink version.

WDYT?


Best

Etienne
