With the model of an externalized Flink connector repo (which I fully
support), there is one challenge: supporting the versions of two upstream
projects (similar to what Peter Vary mentioned earlier).
E.g., today the Flink Iceberg connector lives in the Iceberg repo. We have
separate modules for 1.13, 1.14,
+1 for having an option to store every version of a connector in one repo
Also, it would be good to have the major(.minor) version of the connected
system in the name of the connector jar, depending on the compatibility. I
think this compatibility is mostly system-dependent.
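To make that concrete, here is a minimal naming sketch. This is only an illustration of the suggestion, not an agreed convention; the function name and the exact format are my own invention:

```python
# Hypothetical sketch: embed the connected system's major version in the
# connector artifact name, as suggested above. Names and format are
# illustrative only, not an agreed Flink convention.

def artifact_name(connector: str, system_version: str, connector_version: str) -> str:
    """Build e.g. 'flink-connector-elasticsearch7-1.0.0' from its parts."""
    major = system_version.split(".")[0]  # keep only the major version
    return f"flink-connector-{connector}{major}-{connector_version}"

print(artifact_name("elasticsearch", "7.17.3", "1.0.0"))
# flink-connector-elasticsearch7-1.0.0
```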
Thanks, Peter
On Fri,
Hi Peter,
I think this also depends on the support SLA provided by the technology
that you connect to. For example, with Flink and Elasticsearch, we chose
to follow the Elasticsearch supported versions. So that means that when
support for Elasticsearch 8 is introduced, support for Elasticsearch 6
Thanks for the quick response!
Would this mean that we have different connectors for Iceberg 0.14 and
Iceberg 0.15? Would these different versions be kept in different repositories?
My feeling is that this model is fine for the stable/slow moving systems
like Hive/HBase. For other systems, which
If you look at ElasticSearch [1] as an example there are different variants
of the connector depending on the "connected" system:
- flink-connector-elasticsearch6
- flink-connector-elasticsearch7
Looks like Hive and HBase follow a similar pattern in the main Flink repo.
[1]
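For reference, picking a variant then just means choosing the matching artifact. An illustrative Maven dependency (the version number is a placeholder, not a real release):

```xml
<!-- Illustrative only; check the connector docs for the actual version -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch7</artifactId>
    <version>1.15.2</version>
</dependency>
```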
Hi Team,
Just joining the conversation for the first time, so pardon me if I repeat
already answered questions.
It might be already discussed, but I think the version for the "connected"
system could be important as well.
There might be some API changes between Iceberg 0.14.2 and 1.0.0, which
2) No; the branch names would not have a Flink version in them; v1.0.0,
v1.0.1 etc.
On 29/09/2022 14:03, Martijn Visser wrote:
If I summarize it correctly, that means that:
1. The versioning scheme would be <connector-version>-<flink-version>, where there will never be a
patch release for a minor version if a newer minor version already exists.
E.g., 1.0.0-1.15; 1.0.1-1.15; 1.1.0-1.15; 1.2.0-1.15;
2. The branch naming scheme would be
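Assuming release strings of the form 1.0.1-1.15 (connector version first, then Flink version, as in the examples above), the two parts split cleanly. A quick sketch:

```python
# Sketch only: split a connector release string like "1.0.1-1.15" into
# its connector-version and Flink-version parts.

def parse_release(tag: str) -> tuple[str, str]:
    connector_version, flink_version = tag.rsplit("-", 1)
    return connector_version, flink_version

print(parse_release("1.0.1-1.15"))  # ('1.0.1', '1.15')
print(parse_release("1.2.0-1.15"))  # ('1.2.0', '1.15')
```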
> After 1.16, only patches are accepted for 1.2.0-1.15.
I feel like this is a misunderstanding that both you and Danny ran into.
What I meant in the original proposal is that the last 2 _major_
/connector /versions are supported, with the latest receiving additional
features.
(Provided that
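A sketch of how I read that policy (my interpretation, not a settled rule): given the released major connector versions, the last two are supported, and only the newest of those receives new features.

```python
# My reading of the proposed policy, sketched out; not an agreed rule:
# the last 2 *major* connector versions are supported, and only the
# latest of those receives new features.

def support_status(released_majors: list[int]) -> dict[int, str]:
    supported = sorted(set(released_majors))[-2:]  # keep the last two majors
    return {
        major: "features + patches" if major == supported[-1] else "patches only"
        for major in supported
    }

print(support_status([1, 2, 3]))
# {2: 'patches only', 3: 'features + patches'}
```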
Hi,
Thanks for starting this discussion. It is an interesting one and yeah, it
is a tough topic. It seems like a centralized release version schema
control for decentralized connector development ;-)
In general, I like this idea, not because it is a good one but because
there might be no better
Hi all,
This is a tough topic, I also had to write things down a couple of times.
To summarize and add my thoughts:
a) I think everyone is agreeing that "Only the last 2 versions of a
connector are supported per supported Flink version, with only the latest
version receiving new features". In
c)
@Ryan:
I'm generally fine with leaving it up to the connector on how to
implement Flink version-specific behavior, so long as the branching
model stays consistent (primarily so that the release process is
identical and we can share infrastructure).
@Danny:
In a single branch per version
c) I am torn here. I do not like the idea that the connector code could
diverge for the same connector version, i.e. 2.1.0-1.15 and 2.1.0-1.16. If
the Flink version change requires a change to the connector code, then this
should result in a new connector version in my opinion. Going back to your
I had to write down a diagram to fully understand the discussion :D
If I recall correctly, during the externalization discussion, the "price to
pay" for the (many) advantages of taking connectors out of the main repo
was to maintain and manually consult a compatibility matrix per connector.
I'm
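That compatibility matrix could be as simple as a lookup table per connector. A hypothetical sketch — the connector name and all entries below are invented purely for illustration:

```python
# Hypothetical compatibility matrix; the connector name and entries are
# invented for illustration and do not reflect real support statements.
COMPATIBILITY = {
    ("my-connector", "1.0.0"): {"1.15"},
    ("my-connector", "1.1.0"): {"1.15", "1.16"},
}

def is_compatible(connector: str, connector_version: str, flink_version: str) -> bool:
    """Check whether a connector release supports a given Flink version."""
    return flink_version in COMPATIBILITY.get((connector, connector_version), set())

print(is_compatible("my-connector", "1.0.0", "1.16"))  # False
print(is_compatible("my-connector", "1.1.0", "1.16"))  # True
```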
a) 2 Flink versions would be the obvious answer. I don't think anything
else makes much sense.
I don't want us to increment versions just because the Flink versions
change, so in your example I'd go with 2.0.0-1.16.
c)
Generally speaking I would love to avoid the Flink versions in branch
Thanks for starting this discussion. I am working on the early stages of
the new DynamoDB connector and have been pondering the same thing.
a) Makes sense. On the flip side, how many Flink versions will we
support? Right now we support 2 versions for Flink, so it makes sense to
follow this rule.
Hello,
over the past week I worked on putting the final things into place to
enable the first release of an externalized elasticsearch connector.
Then it dawned on me that there are a few things we _haven't really
decided yet_, which are rather important though.
Let's fix that.
Note that