Yes, I would say the situation is different for minor vs. patch.

Side note: in a version like 1.x.y, most people in the Flink community see x as the major version and y as the minor version. I know this is not proper semver, and I could be wrong about how people see it, but it doesn't change anything about the discussion.

Another side note: I said earlier that we would have to check the whole transitive closure for usage of "internal" classes. This is wrong: we only need to check the first level, i.e. code that is directly used by a connector (or other jar that we want to use across versions). This is still something that we don't do, so the fact that it works can be somewhat attributed to luck and the right people looking at the right thing at the right time.
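
If we ever wanted to automate that first-level check, a rough sketch with
ArchUnit could look like the following. This is just an illustration, not
something we have in the build today, and the package name is only an
example:

    import com.tngtech.archunit.core.domain.JavaClasses;
    import com.tngtech.archunit.core.importer.ClassFileImporter;
    import com.tngtech.archunit.lang.ArchRule;
    import org.apache.flink.annotation.Internal;
    import org.apache.flink.annotation.Public;

    import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

    public class PublicApiDependencyCheck {

        public static void main(String[] args) {
            // Import the connector classes we want to use across Flink versions.
            JavaClasses connectorClasses = new ClassFileImporter()
                    .importPackages("org.apache.flink.streaming.connectors");

            // "First level" check: classes in the stable surface must not
            // directly depend on classes annotated with @Internal.
            ArchRule rule = noClasses()
                    .that().areAnnotatedWith(Public.class)
                    .should().dependOnClassesThat().areAnnotatedWith(Internal.class);

            rule.check(connectorClasses);
        }
    }

Note that this only looks at direct dependencies, which is exactly the
first level described above.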

Aljoscha

On 23.06.20 17:25, Konstantin Knauf wrote:
Hi Aljoscha,

Thank you for bringing this up. IMHO the situation is different for minor &
patch version upgrades.

1) I don't think we need to provide any guarantees across Flink minor
versions (e.g. 1.10.1 -> 1.11.0). It seems reasonable to expect users to
recompile their user JARs when upgrading the cluster to the next minor
version of Apache Flink.

2) We should aim for compatibility across patch versions (1.10.0 ->
1.10.1). Being able to apply a patch to the Flink runtime/framework without
updating the dependencies of each user jar running on the cluster seems
really useful to me.

According to the discussion in [1] around stability guarantees for
@PublicEvolving, this would mean that connectors can use @Public
and @PublicEvolving classes, but not @Internal classes, right? This
generally seems reasonable to me. If we want to make it easy for our users,
we need to aim for stable interfaces anyway.

Cheers,

Konstantin

[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Stability-guarantees-for-PublicEvolving-classes-tp41459.html

On Tue, Jun 23, 2020 at 3:32 PM Aljoscha Krettek <aljos...@apache.org>
wrote:

Hi,

this has come up a few times now and I think we need to discuss the
guarantees that we want to officially give for this. What I mean by
cross-version compatibility is using, say, a Flink 1.10 Kafka connector
dependency/jar with Flink 1.11, or a Flink 1.10.0 connector with Flink
1.10.1. In the past, this has mostly worked. I think this was largely
by accident, though.

The problem is that connectors, which might be annotated as @Public, can
use classes internally that are annotated as @Internal or
@PublicEvolving. If those internal dependencies change, then the
connector jar cannot be used with a different version. This has happened
at least in [1] and [2], where [2] was caused by the interplay of [3]
and [4]. The initial release note on [4] said that the Kafka 0.9
connector could be used with Flink 1.11, but this was rendered wrong by
[3]. (Also, sorry for all the []s ...)
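
To illustrate the pattern with made-up class names (this is not the
actual connector code, just a sketch of the shape of the problem):

    import org.apache.flink.annotation.Internal;
    import org.apache.flink.annotation.Public;

    // Stand-in for an internal runtime/helper class; its signature can
    // change between releases without any compatibility guarantee.
    @Internal
    class InternalOffsetStore {
        void record(long offset) {
            // details irrelevant for the example
        }
    }

    // Stand-in for a connector class that is part of the stable API
    // surface. Because it calls the @Internal class above, a connector jar
    // compiled against one Flink version can fail on another with
    // NoSuchMethodError or NoClassDefFoundError if that internal signature
    // changed in between.
    @Public
    public class ExampleSource {
        private final InternalOffsetStore offsets = new InternalOffsetStore();

        public void emit(long offset) {
            offsets.record(offset);
        }
    }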

What should we do about it? So far, our strategy for ensuring that
jars/dependencies are compatible between versions has been "hope". If we
really want to verify this compatibility, we would have to ensure that
"public" code does not transitively use any "non-public" dependencies.

An alternative would be to say that we don't support any cross-version
compatibility between Flink versions. If users want to use an older
connector, they would have to copy the code, make sure it compiles
against the newer Flink version, and then manage that themselves.

What do you think?

Best,
Aljoscha

[1] https://issues.apache.org/jira/browse/FLINK-13586
[2] https://github.com/apache/flink/pull/12699#discussion_r442013514
[3] https://issues.apache.org/jira/browse/FLINK-17376
[4] https://issues.apache.org/jira/browse/FLINK-15115



