Hi Arvid,

Yes, reflection can solve this problem on the connector side, but it shifts
the burden onto the connectors. It also means that all existing released
source connectors that use "metricGroup" will not work on 1.14 and will
have to release a new version. In the end it's just a question of who
changes the code and releases new versions.
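
For reference, here is a minimal sketch of such a reflection shim, expanded
from the one-liner Arvid posted below (the helper class name and the error
handling are mine; the method is looked up on the public interface to avoid
accessibility issues with Flink-internal implementation classes):

    import java.lang.reflect.Method;

    import org.apache.flink.api.connector.source.SourceReaderContext;
    import org.apache.flink.metrics.MetricGroup;

    /** Works against both 1.13 and 1.14, where the return type of metricGroup() differs. */
    final class MetricGroupCompat {

        private MetricGroupCompat() {}

        static MetricGroup metricGroup(SourceReaderContext context) {
            try {
                // Look the method up by name instead of linking against a fixed
                // return type, so a connector compiled once runs on both versions.
                Method method = SourceReaderContext.class.getMethod("metricGroup");
                // On 1.13 this returns a MetricGroup, on 1.14 a SourceReaderMetricGroup,
                // which still implements MetricGroup, so the cast holds in both cases.
                return (MetricGroup) method.invoke(context);
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException(
                        "Could not access SourceReaderContext#metricGroup", e);
            }
        }
    }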

I agree with Piotr that "trying to support forward compatibility of
`@PublicEvolving` APIs for one or two releases into the future might be a
good rule of thumb."

I think this is also the underlying reason OpenInx started this discussion:
the ecosystem needs a compatible API to build rich connectors.


Best,
Jark

On Mon, 4 Oct 2021 at 16:22, Arvid Heise <ar...@ververica.com> wrote:

> Hi Jark,
>
> I also don't see it as a blocker issue at all. If you want to access the
> metric group across 1.13 and 1.14, you can use
>
> (MetricGroup) context.getClass().getMethod("metricGroup").invoke(context)
>
> But of course, you will not be able to access the new standardized metrics.
> For that you will need to maintain two different source/binary builds,
> since it's a new feature.
>
> I agree with Piotr: the issue is that we need a more standardized process
> around PublicEvolving. Ideally, with every new minor release, we should
> convert PublicEvolving to Public and Experimental to PublicEvolving. We
> could extend the interfaces to capture a target version and a comment for
> why the API is not Public yet. Before every release, we would go through
> the annotated classes and either find a specific reason to keep the
> annotation or move it towards Public. If we have a specific reason to keep
> it Experimental/PublicEvolving, we should plan to address that reason with
> the next release.
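>
> A hypothetical sketch of what such an extended annotation could look like
> (the attribute names are purely illustrative, not the existing Flink
> annotation):
>
>     import java.lang.annotation.Documented;
>     import java.lang.annotation.ElementType;
>     import java.lang.annotation.Retention;
>     import java.lang.annotation.RetentionPolicy;
>     import java.lang.annotation.Target;
>
>     @Documented
>     @Retention(RetentionPolicy.CLASS)
>     @Target({ElementType.TYPE, ElementType.METHOD, ElementType.CONSTRUCTOR})
>     public @interface PublicEvolving {
>
>         /** Release in which this API is expected to be promoted to @Public. */
>         String targetVersion() default "";
>
>         /** Reason why the API cannot be promoted to @Public yet. */
>         String reason() default "";
>     }
>
> A release script could then flag every annotation whose targetVersion is at
> or behind the release being built.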
>
> We do have good checks in place for Public; we are just too slow with
> ensuring that new APIs become Public.
>
> On Fri, Oct 1, 2021 at 9:41 PM Piotr Nowojski <pnowoj...@apache.org>
> wrote:
>
> > Hi,
> >
> > I don't understand why we are talking about this being a blocker issue.
> > New sources were not marked as @Public for a good reason until 1.14. I
> > agree that we should try harder at making APIs @Public sooner. I was even
> > proposing to create strict timeout rules (enforced via some automated
> > checks), e.g. that (unless for a very, very good reason) everything marked
> > @PublicEvolving or @Experimental should be upgraded to @Public after, for
> > example, 2 years [1]. But the new Sink API, for example, is IMO too fresh
> > to make it `@Public`.
> >
> > It doesn't change the fact that if we could provide a compatibility layer
> > between 1.13.x and 1.14.x for this SourceReaderContext issue, it would be
> > a nice thing to do. I would be -1 for keeping it forever, but trying to
> > support forward compatibility of `@PublicEvolving` APIs for one or two
> > releases into the future might be a good rule of thumb.
> >
> > Best, Piotrek
> >
> > [1] "[DISCUSS] Dealing with deprecated and legacy code in Flink" on the
> > dev mailing list
> >
> >
> > On Fri, 1 Oct 2021 at 16:56, Jark Wu <imj...@gmail.com> wrote:
> >
> > > Hi Arvid,
> > >
> > > > Should we expect connector devs to release different connector
> > > > binaries for different Flink minors?
> > > From the discussion in this thread, I think the answer is obviously "no",
> > > otherwise OpenInx wouldn't have started this discussion. As a maintainer
> > > of flink-cdc-connector, I have to say that it's very hard to release
> > > connectors for different Flink versions. Usually, the connector community
> > > doesn't have enough time to maintain different branches/modules/code for
> > > different Flink versions.
> > >
> > > > If we change it back, then a specific connector would work for 1.14.1
> > > > and 1.13.X but not for 1.14.0 and this would be even more confusing.
> > > I think this is fine. IMO, this is a blocker issue in 1.14.0 which breaks
> > > Source connectors. We should suggest that users use 1.14.1 if they use
> > > Source connectors.
> > >
> > > Best,
> > > Jark
> > >
> > >
> > > On Fri, 1 Oct 2021 at 19:05, Arvid Heise <ar...@ververica.com> wrote:
> > >
> > > > The issue is that if we do not mark them as Public, we will always
> > > > have incompatibilities. The change of SourceReaderContext#metricGroup is
> > > > perfectly fine according to the annotation. The implications that we see
> > > > here just mean that the interfaces have been expected to be Public.
> > > >
> > > > And now the question is: what do we expect?
> > > > Should we expect connector devs to release different connector binaries
> > > > for different Flink minors? Then PublicEvolving is fine.
> > > > If we expect that the same connector can work across multiple Flink
> > > > versions, we need to go into Public.
> > > >
> > > > It doesn't make sense to keep them PublicEvolving on the annotation but
> > > > implicitly assume them to be Public.
> > > >
> > > > @Jark Wu <imj...@gmail.com> I don't see a way to revert the change of
> > > > SourceReaderContext#metricGroup. For now, connector devs that expose
> > > > metrics need to release 2 versions. If we change it back, then a specific
> > > > connector would work for 1.14.1 and 1.13.X but not for 1.14.0 and this
> > > > would be even more confusing.
> > > >
> > > > On Fri, Oct 1, 2021 at 10:49 AM Ingo Bürk <i...@ververica.com>
> wrote:
> > > >
> > > >> Hi,
> > > >>
> > > >> > [...] but also the new Source/Sink APIs as public
> > > >>
> > > >> I'm not really involved in the new Source/Sink APIs and will happily
> > > >> listen to the developers working with them here, but since they are new,
> > > >> we should also be careful not to mark them as stable too quickly. We've
> > > >> only begun updating the existing connectors to these interfaces at the
> > > >> moment. Making more progress here and keeping new APIs as Evolving for a
> > > >> couple of minor releases is probably still a good idea. Maybe we should
> > > >> even have actual rules on when APIs can/should be promoted?
> > > >>
> > > >> More actively checking backwards-compatibility during a release sounds
> > > >> like a great idea regardless, of course.
> > > >>
> > > >>
> > > >> Ingo
> > > >>
> > > >> On Fri, Oct 1, 2021 at 9:19 AM Jark Wu <imj...@gmail.com> wrote:
> > > >>
> > > >>> Hi all,
> > > >>>
> > > >>> Nice thread and great discussion! The ecosystem is one of the most
> > > >>> important things for the Flink community, so we should pay more
> > > >>> attention to API compatibility.
> > > >>>
> > > >>> Marking all connector APIs @Public is a good idea: not only the
> > > >>> Table/SQL connector APIs, but also the new Source/Sink APIs. Besides,
> > > >>> we should also add a check item to the Verify Release documentation [1]
> > > >>> to verify that the release is backward-compatible for connectors. From
> > > >>> my point of view, such a backward incompatibility should cancel the
> > > >>> vote.
> > > >>>
> > > >>> Regarding the SourceReaderContext#metricGroup compatibility problem
> > > >>> in 1.14.0, I would suggest starting a new discussion thread to see
> > > >>> whether we have any idea how to fix it. We should fix it ASAP!
> > > >>> Otherwise the iceberg/hudi/cdc communities will get frustrated again
> > > >>> when upgrading to 1.14. Maybe we still have time to release a minor
> > > >>> version, because no connector has upgraded to 1.14.0 yet. What do you
> > > >>> think? @Leonard Xu <xbjt...@gmail.com> @Arvid Heise <ar...@ververica.com>
> > > >>> @Piotr Nowojski <pi...@ververica.com>
> > > >>>
> > > >>> Best,
> > > >>> Jark
> > > >>>
> > > >>> [1]:
> > > >>> https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release
> > > >>>
> > > >>> On Wed, 29 Sept 2021 at 09:46, OpenInx <open...@gmail.com> wrote:
> > > >>>
> > > >>> > > Apart from this being `@PublicEvolving`
> > > >>> >
> > > >>> > From my perspective, annotating 'DynamicTableSink' as a
> > > >>> > 'PublicEvolving' class is not reasonable, because it means devs can
> > > >>> > easily change the basic API that all downstream connectors depend on
> > > >>> > when iterating Flink from 1.12 to 1.13 (according to the wiki [1]).
> > > >>> > This implies all downstream maintainers must take on this maintenance
> > > >>> > burden, and it also makes our Flink ecosystem very fragile. Changing
> > > >>> > 'DynamicTableSink' between two major versions sounds reasonable to me,
> > > >>> > but incompatible changes between two minor versions do not. I think we
> > > >>> > may need to check which APIs are annotated 'PublicEvolving' but should
> > > >>> > be 'Public' because all connectors depend on them. We should ensure the
> > > >>> > stability of those APIs that are necessary to implement a connector,
> > > >>> > and at the same time implement the updated v2 version of the API. Only
> > > >>> > after all v2 APIs are considered stable should we mark them as stable.
> > > >>> > Releasing a version where some of the APIs necessary to implement a
> > > >>> > connector are marked as stable and some as unstable is very unfriendly
> > > >>> > to downstream, because then essentially every upgrade downstream
> > > >>> > requires refactoring the code.
> > > >>> >
> > > >>> > > We are trying to provide forward compatibility: applications using
> > > >>> > > `@Public` APIs compiled against Flink 1.12.x, should work fine in
> > > >>> > > Flink 1.13.x
> > > >>> >
> > > >>> > Thanks for clarifying this. Sounds reasonable to me; then we at Apache
> > > >>> > Iceberg could just use Flink 1.12.x to build the flink+iceberg
> > > >>> > connector and should make all the tests work fine for both Flink 1.12
> > > >>> > and Flink 1.13. Regarding the `ResolvedCatalogTable` changes, I don't
> > > >>> > think they guarantee forward compatibility as you said, because the
> > > >>> > flink-iceberg-runtime.jar compiled against Flink 1.12 still won't work
> > > >>> > in Flink 1.13, since it needs Flink 1.12's `CatalogTable` to be cast to
> > > >>> > Flink 1.13's `ResolvedCatalogTable`. In general, I agree that forward
> > > >>> > compatibility is a clearer compatibility guarantee.
> > > >>> >
> > > >>> > [1].
> > > >>> > https://cwiki.apache.org/confluence/display/FLINK/Stability+Annotations
> > > >>> >
> > > >>> >
> > > >>> > On Tue, Sep 28, 2021 at 10:33 PM David Morávek <d...@apache.org>
> > > >>> wrote:
> > > >>> >
> > > >>> > > >
> > > >>> > > > I think we have compile-time checks for breaking changes in
> > > >>> > > > `@Public` marked classes/interfaces using japicmp [1].
> > > >>> > >
> > > >>> > >
> > > >>> > > Nice, thanks for pointing that out, I'll take a closer look at it ;)
> > > >>> > >
> > > >>> > > Best,
> > > >>> > > D.
> > > >>> > >
> > > >>> > > On Tue, Sep 28, 2021 at 4:14 PM Piotr Nowojski <
> > > pnowoj...@apache.org
> > > >>> >
> > > >>> > > wrote:
> > > >>> > >
> > > >>> > > > > - We don't have any safeguards for stable API breaks. Big +1
> > > >>> > > > > for Ingo's effort with architectural tests [3].
> > > >>> > > >
> > > >>> > > > I think we have compile-time checks for breaking changes in
> > > >>> > > > `@Public` marked classes/interfaces using japicmp [1].
> > > >>> > > >
> > > >>> > > > Piotrek
> > > >>> > > >
> > > >>> > > > [1] https://github.com/apache/flink/blob/master/pom.xml#L2014:L2084
> > > >>> > > >
> > > >>> > > > On Tue, 28 Sep 2021 at 15:59, David Morávek <d...@apache.org> wrote:
> > > >>> > > >
> > > >>> > > > > This is a super interesting topic and there is already a great
> > > >>> > > > > discussion. Here are a few thoughts:
> > > >>> > > > >
> > > >>> > > > > - There is a delicate balance between fast delivery of new
> > > >>> > > > > features and API stability. Even though we should be careful with
> > > >>> > > > > breaking evolving interfaces, it shouldn't stop us from making
> > > >>> > > > > fast progress / iterating on features.
> > > >>> > > > > - There are two camps of users: one camp prefers more frequent
> > > >>> > > > > releases / new features (early adopters), and the second prefers
> > > >>> > > > > less frequent, stable releases. There was already a great
> > > >>> > > > > discussion about this in the Flink 1.14 thread [1].
> > > >>> > > > > - We're already trying to be explicit about which APIs may break
> > > >>> > > > > via annotations and the feature radar [2]. Stability annotations
> > > >>> > > > > are a well-known concept used by many projects. I think we can
> > > >>> > > > > still go a bit further here and aim for IDE support (for example,
> > > >>> > > > > usages of Guava's @Experimental interfaces get highlighted,
> > > >>> > > > > raising more awareness of potential issues). I'm not sure how
> > > >>> > > > > this IDE integration works though.
> > > >>> > > > > - We don't have any safeguards for stable API breaks. Big +1 for
> > > >>> > > > > Ingo's effort with architectural tests [3].
> > > >>> > > > >
> > > >>> > > > > [1]
> > > >>> > > > > https://lists.apache.org/thread.html/r76e1cdba577c6f4d6c86b23fdaeb53c4e3744c20d0b3e850fc2e14a7%40%3Cdev.flink.apache.org%3E
> > > >>> > > > > [2] https://flink.apache.org/roadmap.html
> > > >>> > > > > [3] https://issues.apache.org/jira/browse/FLINK-24138
> > > >>> > > > >
> > > >>> > > > > Best,
> > > >>> > > > > D.
> > > >>> > > > >
> > > >>> > > > > On Tue, Sep 28, 2021 at 3:49 PM Leonard Xu <
> > xbjt...@gmail.com>
> > > >>> > wrote:
> > > >>> > > > >
> > > >>> > > > > > Thanks Piotr for the kind reply. What confused me is that
> > > >>> > > > > > `SourceReaderContext` was marked @Public when it was born in
> > > >>> > > > > > flink 1.11, then it was corrected to @PublicEvolving in 1.11 -_-,
> > > >>> > > > > > and finally it was changed to @Public again...
> > > >>> > > > > >
> > > >>> > > > > > As a Flink and Flink ecosystem (Flink CDC connectors) developer,
> > > >>> > > > > > I think what we're discussing is meaningful and I'd like to help
> > > >>> > > > > > improve the Public API checks for those changes.
> > > >>> > > > > > Chesnay's tip is a good idea: maybe we can use a tool like
> > > >>> > > > > > japicmp to do the check for every PR in the CI phase.
> > > >>> > > > > >
> > > >>> > > > > > Best,
> > > >>> > > > > > Leonard
> > > >>> > > > > >
> > > >>> > > > > >
> > > >>> > > > > > > On 28 Sep 2021, at 21:15, Piotr Nowojski <pnowoj...@apache.org> wrote:
> > > >>> > > > > > >
> > > >>> > > > > > > Hi Leonard,
> > > >>> > > > > > >
> > > >>> > > > > > > Sorry that this caused you trouble; however, that change in
> > > >>> > > > > > > the return type was done while this class was still marked as
> > > >>> > > > > > > `@PublicEvolving` [1]. As of 1.13.x, `SourceReaderContext` was
> > > >>> > > > > > > `@PublicEvolving`, and it was marked as `@Public` only starting
> > > >>> > > > > > > from Flink 1.14.0 [2]. Probably what confused you was that both
> > > >>> > > > > > > of those changes (changing the return type and making it
> > > >>> > > > > > > `@Public`) happened in the same release.
> > > >>> > > > > > >
> > > >>> > > > > > > However, those changes (`SourceReaderContext` and
> > > >>> > > > > > > `ResolvedCatalogTable`) should have been clearly mentioned in
> > > >>> > > > > > > the release notes with an upgrade guide.
> > > >>> > > > > > >
> > > >>> > > > > > > Best, Piotrek
> > > >>> > > > > > >
> > > >>> > > > > > > [1]
> > > >>> > > > > > > https://github.com/apache/flink/commit/7f3636f6b4f8bac415a7db85917ad849636bd730#diff-a247a24ccd1afc07c5d690a8a58b1f6584329925fdf0d7dc89361b90d621b7f2R31
> > > >>> > > > > > > [2] https://issues.apache.org/jira/browse/FLINK-22357
> > > >>> > > > > > >
> > > >>> > > > > > > On Tue, 28 Sep 2021 at 14:49, Leonard Xu <xbjt...@gmail.com> wrote:
> > > >>> > > > > > >
> > > >>> > > > > > >>>>
> > > >>> > > > > > >>>> Not sure if this will happen in 1.15 already. We
> will
> > > >>> needed
> > > >>> > > > > automated
> > > >>> > > > > > >>>> compatibility tests and a well-defined list of
> stable
> > > API.
> > > >>> > > > > > >>
> > > >>> > > > > > >>> We are
> > > >>> > > > > > >>> trying to provide forward compatibility: applications
> > > using
> > > >>> > > > `@Public`
> > > >>> > > > > > >> APIs
> > > >>> > > > > > >>> compiled against Flink 1.12.x, should work fine in
> > Flink
> > > >>> 1.13.x
> > > >>> > > > > > >>
> > > >>> > > > > > >> Unfortunately, I also hit a forward compatibility issue.
> > > >>> > > > > > >> When I did the release 1.14 check, I tried to use the
> > > >>> > > > > > >> mysql-cdc connector [1], which was compiled against 1.13.1,
> > > >>> > > > > > >> in the SQL Client, but it could not work on a flink 1.14.0
> > > >>> > > > > > >> cluster; it failed because the metric API compatibility was
> > > >>> > > > > > >> broken.
> > > >>> > > > > > >>
> > > >>> > > > > > >> @Public
> > > >>> > > > > > >> public interface SourceReaderContext {
> > > >>> > > > > > >>
> > > >>> > > > > > >>   MetricGroup metricGroup();
> > > >>> > > > > > >>
> > > >>> > > > > > >>
> > > >>> > > > > > >> @Public
> > > >>> > > > > > >> public interface SourceReaderContext {
> > > >>> > > > > > >>
> > > >>> > > > > > >>    SourceReaderMetricGroup metricGroup();
> > > >>> > > > > > >>
> > > >>> > > > > > >>
> > > >>> > > > > > >> Shouldn't we mark it as @Deprecated and only delete it in the
> > > >>> > > > > > >> 2.0.0 version for a @Public API, as our community rule [2]
> > > >>> > > > > > >> describes? At least we should keep such methods across several
> > > >>> > > > > > >> minor versions (<major>.<minor>.<patch>).
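> > > >>> > > > > > >>
> > > >>> > > > > > >> As a purely hypothetical sketch (the new method name is made
> > > >>> > > > > > >> up; this is not what 1.14 actually does), the change could have
> > > >>> > > > > > >> been staged by keeping the old accessor and adding the new one
> > > >>> > > > > > >> under a different name, so connectors compiled against 1.13
> > > >>> > > > > > >> would still link:
> > > >>> > > > > > >>
> > > >>> > > > > > >> @Public
> > > >>> > > > > > >> public interface SourceReaderContext {
> > > >>> > > > > > >>
> > > >>> > > > > > >>     /** Old accessor, kept with its 1.13 signature for binary compatibility. */
> > > >>> > > > > > >>     @Deprecated
> > > >>> > > > > > >>     MetricGroup metricGroup();
> > > >>> > > > > > >>
> > > >>> > > > > > >>     /** New accessor exposing the standardized source metrics. */
> > > >>> > > > > > >>     SourceReaderMetricGroup sourceReaderMetricGroup();
> > > >>> > > > > > >> }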
> > > >>> > > > > > >>
> > > >>> > > > > > >> Although these changes can be tracked to voted FLIPs and it's
> > > >>> > > > > > >> not the fault of a few developers, it shows us that we didn't
> > > >>> > > > > > >> pay enough attention to backward/forward compatibility.
> > > >>> > > > > > >>
> > > >>> > > > > > >> Best,
> > > >>> > > > > > >> Leonard
> > > >>> > > > > > >> [1]
> > > >>> > > > > > >> https://github.com/ververica/flink-cdc-connectors/tree/master/flink-connector-mysql-cdc
> > > >>> > > > > > >> [2]
> > > >>> > > > > > >> https://cwiki.apache.org/confluence/display/FLINK/Stability+Annotations
> > > >>> > > > > > >>
