Hey Martijn,

Apologies, I had missed this message initially! Comments inline.

> contributions that have happened outside of the Apache realm do not play
> a role when evaluating potential new committers.

I understand. This, unfortunately, is the tradeoff for developing the
connectors outside of Apache in exchange for development velocity. That
being said, the ultimate goal is to move towards committer status. In the
short-term, we will pick up FLIPs for outstanding work that needs to be
done under Apache (i.e., [1], [2]) while, in parallel, developing connectors
externally. The long-term goal will be to eventually donate these
connectors to Apache through a series of FLIPs and PRs. I am hoping those
PRs would be considered in conjunction with the ad-hoc FLIP work when
evaluating potential committer status for those who shepherd the connector
donations.

> I think the best course of action would be to create a FLIP to add these
> connectors to the ASF, while trying to find one or two committers in the
> Flink project that are willing to help with the reviews. Would that be
> possible?

Creating a FLIP to add these connectors to the ASF sounds like a good plan
to me! I will start drafting that. As for finding committers who are
willing to run point on reviews, I had asked about this earlier in the
thread and it seemed that wasn't a realistic ask. I also think Alexander's
point makes sense: it wouldn't be fair to the reviewer to take a dependency
on them when reviewing PRs is done on a best-effort basis.

Thanks for your thoughts on this. Please let me know if any of my
assumptions are incorrect or if I am not thinking about this in the right
way.

Best,
Claire

[1] https://issues.apache.org/jira/browse/FLINK-32673
[2] https://issues.apache.org/jira/browse/FLINK-20625

On Fri, Mar 8, 2024 at 3:04 AM Martijn Visser <martijnvis...@apache.org>
wrote:

> Hi Claire,
>
> I don't think it's a good idea to actually develop outside of Apache;
> contributions that have happened outside of the Apache realm do not play a
> role when evaluating potential new committers. I think the best course of
> action would be to create a FLIP to add these connectors to the ASF, while
> trying to find one or two committers in the Flink project that are willing
> to help with the reviews. Would that be possible?
>
> Best regards,
>
> Martijn
>
> On Thu, Feb 15, 2024 at 12:39 PM Claire McCarthy
> <clairemccar...@google.com.invalid> wrote:
>
> > Hi Alexander,
> >
> > Thanks so much for the info!
> >
> > It sounds like the best path forward is for us to develop outside of
> > Apache while, in parallel, working to gain committer status. Our goal
> > will be to eventually move anything we build under the Apache umbrella
> > once we're more plugged in to the community.
> >
> > As for migrating the existing Pub/Sub connector to the new Source API, we
> > actually have somebody currently building a new Pub/Sub connector from
> > scratch (using the new Source API). Once that is ready, we will make sure
> > to get that new implementation moved under Apache and help with the
> > migration effort.
> >
> > Thanks again for the response and I'm sure we will be chatting soon!
> >
> > Best,
> > Claire
> >
> > On Wed, Feb 14, 2024 at 7:36 AM Alexander Fedulov <
> > alexander.fedu...@gmail.com> wrote:
> >
> > > Hi Claire,
> > >
> > > Thanks for reaching out. It's great that there is interest from Google
> > > in spearheading the development of the respective Flink connectors.
> > >
> > > As of now, there is only one GCP-specific connector developed directly
> > > as part of ASF Flink, namely the Pub/Sub one. It has already been
> > > externalized here [1].
> > > Grouping further connectors under apache/flink-connectors-gcp makes
> > > sense, but it would be nice to first understand which GCP connectors
> > > you plan to add before we create this new umbrella project.
> > >
> > > I do not think establishing a dedicated workgroup to help with the
> > > GCP-specific development is a realistic goal, though. The development
> > > will most probably take place on the regular ASF best-effort basis
> > > (which involves mailing list discussions, reaching out to people for
> > > reviews, etc.) until your developers gain committer status and can
> > > work more independently.
> > >
> > > One immediate open item where the Flink community would definitely
> > > appreciate your help is with the migration of the existing Pub/Sub
> > > connector to the new Source API. As you can see here [2], it is one of
> > > the two remaining connectors where we have not yet made progress, and
> > > it seems like a great place to start the collaboration. Flink 2.0 aims
> > > to remove the SourceFunction API, which the current Pub/Sub connector
> > > relies on. It would be great if your colleagues could assist with this
> > > effort [3].
> > >
> > > Best,
> > > Alexander Fedulov
> > >
> > > [1] https://github.com/apache/flink-connector-gcp-pubsub
> > > [2] https://issues.apache.org/jira/browse/FLINK-28045
> > > [3] https://issues.apache.org/jira/browse/FLINK-32673
> > >
> > >
> > >
> > > On Tue, 13 Feb 2024 at 17:25, Claire McCarthy
> > > <clairemccar...@google.com.invalid> wrote:
> > >
> > > > Hi Devs!
> > > >
> > > > I’d like to kick off a discussion on setting up a repo for a new
> > > > fleet of Google Cloud connectors.
> > > >
> > > > A bit of context:
> > > >
> > > >    - We have a team of Google engineers who are looking to
> > > >      build/maintain 5-10 GCP connectors for Flink.
> > > >    - We are wondering if it would make sense to host our connectors
> > > >      under the ASF umbrella following a similar repo structure as AWS
> > > >      (https://github.com/apache/flink-connector-aws). In our case:
> > > >      apache/flink-connectors-gcp.
> > > >    - Currently, we have no Flink committers on our team. We are
> > > >      actively involved in the Apache Beam community and have a number
> > > >      of ASF members on the team.
> > > >
> > > >
> > > > We saw that one of the original motivations for externalizing
> > > > connectors was to encourage more activity and contributions around
> > > > connectors by easing the contribution overhead. We understand that
> > > > the decision was ultimately made to host the externalized connector
> > > > repos under the ASF organization. For the same reasons (release
> > > > infra, quality assurance, integration with the community, etc.), we
> > > > would like all GCP connectors to live under the ASF organization.
> > > >
> > > > We want to ask the Flink community what you all think of this idea,
> > > > and what would be the best way for us to go about contributing
> > > > something like this. We are excited to contribute and want to learn
> > > > and follow your practices.
> > > >
> > > > A specific issue we know of is that our changes need approval from
> > > > Flink committers. Do you have a suggestion for how best to go about
> > > > a new contribution like ours from a team that does not have
> > > > committers? Is it possible, for example, to partner with a committer
> > > > (or a small cohort) for tight engagement? We also know about the ASF
> > > > voting and release process, but that doesn't seem to be as much of a
> > > > potential hurdle.
> > > >
> > > > Huge thanks in advance for sharing your thoughts!
> > > >
> > > >
> > > > Claire
> > > >
> > >
> >
>
