Hi Devs!

I’d like to kick off a discussion on setting up a repo for a new fleet of
Google Cloud connectors.

A bit of context:

   - We have a team of Google engineers who are looking to build and
   maintain 5-10 GCP connectors for Flink.

   - We are wondering if it would make sense to host our connectors under
   the ASF umbrella, following a repo structure similar to AWS's
   (https://github.com/apache/flink-connector-aws). In our case:
   apache/flink-connectors-gcp.

   - Currently, we have no Flink committers on our team. We are actively
   involved in the Apache Beam community and have a number of ASF members
   on the team.


We saw that one of the original motivations for externalizing connectors
was to encourage more activity and contributions around connectors by
easing the contribution overhead. We understand that the decision was
ultimately made to host the externalized connector repos under the ASF
organization. For the same reasons (release infra, quality assurance,
integration with the community, etc.), we would like all GCP connectors to
live under the ASF organization.

We want to ask the Flink community what you all think of this idea, and
what would be the best way for us to go about contributing something like
this. We are excited to contribute and want to learn and follow your
practices.

A specific issue we know of is that our changes need approval from Flink
committers. Do you have a suggestion for how best to go about a new
contribution like ours from a team that does not have committers? Is it
possible, for example, to partner with a committer (or a small cohort) for
tight engagement? We are also aware of the ASF voting and release
processes, but those don't seem to be as much of a hurdle.

Huge thanks in advance for sharing your thoughts!


Claire