Hi,

I would like to start a discussion regarding the streaming connectors in
Flink.
Currently, there are 12 connectors in the main repository, 4 more are
pending as pull requests (rethinkdb, kafka 0.10, ActiveMQ, HBase), and
there were some more discussed on the mailing list / JIRA / pull requests.
Recently, I started rejecting some new connector contributions because I saw
neither enough demand from our users nor enough capacity among the
committers to maintain all of these connectors.
Instead of just rejecting contributions, I would like to come up with a
solution that works for contributors and the Flink community. Ideally, the
solution should be documented in the contribution guidelines so that it is
transparent for new contributors.

Some ideas for what we could do with new connector contributions:

#1 (current approach) Let the contributor host the connector in their own
GitHub repository, and link to the repo from the Flink 3rd party packages
page [1].

#2 Redirect the contribution to Apache Bahir, a recently created Apache
project, spun out of Apache Spark, that hosts some of Spark's streaming
connectors.

#3 Use the "flink-connectors" git repository at apache [2]. For more
details, also check out the thread titled "Externalizing the Flink
connectors".

#4 Use the "project-flink" [3] GitHub account.

I would like to hear your opinion on the problem in general and the
approaches I've suggested.

I personally like #1 and #2, because they have pretty low overhead on the
infrastructure side for us (Maven files, releases, CI / tests, ...).
For #2, we would probably first start a discussion with the Bahir community
to see whether such contributions are in the scope of their project.
#3 is too much overhead for us, in my opinion, and #4 doesn't seem to have
an obvious advantage over #1.


Regards,
Robert


[1] http://flink.apache.org/community.html#third-party-packages
[2] https://git-wip-us.apache.org/repos/asf?p=flink-connectors.git
[3] https://github.com/project-flink
