We would need a stable interface between the connectors and Flink, along with
very good compatibility checks to ensure that we don't inadvertently break things.
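To illustrate the idea, here is a minimal sketch of what a small, frozen connector surface could look like. The names (`SinkConnector`, `INTERFACE_VERSION`, `CollectingSink`) are hypothetical, not Flink's actual API; the point is that externalized connectors would compile only against such a narrow, versioned interface, so the main repository can evolve internally without breaking independently released connectors.

```java
import java.util.ArrayList;
import java.util.List;

public class ConnectorSketch {

    // Hypothetical stable SPI: connectors depend only on this small surface.
    // A frozen version constant gives compatibility checks something concrete
    // to compare against when the interface is (deliberately) changed.
    interface SinkConnector<T> {
        int INTERFACE_VERSION = 1;

        void open();            // called once before any records arrive
        void invoke(T record);  // called for each record
        void close();           // called once on shutdown
    }

    // Trivial in-memory implementation, standing in for a real connector,
    // used here only to exercise the interface contract.
    static class CollectingSink implements SinkConnector<String> {
        final List<String> collected = new ArrayList<>();

        @Override public void open() { }
        @Override public void invoke(String record) { collected.add(record); }
        @Override public void close() { }
    }

    public static void main(String[] args) {
        CollectingSink sink = new CollectingSink();
        sink.open();
        sink.invoke("a");
        sink.invoke("b");
        sink.close();
        System.out.println(sink.collected.size());
        System.out.println(SinkConnector.INTERFACE_VERSION);
    }
}
```

A compatibility check in the main repository could then fail the build whenever the interface changes without the version constant being bumped, which is the kind of guard the externalized-connectors setup would rely on.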

> On 10 Dec 2015, at 15:45, Fabian Hueske <fhue...@gmail.com> wrote:
> 
> Sounds like a good idea to me.
> 
> +1
> 
> Fabian
> 
> 2015-12-10 15:31 GMT+01:00 Maximilian Michels <m...@apache.org>:
> 
>> Hi squirrels,
>> 
>> By now, we have numerous connectors that let you ingest data
>> into Flink or output data from Flink.
>> 
>> On the streaming side we have
>> 
>> - RollingSink
>> - Flume
>> - Kafka
>> - Nifi
>> - RabbitMQ
>> - Twitter
>> 
>> On the batch side we have
>> 
>> - Avro
>> - Hadoop compatibility
>> - HBase
>> - HCatalog
>> - JDBC
>> 
>> 
>> We have often wanted to release connector updates, or even entirely
>> new connectors, in between Flink releases. This is currently not
>> possible because the connectors are part of the main repository.
>> 
>> Therefore, I have created a new repository at
>> https://git-wip-us.apache.org/repos/asf/flink-connectors.git. The idea
>> is to externalize the connectors to this repository. We can then
>> update and release them independently of the main Flink repository. I
>> think this will give us more flexibility in the development process.
>> 
>> What do you think about this idea?
>> 
>> Cheers,
>> Max
>> 
