Hi Chris,

There have been some discussions in the community about keeping an ecosystem
around Apache Kafka (AK) versus making every component part of AK itself.

In the past we tried the latter approach, e.g. keeping the contributed
Hadoop clients within AK:
https://github.com/apache/kafka/tree/0.8/contrib

Over time, the lesson we learned is that having all these modules within a
monolithic project makes the code repo very hard to maintain, contributors
hard to synchronize, and releases hard to cut (think about keeping container
images, multi-language clients, operating tools / GUIs, third-party
dependencies, etc. within a single huge project). As for Connect, it is a
general framework for moving data between Kafka and other data systems; the
goal is to let connector developers easily build their own connectors on top
of this framework without worrying about low-level details like parallelism
and fault tolerance. Thus it is natural to keep the developed connectors as
an ecosystem alongside the AK project. On the other hand, keeping all the
connectors within AK would pull the dependencies of all those other data
systems into the AK repo.
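To make the "developers only write the data movement logic" point concrete, here is a minimal sketch of what a connector developer implements, using the standard org.apache.kafka.connect.source API. The class name LineSourceTask and the config key "topic" are hypothetical, and the task just emits a constant record; Connect itself handles offset tracking, retries, and running multiple task instances in parallel.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Hypothetical example task: the developer only fills in start/poll/stop;
// the Connect framework drives the lifecycle and manages fault tolerance.
public class LineSourceTask extends SourceTask {
    private String topic;

    @Override
    public void start(Map<String, String> props) {
        // Configuration is handed in by the framework; "topic" is an
        // assumed config key for this sketch.
        topic = props.get("topic");
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        // poll() is called repeatedly by the framework. The source
        // partition/offset maps let Connect resume after a failure.
        Map<String, ?> partition = Collections.singletonMap("source", "demo");
        Map<String, ?> offset = Collections.singletonMap("position", 0L);
        SourceRecord record = new SourceRecord(
                partition, offset, topic, Schema.STRING_SCHEMA, "hello");
        return Collections.singletonList(record);
    }

    @Override
    public void stop() { }

    @Override
    public String version() { return "0.0.1"; }
}
```

Note that nothing here imports HDFS, JDBC, or any other external system's client library; that dependency lives in the individual connector project, which is exactly why keeping connectors outside the AK repo keeps AK's own dependency set small.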


Guozhang



On Sat, Oct 21, 2017 at 1:28 AM, chris snow <chsnow...@gmail.com> wrote:

> I've been working with Kafka Connect for a short while, and I can't help
> but contrast it with the approach taken by Apache Camel.
>
> Camel takes an inclusive approach to components - it has a huge number of
> components (connectors) that are included as part of the official Camel
> distribution.  This makes Camel very easy to get up and running and
> connected to many diverse endpoints.
>
> On the other hand, I've found that using Kafka connectors is much more
> fragmented and time-consuming.  Is there a reason why the community doesn't
> focus on providing a more comprehensive set of connectors as part of the
> official Apache Kafka distribution?
>



-- 
-- Guozhang
