Yeah, the real question is really about the products built on top of Kafka
(Kafka with a hat on). At the last place I worked we ended up using Kinesis
rather than Kafka, basically for the reason Niek mentions: it seemed easier
to accept the limitations and pay Amazon than to run Kafka ourselves (small
company, <30 devs). My current place (<10 people) is moving towards
Azure Event Hubs (C#/Azure shop) for similar reasons.

The Kafka producer and consumer code certainly seems way better than the
equivalents for Event Hubs and Kinesis (assuming you're in C# for Azure and
Java for the others).
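
For what it's worth, here's a minimal sketch of what the newer Kafka Java
producer looks like; the broker address, topic name, and record contents
are just placeholder values:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker address and plain string serializers.
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // "events" is a hypothetical topic name.
            producer.send(new ProducerRecord<>("events", "key-1", "hello"));
            producer.close();
        }
    }

That's roughly the baseline I'm comparing the Event Hubs and Kinesis client
code against.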

Christian

On Thu, Nov 13, 2014 at 3:11 PM, Niek Sanders <niek.sand...@gmail.com>
wrote:

> Many similarities.
>
> For Kinesis right now:
>
> * only a 1 day max retention
> * max 50KB message size
> * guaranteed throughput based on MB/sec in and out.
> * servers hosting the shards abstracted away by SaaS
>
> For collaborative consumption, Kinesis uses DynamoDB whereas Kafka
> uses Zookeeper.
>
> Until recently, the collaborative consumption library was Java only.
> They recently released a bridge daemon (MultiLangDaemon) which lets
> you use Python too.  I wrote a Golang client for using that same
> bridge daemon in about a day (https://github.com/nieksand/gokinesis).
>
> For handling the broker topology, you just write to the Kinesis API,
> which takes care of the distribution to the appropriate shards.
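>
> A minimal sketch of that with the AWS Java SDK, assuming a hypothetical
> stream named "my-stream" (the partition key is what Kinesis hashes to
> pick the shard for you):
>
>     import java.nio.ByteBuffer;
>     import com.amazonaws.services.kinesis.AmazonKinesisClient;
>     import com.amazonaws.services.kinesis.model.PutRecordRequest;
>
>     public class KinesisPutSketch {
>         public static void main(String[] args) {
>             // Credentials come from the default provider chain.
>             AmazonKinesisClient kinesis = new AmazonKinesisClient();
>
>             PutRecordRequest req = new PutRecordRequest();
>             req.setStreamName("my-stream");   // hypothetical stream name
>             req.setPartitionKey("user-42");   // hashed to choose the shard
>             req.setData(ByteBuffer.wrap("hello".getBytes()));
>
>             kinesis.putRecord(req);
>         }
>     }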
>
> Another downside on Kinesis is that it doesn't have Kafka's neat
> producer-side message batch compression.
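>
> For comparison, turning on that batch compression with the newer Kafka
> Java producer is a one-line config; "gzip" here is just an illustrative
> codec choice:
>
>     props.put("compression.type", "gzip");  // "snappy" is also supported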
>
> The most compelling use case for Kinesis right now is if you're an
> AWS shop and don't want to deal with setting up and maintaining a
> Kafka cluster.  And even then it's only applicable if your use case
> fits inside the retention and message size limitations.
>
> Best,
> Niek
>
>
> On Thu, Nov 13, 2014 at 2:32 PM, Joseph Lawson <jlaw...@roomkey.com>
> wrote:
> > Oh man they look similar.  Any comments?
>
