Dibyendu,
According to the pull request (https://github.com/linkedin/camus/pull/15), it
was merged into the camus-kafka-0.8 branch. I have not checked whether the code
was subsequently removed; however, at least one of the important files
from this patch
Yes, I am definitely interested in such capabilities. We are also using
Kafka 0.7.
Guys, I already asked, but nobody answered: what is the community using to
consume from Kafka to HDFS?
My assumption was that if Camus supports only Avro, it will not be suitable
for everyone, but people transfer from Kafka to
For the last 6 months, we've been using this:
https://github.com/wikimedia-incubator/kafka-hadoop-consumer
In combination with this wrapper script:
https://github.com/wikimedia/kraken/blob/master/bin/kafka-hadoop-consume
It's not great, but it works!
On Aug 9, 2013, at 2:06 PM, Felix GV
I just checked, and that patch is in the 0.8 branch. Thanks for working on
backporting it, Andrew. We'd be happy to commit that work to master.
As for the Kafka contrib project vs. Camus, they are similar but not quite
identical. Camus is intended to be a high-throughput ETL for bulk
ingestion of
I am trying to set up a Kafka service and connect it to a ZooKeeper ensemble that
would be shared with other projects. Can someone advise how to configure a
namespace in Kafka and ZooKeeper?
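In case it helps, the usual way to isolate Kafka on a shared ZooKeeper ensemble is to append a chroot path to the connection string, so all of Kafka's znodes live under that subtree instead of the ZooKeeper root. A minimal sketch of the broker config, assuming example hostnames and a /kafka chroot (the exact property name varies by Kafka version):

```
# server.properties (sketch)
# zk.connect in Kafka 0.7; zookeeper.connect in 0.8+.
# The trailing /kafka is the chroot: every znode Kafka creates
# (brokers, topics, consumer offsets) goes under /kafka.
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181/kafka
```

Note that on older Kafka versions you may need to create the chroot znode yourself before starting the brokers, and consumers must use the same chrooted connection string.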
Thanks so much
Sent from my iPad
Hi Ken,
I am also working on making Camus fit for non-Avro messages, for our
requirements.
I saw you mention this patch
(https://github.com/linkedin/camus/commit/87917a2aea46da9d21c8f67129f6463af52f7aa8),
which supports a custom data writer for Camus. But this patch is not pulled into