I agree with you. We are looking for a simple solution to move data from Kafka
to Hadoop. I tried using Camus earlier (non-Avro), but the documentation is
lacking to make it work correctly, and we do not want to introduce another
component into the solution. In the meantime, can the Kafka Hadoop
Sent: Monday, August 12, 2013 8:20 PM
Subject: Re: Kafka/Hadoop consumers and producers
Kam,
I am perfectly fine if you pick this up. After thinking about it for a
while, we are going to upgrade to Kafka 0.8.0 and also use Camus, as it
more closely matches our use case, with the caveat that we do not use
Sent: Monday, August 12, 2013 7:00 PM
Subject: Re: Kafka/Hadoop consumers and producers
We've done a bit of work over at Wikimedia to debianize Kafka and make it
behave like a regular service.
https://github.com/wikimedia/operations-debs-kafka
Sent: Tuesday, August 13, 2013 1:03 PM
Subject: Re: Kafka/Hadoop consumers and producers
Sent: Saturday, August 10, 2013 3:30 PM
Subject: Re: Kafka/Hadoop consumers and producers
So guys, just to throw my 2 cents in:
1. We aren't deprecating anything. I just noticed that the Hadoop contrib
package wasn't getting as much attention as it should.
2. Andrew, or anyone: if there is anyone using the contrib package who would
be willing to volunteer to kind of adopt it that
Dibyendu,
According to the pull request https://github.com/linkedin/camus/pull/15, it
was merged into the camus-kafka-0.8 branch. I have not checked whether the
code was subsequently removed; however, at least one of the important files
from this patch
Yes, I am definitely interested in such capabilities. We are also using
Kafka 0.7.
Guys, I already asked, but nobody answered: what is the community using to
consume from Kafka to HDFS?
My assumption was that if Camus supports only Avro, it will not be suitable
for everyone, but people transfer from Kafka to
For the last 6 months, we've been using this:
https://github.com/wikimedia-incubator/kafka-hadoop-consumer
In combination with this wrapper script:
https://github.com/wikimedia/kraken/blob/master/bin/kafka-hadoop-consume
It's not great, but it works!
On Aug 9, 2013, at 2:06 PM, Felix GV wrote:
I just checked and that patch is in the 0.8 branch. Thanks for working on
backporting it, Andrew. We'd be happy to commit that work to master.
As for the kafka contrib project vs Camus, they are similar but not quite
identical. Camus is intended to be a high throughput ETL for bulk
ingestion of
Hi Ken,
I am also working on making Camus fit for non-Avro messages for our
requirement.
I see you mentioned about this patch
(https://github.com/linkedin/camus/commit/87917a2aea46da9d21c8f67129f6463af52f7aa8)
which supports custom data writer for Camus. But this patch is not pulled into
We also have a need today to ETL from Kafka into Hadoop, and we do not
currently use Avro, nor have any plans to.
So is the official direction, based on this discussion, to ditch the Kafka
contrib code and direct people to use Camus without Avro as Ken described, or
are both solutions going to
Hi all,
Over at the Wikimedia Foundation, we're trying to figure out the best way to do
our ETL from Kafka into Hadoop. We don't currently use Avro and I'm not sure
if we are going to. I came across this post.
If the plan is to remove the hadoop-consumer from Kafka contrib, do you think
we
Hi Andrew,
Camus can be made to work without Avro. You will need to implement a message
decoder and a data writer. We need to add a better tutorial on how to do
this, but it isn't that difficult. If you decide to go down this path, you can
always ask questions on this list. I try to make
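For anyone wondering what "implement a message decoder" might look like in practice, here is a minimal sketch. The class and method names (`MessageDecoder`, `CamusWrapper`, `init`, `decode`) follow Camus's coders package, but the exact signatures here are assumptions — check the camus-api sources for the real interface. To keep the example self-contained, the two Camus base types are stubbed locally rather than imported.

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;

// Local stand-ins for Camus's com.linkedin.camus.coders classes.
// These are assumed shapes for illustration; the real API lives in camus-api.
class CamusWrapper<R> {
    private final R record;
    CamusWrapper(R record) { this.record = record; }
    R getRecord() { return record; }
}

abstract class MessageDecoder<M, R> {
    protected Properties props;
    protected String topicName;

    public void init(Properties props, String topicName) {
        this.props = props;
        this.topicName = topicName;
    }

    public abstract CamusWrapper<R> decode(M message);
}

// A decoder that treats each Kafka message payload as a UTF-8 string,
// so non-Avro payloads (plain JSON, TSV, log lines) pass straight through.
class StringMessageDecoder extends MessageDecoder<byte[], String> {
    @Override
    public CamusWrapper<String> decode(byte[] payload) {
        return new CamusWrapper<>(new String(payload, StandardCharsets.UTF_8));
    }
}
```

A matching data writer would do the inverse on the output side: take the decoded record and write it to HDFS in whatever format you need.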
Vadim,
The advantages of Camus compared to the contrib consumer are the following
(but perhaps I'm forgetting some):
- The ability to fetch all/many topics in one job (Map Reduce can
otherwise introduce a lot of overhead for small topics).
- Smarter load balancing of topic partitions
I guess I am more concerned about the long term than the short term. I
think if you guys want to have all the Hadoop+Kafka stuff, then we should
move the producer there, and it sounds like it would be possible to get
similar functionality from the existing consumer code. I am not in a rush, I
just
We currently have a contrib package for consuming and producing messages
from mapreduce (
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tree;f=contrib;h=e53e1fb34893e733b10ff27e79e6a1dcbb8d7ab0;hb=HEAD
).
We keep running into problems (e.g. KAFKA-946) that are basically due to
the fact