We also need to ETL from Kafka into Hadoop today, and we do not currently use 
Avro, nor do we have any plans to. 

So, based on this discussion, is the official direction to ditch the Kafka 
contrib code and direct people to use Camus without Avro, as Ken described, or 
are both solutions going to survive? 

I can put time into the contrib code and/or work on a tutorial documenting how 
to make Camus work without Avro. 
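
As a concrete starting point for that tutorial, here is a rough sketch of the 
decoder half that Ken describes below. The package name is made up, and the 
exact MessageDecoder/CamusWrapper signatures may differ between Camus 
versions, so please treat this as a sketch rather than tested code:

    // Sketch only: the package name is hypothetical, and the signatures
    // should be checked against your Camus checkout.
    package org.example.camus;

    import java.nio.charset.Charset;
    import java.util.Properties;

    import com.linkedin.camus.coders.CamusWrapper;
    import com.linkedin.camus.coders.MessageDecoder;

    // Decodes each Kafka message payload as a UTF-8 string instead of Avro.
    public class PlainTextMessageDecoder extends MessageDecoder<byte[], String> {

        private static final Charset UTF8 = Charset.forName("UTF-8");

        @Override
        public void init(Properties props, String topicName) {
            this.props = props;
            this.topicName = topicName;
        }

        @Override
        public CamusWrapper<String> decode(byte[] payload) {
            // The timestamp is what Camus uses to partition output paths;
            // parse one out of the payload if messages carry their own
            // event time instead of defaulting to "now".
            return new CamusWrapper<String>(new String(payload, UTF8),
                    System.currentTimeMillis());
        }
    }

The decoder would then be registered via the camus.message.decoder.class 
property in camus.properties (assuming that is still the property name in 
current builds). The other half is the data writer Ken mentions: implementing 
Camus's RecordWriterProvider so output lands as plain text or SequenceFiles 
rather than Avro container files, with the existing Avro-based provider as the 
template to copy.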

Which is the preferred route for the long term?

Thanks,
Andrew

On Wednesday, August 7, 2013 10:50:53 PM UTC-6, Ken Goodhope wrote:
> Hi Andrew,
> 
> Camus can be made to work without Avro. You will need to implement a message 
> decoder and a data writer. We need to add a better tutorial on how to do 
> this, but it isn't that difficult. If you decide to go down this path, you 
> can always ask questions on this list. I try to make sure each email gets 
> answered, but it can take me a day or two. 
> 
> -Ken
> 
> On Aug 7, 2013, at 9:33 AM, ao...@wikimedia.org wrote:
> 
> > Hi all,
> > 
> > Over at the Wikimedia Foundation, we're trying to figure out the best way 
> > to do our ETL from Kafka into Hadoop. We don't currently use Avro and I'm 
> > not sure if we are going to. I came across this post.
> > 
> > If the plan is to remove the hadoop-consumer from Kafka contrib, do you 
> > think we should not consider it as one of our viable options?
> > 
> > Thanks!
> > -Andrew
