Or store the files in HDFS and use Kafka to publish an event for each file, yup. Processing on the files can now be done without the MapReduce overhead in Hadoop using Apache Tez (or something that uses Tez, like Pig).
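For illustration, a minimal sketch of the event-publishing side using the Kafka Java producer client. The broker address, topic name, and HDFS path are placeholders, not anything from this thread:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FileEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address for illustration.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish only the HDFS location of the large file, not its bytes.
            // Consumers read the path from the topic and fetch/process the
            // file from HDFS (e.g. with a Tez-based job).
            String hdfsPath = "hdfs://namenode:8020/data/incoming/file-001.bin";
            producer.send(new ProducerRecord<>("file-events", hdfsPath));
        }
    }
}

This keeps the 100 MB payloads out of the Kafka log entirely; the topic carries only small pointer messages.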


/*******************************************
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop
********************************************/


> On May 13, 2014, at 4:40 AM, Wouter Bosland <wbosland.pr...@gmail.com> wrote:
> 
> Hello everyone,
> 
> Can Kafka be used for binary large objects of 100 MB?
> 
> Or should I use a different solution to store these files like MongoDB and
> maybe send the location of these files in MongoDB over Kafka?
> 
> 
> 
> Thanks in advance,
> 
> Wouter
