Hi Catalin,

I am by no means a Kafka expert, but what are you optimizing for?

   - Cost could be a variable.
   - Time to bring a new broker online could be another variable. Large
   machines take longer to replace since they need to stream more data
   (rough numbers below).
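
As a rough back-of-the-envelope (assumed numbers, not measurements): if a
failed broker held 40 TB of partition data and its replacement can recover
at an effective 100 MB/s, re-replication alone takes about 4e13 B / 1e8 B/s
= 400,000 s, i.e. ~4.6 days of running with reduced redundancy. Ten brokers
holding 4 TB each would each recover in about half a day. Fewer, larger
brokers concentrate both the failure domain and the recovery time.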

Cheers,
Jens

On Wed, Jan 13, 2016 at 1:09 PM, Vladoiu Catalin <vladoiu.cata...@gmail.com>
wrote:

> Hi guys,
>
> I've had a long discussion with my colleagues about the size of the
> brokers for our new Kafka cluster, and we still haven't reached a final
> conclusion.
>
> Our main concern is the request size: 10-20 MB per request (the producer
> will send big requests), maybe more, and we estimate that we will have
> 4-5 TB per day.
>
> Our debate is between:
> 1. A smaller cluster (fewer brokers) but with big machines, something
> like this:
> Disk: 11 x 4 TB, CPU: 48 cores, RAM: 252 GB. We chose this configuration
> because our Hadoop cluster uses it and can easily handle that amount of
> data.
> 2. A larger number of brokers, each with a smaller configuration.
>
> I was hoping that somebody with more experience using Kafka could advise
> on this.
>
> Thanks,
> Catalin
>
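
Two more thoughts on the numbers above. First, Kafka's defaults cap
messages at roughly 1 MB, so with 10-20 MB requests you'll need to raise
the size limits on broker, producer, and consumer in lockstep, whatever
hardware you pick. A sketch of the usual settings (26214400 = 25 MB is just
illustrative headroom above your 20 MB; choose your own limit):

    # broker (server.properties)
    message.max.bytes=26214400          # largest message the broker accepts (~25 MB)
    replica.fetch.max.bytes=26214400    # followers must be able to fetch it too

    # producer
    max.request.size=26214400           # largest request the producer will send

    # consumer (new consumer)
    max.partition.fetch.bytes=26214400  # max data fetched per partition

Second, 4-5 TB/day is only ~50-60 MB/s of average ingest (5e12 B / 86,400 s
is about 58 MB/s), roughly tripled cluster-wide with replication factor 3,
so either option should cope with the raw throughput; recovery time and
cost are probably the bigger differentiators.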



-- 
Jens Rantil
Backend engineer
Tink AB

Email: jens.ran...@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se

Facebook <https://www.facebook.com/#!/tink.se> Linkedin
<http://www.linkedin.com/company/2735919> Twitter <https://twitter.com/tink>
