Hi guys,

My colleagues and I have had a long discussion about sizing the brokers for
our new Kafka cluster, and we still haven't reached a conclusion.

Our main concern is the request size: the producer will send big requests,
around 10-20MB each (maybe more), and we estimate roughly 4-5TB of data
per day.
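
For reference, these are roughly the settings we expect to have to raise
above the ~1MB defaults to handle messages that size (just a sketch; the
20MB values are placeholders for whatever limit we settle on):

# Broker (server.properties): raise the max message size, and make sure
# replication fetches can still copy the largest message
message.max.bytes=20971520
replica.fetch.max.bytes=20971520

# Producer: allow requests up to the same limit
max.request.size=20971520

# Consumer: the per-partition fetch must also fit the largest message
max.partition.fetch.bytes=20971520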

Our debate is between:
1. Having a smaller cluster (fewer brokers) with a big per-broker config,
something like this:
Disk: 11 x 4TB, CPU: 48 cores, RAM: 252GB. We chose this configuration
because our Hadoop cluster has the same config and easily handles that
amount of data.
2. Having a larger number of brokers, each with a smaller config (rough
storage math below).
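
To put rough numbers on the storage side (assuming replication factor 3
and 7-day retention, neither of which we've fixed yet):
5TB/day x 3 replicas x 7 days = ~105TB of log data.
Option 1 gives 44TB of raw disk per broker (11 x 4TB), so storage alone
doesn't force a large broker count; as far as we can tell, the trade-off
is more about per-broker network/disk throughput and how much data has to
be re-replicated when a single broker fails.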

I was hoping that somebody with more experience running Kafka could advise
on this.

Thanks,
Catalin
