Getting the partitioning right now is only important if your messages are
keyed. If they’re not, stop reading, start with a fairly low number of
partitions, and expand as needed.
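The reason keyed messages are the special case: the default partitioner maps a key to a partition by hashing it modulo the partition count, so growing the topic later remaps keys and breaks per-key ordering. A minimal Python sketch of that behavior (illustrative only; Kafka's real default partitioner uses murmur2, not this toy hash):

```python
# Toy model of hash-based key partitioning. The hash function here is a
# stand-in; Kafka's Java producer uses murmur2 on the serialized key.
def partition_for(key: bytes, num_partitions: int) -> int:
    # Stable hash: the same key always lands on the same partition
    # for a FIXED partition count.
    h = sum(key)  # placeholder hash
    return h % num_partitions

# Same key, same partition, as long as num_partitions never changes.
p_before = partition_for(b"user-42", 6)

# After expanding the topic from 6 to 8 partitions, the same key can
# land somewhere else -- which is why you want the count right up front
# for keyed topics.
p_after = partition_for(b"user-42", 8)
```

The takeaway is only about keyed topics: with unkeyed (round-robin) messages, no consumer depends on which partition a record lands in, so expanding later is harmless.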
1000 partitions per topic is generally not normal. It's not really a
problem in itself, but you want to size topics appropriately.
Hello,
I want to size a Kafka cluster with just one topic, and I'm going to
process the data with Spark and other applications.
If I have six hard drives per node, is Kafka smart enough to deal with
them? I guess that memory should be very important at this point, since
all data is cached in memory.
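On the six-drives question: Kafka can use multiple disks directly by listing one log directory per drive in the broker config, and it assigns each new partition to the directory currently holding the fewest partitions, so you need at least as many partitions as drives to use them all. A sketch of that setting (paths below are placeholders, not anything from your setup):

```properties
# server.properties -- one log directory per physical drive (example paths).
# Each partition lives entirely in a single directory, so with one topic
# you need >= 6 partitions to spread load across all six drives.
log.dirs=/mnt/disk1/kafka-logs,/mnt/disk2/kafka-logs,/mnt/disk3/kafka-logs,/mnt/disk4/kafka-logs,/mnt/disk5/kafka-logs,/mnt/disk6/kafka-logs
```

On the memory point: Kafka doesn't keep its own application-level cache; it relies on the OS page cache, so free RAM on the broker directly speeds up consumer reads of recent data.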