20000 partitions should be OK.
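As a rough back-of-the-envelope check on the zNode-size concern discussed below: the per-topic partition-assignment JSON that Kafka keeps in zookeeper grows with partition count, and zookeeper's default znode limit (jute.maxbuffer) is 1 MB. The sketch below is illustrative only — the broker count and replica layout are made-up assumptions, not your cluster:

```python
import json

# Hypothetical estimate of the partition-assignment JSON size for one topic,
# compared against zookeeper's default 1 MB znode limit (jute.maxbuffer).
# Broker count and round-robin replica placement are assumptions for illustration.
partitions = 20000
replication_factor = 3
broker_ids = list(range(10))  # assume a 10-broker cluster

assignment = {
    str(p): [broker_ids[(p + r) % len(broker_ids)]
             for r in range(replication_factor)]
    for p in range(partitions)
}
payload = json.dumps({"version": 1, "partitions": assignment})

znode_limit = 1024 * 1024  # zookeeper default jute.maxbuffer: 1 MB
print(len(payload), len(payload) < znode_limit)  # well under the limit
```

Under these assumptions the serialized assignment comes out at a few hundred KB, comfortably below the default znode limit — which is consistent with 20000 partitions being OK on that axis.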

On 4/21/15, 12:33 AM, "Achanta Vamsi Subhash" <achanta.va...@flipkart.com>
wrote:

>We are planning to have ~20000 partitions. Will it be a bottleneck?
>
>On Mon, Apr 20, 2015 at 10:48 PM, Jiangjie Qin <j...@linkedin.com.invalid>
>wrote:
>
>> Producers usually do not query zookeeper at all.
>> Consumers usually query zookeeper at startup or during a rebalance. This
>> should be infrequent unless you have consumers coming and going all the
>> time. One exception: if you use zookeeper-based consumer offset commits,
>> offsets will be committed to zookeeper frequently.
>> In Kafka, the most heavily used zookeeper mechanism is zookeeper
>> listeners, and they do not fire at a regular frequency.
>>
>> The main zookeeper limitation for Kafka that I am aware of is the size
>> of each zNode. As long as you don't have so many partitions that a
>> zNode cannot hold them, it should be fine.
>>
>> Thanks.
>>
>> Jiangjie (Becket) Qin
>>
>> On 4/20/15, 5:58 AM, "Achanta Vamsi Subhash"
>><achanta.va...@flipkart.com>
>> wrote:
>>
>> >Hi,
>> >
>> >Could anyone help with this?
>> >
>> >Thanks.
>> >
>> >On Sun, Apr 19, 2015 at 12:58 AM, Achanta Vamsi Subhash <
>> >achanta.va...@flipkart.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> How often does Kafka query zookeeper while producing and consuming?
>> >>
>> >> Ex:
>> >> If there is a single partition to which we produce, and a HighLevel
>> >> consumer running on it, how many read/write queries to zookeeper
>> >> happen?
>> >>
>> >> Extending further, with multiple topics of ~100 partitions each, how
>> >> many zookeeper calls (read/write) will be made?
>> >>
>> >> What is the maximum number of partitions per Kafka cluster that
>> >> zookeeper can handle?
>> >>
>> >> --
>> >> Regards
>> >> Vamsi Subhash
>> >>
>> >
>> >
>> >
>> >--
>> >Regards
>> >Vamsi Subhash
>>
>>
>
>
>-- 
>Regards
>Vamsi Subhash
