300 partitions is on the lower side; that surely isn't the root cause.
What about the network bandwidth usage on your nodes?
Can they reach ZooKeeper?
Are you running any partition-rebalancing jobs (or reassigning
partitions) in parallel?
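The reachability check above is easy to script. A minimal Python sketch, run from each broker host; the hostnames below are placeholders (not from this thread), and 2181 is ZooKeeper's default client port:

```python
import socket

def is_reachable(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or DNS failure
        return False

# Placeholder ensemble -- substitute your own ZooKeeper hosts.
for host in ("zk1.example.com", "zk2.example.com", "zk3.example.com"):
    status = "ok" if is_reachable(host, 2181, timeout=1.0) else "UNREACHABLE"
    print(f"{host}:2181 {status}")
```

A plain TCP connect only proves the port is open; for a deeper health check, ZooKeeper's `ruok` four-letter command (`echo ruok | nc zk1.example.com 2181`) should answer `imok`.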
On 2 March 2017 at 10:57, Jun MA wrote:
> Hi,
>
There is an awesome tool called "kafka-manager", which was open-sourced by
Yahoo.
https://github.com/yahoo/kafka-manager
On 21 April 2016 at 08:07, Rajiv Kurian wrote:
> The kafka-topics.sh tool lists topics and where the partitions are. Is
> there a similar tool where I could give it a broker
> 'foo'].get_producer(partitioner=pykafka.partitioners.hashing_partitioner)
> prod.produce([('p_key1', 'foo'), ('p_key2', 'foo'), ('p_key1', 'baz')])
>
>
>
> Cheers,
>
> Keith
>
>
> On Wed, Jun 10, 2015 a
Thanks,
a question: does it support a keyed producer?
I mean, how does it work when I have multiple partitions? Will it be able to
identify the partition based on the key that I pass?
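The `hashing_partitioner` in the quoted example does exactly that: it maps each key to a partition deterministically. A simplified sketch of the idea (not pykafka's exact implementation; crc32 is used here for a hash that is stable across process restarts):

```python
import zlib

def hashing_partitioner(key, partitions):
    """Deterministically map a message key to one of `partitions`.

    Same idea as pykafka.partitioners.hashing_partitioner: identical keys
    always land on the same partition, so per-key ordering is preserved.
    """
    return partitions[zlib.crc32(key) % len(partitions)]

partitions = [0, 1, 2]          # pretend the topic has three partitions
a = hashing_partitioner(b"p_key1", partitions)
b = hashing_partitioner(b"p_key1", partitions)
print(a == b)                   # prints True: same key -> same partition
```

So with multiple partitions, messages sharing a key (e.g. `p_key1` above) always go to the same partition, while different keys spread across the topic.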
On 9 June 2015 at 00:54, Keith Bourgoin wrote:
> Hi Kafka folks,
>
> I'm happy to announce the 1.0 release of PyKafka
>
I ran into a similar issue. I configured 3 disks, but partitions were
allocated to only 2 of them (disk2 and disk3). Then I found that the left-out
disk (disk1) was already hosting a large number of partitions from
different topics. So maybe partition allocation happens based on "how many
partitions
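That matches the behavior described: the broker places a new partition in the configured log directory with the fewest existing logs. A sketch of that selection rule, assuming allocation goes by partition count rather than free disk space (paths are made up):

```python
def pick_log_dir(partition_counts):
    """Pick the log directory with the fewest existing partitions.

    Kafka's LogManager does roughly this when a new partition is created:
    it counts the logs already present under each log.dirs entry and puts
    the new log in the least-loaded one. A disk that already hosts many
    partitions from other topics therefore receives few or none.
    """
    return min(partition_counts, key=partition_counts.get)

# disk1 already holds many partitions from other topics:
counts = {"/data/disk1": 40, "/data/disk2": 5, "/data/disk3": 7}
print(pick_log_dir(counts))  # -> /data/disk2
```

Note the count is per directory, not per byte, so a directory full of small partitions can still be skipped in favor of one holding a few large ones.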