Re: Doubts in Kafka

2019-01-09 Thread Peter Levart
Hi Aruna, On 1/10/19 8:19 AM, aruna ramachandran wrote: I am using keyed partitions with 1000 partitions, so I need to create 1000 consumers, because consumer groups and rebalancing do not work in the case of manually assigned consumers. Is there any alternative for the above problem?
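
A minimal sketch of the consumer-group alternative being discussed here: instead of assign()-ing all 1000 partitions by hand, every consumer instance subscribes with the same group.id and lets the group coordinator rebalance partitions across however many instances are running. Topic name, group id and broker address below are placeholders.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", "device-processors");         // same group.id on every instance
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() (rather than assign()) enables group management, so far
                // fewer than 1000 consumers can share the 1000 partitions and Kafka
                // redistributes them automatically when instances come and go.
                consumer.subscribe(Collections.singletonList("device-events"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d key=%s value=%s%n",
                                record.partition(), record.key(), record.value());
                    }
                }
            }
        }
    }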

Doubts in Kafka

2019-01-09 Thread aruna ramachandran
I am using keyed partitions with 1000 partitions, so I need to create 1000 consumers, because consumer groups and rebalancing do not work in the case of manually assigned consumers. Is there any alternative for the above problem?

Re: Configuration guidelines for a specific use-case

2019-01-09 Thread Gioacchino Vino
Hi Ryanne, I just forgot to insert the "linger.ms=0" configuration. I got this result: 5000 records sent, 706793.701055 records/sec (67.41 MB/sec), 7.29 ms avg latency, 1245.00 ms max latency, 0 ms 50th, 3 ms 95th, 197 ms 99th, 913 ms 99.9th. It's pretty good, but I would like to impr
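
For reference, a sketch of where linger.ms=0 sits in a producer configuration (the numbers above look like kafka-producer-perf-test output, so the actual run was presumably the CLI tool; broker address and topic name here are placeholders).

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LowLatencyProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
            props.put("linger.ms", "0");       // do not wait to fill batches before sending
            props.put("batch.size", "16384");  // default batch size; tuned together with linger.ms
            props.put("acks", "1");            // illustrative latency/durability trade-off
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("perf-test-topic", "key", "value"));
            }
        }
    }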

Zookeeper timeout message in logs has value < configured timeout

2019-01-09 Thread Mark Anderson
Hi, I'm experimenting with the value of zookeeper.session.timeout.ms in Kafka 2.0.1. In my broker logs I see the following message: [2019-01-09 15:12:01,246] WARN Client session timed out, have not heard from server in 1369ms for sessionid 0x200d78d415e0002 (org.apache.zookeeper.ClientCnxn) Howe
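
For context, the setting under discussion is a broker-side configuration. A minimal server.properties sketch with illustrative values (6000 ms is the Kafka 2.0.x default, not a recommendation):

    # server.properties (illustrative values only)
    zookeeper.connect=localhost:2181
    # How long ZooKeeper may go without hearing from the broker before it
    # expires the broker's session.
    zookeeper.session.timeout.ms=6000
    # Maximum time the broker waits when establishing the ZooKeeper connection.
    zookeeper.connection.timeout.ms=6000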

Kafka doubts

2019-01-09 Thread aruna ramachandran
Hi, I don't know the device count; new devices may be added to the system over time. How can I initially configure the partitions by key (device id)? The device count may grow up to 1 million, so how does Kafka scale to meet that need?
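
A sketch of the usual approach, assuming the topic is created up front with more partitions than are currently needed: device ids are used as record keys and the default partitioner hashes them onto the existing partitions, so new devices require no configuration change. Topic name, partition count and replication factor below are illustrative.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateDeviceTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder broker

            try (AdminClient admin = AdminClient.create(props)) {
                // The partition count is an upper bound on consumer parallelism, not on
                // the number of keys: a million device-id keys simply hash onto these
                // partitions, so the topic does not need one partition per device.
                NewTopic topic = new NewTopic("device-events", 100, (short) 3);
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }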

Error while creating ephemeral at /brokers/ids/BROKER_ID

2019-01-09 Thread Ashish Choudhary
Hi All, I am using Kafka 2.12.2.1.0. While restarting Kafka, I am getting "Error while creating ephemeral at /brokers/ids/BROKER_ID". I observed that a similar issue, https://issues.apache.org/jira/browse/KAFKA-7165, has been fixed in version 2.2.0. Meanwhile, can anyone please suggest any workaround
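
Not a fix, but a quick diagnostic while chasing this: the ZooKeeper shell bundled with Kafka can show whether a stale ephemeral registration still exists for the broker id (broker id and ZooKeeper address below are placeholders).

    # If this znode still exists after the broker process has stopped, the old
    # ZooKeeper session has not expired yet, which is consistent with the error above.
    bin/zookeeper-shell.sh localhost:2181 get /brokers/ids/0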

Re: Dynamic Partitioning

2019-01-09 Thread Dimitry Lvovsky
One approach you can take is to set an upper bound on the number of partitions a priori. Imagine your key was the username. If you had 2 partitions in your topic and 4 users writing messages, then Kafka would split these messages between the two partitions of the topic (assuming the usernames are unique). Fo
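
A small sketch of that idea, using the same murmur2-modulo computation Kafka's default partitioner applies to keyed records; the usernames and partition count are made up for illustration.

    import java.nio.charset.StandardCharsets;
    import org.apache.kafka.common.utils.Utils;

    public class KeyToPartitionDemo {
        public static void main(String[] args) {
            int numPartitions = 2;  // the upper bound chosen a priori
            String[] usernames = {"alice", "bob", "carol", "dave"};  // hypothetical users
            for (String user : usernames) {
                byte[] keyBytes = user.getBytes(StandardCharsets.UTF_8);
                // Default-partitioner style mapping: murmur2 hash of the serialized key,
                // forced positive, modulo the partition count.
                int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
                System.out.printf("user %s -> partition %d%n", user, partition);
            }
        }
    }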