I have started multiple consumers with some time delay between them. Even after a long period of time, the consumers that joined later are not getting any partitions assigned; one consumer is loaded with all of them. I don't see any configuration parameter to change this behavior.
Did anyone face this issue?
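For the 0.8.x high-level consumer, partitions are only redistributed across consumers that share the same group.id, and a consumer gets zero partitions whenever the topic has fewer partitions than there are streams in the group. A minimal sketch of the relevant setup (the group id, ZooKeeper address, and topic name below are placeholders, not values from the original post):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class RebalancingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Every consumer that should share the topic's partitions must use
        // the SAME group.id; consumers in different groups each receive ALL
        // partitions independently.
        props.put("group.id", "shared-group");             // placeholder
        props.put("zookeeper.connect", "localhost:2181");  // placeholder

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream per consumer here; a stream only receives partitions if
        // the topic has at least as many partitions as there are streams
        // across the whole group.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("my-topic", 1);                  // placeholder topic
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(topicCountMap);
    }
}
```

If the first consumer holds everything and later ones get nothing, the usual causes are a single-partition topic or consumers that do not share the same group.id; rebalancing itself needs no extra parameter.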
I am using the following code to help the Kafka stream listener threads exit the blocking call to hasNext() on the ConsumerIterator. But the threads never exit when they receive the allDone() signal. I am not sure whether I am making a mistake. Please let me know if this is the right approach.
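In the 0.8.x high-level consumer, a custom flag such as allDone() cannot unblock hasNext() by itself: the iterator only wakes up when the ConsumerConnector that created the stream is shut down (hasNext() then returns false), or when consumer.timeout.ms is set (hasNext() then throws ConsumerTimeoutException). A sketch of a listener built on that behavior (a hypothetical class, not the original poster's code):

```java
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;

public class StoppableListener implements Runnable {
    private final KafkaStream<byte[], byte[]> stream;

    public StoppableListener(KafkaStream<byte[], byte[]> stream) {
        this.stream = stream;
    }

    @Override
    public void run() {
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        try {
            // hasNext() blocks until a message arrives, the connector is shut
            // down (returns false), or consumer.timeout.ms elapses (throws
            // ConsumerTimeoutException).
            while (it.hasNext()) {
                byte[] payload = it.next().message();
                // ... process payload ...
            }
        } catch (ConsumerTimeoutException e) {
            // Reached only when consumer.timeout.ms is configured.
        }
    }
}
```

The thread that signals shutdown should call shutdown() on the ConsumerConnector itself rather than only setting a flag; that is what releases the blocked hasNext() call.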
I am just wondering: if I have the number of replicas as 3, and min.insync.replicas as 1, does a read require a read quorum, or does the leader always serve the read request?
Thanks & Regards,
:22 PM, Gomathivinayagam Muthuvinayagam <sankarm...@gmail.com> wrote:
I see a lot of interesting features in the Kafka 0.8.2 beta. I am just wondering when it will be released. Is there any timeline for that?
Thanks & Regards,
--
Thanks,
Ewen
I am using Kafka 0.8.2 with Kafka-based storage for offsets. Whenever I restart a consumer (high-level consumer API), it does not consume the messages that were posted while the consumer was down.
I am using the following consumer properties
Properties props = new Properties();
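The properties in the original message are truncated after this line. A hedged sketch of the settings typically involved in Kafka-based offset storage on 0.8.2 (all values below are placeholders, not the poster's actual configuration):

```java
import java.util.Properties;

public class ConsumerProps {
    public static Properties build() {
        Properties props = new Properties();
        // Must stay the same across restarts, or the committed offsets
        // will not be found for the group.
        props.put("group.id", "my-consumer-group");        // placeholder
        props.put("zookeeper.connect", "localhost:2181");  // placeholder
        // Store offsets in Kafka rather than ZooKeeper (available in 0.8.2+).
        props.put("offsets.storage", "kafka");
        props.put("dual.commit.enabled", "false");
        // What to do when NO committed offset is found for the group:
        // "smallest" starts from the beginning instead of jumping to the end.
        props.put("auto.offset.reset", "smallest");
        return props;
    }
}
```

If the group id changes between runs, or auto.offset.reset is left at "largest", a restarted consumer will skip past messages published while it was down.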
Are you using the same group Id all the time?
Jiangjie (Becket) Qin
On 4/29/15, 3:17 PM, Gomathivinayagam Muthuvinayagam <sankarm...@gmail.com> wrote:
I have just posted the following question on Stack Overflow. Could you answer it?
I would like to use the Kafka high-level consumer API and, at the same time, disable auto commit of offsets. I tried to achieve this through the following steps.
1) Set auto.commit.enable to false
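The steps above can be sketched as follows, assuming the 0.8.2 high-level consumer API; the group id and ZooKeeper address are placeholders. With auto commit disabled, nothing is committed until commitOffsets() is called explicitly:

```java
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualCommitSetup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("group.id", "my-consumer-group");        // placeholder
        props.put("zookeeper.connect", "localhost:2181");  // placeholder
        props.put("auto.commit.enable", "false");          // step 1: disable auto commit
        props.put("offsets.storage", "kafka");             // keep offsets in Kafka

        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... consume messages from the streams ...

        // Commit explicitly once a batch has been processed; with
        // auto.commit.enable=false this is the only point at which
        // offsets are persisted.
        consumer.commitOffsets();
        consumer.shutdown();
    }
}
```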
I am trying to set up a cluster where messages are never lost once they are published. Say I have 3 brokers, I configure the replicas to be 3 as well, and I consider the maximum number of failures to be 1; then I can achieve the above requirement. But when I post a message, how do I prevent Kafka from
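For no-loss publishing on 0.8.2, the producer must wait for the full in-sync replica set, combined with broker-side limits on how small that set may shrink. A sketch of the producer-side configuration (broker addresses are placeholders):

```java
import java.util.Properties;

public class DurableProducerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put("metadata.broker.list",
                  "broker1:9092,broker2:9092,broker3:9092");  // placeholders
        // -1 ("all"): wait until every in-sync replica has acknowledged
        // the write before the send is considered successful.
        props.put("request.required.acks", "-1");
        return props;
    }
    // Broker-side settings (server.properties), not set here:
    //   min.insync.replicas=2                 -> reject writes unless at
    //                                            least 2 replicas are in sync
    //   unclean.leader.election.enable=false  -> never elect an out-of-sync
    //                                            replica as leader
}
```

Together these mean a message acknowledged to the producer survives the loss of any single broker, matching the 3-replica / 1-failure setup described above.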
I am trying to commit offset requests in a background thread, and I am able to commit them so far. I am using the high-level consumer API.
So if I just use the high-level consumer API, with auto commit disabled and Kafka as the storage for offsets, will the high-level consumer API automatically use the committed offsets when it restarts?
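A background commit like the one described can be sketched with a scheduled executor driving the high-level consumer's commitOffsets() call; this is an illustrative class, not the poster's code, and the interval is arbitrary:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import kafka.javaapi.consumer.ConsumerConnector;

public class BackgroundCommitter {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    // Commit the consumer's current offsets every intervalMs milliseconds
    // from a background thread instead of relying on auto commit.
    public void start(final ConsumerConnector consumer, long intervalMs) {
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                consumer.commitOffsets();
            }
        }, intervalMs, intervalMs, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }
}
```

On restart, a high-level consumer with offsets.storage=kafka resumes from the last committed offset for its group, so offsets committed by this background thread are the ones it picks up.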