Re: When a broker down, Producer LOST messages!

2018-03-01 Thread 许志峰
from 2 of your 3 brokers with the 'acks=all' property in use, making your topic resilient. Hope this helps, Tom Aley thomas.a...@ibm.com From: "许志峰" <zhifengx...@gmail.com> To: users@kafka.apache.org Date: 01/03/2018 08:59

Re: When a broker down, Producer LOST messages!

2018-03-01 Thread Thomas Aley
Hope this helps, Tom Aley thomas.a...@ibm.com From: "许志峰" <zhifengx...@gmail.com> To: users@kafka.apache.org Date: 01/03/2018 08:59 Subject: When a broker down, Producer LOST messages! Hi all, I have a kafka cluster with 3 nodes: node1, node2, node3

When a broker down, Producer LOST messages!

2018-03-01 Thread 许志峰
Hi all, I have a kafka cluster with 3 nodes: node1, node2, node3. *The kafka version is 0.8.2.1, which I cannot change!* A producer writes messages to kafka; its code framework is like this in pseudo-code: Properties props = new Properties(); props.put("bootstrap.servers",
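The pseudo-code is cut off at the first property. As a rough illustration of the configuration the rest of the thread converges on (make the producer wait for all in-sync replicas so an acknowledged write survives a single broker failure), here is a minimal sketch assuming the new Java producer that ships with 0.8.2.1; the broker list and topic name are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092"); // placeholder hosts
props.put("acks", "-1");    // wait for all in-sync replicas (equivalent to acks=all)
props.put("retries", "3");  // retry transient errors such as a leader change
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("my-topic", "key", "value"), (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace(); // a failed send surfaces here instead of being silently lost
    }
});
producer.close();

This only helps if the topic itself has a replication factor greater than 1; with a single replica, the broker acknowledges as soon as the leader has the message.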

Re: Lost messages and messed up offsets

2017-12-06 Thread Tom van den Berge
This problem was solved by upgrading from 0.10 to 0.11 (broker + client). Thanks for your feedback. On Thu, Nov 30, 2017 at 10:03 AM, Tom van den Berge < tom.vandenbe...@gmail.com> wrote: > The consumers are using default settings, which means that > enable.auto.commit=true and

Re: Lost messages and messed up offsets

2017-11-30 Thread Thakrar, Jayesh
Can you also check if you have partition leaders flapping or changing rapidly? Also, look at the following settings in your client configs: max.partition.fetch.bytes, fetch.max.bytes, and receive.buffer.bytes. We had a similar situation in our environment when the brokers were flooded with data. The
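For reference, a rough sketch of where those settings live on the consumer side; the values shown are roughly the stock defaults and purely illustrative, and fetch.max.bytes only exists on 0.11+ clients (matching the upgrade that resolved this thread):

import java.util.Properties;

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092");     // placeholder
props.put("group.id", "my-group");                  // placeholder
props.put("max.partition.fetch.bytes", "1048576");  // max data returned per partition per fetch
props.put("fetch.max.bytes", "52428800");           // cap on the whole fetch response (0.11+ clients only)
props.put("receive.buffer.bytes", "65536");         // TCP receive buffer used for fetches
// pass props to the KafkaConsumer constructor as usual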

Re: Lost messages and messed up offsets

2017-11-30 Thread Tom van den Berge
The consumers are using default settings, which means that enable.auto.commit=true and auto.commit.interval.ms=5000. I'm not committing manually; just consuming messages. On Thu, Nov 30, 2017 at 1:09 AM, Frank Lyaruu wrote: > Do you commit the received messages? Either by

Re: [EXTERNAL] - Lost messages and messed up offsets

2017-11-30 Thread Tom van den Berge
From: Tom van den Berge [mailto:tom.vandenbe...@gmail.com] Sent: 29 November 2017 17:16 To: users@kafka.apache.org Subject: [EXTERNAL] - Lost messages and messed up offsets. I'm using Kafka 0.10.0. I'm reading messages from a single topic (20 partitions), using 4 con

Re: Lost messages and messed up offsets

2017-11-29 Thread Frank Lyaruu
Do you commit the received messages? Either by doing it manually or setting enable.auto.commit and auto.commit.interval.ms? On Wed, Nov 29, 2017 at 11:15 PM, Tom van den Berge < tom.vandenbe...@gmail.com> wrote: > I'm using Kafka 0.10.0. > > I'm reading messages from a single topic (20
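To make the two options concrete, here is a minimal sketch of the manual route (commit only after the records have been handled); the broker, group, and topic names are placeholders and process() is a hypothetical handler:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092");
props.put("group.id", "my-group");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("enable.auto.commit", "false");   // no background commits
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        process(record);                    // hypothetical application logic
    }
    consumer.commitSync();                  // commit offsets only after the batch is processed
}

The alternative is to leave enable.auto.commit=true, in which case offsets are committed in the background every auto.commit.interval.ms.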

RE: [EXTERNAL] - Lost messages and messed up offsets

2017-11-29 Thread Isabelle Giguère
icienne et développeur Java _ Open Text The Content Experts. -----Original Message----- From: Tom van den Berge [mailto:tom.vandenbe...@gmail.com] Sent: 29 November 2017 17:16 To: users@kafka.apache.org Subject: [EXTERNAL] - Lost messages and messed up offsets. I'm using Kafka 0.10.0. I'm readin

Lost messages and messed up offsets

2017-11-29 Thread Tom van den Berge
I'm using Kafka 0.10.0. I'm reading messages from a single topic (20 partitions), using 4 consumers (one group), using a standard java consumer with default configuration, except for the key and value deserializer, and a group id; no other settings. We've been experiencing a serious problem a
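For context, the configuration described above amounts to roughly the following (a sketch; the group and broker names are invented). With nothing else set, enable.auto.commit=true, auto.commit.interval.ms=5000, and auto.offset.reset=latest apply, which is what the replies above dig into:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092");   // placeholder
props.put("group.id", "my-group");                // shared by all 4 consumer instances
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// everything else at defaults: enable.auto.commit=true, auto.commit.interval.ms=5000, auto.offset.reset=latest
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
// each of the 4 instances subscribes to the same 20-partition topic and polls in a loop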

Re: Retrieving lost messages produced while the consumer was down.

2015-07-21 Thread Ewen Cheslack-Postava
Since you mentioned consumer groups, I'm assuming you're using the high level consumer? Do you have auto.commit.enable set to true? It sounds like when you start up you are always getting the auto.offset.reset behavior, which indicates you don't have any offsets committed. By default, that
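Ewen's questions map onto the old high-level (ZooKeeper-based) consumer configuration; a rough sketch of the relevant properties, with placeholder names, might look like this:

import java.util.Properties;
// these properties would be handed to kafka.consumer.ConsumerConfig and
// kafka.consumer.Consumer.createJavaConsumerConnector(...)

Properties props = new Properties();
props.put("zookeeper.connect", "zk1:2181");     // placeholder
props.put("group.id", "my-group");              // placeholder
props.put("auto.commit.enable", "true");        // periodically commit consumed offsets to ZooKeeper
props.put("auto.commit.interval.ms", "60000");  // how often that commit happens
props.put("auto.offset.reset", "smallest");     // used ONLY when no committed offset exists:
                                                // "smallest" = start from the beginning,
                                                // "largest" (the default) = only new messages

If the consumer always shows the auto.offset.reset behavior on startup, offsets are not being committed, which is exactly what Ewen is pointing at.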

Retrieving lost messages produced while the consumer was down.

2015-07-21 Thread Tomas Niño Kehoe
Hi, We've been using Kafka for a couple of months, and now we're trying to write a simple application using the ConsumerGroup to fully understand Kafka. The producer writes data continually, and our consumer occasionally needs to be restarted. However, once the program is brought back up,

Re: lost messages -?

2015-03-26 Thread Victor L
Where's this tool (DumpLogSegments) in the Kafka distro? Is it a Java class in the kafka jar, or is it a third-party binary? Thank you, On Wed, Mar 25, 2015 at 1:11 PM, Mayuresh Gharat gharatmayures...@gmail.com wrote: DumpLogSegments will give you output something like this: offset: 780613873770

Re: lost messages -?

2015-03-26 Thread Harsha
Victor, It's under kafka.tools.DumpLogSegments; you can use kafka-run-class to execute it. -- Harsha On March 26, 2015 at 5:29:32 AM, Victor L (vlyamt...@gmail.com) wrote: Where's this tool (DumpLogSegments) in the Kafka distro? Is it a Java class in the kafka jar, or is it a third-party binary?

Re: lost messages -?

2015-03-25 Thread tao xiao
You can use kafka-console-consumer to consume the topic from the beginning: *kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning* On Thu, Mar 26, 2015 at 12:17 AM, Victor L vlyamt...@gmail.com wrote: Can someone let me know how to dump the contents of topics? I have

Re: lost messages -?

2015-03-25 Thread Mayuresh Gharat
You can use the DumpLogSegments tool. Thanks, Mayuresh On Wed, Mar 25, 2015 at 9:17 AM, Victor L vlyamt...@gmail.com wrote: Can someone let me know how to dump the contents of topics? I have producers sending messages to 3 brokers but about half of them don't seem to be consumed. I suppose they

lost messages -?

2015-03-25 Thread Victor L
Can someone let me know how to dump the contents of topics? I have producers sending messages to 3 brokers, but about half of them don't seem to be consumed. I suppose they are getting stuck in queues, but how can I figure out where? Thanks,

Re: lost messages -?

2015-03-25 Thread Mayuresh Gharat
DumpLogSegments will give you output something like this: offset: 780613873770 isvalid: true payloadsize: 8055 magic: 1 compresscodec: GZIPCompressionCodec. If this is what you want, you can use the tool to detect whether the messages are getting to your brokers. Console-Consumer will output the

Re: Lost messages during leader election

2014-07-28 Thread Guozhang Wang
Jun, Jad, I think in this case data loss can still happen, since the replication factor was previously one, and in handling the produce requests, if the server decides that all the produced partitions have a replication factor of 1, it will also directly send back the response instead of putting the

Re: Lost messages during leader election

2014-07-28 Thread Guozhang Wang
Hi Jad, Just to clarify, you also see data loss when you created the topic with replication factor 2, with two replicas running, and after an auto leader election was triggered? If that is the case, could you attach the logs of all involved brokers here? For your second question, KAFKA-1211 is designed to

Re: Lost messages during leader election

2014-07-25 Thread Guozhang Wang
Hi Jad, A follower replica can join the ISR only when it has caught up to the HW, which in this case would be the end of the leader replica. So in that scenario there should still be no data loss. On Thu, Jul 24, 2014 at 7:48 PM, Jad Naous jad.na...@appdynamics.com wrote: Actually, is the following

Re: Lost messages during leader election

2014-07-25 Thread Jad Naous
Thank you so much for your explanation and your patience! On Fri, Jul 25, 2014 at 10:08 AM, Guozhang Wang wangg...@gmail.com wrote: The HW is advanced to the offset up to which messages have been committed to all replicas. It is only updated by the leader, when it receives the fetch requests

Re: Lost messages during leader election

2014-07-25 Thread Jad Naous
Hi Guozhang, Yes, I think they are related. It seems odd to me that there should be any truncation at all since that is always an opportunity for data loss. It seems like we would want to avoid that at all costs, assuming we uphold the invariant that messages committed to an offset on any

Lost messages during leader election

2014-07-24 Thread Jad Naous
Hi, I have a test that continuously sends messages to one broker, brings up another broker, and adds it as a replica for all partitions, with it being the preferred replica for some. I have auto.leader.rebalance.enable=true, so replica election gets triggered. Data is being pumped to the old
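On the producer side, the mitigation discussed downthread is acks=-1 (wait for all in-sync replicas). A minimal sketch using the old 0.8-era producer API, with a placeholder broker list and topic:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("request.required.acks", "-1");   // leader acks only after all in-sync replicas have the message
props.put("message.send.max.retries", "3"); // retry across a leader change

Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
producer.send(new KeyedMessage<String, String>("my-topic", "some-key", "some-value"));
producer.close();

Even so, as the rest of the thread notes, an ack only covers the replicas currently in the ISR, so triggering preferred-leader election before the preferred replica has rejoined the ISR can still discard acknowledged messages.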

Re: Lost messages during leader election

2014-07-24 Thread Guozhang Wang
Hi Jad, Thanks for bringing this up. It seems to be a valid issue: in the current auto leader rebalancer thread's logic, if the imbalance ratio threshold is violated, it will trigger the preferred leader election whether or not the preferred leader is in the ISR. Guozhang On Thu, Jul 24,

Re: Lost messages during leader election

2014-07-24 Thread Jad Naous
Hi Guozhang, Isn't it also possible to lose messages even if the preferred leader is in the ISR, when the current leader is ahead by a few messages, but the preferred leader still has not caught up? Thanks, Jad. On Thu, Jul 24, 2014 at 4:59 PM, Guozhang Wang wangg...@gmail.com wrote: Hi

Re: Lost messages during leader election

2014-07-24 Thread Jad Naous
Ah yes. OK, thanks! So it seems like we should only manually trigger re-election after seeing that all replicas are in the ISR. Is there a bug to follow this up? Thanks, Jad. On Thu, Jul 24, 2014 at 6:27 PM, Guozhang Wang wangg...@gmail.com wrote: With ack=-1 all messages produced to leader

Re: Lost messages during leader election

2014-07-24 Thread Jad Naous
Actually, is the following scenario possible? - We start off with only 1 replica (the leader) - the producer continuously sends messages - a new replica (the preferred one) comes online - it becomes an ISR just after an ack is sent to the producer - the new replica gets elected as the new leader,

Re: Lost messages during leader election

2014-07-24 Thread Ashwin Jayaprakash
I'm still not sure I understand after his reply - http://qnalist.com/questions/5034216/lost-messages-during-leader-election - I really need a tutorial on Kafka. I don't understand why they made it so complicated when Cassandra and HBase are similar but simpler. * Ashwin Jayaprakash