Re: "auto offset commit failed"

2017-02-06 Thread Michael Freeman
Next time you successfully auto commit it should be fine. Michael > On 6 Feb 2017, at 12:38, Jon Yeargers wrote: > > This message seems to come and go for various consumers: > > WARN o.a.k.c.c.i.ConsumerCoordinator - Auto offset commit failed for > group : Commit

Re: Kafka consumer offset info lost

2017-01-12 Thread Michael Freeman
If the consumer group has not committed offsets for a while, then Kafka will remove the stored offset. When your consumer reconnects, Kafka no longer has the offset, so it will reprocess from earliest. Michael > On 12 Jan 2017, at 11:13, Mahendra Kariya wrote: > > Hey All, > > We
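The expiry window being described is the broker's offset retention period. A broker config sketch (the 7-day value is illustrative, not from the thread; in 0.9/0.10 brokers the default is 1440 minutes, i.e. 24 hours):

```properties
# server.properties (broker) -- sketch
# Committed offsets for an idle group are discarded after this long.
# Default in 0.9/0.10 is 1440 (24 hours); raise it to survive quiet topics.
offsets.retention.minutes=10080
```

Pairing this with `auto.offset.reset=earliest` on the consumer is what produces the reprocessing behaviour described above once the offset has been expired.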

Re: why did Kafka choose pull instead of push for a consumer ?

2016-09-20 Thread Michael Freeman
Thanks for sharing Radek, great article. Michael > On 17 Sep 2016, at 21:13, Radoslaw Gruchalski wrote: > > Please read this article: > https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying > > –

Re: Kafka consumer group problem

2016-09-17 Thread Michael Freeman
Did you try props.put("group.id", "test"); On Thu, Sep 15, 2016 at 12:55 AM, Joyce Chen wrote: > Hi, > > I created a few consumers that belong to the same group_id, but I noticed > that each consumer get all messages instead of only some of the messages. > > As for the
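To get the queue semantics Joyce is after, every instance must share one `group.id`. A minimal sketch of the consumer properties, using the standard new-consumer config keys (the broker address is an assumption; only the `Properties` construction is shown, since creating the consumer needs a live broker):

```java
import java.util.Properties;

public class GroupConfig {
    // Sketch of new-consumer (0.9+) properties; broker address assumed.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("group.id", "test"); // shared by all members of the group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
    // With a shared group.id, KafkaConsumer instances subscribing to the
    // same topic split its partitions between them; without one, each
    // consumer is its own group and receives every message.
}
```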

Re: Offsets getting lost if no messages sent for a long time

2016-08-23 Thread Michael Freeman
Might be easier to handle duplicate messages as opposed to handling long periods of time without messages. Michael > On 22 Aug 2016, at 15:55, Misra, Rahul wrote: > > Hi, > > Can anybody provide any guidance on the following: > > 1. Given a limited set of groups
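"Handle duplicates" usually means idempotent processing. One sketch, assuming each message carries a unique id (the class and method names are illustrative; in production the seen-id set would live in a durable store, not memory):

```java
import java.util.HashSet;
import java.util.Set;

// Consumer-side de-duplication sketch: skip records whose unique
// message id has already been processed. The in-memory set is an
// assumption for illustration; a real system would persist it.
public class Deduplicator {
    private final Set<String> seen = new HashSet<>();

    /** Returns true the first time an id is seen, false for duplicates. */
    public boolean firstTime(String messageId) {
        return seen.add(messageId);
    }
}
```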

Re: Java 0.9.0.1 Consumer Does not failover

2016-07-14 Thread Michael Freeman
For future reference, the following server setting is needed: offsets.topic.replication.factor=3 Michael > On 14 Jul 2016, at 10:56, Michael Freeman <mikfree...@gmail.com> wrote: > > Anyone have any ideas? Looks like the group coordinator is not failing over. > Or at least not de

Coordinator failover

2016-07-14 Thread Michael Freeman
I'm running a three-broker cluster. Do I need to have offsets.topic.replication.factor=3 set in order for coordinator failover to occur? Michael
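As the follow-up in this thread confirms, yes. A broker config sketch:

```properties
# server.properties on each broker -- sketch.
# Replicate the internal __consumer_offsets topic so the group
# coordinator can fail over when the broker hosting it dies.
offsets.topic.replication.factor=3
```

Note this setting only takes effect when the offsets topic is first created; if `__consumer_offsets` already exists with replication factor 1, its replication factor has to be increased via a partition reassignment.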

Re: Java 0.9.0.1 Consumer Does not failover

2016-07-14 Thread Michael Freeman
Anyone have any ideas? Looks like the group coordinator is not failing over. Or at least not detected by the Java consumer. A new leader is elected so I'm at a loss. Michael > On 13 Jul 2016, at 20:58, Michael Freeman <mikfree...@gmail.com> wrote: > > Hi, > I'm runni

Re: Role of Producer

2016-07-13 Thread Michael Freeman
> > Btw, what is MQ? > > > > -Original Message- > From: Michael Freeman [mailto:mikfree...@gmail.com] > Sent: Wednesday, July 13, 2016 3:36 PM > To: users@kafka.apache.org > Subject: Re: Role of Producer > > Could you write them a client that uses the Kafka

Re: Role of Producer

2016-07-13 Thread Michael Freeman
Could you write them a client that uses the Kafka producer? You could also write some RESTful services that send the data to Kafka. If they use MQ, you could listen to MQ and send to Kafka. On Wed, Jul 13, 2016 at 9:31 PM, Luo, Chao wrote: > Dear Kafka guys, > > I just
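A minimal sketch of the producer side of that client (broker address, topic name, and the `acks=all` choice are assumptions; only the `Properties` construction is shown, since a `KafkaProducer` needs a live broker):

```java
import java.util.Properties;

public class ProducerPropsExample {
    // Sketch of producer properties; broker address is an assumption.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed
        props.put("acks", "all"); // wait for the full in-sync replica set
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
    // Usage (requires kafka-clients on the classpath):
    //   Producer<String, String> p = new KafkaProducer<>(producerProps());
    //   p.send(new ProducerRecord<>("my-topic", "key", "value"));
    //   p.close();
}
```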

Java 0.9.0.1 Consumer Does not failover

2016-07-13 Thread Michael Freeman
Hi, I'm running a Kafka cluster with 3 nodes. I have a topic with a replication factor of 3. When I stop node 1 running kafka-topics.sh shows me that node 2 and 3 have successfully failed over the partitions for the topic. The message producers are still sending messages and I can still

Re: kafka 0.9 offset unknown after cleanup

2016-05-04 Thread Michael Freeman
c. > > Thanks > > Tom Crayford > Heroku Kafka > > On Wednesday, 4 May 2016, Michael Freeman <mikfree...@gmail.com> wrote: > > > Hey Tom, > > Are there any details on the negative side effects of > > increasing the offset ret

Re: kafka 0.9 offset unknown after cleanup

2016-05-04 Thread Michael Freeman
Hey Tom, Are there any details on the negative side effects of increasing the offset retention period? I'd like to increase it but want to be aware of the risks. Thanks Michael > On 4 May 2016, at 05:06, Tom Crayford wrote: > > Jun, > > Yep, you got

Message reconsumed with 'earliest' offset reset 0.9.0.1

2016-04-05 Thread Michael Freeman
Hi, I'm using the 0.9.0.1 consumer with 'earliest' offset reset. After cleanly shutting down the consumers and restarting I see reconsumption of some old messages. The offset of the reconsumed messages is 0. If I'm committing cleanly and shutting down cleanly why is the committed offset
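One common cause of this symptom is relying on auto-commit during shutdown; disabling it and committing explicitly before `close()` narrows the window. A config sketch (group name and broker address are assumptions; the consumer loop is sketched in comments because it needs a live broker):

```java
import java.util.Properties;

public class ManualCommitConfig {
    // Sketch: 'earliest' reset with explicit commits instead of auto-commit.
    static Properties props() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed
        props.put("group.id", "my-group");                // assumed name
        props.put("auto.offset.reset", "earliest");
        props.put("enable.auto.commit", "false"); // commit explicitly instead
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
    // Shutdown pattern (comments only -- requires a broker):
    //   try { while (running) { records = consumer.poll(100);
    //                           process(records); consumer.commitSync(); } }
    //   finally { consumer.commitSync(); consumer.close(); }
}
```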

Re: Why does "unknown" show up in the output when describing a group using the ConsumerGroupCommand?

2016-03-30 Thread Michael Freeman
Was wondering the same. From what I can tell, it shows "unknown" when no committed offset has been recorded for that partition by the consumer. On Mon, Mar 28, 2016 at 12:25 PM, craig w wrote: > When using the ConsumerGroupCommand to describe a group (using > new-consumer, 0.9.0.1)

Re: Retry Message Consumption On Database Failure

2016-03-15 Thread Michael Freeman
last committed offset is still what you expect) > b) otherwise, abort the background processing thread. > > Would that work for your case? It's also worth mentioning that there's a > proposal to add a sticky partition assignor to Kafka, which would make 5.b > less likely.

Re: Retry Message Consumption On Database Failure

2016-03-11 Thread Michael Freeman
t's entirely > necessary. > > On Thu, Mar 10, 2016 at 1:40 AM, Michael Freeman <mikfree...@gmail.com> > wrote: > >> Thanks Christian, >> We would want to retry indefinitely. Or at >> least for say x minutes. If we don't poll

Increasing session.timeout.ms

2016-03-10 Thread Michael Freeman
Hi, I'm trying to set the following on a 0.9.0.1 consumer. session.timeout.ms=12 request.timeout.ms=144000 I get the below error but I can't find any documentation on acceptable ranges. "The session timeout is not within an acceptable range." Logged by AbstractCoordinator Any ideas?
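The acceptable range is set on the broker, not the client: the session timeout must fall between `group.min.session.timeout.ms` and `group.max.session.timeout.ms` (the 0.9.x default maximum is 30000 ms, so large client values are rejected). A broker config sketch; the 150000 ms value is illustrative:

```properties
# server.properties (broker) -- sketch.
# session.timeout.ms must fall inside this range or the broker rejects
# the join with "session timeout is not within an acceptable range".
group.min.session.timeout.ms=6000
group.max.session.timeout.ms=150000
```

The client's `request.timeout.ms` must also be larger than its `session.timeout.ms`, which the posted values already satisfy.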

Re: Retry Message Consumption On Database Failure

2016-03-10 Thread Michael Freeman
ge do you? Can you just retry > and/or backoff-retry with the message you have? And just do the "commit" of > the offset if successfully? > > > > On Wed, Mar 9, 2016 at 2:00 PM, Michael Freeman <mikfree...@gmail.com> > wrote: > >> Hey, >>

Retry Message Consumption On Database Failure

2016-03-09 Thread Michael Freeman
Hey, My team is new to Kafka and we are using the examples found at. http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client We process messages from kafka and persist them to Mongo. If Mongo is unavailable we are wondering how we can re-consume
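The pattern the thread converges on is: retry the database write before committing the offset, so a failed write is never marked consumed. A sketch of that retry step in plain Java (the names, the fixed backoff, and the attempt limit are illustrative, not from the thread):

```java
import java.util.function.Consumer;

// Sketch of retry-until-success processing: the record is handed to a
// handler that may throw (e.g. Mongo unavailable); retry with backoff,
// and only on success is it safe to commit the offset.
public class RetryingProcessor {
    /** Retries handler up to maxAttempts; returns the attempts used. */
    static <T> int processWithRetry(T record, Consumer<T> handler,
                                    int maxAttempts, long backoffMs) {
        for (int attempt = 1; ; attempt++) {
            try {
                handler.accept(record);
                return attempt; // success: safe to commitSync() now
            } catch (RuntimeException e) {
                if (attempt >= maxAttempts) throw e; // give up, no commit
                try {
                    Thread.sleep(backoffMs); // simple fixed backoff
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e; // stop retrying if interrupted
                }
            }
        }
    }
}
```

Note that with the 0.9 consumer, retrying for a long time inside the poll loop stops `poll()` from being called, so the session can time out and trigger a rebalance, which is exactly the trade-off discussed later in this thread.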