The next time you successfully auto-commit, it should be fine.
Michael
> On 6 Feb 2017, at 12:38, Jon Yeargers wrote:
>
> This message seems to come and go for various consumers:
>
> WARN o.a.k.c.c.i.ConsumerCoordinator - Auto offset commit failed for
> group : Commit
If the topic has seen no traffic for a while, Kafka will eventually remove the
stored offset. When your consumer reconnects, Kafka no longer has a committed
offset for it, so the consumer will reprocess from earliest.
Michael
> On 12 Jan 2017, at 11:13, Mahendra Kariya wrote:
>
> Hey All,
>
> We
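The retention behaviour described above is controlled broker-side. A sketch of the relevant server.properties entry (the value here is illustrative, not a recommendation; the default has varied between releases):

```properties
# Broker-side: how long committed offsets are retained once a group stops
# committing (older releases defaulted to 1440 minutes, i.e. 24 hours)
offsets.retention.minutes=10080
```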
Thanks for sharing Radek, great article.
Michael
> On 17 Sep 2016, at 21:13, Radoslaw Gruchalski wrote:
>
> Please read this article:
> https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying
>
> –
Did you try props.put("group.id", "test");
On Thu, Sep 15, 2016 at 12:55 AM, Joyce Chen wrote:
> Hi,
>
> I created a few consumers that belong to the same group_id, but I noticed
> that each consumer gets all messages instead of only some of the messages.
>
> As for the
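For partition balancing to kick in, every consumer must subscribe (rather than assign partitions manually) and share the identical group.id, and the topic needs at least as many partitions as consumers. A minimal config sketch (the group name and broker address are illustrative):

```properties
# All consumers that should share the workload must use the same group.id
group.id=my-consumer-group
bootstrap.servers=localhost:9092
```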
It might be easier to handle duplicate messages than to handle long periods of
time without messages.
Michael
> On 22 Aug 2016, at 15:55, Misra, Rahul wrote:
>
> Hi,
>
> Can anybody provide any guidance on the following:
>
> 1. Given a limited set of groups
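If you take the at-least-once route suggested above, duplicate redeliveries can be filtered downstream by remembering the highest offset processed per partition. A minimal in-memory sketch (the class and method names are mine, not from this thread; a production version would persist the watermarks):

```java
import java.util.HashMap;
import java.util.Map;

// Tracks the highest offset processed per (topic, partition); anything at or
// below that watermark is a duplicate redelivery and can be skipped.
public class DuplicateFilter {
    private final Map<String, Long> watermark = new HashMap<>();

    public boolean isDuplicate(String topic, int partition, long offset) {
        String key = topic + "-" + partition;
        Long seen = watermark.get(key);
        if (seen != null && offset <= seen) {
            return true;            // already processed this offset
        }
        watermark.put(key, offset); // advance the watermark
        return false;
    }
}
```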
For future reference, the server needs the following setting:
offsets.topic.replication.factor=3
Michael
> On 14 Jul 2016, at 10:56, Michael Freeman <mikfree...@gmail.com> wrote:
>
> Anyone have any ideas? Looks like the group coordinator is not failing over.
> Or at least not de
I'm running a three broker cluster.
Do I need to have offsets.topic.replication.factor=3 set in order for
coordinator failover to occur?
Michael
Anyone have any ideas? Looks like the group coordinator is not failing over. Or
at least not detected by the Java consumer.
A new leader is elected so I'm at a loss.
Michael
> On 13 Jul 2016, at 20:58, Michael Freeman <mikfree...@gmail.com> wrote:
>
> Hi,
> I'm runni
> Btw, what is MQ?
>
>
>
> -Original Message-
> From: Michael Freeman [mailto:mikfree...@gmail.com]
> Sent: Wednesday, July 13, 2016 3:36 PM
> To: users@kafka.apache.org
> Subject: Re: Role of Producer
>
> Could you write them a client that uses the Kafka
Could you write them a client that uses the Kafka producer?
You could also write some restful services that send the data to kafka.
If they use MQ you could listen to MQ and send to Kafka.
On Wed, Jul 13, 2016 at 9:31 PM, Luo, Chao wrote:
> Dear Kafka guys,
>
> I just
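One way to shield other teams from the Kafka API, as suggested above, is a thin client façade they call instead of the producer directly. A sketch with the actual send injected as a callback (all names are illustrative; in production the callback would delegate to a real KafkaProducer):

```java
import java.util.function.BiConsumer;

// Thin wrapper other teams can code against; the BiConsumer stands in for a
// real KafkaProducer send(topic, value) in production.
public class EventClient {
    private final BiConsumer<String, String> sender;
    private final String topic;

    public EventClient(String topic, BiConsumer<String, String> sender) {
        this.topic = topic;
        this.sender = sender;
    }

    public void publish(String payload) {
        sender.accept(topic, payload); // delegates to the underlying producer
    }
}
```

This keeps the Kafka dependency in one place, so swapping serializers or producer configs never touches the callers.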
Hi,
I'm running a Kafka cluster with 3 nodes.
I have a topic with a replication factor of 3.
When I stop node 1 running kafka-topics.sh shows me that node 2 and 3 have
successfully failed over the partitions for the topic.
The message producers are still sending messages and I can still
c.
>
> Thanks
>
> Tom Crayford
> Heroku Kafka
>
> On Wednesday, 4 May 2016, Michael Freeman <mikfree...@gmail.com> wrote:
>
> > Hey Tom,
> > Are there any details on the negative side effects of
> > increasing the offset ret
Hey Tom,
Are there any details on the negative side effects of
increasing the offset retention period? I'd like to increase it but want to be
aware of the risks.
Thanks
Michael
> On 4 May 2016, at 05:06, Tom Crayford wrote:
>
> Jun,
>
> Yep, you got
Hi,
I'm using the 0.9.0.1 consumer with 'earliest' offset reset.
After cleanly shutting down the consumers and restarting I see reconsumption of
some old messages.
The offset of the reconsumed messages is 0.
If I'm committing cleanly and shutting down cleanly why is the committed offset
I was wondering the same. From what I can tell, it shows 'unknown' when no
committed offset has been recorded for that partition by the consumer.
On Mon, Mar 28, 2016 at 12:25 PM, craig w wrote:
> When using the ConsumerGroupCommand to describe a group (using
> new-consumer, 0.9.0.1)
last committed offset is still what you expect)
> b) otherwise, abort the background processing thread.
>
> Would that work for your case? It's also worth mentioning that there's a
> proposal to add a sticky partition assignor to Kafka, which would make 5.b
> less likely.
t's entirely
> necessary.
>
> On Thu, Mar 10, 2016 at 1:40 AM, Michael Freeman <mikfree...@gmail.com>
> wrote:
>
>> Thanks Christian,
>> We would want to retry indefinitely. Or at
>> least for say x minutes. If we don't poll
Hi,
I'm trying to set the following on a 0.9.0.1 consumer.
session.timeout.ms=12
request.timeout.ms=144000
I get the below error but I can't find any documentation on acceptable ranges.
"The session timeout is not within an acceptable range." Logged by
AbstractCoordinator
Any ideas?
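The broker enforces bounds on the consumer session timeout via group.min.session.timeout.ms and group.max.session.timeout.ms; the consumer's session.timeout.ms must fall inside that window, and the consumer's request.timeout.ms must stay larger than its session timeout. A sketch with illustrative values (the broker defaults have varied between releases, so check your broker config):

```properties
# Broker (server.properties): permitted range for consumer session timeouts
group.min.session.timeout.ms=6000
group.max.session.timeout.ms=300000

# Consumer: session.timeout.ms must fall within the broker's range, and
# request.timeout.ms should stay larger than session.timeout.ms
session.timeout.ms=30000
request.timeout.ms=40000
```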
ge do you? Can you just retry
> and/or backoff-retry with the message you have? And just do the "commit" of
> the offset if successfully?
>
>
>
> On Wed, Mar 9, 2016 at 2:00 PM, Michael Freeman <mikfree...@gmail.com>
> wrote:
>
>> Hey,
>>
Hey,
My team is new to Kafka and we are using the examples found at.
http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
We process messages from kafka and persist them to Mongo.
If Mongo is unavailable we are wondering how we can re-consume
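A common pattern when a sink like Mongo is down is to keep retrying the write with backoff and only commit the offset once it succeeds, so a crash mid-outage causes re-consumption rather than loss. A minimal sketch with the write injected as a supplier (the names and structure are mine, not from the Confluent tutorial):

```java
import java.util.function.Supplier;

public class RetryingSink {
    // Retries an idempotent sink write up to maxAttempts with linear backoff;
    // returns true only if the write eventually succeeded.
    public static boolean writeWithRetry(Supplier<Boolean> write,
                                         int maxAttempts,
                                         long backoffMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (write.get()) {
                return true; // safe to commit the consumed offset now
            }
            try {
                Thread.sleep(backoffMillis * attempt); // back off before retrying
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // caller should alert rather than commit the offset
    }
}
```

With the 0.9 consumer you would combine something like this with pause()/resume() on the assigned partitions, so poll() can keep heartbeating during a long outage without fetching new records.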