Dana
Everything you are saying does not answer my question of how to interrupt a 
potential deadlock artificially forced upon users of the KafkaConsumer API.
I may be OK with duplicate messages, I may be OK with data loss, and I am OK 
with doing extra work to handle all kinds of things. I am NOT OK with getting 
stuck on a close() call when I really want my system that uses KafkaConsumer to 
exit. So Consumer.close(timeout) is what I was really asking about. 
So, is there a way now to interrupt such a block? 
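
The closest workaround I can see for now is to push close() onto its own daemon 
thread and stop waiting after a fixed timeout. A rough sketch only (here 
'consumer' is an already-created KafkaConsumer, and the abandoned thread may 
stay blocked until the JVM actually exits):

import java.util.concurrent.*;

// Run close() on a daemon thread so a stuck offset commit cannot pin the JVM,
// and give up waiting for it after 30 seconds.
ExecutorService closer = Executors.newSingleThreadExecutor(r -> {
    Thread t = new Thread(r, "consumer-closer");
    t.setDaemon(true);            // do not prevent JVM shutdown
    return t;
});
Future<?> closed = closer.submit(() -> consumer.close());
try {
    closed.get(30, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    closed.cancel(true);          // abandon the blocked close() and move on
} catch (InterruptedException | ExecutionException e) {
    // log and continue shutting down
} finally {
    closer.shutdownNow();
}

That lets the application exit, but it is obviously a band-aid rather than the 
Consumer.close(timeout) I am asking for.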

Cheers
Oleg

> On Apr 11, 2016, at 4:08 PM, Dana Powers <dana.pow...@gmail.com> wrote:
> 
> Not a typo. This happens because the consumer closes the coordinator,
> and the coordinator attempts to commit any pending offsets
> synchronously in order to avoid duplicate message delivery. The
> Coordinator method commitOffsetsSync will retry indefinitely unless a
> non-recoverable error is encountered. If you wanted to implement a
> timeout, you'd need to wire it up in commitOffsetsSync and plumb the
> timeout from Coordinator.close() and Consumer.close(). It doesn't look
> terribly complicated, but you should check on the dev list for more
> opinions.
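> 
> (Aside: a rough sketch of the shape of that change, retrying the synchronous
> commit only until a deadline instead of forever. The names below are
> illustrative stand-ins, not the actual coordinator internals:)
> 
> import java.util.concurrent.TimeUnit;
> 
> // Retry the synchronous offset commit only until a deadline expires,
> // so close() can return control to the caller.
> static boolean commitSyncWithTimeout(long timeoutMs, long retryBackoffMs)
>         throws InterruptedException {
>     long deadline = System.currentTimeMillis() + timeoutMs;
>     while (System.currentTimeMillis() < deadline) {
>         if (commitAttempt())                          // hypothetical helper
>             return true;
>         TimeUnit.MILLISECONDS.sleep(retryBackoffMs);  // back off, then retry
>     }
>     return false;   // give up after the timeout instead of retrying forever
> }
> 
> static boolean commitAttempt() {
>     // stand-in for one OffsetCommitRequest round trip; returns true on success
>     return false;
> }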
> 
> -Dana
> 
> On Mon, Apr 11, 2016 at 12:45 PM, Oleg Zhurakousky
> <ozhurakou...@hortonworks.com> wrote:
>> The subject line is from the javadoc of the new KafkaConsumer.
>> Is this for real? I mean, I am hoping the use of 'indefinitely' is a typo.
>> In any event if it is indeed true, how does one break out of indefinitely 
>> blocking consumer.close() invocation?
>> 
>> Cheers
>> Oleg
> 
