[ https://issues.apache.org/jira/browse/NIFI-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307843#comment-17307843 ]

ASF subversion and git services commented on NIFI-8357:
-------------------------------------------------------

Commit 2f08d1f466b9f6f0b0b8a7b5893341a0d1433a4e in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2f08d1f ]

NIFI-8357: Updated Kafka 2.0 processors to automatically handle recreating 
Consumer Lease objects when an existing one is poisoned, even if using 
statically assigned partitions

This closes #4926.

Signed-off-by: Peter Turcsanyi <turcsa...@apache.org>


> ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using 
> statically assigned partitions
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: NIFI-8357
>                 URL: https://issues.apache.org/jira/browse/NIFI-8357
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>            Priority: Critical
>             Fix For: 1.14.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> If using statically assigned partitions in ConsumeKafka_2_0, 
> ConsumeKafkaRecord_2_0, ConsumeKafka_2_6, or ConsumeKafkaRecord_2_6 (via 
> adding {{partitions.<hostname>}} properties), when a client connection 
> fails, the processor recreates the connection but does not properly reassign 
> the partitions. As a result, the consumer stops consuming data from its 
> partition(s), and the newly created Kafka client is leaked. Over time, these 
> leaked connections can accumulate, potentially exhausting the heap or causing 
> {{IOException: Too many open files}}.
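
For context, static partition assignment on these processors is configured through dynamic properties keyed by NiFi node hostname, with a comma-separated list of partition numbers as the value. A minimal sketch (hostnames and partition numbers are hypothetical):

```properties
# Hypothetical static partition assignment on a ConsumeKafka(Record) processor.
# Each dynamic property maps a NiFi node's hostname to the Kafka partitions
# that node should consume.
partitions.nifi-node-1.example.com=0,1,2
partitions.nifi-node-2.example.com=3,4,5
```

When a Consumer Lease is poisoned and the Kafka client is recreated, these assignments must be re-applied to the new consumer; the fix referenced above handles that re-assignment automatically.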



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
