[ 
https://issues.apache.org/jira/browse/NIFI-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18014665#comment-18014665
 ] 

Zenkovac edited comment on NIFI-14864 at 8/18/25 7:54 PM:
----------------------------------------------------------

This is the final config I'm using to get fewer but bigger flowfiles, so later in 
the processing path I use record readers and writers.

Processor ConsumeKafka:
Max Uncommitted Time = 5 sec
Schedule = 5 sec

Controller Kafka3ConnectionService:
max.partition.fetch.bytes = 10485760
fetch.min.bytes = 10485760
fetch.max.wait.ms = 5 sec
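
These are standard Kafka consumer properties, passed through the connection service as dynamic properties. A minimal stdlib-only sketch of the same settings (the class name is illustrative; in NiFi you would not write this code yourself — the Kafka3ConnectionService builds the consumer config for you). The broker holds each fetch request until fetch.min.bytes have accumulated or fetch.max.wait.ms elapses, whichever comes first, which is what produces fewer, larger batches per poll:

```java
import java.util.Properties;

public class FetchTuningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // 10 MiB cap per partition per fetch response
        props.setProperty("max.partition.fetch.bytes", "10485760");
        // Broker waits until ~10 MiB are available...
        props.setProperty("fetch.min.bytes", "10485760");
        // ...or 5000 ms have passed, whichever comes first
        // (Kafka takes milliseconds; NiFi accepts "5 sec" and converts)
        props.setProperty("fetch.max.wait.ms", "5000");
        System.out.println(props.getProperty("fetch.min.bytes"));
    }
}
```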


was (Author: JIRAUSER294127):
This is the final config I'm using to get fewer but bigger flowfiles, so later in 
the processing path I use record readers and writers.

Processor ConsumeKafka:
Max Uncommitted Time = 5 sec
Schedule = 5 sec

Controller Kafka3ConnectionService:
kafka max.partition.fetch.bytes = 10485760

> ConsumeKafka performance
> ------------------------
>
>                 Key: NIFI-14864
>                 URL: https://issues.apache.org/jira/browse/NIFI-14864
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Configuration
>    Affects Versions: 2.5.0
>         Environment: nifi 2.5, kafka server 2.8
>            Reporter: Zenkovac
>            Priority: Major
>
> After switching from NiFi 1.19 to 2.5 and using ConsumeKafka, I can't get it to 
> produce flowfiles with more than ~500 records per flowfile, despite having 
> millions of messages available in the Kafka topic.
> This is a performance penalty for me because I consume thousands of flowfiles 
> vs. a few in NiFi 1.19, which meant less disk I/O usage.
> This is my config:
> *Processing Strategy: RECORD*
> *Max Uncommitted Time* 10 sec



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
