Hi Aaron, is the number of threads set sufficiently high? I once set it too low 
by accident on a very powerful machine, and as we added more and more flows, at 
some point NiFi slowed down tremendously. Increasing the thread count to the 
recommended setting (a few per core, cf. the admin docs) brought NiFi back up to speed.
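If you want to double-check the current setting without clicking through the 
UI, the controller-level thread pool is also exposed over the REST API. A 
minimal Python sketch (assuming an unsecured node on localhost:8080; adjust 
the host and add auth for a secured cluster):

    import json
    import urllib.request

    # Controller-level thread pool settings from the NiFi REST API.
    # Assumes an unsecured node; secured clusters need an auth token.
    url = "http://localhost:8080/nifi-api/controller/config"
    with urllib.request.urlopen(url) as resp:
        config = json.load(resp)["component"]

    print("max timer driven threads:", config["maxTimerDrivenThreadCount"])

Compare that number with the cores on the node; if it is too low, processors 
sit idle waiting for a thread even though the machine has capacity.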
Another possible cause of performance loss is other workloads in the same cluster. 
With some cloud providers you may also get throttled for sustained high 
disk/resource/... usage. Just a thought.
Anything in the logs? Maybe your repositories (content, flowfile, etc.) are 
full, and NiFi cannot cope with the archiving and shuffling it does in the 
background. There should be an indication in the logs if so.
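A quick way to rule out full repository volumes is to check free space on 
each mount. A stdlib-only Python sketch (the paths below are the defaults 
from the official image; substitute your actual volume mounts):

    import shutil

    # Default repository locations in the Apache NiFi image; replace with
    # your mounted paths if they differ.
    repos = [
        "/opt/nifi/nifi-current/content_repository",
        "/opt/nifi/nifi-current/flowfile_repository",
        "/opt/nifi/nifi-current/provenance_repository",
    ]

    for path in repos:
        usage = shutil.disk_usage(path)
        pct = 100 * usage.used / usage.total
        print(f"{path}: {pct:.1f}% used, {usage.free / 1e9:.1f} GB free")

Keep an eye on the content repository in particular; with archiving enabled 
(the default), a nearly full volume forces constant cleanup work.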
Good luck, Lars

On 10 January 2024 18:09:07 CET, Joe Witt <joe.w...@gmail.com> wrote:
>Aaron,
>
>The usual suspects are memory consumption leading to heavy GC, which in
>turn lowers performance over time, or back pressure in the flow, etc. But
>your description does not really fit either exactly. Does your flow see a
>mix of large objects and smaller objects?
>
>Thanks
>
>On Wed, Jan 10, 2024 at 10:07 AM Aaron Rich <aaron.r...@gmail.com> wrote:
>
>> Hi all,
>>
>> I’m running into an odd issue and hoping someone can point me in the right
>> direction.
>>
>> I have NiFi 1.19 deployed in a Kubernetes cluster with all the repositories
>> volume-mounted out. It was processing well, with processors like
>> UpdateAttribute sending through 15K/5m and PutFile sending through 3K/5m.
>>
>> With nothing changing in the deployment, performance has dropped to
>> UpdateAttribute doing 350/5m and PutFile doing 200/5m.
>>
>> I’m trying to determine what resource is suddenly dropping our performance
>> like this. I don’t see anything in the Kube monitoring that stands out, and
>> I have restarted, cleaned repos, and changed nodes, but nothing is helping.
>>
>> I was hoping there is something from the NiFi point of view that can help
>> identify the limiting resource. I'm not sure if there is additional
>> diagnostic/debug/etc. information available beyond the node status graphs.
>>
>> Any help would be greatly appreciated.
>>
>> Thanks.
>>
>> -Aaron
>>
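
Regarding Joe's point about GC: heap usage and collector counters are exposed 
via the system-diagnostics endpoint, so you can sample them over time and see 
whether collection time climbs as throughput drops. A minimal sketch (same 
assumptions as above; field names are from the 1.x API, so double-check them 
against your version):

    import json
    import urllib.request

    # Heap and GC statistics from the NiFi REST API.
    url = "http://localhost:8080/nifi-api/system-diagnostics"
    with urllib.request.urlopen(url) as resp:
        snapshot = json.load(resp)["systemDiagnostics"]["aggregateSnapshot"]

    print("heap used:", snapshot["heapUtilization"])
    for gc in snapshot["garbageCollection"]:
        print(gc["name"], "-", gc["collectionCount"], "collections,",
              gc["collectionTime"], "total")

If collection time grows steadily while throughput falls, that points back at 
Joe's memory theory rather than disk or threads.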
