Hi Joe,
Nothing is load-balanced; it's all basic queues.
Mark,
I'm using NiFi 1.19.1.
nifi.performance.tracking.percentage sounds like exactly what I need. I'll
give that a shot.
Richard,
I hadn't looked at the rotating logs and/or cleared them out. I'll give
that a shot too.
Thank you all.
I had a similar-sounding issue, although not in a Kube cluster. NiFi was
running in a Docker container, and the issue was the log rotation
interacting with the log file being mounted from the host. The mounted log
file was not deleted on rotation, meaning that once rotation was triggered
by log
Aaron,
What version of NiFi are you running? One thing you can look into, if
you’re running a pretty recent version (though the user-friendliness is not
great), is to update nifi.properties and set the
“nifi.performance.tracking.percentage” property from 0 to something like 5
or 10.
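For anyone following along, that suggestion is a one-line edit to nifi.properties (followed by a restart). The snippet below is just a sketch of the change against a scratch copy; on a real node you would edit conf/nifi.properties in your install directory:

```shell
# Demo of the one-line change on a scratch copy; on a real node you would
# edit conf/nifi.properties under your NiFi install and restart NiFi.
demo=/tmp/nifi.properties.demo
printf 'nifi.performance.tracking.percentage=0\n' > "$demo"
# Flip the tracking percentage from 0 (disabled) to 5 (sample ~5% of invocations)
sed -i 's/^nifi.performance.tracking.percentage=.*/nifi.performance.tracking.percentage=5/' "$demo"
grep '^nifi.performance.tracking' "$demo"
```

Higher percentages give more detail in the diagnostics at the cost of extra overhead, which is presumably why it defaults to 0.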
Hi Joe,
They're pretty fixed-size objects at a fixed interval: one 5 MB-ish file,
which we break down into individual rows.
I went so far as to create a "stress test" where I have a GenerateFlowFile
(creating a fixed 100k file, in batches of 1000, every 0.1s) feeding right
into a PutFile. I wanted to see the
You can also set the processor's Scheduling -> Run Duration to something
other than 0ms.
I've found NiFi will do heavy disk IO when things have been running for
a while / queue sizes are large. Been using tools like atop to watch
disk IO. Check settings for flow, content, and provenance repos.
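To make that concrete: while reproducing the slowdown, watch per-device IO in another terminal (with atop, or `iostat -x 5` from the sysstat package), and confirm which disks the three repositories actually live on. The conf path below is an assumption about your install layout:

```shell
# List where the flowfile, content, and provenance repositories live
# (the conf/nifi.properties path is an assumption about your install layout)
grep -E '^nifi\.(flowfile|content|provenance)\.repository\.directory' conf/nifi.properties \
  || echo 'adjust the path to point at your nifi.properties'
```

If all three repositories share one volume (or, in Kube, one slow network-backed mount), heavy provenance or content writes can starve the flowfile repository.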
Hi Aaron, is the number of threads set sufficiently high? I once set it too
low by accident on a very powerful machine, and as we got more and more
flows, at some point NiFi slowed down tremendously. By increasing threads to
the recommended setting (a few per core, cf. the admin docs) we got NiFi
Aaron,
The usual suspects are memory consumption leading to heavy GC, leading to
lower performance over time, or back pressure in the flow, etc. But your
description does not really fit either exactly. Does your flow see a mix
of large objects and smaller objects?
Thanks
On Wed, Jan 10, 2024 at
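To chase the GC theory, one place to look is the heap sizing in conf/bootstrap.conf. The fragment below is only a sketch: the values are illustrative, and the java.arg numbers may differ in your file.

```properties
# conf/bootstrap.conf — JVM heap settings (values here are illustrative)
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
# One-off check of GC overhead on a running node (PID discovery is an assumption):
#   jstat -gcutil $(pgrep -f org.apache.nifi.NiFi) 5000 3
```

If the old-generation and GC-time columns climb steadily as throughput drops, that points at heap pressure rather than disk.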
Hi all,
I’m running into an odd issue and hoping someone can point me in the right
direction.
I have NiFi 1.19 deployed in a Kube cluster with all the repositories
volume mounted out. It was processing great, with processors like
UpdateAttribute sending through 15K/5m, PutFile sending through