Out of curiosity - I'm seeing the metric "nifi_amount_threads_active" at
the max of 40 for processors that are disabled. Does that make sense? That
seems very odd to me since those processors shouldn't be doing anything at all.
On Mon, Jan 15, 2024 at 12:01 PM Aaron Rich wrote:
> Yeah - that gets the performance to where we need it.
Yeah - that gets the performance to where we need it.
But the question I have is why the performance dropped in the first place.
Everything was working fine, and then it suddenly dropped. I'm having to
adjust NiFi parameters to try to get back to where performance was but I
can't find what is pul
Aaron,
It doesn’t sound like you’re back to the drawing board at all - sounds like you
have the solution in hand. Just increase the size of your Timer Driven Thread
Pool and leave it there.
Thanks
-Mark
On Jan 15, 2024, at 11:16 AM, Aaron Rich wrote:
@Mark - thanks for that note. I hadn't tried restarting.
@Mark - thanks for that note. I hadn't tried restarting. When I did that,
the performance dropped back down. So I'm back to the drawing board.
@Phillip - I didn't have any other services/components/dataflows going. It
was just those 2 processors going (I tried to remove every variable I could
to m
Ditto...
@Aaron... so outside of the GenerateFlowFile -> PutFile, were there
additional components/dataflows handling data at the same time as the
"stress-test". These will all share the same thread-pool. So depending
upon your dataflow footprint and any variability regarding data volumes...
20
Aaron,
Interestingly, up to version 1.21 of NiFi, if you increased the size of the
thread pool, the increase took effect immediately. But if you decreased the
size of the thread pool, the decrease didn't take effect until you restarted NiFi. So that's
probably why you’re seeing the behavior you are. Even thou
So the good news is it's working now. I know what I did but I don't know
why it worked so I'm hoping others can enlighten me based on what I did.
TL;DR - "turn it off/turn it on" for Max Timer Driven Thread Count fixed
performance. Max Timer Driven Thread Count was set to 20. I changed it to
30 -
Hi Joe,
Nothing is load balanced- it's all basic queues.
Mark,
I'm using NiFi 1.19.1.
nifi.performance.tracking.percentage sounds exactly what I might need. I'll
give that a shot.
Richard,
I hadn't looked at the rotating logs and/or cleared them out. I'll give
that a shot too.
Thank you all. P
I had a similar sounding issue, although not in a Kube cluster. NiFi was
running in a Docker container and the issue was the log rotation
interacting with the log file being mounted from the host. The mounted log
file was not deleted on rotation, meaning that once rotation was triggered
by log file
Aaron,
What version of NiFi are you running? One thing that you can look into if
you’re running a pretty recent version, (though the user friendliness is not
great) is to update nifi.properties and set the
“nifi.performance.tracking.percentage” property from 0 to something like 5 or
10. Restar
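For anyone following along, the change Mark describes is a one-line edit in nifi.properties (the 10 here is just an example sampling percentage; higher values add more tracking overhead), followed by a restart:

```
# conf/nifi.properties
nifi.performance.tracking.percentage=10
```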
Hi Joe,
It's pretty fixed-size objects at a fixed interval - one 5MB-ish file that we
break down into individual rows.
I went so far as to create a "stress test" where I have a GenerateFlowFile
(creating a fixed 100KB file, in batches of 1000, every 0.1s) feeding right
into a PutFile. I wanted to see the su
You can also set the processor's Scheduling -> Run Duration to something
other than 0ms.
I've found NiFi will do heavy disk IO when things have been running for
a while / queue sizes are large. Been using tools like atop to watch
disk IO. Check settings for flow, content, and provenance repos.
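For reference, the repository locations to check live in nifi.properties; these are the stock keys with their default paths (a sketch - adjust to wherever your volumes are actually mounted):

```
# conf/nifi.properties (defaults shown)
nifi.flowfile.repository.directory=./flowfile_repository
nifi.content.repository.directory.default=./content_repository
nifi.provenance.repository.directory.default=./provenance_repository
```

If those directories sit on slow or contended disks, heavy IO there can show up as exactly the kind of gradual slowdown described.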
Hi Aaron, is the number of threads set sufficiently high? Once I set it too low
by accident on a very powerful machine, and when we got more and more flows, at
some point NiFi slowed down tremendously. By increasing threads to the
recommended setting (a few per core, cf. the admin docs) we got NiFi ba
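As a quick sanity check, the "few per core" rule of thumb can be computed like this (the 2-4x multipliers are the range commonly cited as a starting point, not a hard NiFi requirement):

```python
import os

# Rough starting point for Max Timer Driven Thread Count:
# roughly 2-4 threads per core, then tune from there.
cores = os.cpu_count() or 1
low, high = 2 * cores, 4 * cores
print(f"{cores} cores -> try a thread pool of {low}-{high}")
```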
Aaron,
The usual suspects are memory consumption leading to high GC leading to
lower performance over time, or back pressure in the flow, etc. But your
description does not really fit either exactly. Does your flow see a mix
of large objects and smaller objects?
Thanks
On Wed, Jan 10, 2024 at
Hi all,
I’m running into an odd issue and hoping someone can point me in the right
direction.
I have NiFi 1.19 deployed in a Kube cluster with all the repositories
volume mounted out. It was processing great with processors like
UpdateAttribute sending through 15K/5m, PutFile sending through 3