Thank you so much Mark.
The pointers were helpful & definitely in the right direction.
The number of FlowFiles was huge because the MySQL CDC processor had not
been running for a couple of days, resulting in an accumulation of binlog
entries. And whenever I tried processing those, the CPU would max out at 100%
If you point NiFi at a large dataset, this is the expected behavior.
We use micro-clusters of NiFi (4-node clusters, 8 CPUs, 48 GB RAM), with
different clusters for different business purposes/use cases.
java.arg.2=-Xms2g
java.arg.3=-Xmx32g (48g machines - 20% less than system memory)
6 max threads per
I don't know that this is actually unexpected. What you observed is that you
had millions of FlowFiles queued up to be processed, and NiFi was not processing
them despite 100% CPU utilization. This typically indicates one of two things: a)
you haven't allocated enough threads, or b) you have a bottleneck.
Buffering FlowFiles like that is supported by design and is common, so it would
be ideal to figure out what happened.
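To check the depth of that backlog without clicking through the UI, one option is NiFi's REST status endpoint. A minimal sketch, assuming an unsecured instance on localhost:8080 (adjust host/port for your setup):

```shell
# Pull cluster-wide status; the fallback '{}' keeps the pipeline from
# crashing if NiFi is unreachable.
STATUS=$(curl -s 'http://localhost:8080/nifi-api/flow/status' || echo '{}')

echo "$STATUS" | python3 -c '
import json, sys
cs = json.load(sys.stdin).get("controllerStatus", {})
print("queued flowfiles:", cs.get("flowFilesQueued"))
print("queued bytes:    ", cs.get("bytesQueued"))
print("active threads:  ", cs.get("activeThreadCount"))
'
```

Watching `flowFilesQueued` and `activeThreadCount` over a few minutes shows whether the flow is draining the queue or stuck spinning.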
On Mon, Jun 10, 2019, 9:02 AM Shanker Sneh wrote:
FlowFiles were close to ~7 million, with 8 threads (as I have 4 vCPUs per
box). Max heap allocated is 12 GB, so the usage was ~60%.
Joe, I think it has something to do with what Woodcock suggested. Clearing
up the content and FlowFile repositories seems to have made the CPU manageable.
Allow me 1-2 days and I shall report back.
How many FlowFiles were in the queue? How many threads for NiFi to use? How
was the heap?
On Mon, Jun 10, 2019, 8:44 AM Shanker Sneh wrote:
Thanks Joe for reading through and helping me. :)
- NiFi hasn't been upgraded. It's 1.8.0 (the community version of
Hortonworks DataFlow).
- OS/kernel is the same. Just that I have added more disk capacity
(with better IO).
- JVM continues to be the same, Java 8.
- When CPU is 1
Sneh
It was stable for months but now is high...
Has NiFi been upgraded? What version before vs. now?
Has the OS/kernel been changed?
Has the JVM been updated?
When CPU is at 100%, what does top show?
Thanks
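For the `top` question above, the usual Java drill is to find the hottest thread and map it to a stack trace. A sketch, assuming a Linux box where the NiFi JVM matches the `org.apache.nifi` process name:

```shell
# Find the NiFi JVM pid; the process-name match is an assumption.
NIFI_PID=$(pgrep -f org.apache.nifi | head -n1)

# Per-thread CPU view (-H shows threads; -b -n 1 makes it scriptable).
if [ -n "$NIFI_PID" ]; then
  top -H -b -n 1 -p "$NIFI_PID" | head -n 20
fi

# top prints thread ids in decimal; jstack labels them as hex "nid"s.
HOT_TID=12345                       # substitute the hottest TID from top
HEX_TID=$(printf '%x' "$HOT_TID")
echo "look for nid=0x$HEX_TID"
# jstack "$NIFI_PID" | grep -A 20 "nid=0x$HEX_TID"
```

The stack frames under the matching `nid` usually name the processor or repository code burning the CPU.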
On Mon, Jun 10, 2019, 7:59 AM Shanker Sneh wrote:
Thanks for the suggestions Joe.
Actually the issue is persistent even after reverting to the
'older-regular-incremental-load' version of the data flow (which used to work
fine for months on similarly-configured hardware, utilising just ~50% of
resources).
These days, one of the 2-node
You can also identify where the top performance hitters are and ensure that a
ControlRate processor, or an otherwise throttled amount of data and/or threads,
is leveraged at once. This allows you to effectively control how much effort
to put on any single point of the flow at once. This is necessary when you
want
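A ControlRate throttle like the one described might look like this in the processor's properties (property names are from the stock ControlRate processor; the numbers are purely illustrative):

```
ControlRate
  Rate Control Criteria : flowfile count
  Maximum Rate          : 10000
  Time Duration         : 1 min
```

Placed in front of the expensive part of the flow, this caps how many FlowFiles per minute reach it, and upstream back pressure then holds the rest in queue.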
We had this issue when NiFi flows were blocked and lots of content and FlowFile
data was on disk and in RAM. We got around it by temporarily giving the NiFi
JVMs more RAM to allow the glut of content to pass, and then lowering the JVM
RAM back to normal so that garbage collection could occur.
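The temporary heap bump described above is just an edit to `java.arg.3` in NiFi's `conf/bootstrap.conf` plus a restart. A sketch, demonstrated on a scratch copy (the 12g/32g figures are illustrative; in practice you edit the real file in the install directory and bounce the node with `bin/nifi.sh restart`):

```shell
# Scratch copy standing in for conf/bootstrap.conf.
conf=$(mktemp)
printf 'java.arg.2=-Xms2g\njava.arg.3=-Xmx12g\n' > "$conf"

# Raise the heap ceiling while the backlog drains ...
sed -i 's/^java\.arg\.3=-Xmx12g$/java.arg.3=-Xmx32g/' "$conf"
grep Xmx "$conf"

# ... then, once the queues are empty, drop it back to normal:
sed -i 's/^java\.arg\.3=-Xmx32g$/java.arg.3=-Xmx12g/' "$conf"
grep Xmx "$conf"
rm -f "$conf"
```

Keeping the steady-state `-Xmx` modest matters because very large heaps make full GC pauses longer, which is why the advice is to lower it again afterwards.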
In
Shanker
It sounds like you've gone through some changes in general and have worked
through those. Now you have a flow running with a high volume of data
(history load) and want to know which parts of the flow are most
expensive/consuming the CPU.
You should be able to look at the statistics prov
Hello all,
I am facing a strange issue with NiFi 1.8.0 (2 nodes).
My flows had been running fine for months.
Yesterday I had to do some history load, which filled up both my disks (I
have the FlowFile repository on a separate disk).
I increased the size of both the root & FlowFile disks. And 'grow' the