Thanks, Corey.  I'll try that, but I'm wondering what 
nifi.components.status.snapshot.frequency does.  Would decreasing that time 
(currently 1 min) make it take snapshots more frequently, and would that 
decrease or increase the memory used?  I did decrease 
nifi.components.status.repository.buffer.size to 180, and I'll see how that 
behaves.
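
If I understand it, the status repository keeps a fixed number of snapshots 
per component, so the heap cost should scale with the buffer size rather than 
the snapshot frequency, and a shorter frequency would just shrink the window 
of history shown in the graphs -- but I'd appreciate confirmation.  For 
reference, here is roughly what I now have in nifi.properties (the 180 is my 
change; the frequency is still the stock default):

    # take a status snapshot of each component once per minute (default)
    nifi.components.status.snapshot.frequency=1 min
    # keep 180 snapshots per component in memory (default is 1440, i.e.
    # 24 hours of history at a 1-minute frequency; 180 is ~3 hours)
    nifi.components.status.repository.buffer.size=180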

Joe
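
P.S. On the auto-restart question in my original note below: I still haven't 
found a dedicated property for it, but as I understand it the bootstrap 
process already restarts a NiFi JVM that dies, so one option I may try is 
having the JVM kill itself on heap exhaustion via bootstrap.conf.  An untested 
sketch (the java.arg.15 slot is arbitrary -- any unused number should work):

    # existing heap settings (we have ours raised to 6 GB)
    java.arg.2=-Xms6g
    java.arg.3=-Xmx6g
    # untested: have the JVM kill itself on OutOfMemoryError so the
    # bootstrap notices the dead process and restarts it (%p = JVM pid)
    java.arg.15=-XX:OnOutOfMemoryError=kill -9 %p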

-----Original Message-----
From: Corey Flowers [mailto:cflow...@onyxpoint.com] 
Sent: Monday, February 15, 2016 10:32 AM
To: dev@nifi.apache.org
Subject: EXTERNAL: Re: OutofMemory

Hey Joseph,

      I have a couple of clusters in the 600-1200 processor range, and they have
16-32 GB JVM heap sizes, respectively. Really, it depends on which processors 
you are using and your data volumes. One thing that may help a little is to 
decrease the number of stored statistics for the graph.
There are two properties in nifi.properties: one is a count, set to 1440 by 
default, and the other is a time interval, which defaults to every 1 min. I 
believe these stats are stored in the heap space. Devs, correct me if I am 
wrong. You could lessen the time and the amount to buy yourself a little 
space. I don't think this is a solution; really, it is more of a band-aid.
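
From a stock nifi.properties, the two I mean look roughly like this (going 
from memory, so double-check the names in your copy):

    nifi.components.status.repository.buffer.size=1440
    nifi.components.status.snapshot.frequency=1 min

At those defaults you hold 1440 snapshots per component, one per minute, which 
works out to 24 hours of stats per component sitting in the heap.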

Good luck!

Sent from my iPhone

> On Feb 15, 2016, at 9:33 AM, Gresock, Joseph <joseph.gres...@lmco.com> wrote:
>
> Devs,
>
> We've been seeing some OutOfMemoryErrors on the NCM of our 10-node cluster 
> recently.  The flow has ~600 processors, and the NCM runs on a VM with 8 GB 
> RAM.  We have 6 GB allocated to the NiFi JVM on this node.
>
> The specific log message we see is:
>
> WARN [Process NCM Request-6] org.apache.nifi.io.socket.SocketListener 
> Dispatching socket request encountered exception due to: 
> java.lang.OutOfMemoryError: Java heap space
>
> First, I'm hoping there's some advice on how to avoid this in the first 
> place, but barring that, is there a way to configure NiFi to auto-restart the 
> NCM when it gets this error?  I seem to remember seeing this in the past, but 
> I couldn't find anything in bootstrap.conf or nifi.properties that looked 
> related.
>
> Thanks,
> Joe
