I have a similar problem on my Graylog2 setup. I have a cluster with two 
nodes. The problem is with my slave node, where we capture NetFlow data 
from our routers. Incoming messages arrive at roughly 30-50 per second. 
I have allowed up to 4 GB of heap memory for graylog-server. After a fresh 
start the node uses up to 972.8 MB, and this grows over time. It takes 
approximately 24 hours until the node reaches the full 4 GB (shown as 
3.8 GB), and from then on it constantly stops and restarts. Restarting the 
node (graylog-ctl stop && shutdown -r now) rectifies the problem, but only 
temporarily. The graylog slave node is configured as "backend".

The master node has exactly the same configuration, yet the problem does 
not appear there. The master node has been running for weeks now, processing 
about 10-30 messages per second, and uses 1.1 GB of heap space. It never 
gets anywhere near 3.8 GB, which would be the configured maximum. The only 
difference is that it does not accept any NetFlow messages.

Previously the NetFlow messages went to the master node, and the exact same 
behaviour appeared there as well: the node gradually consumed more and more 
memory until it reached a state where it constantly crashed and restarted. 
Moving the NetFlow messages to the slave seems to have rectified the problem 
on the master. Both nodes run the latest version of Graylog2, 2.0.3.

Do you also run NetFlow inputs on your node? Any help is greatly 
appreciated!
