Hi all,

I'm using Graylog 1.2 and have encountered the same problem. Restarting the 
Graylog server does not resolve it.

<https://lh3.googleusercontent.com/-2-SMSIXFBjA/VkRldkeG3II/AAAAAAAAHJk/Q-Jg-4nS2IE/s1600/%25E8%259E%25A2%25E5%25B9%2595%25E5%25BF%25AB%25E7%2585%25A7%2B2015-11-12%2B%25E4%25B8%258B%25E5%258D%25886.09.30.png>
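For anyone hitting this, a quick way to confirm a descriptor leak is to snapshot the JVM's open-file count and see whether it keeps growing under steady load. This is only a sketch; the pgrep pattern `graylog` is an assumption, so adjust it to match your deployment:

```shell
# Snapshot the number of file descriptors the Graylog JVM currently holds.
# The pgrep pattern 'graylog' is an assumption; adjust for your deployment.
PID=$(pgrep -f graylog | head -n 1)
ls /proc/"$PID"/fd | wc -l
```

Run it a few times a minute apart; a count that climbs steadily while traffic stays constant points to a leak rather than legitimate usage.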



On Wednesday, October 21, 2015 at 3:55:52 PM UTC+8, Florent B wrote:
>
> Wow, I'm very glad I'm not alone in having this problem!
>
> The ticket about this issue was closed yesterday because Bernd couldn't 
> reproduce it :(
>
> https://github.com/Graylog2/graylog2-server/issues/1391
>
> I don't know what other information to provide...
>
>
> On 10/13/2015 03:32 PM, Cory Carlton wrote:
>
> We have encountered this same problem recently as well. We run Graylog 1.2 
> with a separate Elasticsearch cluster. 
>
> 2015-10-13T13:26:40.614Z ERROR [KafkaJournal] Cannot write 
> /var/lib/graylog-server/journal/graylog2-committed-read-offset to disk.
> java.io.FileNotFoundException: 
> /var/lib/graylog-server/journal/graylog2-committed-read-offset (Too many 
> open files)
>
> We are pushing a large number of logs through, as our system is undergoing 
> performance/endurance testing. 
>
>
>
>
> On Friday, March 27, 2015 at 6:57:44 AM UTC-5, Jochen Schalanda wrote: 
>>
>> Hi Florent, 
>>
>> 700k open files sounds plainly wrong and points to a file descriptor leak. 
>> Could you please create a bug report for this at 
>> https://github.com/Graylog2/graylog2-server/issues/new and include the 
>> list of files held open by the Java process running Graylog on one of those 
>> servers?
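In case it helps anyone gathering that data: here is one way to capture the list Jochen asked for and summarize it by file name, so you can see whether sockets, journal segments, or something else dominates. It assumes `lsof` is installed and that the pgrep pattern matches your Graylog process:

```shell
# Dump the full open-file list of the Graylog JVM for the bug report.
# Assumptions: lsof is installed and 'graylog' matches the Java process.
PID=$(pgrep -f graylog | head -n 1)
lsof -p "$PID" > graylog-open-files.txt

# Summarize by file name (last column) to spot what is accumulating.
awk 'NR > 1 { print $NF }' graylog-open-files.txt | sort | uniq -c | sort -rn | head
```

Attaching both the raw `graylog-open-files.txt` and the summary to the issue should make the leak much easier to reproduce or diagnose.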
>>
>> Please also upgrade to Graylog 1.0.1 and verify that the problem still 
>> exists.
>>
>> Best regards,
>> Jochen
>>
>> On Thursday, 26 March 2015 10:13:18 UTC+1, Florent B wrote: 
>>>
>>> Hi everyone, 
>>>
>>> Last night, it seems we had some network instability. 
>>>
>>> We have 3 Graylog servers (1.0.0). 
>>>
>>> This morning we can't read logs; the web interface is showing lots of 
>>> errors (stack traces...). 
>>>
>>> In the logs of all 3 servers, we can see errors like this: 
>>>
>>> 2015-03-26T10:08:23.838+01:00 ERROR [KafkaJournal] Cannot write 
>>> /var/lib/graylog-server/journal/graylog2-committed-read-offset to disk. 
>>> java.io.FileNotFoundException: 
>>> /var/lib/graylog-server/journal/graylog2-committed-read-offset (Too many 
>>> open files) 
>>>
>>> 2015-03-26T10:08:23.974+01:00 WARN  [AbstractNioSelector] Failed to 
>>> accept a connection. 
>>> java.io.IOException: Too many open files 
>>>
>>> The Java server process has more than 700,000 (!) open files on each 
>>> server! 
>>> We are not running out of space, and CPU usage is very low. 
>>>
>>> So my questions are: 
>>>
>>> How should I handle this? What can I do to avoid losing messages? 
>>> Is this a bug? (Some resources not being freed, maybe?) 
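As a stopgap while the root cause is investigated (this does not fix the leak itself), you can check and raise the process's open-files limit. The pgrep pattern, user name, and values below are assumptions for a typical Linux install:

```shell
# Check the limit the running Graylog JVM actually has
# ('graylog' as the pgrep pattern is an assumption).
grep 'open files' /proc/"$(pgrep -f graylog | head -n 1)"/limits

# Raising it system-wide via /etc/security/limits.conf is one option, e.g.:
#   graylog soft nofile 64000
#   graylog hard nofile 64000
# The user name and value are illustrative; match your service user and load.
```

A higher limit only buys time between restarts; the descriptor count should still be monitored until the leak is fixed.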
>>>
>>> Thank you a lot. 
>>>
>>> -- 
> You received this message because you are subscribed to the Google Groups 
> "Graylog Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/graylog2/e547fcf9-540a-467b-a1c2-e28cd3a24a03%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
>
>
