Just in case you haven't seen this:

http://blog.gerhards.net/2013/07/rsyslog-why-disk-assisted-queues-keep.html?m=1

Sent from phone, thus brief.
On 04.08.2014 at 22:39, Rainer Gerhards <[email protected]> wrote:

> The memory part of the queue looks like it is limited to 10k, which is
> probably way too few for this setup. I suggest trying 500k.
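For reference, the in-memory queue size is set per action in rsyslog.conf. A minimal sketch, assuming the forwarding action is an omfwd action named "logstashforwarder"; the target host and port here are placeholders:

```
# raise the in-memory part of the action queue to the suggested 500k
action(type="omfwd" name="logstashforwarder"
       target="127.0.0.1" port="5514" protocol="tcp"
       queue.type="LinkedList"
       queue.size="500000"             # in-memory queue capacity (messages)
       queue.filename="logstashfwd")   # keeps the queue disk-assisted (DA)
```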
>
> Sent from phone, thus brief.
> On 04.08.2014 at 21:02, David Lang <[email protected]> wrote:
>
>> well, it's clear that you are getting new requests FAR faster than you
>> are processing them
>>
>> Mon Aug  4 13:14:16 2014: imuxsock: submitted=3 ratelimit.discarded=0
>> ratelimit.numratelimiters=2
>> Mon Aug  4 13:14:16 2014: action 1: processed=0 failed=0
>> Mon Aug  4 13:14:16 2014: action 2: processed=603 failed=0
>> Mon Aug  4 13:14:16 2014: action 3: processed=547 failed=0
>> Mon Aug  4 13:14:16 2014: action 4: processed=0 failed=0
>> Mon Aug  4 13:14:16 2014: action 5: processed=0 failed=0
>> Mon Aug  4 13:14:16 2014: action 6: processed=0 failed=0
>> Mon Aug  4 13:14:16 2014: action 7: processed=0 failed=0
>> Mon Aug  4 13:14:16 2014: action 8: processed=0 failed=0
>> Mon Aug  4 13:14:16 2014: action 9: processed=0 failed=0
>> Mon Aug  4 13:14:16 2014: logstashforwarder: processed=270878 failed=0
>> Mon Aug  4 13:14:16 2014: imptcp(*/10514/IPv4): submitted=270859
>> Mon Aug  4 13:14:16 2014: imptcp(*/10514/IPv6): submitted=0
>> Mon Aug  4 13:14:16 2014: logstashforwarder[DA]: size=73726973
>> enqueued=114807 full=0 discarded.full=0 discarded.nf=0 maxqsize=73756802
>> Mon Aug  4 13:14:16 2014: logstashforwarder: size=147 enqueued=270878
>> full=0 discarded.full=0 discarded.nf=0 maxqsize=9770
>> Mon Aug  4 13:14:16 2014: main Q: size=0 enqueued=270878 full=0
>> discarded.full=0 discarded.nf=0 maxqsize=31209
>>
>>
>>
>> Mon Aug  4 13:15:16 2014: imuxsock: submitted=10 ratelimit.discarded=0
>> ratelimit.numratelimiters=6
>> Mon Aug  4 13:15:16 2014: action 1: processed=0 failed=0
>> Mon Aug  4 13:15:16 2014: action 2: processed=1877 failed=0
>> Mon Aug  4 13:15:16 2014: action 3: processed=592 failed=0
>> Mon Aug  4 13:15:16 2014: action 4: processed=4 failed=0
>> Mon Aug  4 13:15:16 2014: action 5: processed=2 failed=0
>> Mon Aug  4 13:15:16 2014: action 6: processed=0 failed=0
>> Mon Aug  4 13:15:16 2014: action 7: processed=0 failed=0
>> Mon Aug  4 13:15:16 2014: action 8: processed=0 failed=0
>> Mon Aug  4 13:15:16 2014: action 9: processed=0 failed=0
>> Mon Aug  4 13:15:16 2014: logstashforwarder: processed=694102 failed=0
>> Mon Aug  4 13:15:16 2014: imptcp(*/10514/IPv4): submitted=696044
>> Mon Aug  4 13:15:16 2014: imptcp(*/10514/IPv6): submitted=0
>> Mon Aug  4 13:15:16 2014: logstashforwarder[DA]: size=73817861
>> enqueued=317479 full=0 discarded.full=0 discarded.nf=0 maxqsize=73817861
>> Mon Aug  4 13:15:16 2014: logstashforwarder: size=1392 enqueued=694130
>> full=0 discarded.full=0 discarded.nf=0 maxqsize=9770
>> Mon Aug  4 13:15:16 2014: main Q: size=4150 enqueued=696078 full=0
>> discarded.full=0 discarded.nf=0 maxqsize=31209
>>
>> if you look at the queue sizes, you fell WAY behind in this timeframe:
>> you received far more messages than you processed (see the difference in
>> the queue stats for logstashforwarder, logstashforwarder[DA], and main Q).
>> It looks like you fell behind by >100k messages.
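As a rough cross-check (a sketch only; it simply subtracts the cumulative counters from the two impstats snapshots quoted above):

```python
# Per-minute deltas between the 13:14:16 and 13:15:16 impstats snapshots.
# impstats counters are cumulative, so rate = later value - earlier value.
received  = 696044 - 270859   # imptcp(*/10514/IPv4) submitted
processed = 694102 - 270878   # logstashforwarder processed
da_spill  = 317479 - 114807   # logstashforwarder[DA] enqueued

print("received/min: ", received)    # 425185
print("processed/min:", processed)   # 423224
print("spilled to DA:", da_spill)    # 202672
```

The DA-enqueued delta of roughly 200k messages in one minute is consistent with the >100k backlog estimate above.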
>>
>> So this looks to me like the logstash instance just isn't able to keep
>> up; can you look at the data there?
>>
>> Also, it would be good to restart this with the DA cache files removed,
>> since putting messages into the DA cache files costs performance.
>>
>> At this data volume, I'd also suggest lowering the impstats interval to
>> something like 10 seconds so that the numbers don't get too big.
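A sketch of the corresponding impstats configuration (the log file path is a placeholder; parameter names follow the impstats module documentation):

```
# emit statistics every 10 seconds to a dedicated file instead of syslog
module(load="impstats"
       interval="10"
       severity="7"
       log.syslog="off"
       log.file="/var/log/rsyslog-stats.log")
```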
>>
>> David Lang
>>
>> On Mon, 4 Aug 2014, Doug McClure wrote:
>>
>>  Date: Mon, 4 Aug 2014 14:49:38 -0400
>>> From: Doug McClure <[email protected]>
>>> Reply-To: rsyslog-users <[email protected]>
>>> To: rsyslog-users <[email protected]>
>>> Subject: Re: [rsyslog] Finding the holy grail tuning setting...
>>>
>>> I appreciate it - I desire an objective approach to this challenge!
>>>
>>> Attached is a fresh impstats file. I'd appreciate any interpretation
>>> advice and tuning actions.
>>>
>>> Doug
>>>
>>>
>>> On Mon, Aug 4, 2014 at 1:05 PM, David Lang <[email protected]> wrote:
>>>
>>>  On Mon, 4 Aug 2014, Doug McClure wrote:
>>>>
>>>>> I've read, re-read and read again everything I can find out there on
>>>>> queues, options, etc., and still feel I don't really know what I'm doing
>>>>> other than haphazardly changing one or more settings hoping to get more
>>>>> data through/out of rsyslog.
>>>>>
>>>>> I'm growing about one 1GB DA cache file every 10 min or so, and I can't
>>>>> seem to increase the processing to clear them up. I probably clear one
>>>>> for every 2-4 new ones that are created.
>>>>>
>>>>> What's the best setting to focus on to increase DA queue file
>>>>> processing? I've taken dequeuebatchsize from as low as 100 or 1000
>>>>> (which everything seems to talk about) to as high as 100,000 or more,
>>>>> and I can't seem to hit a sweet spot. I have varied threads up to 200,
>>>>> and queue size up to 1G, 500M, or 200K.
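For reference, the knobs being varied map to these rsyslog action queue parameters. A sketch on a generic action; the target is a placeholder and the values are illustrative, not recommendations:

```
action(type="omfwd" target="logstash.example.com" port="5514" protocol="tcp"
       queue.type="LinkedList"
       queue.dequeueBatchSize="1000"    # messages pulled per dequeue batch
       queue.workerThreads="4"          # concurrent delivery workers
       queue.workerThreadMinimumMessages="10000"  # queue depth per extra worker
       queue.size="200000")             # in-memory queue capacity (messages)
```

Worker threads beyond what the downstream consumer can absorb in parallel generally do not help, which is why identifying the actual bottleneck comes first.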
>>>>>
>>>>> What are the rules of thumb here: change X, watch Y, until you hit
>>>>> some ceiling and need to add more system resources or scale out
>>>>> upstream?
>>>>>
>>>> Well, rather than focusing on the DA queue handling, let's try to
>>>> figure out what's slow and causing things to queue.
>>>>
>>>> Have you configured impstats? Configure it to log to a file, and log
>>>> fairly frequently, and we should be able to see which action is holding
>>>> things up. Once we know that, we can work out how to solve that
>>>> bottleneck.
>>>>
>>>> David Lang
>>>> _______________________________________________
>>>> rsyslog mailing list
>>>> http://lists.adiscon.net/mailman/listinfo/rsyslog
>>>> http://www.rsyslog.com/professional-services/
>>>> What's up with rsyslog? Follow https://twitter.com/rgerhards
>>>> NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad
>>>> of sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you
>>>> DON'T LIKE THAT.
>>>>
>>>>
>>
>