My fear with logging the complete data input is that it will make the
problem worse for my customers because this problem is only happening on
heavily loaded servers. I can't reproduce it locally.
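(For reference, a rough sketch of what Perrin describes below, assuming Apache 2 with mod_perl 2; the log format name and package name here are made up for illustration.)

Adding the child pid to the access log is just %P in the LogFormat:

    # Append the pid of the child that served each request
    LogFormat "%h %l %u %t \"%r\" %>s %b %P" combined_pid
    CustomLog /var/log/apache2/access.log combined_pid

A fuller mod_perl log handler could record the pid alongside the request line; capturing the complete POST data would take more work (e.g. an input filter), which is the part I'm worried about on loaded servers:

    # Hypothetical package name; enable with: PerlLogHandler My::PidLogger
    package My::PidLogger;

    use strict;
    use warnings;
    use Apache2::RequestRec ();
    use Apache2::Log ();
    use Apache2::Const -compile => qw(OK);

    sub handler {
        my $r = shift;
        # $$ is the pid of this child; match it against the OOM killer's victim
        $r->log->info(sprintf('pid=%d %s %s', $$, $r->method, $r->unparsed_uri));
        return Apache2::Const::OK;
    }

    1;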

On Tue, Sep 6, 2016 at 11:26 AM, Perrin Harkins <[email protected]> wrote:

> Hi John,
>
> The key is usually finding out which request caused it. You can add the
> pid to your access logging, or write a more complete mod_perl handler to
> log the complete data input along with the pid. Then, once you see which
> process was killed, you can go back and look at what it was handling.
>
> - Perrin
>
> On Tue, Sep 6, 2016 at 10:00 AM, John Dunlap <[email protected]> wrote:
>
>> On one of my servers, the system load reported by the uptime command
>> periodically spikes to 20-30, and shortly thereafter I see this in
>> dmesg:
>>
>> [2887460.393402] Out of memory: Kill process 12533 (/usr/sbin/apach)
>> score 25 or sacrifice child
>> [2887460.394880] Killed process 12533 (/usr/sbin/apach)
>> total-vm:476432kB, anon-rss:204480kB, file-rss:0kB
>>
>> Several gigs of memory then become available and the system load quickly
>> returns to normal. I'm pretty sure it's a mod_perl process that's doing
>> this, but I'm not entirely sure how to track down the problem.
>>
>> How would you guys approach this problem?
>>
>> --
>> John Dunlap
>> CTO | Lariat
>>
>> Direct:
>> [email protected]
>>
>> Customer Service:
>> 877.268.6667
>> [email protected]
>>
>
>


--
John Dunlap
CTO | Lariat

Direct:
[email protected]

Customer Service:
877.268.6667
[email protected]
