Sorry, I should have explained what I meant better. You would add a handler
BEFORE the request gets to your regular application, so you catch the
details of the request that dies. I misremembered about the access_log; I
was actually thinking of a custom C module I used once that did this type of
If this is a memory leak, won't the last request to be sent to the mod_perl
worker process be the last straw and not necessarily the culprit? What if
the leak is in some library code that's used in every request?
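To make that "last straw" point concrete, here is a hypothetical sketch (all names invented, not from the thread) of the kind of gradual leak being described: a per-worker cache that is never pruned, so every request adds a little memory regardless of which URL is hit, and the request that finally trips the OOM killer is simply the one that arrived last.

```perl
use strict;
use warnings;

# Hypothetical per-worker cache that nothing ever prunes. It lives for
# the lifetime of the worker process, so memory grows a little on
# every request, no matter what the request is.
my %seen;

sub handle_request {
    my ($request_id, $payload) = @_;
    $seen{$request_id} = $payload;   # leaks: no expiry, no deletion
    return scalar keys %seen;        # grows monotonically over time
}
```

Under this kind of leak, logging which request a killed worker was handling identifies only the final request, not the code path that accumulated the memory.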
On Tue, Sep 6, 2016 at 12:43 PM, John Dunlap wrote:
My fear with logging the complete data input is that it will make the
problem worse for my customers because this problem is only happening on
heavily loaded servers. I can't reproduce it locally.
On Tue, Sep 6, 2016 at 11:26 AM, Perrin Harkins wrote:
Hi John,
The key is usually finding out which request caused it. You can add the pid
to your access logging, or write a more complete mod_perl handler that logs
the complete data input along with the pid. Then you just go back and look
at what the request was after you see which process was killed.
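A minimal sketch of the kind of pre-application logging handler described above, assuming mod_perl 2; the module name MyApp::LogRequest is hypothetical, and this version logs only the pid and the request line (capturing full POST bodies would additionally require buffering the input so the application can still read it).

```perl
package MyApp::LogRequest;

use strict;
use warnings;

use Apache2::RequestRec ();   # $r->the_request
use Apache2::Log ();          # $r->log
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;

    # Runs before the main application: record the worker pid and the
    # raw request line, so that after the OOM killer reaps a worker
    # you can look up what it was serving.
    $r->log->info("pid=$$ request=" . $r->the_request);

    return Apache2::Const::OK;
}

1;
```

This would be enabled with `PerlPostReadRequestHandler MyApp::LogRequest` in httpd.conf. If only the pid is needed, Apache's `LogFormat` `%P` token records the child pid in the regular access log without any custom code.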
The system load reported by uptime on one of my servers periodically spikes
to 20-30, and shortly thereafter I see this in dmesg:
[2887460.393402] Out of memory: Kill process 12533 (/usr/sbin/apach) score 25 or sacrifice child
[2887460.394880] Killed process 12533
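Tying the two suggestions together, a small sketch of how the killed pid could be pulled out of kernel log output and then matched against an access log that records the worker pid (e.g. via `%P`); the helper name is invented for illustration.

```perl
use strict;
use warnings;

# Return the pids of processes the kernel reports as killed in OOM
# log text (e.g. the output of `dmesg`).
sub oom_killed_pids {
    my ($text) = @_;
    return $text =~ /Killed process (\d+)/g;
}

# The "Killed process" line from the thread yields the pid to grep
# for in the pid-annotated access log:
my $sample = "[2887460.394880] Killed process 12533\n";
print "$_\n" for oom_killed_pids($sample);   # prints 12533
```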