On Thu, 11 Jan 2001, Rob Bloodgood wrote:
RB Alright, then to you and the mod_perl community in general, since
RB I never saw a worthwhile resolution to the thread "the edge of
RB chaos,"

The resolution is that the machine was powerful enough. If you're
running your mission critical service at "the edge of chaos" then
you're not
(Warning, sweeping generalizations for the sake of illustration below.)
Hi Rob.
Here's how you
You simply cannot come forward and say, "look, I've got this big-assed
Linux box, why is my site sucking?" We don't know, and it's neither our

Granted. That was never my intention.
I described the box only to illustrate that I (should) have sufficient HW.

The very, very best minds in production
"RB" == Rob Bloodgood [EMAIL PROTECTED] writes:
RB Alright, then to you and the mod_perl community in general, since
RB I never saw a worthwhile resolution to the thread "the edge of
RB chaos,"
The resolution is that the machine was powerful enough. If you're
running your mission critical
On Mon, 8 Jan 2001, Joshua Chamas wrote:
Hey,
I like the idea of Apache::SizeLimit, to no longer worry about
setting MaxRequestsPerChild. That just seems smart, and might
get maximum usage out of each Apache child.
What I would like to see though is instead of killing the
child based on VmRSS on Linux, which seems to be the apparent
size of the process in virtual memory RAM, I would like to
kill it based on the amount of unshared RAM, which is ultimately
what we care about.
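The "unshared RAM" figure Joshua wants can be read straight out of Linux's /proc. A minimal sketch (in Python for illustration; Apache::SizeLimit itself is Perl), assuming the first three fields of /proc/&lt;pid&gt;/statm are total, resident, and shared pages:

```python
import os

PAGE_KB = os.sysconf("SC_PAGE_SIZE") // 1024  # page size in kB

def rss_breakdown(pid="self"):
    # /proc/<pid>/statm fields, in pages: size resident shared text lib data dt
    with open(f"/proc/{pid}/statm") as f:
        _size, resident, shared = map(int, f.read().split()[:3])
    rss_kb = resident * PAGE_KB
    shared_kb = shared * PAGE_KB
    # unshared = resident minus pages shared with other processes;
    # this is the number that actually costs you per extra child
    return rss_kb, shared_kb, rss_kb - shared_kb

rss, shared, unshared = rss_breakdown()
```

Since a preforked child shares its compiled code with the parent via copy-on-write, unshared RSS is usually far smaller than VmRSS, which is exactly why a limit on unshared size is the better kill criterion.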
Perrin Harkins wrote:
We added that in, but haven't contributed a patch back because our hack only
works on Linux. It's actually pretty simple, since the data is already
there on Linux and you don't need to do any special tricks with remembering
the child init size. If you think it would
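Perrin's Linux-only hack boils down to: after a request completes (here only every other request, as he describes later in the thread), compare the child's unshared size against a threshold and exit gracefully rather than be killed mid-request. A hedged sketch of that decision logic, with MAX_UNSHARED_KB as an assumed tunable, not Perrin's actual code:

```python
MAX_UNSHARED_KB = 10 * 1024  # assumed per-child budget: 10 MB unshared

def child_should_exit(unshared_kb, request_count, check_every=2):
    # Meant to run from a post-request cleanup hook: check only every
    # Nth request, and signal a graceful exit instead of a hard kill,
    # so no request is ever interrupted in flight.
    if request_count % check_every != 0:
        return False
    return unshared_kb > MAX_UNSHARED_KB
```

No child-init baseline needs to be remembered because /proc already reports shared and resident sizes separately on Linux.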
IMHO, he has a point. I'd also benefit from limiting memory usage based
upon an application-level threshold. He has a good idea...
On Tue, 9 Jan 2001, Rob Bloodgood wrote:
I have a machine w/ 512MB of RAM. I
unload the webserver, see that I have, say, 450MB free.
So I would like to tell apache that it is allowed to use at most 425MB.
I was thinking about that at some point too. The catch is, different
applications have
because then all of your hard work before goes RIGHT out the window,
and I'm talking about a 10-15 MB difference between JUST FINE and
DEATH SPIRAL, because we've now just crossed that horrible, horrible
threshold of (say it quietly now) swapping! shudder
That won't happen if you use a
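Rob's budget (512MB box, 450MB free with the webserver unloaded, 425MB allowed to Apache) turns into a MaxClients calculation once you know the per-child unshared size. A sketch of that arithmetic, with all numbers illustrative; the spike allowance is headroom so an occasional 10MB-to-11MB excursion can't cross the swap threshold:

```python
def max_clients(budget_mb, per_child_unshared_mb, spike_mb=1):
    # Size each child slot for its worst case (normal size + spike),
    # so the sum of all children stays under budget even when a few
    # of them spike at once -- no death spiral into swap.
    return budget_mb // (per_child_unshared_mb + spike_mb)

# 425 MB budget, 10 MB children that may spike to 11 MB -> 38 slots
# 425 MB budget with no spike allowance              -> 42 slots
```

The 10-15 MB difference Rob describes between "just fine" and "death spiral" is exactly why the divisor uses the spike size, not the average size.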
On Tue, 9 Jan 2001, Rob Bloodgood wrote:
OK, so my next question about per-process size limits is this:
Is it a hard limit???
As in,
what if I alloc 10MB/per and every now and then one of my processes spikes
to a (not unreasonable) 11MB? Will it be nuked in mid process? Or just
On Tue, 9 Jan 2001, Rob Bloodgood wrote:
It's not a hard limit, and I actually only have it check on every other
request. We do use hard limits with BSD::Resource to set maximums on CPU
and RAM, in case something goes totally out of control. That's just a
safety though.
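The hard limits Perrin mentions come from BSD::Resource's setrlimit. The same kernel interface is exposed by Python's resource module, so a rough analogue of that safety net (limit values are illustrative, not Perrin's) looks like:

```python
import resource

def set_safety_limits(cpu_seconds, address_space_bytes):
    # Kernel-enforced caps as a last-ditch safety net: the child gets
    # SIGXCPU past the CPU limit, and allocations fail past the memory
    # limit. Only the soft limit is tightened; the hard limit is kept,
    # since a process cannot raise its own hard limit back up.
    for res, cap in ((resource.RLIMIT_CPU, cpu_seconds),
                     (resource.RLIMIT_AS, address_space_bytes)):
        _soft, hard = resource.getrlimit(res)
        new = cap if hard == resource.RLIM_INFINITY else min(cap, hard)
        resource.setrlimit(res, (new, hard))
```

RLIMIT_AS (total address space) is used for the memory cap because RLIMIT_RSS is not enforced on modern Linux kernels. This is the "totally out of control" backstop; the every-other-request size check above remains the normal exit path.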