RE: Memory problems

1999-11-09 Thread Clinton Gormley

 Why is it that my memory usage is going up and up, and shutting down
 the two major consumers of memory (Apache/mod_perl and MySQL) doesn't
 reclaim that memory?
 
 I am running RedHat 6, with Apache 1.3.9, mod_perl 1.21, and MySQL 3.23a.

As an answer to my own question, for anybody in a similar position: it
turns out that Linux 2.2.11 (which I am using) is known to have a memory
leak, which is fixed in 2.2.12.

Clint



Re: Memory problems

1999-11-03 Thread Greg Stark


[EMAIL PROTECTED] writes:

 Thanks Greg
 
  I strongly suggest you move the images to a separate hostname
  altogether. The proxy is a good idea but there are other useful
  effects of having a separate server altogether that I plan to write
  about in a separate message sometime. This does mean rewriting all
  your img tags though.
 
 Look forward to that with interest

Briefly, the reason you want a separate server and not just a proxy is
that Netscape and other browsers can do requests in parallel more
efficiently. If they're on the same server, the browser might choose to
queue up a bunch of image downloads on the same connection using
keepalives. If it orders them with the slow script-generated page first,
then they all get delayed by the length of time the script engine takes
to run.

Also, if you end up with a backlog of proxy servers waiting for a script
engine to service them, then all your proxy server processes can get
stuck waiting for a script engine process. That's fine if all they do is
serve dynamic pages, but not if they're also responsible for serving
static objects.

The proxy server may still be useful, but it doesn't replace using a
separate server on a different hostname or port for the images and
static documents. Besides, you may eventually want to do that anyway,
and it will be easier if you've been doing it all along.
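
To make that concrete, here is a minimal sketch of the split (the
hostnames, port and paths are made up for illustration, not taken from
the original posts): a lightweight front-end Apache serves the static
objects itself and proxies only the dynamic URLs through to the mod_perl
server listening on another port:

    # httpd.conf for the front-end (static) server -- illustrative only
    Listen 80
    DocumentRoot /home/httpd/static        # images and static documents
    ProxyPass        /perl/ http://127.0.0.1:8080/perl/
    ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/

The img tags then point at the static hostname, e.g.
<img src="http://images.example.com/logo.gif">, so image requests never
tie up a heavy mod_perl child.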

 I have two PIII 500's, so CPU usage is no problem.  Amazingly, it's the 1Gig
 of memory which expires first.

That sounds like you have some high-latency job being handled by your
Perl server. Are they sending mail? Or doing database accesses against a
database that can't keep up? If it were just Perl, it would only take a
handful of processes to use 100% CPU; they must be idle waiting for
something.

I agree with [EMAIL PROTECTED]: if you have to do anything at all
dependent on external i/o, like sending mail, you want to queue up the
events and handle them asynchronously. Anything that can arbitrarily
increase the latency of the Perl httpds is a disaster waiting to happen.
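
As a minimal sketch of that idea (the spool path and function name are
hypothetical, not from the original posts): rather than talking to the
mail server inside the request, append the message to a spool file and
let a cron job deliver the queued mail later:

    # queue_mail() -- spool outgoing mail; a cron job drains the spool.
    use Fcntl qw(:flock);

    sub queue_mail {
        my ($to, $subject, $body) = @_;
        open(my $fh, '>>', '/var/spool/myapp/outgoing')  # hypothetical path
            or die "can't open spool: $!";
        flock($fh, LOCK_EX) or die "can't lock spool: $!";
        print $fh "To: $to\nSubject: $subject\n\n$body\n\n";
        close($fh) or die "can't close spool: $!";
    }

The handler returns as soon as the local write completes, so a slow or
unreachable mail server can never pin down a mod_perl child.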

Gee, I think I just summarized the two big lessons I was going to write
about in the same vein as jb's "in practice" message. I've learned these
lessons the hard way; hopefully we can build up enough of a repertoire
of such lessons in the FAQ to help other people avoid them :)

-- 
greg



Memory problems

1999-11-02 Thread Clinton Gormley

Hi

I had huge problems yesterday.  Our web site made it into the Sunday
Times and has had to serve half a million requests in the last 2 days.

Had I set it up to have proxy servers and a separate mod_perl server?
No.  DOH!  So what happened to my 1Gig baby? It died. A sad and unhappy
death.

I am in the process of fixing that, but if anybody could help with this
question, it'd be appreciated.

I am running Apache 1.3.9, mod_perl 1.21, Linux 2.2.11, mysql
3.23.3-alpha.

What happened was this:  my memory usage went up and up until I got "Out
of memory" messages and MySQL bailed out.  Memory usage was high, and the
server was swapping as well.

So I thought - restart MySQL and restart Apache.  But I couldn't reclaim
memory.  It was just unavailable.  How do you reclaim memory other than
by stopping the processes or powering down?  Is this something that
might have happened because it went past the Out of Memory stage?

Thanks

Clint



Re: Memory problems

1999-11-02 Thread Stas Bekman

 I had huge problems yesterday.  Our web site made it into the Sunday
 Times and has had to serve half a million requests in the last 2 days.

Oh, I thought there was a /. effect; now it's a Sunday effect :)

 Had I set it up to have proxy servers and a separate mod_perl server?
 No.  DOH!  So what happened to my 1Gig baby? It died. A sad and unhappy
 death.
 
 I am in the process of fixing that, but if anybody could help with this
 question, it'd be appreciated.
 
 I am running Apache 1.3.9, mod_perl 1.21, Linux 2.2.11, mysql
 3.23.3-alpha.
 
 What happened was this:  my memory usage went up and up until I got "Out
 of memory" messages and MySQL bailed out.  Memory usage was high, and the
 server was swapping as well.

First, what you should have done in the first place is set MaxClients to
a number such that, even in the worst case of every process growing to X
MB in memory, your machine wouldn't swap. Some users will probably get
an error once the processes can no longer queue all the requests, but it
will never bring your machine down!
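
To put numbers on that (the figures are made up for illustration): with
1 GB of RAM, suppose you reserve 300 MB for MySQL, the OS and the
filesystem cache, and your mod_perl children can grow to 20 MB each in
the worst case. Then:

    MaxClients = (1000 MB - 300 MB) / 20 MB = 35

    # httpd.conf -- illustrative values only
    MaxClients           35
    MaxRequestsPerChild 500   # recycle children before they bloat

MaxRequestsPerChild is the usual companion knob: it retires each child
after N requests, so gradual per-request growth never accumulates for
long.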

Other than that, you can set a limit on resources if you need to; see
Apache::SizeLimit, Apache::GTopLimit and BSD::Resource.
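
A sketch of the Apache::SizeLimit approach under mod_perl 1 (the 20 MB
threshold is an arbitrary example):

    # in startup.pl -- kill any child that grows past ~20 MB
    use Apache::SizeLimit;
    $Apache::SizeLimit::MAX_PROCESS_SIZE = 20000;  # size in KB

    # in httpd.conf
    PerlFixupHandler Apache::SizeLimit

The size check runs on each request, so an oversized child finishes the
request it is serving and then exits, instead of pushing the box into
swap.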

 So I thought - restart MySQL and restart Apache.  But I couldn't reclaim
 memory.  It was just unavailable.  How do you reclaim memory other than
 by stopping the processes or powering down?  Is this something that
 might have happened because it went past the Out of Memory stage?

Sure, when a machine starts swapping heavily you might wait for hours
before it stabilizes, since all it does is swap pages in and immediately
swap them out again. But when it's finished you won't see the swap usage
drop to zero: top (or any other tool) will still report swap as used,
even though, given enough free real memory, it isn't actually being
touched any more (well, not quite: it may still be used for a while).
What I usually do is:

swapoff /dev/hdxxx    # read everything in swap back into RAM
swapon  /dev/hdxxx    # re-enable the now-empty swap area

on the swap partition, if I know that there is enough real memory
available to absorb the data that will be pushed out of swap. It might
take a while, and make sure that you have ENOUGH free memory to absorb
it, otherwise the machine will hang (I wouldn't do it on a live server,
for sure :)

Just remember that what you see in the stats is not what's really being
used, since Linux and other OSes do lots of caching...

Hope this helps...
___
Stas Bekman  mailto:[EMAIL PROTECTED]  www.singlesheaven.com/stas
Perl,CGI,Apache,Linux,Web,Java,PC at  www.singlesheaven.com/stas/TULARC
www.apache.org   www.perl.com  ==  www.modperl.com  ||  perl.apache.org
single o- + single o-+ = singlesheaven  http://www.singlesheaven.com



Re: Memory problems

1999-11-02 Thread Greg Stark


Stas Bekman [EMAIL PROTECTED] writes:

  I had huge problems yesterday.  Our web site made it into the Sunday
  Times and has had to serve half a million requests in the last 2 days.
 
 Oh, I thought there was a /. effect; now it's a Sunday effect :)

The original concept should be credited to Larry Niven; he called the
effect "flash crowds".

  Had I set it up to have proxy servers and a separate mod_perl server?
  No.  DOH!  So what happened to my 1Gig baby? It died. A sad and unhappy
  death.

I strongly suggest you move the images to a separate hostname altogether. The
proxy is a good idea but there are other useful effects of having a separate
server altogether that I plan to write about in a separate message sometime.
This does mean rewriting all your img tags though.

  What happened was this:  my memory usage went up and up until I got "Out
  of memory" messages and MySQL bailed out.  Memory usage was high, and the
  server was swapping as well.
 
  So I thought - restart MySQL and restart Apache.  But I couldn't reclaim
  memory.  It was just unavailable.  How do you reclaim memory other than
  by stopping the processes or powering down?  Is this something that
  might have happened because it went past the Out of Memory stage?

Have you rebooted yet? Linux has some problems recovering when you run
out of memory really badly. I haven't tried debugging it, but our mail
exchangers have done some extremely wonky things after running out of
memory, even once everything had returned to normal. At one point
non-root users couldn't fork; they just got "Resource unavailable", but
root was fine and memory usage was low.

 First, what you should have done in the first place is set MaxClients
 to a number such that, even in the worst case of every process growing
 to X MB in memory, your machine wouldn't swap. Some users will probably
 get an error once the processes can no longer queue all the requests,
 but it will never bring your machine down!

I claim MaxClients should only be large enough to force 100% CPU usage,
whether from your database or the Perl scripts. There's no benefit to
having more processes running if they're just context switching and
splitting the same resources ever finer. Better to queue the users in
the listen queue.

On that note, you might want to set the backlog parameter (I forget the
precise name); it depends on whether you want users to wait indefinitely
or just get an error.
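
For reference, the directive is presumably Apache's ListenBacklog, which
caps how many pending connections the kernel will queue before turning
new ones away:

    # httpd.conf -- example value; 511 is Apache's default
    ListenBacklog 511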

-- 
greg



RE: Memory problems

1999-11-02 Thread clinton

Thanks Greg

 I strongly suggest you move the images to a separate hostname
 altogether. The proxy is a good idea but there are other useful effects
 of having a separate server altogether that I plan to write about in a
 separate message sometime. This does mean rewriting all your img tags
 though.

Look forward to that with interest

 Have you rebooted yet? Linux has some problems recovering when you run
 out of memory really badly. I haven't tried debugging it, but our mail
 exchangers have done some extremely wonky things after running out of
 memory, even once everything had returned to normal. At one point
 non-root users couldn't fork; they just got "Resource unavailable", but
 root was fine and memory usage was low.

I have rebooted.  Eventually what happened was that my ethernet driver
stopped working; it gave some error message about trying to restart the
transition zone (I think? I wasn't actually there; I had to get others
to pull the plug to power down).

  First, what you should have done in the first place is set MaxClients
  to a number such that, even in the worst case of every process growing
  to X MB in memory, your machine wouldn't swap. Some users will probably
  get an error once the processes can no longer queue all the requests,
  but it will never bring your machine down!

Done


 I claim MaxClients should only be large enough to force 100% CPU usage,
 whether from your database or the Perl scripts. There's no benefit to
 having more processes running if they're just context switching and
 splitting the same resources ever finer. Better to queue the users in
 the listen queue.

I have two PIII 500's, so CPU usage is no problem.  Amazingly, it's the 1Gig
of memory which expires first.


 On that note, you might want to set the backlog parameter (I forget the
 precise name); it depends on whether you want users to wait indefinitely
 or just get an error.

Sounds like the route to go.  I'm also busy implementing the proxy
server bit.

Thanks a lot Greg

Clint



Memory problems

1999-01-04 Thread clint

Why is it that my memory usage is going up and up, and shutting down the
two major consumers of memory (Apache/mod_perl and MySQL) doesn't
reclaim that memory?

I am running RedHat 6, with Apache 1.3.9, mod_perl 1.21, and MySQL 3.23a.

I restarted my web server a week ago, at which stage the running
programs were consuming about 200 MB of the available 1 GB.

I have stopped and started (not -HUP) Apache/mod_perl a few times this week,
and stopped and started mysql a few times.

Now my system is using 600 MB for the same processes, and that is AFTER
the buffers and cache have been subtracted.

Current memory usage (output of free, in KB):

             total       used       free     shared    buffers     cached
Mem:        971648     940672      30976     102836     199800     149288
-/+ buffers/cache:     591584     380064
Swap:       136512       1592     134920
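
(For reference, the "-/+ buffers/cache" line is derived from the Mem:
line: used 940672 - 199800 buffers - 149288 cached = 591584 KB genuinely
in use, and free 30976 + 199800 + 149288 = 380064 KB actually available
to programs.)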

Any ideas how I can reclaim this vanished memory without rebooting the
system?

Thanks

Clint