>I'm pretty suspicious if my 2mb Zend app was ever to blame for the memory 
>problems now. Media Temple DV hosting starts with 256mb, and even before 
>the site was released there were black QoS alerts in the Plesk CP. The 
>account was completely fresh with nothing else running and I followed the 
>knowledge base stuff for optimisation, but my frequent support tickets 
>were only met with 'you need more memory', and my linux skills are not 
>good enough to work out what was really to blame.

How popular is your site? A well-configured VPS with 256MB should be able to
handle a reasonable load for a few personal sites. If you run "free -m"
you'll get a memory usage report - the second and third lines are the
important ones. The first is misleading, since it nearly always shows over
50% of memory in use; the second ("-/+ buffers/cache") is more realistic
because it accounts for memory used by caches. The third shows whether you
are using swap space (bad, since disk is slower, and it would indicate the
need for more memory). Make sure APC has sufficient memory too.
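For reference, here's the shape of the output you're looking for (the
figures below are invented purely for illustration):

```shell
$ free -m
             total       used       free     shared    buffers     cached
Mem:           256        240         16          0         12         80
-/+ buffers/cache:        148        108
Swap:          512          0        512
```

Here the "Mem" line makes the box look nearly full, but the "-/+
buffers/cache" line shows only 148MB genuinely in use, and the zero in the
Swap "used" column is exactly what you want to see.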

I ran a few quick tests of the Zend Framework application hosting my
"Survive The Deep End" open book. The average page (no SQL) uses 2.5MB. That
increases to around 5.25MB once Zend_Db_Table is pulled into use. In
strictly ballpark terms, around 6MB seems reasonable for a ZF app with
caching employed.

One other thing you could check, if your site is popular, is how many
concurrent users you're likely to have. Your Apache config might be allowing
more concurrent requests than your memory limit can support. Optimising
Apache is worth looking into.
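As a rough sketch of the arithmetic involved (every number here is an
assumption for illustration, not a measurement from your box): if, say, 80MB
of a 256MB VPS is eaten by MySQL and the OS, and each Apache child settles
around 25MB, the ceiling on concurrent children falls out directly:

```shell
# Back-of-envelope MaxClients estimate - all figures are hypothetical
TOTAL_MB=256        # VPS memory
OTHER_MB=80         # assumed MySQL + OS overhead
PER_CHILD_MB=25     # assumed average size of one Apache child
MAX_CLIENTS=$(( (TOTAL_MB - OTHER_MB) / PER_CHILD_MB ))
echo "Suggested MaxClients ceiling: ${MAX_CLIENTS}"
```

Allow more children than that and a burst of traffic pushes you straight
into swap.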

>One thing I'm still not clear on is the 'httpd' processes that get 
>listed in the SSH terminal 'top' command. These represent a single 
>page-hit afaik, but how long does each process last for and hence consume 
>the memory? Is the memory released as soon as the page is served - after 
>the ~500ms it takes for whole PHP process to run?

What you're seeing is usually one parent httpd process (apache2 on
Ubuntu/Debian) plus a number of child processes. The way Apache works, the
parent process (which is permanent until you restart Apache) spawns multiple
child processes which actually handle incoming requests. So one of those
httpd processes is the parent - see the httpd.pid file to find out which one
(I can't recall the console sequence for this offhand). Each child can
handle multiple requests during its lifetime, which is why they seem to hang
around and grow their memory consumption over time. They should be killed
off at intervals to prevent the memory use accumulating too much.
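One way to check (paths vary by distro - /var/run/httpd.pid is a guess, so
check the PidFile directive in your httpd.conf):

```shell
# The parent's PID is written to the pid file
$ cat /var/run/httpd.pid

# List all httpd processes with parent PID and memory (RSS); the one whose
# PPID is 1, or whose PID matches the pid file, is the parent
$ ps -o pid,ppid,rss,cmd -C httpd
```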

There are two sides here - 1) optimising Apache and 2) avoiding Apache.

The first is a matter of configuring Apache to optimise memory consumption
over time. As I said, Apache preserves child processes to handle multiple
requests, but these tend to grow larger over time. One configuration
directive, as an example, is MaxRequestsPerChild, which kills any child
process once it has served X requests (getting rid of that child's
accumulated memory and starting afresh). Too high a value leads to swollen
memory use; too low just wastes resources creating new children ;).

Optimising any server is tricky - you need to pick a target and change it
incrementally, measuring the effect each time. Other values to consider are
MinSpareServers, MaxSpareServers, StartServers and MaxClients. I'd suggest
moving one or two downwards from your current settings (MaxRequestsPerChild
and MaxClients are good starters - MaxClients is the main limit on the
number of child processes, and you want only as many as fit in available
memory without hitting swap space on disk; swap is bad, because if the
parent httpd is in swap, children are also forked from swap, which is really
slow), then record the output of free over a day or two to see the impact.
Keep optimising until you see memory being left unused (and if a lot is
unused, push the values back up!). Removing any non-essential modules from
being loaded may also help reduce Apache's memory footprint.
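As a sketch, the relevant prefork block in httpd.conf might start from
something like this on a 256MB box (these values are illustrative starting
points to tune from, not recommendations):

```apache
<IfModule prefork.c>
    StartServers           2
    MinSpareServers        2
    MaxSpareServers        4
    MaxClients            10
    MaxRequestsPerChild  500
</IfModule>
```

Change one directive at a time and watch free for a while before touching
the next.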

The second side is recognising that Apache is a memory hog anyway, and it's
not just serving your PHP - it's serving your images, CSS and JavaScript
too! It may also be serving "slow clients": browsers which take a long time
to finish loading a page, often because of limited bandwidth. These keep
Apache processes alive serving individual requests for a long time - a long
time during which their memory is locked up!

Consider implementing a lighter alternative as a "reverse proxy". My book
site runs Nginx, a lightweight HTTP server, in exactly this role. To explain
what that means: you keep Apache (it has tight PHP support and maintains
your Virtual Hosts) but set it to listen on, for example, port 8080 (this
also means editing the Virtual Host configurations to use port 8080). Nginx
is then configured to proxy all requests to Apache. The benefit is that when
Apache completes a request, it hands the response back to Nginx as soon as
possible. Apache is then finished, and the low-memory Nginx can serve slow
clients using far less memory than Apache would if it had to wait around.
The next step is configuring Nginx to serve any static content like
images/CSS/JS directly, so Apache doesn't have processes doing that with the
obvious memory penalty. If those files are truly static, setting an Expires
header will allow client-side caching (and fewer requests for them) for a
period of time.
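A minimal sketch of that setup (the server name and paths are made up -
adjust to your own hosts):

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Serve truly static files directly and let clients cache them
    location ~* \.(jpg|jpeg|gif|png|css|js|ico)$ {
        root /var/www/example.com/public;
        expires 30d;
    }

    # Everything else is proxied back to Apache on port 8080
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

On the Apache side, a Listen 8080 directive and matching <VirtualHost
*:8080> entries complete the picture.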

> Sorry if this is off-topic in terms of ZF.

Not so sure it is... Sure, it's not specific to the ZF, but it's still
useful to know when developing applications with the ZF.

Long email ;). Hope it's useful though.


mothmenace wrote:
> 
> Yep Wordpress has some really nice caching plugins. The 10.75mb was
> without them. But I was only using WP as a 'control' to compare against,
> as WP is so popular and I'm assuming well written.
> 


-----
Pádraic Brady

http://blog.astrumfutura.com
http://www.patternsforphp.com
OpenID Europe Foundation - Irish Representative
-- 
View this message in context: 
http://www.nabble.com/ZF---Memory-usage-tp20678541p21308401.html
Sent from the Zend Framework mailing list archive at Nabble.com.
