Under heavy load, Apache has the usual failure mode of spawning so
many threads/processes and database connections that it exhausts all
the memory on the web server and takes the database down with it.
As usual, I would use lighttpd as a frontend (also serving static
files) to handle the large number of concurrent connections to clients,
and then have it funnel this to a reasonable number of perl backends,
something like 10-30. I don't know if fastcgi works with perl, but with
PHP it certainly works very well. If you can't use fastcgi, use lighttpd
as an HTTP proxy with Apache and mod_perl behind it.
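Purely as an illustrative sketch (hostnames, ports, and paths below are
made up, not from this thread), a lighttpd 1.4-style config for the
proxy setup described above might look like:

```
# Hypothetical lighttpd.conf fragment -- values are examples only.
server.modules += ( "mod_proxy" )

# Serve static files directly from lighttpd's document root.
server.document-root = "/var/www/static"

# Forward dynamic requests under /app/ to the Apache/mod_perl
# backend listening on localhost:8080.
$HTTP["url"] =~ "^/app/" {
    proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => 8080 ) ) )
}
```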
The recipe for handling heavy load well is an asynchronous server
(which by design can handle any number of concurrent connections, up
to the OS's limit) in front of a small number of dynamic-page-generating
threads/processes.
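To make the "small number of backends" half concrete (again, a sketch
with assumed values, not a recommendation), an Apache prefork-MPM
fragment capping the mod_perl pool at 20 processes could read:

```
# Hypothetical httpd.conf fragment (Apache prefork MPM) -- numbers
# are examples only; tune for your workload and available RAM.
StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients           20     # hard cap: at most 20 mod_perl processes
MaxRequestsPerChild 1000    # recycle children to bound memory growth
```

Capping MaxClients is what prevents the exhaust-all-memory failure
mode: excess requests wait in the listen backlog instead of spawning
more processes.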
Thanks for the response.
To be clear, it sounds like you are advocating a different approach to
scaling the number of connections: limiting the number of web server
processes.
So, the front-end proxy would have a number of max connections, say 200,
and it would connect to another httpd/mod_perl server behind with a
lower number of connections, say 20. If the backend httpd server was
busy, the proxy connection to it would just wait in a queue until it was
available.
Is that the kind of design you had in mind?
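For concreteness, the 200-connection frontend / 20-process backend
split described above could be expressed as something like the
following (directive values are illustrative, not from the thread):

```
# Frontend lighttpd: accept up to ~200 client connections.
server.max-connections = 200

# Proxy requests for .pl scripts to the backend Apache, which is
# capped at 20 mod_perl processes (MaxClients 20); requests beyond
# that queue in the backend's listen backlog until a worker frees up.
proxy.server = ( ".pl" => ( ( "host" => "127.0.0.1", "port" => 8080 ) ) )
```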
That seems like a reasonable option as well. We already have some
lightweight Apache servers in use on the project which currently just
serve static content.
Mark
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance