James Austin wrote:


Advantages:

    1) Provides persistent perl WITH suexec for per-virtualhost user
    execution
    2) SpeedyCGI handles dynamic data, Apache handles static, hence you
    don't require a covering proxy as described in
    http://perl.apache.org/docs/1.0/guide/strategy.html

You can configure the Apache proxy server to serve static content in the mod_perl setup as well, either directly or via caching. You can have the back-end server serve static content on an initial request, then have the proxy cache it for some time period. We do this because it lets you run a single back-end server when doing development, since that server knows how to serve everything.
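For concreteness, a minimal httpd.conf sketch of that front-end proxy arrangement might look like this (the port, hostnames, cache path, and sizes are placeholders, not our actual config):

    # Front-end (non-mod_perl) Apache: hand every request to the
    # mod_perl back end and cache what comes back, static files included.
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/

    # Disk cache for proxied responses.
    # CacheSize is in KB, CacheDefaultExpire in hours.
    CacheRoot          /var/cache/apache-proxy
    CacheSize          102400
    CacheDefaultExpire 1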

    3) The timeout property in speedycgi means that a script with low or
    no load will drop out of memory; this means high-use scripts will
    run in persistent perl and low-use scripts will load into memory,
    then remove themselves when they are no longer being actively used.

You can configure Apache to do this with your mod_perl servers too. You can configure the number of servers (as Perrin mentioned) that Apache will expand to based on load, and Apache will clean up as load goes away. You can also set the children to exit after a number of requests. Check out these Apache directives: MaxClients, MaxRequestsPerChild, MaxSpareServers, MinSpareServers, StartServers.
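As a rough illustration (the numbers are made up and would need tuning for your load), the back-end mod_perl server's httpd.conf might carry something like:

    # Start a few children, let Apache grow and shrink the pool with
    # load, cap the total, and recycle children to keep memory bounded.
    StartServers         5
    MinSpareServers      2
    MaxSpareServers      5
    MaxClients          30
    MaxRequestsPerChild 500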


a) Is there a better way to achieve this goal?
b) Is there a way to make modperl scale and attain the desired results without creating a new server for each instance (Implementation 1)?

You mention security, but you didn't say whether you have a requirement that the different processes not share memory. If that isn't required, you can put your app-specific code in separate modules and handlers. This avoids multiple servers and lets you share memory for the common modules (CGI, DBI, etc.).
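For example (just a sketch; My::App::One, My::App::Two, and the URLs are invented names), a single mod_perl server could host several apps as handlers while preloading the shared modules in the parent so the children share that memory:

    # Preload common modules in the parent process so all children
    # share the same copy via copy-on-write.
    PerlModule CGI
    PerlModule DBI

    # Each application is its own handler module.
    PerlModule My::App::One
    PerlModule My::App::Two

    <Location /app1>
        SetHandler  perl-script
        PerlHandler My::App::One
    </Location>

    <Location /app2>
        SetHandler  perl-script
        PerlHandler My::App::Two
    </Location>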

c) Have I missed anything (security etc. (Implementation 2))?
d) Is there a better solution (fastcgi/pperl etc.)?
e) Are there any downsides (other than those listed above) to either of these implementations?

The main problem with the one-server-per-app implementation is likely to be scaling. While you can have as many ports as you need, at some point you will run out of memory once you get above some number of applications. If you don't think you'll have that many, it should work fine.

Jim


--
Jim Brandt
Administrative Computing Services
University at Buffalo
