On Friday, March 14, 2003, at 10:15 AM, Zac Stevens wrote:

On Thu, Mar 13, 2003 at 04:55:19PM -0800, David Burry wrote:
These are neat ideas. At a few companies I've worked for, we already do
similar things, but with scripts that generate the httpd.conf files,
distribute them out to the web servers, and gracefully restart.
Adding a new web server machine to the mix is as simple as adding the
host name to the distribution script.

I've done the same in the past. It works fine, but becomes unwieldy when
you're talking about thousands of sites per server. Graceful restarts also
take a nontrivial amount of time in this environment.

Even a few hundred sites are now taking an inordinate time to do a graceful restart - our config is on NFS, with a separate file for each site, a design decision that I am beginning to regret. I did some testing, but I didn't account for the fact that I'd be loading the configs over NFS. Not great.


What you're talking about doing sounds like a lot more complexity to
achieve a similar thing, and more complexity means there's a lot more
that can go wrong.  For instance, what are you going to do if the LDAP
server is down? Are many not-yet-cached virtual hosts just going to
fail?

Redundant LDAP servers? Or even pluggable backends - keep a DBM-format
copy on the local filesystem as a backup. I imagine many people would be
happy with a default vhost specified in the config, which could display an
"Ooops! Something's broken!" page.

We use redundancy everywhere; the backend LDAP is no exception to this rule.


The main reason for LDAP is that we have a front-end provisioning system that creates accounts for FTP and email in LDAP; it would be nice to keep the website configurations in there too, without the provisioning system having to write Apache config files.
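
Just to make that concrete, the entry the provisioning system writes might look something like this - the objectClass and attribute names here are invented purely for illustration, we haven't settled on a schema yet:

dn: cn=www.customer.example.jp,ou=vhosts,dc=example,dc=jp
objectClass: apacheVhostConfig
cn: www.customer.example.jp
documentRoot: /home/customer/public_html
serverAdmin: webmaster@customer.example.jp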

You're right, of course. Some form of graceful failure would be needed, but it would probably be a 'Temporarily Unavailable' error with a custom error page in Japanese and English (most of our customers are Japanese).
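
Something like the following catch-all default is what I have in mind - the hostname and paths are made up for the example:

<VirtualHost _default_:80>
    ServerName fallback.example.jp
    DocumentRoot /www/fallback
    # index.html under /www/fallback carries the bilingual
    # "Temporarily Unavailable" message
</VirtualHost>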

In my experience, the 80:20 rule definitely applies here - and I would be
inclined to suggest the ratio is even more severe. That is, more than 80%
of the vhosts contribute less than 20% of the load. While the dynamic
reconfiguration afforded by this proposal is a big win, I'm more impressed
with the opportunity to minimise the amount of wasted resources in large
environments.

This matches my experience too. It irks me that Apache spends a large amount of time and memory holding the configuration for a bunch of sites that only get hit maybe once a day (when the owner loads the page to see if the hit counter has increased - HAH!)


I'm interested to hear whether this is feasible for development against
2.0, as I don't believe the current architecture allows for plugging in
this sort of functionality as a 3rd-party module.

I was looking at implementing it in the URI-to-filename translation phase. Any memory malloc'd for an in-memory cache would only be accessible by that particular child, but that would not be so bad for a v1.0 implementation of the module.


In the future, we might look at shmem or something like that. Even a DB file held on a ramdisk might be acceptable (if a little perverse).
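
To give a feel for the translation-phase idea, here is a very rough sketch of the module I'm imagining - the LDAP query is stubbed out and the per-child cache is just an apr_hash_t with no locking (fine for prefork, not for a threaded MPM), so treat the names and structure as assumptions rather than a finished implementation:

#include "httpd.h"
#include "http_config.h"
#include "http_request.h"
#include "apr_hash.h"
#include "apr_strings.h"

/* Per-child cache mapping hostname -> document root. Allocated from the
 * child's pool, so every child builds up its own copy (the v1.0
 * limitation mentioned above). */
static apr_pool_t *cache_pool = NULL;
static apr_hash_t *vhost_cache = NULL;

static void vhost_child_init(apr_pool_t *pchild, server_rec *s)
{
    cache_pool = pchild;
    vhost_cache = apr_hash_make(pchild);
}

/* Stub for the real LDAP lookup; returns the docroot for a hostname,
 * or NULL if the directory has no entry (or is unreachable). */
static const char *lookup_docroot_in_ldap(request_rec *r, const char *host)
{
    /* apr_ldap / OpenLDAP calls would go here */
    return NULL;
}

static int vhost_translate_name(request_rec *r)
{
    const char *host = r->hostname;
    const char *docroot;

    if (!host || !vhost_cache)
        return DECLINED;

    docroot = apr_hash_get(vhost_cache, host, APR_HASH_KEY_STRING);
    if (!docroot) {
        docroot = lookup_docroot_in_ldap(r, host);
        if (!docroot)
            return DECLINED;    /* fall through to the default vhost */
        apr_hash_set(vhost_cache, apr_pstrdup(cache_pool, host),
                     APR_HASH_KEY_STRING, apr_pstrdup(cache_pool, docroot));
    }

    r->filename = apr_pstrcat(r->pool, docroot, r->uri, NULL);
    return OK;
}

static void register_hooks(apr_pool_t *p)
{
    ap_hook_child_init(vhost_child_init, NULL, NULL, APR_HOOK_MIDDLE);
    ap_hook_translate_name(vhost_translate_name, NULL, NULL, APR_HOOK_FIRST);
}

module AP_MODULE_DECLARE_DATA vhost_ldap_module = {
    STANDARD20_MODULE_STUFF,
    NULL,            /* per-directory config creator */
    NULL,            /* per-directory config merger */
    NULL,            /* per-server config creator */
    NULL,            /* per-server config merger */
    NULL,            /* command table */
    register_hooks
};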

Nathan.

--
Nathan Ollerenshaw - Systems Engineer - Shared Hosting
ValueCommerce Japan - http://www.valuecommerce.ne.jp

In the days, When we were swinging from the trees
I was a monkey, Stealing honey from a swarm of bees
I could taste, I could taste you even then
And I would chase you down the wind


