On Thu, Mar 13, 2003 at 04:55:19PM -0800, David Burry wrote:
> These are neat ideas. At a few companies I've worked for we already do
> similar things but we have scripts that generate the httpd.conf files
> and distribute them out to the web servers and gracefully restart.
> Adding a new web server machine to the mix is as simple as adding the
> host name to the distribution script.
I've done the same in the past. It works fine, but becomes unwieldy when
you're talking about thousands of sites per server. Graceful restarts also
take a nontrivial amount of time in this environment.

> What you're talking about doing sounds like a lot more complexity to
> achieve a similar thing, and more complexity means there's a lot more
> that can go wrong. For instance, what are you going to do if the LDAP
> server is down, are many not-yet-cached virtual hosts just going to
> fail?

Redundant LDAP servers? Or even pluggable backends - keep a DBM-format
copy on the local filesystem as a backup. I imagine many people would be
happy with a default vhost specified in the config, which could display
an "Ooops! Something's broken!" page.

In my experience, the 80:20 rule definitely applies here - and I would be
inclined to suggest the ratio is even more severe: more than 80% of the
vhosts contribute less than 20% of the load. While the dynamic
reconfiguration afforded by this proposal is a big win, I'm more
impressed with the opportunity to minimise the amount of wasted resources
in large environments.

I'm interested to hear whether this is feasible for development against
2.0, as I don't believe the current architecture allows for plugging in
this sort of functionality as a third-party module.
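To make the fallback idea concrete, here's a rough sketch of the lookup
chain I'm picturing. It's purely illustrative: vhost_rec, lookup_ldap(),
lookup_vhost() and the paths are names I've made up for this mail; only
the apr_dbm_* and pool calls are real apr-util/APR functions.

/*
 * Sketch: try LDAP first, fall back to a local DBM copy, and finally
 * to a default vhost. Hypothetical types/names throughout.
 */
#include <string.h>
#include "apr_pools.h"
#include "apr_strings.h"
#include "apr_file_info.h"
#include "apr_dbm.h"

typedef struct {
    const char *docroot;   /* hypothetical: whatever the mapper needs */
} vhost_rec;

/* Primary backend: stub standing in for a real LDAP query (which would
 * also consult a per-process cache). Returning NULL here simulates a
 * downed directory server so the fallback path gets exercised. */
static vhost_rec *lookup_ldap(const char *hostname, apr_pool_t *p)
{
    (void)hostname; (void)p;
    return NULL;
}

/* Backup backend: a DBM copy kept on the local filesystem,
 * regenerated from LDAP periodically. */
static vhost_rec *lookup_dbm(const char *hostname, apr_pool_t *p)
{
    apr_dbm_t *db;
    apr_datum_t key, val;
    vhost_rec *rec = NULL;

    if (apr_dbm_open(&db, "/var/vhosts/backup.db",
                     APR_DBM_READONLY, APR_OS_DEFAULT, p) != APR_SUCCESS)
        return NULL;

    key.dptr = (char *)hostname;
    key.dsize = strlen(hostname);
    if (apr_dbm_fetch(db, key, &val) == APR_SUCCESS && val.dptr) {
        rec = apr_palloc(p, sizeof(*rec));
        rec->docroot = apr_pstrndup(p, val.dptr, val.dsize);
    }
    apr_dbm_close(db);
    return rec;
}

/* The chain itself: LDAP, then DBM, then the configured default vhost
 * serving the "Ooops! Something's broken!" page. */
static vhost_rec *lookup_vhost(const char *hostname, apr_pool_t *p,
                               vhost_rec *default_vhost)
{
    vhost_rec *rec = lookup_ldap(hostname, p);
    if (!rec)
        rec = lookup_dbm(hostname, p);
    return rec ? rec : default_vhost;
}

The appealing property is that a directory outage only degrades service:
anything in the DBM snapshot keeps working, and everything else lands on
the default vhost instead of failing outright.

Zac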