> I have a high traffic website (looks like 200 GB output per month,
> something around 100000-200000 hits per day) hosted on a commercial
> service. The service does not limit my bandwidth usage, but they limit the
> number of concurrent Apache processes that I can have to 41. This causes
> the server to delay accepting new connections during peak times.
That seems pretty arbitrary. They use that instead of some kind of memory
or CPU cap?
> My account is a "virtual server"; what this means is that I have access to
> the Apache httpd.conf files and can restart the Apache daemon, but do not
> have the privilege to bind a program to port 80 (so I can't put thttpd on
> port 80).
That rules out some obvious solutions like lingerd and squid (which I think
uses a select loop). Sounds like they've made it so there's nothing you can
do except try to serve your content faster. You could look at
Apache::Compress.
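
For what it's worth, hooking it up is usually just a handler directive in
httpd.conf; something like this sketch (the FilesMatch pattern is just an
example, adjust it to whatever you actually serve):

```apache
# Load the module under mod_perl, then hand matching responses to it.
# Compressed output means fewer bytes on the wire, so each of your 41
# processes is tied up for less time per request.
PerlModule Apache::Compress

<FilesMatch "\.html$">
    SetHandler perl-script
    PerlHandler Apache::Compress
</FilesMatch>
```

It only helps clients that send Accept-Encoding: gzip, but most browsers do,
and text/html compresses very well.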
- Perrin